[suggestion] Define patching time frames and ensure security of repositories


Enos D'Andrea <mailing28461928@...>
 

Dear David and list,

I believe a couple of things can be improved in the current CII-badges:

1. Early patching of critical vulnerabilities is not mandated: patching time frames are subjective and vulnerability severity ratings are not defined [vulnerabilities_critical_fixed]
2. There appear to be no controls for the integrity and availability of source code repositories [repo_*]

Potential consequences include (but are not limited to):

1. Indefinite delays in patching critical vulnerabilities could allow targeted and public exploitation.
2. Unavailability of repositories and insertion of backdoors or deliberate bugs by external and internal actors.

The following suggestions could mitigate the risk:

1.a. Precise time frames for patching and public disclosure (e.g. Google publicly disclosed a Microsoft 0-day privilege escalation after just 7 days [1][2])
1.b. Simple and unambiguous definition of vulnerability ratings [5]
1.c. Adaptation of patching time frames to ongoing or potential exploitation [6]

2.a. Review license agreements to ensure integrity [3] and availability of public repositories
2.b. Controls for the integrity of public repositories (e.g. mirroring [4], monitoring (a minimal sketch follows this list), segregation of duties (who uploads cannot merge), mandatory manual code reviews for all new code)
2.e. Define a minimum set of security controls for workstations and servers involved in development and distribution [5]
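
For illustration only, here is a minimal sketch of the kind of monitoring meant in 2.b, assuming git is installed; the two repository URLs are hypothetical placeholders, and the script flags any branch whose head differs between the canonical repository and a mirror:

    # Sketch: detect divergence between a canonical repository and a mirror.
    # Both URLs are hypothetical placeholders.
    import subprocess

    CANONICAL = "https://example.org/project/project.git"
    MIRROR = "https://mirror.example.net/project/project.git"

    def branch_heads(url):
        """Return {ref: commit} as reported by `git ls-remote --heads <url>`."""
        out = subprocess.run(["git", "ls-remote", "--heads", url],
                             capture_output=True, text=True, check=True).stdout
        heads = {}
        for line in out.splitlines():
            commit, ref = line.split(None, 1)
            heads[ref.strip()] = commit
        return heads

    canonical = branch_heads(CANONICAL)
    mirror = branch_heads(MIRROR)
    for ref, commit in canonical.items():
        if mirror.get(ref) != commit:
            print(f"divergence on {ref}: canonical {commit}, mirror {mirror.get(ref)}")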

[1] https://security.googleblog.com/2016/10/disclosing-vulnerabilities-to-protect.html
[2] https://blogs.technet.microsoft.com/mmpc/2016/11/01/our-commitment-to-our-customers-security/
[3] http://arstechnica.com/information-technology/2015/05/sourceforge-grabs-gimp-for-windows-account-wraps-installer-in-bundle-pushing-adware/
[4] http://www.extremetech.com/computing/120981-github-hacked-millions-of-projects-at-risk-of-being-modified-or-deleted
[5] https://www.schneier.com/blog/archives/2015/12/back_door_in_ju.html
[6] https://technet.microsoft.com/en-us/security/gg309177.aspx
[7] https://technet.microsoft.com/en-us/security/cc998259

Your opinion?

--
Kind Regards
Enos D'Andrea


Daniel Stenberg
 

On Tue, 8 Nov 2016, Enos D'Andrea wrote:

Personally I think the existing criteria are pretty good already and we should rather focus on fixing "white spots" that can still be present for projects that already reach 100% compliance. I think it is less desirable to add criteria that just push the less-than-100%ers even further from the bar.

Therefore, you'll find me mostly negative about these new suggestions. I'll elaborate below.

> 1. Indefinite delays in patching critical vulnerabilities could allow targeted and public exploitation.
Is this an actual problem you see in projects (that otherwise have a chance of reaching 100%)? If yes, then I'm in favor. If no, it would match my view and I'd be negative.

> 1.a. Precise time frames for patching and public disclosure (e.g. Google publicly disclosed a Microsoft 0-day privilege escalation after just 7 days [1][2])
That's hard to work with and opens up so much fine print to specify. What if we say projects must send patches within N days, and my project missed the deadline by a day or two last year? Are we compliant? What if we've handled a hundred more after that within the time frame? And which N are we talking about here anyway?

Also, again, is this really a problem in otherwise well-run projects?

> 1.b. Simple and unambiguous definition of vulnerability ratings [5]
Wow, really? Please do tell if you have that sorted out and where the guidelines are. In the projects I have a leading role in, we've always deliberately stayed away from rating security issues, just because of the difficulty, subjectiveness and pointlessness of discussing severity levels.

> 1.c. Adaptation of patching time frames to ongoing or potential exploitation [6]
Sorry but what does this mean?

> 2.a. Review license agreements to ensure integrity [3] and availability of public repositories
Apart from me again not seeing a problem with this in well-run projects, how would such a criterion be phrased? I would imagine most projects want high availability, and if they don't, it's more often because of a lack of ability or funds and not malice.

> 2.b. Controls for the integrity of public repositories (e.g. mirroring [4], monitoring,
This is a criterion for the repository hosting? Do you find many good projects with a repository hosting problem?

> segregation of duties (who uploads cannot merge),
As in persons who have permissions in the repository? I don't understand what uploading means in this context, nor how this is an established best practice.

> mandatory manual code reviews for all new code)
I love code reviews. I think having mandatory manual code reviews for all changes is a bit too strict a criterion for a best practice. Not because reviews are bad, but mostly because it raises the bar A LOT and will strike off a huge number of projects at a single blow. Even many established projects at current 100% compliance that are running well and have been functioning for decades.

> 2.e. Define a minimum set of security controls for workstations and servers involved in development and distribution [5]
Open source as in potentially taking contributions from hundreds or thousands of volunteers from all over the globe, and now I am going to answer a criterion for my project about the "workstations and servers involved in development"?

I'm sorry, but that's just not feasible. And it is pointless as a generic criterion. If we get quality patches from a person, it doesn't mean anything to our project if the contributor's computer is full of viruses while someone is running a DDoS bot on his development machine. The project judges work based on what is delivered, not how the development machines are handled.

--

/ daniel.haxx.se


David A. Wheeler
 

On Tue, 8 Nov 2016, Enos D'Andrea wrote...
Thanks so much for your feedback! I've been out-of-town, so I'm just now seeing these.

Daniel Stenberg:
> Personally I think the existing criteria are pretty good already and we should
> rather focus on fixing "white spots" that can still be present for projects that
> already reach 100% compliance. I think it is less desirable to add criteria
> that just push the less-than-100%ers even further from the bar.
> Therefore, you'll find me mostly negative about these new suggestions. I'll
> elaborate below.
In *general* I agree with this right now. The criteria have turned out to be harder for many more projects than I expected. I originally thought they weren't hard, and in general people haven't disagreed with them. The problem, I think, is that when you combine many criteria, each of which is done by 90% of the projects, the number of projects that do them all isn't 90% :-). Several projects *have* said they didn't need to make any changes - but others have.

However, we absolutely *should* consider all proposals. Also - I still intend to create higher-level badges, with stronger criteria... so even if we don't add them to the "passing" level, they might go into a higher level.

>> 1. Early patching of critical vulnerabilities is not mandated: patching time frames are subjective and vulnerability severity ratings are not defined [vulnerabilities_critical_fixed]
Ok. The issue is this text: "Projects SHOULD fix all critical vulnerabilities rapidly after they are reported" https://github.com/linuxfoundation/cii-best-practices-badge/blob/master/doc/criteria.md#vulnerabilities_critical_fixed


>> 1. Indefinite delays in patching critical vulnerabilities could
>> allow targeted and public exploitation.
> Is this an actual problem you see in projects (that otherwise have a
> chance of reaching 100%)? If yes, then I'm in favor. If no, it would
> match my view and I'd be negative.

>> 1.a. Precise time frames for patching and public disclosure (e.g.
>> Google publicly disclosed a Microsoft 0-day privilege escalation
>> after just 7 days [1][2])
> That's hard to work with and opens up so much fine print to specify. What if
> we say projects must send patches within N days, and my project missed the
> deadline by a day or two last year? Are we compliant? What if we've
> handled a hundred more after that within the time frame? And which N are we
> talking about here anyway?
> Also, again, is this really a problem in otherwise well-run projects?

I agree that "rapidly" is in the eye of the beholder, but as Daniel Stenberg notes, "rapidly" is hard to pin down further. In any case, [vulnerabilities_fixed_60_days] creates a maximum of 60 days, so there *is* an unambiguous maximum.

The *intent* of these two criteria was to create a worst-case maximum, and encourage projects to do better than that. Of course, it might not *succeed* in the intent. You could argue that [vulnerabilities_critical_fixed] should be dropped, since [vulnerabilities_fixed_60_days] is where the strong requirement is.
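
For what it's worth, the 60-day check itself is mechanical; here is a throwaway sketch (the dates below are made up, not real vulnerability data):

    # Sketch: check fix times against the 60-day worst-case maximum
    # from [vulnerabilities_fixed_60_days]. The records are made-up examples.
    from datetime import date

    MAX_DAYS = 60

    vulns = [
        {"id": "EXAMPLE-1", "public": date(2016, 9, 1), "fixed": date(2016, 9, 20)},
        {"id": "EXAMPLE-2", "public": date(2016, 8, 1), "fixed": date(2016, 10, 15)},
    ]

    for v in vulns:
        days = (v["fixed"] - v["public"]).days
        verdict = "within" if days <= MAX_DAYS else "OVER"
        print(f'{v["id"]}: {days} days ({verdict} the {MAX_DAYS}-day maximum)')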



>> 1.b. Simple and unambiguous definition of vulnerability ratings [5]
> Wow, really? Please do tell if you have that sorted out and where the
> guidelines are. In the projects I have a leading role in, we've always
> deliberately stayed away from rating security issues, just because of the
> difficulty, subjectiveness and pointlessness of discussing severity levels.
Vulnerability ratings are definitely hard. The only one that's gotten any real use is CVSS, which we *do* reference in [vulnerabilities_fixed_60_days]. CVSS version 2.0 has some known problems, but until NIST starts using version 3.0 it's silly to require anyone else to use it.

We could modify [vulnerabilities_critical_fixed] to more unambiguously define the severity rating by changing "critical vulnerabilities" to "high severity vulnerabilities (per CVSS)".
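
Purely as an illustration of what an unambiguous mapping could look like, here is a sketch using the qualitative bands defined by CVSS v3.0 (the v2.0 ratings commonly used by NVD distinguish only Low/Medium/High):

    # Sketch: derive a qualitative rating from a CVSS v3.0 base score,
    # so that "critical" / "high severity" is unambiguous.
    def cvss_v3_severity(score: float) -> str:
        if not 0.0 <= score <= 10.0:
            raise ValueError("CVSS base scores range from 0.0 to 10.0")
        if score == 0.0:
            return "None"
        if score <= 3.9:
            return "Low"
        if score <= 6.9:
            return "Medium"
        if score <= 8.9:
            return "High"
        return "Critical"

    print(cvss_v3_severity(7.5))  # -> High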

This email is getting long; I'll reply to the rest separately.

--- David A. Wheeler


David A. Wheeler
 

Enos D'Andrea:
>> 2.a. Review license agreements to ensure integrity [3] and
>> availability of public repositories
Daniel Stenberg:
> Apart from me again not seeing a problem with this in well-run projects, how
> would such a criterion be phrased? I would imagine most projects want high
> availability, and if they don't, it's more often because of a lack of ability or
> funds and not malice.
This is trickier than it sounds. We already have requirements for HTTPS, which deals with integrity from the point of view that you know which site you're talking to.

However, the citation focuses on an "integrity" which is *not* in the sense of encryption. The citation is for SourceForge's ham-handed GIMP "takeover":
[3] http://arstechnica.com/information-technology/2015/05/sourceforge-grabs-gimp-for-windows-account-wraps-installer-in-bundle-pushing-adware/
SourceForge has long abandoned this practice (after massive blowback).

This is a less-common problem today. There are definitely "spoof" sites for widely-used OSS for Windows (VLC and Audacity in particular have this problem). However, since OSS licenses by *definition* allow forks, the only thing you can really legally enforce is trademarks. Those absolutely *can* be enforced, and I think they are the better mechanism anyway. Another issue is simply, "what is the publicly-agreed-on official version?" But we're really not in a good place to determine what the "official" release of a program is. That can definitely be up in the air in an active fork.

I'm not sure there's a serious problem, and I also don't see what we could do if it *is* a serious problem.


>> 2.b. Controls for the integrity of public repositories (e.g. mirroring
>> [4], monitoring,
> This is a criterion for the repository hosting? Do you find many good projects
> with a repository hosting problem?
I'm not sure how common these problems are. Most projects use a hosting service (GitHub, GitLab, SourceForge, Savannah, etc.). Also, anyone who uses a distributed version control system (like git or mercurial) has mirroring for free.
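
As a rough sketch of that "free" mirroring (assuming git is installed; the upstream URL and mirror path are hypothetical placeholders), a periodic job can keep a complete off-site copy current:

    # Sketch: maintain an off-site mirror of a git repository.
    # UPSTREAM and MIRROR_DIR are hypothetical placeholders.
    import os
    import subprocess

    UPSTREAM = "https://example.org/project/project.git"
    MIRROR_DIR = "/srv/mirrors/project.git"

    if not os.path.exists(MIRROR_DIR):
        # One-time setup: a bare copy of every branch and tag.
        subprocess.run(["git", "clone", "--mirror", UPSTREAM, MIRROR_DIR], check=True)
    else:
        # Periodic refresh (e.g. from cron); --prune drops refs deleted upstream.
        subprocess.run(["git", "remote", "update", "--prune"], cwd=MIRROR_DIR, check=True)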


>> segregation of duties (who uploads cannot merge),
> As in persons who have permissions in the repository? I don't understand
> what uploading means in this context, nor how this is an established best practice.
That is controversial. There's a push in the Node.js community to *maximize* the number of people who can merge into the master branch (with the notion that you can always remove it later). They focus more on release management than merging.
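
If a project did want to check something like this, one crude and purely illustrative proxy is git's author/committer distinction, which in patch- or pull-request-based workflows records who wrote a change versus who applied it. The branch name below is an assumption:

    # Sketch: flag commits on the main branch that were written and applied
    # by the same person. Only a proxy; it assumes a workflow where the
    # committer field records who applied/merged the change.
    # Run inside a clone that has an "origin" remote with a master branch.
    import subprocess

    log = subprocess.run(
        ["git", "log", "--no-merges", "--format=%h|%an|%cn", "origin/master"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in log.splitlines():
        commit, author, committer = line.split("|", 2)
        if author == committer:
            print(f"{commit}: authored and applied by the same person ({author})")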


>> mandatory manual code reviews for all new code)
> I love code reviews. I think having mandatory manual code reviews for all
> changes is a bit too strict a criterion for a best practice. Not because reviews
> are bad, but mostly because it raises the bar A LOT and will strike off a huge
> number of projects at a single blow. Even many established projects at
> current 100% compliance that are running well and have been functioning for
> decades.
Agreed. I like code reviews too. I even co-edited an IEEE book on software inspections; they're helpful. But for many software projects, it's better to accept higher levels of defects than demand the large up-front cost in time and effort.

>> 2.e. Define a minimum set of security controls for workstations and
>> servers involved in development and distribution [5]
> Open source as in potentially taking contributions from hundreds or
> thousands of volunteers from all over the globe, and now I am going to
> answer a criterion for my project about the "workstations and servers involved
> in development"?
>
> I'm sorry, but that's just not feasible. And it is pointless as a generic criterion. If
> we get quality patches from a person, it doesn't mean anything to our project
> if the contributor's computer is full of viruses while someone is running a DDoS
> bot on his development machine. The project judges work based on what is
> delivered, not how the development machines are handled.
Agreed. I can't imagine even figuring out *what* machines are used in most environments. In many cases they're temporary disposable VMs.

I agree with Daniel Stenberg here: It's better to judge the result than to try to evaluate every machine involved in creating the change.

Quick bottom line: This discusses some meta-criteria ideas. Can you instead propose concrete, specific criteria text, given these comments? It's much easier to start with close-to-right text and improve it.

--- David A. Wheeler


Enos D'Andrea <mailing28461928@...>
 

Thanks David and Daniel for your replies.

My initial message objected to the following gaps in the current CII badges:
- Publicly known critical vulnerabilities exploited in the wild are
allowed to remain unpatched for up to 60 days
- Official software repositories are sometimes regulated by service
agreements that significantly jeopardize their integrity and availability

Your answers focused mostly on marketing issues rather than on technical
ones. I forget whether CII Badges was initially meant as a collection of
*common* best practices, or as a collection of *required* best practices
ensuring a minimum level of software security. In the latter case,
optimizing marketing strategies for wider adoption would have lower
priority than closing dangerous gaps in a long chain of controls whose
strength is only that of its weakest control.

I would gladly try to draft the criteria corresponding to the gaps reported above, but only after (if) it is first decided that the gaps are relevant and must be addressed with precedence over marketing.

--
Kind Regards
Enos D'Andrea



David A. Wheeler
 

Enos D'Andrea [mailto:mailing28461928@edlabs.it]
> My initial message objected to the following gaps in the current CII badges:
> - Publicly known critical vulnerabilities exploited in the wild are allowed to
> remain unpatched for up to 60 days
> - Official software repositories are sometimes regulated by service
> agreements that significantly jeopardize their integrity and availability

> Your answers focused mostly on marketing issues rather than on technical ones.
Um, no.

You're using the term "marketing" in a way that doesn't make sense to me. When I ask Google to "define marketing" I get: "the action or business of promoting and selling products or services, including market research and advertising." That's not what we're doing.


> I forget whether CII Badges was initially meant as a collection of
> *common* best practices, or as a collection of *required* best practices
> ensuring a minimum level of software security.
Truly "ensuring security" is a non-starter. The only practice that "ensures" security is the use of formal methods down to the code level - and even then, that only ensures the specific statements proved, given the assumptions and correctly-running tools. Few people use formal methods that way today, so we cannot realistically require "ensuring" via formal methods for most software.

If we focus instead on managing risk - which is what we normally do in our lives anyway - then clearly there *are* practices that will generally improve security. The "passing" level - which is what we currently have - focuses on *common* OSS best practices that tend to improve security. We intend to have higher badge levels that go beyond that in the future, but even this lowest bar turns out to be challenging for many projects.

The intro to the criteria text <https://github.com/linuxfoundation/cii-best-practices-badge/blob/master/doc/criteria.md> gives context that I hope will help:

=======================================================================

There is no set of practices that can guarantee that software will never have defects or vulnerabilities; even formal methods can fail if the specifications or assumptions are wrong. Nor is there any set of practices that can guarantee that a project will sustain a healthy and well-functioning development community. However, following best practices can help improve the results of projects. For example, some practices enable multi-person review before release, which can both help find otherwise hard-to-find technical vulnerabilities and help build trust and a desire for repeated interaction among developers from different organizations.

These best practices have been created to:

- encourage projects to follow best practices,
- help new projects discover what those practices are, and
- help users know which projects are following best practices (so users can prefer such projects).

We are currently focused on identifying best practices that well-run projects typically already follow. We are capturing other practices so that we can create more advanced badges later. The best practices, and the more detailed criteria specifically defining them, are inspired by a variety of sources. See the separate "background" page for more information.

=======================================================================


> I would gladly try to draft the criteria corresponding to the gaps reported above....
Specific *actual* criteria text would be great; that is much easier to evaluate.


--- David A. Wheeler