Re: Support Grsecurity/PaX

Kevin P. Fleming (BLOOMBERG/ 731 LEX)
 

I believe I may not have communicated my thoughts as clearly as I should have, so I'll attempt to clarify :-)

First, it's absolutely true that long-term viability is a component of funding decisions. A goal of funding any project is to allow it to reach a point where it can be self-sustaining at a level that allows it to produce high-quality software, respond well to its users' demands, and continue adapting to the changing needs of the community. Some projects have had difficulty reaching that point on their own, so CII funding has been put in place to get them jump-started, but the CII doesn't plan to be their sole funding source indefinitely. That's why I said that a funding proposal would need to include clear objectives, so that there's a visible path to being self-sustaining.

When I commented on the comparison to the fuzzing and Frama-C projects, I wasn't commenting on the nature of the software involved, I was commenting on the structure of the funding proposals (hypothetical, of course, in the case of grsecurity/PaX). The funding proposals for the fuzzing project and Frama-C, by their nature, have definite horizons, and the goals are to produce tools that the community will adopt in such a way that CII funding is no longer needed for the projects to survive and thrive. The CII might still fund them in order to allow them to grow at a more rapid pace, or to explore research that might not be supported by other community members, but those would be *new* funding proposals.

So, my point in both situations is that the CII steering committee is unlikely to review, and certainly will not approve, a funding request that essentially just employs the project's developers to continue what they are already doing. In each case where we've considered a funding proposal, the most important aspects have been measurable, objective improvements in the project's health (measured in a number of ways, of course). While I certainly can't speak for the entire committee, I have no doubt that a significant item of discussion around a grsecurity/PaX proposal would be the effectiveness of funding a project whose codebase is not incorporated into its 'host' project, and whether such incorporation would be a suitable objective. Clearly, if these tools *could* be merged into Linux proper, they would likely see even greater adoption, and they'd benefit a much larger audience than they do today.

From: pageexec@... At: Aug 21 2015 15:40:49
On 21 Aug 2015 at 18:19, Kevin P. Fleming (BLOOMBERG/ 731 LEX) wrote:

Hi Kevin,

first of all, thanks for your detailed response and new information that
I was not aware of. However as much as it clarified some points, it also
raised new questions. You see, I brought up the issue with fuzzing and
static analysis because Dan Kohn said this earlier:

> Jason, if CII funded Grsecurity/PaX for a year or two, it would keep
> the project going, but then what? It is unlikely that CII could fund
> the project indefinitely, so it would remain an unhealthy project.

The way I read this response suggested to me that long-term viability is
an important (and possibly deal breaker?) factor in your funding decisions.
Now you are saying that it was not for the mentioned projects. This leaves
me confused as I do not know what applies and does not apply to a project
such as ours.

You also said that grsecurity was not comparable to fuzzing/static analysis
and is more like a standalone(?) product. I beg to differ here, as we produce
much more than just a kernel patch, though that is perhaps less advertised.
Namely, due to the nature of our proactive defense mechanisms (both runtime
and compile time), they are also good at catching bugs (almost always with
security impact) and we have found and fixed a number of them for the past
few years. One would think that exposing such technologies to a wider audience
would have a much bigger impact on everyone's security than our own limited
efforts.

cheers,
PaX Team


Re: Support Grsecurity/PaX

Tom Ritter
 

I'm pretty far from the kernel development community. I know the
generalities we've seen in this thread about different communities'
attitudes about mainlining, and I understand that pretty much everyone
is either frustrated with the situation or has given up on it.

I just wanted to weigh in and add support to the notion that PaX/grsec is
a critical piece of security software, is very highly regarded, and
it's always the first thing we recommend to people when they ask "How
can we harden our systems?" Then they almost never do it. It _has_
been very frustrating over the years that it has not been mainlined,
but I would rather it exist out-of-tree and be made as available as
easily and broadly as possible than not exist at all.

Maybe the answer is going to the distros and making it easy to switch
to a patched kernel to drive adoption, or maybe that's a horrible
idea. I think it would be wonderful if something could be done - but I
don't know what the exact plan could be.

-tom

Re: Support Grsecurity/PaX

Meredith Whittaker
 

Seems like there are a number of thoughts, and a general consensus that grsecurity is useful and used and deserves support. Cool! OK. 

What's missing is a proposal, tying funding to specific outcomes. I think this is a next step that would help narrow this conversation, and allow CII to vote on funding during its next Steering meeting (Sept. 17th). 

Cheers,
Meredith 

On Fri, Aug 21, 2015 at 5:29 PM, Tom Ritter <tom@...> wrote:
_______________________________________________
cii-census mailing list
cii-census@...
https://lists.coreinfrastructure.org/mailman/listinfo/cii-census



--
Meredith Whittaker
Open Source Research Lead
Google NYC




Re: Support Grsecurity/PaX

Jason A. Donenfeld
 

Hello PaX Team, Spender,

After spending a few weeks meditating on this thread and its responses, it strikes me that the best thing to do might be to, in fact, apply to the CII.

The initial discussion was met with quite a bit of resistance from Dan, but the ensuing feedback from the community at large has been overwhelmingly in favor of funding Grsecurity/PaX. And actually, I think Dan's early responses form an important part of the development of critical CII policies. It is a relevant question -- how should out-of-mainline projects be handled, or more generally, how do other Linux kernel projects coexist with Linus' tree in terms of CII, and even more generally, what is the relationship between the CII's funding and a project's long-term financial sustainability? These are all important questions that do need to be addressed during the CII's meetings. But, if anything is certain in all of this, it's that Grsecurity/PaX should/must receive funding. It's a sentiment echoed extremely widely throughout the multiple communities and industries that rely on Linux. That means that when the CII does sit down to work out these interesting and complicated policy questions, they will do so with the goal in mind that whatever their policies are, they must allow for the funding of Grsecurity/PaX. I think this is a very good position to be in.

For this reason, I believe it makes sense to put in an application for CII funding. From my assessment of the matter, I do imagine that the committee will be in favor of Grsecurity/PaX -- and why shouldn't they be? -- and that direction will help them formulate the necessary policies to ensure that the CII is useful for critically essential projects like Grsecurity/PaX.

Jason

Re: [CII-badges] Ranking criteria

Sebastian Benthall
 

Hello!

Thanks for inviting me to participate in this project.

At Selection Pressure, we are looking at ways to incorporate project risk measurements into one of our products.

The CII Census looks like a great start on this!

I'm wondering what your plans are moving forward, especially with regard to the Risk Index. I see from the Wheeler and Khakimov paper that a lot of research went into possible metrics, and that the initial Risk Index score is a reflection of that.

What sort of process do you anticipate using for including new features into that calculation, and scoring them?

Do you have a plan for assessing empirically to what extent that Risk Index correlates with software risk?

Thanks!

Sebastian Benthall
PhD Candidate / UC Berkeley School of Information
Data Scientist / Selection Pressure

On Thu, Jan 14, 2016 at 2:27 PM, Dan Kohn <dankohn@...> wrote:
Mailing list is at http://lists.coreinfrastructure.org/mailman/listinfo/cii-census but specific suggestions for improving the project are probably best through the issue tracker.

We encourage you to fork the project and suggest improvements with a pull request.

--
Dan Kohn <mailto:dankohn@...>
Senior Advisor, Core Infrastructure Initiative
tel:+1-415-233-1000

On Thu, Jan 14, 2016 at 5:18 PM, Sebastian Benthall <sbenthall@...> wrote:
I do not see a mailing list listed on the cii-census GitHub page.
Is there one?
Or should general discussion about that project take place on the issue tracker?

Thanks,
Seb

On Thu, Jan 14, 2016 at 7:43 AM, Sebastian Benthall <sbenthall@...> wrote:

Will do. Thanks for referring me to that!

On Jan 14, 2016 6:10 AM, "Wheeler, David A" <dwheeler@...> wrote:
On Wed, Jan 13, 2016 at 9:32 PM, Sebastian Benthall <sbenthall@...> wrote:
> I'm a grad student studying quantitative metrics on open source software projects, an OSS developer and former project manager, and a contracting data scientist at Selection Pressure.

Dan Kohn:
> Sebastian, I think you'll be interested in our sister project, the CII Census. https://github.com/linuxfoundation/cii-census

I agree, please take a look at the census project.  The census project itself does quantitative measures, and on that site you'll also find a paper that points to other related work (you'll find that useful if you're trying to do scholarly work on the topic).

--- David A. Wheeler





Re: [CII-badges] Ranking criteria

David A. Wheeler
 

Sebastian Benthall:
Thanks for inviting me to participate in this project.
At Selection Pressure, we are looking at ways to incorporate project risk measurements into one of our products.
The CII Census looks like a great start on this!
Thanks!

I'm wondering what your plans are moving forward, especially with regard to the Risk Index. I see from the Wheeler and Khakimov paper that a lot of research went into possible metrics, and that the initial Risk Index score is a reflection of that.
What sort of process do you anticipate using for including new features into that calculation, and scoring them?
Do you have a plan for assessing empirically to what extent that Risk Index correlates with software risk?

We run this as an open source software project - if you have an idea for an improvement, please propose it via pull request, issue tracker, or mailing list.

A serious challenge for this project (and others like it) is a lack of 'ground truth'. If we knew ahead-of-time what the right answers were, we'd just use them :-). If we knew what the right answers were for a large data set, we could use that as a training set for statistical analysis and/or a learning algorithm.

Since we lack ground truth, we did what was documented in the paper. Here's a quick summary. We surveyed past efforts, selected a plausible set of metrics based on that, and heuristically developed a way to combine the metrics. We then had experts (hi!) look at the results (and WHY they were the results), look for anomalies, and adjust the algorithm until the results appeared reasonable. We also published everything as OSS, so others could propose improvements. We presume that humans will review the final results, and that helps too.
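For illustration, the process described above (pick plausible metrics, combine them heuristically, then hand-adjust until expert review finds the ranking reasonable) can be sketched roughly as follows. This is a hypothetical sketch only: the metric names and weights are invented for the example and are not the actual CII Census formula (see the paper and repository for the real one).

```python
# Hypothetical sketch of a heuristic risk index: a weighted combination
# of normalized per-project metrics, with the weights revisited by hand
# after each round of expert review. Metric names and weights here are
# illustrative only -- NOT the actual CII Census formula.

def risk_index(project, weights):
    """Combine normalized metric scores (each 0..1, higher = riskier)."""
    return sum(weights[m] * project[m] for m in weights)

WEIGHTS = {              # hand-tuned, adjusted when results look anomalous
    "exposure": 0.4,     # network-facing attack surface
    "popularity": 0.3,   # widely deployed => higher impact if vulnerable
    "low_activity": 0.2, # few recent commits / committers
    "unsafe_lang": 0.1,  # e.g. written in C/C++
}

project = {"exposure": 1.0, "popularity": 0.9,
           "low_activity": 0.8, "unsafe_lang": 1.0}
print(round(risk_index(project, WEIGHTS), 2))  # 0.93
```

The point of such a structure is that the combination stays transparent: a human reviewer can see *why* a project scored high and argue with the individual weights, which is what makes the expert-adjustment loop workable.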

We're busy getting the CII badging program up-and-running (it's the same people), so we haven't spent as much time on the census recently. But this is definitely not an ignored project. You'll notice I already merged your pull request :-).

--- David A. Wheeler

Re: [CII-badges] Ranking criteria

Sebastian Benthall
 

We run this as an open source software project - if you have an idea for an improvement, please propose it via pull request, issue tracker, or mailing list.

Glad to!
 
A serious challenge for this project (and others like it) is a lack of 'ground truth'.  If we knew ahead-of-time what the right answers were, we'd just use them :-).  If we knew what the right answers were for a large data set, we could use that as a training set for statistical analysis and/or a learning algorithm.

I see. That makes sense.

One thing I'm trying to get a sense of (and I still need to read the paper very thoroughly to find out) is what exactly the "risk" you are measuring is a risk of. That would make it easier to identify ground truth or proxies for it in existing data.

For example, 'having a vulnerability to SQL injection' is a very different kind of risk from 'having a low bus factor'.

Identifying when projects have died because of bus factor issues might be possible from observational data of open source communities.
 
Since we lack ground truth, we did what was documented in the paper.  Here's a quick summary.  We surveyed past efforts, selected a plausible set of metrics based on that, and heuristically developed a way to combine the metrics.  We then had experts (hi!) look at the results (and WHY they were the results), look for anomalies, and adjust the algorithm until the results appeared reasonable. 

This is great.

Is there a record of the anomalies and the adjustments?

Is there any sort of formal procedure for further expert review?

I would be interested in designing such a procedure if there isn't one.
 
We also published everything as OSS, so others could propose improvements.  We presume that humans will review the final results, and that helps too.

We're busy getting the CII badging program up-and-running (it's the same people), so we haven't spent as much time on the census recently.  But this is definitely not an ignored project.  You'll notice I already merged your pull request :-).

Thanks! and understood :) 

Re: [CII-badges] Ranking criteria

David A. Wheeler
 

Sebastian Benthall:
One thing I'm trying to get a sense of (and I still need to read the paper very thoroughly to find out) is what exactly the "risk" you are measuring is a risk of. That would make it easier to identify ground truth or proxies for it in existing data.
The title of the supporting paper gives that away: "Open Source Software Projects Needing Security Investments". The CII project was started, in part, as a response to the Heartbleed vulnerability of OpenSSL. We're trying to determine which projects are more likely to have serious vulnerabilities and where investment is needed.


Is there a record of the anomalies and the adjustments?
A high-level discussion is in the paper. See the git log for a record of many of the actual adjustments (the commit text should give you at least a brief reason as to *why* they were adjusted). I don’t think all adjustments we tried are recorded in the git log, since we weren't particularly trying to do that (sorry). But I think you'll find lots of useful information.


Is there any sort of formal procedure for further expert review?
I would be interested in designing such a procedure if there isn't one.
No, there's no formal procedure. You can propose one.

That said, we're happy to take good ideas from anyone, even if they're not perceived as experts.

--- David A. Wheeler

NTP

john s
 

as an aside, from Kit:

The NTP problem is really bugging me.
It's not even recognized as a top-10 ‘risk’ as defined by the CII Census (https://www.coreinfrastructure.org/programs/census-project), with a really low popularity rating and 0 committers - yikes. It is unreal to believe that not one of the main Linux/UNIX distros would pay somebody to be a project lead for that thing. Then there's this: http://netpatterns.blogspot.com/2016/01/the-rising-sophistication-of-network.html

-------------------------------------------
John Scott
 240.401.6574
< jms3rd@... >
http://powdermonkey.blogs.com
@johnmscott



Re: NTP

Emily Ratliff
 

This is likely a quirk in the data. CII does fund the NTP project for ongoing maintenance work (part-time - the project certainly could use additional funding). They don't use the github repository as their main development repo, so that may be throwing the numbers off. There are more than 0 committers. 

On the blog article, please also see this thread where the issue is discussed on oss-security:

On Thu, Jan 28, 2016 at 7:30 AM, John Scott <jms3rd@...> wrote:
as an aside, from Kit:

The NTP problem is really bugging me.
It's not even recognized as a top-10 ‘risk’ as defined by the CII Census (https://www.coreinfrastructure.org/programs/census-project), with a really low popularity rating and 0 committers - yikes. It is unreal to believe that not one of the main Linux/UNIX distros would pay somebody to be a project lead for that thing. Then there's this: http://netpatterns.blogspot.com/2016/01/the-rising-sophistication-of-network.html

-------------------------------------------
John Scott
@johnmscott



Re: NTP

Kit Plummer
 

Thanks for the update Emily.  

Yeah, some of these ‘OG’ projects are tough to track for sure.  :)  Perhaps that is why they are less popular?

Kit

On Jan 28, 2016, at 8:51 AM, Emily Ratliff <eratliff@...> wrote:

This is likely a quirk in the data. CII does fund the NTP project for ongoing maintenance work (part-time - the project certainly could use additional funding). They don't use the github repository as their main development repo, so that may be throwing the numbers off. There are more than 0 committers. 

On the blog article, please also see this thread where the issue is discussed on oss-security:




Re: NTP

Emily Ratliff
 

I had to look up 'OG'. :-) Always good to learn new lingo.

Absolutely agree with your sentiment. As projects mature and grow larger, the barriers to entry grow as well: the amount of knowledge needed before making a meaningful contribution increases, especially for highly technical and complex projects like crypto and time.

But that is why it is a best practice to offer advice for incoming contributors, and even to tag bugs for newcomers to help them learn the code base, as projects on OpenHatch's Easy Bugs list have done:
Producing lists like these also takes time, so this concept is sadly out of reach for critically short-handed projects.


On Thu, Jan 28, 2016 at 8:09 AM, Kit Plummer <kit.plummer@...> wrote:
Thanks for the update Emily.  

Yeah, some of these ‘OG’ projects are tough to track for sure.  :)  Perhaps that is why they are less popular?

Kit





Re: NTP

David A. Wheeler
 

Kit:
The NTP problem is really bugging me
On Jan 28, 2016, at 8:51 AM, Emily Ratliff <eratliff@...> wrote:
This is likely a quirk in the data. CII does fund the NTP project for ongoing maintenance work (part-time - the project certainly could use additional funding). They don't use the github repository as their main development repo, so that may be throwing the numbers off. There are more than 0 committers.
Actually, ntp was identified as a risky program. The more detailed paper D-5459 has more info that you (Kit) may have missed. See https://github.com/linuxfoundation/cii-census/blob/master/OSS-2015-06-19.pdf (click on “Raw” to use your local PDF reader). In the list “Riskiest OSS Programs (human-identified subset informed by risk measures)” on page 6-5, we *specifically* identify ntp as one of the riskiest programs. Once you combine the list with human expertise, ntp jumps out as important. As Emily noted, the Linux Foundation is specifically funding the NTP project.

An *ideal* for the census project would be to have no need for human judgement. Ideally we could create quantitative measures, combine them in a clear and simple way, and demonstrably have a perfect list of exactly what’s riskiest (and in what order). We don’t currently have that ideal… but that doesn’t make the work useless. What we have instead are quantitative measures that can *help* humans make a determination of risk. In my experience there are many tough problems where the computer can't really make the decision... it can only be an *aid* to a human who makes the decision. Since the goal was to help humans make investment decisions, we met the goal.

If people have ideas about how to improve the census, we're all ears. We posted not just how we created the census numbers, but also the alternatives we looked at and the code to calculate them. We *want* people to suggest improvements. Metrics are a notoriously hard problem in security.

One *big* problem is the lack of known truth - there are a lot of great learning algorithms, but they require truth data we don't have. Vulnerability counts (for example) are terrible proxies; a low number may mean the software is secure, or it may simply mean that no one has seriously reviewed it AND publicly reported the results. Not all vulnerabilities are equal, either.

--- David A. Wheeler

CII Census proposal: zlib

Timofonic
 

Hello.

Here's a proposal to the census: zlib

Website (90s design, but does it count as a site?): http://zlib.net
Contributors: Only Madler (the famous Mark Adler, of NASA's JPL) since
2013. Before that, only occasional contributors appear.
Popularity: I'm not sure how to interpret that, and there are many
similar packages, but the statistics show it's extremely popular.
Check it yourself: https://qa.debian.org/popcon.php?package=zlib
Main Language: C.
Network Exposure: I think it has some, as many network-facing projects
use it. I'm sure more skilled people can look into that.
Repo: https://github.com/madler/zlib/
Dependencies: The list of dependents is absurdly enormous; even the
Linux kernel uses it.
Application Data Only: No idea how to check that in the Debian
Popularity Contest.
Patches: I've seen patches out there. There are a lot of forks and
unmerged pull requests too.
ABRT crash statistics: I have no idea what this is.
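As an aside, one easy way to see why the dependents list is so enormous: nearly every language runtime ships a zlib binding, Python included. A minimal round-trip sketch:

```python
import zlib

# Compress and decompress a payload with the very library this proposal
# is about. Python's stdlib `zlib` module is a thin binding over it,
# which is typical of how widely the C library is embedded.
data = b"the quick brown fox jumps over the lazy dog" * 100
packed = zlib.compress(data, level=9)
assert zlib.decompress(packed) == data
print(f"{len(data)} bytes -> {len(packed)} bytes")
```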

Kind regards.

CII Census proposal: Dropbear

Timofonic
 

Hello.

Here's a proposal to the census: Dropbear

Website (90s design, but it seems effective despite not having its own
domain): https://matt.ucc.asn.au/dropbear/dropbear.html
Contributors: Mostly Matt Johnston, with periodic contributions from
others such as Francois Perrad and Ben Gardner, and occasional
contributions from others such as Henrik Nordström, Chocobo1, and
Jeremy Kerr. I'm not sure how to analyze it, as the Mercurial web log
seems confusing to me.
Popularity: I'm not sure how to interpret that, and there are many
similar packages, but the statistics show it's very popular. Anyway, I
think this parameter doesn't do it justice, as the project is far more
popular in embedded Linux distros such as OpenWrt, Alpine Linux, and
others. Check it yourself:
https://qa.debian.org/popcon.php?package=dropbear
Main Language: C.
Network Exposure: It's a lightweight OpenSSH replacement, so of
course it has tons of network exposure.
Repo: https://secure.ucc.asn.au/hg/dropbear/
Dependencies: I'm not sure about this, as the project tries to be
standalone because of its embedded nature.
Application Data Only: No idea how to check that in the Debian
Popularity Contest.
Patches: I've seen patches out there. There are a lot of forks and
unmerged pull requests too.
ABRT crash statistics: I have no idea what this is.
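If the Mercurial web log is hard to read, one alternative is to capture plain author lines from a local checkout with `hg log --template '{author}\n'` and tally them. A sketch below runs on inline sample output; the sample lines are illustrative, not actual Dropbear history:

```python
from collections import Counter

# In a real checkout you would capture this with:
#   hg log --template '{author}\n'
# The sample below is illustrative, not actual Dropbear history.
sample_log = """\
Matt Johnston <matt@ucc.asn.au>
Matt Johnston <matt@ucc.asn.au>
Francois Perrad <francois.perrad@gadz.org>
Matt Johnston <matt@ucc.asn.au>
Ben Gardner <gardner.ben@gmail.com>
"""

# Count commits per author, most active first.
counts = Counter(line for line in sample_log.splitlines() if line)
for author, n in counts.most_common():
    print(f"{n:4d}  {author}")
```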

Kind regards.

Re: CII Census proposal: zlib

Timofonic
 

I forgot to mention that this proposal was born from the following
GitHub issue: https://github.com/madler/zlib/issues/299

2017-09-15 16:58 GMT+02:00 timofonic timofonic <timofonic@...>:


Re: CII Census proposal: zlib

Timofonic
 

I reactivated it on OpenHub

https://www.openhub.net/p/zlib

2017-09-15 17:31 GMT+02:00 timofonic timofonic <timofonic@...>:


Re: CII Census proposal: Dropbear

Timofonic
 

I improved the OpenHub profile by adding the Mercurial repo (OpenHub
is finishing its analysis of it)...

https://www.openhub.net/p/dropbear

2017-09-15 17:15 GMT+02:00 timofonic timofonic <timofonic@...>:
