
A potential best practice

Emily Ratliff <eratliff@...>
 

Hi,

Someone pointed me to this list of projects which maintain a list of "easy bugs" for beginners to work on, so that they can gain experience with contributing to the project: https://openhatch.org/wiki/Easy_bugs_for_newcomers

To me, the presence of an "easy bugs" list represents a very advanced "best practice". There are really very few projects on the list. Projects that have these lists are planning for the long-term viability of their project. The bugs on the list are necessarily not security-related bugs.

Do you think the fact that a project maintains an "easy bugs" list would be a useful metric to include in the badging program to indicate a very high level of maturity?

Will the badging program have the concept of optional, but recommended practices? This might be something that we document as a best practice but do not include in the badging program at any level.

Thanks,

Emily


Re: A potential best practice

David A. Wheeler
 

Someone pointed me to this list of projects which maintain a list of "easy bugs" for beginners to work on, so that they can gain experience with contributing to the project: https://openhatch.org/wiki/Easy_bugs_for_newcomers
Nice list. Not complete, of course, and that just strengthens your point. For example, I know that OWASP ZAP also maintains a list of “easy” bugs; they label them on GitHub’s issue tracker as IdealFirstBug. See: https://github.com/zaproxy/zaproxy/issues?q=is%3Aopen+is%3Aissue+label%3AIdealFirstBug
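
Labels like ZAP's IdealFirstBug can also be queried programmatically through GitHub's REST API. A minimal sketch (the helper is illustrative, not code from any project mentioned here):

```python
# Build the GitHub REST API URL that lists open issues carrying a given
# label, e.g. zaproxy's "IdealFirstBug". Illustrative helper only.
from urllib.parse import quote

def issues_api_url(owner: str, repo: str, label: str) -> str:
    return (f"https://api.github.com/repos/{owner}/{repo}/issues"
            f"?state=open&labels={quote(label)}")

url = issues_api_url("zaproxy", "zaproxy", "IdealFirstBug")
# The returned URL can then be fetched and its JSON list of issues parsed.
```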

To me, the presence of an "easy bugs" list represents a very advanced "best practice". There are really very few projects on the list. Projects that have these lists are planning for the long term viability of their project. The bugs on the list are necessarily not security related bugs.
Do you think the fact that a project maintains an "easy bugs" list would be a useful metric to include in the badging program to indicate a very high level of maturity?
Will the badging program have the concept of optional, but recommended practices? This might be something that we document as a best practice but do not include in the badging program at any level.
I like it... that way we can *suggest* good things once people have the basics in order.

I think we should continue collecting a larger set of *possible* best practices, and then select down to the ones that would be directly measured. Some of them might be of the flavor "do at least X of these". The ones that aren't selected as "primary" measures would be good candidates for these "recommended" items (whatever they are called).

--- David A. Wheeler


Re: A potential best practice

Emily Ratliff <eratliff@...>
 

I will let the OpenHatch Easy Bugs maintainer know about OWASP ZAP - that is really nice.

Are we ready for a GitHub repository to serve as a collection point for the main ideas?

Thanks!

Emily

On Wed, Jul 22, 2015 at 12:04 AM, Wheeler, David A <dwheeler@...> wrote:
> Someone pointed me to this list of projects which maintain a list of "easy bugs" for beginners to work on, so that they can gain experience with contributing to the project: https://openhatch.org/wiki/Easy_bugs_for_newcomers

Nice list.  Not complete, of course, and that just strengthens your point.  For example, I know that OWASP ZAP also maintains a list of “easy” bugs; they label them on GitHub’s issue tracker as IdealFirstBug.  See: https://github.com/zaproxy/zaproxy/issues?q=is%3Aopen+is%3Aissue+label%3AIdealFirstBug

> To me, the presence of an "easy bugs" list represents a very advanced "best practice". There are really very few projects on the list. Projects that have these lists are planning for the long term viability of their project. The bugs on the list are necessarily not security related bugs.
> Do you think the fact that a project maintains an "easy bugs" list would be a useful metric to include in the badging program to indicate a very high level of maturity?
> Will the badging program have the concept of optional, but recommended practices? This might be something that we document as a best practice but do not include in the badging program at any level.

I like it... that way we can *suggest* good things once people have the basics in order.

I think we should continue collecting a larger set of *possible* best practices, and then select down to the ones that would be directly measured.  Some of them might be of the flavor "do at least X of these".  The ones that aren't selected as "primary" measures would be good candidates for these "recommended" items (whatever they are called).

--- David A. Wheeler



Re: A potential best practice

Dan Kohn <dankohn@...>
 


--
Dan Kohn <mailto:dankohn@...>
Senior Advisor, Core Infrastructure Initiative
tel:+1-415-233-1000

On Wed, Jul 22, 2015 at 3:46 PM, Emily Ratliff <eratliff@...> wrote:
I will let the Open Hatch Easy Bugs maintainer know about OWASP ZAP - that is really nice.

Are we ready for a github to serve as a collection point for the main ideas?

Thanks!

Emily

On Wed, Jul 22, 2015 at 12:04 AM, Wheeler, David A <dwheeler@...> wrote:
> Someone pointed me to this list of projects which maintain a list of "easy bugs" for beginners to work on, so that they can gain experience with contributing to the project: https://openhatch.org/wiki/Easy_bugs_for_newcomers

Nice list.  Not complete, of course, and that just strengthens your point.  For example, I know that OWASP ZAP also maintains a list of “easy” bugs; they label them on GitHub’s issue tracker as IdealFirstBug.  See: https://github.com/zaproxy/zaproxy/issues?q=is%3Aopen+is%3Aissue+label%3AIdealFirstBug

> To me, the presence of an "easy bugs" list represents a very advanced "best practice". There are really very few projects on the list. Projects that have these lists are planning for the long term viability of their project. The bugs on the list are necessarily not security related bugs.
> Do you think the fact that a project maintains an "easy bugs" list would be a useful metric to include in the badging program to indicate a very high level of maturity?
> Will the badging program have the concept of optional, but recommended practices? This might be something that we document as a best practice but do not include in the badging program at any level.

I like it... that way we can *suggest* good things once people have the basics in order.

I think we should continue collecting a larger set of *possible* best practices, and then select down to the ones that would be directly measured.  Some of them might be of the flavor "do at least X of these".  The ones that aren't selected as "primary" measures would be good candidates for these "recommended" items (whatever they are called).

--- David A. Wheeler



_______________________________________________
CII-badges mailing list
CII-badges@...
https://lists.coreinfrastructure.org/mailman/listinfo/cii-badges



we should talk to / work with TODO Group

Atwood, Mark <mark.atwood@...>
 

Hi!

We should talk with the TODO Group about their input on best practices for
running an open source project.

TODO is a roundtable of Open Source Program Offices, and as such has a lot
of experience dealing with open source projects, both well run, and also
less well run and lacking key best practices.

..m

Mark Atwood <mark.atwood@...>
Director of Open Source Engagement
+12064737118


Re: we should talk to / work with TODO Group

Dan Kohn <dankohn@...>
 

Great. Could you please make an introduction? They are welcome to engage on the mailing list or Emily and I can do a call with them first if that's helpful.

--
Dan Kohn <mailto:dankohn@...>
Senior Advisor, Core Infrastructure Initiative
tel:+1-415-233-1000

On Tue, Aug 11, 2015 at 1:37 PM, Atwood, Mark <mark.atwood@...> wrote:
Hi!

We should talk with the TODO Group about their input on best practices for
running an open source project.

TODO is a roundtable of Open Source Program Offices, and as such has a lot
of experience dealing with open source projects, both well run, and also
less well run and lacking key best practices.

..m

Mark Atwood <mark.atwood@...>
Director of Open Source Engagement
+12064737118





Require tests for major feature additions? (Issue #2)

David A. Wheeler
 

In pull request #1, Greg KH made this suggestion for a criterion: "Tests for major feature additions as without some kind of test being present, it's hard to verify if something new even works. This is now the "unofficial" rule for the kernel, as we have been burned by this in the past (feature additions that never even worked)."

I like this idea, and have added an issue tracker here: https://github.com/linuxfoundation/cii-best-practices-badge/issues/2

Please comment on it if you agree/disagree. I agree that it's a best practice, and I'd like to add it. The question is, is this widespread enough to require it as a criterion? In particular, I think if it's a criterion it needs to be written in the docs on how to propose changes and actually practiced... not just an aspiration. Aspirations are cheap :-).

One *good* reason to add this criterion is that it helps projects move closer to continuous integration. I think continuous integration is an excellent thing to do, but it's not clear that it's widespread enough (yet) to require it for a basic best practice list. It might be useful to create a draft "next stage" list.... noting that these aren't requirements for the basic level, but are especially likely to be part of a higher-level set of criteria. Continuous integration is easily a candidate for a higher level.

--- David A. Wheeler


Prevent privacy being unintentionally leaked

Mike S <squeakypeep@...>
 

I would like to suggest that privacy is not only intrinsically valuable but
that leaking data poses unique security challenges; therefore,
protecting user privacy is a valuable criterion that ought to
be considered for inclusion in the program.

Example areas of concern include user data, as well as software
channel metadata subject to passive machine fingerprinting. Another
possibility being that this metadata could be used by state actors
within a hostile network to have a passively updated list of the
version numbers of front end facing components on each server. In this
example, the version numbers of software with serious bugs like
Heartbleed could be cross-referenced against which servers are running
the software without any direct interaction with the server, thus
passively revealing a list of vulnerable servers.

While I would hope this last example is not being exploited it remains
a possibility since popular distros seldom encrypt the entire
connection to software repositories. Most use unencrypted hash lists
as the sole way to ensure package integrity and many do not sign all
packages. While it is encouraging that this badge program would ensure
resolutions to this basic security flaw it would be great to see it
tackle some of the privacy issues outlined as well.

Thank you for your time,
Mike S.


Re: Prevent privacy being unintentionally leaked

Florian Weimer
 

* Mike S.:

Example areas of concern include user data, as well as software
channel metadata subject to passive machine fingerprinting. Another
possibility being that this metadata could be used by state actors
within a hostile network to have a passively updated list of the
version numbers of front end facing components on each server. In this
example, the version numbers of software with serious bugs like
Heartbleed could be cross-referenced against which servers are running
the software without any direct interaction with the server, thus
passively revealing a list of vulnerable servers.
If you care about this issue, you need to download *all* updates to a
local, trusted mirror, including those you do not need. Most
distributions already provide such mirroring scripts. Debian has so
many that it's difficult to keep track, Fedora has reposync at least.

While I would hope this last example is not being exploited it remains
a possibility since popular distros seldom encrypt the entire
connection to software repositories.
Anybody can run a mirror and contribute bandwidth, so encryption does
not help here.


Re: Prevent privacy being unintentionally leaked

Mike S <squeakypeep@...>
 

Thank you Florian, I am aware there are limited workarounds to some examples.

However we are talking about best practices here and these examples were simply to illustrate a point: Privacy is important for several reasons so transmitted data should be secure by design, rather than an afterthought.

Mike


Re: Prevent privacy being unintentionally leaked

David A. Wheeler
 

Mike S:
I would like to suggest not only the intrinsic value of privacy but that leaking data poses unique security challenges, therefore protecting user privacy is a valuable piece of criteria that ought to be considered for inclusion into the program.
Absolutely!

Example areas of concern include user data, as well as software channel metadata subject to passive machine fingerprinting. Another possibility being that this metadata could be used by state actors within a hostile network to have a passively updated list of the version numbers of front end facing components on each server. In this example, the version numbers of software with serious bugs like Heartbleed could be cross-referenced against which servers are running the software without any direct interaction with the server, thus passively revealing a list of vulnerable servers.
While I would hope this last example is not being exploited it remains a possibility since popular distros seldom encrypt the entire connection to software repositories. Most use unencrypted hash lists as the sole way to ensure package integrity and many do not sign all packages. While it is encouraging that this badge program would ensure resolutions to this basic security flaw it would be great to see it tackle some of the privacy issues outlined as well.
For our *initial* list of best practices - which I'm starting to call the "bronze" level - we want to focus on practices that are already widely practiced across many different ecosystems.

I don't think working to hide what version numbers are being downloaded is a common practice, and thus is unlikely to be at the "bronze" level. Florian Weimer noted, "Anybody can run a mirror and contribute bandwidth, so encryption does not help here." I think what he means is that many organizations use mirrors, and usually anyone can volunteer to be a mirror. Mirrors can easily track who is downloading what from them, and can easily implement fingerprinting. If you really want to hide what you're running, you probably want to run through an intermediary.

Privacy is important for several reasons so transmitted data should be secure by design, rather than an afterthought.
Agree. I'll add "privacy" as a note for potential future criteria, probably higher than the "bronze" level, as a kind of placeholder. However, that really is too vague to be useful. Can you propose a set of more specific criteria with MUST statements?

--- David A. Wheeler


Re: Prevent privacy being unintentionally leaked

Mike S <squeakypeep@...>
 

Hi David, that's great to hear.
I would hope to hear from others on specific criteria, there are plenty of use cases to consider. To start I propose this for mobile software.

Mobile apps at a minimum must use SSL and certificate pinning when transmitting data to a server. Third party code (analytics and advertising) used by an app must adhere to the highest security standards possible.

> I think what he means is that many organizations use mirrors, and usually anyone can volunteer to be a mirror.  Mirrors can easily track who is downloading what from them, and can easily implement fingerprinting.  If you really want to hide what you're running, you probably want to run through an intermediary.

If you don't mind getting off-topic, I will comment on the notion of encrypting repo traffic. I believe implementing it is what some might call low-hanging fruit, because the end user need not take any extra steps. It is not much work for the distro either, since it is already supported by apt, yum and so on. While it does not solve the issue completely, it mitigates the risk of this sort of metadata profiling while en route. Sure, a mirror may be hostile; on the other hand, using rsync or a VPN adds complexity, or maybe your physical location makes it difficult to use these tools. Clearly neither approach is perfect, but one requires no effort from end users and is therefore more practical.

Also, SSL is used with the equivalent software channels of all the other major operating systems (Windows Update, Google Play, Mac App Store). If you were doing system updates within a hostile network like China's, using these other systems could be an improvement, because the Chinese would not have access to your metadata as part of their state-sponsored hacking programs. This could be true regardless of whether those official servers were considered trustworthy.

And honestly even if this were extremely implausible I just find it annoying to think that an OS like windows might be subtly superior to my favorite distros when it comes to protecting privacy and implementing encryption. So to my mind the question is not why encrypt repo traffic, it is why not?

Mike S.


Re: Prevent privacy being unintentionally leaked

David A. Wheeler
 

Mike S:

> Mobile apps at a minimum must use SSL and certificate pinning when transmitting data to a server.

Okay, let’s use that as a starting point, and see where we can get.  A few comments first. (1) You need to state an objective – why do this?  (2)  SSL (well, really TLS) is only one way to do this; I’d say “use encryption to maintain confidentiality and integrity”.  (3) This really isn’t limited to mobile apps.  I would expect this to apply to any clients. (4) ANY data to a server?  Surely there are many web sites that ONLY support unencrypted connections (e.g., http, ntp), and what about web browsers that can connect to unencrypted sites?  (5) Cert pinning is great, but it’s a *specific* approach, and there are problems doing so (captive portals, etc., all play havoc with it).  That said, it’s certainly one of the practical ones, and we can point to useful info like this: https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning (6) IETF keywords like “MUST” are usually capitalized.

Here’s an attempt to tweak the text, with the goal of making it general & achievable:

Client applications (including mobile applications) MUST by default use an encrypted channel to communicate over a network to protect confidentiality and integrity of all data exchanged with any specific site they are configured to connect to.  Applications SHOULD by default use this communication to all sites.  This channel would often be implemented using TLS, but other encryption protocols (such as IPSEC) MAY  be used.  This implementation SHOULD use certificate pinning.
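
To make the pinning part concrete, here is a minimal sketch of the check a client might perform, assuming it has already obtained the server certificate's DER bytes from the TLS handshake (the names and the placeholder pin are illustrative):

```python
# Certificate pinning, sketched: compare the SHA-256 fingerprint of the
# server certificate (DER-encoded bytes) with a fingerprint shipped in the
# client. In a real client, cert_der would come from the handshake, e.g.
# ssl.SSLSocket.getpeercert(binary_form=True).
import hashlib

def matches_pin(cert_der: bytes, pinned_sha256_hex: str) -> bool:
    return hashlib.sha256(cert_der).hexdigest() == pinned_sha256_hex

# Placeholder pin for illustration; a real pin is computed ahead of time
# from the genuine certificate (or, more robustly, its public key).
PINNED = hashlib.sha256(b"example certificate DER").hexdigest()
```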

> Third party code (analytics and advertising) used by an app must adhere to the highest security standards possible.

I have no idea what that means.  I’m sure they’ll say they’re already doing it :-).  Can you be a LOT more specific so that people can rigorously say “yes it’s doing that” or “no it is not doing that”?

>> I think what he means is that many organizations use mirrors, and usually anyone can volunteer to be a mirror.  Mirrors can easily track who is downloading what from them, and can easily implement fingerprinting.  If you really want to hide what you're running, you probably want to run through an intermediary.

>If you don't mind getting offtopic I will comment on the notion of encrypting repo traffic. I believe implementing it is what some might call low hanging fruit, because the end user need not take any extra steps. Not much work for the distro either since it is already supported by apt, yum and so on. While true it does not solve the issue completely it mitigates risk of this sort of metadata profiling while en route. Sure a mirror may be hostile, on the other hand using rsync or a VPN adds complexity, or maybe your physical location makes it difficult to use these tools. Clearly neither approach is perfect but one requires no effort from end users and is therefore more practical.

I like the idea, however, I have concerns.  In particular, it’s NOT low-hanging fruit to a lot of folks.  Yes, the systems you listed already do it.  However, a lot of distribution systems do NOT support encrypting all repo traffic.  Cygwin, for example, uses mirrors of files with all data (and all downloads) public; they distribute a set of SHA-512 hashes, and sign that set to ensure that you’re only downloading valid files.  Changing that is *NOT* a low-hanging fruit to them.  I think organizations like this are very afraid of the overhead; encrypting individual files doesn’t take much, but it kills a lot of caching systems, which DOES create a lot of overhead.
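
For comparison, the hash-list approach described here can be sketched in a few lines; verifying the signature on the digest list itself is elided, and all names are illustrative:

```python
# Cygwin-style integrity check, sketched: the distributor publishes SHA-512
# digests for every file and signs the digest list; the client verifies each
# downloaded file against the (signature-checked) list. Signature
# verification of the list itself is omitted here.
import hashlib

def verify_download(data: bytes, expected_sha512_hex: str) -> bool:
    return hashlib.sha512(data).hexdigest() == expected_sha512_hex

payload = b"package contents"
published = hashlib.sha512(payload).hexdigest()  # entry from the signed list
```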

So this “low-hanging fruit” is actually hard for a number of folks *AND* it doesn’t help against nation-states anyway.  If your worry is a nation-state, really the *ONLY* way is to pre-copy everything separately, since you probably can trust no one else anyway.  I guess maybe we *could* require it at an upper level, but I wouldn’t want to require it at a “basic” level.

That said, maybe we can write up something that described the specific requirements about supporting full encryption & integrity checking for software download/update, similar to the text above, and give it a spin.  Suggested text?

--- David A. Wheeler

 


Re: Prevent privacy being unintentionally leaked

Mike S <squeakypeep@...>
 

> starting point
Sounds good, I reached out to Holmes Wilson of FFTF and last year's privacy-focused Reset the Net campaign, he's joined the mailing list to help refine the criteria. I have no doubt he can propose better wording than I could.

I based the initial proposed criteria off an excellent article he wrote for developers wishing to implement privacy in their software, here's the article:
http://resetthenet.tumblr.com/post/84327981750/how-we-secure-our-phones-ssl-cert-pinning-pfs
In it 3rd party code is characterized as the weakest link in security so it appeared particularly relevant, but I am not sure what specific criteria could address this.

As to our conversation about fully encrypting software downloads/updates, I had not considered those points. Especially overhead, which makes sense considering the only distros I have seen encrypting all transmitted data from the software channel are run by Red Hat, who can no doubt afford it. I appreciate you educating me on the subject and if you believe it impractical I understand.

Mike S.

On Sun, Aug 23, 2015 at 6:57 PM, Wheeler, David A <dwheeler@...> wrote:

Mike S:

> Mobile apps at a minimum must use SSL and certificate pinning when transmitting data to a server.

Okay, let’s use that as a starting point, and see where we can get.  A few comments first. (1) You need to state an objective – why do this?  (2)  SSL (well, really TLS) is only one way to do this; I’d say “use encryption to maintain confidentiality and integrity”.  (3) This really isn’t limited to mobile apps.  I would expect this to apply to any clients. (4) ANY data to a server?  Surely there are many web sites that ONLY support unencrypted connections (e.g., http, ntp), and what about web browsers that can connect to unencrypted sites?  (5) Cert pinning is great, but it’s a *specific* approach, and there are problems doing so (captive portals, etc., all play havoc with it).  That said, it’s certainly one of the practical ones, and we can point to useful info like this: https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning (6) IETF keywords like “MUST” are usually capitalized.

Here’s an attempt to tweak the text, with the goal of making it general & achievable:

Client applications (including mobile applications) MUST by default use an encrypted channel to communicate over a network to protect confidentiality and integrity of all data exchanged with any specific site they are configured to connect to.  Applications SHOULD by default use this communication to all sites.  This channel would often be implemented using TLS, but other encryption protocols (such as IPSEC) MAY  be used.  This implementation SHOULD use certificate pinning.

> Third party code (analytics and advertising) used by an app must adhere to the highest security standards possible.

I have no idea what that means.  I’m sure they’ll say they’re already doing it :-).  Can you be a LOT more specific so that people can rigorously say “yes it’s doing that” or “no it is not doing that”?

>> I think what he means is that many organizations use mirrors, and usually anyone can volunteer to be a mirror.  Mirrors can easily track who is downloading what from them, and can easily implement fingerprinting.  If you really want to hide what you're running, you probably want to run through an intermediary.

>If you don't mind getting offtopic I will comment on the notion of encrypting repo traffic. I believe implementing it is what some might call low hanging fruit, because the end user need not take any extra steps. Not much work for the distro either since it is already supported by apt, yum and so on. While true it does not solve the issue completely it mitigates risk of this sort of metadata profiling while en route. Sure a mirror may be hostile, on the other hand using rsync or a VPN adds complexity, or maybe your physical location makes it difficult to use these tools. Clearly neither approach is perfect but one requires no effort from end users and is therefore more practical.

I like the idea, however, I have concerns.  In particular, it’s NOT low-hanging fruit to a lot of folks.  Yes, the systems you listed already do it.  However, a lot of distribution systems do NOT support encrypting all repo traffic.  Cygwin, for example, uses mirrors of files with all data (and all downloads) public; they distribute a set of SHA-512 hashes, and sign that set to ensure that you’re only downloading valid files.  Changing that is *NOT* a low-hanging fruit to them.  I think organizations like this are very afraid of the overhead; encrypting individual files doesn’t take much, but it kills a lot of caching systems, which DOES create a lot of overhead.

So this “low-hanging fruit” is actually hard for a number of folks *AND* it doesn’t help against nation-states anyway.  If your worry is a nation-state, really the *ONLY* way is to pre-copy everything separately, since you probably can trust no one else anyway.  I guess maybe we *could* require it at an upper level, but I wouldn’t want to require it at a “basic” level.

That said, maybe we can write up something that described the specific requirements about supporting full encryption & integrity checking for software download/update, similar to the text above, and give it a spin.  Suggested text?

--- David A. Wheeler

 



Re: Prevent privacy being unintentionally leaked

David A. Wheeler
 

Mike S:

Sounds good, I reached out to Holmes Wilson of FFTF and last year's privacy-focused Reset the Net campaign, he's joined the mailing list to help refine the criteria. I have no doubt he can propose better wording than I could.
Thanks! Wording things precisely is tricky, so the more the help the better.

I based the initial proposed criteria off an excellent article he wrote for developers wishing to implement privacy in their software, here's the article:
http://resetthenet.tumblr.com/post/84327981750/how-we-secure-our-phones-ssl-cert-pinning-pfs
In it 3rd party code is characterized as the weakest link in security so it appeared particularly relevant, but I am not sure what specific criteria could address this.

Thanks. I've added that link to the "background" page.

As to our conversation about fully encrypting software downloads/updates, I had not considered those points. Especially overhead, which makes sense considering the only distros I have seen encrypting all transmitted data from the software channel are run by Red Hat, who can no doubt afford it. I appreciate you educating me on the subject and if you believe it impractical I understand.
Don't get me wrong, I *like* the idea! But if the criteria are too hard, people won't implement the criteria at all. So for the moment I'm proposing that this be a higher-level "silver" criteria. Here's my first try:
Releases MUST be downloadable through a channel that both encrypts and authenticates (e.g., TLS).
That way, third parties will not be able to determine exactly what version is being downloaded, and this also provides some verification that the correct software is being downloaded from the site.
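
As a rough illustration, an automated badge check for this criterion might start as simply as verifying the scheme of the release URL (a sketch with illustrative names; a real check would also follow redirects and validate certificates):

```python
# Minimal check: does the release download URL use a channel that encrypts
# and authenticates? Here that is approximated as "the scheme is https".
from urllib.parse import urlparse

def release_channel_ok(download_url: str) -> bool:
    return urlparse(download_url).scheme == "https"
```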

How's that? Suggestions welcome.

--- David A. Wheeler


Current authentication plans

David A. Wheeler
 

An important issue is how to handle authentication.
Below is our current plan, which may change (suggestions welcome).

--- David A. Wheeler

===========================================================

In general, we want to ensure that only trusted developer(s) of a project
can create or modify information about that project.
That means we will need to authenticate individual *users* who enter data,
and we also need to authenticate that a specific user is a trusted developer
of a particular project.

For our purposes the project's identifier is the project's main URL.
This gracefully handles project forks and
multiple projects with the same human-readable name.
We intend to prevent users from editing the project URL once
a project record has been created.
Users can always create another table entry for a different project URL,
and we can later loosen this restriction (e.g., if a user controls both the
original and new project main URL).

We plan to implement authentication in these three stages:
1. A way for GitHub users to authenticate themselves and show that they control specific projects on GitHub.
2. An override system so that users can report on other projects as well (this is important for debugging and error repair).
3. A system to support users and projects not on GitHub.

For GitHub users reporting about specific projects on GitHub,
we plan to hook into GitHub itself.
We believe we can use GitHub's OAuth for user authentication.
If someone can administer a GitHub project,
then we will presume that they can report on that project.
We will probably use the "devise" module
for authentication in Rails (since it works with GitHub).
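
The trust rule above ("can administer, therefore can report") reduces to a one-line check against the repository object GitHub's API returns for the authenticated user. A sketch, not the badging app's actual code:

```python
# GitHub's repository objects include a "permissions" block for the
# authenticated user; treat admins as able to report on the project.
def can_report(repo_info: dict) -> bool:
    return bool(repo_info.get("permissions", {}).get("admin"))
```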

We will next implement an override system so that users can report on
other projects as well.
We will add a simple table of users and what project URLs they can *also*
control (with "*" meaning "any project").
A user who can control any project would presumably also be able to modify
entries of this override table (e.g., to add other users).
This will enable the Linux Foundation to easily
override data if there is a problem.
At the beginning the users would still be GitHub users, but the project URL
they are reporting on need not be on GitHub.
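
The override table described above amounts to a map from user to the set of project URLs they may additionally control, with "*" as a wildcard. A sketch with illustrative names:

```python
# Override check: a user controls a project URL if the table grants it
# explicitly or grants "*" (any project).
def controls(overrides: dict, user: str, project_url: str) -> bool:
    allowed = overrides.get(user, set())
    return "*" in allowed or project_url in allowed

overrides = {
    "lf-admin": {"*"},                            # may fix any entry
    "alice": {"https://example.org/myproject"},   # one extra project
}
```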

Finally, we will implement a user account system.
This enables users without a GitHub user account to still use the system.
We would store passwords for each user (as iterated cryptographic hashes
with per-user salt; currently we expect to use bcrypt for iteration),
along with a user email address to eventually allow for password resets.
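
The plan names bcrypt; since bcrypt requires a third-party library, the sketch below uses the standard library's PBKDF2 purely to illustrate the same salted, iterated-hashing idea (the iteration count and names are illustrative):

```python
# Salted, iterated password hashing, sketched with PBKDF2. The plan itself
# calls for bcrypt; the structure (per-user random salt, many iterations,
# constant-time comparison) is the same.
import hashlib, hmac, os
from typing import Optional, Tuple

ITERATIONS = 100_000  # illustrative work factor

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    salt = salt if salt is not None else os.urandom(16)  # fresh per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```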

All users (both GitHub and non-GitHub) would have a cryptographically
random token assigned to them; a project URL page may include the
token (typically in an HTML comment) to prove that a given user is
allowed to represent that particular project.
That would enable projects to identify users who can represent them
without requiring a GitHub account.

Future versions might support sites other than GitHub; the design should
make it easy to add other sites in the future.

We intend to make public the *username* of whoever last
entered data for each project (generally that would be the GitHub username),
along with the edit time.
The data about the project itself will also be public, as well
as its badge status.

In the longer term we may need to support transition of a project
from one URL to another, but since we expect problems
to be relatively uncommon, there is no need for that capability initially.


certification -vs- guide

Enos <temp4282138782@...>
 

Dear David and List,

The current criteria (v 0.0.4) include best practices on both quality and security. They set requirements for a (self-)certification program, but also give recommendations and detailed definitions.

From my (small) experience, compliance is either black or white. It could include fuzzy requirements (for example, dependent on the classification or scope of a system or project), but additional content may be detrimental: a hybrid that is both a guide and a certification may not reap the full benefits of either (although there could be such a thing as the best compromise for the largest audience).

RECOMMENDATIONS may trigger false expectations about certified projects. They could instead be turned into requirements for higher badges (making the contents of each badge a recommendation for the badge below),
or they could depend on a project's classification or scope (as other certification requirements could).

DEFINITIONS and explanations are already available from specialized external resources and papers. To maintain focus and avoid duplicating information (which would require additional effort to keep up to date), it may be best to include only concise requirements, and only when essential, referencing trusted external resources as necessary. Specifically, to me the current sections on testing and patching deadlines appear slightly verbose.

SECURITY and quality within the document are matched into single badge levels. While this is done intelligently, it doesn't allow precisely estimating the value of projects with different levels of quality and security. Did you consider two distinct badge courses? Bronze security could become a requirement of silver quality, and vice versa, bronze quality a requirement of silver security.

The above issues are mostly aimed at brainstorming. Even I don't think that all the proposed modifications should be applied, but I do think that evaluating them now may be constructive. They are not nearly as important as the core contents, but given their pervasiveness, if they are implemented, the earlier it is done, the less refactoring effort it will require.

Thanks for reading. What do you think?


Kind Regards
--

Enos D'Andrea,
from Italy


Re: certification -vs- guide

David A. Wheeler
 

Enos D'Andrea:
The current criteria (v 0.0.4) include best practices on both quality and security. They set requirements for a (self-)certification program, but also give recommendations and detailed definitions.
From my (small) experience, compliance is either black or white. It could include fuzzy requirements (for example, dependent on the classification or scope of a system or project), but additional content may be detrimental: a hybrid that is both a guide and a certification may not reap the full benefits of either (although there could be such a thing as the best compromise for the largest audience).
I mostly agree. We want compliance requirements to be black or white, to the extent we can. That said, there will always be some criteria that are hard to make perfectly black-and-white. E.g., the requirement that the project website "succinctly describe what the software does (what problem does it solve?), in language that potential users can understand..." would be difficult to fully automate. In those cases my theory is that if there's a reasonable attempt that most people would accept as complying, that's good enough. For example, that criterion was created to deal with projects that say "We're the QZY project" without any hint of what the software does; any reasonable attempt would be better than that, and it's hard to figure out whether something is secure if you can't figure out what it does :-).

RECOMMENDATIONS may trigger false expectations about certified projects. They could instead be turned into requirements for higher badges (making the contents of each badge a recommendation for the badge below), or they could depend on a project's classification or scope (as other certification requirements could).
I agree that they could turn into requirements for higher badges. However, I think it is MUCH better to include important recommendations, even if they're not required. While some projects may do just the minimum, I think many projects will do more... but only if it's recommended. I don't think the false expectation risk is high; the word "RECOMMENDED" is standard IETF terminology and it's basically the normal English meaning, so most readers shouldn't be confused.

Of course, this raises the question, "why have a badge?" The paper [Open badges for education: what are the implications at the intersection of open systems and badging?](http://www.researchinlearningtechnology.net/index.php/rlt/article/view/23563)
identifies three general reasons for badging systems, and I think all three reasons apply.

1. Badges as a motivator of behaviour. We hope that by identifying best practices, we'll encourage projects to implement those best practices if they do not do them already.
2. Badges as a pedagogical tool. Some projects may not be aware of some of the best practices applied by others, or how they can be practically applied. The badge will help them become aware of them and ways to implement them.
3. Badges as a signifier or credential. Potential users want to use projects that are applying best practices to consistently produce good results; badges make it easy for projects to signify that they are following best practices, and make it easy for users to see which projects are doing so.

So while the "RECOMMENDED" text is irrelevant as a signifier or credential, it definitely serves as a pedagogical tool. Most project participants *want* their projects to produce good results; we're just giving them information on how they can get there.

DEFINITIONS and explanations are already available from specialized external resources and papers. To maintain focus and avoid duplicating information (which would require additional effort to keep up to date), it may be best to include only concise requirements, and only when essential, referencing trusted external resources as necessary.
Specifically, to me the current sections on testing and patching deadlines appear slightly verbose.

I've recently tried to tighten them up a little bit. I'm happy to cite other resources (e.g., I do that with CVSS). The problem is that when you cite other sources, people have to track those down and read them too; if the burden is too high, people won't do it.

SECURITY and quality within the document are matched into single badge levels. While this is done intelligently, it doesn't allow precisely estimating the value of projects with different levels of quality and security. Did you consider two distinct badge courses? Bronze security could become a requirement of silver quality, and vice versa, bronze quality a requirement of silver security.
Yes, we've talked about having different courses. In particular, in Madrid there was a lot of talk about having a "basic" level, and then a set of specific badges (not a particular level) that required "basic" first. However, quality-not-including-security and security are rather interrelated. Also, we really want to keep things as simple as practical. So we've focused on creating just one badge level to start with, and then move from there.

The above issues are mostly aimed at brainstorming. Even I don't think that all the proposed modifications should be applied, but I do think that evaluating them now may be constructive. They are not nearly as important as the core contents, but given their pervasiveness, if they are implemented, the earlier it is done, the less refactoring effort it will require.
Thanks!! This is the *perfect* time for brainstorming.

--- David A. Wheeler


Re: certification -vs- guide

Blibbet <blibbet@...>
 

Hi, first post, just joined.

On 09/04/2015 11:53 AM, Wheeler, David A wrote:
Thanks!! This is the *perfect* time for brainstorming.
Unclear if I'm off-topic for this thread, but please don't forget
firmware by focusing only on OS/apps.

A lot of firmware is open source these days. UEFI uses the BSD-licensed
tianocore.org code as a common base, U-Boot is just as active as UEFI, and
coreboot has its Libreboot and Chrome OS variants. For some, the issue
is removing blobs. For security, dealing with blobs is one issue, but so
is dealing with security tech in firmware, e.g., Verified coreboot,
UEFI's Secure Boot, and a similar option in U-Boot. Many of these work with
or without a TPM, and some work with TrustZone. OpenPOWER has very different
firmware, but parts of it are open source. This means dealing with actual
firmware, as well as virtualized firmware in Xen/KVM/etc.

These are issues for VM software, FOSS-based OEMs, FOSS-friendly IBVs
(Independent BIOS vendors, like AMI, Insyde, Phoenix, Sage Engineering,
etc.).

Granted most is Linux-based, but FreeBSD currently supports UEFI. Having
them give a voice for their firmware [security] infrastructure needs
would be useful.

IMO, the MOST IMPORTANT firmware infrastructure need is a CA
(Certificate Authority) as an alternative to Microsoft, the current
UEFI Forum CA. Having a Linux-friendly CA by itself probably won't
solve it, unless it is an OS-agnostic CA that the UEFI Forum adopts. To get it
used, IMO, you need to bypass traditional IBVs and create a new FOSS
OS-centric IBV, which uses/requires this new CA. Then you have a
complete firmware solution to offer non-Windows/non-Chrome OEMs. Today,
Linux OEMs normally use a COTS BIOS, which means it comes with ACPI tables
with Windows executables embedded in them, and Secure Boot that requires
Microsoft to sign any pre-OS boot tool (GRUB, any FOSS OS loader, etc.).
UEFI aside, coreboot and U-Boot both need this; both have similar PKI
issues now with their Verified/etc. flavors of boot; see the Sage
Engineering CEO's blog post on this coreboot need from a few years ago.

Today, Linaro is nearly an IBV for ARM, providing binaries of UEFI and
U-Boot for multiple ARM boards. We need a decentralized, community-based
IBV for FOSS OSes, in conjunction with a FOSS-friendly CA.

Then, while we wait for Open Hardware (e.g., RISC-V), we have to work with
Intel/AMD/ARM vendors to reduce their blobs. AFAICT, AMD is doing pretty
well w/r/t blobs on coreboot. The Intel FSP (Firmware Support Package) is the
main source of Intel blobs. Today, you need an NDA to get the source to
modify those, as Purism and Sage Engineering do. Looking at the most recent
Purism blog post, it sounds like they're trying to build a Free Software
alternative to FSP; not sure if that's possible, but a CII project should
try to do so, to help UEFI/coreboot/U-Boot.

I have some more notes on what's required, if this isn't too far
off-topic for CII....

Thanks,
Lee Fisher
RSS: http://firmwaresecurity.com/feed


Re: certification -vs- guide

David A. Wheeler
 

Lee Fisher:
Unclear if I'm off-topic for this thread, but please don't forget firmware by focusing only on OS/apps.
Great point. To be honest, we haven't focused too much on firmware so far, so a pass specifically looking for problems would be very welcome. That said, we *have* tried to ensure that it's doable for kernels (such as the Linux kernel), so I have hopes that everything we've said so far is reasonable.

If you find a proposed criterion that is *NOT* appropriate for firmware, please post / create an issue / create a pull request.

IMO, the MOST IMPORTANT firmware infrastructure need is a CA (Certificate Authority) as an alternative to Microsoft, the current UEFI Forum CA. Having a Linux-friendly CA by itself probably won't solve it, unless it is an OS-agnostic CA that the UEFI Forum adopts....
I agree, but I think the "badging" work is the wrong forum for *that* problem. It may be a *CII* issue, conceivably, but it's not really a "badge".

... Then, while we wait for Open Hardware (eg, RISC-V), we have to work with Intel/AMD/ARM vendors to reduce their blobs.
Sounds fair, but the criteria are written to apply to open source software. If it's a binary-only blob, the badging criteria won't apply anyway.

not sure if possible, but a CII project should be to try and do so, to help UEFI/coreboot/U-Boot.
I think that'd be better addressed by creating a proposal for a new CII-funded project. This mailing list just focuses on the "badging" work for OSS projects.

--- David A. Wheeler