Brainstorming: additional topics for the criteria
Dear David and list,
I came up with this list of proposed requirements and recommendations to be added to the current criteria. I would suggest only extracting or discussing valuable material (if anything), disregarding the rest.
Measure not only efforts but also results: if too many vulnerabilities are discovered (e.g. total CVSS points per number of lines of code in a certain period), temporarily revoke badges.
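To make the proposal concrete, here is a minimal sketch of what such a "vulnerability density" check could look like. The formula (CVSS points per thousand lines of code) and the threshold value are my own illustrative assumptions, not part of any existing criteria:

```python
# Hypothetical sketch of a "vulnerability density" metric:
# total CVSS base points per 1000 lines of code in a given period.
# The threshold below is illustrative only.

def cvss_density(cvss_scores, lines_of_code):
    """Sum of CVSS base scores per 1000 lines of code."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return sum(cvss_scores) * 1000 / lines_of_code

def badge_ok(cvss_scores, lines_of_code, threshold=5.0):
    """True if the project stays under the (illustrative) threshold."""
    return cvss_density(cvss_scores, lines_of_code) <= threshold

# A project with two medium vulnerabilities (CVSS 5.0 and 6.5)
# in a 50,000-line codebase:
print(cvss_density([5.0, 6.5], 50_000))  # -> 0.23
print(badge_ok([5.0, 6.5], 50_000))      # -> True
```

Whatever the exact formula, it would need a grace period so that a project processing the results of its first audit is not penalized, as discussed further down the thread.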
Encouraging code reuse leads to better and more secure code:
- extract code into reusable libraries whenever possible
- document the interfaces of such libraries
- provide a justification when not reusing existing libraries
- provide a justification if forking another project
- write self-descriptive code and/or inline documentation
- attach high-level UML diagrams (advanced badges only)
- forbid obfuscated code and efforts to prevent reuse/portability
Not directly related, but IMHO this explains why there is so much mediocre code in the amateur OSS world, and where to start fixing that:
Ref. http://preview.tinyurl.com/edlabs-prog-fun (& follow-ups)
Not only for coding, but also for:
- user interfaces (e.g. hotkeys, command line options, GUI)
- documentation (e.g. man pages)
Limit and justify changes in APIs.
Declare how long major production versions will be maintained.
GIVE USERS CONTROL
Do not hardcode values, settings or optional functions, but allow changing them via GUI or configuration files (bad examples: Firefox's connection to Google at startup cannot easily be disabled; Google was hardcoded as the only search provider in old Chromium versions).
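A minimal sketch of the idea, using Python's standard configparser. The file name, keys and default values are made up for illustration:

```python
# Illustrative sketch: read a search-provider URL and a "phone home"
# switch from a user-editable config file instead of hardcoding them.
# File name, section and key names are hypothetical.
import configparser

DEFAULTS = {
    "search_provider": "https://example.org/search",
    "phone_home_on_start": "false",
}

def load_settings(path="app.ini"):
    cfg = configparser.ConfigParser()
    cfg["general"] = DEFAULTS   # defaults the user can override
    cfg.read(path)              # a missing file keeps the defaults
    return cfg["general"]

settings = load_settings()
if settings.getboolean("phone_home_on_start"):
    pass  # contact external services only when the user opted in
print(settings["search_provider"])
```

The point is simply that every externally visible behaviour has a switch the user can reach, with conservative defaults.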
Prohibit running closed-source code at runtime (bad example: Chromium's download of binary blobs).
Require software projects to declare that they do not allow deliberate vulnerabilities (even by governments) nor share accidental vulnerabilities before fixing them. Possibly make this an optional requirement (i.e. YES vs. no reply) so that clean projects can declare it (thus bypassing the silence imposed by "law-enforcement" agencies).
Require explicit user confirmation when downgrading security (e.g. using weak or no encryption, accepting self-signed certificates, sending non-public data).
When programs are pre-configured with cloud services, explain their license terms concisely and clearly.
Suggest to projects not only how to encrypt, but also what and when to encrypt (i.e. any non-public data leaving the computer).
The current basic badge is too demanding for many small projects; consider adding more basic badges, or the community may create them by forking CII-Badges.
Push vulnerability advisories to users (e.g. through RSS or distribution lists). Associate vulnerabilities with all vulnerable versions (bad example: it is not disclosed which old versions of Chrome are affected by new vulnerabilities, so users must always update).
Ask leaders of projects above a certain number of contributors to read one book about project management.
RESPONSIBILITY / ACCOUNTABILITY
Formally appoint a physical person as security officer. Senior core developers should be physical persons (bad example: TrueCrypt developers disappearing into the void leaving behind a mess).
Allow privileged access to repositories only from secure environments (workstations or accounts). Obtain development tools from trusted sources (bad example: trojanized Apple development tools downloaded from Baidu's cloud).
Prefer open formats, which can be easily secured and improved upon without copyright infringement (e.g. OGG vs. MP3).
Investigate the origin of vulnerabilities after they are found. Document the findings and share them after the fix is released. Take measures to prevent similar incidents in the future.
Mandate credential complexity; add incremental delays for failed logons (from the same source and/or for the same account). Encourage two-factor authentication.
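The incremental-delay part could be as simple as an exponential backoff keyed by account. The constants below are illustrative only:

```python
# Sketch of incremental (exponential) delays for failed logons,
# keyed by account. BASE_DELAY and MAX_DELAY are illustrative.

FAILED = {}           # account -> consecutive failure count
BASE_DELAY = 0.5      # seconds; doubles with each consecutive failure
MAX_DELAY = 60.0      # cap so lockout stays bounded

def delay_for(account):
    """Seconds to wait before the next attempt for this account."""
    failures = FAILED.get(account, 0)
    return min(BASE_DELAY * (2 ** failures), MAX_DELAY) if failures else 0.0

def record_attempt(account, success):
    if success:
        FAILED.pop(account, None)   # reset on successful logon
    else:
        FAILED[account] = FAILED.get(account, 0) + 1

record_attempt("alice", success=False)
record_attempt("alice", success=False)
print(delay_for("alice"))  # -> 2.0 (0.5 * 2**2)
```

A real implementation would also persist the counters and rate-limit by source address, but the shape of the mechanism is the same.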
Available documentation must match the latest software release.
Maintain multiple documents in case of simultaneous production branches.
Present the software at public events or forums (advanced badges only).
Under certain conditions, the current Criteria could allow a project to reply after two months to a critical vulnerability report (because it only counts average response times). Instead, acknowledge receipt immediately and set a (sufficiently large) limit for verification and for remediation, notifying the reporter upon both.
As with the output of a static analysis tool, this list may be mostly garbage. It is provided as-is without endorsement. The examples are only meant to give context, not to be discussed in themselves.
David A. Wheeler
> I came up with this list of proposed requirements and recommendations to be added to the current criteria. I would suggest only extracting or discussing valuable material (if anything), disregarding the rest. Here goes...
Thanks for the open brainstorming. That's a long list; I've skimmed it, but I'll have to come back to this list later after further thinking.
I do have an immediate question, though:
> STARTING BADGE

Which of the MUST items do you think are too demanding? The SHOULD and SUGGESTED items merely have to be justified if not done.
--- David A. Wheeler
Emily Ratliff <eratliff@...>
I picked this one to comment on:
I would be concerned about this one, especially the way it is described in the parenthetical. We don't want a rule that is subject to gaming (i.e. if I withhold this fix for 72 hours, then I won't lose the badge for my project). We also don't want to penalize the behavior that we are trying to motivate - if a project has sprung for its very first professional code audit and is processing the results, then we don't want to revoke their badge since they are doing the right thing. Revoking the Badge is a sledgehammer, perhaps we can accomplish what you want in this one in a different way.
Wheeler, David A wrote:
> Which of the MUST items do you think are too demanding? The SHOULD and SUGGESTED items merely have to be justified if not done.

You're right: considering the optional items, the current badge is not "too" demanding for many small projects; still, I find it to be "very" demanding for the very smallest.
The reason is basically one: the need to correct "all confirmed medium and high vulnerabilities" found through static and dynamic testing.
Corrections themselves are affordable, but learning, setting up and running these tools, and finally sifting through their (often very numerous) false positives, require solid IT security skills and a lot of time.
While no details are given as to how to run these tools, for some people there is only one way to do things, and that is to do them well.
When realizing how much time would be needed, projects with 1-2 people and no resources to outsource may just give up.
An alternative could be to set up a body of volunteers who would run the tools and skim the false positives for them. Or it could be a requirement for higher badges that developers "gain experience" by testing other projects. If in doubt, I would suggest leaving everything as it is now and only making changes later if needed.
As someone who tries to adhere to a lot of these best practices, I agree that this item is a very high bar.
I would prefer something like a status counter noting how many confirmed items that you have and how many that you've fixed but without it having any impact on your actual badge.
It would act as a "we know, we're working on it, but here's a warning sticker" type of item.
It may also be best as a simple count of open items against the specified version.
This all gets *really* hairy though when the codebase goes through a major rewrite. At that point, every item that was opened in the past is now invalid and the usual stance is "well, just upgrade".
In terms of running the tools, you are 100% correct. I would personally *love* it if someone pounded against my builds all day. I do have the skills to assess the results but it's hard to find the time to run the myriad set of tools out there.
But, if this happened, I would be hard pressed to constantly fend off the false positives (there are thousands) and I detest simply changing my software to make someone else's poorly written scanner be quiet.
On Tue, Oct 6, 2015 at 1:00 PM, Enos <temp4282138782@...> wrote:
Wheeler, David A wrote:
Vice President, Onyx Point, Inc
-- This account not approved for unencrypted proprietary information --
Emily Ratliff wrote:
> I picked this one to comment on:

You have a point, but as always I will play the devil's advocate...
The best projects go after badges to *increase* their security.
Ordinary projects go after badges to *prove* they incorporate security.
Bad projects go after badges to *pretend* they incorporate security.
If bad projects are allowed to maintain their badge in spite of an outrageous number of vulnerabilities, end-users will realize it and will stop associating the badges with the concept of security. The badges will ultimately lose their meaning for all but the best projects.
While the concept of "maximum CVSS total points per lines of code per unit of time" is not appropriate for the basic badge, I am of the opinion that it (or something similar) is necessary for higher ones.
On projects intentionally withholding vulnerabilities I reference a former post by David:
> [...] We can't completely solve the negative side of working around [...]

Just prohibit projects from arbitrarily withholding patches... If you cannot trust a project to respect that single rule, then you cannot trust it to self-certify. Out of respect for the good projects, and for the survival of the badging program, you ought to find rogue projects and, after a warning or two, revoke their badges.
Performing audits I learned the importance of strict policies: if you have them, then you can always choose not to enforce them, but if you don't have them, you cannot suddenly create them when needed.