
Should we allow a LICENSES/ directory as a way to implement criterion license_location?

David A. Wheeler
 

The criterion “license_location” says:
> The project MUST post the license(s) of its results in a standard location in their source repository. {Met URL} [license_location]

Issue #1544 proposes to also allow files in a directory named “LICENSES/” to be considered a “standard location”.

I’m not fundamentally opposed to the idea; there are certainly projects with multiple licenses, so having a directory for those files (and explaining their relationships) doesn’t seem insane. The proposed name makes sense & is already in use in at least one project - maybe many.

What do others think?

By way of background, the FSFE “reuse-tool” uses this LICENSES directory. It’s discussed here:

The “details” for the criterion license_location <https://bestpractices.coreinfrastructure.org/en/criteria/0?details=true#0.license_location> currently says:

> Details:
> E.g., as a top-level file named LICENSE or COPYING. License filenames MAY be followed by an extension such as ".txt" or ".md". Note that this criterion is only a requirement on the source repository. You do NOT need to include the license file when generating something from the source code (such as an executable, package, or container). For example, when generating an R package for the Comprehensive R Archive Network (CRAN), follow standard CRAN practice: if the license is a standard license, use the standard short license specification (to avoid installing yet another copy of the text) and list the LICENSE file in an exclusion file such as .Rbuildignore. Similarly, when creating a Debian package, you may put a link in the copyright file to the license text in /usr/share/common-licenses, and exclude the license file from the created package (e.g., by deleting the file after calling dh_auto_install). We do encourage including machine-readable license information in generated formats where practical.
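To make the proposal concrete, here’s a hypothetical sketch in Ruby (my own illustration, NOT the BadgeApp’s actual detection code; the method and constant names are invented) of what treating LICENSES/ as an additional standard location could look like:

```ruby
# Hypothetical sketch only: list license files found in the "standard
# locations" of a repository checkout, optionally also treating a LICENSES/
# directory (as issue #1544 proposes) as a standard location.
LICENSE_BASENAME = /\A(LICENSE|COPYING)(\.(txt|md))?\z/i

def license_locations(repo_dir, allow_licenses_dir: true)
  found = Dir.children(repo_dir).select { |f| f.match?(LICENSE_BASENAME) }
  licenses_dir = File.join(repo_dir, 'LICENSES')
  if allow_licenses_dir && Dir.exist?(licenses_dir)
    # In the LICENSES/ convention, each file names one of the project's licenses.
    found += Dir.children(licenses_dir).map { |f| File.join('LICENSES', f) }
  end
  found.sort
end
```

With `allow_licenses_dir: false` this is roughly the current criterion; with it true, a project with only a LICENSES/ directory would also qualify.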

Comments welcome, here or in the issue tracker: https://github.com/coreinfrastructure/best-practices-badge/issues/1544

--- David A. Wheeler


FYI: Report on the 2020 FOSS Contributor Survey

David A. Wheeler
 

FYI:

The "Report on the 2020 FOSS Contributor Survey” has been released from the Linux Foundation & The Laboratory for Innovation Science at Harvard. Authors are: Frank Nagle (Harvard Business School), David A. Wheeler (The Linux Foundation), Hila Lifshitz-Assaf (New York University), Haylee Ham, & Jennifer L. Hoffman (Harvard). URL:

It summarizes a survey of OSS contributors, with a focus on security (so I thought it’d be relevant to this group). It has lots of interesting tidbits, for example, "the overwhelming majority (74.87%) of respondents are already employed full-time, and more than half (51.65%) are specifically paid to develop FOSS.”

From a *security* view, one important result is that OSS contributors do *not* want to spend lots more time on security. I don’t think that means that “security is irrelevant”, but it means that we need to do things that do NOT soak up large amounts of time - which is entirely possible. They’re happy to learn. Note that the badging process takes ~20 minutes, which I think is *not* a big use of time, & thus fits into this. We *do* need to make security the default in a lot more systems - I think that’s one important way to implement that finding.

Anyway, I thought many of you would find it interesting.

--- David A. Wheeler




FYI: CII Best Practices badge recent minor updates

David A. Wheeler
 

FYI, I thought it might be useful to summarize recent minor updates to the CII Best Practices badge. They don’t change anything substantive, but I wanted to make sure you were aware of them.

Hopefully they show that we continue to maintain the project. As always, help is welcome. Details below!

--- David A. Wheeler

=======================

Details:

* We’ve tweaked some of the criteria text to make it clearer, after proposing the changes and giving people time to review them. We want the text to be as clear as possible! For more information:
* When a vulnerability is discovered in a component we use, we update it, and we have tools to help notify us. For example, redcarpet was updated in
* We mention US export control law requirements. This isn’t a badge requirement, but it’s a
  legal requirement that can trip up some OSS developers. We want to
  help developers stay out of unnecessary legal trouble! Details here:
* As noted earlier, we’ve just added Swahili. There’s a lot of translation left to actually do.
* If you know natural languages other than English, as always we’d love your help.
  To help translators, we recently posted information for translators at:
* We made some minor performance improvements:
  - Session cookies are no longer sent in certain cases.
  - The list of “bad passwords” (passwords local users aren’t allowed to use) has been moved
    from memory to the database. We have limited memory in production, and there’s no need
    for this list to use so much memory when we can put it in the database instead. Details here:

Over the holidays I worked to upgrade a lot of its infrastructure.
We don’t want to fall too far behind, because when a vulnerability is found we want to be
able to immediately update to fix it. For example:

* We upgraded our OS infrastructure from ubuntu-16 to ubuntu-20.
* We upgraded from PostgreSQL 11.5 to PostgreSQL 12 (the current supported version on Heroku).
* We switched to cimg-based Docker images for use on CircleCI during testing. The old format is deprecated.

We want to update from Rails 5.X to Rails 6.X. We’ve made progress, but we’re not done
with the steps necessary to be able to try that.
The problem is that we used two gems (libraries) that aren’t compatible with Rails 6.

We’ve fixed one problem by removing the gem fastly-rails as noted here:
That was more work than expected. They recommend switching to the “fastly” gem
(aka “fastly-ruby”), but fastly-ruby is not designed to support multi-threading (WHAT?!).
So we had to modify our code to directly call the Fastly API instead.

I’ve confirmed that much of the *application* code now works with Rails 6 (other than logout, oddly).
However, there appears to be at least one more step. For system testing we currently depend on the
gem minitest-rails-capybara, which does *not* support Rails 6. The recommended approach is to
switch to Rails' standard system testing approach (which I believe did not exist when we
started this project). I don’t expect any fundamental roadblocks; we’ll just need to take
time to switch the system test infrastructure and tests to the updated API.
It’s possible there will be problems switching to Rails 6 after that, but hopefully they’ll be small.
We’re not done fixing the infrastructure to move to Rails 6, but we are making progress.

You can see lots more detail here:


Projects that received badges (monthly summary)

badgeapp@...
 

This is an automated monthly status report of the best practices badge application covering the month 2020-12.

Here are some selected statistics for the most recent completed month, preceded by the same statistics for the end of the month before that.

Ending dates                        2020-11-29  2020-12-30
Total Projects                            3501        3570
In Progress Projects 25%+                 1381        1410
In Progress Projects 50%+                 1144        1171
In Progress Projects 75%+                  934         955
In Progress Projects 90%+                  725         742
Passing Projects                           499         511
Passing Projects, 25%+ to Silver           182         187
Passing Projects, 50%+ to Silver           123         125
Passing Projects, 75%+ to Silver            78          79
Passing Projects, 90%+ to Silver            32          32
Silver Projects                             17          17
Silver Projects, 25%+ to Gold              126         128
Silver Projects, 50%+ to Gold               32          32
Silver Projects, 75%+ to Gold               13          13
Silver Projects, 90%+ to Gold                7           7
Gold Projects                                7           7

Here are the projects that first achieved a Passing badge in 2020-12:

  1. RosaeNLG at 2020-12-01 08:26:17 UTC
  2. argoproj at 2020-12-02 18:41:59 UTC
  3. whylogs-python at 2020-12-03 21:14:15 UTC
  4. argo-events at 2020-12-03 22:54:37 UTC
  5. antrea at 2020-12-04 22:59:37 UTC
  6. rosaenlg-java at 2020-12-06 15:08:09 UTC
  7. WireCloud at 2020-12-14 15:49:44 UTC
  8. Python SDK for Data Attribute Recommendation at 2020-12-15 16:33:16 UTC
  9. Grid eXchange Fabric (GXF): formerly known as the Open Smart Grid Platform at 2020-12-16 15:39:31 UTC
  10. cloud-custodian at 2020-12-16 18:38:00 UTC
  11. FIWARE Cosmor Orion Flink Connector at 2020-12-18 15:12:44 UTC
  12. fiware-cosmos-orion-spark-connector at 2020-12-18 15:31:19 UTC
  13. Adlik at 2020-12-19 06:59:41 UTC
  14. elabftw at 2020-12-24 15:37:57 UTC

We congratulate them all!

Do you know a project that doesn't have a badge yet? Please suggest to them that they get a badge now!


Re: Proposed tweaks to CII Best Practices criteria

David A. Wheeler
 

As mentioned earlier, several issues proposed tweaks to the CII Best Practices criteria or related text. Here are the pull requests that make those changes. Please note any last-minute issues, I intend to merge these this Thursday (January 7) if there are no objections:

* Allow CalVer: https://github.com/coreinfrastructure/best-practices-badge/pull/1530
* Tweak release_notes_vulns: https://github.com/coreinfrastructure/best-practices-badge/pull/1529
* Tweak criterion “test”: https://github.com/coreinfrastructure/best-practices-badge/pull/1528
* Tweak dynamic_analysis_enable_assertions: https://github.com/coreinfrastructure/best-practices-badge/pull/1527
* Mention US export control law: https://github.com/coreinfrastructure/best-practices-badge/pull/1526

--- David A. Wheeler


FLOSS Weekly #609, CII Best Practices translations for Chinese & Swahili

David A. Wheeler
 

FYI:

I was on FLOSS Weekly #609 to talk about “Open Source Security”. It’s available here:
I pointed out the CII Best Practices badge, the edX training course “Secure Software Development Fundamentals”, and the OpenSSF (I even read out its working groups).

We have two new translators for Chinese (thank you!), and a new volunteer who’s starting a Swahili translation. My thanks to all!

--- David A. Wheeler


Rebranding the "CII Best Practices badge" to the OpenSSF - see issue #1515

David A. Wheeler
 

All: Now that the CII Best Practices badge is part of the OpenSSF, there needs to be a discussion about whether or not it should eventually be rebranded to specifically note the OpenSSF, and if so, what its new names/URLs should be.

This issue proposes such a rebranding:
https://github.com/coreinfrastructure/best-practices-badge/issues/1515

It proposes:
• Name: "CII Best Practices Badge" → "OpenSSF Best Practices Badge"
• Repo (GitHub) site: https://github.com/coreinfrastructure/best-practices-badge → https://github.com/openssf/best-practices-badge/
• Website: https://bestpractices.coreinfrastructure.org/ → https://bestpractices.dev/ (the goal is to have a much shorter URL; the long domain name has been a complaint in the past).
• Badge display (small image): “cii best practices” → “openssf best practices”

It is *important* that if a *naming* rebrand to the OpenSSF occurs at all, it must occur only *once*. It’s expensive in time & effort to do a rebrand, and it confuses many people. E.g., we’ll have to pay someone to update the logo, code changes will need to be made, and so on.

Please post comments on issue #1515, but if you’d rather discuss it in this mailing list that’s fine too.

--- David A. Wheeler


Proposed tweaks to CII Best Practices criteria

David A. Wheeler
 

We have several proposed tweaks to the CII Best Practices criteria or related text.

Comments are very welcome in either the specific GitHub issue or here on the mailing list.

Details below.

--- David A. Wheeler

==============


* 1507 - Currently we SUGGEST SemVer; this proposes SUGGESTing SemVer *or* CalVer:
This is a proposed slight relaxation of a SUGGESTed criterion to also allow CalVer. See also <https://calver.org/>.
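As a rough illustration (my own sketch, not badge-app code), the two version schemes could be distinguished like this in Ruby:

```ruby
# Illustrative patterns only; both SemVer and CalVer allow more variation than
# these regexes capture (e.g., CalVer also permits two-digit years like "20.12").
# See semver.org and calver.org for the real rules.
SEMVER = /\A\d+\.\d+\.\d+(-[0-9A-Za-z.-]+)?(\+[0-9A-Za-z.-]+)?\z/
CALVER = /\A(19|20)\d{2}\.\d{1,2}(\.\d{1,2})?\z/  # e.g., "2020.12" or "2021.1.2"

def suggested_version_format?(version)
  version.match?(SEMVER) || version.match?(CALVER)
end
```

So "1.2.3" and "1.2.3-rc.1" would pass as SemVer, "2020.12" as CalVer, while something like "v1" would match neither.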

* 1508 - Reword release_notes_vulns to clarify its text

* 1509 - Update test or test_invocation for multi-language projects
The latest revised proposal is to modify criterion “test”, which says: “The project MUST use at least one automated test suite that is publicly released as FLOSS (this test suite may be maintained as a separate FLOSS project)”, by adding: “The project MUST clearly show or document how to run the test suite(s) (e.g., via a continuous integration (CI) script or documentation in files such as BUILD.md, README.md, or CONTRIBUTING.md).” Technically this would be a change in the criterion. However, the only way to show that a project uses a test suite (and thus meets the original criterion) is to show it or document it, so it could be argued this was always implied. Alternatively, we could add it as a new criterion, but I don't think we need to in this case.

* 1510 - Add dynamic_analysis_enable_assertions details (to clarify its meaning & application)
This doesn’t change anything substantive, but the criterion was causing some confusion that we want to eliminate.

* 1513 - Add info on how to comply with US export control law.
This doesn't change the criteria at all. This is just a pointer to legal information that’s important to projects that can be accessed from within the US. Most projects *are* distributed from the US even if they don’t start in the US, and it can help protect people within the US, so it seems like a helpful tip.


Projects that received badges (monthly summary)

badgeapp@...
 

This is an automated monthly status report of the best practices badge application covering the month 2020-11.

Here are some selected statistics for the most recent completed month, preceded by the same statistics for the end of the month before that.

Ending dates                        2020-10-30  2020-11-29
Total Projects                            3453        3501
In Progress Projects 25%+                 1363        1381
In Progress Projects 50%+                 1129        1144
In Progress Projects 75%+                  918         934
In Progress Projects 90%+                  712         725
Passing Projects                           487         499
Passing Projects, 25%+ to Silver           179         182
Passing Projects, 50%+ to Silver           121         123
Passing Projects, 75%+ to Silver            77          78
Passing Projects, 90%+ to Silver            32          32
Silver Projects                             17          17
Silver Projects, 25%+ to Gold              124         126
Silver Projects, 50%+ to Gold               31          32
Silver Projects, 75%+ to Gold               13          13
Silver Projects, 90%+ to Gold                7           7
Gold Projects                                7           7

Here are the projects that first achieved a Passing badge in 2020-11:

  1. kurento-media-server at 2020-11-12 09:42:10 UTC
  2. mlx90632-library at 2020-11-12 22:26:37 UTC
  3. nothing-private at 2020-11-14 09:05:35 UTC
  4. pravega at 2020-11-17 15:32:14 UTC
  5. FIWARE Identity Management - Keyrock at 2020-11-17 16:11:08 UTC
  6. FIWARE PEP Proxy Wilma at 2020-11-17 16:19:47 UTC
  7. Configuration Persistence Service at 2020-11-18 10:24:34 UTC
  8. openleadr-python at 2020-11-19 15:38:44 UTC
  9. Hyperledger Avalon at 2020-11-19 16:23:48 UTC
  10. AMICI at 2020-11-24 19:47:34 UTC
  11. ONNX at 2020-11-25 17:39:26 UTC
  12. go-proxy-cache at 2020-11-26 15:46:07 UTC

We congratulate them all!

Do you know a project that doesn't have a badge yet? Please suggest to them that they get a badge now!


Free set of 3 courses on “Secure Software Development Fundamentals” now available!

David A. Wheeler
 

All: There is now a *free* set of 3 courses on how to develop secure software, titled “Secure Software Development Fundamentals”.

I wrote it, with lots of comments & help from others. Special thanks go to Yannick Moy, who also translates the
CII Best Practices badge work into French!

The plan is to add a link from the “details” of the CII Best Practices badge to the course.
You do *NOT* need to take that course to get a badge, but we try to provide helpful links for people,
and I think this is a useful link. See: https://github.com/coreinfrastructure/best-practices-badge/pull/1505

The set of 3 courses is available on edX here:
https://www.edx.org/professional-certificate/linuxfoundationx-secure-software-development-fundamentals
If you want, you can also pay to take tests to earn a certificate & show that you learned the material
(that's how edX keeps running).

Please let others know about this set of courses. The price is hard to beat, and we really want more people
to learn how to develop secure software.

--- David A. Wheeler


Projects that received badges (monthly summary)

badgeapp@...
 

This is an automated monthly status report of the best practices badge application covering the month 2020-10.

Here are some selected statistics for the most recent completed month, preceded by the same statistics for the end of the month before that.

Ending dates                        2020-09-29  2020-10-30
Total Projects                            3388        3453
In Progress Projects 25%+                 1325        1363
In Progress Projects 50%+                 1095        1129
In Progress Projects 75%+                  893         918
In Progress Projects 90%+                  689         712
Passing Projects                           472         487
Passing Projects, 25%+ to Silver           173         179
Passing Projects, 50%+ to Silver           115         121
Passing Projects, 75%+ to Silver            74          77
Passing Projects, 90%+ to Silver            31          32
Silver Projects                             17          17
Silver Projects, 25%+ to Gold              119         124
Silver Projects, 50%+ to Gold               29          31
Silver Projects, 75%+ to Gold               13          13
Silver Projects, 90%+ to Gold                7           7
Gold Projects                                7           7

Here are the projects that first achieved a Passing badge in 2020-10:

  1. paypayopa-sdk-python at 2020-10-01 10:46:22 UTC
  2. CCTag at 2020-10-02 13:09:49 UTC
  3. PopSift at 2020-10-02 14:08:32 UTC
  4. paypayopa-sdk-node at 2020-10-07 05:38:49 UTC
  5. pagy at 2020-10-08 15:45:41 UTC
  6. octant at 2020-10-15 23:40:17 UTC
  7. asymptote-glg-contrib at 2020-10-20 11:03:19 UTC
  8. Tremor Event Processing System at 2020-10-21 10:11:40 UTC
  9. couler at 2020-10-22 01:01:19 UTC
  10. Warnings plugin at 2020-10-26 14:11:03 UTC
  11. translator-openpredict at 2020-10-27 23:12:19 UTC
  12. gochk at 2020-10-29 15:12:25 UTC

We congratulate them all!

Do you know a project that doesn't have a badge yet? Please suggest to them that they get a badge now!


Dan Kohn has died

David A. Wheeler
 

All:

I must bring you the sad news that Dan Kohn has died.

Dan was a pioneer who helped many people. Among many other things, he oversaw the explosive growth of the Cloud Native Computing Foundation (CNCF) as director and founded The Linux Foundation Public Health Initiative (LFPH). His Wikipedia page mentions some other achievements, such as the “first secure commercial transaction on the web”, but that page only scratches the surface: https://en.wikipedia.org/wiki/Dan_Kohn

Dan was instrumental in founding the Core Infrastructure Initiative (CII) & the CII Best Practices Badge. You’ll find his name all over the badging work, indeed, he approved a change just 18 days ago.

Dan will be sorely missed.

--- David A. Wheeler


Re: Plan to modify assurance case format (more claims, use SACM notation) - any thoughts?

Kevin W. Wall
 

Other than describing the SACM's ArgumentReasoning symbol as a "half-rectangle", I have no objections. (A "half-rectangle" is also itself a rectangle, so I think some alternate description would be better. Indeed, in certain cases people might even think "square".)

For those playing along at home, this is the symbol David is referring to:

[inline image: the SACM ArgumentReasoning symbol]
I think "open rectangle" would be a little better, but there probably is some formal name for this. (Anyone know?)

Overall, I think the notation is similar in flavor to (i.e., has the look and feel) of UML diagrams, so perhaps a UML drawing tool would work better than Libre Draw. Long ago (at a different job) I used both Dia and Umbrello and they were acceptable, but according to this (https://www.linuxlinks.com/best-free-unified-modeling-software/) there are some better choices. Eclipse also has some UML plugins that might work.

So, FWIW, no objections from me.
-kevin

On Wed, Oct 21, 2020 at 7:59 PM David Wheeler <dwheeler@...> wrote:
For the BadgeApp we include an “assurance case”, that is, a set of claims/arguments/evidence explaining why we think it’s secure. You can see the assurance case here:

Some folks at MITRE have been reviewing our assurance case. My thanks - reviews make things better! Two overall suggestions have been made:
1. Use nested claims instead of nested arguments. Claims are simple true/false statements, so this change should make the material easier to follow.
2. Switch to the new SACM graphical notation instead of the older CAE graphical notation. CAE is more common, but SACM notation has many advantages.

I think these are good ideas & I currently plan to implement them over time. We want the assurance case to be maximally clear. I also want its assurance case to be potentially easy to use by others as a starting point (many assurance cases aren’t public, making them hard to learn from).

However, before committing to them, please let me/us know if there are any objections / concerns. If this is a bad idea, I don’t want to do it :-).

Details below.

--- David A. Wheeler


=== DETAILS ===

Up to this point we’ve used claims/argument/evidence (CAE) notation, which is wonderfully simple. Claims (including subclaims) are ovals, arguments are rounded rectangles, evidence (references) are rectangles. You can see its definition here:

Object Management Group (OMG) Structured Assurance Case Metamodel (SACM) specification here:
Historically this specification has worried about defining a standard interchange format for assurance case data. We aren’t trying to exchange with others, and I don’t know of any mature OSS tools that directly support the SACM data format, so this specification hasn’t been focused on a problem we’re trying to solve. However, the newest version of SACM has a new graphical notation. Claims (including subclaims) are rectangles, ArgumentReasoning (aka arguments) are half-rectangles, and evidence are shadowed rectangles. In addition, it uses “big dots” on connections.

Here’s a fragment of our assurance case in CAE graphical notation, including evidence symbols (we often suppress them due to space limitations):

Here’s the same fragment using SACM graphical notation (really a simplified subset of SACM):


Here are advantages of the SACM graphical notation over CAE’s graphical notation:

1. CAE Claim vs. SACM Claim. CAE uses ovals, while SACM uses rectangles. SACM has a *BIG* win here: Rectangles use MUCH less space, so complex diagrams are much easier to create & easier to understand.
2. CAE Argument vs. SACM ArgumentReasoning. CAE uses rounded rectangles, while SACM uses a shape I’ll call a “half-rectangle”. CAE’s rounded rectangles are not very distinct from its evidence rectangles, which is a minor negative for the CAE notation. SACM initially presented some challenges when using our drawing tool (LibreOffice Draw), but I overcame them:
  - SACM’s half-rectangle initially presented me with a problem: that is *NOT* a built-in shape for the drawing tool I’m using (LibreOffice Draw). I suspect it’s not a built-in symbol in many tools. I was able to work around this by creating a polygon (many drawing tools support this, and this is a very easy polygon to make). It took a little tweaking, but I managed to create a simple polygon with embedded text. In the longer term, the SACM community should work to get this easy icon into other drawing tools, to simplify its use.
  - SACM’s half-rectangle is VERY hard to visually distinguish if both it & claims are filled with color. I use color fills to help the eye notice type differences. My solution was simple: color fill everything *except* the half-rectangle; this makes them all visually distinct.
3. CAE Evidence vs. SACM ArtifactReference. In CAE this is a simple rectangle. In SACM this is a shadowed rectangle with an arrow; the arrow is hard to add with simple drawing tools, but the shadow is trivial to add with a “shadow” property in LibreOffice (and many other drawing tools), and I think just the shadow is adequate. The shadow adds slightly more space (but MUCH less than ovals), and it takes a moment to draw by hand, but I think that’s a reasonable trade-off to ensure that they are visually distinct. In addition: I tend to record evidence / ArtifactReferences in *only* text, not in the diagrams, because diagrams are time-consuming to maintain. So making *claims* simple to draw, and making evidence/ArtifactReferences slightly more complex to draw, is exactly the right tradeoff.
4. Visual distinctiveness. In *general* the CAE icons for Claim/Argument/Evidence are not as visually distinct as SACM’s Claim/ArgumentReasoning/ArtifactReference, especially when they get shaped to the text contents. That’s an overall advantage for the SACM graphical notation.
5. SACM’s “bigdot”. The bigdot, e.g., in AssertedInference, makes the diagrams simpler by making it easy to move an argument / ArgumentReasoning icon away from the flow from supporting claims/evidence to a higher claim. You could also informally do that with CAE, but it’s clearly a part of SACM. In the SACM(-like?) diagrams I’ve drawn I’ve omitted the bigdot in some cases, which may not be strictly compliant. I’m not sure how important that is, although I guess one advantage of “bigdot” is that it makes it much easier to add an ArgumentReasoning later.

Nothing is perfect. One problem with SACM’s ArgumentReasoning symbol - a half-rectangle - is that while it’s easy to connect on the left/top/bottom, it’s somewhat unclear when trying to connect from its bare right-hand-side. A simple solution is to prefer to put them on the right-hand-side. I wish they’d chosen another symbol that was still clearly distinct from the others, easy to hand-draw, already available in simple drawing tools, and did not take a lot of extra space. For example, they could have chosen an uneven pentagon (“pointer”) or callout symbol (with the little tail). But that’s a nit; it’s still an improvement & I try to use standard symbols when it’s reasonable to do so.



--
Blog: http://off-the-wall-security.blogspot.com/    | Twitter: @KevinWWall
NSA: All your crypto bit are belong to us.


Plan to modify assurance case format (more claims, use SACM notation) - any thoughts?

David A. Wheeler
 

For the BadgeApp we include an “assurance case”, that is, a set of claims/arguments/evidence explaining why we think it’s secure. You can see the assurance case here:

Some folks at MITRE have been reviewing our assurance case. My thanks - reviews make things better! Two overall suggestions have been made:
1. Use nested claims instead of nested arguments. Claims are simple true/false statements, so this change should make the material easier to follow.
2. Switch to the new SACM graphical notation instead of the older CAE graphical notation. CAE is more common, but SACM notation has many advantages.

I think these are good ideas & I currently plan to implement them over time. We want the assurance case to be maximally clear. I also want its assurance case to be potentially easy to use by others as a starting point (many assurance cases aren’t public, making them hard to learn from).

However, before committing to them, please let me/us know if there are any objections / concerns. If this is a bad idea, I don’t want to do it :-).

Details below.

--- David A. Wheeler


=== DETAILS ===

Up to this point we’ve used claims/argument/evidence (CAE) notation, which is wonderfully simple. Claims (including subclaims) are ovals, arguments are rounded rectangles, evidence (references) are rectangles. You can see its definition here:

Object Management Group (OMG) Structured Assurance Case Metamodel (SACM) specification here:
Historically this specification has worried about defining a standard interchange format for assurance case data. We aren’t trying to exchange with others, and I don’t know of any mature OSS tools that directly support the SACM data format, so this specification hasn’t been focused on a problem we’re trying to solve. However, the newest version of SACM has a new graphical notation. Claims (including subclaims) are rectangles, ArgumentReasoning (aka arguments) are half-rectangles, and evidence are shadowed rectangles. In addition, it uses “big dots” on connections.

Here’s a fragment of our assurance case in CAE graphical notation, including evidence symbols (we often suppress them due to space limitations):

Here’s the same fragment using SACM graphical notation (really a simplified subset of SACM):


Here are advantages of the SACM graphical notation over CAE’s graphical notation:

1. CAE Claim vs. SACM Claim. CAE uses ovals, while SACM uses rectangles. SACM has a *BIG* win here: Rectangles use MUCH less space, so complex diagrams are much easier to create & easier to understand.
2. CAE Argument vs. SACM ArgumentReasoning. CAE uses rounded rectangles, while SACM uses a shape I’ll call a “half-rectangle”. CAE’s rounded rectangles are not very distinct from its evidence rectangles, which is a minor negative for the CAE notation. SACM initially presented some challenges when using our drawing tool (LibreOffice Draw), but I overcame them:
  - SACM’s half-rectangle initially presented me with a problem: that is *NOT* a built-in shape for the drawing tool I’m using (LibreOffice Draw). I suspect it’s not a built-in symbol in many tools. I was able to work around this by creating a polygon (many drawing tools support this, and this is a very easy polygon to make). It took a little tweaking, but I managed to create a simple polygon with embedded text. In the longer term, the SACM community should work to get this easy icon into other drawing tools, to simplify its use.
  - SACM’s half-rectangle is VERY hard to visually distinguish if both it & claims are filled with color. I use color fills to help the eye notice type differences. My solution was simple: color fill everything *except* the half-rectangle; this makes them all visually distinct.
3. CAE Evidence vs. SACM ArtifactReference. In CAE this is a simple rectangle. In SACM this is a shadowed rectangle with an arrow; the arrow is hard to add with simple drawing tools, but the shadow is trivial to add with a “shadow” property in LibreOffice (and many other drawing tools), and I think just the shadow is adequate. The shadow adds slightly more space (but MUCH less than ovals), and it takes a moment to draw by hand, but I think that’s a reasonable trade-off to ensure that they are visually distinct. In addition: I tend to record evidence / ArtifactReferences in *only* text, not in the diagrams, because diagrams are time-consuming to maintain. So making *claims* simple to draw, and making evidence/ArtifactReferences slightly more complex to draw, is exactly the right tradeoff.
4. Visual distinctiveness. In *general* the CAE icons for Claim/Argument/Evidence are not as visually distinct as SACM’s Claim/ArgumentReasoning/ArtifactReference, especially when they get shaped to the text contents. That’s an overall advantage for the SACM graphical notation.
5. SACM’s “bigdot”. The bigdot, e.g., in AssertedInference, makes the diagrams simpler by making it easy to move an argument / ArgumentReasoning icon away from the flow from supporting claims/evidence to a higher claim. You could also informally do that with CAE, but it’s clearly a part of SACM. In the SACM(-like?) diagrams I’ve drawn I’ve omitted the bigdot in some cases, which may not be strictly compliant. I’m not sure how important that is, although I guess one advantage of “bigdot” is that it makes it much easier to add an ArgumentReasoning later.

Nothing is perfect. One problem with SACM’s ArgumentReasoning symbol - a half-rectangle - is that while it’s easy to connect on the left/top/bottom, it’s somewhat unclear when trying to connect from its bare right-hand-side. A simple solution is to prefer to put them on the right-hand-side. I wish they’d chosen another symbol that was still clearly distinct from the others, easy to hand-draw, already available in simple drawing tools, and did not take a lot of extra space. For example, they could have chosen an uneven pentagon (“pointer”) or callout symbol (with the little tail). But that’s a nit; it’s still an improvement & I try to use standard symbols when it’s reasonable to do so.


Re: Rate limits for non-badge-image requests

Kate Stewart
 

Adding Sean to this thread, as CHAOSS risk metrics have a dashboard
that uses the CII badge information.

Sean - any impact expected from your perspective?

Thanks, Kate

On Thu, Oct 1, 2020 at 7:30 PM David Wheeler
<dwheeler@linuxfoundation.org> wrote:

Some overeager people are trying to spider the entire best practices site all at once. This can cause trouble for everyone else. Our current rate limits don’t trigger soon enough, because they cover *all* requests, and we can handle many badge image requests.

So I propose adding a new rate limit for anything OTHER than badge images & static files. Details here:
https://github.com/coreinfrastructure/best-practices-badge/issues/1475
https://github.com/coreinfrastructure/best-practices-badge/pull/1478

The default rate limit I’m proposing is up to 15 requests every 15 seconds. That short time window will let us detect, far more quickly, when someone is making too many requests at once. It could be different, e.g., 30 requests / 15 seconds or 20 requests / 10 seconds. Recommendations welcome. The goal is to make it invisible to “normal” users, but stop abuses quickly.

I’d especially like to hear from anyone whose dashboard might be negatively impacted. If you just serve CII badge images it shouldn’t impact you at all.

If we use the CDN to serve the JSON data about individual projects we could exclude that as well, but that would be a different change.

--- David A. Wheeler




Rate limits for non-badge-image requests

David A. Wheeler
 

Some overeager people are trying to spider the entire best practices site all at once. This can cause trouble for everyone else. Our current rate limits don’t trigger soon enough, because they cover *all* requests, and we can handle many badge image requests.

So I propose adding a new rate limit for anything OTHER than badge images & static files. Details here:
https://github.com/coreinfrastructure/best-practices-badge/issues/1475
https://github.com/coreinfrastructure/best-practices-badge/pull/1478

The default rate limit I’m proposing is up to 15 requests every 15 seconds. That short time window will let us detect, far more quickly, when someone is making too many requests at once. It could be different, e.g., 30 requests / 15 seconds or 20 requests / 10 seconds. Recommendations welcome. The goal is to make it invisible to “normal” users, but stop abuses quickly.
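To make the proposal concrete, here is a rough sketch of the throttling logic in Python. This is purely illustrative (the class name and interface are invented for this example); the actual change is in the linked pull request and uses the BadgeApp’s existing rate-limiting machinery.

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter: allow at most `limit` requests per `period` seconds."""

    def __init__(self, limit=15, period=15.0):
        self.limit = limit
        self.period = period
        self.hits = deque()  # timestamps of recently allowed requests

    def allow(self, now=None):
        """Return True if a request arriving at time `now` is within the limit."""
        now = time.monotonic() if now is None else now
        # Discard timestamps that have fallen out of the window.
        while self.hits and now - self.hits[0] >= self.period:
            self.hits.popleft()
        if len(self.hits) < self.limit:
            self.hits.append(now)
            return True
        return False
```

With limit=15 and period=15, a client making 15 requests in a burst is unaffected, the 16th request in the same window is rejected, and the window clears 15 seconds later. The alternatives mentioned (30/15s, 20/10s) would just change the constructor arguments.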

I’d especially like to hear from anyone whose dashboard might be negatively impacted. If you just serve CII badge images it shouldn’t impact you at all.

If we use the CDN to serve the JSON data about individual projects we could exclude that as well, but that would be a different change.

--- David A. Wheeler


Projects that received badges (monthly summary)

badgeapp@...
 

This is an automated monthly status report of the best practices badge application covering the month 2020-08.

Here are some selected statistics for the most recent completed month, preceded by the same statistics for the end of the month before that.

Ending dates                        2020-07-30  2020-08-30
Total Projects                            3309        3351
In Progress Projects 25%+                 1292        1312
In Progress Projects 50%+                 1070        1086
In Progress Projects 75%+                  875         886
In Progress Projects 90%+                  666         680
Passing Projects                           459         464
Passing Projects, 25%+ to Silver           166         169
Passing Projects, 50%+ to Silver           112         114
Passing Projects, 75%+ to Silver            72          73
Passing Projects, 90%+ to Silver            31          31
Silver Projects                             16          16
Silver Projects, 25%+ to Gold              114         115
Silver Projects, 50%+ to Gold               27          28
Silver Projects, 75%+ to Gold               13          13
Silver Projects, 90%+ to Gold                7           7
Gold Projects                                7           7

Here are the projects that first achieved a Passing badge in 2020-08:

  1. taquito at 2020-08-04 23:03:03 UTC
  2. Eclipse Steady at 2020-08-14 09:36:40 UTC
  3. ludwig at 2020-08-16 18:00:30 UTC
  4. egeria at 2020-08-20 09:29:41 UTC
  5. rcosmo at 2020-08-24 04:35:51 UTC
  6. shortlink at 2020-08-24 08:25:12 UTC

We congratulate them all!

Do you know a project that doesn't have a badge yet? Please suggest to them that they get a badge now!


Proposed criteria introduction text

David A. Wheeler
 

All: Here's some proposed criteria introduction text.
Comments? It's lengthy, so I want to fix it up *before* our translators have
to deal with it.

The plan is to use this text to enable people to more easily see
all the criteria in *any* of our supported natural languages.
People will be able to view "/criteria" on the BadgeApp and
see this (translated) introduction, and all the translated criteria.

--- David A. Wheeler

====

<h2 id='introduction'>Introduction</h2>
<p>
There is no set of practices that can <i>guarantee</i> that software
will never have defects or vulnerabilities.
Even formal methods can fail if the
specifications or assumptions are wrong.
Nor is there any set of practices that can guarantee that a project will
sustain a healthy and well-functioning development community.</p>
<p>
However, following best practices can help improve the results
of projects.
For example, some practices enable multi-person review before release,
which can both help find otherwise hard-to-find technical vulnerabilities
and help build trust and a desire for repeated interaction among
developers from different organizations.</p>
<p>
This page presents a set of best practices
for Free/Libre and Open Source Software (FLOSS) projects.
Projects that follow these best practices
will be able to voluntarily self-certify and show that they've
achieved the relevant
Core Infrastructure Initiative (CII) Best Practices badge.
Projects can do this, at no cost,
by using a web application (BadgeApp)
to explain how they meet these practices and their detailed criteria.</p>
<p>
These best practices have been created to:</p>
<ol>
<li>encourage projects to follow best practices,</li>
<li>help new projects discover what those practices are, and</li>
<li>help users know which projects are following best practices
(so users can prefer such projects).</li>
</ol>
<p>
The idiom "best practices" means
"a procedure or set of procedures that is preferred or considered
standard within an organization, industry, etc."
(source:
<a href="http://www.dictionary.com/browse/best-practice"
rel="nofollow">Dictionary.com</a>).
These criteria are what we believe are
widely "preferred or considered standard"
in the wider FLOSS community.</p>
<p>
For more information on how these criteria were developed,
see the <a
href="https://github.com/coreinfrastructure/best-practices-badge"
rel="nofollow">CII Best Practices badge GitHub site</a>.</p>
<p></p>
<h3 id='criteria_structure'>Criteria Structure</h3>
<p>
The best practices criteria are divided into three levels:<ul>
<li><b>Passing</b> focuses on best practices
that well-run FLOSS projects typically already follow.
Getting the passing badge is an achievement; at any one time
only about 10% of projects pursuing a badge achieve the passing level.
<li><b>Silver</b> is a more stringent set of criteria than passing but is
expected to be achievable by small and single-organization projects.
<li><b>Gold</b> is even more stringent than silver and includes
criteria that are not achievable by small or single-organization projects.
</ul>
<p>
Every criterion has a short name, shown below as superscripted
text inside square brackets.</p>
<p></p>
<h3 id='criteria_other_projects'>Relationship to Other Projects</h3>
<p>
The Linux Foundation also sponsors the
<a href="https://www.openchainproject.org/"
rel="nofollow">OpenChain Project</a>, which
identifies criteria for a "high quality Free
and Open Source Software (FOSS) compliance program."
OpenChain focuses on how organizations can best use FLOSS and contribute
back to FLOSS projects, while the CII Best Practices badge
focuses on the FLOSS projects themselves.
The CII Best Practices badge and OpenChain work together to help
improve FLOSS and how FLOSS is used.</p>
<p></p>
<h3 id='criteria_automation'>Criteria Automation</h3>
<p>
In some cases we automatically test and fill in information
if the project follows standard conventions and
is hosted on a site (e.g., GitHub) with decent API support.
We intend to improve this automation in the future;
improvements are welcome!</p>
<p></p>
<h3 id='criteria_changes'>Changes over time</h3>
<p>
We expect that these practices and their detailed criteria will
be updated over time.
We plan to add new criteria but mark them as "future" criteria, so that
projects can add that information and maintain their badge.</p>
<p>
Feedback is <em>very</em> welcome via the
<a href="https://github.com/coreinfrastructure/best-practices-badge"
rel="nofollow">GitHub site as issues or pull requests</a>.</p>
There is also a
<a href="https://lists.coreinfrastructure.org/mailman/listinfo/cii-badges"
rel="nofollow">mailing list for general discussion</a>.</p>
<p></p>
<h3 id='keywords'>Key words</h3>
<p>
The key words "MUST", "MUST NOT",
"SHOULD", "SHOULD NOT", and "MAY"
in this document are to be interpreted as described in
<a href="https://tools.ietf.org/html/rfc2119" rel="nofollow">RFC 2119</a>.
The additional term SUGGESTED is added.
In summary, these key words have the following meanings:</p>
<ul>
<li>The term MUST is an absolute requirement, and MUST NOT
is an absolute prohibition.</li>
<li>The term SHOULD indicates a criterion that is normally required,
but there may exist valid reasons in particular circumstances
to ignore it.
However, the full implications must be understood and carefully weighed
before choosing a different course.</li>
<li>The term SUGGESTED is used instead of SHOULD when the criterion must
be considered, but the valid reasons
to not do so are even more common than for SHOULD.</li>
<li>The term MAY provides one way something can be done, e.g.,
to make it clear that the described implementation is acceptable.</li>
</ul>
<p>
Often a criterion is stated as something that SHOULD be done, or is
SUGGESTED, because it may be difficult to implement or the costs
to do so may be high.</p>
<p></p>
<h3 id='criteria_achieving_badge'>Achieving a badge</h3>
<p>
To obtain a badge, all MUST and MUST NOT criteria must be met, all
SHOULD criteria must be either met OR unmet with justification, and
all SUGGESTED criteria have to be considered (it must be
rated as met or unmet, but justification is not required
unless noted otherwise).
An answer of N/A ("not applicable"), where allowed, is considered
the same as being met.
In some cases, especially in the higher levels,
justification and/or a URL may be required.</p>
<p>
Some criteria have special markings that influence this:<ul>
<li><b>{N/A allowed}</b> - "N/A" ("Not applicable") is allowed.
<li><b>{N/A justification}</b> - "N/A" ("Not applicable") is allowed
and requires justification.
<li><b>{Met justification}</b> - "Met" requires justification.
<li><b>{Met URL}</b> - "Met" requires justification with a URL.
<li><b>{Future}</b> - the answer to this criterion currently
has no effect, but it may be required in the future.
</ul>
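(Aside for this list, not part of the proposed page text: the award rule above can be sketched as a small check. The data shape and all names below are hypothetical, purely to illustrate the rule.)

```python
# Each criterion answer is (category, status, justification), where
# category is "MUST", "SHOULD", or "SUGGESTED" and
# status is "Met", "Unmet", or "N/A" (N/A, where allowed, counts as met).

def passes(criteria):
    """Return True if the answers satisfy the badge-award rule."""
    for category, status, justification in criteria:
        if status == "N/A":
            continue  # treated the same as being met
        if status not in ("Met", "Unmet"):
            return False  # every criterion must at least be rated
        if category == "MUST" and status != "Met":
            return False  # MUST criteria must be met
        if category == "SHOULD" and status == "Unmet" and not justification:
            return False  # unmet SHOULD criteria need a justification
    return True
```

SUGGESTED criteria only need a rating, so an unmet SUGGESTED answer with no justification still passes, while an unmet SHOULD answer without justification does not.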
<p>
A project must achieve the previous level to achieve the next level.
In some cases SHOULD criteria become MUST in higher level badges,
and some SUGGESTED criteria at lower levels become SHOULD or MUST
in higher level badges. The higher levels also require more
justification, because we want others to understand <i>how</i>
the criteria are being met.</p>
<p>
There is one implied passing criterion - every project MUST have
a public website with a stable URL. This is required to create
a badge entry in the first place.</p>
<p></p>
<h3 id='terminology'>Terminology</h3>
<p>
If you are not familiar with
software development or running a FLOSS project, materials such as
<a href="http://producingoss.com/"
rel="nofollow"><em>Producing Open Source Software</em>
by Karl Fogel</a> may be helpful to you.</p>
<p>Here are a few key terms.</p>
<p>
A <em>project</em> is an active entity that has
project member(s) and produces project result(s).
Its member(s) use project sites to coordinate and disseminate result(s).
A project does not need to be a formal legal entity.
Key terms relating to projects are:</p>
<ul>
<li>Project <em>members</em> are the
group of one or more people or companies who work together
to attempt to produce project results.
Some FLOSS projects may have different kinds of members, with different
roles, but that's outside our scope.</li>
<li>Project <em>results</em> are what the project members work together
to produce as their end goal. Normally this is software,
but project results may include other things as well.
Criteria that refer to "software produced by the project"
are referring to project results.</li>
<li>Project <em>sites</em>
are the sites dedicated to supporting the development
and dissemination of project results, and include
the project website, repository, and download sites where applicable
(see <a href="#sites_https">sites_https</a>).</li>
<li>The project <em>website</em>, aka project homepage, is the main page
on the world wide web (WWW) that a new user would typically visit to see
information about the project; it may be the same as the project's
repository site (this is often true on GitHub).</li>
<li>The project <em>repository</em> manages and stores the project results
and revision history of the project results.
This is also referred to as the project <em>source repository</em>,
because we only require managing and storing of the editable versions,
not of automatically generated results
(in many cases generated results are not stored in a repository).</li>
</ul>


--
--- David A. Wheeler
Director of Open Source Supply Chain Security, The Linux Foundation


Rename route "/criteria"->"/criteria_stats", /criteria to display criteria

David A. Wheeler
 

FYI:
I intend to soon rename the route "/criteria" to "/criteria_stats". We
can then use "/criteria" to display the actual criteria in the
selected locale. This is technically a change in the user-visible API,
but in practice I expect no impact.

Details:
https://github.com/coreinfrastructure/best-practices-badge/pull/1453
https://github.com/coreinfrastructure/best-practices-badge/pull/1454

--- David A. Wheeler
Director of Open Source Supply Chain Security, The Linux Foundation


Re: Renaming whitelist->acceptlist, blacklist->denylist

David A. Wheeler
 

All: Minor correction.
The more common term seems to be "allowlist", not "acceptlist". E.g.:
https://www.zdnet.com/article/linux-team-approves-new-terminology-bans-terms-like-blacklist-and-slave/
So I plan to use "allowlist" everywhere, not acceptlist.

These are new words, so I didn't immediately notice the inconsistency.
I'll note that Google Docs *already* accepts "allowlist" as a single word.

--- David A. Wheeler
Director of Open Source Supply Chain Security, The Linux Foundation
