Re: Finally completed badge, feedback on process

David A. Wheeler

Christopher Sean Morrison:
> I finally pushed enough off my plate and found time to finish filling out
> BRL-CAD's badging, which I’d started 6 months ago. Happy to be #8 in the
> list and 28th to get to 100%.
Congrats!! That's excellent.

> Here’s a retrospective with feedback.

> In all, it took about 3 interrupted hours total to gather, fact check, and
> write up responses for all fields. Probably would have taken an hour.
I really appreciate the write-ups. The 1 hour is consistent with what we've been seeing & estimating.

> Getting to 100% passing was relatively easy for BRL-CAD, with only one MUST
> item arguably being unmet beforehand (our website certificate didn’t match
> our domain, fixed). The rest was mostly a matter of documentation and
> elaboration.
Good to hear. I would expect most well-run projects to "mostly" do well. Fixing domain certificates, and doing a little documentation & elaboration, is (to me) a *good* thing.

> Here’s my top-7 critical feedback:

> 1) Despite so many fields, it’s too easy to (falsely) pass. Looking at
> others with 100%, I would challenge some of the subjective MUST responses,
> and expect to be challenged in kind. Incorporating 3rd-party review gating
> before achieving 100% passing would increase overall value.
We certainly *could* add 3rd-party review gating. For example, we don’t advertise this much, but I *do* review each passing badge to look for nonsense. But it's merely a brief look for nonsense, not a rigorous re-check.

This is a fair concern, and one that's been discussed from the beginning. The reason we didn't *require* 3rd-party review gating is because we're concerned that we would become the bottleneck as the number of projects increases. There are a lot of OSS projects out there.

So the question is... how could we scale that 3rd-party review? My current position is to focus on improving automation... which would make everyone happy (faster to get a badge, more rigorous checking). Are there good scalable alternatives?

> 2) Taking a position on distributed vs centralized version control is
> contemporary flamebait, both with merit and downsides. There are robust
> examples of both being perfectly viable, secure, and best practice.
> Popularity should have no bearing on recommendations.
(This is in reference to: "It is SUGGESTED that common distributed version control software be used (e.g., git). [repo_distributed]").

I certainly agree that centralized version control is viable (having used sccs, rcs, cvs, and subversion at one time or another). The argument for this criterion is that distributed systems tend to make it easier to collaborate (because you can easily reconcile changes initiated at different times)... and since it's only SUGGESTED, it does not *mandate* decentralized version control.

That said, we could certainly drop this criterion. It's only SUGGESTED (which hints at your next point), so it doesn't have much "oomph" - and shortening the criteria a little bit is a good thing. I think we should *expect* that as we get more projects & experience there will be improvements and tightening of the criteria. The *much* more important issue is to *have* version control, and I think it's generally agreed that version control should stay as a MUST.

> 3) Most of the SUGGESTED items devalue the badge through dilution. Some
> could graduate to SHOULD (e.g., those under Quality) while the remainder
> offer little to no value (as they have no bearing on the badge and only
> increase burden). I would recommend removing the non-Quality ones.
Reasonable enough. I think the SUGGESTED items have some value, because psychologically people don't like to admit they don't do something (if they think they should be doing it). But I could be mistaken.

I'd like to hear others' opinions on this. I note that Daniel Stenberg agrees with this comment.

> 4) Private reports MUST … be privately reportable. N/A notwithstanding, I
> don’t see how this could ever be Unmet. If there’s no private reporting
> mechanism, private reports are de-facto not supported.
That's certainly true in a sense. The point, though, is that the project has to *tell* reporters how to do the private reporting. A lot of projects just don't tell people how to report vulnerabilities (they've never considered the possibility), so the real goal is to get them to write it down ahead-of-time.

> 5) “Working build system” is not strictly defined (perhaps intentionally),
> but “working” is the more questionable part. Flaky open source compilation
> is the epitome of “works for me” ignorance. Nobody with a build system will
> say it’s not working.

Well, I've seen some non-working build systems, but you're right, people don't want to *admit* they're not working.

Would changing "Working" to "Automated" be an improvement?

> 6) Treating warnings as errors shouldn’t be a suggestion. Projects SHOULD
> be maximally strict, treating warnings as errors, with minimal exceptions
> (e.g., less than 1 exemption per 100 files). Frankly, I think it should be
> a MUST.
The problem is that it depends on the overall platform (including language and underlying OS) & the tools available. Making it a MUST would be too harsh for many cases. For example, you can find more problems by enabling more warnings, but those higher levels tend to be much noisier. We don't want to *discourage* people from using more sensitive (though noisier) tools.
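To make the "strict by default, with narrow exemptions" idea concrete, here is a minimal Python sketch (purely illustrative; this is not anything BadgeApp checks). It escalates all warnings to errors while exempting one known-noisy category, which is roughly analogous to compiling with `-Werror` plus a targeted per-category opt-out:

```python
import warnings

# Treat every warning as an error (Python's analogue of -Werror)...
warnings.simplefilter("error")
# ...but carve out one narrow exemption for a known-noisy category.
warnings.filterwarnings("ignore", category=DeprecationWarning)

escalated = False
try:
    warnings.warn("possible problem", UserWarning)  # not exempted
except UserWarning:
    escalated = True  # the warning was raised as an exception
print("UserWarning escalated to error:", escalated)

# Exempted category: silently ignored, no exception raised.
warnings.warn("legacy API", DeprecationWarning)
```

The trade-off David describes shows up even in this tiny example: the broader the "error" filter, the more exemptions you end up enumerating, and noisier analysis tools push that count up quickly.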

> 7) The “BadgeApp” title on individual badging pages makes for a terrible
> title when sharing with others (e.g., via Facebook). Suggest something like
> “CII’s Best Practices Badge for $project_name”.
I presume you mean the <title>...</title> value in the HTML page, which shows up on browser tabs.

You're absolutely right, and we can quickly fix this too.
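For reference (hedging a bit, since I haven't re-checked exactly what Facebook scrapes these days): sharing sites generally read the `<title>` element and, when present, Open Graph metadata, so a per-project page could emit something like the following (the project name and exact wording are just examples, not a committed design):

```html
<head>
  <!-- Illustrative only: exact wording/markup to be decided -->
  <title>CII Best Practices Badge for BRL-CAD</title>
  <meta property="og:title" content="CII Best Practices Badge for BRL-CAD">
</head>
```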

--- David A. Wheeler
