More on spam countering efforts

David A. Wheeler
 

FYI, we have implemented some simple spam countering mechanisms on the best practices badge application.

Most trivially, whenever someone tries to create a project badge entry, they now see this:
Please tell us about your free/libre/open source software (FLOSS) project. This MUST be a FLOSS project; nothing else is permitted. Do NOT add an unrelated site to try to improve a site's search engine optimization (SEO). This spamming is forbidden because it harms users, and it will not help SEO anyway (all hyperlinks are marked with ugc and nofollow).
We've also made some changes because we've noticed that, so far, all spam attempts use "local" accounts:
* After someone creates a local account, we intentionally delay the activation email by 5 minutes. We have our mailer handle the delay, so we don't have to maintain a separate job service just for this.
* After a local account is activated, we intentionally delay any login to the account for 1 hour, and we explain that it's an anti-spam measure.

For local users these changes are mildly annoying (sorry about that), but they should be acceptable while discouraging some spammers. Our understanding is that many spammers try to add their junk to as many sites as possible, so small roadblocks should make the badge site less enticing. It's obviously possible to work around these measures; the goal is simply to make spamming not worth the effort. We'll continue to remove spam, too.
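
To make the timing concrete, here is a minimal sketch of the kind of time-gate checks involved. It's illustrative Python, not the badge application's actual (Rails) code, and the function names are made up; only the delay values come from the description above.

    from datetime import datetime, timedelta, timezone

    ACTIVATION_EMAIL_DELAY = timedelta(minutes=5)       # delay before the activation email goes out
    LOGIN_DELAY_AFTER_ACTIVATION = timedelta(hours=1)   # anti-spam cooling-off period before first login

    def activation_email_send_time(account_created_at: datetime) -> datetime:
        """When the activation email should actually be sent."""
        return account_created_at + ACTIVATION_EMAIL_DELAY

    def may_log_in(activated_at: datetime, now: datetime | None = None) -> bool:
        """Local accounts may log in only after the cooling-off period has passed."""
        now = now or datetime.now(timezone.utc)
        return now - activated_at >= LOGIN_DELAY_AFTER_ACTIVATION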

--- David A. Wheeler

Projects that received badges (monthly summary)

badgeapp@...
 

This is an automated monthly status report of the best practices badge application covering the month 2020-01.

Here are some selected statistics for the most recent completed month, preceded by the same statistics for the end of the month before that.

Ending dates        2019-12-30   2020-01-30
Total Projects      2852         2908
Projects 25%+       1114         1139
Projects 50%+       924          944
Projects 75%+       748          769
Projects 90%+       562          583
Projects passing    389          404

Here are the projects that first achieved a passing badge in 2020-01:

  1. Choria.io at 2020-01-04 20:53:40 UTC
  2. KWasm at 2020-01-05 03:12:57 UTC
  3. milvus at 2020-01-07 10:22:53 UTC
  4. platformlabeler-plugin at 2020-01-07 22:36:34 UTC
  5. Meshery at 2020-01-08 16:33:10 UTC
  6. hammurabi at 2020-01-16 13:29:50 UTC
  7. e3-core at 2020-01-18 19:53:50 UTC
  8. batect at 2020-01-19 02:38:05 UTC
  9. pastebin.run at 2020-01-19 13:32:18 UTC
  10. warp at 2020-01-19 17:21:43 UTC
  11. prometheus-swarm-discovery at 2020-01-22 01:05:29 UTC
  12. stdgpu at 2020-01-25 13:01:35 UTC
  13. thanos at 2020-01-27 21:07:27 UTC
  14. php-legal-licenses at 2020-01-29 22:26:46 UTC

We congratulate them all!

Do you know a project that doesn't have a badge yet? Please suggest to them that they get a badge now!

Re: Need some advice addressing "unfixable" publicly known vulnerabilities

David A. Wheeler
 

Kevin Wall:

Most Software Compositional Analysis tools / services (e.g., OWASP Dependency Check, BlackDuck, SourceClear, etc.) also flag ESAPI as being vulnerable to CVE-2019-17571 because it uses log4j 1.2.17. However, the ESAPI development team has examined this CVE and the ESAPI source code and does not believe that this CVE is exploitable in the manner that ESAPI is using it, because we do not use the affected vulnerable class nor any log4j 1.x class that invokes the vulnerable class. ...
Our problem is that many Software Composition Analysis (SCA) tools and services now flag ESAPI as being vulnerable to CVE-2019-17571.
For the *badging* application this shouldn't be a big problem. I believe the criteria always talk about "exploitable" vulnerabilities of certain kinds as being unacceptable, not just vulnerabilities. (I think "vulnerability" always implies "exploitable" anyway, but saying things that way makes it clear.) Ignoring the distinction between vulnerabilities *in* the code & vulnerabilities in the code dependencies, let's look at criterion "static_analysis_fixed" as an example:
All medium and higher severity exploitable vulnerabilities discovered with static code analysis MUST be fixed in a timely way after they are confirmed.
Notice that this criterion focuses only on *exploitable* vulnerabilities. If a tool (like a static analysis tool) finds a purported vulnerability, but it's not actually exploitable, that does *NOT* count as a true (exploitable) vulnerability. Nor should it. No tool is a substitute for thinking; tools are good at warning people about *possible* problems.

Even though we believe strongly that ESAPI's use of log4j 1 does not expose our users to this vulnerability, that belief--even if released as some official statement (which we are considering)--is unlikely to help all that much. For some more details about some of these concerns, please see https://groups.google.com/a/owasp.org/forum/#!topic/esapi-project-users/XxKBjj3HuSw. (And yes, I know I forgot to erase the SMTP headers of the original poster's email, resulting in his privacy not being maintained; I've already apologized to him.)
So as ESAPI developers, what are our options here? I could write up an official security bulletin that describes how we've analyzed the CVE in question and why we don't believe it is exploitable in the way that ESAPI uses log4j, as well as suggesting alternatives such as SLF4J with either log4j 2 or logback. We could build a "kill switch" that could be set in the ESAPI.properties file and would flat-out disallow ESAPI from using log4j 1.x (discussed in the Google Groups URL above). But I'm not sure any of those would be useful or helpful to the masses, which is why I am coming to my security-conscious comrades and hoping that one of you will think of something that I haven't. We do plan on addressing this in our next release notes and, as you can see, have already mentioned it on our Google Groups ESAPI Users list.
You should *definitely* do an official write-up. Publicly explain why it's *not* a problem in your case. I understand your skepticism, but it's still the right thing to do.

Looking at it from the other direction, it's perfectly reasonable for the SCA tools to flag ESAPI as vulnerable as long as they lack any other evidence. You need to provide the SCA suppliers with evidence that this should be squelched.

What probably needs to happen, long-term, is for there to be a standard way to report "no, we're not vulnerable in this case". Then the SCA vendors could use that information in their tools. Here's a rough idea that I've thought about for 5 seconds (so no guarantees this is a *good* idea): there could be a standard directory name like "not-vulnerabilities". Its contents would be files named after the vulnerability id, with a file-format extension, e.g., CVE-2019-17571.md for a Markdown file. Each file would include a justification of WHY the default configuration (at least) is not vulnerable to that CVE, which may include inherited CVEs.
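
To illustrate how an SCA tool might consume such a convention, here is a rough Python sketch. The directory name and lookup logic are hypothetical (they simply follow the idea above), not any existing standard or tool API.

    from pathlib import Path

    def documented_as_not_vulnerable(repo_root: Path, vuln_id: str) -> bool:
        """Return True if the project documents vuln_id (e.g. "CVE-2019-17571")
        as a non-vulnerability under the proposed not-vulnerabilities/ convention."""
        directory = repo_root / "not-vulnerabilities"
        if not directory.is_dir():
            return False
        # Any documented extension is accepted, e.g. CVE-2019-17571.md for Markdown.
        return any(f.is_file() and f.stem == vuln_id for f in directory.iterdir())

An SCA tool could then downgrade or annotate its warning for that CVE while still pointing users at the project's written justification.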


--- David A. Wheeler

Re: Need some advice addressing "unfixable" publicly known vulnerabilities

Hanno Böck
 

Hi,

Unfortunately I don't have a really good answer for your problem, but I
thought it might be interesting that I recently looked into a very
similar issue: bundled jquery.

Plenty of applications bundle either jquery 1 or jquery 2, including
major applications like wordpress. Those versions are unsupported, but
jquery 3 introduces breaking changes, so updating isn't easy; with a
vast plugin ecosystem like wordpress it becomes almost impossible.

There are a couple of obscure CVEs in these versions that, from my lay
understanding, matter only in very specific circumstances. But they are
there, and tools may flag them. I'm actually developing a security tool
myself that is somewhat affected by this (freewvs, optional -3
parameter [1]), and I don't really know how best to handle it.

An easy way out would be for jquery to provide security-only updates
for their old branches, but they don't want to do that [2], and it also
seems one of the issues can't be fixed without breaking things.


[1] https://freewvs.schokokeks.org/
[2] https://github.com/jquery/jquery/issues/4559
--
Hanno Böck
https://hboeck.de/

Need some advice addressing "unfixable" publicly known vulnerabilities

Kevin W. Wall
 

CII Badging community,

I just updated the ESAPI project on the CII Badges site to account for a newly discovered CVE. Specifically, I added this verbiage:

Most Software Compositional Analysis tools / services (e.g., OWASP Dependency Check, BlackDuck, SourceClear, etc.) also flag ESAPI as being vulnerable to CVE-2019-17571 because it uses log4j 1.2.17. However, the ESAPI development team has examined this CVE and the ESAPI source code and does not believe that this CVE is exploitable in the manner that ESAPI is using it, because we do not use the affected vulnerable class nor any log4j 1.x class that invokes the vulnerable class. We have deprecated the use of log4j 1.x in ESAPI and changed the default logger to JUL, but we are unable to remove this dependency without potentially breaking client code. Therefore we intend to follow our ESAPI deprecation policy and keep this dependency (even though it is past end-of-support) for up to 2 years or until the next ESAPI major release (which would be ESAPI 3.0). We do not feel this is an issue because SLF4J is also supported and can be used to provide similar functionality.

However, we are in a bit of a pickle. The latest release, as well as all previous releases of ESAPI, used log4j 1.x as the default logger. As explained above, while we have recently deprecated this and replaced it with java.util.logging (JUL) as the default, there are thousands of clients out there still using ESAPI with log4j 1.x, which unfortunately means it would not be prudent to break client applications by completely removing log4j 1 support. (We also support log4j 2.x via SLF4J, but switching from log4j 1 to log4j 2 is not trivial and thus may not be an option for many of our users.)

Our problem is that many Software Composition Analysis (SCA) tools and services now flag ESAPI as being vulnerable to CVE-2019-17571. Even though we believe strongly that ESAPI's use of log4j 1 does not expose our users to this vulnerability, that belief--even if released as some official statement (which we are considering)--is unlikely to help all that much. For some more details about some of these concerns, please see https://groups.google.com/a/owasp.org/forum/#!topic/esapi-project-users/XxKBjj3HuSw. (And yes, I know I forgot to erase the SMTP headers of the original poster's email, resulting in his privacy not being maintained; I've already apologized to him.)

Anyhow, my concern--because I have observed it first-hand within my company--is that companies will no longer permit their projects to continue using ESAPI and will make them migrate to something else, which is all well and good when their ESAPI dependencies are few and suitable replacements exist. (I even describe potential replacements here: https://www.owasp.org/index.php/Category:OWASP_Enterprise_Security_API#tab=Should_I_use_ESAPI_3F) But for some things in ESAPI there are no ready-made replacements (e.g., ESAPI Validators, ESAPI safe logging), and for many others, their ESAPI use is extensive, so migrating to something else would be expensive.

So I am looking for suggestions of what I can do to relieve the fears about "Using Components with Known Vulnerabilities" when it comes to ESAPI. The SCA subscription services carry a lot of weight within companies, and the FUD factor often takes over, especially when companies are spending big bucks on those SCA services. I've seen it first-hand.

The dilemma we have is that we can't (well, won't; it is simply not good policy if you are an SDK provider) just immediately remove support for a potentially / presumably vulnerable dependency when that would break the applications of potentially thousands of clients. (And in an ironic, eat-your-own-dogfood sort of way, ESAPI logging is an integral core component of ESAPI, so you are using it no matter which other ESAPI component / feature you are using.)

So as ESAPI developers, what are our options here? I could write up an official security bulletin that describes how we've analyzed the CVE in question and why we don't believe it is exploitable in the way that ESAPI uses log4j, as well as suggesting alternatives such as SLF4J with either log4j 2 or logback. We could build a "kill switch" that could be set in the ESAPI.properties file and would flat-out disallow ESAPI from using log4j 1.x (discussed in the Google Groups URL above). But I'm not sure any of those would be useful or helpful to the masses, which is why I am coming to my security-conscious comrades and hoping that one of you will think of something that I haven't. We do plan on addressing this in our next release notes and, as you can see, have already mentioned it on our Google Groups ESAPI Users list.

So if you have some ideas, please let me hear them.

Thanks,
-kevin
--
Blog: http://off-the-wall-security.blogspot.com/    | Twitter: @KevinWWall
NSA: All your crypto bit are belong to us.

Re: Did logins change because of the CII-Badges new spam defenses?

David A. Wheeler
 

Kevin W. Wall:
Does the username / password for
https://bestpractices.coreinfrastructure.org/
now require it to be done via GitHub? I just tried to log in using my Gmail account (which was how I registered) for the first time in a long time and I am getting "Invalid Username / Password". I keep all my passwords in a password manager, so I am 99.9% sure that the password is correct unless I missed / forgot an email about all the passwords being reset because of a breach or whatever.
No, absolutely not. I just logged in with a local account & it worked fine.

I figured, NBD, I would use the "Forgot Password" flow, but when I tried that with my email address, I get the error message:
That's mysterious. I just tried the "Forgot Password" flow with a local account, and it worked fine.

Sorry, can only reset the password for a custom (local) user
and I'm not even quite sure what that "custom (local) user" even means. (I mean, if it won't take an email address, then why prompt for Email?)
That shouldn't happen. But since it *is* happening, we need to track that down and fix it.

Do you have *both* a GitHub account *and* a local account on the badge site? We *should* handle that correctly, but I don't remember if we test that.

I suggest that we have the rest of the discussion via direct email, off the mailing list. I doubt everyone wants to hear about our debugging session :-).

--- David A. Wheeler

Did logins change because of the CII-Badges new spam defenses?

Kevin W. Wall
 

David, et al,

Does the username / password for
https://bestpractices.coreinfrastructure.org/
now require it to be done via GitHub? I just tried to log in using my Gmail account (which was how I registered) for the first time in a long time and I am getting "Invalid Username / Password". I keep all my passwords in a password manager, so I am 99.9% sure that the password is correct unless I missed / forgot an email about all the passwords being reset because of a breach or whatever.

I figured, NBD, I would use the "Forgot Password" flow, but when I tried that with my email address, I get the error message:

Sorry, can only reset the password for a custom (local) user

and I'm not even quite sure what that "custom (local) user" even means. (I mean, if it won't take an email address, then why prompt for Email?)

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/    | Twitter: @KevinWWall
NSA: All your crypto bit are belong to us.

Projects totals for last month impacted by spam countering efforts

David A. Wheeler
 

Some of you may have noticed that the “Total Projects” went down last month (2855 to 2852), but the number of projects at 25%+ went up (1089 to 1114).

 

The explanation is that we’ve been working to delete spam projects, and in the process found some additional spam projects.  My special thanks to Jason Dossett for cleaning out a number of spam projects.

 

The spam projects are generally blatant search engine optimization (SEO) scams that create links to websites that have nothing to do with FLOSS. We already mark the links as user-generated inside the HTML, so the spam won’t help SEO in any significant way. As far as we’re concerned, it’s just joyless work for us.
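
For what it's worth, the effect of that markup is that user-provided hyperlinks end up carrying rel="nofollow ugc", which tells search engines not to pass ranking credit. Here is a rough, illustrative Python sketch of the idea; it is not the badge application's actual (Rails) sanitizer code, just a demonstration of the technique.

    import re

    def mark_user_links(html: str) -> str:
        """Add rel="nofollow ugc" to anchor tags in user-supplied HTML that
        don't already declare a rel attribute. Illustrative regex approach only;
        a real implementation would use a proper HTML sanitizer."""
        return re.sub(r"<a\b(?![^>]*\brel=)", '<a rel="nofollow ugc"', html)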

 

If anyone has other ideas on countering spammers, that’d be great. Jason had an interesting idea of having a 1-hour cooling-off period between creating a local account & creating a project entry. Spam entries generally seem to be made with local accounts (perhaps the spammers don’t know what GitHub even is). That has its pros & cons, of course. It’s hard to make spammers’ lives hard without making legitimate users’ lives hard; ideas are very much welcome.

 

--- David A. Wheeler

Projects that received badges (monthly summary)

badgeapp@...
 

This is an automated monthly status report of the best practices badge application covering the month 2019-12.

Here are some selected statistics for the most recent completed month, preceded by the same statistics for the end of the month before that.

Ending dates        2019-11-29   2019-12-30
Total Projects      2855         2852
Projects 25%+       1089         1114
Projects 50%+       902          924
Projects 75%+       732          748
Projects 90%+       551          562
Projects passing    382          389

Here are the projects that first achieved a passing badge in 2019-12:

  1. ulfius at 2019-12-06 00:30:06 UTC
  2. DBItest at 2019-12-15 19:50:16 UTC
  3. buckinbuds at 2019-12-18 04:31:03 UTC
  4. FastAD at 2019-12-24 21:05:39 UTC
  5. OpenLearnr at 2019-12-24 21:15:40 UTC
  6. heddlr at 2019-12-25 05:05:06 UTC

We congratulate them all!

Do you know a project that doesn't have a badge yet? Please suggest to them that they get a badge now!

Re: Suggestions on countering spammers?

David A. Wheeler
 

Mark Rader:
What I’m thinking is: when they create a project or account, automatically send them an email with a passcode for verification, so you do it for each new project.
I don't think that will be enough of a deterrent. The spammers are already willing to do an email confirmation.

One possibility would be to *require* a repo URL, and then require that it really be a public repo. In many cases it's easy to detect whether a repo really is a repo (e.g., allow certain patterns of GitHub/GitLab URLs, and if that doesn't work, load that one page & see if it's a repo of a recognized version control system). But that could cause more problems than it solves.
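
For illustration, here is a rough Python sketch of that heuristic. The URL patterns and content hints are just examples of the approach, not the badge application's actual code.

    import re
    import urllib.request

    KNOWN_REPO_URL = re.compile(r"^https://(github\.com|gitlab\.com)/[\w.-]+/[\w.-]+/?$")

    def looks_like_public_repo(url: str) -> bool:
        """Accept common GitHub/GitLab repository URL shapes outright; otherwise
        fetch the page and look for hints of a recognized version control system."""
        if KNOWN_REPO_URL.match(url):
            return True
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                page = response.read(200_000).decode("utf-8", errors="replace")
        except (OSError, ValueError):
            return False
        # Extremely crude content check; a real implementation would be far more careful.
        return any(hint in page for hint in ("git clone", "hg clone", "svn checkout"))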

--- David A. Wheeler

Re: Suggestions on countering spammers?

Mark Rader
 

What I’m thinking is: when they create a project or account, automatically send them an email with a passcode for verification, so you do it for each new project.

On Dec 20, 2019, at 2:14 PM, Wheeler, David A <@dwheeler> wrote:

Mark Rader:
Require them to validate their email address.
Good idea, but for local accounts we already do that, and I believe GitHub also requires email validation for their accounts.

So we're going to have to go beyond that.

--- David A. Wheeler

Re: Suggestions on countering spammers?

Trevor Vaughan
 

Pretty sure if you report them to GitHub they'll get banned.


On Fri, Dec 20, 2019 at 3:14 PM David A. Wheeler <dwheeler@...> wrote:
Mark Rader:
> Require them to validate their email address.

Good idea, but for local accounts we already do that, and I believe GitHub also requires email validation for their accounts.

So we're going to have to go beyond that.

--- David A. Wheeler





--
Trevor Vaughan
Vice President, Onyx Point, Inc
(410) 541-6699 x788

-- This account not approved for unencrypted proprietary information --

Re: Suggestions on countering spammers?

David A. Wheeler
 

Mark Rader:
Require them to validate their email address.
Good idea, but for local accounts we already do that, and I believe GitHub also requires email validation for their accounts.

So we're going to have to go beyond that.

--- David A. Wheeler

Re: Suggestions on countering spammers?

Mark Rader
 

Require them to validate their email address.

On Dec 20, 2019, at 11:13 AM, David A. Wheeler <@dwheeler> wrote:

Sadly, spammers have started to add nonsense "projects" to the CII Best Practices site
at a higher rate than before. It appears to be all SEO-related fraud.
I suppose that was inevitable, and I guess it's good that we're "worth" their time.

If anyone has ideas on how to automatically help counter spammers, please
let us know via reply to this mailing list, private email, or this issue:
https://github.com/coreinfrastructure/best-practices-badge/issues/1377

Thanks!

--- David A. Wheeler



Suggestions on countering spammers?

David A. Wheeler
 

Sadly, spammers have started to add nonsense "projects" to the CII Best Practices site
at a higher rate than before. It appears to be all SEO-related fraud.
I suppose that was inevitable, and I guess it's good that we're "worth" their time.

If anyone has ideas on how to automatically help counter spammers, please
let us know via reply to this mailing list, private email, or this issue:
https://github.com/coreinfrastructure/best-practices-badge/issues/1377

Thanks!

--- David A. Wheeler

Projects that received badges (monthly summary)

badgeapp@...
 

This is an automated monthly status report of the best practices badge application covering the month 2019-11.

Here are some selected statistics for the most recent completed month, preceded by the same statistics for the end of the month before that.

Ending dates        2019-10-30   2019-11-29
Total Projects      2765         2855
Projects 25%+       1064         1089
Projects 50%+       882          902
Projects 75%+       717          732
Projects 90%+       538          551
Projects passing    372          382

Here are the projects that first achieved a passing badge in 2019-11:

  1. wordpress-sqrl-login at 2019-11-01 09:37:16 UTC
  2. SQLite-simplecrawler-queue at 2019-11-08 10:53:41 UTC
  3. cloud-mta-build-tool at 2019-11-13 08:51:16 UTC
  4. DymamicAuthProviders at 2019-11-20 10:26:43 UTC
  5. krowdy-ui at 2019-11-26 21:10:22 UTC
  6. .dotfiles at 2019-11-28 10:21:51 UTC

We congratulate them all!

Do you know a project that doesn't have a badge yet? Please suggest to them that they get a badge now!

Projects that received badges (monthly summary)

badgeapp@...
 

This is an automated monthly status report of the best practices badge application covering the month 2019-10.

Here are some selected statistics for the most recent completed month, preceded by the same statistics for the end of the month before that.

Ending dates        2019-09-29   2019-10-30
Total Projects      2665         2765
Projects 25%+       1024         1064
Projects 50%+       849          882
Projects 75%+       692          717
Projects 90%+       516          538
Projects passing    356          372

Here are the projects that first achieved a passing badge in 2019-10:

  1. standup-raven at 2019-10-05 05:27:49 UTC
  2. Group-Office at 2019-10-07 11:54:32 UTC
  3. tqdm at 2019-10-11 15:52:20 UTC
  4. boxrec at 2019-10-14 15:40:37 UTC
  5. PSP at 2019-10-15 12:31:17 UTC
  6. crawl at 2019-10-15 14:14:41 UTC
  7. setup-php at 2019-10-16 13:48:06 UTC
  8. Secure Production Identity Framework for Everyone Runtime Enviroment at 2019-10-17 18:10:47 UTC
  9. augur at 2019-10-23 23:22:27 UTC
  10. cyclone at 2019-10-26 01:37:47 UTC
  11. formatter at 2019-10-27 09:35:20 UTC
  12. FireO at 2019-10-28 13:41:17 UTC
  13. spice at 2019-10-28 23:05:16 UTC

We congratulate them all!

Do you know a project that doesn't have a badge yet? Please suggest to them that they get a badge now!

Re: Proposal: Use CVSS version 3, not version 2, in CII Best Practices measures

David A. Wheeler
 

Here's a pull request that tries to resolve the CVSS issues:
https://github.com/coreinfrastructure/best-practices-badge/pull/1367

It's more text than I'd like, but my goal was to be 100% clear.
For example, instead of "medium or high" it was changed to
"medium or higher" (because we REALLY want critical vulnerabilities fixed!).
Below is the (simplified) diff of criterion vulnerabilities_fixed_60_days.

My goal was to be future-proof and precise.
CVSS is not a perfect system, but we just want a way to let projects
lower the priority of low-importance vulnerabilities, and for that task I
think it does okay.

Comments welcome.

--- David A. Wheeler

=============================================

There MUST be no unpatched vulnerabilities of medium
- or high severity that have been publicly known for more
+ or higher severity that have been publicly known for more
than 60 days.

(In details)
- A vulnerability
- is medium to high severity if its
- <a href="https://nvd.nist.gov/cvss.cfm">CVSS
- 2.0</a> base score is 4 or higher.
+ A vulnerability is considered medium or higher severity if its <a
+ href="https://www.first.org/cvss/"
+ >Common Vulnerability Scoring System (CVSS)</a>
+ base qualitative score is medium or higher.
+ In CVSS versions 2.0 through 3.1, this is
+ equivalent to a CVSS score of 4.0 or higher.
+ Projects may use the CVSS score
+ as published in a widely-used vulnerability database (such as the
+ <a href="https://nvd.nist.gov">National Vulnerability Database</a>)
+ using the most-recent version of CVSS reported in that database.
+ Projects may instead calculate the severity
+ themselves using the latest version of
+ <a href="https://www.first.org/cvss/">CVSS</a> at the time of
+ the vulnerability disclosure,
+ if the calculation inputs are publicly revealed once
+ the vulnerability is publicly known.
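
For readers who want the threshold in concrete terms, here is a small illustrative Python sketch of the CVSS v3.x qualitative rating scale and the criteria's "medium or higher" cutoff (base score 4.0 or higher, which holds across CVSS 2.0 through 3.1). The function names are mine, not part of the criteria text.

    def cvss_qualitative_severity(base_score: float) -> str:
        """Map a CVSS v3.x base score to its qualitative severity rating."""
        if base_score == 0.0:
            return "None"
        if base_score < 4.0:
            return "Low"
        if base_score < 7.0:
            return "Medium"
        if base_score < 9.0:
            return "High"
        return "Critical"

    def counts_as_medium_or_higher(base_score: float) -> bool:
        """The badge criteria's threshold: base score of 4.0 or higher."""
        return base_score >= 4.0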

Re: Proposal: Use CVSS version 3, not version 2, in CII Best Practices measures

David A. Wheeler
 

Kevin Wall:
I have no objections, but how will moving from CVSSv2 to CVSSv3 affect things if NVD only has CVSSv2 scores available for the particular CVE? Would there be an expectation that we would need to deal with MITRE or maybe NIST to get them to assign a new CVSSv3 score? Because I don't even want to go there.
Good point. I think that shouldn't be required, & it wasn't intended. I think we can solve that.

But first, I think I'm required to note that anyone can calculate a CVSS score.
NVD has a little calculator: https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator
FIRST does too: https://www.first.org/cvss/calculator/3.0 and https://www.first.org/cvss/calculator/3.1
That said, there's a judgement call on a few questions like "privileges required" that are used to do the calculation. In most cases that won't matter, but I imagine people would rather get some "official" ruling on them. There's also the issue that people want to just use someone's calculation instead of doing it themselves; nobody wants to fight over that stuff.

Your question about versioning & clarity also raises a few related issues (which I think can also be resolved):
1. I posted about "version 3", but I really meant the "latest version in the 3 series" which is actually 3.1. We really don't want to be changing the text every time a new CVSS edition comes out. Using "most recent published" should resolve it.
2. When I say “CVSS scores” I really just mean the *base* score. NVD does the same thing; they only use base scores (see https://nvd.nist.gov/vuln-metrics/cvss ). The “temporal score” varies by time, and the “environmental score” varies by environment, so neither is useful for our purposes. Most people just look at the NVD score (and thus "do what was intended" anyway), but that should be clearer than it currently is.

The simple solution is to let people use the vulnerability's base CVSS value as (1) published in a widely-used vulnerability database with the most-recent version of CVSS for that vulnerability, or (2) calculated themselves using the current version of CVSS (with the calculation publicly revealed if the vulnerability is publicly known). That means projects might not always use the current version of CVSS, but that's okay. Over time the old values will become irrelevant (through aging out), without requiring a lot of unnecessary work.

CVSS isn't a be-all/end-all. I think of it more as a simple heuristic. Ideally projects would fix all vulnerabilities, but some "vulnerabilities" are very low risk, and in some cases it's debatable whether they even *are* vulnerabilities. We're simply using CVSS as a mechanism to let projects focus on the "vulnerabilities that are more likely to matter".

--- David A. Wheeler

Re: Proposal: Use CVSS version 3, not version 2, in CII Best Practices measures

Kevin W. Wall
 

I have no objections, but how will moving from CVSSv2 to CVSSv3 affect things if NVD only has CVSSv2 scores available for the particular CVE? Would there be an expectation that we would need to deal with MITRE or maybe NIST to get them to assign a new CVSSv3 score? Because I don't even want to go there.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/  |  Twitter:  @KevinWWall
NSA: All your crypto bit are belong to us.

On Mon, Nov 4, 2019, 09:04 David A. Wheeler <dwheeler@...> wrote:
A very few of our criteria mention CVSS (a method for estimating the risk from a vulnerability). For example, [dynamic_analysis_fixed] says this:
CRITERION: "All medium and high severity exploitable vulnerabilities discovered with dynamic code analysis MUST be fixed in a timely way after they are confirmed."
DETAILS: A vulnerability is medium to high severity if its CVSS 2.0 base score is 4 or higher. If you are not running dynamic code analysis and thus have not found any vulnerabilities in this way, choose "not applicable" (N/A).

I'd like to update from CVSS version 2 to version 3. CVSS version 3 has been around for a while, but we didn't use it because the NIST National Vulnerability Database (NVD) only provided version 2 data, and not version 3 data. However, NIST has since added support for version 3 & has supported it for a while. More info:
https://nvd.nist.gov/vuln-metrics/cvss

This should have no effect in practice. CVSS version 3 rates some vulnerabilities as riskier than version 2 did (in particular, Heartbleed gets a higher risk score under version 3 compared to version 2). That said, if a project has so many vulnerabilities that the CVSS version change matters, that's a problem in itself.

If you think that's a bad idea, please let us know.  This is already an issue on GitHub:
https://github.com/coreinfrastructure/best-practices-badge/issues/1076

Thanks!

--- David A. Wheeler