C++ static analysis tools for CII badge

Daniel Heckenberg
 

Hello!

Are there any existing resources that demonstrate an automated static analysis of C++ code for CII badge requirements?  I'm hoping for something like a specific set of clang-tidy checks that covers the CVSS v2 medium and high severity vulnerabilities.  

Background:
I'm the current chair of the TAC for the recently formed Academy Software Foundation 
https://www.aswf.io/  
We're hoping to assist our projects to achieve CII badges by providing examples of static analysis for C++ projects that can be incorporated in normal build processes, as well as our CI systems.  

Thanks!
Daniel

Kevin W. Wall
 

On Wed, Jan 9, 2019 at 3:24 PM Daniel Heckenberg
<@dheck> wrote:

> Hello!
>
> Are there any existing resources that demonstrate an automated static analysis
> of C++ code for CII badge requirements? I'm hoping for something like a
> specific set of clang-tidy checks that covers the CVSS v2 medium and high
> severity vulnerabilities.
>
> Background:
> I'm the current chair of the TAC for the recently formed Academy Software Foundation
> https://www.aswf.io/
> We're hoping to assist our projects to achieve CII badges by providing
> examples of static analysis for C++ projects that can be incorporated in
> normal build processes, as well as our CI systems.
Daniel,

The DHS SWAMP (https://www.dhs.gov/science-and-technology/csd-swamp)
might have some things. I recall talking to Kevin Greene (BCC'd) at an
AppSec USA conference maybe 3 or 4 years ago and I seem to recall that
they had some stuff for C and C++. Not sure if / how well it supports
Continuous Integration though. (Also, I'm not sure that Kevin is still
at DHS, but if he is, perhaps he will reply to you.)

On the commercial side, there are things like Micro Focus' Fortify,
which is a SAST tool that does a pretty good job identifying lots of
vulnerabilities in both C and C++. It's a mature product and I have
used it for some sizeable (5M LOC) C++ projects.

Hope that helps.

-kevin
--
Blog: http://off-the-wall-security.blogspot.com/ | Twitter: @KevinWWall
NSA: All your crypto bit are belong to us.

Daniel Stenberg
 

On Wed, 9 Jan 2019, Daniel Heckenberg wrote:
> Are there any existing resources that demonstrate an automated static
> analysis of C++ code for CII badge requirements? I'm hoping for something
> like a specific set of clang-tidy checks that covers the CVSS v2 medium and
> high severity vulnerabilities.

In the curl project (which is C, not C++) we run clang-tidy on every commit/PR using travis [1] (search for "tidy") and analyze the code using lgtm [2]. That's pretty easy to set up.
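As a rough sketch of the shape of such a job (this is not our actual configuration - see [1] for the real thing - and the "tidy" make target here is assumed to wrap the project's clang-tidy invocation):

    # Sketch of a Travis CI job that runs clang-tidy on every commit/PR.
    # Not curl's actual .travis.yml; the "tidy" make target is assumed.
    language: c
    matrix:
      include:
        - env: T=tidy
          compiler: clang
    script:
      - ./configure
      - if [ "$T" = "tidy" ]; then make tidy; fi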

It can be noted that Coverity is, in my experience, the undisputed leader among static code analyzers for C/C++ - but it isn't free. They offer a gratis scan-as-a-service for open source projects, but that isn't suitable for on-every-commit runs, and as of a few days ago the service has "unexpectedly ceased operations", so we'll have to see where that goes in the future... It would be a hard blow to open source everywhere if it goes away.

[1] = https://github.com/curl/curl/blob/master/.travis.yml
[2] = https://github.com/curl/curl/blob/master/.lgtm.yml

--

/ daniel.haxx.se

Daniel Heckenberg
 

Thanks for the informative replies, Daniel and Kevin.

I'd also seen the current outage with Coverity -- hopefully that will be resolved soon.
lgtm looks appealing and may be suitable for our projects.  

A very specific CII badge requirement is the detection and timely remediation of CVSS v2 medium and high severity issues. Coverity seems to have a report generator that handles this, but I haven't seen any direct or automatic way to map other C/C++ analysis tool outputs to CVSS scores. How is this usually done?

Thanks,
Daniel

Daniel Stenberg
 

On Thu, 10 Jan 2019, Daniel Heckenberg wrote:
> A very specific CII badge requirement is the detection and timely remediation
> of CVSS v2 medium and high severity issues. Coverity seems to have a report
> generator that handles this, but I haven't seen any direct or automatic way
> to map other C/C++ analysis tool outputs to CVSS scores. How is this usually
> done?

I don't know about "usually", but I can tell you how we do it in curl (which incidentally also matches what I see in several other C/C++ projects).

In the curl project we run several static code analyzers, fuzzers, etc. on the code *before release* and we fix the issues we find, meaning that what these tools find typically never results in any CVSS scores at all. We fix those problems before release.

The security flaws that do get reported are thus typically found by others (or by tests that run outside of our CI infra) or by our own developers, in released code. They're not issued automatically by anyone; they're received and dealt with by humans.

--

/ daniel.haxx.se

David A. Wheeler
 

On Thu, 10 Jan 2019, Daniel Heckenberg wrote:
> A very specific CII badge requirement is the detection and timely remediation
> of CVSS v2 medium and high severity issues. Coverity seems to have a report
> generator that handles this, but I haven't seen any direct or automatic way
> to map other C/C++ analysis tool outputs to CVSS scores. How is this usually
> done?

Daniel Stenberg:
> I don't know about "usually", but I can tell you how we do it in curl (which
> incidentally also matches what I see in several other C/C++ projects).
>
> In the curl project we run several static code analyzers, fuzzers, etc. on
> the code *before release* and we fix the issues we find, meaning that what
> these tools find typically never results in any CVSS scores at all. We fix
> those problems before release.
>
> The security flaws that do get reported are thus typically found by others
> (or by tests that run outside of our CI infra) or by our own developers, in
> released code. They're not issued automatically by anyone; they're received
> and dealt with by humans.

I think that is the usual case. Use tools & tests so potential problems can be found & fixed before release. Since they aren't in a release, those potential problems do not normally get CVSS scores. Nowadays people often make continuous changes to a git master branch, and then use various ways to release it (put it in a package manager distro, tag it, and/or merge it into a production branch).

If a vulnerability is found in a *released* version of the software, organizations like NIST typically do the CVSS scoring for you. You can also calculate the CVSS score yourself; the CVSS base score is easy to figure out. The reason the criteria use the CVSS score is to ensure that at least the *important* vulnerabilities get fixed relatively quickly once they are known publicly, to reduce the risk to people using that software.
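
To illustrate (a generic CVSS v2 worked example, not taken from any ASWF project): the common "network-exploitable, partial impact" vector AV:N/AC:L/Au:N/C:P/I:P/A:P scores as follows under the standard v2 base equations. NVD's usual v2 severity ranges are Low 0.0-3.9, Medium 4.0-6.9, High 7.0-10.0.

    Impact         = 10.41 * (1 - (1-0.275)^3)                    ~= 6.44
    Exploitability = 20 * 1.0 (AV:N) * 0.71 (AC:L) * 0.704 (Au:N) ~= 10.0
    BaseScore      = ((0.6 * 6.44) + (0.4 * 10.0) - 1.5) * 1.176  ~= 7.5

(The final 1.176 factor, f(Impact), is 0 when Impact is 0 and 1.176 otherwise.) So this example lands in the "high" band the badge criteria care about.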

One point that may not be obvious: tool findings are not necessarily vulnerabilities. Many tools are based on heuristics, and they do not "know" the larger environment & expectations.

Hope that helps!

--- David A. Wheeler

Daniel Heckenberg
 

Thanks again for the very helpful replies!

Daniel and David, you've both clarified the essential point: analysis tools detect errors or potentially error-prone code which only become identified vulnerabilities in larger contexts.  Most of the projects in the ASWF domain are used for making images, typically in environments that are not open to general network access or arbitrary inputs from untrusted users.  As far as I'm aware, there are no existing published vulnerabilities (e.g. CVSS scored examples) from these projects.  This is partly why our community seems to be struggling a little to know how to adhere to the spirit of the CII badging requirements.

So... we can't map CVSS medium and high to any specific set of analysis checks or even particular coding errors.  But we'd still like to be able to provide some good-practice examples of specific analysis configurations for our projects to follow. 

Looking at the curl example, and specifically the clang-tidy checks:
https://github.com/curl/curl/blob/52e27fe9c6421d36337c0b69df6ca2b3b2d72613/src/Makefile.am#L145

This appears to be just running the default set of clang-tidy checks with a few globally disabled to avoid false positives.  Similarly, the lgtm config seems to be just the default.  These would be very easy to add for our projects.  Is this a reasonable setup to guide our community?

Thanks!
Daniel

David A. Wheeler
 

Daniel Heckenberg:

> Daniel and David, you've both clarified the essential point: analysis tools detect errors or potentially error-prone code which only become identified vulnerabilities in larger contexts.  Most of the projects in the ASWF domain are used for making images, typically in environments that are not open to general network access or arbitrary inputs from untrusted users.  As far as I'm aware, there are no existing published vulnerabilities (e.g. CVSS scored examples) from these projects.  This is partly why our community seems to be struggling a little to know how to adhere to the spirit of the CII badging requirements.

Yes.  I hope it’s clear that if you don’t have any published vulnerabilities, fixing them takes zero time :-).

> So... we can't map CVSS medium and high to any specific set of analysis checks or even particular coding errors.

Right.  Whether or not a particular coding error is a medium or high vulnerability is very dependent on the intended use of the component, not just the kind of error it is.

> But we'd still like to be able to provide some good-practice examples of specific analysis configurations for our projects to follow. 
> Looking at the curl example, and specifically the clang-tidy checks:
> https://github.com/curl/curl/blob/52e27fe9c6421d36337c0b69df6ca2b3b2d72613/src/Makefile.am#L145
> This appears to be just running the default set of clang-tidy checks with a few globally disabled to avoid false positives.  Similarly, the lgtm config seems to be just the default.  These would be very easy to add for our projects.  Is this a reasonable setup to guide our community?

Yes, that’d be just fine.  In the criterion “static_analysis” we even list some similar tools as examples (e.g., SpotBugs, FindBugs, lintr, and goodpractice).

It’s really hard to give specific guidance for checks.  Different languages are often best handled by different tools.  There’s also always a trade-off of how far to configure checkers: if you turn them up too much & too quickly you get flooded by reports.  What you should do is set up at least one checker, and then slowly increase the rigor it enforces.  Adding more checkers over time, and gradually increasing their pickiness, is far more practical than trying to turn on everything at once (unless you’re a brand new project).  The static analysis criterion is focused on making sure you’ve at least started down that path; once you have *some* tools in place, it’s a lot easier to gradually increase what they check.
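
For instance (a hypothetical starting point, not a recommendation for any particular project), a C++ project could begin with a small .clang-tidy file like the one below and ratchet it up over time; the check disabled here is just an example of a commonly noisy one.

    # Hypothetical starting .clang-tidy: default diagnostics plus the
    # clang static analyzer checks, with one noisy check disabled.
    Checks: >
      clang-diagnostic-*,
      clang-analyzer-*,
      -clang-analyzer-optin.performance.Padding
    # Once the report volume is manageable, promote the security-related
    # analyzer findings to hard errors:
    WarningsAsErrors: 'clang-analyzer-security.*'

The value is less in the particular checks than in having a configuration in the repository that CI runs on every change, so the set can grow deliberately.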

Criterion “static_analysis_common_vulnerabilities” SUGGESTs that at least one tool be used to look for common vulnerabilities. But this is SUGGESTed, not a MUST or SHOULD; there is a long list of reasons why it might not be worth it for your project.

Let me know if you have other questions!

--- David A. Wheeler