Re: Improvement coming: Tiered percentage display in BadgeApp projects display

Daniel Stenberg
 

On Thu, 14 Jun 2018, David A. Wheeler wrote:

> For example, Zephyr has completed passing, and is 93% to silver, so its tiered percentage will show as "193%". I don't know of anywhere else with this kind of measurement, but I think that provides a nice *short* but *useful* status display, and it *seems* relatively intuitive.
But if Zephyr is *also* at the same time 93% of the gold criteria it will still only show 193%, right? That might not be as intuitive...

I'm not objecting, just clarifying I guess.

--

/ daniel.haxx.se

Improvement coming: Tiered percentage display in BadgeApp projects display

David A. Wheeler
 

FYI: Up to now the best practices badge’s multi-project display hasn’t clearly shown progress beyond “passing” until the project actually gets silver or gold.  It’s also been hard to search or sort on (e.g., “who got silver?” or “who is at least 50% towards silver?”).  That’s unfortunate, because a number of projects have been making steady progress towards silver and gold.  We don’t expect *all* projects to get those levels, but it’d be nice to find out who is.  So we’re making a tweak to make this information easier to see and sort on.

 

To solve this, we’re switching the multi-project display so that it will display a “tiered percentage”.  In a tiered percentage 100% is passing, 200% is silver, and 300% is gold, and you then add the percentage points towards the next-highest badge you DON’T have.  For example, Zephyr has completed passing, and is 93% to silver, so its tiered percentage will show as “193%”.  I don’t know of anywhere else with this kind of measurement, but I think that provides a nice *short* but *useful* status display, and it *seems* relatively intuitive.  The corresponding JSON data about projects (provided by our REST interface) will also provide this info.
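The rule above can be sketched in a few lines. This is an illustrative reimplementation, not the BadgeApp's actual code, and the list-of-percentages input is an assumed representation of per-level progress:

```python
def tiered_percentage(badge_percentages: list[int]) -> int:
    """Compute the 'tiered percentage' described above.

    badge_percentages lists completion toward each badge level in order
    (passing, silver, gold), each 0-100. 100% = passing, 200% = silver,
    300% = gold; progress toward the first incomplete level is added on.
    Hypothetical sketch: the BadgeApp's actual fields and logic may differ.
    """
    total = 0
    for pct in badge_percentages:
        if pct >= 100:
            total += 100          # level complete: count the full tier
        else:
            return total + pct    # add partial progress toward next level
    return total

# Zephyr's situation from the example: passing complete, 93% toward silver
assert tiered_percentage([100, 93, 0]) == 193
```

Note this also reflects Daniel's observation in the reply above: progress toward gold is not counted until silver is complete, so a project at 93% of both silver and gold still shows 193%.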

We haven’t deployed it in production, but you can see what it looks like here:

  https://master.bestpractices.coreinfrastructure.org/en/projects?sort=tiered_percentage&sort_direction=desc

 

As always, comments/feedback welcome!

 

--- David A. Wheeler

 

Projects that received badges (monthly summary)

badgeapp@...
 

This is an automated monthly status report of the best practices badge application covering the month 2018-05.

Here are some selected statistics for most recent completed month, preceded by the same statistics for the end of the month before that.

Ending dates        2018-04-29   2018-05-30
Total Projects            1501         1557
Projects 25%+              544          574
Projects 50%+              463          486
Projects 75%+              370          389
Projects passing           168          178

Here are the projects that first achieved a passing badge in 2018-05:

  1. ONAP POLICY at 2018-05-02 14:24:07 UTC
  2. domain-expiry at 2018-05-05 19:26:20 UTC
  3. StackStorm at 2018-05-08 03:02:37 UTC
  4. Open Policy Agent at 2018-05-10 22:11:24 UTC
  5. Portal Platform at 2018-05-14 17:51:02 UTC
  6. recording at 2018-05-20 11:31:01 UTC
  7. cla-assistant at 2018-05-22 14:35:54 UTC
  8. telepresence at 2018-05-23 20:06:27 UTC
  9. DiscordCrypt at 2018-05-30 05:41:25 UTC

We congratulate them all!

Do you know a project that doesn't have a badge yet? Please suggest to them that they get a badge now!

Thoughts on creating a short intro/motivational video for the badging project?

David A. Wheeler
 

All:

John Mertic and Mark Hornick recently suggested to me that there should be a “short intro/motivational video” about the best practices badging project. Basically, in around 5-6 minutes (and certainly less than 10), explain basic issues like *why* an OSS project should try to get a badge. We already have some 30+ minute briefings about it, but for many that’s too long. I have a short demo video on the mechanics of getting a badge, but that’s about the mechanics, not motivation.

I like the idea, so I asked what such a video should cover. They provided me a quick starter list of potential questions to answer, which I’ve included below (my sincere thanks!).

What *other* questions or issues should a short intro/motivational video cover? Are there things we should skip? What should its title be? I was thinking something direct like, "Why your OSS project should get a CII Best Practices badge" - but other options are welcome.

The whole point of this project is to identify *reasonable* criteria that OSS projects *can* do and *help* them produce better & more secure software. The more projects that work for the badge, the more effective the badging project will be.

They also mentioned that it'd be useful to have separate language-specific "mechanical" videos. I already created the video "Quick demo on how to start getting a CII Best Practices badge" <https://www.youtube.com/watch?v=dhLYLpsvvc0>, but that uses Python - they thought it'd be useful to have other language-specific ones, where we discuss language-specific issues (if any). They had R in mind in particular. But note that these would be separate short videos from the "big picture" motivational video.

Any suggestions welcome. Thanks!!

--- David A. Wheeler



=== Some ideas of things to cover ===

• What's the problem CII addresses?
o Describe at a high level the typical problems in software quality, licensing, and security that companies face, which make the CII a valuable tool for industry and academia.
• What is the CII?
• What projects (types of projects) do or should use this?
o be sure to list R packages explicitly
• What are the CII statistics?
o how many projects are using it?
o time series of project growth
o types of projects breakdown, e.g., language?
o aggregates of levels achieved over time, passed, silver, gold
• For Open Source only? Can it be used by proprietary software too?
• Additional resources
o videos
o documentation
o websites

Some BadgeApp security/privacy improvements: Encrypting email addresses and restricting Gravatar

David A. Wheeler
 

We've recently made some security/privacy improvements to the BadgeApp, and I thought it'd be useful to mention here on the mailing list. In short, we've:
1. Started encrypting email addresses in the internal database. We already prevent users from seeing other users' email addresses; the idea of this change is to store the data encrypted at rest, as an additional safeguard. They're encrypted using 'aes-256-gcm' and have a blind index using PBKDF2-HMAC-SHA256, both using strong cryptographically random 256-bit keys. These algorithms and keys should provide PLENTY of protection.
2. *Only* use Gravatar URLs when we've determined that there's an active Gravatar URL. This is so that we don't even leak the cryptographic hash of an email address for a local account unless the user's clearly approved it.

I don't think these are necessary for privacy or GDPR, but they are decent hardening measures because we try to provide as much security and privacy as we reasonably can.

Everything seems to work, but just in case, we're waiting a little bit before we delete the internal database column that stores the unencrypted email addresses. So if you see a problem relating to email address storage, please let us know ASAP before we take that last step.

Details below, which are an extract from the assurance case at https://github.com/coreinfrastructure/best-practices-badge/blob/master/doc/security.md

--- David A. Wheeler

================================

First, encrypted email addresses. We encrypt email addresses within the database, and never send the decryption or index keys to the database. This provides protection of this data at rest, and also means that even if an attacker can view the data within the database, that attacker will not receive sensitive information. Email addresses are encrypted as described here, and almost all other data is considered public or at least not sensitive (Passwords are specially encrypted as described separately).

A little context may be useful here. We work hard to comply with various privacy-related regulations, including the European General Data Protection Regulation (GDPR). We do not believe that encrypting email addresses is strictly required by the GDPR. Still, we want to not just meet requirements, we want to exceed them. Encrypting email addresses makes it even harder for attackers to get this information, because it's encrypted at rest and not available by extracting data from the database system.

It is useful to note why we encrypt just email addresses (and passwords), and not all data. Most obviously, almost all data we manage is public anyway. In addition, the easy ways to encrypt data aren't available to us. Transparent Data Encryption (TDE) is not a capability of PostgreSQL. Whole-database encryption can be done with other tricks but it is extremely expensive on Heroku. Therefore, we encrypt data that is more sensitive, instead of encrypting everything.

We encrypt email addresses using the Rails-specific approach outlined in "Securing User Emails in Rails" by Andrew Kane (May 14, 2018). We use the gem 'attr_encrypted' to encrypt email addresses, and gem 'blind_index' to index encrypted email addresses. This approach builds on standard general-purpose approaches for encrypting data and indexing the data, e.g., see "How to Search on Securely Encrypted Database Fields" by Scott Arciszewski. The important aspect here is that we encrypt the data (so it cannot be revealed by those without the encryption key), and we also create cryptographic keyed hashes of the data (so we can search on the data if we have the hash key). The latter value is called a "blind index".

We encrypt the email addresses using AES with 256-bit keys in GCM mode ('aes-256-gcm'). AES is a well-accepted, widely-used encryption algorithm. A 256-bit key is especially strong. GCM is a widely-used strong encryption mode; it provides an integrity ("authentication") mechanism. Each separate encryption uses a separate long initialization vector (IV) created using a cryptographically-strong random number generator.

We also hash the email addresses, so they can be indexed. Indexing is necessary so that we can quickly find matching email addresses (e.g., for local user login). We hash them using the keyed hash algorithm PBKDF2-HMAC-SHA256. SHA-256 is a widely-used cryptographic hash algorithm (in the SHA-2 family), and unlike SHA-1 it is not broken. Using SHA-256 directly is vulnerable to a length extension attack, but that appears to be irrelevant in this case. In any case, we counter this problem by using HMAC and PBKDF2. HMAC is defined in RFC 2104 as the algorithm H((K XOR opad) || H((K XOR ipad) || text)). This enables us to use a private key on the hash, counters length extension, and is very well-studied. We also use PBKDF2 for key stretching. This is another well-studied and widely-accepted algorithm. For our purposes we believe PBKDF2-HMAC-SHA256 is far stronger than needed, and thus is quite sufficient to protect the information. The hashes are of email addresses after they've been downcased; this supports case-insensitive searching for email addresses.
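As a rough illustration of the blind-index idea (Python's standard library exposes PBKDF2-HMAC-SHA256 directly; the iteration count, use of the key as the PBKDF2 salt, and hex output here are assumptions for the sketch, not the blind_index gem's actual parameters):

```python
import hashlib

def blind_index(email: str, key: bytes, iterations: int = 10_000) -> str:
    """Compute a searchable keyed hash ('blind index') of an email address.

    Illustrative only: the real blind_index gem's parameterization differs.
    """
    # Downcase first, as described above, so lookups are case-insensitive.
    normalized = email.strip().lower().encode("utf-8")
    # PBKDF2-HMAC-SHA256; without the key, the hash reveals nothing useful.
    return hashlib.pbkdf2_hmac("sha256", normalized, key, iterations).hex()

demo_key = bytes(32)  # demo only; real keys are cryptographically random 256-bit values

# The same address, regardless of case, yields the same index,
# so the database can index and search it without seeing plaintext.
assert blind_index("Alice@Example.com", demo_key) == blind_index("alice@example.com", demo_key)
```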

The two keys used for email encryption are EMAIL_ENCRYPTION_KEY and EMAIL_BLIND_INDEX_KEY. Both are 256 bits long (aka 64 hexadecimal digits long). The production values for both keys were independently created as cryptographically random values using "rails secret".

Implementation note: the indexes created by blind_index always end in a newline. That doesn't matter for security, but it can cause debugging problems if you weren't expecting that.

Note that 'attr_encrypted' depends on the gem 'encryptor'. Encryptor version 2.0.0 had a major security bug when using AES-*-GCM algorithms. We do not use that version, but instead use a newer version that does not have that vulnerability. Some old documentation recommends using 'attr_encryptor' instead because of this vulnerability, but the vulnerability has since been fixed and 'attr_encryptor' is no longer maintained. Vulnerabilities are never a great sign, but we do take it as a good sign that the developers of encryptor were willing to make a breaking change to fix a security vulnerability.

Also: Gravatar is now restricted.

We use Gravatar to provide user icons for local (custom) accounts. Many users have created Gravatar icons, and those who created them have clearly consented to their use.

However, accessing gravatar icons requires the MD5 cryptographic hash of downcased email addresses. Users who have created gravatar icons have already consented to this, but we want to hide even the MD5 cryptographic hashes of those who have not so consented.
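For context, Gravatar's published lookup scheme hashes the trimmed, lowercased email address with MD5 and embeds the digest in the icon URL. A minimal sketch:

```python
import hashlib

def gravatar_url(email: str) -> str:
    """Build a Gravatar icon URL per Gravatar's published scheme:
    MD5 of the trimmed, lowercased email address."""
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{digest}"

# Emitting this URL reveals the MD5 of the email address to anyone who
# sees the page - which is exactly what the change above avoids for
# users who have not opted in by creating a Gravatar icon.
url = gravatar_url(" Someone@Example.com ")
assert url.startswith("https://www.gravatar.com/avatar/")
```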

Therefore, we track for each user whether or not they should use a Gravatar icon, as the boolean field "use_gravatar". Currently this can only be true for local users (for GitHub users we use their GitHub icon). Whenever a new local user account is created or changed, we check if there is an active Gravatar icon, and set use_gravatar accordingly. We also intend to occasionally iterate through local users to reset this (so that users won't need to remember to manipulate their BadgeApp user account). We will then only use the Gravatar MD5 when there is an actual Gravatar icon to refer to; otherwise, we use a bogus MD5 value. Thus, local users who do not have a Gravatar account will not even have the MD5 of their email address revealed.
This is almost certainly not required by regulations such as the GDPR, since without this measure we would only expose MD5s of email addresses, and only in certain cases. But we want to exceed expectations, and this is one way we do that.

The current plan is to iterate through the local users once a month and check with Gravatar. That should be fine for the purpose, and easily scales to a huge number of users.

Re: GDPR - we think we're ready, let me know of any issues

David A. Wheeler
 

I just realized that I should also add a weird special case: Temporarily-retained backups of logs or databases, which can make our theoretical maximum retention time 18 months (1.5 years).  Here’s the issue. We don’t normally do this, but it’s *possible* to make backup copies of logs, and we occasionally make copies of databases.  In all cases, the purpose is to detect defects and/or attacks – we don’t analyze individual user behavior (unless you consider “attacking our site” a valid user behavior).  We don’t retain this information for more than 6 months beyond its normal expiration (and that’d be an unusual case).  Of course, errors can happen, but we actively work to prevent them.  So in an *extreme* case, deleted private data can stick around internally for 18 months.  It’s not likely, but it’s *possible*.

 

Overall, I think we have a good story regarding privacy.  We do not share personal data with any third parties.  We do not have advertisements of any kind.  We do not process payments of any kind.  We do not use external tracking tools like Google Analytics.  We self-host our JavaScript, fonts, and images, so users do not trigger downloads from external third-party sites when they request our web pages.  We also set our cookies to “SameSite lax” which further mitigates the risk of cross-origin information leakage to third parties.  We do not allow users to set up loading of external images in the markup text that they provide (images are a common way to insert trackers). As a result, we believe there is no opportunity for a third party to track users (such as by using “third party cookies”), because we don’t load them.  We do use a cloud service (Heroku/Amazon) and content delivery network (CDN) (Fastly) to implement the site, but they simply provide the computation and network delivery service.

 

The BadgeApp front page does have hypertext links to well-known social media sites (including Twitter, Reddit, and Facebook).  However, these links are carefully designed so that viewing the BadgeApp front page does not notify the external sites that the user is viewing the BadgeApp front page, and the BadgeApp never shares personal data with those other sites.  Users must expressly click on those links to go to those other sites, and even in those cases we simply transfer generic information about the badging site; we do not provide any personal information about the user to those external sites.

 

I think we meet the other requirements too.  We don’t store a lot of private information about users, and it isn’t THAT sensitive - their email address is the most sensitive we get (which is not in the “most sensitive” category).  Users can see what we store, and can delete that information, whenever they want to.

 

Again, I’m not a lawyer, but I *think* we’re okay.  Of course, if someone sees a problem, PLEASE let us know.  We *want* to give everyone privacy.

 

--- David A. Wheeler

 

 

From: Georg Link [mailto:linkgeorg@...]
Sent: Monday, May 14, 2018 6:44 PM
To: Wheeler, David A
Cc: cii-badges@...
Subject: Re: [CII-badges] GDPR - we think we're ready, let me know of any issues

 

Sounds reasonable, thanks David.

 

On Mon, May 14, 2018 at 5:24 PM, Wheeler, David A <dwheeler@...> wrote:

Georg Link:
> It might be helpful to additionally document how long activity logs are kept and when they are either anonymized or deleted. Because the goal "to detect and fix erroneous behavior, as well as to detect and counter malicious behavior" might not require the data for eternity.

 

Fair enough.

 

The log of activity records requests to the system and related activity.  Logs are rotated daily and log data is archived for 1 year.  After that, it’s gone.

 

Some bugs are intermittent, and some attackers use “low and slow” kinds of attacks.  Thus, we need to log things for a period of time to deal with those cases.  A year seems like a reasonable period of time.

 

Does that help?

 

--- David A. Wheeler

 

Sent: Monday, May 14, 2018 5:55 PM
To: Wheeler, David A
Cc: cii-badges@...
Subject: Re: [CII-badges] GDPR - we think we're ready, let me know of any issues

 

Thanks David,

 

 

Best,

Georg

 

On Mon, May 14, 2018, 15:14 Wheeler, David A <dwheeler@...> wrote:


The system does store activity logs for all requests to the website.  These logs are necessary to detect and fix erroneous behavior, as well as to detect and counter malicious behavior.  For logging to meet these requirements, it is necessary and important to record a variety of information, including the specific request, a summary of what action was performed on the request, the IP address of the requester, and also the user id of a logged-in user where relevant.  Therefore, our logs (like most logs) record this data (IP addresses and user id numbers).

We believe that being able to fix erroneous behaviors of the website, and counter malicious behaviors directed against this website, is a legitimate interest.  We do not use the logs for profiling users for marketing or anything like that; we use the logs to help ensure that the site continues to work in spite of errors or network attack.  We do not provide log data to external users, as that could breach others' privacy.  We believe this is fine under the GDPR; the GDPR requires "data portability" where consent is granted or the data is provided in performance of a contract, but log data is recorded to support a legitimate interest (and thus is not subject to data portability requirements).

 

Re: GnuPG efail - researcher discussion failure

Danny O'Brien <danny@...>
 

From: Tom Ritter <tom@...>
Subject: Re: [CII-badges] GnuPG efail - researcher discussion failure
Date: May 15, 2018 at 9:57:56 AM EDT
To: "Luis R. Rodriguez" <mcgrof@...>
Cc: cii-badges@..., Werner Koch <wk@...>,
Katitza Rodriguez <katitza@...>
I think there's a discussion relating to CII here. I agree this isn't
the right place but since there's no general CII discussion list (nor
is there really enough traffic for one) - we hijack away!
Kat passed on the thread --

I won't hijack this thread any more than it has been, but EFF would be
happy to join any discussion for making this better. God knows we
learned (and re-learned) a lot in this, and I'm pushing for writing up a
public post-mortem to help others in similar situations.

Anyway, just wanted to stick my name here for those of you who don't
have a contact with us.

d.


One of the discussions I've had in the past as it related to CII is
how Open Source projects should handle patches for vulnerabilities.
I've pointed to OpenSSL as a model for example. They are very diligent
about developing fixes and not pre-releasing them; they give
notification of the day and approximate time for patches, and these
things give enterprises (I imagine, not actually in charge of patching
enterprise-deployments of OpenSSL) a lot of comfort and capacity
planning.

This type of coordinated disclosure is another situation where the
interests of affected vendors, affected consumers, and security
researchers are not necessarily at odds - but neither are they in alignment.
Security Researchers want (and sometimes need to justify their
position in orgs) big press coverage. Fancy websites, demos, and
'simplified' impact statements all work to their favor. (And when I
say 'simplified' I don't mean that derogatorily: "Attack leaks
contents of PGP/S/MIME Encrypted Email" is still accurate and much
simpler than "Poor Content Handling in certain Email Clients may leak
PGP/S/MIME contents")

Proposals that push security researchers to avoid hype and to avoid
trying to make a big impact with their work are doomed to failure. All
you're going to do is push them to coordinate with you less, to the
point where a disclosure date will come and they'll release a website
and exclusive on CNN and you won't have known either was coming.
Instead, I think the way to do it is to push Security Researchers to
coordinate with you _more_.

They've got a big attack on drupal? Hell give them
bigattack.drupal.com so you can lend it legitimacy and show you're
working with them. Work to create a joint media message together and
provide quotes that can be used in stories about it. Instead of
silently identifying and fixing a variant of their attack you
discover; add to their paper/presentation. If it's significant enough,
you can ask to co-present/co-author. Some bugs are so simple there's
not much meat to the story, but as someone who has reviewed
submissions for security conferences, it's really rare and really
great when a researcher and the researched co-present and tell both
their sides of the story. There are a _lot_ of lessons to be learned
from those types of talks.

While all of this applies to FOSS and non-FOSS, I think (or hope) that
FOSS should be more open to it. I think (or hope) that there's less
ego in FOSS when it comes to projects and it's easier for open source
projects to say "Wow that's a really awesome find" or "That's a really
impressive chain that was built to exploit this" and congratulate and
appreciate researchers instead of seeing it as an us vs them
situation.

-tom

On 14 May 2018 at 18:35, Luis R. Rodriguez <mcgrof@...> wrote:

As you may know there is tons of media coverage over efail:

https://efail.de/

The GnuPG team response seems to indicate that the researchers really
didn't properly engage or tune their message to avoid such hype over
such issues:

https://lists.gnupg.org/pipermail/gnupg-users/2018-May/060315.html
https://lists.gnupg.org/pipermail/gnupg-users/2018-May/060318.html

The tone should therefore have been more about tons of MUAs needing
fixing. But
everything else seems hype.

Since CII started in part as a response to Heartbleed, and the badge
program is
IMHO a success story considering the number of projects which have been
shaping
up to meet the requirements, it has me thinking that despite the badge
program
something is still missing here.

What could be done, from a community, or even CII perspective, to avoid
further
cross channel miscommunication mishaps between security researchers and
our broad
set of FOSS projects in the community?

Cc'ing two folks which I believe are not subscribed. Perhaps this is
Off topic,
but, not sure where *else* could such a topic be discussed in a
proactive
manner.

Luis
_______________________________________________
CII-badges mailing list
CII-badges@...
https://lists.coreinfrastructure.org/mailman/listinfo/cii-badges



Migration from Mailman to Groups.io

Brendan OSullivan <bosullivan@...>
 

Greetings CII community!

The Linux Foundation has connected with a new vendor called Groups.io, which provides mailing list services in a simple but modern interface. Groups.io offers all of the capabilities of our existing Mailman mailing service plus additional community tools that make it an exceptional service solution.

We are planning to migrate your existing mailing list archives and user lists to Groups.io on Wednesday May 16th starting at 9:30am PST.

The migration will include details on subscriber preferences and owner or moderator privileges.

Owners and Moderators: Please be aware pending memberships or posts (and similar pending moderation actions) in Mailman will not be preserved in this migration. We recommend re-checking for any such pending decisions and taking action on them within Mailman one hour prior to the start of the migration window.

During the migration window you will still be able to access the archives; however, the delivery of messages sent to the mailing lists during this window will be delayed until the migration of the archives and list members is complete. We will turn off new list signups during the migration window, and this functionality will be restored once the migration is complete.


FAQs

What are the key differences between Mailman and Groups.io?

  • Groups.io has a modern interface, robust user security model, and interactive, searchable archives
  • Groups.io provides advanced features including muting threads and integrations with modern tools like GitHub, Slack, and Trello
  • Groups.io also has optional extras like a shared calendar, polling, chat, a wiki, and more
  • Groups.io uses a concept of subgroups, where members first join the project “group” (a master list), then they choose the specific “subgroup” lists they want to subscribe to

How do the costs compare?

The Linux Foundation can provide project-branded Groups.io accounts to projects for less cost than managing our in-house Mailman systems.

How is the experience different for me as a list moderator or participant?

In many ways, it is very much the same. You will still find the main group at your existing URL and sub-groups equate to the more focused mailing lists based on the community’s needs. Here is an example of main group and sub-group URL patterns, and their respective emails:

https://lists.projectname.org/g/main

https://lists.projectname.org/g/devs

https://lists.projectname.org/g/ci

main@...

devs@...

ci@...

What is different is Groups.io’s simple but highly functional UI that will make the experience of moderating or participating in the community discussions more enjoyable.

Where do I find the settings and owner/moderator tools?

To change settings while in a group or subgroup, left click “Admin” from the side menu.

Then from “Admin” you can select:

  • “Settings” to change overall settings for a group, including privacy and message policy settings.

  • “Members” to manage people within a group, including adjusting their role and privileges

  • “Pending” to view messages pending moderation

If you’d like to learn more about using Groups.io, please reference their help documentation. If you need assistance with Groups.io, please email helpdesk@... to reach The Linux Foundation’s helpdesk.


Cheers!
Brendan OSullivan

Helpdesk Analyst

Re: GnuPG efail - researcher discussion failure

David A. Wheeler
 

Luis R. Rodriguez:
> As you may know there is tons of media coverage over efail:
>
> https://efail.de/
> ...
> What could be done, from a community, or even CII perspective, to avoid further
> cross channel miscommunication mishaps between security researchers and
> our broad set of FOSS projects in the community?
I don't know what can be done, but it's definitely a worthy topic, because
media circuses and miscommunication are happening a lot. Unfortunately,
for many the economics encourage it. The researchers get quick (valuable) notoriety,
and the media get good clickbait (and many in the media don't
understand the issues anyway).

> Cc'ing two folks which I believe are not subscribed. Perhaps this is Off topic,
> but, not sure where *else* could such a topic be discussed in a proactive
> manner.
That's a fair "scope of mailing list" question.

If we can somehow turn this into some kind of "best practice" kind of thing,
this is definitely on-topic for this mailing list. I don't know if we can,
but the discussion on *trying* to do so is definitely on-topic.
I don't know of any "generic CII" mailing list, but since many of us are involved in
the CII generally, and it's closely related, I think it's okay for now.

An alternative (and much larger) forum is the oss-security mailing list.

--- David A. Wheeler

Re: GnuPG efail - researcher discussion failure

Tom Ritter
 

I think there's a discussion relating to CII here. I agree this isn't
the right place but since there's no general CII discussion list (nor
is there really enough traffic for one) - we hijack away!

One of the discussions I've had in the past as it related to CII is
how Open Source projects should handle patches for vulnerabilities.
I've pointed to OpenSSL as a model for example. They are very diligent
about developing fixes and not pre-releasing them; they give
notification of the day and approximate time for patches, and these
things give enterprises (I imagine, not actually in charge of patching
enterprise-deployments of OpenSSL) a lot of comfort and capacity
planning.

This type of coordinated disclosure is another situation where the
interests of affected vendors, affected consumers, and security
researchers are not necessarily at odds - but neither are they in alignment.
Security Researchers want (and sometimes need to justify their
position in orgs) big press coverage. Fancy websites, demos, and
'simplified' impact statements all work to their favor. (And when I
say 'simplified' I don't mean that derogatorily: "Attack leaks
contents of PGP/S/MIME Encrypted Email" is still accurate and much
simpler than "Poor Content Handling in certain Email Clients may leak
PGP/S/MIME contents")

Proposals that push security researchers to avoid hype and to avoid
trying to make a big impact with their work are doomed to failure. All
you're going to do is push them to coordinate with you less, to the
point where a disclosure date will come and they'll release a website
and exclusive on CNN and you won't have known either was coming.
Instead, I think the way to do it is to push Security Researchers to
coordinate with you _more_.

They've got a big attack on drupal? Hell give them
bigattack.drupal.com so you can lend it legitimacy and show you're
working with them. Work to create a joint media message together and
provide quotes that can be used in stories about it. Instead of
silently identifying and fixing a variant of their attack you
discover; add to their paper/presentation. If it's significant enough,
you can ask to co-present/co-author. Some bugs are so simple there's
not much meat to the story, but as someone who has reviewed
submissions for security conferences, it's really rare and really
great when a researcher and the researched co-present and tell both
their sides of the story. There are a _lot_ of lessons to be learned
from those types of talks.

While all of this applies to FOSS and non-FOSS, I think (or hope) that
FOSS should be more open to it. I think (or hope) that there's less
ego in FOSS when it comes to projects and it's easier for open source
projects to say "Wow that's a really awesome find" or "That's a really
impressive chain that was built to exploit this" and congratulate and
appreciate researchers instead of seeing it as an us vs them
situation.

-tom

On 14 May 2018 at 18:35, Luis R. Rodriguez <mcgrof@...> wrote:
As you may know there is tons of media coverage over efail:

https://efail.de/

The GnuPG team response seems to indicate that the researchers really
didn't properly engage or tune their message to avoid such hype over
such issues:

https://lists.gnupg.org/pipermail/gnupg-users/2018-May/060315.html
https://lists.gnupg.org/pipermail/gnupg-users/2018-May/060318.html

The tone should therefore have been more about tons of MUAs needing fixing. But
everything else seems hype.

Since CII started in part as a response to Heartbleed, and the badge program is
IMHO a success story considering the number of projects which have been shaping
up to meet the requirements, it has me thinking that despite the badge program
something is still missing here.

What could be done, from a community or even CII perspective, to avoid further
cross-channel miscommunication mishaps between security researchers and our broad
set of FOSS projects in the community?

Cc'ing two folks who I believe are not subscribed. Perhaps this is off topic,
but I'm not sure where *else* such a topic could be discussed in a proactive
manner.

Luis
_______________________________________________
CII-badges mailing list
CII-badges@...
https://lists.coreinfrastructure.org/mailman/listinfo/cii-badges

Re: GnuPG efail - researcher discussion failure

Werner Koch <wk@...>
 

Hi!

On Tue, 15 May 2018 01:35, mcgrof@... said:

The tone should therefore have been more about tons of MUAs needing fixing. But
everything else seems hype.
Below is a mail I just sent to the gnupg-users list. I hope it shows a
bit of that overhyping by including a table that is easy to read in a mail.
The table is from draft 0.9.0, which was published yesterday at efail.de.


Shalom-Salam,

Werner

--8<---------------cut here---------------start------------->8---
Doesn't CERT read the paper before producing a report? The table of
vulnerable MUAs is easy enough to read. To better see what we are
discussing, here is the table in plain text format with the check marks
replaced by yes and no.

TABLE OF VULNERABLE MAIL CLIENTS

| OS      | Client          | S/MIME | PGP               |
|         |                 |        | -MDC | +MDC | SE  |
|---------+-----------------+--------+------+------+-----|
| Windows | Outlook 2007    | yes    | yes  | yes  | no  |
|         | Outlook 2010    | yes    | no   | no   | no  |
|         | Outlook 2013    | user   | no   | no   | no  |
|         | Outlook 2016    | user   | no   | no   | no  |
|         | Win. 10 Mail    | yes    | –    | –    | –   |
|         | Win. Live Mail  | yes    | –    | –    | –   |
|         | The Bat!        | user   | no   | no   | no  |
|         | Postbox         | yes    | yes  | yes  | yes |
|         | eM Client       | yes    | no   | yes  | no  |
|         | IBM Notes       | yes    | –    | –    | –   |
| Linux   | Thunderbird     | yes    | yes  | yes  | yes |
|         | Evolution       | yes    | no   | no   | no  |
|         | Trojitá         | yes    | no   | no   | no  |
|         | KMail           | user   | no   | no   | no  |
|         | Claws           | no     | no   | no   | no  |
|         | Mutt            | no     | no   | no   | no  |
| macOS   | Apple Mail      | yes    | yes  | yes  | yes |
|         | MailMate        | yes    | no   | no   | no  |
|         | Airmail         | yes    | yes  | yes  | yes |
| iOS     | Mail App        | yes    | –    | –    | –   |
|         | Canary Mail     | –      | no   | no   | no  |
| Android | K-9 Mail        | –      | no   | no   | no  |
|         | R2Mail2         | yes    | no   | yes  | no  |
|         | MailDroid       | yes    | no   | yes  | no  |
|         | Nine            | yes    | –    | –    | –   |
| Webmail | United Internet | –      | no   | no   | no  |
|         | Mailbox.org     | –      | no   | no   | no  |
|         | ProtonMail      | –      | no   | no   | no  |
|         | Mailfence       | –      | no   | no   | no  |
|         | GMail           | yes    | –    | –    | –   |
| Webapp  | Roundcube       | –      | no   | no   | yes |
|         | Horde IMP       | user   | no   | yes  | yes |
|         | AfterLogic      | –      | no   | no   | no  |
|         | Rainloop        | –      | no   | no   | no  |
|         | Mailpile        | –      | no   | no   | no  |


–    = Encryption not supported
no   = Not vulnerable
yes  = Vulnerable
user = Vulnerable after user consent

-MDC = with stripped MDC, +MDC = with wrong MDC, SE = SE packets

My conclusion is that S/MIME is vulnerable in most clients, with the
exception of The Bat!, KMail, Claws, Mutt and Horde IMP. I treat a
requirement for user consent as non-vulnerable. Most of the
non-vulnerable clients use GnuPG as their engine.

For OpenPGP I see lots of no and only a few vulnerable clients: Support
for Outlook 2007 has long been dropped and Gpg4win/GpgOL gives a big
warning when you try to use it with OL2007. All other Outlook versions
are not vulnerable. The case for Thunderbird/Enigmail is not that clear
because the researcher confirmed that Enigmail 2.0 is in general not
vulnerable; we don't know which version of Enigmail was tested. I don't
know Postbox, Apple mailers or Horde IMP.
--8<---------------cut here---------------end--------------->8---


--
# Please read: Daniel Ellsberg - The Doomsday Machine #
Die Gedanken sind frei. Ausnahmen regelt ein Bundesgesetz.

GnuPG efail - researcher discussion failure

Luis R. Rodriguez <mcgrof@...>
 

As you may know there is tons of media coverage over efail:

https://efail.de/

The GnuPG team response seems to indicate that the researchers really
didn't properly engage or tune their message to avoid such hype over
such issues:

https://lists.gnupg.org/pipermail/gnupg-users/2018-May/060315.html
https://lists.gnupg.org/pipermail/gnupg-users/2018-May/060318.html

The tone should therefore have been more about tons of MUAs needing fixing. But
everything else seems hype.

Since CII started in part as a response to Heartbleed, and the badge program is
IMHO a success story considering the number of projects which have been shaping
up to meet the requirements, it has me thinking that despite the badge program
something is still missing here.

What could be done, from a community or even CII perspective, to avoid further
cross-channel miscommunication mishaps between security researchers and our broad
set of FOSS projects in the community?

Cc'ing two folks who I believe are not subscribed. Perhaps this is off topic,
but I'm not sure where *else* such a topic could be discussed in a proactive
manner.

Luis

Re: GDPR - we think we're ready, let me know of any issues

Georg Link
 

Sounds reasonable, thanks David.

On Mon, May 14, 2018 at 5:24 PM, Wheeler, David A <dwheeler@...> wrote:

Georg Link:
> It might be helpful to additionally document how long activity logs are kept and when they are either anonymized or deleted. Because the goal "to detect and fix erroneous behavior, as well to detect and counter malicious behavior" might not require the data for eternity.

 

Fair enough.

 

The log of activity records requests to the system and related activity.  Logs are rotated daily and log data is archived for 1 year.  After that, it’s gone.

 

Some bugs are intermittent, and some attackers use “low and slow” kinds of attacks.  Thus, we need to log things for a period of time to deal with those cases.  A year seems like a reasonable period of time.

 

Does that help?

 

--- David A. Wheeler

 

Sent: Monday, May 14, 2018 5:55 PM
To: Wheeler, David A
Cc: cii-badges@lists.coreinfrastructure.org
Subject: Re: [CII-badges] GDPR - we think we're ready, let me know of any issues

 

Thanks David,

 

 

Best,

Georg

 

On Mon, May 14, 2018, 15:14 Wheeler, David A <dwheeler@...> wrote:


The system does store activity logs for all requests to the website. These logs are necessary to detect and fix erroneous behavior, as well as to detect and counter malicious behavior. For logging to meet these requirements, it is necessary and important to record a variety of information, including the specific request, a summary of what action was performed on the request, the IP address of the requester, and also the user id of a logged-in user where relevant. Therefore, our logs (like most logs) record this data (IP addresses and user id numbers). We believe that being able to fix erroneous behaviors of the website, and counter malicious behaviors directed against this website, is a legitimate interest. We do not use the logs for profiling users for marketing or anything like that; we use the logs to help ensure that the site continues to work in spite of errors or network attack. We do not provide log data to external users, as that could breach others' privacy. We believe this is fine under the GDPR; the GDPR requires "data portability" where consent is granted or the data is provided in performance of a contract, but log data is recorded to support a legitimate interest (and thus is not subject to data portability requirements).


Re: GDPR - we think we're ready, let me know of any issues

David A. Wheeler
 

Georg Link:
> It might be helpful to additionally document how long activity logs are kept and when they are either anonymized or deleted. Because the goal "to detect and fix erroneous behavior, as well to detect and counter malicious behavior" might not require the data for eternity.

 

Fair enough.

 

The log of activity records requests to the system and related activity.  Logs are rotated daily and log data is archived for 1 year.  After that, it’s gone.

 

Some bugs are intermittent, and some attackers use “low and slow” kinds of attacks.  Thus, we need to log things for a period of time to deal with those cases.  A year seems like a reasonable period of time.
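The retention policy described above (daily rotation, archives kept one year, then deleted) can be sketched as a simple age check. This is purely illustrative editor's code, not the BadgeApp's actual log-management implementation:

```python
# Sketch of the described retention policy: logs rotate daily and
# archives are kept for one year, after which they are deleted.
# Illustrative only -- not the BadgeApp's real configuration or code.
from datetime import date, timedelta

RETENTION = timedelta(days=365)  # "log data is archived for 1 year"

def should_delete(archived_on: date, today: date) -> bool:
    """True once an archived daily log is older than the retention window."""
    return today - archived_on > RETENTION

# An archive from a year and a half ago would be pruned;
# one from a few months ago would be kept.
print(should_delete(date(2017, 5, 1), date(2018, 5, 14)))
print(should_delete(date(2018, 1, 1), date(2018, 5, 14)))
```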

 

Does that help?

 

--- David A. Wheeler

 

Sent: Monday, May 14, 2018 5:55 PM
To: Wheeler, David A
Cc: cii-badges@...
Subject: Re: [CII-badges] GDPR - we think we're ready, let me know of any issues

 

Thanks David,

 

 

Best,

Georg

 

On Mon, May 14, 2018, 15:14 Wheeler, David A <dwheeler@...> wrote:


The system does store activity logs for all requests to the website. These logs are necessary to detect and fix erroneous behavior, as well as to detect and counter malicious behavior. For logging to meet these requirements, it is necessary and important to record a variety of information, including the specific request, a summary of what action was performed on the request, the IP address of the requester, and also the user id of a logged-in user where relevant. Therefore, our logs (like most logs) record this data (IP addresses and user id numbers). We believe that being able to fix erroneous behaviors of the website, and counter malicious behaviors directed against this website, is a legitimate interest. We do not use the logs for profiling users for marketing or anything like that; we use the logs to help ensure that the site continues to work in spite of errors or network attack. We do not provide log data to external users, as that could breach others' privacy. We believe this is fine under the GDPR; the GDPR requires "data portability" where consent is granted or the data is provided in performance of a contract, but log data is recorded to support a legitimate interest (and thus is not subject to data portability requirements).

Re: GDPR - we think we're ready, let me know of any issues

Georg Link
 

Thanks David,

It might be helpful to additionally document how long activity logs are kept and when they are either anonymized or deleted. Because the goal "to detect and fix erroneous behavior, as well to detect and counter malicious behavior" might not require the data for eternity.

Best,
Georg


On Mon, May 14, 2018, 15:14 Wheeler, David A <dwheeler@...> wrote:

The system does store activity logs for all requests to the website. These logs are necessary to detect and fix erroneous behavior, as well as to detect and counter malicious behavior. For logging to meet these requirements, it is necessary and important to record a variety of information, including the specific request, a summary of what action was performed on the request, the IP address of the requester, and also the user id of a logged-in user where relevant. Therefore, our logs (like most logs) record this data (IP addresses and user id numbers). We believe that being able to fix erroneous behaviors of the website, and counter malicious behaviors directed against this website, is a legitimate interest. We do not use the logs for profiling users for marketing or anything like that; we use the logs to help ensure that the site continues to work in spite of errors or network attack. We do not provide log data to external users, as that could breach others' privacy. We believe this is fine under the GDPR; the GDPR requires "data portability" where consent is granted or the data is provided in performance of a contract, but log data is recorded to support a legitimate interest (and thus is not subject to data portability requirements).

May 16 - mailing list will undergo some changes

David A. Wheeler
 

All: On May 16th the CII-badges mailing list will undergo some changes, courtesy of the Linux Foundation’s IT group.

 

If the only thing you do is receive emails on the mailing list, I’m told that nothing should change for you.  But sometimes problems happen, so I thought it’d be wise to warn people now!

 

Thanks.

 

--- David A. Wheeler

GDPR - we think we're ready, let me know of any issues

David A. Wheeler
 

The EU General Data Protection Regulation (GDPR)'s official beginning enforcement date is 2018-05-25, which is just 11 days away.

As far as I know, we don't have any GDPR issues - but if you think we do, PLEASE let me know.

Below is a quick set of highlights of why we think we're okay from a GDPR viewpoint. This isn't a complete rationale for why we think we meet the GDPR, but hopefully it gives you a sense of the situation.

Now, a caveat. I'm a US citizen, who works for a US company, and I am *not* a lawyer. European law is *way* outside my field of expertise. What's more, the GDPR is intentionally worded in a very high-level aspirational way, making it a little hard for a non-lawyer to be sure we've addressed absolutely everything.

That said, I can say that we've honestly tried to meet and in many places exceed the GDPR requirements. We want the BadgeApp to respect user privacy, regardless of where the user lives. As always, please let us know if there's a problem.

Thank you!

--- David A. Wheeler

=====================================================================

There are many reasons we think we don't have any GDPR issues. From the very beginning, we have always considered user privacy very important. For example:
* We *never* give user data to anyone else unless we're legally required to do so. We don't sell (or display) ads. We don't sell tracking info or perform services for others who want users tracked.
* We only use personal data to perform badge-related functions, for example, to authenticate users, to determine if users are authorized to make changes, to log which user modified data, to communicate with users (e.g., via email) about badge-related issues (including reminder emails and password resets), to help users grant edit rights to others, to help users ensure that they are granting additional rights to the correct user, to display to others who "owns" the project entry, and to display to others which users are allowed to make modifications.
* We don't collect/store much. The main private data we store is user email addresses. Email addresses are *only* used for badging-related activities. We do send reminders to projects that don't have a passing badge, but those are focused emails to specific users who already specifically told us that they want to actively pursue a badge & yet have not made any edits for a long time. If a user keeps pursuing a badge (via edits), or the project gets a passing badge, that user will never see a reminder message. Reminder emails are NOT sent as part of a mailing list.
* Users can always delete their accounts at any time if they want to (though we hope they won't want to). I think that meets the "right of erasure" aka "right to be forgotten".
* Unlike many web sites, we *intentionally* directly host files (like jquery), and our links to social networks (like Facebook) do *NOT* provide any tracking data unless the user actively clicks on a link to that social network.
* We have a really good security story. See: https://github.com/coreinfrastructure/best-practices-badge/blob/master/doc/security.md

The big issue we dealt with months ago was user "data portability" - a GDPR requirement that users be able to get data about themselves in some standard format. It's not clear how *useful* this is, because we don't store much information about users. That said, we don't need to apologize for "not storing much information about users". In any case, I think we've completely met that GDPR requirement - a while ago we added the ability for users to get information about themselves in JSON format.
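As a concrete illustration of that JSON self-export, a user's data can be retrieved from the site's REST interface. The route below is an editor's assumption for illustration only; consult the BadgeApp documentation for the actual path:

```python
# Sketch: building the URL for a user's JSON self-export from the
# BadgeApp REST interface. The /en/users/<id>.json route is an
# assumed, illustrative path -- check the BadgeApp docs for the real one.
BASE = "https://bestpractices.coreinfrastructure.org"

def user_json_url(user_id: int, locale: str = "en") -> str:
    """Return the (assumed) URL serving a user's data as JSON."""
    return f"{BASE}/{locale}/users/{user_id}.json"

# e.g. fetch with: urllib.request.urlopen(user_json_url(123)).read()
print(user_json_url(123))
```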

The system does store activity logs for all requests to the website. These logs are necessary to detect and fix erroneous behavior, as well as to detect and counter malicious behavior. For logging to meet these requirements, it is necessary and important to record a variety of information, including the specific request, a summary of what action was performed on the request, the IP address of the requester, and also the user id of a logged-in user where relevant. Therefore, our logs (like most logs) record this data (IP addresses and user id numbers). We believe that being able to fix erroneous behaviors of the website, and counter malicious behaviors directed against this website, is a legitimate interest. We do not use the logs for profiling users for marketing or anything like that; we use the logs to help ensure that the site continues to work in spite of errors or network attack. We do not provide log data to external users, as that could breach others' privacy. We believe this is fine under the GDPR; the GDPR requires "data portability" where consent is granted or the data is provided in performance of a contract, but log data is recorded to support a legitimate interest (and thus is not subject to data portability requirements).

Proposal: Minor clarification of license_location

David A. Wheeler
 

I’m proposing a minor clarification of the license_location criterion here:

  https://github.com/coreinfrastructure/best-practices-badge/issues/1133

 

Comments welcome!

 

--- David A. Wheeler

Projects that received badges (monthly summary)

badgeapp@...
 

This is an automated monthly status report of the best practices badge application covering the month 2018-04.

Here are some selected statistics for most recent completed month, preceded by the same statistics for the end of the month before that.

Ending dates       2018-03-30   2018-04-29
Total Projects     1459         1501
Projects 25%+      526          544
Projects 50%+      449          463
Projects 75%+      357          370
Projects passing   159          168
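For readers who want the month-over-month growth, the deltas can be computed directly from the two snapshots in the statistics above; a minimal Python sketch:

```python
# Snapshot pairs (2018-03-30, 2018-04-29) taken from the report's table.
stats = {
    "Total Projects":   (1459, 1501),
    "Projects 25%+":    (526, 544),
    "Projects 50%+":    (449, 463),
    "Projects 75%+":    (357, 370),
    "Projects passing": (159, 168),
}

# Print the month-over-month change for each row.
for name, (before, after) in stats.items():
    print(f"{name}: +{after - before}")
```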

Here are the projects that first achieved a passing badge in 2018-04:

  1. Cloud Native Interactive Landscape at 2018-04-02 13:52:17 UTC
  2. mbt at 2018-04-03 23:08:20 UTC
  3. BIND9 at 2018-04-06 23:48:56 UTC
  4. byobu at 2018-04-09 01:53:28 UTC
  5. libcluon at 2018-04-13 19:16:24 UTC
  6. Fluentd at 2018-04-16 16:26:18 UTC
  7. vinnie at 2018-04-27 23:43:20 UTC
  8. Nelson numerical interpreter at 2018-04-29 09:27:23 UTC

We congratulate them all!

Do you know a project that doesn't have a badge yet? Please suggest to them that they get a badge now!

Re: Https links are not accepted in CII badging

David A. Wheeler
 

Seshu m:

 

The picture you sent me of:

  https://bestpractices.coreinfrastructure.org/en/projects/1702#sites_https

does show an “X” (unsatisfied criterion), but in the picture it appears that someone (at the time) expressly told the system that the criterion was “Unmet”.  That would be the correct result if the BadgeApp was told that this criterion was “Unmet”.  Our automated checkers will sometimes set a criterion to “Met” if its value wasn’t known before, but if a human expressly says it’s unmet, we normally presume the human is right.

 

It looks like someone has *CHANGED* the value of the sites_https criterion for project 1702 since you posted your question.  Here’s what I see.  Notice that it is now marked as “Met” and thus has a green checkmark (“satisfied”):

 

When I view the badging site, the only criterion left for ONAP is this one:

https://bestpractices.coreinfrastructure.org/en/projects/1702#vulnerabilities_fixed_60_days

 

In short, I think everything is working properly.  Please let me know if I’ve misunderstood something!

 

--- David A. Wheeler