The Eclipse Foundation recently received financial support from the OpenSSF’s Alpha-Omega project. We are thrilled to be able to help our projects improve the security of their software supply chain. A number of initiatives are getting started, but today we will focus on the 1026 git repositories of the 254 Eclipse projects hosted on GitHub, spread among 50 different organizations.
As firm believers that the best decisions are backed by good data, we ran Scorecard on these repositories. This gives an overview of the projects’ current security posture and lets us see how impactful we can be in improving it.
From Scorecard’s GitHub page:
Scorecards is an automated tool that assesses a number of important heuristics ("checks") associated with software security and assigns each check a score of 0-10. You can use these scores to understand specific areas to improve in order to strengthen the security posture of your project.
Each individual check is then combined into a global score: the scorecard. Some checks need to run with advanced permissions (e.g. admin:repo_hook or repo > public_repo). Given the number of repositories and the GitHub API request limits, we were only able to run the checks with basic permissions. We had to skip the Webhooks check, and the Branch-Protection one is not as thorough as it could be. We also decided to exclude some other checks.
The complete list of checks we’ve run is the following: Code-Review, Contributors, Dependency-Update-Tool, Fuzzing, License, Maintained, Branch-Protection, SAST, Signed-Releases, Vulnerabilities, Pinned-Dependencies, Token-Permissions, Security-Policy, CI-Tests, Dangerous-Workflow, Binary-Artifacts
As we would like to improve the situation globally, we decided to do a statistical analysis of the results rather than name the top and bottom projects. Thus, we created histograms of the various scores to analyze the distribution of the results. We will go through all of them in reverse criticality order, starting with the global score.
The Global Score is the final result as output by Scorecard. It is the aggregate of all the checks that were run on the repository.
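Conceptually, such an aggregate can be sketched as a criticality-weighted average of the per-check scores. The weights and sample results below are illustrative assumptions, not Scorecard’s actual values.

```python
# Sketch of Scorecard-style aggregation: a weighted average of per-check
# scores (0-10), weighted by each check's criticality. The weights below
# are illustrative assumptions, not Scorecard's real values.
CRITICALITY_WEIGHT = {"critical": 10.0, "high": 7.5, "medium": 5.0, "low": 2.5}

def global_score(check_results):
    """check_results: list of (score_0_to_10, criticality) tuples."""
    total_weight = sum(CRITICALITY_WEIGHT[c] for _, c in check_results)
    weighted = sum(s * CRITICALITY_WEIGHT[c] for s, c in check_results)
    return weighted / total_weight

# Hypothetical repository: fails a critical check, passes a high one.
results = [(0, "critical"), (10, "high"), (5, "medium")]
print(round(global_score(results), 1))
```

A single failed critical check drags the aggregate down sharply, which is consistent with the check-by-check analysis that follows.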
On the above chart, we have a median score of 5.2, and more than two thirds of our projects are in the range [4.2, 5.7). As we will see in the detailed analysis, there are a couple of checks that most of our projects fail, which explains this average score. We will focus on helping them improve their scores on those checks. The top 5% of Eclipse projects have a score above or equal to
The Dangerous Workflow check determines whether the project’s GitHub Actions workflows contain dangerous code patterns. Some examples of these patterns are untrusted code checkouts, logging of the GitHub context and secrets, or use of potentially untrusted inputs in scripts.
While this check is considered critical in terms of risk, dangerous workflows have been detected in only 4 repositories. Projects have been notified and fixes are on their way.
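As an illustration, the untrusted-checkout pattern mentioned above could look like the following deliberately unsafe, hypothetical workflow: pull_request_target runs with a privileged token, and checking out the pull request’s head then executes attacker-controlled code with that token in scope.

```yaml
# Hypothetical, deliberately UNSAFE workflow illustrating the
# untrusted-checkout pattern flagged by the Dangerous-Workflow check.
name: unsafe-example
on: pull_request_target   # runs with a privileged token
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          # Checks out attacker-controlled code from the PR head...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...and then executes it with the privileged token in scope.
      - run: ./build.sh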
While we have very few dangerous workflows, the permissions on tokens are usually elevated and are, as such, a source of risk.
The Token Permissions check determines whether the project’s automated workflow tokens are set to read-only by default.
A third of our repositories contain tokens which are not read-only. This does not follow the principle of least privilege, as it is highly likely that most of those tokens do not require elevated permissions. A best practice for projects is to always configure their workflows with the contents: read directive. We can help by verifying that all organizations have the most restrictive default settings for tokens enabled. If that is not the case, we will check with the project whether the permissive setting is required.
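As a minimal sketch, a workflow restricted this way could start like the following (the build step is a placeholder):

```yaml
# Sketch: restrict the default token for every job in this workflow
# to read-only access on repository contents.
name: build
on: push
permissions:
  contents: read
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: ./build.sh   # placeholder build step
```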
While those results are valuable, the reality is that most Eclipse projects do not use GitHub Actions to build their code. They use one of the 260 Jenkins instances operated by the Eclipse Foundation. Unfortunately, Scorecard does not analyze Jenkins pipeline files (yet?).
The next two checks, Branch-Protection and Code-Review, can be analyzed together, as the former can enforce the latter. Both also reduce the risk that a compromised contributor injects malicious and/or vulnerable code. Let’s look at branch protection first.
The observation is edifying: more than 80% of repositories don’t have branch protection activated. The most basic protections (preventing force pushes and branch deletion) should be activated, at the very least, on main and release branches. Release tags should be protected as well, to prevent compromised contributors from moving tags to a commit with malicious code. Note that tag protection is not currently checked by Scorecard.
Code reviews can be enforced by branch protection rules, and this should be activated whenever possible. However, it is not always practical for projects without enough reviewers to require that all contributions be reviewed.
About 25% of repositories have few or no code reviews; the remainder have reviews ranging from frequent to systematic. This is quite encouraging given that, for the vast majority of those repositories, code reviews are not enforced by branch protection, as we’ve seen previously. There is a good chance that some of those projects would be willing to enforce code review, as they are already quite accustomed to the process.
Combining those two checks may seem odd, but the reality is that some projects are mature and do not require a lot of maintenance, feature-wise. However, it is critically important to continue to monitor dependencies for updates, especially security updates. See below the percentage of repositories with a tool configured to check for dependency updates. Unfortunately, almost no project has such a tool configured. We will help them get there.
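For repositories hosted on GitHub, one such tool is Dependabot, enabled by committing a configuration file. A minimal sketch, assuming a Maven project (adjust the ecosystem to the project’s stack):

```yaml
# Sketch of a .github/dependabot.yml enabling weekly update checks.
# The Maven ecosystem is an assumption; substitute the project's own.
version: 2
updates:
  - package-ecosystem: "maven"
    directory: "/"
    schedule:
      interval: "weekly"
```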
This is already bad, but the next check shows that about half of the repositories received few or no commits in the last 90 days.
It’s not automatically an issue, as the documentation of this check states:
A lack of active maintenance is not necessarily always a problem. Some software, especially smaller utility functions, does not normally need to be maintained.
However, having a dependency update tool on those repositories is of high priority to avoid delivering rotting dependency trees.
Binary artifacts in the source repository cannot be reviewed. This is the major issue with them: they can be replaced by misbehaving equivalents, and their provenance can be difficult to establish.
As expected, only a handful of projects have binaries in their source repositories. We will investigate the lowest 10%. It may very well be that some of those binaries are test data, which does not seem to be excluded by Scorecard.
This check did not detect any open, unfixed vulnerability via OSV (Open Source Vulnerabilities). This result should not be interpreted too hastily as good news, as 100% is suspicious. We may just not do a good enough job of reporting vulnerabilities. This is also ongoing work.
This check tries to determine if the project cryptographically signs release artifacts.
Most of our projects don’t use GitHub releases to publish their binaries. As this check only supports GitHub releases, we do not get a lot of info.
For the hundred or so repositories using GitHub releases, the situation is clearer: they don’t sign their binaries. We will promote the usage of a signature tool (PGP, minisig, sigstore…) to remediate this.
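As a hypothetical sketch of the PGP option, a release artifact can be signed with a detached signature that consumers verify against the download (the key and artifact below are throwaway demonstration material, not a real release):

```shell
# Hypothetical sketch: detached PGP signature for a release artifact
# with GnuPG, using a throwaway key in an isolated keyring.
set -e
export GNUPGHOME="$(mktemp -d)"
printf 'release contents\n' > artifact.tar.gz
# Generate a demonstration-only key non-interactively.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo Signer <demo@example.org>" default default never
# Produce a detached ASCII-armored signature, published next to the artifact.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --armor --detach-sign artifact.tar.gz
# Consumers verify the download against the signature.
gpg --verify artifact.tar.gz.asc artifact.tar.gz
```

A real release would of course use a long-lived, published project key rather than a throwaway one.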
This check tries to determine if the project pins its dependencies.
Unfortunately, it only works by looking for unpinned dependencies in Dockerfiles, shell scripts, and GitHub workflows. As most of our projects are Java-based, it is safe to assume that we cannot draw many conclusions for repositories with a score of 10: nothing bad was detected because the dependencies in those repositories have most probably not been analyzed.
On the other hand, a substantial number of projects have imperfect scores. Those should be investigated.
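For the workflows that are analyzed, remediation means pinning each dependency to an immutable reference. A sketch for a GitHub Action step (the SHA below is a placeholder, not a real commit):

```yaml
steps:
  # Unpinned: the v3 tag can be moved to different code at any time.
  # - uses: actions/checkout@v3
  # Pinned to a full commit SHA (placeholder), with the tag as a comment:
  - uses: actions/checkout@0000000000000000000000000000000000000000 # v3.5.3
```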
This check tries to determine if the project has published a security policy.
The next three checks are more about code quality and security than about risks on the supply chain. The systematic usage of CI-Tests, Static Application Security Testing (SAST) tools, and Fuzzing can prevent known classes of bugs from being inadvertently introduced in the codebase.
The CI-Tests check determines whether the project runs tests before pull requests are merged. About 20% of repositories do that systematically, while another 20% do it irregularly. A large majority do not run CI tests before merging PRs, or only do so occasionally. This is probably due to a lack of knowledge about how this can be configured. Indeed, given the widespread adoption of continuous integration services among Eclipse projects, there is little chance that those projects do not have one that builds commits once they are merged. For projects using Jenkins at https://ci.eclipse.org, there are instructions available, and others can use GitHub Actions.
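For projects that pick GitHub Actions, a minimal test workflow could look like the following sketch (the Maven command is an assumption; substitute the project’s build tool):

```yaml
# Sketch: run the test suite on every pull request, so results
# gate the merge.
name: ci
on: pull_request
permissions:
  contents: read
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: mvn --batch-mode verify   # assumed Maven build
```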
The results for SAST tools are a bit disappointing, but Scorecard is only able to detect the usage of three tools on the market (for good reasons: it’s challenging to detect those): CodeQL, LGTM, and SonarCloud. Those three solutions are perfectly fine, but the addition of other major competitors, like Sonatype Lift, would be great. SAST tools should be more widely adopted, but it’s quite difficult to draw more conclusions given the limited number of supported options.
Last check: fuzzing. Only three of our projects fulfill the criteria. It’s not really surprising: fuzz testing is hard and requires projects to write new tests specifically for that purpose. The rewards can be great though, uncovering programming errors not detected otherwise. OSS-Fuzz is one project that provides infrastructure to run fuzz testing more easily and reports detected bugs back to the projects.
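The core idea can be sketched in a few lines of Python: feed many randomized inputs to a function and flag any crash that is not an expected input-validation error. Real harnesses such as OSS-Fuzz are coverage-guided and far more effective; parse_key_value below is a made-up function under test.

```python
import random
import string

def parse_key_value(line):
    """Toy function under test (made up for this sketch): parse 'key=value'."""
    key, sep, value = line.partition("=")
    if not sep or not key:
        raise ValueError(f"malformed line: {line!r}")
    return key, value

def fuzz(rounds=1000, seed=42):
    """Naive random fuzzer: feed random strings, collect unexpected crashes."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + "= \t"
    crashes = []
    for _ in range(rounds):
        line = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 20)))
        try:
            parse_key_value(line)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception as exc:  # anything else is a bug worth reporting
            crashes.append((line, exc))
    return crashes

print(f"unexpected crashes: {len(fuzz())}")
```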
The last two checks provide an indication of the health and soundness of a project: the declaration of its license and the diversity of its contributors. While not strictly indicators of the security of the project or its supply chain, they show that the projects are developed professionally.
The first check verifies that the repository contains a file in the top-level directory with an appropriate license text. This is actually a mandatory file for any project at the Eclipse Foundation. Scorecard finds that 20% of the scanned repositories don’t have a license file. After a quick scan, it seems that most of the offenders are not the projects’ main repositories and that the missing license is an oversight in secondary repositories. It must be fixed nonetheless.
Finally, the Contributors check tries to determine if the project has recent contributors from multiple companies. This is one of the objectives of the Eclipse Foundation: to ensure a level playing field for everyone to participate and contribute. The results are quite positive. About 60% of the repositories received contributions from at least 3 different companies in the last 30 commits; each of those contributors must have had at least 5 commits in the last 30. The rest is divided between projects receiving less than that and projects which received no contributions from external organizations. This last part is quite hard to interpret: Scorecard uses contributors’ affiliations from their GitHub profiles, and many don’t share them.
The outcome of this analysis helps us prioritize what we should do on which repositories. In order to have the best and broadest impact, we will focus on:
We have many other initiatives in the making to improve the security of the supply chain of our projects. We will share them here soon. Stay tuned!