new security policy #44918
Comments
Thanks for the outstanding work the Security Team has been doing, Filippo! It would be helpful if the issues were also tagged to distinguish between:

- vulnerabilities in the Go toolchain itself, and
- vulnerabilities in the Go standard library.

The distinction is not cut-and-dried in this era of automated CI/CD deployments and state-level actors engaging in supply-chain attacks, but it would help users assess whether the release warrants expedited deployment or not.
Ah, that's a good idea, @fazalmajid. The two classes do require different preparations, sometimes even by different teams, so it makes sense to mention that in the pre-announcement. We can use a statement like this in pre-announcements: "The upcoming Go 1.A.B and Go 1.X.Y releases include fixes for HIGH severity (per our policy at golang.org/security) vulnerabilities in the Go toolchain / in the Go standard library / in both the Go toolchain and the Go standard library."
Why do you want to use a three-tier severity scale, when common practice in the industry is a four-tier scale? The mentioned handling procedures could be similar for Medium and High severity issues.
Thanks for the outstanding work, @FiloSottile! For
@p-rog I prefer to ask why introduce a fourth tier, when we wouldn't do anything differently for it? What benefit would it provide? How would we pick what's a MEDIUM and what's a HIGH? What would we communicate to users about how differently they should treat them? The current criteria are clear: LOW are things we are comfortable fixing in public, CRITICAL are things we want to fix right now, and HIGH is everything else. It's not an easy assessment to make, but it's a necessary and useful one. What would be the criteria for MEDIUM?

CVSS is an excellent example of how these scales break down when they try to apply more rigid and abstract criteria to software that's reused in diverse contexts. In my experience, CVSS is unusable for anything that is not a piece of software deployable on its own: for example, how do you pick remote vs. local exploitation for a library? If it's used on remote inputs, it's remote; if it's used on local inputs, it's local! (This is not a made-up example: different distributions scored the recent RCE in libgcrypt differently with CVSSv3 because they rated it local vs. remote. In our scale, it would clearly be a "let's fix that right now", so a CRITICAL.)

A standard library is the ultimate context-dependent software, so it would be especially meaningless for us to try and use criteria like the CVSS ones.
I do not understand the
@FiloSottile I understand your point of view. Your proposed severity levels are in direct relation to how you want to handle these cases. But that's not the purpose of a severity rating: the severity rating should show how serious the vulnerability is. How you handle the cases is of course related to the severity level, but a severity scale takes into account the potential risk of the discovered vulnerability. Maybe take a look at Red Hat's security ratings (https://access.redhat.com/security/updates/classification).

Regarding CVSS, it's not ideal, because not every use case can be captured in a single CVSS score for a vulnerability. But the worst-case scenario should be taken into consideration in the CVSS calculation; then CVSS makes sense. A flaw could be HIGH overall, but in some scenarios, such as an application that only uses local inputs, the impact could be lower and the CVSS could be different, and that will be covered by the application vendor. In other words, a flaw in the standard library should be analyzed on your side in relation to the worst possible scenario, and based on that you should assign the appropriate severity level and express it in CVSS.

In my experience, a three-tier scale can't handle all cases. That's why a four-tier scale is more popular in the industry.
If analyzed against the worst possible scenario, no vulnerability in the standard library (and arguably in any library) is ever going to be local, since applications might take remote input and pass it to the library; but that score is not going to be particularly useful to most users. However, it's true that we might be misusing the concept of severity, especially if we'd score any non-CRITICAL vulnerability as LOW when it's already widely known and not worth fixing in private. Maybe we should rename the tiers PUBLIC, PRIVATE, and URGENT (or something similar, if anyone has better ideas)?
That's a good point. I'd be open to declaring (In general, we should progressively document the security expectations of the various parts of the distribution, but that's beyond the scope of this proposal.)
If it won't be called a Severity scale, but rather a "Handling scale" or "Handling types", then it's a very good idea!
If we have a vulnerability that can cause code execution while downloading (but not building or running) module dependencies, such as for
That reads a little odd to me, and it seems too focused on the mechanism rather than the criticality; I think people who work with security concerns at their orgs but are not deep in the Go ecosystem would be confused to hear "a PRIVATE level security issue has been discovered and will be addressed in release X.Y on date Z". The original LOW/HIGH/CRITICAL sounds fine to me, FWIW.
But the proposed severity scale is based on how cases will be handled; that's why it would be better to call it a "Handling scale" or "Handling types", with levels PUBLIC, PRIVATE, and URGENT. The severity scale should be directly related to the impact of the flaws.
To be clear, if we do switch to something like PUBLIC, PRIVATE, and URGENT, we will not surface those labels in announcements. We'll simply pre-announce an undisclosed vulnerability fix for PRIVATE vulnerabilities, and just list them in the release announcements for the rest.
ah ok thanks :)
I updated the proposal to refer to PUBLIC/PRIVATE/URGENT tracks rather than severity, based on the feedback in this thread.
Based on the discussion above, this proposal seems like a likely accept.
No change in consensus, so accepted. 🎉
Change https://golang.org/cl/352029 mentions this issue:
Change https://go.dev/cl/393357 mentions this issue:
Make it possible to use the higher-level fix summary fields, instead of CustomSummary, when a release has both bug fixes and security fixes. Rewrite the recent hand-written custom summaries to use the new fields, which produces equivalent output that differs largely in white-space and some trivial consistency fixes.

Fixes golang/go#51719.
Updates golang/go#38488.
Updates golang/go#44918.

Change-Id: I672cea21f63cb4ab9764efb6cbc783cf503b791c
Reviewed-on: https://go-review.googlesource.com/c/website/+/393357
Trust: Dmitri Shuralyov <[email protected]>
Reviewed-by: Carlos Amedee <[email protected]>
The golang.org/security page is updated according to the new security policy.

Fixes golang/go#44918

Change-Id: I66306aa0368ee12f89f68f97a2ae1412d98da628
Reviewed-on: https://go-review.googlesource.com/c/website/+/352029
Trust: Julie Qiu <[email protected]>
Trust: Katie Hockman <[email protected]>
Run-TryBot: Julie Qiu <[email protected]>
TryBot-Result: Go Bot <[email protected]>
Reviewed-by: Katie Hockman <[email protected]>
Background
The current Go security policy, golang.org/security, dictates that whenever a valid security vulnerability is reported, it will be kept confidential and fixed in a dedicated release.
The security release process is handled by the Security and Release teams in coordination, and deviates from the general release process in that, for example, it doesn't use the public Builders or TryBots. This has led to issues going undetected in security releases in the past.
There are no tiers, and the distinction is binary: either something is a security fix, or it’s not.
Security releases are pre-announced on golang-announce three days before the release.
We’ve issued six security releases in the past eight months, on top of the eight regularly scheduled point releases.
Proposal
We propose introducing three separate tracks for security fixes.
Examples of past security issues include encoding/xml: infinite loop when using xml.NewTokenDecoder with a custom TokenReader #44913, crypto/elliptic: incorrect operations on the P-224 curve #43786, net/http/cgi,net/http/fcgi: Cross-Site Scripting (XSS) when Content-Type is not specified #40928, encoding/binary: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs #40618, and crypto/x509: certificate validation bypass on Windows 10 #36834.

The Security team reserves the right to choose the track of specific issues in exceptional circumstances based on our case-by-case assessment.
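As a side note on the encoding/xml example above (#44913), here is a minimal, hypothetical sketch of the API pattern that issue's title refers to: wrapping a token stream in a custom xml.TokenReader and handing it to xml.NewTokenDecoder. This code is not from the issue or the proposal; the limitedTokenReader type and the 1,000-token cap are illustrative assumptions.

```go
package main

import (
	"encoding/xml"
	"fmt"
	"io"
	"strings"
)

// limitedTokenReader is a custom xml.TokenReader that caps how many tokens it
// will return, so a misbehaving underlying stream cannot loop forever.
type limitedTokenReader struct {
	inner     xml.TokenReader
	remaining int
}

func (r *limitedTokenReader) Token() (xml.Token, error) {
	if r.remaining <= 0 {
		return nil, io.EOF
	}
	r.remaining--
	return r.inner.Token()
}

func main() {
	// An *xml.Decoder satisfies xml.TokenReader, so it can serve as the
	// underlying stream for another decoder built with xml.NewTokenDecoder.
	src := xml.NewDecoder(strings.NewReader("<a><b>hi</b></a>"))
	dec := xml.NewTokenDecoder(&limitedTokenReader{inner: src, remaining: 1000})

	for {
		tok, err := dec.Token()
		if err != nil {
			break // io.EOF when the stream (or the token cap) is exhausted
		}
		fmt.Printf("%T\n", tok)
	}
}
```

Bounding the number of tokens consumed is only a caller-side mitigation for this class of infinite-loop bug; the underlying issue was fixed in the standard library itself.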
We also propose the following handling procedures for each track.
All security issues are issued CVE numbers.
Motivation
Fundamentally, this proposal is about making the security policy scale.
Every package can be used in many different ways, some of them security-critical depending on context. So almost anything not behaving as documented can be argued to be a security issue. We want to fix these issues for affected users, but doing so in separate security releases imposes a cost on all Go users. With each security release, the Go community needs to scramble to assess it and update. If security releases become too frequent, users will stop paying attention to them, and the ecosystem will suffer.
The introduction of the tracks helps the community assess their exposure in each point release, and merging the security and non-security patch releases will lead to fewer overall updates and a more predictable schedule.
Originally, the rationale for dedicated security releases was that there should be nothing in the way of applying a security patch, like concerns about the stability of other changes. However, since security releases are made on top of the previous minor release, this only works if systems were updated to the latest minor release in the time between that and the security release. This time is on average two weeks, which doesn’t feel like long enough to be valuable. It’s also important to note that only critical fixes are backported to minor releases in the first place.