"Does the GDPR allow..."
I'd be wary of relying on the Article 6(1)(e)/6(1)(f) distinction, as there's a risk of creating barriers to sharing between CSIRTs that use different legal bases. This came up a long time ago in ENISA's LEA/CSIRTs workshops: CSIRTs were reluctant to share information with entities that could do things with it that weren't lawful for the CSIRT itself. Government CSIRTs need to be particularly careful here.
It also struck me (having been drafting a chapter on "necessary") that MISP can probably make the case for necessity more easily. With netflow and other logs that you know contain mostly innocent traffic, the best argument I can find is that, because any individual or system might be the victim of an attack, keeping attack-detection data about all of them is the least intrusive way to be able to protect them all. I'm distinguishing here from the Watson/Tele2 case, where there was grave doubt about keeping data on people who had no possible connection to terrorists. But before MISP holds any data, someone has already decided that it relates to an attack, so most of the innocent stuff will never be put into your system.
One other thing I'd have expected to see: I presume most of the information in MISP is (functionally at least) pseudonymous, as defined and encouraged in the GDPR? I've been making that point about network log files, where it should be relatively simple to separate out the "link IP address to person" process so as to fully satisfy the definition. I don't know whether that works for MISP - maybe you don't have access to the linking information at all, which would help under Breyer - but if you do allow people to upload login and DHCP records, it might be worth offering separate controls for those (a rough sketch of what separating the linking step could look like is below).
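To make the pseudonymisation point concrete, here is a minimal illustrative sketch - not a MISP feature, just an assumed workflow - of separating the "link IP address to person" step: shared records carry only a keyed pseudonym of the address, while the linking key (and any login/DHCP records needed to map addresses back to people) stays with the originating organisation. The field names and key handling are my own assumptions for illustration.

```python
# Illustrative sketch only. Shared data carries a keyed pseudonym; the key
# needed to re-identify the address is held separately by the data owner,
# which is what the GDPR's definition of pseudonymisation (and Breyer) turn on.
import hmac
import hashlib

# Assumption: this key is stored and access-controlled separately from any
# shared data, so recipients cannot reverse the pseudonyms on their own.
LINKING_KEY = b"kept-by-the-data-owner-only"


def pseudonymise_ip(ip_address: str) -> str:
    """Replace an IP address with a stable, keyed pseudonym (HMAC-SHA256)."""
    digest = hmac.new(LINKING_KEY, ip_address.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability


def pseudonymise_record(record: dict) -> dict:
    """Return a copy of a log record with the identifying field pseudonymised."""
    shared = dict(record)
    shared["src_ip"] = pseudonymise_ip(record["src_ip"])
    return shared


if __name__ == "__main__":
    original = {"src_ip": "192.0.2.10", "indicator": "evil.example.com"}
    print(pseudonymise_record(original))
```

Because the pseudonym is stable, recipients can still correlate sightings of the same address across events, but only the key holder can turn it back into an identifiable person or system.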
"What are the grounds..."
Consent seems highly questionable, given the various statements in the GDPR (and the recent Article 29 Working Party draft guidance) about linking consent to the provision of a service. Also, I don't think $badguy is going to consent (though I'd love him to exercise his right to object ;-). My strong preference, as above, is for everyone to use legitimate interests or, if they are a body with permission to use other justifications, to commit to doing only things that would be permitted under legitimate interests.
"Conclusion" (and maybe also elsewhere)
I've been struck by how the discussion has actually moved further in the direction of security/incident response since the GDPR was agreed. In the discussions of the draft ePrivacy Regulation, the Parliament, the Council and the regulators have all been pointing out areas where the Commission's proposed NIS exemptions need to be enlarged - e.g. to allow patching to be an opt-out interference with an endpoint device rather than opt-in. Coming particularly from the Parliament and the EDPS/Article 29 Working Party, whose opinions were otherwise critical of the Commission for allowing too much processing, that's startling.
And the Article 29 Working Party's draft opinion on Breach Notification goes a lot further than Recital 49, pretty much saying that preparations to detect and mitigate incidents are mandatory, and that failing to have them may actually lead to a separate fine from the one for not reporting! See https://community.jisc.ac.uk/blogs/regulatory-developments/article/article-29-wp-draft-breach-notification
Incidentally, I presume you already know about my academic (though, I hope, readable) paper on "Incident Response: Protecting Individual Rights Under the GDPR" (https://script-ed.org/article/incident-response-protecting-individual-rights-under-the-general-data-protection-regulation/)? As time permits (and it hasn't much, lately) I'm working on another on the issues raised by incident detection - hence "necessary", but also "purpose limitation" and "profiling". At the current rate that's going to take much of next year. But I was also thinking there might be a paper specifically on information sharing, prompted by a comment in one paper I read suggesting that the GDPR does cause a problem because of the requirement to notify data subjects when you obtain their personal data from a third party. I hope that's wrong.