I found a Vulnerability. They found a Lawyer
229 points - today at 7:19 PM
Comments
1) If you make legal disclosure too hard, the only way you will find out is via criminals.
2) If other industries worked like this, you could sue an architect who discovered a flaw in a skyscraper. The difference is that knowledge of a bad foundation doesn’t inherently make a building more likely to collapse, while knowledge of a cyber vulnerability is an inherent risk.
3) Random audits by passers-by are way too haphazard. If a website can require my real PII, I should be able to require that the PII be kept secure. I’m not sure what the full list of industries would be, but insurance companies should be categorically required to have a cyber audit, and those same laws should protect white hats from lawyers and allow class actions from all affected users. That would change the incentives so that the most basic vulnerabilities disappear, and software engineers become more economical than lawyers.
I guess I should feel lucky they didn't try to have me criminally prosecuted.
Governments and companies talk a big game about how important cybersecurity is. I'd like to see some legislation to prevent companies and governments [1] behaving with unwarranted hostility to security researchers who are helping them.
https://www.nilsbecker.de/rechtliche-grauzonen-fuer-ethische...
Why sign anything at all? The company was obviously not interested in cooperation, but in domination.
> every account was provisioned with a static default password
Hehehe. I have failed countless job interviews for mistakes much less serious than that. Yet someone gets the job while making worse ones, and there are plenty of such systems in production handling real people's data.
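For contrast, here is a minimal sketch (in Python, purely illustrative and not from the article's actual system) of what provisioning should look like: each account gets a unique random initial password instead of a shared static default, with the expectation that it is rotated on first login.

```python
import secrets
import string

def provision_initial_password(length: int = 16) -> str:
    """Generate a unique, cryptographically random initial password
    for one account, instead of a static default shared by all."""
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

# Every provisioned account gets its own secret; a real system would
# also flag it as expired so the user must set a new one on first login.
pw_a = provision_initial_password()
pw_b = provision_initial_password()
assert pw_a != pw_b  # collisions are vanishingly unlikely
```

The key point is `secrets` rather than `random`: the former is designed for security-sensitive tokens, and per-account generation means one leaked credential no longer unlocks every account.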
The only sensible approach here would have been to cease all correspondence after their very first email/threat. The nation of Malta would survive just fine without you looking out for them and their online security.
Here, all databases containing personal information must be registered, and the data must be kept secure.
I came across a pretty serious security concern at my company this week. The ramifications are alarming. My education, training and experience tells me one thing: identify, notify, fix. Then when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.
Anytime I see an article about a data breach, I wonder how long these vulnerabilities were known and ignored. Is that just how business is conducted? It appears so, for many companies. Then why such a focus on security in education, if it has very little real-world application?
By even flagging the issue and the potential fallout, I’ve put my career at risk. These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.
What are the odds an insurer would reach for a lawyer? They probably have several on speed dial.
The problem is this is literally a matter of national security, and currently we sacrifice national security for the convenience of wealthy companies.
Also, we all have our private data leaked multiple times per month. We see millions of people having their private information leaked by these companies, and there are zero consequences. Currently, the companies say, "Well, it's our code, it's our responsibility; nobody is allowed to research or test the security of our code because it is our code and it is our responsibility." But then, when they leak the entire nation's private data, it's no longer their responsibility. They're not liable.
As security issues continue to become a bigger and bigger societal problem, remember that we are choosing to hamstring our security researchers. We can make a different choice and decide we want to utilize our security researchers instead, for the benefit of all and for better national security. It might cause some embarrassment for companies though, so I'm not holding my breath.
This sounds like a cultural mismatch with their lawyers. Which is ironic, since the lawyers in question probably thought of themselves as being risk-averse and doing everything possible to protect the organisation's reputation.
Generally speaking, I think case law has avoided shooting the messenger, but if you use your unauthorized access to find PII on minors, you may be setting yourself up for problems, regardless of whether the goal is merely dramatic effect. You can, instead, document everything and hypothesize the potential risks of the vulnerability without exposing yourself to accusations of wrongdoing.
For example, the article talks about registering divers. The author could have asked the next diver for permission to attempt to set their password without reading their email, and that would clearly have demonstrated the vulnerability. No kids "in harm's way".
So you never report to the actual organization but to a security organization, like you did. They would be better equipped to deal with this, could validate how serious the issue is, and could assign a reward as well.
So you, as a researcher, report your finding and can't be sued or bullied by the organization that is offending in the first place.
So it sort of makes sense that companies would go on the attack because there's a risk that their insurance company will catch wind and they'll be on the hook.
By the way, I have a story about accidentally hacking an online portal at our school. It didn't go far and I was "caught", but anyway. This is how we learn to be more careful.
I believe that in every system like this it's fairly easy to find a vulnerability. Nobody cares about them, and the people who build these systems don't have the skill to do it right. Data is going to be leaked. That's the unfortunate truth. It gets worse with the advent of AI: since it has zero understanding of what it is actually doing, it will make mistakes that cause even more data leaks.
Even if you don't consider yourself an evil person, would you stay the same after learning of a real security vulnerability? Who knows. Some might take advantage. Some won't, and will still be punished despite doing everything the "textbook way".
A) In the EU, GDPR will trump whatever BS they want to try.
B) No confirmation that affected users were notified.
C) Aggro threats.
D) Nonsensical threats, sourced to a Data Privacy Officer with seemingly zero scruples and little experience.
Due to B), there's a strong responsibility rationale.
Due to the rest, there's a strong name-and-shame rationale. Sort of the equivalent of a bad Yelp review for a restaurant, but for SaaS.
Based on this interaction, you have to wonder what it's like to file a claim with them.