Funny how CrowdStrike already sounds like some malware’s name.
It literally sounds like a DDoS!
Botnet if you will
Not too surprising if the people making malware and the people making the security software are basically the same people, just with slightly different business models.
Reminds me of the tyre store that spreads tacks on the road 100m away from their store in the oncoming lanes.
People get a flat, and oh what do you know! A tyre store! What a lucky coincidence.
Classic protection racket. “Those are some nice files you’ve got there. It’d be a shame if anything happened to them…”
It sounds like the name of a political protest group.
This is, in a lot of ways, impressive. This is CrowdStrike going full “Hold my beer!” at everyone swapping stories about their worst production deploy fuckups.
I’m volunteering to hold their beer.
Everyone, remember to sue the services that weren’t able to provide their respective service. Teach them to take better care of their IT landscape.
Typically auto-applying updates to your security software is considered a good IT practice.
Ideally you’d, like, stagger the updates and cancel the rollout when things stop coming back online, but who actually does that completely correctly?
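A minimal sketch of what that staggered rollout could look like, for the curious. Everything here is invented for illustration; `deploy_to` and `fraction_healthy` stand in for whatever fleet tooling you actually have:

```python
import time

# Hypothetical staged rollout: push to a small wave first, wait for hosts
# to come back and report healthy, and halt everything if they don't.
STAGES = [0.01, 0.05, 0.25, 1.0]  # fraction of the fleet per wave
HEALTHY_THRESHOLD = 0.99          # abort if fewer hosts report back

def rollout(update, fleet, deploy_to, fraction_healthy):
    deployed = 0
    for stage in STAGES:
        wave = fleet[deployed:int(len(fleet) * stage)]
        deploy_to(wave, update)
        deployed += len(wave)
        time.sleep(15 * 60)  # give hosts time to reboot and phone home
        if fraction_healthy(fleet[:deployed]) < HEALTHY_THRESHOLD:
            raise RuntimeError("rollout halted: hosts not coming back online")
```

The point is the abort check between waves: a deploy that bricks machines never gets past the first one percent.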
Applying updates is considered good practice. Auto-applying is the best you can do with the money provided. My critique here is the amount of money provided.
Also, you cannot pull a Boeing and let people die just because you cannot 100% avoid accidents. There are steps in between these two states.
you cannot pull a Boeing and let people die
You say that, but have you considered the savings?
People are temporary. Money is forever.
I have. They are not mine. The dead people could be.
Edit: I understand you were being sarcastic. This is a topic where I chose to ignore that.
That’s totally fair. :)
I work at a different company in the same security space as CrowdStrike, and we spend a lot of time considering stuff like “if this goes sideways, we need to make sure the hospitals can still get patient information”.
I’m a little more generous giving the downstream entities slack for trusting that their expensive upstream security vendor isn’t shipping them something entirely fucking broken.
Like, I can’t even imagine the procedural fuck-up that results in a BSOD getting shipped like that. Even if you have auto-updates enabled for our stuff, we’re still slow-rolling it and making sure we see things being normal before we make it available to more customers. That’s after our testing and internal deployments.

I can’t put too much blame on our customers for trusting us when we spend a huge amount of energy convincing them we can be trusted to literally protect all their infrastructure and data.
I can put the blame on your customers. If I sign a contract with a bank, they are responsible for my money. I don’t care about their choice of infrastructure. They are responsible for this. They have to be sued for this. Same for hospitals. Same for everyone else. Why should they be exempt from punishment for not providing the one service they were trusted to provide? Am I expected to feel for them because they made the “sensible choice” of employing the cheapest tools?
This was a business decision to trust someone external. It should not be tolerated that they point their fingers elsewhere.
You seem knowledgeable. I’m surprised that it’s even possible for a software vendor to inject code into the kernel. Why is that necessary?
I’m willing to believe that CrowdStrike was actually compromised by a bad actor who realised how fragile CS was.
Can’t get hacked if your machine isn’t running.
You’re hired!
Idk boss, people weren’t too happy the last time we tried that.
What’s the saying about dying a hero or becoming the villain?
ItS NoT A wInDoWs PrObLeM – Idiots, even on Lemmy
I genuinely can’t tell whom you’re addressing. Those claiming it is a Windows problem, or those who say otherwise?
Hi, idiot here. Can you explain how it is a windows problem?
If you patch a security vulnerability, whose fault is the vulnerability? If the OS didn’t suck, why does it need a 90-billion-dollar operation to unfuck it?
Redhat is VALUED at less than that.
https://pitchbook.com/profiles/company/41182-21
It’s a fucking windows problem.
Sure, but they weren’t patching a Windows vulnerability, Windows software, or a security issue; they were updating their own software.
I’m all for blaming Microsoft for shit, but “third party software update causes boot problem” isn’t exactly anything they caused or did.
You also missed that the same software is deployed on Mac and Linux hosts.
Hell, they specifically call out their redhat partnership: https://www.crowdstrike.com/partners/falcon-for-red-hat/
CrowdStrike completely screwed the pooch with this deploy, but ideally Windows wouldn’t get crashed by a bad 3rd-party software update. Although the crashes may be by design, in a way: if you don’t want your machine running without the security software, and the security software is buggy and won’t start up, maybe the safest thing is to not start up?
Are we acting like Linux couldn’t have the same thing happen to it? There are plenty of things that can break boot.
CrowdStrike also supports Linux, and if they fucked up a Windows patch, they could very well fuck up a Linux one too. If they ever pushed a broken update to Linux endpoints, it could very well cause a kernel panic.
Yeah, it’s a CrowdStrike issue. The software is essentially a kernel module, and a borked kernel module will have a lot of opportunities to ruin stuff, regardless of the OS.
Ideally, you want your failure mode to be configurable (rough sketch below), since places like hospitals would often rather have a failure in the security system keep medical record access available. :/ If attackers are to the point of touching system files, you’re pretty close to “game over” for most security contexts, unfortunately. There are some fun things you can do with hardware encryption modules in some cases, but at that point you’re limiting damage more than preventing a breach.

Architecture-wise, the Windows hybrid-kernel model is potentially more stable in the face of the “bad kernel module” sort of thing, since a driver or module can fail without taking out the rest of the system. In practice… not usually, since your video card shitting the bed is gonna ruin your day regardless.
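To make the configurable-failure-mode point concrete, here’s a rough sketch. The names and the config knob are hypothetical, not anything CrowdStrike actually ships:

```python
import enum

class FailureMode(enum.Enum):
    FAIL_OPEN = "fail_open"      # sensor down: keep the machine usable, alert loudly
    FAIL_CLOSED = "fail_closed"  # sensor down: refuse to run unprotected (BSOD-ish)

def halt_system():
    raise SystemExit("refusing to run without the security sensor")

def alert_operators():
    print("ALERT: sensor down, running unprotected; page the on-call")

def on_sensor_crash(mode: FailureMode):
    # A hospital records terminal might pick FAIL_OPEN; a domain controller
    # holding crown-jewel credentials might pick FAIL_CLOSED.
    if mode is FailureMode.FAIL_CLOSED:
        halt_system()
    else:
        alert_operators()
```

The incident behaved like everything was hard-wired to fail closed, which is exactly the wrong default for a hospital terminal.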
deleted by creator
Because it isn’t. Their Linux sensor also uses a kernel driver, which means they could have just as easily caused a looping kernel panic on every Linux device it’s installed on.
There’s no way of knowing that, though. Perhaps their Linux and Darwin drivers wouldn’t have panicked the system?
Regardless, doing almost anything at the kernel level is never a good idea
It’s not impossible. Crowdstrike has done it recently to linux machines.
Kernel panic observed after booting 5.14.0-427.13.1.el9_4.x86_64 by falcon-sensor process:
https://access.redhat.com/solutions/7068083

Paywalled, unfortunately.
Also, it’s less about “their” drivers and more about what a kernel module can do.
Saying “there’s no way to know” doesn’t fit, because we do know that a malformed kernel module can destabilize a Linux or Mac system. “Malformed file” isn’t a programming defect or something you can fix by having a better API.
Having the data exposed to userspace via an API would avoid having to have a kernel module at all… Which when malformed wouldn’t compromise the kernel.
I mean, sure. But typically operating systems don’t expose that type of information to user space, instead providing a kernel interface with user mode configuration.
It’s why they use the same basic approach on mac and Linux.
Security operations are one of the things often best done at the kernel level, because you need to monitor network and file operations in a way you can’t from user mode.
deleted by creator
Maybe this is a case of hindsight being 20/20, but wouldn’t they have caught this if they’d tried pushing the file to a test machine first?
It’s not hindsight, it’s common sense. It’s gross negligence on CS’s part 100%
Well, it is hindsight 20/20… But also, it’s a lesson many people have already learned. There’s a reason people use canary deployments lol. Learning from other people’s failures is important. So I agree, they should’ve seen the possibility.
I saw one rumor that they uploaded a gibberish file for some reason. In another, a Windows update shipped just before they uploaded their well-tested update. The first is easy to avoid with a checksum. The second… I’m not sure; maybe only allow the installation if the Windows update versions match (checksum again) :D
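The checksum half of that is cheap to sketch. This assumes `expected_sha256` comes from a manifest the vendor publishes out of band; the function name is made up:

```python
import hashlib

def verify_update(path: str, expected_sha256: str) -> bool:
    # Hash the update file and refuse to install it if the digest doesn't
    # match what the vendor published. Catches the "gibberish file" case.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```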
Windows has beta channels for their updates
It’s a sequence of problems that lead to this:
- The kernel driver should have parsed the update, or at a minimum it should have validated a signature, before trying to load it (see the sketch after this list).
- There should not have been a mechanism to bypass Microsoft’s certification.
- Microsoft should never have certified and signed a kernel driver that loads code without any kind of signature verification, and probably not one that loads code at all.
Many people say Microsoft isn’t at fault here, but I believe they share the blame; they are responsible when they actually certify the kernel drivers that get shipped to customers.
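As a sketch of the first point in that list, here’s roughly what “validate a signature before loading” could look like, using the third-party `cryptography` package. The function and key handling are illustrative, not how Falcon actually works:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def load_update(blob: bytes, signature: bytes, pubkey_bytes: bytes) -> bytes:
    # Verify a detached Ed25519 signature over the update blob before any
    # parsing happens; a corrupt or tampered file never reaches the parser.
    pubkey = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        pubkey.verify(signature, blob)  # raises InvalidSignature on mismatch
    except InvalidSignature:
        raise ValueError("update rejected: bad signature, refusing to parse")
    return blob  # only now is it safe to hand to the content parser
```

Verification in userspace is the easy part; the real lesson is that the kernel component should never touch bytes that haven’t passed a check like this.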
Now threat actors know what EDR they are running and can craft malware to sneak past it. yay(!)
Smart threat actors use the EDR for distribution. Seems to be working very well for whoever owned SolarWinds.
SHOULD’VE USED OPENBSD LMAO
Who says it was accidental?
Netflix knew they were going to move from DVD rentals to streaming over the Internet. It is right in their name.
CrowdStrike knew they were eventually going to _________. It is right in their name.