OMG, it's so hard to reboot into safe mode, delete one bad file, and then reboot again. SO HARD, guys. So let's run Windows 3.1 and Windows 95 and deploy old crusty Windows 2008 servers again till this blows over. /s
I'm in IT for a living. Even the dimmest bulbs at my company were able to solve this and keep the same PC running, minus CrowdStrike's bad update. I struggle to imagine what it must be like for other companies' IT staff to have made the choices they did during this.
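(For anyone outside IT: the "one bad file" here is CrowdStrike's faulty channel file. The fix everyone was passing around was to boot into Safe Mode or the recovery environment and delete it. A rough sketch of that step, assuming the publicly reported driver path and the C-00000291*.sys filename pattern; in reality this was done by hand or via a bootable script, Python here is just to spell the steps out:)

```python
# Sketch of the manual CrowdStrike remediation step, assuming the publicly
# reported driver path and the C-00000291*.sys channel-file pattern.
# In practice this was done by hand from Safe Mode / WinRE; this is only
# meant to make the "delete one bad file" step concrete.
from pathlib import Path

CROWDSTRIKE_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

def remove_bad_channel_files(dry_run: bool = True) -> None:
    """List (or delete) the faulty channel files so the machine can boot normally again."""
    for f in CROWDSTRIKE_DIR.glob("C-00000291*.sys"):
        if dry_run:
            print(f"would delete: {f}")
        else:
            f.unlink()
            print(f"deleted: {f}")

if __name__ == "__main__":
    remove_bad_channel_files(dry_run=True)  # flip to False to actually delete
```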
Complexity increases exponentially in large organizations, for a number of reasons.
I know my industry, thanks. I've got over a decade of experience in sizable, complex organizations. You know who else likes to cling to outdated hardware and software? Hopefully this scares you, because it should: medical organizations.
What these people did (and what others claimed was done) during this fiasco was objectively worse than the actual fix, but everyone in this thread just isn't happy that I didn't join them in shitting on Microsoft. This is where Lemmy shows that its users are becoming more like Reddit users every day. Or don't know what /s means.
Hopefully none of those systems were exposed to anything internet-facing, for obvious reasons, but given the sheer incompetence observed I wouldn't be surprised.
Hi, I also know my industry, with over a quarter century of experience in Fortune 500 companies. The old motto of IT used to be ‘if it ain’t broke, don’t fix it’ and then salespeople found out that fear is a great sales tool.
When proper precautions are taken and a risk analysis is performed, there is no reason old operating systems and software can't continue to be used in production environments. I've seen many more systems taken down by updates gone wrong than by running 'unsupported' software. Just because a system is old doesn't mean it is vulnerable; often the opposite is true.
There's not fixing what ain't broke, and then there's refusing to budget to move on when needed. There are a lot of ifs and assumptions in your reply trying to put me in my place here. Old software that won't run on anything modern becomes a recipe for disaster when the hardware it depends on breaks down and can't be replaced with anything that functions.
Fuck it, let's see if all 3 of my replies can get to negative three digits: none of this matters, because the original problem was pretty damn easy to fix, but here I am taking shit on social media for saying so.
P.S. In medical client situations there are compliance laws involved, and I keep seeing hospitals and practices not meet them until they start eating fines, because they want to use every machine till it literally falls apart.
Yes, yes it is, if you run BitLocker with external verification.
It’s even harder if the server you use for the verification itself is down.
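Right, and then every encrypted machine wants its 48-digit recovery key before you can even touch the file from recovery, so somebody has to look those keys up and read them out over the phone. A hypothetical sketch of that lookup, assuming the keys had been exported to an offline CSV beforehand (the file name and column names here are made up, not a real export format):

```python
# Hypothetical sketch: looking up BitLocker recovery keys from an offline export
# so helpdesk can read them out to users stuck at the recovery prompt.
# Assumes a CSV with "hostname" and "recovery_key" columns was exported in
# advance (e.g. from the directory service); file name and columns are made up.
import csv
from pathlib import Path

EXPORT_FILE = Path("bitlocker_keys_export.csv")  # hypothetical offline export

def load_keys(path: Path) -> dict[str, str]:
    """Map hostname -> 48-digit recovery key from the offline export."""
    with path.open(newline="") as fh:
        return {row["hostname"].lower(): row["recovery_key"] for row in csv.DictReader(fh)}

def lookup(hostname: str, keys: dict[str, str]) -> str | None:
    """Return the recovery key for a hostname, or None if it isn't in the export."""
    return keys.get(hostname.lower())

if __name__ == "__main__":
    keys = load_keys(EXPORT_FILE)
    print(lookup("LAPTOP-1234", keys) or "no key on file - escalate")
```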
…and just how many PCs do you intend to “reboot into safe mode, delete one bad file, and then reboot again”? Manually, or do you have some remote access tool that doesn’t require a running system?
Swapping entire devices takes more time and labor than this. So does factory resetting, and all the other solutions I saw other IT people roll out. They panicked.
It’s a crashing OS, not a dead machine, so we do have the ability to remote in before the OS even loads, though we don’t always deploy that for smaller organizations. In those cases we walk someone through booting into recovery or send a tech to the office. We had this handled in the first hours; a few hundred endpoints were affected, and the rest weren’t on CrowdStrike. We almost switched to it six months ago… dodged a bullet.
If you have no idea how long it may take and whether the issue will return - and particularly if upper management has no idea - swapping to alternate solutions may seem like a safer bet. Non-tech people tend to treat computers with superstition, so “this software has produced an issue once” can quickly become “I don’t trust anything using this - what if it happens again? We can’t risk another outage!”
The tech fix may be easy, but the manglement issue can be harder. I probably don’t need to tell you about the type of obstinate manager that’s scared of things they don’t understand and needs a nice slideshow with simple words and pretty pictures to explain why this one-off issue is fixed now and probably won’t happen again.
As for the question of scale: From a quick glance we currently have something on the order of 40k “active” Office installations, which mostly map to active devices. Our client management semi-recently finished rolling out a new, uniform client configuration standard across the organisation (“special” cases aside). If we’d had CrowdStrike, I’d conservatively estimate that to be at least 30k affected devices.
Thankfully, we don’t, but I know a good number of bullets were being sweated until it was confirmed to be CrowdStrike-only. We’re in Central Europe, so the window between the first issues and the confirmation fell right in prime “people starting work” time.
People judging based on what they think you mean, not what you actually mean. Lemmy is Reddit 2.0 now, just without as many MAGA morons. Cut your losses and stop responding.