I guarantee this update didn’t drop on Thanksgiving. Photo OP probably hasn’t turned it on since their last BBQ months ago and is only just noticing - on Thanksgiving - that an update pushed a while back now needs to be installed before they can get started.
Pro tip: Start up your electronics a day or two in advance of events, so you can pre-patch anything that needs it.
Source: Former IT guy here, who had to make sure updates ran at the most convenient times possible for thousands of users. “Patch Tuesday” is an unofficial but well-recognized “holiday” for IT folks. It’s late enough that it doesn’t throw off the workflow first thing Monday morning, but early enough to leave the maximum amount of time to resolve any issues the patches cause, so we (hopefully) don’t have to work through the weekend.
Pay attention to when your stuff requires patches. A lot of the time, it’ll pop up on Tuesdays.
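For the curious: Patch Tuesday is the second Tuesday of each month (it’s when Microsoft ships its monthly updates), so it’s easy to work out in advance. A rough Python sketch, purely illustrative:

```python
import calendar
from datetime import date

def patch_tuesday(year: int, month: int) -> date:
    """Return the second Tuesday of the given month (i.e. Patch Tuesday)."""
    tuesdays = [d for d in calendar.Calendar().itermonthdates(year, month)
                if d.weekday() == calendar.TUESDAY and d.month == month]
    return tuesdays[1]  # index 1 = the second Tuesday

print(patch_tuesday(2024, 11))  # 2024-11-12
```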
mic_check_one_two@lemmy.dbzer0.com 20 hours ago
I used to work at a theater owned by a city. So we used the city’s IT department, and their network. During COVID, live-streaming took off. The city wanted us to install a streaming video package. After a month or two of installing a full video system, we finally get around to testing the stream. Boot up AWS, and it runs fine. We’re streaming in full 4K. Great!
So the show rolls around. It’s Saturday, 7:30pm start time. We start the show… And the stream instantly shits the bed. Like we go from full gigabit upload speed, to less than a single megabit. We’re lucky to get 56kbps speeds. We’re getting one or two frames per second if we’re lucky.
Sunday, we test the stream ahead of time, and it works flawlessly. Show starts, and the upload speed drops to fucking dial up.
Monday morning rolls around, and IT strolls in to check their tickets. Sees a hundred from us, and gives us a call. They run a test on their end. No issues. They run a test on AWS. No issues. They run a test on the fiber backbone between the theater and city hall. No issues. They call the ISP. ISP said they didn’t have any issues over the weekend. IT shrugs, and marks the tickets as solved.
Next weekend, same thing. We’re wondering if IT is automatically throttling us, or if we have a malicious user on the network. We’re asking about QoS, or maybe automatic port control kicking in when the stream starts. Monday rolls around, and IT marks it as solved again.
Third weekend, same thing. This time, the city manager’s office is getting calls from angry patrons who paid for streaming and can’t watch their streams. Monday morning, IT rolls up. They run some more tests, and still can’t find anything wrong. They swear up and down that it’s nothing on their end, and it must be something on ours.
After four months of this back and forth, IT finally admits that all of their maintenance tasks are scheduled to run at 7:30 on weekend evenings. Every single computer, server, and fucking toaster connected to the city network begins its updates at exactly 7:30. Thousands of city devices, all singularly focused on devouring our upload speeds. Servers kick off their off-site backups, and those backups consume all of the upload bandwidth for the entire city network. IT refuses to change the time, because “this is what works for us. It’s after city hall closes, so we don’t have any users who are affected. It hasn’t been a problem in the past.”
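For scale (these numbers are my own rough assumptions, not anything the city ever shared): a 4K stream only needs something like 15-25 Mbps up, but a pile of machines all kicking off off-site backups at the same moment will happily eat a 1 Gbps uplink many times over:

```python
# Back-of-the-envelope math for a shared uplink. All numbers are
# illustrative assumptions, not measurements from the city network.
uplink_mbps = 1000     # shared uplink for the whole city network
stream_mbps = 20       # rough bitrate for a 4K live stream
backup_hosts = 50      # machines pushing off-site backups at 7:30
per_host_mbps = 100    # what each host can easily push on its own

demand = backup_hosts * per_host_mbps + stream_mbps
print(f"Demand: {demand} Mbps on a {uplink_mbps} Mbps uplink "
      f"-> oversubscribed {demand / uplink_mbps:.1f}x")
```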
rekabis@lemmy.ca 18 hours ago
And in those four months, did no one think of firing up Wireshark to see what was floating across that network during that time period?
Seems like someone dropped the debug/analysis ball…
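Even a quick-and-dirty capture tallying bytes per source would have pointed straight at the backup jobs. Something like this with scapy (the interface name and capture window are placeholders, and it needs capture privileges):

```python
# Sketch: count bytes per source IP during the problem window,
# then print the top talkers. Purely illustrative.
from collections import Counter
from scapy.all import sniff, IP

byte_counts = Counter()

def tally(pkt):
    if IP in pkt:
        byte_counts[pkt[IP].src] += len(pkt)

sniff(iface="eth0", prn=tally, store=False, timeout=300)  # capture 5 minutes

for src, total in byte_counts.most_common(10):
    print(f"{src}: {total / 1e6:.1f} MB")
```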
mic_check_one_two@lemmy.dbzer0.com 16 hours ago
I wasn’t in IT, so my hands were tied. If I tried running a network scan, I’d have been able to hear the screeching all the way from city hall.
rekabis@lemmy.ca 4 hours ago
Never said it had to be you.
But a threat to do exactly that would have likely called IT’s bluff long before the four-month mark.
Lv_InSaNe_vL@lemmy.world 11 hours ago
As someone in IT, the answer is in the comment.
So nobody was ever in the office and nobody on the team wanted to stay (I’m guessing here) 2.5 hrs after work to actually do any troubleshooting.
rekabis@lemmy.ca 4 hours ago
Read the comment more carefully… while IT was most certainly not at their posts, this implementation team was actively monitoring the rollout and witnessing the carnage.
GreenKnight23@lemmy.world 16 hours ago
what can you expect, they’re probably getting paid 40-50% of what they should be getting paid.
pay less get less.
my pride as an IT worker wouldn’t have allowed me to let it fester for 18 weeks though.