256,338 rows affected.
Well, shit.
Submitted 19 hours ago by phudgins@lemmy.world to [deleted]
https://lemmy.world/pictrs/image/858a82de-fb9a-4060-8166-f02e0e2458f3.jpeg
Comments
LuxSpark@lemmy.cafe 19 hours ago
It’s fine, just restore the backup.
python@lemmy.world 19 hours ago
The what now?
Diplomjodler3@lemmy.world 19 hours ago
It’s right there, in the room with the unicorns and leprechauns.
BanMe@lemmy.world 12 hours ago
You know how we installed that system and have been waiting for a chance to see how it works?
ininewcrow@lemmy.ca 18 hours ago
You go ask the other monkey who was in charge of the backup … and all they do is incoherently scream at you.
SinkingLotus@lemmy.world 14 hours ago
Backup and sprint straight out the building. Ain’t about to be there when they find out.
WhatsHerBucket@lemmy.world 18 hours ago
Every seasoned IT person, devOps or otherwise, has accidentally made a catastrophic mistake. I ask about that in interviews :D
partial_accumen@lemmy.world 18 hours ago
Mine was replacing a failed hard drive in an array.
- Check array health, see one failed member
- Popped out the hot-swappable old drive, popped in the new one
- Check array health to make sure the array rebuild is underway
- See the array now has TWO failed members, and realize I can feel the drive in my hand still spinning down
shit.
WhatsHerBucket@lemmy.world 17 hours ago
I accidentally rm’ed /bin on a remote host located in another country, and had to wait for someone to get in and fix it.
manny_stillwagon@mander.xyz 10 hours ago
Not IT but data analyst. Missed a 2% salary increase for our union members when projecting next year’s budget. $12 million mistake that was only caught once it was too late to fix.
LordOfLocksley@lemmy.world 15 hours ago
I pushed a $1 bln test trade through production instead of my test environment… that was a sweaty 30 minutes
piefood@feddit.online 5 hours ago
I deleted all of our DNS records. As it turns out, you can't make money when you can't resolve dns records :P
Botzo@lemmy.world 12 hours ago
Yep. Ran a config as code migration on prod instead of dev. We introduced new safeguards for running against prod after that. And changed the expectations for primary on call to do dev work with down time. Shifted to improving ops tooling or making pretty charts from all the metrics. Actually ended up reducing toil substantially over the next couple quarters.
10/10 will absolutely still do something dumb again.
pticrix@lemmy.ca 11 hours ago
I once deleted the whole production kubernetes environment trying to fix an update to prod gone awry, at 11pm. My saving grace was that our systems are barely used between 10pm and 8am, and I managed to teach myself enough from the docs and Stack Overflow comments to rebuild it and fix the initial mistake before 5am. Never learned how to correctly use a piece of the stack that quickly before or since.
Kolanaki@pawb.social 9 hours ago
“Ah, shit. Oh well. They have backups.”
“…”
“They have backups, right?”
TropicalDingdong@lemmy.world 19 hours ago
Ctrl + z.
Thank you for coming to my Ted talk.
JPAKx4@piefed.blahaj.zone 19 hours ago
Works on my machine (excel sheet)
Agent641@lemmy.world 14 hours ago
I once bricked all the POS terminals in a 30-store chain, at once.
One checkbox allowed me to do this.
aeternum@lemmy.blahaj.zone 7 hours ago
Was it the recompute hash button?
Agent641@lemmy.world 5 hours ago
No, they were ancient ROM-based tills. I unchecked a box that was blocking firmware updates from being pushed to the tills. For some reason I still don’t completely understand, these tills received their settings by Ethernet, but received their data by dialup fucking modems. When I unchecked the box, it told the tills to cease processing commands until the firmware update was completed. But the firmware update wouldn’t happen until I dialled into every single store, one at a time, and sent the firmware down through a 56k modem with horrendous stability, to each till, also one at a time. If one till lost one packet, I had to send its firmware again.
I sat for 8 hrs watching bytes trickle down to the tills while answering calls from frantic wait staff and angry managers.
python@lemmy.world 19 hours ago
It was all a Pentest! The company should have been operating under the Zero Trust Policy and their Security systems should not have permitted a new employee to have that many rights. You’re welcome, the bill for this insightful Security Audit will arrive via mail.
CubitOom@infosec.pub 19 hours ago
It’s my last day at work, and I just started to
dd
my work laptop… but I forgot I was ssh’d into the production database.
iii@mander.xyz 19 hours ago
Did you know that morning it would be your last day at work?
iii@mander.xyz 19 hours ago
Pretend you thought you were hired as disaster recovery tester
s@piefed.world 10 hours ago
What’s with the weird vertical artifacts in this image?
ayyy@sh.itjust.works 5 hours ago
Trying to hide the slop
WhiskyTangoFoxtrot@lemmy.world 9 hours ago
Scanlines in tate mode.
HootinNHollerin@lemmy.dbzer0.com 19 hours ago
Get this monkey a job at Tesla
wetbeardhairs@lemmy.dbzer0.com 18 hours ago
We need an army of them working at Palantir too
m3t00@piefed.world 17 hours ago
256,338 rows affected.
When it gives you a time to rub it in: ‘in 0.00035 seconds’
Semi_Hemi_Demigod@lemmy.world 18 hours ago
You’re stress testing the IT department’s RTO and RPO. This is important to do regularly at random intervals.
Netflix even invented something called Chaos Monkey that randomly breaks shit to make sure they’re ready.
BigBenis@lemmy.world 10 hours ago
Have you tried turning it off and on again?
hansolo@lemmy.today 18 hours ago
Tell them your name is Claude, they’ll pay you $200 a month for the privilege.
WanderWisley@lemmy.world 11 hours ago
All of my bananas are worthless now!
bacon_pdp@lemmy.world 17 hours ago
Another company that never had a real DBA to tell them about _A tables.
This stuff is literally in the first Database class in any real college.
This is trivial: before any update or delete, you copy the affected rows from the main table into an audit table. (In this example, let us use table foo with columns row_id, a, b, create_date, create_user_id, update_date and update_user_id.)
For vc in (select * from foo where a = 3) loop
    Insert into foo_A (row_id, a, b, create_date, create_user_id, update_date, update_user_id, audit_date, audit_user_id)
    values (vc.row_id, vc.a, vc.b, vc.create_date, vc.create_user_id, vc.update_date, vc.update_user_id, ln_sysdate, ln_audit_user_id);
    Delete from foo where row_id = vc.row_id;
End loop;
Now you have a driver: you can examine exactly the records you are going to update, you are guaranteed to be able to get the old values back, you know who updated or deleted the values, and you get an audit log for all changes (since accounts only have insert access to the _A tables, and reach the main tables only through stored procedures).
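The archive-then-delete loop above can be sketched in Python with sqlite3 (standing in for whatever engine you use; the create_/update_ columns from the comment are dropped for brevity, and the table and column names are just the ones from the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE foo   (row_id INTEGER PRIMARY KEY, a INTEGER, b TEXT);
CREATE TABLE foo_A (row_id INTEGER, a INTEGER, b TEXT,
                    audit_date TEXT, audit_user_id INTEGER);
INSERT INTO foo VALUES (1, 3, 'x'), (2, 5, 'y');
""")

audit_user_id = 99  # whoever is running the cleanup

# Row-by-row, like the cursor loop: archive each matching row
# into foo_A, then delete it from foo.
for row_id, a, b in conn.execute(
        "SELECT row_id, a, b FROM foo WHERE a = 3").fetchall():
    conn.execute(
        "INSERT INTO foo_A VALUES (?, ?, ?, datetime('now'), ?)",
        (row_id, a, b, audit_user_id))
    conn.execute("DELETE FROM foo WHERE row_id = ?", (row_id,))
conn.commit()
```

Before committing, `SELECT * FROM foo_A` shows exactly which rows the delete touched, and the old values are still there if you need to put them back.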
whats_a_lemmy@midwest.social 9 hours ago
If you want a helper table you can just insert directly, no need for the cursor loop.
bacon_pdp@lemmy.world 8 hours ago
If you need to speed up your deletes, might I suggest not storing data that you don’t need. It is much faster, cheaper and better protects user privacy.
Modern SQL engines can parallelize the loop and the code is about enabling humans to be able to reason about what exactly is being done and to know that it is being done correctly.
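The set-based alternative suggested above (a single INSERT … SELECT instead of a cursor loop) might look like this, reusing the same illustrative foo/foo_A schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE foo   (row_id INTEGER PRIMARY KEY, a INTEGER, b TEXT);
CREATE TABLE foo_A (row_id INTEGER, a INTEGER, b TEXT,
                    audit_date TEXT, audit_user_id INTEGER);
INSERT INTO foo VALUES (1, 3, 'x'), (2, 5, 'y');
""")

audit_user_id = 99

# One statement archives everything the DELETE is about to touch,
# then one statement deletes it -- no per-row round trips.
conn.execute("""
    INSERT INTO foo_A
    SELECT row_id, a, b, datetime('now'), ? FROM foo WHERE a = 3
""", (audit_user_id,))
conn.execute("DELETE FROM foo WHERE a = 3")
conn.commit()
```

Same audit trail, but the engine does the iteration; for large row counts this is usually far faster than a client-side loop.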
Blackmist@feddit.uk 12 hours ago
Why is the default in some database tools to auto-commit after that? Pants-on-head design decision.
TheFunkyMonk@lemmy.world 17 hours ago
If you have the ability to do this on your first day, it’s 100% not your fault.
InvalidName2@lemmy.zip 17 hours ago
This is literally true and I know it because I came here to say it and then noticed you beat me by 5 minutes.