YoFrodo,

That’s not an oopsie daisy, that’s the whole oopsie bouquet

KISSmyOS,

If the company gave a noob unlimited access and can’t restore their data from backups, it’s really their fault, not the employee’s.

DrM, (edited )

We had a management course in the university where this was one of the main things they highlighted:

Managers’ faults are the manager’s fault.
Employees’ faults are the manager’s fault. Without exception.

And if you think about it, that’s completely true. If an employee does something stupid, it’s most of the time because they a) had the opportunity to do it and b) weren’t taught well enough. If the employee keeps making that mistake, the manager is at fault for still letting them do the job where they can make it; they obviously aren’t fit for that position.

mcmoor,

And people wonder why managers are paid more

smeg,

Well yes, but they wonder that when the manager isn’t taking responsibility and ensuring mistakes don’t happen. A good manager is worth their weight in gold, but thanks to the Peter Principle most of them just end up there without being qualified or even wanting to do it!

abraxas,

The problem is often checks and balances. A bad manager often (but not always) has a more secure job than a good employee.

I have this opinion as a manager. If I have to terminate an employee, it’s my fault. It’s not a hard and fast rule, and there are times when terminations happen for unpredictable reasons… but it’s my job to find the candidate. It’s my job to match their skills to the job. It’s my job to give them a process wherein they can thrive. It’s my job to remediate non-issues before they become issues. There are very few things that aren’t my job that could lead to a person being fired.

I rate my team’s success higher than any other metric, even technical goals and milestones. I want to say it’s because I care about them (and I do!), but that’s not the reason. It’s my JOB to make them succeed. It’s my JOB for them to stay happy, for them to get recognition so they don’t feel marginalized. Bad managers aren’t bad because they put the company over the team. They’re bad because they put themselves over the team (and by extension, the company).

smeg,

I wish my last manager realised that. As well as being a people manager they were also a team lead and a sort of project manager. Guess which of the three roles they cared least about?

abraxas, (edited )

I know a lot of people who are great strategists or great team leads but who cannot actually focus on the needs of the team. I’ve seen so many situations where a little intra-team conflict turned into six figures of lost revenue and jobs lost because the manager couldn’t bring himself to get involved before money was being lost. You can’t NOT fire someone who crosses too many lines, but you can absolutely be at fault for them crossing those lines when the warning signs gave months’ notice and you could’ve talked situations down or improved policies.

I was lucky. My first managing role was under someone whose philosophy was: “The manager’s job is to focus on their team. If you can get 33% more productivity out of each team member, you do more good for the company than you could ever do by ‘just being better’ or ‘just designing better’ than them.” And I thought 33% was crazy, until I actually started learning that you can.

dudinax,

When’s the last time you tested backup restore and how long did it take?

cybersandwich,

0, thanks for asking.

Seriously though, how are you guys testing your home backups? I don’t have a spare Synology NAS sitting around, or spare 16 TB drives.

Knusper,

The only way to test restoring a backup is to actually restore it. And for that, you do need spare hardware.

So, to answer your question, I don’t test my home backups either. I reckon pretty much no one is dedicated enough to do that.

I’m hoping that, if shit really hits the fan, I can still pick out my important files and just manually re-set up the rest of the system. So, a longer downtime, in that sense.

That strategy is just absolutely not viable for companies, where downtime is more expensive than spare hardware, and where you really can’t tell users you restored some of their files and they should do the rest themselves.
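
That said, if you want something between “nothing” and a full restore test at home, a cheap spot check is possible. Here’s a minimal sketch (the archive path and tar format are just placeholders for whatever your backup job actually produces, not a real setup): it unpacks a backup into a temp directory and compares checksums against the live files, so silent corruption at least shows up.

```python
# Minimal sketch, not a real tool: restore a backup archive into a temp
# directory and compare file hashes against the live copies, so silent
# corruption in the backup at least shows up. Paths are placeholders.
import hashlib
import tarfile
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def spot_check(archive: Path, live_root: Path) -> list[str]:
    """Extract the archive to a temp dir and list files whose hash differs."""
    mismatches = []
    with tempfile.TemporaryDirectory() as tmp:
        with tarfile.open(archive) as tar:
            tar.extractall(tmp)  # fine for your own trusted archives
        for restored in Path(tmp).rglob("*"):
            if not restored.is_file():
                continue
            live = live_root / restored.relative_to(tmp)
            if not live.is_file() or sha256(restored) != sha256(live):
                mismatches.append(str(live))
    return mismatches

if __name__ == "__main__":
    # Hypothetical paths; adjust to whatever your backup job writes.
    print(spot_check(Path("/backups/home.tar.gz"), Path("/home/me")))
```

It still doesn’t prove you can rebuild the whole system, but it catches the “backup job has been writing garbage for six months” failure mode.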

LetterboxPancake,

“Eh, go away. I suppose it’ll work flawlessly. I’ll test it if I need it. I’ll have to look into the procedure anyways. Get off my back!”

sag,

F*cking Gitlab moment

LetterboxPancake, (edited )

Yeah, that was extremely funny, but I had nothing stored there at that moment. I guess some GitLab administrator lost twenty pounds in sweat that day.

smeg,

You’re allowed to say “fucking” on the internet

Sotuanduso,

Wasn’t there some saying about if you’re in a server room, the calmer the “Oops,” the worse the problem?

Stamets,

If there isn’t then there should be.

jarfil,

“Ooopppsss… 💤”, both containers of the UPS flow battery ruptured at the same time and flooded the whole server room… call me tomorrow for the planning meeting when things stop burning and firefighters have had a chance to enter the building.

bappity,

internally screaming

QuarterSwede,

This is funny, cute, and too relatable.

LetterboxPancake,

Forget coffee, this will wake you up. There’s nothing like dropping the wrong database schema on a lazy Monday morning.

Bonehead,

If you can, always set the title of whatever window you're working on to capital bold letters, preferably red, saying PRODUCTION SERVER - DON'T FUCK IT UP. This has saved my dumbass a few times when I looked up before hitting enter.
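
For terminal sessions, one way to automate that (just a sketch of my own, using the standard xterm title escape sequence; the hostnames are made-up placeholders) is a tiny wrapper that stamps the title before handing the session off to ssh:

```python
# Minimal sketch: stamp the terminal title via the standard xterm OSC 0
# escape sequence before handing the session off to ssh. Hostnames and
# wording are my own placeholders, not any particular tool's feature.
import os
import sys

PROD_HOSTS = {"db-prod-01", "app-prod-01"}  # hypothetical

def set_title(title: str) -> None:
    """Write the xterm 'set window/icon title' escape sequence."""
    sys.stdout.write(f"\033]0;{title}\007")
    sys.stdout.flush()

def open_session(host: str) -> None:
    if host in PROD_HOSTS:
        set_title(f"PRODUCTION SERVER - DON'T FUCK IT UP ({host})")
    else:
        set_title(host)
    os.execvp("ssh", ["ssh", host])  # replaces this process with ssh

if __name__ == "__main__":
    open_session(sys.argv[1])
```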

Sharkwellington, (edited )

On SecureCRT I make the backgrounds of production devices a rosy tint so I have something to remind me as I’m working. If it’s a core switch, fire engine red background and neon green letters. An added benefit to this is that I want to get off the core devices as soon as possible.

LetterboxPancake,

I use IntelliJ for this and my prod connection is red, has warning symbols, and is read-only. I can switch on write mode if necessary, but it will prompt for it. Saves me a lot of stress.
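
Outside the IDE you can get the same effect at the connection level. A minimal sketch (the DSN and table are hypothetical, but psycopg2’s set_session(readonly=...) is a real API) that opens prod read-only unless you explicitly ask for write mode:

```python
# Minimal sketch of the same idea outside the IDE: prod connections are
# read-only unless write mode is explicitly requested. The DSN and table
# are hypothetical; psycopg2's set_session(readonly=...) is a real API.
import psycopg2

PROD_DSN = "host=prod-db dbname=app user=me"  # placeholder

def connect_prod(allow_writes: bool = False):
    conn = psycopg2.connect(PROD_DSN)
    # Any accidental INSERT/UPDATE/DELETE now fails at the driver level.
    conn.set_session(readonly=not allow_writes)
    return conn

if __name__ == "__main__":
    with connect_prod() as conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM important_table")  # hypothetical table
        print(cur.fetchone())
```

Accidental writes then fail at the driver instead of at “oh no”.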

franklin,

This here is wisdom

LetterboxPancake,

💖

MrNesser,

Had a colleague do this to the local AD server years ago.

Thankfully they pulled the plug before the changes could propagate through the network completely, but it still took 3 days to recover the data and restore the AD server.

Stamets,

Yikes. At least it was only 3 days and not weeks or months of cleanup trying to rebuild shit!

You might like this little video then. Well, it’s 10 minutes long, but still. It’s a story detailing a dev who deleted their entire production database. A real story that actually happened. If you went through something similar then you’re definitely gonna relate a little.

thepianistfroggollum,

That’s on the company for not having a proper disaster recovery plan in place.

Our DR test was literally the CIO wiping a critical server or DB, and we had to have it back up in under an hour.

MrNesser,

To be fair to the company, it was a Friday afternoon when said person ran a script
