Reminds me of a major incident I got involved in. I was the Problem Manager and not MIM (Major Incident Management), but I’ve had years of MIM experience so was asked to help out on this one. The customer manufactured blood plasma and each of the lots on the production floor was worth a cool $1 million. The application that was down and had brought production down was not the app that actually handled production, but an application (service) that supplied data to it.
Of course the customer thought that app was not Mission Critical, so it didn’t have redundancy. I joined the call and the first thing I asked was when the last change went through on this app… Spoiler: I had the change in front of me and it went in the previous night. The admin of the app speaks up that he did a change the previous night… And NO, the MIM team had NOT looked at that change yet… Did I mention this was FOUR FUCKING HOURS into the outage? That is MIM 101. Something goes down, look to see who last fucked with it.
This is why you need experienced MIM people in enterprise environments.
So I took control of the MIM, instructed the App Admin to share his screen and walk us through the change he did the previous night… Two screens in and OH… Look at that… There’s a check box that put the app into read only (or something like that, this happened back in 2009 and I don’t remember all the details). I’d never seen the application before in my life, but knew that check box being checked, just based on the verbiage, could not be right… So I asked… The Admin, sounding embarrassed, said yeah he forgot to uncheck that box last night…
Fuck me.
He unchecked the box, bounced the app and what do you know… It started to work.
A single damn check box brought down the production line of a multi-billion dollar company.
My investigation for that Problem was a bit scathing toward multiple levels of the customer’s organization. If a service supports a Tier 1 production app, and that Tier 1 app stops working when the service goes down… GUESS WHAT! That service is MISSION FUCKING CRITICAL and it should be supported as such. My employer was not on the hook for this one, as both applications involved were customer supported.
I would love to say that the above is an uncommon occurrence, but honestly it is the main cause of outages in my experience: something small and stupid that is easily missed.