
Bringing Human Factors to Life with Marty Ohme



ABOUT THE EPISODE

There’s a safety decision behind every chain of events. We invite you to join us for a captivating episode of The Safety Guru featuring Marty Ohme, a former helicopter pilot in the U.S. Navy and current System Safety Engineer. Don’t miss this opportunity to gain from Marty’s extensive expertise and insights on system factors, organizational learning and safety culture, and effective risk management to mitigate future risks. Learn from the best practices of the U.S. Navy, as Marty brings human factors to life with real-world examples that can make a difference in your organization.

READ THIS EPISODE

Real leaders leave a legacy. They capture the hearts and minds of their teams. Their origin story puts the safety and well-being of their people first. Great companies ubiquitously have safe, yet productive operations. For those companies, safety is an investment, not a cost for the C-suite. It’s a real topic of daily focus. This is the Safety Guru with your host, Eric Michrowski, a globally recognized ops and safety guru, public speaker and author. Are you ready to leave a safety legacy? Your legacy success story begins now.

Hi, and welcome to the Safety Guru. Today, I’m very excited to have with me, Marty Ohme. He’s a retired naval aviator, also a system safety engineer. He’s got some great stories he’s going to share with us today around human factors, organizational learning. Let’s get into it. Marty, welcome to the show.

Thank you. I appreciate the opportunity to spend some time with you and share some interesting stuff with your audience.

Yeah. Let’s start maybe with your background and your story in the Navy.

Sure. I graduated from the United States Naval Academy with a bachelor’s in aerospace engineering. I’ve been fascinated with flight and things that fly since a very young age, so that lined up nicely. I went on to fly the H-46 Delta and the MH-60 Sierra, to give your audience an idea of what that looks like. The H-46 was flown for many, many years by the Marine Corps and the Navy. It looks like a small Chinook, the tandem rotor helicopter. Then the MH-60 Sierra is basically a Black Hawk painted gray. There are some other differences, but both aircraft were used primarily for logistics and search and rescue missions. Then we did a little bit of special operations support. There’s a lot more of that going on now than I personally did before I retired. I also had time as a flight instructor at our helicopter flight school down in Florida. After my time as an instructor, I went on to be an Air Boss on one of our smaller amphibious ships. Most people think of the Air Boss on the big aircraft carrier. This is a couple of steps down from that, but it’s a specialty for helicopter pilots as part of our career. Later on, I went to Embry-Riddle Aeronautical University, which likes to call itself the Harvard of the Skies, to get a master’s in aviation safety and aviation management. That was a prelude for me to go to what is now the Naval Safety Command, where I wrapped up my Navy career. I served as an operational risk management program manager and supported a program called the Culture Workshop, where we went to individual commands and talked to them about risk management and the culture they had in their commands. Since retirement from the Navy, I work as a system safety engineer at A-P-T Research. We do system, software, and explosive safety. If you want to understand what that means, the easiest way to look at it is that we’re at the very top of the hierarchy of controls, at the design level. We sit with the engineers, and we work with them to design out or minimize the risks and hazards within a design. You can do that with hardware, you can do that with software. And then explosives is a side to that. I don’t personally work in the explosives division, but we have a lot of work that goes on there.

That’s Marty in a nutshell.

Well, glad to have you on the show. Tell me a little bit about organizational culture. We’re going to get into Swiss cheese and some of the learning components, but culture is a key component of learning.

Absolutely. So military services, whatever country, whatever environment, they’re all high-risk environments.

Absolutely. Specific to the Navy, my background, if somebody’s hurt far out at sea, it could be days to reach high-level care. It’s obviously improved over time with the capabilities of helicopters and other aircraft, but you may be stuck on that ship for an awfully long time before you can get to a high level of care. That in and of itself breeds a culture of safety. You don’t want people getting hurt out at sea because of the consequences. When I say culture of safety, a lot of people hear culture and think about language, like English or Spanish or French, whatever the case may be, what food people eat, what clothes they wear, those kinds of things. Here, what we mean is how things get done around here. There are processes and procedures, how people approach things, the general idea. In fact, the US Navy is in the middle of launching a campaign called What Right Looks Like to try to focus people on making sure they’re doing the right kinds of things. Something that’s been around the Navy for a long time and is specific to safety is using the word mishap instead of accident.

Sure. Because in just general conversation, most people will think, well, accidents happen. Really, we want a culture where we think of things as mishaps and that mishaps are preventable. We really want to focus people on thinking about how to avoid the mishap to begin with and reduce the risk produced by all the hazards in that high-risk environment.

In an environment like the Navy, it’s incredibly important to get this right. You talked about what right looks like. But you’ve got a lot of very young people joining at a very young age who can make very critical decisions at the other end of the world without necessarily having the ability to ring the President for advice and guidance on every call that happens. Tough decisions can happen at any given point in time. Tell me a little bit about how that gets instilled.

Sure. Organizations have to learn, and they have to learn from mistakes. In these high-risk environments, when something goes wrong, because it will, you need to ask yourself what went wrong and why. That’s what leads to a mishap investigation. Then in order to do that learning, you have to really learn. You’ve got to apply the lessons that came out of those investigations. That means you have to have good records of those mishaps. I mentioned the Naval Safety Command. Part of the responsibility of Naval Safety Command is to keep those records and make them useful to the fleet.

Sure. We’ve just touched a little bit on building a culture of learning and how the Navy does it. Let’s talk a little bit about Swiss cheese. We’ve touched on Swiss cheese a few times on the podcast, so most listeners are probably familiar with it, but I think it’s worthwhile to have a good refresher on it.

Absolutely. As I mentioned about having good records, if the records aren’t organized well or structured in a way that makes them effective, then it’s going to be very difficult to apply those lessons. As an example, take a vehicular mishap, commonly referred to as a car accident, but we’re going to use the mishap terminology here. If you have three police officers write a report on a single vehicle mishap, they’re probably all going to come out different. One of them might say the road was wet, one of them might say there was a loss of traction, the third one might say that the driver was going too fast. It’s a lot more difficult to analyze the aggregated mishap data if every investigator uses different terms and a different approach. This is where Swiss cheese comes into play, along with the follow-on work. Dr. James Reason provided a construct that you can use to organize mishap reporting with the Swiss cheese model. In his model, the slices of cheese represent barriers to mishaps. He also identified that there are holes in the cheese that represent the holes in your barriers. Then he labeled them as latent or active failures.

Latent failures are existing, maybe persistent conditions in the environment, and active failures are usually something that’s done by a person, typically at the end. His model has four layers of cheese, three with latent failures and one with active failures. No barrier is perfect. If we look at our vehicle mishap in that way and start at the bottom, let’s say it’s a delivery driver. They’ve committed an unsafe act by speeding.

Sure.

Why did they do that? Well, in our scenario, he needs a delivery performance bonus to pay hospital bills because he has a newborn baby. He’s got this existing precondition to an unsafe act. Sure. Well, prior to him going out for the day, his supervisor looks at his delivery plan, but he didn’t really do a good job reviewing it and didn’t see that it was unrealistic. Sure. The thing is, the supervisor sees unrealistic delivery plans every day. It’s ingrained in him that this is normal. All these people are trying to execute unreasonable plans because the company pay is generally low and they give bonuses for meeting the targets for the number of deliveries per day. The company, as an organization, has set a condition that encourages people to have unrealistic plans, which the supervisor sees every day and just passes off as everybody does it. Then we roll down and we have this precondition of, I need a bonus because I have bills to pay. This is the way the Swiss cheese model is constructed. A little bit later on, Dr. Shappell and Dr. Wiegmann developed the Human Factors Analysis and Classification System, or HFACS.

They did that by taking Reason’s slices of cheese and naming the holes in the cheese, the holes in the barriers, after they studied mishap reports from naval aviation.

Tell me about some of those labels that they identified.

Some specific ones they came up with are things like a lack of discipline, so it was an extreme violation due to lack of discipline. Sure. That would be at the act level. A precondition might be that someone was distracted, for example. Sure. A supervisory hole would be that there was not adequate training provided to the individual who was involved in the mishap. Then at the overall organizational level, it might just be that there’s an attitude there that allows unsafe tasks to be done. That sets everything up through all the barriers and sets our individual up for failure and the mishap. You see that in our delivery driver example, where at every level there’s a human decision made. There’s a policy decision. There’s a decision made to accept all these unreasonable plans. There was a decision that, okay, I must have this bonus. Now, you could argue that one back and forth, but there was also a decision made to violate the speed limit, and that’s your active one down at the bottom. Yeah.
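To make that structure concrete, here is a minimal sketch in Python of how an HFACS-style taxonomy can tag the delivery-driver example. The four layer names follow the slices described above, but the specific codes, field names, and data shape are simplified assumptions for illustration, not the official DoD HFACS code set.

    from collections import Counter
    from dataclasses import dataclass
    from enum import Enum

    class Layer(Enum):
        ORGANIZATIONAL_INFLUENCES = "organizational"  # latent
        UNSAFE_SUPERVISION = "supervision"            # latent
        PRECONDITIONS = "preconditions"               # latent
        UNSAFE_ACTS = "acts"                          # active

    @dataclass
    class Factor:
        layer: Layer
        code: str   # standardized name for the "hole in the cheese"
        note: str   # investigator's narrative detail

    # The delivery-driver scenario, tagged with one factor per slice of cheese.
    # These code strings are illustrative placeholders, not official labels.
    delivery_mishap = [
        Factor(Layer.ORGANIZATIONAL_INFLUENCES, "incentives_encourage_unrealistic_plans",
               "Low base pay plus per-delivery bonuses"),
        Factor(Layer.UNSAFE_SUPERVISION, "inadequate_plan_review",
               "Supervisor treats unrealistic delivery plans as normal"),
        Factor(Layer.PRECONDITIONS, "personal_pressure",
               "Driver needs the bonus to pay hospital bills"),
        Factor(Layer.UNSAFE_ACTS, "violation_speeding",
               "Driver exceeds the speed limit"),
    ]

    # Because every report uses the same codes, aggregating across many mishaps
    # becomes a simple count rather than a re-reading of free-text narratives.
    code_counts = Counter(f.code for f in delivery_mishap)
    print(code_counts.most_common())

The value shows up when hundreds of reports share those codes: patterns can be counted and compared instead of reconstructed from each investigator’s wording.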

These helped establish essentially a taxonomy so that there is more standardization, if I’m hearing you correctly, in terms of incident investigations and the classification of learnings.

That’s correct. The decisions in this stack and the Swiss cheese come together. As you’re alluding to, there’s a taxonomy. Shappell and Wiegmann studied, I think it was 80 mishaps in naval aviation, and they were able to assign standardized labels. Those are the labels that became the names for the holes in the cheese. Once they put it in that taxonomy, they found 80% of the mishaps involved a human factor of some sort. I personally argue that there’s a human factor at every level. Even if you go back and look at something like United Flight 232 that crashed in Sioux City, Iowa, it all rolled back to a flaw in the raw metal that was used to machine the turbine blade that ultimately failed. Sure. Did they make a decision not to do a certain inspection on that block of metal? It just keeps going down the line. There’s a decision in every chain of events.

Also, no redundancy in terms of the hydraulics, from what I remember in that incident.

Right. A design decision.

A design decision, exactly. That’s a great one. I like to use that as an example for many things, but we won’t pull that thread too hard today. But all these human factors, all these decisions, this is why the US Department of Defense uses HFACS as a construct for mishap reporting. It aids in organizing the mishap reporting and the data so we can learn from our mistakes. It makes actionable data. There are other systems that also have taxonomies. Maritime Cyprus collects data. I ran across it when I was preparing for something else. Their near-miss data shows situational awareness as the number one factor in those things.

Situational awareness is a tough one to change and to drive.

It is. It’s a lot of training and a lot of tools and those kinds of things. I bought a new vehicle recently, and it likes to tell me to put the brakes on because it thinks I’m going to hit something, because it thinks it’s more aware than I am. It did it to me this morning, as a matter of fact. But it can be an interesting challenge.

Yes. Okay. Let’s go through some examples. I know when we talked before, you had a couple of really interesting ones: Avianca, Aero Peru. Maybe let’s go through some of those examples of human factors at play and how they translate into an incident from an aviation standpoint.

Sure. Avianca Flight 52 was in January of 1990. The aircraft was flying up to JFK out of Medellín, Colombia. The air crew received their information from dispatch about weather and other conditions as they were getting ready to go out on their flight. The problem was dispatch gave them weather information that was 9 to 10 hours old. Also, they did not have the information that showed there was a widespread storm causing bad conditions up and down a lot of the East Coast. The other part was that dispatch had a standard alternate they used for JFK, which was Boston Logan. Boston Logan had just as bad conditions as JFK. They weren’t going to be able to use that alternate, but they didn’t check. Then the air crew didn’t check either. They didn’t confirm how old the forecast was. They didn’t do any of those things. They launched on their flight with the fuel that was calculated to be necessary for that flight. For those who are not in the aviation world, when you’re calculating your fuel for a flight, you’ve got to have what you think you need to get to your destination, plus what you’re going to need to get from there to your alternate in case you can’t get into your destination.

Then there’s a buffer that’s put on top of that. Depending on what rule you’re using, it could be time, it could be a percentage. It just depends on what rules you’re operating under and what aircraft you’re in. So they have X amount of fuel. They launch out on their flight with 158 people on board. They get up there, and because of the weather, things are backed up at JFK and all the way up the East Coast as well. They get put in a hold near Virginia for quite some time. Then they get put in a hold again when they get closer to JFK. They tried to get into JFK, and they had a missed approach. They couldn’t see the runway when they did the approach, and they had to go around and go back into holding. The captain, understandably, is starting to become concerned about their fuel state. Sure. He’s asking the co-pilot if he has communicated to air traffic control what their fuel situation is. The co-pilot says, yes, I have. Well, the nuance here is that the international language of aviation is English, and the captain didn’t speak English. The co-pilot did, and that met the requirement for one of them to be able to speak English to communicate with air traffic control, but the captain didn’t know exactly what the co-pilot was telling air traffic control.

Well, that becomes a problem when the co-pilot was not using standard language. He was saying things like, hey, we’re getting low on fuel. That’s not the standard language that needs to be used. Correct. You have two phrases. You have minimum fuel, which indicates to air traffic control that you can accept no unnecessary delays. He never said minimum fuel. When they got even lower on fuel, he never used the word emergency. So air traffic control did not know how dire the situation was. They did offer them an opportunity to go to their alternate at some point, but by then they were so low on fuel they couldn’t even make it to their alternate, and the weather at Boston was too low anyway for them to get in. Ultimately, they had another missed approach. They were coming around to try one more time, and they actually ran out of fuel. They ran the fuel tanks nearly dry on approach, and they crashed the aircraft in Cove Neck, New York.

Wow.

Here we have an aircraft, and you would think that there would be… There’s almost no reason for an aircraft to run out of fuel in flight, especially an airliner. But with the conditions that were set, they did. Just as an aside, there were 85 survivors out of the 158, and a lot of that had to do with the fact that there was no fire.

Because there’s no fuel to burn.

Because there’s no fuel to burn. I understand it had a positive impact on what materials were used in aircraft later on, specifically cushions and things like that that don’t produce toxic fumes when they burn, because they could show that people could survive the impact. It was the fire and the fumes that killed people. That’s just an aside. That’s the overview. If we back up a little bit and talk about what human factors rolled into play here: dispatch had this culture, an organizational culture. Sure. They had a general policy to use Boston Logan as the alternate for JFK. That was just the standard. They didn’t even check. They may or may not have been trained properly on how to check the weather and make sure it was adequate for an aircraft to get into either its primary destination or its alternate, because the forecast clearly showed that the conditions were too poor for the aircraft to shoot those approaches. That’s an organizational-level failure, and you can look at that as one slice of cheese. If we start going a little bit further down, without trying to look at every aspect of it, if we look at what the pilots did, they didn’t check the weather.

They just depended on dispatch and assumed it was correct. Then once they started getting into the situation they were in, there was communication in the cockpit. That was good, except it was inadequate. More importantly, the co-pilot was the only one in the cockpit who could speak English, so the captain didn’t have full situational awareness, which we mentioned a moment ago. Then he failed to use the proper terminology. That was a specific failure on his part. We can’t say if that was because he didn’t want to declare an emergency because he was embarrassed, which is possible, or because he didn’t want to have to answer to the captain, perhaps, or to ATC if they came back and asked later, why did you declare an emergency? Why didn’t you just tell us this stuff earlier? We don’t have those answers. Unfortunately, those two gentlemen didn’t survive the crash. But these are all things that can roll into it. When you break it down into HFACS, these are preconditions: maybe he was embarrassed, maybe he felt there was a power dynamic in the cockpit and he couldn’t admit making a mistake to the captain.

Then he had the active failure not using the correct language with ATC, the standard air traffic control language.

It feels as if some CRM elements, some psychological safety, were probably at play, because you would expect the co-pilot to at least ask, do you want me to declare an emergency, or something along those lines. Or seek clarity if you’re unsure.

Absolutely. That’s a really interesting one to me. I use it as an example with some regularity when I’m talking about these kinds of things.

This episode of the Safety Guru podcast is brought to you by Propulo Consulting, the leading safety and safety culture advisory firm. Whether you are looking to assess your safety culture, develop strategies to level up your safety performance, introduce human performance capabilities, re-energize your BBS program, enhance supervisory safety capabilities, or introduce unique safety leadership training and talent solutions, Propulo has you covered. Visit us at propulo.com.

How about Aero Peru? Because I think the Avianca one is a phenomenal, really interesting one. Actually, one I haven’t touched on much before. So, it’s a great example of multiple levels of failure. How about Aero Peru?

Aero Peru is another one that’s really interesting. It had a unique problem. The short version, just to give an overview of the flight like we did with Avianca: Aero Peru was flying from Miami, ultimately headed to Chile, but they had stopovers in Ecuador and Peru. During one of those stopovers, they landed during the day, and the plane was scheduled to take off at night. During that interim time, the ground crew washed the aircraft and polished it. Then the aircraft launched. They got up a couple of hundred feet off the runway, and the air crew noticed there was a problem with the airspeed indicator and altimeter. They weren’t reading correctly. Well, they were already in the air. You can’t really get back on the ground at that point. They flew out over the Pacific, and they got up into the clouds. Now they’re flying on instruments, so you don’t have any outside reference out there. Even if it was clear, flying over the water at night is a very dark place. They got out there, they’re flying on instruments.

Their attitude indication is correct, but they know their altimeter is not reading right and the airspeed is not reading right. There’s another instrument in the cockpit called the vertical speed indicator. It also operates off air pressure, just like your altimeter and your airspeed indicator.

Sure.

They’re very confused. To their credit, they are aviating. In the aviation world, we say, aviate, navigate, communicate, because if you stop aviating, stop flying the aircraft, you’re going to crash. To their credit, they aviated. They navigated; they stayed out over the water to make sure they wouldn’t hit anything, because they just didn’t know how high they were. Then they started talking to air traffic control. They’re very confused by all of this that’s going on. There is at least one video on YouTube where you can listen to the cockpit recording while they show you what else is going on in the cockpit. We don’t have video, but they represent it electronically so you can see it. It’s interesting to listen to the actual audio because you can hear the confusion and the attempts to make decisions and determine what’s going on. Ultimately, they get out over the water. They know these things are not right. They are asking air traffic control, hey, can you tell us our altitude? Because our instruments are not right. The problem with that is that the altimeter feeds a box in the aircraft called the transponder. I sometimes call it the Marco Polo box when I explain it to people, because the radar from air traffic control sends out a ping like a Marco, and then the box comes back with a Polo.

But the Polo is a number that’s been assigned, so they know who the aircraft is on radar, and the altitude. Well, the altimeter feeds the altitude to the transponder, so air traffic control can only tell the aircraft what the aircraft’s altimeter already says. But that didn’t occur to anybody, and they’re under high stress, and this is a unique one. Just as an aside, my only real criticism of the air crew is that you have a general idea of what power settings and what attitude you need for things, and they didn’t really seem to stick to that. But we all have to remember that when we’re looking at these, we’re Monday-morning quarterbacking them. I don’t ding them too hard. At any rate, long story short, they’re trying to figure out how to get turned around and go back. They’re trying to figure out what’s going on. Ultimately, they start getting overspeed warnings from the aircraft telling them they’re going too fast, and they’re getting stall warnings from the aircraft.

At the same time?

At the same time. They don’t know if they’re going too fast or too slow. Overspeed is based on air pressure, and obviously all their air pressure instruments are not working properly. But the stall warning is a totally separate instrument. It looks like a weathervane. If you walk out to an aircraft across the ramp at the airport today, you may see the little weathervane-looking thing up near the nose. That’s what’s there for stall warning. They actually were stalling, because they were trying to figure out how to get down and slow down, since they were getting altitude and speed indications that were higher and faster than they wanted. Their radar altimeter, which does not work on air pressure, it actually sends a radar signal down, was telling them they were low. They were getting, I’m high, I’m low, I’m slow, I’m fast. All this information coming at them.

That would be horribly confusing at the same time.

Horribly confusing, and there are alarms going off in the cockpit that are going to overwhelm your senses. There was a lot going on in the cockpit. Ultimately, they flew the aircraft into the water and there were no survivors. What happened here? When they were washing the aircraft, in order to keep water and polish out of the ports called static ports, which measure the air pressure at the altitude where the aircraft is at that time, the ports had been covered with duct tape. Then the maintenance person failed to take the duct tape off. They forgot. Then when the supervisor came through, they didn’t see the duct tape either, because that part of the aircraft looks like bare metal, so it’s silver. The gray or silver duct tape against the silver, they didn’t see it. The pilots did not see it when they preflighted the aircraft. So when the aircraft took off, those ports were sealed, and the aircraft was not able to get correct air pressure sensing. Now we have to ask, how in the world did this happen? Sure. Right. If you want to put it in a stack and start looking at slices of cheese, we have to ask these questions.

Why was he using duct tape? Was it because they didn’t have the proper plug, which would have had a remove-before-flight banner on it? Was it that they didn’t have it, or was it just too much trouble to go get it because they’d have to check it out and check it back in? Was this normal? Did they do this all the time? Did the supervisor know that and either not care or think, hey, this is how we get it done around here? That’s a cultural piece. Sure.

At least use duct tape that’s flashing red or something.

Something. When you start looking at it in those terms, you have to ask: Is there a culture? Was there a lack of resources? Was there not adequate training? Maybe they didn’t know they shouldn’t use duct tape. It just seemed like the thing to do. Then the supervisor, did he know they were using duct tape? If he did, and it was for one of these other reasons, like resources or whatever the case may be, why didn’t he look carefully to make sure the duct tape wasn’t there, since he knew they were using it? Did the air crew know that that’s how they were covering the static ports? Then when you get into the stuff with the air crew, they tried to do the right things. As we talked about, it was a very confusing set of circumstances. Like I said, standard attitudes and power settings would have been helpful. This is how these things stack up and how those holes line up in the cheese to give you that straight path for a mishap to occur. It’s just a pretty interesting example of it.

And multiple points of failure that had to align.

Absolutely.

Because assuming the duct tape was not used just that one time, there were probably many times it was used before and didn’t cause an issue because they removed it prior.

Correct. Correct.

Fascinating example. So, the last one I think you’re going to touch on is a non-aviation one, going into maritime: the Costa Concordia.

Correct. This was from 2012. A lot of people probably remember the images of the Costa Concordia rolled over on its side, heavily listing, run aground off an island in Italy. This one is truly human from beginning to end. No equipment failed. There was nothing wrong with the ship, anything along those lines. That’s part of the reason it’s such a good example here. The captain, or the ship’s master, depending on the terminology you want to use, decided, once they got underway with passengers on board, that he wanted to do what’s called a sail-by, where he would sail close by an island, specifically a town on the island, so that he could show off for his friends and wave at them when he went by.

Always a great idea.

Yeah. Most dangerous words in aviation: watch this. He decided he was going to do this, and he had done it before at the same place. But there were some differences. One, the previous time it had been planned. He briefed his bridge crew on what was going to happen. They checked all the weather conditions, et cetera, et cetera. It was during the day when he did it the first time. This was at night, and he just decided on a whim as they were on their way out that he was going to do this. As they’re sailing in there, they hit an outcropping as they were approaching the town. It ripped a big gash down the side of the ship, I think about 150 or 170 feet long, if I recall correctly, about 50 meters. That caused flooding in the ship and a power loss. Then they ended up as you saw in the photos, and 32 people lost their lives. That’s a real brief overview. But what I want to do here is talk a little bit more about what led into it. We’ve talked very generally about slices of cheese and holes.

Sure. For this one, I’m going to go into a little bit more detail and use some actual HFACS codes, names for the holes and names for the slices of cheese. When you look at the cruise company itself, the attitude there seemed to be that this captain was getting the job done. When that happens in an organization, somebody who gets the job done is obviously regarded a little bit better than people who don’t necessarily get the job done. The problem comes when that individual is doing it in an unsafe manner. Maybe they’re hiding some stuff about how they’re doing it. They’re doing things that are unsafe, but they’re getting away with it. You have to watch out for those things in an organization, for what people may be doing and how they may be getting things done. At that level, he was accomplishing things. So organizationally, you have that. Then you can call it organizational or supervision in that next slice of cheese, depending on how you want to look at it. They probably didn’t provide adequate training. In the aviation world, we use simulators a lot. They’re using simulators a lot more in the maritime world now as well, and they can put an entire bridge crew in a simulator together and practice scenarios and practice their coordination.

Well, they hadn’t had that with this crew. They failed to provide that training. This captain had an incident pulling into another port where he was accused of coming in too fast. If you do any boating at all, or if you’re going by a lake or whatever, you might see buoys that say no wake zone. Well, the belief is that he pulled into this port too fast, created a wake, and that damaged either equipment or ships. There weren’t any real serious consequences for him on that. So they may have failed to identify or correct risky or unsafe practices. Sure. And again, if they didn’t identify it, then they didn’t retrain him. Now they’ve failed to provide adequate training for him and failed to provide adequate training for the bridge crew as a whole. Now we’ve hit organizational with the culture, we’ve hit supervision with the training on safe practices. Now we go into the preconditions for the next level. Complacency. He decided on a whim, essentially, that he was going to do this sail-by. So he didn’t check the conditions, those kinds of things. He didn’t consider the fact that it was…

We’ll get back to that one in just a second. Let’s see. Partly, maybe, because the crew didn’t have the training in one of these bridge simulators, there was a lack of assertiveness from the crew members toward him. That may have been because he was known to be very intimidating. He would yell at people when he didn’t like the information or when he thought they were telling him things that weren’t correct. Rank and position intimidation is one of our holes. Lack of assertion is a hole. Complacency, he didn’t think this was a big deal. And distraction, and this one’s very interesting to me personally. One, he’s on the bridge wing. If you look at a ship, you usually have the enclosed bridge, and then outside of that you’ve got a weather deck, where you can see further out, those kinds of things. He’s standing on the bridge wing on the weather deck, talking on his phone to one of his friends ashore: hey, look at us, we’re coming by, just get ready, here we come. Then part of the distraction was there were ship’s guests on the bridge wing with him, which was a violation of policy, to have guests on the bridge wing when they were in close proximity to shore.

And he had his girlfriend. Excuse me, his mistress. He was married and was having an affair and had his mistress on the ship with him, in violation of policy. So he had all this distraction going on, in addition to the fact that he just thought of this as no big deal. So now we’ve covered three slices of cheese, and let’s get to the last one, the acts. We have an extreme violation, lack of discipline, where we talked about all these preconditions, and those are examples of lack of discipline as well, where he failed to focus on what he was doing, allowed these distractions on the bridge, et cetera. And inadequate real-time risk assessment: day versus night, I checked the weather, I didn’t check the weather, et cetera. This is one case where we’ve taken the codes, the names of those holes in the cheese, and applied them to a specific case. There’s a whole lot of stuff with this one. There’s a reason that mishap reports are hundreds of pages long. But this one comes down to these examples of codes where he violated all these things. And that was just before they actually had a problem.

It got worse after that, if you all are familiar with that case. Yeah.

Well, phenomenal story, but very applicable to other industries, because there are a lot of other industries where somebody is known for getting it done and might be doing some risky things to get it done. There just hasn’t been an event or a mishap, and people are not paying attention to those things. How did you actually get the job done? Or in the case of the delivery driver you were talking about, maybe he historically got it done by cutting corners, and they just decided not to look at some of those corners being cut.

Right.

Right. Fascinating. So really good illustration, I think, in terms of culture, learning, and then Swiss cheese in terms of how different layers come together. Swiss cheese is not cheddar cheese. It has holes in it. It’s just a matter of those holes lining up at any given point in time. They’re already there.

Right. That’s where the latent versus active conditions come in. In the case of DoD and HFACS, you have the organizational, supervision, and preconditions layers. Those are all your latent layers, and then your active layer is that last one. In this case, that’s where the extreme violations and the inadequate real-time risk assessment occurred.

I think the part I also like about Swiss cheese is it forces people to look beyond the aviator, beyond the ship’s captain, beyond the team member in an organization who makes a mistake, to the latent conditions that are linked to decisions the organization has made over time. There are people in finance, people in HR, people in a corporate office making decisions, not necessarily connecting them to how they impact somebody in the field. We don’t know about Aero Peru, but maybe it’s even somebody in procurement who forgot to buy the proper tools, and you use what you have because you’ve got to get the job done. There are a lot of conditions that impact other people in the organization. I think that’s another reflection in Swiss cheese for me.

Absolutely.

Great. Any closing thoughts that you’d like to add?

Sure. Just a couple of things. Aviators are, on the whole, willing to admit their mistakes. It’s because we know it’s a very unforgiving environment. The ocean and aviation are very unforgiving environments. As an attitude, as a culture, we want to share with others so they either don’t make the same mistake we did, or they understand how we got out of a situation. If you look at Aero Peru, I mean, seriously, has anybody else ever had that problem, where there’s duct tape over the static ports? I don’t know, but by talking about it… Never heard of one. Yeah. By sharing this story, we have the ability to help others avoid that situation in the future. That’s really the way that we do it. The second thing that’s big in aviation is that the way we really made big improvements in safety, in our mishap record, is by planning and talking about these things. Somewhere later, somebody came along and named this the PBED process: plan, brief, execute, and debrief. But we’ve been doing it for decades. You actually have a flight plan. You may not execute that plan exactly, but at least you have a plan to deviate from, I like to say.

Sure. Then you brief it so that everybody understands what’s going on. Then obviously you go and execute it, and you may have to make changes along the way. That’s fine. When you come back, you debrief it. Hey, we had this mission. Did we accomplish it? Did we have any problems? What did we do well? What did we not do well? So that we can improve later. That really helps in a lot of ways, in a lot of industries or situations, if you just talk about what you’re going to do, plan it out, and make sure everybody understands. When you plan it, if you have the right people involved, they can come up with solutions to problems that you see in planning. They may identify a problem that you can avoid in the planning stage instead of running across it in the execution stage. So that plan, brief, execute, debrief cycle is a really useful thing to have. Something that can be transposed into any other industry as well in terms of really thinking through the planning.

I think your point around the voluntary reporting is huge because having been in aviation, you hear about things that people would rather not talk about. I fell asleep, things of that nature. But if you don’t know about it, you can’t do anything about it because unless the plane crashed, you would have no knowledge that both pilots fell asleep unless they went off course dramatically. Chances are nothing’s going to happen because they’re going to be on autopilot and it’s pre-programmed and all good. But if you know something’s happening, you can start understanding what are the conditions that could be driving to it.

Right. Absolutely.

Excellent. Well, Marty, thank you so much for joining me today and for sharing your story. Pretty rich, interesting, and thought-provoking story with really good examples. Thank you.

Happy to be here.

Thank you for listening to The Safety Guru on C-suite Radio. Leave a legacy. Distinguish yourself from the past. Grow your success. Capture the hearts and minds of your teams. Elevate your safety. Like every successful athlete, top leaders continuously invest in their safety leadership with an expert coach to boost safety performance. Begin your journey at execsafetycoach.com. Come back in two weeks for the next episode with your host, Eric Michrowski. This podcast is powered by Propulo Consulting.  

The Safety Guru with Eric Michrowski

More Episodes: https://thesafetyculture.guru/

C-Suite Radio: https://c-suitenetwork.com/radio/shows/the-safety-guru/

Powered By Propulo Consulting: https://propulo.com/

Eric Michrowski: https://ericmichrowski.com

ABOUT THE GUEST

Marty Ohme is an employee-owner at A-P-T Research, where he works as a System Safety Engineer. This follows a U.S. Navy career as a helicopter pilot, Air Boss aboard USS TRENTON, and program manager at what is now Naval Safety Command, among other assignments. He uses his uncommon perspective as both engineer and operator to support the development of aerospace systems and mentor young engineers. Marty holds a Bachelor of Science from the United States Naval Academy and a Master of Aeronautical Science from Embry-Riddle Aeronautical University. He may be reached through LinkedIn.

For more information: https://www.apt-research.com/



EXECUTIVE SAFETY COACHING

Like every successful athlete, top leaders continuously invest in their Safety Leadership with an expert coach to boost safety performance.

Safety Leadership coaching has been limited, expensive, and exclusive for too long.

As part of Propulo Consulting’s subscription-based executive membership, our coaching partnership is tailored for top business executives that are motivated to improve safety leadership and commitment.
Unlock your full potential with the only Executive Safety Coaching for Ops & HSE leaders available on the market.

Explore your journey with Executive Safety Coaching at https://www.execsafetycoach.com.