
Bringing Human Factors to Life with Marty Ohme


LISTEN TO THE EPISODE: 

ABOUT THE EPISODE

There’s a safety decision behind every chain of events. We invite you to join us for a captivating episode of The Safety Guru featuring Marty Ohme, a former helicopter pilot in the U.S. Navy and current System Safety Engineer. Don’t miss this opportunity to gain from Marty’s extensive expertise and insights on system factors, organizational learning and safety culture, and effective risk management to mitigate future risks. Learn from the best practices of the U.S. Navy, as Marty brings human factors to life with real-world examples that can make a difference in your organization.

READ THIS EPISODE

Real leaders leave a legacy. They capture the hearts and minds of their teams. Their origin story puts the safety and well-being of their people first. Great companies ubiquitously have safe, yet productive operations. For those companies, safety is an investment, not a cost for the C-suite. It’s a real topic of daily focus. This is the Safety Guru with your host, Eric Michrowski, a globally recognized ops and safety guru, public speaker and author. Are you ready to leave a safety legacy? Your legacy success story begins now.

Hi, and welcome to the Safety Guru. Today, I’m very excited to have with me, Marty Ohme. He’s a retired naval aviator, also a system safety engineer. He’s got some great stories he’s going to share with us today around human factors, organizational learning. Let’s get into it. Marty, welcome to the show.

Thank you. I appreciate the opportunity to spend some time with you and share some interesting stuff with your audience.

Yeah. Let’s start maybe with your background and your story in the Navy.

Sure. I graduated from the United States Naval Academy with a bachelor’s in aerospace engineering. I’ve been fascinated with flight and things that fly since a very young age, so that lined up nicely. I went on to fly the H-46 Delta and the MH-60 Sierra. To give your audience an idea of what that looks like, the H-46 was flown for many, many years by the Marine Corps and the Navy. It looks like a small Chinook, the tandem rotor helicopter. The MH-60 Sierra is basically a Black Hawk painted gray. There are some other differences, but both aircraft were used primarily for logistics and search and rescue missions. Then we did a little bit of special operations support. There’s a lot more of that going on now than I personally did before I retired. I also had time as a flight instructor at our helicopter flight school down in Florida. After my time as an instructor, I went on to be an Air Boss on one of our smaller amphibious ships. Most people think of the Air Boss on the big aircraft carrier. This is a couple of steps down from that, but it’s a specialty for helicopter pilots as part of our career. Later on, I went to Embry-Riddle Aeronautical University, which likes to call itself the Harvard of the Skies, to get a master’s in aviation safety and aviation management. That was a prelude for me to go to what is now the Naval Safety Command, where I wrapped up my Navy career. I served as an operational risk management program manager and supported a program called the Culture Workshop, where we went out to individual commands and talked to them about risk management and the culture they had in their commands. Since retirement from the Navy, I work as a system safety engineer at A-P-T Research. We do system, software, and explosive safety. If you want to understand what that means, the easiest way to look at it is that we’re at the very top of the hierarchy of controls, at the design level. We sit with the engineers, and we work with them to design out or minimize the risks and hazards within a design. You can do that with hardware, and you can do that with software. Explosives is a specialty alongside that. I don’t personally work in the explosives division, but we have a lot of work that goes on there.

That’s Marty in a nutshell.

Well, glad to have you on the show. Tell me a little bit about organizational culture. We’re going to get into Swiss cheese and some of the learning components, but culture is a key component of learning.

Absolutely. So military services, whatever country, whatever environment, they’re all high-risk environments.

Absolutely. Specific to the Navy, which is my background: if somebody’s hurt far out at sea, it could be days to reach high-level care. It’s obviously improved over time with the capabilities of helicopters and other aircraft, but you may be stuck on that ship for an awfully long time before you can get to a high level of care. That in and of itself breeds a culture of safety. You don’t want people getting hurt out at sea because of the consequences. When I say culture of safety, a lot of people hear culture and think about language, like English or Spanish or French or whatever the case may be. What food people eat, what clothes they wear, those kinds of things. Here, what we mean is how things get done around here. There are processes and procedures, how people approach things, and the general idea. In fact, the US Navy is in the middle of launching a campaign called What Right Looks Like to try to focus people on making sure they’re doing the right kinds of things. Something that’s been around the Navy for a long time and is specific to safety is using the word mishap instead of accident.

Sure. Because in just general conversation, most people will think, well, accidents happen. Really, we want a culture where we think of things as mishaps and recognize that mishaps are preventable. We really want to focus people on thinking about how to avoid the mishap to begin with and reduce the risk that’s produced by all the hazards in that high-risk environment.

In an environment like the Navy, it’s incredibly important to get this right. You talked about what right looks like. But you’ve got a lot of people joining at a very young age who can make very critical decisions on the other end of the world without necessarily having the ability to ring the President for advice and guidance on every call that happens. Tough decisions can happen at any given point in time. Tell me a little bit about how that gets instilled.

Sure. Organizations have to learn, and they have to learn from mistakes. In these high-risk environments, when something goes wrong, because it will, you need to ask yourself what went wrong and why. When you think about it, that’s what leads to a mishap investigation. Then, in order to do that learning, you have to really learn. You’ve got to apply the lessons that came out of those investigations. That means you have to have good records of those mishaps. I mentioned the Naval Safety Command. Part of the responsibility of Naval Safety Command is to keep those records and make them useful to the fleet.

Sure. We’ve just touched a little bit on building a culture of learning and how the Navy does it. Let’s talk a little bit about Swiss cheese. We’ve touched on Swiss cheese a few times on the podcast, so most listeners are probably familiar with it, but I think it’s worthwhile to have a good refresher on it.

Absolutely. As I mentioned about having good records, if the records aren’t organized or structured in a way that makes them effective, then it’s going to be very difficult to apply those lessons. As an example, take a vehicular mishap, commonly referred to as a car accident, though we’re going to use the mishap terminology here. If you have three police officers write a report on a single vehicle mishap, they’re probably all going to come out different. One of them might say the road was wet, one of them might say there was a loss of traction, and the third one might say the driver was going too fast. It’s a lot more difficult to analyze aggregated mishap data if every investigator uses different terms and a different approach. This is where the Swiss cheese model, and the follow-on work, comes into play. Dr. James Reason provided a construct that you can use to organize mishap reporting with the Swiss cheese model. In his model, the slices of cheese represent barriers to mishaps. He also identified that there are holes in the cheese that represent the holes in your barriers. Then he labeled them as latent or active failures.

Latent failures are existing, maybe persistent, conditions in the environment, and active failures are usually something done by a person, typically at the end. His model has four layers of cheese, three with latent failures and one with active failures. No barrier is perfect. If we look at our vehicle mishap in that way and start at the bottom, let’s say it’s a delivery driver. They’ve committed an unsafe act by speeding.

Sure.

Why did they do that? Well, in our scenario, he needs a delivery performance bonus to pay hospital bills because he has a newborn baby. He’s got this existing precondition to an unsafe act. Sure. Well, prior to him going out for the day, his supervisor looks at his delivery plan, but he doesn’t do a good job reviewing it and doesn’t see that it’s unrealistic. Sure. The thing is that the supervisor sees unrealistic delivery plans every day. It’s ingrained in him that this is normal. All these people are trying to execute unreasonable plans because the company pay is generally low and they give bonuses for meeting targets for the number of deliveries per day. The company, as an organization, has set a condition that encourages people to have unrealistic plans, which the supervisor sees every day and just passes off as everybody does it. Then we roll down and we have this precondition of, I need a bonus because I have bills to pay. This is the way that the Swiss cheese model is constructed. A little bit later on, Dr. Shappell and Dr. Wiegmann developed the Human Factors Analysis and Classification System, or HFACS.

They did that by taking Reason’s slices of cheese and naming the holes in the cheese, the holes in the barriers, after they studied mishap reports from naval aviation.

Tell me about some of those labels that they identified.

Some specific ones that they came up with are things like a lack of discipline, so there was an extreme violation due to lack of discipline. Sure. That would be at the act level. A precondition might be that someone was distracted, for example. Sure. A supervisory hole would be that there was not adequate training provided to the individual who was involved in the mishap. Then at the overall organizational culture level, it might just be that there’s an attitude that allows unsafe tasks to be done. That sets everything up through all the barriers and sets our individuals up for failure and the mishap. You see that in our delivery driver example, where at every level there’s a human decision made. There’s a policy decision. There’s a decision to accept all these unreasonable plans. There was a decision that, okay, I must have this bonus. Now, you could argue that one back and forth, but there was also a decision made to violate the speed limit, and that’s your active one down at the bottom. Yeah.

These essentially helped create a taxonomy, if I’m hearing you correctly, so that there is more standardization in terms of incident investigations and classification of learnings.

That’s correct. The decisions in this stack and the Swiss cheese come together. As you’re alluding to, there’s a taxonomy. Shappell and Wiegmann studied, I think it was 80 mishaps in naval aviation, and were able to assign standardized labels. Those are the labels that became the names for the holes in the cheese. Once they put it in that taxonomy, they found 80% of the mishaps involved a human factor of some sort. I personally argue that there’s a human factor at every level. Even if you go back and look at something like United Flight 232 that crashed in Sioux City, Iowa, it all rolled back to a flaw in the raw metal that was used to machine the turbine blade that ultimately failed. Sure. Did they make a decision not to do a certain inspection on that block of metal beforehand? It just keeps going down the line. There’s a decision in every chain of events.

Also, no redundancy in terms of the hydraulics, from what I remember in that incident.

Right. A design decision.

A design decision, exactly. That’s a great one. I like to use that as an example for many things, but we won’t pull that thread too hard today. But all these human factors, all these decisions, this is why in the US the Department of Defense uses HFACS as a construct for mishap reporting: it aids in organizing the mishap reports and the data so we can learn from our mistakes. It makes actionable data. There are other systems that also have taxonomies. Maritime Cyprus collects data; I ran across it when I was preparing for something else. Their number one near-miss factor is situational awareness.

Situational awareness is a tough one to change and to drive.

It is. It’s a lot of training and a lot of tools and those kinds of things. I bought a new vehicle recently, and it likes to tell me to put the brakes on because it thinks I’m going to hit something, because it thinks it’s more aware than I am. It did it to me this morning, as a matter of fact. But it can be an interesting challenge.
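As a minimal sketch of the taxonomy idea discussed above, and not something from the episode itself, the snippet below shows how mishap findings might be tagged against the four HFACS levels so they can be aggregated across reports. The level names follow HFACS; the finding strings and the helper function are hypothetical, chosen to mirror the delivery-driver example.

```python
# Hypothetical sketch: tagging mishap findings against the four HFACS levels
# so they can be counted across reports. Level names follow HFACS; the
# finding strings below are illustrative only (the delivery-driver example).

HFACS_LEVELS = [
    "Organizational Influences",      # latent
    "Unsafe Supervision",             # latent
    "Preconditions for Unsafe Acts",  # latent
    "Unsafe Acts",                    # active
]

delivery_driver_report = {
    "Organizational Influences": "Pay and bonus structure rewards unrealistic delivery targets",
    "Unsafe Supervision": "Supervisor approves an unrealistic plan without adequate review",
    "Preconditions for Unsafe Acts": "Driver under financial pressure (hospital bills, newborn)",
    "Unsafe Acts": "Driver violates the speed limit",
}

def count_levels(reports):
    """Count how often each HFACS level is implicated across mishap reports.

    Standardized labels are what make this aggregation possible; free-text
    narratives ('road was wet' vs. 'loss of traction') would not line up.
    """
    counts = {level: 0 for level in HFACS_LEVELS}
    for report in reports:
        for level in report:
            if level in counts:
                counts[level] += 1
    return counts

if __name__ == "__main__":
    print(count_levels([delivery_driver_report]))
```

The point of the sketch is only that shared labels turn individual reports into data that can be rolled up, which is the argument made in the conversation above.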

Yes. Okay. Let’s go through some examples. I know when we talked, you had a couple of really interesting ones: Avianca, Aero Peru. Maybe let’s go through some of those examples of human factors at play and how they translated into an incident from an aviation standpoint.

Sure. Avianca Flight 52 was in January of 1990. The aircraft was flying up to JFK out of Medellín, Colombia. The air crew received their information from dispatch about weather and other conditions as they were getting ready to go out on their flight. The problem was dispatch gave them weather information that was 9 to 10 hours old. They also did not have the information that showed there was a widespread storm causing bad conditions up and down a lot of the East Coast. The other part was that dispatch had a standard alternate they used for JFK, which was Boston Logan. Boston Logan had conditions just as bad as JFK. They weren’t going to be able to use it as an alternate, but they didn’t check. Then the air crew didn’t check either. They didn’t confirm how old the forecast was. They didn’t do any of those things. They launched on their flight with the fuel that was calculated to be necessary for that flight. For those who are not in the aviation world, when you’re calculating fuel for a flight, you have to carry what you think you need to get to your destination, plus what you’re going to need to get from there to your alternate in case you can’t get into your destination.

Then there’s a buffer that’s put on top of that. Depending on what rule you’re using, it could be time, it could be a percentage. It just depends on what rules you’re operating under and what aircraft you’re in. So they have X amount of fuel. They launch out on their flight with 158 people on board. They get up there, and because of the weather, things are backed up at JFK and all the way up the East Coast as well. They get put in a hold near Virginia for quite some time. Then they get put in a hold again when they get closer to JFK. They tried to get into JFK, and they had a missed approach. They couldn’t see the runway when they did the approach and had to go around and go back into holding. The captain, understandably, is starting to become concerned about their fuel state. Sure. He’s asking the co-pilot if he has communicated to air traffic control what their fuel situation is. The co-pilot says, yes, I have. Well, the nuance here is that the international language of aviation is English, and the captain didn’t speak English. The co-pilot did, and that met the requirement for one of them to be able to speak English to communicate with air traffic control, but the captain didn’t know exactly what the co-pilot was telling air traffic control.

Well, that becomes a problem when the co-pilot is not using standard language. He was saying things like, hey, we’re getting low on fuel. That’s not the standard language that needs to be used. Correct. You have two phrases. You have minimum fuel, which indicates to air traffic control that you can accept no unnecessary delays. He never said minimum fuel. When they got even lower on fuel, he never used the word emergency. So, air traffic control did not know how dire the situation was. They did offer them an opportunity to go to their alternate at some point, but by then they were so low on fuel, they couldn’t even make it to their alternate, even though the weather at Boston was too low for them to get in anyway. Ultimately, they had another missed approach. They were coming around to try one more time, and they actually ran out of fuel. They ran the fuel tanks nearly dry on approach, and they crashed the aircraft in Cove Neck, New York.

Wow.

Here we have an aircraft, and you would think that there would be… There’s almost no reason for an aircraft to run out of fuel in flight, especially an airliner. But with these conditions that were set, they did. Just as an aside, there were 85 survivors out of the 158, and a lot of that had to do with the fact that there was no fire.

Because there’s no fuel to burn.

Because there’s no fuel to burn. I understand it had a positive impact on what materials were used in aircraft later on, specifically cushions and materials that don’t produce toxic fumes when they burn, because they could show that people could survive the impact. It was the fire and the fumes that killed people. That’s just an aside. That’s the overview. If we back up a little bit and talk about what human factors rolled into play here: dispatch had this culture, an organizational culture. Sure. They had a general policy of using Boston Logan as the alternate for JFK. That was just the standard. They didn’t even check. They may or may not have been trained properly on how to check the weather and make sure it was adequate, either for an aircraft to get into its primary destination or into its alternate, because the forecast clearly showed that the conditions were too poor for the aircraft to shoot those approaches. That’s an organizational-level failure, and you can look at that as one slice of cheese. If we go a little bit further down, without trying to look at every aspect of it, and look at what the pilots did: they didn’t check the weather.

They just depended on dispatch and assumed it was correct. Then once they started getting into the situation they were in, there was communication in the cockpit. That was good, except it was inadequate. More importantly, the co-pilot was the only one in the cockpit who could speak English, so the captain didn’t have full situational awareness, which we mentioned a moment ago. Then he failed to use the proper terminology. That was a specific failure on his part. We can’t say whether that was because he didn’t want to declare an emergency because he was embarrassed, which is possible, or because he didn’t want to have to answer to the captain, perhaps. If he had declared an emergency, ATC might come back and ask later, why did you declare an emergency? Why didn’t you just tell us this stuff earlier? We don’t have those answers. Unfortunately, those two gentlemen didn’t survive the crash. But these are all things that can roll into it. When you break it down into HFACS, these are the preconditions: maybe he was embarrassed, maybe he felt there was a power dynamic in the cockpit and he couldn’t admit making a mistake to the captain.

Then he had the active failure of not using the correct language with ATC, the standard air traffic control language.

It feels as if some CRM elements, some psychological safety, were probably at play, because you would expect the co-pilot to at least ask, do you want me to declare an emergency, or something along those lines. Or seek clarity if you’re unsure.

Absolutely. That’s a really interesting one to me. I use it as an example with some regularity when I’m talking about these kinds of things.

This episode of the Safety Guru podcast is brought to you by Propulo Consulting, the leading safety and safety culture advisory firm. Whether you are looking to assess your safety culture, develop strategies to level up your safety performance, introduce human performance capabilities, re-energize your BBS program, enhance supervisory safety capabilities, or introduce unique safety leadership training and talent solutions, Propulo has you covered. Visit us at propulo.com.

How about Aero Peru? Because I think the Avianca one is a phenomenal, really interesting one. Actually, one I haven’t touched on much before. So, it’s a great example of multiple levels of failure. How about Aero Peru?

Aero Peru is another one that’s really interesting. It had a unique problem. The short version, just to give an overview of the flight like we did with Avianca: Aero Peru was flying from Miami, and they were ultimately headed to Chile, but they had stopovers in Ecuador and Peru. During one of those stopovers, they landed during the day, and the plane was scheduled to take off at night. During that interim time, the ground crew washed the aircraft and polished it. Then the aircraft launched. They got up a couple of hundred feet off the runway, and the air crew noticed there was a problem with the airspeed indicator and altimeter. They weren’t reading correctly. Well, they were already in the air. You can’t really get back on the ground at that point. They flew out over the Pacific, and they got up into the clouds. Now they’re flying on instruments, so you don’t have any outside reference out there. Even if it were clear, flying over the water at night is a very dark place. They got out there, flying on instruments.

Their attitude indication is correct, but they know their altimeter is not reading right and the airspeed is not reading right. There’s another instrument in the cockpit called the vertical speed indicator. It also operates off air pressure, just like your altimeter and your airspeed indicator.

Sure.

They’re very confused. To their credit, they are aviating. In the aviation world, we say, aviate, navigate, communicate, because if you stop aviating, stop flying the aircraft, you’re going to crash. To their credit, they aviated. They navigated; they stayed out over the water to make sure they wouldn’t hit anything, because they just didn’t know how high they were. Then they started talking to air traffic control. They’re very confused by all this that’s going on. There is on YouTube at least one video where you can listen to the cockpit recording, and they’ll show you what else is going on in the cockpit. We don’t have the video, but they represent it electronically so you can see it. It’s interesting to listen to the actual audio because you can hear the confusion and the attempts to make decisions and determine what’s going on. Ultimately, they get out over the water. They know these things are not right. They are asking air traffic control, hey, can you tell us our altitude? Because our instruments are not right. The problem with that is that the altimeter feeds a box in the aircraft called the transponder. I sometimes call it the Marco Polo box when I explain it to people, because the radar from air traffic control sends out a ping like a Marco, and then the box comes back with a Polo.

But the Polo is a number that’s been assigned, so they know which aircraft it is on radar and its altitude. Well, the altimeter feeds the altitude to the transponder, so air traffic control can only tell the aircraft what the altimeter already says. But that didn’t occur to anybody, and they’re under high stress, and this is a unique one. Just as an aside, my only real criticism of the air crew is that you have a general idea of what power settings and what attitude you need for things, and they didn’t really seem to stick to that. But we all have to remember that when we’re looking at these, we’re Monday-morning quarterbacking them. I don’t ding them too hard. At any rate, long story short, they’re trying to figure out how to get turned around and go back. They’re trying to figure out what’s going on. Ultimately, they start getting overspeed warnings from the aircraft telling them they’re going too fast, and they’re getting stall warnings from the aircraft.

At the same time?

At the same time. They don’t know if they’re going too fast or too slow. Overspeed is based on air pressure, and obviously all their air pressure instruments are not working properly. But the stall warning is a totally separate instrument. It looks like a weathervane. If you walk out to an aircraft across the ramp at the airport today, you may see the little weathervane-looking thing up near the nose. That’s what’s there for stall warning. They actually were stalling, because they were trying to figure out how to get down and slow down, since they were getting altitude and speed indications that were higher and faster than they wanted. Their radar altimeter, which does not work on air pressure, it actually sends a radar signal down, was telling them they were low. They were getting, I’m high, I’m low, I’m slow, I’m fast. All this information coming at them.

That would be horribly confusing at the same time.

Horribly confusing, and there are alarms going off in the cockpit that are going to overwhelm your senses. There was a lot going on in the cockpit. Ultimately, they flew the aircraft into the water, and there were no survivors. What happened here? When they were washing the aircraft, in order to keep water and polish out of the ports called the static ports, which measure the air pressure at the altitude where the aircraft is at that time, those ports had been covered with duct tape. Then the maintenance person failed to take the duct tape off. They forgot. Then when the supervisor came through, they didn’t see the duct tape either, because that part of the aircraft looks like bare metal, so it’s silver. With gray or silver duct tape against the silver, they didn’t see it. The pilots did not see it when they preflighted the aircraft. So, when the aircraft took off, those ports were sealed, and the aircraft was not able to get correct air pressure sensing. Now we have to ask, how in the world did this happen? Right. If you want to put it in a stack and start looking at slices of cheese, we have to ask these questions.

Why was he using duct tape? Was it because they didn’t have the proper plug, which would have had a remove-before-flight banner on it? Was it that they didn’t have it, or was it just too much trouble to go get it because they have to check it out and check it back in? Was this normal? Did they do this all the time? Did the supervisor know that and either not care or say, hey, this is how we get it done around here? That’s a cultural piece. Sure.

At least use duct tape that’s flashing red or something.

Something. When you start looking at it in those terms, you have to ask: Is there a culture issue? Was there a lack of resources? Was there not adequate training, so they didn’t know they shouldn’t use duct tape and it just seemed like the thing to do? Then the supervisor, did he know they were using duct tape? If he did, and it was for one of these other reasons, like resources or whatever the case may be, why didn’t he look carefully to make sure the duct tape wasn’t there, since he knew they were using it? Did the air crew know that that’s how they were covering the static ports? Then when you get into the stuff with the air crew, they tried to do the right things. As we talked about, it was a very confusing set of circumstances. Like I said, standard attitudes and power settings would have been helpful. This is how these things stack up and how those holes line up in the cheese to give you that straight path for a mishap to occur. It’s just a pretty interesting example of it.

And multiple points of failure that had to align.

Absolutely.

Because assuming the duct tape was not used just that one time, there were probably many times it was used before and didn’t cause an issue because they removed it prior.

Correct. Correct.

Fascinating example. So, the last one I think you’re going to touch on moves from aviation into maritime: the Costa Concordia.

Correct. This was from 2012. A lot of people probably remember the images of the Costa Concordia rolled over on its side, heavily listing, run aground off an island in Italy. This one is truly human from beginning to end. No equipment failed. There was nothing wrong with the ship, anything along those lines. That’s part of the reason it’s such a good example here. The captain, or the ship’s master, depending on the terminology you want to use, got underway with passengers on board and decided he wanted to do what was called a cruise-by, where he would sail close by an island, specifically a town on the island, so that he could show off for his friends and wave at them when he went by.

Always a great idea.

Yeah. The most dangerous words in aviation: watch this. He decided he was going to do this, and he had done it before at the same place. But there were some differences. One, the previous time it had been planned. He briefed his bridge crew on what was going to happen. They checked all the weather conditions, et cetera, et cetera. It was during the day when he did it the first time. This was at night, and he just decided on a whim, as they were on their way out, that he was going to do this. As they were sailing in there, they hit an outcropping as they were approaching the town. It ripped a big old gash down the side of the ship. I think it was about 150 or 170 feet long, if I recall correctly, or about 50 meters. That caused flooding in the ship and a power loss. They ended up as you saw in the photos, and 32 people lost their lives. That’s a real brief overview. But what I want to do here is talk a little bit more about what led into it. We’ve talked very generally about slices of cheese and holes.

Sure. For this one, I’m going to go into a little bit more detail and use some actual HFACS codes, names for the holes and names for the slices of cheese. When you look at the cruise company itself, the attitude there seemed to be that this captain was getting the job done. When that happens in an organization, somebody who gets the job done is obviously regarded a little more highly than people who don’t necessarily get the job done. The problem comes when that individual is doing it in an unsafe manner. Maybe they’re hiding some stuff about how they’re doing it. They’re doing things that are unsafe, but they’re getting away with it. You have to watch out for those things in an organization, for what people may be doing and how they may be getting things done. At that level, he was accomplishing things. So organizationally, you have that. Then, in the next slice of cheese, which you can call organizational or supervision depending on how you want to look at it, they probably didn’t provide adequate training. In the aviation world, we use simulators a lot. They’re using simulators a lot more in the maritime world now as well, and they can put an entire bridge crew on a simulator together and practice scenarios and practice their coordination.

Well, they hadn’t had that with this crew. They failed to provide that training. This captain had an incident pulling into another port where he was accused of coming in too fast. If you do any boating at all, or you might be going by a lake, you might see buoys that say no wake zone. Well, the belief is that he pulled into this port too fast, created a wake, and damaged either equipment or ships. There weren’t any real serious consequences for him on that. So, they may have failed to identify or correct risky or unsafe practices. Sure. And if they didn’t identify it, then they didn’t retrain him. Now they’ve failed to provide adequate training for him and failed to provide adequate training for the bridge crew as a whole. So we’ve hit organizational with the culture, and we’ve hit supervision with the training on safe practices. Now we go into the preconditions at the next level. Complacency. He decided on a whim, essentially, that he was going to do this sail-by. He didn’t check the conditions, those kinds of things. He didn’t consider the fact that it was…

We’ll get back to that one in just a second. Let’s see. Partly, or maybe partly, because the crew didn’t have the training in one of these bridge simulators, there was a lack of assertiveness from the crew members toward him. That may have been because he was known to be very intimidating. He would yell at people when he didn’t like the information or when they told him things that weren’t correct. Rank position intimidation is one of our holes. Lack of assertion is a hole. Complacency: he didn’t think this was a big deal. And distraction, and this one’s very interesting to me personally. One, he’s on the bridge wing. If you look at a ship, you usually have the enclosed bridge, and outside from that you’ve got a weather deck where you can see further out, those kinds of things. He’s standing on the bridge wing on the weather deck, talking on his phone to one of his friends ashore. Hey, look at us. Look, we’re coming by. Get ready. Here we come. Then part of the distraction was that there were ship’s guests on the bridge wing with him, which was a violation of policy, to have guests on the bridge wing when they were in close proximity to shore.

And he had his girlfriend, excuse me, his mistress. He was married and having an affair, and he had his mistress on the ship with him in violation of policy. So, he had all this distraction going on, in addition to the fact that he just thought of this as no big deal. Now we’ve covered three slices of cheese, so let’s get to the last one, the acts. We have an extreme violation, lack of discipline, where we talked about all these preconditions, and those are examples of lack of discipline as well, where he failed to focus on what he was doing, allowed these distractions on the bridge, et cetera. And inadequate real-time risk assessment: day versus night, checked the weather or didn’t check the weather, et cetera. In this case, this is one where we’ve taken the codes, the names of those holes in the cheese, and applied them to this specific case. There’s a whole lot of stuff with this one. There’s a reason that mishap reports are hundreds of pages long. But this one comes down to these examples of codes where he violated all these things. And that was just before they actually had a problem.

It got worse after that, if you all are familiar with that case. Yeah.

Well, phenomenal story, but very applicable to other industries, because there are a lot of other industries where somebody is known for getting it done and might be doing some risky things in getting it done; there just hasn’t been an event or a mishap, and people are not paying attention to those things. How did you actually get the job done? Or in the case of the driver you were talking about, the delivery driver, maybe he historically got it done by cutting corners, and they just decided not to look at some of that corner-cutting.

Right.

Right. Fascinating. So a really good illustration, I think, in terms of culture, learning, and then Swiss cheese in terms of how different layers come together. Swiss cheese is not cheddar cheese. It has holes in it. It’s just a matter of when those holes line up at any given point in time. They’re already there.

Right. That’s where the latent versus active conditions may be. In the case of DoD and HFACS, you have the organizational, supervision, and preconditions layers. Those are all your latent layers, and then your active layer is that last slice. In this case, that’s where the extreme violation and the inadequate real-time risk assessment occurred.

I think the part I also like about Swiss cheese is that it forces people to look beyond the aviator, beyond the ship’s captain, beyond the team member in an organization who makes a mistake, to the latent conditions that are linked to decisions the organization has made over time. There are people in finance, people in HR, people in a corporate office making decisions, not necessarily connecting them to how they impact somebody in the field. We don’t know about Aero Peru, but maybe it’s even somebody in procurement who forgot to buy the proper tools, and you use what you have because you’ve got to get the job done. A lot of conditions impact other people in the organization. I think that’s another reflection in Swiss cheese for me.

Absolutely.

Great. Any closing thoughts that you’d like to add?

Sure. Just a couple of things. Aviators are, on the whole, willing to admit their mistakes. It’s because we know it’s a very unforgiving environment. The ocean and aviation are very unforgiving environments. As an attitude, as a culture, we want to share with others so they either don’t make the same mistake we did, or they understand how we got out of a situation. If you look at Aero Peru, I mean, seriously, has anybody else ever had that problem where there’s duct tape over the static ports? I don’t know. Never heard of one. Yeah. By sharing this story, we have the ability to help others avoid that situation in the future. That’s really the way that we do it. The second thing that’s big in aviation is that the way we really made big improvements in safety, in our mishap record, is by planning and talking about these things. Somewhere later, somebody came along and named this the PBED process: plan, brief, execute, and debrief. But we’ve been doing it for decades. You actually have a flight plan. You may not execute that plan specifically, but at least you have a plan to deviate from, I like to say.

Sure. Then you brief it so that everybody understands what’s going on. Then obviously you go and execute it, and you may have to make changes along the way. That’s fine. When you come back, let’s debrief it. Hey, we had this mission. Did we accomplish it? Did we have any problems? What did we do well? What did we not do well? So that we can improve later. That really helps in a lot of ways, in a lot of industries or situations, if you just talk about what you’re going to do, plan it out, and make sure everybody understands. When you plan it, if you have the right people involved, they can come up with solutions to problems that you see in planning. They may identify a problem that you can avoid in the planning stage instead of running across it in the execution stage. So that planning, briefing, executing, debriefing is a really useful thing to have. Something that can be transposed to any other industry as well, in terms of really thinking through the planning.

I think your point around voluntary reporting is huge, because having been in aviation, you hear about things that people would rather not talk about. I fell asleep, things of that nature. But if you don’t know about it, you can’t do anything about it, because unless the plane crashed, you would have no knowledge that both pilots fell asleep unless they went off course dramatically. Chances are nothing’s going to happen because they’re on autopilot and it’s pre-programmed and all good. But if you know something’s happening, you can start understanding what the conditions are that could be driving it.

Right. Absolutely.

Excellent. Well, Marty, thank you so much for joining me today and for sharing your story. Pretty rich, interesting, and thought-provoking story with really good examples. Thank you.

Happy to be here.

Thank you for listening to The Safety Guru on C-suite Radio. Leave a legacy. Distinguish yourself from the past. Grow your success. Capture the hearts and minds of your teams. Elevate your safety. Like every successful athlete, top leaders continuously invest in their safety leadership with an expert coach to boost safety performance. Begin your journey at execsafetycoach.com. Come back in two weeks for the next episode with your host, Eric Michrowski. This podcast is powered by Propulo Consulting.  

The Safety Guru with Eric Michrowski

More Episodes: https://thesafetyculture.guru/

C-Suite Radio: https://c-suitenetwork.com/radio/shows/the-safety-guru/

Powered By Propulo Consulting: https://propulo.com/

Eric Michrowski: https://ericmichrowski.com

ABOUT THE GUEST

Marty Ohme is an employee-owner at A-P-T Research, where he works as a System Safety Engineer. This follows a U.S. Navy career as a helicopter pilot, Air Boss aboard USS TRENTON, and program manager at what is now Naval Safety Command, among other assignments. He uses his uncommon perspective as both engineer and operator to support the development of aerospace systems and mentor young engineers. Marty holds a Bachelor of Science from the United States Naval Academy and a Master of Aeronautical Science from Embry-Riddle Aeronautical University. He may be reached through LinkedIn.

For more information: https://www.apt-research.com/

RELATED EPISODE


EXECUTIVE SAFETY COACHING

Like every successful athlete, top leaders continuously invest in their Safety Leadership with an expert coach to boost safety performance.

Safety Leadership coaching has been limited, expensive, and exclusive for too long.

As part of Propulo Consulting’s subscription-based executive membership, our coaching partnership is tailored for top business executives that are motivated to improve safety leadership and commitment.
Unlock your full potential with the only Executive Safety Coaching for Ops & HSE leaders available on the market.

Explore your journey with Executive Safety Coaching at https://www.execsafetycoach.com.

Cracking the Code: The Human Factors Behind Organizational Failures with Martin Anderson


LISTEN TO THE EPISODE: 

ABOUT THE EPISODE

You don’t want to miss our latest episode of ‘Cracking the Code: The Human Factors Behind Organizational Failures’ on The Safety Guru. Join us as Martin Anderson, a renowned expert on human factors and performance, shares his valuable insights and examples about the human factors behind organizational failures. Learn how to effectively and constructively embed lessons learned in your organization.

READ THIS EPISODE

Real leaders leave a legacy. They capture the hearts and minds of their teams. Their origin story puts the safety and well-being of their people first. Great companies ubiquitously have safe yet productive operations. For those companies, safety is an investment, not a cost, for the C suite. It’s a real topic of daily focus. This is The Safety Guru with your host, Eric Michrowski. A globally recognized ops and safety guru, public speaker, and author. Are you ready to leave a safety legacy? Your legacy success story begins now.

Hi, and welcome to The Safety Guru. Today I’m very excited to have with me Martin Anderson, who’s a human factors expert. We’re going to have a really interesting series of topics of conversation today. He’s got a deep background in human factors across oil and gas regulatory environments. His passion is really to understand how people perform in complex systems and also, ultimately, why organizations fail. So, Martin, welcome to the show. Really excited to have you with me. Let’s get started with a bit of an introduction.

Yeah, thank you very much, Eric, and certainly, thank you for having me on the show. It’s a real privilege to be invited here. Yeah, so in terms of my background, I started off with a psychology degree, and then I did a master’s in human factors. And after a few years of work experience, I followed that up with a Master’s in Process Safety and Loss Prevention. I’ve been a human factors specialist for over 30 years now. I’ve worked for a couple of boutique consultancies. I’ve been a regulator working as a specialist inspector in human factors for the UK Health and Safety Executive. I spent a few years as a human factors manager in an oil and gas company. I spent a lot of time assessing existing installations but also had input into the design of new facilities, working on 40, 50-billion-dollar mega projects. And over that time, I visited over 150 different oil, gas, and chemical facilities, both onshore and offshore, which gave me quite an insight into how some of these major organizations operate. And one of the reasons I created the website humanfactors101.com, was to share some of those insights. The other thing I’d like to talk about is going back 30 years, right to the start of my career.

I read a document which was called Organizing for Safety. It was published by the UK Health and Safety Executive in 1993. There’s a quote from that document I would like to read out because it had a huge impact on me at that point. It goes like this, different organizations doing similar work are known to have different safety records, and certain specific factors in the organization are related to safety. So, if we unpack that quote, it really contains two statements. First of all, the different companies doing the same things have got different safety records. And secondly, perhaps more importantly, there are specific factors that could explain this difference in safety performance. And I thought this was amazing. I thought if these factors could be identified and managed, then this safety could be massively improved. And over the next 30 years or so, one disaster at a time, these organizational factors have revealed themselves in major incidents, which I guess we’ll come to in a moment.

I think that’s a great topic to get into. So why do organizations fail? Because I think when we had the original conversations, I was fascinated by some of your connections between multiple different industries and common themes that were across all of them.

Yeah, sure. What might be helpful, first of all, because you introduced me as a human factors specialist, is to briefly define what we mean by human factors, and then we’ll go into looking at some of the organizational incidents, if that’s okay. Sure. For me, human factors is composed of three main things. We’re really looking at, first of all, what people are being asked to do. That’s the work they’re doing. Secondly, who is doing it? This is all about the people. And thirdly, where are they actually working? That’s the organization. So ideally, all three of these aspects need to be considered: the work, the people, and the organization. But my experience is that companies tend to focus on just one or two of these, usually the people one. Within the UK HSE, our team defined human factors as a set of 10 topics, which has become widely known as the top 10, used by industry, consultants, and regulators worldwide. Because prior to that, we would turn up to do an inspection and say, we’re here to inspect your human factors. And they were like, I don’t know what you mean. How do we prepare for that?

Whom do you want to speak to? What do you want to go and look at? So, after creating that top 10, we were able to say, the agenda for the inspection is that we want to come and look at how you manage fatigue. We want to come and look at your supervision arrangements or your competency assurance system. So, this helped to operationalize human factors. So, the other description, really, of human factors. A lot of people come to human factors through human error. They hear about human error. But if we identify human error, we need to understand how and why it occurred and not simply blame people. Are we setting people up to succeed? Are we setting them up to fail? Are we providing systems, equipment, and an environment that supports people to do the work that we’re asking them to do? And to introduce, as we move towards talking about organizational failures, I’d like to read a quote from Professor James Reason, who is a psychologist at the University of Manchester. And this quote is about 25 years old, but it’s still one of my favorites. And Reason said that rather than being the main instigators of an accident, operators tend to be the inheritors of system defects created by poor design, incorrect installation, faulty maintenance, and bad management decisions.

Their part is usually that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking. And I think that’s a really good introduction to our discussion on organizational failures.

So, let’s go there because we had a really interesting conversation on organizational failures and some of the common themes. So, what are some of the common themes, and why do organizations fail?

Exactly. When you say, why do organizations fail? Let’s just think about a few of those from different industries, because these organizational disasters have occurred with the NASA space shuttles, the Herald of Free Enterprise ferry disaster, Shenandoah, the King’s Cross fire, Piper Alpha, Caterpillar, Texas City, Buncefield, Deepwater Horizon, the Condo, lots of different rail incidents around the world, and several so-called friendly-fire events. And there have also been organizational disasters in sectors such as healthcare and finance. In the UK, these include inadequate care during children’s heart surgery at the Bristol Royal Infirmary over a 10-year period. And, of course, most listeners will be familiar with the so-called rogue trader who caused the collapse of Barings Bank. So, there have been so many disasters in so many different industries. And I know when we had a conversation earlier, what we were considering was that, okay, they’re all in different industries, but there are lots of common themes that we could pull out of those, from space shuttles to Barings Bank, for instance.

So, what are some of the themes? Because I think the part that really caught my attention is that you’ve done an activity where you took the facts from a different event and masked them. Tell me a little bit about that story, in terms of how you masked the facts from an existing event and people thought it was something different.

Yeah. So, the example there was that… I don’t know if listeners are familiar with the Nimrod disaster. This goes back to 2006. Nimrod was a reconnaissance aircraft on a routine mission over Afghanistan, and shortly after air-to-air refueling, there was a fire which led to the loss of the aircraft and, sadly, the loss of the 14 service personnel. I was asked to get involved and advise that investigation. As I started to read some of the initial information from that investigation, I started to think, this sounds just like another incident I’m really familiar with, which was one of the shuttle incidents, the Columbia incident. So I put a presentation together, and on one side of the slide, I put the information from the Nimrod incident, and on the right-hand side of the slide, I put information from the Columbia incident. Then I went through several of the issues that were involved, and I produced this PowerPoint presentation, and I mixed up the left and right sides, and I didn’t say which was which. And when we showed it to the investigation team, they couldn’t determine which information came from the incident they were investigating, the Nimrod incident, and which information came from the shuttle Columbia incident many years previously.

It just showed you that with two very different incidents in different industries, different locations, and different people, the organizational issues were almost identical. That was quite powerful, the fact that people couldn’t tell the difference between the facts from one and the facts from the other, because these causes just overlap so much. When you look at the very detailed technical level, there are differences between these events. But when you really start looking at the deeper or broader organizational issues, there are so many similarities.

What are some of the themes in general that you’ve looked at? You mentioned Barings Bank, which sounds very different from Piper Alpha. What are some of the common themes?

It does. You think, what has the failure of a 100-year-old bank got to do with the failure of an oil refinery or an offshore oil platform or any of the other incidents that we’ve spoken about? But people and organizations fail in very similar ways. The findings from these disasters are getting quite repetitive because you’re seeing the same things over and over. When you look at all of these incidents and pull out some of the main themes, what are the things that we’re seeing? The important thing is that we can go and look for these in an existing organization. You see things like a lot of outsourcing to contractors without proper oversight. In the nuclear industry, we call that not having intelligent customer capability, because they don’t know what the contractors are doing. They can’t explain what the contractors are doing. Then you’ve got inappropriate targets or priorities or pressures, because in almost all of these cases, there were significant production pressures, whatever production means for your organization. Another key issue that you see almost every time is a failure to manage organizational change. And by that, I mean a failure to consider the impact of that organizational change on safety.

So, a lot of organizations are going through almost like a tsunami of changes and not really considering how that impacts how they manage safety or not considering that each of those separate changes has a cumulative effect which is more powerful than the individual changes. You also see a lot of assumptions that things are safe. So even if you have evidence to the contrary, assuming that everything is safe, rather than going and looking for information, rather than challenging, or rather than having a questioning attitude, organizations are pretty bad at looking for bad news or responding to bad news, not wanting to hear bad news. So in almost all of the incidents that we’ve spoken about, it wasn’t a complete surprise to everybody in the organization. There were people in the organization that knew things were going wrong, that they were getting close to the boundaries of safety, but they couldn’t either get that information to be heard by the right people, or people didn’t react or respond to that. So it’s really interesting when you look, and you read the detailed investigation reports, and there are always people that knew that things were going wrong. So that information is available in the organization.

And I think that’s a good thing because that means that, hey, this is good. We can proactively do something about this. We can go and look for some of these things. So the things that I mentioned there, and there are a lot more, Eric, that we could talk about. There are lots of organizational issues we could proactively go and look for because these incidents are devastating for the people involved, for the organizations involved, but they’re a free lesson for everybody else. Sure.

If you choose to learn from them and if you choose to see the analogy between a space shuttle, Nimrod, and Barings Bank, and whatever industry you’re in.

Yeah, exactly. Because you have to go looking for those issues, for those factors, in your organization. So, there are two things, or maybe three things, you mentioned there. You need to go looking at other incidents. You need to take the lessons from those. You need to go and look for them in your organization, and you need to act on that. This failure to learn from other industries, for me, is perhaps the greatest organizational failure of all. Organizations think, well, it doesn’t apply to me because that was in a children’s hospital, or that was a bank, or that was an offshore platform. What’s that got to do with me in my industry? Failure to learn those lessons is the biggest failure, because you can get away from the technical specifics of the incident and just look at the deeper organizational issues. But who in organizations is doing this, Eric? Which person, which role, which part of the organization goes looking for these events and draws the lessons and then goes and challenges their own organization? It’s actually quite difficult to do that. It’s like the problem with safety more generally, isn’t it? You can go into a boardroom and pitch a new product to a new market, and people give you money, and they’ll listen to you.

But if you go in and pitch that you want to spend money to protect and safeguard the installation against things that may or may not happen in the future is a much harder sell. It’s a problem for safety more generally.

One of the things I know we talked about was around what you call organizational learning disability, so people are good at investigating, but not true learning, and not embedding the change. I’ve seen this many times where people learn the same lesson over and over.

And that’s it. When we have these large investigations into these disasters, there’s always this proclamation that this must never happen again, and we need to learn the lessons. And then something else happens a year or two later in a different industry, but the same issues. So, you talked about a learning disability. Why do organizations fail to learn? Given that, there’s this wealth of information out there available as to why organizations fail. For me, I think there are two issues. I think there’s this failure to learn from other industries. All industries think they’re unique. They don’t think that they can learn because it’s a totally different industry. It’s nothing to do with them. But they all employ the same kinds of people. There aren’t different people working in different industries. They all employ the same people. They organize themselves in very similar ways, and they have the same targets and priorities and so on. So, first of all, that assumption doesn’t apply to me. It’s a different sector. So, failure to learn from other industries, we’ve spoken about, but failure to learn from your own investigations. And we see this in major incidents like NASA failing to learn from the previous incidents it had.

So, you have the Mars orbital and failure to learn from that. You have Challenger, then Columbia, and so on. So, what we find is that there’s a lot of sharing but not enough learning. So, after an incident, then there’s a safety bulletin put together, it goes on the intranet, there might be a bit of a roll, and so on. But you’re not, actually… If you’re not changing something, you’re not learning. So, something in the organization has to change for a lesson to be embedded. And you need to go back and confirm that you’ve changed the right thing. So, you can’t just change something and assume everything will be okay. So if you’re not changing anything structurally in the organization or in one of the systems or one of the processes, then you’re not embedding the learning. So that’s the first thing is this failure to embed the lessons that you come up with. I think the other problem is that investment derogations are not always of great quality. They’re not identifying the right issues. They may not be getting to the root causes. They might focus on human error. They might focus on blame. And Investigations that are done by external bodies generally are starting to look at these organizational issues.

But investigations that are done internally by the organizations themselves into their own events rarely confront organizational failures. It’s very challenging for the investigation team to raise issues that suggest there are failures at the leadership level. It’s challenging for the investigation team, and it’s challenging for the leadership to receive that information. So quite often, the recommendations and the actions are all aimed at employees, a bit like a lot of safety initiatives, behavioral safety, safety culture, and so on, are quite often aimed at the front-line workforce rather than the whole organization. We often see that in investigations as well if they’re not challenging these organizational issues, whether that’s because of a lack of understanding or whether or not that’s not accepted by senior leadership. Because people doing these investigations aren’t always competent. And I mean that in the nicest possible way. They don’t have the right experience, or they’re not given enough time, or it’s seen as a development opportunity. So, investigations need to have the right people doing them, asking the right questions in order to get the right recommendations out of them. Because if the process isn’t right, you’re not going to get the right recommendations coming out of it.

So, what are you going to learn because you haven’t got to the real issues? So yeah, I think there are two issues there, failure to learn from other industries, but also failure to learn from your own investigations. And we can talk about some tips that maybe could help organizations get to some of those organizational issues when they’re doing investigations. Absolutely. And also, it’d be useful to talk about how you can go and look for some of these organizational issues before you actually have an incident, which is what we want to get to. We want to have it, we want to learn, but we don’t want to have incidents in order to be able to learn. So why can’t we learn proactively without having an incident in the first place?

This episode of The Safety Guru podcast is brought to you by Propulo Consulting, the leading safety, and safety to your advisory firm. Whether you are looking to assess your safety culture, develop strategies to level up your safety performance, introduce human performance capabilities, reenergize your BBS program, enhance supervisory safety capabilities, or introduce unique safety leadership training and talent solutions, Propulo has you covered. Visit us at propulo.com.

Let’s start first in terms of how you can identify some of these organizational factors through the investigation process.

Through that investigation process, what you’re really trying to do to get to the organizational issues is you’re trying to zoom out from the detail, taking a helicopter view. You’re zooming out and looking down, trying to see this bigger picture. So, for example, most people who’ve done an investigation would have put together a timeline. So, a list of what happened to who or what equipment and when and draw a timeline and start to map what happened. But the problem is that a lot of those timelines start on the day of the event. And what I’d propose is that your timeline goes back to weeks, months, or even years before the event occurred. You’re trying to identify what might have changed in the organization in that period in terms of changes to equipment, processes, people, priorities, the direction the company was going, and so on. So, your timeline needs to go way back because of the organizational issues that we see in all of these events. These events didn’t just occur overnight. As Reason said in that quote, there was trouble brewing for weeks, months, and years beforehand. So, there are indications in the organization. So, your timeline needs to go back and look for those issues.

That automatically forces you to think not just about the actual incident but more widely about your organization. The other thing you can do really is review previous incidents that have occurred or other sources of data, maybe looking at audits or regulatory inspections, or staff surveys. You’re trying to identify common threads and trends, and you’re trying to identify how long these conditions have existed and how extensive they are across the company. Why did this event surprise us? Because, as I say, the information is normally available in the organization. So why did this come as a surprise? You’re looking not just at individuals, but you should be looking at systems. You should be looking at processes, and your mindset as an investigator should be thinking about what were the organizational conditions. What was the context in the organization that set people up to fail? So that going back way before the incident is quite a helpful change of mindset for people, rather than just going, okay, what happened on this day? And thinking about how you responded to the incident. It’s quite a useful tool to help you think more about organizational issues.

And how broad do you go? Because when you start going back to Zoom out years before decisions, changes in leadership, changes in investment, you can open up a very big can of worms. And I see if it’s Deep-Water Horizon, Piper Alpha, that there’s a need to go deeper. But how deep and how wide do you cast the net? Because I think it’s incredibly important like you said. Otherwise, you just limit to that person that made a mistake as opposed to start understanding what’s changed in the environment, the context. Sure.

It’s a lot easier in those big disasters to do that because they’ll have a huge team of people in these investigations. Some of them have taken five, six, eight years. They have the time and the resource. In an organization, you generally don’t have that much time to do an investigation. Quite often, the people doing it have other jobs, so they want to get back to the day job. So, it’s one of the reasons why the investigations are quite compressed in terms of time because most people are not full-time investigators. So, I think what you can do is it depends on the incident that you’ve had as to how far you want to go back. But I think looking at whether or not those conditions exist in other facilities or workplaces is a useful step that can really help you identify whether this is unique to this scenario or is this a systemic issue that we have in our organization. Organization. I think going back and looking at what might be key issues, so if you’ve had a merger or an acquisition or a major change in your direction or a new product or you’ve opened a new facility, those major organizational changes, if you had a downsizing exercise two years ago and since then there’s obviously been issues in terms of staffing and resources, then those are the key things you need to need to be mapping out.

As you say, you can’t map everything, but you’re looking for key significant changes or events or shifts in priorities or policies that might have occurred in the previous years. And I guess the time and effort that you spend in that partly depends on the consequences or the potential consequences of the event that you’re looking at.

But there’s still an element of you can focus the conversations like you just said in terms of what are the major shifts that happen as opposed to unearthing every piece. You’re still rewinding the movie further back. The other part I think is interesting to explore is what you talked about in terms of how we know and explore some of these organizational factors before something happens. And you mentioned that in all the incidents, you talked about somebody who knew something was up before. So how do we identify these themes before a major event?

Yeah, you’re right there, Eric. I think there’s always information available, and it’s just maybe not getting to the right people, or people aren’t taking action on it. So, these warning signs, red flags, whatever you want to call them, they’re unnoticed, they’re ignored or not getting to the right person because, as we’ve said, these incidents incubate over a long period of time. Those warnings accumulate. And that’s a great thing because that means that we have an opportunity to go and look for them and to find them. So, if you start looking, first of all, you should have a means for people to be able to raise those concerns in an independent, confidential way, some reporting system so that those concerns are coming to you. So that’s like one mechanism is some industries are much better than others at having confidential reporting systems where people can safely report a near miss or an error or challenge or frustration that they’re having. And that gives the organization an opportunity to do something about it. You’ve got to have the right culture for that, of course, because if your previous investigations blame individuals, then people are not going to come forward because they’ve seen what’s happened to other people.

So, they’re going to keep quiet, and these things get brushed under the carpet. So, it does depend on the culture that you’ve got. But having an independent, confidential way for people to raise those issues can be quite useful. So that allows issues to come to you. But you also need to go looking for these issues as well.

Yeah, I think.

That’s important. Organizations have had quite a few events. So, do they investigate them individually, or do they try and join the dots between different incidents? They might appear unrelated, but are they? Are you starting to accept things, either conditions or behaviors, that you wouldn’t have accepted a few years ago? People’s risk acceptance might change over time. Are you contracting more out? And do you really understand the technical work that those contractors are doing? Can you explain it? Can you challenge it if necessary? Are you having lots of budget cuts? The conversation is always around targets, budget challenges, focus on efficiencies, put productivity initiatives, and so on is a really good red flag. Are you starting to focus more on temporary fixes? Are you patching equipment? Are you stretching the life of equipment rather than investing or permanent solutions? Are you may be reacting to things rather than predicting and planning ahead? Now, organizations do lots of safety-related activities, and previous podcasts have talked about safety work and the work of safety. But if organizations start to see the completion of safety activities as being more important as to whether they’re effective, that’s quite often a big warning sign as well.

Companies are doing risk assessments, investigations, audits, and writing a safety case if that applies to your industry. And if the completion of that, if getting that done is more important than using it as a learning exercise and then whether it’s effective, that’s also a bit of a trigger for the organization. So, there are these things you can go looking for. I think one of the biggest things for me is because there are lots of questions we could ask, is that if you assume that your assessment of these major risks is incorrect and go proactively seeking information to continuously revise your assessment, you’re more likely to pick up these issues. Whereas if you assume that everything’s okay until it isn’t, it is too late at that point. Organizations are getting better in their maturity in their approach to investigations. But that maturity hasn’t carried over to being proactive in looking for issues. We’re getting better and better investigations, but we don’t want to have incidents to investigate. In organizations, there are tools or techniques. There are ways you can go and proactively look in your organization to find these issues. The maturity of investigations just hasn’t translated over to proactively going and looking for things.

There are lots of reasons why that might be the case.

I think it’s an interesting point because I think if you’ve got… The other element that comes to mind is if you’ve got an incident that happened, it’s clear who owns the investigation. But who owns this proactive view? Because in some organizations, it could be an audit, but an audit is not always necessarily equipped to do it. I know that in one organization, an audit made an audit in safety, and their focus in terms of driving safety improvement was to find ways to get employees back to the office faster, which has no impact on safety. But from a financial standpoint, if you don’t have expertise in what safety means, that might sound like a viable solution to reduce a rate, right? It could be your safety organization, but that safety organization needs to have the right visibility. It could be some form of a red team that’s constantly looking for challenging pieces. What have you seen be most effective in terms of where this resides and the practice around kicking the tire? Is that what you’ve got?

I think part of the issue there that I alluded to earlier on, Eric, is that I just don’t think this is a formal role within organizations. The departments that you mentioned quite often don’t have the expertise, experience, or time to be able to go and look for these issues proactively. So, the audits, investigations, they’re all quite constrained in their agenda, and so on. So, I don’t think there is a good example that I know of a function in an organization that is proactively going and looking at these areas. You do have risk committees and all these audit committees, whether or not you’re looking in the financial sector or whether or not you’re looking in oil and gas. I think there are pieces of the puzzle held by different people within an organization that can contribute to this review that we’re talking about. But I don’t think there’s really good practice out there of how that’s been pulled together into a cohesive, proactive, challenging go look to see whether or not we have any of these issues, particularly when you’re trying to learn from other industries. So if there’s been a big incident in one industry and there’s a big report that’s come out, and there are lessons and recommendations in that, organizations in that industry might look at that and might go and challenge themselves.

But that’s relatively short-lived, I think. If you ask people in organizations, what are the main failures in Piper Alpha? What were the main failures of Bearings Bank? What are the main failures in the shuttle incidents? A lot of people, including safety people, just can’t tell you what those organizational learnings would be. So not only are they not going looking for these things, but quite often, that experience, that understanding is just not available, Eric. But I think it’s a big gap. I think there’s a role for human factors, people, and systems people to be able to fulfill that role. But it’s very difficult for an organization to fund a position whose role it is, is to go looking for things that may or may not happen or that might be very unlikely to happen. In these times, it’s quite challenging to resource that position in an organization.

A couple of things that come to mind because I’ve seen some organizations do quite well at learning through case studies of others. So as a senior leadership team looking at something like the 737 MAX and what transpired around the box, looking at the Challenger, looking at Texas City, or looking at Deepwater Horizon, and using these as case studies to say, how could this happen here? And driving that reflection because then you’re starting to force this learning out of the industry and push that it could potentially happen here. And the other piece I’ve seen, and I think this is a… You talked about the human factors piece, I’ve seen some organizations that proactively, or maybe it’s every few years, run a safety culture assessment as an example. Now, my challenge with a lot of safety culture assessments is that people will do a survey which will give you no insights into what you’re talking about. But when I’m thinking about a robust one, you’re looking at surveying and speaking to a lot of employees to look about what could go wrong. And you also do a review of system factors. You look at a lot of the practices, the processes, the changes, the things that have occurred over the past few years.

So essentially, you’re kicking the tires on a regular basis at the organization. But what I’m talking about is it’s closer to really kicking the tires, but looking at the system components as well, even though the analysis, because the survey won’t be good enough.

I think you’re right. Organizations are doing surveys; they’re running focus groups. Some leaders will be doing walk-arounds. They’re going to facilities and talk to their staff. If prepared for that, that can be really, really helpful. They’re if you prepare them in terms of what they should ask, that can work quite well. I think these are all activities, and these are all tools that we have available, but I don’t think typically they are aimed at trying to pull out these deeper organizational issues, or maybe they’re not. The different sources of information maybe are not combined to give that overall view. Occasionally, organizations will get an independent organization in to do that review for them, which can be quite interesting. But again, that takes you back to the issue of you having to learn from those recommendations as well. And we have seen quite a few cases where independent contractors who’ve been asked to come in and review an organization quite often temper their findings because they want to get continual employment from that company. And we’ve seen that in some of the major financial events. But Bearings Bank is a good example where the auditors did not see issues, or when they saw issues, were not communicating them to the board because they didn’t want to alert the board to some of the issues that were there, which contributed to the demise of the bank.

So, there were lots of barriers and structural issues that might prevent some of the tools you suggested from working really effectively. But there are tools out there that can be used. We’re making general comments about what we’re seeing in the industry. It’s not to say that there are some organizations that are doing this well. I think it’d be really good to unpack those lessons in learning and communicate those more widely because there are pockets of good practice. I’m not saying no one’s doing anything at all here. There are pockets out there. We need to understand what they are, what is effective, and help to share those more widely for other organizations that maybe are not doing this proactively.

That’s often the tricky part because once something goes wrong, it makes front page news. The 37 MAX makes front page news, multiple investigations, lots of insights, lots of learnings. But does that mean that Airbus, on the other hand, that hasn’t had such a failure, is doing all of this proactively, you don’t necessarily know because they’re generally quieter about it. So, it could actually just be pure luck or actually good practices. And that’s the tricky part.

It could, but it could also be… If you look at an organization that’s had a few incidents or a couple of disasters, people might think, oh, well, actually X, Y, and Z is a bad company. It’s because of them. It’s the fundamental attribution error. If someone is driving poorly, you think it’s because they’re a bad driver. Whereas if you do something, if you cut someone up and so on, then you think, well, there’s all these other reasons why I did that. So, we tend to attribute failures to people because it’s an issue with them not thinking about all the contextual factors that influence behavior. So maybe that fundamental attribution error is something that’s important when we’re looking at these disasters because it’s easy to say, well, they’re just a bad company, and that won’t happen to us. We’re different. We employ different people. We’ve got all these processes and systems, and it won’t happen to us. Risk blindness is an issue for us as well.

I think if you touch briefly on Bearings Bank, the same symptoms that happen in Bearings Bank would probably have happened in many other locations because it’s not that hard to have a rogue trader. The difference there was the size of that rogue trader, but they’re present everywhere. Nab in Australia had three rogue traders on the FX side roughly around the same time. And there are lots of other examples that don’t get reported or get reported on the hundreds page of the newspaper if you really seek to look at them because it’s never a cause for success, but they happen a lot more often than we think.

I think they do. I think you’re right that we pick these examples, and we talk about these big disasters, partly because there’s so much information available on them. And it does become a little bit unfair that we keep going back to the same disasters, but they’re the ones on which we have much information. They’re the ones who’ve been investigated to the end of the degree. But you’re right, there are lots of other failures going on. Not all of them become so high profile. But we do know that lots of other organizations maybe have similar events, but they just, like you say, they don’t make the press for whatever reason, and they don’t become case studies on training courses for the next 30 years. But you’re right. You can pick Bearings Bank, and there would have been several of the banks with the same issues at the same time because they had the same processes or didn’t have those processes in place as Bearings Bank, but it just didn’t play out in the same way. As you know, maybe they had a huge loss, but it wasn’t enough to destroy the bank, and therefore it’s less visible to everybody else.

But you’re right, we’re picking a few case studies here because these are the ones, we have detail on. But it’s not to say this isn’t occurring much more widely than that.

So, Martin, thank you very much for joining me. I think a really interesting series of topics, the link that a lot of organizations relation feels for the same reasons. I think what’s really big takeaway is how do we learn better from investigations and then how do we learn proactively before anything ever occurs? How do we have that questioning attitude on an ongoing basis because it’s too easy to close your eyes and something and think, No, it’s okay? We’re okay. And really, how do you drive that questioning attitude within the business? So, Martin, these are really interesting topics. Obviously, your website, human factors101.com is an excellent source for insights. Is that the best way if somebody wants to reach you to get more insights?

Yes, certainly. I write quite a lot on that website, so you can go there and have a look. There’s a lot more information on there, or you can follow me on LinkedIn. If you search for Human Factors 101, you’ll find me there on LinkedIn. Please get in touch.

Excellent.

Thank you for listening to the Safety Guru on C-suite Radio. Leave a legacy, distinguish yourself from the pack, grow your success, capture the hearts and minds of your teams, and elevate your safety. Like every successful athlete, top leaders continuously invest in their safety leadership with an expert coach to boost safety performance. Begin your journey at execsafetycoach.com. Come back in two weeks for the next episode with your host, Eric Michrowski. This podcast is powered by Propulo Consulting.

The Safety Guru with Eric Michrowski

More Episodes: https://thesafetyculture.guru/

C-Suite Radio: https://c-suitenetwork.com/radio/shows/the-safety-guru/

Powered By Propulo Consulting: https://propulo.com/

Eric Michrowski: https://ericmichrowski.com

ABOUT THE GUEST

Martin Anderson has 30 years of experience in addressing human performance issues in complex organizations. Before joining an oil and gas company in Australia as Manager of Human Factors, he played a key role in developing human factors within the UK Health & Safety Executive (HSE), leading interventions on over 150 of the UK’s most complex major hazard facilities, both onshore and offshore. He has particular interests in organizational failures, safety leadership, and investigations. Martin has contributed to the strategic direction of international associations and co-authored international guidance on a range of human factors topics.

For more information: www.humanfactors101.com.

STAY CONNECTED

RELATED EPISODE

EXECUTIVE SAFETY COACHING

Like every successful athlete, top leaders continuously invest in their Safety Leadership with an expert coach to boost safety performance.

Safety Leadership coaching has been limited, expensive, and exclusive for too long.

As part of Propulo Consulting’s subscription-based executive membership, our coaching partnership is tailored for top business executives that are motivated to improve safety leadership and commitment.
Unlock your full potential with the only Executive Safety Coaching for Ops & HSE leaders available on the market.
Explore your journey with Executive Safety Coaching at https://www.execsafetycoach.com.
Executive Safety Coaching_Propulo

From Fighter Pilot to Airline Pilot: Lessons in Human Performance with Brandon Williams

The Safety Guru with Eric Michrowski: Episode 26 - From Fighter Pilot to Airline pilot: Lessons in Human Performance with Brandon Williams

LISTEN TO THE EPISODE: 

ABOUT THE EPISODE

Safety is a cornerstone of the airline and aviation industry. Our guest Brandon Williams, founder of LeadTac, adjunct professor, F-15 fighter and airline pilot, started his career in the U.S. Air Force. Brandon highlights the importance of considering human factors and mitigating human error using a systems based approach. Listen to learn how to implement Human Factors Leadership and peer accountability to reduce human errors and improve safety performance.

To learn about Human Performance: https://www.propulo.com/hop/

READ THIS EPISODE

The Safety Guru Ep 26 – From Fighter Pilot to Airline Pilot Lessons in Human Performance with Brandon Williams

Real leaders leave a legacy. They capture the hearts and minds of their teams. Their origin story puts the safety and well-being of their people first, great companies ubiquitously have safe yet productive operations. But those companies’ safety is an investment, not a cost for the C-suite. It’s a real topic of daily focus. This is the safety guru with your host, Eric Michrowski, a globally recognized ops and safety guru, public speaker and author. Are you ready to leave a safety legacy? your legacy success story begins now.

Hi, and welcome to the safety guru. today. I’m very excited to have with me, Brandon Williams. Brandon is a results-oriented leader, a business speaker. And the reason we’re bringing him on the show today is he’s got some amazing experience. Back in the day he was in the US Air Force, f 15. fighter pilot has deep expertise in human factors, in fact, worked in the Air Force around safety is also a joint Professor on the topic of human factors, as well as a few others that are related to safety. So, Brandon, welcome to the show. Really happy to have you with me today.

Thanks, Eric. I appreciate it and humbled to be invited to be on the podcast. And always happy to talk about safety, human factors or leadership or any all the above. So, thank you. Excellent. So maybe why don’t you start in a little bit about some of your background as a fighter pilot, but also how it evolved into flight safety airline pilot in the passion you have for it? And particularly for this topic that’s so critical around human factors and understanding human error.

Absolutely. Well, I went to the United States Air Force Academy. So that’s where I got ever since I was a little boy, I want to be a pilot, I think. So, I come from the Atlanta, Georgia area, I still live here. Now my wife and two small children, by going to the Air Force Academy graduating out of there, went on to Air Force pilot training. And that was probably my first exposure to, you know, what, what we call hrs high reliability organizations. So, getting into that world, and that’s where it first started, I would say, whether you want to talk about my aviation experience, or flying or safety or anything like that, it really started there.

Go on Air Force pilot training went on to fly at 15 ease, like you said, I served 12 years active duty in that time, in addition to being a pilot, also was involved in flight safety. So, I went to the Air Force flight safety School, which qualified me to be a what they call a safety officer. So, every unit, every organization in the flying organization in the Air Force, has a safety officer whose job it is to, you know, maintain and monitor run safety programs, you’re qualified to do safety investigations or mishaps, so your part of a Safety Board, and you come up with recommendations and, and do all that. So that was really a fun experience getting to do that, and seeing a whole other side of that.

But also served in several leadership roles in my time in the military. You know, common misconception, I think is as fun as it would be just to fly airplanes. And that’s it, you know, military organizations like anything else. So, we still have budgets and programs and people to manage. And, you know, you name it all the no fun stuff, if you will. So, several leaders, several leadership roles, they’re leading people in organizations got out of active duty, like I said, after about 12 years, went into the Air Force reserves just part time. And at that time, I also was kind of at a crossroads of what I was going to do. Part of me wanted to go into the business world, start my own business, go into some kind of management, consulting, or even safety related and because I had that experience, turn, and the other part of me wanted to go be an airline pilot, and still start my own business. So that’s, that’s actually what I ended up doing. Kind of the best of both worlds, I guess.

So, I have been a pilot at a major airline for several years now. And also started I also got into actually management, consulting leadership development. Around that time, did that for about seven or eight years, still do that off and on involve workshops, keynote speaking, strategy, consulting, and then started my own business called lead tack, which is leading tactically, and that really involves taking the idea of human factors and a lot of those things we talked about, as in the air force training as a fighter pilot, how we operate in complex environments, and how that it’s kind of two sides to it. I go and I talk to businesses and companies and all different industries, just taking business leaders, how they lead from human factors perspective to how we can help them mitigate error in their teams, kind of taking those aspects of HR is higher. Lobby organizations and taking that to a business setting or any kind of team. And then also, I still stay in the human factor safety world. So, things we’re talking about, and how we establish, you know, these ideas of human factors, how we mitigate human error, all kinds of different stuff. I’m sure we’ll talk about some of it here. But it involves that too. And then for the last 10 years, I’ve also, as you said, I’ve also been an adjunct professor, where I’ve actually designed and built and I teach safety courses, human factors, courses, some other aviation courses and management courses. So, a lot of stuff going on. But you know, it’s awesome, because I think I’m the luckiest people in the world because I would get to wake up and kind of do, you know, a lot of stuff that I’ve always wanted to do. So that’s me and my background. And yeah, the Air Force gently set that up. And I mean, set the stage clearly for what I do now. That’s awesome. So, can you talk about human error? Can you share a little bit about that concept? Because I think when we first connect to the part that’s always impressed me is airline aviation has probably done the most leaps and bounds of any sector in understanding where human error is going to happen? And how do you reduce the risk of doing it? So, I started the airline industry as well, I got to see it firsthand. It’s a, it’s a very different mindset. So, talk a little bit about this concept of human error and how it transposes to businesses that often blame the individual as opposed to try to think about what’s the right thing? and air? We all make errors. We all make mistakes. Absolutely. You just set it there. Right? There are two errors human right. I mean, that’s what makes us human. Yeah, um, you know, a lot of times in modern society, and you hear people that want to fix human error by saying, well, we’ll take the human out of the process, put more technology into it, which don’t get me wrong, technology is definitely way to mitigate, Your Honor. Absolutely. However, you know, when you look at it from a human factor standpoint, and how you want to really reduce human error, the human is seen more as a as a variable that can actually affect change, for the better, if that makes sense by helping reduce human error. And there’s many different ways. That’s a simple way to put it. But that’s kind of my approach to what I call human factors, leadership. And like you said, the aviation world, I think, kind of, in a way led a lot of this. 
I mean, and I think the reason why that was the 1970s, early 80s, when the jumbo airliner was, you know, at its heyday, a lot of them were coming on, you would have an airline crash. And you think back to event, I just think anything about aviation, you know, the names of Tenerife, if you say that, you know exactly what that is referring to the major accident that happened there in the 70s. Yeah. And so that that accident actually is cited, a lot of times it’s kind of a water, and there was a few of those around that time, major aircraft accidents. And for those out there not familiar it was basically the world’s worst commercial airline disaster involve each aircraft colliding on a runway essentially, that’s for people that are 247. They couldn’t get bigger than that. Absolutely, but the astonished thing about that is there’s two things. So that is number one. Around that time, we realize our experts in aviation will realize Wait, guys, and we can’t afford and we cannot let you know, we can’t have a loss of life of 200, some 300, some 500 people, I mean, we got to stop what we’re doing. Something’s not right, because we’ve had aviation accidents since the beginning of aviation. Right. And the classic approach was like you were talking about the blame and train right, like, Well, clearly, the pilot made an error that was it, tell people not to do that, again, problem solved, right, go about your day. Well, around this time, we started realizing that’s not working. And we can’t afford to keep operating on this. And this is where the idea of, of human factors and how we mitigate human error from a systems-based approach Sure, really comes into play. And when I when I say systems based, I mean, instead of the blame and train approach, focusing on the one individual human error, as you know, and people in your world know, mishaps don’t just happen because of one decision, there is a chain of events that lead up to a mishap. So, a system-based approach is looking at the entire system. So, the or, and how that’s how we may define that. That’s the organization, the culture, the leadership, the resources, the and as far as the Human Factors part, the actual state of that human being. So, you’re talking psychological factors, fatigue train, I mean, there’s so many different things that go into that. And so how do we mitigate that? And so big picture, what my model does, and what human really does the study of human factors is, is looking at mitigating human error from a systems-based approach. So how do we put those stop gaps in the system? Because that because rarely, I mean, if ever in our society now, professional organizations, does anybody show up to work and speak.

I’m not going to bring a game today. I’m not going to I’m not going to do that just does it really happen? I mean, there’s sure there’s isolated cases, but that just typically doesn’t happen. So, when we talk about bad apples, or we talk about, you know, bad performers or bad actors, a lot of times that’s, that’s human error. And that’s not mitigate. And that’s any industry. It’s not just high reliability or high reliability organizations. Yep. It’s not just safe aviation. It’s not just the medical world, it’s, you know, any kind of business or any kind of team you lead. And like you said, the aviation world kind of led that because I think we were kind of forced to, because it gets a lot of attention when you crash an airplane, unfortunately, medical world, which they have caught up and they’re doing better, they’re still behind us a little bit. But, you know, hospital, sadly, I mean, you expect people to, unfortunately pass away in a hospital. So, it didn’t really get the attention that it deserved. And I think when medical kind of the health, health, health and medical world kind of caught up with that and said, hey, look at what, you know, the aviation rules, look at what the military and the aircraft carrier look at what they do look at nuclear power industry is another what they do, you know, why can we not take some of these in there, and they’re doing that now, they’ve been doing that for a while, but it’s getting there. But anyway, yeah, that the whole idea of human errors, like you said, and, you know, if you if you look up the definition of human error, it, it’s kind of one of those things, it’s like saying, how do you define leadership? Or how do you define culture?

You know, there’s so many studies on it, it’s probably one of the most, the best ways to describe it is it’s really an unintentional outcome based on human action, unintentional human action, even that one didn’t really capture it truly. But human factors in the idea of human factors leadership, what I do in the study of human factors is really looking at human error from a systems-based approach. That’s great. So, it gets into just culture, which is often linked in terms of themes. How do you create a just culture? What is it good jazz culture? And how do you start creating it?

Right, so adjust culture is really again, I talked about the blame and train approach to management or blame a train approach to mitigating human error, they just culture environment as a kind of the exact opposite of that. So, you instead of living in, in, in fear, a fear of retribution, fear of what can happen if we point out mistakes, or errors or gaps in the system, a just culture encourages that. And the whole idea behind that is because, sure, if you can’t identify those gaps, if we cover them up, or we don’t talk about them, or we don’t bring them up, well, then guess what I mean, bad things are going to happen. You know, you see this a lot in the business world still, because, you know, they may lose some money, but they’re not going to lose life, most likely, if these errors keep happening, and they, whether it’s, you know, your own self-preservation, you know, trying to protect loyalty, or loyalty to someone else, an organization to a team, you know, trying to just push through it, you know, you name it, you really don’t see this a lot. I mean, in organizations, businesses are so different their cultures, but you just don’t see this idea of just culture because that, again, I relate it back to flying in the air force, and a fighter squadron, you know, what I call it a just culture. In terms of, you know, there was this idea, this environment, that every time after we landed, every time after we flew a mission, we went in a room, we conducted what’s called an open, honest debrief. You know, like I talked about the difference between a debrief and investigation or difference between a just culture and a culture that looks at investigation. So again, you know, what is an investigation? You know, you’re trying to find blame, you’re trying to assign blame to someone, right? Well, just culture is just the opposite, that we’re not we’re not Nestle concerned about the blame. Yes, we want to fix it, we want to find the root cause. But we’re concerned about, you know, fixing the system, right? Again, going back to a system-based approach, how do we find those errors? How do we find those gaps in systems so that the team or whoever else does this next time doesn’t, doesn’t make the same mistakes, the same errors, the same ideas, and establish and adjust culture? Going back to that? You know, how do we do that? Well, there’s several ways I talked about when I work with my clients, but, you know, it’s like anything else? Where does most stuff always start any kind of change, especially when you have the word culture starts at the top? Right? So, when you have team leaders, leaders of an organization, you know, C suite types VPS, you know, you name it, and even especially, and I think even more importantly, informal leaders because I think a lot of times, they have more influence than to those formal leaders, for sure. You know, when those leader whoever is in those leadership positions, whatever in everybody’s leader, in some sense, like I was saying, when you step up, and you can admit your shortcomings and you can see that when you go and go in a room and you talk about what went wrong, when people see you take that, that feedback or when you admit your own errors, or you even more so when somebody you know, someone in your team, you know, has some missteps or whatever, when we don’t we there’s no retribution, but we say, Okay, let’s find out why this happened. And it’s not Eric’s fault. Let’s find out what was going on that day. 
So, we look at the environment, you know, maybe Eric, you know, wasn’t on the right team, maybe his teammates that were assigned to him were way too inexperienced for this, this job or this project? Maybe Eric wasn’t getting all the information he needed. You know, maybe Eric has some, you know, personally, she’s going at home affecting his his personal stat, you know, because that’s a huge, huge part of human factors is, you know, a lot of times what we miss, especially in the business world, that I found exactly opposite of what I experience in the military is we just show up and work with each other, and we have no idea what is going on in someone else’s life.

You know, outside of work, which, you know, not that you’re trying to pry, right, it’s getting to know people it’s getting and that goes back to other stuff that that we may or may not get into here later, is when you talk about mutual support and morale and everything like that, how do we establish that it’s really about getting, you know, how you drive this culture of mutual support, and getting to know everyone you work with peer accountability, all that kind of stuff, but it just culture really centers around, you got to see leaders that establish that you got to see leaders that are support that so someone brings something up, you know, we’re, we’re going to take that input, we’re going to fix the system, we’re not going to blame and train the person. And the other way, you know, when I when I put a slide up, last thing, I’ll talk about just culture, when I talk about it to a group, I’ll typically have a slide and it’ll say, three words on or three ideas. One is decentralized execution. One is pure accountability. And the other is that open honest debrief I talked about, you’ve got kind of arrows pointing to all of them. And the reason those are all important, I talked about the open honest, deeper, if that’s where we get our feedback loop, right. That’s where we get the where we did have our missteps, how we’re going to fix it. decentralized execution is leadership backed autonomy. So that trust you’re putting out as a leader, so people know that you have their back. Because we all know that one of the biggest motivating factors is autonomy. And then finally, pure accountability, which is the idea that it’s not the bad word accountability, where bosses are running accountability, but the idea that, you know, that mutual support idea that where I’m not going to do a bad job, not because, you know, I don’t want to look bad. I mean, that is one of the reasons but also, I don’t want to like Eric down, I don’t let the team down. If I don’t do my job or my role, then that looks bad on Eric. And I know, Eric, I know his family. I don’t want him to, to suffer for that. So, you know, he has these three main parts that goes into Joe’s culture, you know, a lot of things that go into that. But that’s kind of the big picture behind it, if you will, yeah. And I think you brought out a lot of really important points. And I want to double click on your peer accountability comment, I get that comes up a lot of conversations I have with the boardroom level. It takes a lot, though, to create the psychological safety for people to speak out. Like I remember the airline industry, pilots are comfortable raising issues. It’s like I fell asleep, we both fall asleep. And a raise those themes, so something can get done. A lot of businesses, I remember one organization, a mining organization, he had mining, you know, those huge trucks on the periphery of an open pit, and they would regularly tumble down regularly, maybe a couple times a year. Right. And it happened for several years, because nobody actually had the comfort to say, I blinked I fell asleep. Right, which is something that you’d have as a given in the airline industry. Is that comfort to share those things? Because otherwise nobody would know same as in that truck. Nobody knew what happened. There’s only one operator there. So how do you create that?

Well, you know, going back to the Well, first of all, again, I’ll go back to you’ve got to have leadership buy in on it. Okay, so you can’t just say, hey, from now on, when you point out stuff, I’m not going to you know, let’s point out to be open, honest. I mean, we see we see that cheap, right? I mean, you see that all the time, where, Hey, open door policy, you told me anything. And then next thing, magic as you, you know, you told him this wasn’t a good idea. So that’s the first thing is Who do you have, you know, in certain roles that are going to establish and allow that autonomy, right? Right. Because, you know, allowing that autonomy, knowing that my superior whoever that may be, has my back, if they’re going to allow Give me that decentralized execution authority, that ability to go and make autonomous decisions. That’s the first thing. Because when we know that people have our back, you know, what, I think we’re more willing to admit, we’re more willing to drive that peer accountability, if you will. Because we know that someone trust us to do this. Therefore, I’m going to do the best I can and I want my team to do that way. So, I’m going to drive some that appear accountable.

The other thing I go back to is I think about pure accountability in a flying organization in the military. So, I think about kind of fighter squandered right now, we talked about the military.

You know, a lot of people think, Oh, well, the military, you know, you did it, because you were told to your order to do stuff, you know, everybody. Okay? Well, yes, we do have a rank structure. And that’s where a very important reason in a military unit, however, first of all, I don’t ever remember being you know, like, in the movies, you see, you know, ever being told or ordered, you know, you will do this right, order you to do this. I mean, we don’t, right. I don’t think we’ve had firing squads in the military for several year, you know, you know, things like that, for years, I think. So, I think back on it, and there was just this idea this, this kind of couldn’t put your just put your finger on it, that and this is flying or non-flying, like I said, we had many jobs we did outside of flying, you know, running in the organization, running programs, you know, things like that. So, when I think back on it, I say what was it? And I think it goes back to this idea, again, that, because of how we train the idea of mutual support, right, established through morale, and again, getting to know people, because a lot of times in the military, you know, at least in a flying Squadron, the people you work with a lot of times the people you play with to so you, you naturally got to know these people, and these are the people you got to work with, you know, their families know their kids.

And so, you build that camaraderie morale, which helps enforce that peer accountability, because it’s not the kind of accountability where you’re, hey, do this or else, it’s the kind of accountability that you’re saying, you’re picking somebody up, because when they’re having a bad day, or when they’re having a misstep, you’re going to step in, provide that mutual support. Because you, because you want them one day to help you out as well. I mean, you want them to see that, because you’re going to have your days as well. So, I think it all goes back to number one, starting at the top, you know, talk is cheap, make let them see you do that. Let them see you support people, let them see you take accountability for your own actions as a leader as well, because I mean, there’s nothing more demoralizing though later than either a can admit when they’re wrong, or even worse, will blame their team for someone they’re wrong. And so, you know, that that’s the first thing. And then the second thing is establishing that, that mutual support within organizations establishing that idea of camaraderie, and, you know, the idea of morale and how we get to know each other. And there’s simple things, just one example. I mean, you know, now, with a lot of people working remotely, and business organizations, especially, you know, it’s a little tougher, we have to make a little bit more of an effort to do that. But even we weren’t working remotely, let’s be honest. I mean, you go to the office, go to your job, I mean, you may see somebody at the watercooler or something like that, right. But it’s like, you know, when you have some time, you know, detach. Nobody likes Mandatory Fun, but encouraging these, you know, whether it’s gotten togethers in the office, or just take 1520 minutes, get the team together and just talk about non-business-related stuff. I mean, Hey, how’s it going? What’s going on your life? You know, what’s, how are things going, I heard your dad was in the hospital, how’s that, you know, things like, I mean, getting to know people, and really establishing that, that, that core or know each other. And then the other thing, I’ll bring an exam from the airlines, one thing we’ve done is basically voluntary reporting. And it’s not even necessarily anonymous, um, we call them Aviation Safety action report. So, say something happens, right? And so, we say, you know, we’re flying along, and let’s say we miss an altitude or missile radio call, and we catch it and nothing bad really happens. We keep going about our day, right? Well, back in the day that have been Okay, great. Let’s just keep going. You know, nobody knows about it. Right? Well, now, not only is it you know, it’s encouraged to say, Hey, you know, report that put those up in one way. It’s, it’s, it’s encouraged by saying, hey, look, if you report this and come forth with this, you know, now say you make a make a minor missed out, or make good, a major misstep. And you put this report in, you’re essentially, you know, you’re kind of raising your hands and hey, look, I messed up, here’s what happened. Here’s what we did. And so now with all the data that are on airplanes, so they can see everything we do, almost, I mean, they get all this data that comes back, they can monitor another key point of safety program. When they when you come back and say, hey, look, here’s where we messed up, your kind of fessing up, you know, right away saying, look, we’re not trying to hide this. Here’s that. 
So, it’s almost like, you know, we’re not going to come down on you for something like this, especially if you report now, if there’s willful disregard different termination. I mean, that’s a different story. But for the most part, all these are and we get 1000s in my own company, we get 1000s and 1000s of these per year of pilots reporting mistakes there and it’s not anonymous. I mean, they see your name they see you know, who it who did it, but it’s a great reporting for him because what it’s done is it showed a lot of gaps in the system. So, they can fix the system. They can, you know, go back and so we don’t make those mistakes.

Good. But again, it’s about a culture that says, Look, if you can, if we can identify this stuff early up, you know, early on, when these things happen, you know, you’re not going to get slapped on the wrist for this, you’re not going to get any job action taken, obviously, again, you know, there’s a 1% if it’s willful, you know, disregard or something. Yeah.

I was exceptions, but for the most part, that’s one great example of how you, you kind of identify that and bring some of that pure Cambodian Yeah, and love you turn around peer accountability, because a lot of people see this as removing accountability, but it’s not it’s creating a different level, right? ability around it, right. And the reason I say that is because I think accountability gets a negative connotation times, you know, a lot of times it’s, I kind of call it the, you call it the vice print and vice principal approach, Vice Principal, and, you know, school was always kind of the, the hammer, the one that was always winning, right? So, when you think of accountability, always kind of think of advice principle that, you know, the one that’s do this, or else kind of approach, you know, if you don’t do this, you going to get fired or whatever. Versus peer accountability, which is that kind of accountability, that is, is maintained with your peers, your colleagues, even subordinates are superior. I mean, it doesn’t really matter rank. Right ci, again, IT systems-based thinking we’re all trying to accomplish the same mission, the same task.

The same objective. Harvard Business Review actually did a study that found that in poor-performing teams there's no accountability, in mediocre teams the bosses drive accountability, but in high-performing teams, peers and colleagues enforce accountability.

Exactly.

And I think there's an interesting pivot here, because part of the struggle for a lot of organizations, when you talk about that peer accountability and sharing things, is this: if I think about two pilots, they both realize they need each other's support, and you'll have the first officer calling out the captain, a rank above, if he or she sees something inappropriate or a potential error. Whereas I've seen it sometimes get called "brother's keeper," and that's not the intent at all. In dysfunctional organizations it becomes "I'll help you cover it up, make sure nobody knows," or in some cases, "sleep it off" if someone's been drinking, rather than calling it out. That's the dysfunctional version, versus what you're talking about, which is: I'm comfortable calling it out because there's no ramification for doing so, and I'm dealing with it because we need each other to be successful. Is that fair?

Oh, absolutely. And the example you use, the drunk captain, it's funny you mention that, because what you'll see now, even in airline interviews, is that they're looking for people who will look out for the team and look out for the other person. There used to be a common situational question: you show up to the hotel lobby getting ready to go, and you notice alcohol on the captain's breath. You're a new hire. What are you going to say? What are you going to do?

What they don't want is someone who says, "I'm going to call the company right away and tell the chief pilot this guy is drunk." What they want you to do is, number one, safety: do not let that captain anywhere near the airplane. That's the first thing, whatever you have to do, and hopefully without making a scene. They want you to take that situation and de-escalate it in the best way possible. How are you going to do that? There are different ways. You can phone a friend, or you can talk to the captain and say, "Look, man, call in sick, we'll get this taken care of. Let's just not get you to the airport. You may get a slap on the wrist, but at least you're not going to sacrifice your license, or put the company at risk, or anything like that." That's the first thing. But the other thing they're really looking for is that you're looking out for the person; you're not just going to sell them out and call the company on them right away. So you're exactly right, that's really what it's about.

And when you talk about two pilots working together, establishing that really starts, like I said earlier, with leadership. It starts with the captain setting the tone, not just with the other pilot but with the whole team: the cabin crew, the maintainers, the ground crew, everybody. Most guys and gals I fly with now will say, "Hey, if you see something, speak up, no matter what it is. Don't assume I know everything." It starts with establishing that tone. And again, talk is cheap, so anyone can say that. But when you do point something out to someone, there are different ways to do it. In crew resource management communications, you escalate it. Say we're approaching a thunderstorm, and I say to the captain:

"You see that thunderstorm about a hundred miles out there? What do you think?" And they just kind of go, "Oh, yeah, we'll be fine." So as we get closer, if they're not turning: "Hey, do you want me to ask if we can get a deviation around it?"

"No, we'll be fine." And so you keep escalating until it gets to the point of, "Hey, Captain, I recommend we turn," or, "I think we should probably deviate." Exactly: you escalate it up, if you will. That's one small example of how you handle things like that, and how you work together with that peer accountability, where you escalate the tone and the communication you're using. The other part goes back to the organization. How does the organization set this up? How does it train? When we train the skill sets you're talking about, how we work together, that's ingrained in our training: how we communicate, how we're going to handle certain situations, how we divide duties. All of those things go into it.
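The thunderstorm exchange is essentially an assertiveness ladder: each time the concern isn't acted on, the phrasing gets more direct. Here is a small sketch of that idea; the level names, example phrases, and the next_statement helper are my own illustration under that assumption, not CRM doctrine quoted from the episode.

```python
# Illustrative escalation ladder for a CRM-style concern. The levels and
# wording are hypothetical; the point is that each unacknowledged pass
# moves the speaker to a more direct statement.
ESCALATION_LADDER = [
    "Hint:      'You see that thunderstorm out there? What do you think?'",
    "Query:     'Do you want me to ask for a deviation around it?'",
    "Recommend: 'Captain, I recommend we turn now.'",
    "Assert:    'Captain, we need to turn. I'm requesting the deviation.'",
]

def next_statement(level: int) -> str:
    """Return the phrasing for the current escalation level (capped at the top)."""
    return ESCALATION_LADDER[min(level, len(ESCALATION_LADDER) - 1)]

# Simulate a concern that keeps getting waved off until the third pass.
level = 0
for acknowledged in (False, False, True):
    print(next_statement(level))
    if acknowledged:
        break
    level += 1
```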

Right. So that gets me to crew resource management, which is another thing that's often cited, at least in the airline industry, as a fundamental step change. Can you share some of the specifics there? Because that's an area where there have been huge leaps in terms of how front-end and back-end crews communicate, and we know of incidents, planes getting shot down and so on, that happened because the right communication wasn't there.

Absolutely. My human factors leadership model pulls a lot of its attributes from crew resource management, which some people call team resource management. We talked a little bit about communication, and in crew resource management, communication is a big, big area. I talked about tone, but there are other pieces, such as briefing: what are we expecting to happen? Briefing is really looking into the future and predicting what's going to happen so we're better prepared. So, brief before we do something. Then for communications there's something I call C3: clear, concise, correct. Clear, concise, correct speaks for itself, but what it really also means is no assumptions. Don't ever assume anything, because as we know in the safety world, in human factors, assumptions can lead to mishaps. That's one aspect.

Another aspect is situational awareness. I actually do a whole workshop on situational awareness, and it is just that: awareness of all the variables affecting your current state at that time, everything coming in from the environment, and how we take all of that in, determine our situation, and determine our next course of action. Especially when you're working in a complex environment, nuclear power, aviation, an aircraft carrier, you name it, there are many different variables. So how do we take all of that in? I talk about consistent monitoring, monitoring everything we're doing. When we're flying an airplane as a crew, part of CRM now is dividing duties: you have one pilot flying and one we call the pilot monitoring. You're not just sitting over there asleep; the airplane may be on autopilot, but you're monitoring that the aircraft is doing exactly what it's supposed to be doing. Another way I put it is what I call healthy paranoia: a little voice in the back of your head asking, "What could go wrong right now? We're flying along straight and level at 25,000 feet, but what if I lost an engine? What would I do?" It's not about being paranoid; it's a healthy paranoia that helps maintain situational awareness. And part of CRM, along with pilot flying and pilot monitoring, is team roles: well-defined roles and delegation of duties, all of which feeds mutual support when we have a critical situation.
And like I said earlier, human factors looks at the human as a way to help mitigate human error. This is a great example. When things go normally, point A to point B, great: the procedures, the automation, everything works. But what happens when something doesn't go normally? Those non-standard situations are when the human has to step in and make the human decisions. So when those things happen and you start having a critical situation: how do you delegate duties? How do you determine whether you have a time threat or a no-time threat? In other words, do we have time to look into this, or is it time-critical? If we have a fire on the airplane and need to land soon, we probably don't have time to look at everything; we need to get the airplane on the ground quickly. So understanding team roles, mutual support, and decision making, that time-or-no-time judgment, all goes into it. And when we make decisions, we have to check our perception before we act, because we may have a false perception of something, and if we act on that, that's bad. So decision making is a big part of it.
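To make the time-threat versus no-time-threat idea concrete, here is a toy sketch in code. The ten-minute threshold, the handle_abnormal function, and the task lists are invented for illustration under the assumptions above; they are not a procedure from the episode.

```python
def handle_abnormal(situation: str, minutes_available: float) -> list[str]:
    """Toy triage of an abnormal situation by time available.

    'minutes_available' and the task lists are illustrative only; the real
    judgment is the crew's, supported by procedures and training.
    """
    if minutes_available < 10:          # time-critical: e.g. fire on the airplane
        return [
            f"{situation}: fly the aircraft",
            "delegate duties (pilot flying / pilot monitoring)",
            "land as soon as possible",
        ]
    # no-time threat: take the time to build an accurate picture first
    return [
        f"{situation}: verify the perception before acting on it",
        "run the checklist and consult the SOPs",
        "decide, brief the plan, then execute",
    ]

print(handle_abnormal("cargo fire indication", minutes_available=5))
print(handle_abnormal("slow hydraulic leak", minutes_available=45))
```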

Added to that, I talked earlier about decentralized execution, so that leadership piece, giving people autonomy, is a big part of CRM. And the final thing I'll mention about CRM, which is also part of the human factors leadership model, is SOPs: standard operating procedures, and ensuring they're followed, because SOPs are critically important, as you know, in high-reliability organizations. A lot of times I love it when I bring this up in business, because people say, "Well, the military, you guys have standards, you march in lines, you do what you're told, everything's very structured." And I say, yes, but let me give you an example from flying fighters, or from a special operations team such as the Navy SEALs. Do you think that when they drop a SEAL team somewhere, they expect them to follow orders to a tee and to know the exact situation they're going to face? They say no, and I say, exactly. They want them to have full autonomy. That goes for almost every military organization: they want us to have that autonomous decision-making ability, because we're not robots; they want us out there making decisions. But in order to do that, standard operating procedures serve as guide rails. So when you're out there making those autonomous decisions and you're at a crossroads, you say, "Here's what our standard operating procedures want us to do, so I'm going to make this decision, because it's more in line with how our standards and operating procedures are written." There are some things that are black and white, that we always have to do: always turn this switch on, always turn that switch off, never do the opposite. But there's always going to be a human factor in making those autonomous decisions. So making sure we understand the standard operating procedures, and adhere to them, is another critical part of CRM. So: communication, situational awareness, decision making, team roles, and standard operating procedures, and I put training under that as well.

I love it. I think you brought in a lot of really good examples from the airline industry and from being a fighter pilot. And really, I think this is the next leap in terms of safety: getting to the point where you've got a just culture, where people are comfortable and feel safe raising and escalating issues, you've got the right level of support, and you're looking at the system and the culture, trying to prevent things. So I really appreciate you jumping in and sharing your insights, your ideas, and all the wisdom from your experience.

Well, I appreciate it, Eric, thank you so much. And if I could sum it up: everything I talked about, the way I look at human factors, is really looking at it from a systems-based approach, like you said. You're never going to get rid of human error as long as we have humans involved. The other example I'll leave you with: people say, "Well, what about robots and computers taking over more and more, as we start seeing more automation?" Okay, great, but here's the thing: someone designs that automation, someone installs that automation, someone has to work on and maintain that automation, and someone designs the software. There's always going to be a human in the chain somewhere.

So it's all about a systems-based approach, and how we fix the system, versus that blame-and-train approach, which, as we've found over the years, really doesn't get the results we want.

Yeah, and you bring up an interesting point, because even if you think about MCAS, the whole issue on the 737 MAX, or if you go to Airbus, there was an incident, I think it was at the Paris Air Show, where the system thought the aircraft had landed when it hadn't. That's technology, that's a system, but it's designed by humans who can still make a mistake in designing it.

Exactly. And a lot of times we look at that and the engineers might say, "They could have just done this, and that wouldn't have happened." Well, okay, maybe. But do you really consider, in that situation, what the human actually sees, how they have to process it, and how much time that takes? There are so many things that go into that. So you're exactly right, and those are great examples.

Great. Well, thank you so much for joining us, Brandon. If somebody wants to get in touch with you, how can they do that?

Absolutely. To see more of my material and what we talked about, go to my website; there's plenty there. My email address is b.williams@lead-tac.com, B as in Bravo, and there's also a way to contact me through the website. Feel free to reach out for more info or anything else.

Excellent. Thank you, Brandon.

Thanks, I appreciate it.

The Safety Guru with Eric Michrowski

More Episodes: https://thesafetyculture.guru/

C-Suite Radio: https://c-suitenetwork.com/radio/shows/the-safety-guru/

Powered By Propulo Consulting: https://propulo.com/

Eric Michrowski: https://ericmichrowski.com

ABOUT THE GUEST

Brandon Williams is an accomplished and results-oriented leader, top business speaker, executive consultant, and technical expert with proven leadership experience in managing cross-functional teams and organizations. His experiences as a United States Air Force F-15E Fighter Pilot and Officer leading diverse teams of men and women from all backgrounds set the stage for his Human Factors Leadership methodology. In addition to his experience as a Fighter Pilot, Brandon is recognized for his expertise in Human Factors, having designed courses for and taught at several universities. His world-class execution of numerous speaking engagements to Business Leaders from all over the globe consistently delivers superior results in how to lead High Performing Teams through Complexity and mitigate Human Error.

Discover More: https://www.lead-tac.com/

 
