Cracking the Code: The Human Factors Behind Organizational Failures


ABOUT THE EPISODE

You don’t want to miss our latest episode of ‘Cracking the Code: The Human Factors Behind Organizational Failures’ on The Safety Guru. Join us as Martin Anderson, a renowned expert on human factors and performance, shares his valuable insights and examples about the human factors behind organizational failures. Learn how to effectively and constructively embed lessons learned in your organization.

READ THIS EPISODE

Real leaders leave a legacy. They capture the hearts and minds of their teams. Their origin story puts the safety and well-being of their people first. Great companies ubiquitously have safe yet productive operations. For those companies, safety is an investment, not a cost, for the C-suite. It’s a real topic of daily focus. This is The Safety Guru with your host, Eric Michrowski, a globally recognized ops and safety guru, public speaker, and author. Are you ready to leave a safety legacy? Your legacy success story begins now.

Hi, and welcome to The Safety Guru. Today I’m very excited to have with me Martin Anderson, who’s a human factors expert. We’re going to cover a really interesting series of topics today. He’s got a deep background in human factors across oil and gas and regulatory environments. His passion is really to understand how people perform in complex systems and, ultimately, why organizations fail. So, Martin, welcome to the show. Really excited to have you with me. Let’s get started with a bit of an introduction.

Yeah, thank you very much, Eric, and certainly, thank you for having me on the show. It’s a real privilege to be invited here. In terms of my background, I started off with a psychology degree, and then I did a master’s in human factors. And after a few years of work experience, I followed that up with a master’s in process safety and loss prevention. I’ve been a human factors specialist for over 30 years now. I’ve worked for a couple of boutique consultancies. I’ve been a regulator, working as a specialist inspector in human factors for the UK Health and Safety Executive. I spent a few years as a human factors manager in an oil and gas company. I spent a lot of time assessing existing installations but also had input into the design of new facilities, working on 40, 50-billion-dollar mega projects. And over that time, I visited over 150 different oil, gas, and chemical facilities, both onshore and offshore, which gave me quite an insight into how some of these major organizations operate. One of the reasons I created the website humanfactors101.com was to share some of those insights. The other thing I’d like to talk about goes back 30 years, right to the start of my career.

I read a document called Organizing for Safety, published by the UK Health and Safety Executive in 1993. There’s a quote from that document I would like to read out because it had a huge impact on me at that point. It goes like this: “Different organizations doing similar work are known to have different safety records, and certain specific factors in the organization are related to safety.” If we unpack that quote, it really contains two statements. First, different companies doing the same things have different safety records. And secondly, perhaps more importantly, there are specific factors that could explain this difference in safety performance. And I thought this was amazing. If these factors could be identified and managed, then safety could be massively improved. And over the next 30 years or so, one disaster at a time, these organizational factors have revealed themselves in major incidents, which I guess we’ll come to in a moment.

I think that’s a great topic to get into. So why do organizations fail? Because when we had our original conversations, I was fascinated by some of your connections between multiple different industries and the common themes that ran across all of them.

Yeah, sure. What might be helpful, first of all, since you introduced me as a human factors specialist, is to briefly define what we mean by human factors, and then we’ll go into looking at some of the organizational incidents, if that’s okay. Sure. For me, human factors is composed of three main things. We’re really looking at, first of all, what people are being asked to do. That’s the work they’re doing. Secondly, who is doing it? This is all about the people. And thirdly, where are they actually working? Which is the organization. So ideally, all three of these aspects need to be considered: the work, the people, and the organization. But my experience is that companies tend to focus on just one or two of these, usually the people one. Within the UK HSE, our team defined human factors as a set of 10 topics, which has become widely known as the “top 10,” used by industry, consultants, and regulators worldwide. Because prior to that, we would turn up to do an inspection and say, we’re here to inspect your human factors. And they were like, I don’t know what you mean. How do we prepare for that?

Whom do you want to speak to? What do you want to go and look at? So, after creating that top 10, we were able to say, the agenda for the inspection is that we want to come and look at how you manage fatigue. We want to come and look at your supervision arrangements or your competency assurance system. So, this helped to operationalize human factors. The other way into human factors, really, is through human error. A lot of people come to human factors because they hear about human error. But if we identify human error, we need to understand how and why it occurred and not simply blame people. Are we setting people up to succeed? Are we setting them up to fail? Are we providing systems, equipment, and an environment that supports people to do the work that we’re asking them to do? And to introduce, as we move towards talking about organizational failures, I’d like to read a quote from Professor James Reason, a psychologist at the University of Manchester. This quote is about 25 years old, but it’s still one of my favorites. Reason said that rather than being the main instigators of an accident, operators tend to be the inheritors of system defects created by poor design, incorrect installation, faulty maintenance, and bad management decisions.

Their part is usually that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking. And I think that’s a really good introduction to our discussion on organizational failures.

So, let’s go there because we had a really interesting conversation on organizational failures and some of the common themes. So, what are some of the common themes, and why do organizations fail?

Exactly. When you say, why do organizations fail, let’s just think about a few of those from different industries. These organizational disasters include the NASA space shuttles, the Herald of Free Enterprise ferry disaster, Shenandoah, the King’s Cross fire, Piper Alpha, Chernobyl, Texas City, Buncefield, Deepwater Horizon (the Macondo well), lots of different rail incidents around the world, and several so-called friendly fire events. And there have also been organizational disasters in sectors such as healthcare and finance. In the UK, these include inadequate care during children’s heart surgery at the Bristol Royal Infirmary over a 10-year period. And, of course, most listeners will be familiar with the so-called rogue trader that caused the collapse of Barings Bank. So, there have been so many disasters in so many different industries. And I know when we had a conversation earlier, what we were considering was that, okay, they’re all in different industries, but there are lots of common themes that we could pull out of those, from space shuttles to Barings Bank, for instance.

So, what are some of the themes? Because the part that really caught my attention is an activity you did where you took the facts from a different event and masked them. Tell me a little bit about that story, about how you masked facts that came from an existing incident and people thought it was something different.

Yeah. So, the example there was that… I don’t know if listeners are familiar with the Nimrod disaster. This goes back to 2006. Nimrod was a reconnaissance aircraft on a routine mission over Afghanistan. Shortly after air-to-air refueling, there was a fire, which led to the loss of the aircraft and, sadly, the 14 service personnel on board. I was asked to get involved and advise that investigation. And as I started to read some of the initial information from that investigation, I started to think, this sounds just like another incident I’m really familiar with, which was one of the shuttle incidents, the Columbia incident. So I put a presentation together, and on the left-hand side of each slide, I put information from the Nimrod incident, and on the right-hand side, I put information from the Columbia incident. I went through several of the issues that were involved, and then I mixed up the left and right sides and didn’t say which was which. And when we showed it to the investigation team, they couldn’t determine which information came from the incident they were investigating, the Nimrod incident, and which information came from the shuttle Columbia incident many years previously.

It just showed you that with two very different incidents, in different industries, in different locations, with different people, the organizational issues were almost identical. That was quite powerful, the fact that people couldn’t tell the difference between the facts from one and the facts from the other, because the causes overlap so much. When you look at the very detailed technical level, there are differences between these events. But when you really start looking at the deeper, broader organizational issues, there are so many similarities.

What are some of the themes in general that you’ve looked at? You mentioned Barings Bank, which sounds very different from Piper Alpha. What are some of the common themes?

It does. You think, what has the failure of a 100-year-old bank got to do with the failure of an oil refinery or an offshore oil platform or any of the other incidents that we’ve spoken about? But people and organizations fail in very similar ways. The findings from these disasters are getting quite repetitive because you’re seeing the same things over and over. So when you look at all of these incidents and pull out some of the main themes, what are the things that we’re seeing? Because the important thing is that we can go and look for these in an existing organization. You see things like a lot of outsourcing to contractors without proper oversight. In the nuclear industry, we call that not having “intelligent customer” capability, because they don’t know what the contractors are doing. They can’t explain what the contractors are doing. Then you’ve got inappropriate targets or priorities or pressures because, in almost all of these cases, there were significant production pressures, whatever production means for your organization. Another key issue that you see almost every time is a failure to manage organizational change. And by that, I mean a failure to consider the impact of that organizational change on safety.

So, a lot of organizations are going through almost a tsunami of changes and not really considering how that impacts how they manage safety, or not considering that each of those separate changes has a cumulative effect which is more powerful than the individual changes. You also see a lot of assumptions that things are safe, even when there’s evidence to the contrary: assuming that everything is safe rather than going and looking for information, rather than challenging, rather than having a questioning attitude. Organizations are pretty bad at looking for bad news or responding to bad news; they don’t want to hear bad news. In almost all of the incidents that we’ve spoken about, it wasn’t a complete surprise to everybody in the organization. There were people in the organization who knew things were going wrong, who knew they were getting close to the boundaries of safety, but either they couldn’t get that information heard by the right people, or people didn’t react or respond to it. So it’s really interesting when you read the detailed investigation reports: there are always people who knew that things were going wrong. That information is available in the organization.

And I think that’s a good thing because it means, hey, this is good. We can proactively do something about this. We can go and look for some of these things. So those are the things I mentioned there, and there are a lot more, Eric, that we could talk about. There are lots of organizational issues we could proactively go and look for, because these incidents are devastating for the people involved and for the organizations involved, but they’re a free lesson for everybody else. Sure.

If you choose to learn from them, and if you choose to see the analogy between a space shuttle, Nimrod, Barings Bank, and whatever industry you’re in.

Yeah, exactly. Because you have to go looking for those issues, for those factors, in your organization. So, there are two, maybe three, things you mentioned there. You need to go looking at other incidents. You need to take the lessons from those. You need to go and look for them in your organization, and you need to act on that. This failure to learn from other industries, for me, is perhaps the greatest organizational failure of all. Organizations think, well, it doesn’t apply to me because that was in a children’s hospital, or that was a bank, or that was an offshore platform. What’s that got to do with me in my industry? Failure to learn those lessons is the biggest failure because you can get away from the technical specifics of the incident and just look at the deeper organizational issues. But who in organizations is doing this, Eric? Which person, which role, which part of the organization goes looking for these events, draws the lessons, and then goes and challenges their own organization? It’s actually quite difficult to do that. It’s like the problem with safety, isn’t it? You can go into a boardroom and pitch a new product to a new market, and people will listen to you and give you money.

But going in and pitching that you want to spend money to protect and safeguard the installation against things that may or may not happen in the future is a much harder sell. It’s a problem for safety more generally.

One of the things I know we talked about was what you call an organizational learning disability: people are good at investigating, but not at true learning, not at embedding the change. I’ve seen this many times, where people learn the same lesson over and over.

And that’s it. When we have these large investigations into these disasters, there’s always this proclamation that this must never happen again and that we need to learn the lessons. And then something else happens a year or two later in a different industry, but with the same issues. So, you talked about a learning disability. Why do organizations fail to learn, given that there’s this wealth of information out there as to why organizations fail? For me, there are two issues. First, there’s this failure to learn from other industries. All industries think they’re unique. They don’t think they can learn because it’s a totally different industry; it’s nothing to do with them. But they all employ the same kinds of people. There aren’t different people working in different industries. They all employ the same people. They organize themselves in very similar ways, and they have the same targets and priorities and so on. So, first of all, there’s that assumption: it doesn’t apply to me; it’s a different sector. That’s the failure to learn from other industries we’ve spoken about. But there’s also a failure to learn from your own investigations. And we see this in major incidents, like NASA failing to learn from the previous incidents it had.

So, you have the Mars Orbiter and a failure to learn from that. You have Challenger, then Columbia, and so on. What we find is that there’s a lot of sharing but not enough learning. After an incident, a safety bulletin is put together, it goes on the intranet, there might be a bit of a roll-out, and so on. But if you’re not changing something, you’re not learning. Something in the organization has to change for a lesson to be embedded. And you need to go back and confirm that you’ve changed the right thing. You can’t just change something and assume everything will be okay. So if you’re not changing anything structurally in the organization, or in one of the systems or one of the processes, then you’re not embedding the learning. That’s the first thing: this failure to embed the lessons that you come up with. I think the other problem is that investigations are not always of great quality. They’re not identifying the right issues. They may not be getting to the root causes. They might focus on human error. They might focus on blame. Investigations that are done by external bodies generally are starting to look at these organizational issues.

But investigations that are done internally by the organizations themselves into their own events rarely confront organizational failures. It’s very challenging for the investigation team to raise issues that suggest there are failures at the leadership level. It’s challenging for the investigation team, and it’s challenging for the leadership to receive that information. So quite often, the recommendations and the actions are all aimed at employees; a bit like a lot of safety initiatives, behavioral safety, safety culture, and so on, they are quite often aimed at the front-line workforce rather than the whole organization. We often see that in investigations as well: they’re not challenging these organizational issues, whether that’s because of a lack of understanding or because it’s not accepted by senior leadership. And people doing these investigations aren’t always competent. I mean that in the nicest possible way. They don’t have the right experience, or they’re not given enough time, or it’s seen as a development opportunity. So, investigations need to have the right people doing them, asking the right questions, in order to get the right recommendations out of them. Because if the process isn’t right, you’re not going to get the right recommendations coming out of it.

So, what are you going to learn, given that you haven’t got to the real issues? So yeah, I think there are two issues there: failure to learn from other industries, but also failure to learn from your own investigations. And we can talk about some tips that maybe could help organizations get to some of those organizational issues when they’re doing investigations. Absolutely. And also, it’d be useful to talk about how you can go and look for some of these organizational issues before you actually have an incident, which is what we want to get to. We want to learn, but we don’t want to have to have incidents in order to learn. So why can’t we learn proactively without having an incident in the first place?

This episode of The Safety Guru podcast is brought to you by Propulo Consulting, the leading safety and safety culture advisory firm. Whether you are looking to assess your safety culture, develop strategies to level up your safety performance, introduce human performance capabilities, reenergize your BBS program, enhance supervisory safety capabilities, or introduce unique safety leadership training and talent solutions, Propulo has you covered. Visit us at propulo.com.

Let’s start first with how you can identify some of these organizational factors through the investigation process.

Through that investigation process, what you’re really trying to do to get to the organizational issues is to zoom out from the detail, taking a helicopter view. You’re zooming out and looking down, trying to see the bigger picture. So, for example, most people who’ve done an investigation will have put together a timeline: a list of what happened, to whom or to what equipment, and when, drawing a timeline and starting to map what happened. But the problem is that a lot of those timelines start on the day of the event. What I’d propose is that your timeline goes back weeks, months, or even years before the event occurred. You’re trying to identify what might have changed in the organization in that period in terms of changes to equipment, processes, people, priorities, the direction the company was going, and so on. Your timeline needs to go way back because of the organizational issues that we see in all of these events. These events didn’t just occur overnight. As Reason said in that quote, there was trouble brewing for weeks, months, and years beforehand. There are indications in the organization, and your timeline needs to go back and look for those issues.

That automatically forces you to think not just about the actual incident but more widely about your organization. The other thing you can do is review previous incidents that have occurred, or other sources of data, maybe looking at audits or regulatory inspections or staff surveys. You’re trying to identify common threads and trends, and you’re trying to identify how long these conditions have existed and how extensive they are across the company. Why did this event surprise us? Because, as I say, the information is normally available in the organization. So why did this come as a surprise? You’re looking not just at individuals; you should be looking at systems and processes, and your mindset as an investigator should be to think about the organizational conditions. What was the context in the organization that set people up to fail? Going back way before the incident is quite a helpful change of mindset, rather than just asking, okay, what happened on this day, and thinking about how you responded to the incident. It’s quite a useful tool to help you think more about organizational issues.

And how broad do you go? Because when you start zooming out to years before, decisions, changes in leadership, changes in investment, you can open up a very big can of worms. And I can see, if it’s Deepwater Horizon or Piper Alpha, that there’s a need to go deeper. But how deep and how wide do you cast the net? Because I think it’s incredibly important, like you said. Otherwise, you just limit it to that person who made a mistake, as opposed to starting to understand what’s changed in the environment, the context. Sure.

It’s a lot easier in those big disasters to do that because they’ll have a huge team of people in these investigations. Some of them have taken five, six, eight years. They have the time and the resources. In an organization, you generally don’t have that much time to do an investigation. Quite often, the people doing it have other jobs, so they want to get back to the day job. That’s one of the reasons why investigations are quite compressed in terms of time: most people are not full-time investigators. So, I think it depends on the incident that you’ve had as to how far you want to go back. But looking at whether those conditions exist in other facilities or workplaces is a useful step that can really help you identify whether this is unique to this scenario or a systemic issue that you have in your organization. And I think you should go back and look at what might be key issues: if you’ve had a merger or an acquisition, a major change in your direction, a new product, or you’ve opened a new facility, those major organizational changes; if you had a downsizing exercise two years ago and since then there have obviously been issues in terms of staffing and resources, then those are the key things you need to be mapping out.

As you say, you can’t map everything, but you’re looking for key significant changes or events or shifts in priorities or policies that might have occurred in the previous years. And I guess the time and effort that you spend in that partly depends on the consequences or the potential consequences of the event that you’re looking at.

But there’s still an element of being able to focus the conversations, like you just said, on the major shifts that happened, as opposed to unearthing every piece. You’re still rewinding the movie further back. The other part I think is interesting to explore is what you talked about in terms of how we identify and explore some of these organizational factors before something happens. And you mentioned that in all the incidents you talked about, somebody knew something was up beforehand. So how do we identify these themes before a major event?

Yeah, you’re right there, Eric. I think there’s always information available, and it’s just maybe not getting to the right people, or people aren’t taking action on it. These warning signs, red flags, whatever you want to call them, go unnoticed, are ignored, or don’t get to the right person because, as we’ve said, these incidents incubate over a long period of time. Those warnings accumulate. And that’s a great thing because it means we have an opportunity to go and look for them and to find them. So, first of all, you should have a means for people to raise those concerns in an independent, confidential way, some reporting system so that those concerns are coming to you. That’s one mechanism, and some industries are much better than others at having confidential reporting systems where people can safely report a near miss or an error or a challenge or frustration that they’re having. And that gives the organization an opportunity to do something about it. You’ve got to have the right culture for that, of course, because if your previous investigations blamed individuals, then people are not going to come forward, because they’ve seen what’s happened to other people.

So, they’re going to keep quiet, and these things get brushed under the carpet. So, it does depend on the culture that you’ve got. But having an independent, confidential way for people to raise those issues can be quite useful. So that allows issues to come to you. But you also need to go looking for these issues as well.

Yeah, I think.

That’s important. If organizations have had quite a few events, do they investigate them individually, or do they try and join the dots between different incidents? They might appear unrelated, but are they? Are you starting to accept things, either conditions or behaviors, that you wouldn’t have accepted a few years ago? People’s risk acceptance might change over time. Are you contracting more out? And do you really understand the technical work that those contractors are doing? Can you explain it? Can you challenge it if necessary? Are you having lots of budget cuts? If the conversation is always around targets, budget challenges, a focus on efficiencies, productivity initiatives, and so on, that’s a really good red flag. Are you starting to focus more on temporary fixes? Are you patching equipment? Are you stretching the life of equipment rather than investing in permanent solutions? Are you maybe reacting to things rather than predicting and planning ahead? Now, organizations do lots of safety-related activities, and previous podcasts have talked about safety work and the work of safety. But if organizations start to see the completion of safety activities as more important than whether they’re effective, that’s quite often a big warning sign as well.

Companies are doing risk assessments, investigations, audits, and writing a safety case, if that applies to your industry. And if getting that done is more important than using it as a learning exercise and asking whether it’s effective, that’s also a bit of a trigger for the organization. So, there are these things you can go looking for. There are lots of questions we could ask, but one of the biggest things for me is this: if you assume that your assessment of these major risks is incorrect and go proactively seeking information to continuously revise your assessment, you’re more likely to pick up these issues. Whereas if you assume that everything’s okay until it isn’t, it’s too late at that point. Organizations are getting more mature in their approach to investigations. But that maturity hasn’t carried over to being proactive in looking for issues. We’re getting better and better investigations, but we don’t want to have incidents to investigate. There are tools and techniques, ways you can go and proactively look in your organization to find these issues. The maturity of investigations just hasn’t translated over to proactively going and looking for things.

There are lots of reasons why that might be the case.

I think it’s an interesting point, because the other element that comes to mind is that if you’ve got an incident that happened, it’s clear who owns the investigation. But who owns this proactive view? In some organizations, it could be audit, but audit is not always necessarily equipped to do it. I know that in one organization, the audit function did an audit of safety, and their focus in terms of driving safety improvement was to find ways to get employees back to the office faster, which has no impact on safety. But from a financial standpoint, if you don’t have expertise in what safety means, that might sound like a viable solution to reduce a rate, right? It could be your safety organization, but that safety organization needs to have the right visibility. It could be some form of a red team that’s constantly looking for challenging pieces. What have you seen be most effective in terms of where this resides and the practice around kicking the tires?

I think part of the issue there, as I alluded to earlier on, Eric, is that I just don’t think this is a formal role within organizations. The departments that you mentioned quite often don’t have the expertise, experience, or time to go and look for these issues proactively. The audits, the investigations, they’re all quite constrained in their agendas. So, I don’t know of a good example of a function in an organization that is proactively going and looking at these areas. You do have risk committees and audit committees, whether you’re looking at the financial sector or at oil and gas. I think there are pieces of the puzzle held by different people within an organization that can contribute to this review that we’re talking about. But I don’t think there’s really good practice out there of how that’s pulled together into a cohesive, proactive, challenging effort to go and look to see whether we have any of these issues, particularly when you’re trying to learn from other industries. So if there’s been a big incident in one industry, and there’s a big report that’s come out, and there are lessons and recommendations in that, organizations in that industry might look at that and might go and challenge themselves.

But that’s relatively short-lived, I think. If you ask people in organizations, what were the main failures in Piper Alpha? What were the main failures of Barings Bank? What were the main failures in the shuttle incidents? A lot of people, including safety people, just can’t tell you what those organizational learnings would be. So not only are they not going looking for these things, but quite often that experience, that understanding, is just not available, Eric. I think it’s a big gap. I think there’s a role for human factors people and systems people to fulfill there. But it’s very difficult for an organization to fund a position whose role it is to go looking for things that may or may not happen, or that might be very unlikely to happen. In these times, it’s quite challenging to resource that position in an organization.

A couple of things come to mind, because I’ve seen some organizations do quite well at learning through case studies of others. So, as a senior leadership team, looking at something like the 737 MAX and what transpired around that, looking at Challenger, looking at Texas City, or looking at Deepwater Horizon, and using these as case studies to ask, how could this happen here? And driving that reflection, because then you’re starting to force this learning from outside the industry and push on whether it could potentially happen here. And the other piece I’ve seen, and you touched on the human factors piece, is some organizations that proactively, or maybe every few years, run a safety culture assessment as an example. Now, my challenge with a lot of safety culture assessments is that people will do a survey which will give you no insights into what you’re talking about. But when I’m thinking about a robust one, you’re surveying and speaking to a lot of employees to look at what could go wrong. And you also do a review of system factors. You look at a lot of the practices, the processes, the changes, the things that have occurred over the past few years.

So essentially, you’re kicking the tires on the organization on a regular basis. What I’m talking about is closer to really kicking the tires, looking at the system components as well as the survey analysis, because the survey alone won’t be good enough.

I think you’re right. Organizations are doing surveys; they’re running focus groups. Some leaders will be doing walk-arounds, going to facilities and talking to their staff. If they’re prepared for that, in terms of what they should ask, that can work really well. These are all activities and tools that we have available, but I don’t think they’re typically aimed at trying to pull out these deeper organizational issues, or maybe the different sources of information are not combined to give that overall view. Occasionally, organizations will get an independent organization in to do that review for them, which can be quite interesting. But again, that takes you back to the issue of having to learn from those recommendations as well. And we have seen quite a few cases where independent contractors who’ve been asked to come in and review an organization temper their findings because they want continued employment from that company. We’ve seen that in some of the major financial events. Barings Bank is a good example, where the auditors did not see issues, or, when they saw issues, did not communicate them to the board, which contributed to the demise of the bank.

So, there are lots of barriers and structural issues that might prevent some of the tools you suggested from working really effectively. But there are tools out there that can be used. We’re making general comments about what we’re seeing in industry; it’s not to say that there aren’t some organizations doing this well. I think it’d be really good to unpack those lessons and learnings and communicate them more widely, because there are pockets of good practice. I’m not saying no one’s doing anything at all here. There are pockets out there. We need to understand what they are and what is effective, and help to share those more widely with other organizations that maybe are not doing this proactively.

That’s often the tricky part because once something goes wrong, it makes front-page news. The 737 MAX makes front-page news: multiple investigations, lots of insights, lots of learnings. But does that mean that Airbus, on the other hand, which hasn’t had such a failure, is doing all of this proactively? You don’t necessarily know, because they’re generally quieter about it. So, it could actually be pure luck, or it could be good practices. And that’s the tricky part.

It could, but it could also be… If you look at an organization that’s had a few incidents or a couple of disasters, people might think, oh, well, actually X, Y, and Z is a bad company. It’s because of them. It’s the fundamental attribution error. If someone is driving poorly, you think it’s because they’re a bad driver. Whereas if you do something yourself, if you cut someone up and so on, then you think, well, there are all these other reasons why I did that. So, we tend to attribute failures to people, as if it’s an issue with them, rather than thinking about all the contextual factors that influence behavior. So maybe that fundamental attribution error is something that’s important when we’re looking at these disasters, because it’s easy to say, well, they’re just a bad company, and that won’t happen to us. We’re different. We employ different people. We’ve got all these processes and systems, and it won’t happen to us. Risk blindness is an issue for us as well.

To touch briefly on Barings Bank: the same symptoms that appeared at Barings Bank were probably present in many other institutions, because it’s not that hard to have a rogue trader. The difference there was the scale of that rogue trader’s losses, but they’re present everywhere. NAB in Australia had three rogue traders on the FX side at roughly the same time. And there are lots of other examples that don’t get reported, or get reported on page one hundred of the newspaper if you really seek them out, because it’s never a cause célèbre. But they happen a lot more often than we think.

I think they do. I think you’re right that we pick these examples and talk about these big disasters partly because there’s so much information available on them. And it does become a little bit unfair that we keep going back to the same disasters, but they’re the ones on which we have the most information. They’re the ones that have been investigated to the nth degree. But you’re right, there are lots of other failures going on. Not all of them become so high-profile. We do know that lots of other organizations maybe have similar events, but, like you say, they don’t make the press for whatever reason, and they don’t become case studies on training courses for the next 30 years. You could pick Barings Bank, and there would have been several other banks with the same issues at the same time, because they had the same processes, or lacked the same processes, as Barings Bank; it just didn’t play out in the same way. You know, maybe they had a huge loss, but it wasn’t enough to destroy the bank, and therefore it’s less visible to everybody else.

But you’re right, we’re picking a few case studies here because these are the ones we have detail on. It’s not to say this isn’t occurring much more widely than that.

So, Martin, thank you very much for joining me. A really interesting series of topics, and the link that a lot of organizations fail for the same reasons. I think the really big takeaway is how we learn better from investigations and then how we learn proactively before anything ever occurs. How do we have that questioning attitude on an ongoing basis? Because it’s too easy to close your eyes to something and think, no, it’s okay, we’re okay. And really, how do you drive that questioning attitude within the business? So, Martin, these are really interesting topics. Obviously, your website, humanfactors101.com, is an excellent source for insights. Is that the best way for somebody to reach you to get more insights?

Yes, certainly. I write quite a lot on that website, so you can go there and have a look. There’s a lot more information on there, or you can follow me on LinkedIn. If you search for Human Factors 101, you’ll find me there on LinkedIn. Please get in touch.

Excellent.

Thank you for listening to The Safety Guru on C-Suite Radio. Leave a legacy, distinguish yourself from the pack, grow your success, capture the hearts and minds of your teams, and elevate your safety. Like every successful athlete, top leaders continuously invest in their safety leadership with an expert coach to boost safety performance. Begin your journey at execsafetycoach.com. Come back in two weeks for the next episode with your host, Eric Michrowski. This podcast is powered by Propulo Consulting.

The Safety Guru with Eric Michrowski

More Episodes: https://thesafetyculture.guru/

C-Suite Radio: https://c-suitenetwork.com/radio/shows/the-safety-guru/

Powered By Propulo Consulting: https://propulo.com/

Eric Michrowski: https://ericmichrowski.com

ABOUT THE GUEST

Martin Anderson has 30 years of experience in addressing human performance issues in complex organizations. Before joining an oil and gas company in Australia as Manager of Human Factors, he played a key role in developing human factors within the UK Health & Safety Executive (HSE), leading interventions on over 150 of the UK’s most complex major hazard facilities, both onshore and offshore. He has particular interests in organizational failures, safety leadership, and investigations. Martin has contributed to the strategic direction of international associations and co-authored international guidance on a range of human factors topics.

For more information: www.humanfactors101.com.



EXECUTIVE SAFETY COACHING

Like every successful athlete, top leaders continuously invest in their Safety Leadership with an expert coach to boost safety performance.

Safety Leadership coaching has been limited, expensive, and exclusive for too long.

As part of Propulo Consulting’s subscription-based executive membership, our coaching partnership is tailored for top business executives who are motivated to improve safety leadership and commitment.
Unlock your full potential with the only Executive Safety Coaching for Ops & HSE leaders available on the market.
Explore your journey with Executive Safety Coaching at https://www.execsafetycoach.com.