Canadian Underwriter

Aggregation Aggravation

March 1, 2016   by Greg Meckbach

Risk professionals, particularly those at small organizations, may not fully appreciate “interdependencies” with partners and suppliers and, as such, may not have a firm grasp of the full impact of a related failure. The cascading impact can begin with a number of incidents, including a cyber security breach, a terrorist attack or the failure of critical infrastructure.

Whether or not risk managers understand the impact to their own organization, aggregation of risk “really depends on the sophistication of the organization and how critical it really is,” suggests Michael Loeters, vice president and regional practice leader, risk management (Ontario) for BFL Canada.

Fortune 500 firms tend to be “well-aware of interdependencies, especially from a business interruption standpoint,” says Kent Pitkin, national director for commercial lines at managing general agent April Canada, part of the France-based April Group. Pitkin cites the ice storm that hit southern Ontario shortly before Christmas 2013 as an example of an event that could lead to aggregation of risk.

Environment Canada notes that freezing rain fell most of the day on December 21, with 16.6 millimetres of rain reported at Toronto Pearson International Airport and another 13.6 millimetres of precipitation (mostly freezing rain) the following day. During the height of the storm, Toronto Hydro reported about 300,000 of its 726,000 customers were affected, mainly as a result of branches and trees falling on power lines.

“The small to medium-sized guy doesn’t really understand the interdependencies of how these events affect not only their insurance, but their business in general,” Pitkin suggests. Events like the ice storm can prevent a business from receiving materials or even opening, he points out.

That differs from large manufacturers, especially those in the aviation and auto sectors, who know full well that an event affecting business partners may prevent them from shipping or receiving components, Pitkin says. Regardless of company size, however, these incidents can affect not only a specific company, but multiple suppliers or multiple components in the supply chain, he explains.



“Just-in-time inventory was a boon to supply chain because you only have parts in front of you that you need for that day, or that week,” says Darius Delon, associate vice president of risk services at Mount Royal University. “So if you have a supply of parts for seven days, and your worst possible scenario for shutting down the factory is only seven days long, you’re good,” Delon says.

However, organizations relying on just-in-time delivery “need mechanisms in place to actually identify the supply chain risk and how it impacts the larger entity,” he cautions.
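Delon's rule of thumb above can be reduced to simple arithmetic: a just-in-time buffer is adequate only if the days of parts on hand cover the worst plausible supplier outage. A minimal sketch, using entirely hypothetical supplier names and figures:

```python
# Hypothetical illustration of the buffer-vs-outage comparison Delon
# describes: a supplier is exposed when its worst-case outage (in days)
# outlasts the inventory buffer on hand.

suppliers = {
    # name: (days_of_inventory_on_hand, worst_case_outage_days)
    "stamped panels": (7, 7),
    "wiring harnesses": (3, 10),
    "fasteners": (14, 5),
}

def exposed_suppliers(suppliers):
    """Return suppliers whose worst-case outage outlasts the buffer."""
    return sorted(
        name for name, (buffer_days, outage_days) in suppliers.items()
        if outage_days > buffer_days
    )

print(exposed_suppliers(suppliers))  # prints ['wiring harnesses']
```

The mechanism Delon calls for amounts to running this comparison across every critical supplier, not just the obvious ones.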

“Identifying aggregation of risk and interdependencies is an important component of your overall approach to determining the risk profile of your organization,” Nowell Seaman, director of global risk management for Potash Corporation of Saskatchewan, says, commenting in his capacity as vice president and board member of RIMS, the risk management society.

“You are trying to look at all significant sources of risk and aggregation could certainly be overlooked. Not surprisingly, a small or mid-sized firm – used to looking at the risks of its own operation – might not be thinking that far,” Seaman suggests.

Andrew Graham, an adjunct professor at the Queen’s University School of Policy Studies, is of the view that most organizations do not have a robust understanding of their interdependencies. “It’s not [only] risks to their businesses, but risks in other businesses that will affect them,” states the former senior deputy commissioner for Correctional Service Canada and former warden of the Kingston Penitentiary.

Citing information from the federal government, Graham notes in Canada’s Critical Infrastructure: When is Safe Enough Safe Enough?, published by the Macdonald-Laurier Institute, critical infrastructure includes electrical power, water treatment, sewage treatment, hospitals, the blood supply, banking and securities. Other examples are telecommunications and broadcasting, chemical manufacturers, railways, natural gas, oil production and transmission systems.

Critical infrastructure, Graham notes, includes “those physical and information technology facilities, networks, services and assets, which, if disrupted or destroyed, would have a serious impact on the health, safety, security or economic well-being of Canadians or the effective functioning of governments in Canada.”

Risk managers should carry out “a very critical assessment of their infrastructure and their susceptibility,” Delon suggests. It is important to know “what it is that is susceptible – whether that’s hacking, attacking or just vandalism – because a lot of those infrastructure pieces are critical to the day-to-day life of a Canadian,” he points out.

Pitkin says an interruption of financial networks, for example, can cause liability risk. “It could cripple an area of the economy, especially if it was more on the point-of-sale side where your average person can’t go to the bank and get $20 out for lunch,” he notes. “People could lose money based on [securities] trades that they could have made.”

One major concern to risk managers is the potential for “a very large successful malicious attack that brings down, say, all of the infrastructure necessary to make the financial transfers that we all rely on,” Seaman warns.

“We rely heavily on the government and large organizations that run those financial systems [to] have the controls and the means in place to prevent such an attack, but I think the concern is a large malicious attack and I don’t think we have necessarily experienced something of the magnitude that people are concerned about,” he contends.


“The highest level of threat to Canada’s [critical infrastructure] rests in the areas of natural disasters and system degradation, both of which lend themselves to investment in resilience and redundancy,” Graham writes in his report, adding that other threats include terrorism, vandalism and hacking.

“I think that one of the problems is a lot of firms lack imagination with respect to what could happen to them or what could affect them if something went wrong with pipeline infrastructure or anything like that,” Graham told Canadian Underwriter. “By lack of imagination I mean there’s a certain point at which you have to sit down and say, ‘What’s the worst thing that could happen?'”

One bad thing that could happen is a cyber attack on the power grid, AIR Worldwide notes in the report, Aggregated Cyber Risk: The Nightmare Scenarios.

Such an attack “could lead to business interruption losses across a large geographic area,” the report states.

“It is plausible that such a power outage could cause an extreme aggregation loss and could be caused by the types of malware and viruses that hackers have already produced.”

Critical infrastructure “has a cyber component,” agrees José Fernandez, a computer engineering professor at École Polytechnique de Montréal, whose areas of expertise include the security of critical infrastructure control systems and malicious software.

“It’s not somebody going to a hydro power distribution [station] and putting a bomb and blowing it up,” Fernandez says, but, instead, something like someone sending the wrong computer commands to an electrical utility.

“The electrical grid is very sensitive to perturbations,” he says. “We saw that in 2003,” Fernandez reports, referring to the August 14 outage when large portions of the Midwest and Northeast United States and Ontario experienced an electric power blackout.

“The outage affected an area with an estimated 50 million people and 61,800 megawatts (MW) of electric load in the states of Ohio, Michigan, Pennsylvania, New York, Vermont, Massachusetts, Connecticut, New Jersey and the Canadian province of Ontario,” the joint U.S.-Canada Power System Outage Task Force notes in its report, released in 2004.

The computer supervisory control and data acquisition alarm and logging software of FirstEnergy, which comprises seven U.S. electrical utility operating companies, failed some time after 2:14 pm that day, reports the task force, chaired by the U.S. secretary of energy and Canada’s natural resources minister.

“I hear people say that can’t happen again,” Graham says. “I don’t believe it. It could easily happen again. The more complex a system is, the more vulnerable it is to breaking down in a way that makes it harder to put back together again,” he contends.


Although the outage was caused by an accident, “the same thing could have happened by somebody sending the wrong commands or the wrong information, and that is definitely a possibility,” Fernandez cautions. If that were to take place, such an incident could affect critical infrastructure for “days, if not weeks,” he maintains, pointing out that miscreants could carry out such an attack using existing technology.

“As we are moving towards smart grid and smart meters, one of the groups that will be interested in manipulating those smart grids will be the pot growers who are doing hydroponic growing of marijuana because it consumes a lot of electricity, which is a telltale sign of an illegal operation, so they have been manipulating the old-style meters,” Fernandez expects.

“We don’t have proof of this yet, but [grow-op operators] will definitely be interested in manipulating the smart meters to hide their consumption, and as they do that, they will develop tools that could eventually be used to manipulate not only the meters, but the grid and actually force the control systems to make the wrong decisions and create brownouts and blackouts,” he ventures.

That being the case, a technology used for one purpose can later be used for a more destructive purpose, which is “exactly” what happened with spam email, Fernandez recounts.

“Most of the hacking tools, the malware technologies, were developed in the early to mid-2000s to support the spam industry because they needed to have some infected machines to generate the spam,” he points out. “But the same technology is now being used for denial of service, for extortion, for much more nefarious cyber crimes.”

Malware can also enter an organization’s computer system when a business partner is infected, suggests Greg Markell, Toronto-based account manager, cyber/directors and officers for HUB International HKMB, part of HUB International Ltd. “System integration is incredibly complex, and any connected end points that aren’t evaluated can be potential vulnerability points for threat actors to potentially find a back door into client systems,” Markell warns.

“Right now, the biggest challenge that we have is educating our clients on the potential for interconnectivity issues, in terms of what that means,” he says. “To do due diligence and audits on every single integrated system and end point that is connected to other organizations or their supply chain can be a very heavy lift, in terms of time spent, money spent,” he reports.

With regard to electrical power failure, “from an aggregation standpoint, many departments will look at the institution – whether it’s the City of Calgary or a big building downtown, or a campus – and say, ‘Oh, okay. If power goes out for my department, I have refrigeration that relies on back-up power and we have it. And that power is infinite and large and powers the entire institution or the entire building,'” Delon says.

“So people’s misconception of the risk controls that are in place makes it a bigger risk,” he comments.

This is because for many organizations, their back-up power systems provide “a fraction” of the power they actually use daily, he explains. So while back-up power may keep the building lights and ventilation going, “all the other power that is used – like refrigeration and all that other stuff – is usually not on back-up power,” he adds.

That is one reason there needs to be “oversight and aggregation of all those risks,” in a large organization, says Delon.

“Once you see everyone relying on the same risk control method, it should tweak you to say, ‘Hold on, all those departments and all those floors don’t have back-up power.’ So you either then go best practice, and actually have emergency generation that’s capable of giving you full power – which is an expensive proposition – but if you do that, then you actually have full back-up power,” he says.
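The aggregation problem Delon describes can be made concrete with a back-of-the-envelope tally: each department assumes the shared generator covers it, but summing the loads shows it covers only a fraction. A hedged sketch, with hypothetical kilowatt figures:

```python
# Illustrative only: hypothetical departmental loads all relying on the
# same back-up generator. Aggregating them reveals the shortfall that no
# single department sees on its own.

GENERATOR_CAPACITY_KW = 500  # assumed emergency generation on site

department_loads_kw = {
    "lights and ventilation": 300,
    "lab refrigeration": 250,
    "server room": 180,
    "kitchen refrigeration": 120,
}

total_demand = sum(department_loads_kw.values())   # 850 kW
shortfall = total_demand - GENERATOR_CAPACITY_KW   # 350 kW uncovered
coverage = GENERATOR_CAPACITY_KW / total_demand    # ~59% of demand

print(f"Total demand: {total_demand} kW")
print(f"Back-up covers {coverage:.0%}; shortfall of {shortfall} kW")
```

The point of the exercise is the one Delon makes: the risk only becomes visible once everyone's reliance on the same control is added up in one place.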


Cloud computing – whereby an organization uses someone else’s computer server and storage hardware for its own applications – is another way that cyber incidents can cause aggregation of risk, Loeters suggests. “A lot of organizations today are outsourcing the back-up of their data to third parties and certainly a lot more organizations today are starting to use software as a service,” he points out.

“Critical applications that they are using in their business are not installed in their server room like it used to be. They are being hosted by the software company, for example – and I don’t think a lot of organizations realize that if that software vendor or that back-up vendor is in one location and that location goes down for whatever reason, and that data or that application is not replicated, duplicated, redundant, mirrored, et cetera at another location, then they are really scooped,” Loeters says.

A key question around cyber risk is where the data is being hosted, he says. “I don’t think a lot of risk managers are asking that question, in that particular context, but it’s becoming a much bigger issue because if something does happen to that facility where that software is being hosted, and you no longer have access to the data and you no longer have access to the application, it can have a very, very significant impact on your organization,” Loeters warns.

“You might not be able to order product, for example. You won’t have access to your accounting system. You won’t have access to your prospects, your customer database,” he goes on to say.
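The question Loeters says risk managers should be asking — for each hosted application, where is the data, and is it replicated to a second location? — is essentially an inventory exercise. A minimal sketch, with a hypothetical application list:

```python
# Hypothetical inventory of hosted applications. Any application whose
# vendor keeps the data in a single, unreplicated location is a single
# point of failure if that facility goes down.

hosted_apps = [
    {"app": "accounting", "replicated": True},
    {"app": "order entry", "replicated": False},
    {"app": "customer database", "replicated": False},
]

single_points_of_failure = [
    a["app"] for a in hosted_apps if not a["replicated"]
]

# These are the applications — and the ordering, accounting and customer
# functions behind them — that vanish with one facility outage.
print(single_points_of_failure)  # prints ['order entry', 'customer database']
```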

In general, Delon suggests, aggregation of risk is an issue “that hasn’t really hit the spotlight,” among risk managers. “A lot of effort in the past, with regards to risk management transitioning into enterprise risk management, into strategic risk management, has been focusing on the top risks and dealing with the top risks, and not necessarily diving into the minutiae to see, ‘Well, these low-level risks are also top risks if you look at it from a different perspective,'” he explains.



Fernandez suggests it is likely “within a half-generation, maybe even less than that, maybe five to 10 years,” that terrorists will attempt a cyber attack on critical infrastructure. Right now, that is “definitely the most likely to create insurable damage,” he says, although he notes cyber terrorism is “fundamentally incompatible” with the idea of terrorism for some religious extremists.

“You don’t go to heaven by pressing ‘enter’ on the keyboard,” he comments.

A requirement for an “air gap” between networks controlling electrical power systems and the public Internet is stipulated in a lot of critical infrastructure regulations, Fernandez says.

“The problem is, nobody follows that principle because there are too many advantages of not following it, like in terms of when you have a service call, you don’t have to have the guy fly in from Germany,” he adds.

In general, Pitkin suggests that the aggregation of risk from terrorism is more of an issue for insurers than for actual insureds.

After the hijacking of four passenger airplanes by al-Qaeda operatives on 9/11, “people actually realized… that they had a lot of value in one area,” he says.

“If you are talking about some type of manufacturing or some type of process industry, usually they are centered around certain areas,” Pitkin adds.

For example, automotive assembly plants tend to be located near parts manufacturers, he says, suggesting that an incident in a certain area can affect multiple organizations.

Delon regards terrorism as more of a reputational risk. “You can say all you want, whether [the risk is] high or low, from a true risk perspective, but if the population says, ‘Hey, I’m worried,’ and that precludes them from coming to your downtown office building… that issue is then just sitting there unaddressed and it’s weighing in the minds of others,” he explains. “It needs to be addressed, regardless of what you consider the true risk to be.”

The risk from terrorism is “very different from the kind of risks typically insured,” such as auto, the Insurance Information Institute reports. “There have been few terrorist attacks, so there is little data on which to base estimates of future losses, either in terms of frequency or severity,” the institute notes.

That said, “terrorism losses are also likely to be concentrated geographically, since terrorism is usually targeted to produce a significant economic or psychological impact,” it adds.


A risk manager considering aggregation of risk needs to have “a fairly open imagination about things, but it has to be grounded in a strong sense of probability,” says Graham. “In other words, a terrorist act is entirely possible, but… more probably, you are going to get a vandalism act or a theft act or a negligence act that’s going to have a major effect on you,” he says.

Graham cites as an example the May 2000 tragedy in Walkerton, Ontario, where seven people died and more than 2,300 became sick when drinking water was contaminated with E. coli.

An official inquiry found operators at the local Public Utilities Commission (PUC) “engaged in a host of improper operating practices, including failing to use adequate doses of chlorine, failing to monitor chlorine residuals daily, making false entries about residuals in daily operating records, and misstating the locations at which microbiological samples were taken,” Justice Dennis O’Connor, then Ontario’s associate chief justice, noted in the Report of the Walkerton Inquiry.

In the inquiry, O’Connor found it was “not unusual” for PUC employees to mislabel bottles taken for testing.

The Walkerton incident is “seared in our memories,” Graham comments. Risk managers “have to have a realistic and open understanding of the risks they are facing, and that conversation is a difficult conversation within industry, let alone at a public level,” he suggests.

Some risk managers “tend to treat it more like a process and avoid getting people in a panic,” Graham says. “Fair enough, I don’t want people panicking, but that gets in the way of research and analysis and serving the industry they are a part of,” he maintains.

“If we never identify [critical] infrastructure weaknesses, we can never then fix them, because if we perceive we have no weakness, then we don’t do anything about it,” Delon suggests.

“I think we have to be okay with ourselves to say, ‘You know what? We do have some weaknesses. We don’t necessarily know, but let’s go out and find them, and once we do, let’s not be critical of those who perhaps didn’t find them in the past,’ because this is not a witch hunt. This is, ‘Hey, how can we do continuous improvement to our infrastructure, to our organization?'”
