If people intend to use data to help mitigate risk or save lives, that data must be trustworthy, says an artificial intelligence expert.
Sean Griffin, CEO and co-founder of Disaster Tech, a disaster risk data firm, pointed out during CatIQ Connect’s recent Quarterly Webinar Series that his company is part of the National Science Foundation AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography. The key word in that title, he emphasized, is ‘trustworthy.’
“We need these models to be trusted,” he said during his session entitled AI Applications in Catastrophes: Risks and Rewards. “Not just by the researchers who build them, but by the practitioners and by the public who rely on these models to make life-saving or risk mitigation decisions that may affect a community for decades and generations to come.”
One issue affecting our reliance on AI is its built-in assumptions about data quality: human influences bias artificial intelligence and the way it interprets data.
AI is something that humans train and put together, as Griffin observed. This process is “going to bias [AI], naturally, based on the human’s understanding of the issue at hand,” he said. “It’s going to bias [AI] based on the person or the people who are building the model and training the model on their institutional understanding. Many of these models that have been created for AI applications are not being done with practitioners on the ground.”
Sean Griffin of Disaster Tech speaks at the recent CatIQ Connect Quarterly Webinar Series during his session entitled ‘AI Applications in Catastrophes: Risks and Rewards.’
The institute is taking a research-to-operations (also referred to as R2O) approach. In other words, AI experts like Griffin are working with practitioners, risk managers, and researchers to make sure an AI model meets an operational objective. So, “if [I], as a risk manager, go into the field to leverage this model for decision-making, [the AI model] actually has the types of tasks and procedures that I’m looking to accomplish,” he said.
Access to high-performance computing (HPC) helps, Griffin said. Not even a decade ago, it was “nearly impossible” to get access to such technology to run models unless you worked for a government agency, national lab, or a university, he said. Now, thanks to connected devices and satellite imagery, the volume and speed at which data is collected leave a large digital footprint.
“You have to have the [ability to] compute to be able to deal with the data that is provided to you,” he said.
Nowadays, graphics and computing technology companies like Nvidia and Intel have greatly reduced the price of the infrastructure needed, Griffin said. “And cloud computing has really changed the game. We’re really at the intersection [where] artificial intelligence, cloud, and high-performance computing [are] converging to allow access at a much more democratized level than ever before, because it’s just much cheaper.”
One emerging issue is the shortage of technical skills to take on this work, Griffin said. “We need to shore up that technical skills gap, so that we have professionals who have that blend of practitioner skills and technology skills to be able to take advantage of these HPC, AI, and cloud technologies in concert with one another to leverage these models for the real-time application.”