Artificial intelligence: time to worry, or just to think?
Artificial intelligence (AI) is constantly in the headlines, with one variant in particular, ChatGPT, taking up a large amount of space in newspapers and magazines.
ChatGPT is a form of AI that absorbs large amounts of information and generates text on a given topic. Used correctly, it can be a useful tool for summarising a complicated subject for someone looking to explain it, and it has consequently caused a lot of uncertainty over how some professions, such as journalism, will cope with it.
There is, of course, a downside. Used incorrectly, it can help students cheat on homework or, in one memorable case that hit the headlines, write a formal legal submission, in the process inventing the legal citations to past cases that supposedly supported it, to the fury of the judge.
How does this new tool affect insurance, and captive insurance in particular? Marcus Schmalbach, chief executive officer of Ryskex, told Captive International that one central aspect of AI is machine learning (ML), which can help its user identify patterns in data and gain insights into how best to use that data. According to Schmalbach, ML can be of great benefit to a captive in its general insurance operations.
Schmalbach points to an important aspect of tools such as ChatGPT: “It’s a fantastic tool, but it still makes mistakes. It’s fascinating, but you will never have, as some people expect, the opportunity to say: ‘ChatGPT, what will be my premium for earthquake in Tokyo next week?’, or ‘what are the hurricane underwriting guidelines?’. The answer is always: ‘I can’t do that, I’m not able to make predictions. I’m not able to give you realistic data on that’.
“But then, and this is very interesting, the machine says: ‘If you are interested in how we can underwrite this, this is what you should take into account’ and it gives you a list of very important things you should have in mind. It may cost you thousands of dollars just to get the list.
“So, it has an impact, but the final decisions and the final underwriting will never be done by ChatGPT.”
The human element was also mentioned to Captive International by Matthew Queen, owner of The Queen Firm and chief executive of Sherbrooke Corporate, who pointed out that while AIs such as ChatGPT can absorb information, they do not calculate. But, he points out, they do have their uses.
Underwriting memoranda on individual risks can be produced very fast in the captive insurance space, Queen said, adding that, conceivably, if someone trained an AI with enough human oversight of data selection, a good chunk of a feasibility study could be automated.
If you know how to calculate loss runs, have a database in which to store them, and have an objective way of reviewing the data, you can compare the results with known commercial rates and produce a feasibility study in relatively short order.
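The comparison Queen describes could be sketched roughly as follows. This is a minimal, illustrative example only: the function names, the 30% expense load, and all the figures are assumptions, not anything drawn from an actual feasibility model.

```python
# Hypothetical sketch: summarise historical loss runs and compare the
# implied rate with a known commercial rate. All names and figures
# are illustrative assumptions.

def loss_rate(loss_runs, exposure):
    """Average annual losses per unit of exposure."""
    avg_annual_loss = sum(loss_runs) / len(loss_runs)
    return avg_annual_loss / exposure

def feasibility_summary(loss_runs, exposure, commercial_rate, expense_load=0.3):
    """Compare the captive's indicated rate (losses plus an assumed
    expense load) against the commercial market rate."""
    indicated = loss_rate(loss_runs, exposure) * (1 + expense_load)
    return {
        "indicated_rate": round(indicated, 4),
        "commercial_rate": commercial_rate,
        "captive_favourable": indicated < commercial_rate,
    }

# Five years of losses against $10m of exposure, versus a 2% market rate
result = feasibility_summary(
    [120_000, 95_000, 140_000, 110_000, 105_000],
    exposure=10_000_000,
    commercial_rate=0.02,
)
print(result)
```

In practice the "objective way of reviewing the data" would be far more involved (trending, development factors, credibility weighting), but the core of a feasibility study is exactly this kind of comparison between an indicated captive rate and the commercial alternative.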
According to Queen, fears that AI will doom large parts of the jobs market through automation are largely baseless, given present AI's inability to calculate. But there is a major caveat: AI is evolving all the time and occasionally springs surprises, even on its owners.
“These things are not alive. These are not conscious. But they are very intuitive, in terms of their ability to work with someone like you and me,” said Queen.
“Earlier this year it was announced that one AI had developed the ability to translate Bengali, based on a relatively small number of words. This baffled its owners, who had not programmed that ability into it.”
As a result of this new and unpredictable technology, cyber risk analytics firm CyberCube is urging the insurance industry to pay particularly close attention to AI. Ashwin Kashyap, co-founder and chief product officer of CyberCube, said that AI is truly transformational to the world at large, and affects all industry verticals, including insurance.
“In our opinion, it is as big as the cloud, the mobile phone, and other transformational technologies that we’ve seen over the past several decades,” Kashyap said. “As a result, we need to pay close attention to what it means to the cyber insurance market. From CyberCube’s perspective, we firmly believe that ubiquity of AI is not an ‘if’ question, but a ‘when’ question.
“And when that becomes reality, you should expect a regime change in terms of what the cyber threat landscape would look like.”
It’s worth pausing on one particular point about humanity’s quest to develop technology in new and different ways. Just because we can do something, should we? Or should we pause and think about the implications?