Pictured from left to right: Britni Strong, Janell Forget, and Katarina Vickovic
4 December 2025 | News

Regulators race as AI rewrites risk

Artificial intelligence took centre stage at the Cayman Captive Forum 2025 as industry leaders warned that captives must quickly adapt to a fast-evolving regulatory and technological landscape. In a panel titled “Smart Risk: AI Transformations in Captive Insurance,” Katarina Vickovic (Honigman), Janell Forget (UMass Memorial Health), and Britni Strong (Strategic Risk Solutions) outlined how AI is already reshaping underwriting, claims handling and governance—and why failing to prepare may expose organisations to significant new risks.

The panel began by grounding the audience in what is meant by AI—a term often invoked but less often clearly defined. Vickovic noted that AI can be viewed across four dimensions: functionality, capability, technological approach and application domain. Functionality refers to how AI interacts with its environment, from simple reactive systems such as smart thermostats to more sophisticated tools with limited memory, like self-driving cars. Capability spans the spectrum from narrow systems focused on specific tasks to multi-domain intelligence. Technological approach includes traditional machine-learning techniques and the now-ubiquitous generative AI, while application domain concerns the sector in which AI operates—healthcare, finance, insurance, and beyond.

Strong extended the framing by outlining five broad stages of AI development: generative AI, agentic AI, artificial general intelligence (AGI), artificial superintelligence and the more speculative singularity. She stressed that although generative AI currently dominates public discussion, organisations are already adopting agentic AI, which can autonomously plan, reason and execute tasks with minimal human intervention. In insurance settings, this shift is profound. While generative AI supports drafting policies, summarising data or creating customer-facing content, agentic AI can autonomously validate claims, trigger alerts for risk managers, adjust premiums or even initiate payments based on continuous data flows. Looking further ahead, AGI could transform insurers from reactive risk bearers into proactive orchestrators capable of modelling global geopolitical and financial dynamics, or even designing captive programmes autonomously.

Turning to the legal landscape, Vickovic emphasised that regulation in the United States remains fragmented. There is no comprehensive federal AI law, and regulatory priorities have shifted significantly across administrations. Recent federal activity under the current Trump administration has focused on deregulation and promoting innovation, including the rescission of prior AI oversight frameworks and the launch of America’s AI Action Plan, which aims to accelerate deployment, build AI infrastructure and expand the technical workforce. Meanwhile, Congress is considering a raft of AI-related bills, though their progress remains uncertain.

The most active regulatory developments are occurring at the state level. In the 2024–25 legislative session, all 50 states and the District of Columbia proposed or enacted AI legislation, spanning issues from transparency and criminal misuse to professional credential protection. Colorado’s comprehensive AI Act—focusing on consumer protection, algorithmic fairness and risk management—has become a model for other states. Common features include mandatory risk-management frameworks, regular impact assessments, disclosure to end users (such as patients) and, in some cases, presumptions about whether reasonable care was exercised when harm occurs. Litigation trends are also shifting: early suits targeted AI developers, but plaintiffs are now increasingly pursuing deployers, such as healthcare organisations, insurers or service providers. The UnitedHealth class action on AI-driven Medicare decisions was offered as a cautionary example.

In the healthcare sector, Forget described AI as both long-standing and fast-accelerating. At UMass Memorial Health, AI is now embedded across clinical and operational workflows, forming part of the organisation’s formal strategic priorities. A patient may interact with at least five AI tools during a single admission, from triage algorithms to clinical decision support and discharge planning. To ensure safety, quality and compliance, UMass implemented a robust governance framework comprising strategic oversight and a Responsible AI Committee. This committee evaluates each proposed AI tool, assessing fairness, appropriateness, validity, effectiveness and—above all—patient safety. Regular auditing, transparency, appropriate contracting and education are core components. Forget underscored that without strong governance, clinicians may adopt AI informally, increasing risk; with proper support, however, AI can enhance care and protect patients.

Strong concluded by describing how service providers are increasingly leveraging AI to create efficiencies for captives. SRS’s data science team develops AI-enabled data warehouses, underwriting applications and extract-transform-load platforms to help organisations derive actionable insight from vast datasets. Examples included real-time operational monitoring networks, dynamic medical stop-loss pricing tools and multilingual employee education platforms that reduced risk and cost simultaneously.

Ultimately, the panel agreed that AI is here to stay. For captives and their insureds, measured and responsible adoption—paired with strong governance, legal awareness and collaborative engagement with service providers—will be essential in realising AI’s benefits while mitigating its risks. Organisations must remain curious, nimble and proactive as they build frameworks that balance innovation with safety, ethics and compliance.
