Consumer AI Acceptance Just Doubled in P&C Insurance — Here's What the Insurity 2026 Report Means for Carriers


Consumers stopped worrying about whether AI is in their insurance. They started asking who's watching it.

That is the cleanest summary of the Insurity 2026 AI in Insurance Consumer Survey, released in April 2026. The headline number is the one every executive will quote: consumer support for AI use in property and casualty insurance climbed from 20% in 2025 to 39% in 2026. Resistance fell in parallel: the share of respondents saying they are less likely to purchase from an insurer that uses AI dropped from 44% to 36%.

A doubling of stated support in twelve months is a real shift. The question is what consumers are saying yes to, and what they are still saying no to. The Insurity data, drawn from over 1,000 adults surveyed in February 2026, is unusually specific on that point. Once you read past the top-line number, the survey describes a population that has accepted AI as part of insurance operations and drawn a hard line around AI making decisions on their behalf.

That distinction is the operational story carriers need to plan around.

The Assistance-vs-Autonomy Gap Is the Real Finding

The survey breaks consumer comfort down task by task, and the pattern is consistent. Where AI assists, comfort is high. Where AI decides, comfort collapses.

On the assistance side, 46% of respondents are comfortable with AI generating an insurance quote, 39% with AI tracking the status of a claim, and 38% with AI updating their personal information.

On the autonomy side, only 22% are comfortable with AI filing a claim on their behalf. Only 16% are comfortable with AI canceling or renewing a policy. And as Insurance Business reported, nearly half of respondents express distrust when AI is described as making determinations on claim approvals, fraud flags, or policy adjustments.

Read the spread. A consumer who is comfortable with AI generating a quote is, at least two times out of three, not comfortable with AI making the call to cancel their policy. Even if every respondent in the 16% comfortable with cancellation sits inside the 46% comfortable with quoting, 30 of those 46 points remain. The same person, in the same survey. The 39% headline number does not contradict the resistance numbers. It contains them.

For underwriting, claims, and customer experience leaders, that has a specific implication. The marketing path of "AI-powered insurance" treats consumer acceptance as a single dial. The data treats it as a sorting problem. Carriers who deploy AI in quoting, status updates, and information capture are operating well within consumer comfort. Carriers who deploy AI in claim adjudication, policy disposition, and adverse-action workflows are operating well outside it, and the consumer surface area for backlash is the part of the workflow that ends in a decision letter.

The carriers least exposed to backlash are the ones whose deployment maps cleanly to the assistance side of that line, with documented human checkpoints at every autonomy threshold.

There is a secondary finding in the survey that deserves attention. Trust in AI-driven decisions sits at 33%, with another 26% saying they need more information before they can decide. Roughly a quarter of the consumer population is undecided rather than opposed, and the deciding factor for that group is, by their own statement, information. Carriers who supply it, through clear disclosures, customer-facing summaries of how AI is used in specific functions, and accessible explanations of human review steps, are converting that undecided quarter. Carriers who treat AI use as an internal operational detail are not.

"Visible Human Oversight" Is the Operating Phrase

Jatin Atre, President at Insurity, framed the survey's takeaway in language worth quoting directly. "Consumers have moved past the hype cycle. They are not impressed by the fact that insurers are using AI. They care about how it is being used." On the consequence of treating AI as a cost-cutting lever rather than infrastructure, he was specific: when insurers deploy AI "simply to cut costs or automate decisions without explanation, trust will erode." Confidence builds, by contrast, when AI accelerates claims, sharpens underwriting intelligence, and clarifies interactions through "visible human oversight."

That phrase, "visible human oversight," is the consumer-facing requirement most carriers cannot currently demonstrate.

A carrier can have human oversight in its operating model and still fail this test. If a customer cannot tell, from the artifacts they receive, that a human reviewed the AI output before the carrier acted on it, the oversight is not visible. The denial letter that does not mention human review reads, to a regulator and to a plaintiff's attorney, like a denial letter generated by a machine. The fraud flag that triggers an investigation reads the same way. The policy non-renewal notice reads the same way.

The carrier may have a perfectly defensible internal process. The customer experience does not reflect it.

This is where consumer trust and examination readiness converge. The same artifact that satisfies a consumer who asks "did a human look at this?" is the artifact that satisfies an examiner asking the same question, and the artifact a litigator will subpoena in a bad-faith claim. Carriers who treat oversight as a back-office process, documented in policy manuals and committee minutes, miss the surface area where trust is actually contested. Trust is contested on the artifacts the customer holds in their hand.

A claim file that records, per decision, which AI output was generated, which human reviewed it, what they changed, and when, is also a claim file that produces the visibility consumers in the Insurity survey are asking for. A carrier without that artifact has a marketing problem and a regulatory problem at the same time.

What "Operating Infrastructure" Looks Like in Practice

Atre's other framing, that the industry "cannot treat AI as a marketing headline. It has to treat it as operating infrastructure," translates into a small number of capabilities that are not optional in 2026.

The first is per-decision attribution. Every AI-influenced customer-facing decision, whether a quote, a claim disposition, a coverage recommendation, or a non-renewal, must produce a record that names the model, the inputs, the output, the human reviewer if any, and the final decision. This is the artifact layer. Without it, the carrier cannot prove anything to anyone, including itself.
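To make that artifact layer concrete, here is a minimal sketch of what a per-decision attribution record could look like. The `AIDecisionRecord` schema and every field name are illustrative assumptions, not a standard or any vendor's actual format:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Any, Optional

@dataclass(frozen=True)
class AIDecisionRecord:
    """One immutable record per AI-influenced, customer-facing decision.

    Hypothetical schema: the field names are illustrative, not a standard.
    """
    decision_id: str                # unique key for the customer-facing decision
    decision_type: str              # e.g. "quote", "claim_status", "policy_cancellation"
    model_name: str                 # which model produced the output
    model_version: str              # pinned version, so the run can be reconstructed
    inputs: dict[str, Any]          # the inputs the model actually saw
    model_output: dict[str, Any]    # the raw AI output, before any human edit
    human_reviewer: Optional[str]   # reviewer identity, or None if no human touched it
    human_changes: Optional[str]    # what the reviewer changed, if anything
    final_decision: str             # what the carrier actually acted on
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def write_record(record: AIDecisionRecord, audit_log: list[dict]) -> None:
    """Append to an append-only store; a plain list stands in for it here."""
    audit_log.append(asdict(record))
```

A record like this answers the consumer's "did a human look at this?" and the examiner's version of the same question from a single source.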

The second is autonomy boundary enforcement. The Insurity data is specific about which functions consumers will tolerate as autonomous and which they will not. A governance program that grants the same autonomy level to a quote generator and to a claim denial workflow has not done the work of mapping its decisions to the consumer comfort gradient the survey describes. The 16% comfort number on policy cancellation is a hard signal that the final call in cancellation workflows belongs to a human, every time, with the human review timestamp captured as part of the record.
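A boundary like that can be enforced mechanically rather than by policy manual. The sketch below maps decision types to autonomy tiers along the survey's comfort gradient; the mapping, the tier names, and `enforce_autonomy` are hypothetical illustrations, not a prescribed taxonomy:

```python
from enum import Enum
from typing import Optional

class Autonomy(Enum):
    AI_MAY_ACT = "ai_may_act"            # assistance side: AI output can go out directly
    HUMAN_MUST_APPROVE = "human_review"  # autonomy side: a human signs off, every time

# Illustrative mapping, following the survey's comfort gradient:
# high-comfort assistance tasks versus low-comfort decision tasks.
AUTONOMY_POLICY = {
    "quote_generation": Autonomy.AI_MAY_ACT,
    "claim_status_update": Autonomy.AI_MAY_ACT,
    "personal_info_update": Autonomy.AI_MAY_ACT,
    "claim_denial": Autonomy.HUMAN_MUST_APPROVE,
    "fraud_determination": Autonomy.HUMAN_MUST_APPROVE,
    "policy_cancellation": Autonomy.HUMAN_MUST_APPROVE,
}

def enforce_autonomy(decision_type: str, human_reviewer: Optional[str]) -> None:
    """Stop the workflow before the carrier acts if the required sign-off is missing.

    Unknown decision types default to the strictest tier.
    """
    tier = AUTONOMY_POLICY.get(decision_type, Autonomy.HUMAN_MUST_APPROVE)
    if tier is Autonomy.HUMAN_MUST_APPROVE and not human_reviewer:
        raise PermissionError(
            f"{decision_type} requires a recorded human reviewer before execution"
        )
```

The design choice that matters is the default: a decision type nobody classified gets human review, not autonomy.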

The third is continuous, population-level monitoring. Carriers need to monitor AI in production on dimensions that go beyond model accuracy, including drift, bias, decision distribution, and exception rates by segment, because regulators ask, and because the Lokken discovery ruling and the next two years of bad-faith litigation will ask too. Continuous supervision of AI in production is the substrate that turns AI from a marketing claim into a defensible operating capability.
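As one illustration of what population-level monitoring computes, the sketch below derives adverse-outcome rates by customer segment and flags drift against an approved baseline. The input schema and the tolerance value are arbitrary assumptions for the example:

```python
from collections import defaultdict

def exception_rates_by_segment(decisions: list[dict]) -> dict[str, float]:
    """Share of adverse outcomes (denials, fraud flags) per customer segment.

    Each decision dict is assumed to carry 'segment' and 'adverse' keys;
    the schema is illustrative.
    """
    totals: defaultdict[str, int] = defaultdict(int)
    adverse: defaultdict[str, int] = defaultdict(int)
    for d in decisions:
        totals[d["segment"]] += 1
        adverse[d["segment"]] += int(d["adverse"])
    return {seg: adverse[seg] / totals[seg] for seg in totals}

def flag_drift(current: dict[str, float], baseline: dict[str, float],
               tolerance: float = 0.05) -> list[str]:
    """Segments whose adverse-outcome rate moved more than `tolerance`
    from the approved baseline. The 5-point tolerance is an example,
    not a regulatory figure."""
    return [
        seg for seg, rate in current.items()
        if abs(rate - baseline.get(seg, rate)) > tolerance
    ]
```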

The fourth is examination-ready documentation as a default state. The NAIC AI Systems Evaluation Tool pilot, the At-Bay 2026 InsurSec data on AI-enabled fraud, and the rising frequency of state-level AI inquiries all point to the same operating expectation: a carrier should be able to produce, on a few business days' notice, a defensible inventory of AI systems in use, governance documentation, and decision-level audit trails. Carriers without that capability are running an unhedged exposure on every regulatory cycle.
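In the same spirit, here is a sketch of the export that "a few business days' notice" implies: the system inventory, governance references, and decision-level audit trail bundled into one serializable package. Every name and structure here is a hypothetical illustration:

```python
import json
from datetime import datetime, timezone

def build_examination_package(
    ai_inventory: list[dict],     # one entry per AI system in production
    governance_docs: list[str],   # identifiers for governance documentation
    audit_log: list[dict],        # per-decision records, as sketched above
) -> str:
    """Assemble an examiner-facing package from artifacts that already
    exist as a default state, rather than reconstructing them under deadline."""
    package = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_system_inventory": ai_inventory,
        "governance_documentation": governance_docs,
        "decision_audit_trail": audit_log,
    }
    return json.dumps(package, indent=2, default=str)
```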

These capabilities are infrastructure investments, not slide decks. They have a cost. The cost is the price of operating AI in a market where consumer support has crossed the inflection point and consumer expectations have hardened around oversight.

What the 39% Number Should Change in Q3 Planning

For carriers planning Q3 and Q4 deployments, the Insurity data is most useful when it is translated into specific go/no-go thresholds rather than treated as ambient market context.

Three things should change.

First, expansion of AI in quote generation, claim status communication, and information capture should accelerate. The data supports it. Consumer acceptance is high in these functions, the regulatory exposure is contained, and the operational benefit is real.

Second, expansion of AI into adjudication-stage decisions, claim denials, fraud determinations, and policy disposition should be governed by tighter human-in-the-loop requirements than carriers currently apply. The 16% acceptance number on policy cancellation is the floor. Any deployment in those functions that does not produce per-decision human review, captured as a verifiable artifact, is operating on borrowed time.

Third, the consumer-facing trust narrative should be rewritten. The Insurity data tells carriers that "we use AI" is no longer a differentiator and "we use AI carefully" is no longer a credible message without proof. The credible message is specific: which functions use AI, which decisions a human reviews, and what artifact the customer can request to see what happened on their file. Every word in that sentence matters, because consumers in 2026 are reading carrier disclosures with a level of skepticism that the 39% acceptance number does not soften.

The doubling of consumer support is real. The condition attached to it is the part of the survey carriers will spend the next twelve months either building toward or wishing they had.
