Autonomous Vehicles Change Insurance Liability. AI Supervision Determines Who Pays.


In October 2023, a Cruise robotaxi operating in San Francisco struck a pedestrian who had been knocked into its path by a separate hit-and-run vehicle. The robotaxi's AI system detected the pedestrian under the vehicle, assessed the situation, and executed a pullover maneuver, dragging the pedestrian 20 feet before stopping. The incident resulted in a suspension of Cruise's operating permit and a fundamental question the insurance industry has been slow to answer: when the driver is software, who carries the liability? Cruise ultimately faced over $2 million in combined fines from NHTSA ($1.5 million), the DOJ ($500,000), and California's PUC ($112,500) for safety and reporting failures tied to the incident.

Traditional auto insurance is built on a premise that has held for over a century. A human driver operates a vehicle. The driver's behavior, skill, and judgment determine accident causation. Insurance prices the risk of that human decision-making through rating factors like age, driving history, and mileage. Fault determination follows from reconstructing what the driver did and whether it met the standard of reasonable care.

Autonomous vehicles dismantle that premise. At SAE Level 4 and above, no human driver exists. The AI system perceives the environment, plans the route, and executes driving decisions. When that system causes harm, the question becomes: what did the model fail to anticipate, and who is responsible for that failure?

The insurance industry's answer will determine how autonomous vehicle risk is underwritten, priced, and distributed. The answer requires capabilities most carriers do not yet have.

The Liability Shift: From Driver to Manufacturer to Algorithm

Auto insurance liability has historically followed a clear chain. The driver causes an accident through negligence. The driver's liability policy responds. The injured party is compensated. Disputes center on fault determination between human actors.

As vehicle automation increases, liability migrates from the driver to the vehicle manufacturer, and ultimately to the AI system that made the driving decision. This migration follows the SAE levels of driving automation.

SAE Levels 0-2 represent driver assistance. Adaptive cruise control, lane-keeping assist, and automatic emergency braking support but do not replace the human driver. The driver remains responsible for vehicle operation. Liability stays with the driver, though ADAS failures can create product liability claims against the manufacturer. Insurance rating remains primarily driver-based, though carriers are beginning to incorporate ADAS presence as a rating factor.

SAE Level 3 introduces conditional automation where the vehicle handles driving in specific conditions but requires the human to resume control when the system requests it. This is where liability becomes genuinely ambiguous. If the system fails to request a handoff in time, is the human at fault for not monitoring a system that told them monitoring was unnecessary? Mercedes-Benz's Drive Pilot system, certified for Level 3 operation in certain conditions, includes a manufacturer liability commitment for accidents that occur while the system is engaged. That commitment acknowledges where the liability logically belongs, but it raises underwriting questions the insurance industry has not fully resolved.

SAE Levels 4-5 represent high and full automation. No human driver is required. The vehicle operates autonomously within its operational design domain (Level 4) or universally (Level 5). At these levels, the liability framework for auto insurance fundamentally breaks. There is no driver to hold at fault. The "operator" is an algorithm. Fault determination becomes model evaluation: did the AI system perform within its design specifications? Did it handle the scenario within its operational design domain? Did its perception, planning, or control systems fail in a way that a properly designed system should have avoided?

This is not a theoretical concern. Waymo operates thousands of autonomous rides daily. Zoox is testing in multiple cities. Tesla's Full Self-Driving system, though classified as Level 2, is increasingly operated by drivers who treat it as higher-level automation. The vehicles are on the road. The liability questions are live.

Underwriting AI: Evaluating the System Inside the Vehicle

For carriers underwriting autonomous vehicle risk, the traditional rating model provides almost no useful signal. The vehicle owner's driving history is irrelevant when the vehicle drives itself. The owner's age, gender, and years of experience are meaningless rating factors when the owner never touches the steering wheel.

What matters is the AI system: its perception capabilities, its decision-making architecture, its failure modes, and its performance boundaries. Underwriting autonomous vehicle insurance requires evaluating the AI that operates the vehicle, and that evaluation demands capabilities the insurance industry has not historically needed.

Operational design domain boundaries. Every autonomous driving system has an operational design domain (ODD): the conditions under which it is designed to operate. Waymo's system operates in mapped urban environments with speed limits and weather constraints. A system designed for highway driving may not handle unprotected left turns at urban intersections. The ODD defines the boundary between situations the AI can handle and situations where it will fail. Underwriters need to assess ODD boundaries the way they currently assess driver capabilities, because the ODD is the system's equivalent of skill and experience.
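To make the underwriting question concrete, an ODD can be modeled as a set of explicit constraints and a scenario checked against each one. This is a minimal sketch: the field names, the specific conditions, and the example values are all illustrative, not drawn from any manufacturer's actual ODD specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ODD:
    """Hypothetical operational design domain: the conditions a
    driving system is certified to handle."""
    max_speed_mph: int
    allowed_weather: frozenset   # e.g. {"clear", "light_rain"}
    geofenced: bool              # restricted to pre-mapped areas
    night_operation: bool

def within_odd(odd: ODD, speed_mph: float, weather: str,
               in_mapped_area: bool, is_night: bool) -> bool:
    """A scenario is inside the ODD only if every attribute falls
    inside the system's declared boundaries."""
    if speed_mph > odd.max_speed_mph:
        return False
    if weather not in odd.allowed_weather:
        return False
    if odd.geofenced and not in_mapped_area:
        return False
    if is_night and not odd.night_operation:
        return False
    return True

# Illustrative urban robotaxi profile
urban_robotaxi = ODD(max_speed_mph=45,
                     allowed_weather=frozenset({"clear", "light_rain"}),
                     geofenced=True, night_operation=True)

print(within_odd(urban_robotaxi, 30, "clear", True, False))  # True
print(within_odd(urban_robotaxi, 30, "snow", True, False))   # False: weather outside bounds
```

The underwriting parallel: every `False` branch is a scenario class the system is not rated for, and the frequency of such scenarios in the insured territory is part of the risk.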

Perception system limitations. Autonomous vehicles perceive their environment through cameras, lidar, radar, and ultrasonic sensors. Each modality has known failure modes. Cameras lose effectiveness in low light and direct glare. Lidar performance degrades in heavy rain and snow. Radar struggles with stationary object classification. The fusion architecture that combines these inputs determines whether the system can compensate for single-sensor failures. A carrier underwriting a fleet in Phoenix faces a different perception risk profile than one underwriting a fleet in Seattle, and the difference has nothing to do with the vehicle owner.
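The territory-dependence of perception risk can be illustrated with a toy fusion model. The effectiveness values and fusion weights below are invented for illustration; real figures would come from manufacturer validation data, and a real fusion architecture is far more complex than a weighted average.

```python
# Illustrative per-sensor effectiveness by condition (0-1 scale).
# All numbers are assumptions, not measured values.
SENSOR_EFFECTIVENESS = {
    "camera": {"clear_day": 1.0, "night": 0.5, "heavy_rain": 0.6},
    "lidar":  {"clear_day": 1.0, "night": 1.0, "heavy_rain": 0.5},
    "radar":  {"clear_day": 0.9, "night": 0.9, "heavy_rain": 0.85},
}

def fused_availability(condition, weights=None):
    """Weighted-average proxy for how much perception capability the
    fused stack retains in a given condition."""
    weights = weights or {"camera": 0.4, "lidar": 0.4, "radar": 0.2}
    return round(sum(weights[s] * SENSOR_EFFECTIVENESS[s][condition]
                     for s in weights), 3)

print(fused_availability("clear_day"))   # 0.98
print(fused_availability("heavy_rain"))  # 0.61
```

Even this crude model shows why a Phoenix fleet and a Seattle fleet carry different perception risk: the same stack retains far less capability in sustained heavy rain.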

Decision-making under uncertainty. The hardest underwriting challenge is evaluating how an AV system handles novel situations outside its training distribution. The Cruise incident illustrates this: the system encountered a pedestrian trapped under the vehicle, a scenario likely underrepresented in training data. Its response was consistent with its general programming but catastrophically wrong for the specific situation. Evaluating how a system responds to edge cases is central to assessing the risk it carries.

Software update risk. Unlike a human driver whose skill level changes gradually, an autonomous driving system's capabilities change with each software update. A carrier that underwrites a fleet based on version 4.2 of the driving software may face a materially different risk profile when the fleet updates to version 4.3. Over-the-air updates can change perception algorithms, planning heuristics, and control parameters overnight. Traditional annual policy terms assume the insured risk remains roughly stable between renewals. Autonomous vehicle risk can change between Monday and Tuesday.
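One way a carrier might handle this is a credibility-style adjustment: after an over-the-air update, the rating multiplier reverts toward an uncertainty loading, then blends back toward the experience-based rate as post-update mileage accumulates. This is a hedged sketch; the parameter names, the linear blend, and the specific values are all assumptions, not an actuarial standard.

```python
def update_risk_multiplier(base_multiplier, miles_since_update,
                           credibility_miles=1_000_000,
                           uncertainty_loading=1.25):
    """Blend an experience-based rate with an uncertainty loading,
    weighted by post-update mileage (a proxy for observed
    performance of the new software version). Illustrative only."""
    credibility = min(miles_since_update / credibility_miles, 1.0)
    return credibility * base_multiplier + (1 - credibility) * uncertainty_loading

print(update_risk_multiplier(1.0, 0))          # 1.25: fresh update, full loading
print(update_risk_multiplier(1.0, 1_000_000))  # 1.0: fully credible again
```

The design choice worth noting: the trigger is the software version change itself, not the policy renewal date, which is the inversion of traditional annual re-rating the paragraph above describes.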

ADAS Data and the Pricing Problem

Before fully autonomous vehicles dominate the market, advanced driver assistance systems are generating data that carriers can use to build the rating frameworks that autonomous vehicle insurance will eventually require.

Telematics data from ADAS-equipped vehicles provides granular information about system engagement and performance. When adaptive cruise control is engaged, how often does the driver override it? When automatic emergency braking activates, was the activation appropriate or a false positive? When lane-keeping assist intervenes, does the driver consistently correct the system's steering input?

This data creates a bridge between driver-rated and system-rated insurance. A vehicle where the ADAS systems engage frequently and perform well represents a different risk than one where the driver consistently overrides or disengages the assistance systems. Carriers that build rating models incorporating ADAS performance data are developing the analytical capabilities they will need when the driver disappears entirely.
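A minimal sketch of such a bridge factor, assuming a carrier receives engagement mileage and override counts from telematics. The weights here are hypothetical placeholders; a real filed rating plan would calibrate them against loss experience.

```python
def adas_rating_factor(engaged_miles, total_miles,
                       overrides, interventions,
                       base_factor=1.0):
    """Illustrative blend: reward high ADAS engagement, penalize
    frequent driver overrides of system interventions.
    All weights are assumptions for illustration."""
    engagement_rate = engaged_miles / total_miles if total_miles else 0.0
    override_rate = overrides / interventions if interventions else 0.0
    factor = base_factor * (1 - 0.15 * engagement_rate) * (1 + 0.20 * override_rate)
    return round(factor, 3)

# Driver engages ADAS 80% of miles, overrides 5 of 50 interventions
print(adas_rating_factor(800, 1000, 5, 50))  # 0.898: modest discount
```

As the driver's share of operation shrinks toward zero, the engagement term dominates and the rating model converges toward pricing the system rather than the person.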

The challenge is access. OEMs control vehicle data, and the terms under which insurers can access telematics and ADAS performance data vary by manufacturer. Some OEMs are building their own insurance products, using proprietary vehicle data as a competitive advantage. Tesla Insurance prices policies based on a real-time Safety Score derived from vehicle telemetry that no other carrier can access. This data asymmetry creates a market where the manufacturer of the AI system also controls the information needed to price the risk it creates.

For the broader insurance market, resolving data access is a prerequisite for building viable AV insurance products. A carrier cannot underwrite AI system risk without evaluating AI system performance, and that evaluation requires data the carrier does not currently control.

The Product Liability Dimension

As liability shifts from drivers to manufacturers, autonomous vehicle insurance shifts from personal auto to product liability. This changes the insurance market in ways the industry is only beginning to address.

OEM liability absorption. Several manufacturers have signaled willingness to accept liability for accidents caused by their autonomous systems while engaged. Volvo made this commitment early. Mercedes-Benz extended it to Drive Pilot. Waymo carries its own liability for its robotaxi operations. If the system drives, the system's manufacturer bears responsibility for its driving decisions. But manufacturer self-insurance and direct liability acceptance reduce the addressable market for traditional auto carriers. If the OEM covers the system's liability, what risk remains for the policyholder's insurer?

Residual risk. Even with OEM liability commitments, residual risk remains. The vehicle owner still faces liability for incidents outside the autonomous system's engagement, for vehicle maintenance failures that contribute to accidents, and for situations where the system requested a handoff that the owner failed to execute. Insuring this residual risk requires understanding the boundary between system liability and owner liability, a boundary that shifts with every software update and ODD modification.

Reinsurance implications. Autonomous vehicle risk concentrates in ways that traditional auto insurance does not. A software defect in a widely deployed autonomous driving system can simultaneously affect millions of vehicles. A single perception algorithm failure in a specific weather condition could produce correlated losses across an entire fleet. This concentration risk resembles cyber exposure more than traditional auto exposure, and it requires reinsurance structures that account for systemic, correlated loss potential.
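The difference between independent accidents and a correlated software defect can be shown with a small Monte Carlo sketch. Probabilities, severities, and fleet size are illustrative assumptions chosen only to make the tail behavior visible.

```python
import random

def worst_annual_loss(n_vehicles, p_independent, p_systemic,
                      severity, trials=2000, seed=42):
    """Worst simulated annual loss for a fleet. Independent accidents
    hit vehicles one at a time; a systemic software defect hits the
    entire fleet at once. All parameters are illustrative."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        # independent, vehicle-level accidents
        loss = sum(severity for _ in range(n_vehicles)
                   if rng.random() < p_independent)
        # one fleet-wide defect event, fully correlated
        if rng.random() < p_systemic:
            loss += n_vehicles * severity
        worst = max(worst, loss)
    return worst

independent_only = worst_annual_loss(200, 0.01, 0.0, 50_000)
with_systemic = worst_annual_loss(200, 0.01, 0.01, 50_000)
print(independent_only, with_systemic)
```

Under these toy assumptions the systemic scenario's worst year is dominated by the correlated event, which is exactly the tail shape that pushes AV reinsurance toward cyber-style structures rather than traditional auto treaties.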

Regulatory Fragmentation

The regulatory landscape for autonomous vehicle insurance remains fragmented in ways that compound the underwriting challenge. The United States has no federal autonomous vehicle legislation. State regulations vary significantly.

California requires AV operators to carry a minimum of $5 million in liability coverage. Arizona, which has been more permissive of AV testing, imposes fewer requirements. Some states have updated their insurance codes to address autonomous vehicles explicitly. Others apply existing motor vehicle insurance frameworks that assume a human driver.

This fragmentation means a carrier underwriting autonomous vehicle risk across multiple states must navigate different liability standards, coverage requirements, and regulatory expectations in each jurisdiction. The compliance burden scales with geographic expansion, and it changes each legislative session as states update their approaches.

International variation adds another layer. The UK's Automated Vehicles Act 2024 establishes a framework where the "authorized self-driving entity" bears primary liability. The EU's proposed AI Liability Directive creates a presumption of causality for AI-related harm. Japan has amended its Road Traffic Act to accommodate Level 4 vehicles. Each framework makes different assumptions about how liability distributes between manufacturers, operators, and owners.

What Carriers Need

The autonomous vehicle insurance market will grow as deployment scales. Analysts project the global AV market could reach trillions within the decade. The insurance premium pool associated with that market represents significant revenue for carriers that can underwrite the risk. But underwriting AV risk requires capabilities that sit outside the traditional auto insurance skillset.

Carriers need the ability to evaluate AI systems, not just vehicles. The perception stack, the decision-making architecture, the operational design domain, and the software update cadence all factor into risk assessment. This evaluation is not a one-time underwriting exercise. It is an ongoing assessment that must account for the fact that the insured system changes with every update.

Carriers need data access arrangements with OEMs that provide sufficient telemetry to assess system performance. Without this data, underwriting AV risk is guesswork with actuarial formatting.

Carriers need regulatory monitoring across every jurisdiction where they operate, because the liability framework for autonomous vehicles is being written in real time by legislators, regulators, and courts that are still learning how the technology works.

The fundamental shift is this: auto insurance has always been about pricing human behavior behind the wheel. Autonomous vehicles replace human behavior with algorithmic behavior. The carriers that learn to evaluate, price, and monitor algorithmic driving decisions will underwrite the next generation of mobility risk. The carriers that continue pricing vehicles based on their owners' demographics will be underwriting a risk that no longer exists.
