At the 2026 PLRB Claims Conference in Washington D.C., Swept AI CEO Shane Emmons joined a live panel on The Future of Insurance Podcast alongside Sadiq Isu (Founder & CEO, All Talentz) and Kevin Meyer (Managing Director, PropertyExpert), hosted by Bryan Falchuk. The conversation covers where agentic AI is moving from pilot to production in claims: demand package analysis, long-running claims summarization, and first notice of loss systems that restoration companies are putting directly in policyholders' hands. That last use case caught even the panelists off guard: people are talking to these agentic systems during some of the worst moments of their lives with a candor that complicates the assumption that claims handling always requires a human voice on the other end. The panel also works through AI drift (what Shane calls "heresy," the persistent biases an AI latches onto and won't release), the litigation exposure of AI-driven coverage decisions, and the talent pipeline risk of automating the entry-level claims work that used to train the next generation of expert adjusters.
Transcript
Sadiq Isu: Hello, my name is Sadiq Isu and I'm the founder and CEO of All Talentz.
Kevin Meyer: Hi, my name is Kevin Meyer and I'm the managing director of PropertyExpert.
Shane Emmons: I'm Shane Emmons, founder and CEO of Swept AI.
Bryan Falchuk: And I'm Bryan Falchuk, and this is The Future of Insurance. Thank you all for the presentations. Different perspectives, but I also saw a thread through all of it. The two of you are doing exactly what Swept AI is talking about when Shane talks about having to watch over these systems. So if you feel like there's eyes on you, there are. Or there should be. But you can see how it comes together across the full complexity of claims, the situations claims put people in. There are ways to bring these tools in, in any setting, any language, and we have to make sure that we're doing that right the whole time. And it's not just for compliance, not just external compliance; there are internal compliance pieces to that as well. I did catch the hybrid notion in both of your perspectives on how to actually put this into action. That was one of my first questions: if we're talking about the mix of human and AI, and we have to watch the AI, are we watching the humans differently? How do you think about it when you have to deliver the right kind of outcome, following the right rules and the right procedures, but you've got two cooks in the kitchen? I'm curious to hear that perspective from people actually living the claims. Does the mix create complexity in how we ensure the right rules are being followed, whether it's a human or otherwise? Or do you just not watch the human? I think we're all going to start to hit that more and more. Sadiq, do you want to start things off?
Sadiq: On the human side, it's going to remain exactly the way it is right now in terms of how we deal with the human element in everyday activities. In our organization, even though it's a remote organization, we still watch the humans. We still have tools, sometimes AI tools, watching the humans and ensuring that they are delivering on the responsibilities they are supposed to deliver on. At the same time, take AI as an example. I'm not an expert in AI, but I'm very knowledgeable about a lot of things. AI is also a tool that was trained by humans. It's a machine learning process: the machine has to learn all of the processes, and it continues to learn from all of the available data it has. So the ability to control it and supervise it comes down to the data. If you feed it wrong data, it will give you wrong information. Period. If you feed it right data, it gives you right information. It's the same thing as humans. You train them right, they do the right thing. You train them wrong, they do the wrong thing. It's the same thing, just machine and humans.
Bryan: I can imagine your wheels are turning, but I want to keep them turning because I want to add in your perspective now.
Kevin: From my perspective, we look at it more as: how do we empower the human? When I talked about the product we built on the cat side, we had a problem: there aren't enough humans in that market. And the other problem was there isn't enough time to get the work product done if we just rely on humans. Even if we parachute 10 million people in, it still takes a certain amount of time to do each file. So we looked at it from the other side: the human capacity we can get to, how do we unlock it? How do we get people from outside the market to be able to work in Thailand, in that example? And then how do we help them do it super efficiently? That was the example I used in the presentation, but in anything we do, whether it's daily or cat, that's always the balance we're looking for. We have volume that goes straight through our process, but at the end of the day everything starts with: how do we empower humans and make them more efficient and more accurate?
Bryan: So Shane, the wheel that I thought might be turning was the notion of drift, which we hear about hallucinations a lot, but we don't hear about drift. As you brought that up, it's like, yeah, once you start heading down the wrong path it's going to drift further and further away from where it needs to go. Is that the concern you have as we talk about this, or is there something else brewing in navigating the human plus AI total quality picture?
Shane: It's one that's always in my head. Recently I heard somebody label drift as "heresy": the thing that gets inside the AI that it can't let go of for whatever reason, that you don't know about, and that keeps surfacing back up. It could be that it really latches on to some claim for whatever reason and just loves to resurface some piece of it. And these things are really insidious once they get in. The same thing happens in people, which is why the question of "do you watch people and the AI with the same tool?" matters. People get these biases inside themselves. There's the classic research on judges ruling more harshly before lunch, and I'm sure there are adjusters who do worse adjusting before lunch too and are a lot friendlier when they come back. I'm going to call at 1:30, not 11:30, about my claim. I know that well enough. The AI gets that sort of thing inside it too. It's not going to get hungry, but it's going to have these biases, and we've got to detect them. When I hear what Kevin just said about augmenting, though: people often start by saying "we could use this to watch our people in the same way and judge them." Then they very quickly turn to "our people actually just need to be augmented; they don't need to be doing this." Once we hit this level of automation and accuracy, and the ability to supervise the AI the way we supervised them, people start doing new, more impactful things. That's usually what I really like to see. As much as I think you could watch them the same way, we see the pattern repeat over and over: the trust gap closes and humans suddenly are doing something much more interesting.
Kevin: Tagging on what Shane just said, at the end of the day the market always defers to where the accuracy is. At the end of the day, there is someone, an adjuster, adjudicating it with a policyholder at the end. That's the ultimate watching over the process. And I think what Shane looks at in watching the AI, when you take all the guardrails off, there's tons of exposure there. But a lot of the use cases we've been talking about today, having that guardrail at the end of the process, which is the human touch, the impactful piece, dealing with the emotional aspect of the claim and the accurate payment out, I think that's what we're all trying to empower. Ultimately, if you can get your people to spend more time on the human side of it and give the machines the other part, that's where you want to be. Not complex decision-making, but the ability to have empathy.
Bryan: Do we have audience questions? Someone want to be brave? You might win $100 if you speak. You might not. You won't. But you might.
Audience Member: Hi, this is quite useful. I'm curious what you're seeing in the last few months as agent adoption has increased. What's the pull you're seeing from the carriers? Is it on the FNOL side, on the communication layer, or somewhere else when you're talking about claims and AI?
Bryan: So what are they seeing change?
Audience Member: What's the biggest pull? Like, we have a budget for this, we actually want to go ahead and deploy this in production. What are the biggest use cases you're actually seeing where AI gets deployed this year and moves from pilots to production?
Sadiq: Agentic AI, for those of you wondering what that means, is AI that can independently take its own action based on information it was given or gathered earlier. It should be able to independently think, take actions, and make things happen. But I personally don't think we're there yet, especially in this industry, where the emotional cases are very hard. We're dealing with emotional situations a lot. I experienced a fire damage job, and I was telling this story yesterday: when we went out to that fire job, the mother, the father, and the two sisters had died in that fire. And you could see scratches on the door. They were trying to escape and get out. Even we who went in to do the fire mitigation got emotional about that situation. So imagine an agentic AI is deployed to handle such a claim. Just let that emotion ring in your head for a little bit and ask whether that person is willing to deal with a machine through that experience. We're going to get there at some point, but I don't think we're ready. That's my personal opinion. The experts can say otherwise, but somebody like me gets too emotional in situations like that, and if I had to deal with a machine to get through that experience, I don't know how I would handle it.
Bryan: Can I ask a clarifying point? I think you gripped everyone with that scenario; there's no question. Does your comment apply across the board, to an agentic AI taking the whole claim on? Or do you still say that if it's just a piece of the puzzle? Because the agentic AI largely being deployed today is for a specific piece of the process: it's doing the communication, or you have one that goes off and makes the payment. It's components. Do you think there's space for that "piece of the process" agentic AI today? And are you seeing adoption and openness to that, versus the more holistic picture you're painting?
Sadiq: The picture I'm painting is more holistic, but I'm much more focused on the human-facing aspect. I don't think we're ready for that. For the rest of the process, maybe taking the claims process through payment and some of those things, yes, absolutely. So long as it's not human-facing and is in the background doing what it should be doing, being monitored. I love that you talked about guardrails: staying within the guardrails and being monitored. I think that's actually going to be helpful. But when it comes to human-facing, that's my worry. Are we really there yet? A lot of the claims process is very consumer- or human-facing. Once the first person picks up a phone and files a claim, there is that emotional aspect. I want a human to take that call, because I want somebody to be empathetic about my situation. I do not want a machine to talk to me about my life and the situation of my claim. I want a human to deal with that process. I want that human to come to my house, experience what I've experienced, see the situation of things. Then on the back end, maybe agentic AI is doing the schedule for the adjuster, handling all of the logistics, reminding the adjuster when to be on the job site, feeding information through the claims process, making the estimate happen. And then the adjuster runs through the estimate with the individual, ensuring the scope of work is done. That's why the collaboration is extremely important.
Bryan: I'd like both of your thoughts on it. Different perspectives.
Kevin: I'll put mine out there and you can tell me how you'd manage it. My view, going back to the original question of whether we see this happening anywhere: we have some active pilots around the world leveraging agentic AI in the FNOL space, or, that's probably misleading, immediately post-FNOL. Broker-facing, broker-reported FNOL, mainly commercial. We've taken the residential aspect out of it; commercial is more cut and dry. Through these pilots, we've seen the notorious problem of a lot of information missing at the front end, and the agentic AI is really good at those interaction points. On commercial schedules, especially in small business, it's really good at making coverage determinations when it's trained correctly. That's where we've got active live pilots in market. On the cat side, which I talked about earlier, we have some long-term thoughts on the digital reporting front end, where we don't get the right information: having a digital path and an agentic path that helps get the information inbound, and potentially on the back end too, getting to an agreement on settlement amount. But with all of those, at the end of the day, you have to look at the problem you're solving. In the pilots, we have a market that's broker-interfacing, trying to get the right information at the front end and make a quick decision that's not complicated, and there's a broker there interfacing with the policyholder. On the cat side, we're looking at speed. It's balancing speed versus accuracy.
Shane: Going to that original question of what we're seeing with agentic AI in the last few months, I can think of three instances that are pretty interesting. One is dealing with demand packages. I've seen a couple of folks we've done validation for deploying agentic solutions that go through reams and reams of documents they have to turn around in 10 days to respond to. They've never been able to do a good job at this, and now we're seeing improved results through their agentics. They're re-sorting the documents, finding the things that matter, trying to find that needle in the needle stack of how to make these arguments. They're doing it not just more efficiently, because it was always a terrible process, but they're finding they're able to do it much more effectively. It was always a process stacked against them, and now they have a piece of technology, these agentics, that is much more active in finding what matters. Another one just started a pilot last week after getting validation: really long-running claims, which are super common in Michigan, where they have unlimited medical. Something comes up and the adjuster needs to be re-informed. Anytime there's activity on a claim, they have an agentic system going out there pulling old notes, summarizing them, looking at what it said last time, looking for new policy information, and packaging it all up, so that when the adjuster looks at that email, the research has at least been started for them to validate. That seems to have really alleviated pressure, and in the long run I think it addresses the problem Sadiq raised: when somebody says "sorry, it's my last day, I can't cut you a check now," the next person can get it done, because the agentic system has been keeping everything up to date.
The other one I found super interesting, and I still don't know what I think about it; we've validated it, but I just don't know: it's a first notice of loss agentic system deployed by a restoration company. It's given to the policyholders to come in and interact with. And it seems like they're using it because they don't trust that their insurance company is doing right by them. They're talking to this agentic system that they feel does have their back. That's a really interesting dynamic. From the agentic standpoint, I've been quite shocked, actually. I was going to say surprised, but it's more shocked. When we're doing the monitoring, we're seeing the level of intimacy people are having with this agentic system in some pretty terrible moments of their lives. It makes me think there are ways we can have this hybrid model that I would not have expected. I would have expected immediate rejection of this tool. But maybe because of where it's coming from, or whatever the case may be, people are opening up. They seem more honest, less guarded, and they're having a better outcome because of it. That's been pretty interesting. That one was deployed at the end of the year, and the other two in the last month or so.
Bryan: Maybe the issue is more that we have presumptions on behalf of the customer for what they would want, rather than just asking them. Or maybe they don't even know, but we have to experiment. That's nothing new. We presume certain parts of the population will or won't follow a digital path.
Bryan: We'll take one more. This will be our last one.
Audience Member: With the apprehensions, my brain went to class action lawsuits. You're using a model to decide coverages, and I would think an attorney starts noticing a trend, even though the model might be correct, and files a class action: "You've used this model; we're going to sue over all these cases." Do you see that as a limitation on AI taking off in making coverage decisions for insurance companies?
Kevin: As the decision-maker rather than a tool? Yeah, that's always a concern, right? It's why we're wading carefully into those waters. We think there's the right tool for the right problem, and that's why we've waded into the areas we have. Also, the nice thing about being a global company is that we can mitigate our risk by picking markets that are less risky on that front. We always try to do the right thing as a business. We heavily monitor the outcomes of these decisions, and our clients do as well. That's how we've tried to layer those risks. We always take the approach that this may or may not be a viable product for us, but we'll learn a bunch of things along the way. And it's okay if it's not; we'll get where we need to be from the learnings.
Shane: My primary perspective is making sure you understand what's underlying the model. Every model is, I mean, it's in the name, a model. It's a trend. So there's going to be some argument against it. From our perspective, usually when you dissect these models, it's to understand them well enough to make them defensible, or to say that you're willing to defend them because they have such high utility. But I think law is probably one of the last stops for agentics, because courts are squinting a lot harder at how AI is used. Whether your questions into AI are discoverable is a real issue. How you asked the question of your tool matters: were you looking to deny the claim out of the gate, or were you looking to cover it? It will answer differently. We're talking early stages here, and much more advanced stuff is coming, but that's where my lens goes: you start to get your troublesome claims, your legal stuff, and how this all impacts it. It might help us. But especially in America, we're very litigious, and there are going to be big firms that see a trend and could see a big payday. That's where my lens is.
Bryan: I assume whatever is asked will be demanded; it's discoverable. And number two, if we haven't seen the lawsuits yet, don't think there aren't plans. There's an expectation that someone will open the door for a very large lawsuit and take a very large piece of it for their fee. I'm sure that's already being plotted. I can't say I blame them. I don't like it and I don't agree, but I can't blame them; it's the model. So we should assume as much and be very mindful as we're doing prompt engineering. There are classes you can take on that outside the industry, on how to get the most from these models. We have to be very thoughtful about what we're asking and why we're asking it. And to be fair, we at PLRB see that ourselves: when we get a question in to our attorneys, not to an AI model, we're also looking at how it's being asked. Let's make sure the conversation stays unbiased and unpositioned. We're not here to take a position; we're here to give you the facts. And generally, the questions we get from the carriers are actually quite benign. But we certainly keep an eye on that.
Bryan: We are out of time, but I'm going to cheat. That's the nice thing about being the CEO: I can make that call. I have one thing, rapid fire. When we talk about how AI can be helpful, I said this when I was introducing you, Kevin: there's so much talk about handling the simple stuff. We're struggling to hire new adjusters, so we'll use AI to solve for that, and our experienced folks will do the complex stuff. Now I know we can use AI to help on the complex stuff too. The problem with that mentality is it works great for today's staffing problem, but it's actually creating an even bigger one later, because now we have fewer and fewer people going through the ranks from starting out to becoming those super experienced adjusters. Frankly, every industry is going to struggle with this. Software was the first to hand the entry-level work to vibe coding, and they're already struggling with it: they weren't hiring the next class of engineers. Are we making a problem for ourselves if we start deploying this on what would have been fertile training ground? And do you think there's a path to combat that longer-term risk?
Kevin: I don't know if I have a firm position on it, but logic would say that if we change the way we adjudicate simple claims, make it faster, and take on more with technology, there'll be fewer claims to train newbies on. You balance that against the fact that there are also fewer people coming into the industry, which is a problem. My prior companies were in auto, so one of the biggest wake-up calls I had coming into property was the amount of fees associated with files; it's quite a bit higher than I was accustomed to. So you've got the cost balance, the labor-shortage balance, and technology. Over time you'll take some of these simple claims out of the process; there are a lot of companies working toward that, so there'll be fewer claims to train people on. But that's balanced against fewer people coming into the market and increasing costs across the board. I'm not sure you're creating a problem. I think you're just solving, at least in current conditions, a problem that's out there.
Shane: There's this falsehood we often carry: assuming that the most intense, difficult adjusting had to take a path through the easiest stuff to get there. It's probably an imperfect analogy, but there are no brain surgeons who started on toes as foot doctors. They trained as brain surgeons, and we still found ways to build them into that expertise. What we're going to have to start thinking about is: what are the ways we train expert people without a 10-year torturous slog through the lower-level stuff? How do you get into that higher level directly? I see the same thing with engineers all the time, and the reality is they haven't wanted 10 years of menial tasks for decades now. That's why I'm an entrepreneur instead of still working there. And that's where I think we as business owners and leaders have to get to: we're going to have to train experts deliberately and can't let them learn through osmosis for years.
Sadiq: I personally feel that if it is guided and monitored from the beginning, we will be able to walk that path together going forward. Because if it is separated now, there is always going to be a gap, and that gap creates additional opportunity. Anytime there is a gap, there is an opportunity for improvement or change. That change might not come fast enough, and there might be impact, but if it's coordinated and guided from the very beginning, we won't have that gap. The reason AI opportunities came into this industry is the employment gap. So if we continue on that path and drift completely one-sided, the gap is going to happen again. I was in Nigeria recently, and the airport caught fire, starting from the server room. Everything automated completely stopped working. An airline that was supposed to leave at 9:00 a.m. left at 6:30 p.m. because they did manual check-in for every single person boarding the plane. Just imagine that. That is what we're going to see at some point. Something is going to happen, there is going to be a gap, and we'll have to fix it. So I think if it is guided from now, that can help close whatever gap appears in the future.
Bryan: Well, we are over time, so we'll stop there. But thank you so much to Sadiq, to Kevin, to Shane, and to all of you, especially those who stayed in the room. We appreciate that and the questions. Thank you all so much. You will be able to catch this at future-of-insurance.com.
Want to learn more about how Swept AI helps organizations supervise and evaluate AI systems in production? Get in touch.
