Published: 26/10/2025
The rush to implement Artificial Intelligence (AI) in fleet safety is a double-edged sword. While AI offers promising advancements, hasty adoption carries significant risks, especially where safety is concerned. Sean Ritchie, Vice President of Fleet Solutions at Solera, highlighted this during a discussion at the American Trucking Associations' 2025 Management Conference & Exhibition in San Diego.
Much of the focus on AI adoption has been driven simply by fleets feeling they are behind the curve and fearing they will be outcompeted. The stakes are different, however, when it comes to safety. According to Johan Land, Samsara's head of engineering for safety, the number of trucks involved in fatal crashes has increased by 49% in the last decade. This is a life-or-death situation, and given the current state of AI, fleets are best served by balancing AI with human intelligence.
One of the primary issues with AI in fleet safety is false positives. Nine out of every 10 accidents are caused by driver mistakes, which is exactly the risk video-based AI systems are meant to catch, but false positives, where the AI flags a risk that does not exist, are a significant concern. Ritchie emphasized that no matter the video-based AI system, false positives are inevitable. This creates two major issues: wasted time and driver distraction. Even the best AI solutions have a 2%-10% false positive rate, and some are much worse. Every second spent reviewing non-risk videos is time not spent on actual risks.
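As a rough, back-of-the-envelope illustration of that time cost, the sketch below applies the 2%-10% false positive range cited above to hypothetical alert volumes and review times; the alert count and seconds-per-clip are assumptions for the example, not figures from Ritchie or the article.

```python
# Back-of-the-envelope estimate of reviewer time consumed by false positives.
# ALERTS_PER_DAY and REVIEW_SECONDS_PER_ALERT are hypothetical assumptions;
# only the 2%-10% false positive range comes from the article.

ALERTS_PER_DAY = 500            # assumed video alerts generated fleet-wide per day
REVIEW_SECONDS_PER_ALERT = 45   # assumed time a reviewer spends on each clip

for fp_rate in (0.02, 0.05, 0.10):
    false_alerts = ALERTS_PER_DAY * fp_rate
    wasted_hours = false_alerts * REVIEW_SECONDS_PER_ALERT / 3600
    print(f"{fp_rate:.0%} false positives: {false_alerts:.0f} non-risk clips/day, "
          f"~{wasted_hours:.1f} reviewer-hours/day spent on them")
```

Even at the low end of the range, that is reviewer time spent on clips with no actual risk in them, every day.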
False positives also create noise, preventing fleets from focusing on the 20% of drivers who represent 80% of the fleet's risk. In-cab feedback systems that produce false positives become distractions and eventually discredit the entire system.
Another critical issue is the concept of 'risk totality': the idea that a fleet must identify and act on all the risk captured in its video, not just the risk the AI flags. Ritchie noted that upwards of 30% of the risk a fleet is liable for is missed by AI-only solutions. Fleets are liable for acting on all data presented to them, not just AI-flagged risks, making supplemental human review necessary. If a video carries only AI risk labels but contains other, unflagged risks, the fleet is still liable for those risks. In court, opposing counsel will likely subpoena all of the driver's video and find situations where there was risk and the fleet did not act.
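A minimal sketch of that idea, using a hypothetical data model rather than any vendor's actual API: the set of risks the fleet must act on is the union of what the AI flagged and what a human reviewer found in the same footage.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    """One recorded video clip and the risk labels attached to it (hypothetical model)."""
    clip_id: str
    ai_labels: list[str] = field(default_factory=list)
    human_labels: list[str] = field(default_factory=list)

def risks_fleet_must_act_on(clip: Clip) -> set[str]:
    # Risk totality: the fleet is accountable for every risk visible in the
    # footage, whether or not the AI flagged it, so take the union of reviews.
    return set(clip.ai_labels) | set(clip.human_labels)

clip = Clip("c-101",
            ai_labels=["following_too_close"],
            human_labels=["following_too_close", "phone_in_hand"])
print(risks_fleet_must_act_on(clip))
# {'phone_in_hand', 'following_too_close'} -- the unflagged risk still counts
```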
Ritchie also cautioned against outsourcing coaching to in-cab technologies. While in-cab feedback is a good supplement, it is a poor substitute for human interaction. Humans need a human-to-human connection, and the part of video-based safety that works, human coaching, allows for collaboration and improvement. He urged fleets not to sacrifice effectiveness for convenience and to balance AI-based safety solutions with human talent.
Land noted that Samsara's AI platform identifies high-priority drivers representing the most risk and escalates those for manager-led training. Less serious infractions are handled by AI, but the goal is to empower managers to have great conversations with their drivers. Ritchie believes that AI-only solutions will improve but are not there yet. He estimates that it will be 3 to 5 years before AI can fully solve these issues.
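As a rough sketch of that escalation pattern, assuming a hypothetical per-driver risk score and threshold (not Samsara's actual scoring or logic), the triage rule looks something like this:

```python
# Illustrative triage rule: escalate the highest-risk drivers to manager-led
# coaching and leave less serious infractions to automated feedback.
# The risk scores and threshold are invented for the example, not Samsara's.

def route_driver(risk_score: float, high_risk_threshold: float = 0.8) -> str:
    if risk_score >= high_risk_threshold:
        return "escalate for manager-led coaching conversation"
    return "handle with automated in-cab / AI feedback"

drivers = {"driver_a": 0.92, "driver_b": 0.35, "driver_c": 0.81}
for name, score in drivers.items():
    print(f"{name} (risk {score:.2f}): {route_driver(score)}")
```

The design point is the same one Land and Ritchie make in prose: automation filters the volume, but the highest-risk cases end up in a human conversation.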
Until then, the stakes remain high. Video-based safety has saved countless lives, but the risk of getting it wrong is literally life and death.
Q: What is the main risk of hasty AI adoption in fleet safety?
A: The main risk is the potential for false positives and missed risks, which can lead to wasted time, driver distraction, and increased liability.
Q: What is the 'risk totality' concept in fleet safety?
A: Risk totality refers to the idea that all risks in a video must be identified, not just those flagged by AI, to avoid liability issues.
Q: Why is human-to-human coaching important in fleet safety?
A: Human-to-human coaching is crucial because it provides a connection and collaboration that in-cab technologies cannot, allowing for effective improvement and risk management.
Q: What is the current state of AI in fleet safety according to Sean Ritchie?
A: According to Sean Ritchie, AI in fleet safety is not yet fully effective and is about 3 to 5 years away from being able to solve these issues on its own.
Q: How does Samsara's AI platform assist in fleet safety?
A: Samsara's AI platform identifies high-priority drivers who represent the most risk and escalates them for manager-led training, while handling less serious infractions with AI.