Inspiration
Many of the most memorable plays in football come when a wide receiver catches a deep ball and breaks free for a huge gain in one explosive play. Defenders naturally try to stop this, but there are strict rules on what defenders and receivers may do to prevent or facilitate the catch and the run that follows.
What it does
This site takes video input in the form of a .mp4 file; these clips should contain footage of a potential pass interference violation.
How We Built It
We built FairCall as a multi-stage AI system designed to detect and classify Defensive Pass Interference (DPI) and Offensive Pass Interference (OPI) in NFL video footage. Given the complexity of pass interference rulings, our approach combined object detection, heuristic-based decision-making, and a custom LLM-powered analysis layer, all integrated into an interactive Streamlit-based UI for ease of use.
Dataset Creation and Preprocessing
Since there were no publicly available datasets for pass interference detection, we manually compiled and labeled a dataset of 70+ NFL videos, covering a variety of interference scenarios. These videos contained over 25,000 frames, which we narrowed down to approximately 10,000 frames by filtering out non-relevant frames where no interference-like action occurred. This preprocessing was crucial to maintaining efficiency and focusing our model training on meaningful data.
To ensure accurate labeling, we annotated key objects in each frame:
- Receiver and Defender – Identifying the primary matchup.
- Football – Tracking ball arrival time relative to the players.
- Moments of Contact – Marking instances of potential interference.
We automated portions of this annotation using object-tracking techniques but still required manual verification due to the complexity of pass interference, which often involves subtle movements like hand placement or slight pushes.
Object Detection with YOLO
To detect key elements within each play, we used YOLO (You Only Look Once), a real-time object detection model. YOLO was trained to recognize:
- Players (receivers and defenders) with bounding boxes.
- Football trajectory to understand when the ball was catchable.
- Key moments of contact in the play.
However, YOLO alone was insufficient for identifying interference because it classifies objects in single frames and does not inherently track motion sequences. To address this, we fed YOLO’s outputs into a heuristic-based decision system, allowing us to infer whether a contact event should be flagged as interference based on movement patterns, position, and timing relative to the ball's arrival.
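To give a sense of how per-frame YOLO outputs can feed a heuristic layer, here is a minimal sketch that flags frames where the receiver's and defender's bounding boxes overlap enough to be a contact candidate. The box format, function names, and threshold are illustrative, not our exact pipeline:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) bounding boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def contact_frames(receiver_boxes, defender_boxes, threshold=0.15):
    """Return frame indices where the receiver and defender boxes overlap
    enough to be treated as a contact candidate by downstream heuristics."""
    return [
        i for i, (r, d) in enumerate(zip(receiver_boxes, defender_boxes))
        if iou(r, d) >= threshold
    ]
```

In practice the boxes come from YOLO's per-frame detections, and the flagged indices are what the timing heuristics below reason over.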
Heuristic-Based Interference Detection
Since pass interference rulings are rule-based but highly subjective, we implemented custom heuristics based on NFL guidelines to determine whether an interaction qualified as DPI or OPI. Some of the key heuristics included:
- Relative positioning – Whether the defender was playing the ball or restricting the receiver’s movement.
- Hand usage – Detecting pushing, grabbing, or arm extension before ball arrival.
- Early contact – Determining if the defender initiated contact before the ball was catchable.
- Defender’s body orientation – Differentiating legal plays (e.g., face-guarding without contact) from clear interference.
These heuristics were applied to YOLO-detected bounding boxes and tracked across frames to analyze movement sequences and not just static positions.
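A toy version of the early-contact heuristic can illustrate the idea: compare the first contact frame against the ball's arrival frame, and use the defender's orientation as a rough proxy for "playing the ball." The field names and the rule itself are a simplified sketch, not our full rule set:

```python
from dataclasses import dataclass

@dataclass
class PlayEvents:
    """Frame indices of key events extracted from the tracked detections."""
    first_contact: int          # first frame with significant contact
    ball_arrival: int           # frame where the ball becomes catchable
    defender_facing_ball: bool  # rough proxy for "playing the ball"

def classify_contact(events: PlayEvents) -> str:
    """Simplified early-contact rule: contact before the ball arrives, by a
    defender who is not playing the ball, suggests DPI; otherwise the
    contact is treated as legal in this sketch."""
    if events.first_contact < events.ball_arrival and not events.defender_facing_ball:
        return "DPI candidate"
    return "legal contact"
```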
Custom LLM for Contextual Analysis
Recognizing that interference rulings involve rule interpretation and expert judgment, we integrated a custom fine-tuned LLM (Large Language Model) trained on NFL officiating guidelines and past referee explanations. The LLM was responsible for:
- Providing explanations for flagged plays (e.g., why an action qualified as DPI/OPI).
- Handling borderline cases where heuristics were inconclusive, offering a human-like interpretation.
- Suggesting rule clarifications based on existing game laws.
This LLM component allowed FairCall to not only detect interference but also justify the decision, making it more explainable and aligned with how human referees analyze plays.
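One way to hand the heuristic outputs to the LLM is to serialize them into a structured prompt. The sketch below is hypothetical: the field names and wording are illustrative, not our exact prompt template:

```python
def build_ruling_prompt(play: dict) -> str:
    """Assemble a prompt asking the fine-tuned model to justify a ruling
    from the heuristic features. All field names here are illustrative."""
    return (
        "You are an NFL officiating assistant. Given the extracted play "
        "features, explain whether the contact qualifies as DPI, OPI, or a "
        "legal play, citing the relevant rule.\n\n"
        f"First contact frame: {play['first_contact']}\n"
        f"Ball arrival frame: {play['ball_arrival']}\n"
        f"Defender playing the ball: {play['defender_facing_ball']}\n"
        f"Heuristic verdict: {play['heuristic_verdict']}\n"
    )
```

Grounding the prompt in concrete, extracted features rather than raw video keeps the model's explanation tied to what the detection pipeline actually observed.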
Streamlit-Based UI for Visualization and Interaction
To make our system accessible and interactive, we built a Streamlit UI, which provided:
- Video upload and processing – Users could upload NFL footage for analysis.
- YOLO object detection visualization – Bounding boxes for players, football, and key interactions were displayed.
- Play classification results – The UI highlighted whether the play involved DPI, OPI, or was legal.
- LLM-generated explanations – Each ruling came with a textual breakdown of why it was flagged as interference (or not).
This allowed users—whether referees, analysts, or fans—to interact with the AI, understand decisions, and refine the system's accuracy over time.
Final Implementation
By combining YOLO for object detection, heuristic-based logic for rule enforcement, and an LLM for contextual decision-making, we developed FairCall, an AI-powered tool capable of providing real-time interference analysis in NFL games. The result was an intelligent system that not only detects pass interference but also explains its reasoning, bridging the gap between AI automation and human decision-making in sports officiating.
Challenges we ran into
One of the biggest challenges in developing FairCall was the lack of a publicly available dataset labeled for pass interference. We manually annotated 40+ videos for Defensive Pass Interference (DPI), 15+ for Offensive Pass Interference (OPI), and 10+ for no-call plays, carefully identifying the receiver, defender, contact, and the football in each play. This produced over 25,000 video frames, which we cleaned down to roughly 10,000 relevant frames. Given the complexity of NFL footage, where multiple players move dynamically, hand-fighting occurs, and legal vs. illegal contact is often subjective, manual labeling was time-consuming and error-prone. Automating parts of the process with object-tracking tools helped, but distinguishing incidental contact from actual interference remained a challenge.

Initially, we used YOLO for object detection to track key elements such as the receiver, defender, football, and their spatial positioning, but YOLO's limited temporal understanding meant it struggled to detect the sequences of movement that define pass interference, such as an early push, a grab, or an illegal block before the ball arrived. Since YOLO classifies frames independently, it often failed on plays where interference was subtle and developed over time, leading us to explore heuristic rule-based models for better classification. However, defining clear heuristics was itself difficult, as NFL pass interference rules depend on subjective factors like hand placement, contact timing, and whether the defender was making a play for the ball. Implementing these heuristics required extracting features such as relative velocity, the defender's position before the ball's arrival, and whether illegal contact significantly altered the receiver's ability to make a catch.
Additionally, real-time processing posed a computational challenge: full-video analysis was infeasible, so we optimized inference by extracting key frames, using frame-differencing techniques to isolate moments of potential interference, and reducing resolution to speed up detection without compromising accuracy. Despite these efforts, false positives remained an issue, particularly in hand-fighting scenarios, simultaneous plays for the ball, and cases where incidental contact occurred after the ball was touched, requiring further tuning of detection thresholds. Finally, fine-tuning the custom LLM brought its own challenges because of the model's inherent tendency to hallucinate.
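The frame-differencing idea can be sketched in a few lines: keep a frame only if it differs enough from the last frame kept. This pure-Python stand-in operates on flattened grayscale frames; in practice this would run on NumPy/OpenCV arrays, and the threshold is illustrative:

```python
def mean_abs_diff(frame_a, frame_b):
    """Mean absolute pixel difference between two flattened grayscale frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def key_frames(frames, threshold=8.0):
    """Keep the first frame, then any frame that differs enough from the
    previously kept frame. Near-duplicate frames are dropped, which is how
    we cut the frame count before running detection."""
    kept = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i], frames[kept[-1]]) >= threshold:
            kept.append(i)
    return kept
```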
Accomplishments that we're proud of
One of our biggest achievements in developing FairCall was creating a custom-labeled dataset from scratch, manually annotating 70+ plays despite the complexity of NFL footage, where interference depends on movement patterns and timing rather than just static positions. We successfully implemented YOLO for object detection, tracking receivers, defenders, and the football, and, recognizing its limitations in understanding temporal sequences, we incorporated heuristic-based rules to analyze contact timing, hand placement, and defender positioning. This hybrid approach allowed us to distinguish legal contact from clear interference, improving classification accuracy. Additionally, we optimized real-time processing by using frame-differencing and strategic downsampling, enabling faster inference without compromising accuracy. The final prototype not only detects potential pass interference but also differentiates between DPI and OPI, demonstrating the potential of AI to bring objectivity to officiating in the NFL. We also used the LLM, which provided well-grounded reasoning based on the YOLO data and the frames. We tied everything together in Streamlit and were pleased with how it turned out.
What we learned
We learned a lot about AI and machine learning: how to create custom datasets, fine-tune models, and build and tune custom LLMs. We also learned more about building a full product as a team, and about time management, since the hackathon's schedule demanded efficient use of our time.
What's next for Pass Interference Prediction
Future work will primarily focus on covering other kinds of football fouls, especially those that are ambiguous and hard to call consistently. Beyond that, the system could expand to other sports and become faster.