Overview

In contested electromagnetic environments, the ability to detect, classify, and locate RF emitters is a core signals intelligence (SIGINT) capability. Operators need to rapidly answer: What's transmitting? Where is it? Is it friendly?

Your task: build Find My Force — a system that takes RF signal data, classifies emitters by type, estimates their geographic location from multi-receiver observations, and presents the tactical picture on a Common Operating Picture (COP). You will work with real IQ waveform data representing common radar and communications modulation techniques, a live simulation feed from a network of simulated receivers, and the challenge of distinguishing friendly signals from hostile and civilian emitters.

Event Date: March 7th, 2026
Event Time: 8:30 AM – 7:30 PM
Build Time: 9:55 AM – 4:00 PM
Format: Software-only; all work is done on your team's laptops using provided datasets and a live simulation feed
Team Size: Up to 5

The Scenario

You are a signals intelligence operator supporting a joint exercise. Multiple friendly platforms — UAVs, ground vehicles, communications relays — are operating in an area alongside civilian RF activity and potential adversary emitters. Your sensor network consists of several receivers at known positions, each reporting RF detections with signal characteristics and received signal strength.

You have been briefed on what your own forces' signals look like — their modulation types, waveform characteristics, and expected signal profiles. You have labeled training data for these friendly emissions. But you have no labeled examples of hostile emitters. Intelligence suggests adversary forces are operating radar and communications systems in the area, but you don't know exactly what they look like. Your system must figure that out.

Your job: build a pipeline that classifies detected signals, locates them geographically, determines whether they're friendly, civilian, or potentially hostile, and displays the full picture so an operator can make decisions under pressure.

The Two-Stage Challenge

This challenge has two core technical problems that feed into each other:

Stage 1 — Signal Classification

Given a snapshot of raw IQ (in-phase/quadrature) data from an RF emission, classify the signal by modulation type and/or signal type. This is the machine learning core of the challenge.

You will receive a labeled training dataset of real radar and communications waveforms containing several signal types that friendly forces operate. Each sample is a 256-element vector: 128 in-phase components followed by 128 quadrature components, captured at 10 MS/s across a range of signal-to-noise ratios (SNR).
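The 128-I/128-Q layout described above can be unpacked into complex baseband samples like this (a minimal sketch; the function names are illustrative, not part of the provided tooling):

```python
def to_complex_iq(sample):
    """Split a 256-element vector (128 in-phase then 128 quadrature
    components) into 128 complex baseband samples."""
    assert len(sample) == 256, "expected 128 I values followed by 128 Q values"
    i, q = sample[:128], sample[128:]
    return [complex(a, b) for a, b in zip(i, q)]

def amplitude_envelope(iq):
    """Magnitude of each complex sample -- the amplitude envelope."""
    return [abs(z) for z in iq]
```

Working in the complex domain makes envelope, phase, and spectral features straightforward to derive later in the pipeline.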

The training data covers friendly signal types only. The live simulation feed will contain all signal types — including hostile emitters and civilian devices that you have never seen labeled examples of. Your classifier must:

  • Correctly identify known friendly signal types
  • Detect signals that don't match any known friendly pattern and flag them as unknown/hostile
  • Handle low-SNR conditions where signals are buried in noise
  • Produce a confidence score for each classification

This is a semi-supervised learning problem. You know what friendly looks like; you must discover what hostile looks like. Teams that treat this as a pure supervised classification task will miss the hostile emitters entirely.
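One baseline for the unknown/hostile requirement is a simple confidence threshold on the known-class scores — a sketch only, with the class names and threshold value as assumptions to tune against held-out friendly data (stronger approaches are discussed under Beyond the Basics):

```python
def open_set_decision(class_scores, threshold=0.7):
    """If no known friendly class scores above the threshold,
    flag the signal as unknown/hostile instead of forcing a label.

    class_scores: dict mapping known-class label -> model score in [0, 1].
    Returns (label, score)."""
    label, score = max(class_scores.items(), key=lambda kv: kv[1])
    if score < threshold:
        return "unknown-hostile", score
    return label, score
```

Raw softmax scores tend to be overconfident on out-of-distribution inputs, so a plain threshold is only a starting point; confidence calibration and dedicated novelty detection improve on it.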

Stage 2 — Observation Association & Geolocation

When an emitter transmits, multiple receivers in the network may detect it near-simultaneously. But the feed gives you a flat stream of independent receiver observations — it doesn't tell you which observations came from the same emission. Before you can geolocate anything, you need to associate observations into groups that likely correspond to a single emitter at a single moment.

This is an association problem. Consider: if receiver A and receiver B both report a detection within 100ms of each other, with similar IQ characteristics, they probably detected the same emitter. But if three emitters are active at once, you might have 15 observations to sort into 3 groups — and some receivers may not have detected all emitters.
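A greedy time/frequency gate is one way to sketch this grouping step. The observation fields (`t`, `f`, `rx`) and the tolerance values below are assumptions — match them to the actual feed schema:

```python
def associate(observations, window_s=0.1, freq_tol_hz=1e5):
    """Greedy association: an observation joins an existing group if it is
    within the time window and frequency tolerance of the group's first
    observation, and that receiver hasn't already contributed to the group.

    observations: list of dicts with keys
        t  -- detection time in seconds
        f  -- center frequency in Hz
        rx -- receiver identifier
    Returns a list of groups (lists of observations)."""
    groups = []
    for obs in sorted(observations, key=lambda o: o["t"]):
        for g in groups:
            seed = g[0]
            if (abs(obs["t"] - seed["t"]) <= window_s
                    and abs(obs["f"] - seed["f"]) <= freq_tol_hz
                    and obs["rx"] not in {o["rx"] for o in g}):
                g.append(obs)
                break
        else:
            groups.append([obs])
    return groups
```

Greedy gating is fast but brittle when emitters overlap in time and frequency; comparing IQ characteristics across observations (as the text suggests) is the natural next refinement.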

Once you've grouped observations, estimate the emitter's geographic location. Each receiver reports RSSI (which attenuates with distance) and potentially time-of-arrival (which increases with distance). With distance estimates from 3+ receivers at known positions, use trilateration, multilateration, or optimization-based approaches to compute a position fix.

This is a well-defined estimation problem, but it gets interesting when:

  • Your association logic is imperfect (wrong groupings lead to wrong positions)
  • Some receivers miss the detection (fewer than 3 reports for a given emitter)
  • RSSI measurements are noisy or affected by terrain/obstructions
  • Multiple emitters are active simultaneously and must be distinguished
  • You need to maintain a position track over time as the emitter moves

Requirements

What You're Building

Core System (Required)

1. Signal Classifier (ML Model)

Train a machine learning model on the provided IQ training data that can:

  • Classify signals into known friendly types
  • Detect out-of-distribution signals (potential hostile/unknown emitters)
  • Operate across a range of SNR conditions (-20 dB to +18 dB)
  • Output a classification label and confidence score

You must go beyond hard-coded rules — your system must learn from the training data. Approaches could include CNNs on raw IQ or spectrograms, tree-based models on extracted features, autoencoders for anomaly detection, or any combination. What matters is that the model generalizes.

Feature engineering ideas:

  • Amplitude envelope statistics (mean, variance, kurtosis, crest factor)
  • Phase difference patterns
  • Zero-crossing rate
  • Energy distribution across the I and Q components
  • Spectral features (FFT-based)
  • Duty cycle estimation (for pulsed signals)
  • Cyclostationary features
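Several of the envelope statistics in the list above can be computed from raw I/Q with the standard library alone (a sketch; the guard values for degenerate signals are arbitrary):

```python
import math

def envelope_features(i, q):
    """Amplitude-envelope statistics usable as classifier features.
    i, q: equal-length lists of in-phase and quadrature components."""
    env = [math.hypot(a, b) for a, b in zip(i, q)]
    n = len(env)
    mean = sum(env) / n
    var = sum((e - mean) ** 2 for e in env) / n
    rms = math.sqrt(sum(e * e for e in env) / n)
    # Kurtosis separates constant-envelope signals (e.g. FM/FSK) from
    # amplitude-varying ones; undefined for a flat envelope, so guard it.
    kurt = (sum((e - mean) ** 4 for e in env) / n) / (var ** 2) if var > 1e-12 else 0.0
    crest = max(env) / rms if rms > 0 else 0.0
    return {"env_mean": mean, "env_var": var,
            "env_kurtosis": kurt, "crest_factor": crest}
```

A constant-envelope signal yields near-zero variance and a crest factor of 1, while pulsed radar waveforms show high crest factors — a cheap but discriminative feature set.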

2. Geolocation Engine

Implement a position estimator that takes multi-receiver observations and computes an emitter location. At minimum, support RSSI-based trilateration. The core math:

  • RSSI relates to distance via a path-loss model: RSSI = RSSI_ref - 10 * n * log10(d / d_ref) where n is the path-loss exponent and d is distance
  • With distance estimates from 3+ receivers at known positions, solve for the emitter position (nonlinear least squares, gradient descent, or closed-form approximation)
  • Produce a position estimate (latitude, longitude) and an uncertainty radius
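Both steps above can be sketched in a few lines — inverting the path-loss model, then a gradient-descent least-squares solve over the range residuals. `rssi_ref` and the path-loss exponent `n` are assumed calibration constants, and the solver works in a local planar (x, y) frame for simplicity:

```python
import math

def rssi_to_distance(rssi, rssi_ref=-40.0, n=2.0, d_ref=1.0):
    """Invert the log-distance path-loss model:
    RSSI = RSSI_ref - 10 * n * log10(d / d_ref)."""
    return d_ref * 10 ** ((rssi_ref - rssi) / (10 * n))

def trilaterate(receivers, dists, iters=2000, lr=0.01):
    """Minimize sum_i (||p - r_i|| - d_i)^2 by gradient descent,
    starting from the receiver centroid.

    receivers: list of (x, y) positions; dists: matching range estimates.
    Returns the estimated (x, y) emitter position."""
    x = sum(r[0] for r in receivers) / len(receivers)
    y = sum(r[1] for r in receivers) / len(receivers)
    for _ in range(iters):
        gx = gy = 0.0
        for (rx, ry), d in zip(receivers, dists):
            r = math.hypot(x - rx, y - ry) or 1e-9
            c = 2 * (r - d) / r
            gx += c * (x - rx)
            gy += c * (y - ry)
        x -= lr * gx
        y -= lr * gy
    return x, y
```

With three well-spaced receivers and clean ranges this converges to meter-level accuracy; noisy RSSI and poor geometry are where weighted variants and GDOP reporting (see Beyond the Basics) earn their keep.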

3. Track Management

Maintain persistent tracks for emitters over time. As the simulation feed provides new observations, update existing tracks with new position estimates and classification results. Handle:

  • Track creation (new emitter detected)
  • Track update (existing emitter re-detected, refine position and classification)
  • Track staleness (emitter goes silent — show last known position, age out over time)
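The three lifecycle cases above can be handled by a small manager keyed on track ID (a sketch; the staleness and drop timeouts are assumptions to tune against the feed's emission cadence):

```python
class TrackManager:
    """Create, update, and age out emitter tracks."""

    def __init__(self, stale_after_s=30.0, drop_after_s=120.0):
        self.tracks = {}
        self.stale_after_s = stale_after_s
        self.drop_after_s = drop_after_s

    def update(self, track_id, position, label, now):
        """Create a new track or refresh an existing one."""
        self.tracks[track_id] = {"pos": position, "label": label,
                                 "last_seen": now}

    def status(self, now):
        """Return {track_id: 'active' | 'stale'}, dropping tracks
        silent longer than drop_after_s."""
        out = {}
        for tid, t in list(self.tracks.items()):
            age = now - t["last_seen"]
            if age > self.drop_after_s:
                del self.tracks[tid]
            elif age > self.stale_after_s:
                out[tid] = "stale"
            else:
                out[tid] = "active"
        return out
```

The `status` output maps directly onto the COP requirement to visually distinguish stale tracks before they age out entirely.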

4. Common Operating Picture (COP)

Display the tactical picture on an interactive map. For each tracked emitter, show:

  • A marker at the estimated position
  • Color coding by classification (friendly / hostile-unknown / civilian)
  • Confidence indicator for both classification and position
  • Last-seen timestamp
  • Details on interaction (click/hover): signal type, confidence, position accuracy, observation count
  • Staleness handling — visually distinguish stale tracks

Beyond the Basics

The scoring system is designed so that a basic implementation leaves significant room for improvement. Here's where top teams invest their time:

Classification Depth

  • Anomaly/novelty detection — The hardest problem: you have no hostile training labels. Can you train an autoencoder or one-class SVM on friendly data and detect signals that don't fit? Can you cluster the unlabeled data and identify coherent groups of unknown signal types?
  • Low-SNR performance — At -10 dB and below, signals are almost invisible. Preprocessing techniques (filtering, averaging across observations, noise estimation) can recover signal structure that raw classification misses.
  • Multi-task classification — Can you predict both the modulation type AND the signal type? These are correlated but not identical; a model that captures both will classify more accurately.
  • Confidence calibration — A model that says "95% confident" should be right 95% of the time. Calibrated confidence scores are more operationally useful than raw softmax outputs.
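As a simpler stand-in for an autoencoder or one-class SVM, a per-feature z-score detector fit on friendly feature vectors alone can already flag out-of-distribution signals (a sketch; the threshold is an assumption, and it treats features as independent):

```python
import math

class FriendlyNoveltyDetector:
    """Fit per-feature mean/std on friendly data only; score test points
    by z-score distance and flag large distances as novel."""

    def fit(self, rows):
        cols = list(zip(*rows))
        self.mean = [sum(c) / len(c) for c in cols]
        self.std = [max(math.sqrt(sum((v - m) ** 2 for v in c) / len(c)), 1e-9)
                    for c, m in zip(cols, self.mean)]
        return self

    def score(self, x):
        """Euclidean distance in z-score space from the friendly centroid."""
        return math.sqrt(sum(((v - m) / s) ** 2
                             for v, m, s in zip(x, self.mean, self.std)))

    def is_novel(self, x, threshold=4.0):
        return self.score(x) > threshold
```

This is the one-class idea in miniature: the model never sees hostile examples, only a description of "friendly," and anything far from that description is flagged. Replacing the diagonal z-score with a full covariance (Mahalanobis distance) or a learned reconstruction error is the natural upgrade path.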

Geolocation Depth

  • TDoA (Time Difference of Arrival) — More accurate than RSSI-based methods. If the simulation provides timestamps, use time differences between receivers to compute hyperbolic position fixes.
  • Weighted least squares — Weight receiver contributions by their RSSI confidence or distance. Closer receivers give better estimates.
  • Kalman filtering — For moving emitters, use a Kalman filter to smooth position estimates over time and predict future positions.
  • Geometric Dilution of Precision (GDOP) — Estimate position uncertainty based on receiver geometry. Report this to the operator so they know how much to trust a position fix.
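A full Kalman filter is the standard answer for moving emitters; an alpha-beta filter is a lighter-weight sketch of the same predict-correct idea, shown here with gain values that are assumptions to tune:

```python
class AlphaBetaTracker:
    """Alpha-beta filter: a fixed-gain simplification of a constant-velocity
    Kalman filter for smoothing noisy position fixes over time."""

    def __init__(self, alpha=0.5, beta=0.1):
        self.alpha, self.beta = alpha, beta
        self.pos = None
        self.vel = (0.0, 0.0)
        self.t = None

    def update(self, meas, t):
        """Predict forward to time t, then correct toward the measurement."""
        if self.pos is None:
            self.pos, self.t = meas, t
            return self.pos
        dt = t - self.t
        pred = (self.pos[0] + self.vel[0] * dt,
                self.pos[1] + self.vel[1] * dt)
        rx, ry = meas[0] - pred[0], meas[1] - pred[1]
        self.pos = (pred[0] + self.alpha * rx, pred[1] + self.alpha * ry)
        if dt > 0:
            self.vel = (self.vel[0] + self.beta * rx / dt,
                        self.vel[1] + self.beta * ry / dt)
        self.t = t
        return self.pos
```

Unlike a Kalman filter it carries no covariance, so it cannot report position uncertainty — if you need that for GDOP-style operator feedback, the full filter is worth the extra code.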

Operational Features

  • Track correlation — A vehicle might carry multiple emitters (radio + radar + jammer). Can you correlate co-located tracks into a single platform?
  • Frequency hopping detection — Some emitters hop across frequencies. Can you detect that multiple single-frequency detections are actually one emitter?
  • Jamming detection — If a broadband noise source degrades signal quality at certain receivers, can you detect it and visualize the affected area?
  • CoT/ATAK export — Output track data in Cursor on Target XML format for ingestion into ATAK/TAK.
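
A minimal CoT event can be built with the standard library alone. The type codes (`a-f-G`, `a-h-G`, `a-n-G`) and point attributes below are illustrative assumptions — verify them against the CoT schema your TAK consumer expects:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

def track_to_cot(uid, lat, lon, affiliation="hostile", stale_s=60):
    """Render one track as a minimal Cursor on Target event string.
    The affiliation -> type mapping here is a guess; confirm against
    your TAK setup before relying on it."""
    now = datetime.now(timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    cot_type = {"friendly": "a-f-G", "hostile": "a-h-G",
                "civilian": "a-n-G"}[affiliation]
    ev = ET.Element("event", {
        "version": "2.0", "uid": uid, "type": cot_type, "how": "m-g",
        "time": now.strftime(fmt), "start": now.strftime(fmt),
        "stale": (now + timedelta(seconds=stale_s)).strftime(fmt),
    })
    # ce/le are circular/linear error estimates -- feed your position
    # uncertainty radius into ce so TAK renders it honestly.
    ET.SubElement(ev, "point", {"lat": str(lat), "lon": str(lon),
                                "hae": "0", "ce": "50", "le": "0"})
    return ET.tostring(ev, encoding="unicode")
```

Using `ElementTree` rather than string formatting guarantees the XML stays well-formed as fields are added.
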

Submissions
  • Participants will receive technical scores through the automated system on findmyforce.online
  • Participants must also have a live demo and presentation of their system ready for judging in CEME 1204.

Prizes

$CAD 0 in cash


Judges

Waseef Nayeem
Chief Technical Lead / Red Team Hack

Laura Aslan
Co-Founder / Mini-Checkout

Will Stahl
Founder / Unamint Studios

Judging Criteria

  • Entrepreneurial Drive – 10 Points
    Assessment of the team’s initiative, problem understanding, and overall execution mindset. Judges will evaluate leadership, proactiveness, and the ability to translate ideas into actionable outcomes.
  • Algorithm Approach – 5 Points
    Evaluation of the logic, methodology, and structure of the algorithmic solution. Judges will consider originality, efficiency, clarity of design, and appropriateness of the selected approach.
  • Technical Implementation – 10 Points
    This is the score from the website. Assessment of the technical quality of the solution, including functionality, robustness, integration, performance, and overall execution. Working prototypes and demonstrated technical competence will score higher.
  • Feasibility & Scalability – 5 Points
    Evaluation of the practicality of implementation in real-world scenarios, including operational viability, scalability potential, and resource considerations.
  • Storytelling & Vision – 3 Points
    Assessment of clarity, persuasiveness, and strategic vision during the presentation. Judges will evaluate how effectively the team communicates the problem, solution, impact, and long-term vision.

Questions? Email the hackathon manager
