AI-powered interview integrity detection that runs entirely in your browser.
OpticGuard runs alongside any video interview and flags two common cheating signals in real time.
Vision Mode uses MediaPipe to track 468 facial landmarks at 60 fps and compute the iris position relative to each eye's corners.
It looks for reading sweeps: a lateral eye movement of sufficient amplitude (>3% of eye width), lasting 150 ms to 5 s, followed by a saccadic return. Head rotation is tracked separately so head-coupled movement does not trigger false positives. Sweep frequency and span over a rolling 2.5-second window drive the confidence score, which rises on repeated detections and decays when behavior normalizes.
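The sweep heuristic above can be sketched roughly as follows. The thresholds mirror the prose (>3% of eye width, 150 ms to 5 s, rolling confidence that rises and decays); the function names, the confidence step sizes, and the return-position tolerance are illustrative, not OpticGuard's actual API, and head-rotation compensation is omitted for brevity.

```javascript
const SWEEP_MIN_AMPLITUDE = 0.03; // fraction of eye width
const SWEEP_MIN_MS = 150;
const SWEEP_MAX_MS = 5000;

// samples: [{t: ms, x: horizontal iris offset, normalized by eye width}]
// (x is assumed to already have head yaw subtracted out).
// Returns true if the window contains a lateral sweep of sufficient
// amplitude followed by a saccadic return toward the start position.
function isReadingSweep(samples) {
  if (samples.length < 3) return false;
  const start = samples[0];
  // Find the point of maximum lateral excursion.
  let peak = start;
  for (const s of samples) {
    if (Math.abs(s.x - start.x) > Math.abs(peak.x - start.x)) peak = s;
  }
  const amplitude = Math.abs(peak.x - start.x);
  const duration = peak.t - start.t;
  if (amplitude < SWEEP_MIN_AMPLITUDE) return false;
  if (duration < SWEEP_MIN_MS || duration > SWEEP_MAX_MS) return false;
  // Saccadic return: the last sample is back near the starting position.
  const last = samples[samples.length - 1];
  return Math.abs(last.x - start.x) < amplitude * 0.5;
}

// Confidence rises on repeated detections and decays when behavior
// normalizes; the step sizes here are placeholders.
function updateConfidence(confidence, sweepDetected) {
  return sweepDetected
    ? Math.min(1, confidence + 0.2)
    : Math.max(0, confidence - 0.05);
}
```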
Audio Mode runs pitch detection via autocorrelation on 2048-sample audio frames every 100 ms, extracting the fundamental frequency of the voice in the 85–350 Hz range.
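A minimal autocorrelation pitch detector along these lines might look like the sketch below: a 2048-sample frame searched over lags corresponding to the 85–350 Hz band. The sample rate, silence gate, and peak-strength threshold are assumptions for illustration.

```javascript
const SAMPLE_RATE = 48000; // assumed; real capture rate may differ
const F_MIN = 85;
const F_MAX = 350;

// frame: Float32Array of time-domain samples (e.g. 2048 of them).
// Returns the estimated fundamental frequency in Hz, or null if the
// frame is too quiet or no sufficiently strong peak is found.
function detectPitch(frame, sampleRate = SAMPLE_RATE) {
  // Lag range corresponding to the 85–350 Hz search band.
  const minLag = Math.floor(sampleRate / F_MAX);
  const maxLag = Math.ceil(sampleRate / F_MIN);
  const energy = frame.reduce((acc, v) => acc + v * v, 0);
  if (energy < 1e-6) return null; // silence gate
  let bestLag = -1;
  let bestCorr = 0;
  for (let lag = minLag; lag <= maxLag; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < frame.length; i++) {
      corr += frame[i] * frame[i + lag];
    }
    corr /= energy; // normalize so peak strength is comparable
    if (corr > bestCorr) {
      bestCorr = corr;
      bestLag = lag;
    }
  }
  // Require a reasonably strong peak before declaring a pitch.
  return bestCorr > 0.3 ? sampleRate / bestLag : null;
}
```

In the browser, the frame would typically come from `AnalyserNode.getFloatTimeDomainData` on a Web Audio graph fed by the microphone stream.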
Five signals are combined with calibrated weights:
- Pitch variance (22%) - scripted speech is monotone
- Pitch range (10%) - limited frequency range indicates low expressiveness
- Pause irregularity (20%) - natural speech has uneven pause lengths
- Long pause deficit (28%) - pauses over 1 second signal real thinking; their absence is the strongest cheating indicator
- Phrase tempo variability (20%) - AI-read answers tend to be metronomic
Scripted and AI-read answers score low on nearly all five. A high vision confidence score also tightens the audio threshold.
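The weighting scheme above can be sketched as a simple weighted sum, with the cross-mode coupling expressed as a vision-dependent threshold. The weights are the calibrated values listed; everything else (the [0, 1] normalization convention, the base threshold, and the tightening rule) is an illustrative assumption.

```javascript
// Calibrated weights from the list above. Each input signal is assumed
// to be pre-normalized to [0, 1], where higher means more natural/human.
const WEIGHTS = {
  pitchVariance: 0.22,
  pitchRange: 0.10,
  pauseIrregularity: 0.20,
  longPauseDeficit: 0.28,
  tempoVariability: 0.20,
};

// signals: object with the five keys above, each in [0, 1].
// Returns a naturalness score in [0, 1]; low scores suggest scripted
// or AI-read speech.
function audioNaturalness(signals) {
  let score = 0;
  for (const [name, weight] of Object.entries(WEIGHTS)) {
    score += weight * signals[name];
  }
  return score;
}

// A high vision confidence tightens the audio threshold: the more
// suspicious the eye behavior, the less benefit of the doubt the voice
// gets. Base and boost values here are placeholders.
function audioFlagThreshold(visionConfidence, base = 0.4, maxBoost = 0.15) {
  return base + maxBoost * visionConfidence;
}
```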
Both modes run simultaneously and reinforce each other.
All processing is local. No audio, video, or biometric data ever leaves your device. You can verify this yourself by watching the Network tab in DevTools during a session — zero outbound requests.