Bioacoustics researchers, birders, and machine‑learning practitioners often need a tool to visualize and annotate spectrograms in order to make the most of their sound recordings. Sound‑data annotation is a crucial step in building high‑quality machine‑learning models for sound‑based identification. To support this workflow, a new open‑source tool—Spectrolipi—is now available.
Link: https://spectrolipi.com
Features:
Spectrogram playback with adjustable visualization options (Colourmap, Gain, Zoom).
Create, edit, import, and export annotation boxes.
Automatic annotation assistance using templates for repeated sound patterns.
‘Repeat annotation’ feature to quickly annotate similar recurring sound patterns on the spectrogram.
Maintain file‑level metadata.
Species selection with predefined or custom lists (common & scientific names).
Basic sound editing & export: cut, silence, filters (high‑pass / low‑pass), normalize, and single‑step undo.
Export sound clips for machine learning based on the annotations. Clips are automatically sorted into per‑species folders.
Integration with Xeno-canto for uploading annotations, either manually (JSON file) or directly via the API.
No data privacy concerns: all processing happens locally in your browser (except for downloading the BirdNET model), and no installation is required.
Analyze and create annotations with BirdNET V2.4.
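To illustrate the per‑species clip export described above, here is a minimal, hypothetical sketch of the idea: cut each annotated time span out of a WAV recording and write it into a folder named after the annotated species. The annotation fields (`species`, `start_s`, `end_s`) and the function itself are illustrative assumptions, not Spectrolipi's actual format or code.

```python
# Hypothetical sketch: export annotated spans of a WAV file into
# per-species folders, as described in the feature list above.
# Annotation format is assumed for illustration only.
import os
import wave

def export_clips(wav_path, annotations, out_dir):
    """Write one clip per annotation into out_dir/<species>/."""
    with wave.open(wav_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(params.nframes)
    # Bytes per audio frame (all channels, one sample each).
    frame_bytes = params.sampwidth * params.nchannels
    for i, ann in enumerate(annotations):
        species_dir = os.path.join(out_dir, ann["species"])
        os.makedirs(species_dir, exist_ok=True)
        # Convert annotation times (seconds) to byte offsets.
        start = int(ann["start_s"] * params.framerate) * frame_bytes
        end = int(ann["end_s"] * params.framerate) * frame_bytes
        clip_path = os.path.join(species_dir, f"clip_{i:03d}.wav")
        with wave.open(clip_path, "wb") as dst:
            dst.setnchannels(params.nchannels)
            dst.setsampwidth(params.sampwidth)
            dst.setframerate(params.framerate)
            dst.writeframes(frames[start:end])
    return out_dir
```

A training pipeline can then point a loader at `out_dir` and treat each subfolder name as the class label, which is the usual layout expected by image/audio dataset loaders.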
Please refer to ‘Spectrolipi - Guide.docx’ and ‘Spectrolipi - FAQ.docx’ for details about this tool: Guide and FAQ
BirdNET AI model by the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology, in collaboration with Chemnitz University of Technology (Stefan Kahl, Connor Wood, Maximilian Eibl, Holger Klinck): BirdNET-Analyzer, BirdNET models
Code for analysis with the BirdNET TFJS model was adapted (with improvements) from: https://github.com/georg95/birdnet-web