Supercon 2023: Receiving Microwave Signals from Deep-Space Probes
Hackaday, 10 Oct 2024

Here’s the thing about radio signals. There is wild and interesting stuff just getting beamed around all over the place. Phrased another way, there are beautiful signals everywhere for those with ears to listen. We go about our lives oblivious to most of them, but some dedicate their time to teasing out and capturing these transmissions.

David Prutchi is one such person. He's a ham radio enthusiast who dabbles in receiving microwave signals sent from probes in deep space. What's even better is that he came down to Supercon 2023 to tell us all about how it's done!

Space Calling

David’s home setup is pretty rad.

David notes that he’s not the only ham out there doing this. He celebrates the small community of passionate hams who specialize in capturing signals directly from far-off spacecraft. As one of these dedicated enthusiasts, he gives us a look at his backyard setup—full of multiple parabolic dishes for getting the best possible reception when it comes to signals sent from so far away. They’re a damn sight smaller than NASA’s Deep Space Network (DSN) 70-meter dish antennas, but they can still do the job. He likens trying to find distant space signals to “watching grass grow”—sitting in front of a monitor, waiting for a tiny little spike to show up on a spectrogram.

Listening to signals from far away is hard. You want the biggest, best antenna you can get.

The challenge of receiving these signals comes down to simple numbers. David explains that a spacecraft like JUNO emits 28 watts—roughly 44.5 dBm—into a 2.5-meter dish with 44.7 dBi of gain. The problem is one of distance: the spacecraft sits around 715 million kilometers away on its mission to visit Jupiter, which imposes a path loss of around 288 dB. NASA’s 70-meter dish provides 68 dBi of gain on the receive side, for a received signal strength of around -131 dBm. For the uplink, NASA transmits in the 50-60 kW range through the same antenna. David’s setup is altogether more humble, with a 3.5-meter dish giving him 47 dBi of gain. His received signal strength is correspondingly lower, around -152 dBm.
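Those figures can be sanity-checked with the standard free-space path loss formula. Here's a quick sketch—the 8.4 GHz X-band downlink frequency is our assumption, everything else comes from the talk:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20 * log10(4 * pi * d / wavelength)."""
    wavelength = 299_792_458.0 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength)

tx_dbm  = 10 * math.log10(28e3)        # 28 W transmitter -> ~44.5 dBm
tx_gain = 44.7                          # JUNO's 2.5 m high-gain antenna, dBi
loss    = fspl_db(715e9, 8.4e9)         # 715 million km at X-band -> ~288 dB
rx_dsn  = tx_dbm + tx_gain - loss + 68  # NASA's 70 m dish: ~ -131 dBm
rx_ham  = tx_dbm + tx_gain - loss + 47  # David's 3.5 m dish: ~ -152 dBm
```

The numbers line up with the talk: the 21 dB of receive gain NASA has over a 3.5-meter backyard dish is exactly the difference between the two received-power figures.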

His equipment limits what he can actually get from these distant spacecraft. National space agencies can recover the full signal, sidebands and all, with their dishes tens of meters in diameter. His smaller setup is often just enough to get some of the residual carrier showing up in the spectrogram. Given that he’s not receiving the full signal, how does he know what he’s getting is the real deal? It comes down to checking the Doppler shift in the spectrogram, which is readily apparent for spacecraft signals. He also references the movie Contact, noting that the techniques in that film were valid: if you move your antenna to point away from the suspected spacecraft, the signal should go away. If it doesn’t, you’re probably picking up local interference instead.
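The Doppler check works because the shift scales with the spacecraft's radial velocity, which changes continuously as the Earth rotates and the probe moves—terrestrial interference doesn't drift that way. A first-order sketch (the 8.4 GHz downlink and 20 km/s radial velocity are illustrative values, not from the talk):

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(f_hz, radial_velocity_ms):
    """First-order Doppler shift: a receding transmitter (positive radial
    velocity) appears lower in frequency, an approaching one higher."""
    return -f_hz * radial_velocity_ms / C

# An X-band carrier receding at 20 km/s lands over half a megahertz low:
shift = doppler_shift_hz(8.4e9, 20_000)
```

Even a small change in radial velocity moves an X-band carrier by hundreds of hertz, which is why the drifting trace stands out so clearly in a spectrogram.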

Some hobbyists have been able to decode video feeds from spacecraft downlinks. 

Working at microwave frequencies requires the proper equipment. You’ll want a downconverter mounted as close to your antenna as possible if you’re working in X-Band.

However, demodulating and decoding full spacecraft signals at home is sometimes possible—generally when the spacecraft are still close to Earth. Some hobbyists have been able to decode telemetry from various missions, and even video signals from some craft! David shows some examples, noting that SpaceX has since started encrypting its feeds after hobbyists first started decoding them.

David also highlights the communications bands most typically used for deep space communication, and explains how to listen in on them. Most of it goes on in the S-band and X-band frequencies, with long-range activity focused on the higher bands.

David has pulled in some truly distant signals.

Basically, if you want to get involved in this kind of thing, you’re going to want a dish and some kind of software-defined radio. If you’re listening in S-band, that’s possibly enough, but if you’re stepping up into X-band, you’ll want a downconverter—mounted as close to your dish as possible—to step that signal down to a lower frequency range. This is important because X-band signals are attenuated very quickly in even short cable runs. You’ll also generally need to lock your downconverter and radio receiver to an atomic clock source to keep them stable, and an antenna rotator to point your dishes accurately, based on ephemeris data you can source from NASA JPL. As for finding downlink frequencies, he suggests looking at the ITU or the Australian Communications and Media Authority website.

He also covers the techniques of optimizing your setup. He dives into the minutiae of pointing antennas at the Sun and Moon to pick up their characteristic noise for calibration purposes. It’s a great way to determine the performance of your antenna and supporting setup. Alternatively, you can use signals from geostationary military satellites to determine how much signal you’re getting—or losing—from your equipment.

Ultimately, if you’ve ever dreamed of listening to distant spacecraft, David’s talk is a great place to start. It’s a primer on the equipment and techniques you need to get started, and he also makes it sound really fun, to boot. It’s high-tech hamming at its best, and there’s more to listen to out there than ever—so get stuck in!

Feast Your Eyes on These AI-Generated Sounds
Hackaday, 28 May 2024

The radio hackers in the audience will be familiar with a spectrogram display, but for the uninitiated, it’s basically a visual representation of how the energy across a range of frequencies changes with time. Usually such a display is used to identify a clear transmission in a sea of noise, but with the right software, it’s possible to generate a signal that shows up as text or an image when viewed as a spectrogram. Musicians even occasionally use the technique to hide images in their songs. Unfortunately, the audio side of such a trick generally sounds like gibberish to human ears.
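The hiding trick itself is simple to sketch: treat each row of a small bitmap as a tone and each column as a time slice, so the image reappears when the audio is plotted as a spectrogram. A minimal version—the sample rate, frequency band, and column duration here are arbitrary illustrative choices:

```python
import math

def image_to_audio(bitmap, fs=8000, f_lo=500, f_hi=3000, col_dur=0.05):
    """Render a binary bitmap (rows = frequency, columns = time) as audio
    whose spectrogram shows the image. The top row maps to the highest
    frequency, matching how spectrograms are usually drawn."""
    rows = len(bitmap)
    freqs = [f_hi - r * (f_hi - f_lo) / max(rows - 1, 1) for r in range(rows)]
    n = int(fs * col_dur)
    samples = []
    for c in range(len(bitmap[0])):
        for i in range(n):
            t = i / fs
            # Sum a sine for every lit pixel in this column:
            samples.append(sum(math.sin(2 * math.pi * freqs[r] * t)
                               for r in range(rows) if bitmap[r][c]))
    return samples
```

Play the result and you hear exactly the gibberish described above—a sequence of arbitrary tone chords with no musical or vocal structure—which is the problem the diffusion approach below sets out to solve.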

Or at least, it used to. Students from the University of Michigan have found a way to use diffusion models to not only create a spectrogram image for a given prompt, but to do it with audio that actually makes sense given what the image shows. So for example if you asked for a spectrogram of a race car, you might get an audio track that sounds like a revving engine.

The first step of the technique is easy enough — two separate pre-trained models are used: Stable Diffusion to create the image, and Auffusion4 to produce the audio. The results are then combined via weighted average and entered into an iterative denoising process to refine the end result. Normally the process produces a grayscale image, but as the paper explains, a third model can be brought in to produce a more visually pleasing result without impacting the audio itself.
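This is not the paper's actual sampler—real diffusion updates (DDPM/DDIM) involve noise schedules and variance terms—but the core "blend two models' estimates at every denoising step" idea can be sketched with stand-in denoisers:

```python
def dual_denoise(x, image_eps, audio_eps, w_img=0.5, steps=50):
    """Toy sketch: at each reverse-diffusion step, blend the noise
    estimates from an image model and an audio model by weighted average,
    then step against the blended estimate. image_eps/audio_eps are
    hypothetical stand-ins for the two pre-trained models' predictions."""
    for t in reversed(range(steps)):
        i_eps = image_eps(x, t)
        a_eps = audio_eps(x, t)
        blended = [w_img * i + (1 - w_img) * a for i, a in zip(i_eps, a_eps)]
        # Simplified update rule -- real samplers scale by schedule terms:
        x = [xi - e / steps for xi, e in zip(x, blended)]
    return x
```

The weight `w_img` is the knob the technique turns: push it toward 1 and the spectrogram looks better but sounds worse, push it toward 0 and the reverse.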

Ultimately, neither the visual nor audio component is perfect. But they both get close enough that you get the idea, and that alone is pretty impressive. We won’t hazard a guess at what practical applications exist for this technique, but the paper does hint at some potential use for steganography. Perhaps something to keep in mind the next time we try to hide data in an episode of the Hackaday Podcast.

Your Noisy Fingerprints Vulnerable to New Side-Channel Attack
Hackaday, 23 Feb 2024

Here’s a warning we never thought we’d have to give: when you’re in an audio or video call on your phone, avoid the temptation to doomscroll or use an app that requires a lot of swiping. Doing so just might save you from getting your identity stolen through the most improbable vector imaginable — by listening to the sound your fingerprints make on the phone’s screen (PDF).

Now, we love a good side-channel attack as much as anyone, and we’ve covered a lot of them over the years. But things like exfiltrating data by blinking hard drive lights or turning GPUs into radio transmitters always seemed a little far-fetched to be the basis of a field-practical exploit. But PrintListener, as [Man Zhou] et al dub their experimental system, seems much more feasible, even if it requires a ton of complex math and some AI help. At the heart of the attack are the nearly imperceptible sounds caused by friction between a user’s fingerprints and the glass screen on the phone. These sounds are recorded along with whatever else is going on at the time, such as a video conference or an online gaming session. The recordings are preprocessed to remove background noise and subjected to spectral analysis, which is sensitive enough to detect the whorls, loops, and arches of the unsuspecting user’s finger.

Once fingerprint patterns have been extracted, they’re used to synthesize a set of five similar fingerprints using MasterPrint, a generative adversarial network (GAN). MasterPrint can generate fingerprints that can unlock phones all by itself, but seeding the process with patterns from a specific user increases the odds of success. The researchers claim they can defeat Automatic Fingerprint Identification System (AFIS) readers between 9% and 30% of the time using PrintListener — not fabulous performance, but still pretty scary given how new this is.

Bass Reactive LEDs For Your Car
Hackaday, 26 Apr 2023

[Stephen Carey] wanted to spruce up his car with sound reactive LEDs but couldn’t quite find the right project online. Instead, he wound up assembling a custom bass reactive LED display using an ESP32.

A schematic of the Bass LED reactive circuit, with an ESP32 on a breadboard connected to a KY-040 encoder module, a GY-MAX4466 microphone module and LED strips below.

The entirety of the build is minimal, consisting of a GY-MAX4466 electret microphone module, a KY-040 encoder for some user control and an ESP32 attached to a Neopixel strip. The only additional electronic parts are some passive resistors to limit current on the data lines and a capacitor for power line noise suppression. [Stephen] uses various enclosures from Thingiverse for the microphone, rotary encoder and ESP32 box to make sure all the modules are protected and accessible.

The magic, of course, is in the software, with the CircuitPython ulab library used to do the heavy lifting of creating the spectrogram and frequency filtering. [Stephen] has made the code available on GitHub for those wanting to take a closer look.
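The core of any bass-reactive effect is the same: take an FFT of the microphone samples and map the low-frequency energy onto the LEDs. Here's a pure-Python stand-in for what ulab does in hardware-accelerated form on the ESP32 (the 250 Hz cutoff is an illustrative choice, not [Stephen]'s value):

```python
import cmath

def fft(x):
    """Minimal recursive radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def bass_fraction(samples, fs, cutoff_hz=250):
    """Fraction of spectral energy below cutoff_hz -- the 0..1 value a
    bass-reactive effect would map onto LED brightness or strip length."""
    spec = fft(samples)
    half = len(spec) // 2                      # positive frequencies only
    energy = [abs(s) ** 2 for s in spec[:half]]
    cut_bin = int(cutoff_hz * len(samples) / fs)
    total = sum(energy) or 1.0
    return sum(energy[:cut_bin + 1]) / total
```

On a microcontroller you'd run this per audio frame and scale the result to the number of pixels to light—exactly the loop the ESP32 is fast enough to keep up with in real time.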

It wasn’t very long ago that sound reactive LEDs were a heavy lift, requiring optimized FFT libraries or specialized components to produce the spectrogram. With faster and cheaper microcontroller boards, we’re seeing many great projects, like the sensory bridge or Raspberry Pi driven LED spectrogram, that can now treat spectrograms and Fourier transform calculations as basic infrastructure to build on. We’re happy to see [Stephen] leverage the ESP32’s speed and the various CircuitPython libraries to create a very cool LED car hack.

Video after the break!

Identifying Malware by Sniffing its EM Signature
Hackaday, 19 Jan 2022

The phrase “extraordinary claims require extraordinary evidence” is most often attributed to Carl Sagan, specifically from his television series Cosmos. Sagan was probably not the first person to put forward such a hypothesis, and the show certainly didn’t claim he was. But that’s the power of TV for you; the term has since come to be known as the “Sagan Standard” and is a handy aphorism that nicely encapsulates the importance of skepticism and critical thinking when dealing with unproven theories.

It also happens to be the first phrase that came to mind when we heard about Obfuscation Revealed: Leveraging Electromagnetic Signals for Obfuscated Malware Classification, a paper presented during the 2021 Annual Computer Security Applications Conference (ACSAC). As described in the mainstream press, the paper detailed a method by which researchers were able to detect viruses and malware running on an Internet of Things (IoT) device simply by listening to the electromagnetic waves being emanated from it. One needed only to pass a probe over a troubled gadget, and the technique could identify what ailed it with near 100% accuracy.

Those certainly sound like extraordinary claims to us. But what about the evidence? Well, it turns out that digging a bit deeper into the story uncovered plenty of it. Not only has the paper been made available for free thanks to the sponsors of the ACSAC, but the team behind it has released all of the code and documentation necessary to recreate their findings on GitHub.

Unfortunately we seem to have temporarily misplaced the $10,000 1 GHz Picoscope 6407 USB oscilloscope that their software is written to support, so we’re unable to recreate the experiment in full. If you happen to come across it, please drop us a line. But in the meantime we can still walk through the process and try to separate fact from fiction in classic Sagan style.

Baking a Malware Pi

The best way of understanding what this technique is capable of, and further what it’s not capable of, is to examine the team’s test rig. In addition to the aforementioned Picoscope 6407, the hardware configuration includes a Langer PA-303 amplifier and a Langer RF-R H-Field probe that’s been brought to rest on the BCM2837 processor of a Raspberry Pi 2B. The probe and amplifier were connected to the first channel of the oscilloscope as you might expect, but interestingly, the second channel was connected to GPIO 17 on the Pi to serve as the trigger signal.

As explained in the project’s Wiki, the next step was to intentionally install various rootkits, malware, and viruses onto the Raspberry Pi. A wrapper program was then used that would first trigger the Picoscope over the GPIO pin, and then run the specific piece of software under examination for a given duration. This process was repeated until the team had amassed tens of thousands of captures for various pieces of malware including bashlite, mirai, gonnacry, keysniffer, and maK_it. This gave them data on what the electromagnetic (EM) output of the Pi’s SoC looked like when its Linux operating system had become infected.
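The wrapper's job is simple enough to sketch. The function below is a hypothetical stand-in—on the real rig, `set_trigger` would drive GPIO 17 on the Pi, which the Picoscope's second channel watches for its trigger edge:

```python
import subprocess
import time

def run_capture(cmd, set_trigger, duration_s):
    """Hypothetical sketch of the capture wrapper: raise the scope's
    trigger line, run the sample under test for a fixed window, then
    kill it and drop the trigger so captures stay aligned."""
    set_trigger(True)              # scope starts capturing on this edge
    proc = subprocess.Popen(cmd)
    try:
        time.sleep(duration_s)     # let the sample run for the window
    finally:
        proc.terminate()
        proc.wait()
        set_trigger(False)
```

Repeating this for each malware sample is what makes tens of thousands of consistently-framed captures practical to collect.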

But critically, they also performed the same data acquisition on what they called a “benign” dataset. These captures were made while the Raspberry Pi was operating normally and running tools that would be common for IoT applications. EM signatures were collected for well known programs and commands such as mpg123, wget, tar, more, grep, and dmesg. This data established a baseline for normal operations, and gave the team a control to compare against.

Crunching the Numbers

As explained in section 5.3 of the paper, Data Analysis and Preprocessing, the raw EM captures need to be cleaned up before any useful data can be extracted. As you can imagine, the probe picks up a cacophony of electronic noise at such close proximity. The goal of the preprocessing stage is to filter out as much of the background noise as possible, and identify the telltale frequency fluctuations and peaks that correspond to individual programs running on the processor.
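The paper's preprocessing pipeline is more involved than this, but the basic move—characterize the noise from benign captures and subtract it from every spectrum—can be sketched in a few lines:

```python
def noise_floor(baseline_frames):
    """Average magnitude spectrum across several 'benign' captures,
    giving a per-bin estimate of the background noise."""
    count = len(baseline_frames)
    return [sum(frame[i] for frame in baseline_frames) / count
            for i in range(len(baseline_frames[0]))]

def subtract_floor(capture, floor):
    """Spectral subtraction, clamped at zero so no bin goes negative."""
    return [max(c - f, 0.0) for c, f in zip(capture, floor)]
```

What survives the subtraction—the peaks and fluctuations above the floor—is what gets handed to the classifier in the next stage.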

The resulting cleaned up spectrograms were then put through a neural network designed to classify the EM signatures. In much the way a computer vision system is able to classify objects in an image based on its training set, the team’s software demonstrated an uncanny ability to pick out what type of software was running on the Pi when presented with a captured EM signature.

When asked to classify a signature as ransomware, rootkit, DDoS, or benign, the neural network had an accuracy of better than 98%. Similar accuracy was achieved when the system was tasked with drilling down and determining the specific type of malware that was running. This meant the system was not only capable of detecting if the Pi was compromised, but could even tell the difference between a gonnacry or bashlite infection.

Accuracy took a considerable hit when attempting to identify the specific binary being executed, but the system still managed a respectable 82.28%. Perhaps most impressively, the team claims an accuracy of 82.70% when attempting to distinguish between various types of malware even when attempts were made to actively obfuscate their execution, such as running them in a virtualized environment.

Realistic Expectations

While the results of the experiment are certainly compelling, it’s important to stress that this all took place under controlled and ideal conditions. At no point in the paper is it claimed that this technique, at least in its current form, could actually be used in the wild to determine if a computer or IoT device has been infected with malware.

At the absolute minimum, data would need to be collected on a much wider array of computing devices before you could even say whether this idea has any practical application outside of the lab. For their part, the authors say they chose the Pi 2B as a sort of “boilerplate” device, believing its 32-bit ARM processor and vanilla Linux operating system provided a reasonable stand-in for a generic IoT gadget. That’s a logical enough assumption, but there are still far too many variables at play to say that any of the EM signatures collected on the Pi test rig would be applicable to a random wireless router pulled off the shelf.

Still, it’s hard not to come away impressed. While the researchers might not have created the IT equivalent of the Star Trek medical tricorder, a device that you can simply wave over the patient to instantly see what malady of the week they’ve been struck by, it certainly seems like they’re tantalizingly close.

Analyzing CNC Tool Chatter with Audacity
Hackaday, 16 Jan 2020

When you’re operating a machine that’s powerful enough to tear a solid metal block to shards, it pays to be attentive to details. The angular momentum of the spindle of a modern CNC machine can be trouble if it gets unleashed the wrong way, which is why generations of machinists have developed an ear for the telltale sign of impending doom: chatter.

To help develop that ear, [Zachary Tong] did a spectral analysis of the sounds of his new CNC machine during its “first chip” outing. The benchtop machine is no slouch – an Avid Pro 2436 with a 3 hp S30C tool-changing spindle. But like any benchtop machine, it lacks the sheer mass needed to reduce vibration, and tool chatter can be a problem.

The analysis begins at about the 5:13 mark in the video below, where [Zach] fed the soundtrack of his video into Audacity. Switching from waveform to spectrogram mode, he was able to identify a strong signal at about 5,000 Hz, corresponding to the spindle coming up to speed. The white noise of the mist cooling system was clearly visible too, as were harmonic vibrations up and down the spectrum. Most interesting, though, was the slight dip in frequency during the cut, indicating loading on the spindle. [Zach] then analyzed the data from the cut in the frequency domain and found the expected spindle harmonics, as well as the harmonics from the three flutes on the tool. Mixed in among these were spikes indicating chatter — nothing major, but still enough to measure.
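Finding a dominant peak like that 5,000 Hz spindle signature is just an argmax over a discrete Fourier transform. A minimal sketch—a naive DFT is plenty for a short clip, though Audacity of course uses a proper FFT:

```python
import cmath
import math

def dominant_frequency(samples, fs):
    """Return the peak frequency of a naive DFT -- the tallest spike
    you would see in a spectrogram of this snippet."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):                 # skip DC, positive bins only
        acc = sum(samples[i] * cmath.exp(-2j * cmath.pi * k * i / n)
                  for i in range(n))
        if abs(acc) > best_mag:
            best_k, best_mag = k, abs(acc)
    return best_k * fs / n

# A synthetic 5 kHz tone standing in for the spindle coming up to speed:
fs = 44100
tone = [math.sin(2 * math.pi * 5000 * i / fs) for i in range(441)]
```

Tracking how that peak drifts from frame to frame is exactly how the loading dip during the cut shows up in the data.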

Audacity has turned out to be an incredibly useful tool with a broad range of applications. Whether it be finding bats, dumping ROMs, detecting lightning strikes, or cloning remote controls, Audacity is often the hacker’s tool of choice.

Audio Algorithm Detects When Your Team Scores
Hackaday, 25 Apr 2015

[François] lives in Canada, and as you might expect, he loves hockey. Since his local team (the Habs) is in the playoffs, he decided to make an awesome setup for his living room that puts on a light show whenever his team scores a goal. This would be simple if there was a nice API to notify him whenever a goal is scored, but he couldn’t find anything of the sort. Instead, he designed a machine-learning algorithm that detects when his home team scores by listening to his TV’s audio feed.

[François] started off by listening to the audio of some recorded games. Whenever a goal is scored, the commentator yells out and the goal horn is sounded. This makes it pretty obvious to the listener that a goal has been scored, but detecting it with a computer is a bit harder. [François] also wanted to detect when his home team scored a goal, but not when the opposing team scored, making the problem even more complicated!

Since the commentator’s yell and the goal horn don’t sound exactly the same for each goal, [François] decided to write an algorithm that identifies and learns from patterns in the audio. If a home team goal is detected, he sends commands to some Philips Hue bulbs that flash his team’s colors. His algorithm tries its best to avoid false positives when the opposing team scores, and in practice it successfully identified 75% of home team goals with zero false positives—not bad! Be sure to check out the setup in action after the break.
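[François]'s learned pattern matcher is more sophisticated than this, but the simplest version of the idea—watch for sustained energy at the horn's pitch—is easy to sketch. The 440 Hz target and the thresholds below are hypothetical values, not from his setup:

```python
import math

def goertzel_power(samples, target_hz, fs):
    """Energy at a single frequency via the Goertzel algorithm -- cheaper
    than a full FFT when you only care about one bin (the horn's pitch)."""
    n = len(samples)
    k = round(n * target_hz / fs)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def detect_horn(frames, target_hz, fs, threshold, min_frames):
    """Flag a goal once energy at the horn's pitch stays above threshold
    for min_frames consecutive audio frames -- a crude stand-in for the
    learned pattern matching described above."""
    run = 0
    for frame in frames:
        run = run + 1 if goertzel_power(frame, target_hz, fs) > threshold else 0
        if run >= min_frames:
            return True
    return False
```

Requiring several consecutive loud frames is what keeps a stray crowd cheer from flashing the lights—the same false-positive problem the real algorithm has to handle for opposing-team goals.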
