ML feels like overkill for a single signal. If you want to process large, wideband scans, you could iterate over the peaks and check things like: does a phase plot produce an interesting set of constellation points, does a PLL find consistent amplitude keying, does the frequency move around the centre. That covers 99% of what you're going to find in the wild. The first part (transforming the signal) is definitely not a good fit for ML, but the second part (does the result look like a set of points / digital on/off keying / voice) could be classified that way.
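A rough sketch of that peak-by-peak heuristic (Python; the thresholds and helper names here are made up for illustration, not a tested classifier):

    import numpy as np

    def peak_bins(power_db, noise_floor_db, margin_db=10.0):
        """FFT bins rising well above the noise floor -- candidates to inspect."""
        return np.flatnonzero(power_db > noise_floor_db + margin_db)

    def looks_like_psk(iq):
        """Do the phases cluster into a small set of points (a constellation)?"""
        hist, _ = np.histogram(np.angle(iq), bins=32, range=(-np.pi, np.pi))
        # A few dominant bins suggest discrete phase states; a flat
        # histogram suggests noise or an analog signal.
        return np.sort(hist)[-4:].sum() > 0.8 * hist.sum()

    def looks_like_ook(iq):
        """Does the envelope toggle between two levels (on/off keying)?"""
        env = np.abs(iq)
        lo, hi = np.percentile(env, [10, 90])
        # A bimodal envelope keeps most samples away from the midpoint.
        return np.mean(np.abs(env - (lo + hi) / 2) > 0.3 * (hi - lo)) > 0.8

    def classify(iq):
        if looks_like_psk(iq):
            return "set of points (PSK-like)"
        if looks_like_ook(iq):
            return "digital on/off keying"
        return "analog / voice / unknown"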

There are some existing projects, like https://github.com/randaller/cnn-rtlsdr (though that one tries to identify more specific TV signals).


anilakar
Automatic classifiers tend to look for power above the background noise and then AM-demodulate the signal around it. That demodulated signal, or "video" as it's called, is centered around 0 Hz and can be matched against a database of spectrum masks for various modulations, baud rates and other parameters.

No neural nets required. Just good old regression.
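A minimal sketch of that pipeline (Python; the mask database and parameter choices are hypothetical). The least-squares fit at the end is the "good old regression" step:

    import numpy as np

    def video_spectrum(iq, nfft=1024):
        """AM-demodulate a band and return the normalised video spectrum."""
        video = np.abs(iq)        # envelope detection = AM demodulation
        video -= video.mean()     # centre the video around 0 Hz
        spec = np.abs(np.fft.rfft(video, nfft))
        return spec / (np.linalg.norm(spec) + 1e-12)

    def best_mask(iq, mask_db):
        """Fit each stored spectrum mask to the measured video spectrum;
        the smallest residual wins."""
        spec = video_spectrum(iq)
        scores = {}
        for name, mask in mask_db.items():
            gain = np.dot(mask, spec) / np.dot(mask, mask)  # 1-D least squares
            scores[name] = np.sum((spec - gain * mask) ** 2)
        return min(scores, key=scores.get)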

vdqtp3
So how would they fare against something below the noise floor, like FT8?
trothamel
FT8 isn't actually below the noise floor if you look at the bandwidth the signal is detected in, rather than the 2500 Hz reference bandwidth.

https://tapr.org/pdf/DCC2018-KC5RUO-TheReal-FT8-JT65-JT9=SNR...

That has the details, but basically you should add 26 dB to account for the difference between the 2500 Hz reference and the 6.25 Hz bandwidth each FT8 tone is detected in.
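The 26 dB figure is just the ratio of the two bandwidths in dB; detecting each tone in a 6.25 Hz bin instead of the 2500 Hz reference bandwidth rejects proportionally more noise power:

    import math

    correction_db = 10 * math.log10(2500 / 6.25)
    print(round(correction_db, 1))  # 26.0 dB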
