Sunday, May 24, 2020

May 2020 Hackathon

Over the weekend, the owner of a Discord server called The Hive Mind ran a "hackathon" in which members were supposed to build something and show pictures on Sunday. Since I was already working on a UTSC broadcast utility for LimeSDRs, I decided to finish it if possible so I'd have something to enter, and then see if I could transmit the same thing from a Raspberry Pi running on a power bank. If you're not familiar with UTSC, it's a TV standard I created in 2017 as a highly reliable and license-free alternative to ATSC.

Before I begin the description, here are the pictures I submitted when the event ended.

One of the first things I did during the hackathon was to test the range of the LimeSDR Mini at its highest power. I used DATV Express to transmit QPSK at 625 kilosymbols/sec to make it as much like UTSC as possible, then walked around outside with a laptop, an SDRplay RSP1, and TV rabbit ears. The transmitting setup was on the second floor, about 25 feet above ground level, on a table near a window; the antenna was an adjustable TV dipole with ladder line, oriented vertically. I walked until I was about 955 feet (291 meters) away, and the signal was still roughly 21 dB above the noise.

After I was done, I worked on a QPSK transmitter in Visual Studio. I copied the modulator from Charles Brain's (G4GUO) DATV Express code, which is public domain. I first tried this about a year ago and it worked, but the output stuttered badly. Back then I had copied the code from an older version that had an issue with accessing the webcam when I compiled it, so I assumed the modulator code was broken as well. It took a while for G4GUO to get back to me the second time I contacted him, so I paused the project until a few days ago, when I tried again and succeeded. I think the earlier problem was caused by using a different buffer size. I made sure to notify him via Twitter that I didn't need his help anymore.

Now that I could transmit QPSK, I wanted to see if I could use my LimeSDR Mini to do it from a Raspberry Pi. I have a Raspberry Pi 3B V1.2, and it was easy to set up the SDR and C++ environments. Since it runs Linux (specifically Raspbian), I didn't need any drivers; all I had to do was download the LimeSuite repo from GitHub and build and install it. When the build finished, the include and library files ended up in an easily accessible folder, and I used them with g++ to compile an example program.
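If you just want to confirm that the LimeSuite API is reachable from C++, a tiny test program is enough. The sketch below is only an illustration along those lines, not the exact example I compiled; it assumes the standard LimeSuite C API and builds on the Pi with something like g++ tx_test.cpp -o tx_test -lLimeSuite.

// tx_test.cpp - minimal LimeSDR TX smoke test (illustrative sketch only)
#include <lime/LimeSuite.h>   // header location may vary with your install
#include <complex>
#include <cstdio>
#include <vector>

int main()
{
    lms_device_t* dev = nullptr;
    lms_info_str_t list[8];
    if (LMS_GetDeviceList(list) < 1 || LMS_Open(&dev, list[0], nullptr) != 0) {
        std::fprintf(stderr, "No LimeSDR found\n");
        return 1;
    }
    LMS_Init(dev);
    LMS_EnableChannel(dev, LMS_CH_TX, 0, true);
    LMS_SetSampleRate(dev, 2.5e6, 4);              // 4x the 625 ksym/s symbol rate
    LMS_SetLOFrequency(dev, LMS_CH_TX, 0, 915e6);  // middle of the 902-928 MHz band
    LMS_SetNormalizedGain(dev, LMS_CH_TX, 0, 0.7);

    lms_stream_t tx = {};
    tx.isTx = true;
    tx.channel = 0;
    tx.fifoSize = 256 * 1024;
    tx.throughputVsLatency = 0.5;
    tx.dataFmt = lms_stream_t::LMS_FMT_F32;
    LMS_SetupStream(dev, &tx);
    LMS_StartStream(&tx);

    // Send a few seconds of a plain carrier just to prove samples reach the radio.
    std::vector<std::complex<float>> buf(8192, std::complex<float>(0.5f, 0.0f));
    for (int i = 0; i < 1000; ++i)
        LMS_SendStream(&tx, buf.data(), buf.size(), nullptr, 1000);

    LMS_StopStream(&tx);
    LMS_DestroyStream(dev, &tx);
    LMS_Close(dev);
    return 0;
}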

Once I confirmed that the API was accessible from C++ on the Pi, I copied my code from Visual Studio and removed everything specific to Windows. When I was done, it was able to transmit but had the stuttering issue. I spent hours trying different buffer sizes and noticed something odd: with a small buffer it stuttered rapidly, but with a large one it would transmit smoothly for a few seconds, go silent for slightly longer, and repeat. Transmitting pure noise didn't stutter, so I profiled the code and found that the RRC (root-raised cosine) method was too slow for the Pi's CPU. The large buffer let it build up a big array of samples to transmit, but the RRC method couldn't keep up, causing gaps longer than the bursts actually being transmitted.

Rather than optimize the filter, I created a dummy signal generator. It transmits QPSK with the same bandwidth as UTSC but carries no real data. I fill the data buffer with random bytes and use a bool variable called rrcRanOnce: the RRC method runs once and sets it to true, and an "if" block keeps it from running again after that. This lets the Pi transmit the same RRC-filtered samples in a loop, producing a smooth QPSK signal.
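Here's roughly what that workaround looks like in code. Apart from rrcRanOnce, the names (modulateQpsk, runRrcFilter, sendToLime) are placeholders for illustration rather than the actual functions in my project; the real modulator and RRC filter come from the DATV Express code.

#include <complex>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Placeholder declarations; the real project has its own modulator and
// RRC filter adapted from DATV Express.
std::vector<std::complex<float>> modulateQpsk(const std::vector<uint8_t>& dibits);
std::vector<std::complex<float>> runRrcFilter(const std::vector<std::complex<float>>& symbols);
void sendToLime(const std::vector<std::complex<float>>& samples);

static bool rrcRanOnce = false;
static std::vector<std::complex<float>> cachedSamples;

// Dummy signal generator: modulate and RRC-filter one block of random data,
// then keep retransmitting the cached samples so the Pi only pays the RRC
// cost once.
void transmitDummySignal()
{
    if (!rrcRanOnce) {
        std::vector<uint8_t> dibits(65536);
        for (auto& d : dibits)
            d = std::rand() & 3;               // random 2-bit symbols
        cachedSamples = runRrcFilter(modulateQpsk(dibits));
        rrcRanOnce = true;
    }
    while (true)
        sendToLime(cachedSamples);             // loop the same filtered block
}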

On Windows, my code originally had a GUI, but I changed it to a command-line project because I wanted to print debug information. I added methods to create UTSC packets and to do the interleaving and de-interleaving, wrote the output to a file, and verified it in a hex editor. I also added command-line switches so I could use the program outside of Visual Studio.

My transmitter is called UTSCTransmitterCli.exe and takes arguments such as an input file and the channel to use. UTSC is meant for the 902-928 MHz band; each signal takes up about 850 kHz, so there's space for 30 channels. Here's the current UTSC air interface specification, followed by a quick check of the arithmetic.

Intended band: 902-928 MHz
Bandwidth: 843.75 kHz
Modulation: QPSK (or π/4 QPSK)
Symbol rate: 625 kilosymbols/sec
Rolloff: 0.35
Total data rate: 1.25 megabits/sec
FEC: LDPC, 4/5 (250 kilobits/sec)
Usable data rate: 1 megabit/sec
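As a sanity check, here's how those figures relate to each other. This is just arithmetic on the numbers above, not part of the transmitter.

#include <cmath>
#include <cstdio>

int main()
{
    const double symbolRate = 625e3;      // symbols/sec
    const double rolloff    = 0.35;       // RRC rolloff
    const double bitsPerSym = 2.0;        // QPSK carries 2 bits per symbol
    const double codeRate   = 4.0 / 5.0;  // LDPC 4/5
    const double bandHz     = (928.0 - 902.0) * 1e6;

    const double bandwidth  = symbolRate * (1.0 + rolloff);   // 843,750 Hz
    const double totalRate  = symbolRate * bitsPerSym;        // 1.25 Mbit/s
    const double usableRate = totalRate * codeRate;           // 1 Mbit/s
    const double fecRate    = totalRate - usableRate;         // 250 kbit/s of parity
    const double channels   = std::floor(bandHz / 850e3);     // 30 channels at ~850 kHz each

    std::printf("%.2f kHz wide, %.2f Mbit/s total, %.2f Mbit/s usable,\n"
                "%.0f kbit/s of FEC, room for %.0f channels\n",
                bandwidth / 1e3, totalRate / 1e6, usableRate / 1e6,
                fecRate / 1e3, channels);
    return 0;
}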

My transmitter uses π/4 QPSK. To achieve this, I duplicated the QPSK symbol array in my code and rotated each one by π/4 radians (45 degrees). Then, for each bit pair, the code checks if the index is odd or even and uses a ternary operator to choose the array that the symbol comes from.
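A sketch of that mapping is below. The Gray mapping and names are illustrative rather than copied from my code; the point is only the even/odd alternation between the plain and rotated tables.

#include <cmath>
#include <complex>
#include <cstdint>
#include <vector>

// Base QPSK constellation (one possible Gray mapping) plus a copy of it
// rotated by pi/4 radians (45 degrees).
static const std::complex<float> qpskBase[4] = {
    { 0.7071f,  0.7071f}, {-0.7071f,  0.7071f},
    {-0.7071f, -0.7071f}, { 0.7071f, -0.7071f}};
static std::complex<float> qpskRotated[4];

void initRotatedTable()
{
    const float pi4 = 0.78539816f;
    const std::complex<float> rot(std::cos(pi4), std::sin(pi4));
    for (int i = 0; i < 4; ++i)
        qpskRotated[i] = qpskBase[i] * rot;
}

// Map 2-bit values to pi/4 QPSK symbols: even-indexed symbols come from the
// base table, odd-indexed symbols come from the rotated table.
std::vector<std::complex<float>> mapPi4Qpsk(const std::vector<uint8_t>& dibits)
{
    std::vector<std::complex<float>> out;
    out.reserve(dibits.size());
    for (size_t i = 0; i < dibits.size(); ++i)
        out.push_back((i & 1) ? qpskRotated[dibits[i] & 3]
                              : qpskBase[dibits[i] & 3]);
    return out;
}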

As I said, my app takes an input file and transmits it without any processing. I have an option to create a file with UTSC packets, but you can transmit anything you want. If the file is highly random, like when it's compressed, the signal will look smooth; if not, there will be visible patterns based on the content. Here are some examples.

A 7-zip file

A WAV file with music

A Visual Studio 2013 ISO image

Having visible patterns isn't desirable because they could violate power spectral density requirements. Here's what a basic UTSC signal without video would look like without any padding or interleaving.


The timeslots with spikes are the empty space reserved for video, which is zeroed out. Here's what it looks like when that empty space is filled with random bytes.


Notice the timeslots with peaks. That's from the WAV audio (8 bits, 44.1 kHz). With this method you can see how much of the packet is taken up by sound.

In this case the peaks aren't that bad, but even if we were operating within the rules for PSD, there's another problem. If you live in the US, you probably have smart energy meters transmitting in this band, and those can briefly interfere with the signal. If we assume a burst lasts 20 ms, it could corrupt 25,000 bits (0.02 seconds * 1,250,000 bits/second), or 3,125 bytes. That could mean the difference between a visible image and blocky, colorful garbage, or it could cause bursts of noise in the audio.

To solve this, UTSC uses an interleaver. It rearranges the order of the data bit by bit, in a fixed pseudo-random and uniform pattern, at the transmitter and restores it at the receiver. If a burst of noise damages the interleaved data, the damage will be spread very evenly over the packet after it's de-interleaved, which is not only easier to fix with error correction but also produces audio that sounds better than bursts of noise if the error correction fails.

Here is the same broadcast from the last picture but with the interleaver enabled.


As I said earlier, the interleaver scrambles bit-by-bit. For example, bit 1633 in a plain packet would end up as bit 32 in an interleaved one, bit 952430 would become bit 33, and so on. Bits 0-31 inclusive are taken by the "UTSC" sync header. I generated a lookup table for this and it's defined in interleaver.h in my project.
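Applying a table like that is straightforward; here's a minimal sketch. The helper names are placeholders, the table is passed in rather than compiled in, and the real one is generated into interleaver.h.

#include <cstdint>
#include <vector>

constexpr size_t kSyncBits = 32;   // "UTSC" sync header, bits 0-31, left in place

// Bit helpers for a packed, MSB-first byte buffer.
inline bool getBit(const std::vector<uint8_t>& buf, size_t i)
{
    return (buf[i >> 3] >> (7 - (i & 7))) & 1;
}
inline void setBit(std::vector<uint8_t>& buf, size_t i, bool v)
{
    if (v) buf[i >> 3] |= uint8_t(0x80 >> (i & 7));
    else   buf[i >> 3] &= uint8_t(~(0x80 >> (i & 7)));
}

// Interleave one packet with a precomputed lookup table, where table[i] is
// the source bit index for output bit i (so table[32] == 1633 in the example
// above). De-interleaving is the same loop with the two indices swapped.
std::vector<uint8_t> interleavePacket(const std::vector<uint8_t>& in,
                                      const std::vector<uint32_t>& table)
{
    std::vector<uint8_t> out(in.size(), 0);
    for (size_t i = 0; i < kSyncBits; ++i)
        setBit(out, i, getBit(in, i));            // sync bits stay where they are
    for (size_t i = kSyncBits; i < table.size(); ++i)
        setBit(out, i, getBit(in, table[i]));     // scatter the rest
    return out;
}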

Now that I've explained how this works, here's the full UTSC broadcast sequence.

(audio, video, EPG, files, etc.)->[packet muxer]->[FEC generator]->[interleaver]->[transmitter]

I don't have a FEC generator yet so I use random bytes as a placeholder.
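Put together, one pass through the chain can be pictured like this. The function names are illustrative only, and the FEC stage is nothing more than the random-byte placeholder mentioned above.

#include <cstdint>
#include <cstdlib>
#include <vector>

using Packet = std::vector<uint8_t>;

// Illustrative stage declarations; the real project has its own versions.
Packet muxPacket(const Packet& audio, const Packet& video /*, EPG, files, ... */);
Packet interleavePacket(const Packet& in, const std::vector<uint32_t>& table);
void transmitPacket(const Packet& in);

// Placeholder FEC stage: random bytes where the LDPC parity will eventually go.
Packet addFecPlaceholder(const Packet& in, size_t parityBytes)
{
    Packet out = in;
    for (size_t i = 0; i < parityBytes; ++i)
        out.push_back(std::rand() & 0xFF);
    return out;
}

// One pass through the broadcast sequence: mux -> FEC -> interleave -> transmit.
void broadcastOnce(const Packet& audio, const Packet& video,
                   const std::vector<uint32_t>& interleaveTable, size_t parityBytes)
{
    Packet packet = muxPacket(audio, video);
    packet = addFecPlaceholder(packet, parityBytes);
    packet = interleavePacket(packet, interleaveTable);
    transmitPacket(packet);
}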

Conclusion

I want it to be as easy as possible for people to get started with UTSC, so I'm starting what I call the UTSC Ecosystem Project. It's a collection of guides and open-source programs for setting up a station. The goal is for anyone to be able to start with a PC or Raspberry Pi and a LimeSDR and be broadcasting in about 15 minutes, instead of needing a week of free time and a PhD in Linux as is the case with too many open-source projects.
