Thursday, May 17, 2018

UTSC Air Interface: First Tests

Tonight I enlisted the help of an associate in Texas, Tech2025 (aka RFShibe), to transmit a dummy UTSC signal. It was kind of funny: he casually asked whether a HackRF could transmit UTSC, which led me to ask if he had access to one. One thing led to another, and he ended up helping me test my air interface.

I would have done it myself, and indeed I tried numerous times, but my LimeSDR Mini isn't operating like I need it to, even after the firmware upgrade.

Fortunately, Tech2025 happened to own a HackRF and agreed to transmit for me: I would send him a flowgraph, he would transmit it, and he would show the result on an RTL dongle connected to another computer.

After I built a GNU Radio flowgraph that uses a Random Source block to transmit QPSK at the proper clock rate for UTSC, I sent it over Discord's file sharing. Tech2025 transmitted it and sent back screenshots to prove that it worked.
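
For anyone curious, here's a minimal Python sketch of what that flowgraph does (GNU Radio 3.7-era API). GRC's Random Source block is really just a repeating vector source of random integers, so that's what I use; the sample rate, samples per symbol, gain, and center frequency are placeholders rather than the actual UTSC parameters.

#!/usr/bin/env python
# Minimal stand-in for the dummy-signal flowgraph. Rates and frequency
# below are placeholders, not the real UTSC numbers.
import numpy as np
from gnuradio import gr, blocks, digital, filter
import osmosdr  # provides the HackRF sink

class dummy_qpsk_tx(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self, "UTSC dummy QPSK")
        samp_rate = 2e6   # placeholder sample rate
        sps = 4           # samples per symbol

        # GRC's "Random Source": a repeating vector of random symbols 0-3.
        data = np.random.randint(0, 4, 10000).tolist()
        src = blocks.vector_source_b(data, True)

        # Map each 2-bit symbol onto a unit-power QPSK constellation point.
        points = [(1+1j), (-1+1j), (1-1j), (-1-1j)]
        mod = digital.chunks_to_symbols_bc([p / (2 ** 0.5) for p in points])

        # Pulse-shape with a root-raised-cosine filter.
        taps = filter.firdes.root_raised_cosine(1.0, samp_rate,
                                                samp_rate / sps, 0.35, 64)
        shaper = filter.interp_fir_filter_ccf(int(sps), taps)

        sink = osmosdr.sink(args="hackrf=0")
        sink.set_sample_rate(samp_rate)
        sink.set_center_freq(915e6)   # placeholder frequency
        sink.set_gain(14)

        self.connect(src, mod, shaper, sink)

if __name__ == '__main__':
    dummy_qpsk_tx().run()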

Here's how it looks on my end in GNU Radio:



In the following real-world test, a QPSK signal carries random bytes ranging from 0 to 255.

Credit: Tech2025/RFShibe

In the next image, the range was from 0 to 3.

Credit: Tech2025/RFShibe

This signal is roughly as wide as a UTSC channel should be, so we're off to a good start.

Wednesday, May 16, 2018

Theory on UTSC decoder latency

The goal of UTSC is to provide a digital TV standard that operates as much like analog TV as possible. This means maximum reliability, range, and weak-signal performance. A UTSC channel should be able to degrade gradually and have the sound continue working long after the picture is lost. This is in contrast to ATSC's terrible cliff effect.

One of the things I noticed about analog vs. digital is that digital TV has a noticeable delay between the time you tune in a channel and the time it's displayed. Analog, on the other hand, can be shown immediately, which lets you flip through channels much more quickly.

I wanted UTSC channels to be shown as quickly as possible, and I figured it should be possible to bring the latency reasonably close to that of analog TV. If you've already read my standard, you know that channels are sent in packets taking 1 second each to transmit. My initial idea was to have decoders start decoding and playing a channel the moment they see the preamble indicating a new packet, playing the sound and video as soon as enough data has arrived. The maximum latency would be around 1 second, in the worst case where the decoder tunes in just after a preamble and has to wait for the next packet.

Sound obviously carries much less data than video, and the standard has the sound transferred earlier in the packet than the video, so under this proposal the decoder could wait for the sound plus a couple of video frames and then start playing. Assuming 48 kbit/sec Opus audio, this leads to a theoretical minimum latency of just over 45 milliseconds.
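
Here's the rough arithmetic behind that figure, as I reconstruct it; the 1.25 Mbit/sec gross air rate (1,250,000 bits per 1-second packet) and the ~4 kbit inter-frame size are my assumptions:

# One second of 48 kbit/sec Opus, plus two small inter frames,
# received at an assumed 1.25 Mbit/sec gross air rate:
gross_rate = 1_250_000            # bits per second on the air (assumed)
t_audio = 48_000 / gross_rate     # 38.4 ms for the audio
t_video = 2 * 4_000 / gross_rate  # ~6.4 ms for two small inter frames
print(t_audio + t_video)          # ~0.0448 s -> "just over 45 milliseconds"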

However, last night I realized that the minimum latency can't be less than 1 second, because I don't believe it's possible to build a good decoder that doesn't wait for a whole packet before it starts decoding. Here are the 4 reasons why.

Problem 1: No way to find packet preamble

One problem is that there is no way to verify that a packet is valid unless you wait for the whole thing. The "UTSC" preamble that marks the beginning of a packet only works because I added a CRC32 field to check against the rest of the packet: the bytes "UTSC" could occur anywhere in the stream, and you don't want the decoder to latch onto a false beginning. Obviously the preamble doesn't matter once you've locked onto a station, but you don't want to get garbage by starting the decode process in the wrong place.
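
Here's a toy illustration of the point. The layout (4-byte "UTSC" magic followed immediately by a CRC32 over the rest of the packet) is a simplified stand-in for the real format, but the logic is the same: the preamble alone proves nothing, and the CRC can't be checked until the whole packet is in.

import zlib

PREAMBLE = b"UTSC"
PACKET_BYTES = 156_250   # 1,250,000 bits / 8

def find_verified_packet(stream):
    """Return the offset of a CRC-verified packet start, or -1."""
    i = stream.find(PREAMBLE)
    while i != -1:
        end = i + PACKET_BYTES
        if end > len(stream):
            return -1                 # must wait for the rest of the packet
        stored = int.from_bytes(stream[i + 4:i + 8], "big")
        if zlib.crc32(stream[i + 8:end]) == stored:
            return i                  # the CRC confirms a real packet start
        i = stream.find(PREAMBLE, i + 1)   # false "UTSC" in the data; keep looking
    return -1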

Problem 2: Can't use FEC to correct errors

Another problem is that 250 kbit/sec of FEC protects the data. This amounts to 4/5 FEC: 1,000,000 data bits out of 1,250,000 total. Without an entire packet, you don't get the FEC, so you can't correct any errors. You might argue that only the first packet would be played without FEC and that all future packets would be protected by it. But in reality, because you started decoding the first packet without FEC, you must continue to do so, or you risk a brief interruption in the playback. Here is an illustration of this issue, which assumes that no interleaver is used.


In Scenario 1, the decoder waits for a whole packet plus the FEC before decoding and playing. In Scenario 2, the decoder waits until just enough data is available before decoding and playing. Notice that if Scenario 2 continues, it will never get to receive the FEC before playing a packet.

Problem 3: Time discrepancy in video compression

The biggest problem, in my opinion, is the uneven distribution of data inherent to digital video compression, especially the interframe variety used by almost every codec.

In analog TV, every element of each frame took the same amount of time, every time a frame was transmitted. There were some tolerances, such as the power grid deviating from 60 Hz or the slight frame-rate change made to add color, but overall it was reasonably precise and unchanging.

In digital video, more data is spent on keyframes than on inter frames. In case you didn't know, a keyframe is a complete frame that the following inter frames build on: the compressor encodes a full image to start the video, and each frame after that is just the difference from the frame before it. Every so often another keyframe is sent.

If the decoder tries to start decoding before a whole packet is received, then it will most likely fail to play the video properly. This is because much more data is sent in the initial keyframe of each packet than in the rest of the frames. Since the channel bandwidth is constant, this means that keyframes will take longer to send than inter frames.


Since digital frames arrive at irregular intervals, you can't just start playing the video as soon as you get the first few frames. If you don't wait for the entire packet, you're very likely to run out of data while a longer frame is still being transmitted.
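
A toy simulation makes this concrete. The frame sizes below are invented, but the shape is what matters: each packet starts with a big keyframe, the channel rate is constant, and packet 2's keyframe is fatter than packet 1's.

CHANNEL_BPS = 1_000_000
FPS = 30
packet1 = [304_000] + [24_000] * 29   # ~1 Mbit: keyframe plus 29 inter frames
packet2 = [500_000] + [17_250] * 29   # same total, but a fatter keyframe
frame_bits = packet1 + packet2

def stalls_after(prebuffer_frames):
    """Start playback once `prebuffer_frames` frames have arrived."""
    arrivals, t = [], 0.0
    for bits in frame_bits:
        t += bits / CHANNEL_BPS            # constant-rate channel
        arrivals.append(t)
    start = arrivals[prebuffer_frames - 1]
    return any(arrived > start + n / FPS   # frame n needed before it arrived
               for n, arrived in enumerate(arrivals))

print(stalls_after(2))    # True: the eager decoder starves on packet 2's keyframe
print(stalls_after(30))   # False: waiting a whole packet keeps the buffer ahead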

Problem 4: The interleaver

Even though I think #3 is the biggest issue, I saved this one for last because the interleaver is one of the more recent developments. To make this section short: UTSC packets are scrambled by an interleaver, and because the entire packet is scrambled, a receiver must wait until the entire packet has been received before decoding it. This means the absolute minimum latency is about 1 second.

Below is a longer explanation of the interleaver.

Although UTSC could be transmitted on any band wide enough, such as 500 MHz or 2.4 GHz, I think it's best suited to the 900 MHz band. The problem is that many smart energy meters transmit FHSS hopping bursts all over 900 MHz. Since reliability is the focus of UTSC, I needed a way to mitigate them somehow. The FEC is good, I think, but it won't fix huge burst errors when every energy meter in a neighborhood transmits over a station.

I decided to use a fully random interleaver, a sort of scrambler. Since this is part of the air interface (the way it's transmitted), it doesn't affect the packet format that I released in 2017.

I generated a large amount of encryption-grade randomness, verified it with a program called ENT, and then used it to generate random integers for interleaver bit positions. This means that once you have a UTSC packet that's ready to transmit, you simply copy bit-by-bit into a new interleaved packet, using the bit positions I generated.

Since there are 1,000,000 (data) + 250,000 (FEC) bits in a UTSC packet, we have 1,250,000 bits, starting at bit 0 and ending at bit 1,249,999. We do NOT want to interleave the "UTSC" preamble, because we need receivers to be able to find it, but we DO want to interleave the CRC32 that comes right after it because we want it to be more resistant to burst errors.

This means we only have to interleave 1,250,000 - 32 = 1,249,968 bits, numbered from 0 to 1,249,967. So when we start populating the bits in our interleaved packet for transmitting, bit #1601 from the plain unscrambled packet goes first at position #0, then bit #952398, and so on. Since the pattern is made from high-quality randomness, the bit positions are extremely well distributed.

On the other end, the receiver would have a copy of the interleaver's bit ordering scheme and would work the process backward. To reproduce the original packet, the receiver would take bit #0 from the received packet and put it at bit #1601, and put bit #1 at bit #952398, and so on. At the end, the original packet will have been reconstructed and any burst errors will be evenly distributed over the entire packet, making it easier for the FEC to fix.
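
In code, both directions are just a table-driven copy. Here perm stands for my generated table of 1,249,968 positions (so perm[0] == 1601, perm[1] == 952398, and so on); the 32-bit preamble passes through untouched.

PREAMBLE_BITS = 32
N = 1_249_968   # 1,250,000 bits minus the un-interleaved preamble

def interleave(bits, perm):
    """bits: 1,250,000 ints (0 or 1). Returns the scrambled on-air order."""
    body = bits[PREAMBLE_BITS:]
    return bits[:PREAMBLE_BITS] + [body[perm[i]] for i in range(N)]

def deinterleave(bits, perm):
    """Inverse: received bit i goes back to original position perm[i]."""
    body = [0] * N
    for i in range(N):
        body[perm[i]] = bits[PREAMBLE_BITS + i]
    return bits[:PREAMBLE_BITS] + body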

Here's a picture showing a 20-millisecond burst error. The drawing is to scale, showing how much that error would damage a UTSC packet. You may want to open the image in another tab and zoom in to see it in detail.

Left: a 20-ms error in a plain UTSC packet
Right: the same error in an interleaved packet.

I wasn't sure at first whether I wanted to interleave, because I immediately saw that it would prevent instant playback. I wondered if I should leave some of the bit flags un-interleaved so a channel could indicate whether it was interleaved or not, but I realized that an error could flip the flag and confuse the decoder, not to mention that burst errors would break any non-interleaved channels. In the end, I decided that all UTSC channels will be interleaved.

Monday, May 7, 2018

New fiber optic lines

A few days ago (May 3) I saw some colored tubes sticking out of the ground at the corner of the local post office. I asked inside about them, and the lady at the counter hadn't even noticed them. She said they must have been put in during her lunch break, which seems odd considering how long the job would take. Anyway, I took some pictures because I knew it had to have something to do with fiber optics.


I wondered why the town hadn't been dug up, but some Googling revealed that horizontal drilling machines exist for exactly this job.

I took a different route home and noticed that there was some new road paint. In addition to new dashed white lines (not shown), someone had spray-painted MH next to a BellSouth manhole cover.

A manhole cover directly across the street from my yard.

This photo doesn't show the fiberglass junction box nearby, or the fiber optic cable on a utility pole. The cable on the pole leads into the ground near the fiberglass box. There are a bunch of poles carrying fiber optic cable around this part of town. I saw an AT&T truck putting new fiber on the poles several months ago, and the manhole cover has a BellSouth logo so I believe this section is managed by them. With all of this infrastructure available right by my yard, I wonder why AT&T refuses to connect me to their fiber.

Anyway, I don't think it's a coincidence that the paint appeared at the same time as those tubes.

Today I went to check if anything had happened and, to my surprise, there was a call-before-you-dig marker beside a new fiberglass junction box set partway into the sidewalk.



Apparently this is being done by the Palmetto Rural Telephone Co-op, a company I hadn't heard of before.

This is only 1/4 mile from my house, so perhaps the neighborhood will be offered better Internet service. After all, I can't imagine why a small-town post office, or any post office for that matter, would need its own dedicated fiber lines.

Sunday, March 18, 2018

UTSC Datagram Specification

I finalized this in 2017 but forgot to release it. This is the format for sending files (aka datacasting) across a dedicated UTSC channel. With such a setup you could send about 10 GiB of files per day (1 Mbit/sec × 86,400 seconds ≈ 10.8 GB, or just over 10 GiB).

utsc_datagram_finished_release.txt

Friday, March 16, 2018

New data fuzzer

The main focus of my UTSC standard is reliability, so I needed a way to test how it responds to bit errors. Randomly corrupting data is called "fuzzing," but I couldn't find a program that was easy to use, so I wrote one in Liberty BASIC.

My program takes a file to be fuzzed and another file containing high-quality random bytes. It uses 24-bit values from the random file as the byte positions to fuzz, and takes each value mod 8 to pick the bit to flip. Since the positions are 24-bit, it can randomly flip bits in files up to 16 MiB.
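
For anyone who doesn't run Liberty BASIC, here's a rough Python equivalent of the scheme. The file names and flip count are up to you, and the header-skip parameter matches the feature mentioned below.

def fuzz(in_path, random_path, out_path, flips, skip_header=0):
    """Flip random bits in a file, following the scheme described above."""
    data = bytearray(open(in_path, "rb").read())
    rnd = open(random_path, "rb").read()
    for i in range(flips):
        # One 24-bit value per flip: the byte offset (hence the 16 MiB limit).
        val = int.from_bytes(rnd[3 * i:3 * i + 3], "big")
        if skip_header <= val < len(data):   # out-of-range offsets are skipped
            data[val] ^= 1 << (val % 8)      # the same value mod 8 picks the bit
    open(out_path, "wb").write(bytes(data))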

You can fuzz anything you like, but I wanted to fuzz audio so I could hear how it would sound when the signal is weak and bits are being corrupted. The sound kept working on analog TV even when the signal was too weak for the picture to come through, and I want UTSC to do the same. First I tried Opus files. They can handle some bit errors, but they stop playing altogether if there are too many, and if the header is corrupted they won't play at all. I added a header-skipping feature to my fuzzer, but obviously a real-world signal could lose the header.



Then I got to thinking about WAV audio. It has a very small header (44 bytes), and the audio portion can withstand unlimited bit errors without stopping. Of course, you need the header to know the sample rate and format, but what if UTSC "best practices" defined a default WAV format? After some tests I found that 24 kHz mono 8-bit WAV files are a good compromise between quality and bandwidth. As I wrote on the LostCarrier.Online Discord channel today, "With 17% bit errors, it degrades like analog sound and fades into the noise rather than glitching." (My code was off by a factor of 8, so I actually meant 2.125% bit errors.)
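
Here's what that proposed default would look like, written out with Python's wave module. The point of fixing the format by convention is that a receiver could keep playing samples even when the 44-byte header is damaged or missing.

import wave

# One second of silence in the proposed default format:
# 24 kHz, mono, 8-bit unsigned PCM (silence is the 128 midpoint).
w = wave.open("utsc_default.wav", "wb")   # hypothetical file name
w.setnchannels(1)        # mono
w.setsampwidth(1)        # 8-bit samples
w.setframerate(24000)    # 24 kHz
w.writeframes(bytes([128] * 24000))
w.close()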

I already specified Opus as the recommended audio format on UTSC, but it can't handle anywhere near enough bit errors to be reliable in bad conditions. Let me demonstrate with a 10-second clip from Syn Cole's "Feel Good" from NoCopyrightSounds.

This falls outside of YouTube use, so hopefully 10 seconds is short enough to count as fair use; if not, I'm including the attribution, and I'll gladly swap the clip for something else if the owner complains.

"Syn Cole - Feel Good [NCS Release]"
https://www.youtube.com/watch?v=q1ULJ92aldE
Syn Cole
https://soundcloud.com/syncole
https://www.facebook.com/SynCole
https://twitter.com/SynColeOfficial
https://www.instagram.com/SynCole/




Notice that with only 0.2% of the bits flipped, the Opus file is barely playable. In contrast, the WAV files still contain obvious music even with about 50% errors. I say "about" because this is a random process: some bits may be flipped twice and end up unchanged, so 50% is only an approximation. We can assume it's very close to 50% because of the high quality of the randomness used.

These results make me think that 24 kHz mono 8-bit audio is the optimal format if you want to ensure audio reliability at low bandwidth. However, its bandwidth is much higher than Opus's. With Opus at 48 kbit/sec, the audio takes about 5.7% of the channel bandwidth, counting overhead. The WAV format I've described takes 192 kbit/sec, or about 19.24%. That's roughly 4 times as much bandwidth just to make sure the sound gets through.

UTSC offers 1 Mbit/sec of bandwidth, so with Opus audio about 94% of it is available for video, compared to 80.75% when using WAV. It's up to the broadcaster to decide whether losing 135.8 kbit/sec of video bandwidth is worth it. If extra-high quality is desired, it may not be.

Tuesday, February 13, 2018

LimeSDR Mini unboxing

Yesterday my 2 LimeSDR Minis arrived. Before I show how one of them performs, let's see some unboxing photos.







Driver Setup


The LimeSDR is not well documented; you can't just Google "limesdr mini drivers" and expect to find anything. After a lot of trial and error, Jeff from LostCarrier.Online linked me to a USB controller driver that somehow makes everything work.

So, to install the drivers:

1. Install PothosSDR. It won't install drivers itself, but it adds a Start Menu link to a tool called Zadig.
2. Use Zadig to install a driver for your LimeSDR Mini.
3. Download the USB controller driver and have Device Manager update your LimeSDR with it. Windows should prefer the USB controller driver and quickly switch over once you choose the Update Driver Software option.

This does not work for ExtIO-based programs like HDSDR. You'll need SDR Console v3 to try your LimeSDR Mini. It has good support for receiving from a LimeSDR, but it can't transmit even though there is a Transmit tab. Also, I've heard that 20 MHz is the most bandwidth you can get over USB 2.

I have an Nvidia GTX 750 Ti, so SDR Console can use CUDA to accelerate the FFT that generates the waterfall. Even so, it stutters for me above 7.5 MHz.

One more thing: be sure to choose an antenna once your LimeSDR Mini is running in SDR Console. I spent a few minutes troubleshooting a blank waterfall before realizing that no antenna was selected.



Reception tests

3 ATSC TV pilots (spikes) at once:


LTE at 2.1 GHz:



I couldn't get SDRangel or Foobar2000 (with jocover's plugin) to transmit. I'm still trying to figure out how to transmit, and when I do I'll write another post showing how to do it.

Monday, February 5, 2018

Jammer on 2018 AM Rally

Last night I heard a looping recorded message jamming AM transmissions on 80 meters. It was mixed in with other QSOs, but here's what I managed to get:

"Why don't you narrow 'er up, because like narrow it up, I'll have a sideband QSO below me, one above me, and I'm the ****** in the middle."

As I said, it was looping, but one contestant seemed to think it was a live person. I do think a live person was controlling it, because I remember it following the contest when the activity moved lower in the band, but the message itself was a loop; it had exactly the same tone of voice each time. The real proof came during one cycle when it started stuttering like a slow computer. Everyone else's voice was fine, so it wasn't an issue on my end.

Here's a bit I managed to record.