Thursday, August 23, 2018

Transmitting with the LimeSDR Mini

I was an early bird purchaser of the LimeSDR Mini, and I acquired a pair for $99 each. They arrived on February 10, 2018. I described in a previous post how I was able to use them for receiving, but it wasn't until August 17 that I was able to transmit a clean and reliable signal.

Right now, the transmit function seems to work reliably only on Linux. My results on Windows have varied: initially I was able to transmit some distorted FM audio in the Windows version of GNUradio, but I could never transmit a digital QPSK signal. It's been a while since I tried, but I don't think I can transmit anything from Windows anymore; currently all I get is an unstable solid carrier.

On the LostCarrier.Online Discord channel, Ballistic Autistic told everyone that he succeeded in transmitting good FM audio with his LimeSDR Mini from GNUradio on Ubuntu. I asked him if he would try QPSK and send me a reception screenshot, which he did.

Credit: Ballistic Autistic

Once I knew that it was possible to make it work, I downloaded the GNUradio Live DVD image, which is an Ubuntu-based distribution, and used Universal USB Installer to write it to a flash drive. I installed the driver, compiled and installed gr-limesdr, and then had a working transmit setup. Since instructions for using the LimeSDR Mini are hard to find, I'll explain the full process.

I would recommend updating the firmware on your LimeSDR Mini, though I don't think it's strictly necessary.

Transmitting with a LimeSDR Mini, start to finish

1. Download the GNUradio Live DVD image. Get it from here: GNU Radio Live SDR Environment

2. Install it to a flash drive using Universal USB Installer. I chose to have a 1GB persistent area.

3. Boot from the drive

4. Open a terminal (command prompt) and enter the commands linked here to install the driver in Ubuntu.

5. Run "mkdir gr-limesdr"

6. Visit https://github.com/myriadrf/gr-limesdr and enter the commands shown for Linux installation.

7. Open GNUradio Companion. You should now have LimeSDR source and sink blocks. You can click in the list of blocks on the right and press Ctrl+F to search for them.

If you haven't already, plug in your LimeSDR Mini. Now let's put together a flowgraph to test the transmitting function. You should have a second SDR on another computer to receive with.

Transmitting FM audio

Before starting, make sure you have a WAV file recorded as 48 kHz mono.

In GNUradio Companion, click Open and you should see a "ubuntu" folder on the left. Navigate there if it's not already selected and then open gr-limesdr/examples/FM_transmitter.grc.

1. Remove the "LimeSuite Sink (TX)" block, and add a fresh one from the list on the right.

2. Connect the output of "Rational Resampler" to the input of the new "LimeSuite Sink (TX)" block.

3. Double-click the LimeSuite block and you'll see that the "Device serial" field is empty. Make sure your LimeSDR Mini is connected with USB, then open a terminal and run "LimeUtil --find". This will print a list of all Lime devices. Copy the serial number of yours and paste it into the LimeSuite block's "Device serial" field.

4. Change "Device type" to LimeSDR-Mini

5. Choose an "RF frequency" that you want to transmit on. Even though there is very little transmit power, you want to choose a frequency that won't interfere with anything. I like 903.4 MHz but that's not usually license-free outside North America. Frequencies are measured in Hz, so you could type something like 903400000 or just 903.4e6, with e6 meaning MHz.

6. Set the "Sample rate" to 5e6, which means 5 MHz. Click OK.

7. Double-click the "Rational Resampler" block. Change "Interpolation" to 125 and "Decimation" to 12. Click OK.

The resampler converts the 480 kHz FM quadrature signal (a.k.a. IQ data, produced in the next step) up to 5 MHz. This block converts between IQ sample rates using the formula
    output_rate = input_rate * (Interpolation/Decimation).

Since the NBFM block will output IQ data at a rate of 480 kHz and we're transmitting at a 5 MHz sample rate,
    480000 * (125/12) = 5000000
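
If you want to use different rates, you can derive the Interpolation and Decimation values yourself by reducing the output/input ratio to lowest terms. Here's a quick sketch in Java (the language used later in this post) that reproduces the 125/12 pair:

    // Reduce output_rate/input_rate to lowest terms to get the
    // Rational Resampler's Interpolation and Decimation values.
    public class ResamplerRatio {
        static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

        public static void main(String[] args) {
            long inputRate  = 480000;   // NBFM quadrature rate
            long outputRate = 5000000;  // LimeSuite Sink sample rate
            long g = gcd(outputRate, inputRate);
            System.out.println("Interpolation = " + (outputRate / g)); // 125
            System.out.println("Decimation    = " + (inputRate / g));  // 12
        }
    }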

8. Double-click the "NBFM Transmit" block. Change "Audio Rate" to 48000. Change "Quadrature Rate" to 480000. Change "Max Deviation" to your desired FM deviation. I'm going to use 24e3 (24 kHz). Click OK.

9. Finally, double-click the "Wav File Source" block. For the "File" field, click the "..." button to browse for the WAV file you want to transmit. Once it's in the "File" field, click OK.

Connect a receive SDR to another computer, open your desired SDR program, and navigate to the frequency you chose. At the computer you'll be transmitting with, connect your LimeSDR Mini to an antenna and click the green play button at the top. The result should be clear FM audio.

For some reason, I can't get my SDR to transmit on low frequencies like FM broadcast or the 6m ham band. Frequencies like 400 MHz and higher work well, but 900 MHz seems to work best.

The resulting signal received with an SDRplay RSP1

Transmitting QPSK

For this experiment, I'm going to transmit a QPSK signal with a rate of 1.25 Mbit/sec. (At a 5 MHz sample rate with 8 samples per symbol, the symbol rate is 625 ksymbols/sec, and QPSK carries 2 bits per symbol: 625000 * 2 = 1.25 Mbit/sec.)

1. Create a new flowgraph and follow the instructions in the previous example to set up a "LimeSuite Sink (TX)" block. For this section I set mine, on the CH0 tab, to use a digital filter of about 750 kHz. This makes the signal edges look cleaner in the waterfall.

2. Add the following blocks: "Constellation Modulator" and "Constellation Object".

3. Double-click the "Constellation Modulator" block. In "Constellation", point it to the "Constellation Object" block by entering its name, variable_constellation_0. Change "Differential Encoding" to No and "Samples/Symbol" to 8. Click OK.

We're going to try transmitting a "Random Source" with random values evenly distributed between 0 and 255, and optionally a file with non-random data.

4. Add a "Random Source" block and double-click it. Change "Output Type" to Byte, "Minimum" to 0, and "Maximum" to 255. Click OK.

5. Connect the output of "Random Source" to the input of "Constellation Modulator". Connect the output of "Constellation Modulator" to the input of "LimeSuite Sink (TX)".

6. (Optional) Now add a "File Source" block and double-click it. Change "Output Type" to Byte. Choose a file and click OK. Right-click the block and choose Disable, then connect its output to the input of "Constellation Modulator".

Click the green play button to run the flowgraph.

For my optional file, I chose the Monero blockchain. Notice that the signal is not as smooth because the file isn't very random. If you want to switch between Random Source and File Source, just disable one and enable the other.
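
The smoothness difference makes sense if you look at what the modulator does with each byte. Below is a rough sketch of the 2-bits-per-symbol mapping; the constellation ordering here is illustrative, and GNUradio's Constellation Modulator also performs pulse shaping, which is omitted. Uniformly random bytes hit all four constellation points equally often, while structured file data doesn't.

    // Illustrative only: split one byte into four QPSK symbols (2 bits each).
    public class QpskMap {
        // One constellation point (I, Q) per 2-bit pair; ordering is arbitrary here.
        static final double[][] POINTS = { {1, 1}, {-1, 1}, {-1, -1}, {1, -1} };

        public static void main(String[] args) {
            int b = 0x6C; // example byte, 01101100 in binary
            for (int shift = 6; shift >= 0; shift -= 2) {
                int pair = (b >> shift) & 0b11;
                System.out.printf("bits %d%d -> (%+.0f, %+.0f)%n",
                        (pair >> 1), pair & 1, POINTS[pair][0], POINTS[pair][1]);
            }
        }
    }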


Random bytes from 0 to 255 inclusive

Monero blockchain

I had to reduce the transmit power by 5 dB for the random source because it appears to produce a much stronger signal.

Transmitting 16-QAM

This experiment will produce a 16-QAM signal with a rate of 2.5 Mbit/sec. It will occupy the same RF bandwidth as the QPSK signal in the previous section because the symbol rate is unchanged; 16-QAM simply carries 4 bits per symbol instead of QPSK's 2, doubling the bit rate.

1. Follow the instructions above to create the flowgraph but don't transmit yet.

2. Double-click the "Constellation Object" block and change "Constellation Type" to 16QAM. Click OK and then try transmitting.

You'll notice that 16-QAM produces a smoother signal than QPSK.

Random bytes from 0 to 255 inclusive

Monero blockchain

As in the previous section, the random source's transmission power had to be reduced by 5 dB so the waterfalls would look the same.

Thursday, August 2, 2018

JMemPGP: Java PGP API for handling strings

I've been looking for ways to use PGP in Java programs and the Bouncy Castle API seems to be the most common method. The problem is that almost every example involves reading a file and writing the result to another file. Others have asked on Stack Exchange about processing data solely from memory but solutions are very hard to find. I decided I would write my own API based on the Bouncy Castle methods so I could use PGP to operate on Java Strings and byte[] arrays.

My API is called JMemPGP (Java Memory PGP). The only files it needs are public and private keys, depending on what operation you want. The actual input and output data consist of a pair of byte[] arrays. If you want to use a String, you can use the String.getBytes() method.

I'm going to demonstrate the 4 basic PGP operations using JMemPGP: encrypt, decrypt, sign, and verify.

For this tutorial, you need GPG4Win, GPGshell, NetBeans, and two files from the Bouncy Castle website. Start NetBeans downloading now. Make sure to get a version that contains the JDK.

First, download and install GPG4Win and GPGshell. Then open Kleopatra and create a certificate. If you're not prompted to create one at startup, then navigate to File->New Certificate...


Click "Create a personal OpenPGP key pair", fill in the fields on the next page, and then I would suggest going into "Advanced Settings" and changing the key size to 4096, but that's not necessary to continue. Click Next and then Create Key. Follow the instructions shown for providing random input. When you're done, you should see your new certificate in the list.


Right-click it and choose "Export Certificates..."


Let's save it to the root of the C drive and name it pub.gpg. (You might have to save it to a different folder if you're on Windows 10.)

Now right-click the certificate again and choose "Export Secret Keys..." Make sure "ASCII armor" is unchecked. Save it as sec.gpg and click OK.

You should now have two files, as shown:


Now we need those two files from the Bouncy Castle website. Navigate to https://www.bouncycastle.org/latest_releases.html and scroll down to the "Signed Jar Files" section.


You need the two files that are highlighted. There may be a newer version by the time you read this, and that's fine.

Now it's time to install NetBeans. The installer is pretty simple so just run it. Once it's done, open NetBeans and navigate to File->New Project...

The default project type should be a Java Application, so click Next. For a project name, just type PGPTutorial.


Now right-click the project's package in the pane on the left and choose New->Java Class...



Call the new class JMemPGP and click Finish. Now we need to install the Bouncy Castle API. Right-click the project this time, the item at the top with capital letters, and choose Properties at the bottom of the menu. Now choose the Libraries category and click "Add JAR/Folder".


Use the Ctrl key to select both JAR files, click Open, and then click OK to exit the Properties dialog.


Now visit my article on yours.org to get the JMemPGP API. It costs $1 to unlock the paywall. Once you're in, select the code and copy it to the clipboard. In NetBeans, go to your file JMemPGP.java, which should be open in the editor already, and replace the contents with what you just copied, but make sure to preserve the line "package pgptutorial;". Now click the Save All button at the top or press Ctrl+S.

There are just a couple more things we need. Add the following imports to your main file, PGPTutorial.java.
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.security.Security;
    import org.bouncycastle.jce.provider.BouncyCastleProvider;

Change your main() method to:
    public static void main(String[] args) throws Exception{

Finally, add this line to the beginning of your main() method:
    Security.addProvider(new BouncyCastleProvider());


Now we're ready to start using the API for the 4 basic PGP operations. Here is what PGPTutorial.java should look like when you're done.

The 4 basic PGP operations

Encrypt

Let's say we want to encrypt the string "OneDirection" with our PGP public key. Copy this code to update your main() method:

This code starts with a String, converts it to a byte[] array, connects a ByteArrayInputStream to the byte[] array, encrypts the data, and returns it in a ByteArrayOutputStream. This is converted back to a byte[] array, and then to a String for printing to the screen.
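
Since the code itself is behind the paywall, here's a hypothetical sketch of the call's shape; the method name and parameter order are my guesses, and the real signatures are in the article:

    // Hypothetical sketch -- the real JMemPGP signature may differ.
    byte[] input = "OneDirection".getBytes();
    ByteArrayInputStream bIn = new ByteArrayInputStream(input);
    ByteArrayOutputStream bOut = new ByteArrayOutputStream();

    // Encrypt using the exported public key (needs java.io.FileInputStream).
    JMemPGP.encrypt(bIn, bOut, new java.io.FileInputStream("C:\\pub.gpg"));

    byte[] encrypted = bOut.toByteArray();
    System.out.println(new String(encrypted));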

Run the app and you should get output similar to this:

You can copy-paste the PGP message block and decrypt it with GPGtray. You could paste it into GPGtray's text window and decrypt from there, but we'll just use the quick decrypt option. Right-click the tray icon and select "Clipboard Decrypt.../Verify".


You should be prompted for the passphrase you used when you created your certificate. Enter it and click OK. Here is what your output should look like.


Notice that it says "0/12 Bytes". This means that our program encrypted just the 12 bytes in "OneDirection", with no padding.

You can also encrypt custom byte arrays, such as binary data.



Again, notice that we get an output of precisely 5 bytes.

Decrypt

We can also decrypt from within Java. Notice that this time we have to provide our passphrase within the program. I used "test" as mine.
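
As before, a hypothetical sketch of the shape of the call (not the real API):

    // Hypothetical sketch -- the real JMemPGP signature may differ.
    ByteArrayInputStream bIn = new ByteArrayInputStream(encrypted);
    ByteArrayOutputStream bOut = new ByteArrayOutputStream();

    // Decrypt using the exported secret key and the passphrase "test".
    JMemPGP.decrypt(bIn, bOut, new java.io.FileInputStream("C:\\sec.gpg"), "test");

    System.out.println(new String(bOut.toByteArray()));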

Output should be similar to this:



Sign



The output will be a detached signature. If you were to type the text "OneDirection" into Notepad and save it as a *.txt file, you could copy-paste this detached signature into a file and save it as *.txt.asc and verify it with GpgEx.
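
For reference, a hypothetical sketch of the signing call (again, my guess at the shape, not the real signatures):

    // Hypothetical sketch -- the real JMemPGP signature may differ.
    ByteArrayInputStream bIn = new ByteArrayInputStream("OneDirection".getBytes());
    ByteArrayOutputStream bOut = new ByteArrayOutputStream();

    // Produce a detached, ASCII-armored signature with the secret key.
    JMemPGP.sign(bIn, bOut, new java.io.FileInputStream("C:\\sec.gpg"), "test");

    System.out.println(new String(bOut.toByteArray()));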


Now right-click file.txt.asc and choose More GpgEx options->Verify.

Click "Decrypt/Verify".


As you would expect, if you change file.txt at all, the signature will not work. Let's change the text to "OneRepublic" and see what happens.



Save file.txt and try verifying it again.



Verify

You can also verify signatures from within Java.
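
Here's a hypothetical sketch of the verify call's shape (the real code is in the article):

    // Hypothetical sketch -- the real JMemPGP signature may differ.
    byte[] str = "OneDirection".getBytes();
    ByteArrayInputStream bIn = new ByteArrayInputStream(str);
    ByteArrayInputStream sigIn = new ByteArrayInputStream(signature); // the detached signature bytes

    // Check the detached signature against the exported public key.
    boolean valid = JMemPGP.verify(bIn, sigIn, new java.io.FileInputStream("C:\\pub.gpg"));
    System.out.println("Signature valid: " + valid);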



Copy and paste the code from the article, run it, and look at the last line it prints.


Let's change the line that says
    bIn = new ByteArrayInputStream(str);
to say
    bIn = new ByteArrayInputStream("OneDirection".getBytes());


Run the program again and you'll see that the signature is still valid. But if you change the string to say "OneRepublic" like in the last example, the signature will not match.


Run the program again and see what the last line says.


Thursday, May 17, 2018

UTSC Air Interface: First Tests

Tonight I enlisted the help of an associate in Texas, Tech2025 (aka RFShibe), to transmit a dummy UTSC signal. It was kind of funny: he casually asked whether a HackRF could transmit UTSC, which led me to ask if he had access to one. One thing led to another, and he ended up helping me test my air interface.

I would have done it myself, and indeed I tried numerous times, but my LimeSDR Mini isn't operating like I need it to, even after the firmware upgrade.

Fortunately, Tech2025 happened to own a HackRF and agreed to transmit for me if I sent a flowgraph, and then he would show the result on an RTL dongle connected to another computer.

After I built a GNUradio flowgraph that uses a Random Source block to transmit QPSK at the proper clock rate for UTSC, I sent it via Discord's file sharing function. Tech2025 transmitted it and sent back screenshots to prove that it worked.

Here's how it looks on my end in GNUradio:



In the following real-world test, a QPSK signal carries random bytes ranging from 0 to 255.

Credit: Tech2025/RFShibe

In the next image, the range was from 0 to 3.

Credit: Tech2025/RFShibe

This signal is roughly as wide as a UTSC channel should be, so we're off to a good start.

Wednesday, May 16, 2018

Theory on UTSC decoder latency

The goal of UTSC is to provide a digital TV standard that operates as much like analog TV as possible. This means maximum reliability, range, and weak-signal performance. A UTSC channel should be able to degrade gradually and have the sound continue working long after the picture is lost. This is in contrast to ATSC's terrible cliff effect.

One of the things I noticed about analog vs. digital is that digital TV has a noticeable delay between the time you tune in a channel and the time it's displayed. Analog, on the other hand, can be shown immediately, which allows you to flip through channels much more quickly.

I wanted UTSC channels to be shown as quickly as possible, and I figured it should be possible to bring the latency reasonably close to that of analog TV. If you've already read my standard, you know that the channels are sent in packets taking 1 second each to transmit. My initial idea was to have decoders start decoding and playing a channel as soon as they see the preamble indicating a new packet, playing the sound and video the moment enough data has arrived. The maximum latency would be around 1 second; this worst case would occur if the decoder tuned in right after a preamble and had to wait for the next packet.

Sound obviously carries much less data than video, and the standard transfers the sound earlier in the packet than the video, so under this proposal the decoder could wait for the sound plus a couple of video frames and then start playing. Assuming 48 kbit/sec Opus audio, this would lead to a theoretical minimum latency of just over 45 milliseconds.

However, last night I realized that the minimum latency can't be less than 1 second. I don't believe it's possible to build a good decoder that doesn't wait for a whole packet before it starts decoding. Here are the 4 reasons I believe it's not possible.

Problem 1: No way to find packet preamble

One problem is that there is no way to verify packet validity unless you wait for a whole packet. The "UTSC" preamble that marks the beginning of a packet only works because I added a CRC32 field to check against the rest of the packet; the bytes "UTSC" could occur anywhere in the stream, and you don't want the decoder to latch onto a false beginning. Obviously the preamble doesn't matter once you lock onto a station, but you don't want to get garbage by starting the decode process in the wrong place.
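
To make this concrete, here's a sketch of the search a decoder has to perform, using Java's built-in CRC32. The field offsets and packet length are placeholders of mine, not the real UTSC layout:

    import java.util.zip.CRC32;

    // Sketch: find a trustworthy packet start in a received byte stream.
    // Offsets/lengths are placeholders, not the actual UTSC layout.
    public class PreambleScan {
        static int findPacket(byte[] stream, int packetLen) {
            for (int i = 0; i + packetLen <= stream.length; i++) {
                if (stream[i] != 'U' || stream[i+1] != 'T'
                        || stream[i+2] != 'S' || stream[i+3] != 'C') continue;
                // Candidate found, but it can only be trusted after the
                // CRC32 over the rest of the packet checks out -- which
                // means waiting for the entire packet to arrive.
                CRC32 crc = new CRC32();
                crc.update(stream, i + 8, packetLen - 8); // skip preamble + CRC field
                long stored = ((stream[i+4] & 0xFFL) << 24) | ((stream[i+5] & 0xFFL) << 16)
                            | ((stream[i+6] & 0xFFL) << 8) | (stream[i+7] & 0xFFL);
                if (crc.getValue() == stored) return i; // verified packet start
            }
            return -1; // the bytes "UTSC" alone can occur anywhere in the payload
        }
    }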

Problem 2: Can't use FEC to correct errors

Another problem is that there is 250 kbit/sec of FEC protecting the data. With 1,000,000 data bits and 250,000 FEC bits per packet, this amounts to rate-4/5 FEC. Without an entire packet, you don't have the FEC, so you can't correct any errors. You might argue that only the first packet would be played without FEC and that all future packets would be protected by it. But in reality, because you started decoding the first packet without FEC, you must continue to do so or risk a brief interruption in playback. Here is an illustration of this issue, which assumes that no interleaver is used.


In Scenario 1, the decoder waits for a whole packet plus the FEC before decoding and playing. In Scenario 2, the decoder waits until just enough data is available before decoding and playing. Notice that if Scenario 2 continues, it will never get to receive the FEC before playing a packet.

Problem 3: Time discrepancy in video compression

The biggest problem in my opinion is the uneven distribution of data inherent to digital video compression, especially the interframe variety used by almost every codec.

In analog TV, every element of each frame took the same amount of time, every time a frame was transmitted. There were some tolerances, such as the power grid deviating from 60 Hz or the slight frame-rate reduction made when color was added, but overall the timing was reasonably precise and unchanging.

In digital video, more data is spent on keyframes than on inter frames. In case you didn't know, a keyframe is a fully encoded frame that inter frames build on. The compressor encodes a regular image to start the video, and the frames after that are just differences from the frames before them. Every so often another keyframe is sent.

If the decoder tries to start decoding before a whole packet is received, then it will most likely fail to play the video properly. This is because much more data is sent in the initial keyframe of each packet than in the rest of the frames. Since the channel bandwidth is constant, this means that keyframes will take longer to send than inter frames.


Since digital frames would be received at indeterminate intervals, you can't just start playing the video as soon as you get the first few frames. If you don't wait for the entire packet, you're very likely to run out of data when a longer frame is being transmitted.

Problem 4: The interleaver

Even though I think #3 is the biggest issue, I saved this one for last because the interleaver is one of the more recent developments. To make this section short, UTSC packets are scrambled by an interleaver, and because the entire packet is scrambled, a receiver must wait until the entire packet is received before decoding it. This means the absolute minimum latency is about 1 second.

Below is a longer explanation of the interleaver.

Although UTSC could be transmitted on any band wide enough, such as 500 MHz or 2.4 GHz, I think it's best suited to the 900 MHz band. The problem is that many smart energy meters transmit FHSS (hopping bursts) all over 900 MHz. Since reliability is the focus of UTSC, I needed a way to somehow filter those. The FEC is good, I think, but it won't fix huge burst errors when every energy meter in a neighborhood transmits over a station.

I decided to use a fully random interleaver, a sort of scrambler. Since this is part of the air interface (the way it's transmitted), it doesn't affect the packet format that I released in 2017.

I generated a large amount of encryption-grade randomness, verified it with a program called ENT, and then used it to generate random integers for interleaver bit positions. This means that once you have a UTSC packet that's ready to transmit, you simply copy bit-by-bit into a new interleaved packet, using the bit positions I generated.

Since there are 1,000,000 (data) + 250,000 (FEC) bits in a UTSC packet, we have 1,250,000 bits, starting at bit 0 and ending at bit 1,249,999. We do NOT want to interleave the "UTSC" preamble, because we need receivers to be able to find it, but we DO want to interleave the CRC32 that comes right after it because we want it to be more resistant to burst errors.

This means we only have to interleave 1,250,000 - 32 = 1,249,968 bits, numbered from 0 to 1,249,967. So when we start populating the bits in our interleaved packet for transmitting, bit #1601 from the plain unscrambled packet goes first at position #0, then bit #952398, and so on. Since the pattern is made from high-quality randomness, the bit positions are extremely well distributed.

On the other end, the receiver would have a copy of the interleaver's bit ordering scheme and would work the process backward. To reproduce the original packet, the receiver would take bit #0 from the received packet and put it at bit #1601, and put bit #1 at bit #952398, and so on. At the end, the original packet will have been reconstructed and any burst errors will be evenly distributed over the entire packet, making it easier for the FEC to fix.
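
Here's a sketch of both directions in Java, assuming the random ordering has been loaded into an int array where order[k] holds the plain-packet position of the k-th transmitted bit (so order[0] = 1601, order[1] = 952398, and so on). For clarity it stores one bit per array element:

    // Sketch of the interleave/deinterleave step. The 32 preamble bits
    // are handled separately and never scrambled.
    public class Interleaver {
        static final int BITS = 1249968; // 1,250,000 minus the 32-bit preamble

        static byte[] interleave(byte[] plain, int[] order) {
            byte[] tx = new byte[BITS];
            for (int k = 0; k < BITS; k++)
                tx[k] = plain[order[k]];  // bit #order[k] is sent k-th
            return tx;
        }

        static byte[] deinterleave(byte[] rx, int[] order) {
            byte[] plain = new byte[BITS];
            for (int k = 0; k < BITS; k++)
                plain[order[k]] = rx[k];  // put each bit back where it came from
            return plain;
        }
    }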

Here's a picture showing a 20-millisecond burst error. The drawing is to scale, showing how much that error would damage a UTSC packet. You may want to open the image in another tab and zoom in to see it in detail.

Left: a 20-ms error in a plain UTSC packet
Right: the same error in an interleaved packet.

I wasn't sure whether I wanted to interleave, because I immediately saw that it would prevent instant playback. I wondered if I should leave some of the bit flags un-interleaved so the channel could indicate whether it was interleaved or not, but I realized that an error could flip the flag and confuse the decoder, not to mention that burst errors would break any non-interleaved channels. In the end, I decided that all UTSC channels will be interleaved.

Monday, May 7, 2018

New fiber optic lines

A few days ago (May 3) I saw some colored tubes sticking out of the ground at the corner of the local post office. I asked inside about them and the lady at the counter hadn't even noticed them. She said they must have been put in during her lunch break, which seems odd considering how long it would take. Anyway, I took some pictures because I knew it must have something to do with fiber optics.


I wondered why the street hadn't been dug up, but some Googling revealed that installers use horizontal drilling machines for this job.

I took a different route home and noticed that there was some new road paint. In addition to new dashed white lines (not shown), someone had spray-painted MH next to a BellSouth manhole cover.

A manhole cover directly across the street from my yard.

This photo doesn't show the fiberglass junction box nearby, or the fiber optic cable on a utility pole. The cable on the pole leads into the ground near the fiberglass box. There are a bunch of poles carrying fiber optic cable around this part of town. I saw an AT&T truck putting new fiber on the poles several months ago, and the manhole cover has a BellSouth logo so I believe this section is managed by them. With all of this infrastructure available right by my yard, I wonder why AT&T refuses to connect me to their fiber.

Anyway, I don't think it's a coincidence that the paint appeared at the same time as those tubes.

Today I went to check if anything had happened and, to my surprise, there was a call-before-you-dig marker beside a new fiberglass junction box set partway into the sidewalk.



Apparently this is being done by the Palmetto Rural Telephone Co-op, a company I hadn't heard of before.

This is only 1/4 mile from my house so perhaps the neighborhood will be offered better Internet service. I can't imagine why a small-town post office, or any post office for that matter, would need its own dedicated fiber lines.

Sunday, March 18, 2018

UTSC Datagram Specification

I finalized this in 2017 but forgot to release it. This is the format for sending files (aka datacasting) across a dedicated UTSC channel. With such a setup you could send about 10 GiB of files per day.

utsc_datagram_finished_release.txt

Friday, March 16, 2018

New data fuzzer

The main focus of my UTSC standard is reliability, so I needed a way to test how it responds to bit errors. Randomly corrupting data is called "fuzzing", but I couldn't find a program that was easy to use, so I wrote one in Liberty BASIC.

My program takes a file to be fuzzed, and another file containing high-quality random bytes. It uses 24-bit values from the random file to get byte positions to fuzz, and uses "mod 8" to get the bit position to flip. This means it can randomly flip bits in files up to 16 MiB.
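
My program is written in Liberty BASIC, but the core loop fits in a few lines of Java. This is a sketch, not a port: whether the byte position and the bit position come from the same 24-bit value is my assumption, and I wrap positions with a modulo so files smaller than 16 MiB work too.

    // Sketch of the fuzzing loop. Every 3 bytes of the random file form a
    // 24-bit value: it picks the byte to corrupt, and mod 8 picks the bit.
    static void fuzz(byte[] target, byte[] random, int flips) {
        for (int i = 0; i < flips; i++) {
            int r = ((random[3*i] & 0xFF) << 16)
                  | ((random[3*i+1] & 0xFF) << 8)
                  |  (random[3*i+2] & 0xFF);
            target[r % target.length] ^= (byte) (1 << (r % 8));
        }
    }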

You can fuzz anything you like, but I wanted to fuzz audio so I could see how it would sound when the signal is weak and bits are being corrupted. The sound worked on analog TV when the signal was too weak for the picture to come through, and I want UTSC to do the same. First I tried Opus files. They can handle some bit errors but they stop playing altogether if there are too many. And if the header is corrupted, they won't play at all. I added a header-skipping feature to my fuzzer but obviously a real-world signal could lose the header.



Then I got to thinking about WAV audio. It has a very small header (44 bytes), and the audio portion can withstand unlimited bit errors without stopping. Of course, you need the header to know the sample rate and format, but what if "best practices" defining a default WAV format were established for UTSC? After some tests I found that 24 kHz mono 8-bit WAV files are a good compromise between quality and bandwidth. As I wrote on the LostCarrier.Online Discord channel today, "With 17% bit errors, it degrades like analog sound and fades into the noise rather than glitching." (My code was off by a factor of 8, so I actually meant 2.125% bit errors.)

I already specified Opus as the recommended audio format on UTSC, but it can't handle anywhere near enough bit errors to be reliable in bad conditions. Let me demonstrate with a 10-second clip from Syn Cole's "Feel Good" from NoCopyrightSounds.

This use falls outside YouTube, so hopefully 10 seconds is short enough to count as fair use; in any case I'm including the attribution below, and I'll gladly swap the clip for something else if the owner complains.

"Syn Cole - Feel Good [NCS Release]"
https://www.youtube.com/watch?v=q1ULJ92aldE
Syn Cole
https://soundcloud.com/syncole
https://www.facebook.com/SynCole
https://twitter.com/SynColeOfficial
https://www.instagram.com/SynCole/

Notice that with only 0.2% of the bits flipped, the Opus file is barely playable. In contrast, the WAV files still contain obvious music even with about 50% errors. I say "about" because this is a random process: some bits may be flipped twice and end up unchanged, so 50% is only an approximate figure. We can assume it's very close to 50% because of the high quality of the randomness used.

These results make me think that 24 kHz mono 8-bit audio is the optimal format if you want to ensure audio reliability at low bandwidth. However, its bandwidth cost is much higher than Opus's. With Opus at 48 kbit/sec, the audio takes about 5.7% of the channel bandwidth, counting overhead. The WAV format I've described takes 192 kbit/sec (24,000 samples/sec x 8 bits), or about 19.24%. That's roughly 4 times as much bandwidth just to make sure the sound gets through.

UTSC offers 1 Mbit/sec of bandwidth. So with Opus audio, about 94% of the bandwidth is available for video compared to 80.75% when using WAV. It's up to the broadcaster to decide if losing 135.8 kbit/sec of video bandwidth is worth it. If extra-high quality is desired, it may not be.