Thursday, August 2, 2018

JMemPGP: Java PGP API for handling strings

I've been looking for ways to use PGP in Java programs and the Bouncy Castle API seems to be the most common method. The problem is that almost every example involves reading a file and writing the result to another file. Others have asked on Stack Exchange about processing data solely from memory but solutions are very hard to find. I decided I would write my own API based on the Bouncy Castle methods so I could use PGP to operate on Java Strings and byte[] arrays.

My API is called JMemPGP (Java Memory PGP). The only files it needs are public and private keys, depending on what operation you want. The actual input and output data consist of a pair of byte[] arrays. If you want to use a String, you can use the String.getBytes() method.

I'm going to demonstrate the 4 basic PGP operations using JMemPGP: encrypt, decrypt, sign, and verify.

For this tutorial, you need GPG4Win, GPGshell, NetBeans, and two files from the Bouncy Castle website. Start the NetBeans download now; make sure to get a version that bundles the JDK.

First, download and install GPG4Win and GPGshell. Then open Kleopatra and create a certificate. If you're not prompted to create one at startup, then navigate to File->New Certificate...

Click "Create a personal OpenPGP key pair", fill in the fields on the next page, and then I would suggest going into "Advanced Settings" and changing the key size to 4096, but that's not necessary to continue. Click Next and then Create Key. Follow the instructions shown for providing random input. When you're done, you should see your new certificate in the list.

Right-click it and choose "Export Certificates..."

Let's save it to the C drive. You might have to save it to a different folder if you're on Windows 10. Let's name it pub.gpg.

Now right-click the certificate again and choose "Export Secret Keys..." Make sure "ASCII armor" is unchecked. Save it as sec.gpg and click OK.

You should now have two files, as shown:

Now we need those two files from the Bouncy Castle website. Navigate to its downloads page and scroll down to the "Signed Jar Files" section.

You need the two files that are highlighted. There may be a newer version by the time you download it and that's fine.

Now it's time to install NetBeans. The installer is pretty simple so just run it. Once it's done, open NetBeans and navigate to File->New Project...

The default project type should be a Java Application, so click Next. For a project name, just type PGPTutorial.

Now right-click the project's package in the pane on the left and choose New->Java Class...

Call the new class JMemPGP and click Finish. Now we need to install the Bouncy Castle API. Right-click the project this time, the item at the top with capital letters, and choose Properties at the bottom of the menu. Now choose the Libraries category and click "Add JAR/Folder".

Use the Ctrl key to select both JAR files, click Open, and then click OK to exit the Properties dialog.

Now visit my article to get the JMemPGP API. It costs $1 to unlock the paywall. Once you're in, select the code and copy it to the clipboard. In NetBeans, go to your file, which should be open in the editor already, and replace the contents with what you just copied, but make sure to preserve the line "package pgptutorial;". Now click the Save All button at the top or press Ctrl+S.

There are just a couple more things we need. Add the following imports to your main file:
    import;
    import org.bouncycastle.jce.provider.BouncyCastleProvider;

Change your main() method to:
    public static void main(String[] args) throws Exception{

Finally, add this line to the beginning of your main() method:
    Security.addProvider(new BouncyCastleProvider());

Now we're ready to start using the API for the 4 basic PGP operations. Here is what it should look like when you're done.

The 4 basic PGP operations


Let's say we want to encrypt the string "OneDirection" with our PGP public key. Copy this code to update your main() method:

This code starts with a String, converts it to a byte[] array, connects a ByteArrayInputStream to the byte[] array, encrypts the data, and returns it in a ByteArrayOutputStream. This is converted back to a byte[] array, and then to a String for printing to the screen.
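Since the JMemPGP code itself is behind the paywall, here is a minimal sketch of just that stream plumbing. A plain byte-for-byte copy stands in for the encryption call; the real API's method names and signatures are not reproduced here:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class StreamPlumbing {
    public static void main(String[] args) throws IOException {
        String plaintext = "OneDirection";

        // String -> byte[] -> stream, the shape of input JMemPGP works with
        byte[] inBytes = plaintext.getBytes();
        ByteArrayInputStream bIn = new ByteArrayInputStream(inBytes);
        ByteArrayOutputStream bOut = new ByteArrayOutputStream();

        // A real call would hand bIn/bOut to the encrypt method along with a
        // public key; here a plain copy stands in for the PGP operation.
        int b;
        while ((b = != -1) {
            bOut.write(b);
        }

        // stream -> byte[] -> String, for printing the result
        byte[] outBytes = bOut.toByteArray();
        System.out.println(new String(outBytes));
    }
}
```

The point is that no files are touched anywhere in the pipeline; everything stays in memory.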

Run the app and you should get output similar to this:

You can copy-paste the PGP message block and decrypt it with GPGtray. You could paste it into GPGtray's text window and decrypt from there, but we'll just use the quick decrypt option. Right-click the tray icon and select "Clipboard Decrypt.../Verify".

You should be prompted for the passphrase you used when you created your certificate. Enter it and click OK. Here is what your output should look like.

Notice that it says "0/12 Bytes". This means that our program encrypted just the 12 bytes in "OneDirection", with no padding.

You can also encrypt custom byte arrays, such as binary data.

Again, notice that we get an output of precisely 5 bytes.


We can also decrypt from within Java. Notice that this time we have to provide our passphrase within the program. I used "test" as mine.

Output should be similar to this:


Next, let's sign our string. The output will be a detached signature. If you were to type the text "OneDirection" into Notepad and save it as a *.txt file, you could copy-paste this detached signature into a file, save it as *.txt.asc, and verify it with GpgEx.

Now right-click file.txt.asc and choose More GpgEx options->Verify.

Click "Decrypt/Verify".

As you would expect, if you change file.txt at all, the signature will not work. Let's change the text to "OneRepublic" and see what happens.

Save file.txt and try verifying it again.


You can also verify signatures from within Java.

Copy and paste this code, run it, and look at the last line it prints.

Let's change the line that says
    bIn = new ByteArrayInputStream(str);
to say
    bIn = new ByteArrayInputStream("OneDirection".getBytes());

Run the program again and you'll see that the signature is still valid. But if you change the string to say "OneRepublic" like in the last example, the signature will not match.

Run the program again and see what the last line says.

Thursday, May 17, 2018

UTSC Air Interface: First Tests

Tonight I enlisted the help of an associate in Texas, Tech2025 (aka RFShibe) with transmitting a dummy UTSC signal. It was kind of funny because he casually asked if a HackRF could transmit UTSC, which led me to ask if he had access to one. One thing led to another, and he ended up helping me test my air interface.

I would have done it myself, and indeed I tried numerous times, but my LimeSDR Mini isn't operating like I need it to, even after the firmware upgrade.

Fortunately, Tech2025 happened to own a HackRF and agreed to transmit for me if I sent a flowgraph, and then he would show the result on an RTL dongle connected to another computer.

After I built a GNUradio flowgraph that uses a Random Source block to transmit QPSK at the proper clock rate for UTSC, I sent it via Discord's file sharing function. Tech2025 transmitted it and sent back screenshots to prove that it worked.

Here's how it looks on my end in GNUradio:

In the following real-world test, a QPSK signal carries random bytes ranging from 0 to 255.

Credit: Tech2025/RFShibe

In the next image, the range was from 0 to 3.

Credit: Tech2025/RFShibe

This signal is roughly as wide as a UTSC channel should be, so we're off to a good start.

Wednesday, May 16, 2018

Theory on UTSC decoder latency

The goal of UTSC is to provide a digital TV standard that operates as much like analog TV as possible. This means maximum reliability, range, and weak-signal performance. A UTSC channel should be able to degrade gradually and have the sound continue working long after the picture is lost. This is in contrast to ATSC's terrible cliff effect.

One of the things I noticed about analog vs digital is that digital TV has a noticeable delay between the time you tune in a channel and the time it's displayed. Analog, on the other hand, can be shown immediately, which allows you to flip through channels much more quickly.

I wanted UTSC channels to be shown as quickly as possible and I figured it should be possible to bring the latency reasonably close to that of analog TV. If you already read my standard, you know that the channels are sent in packets taking 1 second each to transmit. My initial idea was to have decoders that immediately start decoding and playing a channel once they see the preamble indicating a new packet. This would involve playing the sound and video immediately, once enough data has arrived. The maximum latency would be around 1 second. This worst-case latency would occur if the decoder tuned in right after the preamble and had to wait for another packet.

Sound obviously carries much less data than video, and the standard has the sound being transferred earlier in the packet than the video, so under this proposal the decoder could wait for the sound plus a couple of video frames and then start playing. Assuming 48 kbit/sec Opus audio, this would lead to a theoretical minimum latency of just over 45 milliseconds.

However, last night I realized that the minimum latency can't be less than 1 second. I don't believe it's possible to build a good decoder that doesn't wait for a whole packet before it starts decoding. Here are the 4 reasons I believe it's not possible.

Problem 1: No way to find packet preamble

One problem is that there is no way to verify packet validity unless you wait for a whole packet. The "UTSC" preamble that marks the beginning of a packet only works because I added a CRC32 field to check against the rest of the packet. This is because "UTSC" could occur anywhere in the stream, and you don't want the decoder to find a false beginning. Obviously the preamble doesn't matter once you lock onto a station, but you don't want to get garbage by starting the decode process in the wrong place.
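To make that concrete, here is a sketch of such a validity check using java.util.zip.CRC32. The layout assumed here (a 4-byte "UTSC" preamble immediately followed by a 4-byte big-endian CRC32 computed over the payload) is my illustration, not the actual UTSC field order:

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class PreambleCheck {
    static final byte[] PREAMBLE = "UTSC".getBytes();

    // Returns true only if 'packet' starts with "UTSC" AND the CRC32 that
    // follows matches the payload. Finding the 4 preamble bytes alone isn't
    // enough, because "UTSC" can occur anywhere in the data stream.
    static boolean looksLikeRealPacket(byte[] packet) {
        if (packet.length < 8) return false;
        for (int i = 0; i < 4; i++)
            if (packet[i] != PREAMBLE[i]) return false;
        long stored = ByteBuffer.wrap(packet, 4, 4).getInt() & 0xFFFFFFFFL;
        CRC32 crc = new CRC32();
        crc.update(packet, 8, packet.length - 8);
        return crc.getValue() == stored;
    }

    public static void main(String[] args) {
        // Build a well-formed packet: preamble + CRC32 + payload
        byte[] payload = "dummy payload".getBytes();
        CRC32 crc = new CRC32();
        crc.update(payload);
        ByteBuffer buf = ByteBuffer.allocate(8 + payload.length);
        buf.put(PREAMBLE).putInt((int) crc.getValue()).put(payload);
        System.out.println(looksLikeRealPacket(buf.array()));

        // Corrupt one payload byte: the preamble still matches, but the CRC
        // check now rejects the packet.
        byte[] corrupted = buf.array().clone();
        corrupted[10] ^= 1;
        System.out.println(looksLikeRealPacket(corrupted));
    }
}
```

Note that the check can only run once the whole span covered by the CRC has arrived, which is exactly the constraint described above.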

Problem 2: Can't use FEC to correct errors

Another problem is that there is 250 Kbit/sec of FEC protecting the data. This amounts to 4/5 FEC. Without an entire packet, you don't get the FEC and so you can't correct any errors. You might argue that only the first packet would be played without FEC and that all future packets would be protected by it. But in reality, because you started decoding the first packet without FEC, you must continue to do so or you risk a brief interruption in the playback. Here is an illustration of this issue, which assumes that no interleaver is used.

In Scenario 1, the decoder waits for a whole packet plus the FEC before decoding and playing. In Scenario 2, the decoder waits until just enough data is available before decoding and playing. Notice that if Scenario 2 continues, it will never get to receive the FEC before playing a packet.

Problem 3: Time discrepancy in video compression

The biggest problem in my opinion is the uneven distribution of data inherent to digital video compression, especially the interframe variety used by almost every codec.

In analog TV, every element of each frame took the same amount of time, each time a frame was transmitted. There were some tolerances, such as the power grid deviating from 60 Hz or when they lowered the frame rate to add color, but overall it was reasonably precise and unchanging.

In digital video, more data is spent on keyframes than on inter frames. In case you didn't know, a keyframe is a complete frame that inter frames build on. The compressor encodes a full image to start the video, and the frames after it encode only the differences from the frames that came before. Every so often another keyframe is sent.

If the decoder tries to start decoding before a whole packet is received, then it will most likely fail to play the video properly. This is because much more data is sent in the initial keyframe of each packet than in the rest of the frames. Since the channel bandwidth is constant, this means that keyframes will take longer to send than inter frames.

Since digital frames would be received at indeterminate intervals, you can't just start playing the video as soon as you get the first few frames. If you don't wait for the entire packet, you're very likely to run out of data when a longer frame is being transmitted.

Problem 4: The interleaver

Even though I think #3 is the biggest issue, I saved this one for last because the interleaver is one of the more recent developments. To make this section short, UTSC packets are scrambled by an interleaver, and because the entire packet is scrambled, a receiver must wait until the entire packet is received before decoding it. This means the absolute minimum latency is about 1 second.

Below is a longer explanation of the interleaver.

Although UTSC could be transmitted on any band wide enough, such as 500 MHz or 2.4 GHz, I think it's best suited to the 900 MHz band. The problem is that many smart energy meters transmit FHSS (hopping bursts) all over 900 MHz. Since reliability is the focus of UTSC, I needed a way to somehow filter those. The FEC is good, I think, but it won't fix huge burst errors when every energy meter in a neighborhood transmits over a station.

I decided to use a fully random interleaver, a sort of scrambler. Since this is part of the air interface (the way it's transmitted), it doesn't affect the packet format that I released in 2017.

I generated a large amount of encryption-grade randomness, verified it with a program called ENT, and then used it to generate random integers for interleaver bit positions. This means that once you have a UTSC packet that's ready to transmit, you simply copy bit-by-bit into a new interleaved packet, using the bit positions I generated.

Since there are 1,000,000 (data) + 250,000 (FEC) bits in a UTSC packet, we have 1,250,000 bits, starting at bit 0 and ending at bit 1,249,999. We do NOT want to interleave the "UTSC" preamble, because we need receivers to be able to find it, but we DO want to interleave the CRC32 that comes right after it because we want it to be more resistant to burst errors.

This means we only have to interleave 1,250,000 - 32 = 1,249,968 bits, numbered from 0 to 1,249,967. So when we start populating the bits in our interleaved packet for transmitting, bit #1601 from the plain unscrambled packet goes first at position #0, then bit #952398, and so on. Since the pattern is made from high-quality randomness, the bit positions are extremely well distributed.

On the other end, the receiver would have a copy of the interleaver's bit ordering scheme and would work the process backward. To reproduce the original packet, the receiver would take bit #0 from the received packet and put it at bit #1601, and put bit #1 at bit #952398, and so on. At the end, the original packet will have been reconstructed and any burst errors will be evenly distributed over the entire packet, making it easier for the FEC to fix.
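Here is a small sketch of both directions. The real UTSC table was generated once from high-quality randomness; a seeded Fisher-Yates shuffle stands in for it here, and the packet is shrunk to 64 bits for the demo:

```java
import java.util.Arrays;
import java.util.Random;

public class InterleaverDemo {
    // Build a permutation of 0..n-1. A seeded shuffle is only a stand-in for
    // the fixed table generated from encryption-grade randomness.
    static int[] permutation(int n, long seed) {
        int[] p = new int[n];
        for (int i = 0; i < n; i++) p[i] = i;
        Random r = new Random(seed);
        for (int i = n - 1; i > 0; i--) {
            int j = r.nextInt(i + 1);
            int t = p[i]; p[i] = p[j]; p[j] = t;
        }
        return p;
    }

    // Transmitter: position i of the interleaved packet gets bit p[i]
    // of the plain packet (e.g. plain bit #1601 goes to position #0).
    static boolean[] interleave(boolean[] plain, int[] p) {
        boolean[] out = new boolean[plain.length];
        for (int i = 0; i < plain.length; i++) out[i] = plain[p[i]];
        return out;
    }

    // Receiver: bit i of the received packet goes back to position p[i].
    static boolean[] deinterleave(boolean[] recv, int[] p) {
        boolean[] out = new boolean[recv.length];
        for (int i = 0; i < recv.length; i++) out[p[i]] = recv[i];
        return out;
    }

    public static void main(String[] args) {
        int n = 64;                            // tiny packet for demonstration
        int[] p = permutation(n, 42L);
        boolean[] plain = new boolean[n];
        for (int i = 0; i < n; i++) plain[i] = (i % 3 == 0);

        // Round trip: interleave then de-interleave restores the packet
        boolean[] rebuilt = deinterleave(interleave(plain, p), p);
        System.out.println(Arrays.equals(plain, rebuilt));
    }
}
```

Because de-interleaving needs bits from all over the received packet, the receiver can't begin until the last bit has arrived, which is the latency floor discussed in Problem 4.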

Here's a picture showing a 20-millisecond burst error. The drawing is to scale, showing how much that error would damage a UTSC packet. You may want to open the image in another tab and zoom in to see it in detail.

Left: a 20-ms error in a plain UTSC packet
Right: the same error in an interleaved packet.

I didn't know if I wanted to interleave, because I immediately saw that it would prevent instant playback. I wondered if I should leave some of the bit flags un-interleaved so the channel could indicate whether it was interleaved or not, but I realized that an error could flip the flag and confuse the decoder, not to mention the issue with burst errors breaking any non-interleaved channels. In the end, I decided that all UTSC channels will be interleaved.

Monday, May 7, 2018

New fiber optic lines

A few days ago (May 3) I saw some colored tubes sticking out of the ground at the corner of the local post office. I asked inside about them and the lady at the counter hadn't even noticed them. She said they must have been put in during her lunch break, which seems odd considering how long it would take. Anyway, I took some pictures because I knew it must have something to do with fiber optics.

I wondered why the town hadn't been dug up but some Googling revealed that they have horizontal drilling machines for this job.

I took a different route home and noticed that there was some new road paint. In addition to new dashed white lines (not shown), someone had spray-painted MH next to a BellSouth manhole cover.

A manhole cover directly across the street from my yard.

This photo doesn't show the fiberglass junction box nearby, or the fiber optic cable on a utility pole. The cable on the pole leads into the ground near the fiberglass box. There are a bunch of poles carrying fiber optic cable around this part of town. I saw an AT&T truck putting new fiber on the poles several months ago, and the manhole cover has a BellSouth logo so I believe this section is managed by them. With all of this infrastructure available right by my yard, I wonder why AT&T refuses to connect me to their fiber.

Anyway, I don't think it's a coincidence that the paint appeared at the same time as those tubes.

Today I went to check if anything had happened and, to my surprise, there was a call-before-you-dig marker beside a new fiberglass junction box set partway into the sidewalk.

Apparently this is being done by the Palmetto Rural Telephone Co-op, a company I hadn't heard of before.

This is only 1/4 mile from my house so perhaps the neighborhood will be offered better Internet service. I can't imagine why a small-town post office, or any post office for that matter, would need its own dedicated fiber lines.

Sunday, March 18, 2018

UTSC Datagram Specification

I finalized this in 2017 but forgot to release it. This is the format for sending files (aka datacasting) across a dedicated UTSC channel. With such a setup you could send about 10 GiB of files per day.


Friday, March 16, 2018

New data fuzzer

The main focus of my UTSC standard is reliability so I needed a way to test how it responds to bit errors. Randomly corrupting data is called "fuzzing" but I couldn't find a program that was easy to use so I wrote one in Liberty BASIC.

My program takes a file to be fuzzed, and another file containing high-quality random bytes. It uses 24-bit values from the random file to get byte positions to fuzz, and uses "mod 8" to get the bit position to flip. This means it can randomly flip bits in files up to 16 MiB.
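Here is a sketch of that scheme in Java rather than Liberty BASIC. The record layout (three bytes forming the 24-bit byte position, a fourth byte mod 8 selecting the bit) is my reading of the description, and the demo wraps positions into range so a small buffer gets hit; the real program addresses files up to 16 MiB directly:

```java
import java.util.Random;

public class FuzzDemo {
    // Flip one bit per 4-byte record from 'rand': the first three bytes form
    // a 24-bit byte position and the fourth byte mod 8 selects which bit of
    // that byte to flip.
    static void fuzz(byte[] data, byte[] rand) {
        for (int i = 0; i + 3 < rand.length; i += 4) {
            int pos = ((rand[i] & 0xFF) << 16)
                    | ((rand[i + 1] & 0xFF) << 8)
                    | (rand[i + 2] & 0xFF);
            int bit = (rand[i + 3] & 0xFF) % 8;
            pos %= data.length;   // demo only: wrap into range for a small buffer
            data[pos] ^= (1 << bit);
        }
    }

    public static void main(String[] args) {
        byte[] data = new byte[1024];   // all zeros, so flipped bits are countable
        byte[] rand = new byte[4 * 100];
        new Random(7).nextBytes(rand);  // stand-in for the high-quality random file

        fuzz(data, rand);

        int flipped = 0;
        for (byte x : data) flipped += Integer.bitCount(x & 0xFF);
        // Two records can hit the same bit and cancel out, which is why a
        // heavy fuzz percentage is only approximate.
        System.out.println(flipped > 0 && flipped <= 100);
    }
}
```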

You can fuzz anything you like, but I wanted to fuzz audio so I could see how it would sound when the signal is weak and bits are being corrupted. The sound worked on analog TV when the signal was too weak for the picture to come through, and I want UTSC to do the same. First I tried Opus files. They can handle some bit errors but they stop playing altogether if there are too many. And if the header is corrupted, they won't play at all. I added a header-skipping feature to my fuzzer but obviously a real-world signal could lose the header.

Then I got to thinking about WAV audio. It has a very small header (44 bytes) and the audio portion can withstand unlimited bit errors without stopping. Of course, you need the header to know the sample rate and format, but what if "best practices" were defined for UTSC that define a default WAV format? After some tests I found that 24 kHz mono 8-bit WAV files are a good compromise between quality and bandwidth. As I wrote on the LostCarrier.Online Discord channel today, "With 17% bit errors, it degrades like analog sound and fades into the noise rather than glitching." My code was off by a factor of 8, so I meant 2.125% bit errors.

I already specified Opus as the recommended audio format on UTSC, but it can't handle anywhere near enough bit errors to be reliable in bad conditions. Let me demonstrate with a 10-second clip from Syn Cole's "Feel Good" from NoCopyrightSounds.

This falls outside of YouTube use, so hopefully 10 seconds is short enough to fall under Fair Use, but if not I'm including the attribution and I'll gladly swap the clip for something else if the owner complains.

"Syn Cole - Feel Good [NCS Release]"
Syn Cole

Notice that with only 0.2% of the bits flipped, the Opus file is barely playable. In contrast, the WAV files still contain obvious music even with about 50% errors. I say "about" because since this is a random process, some bits may be flipped twice and be unchanged, so 50% is only a reasonable figure. We can assume it's very close to 50% because of the high quality of the randomness used.

These results make me think that 24 kHz mono 8-bit audio is the optimal format to use if you want to ensure audio reliability at low bandwidth. However, the bandwidth is much higher than Opus. With Opus at 48 kbit/sec, the audio takes about 5.7% of the channel bandwidth, counting overhead. Using WAV as I've described would take 192 kbit/sec, or about 19.24%. That's roughly 4 times as much bandwidth just to make sure the sound gets through.

UTSC offers 1 Mbit/sec of bandwidth. So with Opus audio, about 94% of the bandwidth is available for video compared to 80.75% when using WAV. It's up to the broadcaster to decide if losing 135.8 kbit/sec of video bandwidth is worth it. If extra-high quality is desired, it may not be.
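The raw stream rates behind those figures can be checked in a few lines; note these are the rates before the container overhead that pushes them to roughly 19.24% and 5.7% as quoted above:

```java
public class BandwidthMath {
    public static void main(String[] args) {
        int channel = 1_000_000;   // UTSC data bandwidth, bit/sec
        int wav = 24_000 * 8;      // 24 kHz mono 8-bit PCM = 192,000 bit/sec
        int opus = 48_000;         // recommended Opus bitrate, bit/sec

        System.out.println(100.0 * wav / channel);   // WAV share of channel, %
        System.out.println(100.0 * opus / channel);  // Opus share of channel, %
        System.out.println(wav - opus);              // raw difference, bit/sec
    }
}
```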

Tuesday, February 13, 2018

LimeSDR Mini unboxing

Yesterday my two LimeSDR Minis arrived. Before I show their performance, let's see some unboxing photos.

Driver Setup

The LimeSDR is not well documented. You can't just Google "limesdr mini drivers" and expect to find anything. After a lot of trial and error, Jeff from LostCarrier.Online linked me a USB controller driver that somehow makes this work.

So to install the drivers, it looks like you need to start by installing PothosSDR. It won't install drivers, but it will provide a Start Menu link to something called Zadig. Use that to install a driver for your LimeSDR Mini. Then download the USB controller driver and have Device Manager update your LimeSDR with the new drivers. Windows should prefer the USB controller driver and quickly begin upgrading to it once you choose the Update Driver Software option.

This does not work for EXTIO-based programs like HDSDR. You'll need SDRConsole v3 to try your LimeSDR Mini. It has good support for receiving from a LimeSDR, but can't transmit even though there is a Transmit tab. Also, I heard somewhere that 20 MHz is the most bandwidth you can do over USB 2.

I have an NVidia GTX 750 Ti so SDRConsole can use CUDA to accelerate the FFT that generates the waterfall. Still, for me it stutters over 7.5 MHz.

One more thing: choose an antenna once your LimeSDR Mini is running in SDRConsole. I spent a few minutes troubleshooting the blank waterfall before realizing that no antenna was selected.

Reception tests

3 ATSC TV pilots (spikes) at once:

LTE at 2.1 GHz:

I couldn't get SDRAngel or Foobar2000 (with jocover's plugin) to transmit. I'm still trying to figure out how to transmit and when I do I'll write another post showing how to do it.