Monday, October 1, 2018

Offline Blackberry 10 Development

Recently I've become interested in developing for Blackberry 10. After getting set up, I learned that Blackberry's security is so tight that you can't even run homemade apps on your own phone without a key from Blackberry's servers. I also heard that they would be closing their app store at the end of 2019, which concerned me because I didn't know if they would allow developers to keep writing apps after that.

I asked about it on Reddit and via direct email to Blackberry, with mixed responses. Fortunately, a Blackberry rep later told the community that there are no plans to disable the app signing process, so we should be able to keep writing apps after the app store closes.

I decided to archive everything I could from Blackberry's official app development sources, including developer.blackberry.com and the entire BlackberryDev YouTube channel. The YouTube channel is not as interesting as the website, but I thought I might as well get everything in case things start getting taken down.

In this post I'm going to show you how you can download everything you need to develop for Blackberry 10 and install it from disk, in case Blackberry's site goes down. This obviously won't help if they can't sign your apps, but perhaps a solution for signing will be found by the time it's needed.

First, I downloaded developer.blackberry.com using HTTrack. The way I configured it, it was able to get the entire website, including the API documentation and many ZIP files. Unfortunately, I must have made a mistake with my job file because it missed a huge collection of EXE files, so I had to make a list of them and copy-paste it into a new HTTrack job to get them.

I decided to keep the two sets separate because the website and ZIP collection is 2.95 GiB, while the EXE set is 297 GiB, which is mostly phone firmware.

Anyway, to develop apps we need the Momentics IDE. The latest version right now is 2.1.2.


Direct links:
32-bit:
http://downloads.blackberry.com/upr/developers/downloads/momentics-2.1.2-201503050937.win32.x86.setup.exe

64-bit:
http://downloads.blackberry.com/upr/developers/downloads/momentics-2.1.2-201503050937.win32.x86_64.setup.exe

Once you pick a version, download and install it. I'm going to use the 32-bit version and install it with the default settings so it will end up in C:\bbndk. At the end, leave the option checked to start the IDE.





I unplugged my Internet cable so I could demonstrate how to do this offline. Click Cancel.


Click Yes because we will be installing an SDK from a pair of ZIP files. In the main IDE, go to Help->Update API Levels...


In the next dialog, go to the Custom tab. Note that this dialog will be much wider if you're connected to the Internet, because the other tabs will be populated.


I suggest API level 10.3.1.995. You'll need the following two files:
http://downloads.blackberry.com/upr/developers/update/bbndk/ndktargetrepo_10.3.1.995/packages/bbndk.win32.libraries.10.3.1.995.zip

http://downloads.blackberry.com/upr/developers/update/bbndk/ndktargetrepo_10.3.1.995/packages/bbndk.win32.tools.10.3.1.12.zip

Download and extract them. I used the C drive, so they ended up in C:\[zip file name]\.

(Update) I couldn't get the visual QML designer to work and it turns out I was missing some files.

http://downloads.blackberry.com/upr/developers/update/bbndk/ndktargetrepo_10.3.1.995/packages/bbndk.win32.cshost.10.3.1.995.zip (Necessary for the visual QML designer).

http://downloads.blackberry.com/upr/developers/update/bbndk/ndktargetrepo_10.3.1.995/packages/bbndk.win32.qmldocs.10.3.1.995.zip

http://downloads.blackberry.com/upr/developers/update/bbndk/ndktargetrepo_10.3.1.995/packages/bbndk.win32.documents.10.3.1.995.zip

These last three files will contain a folder called "target_10_3_1_995". Extract them to the same folder where you extracted win32.libraries.10.3.1.995.zip, so that the "target_10_3_1_995" folders from each zip file get merged with the one you already extracted.


In the dialog shown above, click Add New Custom SDK.


For "Target Path", click Browse and find the target folder from bbndk.win32.libraries.10.3.1.995.


Now for "Host Path", click Browse and find the host folder from bbndk.win32.tools.10.3.1.12.


In the Import SDK Platform dialog, change the Name field as desired.


Click Finish.


Now you have a fully functional API level so click OK.

(Update) If you want to be extra sure that you archive a working API level for offline installation, it looks like you can also use the automatic installer in the API Levels window to install an API, then close Momentics IDE and create a ZIP archive of your C:\bbndk folder.

In the IDE, go to File->New->Blackberry Project.


Just to finish the tutorial quickly, accept all the default options to create the project.


Click Yes.


Now you have a project ready to compile and run on a Blackberry 10 phone. Even though I have a BB10 phone, I don't want to have to get it set up and get a debug token yet, so let's use an emulator.

First we need VMware Player. My computer can't handle the latest version but older versions are still available from the official site. They're unlisted so you have to know the direct URL. I'm going to be using VMware Player 7.1.0.

https://download3.vmware.com/software/player/file/VMware-player-7.1.0-2496824.exe

Download this and install it.

Now you need the Blackberry emulation files. These files contain everything you need to emulate BB10 phones of various sizes, like square screens with a physical keyboard, or regular "tall" screens.

http://downloads.blackberry.com/upr/developers/update/bbndk/simulator/simulatorrepo_10.3.1.995/packages/bbndk.win32.simulator.10.3.1.995.zip

http://downloads.blackberry.com/upr/developers/update/bbndk/simulator/simulatorrepo_10.3.1.995/packages/bbndk.win32.simulatorController.10.3.1.995.zip

(Optional) http://downloads.blackberry.com/upr/developers/update/bbndk/simulator/simulatorrepo_10.3.1.995/packages/bbndk.win32.bbmServerSimulator.10.3.1.995.zip

Download these and extract them to a folder called "10.3.1.995_emulator" or something similar.

If you click the green play button at the top-left of the IDE, you'll get this error message:


Click in the box the message is pointing to and select Add New Target...


In the Device Manager dialog, go to the Simulator tab.


Click Begin Simulator Setup.


You'll get this error if you're not online but that's okay because we already have all the files we need. Click OK on the error message, then click the link at the bottom left.


Browse for the VMX file within the bbndk.win32.simulator.10.3.1.995 subfolder.


Click Open, then OK. The simulator will start automatically. By default, it will pick a "tall" resolution similar to a normal smartphone. If you want, you can choose a different resolution before it starts. To do that, click in the VM to capture the mouse and keyboard, press Enter, and follow the prompts. To un-capture the mouse and keyboard, press Ctrl+Alt.

For this first test, we're going to let it use the default resolution.




Back in the Momentics IDE Device Manager, it will ask for an IP address. It was entered automatically for me but if it's not automatic for you, you can find the right address at the bottom-left of the emulator.



Click Pair.


Now you have a working emulator that is recognized and listed by Momentics IDE's Device Manager. Notice that the "Open Controller" button is disabled. Since we installed from disk rather than the Internet, Momentics IDE doesn't know where to find the controller. To start it manually, navigate to the bbndk.win32.simulatorController.10.3.1.995 subfolder where you extracted the emulator ZIPs. Go into the subfolders to find "controller.exe". The full path is shown in the address bar in the next picture.


Double-click "controller.exe".


Again, mine was able to find the IP address automatically but if yours doesn't, you can enter it at the bottom.

Back in the Momentics IDE Device Manager, click Close. Notice that in the main IDE, you now have a valid simulator that you can run apps on.


Try clicking the green play button again.


Don't worry about these warnings. If you get an error dialog about a Java NullPointerException, just click OK and compile again.

When it's done, it should look like this.


Let's try it again but with a 720x720 square phone. To do that, we'll close and re-open the emulator. Once it starts, we'll click inside the VM, wait for the prompt, and press Enter, 2, and Enter again. Don't forget to press Ctrl+Alt to get out when you're done.





You may have to reconfigure the simulator in the IDE's Device Manager if it gives an error when you try to run the app. One time it said it didn't know the simulator's API level so I had to re-import the *.vmx file and pair it again.

If you're successful, the app should look like this.


I don't know why the color scheme is different for the BB10 keyboard phone.


On a side note, this simulator is surprisingly fast. I ran it on a computer from 2011 with a Core 2 Quad and 4GB RAM and it feels as fast as a real phone. In contrast, the Android emulator is barely usable on the same computer.

Sunday, September 23, 2018

Patching WinArchiver Virtual Drive for LZMA2

I've been looking for a way to mount 7-Zip archives as virtual drives, similar to how IMDisk works with uncompressed disk images. At first I wanted a method where I could read and write to the virtual drives, but that proved to be too much so I settled for a read-only solution.

I found a free program called WinArchiver Virtual Drive. It supports 7-Zip, which is great. I tried it out and it was able to mount one of my archives as a CD drive. I navigated to the virtual drive and saw the contents of the archive. The problem was, when I tried to open a file, I saw a notice in the taskbar about an unsupported method, meaning compression method. The archive I had tried was compressed with LZMA2. I tried again with LZMA and it worked, but I didn't want to have to start using LZMA because it doesn't seem to compress as well.

I looked in the installation folder and saw a file called 7z.dll, dated March 24, 2010. I tried replacing it with a newer copy from my 7-Zip installation, but that produced an error.

Several weeks later, it occurred to me that I had tried to use a 64-bit DLL when the program and the original DLL were 32-bit. I had removed WinArchiver Virtual Drive so I reinstalled it and then went to the 7-Zip website and downloaded a 32-bit copy of 7-Zip.

Not wanting to replace my 64-bit copy of 7-Zip, I opened the installer EXE in 7-Zip and extracted the 32-bit DLL I needed. After replacing the DLL from 2010, WinArchiver was able to mount and read archives compressed with LZMA2.

Update:

The way WinArchiver Virtual Drive handles its cache seems flawed. First, if the archive is solid then the program has to extract the entire contents to disk. Having to extract the whole archive is perfectly normal if it's solid, but extracting to disk keeps you from accessing archives bigger than your free space. I can understand saving individual files to a cache, but even then it should ideally be done in memory if the files being accessed are small enough.

The other problem is that the cache is not cleared until you unmount the drive. The way the program works currently, if you mount a non-solid archive you can access files quickly like you'd expect, and they're stored in the cache. The problem is, each new file you access is added to the cache and if you run out of space, the program will give an error and refuse to access any more files. This brings us back to the first issue, where you can't make full use of archives bigger than your free space.

Fully Decoded UTSC Packet

I released some UTSC packets for people to try to decode, to see if my format could be followed by other people. They contained a still image (instead of video since I don't have a VP9 encoder yet), music, an EPG, and some filecasts containing Bitcoin Cash blocks.

Tech2025 was able to extract the EPG but couldn't get any further. There was another guy, Jess, whom he introduced to the format. I was pinged on Discord and asked to share some details. I brought Jess up to speed and provided him with the packets and the format documentation.

To be honest, I did help him a little, like with getting set up with a hex editor and explaining some of the byte structure. However, I let him do most of it so that we could honestly say he decoded it. The deal was that once someone decoded it, I would publish all the details of what was in the packets. I'm going to show what was in the first packet. This post will just describe the first one because the others are similar.






Recall that UTSC packets contain just 1 second of content.
The music is from Golden Alley, created by Nicolai Heidlas and Francesco Rea.


The EPG was a file called EPG0.json. It was contained within a 7-Zip archive compressed with PPMD.


Note: the EPG format is not defined so these are just suggested parameters.


The filecasts were split 7-Zip archives of the latest 3 Bitcoin Cash blocks at the time I created the packets. They were compressed with LZMA2. There is no need for me to share them here.


Finally, I wrote a UTSC parser in Liberty BASIC to show and extract all the information quickly. I didn't release it until after Jess had finished extracting everything.


Thursday, August 23, 2018

Transmitting with the LimeSDR Mini

I was an early bird purchaser of the LimeSDR Mini, and I acquired a pair for $99 each. They arrived on February 10, 2018. I described in a previous post how I was able to use them for receiving, but it wasn't until August 17 that I was able to transmit a clean and reliable signal.

Right now, the transmit function seems to only work on Linux. I tried on Windows and got varying results. Initially I was able to transmit some distorted FM audio in the Windows version of GNUradio, but I could not transmit a digital QPSK signal. It's been a while since I tried, but I don't think I can transmit anything in Windows anymore. Currently, when I try to transmit from Windows, I get an unstable solid carrier.

On the LostCarrier.Online Discord channel, Ballistic Autistic told everyone that he succeeded in transmitting good FM audio with his LimeSDR Mini from GNUradio on Ubuntu. I asked him if he would try QPSK and send me a reception screenshot, which he did.

Credit: Ballistic Autistic

Once I knew that it was possible to make it work, I downloaded the GNUradio Live DVD image, which is an Ubuntu distribution, and used Universal USB Installer to install it to a flash drive. I installed the driver, compiled and installed gr-limesdr, and then had a working transmit setup. Since it's hard to find instructions for using the LimeSDR Mini, I'll explain the full process.

I would recommend updating the firmware on your LimeSDR Mini, but I don't think it's necessary if you don't want to.

Transmitting with a LimeSDR Mini, start to finish

1. Download the GNUradio Live DVD image. Get it from here: GNU Radio Live SDR Environment

2. Install it to a flash drive using Universal USB Installer. I chose to have a 1GB persistent area.

3. Boot from the drive

4. Open a terminal (command prompt) and enter the commands linked here to install the driver in Ubuntu.

5. Run "mkdir gr-limesdr"

6. Visit https://github.com/myriadrf/gr-limesdr and enter the commands shown for Linux installation.

7. Open GNUradio Companion. You should now have LimeSDR source and sink blocks. You can click in the list of blocks on the right and press Ctrl+F to search for them.

If you haven't already, plug in your LimeSDR Mini. Now let's put together a flowgraph to test the transmitting function. You should have a second SDR on another computer to receive with.

Transmitting FM audio

Before starting, make sure you have a WAV file recorded as 48 kHz mono.

In GNUradio Companion, click Open and you should see a "ubuntu" folder on the left. Navigate there if it's not already selected and then open gr-limesdr/examples/FM_transmitter.grc.

1. Remove the "LimeSuite Sink (TX)" block, and add a fresh one from the list on the right.

2. Connect the output of "Rational Resampler" to the input of the new "LimeSuite Sink (TX)" block.

3. Double-click the LimeSuite block and you'll see that the "Device serial" field is empty. Make sure your LimeSDR Mini is connected with USB, then open a terminal and run "LimeUtil --find". This will print a list of all Lime devices. Copy the serial number of yours and paste it into the LimeSuite block's "Device serial" field.

4. Change "Device type" to LimeSDR-Mini

5. Choose an "RF frequency" that you want to transmit on. Even though there is very little transmit power, you want to choose a frequency that won't interfere with anything. I like 903.4 MHz but that's not usually license-free outside North America. Frequencies are measured in Hz, so you could type something like 903400000 or just 903.4e6, with e6 meaning MHz.

6. Set the "Sample rate" to 5e6, which means 5 MHz. Click OK.

7. Double-click the "Rational Resampler" block. Change "Interpolation" to 125 and "Decimation" to 12. Click OK.

The Resampler converts the 480 kHz FM quadrature signal (a.k.a. IQ data, produced by the NBFM block in the next step) to 5 MHz. This block converts between IQ sample rates using the formula
    output_rate = input_rate * (Interpolation/Decimation).

Since the FM block will output IQ data at a rate of 480 kHz and we're transmitting with 5 MHz bandwidth,
    480000 * (125/12) = 5000000
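If you want to sanity-check the resampler settings before running the flowgraph, the arithmetic is a one-liner (this is just the formula above, nothing GNUradio-specific):

```java
// Verifies the Rational Resampler settings from step 7.
public class ResamplerCheck {
    public static void main(String[] args) {
        long inputRate = 480_000;  // NBFM quadrature (IQ) rate in Hz
        long interpolation = 125;
        long decimation = 12;
        long outputRate = inputRate * interpolation / decimation;
        System.out.println(outputRate);  // prints 5000000 (the 5 MHz sink rate)
    }
}
```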

8. Double-click the "NBFM Transmit" block. Change "Audio Rate" to 48000. Change "Quadrature Rate" to 480000. Change "Max Deviation" to your desired FM deviation. I'm going to use 24e3 (24 kHz). Click OK.

9. Finally, double-click the "Wav File Source" block. For the "File" field, click the "..." button to browse for the WAV file you want to transmit. Once it's in the "File" field, click OK.

Connect a receive SDR to another computer, open your desired SDR program, and navigate to the frequency you chose. At the computer you'll be transmitting with, connect your LimeSDR Mini to an antenna and click the green play button at the top. The result should be clear FM audio.

For some reason, I can't get my SDR to transmit on low frequencies like FM broadcast or the 6m ham band. Frequencies like 400 MHz and higher work well, but 900 MHz seems to work best.

The resulting signal received with an SDRplay RSP1

Transmitting QPSK

For this experiment, I'm going to transmit a QPSK signal with a rate of 1.25 Mbit/sec.

1. Create a new flowgraph and follow the instructions in the previous example to set up a "LimeSuite Sink (TX)" block. For this section I set mine, on the CH0 tab, to use a digital filter of about 750 kHz. This makes the signal edges look cleaner in the waterfall.

2. Add the following blocks: "Constellation Modulator" and "Constellation Object".

3. Double-click the "Constellation Modulator" block. In "Constellation", point it to the "Constellation Object" block by entering its name, variable_constellation_0. Change "Differential Encoding" to No and "Samples/Symbol" to 8. Click OK.

We're going to try transmitting a "Random Source" with random values evenly distributed between 0 and 255, and optionally a file with non-random data.

4. Add a "Random Source" block and double-click it. Change "Output Type" to Byte, "Minimum" to 0, and "Maximum" to 255. Click OK.

5. Connect the output of "Random Source" to the input of "Constellation Modulator". Connect the output of "Constellation Modulator" to the input of "LimeSuite Sink (TX)".

6. (Optional) Now add a "File Source" block and double-click it. Change "Output Type" to Byte. Choose a file and click OK. Right-click the block and choose Disable, then connect its output to the input of "Constellation Modulator".

Click the green play button to run the flowgraph.
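The 1.25 Mbit/sec figure follows directly from the settings above: the 5 MHz sample rate divided by 8 samples per symbol gives 625,000 symbols/sec, and QPSK carries 2 bits per symbol. The same arithmetic explains the 2.5 Mbit/sec rate of the 16-QAM experiment later in this post:

```java
// Bit-rate arithmetic for the QPSK and 16-QAM flowgraphs.
public class BitRateCheck {
    public static void main(String[] args) {
        double sampleRate = 5e6;      // LimeSuite Sink "Sample rate"
        int samplesPerSymbol = 8;     // Constellation Modulator setting
        double symbolRate = sampleRate / samplesPerSymbol;  // 625000.0 sym/s
        System.out.println(symbolRate * 2);  // QPSK, 2 bits/symbol: 1250000.0
        System.out.println(symbolRate * 4);  // 16-QAM, 4 bits/symbol: 2500000.0
    }
}
```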

For my optional file, I chose the Monero blockchain. Notice that the signal is not as smooth because the file isn't very random. If you want to switch between Random Source and File Source, just disable one and enable the other.


Random bytes from 0 to 255 inclusive

Monero blockchain

I had to reduce the transmit power by 5 dB for the random source because it appears to produce a much stronger signal.

Transmitting 16-QAM

This experiment will produce a 16-QAM signal with a rate of 2.5 Mbit/sec. It will occupy the same RF bandwidth as the QPSK signal in the previous section.

1. Follow the instructions above to create the flowgraph but don't transmit yet.

2. Double-click the "Constellation Object" block and change "Constellation Type" to 16QAM. Click OK and then try transmitting.

You'll notice that 16-QAM produces a smoother signal than QPSK.

Random bytes from 0 to 255 inclusive

Monero blockchain

Like the previous section, the random source transmission power had to be reduced by 5 dB so the waterfalls would look the same.

Thursday, August 2, 2018

JMemPGP: Java PGP API for handling strings

I've been looking for ways to use PGP in Java programs and the Bouncy Castle API seems to be the most common method. The problem is that almost every example involves reading a file and writing the result to another file. Others have asked on Stack Exchange about processing data solely from memory but solutions are very hard to find. I decided I would write my own API based on the Bouncy Castle methods so I could use PGP to operate on Java Strings and byte[] arrays.

My API is called JMemPGP (Java Memory PGP). The only files it needs are public and private keys, depending on what operation you want. The actual input and output data consist of a pair of byte[] arrays. If you want to use a String, you can use the String.getBytes() method.

I'm going to demonstrate the 4 basic PGP operations using JMemPGP: encrypt, decrypt, sign, and verify.

For this tutorial, you need GPG4Win, GPGshell, NetBeans, and two files from the Bouncy Castle website. Start NetBeans downloading now. Make sure to get a version that contains the JDK.

First, download and install GPG4Win and GPGshell. Then open Kleopatra and create a certificate. If you're not prompted to create one at startup, then navigate to File->New Certificate...


Click "Create a personal OpenPGP key pair", fill in the fields on the next page, and then I would suggest going into "Advanced Settings" and changing the key size to 4096, but that's not necessary to continue. Click Next and then Create Key. Follow the instructions shown for providing random input. When you're done, you should see your new certificate in the list.


Right-click it and choose "Export Certificates..."


Let's save it to the C drive. You might have to save it to a different folder if you're on Windows 10. Let's name it pub.gpg.

Now right-click the certificate again and choose "Export Secret Keys..." Make sure "ASCII armor" is unchecked. Save it as sec.gpg and click OK.

You should now have two files, as shown:


Now we need those two files from the Bouncy Castle website. Navigate to https://www.bouncycastle.org/latest_releases.html and scroll down to the "Signed Jar Files" section.


You need the two files that are highlighted. There may be a newer version by the time you download it and that's fine.

Now it's time to install NetBeans. The installer is pretty simple so just run it. Once it's done, open NetBeans and navigate to File->New Project...

The default project type should be a Java Application, so click Next. For a project name, just type PGPTutorial.


Now right-click the project's package in the pane on the left and choose New->Java Class...



Call the new class JMemPGP and click Finish. Now we need to install the Bouncy Castle API. Right-click the project this time, the item at the top with capital letters, and choose Properties at the bottom of the menu. Now choose the Libraries category and click "Add JAR/Folder".


Use the Ctrl key to select both JAR files, click Open, and then click OK to exit the Properties dialog.


Now visit my article on yours.org to get the JMemPGP API. It costs $1 to unlock the paywall. Once you're in, select the code and copy it to the clipboard. In NetBeans, go to your file JMemPGP.java, which should be open in the editor already, and replace the contents with what you just copied, but make sure to preserve the line "package pgptutorial;". Now click the Save All button at the top or press Ctrl+S.

There are just a couple more things we need. Add the following imports to your main file, PGPTutorial.java.
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.security.Security;
    import org.bouncycastle.jce.provider.BouncyCastleProvider;

Change your main() method to:
    public static void main(String[] args) throws Exception{

Finally, add this line to the beginning of your main() method:
    Security.addProvider(new BouncyCastleProvider());


Now we're ready to start using the API for the 4 basic PGP operations. Here is what PGPTutorial.java should look like when you're done.







The 4 basic PGP operations

Encrypt

Let's say we want to encrypt the string "OneDirection" with our PGP public key. Copy this code to update your main() method:

This code starts with a String, converts it to a byte[] array, connects a ByteArrayInputStream to the byte[] array, encrypts the data, and returns it in a ByteArrayOutputStream. This is converted back to a byte[] array, and then to a String for printing to the screen.
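Here is a minimal sketch of that plumbing using only the standard library. The identity copy in the middle stands in for the actual JMemPGP encrypt call (which is behind the paywall), so this only shows the String/byte[]/stream conversions:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class StreamFlow {
    public static void main(String[] args) throws IOException {
        byte[] input = "OneDirection".getBytes();          // String -> byte[]
        ByteArrayInputStream in = new ByteArrayInputStream(input);

        // JMemPGP would read from 'in', encrypt, and write the result
        // to 'out'. A plain copy stands in for the encrypt step here.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }

        byte[] result = out.toByteArray();                 // stream -> byte[]
        System.out.println(new String(result));            // prints OneDirection
    }
}
```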

Run the app and you should get output similar to this:

You can copy-paste the PGP message block and decrypt it with GPGtray. You could paste it into GPGtray's text window and decrypt from there, but we'll just use the quick decrypt option. Right-click the tray icon and select "Clipboard Decrypt.../Verify".


You should be prompted for the passphrase you used when you created your certificate. Enter it and click OK. Here is what your output should look like.


Notice that it says "0/12 Bytes". This means that our program encrypted just the 12 bytes in "OneDirection", with no padding.

You can also encrypt custom byte arrays, such as binary data.



Again, notice that we get an output of precisely 5 bytes.

Decrypt

We can also decrypt from within Java. Notice that this time we have to provide our passphrase within the program. I used "test" as mine.




Output should be similar to this:



Sign



The output will be a detached signature. If you were to type the text "OneDirection" into Notepad and save it as a *.txt file, you could copy-paste this detached signature into a file and save it as *.txt.asc and verify it with GpgEx.


Now right-click file.txt.asc and choose More GpgEx options->Verify.

Click "Decrypt/Verify".


As you would expect, if you change file.txt at all, the signature will not work. Let's change the text to "OneRepublic" and see what happens.



Save file.txt and try verifying it again.



Verify

You can also verify signatures from within Java.



Copy and paste this code, run it, and look at the last line it prints.


Let's change the line that says
    bIn = new ByteArrayInputStream(str);
to say
    bIn = new ByteArrayInputStream("OneDirection".getBytes());


Run the program again and you'll see that the signature is still valid. But if you change the string to say "OneRepublic" like in the last example, the signature will not match.


Run the program again and see what the last line says.


Thursday, May 17, 2018

UTSC Air Interface: First Tests

Tonight I enlisted the help of an associate in Texas, Tech2025 (aka RFShibe), to transmit a dummy UTSC signal. It was kind of funny because he casually asked if a HackRF could transmit UTSC, which led me to ask if he had access to one. One thing led to another, and he ended up helping me test my air interface.

I would have done it myself, and indeed I tried numerous times, but my LimeSDR Mini isn't operating like I need it to, even after the firmware upgrade.

Fortunately, Tech2025 happened to own a HackRF and agreed to transmit for me if I sent a flowgraph, and then he would show the result on an RTL dongle connected to another computer.

After I built a GNUradio flowgraph that uses a Random Source block to transmit QPSK at the proper clock rate for UTSC, I sent it via Discord's file sharing function. Tech2025 transmitted it and sent back screenshots to prove that it worked.

Here's how it looks on my end in GNUradio:



In the following real-world test, a QPSK signal carries random bytes ranging from 0 to 255.

Credit: Tech2025/RFShibe

In the next image, the range was from 0 to 3.

Credit: Tech2025/RFShibe

This signal is roughly as wide as a UTSC channel should be, so we're off to a good start.

Wednesday, May 16, 2018

Theory on UTSC decoder latency

The goal of UTSC is to provide a digital TV standard that operates as much like analog TV as possible. This means maximum reliability, range, and weak-signal performance. A UTSC channel should be able to degrade gradually and have the sound continue working long after the picture is lost. This is in contrast to ATSC's terrible cliff effect.

One of the things I noticed about analog vs digital is that digital TV has a noticeable delay between the time you tune in a channel and the time it's displayed. Analog, on the other hand, can be shown immediately which allows you to flip through channels much more quickly.

I wanted UTSC channels to be shown as quickly as possible and I figured it should be possible to bring the latency reasonably close to that of analog TV. If you already read my standard, you know that the channels are sent in packets taking 1 second each to transmit. My initial idea was to have decoders that immediately start decoding and playing a channel once they see the preamble indicating a new packet. This would involve playing the sound and video immediately, once enough data has arrived. The maximum latency would be around 1 second. This worst-case latency would occur if the decoder tuned in right after the preamble and had to wait for another packet.

Sound obviously carries much less data than video, and the standard has the sound transferred earlier in the packet than the video, so under this proposal the decoder could wait for the sound plus a couple of video frames and then start playing. Assuming 48 kbit/s Opus audio, this would lead to a theoretical minimum latency of just over 45 milliseconds.

However, last night I realized that the minimum latency can't be less than 1 second. I don't believe it's possible to build a good decoder that doesn't wait for a whole packet before it starts decoding. Here are the 4 reasons I believe it's not possible.

Problem 1: No way to find packet preamble

One problem is that there is no way to verify packet validity unless you wait for a whole packet. The "UTSC" preamble that marks the beginning of a packet only works because I added a CRC32 field to check against the rest of the packet. This is because "UTSC" could occur anywhere in the stream, and you don't want the decoder to find a false beginning. Obviously the preamble doesn't matter once you lock onto a station, but you don't want to get garbage by starting the decode process in the wrong place.
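To make this concrete, here's a sketch of the kind of check a decoder would do. The field layout below (4-byte "UTSC" preamble, then a 4-byte big-endian CRC32 of the payload, then the payload) is my own illustration, not necessarily the real UTSC layout:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class PreambleCheck {
    // Hypothetical layout for illustration only: "UTSC", then a 4-byte
    // big-endian CRC32 of the payload, then the payload itself.
    static boolean isRealPacketStart(byte[] buf, int off, int payloadLen) {
        byte[] magic = "UTSC".getBytes(StandardCharsets.US_ASCII);
        for (int i = 0; i < 4; i++) {
            if (buf[off + i] != magic[i]) return false;  // no preamble here
        }
        long stored = 0;
        for (int i = 0; i < 4; i++) {
            stored = (stored << 8) | (buf[off + 4 + i] & 0xFFL);
        }
        CRC32 crc = new CRC32();
        crc.update(buf, off + 8, payloadLen);
        // A false "UTSC" occurring mid-stream almost never has a matching CRC.
        return crc.getValue() == stored;
    }

    public static void main(String[] args) {
        byte[] payload = "hello".getBytes(StandardCharsets.US_ASCII);
        CRC32 crc = new CRC32();
        crc.update(payload);
        long c = crc.getValue();
        byte[] pkt = new byte[8 + payload.length];
        System.arraycopy("UTSC".getBytes(StandardCharsets.US_ASCII), 0, pkt, 0, 4);
        for (int i = 0; i < 4; i++) pkt[4 + i] = (byte) (c >>> (24 - 8 * i));
        System.arraycopy(payload, 0, pkt, 8, payload.length);
        System.out.println(isRealPacketStart(pkt, 0, payload.length)); // true
        pkt[9] ^= 1;  // corrupt one payload byte
        System.out.println(isRealPacketStart(pkt, 0, payload.length)); // false
    }
}
```

Of course this only confirms a packet start after the whole payload has arrived, which is exactly the point: the CRC check itself forces the decoder to wait.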

Problem 2: Can't use FEC to correct errors

Another problem is that there is 250 Kbit/sec of FEC protecting the data. This amounts to 4/5 FEC. Without an entire packet, you don't get the FEC and so you can't correct any errors. You might argue that only the first packet would be played without FEC and that all future packets would be protected by it. But in reality, because you started decoding the first packet without FEC, you must continue to do so or you risk a brief interruption in the playback. Here is an illustration of this issue, which assumes that no interleaver is used.


In Scenario 1, the decoder waits for a whole packet plus the FEC before decoding and playing. In Scenario 2, the decoder waits until just enough data is available before decoding and playing. Notice that if Scenario 2 continues, it will never get to receive the FEC before playing a packet.

Problem 3: Time discrepancy in video compression

The biggest problem in my opinion is the uneven distribution of data inherent to digital video compression, especially the interframe variety used by almost every codec.

In analog TV, every element of each frame took the same amount of time, every time a frame was transmitted. There were some tolerances, such as the power grid deviating from 60 Hz or the slight frame-rate reduction made when color was added, but overall it was reasonably precise and unchanging.

In digital video, far more data is spent on keyframes than on inter frames. In case you didn't know, a keyframe is a complete, self-contained image; the inter frames that follow encode only the differences from the frames before them. Every so often another keyframe is sent so the decoder can resynchronize.

If the decoder tries to start decoding before a whole packet is received, it will most likely fail to play the video properly. Much more data is sent in the initial keyframe of each packet than in the rest of the frames, and since the channel bandwidth is constant, keyframes take longer to send than inter frames.


Since digital frames arrive at irregular intervals, you can't just start playing the video as soon as you get the first few frames. If you don't wait for the entire packet, you're very likely to run out of data while a longer frame is still being transmitted.
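To put rough numbers on this, here is the transmit time of an assumed 200-kbit keyframe versus an assumed 4-kbit inter frame on a constant-rate channel. Both frame sizes and the 1.25 Mbit/s rate (inferred from a 1,250,000-bit packet taking about a second on air) are my assumptions, for illustration only.

```python
# Transmit-time sketch for a constant-rate channel. Frame sizes are assumed;
# the 1.25 Mbit/s rate is inferred from the 1,250,000-bit packet size.
CHANNEL_RATE = 1_250_000    # bits per second (assumed)
FPS = 30                    # assumed display rate
FRAME_SLOT_MS = 1000 / FPS  # 33.3 ms of playback per frame

for name, bits in [("keyframe", 200_000), ("inter frame", 4_000)]:
    tx_ms = bits / CHANNEL_RATE * 1000
    print(f"{name}: {tx_ms:.1f} ms to send vs {FRAME_SLOT_MS:.1f} ms to play")
```

With these assumed sizes, the keyframe takes about five frame slots to arrive, so a decoder that started playing immediately would run dry while it was still being transmitted.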

Problem 4: The interleaver

Even though I think #3 is the biggest issue, I saved this one for last because the interleaver is one of the more recent developments. To make this section short, UTSC packets are scrambled by an interleaver, and because the entire packet is scrambled, a receiver must wait until the entire packet is received before decoding it. This means the absolute minimum latency is about 1 second.

Below is a longer explanation of the interleaver.

Although UTSC could be transmitted on any band wide enough, such as 500 MHz or 2.4 GHz, I think it's best suited to the 900 MHz band. The problem is that many smart energy meters transmit FHSS hopping bursts all over 900 MHz. Since reliability is the focus of UTSC, I needed a way to mitigate them. The FEC is good, I think, but it won't fix the huge burst errors that occur when every energy meter in a neighborhood transmits over a station.

I decided to use a fully random interleaver, a sort of scrambler. Since this is part of the air interface (the way it's transmitted), it doesn't affect the packet format that I released in 2017.

I generated a large amount of encryption-grade randomness, verified it with a program called ENT, and then used it to generate random integers for the interleaver's bit positions. Once you have a UTSC packet ready to transmit, you simply copy it bit-by-bit into a new interleaved packet, using the bit positions I generated.

Since there are 1,000,000 (data) + 250,000 (FEC) bits in a UTSC packet, we have 1,250,000 bits, starting at bit 0 and ending at bit 1,249,999. We do NOT want to interleave the "UTSC" preamble, because we need receivers to be able to find it, but we DO want to interleave the CRC32 that comes right after it because we want it to be more resistant to burst errors.

This means we only have to interleave 1,250,000 - 32 = 1,249,968 bits, numbered from 0 to 1,249,967. So when we start populating the bits of the interleaved packet for transmission, bit #1601 from the plain unscrambled packet goes first at position #0, then bit #952398, and so on. Since the pattern is made from high-quality randomness, the bit positions are extremely well distributed.

On the other end, the receiver would have a copy of the interleaver's bit ordering scheme and would work the process backward. To reproduce the original packet, the receiver would take bit #0 from the received packet and put it at bit #1601, and put bit #1 at bit #952398, and so on. At the end, the original packet will have been reconstructed and any burst errors will be evenly distributed over the entire packet, making it easier for the FEC to fix.
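Here's a toy version of the process on a 1,000-bit packet, using a seeded PRNG permutation in place of the real 1,249,968-entry table. The idea is the same: every transmitter and receiver shares one fixed bit-position table, and a burst error on air ends up scattered after deinterleaving.

```python
import random

# Toy interleaver on a 1,000-bit packet. The real table is 1,249,968 fixed
# positions generated from high-quality randomness; a seeded PRNG permutation
# stands in for it here. TABLE[i] is the source bit sent at air position i.
N = 1000
TABLE = list(range(N))
random.Random(42).shuffle(TABLE)

def interleave(bits):
    return [bits[TABLE[i]] for i in range(N)]

def deinterleave(bits):
    out = [0] * N
    for i, b in enumerate(bits):
        out[TABLE[i]] = b   # work the process backward
    return out

rng = random.Random(7)
packet = [rng.randint(0, 1) for _ in range(N)]
aired = interleave(packet)
aired[300:350] = [1 - b for b in aired[300:350]]   # a 50-bit burst error
recovered = deinterleave(aired)
bad = [i for i in range(N) if recovered[i] != packet[i]]
print(len(bad), "errors, spread over", max(bad) - min(bad), "bit positions")
```

The 50 flipped bits come out scattered across nearly the whole packet instead of sitting in one contiguous run, which is exactly what gives the FEC a fighting chance.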

Here's a picture showing a 20-millisecond burst error. The drawing is to scale, showing how much that error would damage a UTSC packet. You may want to open the image in another tab and zoom in to see it in detail.

Left: a 20-ms error in a plain UTSC packet
Right: the same error in an interleaved packet.

I wasn't sure whether I wanted to interleave at all, because I immediately saw that it would prevent instant playback. I wondered if I should leave some of the bit flags un-interleaved so the channel could indicate whether it was interleaved or not, but I realized that an error could flip the flag and confuse the decoder, not to mention the issue of burst errors breaking any non-interleaved channels. In the end, I decided that all UTSC channels will be interleaved.