Monday, August 14, 2017

Root-Raised Cosine

Yesterday I finally figured out the root-raised cosine. I'd been trying to understand it for about a year. It's essential for transmitting PSK signals, because merely mixing a square wave with a carrier produces sharp transitions that cause lots of spurious signals. The clean, narrow PSK signals you may have seen all use the root-raised cosine.

Here is the resource that I found yesterday to explain it properly: Pulse Shaping with raised cosine filters.

I found it confusing at first for two reasons. The first is that I didn't know if Wikipedia's formula was for time or frequency domain, and the second is that I had no idea that the RRC is centered on each PSK symbol, meaning that a time of 0 is the center.

Here is the time-domain formula from the University of Stuttgart's Webdemo (linked above):
[REF] Stephan ten Brink, "Pulse Shaping," webdemo, Institute of Telecommunications, University of Stuttgart, Germany, Aug. 2017. [Online] Available: http://webdemo.inue.uni-stuttgart.de

Why to use it

If I told you to multiply a square wave with a cosine and sine wave to make a QPSK signal, you'd get a result similar to the top stereo track shown below.


Those are some sharp transitions. This is what I got when I first tried to make my own QPSK signals. It seems well and good, right? We have our digital wave mixed with I (cosine) and Q (sine) to make an IQ signal playable in an SDR program. Well, yes, but there's a slight problem...
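Here's a minimal NumPy sketch of that naive approach (my own illustration, not the code I originally used; I'm assuming the 40 kHz symbol rate and 1 MHz sample rate used later in this post). Each bit pair becomes a rectangular ±1 level on I or Q, held for a whole symbol, which is exactly what produces those sharp transitions:

```python
import numpy as np

SAMPLE_RATE = 1_000_000           # 1 MHz IQ file, as used later in this post
SYMBOL_RATE = 40_000              # 40 kHz QPSK
SPS = SAMPLE_RATE // SYMBOL_RATE  # 25 samples per symbol

def make_unfiltered_qpsk(bits):
    """Map bit pairs to +/-1 levels on I and Q and hold each level for one
    whole symbol -- the 'square wave mixed with cosine and sine' approach."""
    assert len(bits) % 2 == 0
    i_bits = np.array(bits[0::2])
    q_bits = np.array(bits[1::2])
    i = np.repeat(2 * i_bits - 1, SPS)  # rectangular pulses, 25 samples each
    q = np.repeat(2 * q_bits - 1, SPS)
    return i.astype(np.float32), q.astype(np.float32)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 200)
i, q = make_unfiltered_qpsk(bits)
# i and q are the two channels of a stereo IQ file; the abrupt jumps at
# symbol boundaries are the sharp transitions visible in the track above.
```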


[Vertical is frequency, horizontal is time]
This isn't what QPSK is supposed to look like. See all the spurious signals splattering everywhere? Satellites like Inmarsat have neat and narrow QPSK, so why does mine look so bad?

It turns out that we've simply placed a square wave (which is full of harmonics) into the RF spectrum by mixing with a carrier.

Now, notice the bottom stereo track. It is the same QPSK signal, but smoothed out using a root-raised cosine filter.


Notice how narrow it becomes:


The signal is also good enough that Signals Analyzer can lock onto the 80 kBit bitrate:


I initially made the mistake of entering 40000 in the BR (bitrate) field because the symbol rate is 40 kHz; with QPSK the bitrate is twice the symbol rate, so the correct value is 80000.

(Below) SA can also lock onto the bitrate of the unfiltered QPSK, which means that although it's undesirable for transmitting, it is nonetheless a valid signal (although I did have to zoom out the bottom-left constellation window a bit).



How to use it

The formula generates "taps", which means an array of values to be used on the signal you want to process. In our case, we multiply the taps by our signal.

Here are the variables:

t: time, in fractions of a second, since the center of the symbol.
T: length of half a symbol, in seconds (1 / (2*symbol rate)). (Why not 1/symbol rate? Pitfall explained below)
alpha: roll-off factor, ranges from 0 to 1 (1=wide, 0=brick wall filter)

To maintain the parameters of the signals shown earlier, let's assume we want a QPSK signal with a symbol rate of 40 kHz (80 kbit/sec) and we'll have it in an IQ file sampled at 1 MHz.

Our variables would be:
t: x/sample rate (in our case, x/1000000). x is the FOR loop variable.
T: 0.0000125
alpha: 0.1 (a very low roll-off factor, i.e. a sharp, nearly brick-wall filter)

40 kHz is a convenient value since we want an odd number of taps. Since 1,000,000/40,000 = 25, it will take 25 samples to make one symbol and so we need 25 taps.

The center index will be 12 (0-based), or 13 if you prefer 1-based counting. We want time values counted from 0 at the center, so we want a FOR loop going backwards from 12 down to 1 (the center tap is handled separately).

Pseudocode:
---------------------------------------
for (x = 12; x >= 1; x--) {
    taps[12 - x] = [The formula depicted above, substituting (x/1,000,000) for t]
}

taps[12] = 1

for (x = 1; x <= 12; x++) {
    taps[12 + x] = [The formula depicted above, substituting (x/1,000,000) for t]
}
---------------------------------------
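As a concrete version of the pseudocode, here's a Python sketch that builds the 25 taps (my own illustration). Rather than transcribing the Webdemo's exact notation, it uses the widely published closed-form RRC impulse response, plugged in with this post's variables (T = half a symbol, alpha = 0.1) and with the center tap normalized to 1 as in the pseudocode:

```python
import math

SAMPLE_RATE = 1_000_000
SYMBOL_RATE = 40_000
ALPHA = 0.1
T = 1 / (2 * SYMBOL_RATE)            # half a symbol, per the post: 12.5 us
N_TAPS = SAMPLE_RATE // SYMBOL_RATE  # 25 taps, one per sample of a symbol
CENTER = N_TAPS // 2                 # index 12

def rrc(t, T=T, alpha=ALPHA):
    """Common closed-form RRC impulse response evaluated at time t (seconds)."""
    if t == 0:
        return 1 + alpha * (4 / math.pi - 1)
    x = t / T
    if alpha > 0 and abs(abs(x) - 1 / (4 * alpha)) < 1e-12:
        # the closed form is 0/0 here; use its known limit instead
        return (alpha / math.sqrt(2)) * (
            (1 + 2 / math.pi) * math.sin(math.pi / (4 * alpha))
            + (1 - 2 / math.pi) * math.cos(math.pi / (4 * alpha)))
    num = (math.sin(math.pi * x * (1 - alpha))
           + 4 * alpha * x * math.cos(math.pi * x * (1 + alpha)))
    den = math.pi * x * (1 - (4 * alpha * x) ** 2)
    return num / den

# Taps mirrored around the center (t = 0 at the middle of the symbol),
# then scaled so the center tap is exactly 1 as in the pseudocode.
taps = [rrc((n - CENTER) / SAMPLE_RATE) for n in range(N_TAPS)]
peak = taps[CENTER]
taps = [v / peak for v in taps]
```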

This code will give you 25 taps. Think of it as a matrix with just one column; you multiply each tap by the corresponding point in time on an unfiltered QPSK signal (an element-wise product, one tap per sample). Just make sure to align the taps so they begin at the beginning of each QPSK symbol, otherwise it won't filter properly. Here's a crude ASCII drawing of what I mean:

|  Taps   |     |  QPSK   |
| Matrix  |  *  |   IQ    |
|         |     | samples |

Note that both matrices are only ONE symbol long; the taps repeat at the start of each symbol.
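Here's a NumPy sketch of that per-symbol, element-wise application (my own illustration; note that conventional pulse shaping instead convolves the taps across the whole sample stream, while this follows the per-symbol windowing described above). The Hanning window is only a stand-in so the example runs on its own; substitute your RRC taps:

```python
import numpy as np

def shape_per_symbol(iq, taps):
    """Multiply each symbol's samples element-wise by the taps, aligning
    tap 0 with the first sample of every symbol, as described above."""
    taps = np.asarray(taps, dtype=np.float32)
    sps = len(taps)
    assert len(iq) % sps == 0, "signal must contain whole symbols"
    shaped = iq.reshape(-1, sps) * taps  # one row per symbol; taps repeat
    return shaped.reshape(-1)

# Example: two rectangular QPSK symbols on the I channel, 25 samples each.
taps = np.hanning(25)  # stand-in window only -- use your 25 RRC taps here
i = np.repeat([1.0, -1.0], 25).astype(np.float32)
i_shaped = shape_per_symbol(i, taps)
# Each symbol now rises smoothly from ~0 to its full level and back,
# so the ends meet at the symbol boundaries.
```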

Pitfall

The pitfall I was referring to is that 0 is the center. This is what my first RRC taps looked like when I calculated starting from 0:


I mistakenly thought that was the whole filter but it's only the right half. Again, here is the right half of another RRC filter:

And finally, here is the output of this mistake. I applied the right half of an RRC filter to the QPSK starting at the beginning of each symbol, which kept the ends from matching properly. When it's done right, the ends meet perfectly. I eventually found out it needed to be mirrored and applied with 0 being the center of the symbol.


This is why we use 1/(2 * symbol rate). If you use 1/symbol rate, then the right half alone will span the entire symbol time when you only want it to span half. With 1/(2 * symbol rate), each half covers half the symbol.

I hope this helped if you had no idea how to program the RRC. Use the comment section below if you have any questions or if I left something out.


Tuesday, August 8, 2017

UTSC v1 Packet Specification

After showing the spec to Foxx, wordsun, and Corrosive, only Corrosive had a suggestion and it was to allow embedding a list of ID's in the packets to facilitate pay-per-view. In other words, broadcasters could include a list of ID's of various decoder boxes so only specific paying viewers can see a channel. This is in contrast to my current method of handling encryption like Wi-Fi, using a single password for the channel. I told him his one-key-per-viewer idea most likely wasn't feasible since the packets need to be small.

Here is a link to the document: utsc_finished_release.txt

License:

The UTSC name and specification are Copyright 2017 Designing on a Juicy Cup. The specification may be freely implemented by anyone for any purpose as long as this copyright notice is displayed in the license. The UTSC name may be used in products implementing this standard as long as attribution is made to Designing on a Juicy Cup.

Monday, August 7, 2017

UTSC v1 Standard Finalized

Since 2016, I've been working on a way to transmit digital TV in the 900 MHz Part 15 band. The main focus is on reliability, because ATSC fails miserably in that department. The second focus is on unlicensed operation, because broadcasting is a near-monopoly.

The format officially consists of a 1 Mbit data stream containing VP9 video at about 900 kbits and Opus audio at 48 kbits. Opus is extremely resilient and can withstand high loss, similar to analog TV's sound. It also sounds amazing at that bitrate. Other services, such as audio or data, could be conveyed as well.

My proposed standard is called UTSC. The acronym means nothing, officially. It is designed to be extensible like WAV, meaning that new features can be added without breaking compatibility with the first receivers. My current research suggests that I can fit 32 channels in the band in any given area.

The standard can accommodate any video codec, resolution, and frame rate in theory, but VP9 960x540 @ 30.000 fps is suggested.

I finalized the standard today and I'm documenting it here as proof that I devised this first. If someone else claims to have been first, you can verify with the Wayback Machine that no site before this date carried this info.

The encoder and air interface are proprietary and will not be released yet. However, I'm planning to release the packet format for public review. I'm submitting it privately to Foxx, Corrosive, and wordsun for a pre-review.

Saturday, June 24, 2017

Velvet Ant vs Ziploc Bag

About a week ago I saw a weird bug walking away from a wood pile. It looked dangerous so I caught it in a ziploc bag. It turned out to be a velvet ant. Its jaws were so powerful that it stretched and nearly punctured the bag when I held it taut. Knowing nothing about velvet ants, I didn't realize that the jaws were the least of my worries. I did not know I had to watch out for a stinger, but thankfully I wasn't stung.

As you'd expect from the bright coloring, an article described the pain of their sting as "life-changing, pray-for-death pain". Here is a YouTube video of someone being stung by one:


Needless to say, I was glad to have caught it in a bag. Eventually the bag was placed under a basket on a table and forgotten.

Then today as I entered the living room I saw a bug running across the table. I thought it was a roach and hit it hard and flung it down so I could get a clear path to kill it. But after getting it onto the floor, I realized with horror that this was not a roach, but the velvet ant! Quickly snatching up an envelope, I put it on top of the retreating wasp (that's what they really are) and delivered one quick blow which instantly killed it. It was running to the edge of the table and if I had entered the living room just 5 seconds sooner or later I would've missed it.

Apparently velvet ants can escape from ziploc bags. Here is a picture of the hole it made:

Monday, June 12, 2017

Faulty Marvel Walkie-Talkies

Recently I had the chance to test some children's walkie-talkies. These are generic blue walkie-talkies that can accept plastic front plates with Marvel characters. The label did not specify the frequency but a quick Google search for the FCC ID, 08KAK-2, revealed that they operate in the 49 MHz band. However, that's not where they actually operate...

I played a song on YouTube while holding down the talk button and this is what I got:

(The vertical bars are my LED monitor)

Apparently, this is an incredibly unstable oscillator that actually operates in the 6 meter ham band.

Because of the waterfall, it was trivial to figure out that this was FM. While the width appears to be around 24 kHz here, it can go up to 75 kHz when you blow the mic.

I'm really surprised that the other walkie-talkie can pick up the signal, considering how the transmitter jumps around not only each time you push talk, but even as you're transmitting.

As a ham (Extra class, by the way), I know I would HATE seeing something like this in the 6 meter band. But since the toys work despite their instability, I would expect any narrow FM in this range to "bleed" into the toy's passband, so kids should hear any hams they're interfering with.

Thursday, March 30, 2017

Huge ice maker output

This post is about computer cooling, primarily for GPU's and hard drives. Before I begin, I wanted to suggest that you enter to win an EVGA GTX 1080 Ti. The contest closes 12 days from now. Full disclosure: I do benefit if you enter.

My computer setup happens to be in a room that the previous owners neglected to insulate, so we don't usually run AC there. That presented quite a problem last summer when my computer's aging fan, even with the dust blown out, couldn't keep up and the computer would keep shutting down for its own protection. Losing work randomly made me eventually devise a system of a well-insulated box containing the computer, a fan, and frozen water bottles. It worked pretty well and the box was about as cold as AC, but it still didn't put out nearly as much consistent cold air as was necessary. Using this over the summer, I observed several problems:
  • The bottles didn't have much surface area
  • It quickly goes from frozen bottle to water bottle with ice core. The water was insulating the ice core, and ice "steals" a lot more energy than cold water possibly could (because of the heat of fusion).
  • None of my freezers could freeze bottles as fast as the computer could melt them. I eventually figured out that if I could freeze enough bottles initially, I could swap them in and out and have enough consistently. I worked out the math to find out how many bottles I would need.
I also noticed that our old freezer, a Kenmore Frostless from 1990, froze bottles slightly faster than the brand-new Frigidaire we got in 2014. Considering the old freezer draws less electricity, I suspect the R-12 is responsible. Naturally, I shifted the bulk of my ice production to the Kenmore.

This past winter (2016-2017), I decided to build a better system before it became necessary, so I would have time to perfect it. Since the ice does most of the work and the water (even though it's near-freezing) is virtually useless, I would need my new system to discard the water. Since freezers' automatic ice makers are much faster than freezing bottles, they would be the source of ice. This also solves the surface area problem since a pile of ice cubes will have a lot more area than a cluster of bottles.

I put together a crude system consisting of a plastic container with a hole in the bottom. I fill it with ice and mount a desk fan on top. The fan takes up about half of the top's area and forces air through all the cubes and out the top again. This system worked well. It produced consistent and very cold air, almost like a small AC unit. There are now only two problems:
  • The freezer's ice maker must be able to keep up
  • A bucket must be kept underneath and emptied periodically
This is a lot better than where I was last summer. Lately it's been cold to mild in South Carolina so I haven't had to put it into "production" use yet. However, the old freezer's ice maker was still too slow.

A quick Google search for making ice faster revealed something called "Quick Ice", which is a feature on fancy fridges that uses a fan to blow across the ice maker. At one website they write, "For models with the Quick Ice feature, ice production can be increased by nearly 48% to about 6.2 lbs per day." This did not sound impressive, but I knew it couldn't get worse so I decided to try the idea and see how much I could make.

Using my strongest CPU fan and the steel wire used for the Inmarsat antenna, I made a mountable fan setup for the old Kenmore freezer. Of course, I first checked to see exactly where the plastic ceiling rail was, so I wouldn't end up making the wrong mount. Then I mounted it and wired it to a 12-volt external hard drive power supply. I emptied the ice maker's bucket and then ran the ice maker and fan for 24 hours before checking it. When I returned, I was very surprised to have 14.3 LB of ice! That happens to be exactly 3 full ice cream buckets (below).


Here's the freezer's ice bucket before being emptied:


And here are my freezer settings:


At 144 BTU/LB for ice, this comes out to 2059.2 BTU of cooling per day, or 85.8 BTU/hour. Nowhere near an AC unit, but still quite useful for small enclosures. Plus, since I'm not there to get the ice during the night, I can get more BTU's per hour using it only during the hottest parts of the day. If that's 8 hours a day, I would get 257.4 BTU/hour.
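The arithmetic above as a quick sanity-check script:

```python
LB_ICE_PER_DAY = 14.3
BTU_PER_LB = 144  # latent heat of fusion of ice, per pound

btu_per_day = LB_ICE_PER_DAY * BTU_PER_LB  # 2059.2 BTU of cooling per day
continuous = btu_per_day / 24              # 85.8 BTU/hour running all day
daytime_only = btu_per_day / 8             # 257.4 BTU/hour if used 8 h/day
```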

This is probably not financially feasible in the long term, but it's still a neat project. I was thinking that if I ever set up my GPU for a remote compute service and was still using ice, my website could have a footer that says, "Our GPU setup is proudly cooled with CFC-12, an energy-efficient refrigerant."

New GPU and HDMI audio output

This is just a quick little hack I realized one day when I was listening to P25 police radio. When I first installed my nVidia GPU, Windows set the default audio device to the HDMI audio output, so I set it back to my soundcard. A while later I was about to use Stereo Mix as the input for DSD+ and my USB sound dongle as the output, but decided I might as well use the GPU audio output and used the HDMI audio instead. It worked perfectly once I specified which audio devices for DSD+ to use. Not very exciting, I admit, but in my next post I'll describe my new ice cooling system.

Monday, March 20, 2017

First successful satcom ACARS

Two days ago, using the Satellite AR app,  I noticed that Inmarsat 3F2 at 15W is well above my horizon, probably enough to be received. Yesterday I temporarily put a satellite dish on my roof to try and aim at it. I had no success but I did easily receive the one at 54W. Using JAERO, I was able to decode an AERO channel and get some aircraft messages. When I looked up one of the registration numbers, it turned out to be from a plane flying over Algeria.


Here is an Inmarsat page with coverage maps for their satellites: http://www.inmarsat.com/about-us/our-satellites/our-coverage/

Saturday, March 11, 2017

Video Motion Analysis

I updated my video analysis program to output the standard deviation of each residual frame so I could see the motion levels from frame to frame. Then I saved the results into an Excel sheet and generated graphs. Below is a combined graph of BarScene, Dog Run (my own sequence), and snow (99.99% entropy).

(Click to enlarge)


You'll notice that Dog Run had to be repeated since it was much shorter than the others. There are some other things worth mentioning:

  • The spikes in the blue BarScene graph are scene changes; therefore an encoder could, on the first pass, detect scene changes for keyframe purposes using a standard deviation threshold value.
  • At nearly 100% entropy, snow is obviously not going to survive lossy compression very well. Its high standard deviation would also prompt the encoder to make every frame a keyframe. This would be a major problem unless you had good "hard" rate control.
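The first bullet's idea can be sketched in a few lines (my own illustration; the threshold value is hypothetical and would need tuning per source):

```python
import numpy as np

def residual_stds(frames):
    """Standard deviation of each residual (difference from previous frame)."""
    stds = []
    prev = None
    for f in frames:
        if prev is not None:
            # widen to int16 so the subtraction can go negative
            stds.append(float(np.std(f.astype(np.int16) - prev.astype(np.int16))))
        prev = f
    return stds

def scene_changes(stds, threshold):
    """Frame indices whose residual std spikes past the threshold --
    keyframe candidates for the encoder's first pass."""
    return [i + 1 for i, s in enumerate(stds) if s > threshold]

# Example: two identical noisy frames, then a hard cut to unrelated content.
rng = np.random.default_rng(1)
base = rng.integers(0, 256, (540, 960), dtype=np.uint8)
frames = [base, base.copy(), rng.integers(0, 256, (540, 960), dtype=np.uint8)]
stds = residual_stds(frames)
cuts = scene_changes(stds, threshold=50)  # hypothetical threshold
```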
Here's the BarScene graph, which has the most distinct activity.


I found Ronald Bultje's "Overview of the VP9 video codec" very helpful. I'm close to fully understanding the VP9 bitstream and once I do, I'll be able to start my actual encoding experiments.

Monday, March 6, 2017

4k Video Downloader and initial VP9 tests

As many of you already know, most YouTube videos are in VP9 format. It's an open-source replacement for H.264 (and arguably H.265) that offers far better compression, which can mean either less data for the same quality or more quality for the same data. The downside is that it takes a lot more computational power to process, both compressing and decompressing.

As a side effect of switching browsers, I got my first experience with VP9. I left Internet Explorer in 2014 because I needed to visit sites with HTML5, which it did not yet support. At the time, Opera had about the best support, not to mention faster loading times and low memory use. I don't know if you've seen it in my screenshots, but Opera is still my only browser.

Months ago, or maybe a year, I wanted a better way to download YouTube videos. I knew about Any Video Converter, but it only supports retrieving them as MP4 and I wanted the higher-quality VP9 copies that are served to supporting browsers. I spent a while searching on Google for a way to download YouTube videos as VP9 but got no results.

Fortunately, just as I was about to give up I saw a result deep in the Google pages for a program called 4K Video Downloader. It was quite fortuitous since I don't remember what keywords I used and I can't seem to find it anymore without its name. I'm glad I found it because I was so determined to get VP9 videos that I was considering youtube-dl, a command-line app.

Here's the download link. It's free, which is surprising for a program of this quality. You may want to download it soon if you're interested or it may go the way of Networx and IcoFX, both of which I obtained and archived before you had to pay. One thing I should mention is that "4K" is just what the company prefixes all its titles with; while it can download YouTube videos in 4K, it doesn't have to.

In addition to archiving or watching on a mobile device*, I'm currently doing some experiments that require VP9 videos, and I will describe them after I show you how 4K Video Downloader works.

*Please note that most mobile devices consume FAR less power when playing MP4 rather than VP9. I recommend VP9 for all YouTube archival but only for mobile when space is a concern.



I really like how they designed the interface. It's neither gaudy nor oversimplified like some other video apps I've used. It's like Windows XP: responsive, a few nice icons, some changeable settings, and not in-your-face shiny and trying to be "helpful" like Windows 10.

(Click to enlarge)



I love the flexibility I see here, but let's get to the real reason I use this so much. First, let's copy a YouTube URL.


Now open 4K Video Downloader and notice that the green Paste Link icon at the top now has a YouTube logo. Click the button.


A dialog labeled "Parsing" will open. It normally takes only a few seconds. Sometimes it takes excessively long, and occasionally it will fail, but usually it's successful and you'll get this window:


Alternatively, you can download just the sound:



Obviously you can't open 2 menus at once; I'm just editing to show you everything concisely. MKV is how you get VP9, but sometimes a very new video won't have the MKV option yet.

I like how it shows you the file size before you download, so there are no surprises. You'd be surprised how much smaller MKV can be versus MP4, but don't assume that's always the case. I've seen videos where it's the opposite, so if you want to save space and don't care which format, always compare the MKV sizes to those of MP4.

A quick note: I don't condone stealing music this way. Please buy songs before downloading the music videos. Of course, if you buy the song then you probably don't need the audio download feature.

Once you choose your format and output folder, you can click Download.


It often shows the company's own ads but they can be closed.
After it finishes, it's ready to play in your favorite player. I normally use Media Player Classic because Windows Media Player 10 was the last good version.

As I said near the beginning, I really appreciate the company offering this for free. They did such a good job that I was planning to eventually buy the full version. My only complaint is that there is no way to download YouTube Live streams. For that, I use the command-line app Livestreamer.


Some people are aware of the "ss" YouTube trick, but it's not as good because the site providing the service wants you to pay for any downloads of good quality.

(Below) The "ss" YouTube trick. The free service strips the audio from 480p and 1080p downloads.



And now, a note on the ethics of downloading. Obviously every tool has good and bad uses, so this is not inherently evil. There are often videos I want to own that aren't being sold as a movie or song. As I said above, do not download music videos without supporting the artist. What is okay to save? I personally think it's fine to download:

  • Anything that's not a regular, for-sale movie or song. For example, Big Buck Bunny is copyrighted but okay to download.
  • Vlogs
  • Homemade songs (assuming they aren't being sold somewhere)
If in doubt, use common sense.

And now for my experiments.

I stumbled upon Ronald Bultje's EVE VP9 encoder a while ago. I've always been fascinated by video compression but regarded it as a sort of secret art that required huge teams. I wrote to Ronald Bultje and asked if he wrote his encoder alone or as a team, and he said he did it alone. I was very impressed and inspired to create my own, since his is for sale.

libvpx is the only well-known free VP9 encoder. Its development was primarily done by Google but it has major problems. It's incredibly slow and usually produces blurry output, even at excessive bitrates. Considering the published results of the EVE encoder, which was written by one person, I figured it wouldn't be hard to do better than libvpx.

Pieter Kapsenberg, an expert in this field, informed me via email that libvpx's blurriness was possibly caused by the use of 32x32 transforms and also not using much CPU to compress as well as possible. Although I have not found out for sure why it's slow, I believe it stems from the motion search, which can be solved with GPU-based algorithms.

I recently bought an nVidia GTX 750 Ti and installed the CUDA Toolkit in Visual Studio 2015. I'm actually impressed with how well it integrated with Visual Studio, because most plugins are quite prone to failure. I didn't even have to manually configure directories. After some initial trouble with using the wrong driver version, I was able to start writing CUDA kernels.

My first experiment involves generating residuals (differences between frames) so I can analyze motion and see how best to compress it. I initially thought I'd work with BMP's but who wants a finished encoder that only accepts folders of BMP's? I would've also had to convert from RGB to YUV, which is incredibly fast on CUDA but requires the use of imprecise floating point values, and the U and V are signed, which would require conversion before saving. I quickly saw that it would be better to just accept yuv420p output from FFmpeg, so I recorded some videos and converted them to raw YUV using FFmpeg. I got out my nice Canon camcorder, put it on a tripod to keep the surrounding scene still, and called our dog. In the style of the Xiph.org media collection, I named the file Dog Run. It was taken as 1080p @ 24 Mbit/sec in AVCHD format. I increased the saturation and contrast before downsampling 4x to 960x540 raw YUV.

My program has a flag that toggles between two residual modes. One mode subtracts the current frame from the first frame, and the other mode subtracts the current frame from the one before it. It does a simple subtraction of the 960x540 frames, subtracting the 0-255 Y-plane pixels (U and V are ignored) and producing output in the range of +/- 255. Within the same kernel, this output is divided by 2 and added to 127 to produce the final 8-bit grayscale values used for the images below. According to the CUDA Profiler, it takes about 80 microseconds per frame, making the hard drive the biggest bottleneck. It looked really cool when I finally got to view the 2 types of residuals alongside the original video.
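Here's a rough CPU/NumPy equivalent of the residual mapping the kernel performs (my own sketch, not the actual CUDA code; the reader function and its names are assumptions based on the yuv420p layout):

```python
import numpy as np

W, H = 960, 540
Y_SIZE = W * H
FRAME_SIZE = Y_SIZE * 3 // 2  # yuv420p: full-size Y plane + quarter-size U and V

def read_y_planes(path):
    """Yield successive Y planes from a raw yuv420p file; U and V are skipped."""
    with open(path, "rb") as f:
        while True:
            frame = f.read(FRAME_SIZE)
            if len(frame) < FRAME_SIZE:
                return
            yield np.frombuffer(frame[:Y_SIZE], np.uint8).reshape(H, W)

def residual_gray(cur, ref):
    """Subtract Y planes (+/-255 range), halve and recenter on 127 to get
    an 8-bit grayscale image, as described above."""
    diff = cur.astype(np.int16) - ref.astype(np.int16)
    return np.clip(diff // 2 + 127, 0, 255).astype(np.uint8)
```

For the "first frame" mode, keep the first yielded plane as `ref` for every frame; for the "previous frame" mode, update `ref` each iteration.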
(Click to enlarge)


Notice that the residual on the right is a lot greater because a lot has changed between this frame and the first one.

I also noticed that this is a lot like calculus because we're sort of taking the derivative of the video, by seeing at what points (frames) it's changing the fastest. This should be useful for doing the first pass to find what parts of the video need the most bits.

How 4K Video Downloader ties in

Analyses and algorithms are well and good but to be successful you have to know how to create a bitstream. To better understand VP9, I needed a good VP9 bitstream that I could open in a hex editor. 4K Video Downloader fulfilled the need perfectly.

MKV/WebM are complex container formats so I converted the downloaded video to IVF using FFmpeg. Then I opened it in HHD Hex Editor Neo, extracted a frame into a separate file, and then displayed that file as bits in the hex editor. I copy-pasted the entire bitstream of that frame into a TXT file, opened it in Notepad++, and spent several hours deciphering the uncompressed header while writing a Word document of the exact details. The information is all there in the official VP9 spec PDF, but my document makes it easier to understand since each element in the PDF branches off into a different section, and does this recursively for many levels, making it hard to remember where you are after a while.

Wednesday, March 1, 2017

ATSC RF recording available

I've been in touch with Light Coder as I tried to perform his requested experiment. I found that 8-bit recording in HDSDR somehow creates a DC spike that isn't present when recording 16-bit. After I told him, Light Coder asked in an email if I would send him an ATSC recording to help him debug HDSDR, and since people had asked for ATSC before, I figured I might as well make a blog post out of it. So here's a screenshot and the download:


This is the first ATSC signal I encountered while scanning with the rabbit ears, and I believe it's PBS.

atsc_rfchannel_33.7z (660 MiB)

Monday, February 27, 2017

Associates Degree finished

It's been a long time since I wrote a post because I was finishing the final semester of an Associates degree.  A lot of tough assignments were coming quickly and I wanted to finish as well as I could. Today was the last day and I am able to start experimenting again.

On February 5, Light Coder, one of the HDSDR authors, asked me in an email to retry my 8-bit TV IQ recording using the newest HDSDR. When I wrote that post, I had to use SDR# because HDSDR did not have 8-bit recording support. I responded that I needed some time but that I could probably get to it after February 27. I'm finally ready and I'm setting up everything I need for the experiment. My next post will be about the results.

Thursday, January 19, 2017

AT&T GSM finally gone

Well, AT&T finally turned off GSM everywhere. Here's a picture of the spectrum shown in the previous post, minus the GSM:


I guess now we know that that was AT&T doing the weird GSM-within-LTE scheme. There is still a tiny carrier of some kind in the LTE, but it doesn't look usable.

Monday, January 2, 2017

AT&T (seemingly) forgets to shut down 2G GSM

According to many articles, AT&T promised to shut down 2G GSM on or before January 1, 2017. Imagine my surprise when, using the *#*#4636#*#* Android trick, I was able to force my phone onto EDGE on January 2.

I had already tried this with success on January 1, but I figured AT&T might wait until midnight.


In my area AT&T prioritizes WCDMA and LTE, so I usually have to be 3 stories up and lean the phone on a window to get EDGE.


I have no idea why AT&T is delaying the shutdown of their 2G network, although I can't say I'm disappointed. In my opinion, GSM can and should be kept running in the WCDMA/LTE guard bands as others have suggested: not only is AT&T's LTE unavailable in large areas I've visited, it's also great to be able to keep using old phones and cheap plans.

As it turns out, AT&T isn't the only company to forget to turn off a network. I heard from Trango on StarChat that Inmarsat has yet to disable their B phone service as planned for the end of 2016. He told us that people were checking their service by making brief phone calls and just saying "Check". Let's hope Verizon forgets to turn off CDMA 1x in 2019...



On a side note, I saw what looks like GSM within an LTE signal.

The SDRplay isn't wide enough to show an entire LTE signal, but I scrolled the waterfall and checked the edges against a 700-MHz LTE signal; the width and unmistakable sharp edges matched perfectly. However, the modulation pattern did not appear to be the same as in the 700 MHz band, so perhaps they use something else in the PCS (1900 MHz) band.

The 2 small signals inside the big one have to be GSM because they make a GSM-like noise and are about 200 kHz wide.


If this is intentional, then it's a clever use of spectrum because GSM, being narrow, can give a better signal per watt while LTE, being wide, can afford to lose a few of its miniature OFDM carriers.

Sunday, January 1, 2017

New Year 2017 Shortwave Pirate (4020 kHz)

Last night (December 31, 2016) I did a wideband recording spanning 3205 to 5205 kHz, from 11:48 PM to 12:15 AM. I was going through it this morning and noticed an upper-sideband signal that was just above the 80 meter band, on 4020 kHz. It was quite strong so I listened and it turned out to be interesting music. If it had been near the 40 meter band I would've immediately recognized it as pirate radio but since it was so low I thought it must have been a weird single-sideband broadcaster. Upon Googling the frequency, I was pleasantly surprised to find a YouTube video dated today, January 1, 2017, with that exact frequency in the title, along with the mention of it being a pirate.

Here is that video: