Sunday, April 21, 2019

Server upgrades, and raw packets UI abuse

This month I'm upgrading the servers quite a bit. The old blade server hardware chassis is getting shut down and replaced with a brand new pre-owned and dumpster-dived blade server chassis. The new second-hand blades have 8 CPU cores and 192 GB memory each, which is quite sweet! I suspect the web site & database will be doing very little disk reads with this much cache around.

The downside is that we're not going to run the fibre channel SAN network, and the associated disk racks, since they're a bit power-hungry. So, internal disks only it is. To get relatively good disk IO performance I installed 1 TB SSD disks in the blades – two SSDs in each for RAID1 (~170€ with 24% VAT each, 3 blades, total 6 SSDs). The blades have CERC 6/i raid controllers, which are SATA2 and quite slow. The Samsung SSDs are capable of some 500+ Mbytes/sec reads, but the hardware controller in RAID1 config only gives out 130 Mbytes/sec. With Linux software RAID1 the kernel can balance reads across the two SSDs and give out 2*140 Mbytes/sec, which is alright. Being a database application, especially with the huge cache memory, the workload will be mostly random-access and write-heavy, so the streaming read benchmark above is not too relevant. The random-access speed of the SSDs gives a nice boost, and the memory will help an awful lot.
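As an aside, whether a software RAID1 mirror has both halves up can be checked from /proc/mdstat. Here's a minimal sketch in Python – the md device name and the sample output are just illustrative assumptions, not taken from my actual setup:

```python
# Sketch: check /proc/mdstat to see whether a Linux software RAID1
# array has both mirrors active ("[UU]" in the status line).
def raid1_healthy(mdstat_text, device="md0"):
    """Return True if the named md device reports all mirrors up."""
    lines = mdstat_text.splitlines()
    for i, line in enumerate(lines):
        if line.startswith(device + " "):
            # The following line holds the status, e.g. "... [2/2] [UU]"
            status = lines[i + 1] if i + 1 < len(lines) else ""
            return "[UU]" in status
    return False

# Example /proc/mdstat contents for a healthy two-disk RAID1:
sample = """\
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      976630464 blocks super 1.2 [2/2] [UU]
"""
```

A degraded array would show "[U_]" instead of "[UU]", which this check catches.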

The new servers are already running a live replica of the database, but the web service is still running on the old ones. Hoping to get the move done soon. The operating system is also getting a bump to the next major release, and some adjustments are needed to support that.

In less happy news: A few smart folks have, again, figured that it would be a great idea to download raw APRS-IS packets programmatically (using software they've written) from the user interface, the raw packets view. This is forbidden by the TOS, and a longer reasoning can be found in the FAQ:
Downloading data for application use using the user interface (for example, fetching the /info/ page just to get the current coordinates of a station) wastes both human and computer resources.  
First, you need to write a parser to extract the data from the HTML (and then fix your parser every time I change my HTML layout). Second, the server needs to do the user interface magic (language and timezone selection, user-friendly template formatting, session set-up) for every request, all of which is unnecessary.  
[...] All of this extra overhead consumes CPU time, which in turn heats up the computer room, consumes electricity, destroys tropical rain forests, accelerates global warming, and kills kittens. And baby seals.
The correct way to obtain a raw APRS-IS feed is to connect to the APRS-IS directly – you'll get a lightweight TCP session with all the packets in real time, as clean CRLF-delimited rows with none of my HTML encoding to mess it up. For free, from the very same place where I get them from, and much faster! 
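To illustrate just how lightweight that is, here's a small read-only APRS-IS client sketched in Python. The server name, port, callsign and filter below are placeholder assumptions for the example – check the APRS-IS documentation for the real server addresses – and a passcode of -1 requests a receive-only session:

```python
# Sketch of a read-only APRS-IS client over a plain TCP socket.
# Server, callsign and filter are placeholders; passcode -1 = receive-only.
import socket

def login_line(callsign, passcode, app, version, filt=None):
    """Build the one-line APRS-IS login command, CRLF-terminated."""
    line = f"user {callsign} pass {passcode} vers {app} {version}"
    if filt:
        line += f" filter {filt}"
    return line + "\r\n"

def stream_packets(host="rotate.aprs2.net", port=14580, max_lines=10):
    """Connect, log in, and print a few CRLF-delimited lines."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(login_line("N0CALL", -1, "raw-example", "0.1",
                                filt="r/60.2/24.9/100").encode("ascii"))
        buf = b""
        printed = 0
        while printed < max_lines:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            while b"\r\n" in buf and printed < max_lines:
                line, buf = buf.split(b"\r\n", 1)
                # Lines beginning with '#' are server comments, not packets.
                print(line.decode("utf-8", errors="replace"))
                printed += 1

# stream_packets()  # uncomment to connect and dump a few lines
```

No HTML parsing, no session set-up – each packet arrives as one line of plain text.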

If one chooses to fetch the packets by hammering on my user interface, as opposed to an API, it'll create unnecessary extra load on my servers, and make me a little bit upset. Especially when someone fetches 1000 packets every time, via the Tor network, repeatedly every few seconds, and ends up fetching the same packets over and over again. That is not a great way to make use of this free service.

To make this sort of abuse harder, you'll now need to log in to view the raw packets. It's a little bit more clumsy if you're not logged in already, but the login cookies have a long lifetime so you don't need to do it that often.