Showing posts with label hardware. Show all posts

Sunday, April 21, 2019

Server upgrades, and raw packets UI abuse

This month I'm upgrading the aprs.fi servers quite a bit. The old blade server hardware chassis is getting shut down and replaced with a brand new pre-owned and dumpster-dived blade server chassis. The new second-hand blades have 8 CPU cores and 192 GB memory each, which is quite sweet! I suspect the web site & database will be doing very little disk reads with this much cache around.

The downside is that we're not going to run the Fibre Channel SAN and the associated disk shelves, since they're a bit power-hungry. So, internal disks only it is. To get reasonably good disk I/O performance I installed 1 TB SSDs in the blades – two in each for RAID1 (~170€ each with 24% VAT; 3 blades, 6 SSDs in total). The blades have CERC 6/i RAID controllers, which are SATA2 and quite slow. The Samsung SSDs are capable of some 500+ Mbytes/sec of sequential reads, but the hardware controller in a RAID1 config only delivers about 130 Mbytes/sec. Linux software RAID1 can balance reads across the two SSDs and deliver 2*140 Mbytes/sec, which is alright.

aprs.fi is a database application, and especially with this much cache memory the workload will be mostly random-access writes, so the streaming read benchmark above is not too relevant. The random-access speed of the SSDs gives a nice boost, and the memory helps an awful lot.

The new servers are already running a live replica of the database, but the web service is still running on the old ones. Hoping to get the move done soon. The operating system is also getting a bump to the next major release, and some adjustments are needed to support that.

In less happy news: A few smart folks have, again, figured that it would be a great idea to download raw APRS-IS packets programmatically (using software they've written) from the aprs.fi user interface, the raw packets view. This is forbidden by the TOS (https://aprs.fi/page/tos) and a longer reasoning can be found in the FAQ (https://aprs.fi/page/faq): 
Downloading data for application use using the user interface (for example, fetching the /info/ page just to get the current coordinates of a station) wastes both human and computer resources.  
First, you need to write a parser to extract the data from the HTML (and then fix your parser every time I change my HTML layout). Second, aprs.fi needs to do the user interface magic (language and timezone selection, user-friendly template formatting, session set-up) for every request, all of which is unnecessary.  
[...] All of this extra overhead consumes CPU time, which in turn heats up the computer room, consumes electricity, destroys tropical rain forests, accelerates global warming, and kills kittens. And baby seals.
The correct way to obtain a raw APRS-IS feed is to connect to the APRS-IS itself (http://aprs-is.net/Connecting.aspx) - you'll get a lightweight TCP session with all the packets in real time, as clean CRLF-delimited rows with none of my HTML encoding to mess them up. For free, from the very same place where I get them, and much faster!
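For illustration, a minimal receive-only APRS-IS client can be sketched in a few lines of Python. The server name, port and filter below are examples (port 14580 is the usual filtered feed, and passcode -1 gives a receive-only session); adjust them to your own needs:

```python
import socket

def login_line(callsign, passcode="-1", app="example-client", version="0.1",
               filter_expr=None):
    """Build the APRS-IS login line (passcode -1 = receive-only)."""
    line = f"user {callsign} pass {passcode} vers {app} {version}"
    if filter_expr:
        line += f" filter {filter_expr}"
    return line + "\r\n"

def stream_packets(callsign, host="rotate.aprs2.net", port=14580,
                   filter_expr="r/60.2/24.9/100"):
    """Connect to an APRS-IS server and yield raw packet lines."""
    sock = socket.create_connection((host, port))
    sock.sendall(login_line(callsign, filter_expr=filter_expr).encode("ascii"))
    buf = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buf += data
        while b"\r\n" in buf:
            line, buf = buf.split(b"\r\n", 1)
            if not line.startswith(b"#"):   # '#' lines are server comments
                yield line.decode("ascii", errors="replace")
```

Usage would be something like `for pkt in stream_packets("N0CALL"): print(pkt)` - each iteration yields one CRLF-delimited packet, exactly as it travels on the APRS-IS.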

If one chooses to fetch the packets by hammering on my user interface, as opposed to an API, it'll create unnecessary extra load on my servers, and make me a little bit upset. Especially when someone fetches 1000 packets every time, via the Tor network, repeatedly every few seconds, and ends up fetching the same packets over and over again. That is not a great way to make use of this free service.

To make this sort of abuse harder, you'll now need to log in to view the raw packets. It's a little bit more clumsy if you're not logged in already, but the login cookies have a long lifetime so you don't need to do it that often.

Wednesday, July 4, 2012

Upgrade: New server, Dead Reckoning, and some smaller stuff

One of aprs.fi's servers getting new disks
First I'd like to thank everyone who showed up at Ham Radio 2012 in Friedrichshafen! It was really nice to meet you all.

aprs.fi's second server migration reached completion today. It's now running on two blade servers, each having two quad-core Xeon processors with 12MB cache each, for a total of 8 CPU cores at 3 GHz and 32G of RAM per server. That's a total of 16 CPU cores and 64G of RAM for aprs.fi alone! The memory really helps, as I can now fit more and more stuff in memory and avoid slow hard disk seeks. We found the blade server in a dumpster, so it's not the latest and greatest on the market, but it's certainly useful for a few more years.

I also upgraded the aprs.fi server software with a few visible changes and several important features which are not directly visible to the end users.

The most visible change is that Dead Reckoning was enabled for all stations which have moved recently. If the station has moved within the past 30 minutes, a blue line will be displayed, indicating where the station would be right now, assuming it has continued on the same course and speed.

For stations which transmit quite often relative to their speed, or do not turn quickly (ships, airplanes and high-altitude balloons, for example), the DR'ed position will be surprisingly accurate. For cars driving on winding city roads it will be less accurate, but the DR line still provides an indication of the relative age, speed and usefulness of the displayed position. Your internal non-artificial algorithm can easily figure out that the car probably turned along the road, even if the blue line ends in a forest.

The blue line becomes gradually more translucent during the first 10 minutes after the reception of the position report. If it's almost completely translucent, the position is more than 10 minutes old and both the DR'ed position and the displayed old position are pretty outdated. This information can be very useful, too.
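The projection and fading described above can be sketched in Python. This is not aprs.fi's actual implementation - the flat-earth approximation, Earth radius constant and linear fade are assumptions chosen to match the behaviour described in the text:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, assumption

def dead_reckon(lat, lon, speed_kmh, course_deg, age_s):
    """Project a position along its last known course and speed.

    Uses a small-distance flat-earth approximation, which is fine
    for the few tens of kilometres a station covers in 30 minutes.
    """
    dist_m = speed_kmh / 3.6 * age_s          # km/h -> m/s, times seconds
    brg = math.radians(course_deg)
    dlat = dist_m * math.cos(brg) / EARTH_RADIUS_M
    dlon = dist_m * math.sin(brg) / (EARTH_RADIUS_M * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)

def dr_opacity(age_s, fade_s=600.0):
    """Fade the DR line to fully translucent over the first 10 minutes."""
    return max(0.0, 1.0 - age_s / fade_s)
```

For example, a station last heard at 60°N doing 100 km/h due north projects roughly 0.9 degrees of latitude further after one hour, and its DR line would have faded to invisible long before that.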

It may take a little while for everyone to get used to the DR lines. After having had them for a couple of months on my development server, I can assure you that they really improve the usefulness of the real-time map view!

Ham::APRS::DeviceID module was upgraded to version 1.05, adding detection for a number of new APRS devices: TrackPoint, BPQ32, ircDDB Gateway, DIXPRS, dsDIGI, dsTracker, DireWolf, MiniGate, YAAC, and MotoTRBO. A new version of the module will appear on CPAN soon.


Fixed calendar date selection in the data export tool.

Implemented a nice user and team management web UI (for administrator use only). Especially useful when running the software in a closed "intranet" mode.

Gave a face lift to the service management command line tools.

Web server software was upgraded (as usual).


I've also implemented TETRA LIP position packet decoding and support for Google Maps Enterprise licensing. More on that later!

Tuesday, November 30, 2010

Outages due to upgrades during early December

The aprs.fi service might have some small service breaks during the following few days. I'm upgrading the master server with 4*1TB disks (RAID10, 2TB usable capacity) and doing some other upgrades while I'm at it (including a new operating system kernel, a filesystem switch, LVM, and a database engine upgrade). The first two disks already went in tonight, replacing half of the old ones, and the process will continue with data migration from the old disks to the new ones.

The first reboot (due to the kernel upgrade) will be on Wednesday morning (around 0600 UTC, probably). The web service should keep on running happily, but since the reboot should only take a short while, I probably won't bother to move the APRS data collection master function to the second server, so some data will be lost during the boot.

Sorry for the trouble!

Thursday, July 15, 2010

Problem and Solution, part 2

The Solution to the Problem has finally been installed in production. It took a while since I wanted to benchmark different configurations well before doing the actual installations. Enough space for backups for some time! I'm now able to store database dumps and transaction logs for much longer than a week, and there's also plenty of room for database growth.


The new disks are faster than the old ones, and they run a bit cooler, too:


4 1TB SATA disks set up as a RAID10 array, giving 1862.62 true (binary) gigabytes of space.

I've also ordered more of the same disks for the live aprs.fi servers.

Thursday, May 6, 2010

Problem and Solution

This, my friends, is a problem.


And here, my friends, arrives the solution:


Well packaged.


4 units of 1 terabyte each. After RAID10, that's a nice 2 TB of storage for aprs.fi.


This is just for the backups. I'll need to get 4 more of these within a couple of months to upgrade the live servers.

Thanks go to an anonymous Finnish site user for the donation of 2 disks (and organizing delivery in 26 hours), and all other donors for the other two!

Sunday, December 6, 2009

Maintenance break on Saturday 5th of December

On Saturday, roughly between 18 and 19 UTC, I upgraded operating system components and installed security patches. These upgrades required taking the service down for some time, and even data collection was affected for a short while. Sorry for the trouble!

Before the reboot, the master server had an uptime of 681 days. Thanks go to the UPS and the diesel generator - there have been a few small power outages in Espoo during that time.

In the near future I'll be installing a new master server for aprs.fi; the hardware is already in the basement, waiting for OS installation.

I've also bought an 80 GB Intel X25-M SATA solid-state drive (SSD), a very quick flash-based hard disk-like device. The amount of disk I/O operations is starting to become an issue on aprs.fi, so I'm going to try moving the busiest database tables (which are also the smaller ones) onto the SSD. We'll see how that improves things! It might even allow me to implement some new features (which require additional I/O capacity).

Thursday, January 24, 2008

The 2-hour outage last night

After adding 4G of memory to the server (which went well, as usual) I had to do some disk rearrangements, which took some time to complete. After booting up, the Apache web server didn't return web pages correctly any more - web browsers would show the HTML source code instead of nicely formatted pages. After playing with a network analyzer (thanks, Wireshark) and getting a little help from an ex-colleague I finally got it fixed.

I had upgraded a good bunch of libraries on the system earlier, but hadn't fully restarted the web server processes afterwards (I only run a graceful restart after log rotation and config changes), so the web server was still using the old libraries. After the reboot, the new libraries were linked in, and things broke: there was an extra line break (\r\n) after the Server: Apache... HTTP header, so web browsers promptly stopped parsing the headers, assumed a text/plain content type, and showed the raw HTML.
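Why a single stray CRLF has this effect is easy to demonstrate: the blank line terminates the header block, so everything after it is treated as the body. A toy reconstruction in Python (the raw responses below are simplified stand-ins, not captures from the actual server):

```python
def split_response(raw: bytes):
    """A browser-style split: headers end at the first blank line."""
    head, _, body = raw.partition(b"\r\n\r\n")
    headers = {}
    for line in head.split(b"\r\n")[1:]:   # skip the status line
        name, _, value = line.partition(b": ")
        headers[name.lower()] = value
    return headers, body

good = (b"HTTP/1.1 200 OK\r\n"
        b"Server: Apache\r\n"
        b"Content-Type: text/html\r\n"
        b"\r\n"
        b"<html>...</html>")

# The buggy response had a stray CRLF right after the Server header,
# terminating the header block early.
bad = (b"HTTP/1.1 200 OK\r\n"
      b"Server: Apache\r\n"
      b"\r\n"
      b"Content-Type: text/html\r\n"
      b"\r\n"
      b"<html>...</html>")

good_headers, _ = split_response(good)
bad_headers, bad_body = split_response(bad)

# In the bad case, Content-Type never reaches the header parser, so the
# browser falls back to text/plain and renders the markup as source code.
assert b"content-type" in good_headers
assert b"content-type" not in bad_headers
assert bad_body.startswith(b"Content-Type")
```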

The strange thing was that enabling content-encoding compression (SetOutputFilter DEFLATE) fixed the problem. While it did work, the header was:

Server: Apache/2.2.6 (Unix) mod_ssl/2.2.6 OpenSSL/0.9.8g

While it didn't work, the header was:

Server: Apache/2.2.6 (Unix) mod_ssl/2.2.6
(+ the extra CRLF, then Connection: close and the following headers)

The workaround was to set ServerTokens Prod, so that the header only says Apache and doesn't give any hints of version numbers or modules. My best guess is that disabling DEFLATE changed the library linking order so that the OpenSSL version query returned an extra "\r\n". gnutls, maybe?
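In Apache configuration terms the workaround is a one-liner (the file name varies by distribution; this is a sketch):

```apache
# httpd.conf (or apache2.conf): send only "Server: Apache",
# with no version numbers or module list
ServerTokens Prod
```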

The sysadmin lesson here is probably that you should restart services that depend on libraries you have upgraded, so that incompatibilities are revealed right away. It's nice to know which upgrade broke what; it's not fun to see applications break after "just a reboot".

On the positive side, the system isn't swapping any more.

Wednesday, January 23, 2008

Maintenance outage: adding memory

The aprs.fi server is going to be down for some time tonight or tomorrow, while I'm adding 4G of memory in it. Sorry for the inconvenience.

Thursday, January 17, 2008

Ordered a new server.

Today I ordered a new server for aprs.fi: an Intel Core 2 Quad (Q6600, 2.4 GHz, single CPU with 4 cores), 4GB of RAM initially (will probably upgrade to 8GB later this year), and 2*320G SATA disks (16 MB cache, NCQ, 5-year warranty). 1155 EUR, about 1700 USD. 1U rack case, two gigabit Ethernet interfaces (Intel). This should help performance a little bit, but I'll probably have to get another one later this year. It should arrive in about a week.

I've also ordered a 4GB memory upgrade for the current server, which is 183 EUR (270 USD) - fully buffered ECC memory is a bit pricey, and that's what the server requires. The new one takes unbuffered memory, which is only around 100 EUR for 4GB now.


In case you're wondering, my PayPal account is hessu@hes.iki.fi. Using aprs.fi is free, and will remain so.