I've improved the "first receiver" detection algorithm a bit. It's now stricter and should produce fewer false positives. For example, the packet:
SRCCALL>DST,DIGI1*,WIDE2*,qAR,IGATE:data
is no longer considered to have been heard first by DIGI1. Often in these cases the packet was originally transmitted as SRCCALL>DST,WIDE2-2, digipeated first by a non new-N digi which changed the path to WIDE2-1, and then by DIGI1 which changed it to DIGI1*,WIDE2*. Crediting DIGI1 with the reception would give it an inflated range estimate.
On the other hand, the following packet:
SRCCALL>DST,DIGI1*,WIDE2-1,qAR,IGATE:data
will credit the reception to DIGI1. That packet is probably an original WIDE2-2 packet with a single used hop, or a WIDE1-1,WIDE2-1 packet which has been digipeated by DIGI1, a callsign-substituting fill-in digi.
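For the curious, here's a minimal sketch of the idea in Perl. This is not the actual aprs.fi code, just an illustration of the heuristic:

#!/usr/bin/perl
use strict;
use warnings;

# Credit the first digipeater in the path only when the WIDEn-N element
# following it still has unused hops; an exhausted WIDEn* may hide an
# earlier, unknown digipeater.
sub first_heard_by {
    my @path = @_;

    # cut off the q-construct (qAR, qAO, ...) and everything after it
    my ($qpos) = grep { $path[$_] =~ /^q../ } 0 .. $#path;
    splice(@path, $qpos) if defined $qpos;
    return undef unless @path;

    my ($first, $next) = @path;
    # the first element must be a used (asterisk-marked) real callsign
    return undef unless $first =~ /^([A-Z0-9]{1,6}(?:-\d{1,2})?)\*$/;
    my $digi = $1;

    # an exhausted WIDEn* right after it means the packet may have been
    # digipeated before reaching $digi - don't credit anyone
    return undef if defined $next && $next =~ /^WIDE\d\*?$/;
    return $digi;
}

for my $path ([qw(DIGI1* WIDE2* qAR IGATE)], [qw(DIGI1* WIDE2-1 qAR IGATE)]) {
    my $digi = first_heard_by(@$path);
    printf("%-25s heard first by: %s\n", join(',', @$path),
        defined $digi ? $digi : 'nobody');
}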
Because this changes the receiver performance metrics heavily (for the better), I've deleted the old "heard" databases and receiver histogram data of APRS igates and digipeaters. Sorry about that. They'll grow back soon.
Tuesday, January 26, 2010
The time to test has arrived
Okay, I'm now officially fed up. Fed up with my own bugs caused by the complexity of the aprs.fi software. Every now and then I change something in one corner, maybe to fix a bug or to add a little feature, and that breaks something small in another corner of the project. Because I fail to notice the bug, it might be broken for days until someone actually tells me. Sometimes it's the embedded maps, sometimes it's the Facebook integration, sometimes it's AIS feeding using one of the three methods. And usually they're broken because I changed something that was quite far from the broken part.
It's time to do some automatic testing. It's no longer feasible to manually verify that things work after making a change and before installing the software on the production servers - there are too many things to test. It takes too long, and something is easily forgotten.
Writing automatic tests in hobby projects like this one is usually not done, because it generally feels like the time spent on writing testing code is wasted - hey, I could be implementing useful features during that time. But on the other hand, once some testing infrastructure is in place, it's much quicker and safer to implement changes since it takes only one or two commands to run the test suite and to see that the change didn't break anything.
A little terminology:
Unit tests execute small parts of the code base (usually a single function/method, or a single module/unit/class). They feed known inputs to that little piece of code and check that the expected results come out. They're often run before the whole application is even compiled and built. As an example, I can write a test which runs the APRS digipeater path parser on different example paths and checks that the correct stations are identified as the igates and digipeaters (a sketch of such a test follows below).
System tests run the complete application, feeding data from the input side (for example, APRS or AIS packets using a simulated APRS-IS or JSON AIS connection) and checking that the right stuff comes out at the output side (icons pop up on the map, updated messages show up on the generated web pages).
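To give a concrete idea, a unit test for the path parser could look roughly like the following Test::More sketch. The AprsFi::Path module and its inspect_path() function are made-up names for illustration; the real aprs.fi code is organized differently:

use strict;
use warnings;
use Test::More tests => 3;

use AprsFi::Path;   # hypothetical module name

my $info = AprsFi::Path::inspect_path('SRCCALL>DST,DIGI1*,WIDE2-1,qAR,IGATE:data');
is($info->{srccall},     'SRCCALL', 'source callsign identified');
is($info->{igate},       'IGATE',   'igate identified');
is($info->{first_heard}, 'DIGI1',   'first receiving digipeater identified');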
The open-source Ham::APRS::FAP packet parser, which is used by aprs.fi, already has a fairly complete set of unit tests. After changing something, I can just run "make test" and within seconds I know whether the change broke any existing functionality. If you follow the previous link to CPAN and click View Reports (on the CPAN Testers row), you'll get a nice automatically generated report from the CPAN Testers network. The volunteer testers run different versions of Perl on different operating systems and hardware platforms, automatically download all new modules submitted to CPAN, run the unit tests included with the modules, and send the results to the cpantesters.org web site. Thanks to them, I can happily claim that the parser works fine on 8 different operating systems (including Windows), on a number of different processor architectures (including less common ones like the DEC Alpha and MIPS R14000 in addition to the usual 32-bit and 64-bit Intel CPUs), and with all current versions of Perl, even though I only run it on Linux and Solaris myself.
Last Friday SP3LYR reported on the aprs.fi discussion group that negative Fahrenheit temperatures reported by an Ultimeter weather station were displayed incorrectly by aprs.fi: -1F came up as 1831.8F and 999.9C. I copied a problematic packet from the aprs.fi raw packets display, pasted it into the test file in the FAP sources (t/31decode-wx-ultw.t), and added a few check lines which verify the results. Sure enough, the parsed temperature came out wrong, and "make test" now failed. There were a couple of test packets in the file already, but none of them had a temperature below 0 Fahrenheit.
Only after adding a test case for the bug did I start figuring out where it actually was. After the fix, the "make test" command passed and no longer complained about a wrong parsing result. I committed the changes to the SVN revision control system and installed the fixed FAP.pm module on aprs.fi. Because none of the other tests broke, I can be reasonably sure the fix didn't break anything else. And because the unit test suite now contains a test for this particular bug, I know the same bug will not accidentally reappear later.
This is called test-driven development, and it can be applied to normal feature development just as well. First write a piece of code which verifies that the new feature works, and then write the code which actually implements the functionality. When the test passes, you're done. You need to write a bit more code, but you can be much more confident that the code works and won't break later on during the development cycle.
None of this is news to a professional programmer. But from now on I'll try to apply this approach to this hobby project too, at least to some degree. Yesterday I added a few unit tests to the code to get started:
$ make testperl
--- perl tests ---
PERL_DL_NONLAZY=1 /usr/bin/perl \
"-MExtUtils::Command::MM" "-e" \
"test_harness(0, 'libperl', 'libperl')" \
tests/pl/*.t
tests/pl/00load-module.......ok
tests/pl/11encoding..........ok
tests/pl/20aprs-path-tids....ok
All tests successful.
Files=3, Tests=83, 1 wallclock secs ( 0.26 cusr + 0.06 csys = 0.32 CPU)
It ran tests from 3 test files, and the files contained 83 different checks in total. The first file makes sure all Perl modules compile and load. The second tests the magic character set converter with input strings in different languages and character sets, checking that correct UTF-8 comes out. The third runs 24 example APRS packets through the digipeater path inspector. By comparison, the Ham::APRS::FAP module's test suite has 18 files and 1760 tests, and that's just one of the components used by aprs.fi.
In the near future I'll try to implement a few system tests which automatically reinstall the whole aprs.fi software in a testing sandbox, feed in some APRS and AIS data through the different interfaces, and check that it shows up on the generated web pages a few seconds later. I want to know that the live map API works, that the embedded maps and info pages load, and that the Facebook integration runs - all with a single 'make test' command, in 30 seconds, before installing the new version on the servers.
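Just to sketch the shape such a system test could take (the port, URL and test packet below are made up, and the real sandbox setup will certainly look different):

use strict;
use warnings;
use Test::More tests => 2;
use IO::Socket::INET;
use LWP::UserAgent;

# feed one APRS position packet into the sandbox's simulated APRS-IS port
my $is = IO::Socket::INET->new(PeerAddr => 'localhost', PeerPort => 14580)
    or die "APRS-IS connect failed: $!";
print $is "user TESTCALL pass -1 vers systest 0.1\r\n";
print $is "TESTCALL>APRS,TCPIP*:!6012.34N/02456.78E-system test\r\n";
close($is);

# give the backend a few seconds to process and store the packet
sleep(5);

# then check that the station shows up on its info page
my $ua = LWP::UserAgent->new;
my $res = $ua->get('http://localhost:8080/info/a/TESTCALL');
ok($res->is_success, 'info page loads');
like($res->decoded_content, qr/TESTCALL/, 'test station appears on the info page');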
But now, some laundry and cleaning up the apartment... first things first.
Sunday, January 24, 2010
Temporary markers and AIS receiver position uploading
Here's the list of changes in today's upgrade. A few of the smaller bug fixes were already deployed on Friday and Saturday.
You can now drop temporary markers on the map by right-clicking the map and selecting "Add marker". The markers can be moved around, just drag-and-drop.
It's now possible to conveniently upload the position of an AIS receiver. The instructions are in step 8 of the step-by-step instructions on the AIS feeding page.
At this point you've probably figured out that the context menu (which opens on right-click) and the marker feature add a good bit of infrastructure, and can later be used for a bunch of other nice features like object/item uploading and home position marking. Stay tuned.
The Facebook app's canvas page had failed to load for a couple of weeks. Fixed!
Humidity parsing of h0 (100%) was fixed for normal APRS weather packets. Negative Fahrenheit temperature parsing was fixed in Peet Bros ULTW packets. These fixes went to the Ham::APRS::FAP SVN trunk, and will be included in the next FAP release.
Sunday, January 10, 2010
AIS receiving statistics and some small enhancements
The receiving performance statistics are now generated for (directly connected) AIS receivers too, but only if the position of the receiver is known. Currently the only way to upload the position to aprs.fi is to send an APRS packet (an object, an item, or a normal position packet) on behalf of the AIS receiver. One packet is enough; there's no need to send it all the time. I'm going to implement a way to drop items/objects (and positions of other things like AIS receivers) on the map some time in the near future.
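For illustration, such an object packet could look something like this (the callsign, object name, timestamp, coordinates and symbol characters are all invented here; see the APRS specification for the exact object report format):
MYCALL>APRS,TCPIP*:;AISRX-1  *092345z6012.34N/02456.78E&AIS receiver at my home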
The service status page now shows the time the aprs.fi software was upgraded on the server.
Wildcard callsign lookup result and moving stations tables are now sortable. Callsign sorting was fixed to be alphabetic in a few other places, too.
AIS receivers now have their own green tower symbol to distinguish them from AIS base stations.
The background colour of APRS item labels is now a bit lighter, to make the name of the item more readable.
A couple of smaller bugs were also fixed, along with some code refactoring to break things into smaller parts.