Lynne Kiesling
When I read Gina Kolata’s New York Times article on the inaccuracy of GPS watches, I was not impressed by either her journalism or her analysis. Her main argument was that we spend all this money on GPS watches to record our training, and they aren’t even accurate. Her example:
On Sunday, I tried a little experiment with friends who also have GPS watches. I started from my house, and Jen Davis and Martin Strauss started from her house; we met up along the way.
My route was 15.96 miles, according to Google Maps. My watch said it was 15.54. Jen’s watch, an older model, did much better. Her route was 19.1 miles. Her watch said 19.02.
First, it’s impossible to interpret her two data points because she says nothing about the age of the devices, the brand, the software version, and so on. All GPS devices are different, and she does her readers a disservice by glossing over those details and by not informing them of how GPS accuracy has improved as the hardware and software have advanced over the past decade. Second, her device performed at 97.37% accuracy and her friend’s at 99.58%. What do they expect, 100%? You don’t have to be a statistically literate scientist or social scientist to have a realistic expectation that anything north of 95% accuracy is acceptable. Even a Type A, data-centric recreational athlete should not expect 100% accuracy!
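To make those percentages concrete, here is the back-of-envelope arithmetic on the article's own two data points (accuracy taken simply as measured distance divided by the Google Maps reference distance):

```python
# Back-of-envelope check of the two data points in the Kolata article.
# Accuracy here is simply (watch distance) / (reference distance).

def accuracy(measured_miles, reference_miles):
    """Return measured/reference as a percentage."""
    return 100.0 * measured_miles / reference_miles

kolata = accuracy(15.54, 15.96)   # her watch vs. the Google Maps route
friend = accuracy(19.02, 19.10)   # Jen's older watch vs. its route

print(f"Kolata's watch: {kolata:.2f}% accurate")  # ~97.37%
print(f"Jen's watch:    {friend:.2f}% accurate")  # ~99.58%
```

Both readings are well north of the 95% threshold that any realistic user should consider acceptable.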
You may have read this same article because Glenn Reynolds linked to it at Instapundit. Unfortunately, I don’t think he reflected critically enough on the article.
For a more thorough analysis of GPS devices, and a thorough debunking of Kolata’s article, I recommend the DC Rainmaker blog. Ray is famous in multisport athlete circles for his thorough, detailed reviews of training devices and their performance. He argues that Kolata missed the boat in her conclusion that GPS devices are unreliable training partners. His critique focuses on two essential facts to remember when using a GPS device. First, as I alluded to above, not all hardware/software are the same, and software updates can improve accuracy:
In the world of GPS watches, the reality is that not all devices are created equal. As I’ve shown before in four posts of accuracy tests, some units do simply perform better than others. Sometimes that is correlated to price, and other times it’s tied to the GPS chipset used and/or the firmware. To base the entire article (and all GPS watches in general) on what appears to be a single watch on a single run being off seems a bit of a stretch. For example, when the Timex Global Trainer first came out, there were indeed accuracy issues with it. On average, it was 2.5% off (short) – was her watch a Global Trainer? Or perhaps, it was an original Garmin FR610 – which also had issues early on with some routes showing about 2% short. Yet, both have been fixed by their respective companies (June for the FR610, August for the Global Trainer).
I found it strange that the author didn’t note the brand, nor contact them for an official reason, explanation, or PR response. Isn’t that the most basic journalistic thing to do?
In my mind, this is no different than saying “cars are unreliable”, because your particular car is in the mechanic’s shop. As in fact the author noted, her friend’s route was just about spot on, within .08 miles after 19 miles – or 99.58% accurate.
Second, and this is interesting, the Kolata article focuses on the complaints race directors get after races when runners’ GPS distances do not match the stated race distance. But Ray points out that a mismatch is expected if you take corners wide during the race:
As I’ve gone into in (probably painful) detail in the past, when you’re running a big race with lots of folks, you usually end up running quite a few corners wide. And those corners add up. Remember that races are measured according to USATF standards and certified non-GPS devices, which require that the measuring person take the absolute shortest possible route during the measurement, right up to the edge of the curb. That’s not how the vast majority of folks run their races though. Instead, most folks are forced into much wider paths, often with swerving around other runners. Every time you swerve around a runner – you’ve probably added 5-10 feet to your path.
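It is easy to see how quickly those swerves accumulate. A rough illustration, where the swerve count is an assumption of mine and the 5–10 feet per swerve is Ray's figure (I split the difference at 7.5 feet):

```python
# Rough illustration of how mid-race swerves add up. The swerve count is
# an assumption for illustration; the 5-10 ft per swerve comes from Ray,
# and 7.5 ft splits that range.

FEET_PER_MILE = 5280.0

def extra_miles(num_swerves, feet_per_swerve=7.5):
    """Extra distance run, in miles, from swerving around other runners."""
    return num_swerves * feet_per_swerve / FEET_PER_MILE

# Suppose a mid-pack marathoner swerves around another runner 100 times:
print(f"~{extra_miles(100):.2f} extra miles")  # about 0.14 mi on top of 26.2
```

That extra tenth-and-a-half of a mile shows up on the watch but not in the certified course distance, with no GPS error involved at all.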
He also looks at some race results suggesting that faster runners record more accurate distances, in large part because they run with fewer people and less congestion, and thus don’t have to swing as wide around corners to avoid other runners.
In my own experience, GPS accuracy has gotten better over the 5 years that I’ve trained with a GPS device. I currently use a Garmin Forerunner 610, and for reasons I won’t bore you with, when I ride my road bike I use it as well as a CycleOps non-GPS computer that is paired with my PowerTap. Both devices generally yield distance estimates within 2% of each other.
Thus, if you are considering a GPS device and the Kolata article gave you pause, I would not give it much credence: I don’t think she really understands the technology or the importance of the details involved, and the result is very superficial journalism. Instead, bookmark DC Rainmaker and use his detailed reviews to guide your purchases.
When I were a lad, I used to estimate bike ride distances with a map and a piece of string: lay the string along the road’s curves, pull it straight against a ruler, and then convert using the scale of the map.
When I were older than a lad but younger than I am now I would use Expedia.com driving directions.
Currently, the Average Joe can use his watch to get a more accurate distance, with less effort, than my old methods ever gave.
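A GPS watch is essentially doing the digital version of that string trick: it sums the distances between successive recorded position fixes. A minimal sketch using the haversine great-circle formula (the track coordinates are made up for illustration):

```python
# How a GPS watch estimates distance: sum the great-circle distances
# between successive recorded (lat, lon) fixes. The sample track below
# is invented purely for illustration.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MI = 3958.8  # mean Earth radius in miles

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MI * asin(sqrt(a))

def route_miles(fixes):
    """Total distance along a list of (lat, lon) fixes."""
    return sum(haversine_miles(*a, *b) for a, b in zip(fixes, fixes[1:]))

# Illustrative track of three fixes heading roughly north:
track = [(40.0000, -75.0000), (40.0050, -75.0000), (40.0100, -75.0010)]
print(f"{route_miles(track):.2f} miles")
```

Note that because the watch samples discrete points, a wiggly path gets slightly straightened between fixes, which is one reason GPS tends to read a touch short rather than long.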
Oh no, world not yet perfect.
Dang, I think I’ll just have to lay down and weep, eh?
The calibration standard was Google Maps? What is the accuracy of Google’s orthorectification of its surface maps? Is it even linear, or does it depend on local features? In other words, have you ever noticed that sometimes your route seems to go right through yards and sidewalks when viewed in Google Maps with the satellite imagery turned on?
Good critique, but it seems to me there may have been another problem with the original test: is Google Maps really safe to treat as a 100% accurate benchmark for distance? Perhaps I’m just overly cautious, but that seems a remarkable assumption.