Forums > Windsurfing   Gps and Speed talk

Speed Accuracy

Created by Gorgo, 12 Jul 2021
Roo
782 posts
13 Jan 2022 4:08AM

Julien, I think you are crazy to agree with me. It was MediaTek on the GW60 watch.

mathew
QLD, 2045 posts
13 Jan 2022 9:12AM
boardsurfr said..


mathew said..
have you got a comprehension problem ?
...stop being nasty to everybody that has a differing message.


How about you follow your own advice?

Phrasing offensive statements as a question does not really change anything about their offensiveness. The question still clearly shows what you think, but leaves you the lame excuse of "just asking a question".

You have not made any contribution to the topic of this thread. Your only "contributions" were to call me a "bully" with a "comprehension problem", followed by issuing directives. Nasty, indeed.



Fine.


boardsurfr said..
Comparing the +/- numbers for the speed results, the reported accuracy looks very similar. But the Motion data are 10 Hz, and the Locosys data are 5 Hz, and these numbers are based on Gaussian error propagation. That means that, if the error estimates were indeed nearly identical, the numbers in the results table for the Locosys data should be about 1.4 fold higher, not roughly the same.



That is not how error-propagation works - in simple words, the longer the sample-period, the lower the error, because the errors get swamped by the real signal - this is signal-processing 101. You are wrong.

Now stop being a bully.

AUS4
NSW, 1255 posts
13 Jan 2022 7:48PM
mathew said..

boardsurfr said..



mathew said..
have you got a comprehension problem ?
...stop being nasty to everybody that has a differing message.



How about you follow your own advice?

Phrasing offensive statements as a question does not really change anything about their offensiveness. The question still clearly shows what you think, but leaves you the lame excuse of "just asking a question".

You have not made any contribution to the topic of this thread. Your only "contributions" were to call me a "bully" with a "comprehension problem", followed by issuing directives. Nasty, indeed.




Fine.



boardsurfr said..
Comparing the +/- numbers for the speed results, the reported accuracy looks very similar. But the Motion data are 10 Hz, and the Locosys data are 5 Hz, and these numbers are based on Gaussian error propagation. That means that, if the error estimates were indeed nearly identical, the numbers in the results table for the Locosys data should be about 1.4 fold higher, not roughly the same.




That is not how error-propagation works - in simple words, the longer the sample-period, the lower the error, because the errors get swamped by the real signal - this is signal-processing 101. You are wrong.

Now stop being a bully.


Aren't you THE ORIGINAL BULLY?

John340
QLD, 3123 posts
13 Jan 2022 8:12PM

FFS stop this handbags at 10 paces

tbwonder
NSW, 649 posts
13 Jan 2022 9:51PM


boardsurfr said..



To summarize: reducing the SDOP/sAcc filter threshold from 4.0 to 1.0 affected about 10% of the files (if we exclude a "bad" unit). In a small subset of cases, the stricter threshold eliminated artifact speeds in crashes. In a similar amount of cases, it eliminated larger portions of a session where (probably) the arm band moved so the Motion was partially blocked by the arm and body. In the majority of cases where the stricter filter eliminated points, the effect was rather small.



I have run many of my files through Speedreader and compared the category results with filters set at 4 kts and then 1 kt. There is almost no difference in results. The distance is normally a few metres shorter.

Tightening the filters raises a lot of questions.
Firstly, does the filter need to be tightened for all categories, or just the 2 sec and 5*10?
Over longer categories, wouldn't any errors tend to cancel out?
Should we exclude a 10 sec run made up of 100 data points if just one point has an SDOP of 1.1? Or should we be looking at the average SDOP over the samples?

I agree that bad data is probably caused by wearing the device incorrectly. Tightening the filters may encourage users to be more careful.
As I mentioned before a measure of "track overall quality" would be a great feature in Speedreader.
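The threshold question above can be sketched with a toy filter. The track-point data and field names here are invented for illustration - this is not GPS Speedreader's actual data model:

```python
# Hypothetical track points as (speed_kn, sdop) pairs -- illustrative values only.
points = [(38.2, 0.6), (39.0, 0.8), (41.5, 3.2), (38.8, 0.9), (40.1, 1.1)]

def filter_by_sdop(points, sdop_max):
    """Keep only points whose reported error estimate (SDOP) is at or below the threshold."""
    return [(v, s) for (v, s) in points if s <= sdop_max]

def mean_sdop(points):
    """Average SDOP over a run -- the alternative acceptance criterion suggested above."""
    return sum(s for _, s in points) / len(points)

loose = filter_by_sdop(points, 4.0)   # keeps all 5 points
strict = filter_by_sdop(points, 1.0)  # drops the 3.2 and 1.1 SDOP points
print(len(loose), len(strict), round(mean_sdop(points), 2))
```

Whether one marginal point should disqualify a whole run, or merely raise its average SDOP, is exactly the open question here.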

tbwonder
NSW, 649 posts
13 Jan 2022 10:00PM
John340 said..
FFS stop this handbags at 10 paces


Made me think of this.
The women of the Batley Townswomen's Guild recreating the Battle of Pearl Harbor


Mr Keen
QLD, 569 posts
13 Jan 2022 9:19PM
tbwonder said..

John340 said..
FFS stop this handbags at 10 paces



Made me think of this.
The women of the Batley Townswomen's Guild recreating the Battle of Pearl Harbor




Gold!!

Dezza
NSW, 925 posts
13 Jan 2022 10:41PM

Can fit a lot of GPSs into those handbags

tbwonder
NSW, 649 posts
13 Jan 2022 10:46PM

You should know better, Dezza - the Motions should be worn on your arm, not in your handbag!

boardsurfr
WA, 2321 posts
13 Jan 2022 11:57PM
mathew said..

boardsurfr said..
Comparing the +/- numbers for the speed results, the reported accuracy looks very similar. But the Motion data are 10 Hz, and the Locosys data are 5 Hz, and these numbers are based on Gaussian error propagation. That means that, if the error estimates were indeed nearly identical, the numbers in the results table for the Locosys data should be about 1.4 fold higher, not roughly the same.

That is not how error-propagation works - in simple words, the longer the sample-period, the lower the error, because the errors get swamped by the real signal - this is signal-processing 101. You are wrong.

Now stop being a bully.


You are quite amusing. It is good to see that you have a basic understanding of error propagation, but from what you write, your understanding is quite limited. I have no desire to change that, but since there are others who follow this thread with interest, I'll give a quick introduction in the basic principles of error propagation.

The basic idea is that a measurement can be inaccurate due to random error. For example, if we are surfing at 40 knots, but our GPS can only measure with an accuracy of 2 knots, we may get a reading of 38 knots or 42 knots, or anything in between. If we measure just once, there's a good chance that we measure 38 or 42 knots, so that we have an actual error of 2 knots. But if we measure multiple times, some values are likely to be higher than 40 knots, and some lower. If we average the measurements, the actual error is very likely to be lower.

The commonly used term for the estimate error of multiple measurements is standard deviation. Wikipedia and plenty of other sources explain it quite nicely in detail, but basically, it is the square root of the average square deviation.

The square root term means that if I measure the same thing 4 times, I can reduce the estimated error by a factor of 2 (the square root of 4). If I measure 100 times, the error estimate is reduced by a factor of 10.

For speedsurfing, if I measure a 2-second run at 5 Hz, I get 10 points, so the error estimate will be (roughly) the average error of single points, divided by the square root of 10 (3.16). So the error estimate (standard deviation) for the 2-second run will be about 3-fold lower than the error estimate for single points.

If I measure the same 2-second run at 10 Hz, I get 20 points, and the error estimate gets reduced by a factor of 4.47 (the square root of 20). So simply going from 5 Hz to 10 Hz, while all else is equal, will reduce the error estimate by a factor of 4.47/3.16 = 1.41. That's the square root of 2.

The principle of calculating error estimates this way is often called "Gaussian Error Propagation". That's what GPSResults does, and that's also what I implemented in GPS Speedreader. The +/- numbers in the category results are actually "2 sigma" error estimates - twice the calculated standard deviation. "2 sigma" indicates a 95% confidence level - for 95% of all measurements, the "true" speed should fall within the range given by the +/- estimates - if, and only if, the errors for individual points are completely random.
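The square-root-of-N scaling described above can be checked numerically with a quick Monte Carlo sketch (pure Python; the speeds, noise level, and trial counts are illustrative, not taken from any real device):

```python
import math
import random

def sim_std_error(n_points, sigma=1.0, trials=20000, true_speed=40.0, seed=1):
    """Monte-Carlo estimate of the standard deviation of an n-point average speed."""
    rng = random.Random(seed)
    means = [
        sum(rng.gauss(true_speed, sigma) for _ in range(n_points)) / n_points
        for _ in range(trials)
    ]
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / (trials - 1))

# A 2-second run: 10 points at 5 Hz vs 20 points at 10 Hz.
se_5hz = sim_std_error(10, seed=1)
se_10hz = sim_std_error(20, seed=2)
print(round(se_5hz / se_10hz, 2))  # close to sqrt(2), roughly 1.41
```

Note this simulation assumes perfectly random, uncorrelated errors - exactly the assumption the two caveats below call into question.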

And that is the important part that is quite easily missed by someone not familiar with statistics or scientific data analysis. There are two things that can violate this underlying assumption:
1. "Colored noise": non-random errors, for example due to distortion of the GPS signal in the atmosphere, or bias in the GPS chip.
2. Correlation (linkage) between neighboring data points.

Both of these apply to GPS data from the units we are using. I'll have to leave it to another post to explain this in more detail.

choco
SA, 4032 posts
14 Jan 2022 6:17AM

Having read this thread and tried to digest the information: at the end of the day, if you want accuracy, set up a camera timing system. GPSs are inherently not pinpoint accurate and never will be.

JulienLe
405 posts
14 Jan 2022 4:23AM

For 160 AUD and zero constraints, you can compare yourself to others within 0.02 kn over 500 m.

If anything, finding these bad runs is a testament to the usefulness of the speed accuracy field.

369 ("bad firmware") seems fine now. And maybe 448 will have a perfectly reasonable explanation for this bad period.

The important bit is noticing it. As these will always happen. For whatever reason.

mathew
QLD, 2045 posts
14 Jan 2022 8:52AM
boardsurfr said..

mathew said..


boardsurfr said..
Comparing the +/- numbers for the speed results, the reported accuracy looks very similar. But the Motion data are 10 Hz, and the Locosys data are 5 Hz, and these numbers are based on Gaussian error propagation. That means that, if the error estimates were indeed nearly identical, the numbers in the results table for the Locosys data should be about 1.4 fold higher, not roughly the same.


That is not how error-propagation works - in simple words, the longer the sample-period, the lower the error, because the errors get swamped by the real signal - this is signal-processing 101. You are wrong.

Now stop being a bully.


The basic idea is that a measurement can be inaccurate due to random error. For example, if we are surfing at 40 knots, but our GPS can only measure with an accuracy of 2 knots, we may get a reading of 38 knots or 42 knots, or anything in between. If we measure just once, there's a good chance that we measure 38 or 42 knots, so that we have an actual error of 2 knots. But if we measure multiple times, some values are likely to be higher than 40 knots, and some lower. If we average the measurements, the actual error is very likely to be lower.



That ^^ and everything else you said is entirely 100% accurate, but it fails to take into account one key factor... the most important one, which is the sampling technique.

[ I am quite aware of the 1.4 reference in your previous post - it is the root-2 rounded - there is no need to describe it further, except that others may find it interesting. ]

Sampling anything is bounded in finite time. If you could sample instantaneously, then your analysis would hold, but you cannot - everything requires a finite sample-period.

I will assume you understand time-aliasing (in the digital domain) and why Nyquist frequency is an important design constraint.

In practice, there is no need to build anything that is over-engineered, if for no other reason than it takes longer and is more expensive to build highly-accurate vs reasonably-accurate stuff. Ergo, the carrier-sampler will be in-loop of the PLL (the PLL is the phase-locked loop of the carrier-decoder); the PLL for all GPS units is a digital PLL; all digital PLLs use a delay-loop; all delay-loops have finite loop time, resulting in averaging. That simple case of merely locking onto the carrier results in an averaging-out of a lot of the randomness of signal-sampling.

Similarly, even the apparent-instantaneous behaviour of reading out a digital-value from a memory register (aka the frequency of the carrier), is a result of an analog signal being propagated through silicon. Almost all digital systems use a clock frequency - that propagation needs to settle to a constant zero/one before the clock cycle. In other words, clock-speed limits your ability to take instantaneous measures, instead resulting in averaged-measures.

-> the sample-rate and the sample-period both make up the signal quality. It's not just the sample-rate.

mathew
QLD, 2045 posts
14 Jan 2022 9:02AM
mathew said..
-> the sample-rate and the sample-period both make up the signal quality. It's not just the sample-rate.



Therefore, 5 Hz vs 10 Hz errors cannot simply be compared in the way it has been presented - it is more nuanced than that.

mathew
QLD, 2045 posts
14 Jan 2022 9:14AM
choco said..
Having read this thread and trying to digest the information at the end of the day if you want accuracy setup a camera timing system, GPSs inherently are not pin point accurate and never will be.


That statement is both accurate and not accurate.

To use camera timing, you either need:
a/ two cameras with perfectly synchronised clocks and perfectly synchronised framing
b/ a start-time trigger + end-camera
c/ avoid the cameras... just use synchronised-gates + a human

In all of those, there are errors that will creep in. In any measurement - if you are playing for sheep stations - you also need to qualify and quantify any and all errors. **

** it turns out that when experts do research on some topic, it's not the core research itself that takes the time; the other 99% is taken up defining the bounds of the problem, which includes error bounds. The same applies to measuring anything. (It's one of the reasons why the idea of "claimed speed" is quite interesting, and a whole other discussion.)

boardsurfr
WA, 2321 posts
15 Jan 2022 2:29AM
tbwonder said..
As I mentioned before a measure of "track overall quality" would be a great feature in Speedreader.


I like the idea, although it gets a bit more complicated when you look at things like the periods where the Motions seem to have slipped, causing accuracy to get substantially worse.

The bigger question, though, is how many users even look at their data in Speedreader or similar software. From reading the forum, I get the impression that many people just upload their data directly to ka72.com, and post from there to GPSTC, sometimes directly from a phone. For those, the "overall track quality" feature would be better placed in the Motion firmware (which Julien may be able to do quickly), and/or on ka72 (which seems less likely to happen).

boardsurfr
WA, 2321 posts
15 Jan 2022 12:40PM
mathew said..
-> the sample-rate and the sample-period both make up the signal quality. It's not just the sample-rate.

Now that is, indeed, confusing. Sampling rate is just the inverse of the sampling period, so those are just two different descriptions of the same thing.

mathew said..
I will assume you understand time-aliasing (in the digital domain) and why Nyquist frequency is an important design constraint.

This seems to be what mathew's post was about. What he appears to be saying is that you cannot just look at error estimates; you also have to take into account errors that get introduced if you do not sample often enough. If what you measure (in our case, speed) changes with a given frequency, you have to take point measurements at twice that frequency to avoid "aliasing" errors. A somewhat simplified example: if you try to get a top 2-second speed, but get a speed measurement only every 3 or 4 seconds (which can happen with some watches on the wrong settings), then there's a pretty good chance you only measure before or after the top speed, but not the top speed itself. A bit more accurately: if your speed changes significantly within one second while you are surfing, then you'll have to measure speed more than once per second, or your measurements will be less accurate due to "aliasing" errors.

It's pretty easy to measure the potential "aliasing error" at lower hertz rates. The difference between 5 Hz and 10 Hz data is actually quite small. If I don't forget, I'll post some of the results, perhaps in a separate thread. Not tomorrow, though - we finally have enough wind for a speed session in the forecast.
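The undersampling example above can be illustrated with a toy speed trace (the burst shape and numbers are invented for illustration; real tracks are noisier):

```python
import math

def speed(t):
    """Toy speed trace in knots: cruising at 38 kn, with a brief gust peaking at 42 kn at t = 30 s."""
    return 38.0 + 4.0 * math.exp(-(((t - 30.0) / 3.0) ** 2))

# Sample at 10 Hz over 60 s: the peak is captured.
fine_peak = max(speed(i / 10.0) for i in range(601))

# Sample only once every 4 s (as some watches do on the wrong settings): the peak is missed,
# because no sample lands at t = 30 s.
coarse_peak = max(speed(4.0 * i) for i in range(16))

print(round(fine_peak, 2), round(coarse_peak, 2))
```

The coarse sampling can only ever report the speed at its sample instants, so a short burst between samples is systematically under-reported.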

mathew
QLD, 2045 posts
17 Jan 2022 11:38AM
boardsurfr said..

mathew said..
-> the sample-rate and the sample-period both make up the signal quality. It's not just the sample-rate.


Now that is, indeed, confusing. Sampling rate is just the inverse of the sampling period, so those are just two different descriptions of the same thing.


They are not the same thing. If someone says to me "it took 10 seconds of monitoring someone taking a dump", that would be the sample-period (10 seconds); it doesn't say how often dumpage occurs.

Sample-rate - how often/frequently you sample the thing... once per day, once per hour, once per second, 5 times per second

Sample-period - the time it takes to take the sample; at once per day, it may take one second to take the sample (aka it doesn't take 23hrs59).

boardsurfr said..
This seems to be what mathew's post was about. What he appears to be saying is that you cannot just look at error estimates, but you have to also take into account errors that get introduced if you do not sample often enough.


Yes, that is one aspect. Also, if your sample-period is too short, it can result in jitter. And if the sample-period is long (wrt the sample-rate), it results in a _natural_ averaging of some aspects of the thing being sampled.
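The "natural averaging" effect of a long sample-period can be sketched with a toy model (this is a statistical illustration only, not how any GPS chip actually integrates its signal):

```python
import math
import random

def std(xs):
    """Sample standard deviation."""
    mu = sum(xs) / len(xs)
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / (len(xs) - 1))

rng = random.Random(7)
# Instantaneous noisy readings around a true speed of 40 kn (illustrative noise level).
raw = [rng.gauss(40.0, 1.0) for _ in range(100000)]

# Short sample-period: pick one instantaneous reading per output sample.
inst = std(raw[::10])

# Long sample-period: each output sample integrates (averages) 10 consecutive readings.
windowed = [sum(raw[i:i + 10]) / 10 for i in range(0, len(raw), 10)]
avg = std(windowed)

print(round(inst, 3), round(avg, 3))  # the windowed noise is roughly inst / sqrt(10)
```

Same output rate in both cases; only the fraction of the interval spent integrating differs, and that alone changes the noise level.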

K888
110 posts
1 May 2022 8:38PM
JulienLe said..
GT11: SiRF Star II,
GT31: SiRF Star III,
and as Roo once mentioned here, GW52 and GW60 both contain plain-text references to MediaTek frames. And their specifications would be consistent with something like the MT3318.

Chalko's paper: "This article explores and tests the "Speed Dilution of Precision" (SDOP) parameters developed and kindly provided by SiRF (many thanks SiRF) and implemented by Locosys Technology Inc (many thanks Roger at Locosys) into firmware of their GT31 hand-held GPS datalogger."

Now, is the SiRF-era SDOP the same as the MediaTek-era SDOP? Wasn't SDOP SiRF's property? Locosys' website mentions SDOS. Did anyone ever notice something similar to SDOP in MediaTek's datasheets? You have four hours. A4 sheets, 2 cm margin on each side.



It took me longer than 4 hours to investigate, and it's not on A4 sheets, but here is my answer paper. :D

www.seabreeze.com.au/forums/Windsurfing/Gps/Locosys-Detective-Work?page=1


