Measurement Errors | Jess Riddle | Nov 14, 2006 03:10 PST
Ed,
I'm not sure I completely follow your comments. I assume that when you say our methods underestimate heights, you are referring to the ways a rangefinder could return a too-short reading: the rangefinder is not at click-over, the rangefinder unintentionally hits an intervening twig, or the actual high point isn't being measured. Further, obtaining click-over for both the base and top measurements in forest conditions is extremely difficult, so slight underestimates are to be expected. However, some random errors are still involved. The clinometer could be misread slightly too high as easily as slightly too low. The rangefinder is still subject to smaller random errors, and perhaps other small errors from changes in ambient lighting. Hence, assuming a cluster of measurements indicates the same top and base are being measured each time, I would expect the average of a cluster of measurements to be slightly low due to the systematic click-over error, but the highest measurement could be slightly high due to random errors. The highest measurement could be the highest of a set because the systematic errors are at a minimum, because the random errors are at their maximum positive, or some combination of the two. It seems we need to know whether the expected systematic error is larger or smaller than the expected magnitude of the random error to tell if the highest measurement is likely to be an overestimate. When in doubt, I'd rather be slightly low than slightly high. Are you seeing some other factor that makes you certain the highest measurement won't be an overestimate?
Jess
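For readers following the thread, the readings being debated feed a sine-based height calculation (the method ENTS advocates: laser distance times the sine of the clinometer angle, for both the top and the base shots). A minimal sketch, with invented distances and angles - nothing here is from an actual measurement:

```python
import math

def sine_method_height(d_top, angle_top_deg, d_base, angle_base_deg):
    """ENTS sine method: the vertical component of the laser shot to the
    top plus the vertical component of the shot to the base. Distances
    are straight-line laser readings; angles are clinometer readings,
    with the base angle negative when the base is below eye level."""
    top = d_top * math.sin(math.radians(angle_top_deg))
    base = abs(d_base * math.sin(math.radians(angle_base_deg)))
    return top + base

# Illustrative numbers only: 55 yd to the top at +38 degrees,
# 18 yd to the base at -12 degrees (1 yd = 3 ft).
height_yd = sine_method_height(55, 38.0, 18, -12.0)
print(round(height_yd * 3, 1))  # height in feet
```

Any shortfall in the measured distances or angles propagates directly into this sum, which is what the rest of the thread argues about.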
On 11/9/06, Edward Frank <ed_f-@hotmail.com> wrote:
Dale,
Using the ENTS methodology, if you are getting a tight cluster of height values, that indicates you do not have a significant error. Our methods do not over-measure height, but under-measure it. The best value - the one that represents the most accurate height for a clustered series of measurements - is the highest value. By averaging the values you are not being conservative, but deliberately introducing additional error into the measurement. Therefore in your series, the best height value is the 120.8-foot height for the oak.
Ed Frank
RE: Foundation Ridge Flat update (errors) | Edward Frank | Nov 15, 2006 08:13 PST
ENTS,
This post may eventually appear a couple of times on the list, as it seems to be on hold for no apparent reason despite attempts to post from my email account. Everyone is jumping on my case over this comment, and some are misrepresenting what I have said - that really annoys me - so I am posting this reply again, this time directly from topica. I will post a lengthier, more detailed discussion at some point, and we can all discuss it reasonably, or jump on me in force, as the case may be...
Edward Frank
-------------------------------------------
Jess,
I don't think we are in major disagreement - it comes down to the magnitude of the errors we assign. It would be a long discussion that I need to work up in more detail, and I will do so when I get time. I will reiterate some of the basics that everyone knows, just to show the train of thought, in a much shortened version below.

There are different types of errors: busts, random errors, systematic errors, and other non-random errors. Hitting an intervening branch should produce an anomalously low value that should be distinguishable from the cluster created by true readings of the top. When you first start using a clinometer you make mistakes reading the instrument. Once you become proficient, however, you are not really making mistakes reading the clinometer; the errors involved are related to the degree of resolution you can discern when reading the instrument. You will improve with practice, and, everything else being equal, the cluster of your height measurements will become tighter. There is a limit to how well you can resolve the readings, so you will always have some spread in the measurements. This is considered random error. If it is truly random, averaging a series of data points will give a closer approximation of the actual height. However, all points in the dataset are equally valid; otherwise they could not be averaged together. Each contributes to the final average. So some are not really better than others; some are just closer to the average value.

In a sense you are asking an unfair question. Could the high value be the composite of high random errors on both ends? Yes, it could. Likewise, it may be the correct value, and all of the other measurements in the cluster, due to random error, are low. What you do know is that the correct measurement in a tight cluster of values probably lies within the cluster. Choosing the average to represent the height does not make it correct; it only means that a height close to the average is the "most likely." The more measurements, the more likely the true height will be close to the average value, but generally not enough measurements are taken for true randomness in any case, and the average value is not necessarily the true height.
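The behavior of purely random error described above can be put in numbers; in this sketch the "true" height and the noise magnitude are invented, chosen only to illustrate how averaging behaves at realistic and unrealistic sample sizes:

```python
import random

random.seed(42)

TRUE_HEIGHT = 120.8   # invented "true" height, feet
NOISE = 0.5           # invented resolution limit, +/- feet

def measure():
    # Purely random error: equally likely to read high or low.
    return TRUE_HEIGHT + random.uniform(-NOISE, NOISE)

few = [measure() for _ in range(4)]      # a typical field sample
many = [measure() for _ in range(4000)]  # rarely achievable in practice

mean_few = sum(few) / len(few)
mean_many = sum(many) / len(many)
print(round(mean_few, 2), round(mean_many, 2))
```

With thousands of shots the mean sits essentially on the true height; with the four shots a measurer can actually get, the mean can still sit noticeably off, which is Ed's point about "not enough measurements for true randomness."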
Errors from the laser measurements are not random; they are non-random (meaning they are in the same direction, but varying in magnitude in an irregular fashion) errors that always underestimate the distance to the target. You talk of click-over at both the base and top. Think about the mechanics. As you tilt your head up and down to measure, you are moving the back of the laser a couple of inches. As you move a little side to side, or back and forth, to obtain click-over at the base after getting one at the top, could you change the instrument height by an inch or so? If you stay on the same plane, fine - but are you perfect? You measure the top at click-over. Are you at the exact point of click-over every time, or maybe an inch or two past it to keep the reading from flickering? Are you hitting the exact tip of the branch with the laser, or an inch or two farther down? You can see the top and get it with the clinometer - are you getting the exact same point with a good bounce from the laser? If not, you are underestimating the distance. These are all tiny errors, insignificant in themselves, but since they are all in the same direction, together they are cumulative. If you are not exactly at the highest point at exactly the point of click-over on every shot through the tiny openings, you are underestimating the distance and hence making the tree shorter.
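The cumulative effect of those one-sided slips can be roughed out numerically. Every magnitude below is an assumption invented for illustration (not a claim about any real instrument); the point is only that shortfalls which all point the same way add rather than cancel:

```python
import math

# Assumed per-shot slips, in feet; each one makes the tree read shorter.
clickover_overshoot = 2 / 12    # stopping an assumed 2 in past click-over
hit_below_tip       = 1.5 / 12  # laser bouncing a bit below the true tip
instrument_shift    = 1 / 12    # eye/laser height drifting between shots

angle_top_deg = 40.0  # assumed clinometer angle to the top

# A distance shortfall along the line of sight scales into height
# by sin(angle); the instrument-height shift feeds in directly.
distance_short = clickover_overshoot + hit_below_tip
height_shortfall = (distance_short * math.sin(math.radians(angle_top_deg))
                    + instrument_shift)

print(round(height_shortfall, 2))  # feet the tree reads short
```

Under these assumed numbers the combined shortfall is roughly a quarter of a foot per measurement - individually "tiny errors," but all in the same direction.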
So it becomes a question of whether the generally random errors resulting from resolution limitations are bigger or smaller than the non-random, cumulative errors resulting from the laser rangefinder process. If the laser errors are greater in magnitude than the resolution errors, then even the highest value obtained would still be an underestimate. In any case, the average value must be somewhat lower than the true height because of the addition of laser rangefinder errors to the problem. If the magnitude of the resolution errors is greater than any from the laser rangefinder, then the true value may be lower than the maximum measured value, but it would still be higher than the average value, assuming there are enough measurements and the errors really do cancel out.
I think that the laser errors are greater; therefore, since I cannot justify supporting a reading higher than any measured, I believe the highest measured value of a tight cluster of measurements is the best value of the series. I will expand on this in a forthcoming post, when I get some other projects out of the way.
Ed Frank
RE: Foundation Ridge Flat update | beth_k-@yahoo.com | Nov 15, 2006 08:18 PST
Ed, Don, and Dale,
In my profession of medical laboratory technology, when we do correlations between two instruments performing the same tests (e.g., a Complete Blood Count - CBC), we throw out the highest number and the lowest number and then find the average. Maybe this same approach can be applied in this case.
Beth
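Beth's lab procedure is a trimmed mean; a minimal sketch applied to an invented cluster of four height readings (note that with only four values, trimming both ends leaves just two to average):

```python
def trimmed_mean(values):
    """Drop the single highest and lowest readings, then average the
    rest - the lab procedure Beth describes."""
    if len(values) < 3:
        raise ValueError("need at least 3 readings to trim both ends")
    trimmed = sorted(values)[1:-1]
    return sum(trimmed) / len(trimmed)

# Invented cluster of four height readings, in feet.
readings = [120.2, 120.5, 120.6, 120.8]
print(trimmed_mean(readings))  # averages only the two middle readings
```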
RE: Foundation Ridge Flat update | foresto-@npgcable.com | Nov 15, 2006 12:58 PST
Beth-
In Bob/Ed/Dale's world, numbers are numbers are numbers!

I understand your profession's tossing of the highest and lowest; that is generally employed in larger samples to remove outliers (both tails of the normal curve) which might disproportionately skew the average/mean/median. In Dale's case, the grouping was tight, the tails very short, and removing the highest and lowest would have reduced the sample size to N=2. What Ed was thinking, I think(?), was that Dale had four trees with readings x1, x2, x3, x4 instead of one tree with four estimates of actual height... correct me, Ed, if I've misread you!
-Don
RE: Foundation Ridge Flat update (errors) | foresto-@npgcable.com | Nov 15, 2006 13:13 PST
Ed-
Thanks for the well-thought-out (and patient!) response. I am in general agreement, and specifically welcome your points on the loss of accuracy/precision involved in the minor adjustments occurring at 1) click-over, and 2) the pivoting up and down of the instrument (whether clinometer or laser, since the laser incorporates a clinometer) as a source of error... we can't call tree measurements accurate to 0.1 foot if we are raising and lowering the measuring instrument 0.1 to 0.2 feet in the process of measuring.

Thanks for the added clarity!
-Don
RE: Foundation Ridge Flat update | Edward Frank | Nov 15, 2006 15:59 PST
Beth, Don,
Thank you for the comments. The situation Beth suggests is not quite analogous, because ENTS measurements combine one type of random error with another type of non-random, pseudo-systematic error; they are not just random errors. Throwing out the highest and lowest is a reasonable strategy if you are not sure whether some other type of error is involved that could create anomalously high and low values (its utility can be debated, but the idea is commonly used). The entire question with our measurements is which of the error sources is the predominant factor in height errors. This presumes that other errors, such as intervening branches, can be distinguished from the cluster of valid measurements - and I think they can be.

As for reducing the sample size, that is an issue because it is often difficult to get a good reading on the very top of the tree; there may not be enough measurements to make any average meaningful. This is exacerbated if you toss the upper and lower values. If the grouping is tight, then I am accepting that the high and low values in the cluster are not anomalous outliers, but real values.
Ed
Agreeing with Ed | Robert Leverett | Nov 16, 2006 09:32 PST
Ed,
I agree with your assessment. As the owner (at one time or another) of seven lasers and five clinometers, I've learned to recognize the patterns associated with the sources of potential error (instrument-driven, shifting position, intervening obstruction).

Averaging in low shots that are often laser clippings of intervening twigs clearly robs the tree. I'll have much more to say on laser testing and error sources in the coming weeks. I'm presently retesting three different lasers against some taped distances as checks. The patterns are interesting, but they do not contradict what you've written.
Bob