PSN-L Email List Message

Subject: Re: Network time standard
From: ChrisAtUpw@.......
Date: Sun, 10 Apr 2005 23:52:01 EDT


In a message dated 10/04/2005, ian@........... writes:

Hi,
an interesting discussion. I note that a spec of <0.1 seconds has been 
mentioned (below). Could I ask, how is this derived? Apologies if this is 
documented somewhere on the psn site.
Hi Ian,
 
    It is very simple. You have to determine the start of the P wave signal, 
which is of the order of 1 Hz, against a noisy background. Local P waves may 
also have higher frequency components added in. There may also be 
uncertainties in the filter delay, particularly if you use Butterworth filters; the 
delay in Bessel filters is shorter and better defined. This accuracy of wave 
timing should be achievable by amateur seismologists in practice.

I only ask because I know that using specs that are higher than needed can 
lead to costs that could have been avoided. I guess one would start by 
determining what is a reasonable error in calculating epicentre distance that can 
be tolerated, and working back from there to derive a time spec.
    I agree that you can't use the sub-microsecond accuracy of a good GPS 
clock. However, if your clock only had an accuracy of 1 sec, you would have a 
possible error of ~10 km. If it had an error of 10 sec, the possible error is 
~100 km. Neither would be particularly helpful when estimating the depth of a 
quake at, say, 40 km, or its position. I'm sorry to put it so bluntly, 
but if you CAN'T give error limits to your measurements, you are JUST 
COLLECTING GARBAGE!
 
    If your signals are to have any value, you cannot accept a cumulative 
timing error which builds up in an unknown way to the equivalent of many km 
of uncertainty - e.g. a rubbishy timing system. 
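As a back-of-envelope check, the clock-error figures above follow from multiplying the timing error by the ~10 km/sec P wave velocity quoted later in this message. A minimal sketch:

```python
# Position uncertainty implied by a clock error, taking the ~10 km/sec
# P-wave velocity quoted elsewhere in this message.
V_P = 10.0  # approximate P-wave velocity, km/sec

def location_uncertainty_km(clock_error_s, v_p=V_P):
    """Distance error contributed by a timing error of clock_error_s seconds."""
    return v_p * clock_error_s

for dt in (0.1, 1.0, 10.0):
    print(f"clock error {dt:>4} s -> ~{location_uncertainty_km(dt):.0f} km")
```

This reproduces the 1 sec → ~10 km and 10 sec → ~100 km figures, and shows why a 0.1 sec spec keeps the timing contribution down to ~1 km.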

Another question is, which of the many factors influencing epicentre 
calculation is the limiting one? I would imagine that the average speed from the 
epicentre to a psn station would vary from station to station, since each 
station is located on a different part of the Earth and the wave will travel 
through different parts of the Earth at a slightly different speed for each 
direction.

If all the psn stations were locked in time to less than 0.1 seconds, then 
the average speed of the wave would have to be known no worse than this for the data 
to benefit. For a teleseismic event which took, say, 15 minutes to arrive, 
all the "rays" would have to travel at the same average speed to within about 
0.01% of each other. Is this possible?!
    Sure. You do not seem to be thinking correctly about the problem. Let's 
put the measurements in context. With a P wave velocity of ~10 km/sec, it 
takes under an hour for the wave to traverse the Earth. At a frequency of 1 
Hz, the wavelength is about 10 km. Structures which are smaller than this will 
not affect the transmission significantly at any great distance. The wave 
path traverses regions of the Earth which have different velocities, and the 
track is inevitably curved. What you observe with the seismometer is the 'net 
result at one observation point'. But this signal may be modified significantly 
by the local sub-surface geology. 
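The wavelength figure follows directly from lambda = v / f, with the values quoted above:

```python
# Wavelength of a 1 Hz P wave travelling at ~10 km/sec: lambda = v / f
v_p = 10.0  # P-wave velocity, km/sec (figure quoted above)
f = 1.0     # typical teleseismic P-wave frequency, Hz

wavelength_km = v_p / f
print(wavelength_km)  # 10.0 - structures much smaller than this
                      # scatter the wave only weakly at great distance
```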
    Under favourable conditions, you can measure the time difference between 
the P & S waves and get a rough estimate of the distance to the source, but 
only if you can make 'reasonable' assumptions about the average wave 
velocities. The location programme then has the job of reverse tracing the waves to 
the source, for several seismometer responses at different places, and getting 
the best overall 'fit'. The average travel-time curves that you have seen 
give the first approximation to this relationship and THEY ARE CURVES - you do 
NOT have straight line relationships. See the AmaSeis overplots. Moreover, 
each seismometer location and wave direction may have slightly different 
properties depending on the sub-surface geology - the curves are only averaged 
values.
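The S minus P distance estimate can be sketched with a simple straight-ray model. The average velocities below are my own illustrative crustal figures, not values from this message; real location programmes use the curved travel-time tables instead:

```python
# Straight-ray S-minus-P distance estimate. The velocities are assumed
# average crustal values (illustrative only, not from this message).
V_P = 6.0   # assumed average crustal P velocity, km/sec
V_S = 3.5   # assumed average crustal S velocity, km/sec

def sp_distance_km(sp_interval_s, v_p=V_P, v_s=V_S):
    """Rough source distance from the S-P arrival-time difference.

    The S wave, being slower, falls behind the P wave by
    (1/v_s - 1/v_p) seconds for every km travelled.
    """
    return sp_interval_s / (1.0 / v_s - 1.0 / v_p)

print(round(sp_distance_km(10.0)))  # 84 (km) for a 10 s S-P interval
```

This is only the 'first approximation' straight-line relationship; the averaged travel-time curves bend away from it with distance.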


    We wouldn't be having this discussion if computers were fitted with 
reasonably accurate clocks. The 4.194 MHz timing crystals can be trimmed to a few 
seconds per fortnight, but high precision temperature-tracking modules can 
give 0.1 ppm. The 32 kHz crystals often found in watches are much more 
temperature sensitive. The lousy apology for a clock fitted to my current 
computer drifted 8 sec in the first 2 hours and was down 28 sec in a day. Even 
hourly web updates would not give me anywhere near the precision 
required. You used to be able to buy input expansion boards with clock 
modules on them, but I haven't seen any about lately.
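Expressed as fractional drift rates, the figures above work out as follows (a quick check, assuming the quoted errors accumulate linearly over the stated intervals):

```python
# Convert the drift figures quoted above into parts per million,
# assuming the error accumulates linearly over the stated interval.
def drift_ppm(error_s, interval_s):
    return error_s / interval_s * 1e6

print(round(drift_ppm(8, 2 * 3600)))           # PC clock over 2 h: ~1111 ppm
print(round(drift_ppm(28, 24 * 3600)))         # same clock over a day: ~324 ppm
print(round(drift_ppm(2, 14 * 24 * 3600), 1))  # 'few sec per fortnight': ~1.7 ppm
```

Three to four orders of magnitude worse than the 0.1 ppm of a temperature-tracking module, which is the whole problem.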
 
    Since you can get 60 kHz receiver modules and aerials, it would be 
helpful if A/D boards were able to read and update their clocks directly using 
WWVB signals. This should be maybe 1/3 the cost of a GPS system, and you would 
not be dependent on having a permanent phone connection.


    Perhaps Larry could stock them? 
 
    The A/D board that I use has the timing and radio signal synchronisation 
built into its microprocessor. It has a low drift AT-cut crystal, which is 
frequency trimmed. It is not inside the hot computer case. The 
microprocessor is set for hourly radio updates and there is a lock indicator to confirm the 
update status. I can periodically sync the computer clock with the board or 
with the net, but I am not dependent on the computer software clock, or on a 
permanent net connection, for accurate timing and sampling. 
 
    I bought a 60 kHz radio-corrected digital quartz crystal clock and it 
has been a very valuable reference for the station. I can thoroughly recommend 
them. 
 
    Regards,
 
    Chris Chapman
 
    




