PSN-L Email List Message

Subject: Re: Representative stations?
From: Dave Nelson dave.nelson@...............
Date: Thu, 17 Mar 2011 07:07:02 +1100



Hi Dan,
      As I said in my first post, and in the first reply to you: the more
stations the better. When we lose stations under any scheme, we lose
information, for the reasons outlined in my earlier reply. It's as simple
as that !   :)   It doesn't matter which stations are lost, or where,
because quakes are always happening in different places, so some of the
recorders are always closer than others.
     As I said, and as later replies repeated, the seismometers provide two
main lines of research: 1) recording any quakes and deriving information
about the actual event; 2) collecting data that tells us about the deep
structure of the earth.

   For any given event, you only need three sensors, reasonably well spaced
on a circle around the event, to triangulate its location and its rough
depth.  But for another quake, maybe even within 500 km of that first event,
all (or maybe none) of those three sensors may be able to provide useful
data on location or depth, only a reasonable indication of magnitude.
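To illustrate the triangulation idea, here is a minimal sketch: three hypothetical stations on a flat local grid, each with an epicentral distance (as might be estimated from S-P arrival-time differences). The station coordinates and distances are made up for illustration; real location routines work on a spherical earth and solve for depth and origin time as well.

```python
# Hypothetical stations on a flat local grid (km) and epicentral
# distances (km), e.g. estimated from S-P arrival-time differences.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 80.0)]
distances = [50.0, 67.08, 64.03]

def trilaterate(stations, distances):
    """Epicentre from exactly three stations.

    Subtracting station 0's circle equation |p - s|^2 = d^2 from the
    other two cancels the quadratic terms, leaving two linear
    equations in (x, y), solved here by Cramer's rule.
    """
    (x0, y0), d0 = stations[0], distances[0]
    rows = []
    for (xi, yi), di in zip(stations[1:], distances[1:]):
        a = 2.0 * (xi - x0)
        b = 2.0 * (yi - y0)
        c = d0**2 - di**2 + xi**2 + yi**2 - x0**2 - y0**2
        rows.append((a, b, c))
    (a1, b1, c1), (a2, b2, c2) = rows
    det = a1 * b2 - a2 * b1  # zero if the three stations are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

print(trilaterate(stations, distances))  # ≈ (40.0, 30.0)
```

Note the collinearity check in the comment: this is exactly Dave's "relatively well spaced" condition — three stations in a line cannot pin down the epicentre.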

   It still boils down to MORE Sensors = MORE Data = MORE Accurate
Results   :)  simple as that, which is why every seismo organisation around
the world is continuously upgrading and increasing the density of its seismo
networks.

quote....  Perhaps I'm getting it wrong, and you would choose to keep the array
intact, and selectively remove the 'outliers'. I'd be interested to
know.

   Outliers?  Ones on the outer edge?  As stated above, there are none on
the outer edge, because quakes are spread all across the network, so sensors
that are on the outer edge for one event are closer to another event.

We are not dealing with a fixed number of proteins in fixed locations.  We
are dealing with random (chaos-theory) events spread in time and space.

Dave Nelson
Sydney



At 01:11 AM 17/03/2011, you wrote:
>Cheers Brett,
>
>Indeed that visualization is incredibly cool, but I'm still not
>getting my head round the argument.
>
>Let me try again (I'm really sorry if this is coming over as trolling,
>I'm just trying to understand properly): If I were to force you at
>gunpoint to close one of those stations (from the YouTube
>visualization) wouldn't you be more willing to close one in the middle
>of the array rather than one scattered around?
>
>If I forced you to close 50 stations, wouldn't you just thin out the
>array in certain places rather than removing all the scattered
>stations?
>
>Perhaps I'm getting it wrong, and you would choose to keep the array
>intact, and selectively remove the 'outliers'. I'd be interested to
>know.
>
>
>Thanks again for the links to the very nice images.
>Dan.
>
>P.S. I'm not proposing that we should close any stations! I'm just
>wondering if I can use much less data to represent most of the
>'information'. This is a concept from bioinformatics, where you can
>trim a protein sequence database to 10% of its original size, yet keep
>90% accuracy in terms of protein family identification of a query
>sequence [1].
>
>[1] http://www.ncbi.nlm.nih.gov/pubmed/10871268
>__________________________________________________________


