PSN-L Email List Message

Subject: Re: Sample Rate Question
From: Brett Nordgren brett3nt@.............
Date: Thu, 26 Jul 2012 09:34:08 -0400


Hi Randy,

I don't know how AmaSeis does its decimation, but I have been told 
how it should be done.

First, yes, downsampling does reduce noise.  I would expect velocity 
noise to be reduced by roughly the square root of the decimation 
ratio, and noise "power" by the ratio itself, though that needs to be confirmed.

An excellent e-book reference that includes much material on all this 
stuff is at:
http://www.analog.com/en/embedded-processing-dsp/learning-and-development/content/scientist_engineers_guide/fca.html
It approaches things mainly from a real-world perspective, and I have 
found it very useful; it is also quite thorough.

In order to take advantage of decimation, though, you need to 
include fractional counts in your result data.  Some software, 
after decimating, continues to save the results as integer 
values, so you end up losing most of the resolution improvement you 
gained by decimating.
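
To see why, here is a small sketch (hypothetical, not AmaSeis's or 
Dataq's actual code): averaging 40 raw integer counts generally 
produces a fractional value, and rounding it back to an integer 
throws that extra resolution away.

```python
import numpy as np

# One second of 240 sps integer counts that alternate between 0 and 1.
raw = np.tile([0, 1], 120)

# Decimate by 40 (240 sps -> 6 sps) with a plain block average.
averaged = raw.reshape(-1, 40).mean(axis=1)   # six values, each exactly 0.5
truncated = averaged.astype(int)              # stored as integers: all zeros

print(averaged)    # [0.5 0.5 0.5 0.5 0.5 0.5]
print(truncated)   # [0 0 0 0 0 0]
```

The sub-count detail survives only if the averaged values are kept 
as floating point (or rescaled) rather than rounded back to counts.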

The most obvious scheme for decimation is, as you suggested, a 
moving average.  In your example you average the 40 points between 
-1/12 second and +1/12 second to get the decimated value for 
T = 0.  Then you average the 40 points between T = 1/12 second and T = 3/12 
seconds to get the decimated value for T = 1/6 second, and so on.  This 
uses what is called a "rectangular" window, since all points are 
weighted equally in the average.  If your goal is simply to plot the 
decimated values over time, this is a very good approach.  However, 
if you plan to do FFTs and look at the data as a function of 
frequency, it is pretty lousy.
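
For concreteness, the rectangular-window scheme above can be sketched 
like this (an illustration only, not the actual Dataq or AmaSeis code):

```python
import numpy as np

FS_IN, FS_OUT = 240, 6
R = FS_IN // FS_OUT                        # decimation ratio = 40

def decimate_rect(x):
    """Average each non-overlapping block of R samples (rectangular window)."""
    n = len(x) // R * R                    # drop any ragged tail
    return x[:n].reshape(-1, R).mean(axis=1)

t = np.arange(FS_IN) / FS_IN               # one second of time stamps
x = np.sin(2 * np.pi * 1.0 * t)            # a 1 Hz test tone
y = decimate_rect(x)                       # six samples, one per 1/6 second
```

Each output sample is the equal-weight average of the 40 input points 
spanning one 1/6-second interval, matching the T = 0, T = 1/6 s, ... 
scheme described above.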

The better approach is to weight the incoming samples in some manner 
so that, in the same example above, the sample at 1/6 second is 
weighted by 1, while samples progressively farther above and below 
1/6 second are given progressively lower weights as you add them to 
the total.  Sometimes the windows are even designed to overlap in 
time.  Finally, you multiply the sum by a constant that depends on 
the shape of window function you used.  An often-used window 
function is based on a cosine-squared shape.
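
A minimal sketch of such a weighted decimator, using a cosine-squared 
(Hann) window that spans two decimation intervals so successive 
windows overlap (hypothetical code; the final scaling constant is 
folded into the weights):

```python
import numpy as np

FS_IN, FS_OUT = 240, 6
R = FS_IN // FS_OUT                               # decimation ratio = 40

# Cosine-squared (Hann) weights over 2*R points, peaked at the center
# sample and tapering toward zero at the edges.  Dividing by the sum
# plays the role of the final scaling constant.
w = np.hanning(2 * R)
w /= w.sum()

def decimate_hann(x):
    """Weighted average over overlapping 2*R-point windows, one per R samples."""
    centers = range(R, len(x) - R + 1, R)
    return np.array([np.dot(x[c - R : c + R], w) for c in centers])

y = decimate_hann(np.ones(240))                   # a constant input stays constant
```

Because the weights sum to one, a constant (DC) input passes through 
unchanged, which is the sanity check you want for any such window.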

This windowing process is roughly equivalent to putting the input 
data through a low-pass filter, and it can do a decent job of 
suppressing peaks in the output data at alias frequencies.  Windows 
like cos^2 do a pretty good job; decimating with a simple moving 
average can leave alias peaks in the spectrum.
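
To illustrate the difference numerically (again a sketch, not anyone's 
production code): a 10 Hz tone is well above the 3 Hz output Nyquist, 
so whatever survives decimation to 6 sps appears as an alias.  The 
plain block average leaves a sizeable alias residue; the Hann-weighted 
version suppresses it dramatically.

```python
import numpy as np

FS_IN, R = 240, 40                                  # 240 sps in, 6 sps out

def decimate_rect(x):
    n = len(x) // R * R
    return x[:n].reshape(-1, R).mean(axis=1)        # equal-weight average

w = np.hanning(2 * R)
w /= w.sum()                                        # weights sum to one

def decimate_hann(x):
    return np.array([np.dot(x[c - R : c + R], w)    # cos^2-weighted average
                     for c in range(R, len(x) - R + 1, R)])

t = np.arange(10 * FS_IN) / FS_IN                   # ten seconds of data
tone = np.sin(2 * np.pi * 10.0 * t)                 # 10 Hz: above output Nyquist

rect_alias = np.abs(decimate_rect(tone)).max()
hann_alias = np.abs(decimate_hann(tone)).max()
print(rect_alias, hann_alias)                       # Hann residue is far smaller
```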

Brett


At 10:02 AM 7/24/2012, you wrote:
>Hi All,
>
>I am looking at my noise and aliasing and it brings up a question on 
>sample rates.
>
>Using a Dataq 154 and their collection software the data rate is set 
>as a reduction from the basic 240 sps either by averaging such as 
>using 40 samples to get 6 sps, by taking the max value of the 40 
>samples or by one of several other options.  My understanding is 
>that the averaging method provides some reduction of noise and 
>aliasing and would be the better option from that respect and also 
>because the sample would be taken at a set time period versus a max 
>value occurring anywhere during the 40 sample period.  Also by 
>sampling at 240 and using a factor of 240 to average and reduce the 
>rate will result in 60 Hz aliasing into the Nyquist bin with little 
>effect on frequencies of interest.
>
>The question then is this.  When I log using AmaSeis and the DI-154 
>at 6 sps, do I get an average, a max of a group of samples, or a 
>specific AmaSeis-controlled sample rate of single 6 sps samples?
>
>Randy
>


__________________________________________________________

Public Seismic Network Mailing List (PSNLIST)

To leave this list email PSNLIST-REQUEST@.............. with 
the body of the message (first line only): unsubscribe
See http://www.seismicnet.com/maillist.html for more information.
