PSN-L Email List Message

Subject: Re: Sample Rate Question
From: GMVoeth HM gmvoeth@...........
Date: Wed, 19 Sep 2012 10:56:44 -0700


Hey guys, I've been thinking of a great way to test your A/D
converter for aliasing.

Create a WAV test file within Audacity using a square wave
that increases slowly in frequency from about Fn to 5x Fn,
then LP filter it, by -96 dBc or more, at the alias point.
Then move the WAV file to an MP3 player that will play
either FLAC or WAV, so you can pump a calibration
tone directly into your A/D or filter or whatever.
Any changes that do not look like the original
signal can be thought of as distortion.
If you keep Fn below 16 kHz, a stereo MP3
file is a faithful reproduction.
If you keep it below 20 kHz, a mono MP3
will be a faithful reproduction.
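
Something like this would do the signal generation outside Audacity,
too (a rough Python/SciPy sketch; the playback rate, the Fn value,
the filter order, and the file name are only examples):

    # Sketch of the aliasing test signal above: a slow square-wave
    # sweep from Fn to 5x Fn, low-pass filtered hard at the alias
    # point, saved as a WAV file.  All parameter values are guesses.
    import numpy as np
    from scipy import signal
    from scipy.io import wavfile

    rate = 44100     # playback rate of the test file (Hz)
    fn = 3000.0      # "Fn": Nyquist of the A/D under test (example)
    dur = 60.0       # sweep length in seconds

    t = np.arange(int(rate * dur)) / rate
    # Square wave sweeping slowly from Fn up to 5 * Fn.
    freq = np.linspace(fn, 5.0 * fn, t.size)
    phase = 2.0 * np.pi * np.cumsum(freq) / rate
    sq = signal.square(phase)

    # Steep low-pass at the alias point; a high-order elliptic filter
    # approximates the "-96 dBc or more" stopband suggested above.
    sos = signal.ellip(8, 0.1, 96.0, fn / (rate / 2.0),
                       btype="low", output="sos")
    filtered = signal.sosfilt(sos, sq)

    # Scale to 16-bit PCM and write the WAV for the MP3/FLAC player.
    pcm = np.int16(filtered / np.max(np.abs(filtered)) * 32767 * 0.9)
    wavfile.write("alias_test.wav", rate, pcm)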

Simply take the bit rate, say 128k, and make it
12.8k; that's the top mono Fn.
If stereo, then Fn/2, about 6.4k,
is the best you can do.
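
In code, that rule of thumb works out like this (the divide-by-ten
factor is just the rule above, not anything from the MP3 spec):

    # Rule of thumb for the top usable Fn in a CBR MP3 test file.
    # The divide-by-ten factor comes from the post, not from the
    # MP3 format itself.
    def top_fn_hz(bitrate_bps: int, stereo: bool = False) -> float:
        fn = bitrate_bps / 10.0   # e.g. 128,000 bps -> 12,800 Hz mono
        return fn / 2.0 if stereo else fn   # stereo halves it

    print(top_fn_hz(128_000))               # 12800.0
    print(top_fn_hz(128_000, stereo=True))  # 6400.0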

This is all CBR MP3, which is the
only way to go as far as I'm concerned.

Seismic frequencies are much lower than audio,
so MP3 files should make good-quality test signals.
There seems to be no lower limit.

I use 100 SPS (0 to 50 Hz) for my seismic recordings,
but if you make a recording an hour or more in length
you can speed it up in a WAV file, say x500,
so a 20 s period becomes 25 Hz and is now audible:
100 SPS becomes 50,000 SPS and 1 Hz becomes 500 Hz.
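
You can do the speed-up without an audio editor just by rewriting the
WAV header rate (a rough Python sketch; "seismogram.wav" is only a
placeholder for your own recording):

    # x500 speed-up trick: rewrite the WAV sample rate so a 100 SPS
    # seismic recording plays back at 50,000 SPS.  A 20 s period
    # (0.05 Hz) comes out at an audible 25 Hz; 1 Hz comes out at 500 Hz.
    from scipy.io import wavfile

    rate, data = wavfile.read("seismogram.wav")   # e.g. rate == 100
    speedup = 500
    wavfile.write("seismogram_x500.wav", rate * speedup, data)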

I don't see any of you trying these things
to look at what's out there.

Just a Thought.
Regards,
geoff




On 7/26/2012 6:34 AM, Brett Nordgren wrote:
> Hi Randy,
>
> I don't know how AmaSeis does its decimation, but I have been told 
> how it should be done.
>
> First, yes, downsampling does reduce noise.  I think that velocity 
> noise would possibly be reduced by the square root of the decimation 
> ratio and noise "power" by the ratio itself, though that needs to be 
> confirmed.
>
> An excellent e-book reference that includes much material on all this 
> stuff is at:
> http://www.analog.com/en/embedded-processing-dsp/learning-and-development/content/scientist_engineers_guide/fca.html 
>
> It goes at things mainly from a real-world perspective, and I have 
> found it very useful while still being quite thorough.
>
> In order to take advantage of decimation, though, you need to be 
> including fractional counts in your result-data.  Some software, after 
> decimating, continues to save the result-data as integer values, so 
> you end up losing most of the resolution improvement you obtained by 
> decimating.
>
> The most obvious scheme for decimation is, as you suggested, using a 
> moving average.  In your example you average the 40 points between 
> -1/12 second and +1/12 second to get the decimated value for T=0.  
> Then you average the 40 points between T=1/12 second and T=3/12 
> seconds to get the decimated value for T=1/6 second.  Etc. This is 
> using what is called a "rectangular" window, as all points are 
> weighted equally in the average.  If your goal is to plot the 
> decimated values over time, this is a very good approach. However, if 
> you are planning to do FFTs and look at the data as a function of 
> frequency, it is pretty lousy.
>
> The better approach is to weight the incoming samples in some manner 
> so that, in the same example above, the sample at 1/6 second is 
> weighted by 1, while samples progressively farther above and below 1/6 
> second are given progressively lower weights as you add them to the 
> total.  Sometimes the windows are even designed to overlap in 
> time.  Finally you have to multiply the sum by a constant, based 
> on what shape of window function you used.  An often-used window 
> function is based on a cosine-squared shape.
>
> This windowing process is somewhat equivalent to putting the input 
> data through a Low-Pass filter and can do a decent job of reducing 
> peaks in the output data at alias frequencies.  Windows like cos^2 do 
> a pretty good job.  Decimating, using a simple moving average, can 
> leave alias peaks in the spectrum.
>
> Brett
>
>
> At 10:02 AM 7/24/2012, you wrote:
>> Hi All,
>>
>> I am looking at my noise and aliasing and it brings up a question on 
>> sample rates.
>>
>> Using a Dataq DI-154 and their collection software, the data rate is 
>> set as a reduction from the basic 240 sps: either by averaging (such 
>> as using 40 samples to get 6 sps), by taking the max value of the 40 
>> samples, or by one of several other options.  My understanding is 
>> that the averaging method provides some reduction of noise and 
>> aliasing and would be the better option in that respect, and also 
>> because the sample would be taken at a set time period versus a max 
>> value occurring anywhere during the 40-sample period.  Also, by 
>> sampling at 240 and using a factor of 240 to average and reduce the 
>> rate, 60 Hz will alias into the Nyquist bin with little effect on 
>> frequencies of interest.
>>
>> The question then is this: when I log using AmaSeis and the DI-154 
>> at 6 sps, do I get an average, a max of a group of samples, or a 
>> specific AmaSeis-controlled sample rate of single 6 sps samples?
>>
>> Randy
>>
>
>
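
PS: As a rough illustration of the decimation schemes Brett describes
above, here is a minimal Python/NumPy sketch comparing a plain
40-point moving average (rectangular window) with a cosine-squared
(Hann) weighted average, taking 240 SPS down to 6 SPS.  This is only
a sketch, not how AmaSeis or the Dataq software actually does it:

    # Two block-decimation schemes: rectangular vs. cosine-squared
    # weights.  Illustration only; not AmaSeis's or Dataq's algorithm.
    import numpy as np

    def decimate_blocks(x, ratio, window):
        """Average non-overlapping blocks of `ratio` samples using
        `window` weights (normalized to sum to 1).  The result stays
        floating point, keeping the fractional counts that carry the
        resolution gained by averaging."""
        w = window / window.sum()
        n = (len(x) // ratio) * ratio
        return x[:n].reshape(-1, ratio) @ w

    ratio = 40                    # 240 SPS -> 6 SPS
    rect = np.ones(ratio)         # rectangular window
    hann = np.sin(np.pi * (np.arange(ratio) + 0.5) / ratio) ** 2

    # White noise in: averaging N samples cuts the noise standard
    # deviation by about sqrt(N), i.e. noise "power" by N itself.
    x = np.random.default_rng(0).normal(size=240 * 600)  # 10 min
    print(x.std(), decimate_blocks(x, ratio, rect).std())  # ~1 vs ~0.16
    print(decimate_blocks(x, ratio, hann).std())

A real anti-alias decimator would typically overlap the windows or,
equivalently, run a proper low-pass FIR filter and then keep every
40th sample, as Brett notes; the non-overlapping blocks above just
keep the sketch short.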


__________________________________________________________

Public Seismic Network Mailing List (PSNLIST)

To leave this list email PSNLIST-REQUEST@.............. with 
the body of the message (first line only): unsubscribe
See http://www.seismicnet.com/maillist.html for more information.
