Compute WCT (Wavelet Coherence Transform)
This script computes the Wavelet Coherence Transform (WCT), a tool used to analyze the correlation between two time series in the time-frequency domain. The script supports parallel processing and interacts with the database to manage job statuses.
Filter Configuration Parameters
* wavelet.wct_freqmin: The lower frequency bound for the WCT (default=0.1)
* wavelet.wct_freqmax: The upper frequency bound for the WCT (default=1.0)
* wavelet.wct_ns: Smoothing parameter in frequency (default=5)
* wavelet.wct_nt: Smoothing parameter in time (default=5)
* wavelet.wct_vpo: Voices per octave; controls the scale resolution of the CWT. Values of 10-12 are standard in the ambient-noise WCT literature and give adequate time-frequency resolution for dv/v measurements. 20 (the old default) is very fine and rarely changes results while tripling computation time. Default reduced from 20 to 12. (default=12)
* wavelet.wct_nptsfreq: Number of frequency points sampled linearly between wct_freqmin and wct_freqmax. Because adjacent wavelet scales are correlated over ~1/vpo octaves, having more points than ~vpo*log2(freqmax/freqmin) adds no independent information. For a 0.1-1.0 Hz band at vpo=12, that is ~40 independent scales; 100 points is a safe over-sample. Default reduced from 300 (overkill). (default=100)
* wavelet.wct_norm: Whether the REF and CCF are normalized before computing the wavelet transform (default=True)
* wavelet.wavelet_type: Type of wavelet function used (default=('Morlet', 6.))
* wavelet.wct_compute_dtt: When True (default), compute dt/t inline during the wavelet step using ALL downstream wavelet_dtt config sets, and write only the compact DTT/ERR/COH output; no intermediate WCT files are stored on disk. This reduces intermediate storage by a factor of ~14000 (a WCT file for a single pair/component/year is ~2.5 GB compressed vs ~175 KB for DTT). Set to False to store the full WCT arrays (WXamp, Wcoh, WXdt) for inspection, or to re-run wavelet_dtt separately with different parameters without recomputing the CWT. (default=True)
* stack.mov_stack: A list of two parameters: the time to "roll" over (default 1 day) and the granularity (step) of the resulting stacked CCFs (default 1 day) for the moving-window stacks. This can be a list of tuples, e.g. (('1d','1d'),('2d','1d')), which corresponds to the MSNoise 1.6 "1,2" syntax. Time deltas can be anything pandas can interpret ("d", "min", "sec", etc.). (default=(('1D','1D')))
* refstack.ref_begin: Start of the REF period. Absolute date (YYYY-MM-DD) OR a negative integer for rolling-index mode (e.g. -5 means 5 windows before the current one). (default=1970-01-01)
* refstack.ref_end: End of the REF period. Absolute date (YYYY-MM-DD) OR a negative integer (e.g. -1 means exclude self). Must be > ref_begin when both are negative. (default=2100-01-01)
* cc.cc_sampling_rate: Sampling rate for the cross-correlation (in Hz) (default=20.0)
* cc.components_to_compute: List (comma-separated) of components to compute between two different stations (default=ZZ)
* cc.components_to_compute_single_station: List (comma-separated) of components within a single station. ZZ would be the autocorrelation of the Z component, while ZE or ZN are the cross-components. Defaults to [], i.e. no single-station computations are done. (default=)
* global.hpc: Is MSNoise going to run on an HPC? (default=N)
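As a sanity check on the vpo / wct_nptsfreq reasoning above, the number of independent scales for the default band can be computed directly, and the mov_stack time deltas can be verified against pandas parsing. This is a quick sketch using only numpy, pandas, and the standard library; it does not call MSNoise itself:

```python
import math
import numpy as np
import pandas as pd

# Independent wavelet scales in a band: ~ vpo * log2(freqmax / freqmin)
freqmin, freqmax, vpo = 0.1, 1.0, 12
n_independent = vpo * math.log2(freqmax / freqmin)
print(round(n_independent))  # ~40 for the default 0.1-1.0 Hz band at vpo=12

# wct_nptsfreq samples this band linearly; 100 points over-samples ~40 scales
freqs = np.linspace(freqmin, freqmax, 100)
print(len(freqs), freqs[0], freqs[-1])

# stack.mov_stack entries are (roll, step) pairs that pandas can interpret
mov_stack = (("1d", "1d"), ("2d", "1d"))
for roll, step in mov_stack:
    print(pd.to_timedelta(roll), pd.to_timedelta(step))
```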
This process is job-based, so it is possible to run several instances in parallel.
To run this step:
$ msnoise cc dtt compute_wct
This step also supports parallel processing/threading:
$ msnoise -t 4 cc dtt compute_wct
will start 4 instances of the code (each delayed by 1 second to avoid database conflicts). This works with both SQLite and MySQL, but be aware that concurrent-access problems can occur with SQLite.
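The job mechanism behind this parallelism can be pictured with a generic sketch. This is not MSNoise's actual schema or claiming code, only an illustration of the pattern: each worker atomically flips one "to do" row of a shared jobs table to "in progress", so several instances never grab the same job.

```python
import sqlite3

# Toy jobs table: flag 'T' = to do, 'I' = in progress (illustrative only;
# MSNoise's real schema and claiming logic live in its own API).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (ref INTEGER PRIMARY KEY, pair TEXT, flag TEXT)")
conn.executemany("INSERT INTO jobs (pair, flag) VALUES (?, 'T')",
                 [("STA1:STA2",), ("STA1:STA3",)])
conn.commit()

def claim_one(conn):
    """Claim one todo job inside a single transaction and return its ref."""
    with conn:  # one transaction per claim: select + flag update together
        row = conn.execute(
            "SELECT ref FROM jobs WHERE flag='T' ORDER BY ref LIMIT 1").fetchone()
        if row is None:
            return None  # nothing left to do
        conn.execute("UPDATE jobs SET flag='I' WHERE ref=?", (row[0],))
        return row[0]

print(claim_one(conn), claim_one(conn), claim_one(conn))  # 1 2 None
```

Note that SQLite locks the whole database file during such a write transaction, which is one reason heavy parallelism is safer with MySQL.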
See also
To read these results in Python, use MSNoiseResult:
from msnoise.results import MSNoiseResult
from msnoise.core.db import connect
db = connect()
r = MSNoiseResult.from_ids(db, ...) # include the steps you need
# then call r.get_wct(...)
See Reading outputs with MSNoiseResult for the full guide and all available methods.