sproc2caps

RecordStream data acquisition plugin that applies filters and/or mathematical expressions to one or more data streams, forming new streams

Description

The sproc2caps plugin requests data from a SeisComP RecordStream [4] in real time or based on time windows, filters the data and/or applies mathematical expressions. The processed data is sent to a CAPS server or to stdout. Streams can be renamed.

Setup

Streams

The plugin reads the streams to subscribe to from a separate stream map file. The location of the file can be either defined in the plugin configuration or given as a command line argument:

streams.map = @DATADIR@/sproc2caps/streams.map

Each line of the stream map file defines n input streams and one output stream. At least one input and one output stream must be given. The last entry in a line is the output stream; all other entries are input streams. Lines beginning with a comment character (#) are ignored.

Note

The map file is required even if the stream codes remain the same. Input streams without an entry in the map file are not processed.

Example map file:

#Input 1      Input 2       ... Output
XX.TEST1..HHZ XX.TEST2..HHZ ... XX.TEST3..HHZ

Each stream entry may contain additional stream options, e.g. for data filtering. Options are appended to the stream code and indicated by “?”.

The following stream options are supported:

Name    Description                          Example
------  -----------------------------------  ------------------
filter  Filter string                        filter=BW(4,0.7,2)
unit    Output unit                          unit=cm/s
expr    Expression to be used (output only)  expr=x1+x2

Examples of streams with stream options:

XX.TEST1..HHZ?filter=BW_HP(4,0.1)
XX.TEST2..HHZ?filter=BW_HP(4,0.1)
XX.TEST3..HHZ?filter=BW(4,0.7,2)?unit=cm/s

For the given example the plugin assigns the following access variables to the streams. The access variables can be used in the mathematical expression string. The unit option only provides an additional description of the stream; it does not modify the data.

Access variables for N input streams:

          input 1        input 2        ...  input N
--------  -------------  -------------  ---  -------------
stream    XX.TEST1..HHZ  XX.TEST2..HHZ  ...  XX.TESTN..HHZ
variable  x1             x2             ...  xN

When the mathematical expression is evaluated, each xi is replaced with the sample of the corresponding stream at the sample time. The maximum number of input streams is 3.
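The sample-wise evaluation model can be sketched as follows. This is only an illustration of the semantics, not the plugin's actual implementation (sproc2caps uses a C++ expression library); stream codes and sample values are hypothetical:

```python
# Sketch: apply a per-sample expression such as "x1 + x2" to
# time-aligned input streams. Each xi is bound to the i-th stream's
# sample at the same sample time (here: the same index).

def apply_expression(expr, *streams):
    """Evaluate `expr` sample by sample over the aligned input streams."""
    out = []
    for samples in zip(*streams):  # align samples across streams
        env = {f"x{i + 1}": s for i, s in enumerate(samples)}
        out.append(eval(expr, {"__builtins__": {}}, env))
    return out

x1 = [1.0, 2.0, 3.0]  # e.g. XX.TEST1..HHZ samples (hypothetical)
x2 = [4.0, 5.0, 6.0]  # e.g. XX.TEST2..HHZ samples (hypothetical)
print(apply_expression("x1 + x2", x1, x2))  # → [5.0, 7.0, 9.0]
```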

Filtering

Input data can be filtered before mathematical expressions are applied. The filter grammar and all filters [2] known from SeisComP can be used. By default, input data remain unfiltered.

Example for setting the filter in the map file:

XX.TEST1..HHZ?filter=BW(4,0.7,2) XX.TEST2..HHZ XX.TEST3..HHZ

Expressions

The sproc2caps plugin uses the C++ Mathematical Expression Library to evaluate mathematical expressions. The library supports a wide range of mathematical expressions; refer to the library documentation for the complete feature list. The number of input variables depends on the number of input streams. The variables are numbered consecutively from 1 to n: x1, x2, …, xn.

Example of how to multiply three streams:

  • via command line:

    --expr="x1*x2*x3"

  • via configuration:

    streams.expr = x1*x2*x3

  • via stream options:

    XX.TEST1..HHZ XX.TEST2..HHZ XX.TEST3..HHZ XX.TESTOUT..HHZ?expr=x1*x2*x3
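Stream options can also be combined within one map-file line, as shown in the filter examples above. As a sketch (the stream codes are placeholders), a line computing the difference of two high-pass filtered streams could look like:

    XX.TEST1..HHZ?filter=BW_HP(4,0.1) XX.TEST2..HHZ?filter=BW_HP(4,0.1) XX.TESTOUT..HHZ?expr=x1-x2?unit=cm/s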

Rename Streams

In addition to applying mathematical expressions to streams, the plugin can also be used to rename streams. The following example shows how to map the streams GE.APE..BHE and GE.BKNI..BHE to new stream IDs and store the output streams in the same CAPS server:

  1. Open the plugin configuration and create a clone of the input data stream with:

    streams.expr = x1
    
  2. Create the mapping file @DATADIR@/sproc2caps/streams.map with the following content:

    # Input      Output
    GE.APE..BHE  AB.APE..BHE
    GE.BKNI..BHE GE.BKNI2..BHE
    
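With this configuration in place, the plugin can then be started. A sketch, assuming a local CAPS server with its data port on localhost:18002 and its plugin port on localhost:18003:

    sproc2caps -I "caps://localhost:18002" -a localhost:18003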

Time windows

Set the time window using --begin and --end to define the start and end times, respectively. When no time window is given, real-time input data are considered.

Examples

  1. To map waveform data for a specific time window reading from a local CAPS server on localhost:18002 and sending to the plugin port of the same CAPS server on localhost:18003 run:

    sproc2caps --begin "2019-01-01 00:00:00" --end "2019-01-01 01:00:00" -I "caps://localhost:18002" -a localhost:18003
    

    This will create duplicate data on the CAPS server if the map file renames the streams. To remove the original streams:

    1. Configure caps to keep the original data for 0 days

    2. Restart or reload caps

  2. Read real-time data from an external SeedLink server as with [19], but applying the mapping:

    sproc2caps -I "slink://host:18000" -a localhost:18003
    
  3. Read data from the file data.mseed, resampled to a 10 Hz sample rate by the RecordStream, and write the resulting data to stdout. With --stop the processing stops when the data has been read completely:

    sproc2caps -I dec://file?rate=10/data.mseed -d localhost --gain-in 1 --gain-out 1 --dump-packets --mseed --begin "2000-01-01 00:00:00" --stop > test.mseed
    

    You may join the command with capstool [8] and scmssort [14]:

    echo "2024,01,01,00,00,00 2024,01,01,00,10,00 * * * *" | capstool -H localhost |\
    sproc2caps -I dec://file?rate=10/- -d localhost --gain-in 1 --gain-out 1 --dump-packets --mseed --begin "2000-01-01 00:00:00" --stop |\
    scmssort -E > test.mseed
    

    Note

    A similar action may be executed using rs2caps.

Module Configuration

etc/defaults/global.cfg
etc/defaults/sproc2caps.cfg
etc/global.cfg
etc/sproc2caps.cfg
~/.seiscomp/global.cfg
~/.seiscomp/sproc2caps.cfg

sproc2caps inherits global options.

Note

Modules/plugins may require a license file. The default path to license files is @DATADIR@/licenses/ which can be overridden by global configuration of the parameter gempa.licensePath. Example:

gempa.licensePath = @CONFIGDIR@/licenses

journal.file

Default: @ROOTDIR@/var/run/sproc2caps/journal

Type: string

File to store stream states

journal.flush

Default: 10

Unit: s

Type: uint

Flush stream states to disk every n seconds

journal.waitForAck

Default: 60

Unit: s

Type: uint

Wait when a sync has been forced, up to n seconds

journal.waitForLastAck

Default: 5

Unit: s

Type: uint

Wait on shutdown to receive acknowledgement messages, up to n seconds

Note

streams.* Configure operations applied to input streams and the stream mapping.

streams.begin

Type: string

Start time of data time window, default ‘GMT’

streams.end

Type: string

End time of data time window

streams.filter

Default: self

Type: string

Sets the input filter

streams.expr

Default: x1 + x2

Type: string

Sets the mathematical expression

streams.map

Default: @DATADIR@/sproc2caps/streams.map

Type: string

Absolute path to the stream map file. Each line holds n input streams and one output stream. Example:

CX.PB11..BHZ CX.PB11..BHZ

CX.PB11..BHZ CX.PB07..BHZ CX.PB11..BBZ

Note

output.* Configure the data output.

output.address

Default: localhost:18003

Type: string

Data output URL [[caps|capss]://][user:pass@]host[:port]. This parameter supersedes the host and port parameters of previous versions and takes precedence.

output.host

Default: localhost

Type: string

Data output host. Deprecated: Use output.address instead.

output.port

Default: 18003

Type: int

Data output port. Deprecated: Use output.address instead.

output.bufferSize

Default: 1048576

Unit: B

Type: uint

Size (bytes) of the packet buffer

output.backfillingBufferSize

Default: 180

Unit: s

Type: uint

Length of backfilling buffer. Whenever a gap is detected, records will be held in a buffer and not sent out. Records are flushed from front to back if the buffer size is exceeded.

output.mseed.enable

Default: true

Type: boolean

Enable on-the-fly miniSEED encoding. If the encoder does not support the input type of a packet, it will be forwarded. Re-encoding of miniSEED packets is not supported.

output.mseed.encoding

Default: Steim2

Type: string

miniSEED encoding to use: Uncompressed, Steim1 or Steim2.

statusLog.enable

Default: false

Type: boolean

Log status information, e.g. max bytes buffered

statusLog.flush

Default: 10

Unit: s

Type: uint

Flush status every n seconds to disk

Command-Line Options

Generic

-h, --help

Show help message.

-V, --version

Show version information.

-D, --daemon

Run as daemon. This means the application will fork itself and doesn’t need to be started with &.

Verbosity

--verbosity arg

Verbosity level [0..4]. 0:quiet, 1:error, 2:warning, 3:info, 4:debug.

-v, --v

Increase verbosity level (may be repeated, e.g. -vv).

-q, --quiet

Quiet mode: no logging output.

-s, --syslog

Use syslog logging backend. The output usually goes to /var/log/messages.

-l, --lockfile arg

Path to lock file.

--console arg

Send log output to stdout.

--debug

Execute in debug mode. Equivalent to --verbosity=4 --console=1 .

--log-file arg

Use alternative log file.

Records

--record-driver-list

List all supported record stream drivers.

-I, --record-url arg

The recordstream source URL, format: [service://]location[#type]. "service" is the name of the recordstream driver which can be queried with "--record-driver-list". If "service" is not given, "file://" is used.

--record-file arg

Specify a file as record source.

--record-type arg

Specify a type for the records being read.

Output

-O, --output arg

Overrides configuration parameter output.address.

This is the CAPS server which shall receive the data.

--agent arg

Sets the agent string. Allows the server to identify the application that sends data.

-b, --buffer-size arg

Size (bytes) of the journal buffer. If the value is exceeded, a synchronization of the journal is forced.

--backfilling arg

Default: 0

Buffer size in seconds for backfilling gaps.

--mseed

Enable on-the-fly miniSEED encoding. If the encoder does not support the input type of a packet, it will be forwarded. Re-encoding of miniSEED packets is not supported.

--encoding arg

miniSEED encoding to use: Uncompressed, Steim1 or Steim2.

--rec-len arg

miniSEED record length expressed as a power of 2. A 512 byte record would be 9.

--max-future-endtime arg

Maximum allowed relative end time for packets. If the packet end time is greater than the current time plus this value, the packet will be discarded. By default this value is set to 120 seconds.

--dump-packets

Dump packets to stdout.

Journal

-j, --journal arg

File to store stream states. Use an empty string to log to stdout.

--flush arg

Flush stream states to disk every n seconds.

--wait-for-ack arg

Wait when a sync has been forced, up to n seconds.

-w, --wait-for-last-ack arg

Wait on shutdown to receive acknowledgement messages, up to the given number of seconds.

Status

--status-log

Log status information, e.g. max bytes buffered.

--status-flush arg

Flush status every n seconds to disk.

--stop

Stop processing when data acquisition is finished. The ‘finished’ signal depends on the data source.

Streams

--begin arg

Start time of data time window.

--end arg

End time of data time window.

--map arg

Stream map file.

--expr arg

Mathematical expression to be applied.

Test

--gain-in arg

Gain that is applied to the input values.

--gain-out arg

Gain that is applied to the output values.