diff --git a/guide/guide.adoc b/guide/guide.adoc index d1fb757..b946cdd 100644 --- a/guide/guide.adoc +++ b/guide/guide.adoc @@ -2,9 +2,9 @@ :author: Philipp Engel :copyright: CC BY 4.0 :orgname: DABAMOS -:revnumber: 0.9.5 +:revnumber: 0.9.6 :lang: en -:docdate: 2024-06-19 +:docdate: 2024-07-04 :doctype: book :url-org: https://www.dabamos.de/ :url-project: {url-org}dmpack @@ -13,7 +13,7 @@ :source-highlighter: pygments :pygments-style: lovelace :toc: left -:toclevels: 2 +:toclevels: 4 :xrefstyle: short :table-caption!: @@ -23,46 +23,62 @@ The *Deformation Monitoring Package* (*DMPACK*) is a free and open source software package for sensor control and automated time series processing in -geodesy and geotechnics. The package consists of a library _libdmpack_ and -additional programs based on it which serve as a reference implementation of -solutions to various problems in deformation monitoring, such as: - -* sensor control -* sensor data parsing and processing -* database access -* remote procedure calls -* data synchronisation and export -* spatial transformations -* time series analysis -* plotting and reporting -* web-based data access -* distributed logging -* MQTT connectivity -* Leica GeoCOM API -* scripting -* e-mail +engineering geodesy and geotechnics. The package consists of a library +_libdmpack_ and additional programs based on it which serve as a reference +implementation of solutions to various problems in deformation monitoring, such +as: + +* sensor control, +* sensor data parsing and processing, +* database access, +* remote procedure calls, +* data synchronisation and export, +* spatial transformations, +* time series analysis, +* client status messages, +* distributed logging, +* plotting and reporting, +* web-based data access, +* MQTT connectivity, +* Leica GeoCOM API, +* scripting, +* e-mail. 
DMPACK is a scientific monitoring system developed for automated control measurements of buildings, infrastructure, terrain, geodetic nets, and other -objects. The software runs on sensor nodes, usually industrial embedded systems -or single-board computers, and obtains observation data from arbitrary sensors, -like total stations, digital levels, inclinometers, weather stations, or GNSS -receivers. The raw sensor data is then processed, stored, and optionally -transmitted to a server. The software package may be used to monitor objects -like: - -* bridges, tunnels, dams -* motorways, railways -* construction sites, mining areas -* landslides, cliffs, glaciers -* churches, monasteries, and other heritage buildings - -DMPACK is built around the relational SQLite database for time series and log -storage on client and server. The server component is optional. It is possible -to run DMPACK on clients only, without data distribution. The client-side -message passing is based on POSIX message queues and POSIX semaphores. - -Currently, only 64-bit Linux and FreeBSD are supported as operating systems. +objects through autonomous sensor networks in the IoT. The programs for sensor +data collection are intended to be run on client hardware connected to the +Internet through LTE/5G, usually industrial embedded systems or single-board +computers. + +Observation data is periodically collected by the clients from arbitrary +sensors, like total stations, digital levels, inclinometers, weather stations, +or GNSS receivers. The raw sensor responses are structured, post-processed, +locally stored, and transmitted to a central monitoring server that provides an +HTTP-RPC API for client–server communication. + +The software package can be used to monitor objects like: + +* bridges, tunnels, dams, +* roads, railways, +* construction sites, mining areas, +* slopes, landslides, cliffs, glaciers, +* churches, monasteries, and other heritage buildings. 
+ +DMPACK is written in Fortran 2018 and integrates the relational SQLite database +for time series and log storage on client and server. The server component is +optional. If preferred, the data distribution may be omitted for local +monitoring only. + +The software package relies on POSIX standards for system calls and process +management. The client-side message passing is based on POSIX message queues and +POSIX semaphores. Currently, only 64-bit Linux (_glibc_) and FreeBSD are +supported as operating systems. + +The source code of DMPACK is released under the ISC licence that is functionally +equivalent to the BSD 2-Clause and MIT licences. The source code and the +documentation are available online. See the project website for further +information: * link:https://www.dabamos.de/[Project Website] * link:https://www.dabamos.de/dmpack/guide/[User Guide] @@ -132,7 +148,7 @@ DMPACK has the following requirements: * Linux or FreeBSD operating system * 64-bit platform (x86-64, AArch64) -* Fortran 2018 and ANSI C compiler (GCC, Intel oneAPI) +* Fortran 2018 and ANSI C compiler (GNU, Intel) Additional dependencies have to be present to build and run the software of this package: @@ -141,49 +157,22 @@ package: * link:https://gnuplot.sourceforge.net/[Gnuplot] * link:https://www.hdfgroup.org/solutions/hdf5/[HDF5] * link:https://www.netlib.org/lapack/[LAPACK] -* link:https://curl.se/libcurl/[libcurl] (≥ 8.5.0) +* link:https://curl.se/libcurl/[libcurl] (≥ 8.0.0) +* link:https://libmodbus.org/[libmodbus] * link:https://www.lua.org/[Lua 5.4] * link:https://www.pcre.org/[PCRE2] * link:https://www.sqlite.org/[SQLite 3] (≥ 3.39.0) * link:https://www.zlib.net/[zlib] * link:https://facebook.github.io/zstd/[zstd] (≥ 1.5.5) -The <> require a compatible web server: +The <> require a compatible web server, like: * link:https://www.lighttpd.net/[lighttpd] +* link:https://httpd.apache.org/[Apache HTTP Server] -To generate the man pages, the User Guide, and the source code 
documentation, -you will also need: - -* link:https://asciidoctor.org/[AsciiDoctor], - link:https://pygments.org/[Pygments], and - link:https://rubygems.org/gems/pygments.rb/versions/2.2.0[pygments.rb] -* link:https://github.com/Fortran-FOSS-Programmers/ford[FORD] - -DMPACK depends on the following interface libraries: - -* link:https://github.com/interkosmos/fortran-curl[fortran-curl] -* link:https://github.com/interkosmos/fortran-lua54[fortran-lua54] -* link:https://github.com/interkosmos/fortran-pcre2[fortran-pcre2] -* link:https://github.com/interkosmos/fortran-sqlite3[fortran-sqlite3] -* link:https://github.com/interkosmos/fortran-unix[fortran-unix] -* link:https://github.com/interkosmos/fortran-zlib[fortran-zlib] -* link:https://github.com/interkosmos/fortran-zstd[fortran-zstd] - -If the repository is cloned recursively through Git, or if the project is built -using FPM, the submodules will be downloaded automatically. Without Git or FPM, -this step has to be done manually by executing `fetchvendor.sh`, for example: - -.... -$ curl -L -s -o master.zip https://github.com/dabamos/dmpack/archive/refs/heads/master.zip -$ unzip master.zip -$ cd dmpack-master/ -$ sh fetchvendor.sh -$ make [freebsd|linux] -$ make install -.... - -The shell script `fetchvendor.sh` requires _curl(1)_ and _unzip(1)_. +DMPACK depends on additional interface libraries. If the repository is cloned +recursively with Git, or if the project is built using FPM, the submodules will +be downloaded automatically. Otherwise, run script `fetchvendor.sh` first. .Paths used by default [[requirements-paths]] [cols="3,6"] @@ -203,7 +192,7 @@ The shell script `fetchvendor.sh` requires _curl(1)_ and _unzip(1)_. == Installation This section describes the steps to build the DMPACK library and programs from -source, either with POSIX Make or the +source, either with POSIX make or the link:https://fpm.fortran-lang.org/[Fortran Package Manager] (FPM). 
At the moment, support for the Fortran Package Manager is experimental, and using GNU/BSD Make is the recommended way. Display the available build targets of the @@ -219,119 +208,74 @@ Or, output the selected build options: $ make options PREFIX=/opt .... -See section <> on how to configure the operating system +See section <> on how to configure the operating system after the installation. The shared libraries `libgcc.so`, `libgfortran.so`, and `libquadmath.so` must be present on the target system if the DMPACK programs have been compiled with GNU Fortran. -=== FreeBSD [[freebsd]] +=== Linux [[linux]] -First, install the build and run-time dependencies: +On Debian, install the compilers and the build environment first: .... -$ doas pkg install archivers/zstd databases/sqlite3 devel/git devel/pcre2 \ - devel/pkgconf ftp/curl lang/gcc lang/lua54 math/gnuplot math/lapack \ - science/hdf5 www/fcgi +$ sudo apt-get install gcc gfortran git make pkg-config .... -Instead of `math/gnuplot`, you may want to install package `math/gnuplot-lite` -which does not depend on X11 (but lacks the raster graphic terminals). The web -applications additionally require a web server: +The third-party dependencies have to be installed with development headers: .... -$ doas pkg install www/lighttpd +$ sudo apt-get install --no-install-recommends libblas-dev liblapack-dev \ + curl libcurl4 libcurl4-openssl-dev libfcgi-bin libfcgi-dev libmodbus5 \ + libmodbus-dev libhdf5-103-1 libhdf5-dev lua5.4 liblua5.4 liblua5.4-dev \ + libpcre2-8-0 libpcre2-dev sqlite3 libsqlite3-0 libsqlite3-dev zlib1g \ + zlib1g-dev libzstd1 libzstd-dev gnuplot .... -Optionally, install Pygments and AsciiDoctor to generate the man pages and the -User Guide: +Instead of package `gnuplot`, you may prefer the no-X11 flavour `gnuplot-nox` if +raster graphic formats are not required (essentially, SVG output only). The +SQLite 3 package version must be ≥ 3.39.0. 
If the version in the package +repository is too old, like on Ubuntu 22.04 LTS, you can also +<>. Depending on the Linux +distribution, the names of the HDF5 and Lua packages may differ. -.... -$ doas pkg install devel/rubygem-pygments.rb textproc/rubygem-asciidoctor -.... +[NOTE] +==== +If Intel oneAPI is used, it is necessary to build HDF5 from source, as the +versions in the Linux package repositories have been compiled with GNU Fortran +and are therefore incompatible. See section <> for +hints regarding the build process. +==== -==== Make [[freebsd-make]] +==== Make [[linux-make]] -The repository has to be cloned recursively using command-line argument -`--recursive`. Execute the Makefile with build target `freebsd`: +Clone the DMPACK repository with Git, using command-line argument `--recursive`: .... $ git clone --depth 1 --recursive https://github.com/dabamos/dmpack $ cd dmpack/ -$ make freebsd .... -Install the library and all programs system-wide to `/usr/local`: - -.... -$ doas make install -.... - -You can change the installation prefix with argument `PREFIX`. To install to -directory `/opt` instead, run: - -.... -$ doas make install PREFIX=/opt -.... - -In this case, path `/opt/bin` must be included in environment variable `PATH`. - -==== Fortran Package Manager [[freebsd-fpm]] - -Either clone the repository with Git, or download the -link:https://github.com/dabamos/dmpack/archive/refs/heads/master.zip[archive of -the master branch]. Then, run: - -.... -$ export FFLAGS="-D__FreeBSD__ -I/usr/local/include -ffree-line-length-0" -$ fpm test --flag "${FFLAGS}" -$ fpm build --profile release --flag "${FFLAGS}" -$ fpm install -.... - -The Fortran Package Manager will fetch all third-party dependencies -automatically, but the configuration and shared files have to be installed -manually. The library and programs will be installed to `~/.local` by default. 
- -=== Linux [[linux]] - -On Debian, install GCC, GNU Fortran, and the build environment first: +If Git is not available, download the archive of the master branch instead and +run the shell script `fetchvendor.sh` to fetch the missing submodules: .... -$ sudo apt install gcc gfortran git make pkg-config +$ curl -L -s -o dmpack.zip https://github.com/dabamos/dmpack/archive/refs/heads/master.zip +$ unzip dmpack.zip +$ cd dmpack-master/ +$ sh fetchvendor.sh .... -The third-party dependencies have to be installed with development headers: +Then, execute build target `linux` of the Makefile to compile the source: .... -$ sudo apt install --no-install-recommends libblas-dev liblapack-dev \ - curl libcurl4 libcurl4-openssl-dev libfcgi-bin libfcgi-dev gnuplot \ - libhdf5 libhdf5-dev lua5.4 liblua5.4 liblua5.4-dev libpcre2-8-0 \ - libpcre2-dev sqlite3 libsqlite3-dev zlib1g zlib1g-dev libzstd1 \ - libzstd-dev +$ make linux .... -Instead of package `gnuplot`, you may prefer the no-X11 flavour `gnuplot-nox` if -raster graphic formats are not required (essentially, SVG output only). The -SQLite 3 package version must be ≥ 3.39.0. Depending on the package repository, -the names of the HDF5 and Lua packages may differ. - -[NOTE] -==== -If Intel oneAPI is used instead of GCC to compile DMPACK, it is necessary to -build HDF5 from source, as the versions in the Linux package repositories have -been compiled with GNU Fortran and are therefore incompatible. See section -<> for hints regarding the build process. -==== - -==== Make [[linux-make]] - -Clone the DMPACK repository using command-line argument `--recursive`, and -execute build target `linux` of the Makefile: +On a 64-bit ARM platform, like those of the Raspberry Pi 3/4/5, select build +target `linux_aarch64` instead: .... -$ git clone --depth 1 --recursive https://github.com/dabamos/dmpack -$ cd dmpack/ -$ make linux +$ make linux_aarch64 .... 
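As noted above, the SQLite 3 package must be version 3.39.0 or newer. The check can be scripted before installing; the following is a sketch only, where the `installed` value is a stand-in for the first field of `sqlite3 --version` output:

```shell
# Compare an installed SQLite version against the required minimum,
# using sort -V (natural version ordering, GNU coreutils).
required="3.39.0"
installed="3.37.2"   # stand-in; in practice: sqlite3 --version | awk '{print $1}'

lowest="$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n 1)"

if [ "$lowest" = "$required" ]; then
    echo "SQLite $installed satisfies the requirement"
else
    echo "SQLite $installed is too old, build SQLite 3 from source"
fi
```

With the stand-in value above, the check reports that 3.37.2 is too old, which matches the Ubuntu 22.04 LTS case mentioned earlier.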
Install the DMPACK libraries and programs system-wide to `/usr/local`: @@ -340,13 +284,14 @@ Install the DMPACK libraries and programs system-wide to `/usr/local`: $ sudo make install .... -Or, to install to directory `/opt` instead, run: +Or, to install to directory `/opt`, run: .... $ sudo make install PREFIX=/opt .... -Path `/opt/bin` must be added to the global `PATH` environment variable. +Path `/opt/bin` must be added to the global `PATH` environment variable to run +DMPACK programs from the command-line. [NOTE] .Custom SQLite 3 @@ -356,20 +301,20 @@ If the SQLite 3 library has been built from source and installed to library `libsqlite3.so`: .... -$ make OS=linux PREFIX=/usr LIBSQLITE3="-L/usr/local/lib -lsqlite3" +$ make build OS=linux LIBSQLITE3="-L/usr/local/lib -lsqlite3" .... -If more than one library is installed, specify the path with linker flag -`-Wl,-rpath=/usr/local/lib`. +If more than one library is installed, additionally specify the path with linker +flag `-Wl,-rpath=/usr/local/lib`. ==== [NOTE] .Intel oneAPI Compilers ==== -If Intel oneAPI is used instead of GCC, run: +If Intel oneAPI is used instead of the GNU Compiler Collection, run: .... -$ make CC=icx FC=ifx PPFLAGS= \ +$ make build OS=linux CC=icx FC=ifx PPFLAGS= \ CFLAGS="-mtune=native -O2 -fpic" FFLAGS="-mtune=native -O2 -fpic" \ LDFLAGS="-module ./include -I./include" \ INCHDF5="-I/opt/include" \ @@ -381,34 +326,129 @@ the HDF5 modules files to `/opt/include/`. Change the paths to the actual locations. ==== -==== Fortran Package Manager [[linux-fpm]] +==== FPM [[linux-fpm]] To build DMPACK using the Fortran Package Manager, change to the cloned or downloaded repository, and run: .... $ export FFLAGS="-D__linux__ `pkg-config --cflags hdf5` -ffree-line-length-0" -$ fpm test --flag "${FFLAGS}" -$ fpm build --profile release --flag "${FFLAGS}" +$ fpm test --flag "$FFLAGS" +$ fpm build --profile release --flag "$FFLAGS" $ fpm install .... 
The library and programs will be installed to directory `~/.local` by default. If the compilation fails with an error message stating that `-llua-5.4` cannot -be found, update the build manifests first: +be found, update the library names in the build manifests: .... $ sed -i "s/lua-5/lua5/g" fpm.toml $ sed -i "s/lua-5/lua5/g" build/dependencies/fortran-lua54/fpm.toml .... +=== FreeBSD [[freebsd]] + +First, install the build and run-time dependencies: + +.... +$ doas pkg install archivers/zstd comms/libmodbus databases/sqlite3 devel/git \ + devel/pcre2 devel/pkgconf ftp/curl lang/gcc lang/lua54 math/gnuplot math/lapack \ + science/hdf5 www/fcgi +.... + +Instead of `math/gnuplot`, you may want to install package `math/gnuplot-lite` +which does not depend on X11 (but lacks the raster graphic terminals). The web +applications additionally require a web server: + +.... +$ doas pkg install www/lighttpd +.... + +Optionally, install Pygments and AsciiDoctor to generate the man pages and the +User Guide from source: + +.... +$ doas pkg install devel/rubygem-pygments.rb textproc/rubygem-asciidoctor +.... + +==== Make [[freebsd-make]] + +The repository has to be cloned recursively using command-line argument +`--recursive`: + +.... +$ git clone --depth 1 --recursive https://github.com/dabamos/dmpack +$ cd dmpack/ +.... + +If Git is not available, download the archive of the master branch and run the +shell script `fetchvendor.sh` to fetch the submodules: + +.... +$ curl -L -s -o dmpack.zip https://github.com/dabamos/dmpack/archive/refs/heads/master.zip +$ unzip dmpack.zip +$ cd dmpack-master/ +$ sh fetchvendor.sh +.... + +Execute the Makefile with build target `freebsd`: + +.... +$ make freebsd +.... + +Install the library and all programs system-wide to `/usr/local`: + +.... +$ doas make install +.... + +You can change the installation prefix with argument `PREFIX`. To install to +directory `/opt` instead, run: + +.... +$ doas make install PREFIX=/opt +.... 
+ +In this case, path `/opt/bin` must be included in the `PATH` environment variable. + +==== FPM [[freebsd-fpm]] + +Either clone the repository with Git, or download the +link:https://github.com/dabamos/dmpack/archive/refs/heads/master.zip[archive of +the master branch]. Then, run: + +.... +$ export FFLAGS="-D__FreeBSD__ -I/usr/local/include -ffree-line-length-0" +$ fpm test --flag "$FFLAGS" +$ fpm build --profile release --flag "$FFLAGS" +$ fpm install +.... + +The Fortran Package Manager will fetch all third-party dependencies +automatically, but the configuration and shared files have to be installed +manually. The library and programs will be installed to `~/.local` by default. + +=== Updates + +Update the cloned source code repository and its submodules with Git: + +.... +$ git pull +$ git submodule update --remote +$ make purge +$ make [freebsd|linux|linux_aarch64] +$ sudo make install PREFIX=/opt +.... + == Deformation Monitoring Entities [[entities]] The data structures of DMPACK are based on the following entities. The date and time format used internally is a 32-characters long ISO 8601 time stamp in microsecond resolution, with time separator `T` and mandatory GMT offset, for -example, `1970-01-01T00:00:00.000000+00:00`. A human-readable format -`1970-01-01 00:00:00 +00:00` may be used where reasonable. +example, `1970-01-01T00:00:00.000000+00:00`. The human-readable output format +`1970-01-01 00:00:00 +00:00` is used where reasonable. === Observation Entities @@ -465,8 +505,7 @@ read, add, update, or delete nodes, sensors, and targets. logs from database to file, either in CSV, JSON, or JSON Lines format. <>:: Imports nodes, sensors, targets, observations, and logs from CSV file into database. -<>:: Creates and initialises SQLite observation, log, and beat -databases. +<>:: Creates and initialises observation, log, and beat databases. <>:: Stores logs received from POSIX message queue in a SQLite database. @@ -542,13 +581,15 @@ client and server. 
Requires a web server and _gnuplot(1)_. == Programs -Some programs read settings from an optional or mandatory configuration file. -Examples of configuration files are provided in directory -`/usr/local/etc/dmpack/`. The configuration file format is based on Lua tables -and is scriptable. Comments in the configuration file start with `--`. - -You may want to enable Lua syntax highlighting in your editor (for instance, -`set syntax=lua` in Vim), or use the file ending `.lua` instead of `.conf`. +This section contains descriptions of all DMPACK programs with their +respective command-line arguments. Some programs read settings from an optional +or mandatory configuration file. Examples are provided in directory +`/usr/local/etc/dmpack/` to be used as templates. The files are ordinary Lua +scripts, i.e., you can add Lua control structures for complex configurations or +access the <> of DMPACK. Set the language in your editor to Lua to +enable syntax highlighting (for instance, `set syntax=lua` in Vim), or use file +ending `.lua` instead of `.conf`. The set-up of the <> is outlined in the next section. === dmapi [[dmapi]] @@ -559,17 +600,15 @@ FastCGI spawner. It is recommended to use _lighttpd(1)_. The *dmapi* service offers endpoints for clients to insert beats, logs, and observations into the local SQLite database, and to request data in CSV or JSON format. Authentication and encryption are independent from *dmapi* and have to -be provided by the web server. - -All POST data has to be serialised in Fortran 95 Namelist format, with optional -link:http://www.zlib.net/[deflate] or link:http://www.zstd.net/[zstd] -compression. +be provided by the web server. All POST data has to be serialised in Fortran 95 +Namelist format, with optional link:http://www.zlib.net/[deflate] or +link:http://www.zstd.net/[zstd] compression. 
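To illustrate the transport format, the following sketches a client-side upload of such a payload. The Namelist group and field names are invented for illustration only (the actual record layouts are defined by the DMPACK library), and the host, credentials, and compression header are placeholders:

```shell
# Write an illustrative Namelist payload. The real DMPACK record layout
# differs; this only shows the serialisation format on the wire.
cat > beat.nml << 'EOF'
&BEAT
NODE_ID = "node-1"
/
EOF

# Compress and POST the payload (commented out, since host, credentials,
# and header are placeholders; the beat endpoint is /api/v1/beat):
#   zstd -q beat.nml
#   curl --data-binary @beat.nml.zst -H "Content-Encoding: zstd" \
#       -u node-1:secret http://example.com/api/v1/beat

head -n 1 beat.nml
```

Note that with HTTP Basic Auth enabled, the user name of the client has to match the node id of the payload, as described below.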
If HTTP Basic Auth is enabled, the sensor id of each beat, log, node, sensor, and observation sent to the HTTP-RPC service must match the name of the authenticated user. For example, to store an observation of a node with the id -`node-1`, the HTTP Basic Auth user name must be `node-1` as well. If the -observation is sent by any other user, it will be rejected (HTTP 401). +`node-1`, the HTTP Basic Auth user name of the client must be `node-1` as well. +If the observation is sent by any other user, it will be rejected (HTTP 401). .Environment variables of _dmapi(1)_ [[dmapi-env]] @@ -585,7 +624,7 @@ observation is sent by any other user, it will be rejected (HTTP 401). The web application is configured through environment variables. The web server or FastCGI spawner must be able to pass environment variables to *dmapi*. See -<> for an example configuration. +section <> for a basic _lighttpd(1)_ configuration. The service accepts HTTP GET and POST requests. Section <> gives an overview of the available endpoints. The response format depends on the MIME @@ -612,9 +651,10 @@ $ sqlite3 ".backup ''" *dmbackup* does not replace existing backup databases. +[discrete] ==== Command-Line Options -[cols="3,1,1,7"] +[cols="3,1,1,6"] |=== | Option | Short | Default | Description @@ -627,6 +667,7 @@ $ sqlite3 ".backup ''" | `--wal` | `-W` | off | Enable WAL journal for backup database. |=== +[discrete] ==== Examples Create an online backup of an observation database: @@ -637,40 +678,42 @@ $ dmbackup --database /var/dmpack/observ.sqlite --backup /tmp/observ.sqlite === dmbeat [[dmbeat]] -The *dmbeat* program is a heartbeat emitter that sends <> -or handshakes via HTTP POST to a remote <> service. The heartbeat -messages include time stamp, system uptime, and last connection error. The +The *dmbeat* program is a heartbeat emitter that sends +<> via HTTP POST to a remote <> service. +The heartbeats include time stamp, system uptime, and last connection error. 
The server may inspect this data to check if a client is still running and has -network access. The RPC endpoint is expected at +network access. The RPC endpoint on the server is expected at URL `[http|https]://:/api/v1/beat`. Passing the server credentials via the command-line arguments `--username` and `--password` is insecure on multi-user operating systems and only recommended for testing. +[discrete] ==== Command-Line Options -[cols="3,1,1,7"] +[cols="3,1,1,6"] |=== | Option | Short | Default | Description | `--config _file_` | `-c` | – | Path to configuration file. -| `--count _n_` | `-C` | 0 | Maximum number of heartbeats to send (unlimited if `0`). -| `--debug` | `-D` | off | Forward log messages of level _debug_ via IPC (if logger is set). +| `--count _n_` | `-C` | 0 | Number of heartbeats to send (unlimited if 0). +| `--debug` | `-D` | off | Forward log messages of level _debug_ (if logger is set). | `--help` | `-h` | – | Output available command-line arguments and quit. -| `--host _host_` | `-H` | – | IP or FQDN of HTTP-RPC host (for instance, `127.0.0.1` or `iot.example.com`). +| `--host _host_` | `-H` | – | IP or FQDN of HTTP-RPC API host (for instance, `127.0.0.1` or `iot.example.com`). | `--interval _seconds_` | `-I` | 0 | Emit interval in seconds. | `--logger _name_` | `-l` | – | Optional name of logger. If set, sends logs to <> process of given name. -| `--name _name_` | `-n` | `dmbeat` | Optional name of instance and table in given configuration file. +| `--name _name_` | `-n` | `dmbeat` | Optional name of instance and table in configuration. | `--node _id_` | `-N` | – | Node id. -| `--password _string_` | `-P` | – | HTTP-RPC API password. -| `--port _port_` | `-p` | 0 | Port of HTTP-RPC API server. The default `0` selects the port automatically. +| `--password _string_` | `-P` | – | API password. +| `--port _port_` | `-q` | 0 | Port of HTTP-RPC API server (0 for automatic). | `--tls` | `-X` | off | Use TLS encryption. 
-| `--username _string_` | `-U` | – | HTTP-RPC API user name. If set, implies HTTP Basic Auth. +| `--username _string_` | `-U` | – | API user name. If set, implies HTTP Basic Auth. | `--verbose` | `-V` | off | Print log messages to _stderr_. | `--version` | `-v` | – | Output version information and quit. |=== +[discrete] ==== Examples Send a single heartbeat to a <> service on `localhost`: @@ -686,20 +729,25 @@ web application <> lists the beats received by the server. The *dmdb* program collects observations from a POSIX message queue and stores them in a SQLite database. The name of the message queue equals the -given *dmdb* name, by default `dmdb`. The IPC option enables process -synchronisation via POSIX semaphores. The value of the semaphore is changed -from 0 to 1 if a new observation has been received. The name of the semaphore -equals the *dmdb* name. Only a single process may wait for the semaphore. +given *dmdb* name with a leading `/`. The IPC option enables process +synchronisation via POSIX semaphores. The value of the semaphore is changed from +0 to 1 if a new observation has been received. The name of the semaphore equals +the *dmdb* name with a leading `/`. Only a single process shall wait for the +semaphore. + +Only <> in binary format are accepted. Log +messages are stored in the database by the separate <> program. +[discrete] ==== Command-Line Options -[cols="2,1,1,7"] +[cols="2,1,1,6"] |=== | Option | Short | Default | Description | `--config _file_` | `-c` | – | Path to configuration file. | `--database _file_` | `-d` | – | Path to SQLite observation database. -| `--debug` | `-D` | off | Forward log messages of level _debug_ via IPC (if logger is set). +| `--debug` | `-D` | off | Forward log messages of level _debug_ (if logger is set). | `--help` | `-h` | – | Output available command-line arguments and quit. | `--ipc` @@ -727,6 +775,7 @@ POSIX semaphore. | `--version` | `-v` | – | Output version information and quit. 
|=== +[discrete] ==== Examples Create a message queue `/dmdb`, wait for incoming observations, and store them @@ -736,7 +785,8 @@ in the given database: $ dmdb --name dmdb --node dummy-node --database /var/dmpack/observ.sqlite --verbose .... -Log messages and observation ids are printed to _stdout_. +Log messages and observation ids are printed to _stdout_ if argument `--verbose` +is set. === dmdbctl [[dmdbctl]] @@ -759,9 +809,10 @@ Delete:: Only nodes, sensors, and targets are supported. All data attributes are passed through command-line arguments. +[discrete] ==== Command-Line Options -[cols="2,1,1,7"] +[cols="2,1,1,6"] |=== | Option | Short | Default | Description @@ -785,6 +836,7 @@ through command-line arguments. | `--version` | `-v` | – | Output version information and quit. |=== +[discrete] ==== Examples Add node, sensor, and target to observation database: @@ -830,19 +882,23 @@ an empty file will be created. .Output file formats [[dmexport-output]] -[cols="1,5,4"] +[cols="1,2,2,2,2"] |=== -| Format | Supported Types | Description +| Type ^| Block ^| CSV ^| JSON ^| JSONL -| `block` | `dp` | ASCII block format. -| `csv` | `beat`, `dp`, `log`, `node`, `observ`, `sensor`, `target` | CSV format. -| `json` | `beat`, `dp`, `log`, `node`, `observ`, `sensor`, `target` | JSON format. -| `jsonl` | `beat`, `dp`, `log`, `node`, `observ`, `sensor`, `target` | JSON Lines format. +| `beat` ^| ^| ✓ ^| ✓ ^| ✓ +| `dp` ^| ✓ ^| ✓ ^| ✓ ^| ✓ +| `log` ^| ^| ✓ ^| ✓ ^| ✓ +| `node` ^| ^| ✓ ^| ✓ ^| ✓ +| `observ` ^| ^| ✓ ^| ✓ ^| ✓ +| `sensor` ^| ^| ✓ ^| ✓ ^| ✓ +| `target` ^| ^| ✓ ^| ✓ ^| ✓ |=== +[discrete] ==== Command-Line Options -[cols="3,1,1,6"] +[cols="2,1,1,6"] |=== | Option | Short | Default | Description @@ -862,6 +918,7 @@ an empty file will be created. | `--version` | `-v` | – | Output version information and quit. |=== +[discrete] ==== Examples Export log messages from database to JSON file: @@ -887,7 +944,7 @@ output or file. 
The feed id has to be a 36 characters long UUID with hyphens. News aggregators will use the id to identify the feed. Therefore, the id should not be reused -among different feeds. Run <> to generate a valid UUID. +among different feeds. Run <> to generate a valid UUIDv4. The time stamp of the feed in element _updated_ is set to the date and time of the last log message. If no logs have been added to the database since the last @@ -900,32 +957,34 @@ feed in HTML format. Set the option to the (relative) path of the public XSL on the web server. An example style sheet `feed.xsl` is located in `/usr/local/share/dmpack/`. +[discrete] ==== Command-Line Options -[cols="3,1,1,6"] -|=== -| Option | Short | Default | Description - -| `--author _name_` | `-A` | – | Name of feed author or organisation. -| `--config _file_` | `-c` | – | Path to configuration file. -| `--database _file_` | `-d` | – | Path to SQLite log database. -| `--email _address_` | `-M` | – | E-mail address of feed author (optional). -| `--entries _count_` | `-E` | 50 | Maximum number of entries in feed (max. 500). -| `--force` | `-F` | – | Force file output even if no new log records are available. -| `--help` | `-h` | – | Output available command-line arguments and quit. -| `--id _uuid_` | `-I` | – | UUID of the feed, 36 characters long with hyphens. -| `--maxlevel _level_` | `-K` | 5 | Select log messages of the given maximum <> (between 1 and 5). Must be greater or equal the minimum level. -| `--minlevel _level_` | `-L` | 1 | Select log messages of the given minimum <> (between 1 and 5). -| `--name _name_` | `-n` | `dmfeed` | Name of instance and table in given configuration file. -| `--node _id_` | `-N` | – | Select log messages of the given node id. -| `--output _file_` | `-o` | _stdout_ | Path of the output file. If empty or `-`, the Atom feed will be printed to standard output. -| `--subtitle _string_` | `-G` | – | Sub-title of feed. -| `--title _string_` | `-C` | – | Title of feed. 
-| `--url _url_` | `-U` | – | Public URL of the feed. -| `--version` | `-v` | – | Output version information and quit. -| `--xsl` | `-X` | – | Path to XSLT style sheet. +[cols="2,1,1,5"] |=== - +| Option | Short | Default | Description + +| `--author _name_` | `-A` | – | Name of feed author or organisation. +| `--config _file_` | `-c` | – | Path to configuration file. +| `--database _file_` | `-d` | – | Path to SQLite log database. +| `--email _address_` | `-M` | – | E-mail address of feed author (optional). +| `--entries _count_` | `-E` | 50 | Maximum number of entries in feed (max. 500). +| `--force` | `-F` | – | Force file output even if no new log records are available. +| `--help` | `-h` | – | Output available command-line arguments and quit. +| `--id _uuid_` | `-I` | – | UUID of the feed, 36 characters long with hyphens. +| `--maxlevel _level_` | `-K` | `critical` | Select log messages of the given maximum <> (between `debug` or 1 and `critical` or 5). Must be greater than or equal to the minimum level. +| `--minlevel _level_` | `-L` | `debug` | Select log messages of the given minimum <> (between `debug` or 1 and `critical` or 5). +| `--name _name_` | `-n` | `dmfeed` | Name of instance and table in configuration. +| `--node _id_` | `-N` | – | Select log messages of the given node id. +| `--output _file_` | `-o` | _stdout_ | Path of the output file. If empty or `-`, the Atom feed will be printed to standard output. +| `--subtitle _string_` | `-G` | – | Sub-title of feed. +| `--title _string_` | `-C` | – | Title of feed. +| `--url _url_` | `-U` | – | Public URL of the feed. +| `--version` | `-v` | – | Output version information and quit. +| `--xsl` | `-X` | – | Path to XSLT style sheet. +|=== + +[discrete] ==== Examples First, generate a unique feed id: @@ -977,18 +1036,19 @@ A configuration file is mandatory to describe the jobs to perform. Each observation must have a valid target id. Node, sensor, and target have to be present in the database. 
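The jobs are declared as nested Lua tables in the configuration file. A minimal sketch of the structure, using example ids and only fields that also appear in the complete `dmfs.conf` example further below:

[source,lua]
....
-- Minimal dmfs configuration sketch.
dmfs = {
  node = "node-1",               -- Node id (required).
  sensor = "sensor-1",           -- Sensor id (required).
  output = "",                   -- Path to output file, or `-` for stdout.
  format = "none",               -- Output format (`csv` or `jsonl`).
  jobs = {                       -- List of jobs to perform.
    {
      observation = {            -- Observation to execute (required).
        name = "observ-1",       -- Observation name (required).
        target_id = "target-1",  -- Target id (must exist in the database).
        receivers = { "dmdb" },  -- List of receivers (up to 16).
        requests = {}            -- Requests to perform.
      }
    }
  }
}
....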
+[discrete] ==== Command-Line Options -[cols="2,1,1,7"] +[cols="2,1,1,6"] |=== | Option | Short | Default | Description | `--config _file_` | `-c` | – | Path to configuration file (required). -| `--debug` | `-D` | off | Forward log messages of level _debug_ via IPC (if logger is set). +| `--debug` | `-D` | off | Forward log messages of level _debug_ (if logger is set). | `--format _format_` | `-f` | – | Output format, either `csv` or `jsonl`. | `--help` | `-h` | – | Output available command-line arguments and quit. | `--logger _name_` | `-l` | – | Optional name of logger. If set, sends logs to <> process of given name. -| `--name _name_` | `-n` | `dmfs` | Name of instance and table in given configuration file. +| `--name _name_` | `-n` | `dmfs` | Name of instance and table in configuration. | `--node _id_` | `-N` | – | Node id. | `--output _file_` | `-o` | – | Output file to append observations to (`-` for _stdout_). | `--sensor _id_` | `-S` | – | Sensor id. @@ -996,133 +1056,13 @@ present in the database. | `--version` | `-v` | – | Output version information and quit. |=== +[discrete] ==== Examples -First, install the 1-Wire file system package. On FreeBSD, run: - -.... -# pkg install comms/owfs -.... - -On Linux, install the package instead with: - -.... -# apt install owfs -.... - -Connect a 1-Wire temperature sensor through USB (device `/dev/ttyU0`), and mount -the 1-Wire file system with _owfs(1)_ under `/mnt/1wire/`: - -.... -# mkdir -p /mnt/1wire -# owfs -C -d /dev/ttyU0 --allow_other -m /mnt/1wire/ -.... - -On Linux, the path to the USB adapter slightly differs: - -.... -# owfs -C -d /dev/ttyUSB0 --allow_other -m /mnt/1wire/ -.... - -The command-line argument `-C` selects output in °C. The settings can be added -to the _owfs(1)_ configuration file, usually `/usr/local/etc/owfs.conf` or -`/etc/owfs.conf`: - -.... -device = /dev/ttyU0 -mountpoint = /mnt/1wire -allow_other -Celsius -.... 
- -The file system is mounted automatically at system start-up if _owfs(1)_ is -configured to run as a service. +Start *dmfs* to execute the jobs in the configuration file: -Reading a temperature value from the connected sensor: - -.... -$ cat /mnt/1wire/10.DCA98C020800/temperature -19.12 -.... - -Then, initialise the observation and log databases: - -.... -$ cd /var/dmpack/ -$ dminit --type observ --database observ.sqlite --wal -$ dminit --type log --database log.sqlite --wal -.... - -Create node `node-1`, sensor `sensor-1`, and target `target-1` in database -`/var/dmpack/observ.sqlite` through <> or <>: - -.... -$ dmdbctl -d observ.sqlite -C node --id node-1 --name "Node 1" -$ dmdbctl -d observ.sqlite -C sensor --id sensor-1 --name "Sensor 1" --node node-1 -$ dmdbctl -d observ.sqlite -C target --id target-1 --name "Target 1" -.... - -Set the program settings in configuration file -`/usr/local/etc/dmpack/dmfs.conf`: - -[source,lua] -.... --- dmfs.conf -dmfs = { - logger = "dmlogger", -- Logger to send logs to. - node = "node-1", -- Node id (required). - sensor = "sensor-1", -- Sensor id (required). - output = "", -- Path to output file, or `-` for stdout. - format = "none", -- Output format (`csv` or `jsonl`). - jobs = { -- List of jobs to perform. - { - disabled = false, -- Enable to ignore job. - onetime = false, -- Run job only once. - observation = { -- Observation to execute (required). - name = "observ-1", -- Observation name (required). - target_id = "target-1", -- Target id (required). - receivers = { "dmdb" }, -- List of receivers (up to 16). - requests = { -- List of files to read. - { - request = "/mnt/1wire/10.DCA98C020800/temperature", -- File path. - pattern = "(?[-+0-9\\.]+)", -- RegEx pattern. - delay = 500, -- Delay in mseconds. - responses = { - { - name = "temp", -- RegEx group name (max. 8 characters). - unit = "degC", -- Response unit (max. 8 characters). - type = RESPONSE_TYPE_REAL64 -- Response value type. 
- } - } - } - } - }, - delay = 10 * 1000, -- Delay in mseconds to wait afterwards. - } - }, - debug = false, -- Forward logs of level DEBUG via IPC. - verbose = true -- Print messages to standard output. -} -.... - -Log messages will be sent to logger `dmlogger`, observations to receiver `dmdb`. - -Start the logger process: - -.... -$ dmlogger --name dmlogger --database /var/dmpack/log.sqlite -.... - -Start the database process: - -.... -$ dmdb --name dmdb --database /var/dmpack/observ.sqlite --node node-1 --logger dmlogger .... - -Start *dmfs* to execute the configured job: - -.... -$ dmfs --name dmfs --config /usr/local/etc/dmpack/dmfs.conf +$ dmfs --name dmfs --config /usr/local/etc/dmpack/dmfs.conf --verbose .... === dmgrc [[dmgrc]] @@ -1136,24 +1076,26 @@ By default, observation responses of name `grc` are verified. For each GeoCOM error code, a custom log level may be specified in the configuration file. Otherwise, the default log level is used instead. +[discrete] ==== Command-Line Options -[cols="2,1,1,7"] +[cols="2,1,1,6"] |=== | Option | Short | Default | Description | `--config _file_` | `-c` | – | Path to configuration file (required). -| `--debug` | `-D` | off | Forward log messages of level _debug_ via IPC (if logger is set). +| `--debug` | `-D` | off | Forward log messages of level _debug_ (if logger is set). | `--help` | `-h` | – | Output available command-line arguments and quit. | `--level _level_` | `-L` | 3 | Default level of log messages, between 1 and 5. | `--logger _name_` | `-l` | – | Name of <> process to send logs to. -| `--name _name_` | `-n` | `dmgrc` | Name of instance and table in given configuration file. +| `--name _name_` | `-n` | `dmgrc` | Name of instance and table in configuration. | `--node _id_` | `-N` | – | Node id. | `--response _name_` | `-R` | `grc` | Response name of the GeoCOM return code. | `--verbose` | `-V` | off | Print log messages to _stderr_. | `--version` | `-v` | – | Output version information and quit. 
|===

+[discrete]
==== Examples

A configuration file is not required, but allows specifying the log level of
@@ -1202,6 +1144,7 @@ through command-line argument `--database`. Only one database can be specified.
The output contains compiler version and options; database PRAGMAs, tables, and
number of rows; as well as system name, version, and host name.

+[discrete]
==== Command-Line Options

[cols="2,1,1,7"]
@@ -1213,21 +1156,24 @@ number of rows; as well as system name, version, and host name.
| `--version` | `-v` | – | Output version information and quit.
|===

+[discrete]
==== Examples

Print build, database, and system information:

....
$ dminfo --database /var/dmpack/observ.sqlite
-build.compiler: GCC version 13.1.0
+build.compiler: GCC version 13.2.0
build.options: -mtune=generic -march=x86-64 -std=f2018
db.application_id: 444D31
db.foreign_keys: T
db.journal_mode: wal
+db.library: libsqlite3/3.46.0
db.path: /var/dmpack/observ.sqlite
+db.schema_version: 1
db.size: 286720
-db.table.beats: F
-db.table.beats.rows: 0
+db.table.nodes.rows: 1
+db.table.observs.rows: 202
...
....
@@ -1243,23 +1189,25 @@ required for the input records. The nodes, sensors, and targets referenced by
input observations must exist in the database. The nodes referenced by input
sensors must exist as well.

+[discrete]
==== Command-Line Options

-[cols="3,1,1,6"]
+[cols="2,1,1,6"]
|===
| Option | Short | Default | Description

| `--database _file_` | `-d` | – | Path to SQLite database (required, unless in dry mode).
-| `--dry` | `-y` | off | Dry mode. Reads and validates records from file but skips database import.
+| `--dry` | `-D` | off | Dry mode. Reads and validates records from file but skips database import.
| `--help` | `-h` | – | Output available command-line arguments and quit.
| `--input _file_` | `-i` | – | Path to input file in CSV format (required).
| `--quote _char_` | `-q` | – | CSV quote character.
| `--separator _char_` | `-s` | `,` | CSV field separator.
-| `--type _type_` | `-t` | – | Type of record to import: `log`, `node`, `observ`, `sensor`, `target` (required).
+| `--type _type_` | `-t` | – | Type of record to import, one of `log`, `node`, `observ`, `sensor`, or `target` (required).
| `--verbose` | `-V` | off | Print progress to _stdout_.
| `--version` | `-v` | – | Output version information and quit.
|===

+[discrete]
==== Examples

Import observations from CSV file `observ.csv` into database `observ.sqlite`:
@@ -1277,20 +1225,23 @@ server. The argument can be omitted if this feature is not needed.
The journal mode Write-Ahead Logging (WAL) should be enabled for databases with
multiple readers.

+[discrete]
==== Command-Line Options

-[cols="2,1,1,7"]
+[cols="2,1,1,6"]
|===
| Option | Short | Default | Description

-| `--database _file_` | `-d` | – | Path of the new SQLite database.
+| `--database _file_` | `-d` | – | Path of the new SQLite database (required).
+| `--force` | `-F` | off | Force the table creation even if the database already exists.
| `--help` | `-h` | – | Output available command-line arguments and quit.
| `--sync` | `-Y` | off | Add synchronisation tables. Enable for data synchronisation between client and server.
-| `--type _type_` | `-t` | – | Type of database, either `beat`, `log`, or `observ`.
+| `--type _type_` | `-t` | – | Type of database, either `beat`, `log`, or `observ` (required).
| `--version` | `-v` | – | Output version information and quit.
| `--wal` | `-W` | off | Enable journal mode Write-Ahead Logging (WAL).
|===

+[discrete]
==== Examples

Create an observation database with remote synchronisation tables (WAL):
@@ -1314,23 +1265,30 @@
$ dminit --database /var/dmpack/beat.sqlite --type beat --wal

=== dmlog [[dmlog]]

The *dmlog* utility forwards a log message to the message queue of a
-<> instance. The argument `--message` is mandatory. The default log
-level is _info_. Pass the name of the _dmlogger_ instance through command-line
-argument `--logger`.
The program terminates after log transmission.
+<> or <> instance. The program may be executed through a
+shell script to add logs to the DMPACK database. The argument `--message` is
+mandatory. The default log level is _info_. Pass the name of the _dmlogger_ or
+_dmrecv_ instance to send the log to through command-line argument `--logger`.

-The following log levels are accepted:
+Logs are sent in binary format. The program terminates after log transmission.
+The log level may be one of the following:

-[cols="1,3"]
+[cols="1,2,5"]
|===
-| Level | Name
+| Level | Parameter String | Name

-| 1 | debug
-| 2 | info
-| 3 | warning
-| 4 | error
-| 5 | critical
+| 1 | `debug` | Debug
+| 2 | `info` | Info
+| 3 | `warning` | Warning
+| 4 | `error` | Error
+| 5 | `critical` | Critical
|===

+Both parameter strings and literal log level values are accepted as
+command-line arguments. For level _warning_, set argument `--level` to `3` or
+`warning`.
+
+[discrete]
==== Command-Line Options

[cols="2,1,1,5"]
|===
| Option | Short | Default | Description

| `--error _n_` | `-e` | 0 | DMPACK <> (optional).
| `--help` | `-h` | – | Output available command-line arguments and quit.
-| `--level _level_` | `-L` | 2 | <>, from 1 to 5 (level or name).
+| `--level _level_` | `-L` | `info` | <>, from `debug` or 1 to `critical` or 5.
| `--logger _name_` | `-l` | `dmlogger` | Name of logger instance and POSIX message queue.
| `--message _string_` | `-m` | – | Log message (max. 512 characters).
| `--node _id_` | `-N` | – | Node id (optional).
@@ -1351,6 +1309,7 @@ The following log levels are accepted:
| `--version` | `-v` | – | Output version information and quit.
|===

+[discrete]
==== Examples

Send a log message to the message queue of logger `dmlogger`:
@@ -1381,25 +1340,28 @@ to standard output before being discarded (only if the verbose flag is enabled).
The IPC option allows optional process synchronisation via a named POSIX
semaphore.
The value of the semaphore is changed from `0` to `1` whenever a new log was received. The name of the semaphore will equal the *dmlogger* name -with leading `/`. Only a single process should wait for the semaphore unless -round-robin passing is desired. This feature may be used to automatically -synchronise incoming log messages with a remote HTTP-RPC API server. <> -will wait for new logs before starting synchronisation if the *dmlogger* -instance name has been passed through command-line argument `--wait`. +with leading `/`. + +Only a single process should wait for the semaphore unless round-robin passing +is desired. This feature may be used to automatically synchronise incoming log +messages with a remote HTTP-RPC API server. <> will wait for new logs +before starting synchronisation if the *dmlogger* instance name has been passed +through command-line argument `--wait`. The following log levels are accepted: -[cols="1,3"] +[cols="1,3,3"] |=== -| Level | Name +| Level | Parameter String | Name -| 1 | debug -| 2 | info -| 3 | warning -| 4 | error -| 5 | critical +| 1 | `debug` | Debug +| 2 | `info` | Info +| 3 | `warning` | Warning +| 4 | `error` | Error +| 5 | `critical` | Critical |=== +[discrete] ==== Command-Line Options [cols="2,1,1,5"] @@ -1420,8 +1382,8 @@ this semaphore. | `--minlevel _level_` | `-L` -| 3 -| Minimum level for a log to be stored in the database, from 1 to 5. +| `info` +| Minimum level for a log to be stored in the database, from `debug` or 1 to `critical` or 5. | `--name _name_` | `-n` @@ -1434,13 +1396,14 @@ semaphore. | `--version` | `-v` | – | Output version information and quit. |=== +[discrete] ==== Examples Create a message queue `/dmlogger`, wait for incoming logs, and store them in -the given database if logs are of level 4 (ERROR) or higher: +the given database if logs are of level _error_ (4) or higher: .... 
-$ dmlogger --node dummy-node --database log.sqlite --minlevel 4 +$ dmlogger --node dummy-node --database log.sqlite --minlevel warning .... Push semaphore `/dmlogger` each time a log has been received: @@ -1468,17 +1431,18 @@ The observation returned from the Lua function is forwarded to the next receiver specified in the receivers list of the observation. If no receivers are left, the observation will be discarded. +[discrete] ==== Command-Line Options -[cols="2,1,1,6"] +[cols="2,1,1,5"] |=== | Option | Short | Default | Description | `--config _file_` | `-c` | – | Path to configuration file (optional). -| `--debug` | `-D` | off | Forward log messages of level _debug_ via IPC (if logger is set). +| `--debug` | `-D` | off | Forward log messages of level _debug_ (if logger is set). | `--help` | `-h` | – | Output available command-line arguments and quit. | `--logger _name_` | `-l` | – | Optional name of logger. If set, sends logs to <> process of given name. -| `--name _name_` | `-n` | `dmlua` | Name of instance and table in given configuration file. +| `--name _name_` | `-n` | `dmlua` | Name of instance and table in configuration. | `--node _id_` | `-N` | – | Node id. | `--procedure _name_` | `-p` | `process` | Name of Lua function to call. | `--script _file_` | `-s` | – | Path to Lua script to run. @@ -1486,6 +1450,7 @@ the observation will be discarded. | `--version` | `-v` | – | Output version information and quit. |=== +[discrete] ==== Examples The following Lua script `script.lua` just prints observation table `observ` to @@ -1538,18 +1503,19 @@ observation must have a valid target id. Node id, sensor id, and observation id are added by *dmpipe*. Node, sensor, and target have to be present in the database for the observation to be stored. +[discrete] ==== Command-Line Options -[cols="2,1,1,7"] +[cols="2,1,1,5"] |=== | Option | Short | Default | Description | `--config _file_` | `-c` | – | Path to configuration file (required). 
-| `--debug` | `-D` | off | Forward log messages of level _debug_ via IPC (if logger is set). +| `--debug` | `-D` | off | Forward log messages of level _debug_ (if logger is set). | `--format _format_` | `-f` | – | Output format, either `csv` or `jsonl`. | `--help` | `-h` | – | Output available command-line arguments and quit. | `--logger _name_` | `-l` | – | Optional name of logger. If set, sends logs to <> process of given name. -| `--name _name_` | `-n` | `dmpipe` | Name of instance and table in given configuration file. +| `--name _name_` | `-n` | `dmpipe` | Name of instance and table in configuration. | `--node _id_` | `-N` | – | Node id. | `--output _file_` | `-o` | – | Output file to append observations to (`-` for _stdout_). | `--sensor _id_` | `-S` | – | Sensor id. @@ -1557,6 +1523,7 @@ database for the observation to be stored. | `--version` | `-v` | – | Output version information and quit. |=== +[discrete] ==== Examples The example reads the remaining battery life returned by the _sysctl(8)_ tool @@ -1573,9 +1540,8 @@ On Linux, the battery life can be read with <> from The regular expression pattern describes the response and defines the group `battery` for extraction. The name of one of the responses in the `responses` table must equal the group name. The observation will be forwarded to the -message queue of a <> process. - -Backslash characters in the string values have to be escaped with `\`. +message queue of a <> process. Backslash characters in the string values +have to be escaped with `\`. [source,lua] .... @@ -1588,7 +1554,7 @@ dmpipe = { format = "none", -- Output format (`csv` or `jsonl`). jobs = { -- Jobs to perform. { - disabled = false, -- Enable to ignore job. + disabled = false, -- Enable to skip job. onetime = false, -- Run job only once. observation = { -- Observation to execute. name = "dummy-observ", -- Observation name (required). @@ -1639,14 +1605,14 @@ observations read from database. 
Plots are either written to file or displayed in terminal or X11 window. Depending on the selected terminal back-end, you may have to set the environment -variable `GDFONTPATH` to the local font directory first: +variable `GDFONTPATH` to the path of the local font directory first: .... $ export GDFONTPATH="/usr/local/share/fonts/webfonts/" .... If _gnuplot(1)_ is installed under a name other than `gnuplot`, for example, -`gnuplot-nox`, an alias has to be added to the global profile: +`gnuplot-nox`, create a symbolic link or add an alias to the global profile: .... alias gnuplot="gnuplot-nox" @@ -1686,6 +1652,7 @@ configuration file. | `%s` | second (ss) |=== +[discrete] ==== Command-Line Options [cols="3,1,1,6"] @@ -1700,7 +1667,7 @@ configuration file. | `--from _timestamp_` | `-B` | – | Start of time range in ISO 8601. | `--height _n_` | `-H` | 400 | Plot height. | `--help` | `-h` | – | Output available command-line arguments and quit. -| `--name _name_` | `-n` | `dmplot` | Name of table in configuration file. +| `--name _name_` | `-n` | `dmplot` | Name of table in configuration. | `--node _id_` | `-N` | – | Node id. | `--output _file_` | `-o` | – | File path of plot image. May include <>. | `--response _name_` | `-R` | – | Response name. @@ -1713,6 +1680,7 @@ configuration file. | `--width _n_` | `-W` | 1000 | Plot width. |=== +[discrete] ==== Examples Create a plot of observations selected from database `observ.sqlite` in PNG @@ -1747,50 +1715,47 @@ replaced consecutively. Received observations are not forwarded to the next specified receiver unless argument `--forward` is set. If no receivers are defined or left, the -observation will be discarded after output. - -The output format `block` is only available for observation data and requires -a response name to be set. Observations will be searched for this response name -and converted to data point type if found. The data point is printed in ASCII -block format. 
- -If the JSON Lines output format is selected, logs and observations are written -as JSON objects to file or _stdout_, separated by new line (`\n`). Use _jq(1)_ -to convert records in JSON Lines file `input.jsonl` into a valid JSON array in -`output.json`: +observation will be discarded after output. If the JSON Lines output format is +selected, logs and observations are written as JSON objects to file or _stdout_, +separated by new line (`\n`). Use _jq(1)_ to convert records in JSON Lines file +`input.jsonl` into a valid JSON array in `output.json`: .... $ jq -s '.' input.jsonl > output.json .... +The output format `block` is only available for observation data and requires a +response name to be set. Observations will be searched for this response name +and converted to data point type if found. The data point is printed in ASCII +block format. + The program settings are passed through command-line arguments or an optional configuration file. The arguments overwrite settings from file. .Output formats of logs and observations [[dmrecv-output]] -[cols="2,4,12"] +[cols="1,2,2,2,2"] |=== -| Format | Type | Description +| Type ^| Block ^| CSV ^| JSONL ^| NML -| `block` | `observ` | ASCII block format (time stamp and response value). -| `csv` | `log`, `observ` | CSV format. -| `jsonl` | `log`, `observ` | JSON Lines format. -| `nml` | `log`, `observ` | Fortran 95 Namelist format. +| `log` ^| ^| ✓ ^| ✓ ^| ✓ +| `observ` ^| ✓ ^| ✓ ^| ✓ ^| ✓ |=== +[discrete] ==== Command-Line Options -[cols="2,1,1,7"] +[cols="2,1,1,5"] |=== | Option | Short | Default | Description | `--config _file_` | `-c` | – | Path to configuration file. -| `--debug` | `-D` | off | Forward log messages of level _debug_ via IPC (if logger is set). +| `--debug` | `-D` | off | Forward log messages of level _debug_ (if logger is set). | `--format _format_` | `-f` | – | <> (`block`, `csv`, `jsonl`, `nml`). | `--forward` | `-F` | off | Forward observations to the next specified receiver. 
| `--help` | `-h` | – | Output available command-line arguments and quit. | `--logger _name_` | `-l` | – | Optional name of logger. If set, sends logs to <> process of given name. -| `--name _name_` | `-n` | `dmrecv` | Name of table in configuration file and POSIX message queue to subscribe to. +| `--name _name_` | `-n` | `dmrecv` | Name of table in configuration and POSIX message queue to subscribe to. | `--node _id_` | `-N` | – | Optional node id. | `--output _file_` | `-o` | _stdout_ | Output file to append observations to (`-` for _stdout_). | `--replace` | `-r` | off | Replace output file instead of appending data. @@ -1800,6 +1765,7 @@ configuration file. The arguments overwrite settings from file. | `--version` | `-v` | – | Output version information and quit. |=== +[discrete] ==== Examples Write log messages received from POSIX message queue `/dmrecv` to file @@ -1816,17 +1782,27 @@ $ dmrecv --name dmrecv --type observ --format jsonl .... Write the observations serialised in JSON Lines format to named pipe -`/tmp/dmrecv_pipe`: +`/tmp/fifo_dmrecv`: + +.... +$ mkfifo /tmp/fifo_dmrecv +$ dmrecv --name dmrecv --type observ --format jsonl --output /tmp/fifo_dmrecv +.... + +Another process can now read the observations from `/tmp/fifo_dmrecv`: .... -$ mkfifo /tmp/dmrecv_pipe -$ dmrecv --name dmrecv --type observ --format jsonl --output /tmp/dmrecv_pipe +$ tail -f /tmp/fifo_dmrecv .... -Another process can now read the observations from `/tmp/dmrecv_pipe`: +Responses in block format can also be piped to a graph tool like +link:https://www.thregr.org/wavexx/software/trend/[trend] to update a chart in +real-time. For instance, to pipe the responses of name `tz0` for observations +received through message queue `/dmrecv` to _trend(1)_, run: .... -$ tail -f /tmp/dmrecv_pipe +$ dmrecv --name dmrecv --type observ --format block --response tz0 \ + | awk '{ print $2 | "trend - 60" }' .... 
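The block output can be post-processed with standard Unix tools as well. The following sketch averages received response values with _awk(1)_; the sample file stands in for the *dmrecv* output stream, and its time stamps and values are made up, assuming the block layout of one time stamp and one response value per line described above:

```shell
# Hypothetical sample of dmrecv block output (assumed layout: ISO 8601 time
# stamp and response value, separated by white-space).
cat << 'EOF' > /tmp/block.txt
2023-09-28T10:00:00.000000+00:00 19.12
2023-09-28T10:10:00.000000+00:00 19.22
EOF

# Average the response values in column 2.
awk '{ sum += $2; n++ } END { printf "mean: %.2f\n", sum / n }' /tmp/block.txt
# Prints: mean: 19.17
```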
=== dmreport [[dmreport]] @@ -1835,18 +1811,24 @@ The *dmreport* program creates reports in HTML5 format, containing plots of observations and/or log messages selected from database. Plots are created by calling _gnuplot(1)_ and inlining the returned image (GIF, PNG, SVG) as a base64-encoded data URI. Any style sheet file with classless CSS can be -included to alter the presentation of the report. The output of *dmreport* is a -single HTML file. +included to alter the presentation of the report. A basic style sheet +`dmreport.css` and its minified version `dmreport.min.css` are provided in +`/usr/local/share/dmpack/`. The output of *dmreport* is a single HTML file with +inlined CSS. Use a command-line tool like +link:https://wkhtmltopdf.org/[wkhtmltopdf] to convert the HTML report to PDF +format. Depending on the selected plot format, the environment variable `GDFONTPATH` may -have to be set to the local font directory first: +have to be set to the local font directory containing the TrueType fonts first, +for example: .... $ export GDFONTPATH="/usr/local/share/fonts/webfonts/" .... -If _gnuplot(1)_ is installed under a name other than `gnuplot`, for example, -`gnuplot-nox`, an alias has to be added to the global profile: +Add the export statement to the global profile `/etc/profile`. If _gnuplot(1)_ +is installed under a name other than `gnuplot`, for example, `gnuplot-nox`, +create a symbolic link or add an alias to `/etc/profile`: .... alias gnuplot="gnuplot-nox" @@ -1870,6 +1852,7 @@ priority over settings in the configuration file. | `%s` | second (ss) |=== +[discrete] ==== Command-Line Options [cols="2,1,1,5"] @@ -1887,11 +1870,12 @@ priority over settings in the configuration file. | `--version` | `-v` | – | Output version information and quit. |=== +[discrete] ==== Examples The settings are stored in Lua table `dmreport` in the configuration file. The observations are read from database `observ.sqlite`, the log messages from -`log.sqlite`. +`log.sqlite`. 
You might want to use absolute paths for the databases.

[source,lua]
....
@@ -1937,18 +1921,21 @@ dmreport = {
}
....

-Write a report to file `report.html` based on settings in `dmreport.conf`:
+The sensor node `dummy-node`, the sensor `dummy-sensor`, and the target
+`dummy-target` must exist in the database, and the observations to plot need to
+have responses of name `tz0`. Write a report to file `report.html` based on
+settings in `dmreport.conf`. The command-line arguments overwrite the settings
+of the configuration file:

....
$ dmreport --name dmreport --config dmreport.conf --output report.html
....

-The command-line arguments overwrite the settings of the configuration file.
-
-In order to create monthly reports, we may customise the shell script
-`/usr/local/share/dmpack/mkreport.sh` to determine the timestamps of the last
-and the current month, which will then be passed to *dmreport*. Modify the
-script `mkreport.sh` to your set-up:
+In order to update reports periodically, we can customise the shell script
+`mkreport.sh` in `/usr/local/share/dmpack/`. The script determines the
+timestamps of the last and the current month (to allow for observations that
+arrive late), which will then be passed to *dmreport* to create monthly
+reports. Modify the script according to your set-up:

[source,sh]
....
@@ -1958,9 +1945,7 @@
config="/usr/local/etc/dmpack/dmreport.conf"
output="/var/www/reports/"
....

-Executing the shell script creates two reports, one for time
-series of the previous month (in case some observations have arrived late), and
-one for those of the current month, for example:
+The shell script writes two reports to `/var/www/reports/`.

....
$ sh /usr/local/share/dmpack/mkreport.sh
@@ -1968,36 +1953,38 @@ $ sh /usr/local/share/dmpack/mkreport.sh
--- Writing report of 2023-09 to file /var/www/reports/2023-09_report.html ...
....

-To run the report generation periodically, simply add the script to your
-<>.
+The directory may be served by _lighttpd(1)_.
Add the script to your +<> to run the report generation periodically. === dmsend [[dmsend]] -The *dmsend* program reads observations or logs in CSV or Fortran 95 Namelist -format, and sends them sequentially to the POSIX message queue of the given -receiver. The data is either read from file or from standard input. If the input -data is of type `observ` and the argument `--forward` is passed, each -observation will be sent to its next specified receiver in the receivers list. -If no receivers are declared, or if the end of the receivers list is reached, -the observation will not be forwarded. +The *dmsend* program reads observations or logs in <> and +<> format, and sends them sequentially to +the POSIX message queue of a given receiver. The data is either read from file +or standard input. If the input data is of type `observ` and the argument +`--forward` is passed, each observation will be sent to its next specified +receiver in the receivers list instead of the receiver given through +argument `--receiver`. If no receivers are set, or if the end of the receivers +list is reached, the observation will be discarded. The program settings are passed through command-line arguments or an optional configuration file. The arguments overwrite settings from file. +[discrete] ==== Command-Line Options -[cols="2,1,1,7"] +[cols="2,1,1,5"] |=== | Option | Short | Default | Description | `--config _file_` | `-c` | – | Path to configuration file. -| `--debug` | `-D` | off | Forward log messages of level _debug_ via IPC (if logger is set). +| `--debug` | `-D` | off | Forward log messages of level _debug_ (if logger is set). | `--format _format_` | `-f` | – | Input format: `csv` or `nml`. | `--input _file_` | `-i` | _stdin_ | Path to input file (empty or `-` for _stdin_). | `--forward` | `-F` | off | Forward observations to the next specified receiver. | `--help` | `-h` | – | Output available command-line arguments and quit. 
| `--logger _name_` | `-l` | – | Optional name of logger. If set, sends logs to <> process of given name.
-| `--name _name_` | `-n` | `dmsend` | Name of instance and table in configuration file.
+| `--name _name_` | `-n` | `dmsend` | Name of instance and table in configuration.
| `--node _id_` | `-N` | – | Optional node id.
| `--receiver _name_` | `-r` | – | Name of receiver/message queue.
| `--type _type_` | `-t` | – | Input data type: `log` or `observ`.
@@ -2005,16 +1992,17 @@ configuration file. The arguments overwrite settings from file.
| `--version` | `-v` | – | Output version information and quit.
|===

+[discrete]
==== Examples

-Read observation from Namelist file `observ.nml` and send it to the next
-specified receiver:
+Read a single observation from Namelist file `observ.nml` and send it to the
+next receiver specified by attribute `next`:

....
$ dmsend --type observ --format nml --input observ.nml --forward
....

-Send logs in CSV file `logs.csv` sequentially to process `dmrecv`:
+Send multiple logs in CSV file `logs.csv` sequentially to process `dmrecv`:

....
$ dmsend --receiver dmrecv --type log --format csv --input logs.csv
@@ -2031,36 +2019,34 @@
Each request of an observation must contain the raw request intended for the
sensor in attribute `request`. Response values are extracted by group from the
raw response using the given regular expression pattern. Each group name must
match a response name. Response names are limited to eight characters.
-
Observations will be forwarded to the next receiver via POSIX message queue if
any receiver is specified. The program can act as a sole data logger if output
-and format are set. If the output path is set to `-`, observations are printed
-to _stdout_, else to file.
+file and format are set. If the output is set to `-`, observations are printed
+to _stdout_.

-A configuration file is required to configure the jobs to perform. Each
+A configuration file is mandatory to configure the jobs to perform.
Each observation must have a valid target id. The database must contain the specified -node, sensor, and targets. Parameters and functions of the <> -may be used in the configuration file. - -The following baud rates are supported: 50, 75, 110, 134, 150, 200, 300, 600, -1200, 1800, 2400, 4800, 9600, 19200, 38400, 57600, 115200, 230400, 460800, -921600. +node, sensor, and targets. Parameters and functions of the <> may be +used in the configuration file. The following baud rates are supported: 50, 75, +110, 134, 150, 200, 300, 600, 1200, 1800, 2400, 4800, 9600, 19200, 38400, +57600, 115200, 230400, 460800, 921600. +[discrete] ==== Command-Line Options [cols="2,1,1,5"] |=== | Option | Short | Default | Description -| `--baudrate _n_` | `-B` | 9600 | Number of symbols transmitted per second (4800, 9600, 115200, …). +| `--baudrate _n_` | `-B` | 9600 | Number of symbols transmitted per second. | `--bytesize _n_` | `-Z` | 8 | Byte size (5, 6, 7, 8). | `--config _file_` | `-c` | – | Path to configuration file (required). -| `--debug` | `-D` | off | Forward log messages of level _debug_ via IPC (if logger is set). +| `--debug` | `-D` | off | Forward log messages of level _debug_ (if logger is set). | `--dtr` | `-Q` | off | Enable Data Terminal Ready (DTR). | `--format _format_` | `-f` | – | Output format, either `csv` or `jsonl`. | `--help` | `-h` | – | Output available command-line arguments and quit. | `--logger _name_` | `-l` | – | Optional name of logger. If set, sends logs to <> process of given name. -| `--name _name_` | `-n` | `dmserial` | Name of instance and table in given configuration file. +| `--name _name_` | `-n` | `dmserial` | Name of instance and table in configuration. | `--node _id_` | `-N` | – | Node id. | `--output _file_` | `-o` | – | Output file to append observations to (`-` for _stdout_). | `--parity _name_` | `-P` | `none` | Parity bits (`none`, `even`, or `odd`). 
@@ -2068,11 +2054,12 @@ The following baud rates are supported: 50, 75, 110, 134, 150, 200, 300, 600, | `--sensor _id_` | `-S` | – | Sensor id. | `--stopbits _n_` | `-O` | 1 | Number of stop bits (1, 2). | `--timeout _n_` | `-T` | 0 | Connection timeout in seconds (max. 25). -| `--tty _path_` | `-Y` | – | Path to TTY/PTY device (for example, `/dev/ttyU0`). +| `--path _path_` | `-p` | – | Path to TTY/PTY device (for example, `/dev/ttyU0`). | `--verbose` | `-V` | off | Print log messages to _stderr_. | `--version` | `-v` | – | Output version information and quit. |=== +[discrete] ==== Examples Read the jobs to perform from configuration file and execute them sequentially: @@ -2084,28 +2071,32 @@ $ dmserial --name dmserial --config /usr/local/etc/dmpack/dmserial.conf --verbos === dmsync [[dmsync]] The *dmsync* program synchronises logs, nodes, observations, sensors, and -targets from local database concurrently with a remote <> server. The +targets from local databases concurrently with a remote <> server. The synchronisation may be started only once if no interval is set (to transfer -nodes, sensors, and targets from client to server), periodically as a cron job, -or by waiting for a POSIX semaphore. +nodes, sensors, and targets initially from client to server), periodically as a +cron job, or by waiting for a POSIX semaphore. The nodes, sensors, and targets referenced by observations in the local database -must also exist in the remote server database. They can be created either with -<> or <>, but also synchronised with *dmsync*. Logs and targets -do not require any additional database entries on server-side. +must also exist in the remote server database. They can be created on the server +with <> or <>, or sent from client to server with *dmsync*. +Logs and targets do not require any additional database entries on the +server-side. -The client databases must contain synchronisation tables. 
The tables are created
-automatically by <> if command-line argument `--sync` is passed.
-Alternatively, start *dmsync* with argument `--create` once.
+The client databases must contain synchronisation tables. The tables are
+created automatically by <> if command-line argument `--sync` is
+passed. Otherwise, start *dmsync* with argument `--create` once to add the
+missing tables.

If the RPC server uses HTTP Basic Auth for authentication, the RPC user name
must match the _node id_ of the transmitted node, sensor, observation, log, or
-beat record. Otherwise, the server will reject the record and return HTTP 401
+beat records, or the server will reject the requests and return HTTP 401
(Unauthorized).

-The database records are send in compressed Fortran 95 Namelist format via HTTP
-to the server. The program uses libcurl for data transfer. The accessed RPC API
-endpoints are expected under URL `[http|https]://:/api/v1/`.
+The database records are serialised in Fortran 95 Namelist format and
+optionally compressed before being sent to the server. The program uses libcurl
+for data transfer, and deflate or zstd for compression. The RPC API endpoints
+to post records to are expected at URL
+`[http|https]://:/api/v1/`.

The result of each synchronisation attempt is stored in the local database.
Records are marked as synchronised only if the server returns HTTP 201
@@ -2115,24 +2106,25 @@ Passing the server credentials via the command-line arguments `--username` and
`--password` is insecure on multi-user operating systems and only recommended
for testing.

+[discrete]
==== Command-Line Options

-[cols="3,1,1,6"]
+[cols="2,1,1,5"]
|===
| Option | Short | Default | Description

| `--config _file_` | `-c` | – | Path to configuration file.
-| `--create` | `-C` | off | Create database synchronisation tables if they do not exist.
-| `--database _file_` | `-d` | – | Path to log or observation database, depending on `--type`. 
-| `--debug` | `-D` | off | Forward log messages of level _debug_ via IPC (if logger is set).
+| `--create` | `-C` | off | Create missing database synchronisation tables.
+| `--database _file_` | `-d` | – | Path to log or observation database.
+| `--debug` | `-D` | off | Forward log messages of level _debug_ (if logger is set).
| `--help` | `-h` | – | Output available command-line arguments and quit.
-| `--host _host_` | `-H` | – | IP address or FQDN of HTTP-RPC host (for instance, `127.0.0.1` or `iot.example.com`).
-| `--interval _seconds_` | `-I` | 60 | Synchronisation interval in seconds. If `0`, synchronisation is executed only once.
+| `--host _host_` | `-H` | – | IP address or FQDN of HTTP-RPC API host (for instance, `127.0.0.1` or `iot.example.com`).
+| `--interval _seconds_` | `-I` | 60 | Synchronisation interval in seconds. If set to 0, synchronisation is executed only once.
| `--logger _name_` | `-l` | – | Name of logger. If set, sends logs to <> process of given name.
| `--name _name_` | `-n` | `dmsync` | Name of program instance and configuration.
| `--node _id_` | `-N` | – | Node id, required for types `sensor` and `observ`.
-| `--password _string_` | `-P` | – | HTTP-RPC API password.
-| `--port _port_` | `-p` | 0 | Port of HTTP-RPC API server (set to `0` for automatic selection).
+| `--password _string_` | `-P` | – | API password.
+| `--port _port_` | `-q` | 0 | Port of HTTP-RPC API server (0 for automatic).
| `--tls` | `-X` | off | Use TLS-encrypted connection.

| `--type _type_`
@@ -2141,16 +2133,17 @@ for testing.
| Type of data to synchronise, either `log`, `node`, `observ`, `sensor`, or
`target`. Type `log` requires a log database, all others an observation
database.

-| `--username _string_` | `-U` | – | HTTP-RPC API user name. If set, implies HTTP Basic Auth.
+| `--username _string_` | `-U` | – | API user name. If set, implies HTTP Basic Auth.
| `--verbose` | `-V` | off | Print log messages to _stderr_. 
| `--version` | `-v` | – | Output version information and quit. | `--wait _name_` | `-w` | – | Name of POSIX semaphore to wait for. Synchronises databases if semaphore is > 0. |=== +[discrete] ==== Examples -Synchronise nodes, sensors, and targets in the local observation database with -an HTTP-RPC server (without authentication): +Initially synchronise nodes, sensors, and targets in the local observation +database with an HTTP-RPC server (without authentication): .... $ dmsync --database observ.sqlite --type node --host 192.168.1.100 @@ -2175,9 +2168,11 @@ $ dmsync --database log.sqlite --type log --host 192.168.1.100 The *dmuuid* program is a command-line tool to generate pseudo-random UUIDs. By default, DMPACK uses 32 characters long UUIDv4 identifiers in hexadecimal format (without hyphens). Hyphens can be added by a command-line flag. The option -`--convert` expects UUIDs to be passed via standard input. Invalid identifiers -will be replaced with the default UUID. +`--convert` expects UUIDv4 identifiers to be passed via standard input. Invalid +identifiers will be replaced with the default UUID. The program may be used to +create a feed id for <>. +[discrete] ==== Command-Line Options [cols="2,1,1,7"] @@ -2185,12 +2180,13 @@ will be replaced with the default UUID. | Option | Short | Default | Description | `--convert` | `-C` | off | Add hyphens to 32 characters long hexadecimal UUIDs passed via _stdin_. -| `--count _n_` | `-n` | 1 | Number of UUIDs to generate. +| `--count _n_` | `-n` | 1 | Number of identifiers to generate. | `--help` | `-h` | – | Output available command-line arguments and quit. -| `--hyphens` | `-H` | off | Return 36 characters long UUIDs with hyphens. +| `--hyphens` | `-H` | off | Return 36 characters long UUIDv4 with hyphens. | `--version` | `-v` | – | Output version information and quit. 
|===

+[discrete]
==== Examples

Create three identifiers:
@@ -2212,7 +2208,7 @@ d498f067-d14a-4f98-a9d8-777a3a131d12
Add hyphens to a hexadecimal UUID:

....
-$ echo '3d3eee7ae1fb4259b5df72f854aaa369' | dmuuid --convert
+$ echo "3d3eee7ae1fb4259b5df72f854aaa369" | dmuuid --convert
3d3eee7a-e1fb-4259-b5df-72f854aaa369
....

@@ -2240,17 +2236,25 @@ since then, additionally in
link:https://en.wikipedia.org/wiki/Swatch_Internet_Time[Swatch Internet Time].

The style sheet of *dmweb* is based on https://missing.style/[missing.css].
-It may be replaced with any other classless CSS theme. For best experience, the
+It can be replaced with any other classless CSS theme. For best experience, the
link:https://github.com/IBM/plex/releases[IBM Plex] font family should be
installed locally.

If _gnuplot(1)_ is installed under a name other than `gnuplot`, for example,
-`gnuplot-nox`, an alias has to be added to the global profile:
+`gnuplot-nox`, create a symbolic link or add an alias to the global profile
+`/etc/profile`:

....
alias gnuplot="gnuplot-nox"
....

+On FreeBSD, it might be necessary to set the environment variable `GDFONTPATH`
+to the path of the font directory:
+
+....
+export GDFONTPATH="/usr/local/share/fonts/webfonts/"
+....
+
.Environment variables of _dmweb(1)_ [[dmweb-env]]
[cols="4,12"]
|===
@@ -2265,7 +2269,7 @@ alias gnuplot="gnuplot-nox"

Copy the style sheet `dmpack.min.css` manually to the WWW root directory, or
create a symlink. Environment variables are used to configure *dmweb*. Transport
-security and authentication have to be provided by the web server. See section
+security and authentication have to be managed by the web server. See section
<> for an example configuration. 
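A sketch of such an environment-variable configuration for _lighttpd(1)_ (only `DM_READ_ONLY` is taken from the FastCGI examples in this guide; the remaining *dmweb* variables from the table above have to be added in the same way):

[source,lighttpd]
....
# Pass environment variables to the spawned process. DM_READ_ONLY is taken
# from the lighttpd examples in this guide; add the database variables from
# the table above in the same way.
setenv.add-environment = (
  "DM_READ_ONLY" => "0"
)
....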
.Plotting of time series through the *dmweb* user interface
@@ -2286,10 +2290,10 @@ image::dmweb.png[dmweb,align="center"]
| Location | server | client, server
| Configuration | environment variables | environment variables
| Authentication | HTTP Basic Auth | HTTP Basic Auth
-| Content-Types | CSV, JSON, JSON Lines, Namelist, Text | HTML5
+| Content Types | CSV, JSON, JSON Lines, Namelist, Text | HTML5
| HTTP Methods | GET, POST | GET, POST
| Database | SQLite 3 | SQLite 3
-| Read-Only Mode | Yes | Yes
+| Read-Only Mode | ✓ | ✓
|===

The following web applications are part of DMPACK:
@@ -2308,28 +2312,56 @@ them in _lighttpd(1)_. On FreeBSD, install the package with:

The web server is configured through `/usr/local/etc/lighttpd/lighttpd.conf`.
See the link:https://redmine.lighttpd.net/projects/lighttpd/wiki[lighttpd wiki]
-on how to configure the web server.
-
-In the listed examples, the DMPACK executables are assumend to be in
-`/usr/local/bin/`, but you may copy the programs to `/var/www/cgi-bin/` or any
-other directory. Set an appropriate owner, such as the one the server is running
-as.
+on how to configure the web server. In the listed examples, the DMPACK
+executables are assumed to be in `/usr/local/bin/`, but you may copy the
+programs to `/var/www/cgi-bin/` or any other directory. Set an appropriate
+owner, such as the one the server is running as.

=== Authentication [[web-auth]]

-In the _lighttpd(1)_ configuration file, set `auth.backend.htpasswd.userfile` to
-the path of the file that contains the HTTP Basic Auth credentials, or remove
-the related lines from the configuration if authentication is not desired. You
-can run _openssl(1)_ to add credentials to the _htpasswd_ user file:
+The HTTP-RPC API and the web interface will be publicly accessible if the web
+server is not configured to manage user authentication. HTTP Basic Auth is a
+simple method to authenticate users by name and password. 
The _lighttpd(1)_ web
+server includes an
+link:https://redmine.lighttpd.net/projects/lighttpd/wiki/Mod_auth[auth module]
+with various back-ends. In the web server configuration, set
+`auth.backend.htpasswd.userfile` to the path of the file that contains the
+credentials. You can run _openssl(1)_ to add one or more user accounts with
+hashed password (SHA-512) to the _htpasswd_ file, in this case
+`/usr/local/etc/lighttpd/htpasswd`:

....
-# printf ":`openssl passwd -crypt ''`\n" >> /usr/local/etc/lighttpd/htpasswd
+# read AUTH_USR
+# read AUTH_PWD
+# printf "%s:%s\n" "$AUTH_USR" `openssl passwd -6 "$AUTH_PWD"` \
+  >> /usr/local/etc/lighttpd/htpasswd
+....
+
+Enter the user name and the associated password after the `read` commands. As
+an alternative to storing the credentials in a flat file, we can select a
+different authentication back-end, for example, LDAP, PAM, or database. See the
+documentation of the module for further instructions. In the web server
+configuration, load modules `mod_auth` and `mod_authn_file`, select the
+back-end, and enable authentication globally or for specific routes:
+
+[source,lighttpd]
....
+# Load authentication modules.
+server.modules += ( "mod_auth", "mod_authn_file" )
+
+# Authentication back-end and path of user file.
+auth.backend = "htpasswd"
+auth.backend.htpasswd.userfile = "/usr/local/etc/lighttpd/htpasswd"

-Replace `` and `` with real values. Instead of a _htpasswd_
-file, we may select a different authentication back-end, for example, LDAP,
-MySQL/MariaDB, PostgreSQL, or SQLite 3. See the _lighttpd(1)_ auth module
-documentation for further instructions.
+# Protected routes.
+$HTTP["url"] =^ "/api/v1" {
+  auth.require = ( "" => (
+    "method" => "basic",
+    "realm" => "dmpack",
+    "require" => "valid-user"
+  ))
+}
+....

=== Cross-Origin Resource Sharing [[web-cors]]
@@ -2394,6 +2426,7 @@ access to the real IP of a client.

[source,lighttpd]
....
+# Listen on all network interfaces.
$SERVER["socket"] == "0.0.0.0:80" { }

# Load lighttpd modules. 
@@ -2403,7 +2436,7 @@ server.modules += (
  "mod_fastcgi"
)

-# Set authentication back-end and path of password file.
+# Set authentication back-end and path of user file.
auth.backend = "htpasswd"
auth.backend.htpasswd.userfile = "/usr/local/etc/lighttpd/htpasswd"
@@ -2443,10 +2476,8 @@ $HTTP["url"] =^ "/api/v1" {
The FastCGI socket will be written to `/var/run/lighttpd/sockets/dmapi.sock`.
Change `max-procs` to the desired number of FastCGI processes. Set the
environment variables to the locations of the databases. The databases must
-exist prior start.
-
-On FreeBSD, add the service to the system rc file `/etc/rc.conf` and start the
-server manually:
+exist prior to start. On FreeBSD, add the service to the system rc file
+`/etc/rc.conf` and start the server manually:

....
# sysrc lighttpd_enable="YES"
@@ -2470,6 +2501,7 @@ The example configuration may be appended to your `lighttpd.conf`:

[source,lighttpd]
....
+# Listen on all network interfaces.
$SERVER["socket"] == "0.0.0.0:80" { }

# Load lighttpd modules.
@@ -2493,7 +2525,7 @@ setenv.add-environment = (
  "DM_READ_ONLY" => "0"
)

-# Set authentication back-end and path of password file.
+# Set authentication back-end and path of user file.
auth.backend = "htpasswd"
auth.backend.htpasswd.userfile = "/usr/local/etc/lighttpd/htpasswd"
@@ -2542,395 +2574,499 @@ service to the system rc file `/etc/rc.conf` and start the web server manually:

If served locally, access the web application at http://127.0.0.1/dmpack/.

-== RPC API [[rpc-api]]
-
-All database records are returned in CSV format by default, with content type
-`text/comma-separated-values`. Status and error messages are returned as
-key–values pairs, with content type `text/plain`. 
+== Databases

-The following HTTP endpoints are provided by the RPC API:
+The DMPACK programs use three distinct databases to store
+<> records:

-[cols="3,2,7"]
-|===
-| Endpoint | HTTP Method | Description
+Observation Database:: Stores nodes, sensors, targets, observations, observation
+receivers, observation requests, and observation responses, with optional
+synchronisation tables for all record types.
+Log Database:: Stores all log messages in a single table.
+Beat Database:: Stores heartbeat messages by unique node id.

-| `/api/v1/` | GET | <>.
-| `/api/v1/beats` | GET | <>.
-| `/api/v1/logs` | GET | <>.
-| `/api/v1/nodes` | GET | <>.
-| `/api/v1/observs` | GET | <>.
-| `/api/v1/sensors` | GET | <>.
-| `/api/v1/targets` | GET | <>.
-| `/api/v1/timeseries` | GET | <>.
-| `/api/v1/beat` | GET, POST | <>.
-| `/api/v1/log` | GET, POST | <>.
-| `/api/v1/node` | GET, POST | <>.
-| `/api/v1/observ` | GET, POST | <>.
-| `/api/v1/sensor` | GET, POST | <>.
-| `/api/v1/target` | GET, POST | <>
-|===
+The databases are usually located in directory `/var/dmpack/`.

-=== Read Service Status [[api-root]]
+=== Administration

-Returns <> in API status format as `text/plain`.
+The _sqlite3(1)_ program is a stand-alone command-line shell for SQLite database
+access that allows the user to execute arbitrary SQL statements. Third-party
+programs provide an additional graphical user interface:

-==== Endpoint
+link:https://sqlitebrowser.org/[DB Browser for SQLite] (DB4S):: A
+spreadsheet-like visual interface for Linux, Unix, macOS, and Windows. (MPLv2,
+GPLv3)
+link:https://www.heidisql.com/[HeidiSQL]:: A free database administration tool
+for MariaDB, MySQL, MS SQL Server, PostgreSQL, and SQLite. For Windows only.
+(GPLv2)
+link:https://www.phpliteadmin.org/[phpLiteAdmin]:: A web front-end for SQLite
+database administration written in PHP. (GPLv3)
+link:https://github.com/coleifer/sqlite-web[SQLite Web]:: A web-based SQLite
+database browser in Python. 
(MIT)

-* `/api/v1/`
+=== Entity–Relationship Model

-==== HTTP Methods
+.Beat database
+[#db-uml-beat]
+image::beat.svg[UML,align="center",scaledwidth=25%]

-* GET
+.Observation database
+[#db-uml-observ]
+image::observ.svg[UML,align="center"]

-==== Responses
+.Log database
+[#db-uml-log]
+image::log.svg[UML,align="center",scaledwidth=25%]

-.GET
-[cols="1,9"]
-|===
-| Status | Description
+=== Examples

-| `200` | Default response.
-| `401` | Unauthorised.
-| `500` | Server error.
-|===
+Write all schemas of an observation database to file `schema.sql`, using the
+_sqlite3(1)_ command-line tool:

-==== Example
+....
+$ sqlite3 /var/dmpack/observ.sqlite ".schema" > schema.sql
+....

-Return the HTTP-RPC service status:
+To dump an observation database as raw SQL to `observ.sql`:

....
-$ curl -s -u : --header "Accept: text/plain" \
-    "http://localhost/api/v1/"
+$ sqlite3 /var/dmpack/observ.sqlite ".dump" > observ.sql
....

-=== Read Beats [[api-beats]]
-
-Returns all heartbeats in <>, <>, or
-JSON Lines format from database.
+Dump only table `logs` of a log database:

-==== Endpoint
+....
+$ sqlite3 /var/dmpack/log.sqlite ".dump 'logs'" > log.sql
+....

-* `/api/v1/beats`
-* `/api/v1/beats?header=<0|1>`
+== System Configuration [[sys-conf]]

-==== HTTP Methods
+Additional changes to the system configuration should be considered to prevent
+issues while conducting long-term monitoring.

-* GET
+=== Time Zone [[sys-conf-tz]]

-==== Request Parameters
+The local time zone of the sensor client should be set to a zone without
+daylight saving time. For instance, time zone `Europe/Berlin` implies Central
+European Summer Time (CEST), which is usually not desired for long-term
+observations, as it leads to time jumps. Instead, use time zone `Etc/GMT-1` or
+`UTC` in this case. (Note the inverted sign convention of the POSIX `Etc/` zone
+names: `Etc/GMT-1` corresponds to UTC+1.)

-[cols="3,2,5"]
-|===
-| GET Parameter | Type | Description
+==== Linux [[sys-conf-tz-linux]]
-|===
+On Linux, list all time zones and set the preferred one with _timedatectl(1)_:

-==== Request Headers
+....
+# timedatectl list-timezones
+# timedatectl set-timezone Etc/GMT-1
+....

-.GET
-[cols="1,9"]
-|===
-| Name | Values
+==== FreeBSD [[sys-conf-tz-freebsd]]

-| Accept | `application/json`, `application/jsonl`, `text/comma-separated-values`
-|===
+On FreeBSD, configure the time zone using:

-==== Responses
+....
+# tzsetup
+....

-.GET
-[cols="1,9"]
-|===
-| Status | Description
+=== Time Synchronisation [[sys-conf-ntp]]

-| `200` | Beats are returned.
-| `401` | Unauthorised.
-| `404` | No beats found.
-| `500` | Server error.
-| `503` | Database error.
-|===
+The system time should be updated periodically by synchronising it with network
+time servers. A Network Time Protocol (NTP) client has to be installed and
+configured to enable the synchronisation.

-==== Example
+==== Linux [[sys-conf-ntp-linux]]

-Return beats of all nodes in JSON format, pretty-print the result with _jq(1)_:
+On Debian Linux, install the NTP package:

....
-$ curl -s -u : --header "Accept: application/json" \
-    "http://localhost/api/v1/beats" | jq
+# apt-get install ntp
....

-=== Read Logs [[api-logs]]
-
-Returns logs of a given node and time range in <>,
-<>, or JSON Lines format from database. Node id and time
-range are mandatory.
+Query the NTP servers to synchronise with:

-==== Endpoint
+....
+# ntpq -p
+....

-* `/api/v1/logs?node_id=&from=&to=`
+The system time should be updated now:

-==== HTTP Methods
+....
+# date -R
+....

-* GET
+On error, try to reconfigure the NTP service:

-==== Request Parameters
+....
+# dpkg-reconfigure ntp
+....

-[cols="3,2,5"]
-|===
-| GET Parameter | Type | Description
+==== FreeBSD [[sys-conf-ntp-freebsd]]

-| `node_id` | string | Node id.
-| `from` | string | Start of time range (ISO 8601).
-| `to` | string | End of time range (ISO 8601).
-| `header` | integer | Add CSV header (0 or 1). 
-|===
+Set the current date and time initially by passing the IP or FQDN of the NTP
+server to _ntpdate(1)_:

-==== Request Headers
+....
+# ntpdate -b ptbtime1.ptb.de
+....

-.GET
-[cols="1,9"]
-|===
-| Name | Values
+The NTP daemon _ntpd(8)_ is configured through file `/etc/ntp.conf`. If
+preferred, we can replace the existing NTP server pool `0.freebsd.pool.ntp.org`
+with a single server, for example:

-| Accept | `application/json`, `application/jsonl`, `text/comma-separated-values`
-|===
+....
+server ptbtime1.ptb.de iburst
+....

-==== Responses
+Add the following entries to `/etc/rc.conf`:

-.GET
-[cols="1,9"]
-|===
-| Status | Description
+....
+ntpd_enable="YES"
+ntpd_sync_on_start="YES"
+ntpd_flags="-g"
+....

-| `200` | Logs are returned.
-| `400` | Invalid request.
-| `401` | Unauthorised.
-| `404` | No logs found.
-| `500` | Server error.
-| `503` | Database error.
-|===
+Start the _ntpd(8)_ service:

-==== Example
+....
+# service ntpd start
+....

-Return all logs of node `dummy-node` and year 2023 in CSV format:
+=== Power Saving [[sys-conf-power-saving]]

-....
-$ curl -s -u : --header "Accept: text/comma-separated-values" \
-    "http://localhost/api/v1/logs?node_id=dummy-node&from=2023&to=2024"
-....
-
-=== Read Nodes [[api-nodes]]
-
-Returns all nodes in <>, <>, or JSON
-Lines format from database.
-
-==== Endpoint
+On Linux, power saving for USB devices may be enabled by default. This can cause
+issues if sensors are attached through a USB adapter. USB power saving is
+enabled if the kernel boot parameter `usbcore.autosuspend` is not `-1`:

-* `/api/v1/nodes`
-* `/api/v1/nodes?header=<0|1>`
+....
+# cat /sys/module/usbcore/parameters/autosuspend
+2
+....

-==== HTTP Methods
+We can update the boot loader to turn auto-suspend off. Edit `/etc/default/grub`
+and change `GRUB_CMDLINE_LINUX_DEFAULT` to:

-* GET
+....
+GRUB_CMDLINE_LINUX_DEFAULT="quiet usbcore.autosuspend=-1"
+.... 
-==== Request Parameters +Then, update the boot loader: -[cols="3,2,5"] -|=== -| GET Parameter | Type | Description +.... +# update-grub +.... -| `header` | integer | Add CSV header (0 or 1). -|=== +The system has to be rebooted for the changes to take effect. -==== Request Headers +=== Message Queues [[sys-conf-mqueue]] -.GET -[cols="1,9"] -|=== -| Name | Values +The operating system must have POSIX message queues enabled to run DMPACK +programs on sensor nodes. -| Accept | `application/json`, `application/jsonl`, `text/comma-separated-values` -|=== +==== Linux [[sys-conf-mqueue-linux]] -==== Responses +The POSIX message queue file system should be already mounted by default. +Otherwise, run: -.GET -[cols="1,9"] -|=== -| Status | Description +.... +# mkdir -p /dev/mqueue +# mount -t mqueue none /dev/mqueue +.... -| `200` | Nodes are returned. -| `401` | Unauthorised. -| `404` | No nodes found. -| `500` | Server error. -| `503` | Database error. -|=== +Set the maximum number of messages and the maximum message size to some +reasonable values: -==== Example +.... +# sysctl fs.mqueue.msg_max=32 +# sysctl fs.mqueue.msgsize_max=16384 +.... -Return all nodes in database as JSON array: +The maximum message size has to be at least 16384 bytes. Add the settings to +`/etc/sysctl.conf` to make them permanent: .... -$ curl -s -u : --header "Accept: application/json" \ - "http://localhost/api/v1/nodes" +fs.mqueue.msg_max=32 +fs.mqueue.msgsize_max=16384 .... -=== Read Observations [[api-observs]] - -Returns observations of given node, sensor, target, and time range from -database, in <>, <>, or JSON Lines -format. +==== FreeBSD [[sys-conf-mqueue-freebsd]] -==== Endpoint +On FreeBSD, make sure the kernel module `mqueuefs` is loaded, and the message +queue file system is mounted: -* `/api/v1/observs?` +.... +# kldstat -m mqueuefs +Id Refs Name +522 1 mqueuefs +.... -==== HTTP Methods +Otherwise, we can simply load and mount the file system: -* GET +.... 
+# kldload mqueuefs
+# mkdir -p /mnt/mqueue
+# mount -t mqueuefs null /mnt/mqueue
+....

To load message queues at system start, add the module `mqueuefs` to
`/etc/rc.conf`, and the file system to `/etc/fstab`:

....
# sysrc kld_list+="mqueuefs"
# echo "null /mnt/mqueue mqueuefs rw 0 0" >> /etc/fstab
....

Additionally, we may increase the system limits of POSIX message queues with
_sysctl(8)_, or in `/etc/sysctl.conf`. The defaults are:

....
# sysctl kern.mqueue.maxmsg
kern.mqueue.maxmsg: 32
# sysctl kern.mqueue.maxmsgsize
kern.mqueue.maxmsgsize: 16384
....

The maximum message size has to be at least 16384 bytes.

=== Cron [[sys-conf-cron]]

On Unix-like operating systems, link:https://en.wikipedia.org/wiki/Cron[cron] is
usually used to run jobs periodically. For instance, in order to update an XML
feed or to generate HTML reports at regular intervals, add a schedule of the
task to perform to the _crontab(5)_ file of a local user. For example, to edit
the cron jobs of user `www` with _crontab(1)_ run:

....
# crontab -u www -e
.... 
-|=== +The following _crontab(5)_ entry adds a job to generate reports every hour, +using utility script `mkreport.sh`: -==== Example +[source,crontab] +.... +SHELL=/bin/sh +MAILTO=/dev/null +# Create reports every hour, suppress logging. +@hourly -q /usr/local/share/dmpack/mkreport.sh +.... -Return all observations related to node `dummy-node`, sensor `dummy-sensor`, and -target `dummy-target` of a single month in JSON format, pretty-print the result -with _jq(1)_: +Status mails and logging are disabled. The shell script `mkreport.sh` must have +the execution bits set. Modify the script according to your set-up. The +parameter `-q` disables syslog messages. Additionally, we may update an Atom +XML feed of logs by running <> every five minutes: +[source,crontab] .... -$ curl -s -u : --header "Accept: application/json" \ - "http://localhost/api/v1/observs?node_id=dummy-node&sensor_id=dummy-sensor\ -&target_id=dummy-target&from=2023-01&to=2024-01" | jq +*/5 * * * * -q /usr/local/bin/dmfeed --config /usr/local/etc/dmpack/dmfeed.conf .... -=== Read Sensors [[api-sensors]] +The feed is updated only if new logs have arrived in the meantime, unless option +`--force` is passed as an additional argument. -Returns all sensors in <>, <>, or -JSON Lines format from database. +== Third-Party Software -==== Endpoint +This section covers the custom installation of third-party software. -* `/api/v1/sensors` -* `/api/v1/sensors?header=<0|1>` +=== HDFView [[third-party-hdfview]] -==== HTTP Methods +HDFView is a Java-based visual tool for browsing and editing HDF5 and HDF4 +files. Application images for Linux, macOS, and Windows are available for +download on the website of +link:https://www.hdfgroup.org/downloads/hdfview/[The HDF Group]. On FreeBSD, the +program has to be compiled from source. 
The following build dependencies are +required: -* GET +* link:https://www.freshports.org/devel/apache-ant[devel/apache-ant] +* link:https://www.freshports.org/devel/cmake[devel/cmake] +* link:https://www.freshports.org/devel/git[devel/git] +* link:https://www.freshports.org/java/openjdk19[java/openjdk19] (or any other version) +* link:https://www.freshports.org/lang/gcc[lang/gcc] +* link:https://www.freshports.org/x11-toolkits/swt[x11-toolkits/swt] -==== Request Parameters +The HDF4 and HDF5 libraries have to be built from source as well. -[cols="3,2,5"] -|=== -| GET Parameter | Type | Description +==== Building HDF4 -| `header` | integer | Add CSV header (0 or 1). -|=== +Clone the HDF4 repository and compile with CMake: -==== Request Headers +.... +$ cd /tmp/ +$ git clone --depth 1 https://github.com/HDFGroup/hdf4.git +$ cd hdf4/ +$ mkdir build && cd build/ +$ cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE:STRING=Release \ + -DBUILD_SHARED_LIBS:BOOL=ON -DBUILD_TESTING:BOOL=OFF \ + -DHDF4_BUILD_TOOLS:BOOL=OFF -DHDF4_BUILD_EXAMPLES=OFF \ + -DHDF4_BUILD_FORTRAN=ON -DHDF4_BUILD_JAVA=ON \ + -DZLIB_LIBRARY:FILEPATH=/usr/lib/libz.so \ + -DZLIB_INCLUDE_DIR:PATH=/usr/include \ + -DCMAKE_Fortran_COMPILER=gfortran -DCMAKE_C_COMPILER=gcc .. +$ cmake --build . --config Release +.... -.GET -[cols="1,9"] -|=== -| Name | Values +Afterwards, copy `java/src/hdf/hdflib/jarhdf-4.3.0.jar` to `bin/` in the HDF4 +build directory. -| Accept | `application/json`, `application/jsonl`, `text/comma-separated-values` -|=== +==== Building HDF5 [[third-party-hdf5]] -==== Responses +In the next step, clone the HDF5 repository and build with CMake, too: -.GET -[cols="1,9"] -|=== -| Status | Description +.... 
+$ cd /tmp/ +$ git clone --depth 1 https://github.com/HDFGroup/hdf5.git +$ cd hdf5/ +$ mkdir build && cd build/ +$ cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE:STRING=Release \ + -DBUILD_SHARED_LIBS:BOOL=ON -DBUILD_TESTING:BOOL=OFF \ + -DHDF5_BUILD_TOOLS:BOOL=OFF -DHDF5_BUILD_EXAMPLES=OFF \ + -DHDF5_BUILD_FORTRAN=ON -DHDF5_BUILD_JAVA=ON \ + -DZLIB_LIBRARY:FILEPATH=/usr/lib/libz.so \ + -DZLIB_INCLUDE_DIR:PATH=/usr/include \ + -DCMAKE_Fortran_COMPILER=gfortran -DCMAKE_C_COMPILER=gcc .. +$ cmake --build . --config Release +.... -| `200` | Sensors are returned. -| `401` | Unauthorised. -| `404` | No sensors found. -| `500` | Server error. -| `503` | Database error. -|=== +Then, copy `java/src/hdf/hdf5lib/jarhdf5-1.15.0.jar` and `src/libhdf5.settings` +to `bin/` in the HDF5 build directory. -==== Example +==== Building HDFView -Return all sensors of node `dummy-node` in JSON format: +Finally, clone the HDFView repository, set the build properties, and compile +with _ant(1)_: .... -$ curl -s -u : --header "Accept: application/json" \ - "http://localhost/api/v1/sensors?node_id=dummy-node" +$ cd /tmp/ +$ git clone --depth 1 https://github.com/HDFGroup/hdfview.git +$ cd hdfview/ .... -=== Read Targets [[api-targets]] - -Returns all targets in <>, <>, or -JSON Lines format from database. - -==== Endpoint +Set the following properties in `build.properties`: -* `/api/v1/targets` -* `/api/v1/targets?header=<0|1>` +.... +hdf.lib.dir = /tmp/hdf4/build/bin +hdf5.lib.dir = /tmp/hdf5/build/bin +hdf5.plugin.dir = /tmp/hdf5/build/bin/plugin +build.debug = false +.... -==== HTTP Methods +Build with _ant(1)_: -* GET +.... +$ ant run +.... -==== Request Parameters +The binaries are written to `build/HDF_Group/HDFView/99.99.99/`. The archive +`swt.jar` has to be replaced with the version installed system-wide: -[cols="3,2,5"] -|=== -| GET Parameter | Type | Description +.... +$ cp /usr/local/share/java/classes/swt.jar build/HDF_Group/HDFView/99.99.99/ +.... 
-| `header` | integer | Add CSV header (0 or 1). -|=== +Replace the last line in `build/HDF_Group/HDFView/99.99.99/hdfview.sh` with: -==== Request Headers +.... +java "$JAVAOPTS" -Djava.library.path=".:/usr/local/lib" -Dhdfview.root="." \ + -cp "./*" hdf.view.HDFView "$@" +.... -.GET -[cols="1,9"] +To start HDFView, run: + +.... +$ cd build/HDF_Group/HDFView/99.99.99/ +$ sh hdfview.sh +.... + +=== SQLite 3 [[third-party-sqlite]] + +DMPACK depends on SQLite version 3.39.0 or newer. If the requirement is not met +by the operating system, build the latest version from source: + +.... +$ cd /tmp/ +$ git clone --depth 1 https://github.com/sqlite/sqlite.git +$ cd sqlite/ +$ ./configure +.... + +You may want to disable Tcl if it is not installed locally: + +.... +$ ./configure --disable-tcl +.... + +Install SQLite 3 and test the command-line utility: + +.... +$ sudo make install +$ /usr/local/bin/sqlite3 --version +.... + +=== Zstandard [[third-party-zstd]] + +If the Zstandard version provided in the Linux package repository is too old to +be compatible with DMPACK, we can simply compile the library from source. Git +and CMake must be installed: + +.... +$ cd /tmp/ +$ git clone --depth 1 https://github.com/facebook/zstd.git +$ cd zstd/build/cmake/ +$ mkdir build && cd build/ +$ cmake .. +$ cmake --build . --config Release +$ sudo cmake --install . --prefix /usr/local +.... + +Execute the Zstandard command-line utility to verify the installation: + +.... +$ /usr/local/bin/zstd --version +.... + +== RPC API [[rpc-api]] + +All database records are returned in CSV format by default, with content type +`text/comma-separated-values`. Status and error messages are returned as +key–value pairs, with content type `text/plain`. + +The following HTTP endpoints are provided by the RPC API: + +[cols="3,2,7"] |=== -| Name | Values +| Endpoint | HTTP Method | Description -| Accept | `application/json`, `application/jsonl`, `text/comma-separated-values` +| `/api/v1/` | GET | <>.
+| `/api/v1/beats` | GET | <>. +| `/api/v1/logs` | GET | <>. +| `/api/v1/nodes` | GET | <>. +| `/api/v1/observs` | GET | <>. +| `/api/v1/sensors` | GET | <>. +| `/api/v1/targets` | GET | <>. +| `/api/v1/timeseries` | GET | <>. +| `/api/v1/beat` | GET, POST | <>. +| `/api/v1/log` | GET, POST | <>. +| `/api/v1/node` | GET, POST | <>. +| `/api/v1/observ` | GET, POST | <>. +| `/api/v1/sensor` | GET, POST | <>. +| `/api/v1/target` | GET, POST | <> |=== +=== Read Service Status [[api-root]] + +Returns <> in API status format as `text/plain`. + +[discrete] +==== Endpoint + +* `/api/v1/` + +[discrete] +==== HTTP Methods + +* GET + +[discrete] ==== Responses .GET @@ -2938,54 +3074,48 @@ JSON Lines format from database. |=== | Status | Description -| `200` | Targets are returned. +| `200` | Default response. | `401` | Unauthorised. -| `404` | No targets found. | `500` | Server error. -| `503` | Database error. |=== +[discrete] ==== Example -Return all targets in CSV format: +Return the HTTP-RPC service status: .... -$ curl -s -u : --header "Accept: text/comma-separated-values" \ - "http://localhost/api/v1/targets" +$ curl -s -u : --header "Accept: text/plain" \ + "http://localhost/api/v1/" .... -=== Read Time Series [[api-timeseries]] +=== Read Beats [[api-beats]] -Returns time series as observation views or <> (X/Y -records) in CSV format from database. In comparison to the -<>, the time series include only a single -response, selected by name. +Returns all heartbeats in <>, <>, or +JSON Lines format from database. +[discrete] ==== Endpoint -* `/api/v1/timeseries?` +* `/api/v1/beats` +* `/api/v1/beats?header=<0|1>` +[discrete] ==== HTTP Methods * GET +[discrete] ==== Request Parameters [cols="3,2,5"] |=== | GET Parameter | Type | Description -| `node_id` | string | Node id. -| `sensor_id` | string | Sensor id. -| `target_id` | string | Target id. -| `response` | string | Response name. -| `from` | string | Start of time range (ISO 8601). 
-| `to` | string | End of time range (ISO 8601). -| `limit` | integer | Max. number of results (optional). | `header` | integer | Add CSV header (0 or 1). -| `view` | integer | Return observation views instead of data points (0 or 1). |=== +[discrete] ==== Request Headers .GET @@ -2993,9 +3123,10 @@ response, selected by name. |=== | Name | Values -| Accept | `text/comma-separated-values` +| Accept | `application/json`, `application/jsonl`, `text/comma-separated-values` |=== +[discrete] ==== Responses .GET @@ -3003,59 +3134,53 @@ response, selected by name. |=== | Status | Description -| `200` | Observations are returned. -| `400` | Invalid request. +| `200` | Beats are returned. | `401` | Unauthorised. -| `404` | No observations found. +| `404` | No beats found. | `500` | Server error. | `503` | Database error. |=== +[discrete] ==== Example -Return time series of responses `dummy` related to node `dummy-node`, sensor -`dummy-sensor`, and target `dummy-sensor`, from 2023 to 2024, as X/Y data in CSV -format: +Return beats of all nodes in JSON format, pretty-print the result with _jq(1)_: .... -$ curl -s -u : --header "Accept: text/comma-separated-values" \ - "http://localhost/api/v1/timeseries?node_id=dummy-node&sensor_id=dummy-sensor\ -&target_id=dummy-target&response=dummy&from=2023&to=2024" +$ curl -s -u : --header "Accept: application/json" \ + "http://localhost/api/v1/beats" | jq .... -For additional meta information, add the parameter `view=1`. - -=== Read or Update Beat [[api-beat]] - -Returns heartbeat of a given node in <>, -<>, or <> format from database. - -On POST, adds or updates heartbeat given in Namelist format. Optionally, the -payload may be deflate or zstd compressed. The API returns HTTP 201 Created if -the beat was accepted. +=== Read Logs [[api-logs]] -If HTTP Basic Auth is used, the user name must match the `node_id` attribute of -the beat, otherwise, the request will be rejected as unauthorised (HTTP 401). 
+Returns logs of a given node and time range in <>, +<>, or JSON Lines format from database. Node id and time +range are mandatory. +[discrete] ==== Endpoint -* `/api/v1/beat` -* `/api/v1/beat?node_id=` +* `/api/v1/logs?node_id=&from=&to=` +[discrete] ==== HTTP Methods * GET -* POST +[discrete] ==== Request Parameters [cols="3,2,5"] |=== -| GET Parameter | Type | Description +| GET Parameter | Type | Description -| `node_id` | string | Node id. +| `node_id` | string | Node id. +| `from` | string | Start of time range (ISO 8601). +| `to` | string | End of time range (ISO 8601). +| `header` | integer | Add CSV header (0 or 1). |=== +[discrete] ==== Request Headers .GET @@ -3063,18 +3188,10 @@ the beat, otherwise, the request will be rejected as unauthorised (HTTP 401). |=== | Name | Values -| Accept | `application/json`, `application/namelist`, `text/comma-separated-values` -|=== - -.POST -[cols="2,8"] -|=== -| Name | Values - -| Content-Encoding | `deflate`, `zstd` (optional) -| Content-Type | `application/namelist` +| Accept | `application/json`, `application/jsonl`, `text/comma-separated-values` |=== +[discrete] ==== Responses .GET @@ -3082,68 +3199,51 @@ the beat, otherwise, the request will be rejected as unauthorised (HTTP 401). |=== | Status | Description -| `200` | Beat is returned. +| `200` | Logs are returned. | `400` | Invalid request. | `401` | Unauthorised. -| `404` | Beat not found. -| `500` | Server error. -| `503` | Database error. -|=== - -.POST -[cols="1,9"] -|=== -| Status | Description - -| `201` | Beat was accepted. -| `400` | Invalid request or payload. -| `401` | Unauthorised. -| `413` | Payload too large. -| `415` | Invalid payload format. +| `404` | No logs found. | `500` | Server error. | `503` | Database error. |=== +[discrete] ==== Example -Return the heartbeat of node `dummy-node` in JSON format: +Return all logs of node `dummy-node` and year 2023 in CSV format: .... 
-$ curl -s -u : --header "Accept: application/json" \ - "http://localhost/api/v1/beat?node_id=dummy-node" +$ curl -s -u : --header "Accept: text/comma-separated-values" \ + "http://localhost/api/v1/logs?node_id=dummy-node&from=2023&to=2024" .... -=== Read or Create Log [[api-log]] - -Returns single log of passed id in <>, -<>, or <> format from database. - -On POST, adds log in Namelist format to database. Optionally, the payload may -be deflate or zstd compressed. The API returns HTTP 201 Created if the log was -accepted. +=== Read Nodes [[api-nodes]] -If HTTP Basic Auth is used, the user name must match the `node_id` attribute of -the log, otherwise, the request will be rejected as unauthorised (HTTP 401). +Returns all nodes in <>, <>, or JSON +Lines format from database. +[discrete] ==== Endpoint -* `/api/v1/log` -* `/api/v1/log?id=` +* `/api/v1/nodes` +* `/api/v1/nodes?header=<0|1>` +[discrete] ==== HTTP Methods * GET -* POST +[discrete] ==== Request Parameters [cols="3,2,5"] |=== -| GET Parameter | Type | Description +| GET Parameter | Type | Description -| `id` | string | Log id (UUID). +| `header` | integer | Add CSV header (0 or 1). |=== +[discrete] ==== Request Headers .GET @@ -3151,18 +3251,10 @@ the log, otherwise, the request will be rejected as unauthorised (HTTP 401). |=== | Name | Values -| Accept | `application/json`, `application/namelist`, `text/comma-separated-values` -|=== - -.POST -[cols="2,8"] -|=== -| Name | Values - -| Content-Encoding | `deflate`, `zstd` (optional) -| Content-Type | `application/namelist` +| Accept | `application/json`, `application/jsonl`, `text/comma-separated-values` |=== +[discrete] ==== Responses .GET @@ -3170,69 +3262,57 @@ the log, otherwise, the request will be rejected as unauthorised (HTTP 401). |=== | Status | Description -| `200` | Log is returned. -| `400` | Invalid request. +| `200` | Nodes are returned. | `401` | Unauthorised. -| `404` | Log not found. +| `404` | No nodes found. | `500` | Server error. 
| `503` | Database error. |=== -.POST -[cols="1,9"] -|=== -| Status | Description +[discrete] +==== Example -| `201` | Log was accepted. -| `400` | Invalid request or payload. -| `401` | Unauthorised. -| `409` | Log exists in database. -| `413` | Payload too large. -| `415` | Invalid payload format. -| `500` | Server error. -| `503` | Database error. -|=== - -==== Example - -Return a specific log in JSON format: +Return all nodes in database as JSON array: .... $ curl -s -u : --header "Accept: application/json" \ - "http://localhost/api/v1/log?id=51adca2f1d4e42a5829fd1a378c8b6f1" + "http://localhost/api/v1/nodes" .... -=== Read or Create Node [[api-node]] - -Returns node of given id in <>, <>, or -<> format from database. - -On POST, adds node in Namelist format to database. Optionally, the payload may -be deflate or zstd compressed. The API returns HTTP 201 Created if the node was -accepted. +=== Read Observations [[api-observs]] -If HTTP Basic Auth is used, the user name must match the `node_id` attribute of -the node, otherwise, the request will be rejected as unauthorised (HTTP 401). +Returns observations of given node, sensor, target, and time range from +database, in <>, <>, or JSON Lines +format. +[discrete] ==== Endpoint -* `/api/v1/node` -* `/api/v1/node?id=` +* `/api/v1/observs?` +[discrete] ==== HTTP Methods * GET -* POST +[discrete] ==== Request Parameters [cols="3,2,5"] |=== -| GET Parameter | Type | Description +| GET Parameter | Type | Description -| `id` | string | Node id. +| `node_id` | string | Node id. +| `sensor_id` | string | Sensor id. +| `target_id` | string | Target id. +| `response` | string | Response name. +| `from` | string | Start of time range (ISO 8601). +| `to` | string | End of time range (ISO 8601). +| `limit` | integer | Max. number of results (optional). +| `header` | integer | Add CSV header (0 or 1). 
|=== +[discrete] ==== Request Headers .GET @@ -3240,18 +3320,10 @@ the node, otherwise, the request will be rejected as unauthorised (HTTP 401). |=== | Name | Values -| Accept | `application/json`, `application/namelist`, `text/comma-separated-values` -|=== - -.POST -[cols="2,8"] -|=== -| Name | Values - -| Content-Encoding | `deflate`, `zstd` (optional) -| Content-Type | `application/namelist` +| Accept | `application/json`, `application/jsonl`, `text/comma-separated-values` |=== +[discrete] ==== Responses .GET @@ -3259,70 +3331,54 @@ the node, otherwise, the request will be rejected as unauthorised (HTTP 401). |=== | Status | Description -| `200` | Node is returned. +| `200` | Observations are returned. | `400` | Invalid request. | `401` | Unauthorised. -| `404` | Node not found. -| `500` | Server error. -| `503` | Database error. -|=== - -.POST -[cols="1,9"] -|=== -| Status | Description - -| `201` | Node was accepted. -| `400` | Invalid request or payload. -| `401` | Unauthorised. -| `409` | Node exists in database. -| `413` | Payload too large. -| `415` | Invalid payload format. +| `404` | No observations found. | `500` | Server error. | `503` | Database error. |=== +[discrete] ==== Example -Return node `dummy-node` in JSON format: +Return all observations related to node `dummy-node`, sensor `dummy-sensor`, and +target `dummy-target` of a single year in JSON format, pretty-print the result +with _jq(1)_: .... $ curl -s -u : --header "Accept: application/json" \ - "http://localhost/api/v1/node?node_id=dummy-node" + "http://localhost/api/v1/observs?node_id=dummy-node&sensor_id=dummy-sensor\ +&target_id=dummy-target&from=2023-01&to=2024-01" | jq .... -=== Read or Create Observation [[api-observ]] - -Returns observation of given id from database, in <>, -<>, or <> format. - -On POST, adds observation in Namelist format to database. Optionally, the -payload may be deflate or zstd compressed. The API returns HTTP 201 Created if -the observation was accepted.
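Responses in JSON Lines format contain one record per line, so they can be inspected with standard Unix tools and no JSON parser. A minimal sketch that counts the observations in a JSON Lines response; the two piped-in records are made up and stand in for actual API output:

```shell
# Count records in a JSON Lines response: one observation per line, so the
# record count equals the line count. The two records are illustrative only.
printf '%s\n' \
    '{"id":"9bb894c779e544dab1bd7e7a07ae507d","node_id":"dummy-node"}' \
    '{"id":"7b98ae11d80b4ee392fe1a74d2c05809","node_id":"dummy-node"}' |
awk 'END { print NR }'
```

The pipeline prints `2`.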
+=== Read Sensors [[api-sensors]] -If HTTP Basic Auth is used, the user name must match the `node_id` attribute of -the observation, otherwise, the request will be rejected as unauthorised (HTTP -401). +Returns all sensors in <>, <>, or +JSON Lines format from database. +[discrete] ==== Endpoint -* `/api/v1/observ` -* `/api/v1/observ?id=` +* `/api/v1/sensors` +* `/api/v1/sensors?header=<0|1>` +[discrete] ==== HTTP Methods * GET -* POST +[discrete] ==== Request Parameters [cols="3,2,5"] |=== -| GET Parameter | Type | Description +| GET Parameter | Type | Description -| `id` | string | Observation id (UUID). +| `header` | integer | Add CSV header (0 or 1). |=== +[discrete] ==== Request Headers .GET @@ -3330,18 +3386,10 @@ the observation, otherwise, the request will be rejected as unauthorised (HTTP |=== | Name | Values -| Accept | `application/json`, `application/namelist`, `text/comma-separated-values` -|=== - -.POST -[cols="2,8"] -|=== -| Name | Values - -| Content-Encoding | `deflate`, `zstd` (optional) -| Content-Type | `application/namelist` +| Accept | `application/json`, `application/jsonl`, `text/comma-separated-values` |=== +[discrete] ==== Responses .GET @@ -3349,70 +3397,50 @@ the observation, otherwise, the request will be rejected as unauthorised (HTTP |=== | Status | Description -| `200` | Observation is returned. -| `400` | Invalid request. -| `401` | Unauthorised. -| `404` | Observation not found. -| `500` | Server error. -| `503` | Database error. -|=== - -.POST -[cols="1,9"] -|=== -| Status | Description - -| `201` | Observation was accepted. -| `400` | Invalid request or payload. +| `200` | Sensors are returned. | `401` | Unauthorised. -| `409` | Observation exists in database. -| `413` | Payload too large. -| `415` | Invalid payload format. +| `404` | No sensors found. | `500` | Server error. | `503` | Database error. 
|=== +[discrete] ==== Example -Return a specific observation in JSON format: +Return all sensors of node `dummy-node` in JSON format: .... $ curl -s -u : --header "Accept: application/json" \ - "http://localhost/api/v1/observ?id=7b98ae11d80b4ee392fe1a74d2c05809" + "http://localhost/api/v1/sensors?node_id=dummy-node" .... -=== Read or Create Sensor [[api-sensor]] - -Returns sensor of given id in <>, -<>, or <> format from -database. - -On POST, adds node in Namelist format to database. Optionally, the payload may -be deflate or zstd compressed. The API returns HTTP 201 Created if the sensor -was accepted. +=== Read Targets [[api-targets]] -If HTTP Basic Auth is used, the user name must match the `node_id` attribute of -the sensor, otherwise, the request will be rejected as unauthorised (HTTP 401). +Returns all targets in <>, <>, or +JSON Lines format from database. +[discrete] ==== Endpoint -* `/api/v1/sensor` -* `/api/v1/sensor?id=` +* `/api/v1/targets` +* `/api/v1/targets?header=<0|1>` +[discrete] ==== HTTP Methods * GET -* POST +[discrete] ==== Request Parameters [cols="3,2,5"] |=== -| GET Parameter | Type | Description +| GET Parameter | Type | Description -| `id` | string | Sensor id. +| `header` | integer | Add CSV header (0 or 1). |=== +[discrete] ==== Request Headers .GET @@ -3420,18 +3448,10 @@ the sensor, otherwise, the request will be rejected as unauthorised (HTTP 401). |=== | Name | Values -| Accept | `application/json`, `application/namelist`, `text/comma-separated-values` -|=== - -.POST -[cols="2,8"] -|=== -| Name | Values - -| Content-Encoding | `deflate`, `zstd` (optional) -| Content-Type | `application/namelist` +| Accept | `application/json`, `application/jsonl`, `text/comma-separated-values` |=== +[discrete] ==== Responses .GET @@ -3439,67 +3459,59 @@ the sensor, otherwise, the request will be rejected as unauthorised (HTTP 401). |=== | Status | Description -| `200` | Sensor is returned. -| `400` | Invalid request. -| `401` | Unauthorised. 
-| `404` | Sensor not found. -| `500` | Server error. -| `503` | Database error. -|=== - -.POST -[cols="1,9"] -|=== -| Status | Description - -| `201` | Sensor was accepted. -| `400` | Invalid request or payload. +| `200` | Targets are returned. | `401` | Unauthorised. -| `409` | Sensor exists in database. -| `413` | Payload too large. -| `415` | Invalid payload format. +| `404` | No targets found. | `500` | Server error. | `503` | Database error. |=== +[discrete] ==== Example -Return sensor `dummy-sensor` in JSON format: +Return all targets in CSV format: .... -$ curl -s -u : --header "Accept: application/json" \ - "http://localhost/api/v1/sensor?id=dummy-sensor" +$ curl -s -u : --header "Accept: text/comma-separated-values" \ + "http://localhost/api/v1/targets" .... -=== Read or Create Target [[api-target]] - -Returns target of given id in <>, -<>, or <> format from -database. +=== Read Time Series [[api-timeseries]] -On POST, adds target in Namelist format to database. Optionally, the payload -may be deflate or zstd compressed. The API returns HTTP 201 Created if the -target was accepted. +Returns time series as observation views or <> (X/Y +records) in CSV format from database. In comparison to the +<>, the time series include only a single +response, selected by name. +[discrete] ==== Endpoint -* `/api/v1/target` -* `/api/v1/target?id=` +* `/api/v1/timeseries?` +[discrete] ==== HTTP Methods * GET -* POST +[discrete] ==== Request Parameters [cols="3,2,5"] |=== -| GET Parameter | Type | Description +| GET Parameter | Type | Description -| `id` | string | Target id. +| `node_id` | string | Node id. +| `sensor_id` | string | Sensor id. +| `target_id` | string | Target id. +| `response` | string | Response name. +| `from` | string | Start of time range (ISO 8601). +| `to` | string | End of time range (ISO 8601). +| `limit` | integer | Max. number of results (optional). +| `header` | integer | Add CSV header (0 or 1). 
+| `view` | integer | Return observation views instead of data points (0 or 1). |=== +[discrete] ==== Request Headers .GET @@ -3507,18 +3519,10 @@ target was accepted. |=== | Name | Values -| Accept | `application/json`, `application/namelist`, `text/comma-separated-values` -|=== - -.POST -[cols="2,8"] -|=== -| Name | Values - -| Content-Encoding | `deflate`, `zstd` (optional) -| Content-Type | `application/namelist` +| Accept | `text/comma-separated-values` |=== +[discrete] ==== Responses .GET @@ -3526,1285 +3530,678 @@ target was accepted. |=== | Status | Description -| `200` | Target is returned. +| `200` | Observations are returned. | `400` | Invalid request. | `401` | Unauthorised. -| `404` | Target not found. -| `500` | Server error. -| `503` | Database error. -|=== - -.POST -[cols="1,9"] -|=== -| Status | Description - -| `201` | Target was accepted. -| `400` | Invalid request or payload. -| `409` | Target exists in database. -| `413` | Payload too large. -| `415` | Invalid payload format. +| `404` | No observations found. | `500` | Server error. | `503` | Database error. |=== +[discrete] ==== Example -Return target `dummy-target` in JSON format: +Return time series of response `dummy` related to node `dummy-node`, sensor +`dummy-sensor`, and target `dummy-target`, from 2023 to 2024, as X/Y data in CSV +format: .... -$ curl -s -u : --header "Accept: application/json" \ - "http://localhost/api/v1/target?id=dummy-target" +$ curl -s -u : --header "Accept: text/comma-separated-values" \ + "http://localhost/api/v1/timeseries?node_id=dummy-node&sensor_id=dummy-sensor\ +&target_id=dummy-target&response=dummy&from=2023&to=2024" .... -== Data Serialisation +For additional meta information, add the parameter `&view=1`.
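The returned X/Y records are plain CSV of time stamp and value, which makes shell-based post-processing straightforward. A minimal sketch, assuming the default comma separator and using made-up data points in place of an actual response, that computes the arithmetic mean of the Y values with _awk(1)_:

```shell
# Compute the arithmetic mean of the Y column of X/Y data points in CSV
# format (ISO 8601 time stamp, value). The records are illustrative only.
printf '%s\n' \
    '1970-01-01T00:00:00.000000+00:00,1.0' \
    '1970-01-01T01:00:00.000000+00:00,2.0' \
    '1970-01-01T02:00:00.000000+00:00,3.0' |
awk -F',' '{ sum += $2; n++ } END { printf "%.1f\n", sum / n }'
```

The pipeline prints `2.0`.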
-DMPACK supports the following data serialisation formats: +=== Read or Update Beat [[api-beat]] -Atom XML:: Export of log messages in -link:https://en.wikipedia.org/wiki/Atom_(web_standard)[Atom Syndication Format] -(RFC 4287), with optional XSLT style sheet. -Block:: Export of observation responses as X/Y data points in ASCII block -format, consisting of time stamp (ISO 8601) and real value. -CSV:: Export and import of beat, log, node, observation, sensor, and target -data, with custom field separator and quote character. A CSV header is added -optionally. -HDF5:: Export and import of node, observation, sensor, and target data as HDF5 -compound data types. -JSON:: Export of beat, log, node, observation, sensor, and target data as -JSON objects or JSON arrays. -JSON Lines:: Export of beat, log, node, observation, sensor, and target data in -link:https://jsonlines.org/[JSON Lines] / -link:http://ndjson.org/[Newline Delimited JSON] format. -Lua:: Converting observations from and to Lua tables. Import of observations -from Lua file or stack-based data exchange between Fortran and Lua. -Namelist:: Import from and export to Fortran 95 Namelist format of single beat, -log, node, observation, sensor, and target data. The syntax is -case-insensitive, line-breaks are optional. Default values are assumed for -omitted attributes of data in Namelist format. -Text:: Status messages of the HTTP-RPC API are returned as key–value pairs in -plain text format +Returns heartbeat of a given node in <>, +<>, or <> format from database. -The JSON Lines format equals the JSON format, except that multiple records are -separated by new line. The HDF5 format description for observations is omitted -due to length. You can output the format from the command-line. For example, if -the file `observ.hdf5` contains DMPACK observations: +On POST, adds or updates heartbeat given in Namelist format. Optionally, the +payload may be deflate or zstd compressed. 
The API returns HTTP 201 Created if +the beat was accepted. -.... -$ h5dump -H -A 0 observ.hdf5 -.... +If HTTP Basic Auth is used, the user name must match the `node_id` attribute of +the beat, otherwise, the request will be rejected as unauthorised (HTTP 401). -=== API Status [[data-api]] +[discrete] +==== Endpoint -.API status derived type -[cols="3,2,2,14"] +* `/api/v1/beat` +* `/api/v1/beat?node_id=` + +[discrete] +==== HTTP Methods + +* GET +* POST + +[discrete] +==== Request Parameters + +[cols="3,2,5"] |=== -| Attribute | Type | Size | Description +| GET Parameter | Type | Description -| `version` | string | 32 | DMPACK application version. -| `dmpack` | string | 32 | DMPACK library version. -| `host` | string | 32 | Server host name. -| `server` | string | 32 | Server software (web server). -| `timestamp` | string | 32 | Server date and time in ISO 8601. -| `message` | string | 32 | Server status message (optional). -| `error` | integer | 4 | <>. +| `node_id` | string | Node id. |=== -.Text [[data-api-text]] -.... -version=1.0.0 -dmpack=1.0.0 -host=localhost -server=lighttpd/1.4.70 -timestamp=1970-01-01T00:00:00.000000+00:00 -message=online -error=0 -.... +[discrete] +==== Request Headers -=== Beat [[data-beat]] +.GET +[cols="1,9"] +|=== +| Name | Values -.Beat derived type -[cols="3,2,2,14"] +| Accept | `application/json`, `application/namelist`, `text/comma-separated-values` |=== -| Attribute | Type | Size | Description -| `node_id` | string | 32 | Node id (`-0-9A-Z_a-z`). -| `address` | string | 45 | IPv4/IPv6 address of client. -| `client` | string | 32 | Client software name and version. -| `time_sent` | string | 32 | Date and time heartbeat was sent (ISO 8601). -| `time_recv` | string | 32 | Date and time heartbeat was received (ISO 8601). -| `error` | integer | 4 | Last client connection <>. -| `interval` | integer | 4 | Emit interval in seconds. -| `uptime` | integer | 4 | Client uptime in seconds. 
+.POST +[cols="2,8"] |=== +| Name | Values -.CSV [[data-beat-csv]] -[cols="2,3,15"] +| Content-Encoding | `deflate`, `zstd` (optional) +| Content-Type | `application/namelist` |=== -| Column | Attribute | Description -| 1 | `node_id` | Node id. -| 2 | `address` | IP address of client. -| 3 | `client` | Client software name and version. -| 4 | `time_sent` | Date and time heartbeat was sent. -| 5 | `time_recv` | Date and time heartbeat was received. -| 6 | `error` | Error code. -| 7 | `interval` | Emit interval in seconds. -| 8 | `uptime` | Client uptime in seconds. +[discrete] +==== Responses + +.GET +[cols="1,9"] |=== +| Status | Description + +| `200` | Beat is returned. +| `400` | Invalid request. +| `401` | Unauthorised. +| `404` | Beat not found. +| `500` | Server error. +| `503` | Database error. +|=== + +.POST +[cols="1,9"] +|=== +| Status | Description + +| `201` | Beat was accepted. +| `400` | Invalid request or payload. +| `401` | Unauthorised. +| `413` | Payload too large. +| `415` | Invalid payload format. +| `500` | Server error. +| `503` | Database error. +|=== + +[discrete] +==== Example + +Return the heartbeat of node `dummy-node` in JSON format: -.JSON [[data-beat-json]] -[source,json] .... -{ - "node_id": "dummy-node", - "address": "127.0.0.1", - "client": "dmbeat 1.0.0 (DMPACK 1.0.0)", - "time_sent": "1970-01-01T00:00:00.000000+00:00", - "time_recv": "1970-01-01T00:00:00.000000+00:00", - "error": 0, - "interval": 0, - "uptime": 0 -} +$ curl -s -u : --header "Accept: application/json" \ + "http://localhost/api/v1/beat?node_id=dummy-node" .... -.Namelist [[data-beat-nml]] -.... +=== Read or Create Log [[api-log]] -&DMBEAT -BEAT%NODE_ID="dummy-node", -BEAT%ADDRESS="127.0.0.1", -BEAT%CLIENT="dmbeat 1.0.0 (DMPACK 1.0.0)", -BEAT%TIME_SENT="1970-01-01T00:00:00.000000+00:00", -BEAT%TIME_RECV="1970-01-01T00:00:00.000000+00:00", -BEAT%ERROR=0, -BEAT%INTERVAL=0, -BEAT%UPTIME=0, -/ -.... +Returns single log of passed id in <>, +<>, or <> format from database. 
-=== Data Point [[data-dp]] +On POST, adds log in Namelist format to database. Optionally, the payload may +be deflate or zstd compressed. The API returns HTTP 201 Created if the log was +accepted. -.Data point derived type -[cols="3,2,2,14"] +If HTTP Basic Auth is used, the user name must match the `node_id` attribute of +the log, otherwise, the request will be rejected as unauthorised (HTTP 401). + +[discrete] +==== Endpoint + +* `/api/v1/log` +* `/api/v1/log?id=` + +[discrete] +==== HTTP Methods + +* GET +* POST + +[discrete] +==== Request Parameters + +[cols="3,2,5"] |=== -| Attribute | Type | Size | Description +| GET Parameter | Type | Description -| `x` | string | 32 | X value (ISO 8601). -| `y` | double | 8 | Y value. +| `id` | string | Log id (UUID). |=== -.Block [[data-dp-block]] -.... -1970-01-01T00:00:00.000000+00:00 0.00000000 -.... +[discrete] +==== Request Headers -.CSV [[data-dp-csv]] -[cols="2,3,15"] +.GET +[cols="1,9"] |=== -| Column | Attribute | Description +| Name | Values -| 1 | `x` | X value. -| 2 | `y` | Y value. +| Accept | `application/json`, `application/namelist`, `text/comma-separated-values` |=== -.JSON [[data-dp-json]] -[source,json] -.... -{ - "x": "1970-01-01T00:00:00.000000+00:00", - "y": 0.0 -} -.... +.POST +[cols="2,8"] +|=== +| Name | Values -=== Log [[data-log]] +| Content-Encoding | `deflate`, `zstd` (optional) +| Content-Type | `application/namelist` +|=== -.Log derived type -[cols="3,2,2,14"] +[discrete] +==== Responses + +.GET +[cols="1,9"] |=== -| Attribute | Type | Size | Description +| Status | Description -| `id` | string | 32 | Log id (UUID). -| `level` | integer | 4 | <>. -| `error` | integer | 4 | <>. -| `timestamp` | string | 32 | Date and time (ISO 8601). -| `node_id` | string | 32 | Node id (optional). -| `sensor_id` | string | 32 | Sensor id (optional). -| `target_id` | string | 32 | Target id (optional). -| `observ_id` | string | 32 | Observation id (optional). -| `source` | string | 32 | Log source (optional). 
-| `message` | string | 512 | Log message. +| `200` | Log is returned. +| `400` | Invalid request. +| `401` | Unauthorised. +| `404` | Log not found. +| `500` | Server error. +| `503` | Database error. |=== -.Log level [[data-log-level]] -[cols="1,3"] +.POST +[cols="1,9"] |=== -| Level | Name +| Status | Description -| 1 | debug -| 2 | info -| 3 | warning -| 4 | error -| 5 | critical +| `201` | Log was accepted. +| `400` | Invalid request or payload. +| `401` | Unauthorised. +| `409` | Log exists in database. +| `413` | Payload too large. +| `415` | Invalid payload format. +| `500` | Server error. +| `503` | Database error. |=== -.Atom XML [[data-log-atom]] -[source,xml] -.... - - -DMPACK -DMPACK Logs -Log Messages Feed -urn:uuid:a6baaf1a-43b7-4e59-a18c-653e6ee61dfa -1970-01-01T00:00:00.000000+00:00 - -DEBUG: dummy log message -urn:uuid:26462d27-d7ff-4ef1-b10e-0a2e921e638b -1970-01-01T00:00:00.000000+00:00 -1970-01-01T00:00:00.000000+00:00 -DEBUG: dummy log message - -
-<content type="xhtml">
-<div xmlns="http://www.w3.org/1999/xhtml">
-<table>
-<tbody>
-<tr><td>ID</td><td>26462d27d7ff4ef1b10e0a2e921e638b</td></tr>
-<tr><td>Timestamp</td><td>1970-01-01T00:00:00.000000+00:00</td></tr>
-<tr><td>Level</td><td>DEBUG (1)</td></tr>
-<tr><td>Error</td><td>dummy error (2)</td></tr>
-<tr><td>Node ID</td><td>dummy-node</td></tr>
-<tr><td>Sensor ID</td><td>dummy-sensor</td></tr>
-<tr><td>Target ID</td><td>dummy-target</td></tr>
-<tr><td>Observation ID</td><td>9bb894c779e544dab1bd7e7a07ae507d</td></tr>
-<tr><td>Source</td><td>dummy</td></tr>
-<tr><td>Message</td><td>dummy log message</td></tr>
-</tbody>
-</table>
-</div>
-</content>
-<author>
-<name>dummy</name>
-</author>
-</entry>
-</feed>
-....
-
-.CSV [[data-log-csv]]
-[cols="2,3,15"]
-|===
-| Column | Attribute | Description
-
-| 1 | `id` | Log id.
-| 2 | `level` | Log level.
-| 3 | `error` | Error code.
-| 4 | `timestamp` | Date and time.
-| 5 | `node_id` | Node id.
-| 6 | `sensor_id` | Sensor id.
-| 7 | `target_id` | Target id.
-| 8 | `observ_id` | Observation id.
-| 9 | `source` | Log source.
-| 10 | `message` | Log message.
-|===
+[discrete]
+==== Example
-.JSON [[data-log-json]]
-[source,json]
-....
-{
-  "id": "26462d27d7ff4ef1b10e0a2e921e638b",
-  "level": 1,
-  "error": 2,
-  "timestamp": "1970-01-01T00:00:00.000000+00:00",
-  "node_id": "dummy-node",
-  "sensor_id": "dummy-sensor",
-  "target_id": "dummy-target",
-  "observ_id": "9bb894c779e544dab1bd7e7a07ae507d",
-  "message": "dummy log message"
-}
-....
+Return a specific log in JSON format:
-.Namelist [[data-log-nml]]
 ....
-&DMLOG
-LOG%ID="26462d27d7ff4ef1b10e0a2e921e638b",
-LOG%LEVEL=1,
-LOG%ERROR=2,
-LOG%TIMESTAMP="1970-01-01T00:00:00.000000+00:00",
-LOG%NODE_ID="dummy-node",
-LOG%SENSOR_ID="dummy-sensor",
-LOG%TARGET_ID="dummy-target",
-LOG%OBSERV_ID="9bb894c779e544dab1bd7e7a07ae507d",
-LOG%SOURCE="dummy",
-LOG%MESSAGE="dummy log message",
-/
+$ curl -s -u : --header "Accept: application/json" \
+  "http://localhost/api/v1/log?id=51adca2f1d4e42a5829fd1a378c8b6f1"
 ....
-=== Node [[data-node]]
+=== Read or Create Node [[api-node]]
-.Node derived type
-[cols="3,2,2,14"]
-|===
-| Attribute | Type | Size | Description
+Returns node of given id in <>, <>, or
+<> format from database.
-| `id` | string | 32 | Node id (`-0-9A-Z_a-z`).
-| `name` | string | 32 | Node name.
-| `meta` | string | 32 | Node description (optional).
-| `x` | double | 8 | Node x or easting (optional).
-| `y` | double | 8 | Node y or northing (optional).
-| `z` | double | 8 | Node z or altitude (optional).
-|===
+On POST, adds node in Namelist format to database. Optionally, the payload may
+be deflate or zstd compressed. The API returns HTTP 201 Created if the node was
+accepted.
-.CSV [[data-node-csv]]
-[cols="2,3,15"]
-|===
-| Column | Attribute | Description
+If HTTP Basic Auth is used, the user name must match the `node_id` attribute of
+the node, otherwise, the request will be rejected as unauthorised (HTTP 401).
-| 1 | `id` | Node id.
-| 2 | `name` | Node name.
-| 3 | `meta` | Node description.
-| 4 | `x` | Node x or easting.
-| 5 | `y` | Node y or northing.
-| 6 | `z` | Node z or altitude.
-|===
+[discrete]
+==== Endpoint
-.HDF5 [[data-node-hdf5]]
-....
-DATASET "node_type" {
-   DATATYPE H5T_COMPOUND {
-      H5T_ARRAY { [32] H5T_STRING {
-         STRSIZE 1;
-         STRPAD H5T_STR_SPACEPAD;
-         CSET H5T_CSET_ASCII;
-         CTYPE H5T_C_S1;
-      } } "id";
-      H5T_ARRAY { [32] H5T_STRING {
-         STRSIZE 1;
-         STRPAD H5T_STR_SPACEPAD;
-         CSET H5T_CSET_ASCII;
-         CTYPE H5T_C_S1;
-      } } "name";
-      H5T_ARRAY { [32] H5T_STRING {
-         STRSIZE 1;
-         STRPAD H5T_STR_SPACEPAD;
-         CSET H5T_CSET_ASCII;
-         CTYPE H5T_C_S1;
-      } } "meta";
-      H5T_IEEE_F64LE "x";
-      H5T_IEEE_F64LE "y";
-      H5T_IEEE_F64LE "z";
-   }
-   DATASPACE SIMPLE { ( 8 ) / ( 8 ) }
-}
-....
+* `/api/v1/node`
+* `/api/v1/node?id=`
-.JSON [[data-node-json]]
-[source,json]
-....
-{
-  "id": "dummy-node",
-  "name": "Dummy Node",
-  "meta": "Description",
-  "x": 0.0,
-  "y": 0.0,
-  "z": 0.0
-}
-....
+[discrete]
+==== HTTP Methods
-.Namelist [[data-node-nml]]
-....
-&DMNODE
-NODE%ID="dummy-node",
-NODE%NAME="Dummy Node",
-NODE%META="Description",
-NODE%X=0.0,
-NODE%Y=0.0,
-NODE%Z=0.0,
-/
-....
+* GET
+* POST
-=== Observation [[data-observ]]
+[discrete]
+==== Request Parameters
-.Observation derived type
-[cols="3,2,2,14"]
+[cols="3,2,5"]
 |===
-| Attribute | Type | Size | Description
+| GET Parameter | Type | Description
-| `id` | string | 32 | Observation id (UUID).
-| `node_id` | string | 32 | Node id (`-0-9A-Z_a-z`).
-| `sensor_id` | string | 32 | Sensor id (`-0-9A-Z_a-z`).
-| `target_id` | string | 32 | Target id (`-0-9A-Z_a-z`).
-| `name` | string | 32 | Observation name (`-0-9A-Z_a-z`).
-| `timestamp` | string | 32 | Date and time of observation (ISO 8601).
-| `source` | string | 32 | Observation source or name of origin (`-0-9A-Z_a-z`).
-| `path` | string | 32 | Path of TTY/PTY device.
-| `priority` | integer | 4 | Message queue priority (>= 0).
-| `error` | integer | 4 | Observation <>.
-| `next` | integer | 4 | Cursor of receiver list (0 to 16).
-| `nreceiver` | integer | 4 | Number of receivers (0 to 16).
-| `nrequests` | integer | 4 | Number of sensor requests (0 to 8).
-| `receivers` | array | 16{nbsp}×{nbsp}32 | Array of receiver names (16).
-| `requests` | array | 8{nbsp}×{nbsp}1380 | Array of requests (8).
+| `id` | string | Node id.
 |===
-.Request derived type of an observation [[data-request]]
-[cols="3,2,2,14"]
+[discrete]
+==== Request Headers
+
+.GET
+[cols="1,9"]
 |===
-| Attribute | Type | Size | Description
+| Name | Values
-| `name` | string | 32 | Request name (`-0-9A-Z_a-z`).
-| `timestamp` | string | 32 | Date and time of request (ISO 8601).
-| `request` | string | 256 | Raw request to sensor. Non-printable characters have to be escaped.
-| `response` | string | 256 | Raw response of sensor. Non-printable characters will be escaped.
-| `delimiter` | string | 8 | Request delimiter. Non-printable characters have to be escaped.
-| `pattern` | string | 256 | Regular expression pattern that describes the raw response using named groups.
-| `delay` | integer | 4 | Delay in mseconds to wait after the request.
-| `error` | integer | 4 | Request <>.
-| `mode` | integer | 4 | Request mode (unused, for future additions).
-| `retries` | integer | 4 | Number of performed retries.
-| `state` | integer | 4 | Request state (unused, for future additions).
-| `timeout` | integer | 4 | Request timeout in mseconds.
-| `nresponses` | integer | 4 | Number of responses (0 to 16).
-| `responses` | array | 16{nbsp}×{nbsp}32 | Extracted values from the raw response (16).
+| Accept | `application/json`, `application/namelist`, `text/comma-separated-values`
 |===
-.Response derived type of a request [[data-response]]
-[cols="3,2,2,14"]
+.POST
+[cols="2,8"]
 |===
-| Attribute | Type | Size | Description
+| Name | Values
-| `name` | string | 8 | Response name (`-0-9A-Z_a-z`).
-| `unit` | string | 8 | Response unit.
-| `type` | integer | 4 | Response <>.
-| `error` | integer | 4 | Response <>.
-| `value` | double | 8 | Response value.
+| Content-Encoding | `deflate`, `zstd` (optional)
+| Content-Type | `application/namelist`
 |===
-.Response value types [[data-response-types]]
-[cols="1,6,14"]
+[discrete]
+==== Responses
+
+.GET
+[cols="1,9"]
 |===
-| Value | Name | Description
+| Status | Description
-| 0 | `RESPONSE_TYPE_REAL64` | 8-byte signed real.
-| 1 | `RESPONSE_TYPE_REAL32` | 4-byte signed real.
-| 2 | `RESPONSE_TYPE_INT64` | 8-byte signed integer.
-| 3 | `RESPONSE_TYPE_INT32` | 4-byte signed integer.
-| 4 | `RESPONSE_TYPE_LOGICAL` | 1-byte boolean.
-| 5 | `RESPONSE_TYPE_BYTE` | Byte.
-| 6 | `RESPONSE_TYPE_STRING` | Byte string.
+| `200` | Node is returned.
+| `400` | Invalid request.
+| `401` | Unauthorised.
+| `404` | Node not found.
+| `500` | Server error.
+| `503` | Database error.
 |===
-.CSV [[data-observ-csv]]
-[cols="3,3,14"]
+[discrete]
+.POST
+[cols="1,9"]
 |===
-| Column | Attribute | Description
+| Status | Description
-| 1 | `id` | Observation id.
-| 2 | `node_id` | Node id.
-| 3 | `sensor_id` | Sensor id.
-| 4 | `target_id` | Target id.
-| 5 | `name` | Observation name.
-| 6 | `timestamp` | Date and time of observation.
-| 7 | `source` | Observation source.
-| 8 | `path` | Path of TTY/PTY device.
-| 9 | `priority` | Message queue priority.
-| 10 | `error` | Error code.
-| 11 | `next` | Cursor of receiver list (0 to 16).
-| 12 | `nreceivers` | Number of receivers (0 to 16).
-| 13 | `nrequests` | Number of sensor requests (0 to 8).
-| 14 – 29 | `receivers` | Array of receiver names (16).
-| 14 | `receiver` | Receiver 1.
-| 15 | `receiver` | Receiver 2.
-| 16 | `receiver` | Receiver 3.
-| 17 | `receiver` | Receiver 4.
-| 18 | `receiver` | Receiver 5.
-| 19 | `receiver` | Receiver 6.
-| 20 | `receiver` | Receiver 7.
-| 21 | `receiver` | Receiver 8.
-| 22 | `receiver` | Receiver 9.
-| 23 | `receiver` | Receiver 10.
-| 24 | `receiver` | Receiver 11.
-| 25 | `receiver` | Receiver 12.
-| 26 | `receiver` | Receiver 13.
-| 27 | `receiver` | Receiver 14.
-| 28 | `receiver` | Receiver 15.
-| 29 | `receiver` | Receiver 16.
-| 30 – 773 | `requests` | Array of requests (8).
-| 30 – 105 | `request` | Request 1.
-| 30 | `name` | Request name.
-| 31 | `timestamp` | Date and time of request.
-| 32 | `request` | Raw request to sensor.
-| 33 | `response` | Raw response of sensor.
-| 34 | `delimiter` | Request delimiter.
-| 35 | `pattern` | Regular expression pattern that describes the raw response.
-| 36 | `delay` | Delay in mseconds to wait after the request.
-| 37 | `error` | Error code.
-| 38 | `mode` | Request mode.
-| 39 | `retries` | Number of retries performed.
-| 40 | `state` | Request state.
-| 41 | `timeout` | Request timeout in mseconds.
-| 42 | `nresponses` | Number of responses (0 to 16).
-| 43 – 122 | `responses` | Array of responses (16).
-| 43 – 47 | `response` | Response 1.
-| 43 | `name` | Response 1 name.
-| 44 | `unit` | Response 1 unit.
-| 45 | `type` | Response 1 value type.
-| 46 | `error` | Response 1 error.
-| 47 | `value` | Response 1 value.
-| 48 – 52 | `response` | Response 2.
-| 53 – 57 | `response` | Response 3.
-| 58 – 62 | `response` | Response 4.
-| 63 – 67 | `response` | Response 5.
-| 68 – 72 | `response` | Response 6.
-| 73 – 77 | `response` | Response 7.
-| 78 – 82 | `response` | Response 8.
-| 83 – 87 | `response` | Response 9.
-| 88 – 92 | `response` | Response 10.
-| 93 – 97 | `response` | Response 11.
-| 98 – 102 | `response` | Response 12.
-| 103 – 107 | `response` | Response 13.
-| 108 – 112 | `response` | Response 14.
-| 113 – 117 | `response` | Response 15.
-| 118 – 122 | `response` | Response 16.
-| 123 – 215 | `request` | Request 2.
-| 216 – 308 | `request` | Request 3.
-| 309 – 401 | `request` | Request 4.
-| 402 – 494 | `request` | Request 5.
-| 495 – 587 | `request` | Request 6.
-| 588 – 680 | `request` | Request 7.
-| 681 – 773 | `request` | Request 8.
+| `201` | Node was accepted.
+| `400` | Invalid request or payload.
+| `401` | Unauthorised.
+| `409` | Node exists in database.
+| `413` | Payload too large.
+| `415` | Invalid payload format.
+| `500` | Server error.
+| `503` | Database error.
 |===
-.HDF5 [[data-observ-hdf5]]
-The HDF5 data-set description is too large to be fully shown in this document.
+[discrete]
+==== Example
-.JSON [[data-observ-json]]
-[source,json]
-....
-{
-  "id": "9273ab62f9a349b6a4da6dd274ee83e7",
-  "node_id": "dummy-node",
-  "sensor_id": "dummy-sensor",
-  "target_id": "dummy-target",
-  "name": "dummy-observ",
-  "timestamp": "1970-01-01T00:00:00.000000+00:00",
-  "source": "dmdummy",
-  "path": "/dev/null",
-  "priority": 0,
-  "error": 0,
-  "next": 0,
-  "nreceivers": 2,
-  "nrequests": 1,
-  "receivers": [
-    "dummy-receiver1",
-    "dummy-receiver2"
-  ],
-  "requests": [
-    {
-      "name": "dummy",
-      "timestamp": "1970-01-01T00:00:00.000000+00:00",
-      "request": "?\\n",
-      "response": "10.0\\n",
-      "delimiter": "\\n",
-      "pattern": "(?[-+0-9\\.]+)",
-      "delay": 0,
-      "error": 0,
-      "mode": 0,
-      "retries": 0,
-      "state": 0,
-      "timeout": 0,
-      "nresponses": 1,
-      "responses": [
-        {
-          "name": "sample",
-          "unit": "none",
-          "type": 0,
-          "error": 0,
-          "value": 10.0
-        }
-      ]
-    }
-  ]
-}
-....
+Return node `dummy-node` in JSON format:
-.Lua [[data-observ-lua]]
-[source,lua]
 ....
-{
-  id = "9273ab62f9a349b6a4da6dd274ee83e7",
-  node_id = "dummy-node",
-  sensor_id = "dummy-sensor",
-  target_id = "dummy-target",
-  name = "dummy-observ",
-  timestamp = "1970-01-01T00:00:00.000000+00:00",
-  source = "dmdummy",
-  path = "/dev/null",
-  error = 0,
-  next = 1,
-  priority = 0,
-  nreceivers = 2,
-  nrequests = 1,
-  receivers = { "dummy-receiver1", "dummy-receiver2" },
-  requests = {
-    {
-      name = "dummy",
-      timestamp = "1970-01-01T00:00:00.000000+00:00",
-      request = "?\\n",
-      response = "10.0\\n",
-      pattern = "(?[-+0-9\\.]+)",
-      delimiter = "\\n",
-      delay = 0,
-      error = 0,
-      mode = 0,
-      retries = 0,
-      state = 0,
-      timeout = 0,
-      nresponses = 1,
-      responses = {
-        {
-          name = "sample",
-          unit = "none",
-          type = 0,
-          error = 0,
-          value = 10.0
-        }
-      }
-    }
-  }
-}
+$ curl -s -u : --header "Accept: application/json" \
+  "http://localhost/api/v1/node?id=dummy-node"
 ....
-.Namelist [[data-observ-nml]]
-....
-&DMOBSERV
-OBSERV%ID="9273ab62f9a349b6a4da6dd274ee83e7",
-OBSERV%NODE_ID="dummy-node",
-OBSERV%SENSOR_ID="dummy-sensor",
-OBSERV%TARGET_ID="dummy-target",
-OBSERV%NAME="dummy-observ",
-OBSERV%TIMESTAMP="1970-01-01T00:00:00.000000+00:00",
-OBSERV%SOURCE="dmdummy",
-OBSERV%PATH="/dev/null",
-OBSERV%PRIORITY=0,
-OBSERV%ERROR=0,
-OBSERV%NEXT=0,
-OBSERV%NRECEIVERS=2,
-OBSERV%NREQUESTS=1,
-OBSERV%RECEIVERS="dummy-receiver1","dummy-receiver2",
-OBSERV%REQUESTS(1)%NAME="dummy",
-OBSERV%REQUESTS(1)%TIMESTAMP="1970-01-01T00:00:00.000000+00:00",
-OBSERV%REQUESTS(1)%REQUEST="?\n",
-OBSERV%REQUESTS(1)%RESPONSE="10.0\n",
-OBSERV%REQUESTS(1)%DELIMITER="\n",
-OBSERV%REQUESTS(1)%PATTERN="(?[-+0-9\.]+)",
-OBSERV%REQUESTS(1)%DELAY=0,
-OBSERV%REQUESTS(1)%ERROR=0,
-OBSERV%REQUESTS(1)%MODE=0,
-OBSERV%REQUESTS(1)%RETRIES=0,
-OBSERV%REQUESTS(1)%STATE=0,
-OBSERV%REQUESTS(1)%TIMEOUT=0,
-OBSERV%REQUESTS(1)%NRESPONSES=1,
-OBSERV%REQUESTS(1)%RESPONSES(1)%NAME="sample",
-OBSERV%REQUESTS(1)%RESPONSES(1)%UNIT="none",
-OBSERV%REQUESTS(1)%RESPONSES(1)%TYPE=0,
-OBSERV%REQUESTS(1)%RESPONSES(1)%ERROR=0,
-OBSERV%REQUESTS(1)%RESPONSES(1)%VALUE=10.00000000000000,
-/
-....
+=== Read or Create Observation [[api-observ]]
-=== Sensor [[data-sensor]]
+Returns observation of given id from database, in <>,
+<>, or <> format.
-.Sensor derived type
-[cols="3,2,2,14"]
-|===
-| Attribute | Type | Size | Description
+On POST, adds observation in Namelist format to database. Optionally, the
+payload may be deflate or zstd compressed. The API returns HTTP 201 Created if
+the observation was accepted.
-| `id` | string | 32 | Sensor id (`-0-9A-Z_a-z`).
-| `node_id` | string | 32 | Node id (`-0-9A-Z_a-z`).
-| `type` | integer | 4 | <>.
-| `name` | string | 32 | Sensor name.
-| `sn` | string | 32 | Sensor serial number (optional).
-| `meta` | string | 32 | Sensor description (optional).
-| `x` | double | 8 | Sensor x or easting (optional).
-| `y` | double | 8 | Sensor y or northing (optional).
-| `z` | double | 8 | Sensor z or altitude (optional).
+If HTTP Basic Auth is used, the user name must match the `node_id` attribute of
+the observation, otherwise, the request will be rejected as unauthorised (HTTP
+401).
+
+[discrete]
+==== Endpoint
+
+* `/api/v1/observ`
+* `/api/v1/observ?id=`
+
+[discrete]
+==== HTTP Methods
+
+* GET
+* POST
+
+[discrete]
+==== Request Parameters
+
+[cols="3,2,5"]
 |===
+| GET Parameter | Type | Description
-.Sensor types
-[[data-sensor-types]]
-[cols="1,1,8"]
+| `id` | string | Observation id (UUID).
 |===
-| Value | Name | Description
-| 0 | `none` | Unknown sensor type.
-| 1 | `virtual` | Virtual sensor.
-| 2 | `fs` | File system.
-| 3 | `process` | Process or service.
-| 4 | `meteo` | Meteorological sensor.
-| 5 | `rts` | Robotic total station.
-| 6 | `gnss` | GNSS receiver.
-| 7 | `level` | Level sensor.
-| 8 | `mems` | MEMS sensor.
+[discrete]
+==== Request Headers
+
+.GET
+[cols="1,9"]
 |===
+| Name | Values
-.CSV [[data-sensor-csv]]
-[cols="2,3,15"]
+| Accept | `application/json`, `application/namelist`, `text/comma-separated-values`
 |===
-| Column | Attribute | Description
-| 1 | `id` | Sensor id.
-| 2 | `node_id` | Node id.
-| 3 | `type` | Sensor type.
-| 4 | `name` | Sensor name.
-| 5 | `sn` | Sensor serial number.
-| 6 | `meta` | Sensor description.
-| 7 | `x` | Sensor x or easting.
-| 8 | `y` | Sensor y or northing.
-| 9 | `z` | Sensor z or altitude.
+.POST
+[cols="2,8"]
 |===
+| Name | Values
-.HDF5 [[data-sensor-hdf5]]
-....
-DATASET "sensor_type" {
-   DATATYPE H5T_COMPOUND {
-      H5T_ARRAY { [32] H5T_STRING {
-         STRSIZE 1;
-         STRPAD H5T_STR_SPACEPAD;
-         CSET H5T_CSET_ASCII;
-         CTYPE H5T_C_S1;
-      } } "id";
-      H5T_ARRAY { [32] H5T_STRING {
-         STRSIZE 1;
-         STRPAD H5T_STR_SPACEPAD;
-         CSET H5T_CSET_ASCII;
-         CTYPE H5T_C_S1;
-      } } "node_id";
-      H5T_STD_I32LE "type";
-      H5T_ARRAY { [32] H5T_STRING {
-         STRSIZE 1;
-         STRPAD H5T_STR_SPACEPAD;
-         CSET H5T_CSET_ASCII;
-         CTYPE H5T_C_S1;
-      } } "name";
-      H5T_ARRAY { [32] H5T_STRING {
-         STRSIZE 1;
-         STRPAD H5T_STR_SPACEPAD;
-         CSET H5T_CSET_ASCII;
-         CTYPE H5T_C_S1;
-      } } "sn";
-      H5T_ARRAY { [32] H5T_STRING {
-         STRSIZE 1;
-         STRPAD H5T_STR_SPACEPAD;
-         CSET H5T_CSET_ASCII;
-         CTYPE H5T_C_S1;
-      } } "meta";
-      H5T_IEEE_F64LE "x";
-      H5T_IEEE_F64LE "y";
-      H5T_IEEE_F64LE "z";
-   }
-   DATASPACE SIMPLE { ( 8 ) / ( 8 ) }
-}
-....
-.JSON [[data-sensor-json]]
-[source,json]
-....
-{
-  "id": "dummy-sensor",
-  "node_id": "dummy-node",
-  "type": 3,
-  "name": "Dummy Sensor",
-  "sn": "00000",
-  "meta": "Description.",
-  "x": 0.0,
-  "y": 0.0,
-  "z": 0.0
-}
-....
-.Namelist [[data-sensor-nml]]
-....
-&DMSENSOR
-SENSOR%ID="dummy-sensor",
-SENSOR%NODE_ID="dummy-node",
-SENSOR%TYPE=3,
-SENSOR%NAME="Dummy Sensor",
-SENSOR%SN="00000",
-SENSOR%META="Description",
-SENSOR%X=0.0,
-SENSOR%Y=0.0,
-SENSOR%Z=0.0,
-/
-....
-
-=== Target [[data-target]]
-
-.Target derived type
-[cols="3,2,2,14"]
+| Content-Encoding | `deflate`, `zstd` (optional)
+| Content-Type | `application/namelist`
 |===
-| Attribute | Type | Size | Description
-| `id` | string | 32 | Target id (`-0-9A-Z_a-z`).
-| `name` | string | 32 | Target name.
-| `meta` | string | 32 | Target description (optional).
-| `state` | integer | 4 | Target <> (optional).
-| `x` | double | 8 | Target x or easting (optional).
-| `y` | double | 8 | Target y or northing (optional).
-| `z` | double | 8 | Target z or altitude (optional).
-|===
+[discrete]
+==== Responses
+
-.Target states
-[[data-target-states]]
-[cols="1,3,16"]
+.GET
+[cols="1,9"]
 |===
-| Value | Name | Description
+| Status | Description
-| 0 | `none` | No special target state.
-| 1 | `removed` | Target has been removed.
-| 2 | `missing` | Target is missing.
-| 3 | `invalid` | Target is invalid.
-| 4 | `ignore` | Target should be ignored.
-| 5 | `obsolete` | Target is obsolete.
-| 6 | `user` | User-defined target state.
+| `200` | Sensor is returned.
+| `400` | Invalid request.
+| `401` | Unauthorised.
+| `404` | Sensor not found.
+| `500` | Server error.
+| `503` | Database error.
 |===
-.CSV [[data-target-csv]]
-[cols="2,3,15"]
+.POST
+[cols="1,9"]
 |===
-| Column | Attribute | Description
+| Status | Description
-| 1 | `id` | Target id.
-| 2 | `name` | Target name.
-| 3 | `meta` | Target description.
-| 4 | `state` | Target state.
-| 5 | `x` | Target x or easting.
-| 6 | `y` | Target y or northing.
-| 7 | `z` | Target z or altitude.
+| `201` | Sensor was accepted.
+| `400` | Invalid request or payload.
+| `401` | Unauthorised.
+| `409` | Sensor exists in database.
+| `413` | Payload too large.
+| `415` | Invalid payload format.
+| `500` | Server error.
+| `503` | Database error.
 |===
-.HDF5 [[data-target-hdf5]]
-....
-DATASET "target_type" {
-   DATATYPE H5T_COMPOUND {
-      H5T_ARRAY { [32] H5T_STRING {
-         STRSIZE 1;
-         STRPAD H5T_STR_SPACEPAD;
-         CSET H5T_CSET_ASCII;
-         CTYPE H5T_C_S1;
-      } } "id";
-      H5T_ARRAY { [32] H5T_STRING {
-         STRSIZE 1;
-         STRPAD H5T_STR_SPACEPAD;
-         CSET H5T_CSET_ASCII;
-         CTYPE H5T_C_S1;
-      } } "name";
-      H5T_ARRAY { [32] H5T_STRING {
-         STRSIZE 1;
-         STRPAD H5T_STR_SPACEPAD;
-         CSET H5T_CSET_ASCII;
-         CTYPE H5T_C_S1;
-      } } "meta";
-      H5T_STD_I32LE "state";
-      H5T_IEEE_F64LE "x";
-      H5T_IEEE_F64LE "y";
-      H5T_IEEE_F64LE "z";
-   }
-   DATASPACE SIMPLE { ( 8 ) / ( 8 ) }
-}
-....
+[discrete]
+==== Example
-.JSON [[data-target-json]]
-[source,json]
-....
-{
-  "id": "dummy-target",
-  "name": "Dummy Target",
-  "meta": "Description",
-  "state": 0,
-  "x": 0.0,
-  "y": 0.0,
-  "z": 0.0
-}
-....
+Return a specific observation in JSON format:
-.Namelist [[data-target-nml]]
 ....
-&DMTARGET
-TARGET%ID="dummy-target",
-TARGET%NAME="Dummy Target",
-TARGET%META="Description",
-TARGET%STATE=0,
-TARGET%X=0.0,
-TARGET%Y=0.0,
-TARGET%Z=0.0,
-/
+$ curl -s -u : --header "Accept: application/json" \
+  "http://localhost/api/v1/observ?id=7b98ae11d80b4ee392fe1a74d2c05809"
 ....
-== Databases
+=== Read or Create Sensor [[api-sensor]]
-The DMPACK programs use three distinct databases to store
-<> records:
+Returns sensor of given id in <>,
+<>, or <> format from
+database.
-Observation Database:: Stores nodes, sensors, targets, observations, observation
-receivers, observation requests, and observation responses, with optional
-synchronisation tables for all record types.
-Log Database:: Stores all log messages in single table.
-Beat Database:: Stores heartbeat messages by unique node id.
+On POST, adds sensor in Namelist format to database. Optionally, the payload may
+be deflate or zstd compressed. The API returns HTTP 201 Created if the sensor
+was accepted.
-The databases are usually located in directory `/var/dmpack/`.
+If HTTP Basic Auth is used, the user name must match the `node_id` attribute of
+the sensor, otherwise, the request will be rejected as unauthorised (HTTP 401).
-=== Administration
+[discrete]
+==== Endpoint
-The _sqlite3(1)_ program is stand-alone command-line shell for SQLite database
-access that allows the user to execute arbitrary SQL statements. Third-party
-programs provide an additional graphical user interface:
+* `/api/v1/sensor`
+* `/api/v1/sensor?id=`
-link:https://sqlitebrowser.org/[DB Browser for SQLite] (DB4S):: A
-spreadsheet-like visual interface for Linux, Unix, macOS, and Windows. (MPLv2,
-GPLv3)
-link:https://www.heidisql.com/[HeidiSQL]:: A free database administration tool
-for MariaDB, MySQL, MS SQL Server, PostgreSQL, and SQLite. For Windows only.
-(GPLv2)
-link:https://www.phpliteadmin.org/[phpLiteAdmin]:: A web front-end for SQLite
-database administration written in PHP. (GPLv3)
-link:https://github.com/coleifer/sqlite-web[SQLite Web]:: A web-based SQLite
-database browser in Python. (MIT)
+[discrete]
+==== HTTP Methods
-=== Entity–Relationship Model
+* GET
+* POST
-.Log database
-[#db-uml-log]
-image::log.svg[UML,align="center",scaledwidth=25%]
+[discrete]
+==== Request Parameters
-.Observation database
-[#db-uml-observ]
-image::observ.svg[UML,align="center"]
+[cols="3,2,5"]
+|===
+| GET Parameter | Type | Description
-.Beat database
-[#db-uml-beat]
-image::beat.svg[UML,align="center",scaledwidth=25%]
+| `id` | string | Sensor id.
+|===
-=== Examples
+[discrete]
+==== Request Headers
-Write all schemas of an observation database to file `schema.sql`, using the
-_sqlite3(1)_ command-line tool:
+.GET
+[cols="1,9"]
+|===
+| Name | Values
-....
-$ sqlite3 /var/dmpack/observ.sqlite ".schema" > schema.sql
-....
+| Accept | `application/json`, `application/namelist`, `text/comma-separated-values`
+|===
-To dump an observation database as raw SQL to `observ.sql`:
-
-....
-$ sqlite3 /var/dmpack/observ.sqlite ".dump" > observ.sql
-....
+.POST
+[cols="2,8"]
+|===
+| Name | Values
-Dump only table `logs` of a log database:
+| Content-Encoding | `deflate`, `zstd` (optional)
+| Content-Type | `application/namelist`
+|===
-....
-$ sqlite3 /var/dmpack/log.sqlite ".dump 'logs'" > log.sql
-....
+[discrete]
+==== Responses
-== System Configuration [[system-configuration]]
+.GET
+[cols="1,9"]
+|===
+| Status | Description
-Additional changes to the system configuration should be considered to prevent
-issues while conducting a long-term monitoring.
+| `200` | Sensor is returned.
+| `400` | Invalid request.
+| `401` | Unauthorised.
+| `404` | Sensor not found.
+| `500` | Server error.
+| `503` | Database error.
+|===
-=== Time Zone
+.POST
+[cols="1,9"]
+|===
+| Status | Description
-The local time zone of the sensor client should be set to a zone without summer
-daylight-saving. For instance, time zone `Europe/Berlin` implies Central
-European Summer Time (CEST), which is usually not desired for long-term
-observations, as it leads to time jumps. Instead, use time zone `GMT+1` or `UTC`
-in this case.
+| `201` | Sensor was accepted.
+| `400` | Invalid request or payload.
+| `401` | Unauthorised.
+| `409` | Sensor exists in database.
+| `413` | Payload too large.
+| `415` | Invalid payload format.
+| `500` | Server error.
+| `503` | Database error.
+|===
-==== FreeBSD
+[discrete]
+==== Example
-On FreeBSD, configure the time zone using:
+Return sensor `dummy-sensor` in JSON format:
 ....
-# tzsetup
+$ curl -s -u : --header "Accept: application/json" \
+  "http://localhost/api/v1/sensor?id=dummy-sensor"
 ....
-==== Linux
+=== Read or Create Target [[api-target]]
-On Linux, list all time zones and set the preferred one with _timedatectl(1)_:
+Returns target of given id in <>,
+<>, or <> format from
+database.
-....
-# timedatectl list-timezones
-# timedatectl set-timezone Etc/GMT+1
-....
+On POST, adds target in Namelist format to database. Optionally, the payload
+may be deflate or zstd compressed. The API returns HTTP 201 Created if the
+target was accepted.
-=== Time Synchronisation
+[discrete]
+==== Endpoint
-The system time should be updated periodically by synchronising it with network
-time servers. A Network Time Protocol (NTP) client has to be installed and
-configured to enable the synchronisation.
+* `/api/v1/target`
+* `/api/v1/target?id=`
-==== FreeBSD
+[discrete]
+==== HTTP Methods
-Set the current date and time intially by passing the IP or FQDN of the NTP
-server to _ntpdate(1)_:
+* GET
+* POST
-....
-# ntpdate -b ptbtime1.ptb.de
-....
+[discrete]
+==== Request Parameters
-The NTP daemon _ntpd(8)_ is configured through file `/etc/ntp.conf`. If
-favoured, we can replace the existing NTP server pool `0.freebsd.pool.ntp.org`
-with a single server, for example:
+[cols="3,2,5"]
+|===
+| GET Parameter | Type | Description
-....
-server ptbtime1.ptb.de iburst
-....
+| `id` | string | Target id.
+|===
-Add the following entries to `/etc/rc.conf`:
+[discrete]
+==== Request Headers
-....
-ntpd_enable="YES"
-ntpd_sync_on_start="YES"
-ntpd_flags="-g"
-....
+.GET
+[cols="1,9"]
+|===
+| Name | Values
-Start the _ntpd(8)_ service:
+| Accept | `application/json`, `application/namelist`, `text/comma-separated-values`
+|===
-....
-# service ntpd start
-....
+.POST
+[cols="2,8"]
+|===
+| Name | Values
-==== Linux
+| Content-Encoding | `deflate`, `zstd` (optional)
+| Content-Type | `application/namelist`
+|===
-On Debian Linux, install the NTP package:
+[discrete]
+==== Responses
-....
-# apt install ntp
-....
+.GET
+[cols="1,9"]
+|===
+| Status | Description
-Query the NTP servers to synchronise with:
+| `200` | Target is returned.
+| `400` | Invalid request.
+| `401` | Unauthorised.
+| `404` | Target not found.
+| `500` | Server error.
+| `503` | Database error.
+|===
-....
-# ntpq -p
-....
+.POST
+[cols="1,9"]
+|===
+| Status | Description
-The system time should be updated now:
+| `201` | Target was accepted.
+| `400` | Invalid request or payload.
+| `409` | Target exists in database.
+| `413` | Payload too large.
+| `415` | Invalid payload format.
+| `500` | Server error.
+| `503` | Database error.
+|===
-....
-# date -R
-....
+[discrete]
+==== Example
-On error, try to reconfigure the NTP service:
+Return target `dummy-target` in JSON format:
 ....
-# dpkg-reconfigure ntp
+$ curl -s -u : --header "Accept: application/json" \
+  "http://localhost/api/v1/target?id=dummy-target"
 ....
-=== Power Saving
-
-On Linux, power saving for USB devices may be enabled by default. This can cause
-issues if sensors are attached through an USB adapter. USB power saving is
-enabled if the kernel boot parameter `usbcore.autosuspend` is not `-1`:
+== GeoCOM API [[geocom-api]]
-....
-# cat /sys/module/usbcore/parameters/autosuspend
-2
-....
+The official GeoCOM API is divided into the following sub-systems:
-We can update the boot loader to turn auto-suspend off. Edit `/etc/default/grub`
-and change `GRUB_CMDLINE_LINUX_DEFAULT` to:
+[cols="1,7"]
+|===
+| Acronym | Name
-....
-GRUB_CMDLINE_LINUX_DEFAULT="quiet usbcore.autosuspend=-1"
-....
+| `AUT` | Automation
+| `BAP` | Basic Applications
+| `BMM` | Basic Man–Machine Interface
+| `COM` | Communication Settings
+| `CSV` | Central Services
+| `EDM` | Electronic Distance Measurement
+| `FTR` | File Transfer
+| `IMG` | Image Processing
+| `MOT` | Motorisation
+| `SUP` | Supervisor
+| `TMC` | Theodolite Measurement and Calculation
+|===
-Then, update the boot loader:
+=== Enumerators [[geocom-api-enums]]
-....
-# update-grub
-....
+All GeoCOM named types and enumerators supported by DMPACK start with prefix
+`GEOCOM_`.
-The system has to be rebooted for the changes to take effect.
+[cols="5,7"]
+|===
+| Name | Description
-=== Message Queues
+| `GEOCOM_IOS_BEEP_STDINTENS` | Standard intensity of beep expressed as percentage.
+| `GEOCOM_AUT_CLOCKWISE` | Direction clockwise.
+| `GEOCOM_AUT_ANTICLOCKWISE` | Direction counter-clockwise.
+|===
-The operating system must have POSIX message queues enabled to run DMPACK
-programs on sensor nodes.
+.GEOCOM_AUT_ADJMODE: Fine-adjust position mode [[geocom-api-aut-adjmode]]
+[cols="5,7"]
+|===
+| Name | Description
-==== FreeBSD
+| `GEOCOM_AUT_NORM_MODE` | Angle tolerance.
+| `GEOCOM_AUT_POINT_MODE` | Point tolerance.
+| `GEOCOM_AUT_DEFINE_MODE` | System independent positioning tolerance.
+|===
-On FreeBSD, make sure the kernel module `mqueuefs` is loaded, and the message
-queue file system is mounted:
+.GEOCOM_AUT_ATRMODE: Automatic target recognition mode [[geocom-api-aut-atrmode]]
+[cols="5,7"]
+|===
+| Name | Description
-....
-# kldstat -m mqueuefs
-Id Refs Name
-522 1 mqueuefs
-....
+| `GEOCOM_AUT_POSITION` | Positioning to Hz and V angle.
+| `GEOCOM_AUT_TARGET` | Positioning to a target in the env. of the Hz and V angle.
+|===
-Otherwise, we can simply load and mount the file system:
+.GEOCOM_AUT_POSMODE: Position precision [[geocom-api-aut-posmode]]
+[cols="5,7"]
+|===
+| Name | Description
-....
-# kldload mqueuefs
-# mkdir -p /mnt/mqueue
-# mount -t mqueuefs null /mnt/mqueue
-....
+| `GEOCOM_AUT_NORMAL` | Fast positioning mode.
+| `GEOCOM_AUT_PRECISE` | Exact positioning mode.
+| `GEOCOM_AUT_FAST` | For TM30/TS30.
+|===
-To load messages queues at system start, add the module `mqueuefs` to
-`/etc/rc.conf`, and the file system to `/etc/fstab`:
+.GEOCOM_BAP_ATRSETTING: ATR low-vis mode definition [[geocom-api-bap-atrsetting]]
+[cols="5,7"]
+|===
+| Name | Description
-....
-# sysrc kld_list+="mqueuefs"
-# echo "null /mnt/mqueue mqueuefs rw 0 0" >> /etc/fstab
-....
+| `GEOCOM_BAP_ATRSET_NORMAL` | ATR is using no special flags or modes.
+| `GEOCOM_BAP_ATRSET_LOWVIS_ON` | ATR low-vis mode on.
+| `GEOCOM_BAP_ATRSET_LOWVIS_AON` | ATR low-vis mode always on.
+| `GEOCOM_BAP_ATRSET_SRANGE_ON` | ATR high-reflectivity mode on.
+| `GEOCOM_BAP_ATRSET_SRANGE_AON` | ATR high-reflectivity mode always on.
+|===
-Additionally, we may increase the system limits of POSIX message queues with
-_sysctl(8)_, or in `/etc/sysctl.conf`. The defaults are:
-
-....
-# sysctl kern.mqueue.maxmsg
-kern.mqueue.maxmsg: 100
-# sysctl kern.mqueue.maxmsgsize
-kern.mqueue.maxmsgsize: 16384
-....
-
-The maximum message size has to be at least 16384 bytes.
-
-==== Linux
-
-The POSIX message queue file system should be mounted by default on Linux. If
-not, run:
-
-....
-# mkdir -p /dev/mqueue
-# mount -t mqueue none /dev/mqueue
-....
-
-Set the maximum number of messages and the maximum message size to some
-reasonable values:
-
-....
-# sysctl fs.mqueue.msg_max=100
-# sysctl fs.mqueue.msgsize_max=16384
-....
-
-The maximum message size has to be at least 16384 bytes.
-
-=== Cron
-
-On Unix-like operating system, link:https://en.wikipedia.org/wiki/Cron[cron] is
-usually used to run jobs periodically. For instance, in order to update an XML
-feed, or to generate HTML reports, add a schedule of the task to perform to the
-_crontab(5)_ file of a local user.
-
-Edit the cron jobs of user `www` with _crontab(1)_:
-
-....
-# crontab -u www -e
-....
-
-The following _crontab(5)_ entry adds a task to generate reports every hour,
-using utility script `mkreport.sh`:
-
-[source,crontab]
-....
-SHELL=/bin/sh
-MAILTO=/dev/null
-# Create reports every hour, suppress logging.
-@hourly -q /usr/local/share/dmpack/mkreport.sh
-....
-
-Status mails and logging are disabled. The shell script `mkreport.sh` must have
-the execution bits set. Modify the script according to your set-up.
-
-Additionally, we may update an Atom XML feed of logs by running <> every
-five minutes:
-
-[source,crontab]
-....
-*/5 * * * * -q /usr/local/bin/dmfeed --config /usr/local/etc/dmpack/dmfeed.conf
-....
-
-The feed is updated only if new logs have arrived in the meantime, unless option
-_force_ is enabled.
-
-== GeoCOM API [[geocom-api]]
-
-The official GeoCOM API is divided into the following sub-systems:
-
-[cols="1,4"]
-|===
-| Acronym | Name
-
-| `AUT` | Automation
-| `BAP` | Basic Applications
-| `BMM` | Basic Man–Machine Interface
-| `COM` | Communication Settings
-| `CSV` | Central Services
-| `EDM` | Electronic Distance Measurement
-| `FTR` | File Transfer
-| `IMG` | Image Processing
-| `MOT` | Motorisation
-| `SUP` | Supervisor
-| `TMC` | Theodolite Measurement and Calculation
-|===
-
-=== Types [[geocom-api-types]]
-
-All GeoCOM named types and enumerators supported by DMPACK start with prefix
-`GEOCOM_`.
-
-[%autowidth]
-|===
-| Name | Description
-
-| `GEOCOM_IOS_BEEP_STDINTENS` | Standard intensity of beep expressed as percentage.
-| `GEOCOM_AUT_CLOCKWISE` | Direction close-wise.
-| `GEOCOM_AUT_ANTICLOCKWISE` | Direction counter clock-wise.
-|===
-
-.GEOCOM_AUT_ADJMODE: Fine-adjust position mode [[geocom-api-aut-adjmode]]
-[%autowidth]
-|===
-| Name | Description
-
-| `GEOCOM_AUT_NORM_MODE` | Angle tolerance.
-| `GEOCOM_AUT_POINT_MODE` | Point tolerance.
-| `GEOCOM_AUT_DEFINE_MODE` | System independent positioning tolerance.
-|===
-
-.GEOCOM_AUT_ATRMODE: Automatic target recognition mode [[geocom-api-aut-atrmode]]
-[%autowidth]
-|===
-| Name | Description
-
-| `GEOCOM_AUT_POSITION` | Positioning to Hz and V angle.
-| `GEOCOM_AUT_TARGET` | Positioning to a target in the env. of the Hz and V angle.
-|===
-
-.GEOCOM_AUT_POSMODE: Position precision [[geocom-api-aut-posmode]]
-[%autowidth]
-|===
-| Name | Description
-
-| `GEOCOM_AUT_NORMAL` | Fast positioning mode.
-| `GEOCOM_AUT_PRECISE` | Exact positioning mode.
-| `GEOCOM_AUT_FAST` | For TM30/TS30.
-|===
-
-.GEOCOM_BAP_ATRSETTING: ATR low-vis mode definition [[geocom-api-bap-atrsetting]]
-[%autowidth]
-|===
-| Name | Description
-
-| `GEOCOM_BAP_ATRSET_NORMAL` | ATR is using no special flags or modes.
-| `GEOCOM_BAP_ATRSET_LOWVIS_ON` | ATR low-vis mode on.
-| `GEOCOM_BAP_ATRSET_LOWVIS_AON` | ATR low-vis mode always on. -| `GEOCOM_BAP_ATRSET_SRANGE_ON` | ATR high-reflectivity mode on. -| `GEOCOM_BAP_ATRSET_SRANGE_AON` | ATR high-reflectivity mode always on. -|=== - -.GEOCOM_BAP_MEASURE_PRG: Measurement modes [[geocom-api-bap-measure-prg]] -[%autowidth] -|=== -| Name | Description +.GEOCOM_BAP_MEASURE_PRG: Measurement modes [[geocom-api-bap-measure-prg]] +[cols="5,7"] +|=== +| Name | Description | `GEOCOM_BAP_NO_MEAS` | No measurements, take last one. | `GEOCOM_BAP_NO_DIST` | No distance measurement, angles only. @@ -4814,7 +4211,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_BAP_PRISMTYPE: Prism type definition [[geocom-api-bap-prismtype]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4834,7 +4231,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_BAP_REFLTYPE: Reflector type definition [[geocom-api-bap-refltype]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4844,7 +4241,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_BAP_TARGET_TYPE: Target type definition [[geocom-api-bap-target-type]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4853,7 +4250,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_BAP_USER_MEASPRG: Distance measurement programs [[geocom-api-bap-user-measprg]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4872,7 +4269,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_COM_BAUD_RATE: Baud rate [[geocom-api-com-baud-rate]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4886,7 +4283,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_COM_FORMAT: Transmission data format [[geocom-api-com-format]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4895,7 +4292,7 @@ All GeoCOM 
named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_CSV_POWER_PATH: Power sources [[geocom-api-csv-power-path]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4904,7 +4301,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_COM_TPS_STARTUP_MODE: Start mode [[geocom-api-tps-startup-mode]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4913,7 +4310,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_COM_TPS_STOP_MODE: Stop mode [[geocom-api-tps-stop-mode]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4922,7 +4319,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_EDM_EGLINTENSITY_TYPE: Intensity of Electronic Guidelight (EGL) [[geocom-api-egl-intensity-type]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4933,7 +4330,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_EDM_MODE: EDM measurement mode [[geocom-api-edm-mode]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4955,7 +4352,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_FTR_DEVICETYPE: Device type [[geocom-api-ftr-devicetype]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4964,7 +4361,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_FTR_FILETYPE: File type [[geocom-api-ftr-filetype]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4973,7 +4370,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_IMG_MEM_TYPE: Memory device type [[geocom-api-img-mem-type]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -4982,7 +4379,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_MOT_LOCK_STATUS: Lock conditions [[geocom-api-mot-lock-status]] -[%autowidth] 
+[cols="5,7"] |=== | Name | Description @@ -4992,7 +4389,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_MOT_MODE: Controller configuration [[geocom-api-mot-mode]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -5005,7 +4402,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_MOT_STOPMODE: Controller stop mode [[geocom-api-mot-stopmode]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -5014,7 +4411,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_SUP_AUTO_POWER: Automatic shutdown mechanism for the system [[geocom-api-sup-auto-power]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -5023,7 +4420,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_TMC_FACE: Actual face [[geocom-api-tmc-face]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -5032,7 +4429,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_TMC_FACE_DEF: Face position [[geocom-api-tmc-face-def]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -5041,7 +4438,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_TMC_INCLINE_PRG: Inclination sensor measurement program [[geocom-api-tmc-incline-prg]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -5051,7 +4448,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_TMC_MEASURE_PRG: TMC measurement mode [[geocom-api-tmc-measure-prg]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -5066,7 +4463,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_TPS_DEVICE_CLASS: TPS device precision class [[geocom-api-tps-device-class]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -5092,7 +4489,7 @@ All GeoCOM named types and enumerators supported by DMPACK 
start with prefix |=== .GEOCOM_TPS_DEVICE_TYPE: TPS device configuration type [[geocom-api-tps-device-type]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -5115,7 +4512,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix |=== .GEOCOM_TPS_REFLESS_CLASS: Reflectorless class [[geocom-api-tps-refless-class]] -[%autowidth] +[cols="5,7"] |=== | Name | Description @@ -5130,7 +4527,7 @@ All GeoCOM named types and enumerators supported by DMPACK start with prefix All GeoCOM return codes start with prefix `GRC_`. -[%autowidth] +[cols="2,8,12"] |=== | Code | Name | Description @@ -5433,7 +4830,8 @@ jobs = { target_id = "p01", receivers = { "dmdb" }, requests = { - geocom_set_position(gon2rad(0.0), gon2rad(100.0), GEOCOM_AUT_NORMAL, GEOCOM_AUT_TARGET), + geocom_set_position(gon2rad(0.0), gon2rad(100.0), + GEOCOM_AUT_NORMAL, GEOCOM_AUT_TARGET), geocom_do_measure(GEOCOM_TMC_DEF_DIST, GEOCOM_TMC_AUTO_INC), geocom_get_simple_measurement(3000, GEOCOM_TMC_AUTO_INC) } @@ -5448,7 +4846,7 @@ performed observations are forwarded to <>. === Parameters .Named log level parameters -[cols="1,4,8"] +[cols="1,3,14"] |=== | # | Name | Level @@ -5461,7 +4859,7 @@ performed observations are forwarded to <>. |=== .Named response value type parameters -[cols="1,6,8"] +[cols="1,5,12"] |=== | # | Name | Type @@ -5508,7 +4906,7 @@ parameters with `GEOCOM_`. The names of the requests are set to the name of the respective function without prefix. .Comparison between the DMPACK Lua API and the official GeoCOM API -[cols="3,2"] +[cols="4,5"] |=== | Lua API | GeoCOM API @@ -5630,31 +5028,37 @@ respective function without prefix. | <> | `IMG_TakeTccImage` |=== +[discrete] ==== geocom_abort_download() [[lua-api-geocom-abort-download]] Returns request for *FTR_AbortDownload* procedure. Creates request to abort or end the file download command. +[discrete] ==== geocom_abort_list() [[lua-api-geocom-abort-list]] Returns request for *FTR_AbortList* procedure. 
Creates request to abort or end the file list command. +[discrete] ==== geocom_beep_alarm() [[lua-api-geocom-beep-alarm]] Returns request for *BMM_BeepAlarm* procedure. Creates request to output an alarm signal (triple beep). +[discrete] ==== geocom_beep_normal() [[lua-api-geocom-beep-normal]] Returns request for *BMM_BeepNormal* procedure. Creates request to output an alarm signal (single beep). +[discrete] ==== geocom_beep_off() [[lua-api-geocom-beep-off]] Returns request for *IOS_BeepOff* procedure. Creates request to stop an active beep signal. +[discrete] ==== geocom_beep_on(intensity) [[lua-api-geocom-beep-on]] * `intensity` (_integer_) – Intensity of the beep signal. @@ -5663,6 +5067,7 @@ Returns request for *IOS_BeepOn* procedure. Creates request for continuous beep signal of given `intensity` from 0 to 100. The constant `GEOCOM_IOS_BEEP_STDINTENS` sets the intensity to 100. +[discrete] ==== geocom_change_face(pos_mode, atr_mode) [[lua-api-geocom-change-face]] * `pos_mode` (_integer_) – Position mode (<>). @@ -5680,6 +5085,7 @@ If `atr_mode` is `GEOCOM_AUT_POSITION`, the instrument uses conventional positioning to other face. If set to `GEOCOM_AUT_TARGET`, it tries to position into a target in the destination area. This mode requires activated ATR. +[discrete] ==== geocom_delete(device_type, file_type, day, month, year, file_name) [[lua-api-geocom-delete]] * `device_type` (_integer_) – Internal memory or memory card (<>). @@ -5693,6 +5099,7 @@ Returns request for *FTR_Delete* procedure. Creates request for deleting one or more files. Wildcards may be used to delete multiple files. If the deletion date is valid, only files older than the deletion date are deleted. +[discrete] ==== geocom_do_measure(tmc_prog, inc_mode) [[lua-api-geocom-do-measure]] * `tmc_prog` (_integer_) – Measurement program (<>). @@ -5703,6 +5110,7 @@ distance measurement. This command does not return any values. 
If a distance measurement is performed in measurement program `GEOCOM_TMC_DEF_DIST`, the distance sensor will work in the set EDM mode. +[discrete] ==== geocom_download(block_number) [[lua-api-geocom-download]] * `block_number` (_integer_) – Block number to download (0 – 65535). @@ -5714,6 +5122,7 @@ process will be aborted if the block number is set to 0. The maximum block number is 65535. The file size is therefore limited to 28 MiB. The function should not be used inside of configuration files. +[discrete] ==== geocom_fine_adjust(search_hz, search_v) [[lua-api-geocom-fine-adjust]] * `search_hz` (_number_) – Search range, Hz axis [rad]. @@ -5738,6 +5147,7 @@ The tolerance settings have no influence to this operation. The tolerance settings and the ATR precision depend on the instrument class and the used EDM mode. +[discrete] ==== geocom_get_angle(inc_mode) [[lua-api-geocom-get-angle]] * `inc_mode` (_integer_) – Inclination measurement mode (<>). @@ -5746,6 +5156,7 @@ Returns request for *TMC_GetAngle5* procedure. Creates request for returning a simple angle measurement. The function starts an angle measurement and returns the results. +[discrete] ==== geocom_get_angle_complete(inc_mode) [[lua-api-geocom-get-angle-complete]] * `inc_mode` (_integer_) – Inclination measurement mode (<>). @@ -5754,42 +5165,50 @@ Returns request for *TMC_GetAngle1* procedure. Creates request for returning a complete angle measurement. The function starts an angle and, depending on the configuration, an inclination measurement, and returns the results. +[discrete] ==== geocom_get_angle_correction() [[lua-api-geocom-get-angle-correction-status]] Returns request for *TMC_GetAngSwitch* procedure. Creates request for getting the angular correction status. +[discrete] ==== geocom_get_atmospheric_correction() [[lua-api-geocom-get-atmospheric-correction]] Returns request for *TMC_GetAtmCorr* procedure. 
Creates request for getting the atmospheric correction parameters. +[discrete] ==== geocom_get_atmospheric_ppm() [[lua-api-geocom-get-atmospheric-ppm]] Returns request for *TMC_GetAtmPpm* procedure. Creates request for getting the atmospheric ppm correction factor. +[discrete] ==== geocom_get_atr_error() [[lua-api-geocom-get-atr-error]] Returns request for *TMC_IfDataAzeCorrError* procedure. Creates request for getting the ATR error status. +[discrete] ==== geocom_get_atr_setting() [[lua-api-geocom_get-atr-setting]] Returns request for *BAP_GetATRSetting* procedure. Creates request for getting the current ATR low-vis mode. +[discrete] ==== geocom_get_binary_mode() [[lua-api-geocom_get-binary-mode]] Returns request for *COM_GetBinaryAvailable* procedure. Creates request for getting the binary attribute of the server. +[discrete] ==== geocom_get_config() [[lua-api-geocom_get-config]] Returns request for *SUP_GetConfig* procedure. Creates request for getting the power management configuration status. The power timeout specifies the time after which the device switches into the mode indicated by response `autopwr`. +[discrete] ==== geocom_get_coordinate(wait_time, inc_mode) [[lua-api-geocom-get-coordinate]] * `wait_time` (_integer_) – Delay to wait for the distance measurement to finish [msec]. @@ -5804,48 +5223,57 @@ point with the last distance. The argument `wait_time` specifies the delay to wait for the distance measurement to finish. Single and tracking measurements are supported. The quality of the result is returned in the GeoCOM return code. +[discrete] ==== geocom_get_date_time() [[lua-api-geocom-get-date-time]] Returns request for *CSV_GetDateTime* procedure. Creates request for getting the current date and time of the instrument. A possible response may look like `%R1P,0,0:0,1996,'07','19','10','13','2f'`. +[discrete] ==== geocom_get_date_time_centi() [[lua-api-geocom-get-date-time-centi]] Returns request for *CSV_GetDateTimeCentiSec* procedure.
Creates request for getting the current date and time of the instrument, including centiseconds. +[discrete] ==== geocom_get_device_config() [[lua-api-geocom-get-device-config]] Returns request for *CSV_GetDeviceConfig* procedure. Creates request for getting the instrument configuration. +[discrete] ==== geocom_get_double_precision() [[lua-api-geocom-get-double-precision]] Returns request for *COM_GetDoublePrecision* procedure. Creates request for getting the double precision setting – the number of digits to the right of the decimal point – when double floating-point values are transmitted. +[discrete] ==== geocom_get_edm_mode() [[lua-api-geocom-get-edm-mode]] Returns request for *TMC_GetEdmMode* procedure. Creates request for getting the EDM measurement mode. +[discrete] ==== geocom_get_egl_intensity() [[lua-api-geocom-get-egl-intensity]] Returns request for *EDM_GetEglIntensity* procedure. Creates request for getting the value of the intensity of the electronic guide light (EGL). +[discrete] ==== geocom_get_face() [[lua-api-geocom-get-face]] Returns request for *TMC_GetFace* procedure. Creates request for getting the face of the current telescope position. +[discrete] ==== geocom_get_fine_adjust_mode() [[lua-api-geocom-get-fine-adjust-mode]] Returns request for *AUT_GetFineAdjustMode* procedure. Creates request for getting the fine adjustment positioning mode. +[discrete] ==== geocom_get_full_measurement(wait_time, inc_mode) [[lua-api-geocom-get-full-measurement]] * `wait_time` (_integer_) – Delay to wait for the distance measurement to finish [msec]. @@ -5859,21 +5287,25 @@ function ignores `wait_time` and returns the results immediately. If no valid distance is available, and the measurement unit is not activated, the angle measurement result is returned after the waiting time. +[discrete] ==== geocom_get_geocom_version() [[lua-api-geocom-get-geocom-version]] Returns request for *COM_GetSWVersion* procedure.
Creates request for getting the GeoCOM server software version. +[discrete] ==== geocom_get_geometric_ppm() [[lua-api-geocom-get-geometric-ppm]] Returns request for *TMC_GeoPpm* procedure. Creates request for getting the geometric ppm correction factor. +[discrete] ==== geocom_get_height() [[lua-api-geocom-get-height]] Returns request for *TMC_GetHeight* procedure. Creates request for getting the current reflector height. +[discrete] ==== geocom_get_image_config(mem_type) [[lua-api-geocom-get-image-config]] * `mem_type` (_integer_) – Memory device type (<>). @@ -5887,51 +5319,61 @@ combination of the following settings: * `4` – Two-times sub-sampling. * `8` – Four-times sub-sampling. +[discrete] ==== geocom_get_inclination_correction() [[lua-api-geocom-get-inclination-correction]] Returns request for *TMC_GetInclineSwitch* procedure. Creates request for getting the dual-axis compensator status. +[discrete] ==== geocom_get_inclination_error() [[lua-api-geocom-get-inclination-error]] Returns request for *TMC_IfDataIncCorrError* procedure. Creates request for getting the inclination error status. +[discrete] ==== geocom_get_instrument_name() [[lua-api-geocom-get-instrument-name]] Returns request for *CSV_GetInstrumentName* procedure. Creates request for getting the Leica-specific instrument name. +[discrete] ==== geocom_get_instrument_number() [[lua-api-geocom-get-instrument-number]] Returns request for *CSV_GetInstrumentNo* procedure. Creates request for getting the factory defined instrument number. +[discrete] ==== geocom_get_internal_temperature() [[lua-api-geocom-get-internal-temperature]] Returns request for *CSV_GetIntTemp* procedure. Creates request for getting the internal temperature of the instrument, measured on the mainboard side. +[discrete] ==== geocom_get_lock_status() [[lua-api-geocom-get-lock-status]] Returns request for *MOT_ReadLockStatus* procedure. Creates request for returning the condition of the Lock-In control.
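The status getters above can be combined into a periodic health-check job. The following configuration fragment is a minimal sketch only; the target ID and receiver name are illustrative and depend on the local set-up:

[source,lua]
....
-- Hypothetical job fragment: read instrument name, internal temperature,
-- and Lock-In status, and forward the responses to receiver "dmdb".
{
  target_id = "status",
  receivers = { "dmdb" },
  requests = {
    geocom_get_instrument_name(),
    geocom_get_internal_temperature(),
    geocom_get_lock_status()
  }
}
....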
+[discrete] ==== geocom_get_measurement_program() [[lua-api-geocom-get-measurement-program]] Returns request for *BAP_GetMeasPrg* procedure. Creates request for getting the distance measurement mode of the instrument. +[discrete] ==== geocom_get_power() [[lua-api-geocom-get-power]] Returns request for *CSV_CheckPower* procedure. Creates request for checking the available power. +[discrete] ==== geocom_get_prism_constant() [[lua-api-geocom-get-prism-constant]] Returns request for *TMC_GetPrismCorr* procedure. Creates request for getting the prism constant. +[discrete] ==== geocom_get_prism_definition(prism_type) [[lua-api-geocom-get-prism-definition]] * `prism_type` (_integer_) – Prism type (<>). @@ -5939,16 +5381,19 @@ prism constant. Returns request for *BAP_GetPrismDef* procedure. Creates request for getting the default prism definition. +[discrete] ==== geocom_get_prism_type() [[lua-api-geocom-get-prism-type]] Returns request for *TMC_GetPrismType* procedure. Creates request for getting the default prism type. +[discrete] ==== geocom_get_prism_type_v2() [[lua-api-geocom-get-prism-type-v2]] Returns request for *TMC_GetPrismType2* procedure. Creates request for getting the default or user prism type. +[discrete] ==== geocom_get_quick_distance() [[lua-api-geocom-get-quick-distance]] Returns request for *TMC_QuickDist* procedure. Creates request for returning the @@ -5958,17 +5403,20 @@ the slope distance, but no coordinates. If no distance could be measured, only angles and an error code are returned. A measurement may be aborted by calling <>. +[discrete] ==== geocom_get_reduced_atr_fov() [[lua-api-geocom-get-reduced-atr-fov]] Returns request for *BAP_GetRedATRFov* procedure. Creates request for getting the reduced ATR field of view. +[discrete] ==== geocom_get_reflectorless_class() [[lua-api-geocom-get-reflectorless-class]] Returns request for *CSV_GetReflectorlessClass* procedure. Creates request for getting the RL type. 
The function returns the class of the reflectorless and long-range distance measurement of the instrument. +[discrete] ==== geocom_get_refraction_mode() [[lua-api-geocom-get-refraction_mode]] Returns request for *TMC_GetRefractiveMethod* procedure. Creates request for @@ -5976,12 +5424,14 @@ getting the refraction model. The function is used to get the current refraction model. Changing the method is not indicated on the interface of the instrument. +[discrete] ==== geocom_get_search_area() [[lua-api-geocom-get-search-area]] Returns request for *AUT_GetSearchArea* procedure. Creates request for getting the dimensions of the PowerSearch window. This command is valid for all instruments, but has only effects for instruments equipped with PowerSearch. +[discrete] ==== geocom_get_signal() [[lua-api-geocom-get-signal]] Returns request for *TMC_GetSignal* procedure. Creates request for getting the @@ -5991,6 +5441,7 @@ measurement mode is activated. Start the signal measurement with After the measurement, the EDM must be switched off with mode `GEOCOM_TMC_CLEAR`. While measuring, there is no angle data available. +[discrete] ==== geocom_get_simple_coordinates(wait_time, inc_mode) [[lua-api-geocom-get-simple-coordinates]] * `wait_time` (_integer_) – Delay to wait for the distance measurement to finish [msec]. @@ -6003,6 +5454,7 @@ coordinates are set to 0.0, and an error is returned. The coordinate calculation requires inclination results. The argument `inc_mode` sets the inclination measurement mode. +[discrete] ==== geocom_get_simple_measurement(wait_time, inc_mode) [[lua-api-geocom-get-simple-measurement]] * `wait_time` (_integer_) – Delay to wait for the distance measurement to finish [msec]. @@ -6013,22 +5465,26 @@ angles and distance measurement data. The argument `wait_time` sets the maximum time to wait for a valid distance. If a distance is available, the wait time is ignored. 
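In practice, a distance measurement is started first, and the coordinate or measurement getters are called afterwards with a waiting time. A hedged sketch of such a request sequence, with illustrative target ID, receiver, and waiting time:

[source,lua]
....
-- Hypothetical job fragment: start a default distance measurement, then
-- wait up to 3000 msec for valid coordinates (automatic inclination mode).
{
  target_id = "p02",
  receivers = { "dmdb" },
  requests = {
    geocom_do_measure(GEOCOM_TMC_DEF_DIST, GEOCOM_TMC_AUTO_INC),
    geocom_get_simple_coordinates(3000, GEOCOM_TMC_AUTO_INC)
  }
}
....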
+[discrete] ==== geocom_get_slope_distance_correction() [[lua-api-geocom-get-slope-distance-correction]] Returns request for *TMC_GetSlopeDistCorr* procedure. The function returns the total ppm value (atmospheric ppm + geometric ppm) plus the current prism constant. +[discrete] ==== geocom_get_software_version() [[lua-api-geocom-get-software-version]] Returns request for *CSV_GetSWVersion* procedure. Creates request for getting the system software version of the instrument. +[discrete] ==== geocom_get_station() [[lua-api-geocom-get-station]] Returns request for *TMC_GetStation* procedure. Creates request for getting the station coordinates of the instrument. +[discrete] ==== geocom_get_target_type() [[lua-api-geocom-get-target-type]] Returns request for *BAP_GetTargetType* procedure. Creates request for getting @@ -6036,27 +5492,32 @@ the EDM type. The function returns the current EDM type (`GEOCOM_BAP_TARGET_TYPE`) for distance measurements: reflector (IR) or reflectorless (RL). +[discrete] ==== geocom_get_timeout() [[lua-api-geocom-get-timeout]] Returns request for *AUT_ReadTimeout* procedure. Creates request for getting the timeout for positioning. The function returns the maximum time to perform positioning. +[discrete] ==== geocom_get_tolerance() [[lua-api-geocom-get-tolerance]] Returns request for *AUT_ReadTol* procedure. The function returns the positioning tolerances of the Hz and V instrument axis. +[discrete] ==== geocom_get_user_atr_mode() [[lua-api-geocom-get-user-atr-mode]] Returns request for *AUS_GetUserAtrState* procedure. Creates request for getting the status of the ATR mode. +[discrete] ==== geocom_get_user_lock_mode() [[lua-api-geocom-get-user-lock-mode]] Returns request for *AUS_GetUserLockState* procedure. Creates request for getting the status of the _lock_ mode. +[discrete] ==== geocom_get_user_prism_definition(name) [[lua-api-geocom-get-user-prism-definition]] * `name` (_string_) – Prism name. 
@@ -6064,11 +5525,13 @@ the status of the _lock_ mode. Returns request for *BAP_GetUserPrismDef* procedure. Creates request for getting the user prism definition. +[discrete] ==== geocom_get_user_spiral() [[lua-api-geocom-get-user-spiral]] Returns request for *AUT_GetUserSpiral* procedure. The function returns the current dimensions of the searching spiral. Requires at least a TCA instrument. +[discrete] ==== geocom_list(next) [[lua-api-geocom-list]] * `next` (_bool_) – First or next entry. @@ -6076,6 +5539,7 @@ current dimensions of the searching spiral. Requires at least a TCA instrument. Returns request for *FTR_List* procedure. Creates request for listing file information. +[discrete] ==== geocom_lock_in() [[lua-api-geocom-lock-in]] Returns request for *AUT_LockIn* procedure. Creates request for starting the @@ -6085,6 +5549,7 @@ mode has been activated through <> call must have finished successfully before executing this function. +[discrete] ==== geocom_measure_distance_angle(dist_mode) [[lua-api-geocom-measure-distance-angle]] * `dist_mode` (_integer_) – Distance measurement program (`GEOCOM_BAP_MEASURE_PRG`). @@ -6095,11 +5560,13 @@ and a single distance depending on the distance measurement mode `dist_mode`. It is not suited for continuous measurements (_lock_ mode and TRK mode), and uses the current automation settings. +[discrete] ==== geocom_null() [[lua-api-geocom-null]] Returns request for *COM_NullProc* procedure. Creates request for checking the communication. +[discrete] ==== geocom_ps_enable_range(enabled) [[lua-api-geocom-ps-enable-range]] * `enabled` (_bool_) – Enable PowerSearch. @@ -6110,6 +5577,7 @@ limits set by API call <> (requires GeoCOM robotic licence). If `enabled` is `false`, the default range is set to ≤ 400 m. +[discrete] ==== geocom_ps_search_next(direction, swing) [[lua-api-geocom-ps-search-next]] * `direction` (_integer_) – Searching direction (`GEOCOM_AUT_CLOCKWISE` or `GEOCOM_AUT_ANTICLOCKWISE`).
@@ -6122,6 +5590,7 @@ PowerSearch window of into account. Use API call <> first. +[discrete] ==== geocom_ps_search_window() [[lua-api-geocom-ps-search-window]] Returns request for *AUT_PS_SearchWindow* procedure. Creates request for starting @@ -6130,6 +5599,7 @@ PowerSearch. The function starts PowerSearch in the window defined by API calls <> (requires GeoCOM robotic licence). +[discrete] ==== geocom_ps_set_range(min_dist, max_dist) [[lua-api-geocom-ps-set-range]] * `min_dist` (_integer_) – Min. distance to prism (≥ 0) [m]. @@ -6138,6 +5608,7 @@ licence). Returns request for *AUT_PS_SetRange* procedure. Creates request for setting the PowerSearch range. +[discrete] ==== geocom_search(search_hz, search_v) [[lua-api-geocom-search]] * `search_hz` (_number_) – Horizontal search region [rad]. @@ -6153,11 +5624,13 @@ afterwards. If the search range of the API function <> is expanded, target search and fine positioning are done in one step. +[discrete] ==== geocom_search_target() [[lua-api-geocom-search-target]] Returns request for *BAP_SearchTarget* procedure. Creates request for searching a target. The function searches for a target in the ATR search window. +[discrete] ==== geocom_set_angle_correction(incline, stand_axis, collimation, tilt_axis) [[lua-api-geocom-set-angle-correction]] * `incline` (_bool_) – Enable inclination correction. @@ -6168,6 +5641,7 @@ target. The function searches for a target in the ATR search window. Returns request for *TMC_SetAngSwitch* procedure. Creates request for turning angle corrections on or off. +[discrete] ==== geocom_set_atmospheric_correction(lambda, pressure, dry_temp, wet_temp) [[lua-api-geocom-set-atmospheric-correction]] * `lambda` (_number_) – Wave-length of EDM transmitter [m]. @@ -6180,6 +5654,7 @@ atmospheric correction parameters. The argument `lambda` should be queried with API call <>. 
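For example, the atmospheric correction parameters may be updated from current weather readings before a measurement sequence. The sketch below uses placeholder values; the actual wave-length depends on the EDM transmitter of the instrument and should be read back first, as noted above:

[source,lua]
....
-- Placeholder values: EDM wave-length [m], pressure, dry and wet
-- air temperature. Replace with readings from a local weather station.
geocom_set_atmospheric_correction(0.000658, 1013.25, 12.0, 10.5)
....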
+[discrete] ==== geocom_set_atmospheric_ppm(atm_ppm) [[lua-api-geocom-set-atmospheric-ppm]] * `atm_ppm` (_number_) – Atmospheric ppm correction factor [ppm]. @@ -6187,6 +5662,7 @@ API call Returns request for *BAP_SetAtmPpm* procedure. Creates request for setting the atmospheric ppm correction factor. +[discrete] ==== geocom_set_atr_mode(atr_mode) [[lua-api-geocom-set-atr-mode]] * `atr_mode` (_integer_) – ATR low-vis mode (<>). @@ -6194,6 +5670,7 @@ the atmospheric ppm correction factor. Returns request for *BAP_SetATRSetting* procedure. Creates request for setting the ATR low-vis mode. +[discrete] ==== geocom_set_binary_mode(enabled) [[lua-api-geocom-set-binary-mode]] * `enabled` (_bool_) – Enable binary communication. @@ -6202,6 +5679,7 @@ Returns request for *COM_SetBinaryAvailable* procedure. Creates request for setting the binary attribute of the server. The function sets the ability of the GeoCOM server to handle binary communication (not supported by DMPACK). +[discrete] ==== geocom_set_config(auto_power, timeout) [[lua-api-geocom-set-config]] * `auto_power` (_integer_) – Power-off mode (<>). @@ -6213,6 +5691,7 @@ which the instrument switches into the mode `auto_power` when no user activity occurred (key press, GeoCOM communication). The value must be between 60,000 ms (1 min) and 6,000,000 ms (100 min). +[discrete] ==== geocom_set_date_time(year, month, day, hour, minute, second) [[lua-api-geocom-set-date-time]] * `year` (_integer_) – Year (`YYYY`). @@ -6225,6 +5704,7 @@ occured (key press, GeoCOM communication). The value must be between 60,000 m/s Returns request for *CSV_SetDateTime* procedure. Creates request for setting the date and time of the instrument. +[discrete] ==== geocom_set_distance(slope_dist, height_offset, inc_mode) [[lua-api-geocom-set-distance]] * `slope_dist` (_number_) – Slope distance [m]. @@ -6238,6 +5718,7 @@ the coordinates of the target. The vertical angle is corrected to π/2 or 3π/2, depending on the face of the instrument.
The previously measured distance is cleared. +[discrete] ==== geocom_set_double_precision(ndigits) [[lua-api-geocom-set-double-precision]] * `ndigits` (_integer_) – Number of digits right to the comma. @@ -6249,6 +5730,7 @@ setting is only valid for the ASCII transmission mode. Trailing zeroes will not be sent by the instrument. For example, if `ndigits` is set to 3 and the exact value is 1.99975, the resulting value will be 2.0. +[discrete] ==== geocom_set_edm_mode(edm_mode) [[lua-api-geocom-set-edm-mode]] * `edm_mode` (_integer_) – EDM measurement mode (<>). @@ -6257,6 +5739,7 @@ Returns request for *TMC_SetEdmMode* procedure. Creates request for setting the EDM measurement mode. The EDM mode set by this function is used by <> in mode `GEOCOM_TMC_DEF_DIST`. +[discrete] ==== geocom_set_egl_intensity(intensity) [[lua-api-geocom-set-egl-intensity]] * `intensity` (_integer_) – EGL intensity (<>). @@ -6264,6 +5747,7 @@ EDM measurement mode. The EDM mode set by this function is used by Returns request for *EDM_SetEglIntensity* procedure. Creates request for setting the intensity of the electronic guide light. +[discrete] ==== geocom_set_fine_adjust_mode(adj_mode) [[lua-api-geocom-set-fine-adjust_mode]] * `adj_mode` (_integer_) – Fine adjust positioning mode (<>). @@ -6275,6 +5759,7 @@ it is recommended to set the adjust mode to `GEOCOM_AUT_POINT_MODE`. The argument `adj_mode` has to be either `GEOCOM_AUT_NORM_MODE` or `GEOCOM_AUT_POINT_MODE`. +[discrete] ==== geocom_set_geometric_ppm(enabled, scale_factor, offset, height_ppm, individual_ppm) [[lua-api-geocom-set-geometric-ppm]] * `enabled` (_bool_) – Enable binary communication. @@ -6286,6 +5771,7 @@ argument `adj_mode` has to be either `GEOCOM_AUT_NORM_MODE` or Returns request for *TMC_SetGeoPpm* procedure. Creates request for setting the geometric ppm correction factor. +[discrete] ==== geocom_set_height(height) [[lua-api-geocom-set-height]] * `height` (_number_) – Reflector height [m]. 
@@ -6293,6 +5779,7 @@ geometric ppm correction factor. Returns request for *TMC_SetHeight* procedure. Creates request for setting a new reflector height. +[discrete] ==== geocom_set_image_config(mem_type, image_number, quality, sub_function, prefix) [[lua-api-geocom-set-image-config]] * `mem_type` (_integer_) – Memory device type (<>). @@ -6310,6 +5797,7 @@ of the following settings: * `3` – Two-times sub-sampling. * `4` – Four-times sub-sampling. +[discrete] ==== geocom_set_inclination_correction(enabled) [[lua-api-geocom-set-inclination-correction]] * `enabled` (_bool_) – Enable dual-axis compensator. @@ -6317,6 +5805,7 @@ of the following settings: Returns request for *TMC_SetInclineSwitch* procedure. Creates request for turning the dual-axis compensator on or off. +[discrete] ==== geocom_set_laser_pointer(enabled) [[lua-api-geocom-set-laser-pointer]] * `enabled` (_bool_) – Enable laser pointer. @@ -6325,6 +5814,7 @@ Returns request for *EDM_Laserpointer* procedure. Creates request for turning the laser pointer on or off. The function is only available on models which support reflectorless distance measurement. +[discrete] ==== geocom_set_measurement_program(bap_prog) [[lua-api-geocom-set-measurement-program]] * `bap_prog` (_integer_) – Measurement program (<>). @@ -6335,6 +5825,7 @@ measurement program, for example, for API call RL EDM type programs are not available on all instruments. Changing the measurement program may change the EDM type as well (IR, RL). +[discrete] ==== geocom_set_orientation(hz) [[lua-api-geocom-set-orientation]] * `hz` (_number_) – Horizontal orientation [rad]. @@ -6347,6 +5838,7 @@ orientation can be set, an existing distance must be cleared by calling API function <> with argument `GEOCOM_TMC_CLEAR`. +[discrete] ==== geocom_set_position(hz, v, pos_mode, atr_mode) [[lua-api-geocom-set-position]] * `hz` (_number_) – Horizontal angle [rad]. 
@@ -6366,6 +5858,7 @@ If `atr_mode` is `GEOCOM_AUT_POSITION`, uses conventional position to other face. If set to `GEOCOM_AUT_TARGET`, tries to position into a target in the destination area. This mode requires activated ATR. +[discrete] ==== geocom_set_positioning_timeout(time_hz, time_v) [[lua-api-geocom-set-positioning-timeout]] * `time_hz` (_number_) – Timeout in Hz direction [sec]. @@ -6375,6 +5868,7 @@ Returns request for *AUT_SetTimeout* procedure. This function sets the maximum time to perform a positioning. The timeout is reset on 7 seconds after each power on. Valid value for `hz` and `v` are between 7 [sec] and 60 [sec]. +[discrete] ==== geocom_set_prism_constant(prism_const) [[lua-api-geocom-set-prism-constant]] * `prism_const` (_number_) – Prism constant [mm]. @@ -6384,6 +5878,7 @@ the prism constant. The API function <> overwrites this setting. +[discrete] ==== geocom_set_prism_type(prism_type) [[lua-api-geocom-set-prism-type]] * `prism_type` (_integer_) – Prism type (<>). @@ -6393,6 +5888,7 @@ the default prism type. This function sets the prism type for measurement with a reflector (`GEOCOM_BAP_PRISMTYPE`). It overwrites the prism constant set by API call <>. +[discrete] ==== geocom_set_prism_type_v2(prism_type, prism_name) [[lua-api-geocom-set-prism-type-v2]] * `prism_type` (_integer_) – Prism type (<>). @@ -6406,6 +5902,7 @@ defined prism must have been added with API call <> beforehand. +[discrete] ==== geocom_set_reduced_atr_fov(enabled) [[lua-api-geocom-set-reduced-atr-fov]] * `enabled` (_bool_) – Use reduced field of view. @@ -6414,6 +5911,7 @@ Returns request for *BAP_SetRedATRFov* procedure. Creates request for setting the reduced ATR field of view. If `enabled` is `true`, ATR uses reduced field of view (about 1/9), full field of view otherwise. +[discrete] ==== geocom_set_refraction_mode(mode) [[lua-api-geocom-set-refraction-mode]] * `mode` (_integer_) – Refraction data method (1 or 2). 
@@ -6422,6 +5920,7 @@ Returns request for *TMC_SetRefractiveMethod* procedure. Creates request for setting the refraction model. Mode `1` means method 1 for the rest of the world, mode `2` means method for Australia. +[discrete] ==== geocom_set_search_area(center_hz, center_v, range_hz, range_v, enabled) [[lua-api-geocom-set-search-area]] * `center_hz` (_number_) – Search area center Hz angle [rad]. @@ -6435,6 +5934,7 @@ position and dimensions of the PowerSearch window, and activates it. The API call is valid for all instruments, but has effects only for those equipped with PowerSearch (requires GeoCOM robotic licence). +[discrete] ==== geocom_set_station(easting, northing, height, instr_height) [[lua-api-geocom-set-station]] * `easting` (_number_) – E coordinate [m]. @@ -6445,6 +5945,7 @@ PowerSearch (requires GeoCOM robotic licence). Returns request for *TMC_SetStation* procedure. Creates request for setting the station coordinates of the instrument. +[discrete] ==== geocom_set_target_type(target_type) [[lua-api-geocom-set-target-type]] * `target_type` (_integer_) – Target type (<>). @@ -6461,6 +5962,7 @@ API function also change the target type. The EDM type RL is not available on all instruments. +[discrete] ==== geocom_set_tolerance(hz, v) [[lua-api-geocom-set_tolerance]] * `hz` (_number_) – Positioning tolerance in Hz direction [rad]. @@ -6477,6 +5979,7 @@ The maximum resolution of the angle measurement system depends on the instrument accuracy class. If smaller positioning tolerances are required, the positioning time can increase drastically +[discrete] ==== geocom_set_user_atr_mode(enabled) [[lua-api-geocom-set-user-atr-mode]] * `enabled` (_bool_) – Enable ATR state. @@ -6490,6 +5993,7 @@ enabled while the API call is made, _lock_ mode will change to ATR mode. If `enabled` is `false`, ATR mode is deactivated, and if _lock_ mode is enabled then it stays enabled. 
+[discrete] ==== geocom_set_user_lock_mode(enabled) [[lua-api-geocom-set-user-lock-mode]] * `enabled` (_bool_) – Enable _lock_ state. @@ -6504,6 +6008,7 @@ and follow a moving target, call API function mode is deactivated. Tracking of a moving target will be aborted, and the manual drive wheel is activated. +[discrete] ==== geocom_set_user_prism_definition(prism_name, prism_const, refl_type, creator) [[lua-api-geocom-set-user-prism-definition]] * `prism_name` (_string_) – Prism name. @@ -6514,6 +6019,7 @@ drive wheel is activated. Returns request for *BAP_SetUserPrismDef* procedure. Creates request for setting a user prism definition. +[discrete] ==== geocom_set_user_spiral(hz, v) [[lua-api-geocom-set-user-spiral]] * `hz` (_number_) – ATR search window in Hz direction [rad]. @@ -6522,6 +6028,7 @@ a user prism definition. Returns request for *AUT_SetUserSpiral* procedure. The function sets the dimensions of the ATR search window (GeoCOM robotic licence required). +[discrete] ==== geocom_set_velocity(omega_hz, omega_v) [[lua-api-geocom-set-velocity]] * `omega_hz` (_number_) – Velocity in Hz direction [rad/sec]. @@ -6538,6 +6045,7 @@ called with argument `GEOCOM_MOT_OCONST` before. The velocity in horizontal and vertical direction are in [rad/sec]. The maximum velocity is ±3.14 rad/sec for TM30/TS30, and ±0.79 rad/sec for TPS1100/TPS1200. +[discrete] ==== geocom_setup_download(device_type, file_type, file_name, block_size) [[lua-api-geocom-setup-download]] * `device_type` (_integer_) – Device type (<>). @@ -6558,6 +6066,7 @@ The argument `device_type` must be one of the following: The argument `file_type` is usually `GEOCOM_FTR_FILE_IMAGES`. The maximum value for `block_size` is `GEOCOM_FTR_MAX_BLOCKSIZE`. +[discrete] ==== geocom_setup_list(device_type, file_type, search_path) [[lua-api-geocom-setup-list]] * `device_type` (_integer_) – Device type (<>). @@ -6568,6 +6077,7 @@ Returns request for *FTR_SetupList* procedure. Creates request for setting up file listing. 
The function sets up the device type, file type, and search path. It has to be called before <>. +[discrete] ==== geocom_start_controller(start_mode) [[lua-api-geocom-start-controller]] * `start_mode` (_integer_) – Controller start mode (<>). @@ -6586,6 +6096,7 @@ The argument `start_mode` must be one of the following: * `GEOCOM_MOT_BREAK` – “Brake” controller. * `GEOCOM_MOT_TERM` – Terminates the controller task. +[discrete] ==== geocom_stop_controller(stop_mode) [[lua-api-geocom-stop-controller]] * `stop_mode` (_integer_) – Controller stop mode (<>). @@ -6598,6 +6109,7 @@ The argument `stop_mode` must be one of the following: * `GEOCOM_MOT_NORMAL` – Slow down with current acceleration. * `GEOCOM_MOT_SHUTDOWN` – Slow down by switching off power supply. +[discrete] ==== geocom_switch_off(stop_mode) [[lua-api-geocom-switch-off]] * `stop_mode` (_integer_) – Switch-off mode (<>). @@ -6610,6 +6122,7 @@ The argument `stop_mode` has to be one of the following: * `GEOCOM_COM_TPS_STOP_SHUT_DOWN` – Power down instrument. * `GEOCOM_COM_TPS_STOP_SLEEP` – Sleep mode (not supported by TPS1200). +[discrete] ==== geocom_switch_on(start_mode) [[lua-api-geocom-switch-on]] * `start_mode` (_integer_) – Switch-on mode (<>). @@ -6622,127 +6135,936 @@ The argument `start_mode` has to be one of the following: * `GEOCOM_COM_TPS_STARTUP_LOCAL` – Not supported by TPS1200. * `GEOCOM_COM_TPS_STARTUP_REMOTE` – Online mode (RPC is enabled). +[discrete] ==== geocom_take_image(mem_type) [[lua-api-geocom-take-image]] * `mem_type` (_integer_) – Memory type (`GEOCOM_IMG_MEM_TYPE`). -Returns request for *IMG_TakeTccImage* procedure. Creates request for capturing -a telescope image. +Returns request for *IMG_TakeTccImage* procedure. Creates request for capturing +a telescope image. + +The memory type `mem_type` has to be one of the following: + +* `GEOCOM_IMG_INTERNAL_MEMORY` – Internal memory module. +* `GEOCOM_IMG_PC_CARD` – External memory card. 
+
+== Data Serialisation
+
+DMPACK supports the following data serialisation formats:
+
+Atom XML:: Export of log messages in
+link:https://en.wikipedia.org/wiki/Atom_(web_standard)[Atom Syndication Format]
+(RFC 4287), with optional XSLT style sheet.
+Block:: Export of observation responses as X/Y data points in ASCII block
+format, consisting of time stamp (ISO 8601) and real value.
+CSV:: Export and import of beat, log, node, observation, sensor, and target
+data, with custom field separator and quote character. A CSV header is added
+optionally.
+HDF5:: Export and import of node, observation, sensor, and target data as HDF5
+compound data types.
+JSON:: Export of beat, log, node, observation, sensor, and target data as
+JSON objects or JSON arrays.
+JSON Lines:: Export of beat, log, node, observation, sensor, and target data in
+link:https://jsonlines.org/[JSON Lines]/link:http://ndjson.org/[Newline Delimited JSON]
+format.
+Lua:: Conversion of observations from and to Lua tables. Import of observations
+from Lua file or stack-based data exchange between Fortran and Lua.
+Namelist:: Import from and export to Fortran 95 Namelist (NML) format of single
+beat, log, node, observation, sensor, and target types. The syntax is
+case-insensitive, and line breaks are optional. Default values are assumed for
+attributes omitted from data in Namelist format.
+Text:: Status messages of the HTTP-RPC API are returned as key–value pairs in
+plain text format.
+
+The JSON Lines format equals the JSON format, except that multiple records are
+separated by newlines.
+
+=== API Status [[data-api]]
+
+==== Derived Type
+
+[cols="3,2,2,14"]
+|===
+| Attribute | Type | Size | Description
+
+| `version` | string | 32 | DMPACK application version.
+| `dmpack` | string | 32 | DMPACK library version.
+| `host` | string | 32 | Server host name.
+| `server` | string | 32 | Server software (web server).
+| `timestamp` | string | 32 | Server date and time in ISO 8601.
+| `message` | string | 32 | Server status message (optional). +| `error` | integer | 4 | <>. +|=== + +==== Text [[data-api-text]] + +.... +version=1.0.0 +dmpack=1.0.0 +host=localhost +server=lighttpd/1.4.70 +timestamp=1970-01-01T00:00:00.000000+00:00 +message=online +error=0 +.... + +=== Beat [[data-beat]] + +==== Derived Type + +[cols="3,2,2,14"] +|=== +| Attribute | Type | Size | Description + +| `node_id` | string | 32 | Node id (`-0-9A-Z_a-z`). +| `address` | string | 45 | IPv4/IPv6 address of client. +| `client` | string | 32 | Client software name and version. +| `time_sent` | string | 32 | Date and time heartbeat was sent (ISO 8601). +| `time_recv` | string | 32 | Date and time heartbeat was received (ISO 8601). +| `error` | integer | 4 | Last client connection <>. +| `interval` | integer | 4 | Emit interval in seconds. +| `uptime` | integer | 4 | Client uptime in seconds. +|=== + +==== CSV [[data-beat-csv]] + +[cols="2,3,15"] +|=== +| Column | Attribute | Description + +| 1 | `node_id` | Node id. +| 2 | `address` | IP address of client. +| 3 | `client` | Client software name and version. +| 4 | `time_sent` | Date and time heartbeat was sent. +| 5 | `time_recv` | Date and time heartbeat was received. +| 6 | `error` | Error code. +| 7 | `interval` | Emit interval in seconds. +| 8 | `uptime` | Client uptime in seconds. +|=== + +==== JSON [[data-beat-json]] + +[source,json] +.... +{ + "node_id": "dummy-node", + "address": "127.0.0.1", + "client": "dmbeat 1.0.0 (DMPACK 1.0.0)", + "time_sent": "1970-01-01T00:00:00.000000+00:00", + "time_recv": "1970-01-01T00:00:00.000000+00:00", + "error": 0, + "interval": 0, + "uptime": 0 +} +.... + +==== Namelist [[data-beat-nml]] + +.... + +&DMBEAT +BEAT%NODE_ID="dummy-node", +BEAT%ADDRESS="127.0.0.1", +BEAT%CLIENT="dmbeat 1.0.0 (DMPACK 1.0.0)", +BEAT%TIME_SENT="1970-01-01T00:00:00.000000+00:00", +BEAT%TIME_RECV="1970-01-01T00:00:00.000000+00:00", +BEAT%ERROR=0, +BEAT%INTERVAL=0, +BEAT%UPTIME=0, +/ +.... 
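The plain-text key–value responses of the HTTP-RPC API are straightforward to consume from other tooling. The following is a minimal Python sketch, not part of DMPACK; the function name is hypothetical and the status text is the example shown above.

```python
# Minimal sketch: parse the plain-text key-value status returned by the
# DMPACK HTTP-RPC API into a dictionary. All values remain strings.

def parse_api_status(text: str) -> dict:
    """Split lines of the form `key=value` into a dict."""
    status = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition("=")
        status[key.strip()] = value.strip()
    return status

response = """\
version=1.0.0
dmpack=1.0.0
host=localhost
server=lighttpd/1.4.70
timestamp=1970-01-01T00:00:00.000000+00:00
message=online
error=0
"""

status = parse_api_status(response)
print(status["host"], status["error"])  # localhost 0
```

In practice, an HTTP client would obtain `response` from the server; the sketch uses the example status text verbatim.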
+
+=== Data Point [[data-dp]]
+
+==== Derived Type
+
+[cols="3,2,2,14"]
+|===
+| Attribute | Type | Size | Description
+
+| `x` | string | 32 | X value (ISO 8601).
+| `y` | double | 8 | Y value.
+|===
+
+==== Block [[data-dp-block]]
+
+....
+1970-01-01T00:00:00.000000+00:00 0.00000000
+....
+
+==== CSV [[data-dp-csv]]
+
+[cols="2,3,15"]
+|===
+| Column | Attribute | Description
+
+| 1 | `x` | X value.
+| 2 | `y` | Y value.
+|===
+
+==== JSON [[data-dp-json]]
+
+[source,json]
+....
+{
+  "x": "1970-01-01T00:00:00.000000+00:00",
+  "y": 0.0
+}
+....
+
+=== Log [[data-log]]
+
+.Log level [[data-log-level]]
+[cols="1,2,2,2"]
+|===
+| Level | Parameter | Parameter String | Name
+
+| 1 | `LL_DEBUG` | `debug` | Debug
+| 2 | `LL_INFO` | `info` | Info
+| 3 | `LL_WARNING` | `warning` | Warning
+| 4 | `LL_ERROR` | `error` | Error
+| 5 | `LL_CRITICAL` | `critical` | Critical
+|===
+
+==== Derived Type
+
+[cols="3,2,2,14"]
+|===
+| Attribute | Type | Size | Description
+
+| `id` | string | 32 | Log id (UUID).
+| `level` | integer | 4 | <>.
+| `error` | integer | 4 | <>.
+| `timestamp` | string | 32 | Date and time (ISO 8601).
+| `node_id` | string | 32 | Node id (optional).
+| `sensor_id` | string | 32 | Sensor id (optional).
+| `target_id` | string | 32 | Target id (optional).
+| `observ_id` | string | 32 | Observation id (optional).
+| `source` | string | 32 | Log source (optional).
+| `message` | string | 512 | Log message.
+|===
+
+==== Atom XML [[data-log-atom]]
+
+[source,xml]
+....
+<?xml version="1.0" encoding="utf-8"?>
+<feed xmlns="http://www.w3.org/2005/Atom">
+<generator>DMPACK</generator>
+<title>DMPACK Logs</title>
+<subtitle>Log Messages Feed</subtitle>
+<id>urn:uuid:a6baaf1a-43b7-4e59-a18c-653e6ee61dfa</id>
+<updated>1970-01-01T00:00:00.000000+00:00</updated>
+
+<entry>
+<title>DEBUG: dummy log message</title>
+<id>urn:uuid:26462d27-d7ff-4ef1-b10e-0a2e921e638b</id>
+<published>1970-01-01T00:00:00.000000+00:00</published>
+<updated>1970-01-01T00:00:00.000000+00:00</updated>
+<summary>DEBUG: dummy log message</summary>
+<content type="xhtml">
+<div xmlns="http://www.w3.org/1999/xhtml">
+<table>
+<tr><th>ID</th><td>26462d27d7ff4ef1b10e0a2e921e638b</td></tr>
+<tr><th>Timestamp</th><td>1970-01-01T00:00:00.000000+00:00</td></tr>
+<tr><th>Level</th><td>DEBUG (1)</td></tr>
+<tr><th>Error</th><td>dummy error (2)</td></tr>
+<tr><th>Node ID</th><td>dummy-node</td></tr>
+<tr><th>Sensor ID</th><td>dummy-sensor</td></tr>
+<tr><th>Target ID</th><td>dummy-target</td></tr>
+<tr><th>Observation ID</th><td>9bb894c779e544dab1bd7e7a07ae507d</td></tr>
+<tr><th>Source</th><td>dummy</td></tr>
+<tr><th>Message</th><td>dummy log message</td></tr>
+</table>
+</div>
+</content>
+<author>
+<name>dummy</name>
+</author>
+</entry>
+</feed>
+....
+
+==== CSV [[data-log-csv]]
+
+[cols="2,3,15"]
+|===
+| Column | Attribute | Description
+
+| 1 | `id` | Log id.
+| 2 | `level` | Log level.
+| 3 | `error` | Error code.
+| 4 | `timestamp` | Date and time.
+| 5 | `node_id` | Node id.
+| 6 | `sensor_id` | Sensor id.
+| 7 | `target_id` | Target id.
+| 8 | `observ_id` | Observation id.
+| 9 | `source` | Log source.
+| 10 | `message` | Log message.
+|===
+
+==== JSON [[data-log-json]]
+
+[source,json]
+....
+{
+  "id": "26462d27d7ff4ef1b10e0a2e921e638b",
+  "level": 1,
+  "error": 2,
+  "timestamp": "1970-01-01T00:00:00.000000+00:00",
+  "node_id": "dummy-node",
+  "sensor_id": "dummy-sensor",
+  "target_id": "dummy-target",
+  "observ_id": "9bb894c779e544dab1bd7e7a07ae507d",
+  "source": "dummy",
+  "message": "dummy log message"
+}
+....
+
+==== Namelist [[data-log-nml]]
+
+....
+&DMLOG
+LOG%ID="26462d27d7ff4ef1b10e0a2e921e638b",
+LOG%LEVEL=1,
+LOG%ERROR=2,
+LOG%TIMESTAMP="1970-01-01T00:00:00.000000+00:00",
+LOG%NODE_ID="dummy-node",
+LOG%SENSOR_ID="dummy-sensor",
+LOG%TARGET_ID="dummy-target",
+LOG%OBSERV_ID="9bb894c779e544dab1bd7e7a07ae507d",
+LOG%SOURCE="dummy",
+LOG%MESSAGE="dummy log message",
+/
+....
+
+=== Node [[data-node]]
+
+==== Derived Type
+
+[cols="3,2,2,14"]
+|===
+| Attribute | Type | Size | Description
+
+| `id` | string | 32 | Node id (`-0-9A-Z_a-z`).
+| `name` | string | 32 | Node name.
+| `meta` | string | 32 | Node description (optional).
+| `x` | double | 8 | Node x or easting (optional).
+| `y` | double | 8 | Node y or northing (optional).
+| `z` | double | 8 | Node z or altitude (optional).
+|===
+
+==== CSV [[data-node-csv]]
+
+[cols="2,3,15"]
+|===
+| Column | Attribute | Description
+
+| 1 | `id` | Node id.
+| 2 | `name` | Node name.
+| 3 | `meta` | Node description.
+| 4 | `x` | Node x or easting.
+| 5 | `y` | Node y or northing.
+| 6 | `z` | Node z or altitude.
+|===
+
+==== HDF5 [[data-node-hdf5]]
+
+....
+DATASET "node_type" { + DATATYPE H5T_COMPOUND { + H5T_ARRAY { [32] H5T_STRING { + STRSIZE 1; + STRPAD H5T_STR_SPACEPAD; + CSET H5T_CSET_ASCII; + CTYPE H5T_C_S1; + } } "id"; + H5T_ARRAY { [32] H5T_STRING { + STRSIZE 1; + STRPAD H5T_STR_SPACEPAD; + CSET H5T_CSET_ASCII; + CTYPE H5T_C_S1; + } } "name"; + H5T_ARRAY { [32] H5T_STRING { + STRSIZE 1; + STRPAD H5T_STR_SPACEPAD; + CSET H5T_CSET_ASCII; + CTYPE H5T_C_S1; + } } "meta"; + H5T_IEEE_F64LE "x"; + H5T_IEEE_F64LE "y"; + H5T_IEEE_F64LE "z"; + } + DATASPACE SIMPLE { ( 8 ) / ( 8 ) } +} +.... + +==== JSON [[data-node-json]] + +[source,json] +.... +{ + "id": "dummy-node", + "name": "Dummy Node", + "meta": "Description", + "x": 0.0, + "y": 0.0, + "z": 0.0 +} +.... + +==== Namelist [[data-node-nml]] + +.... +&DMNODE +NODE%ID="dummy-node", +NODE%NAME="Dummy Node", +NODE%META="Description", +NODE%X=0.0, +NODE%Y=0.0, +NODE%Z=0.0, +/ +.... + +=== Observation [[data-observ]] + +.Response value types [[data-response-types]] +[cols="1,6,14"] +|=== +| Value | Name | Description + +| 0 | `RESPONSE_TYPE_REAL64` | 8-byte signed real. +| 1 | `RESPONSE_TYPE_REAL32` | 4-byte signed real. +| 2 | `RESPONSE_TYPE_INT64` | 8-byte signed integer. +| 3 | `RESPONSE_TYPE_INT32` | 4-byte signed integer. +| 4 | `RESPONSE_TYPE_LOGICAL` | 1-byte boolean. +| 5 | `RESPONSE_TYPE_BYTE` | Byte. +| 6 | `RESPONSE_TYPE_STRING` | Byte string. +|=== + +==== Derived Type + +.Observation derived type +[cols="3,2,2,14"] +|=== +| Attribute | Type | Size | Description + +| `id` | string | 32 | Observation id (UUID). +| `node_id` | string | 32 | Node id (`-0-9A-Z_a-z`). +| `sensor_id` | string | 32 | Sensor id (`-0-9A-Z_a-z`). +| `target_id` | string | 32 | Target id (`-0-9A-Z_a-z`). +| `name` | string | 32 | Observation name (`-0-9A-Z_a-z`). +| `timestamp` | string | 32 | Date and time of observation (ISO 8601). +| `source` | string | 32 | Observation source or name of origin (`-0-9A-Z_a-z`). +| `device` | string | 32 | Device (TTY/PTY path, IP address). 
+| `priority` | integer | 4 | Message queue priority (>= 0).
+| `error` | integer | 4 | Observation <>.
+| `next` | integer | 4 | Cursor of receiver list (0 to 16).
+| `nreceivers` | integer | 4 | Number of receivers (0 to 16).
+| `nrequests` | integer | 4 | Number of sensor requests (0 to 8).
+| `receivers` | array | 16{nbsp}×{nbsp}32 | Array of receiver names (16).
+| `requests` | array | 8{nbsp}×{nbsp}1380 | Array of requests (8).
+|===
+
+.Request derived type of an observation [[data-request]]
+[cols="3,2,2,14"]
+|===
+| Attribute | Type | Size | Description
+
+| `name` | string | 32 | Request name (`-0-9A-Z_a-z`).
+| `timestamp` | string | 32 | Date and time of request (ISO 8601).
+| `request` | string | 256 | Raw request to sensor. Non-printable characters have to be escaped.
+| `response` | string | 256 | Raw response of sensor. Non-printable characters will be escaped.
+| `delimiter` | string | 8 | Request delimiter. Non-printable characters have to be escaped.
+| `pattern` | string | 256 | Regular expression pattern that describes the raw response using named groups.
+| `delay` | integer | 4 | Delay in milliseconds to wait after the request.
+| `error` | integer | 4 | Request <>.
+| `mode` | integer | 4 | Request mode (unused, for future additions).
+| `retries` | integer | 4 | Number of performed retries.
+| `state` | integer | 4 | Request state (unused, for future additions).
+| `timeout` | integer | 4 | Request timeout in milliseconds.
+| `nresponses` | integer | 4 | Number of responses (0 to 16).
+| `responses` | array | 16{nbsp}×{nbsp}32 | Extracted values from the raw response (16).
+|===
+
+.Response derived type of a request [[data-response]]
+[cols="3,2,2,14"]
+|===
+| Attribute | Type | Size | Description
+
+| `name` | string | 8 | Response name (`-0-9A-Z_a-z`).
+| `unit` | string | 8 | Response unit.
+| `type` | integer | 4 | Response <>.
+| `error` | integer | 4 | Response <>.
+| `value` | double | 8 | Response value.
+|===
+
+==== CSV [[data-observ-csv]]
+
+[cols="3,3,14"]
+|===
+| Column | Attribute | Description
+
+| 1 | `id` | Observation id.
+| 2 | `node_id` | Node id.
+| 3 | `sensor_id` | Sensor id.
+| 4 | `target_id` | Target id.
+| 5 | `name` | Observation name.
+| 6 | `timestamp` | Date and time of observation.
+| 7 | `source` | Observation source.
+| 8 | `device` | Device (TTY/PTY path).
+| 9 | `priority` | Message queue priority.
+| 10 | `error` | Error code.
+| 11 | `next` | Cursor of receiver list (0 to 16).
+| 12 | `nreceivers` | Number of receivers (0 to 16).
+| 13 | `nrequests` | Number of sensor requests (0 to 8).
+| 14 – 29 | `receivers` | Array of receiver names (16).
+| 14 | `receiver` | Receiver 1.
+| 15 | `receiver` | Receiver 2.
+| 16 | `receiver` | Receiver 3.
+| 17 | `receiver` | Receiver 4.
+| 18 | `receiver` | Receiver 5.
+| 19 | `receiver` | Receiver 6.
+| 20 | `receiver` | Receiver 7.
+| 21 | `receiver` | Receiver 8.
+| 22 | `receiver` | Receiver 9.
+| 23 | `receiver` | Receiver 10.
+| 24 | `receiver` | Receiver 11.
+| 25 | `receiver` | Receiver 12.
+| 26 | `receiver` | Receiver 13.
+| 27 | `receiver` | Receiver 14.
+| 28 | `receiver` | Receiver 15.
+| 29 | `receiver` | Receiver 16.
+| 30 – 773 | `requests` | Array of requests (8).
+| 30 – 122 | `request` | Request 1.
+| 30 | `name` | Request name.
+| 31 | `timestamp` | Date and time of request.
+| 32 | `request` | Raw request to sensor.
+| 33 | `response` | Raw response of sensor.
+| 34 | `delimiter` | Request delimiter.
+| 35 | `pattern` | Regular expression pattern that describes the raw response.
+| 36 | `delay` | Delay in milliseconds to wait after the request.
+| 37 | `error` | Error code.
+| 38 | `mode` | Request mode.
+| 39 | `retries` | Number of retries performed.
+| 40 | `state` | Request state.
+| 41 | `timeout` | Request timeout in milliseconds.
+| 42 | `nresponses` | Number of responses (0 to 16).
+| 43 – 122 | `responses` | Array of responses (16).
+| 43 – 47 | `response` | Response 1.
+| 43 | `name` | Response 1 name. +| 44 | `unit` | Response 1 unit. +| 45 | `type` | Response 1 value type. +| 46 | `error` | Response 1 error. +| 47 | `value` | Response 1 value. +| 48 – 52 | `response` | Response 2. +| 53 – 57 | `response` | Response 3. +| 58 – 62 | `response` | Response 4. +| 63 – 67 | `response` | Response 5. +| 68 – 72 | `response` | Response 6. +| 73 – 77 | `response` | Response 7. +| 78 – 82 | `response` | Response 8. +| 83 – 87 | `response` | Response 9. +| 88 – 92 | `response` | Response 10. +| 93 – 97 | `response` | Response 11. +| 98 – 102 | `response` | Response 12. +| 103 – 107 | `response` | Response 13. +| 108 – 112 | `response` | Response 14. +| 113 – 117 | `response` | Response 15. +| 118 – 122 | `response` | Response 16. +| 123 – 215 | `request` | Request 2. +| 216 – 308 | `request` | Request 3. +| 309 – 401 | `request` | Request 4. +| 402 – 494 | `request` | Request 5. +| 495 – 587 | `request` | Request 6. +| 588 – 680 | `request` | Request 7. +| 681 – 773 | `request` | Request 8. +|=== + +==== HDF5 [[data-observ-hdf5]] + +The HDF5 format description for observations is omitted due to length. You can +output the format from command-line. For example, if the file `observ.hdf5` +contains DMPACK observations: + +.... +$ h5dump -H -A 0 observ.hdf5 +.... + +==== JSON [[data-observ-json]] + +[source,json] +.... 
+{
+  "id": "9273ab62f9a349b6a4da6dd274ee83e7",
+  "node_id": "dummy-node",
+  "sensor_id": "dummy-sensor",
+  "target_id": "dummy-target",
+  "name": "dummy-observ",
+  "timestamp": "1970-01-01T00:00:00.000000+00:00",
+  "source": "dmdummy",
+  "device": "/dev/null",
+  "priority": 0,
+  "error": 0,
+  "next": 0,
+  "nreceivers": 2,
+  "nrequests": 1,
+  "receivers": [
+    "dummy-receiver1",
+    "dummy-receiver2"
+  ],
+  "requests": [
+    {
+      "name": "dummy",
+      "timestamp": "1970-01-01T00:00:00.000000+00:00",
+      "request": "?\\n",
+      "response": "10.0\\n",
+      "delimiter": "\\n",
+      "pattern": "(?<sample>[-+0-9\\.]+)",
+      "delay": 0,
+      "error": 0,
+      "mode": 0,
+      "retries": 0,
+      "state": 0,
+      "timeout": 0,
+      "nresponses": 1,
+      "responses": [
+        {
+          "name": "sample",
+          "unit": "none",
+          "type": 0,
+          "error": 0,
+          "value": 10.0
+        }
+      ]
+    }
+  ]
+}
+....
+
+==== Lua [[data-observ-lua]]
+
+[source,lua]
+....
+{
+  id = "9273ab62f9a349b6a4da6dd274ee83e7",
+  node_id = "dummy-node",
+  sensor_id = "dummy-sensor",
+  target_id = "dummy-target",
+  name = "dummy-observ",
+  timestamp = "1970-01-01T00:00:00.000000+00:00",
+  source = "dmdummy",
+  device = "/dev/null",
+  error = 0,
+  next = 1,
+  priority = 0,
+  nreceivers = 2,
+  nrequests = 1,
+  receivers = { "dummy-receiver1", "dummy-receiver2" },
+  requests = {
+    {
+      name = "dummy",
+      timestamp = "1970-01-01T00:00:00.000000+00:00",
+      request = "?\\n",
+      response = "10.0\\n",
+      pattern = "(?<sample>[-+0-9\\.]+)",
+      delimiter = "\\n",
+      delay = 0,
+      error = 0,
+      mode = 0,
+      retries = 0,
+      state = 0,
+      timeout = 0,
+      nresponses = 1,
+      responses = {
+        {
+          name = "sample",
+          unit = "none",
+          type = 0,
+          error = 0,
+          value = 10.0
+        }
+      }
+    }
+  }
+}
+....
+
+==== Namelist [[data-observ-nml]]
+
+....
+&DMOBSERV +OBSERV%ID="9273ab62f9a349b6a4da6dd274ee83e7", +OBSERV%NODE_ID="dummy-node", +OBSERV%SENSOR_ID="dummy-sensor", +OBSERV%TARGET_ID="dummy-target", +OBSERV%NAME="dummy-observ", +OBSERV%TIMESTAMP="1970-01-01T00:00:00.000000+00:00", +OBSERV%SOURCE="dmdummy", +OBSERV%PATH="/dev/null", +OBSERV%PRIORITY=0, +OBSERV%ERROR=0, +OBSERV%NEXT=0, +OBSERV%NRECEIVERS=2, +OBSERV%NREQUESTS=1, +OBSERV%RECEIVERS="dummy-receiver1","dummy-receiver2", +OBSERV%REQUESTS(1)%NAME="dummy", +OBSERV%REQUESTS(1)%TIMESTAMP="1970-01-01T00:00:00.000000+00:00", +OBSERV%REQUESTS(1)%REQUEST="?\n", +OBSERV%REQUESTS(1)%RESPONSE="10.0\n", +OBSERV%REQUESTS(1)%DELIMITER="\n", +OBSERV%REQUESTS(1)%PATTERN="(?[-+0-9\.]+)", +OBSERV%REQUESTS(1)%DELAY=0, +OBSERV%REQUESTS(1)%ERROR=0, +OBSERV%REQUESTS(1)%MODE=0, +OBSERV%REQUESTS(1)%RETRIES=0, +OBSERV%REQUESTS(1)%STATE=0, +OBSERV%REQUESTS(1)%TIMEOUT=0, +OBSERV%REQUESTS(1)%NRESPONSES=1, +OBSERV%REQUESTS(1)%RESPONSES(1)%NAME="sample", +OBSERV%REQUESTS(1)%RESPONSES(1)%UNIT="none", +OBSERV%REQUESTS(1)%RESPONSES(1)%TYPE=0, +OBSERV%REQUESTS(1)%RESPONSES(1)%ERROR=0, +OBSERV%REQUESTS(1)%RESPONSES(1)%VALUE=10.00000000000000, +/ +.... + +=== Sensor [[data-sensor]] -The memory type `mem_type` has to be one of the following: +.Sensor types [[data-sensor-types]] +[cols="1,1,8"] +|=== +| Value | Name | Description -* `GEOCOM_IMG_INTERNAL_MEMORY` – Internal memory module. -* `GEOCOM_IMG_PC_CARD` – External memory card. +| 0 | `none` | Unknown sensor type. +| 1 | `virtual` | Virtual sensor. +| 2 | `system` | Operating system. +| 3 | `fs` | File system. +| 4 | `process` | Process or service. +| 5 | `network` | Network-based sensor (Ethernet, HTTP). +| 6 | `multi` | Multi-sensor system. +| 7 | `meteo` | Meteorological sensor. +| 8 | `rts` | Robotic total station. +| 9 | `gnss` | GNSS receiver. +| 10 | `level` | Level sensor. +| 11 | `mems` | MEMS sensor. 
+|=== -== Third-Party Programs +==== Derived Type -=== HDFView [[hdfview]] +[cols="3,2,2,14"] +|=== +| Attribute | Type | Size | Description -link:[]HDFView is a Java-based visual tool for browsing and editing HDF5 and HDF4 -files. Application images for Linux, macOS, and Windows are available for -download on the website of -link:https://www.hdfgroup.org/downloads/hdfview/[The HDF Group]. On FreeBSD, the -program has to be compiled from source. The following build dependencies are -required: +| `id` | string | 32 | Sensor id (`-0-9A-Z_a-z`). +| `node_id` | string | 32 | Node id (`-0-9A-Z_a-z`). +| `type` | integer | 4 | <>. +| `name` | string | 32 | Sensor name. +| `sn` | string | 32 | Sensor serial number (optional). +| `meta` | string | 32 | Sensor description (optional). +| `x` | double | 8 | Sensor x or easting (optional). +| `y` | double | 8 | Sensor y or northing (optional). +| `z` | double | 8 | Sensor z or altitude (optional). +|=== -* link:https://www.freshports.org/devel/apache-ant[devel/apache-ant] -* link:https://www.freshports.org/devel/cmake[devel/cmake] -* link:https://www.freshports.org/devel/git[devel/git] -* link:https://www.freshports.org/java/openjdk19[java/openjdk19] (or any other version) -* link:https://www.freshports.org/lang/gcc[lang/gcc] -* link:https://www.freshports.org/[x11-toolkits/swt] +==== CSV [[data-sensor-csv]] -The HDF4 and HDF5 libraries have to be built from source as well. +[cols="2,3,15"] +|=== +| Column | Attribute | Description -==== Building HDF4 [[hdfview-building-hdf4]] +| 1 | `id` | Sensor id. +| 2 | `node_id` | Node id. +| 3 | `type` | Sensor type. +| 4 | `name` | Sensor name. +| 5 | `sn` | Sensor serial number. +| 6 | `meta` | Sensor description. +| 7 | `x` | Sensor x or easting. +| 8 | `y` | Sensor y or northing. +| 9 | `z` | Sensor z or altitude. +|=== -Clone the HDF4 repository and compile with CMake: +==== HDF5 [[data-sensor-hdf5]] .... 
-$ cd /tmp/ -$ git clone --depth 1 https://github.com/HDFGroup/hdf4.git -$ cd hdf4/ -$ mkdir build && cd build/ -$ cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE:STRING=Release \ - -DBUILD_SHARED_LIBS:BOOL=ON -DBUILD_TESTING:BOOL=OFF \ - -DHDF4_BUILD_TOOLS:BOOL=OFF -DHDF4_BUILD_EXAMPLES=OFF \ - -DHDF4_BUILD_FORTRAN=ON -DHDF4_BUILD_JAVA=ON \ - -DZLIB_LIBRARY:FILEPATH=/usr/lib/libz.so \ - -DZLIB_INCLUDE_DIR:PATH=/usr/include \ - -DCMAKE_Fortran_COMPILER=gfortran -DCMAKE_C_COMPILER=gcc .. -$ cmake --build . --config Release +DATASET "sensor_type" { + DATATYPE H5T_COMPOUND { + H5T_ARRAY { [32] H5T_STRING { + STRSIZE 1; + STRPAD H5T_STR_SPACEPAD; + CSET H5T_CSET_ASCII; + CTYPE H5T_C_S1; + } } "id"; + H5T_ARRAY { [32] H5T_STRING { + STRSIZE 1; + STRPAD H5T_STR_SPACEPAD; + CSET H5T_CSET_ASCII; + CTYPE H5T_C_S1; + } } "node_id"; + H5T_STD_I32LE "type"; + H5T_ARRAY { [32] H5T_STRING { + STRSIZE 1; + STRPAD H5T_STR_SPACEPAD; + CSET H5T_CSET_ASCII; + CTYPE H5T_C_S1; + } } "name"; + H5T_ARRAY { [32] H5T_STRING { + STRSIZE 1; + STRPAD H5T_STR_SPACEPAD; + CSET H5T_CSET_ASCII; + CTYPE H5T_C_S1; + } } "sn"; + H5T_ARRAY { [32] H5T_STRING { + STRSIZE 1; + STRPAD H5T_STR_SPACEPAD; + CSET H5T_CSET_ASCII; + CTYPE H5T_C_S1; + } } "meta"; + H5T_IEEE_F64LE "x"; + H5T_IEEE_F64LE "y"; + H5T_IEEE_F64LE "z"; + } + DATASPACE SIMPLE { ( 8 ) / ( 8 ) } +} .... -Afterwards, copy `java/src/hdf/hdflib/jarhdf-4.3.0.jar` to `bin/` in the HDF4 -build directory. +==== JSON [[data-sensor-json]] -==== Building HDF5 [[hdfview-building-hdf5]] +[source,json] +.... +{ + "id": "dummy-sensor", + "node_id": "dummy-node", + "type": 3, + "name": "Dummy Sensor", + "sn": "00000", + "meta": "Description.", + "x": 0.0, + "y": 0.0, + "z": 0.0 +} +.... -In the next step, clone the HDF5 repository and build with CMake, too: +==== Namelist [[data-sensor-nml]] .... 
-$ cd /tmp/ -$ git clone --depth 1 https://github.com/HDFGroup/hdf5.git -$ cd hdf5/ -$ mkdir build && cd build/ -$ cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE:STRING=Release \ - -DBUILD_SHARED_LIBS:BOOL=ON -DBUILD_TESTING:BOOL=OFF \ - -DHDF5_BUILD_TOOLS:BOOL=OFF -DHDF5_BUILD_EXAMPLES=OFF \ - -DHDF5_BUILD_FORTRAN=ON -DHDF5_BUILD_JAVA=ON \ - -DZLIB_LIBRARY:FILEPATH=/usr/lib/libz.so \ - -DZLIB_INCLUDE_DIR:PATH=/usr/include \ - -DCMAKE_Fortran_COMPILER=gfortran -DCMAKE_C_COMPILER=gcc .. -$ cmake --build . --config Release +&DMSENSOR +SENSOR%ID="dummy-sensor", +SENSOR%NODE_ID="dummy-node", +SENSOR%TYPE=3, +SENSOR%NAME="Dummy Sensor", +SENSOR%SN="00000", +SENSOR%META="Description", +SENSOR%X=0.0, +SENSOR%Y=0.0, +SENSOR%Z=0.0, +/ .... -Then, copy `java/src/hdf/hdf5lib/jarhdf5-1.15.0.jar` and `src/libhdf5.settings` -to `bin/` in the HDF5 build directory. +=== Target [[data-target]] -==== Building HDFView [[hdfview-building-hdfview]] +.Target states [[data-target-states]] +[cols="1,3,16"] +|=== +| Value | Name | Description -Finally, clone the HDFView repository, set the build properties, and compile -with _ant(1)_: +| 0 | `none` | No special target state. +| 1 | `removed` | Target has been removed. +| 2 | `missing` | Target is missing. +| 3 | `invalid` | Target is invalid. +| 4 | `ignore` | Target should be ignored. +| 5 | `obsolete` | Target is obsolete. +| 6 | `user` | User-defined target state. +|=== -.... -$ cd /tmp/ -$ git clone --depth 1 https://github.com/HDFGroup/hdfview.git -$ cd hdfview/ -.... +==== Derived Type -Set the following properties in `build.properties`: +[cols="3,2,2,14"] +|=== +| Attribute | Type | Size | Description -.... -hdf.lib.dir = /tmp/hdf4/build/bin -hdf5.lib.dir = /tmp/hdf5/build/bin -hdf5.plugin.dir = /tmp/hdf5/build/bin/plugin -build.debug = false -.... +| `id` | string | 32 | Target id (`-0-9A-Z_a-z`). +| `name` | string | 32 | Target name. +| `meta` | string | 32 | Target description (optional). 
+| `state` | integer | 4 | Target <<data-target-states,state>> (optional).
+| `x` | double | 8 | Target x or easting (optional).
+| `y` | double | 8 | Target y or northing (optional).
+| `z` | double | 8 | Target z or altitude (optional).
+|===
 
-Build with _ant(1)_:
+==== CSV [[data-target-csv]]
 
-....
-$ ant run
-....
+[cols="2,3,15"]
+|===
+| Column | Attribute | Description
 
-The binaries are written to `build/HDF_Group/HDFView/99.99.99/`. The archive
-`swt.jar` has to be replaced with the version installed system-wide:
+| 1 | `id` | Target id.
+| 2 | `name` | Target name.
+| 3 | `meta` | Target description.
+| 4 | `state` | Target state.
+| 5 | `x` | Target x or easting.
+| 6 | `y` | Target y or northing.
+| 7 | `z` | Target z or altitude.
+|===
+
+==== HDF5 [[data-target-hdf5]]
 
 ....
-$ cp /usr/local/share/java/classes/swt.jar build/HDF_Group/HDFView/99.99.99/
+DATASET "target_type" {
+    DATATYPE H5T_COMPOUND {
+        H5T_ARRAY { [32] H5T_STRING {
+            STRSIZE 1;
+            STRPAD H5T_STR_SPACEPAD;
+            CSET H5T_CSET_ASCII;
+            CTYPE H5T_C_S1;
+        } } "id";
+        H5T_ARRAY { [32] H5T_STRING {
+            STRSIZE 1;
+            STRPAD H5T_STR_SPACEPAD;
+            CSET H5T_CSET_ASCII;
+            CTYPE H5T_C_S1;
+        } } "name";
+        H5T_ARRAY { [32] H5T_STRING {
+            STRSIZE 1;
+            STRPAD H5T_STR_SPACEPAD;
+            CSET H5T_CSET_ASCII;
+            CTYPE H5T_C_S1;
+        } } "meta";
+        H5T_STD_I32LE "state";
+        H5T_IEEE_F64LE "x";
+        H5T_IEEE_F64LE "y";
+        H5T_IEEE_F64LE "z";
+    }
+    DATASPACE SIMPLE { ( 8 ) / ( 8 ) }
+}
 ....
 
-Replace the last line in `build/HDF_Group/HDFView/99.99.99/hdfview.sh` with:
+==== JSON [[data-target-json]]
 
+[source,json]
 ....
-java "$JAVAOPTS" -Djava.library.path=".:/usr/local/lib" -Dhdfview.root="." \
-    -cp "./*" hdf.view.HDFView "$@"
+{
+  "id": "dummy-target",
+  "name": "Dummy Target",
+  "meta": "Description",
+  "state": 0,
+  "x": 0.0,
+  "y": 0.0,
+  "z": 0.0
+}
 ....
 
-To start HDFView, run:
-
+==== Namelist [[data-target-nml]]
 ....
-$ cd build/HDF_Group/HDFView/99.99.99/ -$ sh hdfview.sh +&DMTARGET +TARGET%ID="dummy-target", +TARGET%NAME="Dummy Target", +TARGET%META="Description", +TARGET%STATE=0, +TARGET%X=0.0, +TARGET%Y=0.0, +TARGET%Z=0.0, +/ .... == Error Codes [[error-codes]] @@ -6795,6 +7117,7 @@ $ sh hdfview.sh | 44 | `E_DB_BACKUP` | Database backup error. | 45 | `E_DB_ATTACH` | Database attach failed. | 46 | `E_DB_DETACH` | Database detach failed. +| 47 | `E_DB_VERSION` | Database version incompatible. | 50 | `E_ARG` | Generic command-line error. | 51 | `E_ARG_NOT_FOUND` | Argument not passed. | 52 | `E_ARG_INVALID` | Argument invalid or missing. @@ -6834,4 +7157,5 @@ $ sh hdfview.sh | 132 | `E_HDF5` | HDF5 library error. | 133 | `E_ZLIB` | Zlib library error. | 134 | `E_ZSTD` | Zstandard library error. +| 135 | `E_MODBUS` | Modbus library error. |=== diff --git a/guide/resources/images/observ.svg b/guide/resources/images/observ.svg index c815148..99a0442 100644 --- a/guide/resources/images/observ.svg +++ b/guide/resources/images/observ.svg @@ -1 +1,1816 @@ -nodesnode_id : INTEGERid : TEXTname : TEXTmeta : TEXTx : REALy : REALz : REALobservsobserv_id : INTEGERnode_id : INTEGERsensor_id : INTEGERtarget_id : INTEGERid : TEXTname : TEXTtimestamp : TEXTsource : TEXTpath : TEXTpriority : INTEGERerror : INTEGERnext : INTEGERnreceivers : INTEGERnrequests : INTEGERreceiversreceiver_id : INTEGERobserv_id : INTEGERidx : INTEGERname : TEXTrequestsrequest_id : INTEGERobserv_id : INTEGERidx : INTEGERname: TEXTtimestamp : TEXTrequest : TEXTresponse : TEXTdelimiter : TEXTpattern : TEXTdelay : INTEGERerror : INTEGERmode : INTEGERretries : INTEGERstate : INTEGERtimeout : INTEGERnresponses : INTEGERresponsesresponse_id : INTEGERrequest_id : INTEGERidx : INTEGERname : TEXTunit : TEXTtype : INTEGERerror : INTEGERvalue : REALsensorssensor_id : INTEGERnode_id : INTEGERid : TEXTtype : INTEGERname : TEXTsn : TEXTmeta : TEXTx : REALy : REALz : REALsync_nodessync_node_id : INTEGERnode_id : INTEGERtimestamp : 
TEXTcode : INTEGERattempts : INTEGERsync_observssync_observ_id : INTEGERobserv_id : INTEGERtimestamp : TEXTcode : INTEGERattempts : INTEGERsync_sensorssync_sensor_id : INTEGERsensor_id : INTEGERtimestamp : TEXTcode : INTEGERattempts : INTEGERsync_targetssync_target_id : INTEGERtarget_id : INTEGERtimestamp : TEXTcode : INTEGERattempts : INTEGERtargetstarget_id : INTEGERid : TEXTname : TEXTmeta : TEXTstate : INTEGERx : REALy : REALz : REAL
\ No newline at end of file
+[1,816 added lines omitted: observ.svg is rewritten as formatted multi-line SVG of the same observation schema diagram; the markup was stripped during extraction, leaving only empty added lines]
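The target record introduced by this patch (attributes `id`, `name`, `meta`, `state`, `x`, `y`, `z`, with 32-character string fields and the states 0–6 from the states table) can be illustrated outside the patch. The following is a minimal sketch in Python rather than DMPACK's own Fortran; the `Target` class and `TARGET_STATES` mapping are hypothetical helpers, not part of DMPACK, and only mirror the derived-type table and the JSON example added above:

```python
import json
from dataclasses import dataclass, field

# Target states as listed in the patch (value -> name); assumed mapping.
TARGET_STATES = {0: "none", 1: "removed", 2: "missing", 3: "invalid",
                 4: "ignore", 5: "obsolete", 6: "user"}

@dataclass
class Target:
    id: str           # target id, max. 32 characters (-0-9A-Z_a-z)
    name: str         # target name, max. 32 characters
    meta: str = ""    # optional description, max. 32 characters
    state: int = 0    # optional target state (see TARGET_STATES)
    x: float = 0.0    # optional x or easting
    y: float = 0.0    # optional y or northing
    z: float = 0.0    # optional z or altitude

    def __post_init__(self) -> None:
        # Enforce the string sizes and state values from the patch tables.
        if max(len(self.id), len(self.name), len(self.meta)) > 32:
            raise ValueError("string attribute exceeds 32 characters")
        if self.state not in TARGET_STATES:
            raise ValueError(f"invalid target state: {self.state}")

# Parse the JSON record from the data-target-json example.
record = json.loads('{"id": "dummy-target", "name": "Dummy Target", '
                    '"meta": "Description", "state": 0, '
                    '"x": 0.0, "y": 0.0, "z": 0.0}')
target = Target(**record)
print(target.id, TARGET_STATES[target.state])  # dummy-target none
```

The same field order and sizes appear in the CSV, HDF5, and namelist representations, so a reader can cross-check any of them against this sketch.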