diff --git a/nightly/install/containers/index.html b/nightly/install/containers/index.html index 2cec48f3b..dd632ff70 100644 --- a/nightly/install/containers/index.html +++ b/nightly/install/containers/index.html @@ -1473,7 +1473,7 @@

Generate a Docker compose fo
What if my harvest configuration file is somewhere else or not named harvest.yml?

Use the following docker run command, updating the HYML variable with the absolute path to your harvest.yml.

-HYML="/opt/custom_harvest.yml" \
+HYML="/opt/custom_harvest.yml"; \
 docker run --rm \
 --entrypoint "bin/harvest" \
 --volume "$(pwd):/opt/temp" \
diff --git a/nightly/search/search_index.json b/nightly/search/search_index.json
index 8d2eafd3d..3393fd625 100644
--- a/nightly/search/search_index.json
+++ b/nightly/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"What is Harvest?","text":"

Harvest is the open-metrics endpoint for ONTAP and StorageGRID

NetApp Harvest brings observability to ONTAP and StorageGRID clusters. Harvest collects performance, capacity and hardware metrics from ONTAP and StorageGRID, transforms them, and routes them to your choice of time-series database.

The included Grafana dashboards deliver the datacenter insights you need, while new metrics can be collected with a few edits of the included template files.

Harvest is open-source, released under an Apache 2.0 license, and offers great flexibility in how you collect, augment, and export your datacenter metrics.

Note

Hop onto our Discord or GitHub discussions and say hi. \ud83d\udc4b\ud83c\udffd

"},{"location":"MigratePrometheusDocker/","title":"Migrate Prometheus Docker Volume","text":"

If you want to keep your historical Prometheus data, and you generated your harvest-compose.yml file via bin/harvest generate before Harvest 22.11, please follow the steps below to migrate your historical Prometheus data.

This is not required if you generated your harvest-compose.yml file via bin/harvest generate at Harvest release 22.11 or after.

Outline of steps:

  1. Stop the Prometheus container so its data quiesces
  2. Find the historical Prometheus volume
  3. Create a new Prometheus volume that Harvest 22.11 and after will use
  4. Copy the historical Prometheus data from the old volume to the new one
  5. Optionally remove the historical Prometheus volume

"},{"location":"MigratePrometheusDocker/#stop-prometheus-container","title":"Stop Prometheus container","text":"

It's safe to run the stop and rm commands below regardless of whether Prometheus is running, since removing the container does not touch the historical data stored in the volume.

Stop all containers named Prometheus and remove them.

docker stop $(docker ps -f name=prometheus -q) && docker rm $(docker ps -a -f name=prometheus -q)\n

Docker may complain if the container is not running, like so. You can ignore this.

Ignorable output when container is not running (click me)
\"docker stop\" requires at least 1 argument.\nSee 'docker stop --help'.\n\nUsage:  docker stop [OPTIONS] CONTAINER [CONTAINER...]\n\nStop one or more running containers\n
"},{"location":"MigratePrometheusDocker/#find-the-name-of-the-prometheus-volume-that-has-the-historical-data","title":"Find the name of the Prometheus volume that has the historical data","text":"
docker volume ls -f name=prometheus -q\n

Output should look like this:

harvest-22080-1_linux_amd64_prometheus_data  # historical Prometheus data here\nharvest_prometheus_data                      # it is fine if this line is missing\n

We want to copy the historical data from harvest-22080-1_linux_amd64_prometheus_data to harvest_prometheus_data.

If harvest_prometheus_data already exists, you need to decide if you want to move that volume's data to a different volume or remove it. If you want to remove the volume, run docker volume rm harvest_prometheus_data. If you want to move the data, adjust the command below to first copy harvest_prometheus_data to a different volume and then remove it.
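For example, a minimal sketch of moving the existing data to a differently named volume before removing it, following the same copy pattern used later in this guide (the volume name prometheus_data_backup is only an illustration):

# copy the existing harvest_prometheus_data contents into a backup volume, then remove the original\ndocker volume create --name prometheus_data_backup\ndocker run --rm -it -v harvest_prometheus_data:/from -v prometheus_data_backup:/to alpine ash -c \"cd /from ; cp -av . /to\"\ndocker volume rm harvest_prometheus_data\n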

"},{"location":"MigratePrometheusDocker/#create-new-prometheus-volume","title":"Create new Prometheus volume","text":"

We're going to create a new volume named harvest_prometheus_data by executing:

docker volume create --name harvest_prometheus_data\n
"},{"location":"MigratePrometheusDocker/#copy-the-historical-prometheus-data","title":"Copy the historical Prometheus data","text":"

We will copy the historical Prometheus data from the old volume to the new one by mounting both volumes and copying data between them. NOTE: Prometheus only supports copying a single volume. It will not work if you attempt to copy multiple volumes into the same destination volume.

# replace `HISTORICAL_VOLUME` with the name of the Prometheus volume that contains your historical data found in step 2.\ndocker run --rm -it -v $HISTORICAL_VOLUME:/from -v harvest_prometheus_data:/to alpine ash -c \"cd /from ; cp -av . /to\"\n

Output will look something like this:

'./wal' -> '/to/./wal'\n'./wal/00000000' -> '/to/./wal/00000000'\n'./chunks_head' -> '/to/./chunks_head'\n...\n
"},{"location":"MigratePrometheusDocker/#optionally-remove-historical-prometheus-data","title":"Optionally remove historical Prometheus data","text":"

Before removing the historical data, start your compose stack and make sure everything works.
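For example, if you generated your stack with bin/harvest generate, starting it again will look something like the following (your compose file names may differ):

docker compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n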

Once you're satisfied that you can destroy the old data, remove it like so.

# replace `HISTORICAL_VOLUME` with the name of the Prometheus volume that contains your historical data found in step 2.\ndocker volume rm $HISTORICAL_VOLUME\n
"},{"location":"MigratePrometheusDocker/#reference","title":"Reference","text":"
  • Rename Docker Volume
"},{"location":"configure-ems/","title":"EMS","text":""},{"location":"configure-ems/#ems-collector","title":"EMS collector","text":"

The EMS collector collects ONTAP event management system (EMS) events via the ONTAP REST API.

This collector uses a YAML template file to define which events to collect, export, and what labels to attach to each metric. This means you can collect new EMS events or attach new labels by editing the default template file or by extending existing templates.

The default template file contains 98 EMS events.

"},{"location":"configure-ems/#supported-ontap-systems","title":"Supported ONTAP Systems","text":"

Any cDOT ONTAP system using 9.6 or higher.

"},{"location":"configure-ems/#requirements","title":"Requirements","text":"

It is recommended to create a read-only user on the ONTAP system. See prepare an ONTAP cDOT cluster for details.

"},{"location":"configure-ems/#metrics","title":"Metrics","text":"

This collector collects EMS events from ONTAP and, for each received EMS event, creates new metrics prefixed with ems_events.

Harvest supports two types of ONTAP EMS events:

  • Normal EMS events

Single-shot events. When ONTAP detects a problem, an event is raised. When the issue is addressed, ONTAP does not raise another event reflecting that the problem was resolved.

  • Bookend EMS events

ONTAP creates bookend events in matching pairs. ONTAP creates an event when an issue is detected and another paired event when the event is resolved. Typically, these events share a common set of properties.

"},{"location":"configure-ems/#collector-configuration","title":"Collector Configuration","text":"

The parameters of the collector are distributed across three files:

  • Harvest configuration file (default: harvest.yml)
  • EMS collector configuration file (default: conf/ems/default.yaml)
  • EMS template file (located in conf/ems/9.6.0/ems.yaml)

Except for addr, datacenter, and auth_style, all other parameters of the EMS collector can be defined in any of these three files. Parameters defined in the lower-level files override parameters in the higher-level file. This allows you to configure each EMS event individually, or use the same parameters for all events.

"},{"location":"configure-ems/#ems-collector-configuration-file","title":"EMS Collector Configuration File","text":"

This configuration file contains the parameters that are used to configure the EMS collector. These parameters can be defined in your harvest.yml or conf/ems/default.yaml file.

  • client_timeout (Go duration): how long to wait for server responses. Default: 1m
  • schedule (list, required): the polling frequency of the collector/object. Should include exactly the following two elements, in the order specified:
    • instance (Go duration): polling frequency for updating the instance cache (example value: 24h = 1440m)
    • data (Go duration): polling frequency for updating the data cache (example value: 3m)

Note: Harvest allows defining poll intervals at a sub-second level (e.g. 1ms); however, keep in mind the following:

  • API response of an ONTAP system can take several seconds, so the collector is likely to enter a failed state if the poll interval is less than client_timeout.
  • Small poll intervals will create significant workload on the ONTAP system.
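As a reference, a schedule section using the example values from the list above would look like the sketch below in conf/ems/default.yaml (the values shown are illustrative, not recommendations):

schedule:\n  - instance: 24h\n  - data: 3m\n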

The EMS configuration file should contain the following section mapping the Ems object to the corresponding template file.

objects:\nEms: ems.yaml\n

Even though the EMS mapping shown above references a single file named ems.yaml, there may be multiple versions of that file across subdirectories named after ONTAP releases. See cDOT for examples.

At runtime, the EMS collector will select the appropriate object configuration file that most closely matches the targeted ONTAP system.

"},{"location":"configure-ems/#ems-template-file","title":"EMS Template File","text":"

The EMS template file should contain the following parameters:

  • name (string): display name of the collector. This matches the name defined in your conf/ems/default.yaml file. Default: EMS
  • object (string): short name of the object, used to prefix metrics. Default: ems
  • query (string): REST API endpoint used to query EMS events. Default: api/support/ems/events
  • exports (list): list of default labels attached to each exported metric
  • events (list): list of EMS events to collect. See Event Parameters
"},{"location":"configure-ems/#event-parameters","title":"Event Parameters","text":"

This section defines the list of EMS events you want to collect, which properties to export, what labels to attach, and how to handle bookend pairs. The EMS event template parameters are explained below along with an example for reference.

  • name is the ONTAP EMS event name. (collect ONTAP EMS events with the name of LUN.offline)
  • matches list of name-value pairs used to further filter ONTAP events. Some EMS events include arguments and these name-value pairs provide a way to filter on those arguments. (Only collect ONTAP EMS events where volume_name has the value abc_vol)
  • exports list of EMS event parameters to export. These exported parameters are attached as labels to each matching EMS event.
    • labels that are prefixed with ^^ use that parameter to define instance uniqueness.
  • resolve_when_ems (applicable to bookend events only). Lists the resolving event that pairs with the issuing event.
    • name is the ONTAP EMS event name of the resolving EMS event (LUN.online). When the resolving event is received, the issuing EMS event will be resolved. In this example, Harvest will raise an event when it finds the ONTAP EMS event named LUN.offline and that event will be resolved when the EMS event named LUN.online is received.
    • resolve_after (optional, Go duration, default = 28 days) resolve the issuing EMS after the specified duration has elapsed (672h = 28d). If the bookend pair is not received within the resolve_after duration, the issuing EMS event expires. When that happens, Harvest will mark the event as auto resolved by adding the autoresolved=true label to the issuing EMS event.
    • resolve_key (optional) bookend key used to match bookend EMS events. Defaults to prefixed (^^) labels in exports section. resolve_key allows you to override what is defined in the exports section.

Labels are only exported if they are included in the exports section.

Example template definition for the LUN.offline EMS event:

  - name: LUN.offline\nmatches:\n- name: volume_name\nvalue: abc_vol\nexports:\n- ^^parameters.object_uuid            => object_uuid\n- parameters.object_type              => object_type\n- parameters.lun_path                 => lun_path\n- parameters.volume_name              => volume\n- parameters.volume_dsid              => volume_ds_id\nresolve_when_ems:\n- name: LUN.online\nresolve_after: 672h\nresolve_key:\n- ^^parameters.object_uuid        => object_uuid\n
"},{"location":"configure-ems/#how-do-i-find-the-full-list-of-supported-ems-events","title":"How do I find the full list of supported EMS events?","text":"

ONTAP documents the list of EMS events created in the ONTAP EMS Event Catalog.

You can also query a live system and ask the cluster for its event catalog like so:

curl --insecure --user \"user:password\" 'https://10.61.124.110/api/support/ems/messages?fields=*'\n

Example Output

{\n  \"records\": [\n    {\n      \"name\": \"AccessCache.NearLimits\",\n      \"severity\": \"alert\",\n      \"description\": \"This message occurs when the access cache module is near its limits for entries or export rules. Reaching these limits can prevent new clients from being able to mount and perform I/O on the storage system, and can also cause clients to be granted or denied access based on stale cached information.\",\n      \"corrective_action\": \"Ensure that the number of clients accessing the storage system continues to be below the limits for access cache entries and export rules across those entries. If the set of clients accessing the storage system is constantly changing, consider using the \\\"vserver export-policy access-cache config modify\\\" command to reduce the harvest timeout parameter so that cache entries for clients that are no longer accessing the storage system can be evicted sooner.\",\n      \"snmp_trap_type\": \"severity_based\",\n      \"deprecated\": false\n    },\n...\n    {\n      \"name\": \"ztl.smap.online.status\",\n      \"severity\": \"notice\",\n      \"description\": \"This message occurs when the specified partition on a Software Defined Flash drive could not be onlined due to internal S/W or device error.\",\n      \"corrective_action\": \"NONE\",\n      \"snmp_trap_type\": \"severity_based\",\n      \"deprecated\": false\n    }\n  ],\n  \"num_records\": 7273\n}\n
"},{"location":"configure-ems/#ems-prometheus-alerts","title":"Ems Prometheus Alerts","text":"

Refer to Prometheus-Alerts.

"},{"location":"configure-grafana/","title":"Configure Grafana","text":""},{"location":"configure-grafana/#grafana","title":"Grafana","text":"

Grafana hosts the Harvest dashboards and needs to be set up before importing your dashboards.

"},{"location":"configure-harvest-advanced/","title":"Configure Harvest (advanced)","text":"

This chapter describes additional advanced configuration possibilities of NetApp Harvest. For a typical installation this level of detail is likely not needed.

"},{"location":"configure-harvest-basic/","title":"Configure Harvest (basic)","text":"

The main configuration file, harvest.yml, consists of the following sections, described below:

"},{"location":"configure-harvest-basic/#pollers","title":"Pollers","text":"

All pollers are defined in harvest.yml, the main configuration file of Harvest, under the section Pollers.

  • Poller name (header) (required): Poller name, user-defined value
  • datacenter (required): Datacenter name, user-defined value
  • addr (required by some collectors): IPv4 or FQDN of the target system
  • collectors (required): List of collectors to run for this poller
  • exporters (required): List of exporter names from the Exporters section. Note: this should be the name of the exporter (e.g. prometheus1), not the value of the exporter key (e.g. Prometheus)
  • auth_style (required by Zapi* collectors): Either basic_auth or certificate_auth. See authentication for details. Default: basic_auth
  • username, password (required if auth_style is basic_auth)
  • ssl_cert, ssl_key (optional if auth_style is certificate_auth): Paths to SSL (client) certificate and key used to authenticate with the target system. If not provided, the poller will look for <hostname>.key and <hostname>.pem in $HARVEST_HOME/cert/. To create certificates for ONTAP systems, see using certificate authentication
  • ca_cert (optional if auth_style is certificate_auth): Path to file that contains PEM encoded certificates. Harvest will append these certificates to the system-wide set of root certificate authorities (CA). If not provided, the OS's root CAs will be used. To create certificates for ONTAP systems, see using certificate authentication
  • use_insecure_tls (optional, bool): If true, disable TLS verification when connecting to ONTAP cluster. Default: false
  • credentials_file (optional, string): Path to a yaml file that contains cluster credentials. The file should have the same shape as harvest.yml. See here for examples. Path can be relative to harvest.yml or absolute.
  • credentials_script (optional, section): Section that defines how Harvest should fetch credentials via external script. See here for details.
  • tls_min_version (optional, string): Minimum TLS version to use when connecting to ONTAP cluster. One of tls10, tls11, tls12 or tls13. Default: platform decides
  • labels (optional, list of key-value pairs): Each of the key-value pairs will be added to a poller's metrics. Details below
  • log_max_bytes: Maximum size of the log file before it will be rotated. Default: 10 MB
  • log_max_files: Number of rotated log files to keep. Default: 5
  • log (optional, list of collector names): Matching collectors log their ZAPI request/response
  • prefer_zapi (optional, bool): Use the ZAPI API if the cluster supports it, otherwise allow Harvest to choose REST or ZAPI, whichever is appropriate to the ONTAP version. See rest-strategy for details.
"},{"location":"configure-harvest-basic/#defaults","title":"Defaults","text":"

This section is optional. If there are parameters identical for all your pollers (e.g. datacenter, authentication method, login preferences), they can be grouped under this section. The poller section will be checked first and if the values aren't found there, the defaults will be consulted.
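A minimal sketch of how the Defaults and Pollers sections work together (the cluster name, address, and credentials are placeholders):

Defaults:\n  datacenter: dc-01\n  auth_style: basic_auth\n  username: harvest\n  password: changeme\n\nPollers:\n  cluster-01:\n    addr: 10.0.1.1\n    collectors:\n      - Rest\n    exporters:\n      - prometheus1\n

Here cluster-01 inherits datacenter and the basic_auth credentials from the Defaults section.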

"},{"location":"configure-harvest-basic/#exporters","title":"Exporters","text":"

All exporters need two types of parameters:

  • exporter parameters - defined in harvest.yml under Exporters section
  • export_options - these options are defined in the Matrix data structure that is emitted from collectors and plugins

The following two parameters are required for all exporters:

  • Exporter name (header) (required): Name of the exporter instance, this is a user-defined value
  • exporter (required): Name of the exporter class (e.g. Prometheus, InfluxDB, Http) - these can be found under the cmd/exporters/ directory

Note: when we talk about the Prometheus Exporter or InfluxDB Exporter, we mean the Harvest modules that send the data to a database, NOT the names used to refer to the actual databases.
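For example, a minimal Exporters section that defines one Prometheus exporter instance named prometheus1 could look like the sketch below (the port parameter and its value are shown for illustration; see the Prometheus Exporter section for its full list of parameters):

Exporters:\n  prometheus1:\n    exporter: Prometheus\n    port: 12990\n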

"},{"location":"configure-harvest-basic/#prometheus-exporter","title":"Prometheus Exporter","text":""},{"location":"configure-harvest-basic/#influxdb-exporter","title":"InfluxDB Exporter","text":""},{"location":"configure-harvest-basic/#tools","title":"Tools","text":"

This section is optional. You can uncomment the grafana_api_token key and add your Grafana API token so Harvest does not prompt you for the key when importing dashboards.

Tools:\n  #grafana_api_token: 'aaa-bbb-ccc-ddd'\n
"},{"location":"configure-harvest-basic/#poller_files","title":"Poller_files","text":"

Harvest supports loading pollers from multiple files specified in the Poller_files section of your harvest.yml file. For example, the following snippet tells Harvest to load pollers from all the *.yml files under the configs directory, and from the path/to/single.yml file.

Paths may be relative or absolute.

Poller_files:\n- configs/*.yml\n- path/to/single.yml\n\nPollers:\nu2:\ndatacenter: dc-1\n

Each referenced file can contain one or more unique pollers. Ensure that you include the top-level Pollers section in these files. All other top-level sections will be ignored. For example:

# contents of configs/00-rtp.yml\nPollers:\nntap3:\ndatacenter: rtp\n\nntap4:\ndatacenter: rtp\n---\n# contents of configs/01-rtp.yml\nPollers:\nntap5:\ndatacenter: blr\n---\n# contents of path/to/single.yml\nPollers:\nntap1:\ndatacenter: dc-1\n\nntap2:\ndatacenter: dc-1\n

At runtime, all files will be read and combined into a single configuration. The example above would result in the following set of pollers, in this order.

- u2\n- ntap3\n- ntap4\n- ntap5\n- ntap1\n- ntap2\n

When using glob patterns, the list of matching paths will be sorted before they are read. Errors will be logged for all duplicate pollers and Harvest will refuse to start.

"},{"location":"configure-harvest-basic/#configuring-collectors","title":"Configuring collectors","text":"

Collectors are configured by their own configuration files (templates), which are stored in subdirectories in conf/. Most collectors run concurrently and collect a subset of related metrics. For example, node related metrics are grouped together and run independently of the disk related metrics. Below is a snippet from conf/zapi/default.yaml

In this example, the default.yaml template contains a list of objects (e.g. Node) that reference sub-templates (e.g. node.yaml). This decomposition groups related metrics together and at runtime, a Zapi collector per object will be created and each of these collectors will run concurrently.

Using the snippet below, we expect there to be four Zapi collectors running, each with a different subtemplate and object.

collector:          Zapi\nobjects:\n  Node:             node.yaml\n  Aggregate:        aggr.yaml\n  Volume:           volume.yaml\n  SnapMirror:       snapmirror.yaml\n

At start-up, Harvest looks for two files (default.yaml and custom.yaml) in the conf directory of the collector (e.g. conf/zapi/default.yaml). The default.yaml is installed by default, while the custom.yaml is an optional file you can create to add new templates.

When present, the custom.yaml file will be merged with the default.yaml file. This behavior can be overridden in your harvest.yml, see here for an example.

For a list of collector-specific parameters, refer to their individual documentation.

"},{"location":"configure-harvest-basic/#zapi-and-zapiperf","title":"Zapi and ZapiPerf","text":""},{"location":"configure-harvest-basic/#rest-and-restperf","title":"Rest and RestPerf","text":""},{"location":"configure-harvest-basic/#ems","title":"EMS","text":""},{"location":"configure-harvest-basic/#storagegrid","title":"StorageGRID","text":""},{"location":"configure-harvest-basic/#unix","title":"Unix","text":""},{"location":"configure-harvest-basic/#labels","title":"Labels","text":"

Labels offer a way to add additional key-value pairs to a poller's metrics. These allow you to tag a cluster's metrics in a cross-cutting fashion. Here's an example:

  cluster-03:\n    datacenter: DC-01\n    addr: 10.0.1.1\n    labels:\n      - org: meg       # add an org label with the value \"meg\"\n      - ns:  rtp       # add a namespace label with the value \"rtp\"\n

These settings add two key-value pairs to each metric collected from cluster-03 like this:

node_vol_cifs_write_data{org=\"meg\",ns=\"rtp\",datacenter=\"DC-01\",cluster=\"cluster-03\",node=\"umeng-aff300-05\"} 10\n

Keep in mind that each unique combination of key-value pairs increases the amount of stored data. Use them sparingly. See PrometheusNaming for details.

"},{"location":"configure-harvest-basic/#authentication","title":"Authentication","text":"

When authenticating with ONTAP and StorageGRID clusters, Harvest supports both client certificates and basic authentication.

These methods of authentication are defined in the Pollers or Defaults section of your harvest.yml using one or more of the following parameters.

  • auth_style: One of basic_auth or certificate_auth. Optional when using credentials_file or credentials_script. Default: basic_auth
  • username: Username used for authenticating to the remote system
  • password: Password used for authenticating to the remote system
  • credentials_file: Relative or absolute path to a yaml file that contains cluster credentials
  • credentials_script: External script Harvest executes to retrieve credentials
"},{"location":"configure-harvest-basic/#precedence","title":"Precedence","text":"

When multiple authentication parameters are defined at the same time, Harvest tries each method listed below, in the following order, to resolve authentication requests. The first method that returns a non-empty password stops the search.

When these parameters exist in both the Pollers and Defaults section, the Pollers section will be consulted before the Defaults.

  1. Pollers auth_style: certificate_auth
  2. Pollers auth_style: basic_auth with username and password
  3. Pollers credentials_script
  4. Pollers credentials_file
  5. Defaults auth_style: certificate_auth
  6. Defaults auth_style: basic_auth with username and password
  7. Defaults credentials_script
  8. Defaults credentials_file
"},{"location":"configure-harvest-basic/#credentials-file","title":"Credentials File","text":"

If you would rather not list cluster credentials in your harvest.yml, you can use the credentials_file section in your harvest.yml to point to a file that contains the credentials. At runtime, the credentials_file will be read and the included credentials will be used to authenticate with the matching cluster(s).

This is handy when integrating with 3rd party credential stores. See #884 for examples.

The format of the credentials_file is similar to harvest.yml and can contain multiple cluster credentials.

Example:

Snippet from harvest.yml:

Pollers:\ncluster1:\naddr: 10.193.48.11\ncredentials_file: secrets/cluster1.yml\nexporters:\n- prom1 

File secrets/cluster1.yml:

Pollers:\ncluster1:\nusername: harvest\npassword: foo\n
"},{"location":"configure-harvest-basic/#credentials-script","title":"Credentials Script","text":"

You can fetch authentication information via an external script by using the credentials_script section in the Pollers section of your harvest.yml as shown in the example below.

At runtime, Harvest will invoke the script referenced in the credentials_script path section. Harvest will call the script with two arguments like so: ./script $addr $username.

  • The first argument is the address of the cluster taken from your harvest.yml file, section Pollers addr
  • The second argument is the username of the cluster taken from your harvest.yml file, section Pollers username

The script should use the two arguments to look up and return the password via the script's standard out. If the script doesn't finish within the specified timeout, Harvest will kill the script and any spawned processes.
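A minimal sketch of such a script (the addresses and passwords below are placeholders; adapt the lookup to your credential store):

#!/bin/bash\n# $1 is the poller's addr, $2 is the username; print the matching password on standard out\ncase \"$1\" in\n  10.1.1.1) echo \"secret-for-ontap1\" ;;\n  *)        echo \"default-secret\"   ;;\nesac\n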

Credential scripts are defined in your harvest.yml under the Pollers credentials_script section. Below are the options for the credentials_script section:

  • path (string): absolute path to the script that takes two arguments: addr and username, in that order
  • schedule (Go duration or always): schedule used to call the authentication script. If the value is always, the script will be called every time a password is requested; otherwise, the earlier cached value is used. Default: 24h
  • timeout (Go duration): amount of time Harvest will wait for the script to finish before killing it and its descendants. Default: 10s
"},{"location":"configure-harvest-basic/#example","title":"Example","text":"
Pollers:\nontap1:\ndatacenter: rtp\naddr: 10.1.1.1\ncollectors:\n- Rest\n- RestPerf\ncredentials_script:\npath: ./get_pass\nschedule: 3h\ntimeout: 10s\n
"},{"location":"configure-harvest-basic/#troubleshooting","title":"Troubleshooting","text":"
  • Make sure your script is executable
  • Ensure the user/group that executes your poller also has read and execute permissions on the script. su as the user/group that runs Harvest and make sure you can execute the script too.
"},{"location":"configure-rest/","title":"REST","text":""},{"location":"configure-rest/#rest-collector","title":"Rest Collector","text":"

The Rest collector uses the REST protocol to collect data from ONTAP systems.

The RestPerf collector is an extension of this collector, therefore they share many parameters and configuration settings.

"},{"location":"configure-rest/#target-system","title":"Target System","text":"

The target system can be any cDOT ONTAP system. ONTAP 9.12.1 and later are supported; however, the default configuration files may not completely match all versions. See REST Strategy for more details.

"},{"location":"configure-rest/#requirements","title":"Requirements","text":"

No SDK or other requirements. It is recommended to create a read-only user for Harvest on the ONTAP system (see prepare monitored clusters for details).

"},{"location":"configure-rest/#metrics","title":"Metrics","text":"

The collector collects a dynamic set of metrics. ONTAP returns JSON documents and Harvest allows you to define templates to extract values from the JSON document via a dot notation path. You can view ONTAP's full set of REST APIs by visiting https://docs.netapp.com/us-en/ontap-automation/reference/api_reference.html#access-a-copy-of-the-ontap-rest-api-reference-documentation

As an example, the /api/storage/aggregates endpoint, lists all data aggregates in the cluster. Below is an example response from this endpoint:

{\n\"records\": [\n{\n\"uuid\": \"3e59547d-298a-4967-bd0f-8ae96cead08c\",\n\"name\": \"umeng_aff300_aggr2\",\n\"space\": {\n\"block_storage\": {\n\"size\": 8117898706944,\n\"available\": 4889853616128\n}\n},\n\"state\": \"online\",\n\"volume_count\": 36\n}\n]\n}\n

The Rest collector will take this document, extract the records section and convert the metrics above into: name, space.block_storage.size, space.block_storage.available, state and volume_count. Metric names will be taken, as is, unless you specify a short display name. See counters for more details.

"},{"location":"configure-rest/#parameters","title":"Parameters","text":"

The parameters of the collector are distributed across three files:

  • Harvest configuration file (default: harvest.yml)
  • Rest configuration file (default: conf/rest/default.yaml)
  • Each object has its own configuration file (located in conf/rest/$version/)

Except for addr and datacenter, all other parameters of the Rest collector can be defined in any of these three files. Parameters defined in the lower-level file override parameters in the higher-level ones. This allows you to configure each object individually, or use the same parameters for all objects.

The full set of parameters is described below.

"},{"location":"configure-rest/#collector-configuration-file","title":"Collector configuration file","text":"

This configuration file contains a list of objects that should be collected and the filenames of their templates ( explained in the next section).

Additionally, this file contains the parameters that are applied as defaults to all objects. As mentioned before, any of these parameters can be defined in the Harvest or object configuration files as well.

  • client_timeout (duration, Go-syntax): how long to wait for server responses. Default: 30s
  • schedule (list, required): how frequently to retrieve metrics from ONTAP
    • data (duration, Go-syntax): how frequently this collector/object should retrieve metrics from ONTAP. Default: 3 minutes

The template should define objects in the objects section. Example:

objects:\nAggregate: aggr.yaml\n

For each object, we define the filename of the object configuration file. The object configuration files are located in subdirectories matching the ONTAP version that was used to create these files. It is possible to have multiple version-subdirectories for multiple ONTAP versions. At runtime, the collector will select the object configuration file that most closely matches the version of the target ONTAP system.

"},{"location":"configure-rest/#object-configuration-file","title":"Object configuration file","text":"

The Object configuration file (\"subtemplate\") should contain the following parameters:

  • name (string, required): display name of the collector that will collect this object
  • query (string, required): REST endpoint used to issue a REST request
  • object (string, required): short name of the object
  • counters (string): list of counters to collect (see notes below)
  • plugins (list): plugins and their parameters to run on the collected data
  • export_options (list): parameters to pass to exporters (see notes below)
"},{"location":"configure-rest/#template-example","title":"Template Example:","text":"
name:                     Volume\nquery:                    api/storage/volumes\nobject:                   volume\n\ncounters:\n- ^^name                                        => volume\n- ^^svm.name                                    => svm\n- ^aggregates.#.name                            => aggr\n- ^anti_ransomware.state                        => antiRansomwareState\n- ^state                                        => state\n- ^style                                        => style\n- space.available                               => size_available\n- space.overwrite_reserve                       => overwrite_reserve_total\n- space.overwrite_reserve_used                  => overwrite_reserve_used\n- space.percent_used                            => size_used_percent\n- space.physical_used                           => space_physical_used\n- space.physical_used_percent                   => space_physical_used_percent\n- space.size                                    => size\n- space.used                                    => size_used\n- hidden_fields:\n- anti_ransomware.state\n- space\n- filter:\n- name=*harvest*\n\nplugins:\n- LabelAgent:\nexclude_equals:\n- style `flexgroup_constituent`\n\nexport_options:\ninstance_keys:\n- aggr\n- style\n- svm\n- volume\ninstance_labels:\n- antiRansomwareState\n- state\n
"},{"location":"configure-rest/#counters","title":"counters","text":"

This section defines the list of counters that will be collected. These counters can be labels, numeric metrics or histograms. The exact property of each counter is fetched from ONTAP and updated periodically.

The display name of a counter can be changed with => (e.g., space.block_storage.size => space_total).

Counters that are stored as labels will only be exported if they are included in the export_options section.

The counters section allows you to specify hidden_fields and filter parameters. Please find the detailed explanation below.

"},{"location":"configure-rest/#hidden_fields","title":"hidden_fields","text":"

There are some fields that ONTAP will not return unless you explicitly ask for them, even when using the URL parameter fields=**. hidden_fields is how you tell ONTAP which additional fields it should include in the REST response.

"},{"location":"configure-rest/#filter","title":"filter","text":"

The filter is used to constrain the data returned by the endpoint, allowing for more targeted data retrieval. The filtering uses ONTAP's REST record filtering. The example above asks ONTAP to only return records where a volume's name matches *harvest*.

If you're familiar with ONTAP's REST record filtering, the example above would become name=*harvest* and appended to the final URL like so:

https://CLUSTER_IP/api/storage/volumes?fields=*,anti_ransomware.state,space&name=*harvest*\n

Refer to the ONTAP API specification, sections: query parameters and record filtering, for more details.

"},{"location":"configure-rest/#export_options","title":"export_options","text":"

Parameters in this section tell the exporters how to handle the collected data. The set of parameters varies by exporter. For Prometheus and InfluxDB exporters, the following parameters can be defined:

  • instance_keys (list): display names of labels to export with each data-point
  • instance_labels (list): display names of labels to export as a separate data-point
  • include_all_labels (bool): export all labels with each data-point (overrides previous two parameters)
"},{"location":"configure-rest/#restperf-collector","title":"RestPerf Collector","text":"

RestPerf collects performance metrics from ONTAP systems using the REST protocol. The collector is designed to be easily extendable to collect new objects or to collect additional counters from already configured objects.

This collector is an extension of the Rest collector. The major difference between them is that RestPerf collects only the performance (perf) APIs. Additionally, RestPerf always calculates final values from the deltas of two subsequent polls.

"},{"location":"configure-rest/#metrics_1","title":"Metrics","text":"

RestPerf metrics are calculated the same as ZapiPerf metrics. More details about how performance metrics are calculated can be found here.

"},{"location":"configure-rest/#parameters_1","title":"Parameters","text":"

The parameters of the collector are distributed across three files:

  • Harvest configuration file (default: harvest.yml)
  • RestPerf configuration file (default: conf/restperf/default.yaml)
  • Each object has its own configuration file (located in conf/restperf/$version/)

Except for addr, datacenter, and auth_style, all other parameters of the RestPerf collector can be defined in any of these three files. Parameters defined in the lower-level file override parameters in the higher-level file. This allows the user to configure each object individually, or use the same parameters for all objects.

The full set of parameters is described below.

"},{"location":"configure-rest/#restperf-configuration-file","title":"RestPerf configuration file","text":"

This configuration file (the \"template\") contains a list of objects that should be collected and the filenames of their configuration (explained in the next section).

Additionally, this file contains the parameters that are applied as defaults to all objects. (As mentioned before, any of these parameters can be defined in the Harvest or object configuration files as well).

  • use_insecure_tls (bool, optional): skip verifying TLS certificate of the target system. Default: false
  • client_timeout (duration, Go-syntax): how long to wait for server responses. Default: 30s
  • latency_io_reqd (int, optional): threshold of IOPs for calculating latency metrics (latencies based on very few IOPs are unreliable). Default: 100
  • schedule (list, required): the poll frequencies of the collector/object. Should include exactly these three elements, in this exact order:
    • counter (duration, Go-syntax): poll frequency of updating the counter metadata cache. Default: 20 minutes
    • instance (duration, Go-syntax): poll frequency of updating the instance cache. Default: 10 minutes
    • data (duration, Go-syntax): poll frequency of updating the data cache. Default: 1 minute

Note: Harvest allows defining poll intervals at a sub-second level (e.g. 1ms); however, keep in mind the following:

  • API response of an ONTAP system can take several seconds, so the collector is likely to enter a failed state if the poll interval is less than client_timeout.
  • Small poll intervals will create significant workload on the ONTAP system, as many counters are aggregated on-demand.
  • Some metric values become less significant if they are calculated for very short intervals (e.g. latencies)
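For reference, a schedule section using the default values from the list above would look like the sketch below in conf/restperf/default.yaml (the values shown are illustrative, not recommendations):

schedule:\n  - counter: 20m\n  - instance: 10m\n  - data: 1m\n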

The template should define objects in the objects section. Example:

objects:\nSystemNode: system_node.yaml\nHostAdapter: hostadapter.yaml\n

Note that for each object we only define the filename of the object configuration file. The object configuration files are located in subdirectories matching the ONTAP version that was used to create these files. It is possible to have multiple version-subdirectories for multiple ONTAP versions. At runtime, the collector will select the object configuration file that most closely matches the version of the target ONTAP system. (A mismatch is tolerated since RestPerf will fetch and validate counter metadata from the system.)

"},{"location":"configure-rest/#object-configuration-file_1","title":"Object configuration file","text":"

Refer to Object configuration file.

"},{"location":"configure-rest/#counters_1","title":"counters","text":"

Refer to Counters.

Some counters require a \"base-counter\" for post-processing. If the base-counter is missing, RestPerf will still run, but the missing data won't be exported.

"},{"location":"configure-rest/#export_options_1","title":"export_options","text":"

Refer to Export Options.

"},{"location":"configure-storagegrid/","title":"StorageGRID","text":""},{"location":"configure-storagegrid/#storagegrid-collector","title":"StorageGRID Collector","text":"

The StorageGRID collector uses REST calls to collect data from StorageGRID systems.

"},{"location":"configure-storagegrid/#target-system","title":"Target System","text":"

All StorageGRID versions are supported; however, the default configuration files may not completely match older systems.

"},{"location":"configure-storagegrid/#requirements","title":"Requirements","text":"

No SDK or other requirements. It is recommended to create a read-only user for Harvest on the StorageGRID system (see prepare monitored clusters for details).

"},{"location":"configure-storagegrid/#metrics","title":"Metrics","text":"

The collector collects a dynamic set of metrics via StorageGRID's REST API. StorageGRID returns JSON documents and Harvest allows you to define templates to extract values from the JSON document via a dot notation path. You can view StorageGRID's full set of REST APIs by visiting https://$STORAGE_GRID_HOSTNAME/grid/apidocs.html

As an example, the /grid/accounts-cache endpoint, lists the tenant accounts in the cache and includes additional information, such as objectCount and dataBytes. Below is an example response from this endpoint:

{\n\"data\": [\n{\n\"id\": \"95245224059574669217\",\n\"name\": \"foople\",\n\"policy\": {\n\"quotaObjectBytes\": 50000000000\n},\n\"objectCount\": 6,\n\"dataBytes\": 10473454261\n}\n]\n}\n

The StorageGRID collector will take this document, extract the data section and convert the metrics above into: name, policy.quotaObjectBytes, objectCount, and dataBytes. Metric names will be taken, as is, unless you specify a short display name. See counters for more details.

"},{"location":"configure-storagegrid/#parameters","title":"Parameters","text":"

The parameters of the collector are distributed across three files:

  • Harvest configuration file (default: harvest.yml)
  • StorageGRID configuration file (default: conf/storagegrid/default.yaml)
  • Each object has its own configuration file (located in conf/storagegrid/$version/)

Except for addr and datacenter, all other parameters of the StorageGRID collector can be defined in any of these three files. Parameters defined in the lower-level file override parameters in the higher-level ones. This allows you to configure each object individually, or use the same parameters for all objects.

The full set of parameters is described below.

"},{"location":"configure-storagegrid/#harvest-configuration-file","title":"Harvest configuration file","text":"

Parameters in the poller section should define the following required parameters.

  • Poller name (header) (string, required): Poller name, user-defined value
  • addr (string, required): address (IP or FQDN) of the StorageGRID system
  • datacenter (string, required): Datacenter name, user-defined value
  • username, password (string, required): StorageGRID username and password with at least Tenant accounts permissions
  • collectors (list, required): Name of collector to run for this poller; use StorageGrid for this collector
"},{"location":"configure-storagegrid/#storagegrid-configuration-file","title":"StorageGRID configuration file","text":"

This configuration file contains a list of objects that should be collected and the filenames of their templates ( explained in the next section).

Additionally, this file contains the parameters that are applied as defaults to all objects. As mentioned before, any of these parameters can be defined in the Harvest or object configuration files as well.

  • client_timeout (duration, Go-syntax): how long to wait for server responses. Default: 30s
  • schedule (list, required): how frequently to retrieve metrics from StorageGRID
    • data (duration, Go-syntax): how frequently this collector/object should retrieve metrics from StorageGRID. Default: 5 minutes

The template should define objects in the objects section. Example:

objects:\nTenant: tenant.yaml\n

For each object, we define the filename of the object configuration file. The object configuration files are located in subdirectories matching the StorageGRID version that was used to create these files. It is possible to have multiple version-subdirectories for multiple StorageGRID versions. At runtime, the collector will select the object configuration file that most closely matches the version of the target StorageGRID system.

"},{"location":"configure-storagegrid/#object-configuration-file","title":"Object configuration file","text":"

The Object configuration file (\"subtemplate\") should contain the following parameters:

  • name (string, required): display name of the collector that will collect this object
  • query (string, required): REST endpoint used to issue a REST request
  • object (string, required): short name of the object
  • api (string): StorageGRID REST endpoint version to use; overrides the default management API version. Default: 3
  • counters (list): list of counters to collect (see notes below)
  • plugins (list): plugins and their parameters to run on the collected data
  • export_options (list): parameters to pass to exporters (see notes below)
"},{"location":"configure-storagegrid/#counters","title":"counters","text":"

This section defines the list of counters that will be collected. These counters can be labels, numeric metrics or histograms. The exact property of each counter is fetched from StorageGRID and updated periodically.

The display name of a counter can be changed with => (e.g., policy.quotaObjectBytes => logical_quota).

Counters that are stored as labels will only be exported if they are included in the export_options section.
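To illustrate, a counters section built from the accounts-cache fields shown earlier might look like the sketch below; the display names on the right-hand side are examples for illustration, not the shipped tenant.yaml template:

counters:\n  - ^^id\n  - ^name                      => tenant\n  - policy.quotaObjectBytes    => logical_quota\n  - objectCount                => objects\n  - dataBytes                  => logical_used\n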

"},{"location":"configure-storagegrid/#export_options","title":"export_options","text":"

Parameters in this section tell the exporters how to handle the collected data. The set of parameters varies by exporter. For Prometheus and InfluxDB exporters, the following parameters can be defined:

  • instance_keys (list): display names of labels to export with each data-point
  • instance_labels (list): display names of labels to export as a separate _label metric
  • include_all_labels (bool): export all labels with each data-point (overrides previous two parameters)
"},{"location":"configure-templates/","title":"Templates","text":""},{"location":"configure-templates/#creatingediting-templates","title":"Creating/editing templates","text":"

This document covers how to use Collector and Object templates to extend Harvest.

  1. How to add a new object template
  2. How to extend an existing object template

There are a couple of ways to learn about ZAPIs and their attributes:

  • ONTAP's documentation
  • Using Harvest's zapi tool to explore available APIs and metrics on your cluster. Examples:
$ harvest zapi --poller <poller> show apis\n  # will print list of apis that are available\n# usually apis with the \"get-iter\" suffix can provide useful metrics\n$ harvest zapi --poller <poller> show attrs --api volume-get-iter\n  # will print the attribute tree of the API\n$ harvest zapi --poller <poller> show data --api volume-get-iter\n  # will print raw data of the API attribute tree\n

(Replace <poller> with the name of a poller that can connect to an ONTAP system.)

"},{"location":"configure-templates/#collector-templates","title":"Collector templates","text":"

Collector templates define which set of objects Harvest should collect from the system being monitored. In your harvest.yml configuration file, when you say that you want to use a Zapi collector, that collector will read the matching conf/zapi/default.yaml - same with ZapiPerf, it will read the conf/zapiperf/default.yaml file. Below is a snippet from conf/zapi/default.yaml. Each object is mapped to a corresponding object template file. For example, the Node object searches for the most appropriate version of the node.yaml file in the conf/zapi/cdot/** directory.

collector:          Zapi\nobjects:\n  Node:             node.yaml\n  Aggregate:        aggr.yaml\n  Volume:           volume.yaml\n  Disk:             disk.yaml\n

Each collector will also check if a matching file named custom.yaml exists, and if it does, it will read that file and merge it with default.yaml. The custom.yaml file should be located beside the matching default.yaml file (e.g. conf/zapi/custom.yaml is beside conf/zapi/default.yaml).

Let's take a look at some examples.

  1. Define a poller that uses the default Zapi collector. Using the default template is the easiest and most used option.
Pollers:\njamaica:\ndatacenter: munich\naddr: 10.10.10.10\ncollectors:\n- Zapi # will use conf/zapi/default.yaml and optionally merge with conf/zapi/custom.yaml\n
  2. Define a poller that uses the Zapi collector, but with a custom template file:
Pollers:\njamaica:\ndatacenter: munich\naddr: 10.10.10.10\ncollectors:\n- ZapiPerf:\n- limited.yaml # will use conf/zapiperf/limited.yaml\n# more templates can be added, they will be merged\n
"},{"location":"configure-templates/#object-templates","title":"Object Templates","text":"

Object templates (example: conf/zapi/cdot/9.8.0/lun.yaml) describe what to collect and export. These templates are used by collectors to gather metrics and send them to your time-series db.

Object templates are made up of the following parts:

  1. the name of the object (or resource) to collect
  2. the ZAPI or REST query used to collect the object
  3. a list of object counters to collect and how to export them

Instead of editing one of the existing templates, it's better to extend one of them. That way, your custom template will not be overwritten when upgrading Harvest. For example, if you want to extend conf/zapi/cdot/9.8.0/aggr.yaml, first create a copy (e.g., conf/zapi/cdot/9.8.0/custom_aggr.yaml), and then tell Harvest to use your custom template by adding these lines to conf/zapi/custom.yaml:

objects:\nAggregate: custom_aggr.yaml\n

After restarting your pollers, aggr.yaml and custom_aggr.yaml will be merged.

"},{"location":"configure-templates/#create-a-new-object-template","title":"Create a new object template","text":"

In this example, imagine that Harvest doesn't already collect environment sensor data and you want to collect it. Sensor data comes from the environment-sensors-get-iter ZAPI. Here are the steps to add a new object template.

Create the file conf/zapi/cdot/9.8.0/sensor.yaml (optionally replace 9.8.0 with the earliest version of ONTAP that supports sensor data; refer to Harvest Versioned Templates for more information). Add the following content to your new sensor.yaml file.

name: Sensor                      # this name must match the key in your custom.yaml file\nquery: environment-sensors-get-iter\nobject: sensor\n\nmetric_type: int64\n\ncounters:\nenvironment-sensors-info:\n- critical-high-threshold    => critical_high\n- critical-low-threshold     => critical_low\n- ^discrete-sensor-state     => discrete_state\n- ^discrete-sensor-value     => discrete_value\n- ^^node-name                => node\n- ^^sensor-name              => sensor\n- ^sensor-type               => type\n- ^threshold-sensor-state    => threshold_state\n- threshold-sensor-value     => threshold_value\n- ^value-units               => unit\n- ^warning-high-threshold    => warning_high\n- ^warning-low-threshold     => warning_low\n\nexport_options:\ninclude_all_labels: true\n
"},{"location":"configure-templates/#enable-the-new-object-template","title":"Enable the new object template","text":"

To enable the new sensor object template, create the conf/zapi/custom.yaml file with the lines shown below.

objects:\nSensor: sensor.yaml                 # this key must match the name in your sensor.yaml file\n

The Sensor key used in the custom.yaml must match the name defined in the sensor.yaml file. That mapping is what connects this object with its template. In the future, if you add more object templates, you can add those in your existing custom.yaml file.

"},{"location":"configure-templates/#test-your-object-template-changes","title":"Test your object template changes","text":"

Test your new Sensor template with a single poller like this:

./bin/harvest start <poller> --foreground --verbose --collectors Zapi --objects Sensor\n

Replace <poller> with the name of one of your ONTAP pollers.

Once you have confirmed that the new template works, restart any already running pollers that you want to use the new template(s).

"},{"location":"configure-templates/#check-the-metrics","title":"Check the metrics","text":"

If you are using the Prometheus exporter, you can scrape the poller's HTTP endpoint with curl or a web browser. E.g., my poller exports its data on port 15001. Adjust as needed for your exporter.

curl -s 'http://localhost:15001/metrics' | grep ^sensor_  # sensor_ name matches the object: value in your sensor.yaml file.\n\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",critical_high=\"3664\",node=\"shopfloor-02\",sensor=\"P3.3V STBY\",type=\"voltage\",warning_low=\"3040\",critical_low=\"2960\",threshold_state=\"normal\",unit=\"mV\",warning_high=\"3568\"} 3280\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",sensor=\"P1.2V STBY\",type=\"voltage\",threshold_state=\"normal\",warning_high=\"1299\",warning_low=\"1105\",critical_low=\"1086\",node=\"shopfloor-02\",critical_high=\"1319\",unit=\"mV\"} 1193\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",unit=\"mV\",critical_high=\"15810\",critical_low=\"0\",node=\"shopfloor-02\",sensor=\"P12V STBY\",type=\"voltage\",threshold_state=\"normal\"} 11842\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",sensor=\"P12V STBY Curr\",type=\"current\",threshold_state=\"normal\",unit=\"mA\",critical_high=\"3182\",critical_low=\"0\",node=\"shopfloor-02\"} 748\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",critical_low=\"1470\",node=\"shopfloor-02\",sensor=\"Sysfan2 F2 Speed\",type=\"fan\",threshold_state=\"normal\",unit=\"RPM\",warning_low=\"1560\"} 2820\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",sensor=\"PSU2 Fan1 Speed\",type=\"fan\",threshold_state=\"normal\",unit=\"RPM\",warning_low=\"4600\",critical_low=\"4500\",node=\"shopfloor-01\"} 6900\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",sensor=\"PSU1 InPwr Monitor\",type=\"unknown\",threshold_state=\"normal\",unit=\"mW\",node=\"shopfloor-01\"} 132000\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",critical_high=\"58\",type=\"thermal\",unit=\"C\",warning_high=\"53\",critical_low=\"0\",node=\"shopfloor-01\",sensor=\"Bat Temp\",threshold_state=\"normal\",warning_low=\"5\"} 24\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",critical_high=\"9000\",node=\"shopfloor-01\",sensor=\"Bat Charge Volt\",type=\"voltage\",threshold_state=\"normal\",unit=\"mV\",warning_high=\"8900\"} 8200\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",node=\"shopfloor-02\",sensor=\"PSU1 InPwr Monitor\",type=\"unknown\",threshold_state=\"normal\",unit=\"mW\"} 132000\n
"},{"location":"configure-templates/#extend-an-existing-object-template","title":"Extend an existing object template","text":""},{"location":"configure-templates/#how-to-extend-a-restrestperfstoragegridems-collectors-existing-object-template","title":"How to extend a Rest/RestPerf/StorageGRID/Ems collector's existing object template","text":"

Instead of editing one of the existing templates, it's better to copy one and edit the copy. That way, your custom template will not be overwritten when upgrading Harvest. For example, if you want to change conf/rest/9.12.0/aggr.yaml, first create a copy (e.g., conf/rest/9.12.0/custom_aggr.yaml), then add these lines to conf/rest/custom.yaml:

objects:\nAggregate: custom_aggr.yaml\n

After restarting pollers, aggr.yaml will be ignored and the new, custom_aggr.yaml subtemplate will be used instead.

"},{"location":"configure-templates/#how-to-extend-a-zapizapiperf-collectors-existing-object-template","title":"How to extend a Zapi/ZapiPerf collector's existing object template","text":"

In this example, we want to extend one of the existing object templates that Harvest ships with, e.g., conf/zapi/cdot/9.8.0/lun.yaml, and collect additional information as outlined below.

Let's say you want to extend lun.yaml to:

  1. Increase client_timeout (You want to increase the default timeout of the lun ZAPI because it keeps timing out)
  2. Add additional counters, e.g. multiprotocol-type, application
  3. Add a new counter to the already collected lun metrics using the value_to_num plugin
  4. Add a new application instance_keys and labels to the collected metrics

Let's assume the existing template is located at conf/zapi/cdot/9.8.0/lun.yaml and contains the following.

name: Lun\nquery: lun-get-iter\nobject: lun\n\ncounters:\nlun-info:\n- ^node\n- ^path\n- ^qtree\n- size\n- size-used\n- ^state\n- ^^uuid\n- ^volume\n- ^vserver => svm\n\nplugins:\n- LabelAgent:\n# metric label zapi_value rest_value `default_value`\nvalue_to_num:\n- new_status state online online `0`\nsplit:\n- path `/` ,,,lun\n\nexport_options:\ninstance_keys:\n- node\n- qtree\n- lun\n- volume\n- svm\ninstance_labels:\n- state\n

To extend the out-of-the-box lun.yaml template, create a conf/zapi/custom.yaml file if it doesn't already exist and add the lines shown below:

objects:\nLun: custom_lun.yaml\n

Create a new object template conf/zapi/cdot/9.8.0/custom_lun.yaml with the lines shown below.

client_timeout: 5m\ncounters:\nlun-info:\n- ^multiprotocol-type\n- ^application\n\nplugins:\n- LabelAgent:\nvalue_to_num:\n- custom_status state online online `0`\n\nexport_options:\ninstance_keys:\n- application\n

When you restart your pollers, Harvest will take the out-of-the-box template (lun.yaml) and your new one (custom_lun.yaml) and merge them into the following:

name: Lun\nquery: lun-get-iter\nobject: lun\ncounters:\nlun-info:\n- ^node\n- ^path\n- ^qtree\n- size\n- size-used\n- ^state\n- ^^uuid\n- ^volume\n- ^vserver => svm\n- ^multiprotocol-type\n- ^application\nplugins:\nLabelAgent:\nvalue_to_num:\n- new_status state online online `0`\n- custom_status state online online `0`\nsplit:\n- path `/` ,,,lun\nexport_options:\ninstance_keys:\n- node\n- qtree\n- lun\n- volume\n- svm\n- application\nclient_timeout: 5m\n

To help understand the merging process and the resulting combined template, you can view the result with:

bin/harvest doctor merge --template conf/zapi/cdot/9.8.0/lun.yaml --with conf/zapi/cdot/9.8.0/custom_lun.yaml\n
"},{"location":"configure-templates/#replace-an-existing-object-template-for-zapizapiperf-collector","title":"Replace an existing object template for Zapi/ZapiPerf Collector","text":"

For the Zapi/ZapiPerf collectors, you can only extend existing object templates as explained above. If you need to replace one of the existing object templates, let us know on Discord or GitHub.

"},{"location":"configure-templates/#harvest-versioned-templates","title":"Harvest Versioned Templates","text":"

Harvest ships with a set of versioned templates tailored for specific versions of ONTAP. At runtime, Harvest uses a BestFit heuristic to pick the most appropriate template. The BestFit heuristic compares the list of Harvest templates with the ONTAP version and selects the best match. There are versioned templates for both the ZAPI and REST collectors. Below is an example of how the BestFit algorithm works. Assume Harvest has these template versions:

  • 9.6.0
  • 9.6.1
  • 9.8.0
  • 9.9.0
  • 9.10.1

If you are monitoring a cluster at the following versions, Harvest will select the indicated template:

  • ONTAP version 9.4.1, Harvest will select the templates for 9.6.0
  • ONTAP version 9.6.0, Harvest will select the templates for 9.6.0
  • ONTAP version 9.7.X, Harvest will select the templates for 9.6.1
  • ONTAP version 9.12, Harvest will select the templates for 9.10.1
"},{"location":"configure-templates/#counters","title":"counters","text":"

This section contains the complete or partial attribute tree of the queried API. Since the collector does not get counter metadata from the ONTAP system, two additional symbols are used for non-numeric attributes:

  • ^ used as a prefix indicates that the attribute should be stored as a label
  • ^^ indicates that the attribute is a label and an instance key (i.e., a label that uniquely identifies an instance, such as name, uuid). If a single label does not uniquely identify an instance, then multiple instance keys should be indicated.

Additionally, the symbol => can be used to set a custom display name for both instance labels and numeric counters. Example:

name: Spare\nquery: aggr-spare-get-iter\nobject: spare\ncollect_only_labels: true\ncounters:\naggr-spare-disk-info:\n- ^^disk                                # creates label aggr-disk\n- ^disk-type                            # creates label aggr-disk-type\n- ^is-disk-zeroed   => is_disk_zeroed   # creates label is_disk_zeroed\n- ^^original-owner  => original_owner   # creates label original_owner\nexport_options:\ninstance_keys:\n- disk\n- original_owner\ninstance_labels:\n- disk_type\n- is_disk_zeroed\n

Harvest does its best to determine a unique display name for each template's label and metric. Instead of relying on this heuristic, it is better to be explicit in your templates and define a display name using the => mapping. For example, instead of this:

aggr-spare-disk-info:\n    - ^^disk\n    - ^disk-type\n

do this:

aggr-spare-disk-info:\n    - ^^disk      => disk\n    - ^disk-type  => disk_type\n

See also #585

"},{"location":"configure-unix/","title":"Unix","text":"

This collector polls resource usage by Harvest pollers on the local system. The collector might be extended in the future to monitor any local or remote process.

"},{"location":"configure-unix/#target-system","title":"Target System","text":"

The machine where Harvest is running (\"localhost\").
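
A minimal, hedged harvest.yml sketch of a poller that runs this collector against the local machine (the poller name, datacenter, and exporter name are illustrative):

Pollers:\n  unix:\n    datacenter: local\n    addr: localhost\n    collectors:\n      - Unix\n    exporters:\n      - prometheus\n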

"},{"location":"configure-unix/#requirements","title":"Requirements","text":"

The collector requires an OS where the proc-filesystem is available. If you are a developer, you are welcome to add support for other platforms. Currently, supported platforms include most Unix/Unix-like systems:

  • Android / Termux
  • DragonFly BSD
  • FreeBSD
  • IBM AIX
  • Linux
  • NetBSD
  • Plan9
  • Solaris

(On FreeBSD and NetBSD, the proc-filesystem needs to be mounted manually.)

"},{"location":"configure-unix/#parameters","title":"Parameters","text":"parameter type description default mount_point string, optional path to the proc filesystem `/proc"},{"location":"configure-unix/#metrics","title":"Metrics","text":"

The collector follows the Linux proc(5) manual to parse a static set of metrics. Unless otherwise stated, each metric has a scalar value:

metric type unit description start_time counter, float64 seconds process uptime cpu_percent gauge, float64 percent CPU used since last poll memory_percent gauge, float64 percent Memory used (RSS) since last poll cpu histogram, float64 seconds CPU used since last poll (system, user, iowait) memory histogram, uint64 kB Memory used since last poll (rss, vms, swap, etc) io histogram, uint64 bytecount IOs performed by process: rchar, wchar, read_bytes, write_bytes - read/write IOs; syscr, syscw - syscalls for IO operations net histogram, uint64 count/byte Different IO operations over network devices ctx histogram, uint64 count Number of context switches (voluntary, involuntary) threads counter, uint64 count Number of threads fds counter, uint64 count Number of file descriptors

Additionally, the collector provides the following instance labels:

label description poller name of the poller pid PID of the poller"},{"location":"configure-unix/#issues","title":"Issues","text":"
  • The collector will fail on WSL because some non-critical files in the proc-filesystem are not present.
"},{"location":"configure-zapi/","title":"ZAPI","text":"

What about REST?

ZAPI will reach end of availability in ONTAP 9.13.1, released Q2 2023. Don't worry, Harvest has you covered. Switch to Harvest's REST collectors and collect identical metrics. See REST Strategy for more details.

"},{"location":"configure-zapi/#zapi-collector","title":"Zapi Collector","text":"

The Zapi collector uses the ZAPI protocol to collect data from ONTAP systems. The collector submits data as received from the target system, and does not perform any calculations or post-processing. Since the attributes of most APIs have an irregular tree structure, sometimes a plugin will be required to collect all metrics from an API.

The ZapiPerf collector is an extension of this collector, therefore they share many parameters and configuration settings.

"},{"location":"configure-zapi/#target-system","title":"Target System","text":"

The target system can be any cDot or 7Mode ONTAP system. Any version is supported; however, the default configuration files may not completely match older systems.

"},{"location":"configure-zapi/#requirements","title":"Requirements","text":"

No SDK or other requirements. It is recommended to create a read-only user for Harvest on the ONTAP system (see prepare monitored clusters for details).

"},{"location":"configure-zapi/#metrics","title":"Metrics","text":"

The collector collects a dynamic set of metrics. Since most ZAPIs have a tree structure, the collector converts that structure into a flat metric representation. No post-processing or calculation is performed on the collected data itself.

As an example, the aggr-get-iter ZAPI provides the following partial attribute tree:

aggr-attributes:\n- aggr-raid-attributes:\n- disk-count\n- aggr-snapshot-attributes:\n- files-total\n

The Zapi collector will convert this tree into two \"flat\" metrics: aggr_raid_disk_count and aggr_snapshot_files_total. (The algorithm that generates a name for the metrics will attempt to keep it as simple as possible, but sometimes it's useful to manually set a short display name. See counters for more details.)

"},{"location":"configure-zapi/#parameters","title":"Parameters","text":"

The parameters and configuration are similar to those of the ZapiPerf collector. Only the differences will be discussed below.

"},{"location":"configure-zapi/#collector-configuration-file","title":"Collector configuration file","text":"

Parameters different from ZapiPerf:

parameter type description default schedule required same as for ZapiPerf, but only two elements: instance and data (collector does not run a counter poll) no_max_records bool, optional don't add max-records to the ZAPI request collect_only_labels bool, optional don't look for numeric metrics, only submit labels (suppresses the ErrNoMetrics error) only_cluster_instance bool, optional don't look for instance keys and assume only instance is the cluster itself"},{"location":"configure-zapi/#object-configuration-file","title":"Object configuration file","text":"

The Zapi collector does not have the instance_key and override parameters. The optional parameter metric_type allows you to override the default metric type (uint64). The value of this parameter should be one of the metric types supported by the matrix data-structure.
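
As a hedged sketch, metric_type is set at the top level of the object template; the object and value shown below are illustrative:

name: Aggregate\nquery: aggr-get-iter\nobject: aggr\n\n# hypothetical override of the default uint64 metric type\nmetric_type: float64\n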

"},{"location":"configure-zapi/#zapiperf-collector","title":"ZapiPerf Collector","text":""},{"location":"configure-zapi/#zapiperf","title":"ZapiPerf","text":"

ZapiPerf collects performance metrics from ONTAP systems using the ZAPI protocol. The collector is designed to be easily extendable to collect new objects or to collect additional counters from already configured objects.

This collector is an extension of the Zapi collector. The major difference between them is that ZapiPerf collects only the performance (perf) APIs. Additionally, ZapiPerf always calculates final values from the deltas of two subsequent polls.

"},{"location":"configure-zapi/#metrics_1","title":"Metrics","text":"

The collector collects a dynamic set of metrics. The metric values are calculated from two consecutive polls (therefore, no metrics are emitted after the first poll). The calculation algorithm depends on the property and base-counter attributes of each metric. The supported properties are listed in the table below.
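
As a hedged, worked illustration of the rate property described in the table below: suppose a counter reads 41,000 operations at one poll and 47,000 operations at the next poll, 60 seconds later. The exported value would be:

x = (47000 - 41000) / 60 s = 100 operations per second\n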

property formula description raw x = x_i no post-processing, value x is submitted as it is delta x = x_i - x_(i-1) delta of two poll values, x_i and x_(i-1) rate x = (x_i - x_(i-1)) / (t_i - t_(i-1)) delta divided by the interval of the two polls in seconds average x = (x_i - x_(i-1)) / (y_i - y_(i-1)) delta divided by the delta of the base counter y percent x = 100 * (x_i - x_(i-1)) / (y_i - y_(i-1)) average multiplied by 100"},{"location":"configure-zapi/#parameters_1","title":"Parameters","text":"

The parameters of the collector are distributed across three files:

  • Harvest configuration file (default: harvest.yml)
  • ZapiPerf configuration file (default: conf/zapiperf/default.yaml)
  • Each object has its own configuration file (located in conf/zapiperf/cdot/ and conf/zapiperf/7mode/ for cDot and 7Mode systems respectively)

Except for addr, datacenter, and auth_style, all other parameters of the ZapiPerf collector can be defined in any of these three files. Parameters defined in a lower-level file override parameters in a higher-level file. This allows the user to configure each object individually, or use the same parameters for all objects.
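
A hedged sketch of this precedence, using client_timeout as an example (file contents are trimmed and illustrative):

# conf/zapiperf/default.yaml (collector level)\nclient_timeout: 30s\n\n# conf/zapiperf/cdot/9.8.0/lun.yaml (object level, overrides the collector level for Lun)\nclient_timeout: 1m\n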

The full set of parameters is described below.

"},{"location":"configure-zapi/#harvest-configuration-file","title":"Harvest configuration file","text":"

Parameters in the poller section should define (at least) the address and authentication method of the target system:
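
For example, a hedged harvest.yml sketch (poller name, address, and credentials are illustrative); the table after it describes each parameter:

Pollers:\n  cluster-01:\n    datacenter: dc-east\n    addr: 10.0.1.1\n    auth_style: basic_auth\n    username: harvest\n    password: your-password\n    collectors:\n      - ZapiPerf\n    exporters:\n      - prometheus\n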

parameter type description default addr string, required address (IP or FQDN) of the ONTAP system datacenter string, required name of the datacenter where the target system is located auth_style string, optional authentication method: either basic_auth or certificate_auth basic_auth ssl_cert, ssl_key string, optional full path of the SSL certificate and key pairs (when using certificate_auth) username, password string, optional username and password (when using basic_auth)"},{"location":"configure-zapi/#zapiperf-configuration-file","title":"ZapiPerf configuration file","text":"

This configuration file (the \"template\") contains a list of objects that should be collected and the filenames of their configuration (explained in the next section).

Additionally, this file contains the parameters that are applied as defaults to all objects. (As mentioned before, any of these parameters can be defined in the Harvest or object configuration files as well).

parameter type description default use_insecure_tls bool, optional skip verifying TLS certificate of the target system false client_timeout duration (Go-syntax) how long to wait for server responses 30s batch_size int, optional max instances per API request 500 latency_io_reqd int, optional threshold of IOPs for calculating latency metrics (latencies based on very few IOPs are unreliable) 100 schedule list, required the poll frequencies of the collector/object, should include exactly these three elements in the exact same order: - counter duration (Go-syntax) poll frequency of updating the counter metadata cache (example value: 20m) - instance duration (Go-syntax) poll frequency of updating the instance cache (example value: 10m) - data duration (Go-syntax) poll frequency of updating the data cache (example value: 1m) Note: Harvest allows defining poll intervals at sub-second level (e.g., 1ms); however, keep in mind the following:
  • The API response of an ONTAP system can take several seconds, so the collector is likely to enter a failed state if the poll interval is less than client_timeout.
  • Small poll intervals will create significant workload on the ONTAP system, as many counters are aggregated on-demand.
  • Some metric values become less significant if they are calculated for very short intervals (e.g. latencies)
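
A hedged sketch of the schedule block, using the example frequencies mentioned above:

schedule:\n  - counter: 20m\n  - instance: 10m\n  - data: 1m\n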

The template should define objects in the objects section. Example:

objects:\nSystemNode: system_node.yaml\nHostAdapter: hostadapter.yaml\n

Note that for each object we only define the filename of the object configuration file. The object configuration files are located in subdirectories matching the ONTAP version that was used to create these files. It is possible to have multiple version-subdirectories for multiple ONTAP versions. At runtime, the collector will select the object configuration file that most closely matches the version of the target ONTAP system. (A mismatch is tolerated since ZapiPerf will fetch and validate counter metadata from the system.)

"},{"location":"configure-zapi/#object-configuration-file_1","title":"Object configuration file","text":"

The Object configuration file (\"subtemplate\") should contain the following parameters:

parameter type description default name string display name of the collector that will collect this object object string short name of the object query string raw object name used to issue a ZAPI request counters list list of counters to collect (see notes below) instance_key string label to use as instance key (either name or uuid) override list of key-value pairs override counter properties that we get from ONTAP (allows circumventing ZAPI bugs) plugins list plugins and their parameters to run on the collected data export_options list parameters to pass to exporters (see notes below)"},{"location":"configure-zapi/#counters","title":"counters","text":"

This section defines the list of counters that will be collected. These counters can be labels, numeric metrics or histograms. The exact property of each counter is fetched from ONTAP and updated periodically.

Some counters require a \"base-counter\" for post-processing. If the base-counter is missing, ZapiPerf will still run, but the missing data won't be exported.

The display name of a counter can be changed with => (e.g., nfsv3_ops => ops). There's one conversion Harvest does for you by default: the instance_name counter will be renamed to the value of object.

Counters that are stored as labels will only be exported if they are included in the export_options section.
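
A hedged sketch combining these behaviors (the counter names are illustrative):

counters:\n  - instance_name          # renamed to the value of object by default\n  - nfsv3_ops => ops       # display name changed to ops\n  - ^node_name => node     # stored as a label; exported only if listed in export_options\n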

"},{"location":"configure-zapi/#export_options","title":"export_options","text":"

Parameters in this section tell the exporters how to handle the collected data. The set of parameters varies by exporter. For Prometheus and InfluxDB exporters, the following parameters can be defined (a combined example follows this list):

  • instance_keys (list): display names of labels to export with each data-point
  • instance_labels (list): display names of labels to export as a separate data-point
  • include_all_labels (bool): export all labels with each data-point (overrides previous two parameters)
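
A hedged example combining these parameters (label names are illustrative):

export_options:\n  instance_keys:\n    - node\n    - svm\n  instance_labels:\n    - state\n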
"},{"location":"dashboards/","title":"Dashboards","text":"

Harvest can be used to import dashboards to Grafana.

The bin/harvest grafana utility requires the address (hostname or IP) and port of the Grafana server, and a Grafana API token. The port can be omitted if Grafana is configured to redirect the URL. Use the -d flag to point to the directory that contains the dashboards.

"},{"location":"dashboards/#grafana-api-token","title":"Grafana API token","text":"

The utility asks for an API token, which can be generated from the Grafana web GUI.

Click on Configuration in the left menu bar (1), click on API Keys (2), and click on the New API Key button. Choose a Key name (3), choose Editor for role (4), and click on add (5). Copy the generated key and paste it into your terminal or add the token to the Tools section of your configuration file (see below).

For example, let's say your Grafana server is on http://my.grafana.server:3000 and you want to import the Prometheus-based dashboards from the grafana directory. You would run this:

$ bin/harvest grafana import --addr my.grafana.server:3000\n

Similarly, to export:

$ bin/harvest grafana export --addr my.grafana.server:3000 --directory /path/to/export/directory --serverfolder grafanaFolderName\n

By default, the dashboards are connected to the Prometheus datasource defined in Grafana. If your datasource has a different name, use the --datasource flag during import/export.

"},{"location":"dashboards/#cli","title":"CLI","text":"

The bin/harvest grafana tool includes CLI help when passed the --help command-line flag, like so:

bin/harvest grafana import --help\n

The labels argument requires more explanation.

"},{"location":"dashboards/#labels","title":"Labels","text":"

The grafana import --labels argument goes hand-in-hand with a poller's Labels section described here. Labels are used to add additional key-value pairs to a poller's metrics.

When you run bin/harvest grafana import, you may optionally pass a set of labels like so:

bin/harvest grafana import --labels org --labels dept

This will cause Harvest to do the following for each dashboard:

  1. Parse each dashboard and add a new variable for each label passed on the command line
  2. Modify each dashboard variable to use the new label variable(s) in a chained query.

Here's an example:

bin/harvest grafana import --labels \"org,dept\"\n

This will add the Org and Dept variables, as shown below, and modify the existing variables as shown.

Results in

"},{"location":"influxdb-exporter/","title":"InfluxDB Exporter","text":"InfluxDB Install

The information below describes how to set up Harvest's InfluxDB exporter. If you need help installing or setting up InfluxDB, check out their documentation.

"},{"location":"influxdb-exporter/#overview","title":"Overview","text":"

The InfluxDB Exporter will format metrics into InfluxDB's line protocol and write them into a bucket. The Exporter is compatible with InfluxDB v2.0. For an explanation of bucket, org, and precision, see the InfluxDB API documentation.

If you are monitoring both CDOT and 7mode clusters, it is strongly recommended to use two different buckets.
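
A hedged harvest.yml sketch of two exporters writing to separate buckets (exporter names, bucket names, and token are illustrative):

Exporters:\n  influx-cdot:\n    exporter: InfluxDB\n    addr: localhost\n    bucket: harvest-cdot\n    org: harvest\n    token: my-token\n  influx-7mode:\n    exporter: InfluxDB\n    addr: localhost\n    bucket: harvest-7mode\n    org: harvest\n    token: my-token\n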

"},{"location":"influxdb-exporter/#parameters","title":"Parameters","text":"

An overview of all parameters is provided below. Only one of url or addr should be provided, and at least one of them is required. If addr is specified, it should be a valid TCP address or hostname of the InfluxDB server and should not include the scheme. When using addr, the bucket, org, and token key/values are required.

addr only works with HTTP. If you need to use HTTPS, you should use url instead.

If url is specified, you must add all arguments to the url. Harvest will do no additional processing and use exactly what you specify (e.g., url: https://influxdb.example.com:8086/write?db=netapp&u=user&p=pass&precision=2). When using url, the bucket, org, port, and precision fields will be ignored.

parameter type description default url string URL of the database, format: SCHEME://HOST[:PORT] addr string address of the database, format: HOST (HTTP only) port int, optional port of the database 8086 bucket string, required with addr InfluxDB bucket to write org string, required with addr InfluxDB organization name precision string, required with addr Preferred timestamp precision in seconds 2 client_timeout int, optional client timeout in seconds 5 token string token for authentication"},{"location":"influxdb-exporter/#example","title":"Example","text":"

snippet from harvest.yml using addr: (supports HTTP only)

Exporters:\nmy_influx:\nexporter: InfluxDB\naddr: localhost\nbucket: harvest\norg: harvest\ntoken: ZTTrt%24@#WNFM2VZTTNNT25wZWUdtUmhBZEdVUmd3dl@# 

snippet from harvest.yml using url: (supports both HTTP/HTTPS)

Exporters:\ninflux2:\nexporter: InfluxDB\nurl: https://localhost:8086/api/v2/write?org=harvest&bucket=harvest&precision=s\ntoken: my-token== 

Notice: InfluxDB stores a token in ~/.influxdbv2/configs, but you can also retrieve it from the UI (usually serving on localhost:8086): click on \"Data\" on the left task bar, then on \"Tokens\".

"},{"location":"license/","title":"License","text":"

Harvest's License

"},{"location":"manage-harvest/","title":"Manage Harvest Pollers","text":"

Coming Soon

"},{"location":"monitor-harvest/","title":"Monitor Harvest","text":""},{"location":"monitor-harvest/#harvest-metadata","title":"Harvest Metadata","text":"

Harvest publishes metadata metrics about the key components of Harvest. Many of these metrics are used in the Harvest Metadata dashboard.

If you want to understand more about these metrics, read on!

Metrics are published for:

  • collectors
  • pollers
  • clusters being monitored
  • exporters

Here's a high-level summary of the metadata metrics Harvest publishes with details below.

Metric Description Units metadata_collector_api_time amount of time to collect data from monitored cluster object microseconds metadata_collector_instances number of objects collected from monitored cluster scalar metadata_collector_metrics number of counters collected from monitored cluster scalar metadata_collector_parse_time amount of time to parse XML, JSON, etc. for cluster object microseconds metadata_collector_plugin_time amount of time for all plugins to post-process metrics microseconds metadata_collector_poll_time amount of time it took for the poll to finish microseconds metadata_collector_task_time amount of time it took for each collector's subtasks to complete microseconds metadata_component_count number of metrics collected for each object scalar metadata_component_status status of the collector - 0 means running, 1 means standby, 2 means failed enum metadata_exporter_count number of metrics and labels exported scalar metadata_exporter_time amount of time it took to render, export, and serve exported data microseconds metadata_target_goroutines number of goroutines that exist within the poller scalar metadata_target_status status of the system being monitored. 0 means reachable, 1 means unreachable enum metadata_collector_calc_time amount of time it took to compute metrics between two successive polls, specifically using properties like raw, delta, rate, average, and percent. This metric is available for ZapiPerf/RestPerf collectors. microseconds metadata_collector_skips number of metrics that were not calculated between two successive polls. This metric is available for ZapiPerf/RestPerf collectors. scalar"},{"location":"monitor-harvest/#collector-metadata","title":"Collector Metadata","text":"

A poller publishes the metadata metrics for each collector and exporter associated with it.

Let's say we start a poller with the Zapi collector and the out-of-the-box default.yaml exporting metrics to Prometheus. That means you will be monitoring 22 different objects (uncommented lines in default.yaml as of 23.02).

When we start this poller, we expect it to export 23 metadata_component_status metrics: one for each of the 22 objects, plus one for the Prometheus exporter.

The following curl confirms there are 23 metadata_component_status metrics reported.

curl -s http://localhost:12990/metrics | grep -v \"#\" | grep metadata_component_status | wc -l\n      23\n

These metrics also tell us which collectors are in a standby or failed state. For example, filtering on components not in the running state shows the following, since this cluster doesn't have any ClusterPeers, SecurityAuditDestinations, or SnapMirrors. The reason is listed as no instances and the metric value is 1, which means standby.

curl -s http://localhost:12990/metrics | grep -v \"#\" | grep metadata_component_status | grep -Evo \"running\"\nmetadata_component_status{name=\"Zapi\", reason=\"no instances\",target=\"ClusterPeer\",type=\"collector\",version=\"23.04.1417\"} 1\nmetadata_component_status{name=\"Zapi\", reason=\"no instances\",target=\"SecurityAuditDestination\",type=\"collector\",version=\"23.04.1417\"} 1\nmetadata_component_status{name=\"Zapi\", reason=\"no instances\",target=\"SnapMirror\",type=\"collector\",version=\"23.04.1417\"} 1\n

The log files for the poller show a similar story. The poller starts with 22 collectors, but drops to 19 after three of the collectors go to standby because there are no instances to collect.

2023-04-17T13:14:18-04:00 INF ./poller.go:539 > updated status, up collectors: 22 (of 22), up exporters: 1 (of 1) Poller=u2\n2023-04-17T13:14:18-04:00 INF collector/collector.go:342 > no instances, entering standby Poller=u2 collector=Zapi:SecurityAuditDestination task=data\n2023-04-17T13:14:18-04:00 INF collector/collector.go:342 > no instances, entering standby Poller=u2 collector=Zapi:ClusterPeer task=data\n2023-04-17T13:14:18-04:00 INF collector/collector.go:342 > no instances, entering standby Poller=u2 collector=Zapi:SnapMirror task=data\n2023-04-17T13:15:18-04:00 INF ./poller.go:539 > updated status, up collectors: 19 (of 22), up exporters: 1 (of 1) Poller=u2\n
"},{"location":"ontap-metrics/","title":"ONTAP Metrics","text":"

This document describes how Harvest metrics relate to their relevant ONTAP ZAPI and REST mappings, including:

  • Details about which Harvest metrics each dashboard uses. These can be generated on demand by running bin/harvest grafana metrics. See #1577 for details.

  • More information about ONTAP REST performance counters can be found here.

Creation Date: 2023-Nov-03\nONTAP Version: 9.13.1\n
"},{"location":"ontap-metrics/#understanding-the-structure","title":"Understanding the structure","text":"

Below is an annotated example of how to interpret the structure of each of the metrics.

disk_io_queued Name of the metric exported by Harvest

Number of I/Os queued to the disk but not yet issued Description of the ONTAP metric

  • API will be one of REST or ZAPI depending on which collector is used to collect the metric
  • Endpoint name of the REST or ZAPI API used to collect this metric
  • Metric name of the ONTAP metric
  • Template path of the template that collects the metric

Performance-related metrics also include:

  • Unit the unit of the metric
  • Type describes how to calculate a cooked metric from two consecutive ONTAP raw metrics
  • Base some counters require a base counter for post-processing. When required, this property lists the base counter
API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#metrics","title":"Metrics","text":""},{"location":"ontap-metrics/#aggr_disk_busy","title":"aggr_disk_busy","text":"

The utilization percent of the disk. aggr_disk_busy is disk_busy aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_capacity","title":"aggr_disk_capacity","text":"

Disk capacity in MB. aggr_disk_capacity is disk_capacity aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_cp_read_chain","title":"aggr_disk_cp_read_chain","text":"

Average number of blocks transferred in each consistency point read operation during a CP. aggr_disk_cp_read_chain is disk_cp_read_chain aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_cp_read_latency","title":"aggr_disk_cp_read_latency","text":"

Average latency per block in microseconds for consistency point read operations. aggr_disk_cp_read_latency is disk_cp_read_latency aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_cp_reads","title":"aggr_disk_cp_reads","text":"

Number of disk read operations initiated each second for consistency point processing. aggr_disk_cp_reads is disk_cp_reads aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_io_pending","title":"aggr_disk_io_pending","text":"

Average number of I/Os issued to the disk for which we have not yet received the response. aggr_disk_io_pending is disk_io_pending aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_io_queued","title":"aggr_disk_io_queued","text":"

Number of I/Os queued to the disk but not yet issued. aggr_disk_io_queued is disk_io_queued aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_busy","title":"aggr_disk_max_busy","text":"

The utilization percent of the disk. aggr_disk_max_busy is the maximum of disk_busy for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_capacity","title":"aggr_disk_max_capacity","text":"

Disk capacity in MB. aggr_disk_max_capacity is the maximum of disk_capacity for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_cp_read_chain","title":"aggr_disk_max_cp_read_chain","text":"

Average number of blocks transferred in each consistency point read operation during a CP. aggr_disk_max_cp_read_chain is the maximum of disk_cp_read_chain for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_cp_read_latency","title":"aggr_disk_max_cp_read_latency","text":"

Average latency per block in microseconds for consistency point read operations. aggr_disk_max_cp_read_latency is the maximum of disk_cp_read_latency for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_cp_reads","title":"aggr_disk_max_cp_reads","text":"

Number of disk read operations initiated each second for consistency point processing. aggr_disk_max_cp_reads is the maximum of disk_cp_reads for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_disk_busy","title":"aggr_disk_max_disk_busy","text":"

The utilization percent of the disk. aggr_disk_max_disk_busy is the maximum of disk_busy for label aggr.

API Endpoint Metric Template ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_disk_capacity","title":"aggr_disk_max_disk_capacity","text":"

Disk capacity in MB. aggr_disk_max_disk_capacity is the maximum of disk_capacity for label aggr.

API Endpoint Metric Template ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_io_pending","title":"aggr_disk_max_io_pending","text":"

Average number of I/Os issued to the disk for which we have not yet received the response. aggr_disk_max_io_pending is the maximum of disk_io_pending for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_io_queued","title":"aggr_disk_max_io_queued","text":"

Number of I/Os queued to the disk but not yet issued. aggr_disk_max_io_queued is the maximum of disk_io_queued for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_total_data","title":"aggr_disk_max_total_data","text":"

Total throughput for user operations per second. aggr_disk_max_total_data is the maximum of disk_total_data for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_total_transfers","title":"aggr_disk_max_total_transfers","text":"

Total number of disk operations involving data transfer initiated per second. aggr_disk_max_total_transfers is the maximum of disk_total_transfers for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_read_blocks","title":"aggr_disk_max_user_read_blocks","text":"

Number of blocks transferred for user read operations per second. aggr_disk_max_user_read_blocks is the maximum of disk_user_read_blocks for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_read_chain","title":"aggr_disk_max_user_read_chain","text":"

Average number of blocks transferred in each user read operation. aggr_disk_max_user_read_chain is the maximum of disk_user_read_chain for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_read_latency","title":"aggr_disk_max_user_read_latency","text":"

Average latency per block in microseconds for user read operations. aggr_disk_max_user_read_latency is the maximum of disk_user_read_latency for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_reads","title":"aggr_disk_max_user_reads","text":"

Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. aggr_disk_max_user_reads is the maximum of disk_user_reads for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_write_blocks","title":"aggr_disk_max_user_write_blocks","text":"

Number of blocks transferred for user write operations per second. aggr_disk_max_user_write_blocks is the maximum of disk_user_write_blocks for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_write_chain","title":"aggr_disk_max_user_write_chain","text":"

Average number of blocks transferred in each user write operation. aggr_disk_max_user_write_chain is the maximum of disk_user_write_chain for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_write_latency","title":"aggr_disk_max_user_write_latency","text":"

Average latency per block in microseconds for user write operations. aggr_disk_max_user_write_latency is the maximum of disk_user_write_latency for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_writes","title":"aggr_disk_max_user_writes","text":"

Number of disk write operations initiated each second for storing data or metadata associated with user requests. aggr_disk_max_user_writes is the maximum of disk_user_writes for label aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_total_data","title":"aggr_disk_total_data","text":"

Total throughput for user operations per second. aggr_disk_total_data is disk_total_data aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_total_transfers","title":"aggr_disk_total_transfers","text":"

Total number of disk operations involving data transfer initiated per second. aggr_disk_total_transfers is disk_total_transfers aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_read_blocks","title":"aggr_disk_user_read_blocks","text":"

Number of blocks transferred for user read operations per second. aggr_disk_user_read_blocks is disk_user_read_blocks aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_read_chain","title":"aggr_disk_user_read_chain","text":"

Average number of blocks transferred in each user read operation. aggr_disk_user_read_chain is disk_user_read_chain aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_read_latency","title":"aggr_disk_user_read_latency","text":"

Average latency per block in microseconds for user read operations. aggr_disk_user_read_latency is disk_user_read_latency aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_reads","title":"aggr_disk_user_reads","text":"

Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. aggr_disk_user_reads is disk_user_reads aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_write_blocks","title":"aggr_disk_user_write_blocks","text":"

Number of blocks transferred for user write operations per second. aggr_disk_user_write_blocks is disk_user_write_blocks aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_write_chain","title":"aggr_disk_user_write_chain","text":"

Average number of blocks transferred in each user write operation. aggr_disk_user_write_chain is disk_user_write_chain aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_write_latency","title":"aggr_disk_user_write_latency","text":"

Average latency per block in microseconds for user write operations. aggr_disk_user_write_latency is disk_user_write_latency aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_writes","title":"aggr_disk_user_writes","text":"

Number of disk write operations initiated each second for storing data or metadata associated with user requests. aggr_disk_user_writes is disk_user_writes aggregated by aggr.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_efficiency_savings","title":"aggr_efficiency_savings","text":"

Space saved by storage efficiencies (logical_used - used)

API Endpoint Metric Template REST api/storage/aggregates space.efficiency.savings conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_efficiency_savings_wo_snapshots","title":"aggr_efficiency_savings_wo_snapshots","text":"

Space saved by storage efficiencies (logical_used - used)

API Endpoint Metric Template REST api/storage/aggregates space.efficiency_without_snapshots.savings conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_efficiency_savings_wo_snapshots_flexclones","title":"aggr_efficiency_savings_wo_snapshots_flexclones","text":"

Space saved by storage efficiencies (logical_used - used)

API Endpoint Metric Template REST api/storage/aggregates space.efficiency_without_snapshots_flexclones.savings conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_hybrid_cache_size_total","title":"aggr_hybrid_cache_size_total","text":"

Total usable space in bytes of SSD cache. Only provided when hybrid_cache.enabled is 'true'.

API Endpoint Metric Template REST api/storage/aggregates block_storage.hybrid_cache.size conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.hybrid-cache-size-total conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_hybrid_disk_count","title":"aggr_hybrid_disk_count","text":"

Number of disks used in the cache tier of the aggregate. Only provided when hybrid_cache.enabled is 'true'.

API Endpoint Metric Template REST api/storage/aggregates block_storage.hybrid_cache.disk_count conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_files_private_used","title":"aggr_inode_files_private_used","text":"

Number of system metadata files used. If the referenced file system is restricted or offline, a value of 0 is returned. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.

API Endpoint Metric Template REST api/storage/aggregates inode_attributes.files_private_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.files-private-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_files_total","title":"aggr_inode_files_total","text":"

Maximum number of user-visible files that this referenced file system can currently hold. If the referenced file system is restricted or offline, a value of 0 is returned.

API Endpoint Metric Template REST api/storage/aggregates inode_attributes.files_total conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.files-total conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_files_used","title":"aggr_inode_files_used","text":"

Number of user-visible files used in the referenced file system. If the referenced file system is restricted or offline, a value of 0 is returned.

API Endpoint Metric Template REST api/storage/aggregates inode_attributes.files_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.files-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_inodefile_private_capacity","title":"aggr_inode_inodefile_private_capacity","text":"

Number of files that can currently be stored on disk for system metadata files. This number will dynamically increase as more system files are created. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.

API Endpoint Metric Template REST api/storage/aggregates inode_attributes.file_private_capacity conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.inodefile-private-capacity conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_inodefile_public_capacity","title":"aggr_inode_inodefile_public_capacity","text":"

Number of files that can currently be stored on disk for user-visible files. This number will dynamically increase as more user-visible files are created. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.

API Endpoint Metric Template REST api/storage/aggregates inode_attributes.file_public_capacity conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.inodefile-public-capacity conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_maxfiles_available","title":"aggr_inode_maxfiles_available","text":"

The count of the maximum number of user-visible files currently allowable on the referenced file system.

API Endpoint Metric Template REST api/storage/aggregates inode_attributes.max_files_available conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.maxfiles-available conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_maxfiles_possible","title":"aggr_inode_maxfiles_possible","text":"

The largest value to which the maxfiles-available parameter can be increased by reconfiguration, on the referenced file system.

API Endpoint Metric Template REST api/storage/aggregates inode_attributes.max_files_possible conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.maxfiles-possible conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_maxfiles_used","title":"aggr_inode_maxfiles_used","text":"

The number of user-visible files currently in use on the referenced file system.

API Endpoint Metric Template REST api/storage/aggregates inode_attributes.max_files_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.maxfiles-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_used_percent","title":"aggr_inode_used_percent","text":"

The percentage of disk space currently in use based on user-visible file count on the referenced file system.

API Endpoint Metric Template REST api/storage/aggregates inode_attributes.used_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.percent-inode-used-capacity conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_logical_used_wo_snapshots","title":"aggr_logical_used_wo_snapshots","text":"

Logical used

API Endpoint Metric Template REST api/storage/aggregates space.efficiency_without_snapshots.logical_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-efficiency-get-iter aggr-efficiency-info.aggr-efficiency-cumulative-info.total-data-reduction-logical-used-wo-snapshots conf/zapi/cdot/9.9.0/aggr_efficiency.yaml"},{"location":"ontap-metrics/#aggr_logical_used_wo_snapshots_flexclones","title":"aggr_logical_used_wo_snapshots_flexclones","text":"

Logical used

API Endpoint Metric Template REST api/storage/aggregates space.efficiency_without_snapshots_flexclones.logical_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-efficiency-get-iter aggr-efficiency-info.aggr-efficiency-cumulative-info.total-data-reduction-logical-used-wo-snapshots-flexclones conf/zapi/cdot/9.9.0/aggr_efficiency.yaml"},{"location":"ontap-metrics/#aggr_physical_used_wo_snapshots","title":"aggr_physical_used_wo_snapshots","text":"

Total Data Reduction Physical Used Without Snapshots

API Endpoint Metric Template REST api/storage/aggregates space.efficiency_without_snapshots.logical_used, space.efficiency_without_snapshots.savings conf/rest/9.12.0/aggr.yaml ZAPI aggr-efficiency-get-iter aggr-efficiency-info.aggr-efficiency-cumulative-info.total-data-reduction-physical-used-wo-snapshots conf/zapi/cdot/9.9.0/aggr_efficiency.yaml"},{"location":"ontap-metrics/#aggr_physical_used_wo_snapshots_flexclones","title":"aggr_physical_used_wo_snapshots_flexclones","text":"

Total Data Reduction Physical Used without snapshots and flexclones

API Endpoint Metric Template REST api/storage/aggregates space.efficiency_without_snapshots_flexclones.logical_used, space.efficiency_without_snapshots_flexclones.savings conf/rest/9.12.0/aggr.yaml ZAPI aggr-efficiency-get-iter aggr-efficiency-info.aggr-efficiency-cumulative-info.total-data-reduction-physical-used-wo-snapshots-flexclones conf/zapi/cdot/9.9.0/aggr_efficiency.yaml"},{"location":"ontap-metrics/#aggr_power","title":"aggr_power","text":"

Power consumed by aggregate in Watts.

API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_primary_disk_count","title":"aggr_primary_disk_count","text":"

Number of disks used in the aggregate. This includes parity disks, but excludes disks in the hybrid cache.

API Endpoint Metric Template REST api/storage/aggregates block_storage.primary.disk_count conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_raid_disk_count","title":"aggr_raid_disk_count","text":"

Number of disks in the aggregate.
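The REST template lists two fields for this metric, which suggests the value is the sum of primary and hybrid-cache disk counts. A hedged sketch of reproducing that sum directly against the REST API, assuming jq is installed and using placeholder credentials:

```bash
# Sketch: sum primary and hybrid-cache disk counts per aggregate (hybrid cache may be absent).
curl -sk -u admin:password \
  "https://cluster-mgmt.example.com/api/storage/aggregates?fields=block_storage.primary.disk_count,block_storage.hybrid_cache.disk_count" |
jq '.records[] | {name, raid_disk_count: ((.block_storage.primary.disk_count // 0) + (.block_storage.hybrid_cache.disk_count // 0))}'
```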

API Endpoint Metric Template REST api/storage/aggregates block_storage.primary.disk_count, block_storage.hybrid_cache.disk_count conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-raid-attributes.disk-count conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_raid_plex_count","title":"aggr_raid_plex_count","text":"

Number of plexes in the aggregate

API Endpoint Metric Template REST api/storage/aggregates block_storage.plexes.# conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-raid-attributes.plex-count conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_raid_size","title":"aggr_raid_size","text":"

Option to specify the maximum number of disks that can be included in a RAID group.

API Endpoint Metric Template REST api/storage/aggregates block_storage.primary.raid_size conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-raid-attributes.raid-size conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_files_total","title":"aggr_snapshot_files_total","text":"

Total files allowed in Snapshot copies

API Endpoint Metric Template REST api/storage/aggregates snapshot.files_total conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.files-total conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_files_used","title":"aggr_snapshot_files_used","text":"

Total files created in Snapshot copies

API Endpoint Metric Template REST api/storage/aggregates snapshot.files_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.files-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_inode_used_percent","title":"aggr_snapshot_inode_used_percent","text":"

The percentage of disk space currently in use based on user-visible file (inode) count on the referenced file system.

API Endpoint Metric Template ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.percent-inode-used-capacity conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_maxfiles_available","title":"aggr_snapshot_maxfiles_available","text":"

Maximum files available for Snapshot copies

API Endpoint Metric Template REST api/storage/aggregates snapshot.max_files_available conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.maxfiles-available conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_maxfiles_possible","title":"aggr_snapshot_maxfiles_possible","text":"

The largest value to which the maxfiles-available parameter can be increased by reconfiguration, on the referenced file system.

API Endpoint Metric Template REST api/storage/aggregates snapshot.max_files_available, snapshot.max_files_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.maxfiles-possible conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_maxfiles_used","title":"aggr_snapshot_maxfiles_used","text":"

Files in use by Snapshot copies

API Endpoint Metric Template REST api/storage/aggregates snapshot.max_files_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.maxfiles-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_reserve_percent","title":"aggr_snapshot_reserve_percent","text":"

Percentage of space reserved for Snapshot copies

API Endpoint Metric Template REST api/storage/aggregates space.snapshot.reserve_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.snapshot-reserve-percent conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_size_available","title":"aggr_snapshot_size_available","text":"

Available space for Snapshot copies in bytes

API Endpoint Metric Template REST api/storage/aggregates space.snapshot.available conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.size-available conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_size_total","title":"aggr_snapshot_size_total","text":"

Total space for Snapshot copies in bytes

API Endpoint Metric Template REST api/storage/aggregates space.snapshot.total conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.size-total conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_size_used","title":"aggr_snapshot_size_used","text":"

Space used by Snapshot copies in bytes

API Endpoint Metric Template REST api/storage/aggregates space.snapshot.used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.size-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_used_percent","title":"aggr_snapshot_used_percent","text":"

Percentage of disk space used by Snapshot copies

API Endpoint Metric Template REST api/storage/aggregates space.snapshot.used_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.percent-used-capacity conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_available","title":"aggr_space_available","text":"

Space available in bytes.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.available conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.size-available conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_capacity_tier_used","title":"aggr_space_capacity_tier_used","text":"

Used space in bytes in the cloud store. Only applicable for aggregates with a cloud store tier.

API Endpoint Metric Template REST api/storage/aggregates space.cloud_storage.used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.capacity-tier-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_data_compacted_count","title":"aggr_space_data_compacted_count","text":"

Amount of compacted data in bytes.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.data_compacted_count conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.data-compacted-count conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_data_compaction_saved","title":"aggr_space_data_compaction_saved","text":"

Space saved in bytes by compacting the data.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.data_compaction_space_saved conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.data-compaction-space-saved conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_data_compaction_saved_percent","title":"aggr_space_data_compaction_saved_percent","text":"

Percentage saved by compacting the data.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.data_compaction_space_saved_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.data-compaction-space-saved-percent conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_performance_tier_inactive_user_data","title":"aggr_space_performance_tier_inactive_user_data","text":"

The size that is physically used in the block storage and has a cold temperature, in bytes. This property is only supported if the aggregate is either attached to a cloud store or can be attached to a cloud store. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either block_storage.inactive_user_data or **.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.inactive_user_data conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.performance-tier-inactive-user-data conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_performance_tier_inactive_user_data_percent","title":"aggr_space_performance_tier_inactive_user_data_percent","text":"

The percentage of inactive user data in the block storage. This property is only supported if the aggregate is either attached to a cloud store or can be attached to a cloud store. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either block_storage.inactive_user_data_percent or **.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.inactive_user_data_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.performance-tier-inactive-user-data-percent conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_performance_tier_used","title":"aggr_space_performance_tier_used","text":"

A summation of volume footprints (including volume guarantees), in bytes. This includes all of the volume footprints in the block_storage tier and the cloud_storage tier. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.

API Endpoint Metric Template REST api/storage/aggregates space.footprint conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_performance_tier_used_percent","title":"aggr_space_performance_tier_used_percent","text":"API Endpoint Metric Template REST api/storage/aggregates space.footprint_percent conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_physical_used","title":"aggr_space_physical_used","text":"

Total physical used size of an aggregate in bytes.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.physical_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.physical-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_physical_used_percent","title":"aggr_space_physical_used_percent","text":"

Physical used percentage.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.physical_used_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.physical-used-percent conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_reserved","title":"aggr_space_reserved","text":"

The total disk space in bytes that is reserved on the referenced file system. The reserved space is already counted in the used space, so this element can be used to see what portion of the used space represents space reserved for future use.

API Endpoint Metric Template ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.total-reserved-space conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_sis_saved","title":"aggr_space_sis_saved","text":"

Amount of space saved in bytes by storage efficiency.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.volume_deduplication_space_saved conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.sis-space-saved conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_sis_saved_percent","title":"aggr_space_sis_saved_percent","text":"

Percentage of space saved by storage efficiency.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.volume_deduplication_space_saved_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.sis-space-saved-percent conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_sis_shared_count","title":"aggr_space_sis_shared_count","text":"

Amount of shared bytes counted by storage efficiency.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.volume_deduplication_shared_count conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.sis-shared-count conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_total","title":"aggr_space_total","text":"

Total usable space in bytes, not including WAFL reserve and aggregate Snapshot copy reserve.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.size conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.size-total conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_used","title":"aggr_space_used","text":"

Space used or reserved in bytes. Includes volume guarantees and aggregate metadata.

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.size-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_used_percent","title":"aggr_space_used_percent","text":"

The percentage of disk space currently in use on the referenced file system
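If Harvest is exporting to Prometheus, a quick way to spot-check the computed percentage is to scrape the poller's metrics endpoint directly; the port below is a placeholder for whatever your Prometheus exporter is configured with.

```bash
# Sketch: grep the computed percentage straight from the exporter's /metrics page.
curl -s http://localhost:12990/metrics | grep '^aggr_space_used_percent'
```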

API Endpoint Metric Template REST api/storage/aggregates space.block_storage.used, space.block_storage.size conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.percent-used-capacity conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_total_logical_used","title":"aggr_total_logical_used","text":"

Logical used

API Endpoint Metric Template REST api/storage/aggregates space.efficiency.logical_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-efficiency-get-iter aggr-efficiency-info.aggr-efficiency-cumulative-info.total-logical-used conf/zapi/cdot/9.9.0/aggr_efficiency.yaml"},{"location":"ontap-metrics/#aggr_total_physical_used","title":"aggr_total_physical_used","text":"

Total Physical Used

API Endpoint Metric Template REST api/storage/aggregates space.efficiency.logical_used, space.efficiency.savings conf/rest/9.12.0/aggr.yaml ZAPI aggr-efficiency-get-iter aggr-efficiency-info.aggr-efficiency-cumulative-info.total-physical-used conf/zapi/cdot/9.9.0/aggr_efficiency.yaml"},{"location":"ontap-metrics/#aggr_volume_count_flexvol","title":"aggr_volume_count_flexvol","text":"

Number of flexvol volumes in the aggregate.

API Endpoint Metric Template REST api/storage/aggregates volume_count conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-volume-count-attributes.flexvol-count conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#cifs_session_connection_count","title":"cifs_session_connection_count","text":"

A counter used to track requests that are sent to the volumes to the node.

API Endpoint Metric Template REST api/protocols/cifs/sessions connection_count conf/rest/9.8.0/cifs_session.yaml ZAPI cifs-session-get-iter cifs-session.connection-count conf/zapi/cdot/9.8.0/cifs_session.yaml"},{"location":"ontap-metrics/#cloud_target_used","title":"cloud_target_used","text":"

The amount of cloud space used by all the aggregates attached to the target, in bytes. This field is only populated for FabricPool targets. The value is recalculated once every 5 minutes.

API Endpoint Metric Template REST api/cloud/targets used conf/rest/9.12.0/cloud_target.yaml ZAPI aggr-object-store-config-get-iter aggr-object-store-config-info.used-space conf/zapi/cdot/9.10.0/aggr_object_store_config.yaml"},{"location":"ontap-metrics/#cluster_new_status","title":"cluster_new_status","text":"

It is an indicator of the overall health status of the cluster, with a value of 1 indicating a healthy status and a value of 0 indicating an unhealthy status.
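Because the metric is a simple 0/1 gauge, an unhealthy cluster can be spotted with a Prometheus instant query; the Prometheus address below is a placeholder.

```bash
# Sketch: list clusters whose most recent status sample is 0 (unhealthy).
curl -s 'http://localhost:9090/api/v1/query' --data-urlencode 'query=cluster_new_status == 0'
```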

API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/status.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/status.yaml"},{"location":"ontap-metrics/#cluster_subsystem_outstanding_alerts","title":"cluster_subsystem_outstanding_alerts","text":"

Number of outstanding alerts

API Endpoint Metric Template REST api/private/cli/system/health/subsystem outstanding_alert_count conf/rest/9.12.0/subsystem.yaml ZAPI diagnosis-subsystem-config-get-iter diagnosis-subsystem-config-info.outstanding-alert-count conf/zapi/cdot/9.8.0/subsystem.yaml"},{"location":"ontap-metrics/#cluster_subsystem_suppressed_alerts","title":"cluster_subsystem_suppressed_alerts","text":"

Number of suppressed alerts

API Endpoint Metric Template REST api/private/cli/system/health/subsystem suppressed_alert_count conf/rest/9.12.0/subsystem.yaml ZAPI diagnosis-subsystem-config-get-iter diagnosis-subsystem-config-info.suppressed-alert-count conf/zapi/cdot/9.8.0/subsystem.yaml"},{"location":"ontap-metrics/#copy_manager_bce_copy_count_curr","title":"copy_manager_bce_copy_count_curr","text":"

Current number of copy requests being processed by the Block Copy Engine.

API Endpoint Metric Template REST api/cluster/counter/tables/copy_manager block_copy_engine_current_copy_countUnit: noneType: deltaBase: conf/restperf/9.12.0/copy_manager.yaml ZAPI perf-object-get-instances copy_manager bce_copy_count_currUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/copy_manager.yaml"},{"location":"ontap-metrics/#copy_manager_kb_copied","title":"copy_manager_kb_copied","text":"

Sum of kilo-bytes copied.

API Endpoint Metric Template REST api/cluster/counter/tables/copy_manager KB_copiedUnit: noneType: deltaBase: conf/restperf/9.12.0/copy_manager.yaml ZAPI perf-object-get-instances copy_manager KB_copiedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/copy_manager.yaml"},{"location":"ontap-metrics/#copy_manager_ocs_copy_count_curr","title":"copy_manager_ocs_copy_count_curr","text":"

Current number of copy requests being processed by the ONTAP copy subsystem.

API Endpoint Metric Template REST api/cluster/counter/tables/copy_manager ontap_copy_subsystem_current_copy_countUnit: noneType: deltaBase: conf/restperf/9.12.0/copy_manager.yaml ZAPI perf-object-get-instances copy_manager ocs_copy_count_currUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/copy_manager.yaml"},{"location":"ontap-metrics/#copy_manager_sce_copy_count_curr","title":"copy_manager_sce_copy_count_curr","text":"

Current number of copy requests being processed by the System Continuous Engineering.

API Endpoint Metric Template REST api/cluster/counter/tables/copy_manager system_continuous_engineering_current_copy_countUnit: noneType: deltaBase: conf/restperf/9.12.0/copy_manager.yaml ZAPI perf-object-get-instances copy_manager sce_copy_count_currUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/copy_manager.yaml"},{"location":"ontap-metrics/#copy_manager_spince_copy_count_curr","title":"copy_manager_spince_copy_count_curr","text":"

Current number of copy requests being processed by the SpinCE.

API Endpoint Metric Template REST api/cluster/counter/tables/copy_manager spince_current_copy_countUnit: noneType: deltaBase: conf/restperf/9.12.0/copy_manager.yaml ZAPI perf-object-get-instances copy_manager spince_copy_count_currUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/copy_manager.yaml"},{"location":"ontap-metrics/#disk_busy","title":"disk_busy","text":"

The utilization percent of the disk

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_bytes_per_sector","title":"disk_bytes_per_sector","text":"

Bytes per sector.

API Endpoint Metric Template REST api/storage/disks bytes_per_sector conf/rest/9.12.0/disk.yaml ZAPI storage-disk-get-iter storage-disk-info.disk-inventory-info.bytes-per-sector conf/zapi/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_capacity","title":"disk_capacity","text":"

Disk capacity in MB

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_cp_read_chain","title":"disk_cp_read_chain","text":"

Average number of blocks transferred in each consistency point read operation during a CP

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_cp_read_latency","title":"disk_cp_read_latency","text":"

Average latency per block in microseconds for consistency point read operations

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_cp_reads","title":"disk_cp_reads","text":"

Number of disk read operations initiated each second for consistency point processing

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_io_pending","title":"disk_io_pending","text":"

Average number of I/Os issued to the disk for which we have not yet received the response

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_io_queued","title":"disk_io_queued","text":"

Number of I/Os queued to the disk but not yet issued

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_power_on_hours","title":"disk_power_on_hours","text":"

Hours powered on.

API Endpoint Metric Template REST api/storage/disks stats.power_on_hours conf/rest/9.12.0/disk.yaml"},{"location":"ontap-metrics/#disk_sectors","title":"disk_sectors","text":"

Number of sectors on the disk.

API Endpoint Metric Template REST api/storage/disks sector_count conf/rest/9.12.0/disk.yaml ZAPI storage-disk-get-iter storage-disk-info.disk-inventory-info.capacity-sectors conf/zapi/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_stats_average_latency","title":"disk_stats_average_latency","text":"

Average I/O latency across all active paths, in milliseconds.

API Endpoint Metric Template REST api/storage/disks stats.average_latency conf/rest/9.12.0/disk.yaml ZAPI storage-disk-get-iter storage-disk-info.disk-stats-info.average-latency conf/zapi/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_stats_io_kbps","title":"disk_stats_io_kbps","text":"

Total Disk Throughput in KBPS Across All Active Paths

API Endpoint Metric Template ZAPI storage-disk-get-iter storage-disk-info.disk-stats-info.disk-io-kbps conf/zapi/cdot/9.8.0/disk.yaml REST api/private/cli/disk disk_io_kbps_total conf/rest/9.12.0/disk.yaml"},{"location":"ontap-metrics/#disk_stats_sectors_read","title":"disk_stats_sectors_read","text":"

Number of Sectors Read

API Endpoint Metric Template ZAPI storage-disk-get-iter storage-disk-info.disk-stats-info.sectors-read conf/zapi/cdot/9.8.0/disk.yaml REST api/private/cli/disk sectors_read conf/rest/9.12.0/disk.yaml"},{"location":"ontap-metrics/#disk_stats_sectors_written","title":"disk_stats_sectors_written","text":"

Number of Sectors Written

API Endpoint Metric Template ZAPI storage-disk-get-iter storage-disk-info.disk-stats-info.sectors-written conf/zapi/cdot/9.8.0/disk.yaml REST api/private/cli/disk sectors_written conf/rest/9.12.0/disk.yaml"},{"location":"ontap-metrics/#disk_total_data","title":"disk_total_data","text":"

Total throughput for user operations per second

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_total_transfers","title":"disk_total_transfers","text":"

Total number of disk operations involving data transfer initiated per second

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_uptime","title":"disk_uptime","text":"

Number of seconds the drive has been powered on
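The REST template lists stats.power_on_hours together with two factors of 60, which suggests the exported value is the powered-on hours converted to seconds. A trivial sketch of that conversion:

```bash
# Sketch of the hours-to-seconds conversion implied by the template inputs.
power_on_hours=12345   # example value as reported by api/storage/disks stats.power_on_hours
echo $(( power_on_hours * 60 * 60 ))
```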

API Endpoint Metric Template REST api/storage/disks stats.power_on_hours, 60, 60 conf/rest/9.12.0/disk.yaml ZAPI storage-disk-get-iter storage-disk-info.disk-stats-info.power-on-time-interval conf/zapi/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_usable_size","title":"disk_usable_size","text":"

Usable size of each disk, in bytes.

API Endpoint Metric Template REST api/storage/disks usable_size conf/rest/9.12.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_read_blocks","title":"disk_user_read_blocks","text":"

Number of blocks transferred for user read operations per second

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_read_chain","title":"disk_user_read_chain","text":"

Average number of blocks transferred in each user read operation

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_read_latency","title":"disk_user_read_latency","text":"

Average latency per block in microseconds for user read operations

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_reads","title":"disk_user_reads","text":"

Number of disk read operations initiated each second for retrieving data or metadata associated with user requests

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_write_blocks","title":"disk_user_write_blocks","text":"

Number of blocks transferred for user write operations per second

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_write_chain","title":"disk_user_write_chain","text":"

Average number of blocks transferred in each user write operation

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_write_latency","title":"disk_user_write_latency","text":"

Average latency per block in microseconds for user write operations

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_writes","title":"disk_user_writes","text":"

Number of disk write operations initiated each second for storing data or metadata associated with user requests

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#environment_sensor_average_ambient_temperature","title":"environment_sensor_average_ambient_temperature","text":"

Average temperature of all ambient sensors for node in Celsius.

API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_average_fan_speed","title":"environment_sensor_average_fan_speed","text":"

Average fan speed for node in rpm.

API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_average_temperature","title":"environment_sensor_average_temperature","text":"

Average temperature of all non-ambient sensors for node in Celsius.

API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_max_fan_speed","title":"environment_sensor_max_fan_speed","text":"

Maximum fan speed for node in rpm.

API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_max_temperature","title":"environment_sensor_max_temperature","text":"

Maximum temperature of all non-ambient sensors for node in Celsius.

API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_min_ambient_temperature","title":"environment_sensor_min_ambient_temperature","text":"

Minimum temperature of all ambient sensors for node in Celsius.

API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_min_fan_speed","title":"environment_sensor_min_fan_speed","text":"

Minimum fan speed for node in rpm.

API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_min_temperature","title":"environment_sensor_min_temperature","text":"

Minimum temperature of all non-ambient sensors for node in Celsius.

API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_power","title":"environment_sensor_power","text":"

Power consumed by a node in Watts.
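Assuming Harvest's usual cluster label on exported metrics, node power can be rolled up per cluster with a Prometheus query such as the following; the Prometheus address is a placeholder.

```bash
# Sketch: total reported node power draw per cluster, in Watts.
curl -s 'http://localhost:9090/api/v1/query' --data-urlencode 'query=sum by (cluster) (environment_sensor_power)'
```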

API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_threshold_value","title":"environment_sensor_threshold_value","text":"

Provides the sensor reading.

API Endpoint Metric Template REST api/cluster/sensors value conf/rest/9.12.0/sensor.yaml ZAPI environment-sensors-get-iter environment-sensors-info.threshold-sensor-value conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#external_service_op_num_not_found_responses","title":"external_service_op_num_not_found_responses","text":"

Number of 'Not Found' responses for calls to this operation.

API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op num_not_found_responsesUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_num_request_failures","title":"external_service_op_num_request_failures","text":"

A cumulative count of all request failures.

API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op num_request_failuresUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_num_requests_sent","title":"external_service_op_num_requests_sent","text":"

Number of requests sent to this service.

API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op num_requests_sentUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_num_responses_received","title":"external_service_op_num_responses_received","text":"

Number of responses received from the server (does not include timeouts).

API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op num_responses_receivedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_num_successful_responses","title":"external_service_op_num_successful_responses","text":"

Number of successful responses to this operation.

API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op num_successful_responsesUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_num_timeouts","title":"external_service_op_num_timeouts","text":"

Number of times requests to the server for this operation timed out, meaning no response was received in a given time period.

API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op num_timeoutsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_request_latency","title":"external_service_op_request_latency","text":"

Average latency of requests for operations of this type on this server.

API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op request_latencyUnit: microsecType: averageBase: num_requests_sent conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_request_latency_hist","title":"external_service_op_request_latency_hist","text":"

This histogram holds the latency values for requests of this operation to the specified server.

API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op request_latency_histUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#fabricpool_average_latency","title":"fabricpool_average_latency","text":"

This counter is deprecated. Average latencies measured during various phases of command execution. The execution-start latency represents the average time taken to start executing an operation. The request-prepare latency represents the average time taken to prepare the complete request that needs to be sent to the server. The send latency represents the average time taken to send requests to the server. The execution-start-to-send-complete latency represents the average time taken to send an operation out since its execution started. The execution-start-to-first-byte-received latency represents the average time taken to receive the first byte of a response since the command's request execution started. These counters can be used to identify performance bottlenecks within the object store client module.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_client_op average_latencyUnit: Type: Base: conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml"},{"location":"ontap-metrics/#fabricpool_cloud_bin_op_latency_average","title":"fabricpool_cloud_bin_op_latency_average","text":"

Cloud bin operation latency average in milliseconds.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_comp_aggr_vol_bin cloud_bin_op_latency_averageUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_comp_aggr_vol_bin.yaml ZAPI perf-object-get-instances wafl_comp_aggr_vol_bin cloud_bin_op_latency_averageUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/wafl_comp_aggr_vol_bin.yaml"},{"location":"ontap-metrics/#fabricpool_cloud_bin_operation","title":"fabricpool_cloud_bin_operation","text":"

Cloud bin operation counters.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_comp_aggr_vol_bin cloud_bin_opUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl_comp_aggr_vol_bin.yaml ZAPI perf-object-get-instances wafl_comp_aggr_vol_bin cloud_bin_operationUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl_comp_aggr_vol_bin.yaml"},{"location":"ontap-metrics/#fabricpool_get_throughput_bytes","title":"fabricpool_get_throughput_bytes","text":"

This counter is deprecated. Counter that indicates the throughput of the GET command in bytes per second.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_client_op get_throughput_bytesUnit: Type: Base: conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml"},{"location":"ontap-metrics/#fabricpool_put_throughput_bytes","title":"fabricpool_put_throughput_bytes","text":"

This counter is deprecated. Counter that indicates the throughput of the PUT command in bytes per second.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_client_op put_throughput_bytesUnit: Type: Base: conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml"},{"location":"ontap-metrics/#fabricpool_stats","title":"fabricpool_stats","text":"

This counter is deprecated. Counter that indicates the number of object store operations sent, and their success and failure counts. The objstore_client_op_name array indicates the operation name, such as PUT, GET, etc. The objstore_client_op_stats_name array contains the total number of operations and their success and failure counters for each operation.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_client_op statsUnit: Type: Base: conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml"},{"location":"ontap-metrics/#fabricpool_throughput_ops","title":"fabricpool_throughput_ops","text":"

Counter that indicates the throughput for commands in ops per second.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_client_op throughput_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml"},{"location":"ontap-metrics/#fcp_avg_other_latency","title":"fcp_avg_other_latency","text":"

Average latency for operations other than read and write

API Endpoint Metric Template REST api/cluster/counter/tables/fcp average_other_latencyUnit: microsecType: averageBase: other_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port avg_other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_avg_read_latency","title":"fcp_avg_read_latency","text":"

Average latency for read operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp average_read_latencyUnit: microsecType: averageBase: read_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_avg_write_latency","title":"fcp_avg_write_latency","text":"

Average latency for write operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp average_write_latencyUnit: microsecType: averageBase: write_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_discarded_frames_count","title":"fcp_discarded_frames_count","text":"

Number of discarded frames.

API Endpoint Metric Template REST api/cluster/counter/tables/fcp discarded_frames_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port discarded_frames_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_fabric_connected_speed","title":"fcp_fabric_connected_speed","text":"

The negotiated data rate between the target FC port and the fabric in gigabits per second.

API Endpoint Metric Template REST api/network/fc/ports fabric.connected_speed conf/rest/9.6.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_int_count","title":"fcp_int_count","text":"

Number of interrupts

API Endpoint Metric Template REST api/cluster/counter/tables/fcp interrupt_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port int_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_invalid_crc","title":"fcp_invalid_crc","text":"

Number of invalid cyclic redundancy checks (CRC count)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp invalid.crcUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port invalid_crcUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_invalid_transmission_word","title":"fcp_invalid_transmission_word","text":"

Number of invalid transmission words

API Endpoint Metric Template REST api/cluster/counter/tables/fcp invalid.transmission_wordUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port invalid_transmission_wordUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_isr_count","title":"fcp_isr_count","text":"

Number of interrupt responses

API Endpoint Metric Template REST api/cluster/counter/tables/fcp isr.countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port isr_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_lif_avg_latency","title":"fcp_lif_avg_latency","text":"

Average latency for FCP operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif average_latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_avg_other_latency","title":"fcp_lif_avg_other_latency","text":"

Average latency for operations other than read and write

API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif average_other_latencyUnit: microsecType: averageBase: other_ops conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif avg_other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_avg_read_latency","title":"fcp_lif_avg_read_latency","text":"

Average latency for read operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif average_read_latencyUnit: microsecType: averageBase: read_ops conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_avg_write_latency","title":"fcp_lif_avg_write_latency","text":"

Average latency for write operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif average_write_latencyUnit: microsecType: averageBase: write_ops conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_other_ops","title":"fcp_lif_other_ops","text":"

Number of operations that are not read or write.

API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_read_data","title":"fcp_lif_read_data","text":"

Amount of data read from the storage system

API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_read_ops","title":"fcp_lif_read_ops","text":"

Number of read operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_total_ops","title":"fcp_lif_total_ops","text":"

Total number of operations.

API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_write_data","title":"fcp_lif_write_data","text":"

Amount of data written to the storage system

API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_write_ops","title":"fcp_lif_write_ops","text":"

Number of write operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_link_down","title":"fcp_link_down","text":"

Number of times the Fibre Channel link was lost

API Endpoint Metric Template REST api/cluster/counter/tables/fcp link.downUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port link_downUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_link_failure","title":"fcp_link_failure","text":"

Number of link failures

API Endpoint Metric Template REST api/cluster/counter/tables/fcp link_failureUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port link_failureUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_loss_of_signal","title":"fcp_loss_of_signal","text":"

Number of times this port lost signal

API Endpoint Metric Template REST api/cluster/counter/tables/fcp loss_of_signalUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port loss_of_signalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_loss_of_sync","title":"fcp_loss_of_sync","text":"

Number of times this port lost sync

API Endpoint Metric Template REST api/cluster/counter/tables/fcp loss_of_syncUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port loss_of_syncUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_max_speed","title":"fcp_max_speed","text":"

The maximum speed supported by the FC port in gigabits per second.

API Endpoint Metric Template REST api/network/fc/ports speed.maximum conf/rest/9.6.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_avg_other_latency","title":"fcp_nvmf_avg_other_latency","text":"

Average latency for operations other than read and write (FC-NVMe)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.average_other_latencyUnit: microsecType: averageBase: nvmf.other_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_avg_other_latencyUnit: microsecType: averageBase: nvmf_other_ops conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_avg_read_latency","title":"fcp_nvmf_avg_read_latency","text":"

Average latency for read operations (FC-NVMe)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.average_read_latencyUnit: microsecType: averageBase: nvmf.read_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_avg_read_latencyUnit: microsecType: averageBase: nvmf_read_ops conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_avg_remote_other_latency","title":"fcp_nvmf_avg_remote_other_latency","text":"

Average latency for remote operations other than read and write (FC-NVMe)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.average_remote_other_latencyUnit: microsecType: averageBase: nvmf_remote.other_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_avg_remote_other_latencyUnit: microsecType: averageBase: nvmf_remote_other_ops conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_avg_remote_read_latency","title":"fcp_nvmf_avg_remote_read_latency","text":"

Average latency for remote read operations (FC-NVMe)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.average_remote_read_latencyUnit: microsecType: averageBase: nvmf_remote.read_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_avg_remote_read_latencyUnit: microsecType: averageBase: nvmf_remote_read_ops conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_avg_remote_write_latency","title":"fcp_nvmf_avg_remote_write_latency","text":"

Average latency for remote write operations (FC-NVMe)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.average_remote_write_latencyUnit: microsecType: averageBase: nvmf_remote.write_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_avg_remote_write_latencyUnit: microsecType: averageBase: nvmf_remote_write_ops conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_avg_write_latency","title":"fcp_nvmf_avg_write_latency","text":"

Average latency for write operations (FC-NVMe)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.average_write_latencyUnit: microsecType: averageBase: nvmf.write_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_avg_write_latencyUnit: microsecType: averageBase: nvmf_write_ops conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_caw_data","title":"fcp_nvmf_caw_data","text":"

Amount of CAW data sent to the storage system (FC-NVMe)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.caw_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_caw_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_caw_ops","title":"fcp_nvmf_caw_ops","text":"

Number of FC-NVMe CAW operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.caw_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_caw_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_command_slots","title":"fcp_nvmf_command_slots","text":"

Number of command slots that have been used by initiators logging into this port. This shows the command fan-in on the port.

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.command_slotsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_command_slotsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_other_ops","title":"fcp_nvmf_other_ops","text":"

Number of NVMF operations that are not read or write.

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_read_data","title":"fcp_nvmf_read_data","text":"

Amount of data read from the storage system (FC-NVMe)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_read_ops","title":"fcp_nvmf_read_ops","text":"

Number of FC-NVMe read operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_caw_data","title":"fcp_nvmf_remote_caw_data","text":"

Amount of remote CAW data sent to the storage system (FC-NVMe)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.caw_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_caw_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_caw_ops","title":"fcp_nvmf_remote_caw_ops","text":"

Number of FC-NVMe remote CAW operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.caw_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_caw_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_other_ops","title":"fcp_nvmf_remote_other_ops","text":"

Number of NVMF remote operations that are not read or write.

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_read_data","title":"fcp_nvmf_remote_read_data","text":"

Amount of remote data read from the storage system (FC-NVMe)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_read_ops","title":"fcp_nvmf_remote_read_ops","text":"

Number of FC-NVMe remote read operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_total_data","title":"fcp_nvmf_remote_total_data","text":"

Amount of remote FC-NVMe traffic to and from the storage system

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_total_ops","title":"fcp_nvmf_remote_total_ops","text":"

Total number of remote FC-NVMe operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_write_data","title":"fcp_nvmf_remote_write_data","text":"

Amount of remote data written to the storage system (FC-NVMe)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_write_ops","title":"fcp_nvmf_remote_write_ops","text":"

Number of FC-NVMe remote write operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_total_data","title":"fcp_nvmf_total_data","text":"

Amount of FC-NVMe traffic to and from the storage system

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_total_ops","title":"fcp_nvmf_total_ops","text":"

Total number of FC-NVMe operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_write_data","title":"fcp_nvmf_write_data","text":"

Amount of data written to the storage system (FC-NVMe)

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_write_ops","title":"fcp_nvmf_write_ops","text":"

Number of FC-NVMe write operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_other_ops","title":"fcp_other_ops","text":"

Number of operations that are not read or write.

API Endpoint Metric Template REST api/cluster/counter/tables/fcp other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_prim_seq_err","title":"fcp_prim_seq_err","text":"

Number of primitive sequence errors

API Endpoint Metric Template REST api/cluster/counter/tables/fcp primitive_seq_errUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port prim_seq_errUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_queue_full","title":"fcp_queue_full","text":"

Number of times a queue full condition occurred.

API Endpoint Metric Template REST api/cluster/counter/tables/fcp queue_fullUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port queue_fullUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_read_data","title":"fcp_read_data","text":"

Amount of data read from the storage system

API Endpoint Metric Template REST api/cluster/counter/tables/fcp read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_read_ops","title":"fcp_read_ops","text":"

Number of read operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_reset_count","title":"fcp_reset_count","text":"

Number of physical port resets

API Endpoint Metric Template REST api/cluster/counter/tables/fcp reset_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port reset_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_shared_int_count","title":"fcp_shared_int_count","text":"

Number of shared interrupts

API Endpoint Metric Template REST api/cluster/counter/tables/fcp shared_interrupt_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port shared_int_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_spurious_int_count","title":"fcp_spurious_int_count","text":"

Number of spurious interrupts

API Endpoint Metric Template REST api/cluster/counter/tables/fcp spurious_interrupt_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port spurious_int_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_threshold_full","title":"fcp_threshold_full","text":"

Number of times the total number of outstanding commands on the port exceeds the threshold supported by this port.

API Endpoint Metric Template REST api/cluster/counter/tables/fcp threshold_fullUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port threshold_fullUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_total_data","title":"fcp_total_data","text":"

Amount of FCP traffic to and from the storage system

API Endpoint Metric Template REST api/cluster/counter/tables/fcp total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_total_ops","title":"fcp_total_ops","text":"

Total number of FCP operations
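The REST rows in these tables all come from ONTAP's counter-tables endpoint. As a rough, non-authoritative sketch only (the cluster address, credentials, and the exact field layout of each row are placeholders and assumptions, not something defined by Harvest), the fcp table documented here can be sampled directly:

import requests

# Placeholder cluster address and credentials -- replace with real values.
CLUSTER = "https://cluster.example.com"
AUTH = ("admin", "password")

# The fcp counters documented on this page live in the
# api/cluster/counter/tables/fcp table; each instance (port) is one row.
resp = requests.get(
    f"{CLUSTER}/api/cluster/counter/tables/fcp/rows",
    params={"fields": "*"},
    auth=AUTH,
    verify=False,  # illustration only; verify certificates in real deployments
)
resp.raise_for_status()
for row in resp.json().get("records", []):
    # Assumed row shape: an "id" plus a list of {"name": ..., "value": ...} counters.
    counters = {c["name"]: c.get("value") for c in row.get("counters", [])}
    print(row.get("id"), counters.get("total_ops"))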

API Endpoint Metric Template REST api/cluster/counter/tables/fcp total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_write_data","title":"fcp_write_data","text":"

Amount of data written to the storage system

API Endpoint Metric Template REST api/cluster/counter/tables/fcp write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_write_ops","title":"fcp_write_ops","text":"

Number of write operations

API Endpoint Metric Template REST api/cluster/counter/tables/fcp write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcvi_firmware_invalid_crc_count","title":"fcvi_firmware_invalid_crc_count","text":"

Firmware reported invalid CRC count

API Endpoint Metric Template REST api/cluster/counter/tables/fcvi firmware.invalid_crc_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi fw_invalid_crcUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_firmware_invalid_transmit_word_count","title":"fcvi_firmware_invalid_transmit_word_count","text":"

Firmware reported invalid transmit word count

API Endpoint Metric Template REST api/cluster/counter/tables/fcvi firmware.invalid_transmit_word_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi fw_invalid_xmit_wordsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_firmware_link_failure_count","title":"fcvi_firmware_link_failure_count","text":"

Firmware reported link failure count

API Endpoint Metric Template REST api/cluster/counter/tables/fcvi firmware.link_failure_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi fw_link_failureUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_firmware_loss_of_signal_count","title":"fcvi_firmware_loss_of_signal_count","text":"

Firmware reported loss of signal count

API Endpoint Metric Template REST api/cluster/counter/tables/fcvi firmware.loss_of_signal_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi fw_loss_of_signalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_firmware_loss_of_sync_count","title":"fcvi_firmware_loss_of_sync_count","text":"

Firmware reported loss of sync count

API Endpoint Metric Template REST api/cluster/counter/tables/fcvi firmware.loss_of_sync_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi fw_loss_of_syncUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_firmware_systat_discard_frames","title":"fcvi_firmware_systat_discard_frames","text":"

Firmware reported SyStatDiscardFrames value

API Endpoint Metric Template REST api/cluster/counter/tables/fcvi firmware.systat.discard_framesUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi fw_SyStatDiscardFramesUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_hard_reset_count","title":"fcvi_hard_reset_count","text":"

Number of times a hard reset of the FCVI adapter was issued.

API Endpoint Metric Template REST api/cluster/counter/tables/fcvi hard_reset_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi hard_reset_cntUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_rdma_write_avg_latency","title":"fcvi_rdma_write_avg_latency","text":"

Average RDMA write I/O latency.

API Endpoint Metric Template REST api/cluster/counter/tables/fcvi rdma.write_average_latencyUnit: microsecType: averageBase: rdma.write_ops conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi rdma_write_avg_latencyUnit: microsecType: averageBase: rdma_write_ops conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_rdma_write_ops","title":"fcvi_rdma_write_ops","text":"

Number of RDMA write I/Os issued per second.

API Endpoint Metric Template REST api/cluster/counter/tables/fcvi rdma.write_opsUnit: noneType: rateBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi rdma_write_opsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_rdma_write_throughput","title":"fcvi_rdma_write_throughput","text":"

RDMA write throughput in bytes per second.

API Endpoint Metric Template REST api/cluster/counter/tables/fcvi rdma.write_throughputUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi rdma_write_throughputUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_soft_reset_count","title":"fcvi_soft_reset_count","text":"

Number of times a soft reset of the FCVI adapter was issued.

API Endpoint Metric Template REST api/cluster/counter/tables/fcvi soft_reset_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi soft_reset_cntUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#flashcache_accesses","title":"flashcache_accesses","text":"

External cache accesses per second

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache accessesUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj accessesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_disk_reads_replaced","title":"flashcache_disk_reads_replaced","text":"

Estimated number of disk reads per second replaced by cache

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache disk_reads_replacedUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj disk_reads_replacedUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_evicts","title":"flashcache_evicts","text":"

Number of blocks evicted from the external cache to make room for new blocks

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache evictsUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj evictsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_hit","title":"flashcache_hit","text":"

Number of WAFL buffers served off the external cache

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache hit.totalUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj hitUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_hit_directory","title":"flashcache_hit_directory","text":"

Number of directory buffers served off the external cache

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache hit.directoryUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj hit_directoryUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_hit_indirect","title":"flashcache_hit_indirect","text":"

Number of indirect file buffers served off the external cache

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache hit.indirectUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj hit_indirectUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_hit_metadata_file","title":"flashcache_hit_metadata_file","text":"

Number of metadata file buffers served off the external cache

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache hit.metadata_fileUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj hit_metadata_fileUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_hit_normal_lev0","title":"flashcache_hit_normal_lev0","text":"

Number of normal level 0 WAFL buffers served off the external cache

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache hit.normal_level_zeroUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj hit_normal_lev0Unit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_hit_percent","title":"flashcache_hit_percent","text":"

External cache hit rate
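This counter is reported against a base counter (Base: accesses in the table below). As a hedged sketch of that relationship, using made-up sample values, the interval hit rate is roughly the delta of hits divided by the delta of accesses:

# Two consecutive samples of the raw counters (illustrative values only).
hit_total_t1, accesses_t1 = 1_000, 1_250
hit_total_t2, accesses_t2 = 1_800, 2_250

# Percent-style counters are reported relative to their base counter,
# so the interval hit rate is the hit delta over the access delta.
hit_percent = 100 * (hit_total_t2 - hit_total_t1) / (accesses_t2 - accesses_t1)
print(f"flashcache_hit_percent ~= {hit_percent:.1f}%")  # 80.0%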

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache hit.percentUnit: percentType: averageBase: accesses conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj hit_percentUnit: percentType: percentBase: accesses conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_inserts","title":"flashcache_inserts","text":"

Number of WAFL buffers inserted into the external cache

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache insertsUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj insertsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_invalidates","title":"flashcache_invalidates","text":"

Number of blocks invalidated in the external cache

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache invalidatesUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj invalidatesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_miss","title":"flashcache_miss","text":"

External cache misses

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache miss.totalUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj missUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_miss_directory","title":"flashcache_miss_directory","text":"

External cache misses accessing directory buffers

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache miss.directoryUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj miss_directoryUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_miss_indirect","title":"flashcache_miss_indirect","text":"

External cache misses accessing indirect file buffers

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache miss.indirectUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj miss_indirectUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_miss_metadata_file","title":"flashcache_miss_metadata_file","text":"

External cache misses accessing metadata file buffers

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache miss.metadata_fileUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj miss_metadata_fileUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_miss_normal_lev0","title":"flashcache_miss_normal_lev0","text":"

External cache misses accessing normal level 0 buffers

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache miss.normal_level_zeroUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj miss_normal_lev0Unit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_usage","title":"flashcache_usage","text":"

Percentage of blocks in external cache currently containing valid data

API Endpoint Metric Template REST api/cluster/counter/tables/external_cache usageUnit: percentType: rawBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj usageUnit: percentType: rawBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashpool_cache_stats","title":"flashpool_cache_stats","text":"

Automated Working-set Analyzer (AWA) per-interval pseudo cache statistics for the most recent intervals. The number of intervals defined as recent is CM_WAFL_HYAS_INT_DIS_CNT. This array is a table with fields corresponding to the enum type of hyas_cache_stat_type_t.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_sizer cache_statsUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_hya_sizer.yaml ZAPI perf-object-get-instances wafl_hya_sizer cache_statsUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/wafl_hya_sizer.yaml"},{"location":"ontap-metrics/#flashpool_evict_destage_rate","title":"flashpool_evict_destage_rate","text":"

Number of blocks destaged per second.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate evict_destage_rateUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr evict_destage_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_evict_remove_rate","title":"flashpool_evict_remove_rate","text":"

Number of blocks freed per second.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate evict_remove_rateUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr evict_remove_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_hya_read_hit_latency_average","title":"flashpool_hya_read_hit_latency_average","text":"

Average of RAID I/O latency on read hit.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate hya_read_hit_latency_averageUnit: noneType: averageBase: hya_read_hit_latency_count conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr hya_read_hit_latency_averageUnit: noneType: averageBase: hya_read_hit_latency_count conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_hya_read_miss_latency_average","title":"flashpool_hya_read_miss_latency_average","text":"

Average read miss latency.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate hya_read_miss_latency_averageUnit: noneType: averageBase: hya_read_miss_latency_count conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr hya_read_miss_latency_averageUnit: noneType: averageBase: hya_read_miss_latency_count conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_hya_write_hdd_latency_average","title":"flashpool_hya_write_hdd_latency_average","text":"

Average write latency to HDD.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate hya_write_hdd_latency_averageUnit: noneType: averageBase: hya_write_hdd_latency_count conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr hya_write_hdd_latency_averageUnit: noneType: averageBase: hya_write_hdd_latency_count conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_hya_write_ssd_latency_average","title":"flashpool_hya_write_ssd_latency_average","text":"

Average of RAID I/O latency on write to SSD.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate hya_write_ssd_latency_averageUnit: noneType: averageBase: hya_write_ssd_latency_count conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr hya_write_ssd_latency_averageUnit: noneType: averageBase: hya_write_ssd_latency_count conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_read_cache_ins_rate","title":"flashpool_read_cache_ins_rate","text":"

Cache insert rate blocks/sec.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate read_cache_insert_rateUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr read_cache_ins_rateUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_read_ops_replaced","title":"flashpool_read_ops_replaced","text":"

Number of HDD read operations replaced by SSD reads per second.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate read_ops_replacedUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr read_ops_replacedUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_read_ops_replaced_percent","title":"flashpool_read_ops_replaced_percent","text":"

Percentage of HDD read operations replaced by SSD reads.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate read_ops_replaced_percentUnit: percentType: percentBase: read_ops_total conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr read_ops_replaced_percentUnit: percentType: percentBase: read_ops_total conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_ssd_available","title":"flashpool_ssd_available","text":"

Total SSD blocks available.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate ssd_availableUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr ssd_availableUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_ssd_read_cached","title":"flashpool_ssd_read_cached","text":"

Total read cached SSD blocks.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate ssd_read_cachedUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr ssd_read_cachedUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_ssd_total","title":"flashpool_ssd_total","text":"

Total SSD blocks.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate ssd_totalUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr ssd_totalUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_ssd_total_used","title":"flashpool_ssd_total_used","text":"

Total SSD blocks used.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate ssd_total_usedUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr ssd_total_usedUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_ssd_write_cached","title":"flashpool_ssd_write_cached","text":"

Total write cached SSD blocks.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate ssd_write_cachedUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr ssd_write_cachedUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_wc_write_blks_total","title":"flashpool_wc_write_blks_total","text":"

Number of write-cache blocks written per second.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate wc_write_blocks_totalUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr wc_write_blks_totalUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_write_blks_replaced","title":"flashpool_write_blks_replaced","text":"

Number of HDD write blocks replaced by SSD writes per second.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate write_blocks_replacedUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr write_blks_replacedUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_write_blks_replaced_percent","title":"flashpool_write_blks_replaced_percent","text":"

Percentage of blocks overwritten to write-cache among all disk writes.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate write_blocks_replaced_percentUnit: percentType: averageBase: estimated_write_blocks_total conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr write_blks_replaced_percentUnit: percentType: averageBase: est_write_blks_total conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_current_latency","title":"headroom_aggr_current_latency","text":"

This is the storage aggregate average latency per message at the disk level.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate current_latencyUnit: microsecType: averageBase: current_ops conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr current_latencyUnit: microsecType: averageBase: current_ops conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_current_ops","title":"headroom_aggr_current_ops","text":"

Total number of I/Os processed by the aggregate per second.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate current_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr current_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_current_utilization","title":"headroom_aggr_current_utilization","text":"

This is the storage aggregate average utilization of all the data disks in the aggregate.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate current_utilizationUnit: percentType: percentBase: current_utilization_denominator conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr current_utilizationUnit: percentType: percentBase: current_utilization_total conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_ewma_daily","title":"headroom_aggr_ewma_daily","text":"

Daily exponential weighted moving average.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate ewma.dailyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr ewma_dailyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_ewma_hourly","title":"headroom_aggr_ewma_hourly","text":"

Hourly exponential weighted moving average.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate ewma.hourlyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr ewma_hourlyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_ewma_monthly","title":"headroom_aggr_ewma_monthly","text":"

Monthly exponential weighted moving average.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate ewma.monthlyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr ewma_monthlyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_ewma_weekly","title":"headroom_aggr_ewma_weekly","text":"

Weekly exponential weighted moving average.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate ewma.weeklyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr ewma_weeklyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_optimal_point_confidence_factor","title":"headroom_aggr_optimal_point_confidence_factor","text":"

The confidence factor for the optimal point value based on the observed resource latency and utilization.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate optimal_point.confidence_factorUnit: noneType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr optimal_point_confidence_factorUnit: noneType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_optimal_point_latency","title":"headroom_aggr_optimal_point_latency","text":"

The latency component of the optimal point of the latency/utilization curve.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate optimal_point.latencyUnit: microsecType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr optimal_point_latencyUnit: microsecType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_optimal_point_ops","title":"headroom_aggr_optimal_point_ops","text":"

The ops component of the optimal point derived from the latency/utilization curve.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate optimal_point.opsUnit: per_secType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr optimal_point_opsUnit: per_secType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_optimal_point_utilization","title":"headroom_aggr_optimal_point_utilization","text":"

The utilization component of the optimal point of the latency/utilization curve.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate optimal_point.utilizationUnit: noneType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr optimal_point_utilizationUnit: noneType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_cpu_current_latency","title":"headroom_cpu_current_latency","text":"

Current operation latency of the resource.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu current_latencyUnit: microsecType: averageBase: current_ops conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu current_latencyUnit: microsecType: averageBase: current_ops conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_current_ops","title":"headroom_cpu_current_ops","text":"

Total number of operations per second (also referred to as dblade ops).

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu current_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu current_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_current_utilization","title":"headroom_cpu_current_utilization","text":"

Average processor utilization across all processors in the system.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu current_utilizationUnit: percentType: percentBase: elapsed_time conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu current_utilizationUnit: percentType: percentBase: current_utilization_total conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_ewma_daily","title":"headroom_cpu_ewma_daily","text":"

Daily exponential weighted moving average for current_ops, optimal_point_ops, current_latency, optimal_point_latency, current_utilization, optimal_point_utilization and optimal_point_confidence_factor.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu ewma.dailyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu ewma_dailyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_ewma_hourly","title":"headroom_cpu_ewma_hourly","text":"

Hourly exponential weighted moving average for current_ops, optimal_point_ops, current_latency, optimal_point_latency, current_utilization, optimal_point_utilization and optimal_point_confidence_factor.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu ewma.hourlyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu ewma_hourlyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_ewma_monthly","title":"headroom_cpu_ewma_monthly","text":"

Monthly exponential weighted moving average for current_ops, optimal_point_ops, current_latency, optimal_point_latency, current_utilization, optimal_point_utilization and optimal_point_confidence_factor.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu ewma.monthlyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu ewma_monthlyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_ewma_weekly","title":"headroom_cpu_ewma_weekly","text":"

Weekly exponential weighted moving average for current_ops, optimal_point_ops, current_latency, optimal_point_latency, current_utilization, optimal_point_utilization and optimal_point_confidence_factor.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu ewma.weeklyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu ewma_weeklyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_optimal_point_confidence_factor","title":"headroom_cpu_optimal_point_confidence_factor","text":"

Confidence factor for the optimal point value based on the observed resource latency and utilization. The possible values are: 0 - unknown, 1 - low, 2 - medium, 3 - high. This counter can provide an average confidence factor over a range of time.
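Because the exported value is an average of the 0-3 enum values above, a fractional reading has to be rounded back to a label by the consumer. A minimal illustrative sketch (the mapping simply restates the values from the description; the rounding choice is an assumption):

# Mapping restated from the description: 0-unknown, 1-low, 2-medium, 3-high.
CONFIDENCE_LABELS = {0: "unknown", 1: "low", 2: "medium", 3: "high"}

def confidence_label(averaged_value: float) -> str:
    # Averaging over a time range can yield fractional values; clamp and round.
    return CONFIDENCE_LABELS[min(3, max(0, round(averaged_value)))]

print(confidence_label(2.4))  # "medium"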

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu optimal_point.confidence_factorUnit: noneType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu optimal_point_confidence_factorUnit: noneType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_optimal_point_latency","title":"headroom_cpu_optimal_point_latency","text":"

Latency component of the optimal point of the latency/utilization curve. This counter can provide an average latency over a range of time.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu optimal_point.latencyUnit: microsecType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu optimal_point_latencyUnit: microsecType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_optimal_point_ops","title":"headroom_cpu_optimal_point_ops","text":"

Ops component of the optimal point derived from the latency/utilization curve. This counter can provide an average ops over a range of time.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu optimal_point.opsUnit: per_secType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu optimal_point_opsUnit: per_secType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_optimal_point_utilization","title":"headroom_cpu_optimal_point_utilization","text":"

Utilization component of the optimal point of the latency/utilization curve. This counter can provide an average utilization over a range of time.

API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu optimal_point.utilizationUnit: noneType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu optimal_point_utilizationUnit: noneType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#hostadapter_bytes_read","title":"hostadapter_bytes_read","text":"

Bytes read through a host adapter

API Endpoint Metric Template REST api/cluster/counter/tables/host_adapter bytes_readUnit: per_secType: rateBase: conf/restperf/9.12.0/hostadapter.yaml ZAPI perf-object-get-instances hostadapter bytes_readUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/hostadapter.yaml"},{"location":"ontap-metrics/#hostadapter_bytes_written","title":"hostadapter_bytes_written","text":"

Bytes written through a host adapter

API Endpoint Metric Template REST api/cluster/counter/tables/host_adapter bytes_writtenUnit: per_secType: rateBase: conf/restperf/9.12.0/hostadapter.yaml ZAPI perf-object-get-instances hostadapter bytes_writtenUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/hostadapter.yaml"},{"location":"ontap-metrics/#iscsi_lif_avg_latency","title":"iscsi_lif_avg_latency","text":"

Average latency for iSCSI operations

API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif average_latencyUnit: microsecType: averageBase: cmd_transferred conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif avg_latencyUnit: microsecType: averageBase: cmd_transfered conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_avg_other_latency","title":"iscsi_lif_avg_other_latency","text":"

Average latency for operations other than read and write (for example, Inquiry, Report LUNs, SCSI Task Management Functions)

API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif average_other_latencyUnit: microsecType: averageBase: iscsi_other_ops conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif avg_other_latencyUnit: microsecType: averageBase: iscsi_other_ops conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_avg_read_latency","title":"iscsi_lif_avg_read_latency","text":"

Average latency for read operations

API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif average_read_latencyUnit: microsecType: averageBase: iscsi_read_ops conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif avg_read_latencyUnit: microsecType: averageBase: iscsi_read_ops conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_avg_write_latency","title":"iscsi_lif_avg_write_latency","text":"

Average latency for write operations

API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif average_write_latencyUnit: microsecType: averageBase: iscsi_write_ops conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif avg_write_latencyUnit: microsecType: averageBase: iscsi_write_ops conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_cmd_transfered","title":"iscsi_lif_cmd_transfered","text":"

Command transferred by this iSCSI connection

API Endpoint Metric Template ZAPI perf-object-get-instances iscsi_lif cmd_transferedUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_cmd_transferred","title":"iscsi_lif_cmd_transferred","text":"

Command transferred by this iSCSI connection

API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif cmd_transferredUnit: noneType: rateBase: conf/restperf/9.12.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_iscsi_other_ops","title":"iscsi_lif_iscsi_other_ops","text":"

iSCSI other operations per second on this logical interface (LIF)

API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif iscsi_other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif iscsi_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_iscsi_read_ops","title":"iscsi_lif_iscsi_read_ops","text":"

iSCSI read operations per second on this logical interface (LIF)

API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif iscsi_read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif iscsi_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_iscsi_write_ops","title":"iscsi_lif_iscsi_write_ops","text":"

iSCSI write operations per second on this logical interface (LIF)

API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif iscsi_write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif iscsi_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_protocol_errors","title":"iscsi_lif_protocol_errors","text":"

Number of protocol errors from iSCSI sessions on this logical interface (LIF)

API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif protocol_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif protocol_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_read_data","title":"iscsi_lif_read_data","text":"

Amount of data read from the storage system in bytes

API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_write_data","title":"iscsi_lif_write_data","text":"

Amount of data written to the storage system in bytes

API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iw_avg_latency","title":"iw_avg_latency","text":"

Average RDMA I/O latency.

API Endpoint Metric Template ZAPI perf-object-get-instances iwarp iw_avg_latencyUnit: microsecType: averageBase: iw_ops conf/zapiperf/cdot/9.8.0/iwarp.yaml"},{"location":"ontap-metrics/#iw_ops","title":"iw_ops","text":"

Number of RDMA I/Os issued.

API Endpoint Metric Template ZAPI perf-object-get-instances iwarp iw_opsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/iwarp.yaml"},{"location":"ontap-metrics/#iw_read_ops","title":"iw_read_ops","text":"

Number of RDMA read I/Os issued.

API Endpoint Metric Template ZAPI perf-object-get-instances iwarp iw_read_opsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/iwarp.yaml"},{"location":"ontap-metrics/#iw_write_ops","title":"iw_write_ops","text":"

Number of RDMA write I/Os issued.

API Endpoint Metric Template ZAPI perf-object-get-instances iwarp iw_write_opsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/iwarp.yaml"},{"location":"ontap-metrics/#lif_recv_data","title":"lif_recv_data","text":"

Number of bytes received per second

API Endpoint Metric Template REST api/cluster/counter/tables/lif received_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/lif.yaml ZAPI perf-object-get-instances lif recv_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lif.yaml"},{"location":"ontap-metrics/#lif_recv_errors","title":"lif_recv_errors","text":"

Number of received errors per second

API Endpoint Metric Template REST api/cluster/counter/tables/lif received_errorsUnit: per_secType: rateBase: conf/restperf/9.12.0/lif.yaml ZAPI perf-object-get-instances lif recv_errorsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lif.yaml"},{"location":"ontap-metrics/#lif_recv_packet","title":"lif_recv_packet","text":"

Number of packets received per second

API Endpoint Metric Template REST api/cluster/counter/tables/lif received_packetsUnit: per_secType: rateBase: conf/restperf/9.12.0/lif.yaml ZAPI perf-object-get-instances lif recv_packetUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lif.yaml"},{"location":"ontap-metrics/#lif_sent_data","title":"lif_sent_data","text":"

Number of bytes sent per second

API Endpoint Metric Template REST api/cluster/counter/tables/lif sent_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/lif.yaml ZAPI perf-object-get-instances lif sent_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lif.yaml"},{"location":"ontap-metrics/#lif_sent_errors","title":"lif_sent_errors","text":"

Number of sent errors per second

API Endpoint Metric Template REST api/cluster/counter/tables/lif sent_errorsUnit: per_secType: rateBase: conf/restperf/9.12.0/lif.yaml ZAPI perf-object-get-instances lif sent_errorsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lif.yaml"},{"location":"ontap-metrics/#lif_sent_packet","title":"lif_sent_packet","text":"

Number of packets sent per second

API Endpoint Metric Template REST api/cluster/counter/tables/lif sent_packetsUnit: per_secType: rateBase: conf/restperf/9.12.0/lif.yaml ZAPI perf-object-get-instances lif sent_packetUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lif.yaml"},{"location":"ontap-metrics/#lun_avg_read_latency","title":"lun_avg_read_latency","text":"

Average read latency in microseconds for all operations on the LUN
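Counters of type average, such as this one, are paired with a base counter (Base: read_ops in the table below). As a hedged sketch with made-up sample values, the per-interval average is roughly the delta of the latency accumulator divided by the delta of its base:

# Two consecutive samples of the raw counters (illustrative values only).
read_latency_t1, read_ops_t1 = 4_000_000, 10_000   # microsec accumulator, op count
read_latency_t2, read_ops_t2 = 4_900_000, 13_000

# Average-type counters divide the counter delta by the base-counter delta.
avg_read_latency = (read_latency_t2 - read_latency_t1) / (read_ops_t2 - read_ops_t1)
print(f"lun_avg_read_latency ~= {avg_read_latency:.0f} microsec")  # 300 microsec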

API Endpoint Metric Template REST api/cluster/counter/tables/lun average_read_latencyUnit: microsecType: averageBase: read_ops conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_avg_write_latency","title":"lun_avg_write_latency","text":"

Average write latency in microseconds for all operations on the LUN

API Endpoint Metric Template REST api/cluster/counter/tables/lun average_write_latencyUnit: microsecType: averageBase: write_ops conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_avg_xcopy_latency","title":"lun_avg_xcopy_latency","text":"

Average latency in microseconds for xcopy requests

API Endpoint Metric Template REST api/cluster/counter/tables/lun average_xcopy_latencyUnit: microsecType: averageBase: xcopy_requests conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun avg_xcopy_latencyUnit: microsecType: averageBase: xcopy_reqs conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_caw_reqs","title":"lun_caw_reqs","text":"

Number of compare and write requests

API Endpoint Metric Template REST api/cluster/counter/tables/lun caw_requestsUnit: noneType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun caw_reqsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_enospc","title":"lun_enospc","text":"

Number of operations receiving ENOSPC errors

API Endpoint Metric Template REST api/cluster/counter/tables/lun enospcUnit: noneType: deltaBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun enospcUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_queue_full","title":"lun_queue_full","text":"

Queue full responses

API Endpoint Metric Template REST api/cluster/counter/tables/lun queue_fullUnit: per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun queue_fullUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_read_align_histo","title":"lun_read_align_histo","text":"

Histogram of WAFL read alignment (number of sectors off WAFL block start)

API Endpoint Metric Template REST api/cluster/counter/tables/lun read_align_histogramUnit: percentType: percentBase: read_ops_sent conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun read_align_histoUnit: percentType: percentBase: read_ops_sent conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_read_data","title":"lun_read_data","text":"

Read bytes

API Endpoint Metric Template REST api/cluster/counter/tables/lun read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_read_ops","title":"lun_read_ops","text":"

Number of read operations

API Endpoint Metric Template REST api/cluster/counter/tables/lun read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_read_partial_blocks","title":"lun_read_partial_blocks","text":"

Percentage of reads whose size is not a multiple of WAFL block size

API Endpoint Metric Template REST api/cluster/counter/tables/lun read_partial_blocksUnit: percentType: percentBase: read_ops conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun read_partial_blocksUnit: percentType: percentBase: read_ops conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_remote_bytes","title":"lun_remote_bytes","text":"

I/O to or from a LUN which is not owned by the storage system handling the I/O.

API Endpoint Metric Template REST api/cluster/counter/tables/lun remote_bytesUnit: b_per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun remote_bytesUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_remote_ops","title":"lun_remote_ops","text":"

Number of operations received by a storage system that does not own the LUN targeted by the operations.

API Endpoint Metric Template REST api/cluster/counter/tables/lun remote_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun remote_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_size","title":"lun_size","text":"

The total provisioned size of the LUN. The LUN size can be increased but not made smaller using the REST interface. The maximum and minimum sizes listed here are the absolute maximum and absolute minimum sizes in bytes. The actual minimum and maximum sizes vary depending on the ONTAP version, ONTAP platform, and the available space in the containing volume and aggregate. For more information, see Size properties in the docs section of the ONTAP REST API documentation.

API Endpoint Metric Template REST api/storage/luns space.size conf/rest/9.12.0/lun.yaml ZAPI lun-get-iter lun-info.size conf/zapi/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_size_used","title":"lun_size_used","text":"

The amount of space consumed by the main data stream of the LUN. This value is the total space consumed in the volume by the LUN, including filesystem overhead, but excluding prefix and suffix streams. Due to internal filesystem overhead and the many ways SAN filesystems and applications utilize blocks within a LUN, this value does not necessarily reflect actual consumption/availability from the perspective of the filesystem or application. Without specific knowledge of how the LUN blocks are utilized outside of ONTAP, this property should not be used as an indicator for an out-of-space condition. For more information, see Size properties in the docs section of the ONTAP REST API documentation.
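As an illustrative sketch (not part of the ONTAP field description), the exported lun_size_used and lun_size series can be combined in PromQL to derive a per-LUN used percentage, assuming both series carry identical labels; Harvest also exports this ratio directly as lun_size_used_percent (next entry):

100 * lun_size_used / lun_size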

API Endpoint Metric Template REST api/storage/luns space.used conf/rest/9.12.0/lun.yaml ZAPI lun-get-iter lun-info.size-used conf/zapi/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_size_used_percent","title":"lun_size_used_percent","text":"API Endpoint Metric Template REST api/storage/luns size_used, size conf/rest/9.12.0/lun.yaml ZAPI lun-get-iter size_used, size conf/zapi/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_unmap_reqs","title":"lun_unmap_reqs","text":"

Number of unmap command requests

API Endpoint Metric Template REST api/cluster/counter/tables/lun unmap_requestsUnit: noneType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun unmap_reqsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_write_align_histo","title":"lun_write_align_histo","text":"

Histogram of WAFL write alignment (number of sectors off WAFL block start)

API Endpoint Metric Template REST api/cluster/counter/tables/lun write_align_histogramUnit: percentType: percentBase: write_ops_sent conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun write_align_histoUnit: percentType: percentBase: write_ops_sent conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_write_data","title":"lun_write_data","text":"

Write bytes

API Endpoint Metric Template REST api/cluster/counter/tables/lun write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_write_ops","title":"lun_write_ops","text":"

Number of write operations

API Endpoint Metric Template REST api/cluster/counter/tables/lun write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_write_partial_blocks","title":"lun_write_partial_blocks","text":"

Percentage of writes whose size is not a multiple of WAFL block size

API Endpoint Metric Template REST api/cluster/counter/tables/lun write_partial_blocksUnit: percentType: percentBase: write_ops conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun write_partial_blocksUnit: percentType: percentBase: write_ops conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_writesame_reqs","title":"lun_writesame_reqs","text":"

Number of write same command requests

API Endpoint Metric Template REST api/cluster/counter/tables/lun writesame_requestsUnit: noneType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun writesame_reqsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_writesame_unmap_reqs","title":"lun_writesame_unmap_reqs","text":"

Number of write same command requests with the unmap bit set

API Endpoint Metric Template REST api/cluster/counter/tables/lun writesame_unmap_requestsUnit: noneType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun writesame_unmap_reqsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_xcopy_reqs","title":"lun_xcopy_reqs","text":"

Total number of xcopy operations on the LUN

API Endpoint Metric Template REST api/cluster/counter/tables/lun xcopy_requestsUnit: noneType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun xcopy_reqsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#metadata_collector_api_time","title":"metadata_collector_api_time","text":"

amount of time to collect data from monitored cluster object

API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_collector_calc_time","title":"metadata_collector_calc_time","text":"

amount of time it took to compute metrics between two successive polls, specifically using properties like raw, delta, rate, average, and percent. This metric is available for ZapiPerf/RestPerf collectors.

API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_collector_instances","title":"metadata_collector_instances","text":"

number of objects collected from monitored cluster

API Endpoint Metric Template REST NA Harvest generatedUnit: scalar NA ZAPI NA Harvest generatedUnit: scalar NA"},{"location":"ontap-metrics/#metadata_collector_metrics","title":"metadata_collector_metrics","text":"

number of counters collected from monitored cluster

API Endpoint Metric Template REST NA Harvest generatedUnit: scalar NA ZAPI NA Harvest generatedUnit: scalar NA"},{"location":"ontap-metrics/#metadata_collector_parse_time","title":"metadata_collector_parse_time","text":"

amount of time to parse XML, JSON, etc. for cluster object

API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_collector_plugin_time","title":"metadata_collector_plugin_time","text":"

amount of time for all plugins to post-process metrics

API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_collector_poll_time","title":"metadata_collector_poll_time","text":"

amount of time it took for the poll to finish

API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_collector_skips","title":"metadata_collector_skips","text":"

number of metrics that were not calculated between two successive polls. This metric is available for ZapiPerf/RestPerf collectors.

API Endpoint Metric Template REST NA Harvest generatedUnit: scalar NA ZAPI NA Harvest generatedUnit: scalar NA"},{"location":"ontap-metrics/#metadata_collector_task_time","title":"metadata_collector_task_time","text":"

amount of time it took for each collector's subtasks to complete

API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_component_count","title":"metadata_component_count","text":"

number of metrics collected for each object

API Endpoint Metric Template REST NA Harvest generatedUnit: scalar NA ZAPI NA Harvest generatedUnit: scalar NA"},{"location":"ontap-metrics/#metadata_component_status","title":"metadata_component_status","text":"

status of the collector - 0 means running, 1 means standby, 2 means failed
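As a hedged PromQL sketch (the labels attached to the series are assumptions), collectors that have entered the failed state can be listed with:

metadata_component_status == 2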

API Endpoint Metric Template REST NA Harvest generatedUnit: enum NA ZAPI NA Harvest generatedUnit: enum NA"},{"location":"ontap-metrics/#metadata_exporter_count","title":"metadata_exporter_count","text":"

number of metrics and labels exported

API Endpoint Metric Template REST NA Harvest generatedUnit: scalar NA ZAPI NA Harvest generatedUnit: scalar NA"},{"location":"ontap-metrics/#metadata_exporter_time","title":"metadata_exporter_time","text":"

amount of time it took to render, export, and serve exported data

API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_target_goroutines","title":"metadata_target_goroutines","text":"

number of goroutines that exist within the poller

API Endpoint Metric Template REST NA Harvest generatedUnit: scalar NA ZAPI NA Harvest generatedUnit: scalar NA"},{"location":"ontap-metrics/#metadata_target_status","title":"metadata_target_status","text":"

status of the system being monitored. 0 means reachable, 1 means unreachable
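For example, the following PromQL sketch (not from the ONTAP docs) surfaces pollers whose monitored system is currently unreachable:

metadata_target_status == 1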

API Endpoint Metric Template REST NA Harvest generatedUnit: enum NA ZAPI NA Harvest generatedUnit: enum NA"},{"location":"ontap-metrics/#namespace_avg_other_latency","title":"namespace_avg_other_latency","text":"

Average other ops latency in microseconds for all operations on the Namespace

API Endpoint Metric Template REST api/cluster/counter/tables/namespace average_other_latencyUnit: microsecType: averageBase: other_ops conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace avg_other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_avg_read_latency","title":"namespace_avg_read_latency","text":"

Average read latency in microseconds for all operations on the Namespace

API Endpoint Metric Template REST api/cluster/counter/tables/namespace average_read_latencyUnit: microsecType: averageBase: read_ops conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_avg_write_latency","title":"namespace_avg_write_latency","text":"

Average write latency in microseconds for all operations on the Namespace

API Endpoint Metric Template REST api/cluster/counter/tables/namespace average_write_latencyUnit: microsecType: averageBase: write_ops conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_block_size","title":"namespace_block_size","text":"

The size of blocks in the namespace, in bytes. Valid in POST when creating an NVMe namespace that is not a clone of another; disallowed in POST when creating a namespace clone.

API Endpoint Metric Template REST api/storage/namespaces space.block_size conf/rest/9.12.0/namespace.yaml ZAPI nvme-namespace-get-iter nvme-namespace-info.block-size conf/zapi/cdot/9.8.0/namespace.yaml"},{"location":"ontap-metrics/#namespace_other_ops","title":"namespace_other_ops","text":"

Number of other operations

API Endpoint Metric Template REST api/cluster/counter/tables/namespace other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_read_data","title":"namespace_read_data","text":"

Read bytes

API Endpoint Metric Template REST api/cluster/counter/tables/namespace read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_read_ops","title":"namespace_read_ops","text":"

Number of read operations

API Endpoint Metric Template REST api/cluster/counter/tables/namespace read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_remote_bytes","title":"namespace_remote_bytes","text":"

Remote read bytes

API Endpoint Metric Template REST api/cluster/counter/tables/namespace remote.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace remote_bytesUnit: Type: Base: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_remote_ops","title":"namespace_remote_ops","text":"

Number of remote read operations

API Endpoint Metric Template REST api/cluster/counter/tables/namespace remote.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace remote_opsUnit: Type: Base: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_size","title":"namespace_size","text":"

The total provisioned size of the NVMe namespace. Valid in POST and PATCH. The NVMe namespace size can be increased but not made smaller using the REST interface. The maximum and minimum sizes listed here are the absolute maximum and absolute minimum sizes in bytes. The maximum size is variable with respect to large NVMe namespace support in ONTAP: if large namespaces are supported, the maximum size is 128 TB (140737488355328 bytes); if not supported, the maximum size is just under 16 TB (17557557870592 bytes). The minimum size supported is always 4096 bytes. For more information, see Size properties in the docs section of the ONTAP REST API documentation.

API Endpoint Metric Template REST api/storage/namespaces space.size conf/rest/9.12.0/namespace.yaml ZAPI nvme-namespace-get-iter nvme-namespace-info.size conf/zapi/cdot/9.8.0/namespace.yaml"},{"location":"ontap-metrics/#namespace_size_available","title":"namespace_size_available","text":"API Endpoint Metric Template REST api/storage/namespaces size, size_used conf/rest/9.12.0/namespace.yaml ZAPI nvme-namespace-get-iter size, size_used conf/zapi/cdot/9.8.0/namespace.yaml"},{"location":"ontap-metrics/#namespace_size_available_percent","title":"namespace_size_available_percent","text":"API Endpoint Metric Template REST api/storage/namespaces size_available, size conf/rest/9.12.0/namespace.yaml ZAPI nvme-namespace-get-iter size_available, size conf/zapi/cdot/9.8.0/namespace.yaml"},{"location":"ontap-metrics/#namespace_size_used","title":"namespace_size_used","text":"

The amount of space consumed by the main data stream of the NVMe namespace. This value is the total space consumed in the volume by the NVMe namespace, including filesystem overhead, but excluding prefix and suffix streams. Due to internal filesystem overhead and the many ways NVMe filesystems and applications utilize blocks within a namespace, this value does not necessarily reflect actual consumption/availability from the perspective of the filesystem or application. Without specific knowledge of how the namespace blocks are utilized outside of ONTAP, this property should not be used as an indicator for an out-of-space condition. For more information, see Size properties in the docs section of the ONTAP REST API documentation.

API Endpoint Metric Template REST api/storage/namespaces space.used conf/rest/9.12.0/namespace.yaml ZAPI nvme-namespace-get-iter nvme-namespace-info.size-used conf/zapi/cdot/9.8.0/namespace.yaml"},{"location":"ontap-metrics/#namespace_write_data","title":"namespace_write_data","text":"

Write bytes

API Endpoint Metric Template REST api/cluster/counter/tables/namespace write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_write_ops","title":"namespace_write_ops","text":"

Number of write operations

API Endpoint Metric Template REST api/cluster/counter/tables/namespace write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#net_port_mtu","title":"net_port_mtu","text":"

Maximum transmission unit, largest packet size on this network

API Endpoint Metric Template REST api/network/ethernet/ports mtu conf/rest/9.12.0/netport.yaml ZAPI net-port-get-iter net-port-info.mtu conf/zapi/cdot/9.8.0/netport.yaml"},{"location":"ontap-metrics/#netstat_bytes_recvd","title":"netstat_bytes_recvd","text":"

Number of bytes received by a TCP connection

API Endpoint Metric Template ZAPI perf-object-get-instances netstat bytes_recvdUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_bytes_sent","title":"netstat_bytes_sent","text":"

Number of bytes sent by a TCP connection

API Endpoint Metric Template ZAPI perf-object-get-instances netstat bytes_sentUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_cong_win","title":"netstat_cong_win","text":"

Congestion window of a TCP connection

API Endpoint Metric Template ZAPI perf-object-get-instances netstat cong_winUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_cong_win_th","title":"netstat_cong_win_th","text":"

Congestion window threshold of a TCP connection

API Endpoint Metric Template ZAPI perf-object-get-instances netstat cong_win_thUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_ooorcv_pkts","title":"netstat_ooorcv_pkts","text":"

Number of out-of-order packets received by this TCP connection

API Endpoint Metric Template ZAPI perf-object-get-instances netstat ooorcv_pktsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_recv_window","title":"netstat_recv_window","text":"

Receive window size of a TCP connection

API Endpoint Metric Template ZAPI perf-object-get-instances netstat recv_windowUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_rexmit_pkts","title":"netstat_rexmit_pkts","text":"

Number of packets retransmitted by this TCP connection

API Endpoint Metric Template ZAPI perf-object-get-instances netstat rexmit_pktsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_send_window","title":"netstat_send_window","text":"

Send window size of a TCP connection

API Endpoint Metric Template ZAPI perf-object-get-instances netstat send_windowUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#nfs_clients_idle_duration","title":"nfs_clients_idle_duration","text":"

The idle duration of the connected NFS client, reported in ISO-8601 format as hours, minutes, and seconds.
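For example, a value of PT2H39M30S (an illustrative value, not taken from the ONTAP docs) represents an idle duration of 2 hours, 39 minutes, and 30 seconds.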

API Endpoint Metric Template REST api/protocols/nfs/connected-clients idle_duration conf/rest/9.7.0/nfs_clients.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_bytelockalloc","title":"nfs_diag_storePool_ByteLockAlloc","text":"

Current number of byte range lock objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.byte_lock_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_ByteLockAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_bytelockmax","title":"nfs_diag_storePool_ByteLockMax","text":"

Maximum number of byte range lock objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.byte_lock_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_ByteLockMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_clientalloc","title":"nfs_diag_storePool_ClientAlloc","text":"

Current number of client objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.client_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_ClientAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_clientmax","title":"nfs_diag_storePool_ClientMax","text":"

Maximum number of client objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.client_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_ClientMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_connectionparentsessionreferencealloc","title":"nfs_diag_storePool_ConnectionParentSessionReferenceAlloc","text":"

Current number of connection parent session reference objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.connection_parent_session_reference_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_ConnectionParentSessionReferenceAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_connectionparentsessionreferencemax","title":"nfs_diag_storePool_ConnectionParentSessionReferenceMax","text":"

Maximum number of connection parent session reference objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.connection_parent_session_reference_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_ConnectionParentSessionReferenceMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_copystatealloc","title":"nfs_diag_storePool_CopyStateAlloc","text":"

Current number of copy state objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.copy_state_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_CopyStateAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_copystatemax","title":"nfs_diag_storePool_CopyStateMax","text":"

Maximum number of copy state objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.copy_state_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_CopyStateMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_delegalloc","title":"nfs_diag_storePool_DelegAlloc","text":"

Current number of delegation lock objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.delegation_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_DelegAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_delegmax","title":"nfs_diag_storePool_DelegMax","text":"

Maximum number of delegation lock objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.delegation_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_DelegMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_delegstatealloc","title":"nfs_diag_storePool_DelegStateAlloc","text":"

Current number of delegation state objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.delegation_state_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_DelegStateAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_delegstatemax","title":"nfs_diag_storePool_DelegStateMax","text":"

Maximum number of delegation state objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.delegation_state_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_DelegStateMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_layoutalloc","title":"nfs_diag_storePool_LayoutAlloc","text":"

Current number of layout objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.layout_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_LayoutAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_layoutmax","title":"nfs_diag_storePool_LayoutMax","text":"

Maximum number of layout objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.layout_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_LayoutMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_layoutstatealloc","title":"nfs_diag_storePool_LayoutStateAlloc","text":"

Current number of layout state objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.layout_state_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_LayoutStateAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_layoutstatemax","title":"nfs_diag_storePool_LayoutStateMax","text":"

Maximum number of layout state objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.layout_state_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_LayoutStateMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_lockstatealloc","title":"nfs_diag_storePool_LockStateAlloc","text":"

Current number of lock state objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.lock_state_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_LockStateAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_lockstatemax","title":"nfs_diag_storePool_LockStateMax","text":"

Maximum number of lock state objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.lock_state_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_LockStateMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_openalloc","title":"nfs_diag_storePool_OpenAlloc","text":"

Current number of share objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.open_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_OpenAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_openmax","title":"nfs_diag_storePool_OpenMax","text":"

Maximum number of share lock objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.open_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_OpenMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_openstatealloc","title":"nfs_diag_storePool_OpenStateAlloc","text":"

Current number of open state objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.openstate_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_OpenStateAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_openstatemax","title":"nfs_diag_storePool_OpenStateMax","text":"

Maximum number of open state objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.openstate_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_OpenStateMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_owneralloc","title":"nfs_diag_storePool_OwnerAlloc","text":"

Current number of owner objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.owner_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_OwnerAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_ownermax","title":"nfs_diag_storePool_OwnerMax","text":"

Maximum number of owner objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.owner_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_OwnerMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_sessionalloc","title":"nfs_diag_storePool_SessionAlloc","text":"

Current number of session objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.session_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_SessionAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_sessionconnectionholderalloc","title":"nfs_diag_storePool_SessionConnectionHolderAlloc","text":"

Current number of session connection holder objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.session_connection_holder_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_SessionConnectionHolderAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_sessionconnectionholdermax","title":"nfs_diag_storePool_SessionConnectionHolderMax","text":"

Maximum number of session connection holder objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.session_connection_holder_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_SessionConnectionHolderMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_sessionholderalloc","title":"nfs_diag_storePool_SessionHolderAlloc","text":"

Current number of session holder objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.session_holder_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_SessionHolderAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_sessionholdermax","title":"nfs_diag_storePool_SessionHolderMax","text":"

Maximum number of session holder objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.session_holder_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_SessionHolderMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_sessionmax","title":"nfs_diag_storePool_SessionMax","text":"

Maximum number of session objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.session_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_SessionMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_staterefhistoryalloc","title":"nfs_diag_storePool_StateRefHistoryAlloc","text":"

Current number of state reference callstack history objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.state_reference_history_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_StateRefHistoryAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_staterefhistorymax","title":"nfs_diag_storePool_StateRefHistoryMax","text":"

Maximum number of state reference callstack history objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.state_reference_history_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_StateRefHistoryMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_stringalloc","title":"nfs_diag_storePool_StringAlloc","text":"

Current number of string objects allocated.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.string_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_StringAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_stringmax","title":"nfs_diag_storePool_StringMax","text":"

Maximum number of string objects.

API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.string_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_StringMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nic_link_up_to_downs","title":"nic_link_up_to_downs","text":"

Number of link state changes from UP to DOWN.

API Endpoint Metric Template REST api/cluster/counter/tables/nic_common link_up_to_downUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common link_up_to_downsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_rx_alignment_errors","title":"nic_rx_alignment_errors","text":"

Alignment errors detected on received packets

API Endpoint Metric Template REST api/cluster/counter/tables/nic_common receive_alignment_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common rx_alignment_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_rx_bytes","title":"nic_rx_bytes","text":"

Bytes received

API Endpoint Metric Template REST api/cluster/counter/tables/nic_common receive_bytesUnit: b_per_secType: rateBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common rx_bytesUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_rx_crc_errors","title":"nic_rx_crc_errors","text":"

CRC errors detected on received packets

API Endpoint Metric Template REST api/cluster/counter/tables/nic_common receive_crc_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common rx_crc_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_rx_errors","title":"nic_rx_errors","text":"

Errors received

API Endpoint Metric Template REST api/cluster/counter/tables/nic_common receive_errorsUnit: b_per_secType: rateBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common rx_errorsUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_rx_length_errors","title":"nic_rx_length_errors","text":"

Length errors detected on received packets

API Endpoint Metric Template REST api/cluster/counter/tables/nic_common receive_length_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common rx_length_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_rx_total_errors","title":"nic_rx_total_errors","text":"

Total errors received

API Endpoint Metric Template REST api/cluster/counter/tables/nic_common receive_total_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common rx_total_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_tx_bytes","title":"nic_tx_bytes","text":"

Bytes sent

API Endpoint Metric Template REST api/cluster/counter/tables/nic_common transmit_bytesUnit: b_per_secType: rateBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common tx_bytesUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_tx_errors","title":"nic_tx_errors","text":"

Errors sent

API Endpoint Metric Template REST api/cluster/counter/tables/nic_common transmit_errorsUnit: b_per_secType: rateBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common tx_errorsUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_tx_hw_errors","title":"nic_tx_hw_errors","text":"

Transmit errors reported by hardware

API Endpoint Metric Template REST api/cluster/counter/tables/nic_common transmit_hw_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common tx_hw_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_tx_total_errors","title":"nic_tx_total_errors","text":"

Total errors sent

API Endpoint Metric Template REST api/cluster/counter/tables/nic_common transmit_total_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common tx_total_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#node_avg_processor_busy","title":"node_avg_processor_busy","text":"

Average processor utilization across all processors in the system

API Endpoint Metric Template REST api/cluster/counter/tables/system:node average_processor_busy_percentUnit: percentType: percentBase: cpu_elapsed_time conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node avg_processor_busyUnit: percentType: percentBase: cpu_elapsed_time conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_cifs_connections","title":"node_cifs_connections","text":"

Number of connections

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node connectionsUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node connectionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_established_sessions","title":"node_cifs_established_sessions","text":"

Number of established SMB and SMB2 sessions

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node established_sessionsUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node established_sessionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_latency","title":"node_cifs_latency","text":"

Average latency for CIFS operations

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node latencyUnit: microsecType: averageBase: latency_base conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_latencyUnit: microsecType: averageBase: cifs_latency_base conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_op_count","title":"node_cifs_op_count","text":"

Array of select CIFS operation counts

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node op_countUnit: noneType: rateBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_op_countUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_open_files","title":"node_cifs_open_files","text":"

Number of open files over SMB and SMB2

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node open_filesUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node open_filesUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_ops","title":"node_cifs_ops","text":"

Number of CIFS operations per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node cifs_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node cifs_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_cifs_read_latency","title":"node_cifs_read_latency","text":"

Average latency for CIFS read operations

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node average_read_latencyUnit: microsecType: averageBase: total_read_ops conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_read_latencyUnit: microsecType: averageBase: cifs_read_ops conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_read_ops","title":"node_cifs_read_ops","text":"

Total number of CIFS read operations

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node total_read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_total_ops","title":"node_cifs_total_ops","text":"

Total number of CIFS operations

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_write_latency","title":"node_cifs_write_latency","text":"

Average latency for CIFS write operations

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node average_write_latencyUnit: microsecType: averageBase: total_write_ops conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_write_latencyUnit: microsecType: averageBase: cifs_write_ops conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_write_ops","title":"node_cifs_write_ops","text":"

Total number of CIFS write operations

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node total_write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cpu_busy","title":"node_cpu_busy","text":"

System CPU resource utilization. Returns a computed percentage for the default CPU field. It is essentially a 'CPU usage summary' value that indicates how busy the system is based upon the most heavily utilized domain. The idea is to determine the amount of available CPU until either a single domain maxes out or all available idle CPU cycles are exhausted, whichever occurs first.
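As an illustrative PromQL sketch (the metric labels are assumptions), the five busiest nodes over the last five minutes could be listed with:

topk(5, avg_over_time(node_cpu_busy[5m]))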

API Endpoint Metric Template REST api/cluster/counter/tables/system:node cpu_busyUnit: percentType: percentBase: cpu_elapsed_time conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node cpu_busyUnit: percentType: percentBase: cpu_elapsed_time conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_cpu_busytime","title":"node_cpu_busytime","text":"

The time (in hundredths of a second) that the CPU has been doing useful work since the last boot

API Endpoint Metric Template ZAPI system-node-get-iter node-details-info.cpu-busytime conf/zapi/cdot/9.8.0/node.yaml REST api/private/cli/node cpu_busy_time conf/rest/9.12.0/node.yaml"},{"location":"ontap-metrics/#node_cpu_domain_busy","title":"node_cpu_domain_busy","text":"

Array of processor time, in percent, spent in various domains

API Endpoint Metric Template REST api/cluster/counter/tables/system:node domain_busyUnit: percentType: percentBase: cpu_elapsed_time conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node domain_busyUnit: percentType: percentBase: cpu_elapsed_time conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_cpu_elapsed_time","title":"node_cpu_elapsed_time","text":"

Elapsed time since boot

API Endpoint Metric Template REST api/cluster/counter/tables/system:node cpu_elapsed_timeUnit: microsecType: deltaBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node cpu_elapsed_timeUnit: noneType: delta,no-displayBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_disk_busy","title":"node_disk_busy","text":"

The utilization percent of the disk. node_disk_busy is disk_busy aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_capacity","title":"node_disk_capacity","text":"

Disk capacity in MB. node_disk_capacity is disk_capacity aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_cp_read_chain","title":"node_disk_cp_read_chain","text":"

Average number of blocks transferred in each consistency point read operation during a CP. node_disk_cp_read_chain is disk_cp_read_chain aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_cp_read_latency","title":"node_disk_cp_read_latency","text":"

Average latency per block in microseconds for consistency point read operations. node_disk_cp_read_latency is disk_cp_read_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_cp_reads","title":"node_disk_cp_reads","text":"

Number of disk read operations initiated each second for consistency point processing. node_disk_cp_reads is disk_cp_reads aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_data_read","title":"node_disk_data_read","text":"

Number of disk kilobytes (KB) read per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node disk_data_readUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node disk_data_readUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_disk_data_written","title":"node_disk_data_written","text":"

Number of disk kilobytes (KB) written per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node disk_data_writtenUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node disk_data_writtenUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_disk_io_pending","title":"node_disk_io_pending","text":"

Average number of I/Os issued to the disk for which we have not yet received the response. node_disk_io_pending is disk_io_pending aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_io_queued","title":"node_disk_io_queued","text":"

Number of I/Os queued to the disk but not yet issued. node_disk_io_queued is disk_io_queued aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_busy","title":"node_disk_max_busy","text":"

The utilization percent of the disk. node_disk_max_busy is the maximum of disk_busy for label node.
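Harvest computes this aggregation itself; a rough PromQL equivalent (a sketch, assuming disk_busy carries a node label) would be:

max by (node) (disk_busy)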

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_capacity","title":"node_disk_max_capacity","text":"

Disk capacity in MB. node_disk_max_capacity is the maximum of disk_capacity for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_cp_read_chain","title":"node_disk_max_cp_read_chain","text":"

Average number of blocks transferred in each consistency point read operation during a CP. node_disk_max_cp_read_chain is the maximum of disk_cp_read_chain for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_cp_read_latency","title":"node_disk_max_cp_read_latency","text":"

Average latency per block in microseconds for consistency point read operations. node_disk_max_cp_read_latency is the maximum of disk_cp_read_latency for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_cp_reads","title":"node_disk_max_cp_reads","text":"

Number of disk read operations initiated each second for consistency point processing. node_disk_max_cp_reads is the maximum of disk_cp_reads for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_disk_busy","title":"node_disk_max_disk_busy","text":"

The utilization percent of the disk. node_disk_max_disk_busy is the maximum of disk_busy for label node.

API Endpoint Metric Template ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_disk_capacity","title":"node_disk_max_disk_capacity","text":"

Disk capacity in MB. node_disk_max_disk_capacity is the maximum of disk_capacity for label node.

API Endpoint Metric Template ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_io_pending","title":"node_disk_max_io_pending","text":"

Average number of I/Os issued to the disk for which we have not yet received the response. node_disk_max_io_pending is the maximum of disk_io_pending for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_io_queued","title":"node_disk_max_io_queued","text":"

Number of I/Os queued to the disk but not yet issued. node_disk_max_io_queued is the maximum of disk_io_queued for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_total_data","title":"node_disk_max_total_data","text":"

Total throughput for user operations per second. node_disk_max_total_data is the maximum of disk_total_data for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_total_transfers","title":"node_disk_max_total_transfers","text":"

Total number of disk operations involving data transfer initiated per second. node_disk_max_total_transfers is the maximum of disk_total_transfers for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_read_blocks","title":"node_disk_max_user_read_blocks","text":"

Number of blocks transferred for user read operations per second. node_disk_max_user_read_blocks is the maximum of disk_user_read_blocks for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_read_chain","title":"node_disk_max_user_read_chain","text":"

Average number of blocks transferred in each user read operation. node_disk_max_user_read_chain is the maximum of disk_user_read_chain for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_read_latency","title":"node_disk_max_user_read_latency","text":"

Average latency per block in microseconds for user read operations. node_disk_max_user_read_latency is the maximum of disk_user_read_latency for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_reads","title":"node_disk_max_user_reads","text":"

Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. node_disk_max_user_reads is the maximum of disk_user_reads for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_write_blocks","title":"node_disk_max_user_write_blocks","text":"

Number of blocks transferred for user write operations per second. node_disk_max_user_write_blocks is the maximum of disk_user_write_blocks for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_write_chain","title":"node_disk_max_user_write_chain","text":"

Average number of blocks transferred in each user write operation. node_disk_max_user_write_chain is the maximum of disk_user_write_chain for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_write_latency","title":"node_disk_max_user_write_latency","text":"

Average latency per block in microseconds for user write operations. node_disk_max_user_write_latency is the maximum of disk_user_write_latency for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_writes","title":"node_disk_max_user_writes","text":"

Number of disk write operations initiated each second for storing data or metadata associated with user requests. node_disk_max_user_writes is the maximum of disk_user_writes for label node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_total_data","title":"node_disk_total_data","text":"

Total throughput for user operations per second. node_disk_total_data is disk_total_data aggregated by node.
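The non-max node_disk_* series are the per-disk counters aggregated per node. A quick cross-check via the Prometheus HTTP API, assuming the aggregation is a sum and Prometheus listens on localhost:9090 (both assumptions):

```bash
# Node-level aggregate published by Harvest
curl -s "http://localhost:9090/api/v1/query" \
  --data-urlencode 'query=node_disk_total_data'

# Ad-hoc sum of the per-disk series for comparison (assumes sum aggregation)
curl -s "http://localhost:9090/api/v1/query" \
  --data-urlencode 'query=sum by (node) (disk_total_data)'
```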

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_total_transfers","title":"node_disk_total_transfers","text":"

Total number of disk operations involving data transfer initiated per second. node_disk_total_transfers is disk_total_transfers aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_read_blocks","title":"node_disk_user_read_blocks","text":"

Number of blocks transferred for user read operations per second. node_disk_user_read_blocks is disk_user_read_blocks aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_read_chain","title":"node_disk_user_read_chain","text":"

Average number of blocks transferred in each user read operation. node_disk_user_read_chain is disk_user_read_chain aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_read_latency","title":"node_disk_user_read_latency","text":"

Average latency per block in microseconds for user read operations. node_disk_user_read_latency is disk_user_read_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_reads","title":"node_disk_user_reads","text":"

Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. node_disk_user_reads is disk_user_reads aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_write_blocks","title":"node_disk_user_write_blocks","text":"

Number of blocks transferred for user write operations per second. node_disk_user_write_blocks is disk_user_write_blocks aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_write_chain","title":"node_disk_user_write_chain","text":"

Average number of blocks transferred in each user write operation. node_disk_user_write_chain is disk_user_write_chain aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_write_latency","title":"node_disk_user_write_latency","text":"

Average latency per block in microseconds for user write operations. node_disk_user_write_latency is disk_user_write_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_writes","title":"node_disk_user_writes","text":"

Number of disk write operations initiated each second for storing data or metadata associated with user requests. node_disk_user_writes is disk_user_writes aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_failed_fan","title":"node_failed_fan","text":"

Number of chassis fans that are not operating within the recommended RPM range.

API Endpoint Metric Template REST api/cluster/nodes controller.failed_fan.count conf/rest/9.12.0/node.yaml ZAPI system-node-get-iter node-details-info.env-failed-fan-count conf/zapi/cdot/9.8.0/node.yaml"},{"location":"ontap-metrics/#node_failed_power","title":"node_failed_power","text":"

Number of failed power supply units.
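Both failure counters come from the api/cluster/nodes REST endpoint listed below. A sketch of fetching them directly with curl; CLUSTER, USER, and PASS are placeholders for your environment:

```bash
# Read the failed fan and failed power supply counts straight from ONTAP REST.
curl -ks -u "$USER:$PASS" \
  "https://$CLUSTER/api/cluster/nodes?fields=controller.failed_fan.count,controller.failed_power_supply.count"
```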

API Endpoint Metric Template REST api/cluster/nodes controller.failed_power_supply.count conf/rest/9.12.0/node.yaml ZAPI system-node-get-iter node-details-info.env-failed-power-supply-count conf/zapi/cdot/9.8.0/node.yaml"},{"location":"ontap-metrics/#node_fcp_data_recv","title":"node_fcp_data_recv","text":"

Number of FCP kilobytes (KB) received per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node fcp_data_receivedUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node fcp_data_recvUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_fcp_data_sent","title":"node_fcp_data_sent","text":"

Number of FCP kilobytes (KB) sent per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node fcp_data_sentUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node fcp_data_sentUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_fcp_ops","title":"node_fcp_ops","text":"

Number of FCP operations per second
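The system:node performance counters can also be inspected directly through the ONTAP REST counter tables (available on ONTAP 9.11.1 and later). A sketch with placeholder CLUSTER, USER, and PASS values:

```bash
# Dump the counter rows for the system:node table, which include fcp_ops.
curl -ks -u "$USER:$PASS" \
  "https://$CLUSTER/api/cluster/counter/tables/system:node/rows?fields=*"
```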

API Endpoint Metric Template REST api/cluster/counter/tables/system:node fcp_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node fcp_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_hdd_data_read","title":"node_hdd_data_read","text":"

Number of HDD kilobytes (KB) read per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node hdd_data_readUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node hdd_data_readUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_hdd_data_written","title":"node_hdd_data_written","text":"

Number of HDD kilobytes (KB) written per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node hdd_data_writtenUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node hdd_data_writtenUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_iscsi_ops","title":"node_iscsi_ops","text":"

Number of iSCSI operations per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node iscsi_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node iscsi_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_memory","title":"node_memory","text":"

Total memory in megabytes (MB)

API Endpoint Metric Template REST api/cluster/counter/tables/system:node memoryUnit: noneType: rawBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node memoryUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_net_data_recv","title":"node_net_data_recv","text":"

Number of network kilobytes (KB) received per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node network_data_receivedUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node net_data_recvUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_net_data_sent","title":"node_net_data_sent","text":"

Number of network kilobytes (KB) sent per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node network_data_sentUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node net_data_sentUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_nfs_access_avg_latency","title":"node_nfs_access_avg_latency","text":"

Average latency of Access procedure requests. The counter keeps track of the average response time of Access requests.
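The latency counters below are reported in microseconds. A small example of querying the metric and converting to milliseconds via the Prometheus HTTP API; the localhost:9090 address is an assumption:

```bash
# Access latency per node, converted from microseconds to milliseconds.
curl -s "http://localhost:9090/api/v1/query" \
  --data-urlencode 'query=node_nfs_access_avg_latency / 1000'
```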

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_access_total","title":"node_nfs_access_total","text":"

Total number of Access procedure requests. It is the total number of access success and access error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_backchannel_ctl_avg_latency","title":"node_nfs_backchannel_ctl_avg_latency","text":"

Average latency of BACKCHANNEL_CTL operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node backchannel_ctl.average_latencyUnit: microsecType: averageBase: backchannel_ctl.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node backchannel_ctl.average_latencyUnit: microsecType: averageBase: backchannel_ctl.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node backchannel_ctl_avg_latencyUnit: microsecType: average,no-zero-valuesBase: backchannel_ctl_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node backchannel_ctl_avg_latencyUnit: microsecType: average,no-zero-valuesBase: backchannel_ctl_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_backchannel_ctl_total","title":"node_nfs_backchannel_ctl_total","text":"

Total number of BACKCHANNEL_CTL operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node backchannel_ctl.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node backchannel_ctl.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node backchannel_ctl_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node backchannel_ctl_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_bind_conn_to_session_avg_latency","title":"node_nfs_bind_conn_to_session_avg_latency","text":"

Average latency of BIND_CONN_TO_SESSION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node bind_connections_to_session.average_latencyUnit: microsecType: averageBase: bind_connections_to_session.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node bind_conn_to_session.average_latencyUnit: microsecType: averageBase: bind_conn_to_session.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node bind_conn_to_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: bind_conn_to_session_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node bind_conn_to_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: bind_conn_to_session_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_bind_conn_to_session_total","title":"node_nfs_bind_conn_to_session_total","text":"

Total number of BIND_CONN_TO_SESSION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node bind_connections_to_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node bind_conn_to_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node bind_conn_to_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node bind_conn_to_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_close_avg_latency","title":"node_nfs_close_avg_latency","text":"

Average latency of CLOSE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node close.average_latencyUnit: microsecType: averageBase: close.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node close.average_latencyUnit: microsecType: averageBase: close.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node close.average_latencyUnit: microsecType: averageBase: close.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node close_avg_latencyUnit: microsecType: average,no-zero-valuesBase: close_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node close_avg_latencyUnit: microsecType: average,no-zero-valuesBase: close_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node close_avg_latencyUnit: microsecType: average,no-zero-valuesBase: close_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_close_total","title":"node_nfs_close_total","text":"

Total number of CLOSE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node close.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node close.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node close.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node close_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node close_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node close_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_commit_avg_latency","title":"node_nfs_commit_avg_latency","text":"

Average latency of Commit procedure requests. The counter keeps track of the average response time of Commit requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_commit_total","title":"node_nfs_commit_total","text":"

Total number of Commit procedure requests. It is the total number of Commit success and Commit error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_create_avg_latency","title":"node_nfs_create_avg_latency","text":"

Average latency of Create procedure requests. The counter keeps track of the average response time of Create requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_create_session_avg_latency","title":"node_nfs_create_session_avg_latency","text":"

Average latency of CREATE_SESSION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node create_session.average_latencyUnit: microsecType: averageBase: create_session.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node create_session.average_latencyUnit: microsecType: averageBase: create_session.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node create_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_session_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node create_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_session_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_create_session_total","title":"node_nfs_create_session_total","text":"

Total number of CREATE_SESSION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node create_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node create_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node create_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node create_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_create_total","title":"node_nfs_create_total","text":"

Total number of Create procedure requests. It is the total number of Create success and Create error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_delegpurge_avg_latency","title":"node_nfs_delegpurge_avg_latency","text":"

Average latency of DELEGPURGE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node delegpurge.average_latencyUnit: microsecType: averageBase: delegpurge.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node delegpurge.average_latencyUnit: microsecType: averageBase: delegpurge.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node delegpurge.average_latencyUnit: microsecType: averageBase: delegpurge.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node delegpurge_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegpurge_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node delegpurge_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegpurge_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node delegpurge_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegpurge_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_delegpurge_total","title":"node_nfs_delegpurge_total","text":"

Total number of DELEGPURGE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node delegpurge.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node delegpurge.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node delegpurge.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node delegpurge_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node delegpurge_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node delegpurge_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_delegreturn_avg_latency","title":"node_nfs_delegreturn_avg_latency","text":"

Average latency of DELEGRETURN operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node delegreturn.average_latencyUnit: microsecType: averageBase: delegreturn.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node delegreturn.average_latencyUnit: microsecType: averageBase: delegreturn.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node delegreturn.average_latencyUnit: microsecType: averageBase: delegreturn.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node delegreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegreturn_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node delegreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegreturn_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node delegreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegreturn_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_delegreturn_total","title":"node_nfs_delegreturn_total","text":"

Total number of DELEGRETURN operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node delegreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node delegreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node delegreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node delegreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node delegreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node delegreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_destroy_clientid_avg_latency","title":"node_nfs_destroy_clientid_avg_latency","text":"

Average latency of DESTROY_CLIENTID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node destroy_clientid.average_latencyUnit: microsecType: averageBase: destroy_clientid.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node destroy_clientid.average_latencyUnit: microsecType: averageBase: destroy_clientid.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node destroy_clientid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_clientid_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node destroy_clientid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_clientid_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_destroy_clientid_total","title":"node_nfs_destroy_clientid_total","text":"

Total number of DESTROY_CLIENTID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node destroy_clientid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node destroy_clientid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node destroy_clientid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node destroy_clientid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_destroy_session_avg_latency","title":"node_nfs_destroy_session_avg_latency","text":"

Average latency of DESTROY_SESSION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node destroy_session.average_latencyUnit: microsecType: averageBase: destroy_session.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node destroy_session.average_latencyUnit: microsecType: averageBase: destroy_session.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node destroy_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_session_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node destroy_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_session_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_destroy_session_total","title":"node_nfs_destroy_session_total","text":"

Total number of DESTROY_SESSION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node destroy_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node destroy_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node destroy_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node destroy_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_exchange_id_avg_latency","title":"node_nfs_exchange_id_avg_latency","text":"

Average latency of EXCHANGE_ID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node exchange_id.average_latencyUnit: microsecType: averageBase: exchange_id.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node exchange_id.average_latencyUnit: microsecType: averageBase: exchange_id.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node exchange_id_avg_latencyUnit: microsecType: average,no-zero-valuesBase: exchange_id_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node exchange_id_avg_latencyUnit: microsecType: average,no-zero-valuesBase: exchange_id_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_exchange_id_total","title":"node_nfs_exchange_id_total","text":"

Total number of EXCHANGE_ID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node exchange_id.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node exchange_id.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node exchange_id_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node exchange_id_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_free_stateid_avg_latency","title":"node_nfs_free_stateid_avg_latency","text":"

Average latency of FREE_STATEID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node free_stateid.average_latencyUnit: microsecType: averageBase: free_stateid.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node free_stateid.average_latencyUnit: microsecType: averageBase: free_stateid.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node free_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: free_stateid_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node free_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: free_stateid_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_free_stateid_total","title":"node_nfs_free_stateid_total","text":"

Total number of FREE_STATEID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node free_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node free_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node free_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node free_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_fsinfo_avg_latency","title":"node_nfs_fsinfo_avg_latency","text":"

Average latency of FSInfo procedure requests. The counter keeps track of the average response time of FSInfo requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node fsinfo.average_latencyUnit: microsecType: averageBase: fsinfo.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node fsinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: fsinfo_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_fsinfo_total","title":"node_nfs_fsinfo_total","text":"

Total number of FSInfo procedure requests. It is the total number of FSInfo success and FSInfo error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node fsinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node fsinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_fsstat_avg_latency","title":"node_nfs_fsstat_avg_latency","text":"

Average latency of FSStat procedure requests. The counter keeps track of the average response time of FSStat requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node fsstat.average_latencyUnit: microsecType: averageBase: fsstat.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node fsstat_avg_latencyUnit: microsecType: average,no-zero-valuesBase: fsstat_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_fsstat_total","title":"node_nfs_fsstat_total","text":"

Total number of FSStat procedure requests. It is the total number of FSStat success and FSStat error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node fsstat.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node fsstat_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_get_dir_delegation_avg_latency","title":"node_nfs_get_dir_delegation_avg_latency","text":"

Average latency of GET_DIR_DELEGATION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node get_dir_delegation.average_latencyUnit: microsecType: averageBase: get_dir_delegation.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node get_dir_delegation.average_latencyUnit: microsecType: averageBase: get_dir_delegation.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node get_dir_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: get_dir_delegation_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node get_dir_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: get_dir_delegation_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_get_dir_delegation_total","title":"node_nfs_get_dir_delegation_total","text":"

Total number of GET_DIR_DELEGATION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node get_dir_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node get_dir_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node get_dir_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node get_dir_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_getattr_avg_latency","title":"node_nfs_getattr_avg_latency","text":"

Average latency of GetAttr procedure requests. This counter keeps track of the average response time of GetAttr requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_getattr_total","title":"node_nfs_getattr_total","text":"

Total number of GetAttr procedure requests. It is the total number of GetAttr success and GetAttr error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_getdeviceinfo_avg_latency","title":"node_nfs_getdeviceinfo_avg_latency","text":"

Average latency of GETDEVICEINFO operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node getdeviceinfo.average_latencyUnit: microsecType: averageBase: getdeviceinfo.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getdeviceinfo.average_latencyUnit: microsecType: averageBase: getdeviceinfo.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getdeviceinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdeviceinfo_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getdeviceinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdeviceinfo_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_getdeviceinfo_total","title":"node_nfs_getdeviceinfo_total","text":"

Total number of GETDEVICEINFO operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node getdeviceinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getdeviceinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getdeviceinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getdeviceinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_getdevicelist_avg_latency","title":"node_nfs_getdevicelist_avg_latency","text":"

Average latency of GETDEVICELIST operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node getdevicelist.average_latencyUnit: microsecType: averageBase: getdevicelist.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getdevicelist.average_latencyUnit: microsecType: averageBase: getdevicelist.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getdevicelist_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdevicelist_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getdevicelist_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdevicelist_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_getdevicelist_total","title":"node_nfs_getdevicelist_total","text":"

Total number of GETDEVICELIST operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node getdevicelist.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getdevicelist.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getdevicelist_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getdevicelist_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_getfh_avg_latency","title":"node_nfs_getfh_avg_latency","text":"

Average latency of GETFH operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node getfh.average_latencyUnit: microsecType: averageBase: getfh.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getfh.average_latencyUnit: microsecType: averageBase: getfh.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node getfh.average_latencyUnit: microsecType: averageBase: getfh.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node getfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getfh_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_getfh_total","title":"node_nfs_getfh_total","text":"

Total number of GETFH operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node getfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node getfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node getfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_latency","title":"node_nfs_latency","text":"

Average latency of NFSv3 requests. This counter keeps track of the average response time of NFSv3 requests.
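As a quick triage query, the nodes with the highest overall NFS latency can be listed with topk; the Prometheus address is an assumption:

```bash
# Three nodes with the highest NFS latency right now.
curl -s "http://localhost:9090/api/v1/query" \
  --data-urlencode 'query=topk(3, node_nfs_latency)'
```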

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_layoutcommit_avg_latency","title":"node_nfs_layoutcommit_avg_latency","text":"

Average latency of LAYOUTCOMMIT operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node layoutcommit.average_latencyUnit: microsecType: averageBase: layoutcommit.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node layoutcommit.average_latencyUnit: microsecType: averageBase: layoutcommit.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node layoutcommit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutcommit_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node layoutcommit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutcommit_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_layoutcommit_total","title":"node_nfs_layoutcommit_total","text":"

Total number of LAYOUTCOMMIT operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node layoutcommit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node layoutcommit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node layoutcommit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node layoutcommit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_layoutget_avg_latency","title":"node_nfs_layoutget_avg_latency","text":"

Average latency of LAYOUTGET operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node layoutget.average_latencyUnit: microsecType: averageBase: layoutget.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node layoutget.average_latencyUnit: microsecType: averageBase: layoutget.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node layoutget_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutget_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node layoutget_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutget_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_layoutget_total","title":"node_nfs_layoutget_total","text":"

Total number of LAYOUTGET operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node layoutget.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node layoutget.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node layoutget_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node layoutget_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_layoutreturn_avg_latency","title":"node_nfs_layoutreturn_avg_latency","text":"

Average latency of LAYOUTRETURN operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node layoutreturn.average_latencyUnit: microsecType: averageBase: layoutreturn.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node layoutreturn.average_latencyUnit: microsecType: averageBase: layoutreturn.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node layoutreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutreturn_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node layoutreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutreturn_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_layoutreturn_total","title":"node_nfs_layoutreturn_total","text":"

Total number of LAYOUTRETURN operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node layoutreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node layoutreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node layoutreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node layoutreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_link_avg_latency","title":"node_nfs_link_avg_latency","text":"

Average latency of Link procedure requests. The counter keeps track of the average response time of Link requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_link_total","title":"node_nfs_link_total","text":"

Total number of Link procedure requests. It is the total number of Link success and Link error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lock_avg_latency","title":"node_nfs_lock_avg_latency","text":"

Average latency of LOCK operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node lock.average_latencyUnit: microsecType: averageBase: lock.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lock.average_latencyUnit: microsecType: averageBase: lock.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lock.average_latencyUnit: microsecType: averageBase: lock.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lock_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lock_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lock_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lock_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lock_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lock_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lock_total","title":"node_nfs_lock_total","text":"

Total number of LOCK operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node lock.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lock.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lock.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lock_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lock_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lock_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lockt_avg_latency","title":"node_nfs_lockt_avg_latency","text":"

Average latency of LOCKT operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node lockt.average_latencyUnit: microsecType: averageBase: lockt.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lockt.average_latencyUnit: microsecType: averageBase: lockt.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lockt.average_latencyUnit: microsecType: averageBase: lockt.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lockt_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lockt_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lockt_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lockt_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lockt_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lockt_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lockt_total","title":"node_nfs_lockt_total","text":"

Total number of LOCKT operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node lockt.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lockt.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lockt.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lockt_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lockt_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lockt_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_locku_avg_latency","title":"node_nfs_locku_avg_latency","text":"

Average latency of LOCKU operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node locku.average_latencyUnit: microsecType: averageBase: locku.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node locku.average_latencyUnit: microsecType: averageBase: locku.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node locku.average_latencyUnit: microsecType: averageBase: locku.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node locku_avg_latencyUnit: microsecType: average,no-zero-valuesBase: locku_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node locku_avg_latencyUnit: microsecType: average,no-zero-valuesBase: locku_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node locku_avg_latencyUnit: microsecType: average,no-zero-valuesBase: locku_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_locku_total","title":"node_nfs_locku_total","text":"

Total number of LOCKU operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node locku.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node locku.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node locku.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node locku_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node locku_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node locku_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lookup_avg_latency","title":"node_nfs_lookup_avg_latency","text":"

Average latency of LookUp procedure requests. This shows the average time it takes for the LookUp operation to reply to the request.
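If these metrics land in Prometheus via Harvest's exporter, a quick sanity check is to ask which nodes currently report the highest LookUp latency. The query below is a minimal PromQL sketch, not part of the shipped dashboards; it assumes the metric name in this section's title and converts the microsecond value listed in the table below to milliseconds.

topk(5, node_nfs_lookup_avg_latency / 1000)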

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lookup_total","title":"node_nfs_lookup_total","text":"

Total number of Lookup procedure requests. It is the total number of lookup success and lookup error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lookupp_avg_latency","title":"node_nfs_lookupp_avg_latency","text":"

Average latency of LOOKUPP operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node lookupp.average_latencyUnit: microsecType: averageBase: lookupp.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lookupp.average_latencyUnit: microsecType: averageBase: lookupp.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lookupp.average_latencyUnit: microsecType: averageBase: lookupp.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lookupp_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookupp_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lookupp_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookupp_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lookupp_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookupp_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lookupp_total","title":"node_nfs_lookupp_total","text":"

Total number of LOOKUPP operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node lookupp.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lookupp.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lookupp.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lookupp_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lookupp_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lookupp_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_mkdir_avg_latency","title":"node_nfs_mkdir_avg_latency","text":"

Average latency of MkDir procedure requests. The counter keeps track of the average response time of MkDir requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node mkdir.average_latencyUnit: microsecType: averageBase: mkdir.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node mkdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: mkdir_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_mkdir_total","title":"node_nfs_mkdir_total","text":"

Total number of MkDir procedure requests. It is the total number of MkDir success and MkDir error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node mkdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node mkdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_mknod_avg_latency","title":"node_nfs_mknod_avg_latency","text":"

Average latency of MkNod procedure requests. The counter keeps track of the average response time of MkNod requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node mknod.average_latencyUnit: microsecType: averageBase: mknod.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node mknod_avg_latencyUnit: microsecType: average,no-zero-valuesBase: mknod_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_mknod_total","title":"node_nfs_mknod_total","text":"

Total number of MkNod procedure requests. It is the total number of MkNod success and MkNod error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node mknod.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node mknod_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_null_avg_latency","title":"node_nfs_null_avg_latency","text":"

Average latency of Null procedure requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_null_total","title":"node_nfs_null_total","text":"

Total number of Null procedure requests. It is the total of null success and null error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_nverify_avg_latency","title":"node_nfs_nverify_avg_latency","text":"

Average latency of NVERIFY operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node nverify.average_latencyUnit: microsecType: averageBase: nverify.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node nverify.average_latencyUnit: microsecType: averageBase: nverify.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node nverify.average_latencyUnit: microsecType: averageBase: nverify.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node nverify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: nverify_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node nverify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: nverify_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node nverify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: nverify_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_nverify_total","title":"node_nfs_nverify_total","text":"

Total number of NVERIFY operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node nverify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node nverify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node nverify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node nverify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node nverify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node nverify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_open_avg_latency","title":"node_nfs_open_avg_latency","text":"

Average latency of OPEN operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node open.average_latencyUnit: microsecType: averageBase: open.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node open.average_latencyUnit: microsecType: averageBase: open.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node open.average_latencyUnit: microsecType: averageBase: open.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node open_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node open_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node open_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_open_confirm_avg_latency","title":"node_nfs_open_confirm_avg_latency","text":"

Average latency of OPEN_CONFIRM procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node open_confirm.average_latencyUnit: microsecType: averageBase: open_confirm.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node open_confirm_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_confirm_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_open_confirm_total","title":"node_nfs_open_confirm_total","text":"

Total number of OPEN_CONFIRM procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node open_confirm.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node open_confirm_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_open_downgrade_avg_latency","title":"node_nfs_open_downgrade_avg_latency","text":"

Average latency of OPEN_DOWNGRADE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node open_downgrade.average_latencyUnit: microsecType: averageBase: open_downgrade.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node open_downgrade.average_latencyUnit: microsecType: averageBase: open_downgrade.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node open_downgrade.average_latencyUnit: microsecType: averageBase: open_downgrade.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node open_downgrade_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_downgrade_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node open_downgrade_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_downgrade_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node open_downgrade_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_downgrade_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_open_downgrade_total","title":"node_nfs_open_downgrade_total","text":"

Total number of OPEN_DOWNGRADE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node open_downgrade.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node open_downgrade.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node open_downgrade.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node open_downgrade_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node open_downgrade_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node open_downgrade_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_open_total","title":"node_nfs_open_total","text":"

Total number of OPEN operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node open.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node open.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node open.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node open_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node open_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node open_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_openattr_avg_latency","title":"node_nfs_openattr_avg_latency","text":"

Average latency of OPENATTR operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node openattr.average_latencyUnit: microsecType: averageBase: openattr.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node openattr.average_latencyUnit: microsecType: averageBase: openattr.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node openattr.average_latencyUnit: microsecType: averageBase: openattr.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node openattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: openattr_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node openattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: openattr_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node openattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: openattr_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_openattr_total","title":"node_nfs_openattr_total","text":"

Total number of OPENATTR operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node openattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node openattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node openattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node openattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node openattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node openattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_ops","title":"node_nfs_ops","text":"

Number of NFS operations per second
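As a hedged PromQL sketch (assuming Harvest's Prometheus exporter and its usual cluster label on node-scoped metrics), the per-node values can be rolled up to a cluster-wide NFS operations rate:

sum by (cluster) (node_nfs_ops)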

API Endpoint Metric Template REST api/cluster/counter/tables/system:node nfs_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node nfs_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_nfs_pathconf_avg_latency","title":"node_nfs_pathconf_avg_latency","text":"

Average latency of PathConf procedure requests. The counter keeps track of the average response time of PathConf requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node pathconf.average_latencyUnit: microsecType: averageBase: pathconf.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node pathconf_avg_latencyUnit: microsecType: average,no-zero-valuesBase: pathconf_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_pathconf_total","title":"node_nfs_pathconf_total","text":"

Total number of PathConf procedure requests. It is the total number of PathConf success and PathConf error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node pathconf.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node pathconf_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_putfh_avg_latency","title":"node_nfs_putfh_avg_latency","text":"

Average latency of PUTFH operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node putfh.average_latencyUnit: noneType: deltaBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node putfh.average_latencyUnit: microsecType: averageBase: putfh.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node putfh.average_latencyUnit: microsecType: averageBase: putfh.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node putfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node putfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node putfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putfh_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_putfh_total","title":"node_nfs_putfh_total","text":"

Total number of PUTFH operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node putfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node putfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node putfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node putfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node putfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node putfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_putpubfh_avg_latency","title":"node_nfs_putpubfh_avg_latency","text":"

Average latency of PUTPUBFH operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node putpubfh.average_latencyUnit: microsecType: averageBase: putpubfh.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node putpubfh.average_latencyUnit: microsecType: averageBase: putpubfh.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node putpubfh.average_latencyUnit: microsecType: averageBase: putpubfh.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node putpubfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putpubfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node putpubfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putpubfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node putpubfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putpubfh_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_putpubfh_total","title":"node_nfs_putpubfh_total","text":"

Total number of PUTPUBFH operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node putpubfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node putpubfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node putpubfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node putpubfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node putpubfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node putpubfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_putrootfh_avg_latency","title":"node_nfs_putrootfh_avg_latency","text":"

Average latency of PUTROOTFH operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node putrootfh.average_latencyUnit: microsecType: averageBase: putrootfh.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node putrootfh.average_latencyUnit: microsecType: averageBase: putrootfh.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node putrootfh.average_latencyUnit: microsecType: averageBase: putrootfh.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node putrootfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putrootfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node putrootfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putrootfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node putrootfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putrootfh_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_putrootfh_total","title":"node_nfs_putrootfh_total","text":"

Total number of PUTROOTFH operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node putrootfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node putrootfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node putrootfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node putrootfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node putrootfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node putrootfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_read_avg_latency","title":"node_nfs_read_avg_latency","text":"

Average latency of Read procedure requests. The counter keeps track of the average response time of Read requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_read_ops","title":"node_nfs_read_ops","text":"

Total observed NFSv3 read operations per second.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node nfsv3_read_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_read_symlink_avg_latency","title":"node_nfs_read_symlink_avg_latency","text":"

Average latency of ReadSymLink procedure requests. The counter keeps track of the average response time of ReadSymLink requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node read_symlink.average_latencyUnit: microsecType: averageBase: read_symlink.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node read_symlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_symlink_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_read_symlink_total","title":"node_nfs_read_symlink_total","text":"

Total number of ReadSymLink procedure requests. It is the total number of read symlink success and read symlink error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node read_symlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node read_symlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_read_throughput","title":"node_nfs_read_throughput","text":"

Rate of NFS read data transfers per second.
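A minimal PromQL sketch, assuming the exported value is bytes per second (the table below lists it only as a per_sec rate) and that Harvest's usual cluster label is present; it sums read throughput across nodes and converts to MiB/s:

sum by (cluster) (node_nfs_read_throughput) / 1024 / 1024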

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node total.read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node total.read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node total.read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node nfsv3_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node nfs41_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node nfs42_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node nfs4_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_read_total","title":"node_nfs_read_total","text":"

Total number of Read procedure requests. It is the total number of read success and read error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_readdir_avg_latency","title":"node_nfs_readdir_avg_latency","text":"

Average latency of ReadDir procedure requests. The counter keeps track of the average response time of ReadDir requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_readdir_total","title":"node_nfs_readdir_total","text":"

Total number of ReadDir procedure requests. It is the total number of ReadDir success and ReadDir error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_readdirplus_avg_latency","title":"node_nfs_readdirplus_avg_latency","text":"

Average latency of ReadDirPlus procedure requests. The counter keeps track of the average response time of ReadDirPlus requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node readdirplus.average_latencyUnit: microsecType: averageBase: readdirplus.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node readdirplus_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdirplus_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_readdirplus_total","title":"node_nfs_readdirplus_total","text":"

Total number of ReadDirPlus procedure requests. It is the total number of ReadDirPlus success and ReadDirPlus error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node readdirplus.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node readdirplus_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_readlink_avg_latency","title":"node_nfs_readlink_avg_latency","text":"

Average latency of READLINK operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node readlink.average_latencyUnit: microsecType: averageBase: readlink.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node readlink.average_latencyUnit: microsecType: averageBase: readlink.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node readlink.average_latencyUnit: microsecType: averageBase: readlink.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node readlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readlink_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node readlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readlink_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node readlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readlink_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_readlink_total","title":"node_nfs_readlink_total","text":"

Total number of READLINK operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node readlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node readlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node readlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node readlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node readlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node readlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_reclaim_complete_avg_latency","title":"node_nfs_reclaim_complete_avg_latency","text":"

Average latency of RECLAIM_COMPLETE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node reclaim_complete.average_latencyUnit: microsecType: averageBase: reclaim_complete.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node reclaim_complete.average_latencyUnit: microsecType: averageBase: reclaim_complete.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node reclaim_complete_avg_latencyUnit: microsecType: average,no-zero-valuesBase: reclaim_complete_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node reclaim_complete_avg_latencyUnit: microsecType: average,no-zero-valuesBase: reclaim_complete_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_reclaim_complete_total","title":"node_nfs_reclaim_complete_total","text":"

Total number of RECLAIM_COMPLETE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node reclaim_complete.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node reclaim_complete.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node reclaim_complete_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node reclaim_complete_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_release_lock_owner_avg_latency","title":"node_nfs_release_lock_owner_avg_latency","text":"

Average latency of RELEASE_LOCKOWNER procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node release_lock_owner.average_latencyUnit: microsecType: averageBase: release_lock_owner.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node release_lock_owner_avg_latencyUnit: microsecType: average,no-zero-valuesBase: release_lock_owner_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_release_lock_owner_total","title":"node_nfs_release_lock_owner_total","text":"

Total number of RELEASE_LOCKOWNER procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node release_lock_owner.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node release_lock_owner_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_remove_avg_latency","title":"node_nfs_remove_avg_latency","text":"

Average latency of Remove procedure requests. The counter keeps track of the average response time of Remove requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_remove_total","title":"node_nfs_remove_total","text":"

Total number of Remove procedure requests. It is the total number of Remove success and Remove error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_rename_avg_latency","title":"node_nfs_rename_avg_latency","text":"

Average latency of Rename procedure requests. The counter keeps track of the average response time of Rename requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_rename_total","title":"node_nfs_rename_total","text":"

Total number of Rename procedure requests. It is the total number of Rename success and Rename error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_renew_avg_latency","title":"node_nfs_renew_avg_latency","text":"

Average latency of RENEW procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node renew.average_latencyUnit: microsecType: averageBase: renew.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node renew_avg_latencyUnit: microsecType: average,no-zero-valuesBase: renew_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_renew_total","title":"node_nfs_renew_total","text":"

Total number of RENEW procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node renew.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node renew_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_restorefh_avg_latency","title":"node_nfs_restorefh_avg_latency","text":"

Average latency of RESTOREFH operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node restorefh.average_latencyUnit: microsecType: averageBase: restorefh.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node restorefh.average_latencyUnit: microsecType: averageBase: restorefh.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node restorefh.average_latencyUnit: microsecType: averageBase: restorefh.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node restorefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: restorefh_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node restorefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: restorefh_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node restorefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: restorefh_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_restorefh_total","title":"node_nfs_restorefh_total","text":"

Total number of RESTOREFH operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node restorefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node restorefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node restorefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node restorefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node restorefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node restorefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_rmdir_avg_latency","title":"node_nfs_rmdir_avg_latency","text":"

Average latency of RmDir procedure requests. The counter keeps track of the average response time of RmDir requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node rmdir.average_latencyUnit: microsecType: averageBase: rmdir.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node rmdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rmdir_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_rmdir_total","title":"node_nfs_rmdir_total","text":"

Total number of RmDir procedure requests. It is the total number of RmDir success and RmDir error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node rmdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node rmdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_savefh_avg_latency","title":"node_nfs_savefh_avg_latency","text":"

Average latency of SAVEFH operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node savefh.average_latencyUnit: microsecType: averageBase: savefh.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node savefh.average_latencyUnit: microsecType: averageBase: savefh.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node savefh.average_latencyUnit: microsecType: averageBase: savefh.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node savefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: savefh_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node savefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: savefh_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node savefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: savefh_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_savefh_total","title":"node_nfs_savefh_total","text":"

Total number of SAVEFH operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node savefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node savefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node savefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node savefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node savefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node savefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_secinfo_avg_latency","title":"node_nfs_secinfo_avg_latency","text":"

Average latency of SECINFO operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node secinfo.average_latencyUnit: microsecType: averageBase: secinfo.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node secinfo.average_latencyUnit: microsecType: averageBase: secinfo.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node secinfo.average_latencyUnit: microsecType: averageBase: secinfo.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node secinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node secinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node secinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_secinfo_no_name_avg_latency","title":"node_nfs_secinfo_no_name_avg_latency","text":"

Average latency of SECINFO_NO_NAME operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node secinfo_no_name.average_latencyUnit: microsecType: averageBase: secinfo_no_name.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node secinfo_no_name.average_latencyUnit: microsecType: averageBase: secinfo_no_name.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node secinfo_no_name_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_no_name_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node secinfo_no_name_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_no_name_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_secinfo_no_name_total","title":"node_nfs_secinfo_no_name_total","text":"

Total number of SECINFO_NO_NAME operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node secinfo_no_name.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node secinfo_no_name.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node secinfo_no_name_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node secinfo_no_name_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_secinfo_total","title":"node_nfs_secinfo_total","text":"

Total number of SECINFO operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node secinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node secinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node secinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node secinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node secinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node secinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_sequence_avg_latency","title":"node_nfs_sequence_avg_latency","text":"

Average latency of SEQUENCE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node sequence.average_latencyUnit: microsecType: averageBase: sequence.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node sequence.average_latencyUnit: microsecType: averageBase: sequence.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node sequence_avg_latencyUnit: microsecType: average,no-zero-valuesBase: sequence_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node sequence_avg_latencyUnit: microsecType: average,no-zero-valuesBase: sequence_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_sequence_total","title":"node_nfs_sequence_total","text":"

Total number of SEQUENCE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node sequence.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node sequence.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node sequence_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node sequence_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_set_ssv_avg_latency","title":"node_nfs_set_ssv_avg_latency","text":"

Average latency of SET_SSV operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node set_ssv.average_latencyUnit: microsecType: averageBase: set_ssv.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node set_ssv.average_latencyUnit: microsecType: averageBase: set_ssv.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node set_ssv_avg_latencyUnit: microsecType: average,no-zero-valuesBase: set_ssv_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node set_ssv_avg_latencyUnit: microsecType: average,no-zero-valuesBase: set_ssv_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_set_ssv_total","title":"node_nfs_set_ssv_total","text":"

Total number of SET_SSV operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node set_ssv.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node set_ssv.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node set_ssv_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node set_ssv_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_setattr_avg_latency","title":"node_nfs_setattr_avg_latency","text":"

Average latency of SetAttr procedure requests. The counter keeps track of the average response time of SetAttr requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_setattr_total","title":"node_nfs_setattr_total","text":"

Total number of Setattr procedure requests. It is the total number of Setattr success and Setattr error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_setclientid_avg_latency","title":"node_nfs_setclientid_avg_latency","text":"

Average latency of SETCLIENTID procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node setclientid.average_latencyUnit: microsecType: averageBase: setclientid.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node setclientid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setclientid_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_setclientid_confirm_avg_latency","title":"node_nfs_setclientid_confirm_avg_latency","text":"

Average latency of SETCLIENTID_CONFIRM procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node setclientid_confirm.average_latencyUnit: microsecType: averageBase: setclientid_confirm.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node setclientid_confirm_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setclientid_confirm_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_setclientid_confirm_total","title":"node_nfs_setclientid_confirm_total","text":"

Total number of SETCLIENTID_CONFIRM procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node setclientid_confirm.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node setclientid_confirm_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_setclientid_total","title":"node_nfs_setclientid_total","text":"

Total number of SETCLIENTID procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node setclientid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node setclientid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_symlink_avg_latency","title":"node_nfs_symlink_avg_latency","text":"

Average latency of SymLink procedure requests. The counter keeps track of the average response time of SymLink requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node symlink.average_latencyUnit: microsecType: averageBase: symlink.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node symlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: symlink_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_symlink_total","title":"node_nfs_symlink_total","text":"

Total number of SymLink procedure requests. It is the total number of SymLink success and SymLink error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node symlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node symlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_test_stateid_avg_latency","title":"node_nfs_test_stateid_avg_latency","text":"

Average latency of TEST_STATEID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node test_stateid.average_latencyUnit: microsecType: averageBase: test_stateid.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node test_stateid.average_latencyUnit: microsecType: averageBase: test_stateid.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node test_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: test_stateid_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node test_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: test_stateid_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_test_stateid_total","title":"node_nfs_test_stateid_total","text":"

Total number of TEST_STATEID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node test_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node test_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node test_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node test_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_throughput","title":"node_nfs_throughput","text":"

Rate of NFSv3 data transfers per second.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node total.throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node total.throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node total.throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node nfsv3_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node nfs41_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node nfs42_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node nfs4_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_total_ops","title":"node_nfs_total_ops","text":"

Total number of NFSv3 procedure requests per second.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node nfsv3_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node total_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node total_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node total_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_verify_avg_latency","title":"node_nfs_verify_avg_latency","text":"

Average latency of VERIFY operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node verify.average_latencyUnit: microsecType: averageBase: verify.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node verify.average_latencyUnit: microsecType: averageBase: verify.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node verify.average_latencyUnit: microsecType: averageBase: verify.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node verify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: verify_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node verify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: verify_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node verify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: verify_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_verify_total","title":"node_nfs_verify_total","text":"

Total number of VERIFY operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node verify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node verify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node verify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node verify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node verify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node verify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_want_delegation_avg_latency","title":"node_nfs_want_delegation_avg_latency","text":"

Average latency of WANT_DELEGATION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node want_delegation.average_latencyUnit: microsecType: averageBase: want_delegation.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node want_delegation.average_latencyUnit: microsecType: averageBase: want_delegation.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node want_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: want_delegation_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node want_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: want_delegation_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_want_delegation_total","title":"node_nfs_want_delegation_total","text":"

Total number of WANT_DELEGATION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node want_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node want_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node want_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node want_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_write_avg_latency","title":"node_nfs_write_avg_latency","text":"

Average latency of Write procedure requests. The counter keeps track of the average response time of Write requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_write_ops","title":"node_nfs_write_ops","text":"

Total observed NFSv3 write operations per second.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node nfsv3_write_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_write_throughput","title":"node_nfs_write_throughput","text":"

Rate of NFSv3 write data transfers per second.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node total.write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node total.write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node total.write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node nfsv3_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node nfs41_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node nfs42_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node nfs4_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_write_total","title":"node_nfs_write_total","text":"

Total number of Write procedure requests. It is the total number of write success and write error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nvmf_data_recv","title":"node_nvmf_data_recv","text":"

NVMe/FC kilobytes (KB) received per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node nvme_fc_data_receivedUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node nvmf_data_recvUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_nvmf_data_sent","title":"node_nvmf_data_sent","text":"

NVMe/FC kilobytes (KB) sent per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node nvme_fc_data_sentUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node nvmf_data_sentUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_nvmf_ops","title":"node_nvmf_ops","text":"

NVMe/FC operations per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node nvme_fc_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node nvmf_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_ssd_data_read","title":"node_ssd_data_read","text":"

Number of SSD Disk kilobytes (KB) read per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node ssd_data_readUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node ssd_data_readUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_ssd_data_written","title":"node_ssd_data_written","text":"

Number of SSD Disk kilobytes (KB) written per second

API Endpoint Metric Template REST api/cluster/counter/tables/system:node ssd_data_writtenUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node ssd_data_writtenUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_total_data","title":"node_total_data","text":"

Total throughput in bytes

API Endpoint Metric Template REST api/cluster/counter/tables/system:node total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_total_latency","title":"node_total_latency","text":"

Average latency for all operations in the system in microseconds

API Endpoint Metric Template REST api/cluster/counter/tables/system:node total_latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node total_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_total_ops","title":"node_total_ops","text":"

Total number of operations per second
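The raw counters behind this metric come from the system:node counter table listed below. As an illustrative sketch only (CLUSTER-MGMT-LIF is a placeholder, and the query should be verified against your ONTAP version's REST documentation), the table can be inspected directly with curl:

curl -sk -u admin 'https://CLUSTER-MGMT-LIF/api/cluster/counter/tables/system:node/rows?fields=counters'\n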

API Endpoint Metric Template REST api/cluster/counter/tables/system:node total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_uptime","title":"node_uptime","text":"

The total time, in seconds, that the node has been up.
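As a hedged example of the REST endpoint listed below (CLUSTER-MGMT-LIF is a placeholder; adjust credentials and verify against your ONTAP version), the uptime field can be fetched directly with curl:

curl -sk -u admin 'https://CLUSTER-MGMT-LIF/api/cluster/nodes?fields=uptime'\n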

API Endpoint Metric Template REST api/cluster/nodes uptime conf/rest/9.12.0/node.yaml ZAPI system-node-get-iter node-details-info.node-uptime conf/zapi/cdot/9.8.0/node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_other_latency","title":"node_vol_cifs_other_latency","text":"

Average time for the WAFL filesystem to process other CIFS operations to the volume; not including CIFS protocol request processing or network communication time which will also be included in client observed CIFS request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.other_latencyUnit: microsecType: averageBase: cifs.other_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_other_latencyUnit: microsecType: averageBase: cifs_other_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_other_ops","title":"node_vol_cifs_other_ops","text":"

Number of other CIFS operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_read_data","title":"node_vol_cifs_read_data","text":"

Bytes read per second via CIFS

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_read_latency","title":"node_vol_cifs_read_latency","text":"

Average time for the WAFL filesystem to process CIFS read requests to the volume; not including CIFS protocol request processing or network communication time which will also be included in client observed CIFS request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.read_latencyUnit: microsecType: averageBase: cifs.read_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_read_latencyUnit: microsecType: averageBase: cifs_read_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_read_ops","title":"node_vol_cifs_read_ops","text":"

Number of CIFS read operations per second from the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_write_data","title":"node_vol_cifs_write_data","text":"

Bytes written per second via CIFS

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_write_latency","title":"node_vol_cifs_write_latency","text":"

Average time for the WAFL filesystem to process CIFS write requests to the volume; not including CIFS protocol request processing or network communication time which will also be included in client observed CIFS request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.write_latencyUnit: microsecType: averageBase: cifs.write_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_write_latencyUnit: microsecType: averageBase: cifs_write_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_write_ops","title":"node_vol_cifs_write_ops","text":"

Number of CIFS write operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_other_latency","title":"node_vol_fcp_other_latency","text":"

Average time for the WAFL filesystem to process other FCP protocol operations to the volume; not including FCP protocol request processing or network communication time which will also be included in client observed FCP protocol request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.other_latencyUnit: microsecType: averageBase: fcp.other_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_other_latencyUnit: microsecType: averageBase: fcp_other_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_other_ops","title":"node_vol_fcp_other_ops","text":"

Number of other block protocol operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_read_data","title":"node_vol_fcp_read_data","text":"

Bytes read per second via block protocol

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_read_latency","title":"node_vol_fcp_read_latency","text":"

Average time for the WAFL filesystem to process FCP protocol read operations to the volume; not including FCP protocol request processing or network communication time which will also be included in client observed FCP protocol request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.read_latencyUnit: microsecType: averageBase: fcp.read_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_read_latencyUnit: microsecType: averageBase: fcp_read_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_read_ops","title":"node_vol_fcp_read_ops","text":"

Number of block protocol read operations per second from the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_write_data","title":"node_vol_fcp_write_data","text":"

Bytes written per second via block protocol

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_write_latency","title":"node_vol_fcp_write_latency","text":"

Average time for the WAFL filesystem to process FCP protocol write operations to the volume; not including FCP protocol request processing or network communication time which will also be included in client observed FCP protocol request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.write_latencyUnit: microsecType: averageBase: fcp.write_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_write_latencyUnit: microsecType: averageBase: fcp_write_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_write_ops","title":"node_vol_fcp_write_ops","text":"

Number of block protocol write operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_other_latency","title":"node_vol_iscsi_other_latency","text":"

Average time for the WAFL filesystem to process other iSCSI protocol operations to the volume; not including iSCSI protocol request processing or network communication time which will also be included in client observed iSCSI protocol request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.other_latencyUnit: microsecType: averageBase: iscsi.other_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_other_latencyUnit: microsecType: averageBase: iscsi_other_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_other_ops","title":"node_vol_iscsi_other_ops","text":"

Number of other block protocol operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_read_data","title":"node_vol_iscsi_read_data","text":"

Bytes read per second via block protocol

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_read_latency","title":"node_vol_iscsi_read_latency","text":"

Average time for the WAFL filesystem to process iSCSI protocol read operations to the volume; not including iSCSI protocol request processing or network communication time which will also be included in client observed iSCSI protocol request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.read_latencyUnit: microsecType: averageBase: iscsi.read_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_read_latencyUnit: microsecType: averageBase: iscsi_read_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_read_ops","title":"node_vol_iscsi_read_ops","text":"

Number of block protocol read operations per second from the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_write_data","title":"node_vol_iscsi_write_data","text":"

Bytes written per second via block protocol

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_write_latency","title":"node_vol_iscsi_write_latency","text":"

Average time for the WAFL filesystem to process iSCSI protocol write operations to the volume; not including iSCSI protocol request processing or network communication time which will also be included in client observed iSCSI request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.write_latencyUnit: microsecType: averageBase: iscsi.write_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_write_latencyUnit: microsecType: averageBase: iscsi_write_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_write_ops","title":"node_vol_iscsi_write_ops","text":"

Number of block protocol write operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_other_latency","title":"node_vol_nfs_other_latency","text":"

Average time for the WAFL filesystem to process other NFS operations to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.other_latencyUnit: microsecType: averageBase: nfs.other_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_other_latencyUnit: microsecType: averageBase: nfs_other_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_other_ops","title":"node_vol_nfs_other_ops","text":"

Number of other NFS operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_read_data","title":"node_vol_nfs_read_data","text":"

Bytes read per second via NFS

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_read_latency","title":"node_vol_nfs_read_latency","text":"

Average time for the WAFL filesystem to process NFS protocol read requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.read_latencyUnit: microsecType: averageBase: nfs.read_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_read_latencyUnit: microsecType: averageBase: nfs_read_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_read_ops","title":"node_vol_nfs_read_ops","text":"

Number of NFS read operations per second from the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_write_data","title":"node_vol_nfs_write_data","text":"

Bytes written per second via NFS

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_write_latency","title":"node_vol_nfs_write_latency","text":"

Average time for the WAFL filesystem to process NFS protocol write requests to the volume; not including NFS protocol request processing or network communication time, which will also be included in client observed NFS request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.write_latencyUnit: microsecType: averageBase: nfs.write_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_write_latencyUnit: microsecType: averageBase: nfs_write_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_write_ops","title":"node_vol_nfs_write_ops","text":"

Number of NFS write operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_read_latency","title":"node_vol_read_latency","text":"

Average latency in microseconds for the WAFL filesystem to process read requests to the volume; not including request processing or network communication time

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node read_latencyUnit: microsecType: averageBase: total_read_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_write_latency","title":"node_vol_write_latency","text":"

Average latency in microseconds for the WAFL filesystem to process write requests to the volume; not including request processing or network communication time

API Endpoint Metric Template REST api/cluster/counter/tables/volume:node write_latencyUnit: microsecType: averageBase: total_write_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_volume_avg_latency","title":"node_volume_avg_latency","text":"

Average latency in microseconds for the WAFL filesystem to process all the operations on the volume; not including request processing or network communication time. node_volume_avg_latency is volume_avg_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume average_latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_access_latency","title":"node_volume_nfs_access_latency","text":"

Average time for the WAFL filesystem to process NFS protocol access requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_access_latency is volume_nfs_access_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.access_latencyUnit: microsecType: averageBase: nfs.access_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_access_latencyUnit: microsecType: averageBase: nfs_access_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_access_ops","title":"node_volume_nfs_access_ops","text":"

Number of NFS accesses per second to the volume. node_volume_nfs_access_ops is volume_nfs_access_ops aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.access_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_access_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_getattr_latency","title":"node_volume_nfs_getattr_latency","text":"

Average time for the WAFL filesystem to process NFS protocol getattr requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_getattr_latency is volume_nfs_getattr_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.getattr_latencyUnit: microsecType: averageBase: nfs.getattr_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_getattr_latencyUnit: microsecType: averageBase: nfs_getattr_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_getattr_ops","title":"node_volume_nfs_getattr_ops","text":"

Number of NFS getattr per second to the volume. node_volume_nfs_getattr_ops is volume_nfs_getattr_ops aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.getattr_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_getattr_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_lookup_latency","title":"node_volume_nfs_lookup_latency","text":"

Average time for the WAFL filesystem to process NFS protocol lookup requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_lookup_latency is volume_nfs_lookup_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.lookup_latencyUnit: microsecType: averageBase: nfs.lookup_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_lookup_latencyUnit: microsecType: averageBase: nfs_lookup_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_lookup_ops","title":"node_volume_nfs_lookup_ops","text":"

Number of NFS lookups per second to the volume. node_volume_nfs_lookup_ops is volume_nfs_lookup_ops aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.lookup_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_lookup_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_other_latency","title":"node_volume_nfs_other_latency","text":"

Average time for the WAFL filesystem to process other NFS operations to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_other_latency is volume_nfs_other_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.other_latencyUnit: microsecType: averageBase: nfs.other_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_other_latencyUnit: microsecType: averageBase: nfs_other_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_other_ops","title":"node_volume_nfs_other_ops","text":"

Number of other NFS operations per second to the volume. node_volume_nfs_other_ops is volume_nfs_other_ops aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_punch_hole_latency","title":"node_volume_nfs_punch_hole_latency","text":"

Average time for the WAFL filesystem to process NFS protocol hole-punch requests to the volume. node_volume_nfs_punch_hole_latency is volume_nfs_punch_hole_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.punch_hole_latencyUnit: microsecType: averageBase: nfs.punch_hole_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_punch_hole_latencyUnit: microsecType: averageBase: nfs_punch_hole_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_punch_hole_ops","title":"node_volume_nfs_punch_hole_ops","text":"

Number of NFS hole-punch requests per second to the volume. node_volume_nfs_punch_hole_ops is volume_nfs_punch_hole_ops aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.punch_hole_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_punch_hole_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_read_latency","title":"node_volume_nfs_read_latency","text":"

Average time for the WAFL filesystem to process NFS protocol read requests to the volume; not including NFS protocol request processing or network communication time, which will also be included in client observed NFS request latency. node_volume_nfs_read_latency is volume_nfs_read_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.read_latencyUnit: microsecType: averageBase: nfs.read_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_read_latencyUnit: microsecType: averageBase: nfs_read_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_read_ops","title":"node_volume_nfs_read_ops","text":"

Number of NFS read operations per second from the volume. node_volume_nfs_read_ops is volume_nfs_read_ops aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_setattr_latency","title":"node_volume_nfs_setattr_latency","text":"

Average time for the WAFL filesystem to process NFS protocol setattr requests to the volume. node_volume_nfs_setattr_latency is volume_nfs_setattr_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.setattr_latencyUnit: microsecType: averageBase: nfs.setattr_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_setattr_latencyUnit: microsecType: averageBase: nfs_setattr_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_setattr_ops","title":"node_volume_nfs_setattr_ops","text":"

Number of NFS setattr requests per second to the volume. node_volume_nfs_setattr_ops is volume_nfs_setattr_ops aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.setattr_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_setattr_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_total_ops","title":"node_volume_nfs_total_ops","text":"

Number of total NFS operations per second to the volume. node_volume_nfs_total_ops is volume_nfs_total_ops aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_write_latency","title":"node_volume_nfs_write_latency","text":"

Average time for the WAFL filesystem to process NFS protocol write requests to the volume; not including NFS protocol request processing or network communication time, which will also be included in client observed NFS request latency. node_volume_nfs_write_latency is volume_nfs_write_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.write_latencyUnit: microsecType: averageBase: nfs.write_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_write_latencyUnit: microsecType: averageBase: nfs_write_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_write_ops","title":"node_volume_nfs_write_ops","text":"

Number of NFS write operations per second to the volume. node_volume_nfs_write_ops is volume_nfs_write_ops aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_other_latency","title":"node_volume_other_latency","text":"

Average latency in microseconds for the WAFL filesystem to process other operations to the volume; not including request processing or network communication time. node_volume_other_latency is volume_other_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume other_latencyUnit: microsecType: averageBase: total_other_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_other_ops","title":"node_volume_other_ops","text":"

Number of other operations per second to the volume. node_volume_other_ops is volume_other_ops aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume total_other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_read_data","title":"node_volume_read_data","text":"

Bytes read per second. node_volume_read_data is volume_read_data aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume bytes_readUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_read_latency","title":"node_volume_read_latency","text":"

Average latency in microseconds for the WAFL filesystem to process read requests to the volume; not including request processing or network communication time. node_volume_read_latency is volume_read_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume read_latencyUnit: microsecType: averageBase: total_read_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_read_ops","title":"node_volume_read_ops","text":"

Number of read operations per second from the volume. node_volume_read_ops is volume_read_ops aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume total_read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_total_ops","title":"node_volume_total_ops","text":"

Number of operations per second serviced by the volume. node_volume_total_ops is volume_total_ops aggregated by node.
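A quick way to sanity-check the node aggregation is to compare the per-node sum of the volume-level metric with its node-level counterpart. This is a sketch only, assuming Prometheus listens on localhost:9090 and that volume_total_ops carries cluster and node labels:

curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=sum by (cluster, node) (volume_total_ops)'
curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=node_volume_total_ops'

The two queries should report roughly the same value for each node, modulo scrape timing.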

API Endpoint Metric Template REST api/cluster/counter/tables/volume total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_write_data","title":"node_volume_write_data","text":"

Bytes written per second. node_volume_write_data is volume_write_data aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume bytes_writtenUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_write_latency","title":"node_volume_write_latency","text":"

Average latency in microseconds for the WAFL filesystem to process write requests to the volume; not including request processing or network communication time. node_volume_write_latency is volume_write_latency aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume write_latencyUnit: microsecType: averageBase: total_write_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_write_ops","title":"node_volume_write_ops","text":"

Number of write operations per second to the volume. node_volume_write_ops is volume_write_ops aggregated by node.

API Endpoint Metric Template REST api/cluster/counter/tables/volume total_write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#nvme_lif_avg_latency","title":"nvme_lif_avg_latency","text":"

Average latency for NVMF operations

API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif average_latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_avg_other_latency","title":"nvme_lif_avg_other_latency","text":"

Average latency for operations other than read, write, compare or compare-and-write.

API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif average_other_latencyUnit: microsecType: averageBase: other_ops conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif avg_other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_avg_read_latency","title":"nvme_lif_avg_read_latency","text":"

Average latency for read operations

API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif average_read_latencyUnit: microsecType: averageBase: read_ops conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_avg_write_latency","title":"nvme_lif_avg_write_latency","text":"

Average latency for write operations

API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif average_write_latencyUnit: microsecType: averageBase: write_ops conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_other_ops","title":"nvme_lif_other_ops","text":"

Number of operations that are not read, write, compare or compare-and-write.

API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_read_data","title":"nvme_lif_read_data","text":"

Amount of data read from the storage system

API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_read_ops","title":"nvme_lif_read_ops","text":"

Number of read operations

API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_total_ops","title":"nvme_lif_total_ops","text":"

Total number of operations.

API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_write_data","title":"nvme_lif_write_data","text":"

Amount of data written to the storage system
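For a rough per-LIF NVMe-oF throughput figure, the read and write rates can be added together. A sketch only, assuming Prometheus listens on localhost:9090 and that both series carry matching instance labels so the addition lines up:

curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=(nvme_lif_read_data + nvme_lif_write_data) / 1024 / 1024'

Both counters use the b_per_sec unit, so dividing by 1024 twice gives an approximate MiB per second figure per LIF.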

API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_write_ops","title":"nvme_lif_write_ops","text":"

Number of write operations

API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_avg_latency","title":"nvmf_rdma_port_avg_latency","text":"

Average latency for NVMF operations

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_avg_other_latency","title":"nvmf_rdma_port_avg_other_latency","text":"

Average latency for operations other than read, write, compare or compare-and-write.

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port avg_other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_avg_read_latency","title":"nvmf_rdma_port_avg_read_latency","text":"

Average latency for read operations

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_avg_write_latency","title":"nvmf_rdma_port_avg_write_latency","text":"

Average latency for write operations

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_other_ops","title":"nvmf_rdma_port_other_ops","text":"

Number of operations that are not read, write, compare or compare-and-write.

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_read_data","title":"nvmf_rdma_port_read_data","text":"

Amount of data read from the storage system

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_read_ops","title":"nvmf_rdma_port_read_ops","text":"

Number of read operations

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_total_data","title":"nvmf_rdma_port_total_data","text":"

Amount of NVMF traffic to and from the storage system

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_total_ops","title":"nvmf_rdma_port_total_ops","text":"

Total number of operations.

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_write_data","title":"nvmf_rdma_port_write_data","text":"

Amount of data written to the storage system

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_write_ops","title":"nvmf_rdma_port_write_ops","text":"

Number of write operations

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_avg_latency","title":"nvmf_tcp_port_avg_latency","text":"

Average latency for NVMF operations

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_avg_other_latency","title":"nvmf_tcp_port_avg_other_latency","text":"

Average latency for operations other than read, write, compare or compare-and-write.

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port avg_other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_avg_read_latency","title":"nvmf_tcp_port_avg_read_latency","text":"

Average latency for read operations

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_avg_write_latency","title":"nvmf_tcp_port_avg_write_latency","text":"

Average latency for write operations

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_other_ops","title":"nvmf_tcp_port_other_ops","text":"

Number of operations that are not read, write, compare or compare-and-write.

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_read_data","title":"nvmf_tcp_port_read_data","text":"

Amount of data read from the storage system

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_read_ops","title":"nvmf_tcp_port_read_ops","text":"

Number of read operations

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_total_data","title":"nvmf_tcp_port_total_data","text":"

Amount of NVMF traffic to and from the storage system

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_total_ops","title":"nvmf_tcp_port_total_ops","text":"

Total number of operations.

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_write_data","title":"nvmf_tcp_port_write_data","text":"

Amount of data written to the storage system

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_write_ops","title":"nvmf_tcp_port_write_ops","text":"

Number of write operations

API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#ontaps3_logical_used_size","title":"ontaps3_logical_used_size","text":"

Specifies the bucket logical used size up to this point.

API Endpoint Metric Template REST api/protocols/s3/buckets logical_used_size conf/rest/9.7.0/ontap_s3.yaml"},{"location":"ontap-metrics/#ontaps3_size","title":"ontaps3_size","text":"

Specifies the bucket size in bytes; ranges from 80MB to 64TB.
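Since both the logical used size and the configured size are exported, the fullest buckets can be listed with a simple ratio. A sketch only, assuming Prometheus listens on localhost:9090 and that both series share the same label set:

curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=topk(5, ontaps3_logical_used_size / ontaps3_size)'

Values close to 1 indicate buckets approaching their configured size.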

API Endpoint Metric Template REST api/protocols/s3/buckets size conf/rest/9.7.0/ontap_s3.yaml"},{"location":"ontap-metrics/#ontaps3_svm_abort_multipart_upload_failed","title":"ontaps3_svm_abort_multipart_upload_failed","text":"

Number of failed Abort Multipart Upload operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server abort_multipart_upload_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_abort_multipart_upload_failed_client_close","title":"ontaps3_svm_abort_multipart_upload_failed_client_close","text":"

Number of times an Abort Multipart Upload operation failed because the client terminated the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server abort_multipart_upload_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_abort_multipart_upload_latency","title":"ontaps3_svm_abort_multipart_upload_latency","text":"

Average latency for Abort Multipart Upload operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server abort_multipart_upload_latencyUnit: microsecType: averageBase: abort_multipart_upload_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_abort_multipart_upload_rate","title":"ontaps3_svm_abort_multipart_upload_rate","text":"

Number of Abort Multipart Upload operations per second.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server abort_multipart_upload_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_abort_multipart_upload_total","title":"ontaps3_svm_abort_multipart_upload_total","text":"

Number of Abort Multipart Upload operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server abort_multipart_upload_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_allow_access","title":"ontaps3_svm_allow_access","text":"

Number of times access was allowed.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server allow_accessUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_anonymous_access","title":"ontaps3_svm_anonymous_access","text":"

Number of times anonymous access was allowed.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server anonymous_accessUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_anonymous_deny_access","title":"ontaps3_svm_anonymous_deny_access","text":"

Number of times anonymous access was denied.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server anonymous_deny_accessUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_authentication_failures","title":"ontaps3_svm_authentication_failures","text":"

Number of authentication failures.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server authentication_failuresUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_chunked_upload_reqs","title":"ontaps3_svm_chunked_upload_reqs","text":"

Total number of object store server chunked object upload requests

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server chunked_upload_reqsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_complete_multipart_upload_failed","title":"ontaps3_svm_complete_multipart_upload_failed","text":"

Number of failed Complete Multipart Upload operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server complete_multipart_upload_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_complete_multipart_upload_failed_client_close","title":"ontaps3_svm_complete_multipart_upload_failed_client_close","text":"

Number of times a Complete Multipart Upload operation failed because the client terminated the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server complete_multipart_upload_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_complete_multipart_upload_latency","title":"ontaps3_svm_complete_multipart_upload_latency","text":"

Average latency for Complete Multipart Upload operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server complete_multipart_upload_latencyUnit: microsecType: averageBase: complete_multipart_upload_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_complete_multipart_upload_rate","title":"ontaps3_svm_complete_multipart_upload_rate","text":"

Number of Complete Multipart Upload operations per second.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server complete_multipart_upload_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_complete_multipart_upload_total","title":"ontaps3_svm_complete_multipart_upload_total","text":"

Number of Complete Multipart Upload operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server complete_multipart_upload_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_connected_connections","title":"ontaps3_svm_connected_connections","text":"

Number of object store server connections currently established
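As a hypothetical example of using this gauge, the established connections can be summed per SVM; the svm label name and the Prometheus address (localhost:9090) are assumptions here:

curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=sum by (svm) (ontaps3_svm_connected_connections)'

Comparing the result against ontaps3_svm_max_connected_connections shows how close each SVM has come to its observed peak.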

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server connected_connectionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_connections","title":"ontaps3_svm_connections","text":"

Total number of object store server connections.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server connectionsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_create_bucket_failed","title":"ontaps3_svm_create_bucket_failed","text":"

Number of failed Create Bucket operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server create_bucket_failedUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_create_bucket_failed_client_close","title":"ontaps3_svm_create_bucket_failed_client_close","text":"

Number of times a Create Bucket operation failed because the client terminated the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server create_bucket_failed_client_closeUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_create_bucket_latency","title":"ontaps3_svm_create_bucket_latency","text":"

Average latency for Create Bucket operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server create_bucket_latencyUnit: microsecType: average,no-zero-valuesBase: create_bucket_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_create_bucket_rate","title":"ontaps3_svm_create_bucket_rate","text":"

Number of Create Bucket operations per second.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server create_bucket_rateUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_create_bucket_total","title":"ontaps3_svm_create_bucket_total","text":"

Number of Create Bucket operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server create_bucket_totalUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_default_deny_access","title":"ontaps3_svm_default_deny_access","text":"

Number of times access was denied by default and not through any policy statement.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server default_deny_accessUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_bucket_failed","title":"ontaps3_svm_delete_bucket_failed","text":"

Number of failed Delete Bucket operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_bucket_failedUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_bucket_failed_client_close","title":"ontaps3_svm_delete_bucket_failed_client_close","text":"

Number of times a Delete Bucket operation failed because the client terminated the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_bucket_failed_client_closeUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_bucket_latency","title":"ontaps3_svm_delete_bucket_latency","text":"

Average latency for Delete Bucket operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_bucket_latencyUnit: microsecType: average,no-zero-valuesBase: delete_bucket_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_bucket_rate","title":"ontaps3_svm_delete_bucket_rate","text":"

Number of Delete Bucket operations per second.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_bucket_rateUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_bucket_total","title":"ontaps3_svm_delete_bucket_total","text":"

Number of Delete Bucket operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_bucket_totalUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_failed","title":"ontaps3_svm_delete_object_failed","text":"

Number of failed DELETE object operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_failed_client_close","title":"ontaps3_svm_delete_object_failed_client_close","text":"

Number of times a DELETE object operation failed because the client closed the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_latency","title":"ontaps3_svm_delete_object_latency","text":"

Average latency for DELETE object operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_latencyUnit: microsecType: averageBase: delete_object_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_rate","title":"ontaps3_svm_delete_object_rate","text":"

Number of DELETE object operations per sec

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_tagging_failed","title":"ontaps3_svm_delete_object_tagging_failed","text":"

Number of failed DELETE object tagging operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_tagging_failedUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_tagging_failed_client_close","title":"ontaps3_svm_delete_object_tagging_failed_client_close","text":"

Number of times a DELETE object tagging operation failed because the client terminated the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_tagging_failed_client_closeUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_tagging_latency","title":"ontaps3_svm_delete_object_tagging_latency","text":"

Average latency for DELETE object tagging operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_tagging_latencyUnit: microsecType: average,no-zero-valuesBase: delete_object_tagging_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_tagging_rate","title":"ontaps3_svm_delete_object_tagging_rate","text":"

Number of DELETE object tagging operations per sec.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_tagging_rateUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_tagging_total","title":"ontaps3_svm_delete_object_tagging_total","text":"

Number of DELETE object tagging operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_tagging_totalUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_total","title":"ontaps3_svm_delete_object_total","text":"

Number of DELETE object operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_explicit_deny_access","title":"ontaps3_svm_explicit_deny_access","text":"

Number of times access was denied explicitly by a policy statement.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server explicit_deny_accessUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_bucket_acl_failed","title":"ontaps3_svm_get_bucket_acl_failed","text":"

Number of failed GET Bucket ACL operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_bucket_acl_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_bucket_acl_total","title":"ontaps3_svm_get_bucket_acl_total","text":"

Number of GET Bucket ACL operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_bucket_acl_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_bucket_versioning_failed","title":"ontaps3_svm_get_bucket_versioning_failed","text":"

Number of failed Get Bucket Versioning operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_bucket_versioning_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_bucket_versioning_total","title":"ontaps3_svm_get_bucket_versioning_total","text":"

Number of Get Bucket Versioning operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_bucket_versioning_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_data","title":"ontaps3_svm_get_data","text":"

Rate of GET object data transfers per second

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_acl_failed","title":"ontaps3_svm_get_object_acl_failed","text":"

Number of failed GET Object ACL operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_acl_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_acl_total","title":"ontaps3_svm_get_object_acl_total","text":"

Number of GET Object ACL operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_acl_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_failed","title":"ontaps3_svm_get_object_failed","text":"

Number of failed GET object operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_failed_client_close","title":"ontaps3_svm_get_object_failed_client_close","text":"

Number of times a GET object operation failed because the client closed the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_lastbyte_latency","title":"ontaps3_svm_get_object_lastbyte_latency","text":"

Average last-byte latency for GET object operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_lastbyte_latencyUnit: microsecType: averageBase: get_object_lastbyte_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_latency","title":"ontaps3_svm_get_object_latency","text":"

Average first-byte latency for GET object operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_latencyUnit: microsecType: averageBase: get_object_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_rate","title":"ontaps3_svm_get_object_rate","text":"

Number of GET object operations per sec

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_tagging_failed","title":"ontaps3_svm_get_object_tagging_failed","text":"

Number of failed GET object tagging operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_tagging_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_tagging_failed_client_close","title":"ontaps3_svm_get_object_tagging_failed_client_close","text":"

Number of times a GET object tagging operation failed because the client closed the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_tagging_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_tagging_latency","title":"ontaps3_svm_get_object_tagging_latency","text":"

Average latency for GET object tagging operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_tagging_latencyUnit: microsecType: averageBase: get_object_tagging_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_tagging_rate","title":"ontaps3_svm_get_object_tagging_rate","text":"

Number of GET object tagging operations per sec

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_tagging_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_tagging_total","title":"ontaps3_svm_get_object_tagging_total","text":"

Number of GET object tagging operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_tagging_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_total","title":"ontaps3_svm_get_object_total","text":"

Number of GET object operations
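Both the failed and total GET counters are delta type, so Harvest exports the per-poll change rather than an ever-increasing total. Under that assumption, a rough GET failure percentage over the last hour can be sketched by summing both sides of the ratio over the same window (Prometheus on localhost:9090 is again an assumption):

curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=100 * sum_over_time(ontaps3_svm_get_object_failed[1h]) / sum_over_time(ontaps3_svm_get_object_total[1h])'

Because numerator and denominator are summed over the same window, any double counting from overlapping scrape and poll intervals largely cancels out of the ratio.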

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_group_policy_evaluated","title":"ontaps3_svm_group_policy_evaluated","text":"

Number of times group policies were evaluated.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server group_policy_evaluatedUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_bucket_failed","title":"ontaps3_svm_head_bucket_failed","text":"

Number of failed HEAD bucket operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_bucket_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_bucket_failed_client_close","title":"ontaps3_svm_head_bucket_failed_client_close","text":"

Number of times a HEAD bucket operation failed because the client closed the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_bucket_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_bucket_latency","title":"ontaps3_svm_head_bucket_latency","text":"

Average latency for HEAD bucket operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_bucket_latencyUnit: microsecType: averageBase: head_bucket_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_bucket_rate","title":"ontaps3_svm_head_bucket_rate","text":"

Number of HEAD bucket operations per sec

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_bucket_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_bucket_total","title":"ontaps3_svm_head_bucket_total","text":"

Number of HEAD bucket operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_bucket_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_object_failed","title":"ontaps3_svm_head_object_failed","text":"

Number of failed HEAD Object operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_object_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_object_failed_client_close","title":"ontaps3_svm_head_object_failed_client_close","text":"

Number of times a HEAD object operation failed because the client closed the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_object_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_object_latency","title":"ontaps3_svm_head_object_latency","text":"

Average latency for HEAD object operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_object_latencyUnit: microsecType: averageBase: head_object_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_object_rate","title":"ontaps3_svm_head_object_rate","text":"

Number of HEAD Object operations per sec

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_object_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_object_total","title":"ontaps3_svm_head_object_total","text":"

Number of HEAD Object operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_object_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_initiate_multipart_upload_failed","title":"ontaps3_svm_initiate_multipart_upload_failed","text":"

Number of failed Initiate Multipart Upload operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server initiate_multipart_upload_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_initiate_multipart_upload_failed_client_close","title":"ontaps3_svm_initiate_multipart_upload_failed_client_close","text":"

Number of times an Initiate Multipart Upload operation failed because the client terminated the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server initiate_multipart_upload_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_initiate_multipart_upload_latency","title":"ontaps3_svm_initiate_multipart_upload_latency","text":"

Average latency for Initiate Multipart Upload operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server initiate_multipart_upload_latencyUnit: microsecType: averageBase: initiate_multipart_upload_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_initiate_multipart_upload_rate","title":"ontaps3_svm_initiate_multipart_upload_rate","text":"

Number of Initiate Multipart Upload operations per second.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server initiate_multipart_upload_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_initiate_multipart_upload_total","title":"ontaps3_svm_initiate_multipart_upload_total","text":"

Number of Initiate Multipart Upload operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server initiate_multipart_upload_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_input_flow_control_entry","title":"ontaps3_svm_input_flow_control_entry","text":"

Number of times input flow control was entered.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server input_flow_control_entryUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_input_flow_control_exit","title":"ontaps3_svm_input_flow_control_exit","text":"

Number of times input flow control was exited.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server input_flow_control_exitUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_buckets_failed","title":"ontaps3_svm_list_buckets_failed","text":"

Number of failed LIST Buckets operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_buckets_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_buckets_failed_client_close","title":"ontaps3_svm_list_buckets_failed_client_close","text":"

Number of times a LIST Buckets operation failed because the client closed the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_buckets_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_buckets_latency","title":"ontaps3_svm_list_buckets_latency","text":"

Average latency for LIST Buckets operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_buckets_latencyUnit: microsecType: averageBase: head_object_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_buckets_rate","title":"ontaps3_svm_list_buckets_rate","text":"

Number of LIST Buckets operations per sec

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_buckets_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_buckets_total","title":"ontaps3_svm_list_buckets_total","text":"

Number of LIST Buckets operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_buckets_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_object_versions_failed","title":"ontaps3_svm_list_object_versions_failed","text":"

Number of failed LIST object versions operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_object_versions_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_object_versions_failed_client_close","title":"ontaps3_svm_list_object_versions_failed_client_close","text":"

Number of times a LIST object versions operation failed because the client closed the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_object_versions_failed_client_closeUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_object_versions_latency","title":"ontaps3_svm_list_object_versions_latency","text":"

Average latency for LIST Object versions operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_object_versions_latencyUnit: microsecType: average,no-zero-valuesBase: list_object_versions_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_object_versions_rate","title":"ontaps3_svm_list_object_versions_rate","text":"

Number of LIST Object Versions operations per sec

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_object_versions_rateUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_object_versions_total","title":"ontaps3_svm_list_object_versions_total","text":"

Number of LIST Object Versions operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_object_versions_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_objects_failed","title":"ontaps3_svm_list_objects_failed","text":"

Number of failed LIST objects operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_objects_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_objects_failed_client_close","title":"ontaps3_svm_list_objects_failed_client_close","text":"

Number of times a LIST objects operation failed because the client closed the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_objects_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_objects_latency","title":"ontaps3_svm_list_objects_latency","text":"

Average latency for LIST Objects operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_objects_latencyUnit: microsecType: averageBase: list_objects_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_objects_rate","title":"ontaps3_svm_list_objects_rate","text":"

Number of LIST Objects operations per sec

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_objects_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_objects_total","title":"ontaps3_svm_list_objects_total","text":"

Number of LIST Objects operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_objects_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_uploads_failed","title":"ontaps3_svm_list_uploads_failed","text":"

Number of failed LIST Uploads operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_uploads_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_uploads_failed_client_close","title":"ontaps3_svm_list_uploads_failed_client_close","text":"

Number of times a LIST Uploads operation failed because the client closed the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_uploads_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_uploads_latency","title":"ontaps3_svm_list_uploads_latency","text":"

Average latency for LIST Uploads operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_uploads_latencyUnit: microsecType: averageBase: list_uploads_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_uploads_rate","title":"ontaps3_svm_list_uploads_rate","text":"

Number of LIST Uploads operations per sec

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_uploads_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_uploads_total","title":"ontaps3_svm_list_uploads_total","text":"

Number of LIST Uploads operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_uploads_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_max_cmds_per_connection","title":"ontaps3_svm_max_cmds_per_connection","text":"

Maximum number of commands pipelined at any instant on a connection.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server max_cmds_per_connectionUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_max_connected_connections","title":"ontaps3_svm_max_connected_connections","text":"

Maximum number of object store server connections established at one time

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server max_connected_connectionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_max_requests_outstanding","title":"ontaps3_svm_max_requests_outstanding","text":"

Maximum number of object store server requests in process at one time

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server max_requests_outstandingUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_multi_delete_reqs","title":"ontaps3_svm_multi_delete_reqs","text":"

Total number of object store server multiple object delete requests

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server multi_delete_reqsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_output_flow_control_entry","title":"ontaps3_svm_output_flow_control_entry","text":"

Number of times output flow control was entered.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server output_flow_control_entryUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_output_flow_control_exit","title":"ontaps3_svm_output_flow_control_exit","text":"

Number of times output flow control was exited.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server output_flow_control_exitUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_presigned_url_reqs","title":"ontaps3_svm_presigned_url_reqs","text":"

Total number of presigned object store server URL requests.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server presigned_url_reqsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_bucket_versioning_failed","title":"ontaps3_svm_put_bucket_versioning_failed","text":"

Number of failed Put Bucket Versioning operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_bucket_versioning_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_bucket_versioning_total","title":"ontaps3_svm_put_bucket_versioning_total","text":"

Number of Put Bucket Versioning operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_bucket_versioning_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_data","title":"ontaps3_svm_put_data","text":"

Rate of PUT object data transfers per second

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_failed","title":"ontaps3_svm_put_object_failed","text":"

Number of failed PUT object operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_failed_client_close","title":"ontaps3_svm_put_object_failed_client_close","text":"

Number of times a PUT object operation failed because the client closed the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_latency","title":"ontaps3_svm_put_object_latency","text":"

Average latency for PUT object operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_latencyUnit: microsecType: averageBase: put_object_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_rate","title":"ontaps3_svm_put_object_rate","text":"

Number of PUT object operations per sec

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_tagging_failed","title":"ontaps3_svm_put_object_tagging_failed","text":"

Number of failed PUT object tagging operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_tagging_failedUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_tagging_failed_client_close","title":"ontaps3_svm_put_object_tagging_failed_client_close","text":"

Number of times a PUT object tagging operation failed because the client terminated the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_tagging_failed_client_closeUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_tagging_latency","title":"ontaps3_svm_put_object_tagging_latency","text":"

Average latency for PUT object tagging operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_tagging_latencyUnit: microsecType: average,no-zero-valuesBase: put_object_tagging_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_tagging_rate","title":"ontaps3_svm_put_object_tagging_rate","text":"

Number of PUT object tagging operations per second.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_tagging_rateUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_tagging_total","title":"ontaps3_svm_put_object_tagging_total","text":"

Number of PUT object tagging operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_tagging_totalUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_total","title":"ontaps3_svm_put_object_total","text":"

Number of PUT object operations

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_request_parse_errors","title":"ontaps3_svm_request_parse_errors","text":"

Number of request parser errors due to malformed requests.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server request_parse_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_requests","title":"ontaps3_svm_requests","text":"

Total number of object store server requests

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server requestsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_requests_outstanding","title":"ontaps3_svm_requests_outstanding","text":"

Number of object store server requests in process

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server requests_outstandingUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_root_user_access","title":"ontaps3_svm_root_user_access","text":"

Number of times access was performed by the root user.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server root_user_accessUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_server_connection_close","title":"ontaps3_svm_server_connection_close","text":"

Number of connection closes triggered by server due to fatal errors.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server server_connection_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_signature_v2_reqs","title":"ontaps3_svm_signature_v2_reqs","text":"

Total number of object store server signature V2 requests

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server signature_v2_reqsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_signature_v4_reqs","title":"ontaps3_svm_signature_v4_reqs","text":"

Total number of object store server signature V4 requests

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server signature_v4_reqsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_tagging","title":"ontaps3_svm_tagging","text":"

Number of requests with tagging specified.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server taggingUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_upload_part_failed","title":"ontaps3_svm_upload_part_failed","text":"

Number of failed Upload Part operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server upload_part_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_upload_part_failed_client_close","title":"ontaps3_svm_upload_part_failed_client_close","text":"

Number of times an Upload Part operation failed because the client terminated the connection while the operation was still pending on the server.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server upload_part_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_upload_part_latency","title":"ontaps3_svm_upload_part_latency","text":"

Average latency for Upload Part operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server upload_part_latencyUnit: microsecType: averageBase: upload_part_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_upload_part_rate","title":"ontaps3_svm_upload_part_rate","text":"

Number of Upload Part operations per second.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server upload_part_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_upload_part_total","title":"ontaps3_svm_upload_part_total","text":"

Number of Upload Part operations.

API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server upload_part_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_used_percent","title":"ontaps3_used_percent","text":"API Endpoint Metric Template REST api/protocols/s3/buckets logical_used_size, size conf/rest/9.7.0/ontap_s3.yaml"},{"location":"ontap-metrics/#path_read_data","title":"path_read_data","text":"

The average read throughput in kilobytes per second read from the indicated target port by the controller.

API Endpoint Metric Template REST api/cluster/counter/tables/path read_dataUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path read_dataUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_read_iops","title":"path_read_iops","text":"

The number of I/O read operations sent from the initiator port to the indicated target port.

API Endpoint Metric Template REST api/cluster/counter/tables/path read_iopsUnit: per_secType: rateBase: conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path read_iopsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_read_latency","title":"path_read_latency","text":"

The average latency of I/O read operations sent from this controller to the indicated target port.

API Endpoint Metric Template REST api/cluster/counter/tables/path read_latencyUnit: microsecType: averageBase: read_iops conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path read_latencyUnit: microsecType: averageBase: read_iops conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_total_data","title":"path_total_data","text":"

The average throughput in kilobytes per second read and written from/to the indicated target port by the controller.

API Endpoint Metric Template REST api/cluster/counter/tables/path total_dataUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path total_dataUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_total_iops","title":"path_total_iops","text":"

The number of total read/write I/O operations sent from the initiator port to the indicated target port.

API Endpoint Metric Template REST api/cluster/counter/tables/path total_iopsUnit: per_secType: rateBase: conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path total_iopsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_write_data","title":"path_write_data","text":"

The average write throughput in kilobytes per second written to the indicated target port by the controller.

API Endpoint Metric Template REST api/cluster/counter/tables/path write_dataUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path write_dataUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_write_iops","title":"path_write_iops","text":"

The number of I/O write operations sent from the initiator port to the indicated target port.

API Endpoint Metric Template REST api/cluster/counter/tables/path write_iopsUnit: per_secType: rateBase: conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path write_iopsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_write_latency","title":"path_write_latency","text":"

The average latency of I/O write operations sent from this controller to the indicated target port.

API Endpoint Metric Template REST api/cluster/counter/tables/path write_latencyUnit: microsecType: averageBase: write_iops conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path write_latencyUnit: microsecType: averageBase: write_iops conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#plex_disk_busy","title":"plex_disk_busy","text":"

The utilization percent of the disk. plex_disk_busy is disk_busy aggregated by plex.
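
For example, a minimal PromQL sketch (label names depend on your environment) that surfaces the five busiest plexes:

topk(5, plex_disk_busy)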

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_capacity","title":"plex_disk_capacity","text":"

Disk capacity in MB. plex_disk_capacity is disk_capacity aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_cp_read_chain","title":"plex_disk_cp_read_chain","text":"

Average number of blocks transferred in each consistency point read operation during a CP. plex_disk_cp_read_chain is disk_cp_read_chain aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_cp_read_latency","title":"plex_disk_cp_read_latency","text":"

Average latency per block in microseconds for consistency point read operations. plex_disk_cp_read_latency is disk_cp_read_latency aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_cp_reads","title":"plex_disk_cp_reads","text":"

Number of disk read operations initiated each second for consistency point processing. plex_disk_cp_reads is disk_cp_reads aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_io_pending","title":"plex_disk_io_pending","text":"

Average number of I/Os issued to the disk for which we have not yet received the response. plex_disk_io_pending is disk_io_pending aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_io_queued","title":"plex_disk_io_queued","text":"

Number of I/Os queued to the disk but not yet issued. plex_disk_io_queued is disk_io_queued aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_total_data","title":"plex_disk_total_data","text":"

Total throughput for user operations per second. plex_disk_total_data is disk_total_data aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_total_transfers","title":"plex_disk_total_transfers","text":"

Total number of disk operations involving data transfer initiated per second. plex_disk_total_transfers is disk_total_transfers aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_read_blocks","title":"plex_disk_user_read_blocks","text":"

Number of blocks transferred for user read operations per second. plex_disk_user_read_blocks is disk_user_read_blocks aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_read_chain","title":"plex_disk_user_read_chain","text":"

Average number of blocks transferred in each user read operation. plex_disk_user_read_chain is disk_user_read_chain aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_read_latency","title":"plex_disk_user_read_latency","text":"

Average latency per block in microseconds for user read operations. plex_disk_user_read_latency is disk_user_read_latency aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_reads","title":"plex_disk_user_reads","text":"

Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. plex_disk_user_reads is disk_user_reads aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_write_blocks","title":"plex_disk_user_write_blocks","text":"

Number of blocks transferred for user write operations per second. plex_disk_user_write_blocks is disk_user_write_blocks aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_write_chain","title":"plex_disk_user_write_chain","text":"

Average number of blocks transferred in each user write operation. plex_disk_user_write_chain is disk_user_write_chain aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_write_latency","title":"plex_disk_user_write_latency","text":"

Average latency per block in microseconds for user write operations. plex_disk_user_write_latency is disk_user_write_latency aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_writes","title":"plex_disk_user_writes","text":"

Number of disk write operations initiated each second for storing data or metadata associated with user requests. plex_disk_user_writes is disk_user_writes aggregated by plex.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#qos_concurrency","title":"qos_concurrency","text":"

This is the average number of concurrent requests for the workload.

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume concurrencyUnit: noneType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume concurrencyUnit: noneType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_detail_resource_latency","title":"qos_detail_resource_latency","text":"

Average latency for the workload on Data ONTAP subsystems.

API Endpoint Metric Template REST api/cluster/counter/tables/qos_detail Harvest generatedUnit: microsecondsType: Base: conf/restperf/9.12.0/workload_detail.yaml ZAPI perf-object-get-instances workload_detail Harvest generatedUnit: microsecondsType: Base: conf/zapiperf/9.12.0/workload_detail.yaml"},{"location":"ontap-metrics/#qos_latency","title":"qos_latency","text":"

This is the average response time for requests that were initiated by the workload.
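
As the Type: average and Base: ops columns below indicate, the raw latency delta is divided by the ops delta, so the exported value is already microseconds per completed operation. A minimal PromQL sketch to surface the slowest workloads:

topk(10, qos_latency)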

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume latencyUnit: microsecType: averageBase: ops conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume latencyUnit: microsecType: average,no-zero-valuesBase: ops conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_ops","title":"qos_ops","text":"

This field is the workload's rate of operations that completed during the measurement interval; measured per second.

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume opsUnit: per_secType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_other_ops","title":"qos_other_ops","text":"

This is the rate of this workload's other operations that completed during the measurement interval.

API Endpoint Metric Template REST api/cluster/counter/tables/qos other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/workload.yaml ZAPI perf-object-get-instances workload_volume other_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_read_data","title":"qos_read_data","text":"

This is the amount of data read per second from the filer by the workload.

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume read_dataUnit: b_per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_read_io_type","title":"qos_read_io_type","text":"

This is the percentage of read requests served from various components (such as buffer cache, ext_cache, disk, etc.).

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume read_io_type_percentUnit: percentType: percentBase: read_io_type_base conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume read_io_typeUnit: percentType: percentBase: read_io_type_base conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_read_latency","title":"qos_read_latency","text":"

This is the average response time for read requests that were initiated by the workload.

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume read_latencyUnit: microsecType: averageBase: read_ops conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume read_latencyUnit: microsecType: average,no-zero-valuesBase: read_ops conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_read_ops","title":"qos_read_ops","text":"

This is the rate of this workload's read operations that completed during the measurement interval.

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume read_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_sequential_reads","title":"qos_sequential_reads","text":"

This is the percentage of reads, performed on behalf of the workload, that were sequential.

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume sequential_reads_percentUnit: percentType: percentBase: sequential_reads_base conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume sequential_readsUnit: percentType: percent,no-zero-valuesBase: sequential_reads_base conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_sequential_writes","title":"qos_sequential_writes","text":"

This is the percentage of writes, performed on behalf of the workload, that were sequential. This counter is only available on platforms with more than 4GB of NVRAM.

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume sequential_writes_percentUnit: percentType: percentBase: sequential_writes_base conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume sequential_writesUnit: percentType: percent,no-zero-valuesBase: sequential_writes_base conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_total_data","title":"qos_total_data","text":"

This is the total amount of data read/written per second from/to the filer by the workload.

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume total_dataUnit: b_per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_write_data","title":"qos_write_data","text":"

This is the amount of data written per second to the filer by the workload.

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume write_dataUnit: b_per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_write_latency","title":"qos_write_latency","text":"

This is the average response time for write requests that were initiated by the workload.

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume write_latencyUnit: microsecType: averageBase: write_ops conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume write_latencyUnit: microsecType: average,no-zero-valuesBase: write_ops conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_write_ops","title":"qos_write_ops","text":"

This is the rate of this workload's write operations that completed during the measurement interval; measured per second.

API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume write_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qtree_cifs_ops","title":"qtree_cifs_ops","text":"

Number of CIFS operations per second to the qtree

API Endpoint Metric Template REST api/cluster/counter/tables/qtree cifs_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/qtree.yaml ZAPI perf-object-get-instances qtree cifs_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#qtree_id","title":"qtree_id","text":"

The identifier for the qtree, unique within the qtree's volume.

API Endpoint Metric Template REST api/storage/qtrees id conf/rest/9.12.0/qtree.yaml"},{"location":"ontap-metrics/#qtree_internal_ops","title":"qtree_internal_ops","text":"

Number of internal operations per second to the qtree, generated by activities such as SnapMirror and backup

API Endpoint Metric Template REST api/cluster/counter/tables/qtree internal_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/qtree.yaml ZAPI perf-object-get-instances qtree internal_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#qtree_nfs_ops","title":"qtree_nfs_ops","text":"

Number of NFS operations per second to the qtree

API Endpoint Metric Template REST api/cluster/counter/tables/qtree nfs_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/qtree.yaml ZAPI perf-object-get-instances qtree nfs_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#qtree_total_ops","title":"qtree_total_ops","text":"

Summation of NFS ops, CIFS ops, CSS ops and internal ops
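
Because qtree_total_ops is a summation of the per-protocol counters, a PromQL sketch such as the one below (assuming matching labels across the qtree series) should track it closely; any remaining gap comes from CSS ops, which are not exported as a separate metric here:

qtree_nfs_ops + qtree_cifs_ops + qtree_internal_ops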

API Endpoint Metric Template REST api/cluster/counter/tables/qtree total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/qtree.yaml ZAPI perf-object-get-instances qtree total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_disk_limit","title":"quota_disk_limit","text":"

Maximum amount of disk space, in kilobytes, allowed for the quota target (hard disk space limit). The value is -1 if the limit is unlimited.

API Endpoint Metric Template REST api/storage/quota/reports space.hard_limit conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter disk-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_disk_used","title":"quota_disk_used","text":"

Current amount of disk space, in kilobytes, used by the quota target.

API Endpoint Metric Template REST api/storage/quota/reports space.used.total conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter disk-used conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_disk_used_pct_disk_limit","title":"quota_disk_used_pct_disk_limit","text":"

Current disk space used expressed as a percentage of hard disk limit.
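
The same percentage can also be derived from the raw counters; a PromQL sketch (assuming matching labels, and excluding quotas whose hard limit is unlimited, i.e. -1):

100 * quota_disk_used / (quota_disk_limit > 0)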

API Endpoint Metric Template REST api/storage/quota/reports space.used.hard_limit_percent conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter disk-used-pct-disk-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_disk_used_pct_soft_disk_limit","title":"quota_disk_used_pct_soft_disk_limit","text":"

Current disk space used expressed as a percentage of soft disk limit.

API Endpoint Metric Template REST api/storage/quota/reports space.used.soft_limit_percent conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter disk-used-pct-soft-disk-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_disk_used_pct_threshold","title":"quota_disk_used_pct_threshold","text":"

Current disk space used expressed as a percentage of threshold.

API Endpoint Metric Template ZAPI quota-report-iter disk-used-pct-threshold conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_file_limit","title":"quota_file_limit","text":"

Maximum number of files allowed for the quota target (hard files limit). The value is -1 if the limit is unlimited.

API Endpoint Metric Template REST api/storage/quota/reports files.hard_limit conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter file-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_files_used","title":"quota_files_used","text":"

Current number of files used by the quota target.

API Endpoint Metric Template REST api/storage/quota/reports files.used.total conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter files-used conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_files_used_pct_file_limit","title":"quota_files_used_pct_file_limit","text":"

Current number of files used expressed as a percentage of hard file limit.

API Endpoint Metric Template REST api/storage/quota/reports files.used.hard_limit_percent conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter files-used-pct-file-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_files_used_pct_soft_file_limit","title":"quota_files_used_pct_soft_file_limit","text":"

Current number of files used expressed as a percentage of soft file limit.

API Endpoint Metric Template REST api/storage/quota/reports files.used.soft_limit_percent conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter files-used-pct-soft-file-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_soft_disk_limit","title":"quota_soft_disk_limit","text":"

Soft disk space limit, in kilobytes, for the quota target. The value is -1 if the limit is unlimited.

API Endpoint Metric Template REST api/storage/quota/reports space.soft_limit conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter soft-disk-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_soft_file_limit","title":"quota_soft_file_limit","text":"

Soft file limit, in number of files, for the quota target. The value is -1 if the limit is unlimited.

API Endpoint Metric Template REST api/storage/quota/reports files.soft_limit conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter soft-file-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_threshold","title":"quota_threshold","text":"

Disk space threshold, in kilobytes, for the quota target. The value is -1 if the limit is unlimited.

API Endpoint Metric Template ZAPI quota-report-iter threshold conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#raid_disk_busy","title":"raid_disk_busy","text":"

The utilization percent of the disk. raid_disk_busy is disk_busy aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_capacity","title":"raid_disk_capacity","text":"

Disk capacity in MB. raid_disk_capacity is disk_capacity aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_cp_read_chain","title":"raid_disk_cp_read_chain","text":"

Average number of blocks transferred in each consistency point read operation during a CP. raid_disk_cp_read_chain is disk_cp_read_chain aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_cp_read_latency","title":"raid_disk_cp_read_latency","text":"

Average latency per block in microseconds for consistency point read operations. raid_disk_cp_read_latency is disk_cp_read_latency aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_cp_reads","title":"raid_disk_cp_reads","text":"

Number of disk read operations initiated each second for consistency point processing. raid_disk_cp_reads is disk_cp_reads aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_io_pending","title":"raid_disk_io_pending","text":"

Average number of I/Os issued to the disk for which we have not yet received the response. raid_disk_io_pending is disk_io_pending aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_io_queued","title":"raid_disk_io_queued","text":"

Number of I/Os queued to the disk but not yet issued. raid_disk_io_queued is disk_io_queued aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_total_data","title":"raid_disk_total_data","text":"

Total throughput for user operations per second. raid_disk_total_data is disk_total_data aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_total_transfers","title":"raid_disk_total_transfers","text":"

Total number of disk operations involving data transfer initiated per second. raid_disk_total_transfers is disk_total_transfers aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_read_blocks","title":"raid_disk_user_read_blocks","text":"

Number of blocks transferred for user read operations per second. raid_disk_user_read_blocks is disk_user_read_blocks aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_read_chain","title":"raid_disk_user_read_chain","text":"

Average number of blocks transferred in each user read operation. raid_disk_user_read_chain is disk_user_read_chain aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_read_latency","title":"raid_disk_user_read_latency","text":"

Average latency per block in microseconds for user read operations. raid_disk_user_read_latency is disk_user_read_latency aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_reads","title":"raid_disk_user_reads","text":"

Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. raid_disk_user_reads is disk_user_reads aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_write_blocks","title":"raid_disk_user_write_blocks","text":"

Number of blocks transferred for user write operations per second. raid_disk_user_write_blocks is disk_user_write_blocks aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_write_chain","title":"raid_disk_user_write_chain","text":"

Average number of blocks transferred in each user write operation. raid_disk_user_write_chain is disk_user_write_chain aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_write_latency","title":"raid_disk_user_write_latency","text":"

Average latency per block in microseconds for user write operations. raid_disk_user_write_latency is disk_user_write_latency aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_writes","title":"raid_disk_user_writes","text":"

Number of disk write operations initiated each second for storing data or metadata associated with user requests. raid_disk_user_writes is disk_user_writes aggregated by raid.

API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#security_audit_destination_port","title":"security_audit_destination_port","text":"

The destination port used to forward the message.

API Endpoint Metric Template ZAPI cluster-log-forward-get-iter cluster-log-forward-info.port conf/zapi/cdot/9.8.0/security_audit_dest.yaml"},{"location":"ontap-metrics/#security_certificate_expiry_time","title":"security_certificate_expiry_time","text":"

Certificate expiration time. Can be provided on POST when creating a self-signed certificate. The expiration time ranges from 1 day to 10 years.

API Endpoint Metric Template REST api/security/certificates expiry_time conf/rest/9.12.0/security_certificate.yaml ZAPI security-certificate-get-iter certificate-info.expiration-date conf/zapi/cdot/9.8.0/security_certificate.yaml"},{"location":"ontap-metrics/#security_ssh_max_instances","title":"security_ssh_max_instances","text":"

Maximum possible simultaneous connections.

API Endpoint Metric Template REST api/security/ssh max_instances conf/rest/9.12.0/security_ssh.yaml"},{"location":"ontap-metrics/#shelf_average_ambient_temperature","title":"shelf_average_ambient_temperature","text":"

Average temperature of all ambient sensors for shelf in Celsius.

API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_average_fan_speed","title":"shelf_average_fan_speed","text":"

Average fan speed for shelf in rpm.

API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_average_temperature","title":"shelf_average_temperature","text":"

Average temperature of all non-ambient sensors for shelf in Celsius.

API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_disk_count","title":"shelf_disk_count","text":"

Disk count in a shelf.

API Endpoint Metric Template REST api/storage/shelves disk_count conf/rest/9.12.0/shelf.yaml ZAPI storage-shelf-info-get-iter storage-shelf-info.disk-count conf/zapi/cdot/9.8.0/shelf.yaml"},{"location":"ontap-metrics/#shelf_max_fan_speed","title":"shelf_max_fan_speed","text":"

Maximum fan speed for shelf in rpm.

API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_max_temperature","title":"shelf_max_temperature","text":"

Maximum temperature of all non-ambient sensors for shelf in Celsius.

API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_min_ambient_temperature","title":"shelf_min_ambient_temperature","text":"

Minimum temperature of all ambient sensors for shelf in Celsius.

API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_min_fan_speed","title":"shelf_min_fan_speed","text":"

Minimum fan speed for shelf in rpm.

API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_min_temperature","title":"shelf_min_temperature","text":"

Minimum temperature of all non-ambient sensors for shelf in Celsius.

API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_power","title":"shelf_power","text":"

Power consumed by shelf in Watts.
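
A minimal PromQL sketch for the total power drawn by all monitored shelves, in Watts:

sum(shelf_power)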

API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#smb2_close_latency","title":"smb2_close_latency","text":"

Average latency for SMB2_COM_CLOSE operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 close_latencyUnit: microsecType: averageBase: close_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_close_latency_histogram","title":"smb2_close_latency_histogram","text":"

Latency histogram for SMB2_COM_CLOSE operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 close_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_close_ops","title":"smb2_close_ops","text":"

Number of SMB2_COM_CLOSE operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 close_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_create_latency","title":"smb2_create_latency","text":"

Average latency for SMB2_COM_CREATE operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 create_latencyUnit: microsecType: averageBase: create_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_create_latency_histogram","title":"smb2_create_latency_histogram","text":"

Latency histogram for SMB2_COM_CREATE operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 create_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_create_ops","title":"smb2_create_ops","text":"

Number of SMB2_COM_CREATE operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 create_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_lock_latency","title":"smb2_lock_latency","text":"

Average latency for SMB2_COM_LOCK operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 lock_latencyUnit: microsecType: averageBase: lock_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_lock_latency_histogram","title":"smb2_lock_latency_histogram","text":"

Latency histogram for SMB2_COM_LOCK operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 lock_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_lock_ops","title":"smb2_lock_ops","text":"

Number of SMB2_COM_LOCK operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 lock_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_negotiate_latency","title":"smb2_negotiate_latency","text":"

Average latency for SMB2_COM_NEGOTIATE operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 negotiate_latencyUnit: microsecType: averageBase: negotiate_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_negotiate_ops","title":"smb2_negotiate_ops","text":"

Number of SMB2_COM_NEGOTIATE operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 negotiate_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_oplock_break_latency","title":"smb2_oplock_break_latency","text":"

Average latency for SMB2_COM_OPLOCK_BREAK operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 oplock_break_latencyUnit: microsecType: averageBase: oplock_break_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_oplock_break_latency_histogram","title":"smb2_oplock_break_latency_histogram","text":"

Latency histogram for SMB2_COM_OPLOCK_BREAK operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 oplock_break_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_oplock_break_ops","title":"smb2_oplock_break_ops","text":"

Number of SMB2_COM_OPLOCK_BREAK operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 oplock_break_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_query_directory_latency","title":"smb2_query_directory_latency","text":"

Average latency for SMB2_COM_QUERY_DIRECTORY operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 query_directory_latencyUnit: microsecType: averageBase: query_directory_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_query_directory_latency_histogram","title":"smb2_query_directory_latency_histogram","text":"

Latency histogram for SMB2_COM_QUERY_DIRECTORY operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 query_directory_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_query_directory_ops","title":"smb2_query_directory_ops","text":"

Number of SMB2_COM_QUERY_DIRECTORY operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 query_directory_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_query_info_latency","title":"smb2_query_info_latency","text":"

Average latency for SMB2_COM_QUERY_INFO operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 query_info_latencyUnit: microsecType: averageBase: query_info_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_query_info_latency_histogram","title":"smb2_query_info_latency_histogram","text":"

Latency histogram for SMB2_COM_QUERY_INFO operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 query_info_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_query_info_ops","title":"smb2_query_info_ops","text":"

Number of SMB2_COM_QUERY_INFO operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 query_info_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_read_latency","title":"smb2_read_latency","text":"

Average latency for SMB2_COM_READ operations
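
Counters of Type: average, such as this one, are cumulative and only become meaningful when the delta of the latency counter is divided by the delta of its Base counter (read_ops here). A minimal sketch of that calculation, using invented sample values rather than Harvest's actual code:

```python
# Minimal sketch (not Harvest's implementation): an "average"-type counter is
# divided by the delta of its base counter to yield per-operation latency.
# The raw counter values below are invented for illustration.

def average_from_counters(latency_prev, latency_curr, base_prev, base_curr):
    """Return average latency (microsec per op) over one polling interval."""
    ops = base_curr - base_prev
    if ops <= 0:
        return 0.0  # no operations in the interval; avoid divide-by-zero
    return (latency_curr - latency_prev) / ops

# Two polls, 60 s apart (hypothetical raw values of read_latency and read_ops):
prev = {"read_latency": 1_000_000, "read_ops": 5_000}
curr = {"read_latency": 1_450_000, "read_ops": 6_500}

avg_us = average_from_counters(prev["read_latency"], curr["read_latency"],
                               prev["read_ops"], curr["read_ops"])
print(f"average read latency: {avg_us:.1f} microsec/op")  # 300.0
```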

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_read_ops","title":"smb2_read_ops","text":"

Number of SMB2_COM_READ operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_session_setup_latency","title":"smb2_session_setup_latency","text":"

Average latency for SMB2_COM_SESSION_SETUP operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 session_setup_latencyUnit: microsecType: averageBase: session_setup_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_session_setup_latency_histogram","title":"smb2_session_setup_latency_histogram","text":"

Latency histogram for SMB2_COM_SESSION_SETUP operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 session_setup_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_session_setup_ops","title":"smb2_session_setup_ops","text":"

Number of SMB2_COM_SESSION_SETUP operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 session_setup_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_set_info_latency","title":"smb2_set_info_latency","text":"

Average latency for SMB2_COM_SET_INFO operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 set_info_latencyUnit: microsecType: averageBase: set_info_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_set_info_latency_histogram","title":"smb2_set_info_latency_histogram","text":"

Latency histogram for SMB2_COM_SET_INFO operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 set_info_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_set_info_ops","title":"smb2_set_info_ops","text":"

Number of SMB2_COM_SET_INFO operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 set_info_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_tree_connect_latency","title":"smb2_tree_connect_latency","text":"

Average latency for SMB2_COM_TREE_CONNECT operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 tree_connect_latencyUnit: microsecType: averageBase: tree_connect_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_tree_connect_ops","title":"smb2_tree_connect_ops","text":"

Number of SMB2_COM_TREE_CONNECT operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 tree_connect_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_write_latency","title":"smb2_write_latency","text":"

Average latency for SMB2_COM_WRITE operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 write_latencyUnit: microsecType: averageBase: write_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_write_ops","title":"smb2_write_ops","text":"

Number of SMB2_COM_WRITE operations

API Endpoint Metric Template ZAPI perf-object-get-instances smb2 write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#snapmirror_break_failed_count","title":"snapmirror_break_failed_count","text":"

The number of failed SnapMirror break operations for the relationship

API Endpoint Metric Template REST api/private/cli/snapmirror break_failed_count conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.break-failed-count conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_break_successful_count","title":"snapmirror_break_successful_count","text":"

The number of successful SnapMirror break operations for the relationship

API Endpoint Metric Template REST api/private/cli/snapmirror break_successful_count conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.break-successful-count conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_lag_time","title":"snapmirror_lag_time","text":"

Amount of time since the last SnapMirror transfer in seconds
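
Because this counter is reported in seconds, it maps directly onto an RPO check. A small illustrative sketch; the one-hour threshold and the relationship names are arbitrary examples, not recommendations:

```python
# Illustrative only: flag relationships whose lag exceeds an example RPO threshold.
RPO_SECONDS = 3600  # arbitrary example threshold (one hour)

relationships = [
    {"destination": "svm1:vol1_dst", "lag_time": 540},    # invented sample values
    {"destination": "svm1:vol2_dst", "lag_time": 7260},
]

for rel in relationships:
    if rel["lag_time"] > RPO_SECONDS:
        hours = rel["lag_time"] / 3600
        print(f"{rel['destination']}: lag {hours:.1f} h exceeds RPO")
```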

API Endpoint Metric Template REST api/private/cli/snapmirror lag_time conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.lag-time conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_last_transfer_duration","title":"snapmirror_last_transfer_duration","text":"

Duration of the last SnapMirror transfer in seconds

API Endpoint Metric Template REST api/private/cli/snapmirror last_transfer_duration conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.last-transfer-duration conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_last_transfer_end_timestamp","title":"snapmirror_last_transfer_end_timestamp","text":"

The timestamp of the end of the last transfer

API Endpoint Metric Template REST api/private/cli/snapmirror last_transfer_end_timestamp conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.last-transfer-end-timestamp conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_last_transfer_size","title":"snapmirror_last_transfer_size","text":"

Size in kilobytes (1024 bytes) of the last transfer

API Endpoint Metric Template REST api/private/cli/snapmirror last_transfer_size conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.last-transfer-size conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_newest_snapshot_timestamp","title":"snapmirror_newest_snapshot_timestamp","text":"

The timestamp of the newest Snapshot copy on the destination volume

API Endpoint Metric Template REST api/private/cli/snapmirror newest_snapshot_timestamp conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.newest-snapshot-timestamp conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_resync_failed_count","title":"snapmirror_resync_failed_count","text":"

The number of failed SnapMirror resync operations for the relationship

API Endpoint Metric Template REST api/private/cli/snapmirror resync_failed_count conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.resync-failed-count conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_resync_successful_count","title":"snapmirror_resync_successful_count","text":"

The number of successful SnapMirror resync operations for the relationship

API Endpoint Metric Template REST api/private/cli/snapmirror resync_successful_count conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.resync-successful-count conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_total_transfer_bytes","title":"snapmirror_total_transfer_bytes","text":"

Cumulative bytes transferred for the relationship

API Endpoint Metric Template REST api/private/cli/snapmirror total_transfer_bytes conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.total-transfer-bytes conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_total_transfer_time_secs","title":"snapmirror_total_transfer_time_secs","text":"

Cumulative total transfer time in seconds for the relationship

API Endpoint Metric Template REST api/private/cli/snapmirror total_transfer_time_secs conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.total-transfer-time-secs conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_update_failed_count","title":"snapmirror_update_failed_count","text":"

The number of failed SnapMirror update operations for the relationship

API Endpoint Metric Template REST api/private/cli/snapmirror update_failed_count conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.update-failed-count conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_update_successful_count","title":"snapmirror_update_successful_count","text":"

The number of successful SnapMirror update operations for the relationship

API Endpoint Metric Template REST api/private/cli/snapmirror update_successful_count conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.update-successful-count conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapshot_policy_total_schedules","title":"snapshot_policy_total_schedules","text":"

Total Number of Schedules in this Policy

API Endpoint Metric Template REST api/private/cli/snapshot/policy total_schedules conf/rest/9.12.0/snapshotpolicy.yaml ZAPI snapshot-policy-get-iter snapshot-policy-info.total-schedules conf/zapi/cdot/9.8.0/snapshotpolicy.yaml"},{"location":"ontap-metrics/#svm_cifs_connections","title":"svm_cifs_connections","text":"

Number of connections

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs connectionsUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver connectionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_established_sessions","title":"svm_cifs_established_sessions","text":"

Number of established SMB and SMB2 sessions

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs established_sessionsUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver established_sessionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_latency","title":"svm_cifs_latency","text":"

Average latency for CIFS operations
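
The raw counters behind this metric can also be read directly from the REST endpoint listed below. A rough sketch of such a query; the host, credentials, and TLS handling are placeholders, and the exact JSON shape of the counter rows may vary by ONTAP release:

```python
# Rough sketch: fetch raw svm_cifs counter rows from the REST counter-table endpoint.
# Host and credentials are placeholders; the response layout may differ per release.
import requests

CLUSTER = "https://cluster.example.com"   # placeholder management LIF
AUTH = ("admin", "password")              # placeholder credentials

resp = requests.get(
    f"{CLUSTER}/api/cluster/counter/tables/svm_cifs/rows",
    params={"fields": "counters"},
    auth=AUTH,
    verify=False,  # example only; verify certificates in production
)
resp.raise_for_status()

for row in resp.json().get("records", []):
    counters = {c.get("name"): c.get("value") for c in row.get("counters", [])}
    print(row.get("id"), counters.get("latency"), counters.get("latency_base"))
```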

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs latencyUnit: microsecType: averageBase: latency_base conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_latencyUnit: microsecType: averageBase: cifs_latency_base conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_op_count","title":"svm_cifs_op_count","text":"

Array of select CIFS operation counts

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs op_countUnit: noneType: rateBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_op_countUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_open_files","title":"svm_cifs_open_files","text":"

Number of open files over SMB and SMB2

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs open_filesUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver open_filesUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_ops","title":"svm_cifs_ops","text":"

Total number of CIFS operations
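
Counters of Type: rate, such as total_ops here, are cumulative totals that are, in effect, converted to per-second values by dividing the counter delta by the elapsed time between polls. A minimal sketch with invented numbers:

```python
# Minimal sketch: deriving a per-second rate from a cumulative ops counter.
# The raw values and the 60 s poll interval are invented for illustration.
prev_total_ops, curr_total_ops = 1_200_000, 1_212_000
interval_seconds = 60

ops_per_sec = (curr_total_ops - prev_total_ops) / interval_seconds
print(f"CIFS ops/s: {ops_per_sec:.1f}")  # 200.0
```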

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_read_latency","title":"svm_cifs_read_latency","text":"

Average latency for CIFS read operations

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs average_read_latencyUnit: microsecType: averageBase: total_read_ops conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_read_latencyUnit: microsecType: averageBase: cifs_read_ops conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_read_ops","title":"svm_cifs_read_ops","text":"

Total number of CIFS read operations

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs total_read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_signed_sessions","title":"svm_cifs_signed_sessions","text":"

Number of signed SMB and SMB2 sessions.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs signed_sessionsUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver signed_sessionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_write_latency","title":"svm_cifs_write_latency","text":"

Average latency for CIFS write operations

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs average_write_latencyUnit: microsecType: averageBase: total_write_ops conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_write_latencyUnit: microsecType: averageBase: cifs_write_ops conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_write_ops","title":"svm_cifs_write_ops","text":"

Total number of CIFS write operations

API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs total_write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_nfs_access_avg_latency","title":"svm_nfs_access_avg_latency","text":"

Average latency of Access procedure requests. The counter keeps track of the average response time of Access requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_access_total","title":"svm_nfs_access_total","text":"

Total number of Access procedure requests. It is the total number of access success and access error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_backchannel_ctl_avg_latency","title":"svm_nfs_backchannel_ctl_avg_latency","text":"

Average latency of BACKCHANNEL_CTL operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 backchannel_ctl.average_latencyUnit: microsecType: averageBase: backchannel_ctl.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 backchannel_ctl.average_latencyUnit: microsecType: averageBase: backchannel_ctl.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 backchannel_ctl_avg_latencyUnit: microsecType: average,no-zero-valuesBase: backchannel_ctl_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 backchannel_ctl_avg_latencyUnit: microsecType: average,no-zero-valuesBase: backchannel_ctl_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_backchannel_ctl_total","title":"svm_nfs_backchannel_ctl_total","text":"

Total number of BACKCHANNEL_CTL operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 backchannel_ctl.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 backchannel_ctl.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 backchannel_ctl_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 backchannel_ctl_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_bind_conn_to_session_avg_latency","title":"svm_nfs_bind_conn_to_session_avg_latency","text":"

Average latency of BIND_CONN_TO_SESSION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 bind_connections_to_session.average_latencyUnit: microsecType: averageBase: bind_connections_to_session.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 bind_conn_to_session.average_latencyUnit: microsecType: averageBase: bind_conn_to_session.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 bind_conn_to_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: bind_conn_to_session_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 bind_conn_to_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: bind_conn_to_session_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_bind_conn_to_session_total","title":"svm_nfs_bind_conn_to_session_total","text":"

Total number of BIND_CONN_TO_SESSION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 bind_connections_to_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 bind_conn_to_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 bind_conn_to_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 bind_conn_to_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_close_avg_latency","title":"svm_nfs_close_avg_latency","text":"

Average latency of CLOSE procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 close.average_latencyUnit: microsecType: averageBase: close.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 close.average_latencyUnit: microsecType: averageBase: close.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 close.average_latencyUnit: microsecType: averageBase: close.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 close_avg_latencyUnit: microsecType: average,no-zero-valuesBase: close_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 close_avg_latencyUnit: microsecType: average,no-zero-valuesBase: close_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 close_avg_latencyUnit: microsecType: average,no-zero-valuesBase: close_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_close_total","title":"svm_nfs_close_total","text":"

Total number of CLOSE procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 close.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 close.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 close.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 close_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 close_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 close_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_commit_avg_latency","title":"svm_nfs_commit_avg_latency","text":"

Average latency of Commit procedure requests. The counter keeps track of the average response time of Commit requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_commit_total","title":"svm_nfs_commit_total","text":"

Total number of Commit procedure requests. It is the total number of Commit success and Commit error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_create_avg_latency","title":"svm_nfs_create_avg_latency","text":"

Average latency of Create procedure requests. The counter keeps track of the average response time of Create requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_create_session_avg_latency","title":"svm_nfs_create_session_avg_latency","text":"

Average latency of CREATE_SESSION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 create_session.average_latencyUnit: microsecType: averageBase: create_session.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 create_session.average_latencyUnit: microsecType: averageBase: create_session.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 create_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_session_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 create_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_session_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_create_session_total","title":"svm_nfs_create_session_total","text":"

Total number of CREATE_SESSION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 create_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 create_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 create_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 create_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_create_total","title":"svm_nfs_create_total","text":"

Total number of Create procedure requests. It is the total number of create success and create error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_delegpurge_avg_latency","title":"svm_nfs_delegpurge_avg_latency","text":"

Average latency of DELEGPURGE procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 delegpurge.average_latencyUnit: microsecType: averageBase: delegpurge.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 delegpurge.average_latencyUnit: microsecType: averageBase: delegpurge.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 delegpurge.average_latencyUnit: microsecType: averageBase: delegpurge.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 delegpurge_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegpurge_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 delegpurge_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegpurge_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 delegpurge_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegpurge_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_delegpurge_total","title":"svm_nfs_delegpurge_total","text":"

Total number of DELEGPURGE procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 delegpurge.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 delegpurge.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 delegpurge.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 delegpurge_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 delegpurge_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 delegpurge_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_delegreturn_avg_latency","title":"svm_nfs_delegreturn_avg_latency","text":"

Average latency of DELEGRETURN procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 delegreturn.average_latencyUnit: microsecType: averageBase: delegreturn.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 delegreturn.average_latencyUnit: microsecType: averageBase: delegreturn.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 delegreturn.average_latencyUnit: microsecType: averageBase: delegreturn.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 delegreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegreturn_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 delegreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegreturn_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 delegreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegreturn_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_delegreturn_total","title":"svm_nfs_delegreturn_total","text":"

Total number of DELEGRETURN procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 delegreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 delegreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 delegreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 delegreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 delegreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 delegreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_destroy_clientid_avg_latency","title":"svm_nfs_destroy_clientid_avg_latency","text":"

Average latency of DESTROY_CLIENTID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 destroy_clientid.average_latencyUnit: microsecType: averageBase: destroy_clientid.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 destroy_clientid.average_latencyUnit: microsecType: averageBase: destroy_clientid.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 destroy_clientid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_clientid_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 destroy_clientid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_clientid_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_destroy_clientid_total","title":"svm_nfs_destroy_clientid_total","text":"

Total number of DESTROY_CLIENTID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 destroy_clientid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 destroy_clientid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 destroy_clientid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 destroy_clientid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_destroy_session_avg_latency","title":"svm_nfs_destroy_session_avg_latency","text":"

Average latency of DESTROY_SESSION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 destroy_session.average_latencyUnit: microsecType: averageBase: destroy_session.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 destroy_session.average_latencyUnit: microsecType: averageBase: destroy_session.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 destroy_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_session_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 destroy_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_session_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_destroy_session_total","title":"svm_nfs_destroy_session_total","text":"

Total number of DESTROY_SESSION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 destroy_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 destroy_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 destroy_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 destroy_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_exchange_id_avg_latency","title":"svm_nfs_exchange_id_avg_latency","text":"

Average latency of EXCHANGE_ID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 exchange_id.average_latencyUnit: microsecType: averageBase: exchange_id.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 exchange_id.average_latencyUnit: microsecType: averageBase: exchange_id.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 exchange_id_avg_latencyUnit: microsecType: average,no-zero-valuesBase: exchange_id_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 exchange_id_avg_latencyUnit: microsecType: average,no-zero-valuesBase: exchange_id_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_exchange_id_total","title":"svm_nfs_exchange_id_total","text":"

Total number of EXCHANGE_ID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 exchange_id.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 exchange_id.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 exchange_id_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 exchange_id_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_free_stateid_avg_latency","title":"svm_nfs_free_stateid_avg_latency","text":"

Average latency of FREE_STATEID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 free_stateid.average_latencyUnit: microsecType: averageBase: free_stateid.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 free_stateid.average_latencyUnit: microsecType: averageBase: free_stateid.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 free_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: free_stateid_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 free_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: free_stateid_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_free_stateid_total","title":"svm_nfs_free_stateid_total","text":"

Total number of FREE_STATEID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 free_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 free_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 free_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 free_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_fsinfo_avg_latency","title":"svm_nfs_fsinfo_avg_latency","text":"

Average latency of FSInfo procedure requests. The counter keeps track of the average response time of FSInfo requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 fsinfo.average_latencyUnit: microsecType: averageBase: fsinfo.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 fsinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: fsinfo_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_fsinfo_total","title":"svm_nfs_fsinfo_total","text":"

Total number of FSInfo procedure requests. It is the total number of FSInfo success and FSInfo error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 fsinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 fsinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_fsstat_avg_latency","title":"svm_nfs_fsstat_avg_latency","text":"

Average latency of FSStat procedure requests. The counter keeps track of the average response time of FSStat requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 fsstat.average_latencyUnit: microsecType: averageBase: fsstat.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 fsstat_avg_latencyUnit: microsecType: average,no-zero-valuesBase: fsstat_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_fsstat_total","title":"svm_nfs_fsstat_total","text":"

Total number of FSStat procedure requests. It is the total number of FSStat success and FSStat error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 fsstat.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 fsstat_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_get_dir_delegation_avg_latency","title":"svm_nfs_get_dir_delegation_avg_latency","text":"

Average latency of GET_DIR_DELEGATION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 get_dir_delegation.average_latencyUnit: microsecType: averageBase: get_dir_delegation.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 get_dir_delegation.average_latencyUnit: microsecType: averageBase: get_dir_delegation.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 get_dir_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: get_dir_delegation_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 get_dir_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: get_dir_delegation_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_get_dir_delegation_total","title":"svm_nfs_get_dir_delegation_total","text":"

Total number of GET_DIR_DELEGATION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 get_dir_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 get_dir_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 get_dir_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 get_dir_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getattr_avg_latency","title":"svm_nfs_getattr_avg_latency","text":"

Average latency of GetAttr procedure requests. This counter keeps track of the average response time of GetAttr requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getattr_total","title":"svm_nfs_getattr_total","text":"

Total number of Getattr procedure requests. It is the total number of getattr success and getattr error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getdeviceinfo_avg_latency","title":"svm_nfs_getdeviceinfo_avg_latency","text":"

Average latency of GETDEVICEINFO operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 getdeviceinfo.average_latencyUnit: microsecType: averageBase: getdeviceinfo.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getdeviceinfo.average_latencyUnit: microsecType: averageBase: getdeviceinfo.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 getdeviceinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdeviceinfo_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getdeviceinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdeviceinfo_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getdeviceinfo_total","title":"svm_nfs_getdeviceinfo_total","text":"

Total number of GETDEVICEINFO operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 getdeviceinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getdeviceinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 getdeviceinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getdeviceinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getdevicelist_avg_latency","title":"svm_nfs_getdevicelist_avg_latency","text":"

Average latency of GETDEVICELIST operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 getdevicelist.average_latencyUnit: microsecType: averageBase: getdevicelist.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getdevicelist.average_latencyUnit: microsecType: averageBase: getdevicelist.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 getdevicelist_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdevicelist_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getdevicelist_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdevicelist_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getdevicelist_total","title":"svm_nfs_getdevicelist_total","text":"

Total number of GETDEVICELIST operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 getdevicelist.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getdevicelist.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 getdevicelist_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getdevicelist_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getfh_avg_latency","title":"svm_nfs_getfh_avg_latency","text":"

Average latency of GETFH procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 getfh.average_latencyUnit: microsecType: averageBase: getfh.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 getfh.average_latencyUnit: microsecType: averageBase: getfh.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getfh.average_latencyUnit: microsecType: averageBase: getfh.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 getfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getfh_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 getfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getfh_total","title":"svm_nfs_getfh_total","text":"

Total number of GETFH procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 getfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 getfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 getfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 getfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_latency","title":"svm_nfs_latency","text":"

Average latency of NFSv3 requests. This counter keeps track of the average response time of NFSv3 requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_layoutcommit_avg_latency","title":"svm_nfs_layoutcommit_avg_latency","text":"

Average latency of LAYOUTCOMMIT operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 layoutcommit.average_latencyUnit: microsecType: averageBase: layoutcommit.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 layoutcommit.average_latencyUnit: microsecType: averageBase: layoutcommit.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 layoutcommit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutcommit_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 layoutcommit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutcommit_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_layoutcommit_total","title":"svm_nfs_layoutcommit_total","text":"

Total number of LAYOUTCOMMIT operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 layoutcommit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 layoutcommit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 layoutcommit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 layoutcommit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_layoutget_avg_latency","title":"svm_nfs_layoutget_avg_latency","text":"

Average latency of LAYOUTGET operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 layoutget.average_latencyUnit: microsecType: averageBase: layoutget.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 layoutget.average_latencyUnit: microsecType: averageBase: layoutget.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 layoutget_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutget_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 layoutget_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutget_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_layoutget_total","title":"svm_nfs_layoutget_total","text":"

Total number of LAYOUTGET operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 layoutget.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 layoutget.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 layoutget_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 layoutget_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_layoutreturn_avg_latency","title":"svm_nfs_layoutreturn_avg_latency","text":"

Average latency of LAYOUTRETURN operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 layoutreturn.average_latencyUnit: microsecType: averageBase: layoutreturn.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 layoutreturn.average_latencyUnit: microsecType: averageBase: layoutreturn.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 layoutreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutreturn_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 layoutreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutreturn_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_layoutreturn_total","title":"svm_nfs_layoutreturn_total","text":"

Total number of LAYOUTRETURN operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 layoutreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 layoutreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 layoutreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 layoutreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_link_avg_latency","title":"svm_nfs_link_avg_latency","text":"

Average latency of Link procedure requests. The counter keeps track of the average response time of Link requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_link_total","title":"svm_nfs_link_total","text":"

Total number of Link procedure requests. It is the total number of Link success and Link error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lock_avg_latency","title":"svm_nfs_lock_avg_latency","text":"

Average latency of LOCK procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 lock.average_latencyUnit: microsecType: averageBase: lock.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lock.average_latencyUnit: microsecType: averageBase: lock.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lock.average_latencyUnit: microsecType: averageBase: lock.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 lock_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lock_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lock_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lock_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lock_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lock_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lock_total","title":"svm_nfs_lock_total","text":"

Total number of LOCK procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 lock.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lock.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lock.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 lock_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lock_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lock_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lockt_avg_latency","title":"svm_nfs_lockt_avg_latency","text":"

Average latency of LOCKT procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 lockt.average_latencyUnit: microsecType: averageBase: lockt.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lockt.average_latencyUnit: microsecType: averageBase: lockt.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lockt.average_latencyUnit: microsecType: averageBase: lockt.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 lockt_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lockt_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lockt_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lockt_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lockt_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lockt_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lockt_total","title":"svm_nfs_lockt_total","text":"

Total number of LOCKT procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 lockt.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lockt.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lockt.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 lockt_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lockt_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lockt_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_locku_avg_latency","title":"svm_nfs_locku_avg_latency","text":"

Average latency of LOCKU procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 locku.average_latencyUnit: microsecType: averageBase: locku.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 locku.average_latencyUnit: microsecType: averageBase: locku.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 locku.average_latencyUnit: microsecType: averageBase: locku.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 locku_avg_latencyUnit: microsecType: average,no-zero-valuesBase: locku_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 locku_avg_latencyUnit: microsecType: average,no-zero-valuesBase: locku_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 locku_avg_latencyUnit: microsecType: average,no-zero-valuesBase: locku_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_locku_total","title":"svm_nfs_locku_total","text":"

Total number of LOCKU procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 locku.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 locku.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 locku.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 locku_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 locku_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 locku_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lookup_avg_latency","title":"svm_nfs_lookup_avg_latency","text":"

Average latency of LookUp procedure requests. This shows the average time it takes for the LookUp operation to reply to the request.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lookup_total","title":"svm_nfs_lookup_total","text":"

Total number of Lookup procedure requests. It is the total number of lookup success and lookup error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lookupp_avg_latency","title":"svm_nfs_lookupp_avg_latency","text":"

Average latency of LOOKUPP procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 lookupp.average_latencyUnit: microsecType: averageBase: lookupp.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lookupp.average_latencyUnit: microsecType: averageBase: lookupp.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lookupp.average_latencyUnit: microsecType: averageBase: lookupp.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 lookupp_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookupp_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lookupp_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookupp_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lookupp_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookupp_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lookupp_total","title":"svm_nfs_lookupp_total","text":"

Total number of LOOKUPP procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 lookupp.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lookupp.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lookupp.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 lookupp_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lookupp_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lookupp_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_mkdir_avg_latency","title":"svm_nfs_mkdir_avg_latency","text":"

Average latency of MkDir procedure requests. The counter keeps track of the average response time of MkDir requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 mkdir.average_latencyUnit: microsecType: averageBase: mkdir.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 mkdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: mkdir_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_mkdir_total","title":"svm_nfs_mkdir_total","text":"

Total number of MkDir procedure requests. It is the total number of MkDir success and MkDir error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 mkdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 mkdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_mknod_avg_latency","title":"svm_nfs_mknod_avg_latency","text":"

Average latency of MkNod procedure requests. The counter keeps track of the average response time of MkNod requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 mknod.average_latencyUnit: microsecType: averageBase: mknod.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 mknod_avg_latencyUnit: microsecType: average,no-zero-valuesBase: mknod_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_mknod_total","title":"svm_nfs_mknod_total","text":"

Total number of MkNod procedure requests. It is the total number of MkNod success and MkNod error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 mknod.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 mknod_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_null_avg_latency","title":"svm_nfs_null_avg_latency","text":"

Average latency of Null procedure requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_null_total","title":"svm_nfs_null_total","text":"

Total number of Null procedure requests. It is the total of null success and null error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_nverify_avg_latency","title":"svm_nfs_nverify_avg_latency","text":"

Average latency of NVERIFY procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 nverify.average_latencyUnit: microsecType: averageBase: nverify.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 nverify.average_latencyUnit: microsecType: averageBase: nverify.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 nverify.average_latencyUnit: microsecType: averageBase: nverify.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 nverify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: nverify_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 nverify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: nverify_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 nverify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: nverify_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_nverify_total","title":"svm_nfs_nverify_total","text":"

Total number of NVERIFY procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 nverify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 nverify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 nverify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 nverify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 nverify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 nverify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_open_avg_latency","title":"svm_nfs_open_avg_latency","text":"

Average latency of OPEN procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 open.average_latencyUnit: microsecType: averageBase: open.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 open.average_latencyUnit: microsecType: averageBase: open.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 open.average_latencyUnit: microsecType: averageBase: open.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 open_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 open_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 open_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_open_confirm_avg_latency","title":"svm_nfs_open_confirm_avg_latency","text":"

Average latency of OPEN_CONFIRM procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 open_confirm.average_latencyUnit: microsecType: averageBase: open_confirm.total conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 open_confirm_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_confirm_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_open_confirm_total","title":"svm_nfs_open_confirm_total","text":"

Total number of OPEN_CONFIRM procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 open_confirm.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 open_confirm_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_open_downgrade_avg_latency","title":"svm_nfs_open_downgrade_avg_latency","text":"

Average latency of OPEN_DOWNGRADE procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 open_downgrade.average_latencyUnit: microsecType: averageBase: open_downgrade.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 open_downgrade.average_latencyUnit: microsecType: averageBase: open_downgrade.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 open_downgrade.average_latencyUnit: microsecType: averageBase: open_downgrade.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 open_downgrade_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_downgrade_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 open_downgrade_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_downgrade_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 open_downgrade_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_downgrade_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_open_downgrade_total","title":"svm_nfs_open_downgrade_total","text":"

Total number of OPEN_DOWNGRADE procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 open_downgrade.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 open_downgrade.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 open_downgrade.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 open_downgrade_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 open_downgrade_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 open_downgrade_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_open_total","title":"svm_nfs_open_total","text":"

Total number of OPEN procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 open.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 open.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 open.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 open_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 open_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 open_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_openattr_avg_latency","title":"svm_nfs_openattr_avg_latency","text":"

Average latency of OPENATTR procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 openattr.average_latencyUnit: microsecType: averageBase: openattr.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 openattr.average_latencyUnit: microsecType: averageBase: openattr.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 openattr.average_latencyUnit: microsecType: averageBase: openattr.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 openattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: openattr_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 openattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: openattr_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 openattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: openattr_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_openattr_total","title":"svm_nfs_openattr_total","text":"

Total number of OPENATTR procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 openattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 openattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 openattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 openattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 openattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 openattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_ops","title":"svm_nfs_ops","text":"

Total number of NFS procedure requests per second.
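
Because this counter is reported per SVM (and per protocol table), a common follow-up is to roll the per-SVM rates up to a cluster total or rank the busiest SVMs. A small, self-contained Python sketch; the SVM names and numbers are invented for illustration.

```python
# Hypothetical per-SVM samples of svm_nfs_ops (operations per second);
# names and values are made up for the example.
svm_nfs_ops = {"svm_a": 1250.0, "svm_b": 430.5, "svm_c": 85.0}

cluster_total = sum(svm_nfs_ops.values())
busiest = sorted(svm_nfs_ops.items(), key=lambda kv: kv[1], reverse=True)

print(f"cluster NFS ops/s: {cluster_total:.1f}")
for svm, ops in busiest:
    print(f"{svm}: {ops:.1f} ops/s")
```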

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 nfsv3_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 total_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 total_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 total_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_pathconf_avg_latency","title":"svm_nfs_pathconf_avg_latency","text":"

Average latency of PathConf procedure requests. The counter keeps track of the average response time of PathConf requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 pathconf.average_latencyUnit: microsecType: averageBase: pathconf.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 pathconf_avg_latencyUnit: microsecType: average,no-zero-valuesBase: pathconf_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_pathconf_total","title":"svm_nfs_pathconf_total","text":"

Total number of PathConf procedure requests. It is the total number of PathConf success and PathConf error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 pathconf.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 pathconf_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_putfh_avg_latency","title":"svm_nfs_putfh_avg_latency","text":"

Average latency of PUTFH procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 putfh.average_latencyUnit: microsecType: averageBase: putfh.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 putfh.average_latencyUnit: noneType: deltaBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 putfh.average_latencyUnit: microsecType: averageBase: putfh.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 putfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putfh_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 putfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 putfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_putfh_total","title":"svm_nfs_putfh_total","text":"

Total number of PUTFH procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 putfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 putfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 putfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 putfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 putfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 putfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_putpubfh_avg_latency","title":"svm_nfs_putpubfh_avg_latency","text":"

Average latency of PUTPUBFH procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 putpubfh.average_latencyUnit: microsecType: averageBase: putpubfh.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 putpubfh.average_latencyUnit: microsecType: averageBase: putpubfh.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 putpubfh.average_latencyUnit: microsecType: averageBase: putpubfh.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 putpubfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putpubfh_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 putpubfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putpubfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 putpubfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putpubfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_putpubfh_total","title":"svm_nfs_putpubfh_total","text":"

Total number of PUTPUBFH procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 putpubfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 putpubfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 putpubfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 putpubfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 putpubfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 putpubfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_putrootfh_avg_latency","title":"svm_nfs_putrootfh_avg_latency","text":"

Average latency of PUTROOTFH procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 putrootfh.average_latencyUnit: microsecType: averageBase: putrootfh.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 putrootfh.average_latencyUnit: microsecType: averageBase: putrootfh.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 putrootfh.average_latencyUnit: microsecType: averageBase: putrootfh.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 putrootfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putrootfh_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 putrootfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putrootfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 putrootfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putrootfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_putrootfh_total","title":"svm_nfs_putrootfh_total","text":"

Total number of PUTROOTFH procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 putrootfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 putrootfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 putrootfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 putrootfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 putrootfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 putrootfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_read_avg_latency","title":"svm_nfs_read_avg_latency","text":"

Average latency of Read procedure requests. The counter keeps track of the average response time of Read requests.
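
Because this metric is an average per SVM, averaging it naively across SVMs over-weights quiet SVMs. If you also have the matching operation counts over the same interval (for example from svm_nfs_read_total), an ops-weighted average gives a fairer fleet-wide number. A minimal Python sketch with invented values:

```python
# Hypothetical per-SVM samples: (average read latency in microsec, read ops in the interval).
# Names and numbers are invented; the point is the ops-weighted average.
samples = {
    "svm_a": (350.0, 120_000),
    "svm_b": (900.0, 4_000),
    "svm_c": (410.0, 65_000),
}

total_ops = sum(ops for _, ops in samples.values())
weighted_avg = sum(lat * ops for lat, ops in samples.values()) / total_ops

print(f"ops-weighted average read latency: {weighted_avg:.1f} microsec")
```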

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_read_ops","title":"svm_nfs_read_ops","text":"

Total observed NFSv3 read operations per second.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 nfsv3_read_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_read_symlink_avg_latency","title":"svm_nfs_read_symlink_avg_latency","text":"

Average latency of ReadSymLink procedure requests. The counter keeps track of the average response time of ReadSymLink requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 read_symlink.average_latencyUnit: microsecType: averageBase: read_symlink.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 read_symlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_symlink_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_read_symlink_total","title":"svm_nfs_read_symlink_total","text":"

Total number of ReadSymLink procedure requests. It is the total number of read symlink success and read symlink error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 read_symlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 read_symlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_read_throughput","title":"svm_nfs_read_throughput","text":"

Rate of NFS read data transfers per second.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 total.read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 total.read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 total.read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 nfsv3_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 nfs4_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 nfs41_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 nfs42_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_read_total","title":"svm_nfs_read_total","text":"

Total number of Read procedure requests. It is the total number of read success and read error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_readdir_avg_latency","title":"svm_nfs_readdir_avg_latency","text":"

Average latency of ReadDir procedure requests. The counter keeps track of the average response time of ReadDir requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_readdir_total","title":"svm_nfs_readdir_total","text":"

Total number of ReadDir procedure requests. It is the total number of ReadDir success and ReadDir error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_readdirplus_avg_latency","title":"svm_nfs_readdirplus_avg_latency","text":"

Average latency of ReadDirPlus procedure requests. The counter keeps track of the average response time of ReadDirPlus requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 readdirplus.average_latencyUnit: microsecType: averageBase: readdirplus.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 readdirplus_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdirplus_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_readdirplus_total","title":"svm_nfs_readdirplus_total","text":"

Total number of ReadDirPlus procedure requests. It is the total number of ReadDirPlus success and ReadDirPlus error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 readdirplus.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 readdirplus_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_readlink_avg_latency","title":"svm_nfs_readlink_avg_latency","text":"

Average latency of READLINK procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 readlink.average_latencyUnit: microsecType: averageBase: readlink.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 readlink.average_latencyUnit: microsecType: averageBase: readlink.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 readlink.average_latencyUnit: microsecType: averageBase: readlink.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 readlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readlink_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 readlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readlink_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 readlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readlink_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_readlink_total","title":"svm_nfs_readlink_total","text":"

Total number of READLINK procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 readlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 readlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 readlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 readlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 readlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 readlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_reclaim_complete_avg_latency","title":"svm_nfs_reclaim_complete_avg_latency","text":"

Average latency of RECLAIM_COMPLETE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 reclaim_complete.average_latencyUnit: microsecType: averageBase: reclaim_complete.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 reclaim_complete.average_latencyUnit: microsecType: averageBase: reclaim_complete.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 reclaim_complete_avg_latencyUnit: microsecType: average,no-zero-valuesBase: reclaim_complete_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 reclaim_complete_avg_latencyUnit: microsecType: average,no-zero-valuesBase: reclaim_complete_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_reclaim_complete_total","title":"svm_nfs_reclaim_complete_total","text":"

Total number of RECLAIM_COMPLETE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 reclaim_complete.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 reclaim_complete.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 reclaim_complete_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 reclaim_complete_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_release_lock_owner_avg_latency","title":"svm_nfs_release_lock_owner_avg_latency","text":"

Average latency of RELEASE_LOCKOWNER procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 release_lock_owner.average_latencyUnit: microsecType: averageBase: release_lock_owner.total conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 release_lock_owner_avg_latencyUnit: microsecType: average,no-zero-valuesBase: release_lock_owner_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_release_lock_owner_total","title":"svm_nfs_release_lock_owner_total","text":"

Total number of RELEASE_LOCKOWNER procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 release_lock_owner.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 release_lock_owner_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_remove_avg_latency","title":"svm_nfs_remove_avg_latency","text":"

Average latency of Remove procedure requests. The counter keeps track of the average response time of Remove requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_remove_total","title":"svm_nfs_remove_total","text":"

Total number of Remove procedure requests. It is the total number of Remove success and Remove error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_rename_avg_latency","title":"svm_nfs_rename_avg_latency","text":"

Average latency of Rename procedure requests. The counter keeps track of the average response time of Rename requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_rename_total","title":"svm_nfs_rename_total","text":"

Total number of Rename procedure requests. It is the total number of Rename success and Rename error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_renew_avg_latency","title":"svm_nfs_renew_avg_latency","text":"

Average latency of RENEW procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 renew.average_latencyUnit: microsecType: averageBase: renew.total conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 renew_avg_latencyUnit: microsecType: average,no-zero-valuesBase: renew_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_renew_total","title":"svm_nfs_renew_total","text":"

Total number of RENEW procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 renew.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 renew_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_restorefh_avg_latency","title":"svm_nfs_restorefh_avg_latency","text":"

Average latency of RESTOREFH procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 restorefh.average_latencyUnit: microsecType: averageBase: restorefh.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 restorefh.average_latencyUnit: microsecType: averageBase: restorefh.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 restorefh.average_latencyUnit: microsecType: averageBase: restorefh.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 restorefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: restorefh_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 restorefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: restorefh_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 restorefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: restorefh_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_restorefh_total","title":"svm_nfs_restorefh_total","text":"

Total number of RESTOREFH procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 restorefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 restorefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 restorefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 restorefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 restorefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 restorefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_rmdir_avg_latency","title":"svm_nfs_rmdir_avg_latency","text":"

Average latency of RmDir procedure requests. The counter keeps track of the average response time of RmDir requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 rmdir.average_latencyUnit: microsecType: averageBase: rmdir.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 rmdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rmdir_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_rmdir_total","title":"svm_nfs_rmdir_total","text":"

Total number of RmDir procedure requests. It is the total number of RmDir success and RmDir error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 rmdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 rmdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_savefh_avg_latency","title":"svm_nfs_savefh_avg_latency","text":"

Average latency of SAVEFH procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 savefh.average_latencyUnit: microsecType: averageBase: savefh.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 savefh.average_latencyUnit: microsecType: averageBase: savefh.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 savefh.average_latencyUnit: microsecType: averageBase: savefh.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 savefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: savefh_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 savefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: savefh_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 savefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: savefh_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_savefh_total","title":"svm_nfs_savefh_total","text":"

Total number of SAVEFH procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 savefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 savefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 savefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 savefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 savefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 savefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_secinfo_avg_latency","title":"svm_nfs_secinfo_avg_latency","text":"

Average latency of SECINFO procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 secinfo.average_latencyUnit: microsecType: averageBase: secinfo.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 secinfo.average_latencyUnit: microsecType: averageBase: secinfo.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 secinfo.average_latencyUnit: microsecType: averageBase: secinfo.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 secinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 secinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 secinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_secinfo_no_name_avg_latency","title":"svm_nfs_secinfo_no_name_avg_latency","text":"

Average latency of SECINFO_NO_NAME operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 secinfo_no_name.average_latencyUnit: microsecType: averageBase: secinfo_no_name.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 secinfo_no_name.average_latencyUnit: microsecType: averageBase: secinfo_no_name.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 secinfo_no_name_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_no_name_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 secinfo_no_name_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_no_name_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_secinfo_no_name_total","title":"svm_nfs_secinfo_no_name_total","text":"

Total number of SECINFO_NO_NAME operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 secinfo_no_name.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 secinfo_no_name.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 secinfo_no_name_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 secinfo_no_name_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_secinfo_total","title":"svm_nfs_secinfo_total","text":"

Total number of SECINFO procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 secinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 secinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 secinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 secinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 secinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 secinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_sequence_avg_latency","title":"svm_nfs_sequence_avg_latency","text":"

Average latency of SEQUENCE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 sequence.average_latencyUnit: microsecType: averageBase: sequence.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 sequence.average_latencyUnit: microsecType: averageBase: sequence.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 sequence_avg_latencyUnit: microsecType: average,no-zero-valuesBase: sequence_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 sequence_avg_latencyUnit: microsecType: average,no-zero-valuesBase: sequence_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_sequence_total","title":"svm_nfs_sequence_total","text":"

Total number of SEQUENCE operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 sequence.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 sequence.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 sequence_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 sequence_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_set_ssv_avg_latency","title":"svm_nfs_set_ssv_avg_latency","text":"

Average latency of SET_SSV operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 set_ssv.average_latencyUnit: microsecType: averageBase: set_ssv.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 set_ssv.average_latencyUnit: microsecType: averageBase: set_ssv.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 set_ssv_avg_latencyUnit: microsecType: average,no-zero-valuesBase: set_ssv_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 set_ssv_avg_latencyUnit: microsecType: average,no-zero-valuesBase: set_ssv_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_set_ssv_total","title":"svm_nfs_set_ssv_total","text":"

Total number of SET_SSV operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 set_ssv.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 set_ssv.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 set_ssv_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 set_ssv_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_setattr_avg_latency","title":"svm_nfs_setattr_avg_latency","text":"

Average latency of SetAttr procedure requests. The counter keeps track of the average response time of SetAttr requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_setattr_total","title":"svm_nfs_setattr_total","text":"

Total number of SetAttr procedure requests. It is the total number of SetAttr success and SetAttr error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_setclientid_avg_latency","title":"svm_nfs_setclientid_avg_latency","text":"

Average latency of SETCLIENTID procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 setclientid.average_latencyUnit: microsecType: averageBase: setclientid.total conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 setclientid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setclientid_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_setclientid_confirm_avg_latency","title":"svm_nfs_setclientid_confirm_avg_latency","text":"

Average latency of SETCLIENTID_CONFIRM procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 setclientid_confirm.average_latencyUnit: microsecType: averageBase: setclientid_confirm.total conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 setclientid_confirm_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setclientid_confirm_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_setclientid_confirm_total","title":"svm_nfs_setclientid_confirm_total","text":"

Total number of SETCLIENTID_CONFIRM procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 setclientid_confirm.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 setclientid_confirm_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_setclientid_total","title":"svm_nfs_setclientid_total","text":"

Total number of SETCLIENTID procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 setclientid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 setclientid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_symlink_avg_latency","title":"svm_nfs_symlink_avg_latency","text":"

Average latency of SymLink procedure requests. The counter keeps track of the average response time of SymLink requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 symlink.average_latencyUnit: microsecType: averageBase: symlink.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 symlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: symlink_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_symlink_total","title":"svm_nfs_symlink_total","text":"

Total number of SymLink procedure requests. It is the total number of SymLink success and SymLink error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 symlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 symlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_test_stateid_avg_latency","title":"svm_nfs_test_stateid_avg_latency","text":"

Average latency of TEST_STATEID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 test_stateid.average_latencyUnit: microsecType: averageBase: test_stateid.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 test_stateid.average_latencyUnit: microsecType: averageBase: test_stateid.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 test_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: test_stateid_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 test_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: test_stateid_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_test_stateid_total","title":"svm_nfs_test_stateid_total","text":"

Total number of TEST_STATEID operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 test_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 test_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 test_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 test_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_throughput","title":"svm_nfs_throughput","text":"

Rate of NFSv3 data transfers per second.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 total.throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 total.write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 total.write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 nfsv3_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 nfs4_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 nfs41_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 nfs42_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_verify_avg_latency","title":"svm_nfs_verify_avg_latency","text":"

Average latency of VERIFY procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 verify.average_latencyUnit: microsecType: averageBase: verify.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 verify.average_latencyUnit: microsecType: averageBase: verify.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 verify.average_latencyUnit: microsecType: averageBase: verify.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 verify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: verify_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 verify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: verify_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 verify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: verify_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_verify_total","title":"svm_nfs_verify_total","text":"

Total number of VERIFY procedures

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 verify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 verify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 verify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 verify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 verify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 verify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_want_delegation_avg_latency","title":"svm_nfs_want_delegation_avg_latency","text":"

Average latency of WANT_DELEGATION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 want_delegation.average_latencyUnit: microsecType: averageBase: want_delegation.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 want_delegation.average_latencyUnit: microsecType: averageBase: want_delegation.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 want_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: want_delegation_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 want_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: want_delegation_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_want_delegation_total","title":"svm_nfs_want_delegation_total","text":"

Total number of WANT_DELEGATION operations.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 want_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 want_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 want_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 want_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_write_avg_latency","title":"svm_nfs_write_avg_latency","text":"

Average latency of Write procedure requests. The counter keeps track of the average response time of Write requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_write_ops","title":"svm_nfs_write_ops","text":"

Total observed NFSv3 write operations per second.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 nfsv3_write_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_write_throughput","title":"svm_nfs_write_throughput","text":"

Rate of NFSv3 write data transfers per second.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 total.write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 total.throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 total.throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 nfsv3_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 nfs4_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 nfs41_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 nfs42_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_write_total","title":"svm_nfs_write_total","text":"

Total number of Write procedure requests. It is the total number of write success and write error requests.

API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_vol_avg_latency","title":"svm_vol_avg_latency","text":"

Average latency in microseconds for the WAFL filesystem to process all the operations on the volume; not including request processing or network communication time

API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm average_latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_other_latency","title":"svm_vol_other_latency","text":"

Average latency in microseconds for the WAFL filesystem to process other operations to the volume; not including request processing or network communication time

API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm other_latencyUnit: microsecType: averageBase: total_other_ops conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_other_ops","title":"svm_vol_other_ops","text":"

Number of other operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm total_other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_read_data","title":"svm_vol_read_data","text":"

Bytes read per second

API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm bytes_readUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_read_latency","title":"svm_vol_read_latency","text":"

Average latency in microseconds for the WAFL filesystem to process read request to the volume; not including request processing or network communication time

API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm read_latencyUnit: microsecType: averageBase: total_read_ops conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_read_ops","title":"svm_vol_read_ops","text":"

Number of read operations per second from the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm total_read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_total_ops","title":"svm_vol_total_ops","text":"

Number of operations per second serviced by the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_write_data","title":"svm_vol_write_data","text":"

Bytes written per second

API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm bytes_writtenUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_write_latency","title":"svm_vol_write_latency","text":"

Average latency in microseconds for the WAFL filesystem to process write request to the volume; not including request processing or network communication time

API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm write_latencyUnit: microsecType: averageBase: total_write_ops conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_write_ops","title":"svm_vol_write_ops","text":"

Number of write operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm total_write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vscan_connections_active","title":"svm_vscan_connections_active","text":"

Total number of current active connections

API Endpoint Metric Template REST api/cluster/counter/tables/svm_vscan connections_activeUnit: noneType: rawBase: conf/restperf/9.13.0/vscan_svm.yaml ZAPI perf-object-get-instances offbox_vscan connections_activeUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/vscan_svm.yaml"},{"location":"ontap-metrics/#svm_vscan_dispatch_latency","title":"svm_vscan_dispatch_latency","text":"

Average dispatch latency

API Endpoint Metric Template REST api/cluster/counter/tables/svm_vscan dispatch.latencyUnit: microsecType: averageBase: dispatch.requests conf/restperf/9.13.0/vscan_svm.yaml ZAPI perf-object-get-instances offbox_vscan dispatch_latencyUnit: microsecType: averageBase: dispatch_latency_base conf/zapiperf/cdot/9.8.0/vscan_svm.yaml"},{"location":"ontap-metrics/#svm_vscan_scan_latency","title":"svm_vscan_scan_latency","text":"

Average scan latency

API Endpoint Metric Template REST api/cluster/counter/tables/svm_vscan scan.latencyUnit: microsecType: averageBase: scan.requests conf/restperf/9.13.0/vscan_svm.yaml ZAPI perf-object-get-instances offbox_vscan scan_latencyUnit: microsecType: averageBase: scan_latency_base conf/zapiperf/cdot/9.8.0/vscan_svm.yaml"},{"location":"ontap-metrics/#svm_vscan_scan_noti_received_rate","title":"svm_vscan_scan_noti_received_rate","text":"

Total number of scan notifications received by the dispatcher per second

API Endpoint Metric Template REST api/cluster/counter/tables/svm_vscan scan.notification_received_rateUnit: per_secType: rateBase: conf/restperf/9.13.0/vscan_svm.yaml ZAPI perf-object-get-instances offbox_vscan scan_noti_received_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/vscan_svm.yaml"},{"location":"ontap-metrics/#svm_vscan_scan_request_dispatched_rate","title":"svm_vscan_scan_request_dispatched_rate","text":"

Total number of scan requests sent to the Vscanner per second

API Endpoint Metric Template REST api/cluster/counter/tables/svm_vscan scan.request_dispatched_rateUnit: per_secType: rateBase: conf/restperf/9.13.0/vscan_svm.yaml ZAPI perf-object-get-instances offbox_vscan scan_request_dispatched_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/vscan_svm.yaml"},{"location":"ontap-metrics/#token_copy_bytes","title":"token_copy_bytes","text":"

Total number of bytes copied.

API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_copy.bytesUnit: noneType: rateBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_copy_bytesUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_copy_failure","title":"token_copy_failure","text":"

Number of failed token copy requests.

API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_copy.failuresUnit: noneType: deltaBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_copy_failureUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_copy_success","title":"token_copy_success","text":"

Number of successful token copy requests.

API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_copy.successesUnit: noneType: deltaBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_copy_successUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_create_bytes","title":"token_create_bytes","text":"

Total number of bytes for which tokens are created.

API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_create.bytesUnit: noneType: rateBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_create_bytesUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_create_failure","title":"token_create_failure","text":"

Number of failed token create requests.

API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_create.failuresUnit: noneType: deltaBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_create_failureUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_create_success","title":"token_create_success","text":"

Number of successful token create requests.

API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_create.successesUnit: noneType: deltaBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_create_successUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_zero_bytes","title":"token_zero_bytes","text":"

Total number of bytes zeroed.

API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_zero.bytesUnit: noneType: rateBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_zero_bytesUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_zero_failure","title":"token_zero_failure","text":"

Number of failed token zero requests.

API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_zero.failuresUnit: noneType: deltaBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_zero_failureUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_zero_success","title":"token_zero_success","text":"

Number of successful token zero requests.

API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_zero.successesUnit: noneType: deltaBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_zero_successUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#volume_autosize_grow_threshold_percent","title":"volume_autosize_grow_threshold_percent","text":"API Endpoint Metric Template REST api/private/cli/volume autosize_grow_threshold_percent conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-autosize-attributes.grow-threshold-percent conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_autosize_maximum_size","title":"volume_autosize_maximum_size","text":"API Endpoint Metric Template REST api/private/cli/volume max_autosize conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-autosize-attributes.maximum-size conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_avg_latency","title":"volume_avg_latency","text":"

Average latency in microseconds for the WAFL filesystem to process all the operations on the volume; not including request processing or network communication time

API Endpoint Metric Template REST api/cluster/counter/tables/volume average_latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_filesystem_size","title":"volume_filesystem_size","text":"API Endpoint Metric Template REST api/private/cli/volume filesystem_size conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.filesystem-size conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_inode_files_total","title":"volume_inode_files_total","text":"

Total user-visible file (inode) count, i.e., the maximum number of user-visible files (inodes) that this volume can currently hold.

API Endpoint Metric Template REST api/private/cli/volume files conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-inode-attributes.files-total conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_inode_files_used","title":"volume_inode_files_used","text":"

Number of user-visible files (inodes) used. This field is valid only when the volume is online.

API Endpoint Metric Template REST api/private/cli/volume files_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-inode-attributes.files-used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_inode_used_percent","title":"volume_inode_used_percent","text":"

volume_inode_files_used / volume_inode_files_total

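A minimal worked example of this ratio, assuming the result is expressed as a percentage; the counter values below are hypothetical and only illustrate the arithmetic.

# Hypothetical illustration of deriving volume_inode_used_percent
# from its two source counters (values are made up).
inode_files_used = 1_000     # e.g., volume_inode_files_used
inode_files_total = 4_000    # e.g., volume_inode_files_total
volume_inode_used_percent = 100 * inode_files_used / inode_files_total
print(volume_inode_used_percent)  # 25.0
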
API Endpoint Metric Template REST api/private/cli/volume inode_files_used, inode_files_total conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter inode_files_used, inode_files_total conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_access_latency","title":"volume_nfs_access_latency","text":"

Average time for the WAFL filesystem to process NFS protocol access requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.access_latencyUnit: microsecType: averageBase: nfs.access_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_access_latencyUnit: microsecType: averageBase: nfs_access_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_access_ops","title":"volume_nfs_access_ops","text":"

Number of NFS accesses per second to the volume.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.access_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_access_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_getattr_latency","title":"volume_nfs_getattr_latency","text":"

Average time for the WAFL filesystem to process NFS protocol getattr requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.getattr_latencyUnit: microsecType: averageBase: nfs.getattr_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_getattr_latencyUnit: microsecType: averageBase: nfs_getattr_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_getattr_ops","title":"volume_nfs_getattr_ops","text":"

Number of NFS getattr per second to the volume.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.getattr_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_getattr_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_lookup_latency","title":"volume_nfs_lookup_latency","text":"

Average time for the WAFL filesystem to process NFS protocol lookup requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.lookup_latencyUnit: microsecType: averageBase: nfs.lookup_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_lookup_latencyUnit: microsecType: averageBase: nfs_lookup_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_lookup_ops","title":"volume_nfs_lookup_ops","text":"

Number of NFS lookups per second to the volume.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.lookup_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_lookup_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_other_latency","title":"volume_nfs_other_latency","text":"

Average time for the WAFL filesystem to process other NFS operations to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.other_latencyUnit: microsecType: averageBase: nfs.other_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_other_latencyUnit: microsecType: averageBase: nfs_other_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_other_ops","title":"volume_nfs_other_ops","text":"

Number of other NFS operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_punch_hole_latency","title":"volume_nfs_punch_hole_latency","text":"

Average time for the WAFL filesystem to process NFS protocol hole-punch requests to the volume.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.punch_hole_latencyUnit: microsecType: averageBase: nfs.punch_hole_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_punch_hole_latencyUnit: microsecType: averageBase: nfs_punch_hole_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_punch_hole_ops","title":"volume_nfs_punch_hole_ops","text":"

Number of NFS hole-punch requests per second to the volume.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.punch_hole_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_punch_hole_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_read_latency","title":"volume_nfs_read_latency","text":"

Average time for the WAFL filesystem to process NFS protocol read requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.read_latencyUnit: microsecType: averageBase: nfs.read_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_read_latencyUnit: microsecType: averageBase: nfs_read_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_read_ops","title":"volume_nfs_read_ops","text":"

Number of NFS read operations per second from the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_setattr_latency","title":"volume_nfs_setattr_latency","text":"

Average time for the WAFL filesystem to process NFS protocol setattr requests to the volume.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.setattr_latencyUnit: microsecType: averageBase: nfs.setattr_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_setattr_latencyUnit: microsecType: averageBase: nfs_setattr_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_setattr_ops","title":"volume_nfs_setattr_ops","text":"

Number of NFS setattr requests per second to the volume.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.setattr_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_setattr_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_total_ops","title":"volume_nfs_total_ops","text":"

Number of total NFS operations per second to the volume.

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_write_latency","title":"volume_nfs_write_latency","text":"

Average time for the WAFL filesystem to process NFS protocol write requests to the volume; not including NFS protocol request processing or network communication time, which will also be included in client observed NFS request latency

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.write_latencyUnit: microsecType: averageBase: nfs.write_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_write_latencyUnit: microsecType: averageBase: nfs_write_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_write_ops","title":"volume_nfs_write_ops","text":"

Number of NFS write operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_other_latency","title":"volume_other_latency","text":"

Average latency in microseconds for the WAFL filesystem to process other operations to the volume; not including request processing or network communication time

API Endpoint Metric Template REST api/cluster/counter/tables/volume other_latencyUnit: microsecType: averageBase: total_other_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_other_ops","title":"volume_other_ops","text":"

Number of other operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume total_other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_overwrite_reserve_available","title":"volume_overwrite_reserve_available","text":"

Amount of storage space that is currently available for overwrites, calculated by subtracting the amount of overwrite reserve space that has already been used from the total overwrite reserve space.

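A minimal sketch of the subtraction described above, using made-up byte counts to illustrate the arithmetic.

# Hypothetical illustration: available overwrite reserve is the total
# reserve minus the reserve space already consumed.
overwrite_reserve_total = 10_737_418_240  # bytes (example value)
overwrite_reserve_used = 2_147_483_648    # bytes (example value)
volume_overwrite_reserve_available = overwrite_reserve_total - overwrite_reserve_used
print(volume_overwrite_reserve_available)  # 8589934592
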
API Endpoint Metric Template REST api/private/cli/volume overwrite_reserve_total, overwrite_reserve_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter overwrite_reserve_total, overwrite_reserve_used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_overwrite_reserve_total","title":"volume_overwrite_reserve_total","text":"API Endpoint Metric Template REST api/private/cli/volume overwrite_reserve conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.overwrite-reserve conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_overwrite_reserve_used","title":"volume_overwrite_reserve_used","text":"API Endpoint Metric Template REST api/private/cli/volume overwrite_reserve_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.overwrite-reserve-used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_read_data","title":"volume_read_data","text":"

Bytes read per second

API Endpoint Metric Template REST api/cluster/counter/tables/volume bytes_readUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_read_latency","title":"volume_read_latency","text":"

Average latency in microseconds for the WAFL filesystem to process read request to the volume; not including request processing or network communication time

API Endpoint Metric Template REST api/cluster/counter/tables/volume read_latencyUnit: microsecType: averageBase: total_read_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_read_ops","title":"volume_read_ops","text":"

Number of read operations per second from the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume total_read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_sis_compress_saved","title":"volume_sis_compress_saved","text":"

The total disk space (in bytes) that is saved by compressing blocks on the referenced file system.

API Endpoint Metric Template REST api/private/cli/volume compression_space_saved conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-sis-attributes.compression-space-saved conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_sis_compress_saved_percent","title":"volume_sis_compress_saved_percent","text":"

Percentage of the total disk space that is saved by compressing blocks on the referenced file system

API Endpoint Metric Template REST api/private/cli/volume compression_space_saved_percent conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-sis-attributes.percentage-compression-space-saved conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_sis_dedup_saved","title":"volume_sis_dedup_saved","text":"

The total disk space (in bytes) that is saved by deduplication and file cloning.

API Endpoint Metric Template REST api/private/cli/volume dedupe_space_saved conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-sis-attributes.deduplication-space-saved conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_sis_dedup_saved_percent","title":"volume_sis_dedup_saved_percent","text":"

Percentage of the total disk space that is saved by deduplication and file cloning.

API Endpoint Metric Template REST api/private/cli/volume dedupe_space_saved_percent conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-sis-attributes.percentage-deduplication-space-saved conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_sis_total_saved","title":"volume_sis_total_saved","text":"

Total space saved (in bytes) in the volume due to deduplication, compression, and file cloning.

API Endpoint Metric Template REST api/private/cli/volume sis_space_saved conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-sis-attributes.total-space-saved conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_sis_total_saved_percent","title":"volume_sis_total_saved_percent","text":"

Percentage of total disk space that is saved by compressing blocks, deduplication and file cloning.

API Endpoint Metric Template REST api/private/cli/volume sis_space_saved_percent conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-sis-attributes.percentage-total-space-saved conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_size","title":"volume_size","text":"

Physical size of the volume, in bytes. The minimum size for a FlexVol volume is 20MB and the minimum size for a FlexGroup volume is 200MB per constituent. The recommended size for a FlexGroup volume is a minimum of 100GB per constituent. For all volumes, the default size is equal to the minimum size.

API Endpoint Metric Template REST api/private/cli/volume size conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.size conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_size_available","title":"volume_size_available","text":"API Endpoint Metric Template REST api/private/cli/volume available conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.size-available conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_size_total","title":"volume_size_total","text":"API Endpoint Metric Template REST api/private/cli/volume total conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.size-total conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_size_used","title":"volume_size_used","text":"API Endpoint Metric Template REST api/private/cli/volume used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.size-used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_size_used_percent","title":"volume_size_used_percent","text":"

Percentage of utilized storage space in a volume relative to its total capacity

API Endpoint Metric Template REST api/private/cli/volume percent_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.percentage-size-used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshot_count","title":"volume_snapshot_count","text":"

Number of Snapshot copies in the volume.

API Endpoint Metric Template REST api/private/cli/volume snapshot_count conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-snapshot-attributes.snapshot-count conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshot_reserve_available","title":"volume_snapshot_reserve_available","text":"API Endpoint Metric Template REST api/private/cli/volume snapshot_reserve_available conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.snapshot-reserve-available conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshot_reserve_percent","title":"volume_snapshot_reserve_percent","text":"API Endpoint Metric Template REST api/private/cli/volume percent_snapshot_space conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.percentage-snapshot-reserve conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshot_reserve_size","title":"volume_snapshot_reserve_size","text":"API Endpoint Metric Template REST api/private/cli/volume snapshot_reserve_size conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.snapshot-reserve-size conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshot_reserve_used","title":"volume_snapshot_reserve_used","text":"

Amount of storage space currently used by a volume's snapshot reserve, calculated by subtracting the snapshot reserve available space from the snapshot reserve size.

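The same pattern applies here, again sketched with hypothetical byte counts.

# Hypothetical illustration: snapshot reserve used is the reserve size
# minus the reserve space still available.
snapshot_reserve_size = 5_368_709_120       # bytes (example value)
snapshot_reserve_available = 1_073_741_824  # bytes (example value)
volume_snapshot_reserve_used = snapshot_reserve_size - snapshot_reserve_available
print(volume_snapshot_reserve_used)  # 4294967296
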
API Endpoint Metric Template REST api/private/cli/volume snapshot_reserve_size, snapshot_reserve_available conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter snapshot_reserve_size, snapshot_reserve_available conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshot_reserve_used_percent","title":"volume_snapshot_reserve_used_percent","text":"API Endpoint Metric Template REST api/private/cli/volume snapshot_space_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.percentage-snapshot-reserve-used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshots_size_available","title":"volume_snapshots_size_available","text":"API Endpoint Metric Template REST api/private/cli/volume size_available_for_snapshots conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.size-available-for-snapshots conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshots_size_used","title":"volume_snapshots_size_used","text":"API Endpoint Metric Template REST api/private/cli/volume size_used_by_snapshots conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.size-used-by-snapshots conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_expected_available","title":"volume_space_expected_available","text":"API Endpoint Metric Template REST api/private/cli/volume expected_available conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.expected-available conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_logical_available","title":"volume_space_logical_available","text":"API Endpoint Metric Template ZAPI volume-get-iter volume-attributes.volume-space-attributes.logical-available conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_logical_used","title":"volume_space_logical_used","text":"API Endpoint Metric Template REST api/private/cli/volume logical_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.logical-used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_logical_used_by_afs","title":"volume_space_logical_used_by_afs","text":"API Endpoint Metric Template REST api/private/cli/volume logical_used_by_afs conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.logical-used-by-afs conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_logical_used_by_snapshots","title":"volume_space_logical_used_by_snapshots","text":"API Endpoint Metric Template REST api/private/cli/volume logical_used_by_snapshots conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.logical-used-by-snapshots conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_logical_used_percent","title":"volume_space_logical_used_percent","text":"API Endpoint Metric Template REST api/private/cli/volume logical_used_percent conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.logical-used-percent conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_physical_used","title":"volume_space_physical_used","text":"API Endpoint Metric Template REST api/private/cli/volume physical_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.physical-used 
conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_physical_used_percent","title":"volume_space_physical_used_percent","text":"API Endpoint Metric Template REST api/private/cli/volume physical_used_percent conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.physical-used-percent conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_total_ops","title":"volume_total_ops","text":"

Number of operations per second serviced by the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_write_data","title":"volume_write_data","text":"

Bytes written per second

API Endpoint Metric Template REST api/cluster/counter/tables/volume bytes_writtenUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_write_latency","title":"volume_write_latency","text":"

Average latency in microseconds for the WAFL filesystem to process write request to the volume; not including request processing or network communication time

API Endpoint Metric Template REST api/cluster/counter/tables/volume write_latencyUnit: microsecType: averageBase: total_write_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_write_ops","title":"volume_write_ops","text":"

Number of write operations per second to the volume

API Endpoint Metric Template REST api/cluster/counter/tables/volume total_write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#vscan_scan_latency","title":"vscan_scan_latency","text":"

Average scan latency

API Endpoint Metric Template REST api/cluster/counter/tables/vscan scan.latencyUnit: microsecType: averageBase: scan.requests conf/restperf/9.13.0/vscan.yaml ZAPI perf-object-get-instances offbox_vscan_server scan_latencyUnit: microsecType: averageBase: scan_latency_base conf/zapiperf/cdot/9.8.0/vscan.yaml"},{"location":"ontap-metrics/#vscan_scan_request_dispatched_rate","title":"vscan_scan_request_dispatched_rate","text":"

Total number of scan requests sent to the scanner per second

API Endpoint Metric Template REST api/cluster/counter/tables/vscan scan.request_dispatched_rateUnit: per_secType: rateBase: conf/restperf/9.13.0/vscan.yaml ZAPI perf-object-get-instances offbox_vscan_server scan_request_dispatched_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/vscan.yaml"},{"location":"ontap-metrics/#vscan_scanner_stats_pct_cpu_used","title":"vscan_scanner_stats_pct_cpu_used","text":"

Percentage CPU utilization on scanner calculated over the last 15 seconds.

API Endpoint Metric Template REST api/cluster/counter/tables/vscan scanner.stats_percent_cpu_usedUnit: noneType: rawBase: conf/restperf/9.13.0/vscan.yaml ZAPI perf-object-get-instances offbox_vscan_server scanner_stats_pct_cpu_usedUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/vscan.yaml"},{"location":"ontap-metrics/#vscan_scanner_stats_pct_mem_used","title":"vscan_scanner_stats_pct_mem_used","text":"

Percentage RAM utilization on scanner calculated over the last 15 seconds.

API Endpoint Metric Template REST api/cluster/counter/tables/vscan scanner.stats_percent_mem_usedUnit: noneType: rawBase: conf/restperf/9.13.0/vscan.yaml ZAPI perf-object-get-instances offbox_vscan_server scanner_stats_pct_mem_usedUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/vscan.yaml"},{"location":"ontap-metrics/#vscan_scanner_stats_pct_network_used","title":"vscan_scanner_stats_pct_network_used","text":"

Percentage network utilization on scanner calculated for the last 15 seconds.

API Endpoint Metric Template REST api/cluster/counter/tables/vscan scanner.stats_percent_network_usedUnit: noneType: rawBase: conf/restperf/9.13.0/vscan.yaml ZAPI perf-object-get-instances offbox_vscan_server scanner_stats_pct_network_usedUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/vscan.yaml"},{"location":"ontap-metrics/#wafl_avg_msg_latency","title":"wafl_avg_msg_latency","text":"

Average turnaround time for WAFL messages in milliseconds.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl average_msg_latencyUnit: millisecType: averageBase: msg_total conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl avg_wafl_msg_latencyUnit: millisecType: averageBase: wafl_msg_total conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_avg_non_wafl_msg_latency","title":"wafl_avg_non_wafl_msg_latency","text":"

Average turnaround time for non-WAFL messages in milliseconds.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl average_non_wafl_msg_latencyUnit: millisecType: averageBase: non_wafl_msg_total conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl avg_non_wafl_msg_latencyUnit: millisecType: averageBase: non_wafl_msg_total conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_avg_repl_msg_latency","title":"wafl_avg_repl_msg_latency","text":"

Average turnaround time for replication WAFL messages in milliseconds.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl average_replication_msg_latencyUnit: millisecType: averageBase: replication_msg_total conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl avg_wafl_repl_msg_latencyUnit: millisecType: averageBase: wafl_repl_msg_total conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_cp_count","title":"wafl_cp_count","text":"

Array of counts of different types of Consistency Points (CP).

API Endpoint Metric Template REST api/cluster/counter/tables/wafl cp_countUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl cp_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_cp_phase_times","title":"wafl_cp_phase_times","text":"

Array of percentage time spent in different phases of Consistency Point (CP).

API Endpoint Metric Template REST api/cluster/counter/tables/wafl cp_phase_timesUnit: percentType: percentBase: total_cp_msecs conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl cp_phase_timesUnit: percentType: percentBase: total_cp_msecs conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_memory_free","title":"wafl_memory_free","text":"

The current WAFL memory available in the system.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl memory_freeUnit: mbType: rawBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_memory_freeUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_memory_used","title":"wafl_memory_used","text":"

The current WAFL memory used in the system.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl memory_usedUnit: mbType: rawBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_memory_usedUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_msg_total","title":"wafl_msg_total","text":"

Total number of WAFL messages per second.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl msg_totalUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_msg_totalUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_non_wafl_msg_total","title":"wafl_non_wafl_msg_total","text":"

Total number of non-WAFL messages per second.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl non_wafl_msg_totalUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl non_wafl_msg_totalUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_read_io_type","title":"wafl_read_io_type","text":"

Percentage of reads served from buffer cache, external cache, or disk.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl read_io_typeUnit: percentType: percentBase: read_io_type_base conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl read_io_typeUnit: percentType: percentBase: read_io_type_base conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_cache","title":"wafl_reads_from_cache","text":"

WAFL reads from cache.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_cacheUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_cacheUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_cloud","title":"wafl_reads_from_cloud","text":"

WAFL reads from cloud storage.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_cloudUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_cloudUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_cloud_s2c_bin","title":"wafl_reads_from_cloud_s2c_bin","text":"

WAFL reads from cloud storage via s2c bin.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_cloud_s2c_binUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_cloud_s2c_binUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_disk","title":"wafl_reads_from_disk","text":"

WAFL reads from disk.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_diskUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_diskUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_ext_cache","title":"wafl_reads_from_ext_cache","text":"

WAFL reads from external cache.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_external_cacheUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_ext_cacheUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_fc_miss","title":"wafl_reads_from_fc_miss","text":"

WAFL reads from remote volume for fc_miss.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_fc_missUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_fc_missUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_pmem","title":"wafl_reads_from_pmem","text":"

WAFL reads from persistent memory.

API Endpoint Metric Template ZAPI perf-object-get-instances wafl wafl_reads_from_pmemUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_ssd","title":"wafl_reads_from_ssd","text":"

WAFL reads from SSD.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_ssdUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_ssdUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_repl_msg_total","title":"wafl_repl_msg_total","text":"

Total number of replication WAFL messages per second.

API Endpoint Metric Template REST api/cluster/counter/tables/wafl replication_msg_totalUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_repl_msg_totalUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_total_cp_msecs","title":"wafl_total_cp_msecs","text":"

Milliseconds spent in Consistency Point (CP).

API Endpoint Metric Template REST api/cluster/counter/tables/wafl total_cp_msecsUnit: millisecType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl total_cp_msecsUnit: millisecType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_total_cp_util","title":"wafl_total_cp_util","text":"

Percentage of time spent in a Consistency Point (CP).

API Endpoint Metric Template REST api/cluster/counter/tables/wafl total_cp_utilUnit: percentType: percentBase: cpu_elapsed_time conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl total_cp_utilUnit: percentType: percentBase: cpu_elapsed_time conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"plugins/","title":"Plugins","text":""},{"location":"plugins/#built-in-plugins","title":"Built-in Plugins","text":"

The plugin feature allows users to manipulate and customize data collected by collectors without changing the collectors. Plugins have the same capabilities as collectors and therefore can collect data on their own as well. Furthermore, multiple plugins can be put in a pipeline to perform more complex operations.
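For example, here is a minimal sketch of such a pipeline in a hypothetical volume template, combining a LabelAgent rule with an Aggregator rule (the syntax of each plugin is described in the sections below):

plugins:\nLabelAgent:\nreplace:\n- node node_short `node_` ``\nAggregator:\n- node\n# LabelAgent derives a \"node_short\" label first, then Aggregator\n# creates node-level aggregations of the volume metrics\n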

Harvest architecture defines two types of plugins:

Built-in generic - Statically compiled, generic plugins. \"Generic\" means the plugin is collector-agnostic. These plugins are provided in this package and listed in the right sidebar.

Built-in custom - These plugins are statically compiled, collector-specific plugins. Their source code should reside inside the plugins/ subdirectory of the collector package (e.g. [cmd/collectors/rest/plugins/svm/svm.go](https://github.com/NetApp/harvest/blob/main/cmd/collectors/rest/plugins/svm/svm.go)). Custom plugins have access to all the parameters of their parent collector and should therefore be treated with great care.

This documentation gives an overview of builtin plugins. For other plugins, see their respective documentation. For writing your own plugin, see Developer's documentation.

Note: the rules are executed in the same order as you've added them.

"},{"location":"plugins/#aggregator","title":"Aggregator","text":"

Aggregator creates a new collection of metrics (Matrix) by summarizing and/or averaging metric values from an existing Matrix for a given label. For example, if the collected metrics are for volumes, you can create an aggregation for nodes or svms.

"},{"location":"plugins/#rule-syntax","title":"Rule syntax","text":"

simplest case:

plugins:\nAggregator:\n- LABEL\n# will aggregate a new Matrix based on target label LABEL\n

If you want to specify which labels should be included in the new instances, you can add those space-separated after LABEL:

    - LABEL LABEL1,LABEL2\n# same, but LABEL1 and LABEL2 will be copied into the new instances\n# (default is to only copy LABEL and any global labels (such as cluster and datacenter)\n

Or include all labels:

    - LABEL ...\n# copy all labels of the original instance\n

By default, aggregated metrics will be prefixed with LABEL. For example, if the object of the original Matrix is volume (meaning metrics are prefixed with volume_) and LABEL is aggr, then the metric volume_read_ops will become aggr_volume_read_ops, etc. You can override this using the <>OBJ syntax shown below:

    - LABEL<>OBJ\n# use OBJ as the object of the new matrix, e.g. if the original object is \"volume\" and you\n# want to leave metric names unchanged, use \"volume\"\n

Finally, sometimes you only want to aggregate instances with a specific label value. You can use <VALUE> for that (optionally followed by OBJ):

    - LABEL<VALUE>\n# aggregate all instances if LABEL has value VALUE\n- LABEL<`VALUE`>\n# same, but VALUE is regular expression\n- LABEL<LABELX=`VALUE`>\n# same, but check against \"LABELX\" (instead of \"LABEL\")\n

Examples:

plugins:\nAggregator:\n# will aggregate metrics of the aggregate. The labels \"node\" and \"type\" are included in the new instances\n- aggr node type\n# aggregate instances if label \"type\" has value \"flexgroup\"\n# include all original labels\n- type<flexgroup> ...\n# aggregate all instances if value of \"volume\" ends with underscore and 4 digits\n- volume<`_\\d{4}$`>\n
"},{"location":"plugins/#aggregation-rules","title":"Aggregation rules","text":"

The plugin tries to intelligently aggregate metrics based on a few rules:

  • Sum - the default rule, if no other rules apply
  • Average - if any of the following is true:
    • metric name has suffix _percent or _percentage
    • metric name has prefix average_ or avg_
    • metric has property (metric.GetProperty()) percent or average
  • Weighted Average - applied if the metric has the property average, the suffix _latency, and there is a matching _ops metric. (This currently only applies to ZapiPerf metrics, which use the Property field of metrics.) See the worked example after this list.
  • Ignore - metrics created by some plugins, such as value_to_num by LabelAgent
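For example, aggregating two volumes with write latencies of 100 and 300 microseconds and write ops of 10/s and 30/s yields a node-level latency of (100*10 + 300*30)/(10 + 30) = 250 microseconds, rather than the plain average of 200.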
"},{"location":"plugins/#max","title":"Max","text":"

Max creates a new collection of metrics (Matrix) by calculating the max of metric values from an existing Matrix for a given label. For example, if the collected metrics are for disks, you can create a max at the node or aggregate level. Refer to Max Examples for more details.

"},{"location":"plugins/#max-rule-syntax","title":"Max Rule syntax","text":"

simplest case:

plugins:\nMax:\n- LABEL\n# create a new Matrix of max values on target label LABEL\n

If you want to specify which labels should be included in the new instances, you can add those space-separated after LABEL:

    - LABEL LABEL1,LABEL2\n# similar to the above example, but LABEL1 and LABEL2 will be copied into the new instances\n# (default is to only copy LABEL and all global labels (such as cluster and datacenter)\n

Or include all labels:

    - LABEL ...\n# copy all labels of the original instance\n

By default, metrics will be prefixed with LABEL. For example, if the object of the original Matrix is volume (meaning metrics are prefixed with volume_) and LABEL is aggr, then the metric volume_read_ops will become aggr_volume_read_ops. You can override this using the <>OBJ pattern shown below:

    - LABEL<>OBJ\n# use OBJ as the object of the new matrix, e.g. if the original object is \"volume\" and you\n# want to leave metric names unchanged, use \"volume\"\n

Finally, sometimes you only want to generate instances with a specific label value. You can use <VALUE> for that (optionally followed by OBJ):

    - LABEL<VALUE>\n# aggregate all instances if LABEL has value VALUE\n- LABEL<`VALUE`>\n# same, but VALUE is regular expression\n- LABEL<LABELX=`VALUE`>\n# same, but check against \"LABELX\" (instead of \"LABEL\")\n
"},{"location":"plugins/#max-examples","title":"Max Examples","text":"
plugins:\nMax:\n# will create max of each aggregate metric. All metrics will be prefixed with aggr_disk_max. All labels are included in the new instances\n- aggr<>aggr_disk_max ...\n# calculate max instances if label \"disk\" has value \"1.1.0\". Prefix with disk_max\n# include all original labels\n- disk<1.1.0>disk_max ...\n# max of all instances if value of \"volume\" ends with underscore and 4 digits\n- volume<`_\\d{4}$`>\n
"},{"location":"plugins/#labelagent","title":"LabelAgent","text":"

LabelAgent is used to manipulate instance labels based on rules. You can define multiple rules; here is an example of what you could add to the yaml file of a collector:

plugins:\nLabelAgent:\n# our rules:\nsplit: node `/` ,aggr,plex,disk\nreplace_regex: node node `^(node)_(\\d+)_.*$` `Node-$2`\n

Note: When a rule creates a new label, refer to existing labels by the name defined on the right side of => in the template. If there is no rename (no =>), use the name on the left side.
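For illustration, here is a hypothetical ZAPI counter snippet (the counter and label names below are made up for this example) together with a LabelAgent rule that refers to the renamed label:

counters:\nvolume-attributes:\n- ^volume-id-attributes:\n- ^owning-vserver-name => svm\n\nplugins:\nLabelAgent:\nreplace:\n- svm svm_short `svm_` ``\n# refer to the label as \"svm\" (the right side of =>), not \"owning-vserver-name\"\n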

"},{"location":"plugins/#split","title":"split","text":"

Rule syntax:

split:\n- LABEL `SEP` LABEL1,LABEL2,LABEL3\n# source label - separator - comma-separated target labels\n

Splits the value of a given label by separator SEP and creates new labels if their number matches the number of target labels defined in the rule. To discard a subvalue, just add a redundant , in the names of the target labels.

Example:

split:\n- node `/` ,aggr,plex,disk\n# will split the value of \"node\" using separator \"/\"\n# will expect 4 values: first will be discarded, remaining\n# three will be stored as labels \"aggr\", \"plex\" and \"disk\"\n
"},{"location":"plugins/#split_regex","title":"split_regex","text":"

Does the same as split but uses a regular expression instead of a separator.

Rule syntax:

split_regex:\n- LABEL `REGEX` LABEL1,LABEL2,LABEL3\n

Example:

split_regex:\n- node `.*_(ag\\d+)_(p\\d+)_(d\\d+)` aggr,plex,disk\n# will look for \"_ag\", \"_p\", \"_d\", each followed by one\n# or more numbers, if there is a match, the submatches\n# will be stored as \"aggr\", \"plex\" and \"disk\"\n
"},{"location":"plugins/#split_pairs","title":"split_pairs","text":"

Rule syntax:

split_pairs:\n- LABEL `SEP1` `SEP2`\n# source label - pair separator - key-value separator\n

Extracts key-value pairs from the value of source label LABEL. Note that you need to add these keys in the export options, otherwise they will not be exported.

Example:

split_pairs:\n- comment ` ` `:`\n# will split pairs using a single space and split key-values using colon\n# e.g. if comment=\"owner:jack contact:some@email\", the result will be\n# two new labels: owner=\"jack\" and contact=\"some@email\"\n
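Continuing the example above, here is a sketch of the export options needed so that the new owner and contact labels are exported (assuming the instance_labels list described in the template documentation):

export_options:\ninstance_labels:\n- owner\n- contact\n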
"},{"location":"plugins/#join","title":"join","text":"

Join multiple label values using separator SEP and create a new label.

Rule syntax:

join:\n- LABEL `SEP` LABEL1,LABEL2,LABEL3\n# target label - separator - comma-separated source labels\n

Example:

join:\n- plex_long `_` aggr,plex\n# will look for the values of labels \"aggr\" and \"plex\",\n# if they are set, a new \"plex_long\" label will be added\n# by joining their values with \"_\"\n
"},{"location":"plugins/#replace","title":"replace","text":"

Substitute substring OLD with NEW in label SOURCE and store in TARGET. Note that target and source labels can be the same.

Rule syntax:

replace:\n- SOURCE TARGET `OLD` `NEW`\n# source label - target label - substring to replace - replace with\n

Example:

replace:\n- node node_short `node_` ``\n# this rule will just remove \"node_\" from all values of label\n# \"node\". E.g. if label is \"node_jamaica1\", it will rewrite it \n# as \"jamaica1\"\n
"},{"location":"plugins/#replace_regex","title":"replace_regex","text":"

Same as replace, but will use a regular expression instead of OLD. Note that you can use $n to specify the nth submatch in NEW.

Rule syntax:

replace_regex:\n- SOURCE TARGET `REGEX` `NEW`\n# source label - target label - substring to replace - replace with\n

Example:

replace_regex:\n- node node `^(node)_(\\d+)_.*$` `Node-$2`\n# if there is a match, will capitalize \"Node\" and remove suffixes.\n# E.g. if label is \"node_10_dc2\", it will rewrite it\n# as \"Node-10\"\n
"},{"location":"plugins/#exclude_equals","title":"exclude_equals","text":"

Exclude each instance if the value of LABEL is exactly VALUE. Exclude means that metrics for this instance will not be exported.

Rule syntax:

exclude_equals:\n- LABEL `VALUE`\n# label name - label value\n

Example:

exclude_equals:\n- vol_type `flexgroup_constituent`\n# all instances, which have label \"vol_type\" with value\n# \"flexgroup_constituent\" will not be exported\n
"},{"location":"plugins/#exclude_contains","title":"exclude_contains","text":"

Same as exclude_equals, but any instance whose LABEL value contains VALUE will be excluded.

Rule syntax:

exclude_contains:\n- LABEL `VALUE`\n# label name - label value\n

Example:

exclude_contains:\n- vol_type `flexgroup_`\n# all instances, which have label \"vol_type\" which contain\n# \"flexgroup_\" will not be exported\n
"},{"location":"plugins/#exclude_regex","title":"exclude_regex","text":"

Same as exclude_equals, but will use a regular expression and all matching instances will be excluded.

Rule syntax:

exclude_regex:\n- LABEL `REGEX`\n# label name - regular expression\n

Example:

exclude_regex:\n- vol_type `^flex`\n# all instances, which have label \"vol_type\" which starts with\n# \"flex\" will not be exported\n
"},{"location":"plugins/#include_equals","title":"include_equals","text":"

Include each instance if the value of LABEL is exactly VALUE. Include means that metrics for this instance will be exported and instances that do not match will not be exported.

Rule syntax:

include_equals:\n- LABEL `VALUE`\n# label name - label value\n

Example:

include_equals:\n- vol_type `flexgroup_constituent`\n# all instances, which have label \"vol_type\" with value\n# \"flexgroup_constituent\" will be exported\n
"},{"location":"plugins/#include_contains","title":"include_contains","text":"

Same as include_equals, but any instance whose LABEL value contains VALUE will be included.

Rule syntax:

include_contains:\n- LABEL `VALUE`\n# label name - label value\n

Example:

include_contains:\n- vol_type `flexgroup_`\n# all instances, which have label \"vol_type\" which contain\n# \"flexgroup_\" will be exported\n
"},{"location":"plugins/#include_regex","title":"include_regex","text":"

Same as include_equals, but a regular expression will be used for inclusion. Similar to the other includes, all matching instances will be included and all non-matching will not be exported.

Rule syntax:

include_regex:\n- LABEL `REGEX`\n# label name - regular expression\n

Example:

include_regex:\n- vol_type `^flex`\n# all instances, which have label \"vol_type\" which starts with\n# \"flex\" will be exported\n
"},{"location":"plugins/#value_mapping","title":"value_mapping","text":"

value_mapping was deprecated in 21.11 and removed in 22.02. Use value_to_num mapping instead.

"},{"location":"plugins/#value_to_num","title":"value_to_num","text":"

Map values of a given label to a numeric metric (of type uint8). Healthy values are mapped to 1 and all non-healthy values are mapped to 0.

This is handy to manipulate the data in the DB or Grafana (e.g. change color based on status or create alert).

Note that you don't define the numeric values yourself; instead, you only provide the possible (expected) values, and the plugin will map each value to its index in the rule.

Rule syntax:

value_to_num:\n- METRIC LABEL ZAPI_VALUE REST_VALUE `N`\n# map values of LABEL to 1 if it is ZAPI_VALUE or REST_VALUE\n# otherwise, value of METRIC is set to N\n

The default value N is optional; if no default value is given and the label value does not match any of the given values, the metric value will not be set.

Examples:

value_to_num:\n- status state up online `0`\n# a new metric will be created with the name \"status\"\n# if an instance has label \"state\" with value \"up\", the metric value will be 1,\n# if it's \"online\", the value will be set to 1,\n# if it's any other value, it will be set to the specified default, 0\n
value_to_num:\n- status state up online `4`\n# metric value will be set to 1 if \"state\" is \"up\", otherwise to **4**\n
value_to_num:\n- status outage - - `0` #ok_value is empty value. \n# metric value will be set to 1 if \"outage\" is empty, if it's any other value, it will be set to the default, 0\n# '-' is a special symbol in this mapping, and it will be converted to blank while processing.\n
"},{"location":"plugins/#value_to_num_regex","title":"value_to_num_regex","text":"

Same as value_to_num, but will use a regular expression. All matches are mapped to 1 and non-matches are mapped to 0.

This is handy to manipulate the data in the DB or Grafana (e.g. change color based on status or create alert).

Note that you don't define the numeric values; instead, you provide the expected values, and the plugin will map each value to its index in the rule.

Rule syntax:

value_to_num_regex:\n- METRIC LABEL ZAPI_REGEX REST_REGEX `N`\n# map values of LABEL to 1 if it matches ZAPI_REGEX or REST_REGEX\n# otherwise, value of METRIC is set to N\n

The default value N is optional; if no default value is given and the label value does not match any of the given values, the metric value will not be set.

Examples:

value_to_num_regex:\n- certificateuser methods .*cert.*$ .*certificate.*$ `0`\n# a new metric will be created with the name \"certificateuser\"\n# if an instance has label \"methods\" with value contains \"cert\", the metric value will be 1,\n# if value contains \"certificate\", the value will be set to 1,\n# if value doesn't contain \"cert\" and \"certificate\", it will be set to the specified default, 0\n
value_to_num_regex:\n- status state ^up$ ^ok$ `4`\n# metric value will be set to 1 if label \"state\" matches regex, otherwise set to **4**\n
"},{"location":"plugins/#metricagent","title":"MetricAgent","text":"

MetricAgent is used to manipulate metrics based on rules. You can define multiple rules; here is an example of what you could add to the yaml file of a collector:

plugins:\nMetricAgent:\ncompute_metric:\n- snapshot_maxfiles_possible ADD snapshot.max_files_available snapshot.max_files_used\n- raid_disk_count ADD block_storage.primary.disk_count block_storage.hybrid_cache.disk_count\n

Note: Metric names used to create new metrics can come from the left or right side of the rename operator (=>). Note: The metric agent currently does not work for histogram or array metrics.

"},{"location":"plugins/#compute_metric","title":"compute_metric","text":"

This rule creates a new metric (of type float64) using the provided scalar or an existing metric value combined with a mathematical operation.

You can provide a numeric value or a metric name with an operation. The plugin will use the provided number or fetch the value of a given metric, perform the requested mathematical operation, and store the result in a new custom metric.

Currently, we support these operations: ADD SUBTRACT MULTIPLY DIVIDE PERCENT

Rule syntax:

compute_metric:\n- METRIC OPERATION METRIC1 METRIC2 METRIC3\n# target new metric - mathematical operation - input metric names \n# apply OPERATION on metric values of METRIC1, METRIC2 and METRIC3 and set result in METRIC\n# METRIC1, METRIC2, METRIC3 can be a scalar or an existing metric name.\n

Examples:

compute_metric:\n- space_total ADD space_available space_used\n# a new metric will be created with the name \"space_total\"\n# if an instance has metric \"space_available\" with value \"1000\", and \"space_used\" with value \"400\",\n# the result value will be \"1400\" and set to metric \"space_total\".\n
compute_metric:\n- disk_count ADD primary.disk_count secondary.disk_count hybrid.disk_count\n# value of metric \"disk_count\" would be addition of all the given disk_counts metric values.\n# disk_count = primary.disk_count + secondary.disk_count + hybrid.disk_count\n
compute_metric:\n- files_available SUBTRACT files files_used\n# value of metric \"files_available\" would be subtraction of the metric value of files_used from metric value of files.\n# files_available = files - files_used\n
compute_metric:\n- total_bytes MULTIPLY bytes_per_sector sector_count\n# value of metric \"total_bytes\" would be multiplication of metric value of bytes_per_sector and metric value of sector_count.\n# total_bytes = bytes_per_sector * sector_count\n
compute_metric:\n- uptime MULTIPLY stats.power_on_hours 60 60\n# value of metric \"uptime\" would be multiplication of metric value of stats.power_on_hours and scalar value of 60 * 60.\n# uptime = stats.power_on_hours * 60 * 60\n
compute_metric:\n- transmission_rate DIVIDE transfer.bytes_transferred transfer.total_duration\n# value of metric \"transmission_rate\" would be division of metric value of transfer.bytes_transferred by metric value of transfer.total_duration.\n# transmission_rate = transfer.bytes_transferred / transfer.total_duration\n
compute_metric:\n- inode_used_percent PERCENT inode_files_used inode_files_total\n# a new metric named \"inode_used_percent\" will be created by dividing the metric \"inode_files_used\" by \n#  \"inode_files_total\" and multiplying the result by 100.\n# inode_used_percent = inode_files_used / inode_files_total * 100\n
"},{"location":"plugins/#changelog-plugin","title":"ChangeLog Plugin","text":"

The ChangeLog plugin is a feature of Harvest, designed to detect and track changes related to the creation, modification, and deletion of an object. By default, it supports volume, svm, and node objects. Its functionality can be extended to track changes in other objects by making relevant changes in the template.

Please note that the ChangeLog plugin requires the uuid label, which is unique, to be collected by the template. Without the uuid label, the plugin will not function.

The ChangeLog feature only detects changes when Harvest is up and running. It does not detect changes that occur when Harvest is down. Additionally, the plugin does not detect changes in metric values.

"},{"location":"plugins/#enabling-the-plugin","title":"Enabling the Plugin","text":"

The plugin can be enabled in the templates under the plugins section.

For volume, svm, and node objects, you can enable the plugin with the following configuration:

plugins:\n- ChangeLog\n

For other objects, you need to specify the labels to track in the plugin configuration. These labels should be relevant to the object you want to track. If these labels are not specified in the template, the plugin will not be able to track changes for the object.

Here's an example of how to enable the plugin for an aggregate object:

plugins:\n- ChangeLog:\ntrack:\n- aggr\n- node\n- state\n

In the above configuration, the plugin will track changes in the aggr, node, and state labels for the aggregate object.

"},{"location":"plugins/#default-tracking-for-svm-node-volume","title":"Default Tracking for svm, node, volume","text":"

By default, the plugin tracks changes in the following labels for svm, node, and volume objects:

  • svm: svm, state, type, anti_ransomware_state
  • node: node, location, healthy
  • volume: node, volume, svm, style, type, aggr, state, status

Other objects are not tracked by default.

These default settings can be overwritten as needed in the relevant templates. For instance, if you want to track junction_path labels for Volume, you can overwrite this in the volume template.

plugins:\n- ChangeLog:\ntrack:\n- node\n- volume\n- svm\n- style\n- type\n- aggr\n- state\n- status\n- junction_path\n
"},{"location":"plugins/#change-types-and-metrics","title":"Change Types and Metrics","text":"

The ChangeLog plugin publishes a metric with various labels providing detailed information about the change when an object is created, modified, or deleted.

"},{"location":"plugins/#object-creation","title":"Object Creation","text":"

When a new object is created, the ChangeLog plugin will publish a metric with the following labels:

The metric includes the following labels: object (name of the ONTAP object that was changed) and op (type of change that was made). The metric value is the timestamp when Harvest captured the change (1698735558 in the example below).

Example of metric shape for object creation:

change_log{aggr=\"umeng_aff300_aggr2\", cluster=\"umeng-aff300-01-02\", datacenter=\"u2\", index=\"0\", instance=\"localhost:12993\", job=\"prometheus\", node=\"umeng-aff300-01\", object=\"volume\", op=\"create\", style=\"flexvol\", svm=\"harvest\", volume=\"harvest_demo\"} 1698735558\n
"},{"location":"plugins/#object-modification","title":"Object Modification","text":"

When an existing object is modified, the ChangeLog plugin will publish a metric with the following labels:

The metric includes the following labels: object (name of the ONTAP object that was changed), op (type of change that was made), track (property of the object which was modified), new_value (new value of the object after the change), and old_value (previous value of the object before the change). The metric value is the timestamp when Harvest captured the change (1698735677 in the example below).

Example of metric shape for object modification:

change_log{aggr=\"umeng_aff300_aggr2\", cluster=\"umeng-aff300-01-02\", datacenter=\"u2\", index=\"1\", instance=\"localhost:12993\", job=\"prometheus\", new_value=\"offline\", node=\"umeng-aff300-01\", object=\"volume\", old_value=\"online\", op=\"update\", style=\"flexvol\", svm=\"harvest\", track=\"state\", volume=\"harvest_demo\"} 1698735677\n
"},{"location":"plugins/#object-deletion","title":"Object Deletion","text":"

When an object is deleted, the ChangeLog plugin will publish a metric with the following labels:

The metric includes the following labels: object (name of the ONTAP object that was changed) and op (type of change that was made). The metric value is the timestamp when Harvest captured the change (1698735708 in the example below).

Example of metric shape for object deletion:

change_log{aggr=\"umeng_aff300_aggr2\", cluster=\"umeng-aff300-01-02\", datacenter=\"u2\", index=\"2\", instance=\"localhost:12993\", job=\"prometheus\", node=\"umeng-aff300-01\", object=\"volume\", op=\"delete\", style=\"flexvol\", svm=\"harvest\", volume=\"harvest_demo\"} 1698735708\n
"},{"location":"plugins/#viewing-the-metrics","title":"Viewing the Metrics","text":"

You can view the metrics published by the ChangeLog plugin in the ChangeLog Monitor dashboard in Grafana. This dashboard provides a visual representation of the changes tracked by the plugin for volume, svm, and node objects.

"},{"location":"prepare-7mode-clusters/","title":"ONTAP 7mode","text":"

NetApp Harvest requires login credentials to access monitored hosts. Although a generic admin account can be used, it is best practice to create a dedicated monitoring account with least-privilege access.

ONTAP 7-mode supports only username/password based authentication with NetApp Harvest. Harvest communicates with monitored systems exclusively via HTTPS, which is not enabled by default in Data ONTAP 7-mode. Log in as a user with full administrative privileges and execute the following steps.

"},{"location":"prepare-7mode-clusters/#enabling-https-and-tls-ontap-7-mode-only","title":"Enabling HTTPS and TLS (ONTAP 7-mode only)","text":"

Verify SSL is configured

secureadmin status ssl\n

If ssl is \u2018active\u2019, continue. If not, set up SSL and be sure to choose a Key length (bits) of 2048:

secureadmin setup ssl\n
SSL Setup has already been done before. Do you want to proceed? [no] yes\nCountry Name (2 letter code) [US]: NL\nState or Province Name (full name) [California]: Noord-Holland\nLocality Name (city, town, etc.) [Santa Clara]: Schiphol\nOrganization Name (company) [Your Company]: NetApp\nOrganization Unit Name (division): SalesEngineering\nCommon Name (fully qualified domain name) [sdt-7dot1a.nltestlab.hq.netapp.com]:\nAdministrator email: noreply@netapp.com\nDays until expires [5475] :5475 Key length (bits) [512] :2048\n

Enable management via SSL and enable TLS

options httpd.admin.ssl.enable on\noptions tls.enable on  \n
"},{"location":"prepare-7mode-clusters/#creating-ontap-user","title":"Creating ONTAP user","text":""},{"location":"prepare-7mode-clusters/#create-the-role-with-required-capabilities","title":"Create the role with required capabilities","text":"
role add netapp-harvest-role -c \"Role for performance monitoring by NetApp Harvest\" -a login-http-admin,api-system-get-version,api-system-get-info,api-perf-object-*,api-emsautosupport-log \n
"},{"location":"prepare-7mode-clusters/#create-a-group-for-this-role","title":"Create a group for this role","text":"
useradmin group add netapp-harvest-group -c \"Group for performance monitoring by NetApp Harvest\" -r netapp-harvest-role \n
"},{"location":"prepare-7mode-clusters/#create-a-user-for-the-role-and-enter-the-password-when-prompted","title":"Create a user for the role and enter the password when prompted","text":"
useradmin user add netapp-harvest -c \"User account for performance monitoring by NetApp Harvest\" -n \"NetApp Harvest\" -g netapp-harvest-group\n

The user is now created and can be configured for use by NetApp Harvest.

"},{"location":"prepare-cdot-clusters/","title":"ONTAP cDOT","text":""},{"location":"prepare-cdot-clusters/#prepare-ontap-cdot-cluster","title":"Prepare ONTAP cDOT cluster","text":"

NetApp Harvest requires login credentials to access monitored hosts. Although a generic admin account can be used, it is better to create a dedicated read-only monitoring account.

In the examples below, the user, group, roles, etc., use a naming convention of netapp-harvest. These can be modified as needed to match your organizational needs.

There are a few steps required to prepare each system for monitoring. Harvest supports two authentication styles (auth_style) to connect to ONTAP clusters: basic_auth or certificate_auth. Both work well, but if you're starting fresh, the recommendation is to create a read-only harvest user on your ONTAP server and use certificate-based TLS authentication.

Here's a summary of what we're going to do

  1. Create a read-only ONTAP role with the necessary capabilities that Harvest will use to authenticate and collect data
  2. Create a user account using the role created in step #1
  3. Update the harvest.yml file to use the user account and password created in step #2 and start Harvest.

There are two ways to create a read-only ONTAP role. Pick the one that best fits your needs.

  • Create a role with read-only access to all API objects via System Manager.
  • Create a role with read-only access to the limited set of APIs Harvest collects via ONTAP's command line interface (CLI).
"},{"location":"prepare-cdot-clusters/#system-manager","title":"System Manager","text":"

Open System Manager. Click on CLUSTER in the left menu bar, then Settings and Users and Roles.

In the right column, under Roles, click on Add to add a new role.

Choose a role name (e.g. harvest2-role). In the REST API PATH field, type /api and select Read-Only for ACCESS. Click on Save.

In the left column, under Users, click on Add to create a new user. Choose a username. Under Role, select the role that we just created. Under User Login Methods select ONTAPI, and one of the two authentication methods. Press the Add button and select HTTP and one of the authentication methods. Type in a password if you chose Password. Click on Save

If you chose Password, you can add the username and password to the Harvest configuration file and start Harvest. If you chose Certificate, jump to Using Certificate Authentication to generate certificate files.
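For example, a minimal poller sketch using password authentication might look like this in your harvest.yml (the poller name, address, and credentials below are placeholders):

  cluster-01:\n    auth_style: basic_auth\n    addr: cluster-01.example.com\n    username: harvest2\n    password: change-me\n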

System Manager Classic interface

Open System Manager. Click on the Settings icon in the top-right corner of the window.

Click on Roles in the left menu bar and click Add. Choose a role name (e.g. harvest2-role).

Under Role Attributes click on Add, under Command type DEFAULT, leave Query empty, select readonly under Access Level, click on OK and Add.

After you click on Add, this is what you should see:

Now we need to create a user. Click on Users in the left menu bar and Add. Choose a username and password. Under User Login Methods click on Add, select ontapi as Application and select the role that we just created as Role. Repeat by clicking on Add, select http as Application and select the role that we just created as Role. Click on Add in the pop-up window to save.

"},{"location":"prepare-cdot-clusters/#ontap-cli","title":"ONTAP CLI","text":"

We are going to:

  1. create a Harvest role with read-only access to a limited set of objects
  2. create a Harvest user and assign it to that role

Log in to the CLI of your cDOT ONTAP system using SSH.

"},{"location":"prepare-cdot-clusters/#least-privilege-approach","title":"Least-privilege approach","text":"

Verify there are no errors when you copy/paste these. Warnings are fine.

security login role create -role harvest2-role -access readonly -cmddirname \"cluster\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"lun\"    \nsecurity login role create -role harvest2-role -access readonly -cmddirname \"network interface\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"qos adaptive-policy-group\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"qos policy-group\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"qos workload show\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"security\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"snapmirror\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"statistics\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"storage aggregate\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"storage disk\"     \nsecurity login role create -role harvest2-role -access readonly -cmddirname \"storage encryption disk\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"storage shelf\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"system health status show\" \nsecurity login role create -role harvest2-role -access readonly -cmddirname \"system health subsystem show\"  \nsecurity login role create -role harvest2-role -access readonly -cmddirname \"system node\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"version\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"volume\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"vserver\"\n
"},{"location":"prepare-cdot-clusters/#create-harvest-user-and-associate-with-the-harvest-role","title":"Create harvest user and associate with the harvest role","text":"

Use this for password authentication

security login create -user-or-group-name harvest2 -application ontapi -role harvest2-role -authentication-method password\nsecurity login create -user-or-group-name harvest2 -application http -role harvest2-role -authentication-method password   

Or this for certificate authentication

security login create -user-or-group-name harvest2 -application ontapi -role harvest2-role -authentication-method cert\nsecurity login create -user-or-group-name harvest2 -application http -role harvest2-role -authentication-method cert 

Check that the harvest role has web access for ONTAPI and REST.

vserver services web access show -role harvest2-role -name ontapi\nvserver services web access show -role harvest2-role -name rest\nvserver services web access show -role harvest2-role -name docs-api\n

If any entry is missing, enable access by running the following. Replace $ADMIN_VSERVER with your SVM admin name.

vserver services web access create -vserver $ADMIN_VSERVER -name ontapi -role harvest2-role\nvserver services web access create -vserver $ADMIN_VSERVER -name rest -role harvest2-role\nvserver services web access create -vserver $ADMIN_VSERVER -name docs-api -role harvest2-role\n

"},{"location":"prepare-cdot-clusters/#7-mode-cli","title":"7-Mode CLI","text":"

Log in to the CLI of your 7-Mode ONTAP system (e.g. using SSH). First, we create a user role. If you want to give the user read-only access to all API objects, type in the following command:

useradmin role modify harvest2-role -a login-http-admin,api-system-get-version, \\\napi-system-get-info,api-perf-object-*,api-ems-autosupport-log,api-diagnosis-status-get, \\\napi-lun-list-info,api-diagnosis-subsystem-config-get-iter,api-disk-list-info, \\\napi-diagnosis-config-get-iter,api-aggr-list-info,api-volume-list-info, \\\napi-storage-shelf-environment-list-info,api-qtree-list,api-quota-report\n
"},{"location":"prepare-cdot-clusters/#using-certificate-authentication","title":"Using Certificate Authentication","text":"

See comments here for troubleshooting client certificate authentication.

Client certificate authentication allows you to authenticate with your ONTAP cluster without including usernames/passwords in your harvest.yml file. The process to set up client certificates is straightforward, although self-signed certificates introduce more work, as does Go's strict treatment of common names.

Unless you've installed production certificates on your ONTAP cluster, you'll need to replace your cluster's common-name-based self-signed certificates with a subject alternative name-based certificate. After that step is completed, we'll create client certificates and add those for passwordless login.

If you can't or don't want to replace your ONTAP cluster certificates, there are some workarounds. You can

  • Use use_insecure_tls: true in your harvest.yml to disable certificate verification (see the sketch after this list)
  • Change your harvest.yml to connect via hostname instead of IP address
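For instance, a poller sketch with certificate verification disabled (the poller name and address below are placeholders):

  u2-insecure:\n    addr: cluster-01.example.com\n    use_insecure_tls: true\n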
"},{"location":"prepare-cdot-clusters/#create-self-signed-subject-alternate-name-certificates-for-ontap","title":"Create Self-Signed Subject Alternate Name Certificates for ONTAP","text":"

Subject alternate name (SAN) certificates allow multiple hostnames in a single certificate. Starting with Go 1.3, when connecting to a cluster via its IP address, the CN field in the server certificate is ignored. This often causes errors like this: x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs

"},{"location":"prepare-cdot-clusters/#overview-of-steps-to-create-a-self-signed-san-certificate-and-make-ontap-use-it","title":"Overview of steps to create a self-signed SAN certificate and make ONTAP use it","text":"
  1. Create a root key
  2. Create a root certificate authority certificate
  3. Create a SAN certificate for your ONTAP cluster, using #2 to create it
  4. Install root ca certificate created in step #2 on cluster
  5. Install SAN certificate created in step #3 on your cluster
  6. Modify your cluster/SVM to use the new certificate installed at step #5
"},{"location":"prepare-cdot-clusters/#setup","title":"Setup","text":"
# create a place to store the certificate authority files, adjust as needed\nmkdir -p ca/{private,certs}\n
"},{"location":"prepare-cdot-clusters/#create-a-root-key","title":"Create a root key","text":"
cd ca\n# generate a private key that we will use to create our self-signed certificate authority\nopenssl genrsa -out private/ca.key.pem 4096\nchmod 400 private/ca.key.pem\n
"},{"location":"prepare-cdot-clusters/#create-a-root-certificate-authority-certificate","title":"Create a root certificate authority certificate","text":"

Download the sample openssl.cnf file and put it in the directory we created in setup. Edit line 9, changing dir to point to your ca directory created in setup.

openssl req -config openssl.cnf -key private/ca.key.pem -new -x509 -days 7300 -sha256 -extensions v3_ca -out certs/ca.cert.pem\n\n# Verify\nopenssl x509 -noout -text -in certs/ca.cert.pem\n\n# Make sure these are present\n    Signature Algorithm: sha256WithRSAEncryption               <======== Signature Algorithm can not be sha-1\n        X509v3 extensions:\n            X509v3 Subject Key Identifier: \n                --removed\n            X509v3 Authority Key Identifier: \n                --removed\n\n            X509v3 Basic Constraints: critical\n                CA:TRUE                                        <======== CA must be true\n            X509v3 Key Usage: critical\n                Digital Signature, Certificate Sign, CRL Sign  <======== Digital and certificate signature\n
"},{"location":"prepare-cdot-clusters/#create-a-san-certificate-for-your-ontap-cluster","title":"Create a SAN certificate for your ONTAP cluster","text":"

First, we'll create the certificate signing request and then the certificate. In this example, the ONTAP cluster is named umeng-aff300-05-06; update accordingly.

Download the sample server_cert.cnf file and put it in the directory we created in setup. Edit lines 18-21 to include your ONTAP cluster hostnames and IP addresses. Edit lines 6-11 with new names as needed.

openssl req -new -newkey rsa:4096 -nodes -sha256 -subj \"/\" -config server_cert.cnf -outform pem -out umeng-aff300-05-06.csr -keyout umeng-aff300-05-06.key\n\n# Verify\nopenssl req -text -noout -in umeng-aff300-05-06.csr\n\n# Make sure these are present\n        Attributes:\n        Requested Extensions:\n            X509v3 Subject Alternative Name:         <======== Section that lists alternate DNS and IP names\n                DNS:umeng-aff300-05-06-cm.rtp.openenglab.netapp.com, DNS:umeng-aff300-05-06, IP Address:10.193.48.11, IP Address:10.193.48.11\n    Signature Algorithm: sha256WithRSAEncryption     <======== Signature Algorithm can not be sha-1\n

We'll now use the certificate signing request and the recently created certificate authority to create a new SAN certificate for our cluster.

openssl x509 -req -sha256 -days 30 -in umeng-aff300-05-06.csr -CA certs/ca.cert.pem -CAkey private/ca.key.pem -CAcreateserial -out umeng-aff300-05-06.crt -extensions req_ext -extfile server_cert.cnf\n\n# Verify\nopenssl x509 -text -noout -in umeng-aff300-05-06.crt\n\n# Make sure these are present\nX509v3 extensions:\n            X509v3 Subject Alternative Name:       <======== Section that lists alternate DNS and IP names\n                DNS:umeng-aff300-05-06-cm.rtp.openenglab.netapp.com, DNS:umeng-aff300-05-06, IP Address:10.193.48.11, IP Address:10.193.48.11\n    Signature Algorithm: sha256WithRSAEncryption   <======== Signature Algorithm can not be sha-1\n
"},{"location":"prepare-cdot-clusters/#install-root-ca-certificate-on-cluster","title":"Install Root CA Certificate On Cluster","text":"

Log in to your cluster with admin credentials and install the server certificate authority. Copy from ca/certs/ca.cert.pem

ssh admin@IP\numeng-aff300-05-06::*> security certificate install -type server-ca\n\nPlease enter Certificate: Press <Enter> when done\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n\nYou should keep a copy of the CA-signed digital certificate for future reference.\n\nThe installed certificate's CA and serial number for reference:\nCA: ntap\nSerial: 46AFFC7A3A9999999E8FB2FEB0\n\nThe certificate's generated name for reference: ntap\n

Now install the server certificate we created above with SAN. Copy certificate from ca/umeng-aff300-05-06.crt and private key from ca/umeng-aff300-05-06.key

umeng-aff300-05-06::*> security certificate install -type server\n\nPlease enter Certificate: Press <Enter> when done\n-----BEGIN CERTIFICATE-----\n..\n-----END CERTIFICATE-----\n\nPlease enter Private Key: Press <Enter> when done\n-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n\nPlease enter certificates of Certification Authorities (CA) which form the certificate chain of the server certificate. This starts with the issuing CA certificate of the server certificate and can range up to the root CA certificate.\n\nDo you want to continue entering root and/or intermediate certificates {y|n}: n\n

If ONTAP tells you the provided certificate does not have a common name in the subject field, type the hostname of the cluster like this:

The provided certificate does not have a common name in the subject field.\n\nEnter a valid common name to continue installation of the certificate:\n\nEnter a valid common name to continue installation of the certificate: umeng-aff300-05-06-cm.rtp.openenglab.netapp.com\n\nYou should keep a copy of the private key and the CA-signed digital certificate for future reference.\n\nThe installed certificate's CA and serial number for reference:\nCA: ntap\nSerial: 67A94AA25B229A68AC5BABACA8939A835AA998A58\n\nThe certificate's generated name for reference: umeng-aff300-05-06-cm.rtp.openenglab.netapp.com\n
"},{"location":"prepare-cdot-clusters/#modify-the-admin-svm-to-use-the-new-certificate","title":"Modify the admin SVM to use the new certificate","text":"

We'll modify the cluster's admin SVM to use the just installed server certificate and certificate authority.

vserver show -type admin -fields vserver,type\nvserver            type\n------------------ -----\numeng-aff300-05-06 admin\n\numeng-aff300-05-06::*> ssl modify -vserver umeng-aff300-05-06 -server-enabled true -serial 67A94AA25B229A68AC5BABACA8939A835AA998A58 -ca ntap\n  (security ssl modify)\n

You can verify the certificate(s) are installed and working by using openssl like so:

openssl s_client -CAfile certs/ca.cert.pem -showcerts -servername server -connect umeng-aff300-05-06-cm.rtp.openenglab.netapp.com:443\n\nCONNECTED(00000005)\ndepth=1 C = US, ST = NC, L = RTP, O = ntap, OU = ntap\nverify return:1\ndepth=0 \nverify return:1\n...\n

without the -CAfile, openssl will report

CONNECTED(00000005)\ndepth=0 \nverify error:num=20:unable to get local issuer certificate\nverify return:1\ndepth=0 \nverify error:num=21:unable to verify the first certificate\nverify return:1\n---\n
"},{"location":"prepare-cdot-clusters/#create-client-certificates-for-password-less-login","title":"Create Client Certificates for Password-less Login","text":"

Copy the server certificate we created above into the Harvest install directory.

cp ca/umeng-aff300-05-06.crt /opt/harvest\ncd /opt/harvest\n

Create a self-signed client key and certificate with the same name as the hostname where Harvest is running. It's not required to name the key/cert pair after the hostname, but if you do, Harvest will load them automatically when you specify auth_style: certificate_auth; otherwise, you can point to them directly, as shown in the sketch below. See Pollers for details.
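If you keep the key and certificate somewhere else, or name them differently, a poller can point at them explicitly with the ssl_cert and ssl_key parameters. A minimal sketch (the poller name matches the later example; the paths are hypothetical):

Pollers:
  u2-cert:
    auth_style: certificate_auth
    ssl_cert: /opt/harvest/cert/harvest-client.pem
    ssl_key: /opt/harvest/cert/harvest-client.key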

Change the common name to the ONTAP user you set up with the harvest role above, e.g., harvest2.

cd /opt/harvest\nmkdir cert\nopenssl req -x509 -nodes -days 1095 -newkey rsa:2048 -keyout cert/$(hostname).key -out cert/$(hostname).pem -subj \"/CN=harvest2\"\n
"},{"location":"prepare-cdot-clusters/#install-client-certificates-on-cluster","title":"Install Client Certificates on Cluster","text":"

Log in to your cluster with admin credentials and install the client certificate. Copy it from cert/$(hostname).pem.

ssh admin@IP\numeng-aff300-05-06::*>  security certificate install -type client-ca -vserver umeng-aff300-05-06\n\nPlease enter Certificate: Press <Enter> when done\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n\nYou should keep a copy of the CA-signed digital certificate for future reference.\n\nThe installed certificate's CA and serial number for reference:\nCA: cbg\nSerial: B77B59444444CCCC\n\nThe certificate's generated name for reference: cbg_B77B59444444CCCC\n

Now that the client certificate is installed, let's enable it.

umeng-aff300-05-06::*> ssl modify -vserver umeng-aff300-05-06 -client-enabled true\n  (security ssl modify)\n

Verify with a recent version of curl. If you are running on a Mac, see below.

curl --cacert umeng-aff300-05-06.crt --key cert/$(hostname).key --cert cert/$(hostname).pem https://umeng-aff300-05-06-cm.rtp.openenglab.netapp.com/api/storage/disks\n
"},{"location":"prepare-cdot-clusters/#update-harvestyml-to-use-client-certificates","title":"Update Harvest.yml to use client certificates","text":"

Update the poller section with auth_style: certificate_auth like this:

  u2-cert: \n    auth_style: certificate_auth\n    addr: umeng-aff300-05-06-cm.rtp.openenglab.netapp.com\n

Restart your poller and enjoy your password-less lifestyle.
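For example, assuming the poller is named u2-cert as in the snippet above:

bin/harvest stop u2-cert && bin/harvest start u2-cert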

"},{"location":"prepare-cdot-clusters/#macos","title":"macOS","text":"

The version of curl installed on macOS up through Monterey is not recent enough to work with self-signed SAN certs. You will need to install a newer version of curl via Homebrew, MacPorts, source, etc.

Example of a failure when running with an older version of curl - you will see this in the client auth test step above.

curl --version\ncurl 7.64.1 (x86_64-apple-darwin20.0) libcurl/7.64.1 (SecureTransport) LibreSSL/2.8.3 zlib/1.2.11 nghttp2/1.41.0\n\ncurl --cacert umeng-aff300-05-06.crt --key cert/cgrindst-mac-0.key --cert cert/cgrindst-mac-0.pem https://umeng-aff300-05-06-cm.rtp.openenglab.netapp.com/api/storage/disks\n\ncurl: (60) SSL certificate problem: unable to get local issuer certificate\n

Let's install curl via Homebrew. Make sure you don't miss the message that Homebrew prints about your path.

If you need to have curl first in your PATH, run:\n  echo 'export PATH=\"/usr/local/opt/curl/bin:$PATH\"' >> /Users/cgrindst/.bash_profile\n

Now when we make a client auth request with our self-signed certificate, it works! \\o/

brew install curl\n\ncurl --version\ncurl 7.80.0 (x86_64-apple-darwin20.6.0) libcurl/7.80.0 (SecureTransport) OpenSSL/1.1.1l zlib/1.2.11 brotli/1.0.9 zstd/1.5.0 libidn2/2.3.2 libssh2/1.10.0 nghttp2/1.46.0 librtmp/2.3 OpenLDAP/2.6.0\nRelease-Date: 2021-11-10\nProtocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp \nFeatures: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz MultiSSL NTLM NTLM_WB SPNEGO SSL TLS-SRP UnixSockets zstd\n\ncurl --cacert umeng-aff300-05-06.crt --key cert/cgrindst-mac-0.key --cert cert/cgrindst-mac-0.pem https://umeng-aff300-05-06-cm.rtp.openenglab.netapp.com/api/storage/disks\n\n{\n  \"records\": [\n    {\n      \"name\": \"1.1.22\",\n      \"_links\": {\n        \"self\": {\n          \"href\": \"/api/storage/disks/1.1.22\"\n        }\n      }\n    }\n}\n

Change directory to your Harvest home directory (replace /opt/harvest/ if this is not the default):

$ cd /opt/harvest/\n

Generate an SSL cert and key pair with the following command. Note that it's preferred to generate these files using the hostname of the local machine. The command below assumes debian8 as our hostname and harvest2 as the user we created in the previous step:

openssl req -x509 -nodes -days 1095 -newkey rsa:2048 -keyout cert/debian8.key \\\n-out cert/debian8.pem  -subj \"/CN=harvest2\"\n

Next, open the public key (debian8.pem in our example) and copy all of its content. Log in to your ONTAP CLI and run this command, replacing CLUSTER with the name of your cluster.

security certificate install -type client-ca -vserver CLUSTER\n

Paste the public key content and hit enter. Output should be similar to this:

jamaica::> security certificate install -type client-ca -vserver jamaica \n\nPlease enter Certificate: Press <Enter> when done\n-----BEGIN CERTIFICATE-----                       \nMIIDETCCAfmgAwIBAgIUP9EUXyl2BDSUOkNEcDU0yqbJ29IwDQYJKoZIhvcNAQEL\nBQAwGDEWMBQGA1UEAwwNaGFydmVzdDItY2xpMzAeFw0yMDEwMDkxMjA0MDhaFw0y\nMzEwMDktcGFueSBMdGQxFzAVBgNVBAMlc3QyLWNsaTMwggEiMA0tcGFueSBGCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCVVy25BeCRoGCJWFOlyUL7Ddkze4Hl2/6u\nqye/3mk5vBNsGuXUrtad5XfBB70Ez9hWl5sraLiY68ro6MyX1icjiUTeaYDvS/76\nIw7HeXJ5Pyb/fWth1nePunytoLyG/vaTCySINkIV5nlxC+k0X3wWFJdfJzhloPtt\n1Vdm7aCF2q6a2oZRnUEBGQb6t5KyF0/Xh65mvfgB0pl/AS2HY5Gz+~L54Xyvs+BY\nV7UmTop7WBYl0L3QXLieERpHXnyOXmtwlm1vG5g4n/0DVBNTBXjEdvc6oRh8sxBN\nZlQWRApE7pa/I1bLD7G2AiS4UcPmR4cEpPRVEsOFOaAN3Z3YskvnAgMBAAGjUzBR\nMB0GA1UdDgQWBBQr4syV6TCcgO/5EcU/F8L2YYF15jAfBgNVHSMEGDAWgBQr4syV\n6TCcgO/5EcU/F8L2YYF15jAPBgNVHRMdfdfwerH/MA0GCSqGSIb^ECd3DQEBCwUA\nA4IBAQBjP1BVhClRKkO/M3zlWa2L9Ztce6SuGwSnm6Ebmbs+iMc7o2N9p3RmV6Xl\nh6NcdXRzzPAVrUoK8ewhnBzdghgIPoCI6inAf1CUhcCX2xcnE/osO+CfvKuFnPYE\nWQ7UNLsdfka0a9kTK13r3GMs09z/VsDs0gD8UhPjoeO7LQhdU9tJ/qOaSP3s48pv\nsYzZurHUgKmVOaOE4t9DAdevSECEWCETRETA$Vbn%@@@%%rcdrctru65ryFaByb+\nhTtGhDnoHwzt/cAGvLGV/RyWdGFAbu7Fb1rV94ceggE7nh1FqbdLH9siot6LlnQN\nMhEWp5PYgndOW49dDYUxoauCCkiA\n-----END CERTIFICATE-----\n\n\nYou should keep a copy of the CA-signed digital certificate for future reference.\n\nThe installed certificate's CA and serial number for reference:\nCA: harvest2\nSerial: 3FD1145F2976043012213d3009095534CCRDBD2\n\nThe certificate's generated name for reference: harvest2\n

Finally, we need to enable SSL authentication with the following command (replace CLUSTER with the name of your cluster):

security ssl modify -client-enabled true -vserver CLUSTER\n
"},{"location":"prepare-cdot-clusters/#reference","title":"Reference","text":"
  • https://github.com/jcbsmpsn/golang-https-example
"},{"location":"prepare-fsx-clusters/","title":"Amazon FSx for ONTAP","text":""},{"location":"prepare-fsx-clusters/#prepare-amazon-fsx-for-ontap","title":"Prepare Amazon FSx for ONTAP","text":"

To set up Harvest and FSx, make sure you read through Monitoring FSx for ONTAP file systems using Harvest and Grafana.

"},{"location":"prepare-fsx-clusters/#supported-harvest-dashboards","title":"Supported Harvest Dashboards","text":"

Amazon FSx for ONTAP exposes a different set of metrics than ONTAP cDOT. That means a limited set of the out-of-the-box dashboards is supported, and some panels may be missing information.

The dashboards that work with FSx are tagged with fsx and listed below:

  • ONTAP: Volume
  • ONTAP: SVM
  • ONTAP: Security
  • ONTAP: Data Protection Snapshots
  • ONTAP: Compliance
"},{"location":"prepare-storagegrid-clusters/","title":"StorageGRID","text":""},{"location":"prepare-storagegrid-clusters/#prepare-storagegrid-cluster","title":"Prepare StorageGRID cluster","text":"

NetApp Harvest requires login credentials to access StorageGRID hosts. Although a generic admin account can be used, it is better to create a dedicated monitoring user with the fewest permissions.

Here's a summary of what we're going to do:

  1. Create a StorageGRID group with the necessary capabilities that Harvest will use to authenticate and collect data
  2. Create a user assigned to the group created in step #1.
"},{"location":"prepare-storagegrid-clusters/#create-storagegrid-group-permissions","title":"Create StorageGRID group permissions","text":"

These steps are documented here.

You will need a root or admin account to create a new group permission.

  1. Select CONFIGURATION > Access control > Admin groups
  2. Select Create group
  3. Select Local group
  4. Enter a display name for the group, which you can update later as required. For example, Harvest or monitoring.
  5. Enter a unique name for the group, which you cannot update later.
  6. Select Continue
  7. On the Manage group permissions screen, select the permissions you want. At a minimum, Harvest requires the Tenant accounts and Metrics query permissions.
  8. Select Save changes

"},{"location":"prepare-storagegrid-clusters/#create-a-storagegrid-user","title":"Create a StorageGRID user","text":"

These steps are documented here.

You will need a root or admin account to create a new user.

  1. Select CONFIGURATION > Access control > Admin users
  2. Select Create user
  3. Enter the user's full name, a unique username, and a password.
  4. Select Continue.
  5. Assign the user to the previously created harvest group.
  6. Select Create user and select Finish.

"},{"location":"prepare-storagegrid-clusters/#reference","title":"Reference","text":"

See group permissions for more information on StorageGRID permissions.

"},{"location":"prometheus-exporter/","title":"Prometheus Exporter","text":"Prometheus Install

The information below describes how to set up Harvest's Prometheus exporter. If you need help installing or setting up Prometheus, check out their documentation.

"},{"location":"prometheus-exporter/#overview","title":"Overview","text":"

The Prometheus exporter is responsible for:

  • formatting metrics into the Prometheus line protocol
  • creating a web-endpoint on http://<ADDR>:<PORT>/metrics (or https: if TLS is enabled) for Prometheus to scrape

A web end-point is required because Prometheus scrapes Harvest by polling that end-point.

In addition to the /metrics end-point, the Prometheus exporter also serves an overview of all metrics and collectors available on its root address scheme://<ADDR>:<PORT>/.

Because Prometheus polls Harvest, don't forget to update your Prometheus configuration and tell Prometheus how to scrape each poller.

There are two ways to configure the Prometheus exporter: using a port range or individual ports.

The port range is more flexible and should be used when you want multiple pollers all exporting to the same instance of Prometheus. Both options are explained below.

"},{"location":"prometheus-exporter/#parameters","title":"Parameters","text":"

All parameters of the exporter are defined in the Exporters section of harvest.yml.

An overview of all parameters:

• port_range (int-int range; overrides port if specified): lower port to upper port (inclusive) of the HTTP end-point to create when a poller specifies this exporter. Starting at the lower port, each free port will be tried sequentially up to the upper port.
• port (int; required if port_range is not specified): port of the HTTP end-point.
• local_http_addr (string, optional; default 0.0.0.0): address of the HTTP server Harvest starts for Prometheus to scrape. Use localhost to serve only on the local machine; use 0.0.0.0 (default) if Prometheus is scraping from another machine.
• global_prefix (string, optional): add a prefix to all metrics (e.g. netapp_).
• allow_addrs (list of strings, optional): allow access only if the host matches any of the provided addresses.
• allow_addrs_regex (list of strings, optional): allow access only if the host address matches at least one of the regular expressions.
• cache_max_keep (string, Go duration format, optional; default 5m): maximum amount of time metrics are cached (in case Prometheus does not collect the metrics in time).
• add_meta_tags (bool, optional; default false): add HELP and TYPE metatags to metrics (currently no useful information, but required by some tools).
• sort_labels (bool, optional; default false): sort metric labels before exporting. Some open-metrics scrapers report stale metrics when labels are not sorted.
• tls (optional): if present, enables TLS transport. If running in a container, see note.
• cert_file, key_file (required children of tls): relative or absolute path to the TLS certificate and key file. TLS 1.3 certificates required. FIPS compliant P-256 TLS 1.3 certificates can be created with bin/harvest admin tls create server, openssl, mkcert, etc.

A few examples:

"},{"location":"prometheus-exporter/#port_range","title":"port_range","text":"
Exporters:\nprom-prod:\nexporter: Prometheus\nport_range: 2000-2030\nPollers:\ncluster-01:\nexporters:\n- prom-prod\ncluster-02:\nexporters:\n- prom-prod\ncluster-03:\nexporters:\n- prom-prod\n# ... more\ncluster-16:\nexporters:\n- prom-prod\n

Sixteen pollers will collect metrics from 16 clusters and make those metrics available to a single instance of Prometheus named prom-prod. Sixteen web end-points will be created on the first 16 available free ports between 2000 and 2030 (inclusive).

After starting the pollers in the example above, running bin/harvest status shows the following. Note that ports 2000 and 2003 were not available, so the next free port in the range was selected. If no free port can be found, an error will be logged.

Datacenter   Poller       PID     PromPort  Status              \n++++++++++++ ++++++++++++ +++++++ +++++++++ ++++++++++++++++++++\nDC-01        cluster-01   2339    2001      running         \nDC-01        cluster-02   2343    2002      running         \nDC-01        cluster-03   2351    2004      running         \n...\nDC-01        cluster-14   2405    2015      running         \nDC-01        cluster-15   2502    2016      running         \nDC-01        cluster-16   2514    2017      running         \n
"},{"location":"prometheus-exporter/#allow_addrs","title":"allow_addrs","text":"
Exporters:\nmy_prom:\nallow_addrs:\n- 192.168.0.102\n- 192.168.0.103\n

will only allow access from exactly these two addresses.

"},{"location":"prometheus-exporter/#allow_addrs_regex","title":"allow_addrs_regex","text":"
Exporters:\nmy_prom:\nallow_addrs_regex:\n- `^192.168.0.\\d+$`\n

will only allow access from the IPv4 range 192.168.0.0-192.168.0.255.

"},{"location":"prometheus-exporter/#configure-prometheus-to-scrape-harvest-pollers","title":"Configure Prometheus to scrape Harvest pollers","text":"

There are two ways to tell Prometheus how to scrape Harvest: using HTTP service discovery (SD) or listing each poller individually.

HTTP service discovery is the more flexible of the two. It is also less error-prone, and easier to manage. Combined with the port_range configuration described above, SD is the least effort to configure Prometheus and the easiest way to keep both Harvest and Prometheus in sync.

NOTE HTTP service discovery does not work with Docker yet. With Docker, you will need to list each poller individually or, if possible, use the Docker Compose workflow, which uses file service discovery to achieve a similar ease of use as HTTP service discovery.

See the example below for how to use HTTP SD and port_range together.

"},{"location":"prometheus-exporter/#prometheus-http-service-discovery","title":"Prometheus HTTP Service Discovery","text":"

HTTP service discovery was introduced in Prometheus version 2.28.0. Make sure you're using that version or later.

The way service discovery works is:

  • shortly after a poller starts up, it registers with the SD node (if one exists)
  • the poller sends a heartbeat to the SD node, by default every 45s.
  • if a poller fails to send a heartbeat, the SD node removes the poller from the list of active targets after a minute
  • the SD end-point is reachable via SCHEMA:///api/v1/sd

    To use HTTP service discovery you need to:

    1. tell Harvest to start the HTTP service discovery process
    2. tell Prometheus to use the HTTP service discovery endpoint
    "},{"location":"prometheus-exporter/#enable-http-service-discovery-in-harvest","title":"Enable HTTP service discovery in Harvest","text":"

    Add the following to your harvest.yml

    Admin:\nhttpsd:\nlisten: :8887\n

    This tells Harvest to create an HTTP service discovery end-point on interface 0.0.0.0:8887. If you want to only listen on localhost, use 127.0.0.1:<port> instead. See net.Dial for details on the supported listen formats.

    Start the SD process by running bin/harvest admin start. Once it is started, you can curl the end-point for the list of running Harvest pollers.

    curl -s 'http://localhost:8887/api/v1/sd' | jq .\n[\n  {\n    \"targets\": [\n      \"10.0.1.55:12990\",\n      \"10.0.1.55:15037\",\n      \"127.0.0.1:15511\",\n      \"127.0.0.1:15008\",\n      \"127.0.0.1:15191\",\n      \"10.0.1.55:15343\"\n    ]\n  }\n]\n
    "},{"location":"prometheus-exporter/#harvest-http-service-discovery-options","title":"Harvest HTTP Service Discovery options","text":"

    HTTP service discovery (SD) is configured in the Admin > httpsd section of your harvest.yml.

• listen (required): interface and port to listen on; use localhost:PORT, or :PORT for all interfaces.
• auth_basic (optional): if present, enables basic authentication on the /api/v1/sd end-point.
• username, password (required children of auth_basic).
• tls (optional): if present, enables TLS transport. If running in a container, see note.
• cert_file, key_file (required children of tls): relative or absolute path to the TLS certificate and key file. TLS 1.3 certificates required. FIPS compliant P-256 TLS 1.3 certificates can be created with bin/harvest admin tls create server.
• ssl_cert, ssl_key (optional; used if auth_style is certificate_auth): absolute paths to the SSL (client) certificate and key used to authenticate with the target system. If not provided, the poller will look for <hostname>.key and <hostname>.pem in $HARVEST_HOME/cert/. To create certificates for ONTAP systems, see using certificate authentication.
• heart_beat (optional, Go duration format; default 45s): how frequently each poller sends a heartbeat message to the SD node.
• expire_after (optional, Go duration format; default 1m): if a poller fails to send a heartbeat, the SD node removes the poller after this duration."},{"location":"prometheus-exporter/#enable-http-service-discovery-in-prometheus","title":"Enable HTTP service discovery in Prometheus","text":"

    Edit your prometheus.yml and add the following section

    $ vim /etc/prometheus/prometheus.yml

    scrape_configs:\n- job_name: harvest\nhttp_sd_configs:\n- url: http://localhost:8887/api/v1/sd\n

    Harvest and Prometheus both support basic authentication for HTTP SD end-points. To enable basic auth, add the following to your Harvest config.

    Admin:\nhttpsd:\nlisten: :8887\n# Basic auth protects GETs and publishes\nauth_basic:\nusername: admin\npassword: admin\n

    Don't forget to also update your Prometheus config with the matching basic_auth credentials.
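A minimal sketch of the matching Prometheus side, assuming the same credentials as the Harvest config above:

scrape_configs:
  - job_name: harvest
    http_sd_configs:
      - url: http://localhost:8887/api/v1/sd
        basic_auth:
          username: admin
          password: admin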

    "},{"location":"prometheus-exporter/#prometheus-http-service-discovery-and-port-range","title":"Prometheus HTTP Service Discovery and Port Range","text":"

    HTTP SD combined with Harvest's port_range feature leads to significantly less configuration in your harvest.yml. For example, if your clusters all export to the same Prometheus instance, you can refactor the per-poller exporter into a single exporter shared by all clusters in Defaults as shown below:

    Notice that none of the pollers specify an exporter. Instead, all the pollers share the single exporter named prometheus-r listed in Defaults. prometheus-r is the only exporter defined and as specified will manage up to 1,000 Harvest Prometheus exporters.

    If you add or remove more clusters in the Pollers section, you do not have to change Prometheus since it dynamically pulls the targets from the Harvest admin node.

    Admin:\nhttpsd:\nlisten: :8887\n\nExporters:\nprometheus-r:\nexporter: Prometheus\nport_range: 13000-13999\n\nDefaults:\ncollectors:\n- Zapi\n- ZapiPerf\nuse_insecure_tls: false\nauth_style: password\nusername: admin\npassword: pass\nexporters:\n- prometheus-r\n\nPollers:\numeng_aff300:\ndatacenter: meg\naddr: 10.193.48.11\n\nF2240-127-26:\ndatacenter: meg\naddr: 10.193.6.61\n\n# ... add more clusters\n
    "},{"location":"prometheus-exporter/#static-scrape-targets","title":"Static Scrape Targets","text":"

If we define four Prometheus exporters at ports 12990, 12991, 14567, and 14568, you need to add the four corresponding targets to your prometheus.yml.

    $ vim /etc/prometheus/prometheus.yml\n

Scroll down to near the end of the file and add the following lines:

      - job_name: 'harvest'\nscrape_interval: 60s\nstatic_configs:\n- targets:\n- 'localhost:12990'\n- 'localhost:12991'\n- 'localhost:14567'\n- 'localhost:14568'\n

NOTE If Prometheus is not on the same machine as Harvest, replace localhost with the IP address of your Harvest machine. Also note that the scrape interval above is set to 60s (1m). That matches the polling frequency of the default Harvest collectors. If you change the polling frequency of a Harvest collector to a lower value, you should also change the scrape interval.

    "},{"location":"prometheus-exporter/#prometheus-exporter-and-tls","title":"Prometheus Exporter and TLS","text":"

    The Harvest Prometheus exporter can be configured to serve its metrics via HTTPS by configuring the tls section in the Exporters section of harvest.yml.

    Let's walk through an example of how to set up Harvest's Prometheus exporter and how to configure Prometheus to use TLS.

    "},{"location":"prometheus-exporter/#generate-tls-certificates","title":"Generate TLS Certificates","text":"

We'll use Harvest's admin command-line tool to create a self-signed TLS certificate/key pair for the exporter and Prometheus. Note: If running in a container, see note.

    cd $Harvest_Install_Directory\nbin/harvest admin tls create server\n2023/06/23 09:39:48 wrote cert/admin-cert.pem\n2023/06/23 09:39:48 wrote cert/admin-key.pem\n

    Two files are created. Since we want to use these certificates for our Prometheus exporter, let's rename them to make that clearer.

    mv cert/admin-cert.pem cert/prom-cert.pem\nmv cert/admin-key.pem cert/prom-key.pem\n
    "},{"location":"prometheus-exporter/#configure-harvest-prometheus-exporter-to-use-tls","title":"Configure Harvest Prometheus Exporter to use TLS","text":"

    Edit your harvest.yml and add a TLS section to your exporter block like this:

    Exporters:\nmy-exporter:\nlocal_http_addr: localhost\nexporter: Prometheus\nport: 16001\ntls:\ncert_file: cert/prom-cert.pem\nkey_file: cert/prom-key.pem\n

    Update one of your Pollers to use this exporter and start the poller.

    Pollers:\nmy-cluster:\ndatacenter: dc-1\naddr: 10.193.48.11\nexporters:\n- my-exporter     # Use TLS exporter we created above\n

When the poller is started, it will log whether https or http is being used as part of the URL, like so:

    bin/harvest start -f my-cluster\n2023-06-23T10:02:03-04:00 INF prometheus/httpd.go:40 > server listen Poller=my-cluster exporter=my-exporter url=https://localhost:16001/metrics\n

If the URL scheme is https, TLS is being used.

    You can use curl to scrape the Prometheus exporter and verify that TLS is being used like so:

curl --cacert cert/prom-cert.pem https://localhost:16001/metrics\n\n# or use --insecure to tell curl to skip certificate validation\n# curl --insecure https://localhost:16001/metrics\n
    "},{"location":"prometheus-exporter/#configure-prometheus-to-use-tls","title":"Configure Prometheus to use TLS","text":"

Let's configure Prometheus to use HTTPS to communicate with the exporter set up above.

Edit your prometheus.yml and add or adapt your scrape_configs job. You need to add scheme: https and set up a tls_config block that points to the prom-cert.pem created earlier, like so:

    scrape_configs:\n- job_name: 'harvest-https'\nscheme: https\ntls_config:\nca_file: /path/to/prom-cert.pem\nstatic_configs:\n- targets:\n- 'localhost:16001'\n

    Start Prometheus and visit http://localhost:9090/targets with your browser. You should see https://localhost:16001/metrics in the list of targets.

    "},{"location":"prometheus-exporter/#prometheus-alerts","title":"Prometheus Alerts","text":"

    Prometheus includes out-of-the-box support for simple alerting. Alert rules are configured in your prometheus.yml file. Setup and details can be found in the Prometheus guide on alerting.

Harvest also includes EMS alerts and sample alerts for reference. Refer to the EMS Collector documentation for more details about EMS events.
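As a minimal sketch, the rule below fires when a Harvest target stops responding. It assumes the harvest job name used in the scrape examples above and would live in a rules file referenced from prometheus.yml under rule_files:

groups:
  - name: harvest
    rules:
      - alert: HarvestTargetDown
        expr: up{job="harvest"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Harvest exporter {{ $labels.instance }} has been unreachable for 5 minutes"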

    "},{"location":"prometheus-exporter/#alertmanager","title":"Alertmanager","text":"

Prometheus's built-in alerts are good for simple workflows. They do a nice job of telling you what's happening at the moment. If you need a richer solution that includes summarization, notification, advanced delivery, deduplication, etc., check out Alertmanager.

    "},{"location":"prometheus-exporter/#reference","title":"Reference","text":"
    • Prometheus Alerting
    • Alertmanager
    • Alertmanager's notification metrics
    • Prometheus Linter
    • Collection of example Prometheus Alerts
    "},{"location":"quickstart/","title":"Quickstart","text":""},{"location":"quickstart/#1-install-harvest","title":"1. Install Harvest","text":"

Harvest is distributed as a container, a tarball, and native packages (RPM and DEB). Pick the one that works best for you. More details can be found in the installation documentation.

    "},{"location":"quickstart/#2-configuration-file","title":"2. Configuration file","text":"

    Harvest's configuration information is defined in harvest.yml. There are a few ways to tell Harvest how to load this file:

    • If you don't use the --config flag, the harvest.yml file located in the current working directory will be used

    • If you specify the --config flag like so harvest status --config /opt/harvest/harvest.yml, Harvest will use that file

    To start collecting metrics, you need to define at least one poller and one exporter in your configuration file. The default configuration comes with a pre-configured poller named unix which collects metrics from the local system. This is useful if you want to monitor resource usage by Harvest and serves as a good example. Feel free to delete it if you want.

    The next step is to add pollers for your ONTAP clusters in the Pollers section of the Harvest configuration file, harvest.yml.
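A minimal sketch of such a poller entry follows; the names, address, and credentials are illustrative, and it assumes a Prometheus exporter named prometheus1 is defined in the Exporters section:

Pollers:
  cluster-01:
    datacenter: dc-1
    addr: 10.0.1.10
    collectors:
      - Rest
      - RestPerf
    username: harvest2
    password: change_me
    exporters:
      - prometheus1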

    "},{"location":"quickstart/#3-start-harvest","title":"3. Start Harvest","text":"

    Start all Harvest pollers as daemons:

    bin/harvest start\n

Or start one or more specific pollers. In this case, we're starting two pollers named jamaica and grenada.

bin/harvest start jamaica grenada\n

    Replace jamaica and grenada with the poller names you defined in harvest.yml. The logs of each poller can be found in /var/log/harvest/.

    "},{"location":"quickstart/#4-import-grafana-dashboards","title":"4. Import Grafana dashboards","text":"

    The Grafana dashboards are located in the $HARVEST_HOME/grafana directory. You can manually import the dashboards or use the bin/harvest grafana command (more documentation).

    Note: the current dashboards specify Prometheus as the datasource. If you use the InfluxDB exporter, you will need to create your own dashboards.

    "},{"location":"quickstart/#5-verify-the-metrics","title":"5. Verify the metrics","text":"

If you use a Prometheus exporter, open a browser and navigate to http://0.0.0.0:12990/ (replace 12990 with the port number of your poller). This is the Harvest-created HTTP end-point for your Prometheus exporter. This page provides a real-time generated list of running collectors and the names of exported metrics.

The metric data that is exported for Prometheus to scrape is available at http://0.0.0.0:12990/metrics/.
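For example, replacing 12990 with your poller's Prometheus port:

curl -s http://localhost:12990/metrics | head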

    More information on configuring the exporter can be found in the Prometheus exporter documentation.

    If you can't access the URL, check the logs of your pollers. These are located in /var/log/harvest/.

    "},{"location":"quickstart/#6-optional-setup-systemd-service-files","title":"6. (Optional) Setup Systemd service files","text":"

    If you're running Harvest on a system with Systemd, you may want to take advantage of systemd instantiated units to manage your pollers.

    "},{"location":"release-notes/","title":"Release Notes","text":"
    • Changelog
    • Releases
    "},{"location":"system-requirements/","title":"System Requirements","text":"

    Harvest is written in Go, which means it runs on recent Linux systems. It also runs on Macs for development.

Hardware requirements depend on how many clusters you monitor and the number of metrics you choose to collect. With the default configuration, when monitoring 10 clusters, we recommend:

    • CPU: 2 cores
    • Memory: 1 GB
    • Disk: 500 MB (mostly used by log files)

    Harvest is compatible with:

    • Prometheus: 2.26 or higher
    • InfluxDB: v2
    • Grafana: 8.1.X or higher
    • Docker: 20.10.0 or higher and compatible Docker Compose
    "},{"location":"upgrade/","title":"Upgrade","text":"

    To upgrade Harvest

Stop Harvest:

    cd <existing harvest directory>\nbin/harvest stop\n

    Verify that all pollers have stopped:

    bin/harvest status\nor\npgrep --full '\\-\\-poller'  # should return nothing if all pollers are stopped\n

    Follow the installation instructions to download and install Harvest and then copy your old harvest.yml into the new install directory like so:

    cp /path/to/old/harvest/harvest.yml /path/to/new/harvest.yml\n

After upgrading, re-import all dashboards (either via the bin/harvest grafana import CLI or via the Grafana UI) to get any new dashboard enhancements.

    "},{"location":"architecture/rest-collector/","title":"REST collector","text":""},{"location":"architecture/rest-collector/#status","title":"Status","text":"

    ~~Accepted~~ Superseded by REST strategy

The exact version of ONTAP that has full ZAPI parity is subject to change; everywhere you see version 9.12, it may become 9.13 or later.

    "},{"location":"architecture/rest-collector/#context","title":"Context","text":"

We need to document and communicate to customers:
  • when they should switch from the ZAPI collectors to the REST ones
  • what versions of ONTAP are supported by Harvest's REST collectors
  • how to fill ONTAP gaps between the ZAPI and REST APIs

    The ONTAP version information is important because gaps are addressed in later versions of cDOT.

    "},{"location":"architecture/rest-collector/#considered-options","title":"Considered Options","text":"
1. Only REST: a clean cut-over; stop using ZAPI and switch completely to REST.

2. Both: support both ZAPI and REST collectors running at the same time, collecting the same objects. Flexible, but has the downside of last-write wins. Not recommended unless you selectively pick non-overlapping sets of objects.

3. Template change that supports both: change the template to break ties, priority, etc. Rejected because the additional complexity is not worth the benefits.

4. private-cli: when there are REST gaps that have not been filled yet or will never be filled (WONTFIX), the Harvest REST collector will provide infrastructure and documentation on how to use private-cli pass-through to address gaps.

    "},{"location":"architecture/rest-collector/#chosen-decision","title":"Chosen Decision","text":"

    For clusters with ONTAP versions < 9.12, we recommend customers use the ZAPI collectors. (#2) (#4)

    Once ONTAP 9.12+ is released and customers have upgraded to it, they should make a clean cut-over to the REST collectors (#1). ONTAP 9.12 is the version of ONTAP that has the best parity with what Harvest collects in terms of config and performance counters. Harvest REST collectors, templates, and dashboards are validated against ONTAP 9.12+. Most of the REST config templates will work before 9.12, but unless you have specific needs, we recommend sticking with the ZAPI collectors until you upgrade to 9.12.

    There is little value in running both the ZAPI and REST collectors for an overlapping set of objects. It's unlikely you want to collect the same object via REST and ZAPI at the same time. Harvest doesn't support this use-case, but does nothing to detect or prevent it.

    If you want to collect a non-overlapping set of objects with REST and ZAPI, you can. If you do, we recommend you disable the ZAPI object collector. For example, if you enable the REST disk template, you should disable the ZAPI disk template. We do NOT recommend collecting an overlapping set of objects with both collectors since the last one to run will overwrite previously collected data.

    Harvest will document how to use the REST private cli pass-through to collect custom and non-public counters.

The Harvest team recommends that customers open ONTAP issues for REST public API gaps that need to be filled.

    "},{"location":"architecture/rest-collector/#consequences","title":"Consequences","text":"

The Harvest REST collectors will work with limitations on earlier versions of ONTAP. ONTAP 9.12+ is the minimum validated version. We only validate the full set of templates, dashboards, counters, etc. on ONTAP 9.12+.

    Harvest does not prevent you from collecting the same resource with ZAPI and REST.

    "},{"location":"architecture/rest-strategy/","title":"REST Strategy","text":""},{"location":"architecture/rest-strategy/#status","title":"Status","text":"

    Accepted

    "},{"location":"architecture/rest-strategy/#context","title":"Context","text":"

ONTAP has published a customer product communiqué (CPC-00410) announcing that ZAPIs will reach end of availability (EOA) in ONTAP 9.13.1 released Q2 2023.

    This document describes how Harvest handles the ONTAP transition from ZAPI to REST. In most cases, no action is required on your part.

    "},{"location":"architecture/rest-strategy/#harvest-api-transition","title":"Harvest API Transition","text":"

    Harvest tries to use the protocol you specify in your harvest.yml config file.

    When specifying the ZAPI collector, Harvest will use the ZAPI protocol unless the cluster no longer speaks Zapi, in which case, Harvest will switch to REST.

    If you specify the REST collector, Harvest will use the REST protocol.

    Harvest includes a full set of REST templates that export identical metrics as the included ZAPI templates. No changes to dashboards or downstream metric-consumers should be required. See below if you have added metrics to the Harvest out-of-the-box templates.

    Read on if you want to know how you can use REST sooner, or you want to take advantage of REST-only features in ONTAP.

    "},{"location":"architecture/rest-strategy/#frequently-asked-questions","title":"Frequently Asked Questions","text":""},{"location":"architecture/rest-strategy/#how-does-harvest-decide-whether-to-use-rest-or-zapi-apis","title":"How does Harvest decide whether to use REST or ZAPI APIs?","text":"

    Harvest attempts to use the collector defined in your harvest.yml config file.

    • If you specify the ZAPI collector, Harvest will use the ZAPI protocol as long as the cluster still speaks Zapi. If the cluster no longer understands Zapi, Harvest will switch to Rest.

    • If you specify the REST collector, Harvest will use REST.

    Earlier versions of Harvest included a prefer_zapi poller option and a HARVEST_NO_COLLECTOR_UPGRADE environment variable. Both of these options are ignored in Harvest versions 23.08 onwards.

    "},{"location":"architecture/rest-strategy/#why-would-i-switch-to-rest-before-9131","title":"Why would I switch to REST before 9.13.1?","text":"
    • You have advanced use cases to validate before ONTAP removes ZAPIs
    • You want to take advantage of new ONTAP features that are only available via REST (e.g., cloud features, event remediation, name services, cluster peers, etc.)
    • You want to collect a metric that is not available via ZAPI
    • You want to collect a metric from the ONTAP CLI. The REST API includes a private CLI pass-through to access any ONTAP CLI command
    "},{"location":"architecture/rest-strategy/#can-i-start-using-rest-before-9131","title":"Can I start using REST before 9.13.1?","text":"

    Yes. Many customers do. Be aware of the following limitations:

    1. ONTAP includes a subset of performance counters via REST beginning in ONTAP 9.11.1.
    2. There may be performance metrics missing from versions of ONTAP earlier than 9.11.1.

    Where performance metrics are concerned, because of point #2, our recommendation is to wait until at least ONTAP 9.12.1 before switching to the RestPerf collector. You can continue using the ZapiPerf collector until you switch.

    "},{"location":"architecture/rest-strategy/#a-counter-is-missing-from-rest-what-do-i-do","title":"A counter is missing from REST. What do I do?","text":"

    The Harvest team has ensured that all the out-of-the-box ZAPI templates have matching REST templates with identical metrics as of Harvest 22.11 and ONTAP 9.12.1. Any additional ZAPI Perf counters you have added may be missing from ONTAP REST Perf.

Join the Harvest Discord channel and ask us about the counter. Sometimes we may know which release the missing counter is coming in; otherwise, we can point you to the ONTAP process to request new counters.

    "},{"location":"architecture/rest-strategy/#can-i-use-the-rest-and-zapi-collectors-at-the-same-time","title":"Can I use the REST and ZAPI collectors at the same time?","text":"

    Yes. Harvest ensures that duplicate resources are not collected from both collectors.

    When there is potential duplication, Harvest first resolves the conflict in the order collectors are defined in your poller and then negotiates with the cluster on the most appropriate API to use per above.

    Let's take a look at a few examples using the following poller definition:

    cluster-1:\ndatacenter: dc-1\naddr: 10.1.1.1\ncollectors:\n- Zapi\n- Rest\n
• When cluster-1 is running ONTAP 9.9.X (ONTAP still supports ZAPIs), the Zapi collector will be used since it is listed first in the list of collectors. When collecting a REST-only resource like nfs_client, the Rest collector will be used since nfs_client objects are only available via REST.

    • When cluster-1 is running ONTAP 9.18.1 (ONTAP no longer supports ZAPIs), the Rest collector will be used since ONTAP can no longer speak the ZAPI protocol.

    If you want the REST collector to be used in all cases, change the order in the collectors section so Rest comes before Zapi.
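A minimal sketch of that ordering, reusing the poller above:

cluster-1:
  datacenter: dc-1
  addr: 10.1.1.1
  collectors:
    - Rest
    - Zapi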

    If the resource does not exist for the first collector, the next collector will be tried. Using the example above, when collecting VolumeAnalytics resources, the Zapi collector will not run for VolumeAnalytics objects since that resource is only available via REST. The Rest collector will run and collect the VolumeAnalytics objects.

    "},{"location":"architecture/rest-strategy/#ive-added-counters-to-existing-zapi-templates-will-those-counters-work-in-rest","title":"I've added counters to existing ZAPI templates. Will those counters work in REST?","text":"

    ZAPI config metrics often have a REST equivalent that can be found in ONTAP's ONTAPI to REST mapping document.

    ZAPI performance metrics may be missing in REST. If you have added new metrics or templates to the ZapiPerf collector, those metrics likely aren't available via REST. You can check if the performance counter is available, ask the Harvest team on Discord, or ask ONTAP to add the counter you need.

    "},{"location":"architecture/rest-strategy/#reference","title":"Reference","text":"

    Table of ONTAP versions, dates and API notes.

• 9.11.1 (Q2 2022): First version of ONTAP with REST performance metrics
• 9.12.1 (Q4 2022): ZAPIs still supported - REST performance metrics have parity with the ZAPI performance metrics collected by Harvest 22.11
• 9.13.1: ZAPIs still supported
• 9.14.1-9.15.1: ZAPIs enabled if an ONTAP upgrade detects they were being used earlier. New ONTAP installs default to REST only. ZAPIs may be enabled via CLI
• 9.16.1-9.17.1: ZAPIs disabled. See the ONTAP communique for details on re-enabling
• 9.18.1: ZAPIs removed. No way to re-enable"},{"location":"help/faq/","title":"FAQ","text":""},{"location":"help/faq/#how-do-i-migrate-from-harvest-16-to-20","title":"How do I migrate from Harvest 1.6 to 2.0?","text":"

There currently is no tool to migrate data from Harvest 1.6 to 2.0. The most common workaround is to run both 1.6 and 2.0 in parallel. Run both until the 1.6 data expires due to your normal retention policy, and then fully cut over to 2.0.

Technically, it's possible to take a Graphite DB, extract the data, and send it to a Prometheus DB, but it's not an area we've invested in. If you want to explore that option, check out promtool, which supports importing, but it's probably not worth the effort.

    "},{"location":"help/faq/#how-do-i-share-sensitive-log-files-with-netapp","title":"How do I share sensitive log files with NetApp?","text":"

Email them to ng-harvest-files@netapp.com. This mail address is accessible to NetApp Harvest employees only.

    "},{"location":"help/faq/#multi-tenancy","title":"Multi-tenancy","text":""},{"location":"help/faq/#question","title":"Question","text":"

    Is there a way to allow per SVM level user views? I need to offer 1 tenant per SVM. Can I limit visibility to specific SVMs? Is there an SVM dashboard available?

    "},{"location":"help/faq/#answer","title":"Answer","text":"

    You can do this with Grafana. Harvest can provide the labels for SVMs. The pieces are there but need to be put together.

Grafana templates support the $__user variable to make pre-selections and decisions. You can combine that with metadata mapping each user to their SVMs. With both of those, you can build SVM-specific dashboards.

There is a German service provider doing this. They have service managers responsible for a set of customers, and each manager only wants to see the data/dashboards of their corresponding customers.

    "},{"location":"help/faq/#harvest-authentication-and-permissions","title":"Harvest Authentication and Permissions","text":""},{"location":"help/faq/#question_1","title":"Question","text":"

    What permissions does Harvest need to talk to ONTAP?

    "},{"location":"help/faq/#answer_1","title":"Answer","text":"

    Permissions, authentication, role based security, and creating a Harvest user are covered here.

    "},{"location":"help/faq/#ontap-counters-are-missing","title":"ONTAP counters are missing","text":""},{"location":"help/faq/#question_2","title":"Question","text":"

    How do I make Harvest collect additional ONTAP counters?

    "},{"location":"help/faq/#answer_2","title":"Answer","text":"

    Instead of modifying the out-of-the-box templates in the conf/ directory, it is better to create your own custom templates following these instructions.

    "},{"location":"help/faq/#capacity-metrics","title":"Capacity Metrics","text":""},{"location":"help/faq/#question_3","title":"Question","text":"

    How are capacity and other metrics calculated by Harvest?

    "},{"location":"help/faq/#answer_3","title":"Answer","text":"

Each collector has its own way of collecting and post-processing metrics. Check the documentation of each individual collector (usually under the #Metrics section). Capacity and hardware-related metrics are collected by the Zapi collector, which emits metrics as they are, without any additional calculation. Performance metrics are collected by the ZapiPerf collector, and the final values are calculated from the delta of two consecutive polls.

    "},{"location":"help/faq/#tagging-volumes","title":"Tagging Volumes","text":""},{"location":"help/faq/#question_4","title":"Question","text":"

    How do I tag ONTAP volumes with metadata and surface that data in Harvest?

    "},{"location":"help/faq/#answer_4","title":"Answer","text":"

    See volume tagging issue and volume tagging via sub-templates

    "},{"location":"help/faq/#rest-and-zapi-documentation","title":"REST and Zapi Documentation","text":""},{"location":"help/faq/#question_5","title":"Question","text":"

    How do I relate ONTAP REST endpoints to ZAPI APIs and attributes?

    "},{"location":"help/faq/#answer_5","title":"Answer","text":"

    Please refer to the ONTAPI to REST API mapping document.

    "},{"location":"help/faq/#sizing","title":"Sizing","text":"

    How much disk space is required by Prometheus?

    This depends on the collectors you've added, # of nodes monitored, cardinality of labels, # instances, retention, ingest rate, etc. A good approximation is to curl your Harvest exporter and count the number of samples that it publishes and then feed that information into a Prometheus sizing formula.
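For example, one way to count the samples a poller currently publishes is to fetch its metrics end-point and count the non-comment lines (replace 12990 with your poller's Prometheus port):

curl -s http://localhost:12990/metrics | grep -vc '^#'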

    Prometheus stores an average of 1-2 bytes per sample. To plan the capacity of a Prometheus server, you can use the rough formula: needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample
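As a worked example, assume a poller publishes roughly 10,000 samples, Prometheus scrapes every 60 seconds, and you keep 30 days of data: ingested_samples_per_second ≈ 10,000 / 60 ≈ 167, retention_time_seconds = 2,592,000, so needed_disk_space ≈ 2,592,000 × 167 × 2 bytes ≈ 0.9 GB.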

A rough approximation is outlined at https://devops.stackexchange.com/questions/9298/how-to-calculate-disk-space-required-by-prometheus-v2-2

    "},{"location":"help/faq/#topk-usage-in-grafana","title":"Topk usage in Grafana","text":""},{"location":"help/faq/#question_6","title":"Question","text":"

    In Grafana, why do I see more results from topk than I asked for?

    "},{"location":"help/faq/#answer_6","title":"Answer","text":"

    Topk is one of Prometheus's out-of-the-box aggregation operators, and is used to calculate the largest k elements by sample value.

Depending on the time range you select, Prometheus will often return more results than you asked for. That's because Prometheus picks the topk for each timestamp in the graph. In other words, different time series are the topk at different times in the graph. When you use a large duration, there are often many time series.
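For example, an instant query (the kind a table panel runs) evaluates topk at a single timestamp and returns exactly k series, while the same expression as a range query can return more than k distinct series across the graphed window. The metric name below is only illustrative:

topk(5, volume_read_ops)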

    This is a limitation of Prometheus and can be mitigated by:

    • reducing the time range to a smaller duration that includes fewer topk results - something like a five to ten minute range works well for most of Harvest's charts
    • the panel's table shows the current topk rows and that data can be used to supplement the additional series shown in the charts

    Additional details: here, here, and here

    "},{"location":"help/faq/#where-are-harvest-container-images-published","title":"Where are Harvest container images published?","text":"

Harvest images are published to both NetApp's (cr.netapp.io) and Docker's (hub.docker.com) image registries. By default, cr.netapp.io is used.

    "},{"location":"help/faq/#how-do-i-switch-from-dockerhub-to-netapps-image-registry-crnetappio-or-vice-versa","title":"How do I switch from DockerHub to NetApp's image registry (cr.netapp.io) or vice-versa?","text":""},{"location":"help/faq/#answer_7","title":"Answer","text":"

    Replace all instances of rahulguptajss/harvest:latest with cr.netapp.io/harvest:latest

• Edit your docker-compose file and make those replacements, or regenerate the compose file using the --image cr.netapp.io/harvest:latest option.

    • Update any shell or Ansible scripts you have that are also using those images

    • After making these changes, you should stop your containers, pull new images, and restart.

    You can verify that you're using the cr.netapp.io images like so:

    Before

    docker image ls -a\nREPOSITORY              TAG       IMAGE ID       CREATED        SIZE\nrahulguptajss/harvest   latest    80061bbe1c2c   10 days ago    85.4MB <=== no prefix in the repository \nprom/prometheus         v2.33.1   e528f02c45a6   3 weeks ago    204MB       column means from DockerHub\ngrafana/grafana         8.3.4     4a34578e4374   5 weeks ago    274MB\n

    Pull image from cr.netapp.io

    docker pull cr.netapp.io/harvest\nUsing default tag: latest\nlatest: Pulling from harvest\nDigest: sha256:6ff88153812ebb61e9dd176182bf8a792cde847748c5654d65f4630e61b1f3ae\nStatus: Image is up to date for cr.netapp.io/harvest:latest\ncr.netapp.io/harvest:latest\n

Notice that the IMAGE ID is identical for both entries since they refer to the same image.

    docker image ls -a\nREPOSITORY              TAG       IMAGE ID       CREATED        SIZE\ncr.netapp.io/harvest    latest    80061bbe1c2c   10 days ago    85.4MB  <== Harvest image from cr.netapp.io\nrahulguptajss/harvest   latest    80061bbe1c2c   10 days ago    85.4MB\nprom/prometheus         v2.33.1   e528f02c45a6   3 weeks ago    204MB\ngrafana/grafana         8.3.4     4a34578e4374   5 weeks ago    274MB\ngrafana/grafana         latest    1d60b4b996ad   2 months ago   275MB\nprom/prometheus         latest    c10e9cbf22cd   3 months ago   194MB\n

    We can now remove the DockerHub pulled image

    docker image rm rahulguptajss/harvest\nUntagged: rahulguptajss/harvest:latest\nUntagged: rahulguptajss/harvest@sha256:6ff88153812ebb61e9dd176182bf8a792cde847748c5654d65f4630e61b1f3ae\n\ndocker image ls -a\nREPOSITORY             TAG       IMAGE ID       CREATED        SIZE\ncr.netapp.io/harvest   latest    80061bbe1c2c   10 days ago    85.4MB\nprom/prometheus        v2.33.1   e528f02c45a6   3 weeks ago    204MB\ngrafana/grafana        8.3.4     4a34578e4374   5 weeks ago    274MB\n
    "},{"location":"help/faq/#ports","title":"Ports","text":""},{"location":"help/faq/#what-ports-does-harvest-use","title":"What ports does Harvest use?","text":""},{"location":"help/faq/#answer_8","title":"Answer","text":"

    The default ports are shown in the following diagram.

    • Harvest's pollers use ZAPI or REST to communicate with ONTAP on port 443
    • Each poller exposes the Prometheus port defined in your harvest.yml file
    • Prometheus scrapes each poller-exposed Prometheus port (promPort1, promPort2, promPort3)
    • Prometheus's default port is 9090
    • Grafana's default port is 3000
    "},{"location":"help/faq/#snapmirror_labels","title":"Snapmirror_labels","text":""},{"location":"help/faq/#why-do-my-snapmirror_labels-have-an-empty-source_node","title":"Why do my snapmirror_labels have an empty source_node?","text":""},{"location":"help/faq/#answer_9","title":"Answer","text":"

Snapmirror relationships have a source and a destination node. ONTAP, however, does not expose the source side of that relationship; only the destination side is returned via the ZAPI/REST APIs. Because of that, the Prometheus metric named snapmirror_labels will have an empty source_node label.

    The dashboards show the correct value for source_node since we join multiple metrics in the Grafana panels to synthesize that information.

In short: don't rely on snapmirror_labels for source_node labels. If you need source_node, you will need to do a similar join, as the Snapmirror dashboard does.

    See https://github.com/NetApp/harvest/issues/1192 for more information and linked pull requests for REST and ZAPI.

    "},{"location":"help/faq/#nfs-clients-dashboard","title":"NFS Clients Dashboard","text":""},{"location":"help/faq/#why-do-my-nfs-clients-dashboard-have-no-data","title":"Why do my NFS Clients Dashboard have no data?","text":""},{"location":"help/faq/#answer_10","title":"Answer","text":"

The NFS Clients dashboard is only available through the Rest collector; this information is not available through Zapi. You must enable the Rest collector in your harvest.yml config and uncomment the nfs_clients.yaml section in your default.yaml file.

    Note: Enabling nfs_clients.yaml may slow down data collection.

    "},{"location":"help/faq/#file-analytics-dashboard","title":"File Analytics Dashboard","text":""},{"location":"help/faq/#why-do-my-file-analytics-dashboard-have-no-data","title":"Why do my File Analytics Dashboard have no data?","text":""},{"location":"help/faq/#answer_11","title":"Answer","text":"

    This dashboard requires ONTAP 9.8+ and the APIs are only available via REST. Please enable the REST collector in your harvest config. To collect and display usage data such as capacity analytics, you need to enable File System Analytics on a volume. Please see https://docs.netapp.com/us-en/ontap/task_nas_file_system_analytics_enable.html for more details.

    "},{"location":"help/faq/#why-do-i-have-volume-sis-stat-panel-empty-in-volume-dashboard","title":"Why do I have Volume Sis Stat panel empty in Volume dashboard?","text":""},{"location":"help/faq/#answer_12","title":"Answer","text":"

    This panel requires ONTAP 9.12+ and the APIs are only available via REST. Enable the REST collector in your harvest.yml config.

    "},{"location":"help/log-collection/","title":"Harvest Logs Collection Guide","text":"

    This guide will help you collect Harvest logs on various platforms. Follow the instructions specific to your platform. If you would like to share the collected logs with the Harvest team, please email them to ng-harvest-files@netapp.com.

    "},{"location":"help/log-collection/#rpm-deb-and-native-installations","title":"RPM, DEB, and Native Installations","text":"

    For RPM, DEB, and native installations, use the following command to create a compressed tar file containing the logs:

    tar -czvf harvest_logs.tar.gz -C /var/log harvest\n

    This command will create a file named harvest_logs.tar.gz with the contents of the /var/log/harvest directory.

    "},{"location":"help/log-collection/#docker-container","title":"Docker Container","text":"

    For Docker containers, first, identify the container ID for your Harvest instance. Then, replace <container_id> with the actual container ID in the following command:

    docker logs <container_id> &> harvest_logs.txt && tar -czvf harvest_logs.tar.gz harvest_logs.txt\n

    This command will create a file named harvest_logs.tar.gz containing the logs from the specified container.

    "},{"location":"help/log-collection/#nabox","title":"NABox","text":"

    For NABox installations, ssh into your nabox instance, and use the following command to create a compressed tar file containing the logs:

    dc logs nabox-api > nabox-api.log; dc logs nabox-harvest2 > nabox-harvest2.log;\\\ntar -czf nabox-logs-`date +%Y-%m-%d_%H:%M:%S`.tgz *\n

    This command will create a file named nabox-logs-$date.tgz containing the nabox-api and Harvest poller logs.

    For more information, see the NABox documentation on collecting logs

    "},{"location":"help/troubleshooting/","title":"Checklists for Harvest","text":"

    A set of steps to go through when something goes wrong.

    "},{"location":"help/troubleshooting/#what-version-of-ontap-do-you-have","title":"What version of ONTAP do you have?","text":"

    Run the following, replacing <poller> with the poller from your harvest.yaml

    ./bin/harvest zapi -p <poller> show system\n

    Copy and paste the output into your issue. Here's an example:

    ./bin/harvest -p infinity show system\nconnected to infinity (NetApp Release 9.8P2: Tue Feb 16 03:49:46 UTC 2021)\n[results]                             -                                   *\n  [build-timestamp]                   -                          1613447386\n  [is-clustered]                      -                                true\n  [version]                           - NetApp Release 9.8P2: Tue Feb 16 03:49:46 UTC 2021\n  [version-tuple]                     -                                   *\n    [system-version-tuple]            -                                   *\n      [generation]                    -                                   9\n      [major]                         -                                   8\n      [minor]                         -                                   0\n

    "},{"location":"help/troubleshooting/#install-fails","title":"Install fails","text":"

    I tried to install and ...

    "},{"location":"help/troubleshooting/#how-do-i-tell-if-harvest-is-doing-anything","title":"How do I tell if Harvest is doing anything?","text":"

    You believe Harvest is installed fine, but it's not working.

    • Post the contents of your harvest.yml

    Try validating your harvest.yml with yamllint, like so: yamllint -d relaxed harvest.yml (if you do not have yamllint installed, look here).

    There should be no errors - warnings like the following are fine:

    harvest.yml\n  64:1      warning  too many blank lines (3 > 0)  (empty-lines)\n

    • How did you start Harvest?

    • What do you see in /var/log/harvest/*

    • What does ps aux | grep poller show?

    • If you are using Prometheus, try hitting Harvest's Prometheus endpoint like so:

    curl http://machine-this-is-running-harvest:prometheus-port-in-harvest-yaml/metrics

    • Check file ownership (user/group) and file permissions of your templates, executable, etc in your Harvest home directory (ls -la /opt/harvest/) See also.
    "},{"location":"help/troubleshooting/#how-do-i-start-harvest-in-debug-mode","title":"How do I start Harvest in debug mode?","text":"

    Use the --debug flag when starting a poller. In debug mode, the poller collects metrics but does not write them to databases. Another useful flag is --foreground, in which case all log messages are written to the terminal. Note that you can only start one poller in foreground mode.

    Finally, you can use --loglevel=1 or --verbose, if you want to see a lot of log messages. For even more, you can use --loglevel=0 or --trace.

    Examples:

    bin/harvest start $POLLER_NAME --foreground --debug --loglevel=0\nor\nbin/harvest start $POLLER_NAME --loglevel=1 --collectors Zapi --objects Qtree\n
    "},{"location":"help/troubleshooting/#how-do-i-start-harvest-in-foreground-mode","title":"How do I start Harvest in foreground mode?","text":"

    See How do I start Harvest in debug mode?

    "},{"location":"help/troubleshooting/#how-do-i-start-my-poller-with-only-one-collector","title":"How do I start my poller with only one collector?","text":"

    Since a poller starts a large number of collectors (each collector-object pair is treated as a collector), it is often hard to find the issue you are looking for in the abundance of log messages. It might therefore be useful to start a single collector-object pair when troubleshooting. You can use the --collectors and --objects flags for that. For example, start only the ZapiPerf collector with the SystemNode object:

    harvest start my_poller --collectors ZapiPerf --objects SystemNode

    (To find the correct object name, check the conf/COLLECTOR/default.yaml file of the collector.)
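
    For example, to list the objects defined for the ZapiPerf collector (a sketch, assuming the collector's default.yaml keeps its object list under an objects: key):

    sed -n '/^objects:/,$p' conf/zapiperf/default.yaml\n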

    "},{"location":"help/troubleshooting/#errors-in-the-log-file","title":"Errors in the log file","text":""},{"location":"help/troubleshooting/#some-of-my-clusters-are-not-showing-up-in-grafana","title":"Some of my clusters are not showing up in Grafana","text":"

    The logs show these errors:

    context deadline exceeded (Client.Timeout or context cancellation while reading body)\n\nand then for each volume\n\nskipped instance [9c90facd-3730-48f1-b55c-afacc35c6dbe]: not found in cache\n

    "},{"location":"help/troubleshooting/#workarounds","title":"Workarounds","text":"

    context deadline exceeded (Client.Timeout or context cancellation while reading body)

    means Harvest is timing out when talking to your cluster. This sometimes happens when you have a large number of resources (e.g. volumes).

    There are a few parameters that you can change to prevent this from happening. You can do this by editing the subtemplate of the affected resource. For example, you can add the parameters in conf/zapiperf/cdot/9.8.0/volume.yaml or conf/zapi/cdot/9.8.0/volume.yaml. If the errors happen for most of the resources, you can add them to the main template of the collector (conf/zapi/default.yaml or conf/zapiperf/default.yaml) to apply them to all objects.

    "},{"location":"help/troubleshooting/#client_timeout","title":"client_timeout","text":"

    Increase the client_timeout value by adding a client_timeout line at the beginning of the template, like so:

    # increase the timeout to 1 minute\nclient_timeout: 1m\n
    "},{"location":"help/troubleshooting/#batch_size","title":"batch_size","text":"

    Decrease the batch_size value by adding a batch_size line at the beginning of the template. The default value of this parameter is 500. By decreasing it, the collector will fetch fewer instances during each API request. Example:

    # decrease number of instances to 200 for each API request\nbatch_size: 200\n
    "},{"location":"help/troubleshooting/#schedule","title":"schedule","text":"

    If nothing else helps, you can increase the data poll interval of the collector (default is 1m for ZapiPerf and 3m for Zapi). You can do this either by adding a schedule attribute to the template or, if it already exists, by changing the - data line.

    Example for ZapiPerf:

    # increase data poll frequency to 2 minutes\nschedule:\n- counter: 20m\n- instance: 10m\n- data: 2m\n
    Example for Zapi:

    # increase data poll frequency to 5 minutes\nschedule:\n- instance: 10m\n- data: 5m\n
    "},{"location":"help/troubleshooting/#prometheus-http-service-discovery-doesnt-work","title":"Prometheus HTTP Service Discovery doesn't work","text":"

    Some things to check:

    • Make sure the Harvest admin node is started via bin/harvest admin start and there are no errors printed to the console
    • Make sure your harvest.yml includes a valid Admin: section
    • Ensure bin/harvest doctor runs without error. If it does, include the output of bin/harvest doctor --print in Slack or your GitHub issue
    • Ensure your /etc/prometheus/prometheus.yml has a scrape config with http_sd_configs and it points to the admin node's ip:port
    • Ensure there are no errors in your poller logs (/var/log/harvest) related to the poller publishing its Prometheus port to the admin node. Something like this should help narrow it down: grep -R -E \"error.*poller.go\" /var/log/harvest/
      • If you see errors like dial udp 1.1.1.1:80: connect: network is unreachable, make sure your machine has a default route setup for your main interface
    • If the admin node is running, your harvest.yml includes the Admin: section, and your pollers are using the Prometheus exporter you should be able to curl the admin node endpoint for a list of running Harvest pollers like this:
      curl -s -k https://localhost:8887/api/v1/sd | jq .\n[\n  {\n    \"targets\": [\n      \":12994\"\n    ],\n    \"labels\": {\n      \"__meta_poller\": \"F2240-127-26\"\n    }\n  },\n  {\n    \"targets\": [\n      \":39000\"\n    ],\n    \"labels\": {\n      \"__meta_poller\": \"simple1\"\n    }\n  }\n]\n
    "},{"location":"help/troubleshooting/#how-do-i-run-harvest-commands-in-nabox","title":"How do I run Harvest commands in NAbox?","text":"

    NAbox is a vApp running Alpine Linux and Docker. NAbox runs Harvest as a set of Docker containers. That means to execute Harvest commands on NAbox, you need to exec into the container by following these steps.

    1. ssh into your NAbox instance

    2. Start bash in the Harvest container

    dc exec nabox-harvest2 bash\n

    You should see no errors and your prompt will change to something like root@nabox-harvest2:/app#

    Below are examples of running Harvest commands against a cluster named umeng-aff300-05-06. Replace with your cluster name as appropriate.

    # inside container\n\n> cat /etc/issue\nDebian GNU/Linux 10 \\n \\l\n\n> cd /netapp-harvest\nbin/harvest version\nharvest version 22.08.0-1 (commit 93db10a) (build date 2022-08-19T09:10:05-0400) linux/amd64\nchecking GitHub for latest... you have the latest \u2713\n\n# harvest.yml is found at /conf/harvest.yml\n\n> bin/zapi --poller umeng-aff300-05-06 show system\nconnected to umeng-aff300-05-06 (NetApp Release 9.9.1P9X3: Tue Apr 19 19:05:24 UTC 2022)\n[results]                                          -                                   *\n  [build-timestamp]                                -                          1650395124\n[is-clustered]                                   -                                true\n[version]                                        - NetApp Release 9.9.1P9X3: Tue Apr 19 19:05:24 UTC 2022\n[version-tuple]                                  -                                   *\n    [system-version-tuple]                         -                                   *\n      [generation]                                 -                                   9\n[major]                                      -                                   9\n[minor]                                      -                                   1\n\nbin/zapi -p umeng-aff300-05-06 show data --api environment-sensors-get-iter --max 10000 > env-sensor.xml\n

    The env-sensor.xml file will be written to the /opt/packages/harvest2 directory on the host.

    If needed, you can scp that file off NAbox and share it with the Harvest team.

    "},{"location":"help/troubleshooting/#rest-collector-auth-errors","title":"Rest Collector Auth errors?","text":"

    If you are seeing errors like User is not authorized or not authorized for that command while using the Rest collector, follow the steps below to make sure permissions are set correctly.

    1. Verify that the user has permissions for the relevant authentication method.

    security login show -vserver ROOT_VSERVER -user-or-group-name harvest2 -application http

    1. Verify that the user has read-only permissions to the API.
    security login role show -role harvest2-role\n

    1. Verify that an entry is present for the following command.
    vserver services web access show -role harvest2-role -name rest\n

    If it is missing, add an entry with the following command:

    vserver services web access create -vserver umeng-aff300-01-02 -name rest -role harvest2-role\n
    "},{"location":"help/troubleshooting/#why-do-i-have-gaps-in-my-dashboards","title":"Why do I have gaps in my dashboards?","text":"

    Here are possible reasons and things to check:

    • Prometheus scrape_interval found via (http://$promIP:9090/config)
    • Prometheus log files
    • Harvest collector scrape interval check your:
      • conf/zapi/default.yaml - default for config is 3m
      • conf/zapiperf/default.yaml - default of perf is 1m
    • Check your poller logs for any errors or lag messages
    • When using VictoriaMetrics, make sure your Prometheus exporter config includes sort_labels: true, since VictoriaMetrics will mark series stale if the label order changes between polls.
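
    For example, a minimal sketch of a Prometheus exporter section with sort_labels enabled (the exporter name and port range are illustrative):

    Exporters:\n  prometheus1:\n    exporter: Prometheus\n    addr: 0.0.0.0\n    port_range: 2000-2030\n    sort_labels: true\n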
    "},{"location":"help/troubleshooting/#nabox","title":"NABox","text":"

    For NABox installations, refer to the NABox documentation on troubleshooting:

    NABox Troubleshooting

    "},{"location":"install/containerd/","title":"Containerized Harvest on Mac using containerd","text":"

    Harvest runs natively on a Mac already. If you need that, git clone and use GOOS=darwin make build.

    This page describes how to run Harvest on your Mac in a containerized environment (Compose, K8, etc.). The documentation below uses Rancher Desktop, but lima works just as well. Keep in mind, both of them are considered alpha. They work, but are still undergoing a lot of change.

    "},{"location":"install/containerd/#setup","title":"Setup","text":"

    We're going to:

    • Install and start Rancher Desktop
    • (Optional) Create a Harvest Docker image by following Harvest's existing documentation
    • Generate a Compose file following Harvest's existing documentation
    • Concatenate the Prometheus/Grafana compose file with the Harvest compose file, since Rancher doesn't support multiple compose files yet
    • Fix up the concatenated file
    • Start containers

    Under the hood, Rancher is using lima. If you want to skip Rancher and use lima directly that works too.

    "},{"location":"install/containerd/#install-and-start-rancher-desktop","title":"Install and Start Rancher Desktop","text":"

    We'll use brew to install Rancher.

    brew install rancher\n

    After Rancher Desktop installs, start it (Cmd + Space, type: Rancher) and wait for it to start a VM and download images. Once everything is started, continue.

    "},{"location":"install/containerd/#create-harvest-docker-image","title":"Create Harvest Docker image","text":"

    You only need to create a new image if you've made changes to Harvest. If you just want to use the latest version of Harvest, skip this step.

    These are the same steps outlined in Building Harvest Docker Image, except we replace docker build with nerdctl like so:

    nerdctl build -f container/onePollerPerContainer/Dockerfile -t harvest:latest . --no-cache 
    "},{"location":"install/containerd/#generate-a-harvest-compose-file","title":"Generate a Harvest compose file","text":"

    Follow the existing documentation to set up your harvest.yml file

    Create your harvest-compose.yml file like this:

    docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest \\\ngenerate docker full \\\n--output harvest-compose.yml # --image tag, if you built a new image above\n
    "},{"location":"install/containerd/#combine-prometheusgrafana-and-harvest-compose-file","title":"Combine Prometheus/Grafana and Harvest compose file","text":"

    Currently nerdctl compose does not support running with multiple compose files, so we'll concat the prom-stack.yml and the harvest-compose.yml into one file and then fix it up.

    cat prom-stack.yml harvest-compose.yml > both.yml\n\n# jump to line 45 and remove redundant version and services lines (lines 45, 46, 47 should be removed)\n# fix indentation of remaining lines - in vim, starting at line 46\n# Shift V\n# Shift G\n# Shift .\n# Esc\n# Shift ZZ\n
    "},{"location":"install/containerd/#start-containers","title":"Start containers","text":"
    nerdctl compose -f both.yml up -d\n\nnerdctl ps -a\n\nCONTAINER ID    IMAGE                               COMMAND                   CREATED               STATUS    PORTS                       NAMES\nbd7131291960    docker.io/grafana/grafana:latest    \"/run.sh\"                 About a minute ago    Up        0.0.0.0:3000->3000/tcp      grafana\nf911553a14e2    docker.io/prom/prometheus:latest    \"/bin/prometheus --c\u2026\"    About a minute ago    Up        0.0.0.0:9090->9090/tcp      prometheus\n037a4785bfad    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:15007->15007/tcp    poller_simple7_v21.11.0513\n03fb951cfe26    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    59 seconds ago        Up        0.0.0.0:15025->15025/tcp    poller_simple25_v21.11.0513\n049d0d65b434    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:16050->16050/tcp    poller_simple49_v21.11.0513\n0b77dd1bc0ff    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:16067->16067/tcp    poller_u2_v21.11.0513\n1cabd1633c6f    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:15015->15015/tcp    poller_simple15_v21.11.0513\n1d78c1bf605f    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:15062->15062/tcp    poller_sandhya_v21.11.0513\n286271eabc1d    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:15010->15010/tcp    poller_simple10_v21.11.0513\n29710da013d4    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:12990->12990/tcp    poller_simple1_v21.11.0513\n321ae28637b6    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:15020->15020/tcp    poller_simple20_v21.11.0513\n39c91ae54d68    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:15053->15053/tcp    poller_simple-53_v21.11.0513\n\nnerdctl logs poller_simple1_v21.11.0513\nnerdctl compose -f both.yml down\n\n# http://localhost:9090/targets   Prometheus\n# http://localhost:3000           Grafana\n# http://localhost:15062/metrics  Poller metrics\n
    "},{"location":"install/containers/","title":"Docker","text":""},{"location":"install/containers/#overview","title":"Overview","text":"

    Harvest is container-ready and supports several deployment options:

    • Stand-up Prometheus, Grafana, and Harvest via Docker Compose. Choose this if you want to hit the ground running. Install, volume and network mounts automatically handled.

    • Stand-up Harvest via Docker Compose that offers more flexibility in configuration. Choose this if you only want to run Harvest containers. Since you pick-and-choose what gets built and how it's deployed, stronger familiarity with containers is recommended.

    • If you prefer Ansible, David Blackwell created an Ansible script that stands up Harvest, Grafana, and Prometheus.

    • Want to run Harvest on a Mac via containerd and Rancher Desktop? We got you covered.

    • K8 Deployment via Kompose

    "},{"location":"install/containers/#docker-compose","title":"Docker Compose","text":"

    This is a quick way to install and get started with Harvest. Follow the four steps below to:

    • Setup Harvest, Grafana, and Prometheus via Docker Compose
    • Harvest dashboards are automatically imported and setup in Grafana with a Prometheus data source
    • A separate poller container is created for each monitored cluster
    • All pollers are automatically added as Prometheus scrape targets
    "},{"location":"install/containers/#setup-harvestyml","title":"Setup harvest.yml","text":"
    • Create a harvest.yml file with your cluster details, below is an example with annotated comments. Modify as needed for your scenario.

    This config is using the Prometheus exporter port_range feature, so you don't have to manage the Prometheus exporter port mappings for each poller.

    Exporters:\n  prometheus1:\n    exporter: Prometheus\n    addr: 0.0.0.0\n    port_range: 2000-2030  # <====== adjust to be greater than equal to the number of monitored clusters\n\nDefaults:\n  collectors:\n    - Zapi\n    - ZapiPerf\n    - EMS\n  use_insecure_tls: true   # <====== adjust as needed to enable/disable TLS checks \n  exporters:\n    - prometheus1\n\nPollers:\n  infinity:                # <====== add your cluster(s) here, they use the exporter defined three lines above\n    datacenter: DC-01\n    addr: 10.0.1.2\n    auth_style: basic_auth\n    username: user\n    password: 123#abc\n  # next cluster ....  \n
    "},{"location":"install/containers/#generate-a-docker-compose-for-your-pollers","title":"Generate a Docker compose for your Pollers","text":"
    • Generate a Docker compose file from your harvest.yml
    docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest \\\ngenerate docker full \\\n--output harvest-compose.yml\n

    By default, the above command uses the harvest configuration file (harvest.yml) located in the current directory. If your harvest config is named something else or located elsewhere, see below.

    What if my harvest configuration file is somewhere else or not named harvest.yml

    Use the following docker run command, updating the HYML variable with the absolute path to your harvest.yml.

    HYML=\"/opt/custom_harvest.yml\" \\\ndocker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"${HYML}:${HYML}\" \\\nghcr.io/netapp/harvest:latest \\\ngenerate docker full \\\n--output harvest-compose.yml \\\n--config \"${HYML}\"\n

    generate docker full does two things:

    1. Creates a Docker compose file with a container for each Harvest poller defined in your harvest.yml
    2. Creates a matching Prometheus service discovery file for each Harvest poller (located in container/prometheus/harvest_targets.yml). Prometheus uses this file to scrape the Harvest pollers.
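
    The generated targets file uses Prometheus's standard file_sd format; an illustrative sketch is shown below (poller names and ports will match your harvest.yml, not these placeholders):

    - targets: ['poller-u2:12990']\n- targets: ['poller-infinity:12991']\n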
    "},{"location":"install/containers/#start-everything","title":"Start everything","text":"

    Bring everything up

    docker-compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n
    "},{"location":"install/containers/#prometheus-and-grafana","title":"Prometheus and Grafana","text":"

    The prom-stack.yml compose file creates a frontend and backend network. Prometheus and Grafana publish their admin ports on the front-end network and are routable to the local machine. By default, the Harvest pollers are part of the backend network and also expose their Prometheus web end-points. If you do not want their end-points exposed, add the --port=false option to the generate sub-command in the previous step.

    "},{"location":"install/containers/#prometheus","title":"Prometheus","text":"

    After bringing up the prom-stack.yml compose file, you can check Prometheus's list of targets at http://IP_OF_PROMETHEUS:9090/targets.

    "},{"location":"install/containers/#grafana","title":"Grafana","text":"

    After bringing up the prom-stack.yml compose file, you can access Grafana at http://IP_OF_GRAFANA:3000.

    You will be prompted to create a new password the first time you log in. Grafana's default credentials are

    username: admin\npassword: admin\n
    "},{"location":"install/containers/#manage-pollers","title":"Manage pollers","text":""},{"location":"install/containers/#how-do-i-add-a-new-poller","title":"How do I add a new poller?","text":"
    1. Add poller to harvest.yml
    2. Regenerate compose file by running harvest generate
    3. Run docker compose up, for example,
    docker-compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n
    "},{"location":"install/containers/#stop-all-containers","title":"Stop all containers","text":"
    docker-compose -f prom-stack.yml -f harvest-compose.yml down\n

    If you encounter the following error message while attempting to stop your Docker containers using docker-compose down

    Error response from daemon: Conflict. The container name \"/poller-u2\" is already in use by container\n

    This error is likely due to running docker-compose down from a different directory than where you initially ran docker-compose up.

    To resolve this issue, make sure to run the docker-compose down command from the same directory where you ran docker-compose up. This will ensure that Docker can correctly match the container names and IDs with the directory you are working in. Alternatively, you can stop the Harvest, Prometheus, and Grafana containers by using the following command:

    docker ps -aq --filter \"name=prometheus\" --filter \"name=grafana\" --filter \"name=poller-\" | xargs docker stop | xargs docker rm\n

    Note: Deleting or stopping Docker containers does not remove the data stored in Docker volumes.

    "},{"location":"install/containers/#upgrade-harvest","title":"Upgrade Harvest","text":"

    Note: If you want to keep your historical Prometheus data, and you set up your Docker Compose workflow before Harvest 22.11, please read how to migrate your Prometheus volume before continuing with the upgrade steps below.

    To upgrade Harvest:

    1. Retrieve the most recent version of the Harvest Docker image by executing the following command. This is needed since the new version may contain new templates, dashboards, or other files not included in the Docker image.

      docker pull ghcr.io/netapp/harvest\n

    2. Stop all containers

    3. Regenerate your harvest-compose.yml file by running harvest generate. By default, generate will use the latest tag. If you want to upgrade to a nightly build, see the twisty.

      I want to upgrade to a nightly build

      Tell the generate cmd to use a different tag like so:

      docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest:nightly \\\ngenerate docker full \\\n--image ghcr.io/netapp/harvest:nightly \\\n--output harvest-compose.yml\n
    4. Restart your containers using the following:

    docker-compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n
    "},{"location":"install/containers/#building-harvest-docker-image","title":"Building Harvest Docker Image","text":"

    Building a custom Harvest Docker image is only necessary if you require a tailored solution. If your intention is to run Harvest using Docker without any customizations, please refer to the Overview section above.

    docker build -f container/onePollerPerContainer/Dockerfile -t harvest:latest . --no-cache\n
    "},{"location":"install/harvest-containers/","title":"Harvest containers","text":"

    Follow this method if your goal is to establish a separate Harvest container for each poller defined in your harvest.yml file. Please note that these containers must be incorporated into your current infrastructure, which might include systems like Prometheus or Grafana.

    "},{"location":"install/harvest-containers/#setup-harvestyml","title":"Setup harvest.yml","text":"
    • Create a harvest.yml file with your cluster details, below is an example with annotated comments. Modify as needed for your scenario.

    This config is using the Prometheus exporter port_range feature, so you don't have to manage the Prometheus exporter port mappings for each poller.

    Exporters:\n  prometheus1:\n    exporter: Prometheus\n    addr: 0.0.0.0\n    port_range: 2000-2030  # <====== adjust to be greater than equal to the number of monitored clusters\n\nDefaults:\n  collectors:\n    - Zapi\n    - ZapiPerf\n    - EMS\n  use_insecure_tls: true   # <====== adjust as needed to enable/disable TLS checks \n  exporters:\n    - prometheus1\n\nPollers:\n  infinity:                # <====== add your cluster(s) here, they use the exporter defined three lines above\n    datacenter: DC-01\n    addr: 10.0.1.2\n    auth_style: basic_auth\n    username: user\n    password: 123#abc\n  # next cluster ....  \n
    "},{"location":"install/harvest-containers/#generate-a-docker-compose-for-your-pollers","title":"Generate a Docker compose for your Pollers","text":"
    • Generate a Docker compose file from your harvest.yml
    docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest \\\ngenerate docker \\\n--output harvest-compose.yml\n
    "},{"location":"install/harvest-containers/#start-everything","title":"Start everything","text":"

    Bring everything up

    docker-compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n
    "},{"location":"install/harvest-containers/#manage-pollers","title":"Manage pollers","text":""},{"location":"install/harvest-containers/#how-do-i-add-a-new-poller","title":"How do I add a new poller?","text":"
    1. Add poller to harvest.yml
    2. Regenerate compose file by running harvest generate
    3. Run docker compose up, for example,
    docker-compose -f harvest-compose.yml up -d --remove-orphans\n
    "},{"location":"install/harvest-containers/#stop-all-containers","title":"Stop all containers","text":"
    docker-compose -f harvest-compose.yml down\n
    "},{"location":"install/harvest-containers/#upgrade-harvest","title":"Upgrade Harvest","text":"

    To upgrade Harvest:

    1. Retrieve the most recent version of the Harvest Docker image by executing the following command. This is needed since the new version may contain new templates, dashboards, or other files not included in the Docker image.

      docker pull ghcr.io/netapp/harvest\n

    2. Stop all containers

    3. Regenerate your harvest-compose.yml file by running harvest generate. By default, generate will use the latest tag. If you want to upgrade to a nightly build, see the twisty.

      I want to upgrade to a nightly build

      Tell the generate cmd to use a different tag like so:

      docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest:nightly \\\ngenerate docker \\\n--image ghcr.io/netapp/harvest:nightly \\\n--output harvest-compose.yml\n
    4. Restart your containers using the following:

    docker-compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n
    "},{"location":"install/k8/","title":"K8 Deployment","text":"

    The following steps are provided for reference purposes only. Depending on the specifics of your k8 configuration, you may need to make modifications to the steps or files as necessary.

    "},{"location":"install/k8/#requirements","title":"Requirements","text":"
    • Kompose: v1.25 or higher
    "},{"location":"install/k8/#deployment","title":"Deployment","text":"
    • Local k8 Deployment
    • Cloud Deployment
    "},{"location":"install/k8/#local-k8-deployment","title":"Local k8 Deployment","text":"

    To run Harvest resources in Kubernetes, please execute the following commands:

    1. After adding your clusters to harvest.yml, generate harvest-compose.yml and prom-stack.yml.
    docker run --rm \\\n  --entrypoint \"bin/harvest\" \\\n  --volume \"$(pwd):/opt/temp\" \\\n  --volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\n  ghcr.io/netapp/harvest \\\n  generate docker full \\\n  --output harvest-compose.yml\n
    example harvest.yml

    Tools:\nExporters:\nprometheus1:\nexporter: Prometheus\nport_range: 12990-14000\nDefaults:\nuse_insecure_tls: true\ncollectors:\n- Zapi\n- ZapiPerf\nexporters:\n- prometheus1\nPollers:\nu2:\ndatacenter: u2\naddr: ADDRESS\nusername: USER\npassword: PASS\n

    harvest-compose.yml

    version: \"3.7\"\n\nservices:\n\nu2:\nimage: ghcr.io/netapp/harvest:latest\ncontainer_name: poller-u2\nrestart: unless-stopped\nports:\n- 12990:12990\ncommand: '--poller u2 --promPort 12990 --config /opt/harvest.yml'\nvolumes:\n- /Users/harvest/conf:/opt/harvest/conf\n- /Users/harvest/cert:/opt/harvest/cert\n- /Users/harvest/harvest.yml:/opt/harvest.yml\nnetworks:\n- backend\n

    1. Using kompose, convert harvest-compose.yml and prom-stack.yml into Kubernetes resources and save them as kub.yaml.
    kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\n
    kub.yaml

    ---\napiVersion: v1\nkind: Service\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.service.type: nodeport\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: grafana\nname: grafana\nspec:\nports:\n- name: \"3000\"\nport: 3000\ntargetPort: 3000\nselector:\nio.kompose.service: grafana\ntype: NodePort\nstatus:\nloadBalancer: {}\n\n---\napiVersion: v1\nkind: Service\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.service.type: nodeport\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: prometheus\nname: prometheus\nspec:\nports:\n- name: \"9090\"\nport: 9090\ntargetPort: 9090\nselector:\nio.kompose.service: prometheus\ntype: NodePort\nstatus:\nloadBalancer: {}\n\n---\napiVersion: v1\nkind: Service\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: u2\nname: u2\nspec:\nports:\n- name: \"12990\"\nport: 12990\ntargetPort: 12990\nselector:\nio.kompose.service: u2\nstatus:\nloadBalancer: {}\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.service.type: nodeport\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: grafana\nname: grafana\nspec:\nreplicas: 1\nselector:\nmatchLabels:\nio.kompose.service: grafana\nstrategy:\ntype: Recreate\ntemplate:\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.service.type: nodeport\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.network/harvest-backend: \"true\"\nio.kompose.network/harvest-frontend: \"true\"\nio.kompose.service: grafana\nspec:\ncontainers:\n- image: grafana/grafana:8.3.4\nname: grafana\nports:\n- containerPort: 3000\nresources: {}\nvolumeMounts:\n- mountPath: /var/lib/grafana\nname: grafana-data\n- mountPath: /etc/grafana/provisioning\nname: grafana-hostpath1\nrestartPolicy: Always\nvolumes:\n- hostPath:\npath: /Users/harvest\nname: grafana-data\n- hostPath:\npath: /Users/harvest/grafana\nname: grafana-hostpath1\nstatus: {}\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\ncreationTimestamp: null\nname: harvest-backend\nspec:\ningress:\n- from:\n- podSelector:\nmatchLabels:\nio.kompose.network/harvest-backend: \"true\"\npodSelector:\nmatchLabels:\nio.kompose.network/harvest-backend: \"true\"\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\ncreationTimestamp: null\nname: harvest-frontend\nspec:\ningress:\n- from:\n- podSelector:\nmatchLabels:\nio.kompose.network/harvest-frontend: \"true\"\npodSelector:\nmatchLabels:\nio.kompose.network/harvest-frontend: \"true\"\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.service.type: nodeport\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: prometheus\nname: prometheus\nspec:\nreplicas: 1\nselector:\nmatchLabels:\nio.kompose.service: 
prometheus\nstrategy:\ntype: Recreate\ntemplate:\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.service.type: nodeport\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.network/harvest-backend: \"true\"\nio.kompose.service: prometheus\nspec:\ncontainers:\n- args:\n- --config.file=/etc/prometheus/prometheus.yml\n- --storage.tsdb.path=/prometheus\n- --web.console.libraries=/usr/share/prometheus/console_libraries\n- --web.console.templates=/usr/share/prometheus/consoles\nimage: prom/prometheus:v2.33.1\nname: prometheus\nports:\n- containerPort: 9090\nresources: {}\nvolumeMounts:\n- mountPath: /etc/prometheus\nname: prometheus-hostpath0\n- mountPath: /prometheus\nname: prometheus-data\nrestartPolicy: Always\nvolumes:\n- hostPath:\npath: /Users/harvest/container/prometheus\nname: prometheus-hostpath0\n- hostPath:\npath: /Users/harvest\nname: prometheus-data\nstatus: {}\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: u2\nname: u2\nspec:\nreplicas: 1\nselector:\nmatchLabels:\nio.kompose.service: u2\nstrategy:\ntype: Recreate\ntemplate:\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.network/harvest-backend: \"true\"\nio.kompose.service: u2\nspec:\ncontainers:\n- args:\n- --poller\n- u2\n- --promPort\n- \"12990\"\n- --config\n- /opt/harvest.yml\nimage: ghcr.io/netapp/harvest:latest\nname: poller-u2\nports:\n- containerPort: 12990\nresources: {}\nvolumeMounts:\n- mountPath: /opt/harvest/conf\nname: u2-hostpath0\n- mountPath: /opt/harvest/cert\nname: u2-hostpath1\n- mountPath: /opt/harvest.yml\nname: u2-hostpath2\nrestartPolicy: Always\nvolumes:\n- hostPath:\npath: /Users/harvest/conf\nname: u2-hostpath0\n- hostPath:\npath: /Users/harvest/cert\nname: u2-hostpath1\n- hostPath:\npath: /Users/harvest/harvest.yml\nname: u2-hostpath2\nstatus: {}\n

    1. Apply kub.yaml to k8.
    kubectl apply --filename kub.yaml\n
    1. List running pods.
    kubectl get pods\n
    pods

    NAME                          READY   STATUS    RESTARTS   AGE\nprometheus-666fc7b64d-xfkvk   1/1     Running   0          43m\ngrafana-7cd8bdc9c9-wmsxh      1/1     Running   0          43m\nu2-7dfb76b5f6-zbfm6           1/1     Running   0          43m\n

    "},{"location":"install/k8/#remove-all-harvest-resources-from-k8","title":"Remove all Harvest resources from k8","text":"

    kubectl delete --filename kub.yaml

    "},{"location":"install/k8/#helm-chart","title":"Helm Chart","text":"

    Generate helm charts

    kompose convert --file harvest-compose.yml --file prom-stack.yml --chart --volumes hostPath --out harvestchart\n
    "},{"location":"install/k8/#cloud-deployment","title":"Cloud Deployment","text":"

    We will use configMap to generate Kubernetes resources for deploying Harvest pollers in a cloud environment. Please note the following assumptions for the steps below:

    • The steps provided are solely for the deployment of Harvest poller pods. Separate configurations are required to set up Prometheus and Grafana.
    • Networking between Harvest and Prometheus must be configured, and this can be accomplished by adding the network configuration in harvest-compose.yml.

    • After configuring the clusters in harvest.yml, generate harvest-compose.yml. We also want to remove the conf directory from the harvest-compose.yml file, otherwise kompose will create an empty configMap for it. We'll remove the conf directory by commenting out that line using sed.

    docker run --rm \\\n  --entrypoint \"bin/harvest\" \\\n  --volume \"$(pwd):/opt/temp\" \\\n  --volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\n  ghcr.io/netapp/harvest \\\n  generate docker full \\\n  --output harvest-compose.yml\n\nsed -i '/\\/conf/s/^/#/g' harvest-compose.yml\n
    harvest-compose.yml

    version: \"3.7\"\n\nservices:\n\nu2:\nimage: ghcr.io/netapp/harvest:latest\ncontainer_name: poller-u2\nrestart: unless-stopped\nports:\n- 12990:12990\ncommand: '--poller u2 --promPort 12990 --config /opt/harvest.yml'\nvolumes:\n#      - /Users/harvest/conf:/opt/harvest/conf\n- /Users/harvest/cert:/opt/harvest/cert\n- /Users/harvest/harvest.yml:/opt/harvest.yml\n

    1. Using kompose, convert harvest-compose.yml into Kubernetes resources and save them as kub.yaml.
    kompose convert --file harvest-compose.yml --volumes configMap -o kub.yaml\n
    kub.yaml

    ---\napiVersion: v1\nkind: Service\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --volumes configMap -o kub.yaml\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: u2\nname: u2\nspec:\nports:\n- name: \"12990\"\nport: 12990\ntargetPort: 12990\nselector:\nio.kompose.service: u2\nstatus:\nloadBalancer: {}\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --volumes configMap -o kub.yaml\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: u2\nname: u2\nspec:\nreplicas: 1\nselector:\nmatchLabels:\nio.kompose.service: u2\nstrategy:\ntype: Recreate\ntemplate:\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --volumes configMap -o kub.yaml\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.network/harvest-default: \"true\"\nio.kompose.service: u2\nspec:\ncontainers:\n- args:\n- --poller\n- u2\n- --promPort\n- \"12990\"\n- --config\n- /opt/harvest.yml\nimage: ghcr.io/netapp/harvest:latest\nname: poller-u2\nports:\n- containerPort: 12990\nresources: {}\nvolumeMounts:\n- mountPath: /opt/harvest/cert\nname: u2-cm0\n- mountPath: /opt/harvest.yml\nname: u2-cm1\nsubPath: harvest.yml\nrestartPolicy: Always\nvolumes:\n- configMap:\nname: u2-cm0\nname: u2-cm0\n- configMap:\nitems:\n- key: harvest.yml\npath: harvest.yml\nname: u2-cm1\nname: u2-cm1\nstatus: {}\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\ncreationTimestamp: null\nlabels:\nio.kompose.service: u2\nname: u2-cm0\n\n---\napiVersion: v1\ndata:\nharvest.yml: |+\nTools:\nExporters:\nprometheus1:\nexporter: Prometheus\nport_range: 12990-14000\nadd_meta_tags: false\nDefaults:\nuse_insecure_tls: true\nprefer_zapi: true\nPollers:\n\nu2:\ndatacenter: u2\naddr: ADDRESS\nusername: USER\npassword: PASS\ncollectors:\n- Rest\nexporters:\n- prometheus1\n\nkind: ConfigMap\nmetadata:\nannotations:\nuse-subpath: \"true\"\ncreationTimestamp: null\nlabels:\nio.kompose.service: u2\nname: u2-cm1\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\ncreationTimestamp: null\nname: harvest-default\nspec:\ningress:\n- from:\n- podSelector:\nmatchLabels:\nio.kompose.network/harvest-default: \"true\"\npodSelector:\nmatchLabels:\nio.kompose.network/harvest-default: \"true\"\n

    1. Apply kub.yaml to k8.
    kubectl apply --filename kub.yaml\n
    1. List running pods.
    kubectl get pods\n
    pods

    NAME                  READY   STATUS    RESTARTS   AGE\nu2-6864cc7dbc-v6444   1/1     Running   0          6m27s\n

    "},{"location":"install/k8/#remove-all-harvest-resources-from-k8_1","title":"Remove all Harvest resources from k8","text":"

    kubectl delete --filename kub.yaml

    "},{"location":"install/k8/#helm-chart_1","title":"Helm Chart","text":"

    Generate helm charts

    kompose convert --file harvest-compose.yml --chart --volumes configMap --out harvestchart\n
    "},{"location":"install/native/","title":"Native","text":""},{"location":"install/native/#native","title":"Native","text":"

    Visit the Releases page and copy the tar.gz link for the latest release. For example, to download the v22.08.0 release:

    wget https://github.com/NetApp/harvest/releases/download/v22.08.0/harvest-22.08.0-1_linux_amd64.tar.gz\ntar -xvf harvest-22.08.0-1_linux_amd64.tar.gz\ncd harvest-22.08.0-1_linux_amd64\n\n# Run Harvest with the default unix localhost collector\nbin/harvest start\n

    With curl

    If you don't have wget installed, you can use curl like so:

    curl -L -O https://github.com/NetApp/harvest/releases/download/v22.08.0/harvest-22.08.0-1_linux_amd64.tar.gz\n

    It's best to run Harvest as a non-root user. Make sure the user running Harvest can write to /var/log/harvest/ or tell Harvest to write the logs somewhere else with the HARVEST_LOGS environment variable.
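
    For example, a sketch of redirecting logs to a directory the Harvest user can write to (the path is illustrative):

    mkdir -p /home/harvest/logs\nHARVEST_LOGS=/home/harvest/logs bin/harvest start\n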

    If something goes wrong, examine the log files in /var/log/harvest, check out the troubleshooting section on the wiki, and jump onto Discord to ask for help.

    "},{"location":"install/overview/","title":"Overview","text":"

    Get up and running with Harvest on your preferred platform. We provide pre-compiled binaries for Linux, RPMs, and Debs, as well as prebuilt container images for both nightly and stable releases.

    • Binaries for Linux
    • RPM and Debs
    • Containers
    "},{"location":"install/overview/#nabox","title":"Nabox","text":"

    Instructions on how to install Harvest via NAbox.

    "},{"location":"install/overview/#source","title":"Source","text":"

    To build Harvest from source code follow these steps.

    1. git clone https://github.com/NetApp/harvest.git
    2. cd harvest
    3. check the version of go required in the go.mod file
    4. ensure you have a working Go environment at that version or newer. Go installs found here.
    5. make build (if you want to run Harvest from a Mac use GOOS=darwin make build)
    6. bin/harvest version

    Check out the Makefile for other targets of interest.
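
    Putting the steps above together, a typical from-source build looks like this (a sketch; use GOOS=darwin make build on a Mac):

    git clone https://github.com/NetApp/harvest.git\ncd harvest\nmake build\nbin/harvest version\n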

    "},{"location":"install/package-managers/","title":"Package Managers","text":""},{"location":"install/package-managers/#redhat","title":"Redhat","text":"

    Installation and upgrade of the Harvest package may require root or administrator privileges

    Download the latest rpm of Harvest from the releases tab and install or upgrade with yum.

    sudo yum install harvest.XXX.rpm\n

    Once the installation has finished, edit the harvest.yml configuration file located in /opt/harvest/harvest.yml

    After editing /opt/harvest/harvest.yml, manage Harvest with systemctl start|stop|restart harvest.

    After upgrade, re-import all dashboards (either bin/harvest grafana import cli or via the Grafana UI) to get any new enhancements in dashboards.

    To ensure that you don't run into permission issues, make sure you manage Harvest using systemctl instead of running the harvest binary directly.
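
    For example (a sketch; the harvest systemd service is created by the package install, as noted below):

    sudo systemctl restart harvest\nsudo systemctl status harvest\n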

    Changes install makes
    • Directories /var/log/harvest/ and /var/log/run/ are created
    • A harvest user and group are created and the installed files are chowned to harvest
    • Systemd /etc/systemd/system/harvest.service file is created and enabled
    "},{"location":"install/package-managers/#debian","title":"Debian","text":"

    Installation and upgrade of the Harvest package may require root or administrator privileges

    Download the latest deb of Harvest from the releases tab and install or upgrade with apt.

    sudo apt update\nsudo apt install|upgrade ./harvest-<RELEASE>.amd64.deb  \n

    Once the installation has finished, edit the harvest.yml configuration file located in /opt/harvest/harvest.yml

    After editing /opt/harvest/harvest.yml, manage Harvest with systemctl start|stop|restart harvest.

    After upgrade, re-import all dashboards (either bin/harvest grafana import cli or via the Grafana UI) to get any new enhancements in dashboards.

    To ensure that you don't run into permission issues, make sure you manage Harvest using systemctl instead of running the harvest binary directly.

    Changes install makes
    • Directories /var/log/harvest/ and /var/log/run/ are created
    • A harvest user and group are created and the installed files are chowned to harvest
    • Systemd /etc/systemd/system/harvest.service file is created and enabled
    "},{"location":"install/podman/","title":"Containerized Harvest on Linux using Rootless Podman","text":"

    RHEL 8 ships with Podman instead of Docker. There are two ways to run containers with Podman: rootless or with root. Both setups are outlined below. The Podman ecosystem is changing rapidly so the shelf life of these instructions may be short. Make sure you have at least the same versions of the tools listed below.

    If you don't want to bother with Podman, you can also install Docker on RHEL 8 and use it to run Harvest per normal.

    "},{"location":"install/podman/#setup","title":"Setup","text":"

    Make sure your OS is up-to-date with yum update. Podman's dependencies are updated frequently.

    sudo yum remove docker-ce\nsudo yum module enable -y container-tools:rhel8\nsudo yum module install -y container-tools:rhel8\nsudo yum install podman podman-docker podman-plugins\n

    We also need to install Docker Compose since Podman uses it for compose workflows. Install docker-compose like this:

    VERSION=1.29.2\nsudo curl -L \"https://github.com/docker/compose/releases/download/$VERSION/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose\nsudo chmod +x /usr/local/bin/docker-compose\nsudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose\n

    After all the packages are installed, start the Podman systemd socket-activated service:

    sudo systemctl start podman.socket\n
    "},{"location":"install/podman/#containerized-harvest-on-linux-using-rootful-podman","title":"Containerized Harvest on Linux using Rootful Podman","text":"

    Make sure you're able to curl the endpoint.

    sudo curl -H \"Content-Type: application/json\" --unix-socket /var/run/docker.sock http://localhost/_ping\n

    If the sudo curl does not print OK\u23ce troubleshoot before continuing.

    Proceed to Running Harvest

    "},{"location":"install/podman/#containerized-harvest-on-linux-using-rootless-podman_1","title":"Containerized Harvest on Linux using Rootless Podman","text":"

    To run Podman rootless, we'll create a non-root user named harvest to run Harvest.

    # as root or sudo\nusermod --append --groups wheel harvest\n

    Login with the harvest user, setup the podman.socket, and make sure the curl below works. su or sudo aren't sufficient, you need to ssh into the machine as the harvest user or use machinectl login. See sudo-rootless-podman for details.

    # these must be run as the harvest user\nsystemctl --user enable podman.socket\nsystemctl --user start podman.socket\nsystemctl --user status podman.socket\nexport DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock\n\nsudo curl -H \"Content-Type: application/json\" --unix-socket /var/run/docker.sock http://localhost/_ping\n

    If the sudo curl does not print OK\u23ce troubleshoot before continuing.

    Run podman info and make sure runRoot points to /run/user/$UID/containers (see below). If it doesn't, you'll probably run into problems when restarting the machine. See errors after rebooting.

    podman info | grep runRoot\n  runRoot: /run/user/1001/containers\n
    "},{"location":"install/podman/#running-harvest","title":"Running Harvest","text":"

    By default, Cockpit runs on port 9090, the same port as Prometheus. We'll change Prometheus's host port to 9091 so we can run both Cockpit and Prometheus; the --promPort 9091 flag in the generate command below does that.

    With these changes, the standard Harvest compose instructions can be followed as normal now. In summary,

    1. Add the clusters, exporters, etc. to your harvest.yml file
    2. Generate a compose file from your harvest.yml by running

      docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest \\\ngenerate docker full \\\n--output harvest-compose.yml \\\n--promPort 9091\n
    3. Bring everything up

      docker-compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n

    After starting the containers, you can view them with podman ps -a or using Cockpit https://host-ip:9090/podman.

    podman ps -a\nCONTAINER ID  IMAGE                                   COMMAND               CREATED        STATUS            PORTS                     NAMES\n45fd00307d0a  ghcr.io/netapp/harvest:latest           --poller unix --p...  5 seconds ago  Up 5 seconds ago  0.0.0.0:12990->12990/tcp  poller_unix_v21.11.0\nd40585bb903c  localhost/prom/prometheus:latest        --config.file=/et...  5 seconds ago  Up 5 seconds ago  0.0.0.0:9091->9090/tcp    prometheus\n17a2784bc282  localhost/grafana/grafana:latest                              4 seconds ago  Up 5 seconds ago  0.0.0.0:3000->3000/tcp    grafana\n
    "},{"location":"install/podman/#troubleshooting","title":"Troubleshooting","text":"

    Check Podman's troubleshooting docs

    "},{"location":"install/podman/#nothing-works","title":"Nothing works","text":"

    Make sure the DOCKER_HOST env variable is set and that this curl works.

    sudo curl -H \"Content-Type: application/json\" --unix-socket /var/run/docker.sock http://localhost/_ping\n

    Make sure your containers can talk to each other.

    ping prometheus\nPING prometheus (10.88.2.3): 56 data bytes\n64 bytes from 10.88.2.3: seq=0 ttl=42 time=0.059 ms\n64 bytes from 10.88.2.3: seq=1 ttl=42 time=0.065 ms\n
    "},{"location":"install/podman/#errors-after-rebooting","title":"Errors after rebooting","text":"

    After restarting the machine, I see errors like these when running podman ps.

    podman ps -a\nERRO[0000] error joining network namespace for container 424df6c: error retrieving network namespace at /run/user/1001/netns/cni-5fb97adc-b6ef-17e8-565b-0481b311ba09: failed to Statfs \"/run/user/1001/netns/cni-5fb97adc-b6ef-17e8-565b-0481b311ba09\": no such file or directory\n

    Run podman info and make sure runRoot points to /run/user/$UID/containers (see below). If it instead points to /tmp/podman-run-$UID you will likely have problems when restarting the machine. Typically this happens because you used su to become the harvest user or ran podman as root. You can fix this by logging in as the harvest user and running podman system reset.

    podman info | grep runRoot\n  runRoot: /run/user/1001/containers\n
    "},{"location":"install/podman/#linger-errors","title":"Linger errors","text":"

    When you log out, systemd may remove some temp files and tear down Podman's rootless network. The workaround is to run the following as the harvest user. Details here

    loginctl enable-linger\n
    "},{"location":"install/podman/#versions","title":"Versions","text":"

    The following versions were used to validate this workflow.

    podman version\n\nVersion:      3.2.3\nAPI Version:  3.2.3\nGo Version:   go1.15.7\nBuilt:        Thu Jul 29 11:02:43 2021\nOS/Arch:      linux/amd64\n\ndocker-compose -v\ndocker-compose version 1.29.2, build 5becea4c\n\ncat /etc/redhat-release\nRed Hat Enterprise Linux release 8.4 (Ootpa)\n
    "},{"location":"install/podman/#references","title":"References","text":"
    • https://github.com/containers/podman
    • https://www.redhat.com/sysadmin/sudo-rootless-podman
    • https://www.redhat.com/sysadmin/podman-docker-compose
    • https://fedoramagazine.org/use-docker-compose-with-podman-to-orchestrate-containers-on-fedora/
    • https://podman.io/getting-started/network.html mentions the need for podman-plugins, otherwise rootless containers running in separate containers cannot see each other
    • Troubleshoot Podman
    "},{"location":"resources/matrix/","title":"Matrix","text":""},{"location":"resources/matrix/#matrix","title":"Matrix","text":"

    The \u2133atri\u03c7 package provides the matrix.Matrix data-structure for storage, manipulation and transmission of both numeric and non-numeric (string) data. It is utilized by core components of Harvest, including collectors, plugins and exporters. It furthermore serves as an interface between these components, such that \"the left hand does not know what the right hand does\".

    Internally, the Matrix is a collection of metrics (matrix.Metric) and instances (matrix.Instance) in the form of a 2-dimensional array:

    Since we use hash tables for accessing the elements of the array, all metrics and instances added to the matrix must have a unique key. Metrics are typed and contain the numeric data (i.e. rows) of the Matrix. Instances only serve as pointers to the columns of the Matrix, but they also store non-numeric data as labels (*dict.Dict).

    This package is the architectural backbone of Harvest, therefore understanding it is key for an advanced user or contributor.

    "},{"location":"resources/matrix/#basic-usage","title":"Basic Usage","text":""},{"location":"resources/matrix/#initialize","title":"Initialize","text":"

    func matrix.New(name, object string, identifier string) *Matrix\n// always returns successfully pointer to (empty) Matrix \n
    This section describes how to properly initialize a new Matrix instance. Note that if you write a collector, a Matrix instance is already properly initialized for you (as MyCollector.matrix), and if you write a plugin or exporter, it is passed to you from the collector. That means most of the time you don't have to worry about initializing the Matrix.

    matrix.New() requires three arguments:

    • name is by convention the collector name (e.g. MyCollector) if the Matrix comes from a collector, or the collector name and the plugin name concatenated with a . (e.g. MyCollector.MyPlugin) if the Matrix comes from a plugin.
    • object is a description of the instances of the Matrix. For example, if we collect data about cars and our instances are cars, a good name would be car.
    • identifier is a unique key used to identify a matrix instance

    Note that identifier should uniquely identify a Matrix instance. This is not a strict requirement, but guarantees that your data is properly handled by exporters.

    "},{"location":"resources/matrix/#example","title":"Example","text":"

    Here is an example from the point of view of a collector:

    import \"github.com/netapp/harvest/v2/pkg/matrix\"\n\nvar myMatrix *matrix.Matrix\n\nmyMatrix = matrix.New(\"CarCollector\", \"car\", \"car\")\n

    The next step is to add metrics and instances to our Matrix.

    "},{"location":"resources/matrix/#add-instances-and-instance-labels","title":"Add instances and instance labels","text":"
    func (x *Matrix) NewInstance(key string) (*Instance, error)\n// returns pointer to a new Instance, or nil with error (if key is not unique)\n

    func (i *Instance) SetLabel(key, value string)\n// always successful, overwrites existing values\n
    func (i *Instance) GetLabel(key string) string\n// always returns a value; if the label is not set, returns an empty string\n

    Once we have initialized a Matrix, we can add instances and add labels to our instances.

    "},{"location":"resources/matrix/#example_1","title":"Example","text":"
    var (\ninstance *matrix.Instance\nerr error\n)\nif instance, err = myMatrix.NewInstance(\"SomeCarMark\"); err != nil {\nreturn err\n// or handle err, but beware that instance is nil\n}\ninstance.SetLabel(\"mark\", \"SomeCarMark\")\ninstance.SetLabel(\"color\", \"red\")\ninstance.SetLabel(\"style\", \"coupe\")\n// add as many labels as you like\ninstance.GetLabel(\"color\") // returns \"red\"\ninstance.GetLabel(\"owner\") // returns \"\"\n
    "},{"location":"resources/matrix/#add-metrics","title":"Add Metrics","text":"
    func (x *Matrix) NewMetricInt64(key string) (Metric, error)\n// returns pointer to a new MetricInt64, or nil with error (if key is not unique)\n// note that Metric is an interface\n

    Metrics are typed and there are currently 8 types, all of which can be created with the same signature as above: * MetricUint8 * MetricUint32 * MetricUint64 * MetricInt * MetricInt32 * MetricInt64 * MetricFloat32 * MetricFloat64 We are able to read from and write to a metric instance using different types (as shown in the next section); however, choosing a type wisely ensures that this is done efficiently and that overflow does not occur.

    We can add labels to metrics just like instances. This is usually done when we deal with histograms:

    func (m Metric) SetLabel(key, value string)\n// always successful, overwrites existing values\n
    func (m Metric) GetLabel(key string) string\n// always returns a value; if the label is not set, returns an empty string\n
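
    For instance, here is a minimal sketch of labeling a hypothetical histogram bucket metric (the metric key and label values are illustrative):

    var bucket matrix.Metric\n\nif bucket, err = myMatrix.NewMetricUint64(\"read_latency_bucket_0\"); err != nil {\nreturn err\n}\n// record which histogram bucket this metric represents\nbucket.SetLabel(\"bucket\", \"<1ms\")\nbucket.GetLabel(\"bucket\") // returns \"<1ms\"\n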

    "},{"location":"resources/matrix/#example_2","title":"Example","text":"

    Continuing our Matrix for collecting car-related data:

    var (\nspeed, length matrix.Metric\nerr error\n)\n\nif speed, err = myMatrix.NewMetricUint32(\"max_speed\"); err != nil {\nreturn err\n}\nif length, err = myMatrix.NewMetricFloat32(\"length_in_mm\"); err != nil {\nreturn err\n}\n
    "},{"location":"resources/matrix/#write-numeric-data","title":"Write numeric data","text":"

    func (x *Matrix) Reset()\n// flush numeric data from previous poll\n
    func (m Metric) SetValueInt64(i *Instance, v int64) error\nfunc (m Metric) SetValueUint8(i *Instance, v uint8) error\nfunc (m Metric) SetValueUint64(i *Instance, v uint64) error\nfunc (m Metric) SetValueFloat64(i *Instance, v float64) error\nfunc (m Metric) SetValueBytes(i *Instance, v []byte) error\nfunc (m Metric) SetValueString(i *Instance, v string) error\n// sets the numeric value for the instance i to v\n// returns an error if v is invalid (explained below)\n
    func (m Metric) AddValueInt64(i *Instance, v int64) error\n// increments the numeric value for the instance i by v\n// same signatures for all the types defined above\n

    When possible you should reuse a Matrix for each data poll, but to do that, you need to call Reset() to drop old data from the Matrix. It is safe to add new instances and metrics after calling this method.

    The SetValue*() and AddValue*() methods are typed the same as the metrics. Even though you are not required to use the same type as the metric, doing so is the safest and most efficient way.

    Since most collectors get their data as bytes or strings, it is recommended to use the SetValueString() and SetValueBytes() methods.

    These methods return an error if the value v cannot be converted to the type of the metric. The error is always nil when the type of v matches the type of the metric.

    "},{"location":"resources/matrix/#example_3","title":"Example","text":"

    Continuing with the previous examples:

    // flush numeric data from the previous poll\nmyMatrix.Reset()\n// write numbers to the matrix using the instance and the metrics we have created\n\n// let the metric do the conversion for us\nif err = speed.SetValueString(instance, \"500\"); err != nil {\nlogger.Error(me.Prefix, \"set speed value: \", err)\n}\n// here we ignore err since the value type matches the metric type\nlength.SetValueFloat64(instance, 10000.00)\n\n// safe to add new instances\nvar instance2 *matrix.Instance\nif instance2, err = myMatrix.NewInstance(\"SomeOtherCar\"); err != nil {\nreturn err\n}\n\n// possible and safe even though length has type Float32\nif err = length.SetValueInt64(instance2, 13000); err != nil {\nlogger.Error(me.Prefix, \"set length value:\", err)\n}\n\n// possible, but will overflow since speed is unsigned\nif err = speed.SetValueInt64(instance2, -500); err != nil {\nlogger.Error(me.Prefix, \"set speed value:\", err)\n}\n
    "},{"location":"resources/matrix/#read-metrics-and-instances","title":"Read metrics and instances","text":"

    In this section we switch gears and look at the Matrix from the point of view of plugins and exporters. Both those components need to read from the Matrix and have no knowledge of its origin or contents.

    func (x *Matrix) GetMetrics() map[string]Metric\n// returns all metrics in the Matrix\n
    func (x *Matrix) GetInstances() map[string]*Instance\n// returns all instances in the Matrix\n

    Usually we will do a nested loop with these two methods to read all data in the Matrix. See examples below.
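
    As a minimal sketch of that nested loop, the following function (the name is illustrative) prints every value in the Matrix as a string:

    func PrintAll(x *matrix.Matrix) {\nfor metricKey, metric := range x.GetMetrics() {\nfor instanceKey, instance := range x.GetInstances() {\nif value, has := metric.GetValueString(instance); has {\nfmt.Printf(\"%s(%s) = %s\\n\", metricKey, instanceKey, value)\n}\n}\n}\n}\n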

    "},{"location":"resources/matrix/#example-iterate-over-instances","title":"Example: Iterate over instances","text":"

    In this example, the function PrintKeys() iterates over a Matrix and prints all instance keys.

    func PrintKeys(x *matrix.Matrix) {\nfor instanceKey := range x.GetInstances() {\nfmt.Println(\"instance key=\", instanceKey)\n}\n}\n
    "},{"location":"resources/matrix/#example-read-instance-labels","title":"Example: Read instance labels","text":"

    Each instance has a set of labels. We can iterate over these labels with the GetLabel() and GetLabels() methods. In this example, we write a function that prints all labels of an instance:

    func PrintLabels(instance *matrix.Instance) {\nfor label, value := range instance.GetLabels().Map() {\nfmt.Printf(\"%s=%s\\n\", label, value)\n}\n}\n
    "},{"location":"resources/matrix/#example-read-metric-values-labels","title":"Example: Read metric values labels","text":"

    Similar to the SetValue* and AddValue* methods, you can choose a type when reading from a metric. If you don't know the type of the metric, it is safe to read it as a string. In this example, we write a function that prints the value of a metric for all instances in a Matrix:

    func PrintMetricValues(x *matrix.Matrix, m matrix.Metric) {\nfor key, instance := range x.GetInstances() {\nif value, has := m.GetValueString(instance); has {\nfmt.Printf(\"instance %s = %s\\n\", key, value)\n} else {\nfmt.Printf(\"instance %s has no value\\n\", key)\n}\n}\n}\n
    "},{"location":"resources/power-algorithm/","title":"Power Algorithm","text":"

    Gathering power metrics requires a cluster with:

    • ONTAP versions 9.6+
    • REST enabled, even when using the ZAPI collector

    REST is required because chassis field-replaceable-unit (FRU) information is only available via the REST endpoint /api/private/cli/system/chassis/fru.
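
    If you want to inspect the FRU data Harvest relies on, you can query that endpoint directly; a sketch (substitute your cluster's address and credentials):

    curl 'https://$clusterIP/api/private/cli/system/chassis/fru'\n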

    "},{"location":"resources/power-algorithm/#how-does-harvest-calculate-cluster-power","title":"How does Harvest calculate cluster power?","text":"

    Cluster power is the sum of a cluster's node power plus the sum of the attached disk shelves' power.

    Redundant power supplies (PSUs) load-share the total load. With n PSUs, each PSU does roughly (1/n) of the work (the combined draw is slightly more than a single PSU would draw alone because of the additional fans).

    "},{"location":"resources/power-algorithm/#node-power","title":"Node power","text":"

    Node power is calculated by collecting power supply unit (PSU) power, as reported by REST /api/private/cli/system/environment/sensors or by ZAPI environment-sensors-get-iter.

    When a power supply is shared between controllers, the PSU's power will be evenly divided across the controllers due to load-sharing.

    For example:

    • FAS2750 models have two power supplies that power both controllers. Each PSU is shared between the two controllers.
    • A800 models have four power supplies. PSU1 and PSU2 power Controller1 and PSU3 and PSU4 power Controller2. Each PSU provides power to a single controller.

    Harvest determines whether a PSU is shared between controllers by consulting the connected_nodes of each PSU, as reported by ONTAP via /api/private/cli/system/chassis/fru.
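
    A minimal sketch of how a PSU's power is attributed to nodes (the names and types are illustrative, not Harvest's internal implementation):

    // psuPower is one PSU's total power, e.g. the sum of its voltage*current\n// sensor pairs or its \"Power In\" sensor\nfunc nodeShare(psuPower float64, connectedNodes []string) float64 {\n// a shared PSU's power is divided evenly across its connected nodes\nreturn psuPower / float64(len(connectedNodes))\n}\n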

    "},{"location":"resources/power-algorithm/#disk-shelf-power","title":"Disk shelf power","text":"

    Disk shelf power is calculated by collecting psu.power_drawn, as reported by REST via /api/storage/shelves, or sensor-reading, as reported by ZAPI via storage-shelf-info-get-iter.

    The power for embedded shelves is ignored, since that power is already accounted for in the controller's power draw.
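
    Putting the pieces together, a sketch of the cluster total (nodePower and shelves are assumed to have been collected as described above; the field names are illustrative):

    total := 0.0\nfor _, node := range nodePower {\ntotal += node\n}\nfor _, shelf := range shelves {\nif shelf.Embedded {\ncontinue // embedded shelf power is already included in the controllers' draw\n}\ntotal += shelf.PowerDrawn // sum of psu.power_drawn for this shelf\n}\n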

    "},{"location":"resources/power-algorithm/#examples","title":"Examples","text":""},{"location":"resources/power-algorithm/#fas2750","title":"FAS2750","text":"
    # Power Metrics for 10.61.183.200\n\n## ONTAP version NetApp Release 9.8P16: Fri Dec 02 02:05:05 UTC 2022\n\n## Nodes\nsystem show\n       Node         |  Model  | SerialNumber  \n----------------------+---------+---------------\ncie-na2750-g1344-01 | FAS2750 | 621841000123  \ncie-na2750-g1344-02 | FAS2750 | 621841000124\n\n## Chassis\nsystem chassis fru show\n ChassisId   |      Name       |         Fru         |    Type    | Status | NumNodes |              ConnectedNodes               \n---------------+-----------------+---------------------+------------+--------+----------+-------------------------------------------\n021827030435 | 621841000123    | cie-na2750-g1344-01 | controller | ok     |        1 | cie-na2750-g1344-01                       \n021827030435 | 621841000124    | cie-na2750-g1344-02 | controller | ok     |        1 | cie-na2750-g1344-02                       \n021827030435 | PSQ094182201794 | PSU2 FRU            | psu        | ok     |        2 | cie-na2750-g1344-02, cie-na2750-g1344-01  \n021827030435 | PSQ094182201797 | PSU1 FRU            | psu        | ok     |        2 | cie-na2750-g1344-02, cie-na2750-g1344-01\n\n## Sensors\nsystem environment sensors show\n(filtered by power, voltage, current)\n       Node         |     Name      |  Type   | State  | Value | Units  \n----------------------+---------------+---------+--------+-------+--------\ncie-na2750-g1344-01 | PSU1 12V Curr | current | normal |  9920 | mA     \ncie-na2750-g1344-01 | PSU1 12V      | voltage | normal | 12180 | mV     \ncie-na2750-g1344-01 | PSU1 5V Curr  | current | normal |  4490 | mA     \ncie-na2750-g1344-01 | PSU1 5V       | voltage | normal |  5110 | mV     \ncie-na2750-g1344-01 | PSU2 12V Curr | current | normal |  9140 | mA     \ncie-na2750-g1344-01 | PSU2 12V      | voltage | normal | 12100 | mV     \ncie-na2750-g1344-01 | PSU2 5V Curr  | current | normal |  4880 | mA     \ncie-na2750-g1344-01 | PSU2 5V       | voltage | normal |  5070 | mV     \ncie-na2750-g1344-02 | PSU1 12V Curr | current | normal |  9920 | mA     \ncie-na2750-g1344-02 | PSU1 12V      | voltage | normal | 12180 | mV     \ncie-na2750-g1344-02 | PSU1 5V Curr  | current | normal |  4330 | mA     \ncie-na2750-g1344-02 | PSU1 5V       | voltage | normal |  5110 | mV     \ncie-na2750-g1344-02 | PSU2 12V Curr | current | normal |  9170 | mA     \ncie-na2750-g1344-02 | PSU2 12V      | voltage | normal | 12100 | mV     \ncie-na2750-g1344-02 | PSU2 5V Curr  | current | normal |  4720 | mA     \ncie-na2750-g1344-02 | PSU2 5V       | voltage | normal |  5070 | mV\n\n## Shelf PSUs\nstorage shelf show\nShelf | ProductId | ModuleType | PSUId | PSUIsEnabled | PSUPowerDrawn | Embedded  \n------+-----------+------------+-------+--------------+---------------+---------\n  1.0 | DS224-12  | iom12e     | 1,2   | true,true    | 1397,1318     | true\n\n### Controller Power From Sum(InVoltage * InCurrent)/NumNodes\nPower: 256W\n
    "},{"location":"resources/power-algorithm/#aff-a800","title":"AFF A800","text":"
    # Power Metrics for 10.61.124.110\n\n## ONTAP version NetApp Release 9.13.1P1: Tue Jul 25 10:19:28 UTC 2023\n\n## Nodes\nsystem show\n  Node    |  Model   | SerialNumber  \n----------+----------+-------------\na800-1-01 | AFF-A800 | 941825000071  \na800-1-02 | AFF-A800 | 941825000072\n\n## Chassis\nsystem chassis fru show\n   ChassisId    |      Name      |    Fru    |    Type    | Status | NumNodes | ConnectedNodes  \n----------------+----------------+-----------+------------+--------+----------+---------------\nSHFFG1826000154 | 941825000071   | a800-1-01 | controller | ok     |        1 | a800-1-01       \nSHFFG1826000154 | 941825000072   | a800-1-02 | controller | ok     |        1 | a800-1-02       \nSHFFG1826000154 | EEQT1822002800 | PSU1 FRU  | psu        | ok     |        1 | a800-1-02       \nSHFFG1826000154 | EEQT1822002804 | PSU2 FRU  | psu        | ok     |        1 | a800-1-02       \nSHFFG1826000154 | EEQT1822002805 | PSU2 FRU  | psu        | ok     |        1 | a800-1-01       \nSHFFG1826000154 | EEQT1822002806 | PSU1 FRU  | psu        | ok     |        1 | a800-1-01\n\n## Sensors\nsystem environment sensors show\n(filtered by power, voltage, current)\n  Node    |     Name      |  Type   | State  | Value | Units  \n----------+---------------+---------+--------+-------+------\na800-1-01 | PSU1 Power In | unknown | normal |   376 | W      \na800-1-01 | PSU2 Power In | unknown | normal |   411 | W      \na800-1-02 | PSU1 Power In | unknown | normal |   383 | W      \na800-1-02 | PSU2 Power In | unknown | normal |   433 | W\n\n## Shelf PSUs\nstorage shelf show\nShelf |  ProductId  | ModuleType | PSUId | PSUIsEnabled | PSUPowerDrawn | Embedded  \n------+-------------+------------+-------+--------------+---------------+---------\n  1.0 | FS4483PSM3E | psm3e      |       |              |               | true      \n\n### Controller Power From Sum(InPower sensors)\nPower: 1603W\n
    "},{"location":"resources/rest-perf-metrics/","title":"REST Perf Metrics","text":"

    This document describes implementation details about ONTAP's REST performance metrics endpoints, including how we built the Harvest RESTPerf collectors.

    Warning

    These are implementation details about ONTAP's REST performance metrics. You do not need to understand any of this to use Harvest. If you want to know how to use or configure Harvest's REST collectors, check out the Rest Collector documentation instead. If you're interested in the gory details, read on.

    "},{"location":"resources/rest-perf-metrics/#introduction","title":"Introduction","text":"

    ONTAP REST metrics were introduced in ONTAP 9.11.1 and reached parity with Harvest-collected ZAPI performance metrics by ONTAP 9.12.1.

    "},{"location":"resources/rest-perf-metrics/#performance-rest-queries","title":"Performance REST queries","text":"

    Mapping table

    • ZAPI perf-object-counter-list-info maps to REST /api/cluster/counter/tables (returns counter tables and schemas)
    • ZAPI perf-object-instance-list-info-iter maps to REST /api/cluster/counter/tables/{name}/rows (returns instances and counter values)
    • ZAPI perf-object-get-instances maps to REST /api/cluster/counter/tables/{name}/rows (returns instances and counter values)

    Performance REST responses include properties and counters. Counters are metric-like, while properties include instance attributes.

    "},{"location":"resources/rest-perf-metrics/#examples","title":"Examples","text":""},{"location":"resources/rest-perf-metrics/#ask-ontap-for-all-resources-that-report-performance-metrics","title":"Ask ONTAP for all resources that report performance metrics","text":"
    curl 'https://$clusterIP/api/cluster/counter/tables'\n
    Response

    {\n\"records\": [\n{\n\"name\": \"copy_manager\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/copy_manager\"\n}\n}\n},\n{\n\"name\": \"copy_manager:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/copy_manager%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"disk\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/disk\"\n}\n}\n},\n{\n\"name\": \"disk:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/disk%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"disk:raid_group\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/disk%3Araid_group\"\n}\n}\n},\n{\n\"name\": \"external_cache\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/external_cache\"\n}\n}\n},\n{\n\"name\": \"fcp\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcp\"\n}\n}\n},\n{\n\"name\": \"fcp:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcp%3Anode\"\n}\n}\n},\n{\n\"name\": \"fcp_lif\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcp_lif\"\n}\n}\n},\n{\n\"name\": \"fcp_lif:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcp_lif%3Anode\"\n}\n}\n},\n{\n\"name\": \"fcp_lif:port\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcp_lif%3Aport\"\n}\n}\n},\n{\n\"name\": \"fcp_lif:svm\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcp_lif%3Asvm\"\n}\n}\n},\n{\n\"name\": \"fcvi\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcvi\"\n}\n}\n},\n{\n\"name\": \"headroom_aggregate\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/headroom_aggregate\"\n}\n}\n},\n{\n\"name\": \"headroom_cpu\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/headroom_cpu\"\n}\n}\n},\n{\n\"name\": \"host_adapter\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/host_adapter\"\n}\n}\n},\n{\n\"name\": \"iscsi_lif\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/iscsi_lif\"\n}\n}\n},\n{\n\"name\": \"iscsi_lif:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/iscsi_lif%3Anode\"\n}\n}\n},\n{\n\"name\": \"iscsi_lif:svm\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/iscsi_lif%3Asvm\"\n}\n}\n},\n{\n\"name\": \"lif\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/lif\"\n}\n}\n},\n{\n\"name\": \"lif:svm\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/lif%3Asvm\"\n}\n}\n},\n{\n\"name\": \"lun\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/lun\"\n}\n}\n},\n{\n\"name\": \"lun:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/lun%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"lun:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/lun%3Anode\"\n}\n}\n},\n{\n\"name\": \"namespace\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/namespace\"\n}\n}\n},\n{\n\"name\": \"namespace:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/namespace%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"nfs_v4_diag\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/nfs_v4_diag\"\n}\n}\n},\n{\n\"name\": \"nic_common\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/nic_common\"\n}\n}\n},\n{\n\"name\": 
\"nvmf_lif\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/nvmf_lif\"\n}\n}\n},\n{\n\"name\": \"nvmf_lif:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/nvmf_lif%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"nvmf_lif:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/nvmf_lif%3Anode\"\n}\n}\n},\n{\n\"name\": \"nvmf_lif:port\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/nvmf_lif%3Aport\"\n}\n}\n},\n{\n\"name\": \"object_store_client_op\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/object_store_client_op\"\n}\n}\n},\n{\n\"name\": \"path\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/path\"\n}\n}\n},\n{\n\"name\": \"processor\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/processor\"\n}\n}\n},\n{\n\"name\": \"processor:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/processor%3Anode\"\n}\n}\n},\n{\n\"name\": \"qos\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos\"\n}\n}\n},\n{\n\"name\": \"qos:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"qos:policy_group\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos%3Apolicy_group\"\n}\n}\n},\n{\n\"name\": \"qos_detail\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos_detail\"\n}\n}\n},\n{\n\"name\": \"qos_detail_volume\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos_detail_volume\"\n}\n}\n},\n{\n\"name\": \"qos_volume\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos_volume\"\n}\n}\n},\n{\n\"name\": \"qos_volume:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos_volume%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"qtree\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qtree\"\n}\n}\n},\n{\n\"name\": \"qtree:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qtree%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"svm_cifs\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_cifs\"\n}\n}\n},\n{\n\"name\": \"svm_cifs:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_cifs%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"svm_cifs:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_cifs%3Anode\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v3\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v3\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v3:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v3%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v3:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v3%3Anode\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v4\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v4\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v41\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v41\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v41:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v41%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v41:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v41%3Anode\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v42\",\n\"_links\": 
{\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v42\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v42:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v42%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v42:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v42%3Anode\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v4:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v4%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v4:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v4%3Anode\"\n}\n}\n},\n{\n\"name\": \"system\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system\"\n}\n}\n},\n{\n\"name\": \"system:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"system:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system%3Anode\"\n}\n}\n},\n{\n\"name\": \"token_manager\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/token_manager\"\n}\n}\n},\n{\n\"name\": \"volume\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/volume\"\n}\n}\n},\n{\n\"name\": \"volume:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/volume%3Anode\"\n}\n}\n},\n{\n\"name\": \"volume:svm\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/volume%3Asvm\"\n}\n}\n},\n{\n\"name\": \"wafl\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/wafl\"\n}\n}\n},\n{\n\"name\": \"wafl_comp_aggr_vol_bin\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/wafl_comp_aggr_vol_bin\"\n}\n}\n},\n{\n\"name\": \"wafl_hya_per_aggregate\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/wafl_hya_per_aggregate\"\n}\n}\n},\n{\n\"name\": \"wafl_hya_sizer\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/wafl_hya_sizer\"\n}\n}\n}\n],\n\"num_records\": 71,\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/\"\n}\n}\n}\n

    "},{"location":"resources/rest-perf-metrics/#node-performance-metrics-metadata","title":"Node performance metrics metadata","text":"

    Ask ONTAP to return the schema for system:node. This will include the name, description, and metadata for all counters associated with system:node.

    curl 'https://$clusterIP/api/cluster/counter/tables/system:node?return_records=true'\n
    Response

    {\n\"name\": \"system:node\",\n\"description\": \"The System table reports general system activity. This includes global throughput for the main services, I/O latency, and CPU activity. The alias name for system:node is system_node.\",\n\"counter_schemas\": [\n{\n\"name\": \"average_processor_busy_percent\",\n\"description\": \"Average processor utilization across all processors in the system\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"cifs_ops\",\n\"description\": \"Number of CIFS operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"cp\",\n\"description\": \"CP time rate\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"cp_time\",\n\"description\": \"Processor time in CP\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"cpu_busy\",\n\"description\": \"System CPU resource utilization. Returns a computed percentage for the default CPU field. Basically computes a 'cpu usage summary' value which indicates how 'busy' the system is based upon the most heavily utilized domain. The idea is to determine the amount of available CPU until we're limited by either a domain maxing out OR we exhaust all available idle CPU cycles, whichever occurs first.\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"cpu_elapsed_time\",\n\"description\": \"Elapsed time since boot\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"disk_data_read\",\n\"description\": \"Number of disk kilobytes (KB) read per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"disk_data_written\",\n\"description\": \"Number of disk kilobytes (KB) written per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"domain_busy\",\n\"description\": \"Array of processor time in percentage spent in various domains\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"domain_shared\",\n\"description\": \"Array of processor time in percentage spent in various shared domains\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"dswitchto_cnt\",\n\"description\": \"Array of processor time in percentage spent in domain switch\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"fcp_data_received\",\n\"description\": \"Number of FCP kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"fcp_data_sent\",\n\"description\": \"Number of FCP kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"fcp_ops\",\n\"description\": \"Number of FCP operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"hard_switches\",\n\"description\": \"Number of context switches per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"hdd_data_read\",\n\"description\": \"Number of HDD Disk kilobytes (KB) read per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"hdd_data_written\",\n\"description\": \"Number of HDD kilobytes (KB) written per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": 
\"idle\",\n\"description\": \"Processor idle rate percentage\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"idle_time\",\n\"description\": \"Processor idle time\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"instance_name\",\n\"description\": \"Node name\",\n\"type\": \"string\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"interrupt\",\n\"description\": \"Processor interrupt rate percentage\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"interrupt_in_cp\",\n\"description\": \"Processor interrupt rate percentage\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cp_time\"\n}\n},\n{\n\"name\": \"interrupt_in_cp_time\",\n\"description\": \"Processor interrupt in CP time\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"interrupt_num\",\n\"description\": \"Processor interrupt number\",\n\"type\": \"delta\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"interrupt_num_in_cp\",\n\"description\": \"Number of processor interrupts in CP\",\n\"type\": \"delta\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"interrupt_time\",\n\"description\": \"Processor interrupt time\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"intr_cnt\",\n\"description\": \"Array of interrupt count per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"intr_cnt_ipi\",\n\"description\": \"IPI interrupt count per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"intr_cnt_msec\",\n\"description\": \"Millisecond interrupt count per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"intr_cnt_total\",\n\"description\": \"Total interrupt count per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"iscsi_data_received\",\n\"description\": \"iSCSI kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"iscsi_data_sent\",\n\"description\": \"iSCSI kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"iscsi_ops\",\n\"description\": \"Number of iSCSI operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"memory\",\n\"description\": \"Total memory in megabytes (MB)\",\n\"type\": \"raw\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"network_data_received\",\n\"description\": \"Number of network kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"network_data_sent\",\n\"description\": \"Number of network kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"nfs_ops\",\n\"description\": \"Number of NFS operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"non_interrupt\",\n\"description\": \"Processor non-interrupt rate percentage\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"non_interrupt_time\",\n\"description\": \"Processor non-interrupt time\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"num_processors\",\n\"description\": \"Number of active processors in the system\",\n\"type\": \"raw\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"nvme_fc_data_received\",\n\"description\": \"NVMe/FC kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": 
\"kb_per_sec\"\n},\n{\n\"name\": \"nvme_fc_data_sent\",\n\"description\": \"NVMe/FC kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"nvme_fc_ops\",\n\"description\": \"NVMe/FC operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"nvme_roce_data_received\",\n\"description\": \"NVMe/RoCE kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"nvme_roce_data_sent\",\n\"description\": \"NVMe/RoCE kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"nvme_roce_ops\",\n\"description\": \"NVMe/RoCE operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"nvme_tcp_data_received\",\n\"description\": \"NVMe/TCP kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"nvme_tcp_data_sent\",\n\"description\": \"NVMe/TCP kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"nvme_tcp_ops\",\n\"description\": \"NVMe/TCP operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"other_data\",\n\"description\": \"Other throughput\",\n\"type\": \"rate\",\n\"unit\": \"b_per_sec\"\n},\n{\n\"name\": \"other_latency\",\n\"description\": \"Average latency for all other operations in the system in microseconds\",\n\"type\": \"average\",\n\"unit\": \"microsec\",\n\"denominator\": {\n\"name\": \"other_ops\"\n}\n},\n{\n\"name\": \"other_ops\",\n\"description\": \"All other operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"partner_data_received\",\n\"description\": \"SCSI Partner kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"partner_data_sent\",\n\"description\": \"SCSI Partner kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"processor_plevel\",\n\"description\": \"Processor plevel rate percentage\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"processor_plevel_time\",\n\"description\": \"Processor plevel rate percentage\",\n\"type\": \"delta\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"read_data\",\n\"description\": \"Read throughput\",\n\"type\": \"rate\",\n\"unit\": \"b_per_sec\"\n},\n{\n\"name\": \"read_latency\",\n\"description\": \"Average latency for all read operations in the system in microseconds\",\n\"type\": \"average\",\n\"unit\": \"microsec\",\n\"denominator\": {\n\"name\": \"read_ops\"\n}\n},\n{\n\"name\": \"read_ops\",\n\"description\": \"Read operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"sk_switches\",\n\"description\": \"Number of sk switches per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"ssd_data_read\",\n\"description\": \"Number of SSD Disk kilobytes (KB) read per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"ssd_data_written\",\n\"description\": \"Number of SSD Disk kilobytes (KB) written per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"sys_read_data\",\n\"description\": \"Network and FCP kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"sys_total_data\",\n\"description\": \"Network and FCP kilobytes (KB) received and sent per second\",\n\"type\": 
\"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"sys_write_data\",\n\"description\": \"Network and FCP kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"tape_data_read\",\n\"description\": \"Tape bytes read per millisecond\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"tape_data_written\",\n\"description\": \"Tape bytes written per millisecond\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"time\",\n\"description\": \"Time in seconds since the Epoch (00:00:00 UTC January 1 1970)\",\n\"type\": \"raw\",\n\"unit\": \"sec\"\n},\n{\n\"name\": \"time_per_interrupt\",\n\"description\": \"Processor time per interrupt\",\n\"type\": \"average\",\n\"unit\": \"microsec\",\n\"denominator\": {\n\"name\": \"interrupt_num\"\n}\n},\n{\n\"name\": \"time_per_interrupt_in_cp\",\n\"description\": \"Processor time per interrupt in CP\",\n\"type\": \"average\",\n\"unit\": \"microsec\",\n\"denominator\": {\n\"name\": \"interrupt_num_in_cp\"\n}\n},\n{\n\"name\": \"total_data\",\n\"description\": \"Total throughput in bytes\",\n\"type\": \"rate\",\n\"unit\": \"b_per_sec\"\n},\n{\n\"name\": \"total_latency\",\n\"description\": \"Average latency for all operations in the system in microseconds\",\n\"type\": \"average\",\n\"unit\": \"microsec\",\n\"denominator\": {\n\"name\": \"total_ops\"\n}\n},\n{\n\"name\": \"total_ops\",\n\"description\": \"Total number of operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"total_processor_busy\",\n\"description\": \"Total processor utilization of all processors in the system\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"total_processor_busy_time\",\n\"description\": \"Total processor time of all processors in the system\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"uptime\",\n\"description\": \"Time in seconds that the system has been up\",\n\"type\": \"raw\",\n\"unit\": \"sec\"\n},\n{\n\"name\": \"wafliron\",\n\"description\": \"Wafliron counters\",\n\"type\": \"delta\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"write_data\",\n\"description\": \"Write throughput\",\n\"type\": \"rate\",\n\"unit\": \"b_per_sec\"\n},\n{\n\"name\": \"write_latency\",\n\"description\": \"Average latency for all write operations in the system in microseconds\",\n\"type\": \"average\",\n\"unit\": \"microsec\",\n\"denominator\": {\n\"name\": \"write_ops\"\n}\n},\n{\n\"name\": \"write_ops\",\n\"description\": \"Write operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n}\n],\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system:node\"\n}\n}\n}\n

    "},{"location":"resources/rest-perf-metrics/#node-performance-metrics-with-all-instances-properties-and-counters","title":"Node performance metrics with all instances, properties, and counters","text":"

    Ask ONTAP to return all instances of system:node. For each system:node include all of that node's properties and performance metrics.

    curl 'https://$clusterIP/api/cluster/counter/tables/system:node/rows?fields=*&return_records=true'\n
    Response

    {\n\"records\": [\n{\n\"counter_table\": {\n\"name\": \"system:node\"\n},\n\"id\": \"umeng-aff300-01:28e14eab-0580-11e8-bd9d-00a098d39e12\",\n\"properties\": [\n{\n\"name\": \"node.name\",\n\"value\": \"umeng-aff300-01\"\n},\n{\n\"name\": \"system_model\",\n\"value\": \"AFF-A300\"\n},\n{\n\"name\": \"ontap_version\",\n\"value\": \"NetApp Release R9.12.1xN_221108_1315: Tue Nov  8 15:32:25 EST 2022 \"\n},\n{\n\"name\": \"compile_flags\",\n\"value\": \"1\"\n},\n{\n\"name\": \"serial_no\",\n\"value\": \"721802000260\"\n},\n{\n\"name\": \"system_id\",\n\"value\": \"0537124012\"\n},\n{\n\"name\": \"hostname\",\n\"value\": \"umeng-aff300-01\"\n},\n{\n\"name\": \"name\",\n\"value\": \"umeng-aff300-01\"\n},\n{\n\"name\": \"uuid\",\n\"value\": \"28e14eab-0580-11e8-bd9d-00a098d39e12\"\n}\n],\n\"counters\": [\n{\n\"name\": \"memory\",\n\"value\": 88766\n},\n{\n\"name\": \"nfs_ops\",\n\"value\": 15991465\n},\n{\n\"name\": \"cifs_ops\",\n\"value\": 0\n},\n{\n\"name\": \"fcp_ops\",\n\"value\": 0\n},\n{\n\"name\": \"iscsi_ops\",\n\"value\": 355884195\n},\n{\n\"name\": \"nvme_fc_ops\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_tcp_ops\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_roce_ops\",\n\"value\": 0\n},\n{\n\"name\": \"network_data_received\",\n\"value\": 33454266379\n},\n{\n\"name\": \"network_data_sent\",\n\"value\": 9938586739\n},\n{\n\"name\": \"fcp_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"fcp_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"iscsi_data_received\",\n\"value\": 4543696942\n},\n{\n\"name\": \"iscsi_data_sent\",\n\"value\": 3058795391\n},\n{\n\"name\": \"nvme_fc_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_fc_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_tcp_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_tcp_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_roce_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_roce_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"partner_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"partner_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"sys_read_data\",\n\"value\": 33454266379\n},\n{\n\"name\": \"sys_write_data\",\n\"value\": 9938586739\n},\n{\n\"name\": \"sys_total_data\",\n\"value\": 43392853118\n},\n{\n\"name\": \"disk_data_read\",\n\"value\": 32083838540\n},\n{\n\"name\": \"disk_data_written\",\n\"value\": 21102507352\n},\n{\n\"name\": \"hdd_data_read\",\n\"value\": 0\n},\n{\n\"name\": \"hdd_data_written\",\n\"value\": 0\n},\n{\n\"name\": \"ssd_data_read\",\n\"value\": 32083838540\n},\n{\n\"name\": \"ssd_data_written\",\n\"value\": 21102507352\n},\n{\n\"name\": \"tape_data_read\",\n\"value\": 0\n},\n{\n\"name\": \"tape_data_written\",\n\"value\": 0\n},\n{\n\"name\": \"read_ops\",\n\"value\": 33495530\n},\n{\n\"name\": \"write_ops\",\n\"value\": 324699398\n},\n{\n\"name\": \"other_ops\",\n\"value\": 13680732\n},\n{\n\"name\": \"total_ops\",\n\"value\": 371875660\n},\n{\n\"name\": \"read_latency\",\n\"value\": 14728140707\n},\n{\n\"name\": \"write_latency\",\n\"value\": 1568830328022\n},\n{\n\"name\": \"other_latency\",\n\"value\": 2132691612\n},\n{\n\"name\": \"total_latency\",\n\"value\": 1585691160341\n},\n{\n\"name\": \"read_data\",\n\"value\": 3212301497187\n},\n{\n\"name\": \"write_data\",\n\"value\": 4787509093524\n},\n{\n\"name\": \"other_data\",\n\"value\": 0\n},\n{\n\"name\": \"total_data\",\n\"value\": 7999810590711\n},\n{\n\"name\": \"cpu_busy\",\n\"value\": 790347800332\n},\n{\n\"name\": \"cpu_elapsed_time\",\n\"value\": 3979034040025\n},\n{\n\"name\": 
\"average_processor_busy_percent\",\n\"value\": 788429907770\n},\n{\n\"name\": \"total_processor_busy\",\n\"value\": 12614878524320\n},\n{\n\"name\": \"total_processor_busy_time\",\n\"value\": 12614878524320\n},\n{\n\"name\": \"num_processors\",\n\"value\": 16\n},\n{\n\"name\": \"interrupt_time\",\n\"value\": 118435504138\n},\n{\n\"name\": \"interrupt\",\n\"value\": 118435504138\n},\n{\n\"name\": \"interrupt_num\",\n\"value\": 1446537540\n},\n{\n\"name\": \"time_per_interrupt\",\n\"value\": 118435504138\n},\n{\n\"name\": \"non_interrupt_time\",\n\"value\": 12496443020182\n},\n{\n\"name\": \"non_interrupt\",\n\"value\": 12496443020182\n},\n{\n\"name\": \"idle_time\",\n\"value\": 51049666116080\n},\n{\n\"name\": \"idle\",\n\"value\": 51049666116080\n},\n{\n\"name\": \"cp_time\",\n\"value\": 221447740301\n},\n{\n\"name\": \"cp\",\n\"value\": 221447740301\n},\n{\n\"name\": \"interrupt_in_cp_time\",\n\"value\": 7969316828\n},\n{\n\"name\": \"interrupt_in_cp\",\n\"value\": 7969316828\n},\n{\n\"name\": \"interrupt_num_in_cp\",\n\"value\": 1639345044\n},\n{\n\"name\": \"time_per_interrupt_in_cp\",\n\"value\": 7969316828\n},\n{\n\"name\": \"sk_switches\",\n\"value\": 3830419593\n},\n{\n\"name\": \"hard_switches\",\n\"value\": 2786999477\n},\n{\n\"name\": \"intr_cnt_msec\",\n\"value\": 3978648113\n},\n{\n\"name\": \"intr_cnt_ipi\",\n\"value\": 1709054\n},\n{\n\"name\": \"intr_cnt_total\",\n\"value\": 1215253490\n},\n{\n\"name\": \"time\",\n\"value\": 1677516216\n},\n{\n\"name\": \"uptime\",\n\"value\": 3978648\n},\n{\n\"name\": \"processor_plevel_time\",\n\"values\": [\n3405835479577,\n2628275207938,\n1916273074545,\n1366761457118,\n964863281216,\n676002919489,\n472533086045,\n331487674159,\n234447654307,\n167247803300,\n120098535891,\n86312126550,\n61675398266,\n43549889374,\n30176461104,\n19891286233,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": 
[\n\"0_CPU\",\n\"1_CPU\",\n\"2_CPU\",\n\"3_CPU\",\n\"4_CPU\",\n\"5_CPU\",\n\"6_CPU\",\n\"7_CPU\",\n\"8_CPU\",\n\"9_CPU\",\n\"10_CPU\",\n\"11_CPU\",\n\"12_CPU\",\n\"13_CPU\",\n\"14_CPU\",\n\"15_CPU\",\n\"16_CPU\",\n\"17_CPU\",\n\"18_CPU\",\n\"19_CPU\",\n\"20_CPU\",\n\"21_CPU\",\n\"22_CPU\",\n\"23_CPU\",\n\"24_CPU\",\n\"25_CPU\",\n\"26_CPU\",\n\"27_CPU\",\n\"28_CPU\",\n\"29_CPU\",\n\"30_CPU\",\n\"31_CPU\",\n\"32_CPU\",\n\"33_CPU\",\n\"34_CPU\",\n\"35_CPU\",\n\"36_CPU\",\n\"37_CPU\",\n\"38_CPU\",\n\"39_CPU\",\n\"40_CPU\",\n\"41_CPU\",\n\"42_CPU\",\n\"43_CPU\",\n\"44_CPU\",\n\"45_CPU\",\n\"46_CPU\",\n\"47_CPU\",\n\"48_CPU\",\n\"49_CPU\",\n\"50_CPU\",\n\"51_CPU\",\n\"52_CPU\",\n\"53_CPU\",\n\"54_CPU\",\n\"55_CPU\",\n\"56_CPU\",\n\"57_CPU\",\n\"58_CPU\",\n\"59_CPU\",\n\"60_CPU\",\n\"61_CPU\",\n\"62_CPU\",\n\"63_CPU\",\n\"64_CPU\",\n\"65_CPU\",\n\"66_CPU\",\n\"67_CPU\",\n\"68_CPU\",\n\"69_CPU\",\n\"70_CPU\",\n\"71_CPU\",\n\"72_CPU\",\n\"73_CPU\",\n\"74_CPU\",\n\"75_CPU\",\n\"76_CPU\",\n\"77_CPU\",\n\"78_CPU\",\n\"79_CPU\",\n\"80_CPU\",\n\"81_CPU\",\n\"82_CPU\",\n\"83_CPU\",\n\"84_CPU\",\n\"85_CPU\",\n\"86_CPU\",\n\"87_CPU\",\n\"88_CPU\",\n\"89_CPU\",\n\"90_CPU\",\n\"91_CPU\",\n\"92_CPU\",\n\"93_CPU\",\n\"94_CPU\",\n\"95_CPU\",\n\"96_CPU\",\n\"97_CPU\",\n\"98_CPU\",\n\"99_CPU\",\n\"100_CPU\",\n\"101_CPU\",\n\"102_CPU\",\n\"103_CPU\",\n\"104_CPU\",\n\"105_CPU\",\n\"106_CPU\",\n\"107_CPU\",\n\"108_CPU\",\n\"109_CPU\",\n\"110_CPU\",\n\"111_CPU\",\n\"112_CPU\",\n\"113_CPU\",\n\"114_CPU\",\n\"115_CPU\",\n\"116_CPU\",\n\"117_CPU\",\n\"118_CPU\",\n\"119_CPU\",\n\"120_CPU\",\n\"121_CPU\",\n\"122_CPU\",\n\"123_CPU\",\n\"124_CPU\",\n\"125_CPU\",\n\"126_CPU\",\n\"127_CPU\"\n]\n},\n{\n\"name\": \"processor_plevel\",\n\"values\": [\n3405835479577,\n2628275207938,\n1916273074545,\n1366761457118,\n964863281216,\n676002919489,\n472533086045,\n331487674159,\n234447654307,\n167247803300,\n120098535891,\n86312126550,\n61675398266,\n43549889374,\n30176461104,\n19891286233,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": 
[\n\"0_CPU\",\n\"1_CPU\",\n\"2_CPU\",\n\"3_CPU\",\n\"4_CPU\",\n\"5_CPU\",\n\"6_CPU\",\n\"7_CPU\",\n\"8_CPU\",\n\"9_CPU\",\n\"10_CPU\",\n\"11_CPU\",\n\"12_CPU\",\n\"13_CPU\",\n\"14_CPU\",\n\"15_CPU\",\n\"16_CPU\",\n\"17_CPU\",\n\"18_CPU\",\n\"19_CPU\",\n\"20_CPU\",\n\"21_CPU\",\n\"22_CPU\",\n\"23_CPU\",\n\"24_CPU\",\n\"25_CPU\",\n\"26_CPU\",\n\"27_CPU\",\n\"28_CPU\",\n\"29_CPU\",\n\"30_CPU\",\n\"31_CPU\",\n\"32_CPU\",\n\"33_CPU\",\n\"34_CPU\",\n\"35_CPU\",\n\"36_CPU\",\n\"37_CPU\",\n\"38_CPU\",\n\"39_CPU\",\n\"40_CPU\",\n\"41_CPU\",\n\"42_CPU\",\n\"43_CPU\",\n\"44_CPU\",\n\"45_CPU\",\n\"46_CPU\",\n\"47_CPU\",\n\"48_CPU\",\n\"49_CPU\",\n\"50_CPU\",\n\"51_CPU\",\n\"52_CPU\",\n\"53_CPU\",\n\"54_CPU\",\n\"55_CPU\",\n\"56_CPU\",\n\"57_CPU\",\n\"58_CPU\",\n\"59_CPU\",\n\"60_CPU\",\n\"61_CPU\",\n\"62_CPU\",\n\"63_CPU\",\n\"64_CPU\",\n\"65_CPU\",\n\"66_CPU\",\n\"67_CPU\",\n\"68_CPU\",\n\"69_CPU\",\n\"70_CPU\",\n\"71_CPU\",\n\"72_CPU\",\n\"73_CPU\",\n\"74_CPU\",\n\"75_CPU\",\n\"76_CPU\",\n\"77_CPU\",\n\"78_CPU\",\n\"79_CPU\",\n\"80_CPU\",\n\"81_CPU\",\n\"82_CPU\",\n\"83_CPU\",\n\"84_CPU\",\n\"85_CPU\",\n\"86_CPU\",\n\"87_CPU\",\n\"88_CPU\",\n\"89_CPU\",\n\"90_CPU\",\n\"91_CPU\",\n\"92_CPU\",\n\"93_CPU\",\n\"94_CPU\",\n\"95_CPU\",\n\"96_CPU\",\n\"97_CPU\",\n\"98_CPU\",\n\"99_CPU\",\n\"100_CPU\",\n\"101_CPU\",\n\"102_CPU\",\n\"103_CPU\",\n\"104_CPU\",\n\"105_CPU\",\n\"106_CPU\",\n\"107_CPU\",\n\"108_CPU\",\n\"109_CPU\",\n\"110_CPU\",\n\"111_CPU\",\n\"112_CPU\",\n\"113_CPU\",\n\"114_CPU\",\n\"115_CPU\",\n\"116_CPU\",\n\"117_CPU\",\n\"118_CPU\",\n\"119_CPU\",\n\"120_CPU\",\n\"121_CPU\",\n\"122_CPU\",\n\"123_CPU\",\n\"124_CPU\",\n\"125_CPU\",\n\"126_CPU\",\n\"127_CPU\"\n]\n},\n{\n\"name\": \"domain_busy\",\n\"values\": [\n51049666116086,\n13419960088,\n13297686377,\n1735383373870,\n39183250298,\n6728050897,\n28229793795,\n17493622207,\n122290467,\n974721172619,\n47944793823,\n164946850,\n4162377932,\n407009733276,\n128199854099,\n9037374471285,\n38911301970,\n366749865,\n732045734,\n2997541695,\n14,\n18,\n40\n],\n\"labels\": [\n\"idle\",\n\"kahuna\",\n\"storage\",\n\"exempt\",\n\"none\",\n\"raid\",\n\"raid_exempt\",\n\"xor_exempt\",\n\"target\",\n\"wafl_exempt\",\n\"wafl_mpcleaner\",\n\"sm_exempt\",\n\"protocol\",\n\"nwk_exempt\",\n\"network\",\n\"hostOS\",\n\"ssan_exempt\",\n\"unclassified\",\n\"kahuna_legacy\",\n\"ha\",\n\"ssan_exempt2\",\n\"exempt_ise\",\n\"zombie\"\n]\n},\n{\n\"name\": \"domain_shared\",\n\"values\": [\n0,\n685164024474,\n0,\n0,\n0,\n24684879894,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": [\n\"idle\",\n\"kahuna\",\n\"storage\",\n\"exempt\",\n\"none\",\n\"raid\",\n\"raid_exempt\",\n\"xor_exempt\",\n\"target\",\n\"wafl_exempt\",\n\"wafl_mpcleaner\",\n\"sm_exempt\",\n\"protocol\",\n\"nwk_exempt\",\n\"network\",\n\"hostOS\",\n\"ssan_exempt\",\n\"unclassified\",\n\"kahuna_legacy\",\n\"ha\",\n\"ssan_exempt2\",\n\"exempt_ise\",\n\"zombie\"\n]\n},\n{\n\"name\": \"dswitchto_cnt\",\n\"values\": [\n0,\n322698663,\n172936437,\n446893016,\n96971,\n39788918,\n5,\n10,\n10670440,\n22,\n7,\n836,\n2407967,\n9798186907,\n9802868991,\n265242,\n53,\n2614118,\n4430780,\n66117706,\n1,\n1,\n1\n],\n\"labels\": 
[\n\"idle\",\n\"kahuna\",\n\"storage\",\n\"exempt\",\n\"none\",\n\"raid\",\n\"raid_exempt\",\n\"xor_exempt\",\n\"target\",\n\"wafl_exempt\",\n\"wafl_mpcleaner\",\n\"sm_exempt\",\n\"protocol\",\n\"nwk_exempt\",\n\"network\",\n\"hostOS\",\n\"ssan_exempt\",\n\"unclassified\",\n\"kahuna_legacy\",\n\"ha\",\n\"ssan_exempt2\",\n\"exempt_ise\",\n\"zombie\"\n]\n},\n{\n\"name\": \"intr_cnt\",\n\"values\": [\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n4191453008,\n8181232,\n1625052957,\n0,\n71854,\n0,\n71854,\n0,\n5,\n0,\n5,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": [\n\"dev_0\",\n\"dev_1\",\n\"dev_2\",\n\"dev_3\",\n\"dev_4\",\n\"dev_5\",\n\"dev_6\",\n\"dev_7\",\n\"dev_8\",\n\"dev_9\",\n\"dev_10\",\n\"dev_11\",\n\"dev_12\",\n\"dev_13\",\n\"dev_14\",\n\"dev_15\",\n\"dev_16\",\n\"dev_17\",\n\"dev_18\",\n\"dev_19\",\n\"dev_20\",\n\"dev_21\",\n\"dev_22\",\n\"dev_23\",\n\"dev_24\",\n\"dev_25\",\n\"dev_26\",\n\"dev_27\",\n\"dev_28\",\n\"dev_29\",\n\"dev_30\",\n\"dev_31\",\n\"dev_32\",\n\"dev_33\",\n\"dev_34\",\n\"dev_35\",\n\"dev_36\",\n\"dev_37\",\n\"dev_38\",\n\"dev_39\",\n\"dev_40\",\n\"dev_41\",\n\"dev_42\",\n\"dev_43\",\n\"dev_44\",\n\"dev_45\",\n\"dev_46\",\n\"dev_47\",\n\"dev_48\",\n\"dev_49\",\n\"dev_50\",\n\"dev_51\",\n\"dev_52\",\n\"dev_53\",\n\"dev_54\",\n\"dev_55\",\n\"dev_56\",\n\"dev_57\",\n\"dev_58\",\n\"dev_59\",\n\"dev_60\",\n\"dev_61\",\n\"dev_62\",\n\"dev_63\",\n\"dev_64\",\n\"dev_65\",\n\"dev_66\",\n\"dev_67\",\n\"dev_68\",\n\"dev_69\",\n\"dev_70\",\n\"dev_71\",\n\"dev_72\",\n\"dev_73\",\n\"dev_74\",\n\"dev_75\",\n\"dev_76\",\n\"dev_77\",\n\"dev_78\",\n\"dev_79\",\n\"dev_80\",\n\"dev_81\",\n\"dev_82\",\n\"dev_83\",\n\"dev_84\",\n\"dev_85\",\n\"dev_86\",\n\"dev_87\",\n\"dev_88\",\n\"dev_89\",\n\"dev_90\",\n\"dev_91\",\n\"dev_92\",\n\"dev_93\",\n\"dev_94\",\n\"dev_95\",\n\"dev_96\",\n\"dev_97\",\n\"dev_98\",\n\"dev_99\",\n\"dev_100\",\n\"dev_101\",\n\"dev_102\",\n\"dev_103\",\n\"dev_104\",\n\"dev_105\",\n\"dev_106\",\n\"dev_107\",\n\"dev_108\",\n\"dev_109\",\n\"dev_110\",\n\"dev_111\",\n\"dev_112\",\n\"dev_113\",\n\"dev_114\",\n\"dev_115\",\n\"dev_116\",\n\"dev_117\",\n\"dev_118\",\n\"dev_119\",\n\"dev_120\",\n\"dev_121\",\n\"dev_122\",\n\"dev_123\",\n\"dev_124\",\n\"dev_125\",\n\"dev_126\",\n\"dev_127\",\n\"dev_128\",\n\"dev_129\",\n\"dev_130\",\n\"dev_131\",\n\"dev_132\",\n\"dev_133\",\n\"dev_134\",\n\"dev_135\",\n\"dev_136\",\n\"dev_137\",\n\"dev_138\",\n\"dev_139\",\n\"dev_140\",\n\"dev_141\",\n\"dev_142\",\n\"dev_143\",\n\"dev_144\",\n\"dev_145\",\n\"dev_146\",\n\"dev_147\",\n\"dev_148\",\n\"dev_149\",\n\"dev_150\",\n\"dev_151\",\n\"dev_152\",\n\"dev_153\",\n\"dev_154\",\n\"dev_155\",\n\"dev
_156\",\n\"dev_157\",\n\"dev_158\",\n\"dev_159\",\n\"dev_160\",\n\"dev_161\",\n\"dev_162\",\n\"dev_163\",\n\"dev_164\",\n\"dev_165\",\n\"dev_166\",\n\"dev_167\",\n\"dev_168\",\n\"dev_169\",\n\"dev_170\",\n\"dev_171\",\n\"dev_172\",\n\"dev_173\",\n\"dev_174\",\n\"dev_175\",\n\"dev_176\",\n\"dev_177\",\n\"dev_178\",\n\"dev_179\",\n\"dev_180\",\n\"dev_181\",\n\"dev_182\",\n\"dev_183\",\n\"dev_184\",\n\"dev_185\",\n\"dev_186\",\n\"dev_187\",\n\"dev_188\",\n\"dev_189\",\n\"dev_190\",\n\"dev_191\",\n\"dev_192\",\n\"dev_193\",\n\"dev_194\",\n\"dev_195\",\n\"dev_196\",\n\"dev_197\",\n\"dev_198\",\n\"dev_199\",\n\"dev_200\",\n\"dev_201\",\n\"dev_202\",\n\"dev_203\",\n\"dev_204\",\n\"dev_205\",\n\"dev_206\",\n\"dev_207\",\n\"dev_208\",\n\"dev_209\",\n\"dev_210\",\n\"dev_211\",\n\"dev_212\",\n\"dev_213\",\n\"dev_214\",\n\"dev_215\",\n\"dev_216\",\n\"dev_217\",\n\"dev_218\",\n\"dev_219\",\n\"dev_220\",\n\"dev_221\",\n\"dev_222\",\n\"dev_223\",\n\"dev_224\",\n\"dev_225\",\n\"dev_226\",\n\"dev_227\",\n\"dev_228\",\n\"dev_229\",\n\"dev_230\",\n\"dev_231\",\n\"dev_232\",\n\"dev_233\",\n\"dev_234\",\n\"dev_235\",\n\"dev_236\",\n\"dev_237\",\n\"dev_238\",\n\"dev_239\",\n\"dev_240\",\n\"dev_241\",\n\"dev_242\",\n\"dev_243\",\n\"dev_244\",\n\"dev_245\",\n\"dev_246\",\n\"dev_247\",\n\"dev_248\",\n\"dev_249\",\n\"dev_250\",\n\"dev_251\",\n\"dev_252\",\n\"dev_253\",\n\"dev_254\",\n\"dev_255\"\n]\n},\n{\n\"name\": \"wafliron\",\n\"values\": [\n0,\n0,\n0\n],\n\"labels\": [\n\"iron_totstarts\",\n\"iron_nobackup\",\n\"iron_usebackup\"\n]\n}\n],\n\"aggregation\": {\n\"count\": 2,\n\"complete\": true\n},\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system:node/rows/umeng-aff300-01%3A28e14eab-0580-11e8-bd9d-00a098d39e12\"\n}\n}\n},\n{\n\"counter_table\": {\n\"name\": \"system:node\"\n},\n\"id\": \"umeng-aff300-02:1524afca-0580-11e8-ae74-00a098d390f2\",\n\"properties\": [\n{\n\"name\": \"node.name\",\n\"value\": \"umeng-aff300-02\"\n},\n{\n\"name\": \"system_model\",\n\"value\": \"AFF-A300\"\n},\n{\n\"name\": \"ontap_version\",\n\"value\": \"NetApp Release R9.12.1xN_221108_1315: Tue Nov  8 15:32:25 EST 2022 \"\n},\n{\n\"name\": \"compile_flags\",\n\"value\": \"1\"\n},\n{\n\"name\": \"serial_no\",\n\"value\": \"721802000259\"\n},\n{\n\"name\": \"system_id\",\n\"value\": \"0537123843\"\n},\n{\n\"name\": \"hostname\",\n\"value\": \"umeng-aff300-02\"\n},\n{\n\"name\": \"name\",\n\"value\": \"umeng-aff300-02\"\n},\n{\n\"name\": \"uuid\",\n\"value\": \"1524afca-0580-11e8-ae74-00a098d390f2\"\n}\n],\n\"counters\": [\n{\n\"name\": \"memory\",\n\"value\": 88766\n},\n{\n\"name\": \"nfs_ops\",\n\"value\": 2061227971\n},\n{\n\"name\": \"cifs_ops\",\n\"value\": 0\n},\n{\n\"name\": \"fcp_ops\",\n\"value\": 0\n},\n{\n\"name\": \"iscsi_ops\",\n\"value\": 183570559\n},\n{\n\"name\": \"nvme_fc_ops\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_tcp_ops\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_roce_ops\",\n\"value\": 0\n},\n{\n\"name\": \"network_data_received\",\n\"value\": 28707362447\n},\n{\n\"name\": \"network_data_sent\",\n\"value\": 31199786274\n},\n{\n\"name\": \"fcp_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"fcp_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"iscsi_data_received\",\n\"value\": 2462501728\n},\n{\n\"name\": \"iscsi_data_sent\",\n\"value\": 962425592\n},\n{\n\"name\": \"nvme_fc_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_fc_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_tcp_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_tcp_data_sent\",\n\"value\": 
0\n},\n{\n\"name\": \"nvme_roce_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_roce_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"partner_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"partner_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"sys_read_data\",\n\"value\": 28707362447\n},\n{\n\"name\": \"sys_write_data\",\n\"value\": 31199786274\n},\n{\n\"name\": \"sys_total_data\",\n\"value\": 59907148721\n},\n{\n\"name\": \"disk_data_read\",\n\"value\": 27355740700\n},\n{\n\"name\": \"disk_data_written\",\n\"value\": 3426898232\n},\n{\n\"name\": \"hdd_data_read\",\n\"value\": 0\n},\n{\n\"name\": \"hdd_data_written\",\n\"value\": 0\n},\n{\n\"name\": \"ssd_data_read\",\n\"value\": 27355740700\n},\n{\n\"name\": \"ssd_data_written\",\n\"value\": 3426898232\n},\n{\n\"name\": \"tape_data_read\",\n\"value\": 0\n},\n{\n\"name\": \"tape_data_written\",\n\"value\": 0\n},\n{\n\"name\": \"read_ops\",\n\"value\": 29957410\n},\n{\n\"name\": \"write_ops\",\n\"value\": 2141657620\n},\n{\n\"name\": \"other_ops\",\n\"value\": 73183500\n},\n{\n\"name\": \"total_ops\",\n\"value\": 2244798530\n},\n{\n\"name\": \"read_latency\",\n\"value\": 43283636161\n},\n{\n\"name\": \"write_latency\",\n\"value\": 1437635703835\n},\n{\n\"name\": \"other_latency\",\n\"value\": 628457365\n},\n{\n\"name\": \"total_latency\",\n\"value\": 1481547797361\n},\n{\n\"name\": \"read_data\",\n\"value\": 1908711454978\n},\n{\n\"name\": \"write_data\",\n\"value\": 23562759645410\n},\n{\n\"name\": \"other_data\",\n\"value\": 0\n},\n{\n\"name\": \"total_data\",\n\"value\": 25471471100388\n},\n{\n\"name\": \"cpu_busy\",\n\"value\": 511050841704\n},\n{\n\"name\": \"cpu_elapsed_time\",\n\"value\": 3979039364919\n},\n{\n\"name\": \"average_processor_busy_percent\",\n\"value\": 509151403977\n},\n{\n\"name\": \"total_processor_busy\",\n\"value\": 8146422463632\n},\n{\n\"name\": \"total_processor_busy_time\",\n\"value\": 8146422463632\n},\n{\n\"name\": \"num_processors\",\n\"value\": 16\n},\n{\n\"name\": \"interrupt_time\",\n\"value\": 108155323601\n},\n{\n\"name\": \"interrupt\",\n\"value\": 108155323601\n},\n{\n\"name\": \"interrupt_num\",\n\"value\": 3369179127\n},\n{\n\"name\": \"time_per_interrupt\",\n\"value\": 108155323601\n},\n{\n\"name\": \"non_interrupt_time\",\n\"value\": 8038267140031\n},\n{\n\"name\": \"non_interrupt\",\n\"value\": 8038267140031\n},\n{\n\"name\": \"idle_time\",\n\"value\": 55518207375072\n},\n{\n\"name\": \"idle\",\n\"value\": 55518207375072\n},\n{\n\"name\": \"cp_time\",\n\"value\": 64306316680\n},\n{\n\"name\": \"cp\",\n\"value\": 64306316680\n},\n{\n\"name\": \"interrupt_in_cp_time\",\n\"value\": 2024956616\n},\n{\n\"name\": \"interrupt_in_cp\",\n\"value\": 2024956616\n},\n{\n\"name\": \"interrupt_num_in_cp\",\n\"value\": 2661183541\n},\n{\n\"name\": \"time_per_interrupt_in_cp\",\n\"value\": 2024956616\n},\n{\n\"name\": \"sk_switches\",\n\"value\": 2798598514\n},\n{\n\"name\": \"hard_switches\",\n\"value\": 1354185066\n},\n{\n\"name\": \"intr_cnt_msec\",\n\"value\": 3978642246\n},\n{\n\"name\": \"intr_cnt_ipi\",\n\"value\": 797281\n},\n{\n\"name\": \"intr_cnt_total\",\n\"value\": 905575861\n},\n{\n\"name\": \"time\",\n\"value\": 1677516216\n},\n{\n\"name\": \"uptime\",\n\"value\": 3978643\n},\n{\n\"name\": \"processor_plevel_time\",\n\"values\": 
[\n2878770221447,\n1882901052733,\n1209134416474,\n771086627192,\n486829133301,\n306387520688,\n193706139760,\n123419519944,\n79080346535,\n50459518003,\n31714732122,\n19476561954,\n11616026278,\n6666253598,\n3623880168,\n1790458071,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": [\n\"0_CPU\",\n\"1_CPU\",\n\"2_CPU\",\n\"3_CPU\",\n\"4_CPU\",\n\"5_CPU\",\n\"6_CPU\",\n\"7_CPU\",\n\"8_CPU\",\n\"9_CPU\",\n\"10_CPU\",\n\"11_CPU\",\n\"12_CPU\",\n\"13_CPU\",\n\"14_CPU\",\n\"15_CPU\",\n\"16_CPU\",\n\"17_CPU\",\n\"18_CPU\",\n\"19_CPU\",\n\"20_CPU\",\n\"21_CPU\",\n\"22_CPU\",\n\"23_CPU\",\n\"24_CPU\",\n\"25_CPU\",\n\"26_CPU\",\n\"27_CPU\",\n\"28_CPU\",\n\"29_CPU\",\n\"30_CPU\",\n\"31_CPU\",\n\"32_CPU\",\n\"33_CPU\",\n\"34_CPU\",\n\"35_CPU\",\n\"36_CPU\",\n\"37_CPU\",\n\"38_CPU\",\n\"39_CPU\",\n\"40_CPU\",\n\"41_CPU\",\n\"42_CPU\",\n\"43_CPU\",\n\"44_CPU\",\n\"45_CPU\",\n\"46_CPU\",\n\"47_CPU\",\n\"48_CPU\",\n\"49_CPU\",\n\"50_CPU\",\n\"51_CPU\",\n\"52_CPU\",\n\"53_CPU\",\n\"54_CPU\",\n\"55_CPU\",\n\"56_CPU\",\n\"57_CPU\",\n\"58_CPU\",\n\"59_CPU\",\n\"60_CPU\",\n\"61_CPU\",\n\"62_CPU\",\n\"63_CPU\",\n\"64_CPU\",\n\"65_CPU\",\n\"66_CPU\",\n\"67_CPU\",\n\"68_CPU\",\n\"69_CPU\",\n\"70_CPU\",\n\"71_CPU\",\n\"72_CPU\",\n\"73_CPU\",\n\"74_CPU\",\n\"75_CPU\",\n\"76_CPU\",\n\"77_CPU\",\n\"78_CPU\",\n\"79_CPU\",\n\"80_CPU\",\n\"81_CPU\",\n\"82_CPU\",\n\"83_CPU\",\n\"84_CPU\",\n\"85_CPU\",\n\"86_CPU\",\n\"87_CPU\",\n\"88_CPU\",\n\"89_CPU\",\n\"90_CPU\",\n\"91_CPU\",\n\"92_CPU\",\n\"93_CPU\",\n\"94_CPU\",\n\"95_CPU\",\n\"96_CPU\",\n\"97_CPU\",\n\"98_CPU\",\n\"99_CPU\",\n\"100_CPU\",\n\"101_CPU\",\n\"102_CPU\",\n\"103_CPU\",\n\"104_CPU\",\n\"105_CPU\",\n\"106_CPU\",\n\"107_CPU\",\n\"108_CPU\",\n\"109_CPU\",\n\"110_CPU\",\n\"111_CPU\",\n\"112_CPU\",\n\"113_CPU\",\n\"114_CPU\",\n\"115_CPU\",\n\"116_CPU\",\n\"117_CPU\",\n\"118_CPU\",\n\"119_CPU\",\n\"120_CPU\",\n\"121_CPU\",\n\"122_CPU\",\n\"123_CPU\",\n\"124_CPU\",\n\"125_CPU\",\n\"126_CPU\",\n\"127_CPU\"\n]\n},\n{\n\"name\": \"processor_plevel\",\n\"values\": [\n2878770221447,\n1882901052733,\n1209134416474,\n771086627192,\n486829133301,\n306387520688,\n193706139760,\n123419519944,\n79080346535,\n50459518003,\n31714732122,\n19476561954,\n11616026278,\n6666253598,\n3623880168,\n1790458071,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": 
[\n\"0_CPU\",\n\"1_CPU\",\n\"2_CPU\",\n\"3_CPU\",\n\"4_CPU\",\n\"5_CPU\",\n\"6_CPU\",\n\"7_CPU\",\n\"8_CPU\",\n\"9_CPU\",\n\"10_CPU\",\n\"11_CPU\",\n\"12_CPU\",\n\"13_CPU\",\n\"14_CPU\",\n\"15_CPU\",\n\"16_CPU\",\n\"17_CPU\",\n\"18_CPU\",\n\"19_CPU\",\n\"20_CPU\",\n\"21_CPU\",\n\"22_CPU\",\n\"23_CPU\",\n\"24_CPU\",\n\"25_CPU\",\n\"26_CPU\",\n\"27_CPU\",\n\"28_CPU\",\n\"29_CPU\",\n\"30_CPU\",\n\"31_CPU\",\n\"32_CPU\",\n\"33_CPU\",\n\"34_CPU\",\n\"35_CPU\",\n\"36_CPU\",\n\"37_CPU\",\n\"38_CPU\",\n\"39_CPU\",\n\"40_CPU\",\n\"41_CPU\",\n\"42_CPU\",\n\"43_CPU\",\n\"44_CPU\",\n\"45_CPU\",\n\"46_CPU\",\n\"47_CPU\",\n\"48_CPU\",\n\"49_CPU\",\n\"50_CPU\",\n\"51_CPU\",\n\"52_CPU\",\n\"53_CPU\",\n\"54_CPU\",\n\"55_CPU\",\n\"56_CPU\",\n\"57_CPU\",\n\"58_CPU\",\n\"59_CPU\",\n\"60_CPU\",\n\"61_CPU\",\n\"62_CPU\",\n\"63_CPU\",\n\"64_CPU\",\n\"65_CPU\",\n\"66_CPU\",\n\"67_CPU\",\n\"68_CPU\",\n\"69_CPU\",\n\"70_CPU\",\n\"71_CPU\",\n\"72_CPU\",\n\"73_CPU\",\n\"74_CPU\",\n\"75_CPU\",\n\"76_CPU\",\n\"77_CPU\",\n\"78_CPU\",\n\"79_CPU\",\n\"80_CPU\",\n\"81_CPU\",\n\"82_CPU\",\n\"83_CPU\",\n\"84_CPU\",\n\"85_CPU\",\n\"86_CPU\",\n\"87_CPU\",\n\"88_CPU\",\n\"89_CPU\",\n\"90_CPU\",\n\"91_CPU\",\n\"92_CPU\",\n\"93_CPU\",\n\"94_CPU\",\n\"95_CPU\",\n\"96_CPU\",\n\"97_CPU\",\n\"98_CPU\",\n\"99_CPU\",\n\"100_CPU\",\n\"101_CPU\",\n\"102_CPU\",\n\"103_CPU\",\n\"104_CPU\",\n\"105_CPU\",\n\"106_CPU\",\n\"107_CPU\",\n\"108_CPU\",\n\"109_CPU\",\n\"110_CPU\",\n\"111_CPU\",\n\"112_CPU\",\n\"113_CPU\",\n\"114_CPU\",\n\"115_CPU\",\n\"116_CPU\",\n\"117_CPU\",\n\"118_CPU\",\n\"119_CPU\",\n\"120_CPU\",\n\"121_CPU\",\n\"122_CPU\",\n\"123_CPU\",\n\"124_CPU\",\n\"125_CPU\",\n\"126_CPU\",\n\"127_CPU\"\n]\n},\n{\n\"name\": \"domain_busy\",\n\"values\": [\n55518207375080,\n8102895398,\n12058227646,\n991838747162,\n28174147737,\n6669066926,\n14245801778,\n9009875224,\n118982762,\n177496844302,\n5888814259,\n167280195,\n3851617905,\n484154906167,\n91240285306,\n6180138216837,\n22111798640,\n344700584,\n266304074,\n2388625825,\n16,\n21,\n19\n],\n\"labels\": [\n\"idle\",\n\"kahuna\",\n\"storage\",\n\"exempt\",\n\"none\",\n\"raid\",\n\"raid_exempt\",\n\"xor_exempt\",\n\"target\",\n\"wafl_exempt\",\n\"wafl_mpcleaner\",\n\"sm_exempt\",\n\"protocol\",\n\"nwk_exempt\",\n\"network\",\n\"hostOS\",\n\"ssan_exempt\",\n\"unclassified\",\n\"kahuna_legacy\",\n\"ha\",\n\"ssan_exempt2\",\n\"exempt_ise\",\n\"zombie\"\n]\n},\n{\n\"name\": \"domain_shared\",\n\"values\": [\n0,\n153663450171,\n0,\n0,\n0,\n11834112384,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": [\n\"idle\",\n\"kahuna\",\n\"storage\",\n\"exempt\",\n\"none\",\n\"raid\",\n\"raid_exempt\",\n\"xor_exempt\",\n\"target\",\n\"wafl_exempt\",\n\"wafl_mpcleaner\",\n\"sm_exempt\",\n\"protocol\",\n\"nwk_exempt\",\n\"network\",\n\"hostOS\",\n\"ssan_exempt\",\n\"unclassified\",\n\"kahuna_legacy\",\n\"ha\",\n\"ssan_exempt2\",\n\"exempt_ise\",\n\"zombie\"\n]\n},\n{\n\"name\": \"dswitchto_cnt\",\n\"values\": [\n0,\n178192633,\n143964155,\n286324250,\n2365,\n39684121,\n5,\n10,\n10715325,\n22,\n7,\n30,\n2407970,\n7865489299,\n7870331008,\n265242,\n53,\n2535145,\n3252888,\n53334340,\n1,\n1,\n1\n],\n\"labels\": [\n\"idle\",\n\"kahuna\",\n\"storage\",\n\"exempt\",\n\"none\",\n\"raid\",\n\"raid_exempt\",\n\"xor_exempt\",\n\"target\",\n\"wafl_exempt\",\n\"wafl_mpcleaner\",\n\"sm_exempt\",\n\"protocol\",\n\"nwk_exempt\",\n\"network\",\n\"hostOS\",\n\"ssan_exempt\",\n\"unclassified\",\n\"kahuna_legacy\",\n\"ha\",\n\"ssan_exempt2\",\n\"exempt_ise\",\n\"zombie\"\n]\n},\n{\n\"name\": 
\"intr_cnt\",\n\"values\": [\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n724698481,\n8181275,\n488080162,\n0,\n71856,\n0,\n71856,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": [\n\"dev_0\",\n\"dev_1\",\n\"dev_2\",\n\"dev_3\",\n\"dev_4\",\n\"dev_5\",\n\"dev_6\",\n\"dev_7\",\n\"dev_8\",\n\"dev_9\",\n\"dev_10\",\n\"dev_11\",\n\"dev_12\",\n\"dev_13\",\n\"dev_14\",\n\"dev_15\",\n\"dev_16\",\n\"dev_17\",\n\"dev_18\",\n\"dev_19\",\n\"dev_20\",\n\"dev_21\",\n\"dev_22\",\n\"dev_23\",\n\"dev_24\",\n\"dev_25\",\n\"dev_26\",\n\"dev_27\",\n\"dev_28\",\n\"dev_29\",\n\"dev_30\",\n\"dev_31\",\n\"dev_32\",\n\"dev_33\",\n\"dev_34\",\n\"dev_35\",\n\"dev_36\",\n\"dev_37\",\n\"dev_38\",\n\"dev_39\",\n\"dev_40\",\n\"dev_41\",\n\"dev_42\",\n\"dev_43\",\n\"dev_44\",\n\"dev_45\",\n\"dev_46\",\n\"dev_47\",\n\"dev_48\",\n\"dev_49\",\n\"dev_50\",\n\"dev_51\",\n\"dev_52\",\n\"dev_53\",\n\"dev_54\",\n\"dev_55\",\n\"dev_56\",\n\"dev_57\",\n\"dev_58\",\n\"dev_59\",\n\"dev_60\",\n\"dev_61\",\n\"dev_62\",\n\"dev_63\",\n\"dev_64\",\n\"dev_65\",\n\"dev_66\",\n\"dev_67\",\n\"dev_68\",\n\"dev_69\",\n\"dev_70\",\n\"dev_71\",\n\"dev_72\",\n\"dev_73\",\n\"dev_74\",\n\"dev_75\",\n\"dev_76\",\n\"dev_77\",\n\"dev_78\",\n\"dev_79\",\n\"dev_80\",\n\"dev_81\",\n\"dev_82\",\n\"dev_83\",\n\"dev_84\",\n\"dev_85\",\n\"dev_86\",\n\"dev_87\",\n\"dev_88\",\n\"dev_89\",\n\"dev_90\",\n\"dev_91\",\n\"dev_92\",\n\"dev_93\",\n\"dev_94\",\n\"dev_95\",\n\"dev_96\",\n\"dev_97\",\n\"dev_98\",\n\"dev_99\",\n\"dev_100\",\n\"dev_101\",\n\"dev_102\",\n\"dev_103\",\n\"dev_104\",\n\"dev_105\",\n\"dev_106\",\n\"dev_107\",\n\"dev_108\",\n\"dev_109\",\n\"dev_110\",\n\"dev_111\",\n\"dev_112\",\n\"dev_113\",\n\"dev_114\",\n\"dev_115\",\n\"dev_116\",\n\"dev_117\",\n\"dev_118\",\n\"dev_119\",\n\"dev_120\",\n\"dev_121\",\n\"dev_122\",\n\"dev_123\",\n\"dev_124\",\n\"dev_125\",\n\"dev_126\",\n\"dev_127\",\n\"dev_128\",\n\"dev_129\",\n\"dev_130\",\n\"dev_131\",\n\"dev_132\",\n\"dev_133\",\n\"dev_134\",\n\"dev_135\",\n\"dev_136\",\n\"dev_137\",\n\"dev_138\",\n\"dev_139\",\n\"dev_140\",\n\"dev_141\",\n\"dev_142\",\n\"dev_143\",\n\"dev_144\",\n\"dev_145\",\n\"dev_146\",\n\"dev_147\",\n\"dev_148\",\n\"dev_149\",\n\"dev_150\",\n\"dev_151\",\n\"dev_152\",\n\"dev_153\",\n\"dev_154\",\n\"dev_155\",\n\"dev_156\",\n\"dev_157\",\n\"dev_158\",\n\"dev_159\",\n\"dev_160\",\n\"dev_161\",\n\"dev_162\",\n\"dev_163\",\n\"dev_164\",\n\"dev_165\",\n\"dev_166\",\n\"dev_167\",\n\"dev_168\",\n\"dev_169\",\n\"dev_170\",\n\"dev_171\",\n\"dev_172\",\n\"dev_173\",\n\"dev_174\",\n\"dev_175\",\n\"dev_176\",\n\"dev_177\",\n\"dev_178\",\n\"dev_179\",\n\"dev_180\",\n\"dev_181\",\n\"dev_182\",\n\
"dev_183\",\n\"dev_184\",\n\"dev_185\",\n\"dev_186\",\n\"dev_187\",\n\"dev_188\",\n\"dev_189\",\n\"dev_190\",\n\"dev_191\",\n\"dev_192\",\n\"dev_193\",\n\"dev_194\",\n\"dev_195\",\n\"dev_196\",\n\"dev_197\",\n\"dev_198\",\n\"dev_199\",\n\"dev_200\",\n\"dev_201\",\n\"dev_202\",\n\"dev_203\",\n\"dev_204\",\n\"dev_205\",\n\"dev_206\",\n\"dev_207\",\n\"dev_208\",\n\"dev_209\",\n\"dev_210\",\n\"dev_211\",\n\"dev_212\",\n\"dev_213\",\n\"dev_214\",\n\"dev_215\",\n\"dev_216\",\n\"dev_217\",\n\"dev_218\",\n\"dev_219\",\n\"dev_220\",\n\"dev_221\",\n\"dev_222\",\n\"dev_223\",\n\"dev_224\",\n\"dev_225\",\n\"dev_226\",\n\"dev_227\",\n\"dev_228\",\n\"dev_229\",\n\"dev_230\",\n\"dev_231\",\n\"dev_232\",\n\"dev_233\",\n\"dev_234\",\n\"dev_235\",\n\"dev_236\",\n\"dev_237\",\n\"dev_238\",\n\"dev_239\",\n\"dev_240\",\n\"dev_241\",\n\"dev_242\",\n\"dev_243\",\n\"dev_244\",\n\"dev_245\",\n\"dev_246\",\n\"dev_247\",\n\"dev_248\",\n\"dev_249\",\n\"dev_250\",\n\"dev_251\",\n\"dev_252\",\n\"dev_253\",\n\"dev_254\",\n\"dev_255\"\n]\n},\n{\n\"name\": \"wafliron\",\n\"values\": [\n0,\n0,\n0\n],\n\"labels\": [\n\"iron_totstarts\",\n\"iron_nobackup\",\n\"iron_usebackup\"\n]\n}\n],\n\"aggregation\": {\n\"count\": 2,\n\"complete\": true\n},\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system:node/rows/umeng-aff300-02%3A1524afca-0580-11e8-ae74-00a098d390f2\"\n}\n}\n}\n],\n\"num_records\": 2,\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system:node/rows?fields=*&return_records=true\"\n}\n}\n}\n

    "},{"location":"resources/rest-perf-metrics/#references","title":"References","text":"
    • Harvest REST Strategy
    • ONTAP 9.11.1 ONTAPI-to-REST Counter Manager Mapping
    • ONTAP REST API reference documentation
    • ONTAP REST API
    "},{"location":"resources/templates-and-metrics/","title":"Harvest Templates and Metrics","text":"

    Harvest collects ONTAP counter information, augments it, and stores it in a time-series DB. Refer to ONTAP Metrics for details about the ONTAP metrics exposed by Harvest.

    flowchart RL\n    Harvest[Harvest<br>Get & Augment] -- REST<br>ZAPI --> ONTAP\n    id1[(Prometheus<br>Store)] -- Scrape --> Harvest

    Three concepts work in unison to collect ONTAP metrics data, prepare it and make it available to Prometheus.

    • ZAPI/REST
    • Harvest templates
    • Exporters

    We're going to walk through an example from a running system, focusing on the disk object.

    At a high-level, Harvest templates describe what ZAPIs to send to ONTAP and how to interpret the responses.

    • ONTAP defines two ZAPIs to collect disk info
      • Config information is collected via storage-disk-get-iter
      • Performance counters are collected via disk:constituent
    • These ZAPIs are found in their corresponding object template files conf/zapi/cdot/9.8.0/disk.yaml and conf/zapiperf/cdot/9.8.0/disk.yaml. These files also describe how to map the ZAPI responses into a time-series-friendly format
    • Prometheus uniquely identifies a time series by its metric name and optional key-value pairs called labels.
    "},{"location":"resources/templates-and-metrics/#handy-tools","title":"Handy Tools","text":"
    • dasel is useful to convert between XML, YAML, JSON, etc. We'll use it to make displaying some of the data easier.
    "},{"location":"resources/templates-and-metrics/#ontap-zapi-disk-example","title":"ONTAP ZAPI disk example","text":"

    We'll use the bin/harvest zapi tool to interrogate the cluster and gather information about the counters. This is one way you can send ZAPIs to ONTAP and explore the return types and values.

    bin/harvest zapi -p u2 show attrs --api storage-disk-get-iter\n

    Output edited for brevity and line numbers added on left

    The hierarchy and return type of each counter is shown below. We'll use this hierarchy to build a matching Harvest template. For example, line 3 is the bytes-per-sector counter, which has an integer value, and is the child of storage-disk-info > disk-inventory-info.

    To capture that counter's value as a metric in Harvest, the ZAPI template must use the same hierarchical path. The matching path can be seen below.

    building tree for attribute [attributes-list] => [storage-disk-info]\n\n 1 [storage-disk-info]            -               *\n 2   [disk-inventory-info]        -                \n 3     [bytes-per-sector]         -         integer\n 4     [capacity-sectors]         -         integer\n 5     [disk-type]                -          string\n 6     [is-shared]                -         boolean\n 7     [model]                    -          string\n 8     [serial-number]            -          string\n 9     [shelf]                    -          string\n10     [shelf-bay]                -          string\n11   [disk-name]                  -          string\n12   [disk-ownership-info]        -                \n13     [home-node-name]           -          string\n14     [is-failed]                -         boolean\n15     [owner-node-name]          -          string\n16   [disk-raid-info]             -                \n17     [container-type]           -          string\n18     [disk-outage-info]         -                \n19       [is-in-fdr]              -         boolean\n20       [reason]                 -          string  \n21   [disk-stats-info]            -                \n22     [average-latency]          -         integer\n23     [disk-io-kbps]             -         integer\n24     [power-on-time-interval]   -         integer\n25     [sectors-read]             -         integer\n26     [sectors-written]          -         integer\n27   [disk-uid]                   -          string\n28   [node-name]                  -          string\n29   [storage-disk-state]         -         integer\n30   [storage-disk-state-flags]   -         integer\n
    "},{"location":"resources/templates-and-metrics/#harvest-templates","title":"Harvest Templates","text":"

    To understand templates, there are a few concepts to cover:

    There are three kinds of information included in templates that define what Harvest collects and exports:

    1. Configuration information is exported into the _labels metric (e.g. disk_labels, see below)
    2. Metrics data is exported as disk_\"metric name\" e.g. disk_bytes_per_sector, disk_sectors, etc. Metrics are leaf nodes that are not prefixed with a ^ or ^^. Metrics must be one of the number types: float or int.
    3. Plugins may add additional metrics, increasing the number of metrics exported in #2

    A resource will typically have multiple instances. Using disk as an example, that means there will be one disk_labels and a metric row per instance. If we have 24 disks and the disk template lists seven metrics to capture, Harvest will export a total of 192 rows of Prometheus data.

    24 instances * (7 metrics per instance + 1 label per instance) = 192 rows

    Sum of disk metrics that Harvest exports

    curl -s 'http://localhost:14002/metrics' | grep ^disk | cut -d'{' -f1 | sort | uniq -c\n  24 disk_bytes_per_sector\n  24 disk_labels\n  24 disk_sectors\n  24 disk_stats_average_latency\n  24 disk_stats_io_kbps\n  24 disk_stats_sectors_read\n  24 disk_stats_sectors_written\n  24 disk_uptime\n# 192 rows \n

    Read on to see how we control which labels from #1 and which metrics from #2 are included in the exported data.

    "},{"location":"resources/templates-and-metrics/#instance-keys-and-labels","title":"Instance Keys and Labels","text":"
    • Instance key - An instance key defines the set of attributes Harvest uses to construct a key that uniquely identifies an object. For example, the disk template uses the node + disk attributes to determine uniqueness. Using node or disk alone wouldn't be sufficient since disks on separate nodes can have the same name. If a single label does not uniquely identify an instance, combine multiple keys for uniqueness. Instance keys must refer to attributes that are of type string.

    Because instance keys define uniqueness, these keys are also added to each metric as a key-value pair. (See Control What Labels and Metrics are Exported for examples)

    • Instance label - Labels are key-value pairs used to gather configuration information about each instance. All of the key-value pairs are combined into a single metric named disk_labels. There will be one disk_labels for each monitored instance. Here's an example reformatted so it's easier to read:
    disk_labels{\n  datacenter=\"dc-1\",\n  cluster=\"umeng-aff300-05-06\",\n  node=\"umeng-aff300-06\",\n  disk=\"1.1.23\",\n  type=\"SSD\",\n  model=\"X371_S1643960ATE\",\n  outage=\"\",\n  owner_node=\"umeng-aff300-06\",\n  shared=\"true\",\n  shelf=\"1\",\n  shelf_bay=\"23\",\n  serial_number=\"S3SENE0K500532\",\n  failed=\"false\",\n  container_type=\"shared\"\n}\n
    "},{"location":"resources/templates-and-metrics/#harvest-object-template","title":"Harvest Object Template","text":"

    Continuing with the disk example, below is the conf/zapi/cdot/9.8.0/disk.yaml that tells Harvest which ZAPI to send to ONTAP (storage-disk-get-iter) and describes how to interpret and export the response.

    • Line 1 defines the name of this resource and is an exact match to the object defined in your default.yaml or custom.yaml file. For example:
    # default.yaml\nobjects:\n  Disk:  disk.yaml\n
    • Line 2 is the name of the ZAPI that Harvest will send to collect disk resources
    • Line 3 is the prefix used to export metrics associated with this object. i.e. all metrics will be of the form disk_*
    • Line 5 the counter section is where we define the metrics, labels, and what constitutes instance uniqueness
    • Line 7 the double hat prefix ^^ means this attribute is an instance key used to determine uniqueness. Instance keys are also included as labels. UUIDs are good choices for uniqueness
    • Line 13 the single hat prefix ^ means this attribute should be stored as a label. That means we can include it in the export_options section as one of the key-value pairs in disk_labels
    • Rows 10, 11, 23, 24, 25, 26, 27 - these are the metric rows - metrics are leaf nodes that are not prefixed with a ^ or ^^. If you refer back to the ONTAP ZAPI disk example above, you'll notice each of these attributes is an integer type.
    • Line 43 defines the set of labels to use when constructing the disk_labels metrics. As mentioned above, these labels capture config-related attributes per instance.

    Output edited for brevity and line numbers added for reference.

     1  name:             Disk\n 2  query:            storage-disk-get-iter\n 3  object:           disk\n 4  \n 5  counters:\n 6    storage-disk-info:\n 7      - ^^disk-uid\n 8      - ^^disk-name               => disk\n 9      - disk-inventory-info:\n10        - bytes-per-sector        => bytes_per_sector        # notice this has the same hierarchical path we saw from bin/harvest zapi\n11        - capacity-sectors        => sectors\n12        - ^disk-type              => type\n13        - ^is-shared              => shared\n14        - ^model                  => model\n15        - ^serial-number          => serial_number\n16        - ^shelf                  => shelf\n17        - ^shelf-bay              => shelf_bay\n18      - disk-ownership-info:\n19        - ^home-node-name         => node\n20        - ^owner-node-name        => owner_node\n21        - ^is-failed              => failed\n22      - disk-stats-info:\n23        - average-latency\n24        - disk-io-kbps\n25        - power-on-time-interval  => uptime\n26        - sectors-read\n27        - sectors-written\n28      - disk-raid-info:\n29        - ^container-type         => container_type\n30        - disk-outage-info:\n31          - ^reason               => outage\n32  \n33  plugins:\n34    - LabelAgent:\n35      # metric label zapi_value rest_value `default_value`\n36      value_to_num:\n37        - new_status outage - - `0` #ok_value is empty value, '-' would be converted to blank while processing.\n38  \n39  export_options:\n40    instance_keys:\n41      - node\n42      - disk\n43    instance_labels:\n44      - type\n45      - model\n46      - outage\n47      - owner_node\n48      - shared\n49      - shelf\n50      - shelf_bay\n51      - serial_number\n52      - failed\n53      - container_type\n
    "},{"location":"resources/templates-and-metrics/#control-what-labels-and-metrics-are-exported","title":"Control What Labels and Metrics are Exported","text":"

    Let's continue with disk and look at a few examples. We'll use curl to examine the Prometheus wire format that Harvest uses to export the metrics from conf/zapi/cdot/9.8.0/disk.yaml.

    The curl below shows all exported disk metrics. There are 24 disks on this cluster; Harvest is collecting seven metrics + one disk_labels + one plugin-created metric, disk_new_status, for a total of 216 rows.

    curl -s 'http://localhost:14002/metrics' | grep ^disk | cut -d'{' -f1 | sort | uniq -c\n  24 disk_bytes_per_sector           # metric\n  24 disk_labels                     # labels \n  24 disk_new_status                 # plugin created metric \n  24 disk_sectors                    # metric \n  24 disk_stats_average_latency      # metric   \n  24 disk_stats_io_kbps              # metric \n  24 disk_stats_sectors_read         # metric   \n  24 disk_stats_sectors_written      # metric  \n  24 disk_uptime                     # metric\n# sum = ((7 + 1 + 1) * 24 = 216 rows)\n

    Here's a disk_labels for one instance, reformatted to make it easier to read.

    curl -s 'http://localhost:14002/metrics' | grep ^disk_labels | head -1\n\ndisk_labels{\n  datacenter = \"dc-1\",                 # always included - value taken from datacenter in harvest.yml\n  cluster = \"umeng-aff300-05-06\",      # always included\n  node = \"umeng-aff300-06\",            # node is in the list of export_options instance_keys\n  disk = \"1.1.13\",                     # disk is in the list of export_options instance_keys\n  type = \"SSD\",                        # remainder are included because they are listed in the template's instance_labels\n  model = \"X371_S1643960ATE\",\n  outage = \"\",\n  owner_node = \"umeng-aff300-06\",\n  shared = \"true\",\n  shelf = \"1\",\n  shelf_bay = \"13\",\n  serial_number = \"S3SENE0K500572\",\n  failed = \"false\",\n  container_type = \"\",\n} 1.0\n

    Here's the disk_sectors metric for a single instance.

    curl -s 'http://localhost:14002/metrics' | grep ^disk_sectors | head -1\n\ndisk_sectors{                          # prefix of disk_ + metric name (line 11 in template)\n  datacenter = \"dc-1\",                 # always included - value taken from datacenter in harvest.yml\n  cluster = \"umeng-aff300-05-06\",      # always included\n  node = \"umeng-aff300-06\",            # node is in the list of export_options instance_keys\n  disk = \"1.1.17\",                     # disk is in the list of export_options instance_keys\n} 1875385008                           # metric value - number of sectors for this disk instance\n
    Number of rows for each template = number of instances * (number of metrics + 1 (for <name>_labels row) + plugin additions)\nNumber of metrics                = number of counters which are not labels or keys, those without a ^ or ^^\n
    "},{"location":"resources/templates-and-metrics/#common-errors-and-troubleshooting","title":"Common Errors and Troubleshooting","text":""},{"location":"resources/templates-and-metrics/#1-failed-to-parse-any-metrics","title":"1. Failed to parse any metrics","text":"

    You add a new template to Harvest, restart your poller, and get an error message:

    WRN ./poller.go:649 > init collector-object (Zapi:NetPort): no metrics => failed to parse any\n

    This means the collector, Zapi NetPort, was unable to find any metrics. Recall that metrics are leaf nodes without a ^ or ^^ prefix. In cases where you don't have any metrics, but still want to collect labels, add the collect_only_labels: true key-value to your template. This flag tells Harvest to ignore that you don't have metrics and continue. Example.
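
    A minimal sketch of how that flag fits into a template; the collect_only_labels key is the one named above, while the object, query, and counters shown here are illustrative:

    name:                NetPort
    query:               net-port-get-iter
    object:              netport

    collect_only_labels: true        # tell Harvest it is fine that this template exports no numeric metrics

    counters:
      net-port-info:
        - ^^node
        - ^^port
        - ^role                      => role

    export_options:
      instance_keys:
        - node
        - port
      instance_labels:
        - role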

    "},{"location":"resources/templates-and-metrics/#2-missing-data","title":"2. Missing Data","text":"
    1. What happens if an attribute is listed in the list of instance_labels (line 43 above), but that label is missing from the list of counters captured at line 5?

    The label will still be written into disk_labels, but the value will be empty since it's missing. E.g. if line 29 were deleted, container_type would still be present in disk_labels{container_type=\"\"}.

    "},{"location":"resources/templates-and-metrics/#prometheus-wire-format","title":"Prometheus Wire Format","text":"

    https://prometheus.io/docs/instrumenting/exposition_formats/

    Keep in mind that Prometheus does not permit dashes (-) in labels. That's why Harvest templates use name replacement to convert dashed-names to underscored-names with =>. e.g. bytes-per-sector => bytes_per_sector converts bytes-per-sector into the Prometheus accepted bytes_per_sector.

    Every time series is uniquely identified by its metric name and optional key-value pairs called labels.

    Labels enable Prometheus's dimensional data model: any combination of labels for the same metric name identifies a particular dimensional instantiation of that metric (for example: all HTTP requests that used the method POST to the /api/tracks handler). The query language allows filtering and aggregation based on these dimensions. Changing any label value, including adding or removing a label, will create a new time series.

    <metric_name>{<label_name>=<label_value>, ...} value [ timestamp ]

    • metric_name and label_name carry the usual Prometheus expression language restrictions
    • label_value can be any sequence of UTF-8 characters, but the backslash (\\), double-quote (\"), and line feed (\\n) characters have to be escaped as \\, \\", and \\n, respectively.
    • value is a float represented as required by Go's ParseFloat() function. In addition to standard numerical values, NaN, +Inf, and -Inf are valid values representing not a number, positive infinity, and negative infinity, respectively.
    • timestamp is an int64 (milliseconds since epoch, i.e. 1970-01-01 00:00:00 UTC, excluding leap seconds), represented as required by Go's ParseInt() function
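
    For example, reusing the disk_sectors sample from earlier, a single line on the wire could look roughly like this (the trailing timestamp is optional and shown only for illustration):

    disk_sectors{datacenter="dc-1",cluster="umeng-aff300-05-06",node="umeng-aff300-06",disk="1.1.17"} 1875385008 1677516216000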

    Exposition formats

    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"What is Harvest?","text":"

    Harvest is the open-metrics endpoint for ONTAP and StorageGRID

    NetApp Harvest brings observability to ONTAP and StorageGRID clusters. Harvest collects performance, capacity and hardware metrics from ONTAP and StorageGRID, transforms them, and routes them to your choice of time-series database.

    The included Grafana dashboards deliver the datacenter insights you need, while new metrics can be collected with a few edits of the included template files.

    Harvest is open-source, released under an Apache2 license, and offers great flexibility in how you collect, augment, and export your datacenter metrics.

    Note

    Hop onto our Discord or GitHub discussions and say hi. \ud83d\udc4b\ud83c\udffd

    "},{"location":"MigratePrometheusDocker/","title":"Migrate Prometheus Docker Volume","text":"

    If you want to keep your historical Prometheus data, and you generated your harvest-compose.yml file via bin/harvest generate before Harvest 22.11, please follow the steps below to migrate your historical Prometheus data.

    This is not required if you generated your harvest-compose.yml file via bin/harvest generate at Harvest release 22.11 or after.

    Outline of steps: 1. Stop Prometheus container so data acquiesces 2. Find historical Prometheus volume and create new Prometheus data volume 3. Create a new Prometheus volume that Harvest 22.11 and after will use 4. Copy the historical Prometheus data from the old volume to the new one 5. Optionally remove the historical Prometheus volume

    "},{"location":"MigratePrometheusDocker/#stop-prometheus-container","title":"Stop Prometheus container","text":"

    It's safe to run the stop and rm commands below regardless if Prometheus is running or not since removing the container does not touch the historical data stored in the volume.

    Stop all containers named Prometheus and remove them.

    docker stop (docker ps -fname=prometheus -q) && docker rm (docker ps -a -fname=prometheus -q)\n

    Docker may complain if the container is not running, like so. You can ignore this.

    Ignorable output when container is not running (click me)
    \"docker stop\" requires at least 1 argument.\nSee 'docker stop --help'.\n\nUsage:  docker stop [OPTIONS] CONTAINER [CONTAINER...]\n\nStop one or more running containers\n
    "},{"location":"MigratePrometheusDocker/#find-the-name-of-the-prometheus-volume-that-has-the-historical-data","title":"Find the name of the Prometheus volume that has the historical data","text":"
    docker volume ls -f name=prometheus -q\n

    Output should look like this:

    harvest-22080-1_linux_amd64_prometheus_data  # historical Prometheus data here\nharvest_prometheus_data                      # it is fine if this line is missing\n

    We want to copy the historical data from harvest-22080-1_linux_amd64_prometheus_data to harvest_prometheus_data.

    If harvest_prometheus_data already exists, you need to decide if you want to move that volume's data to a different volume or remove it. If you want to remove the volume, run docker volume rm harvest_prometheus_data. If you want to move the data, adjust the command below to first copy harvest_prometheus_data to a different volume and then remove it.
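
    For example, a sketch of how you could preserve the existing volume's data under a different name before reusing it (the backup volume name is hypothetical; the copy command mirrors the one used in the copy step below):

    docker volume create --name harvest_prometheus_data_backup
    docker run --rm -it -v harvest_prometheus_data:/from -v harvest_prometheus_data_backup:/to alpine ash -c "cd /from ; cp -av . /to"
    docker volume rm harvest_prometheus_data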

    "},{"location":"MigratePrometheusDocker/#create-new-prometheus-volume","title":"Create new Prometheus volume","text":"

    We're going to create a new volume named harvest_prometheus_data by executing:

    docker volume create --name harvest_prometheus_data\n
    "},{"location":"MigratePrometheusDocker/#copy-the-historical-prometheus-data","title":"Copy the historical Prometheus data","text":"

    We will copy the historical Prometheus data from the old volume to the new one by mounting both volumes and copying data between them. NOTE: Prometheus only supports copying a single volume. It will not work if you attempt to copy multiple volumes into the same destination volume.

    # replace `HISTORICAL_VOLUME` with the name of the Prometheus volume that contains your historical data found in step 2.\ndocker run --rm -it -v $HISTORICAL_VOLUME:/from -v harvest_prometheus_data:/to alpine ash -c \"cd /from ; cp -av . /to\"\n

    Output will look something like this:

    './wal' -> '/to/./wal'\n'./wal/00000000' -> '/to/./wal/00000000'\n'./chunks_head' -> '/to/./chunks_head'\n...\n
    "},{"location":"MigratePrometheusDocker/#optionally-remove-historical-prometheus-data","title":"Optionally remove historical Prometheus data","text":"

    Before removing the historical data, start your compose stack and make sure everything works.
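
    As a sketch, assuming the compose files generated by bin/harvest generate are named prom-stack.yml and harvest-compose.yml (adjust to your file names and Docker Compose version), you could bring the stack back up and confirm Prometheus is serving before deleting anything:

    docker compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans
    curl -s http://localhost:9090/-/healthy    # Prometheus liveness endpoint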

    Once you're satisfied that you can destroy the old data, remove it like so.

    # replace `HISTORICAL_VOLUME` with the name of the Prometheus volume that contains your historical data found in step 2.\ndocker volume rm $HISTORICAL_VOLUME\n
    "},{"location":"MigratePrometheusDocker/#reference","title":"Reference","text":"
    • Rename Docker Volume
    "},{"location":"configure-ems/","title":"EMS","text":""},{"location":"configure-ems/#ems-collector","title":"EMS collector","text":"

    The EMS collector collects ONTAP event management system (EMS) events via the ONTAP REST API.

    This collector uses a YAML template file to define which events to collect, export, and what labels to attach to each metric. This means you can collect new EMS events or attach new labels by editing the default template file or by extending existing templates.

    The default template file contains 98 EMS events.

    "},{"location":"configure-ems/#supported-ontap-systems","title":"Supported ONTAP Systems","text":"

    Any cDOT ONTAP system using 9.6 or higher.

    "},{"location":"configure-ems/#requirements","title":"Requirements","text":"

    It is recommended to create a read-only user on the ONTAP system. See prepare an ONTAP cDOT cluster for details.

    "},{"location":"configure-ems/#metrics","title":"Metrics","text":"

    This collector collects EMS events from ONTAP and for each received EMS event, creates new metrics prefixed with ems_events.
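
    As a purely hypothetical illustration (the real metric name and label set are driven by the matching event template's exports section, described below), a received event might surface on the Prometheus endpoint roughly like this:

    ems_events{datacenter="dc-1",cluster="cluster-01",message="LUN.offline",volume="abc_vol"} 1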

    Harvest supports two types of ONTAP EMS events:

    • Normal EMS events

    Single shot events. When ONTAP detects a problem, an event is raised. When the issue is addressed, ONTAP does not raise another event reflecting that the problem was resolved.

    • Bookend EMS events

    ONTAP creates bookend events in matching pairs. ONTAP creates an event when an issue is detected and another paired event when the event is resolved. Typically, these events share a common set of properties.

    "},{"location":"configure-ems/#collector-configuration","title":"Collector Configuration","text":"

    The parameters of the collector are distributed across three files:

    • Harvest configuration file (default: harvest.yml)
    • EMS collector configuration file (default: conf/ems/default.yaml)
    • EMS template file (located in conf/ems/9.6.0/ems.yaml)

    Except for addr, datacenter, and auth_style, all other parameters of the EMS collector can be defined in any of these three files. Parameters defined in the lower-level files override parameters in the higher-level files. This allows you to configure each EMS event individually, or use the same parameters for all events.

    "},{"location":"configure-ems/#ems-collector-configuration-file","title":"EMS Collector Configuration File","text":"

    This configuration file contains the parameters that are used to configure the EMS collector. These parameters can be defined in your harvest.yml or conf/ems/default.yaml file.

    | parameter | type | description | default |
    |---|---|---|---|
    | client_timeout | Go duration | how long to wait for server responses | 1m |
    | schedule | list, required | the polling frequency of the collector/object. Should include exactly the following two elements in the order specified: | |
    | - instance | Go duration | polling frequency for updating the instance cache (example value: 24h = 1440m) | |
    | - data | Go duration | polling frequency for updating the data cache (example value: 3m) | |

    Note: Harvest allows defining poll intervals at the sub-second level (e.g. 1ms), however keep in mind the following:
    • API response of an ONTAP system can take several seconds, so the collector is likely to enter failed state if the poll interval is less than client_timeout.
    • Small poll intervals will create significant workload on the ONTAP system.
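
    Putting those parameters in context, a sketch of a conf/ems/default.yaml could look like the following (values are illustrative, and the collector key mirrors the pattern used by the other collectors' default.yaml files; the objects mapping is discussed next):

    collector: Ems

    client_timeout: 1m
    schedule:
      - instance: 24h    # refresh the instance cache once a day
      - data: 3m         # poll ONTAP for new EMS events every three minutes

    objects:
      Ems: ems.yaml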

    The EMS configuration file should contain the following section mapping the Ems object to the corresponding template file.

    objects:\nEms: ems.yaml\n

    Even though the EMS mapping shown above references a single file named ems.yaml, there may be multiple versions of that file across subdirectories named after ONTAP releases. See cDOT for examples.

    At runtime, the EMS collector will select the appropriate object configuration file that most closely matches the targeted ONTAP system.

    "},{"location":"configure-ems/#ems-template-file","title":"EMS Template File","text":"

    The EMS template file should contain the following parameters:

    | parameter | type | description | default |
    |---|---|---|---|
    | name | string | display name of the collector. This matches the name defined in your conf/ems/default.yaml file | EMS |
    | object | string | short name of the object, used to prefix metrics | ems |
    | query | string | REST API endpoint used to query EMS events | api/support/ems/events |
    | exports | list | list of default labels attached to each exported metric | |
    | events | list | list of EMS events to collect. See Event Parameters | |
    "},{"location":"configure-ems/#event-parameters","title":"Event Parameters","text":"

    This section defines the list of EMS events you want to collect, which properties to export, what labels to attach, and how to handle bookend pairs. The EMS event template parameters are explained below along with an example for reference.

    • name is the ONTAP EMS event name. (collect ONTAP EMS events with the name of LUN.offline)
    • matches list of name-value pairs used to further filter ONTAP events. Some EMS events include arguments and these name-value pairs provide a way to filter on those arguments. (Only collect ONTAP EMS events where volume_name has the value abc_vol)
    • exports list of EMS event parameters to export. These exported parameters are attached as labels to each matching EMS event.
      • labels that are prefixed with ^^ use that parameter to define instance uniqueness.
    • resolve_when_ems (applicable to bookend events only). Lists the resolving event that pairs with the issuing event
      • name is the ONTAP EMS event name of the resolving EMS event (LUN.online). When the resolving event is received, the issuing EMS event will be resolved. In this example, Harvest will raise an event when it finds the ONTAP EMS event named LUN.offline and that event will be resolved when the EMS event named LUN.online is received.
      • resolve_after (optional, Go duration, default = 28 days) resolve the issuing EMS after the specified duration has elapsed (672h = 28d). If the bookend pair is not received within the resolve_after duration, the Issuing EMS event expires. When that happens, Harvest will mark the event as auto resolved by adding the autoresolved=true label to the issuing EMS event.
      • resolve_key (optional) bookend key used to match bookend EMS events. Defaults to prefixed (^^) labels in exports section. resolve_key allows you to override what is defined in the exports section.

    Labels are only exported if they are included in the exports section.

    Example template definition for the LUN.offline EMS event:

      - name: LUN.offline\nmatches:\n- name: volume_name\nvalue: abc_vol\nexports:\n- ^^parameters.object_uuid            => object_uuid\n- parameters.object_type              => object_type\n- parameters.lun_path                 => lun_path\n- parameters.volume_name              => volume\n- parameters.volume_dsid              => volume_ds_id\nresolve_when_ems:\n- name: LUN.online\nresolve_after: 672h\nresolve_key:\n- ^^parameters.object_uuid        => object_uuid\n
    "},{"location":"configure-ems/#how-do-i-find-the-full-list-of-supported-ems-events","title":"How do I find the full list of supported EMS events?","text":"

    ONTAP documents the list of EMS events created in the ONTAP EMS Event Catalog.

    You can also query a live system and ask the cluster for its event catalog like so:

    curl --insecure --user \"user:password\" 'https://10.61.124.110/api/support/ems/messages?fields=*'\n

    Example Output

    {\n  \"records\": [\n    {\n      \"name\": \"AccessCache.NearLimits\",\n      \"severity\": \"alert\",\n      \"description\": \"This message occurs when the access cache module is near its limits for entries or export rules. Reaching these limits can prevent new clients from being able to mount and perform I/O on the storage system, and can also cause clients to be granted or denied access based on stale cached information.\",\n      \"corrective_action\": \"Ensure that the number of clients accessing the storage system continues to be below the limits for access cache entries and export rules across those entries. If the set of clients accessing the storage system is constantly changing, consider using the \\\"vserver export-policy access-cache config modify\\\" command to reduce the harvest timeout parameter so that cache entries for clients that are no longer accessing the storage system can be evicted sooner.\",\n      \"snmp_trap_type\": \"severity_based\",\n      \"deprecated\": false\n    },\n...\n    {\n      \"name\": \"ztl.smap.online.status\",\n      \"severity\": \"notice\",\n      \"description\": \"This message occurs when the specified partition on a Software Defined Flash drive could not be onlined due to internal S/W or device error.\",\n      \"corrective_action\": \"NONE\",\n      \"snmp_trap_type\": \"severity_based\",\n      \"deprecated\": false\n    }\n  ],\n  \"num_records\": 7273\n}\n
    "},{"location":"configure-ems/#ems-prometheus-alerts","title":"Ems Prometheus Alerts","text":"

    Refer Prometheus-Alerts

    "},{"location":"configure-grafana/","title":"Configure Grafana","text":""},{"location":"configure-grafana/#grafana","title":"Grafana","text":"

    Grafana hosts the Harvest dashboards and needs to be setup before importing your dashboards.

    "},{"location":"configure-harvest-advanced/","title":"Configure Harvest (advanced)","text":"

    This chapter describes additional advanced configuration possibilities of NetApp Harvest. For a typical installation this level of detail is likely not needed.

    "},{"location":"configure-harvest-basic/","title":"Configure Harvest (basic)","text":"

    The main configuration file, harvest.yml, consists of the following sections, described below:

    "},{"location":"configure-harvest-basic/#pollers","title":"Pollers","text":"

    All pollers are defined in harvest.yml, the main configuration file of Harvest, under the section Pollers.
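
    For example, a poller that uses several of the parameters described in the table below might look like this sketch (names, addresses, and credentials are illustrative):

    Pollers:
      cluster-01:
        datacenter: dc-1
        addr: 10.0.1.10
        auth_style: basic_auth
        username: harvest
        password: secret
        use_insecure_tls: true
        collectors:
          - Zapi
          - ZapiPerf
        exporters:
          - prometheus1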

    | parameter | type | description | default |
    |---|---|---|---|
    | Poller name (header) | required | Poller name, user-defined value | |
    | datacenter | required | Datacenter name, user-defined value | |
    | addr | required by some collectors | IPv4 or FQDN of the target system | |
    | collectors | required | List of collectors to run for this poller | |
    | exporters | required | List of exporter names from the Exporters section. Note: this should be the name of the exporter (e.g. prometheus1), not the value of the exporter key (e.g. Prometheus) | |
    | auth_style | required by Zapi* collectors | Either basic_auth or certificate_auth. See authentication for details | basic_auth |
    | username, password | required if auth_style is basic_auth | | |
    | ssl_cert, ssl_key | optional if auth_style is certificate_auth | Paths to SSL (client) certificate and key used to authenticate with the target system. If not provided, the poller will look for <hostname>.key and <hostname>.pem in $HARVEST_HOME/cert/. To create certificates for ONTAP systems, see using certificate authentication | |
    | ca_cert | optional if auth_style is certificate_auth | Path to file that contains PEM encoded certificates. Harvest will append these certificates to the system-wide set of root certificate authorities (CA). If not provided, the OS's root CAs will be used. To create certificates for ONTAP systems, see using certificate authentication | |
    | use_insecure_tls | optional, bool | If true, disable TLS verification when connecting to ONTAP cluster | false |
    | credentials_file | optional, string | Path to a yaml file that contains cluster credentials. The file should have the same shape as harvest.yml. See here for examples. Path can be relative to harvest.yml or absolute. | |
    | credentials_script | optional, section | Section that defines how Harvest should fetch credentials via external script. See here for details. | |
    | tls_min_version | optional, string | Minimum TLS version to use when connecting to ONTAP cluster: One of tls10, tls11, tls12 or tls13 | Platform decides |
    | labels | optional, list of key-value pairs | Each of the key-value pairs will be added to a poller's metrics. Details below | |
    | log_max_bytes | | Maximum size of the log file before it will be rotated | 10 MB |
    | log_max_files | | Number of rotated log files to keep | 5 |
    | log | optional, list of collector names | Matching collectors log their ZAPI request/response | |
    | prefer_zapi | optional, bool | Use the ZAPI API if the cluster supports it, otherwise allow Harvest to choose REST or ZAPI, whichever is appropriate to the ONTAP version. See rest-strategy for details. | |
    "},{"location":"configure-harvest-basic/#defaults","title":"Defaults","text":"

    This section is optional. If there are parameters identical for all your pollers (e.g. datacenter, authentication method, login preferences), they can be grouped under this section. The poller section will be checked first and if the values aren't found there, the defaults will be consulted.

    "},{"location":"configure-harvest-basic/#exporters","title":"Exporters","text":"

    All exporters need two types of parameters:

    • exporter parameters - defined in harvest.yml under Exporters section
    • export_options - these options are defined in the Matrix data structure that is emitted from collectors and plugins

    The following two parameters are required for all exporters:

    | parameter | type | description | default |
    |---|---|---|---|
    | Exporter name (header) | required | Name of the exporter instance, this is a user-defined value | |
    | exporter | required | Name of the exporter class (e.g. Prometheus, InfluxDB, Http) - these can be found under the cmd/exporters/ directory | |

    Note: when we talk about the Prometheus Exporter or InfluxDB Exporter, we mean the Harvest modules that send the data to a database, NOT the names used to refer to the actual databases.
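
    For instance, a single Prometheus exporter entry could look like this sketch (the port value is illustrative; see the Prometheus Exporter section for its full parameter list):

    Exporters:
      prometheus1:
        exporter: Prometheus
        port: 12990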

    "},{"location":"configure-harvest-basic/#prometheus-exporter","title":"Prometheus Exporter","text":""},{"location":"configure-harvest-basic/#influxdb-exporter","title":"InfluxDB Exporter","text":""},{"location":"configure-harvest-basic/#tools","title":"Tools","text":"

    This section is optional. You can uncomment the grafana_api_token key and add your Grafana API token so harvest does not prompt you for the key when importing dashboards.

    Tools:\n  #grafana_api_token: 'aaa-bbb-ccc-ddd'\n
    "},{"location":"configure-harvest-basic/#poller_files","title":"Poller_files","text":"

    Harvest supports loading pollers from multiple files specified in the Poller_files section of your harvest.yml file. For example, the following snippet tells harvest to load pollers from all the *.yml files under the configs directory, and from the path/to/single.yml file.

    Paths may be relative or absolute.

    Poller_files:\n- configs/*.yml\n- path/to/single.yml\n\nPollers:\nu2:\ndatacenter: dc-1\n

    Each referenced file can contain one or more unique pollers. Ensure that you include the top-level Pollers section in these files. All other top-level sections will be ignored. For example:

    # contents of configs/00-rtp.yml\nPollers:\nntap3:\ndatacenter: rtp\n\nntap4:\ndatacenter: rtp\n---\n# contents of configs/01-rtp.yml\nPollers:\nntap5:\ndatacenter: blr\n---\n# contents of path/to/single.yml\nPollers:\nntap1:\ndatacenter: dc-1\n\nntap2:\ndatacenter: dc-1\n

    At runtime, all files will be read and combined into a single configuration. The example above would result in the following set of pollers, in this order.

    - u2\n- ntap3\n- ntap4\n- ntap5\n- ntap1\n- ntap2\n

    When using glob patterns, the list of matching paths will be sorted before they are read. Errors will be logged for all duplicate pollers and Harvest will refuse to start.

    "},{"location":"configure-harvest-basic/#configuring-collectors","title":"Configuring collectors","text":"

    Collectors are configured by their own configuration files (templates), which are stored in subdirectories in conf/. Most collectors run concurrently and collect a subset of related metrics. For example, node related metrics are grouped together and run independently of the disk related metrics. Below is a snippet from conf/zapi/default.yaml

    In this example, the default.yaml template contains a list of objects (e.g. Node) that reference sub-templates (e.g. node.yaml). This decomposition groups related metrics together and at runtime, a Zapi collector per object will be created and each of these collectors will run concurrently.

    Using the snippet below, we expect there to be four Zapi collectors running, each with a different subtemplate and object.

    collector:          Zapi\nobjects:\n  Node:             node.yaml\n  Aggregate:        aggr.yaml\n  Volume:           volume.yaml\n  SnapMirror:       snapmirror.yaml\n

    At start-up, Harvest looks for two files (default.yaml and custom.yaml) in the conf directory of the collector (e.g. conf/zapi/default.yaml). The default.yaml is installed by default, while the custom.yaml is an optional file you can create to add new templates.

    When present, the custom.yaml file will be merged with the default.yaml file. This behavior can be overridden in your harvest.yml, see here for an example.
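
    For example, a hypothetical conf/zapi/custom.yaml that adds one extra object on top of the defaults could be as small as this (the object name and template file are illustrative):

    objects:
      Qtree: qtree.yaml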

    For a list of collector-specific parameters, refer to their individual documentation.

    "},{"location":"configure-harvest-basic/#zapi-and-zapiperf","title":"Zapi and ZapiPerf","text":""},{"location":"configure-harvest-basic/#rest-and-restperf","title":"Rest and RestPerf","text":""},{"location":"configure-harvest-basic/#ems","title":"EMS","text":""},{"location":"configure-harvest-basic/#storagegrid","title":"StorageGRID","text":""},{"location":"configure-harvest-basic/#unix","title":"Unix","text":""},{"location":"configure-harvest-basic/#labels","title":"Labels","text":"

    Labels offer a way to add additional key-value pairs to a poller's metrics. These allow you to tag a cluster's metrics in a cross-cutting fashion. Here's an example:

      cluster-03:\n    datacenter: DC-01\n    addr: 10.0.1.1\n    labels:\n      - org: meg       # add an org label with the value \"meg\"\n      - ns:  rtp       # add a namespace label with the value \"rtp\"\n

    These settings add two key-value pairs to each metric collected from cluster-03 like this:

    node_vol_cifs_write_data{org=\"meg\",ns=\"rtp\",datacenter=\"DC-01\",cluster=\"cluster-03\",node=\"umeng-aff300-05\"} 10\n

    Keep in mind that each unique combination of key-value pairs increases the amount of stored data. Use them sparingly. See PrometheusNaming for details.

    "},{"location":"configure-harvest-basic/#authentication","title":"Authentication","text":"

    When authenticating with ONTAP and StorageGRID clusters, Harvest supports both client certificates and basic authentication.

    These methods of authentication are defined in the Pollers or Defaults section of your harvest.yml using one or more of the following parameters.

    | parameter | description | default | Link |
    |---|---|---|---|
    | auth_style | One of basic_auth or certificate_auth. Optional when using credentials_file or credentials_script | basic_auth | link |
    | username | Username used for authenticating to the remote system | | link |
    | password | Password used for authenticating to the remote system | | link |
    | credentials_file | Relative or absolute path to a yaml file that contains cluster credentials | | link |
    | credentials_script | External script Harvest executes to retrieve credentials | | link |
    "},{"location":"configure-harvest-basic/#precedence","title":"Precedence","text":"

    When multiple authentication parameters are defined at the same time, Harvest tries each method listed below, in the following order, to resolve authentication requests. The first method that returns a non-empty password stops the search.

    When these parameters exist in both the Pollers and Defaults section, the Pollers section will be consulted before the Defaults.

    | section | parameter |
    |---|---|
    | Pollers | auth_style: certificate_auth |
    | Pollers | auth_style: basic_auth with username and password |
    | Pollers | credentials_script |
    | Pollers | credentials_file |
    | Defaults | auth_style: certificate_auth |
    | Defaults | auth_style: basic_auth with username and password |
    | Defaults | credentials_script |
    | Defaults | credentials_file |
    "},{"location":"configure-harvest-basic/#credentials-file","title":"Credentials File","text":"

    If you would rather not list cluster credentials in your harvest.yml, you can use the credentials_file section in your harvest.yml to point to a file that contains the credentials. At runtime, the credentials_file will be read and the included credentials will be used to authenticate with the matching cluster(s).

    This is handy when integrating with 3rd party credential stores. See #884 for examples.

    The format of the credentials_file is similar to harvest.yml and can contain multiple cluster credentials.

    Example:

    Snippet from harvest.yml:

    Pollers:\ncluster1:\naddr: 10.193.48.11\ncredentials_file: secrets/cluster1.yml\nexporters:\n- prom1 

    File secrets/cluster1.yml:

    Pollers:\ncluster1:\nusername: harvest\npassword: foo\n
    "},{"location":"configure-harvest-basic/#credentials-script","title":"Credentials Script","text":"

    You can fetch authentication information via an external script by using the credentials_script section in the Pollers section of your harvest.yml as shown in the example below.

    At runtime, Harvest will invoke the script referenced in the credentials_script path section. Harvest will call the script with two arguments like so: ./script $addr $username.

    • The first argument is the address of the cluster taken from your harvest.yml file, section Pollers addr
    • The second argument is the username of the cluster taken from your harvest.yml file, section Pollers username

    The script should use the two arguments to look up and return the password via the script's standard out. If the script doesn't finish within the specified timeout, Harvest will kill the script and any spawned processes.

    Credential scripts are defined in your harvest.yml under the Pollers credentials_script section. Below are the options for the credentials_script section.

    parameter type description default path string absolute path to script that takes two arguments: addr and username, in that order schedule go duration or always schedule used to call the authentication script. If the value is always, the script will be called every time a password is requested, otherwise use the earlier cached value 24h timeout go duration amount of time Harvest will wait for the script to finish before killing it and its descendants 10s"},{"location":"configure-harvest-basic/#example","title":"Example","text":"
    Pollers:\nontap1:\ndatacenter: rtp\naddr: 10.1.1.1\ncollectors:\n- Rest\n- RestPerf\ncredentials_script:\npath: ./get_pass\nschedule: 3h\ntimeout: 10s\n
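
    A minimal sketch of such a script is shown below. The CSV file path and format are hypothetical; replace the lookup with whatever secret store you use. The only contract is that the password is written to standard out.

    #!/bin/bash\n# args: $1 = addr, $2 = username (passed by Harvest)\n# hypothetical lookup: a CSV file with addr,username,password rows\naddr=$1\nusername=$2\ngrep \"^${addr},${username},\" /opt/harvest/secrets.csv | cut -d, -f3\n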
    "},{"location":"configure-harvest-basic/#troubleshooting","title":"Troubleshooting","text":"
    • Make sure your script is executable
    • Ensure the user/group that executes your poller also has read and execute permissions on the script. su as the user/group that runs Harvest and make sure you can execute the script too.
    "},{"location":"configure-rest/","title":"REST","text":""},{"location":"configure-rest/#rest-collector","title":"Rest Collector","text":"

    The Rest collector uses the REST protocol to collect data from ONTAP systems.

    The RestPerf collector is an extension of this collector, therefore they share many parameters and configuration settings.

    "},{"location":"configure-rest/#target-system","title":"Target System","text":"

    The target system can be any cDot ONTAP system. ONTAP 9.12.1 and later are supported; however, the default configuration files may not completely match all versions. See REST Strategy for more details.

    "},{"location":"configure-rest/#requirements","title":"Requirements","text":"

    No SDK or other requirements. It is recommended to create a read-only user for Harvest on the ONTAP system (see prepare monitored clusters for details)

    "},{"location":"configure-rest/#metrics","title":"Metrics","text":"

    The collector collects a dynamic set of metrics. ONTAP returns JSON documents and Harvest allows you to define templates to extract values from the JSON document via a dot notation path. You can view ONTAP's full set of REST APIs by visiting https://docs.netapp.com/us-en/ontap-automation/reference/api_reference.html#access-a-copy-of-the-ontap-rest-api-reference-documentation

    As an example, the /api/storage/aggregates endpoint, lists all data aggregates in the cluster. Below is an example response from this endpoint:

    {\n\"records\": [\n{\n\"uuid\": \"3e59547d-298a-4967-bd0f-8ae96cead08c\",\n\"name\": \"umeng_aff300_aggr2\",\n\"space\": {\n\"block_storage\": {\n\"size\": 8117898706944,\n\"available\": 4889853616128\n}\n},\n\"state\": \"online\",\n\"volume_count\": 36\n}\n]\n}\n

    The Rest collector will take this document, extract the records section and convert the metrics above into: name, space.block_storage.size, space.block_storage.available, state and volume_count. Metric names will be taken, as is, unless you specify a short display name. See counters for more details.
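
    As a sketch, a counters section that collects the fields above from an aggregate template might look like this (the display names after => are illustrative):

    counters:\n- ^^uuid                            => uuid\n- ^name                             => aggr\n- ^state                            => state\n- space.block_storage.size          => size\n- space.block_storage.available     => available\n- volume_count\n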

    "},{"location":"configure-rest/#parameters","title":"Parameters","text":"

    The parameters of the collector are distributed across three files:

    • Harvest configuration file (default: harvest.yml)
    • Rest configuration file (default: conf/rest/default.yaml)
    • Each object has its own configuration file (located in conf/rest/$version/)

    Except for addr and datacenter, all other parameters of the Rest collector can be defined in either of these three files. Parameters defined in the lower-level file override parameters in the higher-level ones. This allows you to configure each object individually, or use the same parameters for all objects.

    The full set of parameters are described below.

    "},{"location":"configure-rest/#collector-configuration-file","title":"Collector configuration file","text":"

    This configuration file contains a list of objects that should be collected and the filenames of their templates (explained in the next section).

    Additionally, this file contains the parameters that are applied as defaults to all objects. As mentioned before, any of these parameters can be defined in the Harvest or object configuration files as well.

    parameter type description default client_timeout duration (Go-syntax) how long to wait for server responses 30s schedule list, required how frequently to retrieve metrics from ONTAP - data duration (Go-syntax) how frequently this collector/object should retrieve metrics from ONTAP 3 minutes
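
    A sketch of how these defaults might appear at the top of conf/rest/default.yaml (values are illustrative):

    client_timeout: 30s\nschedule:\n- data: 3m\n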

    The template should define objects in the objects section. Example:

    objects:\nAggregate: aggr.yaml\n

    For each object, we define the filename of the object configuration file. The object configuration files are located in subdirectories matching the ONTAP version that was used to create these files. It is possible to have multiple version-subdirectories for multiple ONTAP versions. At runtime, the collector will select the object configuration file that most closely matches the version of the target ONTAP system.

    "},{"location":"configure-rest/#object-configuration-file","title":"Object configuration file","text":"

    The Object configuration file (\"subtemplate\") should contain the following parameters:

    parameter type description default name string, required display name of the collector that will collect this object query string, required REST endpoint used to issue a REST request object string, required short name of the object counters string list of counters to collect (see notes below) plugins list plugins and their parameters to run on the collected data export_options list parameters to pass to exporters (see notes below)"},{"location":"configure-rest/#template-example","title":"Template Example:","text":"
    name:                     Volume\nquery:                    api/storage/volumes\nobject:                   volume\n\ncounters:\n- ^^name                                        => volume\n- ^^svm.name                                    => svm\n- ^aggregates.#.name                            => aggr\n- ^anti_ransomware.state                        => antiRansomwareState\n- ^state                                        => state\n- ^style                                        => style\n- space.available                               => size_available\n- space.overwrite_reserve                       => overwrite_reserve_total\n- space.overwrite_reserve_used                  => overwrite_reserve_used\n- space.percent_used                            => size_used_percent\n- space.physical_used                           => space_physical_used\n- space.physical_used_percent                   => space_physical_used_percent\n- space.size                                    => size\n- space.used                                    => size_used\n- hidden_fields:\n- anti_ransomware.state\n- space\n- filter:\n- name=*harvest*\n\nplugins:\n- LabelAgent:\nexclude_equals:\n- style `flexgroup_constituent`\n\nexport_options:\ninstance_keys:\n- aggr\n- style\n- svm\n- volume\ninstance_labels:\n- antiRansomwareState\n- state\n
    "},{"location":"configure-rest/#counters","title":"counters","text":"

    This section defines the list of counters that will be collected. These counters can be labels, numeric metrics or histograms. The exact property of each counter is fetched from ONTAP and updated periodically.

    The display name of a counter can be changed with => (e.g., space.block_storage.size => space_total).

    Counters that are stored as labels will only be exported if they are included in the export_options section.

    The counters section allows you to specify hidden_fields and filter parameters. Please find the detailed explanation below.

    "},{"location":"configure-rest/#hidden_fields","title":"hidden_fields","text":"

    There are some fields that ONTAP will not return unless you explicitly ask for them, even when using the URL parameter fields=**. hidden_fields is how you tell ONTAP which additional fields it should include in the REST response.

    "},{"location":"configure-rest/#filter","title":"filter","text":"

    The filter is used to constrain the data returned by the endpoint, allowing for more targeted data retrieval. The filtering uses ONTAP's REST record filtering. The example above asks ONTAP to only return records where a volume's name matches *harvest*.

    If you're familiar with ONTAP's REST record filtering, the example above would become name=*harvest* and be appended to the final URL like so:

    https://CLUSTER_IP/api/storage/volumes?fields=*,anti_ransomware.state,space&name=*harvest*\n

    Refer to the ONTAP API specification, sections: query parameters and record filtering, for more details.

    "},{"location":"configure-rest/#export_options","title":"export_options","text":"

    Parameters in this section tell the exporters how to handle the collected data. The set of parameters varies by exporter. For Prometheus and InfluxDB exporters, the following parameters can be defined (see the sketch after this list):

    • instance_keys (list): display names of labels to export with each data-point
    • instance_labels (list): display names of labels to export as a separate data-point
    • include_all_labels (bool): export all labels with each data-point (overrides previous two parameters)
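
    A sketch of an export_options block that uses these parameters (the label names are illustrative):

    export_options:\ninstance_keys:\n- volume\n- svm\ninstance_labels:\n- state\n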
    "},{"location":"configure-rest/#restperf-collector","title":"RestPerf Collector","text":"

    RestPerf collects performance metrics from ONTAP systems using the REST protocol. The collector is designed to be easily extendable to collect new objects or to collect additional counters from already configured objects.

    This collector is an extension of the Rest collector. The major difference between them is that RestPerf collects only the performance (perf) APIs. Additionally, RestPerf always calculates final values from the deltas of two subsequent polls.

    "},{"location":"configure-rest/#metrics_1","title":"Metrics","text":"

    RestPerf metrics are calculated the same as ZapiPerf metrics. More details about how performance metrics are calculated can be found here.

    "},{"location":"configure-rest/#parameters_1","title":"Parameters","text":"

    The parameters of the collector are distributed across three files:

    • Harvest configuration file (default: harvest.yml)
    • RestPerf configuration file (default: conf/restperf/default.yaml)
    • Each object has its own configuration file (located in conf/restperf/$version/)

    Except for addr, datacenter and auth_style, all other parameters of the RestPerf collector can be defined in either of these three files. Parameters defined in the lower-level file override parameters in the higher-level file. This allows the user to configure each object individually, or use the same parameters for all objects.

    The full set of parameters are described below.

    "},{"location":"configure-rest/#restperf-configuration-file","title":"RestPerf configuration file","text":"

    This configuration file (the \"template\") contains a list of objects that should be collected and the filenames of their configuration (explained in the next section).

    Additionally, this file contains the parameters that are applied as defaults to all objects. (As mentioned before, any of these parameters can be defined in the Harvest or object configuration files as well).

    parameter type description default use_insecure_tls bool, optional skip verifying TLS certificate of the target system false client_timeout duration (Go-syntax) how long to wait for server responses 30s latency_io_reqd int, optional threshold of IOPs for calculating latency metrics (latencies based on very few IOPs are unreliable) 100 schedule list, required the poll frequencies of the collector/object, should include exactly these three elements in the exact same order: - counter duration (Go-syntax) poll frequency of updating the counter metadata cache 20 minutes - instance duration (Go-syntax) poll frequency of updating the instance cache 10 minutes - data duration (Go-syntax) poll frequency of updating the data cache 1 minute

    Note: Harvest allows defining poll intervals at the sub-second level (e.g. 1ms); however, keep in mind the following:

    • API response of an ONTAP system can take several seconds, so the collector is likely to enter a failed state if the poll interval is less than client_timeout.
    • Small poll intervals will create a significant workload on the ONTAP system, as many counters are aggregated on-demand.
    • Some metric values become less significant if they are calculated for very short intervals (e.g. latencies)
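
    A sketch of how these three schedule elements typically appear near the top of conf/restperf/default.yaml (intervals are illustrative):

    schedule:\n- counter: 20m\n- instance: 10m\n- data: 1m\n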

    The template should define objects in the objects section. Example:

    objects:\nSystemNode: system_node.yaml\nHostAdapter: hostadapter.yaml\n

    Note that for each object we only define the filename of the object configuration file. The object configuration files are located in subdirectories matching the ONTAP version that was used to create these files. It is possible to have multiple version-subdirectories for multiple ONTAP versions. At runtime, the collector will select the object configuration file that most closely matches the version of the target ONTAP system. (A mismatch is tolerated since RestPerf will fetch and validate counter metadata from the system.)

    "},{"location":"configure-rest/#object-configuration-file_1","title":"Object configuration file","text":"

    Refer to Object configuration file

    "},{"location":"configure-rest/#counters_1","title":"counters","text":"

    Refer to Counters

    Some counters require a \"base-counter\" for post-processing. If the base-counter is missing, RestPerf will still run, but the missing data won't be exported.

    "},{"location":"configure-rest/#export_options_1","title":"export_options","text":"

    Refer to Export Options

    "},{"location":"configure-storagegrid/","title":"StorageGRID","text":""},{"location":"configure-storagegrid/#storagegrid-collector","title":"StorageGRID Collector","text":"

    The StorageGRID collector uses REST calls to collect data from StorageGRID systems.

    "},{"location":"configure-storagegrid/#target-system","title":"Target System","text":"

    All StorageGRID versions are supported; however, the default configuration files may not completely match older systems.

    "},{"location":"configure-storagegrid/#requirements","title":"Requirements","text":"

    No SDK or other requirements. It is recommended to create a read-only user for Harvest on the StorageGRID system (see prepare monitored clusters for details)

    "},{"location":"configure-storagegrid/#metrics","title":"Metrics","text":"

    The collector collects a dynamic set of metrics via StorageGRID's REST API. StorageGRID returns JSON documents and Harvest allows you to define templates to extract values from the JSON document via a dot notation path. You can view StorageGRID's full set of REST APIs by visiting https://$STORAGE_GRID_HOSTNAME/grid/apidocs.html

    As an example, the /grid/accounts-cache endpoint, lists the tenant accounts in the cache and includes additional information, such as objectCount and dataBytes. Below is an example response from this endpoint:

    {\n\"data\": [\n{\n\"id\": \"95245224059574669217\",\n\"name\": \"foople\",\n\"policy\": {\n\"quotaObjectBytes\": 50000000000\n},\n\"objectCount\": 6,\n\"dataBytes\": 10473454261\n}\n]\n}\n

    The StorageGRID collector will take this document, extract the data section and convert the metrics above into: name, policy.quotaObjectBytes, objectCount, and dataBytes. Metric names will be taken, as is, unless you specify a short display name. See counters for more details.
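
    As a sketch, a counters section that collects the fields above in a tenant template might look like this (the display names after => are illustrative):

    counters:\n- ^^id                          => id\n- ^name                         => tenant\n- policy.quotaObjectBytes       => logical_quota\n- objectCount                   => objects\n- dataBytes                     => logical_used\n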

    "},{"location":"configure-storagegrid/#parameters","title":"Parameters","text":"

    The parameters of the collector are distributed across three files:

    • Harvest configuration file (default: harvest.yml)
    • StorageGRID configuration file (default: conf/storagegrid/default.yaml)
    • Each object has its own configuration file (located in conf/storagegrid/$version/)

    Except for addr and datacenter, all other parameters of the StorageGRID collector can be defined in either of these three files. Parameters defined in the lower-level file override parameters in the higher-level ones. This allows you to configure each object individually, or use the same parameters for all objects.

    The full set of parameters are described below.

    "},{"location":"configure-storagegrid/#harvest-configuration-file","title":"Harvest configuration file","text":"

    Parameters in the poller section should define the following required parameters.
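
    For example, a minimal StorageGRID poller entry might look like this sketch (the address, credentials, and exporter name are placeholders):

    Pollers:\nsg-grid-01:\ndatacenter: DC-01\naddr: 10.0.1.20\nusername: harvest\npassword: changeme\ncollectors:\n- StorageGrid\nexporters:\n- prometheus\n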

    parameter type description default Poller name (header) string, required Poller name, user-defined value addr string, required address (IP or FQDN) of the StorageGRID system datacenter string, required Datacenter name, user-defined value username, password string, required StorageGRID username and password with at least Tenant accounts permissions collectors list, required Name of collector to run for this poller, use StorageGrid for this collector"},{"location":"configure-storagegrid/#storagegrid-configuration-file","title":"StorageGRID configuration file","text":"

    This configuration file contains a list of objects that should be collected and the filenames of their templates (explained in the next section).

    Additionally, this file contains the parameters that are applied as defaults to all objects. As mentioned before, any of these parameters can be defined in the Harvest or object configuration files as well.

    parameter type description default client_timeout duration (Go-syntax) how long to wait for server responses 30s schedule list, required how frequently to retrieve metrics from StorageGRID - data duration (Go-syntax) how frequently this collector/object should retrieve metrics from StorageGRID 5 minutes

    The template should define objects in the objects section. Example:

    objects:\nTenant: tenant.yaml\n

    For each object, we define the filename of the object configuration file. The object configuration files are located in subdirectories matching the StorageGRID version that was used to create these files. It is possible to have multiple version-subdirectories for multiple StorageGRID versions. At runtime, the collector will select the object configuration file that most closely matches the version of the target StorageGRID system.

    "},{"location":"configure-storagegrid/#object-configuration-file","title":"Object configuration file","text":"

    The Object configuration file (\"subtemplate\") should contain the following parameters:

    parameter type description default name string, required display name of the collector that will collect this object query string, required REST endpoint used to issue a REST request object string, required short name of the object api string StorageGRID REST endpoint version to use, overrides default management API version 3 counters list list of counters to collect (see notes below) plugins list plugins and their parameters to run on the collected data export_options list parameters to pass to exporters (see notes below)"},{"location":"configure-storagegrid/#counters","title":"counters","text":"

    This section defines the list of counters that will be collected. These counters can be labels, numeric metrics or histograms. The exact property of each counter is fetched from StorageGRID and updated periodically.

    The display name of a counter can be changed with => (e.g., policy.quotaObjectBytes => logical_quota).

    Counters that are stored as labels will only be exported if they are included in the export_options section.

    "},{"location":"configure-storagegrid/#export_options","title":"export_options","text":"

    Parameters in this section tell the exporters how to handle the collected data. The set of parameters varies by exporter. For Prometheus and InfluxDB exporters, the following parameters can be defined:

    • instance_keys (list): display names of labels to export with each data-point
    • instance_labels (list): display names of labels to export as a separate _label metric
    • include_all_labels (bool): export all labels with each data-point (overrides previous two parameters)
    "},{"location":"configure-templates/","title":"Templates","text":""},{"location":"configure-templates/#creatingediting-templates","title":"Creating/editing templates","text":"

    This document covers how to use Collector and Object templates to extend Harvest.

    1. How to add a new object template
    2. How to extend an existing object template

    There are a couple of ways to learn about ZAPIs and their attributes:

    • ONTAP's documentation
    • Using Harvest's zapi tool to explore available APIs and metrics on your cluster. Examples:
    $ harvest zapi --poller <poller> show apis\n  # will print list of apis that are available\n# usually apis with the \"get-iter\" suffix can provide useful metrics\n$ harvest zapi --poller <poller> show attrs --api volume-get-iter\n  # will print the attribute tree of the API\n$ harvest zapi --poller <poller> show data --api volume-get-iter\n  # will print raw data of the API attribute tree\n

    (Replace <poller> with the name of a poller that can connect to an ONTAP system.)

    "},{"location":"configure-templates/#collector-templates","title":"Collector templates","text":"

    Collector templates define which set of objects Harvest should collect from the system being monitored. In your harvest.yml configuration file, when you say that you want to use a Zapi collector, that collector will read the matching conf/zapi/default.yaml - same with ZapiPerf, it will read the conf/zapiperf/default.yaml file. Below is a snippet from conf/zapi/default.yaml. Each object is mapped to a corresponding object template file. For example, the Node object searches for the most appropriate version of the node.yaml file in the conf/zapi/cdot/** directory.

    collector:          Zapi\nobjects:\n  Node:             node.yaml\n  Aggregate:        aggr.yaml\n  Volume:           volume.yaml\n  Disk:             disk.yaml\n

    Each collector will also check if a matching file named custom.yaml exists, and if it does, it will read that file and merge it with default.yaml. The custom.yaml file should be located beside the matching default.yaml file (e.g., conf/zapi/custom.yaml is beside conf/zapi/default.yaml).

    Let's take a look at some examples.

    1. Define a poller that uses the default Zapi collector. Using the default template is the easiest and most used option.
    Pollers:\njamaica:\ndatacenter: munich\naddr: 10.10.10.10\ncollectors:\n- Zapi # will use conf/zapi/default.yaml and optionally merge with conf/zapi/custom.yaml\n
    2. Define a poller that uses the ZapiPerf collector, but with a custom template file:
    Pollers:\njamaica:\ndatacenter: munich\naddr: 10.10.10.10\ncollectors:\n- ZapiPerf:\n- limited.yaml # will use conf/zapiperf/limited.yaml\n# more templates can be added, they will be merged\n
    "},{"location":"configure-templates/#object-templates","title":"Object Templates","text":"

    Object templates (example: conf/zapi/cdot/9.8.0/lun.yaml) describe what to collect and export. These templates are used by collectors to gather metrics and send them to your time-series db.

    Object templates are made up of the following parts:

    1. the name of the object (or resource) to collect
    2. the ZAPI or REST query used to collect the object
    3. a list of object counters to collect and how to export them

    Instead of editing one of the existing templates, it's better to extend one of them. That way, your custom template will not be overwritten when upgrading Harvest. For example, if you want to extend conf/zapi/cdot/9.8.0/aggr.yaml, first create a copy (e.g., conf/zapi/cdot/9.8.0/custom_aggr.yaml), and then tell Harvest to use your custom template by adding these lines to conf/zapi/custom.yaml:

    objects:\nAggregate: custom_aggr.yaml\n

    After restarting your pollers, aggr.yaml and custom_aggr.yaml will be merged.

    "},{"location":"configure-templates/#create-a-new-object-template","title":"Create a new object template","text":"

    In this example, imagine that Harvest doesn't already collect environment sensor data and you wanted to collect it. Sensor data comes from the environment-sensors-get-iter ZAPI. Here are the steps to add a new object template.

    Create the file conf/zapi/cdot/9.8.0/sensor.yaml (optionally replace 9.8.0 with the earliest version of ONTAP that supports sensor data; refer to Harvest Versioned Templates for more information). Add the following content to your new sensor.yaml file.

    name: Sensor                      # this name must match the key in your custom.yaml file\nquery: environment-sensors-get-iter\nobject: sensor\n\nmetric_type: int64\n\ncounters:\nenvironment-sensors-info:\n- critical-high-threshold    => critical_high\n- critical-low-threshold     => critical_low\n- ^discrete-sensor-state     => discrete_state\n- ^discrete-sensor-value     => discrete_value\n- ^^node-name                => node\n- ^^sensor-name              => sensor\n- ^sensor-type               => type\n- ^threshold-sensor-state    => threshold_state\n- threshold-sensor-value     => threshold_value\n- ^value-units               => unit\n- ^warning-high-threshold    => warning_high\n- ^warning-low-threshold     => warning_low\n\nexport_options:\ninclude_all_labels: true\n
    "},{"location":"configure-templates/#enable-the-new-object-template","title":"Enable the new object template","text":"

    To enable the new sensor object template, create the conf/zapi/custom.yaml file with the lines shown below.

    objects:\nSensor: sensor.yaml                 # this key must match the name in your sensor.yaml file\n

    The Sensor key used in the custom.yaml must match the name defined in the sensor.yaml file. That mapping is what connects this object with its template. In the future, if you add more object templates, you can add those in your existing custom.yaml file.

    "},{"location":"configure-templates/#test-your-object-template-changes","title":"Test your object template changes","text":"

    Test your new Sensor template with a single poller like this:

    ./bin/harvest start <poller> --foreground --verbose --collectors Zapi --objects Sensor\n

    Replace <poller> with the name of one of your ONTAP pollers.

    Once you have confirmed that the new template works, restart any already running pollers that you want to use the new template(s).

    "},{"location":"configure-templates/#check-the-metrics","title":"Check the metrics","text":"

    If you are using the Prometheus exporter, you can scrape the poller's HTTP endpoint with curl or a web browser. E.g., my poller exports its data on port 15001. Adjust as needed for your exporter.

    curl -s 'http://localhost:15001/metrics' | grep ^sensor_  # sensor_ name matches the object: value in your sensor.yaml file.\n\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",critical_high=\"3664\",node=\"shopfloor-02\",sensor=\"P3.3V STBY\",type=\"voltage\",warning_low=\"3040\",critical_low=\"2960\",threshold_state=\"normal\",unit=\"mV\",warning_high=\"3568\"} 3280\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",sensor=\"P1.2V STBY\",type=\"voltage\",threshold_state=\"normal\",warning_high=\"1299\",warning_low=\"1105\",critical_low=\"1086\",node=\"shopfloor-02\",critical_high=\"1319\",unit=\"mV\"} 1193\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",unit=\"mV\",critical_high=\"15810\",critical_low=\"0\",node=\"shopfloor-02\",sensor=\"P12V STBY\",type=\"voltage\",threshold_state=\"normal\"} 11842\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",sensor=\"P12V STBY Curr\",type=\"current\",threshold_state=\"normal\",unit=\"mA\",critical_high=\"3182\",critical_low=\"0\",node=\"shopfloor-02\"} 748\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",critical_low=\"1470\",node=\"shopfloor-02\",sensor=\"Sysfan2 F2 Speed\",type=\"fan\",threshold_state=\"normal\",unit=\"RPM\",warning_low=\"1560\"} 2820\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",sensor=\"PSU2 Fan1 Speed\",type=\"fan\",threshold_state=\"normal\",unit=\"RPM\",warning_low=\"4600\",critical_low=\"4500\",node=\"shopfloor-01\"} 6900\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",sensor=\"PSU1 InPwr Monitor\",type=\"unknown\",threshold_state=\"normal\",unit=\"mW\",node=\"shopfloor-01\"} 132000\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",critical_high=\"58\",type=\"thermal\",unit=\"C\",warning_high=\"53\",critical_low=\"0\",node=\"shopfloor-01\",sensor=\"Bat Temp\",threshold_state=\"normal\",warning_low=\"5\"} 24\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",critical_high=\"9000\",node=\"shopfloor-01\",sensor=\"Bat Charge Volt\",type=\"voltage\",threshold_state=\"normal\",unit=\"mV\",warning_high=\"8900\"} 8200\nsensor_value{datacenter=\"WDRF\",cluster=\"shopfloor\",node=\"shopfloor-02\",sensor=\"PSU1 InPwr Monitor\",type=\"unknown\",threshold_state=\"normal\",unit=\"mW\"} 132000\n
    "},{"location":"configure-templates/#extend-an-existing-object-template","title":"Extend an existing object template","text":""},{"location":"configure-templates/#how-to-extend-a-restrestperfstoragegridems-collectors-existing-object-template","title":"How to extend a Rest/RestPerf/StorageGRID/Ems collector's existing object template","text":"

    Instead of editing one of the existing templates, it's better to copy one and edit the copy. That way, your custom template will not be overwritten when upgrading Harvest. For example, if you want to change conf/rest/9.12.0/aggr.yaml, first create a copy (e.g., conf/rest/9.12.0/custom_aggr.yaml), then add these lines to conf/rest/custom.yaml:

    objects:\nAggregate: custom_aggr.yaml\n

    After restarting pollers, aggr.yaml will be ignored and the new, custom_aggr.yaml subtemplate will be used instead.

    "},{"location":"configure-templates/#how-to-extend-a-zapizapiperf-collectors-existing-object-template","title":"How to extend a Zapi/ZapiPerf collector's existing object template","text":"

    In this example, we want to extend one of the existing object templates that Harvest ships with, e.g. conf/zapi/cdot/9.8.0/lun.yaml and collect additional information as outlined below.

    Let's say you want to extend lun.yaml to:

    1. Increase client_timeout (You want to increase the default timeout of the lun ZAPI because it keeps timing out)
    2. Add additional counters, e.g. multiprotocol-type, application
    3. Add a new counter to the already collected lun metrics using the value_to_num plugin
    4. Add a new application instance_keys and labels to the collected metrics

    Let's assume the existing template is located at conf/zapi/cdot/9.8.0/lun.yaml and contains the following.

    name: Lun\nquery: lun-get-iter\nobject: lun\n\ncounters:\nlun-info:\n- ^node\n- ^path\n- ^qtree\n- size\n- size-used\n- ^state\n- ^^uuid\n- ^volume\n- ^vserver => svm\n\nplugins:\n- LabelAgent:\n# metric label zapi_value rest_value `default_value`\nvalue_to_num:\n- new_status state online online `0`\nsplit:\n- path `/` ,,,lun\n\nexport_options:\ninstance_keys:\n- node\n- qtree\n- lun\n- volume\n- svm\ninstance_labels:\n- state\n

    To extend the out-of-the-box lun.yaml template, create a conf/zapi/custom.yaml file if it doesn't already exist and add the lines shown below:

    objects:\nLun: custom_lun.yaml\n

    Create a new object template conf/zapi/cdot/9.8.0/custom_lun.yaml with the lines shown below.

    client_timeout: 5m\ncounters:\nlun-info:\n- ^multiprotocol-type\n- ^application\n\nplugins:\n- LabelAgent:\nvalue_to_num:\n- custom_status state online online `0`\n\nexport_options:\ninstance_keys:\n- application\n

    When you restart your pollers, Harvest will take the out-of-the-box template (lun.yaml) and your new one (custom_lun.yaml) and merge them into the following:

    name: Lun\nquery: lun-get-iter\nobject: lun\ncounters:\nlun-info:\n- ^node\n- ^path\n- ^qtree\n- size\n- size-used\n- ^state\n- ^^uuid\n- ^volume\n- ^vserver => svm\n- ^multiprotocol-type\n- ^application\nplugins:\nLabelAgent:\nvalue_to_num:\n- new_status state online online `0`\n- custom_status state online online `0`\nsplit:\n- path `/` ,,,lun\nexport_options:\ninstance_keys:\n- node\n- qtree\n- lun\n- volume\n- svm\n- application\nclient_timeout: 5m\n

    To help understand the merging process and the resulting combined template, you can view the result with:

    bin/harvest doctor merge --template conf/zapi/cdot/9.8.0/lun.yaml --with conf/zapi/cdot/9.8.0/custom_lun.yaml\n
    "},{"location":"configure-templates/#replace-an-existing-object-template-for-zapizapiperf-collector","title":"Replace an existing object template for Zapi/ZapiPerf Collector","text":"

    You can only extend existing templates for Zapi/ZapiPerf Collector as explained above. If you need to replace one of the existing object templates, let us know on Discord or GitHub.

    "},{"location":"configure-templates/#harvest-versioned-templates","title":"Harvest Versioned Templates","text":"

    Harvest ships with a set of versioned templates tailored for specific versions of ONTAP. At runtime, Harvest uses a BestFit heuristic to pick the most appropriate template. The BestFit heuristic compares the list of Harvest templates with the ONTAP version and selects the best match. There are versioned templates for both the ZAPI and REST collectors. Below is an example of how the BestFit heuristic works - assume Harvest has these template versions:

    • 9.6.0
    • 9.6.1
    • 9.8.0
    • 9.9.0
    • 9.10.1

    If you are monitoring a cluster at one of these versions, Harvest will select the indicated template:

    • ONTAP version 9.4.1, Harvest will select the templates for 9.6.0
    • ONTAP version 9.6.0, Harvest will select the templates for 9.6.0
    • ONTAP version 9.7.X, Harvest will select the templates for 9.6.1
    • ONTAP version 9.12, Harvest will select the templates for 9.10.1
    "},{"location":"configure-templates/#counters","title":"counters","text":"

    This section contains the complete or partial attribute tree of the queried API. Since the collector does not get counter metadata from the ONTAP system, two additional symbols are used for non-numeric attributes:

    • ^ used as a prefix indicates that the attribute should be stored as a label
    • ^^ indicates that the attribute is a label and an instance key (i.e., a label that uniquely identifies an instance, such as name, uuid). If a single label does not uniquely identify an instance, then multiple instance keys should be indicated.

    Additionally, the symbol => can be used to set a custom display name for both instance labels and numeric counters. Example:

    name: Spare\nquery: aggr-spare-get-iter\nobject: spare\ncollect_only_labels: true\ncounters:\naggr-spare-disk-info:\n- ^^disk                                # creates label aggr-disk\n- ^disk-type                            # creates label aggr-disk-type\n- ^is-disk-zeroed   => is_disk_zeroed   # creates label is_disk_zeroed\n- ^^original-owner  => original_owner   # creates label original_owner\nexport_options:\ninstance_keys:\n- disk\n- original_owner\ninstance_labels:\n- disk_type\n- is_disk_zeroed\n

    Harvest does its best to determine a unique display name for each template's label and metric. Instead of relying on this heuristic, it is better to be explicit in your templates and define a display name using the => mapping. For example, instead of this:

    aggr-spare-disk-info:\n    - ^^disk\n    - ^disk-type\n

    do this:

    aggr-spare-disk-info:\n    - ^^disk      => disk\n    - ^disk-type  => disk_type\n

    See also #585

    "},{"location":"configure-unix/","title":"Unix","text":"

    This collector polls resource usage by Harvest pollers on the local system. The collector might be extended in the future to monitor any local or remote process.

    "},{"location":"configure-unix/#target-system","title":"Target System","text":"

    The machine where Harvest is running (\"localhost\").

    "},{"location":"configure-unix/#requirements","title":"Requirements","text":"

    The collector requires an OS where the proc filesystem is available. If you are a developer, you are welcome to add support for other platforms. Currently, supported platforms include most Unix/Unix-like systems:

    • Android / Termux
    • DragonFly BSD
    • FreeBSD
    • IBM AIX
    • Linux
    • NetBSD
    • Plan9
    • Solaris

    (On FreeBSD and NetBSD the proc-filesystem needs to be manually mounted).

    "},{"location":"configure-unix/#parameters","title":"Parameters","text":"parameter type description default mount_point string, optional path to the proc filesystem `/proc"},{"location":"configure-unix/#metrics","title":"Metrics","text":"

    The Collector follows the Linux proc(5) manual to parse a static set of metrics. Unless otherwise stated, the metric has a scalar value:

    metric type unit description start_time counter, float64 seconds process uptime cpu_percent gauge, float64 percent CPU used since last poll memory_percent gauge, float64 percent Memory used (RSS) since last poll cpu histogram, float64 seconds CPU used since last poll (system, user, iowait) memory histogram, uint64 kB Memory used since last poll (rss, vms, swap, etc) io histogram, uint64 bytecount IOs performed by the process: rchar, wchar, read_bytes, write_bytes - read/write IOs; syscr, syscw - syscalls for IO operations net histogram, uint64 count/byte Different IO operations over network devices ctx histogram, uint64 count Number of context switches (voluntary, involuntary) threads counter, uint64 count Number of threads fds counter, uint64 count Number of file descriptors

    Additionally, the collector provides the following instance labels:

    label description poller name of the poller pid PID of the poller"},{"location":"configure-unix/#issues","title":"Issues","text":"
    • The collector will fail on WSL because some non-critical files in the proc filesystem are not present.
    "},{"location":"configure-zapi/","title":"ZAPI","text":"

    What about REST?

    ZAPI will reach end of availability in ONTAP 9.13.1, released in Q2 2023. Don't worry, Harvest has you covered. Switch to Harvest's REST collectors and collect identical metrics. See REST Strategy for more details.

    "},{"location":"configure-zapi/#zapi-collector","title":"Zapi Collector","text":"

    The Zapi collector uses the ZAPI protocol to collect data from ONTAP systems. The collector submits data as received from the target system, and does not perform any calculations or post-processing. Since the attributes of most APIs have an irregular tree structure, sometimes a plugin will be required to collect all metrics from an API.

    The ZapiPerf collector is an extension of this collector, therefore they share many parameters and configuration settings.

    "},{"location":"configure-zapi/#target-system","title":"Target System","text":"

    The target system can be any cDot or 7Mode ONTAP system. Any version is supported; however, the default configuration files may not completely match older systems.

    "},{"location":"configure-zapi/#requirements","title":"Requirements","text":"

    No SDK or other requirements. It is recommended to create a read-only user for Harvest on the ONTAP system (see prepare monitored clusters for details)

    "},{"location":"configure-zapi/#metrics","title":"Metrics","text":"

    The collector collects a dynamic set of metrics. Since most ZAPIs have a tree structure, the collector converts that structure into a flat metric representation. No post-processing or calculation is performed on the collected data itself.

    As an example, the aggr-get-iter ZAPI provides the following partial attribute tree:

    aggr-attributes:\n- aggr-raid-attributes:\n- disk-count\n- aggr-snapshot-attributes:\n- files-total\n

    The Zapi collector will convert this tree into two "flat" metrics: aggr_raid_disk_count and aggr_snapshot_files_total. (The algorithm to generate a name for the metrics will attempt to keep it as simple as possible, but sometimes it's useful to manually set a short display name. See counters for more details.)

    "},{"location":"configure-zapi/#parameters","title":"Parameters","text":"

    The parameters and configuration are similar to those of the ZapiPerf collector. Only the differences will be discussed below.

    "},{"location":"configure-zapi/#collector-configuration-file","title":"Collector configuration file","text":"

    Parameters different from ZapiPerf:

    parameter type description default schedule required same as for ZapiPerf, but only two elements: instance and data (collector does not run a counter poll) no_max_records bool, optional don't add max-records to the ZAPI request collect_only_labels bool, optional don't look for numeric metrics, only submit labels (suppresses the ErrNoMetrics error) only_cluster_instance bool, optional don't look for instance keys and assume the only instance is the cluster itself"},{"location":"configure-zapi/#object-configuration-file","title":"Object configuration file","text":"

    The Zapi collector does not have the parameters instance_key and override parameters. The optional parameter metric_type allows you to override the default metric type (uint64). The value of this parameter should be one of the metric types supported by the matrix data-structure.

    "},{"location":"configure-zapi/#zapiperf-collector","title":"ZapiPerf Collector","text":""},{"location":"configure-zapi/#zapiperf","title":"ZapiPerf","text":"

    ZapiPerf collects performance metrics from ONTAP systems using the ZAPI protocol. The collector is designed to be easily extendable to collect new objects or to collect additional counters from already configured objects.

    This collector is an extension of the Zapi collector. The major difference between them is that ZapiPerf collects only the performance (perf) APIs. Additionally, ZapiPerf always calculates final values from the deltas of two subsequent polls.

    "},{"location":"configure-zapi/#metrics_1","title":"Metrics","text":"

    The collector collects a dynamic set of metrics. The metric values are calculated from two consecutive polls (therefore, no metrics are emitted after the first poll). The calculation algorithm depends on the property and base-counter attributes of each metric; the following properties are supported:

    property formula description raw x = x_i no post-processing, value x is submitted as it is delta x = x_i - x_(i-1) delta of two poll values, x_i and x_(i-1) rate x = (x_i - x_(i-1)) / (t_i - t_(i-1)) delta divided by the interval of the two polls in seconds average x = (x_i - x_(i-1)) / (y_i - y_(i-1)) delta divided by the delta of the base counter y percent x = 100 * (x_i - x_(i-1)) / (y_i - y_(i-1)) average multiplied by 100"},{"location":"configure-zapi/#parameters_1","title":"Parameters","text":"

    The parameters of the collector are distributed across three files:

    • Harvest configuration file (default: harvest.yml)
    • ZapiPerf configuration file (default: conf/zapiperf/default.yaml)
    • Each object has its own configuration file (located in conf/zapiperf/cdot/ and conf/zapiperf/7mode/ for cDot and 7Mode systems respectively)

    Except for addr, datacenter and auth_style, all other parameters of the ZapiPerf collector can be defined in either of these three files. Parameters defined in the lower-level file override parameters in the higher-level file. This allows the user to configure each object individually, or use the same parameters for all objects.

    The full set of parameters are described below.

    "},{"location":"configure-zapi/#harvest-configuration-file","title":"Harvest configuration file","text":"

    Parameters in the poller section should define (at least) the address and authentication method of the target system; the full list of parameters is shown in the table below.
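
    For example, here's a sketch of a poller that uses certificate_auth (the address and certificate paths are placeholders):

    Pollers:\ncluster-05:\ndatacenter: DC-01\naddr: 10.0.1.3\nauth_style: certificate_auth\nssl_cert: /opt/harvest/cert/cluster-05.pem\nssl_key: /opt/harvest/cert/cluster-05.key\ncollectors:\n- ZapiPerf\n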

    parameter type description default addr string, required address (IP or FQDN) of the ONTAP system datacenter string, required name of the datacenter where the target system is located auth_style string, optional authentication method: either basic_auth or certificate_auth basic_auth ssl_cert, ssl_key string, optional full path of the SSL certificate and key pairs (when using certificate_auth) username, password string, optional username and password used to authenticate to the target system (when using basic_auth)"},{"location":"configure-zapi/#zapiperf-configuration-file","title":"ZapiPerf configuration file","text":"

    This configuration file (the \"template\") contains a list of objects that should be collected and the filenames of their configuration (explained in the next section).

    Additionally, this file contains the parameters that are applied as defaults to all objects. (As mentioned before, any of these parameters can be defined in the Harvest or object configuration files as well).

    parameter type description default use_insecure_tls bool, optional skip verifying TLS certificate of the target system false client_timeout duration (Go-syntax) how long to wait for server responses 30s batch_size int, optional max instances per API request 500 latency_io_reqd int, optional threshold of IOPs for calculating latency metrics (latencies based on very few IOPs are unreliable) 100 schedule list, required the poll frequencies of the collector/object, should include exactly these three elements in the exact same order: - counter duration (Go-syntax) poll frequency of updating the counter metadata cache (example value: 20m) - instance duration (Go-syntax) poll frequency of updating the instance cache (example value: 10m) - data duration (Go-syntax) poll frequency of updating the data cache (example value: 1m) Note: Harvest allows defining poll intervals at the sub-second level (e.g. 1ms); however, keep in mind the following:
    • API response of an ONTAP system can take several seconds, so the collector is likely to enter a failed state if the poll interval is less than client_timeout.
    • Small poll intervals will create a significant workload on the ONTAP system, as many counters are aggregated on-demand.
    • Some metric values become less significant if they are calculated for very short intervals (e.g. latencies)

    The template should define objects in the objects section. Example:

    objects:\nSystemNode: system_node.yaml\nHostAdapter: hostadapter.yaml\n

    Note that for each object we only define the filename of the object configuration file. The object configuration files are located in subdirectories matching the ONTAP version that was used to create these files. It is possible to have multiple version-subdirectories for multiple ONTAP versions. At runtime, the collector will select the object configuration file that most closely matches the version of the target ONTAP system. (A mismatch is tolerated since ZapiPerf will fetch and validate counter metadata from the system.)

    "},{"location":"configure-zapi/#object-configuration-file_1","title":"Object configuration file","text":"

    The Object configuration file (\"subtemplate\") should contain the following parameters:

    parameter type description default name string display name of the collector that will collect this object object string short name of the object query string raw object name used to issue a ZAPI request counters list list of counters to collect (see notes below) instance_key string label to use as instance key (either name or uuid) override list of key-value pairs override counter properties that we get from ONTAP (allows circumventing ZAPI bugs) plugins list plugins and their parameters to run on the collected data export_options list parameters to pass to exporters (see notes below)"},{"location":"configure-zapi/#counters","title":"counters","text":"

    This section defines the list of counters that will be collected. These counters can be labels, numeric metrics or histograms. The exact property of each counter is fetched from ONTAP and updated periodically.

    Some counters require a \"base-counter\" for post-processing. If the base-counter is missing, ZapiPerf will still run, but the missing data won't be exported.

    The display name of a counter can be changed with => (e.g., nfsv3_ops => ops). There's one conversion Harvest does for you by default: the instance_name counter will be renamed to the value of object.

    Counters that are stored as labels will only be exported if they are included in the export_options section.

    "},{"location":"configure-zapi/#export_options","title":"export_options","text":"

    Parameters in this section tell the exporters how to handle the collected data. The set of parameters varies by exporter. For Prometheus and InfluxDB exporters, the following parameters can be defined:

    • instance_keys (list): display names of labels to export with each data-point
    • instance_labels (list): display names of labels to export as a separate data-point
    • include_all_labels (bool): export all labels with each data-point (overrides previous two parameters)
    "},{"location":"dashboards/","title":"Dashboards","text":"

    Harvest can be used to import dashboards to Grafana.

    The bin/harvest grafana utility requires the address (hostname or IP), port of the Grafana server, and a Grafana API token. The port can be omitted if Grafana is configured to redirect the URL. Use the -d flag to point to the directory that contains the dashboards.

    "},{"location":"dashboards/#grafana-api-token","title":"Grafana API token","text":"

    The utility tool asks for an API token which can be generated from the Grafana web-gui.

    Click on Configuration in the left menu bar (1), click on API Keys (2) and click on the New API Key button. Choose a Key name (3), choose Editor for role (4) and click on add (5). Copy the generated key and paste it in your terminal or add the token to the Tools section of your configuration file (see below).

    For example, let's say your Grafana server is on http://my.grafana.server:3000 and you want to import the Prometheus-based dashboards from the grafana directory. You would run this:

    $ bin/harvest grafana import --addr my.grafana.server:3000\n

    Similarly, to export:

    $ bin/harvest grafana export --addr my.grafana.server:3000 --directory /path/to/export/directory --serverfolder grafanaFolderName\n

    By default, the dashboards are connected to the Prometheus datasource defined in Grafana. If your datasource has a different name, use the --datasource flag during import/export.

    "},{"location":"dashboards/#cli","title":"CLI","text":"

    The bin/harvest grafana tool includes CLI help when passing the --help command line argument flag like so:

    bin/harvest grafana import --help\n

    The labels argument requires more explanation.

    "},{"location":"dashboards/#labels","title":"Labels","text":"

    The grafana import --labels argument goes hand-in-hand with a poller's Labels section described here. Labels are used to add additional key-value pairs to a poller's metrics.

    When you run bin/harvest grafana import, you may optionally pass a set of labels like so:

    bin/harvest grafana import --labels org --labels dept

    This will cause Harvest to do the following for each dashboard:
    1. Parse each dashboard and add a new variable for each label passed on the command line
    2. Modify each dashboard variable to use the new label variable(s) in a chained query

    Here's an example:

    bin/harvest grafana import --labels \"org,dept\"\n

    This will add the Org and Dept variables, as shown below, and modify the existing variables as shown.

    Results in

    "},{"location":"influxdb-exporter/","title":"InfluxDB Exporter","text":"InfluxDB Install

    The information below describes how to set up Harvest's InfluxDB exporter. If you need help installing or setting up InfluxDB, check out their documentation.

    "},{"location":"influxdb-exporter/#overview","title":"Overview","text":"

    The InfluxDB Exporter will format metrics into InfluxDB's line protocol and write them into a bucket. The Exporter is compatible with InfluxDB v2.0. For an explanation of bucket, org, and precision, see the InfluxDB API documentation.

    If you are monitoring both cDot and 7Mode clusters, it is strongly recommended to use two different buckets.

    "},{"location":"influxdb-exporter/#parameters","title":"Parameters","text":"

    An overview of all parameters is provided below. Only one of url or addr should be provided and at least one of them is required. If addr is specified, it should be a valid TCP address or hostname of the InfluxDB server and should not include the scheme. When using addr, the bucket, org, and token key/values are required.

    addr only works with HTTP. If you need to use HTTPS, you should use url instead.

    If url is specified, you must add all arguments to the url. Harvest will do no additional processing and use exactly what you specify (e.g., url: https://influxdb.example.com:8086/write?db=netapp&u=user&p=pass&precision=2). When using url, the bucket, org, port, and precision fields will be ignored.

    parameter type description default url string URL of the database, format: SCHEME://HOST[:PORT] addr string address of the database, format: HOST (HTTP only) port int, optional port of the database 8086 bucket string, required with addr InfluxDB bucket to write org string, required with addr InfluxDB organization name precision string, required with addr Preferred timestamp precision in seconds 2 client_timeout int, optional client timeout in seconds 5 token string token for authentication"},{"location":"influxdb-exporter/#example","title":"Example","text":"

    Snippet from harvest.yml using addr (supports HTTP only):

    Exporters:\nmy_influx:\nexporter: InfluxDB\naddr: localhost\nbucket: harvest\norg: harvest\ntoken: ZTTrt%24@#WNFM2VZTTNNT25wZWUdtUmhBZEdVUmd3dl@# 

    Snippet from harvest.yml using url (supports both HTTP and HTTPS):

    Exporters:\ninflux2:\nexporter: InfluxDB\nurl: https://localhost:8086/api/v2/write?org=harvest&bucket=harvest&precision=s\ntoken: my-token== 

    Notice: InfluxDB stores a token in ~/.influxdbv2/configs, but you can also retrieve it from the UI (usually serving on localhost:8086): click on \"Data\" on the left task bar, then on \"Tokens\".

    "},{"location":"license/","title":"License","text":"

    Harvest's License

    "},{"location":"manage-harvest/","title":"Manage Harvest Pollers","text":"

    Coming Soon

    "},{"location":"monitor-harvest/","title":"Monitor Harvest","text":""},{"location":"monitor-harvest/#harvest-metadata","title":"Harvest Metadata","text":"

    Harvest publishes metadata metrics about the key components of Harvest. Many of these metrics are used in the Harvest Metadata dashboard.

    If you want to understand more about these metrics, read on!

    Metrics are published for:

    • collectors
    • pollers
    • clusters being monitored
    • exporters

Here's a high-level summary of the metadata metrics Harvest publishes, with details below.

• metadata_collector_api_time (microseconds): amount of time to collect data from monitored cluster object
• metadata_collector_instances (scalar): number of objects collected from monitored cluster
• metadata_collector_metrics (scalar): number of counters collected from monitored cluster
• metadata_collector_parse_time (microseconds): amount of time to parse XML, JSON, etc. for cluster object
• metadata_collector_plugin_time (microseconds): amount of time for all plugins to post-process metrics
• metadata_collector_poll_time (microseconds): amount of time it took for the poll to finish
• metadata_collector_task_time (microseconds): amount of time it took for each collector's subtasks to complete
• metadata_component_count (scalar): number of metrics collected for each object
• metadata_component_status (enum): status of the collector - 0 means running, 1 means standby, 2 means failed
• metadata_exporter_count (scalar): number of metrics and labels exported
• metadata_exporter_time (microseconds): amount of time it took to render, export, and serve exported data
• metadata_target_goroutines (scalar): number of goroutines that exist within the poller
• metadata_target_status (enum): status of the system being monitored. 0 means reachable, 1 means unreachable
• metadata_collector_calc_time (microseconds): amount of time it took to compute metrics between two successive polls, specifically using properties like raw, delta, rate, average, and percent. This metric is available for ZapiPerf/RestPerf collectors.
• metadata_collector_skips (scalar): number of metrics that were not calculated between two successive polls. This metric is available for ZapiPerf/RestPerf collectors.
"},{"location":"monitor-harvest/#collector-metadata","title":"Collector Metadata","text":"

    A poller publishes the metadata metrics for each collector and exporter associated with it.

    Let's say we start a poller with the Zapi collector and the out-of-the-box default.yaml exporting metrics to Prometheus. That means you will be monitoring 22 different objects (uncommented lines in default.yaml as of 23.02).

When we start this poller, we expect it to export 23 metadata_component_status metrics: one for each of the 22 objects, plus one for the Prometheus exporter.

    The following curl confirms there are 23 metadata_component_status metrics reported.

    curl -s http://localhost:12990/metrics | grep -v \"#\" | grep metadata_component_status | wc -l\n      23\n

These metrics also tell us which collectors are in a standby or failed state. For example, filtering on components not in the running state shows the following, since this cluster doesn't have any ClusterPeers, SecurityAuditDestinations, or SnapMirrors. The reason is listed as no instances and the metric value is 1, which means standby.

    curl -s http://localhost:12990/metrics | grep -v \"#\" | grep metadata_component_status | grep -Evo \"running\"\nmetadata_component_status{name=\"Zapi\", reason=\"no instances\",target=\"ClusterPeer\",type=\"collector\",version=\"23.04.1417\"} 1\nmetadata_component_status{name=\"Zapi\", reason=\"no instances\",target=\"SecurityAuditDestination\",type=\"collector\",version=\"23.04.1417\"} 1\nmetadata_component_status{name=\"Zapi\", reason=\"no instances\",target=\"SnapMirror\",type=\"collector\",version=\"23.04.1417\"} 1\n

    The log files for the poller show a similar story. The poller starts with 22 collectors, but drops to 19 after three of the collectors go to standby because there are no instances to collect.

    2023-04-17T13:14:18-04:00 INF ./poller.go:539 > updated status, up collectors: 22 (of 22), up exporters: 1 (of 1) Poller=u2\n2023-04-17T13:14:18-04:00 INF collector/collector.go:342 > no instances, entering standby Poller=u2 collector=Zapi:SecurityAuditDestination task=data\n2023-04-17T13:14:18-04:00 INF collector/collector.go:342 > no instances, entering standby Poller=u2 collector=Zapi:ClusterPeer task=data\n2023-04-17T13:14:18-04:00 INF collector/collector.go:342 > no instances, entering standby Poller=u2 collector=Zapi:SnapMirror task=data\n2023-04-17T13:15:18-04:00 INF ./poller.go:539 > updated status, up collectors: 19 (of 22), up exporters: 1 (of 1) Poller=u2\n
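
Similarly, to check whether the monitored cluster itself is reachable, you can filter the same endpoint on metadata_target_status (a sketch that assumes your poller serves metrics on port 12990 like the examples above; 0 means reachable, 1 means unreachable):

curl -s http://localhost:12990/metrics | grep -v \"#\" | grep metadata_target_status\n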
    "},{"location":"ontap-metrics/","title":"ONTAP Metrics","text":"

    This document describes how Harvest metrics relate to their relevant ONTAP ZAPI and REST mappings, including:

    • Details about which Harvest metrics each dashboard uses. These can be generated on demand by running bin/harvest grafana metrics, as shown after this list. See #1577 for details.

    • More information about ONTAP REST performance counters can be found here.
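
The dashboard-to-metrics mapping mentioned in the first bullet can be regenerated from your Harvest install directory like so:

bin/harvest grafana metrics\n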

    Creation Date: 2023-Nov-03\nONTAP Version: 9.13.1\n
    "},{"location":"ontap-metrics/#understanding-the-structure","title":"Understanding the structure","text":"

    Below is an annotated example of how to interpret the structure of each of the metrics.

    disk_io_queued: the name of the metric exported by Harvest

    Number of I/Os queued to the disk but not yet issued: the description of the ONTAP metric

    • API: one of REST or ZAPI, depending on which collector is used to collect the metric
    • Endpoint: name of the REST or ZAPI API used to collect this metric
    • Metric: name of the ONTAP metric
    • Template: path of the template that collects the metric

    Performance-related metrics also include:

    • Unit: the unit of the metric
    • Type: describes how to calculate a cooked metric from two consecutive ONTAP raw metrics
    • Base: some counters require a base counter for post-processing. When required, this property lists the base counter
    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#metrics","title":"Metrics","text":""},{"location":"ontap-metrics/#aggr_disk_busy","title":"aggr_disk_busy","text":"

    The utilization percent of the disk. aggr_disk_busy is disk_busy aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_capacity","title":"aggr_disk_capacity","text":"

    Disk capacity in MB. aggr_disk_capacity is disk_capacity aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_cp_read_chain","title":"aggr_disk_cp_read_chain","text":"

    Average number of blocks transferred in each consistency point read operation during a CP. aggr_disk_cp_read_chain is disk_cp_read_chain aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_cp_read_latency","title":"aggr_disk_cp_read_latency","text":"

    Average latency per block in microseconds for consistency point read operations. aggr_disk_cp_read_latency is disk_cp_read_latency aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_cp_reads","title":"aggr_disk_cp_reads","text":"

    Number of disk read operations initiated each second for consistency point processing. aggr_disk_cp_reads is disk_cp_reads aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_io_pending","title":"aggr_disk_io_pending","text":"

    Average number of I/Os issued to the disk for which we have not yet received the response. aggr_disk_io_pending is disk_io_pending aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_io_queued","title":"aggr_disk_io_queued","text":"

    Number of I/Os queued to the disk but not yet issued. aggr_disk_io_queued is disk_io_queued aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_busy","title":"aggr_disk_max_busy","text":"

    The utilization percent of the disk. aggr_disk_max_busy is the maximum of disk_busy for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_capacity","title":"aggr_disk_max_capacity","text":"

    Disk capacity in MB. aggr_disk_max_capacity is the maximum of disk_capacity for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_cp_read_chain","title":"aggr_disk_max_cp_read_chain","text":"

    Average number of blocks transferred in each consistency point read operation during a CP. aggr_disk_max_cp_read_chain is the maximum of disk_cp_read_chain for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_cp_read_latency","title":"aggr_disk_max_cp_read_latency","text":"

    Average latency per block in microseconds for consistency point read operations. aggr_disk_max_cp_read_latency is the maximum of disk_cp_read_latency for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_cp_reads","title":"aggr_disk_max_cp_reads","text":"

    Number of disk read operations initiated each second for consistency point processing. aggr_disk_max_cp_reads is the maximum of disk_cp_reads for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_disk_busy","title":"aggr_disk_max_disk_busy","text":"

    The utilization percent of the disk. aggr_disk_max_disk_busy is the maximum of disk_busy for label aggr.

    API Endpoint Metric Template ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_disk_capacity","title":"aggr_disk_max_disk_capacity","text":"

    Disk capacity in MB. aggr_disk_max_disk_capacity is the maximum of disk_capacity for label aggr.

    API Endpoint Metric Template ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_io_pending","title":"aggr_disk_max_io_pending","text":"

    Average number of I/Os issued to the disk for which we have not yet received the response. aggr_disk_max_io_pending is the maximum of disk_io_pending for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_io_queued","title":"aggr_disk_max_io_queued","text":"

    Number of I/Os queued to the disk but not yet issued. aggr_disk_max_io_queued is the maximum of disk_io_queued for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_total_data","title":"aggr_disk_max_total_data","text":"

    Total throughput for user operations per second. aggr_disk_max_total_data is the maximum of disk_total_data for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_total_transfers","title":"aggr_disk_max_total_transfers","text":"

    Total number of disk operations involving data transfer initiated per second. aggr_disk_max_total_transfers is the maximum of disk_total_transfers for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_read_blocks","title":"aggr_disk_max_user_read_blocks","text":"

    Number of blocks transferred for user read operations per second. aggr_disk_max_user_read_blocks is the maximum of disk_user_read_blocks for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_read_chain","title":"aggr_disk_max_user_read_chain","text":"

    Average number of blocks transferred in each user read operation. aggr_disk_max_user_read_chain is the maximum of disk_user_read_chain for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_read_latency","title":"aggr_disk_max_user_read_latency","text":"

    Average latency per block in microseconds for user read operations. aggr_disk_max_user_read_latency is the maximum of disk_user_read_latency for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_reads","title":"aggr_disk_max_user_reads","text":"

    Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. aggr_disk_max_user_reads is the maximum of disk_user_reads for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_write_blocks","title":"aggr_disk_max_user_write_blocks","text":"

    Number of blocks transferred for user write operations per second. aggr_disk_max_user_write_blocks is the maximum of disk_user_write_blocks for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_write_chain","title":"aggr_disk_max_user_write_chain","text":"

    Average number of blocks transferred in each user write operation. aggr_disk_max_user_write_chain is the maximum of disk_user_write_chain for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_write_latency","title":"aggr_disk_max_user_write_latency","text":"

    Average latency per block in microseconds for user write operations. aggr_disk_max_user_write_latency is the maximum of disk_user_write_latency for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_max_user_writes","title":"aggr_disk_max_user_writes","text":"

    Number of disk write operations initiated each second for storing data or metadata associated with user requests. aggr_disk_max_user_writes is the maximum of disk_user_writes for label aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_total_data","title":"aggr_disk_total_data","text":"

    Total throughput for user operations per second. aggr_disk_total_data is disk_total_data aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_total_transfers","title":"aggr_disk_total_transfers","text":"

    Total number of disk operations involving data transfer initiated per second. aggr_disk_total_transfers is disk_total_transfers aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_read_blocks","title":"aggr_disk_user_read_blocks","text":"

    Number of blocks transferred for user read operations per second. aggr_disk_user_read_blocks is disk_user_read_blocks aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_read_chain","title":"aggr_disk_user_read_chain","text":"

    Average number of blocks transferred in each user read operation. aggr_disk_user_read_chain is disk_user_read_chain aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_read_latency","title":"aggr_disk_user_read_latency","text":"

    Average latency per block in microseconds for user read operations. aggr_disk_user_read_latency is disk_user_read_latency aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_reads","title":"aggr_disk_user_reads","text":"

    Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. aggr_disk_user_reads is disk_user_reads aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_write_blocks","title":"aggr_disk_user_write_blocks","text":"

    Number of blocks transferred for user write operations per second. aggr_disk_user_write_blocks is disk_user_write_blocks aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_write_chain","title":"aggr_disk_user_write_chain","text":"

    Average number of blocks transferred in each user write operation. aggr_disk_user_write_chain is disk_user_write_chain aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_write_latency","title":"aggr_disk_user_write_latency","text":"

    Average latency per block in microseconds for user write operations. aggr_disk_user_write_latency is disk_user_write_latency aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_disk_user_writes","title":"aggr_disk_user_writes","text":"

    Number of disk write operations initiated each second for storing data or metadata associated with user requests. aggr_disk_user_writes is disk_user_writes aggregated by aggr.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_efficiency_savings","title":"aggr_efficiency_savings","text":"

    Space saved by storage efficiencies (logical_used - used)

    API Endpoint Metric Template REST api/storage/aggregates space.efficiency.savings conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_efficiency_savings_wo_snapshots","title":"aggr_efficiency_savings_wo_snapshots","text":"

    Space saved by storage efficiencies (logical_used - used)

    API Endpoint Metric Template REST api/storage/aggregates space.efficiency_without_snapshots.savings conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_efficiency_savings_wo_snapshots_flexclones","title":"aggr_efficiency_savings_wo_snapshots_flexclones","text":"

    Space saved by storage efficiencies (logical_used - used)

    API Endpoint Metric Template REST api/storage/aggregates space.efficiency_without_snapshots_flexclones.savings conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_hybrid_cache_size_total","title":"aggr_hybrid_cache_size_total","text":"

    Total usable space in bytes of SSD cache. Only provided when hybrid_cache.enabled is 'true'.

    API Endpoint Metric Template REST api/storage/aggregates block_storage.hybrid_cache.size conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.hybrid-cache-size-total conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_hybrid_disk_count","title":"aggr_hybrid_disk_count","text":"

    Number of disks used in the cache tier of the aggregate. Only provided when hybrid_cache.enabled is 'true'.

    API Endpoint Metric Template REST api/storage/aggregates block_storage.hybrid_cache.disk_count conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_files_private_used","title":"aggr_inode_files_private_used","text":"

    Number of system metadata files used. If the referenced file system is restricted or offline, a value of 0 is returned. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.

    API Endpoint Metric Template REST api/storage/aggregates inode_attributes.files_private_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.files-private-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_files_total","title":"aggr_inode_files_total","text":"

    Maximum number of user-visible files that this referenced file system can currently hold. If the referenced file system is restricted or offline, a value of 0 is returned.

    API Endpoint Metric Template REST api/storage/aggregates inode_attributes.files_total conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.files-total conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_files_used","title":"aggr_inode_files_used","text":"

    Number of user-visible files used in the referenced file system. If the referenced file system is restricted or offline, a value of 0 is returned.

    API Endpoint Metric Template REST api/storage/aggregates inode_attributes.files_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.files-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_inodefile_private_capacity","title":"aggr_inode_inodefile_private_capacity","text":"

    Number of files that can currently be stored on disk for system metadata files. This number will dynamically increase as more system files are created. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.

    API Endpoint Metric Template REST api/storage/aggregates inode_attributes.file_private_capacity conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.inodefile-private-capacity conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_inodefile_public_capacity","title":"aggr_inode_inodefile_public_capacity","text":"

    Number of files that can currently be stored on disk for user-visible files. This number will dynamically increase as more user-visible files are created. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.

    API Endpoint Metric Template REST api/storage/aggregates inode_attributes.file_public_capacity conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.inodefile-public-capacity conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_maxfiles_available","title":"aggr_inode_maxfiles_available","text":"

    The count of the maximum number of user-visible files currently allowable on the referenced file system.

    API Endpoint Metric Template REST api/storage/aggregates inode_attributes.max_files_available conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.maxfiles-available conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_maxfiles_possible","title":"aggr_inode_maxfiles_possible","text":"

    The largest value to which the maxfiles-available parameter can be increased by reconfiguration, on the referenced file system.

    API Endpoint Metric Template REST api/storage/aggregates inode_attributes.max_files_possible conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.maxfiles-possible conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_maxfiles_used","title":"aggr_inode_maxfiles_used","text":"

    The number of user-visible files currently in use on the referenced file system.

    API Endpoint Metric Template REST api/storage/aggregates inode_attributes.max_files_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.maxfiles-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_inode_used_percent","title":"aggr_inode_used_percent","text":"

    The percentage of disk space currently in use based on user-visible file count on the referenced file system.

    API Endpoint Metric Template REST api/storage/aggregates inode_attributes.used_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-inode-attributes.percent-inode-used-capacity conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_logical_used_wo_snapshots","title":"aggr_logical_used_wo_snapshots","text":"

    Logical used

    API Endpoint Metric Template REST api/storage/aggregates space.efficiency_without_snapshots.logical_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-efficiency-get-iter aggr-efficiency-info.aggr-efficiency-cumulative-info.total-data-reduction-logical-used-wo-snapshots conf/zapi/cdot/9.9.0/aggr_efficiency.yaml"},{"location":"ontap-metrics/#aggr_logical_used_wo_snapshots_flexclones","title":"aggr_logical_used_wo_snapshots_flexclones","text":"

    Logical used

    API Endpoint Metric Template REST api/storage/aggregates space.efficiency_without_snapshots_flexclones.logical_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-efficiency-get-iter aggr-efficiency-info.aggr-efficiency-cumulative-info.total-data-reduction-logical-used-wo-snapshots-flexclones conf/zapi/cdot/9.9.0/aggr_efficiency.yaml"},{"location":"ontap-metrics/#aggr_physical_used_wo_snapshots","title":"aggr_physical_used_wo_snapshots","text":"

    Total Data Reduction Physical Used Without Snapshots

    API Endpoint Metric Template REST api/storage/aggregates space.efficiency_without_snapshots.logical_used, space.efficiency_without_snapshots.savings conf/rest/9.12.0/aggr.yaml ZAPI aggr-efficiency-get-iter aggr-efficiency-info.aggr-efficiency-cumulative-info.total-data-reduction-physical-used-wo-snapshots conf/zapi/cdot/9.9.0/aggr_efficiency.yaml"},{"location":"ontap-metrics/#aggr_physical_used_wo_snapshots_flexclones","title":"aggr_physical_used_wo_snapshots_flexclones","text":"

    Total Data Reduction Physical Used without snapshots and flexclones

    API Endpoint Metric Template REST api/storage/aggregates space.efficiency_without_snapshots_flexclones.logical_used, space.efficiency_without_snapshots_flexclones.savings conf/rest/9.12.0/aggr.yaml ZAPI aggr-efficiency-get-iter aggr-efficiency-info.aggr-efficiency-cumulative-info.total-data-reduction-physical-used-wo-snapshots-flexclones conf/zapi/cdot/9.9.0/aggr_efficiency.yaml"},{"location":"ontap-metrics/#aggr_power","title":"aggr_power","text":"

    Power consumed by aggregate in Watts.

    API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#aggr_primary_disk_count","title":"aggr_primary_disk_count","text":"

    Number of disks used in the aggregate. This includes parity disks, but excludes disks in the hybrid cache.

    API Endpoint Metric Template REST api/storage/aggregates block_storage.primary.disk_count conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_raid_disk_count","title":"aggr_raid_disk_count","text":"

    Number of disks in the aggregate.

    API Endpoint Metric Template REST api/storage/aggregates block_storage.primary.disk_count, block_storage.hybrid_cache.disk_count conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-raid-attributes.disk-count conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_raid_plex_count","title":"aggr_raid_plex_count","text":"

    Number of plexes in the aggregate

    API Endpoint Metric Template REST api/storage/aggregates block_storage.plexes.# conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-raid-attributes.plex-count conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_raid_size","title":"aggr_raid_size","text":"

    Option to specify the maximum number of disks that can be included in a RAID group.

    API Endpoint Metric Template REST api/storage/aggregates block_storage.primary.raid_size conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-raid-attributes.raid-size conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_files_total","title":"aggr_snapshot_files_total","text":"

    Total files allowed in Snapshot copies

    API Endpoint Metric Template REST api/storage/aggregates snapshot.files_total conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.files-total conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_files_used","title":"aggr_snapshot_files_used","text":"

    Total files created in Snapshot copies

    API Endpoint Metric Template REST api/storage/aggregates snapshot.files_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.files-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_inode_used_percent","title":"aggr_snapshot_inode_used_percent","text":"

    The percentage of disk space currently in use based on user-visible file (inode) count on the referenced file system.

    API Endpoint Metric Template ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.percent-inode-used-capacity conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_maxfiles_available","title":"aggr_snapshot_maxfiles_available","text":"

    Maximum files available for Snapshot copies

    API Endpoint Metric Template REST api/storage/aggregates snapshot.max_files_available conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.maxfiles-available conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_maxfiles_possible","title":"aggr_snapshot_maxfiles_possible","text":"

    The largest value to which the maxfiles-available parameter can be increased by reconfiguration, on the referenced file system.

    API Endpoint Metric Template REST api/storage/aggregates snapshot.max_files_available, snapshot.max_files_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.maxfiles-possible conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_maxfiles_used","title":"aggr_snapshot_maxfiles_used","text":"

    Files in use by Snapshot copies

    API Endpoint Metric Template REST api/storage/aggregates snapshot.max_files_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.maxfiles-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_reserve_percent","title":"aggr_snapshot_reserve_percent","text":"

    Percentage of space reserved for Snapshot copies

    API Endpoint Metric Template REST api/storage/aggregates space.snapshot.reserve_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.snapshot-reserve-percent conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_size_available","title":"aggr_snapshot_size_available","text":"

    Available space for Snapshot copies in bytes

    API Endpoint Metric Template REST api/storage/aggregates space.snapshot.available conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.size-available conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_size_total","title":"aggr_snapshot_size_total","text":"

    Total space for Snapshot copies in bytes

    API Endpoint Metric Template REST api/storage/aggregates space.snapshot.total conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.size-total conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_size_used","title":"aggr_snapshot_size_used","text":"

    Space used by Snapshot copies in bytes

    API Endpoint Metric Template REST api/storage/aggregates space.snapshot.used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.size-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_snapshot_used_percent","title":"aggr_snapshot_used_percent","text":"

    Percentage of disk space used by Snapshot copies

    API Endpoint Metric Template REST api/storage/aggregates space.snapshot.used_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-snapshot-attributes.percent-used-capacity conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_available","title":"aggr_space_available","text":"

    Space available in bytes.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.available conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.size-available conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_capacity_tier_used","title":"aggr_space_capacity_tier_used","text":"

    Used space in bytes in the cloud store. Only applicable for aggregates with a cloud store tier.

    API Endpoint Metric Template REST api/storage/aggregates space.cloud_storage.used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.capacity-tier-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_data_compacted_count","title":"aggr_space_data_compacted_count","text":"

    Amount of compacted data in bytes.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.data_compacted_count conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.data-compacted-count conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_data_compaction_saved","title":"aggr_space_data_compaction_saved","text":"

    Space saved in bytes by compacting the data.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.data_compaction_space_saved conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.data-compaction-space-saved conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_data_compaction_saved_percent","title":"aggr_space_data_compaction_saved_percent","text":"

    Percentage saved by compacting the data.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.data_compaction_space_saved_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.data-compaction-space-saved-percent conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_performance_tier_inactive_user_data","title":"aggr_space_performance_tier_inactive_user_data","text":"

    The size that is physically used in the block storage and has a cold temperature, in bytes. This property is only supported if the aggregate is either attached to a cloud store or can be attached to a cloud store. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either block_storage.inactive_user_data or **.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.inactive_user_data conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.performance-tier-inactive-user-data conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_performance_tier_inactive_user_data_percent","title":"aggr_space_performance_tier_inactive_user_data_percent","text":"

    The percentage of inactive user data in the block storage. This property is only supported if the aggregate is either attached to a cloud store or can be attached to a cloud store. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either block_storage.inactive_user_data_percent or **.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.inactive_user_data_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.performance-tier-inactive-user-data-percent conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_performance_tier_used","title":"aggr_space_performance_tier_used","text":"

    A summation of volume footprints (including volume guarantees), in bytes. This includes all of the volume footprints in the block_storage tier and the cloud_storage tier. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.

    API Endpoint Metric Template REST api/storage/aggregates space.footprint conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_performance_tier_used_percent","title":"aggr_space_performance_tier_used_percent","text":"API Endpoint Metric Template REST api/storage/aggregates space.footprint_percent conf/rest/9.12.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_physical_used","title":"aggr_space_physical_used","text":"

    Total physical used size of an aggregate in bytes.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.physical_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.physical-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_physical_used_percent","title":"aggr_space_physical_used_percent","text":"

    Physical used percentage.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.physical_used_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.physical-used-percent conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_reserved","title":"aggr_space_reserved","text":"

    The total disk space in bytes that is reserved on the referenced file system. The reserved space is already counted in the used space, so this element can be used to see what portion of the used space represents space reserved for future use.

    API Endpoint Metric Template ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.total-reserved-space conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_sis_saved","title":"aggr_space_sis_saved","text":"

    Amount of space saved in bytes by storage efficiency.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.volume_deduplication_space_saved conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.sis-space-saved conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_sis_saved_percent","title":"aggr_space_sis_saved_percent","text":"

    Percentage of space saved by storage efficiency.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.volume_deduplication_space_saved_percent conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.sis-space-saved-percent conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_sis_shared_count","title":"aggr_space_sis_shared_count","text":"

    Amount of shared bytes counted by storage efficiency.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.volume_deduplication_shared_count conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.sis-shared-count conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_total","title":"aggr_space_total","text":"

    Total usable space in bytes, not including WAFL reserve and aggregate Snapshot copy reserve.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.size conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.size-total conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_used","title":"aggr_space_used","text":"

    Space used or reserved in bytes. Includes volume guarantees and aggregate metadata.

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.used conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.size-used conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_space_used_percent","title":"aggr_space_used_percent","text":"

    The percentage of disk space currently in use on the referenced file system

    API Endpoint Metric Template REST api/storage/aggregates space.block_storage.used, space.block_storage.size conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-space-attributes.percent-used-capacity conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#aggr_total_logical_used","title":"aggr_total_logical_used","text":"

    Logical used

    API Endpoint Metric Template REST api/storage/aggregates space.efficiency.logical_used conf/rest/9.12.0/aggr.yaml ZAPI aggr-efficiency-get-iter aggr-efficiency-info.aggr-efficiency-cumulative-info.total-logical-used conf/zapi/cdot/9.9.0/aggr_efficiency.yaml"},{"location":"ontap-metrics/#aggr_total_physical_used","title":"aggr_total_physical_used","text":"

    Total Physical Used
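
    Since aggr_efficiency_savings above is defined as logical_used - used, this value presumably works out to logical_used minus savings. As a purely illustrative example with made-up numbers, an aggregate reporting space.efficiency.logical_used of 500 GiB and space.efficiency.savings of 300 GiB would yield aggr_total_physical_used of 200 GiB.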

    API Endpoint Metric Template REST api/storage/aggregates space.efficiency.logical_used, space.efficiency.savings conf/rest/9.12.0/aggr.yaml ZAPI aggr-efficiency-get-iter aggr-efficiency-info.aggr-efficiency-cumulative-info.total-physical-used conf/zapi/cdot/9.9.0/aggr_efficiency.yaml"},{"location":"ontap-metrics/#aggr_volume_count_flexvol","title":"aggr_volume_count_flexvol","text":"

    Number of flexvol volumes in the aggregate.

    API Endpoint Metric Template REST api/storage/aggregates volume_count conf/rest/9.12.0/aggr.yaml ZAPI aggr-get-iter aggr-attributes.aggr-volume-count-attributes.flexvol-count conf/zapi/cdot/9.8.0/aggr.yaml"},{"location":"ontap-metrics/#cifs_session_connection_count","title":"cifs_session_connection_count","text":"

    A counter used to track requests that are sent to the volumes to the node.

    API Endpoint Metric Template REST api/protocols/cifs/sessions connection_count conf/rest/9.8.0/cifs_session.yaml ZAPI cifs-session-get-iter cifs-session.connection-count conf/zapi/cdot/9.8.0/cifs_session.yaml"},{"location":"ontap-metrics/#cloud_target_used","title":"cloud_target_used","text":"

    The amount of cloud space used by all the aggregates attached to the target, in bytes. This field is only populated for FabricPool targets. The value is recalculated once every 5 minutes.

    API Endpoint Metric Template REST api/cloud/targets used conf/rest/9.12.0/cloud_target.yaml ZAPI aggr-object-store-config-get-iter aggr-object-store-config-info.used-space conf/zapi/cdot/9.10.0/aggr_object_store_config.yaml"},{"location":"ontap-metrics/#cluster_new_status","title":"cluster_new_status","text":"

    It is an indicator of the overall health status of the cluster, with a value of 1 indicating a healthy status and a value of 0 indicating an unhealthy status.

    API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/status.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/status.yaml"},{"location":"ontap-metrics/#cluster_subsystem_outstanding_alerts","title":"cluster_subsystem_outstanding_alerts","text":"

    Number of outstanding alerts

    API Endpoint Metric Template REST api/private/cli/system/health/subsystem outstanding_alert_count conf/rest/9.12.0/subsystem.yaml ZAPI diagnosis-subsystem-config-get-iter diagnosis-subsystem-config-info.outstanding-alert-count conf/zapi/cdot/9.8.0/subsystem.yaml"},{"location":"ontap-metrics/#cluster_subsystem_suppressed_alerts","title":"cluster_subsystem_suppressed_alerts","text":"

    Number of suppressed alerts

    API Endpoint Metric Template REST api/private/cli/system/health/subsystem suppressed_alert_count conf/rest/9.12.0/subsystem.yaml ZAPI diagnosis-subsystem-config-get-iter diagnosis-subsystem-config-info.suppressed-alert-count conf/zapi/cdot/9.8.0/subsystem.yaml"},{"location":"ontap-metrics/#copy_manager_bce_copy_count_curr","title":"copy_manager_bce_copy_count_curr","text":"

    Current number of copy requests being processed by the Block Copy Engine.

    API Endpoint Metric Template REST api/cluster/counter/tables/copy_manager block_copy_engine_current_copy_countUnit: noneType: deltaBase: conf/restperf/9.12.0/copy_manager.yaml ZAPI perf-object-get-instances copy_manager bce_copy_count_currUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/copy_manager.yaml"},{"location":"ontap-metrics/#copy_manager_kb_copied","title":"copy_manager_kb_copied","text":"

    Sum of kilo-bytes copied.

    API Endpoint Metric Template REST api/cluster/counter/tables/copy_manager KB_copiedUnit: noneType: deltaBase: conf/restperf/9.12.0/copy_manager.yaml ZAPI perf-object-get-instances copy_manager KB_copiedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/copy_manager.yaml"},{"location":"ontap-metrics/#copy_manager_ocs_copy_count_curr","title":"copy_manager_ocs_copy_count_curr","text":"

    Current number of copy requests being processed by the ONTAP copy subsystem.

    API Endpoint Metric Template REST api/cluster/counter/tables/copy_manager ontap_copy_subsystem_current_copy_countUnit: noneType: deltaBase: conf/restperf/9.12.0/copy_manager.yaml ZAPI perf-object-get-instances copy_manager ocs_copy_count_currUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/copy_manager.yaml"},{"location":"ontap-metrics/#copy_manager_sce_copy_count_curr","title":"copy_manager_sce_copy_count_curr","text":"

    Current number of copy requests being processed by the System Continuous Engineering.

    API Endpoint Metric Template REST api/cluster/counter/tables/copy_manager system_continuous_engineering_current_copy_countUnit: noneType: deltaBase: conf/restperf/9.12.0/copy_manager.yaml ZAPI perf-object-get-instances copy_manager sce_copy_count_currUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/copy_manager.yaml"},{"location":"ontap-metrics/#copy_manager_spince_copy_count_curr","title":"copy_manager_spince_copy_count_curr","text":"

    Current number of copy requests being processed by the SpinCE.

    API Endpoint Metric Template REST api/cluster/counter/tables/copy_manager spince_current_copy_countUnit: noneType: deltaBase: conf/restperf/9.12.0/copy_manager.yaml ZAPI perf-object-get-instances copy_manager spince_copy_count_currUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/copy_manager.yaml"},{"location":"ontap-metrics/#disk_busy","title":"disk_busy","text":"

    The utilization percent of the disk

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_bytes_per_sector","title":"disk_bytes_per_sector","text":"

    Bytes per sector.

    API Endpoint Metric Template REST api/storage/disks bytes_per_sector conf/rest/9.12.0/disk.yaml ZAPI storage-disk-get-iter storage-disk-info.disk-inventory-info.bytes-per-sector conf/zapi/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_capacity","title":"disk_capacity","text":"

    Disk capacity in MB

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_cp_read_chain","title":"disk_cp_read_chain","text":"

    Average number of blocks transferred in each consistency point read operation during a CP

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_cp_read_latency","title":"disk_cp_read_latency","text":"

    Average latency per block in microseconds for consistency point read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_cp_reads","title":"disk_cp_reads","text":"

    Number of disk read operations initiated each second for consistency point processing

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_io_pending","title":"disk_io_pending","text":"

    Average number of I/Os issued to the disk for which we have not yet received the response

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_io_queued","title":"disk_io_queued","text":"

    Number of I/Os queued to the disk but not yet issued

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_power_on_hours","title":"disk_power_on_hours","text":"

    Hours powered on.

    API Endpoint Metric Template REST api/storage/disks stats.power_on_hours conf/rest/9.12.0/disk.yaml"},{"location":"ontap-metrics/#disk_sectors","title":"disk_sectors","text":"

    Number of sectors on the disk.

    API Endpoint Metric Template REST api/storage/disks sector_count conf/rest/9.12.0/disk.yaml ZAPI storage-disk-get-iter storage-disk-info.disk-inventory-info.capacity-sectors conf/zapi/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_stats_average_latency","title":"disk_stats_average_latency","text":"

    Average I/O latency across all active paths, in milliseconds.

    API Endpoint Metric Template REST api/storage/disks stats.average_latency conf/rest/9.12.0/disk.yaml ZAPI storage-disk-get-iter storage-disk-info.disk-stats-info.average-latency conf/zapi/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_stats_io_kbps","title":"disk_stats_io_kbps","text":"

    Total Disk Throughput in KBPS Across All Active Paths

    API Endpoint Metric Template ZAPI storage-disk-get-iter storage-disk-info.disk-stats-info.disk-io-kbps conf/zapi/cdot/9.8.0/disk.yaml REST api/private/cli/disk disk_io_kbps_total conf/rest/9.12.0/disk.yaml"},{"location":"ontap-metrics/#disk_stats_sectors_read","title":"disk_stats_sectors_read","text":"

    Number of Sectors Read

    API Endpoint Metric Template ZAPI storage-disk-get-iter storage-disk-info.disk-stats-info.sectors-read conf/zapi/cdot/9.8.0/disk.yaml REST api/private/cli/disk sectors_read conf/rest/9.12.0/disk.yaml"},{"location":"ontap-metrics/#disk_stats_sectors_written","title":"disk_stats_sectors_written","text":"

    Number of Sectors Written

    API Endpoint Metric Template ZAPI storage-disk-get-iter storage-disk-info.disk-stats-info.sectors-written conf/zapi/cdot/9.8.0/disk.yaml REST api/private/cli/disk sectors_written conf/rest/9.12.0/disk.yaml"},{"location":"ontap-metrics/#disk_total_data","title":"disk_total_data","text":"

    Total throughput for user operations per second

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_total_transfers","title":"disk_total_transfers","text":"

    Total number of disk operations involving data transfer initiated per second

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_uptime","title":"disk_uptime","text":"

    Number of seconds the drive has been powered on

    API Endpoint Metric Template REST api/storage/disks stats.power_on_hours, 60, 60 conf/rest/9.12.0/disk.yaml ZAPI storage-disk-get-iter storage-disk-info.disk-stats-info.power-on-time-interval conf/zapi/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_usable_size","title":"disk_usable_size","text":"

    Usable size of each disk, in bytes.

    API Endpoint Metric Template REST api/storage/disks usable_size conf/rest/9.12.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_read_blocks","title":"disk_user_read_blocks","text":"

    Number of blocks transferred for user read operations per second

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_read_chain","title":"disk_user_read_chain","text":"

    Average number of blocks transferred in each user read operation

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_read_latency","title":"disk_user_read_latency","text":"

    Average latency per block in microseconds for user read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_reads","title":"disk_user_reads","text":"

    Number of disk read operations initiated each second for retrieving data or metadata associated with user requests

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_write_blocks","title":"disk_user_write_blocks","text":"

    Number of blocks transferred for user write operations per second

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_write_chain","title":"disk_user_write_chain","text":"

    Average number of blocks transferred in each user write operation

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_write_latency","title":"disk_user_write_latency","text":"

    Average latency per block in microseconds for user write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#disk_user_writes","title":"disk_user_writes","text":"

    Number of disk write operations initiated each second for storing data or metadata associated with user requests

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#environment_sensor_average_ambient_temperature","title":"environment_sensor_average_ambient_temperature","text":"

    Average temperature of all ambient sensors for node in Celsius.

    API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_average_fan_speed","title":"environment_sensor_average_fan_speed","text":"

    Average fan speed for node in rpm.

    API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_average_temperature","title":"environment_sensor_average_temperature","text":"

    Average temperature of all non-ambient sensors for node in Celsius.

    API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_max_fan_speed","title":"environment_sensor_max_fan_speed","text":"

    Maximum fan speed for node in rpm.

    API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_max_temperature","title":"environment_sensor_max_temperature","text":"

    Maximum temperature of all non-ambient sensors for node in Celsius.

    API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_min_ambient_temperature","title":"environment_sensor_min_ambient_temperature","text":"

    Minimum temperature of all ambient sensors for node in Celsius.

    API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_min_fan_speed","title":"environment_sensor_min_fan_speed","text":"

    Minimum fan speed for node in rpm.

    API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_min_temperature","title":"environment_sensor_min_temperature","text":"

    Minimum temperature of all non-ambient sensors for node in Celsius.

    API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_power","title":"environment_sensor_power","text":"

    Power consumed by a node in Watts.
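
    As a usage sketch, per-cluster power draw can be derived by summing the per-node values in Prometheus; the Prometheus address and the cluster label name are assumptions.

    # Minimal sketch: total power draw per cluster (Watts) from Prometheus.
    curl -sG 'http://localhost:9090/api/v1/query' \
      --data-urlencode 'query=sum by (cluster) (environment_sensor_power)'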

    API Endpoint Metric Template REST NA Harvest generated conf/rest/9.12.0/sensor.yaml ZAPI NA Harvest generated conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#environment_sensor_threshold_value","title":"environment_sensor_threshold_value","text":"

    Provides the sensor reading.

    API Endpoint Metric Template REST api/cluster/sensors value conf/rest/9.12.0/sensor.yaml ZAPI environment-sensors-get-iter environment-sensors-info.threshold-sensor-value conf/zapi/cdot/9.8.0/sensor.yaml"},{"location":"ontap-metrics/#external_service_op_num_not_found_responses","title":"external_service_op_num_not_found_responses","text":"

    Number of 'Not Found' responses for calls to this operation.

    API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op num_not_found_responsesUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_num_request_failures","title":"external_service_op_num_request_failures","text":"

    A cumulative count of all request failures.

    API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op num_request_failuresUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_num_requests_sent","title":"external_service_op_num_requests_sent","text":"

    Number of requests sent to this service.

    API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op num_requests_sentUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_num_responses_received","title":"external_service_op_num_responses_received","text":"

    Number of responses received from the server (does not include timeouts).

    API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op num_responses_receivedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_num_successful_responses","title":"external_service_op_num_successful_responses","text":"

    Number of successful responses to this operation.

    API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op num_successful_responsesUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_num_timeouts","title":"external_service_op_num_timeouts","text":"

    Number of times requests to the server for this operation timed out, meaning no response was received in a given time period.

    API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op num_timeoutsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_request_latency","title":"external_service_op_request_latency","text":"

    Average latency of requests for operations of this type on this server.

    API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op request_latencyUnit: microsecType: averageBase: num_requests_sent conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#external_service_op_request_latency_hist","title":"external_service_op_request_latency_hist","text":"

    This histogram holds the latency values for requests of this operation to the specified server.

    API Endpoint Metric Template ZAPI perf-object-get-instances external_service_op request_latency_histUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/external_service_operation.yaml"},{"location":"ontap-metrics/#fabricpool_average_latency","title":"fabricpool_average_latency","text":"

    This counter is deprecated. Average latencies executed during various phases of command execution. The execution-start latency represents the average time taken to start executing an operation. The request-prepare latency represents the average time taken to prepare the complete request that needs to be sent to the server. The send latency represents the average time taken to send requests to the server. The execution-start-to-send-complete latency represents the average time taken to send an operation out since its execution started. The execution-start-to-first-byte-received latency represents the average time taken to receive the first byte of a response since the command's request execution started. These counters can be used to identify performance bottlenecks within the object store client module.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_client_op average_latencyUnit: Type: Base: conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml"},{"location":"ontap-metrics/#fabricpool_cloud_bin_op_latency_average","title":"fabricpool_cloud_bin_op_latency_average","text":"

    Cloud bin operation latency average in milliseconds.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_comp_aggr_vol_bin cloud_bin_op_latency_averageUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_comp_aggr_vol_bin.yaml ZAPI perf-object-get-instances wafl_comp_aggr_vol_bin cloud_bin_op_latency_averageUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/wafl_comp_aggr_vol_bin.yaml"},{"location":"ontap-metrics/#fabricpool_cloud_bin_operation","title":"fabricpool_cloud_bin_operation","text":"

    Cloud bin operation counters.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_comp_aggr_vol_bin cloud_bin_opUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl_comp_aggr_vol_bin.yaml ZAPI perf-object-get-instances wafl_comp_aggr_vol_bin cloud_bin_operationUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl_comp_aggr_vol_bin.yaml"},{"location":"ontap-metrics/#fabricpool_get_throughput_bytes","title":"fabricpool_get_throughput_bytes","text":"

    This counter is deprecated. Counter that indicates the throughput for the GET command in bytes per second.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_client_op get_throughput_bytesUnit: Type: Base: conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml"},{"location":"ontap-metrics/#fabricpool_put_throughput_bytes","title":"fabricpool_put_throughput_bytes","text":"

    This counter is deprecated. Counter that indicates the throughput for the PUT command in bytes per second.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_client_op put_throughput_bytesUnit: Type: Base: conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml"},{"location":"ontap-metrics/#fabricpool_stats","title":"fabricpool_stats","text":"

    This counter is deprecated. Counter that indicates the number of object store operations sent, and their success and failure counts. The objstore_client_op_name array indicates the operation name, such as PUT, GET, etc. The objstore_client_op_stats_name array contains the total number of operations and the success and failure counters for each operation.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_client_op statsUnit: Type: Base: conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml"},{"location":"ontap-metrics/#fabricpool_throughput_ops","title":"fabricpool_throughput_ops","text":"

    Counter that indicates the throughput for commands in ops per second.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_client_op throughput_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml"},{"location":"ontap-metrics/#fcp_avg_other_latency","title":"fcp_avg_other_latency","text":"

    Average latency for operations other than read and write

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp average_other_latencyUnit: microsecType: averageBase: other_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port avg_other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_avg_read_latency","title":"fcp_avg_read_latency","text":"

    Average latency for read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp average_read_latencyUnit: microsecType: averageBase: read_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_avg_write_latency","title":"fcp_avg_write_latency","text":"

    Average latency for write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp average_write_latencyUnit: microsecType: averageBase: write_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_discarded_frames_count","title":"fcp_discarded_frames_count","text":"

    Number of discarded frames.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp discarded_frames_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port discarded_frames_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_fabric_connected_speed","title":"fcp_fabric_connected_speed","text":"

    The negotiated data rate between the target FC port and the fabric in gigabits per second.

    API Endpoint Metric Template REST api/network/fc/ports fabric.connected_speed conf/rest/9.6.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_int_count","title":"fcp_int_count","text":"

    Number of interrupts

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp interrupt_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port int_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_invalid_crc","title":"fcp_invalid_crc","text":"

    Number of invalid cyclic redundancy checks (CRC count)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp invalid.crcUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port invalid_crcUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_invalid_transmission_word","title":"fcp_invalid_transmission_word","text":"

    Number of invalid transmission words

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp invalid.transmission_wordUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port invalid_transmission_wordUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_isr_count","title":"fcp_isr_count","text":"

    Number of interrupt responses

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp isr.countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port isr_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_lif_avg_latency","title":"fcp_lif_avg_latency","text":"

    Average latency for FCP operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif average_latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_avg_other_latency","title":"fcp_lif_avg_other_latency","text":"

    Average latency for operations other than read and write

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif average_other_latencyUnit: microsecType: averageBase: other_ops conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif avg_other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_avg_read_latency","title":"fcp_lif_avg_read_latency","text":"

    Average latency for read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif average_read_latencyUnit: microsecType: averageBase: read_ops conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_avg_write_latency","title":"fcp_lif_avg_write_latency","text":"

    Average latency for write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif average_write_latencyUnit: microsecType: averageBase: write_ops conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_other_ops","title":"fcp_lif_other_ops","text":"

    Number of operations that are not read or write.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_read_data","title":"fcp_lif_read_data","text":"

    Amount of data read from the storage system

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_read_ops","title":"fcp_lif_read_ops","text":"

    Number of read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_total_ops","title":"fcp_lif_total_ops","text":"

    Total number of operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_write_data","title":"fcp_lif_write_data","text":"

    Amount of data written to the storage system

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_lif_write_ops","title":"fcp_lif_write_ops","text":"

    Number of write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp_lif write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp_lif.yaml ZAPI perf-object-get-instances fcp_lif write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp_lif.yaml"},{"location":"ontap-metrics/#fcp_link_down","title":"fcp_link_down","text":"

    Number of times the Fibre Channel link was lost

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp link.downUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port link_downUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_link_failure","title":"fcp_link_failure","text":"

    Number of link failures

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp link_failureUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port link_failureUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_loss_of_signal","title":"fcp_loss_of_signal","text":"

    Number of times this port lost signal

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp loss_of_signalUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port loss_of_signalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_loss_of_sync","title":"fcp_loss_of_sync","text":"

    Number of times this port lost sync

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp loss_of_syncUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port loss_of_syncUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_max_speed","title":"fcp_max_speed","text":"

    The maximum speed supported by the FC port in gigabits per second.

    API Endpoint Metric Template REST api/network/fc/ports speed.maximum conf/rest/9.6.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_avg_other_latency","title":"fcp_nvmf_avg_other_latency","text":"

    Average latency for operations other than read and write (FC-NVMe)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.average_other_latencyUnit: microsecType: averageBase: nvmf.other_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_avg_other_latencyUnit: microsecType: averageBase: nvmf_other_ops conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_avg_read_latency","title":"fcp_nvmf_avg_read_latency","text":"

    Average latency for read operations (FC-NVMe)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.average_read_latencyUnit: microsecType: averageBase: nvmf.read_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_avg_read_latencyUnit: microsecType: averageBase: nvmf_read_ops conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_avg_remote_other_latency","title":"fcp_nvmf_avg_remote_other_latency","text":"

    Average latency for remote operations other than read and write (FC-NVMe)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.average_remote_other_latencyUnit: microsecType: averageBase: nvmf_remote.other_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_avg_remote_other_latencyUnit: microsecType: averageBase: nvmf_remote_other_ops conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_avg_remote_read_latency","title":"fcp_nvmf_avg_remote_read_latency","text":"

    Average latency for remote read operations (FC-NVMe)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.average_remote_read_latencyUnit: microsecType: averageBase: nvmf_remote.read_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_avg_remote_read_latencyUnit: microsecType: averageBase: nvmf_remote_read_ops conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_avg_remote_write_latency","title":"fcp_nvmf_avg_remote_write_latency","text":"

    Average latency for remote write operations (FC-NVMe)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.average_remote_write_latencyUnit: microsecType: averageBase: nvmf_remote.write_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_avg_remote_write_latencyUnit: microsecType: averageBase: nvmf_remote_write_ops conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_avg_write_latency","title":"fcp_nvmf_avg_write_latency","text":"

    Average latency for write operations (FC-NVMe)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.average_write_latencyUnit: microsecType: averageBase: nvmf.write_ops conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_avg_write_latencyUnit: microsecType: averageBase: nvmf_write_ops conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_caw_data","title":"fcp_nvmf_caw_data","text":"

    Amount of CAW data sent to the storage system (FC-NVMe)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.caw_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_caw_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_caw_ops","title":"fcp_nvmf_caw_ops","text":"

    Number of FC-NVMe CAW operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.caw_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_caw_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_command_slots","title":"fcp_nvmf_command_slots","text":"

    Number of command slots that have been used by initiators logging into this port. This shows the command fan-in on the port.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.command_slotsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_command_slotsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_other_ops","title":"fcp_nvmf_other_ops","text":"

    Number of NVMF operations that are not read or write.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_read_data","title":"fcp_nvmf_read_data","text":"

    Amount of data read from the storage system (FC-NVMe)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_read_ops","title":"fcp_nvmf_read_ops","text":"

    Number of FC-NVMe read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_caw_data","title":"fcp_nvmf_remote_caw_data","text":"

    Amount of remote CAW data sent to the storage system (FC-NVMe)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.caw_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_caw_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_caw_ops","title":"fcp_nvmf_remote_caw_ops","text":"

    Number of FC-NVMe remote CAW operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.caw_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_caw_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_other_ops","title":"fcp_nvmf_remote_other_ops","text":"

    Number of NVMF remote operations that are not read or write.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_read_data","title":"fcp_nvmf_remote_read_data","text":"

    Amount of remote data read from the storage system (FC-NVMe)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_read_ops","title":"fcp_nvmf_remote_read_ops","text":"

    Number of FC-NVMe remote read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_total_data","title":"fcp_nvmf_remote_total_data","text":"

    Amount of remote FC-NVMe traffic to and from the storage system

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_total_ops","title":"fcp_nvmf_remote_total_ops","text":"

    Total number of remote FC-NVMe operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_write_data","title":"fcp_nvmf_remote_write_data","text":"

    Amount of remote data written to the storage system (FC-NVMe)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_remote_write_ops","title":"fcp_nvmf_remote_write_ops","text":"

    Number of FC-NVMe remote write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf_remote.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_remote_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_total_data","title":"fcp_nvmf_total_data","text":"

    Amount of FC-NVMe traffic to and from the storage system

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_total_ops","title":"fcp_nvmf_total_ops","text":"

    Total number of FC-NVMe operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_write_data","title":"fcp_nvmf_write_data","text":"

    Amount of data written to the storage system (FC-NVMe)

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_nvmf_write_ops","title":"fcp_nvmf_write_ops","text":"

    Number of FC-NVMe write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp nvmf.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port nvmf_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/fcp.yaml"},{"location":"ontap-metrics/#fcp_other_ops","title":"fcp_other_ops","text":"

    Number of operations that are not read or write.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_prim_seq_err","title":"fcp_prim_seq_err","text":"

    Number of primitive sequence errors

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp primitive_seq_errUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port prim_seq_errUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_queue_full","title":"fcp_queue_full","text":"

    Number of times a queue full condition occurred.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp queue_fullUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port queue_fullUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_read_data","title":"fcp_read_data","text":"

    Amount of data read from the storage system

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_read_ops","title":"fcp_read_ops","text":"

    Number of read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_reset_count","title":"fcp_reset_count","text":"

    Number of physical port resets

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp reset_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port reset_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_shared_int_count","title":"fcp_shared_int_count","text":"

    Number of shared interrupts

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp shared_interrupt_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port shared_int_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_spurious_int_count","title":"fcp_spurious_int_count","text":"

    Number of spurious interrupts

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp spurious_interrupt_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port spurious_int_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_threshold_full","title":"fcp_threshold_full","text":"

    Number of times the total number of outstanding commands on the port exceeds the threshold supported by this port.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp threshold_fullUnit: noneType: deltaBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port threshold_fullUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_total_data","title":"fcp_total_data","text":"

    Amount of FCP traffic to and from the storage system
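
    Because this counter is exported as a rate in bytes per second, per-cluster FCP throughput can be summed directly in Prometheus; the Prometheus address and the cluster label name are assumptions.

    # Minimal sketch: total FCP throughput per cluster in bytes/s from Prometheus.
    curl -sG 'http://localhost:9090/api/v1/query' \
      --data-urlencode 'query=sum by (cluster) (fcp_total_data)'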

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_total_ops","title":"fcp_total_ops","text":"

    Total number of FCP operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_write_data","title":"fcp_write_data","text":"

    Amount of data written to the storage system

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcp_write_ops","title":"fcp_write_ops","text":"

    Number of write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/fcp write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/fcp.yaml ZAPI perf-object-get-instances fcp_port write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcp.yaml"},{"location":"ontap-metrics/#fcvi_firmware_invalid_crc_count","title":"fcvi_firmware_invalid_crc_count","text":"

    Firmware reported invalid CRC count

    API Endpoint Metric Template REST api/cluster/counter/tables/fcvi firmware.invalid_crc_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi fw_invalid_crcUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_firmware_invalid_transmit_word_count","title":"fcvi_firmware_invalid_transmit_word_count","text":"

    Firmware reported invalid transmit word count

    API Endpoint Metric Template REST api/cluster/counter/tables/fcvi firmware.invalid_transmit_word_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi fw_invalid_xmit_wordsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_firmware_link_failure_count","title":"fcvi_firmware_link_failure_count","text":"

    Firmware reported link failure count

    API Endpoint Metric Template REST api/cluster/counter/tables/fcvi firmware.link_failure_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi fw_link_failureUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_firmware_loss_of_signal_count","title":"fcvi_firmware_loss_of_signal_count","text":"

    Firmware reported loss of signal count

    API Endpoint Metric Template REST api/cluster/counter/tables/fcvi firmware.loss_of_signal_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi fw_loss_of_signalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_firmware_loss_of_sync_count","title":"fcvi_firmware_loss_of_sync_count","text":"

    Firmware reported loss of sync count

    API Endpoint Metric Template REST api/cluster/counter/tables/fcvi firmware.loss_of_sync_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi fw_loss_of_syncUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_firmware_systat_discard_frames","title":"fcvi_firmware_systat_discard_frames","text":"

    Firmware reported SyStatDiscardFrames value

    API Endpoint Metric Template REST api/cluster/counter/tables/fcvi firmware.systat.discard_framesUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi fw_SyStatDiscardFramesUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_hard_reset_count","title":"fcvi_hard_reset_count","text":"

    Number of times a hard reset of the FCVI adapter was issued.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcvi hard_reset_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi hard_reset_cntUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_rdma_write_avg_latency","title":"fcvi_rdma_write_avg_latency","text":"

    Average RDMA write I/O latency.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcvi rdma.write_average_latencyUnit: microsecType: averageBase: rdma.write_ops conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi rdma_write_avg_latencyUnit: microsecType: averageBase: rdma_write_ops conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_rdma_write_ops","title":"fcvi_rdma_write_ops","text":"

    Number of RDMA write I/Os issued per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcvi rdma.write_opsUnit: noneType: rateBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi rdma_write_opsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_rdma_write_throughput","title":"fcvi_rdma_write_throughput","text":"

    RDMA write throughput in bytes per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcvi rdma.write_throughputUnit: b_per_secType: rateBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi rdma_write_throughputUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#fcvi_soft_reset_count","title":"fcvi_soft_reset_count","text":"

    Number of times a soft reset of the FCVI adapter was issued.

    API Endpoint Metric Template REST api/cluster/counter/tables/fcvi soft_reset_countUnit: noneType: deltaBase: conf/restperf/9.12.0/fcvi.yaml ZAPI perf-object-get-instances fcvi soft_reset_cntUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/fcvi.yaml"},{"location":"ontap-metrics/#flashcache_accesses","title":"flashcache_accesses","text":"

    External cache accesses per second

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache accessesUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj accessesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_disk_reads_replaced","title":"flashcache_disk_reads_replaced","text":"

    Estimated number of disk reads per second replaced by cache

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache disk_reads_replacedUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj disk_reads_replacedUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_evicts","title":"flashcache_evicts","text":"

    Number of blocks evicted from the external cache to make room for new blocks

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache evictsUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj evictsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_hit","title":"flashcache_hit","text":"

    Number of WAFL buffers served off the external cache

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache hit.totalUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj hitUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_hit_directory","title":"flashcache_hit_directory","text":"

    Number of directory buffers served off the external cache

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache hit.directoryUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj hit_directoryUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_hit_indirect","title":"flashcache_hit_indirect","text":"

    Number of indirect file buffers served off the external cache

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache hit.indirectUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj hit_indirectUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_hit_metadata_file","title":"flashcache_hit_metadata_file","text":"

    Number of metadata file buffers served off the external cache

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache hit.metadata_fileUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj hit_metadata_fileUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_hit_normal_lev0","title":"flashcache_hit_normal_lev0","text":"

    Number of normal level 0 WAFL buffers served off the external cache

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache hit.normal_level_zeroUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj hit_normal_lev0Unit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_hit_percent","title":"flashcache_hit_percent","text":"

    External cache hit rate
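
    A comparable hit-rate figure can also be reconstructed from the hit and access counters; the query below is a sketch and assumes both metrics carry a node label.

    # Minimal sketch: external cache hit rate per node, rebuilt from the
    # flashcache_hit and flashcache_accesses counters. Label names are assumptions.
    curl -sG 'http://localhost:9090/api/v1/query' \
      --data-urlencode 'query=100 * sum by (node) (flashcache_hit) / sum by (node) (flashcache_accesses)'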

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache hit.percentUnit: percentType: averageBase: accesses conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj hit_percentUnit: percentType: percentBase: accesses conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_inserts","title":"flashcache_inserts","text":"

    Number of WAFL buffers inserted into the external cache

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache insertsUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj insertsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_invalidates","title":"flashcache_invalidates","text":"

    Number of blocks invalidated in the external cache

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache invalidatesUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj invalidatesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_miss","title":"flashcache_miss","text":"

    External cache misses

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache miss.totalUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj missUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_miss_directory","title":"flashcache_miss_directory","text":"

    External cache misses accessing directory buffers

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache miss.directoryUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj miss_directoryUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_miss_indirect","title":"flashcache_miss_indirect","text":"

    External cache misses accessing indirect file buffers

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache miss.indirectUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj miss_indirectUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_miss_metadata_file","title":"flashcache_miss_metadata_file","text":"

    External cache misses accessing metadata file buffers

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache miss.metadata_fileUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj miss_metadata_fileUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_miss_normal_lev0","title":"flashcache_miss_normal_lev0","text":"

    External cache misses accessing normal level 0 buffers

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache miss.normal_level_zeroUnit: per_secType: rateBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj miss_normal_lev0Unit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashcache_usage","title":"flashcache_usage","text":"

    Percentage of blocks in external cache currently containing valid data

    API Endpoint Metric Template REST api/cluster/counter/tables/external_cache usageUnit: percentType: rawBase: conf/restperf/9.12.0/ext_cache_obj.yaml ZAPI perf-object-get-instances ext_cache_obj usageUnit: percentType: rawBase: conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml"},{"location":"ontap-metrics/#flashpool_cache_stats","title":"flashpool_cache_stats","text":"

    Automated Working-set Analyzer (AWA) per-interval pseudo cache statistics for the most recent intervals. The number of intervals defined as recent is CM_WAFL_HYAS_INT_DIS_CNT. This array is a table with fields corresponding to the enum type of hyas_cache_stat_type_t.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_sizer cache_statsUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_hya_sizer.yaml ZAPI perf-object-get-instances wafl_hya_sizer cache_statsUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/wafl_hya_sizer.yaml"},{"location":"ontap-metrics/#flashpool_evict_destage_rate","title":"flashpool_evict_destage_rate","text":"

    Number of blocks destaged per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate evict_destage_rateUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr evict_destage_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_evict_remove_rate","title":"flashpool_evict_remove_rate","text":"

    Number of blocks freed per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate evict_remove_rateUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr evict_remove_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_hya_read_hit_latency_average","title":"flashpool_hya_read_hit_latency_average","text":"

    Average of RAID I/O latency on read hit.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate hya_read_hit_latency_averageUnit: noneType: averageBase: hya_read_hit_latency_count conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr hya_read_hit_latency_averageUnit: noneType: averageBase: hya_read_hit_latency_count conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_hya_read_miss_latency_average","title":"flashpool_hya_read_miss_latency_average","text":"

    Average read miss latency.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate hya_read_miss_latency_averageUnit: noneType: averageBase: hya_read_miss_latency_count conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr hya_read_miss_latency_averageUnit: noneType: averageBase: hya_read_miss_latency_count conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_hya_write_hdd_latency_average","title":"flashpool_hya_write_hdd_latency_average","text":"

    Average write latency to HDD.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate hya_write_hdd_latency_averageUnit: noneType: averageBase: hya_write_hdd_latency_count conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr hya_write_hdd_latency_averageUnit: noneType: averageBase: hya_write_hdd_latency_count conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_hya_write_ssd_latency_average","title":"flashpool_hya_write_ssd_latency_average","text":"

    Average of RAID I/O latency on write to SSD.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate hya_write_ssd_latency_averageUnit: noneType: averageBase: hya_write_ssd_latency_count conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr hya_write_ssd_latency_averageUnit: noneType: averageBase: hya_write_ssd_latency_count conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_read_cache_ins_rate","title":"flashpool_read_cache_ins_rate","text":"

    Cache insert rate in blocks/sec.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate read_cache_insert_rateUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr read_cache_ins_rateUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_read_ops_replaced","title":"flashpool_read_ops_replaced","text":"

    Number of HDD read operations replaced by SSD reads per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate read_ops_replacedUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr read_ops_replacedUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_read_ops_replaced_percent","title":"flashpool_read_ops_replaced_percent","text":"

    Percentage of HDD read operations replaced by SSD.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate read_ops_replaced_percentUnit: percentType: percentBase: read_ops_total conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr read_ops_replaced_percentUnit: percentType: percentBase: read_ops_total conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_ssd_available","title":"flashpool_ssd_available","text":"

    Total SSD blocks available.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate ssd_availableUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr ssd_availableUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_ssd_read_cached","title":"flashpool_ssd_read_cached","text":"

    Total read cached SSD blocks.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate ssd_read_cachedUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr ssd_read_cachedUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_ssd_total","title":"flashpool_ssd_total","text":"

    Total SSD blocks.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate ssd_totalUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr ssd_totalUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_ssd_total_used","title":"flashpool_ssd_total_used","text":"

    Total SSD blocks used.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate ssd_total_usedUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr ssd_total_usedUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_ssd_write_cached","title":"flashpool_ssd_write_cached","text":"

    Total write cached SSD blocks.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate ssd_write_cachedUnit: noneType: rawBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr ssd_write_cachedUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_wc_write_blks_total","title":"flashpool_wc_write_blks_total","text":"

    Number of write-cache blocks written per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate wc_write_blocks_totalUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr wc_write_blks_totalUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_write_blks_replaced","title":"flashpool_write_blks_replaced","text":"

    Number of HDD write blocks replaced by SSD writes per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate write_blocks_replacedUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr write_blks_replacedUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#flashpool_write_blks_replaced_percent","title":"flashpool_write_blks_replaced_percent","text":"

    Percentage of blocks overwritten to write-cache among all disk writes.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl_hya_per_aggregate write_blocks_replaced_percentUnit: percentType: averageBase: estimated_write_blocks_total conf/restperf/9.12.0/wafl_hya_per_aggr.yaml ZAPI perf-object-get-instances wafl_hya_per_aggr write_blks_replaced_percentUnit: percentType: averageBase: est_write_blks_total conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_current_latency","title":"headroom_aggr_current_latency","text":"

    This is the storage aggregate average latency per message at the disk level.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate current_latencyUnit: microsecType: averageBase: current_ops conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr current_latencyUnit: microsecType: averageBase: current_ops conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_current_ops","title":"headroom_aggr_current_ops","text":"

    Total number of I/Os processed by the aggregate per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate current_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr current_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_current_utilization","title":"headroom_aggr_current_utilization","text":"

    This is the storage aggregate average utilization of all the data disks in the aggregate.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate current_utilizationUnit: percentType: percentBase: current_utilization_denominator conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr current_utilizationUnit: percentType: percentBase: current_utilization_total conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_ewma_daily","title":"headroom_aggr_ewma_daily","text":"

    Daily exponential weighted moving average.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate ewma.dailyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr ewma_dailyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_ewma_hourly","title":"headroom_aggr_ewma_hourly","text":"

    Hourly exponential weighted moving average.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate ewma.hourlyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr ewma_hourlyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_ewma_monthly","title":"headroom_aggr_ewma_monthly","text":"

    Monthly exponential weighted moving average.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate ewma.monthlyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr ewma_monthlyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_ewma_weekly","title":"headroom_aggr_ewma_weekly","text":"

    Weekly exponential weighted moving average.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate ewma.weeklyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr ewma_weeklyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_optimal_point_confidence_factor","title":"headroom_aggr_optimal_point_confidence_factor","text":"

    The confidence factor for the optimal point value based on the observed resource latency and utilization.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate optimal_point.confidence_factorUnit: noneType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr optimal_point_confidence_factorUnit: noneType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_optimal_point_latency","title":"headroom_aggr_optimal_point_latency","text":"

    The latency component of the optimal point of the latency/utilization curve.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate optimal_point.latencyUnit: microsecType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr optimal_point_latencyUnit: microsecType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_optimal_point_ops","title":"headroom_aggr_optimal_point_ops","text":"

    The ops component of the optimal point derived from the latency/utilization curve.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate optimal_point.opsUnit: per_secType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr optimal_point_opsUnit: per_secType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_aggr_optimal_point_utilization","title":"headroom_aggr_optimal_point_utilization","text":"

    The utilization component of the optimal point of the latency/utilization curve.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_aggregate optimal_point.utilizationUnit: noneType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_aggr.yaml ZAPI perf-object-get-instances resource_headroom_aggr optimal_point_utilizationUnit: noneType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml"},{"location":"ontap-metrics/#headroom_cpu_current_latency","title":"headroom_cpu_current_latency","text":"

    Current operation latency of the resource.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu current_latencyUnit: microsecType: averageBase: current_ops conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu current_latencyUnit: microsecType: averageBase: current_ops conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_current_ops","title":"headroom_cpu_current_ops","text":"

    Total number of operations per second (also referred to as dblade ops).

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu current_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu current_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_current_utilization","title":"headroom_cpu_current_utilization","text":"

    Average processor utilization across all processors in the system.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu current_utilizationUnit: percentType: percentBase: elapsed_time conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu current_utilizationUnit: percentType: percentBase: current_utilization_total conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_ewma_daily","title":"headroom_cpu_ewma_daily","text":"

    Daily exponential weighted moving average for current_ops, optimal_point_ops, current_latency, optimal_point_latency, current_utilization, optimal_point_utilization and optimal_point_confidence_factor.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu ewma.dailyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu ewma_dailyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_ewma_hourly","title":"headroom_cpu_ewma_hourly","text":"

    Hourly exponential weighted moving average for current_ops, optimal_point_ops, current_latency, optimal_point_latency, current_utilization, optimal_point_utilization and optimal_point_confidence_factor.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu ewma.hourlyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu ewma_hourlyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_ewma_monthly","title":"headroom_cpu_ewma_monthly","text":"

    Monthly exponential weighted moving average for current_ops, optimal_point_ops, current_latency, optimal_point_latency, current_utilization, optimal_point_utilization and optimal_point_confidence_factor.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu ewma.monthlyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu ewma_monthlyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_ewma_weekly","title":"headroom_cpu_ewma_weekly","text":"

    Weekly exponential weighted moving average for current_ops, optimal_point_ops, current_latency, optimal_point_latency, current_utilization, optimal_point_utilization and optimal_point_confidence_factor.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu ewma.weeklyUnit: noneType: rawBase: conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu ewma_weeklyUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_optimal_point_confidence_factor","title":"headroom_cpu_optimal_point_confidence_factor","text":"

    Confidence factor for the optimal point value based on the observed resource latency and utilization. The possible values are: 0 - unknown, 1 - low, 2 - medium, 3 - high. This counter can provide an average confidence factor over a range of time.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu optimal_point.confidence_factorUnit: noneType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu optimal_point_confidence_factorUnit: noneType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_optimal_point_latency","title":"headroom_cpu_optimal_point_latency","text":"

    Latency component of the optimal point of the latency/utilization curve. This counter can provide an average latency over a range of time.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu optimal_point.latencyUnit: microsecType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu optimal_point_latencyUnit: microsecType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_optimal_point_ops","title":"headroom_cpu_optimal_point_ops","text":"

    Ops component of the optimal point derived from the latency/utilization curve. This counter can provide an average ops over a range of time.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu optimal_point.opsUnit: per_secType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu optimal_point_opsUnit: per_secType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#headroom_cpu_optimal_point_utilization","title":"headroom_cpu_optimal_point_utilization","text":"

    Utilization component of the optimal point of the latency/utilization curve. This counter can provide an average utilization over a range of time.

    API Endpoint Metric Template REST api/cluster/counter/tables/headroom_cpu optimal_point.utilizationUnit: noneType: averageBase: optimal_point.samples conf/restperf/9.12.0/resource_headroom_cpu.yaml ZAPI perf-object-get-instances resource_headroom_cpu optimal_point_utilizationUnit: noneType: averageBase: optimal_point_samples conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml"},{"location":"ontap-metrics/#hostadapter_bytes_read","title":"hostadapter_bytes_read","text":"

    Bytes read through a host adapter

    API Endpoint Metric Template REST api/cluster/counter/tables/host_adapter bytes_readUnit: per_secType: rateBase: conf/restperf/9.12.0/hostadapter.yaml ZAPI perf-object-get-instances hostadapter bytes_readUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/hostadapter.yaml"},{"location":"ontap-metrics/#hostadapter_bytes_written","title":"hostadapter_bytes_written","text":"

    Bytes written through a host adapter

    API Endpoint Metric Template REST api/cluster/counter/tables/host_adapter bytes_writtenUnit: per_secType: rateBase: conf/restperf/9.12.0/hostadapter.yaml ZAPI perf-object-get-instances hostadapter bytes_writtenUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/hostadapter.yaml"},{"location":"ontap-metrics/#iscsi_lif_avg_latency","title":"iscsi_lif_avg_latency","text":"

    Average latency for iSCSI operations

    API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif average_latencyUnit: microsecType: averageBase: cmd_transferred conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif avg_latencyUnit: microsecType: averageBase: cmd_transfered conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_avg_other_latency","title":"iscsi_lif_avg_other_latency","text":"

    Average latency for operations other than read and write (for example, Inquiry, Report LUNs, SCSI Task Management Functions)

    API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif average_other_latencyUnit: microsecType: averageBase: iscsi_other_ops conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif avg_other_latencyUnit: microsecType: averageBase: iscsi_other_ops conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_avg_read_latency","title":"iscsi_lif_avg_read_latency","text":"

    Average latency for read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif average_read_latencyUnit: microsecType: averageBase: iscsi_read_ops conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif avg_read_latencyUnit: microsecType: averageBase: iscsi_read_ops conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_avg_write_latency","title":"iscsi_lif_avg_write_latency","text":"

    Average latency for write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif average_write_latencyUnit: microsecType: averageBase: iscsi_write_ops conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif avg_write_latencyUnit: microsecType: averageBase: iscsi_write_ops conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_cmd_transfered","title":"iscsi_lif_cmd_transfered","text":"

    Command transferred by this iSCSI connection

    API Endpoint Metric Template ZAPI perf-object-get-instances iscsi_lif cmd_transferedUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_cmd_transferred","title":"iscsi_lif_cmd_transferred","text":"

    Command transferred by this iSCSI connection

    API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif cmd_transferredUnit: noneType: rateBase: conf/restperf/9.12.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_iscsi_other_ops","title":"iscsi_lif_iscsi_other_ops","text":"

    iSCSI other operations per second on this logical interface (LIF)

    API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif iscsi_other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif iscsi_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_iscsi_read_ops","title":"iscsi_lif_iscsi_read_ops","text":"

    iSCSI read operations per second on this logical interface (LIF)

    API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif iscsi_read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif iscsi_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_iscsi_write_ops","title":"iscsi_lif_iscsi_write_ops","text":"

    iSCSI write operations per second on this logical interface (LIF)

    API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif iscsi_write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif iscsi_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_protocol_errors","title":"iscsi_lif_protocol_errors","text":"

    Number of protocol errors from iSCSI sessions on this logical interface (LIF)

    API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif protocol_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif protocol_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_read_data","title":"iscsi_lif_read_data","text":"

    Amount of data read from the storage system in bytes

    API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iscsi_lif_write_data","title":"iscsi_lif_write_data","text":"

    Amount of data written to the storage system in bytes

    API Endpoint Metric Template REST api/cluster/counter/tables/iscsi_lif write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/iscsi_lif.yaml ZAPI perf-object-get-instances iscsi_lif write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml"},{"location":"ontap-metrics/#iw_avg_latency","title":"iw_avg_latency","text":"

    Average RDMA I/O latency.

    API Endpoint Metric Template ZAPI perf-object-get-instances iwarp iw_avg_latencyUnit: microsecType: averageBase: iw_ops conf/zapiperf/cdot/9.8.0/iwarp.yaml"},{"location":"ontap-metrics/#iw_ops","title":"iw_ops","text":"

    Number of RDMA I/Os issued.

    API Endpoint Metric Template ZAPI perf-object-get-instances iwarp iw_opsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/iwarp.yaml"},{"location":"ontap-metrics/#iw_read_ops","title":"iw_read_ops","text":"

    Number of RDMA read I/Os issued.

    API Endpoint Metric Template ZAPI perf-object-get-instances iwarp iw_read_opsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/iwarp.yaml"},{"location":"ontap-metrics/#iw_write_ops","title":"iw_write_ops","text":"

    Number of RDMA write I/Os issued.

    API Endpoint Metric Template ZAPI perf-object-get-instances iwarp iw_write_opsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/iwarp.yaml"},{"location":"ontap-metrics/#lif_recv_data","title":"lif_recv_data","text":"

    Number of bytes received per second

    API Endpoint Metric Template REST api/cluster/counter/tables/lif received_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/lif.yaml ZAPI perf-object-get-instances lif recv_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lif.yaml"},{"location":"ontap-metrics/#lif_recv_errors","title":"lif_recv_errors","text":"

    Number of received errors per second

    API Endpoint Metric Template REST api/cluster/counter/tables/lif received_errorsUnit: per_secType: rateBase: conf/restperf/9.12.0/lif.yaml ZAPI perf-object-get-instances lif recv_errorsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lif.yaml"},{"location":"ontap-metrics/#lif_recv_packet","title":"lif_recv_packet","text":"

    Number of packets received per second

    API Endpoint Metric Template REST api/cluster/counter/tables/lif received_packetsUnit: per_secType: rateBase: conf/restperf/9.12.0/lif.yaml ZAPI perf-object-get-instances lif recv_packetUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lif.yaml"},{"location":"ontap-metrics/#lif_sent_data","title":"lif_sent_data","text":"

    Number of bytes sent per second

    API Endpoint Metric Template REST api/cluster/counter/tables/lif sent_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/lif.yaml ZAPI perf-object-get-instances lif sent_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lif.yaml"},{"location":"ontap-metrics/#lif_sent_errors","title":"lif_sent_errors","text":"

    Number of sent errors per second

    API Endpoint Metric Template REST api/cluster/counter/tables/lif sent_errorsUnit: per_secType: rateBase: conf/restperf/9.12.0/lif.yaml ZAPI perf-object-get-instances lif sent_errorsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lif.yaml"},{"location":"ontap-metrics/#lif_sent_packet","title":"lif_sent_packet","text":"

    Number of packets sent per second

    API Endpoint Metric Template REST api/cluster/counter/tables/lif sent_packetsUnit: per_secType: rateBase: conf/restperf/9.12.0/lif.yaml ZAPI perf-object-get-instances lif sent_packetUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lif.yaml"},{"location":"ontap-metrics/#lun_avg_read_latency","title":"lun_avg_read_latency","text":"

    Average read latency in microseconds for all operations on the LUN

    API Endpoint Metric Template REST api/cluster/counter/tables/lun average_read_latencyUnit: microsecType: averageBase: read_ops conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_avg_write_latency","title":"lun_avg_write_latency","text":"

    Average write latency in microseconds for all operations on the LUN

    API Endpoint Metric Template REST api/cluster/counter/tables/lun average_write_latencyUnit: microsecType: averageBase: write_ops conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_avg_xcopy_latency","title":"lun_avg_xcopy_latency","text":"

    Average latency in microseconds for xcopy requests

    API Endpoint Metric Template REST api/cluster/counter/tables/lun average_xcopy_latencyUnit: microsecType: averageBase: xcopy_requests conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun avg_xcopy_latencyUnit: microsecType: averageBase: xcopy_reqs conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_caw_reqs","title":"lun_caw_reqs","text":"

    Number of compare and write requests

    API Endpoint Metric Template REST api/cluster/counter/tables/lun caw_requestsUnit: noneType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun caw_reqsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_enospc","title":"lun_enospc","text":"

    Number of operations receiving ENOSPC errors

    API Endpoint Metric Template REST api/cluster/counter/tables/lun enospcUnit: noneType: deltaBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun enospcUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_queue_full","title":"lun_queue_full","text":"

    Queue full responses

    API Endpoint Metric Template REST api/cluster/counter/tables/lun queue_fullUnit: per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun queue_fullUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_read_align_histo","title":"lun_read_align_histo","text":"

    Histogram of WAFL read alignment (number of sectors off WAFL block start)

    API Endpoint Metric Template REST api/cluster/counter/tables/lun read_align_histogramUnit: percentType: percentBase: read_ops_sent conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun read_align_histoUnit: percentType: percentBase: read_ops_sent conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_read_data","title":"lun_read_data","text":"

    Read bytes

    API Endpoint Metric Template REST api/cluster/counter/tables/lun read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_read_ops","title":"lun_read_ops","text":"

    Number of read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/lun read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_read_partial_blocks","title":"lun_read_partial_blocks","text":"

    Percentage of reads whose size is not a multiple of WAFL block size

    API Endpoint Metric Template REST api/cluster/counter/tables/lun read_partial_blocksUnit: percentType: percentBase: read_ops conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun read_partial_blocksUnit: percentType: percentBase: read_ops conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_remote_bytes","title":"lun_remote_bytes","text":"

    I/O to or from a LUN which is not owned by the storage system handling the I/O.

    API Endpoint Metric Template REST api/cluster/counter/tables/lun remote_bytesUnit: b_per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun remote_bytesUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_remote_ops","title":"lun_remote_ops","text":"

    Number of operations received by a storage system that does not own the LUN targeted by the operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/lun remote_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun remote_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_size","title":"lun_size","text":"

    The total provisioned size of the LUN. The LUN size can be increased but not be made smaller using the REST interface. The maximum and minimum sizes listed here are the absolute maximum and absolute minimum sizes in bytes. The actual minimum and maximum sizes vary depending on the ONTAP version, ONTAP platform and the available space in the containing volume and aggregate. For more information, see Size properties in the docs section of the ONTAP REST API documentation.

    API Endpoint Metric Template REST api/storage/luns space.size conf/rest/9.12.0/lun.yaml ZAPI lun-get-iter lun-info.size conf/zapi/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_size_used","title":"lun_size_used","text":"

    The amount of space consumed by the main data stream of the LUN. This value is the total space consumed in the volume by the LUN, including filesystem overhead, but excluding prefix and suffix streams. Due to internal filesystem overhead and the many ways SAN filesystems and applications utilize blocks within a LUN, this value does not necessarily reflect actual consumption/availability from the perspective of the filesystem or application. Without specific knowledge of how the LUN blocks are utilized outside of ONTAP, this property should not be used as an indicator for an out-of-space condition. For more information, see Size properties in the docs section of the ONTAP REST API documentation.

    API Endpoint Metric Template REST api/storage/luns space.used conf/rest/9.12.0/lun.yaml ZAPI lun-get-iter lun-info.size-used conf/zapi/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_size_used_percent","title":"lun_size_used_percent","text":"API Endpoint Metric Template REST api/storage/luns size_used, size conf/rest/9.12.0/lun.yaml ZAPI lun-get-iter size_used, size conf/zapi/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_unmap_reqs","title":"lun_unmap_reqs","text":"

    Number of unmap command requests

    API Endpoint Metric Template REST api/cluster/counter/tables/lun unmap_requestsUnit: noneType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun unmap_reqsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_write_align_histo","title":"lun_write_align_histo","text":"

    Histogram of WAFL write alignment (number of sectors off WAFL block start)

    API Endpoint Metric Template REST api/cluster/counter/tables/lun write_align_histogramUnit: percentType: percentBase: write_ops_sent conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun write_align_histoUnit: percentType: percentBase: write_ops_sent conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_write_data","title":"lun_write_data","text":"

    Write bytes

    API Endpoint Metric Template REST api/cluster/counter/tables/lun write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_write_ops","title":"lun_write_ops","text":"

    Number of write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/lun write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_write_partial_blocks","title":"lun_write_partial_blocks","text":"

    Percentage of writes whose size is not a multiple of WAFL block size

    API Endpoint Metric Template REST api/cluster/counter/tables/lun write_partial_blocksUnit: percentType: percentBase: write_ops conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun write_partial_blocksUnit: percentType: percentBase: write_ops conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_writesame_reqs","title":"lun_writesame_reqs","text":"

    Number of write same command requests

    API Endpoint Metric Template REST api/cluster/counter/tables/lun writesame_requestsUnit: noneType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun writesame_reqsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_writesame_unmap_reqs","title":"lun_writesame_unmap_reqs","text":"

    Number of write same command requests with unmap bit set

    API Endpoint Metric Template REST api/cluster/counter/tables/lun writesame_unmap_requestsUnit: noneType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun writesame_unmap_reqsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#lun_xcopy_reqs","title":"lun_xcopy_reqs","text":"

    Total number of xcopy operations on the LUN

    API Endpoint Metric Template REST api/cluster/counter/tables/lun xcopy_requestsUnit: noneType: rateBase: conf/restperf/9.12.0/lun.yaml ZAPI perf-object-get-instances lun xcopy_reqsUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/lun.yaml"},{"location":"ontap-metrics/#metadata_collector_api_time","title":"metadata_collector_api_time","text":"

    amount of time to collect data from monitored cluster object

    API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_collector_calc_time","title":"metadata_collector_calc_time","text":"

    amount of time it took to compute metrics between two successive polls, specifically using properties like raw, delta, rate, average, and percent. This metric is available for ZapiPerf/RestPerf collectors.

    API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_collector_instances","title":"metadata_collector_instances","text":"

    number of objects collected from monitored cluster

    API Endpoint Metric Template REST NA Harvest generatedUnit: scalar NA ZAPI NA Harvest generatedUnit: scalar NA"},{"location":"ontap-metrics/#metadata_collector_metrics","title":"metadata_collector_metrics","text":"

    number of counters collected from monitored cluster

    API Endpoint Metric Template REST NA Harvest generatedUnit: scalar NA ZAPI NA Harvest generatedUnit: scalar NA"},{"location":"ontap-metrics/#metadata_collector_parse_time","title":"metadata_collector_parse_time","text":"

    amount of time to parse XML, JSON, etc. for cluster object

    API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_collector_plugin_time","title":"metadata_collector_plugin_time","text":"

    amount of time for all plugins to post-process metrics

    API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_collector_poll_time","title":"metadata_collector_poll_time","text":"

    amount of time it took for the poll to finish

    API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_collector_skips","title":"metadata_collector_skips","text":"

    number of metrics that were not calculated between two successive polls. This metric is available for ZapiPerf/RestPerf collectors.

    API Endpoint Metric Template REST NA Harvest generatedUnit: scalar NA ZAPI NA Harvest generatedUnit: scalar NA"},{"location":"ontap-metrics/#metadata_collector_task_time","title":"metadata_collector_task_time","text":"

    amount of time it took for each collector's subtasks to complete

    API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_component_count","title":"metadata_component_count","text":"

    number of metrics collected for each object

    API Endpoint Metric Template REST NA Harvest generatedUnit: scalar NA ZAPI NA Harvest generatedUnit: scalar NA"},{"location":"ontap-metrics/#metadata_component_status","title":"metadata_component_status","text":"

    status of the collector - 0 means running, 1 means standby, 2 means failed

    API Endpoint Metric Template REST NA Harvest generatedUnit: enum NA ZAPI NA Harvest generatedUnit: enum NA"},{"location":"ontap-metrics/#metadata_exporter_count","title":"metadata_exporter_count","text":"

    number of metrics and labels exported

    API Endpoint Metric Template REST NA Harvest generatedUnit: scalar NA ZAPI NA Harvest generatedUnit: scalar NA"},{"location":"ontap-metrics/#metadata_exporter_time","title":"metadata_exporter_time","text":"

    amount of time it took to render, export, and serve exported data

    API Endpoint Metric Template REST NA Harvest generatedUnit: microseconds NA ZAPI NA Harvest generatedUnit: microseconds NA"},{"location":"ontap-metrics/#metadata_target_goroutines","title":"metadata_target_goroutines","text":"

    number of goroutines that exist within the poller

    API Endpoint Metric Template REST NA Harvest generatedUnit: scalar NA ZAPI NA Harvest generatedUnit: scalar NA"},{"location":"ontap-metrics/#metadata_target_status","title":"metadata_target_status","text":"

    status of the system being monitored. 0 means reachable, 1 means unreachable

    API Endpoint Metric Template REST NA Harvest generatedUnit: enum NA ZAPI NA Harvest generatedUnit: enum NA"},{"location":"ontap-metrics/#namespace_avg_other_latency","title":"namespace_avg_other_latency","text":"

    Average other ops latency in microseconds for all operations on the Namespace

    API Endpoint Metric Template REST api/cluster/counter/tables/namespace average_other_latencyUnit: microsecType: averageBase: other_ops conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace avg_other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_avg_read_latency","title":"namespace_avg_read_latency","text":"

    Average read latency in microseconds for all operations on the Namespace

    API Endpoint Metric Template REST api/cluster/counter/tables/namespace average_read_latencyUnit: microsecType: averageBase: read_ops conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_avg_write_latency","title":"namespace_avg_write_latency","text":"

    Average write latency in microseconds for all operations on the Namespace

    API Endpoint Metric Template REST api/cluster/counter/tables/namespace average_write_latencyUnit: microsecType: averageBase: write_ops conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_block_size","title":"namespace_block_size","text":"

    The size of blocks in the namespace in bytes. Valid in POST when creating an NVMe namespace that is not a clone of another. Disallowed in POST when creating a namespace clone. Valid in POST.

    API Endpoint Metric Template REST api/storage/namespaces space.block_size conf/rest/9.12.0/namespace.yaml ZAPI nvme-namespace-get-iter nvme-namespace-info.block-size conf/zapi/cdot/9.8.0/namespace.yaml"},{"location":"ontap-metrics/#namespace_other_ops","title":"namespace_other_ops","text":"

    Number of other operations

    API Endpoint Metric Template REST api/cluster/counter/tables/namespace other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_read_data","title":"namespace_read_data","text":"

    Read bytes

    API Endpoint Metric Template REST api/cluster/counter/tables/namespace read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_read_ops","title":"namespace_read_ops","text":"

    Number of read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/namespace read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_remote_bytes","title":"namespace_remote_bytes","text":"

    Remote read bytes

    API Endpoint Metric Template REST api/cluster/counter/tables/namespace remote.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace remote_bytesUnit: Type: Base: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_remote_ops","title":"namespace_remote_ops","text":"

    Number of remote read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/namespace remote.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace remote_opsUnit: Type: Base: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_size","title":"namespace_size","text":"

    The total provisioned size of the NVMe namespace. Valid in POST and PATCH. The NVMe namespace size can be increased but not be made smaller using the REST interface. The maximum and minimum sizes listed here are the absolute maximum and absolute minimum sizes in bytes. The maximum size is variable with respect to large NVMe namespace support in ONTAP. If large namespaces are supported, the maximum size is 128 TB (140737488355328 bytes) and if not supported, the maximum size is just under 16 TB (17557557870592 bytes). The minimum size supported is always 4096 bytes. For more information, see Size properties in the docs section of the ONTAP REST API documentation.

    API Endpoint Metric Template REST api/storage/namespaces space.size conf/rest/9.12.0/namespace.yaml ZAPI nvme-namespace-get-iter nvme-namespace-info.size conf/zapi/cdot/9.8.0/namespace.yaml"},{"location":"ontap-metrics/#namespace_size_available","title":"namespace_size_available","text":"API Endpoint Metric Template REST api/storage/namespaces size, size_used conf/rest/9.12.0/namespace.yaml ZAPI nvme-namespace-get-iter size, size_used conf/zapi/cdot/9.8.0/namespace.yaml"},{"location":"ontap-metrics/#namespace_size_available_percent","title":"namespace_size_available_percent","text":"API Endpoint Metric Template REST api/storage/namespaces size_available, size conf/rest/9.12.0/namespace.yaml ZAPI nvme-namespace-get-iter size_available, size conf/zapi/cdot/9.8.0/namespace.yaml"},{"location":"ontap-metrics/#namespace_size_used","title":"namespace_size_used","text":"

    The amount of space consumed by the main data stream of the NVMe namespace. This value is the total space consumed in the volume by the NVMe namespace, including filesystem overhead, but excluding prefix and suffix streams. Due to internal filesystem overhead and the many ways NVMe filesystems and applications utilize blocks within a namespace, this value does not necessarily reflect actual consumption/availability from the perspective of the filesystem or application. Without specific knowledge of how the namespace blocks are utilized outside of ONTAP, this property should not be used as an indicator for an out-of-space condition. For more information, see Size properties in the docs section of the ONTAP REST API documentation.

    API Endpoint Metric Template REST api/storage/namespaces space.used conf/rest/9.12.0/namespace.yaml ZAPI nvme-namespace-get-iter nvme-namespace-info.size-used conf/zapi/cdot/9.8.0/namespace.yaml"},{"location":"ontap-metrics/#namespace_write_data","title":"namespace_write_data","text":"

    Write bytes

    API Endpoint Metric Template REST api/cluster/counter/tables/namespace write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#namespace_write_ops","title":"namespace_write_ops","text":"

    Number of write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/namespace write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/namespace.yaml ZAPI perf-object-get-instances namespace write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/namespace.yaml"},{"location":"ontap-metrics/#net_port_mtu","title":"net_port_mtu","text":"

    Maximum transmission unit, largest packet size on this network

    API Endpoint Metric Template REST api/network/ethernet/ports mtu conf/rest/9.12.0/netport.yaml ZAPI net-port-get-iter net-port-info.mtu conf/zapi/cdot/9.8.0/netport.yaml"},{"location":"ontap-metrics/#netstat_bytes_recvd","title":"netstat_bytes_recvd","text":"

    Number of bytes received by a TCP connection

    API Endpoint Metric Template ZAPI perf-object-get-instances netstat bytes_recvdUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_bytes_sent","title":"netstat_bytes_sent","text":"

    Number of bytes sent by a TCP connection

    API Endpoint Metric Template ZAPI perf-object-get-instances netstat bytes_sentUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_cong_win","title":"netstat_cong_win","text":"

    Congestion window of a TCP connection

    API Endpoint Metric Template ZAPI perf-object-get-instances netstat cong_winUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_cong_win_th","title":"netstat_cong_win_th","text":"

    Congestion window threshold of a TCP connection

    API Endpoint Metric Template ZAPI perf-object-get-instances netstat cong_win_thUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_ooorcv_pkts","title":"netstat_ooorcv_pkts","text":"

    Number of out-of-order packets received by this TCP connection

    API Endpoint Metric Template ZAPI perf-object-get-instances netstat ooorcv_pktsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_recv_window","title":"netstat_recv_window","text":"

    Receive window size of a TCP connection

    API Endpoint Metric Template ZAPI perf-object-get-instances netstat recv_windowUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_rexmit_pkts","title":"netstat_rexmit_pkts","text":"

    Number of packets retransmitted by this TCP connection

    API Endpoint Metric Template ZAPI perf-object-get-instances netstat rexmit_pktsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#netstat_send_window","title":"netstat_send_window","text":"

    Send window size of a TCP connection

    API Endpoint Metric Template ZAPI perf-object-get-instances netstat send_windowUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/netstat.yaml"},{"location":"ontap-metrics/#nfs_clients_idle_duration","title":"nfs_clients_idle_duration","text":"

    Specifies an ISO-8601 format of date and time to retrieve the idle time duration in hours, minutes, and seconds format.

    API Endpoint Metric Template REST api/protocols/nfs/connected-clients idle_duration conf/rest/9.7.0/nfs_clients.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_bytelockalloc","title":"nfs_diag_storePool_ByteLockAlloc","text":"

    Current number of byte range lock objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.byte_lock_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_ByteLockAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_bytelockmax","title":"nfs_diag_storePool_ByteLockMax","text":"

    Maximum number of byte range lock objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.byte_lock_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_ByteLockMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_clientalloc","title":"nfs_diag_storePool_ClientAlloc","text":"

    Current number of client objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.client_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_ClientAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_clientmax","title":"nfs_diag_storePool_ClientMax","text":"

    Maximum number of client objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.client_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_ClientMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_connectionparentsessionreferencealloc","title":"nfs_diag_storePool_ConnectionParentSessionReferenceAlloc","text":"

    Current number of connection parent session reference objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.connection_parent_session_reference_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_ConnectionParentSessionReferenceAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_connectionparentsessionreferencemax","title":"nfs_diag_storePool_ConnectionParentSessionReferenceMax","text":"

    Maximum number of connection parent session reference objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.connection_parent_session_reference_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_ConnectionParentSessionReferenceMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_copystatealloc","title":"nfs_diag_storePool_CopyStateAlloc","text":"

    Current number of copy state objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.copy_state_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_CopyStateAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_copystatemax","title":"nfs_diag_storePool_CopyStateMax","text":"

    Maximum number of copy state objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.copy_state_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_CopyStateMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_delegalloc","title":"nfs_diag_storePool_DelegAlloc","text":"

    Current number of delegation lock objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.delegation_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_DelegAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_delegmax","title":"nfs_diag_storePool_DelegMax","text":"

    Maximum number of delegation lock objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.delegation_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_DelegMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_delegstatealloc","title":"nfs_diag_storePool_DelegStateAlloc","text":"

    Current number of delegation state objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.delegation_state_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_DelegStateAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_delegstatemax","title":"nfs_diag_storePool_DelegStateMax","text":"

    Maximum number of delegation state objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.delegation_state_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_DelegStateMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_layoutalloc","title":"nfs_diag_storePool_LayoutAlloc","text":"

    Current number of layout objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.layout_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_LayoutAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_layoutmax","title":"nfs_diag_storePool_LayoutMax","text":"

    Maximum number of layout objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.layout_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_LayoutMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_layoutstatealloc","title":"nfs_diag_storePool_LayoutStateAlloc","text":"

    Current number of layout state objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.layout_state_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_LayoutStateAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_layoutstatemax","title":"nfs_diag_storePool_LayoutStateMax","text":"

    Maximum number of layout state objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.layout_state_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_LayoutStateMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_lockstatealloc","title":"nfs_diag_storePool_LockStateAlloc","text":"

    Current number of lock state objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.lock_state_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_LockStateAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_lockstatemax","title":"nfs_diag_storePool_LockStateMax","text":"

    Maximum number of lock state objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.lock_state_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_LockStateMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_openalloc","title":"nfs_diag_storePool_OpenAlloc","text":"

    Current number of share objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.open_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_OpenAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_openmax","title":"nfs_diag_storePool_OpenMax","text":"

    Maximum number of share lock objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.open_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_OpenMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_openstatealloc","title":"nfs_diag_storePool_OpenStateAlloc","text":"

    Current number of open state objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.openstate_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_OpenStateAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_openstatemax","title":"nfs_diag_storePool_OpenStateMax","text":"

    Maximum number of open state objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.openstate_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_OpenStateMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_owneralloc","title":"nfs_diag_storePool_OwnerAlloc","text":"

    Current number of owner objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.owner_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_OwnerAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_ownermax","title":"nfs_diag_storePool_OwnerMax","text":"

    Maximum number of owner objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.owner_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_OwnerMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_sessionalloc","title":"nfs_diag_storePool_SessionAlloc","text":"

    Current number of session objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.session_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_SessionAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_sessionconnectionholderalloc","title":"nfs_diag_storePool_SessionConnectionHolderAlloc","text":"

    Current number of session connection holder objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.session_connection_holder_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_SessionConnectionHolderAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_sessionconnectionholdermax","title":"nfs_diag_storePool_SessionConnectionHolderMax","text":"

    Maximum number of session connection holder objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.session_connection_holder_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_SessionConnectionHolderMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_sessionholderalloc","title":"nfs_diag_storePool_SessionHolderAlloc","text":"

    Current number of session holder objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.session_holder_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_SessionHolderAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_sessionholdermax","title":"nfs_diag_storePool_SessionHolderMax","text":"

    Maximum number of session holder objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.session_holder_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_SessionHolderMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_sessionmax","title":"nfs_diag_storePool_SessionMax","text":"

    Maximum number of session objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.session_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_SessionMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_staterefhistoryalloc","title":"nfs_diag_storePool_StateRefHistoryAlloc","text":"

    Current number of state reference callstack history objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.state_reference_history_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_StateRefHistoryAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_staterefhistorymax","title":"nfs_diag_storePool_StateRefHistoryMax","text":"

    Maximum number of state reference callstack history objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.state_reference_history_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_StateRefHistoryMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_stringalloc","title":"nfs_diag_storePool_StringAlloc","text":"

    Current number of string objects allocated.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.string_allocatedUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_StringAllocUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nfs_diag_storepool_stringmax","title":"nfs_diag_storePool_StringMax","text":"

    Maximum number of string objects.

    API Endpoint Metric Template REST api/cluster/counter/tables/nfs_v4_diag storepool.string_maximumUnit: noneType: rawBase: conf/restperf/9.12.0/nfsv4_pool.yaml ZAPI perf-object-get-instances nfsv4_diag storePool_StringMaxUnit: noneType: raw,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml"},{"location":"ontap-metrics/#nic_link_up_to_downs","title":"nic_link_up_to_downs","text":"

    Number of link state changes from UP to DOWN.

    API Endpoint Metric Template REST api/cluster/counter/tables/nic_common link_up_to_downUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common link_up_to_downsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_rx_alignment_errors","title":"nic_rx_alignment_errors","text":"

    Alignment errors detected on received packets

    API Endpoint Metric Template REST api/cluster/counter/tables/nic_common receive_alignment_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common rx_alignment_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_rx_bytes","title":"nic_rx_bytes","text":"

    Bytes received
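
    Counters of type rate, like this one, are cumulative totals on the cluster; Harvest converts them to per-second values by dividing the change between two polls by the elapsed time. A toy sketch with hypothetical sample values:

    # hypothetical cumulative byte counts from two polls taken 60 seconds apart
    prev=1200000000; curr=1260000000; interval=60
    echo $(( (curr - prev) / interval ))   # 1000000 bytes per second over the interval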

    API Endpoint Metric Template REST api/cluster/counter/tables/nic_common receive_bytesUnit: b_per_secType: rateBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common rx_bytesUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_rx_crc_errors","title":"nic_rx_crc_errors","text":"

    CRC errors detected on received packets

    API Endpoint Metric Template REST api/cluster/counter/tables/nic_common receive_crc_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common rx_crc_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_rx_errors","title":"nic_rx_errors","text":"

    Errors received

    API Endpoint Metric Template REST api/cluster/counter/tables/nic_common receive_errorsUnit: b_per_secType: rateBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common rx_errorsUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_rx_length_errors","title":"nic_rx_length_errors","text":"

    Length errors detected on received packets

    API Endpoint Metric Template REST api/cluster/counter/tables/nic_common receive_length_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common rx_length_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_rx_total_errors","title":"nic_rx_total_errors","text":"

    Total errors received

    API Endpoint Metric Template REST api/cluster/counter/tables/nic_common receive_total_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common rx_total_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_tx_bytes","title":"nic_tx_bytes","text":"

    Bytes sent

    API Endpoint Metric Template REST api/cluster/counter/tables/nic_common transmit_bytesUnit: b_per_secType: rateBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common tx_bytesUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_tx_errors","title":"nic_tx_errors","text":"

    Errors sent

    API Endpoint Metric Template REST api/cluster/counter/tables/nic_common transmit_errorsUnit: b_per_secType: rateBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common tx_errorsUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_tx_hw_errors","title":"nic_tx_hw_errors","text":"

    Transmit errors reported by hardware

    API Endpoint Metric Template REST api/cluster/counter/tables/nic_common transmit_hw_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common tx_hw_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#nic_tx_total_errors","title":"nic_tx_total_errors","text":"

    Total errors sent

    API Endpoint Metric Template REST api/cluster/counter/tables/nic_common transmit_total_errorsUnit: noneType: deltaBase: conf/restperf/9.12.0/nic_common.yaml ZAPI perf-object-get-instances nic_common tx_total_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/nic_common.yaml"},{"location":"ontap-metrics/#node_avg_processor_busy","title":"node_avg_processor_busy","text":"

    Average processor utilization across all processors in the system

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node average_processor_busy_percentUnit: percentType: percentBase: cpu_elapsed_time conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node avg_processor_busyUnit: percentType: percentBase: cpu_elapsed_time conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_cifs_connections","title":"node_cifs_connections","text":"

    Number of connections

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node connectionsUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node connectionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_established_sessions","title":"node_cifs_established_sessions","text":"

    Number of established SMB and SMB2 sessions

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node established_sessionsUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node established_sessionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_latency","title":"node_cifs_latency","text":"

    Average latency for CIFS operations

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node latencyUnit: microsecType: averageBase: latency_base conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_latencyUnit: microsecType: averageBase: cifs_latency_base conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_op_count","title":"node_cifs_op_count","text":"

    Array of select CIFS operation counts

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node op_countUnit: noneType: rateBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_op_countUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_open_files","title":"node_cifs_open_files","text":"

    Number of open files over SMB and SMB2

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node open_filesUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node open_filesUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_ops","title":"node_cifs_ops","text":"

    Number of CIFS operations per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node cifs_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node cifs_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_cifs_read_latency","title":"node_cifs_read_latency","text":"

    Average latency for CIFS read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node average_read_latencyUnit: microsecType: averageBase: total_read_ops conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_read_latencyUnit: microsecType: averageBase: cifs_read_ops conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_read_ops","title":"node_cifs_read_ops","text":"

    Total number of CIFS read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node total_read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_total_ops","title":"node_cifs_total_ops","text":"

    Total number of CIFS operations

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_write_latency","title":"node_cifs_write_latency","text":"

    Average latency for CIFS write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node average_write_latencyUnit: microsecType: averageBase: total_write_ops conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_write_latencyUnit: microsecType: averageBase: cifs_write_ops conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cifs_write_ops","title":"node_cifs_write_ops","text":"

    Total number of CIFS write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs:node total_write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/cifs_node.yaml ZAPI perf-object-get-instances cifs:node cifs_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_node.yaml"},{"location":"ontap-metrics/#node_cpu_busy","title":"node_cpu_busy","text":"

    System CPU resource utilization. Returns a computed percentage for the default CPU field, essentially a 'CPU usage summary' value that indicates how busy the system is based on the most heavily utilized domain. The intent is to determine how much CPU remains available before either a single domain maxes out or all available idle CPU cycles are exhausted, whichever occurs first.
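
    The raw counter behind this metric can be inspected by querying the REST performance table listed below. A minimal sketch, assuming an admin credential, a placeholder cluster hostname, and that the rows sub-collection with fields=* returns per-instance counter values as it does on recent ONTAP releases:

    # list the system:node counter rows, including cpu_busy and its base counter cpu_elapsed_time
    curl -sk -u admin "https://cluster.example.com/api/cluster/counter/tables/system:node/rows?fields=*"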

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node cpu_busyUnit: percentType: percentBase: cpu_elapsed_time conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node cpu_busyUnit: percentType: percentBase: cpu_elapsed_time conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_cpu_busytime","title":"node_cpu_busytime","text":"

    The time (in hundredths of a second) that the CPU has been doing useful work since the last boot
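
    The unit here, hundredths of a second, is easy to misread; dividing by 100 gives seconds. A toy sketch with a hypothetical counter value:

    busytime=8640000                      # hypothetical value read from the counter, in hundredths of a second
    echo $(( busytime / 100 )) seconds    # 86400 seconds, i.e. one full day of useful CPU work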

    API Endpoint Metric Template ZAPI system-node-get-iter node-details-info.cpu-busytime conf/zapi/cdot/9.8.0/node.yaml REST api/private/cli/node cpu_busy_time conf/rest/9.12.0/node.yaml"},{"location":"ontap-metrics/#node_cpu_domain_busy","title":"node_cpu_domain_busy","text":"

    Array of processor time in percentage spent in various domains

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node domain_busyUnit: percentType: percentBase: cpu_elapsed_time conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node domain_busyUnit: percentType: percentBase: cpu_elapsed_time conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_cpu_elapsed_time","title":"node_cpu_elapsed_time","text":"

    Elapsed time since boot

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node cpu_elapsed_timeUnit: microsecType: deltaBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node cpu_elapsed_timeUnit: noneType: delta,no-displayBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_disk_busy","title":"node_disk_busy","text":"

    The utilization percent of the disk. node_disk_busy is disk_busy aggregated by node.
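
    Both the per-disk series and this node-level roll-up are exported, so the aggregation is easy to see side by side. A minimal sketch, assuming Harvest's Prometheus exporter is reachable on localhost:12990:

    curl -s http://localhost:12990/metrics | grep '^disk_busy{'        # one series per disk constituent
    curl -s http://localhost:12990/metrics | grep '^node_disk_busy{'   # the same counter aggregated per node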

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_capacity","title":"node_disk_capacity","text":"

    Disk capacity in MB. node_disk_capacity is disk_capacity aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_cp_read_chain","title":"node_disk_cp_read_chain","text":"

    Average number of blocks transferred in each consistency point read operation during a CP. node_disk_cp_read_chain is disk_cp_read_chain aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_cp_read_latency","title":"node_disk_cp_read_latency","text":"

    Average latency per block in microseconds for consistency point read operations. node_disk_cp_read_latency is disk_cp_read_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_cp_reads","title":"node_disk_cp_reads","text":"

    Number of disk read operations initiated each second for consistency point processing. node_disk_cp_reads is disk_cp_reads aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_data_read","title":"node_disk_data_read","text":"

    Number of disk kilobytes (KB) read per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node disk_data_readUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node disk_data_readUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_disk_data_written","title":"node_disk_data_written","text":"

    Number of disk kilobytes (KB) written per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node disk_data_writtenUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node disk_data_writtenUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_disk_io_pending","title":"node_disk_io_pending","text":"

    Average number of I/Os issued to the disk for which we have not yet received the response. node_disk_io_pending is disk_io_pending aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_io_queued","title":"node_disk_io_queued","text":"

    Number of I/Os queued to the disk but not yet issued. node_disk_io_queued is disk_io_queued aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_busy","title":"node_disk_max_busy","text":"

    The utilization percent of the disk. node_disk_max_busy is the maximum of disk_busy for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_capacity","title":"node_disk_max_capacity","text":"

    Disk capacity in MB. node_disk_max_capacity is the maximum of disk_capacity for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_cp_read_chain","title":"node_disk_max_cp_read_chain","text":"

    Average number of blocks transferred in each consistency point read operation during a CP. node_disk_max_cp_read_chain is the maximum of disk_cp_read_chain for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_cp_read_latency","title":"node_disk_max_cp_read_latency","text":"

    Average latency per block in microseconds for consistency point read operations. node_disk_max_cp_read_latency is the maximum of disk_cp_read_latency for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_cp_reads","title":"node_disk_max_cp_reads","text":"

    Number of disk read operations initiated each second for consistency point processing. node_disk_max_cp_reads is the maximum of disk_cp_reads for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_disk_busy","title":"node_disk_max_disk_busy","text":"

    The utilization percent of the disk. node_disk_max_disk_busy is the maximum of disk_busy for label node.

    API Endpoint Metric Template ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_disk_capacity","title":"node_disk_max_disk_capacity","text":"

    Disk capacity in MB. node_disk_max_disk_capacity is the maximum of disk_capacity for label node.

    API Endpoint Metric Template ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_io_pending","title":"node_disk_max_io_pending","text":"

    Average number of I/Os issued to the disk for which we have not yet received the response. node_disk_max_io_pending is the maximum of disk_io_pending for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_io_queued","title":"node_disk_max_io_queued","text":"

    Number of I/Os queued to the disk but not yet issued. node_disk_max_io_queued is the maximum of disk_io_queued for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_total_data","title":"node_disk_max_total_data","text":"

    Total throughput for user operations per second. node_disk_max_total_data is the maximum of disk_total_data for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_total_transfers","title":"node_disk_max_total_transfers","text":"

    Total number of disk operations involving data transfer initiated per second. node_disk_max_total_transfers is the maximum of disk_total_transfers for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_read_blocks","title":"node_disk_max_user_read_blocks","text":"

    Number of blocks transferred for user read operations per second. node_disk_max_user_read_blocks is the maximum of disk_user_read_blocks for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_read_chain","title":"node_disk_max_user_read_chain","text":"

    Average number of blocks transferred in each user read operation. node_disk_max_user_read_chain is the maximum of disk_user_read_chain for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_read_latency","title":"node_disk_max_user_read_latency","text":"

    Average latency per block in microseconds for user read operations. node_disk_max_user_read_latency is the maximum of disk_user_read_latency for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_reads","title":"node_disk_max_user_reads","text":"

    Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. node_disk_max_user_reads is the maximum of disk_user_reads for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_write_blocks","title":"node_disk_max_user_write_blocks","text":"

    Number of blocks transferred for user write operations per second. node_disk_max_user_write_blocks is the maximum of disk_user_write_blocks for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_write_chain","title":"node_disk_max_user_write_chain","text":"

    Average number of blocks transferred in each user write operation. node_disk_max_user_write_chain is the maximum of disk_user_write_chain for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_write_latency","title":"node_disk_max_user_write_latency","text":"

    Average latency per block in microseconds for user write operations. node_disk_max_user_write_latency is the maximum of disk_user_write_latency for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_max_user_writes","title":"node_disk_max_user_writes","text":"

    Number of disk write operations initiated each second for storing data or metadata associated with user requests. node_disk_max_user_writes is the maximum of disk_user_writes for label node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_total_data","title":"node_disk_total_data","text":"

    Total throughput for user operations per second. node_disk_total_data is disk_total_data aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_total_transfers","title":"node_disk_total_transfers","text":"

    Total number of disk operations involving data transfer initiated per second. node_disk_total_transfers is disk_total_transfers aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_read_blocks","title":"node_disk_user_read_blocks","text":"

    Number of blocks transferred for user read operations per second. node_disk_user_read_blocks is disk_user_read_blocks aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_read_chain","title":"node_disk_user_read_chain","text":"

    Average number of blocks transferred in each user read operation. node_disk_user_read_chain is disk_user_read_chain aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_read_latency","title":"node_disk_user_read_latency","text":"

    Average latency per block in microseconds for user read operations. node_disk_user_read_latency is disk_user_read_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_reads","title":"node_disk_user_reads","text":"

    Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. node_disk_user_reads is disk_user_reads aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_write_blocks","title":"node_disk_user_write_blocks","text":"

    Number of blocks transferred for user write operations per second. node_disk_user_write_blocks is disk_user_write_blocks aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_write_chain","title":"node_disk_user_write_chain","text":"

    Average number of blocks transferred in each user write operation. node_disk_user_write_chain is disk_user_write_chain aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_write_latency","title":"node_disk_user_write_latency","text":"

    Average latency per block in microseconds for user write operations. node_disk_user_write_latency is disk_user_write_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_disk_user_writes","title":"node_disk_user_writes","text":"

    Number of disk write operations initiated each second for storing data or metadata associated with user requests. node_disk_user_writes is disk_user_writes aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#node_failed_fan","title":"node_failed_fan","text":"

    Number of chassis fans that are not operating within the recommended RPM range.
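
    This metric comes from the cluster configuration API rather than a performance counter table, so it can be spot-checked with a plain REST call. A minimal sketch, assuming an admin credential and a placeholder cluster hostname:

    # return only the failed-fan count for each node
    curl -sk -u admin "https://cluster.example.com/api/cluster/nodes?fields=controller.failed_fan.count"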

    API Endpoint Metric Template REST api/cluster/nodes controller.failed_fan.count conf/rest/9.12.0/node.yaml ZAPI system-node-get-iter node-details-info.env-failed-fan-count conf/zapi/cdot/9.8.0/node.yaml"},{"location":"ontap-metrics/#node_failed_power","title":"node_failed_power","text":"

    Number of failed power supply units.

    API Endpoint Metric Template REST api/cluster/nodes controller.failed_power_supply.count conf/rest/9.12.0/node.yaml ZAPI system-node-get-iter node-details-info.env-failed-power-supply-count conf/zapi/cdot/9.8.0/node.yaml"},{"location":"ontap-metrics/#node_fcp_data_recv","title":"node_fcp_data_recv","text":"

    Number of FCP kilobytes (KB) received per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node fcp_data_receivedUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node fcp_data_recvUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_fcp_data_sent","title":"node_fcp_data_sent","text":"

    Number of FCP kilobytes (KB) sent per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node fcp_data_sentUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node fcp_data_sentUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_fcp_ops","title":"node_fcp_ops","text":"

    Number of FCP operations per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node fcp_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node fcp_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_hdd_data_read","title":"node_hdd_data_read","text":"

    Number of HDD Disk kilobytes (KB) read per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node hdd_data_readUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node hdd_data_readUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_hdd_data_written","title":"node_hdd_data_written","text":"

    Number of HDD kilobytes (KB) written per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node hdd_data_writtenUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node hdd_data_writtenUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_iscsi_ops","title":"node_iscsi_ops","text":"

    Number of iSCSI operations per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node iscsi_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node iscsi_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_memory","title":"node_memory","text":"

    Total memory in megabytes (MB)

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node memoryUnit: noneType: rawBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node memoryUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_net_data_recv","title":"node_net_data_recv","text":"

    Number of network kilobytes (KB) received per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node network_data_receivedUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node net_data_recvUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_net_data_sent","title":"node_net_data_sent","text":"

    Number of network kilobytes (KB) sent per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node network_data_sentUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node net_data_sentUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_nfs_access_avg_latency","title":"node_nfs_access_avg_latency","text":"

    Average latency of Access procedure requests. The counter keeps track of the average response time of Access requests.
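
    Counters of type average, like this one, are cumulative latency sums that Harvest divides by the change in the base counter (access.total / access_total) between two polls. A toy sketch with hypothetical sample values:

    # hypothetical cumulative values from two consecutive polls
    lat_prev=5000000; ops_prev=2000      # microseconds of accumulated latency, total Access ops
    lat_curr=5600000; ops_curr=2400
    echo $(( (lat_curr - lat_prev) / (ops_curr - ops_prev) ))   # 1500 microseconds per Access request over the interval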

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_access_total","title":"node_nfs_access_total","text":"

    Total number of Access procedure requests. It is the total number of access success and access error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_backchannel_ctl_avg_latency","title":"node_nfs_backchannel_ctl_avg_latency","text":"

    Average latency of BACKCHANNEL_CTL operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node backchannel_ctl.average_latencyUnit: microsecType: averageBase: backchannel_ctl.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node backchannel_ctl.average_latencyUnit: microsecType: averageBase: backchannel_ctl.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node backchannel_ctl_avg_latencyUnit: microsecType: average,no-zero-valuesBase: backchannel_ctl_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node backchannel_ctl_avg_latencyUnit: microsecType: average,no-zero-valuesBase: backchannel_ctl_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_backchannel_ctl_total","title":"node_nfs_backchannel_ctl_total","text":"

    Total number of BACKCHANNEL_CTL operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node backchannel_ctl.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node backchannel_ctl.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node backchannel_ctl_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node backchannel_ctl_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_bind_conn_to_session_avg_latency","title":"node_nfs_bind_conn_to_session_avg_latency","text":"

    Average latency of BIND_CONN_TO_SESSION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node bind_connections_to_session.average_latencyUnit: microsecType: averageBase: bind_connections_to_session.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node bind_conn_to_session.average_latencyUnit: microsecType: averageBase: bind_conn_to_session.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node bind_conn_to_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: bind_conn_to_session_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node bind_conn_to_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: bind_conn_to_session_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_bind_conn_to_session_total","title":"node_nfs_bind_conn_to_session_total","text":"

    Total number of BIND_CONN_TO_SESSION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node bind_connections_to_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node bind_conn_to_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node bind_conn_to_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node bind_conn_to_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_close_avg_latency","title":"node_nfs_close_avg_latency","text":"

    Average latency of CLOSE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node close.average_latencyUnit: microsecType: averageBase: close.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node close.average_latencyUnit: microsecType: averageBase: close.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node close.average_latencyUnit: microsecType: averageBase: close.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node close_avg_latencyUnit: microsecType: average,no-zero-valuesBase: close_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node close_avg_latencyUnit: microsecType: average,no-zero-valuesBase: close_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node close_avg_latencyUnit: microsecType: average,no-zero-valuesBase: close_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_close_total","title":"node_nfs_close_total","text":"

    Total number of CLOSE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node close.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node close.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node close.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node close_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node close_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node close_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_commit_avg_latency","title":"node_nfs_commit_avg_latency","text":"

    Average latency of Commit procedure requests. The counter keeps track of the average response time of Commit requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_commit_total","title":"node_nfs_commit_total","text":"

    Total number of Commit procedure requests. It is the total number of Commit success and Commit error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_create_avg_latency","title":"node_nfs_create_avg_latency","text":"

    Average latency of Create procedure requests. The counter keeps track of the average response time of Create requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_create_session_avg_latency","title":"node_nfs_create_session_avg_latency","text":"

    Average latency of CREATE_SESSION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node create_session.average_latencyUnit: microsecType: averageBase: create_session.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node create_session.average_latencyUnit: microsecType: averageBase: create_session.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node create_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_session_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node create_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_session_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_create_session_total","title":"node_nfs_create_session_total","text":"

    Total number of CREATE_SESSION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node create_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node create_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node create_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node create_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_create_total","title":"node_nfs_create_total","text":"

    Total number of Create procedure requests. It is the total number of create success and create error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_delegpurge_avg_latency","title":"node_nfs_delegpurge_avg_latency","text":"

    Average latency of DELEGPURGE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node delegpurge.average_latencyUnit: microsecType: averageBase: delegpurge.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node delegpurge.average_latencyUnit: microsecType: averageBase: delegpurge.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node delegpurge.average_latencyUnit: microsecType: averageBase: delegpurge.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node delegpurge_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegpurge_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node delegpurge_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegpurge_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node delegpurge_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegpurge_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_delegpurge_total","title":"node_nfs_delegpurge_total","text":"

    Total number of DELEGPURGE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node delegpurge.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node delegpurge.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node delegpurge.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node delegpurge_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node delegpurge_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node delegpurge_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_delegreturn_avg_latency","title":"node_nfs_delegreturn_avg_latency","text":"

    Average latency of DELEGRETURN operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node delegreturn.average_latencyUnit: microsecType: averageBase: delegreturn.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node delegreturn.average_latencyUnit: microsecType: averageBase: delegreturn.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node delegreturn.average_latencyUnit: microsecType: averageBase: delegreturn.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node delegreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegreturn_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node delegreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegreturn_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node delegreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegreturn_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_delegreturn_total","title":"node_nfs_delegreturn_total","text":"

    Total number of DELEGRETURN operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node delegreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node delegreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node delegreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node delegreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node delegreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node delegreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_destroy_clientid_avg_latency","title":"node_nfs_destroy_clientid_avg_latency","text":"

    Average latency of DESTROY_CLIENTID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node destroy_clientid.average_latencyUnit: microsecType: averageBase: destroy_clientid.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node destroy_clientid.average_latencyUnit: microsecType: averageBase: destroy_clientid.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node destroy_clientid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_clientid_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node destroy_clientid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_clientid_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_destroy_clientid_total","title":"node_nfs_destroy_clientid_total","text":"

    Total number of DESTROY_CLIENTID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node destroy_clientid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node destroy_clientid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node destroy_clientid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node destroy_clientid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_destroy_session_avg_latency","title":"node_nfs_destroy_session_avg_latency","text":"

    Average latency of DESTROY_SESSION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node destroy_session.average_latencyUnit: microsecType: averageBase: destroy_session.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node destroy_session.average_latencyUnit: microsecType: averageBase: destroy_session.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node destroy_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_session_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node destroy_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_session_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_destroy_session_total","title":"node_nfs_destroy_session_total","text":"

    Total number of DESTROY_SESSION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node destroy_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node destroy_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node destroy_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node destroy_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_exchange_id_avg_latency","title":"node_nfs_exchange_id_avg_latency","text":"

    Average latency of EXCHANGE_ID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node exchange_id.average_latencyUnit: microsecType: averageBase: exchange_id.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node exchange_id.average_latencyUnit: microsecType: averageBase: exchange_id.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node exchange_id_avg_latencyUnit: microsecType: average,no-zero-valuesBase: exchange_id_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node exchange_id_avg_latencyUnit: microsecType: average,no-zero-valuesBase: exchange_id_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_exchange_id_total","title":"node_nfs_exchange_id_total","text":"

    Total number of EXCHANGE_ID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node exchange_id.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node exchange_id.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node exchange_id_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node exchange_id_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_free_stateid_avg_latency","title":"node_nfs_free_stateid_avg_latency","text":"

    Average latency of FREE_STATEID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node free_stateid.average_latencyUnit: microsecType: averageBase: free_stateid.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node free_stateid.average_latencyUnit: microsecType: averageBase: free_stateid.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node free_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: free_stateid_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node free_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: free_stateid_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_free_stateid_total","title":"node_nfs_free_stateid_total","text":"

    Total number of FREE_STATEID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node free_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node free_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node free_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node free_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_fsinfo_avg_latency","title":"node_nfs_fsinfo_avg_latency","text":"

    Average latency of FSInfo procedure requests. The counter keeps track of the average response time of FSInfo requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node fsinfo.average_latencyUnit: microsecType: averageBase: fsinfo.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node fsinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: fsinfo_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_fsinfo_total","title":"node_nfs_fsinfo_total","text":"

    Total number of FSInfo procedure requests. It is the total number of FSInfo success and FSInfo error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node fsinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node fsinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_fsstat_avg_latency","title":"node_nfs_fsstat_avg_latency","text":"

    Average latency of FSStat procedure requests. The counter keeps track of the average response time of FSStat requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node fsstat.average_latencyUnit: microsecType: averageBase: fsstat.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node fsstat_avg_latencyUnit: microsecType: average,no-zero-valuesBase: fsstat_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_fsstat_total","title":"node_nfs_fsstat_total","text":"

    Total number of FSStat procedure requests. It is the total number of FSStat success and FSStat error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node fsstat.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node fsstat_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_get_dir_delegation_avg_latency","title":"node_nfs_get_dir_delegation_avg_latency","text":"

    Average latency of GET_DIR_DELEGATION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node get_dir_delegation.average_latencyUnit: microsecType: averageBase: get_dir_delegation.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node get_dir_delegation.average_latencyUnit: microsecType: averageBase: get_dir_delegation.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node get_dir_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: get_dir_delegation_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node get_dir_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: get_dir_delegation_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_get_dir_delegation_total","title":"node_nfs_get_dir_delegation_total","text":"

    Total number of GET_DIR_DELEGATION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node get_dir_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node get_dir_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node get_dir_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node get_dir_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_getattr_avg_latency","title":"node_nfs_getattr_avg_latency","text":"

    Average latency of GetAttr procedure requests. This counter keeps track of the average response time of GetAttr requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_getattr_total","title":"node_nfs_getattr_total","text":"

    Total number of Getattr procedure requests. It is the total number of getattr success and getattr error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_getdeviceinfo_avg_latency","title":"node_nfs_getdeviceinfo_avg_latency","text":"

    Average latency of GETDEVICEINFO operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node getdeviceinfo.average_latencyUnit: microsecType: averageBase: getdeviceinfo.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getdeviceinfo.average_latencyUnit: microsecType: averageBase: getdeviceinfo.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getdeviceinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdeviceinfo_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getdeviceinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdeviceinfo_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_getdeviceinfo_total","title":"node_nfs_getdeviceinfo_total","text":"

    Total number of GETDEVICEINFO operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node getdeviceinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getdeviceinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getdeviceinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getdeviceinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_getdevicelist_avg_latency","title":"node_nfs_getdevicelist_avg_latency","text":"

    Average latency of GETDEVICELIST operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node getdevicelist.average_latencyUnit: microsecType: averageBase: getdevicelist.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getdevicelist.average_latencyUnit: microsecType: averageBase: getdevicelist.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getdevicelist_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdevicelist_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getdevicelist_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdevicelist_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_getdevicelist_total","title":"node_nfs_getdevicelist_total","text":"

    Total number of GETDEVICELIST operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node getdevicelist.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getdevicelist.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getdevicelist_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getdevicelist_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_getfh_avg_latency","title":"node_nfs_getfh_avg_latency","text":"

    Average latency of GETFH operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node getfh.average_latencyUnit: microsecType: averageBase: getfh.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getfh.average_latencyUnit: microsecType: averageBase: getfh.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node getfh.average_latencyUnit: microsecType: averageBase: getfh.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node getfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getfh_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_getfh_total","title":"node_nfs_getfh_total","text":"

    Total number of GETFH operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node getfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node getfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node getfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node getfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node getfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node getfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_latency","title":"node_nfs_latency","text":"

    Average latency of NFSv3 requests. This counter keeps track of the average response time of NFSv3 requests.
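
    A minimal sketch of reading this metric back from Prometheus once Harvest has exported it; the Prometheus address, the use of the requests library, and the node label shown in the output are assumptions about a typical setup, not part of this reference.

        import requests  # assumed HTTP client for Prometheus' query API

        # Assumed Prometheus address; change to match your environment.
        PROM_QUERY_URL = "http://localhost:9090/api/v1/query"

        # node_nfs_latency is the Harvest metric documented above (unit: microsec).
        resp = requests.get(PROM_QUERY_URL, params={"query": "node_nfs_latency"}, timeout=10)
        resp.raise_for_status()

        for series in resp.json()["data"]["result"]:
            labels = series["metric"]
            value = series["value"][1]
            # A node label is typical for Harvest node-scoped metrics, so fall back gracefully.
            print(labels.get("node", "<unknown node>"), value, "microsec")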

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_layoutcommit_avg_latency","title":"node_nfs_layoutcommit_avg_latency","text":"

    Average latency of LAYOUTCOMMIT operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node layoutcommit.average_latencyUnit: microsecType: averageBase: layoutcommit.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node layoutcommit.average_latencyUnit: microsecType: averageBase: layoutcommit.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node layoutcommit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutcommit_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node layoutcommit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutcommit_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_layoutcommit_total","title":"node_nfs_layoutcommit_total","text":"

    Total number of LAYOUTCOMMIT operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node layoutcommit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node layoutcommit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node layoutcommit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node layoutcommit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_layoutget_avg_latency","title":"node_nfs_layoutget_avg_latency","text":"

    Average latency of LAYOUTGET operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node layoutget.average_latencyUnit: microsecType: averageBase: layoutget.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node layoutget.average_latencyUnit: microsecType: averageBase: layoutget.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node layoutget_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutget_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node layoutget_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutget_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_layoutget_total","title":"node_nfs_layoutget_total","text":"

    Total number of LAYOUTGET operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node layoutget.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node layoutget.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node layoutget_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node layoutget_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_layoutreturn_avg_latency","title":"node_nfs_layoutreturn_avg_latency","text":"

    Average latency of LAYOUTRETURN operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node layoutreturn.average_latencyUnit: microsecType: averageBase: layoutreturn.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node layoutreturn.average_latencyUnit: microsecType: averageBase: layoutreturn.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node layoutreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutreturn_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node layoutreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutreturn_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_layoutreturn_total","title":"node_nfs_layoutreturn_total","text":"

    Total number of LAYOUTRETURN operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node layoutreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node layoutreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node layoutreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node layoutreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_link_avg_latency","title":"node_nfs_link_avg_latency","text":"

    Average latency of Link procedure requests. The counter keeps track of the average response time of Link requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_link_total","title":"node_nfs_link_total","text":"

    Total number of Link procedure requests. It is the total number of Link success and Link error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lock_avg_latency","title":"node_nfs_lock_avg_latency","text":"

    Average latency of LOCK operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node lock.average_latencyUnit: microsecType: averageBase: lock.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lock.average_latencyUnit: microsecType: averageBase: lock.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lock.average_latencyUnit: microsecType: averageBase: lock.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lock_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lock_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lock_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lock_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lock_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lock_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lock_total","title":"node_nfs_lock_total","text":"

    Total number of LOCK operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node lock.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lock.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lock.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lock_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lock_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lock_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lockt_avg_latency","title":"node_nfs_lockt_avg_latency","text":"

    Average latency of LOCKT operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node lockt.average_latencyUnit: microsecType: averageBase: lockt.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lockt.average_latencyUnit: microsecType: averageBase: lockt.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lockt.average_latencyUnit: microsecType: averageBase: lockt.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lockt_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lockt_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lockt_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lockt_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lockt_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lockt_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lockt_total","title":"node_nfs_lockt_total","text":"

    Total number of LOCKT operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node lockt.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lockt.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lockt.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lockt_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lockt_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lockt_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_locku_avg_latency","title":"node_nfs_locku_avg_latency","text":"

    Average latency of LOCKU operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node locku.average_latencyUnit: microsecType: averageBase: locku.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node locku.average_latencyUnit: microsecType: averageBase: locku.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node locku.average_latencyUnit: microsecType: averageBase: locku.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node locku_avg_latencyUnit: microsecType: average,no-zero-valuesBase: locku_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node locku_avg_latencyUnit: microsecType: average,no-zero-valuesBase: locku_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node locku_avg_latencyUnit: microsecType: average,no-zero-valuesBase: locku_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_locku_total","title":"node_nfs_locku_total","text":"

    Total number of LOCKU operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node locku.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node locku.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node locku.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node locku_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node locku_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node locku_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lookup_avg_latency","title":"node_nfs_lookup_avg_latency","text":"

    Average latency of LookUp procedure requests. This counter tracks the average time taken for a LookUp operation to return a reply.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lookup_total","title":"node_nfs_lookup_total","text":"

    Total number of Lookup procedure requests. It is the total number of lookup success and lookup error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lookupp_avg_latency","title":"node_nfs_lookupp_avg_latency","text":"

    Average latency of LOOKUPP operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node lookupp.average_latencyUnit: microsecType: averageBase: lookupp.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lookupp.average_latencyUnit: microsecType: averageBase: lookupp.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lookupp.average_latencyUnit: microsecType: averageBase: lookupp.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lookupp_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookupp_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lookupp_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookupp_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lookupp_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookupp_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_lookupp_total","title":"node_nfs_lookupp_total","text":"

    Total number of LOOKUPP operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node lookupp.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node lookupp.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node lookupp.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node lookupp_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node lookupp_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node lookupp_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_mkdir_avg_latency","title":"node_nfs_mkdir_avg_latency","text":"

    Average latency of MkDir procedure requests. The counter keeps track of the average response time of MkDir requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node mkdir.average_latencyUnit: microsecType: averageBase: mkdir.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node mkdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: mkdir_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_mkdir_total","title":"node_nfs_mkdir_total","text":"

    Total number of MkDir procedure requests. It is the total number of MkDir success and MkDir error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node mkdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node mkdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_mknod_avg_latency","title":"node_nfs_mknod_avg_latency","text":"

    Average latency of MkNod procedure requests. The counter keeps track of the average response time of MkNod requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node mknod.average_latencyUnit: microsecType: averageBase: mknod.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node mknod_avg_latencyUnit: microsecType: average,no-zero-valuesBase: mknod_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_mknod_total","title":"node_nfs_mknod_total","text":"

    Total number of MkNod procedure requests. It is the total number of MkNod success and MkNod error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node mknod.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node mknod_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_null_avg_latency","title":"node_nfs_null_avg_latency","text":"

    Average latency of Null procedure requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_null_total","title":"node_nfs_null_total","text":"

    Total number of Null procedure requests. It is the total of null success and null error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_nverify_avg_latency","title":"node_nfs_nverify_avg_latency","text":"

    Average latency of NVERIFY operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node nverify.average_latencyUnit: microsecType: averageBase: nverify.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node nverify.average_latencyUnit: microsecType: averageBase: nverify.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node nverify.average_latencyUnit: microsecType: averageBase: nverify.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node nverify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: nverify_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node nverify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: nverify_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node nverify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: nverify_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_nverify_total","title":"node_nfs_nverify_total","text":"

    Total number of NVERIFY operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node nverify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node nverify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node nverify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node nverify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node nverify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node nverify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_open_avg_latency","title":"node_nfs_open_avg_latency","text":"

    Average latency of OPEN operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node open.average_latencyUnit: microsecType: averageBase: open.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node open.average_latencyUnit: microsecType: averageBase: open.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node open.average_latencyUnit: microsecType: averageBase: open.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node open_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node open_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node open_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_open_confirm_avg_latency","title":"node_nfs_open_confirm_avg_latency","text":"

    Average latency of OPEN_CONFIRM procedures.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node open_confirm.average_latencyUnit: microsecType: averageBase: open_confirm.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node open_confirm_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_confirm_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_open_confirm_total","title":"node_nfs_open_confirm_total","text":"

    Total number of OPEN_CONFIRM procedures.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node open_confirm.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node open_confirm_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_open_downgrade_avg_latency","title":"node_nfs_open_downgrade_avg_latency","text":"

    Average latency of OPEN_DOWNGRADE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node open_downgrade.average_latencyUnit: microsecType: averageBase: open_downgrade.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node open_downgrade.average_latencyUnit: microsecType: averageBase: open_downgrade.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node open_downgrade.average_latencyUnit: microsecType: averageBase: open_downgrade.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node open_downgrade_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_downgrade_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node open_downgrade_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_downgrade_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node open_downgrade_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_downgrade_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_open_downgrade_total","title":"node_nfs_open_downgrade_total","text":"

    Total number of OPEN_DOWNGRADE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node open_downgrade.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node open_downgrade.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node open_downgrade.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node open_downgrade_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node open_downgrade_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node open_downgrade_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_open_total","title":"node_nfs_open_total","text":"

    Total number of OPEN operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node open.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node open.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node open.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node open_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node open_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node open_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_openattr_avg_latency","title":"node_nfs_openattr_avg_latency","text":"

    Average latency of OPENATTR operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node openattr.average_latencyUnit: microsecType: averageBase: openattr.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node openattr.average_latencyUnit: microsecType: averageBase: openattr.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node openattr.average_latencyUnit: microsecType: averageBase: openattr.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node openattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: openattr_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node openattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: openattr_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node openattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: openattr_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_openattr_total","title":"node_nfs_openattr_total","text":"

    Total number of OPENATTR operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node openattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node openattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node openattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node openattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node openattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node openattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_ops","title":"node_nfs_ops","text":"

    Number of NFS operations per second.
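
    As an illustrative sketch (not part of the ONTAP counter definition), and assuming Harvest's Prometheus exporter attaches a node label to node-scoped metrics, a PromQL query such as the one below charts NFS operations per second for each node:

    sum by (node) (node_nfs_ops)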

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node nfs_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node nfs_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_nfs_pathconf_avg_latency","title":"node_nfs_pathconf_avg_latency","text":"

    Average latency of PathConf procedure requests. The counter keeps track of the average response time of PathConf requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node pathconf.average_latencyUnit: microsecType: averageBase: pathconf.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node pathconf_avg_latencyUnit: microsecType: average,no-zero-valuesBase: pathconf_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_pathconf_total","title":"node_nfs_pathconf_total","text":"

    Total number of PathConf procedure requests. It is the total number of PathConf success and PathConf error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node pathconf.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node pathconf_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_putfh_avg_latency","title":"node_nfs_putfh_avg_latency","text":"

    Average latency of PUTFH operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node putfh.average_latencyUnit: noneType: deltaBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node putfh.average_latencyUnit: microsecType: averageBase: putfh.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node putfh.average_latencyUnit: microsecType: averageBase: putfh.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node putfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node putfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node putfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putfh_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_putfh_total","title":"node_nfs_putfh_total","text":"

    Total number of PUTFH operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node putfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node putfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node putfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node putfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node putfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node putfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_putpubfh_avg_latency","title":"node_nfs_putpubfh_avg_latency","text":"

    Average latency of PUTPUBFH operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node putpubfh.average_latencyUnit: microsecType: averageBase: putpubfh.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node putpubfh.average_latencyUnit: microsecType: averageBase: putpubfh.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node putpubfh.average_latencyUnit: microsecType: averageBase: putpubfh.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node putpubfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putpubfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node putpubfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putpubfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node putpubfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putpubfh_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_putpubfh_total","title":"node_nfs_putpubfh_total","text":"

    Total number of PUTPUBFH operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node putpubfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node putpubfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node putpubfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node putpubfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node putpubfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node putpubfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_putrootfh_avg_latency","title":"node_nfs_putrootfh_avg_latency","text":"

    Average latency of PUTROOTFH operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node putrootfh.average_latencyUnit: microsecType: averageBase: putrootfh.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node putrootfh.average_latencyUnit: microsecType: averageBase: putrootfh.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node putrootfh.average_latencyUnit: microsecType: averageBase: putrootfh.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node putrootfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putrootfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node putrootfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putrootfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node putrootfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putrootfh_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_putrootfh_total","title":"node_nfs_putrootfh_total","text":"

    Total number of PUTROOTFH operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node putrootfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node putrootfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node putrootfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node putrootfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node putrootfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node putrootfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_read_avg_latency","title":"node_nfs_read_avg_latency","text":"

    Average latency of Read procedure requests. The counter keeps track of the average response time of Read requests.
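
    For example, because this counter is reported in microseconds, a hedged PromQL sketch that displays it in milliseconds is simply:

    node_nfs_read_avg_latency / 1000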

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_read_ops","title":"node_nfs_read_ops","text":"

    Total observed NFSv3 read operations per second.
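
    A possible PromQL sketch (assuming the default Prometheus exporter) to surface the nodes serving the most NFSv3 reads:

    topk(5, node_nfs_read_ops)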

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node nfsv3_read_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_read_symlink_avg_latency","title":"node_nfs_read_symlink_avg_latency","text":"

    Average latency of ReadSymLink procedure requests. The counter keeps track of the average response time of ReadSymLink requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node read_symlink.average_latencyUnit: microsecType: averageBase: read_symlink.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node read_symlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_symlink_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_read_symlink_total","title":"node_nfs_read_symlink_total","text":"

    Total number of ReadSymLink procedure requests. It is the total number of read symlink success and read symlink error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node read_symlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node read_symlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_read_throughput","title":"node_nfs_read_throughput","text":"

    Rate of NFSv3 read data transfers per second.
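
    As an illustration only, and assuming Harvest adds a cluster label to exported series, per-cluster NFS read throughput can be summed with:

    sum by (cluster) (node_nfs_read_throughput)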

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node total.read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node total.read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node total.read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node nfsv3_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node nfs41_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node nfs42_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node nfs4_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_read_total","title":"node_nfs_read_total","text":"

    Total number of Read procedure requests. It is the total number of read success and read error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_readdir_avg_latency","title":"node_nfs_readdir_avg_latency","text":"

    Average latency of ReadDir procedure requests. The counter keeps track of the average response time of ReadDir requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_readdir_total","title":"node_nfs_readdir_total","text":"

    Total number of ReadDir procedure requests. It is the total number of ReadDir success and ReadDir error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_readdirplus_avg_latency","title":"node_nfs_readdirplus_avg_latency","text":"

    Average latency of ReadDirPlus procedure requests. The counter keeps track of the average response time of ReadDirPlus requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node readdirplus.average_latencyUnit: microsecType: averageBase: readdirplus.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node readdirplus_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdirplus_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_readdirplus_total","title":"node_nfs_readdirplus_total","text":"

    Total number of ReadDirPlus procedure requests. It is the total number of ReadDirPlus success and ReadDirPlus error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node readdirplus.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node readdirplus_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_readlink_avg_latency","title":"node_nfs_readlink_avg_latency","text":"

    Average latency of READLINK operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node readlink.average_latencyUnit: microsecType: averageBase: readlink.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node readlink.average_latencyUnit: microsecType: averageBase: readlink.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node readlink.average_latencyUnit: microsecType: averageBase: readlink.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node readlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readlink_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node readlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readlink_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node readlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readlink_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_readlink_total","title":"node_nfs_readlink_total","text":"

    Total number of READLINK operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node readlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node readlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node readlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node readlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node readlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node readlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_reclaim_complete_avg_latency","title":"node_nfs_reclaim_complete_avg_latency","text":"

    Average latency of RECLAIM_COMPLETE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node reclaim_complete.average_latencyUnit: microsecType: averageBase: reclaim_complete.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node reclaim_complete.average_latencyUnit: microsecType: averageBase: reclaim_complete.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node reclaim_complete_avg_latencyUnit: microsecType: average,no-zero-valuesBase: reclaim_complete_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node reclaim_complete_avg_latencyUnit: microsecType: average,no-zero-valuesBase: reclaim_complete_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_reclaim_complete_total","title":"node_nfs_reclaim_complete_total","text":"

    Total number of RECLAIM_COMPLETE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node reclaim_complete.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node reclaim_complete.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node reclaim_complete_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node reclaim_complete_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_release_lock_owner_avg_latency","title":"node_nfs_release_lock_owner_avg_latency","text":"

    Average latency of RELEASE_LOCKOWNER procedures.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node release_lock_owner.average_latencyUnit: microsecType: averageBase: release_lock_owner.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node release_lock_owner_avg_latencyUnit: microsecType: average,no-zero-valuesBase: release_lock_owner_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_release_lock_owner_total","title":"node_nfs_release_lock_owner_total","text":"

    Total number of RELEASE_LOCKOWNER procedures.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node release_lock_owner.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node release_lock_owner_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_remove_avg_latency","title":"node_nfs_remove_avg_latency","text":"

    Average latency of Remove procedure requests. The counter keeps track of the average response time of Remove requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_remove_total","title":"node_nfs_remove_total","text":"

    Total number of Remove procedure requests. It is the total number of Remove success and Remove error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_rename_avg_latency","title":"node_nfs_rename_avg_latency","text":"

    Average latency of Rename procedure requests. The counter keeps track of the average response time of Rename requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_rename_total","title":"node_nfs_rename_total","text":"

    Total number of Rename procedure requests. It is the total number of Rename success and Rename error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_renew_avg_latency","title":"node_nfs_renew_avg_latency","text":"

    Average latency of RENEW procedures.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node renew.average_latencyUnit: microsecType: averageBase: renew.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node renew_avg_latencyUnit: microsecType: average,no-zero-valuesBase: renew_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_renew_total","title":"node_nfs_renew_total","text":"

    Total number of RENEW procedures.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node renew.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node renew_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_restorefh_avg_latency","title":"node_nfs_restorefh_avg_latency","text":"

    Average latency of RESTOREFH operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node restorefh.average_latencyUnit: microsecType: averageBase: restorefh.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node restorefh.average_latencyUnit: microsecType: averageBase: restorefh.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node restorefh.average_latencyUnit: microsecType: averageBase: restorefh.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node restorefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: restorefh_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node restorefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: restorefh_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node restorefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: restorefh_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_restorefh_total","title":"node_nfs_restorefh_total","text":"

    Total number of RESTOREFH operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node restorefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node restorefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node restorefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node restorefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node restorefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node restorefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_rmdir_avg_latency","title":"node_nfs_rmdir_avg_latency","text":"

    Average latency of RmDir procedure requests. The counter keeps track of the average response time of RmDir requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node rmdir.average_latencyUnit: microsecType: averageBase: rmdir.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node rmdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rmdir_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_rmdir_total","title":"node_nfs_rmdir_total","text":"

    Total number of RmDir procedure requests. It is the total number of RmDir success and RmDir error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node rmdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node rmdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_savefh_avg_latency","title":"node_nfs_savefh_avg_latency","text":"

    Average latency of SAVEFH operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node savefh.average_latencyUnit: microsecType: averageBase: savefh.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node savefh.average_latencyUnit: microsecType: averageBase: savefh.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node savefh.average_latencyUnit: microsecType: averageBase: savefh.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node savefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: savefh_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node savefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: savefh_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node savefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: savefh_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_savefh_total","title":"node_nfs_savefh_total","text":"

    Total number of SAVEFH operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node savefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node savefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node savefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node savefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node savefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node savefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_secinfo_avg_latency","title":"node_nfs_secinfo_avg_latency","text":"

    Average latency of SECINFO operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node secinfo.average_latencyUnit: microsecType: averageBase: secinfo.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node secinfo.average_latencyUnit: microsecType: averageBase: secinfo.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node secinfo.average_latencyUnit: microsecType: averageBase: secinfo.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node secinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node secinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node secinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_secinfo_no_name_avg_latency","title":"node_nfs_secinfo_no_name_avg_latency","text":"

    Average latency of SECINFO_NO_NAME operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node secinfo_no_name.average_latencyUnit: microsecType: averageBase: secinfo_no_name.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node secinfo_no_name.average_latencyUnit: microsecType: averageBase: secinfo_no_name.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node secinfo_no_name_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_no_name_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node secinfo_no_name_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_no_name_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_secinfo_no_name_total","title":"node_nfs_secinfo_no_name_total","text":"

    Total number of SECINFO_NO_NAME operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node secinfo_no_name.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node secinfo_no_name.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node secinfo_no_name_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node secinfo_no_name_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_secinfo_total","title":"node_nfs_secinfo_total","text":"

    Total number of SECINFO operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node secinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node secinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node secinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node secinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node secinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node secinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_sequence_avg_latency","title":"node_nfs_sequence_avg_latency","text":"

    Average latency of SEQUENCE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node sequence.average_latencyUnit: microsecType: averageBase: sequence.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node sequence.average_latencyUnit: microsecType: averageBase: sequence.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node sequence_avg_latencyUnit: microsecType: average,no-zero-valuesBase: sequence_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node sequence_avg_latencyUnit: microsecType: average,no-zero-valuesBase: sequence_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_sequence_total","title":"node_nfs_sequence_total","text":"

    Total number of SEQUENCE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node sequence.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node sequence.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node sequence_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node sequence_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_set_ssv_avg_latency","title":"node_nfs_set_ssv_avg_latency","text":"

    Average latency of SET_SSV operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node set_ssv.average_latencyUnit: microsecType: averageBase: set_ssv.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node set_ssv.average_latencyUnit: microsecType: averageBase: set_ssv.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node set_ssv_avg_latencyUnit: microsecType: average,no-zero-valuesBase: set_ssv_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node set_ssv_avg_latencyUnit: microsecType: average,no-zero-valuesBase: set_ssv_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_set_ssv_total","title":"node_nfs_set_ssv_total","text":"

    Total number of SET_SSV operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node set_ssv.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node set_ssv.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node set_ssv_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node set_ssv_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_setattr_avg_latency","title":"node_nfs_setattr_avg_latency","text":"

    Average latency of SetAttr procedure requests. The counter keeps track of the average response time of SetAttr requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_setattr_total","title":"node_nfs_setattr_total","text":"

    Total number of SetAttr procedure requests. It is the total number of SetAttr success and SetAttr error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_setclientid_avg_latency","title":"node_nfs_setclientid_avg_latency","text":"

    Average latency of SETCLIENTID procedures.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node setclientid.average_latencyUnit: microsecType: averageBase: setclientid.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node setclientid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setclientid_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_setclientid_confirm_avg_latency","title":"node_nfs_setclientid_confirm_avg_latency","text":"

    Average latency of SETCLIENTID_CONFIRM procedures.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node setclientid_confirm.average_latencyUnit: microsecType: averageBase: setclientid_confirm.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node setclientid_confirm_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setclientid_confirm_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_setclientid_confirm_total","title":"node_nfs_setclientid_confirm_total","text":"

    Total number of SETCLIENTID_CONFIRM procedures.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node setclientid_confirm.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node setclientid_confirm_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_setclientid_total","title":"node_nfs_setclientid_total","text":"

    Total number of SETCLIENTID procedures.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4:node setclientid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4:node setclientid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_symlink_avg_latency","title":"node_nfs_symlink_avg_latency","text":"

    Average latency of SymLink procedure requests. The counter keeps track of the average response time of SymLink requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node symlink.average_latencyUnit: microsecType: averageBase: symlink.total conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node symlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: symlink_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_symlink_total","title":"node_nfs_symlink_total","text":"

    Total number of SymLink procedure requests. It is the total number of SymLink success and SymLink error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node symlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node symlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_test_stateid_avg_latency","title":"node_nfs_test_stateid_avg_latency","text":"

    Average latency of TEST_STATEID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node test_stateid.average_latencyUnit: microsecType: averageBase: test_stateid.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node test_stateid.average_latencyUnit: microsecType: averageBase: test_stateid.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node test_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: test_stateid_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node test_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: test_stateid_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_test_stateid_total","title":"node_nfs_test_stateid_total","text":"

    Total number of TEST_STATEID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node test_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node test_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node test_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node test_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_throughput","title":"node_nfs_throughput","text":"

    Rate of NFSv3 data transfers per second.
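
    One hedged PromQL sketch for the read share of total NFS throughput, assuming node_nfs_read_throughput and node_nfs_throughput carry identical label sets (otherwise on()/ignoring() matching is needed):

    node_nfs_read_throughput / node_nfs_throughput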

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node total.throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node total.throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node total.throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node nfsv3_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node nfs41_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node nfs42_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node nfs4_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_total_ops","title":"node_nfs_total_ops","text":"

    Total number of NFSv3 procedure requests per second.
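    Once Harvest is exporting to Prometheus, the per-node series can be rolled up ad hoc with PromQL. A hedged sketch (not part of the counter documentation); the Prometheus address and the cluster label are assumptions about a typical Harvest deployment:

    # Cluster-wide NFS op rate, summed across nodes.
    # Assumes Prometheus at localhost:9090 and Harvest's usual cluster label.
    curl -s 'http://localhost:9090/api/v1/query' \
      --data-urlencode 'query=sum by (cluster) (node_nfs_total_ops)'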

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node nfsv3_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node total_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node total_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node total_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_verify_avg_latency","title":"node_nfs_verify_avg_latency","text":"

    Average latency of VERIFY operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node verify.average_latencyUnit: microsecType: averageBase: verify.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node verify.average_latencyUnit: microsecType: averageBase: verify.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node verify.average_latencyUnit: microsecType: averageBase: verify.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node verify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: verify_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node verify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: verify_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node verify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: verify_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_verify_total","title":"node_nfs_verify_total","text":"

    Total number of VERIFY operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node verify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node verify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node verify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv4_1:node verify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node verify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node verify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_want_delegation_avg_latency","title":"node_nfs_want_delegation_avg_latency","text":"

    Average latency of WANT_DELEGATION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node want_delegation.average_latencyUnit: microsecType: averageBase: want_delegation.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node want_delegation.average_latencyUnit: microsecType: averageBase: want_delegation.total conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node want_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: want_delegation_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node want_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: want_delegation_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_want_delegation_total","title":"node_nfs_want_delegation_total","text":"

    Total number of WANT_DELEGATION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41:node want_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node want_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4_1:node want_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node want_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml"},{"location":"ontap-metrics/#node_nfs_write_avg_latency","title":"node_nfs_write_avg_latency","text":"

    Average latency of Write procedure requests. The counter keeps track of the average response time of Write requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_write_ops","title":"node_nfs_write_ops","text":"

    Total observed NFSv3 write operations per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv3:node nfsv3_write_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml"},{"location":"ontap-metrics/#node_nfs_write_throughput","title":"node_nfs_write_throughput","text":"

    Rate of NFSv3 write data transfers per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node total.write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node total.write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node total.write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node nfsv3_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node nfs41_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node nfs42_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node nfs4_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nfs_write_total","title":"node_nfs_write_total","text":"

    Total number of Write procedure requests. It is the total number of write success and write error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3:node write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3_node.yaml REST api/cluster/counter/tables/svm_nfs_v41:node write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1_node.yaml REST api/cluster/counter/tables/svm_nfs_v42:node write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2_node.yaml REST api/cluster/counter/tables/svm_nfs_v4:node write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_node.yaml ZAPI perf-object-get-instances nfsv3:node write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml ZAPI perf-object-get-instances nfsv4_1:node write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml ZAPI perf-object-get-instances nfsv4_2:node write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml ZAPI perf-object-get-instances nfsv4:node write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml"},{"location":"ontap-metrics/#node_nvmf_data_recv","title":"node_nvmf_data_recv","text":"

    NVMe/FC kilobytes (KB) received per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node nvme_fc_data_receivedUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node nvmf_data_recvUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_nvmf_data_sent","title":"node_nvmf_data_sent","text":"

    NVMe/FC kilobytes (KB) sent per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node nvme_fc_data_sentUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node nvmf_data_sentUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_nvmf_ops","title":"node_nvmf_ops","text":"

    NVMe/FC operations per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node nvme_fc_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node nvmf_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_ssd_data_read","title":"node_ssd_data_read","text":"

    Number of SSD Disk kilobytes (KB) read per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node ssd_data_readUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node ssd_data_readUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_ssd_data_written","title":"node_ssd_data_written","text":"

    Number of SSD Disk kilobytes (KB) written per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node ssd_data_writtenUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node ssd_data_writtenUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_total_data","title":"node_total_data","text":"

    Total throughput in bytes

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_total_latency","title":"node_total_latency","text":"

    Average latency for all operations in the system in microseconds
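    Counters of type average, such as this one, are reported relative to the base counter listed in the endpoint table below (total_ops). As a hedged sketch of the calculation, where Δ denotes the change between two consecutive Harvest polls:

    node_total_latency ≈ Δ total_latency / Δ total_ops   (microseconds per operation)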

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node total_latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node total_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_total_ops","title":"node_total_ops","text":"

    Total number of operations per second

    API Endpoint Metric Template REST api/cluster/counter/tables/system:node total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/system_node.yaml ZAPI perf-object-get-instances system:node total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/system_node.yaml"},{"location":"ontap-metrics/#node_uptime","title":"node_uptime","text":"

    The total time, in seconds, that the node has been up.
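    For a quick spot check, the same field Harvest collects can be read straight from the ONTAP REST endpoint listed in the table below. A hedged sketch; the hostname and credentials are placeholders:

    # Fetch node uptime (seconds) from ONTAP REST.
    # Drop -k if the cluster presents a trusted certificate.
    curl -sk -u admin 'https://cluster-mgmt.example.com/api/cluster/nodes?fields=uptime'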

    API Endpoint Metric Template REST api/cluster/nodes uptime conf/rest/9.12.0/node.yaml ZAPI system-node-get-iter node-details-info.node-uptime conf/zapi/cdot/9.8.0/node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_other_latency","title":"node_vol_cifs_other_latency","text":"

    Average time for the WAFL filesystem to process other CIFS operations to the volume; not including CIFS protocol request processing or network communication time which will also be included in client observed CIFS request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.other_latencyUnit: microsecType: averageBase: cifs.other_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_other_latencyUnit: microsecType: averageBase: cifs_other_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_other_ops","title":"node_vol_cifs_other_ops","text":"

    Number of other CIFS operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_read_data","title":"node_vol_cifs_read_data","text":"

    Bytes read per second via CIFS

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_read_latency","title":"node_vol_cifs_read_latency","text":"

    Average time for the WAFL filesystem to process CIFS read requests to the volume; not including CIFS protocol request processing or network communication time which will also be included in client observed CIFS request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.read_latencyUnit: microsecType: averageBase: cifs.read_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_read_latencyUnit: microsecType: averageBase: cifs_read_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_read_ops","title":"node_vol_cifs_read_ops","text":"

    Number of CIFS read operations per second from the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_write_data","title":"node_vol_cifs_write_data","text":"

    Bytes written per second via CIFS

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_write_latency","title":"node_vol_cifs_write_latency","text":"

    Average time for the WAFL filesystem to process CIFS write requests to the volume; not including CIFS protocol request processing or network communication time which will also be included in client observed CIFS request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.write_latencyUnit: microsecType: averageBase: cifs.write_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_write_latencyUnit: microsecType: averageBase: cifs_write_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_cifs_write_ops","title":"node_vol_cifs_write_ops","text":"

    Number of CIFS write operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node cifs.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node cifs_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_other_latency","title":"node_vol_fcp_other_latency","text":"

    Average time for the WAFL filesystem to process other FCP protocol operations to the volume; not including FCP protocol request processing or network communication time which will also be included in client observed FCP protocol request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.other_latencyUnit: microsecType: averageBase: fcp.other_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_other_latencyUnit: microsecType: averageBase: fcp_other_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_other_ops","title":"node_vol_fcp_other_ops","text":"

    Number of other block protocol operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_read_data","title":"node_vol_fcp_read_data","text":"

    Bytes read per second via block protocol

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_read_latency","title":"node_vol_fcp_read_latency","text":"

    Average time for the WAFL filesystem to process FCP protocol read operations to the volume; not including FCP protocol request processing or network communication time which will also be included in client observed FCP protocol request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.read_latencyUnit: microsecType: averageBase: fcp.read_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_read_latencyUnit: microsecType: averageBase: fcp_read_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_read_ops","title":"node_vol_fcp_read_ops","text":"

    Number of block protocol read operations per second from the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_write_data","title":"node_vol_fcp_write_data","text":"

    Bytes written per second via block protocol

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_write_latency","title":"node_vol_fcp_write_latency","text":"

    Average time for the WAFL filesystem to process FCP protocol write operations to the volume; not including FCP protocol request processing or network communication time which will also be included in client observed FCP protocol request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.write_latencyUnit: microsecType: averageBase: fcp.write_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_write_latencyUnit: microsecType: averageBase: fcp_write_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_fcp_write_ops","title":"node_vol_fcp_write_ops","text":"

    Number of block protocol write operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node fcp.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node fcp_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_other_latency","title":"node_vol_iscsi_other_latency","text":"

    Average time for the WAFL filesystem to process other iSCSI protocol operations to the volume; not including iSCSI protocol request processing or network communication time which will also be included in client observed iSCSI protocol request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.other_latencyUnit: microsecType: averageBase: iscsi.other_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_other_latencyUnit: microsecType: averageBase: iscsi_other_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_other_ops","title":"node_vol_iscsi_other_ops","text":"

    Number of other block protocol operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_read_data","title":"node_vol_iscsi_read_data","text":"

    Bytes read per second via block protocol

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_read_latency","title":"node_vol_iscsi_read_latency","text":"

    Average time for the WAFL filesystem to process iSCSI protocol read operations to the volume; not including iSCSI protocol request processing or network communication time which will also be included in client observed iSCSI protocol request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.read_latencyUnit: microsecType: averageBase: iscsi.read_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_read_latencyUnit: microsecType: averageBase: iscsi_read_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_read_ops","title":"node_vol_iscsi_read_ops","text":"

    Number of block protocol read operations per second from the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_write_data","title":"node_vol_iscsi_write_data","text":"

    Bytes written per second via block protocol

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_write_latency","title":"node_vol_iscsi_write_latency","text":"

    Average time for the WAFL filesystem to process iSCSI protocol write operations to the volume; not including iSCSI protocol request processing or network communication time which will also be included in client observed iSCSI request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.write_latencyUnit: microsecType: averageBase: iscsi.write_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_write_latencyUnit: microsecType: averageBase: iscsi_write_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_iscsi_write_ops","title":"node_vol_iscsi_write_ops","text":"

    Number of block protocol write operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node iscsi.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node iscsi_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_other_latency","title":"node_vol_nfs_other_latency","text":"

    Average time for the WAFL filesystem to process other NFS operations to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.other_latencyUnit: microsecType: averageBase: nfs.other_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_other_latencyUnit: microsecType: averageBase: nfs_other_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_other_ops","title":"node_vol_nfs_other_ops","text":"

    Number of other NFS operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_read_data","title":"node_vol_nfs_read_data","text":"

    Bytes read per second via NFS

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_read_latency","title":"node_vol_nfs_read_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol read requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.read_latencyUnit: microsecType: averageBase: nfs.read_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_read_latencyUnit: microsecType: averageBase: nfs_read_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_read_ops","title":"node_vol_nfs_read_ops","text":"

    Number of NFS read operations per second from the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_write_data","title":"node_vol_nfs_write_data","text":"

    Bytes written per second via NFS

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_write_latency","title":"node_vol_nfs_write_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol write requests to the volume; not including NFS protocol request processing or network communication time, which will also be included in client observed NFS request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.write_latencyUnit: microsecType: averageBase: nfs.write_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_write_latencyUnit: microsecType: averageBase: nfs_write_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_nfs_write_ops","title":"node_vol_nfs_write_ops","text":"

    Number of NFS write operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node nfs.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node nfs_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_read_latency","title":"node_vol_read_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process read requests to the volume; not including request processing or network communication time

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node read_latencyUnit: microsecType: averageBase: total_read_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_vol_write_latency","title":"node_vol_write_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process write requests to the volume; not including request processing or network communication time

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:node write_latencyUnit: microsecType: averageBase: total_write_ops conf/restperf/9.12.0/volume_node.yaml ZAPI perf-object-get-instances volume:node write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/volume_node.yaml"},{"location":"ontap-metrics/#node_volume_avg_latency","title":"node_volume_avg_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process all the operations on the volume; not including request processing or network communication time. node_volume_avg_latency is volume_avg_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume average_latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_access_latency","title":"node_volume_nfs_access_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol access requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_access_latency is volume_nfs_access_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.access_latencyUnit: microsecType: averageBase: nfs.access_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_access_latencyUnit: microsecType: averageBase: nfs_access_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_access_ops","title":"node_volume_nfs_access_ops","text":"

    Number of NFS accesses per second to the volume. node_volume_nfs_access_ops is volume_nfs_access_ops aggregated by node.
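    For rate metrics like this one, the node-level roll-up corresponds to a plain sum of the volume-level series. A hedged PromQL sketch, assuming Harvest's usual cluster and node labels and a Prometheus server at localhost:9090:

    # Roughly the roll-up Harvest publishes as node_volume_nfs_access_ops,
    # rebuilt from the volume-level metric.
    curl -s 'http://localhost:9090/api/v1/query' \
      --data-urlencode 'query=sum by (cluster, node) (volume_nfs_access_ops)'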

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.access_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_access_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_getattr_latency","title":"node_volume_nfs_getattr_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol getattr requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_getattr_latency is volume_nfs_getattr_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.getattr_latencyUnit: microsecType: averageBase: nfs.getattr_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_getattr_latencyUnit: microsecType: averageBase: nfs_getattr_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_getattr_ops","title":"node_volume_nfs_getattr_ops","text":"

    Number of NFS getattr per second to the volume. node_volume_nfs_getattr_ops is volume_nfs_getattr_ops aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.getattr_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_getattr_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_lookup_latency","title":"node_volume_nfs_lookup_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol lookup requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_lookup_latency is volume_nfs_lookup_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.lookup_latencyUnit: microsecType: averageBase: nfs.lookup_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_lookup_latencyUnit: microsecType: averageBase: nfs_lookup_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_lookup_ops","title":"node_volume_nfs_lookup_ops","text":"

    Number of NFS lookups per second to the volume. node_volume_nfs_lookup_ops is volume_nfs_lookup_ops aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.lookup_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_lookup_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_other_latency","title":"node_volume_nfs_other_latency","text":"

    Average time for the WAFL filesystem to process other NFS operations to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_other_latency is volume_nfs_other_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.other_latencyUnit: microsecType: averageBase: nfs.other_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_other_latencyUnit: microsecType: averageBase: nfs_other_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_other_ops","title":"node_volume_nfs_other_ops","text":"

    Number of other NFS operations per second to the volume. node_volume_nfs_other_ops is volume_nfs_other_ops aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_punch_hole_latency","title":"node_volume_nfs_punch_hole_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol hole-punch requests to the volume. node_volume_nfs_punch_hole_latency is volume_nfs_punch_hole_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.punch_hole_latencyUnit: microsecType: averageBase: nfs.punch_hole_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_punch_hole_latencyUnit: microsecType: averageBase: nfs_punch_hole_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_punch_hole_ops","title":"node_volume_nfs_punch_hole_ops","text":"

    Number of NFS hole-punch requests per second to the volume. node_volume_nfs_punch_hole_ops is volume_nfs_punch_hole_ops aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.punch_hole_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_punch_hole_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_read_latency","title":"node_volume_nfs_read_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol read requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_read_latency is volume_nfs_read_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.read_latencyUnit: microsecType: averageBase: nfs.read_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_read_latencyUnit: microsecType: averageBase: nfs_read_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_read_ops","title":"node_volume_nfs_read_ops","text":"

    Number of NFS read operations per second from the volume. node_volume_nfs_read_ops is volume_nfs_read_ops aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_setattr_latency","title":"node_volume_nfs_setattr_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol setattr requests to the volume. node_volume_nfs_setattr_latency is volume_nfs_setattr_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.setattr_latencyUnit: microsecType: averageBase: nfs.setattr_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_setattr_latencyUnit: microsecType: averageBase: nfs_setattr_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_setattr_ops","title":"node_volume_nfs_setattr_ops","text":"

    Number of NFS setattr requests per second to the volume. node_volume_nfs_setattr_ops is volume_nfs_setattr_ops aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.setattr_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_setattr_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_total_ops","title":"node_volume_nfs_total_ops","text":"

    Number of total NFS operations per second to the volume. node_volume_nfs_total_ops is volume_nfs_total_ops aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_write_latency","title":"node_volume_nfs_write_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol write requests to the volume; not including NFS protocol request processing or network communication time, which will also be included in client observed NFS request latency. node_volume_nfs_write_latency is volume_nfs_write_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.write_latencyUnit: microsecType: averageBase: nfs.write_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_write_latencyUnit: microsecType: averageBase: nfs_write_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_nfs_write_ops","title":"node_volume_nfs_write_ops","text":"

    Number of NFS write operations per second to the volume. node_volume_nfs_write_ops is volume_nfs_write_ops aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_other_latency","title":"node_volume_other_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process other operations to the volume; not including request processing or network communication time. node_volume_other_latency is volume_other_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume other_latencyUnit: microsecType: averageBase: total_other_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_other_ops","title":"node_volume_other_ops","text":"

    Number of other operations per second to the volume. node_volume_other_ops is volume_other_ops aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume total_other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_read_data","title":"node_volume_read_data","text":"

    Bytes read per second. node_volume_read_data is volume_read_data aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume bytes_readUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_read_latency","title":"node_volume_read_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process read requests to the volume; not including request processing or network communication time. node_volume_read_latency is volume_read_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume read_latencyUnit: microsecType: averageBase: total_read_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_read_ops","title":"node_volume_read_ops","text":"

    Number of read operations per second from the volume. node_volume_read_ops is volume_read_ops aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume total_read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_total_ops","title":"node_volume_total_ops","text":"

    Number of operations per second serviced by the volume. node_volume_total_ops is volume_total_ops aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_write_data","title":"node_volume_write_data","text":"

    Bytes written per second. node_volume_write_data is volume_write_data aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume bytes_writtenUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_write_latency","title":"node_volume_write_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process write requests to the volume; not including request processing or network communication time. node_volume_write_latency is volume_write_latency aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume write_latencyUnit: microsecType: averageBase: total_write_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#node_volume_write_ops","title":"node_volume_write_ops","text":"

    Number of write operations per second to the volume. node_volume_write_ops is volume_write_ops aggregated by node.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume total_write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#nvme_lif_avg_latency","title":"nvme_lif_avg_latency","text":"

    Average latency for NVMF operations

    API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif average_latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_avg_other_latency","title":"nvme_lif_avg_other_latency","text":"

    Average latency for operations other than read, write, compare or compare-and-write.

    API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif average_other_latencyUnit: microsecType: averageBase: other_ops conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif avg_other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_avg_read_latency","title":"nvme_lif_avg_read_latency","text":"

    Average latency for read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif average_read_latencyUnit: microsecType: averageBase: read_ops conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_avg_write_latency","title":"nvme_lif_avg_write_latency","text":"

    Average latency for write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif average_write_latencyUnit: microsecType: averageBase: write_ops conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_other_ops","title":"nvme_lif_other_ops","text":"

    Number of operations that are not read, write, compare or compare-and-write.

    API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_read_data","title":"nvme_lif_read_data","text":"

    Amount of data read from the storage system

    API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_read_ops","title":"nvme_lif_read_ops","text":"

    Number of read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_total_ops","title":"nvme_lif_total_ops","text":"

    Total number of operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_write_data","title":"nvme_lif_write_data","text":"

    Amount of data written to the storage system

    API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvme_lif_write_ops","title":"nvme_lif_write_ops","text":"

    Number of write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/nvmf_lif write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nvmf_lif.yaml ZAPI perf-object-get-instances nvmf_fc_lif write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_avg_latency","title":"nvmf_rdma_port_avg_latency","text":"

    Average latency for NVMF operations

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_avg_other_latency","title":"nvmf_rdma_port_avg_other_latency","text":"

    Average latency for operations other than read, write, compare, or compare-and-write (caw)

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port avg_other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_avg_read_latency","title":"nvmf_rdma_port_avg_read_latency","text":"

    Average latency for read operations

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_avg_write_latency","title":"nvmf_rdma_port_avg_write_latency","text":"

    Average latency for write operations

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_other_ops","title":"nvmf_rdma_port_other_ops","text":"

    Number of operations that are not read, write, compare, or compare-and-write (caw).

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_read_data","title":"nvmf_rdma_port_read_data","text":"

    Amount of data read from the storage system

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_read_ops","title":"nvmf_rdma_port_read_ops","text":"

    Number of read operations

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_total_data","title":"nvmf_rdma_port_total_data","text":"

    Amount of NVMF traffic to and from the storage system

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_total_ops","title":"nvmf_rdma_port_total_ops","text":"

    Total number of operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_write_data","title":"nvmf_rdma_port_write_data","text":"

    Amount of data written to the storage system

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_rdma_port_write_ops","title":"nvmf_rdma_port_write_ops","text":"

    Number of write operations

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_rdma_port write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_avg_latency","title":"nvmf_tcp_port_avg_latency","text":"

    Average latency for NVMF operations

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_avg_other_latency","title":"nvmf_tcp_port_avg_other_latency","text":"

    Average latency for operations other than read, write, compare, or compare-and-write (caw)

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port avg_other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_avg_read_latency","title":"nvmf_tcp_port_avg_read_latency","text":"

    Average latency for read operations

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port avg_read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_avg_write_latency","title":"nvmf_tcp_port_avg_write_latency","text":"

    Average latency for write operations

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port avg_write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_other_ops","title":"nvmf_tcp_port_other_ops","text":"

    Number of operations that are not read, write, compare, or compare-and-write (caw).

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_read_data","title":"nvmf_tcp_port_read_data","text":"

    Amount of data read from the storage system

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_read_ops","title":"nvmf_tcp_port_read_ops","text":"

    Number of read operations

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_total_data","title":"nvmf_tcp_port_total_data","text":"

    Amount of NVMF traffic to and from the storage system

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_total_ops","title":"nvmf_tcp_port_total_ops","text":"

    Total number of operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_write_data","title":"nvmf_tcp_port_write_data","text":"

    Amount of data written to the storage system

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#nvmf_tcp_port_write_ops","title":"nvmf_tcp_port_write_ops","text":"

    Number of write operations

    API Endpoint Metric Template ZAPI perf-object-get-instances nvmf_tcp_port write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml"},{"location":"ontap-metrics/#ontaps3_logical_used_size","title":"ontaps3_logical_used_size","text":"

    Specifies the bucket logical used size up to this point.

    API Endpoint Metric Template REST api/protocols/s3/buckets logical_used_size conf/rest/9.7.0/ontap_s3.yaml"},{"location":"ontap-metrics/#ontaps3_size","title":"ontaps3_size","text":"

    Specifies the bucket size in bytes; ranges from 80MB to 64TB.

    API Endpoint Metric Template REST api/protocols/s3/buckets size conf/rest/9.7.0/ontap_s3.yaml"},{"location":"ontap-metrics/#ontaps3_svm_abort_multipart_upload_failed","title":"ontaps3_svm_abort_multipart_upload_failed","text":"

    Number of failed Abort Multipart Upload operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server abort_multipart_upload_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_abort_multipart_upload_failed_client_close","title":"ontaps3_svm_abort_multipart_upload_failed_client_close","text":"
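Type: delta counters (and the delta,no-zero-values variant used by several of the object_store_server counters below) report the change in the raw counter since the previous poll rather than a per-second rate; the no-zero-values modifier suppresses instances whose value for the interval is zero. A minimal sketch under those assumptions:

```python
# Minimal sketch (illustrative, not Harvest's actual code): a Type: delta
# counter exports the change since the last poll; with the no-zero-values
# modifier, zero deltas are skipped instead of exported.
def delta(prev_value: int, curr_value: int, no_zero_values: bool = False):
    change = curr_value - prev_value
    if no_zero_values and change == 0:
        return None  # instance is dropped for this interval
    return change

print(delta(42, 45))                       # 3
print(delta(42, 42, no_zero_values=True))  # None (suppressed)
```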

    Number of times Abort Multipart Upload operation failed because client terminated connection for operation pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server abort_multipart_upload_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_abort_multipart_upload_latency","title":"ontaps3_svm_abort_multipart_upload_latency","text":"

    Average latency for Abort Multipart Upload operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server abort_multipart_upload_latencyUnit: microsecType: averageBase: abort_multipart_upload_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_abort_multipart_upload_rate","title":"ontaps3_svm_abort_multipart_upload_rate","text":"

    Number of Abort Multipart Upload operations per second.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server abort_multipart_upload_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_abort_multipart_upload_total","title":"ontaps3_svm_abort_multipart_upload_total","text":"

    Number of Abort Multipart Upload operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server abort_multipart_upload_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_allow_access","title":"ontaps3_svm_allow_access","text":"

    Number of times access was allowed.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server allow_accessUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_anonymous_access","title":"ontaps3_svm_anonymous_access","text":"

    Number of times anonymous access was allowed.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server anonymous_accessUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_anonymous_deny_access","title":"ontaps3_svm_anonymous_deny_access","text":"

    Number of times anonymous access was denied.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server anonymous_deny_accessUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_authentication_failures","title":"ontaps3_svm_authentication_failures","text":"

    Number of authentication failures.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server authentication_failuresUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_chunked_upload_reqs","title":"ontaps3_svm_chunked_upload_reqs","text":"

    Total number of object store server chunked object upload requests

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server chunked_upload_reqsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_complete_multipart_upload_failed","title":"ontaps3_svm_complete_multipart_upload_failed","text":"

    Number of failed Complete Multipart Upload operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server complete_multipart_upload_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_complete_multipart_upload_failed_client_close","title":"ontaps3_svm_complete_multipart_upload_failed_client_close","text":"

    Number of times Complete Multipart Upload operation failed because client terminated connection for operation pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server complete_multipart_upload_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_complete_multipart_upload_latency","title":"ontaps3_svm_complete_multipart_upload_latency","text":"

    Average latency for Complete Multipart Upload operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server complete_multipart_upload_latencyUnit: microsecType: averageBase: complete_multipart_upload_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_complete_multipart_upload_rate","title":"ontaps3_svm_complete_multipart_upload_rate","text":"

    Number of Complete Multipart Upload operations per second.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server complete_multipart_upload_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_complete_multipart_upload_total","title":"ontaps3_svm_complete_multipart_upload_total","text":"

    Number of Complete Multipart Upload operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server complete_multipart_upload_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_connected_connections","title":"ontaps3_svm_connected_connections","text":"

    Number of object store server connections currently established

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server connected_connectionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_connections","title":"ontaps3_svm_connections","text":"

    Total number of object store server connections.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server connectionsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_create_bucket_failed","title":"ontaps3_svm_create_bucket_failed","text":"

    Number of failed Create Bucket operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server create_bucket_failedUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_create_bucket_failed_client_close","title":"ontaps3_svm_create_bucket_failed_client_close","text":"

    Number of times Create Bucket operation failed because client terminated connection for operation pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server create_bucket_failed_client_closeUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_create_bucket_latency","title":"ontaps3_svm_create_bucket_latency","text":"

    Average latency for Create Bucket operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server create_bucket_latencyUnit: microsecType: average,no-zero-valuesBase: create_bucket_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_create_bucket_rate","title":"ontaps3_svm_create_bucket_rate","text":"

    Number of Create Bucket operations per second.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server create_bucket_rateUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_create_bucket_total","title":"ontaps3_svm_create_bucket_total","text":"

    Number of Create Bucket operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server create_bucket_totalUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_default_deny_access","title":"ontaps3_svm_default_deny_access","text":"

    Number of times access was denied by default and not through any policy statement.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server default_deny_accessUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_bucket_failed","title":"ontaps3_svm_delete_bucket_failed","text":"

    Number of failed Delete Bucket operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_bucket_failedUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_bucket_failed_client_close","title":"ontaps3_svm_delete_bucket_failed_client_close","text":"

    Number of times Delete Bucket operation failed because client terminated connection for operation pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_bucket_failed_client_closeUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_bucket_latency","title":"ontaps3_svm_delete_bucket_latency","text":"

    Average latency for Delete Bucket operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_bucket_latencyUnit: microsecType: average,no-zero-valuesBase: delete_bucket_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_bucket_rate","title":"ontaps3_svm_delete_bucket_rate","text":"

    Number of Delete Bucket operations per second.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_bucket_rateUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_bucket_total","title":"ontaps3_svm_delete_bucket_total","text":"

    Number of Delete Bucket operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_bucket_totalUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_failed","title":"ontaps3_svm_delete_object_failed","text":"

    Number of failed DELETE object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_failed_client_close","title":"ontaps3_svm_delete_object_failed_client_close","text":"

    Number of times DELETE object operation failed due to the case where client closed the connection while the operation was still pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_latency","title":"ontaps3_svm_delete_object_latency","text":"

    Average latency for DELETE object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_latencyUnit: microsecType: averageBase: delete_object_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_rate","title":"ontaps3_svm_delete_object_rate","text":"

    Number of DELETE object operations per sec

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_tagging_failed","title":"ontaps3_svm_delete_object_tagging_failed","text":"

    Number of failed DELETE object tagging operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_tagging_failedUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_tagging_failed_client_close","title":"ontaps3_svm_delete_object_tagging_failed_client_close","text":"

    Number of times DELETE object tagging operation failed because client terminated connection for operation pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_tagging_failed_client_closeUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_tagging_latency","title":"ontaps3_svm_delete_object_tagging_latency","text":"

    Average latency for DELETE object tagging operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_tagging_latencyUnit: microsecType: average,no-zero-valuesBase: delete_object_tagging_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_tagging_rate","title":"ontaps3_svm_delete_object_tagging_rate","text":"

    Number of DELETE object tagging operations per sec.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_tagging_rateUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_tagging_total","title":"ontaps3_svm_delete_object_tagging_total","text":"

    Number of DELETE object tagging operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_tagging_totalUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_delete_object_total","title":"ontaps3_svm_delete_object_total","text":"

    Number of DELETE object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server delete_object_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_explicit_deny_access","title":"ontaps3_svm_explicit_deny_access","text":"

    Number of times access was denied explicitly by a policy statement.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server explicit_deny_accessUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_bucket_acl_failed","title":"ontaps3_svm_get_bucket_acl_failed","text":"

    Number of failed GET Bucket ACL operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_bucket_acl_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_bucket_acl_total","title":"ontaps3_svm_get_bucket_acl_total","text":"

    Number of GET Bucket ACL operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_bucket_acl_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_bucket_versioning_failed","title":"ontaps3_svm_get_bucket_versioning_failed","text":"

    Number of failed Get Bucket Versioning operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_bucket_versioning_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_bucket_versioning_total","title":"ontaps3_svm_get_bucket_versioning_total","text":"

    Number of Get Bucket Versioning operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_bucket_versioning_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_data","title":"ontaps3_svm_get_data","text":"

    Rate of GET object data transfers per second

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_acl_failed","title":"ontaps3_svm_get_object_acl_failed","text":"

    Number of failed GET Object ACL operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_acl_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_acl_total","title":"ontaps3_svm_get_object_acl_total","text":"

    Number of GET Object ACL operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_acl_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_failed","title":"ontaps3_svm_get_object_failed","text":"

    Number of failed GET object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_failed_client_close","title":"ontaps3_svm_get_object_failed_client_close","text":"

    Number of times GET object operation failed due to the case where client closed the connection while the operation was still pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_lastbyte_latency","title":"ontaps3_svm_get_object_lastbyte_latency","text":"

    Average last-byte latency for GET object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_lastbyte_latencyUnit: microsecType: averageBase: get_object_lastbyte_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_latency","title":"ontaps3_svm_get_object_latency","text":"

    Average first-byte latency for GET object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_latencyUnit: microsecType: averageBase: get_object_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_rate","title":"ontaps3_svm_get_object_rate","text":"

    Number of GET object operations per sec

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_tagging_failed","title":"ontaps3_svm_get_object_tagging_failed","text":"

    Number of failed GET object tagging operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_tagging_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_tagging_failed_client_close","title":"ontaps3_svm_get_object_tagging_failed_client_close","text":"

    Number of times GET object tagging operation failed due to the case where client closed the connection while the operation was still pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_tagging_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_tagging_latency","title":"ontaps3_svm_get_object_tagging_latency","text":"

    Average latency for GET object tagging operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_tagging_latencyUnit: microsecType: averageBase: get_object_tagging_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_tagging_rate","title":"ontaps3_svm_get_object_tagging_rate","text":"

    Number of GET object tagging operations per sec

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_tagging_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_tagging_total","title":"ontaps3_svm_get_object_tagging_total","text":"

    Number of GET object tagging operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_tagging_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_get_object_total","title":"ontaps3_svm_get_object_total","text":"

    Number of GET object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server get_object_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_group_policy_evaluated","title":"ontaps3_svm_group_policy_evaluated","text":"

    Number of times group policies were evaluated.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server group_policy_evaluatedUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_bucket_failed","title":"ontaps3_svm_head_bucket_failed","text":"

    Number of failed HEAD bucket operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_bucket_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_bucket_failed_client_close","title":"ontaps3_svm_head_bucket_failed_client_close","text":"

    Number of times HEAD bucket operation failed due to the case where client closed the connection while the operation was still pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_bucket_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_bucket_latency","title":"ontaps3_svm_head_bucket_latency","text":"

    Average latency for HEAD bucket operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_bucket_latencyUnit: microsecType: averageBase: head_bucket_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_bucket_rate","title":"ontaps3_svm_head_bucket_rate","text":"

    Number of HEAD bucket operations per sec

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_bucket_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_bucket_total","title":"ontaps3_svm_head_bucket_total","text":"

    Number of HEAD bucket operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_bucket_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_object_failed","title":"ontaps3_svm_head_object_failed","text":"

    Number of failed HEAD Object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_object_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_object_failed_client_close","title":"ontaps3_svm_head_object_failed_client_close","text":"

    Number of times HEAD object operation failed due to the case where client closed the connection while the operation was still pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_object_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_object_latency","title":"ontaps3_svm_head_object_latency","text":"

    Average latency for HEAD object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_object_latencyUnit: microsecType: averageBase: head_object_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_object_rate","title":"ontaps3_svm_head_object_rate","text":"

    Number of HEAD Object operations per sec

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_object_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_head_object_total","title":"ontaps3_svm_head_object_total","text":"

    Number of HEAD Object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server head_object_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_initiate_multipart_upload_failed","title":"ontaps3_svm_initiate_multipart_upload_failed","text":"

    Number of failed Initiate Multipart Upload operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server initiate_multipart_upload_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_initiate_multipart_upload_failed_client_close","title":"ontaps3_svm_initiate_multipart_upload_failed_client_close","text":"

    Number of times Initiate Multipart Upload operation failed because client terminated connection for operation pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server initiate_multipart_upload_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_initiate_multipart_upload_latency","title":"ontaps3_svm_initiate_multipart_upload_latency","text":"

    Average latency for Initiate Multipart Upload operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server initiate_multipart_upload_latencyUnit: microsecType: averageBase: initiate_multipart_upload_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_initiate_multipart_upload_rate","title":"ontaps3_svm_initiate_multipart_upload_rate","text":"

    Number of Initiate Multipart Upload operations per second.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server initiate_multipart_upload_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_initiate_multipart_upload_total","title":"ontaps3_svm_initiate_multipart_upload_total","text":"

    Number of Initiate Multipart Upload operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server initiate_multipart_upload_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_input_flow_control_entry","title":"ontaps3_svm_input_flow_control_entry","text":"

    Number of times input flow control was entered.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server input_flow_control_entryUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_input_flow_control_exit","title":"ontaps3_svm_input_flow_control_exit","text":"

    Number of times input flow control was exited.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server input_flow_control_exitUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_buckets_failed","title":"ontaps3_svm_list_buckets_failed","text":"

    Number of failed LIST Buckets operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_buckets_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_buckets_failed_client_close","title":"ontaps3_svm_list_buckets_failed_client_close","text":"

    Number of times LIST Bucket operation failed due to the case where client closed the connection while the operation was still pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_buckets_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_buckets_latency","title":"ontaps3_svm_list_buckets_latency","text":"

    Average latency for LIST Buckets operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_buckets_latencyUnit: microsecType: averageBase: head_object_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_buckets_rate","title":"ontaps3_svm_list_buckets_rate","text":"

    Number of LIST Buckets operations per sec

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_buckets_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_buckets_total","title":"ontaps3_svm_list_buckets_total","text":"

    Number of LIST Buckets operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_buckets_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_object_versions_failed","title":"ontaps3_svm_list_object_versions_failed","text":"

    Number of failed LIST object versions operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_object_versions_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_object_versions_failed_client_close","title":"ontaps3_svm_list_object_versions_failed_client_close","text":"

    Number of times LIST object versions operation failed due to the case where client closed the connection while the operation was still pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_object_versions_failed_client_closeUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_object_versions_latency","title":"ontaps3_svm_list_object_versions_latency","text":"

    Average latency for LIST Object versions operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_object_versions_latencyUnit: microsecType: average,no-zero-valuesBase: list_object_versions_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_object_versions_rate","title":"ontaps3_svm_list_object_versions_rate","text":"

    Number of LIST Object Versions operations per sec

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_object_versions_rateUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_object_versions_total","title":"ontaps3_svm_list_object_versions_total","text":"

    Number of LIST Object Versions operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_object_versions_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_objects_failed","title":"ontaps3_svm_list_objects_failed","text":"

    Number of failed LIST objects operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_objects_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_objects_failed_client_close","title":"ontaps3_svm_list_objects_failed_client_close","text":"

    Number of times LIST objects operation failed due to the case where client closed the connection while the operation was still pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_objects_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_objects_latency","title":"ontaps3_svm_list_objects_latency","text":"

    Average latency for LIST Objects operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_objects_latencyUnit: microsecType: averageBase: list_objects_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_objects_rate","title":"ontaps3_svm_list_objects_rate","text":"

    Number of LIST Objects operations per sec

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_objects_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_objects_total","title":"ontaps3_svm_list_objects_total","text":"

    Number of LIST Objects operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_objects_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_uploads_failed","title":"ontaps3_svm_list_uploads_failed","text":"

    Number of failed LIST Uploads operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_uploads_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_uploads_failed_client_close","title":"ontaps3_svm_list_uploads_failed_client_close","text":"

    Number of times LIST Uploads operation failed due to the case where client closed the connection while the operation was still pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_uploads_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_uploads_latency","title":"ontaps3_svm_list_uploads_latency","text":"

    Average latency for LIST Uploads operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_uploads_latencyUnit: microsecType: averageBase: list_uploads_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_uploads_rate","title":"ontaps3_svm_list_uploads_rate","text":"

    Number of LIST Uploads operations per sec

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_uploads_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_list_uploads_total","title":"ontaps3_svm_list_uploads_total","text":"

    Number of LIST Uploads operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server list_uploads_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_max_cmds_per_connection","title":"ontaps3_svm_max_cmds_per_connection","text":"

    Maximum number of commands pipelined at any instant on a connection.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server max_cmds_per_connectionUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_max_connected_connections","title":"ontaps3_svm_max_connected_connections","text":"

    Maximum number of object store server connections established at one time

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server max_connected_connectionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_max_requests_outstanding","title":"ontaps3_svm_max_requests_outstanding","text":"

    Maximum number of object store server requests in process at one time

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server max_requests_outstandingUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_multi_delete_reqs","title":"ontaps3_svm_multi_delete_reqs","text":"

    Total number of object store server multiple object delete requests

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server multi_delete_reqsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_output_flow_control_entry","title":"ontaps3_svm_output_flow_control_entry","text":"

    Number of times output flow control was entered.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server output_flow_control_entryUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_output_flow_control_exit","title":"ontaps3_svm_output_flow_control_exit","text":"

    Number of times output flow control was exited.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server output_flow_control_exitUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_presigned_url_reqs","title":"ontaps3_svm_presigned_url_reqs","text":"

    Total number of presigned object store server URL requests.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server presigned_url_reqsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_bucket_versioning_failed","title":"ontaps3_svm_put_bucket_versioning_failed","text":"

    Number of failed Put Bucket Versioning operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_bucket_versioning_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_bucket_versioning_total","title":"ontaps3_svm_put_bucket_versioning_total","text":"

    Number of Put Bucket Versioning operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_bucket_versioning_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_data","title":"ontaps3_svm_put_data","text":"

    Rate of PUT object data transfers per second

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_failed","title":"ontaps3_svm_put_object_failed","text":"

    Number of failed PUT object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_failed_client_close","title":"ontaps3_svm_put_object_failed_client_close","text":"

    Number of times PUT object operation failed due to the case where client closed the connection while the operation was still pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_latency","title":"ontaps3_svm_put_object_latency","text":"

    Average latency for PUT object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_latencyUnit: microsecType: averageBase: put_object_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_rate","title":"ontaps3_svm_put_object_rate","text":"

    Number of PUT object operations per sec

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_tagging_failed","title":"ontaps3_svm_put_object_tagging_failed","text":"

    Number of failed PUT object tagging operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_tagging_failedUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_tagging_failed_client_close","title":"ontaps3_svm_put_object_tagging_failed_client_close","text":"

    Number of times PUT object tagging operation failed because client terminated connection for operation pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_tagging_failed_client_closeUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_tagging_latency","title":"ontaps3_svm_put_object_tagging_latency","text":"

    Average latency for PUT object tagging operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_tagging_latencyUnit: microsecType: average,no-zero-valuesBase: put_object_tagging_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_tagging_rate","title":"ontaps3_svm_put_object_tagging_rate","text":"

    Number of PUT object tagging operations per second.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_tagging_rateUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_tagging_total","title":"ontaps3_svm_put_object_tagging_total","text":"

    Number of PUT object tagging operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_tagging_totalUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_put_object_total","title":"ontaps3_svm_put_object_total","text":"

    Number of PUT object operations

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server put_object_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_request_parse_errors","title":"ontaps3_svm_request_parse_errors","text":"

    Number of request parser errors due to malformed requests.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server request_parse_errorsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_requests","title":"ontaps3_svm_requests","text":"

    Total number of object store server requests

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server requestsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_requests_outstanding","title":"ontaps3_svm_requests_outstanding","text":"

    Number of object store server requests in process

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server requests_outstandingUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_root_user_access","title":"ontaps3_svm_root_user_access","text":"

    Number of times access was done by root user.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server root_user_accessUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_server_connection_close","title":"ontaps3_svm_server_connection_close","text":"

    Number of connection closes triggered by server due to fatal errors.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server server_connection_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_signature_v2_reqs","title":"ontaps3_svm_signature_v2_reqs","text":"

    Total number of object store server signature V2 requests

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server signature_v2_reqsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_signature_v4_reqs","title":"ontaps3_svm_signature_v4_reqs","text":"

    Total number of object store server signature V4 requests

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server signature_v4_reqsUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_tagging","title":"ontaps3_svm_tagging","text":"

    Number of requests with tagging specified.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server taggingUnit: noneType: delta,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_upload_part_failed","title":"ontaps3_svm_upload_part_failed","text":"

    Number of failed Upload Part operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server upload_part_failedUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_upload_part_failed_client_close","title":"ontaps3_svm_upload_part_failed_client_close","text":"

    Number of times Upload Part operation failed because client terminated connection for operation pending on server.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server upload_part_failed_client_closeUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_upload_part_latency","title":"ontaps3_svm_upload_part_latency","text":"

    Average latency for Upload Part operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server upload_part_latencyUnit: microsecType: averageBase: upload_part_latency_base conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_upload_part_rate","title":"ontaps3_svm_upload_part_rate","text":"

    Number of Upload Part operations per second.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server upload_part_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_svm_upload_part_total","title":"ontaps3_svm_upload_part_total","text":"

    Number of Upload Part operations.

    API Endpoint Metric Template ZAPI perf-object-get-instances object_store_server upload_part_totalUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml"},{"location":"ontap-metrics/#ontaps3_used_percent","title":"ontaps3_used_percent","text":"API Endpoint Metric Template REST api/protocols/s3/buckets logical_used_size, size conf/rest/9.7.0/ontap_s3.yaml"},{"location":"ontap-metrics/#path_read_data","title":"path_read_data","text":"
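ontaps3_used_percent is derived by Harvest from the two bucket fields listed in its template row (logical_used_size and size) rather than reported directly by ONTAP. A plausible reading of that derivation, written out as a sketch; the exact rounding applied is an assumption here:

```python
# Sketch of the assumed derivation of ontaps3_used_percent from the two
# REST fields it is built from; rounding behaviour is an assumption.
def used_percent(logical_used_size: int, size: int) -> float:
    """Bucket fullness as a percentage of its configured size."""
    if size == 0:
        return 0.0
    return 100.0 * logical_used_size / size

print(used_percent(40 * 2**30, 100 * 2**30))  # 40.0 for a 40 GiB / 100 GiB bucket
```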

    The average read throughput in kilobytes per second read from the indicated target port by the controller.

    API Endpoint Metric Template REST api/cluster/counter/tables/path read_dataUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path read_dataUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_read_iops","title":"path_read_iops","text":"

    The number of I/O read operations sent from the initiator port to the indicated target port.

    API Endpoint Metric Template REST api/cluster/counter/tables/path read_iopsUnit: per_secType: rateBase: conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path read_iopsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_read_latency","title":"path_read_latency","text":"

    The average latency of I/O read operations sent from this controller to the indicated target port.

    API Endpoint Metric Template REST api/cluster/counter/tables/path read_latencyUnit: microsecType: averageBase: read_iops conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path read_latencyUnit: microsecType: averageBase: read_iops conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_total_data","title":"path_total_data","text":"

    The average throughput in kilobytes per second read and written from/to the indicated target port by the controller.

    API Endpoint Metric Template REST api/cluster/counter/tables/path total_dataUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path total_dataUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_total_iops","title":"path_total_iops","text":"

    The number of total read/write I/O operations sent from the initiator port to the indicated target port.

    API Endpoint Metric Template REST api/cluster/counter/tables/path total_iopsUnit: per_secType: rateBase: conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path total_iopsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_write_data","title":"path_write_data","text":"

    The average write throughput in kilobytes per second written to the indicated target port by the controller.

    API Endpoint Metric Template REST api/cluster/counter/tables/path write_dataUnit: kb_per_secType: rateBase: conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path write_dataUnit: kb_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_write_iops","title":"path_write_iops","text":"

    The number of I/O write operations sent from the initiator port to the indicated target port.

    API Endpoint Metric Template REST api/cluster/counter/tables/path write_iopsUnit: per_secType: rateBase: conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path write_iopsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#path_write_latency","title":"path_write_latency","text":"

    The average latency of I/O write operations sent from this controller to the indicated target port.

    API Endpoint Metric Template REST api/cluster/counter/tables/path write_latencyUnit: microsecType: averageBase: write_iops conf/restperf/9.12.0/path.yaml ZAPI perf-object-get-instances path write_latencyUnit: microsecType: averageBase: write_iops conf/zapiperf/cdot/9.8.0/path.yaml"},{"location":"ontap-metrics/#plex_disk_busy","title":"plex_disk_busy","text":"

    The utilization percent of the disk. plex_disk_busy is disk_busy aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_capacity","title":"plex_disk_capacity","text":"

    Disk capacity in MB. plex_disk_capacity is disk_capacity aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_cp_read_chain","title":"plex_disk_cp_read_chain","text":"
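"Aggregated by plex" means the per-disk disk_* counters are rolled up to the plex that owns each disk; for an additive metric such as capacity that roll-up is a sum over the disks sharing a plex. The sketch below is illustrative only; the data shapes and label names are assumptions, not Harvest's internal structures.

```python
# Illustrative sketch of "aggregated by plex": roll a per-disk metric up to
# its plex by summing over the disks that share a plex label.
from collections import defaultdict

disk_capacity_mb = {
    ("plex0", "1.0.1"): 1_716_000,
    ("plex0", "1.0.2"): 1_716_000,
    ("plex1", "1.0.3"): 1_716_000,
}

plex_capacity_mb = defaultdict(int)
for (plex, _disk), capacity in disk_capacity_mb.items():
    plex_capacity_mb[plex] += capacity

print(dict(plex_capacity_mb))  # {'plex0': 3432000, 'plex1': 1716000}
```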

    Average number of blocks transferred in each consistency point read operation during a CP. plex_disk_cp_read_chain is disk_cp_read_chain aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_cp_read_latency","title":"plex_disk_cp_read_latency","text":"

    Average latency per block in microseconds for consistency point read operations. plex_disk_cp_read_latency is disk_cp_read_latency aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_cp_reads","title":"plex_disk_cp_reads","text":"

    Number of disk read operations initiated each second for consistency point processing. plex_disk_cp_reads is disk_cp_reads aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_io_pending","title":"plex_disk_io_pending","text":"

    Average number of I/Os issued to the disk for which we have not yet received the response. plex_disk_io_pending is disk_io_pending aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_io_queued","title":"plex_disk_io_queued","text":"

    Number of I/Os queued to the disk but not yet issued. plex_disk_io_queued is disk_io_queued aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_total_data","title":"plex_disk_total_data","text":"

    Total throughput for user operations per second. plex_disk_total_data is disk_total_data aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_total_transfers","title":"plex_disk_total_transfers","text":"

    Total number of disk operations involving data transfer initiated per second. plex_disk_total_transfers is disk_total_transfers aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_read_blocks","title":"plex_disk_user_read_blocks","text":"

    Number of blocks transferred for user read operations per second. plex_disk_user_read_blocks is disk_user_read_blocks aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_read_chain","title":"plex_disk_user_read_chain","text":"

    Average number of blocks transferred in each user read operation. plex_disk_user_read_chain is disk_user_read_chain aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_read_latency","title":"plex_disk_user_read_latency","text":"

    Average latency per block in microseconds for user read operations. plex_disk_user_read_latency is disk_user_read_latency aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_reads","title":"plex_disk_user_reads","text":"

    Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. plex_disk_user_reads is disk_user_reads aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_write_blocks","title":"plex_disk_user_write_blocks","text":"

    Number of blocks transferred for user write operations per second. plex_disk_user_write_blocks is disk_user_write_blocks aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_write_chain","title":"plex_disk_user_write_chain","text":"

    Average number of blocks transferred in each user write operation. plex_disk_user_write_chain is disk_user_write_chain aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_write_latency","title":"plex_disk_user_write_latency","text":"

    Average latency per block in microseconds for user write operations. plex_disk_user_write_latency is disk_user_write_latency aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#plex_disk_user_writes","title":"plex_disk_user_writes","text":"

    Number of disk write operations initiated each second for storing data or metadata associated with user requests. plex_disk_user_writes is disk_user_writes aggregated by plex.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#qos_concurrency","title":"qos_concurrency","text":"

    This is the average number of concurrent requests for the workload.

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume concurrencyUnit: noneType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume concurrencyUnit: noneType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_detail_resource_latency","title":"qos_detail_resource_latency","text":"

    Average latency for the workload on Data ONTAP subsystems

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_detail Harvest generatedUnit: microsecondsType: Base: conf/restperf/9.12.0/workload_detail.yaml ZAPI perf-object-get-instances workload_detail Harvest generatedUnit: microsecondsType: Base: conf/zapiperf/9.12.0/workload_detail.yaml"},{"location":"ontap-metrics/#qos_latency","title":"qos_latency","text":"

    This is the average response time for requests that were initiated by the workload.
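
    The table below lists this counter as Type: average with Base: ops. As an illustration of that convention (a sketch for clarity, not Harvest's actual implementation), an average-type counter is typically reduced to a per-interval value by dividing the change in the raw microsecond counter between two polls by the change in its base counter over the same interval:

```python
# Illustrative sketch of how a Type: average counter with Base: ops becomes a
# latency value: delta of the raw microsecond counter divided by delta of ops.
def average_counter(prev_latency_us, curr_latency_us, prev_ops, curr_ops):
    """Return average latency in microseconds for one polling interval."""
    delta_ops = curr_ops - prev_ops
    if delta_ops <= 0:
        return 0.0  # no completed operations in this interval
    return (curr_latency_us - prev_latency_us) / delta_ops

# Hypothetical samples taken one poll apart.
print(average_counter(9_000_000, 9_450_000, 40_000, 41_500))  # 300.0 microseconds
```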

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume latencyUnit: microsecType: averageBase: ops conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume latencyUnit: microsecType: average,no-zero-valuesBase: ops conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_ops","title":"qos_ops","text":"

    This field is the workload's rate of operations that completed during the measurement interval; measured per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume opsUnit: per_secType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_other_ops","title":"qos_other_ops","text":"

    This is the rate of this workload's other operations that completed during the measurement interval.

    API Endpoint Metric Template REST api/cluster/counter/tables/qos other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/workload.yaml ZAPI perf-object-get-instances workload_volume other_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_read_data","title":"qos_read_data","text":"

    This is the amount of data read per second from the filer by the workload.

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume read_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume read_dataUnit: b_per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_read_io_type","title":"qos_read_io_type","text":"

    This is the percentage of read requests served from various components (such as buffer cache, ext_cache, and disk).

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume read_io_type_percentUnit: percentType: percentBase: read_io_type_base conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume read_io_typeUnit: percentType: percentBase: read_io_type_base conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_read_latency","title":"qos_read_latency","text":"

    This is the average response time for read requests that were initiated by the workload.

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume read_latencyUnit: microsecType: averageBase: read_ops conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume read_latencyUnit: microsecType: average,no-zero-valuesBase: read_ops conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_read_ops","title":"qos_read_ops","text":"

    This is the rate of this workload's read operations that completed during the measurement interval.

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume read_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_sequential_reads","title":"qos_sequential_reads","text":"

    This is the percentage of reads, performed on behalf of the workload, that were sequential.

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume sequential_reads_percentUnit: percentType: percentBase: sequential_reads_base conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume sequential_readsUnit: percentType: percent,no-zero-valuesBase: sequential_reads_base conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_sequential_writes","title":"qos_sequential_writes","text":"

    This is the percentage of writes, performed on behalf of the workload, that were sequential. This counter is only available on platforms with more than 4GB of NVRAM.

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume sequential_writes_percentUnit: percentType: percentBase: sequential_writes_base conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume sequential_writesUnit: percentType: percent,no-zero-valuesBase: sequential_writes_base conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_total_data","title":"qos_total_data","text":"

    This is the total amount of data read/written per second from/to the filer by the workload.

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume total_dataUnit: b_per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_write_data","title":"qos_write_data","text":"

    This is the amount of data written per second to the filer by the workload.

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume write_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume write_dataUnit: b_per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_write_latency","title":"qos_write_latency","text":"

    This is the average response time for write requests that were initiated by the workload.

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume write_latencyUnit: microsecType: averageBase: write_ops conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume write_latencyUnit: microsecType: average,no-zero-valuesBase: write_ops conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qos_write_ops","title":"qos_write_ops","text":"

    This is the rate of the workload's write operations that completed during the measurement interval; measured per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/qos_volume write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/workload_volume.yaml ZAPI perf-object-get-instances workload_volume write_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/workload_volume.yaml"},{"location":"ontap-metrics/#qtree_cifs_ops","title":"qtree_cifs_ops","text":"

    Number of CIFS operations per second to the qtree

    API Endpoint Metric Template REST api/cluster/counter/tables/qtree cifs_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/qtree.yaml ZAPI perf-object-get-instances qtree cifs_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#qtree_id","title":"qtree_id","text":"

    The identifier for the qtree, unique within the qtree's volume.

    API Endpoint Metric Template REST api/storage/qtrees id conf/rest/9.12.0/qtree.yaml"},{"location":"ontap-metrics/#qtree_internal_ops","title":"qtree_internal_ops","text":"

    Number of internal operations per second to the qtree, generated by activities such as SnapMirror and backup

    API Endpoint Metric Template REST api/cluster/counter/tables/qtree internal_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/qtree.yaml ZAPI perf-object-get-instances qtree internal_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#qtree_nfs_ops","title":"qtree_nfs_ops","text":"

    Number of NFS operations per second to the qtree

    API Endpoint Metric Template REST api/cluster/counter/tables/qtree nfs_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/qtree.yaml ZAPI perf-object-get-instances qtree nfs_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#qtree_total_ops","title":"qtree_total_ops","text":"

    Sum of NFS ops, CIFS ops, CSS ops, and internal ops

    API Endpoint Metric Template REST api/cluster/counter/tables/qtree total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/qtree.yaml ZAPI perf-object-get-instances qtree total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_disk_limit","title":"quota_disk_limit","text":"

    Maximum amount of disk space, in kilobytes, allowed for the quota target (hard disk space limit). The value is -1 if the limit is unlimited.

    API Endpoint Metric Template REST api/storage/quota/reports space.hard_limit conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter disk-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_disk_used","title":"quota_disk_used","text":"

    Current amount of disk space, in kilobytes, used by the quota target.

    API Endpoint Metric Template REST api/storage/quota/reports space.used.total conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter disk-used conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_disk_used_pct_disk_limit","title":"quota_disk_used_pct_disk_limit","text":"

    Current disk space used expressed as a percentage of hard disk limit.
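
    A worked example with hypothetical values, showing how this percentage relates to quota_disk_used and quota_disk_limit (recall that a hard limit of -1 means the quota is unlimited):

```python
# Hypothetical quota values; -1 for the limit would mean "unlimited".
disk_limit_kb = 1_048_576   # quota_disk_limit: 1 GiB hard limit
disk_used_kb = 786_432      # quota_disk_used

if disk_limit_kb == -1:
    used_pct = None         # unlimited quota: no meaningful percentage
else:
    used_pct = 100 * disk_used_kb / disk_limit_kb

print(used_pct)             # 75.0, i.e. quota_disk_used_pct_disk_limit
```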

    API Endpoint Metric Template REST api/storage/quota/reports space.used.hard_limit_percent conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter disk-used-pct-disk-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_disk_used_pct_soft_disk_limit","title":"quota_disk_used_pct_soft_disk_limit","text":"

    Current disk space used expressed as a percentage of soft disk limit.

    API Endpoint Metric Template REST api/storage/quota/reports space.used.soft_limit_percent conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter disk-used-pct-soft-disk-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_disk_used_pct_threshold","title":"quota_disk_used_pct_threshold","text":"

    Current disk space used expressed as a percentage of threshold.

    API Endpoint Metric Template ZAPI quota-report-iter disk-used-pct-threshold conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_file_limit","title":"quota_file_limit","text":"

    Maximum number of files allowed for the quota target (hard files limit). The value is -1 if the limit is unlimited.

    API Endpoint Metric Template REST api/storage/quota/reports files.hard_limit conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter file-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_files_used","title":"quota_files_used","text":"

    Current number of files used by the quota target.

    API Endpoint Metric Template REST api/storage/quota/reports files.used.total conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter files-used conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_files_used_pct_file_limit","title":"quota_files_used_pct_file_limit","text":"

    Current number of files used expressed as a percentage of hard file limit.

    API Endpoint Metric Template REST api/storage/quota/reports files.used.hard_limit_percent conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter files-used-pct-file-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_files_used_pct_soft_file_limit","title":"quota_files_used_pct_soft_file_limit","text":"

    Current number of files used expressed as a percentage of soft file limit.

    API Endpoint Metric Template REST api/storage/quota/reports files.used.soft_limit_percent conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter files-used-pct-soft-file-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_soft_disk_limit","title":"quota_soft_disk_limit","text":"

    Soft disk space limit, in kilobytes, for the quota target. The value is -1 if the limit is unlimited.

    API Endpoint Metric Template REST api/storage/quota/reports space.soft_limit conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter soft-disk-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_soft_file_limit","title":"quota_soft_file_limit","text":"

    Soft file limit, in number of files, for the quota target. The value is -1 if the limit is unlimited.

    API Endpoint Metric Template REST api/storage/quota/reports files.soft_limit conf/rest/9.12.0/qtree.yaml ZAPI quota-report-iter soft-file-limit conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#quota_threshold","title":"quota_threshold","text":"

    Disk space threshold, in kilobytes, for the quota target. The value is -1 if the limit is unlimited.

    API Endpoint Metric Template ZAPI quota-report-iter threshold conf/zapi/cdot/9.8.0/qtree.yaml"},{"location":"ontap-metrics/#raid_disk_busy","title":"raid_disk_busy","text":"

    The utilization percent of the disk. raid_disk_busy is disk_busy aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent disk_busy_percentUnit: percentType: percentBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_busyUnit: percentType: percentBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_capacity","title":"raid_disk_capacity","text":"

    Disk capacity in MB. raid_disk_capacity is disk_capacity aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent capacityUnit: mbType: rawBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent disk_capacityUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_cp_read_chain","title":"raid_disk_cp_read_chain","text":"

    Average number of blocks transferred in each consistency point read operation during a CP. raid_disk_cp_read_chain is disk_cp_read_chain aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_chainUnit: noneType: averageBase: cp_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_chainUnit: noneType: averageBase: cp_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_cp_read_latency","title":"raid_disk_cp_read_latency","text":"

    Average latency per block in microseconds for consistency point read operations. raid_disk_cp_read_latency is disk_cp_read_latency aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_read_latencyUnit: microsecType: averageBase: cp_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_cp_reads","title":"raid_disk_cp_reads","text":"

    Number of disk read operations initiated each second for consistency point processing. raid_disk_cp_reads is disk_cp_reads aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent cp_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent cp_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_io_pending","title":"raid_disk_io_pending","text":"

    Average number of I/Os issued to the disk for which we have not yet received the response. raid_disk_io_pending is disk_io_pending aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_pendingUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_io_queued","title":"raid_disk_io_queued","text":"

    Number of I/Os queued to the disk but not yet issued. raid_disk_io_queued is disk_io_queued aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent io_queuedUnit: noneType: averageBase: base_for_disk_busy conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_total_data","title":"raid_disk_total_data","text":"

    Total throughput for user operations per second. raid_disk_total_data is disk_total_data aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_dataUnit: b_per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_total_transfers","title":"raid_disk_total_transfers","text":"

    Total number of disk operations involving data transfer initiated per second. raid_disk_total_transfers is disk_total_transfers aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent total_transfer_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent total_transfersUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_read_blocks","title":"raid_disk_user_read_blocks","text":"

    Number of blocks transferred for user read operations per second. raid_disk_user_read_blocks is disk_user_read_blocks aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_read_chain","title":"raid_disk_user_read_chain","text":"

    Average number of blocks transferred in each user read operation. raid_disk_user_read_chain is disk_user_read_chain aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_chainUnit: noneType: averageBase: user_read_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_chainUnit: noneType: averageBase: user_reads conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_read_latency","title":"raid_disk_user_read_latency","text":"

    Average latency per block in microseconds for user read operations. raid_disk_user_read_latency is disk_user_read_latency aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_read_latencyUnit: microsecType: averageBase: user_read_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_reads","title":"raid_disk_user_reads","text":"

    Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. raid_disk_user_reads is disk_user_reads aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_read_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_readsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_write_blocks","title":"raid_disk_user_write_blocks","text":"

    Number of blocks transferred for user write operations per second. raid_disk_user_write_blocks is disk_user_write_blocks aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_block_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_blocksUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_write_chain","title":"raid_disk_user_write_chain","text":"

    Average number of blocks transferred in each user write operation. raid_disk_user_write_chain is disk_user_write_chain aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_chainUnit: noneType: averageBase: user_write_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_chainUnit: noneType: averageBase: user_writes conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_write_latency","title":"raid_disk_user_write_latency","text":"

    Average latency per block in microseconds for user write operations. raid_disk_user_write_latency is disk_user_write_latency aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_block_count conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_write_latencyUnit: microsecType: averageBase: user_write_blocks conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#raid_disk_user_writes","title":"raid_disk_user_writes","text":"

    Number of disk write operations initiated each second for storing data or metadata associated with user requests. raid_disk_user_writes is disk_user_writes aggregated by raid.

    API Endpoint Metric Template REST api/cluster/counter/tables/disk:constituent user_write_countUnit: per_secType: rateBase: conf/restperf/9.12.0/disk.yaml ZAPI perf-object-get-instances disk:constituent user_writesUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#security_audit_destination_port","title":"security_audit_destination_port","text":"

    The destination port used to forward the message.

    API Endpoint Metric Template ZAPI cluster-log-forward-get-iter cluster-log-forward-info.port conf/zapi/cdot/9.8.0/security_audit_dest.yaml"},{"location":"ontap-metrics/#security_certificate_expiry_time","title":"security_certificate_expiry_time","text":"

    Certificate expiration time. Can be provided on POST when creating a self-signed certificate. The expiration time ranges from 1 day to 10 years.

    API Endpoint Metric Template REST api/security/certificates expiry_time conf/rest/9.12.0/security_certificate.yaml ZAPI security-certificate-get-iter certificate-info.expiration-date conf/zapi/cdot/9.8.0/security_certificate.yaml"},{"location":"ontap-metrics/#security_ssh_max_instances","title":"security_ssh_max_instances","text":"

    Maximum possible simultaneous connections.

    API Endpoint Metric Template REST api/security/ssh max_instances conf/rest/9.12.0/security_ssh.yaml"},{"location":"ontap-metrics/#shelf_average_ambient_temperature","title":"shelf_average_ambient_temperature","text":"

    Average temperature of all ambient sensors for shelf in Celsius.

    API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_average_fan_speed","title":"shelf_average_fan_speed","text":"

    Average fan speed for shelf in rpm.

    API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_average_temperature","title":"shelf_average_temperature","text":"

    Average temperature of all non-ambient sensors for shelf in Celsius.

    API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_disk_count","title":"shelf_disk_count","text":"

    Disk count in a shelf.

    API Endpoint Metric Template REST api/storage/shelves disk_count conf/rest/9.12.0/shelf.yaml ZAPI storage-shelf-info-get-iter storage-shelf-info.disk-count conf/zapi/cdot/9.8.0/shelf.yaml"},{"location":"ontap-metrics/#shelf_max_fan_speed","title":"shelf_max_fan_speed","text":"

    Maximum fan speed for shelf in rpm.

    API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_max_temperature","title":"shelf_max_temperature","text":"

    Maximum temperature of all non-ambient sensors for shelf in Celsius.

    API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_min_ambient_temperature","title":"shelf_min_ambient_temperature","text":"

    Minimum temperature of all ambient sensors for shelf in Celsius.

    API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_min_fan_speed","title":"shelf_min_fan_speed","text":"

    Minimum fan speed for shelf in rpm.

    API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_min_temperature","title":"shelf_min_temperature","text":"

    Minimum temperature of all non-ambient sensors for shelf in Celsius.

    API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#shelf_power","title":"shelf_power","text":"

    Power consumed by shelf in Watts.

    API Endpoint Metric Template REST NA Harvest generatedUnit: Type: Base: conf/restperf/9.12.0/disk.yaml ZAPI NA Harvest generatedUnit: Type: Base: conf/zapiperf/cdot/9.8.0/disk.yaml"},{"location":"ontap-metrics/#smb2_close_latency","title":"smb2_close_latency","text":"

    Average latency for SMB2_COM_CLOSE operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 close_latencyUnit: microsecType: averageBase: close_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_close_latency_histogram","title":"smb2_close_latency_histogram","text":"

    Latency histogram for SMB2_COM_CLOSE operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 close_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_close_ops","title":"smb2_close_ops","text":"

    Number of SMB2_COM_CLOSE operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 close_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_create_latency","title":"smb2_create_latency","text":"

    Average latency for SMB2_COM_CREATE operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 create_latencyUnit: microsecType: averageBase: create_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_create_latency_histogram","title":"smb2_create_latency_histogram","text":"

    Latency histogram for SMB2_COM_CREATE operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 create_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_create_ops","title":"smb2_create_ops","text":"

    Number of SMB2_COM_CREATE operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 create_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_lock_latency","title":"smb2_lock_latency","text":"

    Average latency for SMB2_COM_LOCK operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 lock_latencyUnit: microsecType: averageBase: lock_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_lock_latency_histogram","title":"smb2_lock_latency_histogram","text":"

    Latency histogram for SMB2_COM_LOCK operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 lock_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_lock_ops","title":"smb2_lock_ops","text":"

    Number of SMB2_COM_LOCK operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 lock_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_negotiate_latency","title":"smb2_negotiate_latency","text":"

    Average latency for SMB2_COM_NEGOTIATE operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 negotiate_latencyUnit: microsecType: averageBase: negotiate_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_negotiate_ops","title":"smb2_negotiate_ops","text":"

    Number of SMB2_COM_NEGOTIATE operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 negotiate_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_oplock_break_latency","title":"smb2_oplock_break_latency","text":"

    Average latency for SMB2_COM_OPLOCK_BREAK operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 oplock_break_latencyUnit: microsecType: averageBase: oplock_break_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_oplock_break_latency_histogram","title":"smb2_oplock_break_latency_histogram","text":"

    Latency histogram for SMB2_COM_OPLOCK_BREAK operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 oplock_break_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_oplock_break_ops","title":"smb2_oplock_break_ops","text":"

    Number of SMB2_COM_OPLOCK_BREAK operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 oplock_break_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_query_directory_latency","title":"smb2_query_directory_latency","text":"

    Average latency for SMB2_COM_QUERY_DIRECTORY operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 query_directory_latencyUnit: microsecType: averageBase: query_directory_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_query_directory_latency_histogram","title":"smb2_query_directory_latency_histogram","text":"

    Latency histogram for SMB2_COM_QUERY_DIRECTORY operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 query_directory_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_query_directory_ops","title":"smb2_query_directory_ops","text":"

    Number of SMB2_COM_QUERY_DIRECTORY operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 query_directory_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_query_info_latency","title":"smb2_query_info_latency","text":"

    Average latency for SMB2_COM_QUERY_INFO operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 query_info_latencyUnit: microsecType: averageBase: query_info_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_query_info_latency_histogram","title":"smb2_query_info_latency_histogram","text":"

    Latency histogram for SMB2_COM_QUERY_INFO operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 query_info_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_query_info_ops","title":"smb2_query_info_ops","text":"

    Number of SMB2_COM_QUERY_INFO operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 query_info_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_read_latency","title":"smb2_read_latency","text":"

    Average latency for SMB2_COM_READ operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_read_ops","title":"smb2_read_ops","text":"

    Number of SMB2_COM_READ operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_session_setup_latency","title":"smb2_session_setup_latency","text":"

    Average latency for SMB2_COM_SESSION_SETUP operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 session_setup_latencyUnit: microsecType: averageBase: session_setup_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_session_setup_latency_histogram","title":"smb2_session_setup_latency_histogram","text":"

    Latency histogram for SMB2_COM_SESSION_SETUP operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 session_setup_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_session_setup_ops","title":"smb2_session_setup_ops","text":"

    Number of SMB2_COM_SESSION_SETUP operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 session_setup_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_set_info_latency","title":"smb2_set_info_latency","text":"

    Average latency for SMB2_COM_SET_INFO operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 set_info_latencyUnit: microsecType: averageBase: set_info_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_set_info_latency_histogram","title":"smb2_set_info_latency_histogram","text":"

    Latency histogram for SMB2_COM_SET_INFO operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 set_info_latency_histogramUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_set_info_ops","title":"smb2_set_info_ops","text":"

    Number of SMB2_COM_SET_INFO operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 set_info_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_tree_connect_latency","title":"smb2_tree_connect_latency","text":"

    Average latency for SMB2_COM_TREE_CONNECT operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 tree_connect_latencyUnit: microsecType: averageBase: tree_connect_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_tree_connect_ops","title":"smb2_tree_connect_ops","text":"

    Number of SMB2_COM_TREE_CONNECT operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 tree_connect_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_write_latency","title":"smb2_write_latency","text":"

    Average latency for SMB2_COM_WRITE operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 write_latencyUnit: microsecType: averageBase: write_latency_base conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#smb2_write_ops","title":"smb2_write_ops","text":"

    Number of SMB2_COM_WRITE operations

    API Endpoint Metric Template ZAPI perf-object-get-instances smb2 write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/smb2.yaml"},{"location":"ontap-metrics/#snapmirror_break_failed_count","title":"snapmirror_break_failed_count","text":"

    The number of failed SnapMirror break operations for the relationship

    API Endpoint Metric Template REST api/private/cli/snapmirror break_failed_count conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.break-failed-count conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_break_successful_count","title":"snapmirror_break_successful_count","text":"

    The number of successful SnapMirror break operations for the relationship

    API Endpoint Metric Template REST api/private/cli/snapmirror break_successful_count conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.break-successful-count conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_lag_time","title":"snapmirror_lag_time","text":"

    Amount of time, in seconds, since the last SnapMirror transfer
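
    A minimal sketch of a staleness check built on this metric; the one-hour threshold and the helper function below are hypothetical examples for illustration, not part of Harvest and not a recommended value:

```python
# Hypothetical threshold: flag relationships whose last transfer is older than an hour.
LAG_THRESHOLD_SECONDS = 3600

def is_stale(lag_time_seconds: float) -> bool:
    """Return True when snapmirror_lag_time exceeds the threshold."""
    return lag_time_seconds > LAG_THRESHOLD_SECONDS

print(is_stale(5400))  # True: last transfer finished 1.5 hours ago
print(is_stale(900))   # False
```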

    API Endpoint Metric Template REST api/private/cli/snapmirror lag_time conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.lag-time conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_last_transfer_duration","title":"snapmirror_last_transfer_duration","text":"

    Duration of the last SnapMirror transfer in seconds

    API Endpoint Metric Template REST api/private/cli/snapmirror last_transfer_duration conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.last-transfer-duration conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_last_transfer_end_timestamp","title":"snapmirror_last_transfer_end_timestamp","text":"

    The timestamp of the end of the last transfer

    API Endpoint Metric Template REST api/private/cli/snapmirror last_transfer_end_timestamp conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.last-transfer-end-timestamp conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_last_transfer_size","title":"snapmirror_last_transfer_size","text":"

    Size in kilobytes (1024 bytes) of the last transfer

    API Endpoint Metric Template REST api/private/cli/snapmirror last_transfer_size conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.last-transfer-size conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_newest_snapshot_timestamp","title":"snapmirror_newest_snapshot_timestamp","text":"

    The timestamp of the newest Snapshot copy on the destination volume

    API Endpoint Metric Template REST api/private/cli/snapmirror newest_snapshot_timestamp conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.newest-snapshot-timestamp conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_resync_failed_count","title":"snapmirror_resync_failed_count","text":"

    The number of failed SnapMirror resync operations for the relationship

    API Endpoint Metric Template REST api/private/cli/snapmirror resync_failed_count conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.resync-failed-count conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_resync_successful_count","title":"snapmirror_resync_successful_count","text":"

    The number of successful SnapMirror resync operations for the relationship

    API Endpoint Metric Template REST api/private/cli/snapmirror resync_successful_count conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.resync-successful-count conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_total_transfer_bytes","title":"snapmirror_total_transfer_bytes","text":"

    Cumulative bytes transferred for the relationship

    API Endpoint Metric Template REST api/private/cli/snapmirror total_transfer_bytes conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.total-transfer-bytes conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_total_transfer_time_secs","title":"snapmirror_total_transfer_time_secs","text":"

    Cumulative total transfer time in seconds for the relationship

    API Endpoint Metric Template REST api/private/cli/snapmirror total_transfer_time_secs conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.total-transfer-time-secs conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_update_failed_count","title":"snapmirror_update_failed_count","text":"

    The number of failed SnapMirror update operations for the relationship

    API Endpoint Metric Template REST api/private/cli/snapmirror update_failed_count conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.update-failed-count conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapmirror_update_successful_count","title":"snapmirror_update_successful_count","text":"

    The number of successful SnapMirror update operations for the relationship

    API Endpoint Metric Template REST api/private/cli/snapmirror update_successful_count conf/rest/9.12.0/snapmirror.yaml ZAPI snapmirror-get-iter snapmirror-info.update-successful-count conf/zapi/cdot/9.8.0/snapmirror.yaml"},{"location":"ontap-metrics/#snapshot_policy_total_schedules","title":"snapshot_policy_total_schedules","text":"

    Total number of schedules in this policy

    API Endpoint Metric Template REST api/private/cli/snapshot/policy total_schedules conf/rest/9.12.0/snapshotpolicy.yaml ZAPI snapshot-policy-get-iter snapshot-policy-info.total-schedules conf/zapi/cdot/9.8.0/snapshotpolicy.yaml"},{"location":"ontap-metrics/#svm_cifs_connections","title":"svm_cifs_connections","text":"

    Number of connections
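
    For readers who want to inspect the underlying counter table directly, the following is a minimal sketch using Python's requests library. The hostname and credentials are placeholders, the path is the one shown in the REST column for these svm_cifs counters, and depending on the ONTAP version additional query parameters may be needed to retrieve rows and counter values:

```python
import requests

CLUSTER = "cluster.example.com"   # hypothetical cluster management LIF
AUTH = ("admin", "password")      # hypothetical credentials

# Fetch the counter table that backs the svm_cifs_* metrics.
resp = requests.get(
    f"https://{CLUSTER}/api/cluster/counter/tables/svm_cifs",
    auth=AUTH,
    verify=False,  # lab use only; prefer a trusted certificate in production
)
resp.raise_for_status()
print(resp.json())
```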

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs connectionsUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver connectionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_established_sessions","title":"svm_cifs_established_sessions","text":"

    Number of established SMB and SMB2 sessions

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs established_sessionsUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver established_sessionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_latency","title":"svm_cifs_latency","text":"

    Average latency for CIFS operations

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs latencyUnit: microsecType: averageBase: latency_base conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_latencyUnit: microsecType: averageBase: cifs_latency_base conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_op_count","title":"svm_cifs_op_count","text":"

    Array of select CIFS operation counts

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs op_countUnit: noneType: rateBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_op_countUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_open_files","title":"svm_cifs_open_files","text":"

    Number of open files over SMB and SMB2

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs open_filesUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver open_filesUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_ops","title":"svm_cifs_ops","text":"

    Total number of CIFS operations

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_read_latency","title":"svm_cifs_read_latency","text":"

    Average latency for CIFS read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs average_read_latencyUnit: microsecType: averageBase: total_read_ops conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_read_latencyUnit: microsecType: averageBase: cifs_read_ops conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_read_ops","title":"svm_cifs_read_ops","text":"

    Total number of CIFS read operations

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs total_read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_signed_sessions","title":"svm_cifs_signed_sessions","text":"

    Number of signed SMB and SMB2 sessions.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs signed_sessionsUnit: noneType: rawBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver signed_sessionsUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_write_latency","title":"svm_cifs_write_latency","text":"

    Average latency for CIFS write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs average_write_latencyUnit: microsecType: averageBase: total_write_ops conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_write_latencyUnit: microsecType: averageBase: cifs_write_ops conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_cifs_write_ops","title":"svm_cifs_write_ops","text":"

    Total number of CIFS write operations

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_cifs total_write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/cifs_vserver.yaml ZAPI perf-object-get-instances cifs:vserver cifs_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml"},{"location":"ontap-metrics/#svm_nfs_access_avg_latency","title":"svm_nfs_access_avg_latency","text":"

    Average latency of Access procedure requests. The counter keeps track of the average response time of Access requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 access.average_latencyUnit: microsecType: averageBase: access.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 access_avg_latencyUnit: microsecType: average,no-zero-valuesBase: access_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_access_total","title":"svm_nfs_access_total","text":"

    Total number of Access procedure requests. It is the total number of access success and access error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 access.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 access_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_backchannel_ctl_avg_latency","title":"svm_nfs_backchannel_ctl_avg_latency","text":"

    Average latency of BACKCHANNEL_CTL operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 backchannel_ctl.average_latencyUnit: microsecType: averageBase: backchannel_ctl.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 backchannel_ctl.average_latencyUnit: microsecType: averageBase: backchannel_ctl.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 backchannel_ctl_avg_latencyUnit: microsecType: average,no-zero-valuesBase: backchannel_ctl_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 backchannel_ctl_avg_latencyUnit: microsecType: average,no-zero-valuesBase: backchannel_ctl_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_backchannel_ctl_total","title":"svm_nfs_backchannel_ctl_total","text":"

    Total number of BACKCHANNEL_CTL operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 backchannel_ctl.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 backchannel_ctl.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 backchannel_ctl_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 backchannel_ctl_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_bind_conn_to_session_avg_latency","title":"svm_nfs_bind_conn_to_session_avg_latency","text":"

    Average latency of BIND_CONN_TO_SESSION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 bind_connections_to_session.average_latencyUnit: microsecType: averageBase: bind_connections_to_session.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 bind_conn_to_session.average_latencyUnit: microsecType: averageBase: bind_conn_to_session.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 bind_conn_to_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: bind_conn_to_session_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 bind_conn_to_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: bind_conn_to_session_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_bind_conn_to_session_total","title":"svm_nfs_bind_conn_to_session_total","text":"

    Total number of BIND_CONN_TO_SESSION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 bind_connections_to_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 bind_conn_to_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 bind_conn_to_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 bind_conn_to_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_close_avg_latency","title":"svm_nfs_close_avg_latency","text":"

    Average latency of CLOSE procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 close.average_latencyUnit: microsecType: averageBase: close.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 close.average_latencyUnit: microsecType: averageBase: close.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 close.average_latencyUnit: microsecType: averageBase: close.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 close_avg_latencyUnit: microsecType: average,no-zero-valuesBase: close_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 close_avg_latencyUnit: microsecType: average,no-zero-valuesBase: close_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 close_avg_latencyUnit: microsecType: average,no-zero-valuesBase: close_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_close_total","title":"svm_nfs_close_total","text":"

    Total number of CLOSE procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 close.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 close.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 close.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 close_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 close_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 close_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_commit_avg_latency","title":"svm_nfs_commit_avg_latency","text":"

    Average latency of Commit procedure requests. The counter keeps track of the average response time of Commit requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 commit.average_latencyUnit: microsecType: averageBase: commit.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 commit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: commit_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_commit_total","title":"svm_nfs_commit_total","text":"

    Total number of Commit procedure requests. It is the total number of Commit success and Commit error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 commit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 commit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_create_avg_latency","title":"svm_nfs_create_avg_latency","text":"

    Average latency of Create procedure requests. The counter keeps track of the average response time of Create requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 create.average_latencyUnit: microsecType: averageBase: create.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 create_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_create_session_avg_latency","title":"svm_nfs_create_session_avg_latency","text":"

    Average latency of CREATE_SESSION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 create_session.average_latencyUnit: microsecType: averageBase: create_session.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 create_session.average_latencyUnit: microsecType: averageBase: create_session.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 create_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_session_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 create_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: create_session_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_create_session_total","title":"svm_nfs_create_session_total","text":"

    Total number of CREATE_SESSION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 create_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 create_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 create_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 create_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_create_total","title":"svm_nfs_create_total","text":"

    Total number of Create procedure requests. It is the total number of create success and create error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 create.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 create_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_delegpurge_avg_latency","title":"svm_nfs_delegpurge_avg_latency","text":"

    Average latency of DELEGPURGE procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 delegpurge.average_latencyUnit: microsecType: averageBase: delegpurge.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 delegpurge.average_latencyUnit: microsecType: averageBase: delegpurge.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 delegpurge.average_latencyUnit: microsecType: averageBase: delegpurge.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 delegpurge_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegpurge_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 delegpurge_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegpurge_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 delegpurge_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegpurge_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_delegpurge_total","title":"svm_nfs_delegpurge_total","text":"

    Total number of DELEGPURGE procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 delegpurge.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 delegpurge.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 delegpurge.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 delegpurge_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 delegpurge_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 delegpurge_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_delegreturn_avg_latency","title":"svm_nfs_delegreturn_avg_latency","text":"

    Average latency of DELEGRETURN procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 delegreturn.average_latencyUnit: microsecType: averageBase: delegreturn.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 delegreturn.average_latencyUnit: microsecType: averageBase: delegreturn.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 delegreturn.average_latencyUnit: microsecType: averageBase: delegreturn.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 delegreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegreturn_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 delegreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegreturn_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 delegreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: delegreturn_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_delegreturn_total","title":"svm_nfs_delegreturn_total","text":"

    Total number of DELEGRETURN procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 delegreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 delegreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 delegreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 delegreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 delegreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 delegreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_destroy_clientid_avg_latency","title":"svm_nfs_destroy_clientid_avg_latency","text":"

    Average latency of DESTROY_CLIENTID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 destroy_clientid.average_latencyUnit: microsecType: averageBase: destroy_clientid.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 destroy_clientid.average_latencyUnit: microsecType: averageBase: destroy_clientid.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 destroy_clientid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_clientid_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 destroy_clientid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_clientid_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_destroy_clientid_total","title":"svm_nfs_destroy_clientid_total","text":"

    Total number of DESTROY_CLIENTID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 destroy_clientid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 destroy_clientid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 destroy_clientid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 destroy_clientid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_destroy_session_avg_latency","title":"svm_nfs_destroy_session_avg_latency","text":"

    Average latency of DESTROY_SESSION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 destroy_session.average_latencyUnit: microsecType: averageBase: destroy_session.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 destroy_session.average_latencyUnit: microsecType: averageBase: destroy_session.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 destroy_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_session_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 destroy_session_avg_latencyUnit: microsecType: average,no-zero-valuesBase: destroy_session_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_destroy_session_total","title":"svm_nfs_destroy_session_total","text":"

    Total number of DESTROY_SESSION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 destroy_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 destroy_session.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 destroy_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 destroy_session_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_exchange_id_avg_latency","title":"svm_nfs_exchange_id_avg_latency","text":"

    Average latency of EXCHANGE_ID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 exchange_id.average_latencyUnit: microsecType: averageBase: exchange_id.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 exchange_id.average_latencyUnit: microsecType: averageBase: exchange_id.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 exchange_id_avg_latencyUnit: microsecType: average,no-zero-valuesBase: exchange_id_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 exchange_id_avg_latencyUnit: microsecType: average,no-zero-valuesBase: exchange_id_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_exchange_id_total","title":"svm_nfs_exchange_id_total","text":"

    Total number of EXCHANGE_ID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 exchange_id.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 exchange_id.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 exchange_id_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 exchange_id_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_free_stateid_avg_latency","title":"svm_nfs_free_stateid_avg_latency","text":"

    Average latency of FREE_STATEID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 free_stateid.average_latencyUnit: microsecType: averageBase: free_stateid.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 free_stateid.average_latencyUnit: microsecType: averageBase: free_stateid.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 free_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: free_stateid_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 free_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: free_stateid_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_free_stateid_total","title":"svm_nfs_free_stateid_total","text":"

    Total number of FREE_STATEID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 free_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 free_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 free_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 free_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_fsinfo_avg_latency","title":"svm_nfs_fsinfo_avg_latency","text":"

    Average latency of FSInfo procedure requests. The counter keeps track of the average response time of FSInfo requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 fsinfo.average_latencyUnit: microsecType: averageBase: fsinfo.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 fsinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: fsinfo_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_fsinfo_total","title":"svm_nfs_fsinfo_total","text":"

    Total number of FSInfo procedure requests. It is the total number of FSInfo success and FSInfo error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 fsinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 fsinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_fsstat_avg_latency","title":"svm_nfs_fsstat_avg_latency","text":"

    Average latency of FSStat procedure requests. The counter keeps track of the average response time of FSStat requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 fsstat.average_latencyUnit: microsecType: averageBase: fsstat.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 fsstat_avg_latencyUnit: microsecType: average,no-zero-valuesBase: fsstat_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_fsstat_total","title":"svm_nfs_fsstat_total","text":"

    Total number of FSStat procedure requests. It is the total number of FSStat success and FSStat error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 fsstat.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 fsstat_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_get_dir_delegation_avg_latency","title":"svm_nfs_get_dir_delegation_avg_latency","text":"

    Average latency of GET_DIR_DELEGATION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 get_dir_delegation.average_latencyUnit: microsecType: averageBase: get_dir_delegation.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 get_dir_delegation.average_latencyUnit: microsecType: averageBase: get_dir_delegation.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 get_dir_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: get_dir_delegation_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 get_dir_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: get_dir_delegation_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_get_dir_delegation_total","title":"svm_nfs_get_dir_delegation_total","text":"

    Total number of GET_DIR_DELEGATION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 get_dir_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 get_dir_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 get_dir_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 get_dir_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getattr_avg_latency","title":"svm_nfs_getattr_avg_latency","text":"

    Average latency of GetAttr procedure requests. This counter keeps track of the average response time of GetAttr requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getattr.average_latencyUnit: microsecType: averageBase: getattr.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getattr_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getattr_total","title":"svm_nfs_getattr_total","text":"

    Total number of Getattr procedure requests. It is the total number of getattr success and getattr error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getdeviceinfo_avg_latency","title":"svm_nfs_getdeviceinfo_avg_latency","text":"

    Average latency of GETDEVICEINFO operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 getdeviceinfo.average_latencyUnit: microsecType: averageBase: getdeviceinfo.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getdeviceinfo.average_latencyUnit: microsecType: averageBase: getdeviceinfo.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 getdeviceinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdeviceinfo_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getdeviceinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdeviceinfo_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getdeviceinfo_total","title":"svm_nfs_getdeviceinfo_total","text":"

    Total number of GETDEVICEINFO operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 getdeviceinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getdeviceinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 getdeviceinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getdeviceinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getdevicelist_avg_latency","title":"svm_nfs_getdevicelist_avg_latency","text":"

    Average latency of GETDEVICELIST operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 getdevicelist.average_latencyUnit: microsecType: averageBase: getdevicelist.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getdevicelist.average_latencyUnit: microsecType: averageBase: getdevicelist.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 getdevicelist_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdevicelist_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getdevicelist_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getdevicelist_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getdevicelist_total","title":"svm_nfs_getdevicelist_total","text":"

    Total number of GETDEVICELIST operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 getdevicelist.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getdevicelist.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 getdevicelist_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getdevicelist_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getfh_avg_latency","title":"svm_nfs_getfh_avg_latency","text":"

    Average latency of GETFH procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 getfh.average_latencyUnit: microsecType: averageBase: getfh.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 getfh.average_latencyUnit: microsecType: averageBase: getfh.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getfh.average_latencyUnit: microsecType: averageBase: getfh.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 getfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getfh_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 getfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: getfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_getfh_total","title":"svm_nfs_getfh_total","text":"

    Total number of GETFH procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 getfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 getfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 getfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 getfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 getfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 getfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_latency","title":"svm_nfs_latency","text":"

    Average latency of NFSv3 requests. This counter keeps track of the average response time of NFSv3 requests.
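
The REST rows in the table below read from ONTAP's counter tables API (for example, api/cluster/counter/tables/svm_nfs_v3). If you want to inspect the raw counters that feed this metric, the following is a minimal sketch; the cluster address and credentials are placeholders, and the exact rows and fields returned depend on your ONTAP version.

```python
# Hypothetical example: fetch the svm_nfs_v3 counter table directly from ONTAP's REST API.
# CLUSTER, USER, and PASSWORD are placeholders; adjust TLS verification for your environment.
import requests

CLUSTER = "cluster.example.com"
USER, PASSWORD = "admin", "password"

url = f"https://{CLUSTER}/api/cluster/counter/tables/svm_nfs_v3/rows"
resp = requests.get(url, auth=(USER, PASSWORD), params={"fields": "*"}, verify=False)
resp.raise_for_status()

# Print each row's id and its counter name/value pairs (structure may vary by ONTAP release).
for row in resp.json().get("records", []):
    print(row.get("id"), {c.get("name"): c.get("value") for c in row.get("counters", [])})
```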

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 latencyUnit: microsecType: average,no-zero-valuesBase: total_ops conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_layoutcommit_avg_latency","title":"svm_nfs_layoutcommit_avg_latency","text":"

    Average latency of LAYOUTCOMMIT operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 layoutcommit.average_latencyUnit: microsecType: averageBase: layoutcommit.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 layoutcommit.average_latencyUnit: microsecType: averageBase: layoutcommit.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 layoutcommit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutcommit_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 layoutcommit_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutcommit_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_layoutcommit_total","title":"svm_nfs_layoutcommit_total","text":"

    Total number of LAYOUTCOMMIT operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 layoutcommit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 layoutcommit.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 layoutcommit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 layoutcommit_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_layoutget_avg_latency","title":"svm_nfs_layoutget_avg_latency","text":"

    Average latency of LAYOUTGET operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 layoutget.average_latencyUnit: microsecType: averageBase: layoutget.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 layoutget.average_latencyUnit: microsecType: averageBase: layoutget.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 layoutget_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutget_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 layoutget_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutget_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_layoutget_total","title":"svm_nfs_layoutget_total","text":"

    Total number of LAYOUTGET operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 layoutget.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 layoutget.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 layoutget_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 layoutget_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_layoutreturn_avg_latency","title":"svm_nfs_layoutreturn_avg_latency","text":"

    Average latency of LAYOUTRETURN operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 layoutreturn.average_latencyUnit: microsecType: averageBase: layoutreturn.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 layoutreturn.average_latencyUnit: microsecType: averageBase: layoutreturn.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 layoutreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutreturn_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 layoutreturn_avg_latencyUnit: microsecType: average,no-zero-valuesBase: layoutreturn_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_layoutreturn_total","title":"svm_nfs_layoutreturn_total","text":"

    Total number of LAYOUTRETURN operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 layoutreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 layoutreturn.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 layoutreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 layoutreturn_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_link_avg_latency","title":"svm_nfs_link_avg_latency","text":"

    Average latency of Link procedure requests. The counter keeps track of the average response time of Link requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 link.average_latencyUnit: microsecType: averageBase: link.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 link_avg_latencyUnit: microsecType: average,no-zero-valuesBase: link_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_link_total","title":"svm_nfs_link_total","text":"

    Total number of Link procedure requests. It is the total number of Link success and Link error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 link.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 link_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lock_avg_latency","title":"svm_nfs_lock_avg_latency","text":"

    Average latency of LOCK procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 lock.average_latencyUnit: microsecType: averageBase: lock.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lock.average_latencyUnit: microsecType: averageBase: lock.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lock.average_latencyUnit: microsecType: averageBase: lock.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 lock_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lock_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lock_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lock_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lock_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lock_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lock_total","title":"svm_nfs_lock_total","text":"

    Total number of LOCK procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 lock.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lock.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lock.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 lock_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lock_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lock_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lockt_avg_latency","title":"svm_nfs_lockt_avg_latency","text":"

    Average latency of LOCKT procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 lockt.average_latencyUnit: microsecType: averageBase: lockt.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lockt.average_latencyUnit: microsecType: averageBase: lockt.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lockt.average_latencyUnit: microsecType: averageBase: lockt.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 lockt_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lockt_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lockt_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lockt_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lockt_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lockt_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lockt_total","title":"svm_nfs_lockt_total","text":"

    Total number of LOCKT procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 lockt.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lockt.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lockt.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 lockt_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lockt_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lockt_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_locku_avg_latency","title":"svm_nfs_locku_avg_latency","text":"

    Average latency of LOCKU procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 locku.average_latencyUnit: microsecType: averageBase: locku.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 locku.average_latencyUnit: microsecType: averageBase: locku.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 locku.average_latencyUnit: microsecType: averageBase: locku.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 locku_avg_latencyUnit: microsecType: average,no-zero-valuesBase: locku_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 locku_avg_latencyUnit: microsecType: average,no-zero-valuesBase: locku_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 locku_avg_latencyUnit: microsecType: average,no-zero-valuesBase: locku_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_locku_total","title":"svm_nfs_locku_total","text":"

    Total number of LOCKU procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 locku.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 locku.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 locku.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 locku_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 locku_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 locku_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lookup_avg_latency","title":"svm_nfs_lookup_avg_latency","text":"

    Average latency of LookUp procedure requests. This shows the average time it takes for the LookUp operation to reply to the request.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lookup.average_latencyUnit: microsecType: averageBase: lookup.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lookup_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookup_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lookup_total","title":"svm_nfs_lookup_total","text":"

    Total number of Lookup procedure requests. It is the total number of lookup success and lookup error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lookup.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lookup_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lookupp_avg_latency","title":"svm_nfs_lookupp_avg_latency","text":"

    Average latency of LOOKUPP procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 lookupp.average_latencyUnit: microsecType: averageBase: lookupp.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lookupp.average_latencyUnit: microsecType: averageBase: lookupp.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lookupp.average_latencyUnit: microsecType: averageBase: lookupp.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 lookupp_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookupp_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lookupp_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookupp_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lookupp_avg_latencyUnit: microsecType: average,no-zero-valuesBase: lookupp_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_lookupp_total","title":"svm_nfs_lookupp_total","text":"

    Total number of LOOKUPP procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 lookupp.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 lookupp.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 lookupp.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 lookupp_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 lookupp_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 lookupp_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_mkdir_avg_latency","title":"svm_nfs_mkdir_avg_latency","text":"

    Average latency of MkDir procedure requests. The counter keeps track of the average response time of MkDir requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 mkdir.average_latencyUnit: microsecType: averageBase: mkdir.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 mkdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: mkdir_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_mkdir_total","title":"svm_nfs_mkdir_total","text":"

    Total number of MkDir procedure requests. It is the total number of MkDir success and MkDir error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 mkdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 mkdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_mknod_avg_latency","title":"svm_nfs_mknod_avg_latency","text":"

    Average latency of MkNod procedure requests. The counter keeps track of the average response time of MkNod requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 mknod.average_latencyUnit: microsecType: averageBase: mknod.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 mknod_avg_latencyUnit: microsecType: average,no-zero-valuesBase: mknod_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_mknod_total","title":"svm_nfs_mknod_total","text":"

    Total number of MkNod procedure requests. It is the total number of MkNod success and MkNod error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 mknod.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 mknod_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_null_avg_latency","title":"svm_nfs_null_avg_latency","text":"

    Average latency of Null procedure requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 null.average_latencyUnit: microsecType: averageBase: null.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 null_avg_latencyUnit: microsecType: average,no-zero-valuesBase: null_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_null_total","title":"svm_nfs_null_total","text":"

    Total number of Null procedure requests. It is the total number of null success and null error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 null.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 null_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_nverify_avg_latency","title":"svm_nfs_nverify_avg_latency","text":"

    Average latency of NVERIFY procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 nverify.average_latencyUnit: microsecType: averageBase: nverify.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 nverify.average_latencyUnit: microsecType: averageBase: nverify.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 nverify.average_latencyUnit: microsecType: averageBase: nverify.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 nverify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: nverify_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 nverify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: nverify_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 nverify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: nverify_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_nverify_total","title":"svm_nfs_nverify_total","text":"

    Total number of NVERIFY procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 nverify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 nverify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 nverify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 nverify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 nverify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 nverify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_open_avg_latency","title":"svm_nfs_open_avg_latency","text":"

    Average latency of OPEN procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 open.average_latencyUnit: microsecType: averageBase: open.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 open.average_latencyUnit: microsecType: averageBase: open.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 open.average_latencyUnit: microsecType: averageBase: open.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 open_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 open_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 open_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_open_confirm_avg_latency","title":"svm_nfs_open_confirm_avg_latency","text":"

    Average latency of OPEN_CONFIRM procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 open_confirm.average_latencyUnit: microsecType: averageBase: open_confirm.total conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 open_confirm_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_confirm_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_open_confirm_total","title":"svm_nfs_open_confirm_total","text":"

    Total number of OPEN_CONFIRM procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 open_confirm.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 open_confirm_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_open_downgrade_avg_latency","title":"svm_nfs_open_downgrade_avg_latency","text":"

    Average latency of OPEN_DOWNGRADE procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 open_downgrade.average_latencyUnit: microsecType: averageBase: open_downgrade.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 open_downgrade.average_latencyUnit: microsecType: averageBase: open_downgrade.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 open_downgrade.average_latencyUnit: microsecType: averageBase: open_downgrade.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 open_downgrade_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_downgrade_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 open_downgrade_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_downgrade_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 open_downgrade_avg_latencyUnit: microsecType: average,no-zero-valuesBase: open_downgrade_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_open_downgrade_total","title":"svm_nfs_open_downgrade_total","text":"

    Total number of OPEN_DOWNGRADE procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 open_downgrade.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 open_downgrade.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 open_downgrade.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 open_downgrade_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 open_downgrade_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 open_downgrade_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_open_total","title":"svm_nfs_open_total","text":"

    Total number of OPEN procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 open.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 open.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 open.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 open_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 open_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 open_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_openattr_avg_latency","title":"svm_nfs_openattr_avg_latency","text":"

    Average latency of OPENATTR procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 openattr.average_latencyUnit: microsecType: averageBase: openattr.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 openattr.average_latencyUnit: microsecType: averageBase: openattr.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 openattr.average_latencyUnit: microsecType: averageBase: openattr.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 openattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: openattr_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 openattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: openattr_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 openattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: openattr_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_openattr_total","title":"svm_nfs_openattr_total","text":"

    Total number of OPENATTR procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 openattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 openattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 openattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 openattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 openattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 openattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_ops","title":"svm_nfs_ops","text":"

    Total number of NFSv3 procedure requests per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 nfsv3_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 total_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 total_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 total_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_pathconf_avg_latency","title":"svm_nfs_pathconf_avg_latency","text":"
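    Once Harvest exports these counters to Prometheus, the per-second rate above is available under the metric name in the section title (svm_nfs_ops). A minimal sketch of reading it back through the Prometheus instant-query API follows; the Prometheus address (localhost:9090) and the svm label name are assumptions, not part of this reference.

```python
import json
import urllib.parse
import urllib.request

# Assumptions: Prometheus at localhost:9090 and an `svm` label on the exported series.
PROM_URL = "http://localhost:9090/api/v1/query"
QUERY = "svm_nfs_ops"  # metric name documented above

url = f"{PROM_URL}?{urllib.parse.urlencode({'query': QUERY})}"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

# Print NFS ops/sec per SVM from the instant-query result.
for series in payload["data"]["result"]:
    svm = series["metric"].get("svm", "<no svm label>")
    value = float(series["value"][1])
    print(f"{svm}: {value:.1f} NFS ops/s")
```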

    Average latency of PathConf procedure requests. The counter keeps track of the average response time of PathConf requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 pathconf.average_latencyUnit: microsecType: averageBase: pathconf.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 pathconf_avg_latencyUnit: microsecType: average,no-zero-valuesBase: pathconf_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_pathconf_total","title":"svm_nfs_pathconf_total","text":"

    Total number of PathConf procedure requests. It is the total number of PathConf success and PathConf error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 pathconf.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 pathconf_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_putfh_avg_latency","title":"svm_nfs_putfh_avg_latency","text":"

    Average latency of PUTFH procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 putfh.average_latencyUnit: microsecType: averageBase: putfh.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 putfh.average_latencyUnit: noneType: deltaBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 putfh.average_latencyUnit: microsecType: averageBase: putfh.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 putfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putfh_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 putfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 putfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_putfh_total","title":"svm_nfs_putfh_total","text":"

    Total number of PUTFH procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 putfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 putfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 putfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 putfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 putfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 putfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_putpubfh_avg_latency","title":"svm_nfs_putpubfh_avg_latency","text":"

    Average latency of PUTPUBFH procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 putpubfh.average_latencyUnit: microsecType: averageBase: putpubfh.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 putpubfh.average_latencyUnit: microsecType: averageBase: putpubfh.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 putpubfh.average_latencyUnit: microsecType: averageBase: putpubfh.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 putpubfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putpubfh_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 putpubfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putpubfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 putpubfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putpubfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_putpubfh_total","title":"svm_nfs_putpubfh_total","text":"

    Total number of PUTPUBFH procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 putpubfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 putpubfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 putpubfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 putpubfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 putpubfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 putpubfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_putrootfh_avg_latency","title":"svm_nfs_putrootfh_avg_latency","text":"

    Average latency of PUTROOTFH procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 putrootfh.average_latencyUnit: microsecType: averageBase: putrootfh.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 putrootfh.average_latencyUnit: microsecType: averageBase: putrootfh.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 putrootfh.average_latencyUnit: microsecType: averageBase: putrootfh.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 putrootfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putrootfh_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 putrootfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putrootfh_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 putrootfh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: putrootfh_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_putrootfh_total","title":"svm_nfs_putrootfh_total","text":"

    Total number of PUTROOTFH procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 putrootfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 putrootfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 putrootfh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 putrootfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 putrootfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 putrootfh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_read_avg_latency","title":"svm_nfs_read_avg_latency","text":"

    Average latency of Read procedure requests. The counter keeps track of the average response time of Read requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 read.average_latencyUnit: microsecType: averageBase: read.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 read_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_read_ops","title":"svm_nfs_read_ops","text":"
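    The Type: average annotation with Base: read.total means neither collector receives a ready-made average: the raw latency counter accumulates microseconds, and the average is derived by dividing its change by the change in the base counter (the operation count) over the same polling interval. A minimal sketch of that derivation, with made-up sample values:

```python
# Two successive raw samples of the read latency counter (accumulated microseconds)
# and its base counter read.total (accumulated operation count). Values are made up.
lat_prev, lat_curr = 9_200_000, 9_260_000      # accumulated microsec
ops_prev, ops_curr = 41_000, 41_400            # accumulated operations

delta_ops = ops_curr - ops_prev
if delta_ops == 0:
    avg_latency_us = 0.0                       # no reads in the interval
else:
    avg_latency_us = (lat_curr - lat_prev) / delta_ops

print(f"svm_nfs_read_avg_latency ~ {avg_latency_us:.1f} microsec")
# -> svm_nfs_read_avg_latency ~ 150.0 microsec
```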

    Total observed NFSv3 read operations per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 nfsv3_read_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_read_symlink_avg_latency","title":"svm_nfs_read_symlink_avg_latency","text":"

    Average latency of ReadSymLink procedure requests. The counter keeps track of the average response time of ReadSymLink requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 read_symlink.average_latencyUnit: microsecType: averageBase: read_symlink.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 read_symlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: read_symlink_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_read_symlink_total","title":"svm_nfs_read_symlink_total","text":"

    Total number of ReadSymLink procedure requests. It is the total number of read symlink success and read symlink error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 read_symlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 read_symlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_read_throughput","title":"svm_nfs_read_throughput","text":"

    Rate of NFSv3 read data transfers per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 total.read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 total.read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 total.read_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 nfsv3_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 nfs4_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 nfs41_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 nfs42_read_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_read_total","title":"svm_nfs_read_total","text":"

    Total number of Read procedure requests. It is the total number of Read success and Read error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 read.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 read_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_readdir_avg_latency","title":"svm_nfs_readdir_avg_latency","text":"

    Average latency of ReadDir procedure requests. The counter keeps track of the average response time of ReadDir requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 readdir.average_latencyUnit: microsecType: averageBase: readdir.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 readdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdir_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_readdir_total","title":"svm_nfs_readdir_total","text":"

    Total number of ReadDir procedure requests. It is the total number of ReadDir success and ReadDir error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 readdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 readdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_readdirplus_avg_latency","title":"svm_nfs_readdirplus_avg_latency","text":"

    Average latency of ReadDirPlus procedure requests. The counter keeps track of the average response time of ReadDirPlus requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 readdirplus.average_latencyUnit: microsecType: averageBase: readdirplus.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 readdirplus_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readdirplus_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_readdirplus_total","title":"svm_nfs_readdirplus_total","text":"

    Total number of ReadDirPlus procedure requests. It is the total number of ReadDirPlus success and ReadDirPlus error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 readdirplus.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 readdirplus_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_readlink_avg_latency","title":"svm_nfs_readlink_avg_latency","text":"

    Average latency of READLINK procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 readlink.average_latencyUnit: microsecType: averageBase: readlink.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 readlink.average_latencyUnit: microsecType: averageBase: readlink.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 readlink.average_latencyUnit: microsecType: averageBase: readlink.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 readlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readlink_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 readlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readlink_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 readlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: readlink_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_readlink_total","title":"svm_nfs_readlink_total","text":"

    Total number of READLINK procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 readlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 readlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 readlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 readlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 readlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 readlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_reclaim_complete_avg_latency","title":"svm_nfs_reclaim_complete_avg_latency","text":"

    Average latency of RECLAIM_COMPLETE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 reclaim_complete.average_latencyUnit: microsecType: averageBase: reclaim_complete.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 reclaim_complete.average_latencyUnit: microsecType: averageBase: reclaim_complete.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 reclaim_complete_avg_latencyUnit: microsecType: average,no-zero-valuesBase: reclaim_complete_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 reclaim_complete_avg_latencyUnit: microsecType: average,no-zero-valuesBase: reclaim_complete_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_reclaim_complete_total","title":"svm_nfs_reclaim_complete_total","text":"

    Total number of RECLAIM_COMPLETE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 reclaim_complete.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 reclaim_complete.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 reclaim_complete_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 reclaim_complete_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_release_lock_owner_avg_latency","title":"svm_nfs_release_lock_owner_avg_latency","text":"

    Average latency of RELEASE_LOCKOWNER procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 release_lock_owner.average_latencyUnit: microsecType: averageBase: release_lock_owner.total conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 release_lock_owner_avg_latencyUnit: microsecType: average,no-zero-valuesBase: release_lock_owner_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_release_lock_owner_total","title":"svm_nfs_release_lock_owner_total","text":"

    Total number of RELEASE_LOCKOWNER procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 release_lock_owner.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 release_lock_owner_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_remove_avg_latency","title":"svm_nfs_remove_avg_latency","text":"

    Average latency of Remove procedure requests. The counter keeps track of the average response time of Remove requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 remove.average_latencyUnit: microsecType: averageBase: remove.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 remove_avg_latencyUnit: microsecType: average,no-zero-valuesBase: remove_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_remove_total","title":"svm_nfs_remove_total","text":"

    Total number of Remove procedure requests. It is the total number of Remove success and Remove error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 remove.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 remove_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_rename_avg_latency","title":"svm_nfs_rename_avg_latency","text":"

    Average latency of Rename procedure requests. The counter keeps track of the average response time of Rename requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 rename.average_latencyUnit: microsecType: averageBase: rename.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 rename_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rename_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_rename_total","title":"svm_nfs_rename_total","text":"

    Total number of Rename procedure requests. It is the total number of Rename success and Rename error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 rename.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 rename_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_renew_avg_latency","title":"svm_nfs_renew_avg_latency","text":"

    Average latency of RENEW procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 renew.average_latencyUnit: microsecType: averageBase: renew.total conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 renew_avg_latencyUnit: microsecType: average,no-zero-valuesBase: renew_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_renew_total","title":"svm_nfs_renew_total","text":"

    Total number of RENEW procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 renew.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 renew_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_restorefh_avg_latency","title":"svm_nfs_restorefh_avg_latency","text":"

    Average latency of RESTOREFH procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 restorefh.average_latencyUnit: microsecType: averageBase: restorefh.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 restorefh.average_latencyUnit: microsecType: averageBase: restorefh.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 restorefh.average_latencyUnit: microsecType: averageBase: restorefh.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 restorefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: restorefh_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 restorefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: restorefh_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 restorefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: restorefh_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_restorefh_total","title":"svm_nfs_restorefh_total","text":"

    Total number of RESTOREFH procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 restorefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 restorefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 restorefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 restorefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 restorefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 restorefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_rmdir_avg_latency","title":"svm_nfs_rmdir_avg_latency","text":"

    Average latency of RmDir procedure requests. The counter keeps track of the average response time of RmDir requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 rmdir.average_latencyUnit: microsecType: averageBase: rmdir.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 rmdir_avg_latencyUnit: microsecType: average,no-zero-valuesBase: rmdir_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_rmdir_total","title":"svm_nfs_rmdir_total","text":"

    Total number of RmDir procedure requests. It is the total number of RmDir success and RmDir error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 rmdir.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 rmdir_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_savefh_avg_latency","title":"svm_nfs_savefh_avg_latency","text":"

    Average latency of SAVEFH procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 savefh.average_latencyUnit: microsecType: averageBase: savefh.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 savefh.average_latencyUnit: microsecType: averageBase: savefh.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 savefh.average_latencyUnit: microsecType: averageBase: savefh.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 savefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: savefh_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 savefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: savefh_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 savefh_avg_latencyUnit: microsecType: average,no-zero-valuesBase: savefh_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_savefh_total","title":"svm_nfs_savefh_total","text":"

    Total number of SAVEFH procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 savefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 savefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 savefh.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 savefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 savefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 savefh_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_secinfo_avg_latency","title":"svm_nfs_secinfo_avg_latency","text":"

    Average latency of SECINFO procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 secinfo.average_latencyUnit: microsecType: averageBase: secinfo.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 secinfo.average_latencyUnit: microsecType: averageBase: secinfo.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 secinfo.average_latencyUnit: microsecType: averageBase: secinfo.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 secinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 secinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 secinfo_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_secinfo_no_name_avg_latency","title":"svm_nfs_secinfo_no_name_avg_latency","text":"

    Average latency of SECINFO_NO_NAME operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 secinfo_no_name.average_latencyUnit: microsecType: averageBase: secinfo_no_name.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 secinfo_no_name.average_latencyUnit: microsecType: averageBase: secinfo_no_name.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 secinfo_no_name_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_no_name_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 secinfo_no_name_avg_latencyUnit: microsecType: average,no-zero-valuesBase: secinfo_no_name_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_secinfo_no_name_total","title":"svm_nfs_secinfo_no_name_total","text":"

    Total number of SECINFO_NO_NAME operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 secinfo_no_name.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 secinfo_no_name.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 secinfo_no_name_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 secinfo_no_name_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_secinfo_total","title":"svm_nfs_secinfo_total","text":"

    Total number of SECINFO procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 secinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 secinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 secinfo.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 secinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 secinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 secinfo_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_sequence_avg_latency","title":"svm_nfs_sequence_avg_latency","text":"

    Average latency of SEQUENCE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 sequence.average_latencyUnit: microsecType: averageBase: sequence.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 sequence.average_latencyUnit: microsecType: averageBase: sequence.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 sequence_avg_latencyUnit: microsecType: average,no-zero-valuesBase: sequence_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 sequence_avg_latencyUnit: microsecType: average,no-zero-valuesBase: sequence_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_sequence_total","title":"svm_nfs_sequence_total","text":"

    Total number of SEQUENCE operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 sequence.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 sequence.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 sequence_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 sequence_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_set_ssv_avg_latency","title":"svm_nfs_set_ssv_avg_latency","text":"

    Average latency of SET_SSV operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 set_ssv.average_latencyUnit: microsecType: averageBase: set_ssv.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 set_ssv.average_latencyUnit: microsecType: averageBase: set_ssv.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 set_ssv_avg_latencyUnit: microsecType: average,no-zero-valuesBase: set_ssv_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 set_ssv_avg_latencyUnit: microsecType: average,no-zero-valuesBase: set_ssv_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_set_ssv_total","title":"svm_nfs_set_ssv_total","text":"

    Total number of SET_SSV operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 set_ssv.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 set_ssv.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 set_ssv_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 set_ssv_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_setattr_avg_latency","title":"svm_nfs_setattr_avg_latency","text":"

    Average latency of SetAttr procedure requests. The counter keeps track of the average response time of SetAttr requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 setattr.average_latencyUnit: microsecType: averageBase: setattr.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 setattr_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setattr_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_setattr_total","title":"svm_nfs_setattr_total","text":"

    Total number of SetAttr procedure requests. It is the total number of SetAttr success and SetAttr error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 setattr.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 setattr_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_setclientid_avg_latency","title":"svm_nfs_setclientid_avg_latency","text":"

    Average latency of SETCLIENTID procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 setclientid.average_latencyUnit: microsecType: averageBase: setclientid.total conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 setclientid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setclientid_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_setclientid_confirm_avg_latency","title":"svm_nfs_setclientid_confirm_avg_latency","text":"

    Average latency of SETCLIENTID_CONFIRM procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 setclientid_confirm.average_latencyUnit: microsecType: averageBase: setclientid_confirm.total conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 setclientid_confirm_avg_latencyUnit: microsecType: average,no-zero-valuesBase: setclientid_confirm_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_setclientid_confirm_total","title":"svm_nfs_setclientid_confirm_total","text":"

    Total number of SETCLIENTID_CONFIRM procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 setclientid_confirm.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 setclientid_confirm_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_setclientid_total","title":"svm_nfs_setclientid_total","text":"

    Total number of SETCLIENTID procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 setclientid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4 setclientid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml"},{"location":"ontap-metrics/#svm_nfs_symlink_avg_latency","title":"svm_nfs_symlink_avg_latency","text":"

    Average latency of SymLink procedure requests. The counter keeps track of the average response time of SymLink requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 symlink.average_latencyUnit: microsecType: averageBase: symlink.total conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 symlink_avg_latencyUnit: microsecType: average,no-zero-valuesBase: symlink_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_symlink_total","title":"svm_nfs_symlink_total","text":"

    Total number of SymLink procedure requests. It is the total number of SymLink success and SymLink error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 symlink.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 symlink_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_test_stateid_avg_latency","title":"svm_nfs_test_stateid_avg_latency","text":"

    Average latency of TEST_STATEID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 test_stateid.average_latencyUnit: microsecType: averageBase: test_stateid.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 test_stateid.average_latencyUnit: microsecType: averageBase: test_stateid.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 test_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: test_stateid_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 test_stateid_avg_latencyUnit: microsecType: average,no-zero-valuesBase: test_stateid_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_test_stateid_total","title":"svm_nfs_test_stateid_total","text":"

    Total number of TEST_STATEID operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 test_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 test_stateid.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 test_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 test_stateid_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_throughput","title":"svm_nfs_throughput","text":"

    Rate of NFSv3 data transfers per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 total.throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 total.write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 total.write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 nfsv3_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 nfs4_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 nfs41_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 nfs42_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_verify_avg_latency","title":"svm_nfs_verify_avg_latency","text":"

    Average latency of VERIFY procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 verify.average_latencyUnit: microsecType: averageBase: verify.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 verify.average_latencyUnit: microsecType: averageBase: verify.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 verify.average_latencyUnit: microsecType: averageBase: verify.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 verify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: verify_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 verify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: verify_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 verify_avg_latencyUnit: microsecType: average,no-zero-valuesBase: verify_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_verify_total","title":"svm_nfs_verify_total","text":"

    Total number of VERIFY procedures

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v4 verify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 verify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 verify.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4 verify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 verify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 verify_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_want_delegation_avg_latency","title":"svm_nfs_want_delegation_avg_latency","text":"

    Average latency of WANT_DELEGATION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 want_delegation.average_latencyUnit: microsecType: averageBase: want_delegation.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 want_delegation.average_latencyUnit: microsecType: averageBase: want_delegation.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 want_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: want_delegation_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 want_delegation_avg_latencyUnit: microsecType: average,no-zero-valuesBase: want_delegation_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_want_delegation_total","title":"svm_nfs_want_delegation_total","text":"

    Total number of WANT_DELEGATION operations.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v41 want_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 want_delegation.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv4_1 want_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 want_delegation_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_write_avg_latency","title":"svm_nfs_write_avg_latency","text":"

    Average latency of Write procedure requests. The counter keeps track of the average response time of Write requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 write.average_latencyUnit: microsecType: averageBase: write.total conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 write_avg_latencyUnit: microsecType: average,no-zero-valuesBase: write_total conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_write_ops","title":"svm_nfs_write_ops","text":"

    Total observed NFSv3 write operations per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv3 nfsv3_write_opsUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml"},{"location":"ontap-metrics/#svm_nfs_write_throughput","title":"svm_nfs_write_throughput","text":"

    Rate of NFSv3 write data transfers per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 total.write_throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 total.throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 total.throughputUnit: per_secType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 nfsv3_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 nfs4_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 nfs41_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 nfs42_write_throughputUnit: per_secType: rate,no-zero-valuesBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_nfs_write_total","title":"svm_nfs_write_total","text":"

    Total number of Write procedure requests. It is the total number of write success and write error requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_nfs_v3 write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv3.yaml REST api/cluster/counter/tables/svm_nfs_v4 write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4.yaml REST api/cluster/counter/tables/svm_nfs_v41 write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_1.yaml REST api/cluster/counter/tables/svm_nfs_v42 write.totalUnit: noneType: rateBase: conf/restperf/9.12.0/nfsv4_2.yaml ZAPI perf-object-get-instances nfsv3 write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv3.yaml ZAPI perf-object-get-instances nfsv4 write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4.yaml ZAPI perf-object-get-instances nfsv4_1 write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml ZAPI perf-object-get-instances nfsv4_2 write_totalUnit: noneType: rateBase: conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml"},{"location":"ontap-metrics/#svm_vol_avg_latency","title":"svm_vol_avg_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process all the operations on the volume; not including request processing or network communication time

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm average_latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_other_latency","title":"svm_vol_other_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process other operations to the volume; not including request processing or network communication time

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm other_latencyUnit: microsecType: averageBase: total_other_ops conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_other_ops","title":"svm_vol_other_ops","text":"

    Number of other operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm total_other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_read_data","title":"svm_vol_read_data","text":"

    Bytes read per second

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm bytes_readUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_read_latency","title":"svm_vol_read_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process read request to the volume; not including request processing or network communication time

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm read_latencyUnit: microsecType: averageBase: total_read_ops conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_read_ops","title":"svm_vol_read_ops","text":"

    Number of read operations per second from the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm total_read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_total_ops","title":"svm_vol_total_ops","text":"

    Number of operations per second serviced by the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_write_data","title":"svm_vol_write_data","text":"

    Bytes written per second

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm bytes_writtenUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_write_latency","title":"svm_vol_write_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process write request to the volume; not including request processing or network communication time

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm write_latencyUnit: microsecType: averageBase: total_write_ops conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vol_write_ops","title":"svm_vol_write_ops","text":"

    Number of write operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume:svm total_write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume_svm.yaml ZAPI perf-object-get-instances volume:vserver write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume_svm.yaml"},{"location":"ontap-metrics/#svm_vscan_connections_active","title":"svm_vscan_connections_active","text":"

    Total number of current active connections

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_vscan connections_activeUnit: noneType: rawBase: conf/restperf/9.13.0/vscan_svm.yaml ZAPI perf-object-get-instances offbox_vscan connections_activeUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/vscan_svm.yaml"},{"location":"ontap-metrics/#svm_vscan_dispatch_latency","title":"svm_vscan_dispatch_latency","text":"

    Average dispatch latency

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_vscan dispatch.latencyUnit: microsecType: averageBase: dispatch.requests conf/restperf/9.13.0/vscan_svm.yaml ZAPI perf-object-get-instances offbox_vscan dispatch_latencyUnit: microsecType: averageBase: dispatch_latency_base conf/zapiperf/cdot/9.8.0/vscan_svm.yaml"},{"location":"ontap-metrics/#svm_vscan_scan_latency","title":"svm_vscan_scan_latency","text":"

    Average scan latency

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_vscan scan.latencyUnit: microsecType: averageBase: scan.requests conf/restperf/9.13.0/vscan_svm.yaml ZAPI perf-object-get-instances offbox_vscan scan_latencyUnit: microsecType: averageBase: scan_latency_base conf/zapiperf/cdot/9.8.0/vscan_svm.yaml"},{"location":"ontap-metrics/#svm_vscan_scan_noti_received_rate","title":"svm_vscan_scan_noti_received_rate","text":"

    Total number of scan notifications received by the dispatcher per second

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_vscan scan.notification_received_rateUnit: per_secType: rateBase: conf/restperf/9.13.0/vscan_svm.yaml ZAPI perf-object-get-instances offbox_vscan scan_noti_received_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/vscan_svm.yaml"},{"location":"ontap-metrics/#svm_vscan_scan_request_dispatched_rate","title":"svm_vscan_scan_request_dispatched_rate","text":"

    Total number of scan requests sent to the Vscanner per second

    API Endpoint Metric Template REST api/cluster/counter/tables/svm_vscan scan.request_dispatched_rateUnit: per_secType: rateBase: conf/restperf/9.13.0/vscan_svm.yaml ZAPI perf-object-get-instances offbox_vscan scan_request_dispatched_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/vscan_svm.yaml"},{"location":"ontap-metrics/#token_copy_bytes","title":"token_copy_bytes","text":"

    Total number of bytes copied.

    API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_copy.bytesUnit: noneType: rateBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_copy_bytesUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_copy_failure","title":"token_copy_failure","text":"

    Number of failed token copy requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_copy.failuresUnit: noneType: deltaBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_copy_failureUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_copy_success","title":"token_copy_success","text":"

    Number of successful token copy requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_copy.successesUnit: noneType: deltaBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_copy_successUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_create_bytes","title":"token_create_bytes","text":"

    Total number of bytes for which tokens are created.

    API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_create.bytesUnit: noneType: rateBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_create_bytesUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_create_failure","title":"token_create_failure","text":"

    Number of failed token create requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_create.failuresUnit: noneType: deltaBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_create_failureUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_create_success","title":"token_create_success","text":"

    Number of successful token create requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_create.successesUnit: noneType: deltaBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_create_successUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_zero_bytes","title":"token_zero_bytes","text":"

    Total number of bytes zeroed.

    API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_zero.bytesUnit: noneType: rateBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_zero_bytesUnit: noneType: rateBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_zero_failure","title":"token_zero_failure","text":"

    Number of failed token zero requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_zero.failuresUnit: noneType: deltaBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_zero_failureUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#token_zero_success","title":"token_zero_success","text":"

    Number of successful token zero requests.

    API Endpoint Metric Template REST api/cluster/counter/tables/token_manager token_zero.successesUnit: noneType: deltaBase: conf/restperf/9.12.0/token_manager.yaml ZAPI perf-object-get-instances token_manager token_zero_successUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/token_manager.yaml"},{"location":"ontap-metrics/#volume_autosize_grow_threshold_percent","title":"volume_autosize_grow_threshold_percent","text":"API Endpoint Metric Template REST api/private/cli/volume autosize_grow_threshold_percent conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-autosize-attributes.grow-threshold-percent conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_autosize_maximum_size","title":"volume_autosize_maximum_size","text":"API Endpoint Metric Template REST api/private/cli/volume max_autosize conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-autosize-attributes.maximum-size conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_avg_latency","title":"volume_avg_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process all the operations on the volume; not including request processing or network communication time

    API Endpoint Metric Template REST api/cluster/counter/tables/volume average_latencyUnit: microsecType: averageBase: total_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume avg_latencyUnit: microsecType: averageBase: total_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_filesystem_size","title":"volume_filesystem_size","text":"API Endpoint Metric Template REST api/private/cli/volume filesystem_size conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.filesystem-size conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_inode_files_total","title":"volume_inode_files_total","text":"

    Total user-visible file (inode) count, i.e., the maximum number of user-visible files (inodes) that this volume can currently hold.

    API Endpoint Metric Template REST api/private/cli/volume files conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-inode-attributes.files-total conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_inode_files_used","title":"volume_inode_files_used","text":"

    Number of user-visible files (inodes) used. This field is valid only when the volume is online.

    API Endpoint Metric Template REST api/private/cli/volume files_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-inode-attributes.files-used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_inode_used_percent","title":"volume_inode_used_percent","text":"

    volume_inode_files_used / volume_inode_files_total
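
    A minimal worked example with hypothetical values: if volume_inode_files_used is 2,000 and volume_inode_files_total is 40,000, this metric evaluates to 2,000 / 40,000 = 0.05, i.e., 5%.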

    API Endpoint Metric Template REST api/private/cli/volume inode_files_used, inode_files_total conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter inode_files_used, inode_files_total conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_access_latency","title":"volume_nfs_access_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol access requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.access_latencyUnit: microsecType: averageBase: nfs.access_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_access_latencyUnit: microsecType: averageBase: nfs_access_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_access_ops","title":"volume_nfs_access_ops","text":"

    Number of NFS accesses per second to the volume.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.access_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_access_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_getattr_latency","title":"volume_nfs_getattr_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol getattr requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.getattr_latencyUnit: microsecType: averageBase: nfs.getattr_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_getattr_latencyUnit: microsecType: averageBase: nfs_getattr_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_getattr_ops","title":"volume_nfs_getattr_ops","text":"

    Number of NFS getattr per second to the volume.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.getattr_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_getattr_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_lookup_latency","title":"volume_nfs_lookup_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol lookup requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.lookup_latencyUnit: microsecType: averageBase: nfs.lookup_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_lookup_latencyUnit: microsecType: averageBase: nfs_lookup_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_lookup_ops","title":"volume_nfs_lookup_ops","text":"

    Number of NFS lookups per second to the volume.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.lookup_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_lookup_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_other_latency","title":"volume_nfs_other_latency","text":"

    Average time for the WAFL filesystem to process other NFS operations to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.other_latencyUnit: microsecType: averageBase: nfs.other_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_other_latencyUnit: microsecType: averageBase: nfs_other_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_other_ops","title":"volume_nfs_other_ops","text":"

    Number of other NFS operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_punch_hole_latency","title":"volume_nfs_punch_hole_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol hole-punch requests to the volume.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.punch_hole_latencyUnit: microsecType: averageBase: nfs.punch_hole_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_punch_hole_latencyUnit: microsecType: averageBase: nfs_punch_hole_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_punch_hole_ops","title":"volume_nfs_punch_hole_ops","text":"

    Number of NFS hole-punch requests per second to the volume.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.punch_hole_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_punch_hole_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_read_latency","title":"volume_nfs_read_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol read requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.read_latencyUnit: microsecType: averageBase: nfs.read_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_read_latencyUnit: microsecType: averageBase: nfs_read_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_read_ops","title":"volume_nfs_read_ops","text":"

    Number of NFS read operations per second from the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_setattr_latency","title":"volume_nfs_setattr_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol setattr requests to the volume.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.setattr_latencyUnit: microsecType: averageBase: nfs.setattr_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_setattr_latencyUnit: microsecType: averageBase: nfs_setattr_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_setattr_ops","title":"volume_nfs_setattr_ops","text":"

    Number of NFS setattr requests per second to the volume.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.setattr_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_setattr_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_total_ops","title":"volume_nfs_total_ops","text":"

    Number of total NFS operations per second to the volume.

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_write_latency","title":"volume_nfs_write_latency","text":"

    Average time for the WAFL filesystem to process NFS protocol write requests to the volume; not including NFS protocol request processing or network communication time, which will also be included in client observed NFS request latency

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.write_latencyUnit: microsecType: averageBase: nfs.write_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_write_latencyUnit: microsecType: averageBase: nfs_write_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_nfs_write_ops","title":"volume_nfs_write_ops","text":"

    Number of NFS write operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume nfs.write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume nfs_write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_other_latency","title":"volume_other_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process other operations to the volume; not including request processing or network communication time

    API Endpoint Metric Template REST api/cluster/counter/tables/volume other_latencyUnit: microsecType: averageBase: total_other_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume other_latencyUnit: microsecType: averageBase: other_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_other_ops","title":"volume_other_ops","text":"

    Number of other operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume total_other_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume other_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_overwrite_reserve_available","title":"volume_overwrite_reserve_available","text":"

    The amount of storage space that is currently available for overwrites, calculated by subtracting the amount of overwrite reserve space that has already been used from the total overwrite reserve.
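
    For example, with hypothetical values of 10 GiB of total overwrite reserve and 2 GiB already used, this metric would report 10 GiB - 2 GiB = 8 GiB.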

    API Endpoint Metric Template REST api/private/cli/volume overwrite_reserve_total, overwrite_reserve_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter overwrite_reserve_total, overwrite_reserve_used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_overwrite_reserve_total","title":"volume_overwrite_reserve_total","text":"API Endpoint Metric Template REST api/private/cli/volume overwrite_reserve conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.overwrite-reserve conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_overwrite_reserve_used","title":"volume_overwrite_reserve_used","text":"API Endpoint Metric Template REST api/private/cli/volume overwrite_reserve_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.overwrite-reserve-used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_read_data","title":"volume_read_data","text":"

    Bytes read per second

    API Endpoint Metric Template REST api/cluster/counter/tables/volume bytes_readUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume read_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_read_latency","title":"volume_read_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process read request to the volume; not including request processing or network communication time

    API Endpoint Metric Template REST api/cluster/counter/tables/volume read_latencyUnit: microsecType: averageBase: total_read_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume read_latencyUnit: microsecType: averageBase: read_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_read_ops","title":"volume_read_ops","text":"

    Number of read operations per second from the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume total_read_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume read_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_sis_compress_saved","title":"volume_sis_compress_saved","text":"

    The total disk space (in bytes) that is saved by compressing blocks on the referenced file system.

    API Endpoint Metric Template REST api/private/cli/volume compression_space_saved conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-sis-attributes.compression-space-saved conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_sis_compress_saved_percent","title":"volume_sis_compress_saved_percent","text":"

    Percentage of the total disk space that is saved by compressing blocks on the referenced file system

    API Endpoint Metric Template REST api/private/cli/volume compression_space_saved_percent conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-sis-attributes.percentage-compression-space-saved conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_sis_dedup_saved","title":"volume_sis_dedup_saved","text":"

    The total disk space (in bytes) that is saved by deduplication and file cloning.

    API Endpoint Metric Template REST api/private/cli/volume dedupe_space_saved conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-sis-attributes.deduplication-space-saved conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_sis_dedup_saved_percent","title":"volume_sis_dedup_saved_percent","text":"

    Percentage of the total disk space that is saved by deduplication and file cloning.

    API Endpoint Metric Template REST api/private/cli/volume dedupe_space_saved_percent conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-sis-attributes.percentage-deduplication-space-saved conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_sis_total_saved","title":"volume_sis_total_saved","text":"

    Total space saved (in bytes) in the volume due to deduplication, compression, and file cloning.

    API Endpoint Metric Template REST api/private/cli/volume sis_space_saved conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-sis-attributes.total-space-saved conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_sis_total_saved_percent","title":"volume_sis_total_saved_percent","text":"

    Percentage of total disk space that is saved by compressing blocks, deduplication and file cloning.

    API Endpoint Metric Template REST api/private/cli/volume sis_space_saved_percent conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-sis-attributes.percentage-total-space-saved conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_size","title":"volume_size","text":"

    Physical size of the volume, in bytes. The minimum size for a FlexVol volume is 20MB and the minimum size for a FlexGroup volume is 200MB per constituent. The recommended size for a FlexGroup volume is a minimum of 100GB per constituent. For all volumes, the default size is equal to the minimum size.

    API Endpoint Metric Template REST api/private/cli/volume size conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.size conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_size_available","title":"volume_size_available","text":"API Endpoint Metric Template REST api/private/cli/volume available conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.size-available conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_size_total","title":"volume_size_total","text":"API Endpoint Metric Template REST api/private/cli/volume total conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.size-total conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_size_used","title":"volume_size_used","text":"API Endpoint Metric Template REST api/private/cli/volume used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.size-used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_size_used_percent","title":"volume_size_used_percent","text":"

    Percentage of utilized storage space in a volume relative to its total capacity.

    API Endpoint Metric Template REST api/private/cli/volume percent_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.percentage-size-used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshot_count","title":"volume_snapshot_count","text":"

    Number of Snapshot copies in the volume.

    API Endpoint Metric Template REST api/private/cli/volume snapshot_count conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-snapshot-attributes.snapshot-count conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshot_reserve_available","title":"volume_snapshot_reserve_available","text":"API Endpoint Metric Template REST api/private/cli/volume snapshot_reserve_available conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.snapshot-reserve-available conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshot_reserve_percent","title":"volume_snapshot_reserve_percent","text":"API Endpoint Metric Template REST api/private/cli/volume percent_snapshot_space conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.percentage-snapshot-reserve conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshot_reserve_size","title":"volume_snapshot_reserve_size","text":"API Endpoint Metric Template REST api/private/cli/volume snapshot_reserve_size conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.snapshot-reserve-size conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshot_reserve_used","title":"volume_snapshot_reserve_used","text":"

    The amount of storage space currently used by a volume's snapshot reserve, calculated by subtracting the snapshot reserve available space from the snapshot reserve size.
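
    For example, with a hypothetical 5 GiB snapshot reserve size of which 3 GiB is still available, this metric would report 5 GiB - 3 GiB = 2 GiB.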

    API Endpoint Metric Template REST api/private/cli/volume snapshot_reserve_size, snapshot_reserve_available conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter snapshot_reserve_size, snapshot_reserve_available conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshot_reserve_used_percent","title":"volume_snapshot_reserve_used_percent","text":"API Endpoint Metric Template REST api/private/cli/volume snapshot_space_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.percentage-snapshot-reserve-used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshots_size_available","title":"volume_snapshots_size_available","text":"API Endpoint Metric Template REST api/private/cli/volume size_available_for_snapshots conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.size-available-for-snapshots conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_snapshots_size_used","title":"volume_snapshots_size_used","text":"API Endpoint Metric Template REST api/private/cli/volume size_used_by_snapshots conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.size-used-by-snapshots conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_expected_available","title":"volume_space_expected_available","text":"API Endpoint Metric Template REST api/private/cli/volume expected_available conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.expected-available conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_logical_available","title":"volume_space_logical_available","text":"API Endpoint Metric Template ZAPI volume-get-iter volume-attributes.volume-space-attributes.logical-available conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_logical_used","title":"volume_space_logical_used","text":"API Endpoint Metric Template REST api/private/cli/volume logical_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.logical-used conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_logical_used_by_afs","title":"volume_space_logical_used_by_afs","text":"API Endpoint Metric Template REST api/private/cli/volume logical_used_by_afs conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.logical-used-by-afs conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_logical_used_by_snapshots","title":"volume_space_logical_used_by_snapshots","text":"API Endpoint Metric Template REST api/private/cli/volume logical_used_by_snapshots conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.logical-used-by-snapshots conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_logical_used_percent","title":"volume_space_logical_used_percent","text":"API Endpoint Metric Template REST api/private/cli/volume logical_used_percent conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.logical-used-percent conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_physical_used","title":"volume_space_physical_used","text":"API Endpoint Metric Template REST api/private/cli/volume physical_used conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.physical-used 
conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_space_physical_used_percent","title":"volume_space_physical_used_percent","text":"API Endpoint Metric Template REST api/private/cli/volume physical_used_percent conf/rest/9.12.0/volume.yaml ZAPI volume-get-iter volume-attributes.volume-space-attributes.physical-used-percent conf/zapi/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_total_ops","title":"volume_total_ops","text":"

    Number of operations per second serviced by the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume total_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume total_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_write_data","title":"volume_write_data","text":"

    Bytes written per second

    API Endpoint Metric Template REST api/cluster/counter/tables/volume bytes_writtenUnit: b_per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume write_dataUnit: b_per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_write_latency","title":"volume_write_latency","text":"

    Average latency in microseconds for the WAFL filesystem to process write request to the volume; not including request processing or network communication time

    API Endpoint Metric Template REST api/cluster/counter/tables/volume write_latencyUnit: microsecType: averageBase: total_write_ops conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume write_latencyUnit: microsecType: averageBase: write_ops conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#volume_write_ops","title":"volume_write_ops","text":"

    Number of write operations per second to the volume

    API Endpoint Metric Template REST api/cluster/counter/tables/volume total_write_opsUnit: per_secType: rateBase: conf/restperf/9.12.0/volume.yaml ZAPI perf-object-get-instances volume write_opsUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/volume.yaml"},{"location":"ontap-metrics/#vscan_scan_latency","title":"vscan_scan_latency","text":"

    Average scan latency

    API Endpoint Metric Template REST api/cluster/counter/tables/vscan scan.latencyUnit: microsecType: averageBase: scan.requests conf/restperf/9.13.0/vscan.yaml ZAPI perf-object-get-instances offbox_vscan_server scan_latencyUnit: microsecType: averageBase: scan_latency_base conf/zapiperf/cdot/9.8.0/vscan.yaml"},{"location":"ontap-metrics/#vscan_scan_request_dispatched_rate","title":"vscan_scan_request_dispatched_rate","text":"

    Total number of scan requests sent to the scanner per second

    API Endpoint Metric Template REST api/cluster/counter/tables/vscan scan.request_dispatched_rateUnit: per_secType: rateBase: conf/restperf/9.13.0/vscan.yaml ZAPI perf-object-get-instances offbox_vscan_server scan_request_dispatched_rateUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/vscan.yaml"},{"location":"ontap-metrics/#vscan_scanner_stats_pct_cpu_used","title":"vscan_scanner_stats_pct_cpu_used","text":"

    Percentage CPU utilization on scanner calculated over the last 15 seconds.

    API Endpoint Metric Template REST api/cluster/counter/tables/vscan scanner.stats_percent_cpu_usedUnit: noneType: rawBase: conf/restperf/9.13.0/vscan.yaml ZAPI perf-object-get-instances offbox_vscan_server scanner_stats_pct_cpu_usedUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/vscan.yaml"},{"location":"ontap-metrics/#vscan_scanner_stats_pct_mem_used","title":"vscan_scanner_stats_pct_mem_used","text":"

    Percentage RAM utilization on scanner calculated over the last 15 seconds.

    API Endpoint Metric Template REST api/cluster/counter/tables/vscan scanner.stats_percent_mem_usedUnit: noneType: rawBase: conf/restperf/9.13.0/vscan.yaml ZAPI perf-object-get-instances offbox_vscan_server scanner_stats_pct_mem_usedUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/vscan.yaml"},{"location":"ontap-metrics/#vscan_scanner_stats_pct_network_used","title":"vscan_scanner_stats_pct_network_used","text":"

    Percentage network utilization on scanner calculated for the last 15 seconds.

    API Endpoint Metric Template REST api/cluster/counter/tables/vscan scanner.stats_percent_network_usedUnit: noneType: rawBase: conf/restperf/9.13.0/vscan.yaml ZAPI perf-object-get-instances offbox_vscan_server scanner_stats_pct_network_usedUnit: noneType: rawBase: conf/zapiperf/cdot/9.8.0/vscan.yaml"},{"location":"ontap-metrics/#wafl_avg_msg_latency","title":"wafl_avg_msg_latency","text":"

    Average turnaround time for WAFL messages in milliseconds.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl average_msg_latencyUnit: millisecType: averageBase: msg_total conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl avg_wafl_msg_latencyUnit: millisecType: averageBase: wafl_msg_total conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_avg_non_wafl_msg_latency","title":"wafl_avg_non_wafl_msg_latency","text":"

    Average turnaround time for non-WAFL messages in milliseconds.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl average_non_wafl_msg_latencyUnit: millisecType: averageBase: non_wafl_msg_total conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl avg_non_wafl_msg_latencyUnit: millisecType: averageBase: non_wafl_msg_total conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_avg_repl_msg_latency","title":"wafl_avg_repl_msg_latency","text":"

    Average turnaround time for replication WAFL messages in milliseconds.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl average_replication_msg_latencyUnit: millisecType: averageBase: replication_msg_total conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl avg_wafl_repl_msg_latencyUnit: millisecType: averageBase: wafl_repl_msg_total conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_cp_count","title":"wafl_cp_count","text":"

    Array of counts of different types of Consistency Points (CP).

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl cp_countUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl cp_countUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_cp_phase_times","title":"wafl_cp_phase_times","text":"

    Array of percentage time spent in different phases of Consistency Point (CP).

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl cp_phase_timesUnit: percentType: percentBase: total_cp_msecs conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl cp_phase_timesUnit: percentType: percentBase: total_cp_msecs conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_memory_free","title":"wafl_memory_free","text":"

    The current WAFL memory available in the system.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl memory_freeUnit: mbType: rawBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_memory_freeUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_memory_used","title":"wafl_memory_used","text":"

    The current WAFL memory used in the system.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl memory_usedUnit: mbType: rawBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_memory_usedUnit: mbType: rawBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_msg_total","title":"wafl_msg_total","text":"

    Total number of WAFL messages per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl msg_totalUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_msg_totalUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_non_wafl_msg_total","title":"wafl_non_wafl_msg_total","text":"

    Total number of non-WAFL messages per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl non_wafl_msg_totalUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl non_wafl_msg_totalUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_read_io_type","title":"wafl_read_io_type","text":"

    Percentage of reads served from buffer cache, external cache, or disk.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl read_io_typeUnit: percentType: percentBase: read_io_type_base conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl read_io_typeUnit: percentType: percentBase: read_io_type_base conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_cache","title":"wafl_reads_from_cache","text":"

    WAFL reads from cache.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_cacheUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_cacheUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_cloud","title":"wafl_reads_from_cloud","text":"

    WAFL reads from cloud storage.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_cloudUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_cloudUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_cloud_s2c_bin","title":"wafl_reads_from_cloud_s2c_bin","text":"

    WAFL reads from cloud storage via s2c bin.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_cloud_s2c_binUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_cloud_s2c_binUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_disk","title":"wafl_reads_from_disk","text":"

    WAFL reads from disk.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_diskUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_diskUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_ext_cache","title":"wafl_reads_from_ext_cache","text":"

    WAFL reads from external cache.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_external_cacheUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_ext_cacheUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_fc_miss","title":"wafl_reads_from_fc_miss","text":"

    WAFL reads from remote volume for fc_miss.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_fc_missUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_fc_missUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_pmem","title":"wafl_reads_from_pmem","text":"

    WAFL reads from persistent memory.

    API Endpoint Metric Template ZAPI perf-object-get-instances wafl wafl_reads_from_pmemUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_reads_from_ssd","title":"wafl_reads_from_ssd","text":"

    WAFL reads from SSD.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl reads_from_ssdUnit: noneType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_reads_from_ssdUnit: noneType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_repl_msg_total","title":"wafl_repl_msg_total","text":"

    Total number of replication WAFL messages per second.

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl replication_msg_totalUnit: per_secType: rateBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl wafl_repl_msg_totalUnit: per_secType: rateBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_total_cp_msecs","title":"wafl_total_cp_msecs","text":"

    Milliseconds spent in Consistency Point (CP).

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl total_cp_msecsUnit: millisecType: deltaBase: conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl total_cp_msecsUnit: millisecType: deltaBase: conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"ontap-metrics/#wafl_total_cp_util","title":"wafl_total_cp_util","text":"

    Percentage of time spent in a Consistency Point (CP).

    API Endpoint Metric Template REST api/cluster/counter/tables/wafl total_cp_utilUnit: percentType: percentBase: cpu_elapsed_time conf/restperf/9.12.0/wafl.yaml ZAPI perf-object-get-instances wafl total_cp_utilUnit: percentType: percentBase: cpu_elapsed_time conf/zapiperf/cdot/9.8.0/wafl.yaml"},{"location":"plugins/","title":"Plugins","text":""},{"location":"plugins/#built-in-plugins","title":"Built-in Plugins","text":"

    The plugin feature allows users to manipulate and customize data collected by collectors without changing the collectors. Plugins have the same capabilities as collectors and therefore can collect data on their own as well. Furthermore, multiple plugins can be put in a pipeline to perform more complex operations.

    Harvest architecture defines two types of plugins:

    Built-in generic - Statically compiled, generic plugins. \"Generic\" means the plugin is collector-agnostic. These plugins are provided in this package and listed in the right sidebar.

    Built-in custom - These plugins are statically compiled, collector-specific plugins. Their source code should reside inside the plugins/ subdirectory of the collector package (e.g. [cmd/collectors/rest/plugins/svm/svm.go](https://github.com/NetApp/harvest/blob/main/cmd/collectors/rest/plugins/svm/svm.go)). Custom plugins have access to all the parameters of their parent collector and should therefore be treated with great care.

    This documentation gives an overview of built-in plugins. For other plugins, see their respective documentation. For writing your own plugin, see the Developer's documentation.

    Note: the rules are executed in the same order as you've added them.
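
    As an illustrative sketch only (the labels node and aggr are borrowed from the examples below, not from a shipped template), a collector template could chain two built-in plugins like this:

    plugins:\nAggregator:\n- node\n# summarize the collected metrics per node\nMax:\n- aggr\n# report the per-aggregate maximum of each metric\n

    Each of these plugins creates its own new Matrix from the collected metrics, and the rules within a plugin run in the order they are listed.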

    "},{"location":"plugins/#aggregator","title":"Aggregator","text":"

    Aggregator creates a new collection of metrics (Matrix) by summarizing and/or averaging metric values from an existing Matrix for a given label. For example, if the collected metrics are for volumes, you can create an aggregation for nodes or svms.

    "},{"location":"plugins/#rule-syntax","title":"Rule syntax","text":"

    simplest case:

    plugins:\nAggregator:\n- LABEL\n# will aggregate a new Matrix based on target label LABEL\n

    If you want to specify which labels should be included in the new instances, you can add those space-separated after LABEL:

        - LABEL LABEL1,LABEL2\n# same, but LABEL1 and LABEL2 will be copied into the new instances\n# (default is to only copy LABEL and any global labels (such as cluster and datacenter))\n

    Or include all labels:

        - LABEL ...\n# copy all labels of the original instance\n

    By default, aggregated metrics will be prefixed with LABEL. For example, if the object of the original Matrix is volume (meaning metrics are prefixed with volume_) and LABEL is aggr, then the metric volume_read_ops will become aggr_volume_read_ops, etc. You can override this prefix by providing a new object name with the <>OBJ syntax:

        - LABEL<>OBJ\n# use OBJ as the object of the new matrix, e.g. if the original object is \"volume\" and you\n# want to leave metric names unchanged, use \"volume\"\n

    Finally, sometimes you only want to aggregate instances with a specific label value. You can use <VALUE> for that (optionally followed by OBJ):

        - LABEL<VALUE>\n# aggregate all instances if LABEL has value VALUE\n- LABEL<`VALUE`>\n# same, but VALUE is regular expression\n- LABEL<LABELX=`VALUE`>\n# same, but check against \"LABELX\" (instead of \"LABEL\")\n

    Examples:

    plugins:\nAggregator:\n# will aggregate metrics of the aggregate. The labels \"node\" and \"type\" are included in the new instances\n- aggr node type\n# aggregate instances if label \"type\" has value \"flexgroup\"\n# include all original labels\n- type<flexgroup> ...\n# aggregate all instances if value of \"volume\" ends with underscore and 4 digits\n- volume<`_\\d{4}$`>\n
    "},{"location":"plugins/#aggregation-rules","title":"Aggregation rules","text":"

    The plugin tries to intelligently aggregate metrics based on a few rules:

    • Sum - the default rule, if no other rules apply
    • Average - if any of the following is true:
      • metric name has suffix _percent or _percentage
      • metric name has prefix average_ or avg_
      • metric has property (metric.GetProperty()) percent or average
    • Weighted Average - applied if the metric has property average and suffix _latency and there is a matching _ops metric. (This currently only applies to ZapiPerf metrics, which use the Property field of metrics.) A short numeric sketch follows this list.
    • Ignore - metrics created by some plugins, such as value_to_num by LabelAgent
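
    Here is the promised numeric sketch of the weighted-average rule; the volumes, values, and units are hypothetical:

    # two volumes on the same node (hypothetical values)\n# vol1: volume_read_latency = 200 us, volume_read_ops = 100\n# vol2: volume_read_latency = 400 us, volume_read_ops = 300\n# node-level latency is weighted by ops:\n# node_volume_read_latency = (200*100 + 400*300) / (100 + 300) = 350 us\n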
    "},{"location":"plugins/#max","title":"Max","text":"

    Max creates a new collection of metrics (Matrix) by calculating the max of metric values from an existing Matrix for a given label. For example, if the collected metrics are for disks, you can create the max at the node or aggregate level. Refer to Max Examples for more details.

    "},{"location":"plugins/#max-rule-syntax","title":"Max Rule syntax","text":"

    simplest case:

    plugins:\nMax:\n- LABEL\n# create a new Matrix of max values on target label LABEL\n

    If you want to specify which labels should be included in the new instances, you can add those space-separated after LABEL:

        - LABEL LABEL1,LABEL2\n# similar to the above example, but LABEL1 and LABEL2 will be copied into the new instances\n# (default is to only copy LABEL and all global labels (such as cluster and datacenter)\n

    Or include all labels:

        - LABEL ...\n# copy all labels of the original instance\n

    By default, metrics will be prefixed with LABEL. For example if the object of the original Matrix is volume (meaning metrics are prefixed with volume_) and LABEL is aggr, then the metric volume_read_ops will become aggr_volume_read_ops. You can override this using the <>OBJ pattern shown below:

        - LABEL<>OBJ\n# use OBJ as the object of the new matrix, e.g. if the original object is \"volume\" and you\n# want to leave metric names unchanged, use \"volume\"\n

    Finally, sometimes you only want to generate instances with a specific label value. You can use <VALUE> for that (optionally followed by OBJ):

        - LABEL<VALUE>\n# aggregate all instances if LABEL has value VALUE\n- LABEL<`VALUE`>\n# same, but VALUE is regular expression\n- LABEL<LABELX=`VALUE`>\n# same, but check against \"LABELX\" (instead of \"LABEL\")\n
    "},{"location":"plugins/#max-examples","title":"Max Examples","text":"
    plugins:\nMax:\n# will create max of each aggregate metric. All metrics will be prefixed with aggr_disk_max. All labels are included in the new instances\n- aggr<>aggr_disk_max ...\n# calculate max instances if label \"disk\" has value \"1.1.0\". Prefix with disk_max\n# include all original labels\n- disk<1.1.0>disk_max ...\n# max of all instances if value of \"volume\" ends with underscore and 4 digits\n- volume<`_\\d{4}$`>\n
    "},{"location":"plugins/#labelagent","title":"LabelAgent","text":"

    LabelAgent is used to manipulate instance labels based on rules. You can define multiple rules; here is an example of what you could add to the yaml file of a collector:

    plugins:\nLabelAgent:\n# our rules:\nsplit: node `/` ,aggr,plex,disk\nreplace_regex: node node `^(node)_(\\d+)_.*$` `Node-$2`\n

    Note: Labels used to create a new label should use the name defined on the right side of the rename operator (=>). If no rename is present, the left side of => is used.

    "},{"location":"plugins/#split","title":"split","text":"

    Rule syntax:

    split:\n- LABEL `SEP` LABEL1,LABEL2,LABEL3\n# source label - separator - comma-separated target labels\n

    Splits the value of a given label by separator SEP and creates new labels if their number matches the number of target labels defined in the rule. To discard a subvalue, just add a redundant , in the list of target labels.

    Example:

    split:\n- node `/` ,aggr,plex,disk\n# will split the value of \"node\" using separator \"/\"\n# will expect 4 values: first will be discarded, remaining\n# three will be stored as labels \"aggr\", \"plex\" and \"disk\"\n
    "},{"location":"plugins/#split_regex","title":"split_regex","text":"

    Does the same as split but uses a regular expression instead of a separator.

    Rule syntax:

    split_regex:\n- LABEL `REGEX` LABEL1,LABEL2,LABEL3\n

    Example:

    split_regex:\n- node `.*_(ag\\d+)_(p\\d+)_(d\\d+)` aggr,plex,disk\n# will look for \"_ag\", \"_p\", \"_d\", each followed by one\n# or more numbers, if there is a match, the submatches\n# will be stored as \"aggr\", \"plex\" and \"disk\"\n
    "},{"location":"plugins/#split_pairs","title":"split_pairs","text":"

    Rule syntax:

    split_pairs:\n- LABEL `SEP1` `SEP2`\n# source label - pair separator - key-value separator\n

    Extracts key-value pairs from the value of source label LABEL. Note that you need to add these keys to the export options, otherwise they will not be exported (see the sketch after the example below).

    Example:

    split_pairs:\n- comment ` ` `:`\n# will split pairs using a single space and split key-values using colon\n# e.g. if comment=\"owner:jack contact:some@email\", the result will be\n# two new labels: owner=\"jack\" and contact=\"some@email\"\n
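
    A minimal sketch of how the keys from the example above could be added to a template's export options so they are exported; this assumes the owner and contact keys from the example, and whether they belong under instance_keys or instance_labels depends on how your template exports labels:

    export_options:\n  instance_keys:\n    - owner\n    - contact\n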
    "},{"location":"plugins/#join","title":"join","text":"

    Join multiple label values using separator SEP and create a new label.

    Rule syntax:

    join:\n- LABEL `SEP` LABEL1,LABEL2,LABEL3\n# target label - separator - comma-separated source labels\n

    Example:

    join:\n- plex_long `_` aggr,plex\n# will look for the values of labels \"aggr\" and \"plex\",\n# if they are set, a new \"plex_long\" label will be added\n# by joining their values with \"_\"\n
    "},{"location":"plugins/#replace","title":"replace","text":"

    Substitute substring OLD with NEW in label SOURCE and store in TARGET. Note that target and source labels can be the same.

    Rule syntax:

    replace:\n- SOURCE TARGET `OLD` `NEW`\n# source label - target label - substring to replace - replace with\n

    Example:

    replace:\n- node node_short `node_` ``\n# this rule will just remove \"node_\" from all values of label\n# \"node\". E.g. if label is \"node_jamaica1\", it will rewrite it \n# as \"jamaica1\"\n
    "},{"location":"plugins/#replace_regex","title":"replace_regex","text":"

    Same as replace, but will use a regular expression instead of OLD. Note you can use $n to specify nth submatch in NEW.

    Rule syntax:

    replace_regex:\n- SOURCE TARGET `REGEX` `NEW`\n# source label - target label - substring to replace - replace with\n

    Example:

    replace_regex:\n- node node `^(node)_(\\d+)_.*$` `Node-$2`\n# if there is a match, will capitalize \"Node\" and remove suffixes.\n# E.g. if label is \"node_10_dc2\", it will be\n# rewritten as \"Node-10\"\n
    "},{"location":"plugins/#exclude_equals","title":"exclude_equals","text":"

    Exclude each instance, if the value of LABEL is exactly VALUE. Exclude means that metrics for this instance will not be exported.

    Rule syntax:

    exclude_equals:\n- LABEL `VALUE`\n# label name - label value\n

    Example:

    exclude_equals:\n- vol_type `flexgroup_constituent`\n# all instances, which have label \"vol_type\" with value\n# \"flexgroup_constituent\" will not be exported\n
    "},{"location":"plugins/#exclude_contains","title":"exclude_contains","text":"

    Same as exclude_equals, but any instance whose LABEL value contains VALUE will be excluded.

    Rule syntax:

    exclude_contains:\n- LABEL `VALUE`\n# label name - label value\n

    Example:

    exclude_contains:\n- vol_type `flexgroup_`\n# all instances, which have label \"vol_type\" which contain\n# \"flexgroup_\" will not be exported\n
    "},{"location":"plugins/#exclude_regex","title":"exclude_regex","text":"

    Same as exclude_equals, but will use a regular expression and all matching instances will be excluded.

    Rule syntax:

    exclude_regex:\n- LABEL `REGEX`\n# label name - regular expression\n

    Example:

    exclude_regex:\n- vol_type `^flex`\n# all instances, which have label \"vol_type\" which starts with\n# \"flex\" will not be exported\n
    "},{"location":"plugins/#include_equals","title":"include_equals","text":"

    Include each instance, if the value of LABEL is exactly VALUE. Include means that metrics for this instance will be exported and instances that do not match will not be exported.

    Rule syntax:

    include_equals:\n- LABEL `VALUE`\n# label name - label value\n

    Example:

    include_equals:\n- vol_type `flexgroup_constituent`\n# all instances, which have label \"vol_type\" with value\n# \"flexgroup_constituent\" will be exported\n
    "},{"location":"plugins/#include_contains","title":"include_contains","text":"

    Same as include_equals, but any instance whose LABEL value contains VALUE will be included.

    Rule syntax:

    include_contains:\n- LABEL `VALUE`\n# label name - label value\n

    Example:

    include_contains:\n- vol_type `flexgroup_`\n# all instances, which have label \"vol_type\" which contain\n# \"flexgroup_\" will be exported\n
    "},{"location":"plugins/#include_regex","title":"include_regex","text":"

    Same as include_equals, but a regular expression will be used for inclusion. Similar to the other includes, all matching instances will be included and all non-matching will not be exported.

    Rule syntax:

    include_regex:\n- LABEL `REGEX`\n# label name - regular expression\n

    Example:

    include_regex:\n- vol_type `^flex`\n# all instances, which have label \"vol_type\" which starts with\n# \"flex\" will be exported\n
    "},{"location":"plugins/#value_mapping","title":"value_mapping","text":"

    value_mapping was deprecated in 21.11 and removed in 22.02. Use value_to_num mapping instead.

    "},{"location":"plugins/#value_to_num","title":"value_to_num","text":"

    Map values of a given label to a numeric metric (of type uint8). For example, a healthy value can be mapped to 1 and all non-healthy values mapped to 0.

    This is handy to manipulate the data in the DB or Grafana (e.g. change color based on status or create alert).

    Note that you don't define the numeric values yourself, instead, you only provide the possible (expected) values, the plugin will map each value to its index in the rule.

    Rule syntax:

    value_to_num:\n- METRIC LABEL ZAPI_VALUE REST_VALUE `N`\n# map values of LABEL to 1 if it is ZAPI_VALUE or REST_VALUE\n# otherwise, value of METRIC is set to N\n

    The default value N is optional, if no default value is given and the label value does not match any of the given values, the metric value will not be set.

    Examples:

    value_to_num:\n- status state up online `0`\n# a new metric will be created with the name \"status\"\n# if an instance has label \"state\" with value \"up\", the metric value will be 1,\n# if it's \"online\", the value will be set to 1,\n# if it's any other value, it will be set to the specified default, 0\n
    value_to_num:\n- status state up online `4`\n# metric value will be set to 1 if \"state\" is \"up\", otherwise to **4**\n
    value_to_num:\n- status outage - - `0` #ok_value is empty value. \n# metric value will be set to 1 if \"outage\" is empty, if it's any other value, it will be set to the default, 0\n# '-' is a special symbol in this mapping, and it will be converted to blank while processing.\n
    "},{"location":"plugins/#value_to_num_regex","title":"value_to_num_regex","text":"

    Same as value_to_num, but will use a regular expression. All matches are mapped to 1 and non-matches are mapped to 0.

    This is handy to manipulate the data in the DB or Grafana (e.g. change color based on status or create alert).

    Note that you don't define the numeric values, instead, you provide the expected values and the plugin will map each value to its index in the rule.

    Rule syntax:

    value_to_num_regex:\n- METRIC LABEL ZAPI_REGEX REST_REGEX `N`\n# map values of LABEL to 1 if it matches ZAPI_REGEX or REST_REGEX\n# otherwise, value of METRIC is set to N\n

    The default value N is optional, if no default value is given and the label value does not match any of the given values, the metric value will not be set.

    Examples:

    value_to_num_regex:\n- certificateuser methods .*cert.*$ .*certificate.*$ `0`\n# a new metric will be created with the name \"certificateuser\"\n# if an instance has label \"methods\" with a value containing \"cert\", the metric value will be 1,\n# if the value contains \"certificate\", the value will be set to 1,\n# if the value contains neither \"cert\" nor \"certificate\", it will be set to the specified default, 0\n
    value_to_num_regex:\n- status state ^up$ ^ok$ `4`\n# metric value will be set to 1 if label \"state\" matches regex, otherwise set to **4**\n
    "},{"location":"plugins/#metricagent","title":"MetricAgent","text":"

    MetricAgent is used to manipulate metrics based on rules. You can define multiple rules; here is an example of what you could add to the yaml file of a collector:

    plugins:\nMetricAgent:\ncompute_metric:\n- snapshot_maxfiles_possible ADD snapshot.max_files_available snapshot.max_files_used\n- raid_disk_count ADD block_storage.primary.disk_count block_storage.hybrid_cache.disk_count\n

    Note: Metric names used to create new metrics can come from the left or right side of the rename operator (=>). Note: The metric agent currently does not work for histogram or array metrics.

    "},{"location":"plugins/#compute_metric","title":"compute_metric","text":"

    This rule creates a new metric (of type float64) using the provided scalar or an existing metric value combined with a mathematical operation.

    You can provide a numeric value or a metric name with an operation. The plugin will use the provided number or fetch the value of a given metric, perform the requested mathematical operation, and store the result in a new custom metric.

    Currently, we support these operations: ADD SUBTRACT MULTIPLY DIVIDE PERCENT

    Rule syntax:

    compute_metric:\n- METRIC OPERATION METRIC1 METRIC2 METRIC3\n# target new metric - mathematical operation - input metric names \n# apply OPERATION on metric values of METRIC1, METRIC2 and METRIC3 and set result in METRIC\n# METRIC1, METRIC2, METRIC3 can be a scalar or an existing metric name.\n

    Examples:

    compute_metric:\n- space_total ADD space_available space_used\n# a new metric will be created with the name \"space_total\"\n# if an instance has metric \"space_available\" with value \"1000\", and \"space_used\" with value \"400\",\n# the result value will be \"1400\" and set to metric \"space_total\".\n
    compute_metric:\n- disk_count ADD primary.disk_count secondary.disk_count hybrid.disk_count\n# value of metric \"disk_count\" would be addition of all the given disk_counts metric values.\n# disk_count = primary.disk_count + secondary.disk_count + hybrid.disk_count\n
    compute_metric:\n- files_available SUBTRACT files files_used\n# value of metric \"files_available\" would be subtraction of the metric value of files_used from metric value of files.\n# files_available = files - files_used\n
    compute_metric:\n- total_bytes MULTIPLY bytes_per_sector sector_count\n# value of metric \"total_bytes\" would be multiplication of metric value of bytes_per_sector and metric value of sector_count.\n# total_bytes = bytes_per_sector * sector_count\n
    compute_metric:\n- uptime MULTIPLY stats.power_on_hours 60 60\n# value of metric \"uptime\" would be multiplication of metric value of stats.power_on_hours and scalar value of 60 * 60.\n# uptime = stats.power_on_hours * 60 * 60\n
    compute_metric:\n- transmission_rate DIVIDE transfer.bytes_transferred transfer.total_duration\n# value of metric \"transmission_rate\" would be division of metric value of transfer.bytes_transferred by metric value of transfer.total_duration.\n# transmission_rate = transfer.bytes_transferred / transfer.total_duration\n
    compute_metric:\n- inode_used_percent PERCENT inode_files_used inode_files_total\n# a new metric named \"inode_used_percent\" will be created by dividing the metric \"inode_files_used\" by \n#  \"inode_files_total\" and multiplying the result by 100.\n# inode_used_percent = inode_files_used / inode_files_total * 100\n
    "},{"location":"plugins/#changelog-plugin","title":"ChangeLog Plugin","text":"

    The ChangeLog plugin is a feature of Harvest, designed to detect and track changes related to the creation, modification, and deletion of an object. By default, it supports volume, svm, and node objects. Its functionality can be extended to track changes in other objects by making relevant changes in the template.

    Please note that the ChangeLog plugin requires the uuid label, which is unique, to be collected by the template. Without the uuid label, the plugin will not function.

    The ChangeLog feature only detects changes when Harvest is up and running. It does not detect changes that occur when Harvest is down. Additionally, the plugin does not detect changes in metric values.

    "},{"location":"plugins/#enabling-the-plugin","title":"Enabling the Plugin","text":"

    The plugin can be enabled in the templates under the plugins section.

    For volume, svm, and node objects, you can enable the plugin with the following configuration:

    plugins:\n- ChangeLog\n

    For other objects, you need to specify the labels to track in the plugin configuration. These labels should be relevant to the object you want to track. If these labels are not specified in the template, the plugin will not be able to track changes for the object.

    Here's an example of how to enable the plugin for an aggregate object:

    plugins:\n- ChangeLog:\ntrack:\n- aggr\n- node\n- state\n

    In the above configuration, the plugin will track changes in the aggr, node, and state labels for the aggregate object.

    "},{"location":"plugins/#default-tracking-for-svm-node-volume","title":"Default Tracking for svm, node, volume","text":"

    By default, the plugin tracks changes in the following labels for svm, node, and volume objects:

    • svm: svm, state, type, anti_ransomware_state
    • node: node, location, healthy
    • volume: node, volume, svm, style, type, aggr, state, status

    Other objects are not tracked by default.

    These default settings can be overwritten as needed in the relevant templates. For instance, if you want to track junction_path labels for Volume, you can overwrite this in the volume template.

    plugins:\n- ChangeLog:\n- track:\n- node\n- volume\n- svm\n- style\n- type\n- aggr\n- state\n- status\n- junction_path\n
    "},{"location":"plugins/#change-types-and-metrics","title":"Change Types and Metrics","text":"

    The ChangeLog plugin publishes a metric with various labels providing detailed information about the change when an object is created, modified, or deleted.

    "},{"location":"plugins/#object-creation","title":"Object Creation","text":"

    When a new object is created, the ChangeLog plugin will publish a metric with the following labels:

    • object - name of the ONTAP object that was changed
    • op - type of change that was made
    • metric value - timestamp when Harvest captured the change (1698735558 in the example below)

    Example of metric shape for object creation:

    change_log{aggr=\"umeng_aff300_aggr2\", cluster=\"umeng-aff300-01-02\", datacenter=\"u2\", index=\"0\", instance=\"localhost:12993\", job=\"prometheus\", node=\"umeng-aff300-01\", object=\"volume\", op=\"create\", style=\"flexvol\", svm=\"harvest\", volume=\"harvest_demo\"} 1698735558\n
    "},{"location":"plugins/#object-modification","title":"Object Modification","text":"

    When an existing object is modified, the ChangeLog plugin will publish a metric with the following labels:

    • object - name of the ONTAP object that was changed
    • op - type of change that was made
    • track - property of the object which was modified
    • new_value - new value of the object after the change
    • old_value - previous value of the object before the change
    • metric value - timestamp when Harvest captured the change (1698735677 in the example below)

    Example of metric shape for object modification:

    change_log{aggr=\"umeng_aff300_aggr2\", cluster=\"umeng-aff300-01-02\", datacenter=\"u2\", index=\"1\", instance=\"localhost:12993\", job=\"prometheus\", new_value=\"offline\", node=\"umeng-aff300-01\", object=\"volume\", old_value=\"online\", op=\"update\", style=\"flexvol\", svm=\"harvest\", track=\"state\", volume=\"harvest_demo\"} 1698735677\n
    "},{"location":"plugins/#object-deletion","title":"Object Deletion","text":"

    When an object is deleted, the ChangeLog plugin will publish a metric with the following labels:

    • object - name of the ONTAP object that was changed
    • op - type of change that was made
    • metric value - timestamp when Harvest captured the change (1698735708 in the example below)

    Example of metric shape for object deletion:

    change_log{aggr=\"umeng_aff300_aggr2\", cluster=\"umeng-aff300-01-02\", datacenter=\"u2\", index=\"2\", instance=\"localhost:12993\", job=\"prometheus\", node=\"umeng-aff300-01\", object=\"volume\", op=\"delete\", style=\"flexvol\", svm=\"harvest\", volume=\"harvest_demo\"} 1698735708\n
    "},{"location":"plugins/#viewing-the-metrics","title":"Viewing the Metrics","text":"

    You can view the metrics published by the ChangeLog plugin in the ChangeLog Monitor dashboard in Grafana. This dashboard provides a visual representation of the changes tracked by the plugin for volume, svm, and node objects.

    "},{"location":"prepare-7mode-clusters/","title":"ONTAP 7mode","text":"

    NetApp Harvest requires login credentials to access monitored hosts. Although a generic admin account can be used, it is best practice to create a dedicated monitoring account with least-privilege access.

    ONTAP 7-mode supports only username/password based authentication with NetApp Harvest. Harvest communicates with monitored systems exclusively via HTTPS, which is not enabled by default in Data ONTAP 7-mode. Log in as a user with full administrative privileges and execute the following steps.

    "},{"location":"prepare-7mode-clusters/#enabling-https-and-tls-ontap-7-mode-only","title":"Enabling HTTPS and TLS (ONTAP 7-mode only)","text":"

    Verify SSL is configured

    secureadmin status ssl\n

    If ssl is \u2018active\u2019 continue. If not, setup SSL and be sure to choose a Key length (bits) of 2048:

    secureadmin setup ssl\n
    SSL Setup has already been done before. Do you want to proceed? [no] yes\nCountry Name (2 letter code) [US]: NL\nState or Province Name (full name) [California]: Noord-Holland\nLocality Name (city, town, etc.) [Santa Clara]: Schiphol\nOrganization Name (company) [Your Company]: NetApp\nOrganization Unit Name (division): SalesEngineering\nCommon Name (fully qualified domain name) [sdt-7dot1a.nltestlab.hq.netapp.com]:\nAdministrator email: noreply@netapp.com\nDays until expires [5475] :5475 Key length (bits) [512] :2048\n

    Enable management via SSL and enable TLS

    options httpd.admin.ssl.enable on\noptions tls.enable on  \n
    "},{"location":"prepare-7mode-clusters/#creating-ontap-user","title":"Creating ONTAP user","text":""},{"location":"prepare-7mode-clusters/#create-the-role-with-required-capabilities","title":"Create the role with required capabilities","text":"
    role add netapp-harvest-role -c \"Role for performance monitoring by NetApp Harvest\" -a login-http-admin,api-system-get-version,api-system-get-info,api-perf-object-*,api-emsautosupport-log \n
    "},{"location":"prepare-7mode-clusters/#create-a-group-for-this-role","title":"Create a group for this role","text":"
    useradmin group add netapp-harvest-group -c \"Group for performance monitoring by NetApp Harvest\" -r netapp-harvest-role \n
    "},{"location":"prepare-7mode-clusters/#create-a-user-for-the-role-and-enter-the-password-when-prompted","title":"Create a user for the role and enter the password when prompted","text":"
    useradmin user add netapp-harvest -c \"User account for performance monitoring by NetApp Harvest\" -n \"NetApp Harvest\" -g netapp-harvest-group\n

    The user is now created and can be configured for use by NetApp Harvest.

    "},{"location":"prepare-cdot-clusters/","title":"ONTAP cDOT","text":""},{"location":"prepare-cdot-clusters/#prepare-ontap-cdot-cluster","title":"Prepare ONTAP cDOT cluster","text":"

    NetApp Harvest requires login credentials to access monitored hosts. Although a generic admin account can be used, it is better to create a dedicated read-only monitoring account.

    In the examples below, the user, group, roles, etc., use a naming convention of netapp-harvest. These can be modified as needed to match your organizational needs.

    There are a few steps required to prepare each system for monitoring. Harvest supports two authentication styles (auth_style) to connect to ONTAP clusters. These are basic_auth or certificate_auth. Both work well, but if you're starting fresh, the recommendation is to create a read-only harvest user on your ONTAP server and use certificate-based TLS authentication.

    Here's a summary of what we're going to do

    1. Create a read-only ONTAP role with the necessary capabilities that Harvest will use to auth and collect data
    2. Create a user account using the role created in step #1
    3. Update the harvest.yml file to use the user account and password created in step #2 and start Harvest (a sample poller entry is sketched after this list).
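
    As a preview of step 3, a minimal poller entry using password authentication might look like the following sketch; the poller name, address, and credentials are placeholders:

    Pollers:\n  cluster-01:\n    datacenter: dc-01\n    addr: 10.0.1.1\n    auth_style: basic_auth\n    username: harvest2\n    password: <password>\n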

    There are two ways to create a read-only ONTAP role. Pick the one that best fits your needs.

    • Create a role with read-only access to all API objects via System Manager.
    • Create a role with read-only access to the limited set of APIs Harvest collects via ONTAP's command line interface (CLI).
    "},{"location":"prepare-cdot-clusters/#system-manager","title":"System Manager","text":"

    Open System Manager. Click on CLUSTER in the left menu bar, Settings and Users and Roles.

    In the right column, under Roles, click on Add to add a new role.

    Choose a role name (e.g. harvest2-role). In the REST API PATH field, type /api and select Read-Only for ACCESS. Click on Save.

    In the left column, under Users, click on Add to create a new user. Choose a username. Under Role, select the role that we just created. Under User Login Methods select ONTAPI, and one of the two authentication methods. Press the Add button and select HTTP and one of the authentication methods. Type in a password if you chose Password. Click on Save

    If you chose Password, you can add the username and password to the Harvest configuration file and start Harvest. If you chose Certificate jump to Using Certificate Authentication to generate certificates files.

    System Manager Classic interface

    Open System Manager. Click on the Settings icon in the top-right corner of the window.

    Click on Roles in the left menu bar and click Add. Choose a role name (e.g. harvest2-role).

    Under Role Attributes click on Add, under Command type DEFAULT, leave Query empty, select readonly under Access Level, click on OK and Add.

    After you click on Add, this is what you should see:

    Now we need to create a user. Click on Users in the left menu bar and Add. Choose a username and password. Under User Login Methods click on Add, select ontapi as Application and select the role that we just created as Role. Repeat by clicking on Add, select http as Application and select the role that we just created as Role. Click on Add in the pop-up window to save.

    "},{"location":"prepare-cdot-clusters/#ontap-cli","title":"ONTAP CLI","text":"

    We are going to:

    1. create a Harvest role with read-only access to a limited set of objects
    2. create a Harvest user and assign it to that role

    Login to the CLI of your cDOT ONTAP system using SSH.

    "},{"location":"prepare-cdot-clusters/#least-privilege-approach","title":"Least-privilege approach","text":"

    Verify there are no errors when you copy/paste these. Warnings are fine.

    security login role create -role harvest2-role -access readonly -cmddirname \"cluster\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"lun\"    \nsecurity login role create -role harvest2-role -access readonly -cmddirname \"network interface\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"qos adaptive-policy-group\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"qos policy-group\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"qos workload show\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"security\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"snapmirror\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"statistics\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"storage aggregate\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"storage disk\"     \nsecurity login role create -role harvest2-role -access readonly -cmddirname \"storage encryption disk\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"storage shelf\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"system health status show\" \nsecurity login role create -role harvest2-role -access readonly -cmddirname \"system health subsystem show\"  \nsecurity login role create -role harvest2-role -access readonly -cmddirname \"system node\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"version\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"volume\"\nsecurity login role create -role harvest2-role -access readonly -cmddirname \"vserver\"\n
    "},{"location":"prepare-cdot-clusters/#create-harvest-user-and-associate-with-the-harvest-role","title":"Create harvest user and associate with the harvest role","text":"

    Use this for password authentication

    security login create -user-or-group-name harvest2 -application ontapi -role harvest2-role -authentication-method password\nsecurity login create -user-or-group-name harvest2 -application http -role harvest2-role -authentication-method password   

    Or this for certificate authentication

    security login create -user-or-group-name harvest2 -application ontapi -role harvest2-role -authentication-method cert\nsecurity login create -user-or-group-name harvest2 -application http -role harvest2-role -authentication-method cert 

    Check that the harvest role has web access for ONTAPI and REST.

    vserver services web access show -role harvest2-role -name ontapi\nvserver services web access show -role harvest2-role -name rest\nvserver services web access show -role harvest2-role -name docs-api\n

    If either entry is missing, enable access by running the following. Replace $ADMIN_VSERVER with your SVM admin name.

    vserver services web access create -vserver $ADMIN_VSERVER -name ontapi -role harvest2-role\nvserver services web access create -vserver $ADMIN_VSERVER -name rest -role harvest2-role\nvserver services web access create -vserver $ADMIN_VSERVER -name docs-api -role harvest2-role\n

    "},{"location":"prepare-cdot-clusters/#7-mode-cli","title":"7-Mode CLI","text":"

    Login to the CLI of your 7-Mode ONTAP system (e.g. using SSH). First, we create a user role. If you want to give the user readonly access to all API objects, type in the following command:

    useradmin role modify harvest2-role -a login-http-admin,api-system-get-version, \\\napi-system-get-info,api-perf-object-*,api-ems-autosupport-log,api-diagnosis-status-get, \\\napi-lun-list-info,api-diagnosis-subsystem-config-get-iter,api-disk-list-info, \\\napi-diagnosis-config-get-iter,api-aggr-list-info,api-volume-list-info, \\\napi-storage-shelf-environment-list-info,api-qtree-list,api-quota-report\n
    "},{"location":"prepare-cdot-clusters/#using-certificate-authentication","title":"Using Certificate Authentication","text":"

    See comments here for troubleshooting client certificate authentication.

    Client certificate authentication allows you to authenticate with your ONTAP cluster without including username/passwords in your harvest.yml file. The process to set up client certificates is straightforward, although self-signed certificates introduce more work as does Go's strict treatment of common names.

    Unless you've installed production certificates on your ONTAP cluster, you'll need to replace your cluster's common-name-based self-signed certificates with a subject alternative name-based certificate. After that step is completed, we'll create client certificates and add those for passwordless login.

    If you can't or don't want to replace your ONTAP cluster certificates, there are some workarounds. You can

    • Use use_insecure_tls: true in your harvest.yml to disable certificate verification (see the sketch after this list)
    • Change your harvest.yml to connect via hostname instead of IP address
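
    For the first workaround, a minimal poller sketch might look like this; the poller name and address are placeholders:

    Pollers:\n  cluster-01:\n    addr: 10.0.1.1\n    use_insecure_tls: true\n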
    "},{"location":"prepare-cdot-clusters/#create-self-signed-subject-alternate-name-certificates-for-ontap","title":"Create Self-Signed Subject Alternate Name Certificates for ONTAP","text":"

    Subject alternate name (SAN) certificates allow multiple hostnames in a single certificate. Starting with Go 1.15, when connecting to a cluster via its IP address, the CN field in the server certificate is ignored. This often causes errors like this: x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs

    "},{"location":"prepare-cdot-clusters/#overview-of-steps-to-create-a-self-signed-san-certificate-and-make-ontap-use-it","title":"Overview of steps to create a self-signed SAN certificate and make ONTAP use it","text":"
    1. Create a root key
    2. Create a root certificate authority certificate
    3. Create a SAN certificate for your ONTAP cluster, using #2 to create it
    4. Install root ca certificate created in step #2 on cluster
    5. Install SAN certificate created in step #3 on your cluster
    6. Modify your cluster/SVM to use the new certificate installed at step #5
    "},{"location":"prepare-cdot-clusters/#setup","title":"Setup","text":"
    # create a place to store the certificate authority files, adjust as needed\nmkdir -p ca/{private,certs}\n
    "},{"location":"prepare-cdot-clusters/#create-a-root-key","title":"Create a root key","text":"
    cd ca\n# generate a private key that we will use to create our self-signed certificate authority\nopenssl genrsa -out private/ca.key.pem 4096\nchmod 400 private/ca.key.pem\n
    "},{"location":"prepare-cdot-clusters/#create-a-root-certificate-authority-certificate","title":"Create a root certificate authority certificate","text":"

    Download the sample openssl.cnf file and put it in the directory we created in setup. Edit line 9, changing dir to point to your ca directory created in setup.
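
    After the edit, line 9 of openssl.cnf might look like this; the path is illustrative and should match the ca directory you created in setup:

    dir = /opt/harvest/ca    # line 9: top-level directory of the certificate authority files\n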

    openssl req -config openssl.cnf -key private/ca.key.pem -new -x509 -days 7300 -sha256 -extensions v3_ca -out certs/ca.cert.pem\n\n# Verify\nopenssl x509 -noout -text -in certs/ca.cert.pem\n\n# Make sure these are present\n    Signature Algorithm: sha256WithRSAEncryption               <======== Signature Algorithm can not be sha-1\n        X509v3 extensions:\n            X509v3 Subject Key Identifier: \n                --removed\n            X509v3 Authority Key Identifier: \n                --removed\n\n            X509v3 Basic Constraints: critical\n                CA:TRUE                                        <======== CA must be true\n            X509v3 Key Usage: critical\n                Digital Signature, Certificate Sign, CRL Sign  <======== Digital and certificate signature\n
    "},{"location":"prepare-cdot-clusters/#create-a-san-certificate-for-your-ontap-cluster","title":"Create a SAN certificate for your ONTAP cluster","text":"

    First, we'll create the certificate signing request and then the certificate. In this example, the ONTAP cluster is named umeng-aff300-05-06, update accordingly.

    Download the sample server_cert.cnf file and put it in the directory we created in setup. Edit lines 18-21 to include your ONTAP cluster hostnames and IP addresses. Edit lines 6-11 with new names as needed.

    openssl req -new -newkey rsa:4096 -nodes -sha256 -subj \"/\" -config server_cert.cnf -outform pem -out umeng-aff300-05-06.csr -keyout umeng-aff300-05-06.key\n\n# Verify\nopenssl req -text -noout -in umeng-aff300-05-06.csr\n\n# Make sure these are present\n        Attributes:\n        Requested Extensions:\n            X509v3 Subject Alternative Name:         <======== Section that lists alternate DNS and IP names\n                DNS:umeng-aff300-05-06-cm.rtp.openenglab.netapp.com, DNS:umeng-aff300-05-06, IP Address:10.193.48.11, IP Address:10.193.48.11\n    Signature Algorithm: sha256WithRSAEncryption     <======== Signature Algorithm can not be sha-1\n

    We'll now use the certificate signing request and the recently created certificate authority to create a new SAN certificate for our cluster.

    openssl x509 -req -sha256 -days 30 -in umeng-aff300-05-06.csr -CA certs/ca.cert.pem -CAkey private/ca.key.pem -CAcreateserial -out umeng-aff300-05-06.crt -extensions req_ext -extfile server_cert.cnf\n\n# Verify\nopenssl x509 -text -noout -in umeng-aff300-05-06.crt\n\n# Make sure these are present\nX509v3 extensions:\n            X509v3 Subject Alternative Name:       <======== Section that lists alternate DNS and IP names\n                DNS:umeng-aff300-05-06-cm.rtp.openenglab.netapp.com, DNS:umeng-aff300-05-06, IP Address:10.193.48.11, IP Address:10.193.48.11\n    Signature Algorithm: sha256WithRSAEncryption   <======== Signature Algorithm can not be sha-1\n
    "},{"location":"prepare-cdot-clusters/#install-root-ca-certificate-on-cluster","title":"Install Root CA Certificate On Cluster","text":"

    Login to your cluster with admin credentials and install the server certificate authority. Copy from ca/certs/ca.cert.pem

    ssh admin@IP\numeng-aff300-05-06::*> security certificate install -type server-ca\n\nPlease enter Certificate: Press <Enter> when done\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n\nYou should keep a copy of the CA-signed digital certificate for future reference.\n\nThe installed certificate's CA and serial number for reference:\nCA: ntap\nSerial: 46AFFC7A3A9999999E8FB2FEB0\n\nThe certificate's generated name for reference: ntap\n

    Now install the server certificate we created above with SAN. Copy certificate from ca/umeng-aff300-05-06.crt and private key from ca/umeng-aff300-05-06.key

    umeng-aff300-05-06::*> security certificate install -type server\n\nPlease enter Certificate: Press <Enter> when done\n-----BEGIN CERTIFICATE-----\n..\n-----END CERTIFICATE-----\n\nPlease enter Private Key: Press <Enter> when done\n-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n\nPlease enter certificates of Certification Authorities (CA) which form the certificate chain of the server certificate. This starts with the issuing CA certificate of the server certificate and can range up to the root CA certificate.\n\nDo you want to continue entering root and/or intermediate certificates {y|n}: n\n

    If ONTAP tells you the provided certificate does not have a common name in the subject field, type the hostname of the cluster like this:

    The provided certificate does not have a common name in the subject field.\n\nEnter a valid common name to continue installation of the certificate:\n\nEnter a valid common name to continue installation of the certificate: umeng-aff300-05-06-cm.rtp.openenglab.netapp.com\n\nYou should keep a copy of the private key and the CA-signed digital certificate for future reference.\n\nThe installed certificate's CA and serial number for reference:\nCA: ntap\nSerial: 67A94AA25B229A68AC5BABACA8939A835AA998A58\n\nThe certificate's generated name for reference: umeng-aff300-05-06-cm.rtp.openenglab.netapp.com\n
    "},{"location":"prepare-cdot-clusters/#modify-the-admin-svm-to-use-the-new-certificate","title":"Modify the admin SVM to use the new certificate","text":"

    We'll modify the cluster's admin SVM to use the just installed server certificate and certificate authority.

    vserver show -type admin -fields vserver,type\nvserver            type\n------------------ -----\numeng-aff300-05-06 admin\n\numeng-aff300-05-06::*> ssl modify -vserver umeng-aff300-05-06 -server-enabled true -serial 67A94AA25B229A68AC5BABACA8939A835AA998A58 -ca ntap\n  (security ssl modify)\n

    You can verify the certificate(s) are installed and working by using openssl like so:

    openssl s_client -CAfile certs/ca.cert.pem -showcerts -servername server -connect umeng-aff300-05-06-cm.rtp.openenglab.netapp.com:443\n\nCONNECTED(00000005)\ndepth=1 C = US, ST = NC, L = RTP, O = ntap, OU = ntap\nverify return:1\ndepth=0 \nverify return:1\n...\n

    without the -CAfile, openssl will report

    CONNECTED(00000005)\ndepth=0 \nverify error:num=20:unable to get local issuer certificate\nverify return:1\ndepth=0 \nverify error:num=21:unable to verify the first certificate\nverify return:1\n---\n
    "},{"location":"prepare-cdot-clusters/#create-client-certificates-for-password-less-login","title":"Create Client Certificates for Password-less Login","text":"

    Copy the server certificate we created above into the Harvest install directory.

    cp ca/umeng-aff300-05-06.crt /opt/harvest\ncd /opt/harvest\n

    Create a self-signed client key and certificate with the same name as the hostname where Harvest is running. It's not required to name the key/cert pair after the hostname, but if you do, Harvest will load them automatically when you specify auth_style: certificate_auth; otherwise, you can point to them directly. See Pollers for details.

    Change the common name to the ONTAP user you set up with the harvest role above, e.g. harvest2

    cd /opt/harvest\nmkdir cert\nopenssl req -x509 -nodes -days 1095 -newkey rsa:2048 -keyout cert/$(hostname).key -out cert/$(hostname).pem -subj \"/CN=harvest2\"\n
    "},{"location":"prepare-cdot-clusters/#install-client-certificates-on-cluster","title":"Install Client Certificates on Cluster","text":"

    Login to your cluster with admin credentials and install the client certificate. Copy from cert/$(hostname).pem

    ssh admin@IP\numeng-aff300-05-06::*>  security certificate install -type client-ca -vserver umeng-aff300-05-06\n\nPlease enter Certificate: Press <Enter> when done\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n\nYou should keep a copy of the CA-signed digital certificate for future reference.\n\nThe installed certificate's CA and serial number for reference:\nCA: cbg\nSerial: B77B59444444CCCC\n\nThe certificate's generated name for reference: cbg_B77B59444444CCCC\n

    Now that the client certificate is installed, let's enable it.

    umeng-aff300-05-06::*> ssl modify -vserver umeng-aff300-05-06 -client-enabled true\n  (security ssl modify)\n

    Verify with a recent version of curl. If you are running on a Mac see below.

    curl --cacert umeng-aff300-05-06.crt --key cert/$(hostname).key --cert cert/$(hostname).pem https://umeng-aff300-05-06-cm.rtp.openenglab.netapp.com/api/storage/disks\n
    "},{"location":"prepare-cdot-clusters/#update-harvestyml-to-use-client-certificates","title":"Update Harvest.yml to use client certificates","text":"

    Update the poller section with auth_style: certificate_auth like this:

      u2-cert: \n    auth_style: certificate_auth\n    addr: umeng-aff300-05-06-cm.rtp.openenglab.netapp.com\n
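
    If your certificate and key are not named after the hostname, you can point to them explicitly instead; a sketch, where the paths are placeholders for wherever you stored the client certificate and key created earlier:

      u2-cert: \n    auth_style: certificate_auth\n    addr: umeng-aff300-05-06-cm.rtp.openenglab.netapp.com\n    ssl_cert: /opt/harvest/cert/harvest-client.pem\n    ssl_key: /opt/harvest/cert/harvest-client.key\n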

    Restart your poller and enjoy your password-less life-style.

    "},{"location":"prepare-cdot-clusters/#macos","title":"macOS","text":"

    The version of curl installed on macOS up through Monterey is not recent enough to work with self-signed SAN certs. You will need to install a newer version of curl via Homebrew, MacPorts, source, etc.

    Example of failure when running with an older version of curl - you will see this in client auth test step above.

    curl --version\ncurl 7.64.1 (x86_64-apple-darwin20.0) libcurl/7.64.1 (SecureTransport) LibreSSL/2.8.3 zlib/1.2.11 nghttp2/1.41.0\n\ncurl --cacert umeng-aff300-05-06.crt --key cert/cgrindst-mac-0.key --cert cert/cgrindst-mac-0.pem https://umeng-aff300-05-06-cm.rtp.openenglab.netapp.com/api/storage/disks\n\ncurl: (60) SSL certificate problem: unable to get local issuer certificate\n

    Let's install curl via Homebrew. Make sure you don't miss the message that Homebrew prints about your path.

    If you need to have curl first in your PATH, run:\n  echo 'export PATH=\"/usr/local/opt/curl/bin:$PATH\"' >> /Users/cgrindst/.bash_profile\n

    Now when we make a client auth request with our self-signed certificate, it works! \\o/

    brew install curl\n\ncurl --version\ncurl 7.80.0 (x86_64-apple-darwin20.6.0) libcurl/7.80.0 (SecureTransport) OpenSSL/1.1.1l zlib/1.2.11 brotli/1.0.9 zstd/1.5.0 libidn2/2.3.2 libssh2/1.10.0 nghttp2/1.46.0 librtmp/2.3 OpenLDAP/2.6.0\nRelease-Date: 2021-11-10\nProtocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp \nFeatures: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz MultiSSL NTLM NTLM_WB SPNEGO SSL TLS-SRP UnixSockets zstd\n\ncurl --cacert umeng-aff300-05-06.crt --key cert/cgrindst-mac-0.key --cert cert/cgrindst-mac-0.pem https://umeng-aff300-05-06-cm.rtp.openenglab.netapp.com/api/storage/disks\n\n{\n  \"records\": [\n    {\n      \"name\": \"1.1.22\",\n      \"_links\": {\n        \"self\": {\n          \"href\": \"/api/storage/disks/1.1.22\"\n        }\n      }\n    }\n}\n

    Change directory to your Harvest home directory (replace /opt/harvest/ if this is not the default):

    $ cd /opt/harvest/\n

    Generate an SSL cert and key pair with the following command. Note that it's preferred to generate these files using the hostname of the local machine. The command below assumes debian8 as our hostname and harvest2 as the user we created in the previous step:

    openssl req -x509 -nodes -days 1095 -newkey rsa:2048 -keyout cert/debian8.key \\\n-out cert/debian8.pem  -subj \"/CN=harvest2\"\n

    Next, open the public key (debian8.pem in our example) and copy all of its content. Log in to your ONTAP CLI and run this command, replacing CLUSTER with the name of your cluster.

    security certificate install -type client-ca -vserver CLUSTER\n

    Paste the public key content and hit enter. Output should be similar to this:

    jamaica::> security certificate install -type client-ca -vserver jamaica \n\nPlease enter Certificate: Press <Enter> when done\n-----BEGIN CERTIFICATE-----                       \nMIIDETCCAfmgAwIBAgIUP9EUXyl2BDSUOkNEcDU0yqbJ29IwDQYJKoZIhvcNAQEL\nBQAwGDEWMBQGA1UEAwwNaGFydmVzdDItY2xpMzAeFw0yMDEwMDkxMjA0MDhaFw0y\nMzEwMDktcGFueSBMdGQxFzAVBgNVBAMlc3QyLWNsaTMwggEiMA0tcGFueSBGCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCVVy25BeCRoGCJWFOlyUL7Ddkze4Hl2/6u\nqye/3mk5vBNsGuXUrtad5XfBB70Ez9hWl5sraLiY68ro6MyX1icjiUTeaYDvS/76\nIw7HeXJ5Pyb/fWth1nePunytoLyG/vaTCySINkIV5nlxC+k0X3wWFJdfJzhloPtt\n1Vdm7aCF2q6a2oZRnUEBGQb6t5KyF0/Xh65mvfgB0pl/AS2HY5Gz+~L54Xyvs+BY\nV7UmTop7WBYl0L3QXLieERpHXnyOXmtwlm1vG5g4n/0DVBNTBXjEdvc6oRh8sxBN\nZlQWRApE7pa/I1bLD7G2AiS4UcPmR4cEpPRVEsOFOaAN3Z3YskvnAgMBAAGjUzBR\nMB0GA1UdDgQWBBQr4syV6TCcgO/5EcU/F8L2YYF15jAfBgNVHSMEGDAWgBQr4syV\n6TCcgO/5EcU/F8L2YYF15jAPBgNVHRMdfdfwerH/MA0GCSqGSIb^ECd3DQEBCwUA\nA4IBAQBjP1BVhClRKkO/M3zlWa2L9Ztce6SuGwSnm6Ebmbs+iMc7o2N9p3RmV6Xl\nh6NcdXRzzPAVrUoK8ewhnBzdghgIPoCI6inAf1CUhcCX2xcnE/osO+CfvKuFnPYE\nWQ7UNLsdfka0a9kTK13r3GMs09z/VsDs0gD8UhPjoeO7LQhdU9tJ/qOaSP3s48pv\nsYzZurHUgKmVOaOE4t9DAdevSECEWCETRETA$Vbn%@@@%%rcdrctru65ryFaByb+\nhTtGhDnoHwzt/cAGvLGV/RyWdGFAbu7Fb1rV94ceggE7nh1FqbdLH9siot6LlnQN\nMhEWp5PYgndOW49dDYUxoauCCkiA\n-----END CERTIFICATE-----\n\n\nYou should keep a copy of the CA-signed digital certificate for future reference.\n\nThe installed certificate's CA and serial number for reference:\nCA: harvest2\nSerial: 3FD1145F2976043012213d3009095534CCRDBD2\n\nThe certificate's generated name for reference: harvest2\n

    Finally, we need to enable SSL authentication with the following command (replace CLUSTER with the name of your cluster):

    security ssl modify -client-enabled true -vserver CLUSTER\n
    "},{"location":"prepare-cdot-clusters/#reference","title":"Reference","text":"
    • https://github.com/jcbsmpsn/golang-https-example
    "},{"location":"prepare-fsx-clusters/","title":"Amazon FSx for ONTAP","text":""},{"location":"prepare-fsx-clusters/#prepare-amazon-fsx-for-ontap","title":"Prepare Amazon FSx for ONTAP","text":"

    To set up Harvest and FSx make sure you read through Monitoring FSx for ONTAP file systems using Harvest and Grafana

    "},{"location":"prepare-fsx-clusters/#supported-harvest-dashboards","title":"Supported Harvest Dashboards","text":"

    Amazon FSx for ONTAP exposes a different set of metrics than ONTAP cDOT. That means a limited set of out-of-the-box dashboards are supported and some panels may be missing information.

    The dashboards that work with FSx are tagged with fsx and listed below:

    • ONTAP: Volume
    • ONTAP: SVM
    • ONTAP: Security
    • ONTAP: Data Protection Snapshots
    • ONTAP: Compliance
    "},{"location":"prepare-storagegrid-clusters/","title":"StorageGRID","text":""},{"location":"prepare-storagegrid-clusters/#prepare-storagegrid-cluster","title":"Prepare StorageGRID cluster","text":"

    NetApp Harvest requires login credentials to access StorageGRID hosts. Although a generic admin account can be used, it is better to create a dedicated monitoring user with the fewest permissions.

    Here's a summary of what we're going to do

    1. Create a StorageGRID group with the necessary capabilities that Harvest will use to auth and collect data
    2. Create a user assigned to the group created in step #1.
    "},{"location":"prepare-storagegrid-clusters/#create-storagegrid-group-permissions","title":"Create StorageGRID group permissions","text":"

    These steps are documented here.

    You will need a root or admin account to create a new group permission.

    1. Select CONFIGURATION > Access control > Admin groups
    2. Select Create group
    3. Select Local group
    4. Enter a display name for the group, which you can update later as required. For example, Harvest or monitoring.
    5. Enter a unique name for the group, which you cannot update later.
    6. Select Continue
    7. On the Manage group permissions screen, select the permissions you want. At a minimum, Harvest requires the Tenant accounts and Metrics query permissions.
    8. Select Save changes

    "},{"location":"prepare-storagegrid-clusters/#create-a-storagegrid-user","title":"Create a StorageGRID user","text":"

    These steps are documented here.

    You will need a root or admin account to create a new user.

    1. Select CONFIGURATION > Access control > Admin users
    2. Select Create user
    3. Enter the user\u2019s full name, a unique username, and a password.
    4. Select Continue.
    5. Assign the user to the previously created harvest group.
    6. Select Create user and select Finish.
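
    With the group and user in place, the poller entry in harvest.yml follows the usual pattern. A minimal sketch, where the poller name, address, and credentials are placeholders and the StorageGrid collector is assumed:

    Pollers:\n  storagegrid-01:\n    datacenter: dc-01\n    addr: 10.0.1.10\n    collectors:\n      - StorageGrid\n    username: harvest\n    password: <password>\n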

    "},{"location":"prepare-storagegrid-clusters/#reference","title":"Reference","text":"

    See group permissions for more information on StorageGRID permissions.

    "},{"location":"prometheus-exporter/","title":"Prometheus Exporter","text":"Prometheus Install

    The information below describes how to set up Harvest's Prometheus exporter. If you need help installing or setting up Prometheus, check out their documentation.

    "},{"location":"prometheus-exporter/#overview","title":"Overview","text":"

    The Prometheus exporter is responsible for:

    • formatting metrics into the Prometheus line protocol
    • creating a web-endpoint on http://<ADDR>:<PORT>/metrics (or https: if TLS is enabled) for Prometheus to scrape

    A web end-point is required because Prometheus scrapes Harvest by polling that end-point.

    In addition to the /metrics end-point, the Prometheus exporter also serves an overview of all metrics and collectors available on its root address scheme://<ADDR>:<PORT>/.
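
    To confirm an exporter is serving data, you can fetch its metrics end-point directly; the port below is only an example, use whatever port your exporter was assigned:

    curl -s http://localhost:12990/metrics | head\n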

    Because Prometheus polls Harvest, don't forget to update your Prometheus configuration and tell Prometheus how to scrape each poller.

    There are two ways to configure the Prometheus exporter: using a port range or individual ports.

    The port range is more flexible and should be used when you want multiple pollers all exporting to the same instance of Prometheus. Both options are explained below.
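
    As a quick preview of the individual-port option, a single-poller setup might look like this sketch (names and port are illustrative); the port_range variant is shown in the examples further below:

    Exporters:\n  prometheus-local:\n    exporter: Prometheus\n    port: 12990\nPollers:\n  cluster-01:\n    exporters:\n      - prometheus-local\n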

    "},{"location":"prometheus-exporter/#parameters","title":"Parameters","text":"

    All parameters of the exporter are defined in the Exporters section of harvest.yml.

    An overview of all parameters:

    • port_range (int-int range; overrides port if specified) - lower port to upper port (inclusive) of the HTTP end-point to create when a poller specifies this exporter. Starting at the lower port, each free port will be tried sequentially up to the upper port.
    • port (int; required if port_range is not specified) - port of the HTTP end-point
    • local_http_addr (string, optional) - address of the HTTP server Harvest starts for Prometheus to scrape: use localhost to serve only on the local machine, use 0.0.0.0 (default) if Prometheus is scraping from another machine. Default: 0.0.0.0
    • global_prefix (string, optional) - add a prefix to all metrics (e.g. netapp_)
    • allow_addrs (list of strings, optional) - allow access only if the host matches any of the provided addresses
    • allow_addrs_regex (list of strings, optional) - allow access only if the host address matches at least one of the regular expressions
    • cache_max_keep (string, Go duration format, optional) - maximum amount of time metrics are cached (in case Prometheus does not collect the metrics in time). Default: 5m
    • add_meta_tags (bool, optional) - add HELP and TYPE metatags to metrics (currently no useful information, but required by some tools). Default: false
    • sort_labels (bool, optional) - sort metric labels before exporting. Some open-metrics scrapers report stale metrics when labels are not sorted. Default: false
    • tls (optional) - if present, enables TLS transport. If running in a container, see note
    • tls cert_file, key_file (required children of tls) - relative or absolute path to TLS certificate and key file. TLS 1.3 certificates required. FIPS compliant P-256 TLS 1.3 certificates can be created with bin/harvest admin tls create server, openssl, mkcert, etc.

    A few examples:

    "},{"location":"prometheus-exporter/#port_range","title":"port_range","text":"
    Exporters:\nprom-prod:\nexporter: Prometheus\nport_range: 2000-2030\nPollers:\ncluster-01:\nexporters:\n- prom-prod\ncluster-02:\nexporters:\n- prom-prod\ncluster-03:\nexporters:\n- prom-prod\n# ... more\ncluster-16:\nexporters:\n- prom-prod\n

    Sixteen pollers will collect metrics from 16 clusters and make those metrics available to a single instance of Prometheus named prom-prod. Sixteen web end-points will be created on the first 16 available free ports between 2000 and 2030 (inclusive).

    After starting the pollers in the example above, running bin/harvest status shows the following. Note that ports 2000 and 2003 were not available so the next free port in the range was selected. If no free port can be found, an error will be logged.

    Datacenter   Poller       PID     PromPort  Status              \n++++++++++++ ++++++++++++ +++++++ +++++++++ ++++++++++++++++++++\nDC-01        cluster-01   2339    2001      running         \nDC-01        cluster-02   2343    2002      running         \nDC-01        cluster-03   2351    2004      running         \n...\nDC-01        cluster-14   2405    2015      running         \nDC-01        cluster-15   2502    2016      running         \nDC-01        cluster-16   2514    2017      running         \n
    "},{"location":"prometheus-exporter/#allow_addrs","title":"allow_addrs","text":"
    Exporters:\nmy_prom:\nallow_addrs:\n- 192.168.0.102\n- 192.168.0.103\n

    will only allow access from exactly these two addresses.

    "},{"location":"prometheus-exporter/#allow_addrs_regex","title":"allow_addrs_regex","text":"
    Exporters:\nmy_prom:\nallow_addrs_regex:\n- `^192.168.0.\\d+$`\n

will only allow access from the IPv4 range 192.168.0.0-192.168.0.255.

    "},{"location":"prometheus-exporter/#configure-prometheus-to-scrape-harvest-pollers","title":"Configure Prometheus to scrape Harvest pollers","text":"

    There are two ways to tell Prometheus how to scrape Harvest: using HTTP service discovery (SD) or listing each poller individually.

    HTTP service discovery is the more flexible of the two. It is also less error-prone, and easier to manage. Combined with the port_range configuration described above, SD is the least effort to configure Prometheus and the easiest way to keep both Harvest and Prometheus in sync.

NOTE HTTP service discovery does not work with Docker yet. With Docker, you will need to list each poller individually or, if possible, use the Docker Compose workflow, which uses file service discovery to achieve a similar ease-of-use as HTTP service discovery.
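For illustration, a minimal sketch of a Prometheus file-based service discovery job (the targets file path is a placeholder; the Docker Compose workflow manages a similar file for you):

scrape_configs:\n- job_name: harvest\nfile_sd_configs:\n- files:\n- /path/to/harvest_targets.yml\n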

    See the example below for how to use HTTP SD and port_range together.

    "},{"location":"prometheus-exporter/#prometheus-http-service-discovery","title":"Prometheus HTTP Service Discovery","text":"

    HTTP service discovery was introduced in Prometheus version 2.28.0. Make sure you're using that version or later.

    The way service discovery works is:

    • shortly after a poller starts up, it registers with the SD node (if one exists)
    • the poller sends a heartbeat to the SD node, by default every 45s.
    • if a poller fails to send a heartbeat, the SD node removes the poller from the list of active targets after a minute
    • the SD end-point is reachable via SCHEMA:///api/v1/sd

      To use HTTP service discovery you need to:

      1. tell Harvest to start the HTTP service discovery process
      2. tell Prometheus to use the HTTP service discovery endpoint
      "},{"location":"prometheus-exporter/#enable-http-service-discovery-in-harvest","title":"Enable HTTP service discovery in Harvest","text":"

      Add the following to your harvest.yml

      Admin:\nhttpsd:\nlisten: :8887\n

      This tells Harvest to create an HTTP service discovery end-point on interface 0.0.0.0:8887. If you want to only listen on localhost, use 127.0.0.1:<port> instead. See net.Dial for details on the supported listen formats.

      Start the SD process by running bin/harvest admin start. Once it is started, you can curl the end-point for the list of running Harvest pollers.

      curl -s 'http://localhost:8887/api/v1/sd' | jq .\n[\n  {\n    \"targets\": [\n      \"10.0.1.55:12990\",\n      \"10.0.1.55:15037\",\n      \"127.0.0.1:15511\",\n      \"127.0.0.1:15008\",\n      \"127.0.0.1:15191\",\n      \"10.0.1.55:15343\"\n    ]\n  }\n]\n
      "},{"location":"prometheus-exporter/#harvest-http-service-discovery-options","title":"Harvest HTTP Service Discovery options","text":"

      HTTP service discovery (SD) is configured in the Admin > httpsd section of your harvest.yml.

• listen (required): interface and port to listen on; use localhost:PORT or :PORT for all interfaces
• auth_basic (optional): if present, enables basic authentication on the /api/v1/sd end-point
• username, password (required children of auth_basic)
• tls (optional): if present, enables TLS transport. If running in a container, see note.
• cert_file, key_file (required children of tls): relative or absolute path to the TLS certificate and key file. TLS 1.3 certificates are required. FIPS-compliant P-256 TLS 1.3 certificates can be created with bin/harvest admin tls create server
• ssl_cert, ssl_key (optional if auth_style is certificate_auth): absolute paths to the SSL (client) certificate and key used to authenticate with the target system. If not provided, the poller will look for <hostname>.key and <hostname>.pem in $HARVEST_HOME/cert/. To create certificates for ONTAP systems, see using certificate authentication
• heart_beat (optional, Go duration format; default 45s): how frequently each poller sends a heartbeat message to the SD node
• expire_after (optional, Go duration format; default 1m): if a poller fails to send a heartbeat, the SD node removes the poller after this duration
"},{"location":"prometheus-exporter/#enable-http-service-discovery-in-prometheus","title":"Enable HTTP service discovery in Prometheus","text":"

      Edit your prometheus.yml and add the following section

      $ vim /etc/prometheus/prometheus.yml

      scrape_configs:\n- job_name: harvest\nhttp_sd_configs:\n- url: http://localhost:8887/api/v1/sd\n

      Harvest and Prometheus both support basic authentication for HTTP SD end-points. To enable basic auth, add the following to your Harvest config.

      Admin:\nhttpsd:\nlisten: :8887\n# Basic auth protects GETs and publishes\nauth_basic:\nusername: admin\npassword: admin\n

      Don't forget to also update your Prometheus config with the matching basic_auth credentials.
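As a sketch, the Prometheus side might look like this (the credentials below only mirror the example above; use your own):

scrape_configs:\n- job_name: harvest\nhttp_sd_configs:\n- url: http://localhost:8887/api/v1/sd\nbasic_auth:\nusername: admin\npassword: admin\n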

      "},{"location":"prometheus-exporter/#prometheus-http-service-discovery-and-port-range","title":"Prometheus HTTP Service Discovery and Port Range","text":"

      HTTP SD combined with Harvest's port_range feature leads to significantly less configuration in your harvest.yml. For example, if your clusters all export to the same Prometheus instance, you can refactor the per-poller exporter into a single exporter shared by all clusters in Defaults as shown below:

      Notice that none of the pollers specify an exporter. Instead, all the pollers share the single exporter named prometheus-r listed in Defaults. prometheus-r is the only exporter defined and as specified will manage up to 1,000 Harvest Prometheus exporters.

      If you add or remove more clusters in the Pollers section, you do not have to change Prometheus since it dynamically pulls the targets from the Harvest admin node.

      Admin:\nhttpsd:\nlisten: :8887\n\nExporters:\nprometheus-r:\nexporter: Prometheus\nport_range: 13000-13999\n\nDefaults:\ncollectors:\n- Zapi\n- ZapiPerf\nuse_insecure_tls: false\nauth_style: password\nusername: admin\npassword: pass\nexporters:\n- prometheus-r\n\nPollers:\numeng_aff300:\ndatacenter: meg\naddr: 10.193.48.11\n\nF2240-127-26:\ndatacenter: meg\naddr: 10.193.6.61\n\n# ... add more clusters\n
      "},{"location":"prometheus-exporter/#static-scrape-targets","title":"Static Scrape Targets","text":"

If we define four Prometheus exporters at ports 12990, 12991, 14567, and 14568, you need to add those four targets to your prometheus.yml.

      $ vim /etc/prometheus/prometheus.yml\n

Scroll down to near the end of the file and add the following lines:

        - job_name: 'harvest'\nscrape_interval: 60s\nstatic_configs:\n- targets:\n- 'localhost:12990'\n- 'localhost:12991'\n- 'localhost:14567'\n- 'localhost:14568'\n

NOTE If Prometheus is not on the same machine as Harvest, replace localhost with the IP address of your Harvest machine. Also note the scrape interval above is set to 60s (1m), which matches the polling frequency of the default Harvest collectors. If you change the polling frequency of a Harvest collector to a lower value, you should also lower the scrape interval.

      "},{"location":"prometheus-exporter/#prometheus-exporter-and-tls","title":"Prometheus Exporter and TLS","text":"

      The Harvest Prometheus exporter can be configured to serve its metrics via HTTPS by configuring the tls section in the Exporters section of harvest.yml.

      Let's walk through an example of how to set up Harvest's Prometheus exporter and how to configure Prometheus to use TLS.

      "},{"location":"prometheus-exporter/#generate-tls-certificates","title":"Generate TLS Certificates","text":"

We'll use Harvest's admin command line tool to create a self-signed TLS certificate/key pair for the exporter and Prometheus. Note: If running in a container, see note.

      cd $Harvest_Install_Directory\nbin/harvest admin tls create server\n2023/06/23 09:39:48 wrote cert/admin-cert.pem\n2023/06/23 09:39:48 wrote cert/admin-key.pem\n

      Two files are created. Since we want to use these certificates for our Prometheus exporter, let's rename them to make that clearer.

      mv cert/admin-cert.pem cert/prom-cert.pem\nmv cert/admin-key.pem cert/prom-key.pem\n
      "},{"location":"prometheus-exporter/#configure-harvest-prometheus-exporter-to-use-tls","title":"Configure Harvest Prometheus Exporter to use TLS","text":"

      Edit your harvest.yml and add a TLS section to your exporter block like this:

      Exporters:\nmy-exporter:\nlocal_http_addr: localhost\nexporter: Prometheus\nport: 16001\ntls:\ncert_file: cert/prom-cert.pem\nkey_file: cert/prom-key.pem\n

      Update one of your Pollers to use this exporter and start the poller.

      Pollers:\nmy-cluster:\ndatacenter: dc-1\naddr: 10.193.48.11\nexporters:\n- my-exporter     # Use TLS exporter we created above\n

When the poller is started, it will log whether https or http is being used as part of the URL, like so:

      bin/harvest start -f my-cluster\n2023-06-23T10:02:03-04:00 INF prometheus/httpd.go:40 > server listen Poller=my-cluster exporter=my-exporter url=https://localhost:16001/metrics\n

If the URL scheme is https, TLS is being used.

      You can use curl to scrape the Prometheus exporter and verify that TLS is being used like so:

curl --cacert cert/prom-cert.pem https://localhost:16001/metrics\n\n# or use --insecure to tell curl to skip certificate validation\n# curl --insecure https://localhost:16001/metrics\n
      "},{"location":"prometheus-exporter/#configure-prometheus-to-use-tls","title":"Configure Prometheus to use TLS","text":"

Let's configure Prometheus to use HTTPS to communicate with the exporter set up above.

Edit your prometheus.yml and add or adapt your scrape_configs job. You need to add scheme: https and set up a tls_config block that points to the prom-cert.pem created earlier, like so:

      scrape_configs:\n- job_name: 'harvest-https'\nscheme: https\ntls_config:\nca_file: /path/to/prom-cert.pem\nstatic_configs:\n- targets:\n- 'localhost:16001'\n

      Start Prometheus and visit http://localhost:9090/targets with your browser. You should see https://localhost:16001/metrics in the list of targets.

      "},{"location":"prometheus-exporter/#prometheus-alerts","title":"Prometheus Alerts","text":"

Prometheus includes out-of-the-box support for simple alerting. Alert rules are defined in rule files that are referenced from your prometheus.yml. Setup and details can be found in the Prometheus guide on alerting.
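As a sketch, a simple alerting rule might look like the following (the metric name and threshold are illustrative only):

groups:\n- name: harvest.rules\nrules:\n- alert: VolumeNearlyFull\nexpr: volume_size_used_percent > 90\nfor: 15m\nlabels:\nseverity: warning\n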

Harvest also includes EMS alerts and sample alerts for reference. Refer to the EMS Collector documentation for more details about EMS events.

      "},{"location":"prometheus-exporter/#alertmanager","title":"Alertmanager","text":"

Prometheus's built-in alerts are good for simple workflows. They do a nice job of telling you what's happening at the moment. If you need a richer solution that includes summarization, notification, advanced delivery, deduplication, etc., check out Alertmanager.

      "},{"location":"prometheus-exporter/#reference","title":"Reference","text":"
      • Prometheus Alerting
      • Alertmanager
      • Alertmanager's notification metrics
      • Prometheus Linter
      • Collection of example Prometheus Alerts
      "},{"location":"quickstart/","title":"Quickstart","text":""},{"location":"quickstart/#1-install-harvest","title":"1. Install Harvest","text":"

Harvest is distributed as a container, tarball, RPM, and Deb. Pick the one that works best for you. More details can be found in the installation documentation.

      "},{"location":"quickstart/#2-configuration-file","title":"2. Configuration file","text":"

      Harvest's configuration information is defined in harvest.yml. There are a few ways to tell Harvest how to load this file:

      • If you don't use the --config flag, the harvest.yml file located in the current working directory will be used

      • If you specify the --config flag like so harvest status --config /opt/harvest/harvest.yml, Harvest will use that file

      To start collecting metrics, you need to define at least one poller and one exporter in your configuration file. The default configuration comes with a pre-configured poller named unix which collects metrics from the local system. This is useful if you want to monitor resource usage by Harvest and serves as a good example. Feel free to delete it if you want.

      The next step is to add pollers for your ONTAP clusters in the Pollers section of the Harvest configuration file, harvest.yml.
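As a sketch, a minimal poller entry might look like this (the cluster name, address, and credentials are placeholders; the exporter my_prom must be defined in your Exporters section):

Pollers:\ncluster-01:\ndatacenter: dc-1\naddr: 10.0.0.1\nauth_style: password\nusername: admin\npassword: pass\ncollectors:\n- Zapi\n- ZapiPerf\nexporters:\n- my_prom\n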

      "},{"location":"quickstart/#3-start-harvest","title":"3. Start Harvest","text":"

      Start all Harvest pollers as daemons:

      bin/harvest start\n

Or start specific pollers. In this case, we're starting two pollers named jamaica and grenada.

bin/harvest start jamaica grenada\n

      Replace jamaica and grenada with the poller names you defined in harvest.yml. The logs of each poller can be found in /var/log/harvest/.

      "},{"location":"quickstart/#4-import-grafana-dashboards","title":"4. Import Grafana dashboards","text":"

      The Grafana dashboards are located in the $HARVEST_HOME/grafana directory. You can manually import the dashboards or use the bin/harvest grafana command (more documentation).
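For example, a hedged sketch of a CLI import (the Grafana address is a placeholder; run bin/harvest grafana import --help to confirm the flags for your version):

bin/harvest grafana import --addr localhost:3000\n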

      Note: the current dashboards specify Prometheus as the datasource. If you use the InfluxDB exporter, you will need to create your own dashboards.

      "},{"location":"quickstart/#5-verify-the-metrics","title":"5. Verify the metrics","text":"

If you use a Prometheus exporter, open a browser and navigate to http://0.0.0.0:12990/ (replace 12990 with the port number of your poller). This is the HTTP end-point Harvest creates for your Prometheus exporter. This page provides a real-time generated list of running collectors and the names of exported metrics.

The metric data that is exported for Prometheus to scrape is available at http://0.0.0.0:12990/metrics/.
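You can also check the end-point from the command line, for example (the port is a placeholder for your poller's Prometheus port):

curl -s http://localhost:12990/metrics | head\n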

      More information on configuring the exporter can be found in the Prometheus exporter documentation.

      If you can't access the URL, check the logs of your pollers. These are located in /var/log/harvest/.

      "},{"location":"quickstart/#6-optional-setup-systemd-service-files","title":"6. (Optional) Setup Systemd service files","text":"

      If you're running Harvest on a system with Systemd, you may want to take advantage of systemd instantiated units to manage your pollers.

      "},{"location":"release-notes/","title":"Release Notes","text":"
      • Changelog
      • Releases
      "},{"location":"system-requirements/","title":"System Requirements","text":"

      Harvest is written in Go, which means it runs on recent Linux systems. It also runs on Macs for development.

Hardware requirements depend on how many clusters you monitor and the number of metrics you choose to collect. With the default configuration, when monitoring 10 clusters, we recommend:

      • CPU: 2 cores
      • Memory: 1 GB
      • Disk: 500 MB (mostly used by log files)

      Harvest is compatible with:

      • Prometheus: 2.26 or higher
      • InfluxDB: v2
      • Grafana: 8.1.X or higher
      • Docker: 20.10.0 or higher and compatible Docker Compose
      "},{"location":"upgrade/","title":"Upgrade","text":"

      To upgrade Harvest

Stop Harvest:

      cd <existing harvest directory>\nbin/harvest stop\n

      Verify that all pollers have stopped:

      bin/harvest status\nor\npgrep --full '\\-\\-poller'  # should return nothing if all pollers are stopped\n

      Follow the installation instructions to download and install Harvest and then copy your old harvest.yml into the new install directory like so:

      cp /path/to/old/harvest/harvest.yml /path/to/new/harvest.yml\n

After upgrading, re-import all dashboards (either with the bin/harvest grafana import CLI or via the Grafana UI) to pick up any new dashboard enhancements.

      "},{"location":"architecture/rest-collector/","title":"REST collector","text":""},{"location":"architecture/rest-collector/#status","title":"Status","text":"

      ~~Accepted~~ Superseded by REST strategy

The exact version of ONTAP that has full ZAPI parity is subject to change. Everywhere you see version 9.12, it may become 9.13 or later.

      "},{"location":"architecture/rest-collector/#context","title":"Context","text":"

We need to document and communicate to customers:
• when they should switch from the ZAPI collectors to the REST ones
• what versions of ONTAP are supported by Harvest's REST collectors
• how to fill ONTAP gaps between the ZAPI and REST APIs

      The ONTAP version information is important because gaps are addressed in later versions of cDOT.

      "},{"location":"architecture/rest-collector/#considered-options","title":"Considered Options","text":"
      1. Only REST A clean cut-over, stop using ZAPI, and switch completely to REST.

      2. Both Support both ZAPI and REST collectors running at the same time, collecting the same objects. Flexible, but has the downside of last-write wins. Not recommended unless you selectively pick non-overlapping sets of objects.

      3. Template change that supports both Change the template to break ties, priority, etc. Rejected because additional complexity not worth the benefits.

      4. private-cli When there are REST gaps that have not been filled yet or will never be filled (WONTFIX), the Harvest REST collector will provide infrastructure and documentation on how to use private-cli pass-through to address gaps.

      "},{"location":"architecture/rest-collector/#chosen-decision","title":"Chosen Decision","text":"

      For clusters with ONTAP versions < 9.12, we recommend customers use the ZAPI collectors. (#2) (#4)

      Once ONTAP 9.12+ is released and customers have upgraded to it, they should make a clean cut-over to the REST collectors (#1). ONTAP 9.12 is the version of ONTAP that has the best parity with what Harvest collects in terms of config and performance counters. Harvest REST collectors, templates, and dashboards are validated against ONTAP 9.12+. Most of the REST config templates will work before 9.12, but unless you have specific needs, we recommend sticking with the ZAPI collectors until you upgrade to 9.12.

      There is little value in running both the ZAPI and REST collectors for an overlapping set of objects. It's unlikely you want to collect the same object via REST and ZAPI at the same time. Harvest doesn't support this use-case, but does nothing to detect or prevent it.

      If you want to collect a non-overlapping set of objects with REST and ZAPI, you can. If you do, we recommend you disable the ZAPI object collector. For example, if you enable the REST disk template, you should disable the ZAPI disk template. We do NOT recommend collecting an overlapping set of objects with both collectors since the last one to run will overwrite previously collected data.

      Harvest will document how to use the REST private cli pass-through to collect custom and non-public counters.

The Harvest team recommends that customers open ONTAP issues for REST public API gaps that need to be filled.

      "},{"location":"architecture/rest-collector/#consequences","title":"Consequences","text":"

The Harvest REST collectors will work, with limitations, on earlier versions of ONTAP. ONTAP 9.12+ is the minimally validated version. We only validate the full set of templates, dashboards, counters, etc. on ONTAP 9.12+.

      Harvest does not prevent you from collecting the same resource with ZAPI and REST.

      "},{"location":"architecture/rest-strategy/","title":"REST Strategy","text":""},{"location":"architecture/rest-strategy/#status","title":"Status","text":"

      Accepted

      "},{"location":"architecture/rest-strategy/#context","title":"Context","text":"

      ONTAP has published a customer product communiqu\u00e9 (CPC-00410) announcing that ZAPIs will reach end of availability (EOA) in ONTAP 9.13.1 released Q2 2023.

      This document describes how Harvest handles the ONTAP transition from ZAPI to REST. In most cases, no action is required on your part.

      "},{"location":"architecture/rest-strategy/#harvest-api-transition","title":"Harvest API Transition","text":"

      Harvest tries to use the protocol you specify in your harvest.yml config file.

      When specifying the ZAPI collector, Harvest will use the ZAPI protocol unless the cluster no longer speaks Zapi, in which case, Harvest will switch to REST.

      If you specify the REST collector, Harvest will use the REST protocol.

      Harvest includes a full set of REST templates that export identical metrics as the included ZAPI templates. No changes to dashboards or downstream metric-consumers should be required. See below if you have added metrics to the Harvest out-of-the-box templates.

      Read on if you want to know how you can use REST sooner, or you want to take advantage of REST-only features in ONTAP.

      "},{"location":"architecture/rest-strategy/#frequently-asked-questions","title":"Frequently Asked Questions","text":""},{"location":"architecture/rest-strategy/#how-does-harvest-decide-whether-to-use-rest-or-zapi-apis","title":"How does Harvest decide whether to use REST or ZAPI APIs?","text":"

      Harvest attempts to use the collector defined in your harvest.yml config file.

      • If you specify the ZAPI collector, Harvest will use the ZAPI protocol as long as the cluster still speaks Zapi. If the cluster no longer understands Zapi, Harvest will switch to Rest.

      • If you specify the REST collector, Harvest will use REST.

      Earlier versions of Harvest included a prefer_zapi poller option and a HARVEST_NO_COLLECTOR_UPGRADE environment variable. Both of these options are ignored in Harvest versions 23.08 onwards.

      "},{"location":"architecture/rest-strategy/#why-would-i-switch-to-rest-before-9131","title":"Why would I switch to REST before 9.13.1?","text":"
      • You have advanced use cases to validate before ONTAP removes ZAPIs
      • You want to take advantage of new ONTAP features that are only available via REST (e.g., cloud features, event remediation, name services, cluster peers, etc.)
      • You want to collect a metric that is not available via ZAPI
      • You want to collect a metric from the ONTAP CLI. The REST API includes a private CLI pass-through to access any ONTAP CLI command
      "},{"location":"architecture/rest-strategy/#can-i-start-using-rest-before-9131","title":"Can I start using REST before 9.13.1?","text":"

      Yes. Many customers do. Be aware of the following limitations:

      1. ONTAP includes a subset of performance counters via REST beginning in ONTAP 9.11.1.
      2. There may be performance metrics missing from versions of ONTAP earlier than 9.11.1.

      Where performance metrics are concerned, because of point #2, our recommendation is to wait until at least ONTAP 9.12.1 before switching to the RestPerf collector. You can continue using the ZapiPerf collector until you switch.

      "},{"location":"architecture/rest-strategy/#a-counter-is-missing-from-rest-what-do-i-do","title":"A counter is missing from REST. What do I do?","text":"

      The Harvest team has ensured that all the out-of-the-box ZAPI templates have matching REST templates with identical metrics as of Harvest 22.11 and ONTAP 9.12.1. Any additional ZAPI Perf counters you have added may be missing from ONTAP REST Perf.

      Join the Harvest discord channel and ask us about the counter. Sometimes we may know which release the missing counter is coming in, otherwise we can point you to the ONTAP process to request new counters.

      "},{"location":"architecture/rest-strategy/#can-i-use-the-rest-and-zapi-collectors-at-the-same-time","title":"Can I use the REST and ZAPI collectors at the same time?","text":"

      Yes. Harvest ensures that duplicate resources are not collected from both collectors.

      When there is potential duplication, Harvest first resolves the conflict in the order collectors are defined in your poller and then negotiates with the cluster on the most appropriate API to use per above.

      Let's take a look at a few examples using the following poller definition:

      cluster-1:\ndatacenter: dc-1\naddr: 10.1.1.1\ncollectors:\n- Zapi\n- Rest\n
• When cluster-1 is running ONTAP 9.9.X (ONTAP still supports ZAPIs), the Zapi collector will be used since it is listed first in the list of collectors. When collecting a REST-only resource like nfs_client, the Rest collector will be used since nfs_client objects are only available via REST.

      • When cluster-1 is running ONTAP 9.18.1 (ONTAP no longer supports ZAPIs), the Rest collector will be used since ONTAP can no longer speak the ZAPI protocol.

      If you want the REST collector to be used in all cases, change the order in the collectors section so Rest comes before Zapi.

      If the resource does not exist for the first collector, the next collector will be tried. Using the example above, when collecting VolumeAnalytics resources, the Zapi collector will not run for VolumeAnalytics objects since that resource is only available via REST. The Rest collector will run and collect the VolumeAnalytics objects.

      "},{"location":"architecture/rest-strategy/#ive-added-counters-to-existing-zapi-templates-will-those-counters-work-in-rest","title":"I've added counters to existing ZAPI templates. Will those counters work in REST?","text":"

      ZAPI config metrics often have a REST equivalent that can be found in ONTAP's ONTAPI to REST mapping document.

      ZAPI performance metrics may be missing in REST. If you have added new metrics or templates to the ZapiPerf collector, those metrics likely aren't available via REST. You can check if the performance counter is available, ask the Harvest team on Discord, or ask ONTAP to add the counter you need.

      "},{"location":"architecture/rest-strategy/#reference","title":"Reference","text":"

      Table of ONTAP versions, dates and API notes.

• 9.11.1 (Q2 2022): First version of ONTAP with REST performance metrics
• 9.12.1 (Q4 2022): ZAPIs still supported. REST performance metrics have parity with Harvest 22.11 collected ZAPI performance metrics
• 9.13.1: ZAPIs still supported
• 9.14.1-9.15.1: ZAPIs enabled if an ONTAP upgrade detects they were being used earlier. New ONTAP installs default to REST only. ZAPIs may be enabled via CLI
• 9.16.1-9.17.1: ZAPIs disabled. See ONTAP communique for details on re-enabling
• 9.18.1: ZAPIs removed. No way to re-enable
"},{"location":"help/faq/","title":"FAQ","text":""},{"location":"help/faq/#how-do-i-migrate-from-harvest-16-to-20","title":"How do I migrate from Harvest 1.6 to 2.0?","text":"

There currently is no tool to migrate data from Harvest 1.6 to 2.0. The most common workaround is to run both 1.6 and 2.0 in parallel until the 1.6 data expires due to normal retention policy, and then fully cut over to 2.0.

Technically, it's possible to take a Graphite DB, extract the data, and send it to a Prometheus DB, but it's not an area we've invested in. If you want to explore that option, check out promtool, which supports importing, but it's probably not worth the effort.

      "},{"location":"help/faq/#how-do-i-share-sensitive-log-files-with-netapp","title":"How do I share sensitive log files with NetApp?","text":"

      Email them to ng-harvest-files@netapp.com This mail address is accessible to NetApp Harvest employees only.

      "},{"location":"help/faq/#multi-tenancy","title":"Multi-tenancy","text":""},{"location":"help/faq/#question","title":"Question","text":"

      Is there a way to allow per SVM level user views? I need to offer 1 tenant per SVM. Can I limit visibility to specific SVMs? Is there an SVM dashboard available?

      "},{"location":"help/faq/#answer","title":"Answer","text":"

      You can do this with Grafana. Harvest can provide the labels for SVMs. The pieces are there but need to be put together.

Grafana templates support the $__user variable to make pre-selections and decisions. You can use that variable plus metadata that maps each user to their SVMs. With both of those, you can build SVM-specific dashboards.

      There is a German service provider who is doing this. They have service managers responsible for a set of customers \u2013 and only want to see the data/dashboards of their corresponding customers.

      "},{"location":"help/faq/#harvest-authentication-and-permissions","title":"Harvest Authentication and Permissions","text":""},{"location":"help/faq/#question_1","title":"Question","text":"

      What permissions does Harvest need to talk to ONTAP?

      "},{"location":"help/faq/#answer_1","title":"Answer","text":"

      Permissions, authentication, role based security, and creating a Harvest user are covered here.

      "},{"location":"help/faq/#ontap-counters-are-missing","title":"ONTAP counters are missing","text":""},{"location":"help/faq/#question_2","title":"Question","text":"

      How do I make Harvest collect additional ONTAP counters?

      "},{"location":"help/faq/#answer_2","title":"Answer","text":"

      Instead of modifying the out-of-the-box templates in the conf/ directory, it is better to create your own custom templates following these instructions.

      "},{"location":"help/faq/#capacity-metrics","title":"Capacity Metrics","text":""},{"location":"help/faq/#question_3","title":"Question","text":"

      How are capacity and other metrics calculated by Harvest?

      "},{"location":"help/faq/#answer_3","title":"Answer","text":"

Each collector has its own way of collecting and post-processing metrics. Check the documentation of each individual collector (usually under the #Metrics section). Capacity and hardware-related metrics are collected by the Zapi collector, which emits metrics as-is without any additional calculation. Performance metrics are collected by the ZapiPerf collector, and the final values are calculated from the delta of two consecutive polls.

      "},{"location":"help/faq/#tagging-volumes","title":"Tagging Volumes","text":""},{"location":"help/faq/#question_4","title":"Question","text":"

      How do I tag ONTAP volumes with metadata and surface that data in Harvest?

      "},{"location":"help/faq/#answer_4","title":"Answer","text":"

      See volume tagging issue and volume tagging via sub-templates

      "},{"location":"help/faq/#rest-and-zapi-documentation","title":"REST and Zapi Documentation","text":""},{"location":"help/faq/#question_5","title":"Question","text":"

      How do I relate ONTAP REST endpoints to ZAPI APIs and attributes?

      "},{"location":"help/faq/#answer_5","title":"Answer","text":"

      Please refer to the ONTAPI to REST API mapping document.

      "},{"location":"help/faq/#sizing","title":"Sizing","text":"

      How much disk space is required by Prometheus?

This depends on the collectors you've added, the number of nodes monitored, the cardinality of labels, the number of instances, retention, ingest rate, etc. A good approximation is to curl your Harvest exporter, count the number of samples it publishes, and then feed that information into a Prometheus sizing formula.
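For example, a rough way to count the samples a single poller publishes (the port is a placeholder for your poller's Prometheus port):

curl -s http://localhost:12990/metrics | grep -c '^[^#]'\n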

      Prometheus stores an average of 1-2 bytes per sample. To plan the capacity of a Prometheus server, you can use the rough formula: needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample

A rough approximation is outlined at https://devops.stackexchange.com/questions/9298/how-to-calculate-disk-space-required-by-prometheus-v2-2
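As a worked example with illustrative numbers (30 days of retention, 3,000 ingested samples per second, 2 bytes per sample):

retention_time_seconds = 30 * 86400 = 2,592,000\nneeded_disk_space = 2,592,000 * 3,000 * 2 bytes = ~15.5 GB\n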

      "},{"location":"help/faq/#topk-usage-in-grafana","title":"Topk usage in Grafana","text":""},{"location":"help/faq/#question_6","title":"Question","text":"

      In Grafana, why do I see more results from topk than I asked for?

      "},{"location":"help/faq/#answer_6","title":"Answer","text":"

      Topk is one of Prometheus's out-of-the-box aggregation operators, and is used to calculate the largest k elements by sample value.
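For example, a query like the following (the metric name is only an illustration) returns the five series with the largest value at each evaluation step:

topk(5, volume_read_ops)\n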

Depending on the time range you select, Prometheus will often return more results than you asked for. That's because Prometheus picks the topk at each timestamp in the graph. In other words, different time series are in the topk at different times, so when you use a large duration, there are often many time series in total.

      This is a limitation of Prometheus and can be mitigated by:

      • reducing the time range to a smaller duration that includes fewer topk results - something like a five to ten minute range works well for most of Harvest's charts
      • the panel's table shows the current topk rows and that data can be used to supplement the additional series shown in the charts

      Additional details: here, here, and here

      "},{"location":"help/faq/#where-are-harvest-container-images-published","title":"Where are Harvest container images published?","text":"

      Harvest images are published to both NetApp's (cr.netapp.io) and Docker's (hub.docker.com) image registry. By default, cr.netapp.io is used.

      "},{"location":"help/faq/#how-do-i-switch-from-dockerhub-to-netapps-image-registry-crnetappio-or-vice-versa","title":"How do I switch from DockerHub to NetApp's image registry (cr.netapp.io) or vice-versa?","text":""},{"location":"help/faq/#answer_7","title":"Answer","text":"

      Replace all instances of rahulguptajss/harvest:latest with cr.netapp.io/harvest:latest

• Edit your docker-compose file and make those replacements, or regenerate the compose file using the --image cr.netapp.io/harvest:latest option

      • Update any shell or Ansible scripts you have that are also using those images

      • After making these changes, you should stop your containers, pull new images, and restart.

      You can verify that you're using the cr.netapp.io images like so:

      Before

      docker image ls -a\nREPOSITORY              TAG       IMAGE ID       CREATED        SIZE\nrahulguptajss/harvest   latest    80061bbe1c2c   10 days ago    85.4MB <=== no prefix in the repository \nprom/prometheus         v2.33.1   e528f02c45a6   3 weeks ago    204MB       column means from DockerHub\ngrafana/grafana         8.3.4     4a34578e4374   5 weeks ago    274MB\n

      Pull image from cr.netapp.io

      docker pull cr.netapp.io/harvest\nUsing default tag: latest\nlatest: Pulling from harvest\nDigest: sha256:6ff88153812ebb61e9dd176182bf8a792cde847748c5654d65f4630e61b1f3ae\nStatus: Image is up to date for cr.netapp.io/harvest:latest\ncr.netapp.io/harvest:latest\n

Notice that the IMAGE ID for both images is identical since the images are the same.

      docker image ls -a\nREPOSITORY              TAG       IMAGE ID       CREATED        SIZE\ncr.netapp.io/harvest    latest    80061bbe1c2c   10 days ago    85.4MB  <== Harvest image from cr.netapp.io\nrahulguptajss/harvest   latest    80061bbe1c2c   10 days ago    85.4MB\nprom/prometheus         v2.33.1   e528f02c45a6   3 weeks ago    204MB\ngrafana/grafana         8.3.4     4a34578e4374   5 weeks ago    274MB\ngrafana/grafana         latest    1d60b4b996ad   2 months ago   275MB\nprom/prometheus         latest    c10e9cbf22cd   3 months ago   194MB\n

      We can now remove the DockerHub pulled image

      docker image rm rahulguptajss/harvest\nUntagged: rahulguptajss/harvest:latest\nUntagged: rahulguptajss/harvest@sha256:6ff88153812ebb61e9dd176182bf8a792cde847748c5654d65f4630e61b1f3ae\n\ndocker image ls -a\nREPOSITORY             TAG       IMAGE ID       CREATED        SIZE\ncr.netapp.io/harvest   latest    80061bbe1c2c   10 days ago    85.4MB\nprom/prometheus        v2.33.1   e528f02c45a6   3 weeks ago    204MB\ngrafana/grafana        8.3.4     4a34578e4374   5 weeks ago    274MB\n
      "},{"location":"help/faq/#ports","title":"Ports","text":""},{"location":"help/faq/#what-ports-does-harvest-use","title":"What ports does Harvest use?","text":""},{"location":"help/faq/#answer_8","title":"Answer","text":"

      The default ports are shown in the following diagram.

      • Harvest's pollers use ZAPI or REST to communicate with ONTAP on port 443
      • Each poller exposes the Prometheus port defined in your harvest.yml file
      • Prometheus scrapes each poller-exposed Prometheus port (promPort1, promPort2, promPort3)
      • Prometheus's default port is 9090
      • Grafana's default port is 3000
      "},{"location":"help/faq/#snapmirror_labels","title":"Snapmirror_labels","text":""},{"location":"help/faq/#why-do-my-snapmirror_labels-have-an-empty-source_node","title":"Why do my snapmirror_labels have an empty source_node?","text":""},{"location":"help/faq/#answer_9","title":"Answer","text":"

Snapmirror relationships have a source and a destination node. ONTAP, however, does not expose the source side of that relationship; only the destination side is returned via the ZAPI/REST APIs. Because of that, the Prometheus metric named snapmirror_labels will have an empty source_node label.

      The dashboards show the correct value for source_node since we join multiple metrics in the Grafana panels to synthesize that information.

In short: don't rely on snapmirror_labels for source_node labels. If you need source_node, you will need to do a join similar to the one the Snapmirror dashboard does.

      See https://github.com/NetApp/harvest/issues/1192 for more information and linked pull requests for REST and ZAPI.

      "},{"location":"help/faq/#nfs-clients-dashboard","title":"NFS Clients Dashboard","text":""},{"location":"help/faq/#why-do-my-nfs-clients-dashboard-have-no-data","title":"Why do my NFS Clients Dashboard have no data?","text":""},{"location":"help/faq/#answer_10","title":"Answer","text":"

The NFS Clients dashboard is only available via the REST collector; this information is not available through ZAPI. You must enable the REST collector in your harvest.yml config and uncomment the nfs_clients.yaml section in your default.yaml file.

      Note: Enabling nfs_clients.yaml may slow down data collection.

      "},{"location":"help/faq/#file-analytics-dashboard","title":"File Analytics Dashboard","text":""},{"location":"help/faq/#why-do-my-file-analytics-dashboard-have-no-data","title":"Why do my File Analytics Dashboard have no data?","text":""},{"location":"help/faq/#answer_11","title":"Answer","text":"

      This dashboard requires ONTAP 9.8+ and the APIs are only available via REST. Please enable the REST collector in your harvest config. To collect and display usage data such as capacity analytics, you need to enable File System Analytics on a volume. Please see https://docs.netapp.com/us-en/ontap/task_nas_file_system_analytics_enable.html for more details.

      "},{"location":"help/faq/#why-do-i-have-volume-sis-stat-panel-empty-in-volume-dashboard","title":"Why do I have Volume Sis Stat panel empty in Volume dashboard?","text":""},{"location":"help/faq/#answer_12","title":"Answer","text":"

      This panel requires ONTAP 9.12+ and the APIs are only available via REST. Enable the REST collector in your harvest.yml config.

      "},{"location":"help/log-collection/","title":"Harvest Logs Collection Guide","text":"

      This guide will help you collect Harvest logs on various platforms. Follow the instructions specific to your platform. If you would like to share the collected logs with the Harvest team, please email them to ng-harvest-files@netapp.com.

      "},{"location":"help/log-collection/#rpm-deb-and-native-installations","title":"RPM, DEB, and Native Installations","text":"

      For RPM, DEB, and native installations, use the following command to create a compressed tar file containing the logs:

      tar -czvf harvest_logs.tar.gz -C /var/log harvest\n

      This command will create a file named harvest_logs.tar.gz with the contents of the /var/log/harvest directory.

      "},{"location":"help/log-collection/#docker-container","title":"Docker Container","text":"

      For Docker containers, first, identify the container ID for your Harvest instance. Then, replace <container_id> with the actual container ID in the following command:

      docker logs <container_id> &> harvest_logs.txt && tar -czvf harvest_logs.tar.gz harvest_logs.txt\n

      This command will create a file named harvest_logs.tar.gz containing the logs from the specified container.

      "},{"location":"help/log-collection/#nabox","title":"NABox","text":"

      For NABox installations, ssh into your nabox instance, and use the following command to create a compressed tar file containing the logs:

      dc logs nabox-api > nabox-api.log; dc logs nabox-harvest2 > nabox-harvest2.log;\\\ntar -czf nabox-logs-`date +%Y-%m-%d_%H:%M:%S`.tgz *\n

      This command will create a file named nabox-logs-$date.tgz containing the nabox-api and Harvest poller logs.

      For more information, see the NABox documentation on collecting logs

      "},{"location":"help/troubleshooting/","title":"Checklists for Harvest","text":"

      A set of steps to go through when something goes wrong.

      "},{"location":"help/troubleshooting/#what-version-of-ontap-do-you-have","title":"What version of ONTAP do you have?","text":"

Run the following, replacing <poller> with the poller from your harvest.yml

      ./bin/harvest zapi -p <poller> show system\n

      Copy and paste the output into your issue. Here's an example:

./bin/harvest zapi -p infinity show system\nconnected to infinity (NetApp Release 9.8P2: Tue Feb 16 03:49:46 UTC 2021)\n[results]                             -                                   *\n  [build-timestamp]                   -                          1613447386\n  [is-clustered]                      -                                true\n  [version]                           - NetApp Release 9.8P2: Tue Feb 16 03:49:46 UTC 2021\n  [version-tuple]                     -                                   *\n    [system-version-tuple]            -                                   *\n      [generation]                    -                                   9\n      [major]                         -                                   8\n      [minor]                         -                                   0\n

      "},{"location":"help/troubleshooting/#install-fails","title":"Install fails","text":"

      I tried to install and ...

      "},{"location":"help/troubleshooting/#how-do-i-tell-if-harvest-is-doing-anything","title":"How do I tell if Harvest is doing anything?","text":"

      You believe Harvest is installed fine, but it's not working.

      • Post the contents of your harvest.yml

      Try validating your harvest.yml with yamllint like so: yamllint -d relaxed harvest.yml If you do not have yamllint installed, look here.

      There should be no errors - warnings like the following are fine:

      harvest.yml\n  64:1      warning  too many blank lines (3 > 0)  (empty-lines)\n

      • How did you start Harvest?

      • What do you see in /var/log/harvest/*

      • What does ps aux | grep poller show?

      • If you are using Prometheus, try hitting Harvest's Prometheus endpoint like so:

      curl http://machine-this-is-running-harvest:prometheus-port-in-harvest-yaml/metrics

      • Check file ownership (user/group) and file permissions of your templates, executable, etc in your Harvest home directory (ls -la /opt/harvest/) See also.
      "},{"location":"help/troubleshooting/#how-do-i-start-harvest-in-debug-mode","title":"How do I start Harvest in debug mode?","text":"

      Use the --debug flag when starting a poller. In debug mode, the poller will only collect metrics, but not write to databases. Another useful flag is --foreground, in which case all log messages are written to the terminal. Note that you can only start one poller in foreground mode.

      Finally, you can use --loglevel=1 or --verbose, if you want to see a lot of log messages. For even more, you can use --loglevel=0 or --trace.

      Examples:

      bin/harvest start $POLLER_NAME --foreground --debug --loglevel=0\nor\nbin/harvest start $POLLER_NAME --loglevel=1 --collectors Zapi --objects Qtree\n
      "},{"location":"help/troubleshooting/#how-do-i-start-harvest-in-foreground-mode","title":"How do I start Harvest in foreground mode?","text":"

      See How do I start Harvest in debug mode?

      "},{"location":"help/troubleshooting/#how-do-i-start-my-poller-with-only-one-collector","title":"How do I start my poller with only one collector?","text":"

      Since a poller will start a large number of collectors (each collector-object pair is treated as a collector), it is often hard to find the issue you are looking for in the abundance of log messages. It might be therefore useful to start one single collector-object pair when troubleshooting. You can use the --collectors and --objects flags for that. For example, start only the ZapiPerf collector with the SystemNode object:

      harvest start my_poller --collectors ZapiPerf --objects SystemNode

(To find the correct object name, check the conf/COLLECTOR/default.yaml file of the collector.)

      "},{"location":"help/troubleshooting/#errors-in-the-log-file","title":"Errors in the log file","text":""},{"location":"help/troubleshooting/#some-of-my-clusters-are-not-showing-up-in-grafana","title":"Some of my clusters are not showing up in Grafana","text":"

      The logs show these errors:

      context deadline exceeded (Client.Timeout or context cancellation while reading body)\n\nand then for each volume\n\nskipped instance [9c90facd-3730-48f1-b55c-afacc35c6dbe]: not found in cache\n

      "},{"location":"help/troubleshooting/#workarounds","title":"Workarounds","text":"

      context deadline exceeded (Client.Timeout or context cancellation while reading body)

      means Harvest is timing out when talking to your cluster. This sometimes happens when you have a large number of resources (e.g. volumes).

There are a few parameters that you can change to prevent this from happening. You can do this by editing the subtemplate of the affected resource. E.g., you can add the parameters in conf/zapiperf/cdot/9.8.0/volume.yaml or conf/zapi/cdot/9.8.0/volume.yaml. If the errors happen for most of the resources, you can add them in the main template of the collector (conf/zapi/default.yaml or conf/zapiperf/default.yaml) to apply them to all objects.

      "},{"location":"help/troubleshooting/#client_timeout","title":"client_timeout","text":"

      Increase the client_timeout value by adding a client_timeout line at the beginning of the template, like so:

      # increase the timeout to 1 minute\nclient_timeout: 1m\n
      "},{"location":"help/troubleshooting/#batch_size","title":"batch_size","text":"

      Decrease the batch_size value by adding a batch_size line at the beginning of the template. The default value of this parameter is 500. By decreasing it, the collector will fetch less instances during each API request. Example:

      # decrease number of instances to 200 for each API request\nbatch_size: 200\n
      "},{"location":"help/troubleshooting/#schedule","title":"schedule","text":"

      If nothing else helps, you can increase the data poll interval of the collector (default is 1m for ZapiPerf and 3m for Zapi). You can do this either by adding a schedule attribute to the template or, if it already exists, by changing the - data line.

      Example for ZapiPerf:

      # increase data poll frequency to 2 minutes\nschedule:\n- counter: 20m\n- instance: 10m\n- data: 2m\n
      Example for Zapi:

      # increase data poll frequency to 5 minutes\nschedule:\n- instance: 10m\n- data: 5m\n
      "},{"location":"help/troubleshooting/#prometheus-http-service-discovery-doesnt-work","title":"Prometheus HTTP Service Discovery doesn't work","text":"

      Some things to check:

      • Make sure the Harvest admin node is started via bin/harvest admin start and there are no errors printed to the console
      • Make sure your harvest.yml includes a valid Admin: section
      • Ensure bin/harvest doctor runs without error. If it does, include the output of bin/harvest doctor --print in Slack or your GitHub issue
      • Ensure your /etc/prometheus/prometheus.yml has a scrape config with http_sd_configs and it points to the admin node's ip:port
      • Ensure there are no errors in your poller logs (/var/log/harvest) related to the poller publishing its Prometheus port to the admin node. Something like this should help narrow it down: grep -R -E \"error.*poller.go\" /var/log/harvest/
        • If you see errors like dial udp 1.1.1.1:80: connect: network is unreachable, make sure your machine has a default route setup for your main interface
      • If the admin node is running, your harvest.yml includes the Admin: section, and your pollers are using the Prometheus exporter you should be able to curl the admin node endpoint for a list of running Harvest pollers like this:
        curl -s -k https://localhost:8887/api/v1/sd | jq .\n[\n  {\n    \"targets\": [\n      \":12994\"\n    ],\n    \"labels\": {\n      \"__meta_poller\": \"F2240-127-26\"\n    }\n  },\n  {\n    \"targets\": [\n      \":39000\"\n    ],\n    \"labels\": {\n      \"__meta_poller\": \"simple1\"\n    }\n  }\n]\n
      "},{"location":"help/troubleshooting/#how-do-i-run-harvest-commands-in-nabox","title":"How do I run Harvest commands in NAbox?","text":"

NAbox is a vApp running Alpine Linux and Docker. NAbox runs Harvest as a set of Docker containers. That means that to execute Harvest commands on NAbox, you need to exec into the container using the following commands.

      1. ssh into your NAbox instance

      2. Start bash in the Harvest container

      dc exec nabox-harvest2 bash\n

      You should see no errors and your prompt will change to something like root@nabox-harvest2:/app#

      Below are examples of running Harvest commands against a cluster named umeng-aff300-05-06. Replace with your cluster name as appropriate.

      # inside container\n\n> cat /etc/issue\nDebian GNU/Linux 10 \\n \\l\n\n> cd /netapp-harvest\nbin/harvest version\nharvest version 22.08.0-1 (commit 93db10a) (build date 2022-08-19T09:10:05-0400) linux/amd64\nchecking GitHub for latest... you have the latest \u2713\n\n# harvest.yml is found at /conf/harvest.yml\n\n> bin/zapi --poller umeng-aff300-05-06 show system\nconnected to umeng-aff300-05-06 (NetApp Release 9.9.1P9X3: Tue Apr 19 19:05:24 UTC 2022)\n[results]                                          -                                   *\n  [build-timestamp]                                -                          1650395124\n[is-clustered]                                   -                                true\n[version]                                        - NetApp Release 9.9.1P9X3: Tue Apr 19 19:05:24 UTC 2022\n[version-tuple]                                  -                                   *\n    [system-version-tuple]                         -                                   *\n      [generation]                                 -                                   9\n[major]                                      -                                   9\n[minor]                                      -                                   1\n\nbin/zapi -p umeng-aff300-05-06 show data --api environment-sensors-get-iter --max 10000 > env-sensor.xml\n

      The env-sensor.xml file will be written to the /opt/packages/harvest2 directory on the host.

      If needed, you can scp that file off NAbox and share it with the Harvest team.

      "},{"location":"help/troubleshooting/#rest-collector-auth-errors","title":"Rest Collector Auth errors?","text":"

If you are seeing errors like User is not authorized or not authorized for that command while using the REST collector, follow the steps below to make sure permissions are set correctly.

1. Verify that the user has permissions for the relevant authentication method.

      security login show -vserver ROOT_VSERVER -user-or-group-name harvest2 -application http

2. Verify that the user has read-only permissions to the API.
      security login role show -role harvest2-role\n

3. Verify that an entry is present for the following command.
      vserver services web access show -role harvest2-role -name rest\n

If it is missing, add an entry with the following command:

      vserver services web access create -vserver umeng-aff300-01-02 -name rest -role harvest2-role\n
      "},{"location":"help/troubleshooting/#why-do-i-have-gaps-in-my-dashboards","title":"Why do I have gaps in my dashboards?","text":"

      Here are possible reasons and things to check:

      • Prometheus scrape_interval found via (http://$promIP:9090/config)
      • Prometheus log files
      • Harvest collector scrape interval check your:
        • conf/zapi/default.yaml - default for config is 3m
        • conf/zapiperf/default.yaml - default of perf is 1m
      • Check you poller logs for any errors or lag messages
      • When using VictoriaMetrics, make sure your Prometheus exporter config includes sort_labels: true, since VictoriaMetrics will mark series stale if the label order changes between polls.
      "},{"location":"help/troubleshooting/#nabox","title":"NABox","text":"

      For NABox installations, refer to the NABox documentation on troubleshooting:

      NABox Troubleshooting

      "},{"location":"install/containerd/","title":"Containerized Harvest on Mac using containerd","text":"

      Harvest runs natively on a Mac already. If you need that, git clone and use GOOS=darwin make build.

This page describes how to run Harvest on your Mac in a containerized environment (Compose, K8s, etc.). The documentation below uses Rancher Desktop, but lima works just as well. Keep in mind that both of them are considered alpha; they work, but are still undergoing a lot of change.

      "},{"location":"install/containerd/#setup","title":"Setup","text":"

We're going to:
• Install and start Rancher Desktop
• (Optional) Create the Harvest Docker image by following Harvest's existing documentation
• Generate a Compose file following Harvest's existing documentation
• Concatenate the Prometheus/Grafana compose file with the harvest compose file, since Rancher doesn't support multiple compose files yet
• Fix up the concatenated file
• Start containers

      Under the hood, Rancher is using lima. If you want to skip Rancher and use lima directly that works too.

      "},{"location":"install/containerd/#install-and-start-rancher-desktop","title":"Install and Start Rancher Desktop","text":"

      We'll use brew to install Rancher.

      brew install rancher\n

After Rancher Desktop installs, start it (Cmd + Space, type: Rancher) and wait for it to start a VM and download images. Once everything is started, continue.

      "},{"location":"install/containerd/#create-harvest-docker-image","title":"Create Harvest Docker image","text":"

      You only need to create a new image if you've made changes to Harvest. If you just want to use the latest version of Harvest, skip this step.

These are the same steps outlined in Building Harvest Docker Image, except we replace docker build with nerdctl like so:

      nerdctl build -f container/onePollerPerContainer/Dockerfile -t harvest:latest . --no-cache 
      "},{"location":"install/containerd/#generate-a-harvest-compose-file","title":"Generate a Harvest compose file","text":"

Follow the existing documentation to set up your harvest.yml file

      Create your harvest-compose.yml file like this:

      docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest \\\ngenerate docker full \\\n--output harvest-compose.yml # --image tag, if you built a new image above\n
      "},{"location":"install/containerd/#combine-prometheusgrafana-and-harvest-compose-file","title":"Combine Prometheus/Grafana and Harvest compose file","text":"

Currently, nerdctl compose does not support running with multiple compose files, so we'll concatenate prom-stack.yml and harvest-compose.yml into one file and then fix it up.

      cat prom-stack.yml harvest-compose.yml > both.yml\n\n# jump to line 45 and remove redundant version and services lines (lines 45, 46, 47 should be removed)\n# fix indentation of remaining lines - in vim, starting at line 46\n# Shift V\n# Shift G\n# Shift .\n# Esc\n# Shift ZZ\n
      "},{"location":"install/containerd/#start-containers","title":"Start containers","text":"
      nerdctl compose -f both.yml up -d\n\nnerdctl ps -a\n\nCONTAINER ID    IMAGE                               COMMAND                   CREATED               STATUS    PORTS                       NAMES\nbd7131291960    docker.io/grafana/grafana:latest    \"/run.sh\"                 About a minute ago    Up        0.0.0.0:3000->3000/tcp      grafana\nf911553a14e2    docker.io/prom/prometheus:latest    \"/bin/prometheus --c\u2026\"    About a minute ago    Up        0.0.0.0:9090->9090/tcp      prometheus\n037a4785bfad    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:15007->15007/tcp    poller_simple7_v21.11.0513\n03fb951cfe26    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    59 seconds ago        Up        0.0.0.0:15025->15025/tcp    poller_simple25_v21.11.0513\n049d0d65b434    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:16050->16050/tcp    poller_simple49_v21.11.0513\n0b77dd1bc0ff    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:16067->16067/tcp    poller_u2_v21.11.0513\n1cabd1633c6f    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:15015->15015/tcp    poller_simple15_v21.11.0513\n1d78c1bf605f    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:15062->15062/tcp    poller_sandhya_v21.11.0513\n286271eabc1d    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:15010->15010/tcp    poller_simple10_v21.11.0513\n29710da013d4    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:12990->12990/tcp    poller_simple1_v21.11.0513\n321ae28637b6    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:15020->15020/tcp    poller_simple20_v21.11.0513\n39c91ae54d68    docker.io/library/cbg:latest        \"bin/poller --poller\u2026\"    About a minute ago    Up        0.0.0.0:15053->15053/tcp    poller_simple-53_v21.11.0513\n\nnerdctl logs poller_simple1_v21.11.0513\nnerdctl compose -f both.yml down\n\n# http://localhost:9090/targets   Prometheus\n# http://localhost:3000           Grafana\n# http://localhost:15062/metrics  Poller metrics\n
      "},{"location":"install/containers/","title":"Docker","text":""},{"location":"install/containers/#overview","title":"Overview","text":"

      Harvest is container-ready and supports several deployment options:

      • Stand-up Prometheus, Grafana, and Harvest via Docker Compose. Choose this if you want to hit the ground running. Installation, volume, and network mounts are handled automatically.

      • Stand-up Harvest via Docker Compose. This option offers more flexibility in configuration. Choose this if you only want to run Harvest containers. Since you pick and choose what gets built and how it's deployed, stronger familiarity with containers is recommended.

      • If you prefer Ansible, David Blackwell created an Ansible script that stands up Harvest, Grafana, and Prometheus.

      • Want to run Harvest on a Mac via containerd and Rancher Desktop? We got you covered.

      • K8 Deployment via Kompose

      "},{"location":"install/containers/#docker-compose","title":"Docker Compose","text":"

      This is a quick way to install and get started with Harvest. Follow the four steps below to:

      • Setup Harvest, Grafana, and Prometheus via Docker Compose
      • Harvest dashboards are automatically imported and setup in Grafana with a Prometheus data source
      • A separate poller container is created for each monitored cluster
      • All pollers are automatically added as Prometheus scrape targets
      "},{"location":"install/containers/#setup-harvestyml","title":"Setup harvest.yml","text":"
      • Create a harvest.yml file with your cluster details, below is an example with annotated comments. Modify as needed for your scenario.

      This config is using the Prometheus exporter port_range feature, so you don't have to manage the Prometheus exporter port mappings for each poller.

      Exporters:\n  prometheus1:\n    exporter: Prometheus\n    addr: 0.0.0.0\n    port_range: 2000-2030  # <====== adjust to be greater than equal to the number of monitored clusters\n\nDefaults:\n  collectors:\n    - Zapi\n    - ZapiPerf\n    - EMS\n  use_insecure_tls: true   # <====== adjust as needed to enable/disable TLS checks \n  exporters:\n    - prometheus1\n\nPollers:\n  infinity:                # <====== add your cluster(s) here, they use the exporter defined three lines above\n    datacenter: DC-01\n    addr: 10.0.1.2\n    auth_style: basic_auth\n    username: user\n    password: 123#abc\n  # next cluster ....  \n
      "},{"location":"install/containers/#generate-a-docker-compose-for-your-pollers","title":"Generate a Docker compose for your Pollers","text":"
      • Generate a Docker compose file from your harvest.yml
      docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest \\\ngenerate docker full \\\n--output harvest-compose.yml\n

      By default, the above command uses the harvest configuration file (harvest.yml) located in the current directory. If your harvest config is somewhere else or has a different name, see below.

      What if my harvest configuration file is somewhere else or not named harvest.yml

      Use the following docker run command, updating the HYML variable with the absolute path to your harvest.yml.

      HYML=\"/opt/custom_harvest.yml\"; \\\ndocker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"${HYML}:${HYML}\" \\\nghcr.io/netapp/harvest:latest \\\ngenerate docker full \\\n--output harvest-compose.yml \\\n--config \"${HYML}\"\n

      generate docker full does two things:

      1. Creates a Docker compose file with a container for each Harvest poller defined in your harvest.yml
      2. Creates a matching Prometheus service discovery file for each Harvest poller (located in container/prometheus/harvest_targets.yml). Prometheus uses this file to scrape the Harvest pollers.
      "},{"location":"install/containers/#start-everything","title":"Start everything","text":"

      Bring everything up

      docker-compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n
      "},{"location":"install/containers/#prometheus-and-grafana","title":"Prometheus and Grafana","text":"

      The prom-stack.yml compose file creates a frontend and a backend network. Prometheus and Grafana publish their admin ports on the frontend network and are reachable from the local machine. By default, the Harvest pollers are part of the backend network and also expose their Prometheus web endpoints. If you do not want their endpoints exposed, add the --port=false option to the generate sub-command in the previous step, as shown in the example below.
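
      For example, the same generate command shown earlier with --port=false added (a sketch; all other flags are unchanged):

      docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest \\\ngenerate docker full \\\n--port=false \\\n--output harvest-compose.yml\n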

      "},{"location":"install/containers/#prometheus","title":"Prometheus","text":"

      After bringing up the prom-stack.yml compose file, you can check Prometheus's list of targets at http://IP_OF_PROMETHEUS:9090/targets.

      "},{"location":"install/containers/#grafana","title":"Grafana","text":"

      After bringing up the prom-stack.yml compose file, you can access Grafana at http://IP_OF_GRAFANA:3000.

      You will be prompted to create a new password the first time you log in. Grafana's default credentials are

      username: admin\npassword: admin\n
      "},{"location":"install/containers/#manage-pollers","title":"Manage pollers","text":""},{"location":"install/containers/#how-do-i-add-a-new-poller","title":"How do I add a new poller?","text":"
      1. Add poller to harvest.yml
      2. Regenerate compose file by running harvest generate
      3. Run docker compose up, for example,
      docker-compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n
      "},{"location":"install/containers/#stop-all-containers","title":"Stop all containers","text":"
      docker-compose -f prom-stack.yml -f harvest-compose.yml down\n

      If you encounter the following error message while attempting to stop your Docker containers using docker-compose down:

      Error response from daemon: Conflict. The container name \"/poller-u2\" is already in use by container\n

      This error is likely due to running docker-compose down from a different directory than where you initially ran docker-compose up.

      To resolve this issue, make sure to run the docker-compose down command from the same directory where you ran docker-compose up. This will ensure that Docker can correctly match the container names and IDs with the directory you are working in. Alternatively, you can stop the Harvest, Prometheus, and Grafana containers by using the following command:

      docker ps -aq --filter \"name=prometheus\" --filter \"name=grafana\" --filter \"name=poller-\" | xargs docker stop | xargs docker rm\n

      Note: Deleting or stopping Docker containers does not remove the data stored in Docker volumes.

      "},{"location":"install/containers/#upgrade-harvest","title":"Upgrade Harvest","text":"

      Note: If you want to keep your historical Prometheus data, and you set up your Docker Compose workflow before Harvest 22.11, please read how to migrate your Prometheus volume before continuing with the upgrade steps below.

      To upgrade Harvest:

      1. Retrieve the most recent version of the Harvest Docker image by executing the following command. This is needed since the new version may contain new templates, dashboards, or other files not included in the Docker image.

        docker pull ghcr.io/netapp/harvest\n

      2. Stop all containers

      3. Regenerate your harvest-compose.yml file by running harvest generate. By default, generate will use the latest tag. If you want to upgrade to a nightly build, see the twisty.

        I want to upgrade to a nightly build

        Tell the generate cmd to use a different tag like so:

        docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest:nightly \\\ngenerate docker full \\\n--image ghcr.io/netapp/harvest:nightly \\\n--output harvest-compose.yml\n
      4. Restart your containers using the following:

      docker-compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n
      "},{"location":"install/containers/#building-harvest-docker-image","title":"Building Harvest Docker Image","text":"

      Building a custom Harvest Docker image is only necessary if you require a tailored solution. If your intention is to run Harvest using Docker without any customizations, please refer to the Overview section above.

      docker build -f container/onePollerPerContainer/Dockerfile -t harvest:latest . --no-cache\n
      "},{"location":"install/harvest-containers/","title":"Harvest containers","text":"

      Follow this method if your goal is to establish a separate harvest container for each poller defined in your harvest.yml file. Please note that these containers must be incorporated into your current infrastructure, which might include systems like Prometheus or Grafana.
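
      For example, if you run your own Prometheus, a scrape configuration similar to the sketch below can be used to scrape the poller containers (job name, targets, and ports are illustrative and should match your pollers' exposed ports):

      scrape_configs:\n  - job_name: 'harvest'\n    static_configs:\n      - targets:\n          - 'localhost:12990'   # one target per poller\n          - 'localhost:12991'\n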

      "},{"location":"install/harvest-containers/#setup-harvestyml","title":"Setup harvest.yml","text":"
      • Create a harvest.yml file with your cluster details, below is an example with annotated comments. Modify as needed for your scenario.

      This config is using the Prometheus exporter port_range feature, so you don't have to manage the Prometheus exporter port mappings for each poller.

      Exporters:\n  prometheus1:\n    exporter: Prometheus\n    addr: 0.0.0.0\n    port_range: 2000-2030  # <====== adjust to be greater than equal to the number of monitored clusters\n\nDefaults:\n  collectors:\n    - Zapi\n    - ZapiPerf\n    - EMS\n  use_insecure_tls: true   # <====== adjust as needed to enable/disable TLS checks \n  exporters:\n    - prometheus1\n\nPollers:\n  infinity:                # <====== add your cluster(s) here, they use the exporter defined three lines above\n    datacenter: DC-01\n    addr: 10.0.1.2\n    auth_style: basic_auth\n    username: user\n    password: 123#abc\n  # next cluster ....  \n
      "},{"location":"install/harvest-containers/#generate-a-docker-compose-for-your-pollers","title":"Generate a Docker compose for your Pollers","text":"
      • Generate a Docker compose file from your harvest.yml
      docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest \\\ngenerate docker \\\n--output harvest-compose.yml\n
      "},{"location":"install/harvest-containers/#start-everything","title":"Start everything","text":"

      Bring everything up

      docker-compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n
      "},{"location":"install/harvest-containers/#manage-pollers","title":"Manage pollers","text":""},{"location":"install/harvest-containers/#how-do-i-add-a-new-poller","title":"How do I add a new poller?","text":"
      1. Add poller to harvest.yml
      2. Regenerate compose file by running harvest generate
      3. Run docker compose up, for example,
      docker-compose -f harvest-compose.yml up -d --remove-orphans\n
      "},{"location":"install/harvest-containers/#stop-all-containers","title":"Stop all containers","text":"
      docker-compose -f harvest-compose.yml down\n
      "},{"location":"install/harvest-containers/#upgrade-harvest","title":"Upgrade Harvest","text":"

      To upgrade Harvest:

      1. Retrieve the most recent version of the Harvest Docker image by executing the following command. This is needed since the new version may contain new templates, dashboards, or other files not included in the Docker image.

        docker pull ghcr.io/netapp/harvest\n

      2. Stop all containers

      3. Regenerate your harvest-compose.yml file by running harvest generate. By default, generate will use the latest tag. If you want to upgrade to a nightly build, see the twisty.

        I want to upgrade to a nightly build

        Tell the generate cmd to use a different tag like so:

        docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest:nightly \\\ngenerate docker \\\n--image ghcr.io/netapp/harvest:nightly \\\n--output harvest-compose.yml\n
      4. Restart your containers using the following:

      docker-compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n
      "},{"location":"install/k8/","title":"K8 Deployment","text":"

      The following steps are provided for reference purposes only. Depending on the specifics of your k8 configuration, you may need to modify the steps or files.

      "},{"location":"install/k8/#requirements","title":"Requirements","text":"
      • Kompose: v1.25 or higher
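
      You can verify your installed version like so:

      kompose version\n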
      "},{"location":"install/k8/#deployment","title":"Deployment","text":"
      • Local k8 Deployment
      • Cloud Deployment
      "},{"location":"install/k8/#local-k8-deployment","title":"Local k8 Deployment","text":"

      To run Harvest resources in Kubernetes, please execute the following commands:

      1. After adding your clusters to harvest.yml, generate harvest-compose.yml and prom-stack.yml.
      docker run --rm \\\n  --entrypoint \"bin/harvest\" \\\n  --volume \"$(pwd):/opt/temp\" \\\n  --volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\n  ghcr.io/netapp/harvest \\\n  generate docker full \\\n  --output harvest-compose.yml\n
      example harvest.yml

      Tools:\nExporters:\nprometheus1:\nexporter: Prometheus\nport_range: 12990-14000\nDefaults:\nuse_insecure_tls: true\ncollectors:\n- Zapi\n- ZapiPerf\nexporters:\n- prometheus1\nPollers:\nu2:\ndatacenter: u2\naddr: ADDRESS\nusername: USER\npassword: PASS\n

      harvest-compose.yml

      version: \"3.7\"\n\nservices:\n\nu2:\nimage: ghcr.io/netapp/harvest:latest\ncontainer_name: poller-u2\nrestart: unless-stopped\nports:\n- 12990:12990\ncommand: '--poller u2 --promPort 12990 --config /opt/harvest.yml'\nvolumes:\n- /Users/harvest/conf:/opt/harvest/conf\n- /Users/harvest/cert:/opt/harvest/cert\n- /Users/harvest/harvest.yml:/opt/harvest.yml\nnetworks:\n- backend\n

      2. Using kompose, convert harvest-compose.yml and prom-stack.yml into Kubernetes resources and save them as kub.yaml.
      kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\n
      kub.yaml

      ---\napiVersion: v1\nkind: Service\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.service.type: nodeport\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: grafana\nname: grafana\nspec:\nports:\n- name: \"3000\"\nport: 3000\ntargetPort: 3000\nselector:\nio.kompose.service: grafana\ntype: NodePort\nstatus:\nloadBalancer: {}\n\n---\napiVersion: v1\nkind: Service\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.service.type: nodeport\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: prometheus\nname: prometheus\nspec:\nports:\n- name: \"9090\"\nport: 9090\ntargetPort: 9090\nselector:\nio.kompose.service: prometheus\ntype: NodePort\nstatus:\nloadBalancer: {}\n\n---\napiVersion: v1\nkind: Service\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: u2\nname: u2\nspec:\nports:\n- name: \"12990\"\nport: 12990\ntargetPort: 12990\nselector:\nio.kompose.service: u2\nstatus:\nloadBalancer: {}\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.service.type: nodeport\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: grafana\nname: grafana\nspec:\nreplicas: 1\nselector:\nmatchLabels:\nio.kompose.service: grafana\nstrategy:\ntype: Recreate\ntemplate:\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.service.type: nodeport\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.network/harvest-backend: \"true\"\nio.kompose.network/harvest-frontend: \"true\"\nio.kompose.service: grafana\nspec:\ncontainers:\n- image: grafana/grafana:8.3.4\nname: grafana\nports:\n- containerPort: 3000\nresources: {}\nvolumeMounts:\n- mountPath: /var/lib/grafana\nname: grafana-data\n- mountPath: /etc/grafana/provisioning\nname: grafana-hostpath1\nrestartPolicy: Always\nvolumes:\n- hostPath:\npath: /Users/harvest\nname: grafana-data\n- hostPath:\npath: /Users/harvest/grafana\nname: grafana-hostpath1\nstatus: {}\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\ncreationTimestamp: null\nname: harvest-backend\nspec:\ningress:\n- from:\n- podSelector:\nmatchLabels:\nio.kompose.network/harvest-backend: \"true\"\npodSelector:\nmatchLabels:\nio.kompose.network/harvest-backend: \"true\"\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\ncreationTimestamp: null\nname: harvest-frontend\nspec:\ningress:\n- from:\n- podSelector:\nmatchLabels:\nio.kompose.network/harvest-frontend: \"true\"\npodSelector:\nmatchLabels:\nio.kompose.network/harvest-frontend: \"true\"\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.service.type: nodeport\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: prometheus\nname: prometheus\nspec:\nreplicas: 1\nselector:\nmatchLabels:\nio.kompose.service: 
prometheus\nstrategy:\ntype: Recreate\ntemplate:\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.service.type: nodeport\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.network/harvest-backend: \"true\"\nio.kompose.service: prometheus\nspec:\ncontainers:\n- args:\n- --config.file=/etc/prometheus/prometheus.yml\n- --storage.tsdb.path=/prometheus\n- --web.console.libraries=/usr/share/prometheus/console_libraries\n- --web.console.templates=/usr/share/prometheus/consoles\nimage: prom/prometheus:v2.33.1\nname: prometheus\nports:\n- containerPort: 9090\nresources: {}\nvolumeMounts:\n- mountPath: /etc/prometheus\nname: prometheus-hostpath0\n- mountPath: /prometheus\nname: prometheus-data\nrestartPolicy: Always\nvolumes:\n- hostPath:\npath: /Users/harvest/container/prometheus\nname: prometheus-hostpath0\n- hostPath:\npath: /Users/harvest\nname: prometheus-data\nstatus: {}\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: u2\nname: u2\nspec:\nreplicas: 1\nselector:\nmatchLabels:\nio.kompose.service: u2\nstrategy:\ntype: Recreate\ntemplate:\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --file prom-stack.yml --out kub.yaml --volumes hostPath\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.network/harvest-backend: \"true\"\nio.kompose.service: u2\nspec:\ncontainers:\n- args:\n- --poller\n- u2\n- --promPort\n- \"12990\"\n- --config\n- /opt/harvest.yml\nimage: ghcr.io/netapp/harvest:latest\nname: poller-u2\nports:\n- containerPort: 12990\nresources: {}\nvolumeMounts:\n- mountPath: /opt/harvest/conf\nname: u2-hostpath0\n- mountPath: /opt/harvest/cert\nname: u2-hostpath1\n- mountPath: /opt/harvest.yml\nname: u2-hostpath2\nrestartPolicy: Always\nvolumes:\n- hostPath:\npath: /Users/harvest/conf\nname: u2-hostpath0\n- hostPath:\npath: /Users/harvest/cert\nname: u2-hostpath1\n- hostPath:\npath: /Users/harvest/harvest.yml\nname: u2-hostpath2\nstatus: {}\n

      3. Apply kub.yaml to k8.
      kubectl apply --filename kub.yaml\n
      4. List running pods.
      kubectl get pods\n
      pods

      NAME                          READY   STATUS    RESTARTS   AGE\nprometheus-666fc7b64d-xfkvk   1/1     Running   0          43m\ngrafana-7cd8bdc9c9-wmsxh      1/1     Running   0          43m\nu2-7dfb76b5f6-zbfm6           1/1     Running   0          43m\n

      "},{"location":"install/k8/#remove-all-harvest-resources-from-k8","title":"Remove all Harvest resources from k8","text":"

      kubectl delete --filename kub.yaml

      "},{"location":"install/k8/#helm-chart","title":"Helm Chart","text":"

      Generate helm charts

      kompose convert --file harvest-compose.yml --file prom-stack.yml --chart --volumes hostPath --out harvestchart\n
      "},{"location":"install/k8/#cloud-deployment","title":"Cloud Deployment","text":"

      We will use configMap to generate Kubernetes resources for deploying Harvest pollers in a cloud environment. Please note the following assumptions for the steps below:

      • The steps provided are solely for the deployment of Harvest poller pods. Separate configurations are required to set up Prometheus and Grafana.
      • Networking between Harvest and Prometheus must be configured; this can be accomplished by adding the network configuration to harvest-compose.yml, as shown in the sketch below.
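
      For example, a minimal sketch that attaches the poller service to an external network shared with Prometheus (the network name is illustrative):

      services:\n  u2:\n    networks:\n      - monitoring\n\nnetworks:\n  monitoring:\n    external: true\n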

      • After configuring the clusters in harvest.yml, generate harvest-compose.yml. We also want to remove the conf directory from the harvest-compose.yml file, otherwise kompose will create an empty configMap for it. We'll remove the conf directory by commenting out that line using sed.

      docker run --rm \\\n  --entrypoint \"bin/harvest\" \\\n  --volume \"$(pwd):/opt/temp\" \\\n  --volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\n  ghcr.io/netapp/harvest \\\n  generate docker full \\\n  --output harvest-compose.yml\n\nsed -i '/\\/conf/s/^/#/g' harvest-compose.yml\n
      harvest-compose.yml

      version: \"3.7\"\n\nservices:\n\nu2:\nimage: ghcr.io/netapp/harvest:latest\ncontainer_name: poller-u2\nrestart: unless-stopped\nports:\n- 12990:12990\ncommand: '--poller u2 --promPort 12990 --config /opt/harvest.yml'\nvolumes:\n#      - /Users/harvest/conf:/opt/harvest/conf\n- /Users/harvest/cert:/opt/harvest/cert\n- /Users/harvest/harvest.yml:/opt/harvest.yml\n

      1. Using kompose, convert harvest-compose.yml into Kubernetes resources and save them as kub.yaml.
      kompose convert --file harvest-compose.yml --volumes configMap -o kub.yaml\n
      kub.yaml

      ---\napiVersion: v1\nkind: Service\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --volumes configMap -o kub.yaml\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: u2\nname: u2\nspec:\nports:\n- name: \"12990\"\nport: 12990\ntargetPort: 12990\nselector:\nio.kompose.service: u2\nstatus:\nloadBalancer: {}\n\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --volumes configMap -o kub.yaml\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.service: u2\nname: u2\nspec:\nreplicas: 1\nselector:\nmatchLabels:\nio.kompose.service: u2\nstrategy:\ntype: Recreate\ntemplate:\nmetadata:\nannotations:\nkompose.cmd: kompose convert --file harvest-compose.yml --volumes configMap -o kub.yaml\nkompose.version: 1.28.0 (HEAD)\ncreationTimestamp: null\nlabels:\nio.kompose.network/harvest-default: \"true\"\nio.kompose.service: u2\nspec:\ncontainers:\n- args:\n- --poller\n- u2\n- --promPort\n- \"12990\"\n- --config\n- /opt/harvest.yml\nimage: ghcr.io/netapp/harvest:latest\nname: poller-u2\nports:\n- containerPort: 12990\nresources: {}\nvolumeMounts:\n- mountPath: /opt/harvest/cert\nname: u2-cm0\n- mountPath: /opt/harvest.yml\nname: u2-cm1\nsubPath: harvest.yml\nrestartPolicy: Always\nvolumes:\n- configMap:\nname: u2-cm0\nname: u2-cm0\n- configMap:\nitems:\n- key: harvest.yml\npath: harvest.yml\nname: u2-cm1\nname: u2-cm1\nstatus: {}\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\ncreationTimestamp: null\nlabels:\nio.kompose.service: u2\nname: u2-cm0\n\n---\napiVersion: v1\ndata:\nharvest.yml: |+\nTools:\nExporters:\nprometheus1:\nexporter: Prometheus\nport_range: 12990-14000\nadd_meta_tags: false\nDefaults:\nuse_insecure_tls: true\nprefer_zapi: true\nPollers:\n\nu2:\ndatacenter: u2\naddr: ADDRESS\nusername: USER\npassword: PASS\ncollectors:\n- Rest\nexporters:\n- prometheus1\n\nkind: ConfigMap\nmetadata:\nannotations:\nuse-subpath: \"true\"\ncreationTimestamp: null\nlabels:\nio.kompose.service: u2\nname: u2-cm1\n\n---\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\ncreationTimestamp: null\nname: harvest-default\nspec:\ningress:\n- from:\n- podSelector:\nmatchLabels:\nio.kompose.network/harvest-default: \"true\"\npodSelector:\nmatchLabels:\nio.kompose.network/harvest-default: \"true\"\n

      2. Apply kub.yaml to k8.
      kubectl apply --filename kub.yaml\n
      3. List running pods.
      kubectl get pods\n
      pods

      NAME                  READY   STATUS    RESTARTS   AGE\nu2-6864cc7dbc-v6444   1/1     Running   0          6m27s\n

      "},{"location":"install/k8/#remove-all-harvest-resources-from-k8_1","title":"Remove all Harvest resources from k8","text":"

      kubectl delete --filename kub.yaml

      "},{"location":"install/k8/#helm-chart_1","title":"Helm Chart","text":"

      Generate helm charts

      kompose convert --file harvest-compose.yml --chart --volumes configMap --out harvestchart\n
      "},{"location":"install/native/","title":"Native","text":""},{"location":"install/native/#native","title":"Native","text":"

      Visit the Releases page and copy the tar.gz link for the latest release. For example, to download the v22.08.0 release:

      wget https://github.com/NetApp/harvest/releases/download/v22.08.0/harvest-22.08.0-1_linux_amd64.tar.gz\ntar -xvf harvest-22.08.0-1_linux_amd64.tar.gz\ncd harvest-22.08.0-1_linux_amd64\n\n# Run Harvest with the default unix localhost collector\nbin/harvest start\n

      With curl

      If you don't have wget installed, you can use curl like so:

      curl -L -O https://github.com/NetApp/harvest/releases/download/v22.08.0/harvest-22.08.0-1_linux_amd64.tar.gz\n

      It's best to run Harvest as a non-root user. Make sure the user running Harvest can write to /var/log/harvest/ or tell Harvest to write the logs somewhere else with the HARVEST_LOGS environment variable.
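
      For example (a sketch; the log directory shown is illustrative and must be writable by the user running Harvest):

      export HARVEST_LOGS=/tmp/harvest/logs\nbin/harvest start\n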

      If something goes wrong, examine the log files in /var/log/harvest, check out the troubleshooting section on the wiki, and jump onto Discord and ask for help.

      "},{"location":"install/overview/","title":"Overview","text":"

      Get up and running with Harvest on your preferred platform. We provide pre-compiled Linux binaries, RPMs, and Debs, as well as prebuilt container images for both nightly and stable releases.

      • Binaries for Linux
      • RPM and Debs
      • Containers
      "},{"location":"install/overview/#nabox","title":"Nabox","text":"

      Instructions on how to install Harvest via NAbox.

      "},{"location":"install/overview/#source","title":"Source","text":"

      To build Harvest from source code follow these steps.

      1. git clone https://github.com/NetApp/harvest.git
      2. cd harvest
      3. check the version of go required in the go.mod file
      4. ensure you have a working Go environment at that version or newer. Go installs found here.
      5. make build (if you want to run Harvest from a Mac use GOOS=darwin make build)
      6. bin/harvest version

      Check out the Makefile for other targets of interest.

      "},{"location":"install/package-managers/","title":"Package Managers","text":""},{"location":"install/package-managers/#redhat","title":"Redhat","text":"

      Installation and upgrade of the Harvest package may require root or administrator privileges

      Download the latest rpm of Harvest from the releases tab and install or upgrade with yum.

      sudo yum install harvest.XXX.rpm\n

      Once the installation has finished, edit the harvest.yml configuration file located in /opt/harvest/harvest.yml

      After editing /opt/harvest/harvest.yml, manage Harvest with systemctl start|stop|restart harvest.
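
      For example:

      sudo systemctl restart harvest\nsudo systemctl status harvest   # status is a standard systemd verb, shown for convenience\n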

      After upgrade, re-import all dashboards (either bin/harvest grafana import cli or via the Grafana UI) to get any new enhancements in dashboards.

      To ensure that you don't run into permission issues, make sure you manage Harvest using systemctl instead of running the harvest binary directly.

      Changes install makes
      • Directories /var/log/harvest/ and /var/log/run/ are created
      • A harvest user and group are created and the installed files are chowned to harvest
      • Systemd /etc/systemd/system/harvest.service file is created and enabled
      "},{"location":"install/package-managers/#debian","title":"Debian","text":"

      Installation and upgrade of the Harvest package may require root or administrator privileges

      Download the latest deb of Harvest from the releases tab and install or upgrade with apt.

      sudo apt update\nsudo apt install|upgrade ./harvest-<RELEASE>.amd64.deb  \n

      Once the installation has finished, edit the harvest.yml configuration file located in /opt/harvest/harvest.yml

      After editing /opt/harvest/harvest.yml, manage Harvest with systemctl start|stop|restart harvest.

      After upgrade, re-import all dashboards (either bin/harvest grafana import cli or via the Grafana UI) to get any new enhancements in dashboards.
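
      For example, re-importing via the CLI might look like the following (the --addr flag and Grafana address are shown as assumptions; adjust for your environment):

      bin/harvest grafana import --addr \"grafana.example.com:3000\"\n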

      To ensure that you don't run into permission issues, make sure you manage Harvest using systemctl instead of running the harvest binary directly.

      Changes install makes
      • Directories /var/log/harvest/ and /var/log/run/ are created
      • A harvest user and group are created and the installed files are chowned to harvest
      • Systemd /etc/systemd/system/harvest.service file is created and enabled
      "},{"location":"install/podman/","title":"Containerized Harvest on Linux using Rootless Podman","text":"

      RHEL 8 ships with Podman instead of Docker. There are two ways to run containers with Podman: rootless or with root. Both setups are outlined below. The Podman ecosystem is changing rapidly so the shelf life of these instructions may be short. Make sure you have at least the same versions of the tools listed below.

      If you don't want to bother with Podman, you can also install Docker on RHEL 8 and use it to run Harvest per normal.

      "},{"location":"install/podman/#setup","title":"Setup","text":"

      Make sure your OS is up-to-date with yum update. Podman's dependencies are updated frequently.

      sudo yum remove docker-ce\nsudo yum module enable -y container-tools:rhel8\nsudo yum module install -y container-tools:rhel8\nsudo yum install podman podman-docker podman-plugins\n

      We also need to install Docker Compose since Podman uses it for compose workflows. Install docker-compose like this:

      VERSION=1.29.2\nsudo curl -L \"https://github.com/docker/compose/releases/download/$VERSION/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose\nsudo chmod +x /usr/local/bin/docker-compose\nsudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose\n

      After all the packages are installed, start the Podman systemd socket-activated service:

      sudo systemctl start podman.socket\n
      "},{"location":"install/podman/#containerized-harvest-on-linux-using-rootful-podman","title":"Containerized Harvest on Linux using Rootful Podman","text":"

      Make sure you're able to curl the endpoint.

      sudo curl -H \"Content-Type: application/json\" --unix-socket /var/run/docker.sock http://localhost/_ping\n

      If the sudo curl does not print OK\u23ce, troubleshoot before continuing.

      Proceed to Running Harvest

      "},{"location":"install/podman/#containerized-harvest-on-linux-using-rootless-podman_1","title":"Containerized Harvest on Linux using Rootless Podman","text":"

      To run Podman rootless, we'll create a non-root user named harvest to run Harvest.

      # as root or sudo\nusermod --append --groups wheel harvest\n

      Log in as the harvest user, set up the podman.socket, and make sure the curl below works. su or sudo aren't sufficient; you need to ssh into the machine as the harvest user or use machinectl login. See sudo-rootless-podman for details.

      # these must be run as the harvest user\nsystemctl --user enable podman.socket\nsystemctl --user start podman.socket\nsystemctl --user status podman.socket\nexport DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock\n\nsudo curl -H \"Content-Type: application/json\" --unix-socket /var/run/docker.sock http://localhost/_ping\n

      If the sudo curl does not print OK\u23ce, troubleshoot before continuing.

      Run podman info and make sure runRoot points to /run/user/$UID/containers (see below). If it doesn't, you'll probably run into problems when restarting the machine. See errors after rebooting.

      podman info | grep runRoot\n  runRoot: /run/user/1001/containers\n
      "},{"location":"install/podman/#running-harvest","title":"Running Harvest","text":"

      By default, Cockpit runs on port 9090, the same port as Prometheus. We'll change Prometheus's host port to 9091 so we can run both Cockpit and Prometheus. The --promPort 9091 flag in the generate command below does that.

      With these changes, the standard Harvest compose instructions can be followed as normal now. In summary,

      1. Add the clusters, exporters, etc. to your harvest.yml file
      2. Generate a compose file from your harvest.yml by running

        docker run --rm \\\n--entrypoint \"bin/harvest\" \\\n--volume \"$(pwd):/opt/temp\" \\\n--volume \"$(pwd)/harvest.yml:/opt/harvest/harvest.yml\" \\\nghcr.io/netapp/harvest \\\ngenerate docker full \\\n--output harvest-compose.yml \\\n--promPort 9091\n
      3. Bring everything up

        docker-compose -f prom-stack.yml -f harvest-compose.yml up -d --remove-orphans\n

      After starting the containers, you can view them with podman ps -a or using Cockpit https://host-ip:9090/podman.

      podman ps -a\nCONTAINER ID  IMAGE                                   COMMAND               CREATED        STATUS            PORTS                     NAMES\n45fd00307d0a  ghcr.io/netapp/harvest:latest           --poller unix --p...  5 seconds ago  Up 5 seconds ago  0.0.0.0:12990->12990/tcp  poller_unix_v21.11.0\nd40585bb903c  localhost/prom/prometheus:latest        --config.file=/et...  5 seconds ago  Up 5 seconds ago  0.0.0.0:9091->9090/tcp    prometheus\n17a2784bc282  localhost/grafana/grafana:latest                              4 seconds ago  Up 5 seconds ago  0.0.0.0:3000->3000/tcp    grafana\n
      "},{"location":"install/podman/#troubleshooting","title":"Troubleshooting","text":"

      Check Podman's troubleshooting docs

      "},{"location":"install/podman/#nothing-works","title":"Nothing works","text":"

      Make sure the DOCKER_HOST env variable is set and that this curl works.

      sudo curl -H \"Content-Type: application/json\" --unix-socket /var/run/docker.sock http://localhost/_ping\n

      Make sure your containers can talk to each other.

      ping prometheus\nPING prometheus (10.88.2.3): 56 data bytes\n64 bytes from 10.88.2.3: seq=0 ttl=42 time=0.059 ms\n64 bytes from 10.88.2.3: seq=1 ttl=42 time=0.065 ms\n
      "},{"location":"install/podman/#errors-after-rebooting","title":"Errors after rebooting","text":"

      After restarting the machine, I see errors like these when running podman ps.

      podman ps -a\nERRO[0000] error joining network namespace for container 424df6c: error retrieving network namespace at /run/user/1001/netns/cni-5fb97adc-b6ef-17e8-565b-0481b311ba09: failed to Statfs \"/run/user/1001/netns/cni-5fb97adc-b6ef-17e8-565b-0481b311ba09\": no such file or directory\n

      Run podman info and make sure runRoot points to /run/user/$UID/containers (see below). If it instead points to /tmp/podman-run-$UID you will likely have problems when restarting the machine. Typically this happens because you used su to become the harvest user or ran podman as root. You can fix this by logging in as the harvest user and running podman system reset.

      podman info | grep runRoot\n  runRoot: /run/user/1001/containers\n
      "},{"location":"install/podman/#linger-errors","title":"Linger errors","text":"

      When you log out, systemd may remove some temp files and tear down Podman's rootless network. The workaround is to run the following as the harvest user. Details here

      loginctl enable-linger\n
      "},{"location":"install/podman/#versions","title":"Versions","text":"

      The following versions were used to validate this workflow.

      podman version\n\nVersion:      3.2.3\nAPI Version:  3.2.3\nGo Version:   go1.15.7\nBuilt:        Thu Jul 29 11:02:43 2021\nOS/Arch:      linux/amd64\n\ndocker-compose -v\ndocker-compose version 1.29.2, build 5becea4c\n\ncat /etc/redhat-release\nRed Hat Enterprise Linux release 8.4 (Ootpa)\n
      "},{"location":"install/podman/#references","title":"References","text":"
      • https://github.com/containers/podman
      • https://www.redhat.com/sysadmin/sudo-rootless-podman
      • https://www.redhat.com/sysadmin/podman-docker-compose
      • https://fedoramagazine.org/use-docker-compose-with-podman-to-orchestrate-containers-on-fedora/
      • https://podman.io/getting-started/network.html mentions the need for podman-plugins, otherwise rootless containers running in separate containers cannot see each other
      • Troubleshoot Podman
      "},{"location":"resources/matrix/","title":"Matrix","text":""},{"location":"resources/matrix/#matrix","title":"Matrix","text":"

      The \u2133atri\u03c7 package provides the matrix.Matrix data-structure for storage, manipulation and transmission of both numeric and non-numeric (string) data. It is utilized by core components of Harvest, including collectors, plugins and exporters. It furthermore serves as an interface between these components, such that \"the left hand does not know what the right hand does\".

      Internally, the Matrix is a collection of metrics (matrix.Metric) and instances (matrix.Instance) in the form of a 2-dimensional array:

      Since we use hash tables for accessing the elements of the array, all metrics and instances added to the matrix must have a unique key. Metrics are typed and contain the numeric data (i.e. rows) of the Matrix. Instances only serve as pointers to the columns of the Matrix, but they also store non-numeric data as labels (*dict.Dict).
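
      Conceptually, the layout can be pictured like this (instance and metric names are illustrative; they echo the car example used later on this page):

                          car1    car2    car3    <- instances (columns)\nmax_speed            210     180     240\nlength_in_mm        4500    4200    4900   <- metrics (rows) hold the numeric data\n\n(labels such as mark/color/style are stored per instance, not in the numeric array)\n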

      This package is the architectural backbone of Harvest, therefore understanding it is key for an advanced user or contributor.

      "},{"location":"resources/matrix/#basic-usage","title":"Basic Usage","text":""},{"location":"resources/matrix/#initialize","title":"Initialize","text":"

      func matrix.New(name, object string, identifier string) *Matrix\n// always returns successfully pointer to (empty) Matrix \n
      This section describes how to properly initialize a new Matrix instance. Note that if you write a collector, a Matrix instance is already properly initialized for you (as MyCollector.matrix), and if you write a plugin or exporter, it is passed to you from the collector. That means most of the time you don't have to worry about initializing the Matrix.

      matrix.New() requires three arguments:

      • UUID is by convention the collector name (e.g. MyCollector) if the Matrix comes from a collector, or the collector name and the plugin name concatenated with a . (e.g. MyCollector.MyPlugin) if the Matrix comes from a plugin.
      • object is a description of the instances of the Matrix. For example, if we collect data about cars and our instances are cars, a good name would be car.
      • identifier is a unique key used to identify a matrix instance.

      Note that identifier should uniquely identify a Matrix instance. This is not a strict requirement, but guarantees that your data is properly handled by exporters.

      "},{"location":"resources/matrix/#example","title":"Example","text":"

      Here is an example from the point of view of a collector:

      import \"github.com/netapp/harvest/v2/pkg/matrix\"\n\nvar myMatrix *matrix.Matrix\n\nmyMatrix = matrix.New(\"CarCollector\", \"car\", \"car\")\n

      Next step is to add metrics and instances to our Matrix.

      "},{"location":"resources/matrix/#add-instances-and-instance-labels","title":"Add instances and instance labels","text":"
      func (x *Matrix) NewInstance(key string) (*Instance, error)\n// returns pointer to a new Instance, or nil with error (if key is not unique)\n

      func (i *Instance) SetLabel(key, value string)\n// always successful, overwrites existing values\n
      func (i *Instance) GetLabel(key) string\n// always returns value, if label is not set, returns empty string\n

      Once we have initialized a Matrix, we can add instances and add labels to our instances.

      "},{"location":"resources/matrix/#example_1","title":"Example","text":"
      var (\ninstance *matrix.Instance\nerr error\n)\nif instance, err = myMatrix.NewInstance(\"SomeCarMark\"); err != nil {\nreturn err\n// or handle err, but beware that instance is nil\n}\ninstance.SetLabel(\"mark\", \"SomeCarMark\")\ninstance.SetLabel(\"color\", \"red\")\ninstance.SetLabel(\"style\", \"coupe\")\n// add as many labels as you like\ninstance.GetLabel(\"color\") // return \"red\"\ninstance.GetLabel(\"owner\") // returns \"\"\n
      "},{"location":"resources/matrix/#add-metrics","title":"Add Metrics","text":"
      func (x *Matrix) NewMetricInt64(key string) (Metric, error)\n// returns pointer to a new MetricInt64, or nil with error (if key is not unique)\n// note that Metric is an interface\n

      Metrics are typed and there are currently 8 types, all of which can be created with the same signature as above:

      • MetricUint8
      • MetricUint32
      • MetricUint64
      • MetricInt
      • MetricInt32
      • MetricInt64
      • MetricFloat32
      • MetricFloat64

      We are able to read from and write to a metric instance using different types (as displayed in the next section); however, choosing a type wisely ensures that this is done efficiently and overflow does not occur.

      We can add labels to metrics just like instances. This is usually done when we deal with histograms:

      func (m Metric) SetLabel(key, value string)\n// always successful, overwrites existing values\n
      func (m Metric) GetLabel(key) string\n// always returns value, if label is not set, returns empty string\n

      "},{"location":"resources/matrix/#example_2","title":"Example","text":"

      Continuing our Matrix for collecting car-related data:

      var (\nspeed, length matrix.Metric\nerr error\n)\n\nif speed, err = myMatrix.NewMetricUint32(\"max_speed\"); err != nil {\nreturn err\n}\nif length, err = myMatrix.NewMetricFloat32(\"length_in_mm\"); err != nil {\nreturn err\n}\n
      "},{"location":"resources/matrix/#write-numeric-data","title":"Write numeric data","text":"

      func (x *Matrix) Reset()\n// flush numeric data from previous poll\n
      func (m Metric) SetValueInt64(i *Instance, v int64) error\nfunc (m Metric) SetValueUint8(i *Instance, v uint8) error\nfunc (m Metric) SetValueUint64(i *Instance, v uint64) error\nfunc (m Metric) SetValueFloat64(i *Instance, v float64) error\nfunc (m Metric) SetValueBytes(i *Instance, v []byte) error\nfunc (m Metric) SetValueString(i *Instance, v string) error\n// sets the numeric value for the instance i to v\n// returns error if v is invalid (explained below)\n
      func (m Metric) AddValueInt64(i *Instance, v int64) error\n// increments the numeric value for the instance i by v\n// same signatures for all the types defined above\n

      When possible you should reuse a Matrix for each data poll, but to do that, you need to call Reset() to drop old data from the Matrix. It is safe to add new instances and metrics after calling this method.

      The SetValue*() and AddValue*() methods are typed the same as the metrics. Even though you are not required to use the same type as the metric, doing so is the safest and most efficient way.

      Since most collectors get their data as bytes or strings, it is recommended to use the SetValueString() and SetValueBytes() methods.

      These methods return an error if value v can not be converted to the type of the metric. Error is always nil when the type of v matches the type of the metric.

      "},{"location":"resources/matrix/#example_3","title":"Example","text":"

      Continuing with the previous examples:

      // Reset() has no return value (see above), so there is no error to check\nmyMatrix.Reset()\n// write numbers to the matrix using the instance and the metrics we have created\n\n// let the metric do the conversion for us\nif err = speed.SetValueString(instance, \"500\"); err != nil {\nlogger.Error(me.Prefix, \"set speed value: \", err)\n}\n// here we ignore err since the value type matches the metric type\nlength.SetValueFloat64(instance, 10000.00)\n\n// safe to add new instances\nvar instance2 *matrix.Instance\nif instance2, err = myMatrix.NewInstance(\"SomeOtherCar\"); err != nil {\nreturn err\n}\n\n// possible and safe even though length has type Float32\nif err = length.SetValueInt64(instance2, 13000); err != nil {\nlogger.Error(me.Prefix, \"set length value: \", err)\n}\n\n// possible, but will overflow since speed is unsigned\nif err = speed.SetValueInt64(instance2, -500); err != nil {\nlogger.Error(me.Prefix, \"set speed value: \", err)\n}\n
      "},{"location":"resources/matrix/#read-metrics-and-instances","title":"Read metrics and instances","text":"

      In this section we switch gears and look at the Matrix from the point of view of plugins and exporters. Both those components need to read from the Matrix and have no knowledge of its origin or contents.

      func (x *Matrix) GetMetrics() map[string]Metric\n// returns all metrics in the Matrix\n
      func (x *Matrix) GetInstances() map[string]*Instance\n// returns all instances in the Matrix\n

      Usually we will do a nested loop with these two methods to read all data in the Matrix. See examples below.

      "},{"location":"resources/matrix/#example-iterate-over-instances","title":"Example: Iterate over instances","text":"

      In this example, the method PrintKeys() will iterate over a Matrix and print all instance keys.

      func PrintKeys(x *matrix.Matrix) {\nfor instanceKey := range x.GetInstances() {\nfmt.Println(\"instance key=\", instanceKey)\n}\n}\n
      "},{"location":"resources/matrix/#example-read-instance-labels","title":"Example: Read instance labels","text":"

      Each instance has a set of labels. We can iterate over these labels with the GetLabel() and GetLabels() methods. In this example, we write a function that prints all labels of an instance:

      func PrintLabels(instance *matrix.Instance) {\nfor label, value := range instance.GetLabels().Map() {\nfmt.Printf(\"%s=%s\\n\", label, value)\n}\n}\n
      "},{"location":"resources/matrix/#example-read-metric-values-labels","title":"Example: Read metric values labels","text":"

      Similar to the SetValue* and AddValue* methods, you can choose a type when reading from a metric. If you don't know the type of the metric, it is safe to read it as a string. In this example, we write a function that prints the value of a metric for all instances in a Matrix:

      func PrintMetricValues(x *matrix.Matrix, m matrix.Metric) {\nfor key, instance := range x.GetInstances() {\nif value, has := m.GetValueString(instance); has {\nfmt.Printf(\"instance %s = %s\\n\", key, value)\n} else {\nfmt.Printf(\"instance %s has no value\\n\", key)\n}\n}\n}\n
      "},{"location":"resources/power-algorithm/","title":"Power Algorithm","text":"

      Gathering power metrics requires a cluster with:

      • ONTAP versions 9.6+
      • REST enabled, even when using the ZAPI collector

      REST is required because it is the only way to collect chassis field-replaceable-unit (FRU) information via the REST API /api/private/cli/system/chassis/fru.

      "},{"location":"resources/power-algorithm/#how-does-harvest-calculate-cluster-power","title":"How does Harvest calculate cluster power?","text":"

      Cluster power is the sum of the cluster's node power plus the sum of attached disk shelf power.

      Redundant power supplies (PSU) load-share the total load. With n PSUs, each PSU does roughly (1/n) the work (the actual amount is slightly more than a single PSU due to additional fans.)

      "},{"location":"resources/power-algorithm/#node-power","title":"Node power","text":"

      Node power is calculated by collecting power supply unit (PSU) power, as reported by REST /api/private/cli/system/environment/sensors or by ZAPI environment-sensors-get-iter.

      When a power supply is shared between controllers, the PSU's power will be evenly divided across the controllers due to load-sharing.

      For example:

      • FAS2750 models have two power supplies that power both controllers. Each PSU is shared between the two controllers.
      • A800 models have four power supplies. PSU1 and PSU2 power Controller1 and PSU3 and PSU4 power Controller2. Each PSU provides power to a single controller.

      Harvest determines whether a PSU is shared between controllers by consulting the connected_nodes of each PSU, as reported by ONTAP via /api/private/cli/system/chassis/fru

      "},{"location":"resources/power-algorithm/#disk-shelf-power","title":"Disk shelf power","text":"

      Disk shelf power is calculated by collecting psu.power_drawn, as reported by REST, via /api/storage/shelves or sensor-reading, as reported by ZAPI storage-shelf-info-get-iter.

      The power for embedded shelves is ignored, since that power is already accounted for in the controller's power draw.

      "},{"location":"resources/power-algorithm/#examples","title":"Examples","text":""},{"location":"resources/power-algorithm/#fas2750","title":"FAS2750","text":"
      # Power Metrics for 10.61.183.200\n\n## ONTAP version NetApp Release 9.8P16: Fri Dec 02 02:05:05 UTC 2022\n\n## Nodes\nsystem show\n       Node         |  Model  | SerialNumber  \n----------------------+---------+---------------\ncie-na2750-g1344-01 | FAS2750 | 621841000123  \ncie-na2750-g1344-02 | FAS2750 | 621841000124\n\n## Chassis\nsystem chassis fru show\n ChassisId   |      Name       |         Fru         |    Type    | Status | NumNodes |              ConnectedNodes               \n---------------+-----------------+---------------------+------------+--------+----------+-------------------------------------------\n021827030435 | 621841000123    | cie-na2750-g1344-01 | controller | ok     |        1 | cie-na2750-g1344-01                       \n021827030435 | 621841000124    | cie-na2750-g1344-02 | controller | ok     |        1 | cie-na2750-g1344-02                       \n021827030435 | PSQ094182201794 | PSU2 FRU            | psu        | ok     |        2 | cie-na2750-g1344-02, cie-na2750-g1344-01  \n021827030435 | PSQ094182201797 | PSU1 FRU            | psu        | ok     |        2 | cie-na2750-g1344-02, cie-na2750-g1344-01\n\n## Sensors\nsystem environment sensors show\n(filtered by power, voltage, current)\n       Node         |     Name      |  Type   | State  | Value | Units  \n----------------------+---------------+---------+--------+-------+--------\ncie-na2750-g1344-01 | PSU1 12V Curr | current | normal |  9920 | mA     \ncie-na2750-g1344-01 | PSU1 12V      | voltage | normal | 12180 | mV     \ncie-na2750-g1344-01 | PSU1 5V Curr  | current | normal |  4490 | mA     \ncie-na2750-g1344-01 | PSU1 5V       | voltage | normal |  5110 | mV     \ncie-na2750-g1344-01 | PSU2 12V Curr | current | normal |  9140 | mA     \ncie-na2750-g1344-01 | PSU2 12V      | voltage | normal | 12100 | mV     \ncie-na2750-g1344-01 | PSU2 5V Curr  | current | normal |  4880 | mA     \ncie-na2750-g1344-01 | PSU2 5V       | voltage | normal |  5070 | mV     \ncie-na2750-g1344-02 | PSU1 12V Curr | current | normal |  9920 | mA     \ncie-na2750-g1344-02 | PSU1 12V      | voltage | normal | 12180 | mV     \ncie-na2750-g1344-02 | PSU1 5V Curr  | current | normal |  4330 | mA     \ncie-na2750-g1344-02 | PSU1 5V       | voltage | normal |  5110 | mV     \ncie-na2750-g1344-02 | PSU2 12V Curr | current | normal |  9170 | mA     \ncie-na2750-g1344-02 | PSU2 12V      | voltage | normal | 12100 | mV     \ncie-na2750-g1344-02 | PSU2 5V Curr  | current | normal |  4720 | mA     \ncie-na2750-g1344-02 | PSU2 5V       | voltage | normal |  5070 | mV\n\n## Shelf PSUs\nstorage shelf show\nShelf | ProductId | ModuleType | PSUId | PSUIsEnabled | PSUPowerDrawn | Embedded  \n------+-----------+------------+-------+--------------+---------------+---------\n  1.0 | DS224-12  | iom12e     | 1,2   | true,true    | 1397,1318     | true\n\n### Controller Power From Sum(InVoltage * InCurrent)/NumNodes\nPower: 256W\n
      "},{"location":"resources/power-algorithm/#aff-a800","title":"AFF A800","text":"
      # Power Metrics for 10.61.124.110\n\n## ONTAP version NetApp Release 9.13.1P1: Tue Jul 25 10:19:28 UTC 2023\n\n## Nodes\nsystem show\n  Node    |  Model   | SerialNumber  \n----------+----------+-------------\na800-1-01 | AFF-A800 | 941825000071  \na800-1-02 | AFF-A800 | 941825000072\n\n## Chassis\nsystem chassis fru show\n   ChassisId    |      Name      |    Fru    |    Type    | Status | NumNodes | ConnectedNodes  \n----------------+----------------+-----------+------------+--------+----------+---------------\nSHFFG1826000154 | 941825000071   | a800-1-01 | controller | ok     |        1 | a800-1-01       \nSHFFG1826000154 | 941825000072   | a800-1-02 | controller | ok     |        1 | a800-1-02       \nSHFFG1826000154 | EEQT1822002800 | PSU1 FRU  | psu        | ok     |        1 | a800-1-02       \nSHFFG1826000154 | EEQT1822002804 | PSU2 FRU  | psu        | ok     |        1 | a800-1-02       \nSHFFG1826000154 | EEQT1822002805 | PSU2 FRU  | psu        | ok     |        1 | a800-1-01       \nSHFFG1826000154 | EEQT1822002806 | PSU1 FRU  | psu        | ok     |        1 | a800-1-01\n\n## Sensors\nsystem environment sensors show\n(filtered by power, voltage, current)\n  Node    |     Name      |  Type   | State  | Value | Units  \n----------+---------------+---------+--------+-------+------\na800-1-01 | PSU1 Power In | unknown | normal |   376 | W      \na800-1-01 | PSU2 Power In | unknown | normal |   411 | W      \na800-1-02 | PSU1 Power In | unknown | normal |   383 | W      \na800-1-02 | PSU2 Power In | unknown | normal |   433 | W\n\n## Shelf PSUs\nstorage shelf show\nShelf |  ProductId  | ModuleType | PSUId | PSUIsEnabled | PSUPowerDrawn | Embedded  \n------+-------------+------------+-------+--------------+---------------+---------\n  1.0 | FS4483PSM3E | psm3e      |       |              |               | true      \n\n### Controller Power From Sum(InPower sensors)\nPower: 1603W\n
      "},{"location":"resources/rest-perf-metrics/","title":"REST Perf Metrics","text":"

      This document describes implementation details about ONTAP's REST performance metrics endpoints, including how we built the Harvest RESTPerf collectors.

      Warning

These are implementation details about ONTAP's REST performance metrics. You do not need to understand any of this to use Harvest. If you want to know how to use or configure Harvest's REST collectors, check out the Rest Collector documentation instead. If you're interested in the gory details, read on.

      "},{"location":"resources/rest-perf-metrics/#introduction","title":"Introduction","text":"

ONTAP REST metrics were introduced in ONTAP 9.11.1 and reached parity with Harvest-collected ZAPI performance metrics by ONTAP 9.12.1.

      "},{"location":"resources/rest-perf-metrics/#performance-rest-queries","title":"Performance REST queries","text":"

      Mapping table

• perf-object-counter-list-info → /api/cluster/counter/tables — returns counter tables and schemas
• perf-object-instance-list-info-iter → /api/cluster/counter/tables/{name}/rows — returns instances and counter values
• perf-object-get-instances → /api/cluster/counter/tables/{name}/rows — returns instances and counter values

Performance REST responses include properties and counters. Counters are metric-like numeric values, while properties are instance attributes such as names and UUIDs.
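Abridged from the full responses shown in the examples below, each row of a counter table has roughly this shape:

{\n\"counter_table\": {\n\"name\": \"system:node\"\n},\n\"id\": \"umeng-aff300-01:28e14eab-0580-11e8-bd9d-00a098d39e12\",\n\"properties\": [\n{\n\"name\": \"node.name\",\n\"value\": \"umeng-aff300-01\"\n}\n],\n\"counters\": [\n{\n\"name\": \"total_ops\",\n\"value\": 371875660\n}\n]\n}\n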

      "},{"location":"resources/rest-perf-metrics/#examples","title":"Examples","text":""},{"location":"resources/rest-perf-metrics/#ask-ontap-for-all-resources-that-report-performance-metrics","title":"Ask ONTAP for all resources that report performance metrics","text":"
      curl 'https://$clusterIP/api/cluster/counter/tables'\n
      Response

      {\n\"records\": [\n{\n\"name\": \"copy_manager\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/copy_manager\"\n}\n}\n},\n{\n\"name\": \"copy_manager:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/copy_manager%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"disk\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/disk\"\n}\n}\n},\n{\n\"name\": \"disk:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/disk%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"disk:raid_group\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/disk%3Araid_group\"\n}\n}\n},\n{\n\"name\": \"external_cache\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/external_cache\"\n}\n}\n},\n{\n\"name\": \"fcp\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcp\"\n}\n}\n},\n{\n\"name\": \"fcp:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcp%3Anode\"\n}\n}\n},\n{\n\"name\": \"fcp_lif\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcp_lif\"\n}\n}\n},\n{\n\"name\": \"fcp_lif:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcp_lif%3Anode\"\n}\n}\n},\n{\n\"name\": \"fcp_lif:port\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcp_lif%3Aport\"\n}\n}\n},\n{\n\"name\": \"fcp_lif:svm\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcp_lif%3Asvm\"\n}\n}\n},\n{\n\"name\": \"fcvi\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/fcvi\"\n}\n}\n},\n{\n\"name\": \"headroom_aggregate\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/headroom_aggregate\"\n}\n}\n},\n{\n\"name\": \"headroom_cpu\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/headroom_cpu\"\n}\n}\n},\n{\n\"name\": \"host_adapter\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/host_adapter\"\n}\n}\n},\n{\n\"name\": \"iscsi_lif\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/iscsi_lif\"\n}\n}\n},\n{\n\"name\": \"iscsi_lif:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/iscsi_lif%3Anode\"\n}\n}\n},\n{\n\"name\": \"iscsi_lif:svm\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/iscsi_lif%3Asvm\"\n}\n}\n},\n{\n\"name\": \"lif\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/lif\"\n}\n}\n},\n{\n\"name\": \"lif:svm\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/lif%3Asvm\"\n}\n}\n},\n{\n\"name\": \"lun\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/lun\"\n}\n}\n},\n{\n\"name\": \"lun:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/lun%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"lun:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/lun%3Anode\"\n}\n}\n},\n{\n\"name\": \"namespace\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/namespace\"\n}\n}\n},\n{\n\"name\": \"namespace:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/namespace%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"nfs_v4_diag\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/nfs_v4_diag\"\n}\n}\n},\n{\n\"name\": \"nic_common\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/nic_common\"\n}\n}\n},\n{\n\"name\": 
\"nvmf_lif\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/nvmf_lif\"\n}\n}\n},\n{\n\"name\": \"nvmf_lif:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/nvmf_lif%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"nvmf_lif:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/nvmf_lif%3Anode\"\n}\n}\n},\n{\n\"name\": \"nvmf_lif:port\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/nvmf_lif%3Aport\"\n}\n}\n},\n{\n\"name\": \"object_store_client_op\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/object_store_client_op\"\n}\n}\n},\n{\n\"name\": \"path\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/path\"\n}\n}\n},\n{\n\"name\": \"processor\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/processor\"\n}\n}\n},\n{\n\"name\": \"processor:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/processor%3Anode\"\n}\n}\n},\n{\n\"name\": \"qos\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos\"\n}\n}\n},\n{\n\"name\": \"qos:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"qos:policy_group\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos%3Apolicy_group\"\n}\n}\n},\n{\n\"name\": \"qos_detail\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos_detail\"\n}\n}\n},\n{\n\"name\": \"qos_detail_volume\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos_detail_volume\"\n}\n}\n},\n{\n\"name\": \"qos_volume\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos_volume\"\n}\n}\n},\n{\n\"name\": \"qos_volume:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qos_volume%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"qtree\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qtree\"\n}\n}\n},\n{\n\"name\": \"qtree:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/qtree%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"svm_cifs\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_cifs\"\n}\n}\n},\n{\n\"name\": \"svm_cifs:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_cifs%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"svm_cifs:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_cifs%3Anode\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v3\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v3\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v3:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v3%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v3:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v3%3Anode\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v4\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v4\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v41\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v41\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v41:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v41%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v41:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v41%3Anode\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v42\",\n\"_links\": 
{\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v42\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v42:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v42%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v42:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v42%3Anode\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v4:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v4%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"svm_nfs_v4:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/svm_nfs_v4%3Anode\"\n}\n}\n},\n{\n\"name\": \"system\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system\"\n}\n}\n},\n{\n\"name\": \"system:constituent\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system%3Aconstituent\"\n}\n}\n},\n{\n\"name\": \"system:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system%3Anode\"\n}\n}\n},\n{\n\"name\": \"token_manager\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/token_manager\"\n}\n}\n},\n{\n\"name\": \"volume\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/volume\"\n}\n}\n},\n{\n\"name\": \"volume:node\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/volume%3Anode\"\n}\n}\n},\n{\n\"name\": \"volume:svm\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/volume%3Asvm\"\n}\n}\n},\n{\n\"name\": \"wafl\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/wafl\"\n}\n}\n},\n{\n\"name\": \"wafl_comp_aggr_vol_bin\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/wafl_comp_aggr_vol_bin\"\n}\n}\n},\n{\n\"name\": \"wafl_hya_per_aggregate\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/wafl_hya_per_aggregate\"\n}\n}\n},\n{\n\"name\": \"wafl_hya_sizer\",\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/wafl_hya_sizer\"\n}\n}\n}\n],\n\"num_records\": 71,\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/\"\n}\n}\n}\n

      "},{"location":"resources/rest-perf-metrics/#node-performance-metrics-metadata","title":"Node performance metrics metadata","text":"

      Ask ONTAP to return the schema for system:node. This will include the name, description, and metadata for all counters associated with system:node.

      curl 'https://$clusterIP/api/cluster/counter/tables/system:node?return_records=true'\n
      Response

      {\n\"name\": \"system:node\",\n\"description\": \"The System table reports general system activity. This includes global throughput for the main services, I/O latency, and CPU activity. The alias name for system:node is system_node.\",\n\"counter_schemas\": [\n{\n\"name\": \"average_processor_busy_percent\",\n\"description\": \"Average processor utilization across all processors in the system\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"cifs_ops\",\n\"description\": \"Number of CIFS operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"cp\",\n\"description\": \"CP time rate\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"cp_time\",\n\"description\": \"Processor time in CP\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"cpu_busy\",\n\"description\": \"System CPU resource utilization. Returns a computed percentage for the default CPU field. Basically computes a 'cpu usage summary' value which indicates how 'busy' the system is based upon the most heavily utilized domain. The idea is to determine the amount of available CPU until we're limited by either a domain maxing out OR we exhaust all available idle CPU cycles, whichever occurs first.\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"cpu_elapsed_time\",\n\"description\": \"Elapsed time since boot\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"disk_data_read\",\n\"description\": \"Number of disk kilobytes (KB) read per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"disk_data_written\",\n\"description\": \"Number of disk kilobytes (KB) written per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"domain_busy\",\n\"description\": \"Array of processor time in percentage spent in various domains\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"domain_shared\",\n\"description\": \"Array of processor time in percentage spent in various shared domains\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"dswitchto_cnt\",\n\"description\": \"Array of processor time in percentage spent in domain switch\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"fcp_data_received\",\n\"description\": \"Number of FCP kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"fcp_data_sent\",\n\"description\": \"Number of FCP kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"fcp_ops\",\n\"description\": \"Number of FCP operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"hard_switches\",\n\"description\": \"Number of context switches per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"hdd_data_read\",\n\"description\": \"Number of HDD Disk kilobytes (KB) read per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"hdd_data_written\",\n\"description\": \"Number of HDD kilobytes (KB) written per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": 
\"idle\",\n\"description\": \"Processor idle rate percentage\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"idle_time\",\n\"description\": \"Processor idle time\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"instance_name\",\n\"description\": \"Node name\",\n\"type\": \"string\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"interrupt\",\n\"description\": \"Processor interrupt rate percentage\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"interrupt_in_cp\",\n\"description\": \"Processor interrupt rate percentage\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cp_time\"\n}\n},\n{\n\"name\": \"interrupt_in_cp_time\",\n\"description\": \"Processor interrupt in CP time\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"interrupt_num\",\n\"description\": \"Processor interrupt number\",\n\"type\": \"delta\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"interrupt_num_in_cp\",\n\"description\": \"Number of processor interrupts in CP\",\n\"type\": \"delta\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"interrupt_time\",\n\"description\": \"Processor interrupt time\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"intr_cnt\",\n\"description\": \"Array of interrupt count per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"intr_cnt_ipi\",\n\"description\": \"IPI interrupt count per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"intr_cnt_msec\",\n\"description\": \"Millisecond interrupt count per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"intr_cnt_total\",\n\"description\": \"Total interrupt count per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"iscsi_data_received\",\n\"description\": \"iSCSI kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"iscsi_data_sent\",\n\"description\": \"iSCSI kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"iscsi_ops\",\n\"description\": \"Number of iSCSI operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"memory\",\n\"description\": \"Total memory in megabytes (MB)\",\n\"type\": \"raw\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"network_data_received\",\n\"description\": \"Number of network kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"network_data_sent\",\n\"description\": \"Number of network kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"nfs_ops\",\n\"description\": \"Number of NFS operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"non_interrupt\",\n\"description\": \"Processor non-interrupt rate percentage\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"non_interrupt_time\",\n\"description\": \"Processor non-interrupt time\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"num_processors\",\n\"description\": \"Number of active processors in the system\",\n\"type\": \"raw\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"nvme_fc_data_received\",\n\"description\": \"NVMe/FC kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": 
\"kb_per_sec\"\n},\n{\n\"name\": \"nvme_fc_data_sent\",\n\"description\": \"NVMe/FC kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"nvme_fc_ops\",\n\"description\": \"NVMe/FC operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"nvme_roce_data_received\",\n\"description\": \"NVMe/RoCE kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"nvme_roce_data_sent\",\n\"description\": \"NVMe/RoCE kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"nvme_roce_ops\",\n\"description\": \"NVMe/RoCE operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"nvme_tcp_data_received\",\n\"description\": \"NVMe/TCP kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"nvme_tcp_data_sent\",\n\"description\": \"NVMe/TCP kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"nvme_tcp_ops\",\n\"description\": \"NVMe/TCP operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"other_data\",\n\"description\": \"Other throughput\",\n\"type\": \"rate\",\n\"unit\": \"b_per_sec\"\n},\n{\n\"name\": \"other_latency\",\n\"description\": \"Average latency for all other operations in the system in microseconds\",\n\"type\": \"average\",\n\"unit\": \"microsec\",\n\"denominator\": {\n\"name\": \"other_ops\"\n}\n},\n{\n\"name\": \"other_ops\",\n\"description\": \"All other operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"partner_data_received\",\n\"description\": \"SCSI Partner kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"partner_data_sent\",\n\"description\": \"SCSI Partner kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"processor_plevel\",\n\"description\": \"Processor plevel rate percentage\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"processor_plevel_time\",\n\"description\": \"Processor plevel rate percentage\",\n\"type\": \"delta\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"read_data\",\n\"description\": \"Read throughput\",\n\"type\": \"rate\",\n\"unit\": \"b_per_sec\"\n},\n{\n\"name\": \"read_latency\",\n\"description\": \"Average latency for all read operations in the system in microseconds\",\n\"type\": \"average\",\n\"unit\": \"microsec\",\n\"denominator\": {\n\"name\": \"read_ops\"\n}\n},\n{\n\"name\": \"read_ops\",\n\"description\": \"Read operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"sk_switches\",\n\"description\": \"Number of sk switches per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"ssd_data_read\",\n\"description\": \"Number of SSD Disk kilobytes (KB) read per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"ssd_data_written\",\n\"description\": \"Number of SSD Disk kilobytes (KB) written per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"sys_read_data\",\n\"description\": \"Network and FCP kilobytes (KB) received per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"sys_total_data\",\n\"description\": \"Network and FCP kilobytes (KB) received and sent per second\",\n\"type\": 
\"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"sys_write_data\",\n\"description\": \"Network and FCP kilobytes (KB) sent per second\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"tape_data_read\",\n\"description\": \"Tape bytes read per millisecond\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"tape_data_written\",\n\"description\": \"Tape bytes written per millisecond\",\n\"type\": \"rate\",\n\"unit\": \"kb_per_sec\"\n},\n{\n\"name\": \"time\",\n\"description\": \"Time in seconds since the Epoch (00:00:00 UTC January 1 1970)\",\n\"type\": \"raw\",\n\"unit\": \"sec\"\n},\n{\n\"name\": \"time_per_interrupt\",\n\"description\": \"Processor time per interrupt\",\n\"type\": \"average\",\n\"unit\": \"microsec\",\n\"denominator\": {\n\"name\": \"interrupt_num\"\n}\n},\n{\n\"name\": \"time_per_interrupt_in_cp\",\n\"description\": \"Processor time per interrupt in CP\",\n\"type\": \"average\",\n\"unit\": \"microsec\",\n\"denominator\": {\n\"name\": \"interrupt_num_in_cp\"\n}\n},\n{\n\"name\": \"total_data\",\n\"description\": \"Total throughput in bytes\",\n\"type\": \"rate\",\n\"unit\": \"b_per_sec\"\n},\n{\n\"name\": \"total_latency\",\n\"description\": \"Average latency for all operations in the system in microseconds\",\n\"type\": \"average\",\n\"unit\": \"microsec\",\n\"denominator\": {\n\"name\": \"total_ops\"\n}\n},\n{\n\"name\": \"total_ops\",\n\"description\": \"Total number of operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n},\n{\n\"name\": \"total_processor_busy\",\n\"description\": \"Total processor utilization of all processors in the system\",\n\"type\": \"percent\",\n\"unit\": \"percent\",\n\"denominator\": {\n\"name\": \"cpu_elapsed_time\"\n}\n},\n{\n\"name\": \"total_processor_busy_time\",\n\"description\": \"Total processor time of all processors in the system\",\n\"type\": \"delta\",\n\"unit\": \"microsec\"\n},\n{\n\"name\": \"uptime\",\n\"description\": \"Time in seconds that the system has been up\",\n\"type\": \"raw\",\n\"unit\": \"sec\"\n},\n{\n\"name\": \"wafliron\",\n\"description\": \"Wafliron counters\",\n\"type\": \"delta\",\n\"unit\": \"none\"\n},\n{\n\"name\": \"write_data\",\n\"description\": \"Write throughput\",\n\"type\": \"rate\",\n\"unit\": \"b_per_sec\"\n},\n{\n\"name\": \"write_latency\",\n\"description\": \"Average latency for all write operations in the system in microseconds\",\n\"type\": \"average\",\n\"unit\": \"microsec\",\n\"denominator\": {\n\"name\": \"write_ops\"\n}\n},\n{\n\"name\": \"write_ops\",\n\"description\": \"Write operations per second\",\n\"type\": \"rate\",\n\"unit\": \"per_sec\"\n}\n],\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system:node\"\n}\n}\n}\n

      "},{"location":"resources/rest-perf-metrics/#node-performance-metrics-with-all-instances-properties-and-counters","title":"Node performance metrics with all instances, properties, and counters","text":"

Ask ONTAP to return all instances of system:node. For each system:node instance, include all of that node's properties and performance counters.

      curl 'https://$clusterIP/api/cluster/counter/tables/system:node/rows?fields=*&return_records=true'\n
      Response

      {\n\"records\": [\n{\n\"counter_table\": {\n\"name\": \"system:node\"\n},\n\"id\": \"umeng-aff300-01:28e14eab-0580-11e8-bd9d-00a098d39e12\",\n\"properties\": [\n{\n\"name\": \"node.name\",\n\"value\": \"umeng-aff300-01\"\n},\n{\n\"name\": \"system_model\",\n\"value\": \"AFF-A300\"\n},\n{\n\"name\": \"ontap_version\",\n\"value\": \"NetApp Release R9.12.1xN_221108_1315: Tue Nov  8 15:32:25 EST 2022 \"\n},\n{\n\"name\": \"compile_flags\",\n\"value\": \"1\"\n},\n{\n\"name\": \"serial_no\",\n\"value\": \"721802000260\"\n},\n{\n\"name\": \"system_id\",\n\"value\": \"0537124012\"\n},\n{\n\"name\": \"hostname\",\n\"value\": \"umeng-aff300-01\"\n},\n{\n\"name\": \"name\",\n\"value\": \"umeng-aff300-01\"\n},\n{\n\"name\": \"uuid\",\n\"value\": \"28e14eab-0580-11e8-bd9d-00a098d39e12\"\n}\n],\n\"counters\": [\n{\n\"name\": \"memory\",\n\"value\": 88766\n},\n{\n\"name\": \"nfs_ops\",\n\"value\": 15991465\n},\n{\n\"name\": \"cifs_ops\",\n\"value\": 0\n},\n{\n\"name\": \"fcp_ops\",\n\"value\": 0\n},\n{\n\"name\": \"iscsi_ops\",\n\"value\": 355884195\n},\n{\n\"name\": \"nvme_fc_ops\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_tcp_ops\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_roce_ops\",\n\"value\": 0\n},\n{\n\"name\": \"network_data_received\",\n\"value\": 33454266379\n},\n{\n\"name\": \"network_data_sent\",\n\"value\": 9938586739\n},\n{\n\"name\": \"fcp_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"fcp_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"iscsi_data_received\",\n\"value\": 4543696942\n},\n{\n\"name\": \"iscsi_data_sent\",\n\"value\": 3058795391\n},\n{\n\"name\": \"nvme_fc_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_fc_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_tcp_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_tcp_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_roce_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_roce_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"partner_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"partner_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"sys_read_data\",\n\"value\": 33454266379\n},\n{\n\"name\": \"sys_write_data\",\n\"value\": 9938586739\n},\n{\n\"name\": \"sys_total_data\",\n\"value\": 43392853118\n},\n{\n\"name\": \"disk_data_read\",\n\"value\": 32083838540\n},\n{\n\"name\": \"disk_data_written\",\n\"value\": 21102507352\n},\n{\n\"name\": \"hdd_data_read\",\n\"value\": 0\n},\n{\n\"name\": \"hdd_data_written\",\n\"value\": 0\n},\n{\n\"name\": \"ssd_data_read\",\n\"value\": 32083838540\n},\n{\n\"name\": \"ssd_data_written\",\n\"value\": 21102507352\n},\n{\n\"name\": \"tape_data_read\",\n\"value\": 0\n},\n{\n\"name\": \"tape_data_written\",\n\"value\": 0\n},\n{\n\"name\": \"read_ops\",\n\"value\": 33495530\n},\n{\n\"name\": \"write_ops\",\n\"value\": 324699398\n},\n{\n\"name\": \"other_ops\",\n\"value\": 13680732\n},\n{\n\"name\": \"total_ops\",\n\"value\": 371875660\n},\n{\n\"name\": \"read_latency\",\n\"value\": 14728140707\n},\n{\n\"name\": \"write_latency\",\n\"value\": 1568830328022\n},\n{\n\"name\": \"other_latency\",\n\"value\": 2132691612\n},\n{\n\"name\": \"total_latency\",\n\"value\": 1585691160341\n},\n{\n\"name\": \"read_data\",\n\"value\": 3212301497187\n},\n{\n\"name\": \"write_data\",\n\"value\": 4787509093524\n},\n{\n\"name\": \"other_data\",\n\"value\": 0\n},\n{\n\"name\": \"total_data\",\n\"value\": 7999810590711\n},\n{\n\"name\": \"cpu_busy\",\n\"value\": 790347800332\n},\n{\n\"name\": \"cpu_elapsed_time\",\n\"value\": 3979034040025\n},\n{\n\"name\": 
\"average_processor_busy_percent\",\n\"value\": 788429907770\n},\n{\n\"name\": \"total_processor_busy\",\n\"value\": 12614878524320\n},\n{\n\"name\": \"total_processor_busy_time\",\n\"value\": 12614878524320\n},\n{\n\"name\": \"num_processors\",\n\"value\": 16\n},\n{\n\"name\": \"interrupt_time\",\n\"value\": 118435504138\n},\n{\n\"name\": \"interrupt\",\n\"value\": 118435504138\n},\n{\n\"name\": \"interrupt_num\",\n\"value\": 1446537540\n},\n{\n\"name\": \"time_per_interrupt\",\n\"value\": 118435504138\n},\n{\n\"name\": \"non_interrupt_time\",\n\"value\": 12496443020182\n},\n{\n\"name\": \"non_interrupt\",\n\"value\": 12496443020182\n},\n{\n\"name\": \"idle_time\",\n\"value\": 51049666116080\n},\n{\n\"name\": \"idle\",\n\"value\": 51049666116080\n},\n{\n\"name\": \"cp_time\",\n\"value\": 221447740301\n},\n{\n\"name\": \"cp\",\n\"value\": 221447740301\n},\n{\n\"name\": \"interrupt_in_cp_time\",\n\"value\": 7969316828\n},\n{\n\"name\": \"interrupt_in_cp\",\n\"value\": 7969316828\n},\n{\n\"name\": \"interrupt_num_in_cp\",\n\"value\": 1639345044\n},\n{\n\"name\": \"time_per_interrupt_in_cp\",\n\"value\": 7969316828\n},\n{\n\"name\": \"sk_switches\",\n\"value\": 3830419593\n},\n{\n\"name\": \"hard_switches\",\n\"value\": 2786999477\n},\n{\n\"name\": \"intr_cnt_msec\",\n\"value\": 3978648113\n},\n{\n\"name\": \"intr_cnt_ipi\",\n\"value\": 1709054\n},\n{\n\"name\": \"intr_cnt_total\",\n\"value\": 1215253490\n},\n{\n\"name\": \"time\",\n\"value\": 1677516216\n},\n{\n\"name\": \"uptime\",\n\"value\": 3978648\n},\n{\n\"name\": \"processor_plevel_time\",\n\"values\": [\n3405835479577,\n2628275207938,\n1916273074545,\n1366761457118,\n964863281216,\n676002919489,\n472533086045,\n331487674159,\n234447654307,\n167247803300,\n120098535891,\n86312126550,\n61675398266,\n43549889374,\n30176461104,\n19891286233,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": 
[\n\"0_CPU\",\n\"1_CPU\",\n\"2_CPU\",\n\"3_CPU\",\n\"4_CPU\",\n\"5_CPU\",\n\"6_CPU\",\n\"7_CPU\",\n\"8_CPU\",\n\"9_CPU\",\n\"10_CPU\",\n\"11_CPU\",\n\"12_CPU\",\n\"13_CPU\",\n\"14_CPU\",\n\"15_CPU\",\n\"16_CPU\",\n\"17_CPU\",\n\"18_CPU\",\n\"19_CPU\",\n\"20_CPU\",\n\"21_CPU\",\n\"22_CPU\",\n\"23_CPU\",\n\"24_CPU\",\n\"25_CPU\",\n\"26_CPU\",\n\"27_CPU\",\n\"28_CPU\",\n\"29_CPU\",\n\"30_CPU\",\n\"31_CPU\",\n\"32_CPU\",\n\"33_CPU\",\n\"34_CPU\",\n\"35_CPU\",\n\"36_CPU\",\n\"37_CPU\",\n\"38_CPU\",\n\"39_CPU\",\n\"40_CPU\",\n\"41_CPU\",\n\"42_CPU\",\n\"43_CPU\",\n\"44_CPU\",\n\"45_CPU\",\n\"46_CPU\",\n\"47_CPU\",\n\"48_CPU\",\n\"49_CPU\",\n\"50_CPU\",\n\"51_CPU\",\n\"52_CPU\",\n\"53_CPU\",\n\"54_CPU\",\n\"55_CPU\",\n\"56_CPU\",\n\"57_CPU\",\n\"58_CPU\",\n\"59_CPU\",\n\"60_CPU\",\n\"61_CPU\",\n\"62_CPU\",\n\"63_CPU\",\n\"64_CPU\",\n\"65_CPU\",\n\"66_CPU\",\n\"67_CPU\",\n\"68_CPU\",\n\"69_CPU\",\n\"70_CPU\",\n\"71_CPU\",\n\"72_CPU\",\n\"73_CPU\",\n\"74_CPU\",\n\"75_CPU\",\n\"76_CPU\",\n\"77_CPU\",\n\"78_CPU\",\n\"79_CPU\",\n\"80_CPU\",\n\"81_CPU\",\n\"82_CPU\",\n\"83_CPU\",\n\"84_CPU\",\n\"85_CPU\",\n\"86_CPU\",\n\"87_CPU\",\n\"88_CPU\",\n\"89_CPU\",\n\"90_CPU\",\n\"91_CPU\",\n\"92_CPU\",\n\"93_CPU\",\n\"94_CPU\",\n\"95_CPU\",\n\"96_CPU\",\n\"97_CPU\",\n\"98_CPU\",\n\"99_CPU\",\n\"100_CPU\",\n\"101_CPU\",\n\"102_CPU\",\n\"103_CPU\",\n\"104_CPU\",\n\"105_CPU\",\n\"106_CPU\",\n\"107_CPU\",\n\"108_CPU\",\n\"109_CPU\",\n\"110_CPU\",\n\"111_CPU\",\n\"112_CPU\",\n\"113_CPU\",\n\"114_CPU\",\n\"115_CPU\",\n\"116_CPU\",\n\"117_CPU\",\n\"118_CPU\",\n\"119_CPU\",\n\"120_CPU\",\n\"121_CPU\",\n\"122_CPU\",\n\"123_CPU\",\n\"124_CPU\",\n\"125_CPU\",\n\"126_CPU\",\n\"127_CPU\"\n]\n},\n{\n\"name\": \"processor_plevel\",\n\"values\": [\n3405835479577,\n2628275207938,\n1916273074545,\n1366761457118,\n964863281216,\n676002919489,\n472533086045,\n331487674159,\n234447654307,\n167247803300,\n120098535891,\n86312126550,\n61675398266,\n43549889374,\n30176461104,\n19891286233,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": 
[\n\"0_CPU\",\n\"1_CPU\",\n\"2_CPU\",\n\"3_CPU\",\n\"4_CPU\",\n\"5_CPU\",\n\"6_CPU\",\n\"7_CPU\",\n\"8_CPU\",\n\"9_CPU\",\n\"10_CPU\",\n\"11_CPU\",\n\"12_CPU\",\n\"13_CPU\",\n\"14_CPU\",\n\"15_CPU\",\n\"16_CPU\",\n\"17_CPU\",\n\"18_CPU\",\n\"19_CPU\",\n\"20_CPU\",\n\"21_CPU\",\n\"22_CPU\",\n\"23_CPU\",\n\"24_CPU\",\n\"25_CPU\",\n\"26_CPU\",\n\"27_CPU\",\n\"28_CPU\",\n\"29_CPU\",\n\"30_CPU\",\n\"31_CPU\",\n\"32_CPU\",\n\"33_CPU\",\n\"34_CPU\",\n\"35_CPU\",\n\"36_CPU\",\n\"37_CPU\",\n\"38_CPU\",\n\"39_CPU\",\n\"40_CPU\",\n\"41_CPU\",\n\"42_CPU\",\n\"43_CPU\",\n\"44_CPU\",\n\"45_CPU\",\n\"46_CPU\",\n\"47_CPU\",\n\"48_CPU\",\n\"49_CPU\",\n\"50_CPU\",\n\"51_CPU\",\n\"52_CPU\",\n\"53_CPU\",\n\"54_CPU\",\n\"55_CPU\",\n\"56_CPU\",\n\"57_CPU\",\n\"58_CPU\",\n\"59_CPU\",\n\"60_CPU\",\n\"61_CPU\",\n\"62_CPU\",\n\"63_CPU\",\n\"64_CPU\",\n\"65_CPU\",\n\"66_CPU\",\n\"67_CPU\",\n\"68_CPU\",\n\"69_CPU\",\n\"70_CPU\",\n\"71_CPU\",\n\"72_CPU\",\n\"73_CPU\",\n\"74_CPU\",\n\"75_CPU\",\n\"76_CPU\",\n\"77_CPU\",\n\"78_CPU\",\n\"79_CPU\",\n\"80_CPU\",\n\"81_CPU\",\n\"82_CPU\",\n\"83_CPU\",\n\"84_CPU\",\n\"85_CPU\",\n\"86_CPU\",\n\"87_CPU\",\n\"88_CPU\",\n\"89_CPU\",\n\"90_CPU\",\n\"91_CPU\",\n\"92_CPU\",\n\"93_CPU\",\n\"94_CPU\",\n\"95_CPU\",\n\"96_CPU\",\n\"97_CPU\",\n\"98_CPU\",\n\"99_CPU\",\n\"100_CPU\",\n\"101_CPU\",\n\"102_CPU\",\n\"103_CPU\",\n\"104_CPU\",\n\"105_CPU\",\n\"106_CPU\",\n\"107_CPU\",\n\"108_CPU\",\n\"109_CPU\",\n\"110_CPU\",\n\"111_CPU\",\n\"112_CPU\",\n\"113_CPU\",\n\"114_CPU\",\n\"115_CPU\",\n\"116_CPU\",\n\"117_CPU\",\n\"118_CPU\",\n\"119_CPU\",\n\"120_CPU\",\n\"121_CPU\",\n\"122_CPU\",\n\"123_CPU\",\n\"124_CPU\",\n\"125_CPU\",\n\"126_CPU\",\n\"127_CPU\"\n]\n},\n{\n\"name\": \"domain_busy\",\n\"values\": [\n51049666116086,\n13419960088,\n13297686377,\n1735383373870,\n39183250298,\n6728050897,\n28229793795,\n17493622207,\n122290467,\n974721172619,\n47944793823,\n164946850,\n4162377932,\n407009733276,\n128199854099,\n9037374471285,\n38911301970,\n366749865,\n732045734,\n2997541695,\n14,\n18,\n40\n],\n\"labels\": [\n\"idle\",\n\"kahuna\",\n\"storage\",\n\"exempt\",\n\"none\",\n\"raid\",\n\"raid_exempt\",\n\"xor_exempt\",\n\"target\",\n\"wafl_exempt\",\n\"wafl_mpcleaner\",\n\"sm_exempt\",\n\"protocol\",\n\"nwk_exempt\",\n\"network\",\n\"hostOS\",\n\"ssan_exempt\",\n\"unclassified\",\n\"kahuna_legacy\",\n\"ha\",\n\"ssan_exempt2\",\n\"exempt_ise\",\n\"zombie\"\n]\n},\n{\n\"name\": \"domain_shared\",\n\"values\": [\n0,\n685164024474,\n0,\n0,\n0,\n24684879894,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": [\n\"idle\",\n\"kahuna\",\n\"storage\",\n\"exempt\",\n\"none\",\n\"raid\",\n\"raid_exempt\",\n\"xor_exempt\",\n\"target\",\n\"wafl_exempt\",\n\"wafl_mpcleaner\",\n\"sm_exempt\",\n\"protocol\",\n\"nwk_exempt\",\n\"network\",\n\"hostOS\",\n\"ssan_exempt\",\n\"unclassified\",\n\"kahuna_legacy\",\n\"ha\",\n\"ssan_exempt2\",\n\"exempt_ise\",\n\"zombie\"\n]\n},\n{\n\"name\": \"dswitchto_cnt\",\n\"values\": [\n0,\n322698663,\n172936437,\n446893016,\n96971,\n39788918,\n5,\n10,\n10670440,\n22,\n7,\n836,\n2407967,\n9798186907,\n9802868991,\n265242,\n53,\n2614118,\n4430780,\n66117706,\n1,\n1,\n1\n],\n\"labels\": 
[\n\"idle\",\n\"kahuna\",\n\"storage\",\n\"exempt\",\n\"none\",\n\"raid\",\n\"raid_exempt\",\n\"xor_exempt\",\n\"target\",\n\"wafl_exempt\",\n\"wafl_mpcleaner\",\n\"sm_exempt\",\n\"protocol\",\n\"nwk_exempt\",\n\"network\",\n\"hostOS\",\n\"ssan_exempt\",\n\"unclassified\",\n\"kahuna_legacy\",\n\"ha\",\n\"ssan_exempt2\",\n\"exempt_ise\",\n\"zombie\"\n]\n},\n{\n\"name\": \"intr_cnt\",\n\"values\": [\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n4191453008,\n8181232,\n1625052957,\n0,\n71854,\n0,\n71854,\n0,\n5,\n0,\n5,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": [\n\"dev_0\",\n\"dev_1\",\n\"dev_2\",\n\"dev_3\",\n\"dev_4\",\n\"dev_5\",\n\"dev_6\",\n\"dev_7\",\n\"dev_8\",\n\"dev_9\",\n\"dev_10\",\n\"dev_11\",\n\"dev_12\",\n\"dev_13\",\n\"dev_14\",\n\"dev_15\",\n\"dev_16\",\n\"dev_17\",\n\"dev_18\",\n\"dev_19\",\n\"dev_20\",\n\"dev_21\",\n\"dev_22\",\n\"dev_23\",\n\"dev_24\",\n\"dev_25\",\n\"dev_26\",\n\"dev_27\",\n\"dev_28\",\n\"dev_29\",\n\"dev_30\",\n\"dev_31\",\n\"dev_32\",\n\"dev_33\",\n\"dev_34\",\n\"dev_35\",\n\"dev_36\",\n\"dev_37\",\n\"dev_38\",\n\"dev_39\",\n\"dev_40\",\n\"dev_41\",\n\"dev_42\",\n\"dev_43\",\n\"dev_44\",\n\"dev_45\",\n\"dev_46\",\n\"dev_47\",\n\"dev_48\",\n\"dev_49\",\n\"dev_50\",\n\"dev_51\",\n\"dev_52\",\n\"dev_53\",\n\"dev_54\",\n\"dev_55\",\n\"dev_56\",\n\"dev_57\",\n\"dev_58\",\n\"dev_59\",\n\"dev_60\",\n\"dev_61\",\n\"dev_62\",\n\"dev_63\",\n\"dev_64\",\n\"dev_65\",\n\"dev_66\",\n\"dev_67\",\n\"dev_68\",\n\"dev_69\",\n\"dev_70\",\n\"dev_71\",\n\"dev_72\",\n\"dev_73\",\n\"dev_74\",\n\"dev_75\",\n\"dev_76\",\n\"dev_77\",\n\"dev_78\",\n\"dev_79\",\n\"dev_80\",\n\"dev_81\",\n\"dev_82\",\n\"dev_83\",\n\"dev_84\",\n\"dev_85\",\n\"dev_86\",\n\"dev_87\",\n\"dev_88\",\n\"dev_89\",\n\"dev_90\",\n\"dev_91\",\n\"dev_92\",\n\"dev_93\",\n\"dev_94\",\n\"dev_95\",\n\"dev_96\",\n\"dev_97\",\n\"dev_98\",\n\"dev_99\",\n\"dev_100\",\n\"dev_101\",\n\"dev_102\",\n\"dev_103\",\n\"dev_104\",\n\"dev_105\",\n\"dev_106\",\n\"dev_107\",\n\"dev_108\",\n\"dev_109\",\n\"dev_110\",\n\"dev_111\",\n\"dev_112\",\n\"dev_113\",\n\"dev_114\",\n\"dev_115\",\n\"dev_116\",\n\"dev_117\",\n\"dev_118\",\n\"dev_119\",\n\"dev_120\",\n\"dev_121\",\n\"dev_122\",\n\"dev_123\",\n\"dev_124\",\n\"dev_125\",\n\"dev_126\",\n\"dev_127\",\n\"dev_128\",\n\"dev_129\",\n\"dev_130\",\n\"dev_131\",\n\"dev_132\",\n\"dev_133\",\n\"dev_134\",\n\"dev_135\",\n\"dev_136\",\n\"dev_137\",\n\"dev_138\",\n\"dev_139\",\n\"dev_140\",\n\"dev_141\",\n\"dev_142\",\n\"dev_143\",\n\"dev_144\",\n\"dev_145\",\n\"dev_146\",\n\"dev_147\",\n\"dev_148\",\n\"dev_149\",\n\"dev_150\",\n\"dev_151\",\n\"dev_152\",\n\"dev_153\",\n\"dev_154\",\n\"dev_155\",\n\"dev
_156\",\n\"dev_157\",\n\"dev_158\",\n\"dev_159\",\n\"dev_160\",\n\"dev_161\",\n\"dev_162\",\n\"dev_163\",\n\"dev_164\",\n\"dev_165\",\n\"dev_166\",\n\"dev_167\",\n\"dev_168\",\n\"dev_169\",\n\"dev_170\",\n\"dev_171\",\n\"dev_172\",\n\"dev_173\",\n\"dev_174\",\n\"dev_175\",\n\"dev_176\",\n\"dev_177\",\n\"dev_178\",\n\"dev_179\",\n\"dev_180\",\n\"dev_181\",\n\"dev_182\",\n\"dev_183\",\n\"dev_184\",\n\"dev_185\",\n\"dev_186\",\n\"dev_187\",\n\"dev_188\",\n\"dev_189\",\n\"dev_190\",\n\"dev_191\",\n\"dev_192\",\n\"dev_193\",\n\"dev_194\",\n\"dev_195\",\n\"dev_196\",\n\"dev_197\",\n\"dev_198\",\n\"dev_199\",\n\"dev_200\",\n\"dev_201\",\n\"dev_202\",\n\"dev_203\",\n\"dev_204\",\n\"dev_205\",\n\"dev_206\",\n\"dev_207\",\n\"dev_208\",\n\"dev_209\",\n\"dev_210\",\n\"dev_211\",\n\"dev_212\",\n\"dev_213\",\n\"dev_214\",\n\"dev_215\",\n\"dev_216\",\n\"dev_217\",\n\"dev_218\",\n\"dev_219\",\n\"dev_220\",\n\"dev_221\",\n\"dev_222\",\n\"dev_223\",\n\"dev_224\",\n\"dev_225\",\n\"dev_226\",\n\"dev_227\",\n\"dev_228\",\n\"dev_229\",\n\"dev_230\",\n\"dev_231\",\n\"dev_232\",\n\"dev_233\",\n\"dev_234\",\n\"dev_235\",\n\"dev_236\",\n\"dev_237\",\n\"dev_238\",\n\"dev_239\",\n\"dev_240\",\n\"dev_241\",\n\"dev_242\",\n\"dev_243\",\n\"dev_244\",\n\"dev_245\",\n\"dev_246\",\n\"dev_247\",\n\"dev_248\",\n\"dev_249\",\n\"dev_250\",\n\"dev_251\",\n\"dev_252\",\n\"dev_253\",\n\"dev_254\",\n\"dev_255\"\n]\n},\n{\n\"name\": \"wafliron\",\n\"values\": [\n0,\n0,\n0\n],\n\"labels\": [\n\"iron_totstarts\",\n\"iron_nobackup\",\n\"iron_usebackup\"\n]\n}\n],\n\"aggregation\": {\n\"count\": 2,\n\"complete\": true\n},\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system:node/rows/umeng-aff300-01%3A28e14eab-0580-11e8-bd9d-00a098d39e12\"\n}\n}\n},\n{\n\"counter_table\": {\n\"name\": \"system:node\"\n},\n\"id\": \"umeng-aff300-02:1524afca-0580-11e8-ae74-00a098d390f2\",\n\"properties\": [\n{\n\"name\": \"node.name\",\n\"value\": \"umeng-aff300-02\"\n},\n{\n\"name\": \"system_model\",\n\"value\": \"AFF-A300\"\n},\n{\n\"name\": \"ontap_version\",\n\"value\": \"NetApp Release R9.12.1xN_221108_1315: Tue Nov  8 15:32:25 EST 2022 \"\n},\n{\n\"name\": \"compile_flags\",\n\"value\": \"1\"\n},\n{\n\"name\": \"serial_no\",\n\"value\": \"721802000259\"\n},\n{\n\"name\": \"system_id\",\n\"value\": \"0537123843\"\n},\n{\n\"name\": \"hostname\",\n\"value\": \"umeng-aff300-02\"\n},\n{\n\"name\": \"name\",\n\"value\": \"umeng-aff300-02\"\n},\n{\n\"name\": \"uuid\",\n\"value\": \"1524afca-0580-11e8-ae74-00a098d390f2\"\n}\n],\n\"counters\": [\n{\n\"name\": \"memory\",\n\"value\": 88766\n},\n{\n\"name\": \"nfs_ops\",\n\"value\": 2061227971\n},\n{\n\"name\": \"cifs_ops\",\n\"value\": 0\n},\n{\n\"name\": \"fcp_ops\",\n\"value\": 0\n},\n{\n\"name\": \"iscsi_ops\",\n\"value\": 183570559\n},\n{\n\"name\": \"nvme_fc_ops\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_tcp_ops\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_roce_ops\",\n\"value\": 0\n},\n{\n\"name\": \"network_data_received\",\n\"value\": 28707362447\n},\n{\n\"name\": \"network_data_sent\",\n\"value\": 31199786274\n},\n{\n\"name\": \"fcp_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"fcp_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"iscsi_data_received\",\n\"value\": 2462501728\n},\n{\n\"name\": \"iscsi_data_sent\",\n\"value\": 962425592\n},\n{\n\"name\": \"nvme_fc_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_fc_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_tcp_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_tcp_data_sent\",\n\"value\": 
0\n},\n{\n\"name\": \"nvme_roce_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"nvme_roce_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"partner_data_received\",\n\"value\": 0\n},\n{\n\"name\": \"partner_data_sent\",\n\"value\": 0\n},\n{\n\"name\": \"sys_read_data\",\n\"value\": 28707362447\n},\n{\n\"name\": \"sys_write_data\",\n\"value\": 31199786274\n},\n{\n\"name\": \"sys_total_data\",\n\"value\": 59907148721\n},\n{\n\"name\": \"disk_data_read\",\n\"value\": 27355740700\n},\n{\n\"name\": \"disk_data_written\",\n\"value\": 3426898232\n},\n{\n\"name\": \"hdd_data_read\",\n\"value\": 0\n},\n{\n\"name\": \"hdd_data_written\",\n\"value\": 0\n},\n{\n\"name\": \"ssd_data_read\",\n\"value\": 27355740700\n},\n{\n\"name\": \"ssd_data_written\",\n\"value\": 3426898232\n},\n{\n\"name\": \"tape_data_read\",\n\"value\": 0\n},\n{\n\"name\": \"tape_data_written\",\n\"value\": 0\n},\n{\n\"name\": \"read_ops\",\n\"value\": 29957410\n},\n{\n\"name\": \"write_ops\",\n\"value\": 2141657620\n},\n{\n\"name\": \"other_ops\",\n\"value\": 73183500\n},\n{\n\"name\": \"total_ops\",\n\"value\": 2244798530\n},\n{\n\"name\": \"read_latency\",\n\"value\": 43283636161\n},\n{\n\"name\": \"write_latency\",\n\"value\": 1437635703835\n},\n{\n\"name\": \"other_latency\",\n\"value\": 628457365\n},\n{\n\"name\": \"total_latency\",\n\"value\": 1481547797361\n},\n{\n\"name\": \"read_data\",\n\"value\": 1908711454978\n},\n{\n\"name\": \"write_data\",\n\"value\": 23562759645410\n},\n{\n\"name\": \"other_data\",\n\"value\": 0\n},\n{\n\"name\": \"total_data\",\n\"value\": 25471471100388\n},\n{\n\"name\": \"cpu_busy\",\n\"value\": 511050841704\n},\n{\n\"name\": \"cpu_elapsed_time\",\n\"value\": 3979039364919\n},\n{\n\"name\": \"average_processor_busy_percent\",\n\"value\": 509151403977\n},\n{\n\"name\": \"total_processor_busy\",\n\"value\": 8146422463632\n},\n{\n\"name\": \"total_processor_busy_time\",\n\"value\": 8146422463632\n},\n{\n\"name\": \"num_processors\",\n\"value\": 16\n},\n{\n\"name\": \"interrupt_time\",\n\"value\": 108155323601\n},\n{\n\"name\": \"interrupt\",\n\"value\": 108155323601\n},\n{\n\"name\": \"interrupt_num\",\n\"value\": 3369179127\n},\n{\n\"name\": \"time_per_interrupt\",\n\"value\": 108155323601\n},\n{\n\"name\": \"non_interrupt_time\",\n\"value\": 8038267140031\n},\n{\n\"name\": \"non_interrupt\",\n\"value\": 8038267140031\n},\n{\n\"name\": \"idle_time\",\n\"value\": 55518207375072\n},\n{\n\"name\": \"idle\",\n\"value\": 55518207375072\n},\n{\n\"name\": \"cp_time\",\n\"value\": 64306316680\n},\n{\n\"name\": \"cp\",\n\"value\": 64306316680\n},\n{\n\"name\": \"interrupt_in_cp_time\",\n\"value\": 2024956616\n},\n{\n\"name\": \"interrupt_in_cp\",\n\"value\": 2024956616\n},\n{\n\"name\": \"interrupt_num_in_cp\",\n\"value\": 2661183541\n},\n{\n\"name\": \"time_per_interrupt_in_cp\",\n\"value\": 2024956616\n},\n{\n\"name\": \"sk_switches\",\n\"value\": 2798598514\n},\n{\n\"name\": \"hard_switches\",\n\"value\": 1354185066\n},\n{\n\"name\": \"intr_cnt_msec\",\n\"value\": 3978642246\n},\n{\n\"name\": \"intr_cnt_ipi\",\n\"value\": 797281\n},\n{\n\"name\": \"intr_cnt_total\",\n\"value\": 905575861\n},\n{\n\"name\": \"time\",\n\"value\": 1677516216\n},\n{\n\"name\": \"uptime\",\n\"value\": 3978643\n},\n{\n\"name\": \"processor_plevel_time\",\n\"values\": 
[\n2878770221447,\n1882901052733,\n1209134416474,\n771086627192,\n486829133301,\n306387520688,\n193706139760,\n123419519944,\n79080346535,\n50459518003,\n31714732122,\n19476561954,\n11616026278,\n6666253598,\n3623880168,\n1790458071,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": [\n\"0_CPU\",\n\"1_CPU\",\n\"2_CPU\",\n\"3_CPU\",\n\"4_CPU\",\n\"5_CPU\",\n\"6_CPU\",\n\"7_CPU\",\n\"8_CPU\",\n\"9_CPU\",\n\"10_CPU\",\n\"11_CPU\",\n\"12_CPU\",\n\"13_CPU\",\n\"14_CPU\",\n\"15_CPU\",\n\"16_CPU\",\n\"17_CPU\",\n\"18_CPU\",\n\"19_CPU\",\n\"20_CPU\",\n\"21_CPU\",\n\"22_CPU\",\n\"23_CPU\",\n\"24_CPU\",\n\"25_CPU\",\n\"26_CPU\",\n\"27_CPU\",\n\"28_CPU\",\n\"29_CPU\",\n\"30_CPU\",\n\"31_CPU\",\n\"32_CPU\",\n\"33_CPU\",\n\"34_CPU\",\n\"35_CPU\",\n\"36_CPU\",\n\"37_CPU\",\n\"38_CPU\",\n\"39_CPU\",\n\"40_CPU\",\n\"41_CPU\",\n\"42_CPU\",\n\"43_CPU\",\n\"44_CPU\",\n\"45_CPU\",\n\"46_CPU\",\n\"47_CPU\",\n\"48_CPU\",\n\"49_CPU\",\n\"50_CPU\",\n\"51_CPU\",\n\"52_CPU\",\n\"53_CPU\",\n\"54_CPU\",\n\"55_CPU\",\n\"56_CPU\",\n\"57_CPU\",\n\"58_CPU\",\n\"59_CPU\",\n\"60_CPU\",\n\"61_CPU\",\n\"62_CPU\",\n\"63_CPU\",\n\"64_CPU\",\n\"65_CPU\",\n\"66_CPU\",\n\"67_CPU\",\n\"68_CPU\",\n\"69_CPU\",\n\"70_CPU\",\n\"71_CPU\",\n\"72_CPU\",\n\"73_CPU\",\n\"74_CPU\",\n\"75_CPU\",\n\"76_CPU\",\n\"77_CPU\",\n\"78_CPU\",\n\"79_CPU\",\n\"80_CPU\",\n\"81_CPU\",\n\"82_CPU\",\n\"83_CPU\",\n\"84_CPU\",\n\"85_CPU\",\n\"86_CPU\",\n\"87_CPU\",\n\"88_CPU\",\n\"89_CPU\",\n\"90_CPU\",\n\"91_CPU\",\n\"92_CPU\",\n\"93_CPU\",\n\"94_CPU\",\n\"95_CPU\",\n\"96_CPU\",\n\"97_CPU\",\n\"98_CPU\",\n\"99_CPU\",\n\"100_CPU\",\n\"101_CPU\",\n\"102_CPU\",\n\"103_CPU\",\n\"104_CPU\",\n\"105_CPU\",\n\"106_CPU\",\n\"107_CPU\",\n\"108_CPU\",\n\"109_CPU\",\n\"110_CPU\",\n\"111_CPU\",\n\"112_CPU\",\n\"113_CPU\",\n\"114_CPU\",\n\"115_CPU\",\n\"116_CPU\",\n\"117_CPU\",\n\"118_CPU\",\n\"119_CPU\",\n\"120_CPU\",\n\"121_CPU\",\n\"122_CPU\",\n\"123_CPU\",\n\"124_CPU\",\n\"125_CPU\",\n\"126_CPU\",\n\"127_CPU\"\n]\n},\n{\n\"name\": \"processor_plevel\",\n\"values\": [\n2878770221447,\n1882901052733,\n1209134416474,\n771086627192,\n486829133301,\n306387520688,\n193706139760,\n123419519944,\n79080346535,\n50459518003,\n31714732122,\n19476561954,\n11616026278,\n6666253598,\n3623880168,\n1790458071,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": 
[\n\"0_CPU\",\n\"1_CPU\",\n\"2_CPU\",\n\"3_CPU\",\n\"4_CPU\",\n\"5_CPU\",\n\"6_CPU\",\n\"7_CPU\",\n\"8_CPU\",\n\"9_CPU\",\n\"10_CPU\",\n\"11_CPU\",\n\"12_CPU\",\n\"13_CPU\",\n\"14_CPU\",\n\"15_CPU\",\n\"16_CPU\",\n\"17_CPU\",\n\"18_CPU\",\n\"19_CPU\",\n\"20_CPU\",\n\"21_CPU\",\n\"22_CPU\",\n\"23_CPU\",\n\"24_CPU\",\n\"25_CPU\",\n\"26_CPU\",\n\"27_CPU\",\n\"28_CPU\",\n\"29_CPU\",\n\"30_CPU\",\n\"31_CPU\",\n\"32_CPU\",\n\"33_CPU\",\n\"34_CPU\",\n\"35_CPU\",\n\"36_CPU\",\n\"37_CPU\",\n\"38_CPU\",\n\"39_CPU\",\n\"40_CPU\",\n\"41_CPU\",\n\"42_CPU\",\n\"43_CPU\",\n\"44_CPU\",\n\"45_CPU\",\n\"46_CPU\",\n\"47_CPU\",\n\"48_CPU\",\n\"49_CPU\",\n\"50_CPU\",\n\"51_CPU\",\n\"52_CPU\",\n\"53_CPU\",\n\"54_CPU\",\n\"55_CPU\",\n\"56_CPU\",\n\"57_CPU\",\n\"58_CPU\",\n\"59_CPU\",\n\"60_CPU\",\n\"61_CPU\",\n\"62_CPU\",\n\"63_CPU\",\n\"64_CPU\",\n\"65_CPU\",\n\"66_CPU\",\n\"67_CPU\",\n\"68_CPU\",\n\"69_CPU\",\n\"70_CPU\",\n\"71_CPU\",\n\"72_CPU\",\n\"73_CPU\",\n\"74_CPU\",\n\"75_CPU\",\n\"76_CPU\",\n\"77_CPU\",\n\"78_CPU\",\n\"79_CPU\",\n\"80_CPU\",\n\"81_CPU\",\n\"82_CPU\",\n\"83_CPU\",\n\"84_CPU\",\n\"85_CPU\",\n\"86_CPU\",\n\"87_CPU\",\n\"88_CPU\",\n\"89_CPU\",\n\"90_CPU\",\n\"91_CPU\",\n\"92_CPU\",\n\"93_CPU\",\n\"94_CPU\",\n\"95_CPU\",\n\"96_CPU\",\n\"97_CPU\",\n\"98_CPU\",\n\"99_CPU\",\n\"100_CPU\",\n\"101_CPU\",\n\"102_CPU\",\n\"103_CPU\",\n\"104_CPU\",\n\"105_CPU\",\n\"106_CPU\",\n\"107_CPU\",\n\"108_CPU\",\n\"109_CPU\",\n\"110_CPU\",\n\"111_CPU\",\n\"112_CPU\",\n\"113_CPU\",\n\"114_CPU\",\n\"115_CPU\",\n\"116_CPU\",\n\"117_CPU\",\n\"118_CPU\",\n\"119_CPU\",\n\"120_CPU\",\n\"121_CPU\",\n\"122_CPU\",\n\"123_CPU\",\n\"124_CPU\",\n\"125_CPU\",\n\"126_CPU\",\n\"127_CPU\"\n]\n},\n{\n\"name\": \"domain_busy\",\n\"values\": [\n55518207375080,\n8102895398,\n12058227646,\n991838747162,\n28174147737,\n6669066926,\n14245801778,\n9009875224,\n118982762,\n177496844302,\n5888814259,\n167280195,\n3851617905,\n484154906167,\n91240285306,\n6180138216837,\n22111798640,\n344700584,\n266304074,\n2388625825,\n16,\n21,\n19\n],\n\"labels\": [\n\"idle\",\n\"kahuna\",\n\"storage\",\n\"exempt\",\n\"none\",\n\"raid\",\n\"raid_exempt\",\n\"xor_exempt\",\n\"target\",\n\"wafl_exempt\",\n\"wafl_mpcleaner\",\n\"sm_exempt\",\n\"protocol\",\n\"nwk_exempt\",\n\"network\",\n\"hostOS\",\n\"ssan_exempt\",\n\"unclassified\",\n\"kahuna_legacy\",\n\"ha\",\n\"ssan_exempt2\",\n\"exempt_ise\",\n\"zombie\"\n]\n},\n{\n\"name\": \"domain_shared\",\n\"values\": [\n0,\n153663450171,\n0,\n0,\n0,\n11834112384,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": [\n\"idle\",\n\"kahuna\",\n\"storage\",\n\"exempt\",\n\"none\",\n\"raid\",\n\"raid_exempt\",\n\"xor_exempt\",\n\"target\",\n\"wafl_exempt\",\n\"wafl_mpcleaner\",\n\"sm_exempt\",\n\"protocol\",\n\"nwk_exempt\",\n\"network\",\n\"hostOS\",\n\"ssan_exempt\",\n\"unclassified\",\n\"kahuna_legacy\",\n\"ha\",\n\"ssan_exempt2\",\n\"exempt_ise\",\n\"zombie\"\n]\n},\n{\n\"name\": \"dswitchto_cnt\",\n\"values\": [\n0,\n178192633,\n143964155,\n286324250,\n2365,\n39684121,\n5,\n10,\n10715325,\n22,\n7,\n30,\n2407970,\n7865489299,\n7870331008,\n265242,\n53,\n2535145,\n3252888,\n53334340,\n1,\n1,\n1\n],\n\"labels\": [\n\"idle\",\n\"kahuna\",\n\"storage\",\n\"exempt\",\n\"none\",\n\"raid\",\n\"raid_exempt\",\n\"xor_exempt\",\n\"target\",\n\"wafl_exempt\",\n\"wafl_mpcleaner\",\n\"sm_exempt\",\n\"protocol\",\n\"nwk_exempt\",\n\"network\",\n\"hostOS\",\n\"ssan_exempt\",\n\"unclassified\",\n\"kahuna_legacy\",\n\"ha\",\n\"ssan_exempt2\",\n\"exempt_ise\",\n\"zombie\"\n]\n},\n{\n\"name\": 
\"intr_cnt\",\n\"values\": [\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n724698481,\n8181275,\n488080162,\n0,\n71856,\n0,\n71856,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0,\n0\n],\n\"labels\": [\n\"dev_0\",\n\"dev_1\",\n\"dev_2\",\n\"dev_3\",\n\"dev_4\",\n\"dev_5\",\n\"dev_6\",\n\"dev_7\",\n\"dev_8\",\n\"dev_9\",\n\"dev_10\",\n\"dev_11\",\n\"dev_12\",\n\"dev_13\",\n\"dev_14\",\n\"dev_15\",\n\"dev_16\",\n\"dev_17\",\n\"dev_18\",\n\"dev_19\",\n\"dev_20\",\n\"dev_21\",\n\"dev_22\",\n\"dev_23\",\n\"dev_24\",\n\"dev_25\",\n\"dev_26\",\n\"dev_27\",\n\"dev_28\",\n\"dev_29\",\n\"dev_30\",\n\"dev_31\",\n\"dev_32\",\n\"dev_33\",\n\"dev_34\",\n\"dev_35\",\n\"dev_36\",\n\"dev_37\",\n\"dev_38\",\n\"dev_39\",\n\"dev_40\",\n\"dev_41\",\n\"dev_42\",\n\"dev_43\",\n\"dev_44\",\n\"dev_45\",\n\"dev_46\",\n\"dev_47\",\n\"dev_48\",\n\"dev_49\",\n\"dev_50\",\n\"dev_51\",\n\"dev_52\",\n\"dev_53\",\n\"dev_54\",\n\"dev_55\",\n\"dev_56\",\n\"dev_57\",\n\"dev_58\",\n\"dev_59\",\n\"dev_60\",\n\"dev_61\",\n\"dev_62\",\n\"dev_63\",\n\"dev_64\",\n\"dev_65\",\n\"dev_66\",\n\"dev_67\",\n\"dev_68\",\n\"dev_69\",\n\"dev_70\",\n\"dev_71\",\n\"dev_72\",\n\"dev_73\",\n\"dev_74\",\n\"dev_75\",\n\"dev_76\",\n\"dev_77\",\n\"dev_78\",\n\"dev_79\",\n\"dev_80\",\n\"dev_81\",\n\"dev_82\",\n\"dev_83\",\n\"dev_84\",\n\"dev_85\",\n\"dev_86\",\n\"dev_87\",\n\"dev_88\",\n\"dev_89\",\n\"dev_90\",\n\"dev_91\",\n\"dev_92\",\n\"dev_93\",\n\"dev_94\",\n\"dev_95\",\n\"dev_96\",\n\"dev_97\",\n\"dev_98\",\n\"dev_99\",\n\"dev_100\",\n\"dev_101\",\n\"dev_102\",\n\"dev_103\",\n\"dev_104\",\n\"dev_105\",\n\"dev_106\",\n\"dev_107\",\n\"dev_108\",\n\"dev_109\",\n\"dev_110\",\n\"dev_111\",\n\"dev_112\",\n\"dev_113\",\n\"dev_114\",\n\"dev_115\",\n\"dev_116\",\n\"dev_117\",\n\"dev_118\",\n\"dev_119\",\n\"dev_120\",\n\"dev_121\",\n\"dev_122\",\n\"dev_123\",\n\"dev_124\",\n\"dev_125\",\n\"dev_126\",\n\"dev_127\",\n\"dev_128\",\n\"dev_129\",\n\"dev_130\",\n\"dev_131\",\n\"dev_132\",\n\"dev_133\",\n\"dev_134\",\n\"dev_135\",\n\"dev_136\",\n\"dev_137\",\n\"dev_138\",\n\"dev_139\",\n\"dev_140\",\n\"dev_141\",\n\"dev_142\",\n\"dev_143\",\n\"dev_144\",\n\"dev_145\",\n\"dev_146\",\n\"dev_147\",\n\"dev_148\",\n\"dev_149\",\n\"dev_150\",\n\"dev_151\",\n\"dev_152\",\n\"dev_153\",\n\"dev_154\",\n\"dev_155\",\n\"dev_156\",\n\"dev_157\",\n\"dev_158\",\n\"dev_159\",\n\"dev_160\",\n\"dev_161\",\n\"dev_162\",\n\"dev_163\",\n\"dev_164\",\n\"dev_165\",\n\"dev_166\",\n\"dev_167\",\n\"dev_168\",\n\"dev_169\",\n\"dev_170\",\n\"dev_171\",\n\"dev_172\",\n\"dev_173\",\n\"dev_174\",\n\"dev_175\",\n\"dev_176\",\n\"dev_177\",\n\"dev_178\",\n\"dev_179\",\n\"dev_180\",\n\"dev_181\",\n\"dev_182\",\n\
"dev_183\",\n\"dev_184\",\n\"dev_185\",\n\"dev_186\",\n\"dev_187\",\n\"dev_188\",\n\"dev_189\",\n\"dev_190\",\n\"dev_191\",\n\"dev_192\",\n\"dev_193\",\n\"dev_194\",\n\"dev_195\",\n\"dev_196\",\n\"dev_197\",\n\"dev_198\",\n\"dev_199\",\n\"dev_200\",\n\"dev_201\",\n\"dev_202\",\n\"dev_203\",\n\"dev_204\",\n\"dev_205\",\n\"dev_206\",\n\"dev_207\",\n\"dev_208\",\n\"dev_209\",\n\"dev_210\",\n\"dev_211\",\n\"dev_212\",\n\"dev_213\",\n\"dev_214\",\n\"dev_215\",\n\"dev_216\",\n\"dev_217\",\n\"dev_218\",\n\"dev_219\",\n\"dev_220\",\n\"dev_221\",\n\"dev_222\",\n\"dev_223\",\n\"dev_224\",\n\"dev_225\",\n\"dev_226\",\n\"dev_227\",\n\"dev_228\",\n\"dev_229\",\n\"dev_230\",\n\"dev_231\",\n\"dev_232\",\n\"dev_233\",\n\"dev_234\",\n\"dev_235\",\n\"dev_236\",\n\"dev_237\",\n\"dev_238\",\n\"dev_239\",\n\"dev_240\",\n\"dev_241\",\n\"dev_242\",\n\"dev_243\",\n\"dev_244\",\n\"dev_245\",\n\"dev_246\",\n\"dev_247\",\n\"dev_248\",\n\"dev_249\",\n\"dev_250\",\n\"dev_251\",\n\"dev_252\",\n\"dev_253\",\n\"dev_254\",\n\"dev_255\"\n]\n},\n{\n\"name\": \"wafliron\",\n\"values\": [\n0,\n0,\n0\n],\n\"labels\": [\n\"iron_totstarts\",\n\"iron_nobackup\",\n\"iron_usebackup\"\n]\n}\n],\n\"aggregation\": {\n\"count\": 2,\n\"complete\": true\n},\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system:node/rows/umeng-aff300-02%3A1524afca-0580-11e8-ae74-00a098d390f2\"\n}\n}\n}\n],\n\"num_records\": 2,\n\"_links\": {\n\"self\": {\n\"href\": \"/api/cluster/counter/tables/system:node/rows?fields=*&return_records=true\"\n}\n}\n}\n

      "},{"location":"resources/rest-perf-metrics/#references","title":"References","text":"
      • Harvest REST Strategy
      • ONTAP 9.11.1 ONTAPI-to-REST Counter Manager Mapping
      • ONTAP REST API reference documentation
      • ONTAP REST API
      "},{"location":"resources/templates-and-metrics/","title":"Harvest Templates and Metrics","text":"

      Harvest collects ONTAP counter information, augments it, and stores it in a time-series DB. Refer to ONTAP Metrics for details about the ONTAP metrics exposed by Harvest.

      flowchart RL\n    Harvest[Harvest<br>Get & Augment] -- REST<br>ZAPI --> ONTAP\n    id1[(Prometheus<br>Store)] -- Scrape --> Harvest
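
      To make the scrape side of this diagram concrete, here's a minimal sketch. It assumes a poller named u2 whose Prometheus exporter listens on port 14002, the same poller name and port used in the examples later on this page.

      # Start the poller (assumed name: u2), then fetch what Prometheus would scrape\nbin/harvest start u2\n\n# 14002 is the exporter port assumed throughout this page\ncurl -s http://localhost:14002/metrics | head -5\n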

      Three concepts work in unison to collect ONTAP metrics data, prepare it and make it available to Prometheus.

      • ZAPI/REST
      • Harvest templates
      • Exporters

      We're going to walk through an example from a running system, focusing on the disk object.

      At a high level, Harvest templates describe which ZAPIs to send to ONTAP and how to interpret the responses.

      • ONTAP defines two ZAPIs to collect disk info
        • Config information is collected via storage-disk-get-iter
        • Performance counters are collected via disk:constituent
      • These ZAPIs are found in their corresponding object template files conf/zapi/cdot/9.8.0/disk.yaml and conf/zapiperf/cdot/9.8.0/disk.yaml. These files also describe how to map the ZAPI responses into a time-series-friendly format
      • Prometheus uniquely identifies a time series by its metric name and optional key-value pairs called labels.
      "},{"location":"resources/templates-and-metrics/#handy-tools","title":"Handy Tools","text":"
      • dasel is useful to convert between XML, YAML, JSON, etc. We'll use it to make displaying some of the data easier.
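
      As a quick sketch of how dasel helps here, the command below converts a saved REST response from JSON to YAML. The flags are from dasel v2, so check dasel --help if your version differs; response.json is a placeholder file name.

      # Convert a saved REST response from JSON to YAML (dasel v2 flags; response.json is a placeholder)\ndasel -f response.json -r json -w yaml\n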
      "},{"location":"resources/templates-and-metrics/#ontap-zapi-disk-example","title":"ONTAP ZAPI disk example","text":"

      We'll use the bin/harvest zapi tool to interrogate the cluster and gather information about the counters. This is one way you can send ZAPIs to ONTAP and explore the return types and values.

      bin/harvest zapi -p u2 show attrs --api storage-disk-get-iter\n

      Output edited for brevity and line numbers added on left

      The hierarchy and return type of each counter are shown below. We'll use this hierarchy to build a matching Harvest template. For example, line 3 is the bytes-per-sector counter, which has an integer value and is a child of storage-disk-info > disk-inventory-info.

      To capture that counter's value as a metric in Harvest, the ZAPI template must use the same hierarchical path. The matching path can be seen below.

      building tree for attribute [attributes-list] => [storage-disk-info]\n\n 1 [storage-disk-info]            -               *\n 2   [disk-inventory-info]        -                \n 3     [bytes-per-sector]         -         integer\n 4     [capacity-sectors]         -         integer\n 5     [disk-type]                -          string\n 6     [is-shared]                -         boolean\n 7     [model]                    -          string\n 8     [serial-number]            -          string\n 9     [shelf]                    -          string\n10     [shelf-bay]                -          string\n11   [disk-name]                  -          string\n12   [disk-ownership-info]        -                \n13     [home-node-name]           -          string\n14     [is-failed]                -         boolean\n15     [owner-node-name]          -          string\n16   [disk-raid-info]             -                \n17     [container-type]           -          string\n18     [disk-outage-info]         -                \n19       [is-in-fdr]              -         boolean\n20       [reason]                 -          string  \n21   [disk-stats-info]            -                \n22     [average-latency]          -         integer\n23     [disk-io-kbps]             -         integer\n24     [power-on-time-interval]   -         integer\n25     [sectors-read]             -         integer\n26     [sectors-written]          -         integer\n27   [disk-uid]                   -          string\n28   [node-name]                  -          string\n29   [storage-disk-state]         -         integer\n30   [storage-disk-state-flags]   -         integer\n
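
      If you have the Harvest conf directory on hand, you can confirm that the shipped disk template mirrors this hierarchy. A quick grep over the template path referenced on this page shows the disk-inventory-info block and its bytes-per-sector child:

      # Run from the Harvest install directory; template path taken from this page\ngrep -n -A 3 'disk-inventory-info' conf/zapi/cdot/9.8.0/disk.yaml\n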
      "},{"location":"resources/templates-and-metrics/#harvest-templates","title":"Harvest Templates","text":"

      To understand templates, there are a few concepts to cover:

      There are three kinds of information included in templates that define what Harvest collects and exports:

      1. Configuration information is exported into the _labels metric (e.g. disk_labels; see below)
      2. Metric data is exported as disk_<metric name>, e.g. disk_bytes_per_sector, disk_sectors, etc. Metrics are leaf nodes that are not prefixed with a ^ or ^^. Metrics must be one of the number types: float or int.
      3. Plugins may add additional metrics, increasing the number of metrics exported in #2

      A resource will typically have multiple instances. Using disk as an example, that means there will be one disk_labels row plus one row per metric for each instance. If we have 24 disks and the disk template lists seven metrics to capture, Harvest will export a total of 192 rows of Prometheus data.

      24 instances * (7 metrics per instance + 1 label per instance) = 192 rows

      Sum of disk metrics that Harvest exports

      curl -s 'http://localhost:14002/metrics' | grep ^disk | cut -d'{' -f1 | sort | uniq -c\n  24 disk_bytes_per_sector\n  24 disk_labels\n  24 disk_sectors\n  24 disk_stats_average_latency\n  24 disk_stats_io_kbps\n  24 disk_stats_sectors_read\n  24 disk_stats_sectors_written\n  24 disk_uptime\n# 192 rows \n

      Read on to see how we control which labels from #1 and which metrics from #2 are included in the exported data.

      "},{"location":"resources/templates-and-metrics/#instance-keys-and-labels","title":"Instance Keys and Labels","text":"
      • Instance key - An instance key defines the set of attributes Harvest uses to construct a key that uniquely identifies an object. For example, the disk template uses the node + disk attributes to determine uniqueness. Using node or disk alone wouldn't be sufficient since disks on separate nodes can have the same name. If a single label does not uniquely identify an instance, combine multiple keys for uniqueness. Instance keys must refer to attributes that are of type string.

      Because instance keys define uniqueness, these keys are also added to each metric as a key-value pair (see Control What Labels and Metrics are Exported for examples).

      • Instance label - Labels are key-value pairs used to gather configuration information about each instance. All of the key-value pairs are combined into a single metric named disk_labels. There will be one disk_labels for each monitored instance. Here's an example reformatted so it's easier to read:
      disk_labels{\n  datacenter=\"dc-1\",\n  cluster=\"umeng-aff300-05-06\",\n  node=\"umeng-aff300-06\",\n  disk=\"1.1.23\",\n  type=\"SSD\",\n  model=\"X371_S1643960ATE\",\n  outage=\"\",\n  owner_node=\"umeng-aff300-06\",\n  shared=\"true\",\n  shelf=\"1\",\n  shelf_bay=\"23\",\n  serial_number=\"S3SENE0K500532\",\n  failed=\"false\",\n  container_type=\"shared\"\n}\n
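
      Because disk_labels always has the value 1 and carries the instance keys, you can join its config labels onto any disk metric in PromQL. The sketch below assumes Prometheus is scraping this Harvest poller and is reachable at localhost:9090; adjust the URL and the label list to suit your setup.

      # Sketch: attach model, shelf, and serial_number from disk_labels to disk_sectors\ncurl -s 'http://localhost:9090/api/v1/query' --data-urlencode 'query=disk_sectors * on(datacenter, cluster, node, disk) group_left(model, shelf, serial_number) disk_labels'\n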
      "},{"location":"resources/templates-and-metrics/#harvest-object-template","title":"Harvest Object Template","text":"

      Continuing with the disk example, below is the conf/zapi/cdot/9.8.0/disk.yaml that tells Harvest which ZAPI to send to ONTAP (storage-disk-get-iter) and describes how to interpret and export the response.

      • Line 1 defines the name of this resource and is an exact match to the object defined in your default.yaml or custom.yaml file, e.g.:
      # default.yaml\nobjects:\n  Disk:  disk.yaml\n
      • Line 2 is the name of the ZAPI that Harvest will send to collect disk resources
      • Line 3 is the prefix used to export metrics associated with this object. i.e. all metrics will be of the form disk_*
      • Line 5 starts the counters section, where we define the metrics, labels, and what constitutes instance uniqueness
      • Line 7 uses the double-hat prefix ^^, which marks this attribute as an instance key used to determine uniqueness. Instance keys are also included as labels. UUIDs are good choices for uniqueness
      • Line 13 uses the single-hat prefix ^, which means this attribute should be stored as a label. That means we can include it in the export_options section as one of the key-value pairs in disk_labels
      • Lines 10, 11, 23, 24, 25, 26, 27 are the metric rows; metrics are leaf nodes that are not prefixed with a ^ or ^^. If you refer back to the ONTAP ZAPI disk example above, you'll notice each of these attributes is an integer type.
      • Line 43 defines the set of labels to use when constructing the disk_labels metrics. As mentioned above, these labels capture config-related attributes per instance.

      Output edited for brevity and line numbers added for reference.

       1  name:             Disk\n 2  query:            storage-disk-get-iter\n 3  object:           disk\n 4  \n 5  counters:\n 6    storage-disk-info:\n 7      - ^^disk-uid\n 8      - ^^disk-name               => disk\n 9      - disk-inventory-info:\n10        - bytes-per-sector        => bytes_per_sector        # notice this has the same hierarchical path we saw from bin/harvest zapi\n11        - capacity-sectors        => sectors\n12        - ^disk-type              => type\n13        - ^is-shared              => shared\n14        - ^model                  => model\n15        - ^serial-number          => serial_number\n16        - ^shelf                  => shelf\n17        - ^shelf-bay              => shelf_bay\n18      - disk-ownership-info:\n19        - ^home-node-name         => node\n20        - ^owner-node-name        => owner_node\n21        - ^is-failed              => failed\n22      - disk-stats-info:\n23        - average-latency\n24        - disk-io-kbps\n25        - power-on-time-interval  => uptime\n26        - sectors-read\n27        - sectors-written\n28      - disk-raid-info:\n29        - ^container-type         => container_type\n30        - disk-outage-info:\n31          - ^reason               => outage\n32  \n33  plugins:\n34    - LabelAgent:\n35      # metric label zapi_value rest_value `default_value`\n36      value_to_num:\n37        - new_status outage - - `0` #ok_value is empty value, '-' would be converted to blank while processing.\n38  \n39  export_options:\n40    instance_keys:\n41      - node\n42      - disk\n43    instance_labels:\n44      - type\n45      - model\n46      - outage\n47      - owner_node\n48      - shared\n49      - shelf\n50      - shelf_bay\n51      - serial_number\n52      - failed\n53      - container_type\n
      "},{"location":"resources/templates-and-metrics/#control-what-labels-and-metrics-are-exported","title":"Control What Labels and Metrics are Exported","text":"

      Let's continue with disk and look at a few examples. We'll use curl to examine the Prometheus wire format that Harvest uses to export the metrics from conf/zapi/cdot/9.8.0/disk.yaml.

      The curl below shows all exported disk metrics. There are 24 disks on this cluster, and Harvest is collecting seven metrics, one disk_labels, and one plugin-created metric (disk_new_status), for a total of 216 rows.

      curl -s 'http://localhost:14002/metrics' | grep ^disk | cut -d'{' -f1 | sort | uniq -c\n  24 disk_bytes_per_sector           # metric\n  24 disk_labels                     # labels \n  24 disk_new_status                 # plugin created metric \n  24 disk_sectors                    # metric \n  24 disk_stats_average_latency      # metric   \n  24 disk_stats_io_kbps              # metric \n  24 disk_stats_sectors_read         # metric   \n  24 disk_stats_sectors_written      # metric  \n  24 disk_uptime                     # metric\n# sum = ((7 + 1 + 1) * 24 = 216 rows)\n

      Here's a disk_labels for one instance, reformatted to make it easier to read.

      curl -s 'http://localhost:14002/metrics' | grep ^disk_labels | head -1\n\ndisk_labels{\n  datacenter = \"dc-1\",                 # always included - value taken from datacenter in harvest.yml\n  cluster = \"umeng-aff300-05-06\",      # always included\n  node = \"umeng-aff300-06\",            # node is in the list of export_options instance_keys\n  disk = \"1.1.13\",                     # disk is in the list of export_options instance_keys\n  type = \"SSD\",                        # remainder are included because they are listed in the template's instance_labels\n  model = \"X371_S1643960ATE\",\n  outage = \"\",\n  owner_node = \"umeng-aff300-06\",\n  shared = \"true\",\n  shelf = \"1\",\n  shelf_bay = \"13\",\n  serial_number = \"S3SENE0K500572\",\n  failed = \"false\",\n  container_type = \"\",\n} 1.0\n

      Here's the disk_sectors metric for a single instance.

      curl -s 'http://localhost:14002/metrics' | grep ^disk_sectors | head -1\n\ndisk_sectors{                          # prefix of disk_ + metric name (line 11 in template)\n  datacenter = \"dc-1\",                 # always included - value taken from datacenter in harvest.yml\n  cluster = \"umeng-aff300-05-06\",      # always included\n  node = \"umeng-aff300-06\",            # node is in the list of export_options instance_keys\n  disk = \"1.1.17\",                     # disk is in the list of export_options instance_keys\n} 1875385008                           # metric value - number of sectors for this disk instance\n
      Number of rows for each template = number of instances * (number of metrics + 1 (for <name>_labels row) + plugin additions)\nNumber of metrics                = number of counters which are not labels or keys, those without a ^ or ^^\n
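
      As a sanity check of that formula against a live exporter, and assuming the same cluster as above (24 disks, seven metrics, one labels row, one plugin-created metric), the expected and actual counts should match:

      expected=$((24 * (7 + 1 + 1)))   # instances * (metrics + labels row + plugin-created metrics)\nactual=$(curl -s 'http://localhost:14002/metrics' | grep -c '^disk_')\necho \"expected=$expected actual=$actual\"\n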
      "},{"location":"resources/templates-and-metrics/#common-errors-and-troubleshooting","title":"Common Errors and Troubleshooting","text":""},{"location":"resources/templates-and-metrics/#1-failed-to-parse-any-metrics","title":"1. Failed to parse any metrics","text":"

      You add a new template to Harvest, restart your poller, and get an error message:

      WRN ./poller.go:649 > init collector-object (Zapi:NetPort): no metrics => failed to parse any\n

      This means the collector, Zapi NetPort, was unable to find any metrics. Recall that metrics are counters that are not prefixed with a ^ or ^^. In cases where you don't have any metrics but still want to collect labels, add collect_only_labels: true to your template. This flag tells Harvest to ignore the missing metrics and continue. Example.

      "},{"location":"resources/templates-and-metrics/#2-missing-data","title":"2. Missing Data","text":"
      1. What happens if an attribute is listed in instance_labels (line 43 above), but that label is missing from the counters section that begins at line 5?

      The label will still be written into disk_labels, but its value will be empty since the attribute is missing. For example, if line 29 were deleted, container_type would still be present in disk_labels{container_type=\"\"}.

      "},{"location":"resources/templates-and-metrics/#prometheus-wire-format","title":"Prometheus Wire Format","text":"

      https://prometheus.io/docs/instrumenting/exposition_formats/

      Keep in mind that Prometheus does not permit dashes (-) in metric or label names. That's why Harvest templates use name replacement with => to convert dashed names into underscored names, e.g. bytes-per-sector => bytes_per_sector produces the Prometheus-accepted bytes_per_sector.

      Every time series is uniquely identified by its metric name and optional key-value pairs called labels.

      Labels enable Prometheus's dimensional data model: any combination of labels for the same metric name identifies a particular dimensional instantiation of that metric (for example: all HTTP requests that used the method POST to the /api/tracks handler). The query language allows filtering and aggregation based on these dimensions. Changing any label value, including adding or removing a label, will create a new time series.

      <metric_name>{<label_name>=<label_value>, ...} value [ timestamp ]

      • metric_name and label_name carry the usual Prometheus expression language restrictions
      • label_value can be any sequence of UTF-8 characters, but the backslash (\\), double-quote (\"), and line feed (\\n) characters have to be escaped as \\\\, \\\", and \\n, respectively.
      • value is a float represented as required by Go's ParseFloat() function. In addition to standard numerical values, NaN, +Inf, and -Inf are valid values representing not a number, positive infinity, and negative infinity, respectively.
      • timestamp is an int64 (milliseconds since epoch, i.e. 1970-01-01 00:00:00 UTC, excluding leap seconds), represented as required by Go's ParseInt() function
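
      Putting those pieces together, here is a sketch of what valid exposition lines look like. The values and the second metric name are made up; the final field on the last line is the optional millisecond timestamp.

      # Illustrative exposition lines; values are made up\ncat <<'EOF'\ndisk_sectors{datacenter=\"dc-1\",cluster=\"umeng-aff300-05-06\",node=\"umeng-aff300-06\",disk=\"1.1.17\"} 1875385008\nexample_metric_with_ts{job=\"harvest\"} 42 1712000000000\nEOF\n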

      Exposition formats

      "}]} \ No newline at end of file diff --git a/nightly/sitemap.xml.gz b/nightly/sitemap.xml.gz index c91514b39..496fff6b3 100644 Binary files a/nightly/sitemap.xml.gz and b/nightly/sitemap.xml.gz differ