
Commit

Merge branch 'main' into DOCGUILD-22813
tcarter-splunk committed Sep 12, 2023
2 parents 24250a2 + fece9ac commit 7d84648
Showing 56 changed files with 339 additions and 121 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -32,4 +32,4 @@ and the [LICENSE](LICENSE) specific to this repository.

You can contribute new documentation and edits to the existing documentation.

###
####
4 changes: 2 additions & 2 deletions _includes/benefits-events.md
@@ -1,5 +1,5 @@
Configure the integration to access these features:

- View events. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see [View dashboards in Observability Cloud](https://docs.splunk.com/Observability/data-visualization/dashboards/view-dashboards.html#nav-View-dashboards).
- View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see [Splunk Infrastructure Monitoring navigators](https://docs.splunk.com/Observability/infrastructure/navigators/navigators.html#nav-Splunk-Infrastructure-Monitoring-navigators).
- View events. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see {ref}`view-dashboards`.
- View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see {ref}`use-navigators-imm`.

6 changes: 3 additions & 3 deletions _includes/benefits.md
@@ -1,5 +1,5 @@
After you configure the integration, you can access these features:

- View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see [View dashboards in Observability Cloud](https://docs.splunk.com/Observability/data-visualization/dashboards/view-dashboards.html#nav-View-dashboards).
- View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see [Splunk Infrastructure Monitoring navigators](https://docs.splunk.com/Observability/infrastructure/navigators/navigators.html#nav-Splunk-Infrastructure-Monitoring-navigators).
- Access the Metric Finder and search for metrics sent by the monitor. For information, see [Use the Metric Finder](https://docs.splunk.com/Observability/metrics-and-metadata/metrics-finder-metadata-catalog.html#use-the-metric-finder).
- View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see {ref}`view-dashboards`.
- View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see {ref}`use-navigators-imm`.
- Access the Metric Finder and search for metrics sent by the monitor. For information, see {ref}`metrics-finder-and-metadata-catalog`.
6 changes: 3 additions & 3 deletions _includes/configuration.md
@@ -3,8 +3,8 @@ To use this integration of a Smart Agent monitor with the Collector:
1. Include the Smart Agent receiver in your configuration file.
2. Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.

- Read more on how to [Use Smart Agent monitors with the Collector](https://docs.splunk.com/Observability/gdi/opentelemetry/smart-agent-migration-monitors.html).
- See how to set up the [Smart Agent receiver](https://docs.splunk.com/observability/gdi/opentelemetry/components/smartagent-receiver.html).
- Learn about config options in [Collector default configuration](https://docs.splunk.com/Observability/gdi/opentelemetry/configure-the-collector-ootb.html#nav-Collector-default-configuration).
- Read more on how to {ref}`migration-monitors`.
- See how to set up the {ref}`smartagent-receiver`.
- Learn about config options in {ref}`otel-configuration-ootb`.
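
For illustration only, a minimal Collector configuration following these steps might look like the sketch below. The `haproxy` monitor type, host, and port are placeholders, and the processors and exporters referenced in the pipeline are assumed to be defined elsewhere in the same file:

```yaml
receivers:
  smartagent/haproxy:        # any Smart Agent monitor type works here
    type: haproxy
    host: localhost
    port: 9000

service:
  pipelines:
    metrics:
      receivers: [smartagent/haproxy]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
```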


6 changes: 3 additions & 3 deletions _includes/metric-defs.md
@@ -1,4 +1,4 @@
- Learn more about the available [metric types](https://docs.splunk.com/Observability/metrics-and-metadata/metric-types.html#nav-Metric-types) in Observability Cloud.
- In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See more about [metric categories](https://docs.splunk.com/Observability/metrics-and-metadata/metric-categories.html#nav-Metric-categories).
- To learn more about the metric types available in Observability Cloud, see {ref}`metric-types`.
- In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See {ref}`metric-categories` for more information.
- In MTS-based subscription plans, all metrics are custom.
- To add additional metrics, see how to configure `extraMetrics` [using the Collector](https://docs.splunk.com/Observability/gdi/opentelemetry/components/smartagent-receiver.html#add-additional-metrics).
- To add additional metrics, see how to configure `extraMetrics` in {ref}`otel-sareceiver-extrametrics`.
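
As a hedged sketch of the `extraMetrics` option (the `postgresql` monitor and the metric names are placeholders, not recommendations), additional metrics can be listed on the Smart Agent receiver like this:

```yaml
receivers:
  smartagent/postgresql:
    type: postgresql
    host: localhost
    port: 5432
    extraMetrics:
      - "*"                       # send every metric the monitor can emit
      # - postgres_queries_total  # or list individual metric names instead
```
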
2 changes: 1 addition & 1 deletion _includes/missing_pipeline_configuration.md
@@ -20,4 +20,4 @@ service:
exporters: [otlp, jaeger, zipkin]
```

See [How the OpenTelemetry Collector uses pipelines to process data](https://docs.splunk.com/observability/gdi/opentelemetry/data-processing.html#common-processing-scenarios) for more information.
See {ref}`otel-data-processing` for more information.
4 changes: 2 additions & 2 deletions _includes/out_of_memory_error.md
@@ -2,7 +2,7 @@

If you receive high memory usage or out of memory warnings, do the following before opening a support case:

1. Verify that you have installed the latest version of the [Splunk Distribution of OpenTelemetry Collector for Kubernetes](https://github.com/signalfx/splunk-otel-collector-chart/releases).
1. Verify that you have installed the latest version of the <a class="external" href="https://github.com/signalfx/splunk-otel-collector-chart/releases" target="_blank">Splunk Distribution of OpenTelemetry Collector for Kubernetes</a>.
2. Add or change the `memory_limiter` processor in your configuration file. For example:
```
processors:
@@ -37,4 +37,4 @@ For example, if you discover that the pod lasts 5 minutes before it gets killed:

How long does it take for the pod to be killed due to memory limit? Check the logs at the time of the issue to see if there are any obvious repeating errors.

Gather additional [support information](https://docs.splunk.com/Observability/gdi/opentelemetry/support-checklist.html#nav-Gather-information-to-open-a-support-request), including your end-to-end architecture information.
Gather additional support information, including your end-to-end architecture information. See {ref}`otel-support-checklist`.
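
As a rough sketch only (the interval and limits are illustrative values, not recommendations), a `memory_limiter` block generally takes this shape:

```yaml
processors:
  memory_limiter:
    check_interval: 2s      # how often memory usage is checked
    limit_mib: 1600         # hard limit for the Collector's memory use
    spike_limit_mib: 400    # extra headroom allowed for short-lived spikes
```
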
12 changes: 6 additions & 6 deletions _includes/troubleshooting.md
@@ -2,17 +2,17 @@

If you are not able to see your data in Splunk Observability Cloud, try these tips:

- Submit a case in the [Splunk Support Portal](https://splunkcommunities.force.com/customers/home/home.jsp)
- Submit a case in the <a class="external" href="https://splunkcommunities.force.com/customers/home/home.jsp" target="_blank">Splunk Support Portal</a>.
- Available to Splunk Observability Cloud customers

- Call [Splunk Customer Support](https://www.splunk.com/en_us/about-splunk/contact-us.html#customer-support)
- Call <a class="external" href="https://www.splunk.com/en_us/about-splunk/contact-us.html#customer-support" target="_blank">Splunk Customer Support</a>
- Available to Splunk Observability Cloud customers

- Ask a question and get answers through community support at [Splunk Answers](https://community.splunk.com/t5/Splunk-Observability-Cloud/bd-p/it-signalfx)
- Ask a question and get answers through community support at <a class="external" href="https://community.splunk.com/t5/Splunk-Observability-Cloud/bd-p/it-signalfx" target="_blank">Splunk Answers</a>.
- Available to Splunk Observability Cloud customers and free trial users

- Join the Splunk [#observability](https://splunk-usergroups.slack.com/archives/C01AK4CCWR4) user group Slack channel to communicate with customers, partners, and Splunk employees worldwide
- Join the Splunk <a class="external" href="https://splunk-usergroups.slack.com/archives/C01AK4CCWR4" target="_blank">#observability</a> user group Slack channel to communicate with customers, partners, and Splunk employees worldwide
- Available to Splunk Observability Cloud customers and free trial users
- To learn how to join, see [Get Started with Splunk Community - Chat groups](https://docs.splunk.com/Documentation/Community/current/community/Chat)
- To learn how to join, see <a class="external" href="https://docs.splunk.com/Documentation/Community/current/community/Chat" target="_blank">Get Started with Splunk Community - Chat groups</a>.

To learn about even more support options, see [Splunk Customer Success](https://www.splunk.com/en_us/customer-success.html).
To learn about even more support options, see <a class="external" href="https://www.splunk.com/en_us/customer-success.html" target="_blank">Splunk Customer Success</a>.
2 changes: 1 addition & 1 deletion _static/custom.css
@@ -298,7 +298,7 @@ width: 100%;
height: 100%;
}

a.external:after {
a.external:not(.image-reference):after {
content: "";
width: 11px !important;
height: 11px;
4 changes: 2 additions & 2 deletions _static/signalfx-alabaster.css
@@ -247,7 +247,7 @@ div.sphinxsidebar hr {

a {
color: #004B6B;
text-decoration: underline;
text-decoration: none !important;
}

a:hover {
@@ -1080,4 +1080,4 @@ a.image-reference:hover
100% {
transform: translate(24px, 0);
}
}
}
@@ -50,7 +50,7 @@ The Splunk OpenTelemetry Lambda Layer supports the following runtimes in AWS Lam
- Python 3.8 and 3.9
- Node.js 14 and higher
- Ruby 2.7
- Go 1.18
- Go 1.19

The Lambda Layer requires 49 MB on-disk in standard x86_64 systems.

8 changes: 4 additions & 4 deletions gdi/monitors-cloud/heroku.md
@@ -9,7 +9,7 @@ The Splunk OpenTelemetry Connector for Heroku is a buildpack for the Splunk Dist
* Splunk APM through the `sapm` exporter. The `otlphttp` exporter can be used with a custom configuration.
* Splunk Infrastructure Monitoring through the `signalfx` exporter.

To learn more about the Splunk Distribution of OpenTelemetry Collector, see [Install and configure the Splunk Distribution of OpenTelemetry Collector](../opentelemetry/opentelemetry.rst)
To learn more about the Splunk Distribution of OpenTelemetry Collector, see {ref}`otel-intro`.

## Installation

@@ -29,7 +29,7 @@ Follow these steps to collect metrics with the Heroku buildpack for the Splunk D

**_Note:_** Running the `heroku` command outside of project directories results in unexpected behavior.

2. Configure the Heroku app to expose Dyno metadata, which is required by Splunk OpenTelemetry Connector to set global dimensions such as `app_name`, `app_id` and `dyno_id`. See [here](https://devcenter.heroku.com/articles/dyno-metadata) for more information.
2. Configure the Heroku app to expose Dyno metadata, which is required by Splunk OpenTelemetry Connector to set global dimensions such as `app_name`, `app_id` and `dyno_id`. See <a class="external" href="https://devcenter.heroku.com/articles/dyno-metadata" target="_blank">here</a> for more information.

``` bash
heroku labs:enable runtime-dyno-metadata
@@ -84,8 +84,8 @@ Use the following environment variables to configure the Heroku buildpack.

| Environment Variable | Required | Default | Description |
| ---------------------- | -------- | ------- | ------------------------------------------------------------------------- |
| `SPLUNK_ACCESS_TOKEN` | Yes | | [Splunk access token](https://docs.splunk.com/Observability/admin/authentication-tokens/org-tokens.html#admin-org-tokens). |
| `SPLUNK_REALM` | Yes | | [Splunk realm](https://dev.splunk.com/observability/docs/realms_in_endpoints/). |
| `SPLUNK_ACCESS_TOKEN` | Yes | | Splunk access token. |
| `SPLUNK_REALM` | Yes | | <a class="external" href="https://dev.splunk.com/observability/docs/realms_in_endpoints/" target="_blank">Splunk realm</a> |
| `SPLUNK_API_URL` | No | `https://api.SPLUNK_REALM.signalfx.com` | The Splunk API base URL. |
| `SPLUNK_CONFIG` | No | `/app/config.yaml` | The configuration to use. `/app/.splunk/config.yaml` used if default not found. |
| `SPLUNK_INGEST_URL` | No | `https://ingest.SPLUNK_REALM.signalfx.com` | The Splunk Infrastructure Monitoring base URL. |
2 changes: 1 addition & 1 deletion gdi/monitors-conviva/conviva.md
@@ -9,7 +9,7 @@ The {ref}`Splunk Distribution of OpenTelemetry Collector <otel-intro>` uses the
This integration uses version 2.4 of the Conviva Experience Insights REST APIs.

Only `Live` conviva metrics listed on the
[Conviva Developer Community](https://community.conviva.com/site/global/apis_data/experience_insights_api/index.gsp#metrics) page are supported. All metrics are gauges. The Conviva metrics are converted to Splunk Observability Cloud metrics with dimensions named account and filter. The account dimension is the name of the Conviva account and the filter dimension is the name of the Conviva filter applied to the metric. In the case of MetricLenses, the constituent MetricLens metrics and MetricLens dimensions are included. The values of the MetricLens dimensions are derived from the values of the associated MetricLens dimension entities.
<a class="external" href="https://community.conviva.com/site/global/apis_data/experience_insights_api/index.gsp#metrics" target="_blank">Conviva Developer Community</a> page are supported. All metrics are gauges. The Conviva metrics are converted to Splunk Observability Cloud metrics with dimensions named account and filter. The account dimension is the name of the Conviva account and the filter dimension is the name of the Conviva filter applied to the metric. In the case of MetricLenses, the constituent MetricLens metrics and MetricLens dimensions are included. The values of the MetricLens dimensions are derived from the values of the associated MetricLens dimension entities.

## Benefits

4 changes: 2 additions & 2 deletions gdi/monitors-databases/apache-couchdb.md
@@ -18,9 +18,9 @@ This integration is only available on Kubernetes and Linux.
1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
- {ref}`Install on Kubernetes <otel-install-k8s>`
- {ref}`Install on Linux <otel-install-linux>`
2. Download [collectd-couchdb](https://github.com/signalfx/collectd-couchdb) in GitHub.
2. Download <a class="external" href="https://github.com/signalfx/collectd-couchdb" target="_blank">collectd-couchdb</a> in GitHub.
3. Move the `couchdb_plugin.py` file to `/usr/share/collectd/collectd-couchdb`.
4. Modify the [sample configuration file](https://github.com/signalfx/integrations/blob/master/collectd-couchdb/10-couchdb.conf) for the plugin in `/etc/collectd/managed_config` according to the [Configuration](#configuration) section.
4. Modify the <a class="external" href="https://github.com/signalfx/integrations/blob/master/collectd-couchdb/10-couchdb.conf" target="_blank">sample configuration file</a> for the plugin in `/etc/collectd/managed_config` according to the configuration section.
5. Install the Python requirements:

```
2 changes: 1 addition & 1 deletion gdi/monitors-databases/apache-spark.md
@@ -63,7 +63,7 @@ service:
receivers: [smartagent/collectd_spark_worker]
```
**Note:** The names `collectd_spark_master` and `collectd_spark_worker` are for identification purposes only and don't affect functionality. You can use either name in your configuration, but you need to select distinct monitor configurations and discovery rules for master and worker processes. For the master configuration, see the `isMaster` field in the [Configuration settings](#configuration-settings) section.
**Note:** The names `collectd_spark_master` and `collectd_spark_worker` are for identification purposes only and don't affect functionality. You can use either name in your configuration, but you need to select distinct monitor configurations and discovery rules for master and worker processes. For the master configuration, see the `isMaster` field in the configuration settings section.
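
As a hedged illustration of that point (host names, ports, and cluster type are placeholders), distinct master and worker configurations might look like this:

```yaml
receivers:
  smartagent/collectd_spark_master:
    type: collectd/spark
    host: localhost
    port: 8080
    clusterType: Standalone
    isMaster: true
  smartagent/collectd_spark_worker:
    type: collectd/spark
    host: localhost
    port: 8081
    clusterType: Standalone
    isMaster: false
```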

## Configuration settings

2 changes: 1 addition & 1 deletion gdi/monitors-databases/cassandra.md
@@ -59,7 +59,7 @@ The following table shows the configuration options for this integration:
| `customDimensions` | no | `map of strings` | This object specifies custom dimensions to add at the connection level. |
| `mBeansToCollect` | no | `list of strings` | This array specifies a list of the MBeans defined in `mBeanDefinitions` that you want to collect. If you don't provide the array, the monitor collects all defined MBeans. |
| `mBeansToOmit` | no | `list of strings` | This array specifies a list of the MBeans defined in `mBeanDefinitions` that you want to omit. Use this list when you want to omit only a few MBeans from the default list. |
| `mBeanDefinitions` | no | `map of objects` (see the following table for details) | This object specifies how to map JMX MBean values to metrics. Cassandra comes pre-loaded with a set of mappings. Any mappings that you add in this option are merged with the pre-loaded ones. To learn more, see [https://collectd.org/documentation/manpages/collectd-java.5.shtml#genericjmx_plugin](https://collectd.org/documentation/manpages/collectd-java.5.shtml#genericjmx_plugin). |
| `mBeanDefinitions` | no | `map of objects` (see the following table for details) | This object specifies how to map JMX MBean values to metrics. Cassandra comes pre-loaded with a set of mappings. Any mappings that you add in this option are merged with the pre-loaded ones. To learn more, see <a class="external" href="https://collectd.org/documentation/manpages/collectd-java.5.shtml#genericjmx_plugin" target="_blank">https://collectd.org/documentation/manpages/collectd-java.5.shtml#genericjmx_plugin</a> |
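
As a hedged sketch of how a few of these options fit together (the host, port, and omitted MBean name are placeholders, not recommendations):

```yaml
receivers:
  smartagent/cassandra:
    type: collectd/cassandra
    host: localhost
    port: 7199             # JMX port
    mBeansToOmit:
      - threading          # drop one MBean from the default list
```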


The `mBeanDefinitions` configuration option has the following fields:
