The Salesforce Exporter offers an integration to process and forward Salesforce data to New Relic as either logs or events. The exporter currently supports sending the results of an SOQL query (with special handling for event log file queries) and sending information on Salesforce Org Limits.
The New Relic Salesforce Exporter can be run on any host environment with Python 3.9+ installed.
It can also be run inside a Docker container by leveraging
the published Docker image
directly, as a base image for
building a custom image, or using
the provided Dockerfile
to build a custom image.
In addition, the Salesforce Exporter requires the use of a Salesforce connected app in order to extract data via the Salesforce APIs. The connected app must be configured to allow OAuth authentication and authorization for API integration. See the Authentication section for more information.
To use the Salesforce Exporter on a host, perform the following steps.
- Clone this repository
- Run
pip install -r requirements.txt
to install dependencies
Once installed, the Salesforce Exporter can be run from the repository root
using the command python src/__main__.py
. See the section
Command Line Options and Configuration
for more details on using the exporter.
New Relic recommends that you update the Salesforce Exporter regularly and at a minimum every 3 months. To check that you are running the most current version, perform the following steps.
-
Navigate to the repository root
-
Run
git describe --tags --abbrev=0
-
Run
git describe --tags --abbrev=0 origin/main
-
Compare the output of the two commands. If they are the same, no action is required. If the version listed in the output of the second command is greater than the version listed in the output of the first command, perform the steps listed below.
NOTE: The above steps may not work if custom tags or branches have been added to the local repository or when operating on a fork. For these scenarios, please consult the git documentation for the appropriate commands to use to compare the latest tag in this repository with the latest tag in the local repository or fork.
To upgrade the Salesforce Exporter on a host, perform the following steps.
-
Navigate to the repository root.
-
Run the following commands.
git checkout main git pull origin/main
NOTE: The above steps may not work if there are pending changes to the local repository or when operating on a fork. For these scenarios, please consult the git documentation for the appropriate commands to use to update the local repository or fork to the latest version.
A Docker image for the Salesforce Exporter is available at https://hub.docker.com/r/newrelic/newrelic-salesforce-exporter. This image can be used in one of two ways.
Run directly from DockerHub
The Salesforce Exporter Docker image
can be run directly from DockerHub. To do this, the
config.yml
must be mapped into the running container. It can
be mapped using the default filename or using a custom filename. In the case of
the latter, the -f
command line option must be
specified with the custom filename. One of the same methods can be used to map
an event type fields mapping file and/or a
numeric fields mapping file into the running
container. In addition, environment variables can be passed to the container
using docker run
with the -e
, --env
, or --env-file
options
for configuration parameters that can be specified via
environment variables. See below for examples.
Example 1: Using the default configuration filename
In the following example, the file config.yml
in the current directory on the
host system is mapped with the default configuration filename in the container
(config.yml
). In addition, the license_key
value is
specified using the NR_LICENSE_KEY
environment variable and the
application name
and license key
agent parameters for the built-in New Relic Python agent
are specified using the NEW_RELIC_APP_NAME
AND NEW_RELIC_LICENSE_KEY
environment variables, respectively. No command line argument are passed to the
exporter.
docker run -t --rm --name salesforce-exporter \
-v "$PWD/config.yml":/usr/src/app/config.yml \
-e NR_LICENSE_KEY=$NR_LICENSE_KEY \
-e NEW_RELIC_APP_NAME="New Relic Salesforce Exporter" \
-e NEW_RELIC_LICENSE_KEY=$NEW_RELIC_LICENSE_KEY \
newrelic/newrelic-salesforce-exporter
Example 2: Using a custom configuration filename
In the following example, the file config.yml
in the current directory on the
host system is mapped with a custom configuration filename in the container
(my_custom_config.yml
) and the -f
command line option
is used to specify the custom filename. The full path is not needed as
/usr/src/app
is the working directory when the exporter runs in the container.
The environment variables are the same as in Example 1.
docker run -t --rm --name salesforce-exporter \
-v "$PWD/config.yml":/usr/src/app/my_custom_config.yml \
-e NR_LICENSE_KEY=$NR_LICENSE_KEY \
-e NEW_RELIC_APP_NAME="New Relic Salesforce Exporter" \
-e NEW_RELIC_LICENSE_KEY=$NEW_RELIC_LICENSE_KEY \
newrelic/newrelic-salesforce-exporter \
-f my_custom_config.yml
Example 3: Using an event type fields mapping file
The following example is the same as Example 1 except that an
event type fields mapping file is mapped into
the container with a custom filename in the container
(my_event_type_fields.yml
) and the -e
command line option
is used to specify the custom filename. Again, the full path is not needed as
/usr/src/app
is the working directory when the exporter runs in the container.
docker run -t --rm --name salesforce-exporter \
-v "$PWD/config.yml":/usr/src/app/config.yml \
-v "$PWD/my_event_type_fields.yml":/usr/src/app/my_event_type_fields.yml \
-e NR_LICENSE_KEY=$NR_LICENSE_KEY \
-e NEW_RELIC_APP_NAME="New Relic Salesforce Exporter" \
-e NEW_RELIC_LICENSE_KEY=$NEW_RELIC_LICENSE_KEY \
newrelic/newrelic-salesforce-exporter \
-e my_event_type_fields.yml
Example 4: Using additional environment variables
In the following example, additional environment variables are passed to the
container to configure the exporter. In this case, caching is enabled via the
CACHE_ENABLED
environment variable (in order to address
data de-duplication) and the Redis connection parameters
are set using the REDIS_*
environment variables.
docker run -t --rm --name salesforce-exporter \
-v "$PWD/config.yml":/usr/src/app/config.yml \
-e CACHE_ENABLED="yes" \
-e REDIS_HOST="my.redis.test" \
-e REDIS_PORT="15432" \
-e REDIS_DB_NUMBER="2" \
-e REDIS_SSL="on" \
-e REDIS_PASSWORD="R3d1s1sGr3@t" \
-e NR_LICENSE_KEY=$NR_LICENSE_KEY \
-e NEW_RELIC_APP_NAME="New Relic Salesforce Exporter" \
-e NEW_RELIC_LICENSE_KEY=$NEW_RELIC_LICENSE_KEY \
newrelic/newrelic-salesforce-exporter
NOTE: In this scenario, the container will need to have access to the Redis instance.
The Salesforce Exporter Docker image
can be used as the base image for building custom images. This scenario can be
easier as the config.yml
can be packaged into the custom image
and does not need to be mounted in. However, it does require access to
a Docker registry
where the custom image can be pushed (e.g. ECR
and that is accessible to the technology used to manage the container
(e.g. ECS). In addition, this scenario requires
maintenance of a custom Dockerfile
and the processes to build and publish the
image to a registry.
The minimal example of a Dockerfile
for building a custom image simply extends
the base image (newrelic/newrelic-salesforce-exporter
) and copies a
configuration file to the default location (/usr/src/app/config.yml
).
FROM newrelic/newrelic-salesforce-exporter
#
# Copy your config file into the default location.
# Adjust the local path as necessary.
#
COPY ./config.yml .
Note that the directory path in the container does not need to be specified.
This is because the base image sets the WORKDIR
to /usr/src/app
. In fact, custom Dockerfile
s should not change the
WORKDIR
.
The following commands can be used to build a custom image using a custom
Dockerfile
that extends the base image.
docker build -t newrelic-salesforce-exporter-custom -f Dockerfile-custom .
docker tag newrelic-salesforce-exporter-custom someregistry/username/newrelic-salesforce-exporter-custom
docker push someregistry/username/newrelic-salesforce-exporter-custom
Subsequently, the exporter can be run using the custom image as in the previous
examples but without the need to mount the configuration file. Similarly, if
an event type fields mapping file and/or
a numeric fields mapping file are required,
these can be copied into the default locations using the custom Dockerfile
as
well, eliminating the need for these files to be mounted into the container.
The Salesforce Exporter Docker image
can also be built locally using the provided Dockerfile
"as-is" or as the basis for building a custom Dockerfile
. As is the case when
extending the base image, this scenario does require access to
a Docker registry
where the custom image can be pushed (e.g. ECR
and that is accessible to the technology used to manage the container
(e.g. ECS). Similarly, it requires maintenance of
a custom Dockerfile
and the processes to build and publish the image to a
registry.
The general set of steps for building a custom image using the provided
Dockerfile
"as-is" are as follows.
- Clone this repository
- Navigate to the repository root
- Run the following commands
docker build -t newrelic-salesforce-exporter .
docker tag newrelic-salesforce-exporter someregistry/username/newrelic-salesforce-exporter
docker push someregistry/username/newrelic-salesforce-exporter
To use a custom Dockerfile
, backup the provided Dockerfile
, make necessary
changes to the original, and test the image to ensure that the Salesforce
Exporter functions as expected. The example commands in the section
run directly from DockerHub can be used as a way
to verify basic functionality. Run additional tests as needed based on the
nature of the changes to the custom Dockerfile
. Then follow the steps above to
tag the image and publish it to a registry.
As is the case when extending the base image, the exporter can be run using the custom image as in the previous examples but without the need to mount any files into the container.
NOTE: When using a custom Dockerfile
, the base image in the FROM
instruction should not be changed and will not be supported.
New Relic recommends that you update the Salesforce Exporter regularly and at a minimum every 3 months.
When running directly from DockerHub, ensure
that you are not referencing a specific tag in the docker run
command or that
you are using the tag 'latest'.
Similarly, if you are extending the base image, ensure
that you are not referencing a specific tag in the FROM
instruction or that
you are using the tag 'latest'.
Additionally, ensure that you rebuild your image and push the new image to your
custom Docker registries
and that all containers running from previous versions of the image are
recreated to use the new image.
When building a custom image using the provided
Dockerfile
"as-is", follow the steps to upgrade your local repository
and then follow the steps to build a custom image using
the provided Dockerfile
. Ensure that you rebuild your image and push the new
image to your custom Docker registries
and that all containers running from previous versions of the image are
recreated to use the new image.
When building a custom image using a custom
Dockerfile
, a new version of the custom Dockerfile
should be created by
reapplying changes to the provided Dockerfile
each time the Salesforce
Exporter is updated to ensure changes to major functionality and critical fixes
are included in images produced from the custom Dockerfile
. Ensure that you
rebuild your image and push the new image to your custom
Docker registries
and that all containers running from previous versions of the image are
recreated to use the new image.
The Salesforce Exporter supports the following capabilities.
-
The default behavior of the exporter, in the absence of configuration for additional capabilities, is to collect Salesforce event log files. Log messages can be sent to New Relic as logs or events.
-
The exporter can execute arbitrary SOQL queries and send the query results to New Relic as logs or events.
-
The exporter can collect Salesforce Org Limits and send either all limits or only select limits to New Relic as logs or events.
Option | Alias | Description | Default |
---|---|---|---|
-f | --config_file | name of configuration file | config.yml |
-c | --config_dir | path to the directory containing the configuration file | . |
-e | --event_type_fields_mapping | path to the event type fields mapping file | event_type_fields.yml |
-n | --num_fields_mapping | path to the numeric fields mapping file | numeric_fields.yml |
For historical purposes, you can also use the CONFIG_DIR
environment variable
to specify the directory containing the configuration file.
Several configuration files are used to control the behavior of the exporter.
The main configuration for the exporter is the config.yml
file. In fact, it
does not need to be named config.yml
although that is the default name if a
name is not specified on the command line. The supported configuration
parameters are listed below.
See config_sample.yml
for a full configuration example.
Description | Valid Values | Required | Default |
---|---|---|---|
ID used in logs generated by the exporter | string | N | com.newrelic.labs.sfdc.eventlogfiles |
The integration name is used in the exporter logs for troubleshooting purposes.
Description | Valid Values | Required | Default |
---|---|---|---|
Flag to enable the built-in scheduler | True / False |
N | False |
The exporter can run either as a service that uses a built-in scheduler to run the export process on a set schedule or as a simple command line utility which runs once and exits when it is complete. The latter is intended for use with an external scheduling mechanism like cron.
When set to True
, the exporter will run on a schedule specified in the
service_schedule
parameter. When set to False
or
when not specified, the exporter will run once and exit. In this case, the
cron_interval_minutes
parameter should be used to
indicate the interval configured in the external scheduler. For example, if
using cron, this would be the frequency (in minutes) between runs setup in the
crontab
.
Description | Valid Values | Required | Default |
---|---|---|---|
Schedule configuration used by the built-in scheduler | YAML Mapping | conditional | N/a |
This parameter is required if the run_as_service
parameter
is set to True
. The value of this parameter is a YAML mapping with two
attributes: hour
and minute
.
The hour
attribute specifies all the hours (0 - 23, comma separated) to invoke
the exporter. Use *
as awildcard to invoke the application every hour. The
minute
attribute specifies all the minutes (0 - 59, comma separated) at which
to invoke the exporter. For example, the following configuration will run the
exporter every hour on the hour as well as at the 15th minute, 30th minute
and 45th minute past every hour.
service_schedule:
hour: *
minute: "0, 15, 30, 45"
Description | Valid Values | Required | Default |
---|---|---|---|
The execution interval (in minutes) used by the external scheduler | integer | N | 60 |
This parameter is used when the run_as_service
parameter
is set to False
or is not set at all. This parameter is intended for use when
an external scheduling mechanism is being used to execute the exporter. The
value of this parameter is a number representing the interval (in minutes) at
which the external scheduler executes the exporter. For example, if using
CRON, this would be the frequency at which CRON invokes the process as
represented by the CRON expression in the crontab
.
See the section de-duplication without a cache
for more details on the interaction between the cache_enabled
attribute, the date_field
attribute, the
time_lag_minutes
attribute, the
generation_interval
attribute, and the
cron_interval_minutes
attribute.
NOTE: If run_as_service
is set to False
and you set
this parameter to 0
, the time range that will be used for EventLogFile
queries generated by the exporter will be "since now", "until now" which will
result in no query results being returned.
Description | Valid Values | Required | Default |
---|---|---|---|
An array of instance configurations | YAML Sequence | Y | N/a |
The exporter can run one or more exports each time it is invoked. Each export is an "instance" defined using an instance configuration. This parameter is an array where each element is an instance configuration. This parameter must contain at least one instance configuration.
Description | Valid Values | Required | Default |
---|---|---|---|
An array of custom query configurations | YAML mapping | N | {} |
The exporter is capable of running custom SOQL queries instead of the default generated log file queries. This parameter can be used to specify queries that should be run for all instances.
See the custom queries section for more details.
Description | Valid Values | Required | Default |
---|---|---|---|
New Relic configuration | YAML Mapping | Y | N/a |
This parameter contains the information necessary to send logs or events to your New Relic account.
NOTE: The exporter uses the New Relic Python APM agent
to report telemetry about itself to your account. The license_key
attribute
defined in this configuration is not used by the Python agent. See more
details about configuring the included Python agent in
the New Relic Python Agent section.
Description | Valid Values | Required | Default |
---|---|---|---|
New Relic telemetry type | logs / events |
N | logs |
This attribute specifies the type of telemetry to generate for exported data.
When set to logs
, data exported from Salesforce will be transformed into
New Relic logs and sent via the New Relic Logs API. When set to events
, data
exported from Salesforce will be transformed into New Relic events and sent via
the New Relic Events API.
Description | Valid Values | Required | Default |
---|---|---|---|
New Relic region identifier | US / EU |
Y | N/a |
This attribute specifies which New Relic region should be used to send generated telemetry.
Description | Valid Values | Required | Default |
---|---|---|---|
New Relic account ID | integer | conditional | N/a |
This attribute specifies which New Relic account generated events should be sent
to. This attribute is required if the data_format
attribute is
set to events
. It is ignored if the data_format
is logs
.
The account ID can also be specified using the NR_ACCOUNT_ID
environment
variable.
Description | Valid Values | Required | Default |
---|---|---|---|
New Relic license key | string | Y | N/a |
This attribute specifies the New Relic License Key (INGEST) that should be used to send generated logs and events.
The license key can also be specified using the NR_LICENSE_KEY
environment
variable.
An "instance" is defined using an instance configuration. An instance
configuration is a YAML mapping containing 3 attributes: name
, arguments
,
and labels
.
Description | Valid Values | Required | Default |
---|---|---|---|
A symbolic name for the instance | string | Y | N/a |
The instance name is used in the exporter logs for troubleshooting purposes.
Description | Valid Values | Required | Default |
---|---|---|---|
The main configuration for the instance | YAML Mapping | Y | N/a |
The majority of the instance configuration is specified in the arguments
attribute. The attribute value is a YAML mapping. The supported arguments are
documented in the instance arguments section.
Description | Valid Values | Required | Default |
---|---|---|---|
A set of labels to include on all logs and events | YAML mapping | N | {} |
The labels
parameter is a set of key/value pairs. The value of this parameter
is a YAML mapping. Each key/value pair is added to all logs and events generated
by the exporter.
The main configuration of an instance is specified in the arguments
attribute
of the instance configuration. The value
of this attribute is a YAML mapping that supports the following values.
Description | Valid Values | Required | Default |
---|---|---|---|
The version of the Salesforce API to use | any valid Salesforce API version number | N | 55.0 |
The api_ver
attribute can be used to customize the version of the Salesforce
API that the exporter should use when making API calls. The exporter was tested
against API version 60.0
.
NOTE: The API version can also be configured at the query level for custom queries and at the limit level for limits.
Description | Valid Values | Required | Default |
---|---|---|---|
The Salesforce URL to use for token-based authentication | URL | Y | N/a |
The exporter authenticates to Salesforce using token-based authentication. The value of this attribute is used as the token URL. For more details, see the Authentication section below.
Description | Valid Values | Required | Default |
---|---|---|---|
The authentication configuration | YAML Mapping | N | {} |
The configuration used to authenticate to Salesforce can be specified either in
the config.yml
or the environment. If the auth
attribute is
present, the exporter will attempt to load the configuration entirely from the
config.yml
. The attribute value is a YAML mapping.
See the Authentication section for more details.
Description | Valid Values | Required | Default |
---|---|---|---|
A prefix to use when looking up environment variables | string | N | '' |
Many, but not all, configuration values can be provided as environment
variables. When the exporter looks up an environment variable, it can
automatically prepend the environment variable name with a prefix. This prefix
is specified in the instance arguments using the
auth_env_prefix
attribute. This enables different instances]
to use different environment variables in a single run of the exporter.
NOTE: This variable is named auth_env_prefix
for historical reasons.
In previous releases, it only applied to authentication environment variables.
However, it currently applies to many other configuration environment variables.
Description | Valid Values | Required | Default |
---|---|---|---|
Flag to control usage of a Redis cache for storing query record IDs and log entry IDs | True / False |
N | False |
When this flag is set to True
, the exporter will use a cache to store the IDs
of query records and log messages that have already been processed to prevent
duplication of data in New Relic. With this flag set to False
or if the flag
is not specified, multiple occurrences of the same log message or query record
will result in duplicate log entries or events in New Relic.
See the section de-duplication with a cache for more details on the use of caching to help prevent duplicaton of data.
Description | Valid Values | Required | Default |
---|---|---|---|
The Redis configuration | YAML Mapping | conditional | {} |
The configuration used to authenticate to Redis. This attribute is required if
the cache_enabled
attribute is set to True
. The attribute
value is a YAML mapping.
See the section de-duplication with a cache for more details.
Description | Valid Values | Required | Default |
---|---|---|---|
The name of the date field on the EventLogFile object to use when building log file queries |
LogDate / CreatedDate |
N | conditional |
The EventLogFile
object has two date fields, LogDate
and CreatedDate
. Briefly, the LogDate
field represents the start of the collection interval for logs contained in the
given log file. The CreatedDate
represents the date the given log file was
created. The decision to use one versus the other is primarily relevant with
regards to caching.
The default value of this attribute is conditional on the value of
cache_enabled
attribute. When the cache_enabled
attribute is set to True
, the default value of this attribute is
CreatedDate
. When the cache_enabled
attribute is set to
False
, the default value of this attribute is LogDate
.
NOTE: This attribute is only used for EventLogFile
queries generated by
the exporter. It is not used for custom EventLogFile
queries.
See the section de-duplication without a cache
for more details on the interaction between the cache_enabled
attribute, the date_field
attribute, the
time_lag_minutes
attribute, the
generation_interval
attribute, and the
cron_interval_minutes
attribute.
Description | Valid Values | Required | Default |
---|---|---|---|
The value of the Interval field on the EventLogFile object to use when building log file queries |
Daily / Hourly |
N | Daily |
There are two types of event log files created by Salesforce: daily (or "24-hour") event log files and hourly event log files. Daily log files include all log files for the previous day and are generated at approximately 3am (server local time) each day. With respect to the exporter, the important difference between the two has to do with how deltas to these logs are published.
The value of this attribute is automatically used for EventLogFile
queries
generated by the exporter and is made available to custom queries
as the query substitution variable log_interval_type
.
See the section de-duplication without a cache
for more details on the interaction between the cache_enabled
attribute, the date_field
attribute, the
time_lag_minutes
attribute, the
generation_interval
attribute, and the
cron_interval_minutes
attribute.
Description | Valid Values | Required | Default |
---|---|---|---|
An offset duration (in minutes) to use when building log file queries | integer | N | conditional |
The value of this attribute affects the calculation of the start and end values
for the time range used in EventLogFile
queries generated by the exporter and
the to_timestamp
and from_timestamp
values made available to
custom queries. Specifically, on each execution of the
exporter, the starting value of the time range will be set to the end value of
the time range on the previous execution (or the current time minus the
cron_interval_minutes
when
run_as_service
is False
) minus the time_lag_minutes
and
the ending value of the time range will be set to the current time minus the
value of time_lag_minutes
.
The default value of this attribute is conditional on the value of
cache_enabled
attribute. When the cache_enabled
attribute is set to True
, the default value of this attribute is
0
. When the cache_enabled
attribute is set to
False
, the default value of this attribute is 300
.
See the section de-duplication without a cache
for more details on the interaction between the cache_enabled
attribute, the date_field
attribute, the
time_lag_minutes
attribute, the
generation_interval
attribute, and the
cron_interval_minutes
attribute.
Description | Valid Values | Required | Default |
---|---|---|---|
An array of instance-scoped custom queries | YAML Sequence | N | [] |
The exporter is capable of running custom SOQL queries
instead of the default generated log file queries. This attribute can be used
to specify queries that should only be run for the instance. These are
separate from, but additive to, the queries defined in the top-level
queries
configuration parameter.
See the custom queries section for more details.
Description | Valid Values | Required | Default |
---|---|---|---|
Limits collection configuration | YAML Mapping | N | {} |
In addition to exporting EventLogFile
logs and the results of SOQL queries, the exporter can also collect data about
Salesforce Organization Limits.
See the Org limits section for more details.
Description | Valid Values | Required | Default |
---|---|---|---|
Flag to explicitly enable or disable the default generated log file query | True / False |
N | True |
By default, for each instance, the exporter will execute the default generated log file query, unless custom queries are defined at the global or instance level.
This attribute can be used to prevent the default behavior. For example, to
configure an instance to only collect Salesforce org limits, set
this attribute to False
in conjunction with specifying the limits
configuration.
NOTE: When custom queries are enabled (either at the global or instance level) the default generated log file query will be disabled automatically and the value of this attribute will be ignored.
EventLogFile
data can contain many attributes and the set of attributes
returned can differ depending on the event type. By default, the exporter will
include all attributes returned in the generated New Relic Logs or Events. This
behavior can be customized using an event type fields mapping file. This file
contains a list of fields to record for each event type as shown in the
following example.
mapping:
Login: [ 'FIELD_A', 'FIELD_B' ]
API: [ 'FIELD_A' ]
OTHER_EVENT: [ 'OTHER_FIELDS' ]
See the file event_type_fields.yml at the root of the repository for an example.
NOTE: This mapping only applies to data in downloaded log message CSVs when
querying EventLogFile
data. It does not apply to any other type of SOQL query
results.
The data contained in downloaded log message CSVs when querying
EventLogFile
data are all represented using strings even if some of them are actually numeric
values. However, it may be beneficial to have numeric values converted to
numbers in the generated New Relic Events. To address this, a numeric fields
mapping file can be specified. This file defines the set of fields that should
be converted to numeric values by event type. The format of this file is the
same as the event type fields mapping file.
See the file numeric_fields.yml at the root of the repository for an example.
NOTE: The numeric fields mapping applies only when generating New Relic events. It is ignored when generating New Relic Logs.
NOTE: Unlike the event type fields mappings which only apply to
EventLogFile
data, the numeric fields mappings apply to fields returned in
SOQL result sets as well.
The exporter supports the OAuth 2.0 Username-Password flow and the OAuth 2.0 JWT Bearer flow for gaining access to the ReST API via a Connected App. The JWT Bearer flow is strongly recommended as it does not expose any passwords.
As mentioned above, authentication information can either be specified in the
auth
attribute of the arguments
parameter
of the instance configuration or in the
runtime system environment via environment variables.
Both OAuth 2.0 authorization flows require an access token.
The access token is obtained using the Salesforce instance’s
OAuth 2.0 token endpoint,
e.g. https://hostname/services/oauth2/token
.
The OAuth 2.0 token endpoint URL can be specified either as a
configuration parameter or using the
{auth_env_prefix}SF_TOKEN_URL
environment variable.
For the OAuth 2.0 Username-Password Flow, the following parameters are required.
The grant_type
for the OAuth 2.0 Username-Password Flow must be set to
password
(case-sensitive).
The grant type can also be specified using the {auth_env_prefix}SF_GRANT_TYPE
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
Consumer key of the connected app | string | Y | N/a |
This parameter specifies the consumer key of the connected app. To access this value, navigate to "Manage Consumer Details" when viewing the Connected App details.
The client ID can also be specified using the {auth_env_prefix}SF_CLIENT_ID
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
Consumer secret of the connected app | string | Y | N/a |
This parameter specifies the consumer secret of the connected app. To access this value, navigate to "Manage Consumer Details" when viewing the Connected App details.
The client secret can also be specified using the
{auth_env_prefix}SF_CLIENT_SECRET
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
Username the connected app is impersonating | string | Y | N/a |
This parameter specifies the username that the connected app will impersonate/imitate for authentication and authorization purposes.
The username can also be specified using the {auth_env_prefix}SF_USERNAME
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
Password of the user the connected app is impersonating | string | Y | N/a |
This parameter specifies the password of the user that the connected app will impersonate/imitate for authentication and authorization purposes.
The password can also be specified using the {auth_env_prefix}SF_PASSWORD
environment variable.
NOTE: As noted in the OAuth 2.0 Username-Password flow documentation, the password requires a security token to be concatenated with it when authenticating from an untrusted network.
Below is an example OAuth 2.0 Username-Password Flow configuration in the
auth
attribute of the instance arguments
attribute of the instance configuration parameter.
token_url: https://my.salesforce.test/services/oauth2/token
# ... other instance arguments ...
auth:
grant_type: password
client_id: "ABCDEFG1234567"
client_secret: "1123581321abc=="
username: pat
password: My5fPa55w0rd
Below is an example OAuth 2.0 Username-Password Flow configuration using
environment variables with no prefix from a bash
shell.
export SF_TOKEN_URL="https://my.salesforce.test/services/oauth2/token"
export SF_GRANT_TYPE="password"
export SF_CLIENT_ID="ABCDEFG1234567"
export SF_CLIENT_SECRET="1123581321abc=="
export SF_USERNAME="pat"
export SF_PASSWORD="My5fPa55w0rd"
For the OAuth 2.0 JWT Bearer Flow, the following parameters are required.
The grant_type
for the OAuth 2.0 JWT Bearer Flow must be set to
urn:ietf:params:oauth:grant-type:jwt-bearer
(case-sensitive).
The grant type can also be specified using the {auth_env_prefix}SF_GRANT_TYPE
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
client_id of the connected app |
string | Y | N/a |
This parameter specifies the client_id generated and assigned to the connected app when it is saved after registering the X509 certificate
The client ID can also be specified using the {auth_env_prefix}SF_CLIENT_ID
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
Path to the file containing the private key of the connected app | file path | Y | N/a |
This parameter specifies the file system path to the file containing the private key that is associated with the X509 certificate that is registered with the connected app. The private key is used to sign the JWT.
The private key file path can also be specified using the
{auth_env_prefix}SF_PRIVATE_KEY
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
Value for the sub claim |
string | Y | N/a |
This parameter specifies the value used for the sub
claim in the JSON Claims
Set for the JWT. Per the documentation,
this should be set to the user's username when accessing an Experience Cloud
site.
The subject can also be specified using the {auth_env_prefix}SF_SUBJECT
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
Value for the aud claim |
string | Y | N/a |
This parameter specifies the value used for the aud
claim in the JSON Claims
Set for the JWT. Per the documentation,
this should be set to the authorization server's URL.
The audience can also be specified using the {auth_env_prefix}SF_AUDIENCE
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
An offset duration (in minutes) to use when calculating the JWT exp claim |
integer | N | 5 |
This value of this parameter is added to the current time and the result is used
as the value of the exp
claim in the JSON Claims Set for the JWT. The value
must be a positive integer.
The expiration offset can also be specified using the
{auth_env_prefix}SF_EXPIRATION_OFFSET
environment variable.
Below is an example OAuth 2.0 JWT Bearer Flow configuration in the
auth
attribute of the instance arguments
attribute of the instance configuration parameter.
auth:
grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer"
client_id: "GFEDCBA7654321"
private_key: "path/to/private_key_file"
subject: pat
audience: "https://login.salesforce.com"
Below is an example OAuth 2.0 JWT Bearer Flow configuration using environment
variables with no prefix from a bash
shell.
export SF_TOKEN_URL="https://my.salesforce.test/services/oauth2/token"
export SF_GRANT_TYPE="urn:ietf:params:oauth:grant-type:jwt-bearer"
export SF_CLIENT_ID="GFEDCBA7654321"
export SF_PRIVATE_KEY="path/to/private_key_file"
export SF_SUBJECT="pat"
export SF_AUDIENCE="https://login.salesforce.com"
The default behavior of the exporter, in the absence of
configuration of additional capabilities, is to collect event logs using the
log related attributes in the instance arguments of
each instance configuration, for
example, date_field
,
generation_interval
, etc.
Event log messages are collected by executing an
SOQL
query for EventLogFile
objects, iterating through each result object, and processing the log messages
in the log file referenced by the LogFile
attribute.
Subsequently, the event log messages in the log files are transformed into the New Relic Logs API payload format or the New Relic Events API payload format and sent to New Relic.
NOTE: The event log file functionality is really just specialized logic
for handling query results of EventLogFile
queries. Under the hood, the logic
to run custom queries and the logic to query for
EventLogFile
records run through the same code path. The paths diverge only
when the exporter detects that the query results contain EventLogFile
records.
In this case, the exporter proceeds to process the log messages in the
referenced event log file. In fact, the exporter will even detect EventLogFile
records returned from custom queries. This is why the
default event log file queries are disabled
when custom queries are specified. In this case, it is
assumed that customized EventLogFile
queries will be provided (although this
is not required).
By default, the exporter will execute one of the following queries.
-
When
date_field
is set toLogDate
:SELECT Id,EventType,CreatedDate,LogDate,Interval,LogFile,Sequence FROM EventLogFile WHERE LogDate>={from_timestamp} AND LogDate<{to_timestamp} AND Interval='{log_interval_type}'
-
When
date_field
is set toCreateDate
:SELECT Id,EventType,CreatedDate,LogDate,Interval,LogFile,Sequence FROM EventLogFile WHERE CreatedDate>={from_timestamp} AND CreatedDate<{to_timestamp} AND Interval='{log_interval_type}'
If custom queries are specified either at the global or
instance scope, or the logs_enabled flag is set to False
,
the default queries are disabled. However, event log files can still be
collected by adding a custom query against the EventLogFile
object. The exporter will automatically detect EventLogFile
results and
process the log messages in each event log file identified by the LogFile
attribute in each result.
NOTE: For more details on {from_timestamp}
, {to_timestamp}
and
log_interval_type
, see the query substitution variables
section.
Salesforce event log file data is mapped to New Relic data as follows.
- Before processing event log file messages, the "event type" is set to either
the
event_type
value or theEventType
attribute of theEventLogFile
record. - Next, a connection is opened to stream the log file specified by the
LogFile
attribute of theEventLogFile
into the CSV parser. - For each line of the CSV file, the line is converted to a Python
dict
object and processed into a single New Relic log entry as follows.- The
attributes
for the New Relic log entry are built first as follows.- If an event type fields mapping file
exists and the calculated event type from step 1 exists in the
mapping, copy the fields listed in the mapping from the log message
to the
attributes
. - Otherwise, copy all the fields.
- A timestamp to use in subsequent steps is calculated as follows.
- If a
TIMESTAMP
field exists on the log message, convert it to the number of seconds since the epoch and removeTIMESTAMP
from theattributes
. - Otherwise, use the current number of seconds since the epoch for the timestamp.
- If a
- Set the
LogFileId
attribute inattributes
to theId
field of theEventLogFile
record (not the log message). - Set the
EVENT_TYPE
attribute inattributes
to either theevent_type
value or theEVENT_TYPE
field from the log message. If neither exists, it is set toSFEvent
. - The calculated timestamp value is set with the attribute name
specified in
rename_timestamp
or the nametimestamp
.
- If an event type fields mapping file
exists and the calculated event type from step 1 exists in the
mapping, copy the fields listed in the mapping from the log message
to the
- The
message
for the New Relic log entry is set toLogFile RECORD_ID row LINE_NO
whereRECORD_ID
is theId
attribute of theEventLogFile
record andLINE_NO
is the number of the row of the CSV file currently being processed. - If, and only if, the calculated name of the
timestamp
field istimestamp
, thetimestamp
for the New Relic log entry is set to the calculated timestamp value. Otherwise, the time of ingestion will be used as the timestamp of the log entry.
- The
- If the target New Relic data type is an event, the calculated log entry is
converted to an event as follows.
- Each attribute from the
attributes
value of the log entry is copied to the event. - Any attribute with a name in the set of combined field names of all event types from the numeric fields mapping file, is converted to a numeric value. If the conversion fails, the value remains as a string.
- The
eventType
of the event is set to theEVENT_TYPE
attribute.
- Each attribute from the
Below is an example of an EventLogFile
record, a single log message, and the
New Relic log entry or New Relic event that would result from the above
transformation.
Example EventLogFile
Record:
{
"attributes": {
"type": "EventLogFile",
"url": "/services/data/v52.0/sobjects/EventLogFile/00001111AAAABBBB"
},
"Id": "00001111AAAABBBB",
"EventType": "ApexCallout",
"CreatedDate": "2024-03-11T15:00:00.000+0000",
"LogDate": "2024-03-11T02:00:00.000+0000",
"Interval": "Hourly",
"LogFile": "/services/data/v52.0/sobjects/EventLogFile/00001111AAAABBBB/LogFile",
"Sequence": 1
}
Example EventLogFile
Log Message (shown abridged and with header row):
"EVENT_TYPE","TIMESTAMP","REQUEST_ID","ORGANIZATION_ID","USER_ID","RUN_TIME","CPU_TIME",... "ApexCallout","20240311160000.000","YYZ:abcdef123456","001122334455667","000000001111111","2112","10",...
Example New Relic log entry:
{
"message": "LogFile 00001111AAAABBBB row 0",
"attributes": {
"EVENT_TYPE": "ApexCallout",
"REQUEST_ID": "YYZ:abcdef123456",
"ORGANIZATION_ID": "001122334455667",
"USER_ID": "000000001111111",
"RUN_TIME": "2112",
"CPU_TIME": "10",
"LogFileId": "00001111AAAABBBB",
"timestamp": 1710172800.0
},
"timestamp": 1710172800.0
}
Example New Relic event:
{
"EVENT_TYPE": "ApexCallout",
"REQUEST_ID": "YYZ:abcdef123456",
"ORGANIZATION_ID": "001122334455667",
"USER_ID": "000000001111111",
"RUN_TIME": "2112",
"CPU_TIME": "10",
"LogFileId": "00001111AAAABBBB",
"timestamp": 1710172800.0,
"eventType": "ApexCallout"
}
The exporter can be configured to execute one or more arbitrary SOQL queries and transform the query results into New Relic logs or events. Queries can be specified at the global level or at the instance level. Global queries are executed once for each instance while instance queries are executed just for that instance.
When custom queries are specified at any level,
the default event log file queries will be
disabled. However, the exporter will still detect EventLogFile
records
returned from custom queries and automatically process the log messages in each
event log file identified by the LogFile
attribute in each EventLogFile
result. This behavior can be leveraged to write custom EventLogFile
queries
when more (or less) functionality is needed than is provided by the default
queries.
Custom EventLogFile
queries must include the following fields.
Id
Interval
LogFile
EventType
CreatedDate
LogDate
Failure to include these fields will cause the exporter to terminate with an exit code.
NOTE: It is very important when writing custom EventLogFile
queries to
have an understanding of the issues discussed in the
Data De-duplication section and the use of the
query substitution variables in order to craft
queries that do not select duplicate data. For instance, the following is an
example of a query that would result in duplicate data.
SELECT Id,EventType,CreatedDate,LogDate,LogFile,Interval FROM EventLogFile WHERE LogDate<={to_timestamp} AND Interval='Daily'
The problem with this query is that query time range is unbounded in the past. Each time it is run, it would match all daily log files records since the current time and it would process all log messages for all the records. If the exporter was run, for example, every 15 minutes without using a cache, this query would post every log message from the past on every execution, resulting in a potentially massive amount of data duplication. See the default event log file queries for examples of using the query substitution variables in a way that will minimize data duplication.
Custom queries are defined by specifying one or more query configurations. An example query configuration is shown below.
queries:
- query: "SELECT Id,EventType,CreatedDate,LogDate,LogFile,Interval FROM EventLogFile WHERE CreatedDate>={from_timestamp} AND EventType='API' AND Interval='Hourly'"
- query: "SELECT Id,Action,CreatedDate,DelegateUser,Display FROM SetupAuditTrail WHERE CreatedDate>={from_timestamp}"
event_type: MySetupAuditTrailEvent
timestamp_attr: CreatedDate
rename_timestamp: actualTimestamp
- query: "SELECT EventName, EventType, UsageType, Client, Value, StartDate, EndDate FROM PlatformEventUsageMetric WHERE TimeSegment='FifteenMinutes' AND StartDate >= {start_date} AND EndDate <= {end_date}"
api_ver: "58.0"
id:
- Value
- EventType
timestamp_attr: StartDate
env:
end_date: "now()"
start_date: "now(timedelta(minutes=-60))"
- query: "SELECT FullName FROM EntityDefinition WHERE Label='Opportunity'"
api_name: tooling
Each query is defined with a set of configuration parameters. The supported configuration parameters are listed below.
Description | Valid Values | Required | Default |
---|---|---|---|
The SOQL query to execute | string | Y | N/a |
The query
parameter is used to specify the
SOQL
query to execute.
Description | Valid Values | Required | Default |
---|---|---|---|
The version of the Salesforce API to use | string | N | 55.0 |
The api_ver
attribute can be used to customize the version of the Salesforce
API that the exporter should use when executing query API calls.
Description | Valid Values | Required | Default |
---|---|---|---|
The name of the Salesforce Platform API to use | rest / tooling |
N | rest |
The api_name
attribute can be used to specify the name of the Salesforce
Platform API that the exporter should use when executing query API calls.
When the api_name
attribute is not set or when it is set to the value rest
,
the ReST API
will be used to execute the
SOQL
query.
When the api_name
attribute is set to tooling
, the
Tooling API
will be used to execute the
SOQL
query.
Specifying any other value will cause an error and the exporter will terminate.
NOTE: Not all queries can be executed with both APIs. Ensure that the query being used is appropriate for the specified Salesforce Platform API.
Description | Valid Values | Required | Default |
---|---|---|---|
An array of field names to use when generating record IDs | YAML Sequence | N | [] |
The value of the id
parameter is an array of strings. Each string specifies
the name of a field on the query records in the query results. The values of
these fields are used to generate a unique id for query records that do not have
an Id
field. The unique id is generated by combining the values of each field
and generating a SHA-3 256-bit hash.
This parameter is used only when transforming query records from custom queries. It is not used when processing event log files.
Description | Valid Values | Required | Default |
---|---|---|---|
The name of an event type to use when transforming log messages and query results to New Relic logs or events | string | N | conditional |
The value of the event_type
parameter is used during the transformation of
event log file messages and query results from custom queries.
For more details on the usage of this parameter when querying the EventLogFile
object, see the event log file data mapping. For
more details on the usage of this parameter when querying other objects, see the
query record data mapping.
Description | Valid Values | Required | Default |
---|---|---|---|
The name of the query record field containing the value to use for the timestamp | string | N | CreatedDate |
The value of the timestamp_attr
parameter specifies the name of the field on
query records in the query results that contains the value to use as the
timestamp when transforming query records to log entries or events. This
parameter is not used when transforming event log files.
Description | Valid Values | Required | Default |
---|---|---|---|
The name to use for the attribute on the log or event under which the timestamp will be stored | string | N | timestamp |
By default, the timestamp value taken from the query record will be stored
with the attribute name timestamp
. The rename_timestamp
parameter can be
used to specify an alternate name for this attribute. When present, the
generated log or event will not have a timestamp
attribute. As a result,
a timestamp
attribute will be added when the log or event is ingested and it
will be set to the time of ingestion.
This parameter is used both when transforming query records and when transforming event log files.
Description | Valid Values | Required | Default |
---|---|---|---|
A set of query substitution variables | YAML Mapping | N | {} |
This parameter is used to define a set of custom query substitution variables that can be used to build more dynamic queries. The value of each query is one of the supported Python expressions. For more details see the query substitution variables section.
The query
parameter can contain substitution variables in the form
{VARIABLE_NAME}
. The supported variables, listed below, are primarily
provided for the purposes of constructing custom EventLogFile
queries, since
the default event log file queries will not
be run when custom queries are present.
For example usages of these variables, see the query configuration example.
NOTE: As mentioned in custom EventLogFile
queries,
it is very important when writing custom EventLogFile
queries to
have an understanding of the issues discussed in the
Data De-duplication section and the use of the
substitution variables below in order to craft queries that do not select
duplicate data.
The from_timestamp
substitution variable represents the start of the current
query time range. It is provided in ISO-8601 format as this is what is used by
the LogDate
and CreatedDate
fields on the EventLogFile
record. When
run_as_service
is set to False
, this value will be the
current time minus the time_lag_minutes
minus the
cron_interval_minutes
. When
run_as_service
is set to True
, this will be the current
time minus the time_lag_minutes
on the initial run and
the end time of the previous run on subsequent runs.
The to_timestamp
substitution variable represents the end of the current query
time range. It is provided in ISO-8601 format as this is what is used by the
LogDate
and CreatedDate
fields on the EventLogFile
record. This value will
always be set to the current time minus the time_lag_minutes
.
The log_interval_type
substitution variable will always be set to the value of
the generation_interval
set in the
config.yml
.
In addition to the supported substitution variables, it is possible to define
additional substitution variables for each query using the env
configuration
parameter. These variables contain supported Python expressions that are
evaluated to generate the data to be susbtituted in the query. The supported
expressions are listed below.
For example usages of these variables, see the query configuration example.
NOTE: WARNING! This feature is currently implemented by using the built-in Python eval() function to evaluate the Python expressions specified in the configuration file. While the expressions are evaluated in a sandbox, it is still possible to break out of the sandbox and execute potentially malicious code. This functionality will likely be replaced in the future with a more secure mechanism for building dynamic queries.
The now()
expression will return the current time in ISO-8601 date-time
format. The expression can optionally take a timedelta argument and add it to
the current time, e.g. now(timedelta(minutes=-60))
.
The sf_time()
expression takes a Python
datetime
object and converts it to an ISO-8601 formatted date-time string.
The datetime
expression returns a Python
datetime
object. Parameters may be passed just as they may to the
datetime
constructor.
The timedelta
expression returns a Python
timedelta
object. Parameters may be passed just as they may to the
timedelta
constructor.
The global queries
parameter may "include" queries from other YAML files by
mixing file path strings into the queries
array as in the following example.
queries:
- "my_queries_file_1.yml"
- "my_queries_file_2.yml"
- query: "..."
The external query configuration files must contain a queries
key with exactly
the same format as the query configuration, with the
limitation that the "included" files can not "include" other files.
Query records are mapped to New Relic data as follows.
- For each query record returned in the query results, the record is converted
into a Python
dict
object and processed into a single New Relic log entry as follows.- The
attributes
for the New Relic log entry are built first as follows.-
Each field of the query record is copied into the
attributes
with the following considerations.- Query records typically contain their own
attributes
field that contain metadata about the record/object. This field is ignored. - The field names for nested fields are flattened by joining together
the names at each level with the
.
character (i.e. the same way as they are selected in the SOQL statement). - Nested field "leaf" values that are not primitives are ignored.
- Any
attributes
fields found in nested fields are ignored.
- Query records typically contain their own
-
If an
Id
field exists it is copied toattributes
. Otherwise, a unique ID is generated using theid
configuration parameter and it is copied toattributes
. If there is noid
configuration parameter or if the concatenated values yield an empty string, noId
field will be added. -
If the
attributes
field in the query record metadata is present and if it contains atype
attribute, set theEVENT_TYPE
attribute inattributes
to either theevent_type
value or the the value of thetype
attribute. -
A timestamp to use in subsequent steps is calculated as follows.
- If a value is specifed for the
timestamp_attr
configuration parameter and a field exists on the record with that name, convert the field value to the number of milliseconds since the epoch and use it for the timestamp. The field value (not the converted value) will also be used in the log entrymessage
(see below). - Otherwise, if the
CreatedDate
field exists on the record, treat the value in the same as in the previous step. - Otherwise, use the current number of milliseconds since the epoch for the timestamp.
- If a value is specifed for the
-
The timestamp value is set with the attribute name specified in
rename_timestamp
or the nametimestamp
.
-
- The
message
for the New Relic log entry is set toEVENT_TYPE CREATED_DATE
whereEVENT_TYPE
is either theevent_type
value, thetype
attribute in the query recordattributes
metadata if it exists, or the valueSFEvent
, and theCREATED_DATE
is the value of the field specified by thetimestamp_attr
configuration parameter, the value of theCreatedDate
field, or the empty string. - If, and only if, the calculated name of the
timestamp
field istimestamp
, thetimestamp
for the New Relic log entry is set to the calculated timestamp value. Otherwise, the time of ingestion will be used as the timestamp of the log entry.
- The
Below is an example of an SOQL query, a query result record, and the New Relic log entry or New Relic event that would result from the above transformation.
Example SOQL Query:
SELECT Id, Name, BillingCity, CreatedDate, CreatedBy.Name, CreatedBy.Profile.Name, CreatedBy.UserType FROM Account
Example Account
Record:
{
"attributes": {
"type": "Account",
"url": "/services/data/v58.0/sobjects/Account/12345"
},
"Id": "000012345",
"Name": "My Account",
"BillingCity": null,
"CreatedDate": "2024-03-11T00:00:00.000+0000",
"CreatedBy": {
"attributes": {
"type": "User",
"url": "/services/data/v55.0/sobjects/User/12345"
},
"Name": "Foo Bar",
"Profile": {
"attributes": {
"type": "Profile",
"url": "/services/data/v55.0/sobjects/Profile/12345"
},
"Name": "Beep Boop"
},
"UserType": "Bip Bop"
}
}
Example New Relic log entry:
{
"message": "Account 2024-03-11T00:00:00.000+0000",
"attributes": {
"EVENT_TYPE": "Account",
"Id": "000012345",
"Name": "My Account",
"BillingCity": null,
"CreatedDate": "2024-03-11T00:00:00.000+0000",
"CreatedBy.Name": "Foo Bar",
"CreatedBy.Profile.Name": "Beep Boop",
"CreatedBy.UserType": "Bip Bop",
"timestamp": 1710115200000,
},
"timestamp": 1710115200000
}
Example New Relic event:
{
"EVENT_TYPE": "Account",
"Id": "000012345",
"Name": "My Account",
"BillingCity": null,
"CreatedDate": "2024-03-11T00:00:00.000+0000",
"CreatedBy.Name": "Foo Bar",
"CreatedBy.Profile.Name": "Beep Boop",
"CreatedBy.UserType": "Bip Bop",
"timestamp": 1710115200000,
"eventType": "Account"
}
In addition to exporting EventLogFile
logs and the results of custom SOQL
queries, the exporter can also collect data about
Salesforce Org Limits.
Limits collection is configured at the instance level. It can not be
configured at the global level like custom queries can.
Limits collection is configured using the limits configuration
specified in the limits
attribute of the instance arguments
attribute of the instance configuration parameter.
An example limits configuration is shown below.
limits:
api_ver: "58.0"
names:
- ActiveScratchOrgs
- DailyApiRequests
The limits
configuration supports the following configuration
parameters.
Description | Valid Values | Required | Default |
---|---|---|---|
The version of the Salesforce API to use | string | N | 55.0 |
The api_ver
attribute can be used to customize the version of the Salesforce
API that the exporter should use when executing limits API calls.
Description | Valid Values | Required | Default |
---|---|---|---|
An array of limit names to collect | YAML Sequence | N | N/a |
By default, the exporter will collect information on all limits returned from
the limits API.
The names
configuration parameter can be used to limit the limits collected by
specifying a list of limit labels.
Description | Valid Values | Required | Default |
---|---|---|---|
The name of an event type to use when transforming limits to New Relic logs or events | string | N | SalesforceOrgLimit |
The value of the event_type
parameter is used during the transformation of
limits.
Limits data is mapped to New Relic data as follows.
-
The set of limit names is calculated as follows.
- If the
names
parameter exists, each limit label listed in the array will be used. NOTE: If an empty array is specified using the inline flow sequence[]
, no limits will be collected. - Otherwise, each limit label returned from the limits API. will be used.
- If the
-
For each limit name in the calculated set of limits, data is converted as follows.
- If the limit name is not in the retrieved set of limits, processing continues with the next limit name.
- Otherwise, the limit is converted into a Python
dict
object and processed into a single New Relic log entry as follows. - The
attributes
for the New Relic log entry are built first as follows.- Set the
name
attribute to the limit name. - If a
Max
attribute exists on the limit, convert the value to an integer and set theMax
attribute to the converted value. - If a
Remaining
attribute exists on the limit, convert the value to an integer and set theRemaining
attribute to the converted value. - If both the
Max
andRemaining
attributes exist, calculateMax - Remaining
and the theUsed
attribute to the result. - Set the
EVENT_TYPE
attribute to either theevent_type
value orSalesforceOrgLimit
.
- Set the
- The
message
for the New Relic log entry is set toSalesforce Org Limit: LIMIT_NAME
whereLIMIT_NAME
is the name of the limit being processed. - The
timestamp
for the New Relic log entry is set to the current time in milliseconds since the epoch.
Below is an example of an abridged result returned from the the limits API. and the New Relic log entry or New Relic event that would result from the above transformation for the first limit in the result.
Example Limits API Result (show abridged):
{
"ActiveScratchOrgs": {
"Max": 3,
"Remaining": 3
},
"AnalyticsExternalDataSizeMB": {
"Max": 40960,
"Remaining": 40960
},
"ConcurrentAsyncGetReportInstances": {
"Max": 200,
"Remaining": 200
},
"ConcurrentEinsteinDataInsightsStoryCreation": {
"Max": 5,
"Remaining": 5
},
"ConcurrentEinsteinDiscoveryStoryCreation": {
"Max": 2,
"Remaining": 2
}
}
Example New Relic log entry for the ActiveScratchOrgs
limit:
{
"message": "Salesforce Org Limit: ActiveScratchOrgs",
"attributes": {
"EVENT_TYPE": "SalesforceOrgLimit",
"name": "ActiveScratchOrgs",
"Max": 3,
"Remaining": 3,
"Used": 0
},
"timestamp": 1709876543210
}
Example New Relic event for the ActiveScratchOrgs
limit:
{
"EVENT_TYPE": "SalesforceOrgLimit",
"name": "ActiveScratchOrgs",
"Max": 3,
"Remaining": 3,
"Used": 0,
"eventType": "SalesforceOrgLimit"
}
In certain scenarios, it is possible to encounter the same query results on
separate executions of the exporter. Without some mechanism to handle these
scenarios, this would result in duplication of data in New Relic. This can
not only lead to inaccurate query results in New Relic but also to unintended
ingest. In the case of
EventLogFile
data, the magnitude of data could be significant.
To help prevent duplication of data, the exporter can use a cache to store the
IDs of query records and log messages that have been previously processed.
Caching is enabled at the instance level using the
cache_enabled
flag in the instance arguments.
With this flag set to True
, the exporter will attempt to connect to a Redis
cache using a combination of the configuration set in the redis
section of the instance arguments and/or environment
variables.
The following configuration parameters are supported.
Description | Valid Values | Required | Default |
---|---|---|---|
Redis server hostname or IP address | string | N | localhost |
This parameter specifies the hostname or IP address of the Redis server.
The host can also be specified using the {auth_env_prefix}REDIS_HOST
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
Redis server port | number / numeric string | N | 6379 |
This parameter specifies the port to connect to on the Redis server.
The port can also be specified using the {auth_env_prefix}REDIS_PORT
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
Redis database number | number / numeric string | N | 0 |
This parameter specifies the database number to connect to on the Redis server.
The database number can also be specified using the
{auth_env_prefix}REDIS_DB_NUMBER
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
SSL flag | True / False |
N | False |
This parameter specifies whether or not to use an SSL connection to connect to the Redis server.
The SSL flag can also be specified using the {auth_env_prefix}REDIS_SSL
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
Redis password | string | Y | N/a |
This parameter specifies the password to use to connect to the Redis server.
The password can also be specified using the {auth_env_prefix}REDIS_PASSWORD
environment variable.
Description | Valid Values | Required | Default |
---|---|---|---|
Cache entry expiry, in days | number / numeric string | N | 2 |
This parameter specifies the expiration time to use when putting any entry into the cache. The time is specified in days.
The expiry can also be specified using the {auth_env_prefix}REDIS_EXPIRE_DAYS
environment variable.
Below is an example Redis configuration in the redis
attribute of
the instance arguments attribute of the
instance configuration parameter.
redis:
host: my.redis.test
port: 7721
db_number: 2
ssl: True
password: "R3d1s1sGr3@t"
expire_days: 1
Below is an example Redis configuration using environment variables with
no prefix from a bash
shell.
export REDIS_HOST="my.redis.test"
export REDIS_PORT="7721"
export REDIS_DB_NUMBER="2"
export REDIS_SSL="True"
export REDIS_PASSWORD="R3d1s1sGr3@t"
export REDIS_EXPIRE_DAYS="1"
If caching is not possible, several parameters are provided that can be used to
reduce the chances of log message duplication, namely, date_field
,
generation_interval
, and time_lag_minutes
. Use of these parameters may not
eliminate duplication but will reduce the chances of duplication.
In order to understand how these parameters interact, it can be helpful to have
an understanding of
the basics of using event monitoring with event log files,
the differences between Hourly
and Daily
event log files,
considerations when querying Hourly
event log files,
the difference between LogDate
and CreatedDate
, and how and when Salesforce generates event log files.
NOTE: The use of the parameters in this section only apply to the
EventLogFile
queries generated by the exporter and to the to_timestamp
,
from_timestamp
, and log_interval_type
arguments that can be used by
custom queries.
The best way to avoid duplication of log messages without a cache is to use
Hourly
event log files. As mentioned in
the ReST API Developer Guide,
Hourly
log files are incremental, meaning that as new log messages arrive for
a given hour, new log files are generated with just the new log messages.
To use Hourly
logs, set the generation_interval
to
Hourly
.
When using an external scheduler (run_as_service](#run_as_service) is
False), it is also necessary to ensure that the external scheduler fires on a regular interval (e.g. every 15 minutes or every day at 05:00) and that the [
cron_interval_minutes` matches the regular interval.
When using the built-in scheduler (run_as_service is True
),
no additional configuration is necessary as the time of the last run is
persisted in memory.
With these settings, their should be no duplication of log messages.
The best way to avoid duplication when querying Daily
event log files without
using a cache is to set the date_field
to LogDate
(the
default when cache_enabled
is False
) and to set
time_lag_minutes
to between 180 (3 hours) and 300
(5 hours).
As with Hourly
logs, when using an external scheduler (run_as_service](#run_as_service) is
False), it is also necessary to ensure that the external scheduler fires on a regular interval (e.g. every 15 minutes or every day at 05:00) and that the [
cron_interval_minutes` matches the regular interval.
When using the built-in scheduler (run_as_service is True
),
no additional configuration is necessary as the time of the last run is
persisted in memory.
With these settings, the Daily
event log file should be picked up once,
anywhere between 3 and 5 hours after the event log file LogDate
(which will
always be midnight of each new day). The lag time accounts for the fact that
Daily
event log files are generated each day around 0300 server local time
even though the reported LogDate
will be 0000. Higher lag times can also
account for delays in log file generation due to heavy load on the log file
processing servers (Gridforce/Hadoop).
Following are the recommended configurations for avoiding duplication in each
of the above scenarios. In general, using Hourly
for the
generation_interval
and CreatedDate
for the
date_field
is the recommended configuration with or
without a cache.
Scenario | Parameter | Recommended Value |
---|---|---|
With a cache | ||
cache_enabled |
True |
|
date_field |
CreatedDate |
|
generation_interval |
Hourly |
|
time_lag_minutes |
0 |
|
cron_interval_minutes (if run_as_service is False ) |
match external schedule | |
service_schedule (if run_as_service is True ) |
any | |
Without a cache (Hourly) | ||
cache_enabled |
False |
|
date_field |
CreatedDate |
|
generation_interval |
Hourly |
|
time_lag_minutes |
0 |
|
cron_interval_minutes (if run_as_service is False ) |
match external schedule | |
service_schedule (if run_as_service is True ) |
any | |
Without a cache (Daily) | ||
cache_enabled |
False |
|
date_field |
LogDate |
|
generation_interval |
Daily |
|
time_lag_minutes |
180 - 300 |
|
cron_interval_minutes (if run_as_service is False ) |
match external schedule | |
service_schedule (if run_as_service is True ) |
any |
The exporter uses the New Relic Python APM agent to report telemetry about itself to the New Relic account associated with the specified New Relic license key.
As with any New Relic APM agent,
a license key is required to report agent telemetry. The license key used by the
Python agent must be defined either
in the agent configuration file
located at newrelic.ini
or
using environment variables.
By default, the name of the application which the Python agent reports telemetry
to is New Relic Salesforce Exporter
. This name can be changed either
in the agent configuration file
located at newrelic.ini
or
using environment variables.
Additional agent configuration settings can be defined as outlined in the Python agent configuration documentation.
The exporter automatically generates logs to trace its health state and correct functioning. Logs are generated in the standard output as JSON objects, each line being an object. This will help other integrations and tools to collect these logs and handle them properly. The JSON object contains the following keys:
message
: String. The log message.timestamp
: Integer. Unix timestamp in milliseconds.level
: String.info
,error
orwarn
.
New Relic has open-sourced this project. This project is provided AS-IS WITHOUT WARRANTY OR DEDICATED SUPPORT. Issues and contributions should be reported to the project here on GitHub.
We encourage you to bring your experiences and questions to the Explorers Hub where our community members collaborate on solutions and new ideas.
New Relic recommends that you update the Salesforce Exporter regularly and at a minimum every 3 months.
To upgrade on-host deployments, see the section Upgrading on-host deployments. To upgrade Docker deployments, see the section Upgrading Docker deployments.
At New Relic we take your privacy and the security of your information seriously, and are committed to protecting your information. We must emphasize the importance of not sharing personal data in public forums, and ask all users to scrub logs and diagnostic information for sensitive information, whether personal, proprietary, or otherwise.
We define “Personal Data” as any information relating to an identified or identifiable individual, including, for example, your name, phone number, post code or zip code, Device ID, IP address, and email address.
For more information, review New Relic’s General Data Privacy Notice.
We encourage your contributions to improve this project! Keep in mind that when you submit your pull request, you'll need to sign the CLA via the click-through using CLA-Assistant. You only have to sign the CLA one time per project.
If you have any questions, or to execute our corporate CLA (which is required if your contribution is on behalf of a company), drop us an email at [email protected].
A note about vulnerabilities
As noted in our security policy, New Relic is committed to the privacy and security of our customers and their data. We believe that providing coordinated disclosure by security researchers and engaging with the security community are important means to achieve our security goals.
If you believe you have found a security vulnerability in this project or any of New Relic's products or websites, we welcome and greatly appreciate you reporting it to New Relic through HackerOne.
If you would like to contribute to this project, review these guidelines.
To all contributors, we thank you! Without your contribution, this project would not be what it is today.
The [New Relic Salesforce Exporter] project is licensed under the Apache 2.0 License.