
Commit

Merge branch 'release/v1.67.0' into feat/ssapi-receiver
Caleb-Hurshman authored Dec 11, 2024
2 parents 7dea9f4 + 6816c7f commit cc0dfd3
Showing 12 changed files with 1,730 additions and 230 deletions.
34 changes: 21 additions & 13 deletions exporter/chronicleexporter/README.md
@@ -22,19 +22,23 @@ This exporter facilitates the sending of logs to Chronicle, which is a security

The exporter can be configured using the following fields:

| Field | Type | Default | Required | Description |
| ------------------------------- | ----------------- | -------------------------------------- | -------- | ------------------------------------------------------------------------------------------- |
| `endpoint`                      | string            | `malachiteingestion-pa.googleapis.com` | `false`  | The endpoint for sending to Chronicle.                                                        |
| `creds_file_path` | string | | `true` | The file path to the Google credentials JSON file. |
| `creds` | string | | `true` | The Google credentials JSON. |
| `log_type` | string | | `false` | The type of log that will be sent. |
| `raw_log_field` | string | | `false` | The field name for raw logs. |
| `customer_id` | string | | `false` | The customer ID used for sending logs. |
| `override_log_type`             | bool              | `true`                                 | `false`  | Whether or not to override the `log_type` in the config with `attributes["log_type"]`.       |
| `namespace` | string | | `false` | User-configured environment namespace to identify the data domain the logs originated from. |
| `compression`                   | string            | `none`                                 | `false`  | The compression type to use when sending logs. Valid values are `none` and `gzip`.           |
| `ingestion_labels`              | map[string]string |                                        | `false`  | Key-value pairs of labels to be applied to the logs when sent to Chronicle.                  |
| `collect_agent_metrics`         | bool              | `true`                                 | `false`  | Enables collecting metrics about the agent's process and log ingestion.                      |
| `batch_log_count_limit_grpc` | int | `1000` | `false` | The maximum number of logs allowed in a gRPC batch creation request. |
| `batch_request_size_limit_grpc` | int | `1048576` | `false` | The maximum size, in bytes, allowed for a gRPC batch creation request. |
| `batch_log_count_limit_http`    | int               | `1000`                                 | `false`  | The maximum number of logs allowed in an HTTP batch creation request.                        |
| `batch_request_size_limit_http` | int               | `1048576`                              | `false`  | The maximum size, in bytes, allowed for an HTTP batch creation request.                      |

### Log Type

@@ -63,6 +67,10 @@ Besides the default endpoint, there are also regional endpoints that can be used

For additional information on accessing Chronicle, see the [Chronicle documentation](https://cloud.google.com/chronicle/docs/reference/ingestion-api#getting_api_authentication_credentials).

## Log Batch Creation Request Limits

`batch_log_count_limit_grpc`, `batch_request_size_limit_grpc`, `batch_log_count_limit_http`, and `batch_request_size_limit_http` ensure that log batch creation requests stay within Chronicle's backend limits: the first two apply to Chronicle's gRPC endpoint, the latter two to its HTTP endpoint. If a request exceeds the configured size limit or contains more logs than the configured count limit, it is split into multiple requests that adhere to these limits, each carrying a subset of the original logs. Any single log that by itself exceeds the size limit is dropped.
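
For illustration, here is a minimal Go sketch of the splitting behavior described above. It is a simplified, hypothetical model (the function name, sizes, and limits are invented for the example), not the exporter's actual implementation.

```go
package main

import "fmt"

// splitBySize groups log payload sizes into batches that respect both a
// log-count limit and a total-size limit, mirroring the splitting rules
// described above. A single log larger than the size limit is dropped.
func splitBySize(logSizes []int, countLimit, sizeLimit int) [][]int {
	var batches [][]int
	var current []int
	currentSize := 0
	for _, size := range logSizes {
		if size > sizeLimit {
			// A log that cannot fit in any request on its own is dropped.
			continue
		}
		if len(current) > 0 && (len(current) == countLimit || currentSize+size > sizeLimit) {
			batches = append(batches, current)
			current, currentSize = nil, 0
		}
		current = append(current, size)
		currentSize += size
	}
	if len(current) > 0 {
		batches = append(batches, current)
	}
	return batches
}

func main() {
	// With a 1000-byte size limit, two 600-byte logs cannot share a request,
	// and the 2000-byte log is dropped entirely.
	fmt.Println(splitBySize([]int{600, 600, 2000, 100}, 1000, 1000))
	// Prints: [[600] [600 100]]
}
```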

## Example Configuration

### Basic Configuration
47 changes: 42 additions & 5 deletions exporter/chronicleexporter/config.go
@@ -85,6 +85,26 @@ type Config struct {

// Forwarder is the forwarder that will be used when the protocol is https.
Forwarder string `mapstructure:"forwarder"`

// BatchLogCountLimitGRPC is the maximum number of logs that can be sent in a single batch to Chronicle via the gRPC protocol.
// This field defaults to 1000, which is the default Chronicle backend limit; any logs beyond the backend limit are dropped.
// Batches with more logs than this limit are split into multiple batches.
BatchLogCountLimitGRPC int `mapstructure:"batch_log_count_limit_grpc"`

// BatchRequestSizeLimitGRPC is the maximum batch request size, in bytes, that can be sent to Chronicle via the gRPC protocol.
// This field defaults to 1048576 (1 MiB), which is the default Chronicle backend limit.
// Setting this option above the Chronicle backend limit may result in rejected log batch requests.
BatchRequestSizeLimitGRPC int `mapstructure:"batch_request_size_limit_grpc"`

// BatchLogCountLimitHTTP is the maximum number of logs that can be sent in a single batch to Chronicle via the HTTP protocol.
// This field defaults to 1000, which is the default Chronicle backend limit; any logs beyond the backend limit are dropped.
// Batches with more logs than this limit are split into multiple batches.
BatchLogCountLimitHTTP int `mapstructure:"batch_log_count_limit_http"`

// BatchRequestSizeLimitHTTP is the maximum batch request size, in bytes, that can be sent to Chronicle via the HTTP protocol.
// This field defaults to 1048576 (1 MiB), which is the default Chronicle backend limit.
// Setting this option above the Chronicle backend limit may result in rejected log batch requests.
BatchRequestSizeLimitHTTP int `mapstructure:"batch_request_size_limit_http"`
}

// Validate checks if the configuration is valid.
@@ -110,10 +130,6 @@ func (cfg *Config) Validate() error {
return fmt.Errorf("endpoint should not contain a protocol: %s", cfg.Endpoint)
}

if cfg.Protocol == protocolHTTPS {
if cfg.Location == "" {
return errors.New("location is required when protocol is https")
@@ -124,7 +140,28 @@ func (cfg *Config) Validate() error {
if cfg.Forwarder == "" {
return errors.New("forwarder is required when protocol is https")
}
if cfg.BatchLogCountLimitHTTP <= 0 {
return errors.New("positive batch count log limit is required when protocol is https")
}

if cfg.BatchRequestSizeLimitHTTP <= 0 {
return errors.New("positive batch request size limit is required when protocol is https")
}

return nil
}

if cfg.Protocol == protocolGRPC {
if cfg.BatchLogCountLimitGRPC <= 0 {
return errors.New("positive batch count log limit is required when protocol is grpc")
}

if cfg.BatchRequestSizeLimitGRPC <= 0 {
return errors.New("positive batch request size limit is required when protocol is grpc")
}

return nil
}

return fmt.Errorf("invalid protocol: %s", cfg.Protocol)
}
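
As a rough illustration of how the `mapstructure` tags on these new fields are consumed, the sketch below decodes a plain key/value map into a trimmed-down stand-in struct. The `batchLimits` type, the raw map, and the use of the standalone mapstructure library are all hypothetical; the real exporter starts from the defaults in `createDefaultConfig` and unmarshals user configuration into the full `Config` through the collector's configuration machinery.

```go
package main

import (
	"fmt"

	// Library choice is illustrative; the collector uses its own config unmarshaling.
	"github.com/mitchellh/mapstructure"
)

// batchLimits is a hypothetical, trimmed-down stand-in for the exporter Config,
// carrying only the new batch-limit fields and their mapstructure tags.
type batchLimits struct {
	BatchLogCountLimitGRPC    int `mapstructure:"batch_log_count_limit_grpc"`
	BatchRequestSizeLimitGRPC int `mapstructure:"batch_request_size_limit_grpc"`
	BatchLogCountLimitHTTP    int `mapstructure:"batch_log_count_limit_http"`
	BatchRequestSizeLimitHTTP int `mapstructure:"batch_request_size_limit_http"`
}

func main() {
	// Keys a user might set in the exporter's YAML configuration.
	raw := map[string]any{
		"batch_log_count_limit_grpc":    500,
		"batch_request_size_limit_grpc": 524288,
	}

	var limits batchLimits
	if err := mapstructure.Decode(raw, &limits); err != nil {
		panic(err)
	}

	// Keys not present in the map stay at their zero value here; in the exporter
	// they are populated from the defaults before Validate runs.
	fmt.Printf("%+v\n", limits)
}
```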
152 changes: 117 additions & 35 deletions exporter/chronicleexporter/config_test.go
@@ -29,44 +29,76 @@ func TestConfigValidate(t *testing.T) {
{
desc: "Both creds_file_path and creds are set",
config: &Config{
CredsFilePath: "/path/to/creds_file",
Creds: "creds_example",
LogType: "log_type_example",
Compression: noCompression,
BatchLogCountLimitGRPC: defaultBatchLogCountLimitGRPC,
BatchRequestSizeLimitGRPC: defaultBatchRequestSizeLimitGRPC,
},
expectedErr: "can only specify creds_file_path or creds",
},
{
desc: "Valid config with creds",
config: &Config{
Creds: "creds_example",
LogType: "log_type_example",
Compression: noCompression,
Protocol: protocolGRPC,
BatchLogCountLimitGRPC: defaultBatchLogCountLimitGRPC,
BatchRequestSizeLimitGRPC: defaultBatchRequestSizeLimitGRPC,
},
expectedErr: "",
},
{
desc: "Valid config with creds_file_path",
config: &Config{
CredsFilePath: "/path/to/creds_file",
LogType: "log_type_example",
Compression: noCompression,
Protocol: protocolGRPC,
BatchLogCountLimitGRPC: defaultBatchLogCountLimitGRPC,
BatchRequestSizeLimitGRPC: defaultBatchRequestSizeLimitGRPC,
},
expectedErr: "",
},
{
desc: "Valid config with raw log field",
config: &Config{
CredsFilePath: "/path/to/creds_file",
LogType: "log_type_example",
RawLogField: `body["field"]`,
Compression: noCompression,
Protocol: protocolGRPC,
BatchLogCountLimitGRPC: defaultBatchLogCountLimitGRPC,
BatchRequestSizeLimitGRPC: defaultBatchRequestSizeLimitGRPC,
},
expectedErr: "",
},
{
desc: "Invalid batch log count limit",
config: &Config{
Creds: "creds_example",
LogType: "log_type_example",
Compression: noCompression,
Protocol: protocolGRPC,
BatchLogCountLimitGRPC: 0,
BatchRequestSizeLimitGRPC: defaultBatchRequestSizeLimitGRPC,
},
expectedErr: "positive batch count log limit is required when protocol is grpc",
},
{
desc: "Invalid batch request size limit",
config: &Config{
Creds: "creds_example",
LogType: "log_type_example",
Compression: noCompression,
Protocol: protocolGRPC,
BatchLogCountLimitGRPC: defaultBatchLogCountLimitGRPC,
BatchRequestSizeLimitGRPC: 0,
},
expectedErr: "positive batch request size limit is required when protocol is grpc",
},
{
desc: "Invalid compression type",
config: &Config{
@@ -79,39 +111,89 @@
{
desc: "Protocol is https and location is empty",
config: &Config{
CredsFilePath: "/path/to/creds_file",
LogType: "log_type_example",
Protocol: protocolHTTPS,
Compression: noCompression,
Forwarder: "forwarder_example",
Project: "project_example",
BatchRequestSizeLimitHTTP: defaultBatchRequestSizeLimitHTTP,
BatchLogCountLimitHTTP: defaultBatchLogCountLimitHTTP,
},
expectedErr: "location is required when protocol is https",
},
{
desc: "Protocol is https and forwarder is empty",
config: &Config{
CredsFilePath: "/path/to/creds_file",
LogType: "log_type_example",
Protocol: protocolHTTPS,
Compression: noCompression,
Project: "project_example",
Location: "location_example",
BatchRequestSizeLimitHTTP: defaultBatchRequestSizeLimitHTTP,
BatchLogCountLimitHTTP: defaultBatchLogCountLimitHTTP,
},
expectedErr: "forwarder is required when protocol is https",
},
{
desc: "Protocol is https and project is empty",
config: &Config{
CredsFilePath: "/path/to/creds_file",
LogType: "log_type_example",
Protocol: protocolHTTPS,
Compression: noCompression,
Location: "location_example",
Forwarder: "forwarder_example",
BatchRequestSizeLimitHTTP: defaultBatchRequestSizeLimitHTTP,
BatchLogCountLimitHTTP: defaultBatchLogCountLimitHTTP,
},
expectedErr: "project is required when protocol is https",
},
{
desc: "Protocol is https and http batch log count limit is 0",
config: &Config{
CredsFilePath: "/path/to/creds_file",
LogType: "log_type_example",
Protocol: protocolHTTPS,
Compression: noCompression,
Project: "project_example",
Location: "location_example",
Forwarder: "forwarder_example",
BatchRequestSizeLimitHTTP: defaultBatchRequestSizeLimitHTTP,
BatchLogCountLimitHTTP: 0,
},
expectedErr: "positive batch count log limit is required when protocol is https",
},
{
desc: "Protocol is https and http batch request size limit is 0",
config: &Config{
CredsFilePath: "/path/to/creds_file",
LogType: "log_type_example",
Protocol: protocolHTTPS,
Compression: noCompression,
Project: "project_example",
Location: "location_example",
Forwarder: "forwarder_example",
BatchRequestSizeLimitHTTP: 0,
BatchLogCountLimitHTTP: defaultBatchLogCountLimitHTTP,
},
expectedErr: "positive batch request size limit is required when protocol is https",
},
{
desc: "Valid https config",
config: &Config{
CredsFilePath: "/path/to/creds_file",
LogType: "log_type_example",
Protocol: protocolHTTPS,
Compression: noCompression,
Project: "project_example",
Location: "location_example",
Forwarder: "forwarder_example",
BatchRequestSizeLimitHTTP: defaultBatchRequestSizeLimitHTTP,
BatchLogCountLimitHTTP: defaultBatchLogCountLimitHTTP,
},
},
}

for _, tc := range testCases {
8 changes: 5 additions & 3 deletions exporter/chronicleexporter/exporter.go
@@ -257,9 +257,11 @@ func (ce *chronicleExporter) logsHTTPDataPusher(ctx context.Context, ld plog.Log
return fmt.Errorf("marshal logs: %w", err)
}

for logType, logTypePayloads := range payloads {
for _, payload := range logTypePayloads {
if err := ce.uploadToChronicleHTTP(ctx, payload, logType); err != nil {
return fmt.Errorf("upload to chronicle: %w", err)
}
}
}

25 changes: 17 additions & 8 deletions exporter/chronicleexporter/factory.go
@@ -35,17 +35,26 @@ func NewFactory() exporter.Factory {
exporter.WithLogs(createLogsExporter, metadata.LogsStability))
}

const defaultBatchLogCountLimitGRPC = 1000
const defaultBatchRequestSizeLimitGRPC = 1048576
const defaultBatchLogCountLimitHTTP = 1000
const defaultBatchRequestSizeLimitHTTP = 1048576

// createDefaultConfig creates the default configuration for the exporter.
func createDefaultConfig() component.Config {
return &Config{
Protocol: protocolGRPC,
TimeoutConfig: exporterhelper.NewDefaultTimeoutConfig(),
QueueConfig: exporterhelper.NewDefaultQueueConfig(),
BackOffConfig: configretry.NewDefaultBackOffConfig(),
OverrideLogType: true,
Endpoint: baseEndpoint,
Compression: noCompression,
CollectAgentMetrics: true,
BatchLogCountLimitGRPC: defaultBatchLogCountLimitGRPC,
BatchRequestSizeLimitGRPC: defaultBatchRequestSizeLimitGRPC,
BatchLogCountLimitHTTP: defaultBatchLogCountLimitHTTP,
BatchRequestSizeLimitHTTP: defaultBatchRequestSizeLimitHTTP,
}
}

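
To tie the defaults and validation together, here is a hypothetical usage sketch: it obtains the default config from the factory, fills in credentials, overrides one batch limit, and re-validates. The import path is an assumption (it depends on the module this package lives in); `NewFactory`, `CreateDefaultConfig`, the exported `Config` fields, and `Validate` follow the code shown above.

```go
package main

import (
	"fmt"

	// Assumed import path; adjust to wherever the chronicleexporter package lives.
	"github.com/observiq/bindplane-agent/exporter/chronicleexporter"
)

func main() {
	factory := chronicleexporter.NewFactory()

	// CreateDefaultConfig returns a *Config with the batch limits already set to
	// the defaults above (1000 logs / 1048576 bytes for each protocol).
	cfg := factory.CreateDefaultConfig().(*chronicleexporter.Config)
	cfg.Creds = "creds_example"
	cfg.LogType = "log_type_example"

	// Tighten the gRPC batch size limit; Validate rejects non-positive limits.
	cfg.BatchRequestSizeLimitGRPC = 512 * 1024
	if err := cfg.Validate(); err != nil {
		fmt.Println("invalid config:", err)
		return
	}
	fmt.Println("config ok")
}
```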
