From b734629ce8e3359519cc10f521c5b5c5de7c07a6 Mon Sep 17 00:00:00 2001
From: Ruben van Staden
Date: Fri, 13 Dec 2024 09:13:07 -0500
Subject: [PATCH] add exact hardware profile used in benchmarking

---
 .../apm/troubleshooting/processing-performance.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
index 30e173befc..9fce721e69 100644
--- a/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
+++ b/docs/en/observability/apm/troubleshooting/processing-performance.asciidoc
@@ -7,7 +7,7 @@ agent and server settings, versions, and protocol.
 We tested several scenarios to help you understand how to size the APM Server so that
 it can keep up with the load that your Elastic APM agents are sending:
 
-* Using the default hardware template on AWS, GCP and Azure on {ecloud}.
+* Using the _CPU Optimized_ hardware template on AWS, GCP and Azure on {ecloud}; see link:https://www.elastic.co/guide/en/cloud/current/ec-configure-deployment-settings.html#ec-hardware-profiles[Hardware Profiles].
 * For each hardware template, testing with several sizes: 1 GB, 4 GB, 8 GB, and 32 GB.
 * For each size, using a fixed number of APM agents: 10 agents for 1 GB, 30 agents for 4 GB, 60 agents for 8 GB, and 240 agents for 32 GB.
 * In all scenarios, using medium sized events. Events include