
Commit

Apply suggestions from code review
Co-authored-by: Oliver Howell <[email protected]>
pollett and oliverhowell authored Oct 21, 2024
1 parent 4311d1b commit 5a86e2a
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions docs/modules/deploy/pages/performance.adoc
@@ -16,17 +16,17 @@ Doubling the available resources can roughly half your processing latency and in

== Multiple instances

-{short-product-name} should be run across multiple instances in your production environments, we recommend at least 3 to provide job resilience and maintain the quorum in the Hazelcast cluster.
+{short-product-name} should be run across multiple instances in your production environments. We recommend at least 3 instances to provide job resilience and maintain the quorum in the Hazelcast cluster.

This will provide you with redundancy in case of node failure and allow multiple jobs to be spread across the nodes in the cluster.

{short-product-name} can be scaled to run on extra nodes as you execute more jobs that need resources to run in parallel.
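The multi-instance guidance above could be sketched as a minimal Docker Compose fragment. This is an illustrative assumption only: the service name, image name, and port are placeholders, not values from the {short-product-name} deployment templates.

```yaml
# Hypothetical Compose sketch: three replicas so the Hazelcast
# cluster keeps its quorum if a single node fails.
services:
  app:                                 # assumed service name
    image: example/short-product-name  # placeholder image
    deploy:
      replicas: 3                      # at least 3 instances, per the guidance above
```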

-See our xref:deploy:production-deployments.adoc[production deployment] templates to configure {short-product-name} with multiple nodes.
+For more detail on templates to configure {short-product-name} with multiple nodes, see xref:deploy:production-deployments.adoc[].

== Tuning

-When running {short-product-name} in xref:deploy:production-deployments.adoc[production], ensure you've configured an external xref:query:observability.adoc#performance-metrics--prometheus[Prometheus] server to capture your metrics, then {short-product-name} will graph them on the Endpoint page. From there you can see the throughput of your query and the latency of each message processed.
+When running {short-product-name} in xref:deploy:production-deployments.adoc[production], ensure you've configured an external xref:query:observability.adoc#performance-metrics--prometheus[Prometheus] server to capture your metrics, which will enable {short-product-name} to graph them on the Endpoint page. From there you can see the throughput of your query and the latency of each message processed.

image:perf-metrics.png[]
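The external Prometheus server described above might be pointed at the application with a scrape config along these lines. The job name, target host, and port are assumptions for illustration; the actual metrics endpoint is documented in the observability page referenced above.

```yaml
# Hypothetical Prometheus scrape config for the product's
# metrics endpoint; adjust targets to match your deployment.
scrape_configs:
  - job_name: short-product-name     # assumed job name
    scrape_interval: 15s
    static_configs:
      - targets: ["app-host:9090"]   # placeholder host:port
```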

