From 64c3651d4bc5879885c223a8158082d8768ee799 Mon Sep 17 00:00:00 2001
From: Pierre Gayvallet
Date: Wed, 18 Dec 2024 18:02:26 +0100
Subject: [PATCH 1/2] o11y AI assistant: add doc entry about the product doc (#4686)

(cherry picked from commit 5b4a5e2b91f95c23acc42516b38b510ab36b56e2)

# Conflicts:
#	docs/en/serverless/ai-assistant/ai-assistant.asciidoc
---
 .../observability-ai-assistant.asciidoc       |  13 +
 .../ai-assistant/ai-assistant.asciidoc        | 363 ++++++++++++++++++
 2 files changed, 376 insertions(+)
 create mode 100644 docs/en/serverless/ai-assistant/ai-assistant.asciidoc

diff --git a/docs/en/observability/observability-ai-assistant.asciidoc b/docs/en/observability/observability-ai-assistant.asciidoc
index 7dac0e3bc3..43780fcc23 100644
--- a/docs/en/observability/observability-ai-assistant.asciidoc
+++ b/docs/en/observability/observability-ai-assistant.asciidoc
@@ -345,6 +345,19 @@ The AI Assistant Settings page contains the following tabs:
 * *Settings*: Configures the main AI Assistant settings, which are explained directly within the interface.
 * *Knowledge base*: Manages <>.
 * *Search Connectors*: Provides a link to {kib} *Search* -> *Content* -> *Connectors* UI for connectors configuration.
+
+[discrete]
+[[obs-ai-product-documentation]]
+== Elastic documentation for the AI Assistant
+
+You can make the official Elastic documentation available to the AI Assistant, which significantly improves
+its efficiency and accuracy when answering questions about the Elastic Stack and Elastic products.
+
+To enable the feature, go to the *Settings* tab of the AI Assistant Settings page and use the *Install Elastic Documentation* action.
+
+IMPORTANT: Installing the product documentation in air-gapped environments requires specific installation and configuration instructions,
+which are available in the {kibana-ref}/ai-assistant-settings-kb.html[{kib} AI Assistant settings documentation].
+
 [discrete]
 [[obs-ai-known-issues]]
 == Known issues
diff --git a/docs/en/serverless/ai-assistant/ai-assistant.asciidoc b/docs/en/serverless/ai-assistant/ai-assistant.asciidoc
new file mode 100644
index 0000000000..7e6b77a092
--- /dev/null
+++ b/docs/en/serverless/ai-assistant/ai-assistant.asciidoc
@@ -0,0 +1,363 @@
+[[observability-ai-assistant]]
+= {observability} AI Assistant
+
+// :keywords: serverless, observability, overview
+
+The AI Assistant uses generative AI to provide:
+
+* **Chat**: Have conversations with the AI Assistant. Chat uses function calling to request, analyze, and visualize your data.
+* **Contextual insights**: Open prompts throughout {obs-serverless} that explain errors and messages and suggest remediation.
+
+[role="screenshot"]
+image::images/ai-assistant-overview.gif[Observability AI assistant preview, 60%]
+
+The AI Assistant integrates with your large language model (LLM) provider through our supported Elastic connectors:
+
+* {kibana-ref}/openai-action-type.html[OpenAI connector] for OpenAI or Azure OpenAI Service.
+* {kibana-ref}/bedrock-action-type.html[Amazon Bedrock connector] for Amazon Bedrock, specifically for the Claude models.
+* {kibana-ref}/gemini-action-type.html[Google Gemini connector] for Google Gemini.
+
+[IMPORTANT]
+====
+The AI Assistant is powered by an integration with your large language model (LLM) provider.
+LLMs are known to sometimes present incorrect information as if it's correct.
+Elastic supports configuration and connection to the LLM provider and your knowledge base,
+but is not responsible for the LLM's responses.
+====
+
+[IMPORTANT]
+====
+Also, the data you provide to the Observability AI assistant is _not_ anonymized, and is stored and processed by the third-party AI provider. This includes any data used in conversations for analysis or context, such as alert or event data, detection rule configurations, and queries. Therefore, be careful about sharing any confidential or sensitive details while using this feature.
+====
+
+[discrete]
+[[observability-ai-assistant-requirements]]
+== Requirements
+
+The AI assistant requires the following:
+
+* An account with a third-party generative AI provider that preferably supports function calling.
+If your AI provider does not support function calling, you can configure AI Assistant settings under **Project settings** → **Management** → **AI Assistant for Observability Settings** to simulate function calling, but this might affect performance.
++
+Refer to the {kibana-ref}/action-types.html[connector documentation] for your provider to learn about supported and default models.
+* The knowledge base requires a 4 GB {ml} node.
+
+[IMPORTANT]
+====
+The free tier offered by third-party generative AI providers may not be sufficient for the proper functioning of the AI assistant.
+In most cases, a paid subscription to one of the supported providers is required.
+The Observability AI assistant doesn't support connecting to a private LLM, and Elastic doesn't recommend using private LLMs with it.
+====
+
+[discrete]
+[[observability-ai-assistant-your-data-and-the-ai-assistant]]
+== Your data and the AI Assistant
+
+Elastic does not use customer data for model training. This includes anything you send the model, such as alert or event data, detection rule configurations, queries, and prompts. However, any data you provide to the AI Assistant will be processed by the third-party provider you chose when setting up the connector as part of the assistant setup.
+
+Elastic does not control third-party tools, and assumes no responsibility or liability for their content, operation, or use, nor for any loss or damage that may arise from your using such tools. Please exercise caution when using AI tools with personal, sensitive, or confidential information. Any data you submit may be used by the provider for AI training or other purposes. There is no guarantee that the provider will keep any information you provide secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use.
+
+[discrete]
+[[observability-ai-assistant-set-up-the-ai-assistant]]
+== Set up the AI Assistant
+
+To set up the AI Assistant:
+
+. Create an authentication key with your AI provider to authenticate requests from the AI Assistant. You'll use this in the next step. Refer to your provider's documentation for information about creating authentication keys:
++
+** https://platform.openai.com/docs/api-reference[OpenAI API keys]
+** https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference[Azure OpenAI Service API keys]
+** https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html[Amazon Bedrock authentication keys and secrets]
+** https://cloud.google.com/iam/docs/keys-list-get[Google Gemini service account keys]
+. From **Project settings** → **Management** → **Connectors**, create a connector for your AI provider (a scripted alternative is sketched after these steps):
++
+** {kibana-ref}/openai-action-type.html[OpenAI]
+** {kibana-ref}/bedrock-action-type.html[Amazon Bedrock]
+** {kibana-ref}/gemini-action-type.html[Google Gemini]
+. Authenticate communication between {obs-serverless} and the AI provider by providing the following information:
++
+.. In the **URL** field, enter the AI provider's API endpoint URL.
+.. Under **Authentication**, enter the key or secret you created in the previous step.
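+
+If you prefer to script this step, you can also create the connector with the {kibana-ref}/create-connector-api.html[{kib} create connector API]. The following is a minimal sketch that creates an OpenAI connector from **Developer Tools** → **Console**; the connector type ID (`.gen-ai`), configuration fields, and model shown here are illustrative assumptions, so check the connector documentation for the values that apply to your provider and version:
+
+[source,console]
+----
+POST kbn:/api/actions/connector
+{
+  "name": "Observability AI Assistant (OpenAI)",
+  "connector_type_id": ".gen-ai",
+  "config": {
+    "apiProvider": "OpenAI",
+    "apiUrl": "https://api.openai.com/v1/chat/completions",
+    "defaultModel": "gpt-4o"
+  },
+  "secrets": {
+    "apiKey": "<your-api-key>"
+  }
+}
+----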
+
+[discrete]
+[[observability-ai-assistant-add-data-to-the-ai-assistant-knowledge-base]]
+== Add data to the AI Assistant knowledge base
+
+[IMPORTANT]
+====
+**If you started using the AI Assistant in technical preview**,
+any knowledge base articles you created using ELSER v1 will need to be reindexed or upgraded before they can be used.
+Going forward, you must create knowledge base articles using ELSER v2.
+You can either:
+
+* Clear all old knowledge base articles manually and reindex them.
+* Upgrade all knowledge base articles indexed with ELSER v1 to ELSER v2 using a https://github.com/elastic/elasticsearch-labs/blob/main/notebooks/model-upgrades/upgrading-index-to-use-elser.ipynb[Python script].
+====
+
+The AI Assistant uses {ml-docs}/ml-nlp-elser.html[ELSER], Elastic's semantic search engine, to recall data from its internal knowledge base index to create retrieval augmented generation (RAG) responses. Adding data such as runbooks, GitHub issues, internal documentation, and Slack messages to the knowledge base gives the AI Assistant context to provide more specific assistance.
+
+[NOTE]
+====
+Your AI provider may collect telemetry when using the AI Assistant. Contact your AI provider for information on how data is collected.
+====
+
+You can add information to the knowledge base by asking the AI Assistant to remember something while chatting (for example, "remember this for next time"). The assistant will create a summary of the information and add it to the knowledge base.
+
+You can also add external data to the knowledge base either in the Project Settings UI or using the {es} Index API.
+
+[discrete]
+[[observability-ai-assistant-use-the-ui]]
+=== Use the UI
+
+To add external data to the knowledge base in the Project Settings UI:
+
+. Go to **Project Settings**.
+. In the _Other_ section, click **AI assistant for Observability settings**.
+. Select the **Elastic AI Assistant for Observability**.
+. Switch to the **Knowledge base** tab.
+. Click the **New entry** button, and choose either:
++
+** **Single entry**: Write content for a single entry in the UI.
+** **Bulk import**: Upload a newline-delimited JSON (`ndjson`) file containing a list of entries to add to the knowledge base.
+Each object should conform to the following format (a sample file follows this list):
++
+[source,json]
+----
+{
+  "id": "a_unique_human_readable_id",
+  "text": "Contents of item"
+}
+----
+
+[discrete]
+[[observability-ai-assistant-use-the-es-index-api]]
+=== Use the {es} Index API
+
+. Ingest external data (GitHub issues, Markdown files, Jira tickets, text files, etc.) into {es} using the {es} {ref}/docs-index_.html[Index API].
+. Reindex your data into the AI Assistant's knowledge base index by completing the following query in **Developer Tools** → **Console**. Update the following fields before reindexing:
++
+** `InternalDocsIndex`: Name of the index where your internal documents are stored.
+** `text_field`: Name of the field containing your internal documents' text.
+** `timestamp`: Name of the timestamp field in your internal documents.
+** `public`: If `true`, the document is available to all users with access to your Observability project. If `false`, the document is restricted to the user indicated in the following `user.name` field.
+** `user.name` (optional): If defined, restricts the internal document's availability to a specific user.
+** Optionally, add a query filter to reindex only specific documents.
+
+[source,console]
+----
+POST _reindex
+{
+  "source": {
+    "index": "<InternalDocsIndex>",
+    "_source": [
+      "<text_field>",
+      "<timestamp>",
+      "namespace",
+      "is_correction",
+      "public",
+      "confidence"
+    ]
+  },
+  "dest": {
+    "index": ".kibana-observability-ai-assistant-kb-000001",
+    "pipeline": ".kibana-observability-ai-assistant-kb-ingest-pipeline"
+  },
+  "script": {
+    "inline": "ctx._source.text = ctx._source.remove(\"<text_field>\");ctx._source.namespace=\"<namespace>\";ctx._source.is_correction=false;ctx._source.public=<public>;ctx._source.confidence=\"high\";ctx._source['@timestamp'] = ctx._source.remove(\"<timestamp>\");ctx._source['user.name'] = \"<user.name>\""
+  }
+}
+----
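+
+As a concrete sketch, assume your internal documents live in a hypothetical `internal-docs` index with a `content` text field and a `created_at` timestamp field, and should be visible only to the user `jane.doe`. The completed request would then look like this:
+
+[source,console]
+----
+POST _reindex
+{
+  "source": {
+    "index": "internal-docs",
+    "_source": [
+      "content",
+      "created_at",
+      "namespace",
+      "is_correction",
+      "public",
+      "confidence"
+    ]
+  },
+  "dest": {
+    "index": ".kibana-observability-ai-assistant-kb-000001",
+    "pipeline": ".kibana-observability-ai-assistant-kb-ingest-pipeline"
+  },
+  "script": {
+    "inline": "ctx._source.text = ctx._source.remove(\"content\");ctx._source.namespace=\"default\";ctx._source.is_correction=false;ctx._source.public=false;ctx._source.confidence=\"high\";ctx._source['@timestamp'] = ctx._source.remove(\"created_at\");ctx._source['user.name'] = \"jane.doe\""
+  }
+}
+----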
+
+[discrete]
+[[observability-ai-assistant-interact-with-the-ai-assistant]]
+== Interact with the AI Assistant
+
+You can chat with the AI Assistant or interact with contextual insights located throughout {obs-serverless}.
+See the following sections for more on interacting with the AI Assistant.
+
+[TIP]
+====
+After every answer the LLM provides, let us know if the answer was helpful.
+Your feedback helps us improve the AI Assistant!
+====
+
+[discrete]
+[[observability-ai-assistant-chat-with-the-assistant]]
+=== Chat with the assistant
+
+Click the AI Assistant button (image:images/ai-assistant-button.png[AI Assistant icon]) in the upper-right corner where available to start the chat.
+
+This opens the AI Assistant flyout, where you can ask the assistant questions about your instance:
+
+[role="screenshot"]
+image::images/ai-assistant-chat.png[Observability AI assistant chat, 60%]
+
+[IMPORTANT]
+====
+Asking questions about your data requires function calling, which enables LLMs to reliably interact with third-party generative AI providers to perform searches or run advanced functions using customer data.
+
+When the Observability AI Assistant performs searches in the cluster, the queries are run with the same level of permissions as the user.
+====
+
+[discrete]
+[[observability-ai-assistant-suggest-functions]]
+=== Suggest functions
+
+beta::[]
+
+The AI Assistant uses several functions to include relevant context in the chat conversation through text, data, and visual components. Both you and the AI Assistant can suggest functions. You can also edit the AI Assistant's function suggestions and inspect function responses. For example, you could use the `kibana` function to call a {kib} API on your behalf.
+
+You can suggest the following functions:
+
+|===
+| Function | Description
+
+| `alerts`
+| Get alerts for {obs-serverless}.
+
+| `elasticsearch`
+| Call {es} APIs on your behalf.
+
+| `kibana`
+| Call {kib} APIs on your behalf.
+
+| `summarize`
+| Summarize parts of the conversation.
+
+| `visualize_query`
+| Visualize charts for ES\|QL queries.
+|===
+
+Additional functions are available when your cluster has APM data:
+
+|===
+| Function | Description
+
+| `get_apm_correlations`
+| Get field values that are more prominent in the foreground set than the background set. This can be useful in determining which attributes (such as `error.message`, `service.node.name`, or `transaction.name`) are contributing to, for instance, a higher latency. Another option is a time-based comparison, where you compare before and after a change point.
+
+| `get_apm_downstream_dependencies`
+| Get the downstream dependencies (services or uninstrumented backends) for a service. Map the downstream dependency name to a service by returning both `span.destination.service.resource` and `service.name`. Use this to drill down further if needed.
+
+| `get_apm_error_document`
+| Get a sample error document based on the grouping name. This also includes the stacktrace of the error, which might hint at the cause.
+
+| `get_apm_service_summary`
+| Get a summary of a single service, including the language, service version, deployments, the environments, and the infrastructure that it is running in. For example, the number of pods and a list of their downstream dependencies. It also returns active alerts and anomalies.
+
+| `get_apm_services_list`
+| Get the list of monitored services, their health statuses, and alerts.
+
+| `get_apm_timeseries`
+| Display different APM metrics (such as throughput, failure rate, or latency) for any service or all services and any or all of their dependencies. Displayed both as a time series and as a single statistic. Additionally, the function returns any changes, such as spikes, step and trend changes, or dips. You can also use it to compare data by requesting two different time ranges, or, for example, two different service versions.
+|===
+
+[discrete]
+[[observability-ai-assistant-use-contextual-prompts]]
+=== Use contextual prompts
+
+AI Assistant contextual prompts throughout {obs-serverless} provide the following information:
+
+* **Alerts**: Provides possible causes and remediation suggestions for log rate changes.
+* **Application performance monitoring (APM)**: Explains APM errors and provides remediation suggestions.
+* **Logs**: Explains log messages and generates search patterns to find similar issues.
+
+// Not included in initial serverless launch
+
+// - **Universal Profiling**: explains the most expensive libraries and functions in your fleet and provides optimization suggestions.
+
+// - **Infrastructure Observability**: explains the processes running on a host.
+
+For example, in the log details, you'll see prompts for **What's this message?** and **How do I find similar log messages?**:
+
+[role="screenshot"]
+image::images/ai-assistant-logs-prompts.png[Observability AI assistant example prompts for logs, 60%]
+
+Clicking a prompt generates a message specific to that log entry.
+You can continue a conversation from a contextual prompt by clicking **Start chat** to open the AI Assistant chat.
+
+[role="screenshot"]
+image::images/ai-assistant-logs.png[Observability AI assistant example, 60%]
+
+[discrete]
+[[observability-ai-assistant-add-the-ai-assistant-connector-to-alerting-workflows]]
+=== Add the AI Assistant connector to alerting workflows
+
+You can use the {kibana-ref}/obs-ai-assistant-action-type.html[Observability AI Assistant connector] to add AI-generated insights and custom actions to your alerting workflows.
+To do this:
+
+. <> and specify the conditions that must be met for the alert to fire.
+. Under **Actions**, select the **Observability AI Assistant** connector type.
+. In the **Connector** list, select the AI connector you created when you set up the assistant.
+. In the **Message** field, specify the message to send to the assistant:
+
+[role="screenshot"]
+image::images/obs-ai-assistant-action-high-cpu.png[Add an Observability AI assistant action while creating a rule in the Observability UI]
+
+You can ask the assistant to generate a report of the alert that fired,
+recall any information or potential resolutions of past occurrences stored in the knowledge base,
+provide troubleshooting guidance and resolution steps,
+and also include other active alerts that may be related.
+As a last step, you can ask the assistant to trigger an action,
+such as sending the report (or any other message) to a Slack webhook.
+
+[NOTE]
+====
+Currently you can only send messages to Slack, email, Jira, PagerDuty, or a webhook.
+Additional actions will be added in the future.
+====
+
+When the alert fires, contextual details about the event—such as when the alert fired,
+the service or host impacted, and the threshold breached—are sent to the AI Assistant,
+along with the message provided during configuration.
+The AI Assistant runs the tasks requested in the message and creates a conversation you can use to chat with the assistant:
+
+[role="screenshot"]
+image::images/obs-ai-assistant-output.png[AI Assistant conversation created in response to an alert]
+
+[IMPORTANT]
+====
+Conversations created by the AI Assistant are public and accessible to every user with permissions to use the assistant.
+====
+
+It might take a minute or two for the AI Assistant to process the message and create the conversation.
+
+Note that overly broad prompts may result in the request exceeding token limits.
+For more information, refer to <>.
+Also, attempting to analyze several alerts in a single connector execution may cause you to exceed the function call limit.
+If this happens, modify the message specified in the connector configuration to avoid exceeding limits.
+
+When asked to send a message to another connector, such as Slack,
+the AI Assistant attempts to include a link to the generated conversation.
+
+[role="screenshot"]
+image::images/obs-ai-assistant-slack-message.png[Message sent to Slack by the AI Assistant includes a link to the conversation]
+
+The Observability AI Assistant connector is called when the alert fires and when it recovers.
+
+To learn more about alerting, actions, and connectors, refer to <>.
+
+[discrete]
+[[obs-ai-product-documentation]]
+== Elastic documentation for the AI Assistant
+
+You can make the official Elastic documentation available to the AI Assistant, which significantly improves
+its efficiency and accuracy when answering questions about the Elastic Stack and Elastic products.
+
+To enable the feature, go to the *Settings* tab of the AI Assistant Settings page and use the *Install Elastic Documentation* action.
+
+[discrete]
+[[observability-ai-assistant-known-issues]]
+== Known issues
+
+[discrete]
+[[token-limits]]
+=== Token limits
+
+Most LLMs have a set number of tokens they can manage in a single conversation.
+When you reach the token limit, the LLM will throw an error, and Elastic will display a "Token limit reached" error.
+The exact number of tokens that the LLM can support depends on the LLM provider and model you're using.
+If you are using an OpenAI connector, you can monitor token usage in the **OpenAI Token Usage** dashboard.
+For more information, refer to the {kibana-ref}/openai-action-type.html#openai-connector-token-dashboard[OpenAI Connector documentation].
From c2e3cece18d5eb8a7089b6452838f360ca7c915a Mon Sep 17 00:00:00 2001
From: "github-actions[bot]"
Date: Wed, 18 Dec 2024 17:04:24 +0000
Subject: [PATCH 2/2] Delete docs/en/serverless directory

---
 .../ai-assistant/ai-assistant.asciidoc        | 363 ------------------
 1 file changed, 363 deletions(-)
 delete mode 100644 docs/en/serverless/ai-assistant/ai-assistant.asciidoc

diff --git a/docs/en/serverless/ai-assistant/ai-assistant.asciidoc b/docs/en/serverless/ai-assistant/ai-assistant.asciidoc
deleted file mode 100644
index 7e6b77a092..0000000000
--- a/docs/en/serverless/ai-assistant/ai-assistant.asciidoc
+++ /dev/null
@@ -1,363 +0,0 @@
-[[observability-ai-assistant]]
-= {observability} AI Assistant
-
-// :keywords: serverless, observability, overview
-
-The AI Assistant uses generative AI to provide:
-
-* **Chat**: Have conversations with the AI Assistant. Chat uses function calling to request, analyze, and visualize your data.
-* **Contextual insights**: Open prompts throughout {obs-serverless} that explain errors and messages and suggest remediation.
-
-[role="screenshot"]
-image::images/ai-assistant-overview.gif[Observability AI assistant preview, 60%]
-
-The AI Assistant integrates with your large language model (LLM) provider through our supported Elastic connectors:
-
-* {kibana-ref}/openai-action-type.html[OpenAI connector] for OpenAI or Azure OpenAI Service.
-* {kibana-ref}/bedrock-action-type.html[Amazon Bedrock connector] for Amazon Bedrock, specifically for the Claude models.
-* {kibana-ref}/gemini-action-type.html[Google Gemini connector] for Google Gemini.
-
-[IMPORTANT]
-====
-The AI Assistant is powered by an integration with your large language model (LLM) provider.
-LLMs are known to sometimes present incorrect information as if it's correct.
-Elastic supports configuration and connection to the LLM provider and your knowledge base,
-but is not responsible for the LLM's responses.
-====
-
-[IMPORTANT]
-====
-Also, the data you provide to the Observability AI assistant is _not_ anonymized, and is stored and processed by the third-party AI provider. This includes any data used in conversations for analysis or context, such as alert or event data, detection rule configurations, and queries. Therefore, be careful about sharing any confidential or sensitive details while using this feature.
-====
-
-[discrete]
-[[observability-ai-assistant-requirements]]
-== Requirements
-
-The AI assistant requires the following:
-
-* An account with a third-party generative AI provider that preferably supports function calling.
-If your AI provider does not support function calling, you can configure AI Assistant settings under **Project settings** → **Management** → **AI Assistant for Observability Settings** to simulate function calling, but this might affect performance.
-+
-Refer to the {kibana-ref}/action-types.html[connector documentation] for your provider to learn about supported and default models.
-* The knowledge base requires a 4 GB {ml} node.
-
-[IMPORTANT]
-====
-The free tier offered by third-party generative AI providers may not be sufficient for the proper functioning of the AI assistant.
-In most cases, a paid subscription to one of the supported providers is required.
-The Observability AI assistant doesn't support connecting to a private LLM, and Elastic doesn't recommend using private LLMs with it.
-====
-
-[discrete]
-[[observability-ai-assistant-your-data-and-the-ai-assistant]]
-== Your data and the AI Assistant
-
-Elastic does not use customer data for model training. This includes anything you send the model, such as alert or event data, detection rule configurations, queries, and prompts. However, any data you provide to the AI Assistant will be processed by the third-party provider you chose when setting up the connector as part of the assistant setup.
-
-Elastic does not control third-party tools, and assumes no responsibility or liability for their content, operation, or use, nor for any loss or damage that may arise from your using such tools. Please exercise caution when using AI tools with personal, sensitive, or confidential information. Any data you submit may be used by the provider for AI training or other purposes. There is no guarantee that the provider will keep any information you provide secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use.
-
-[discrete]
-[[observability-ai-assistant-set-up-the-ai-assistant]]
-== Set up the AI Assistant
-
-To set up the AI Assistant:
-
-. Create an authentication key with your AI provider to authenticate requests from the AI Assistant. You'll use this in the next step. Refer to your provider's documentation for information about creating authentication keys:
-+
-** https://platform.openai.com/docs/api-reference[OpenAI API keys]
-** https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference[Azure OpenAI Service API keys]
-** https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html[Amazon Bedrock authentication keys and secrets]
-** https://cloud.google.com/iam/docs/keys-list-get[Google Gemini service account keys]
-. From **Project settings** → **Management** → **Connectors**, create a connector for your AI provider:
-+
-** {kibana-ref}/openai-action-type.html[OpenAI]
-** {kibana-ref}/bedrock-action-type.html[Amazon Bedrock]
-** {kibana-ref}/gemini-action-type.html[Google Gemini]
-. Authenticate communication between {obs-serverless} and the AI provider by providing the following information:
-+
-.. In the **URL** field, enter the AI provider's API endpoint URL.
-.. Under **Authentication**, enter the key or secret you created in the previous step.
-
-[discrete]
-[[observability-ai-assistant-add-data-to-the-ai-assistant-knowledge-base]]
-== Add data to the AI Assistant knowledge base
-
-[IMPORTANT]
-====
-**If you started using the AI Assistant in technical preview**,
-any knowledge base articles you created using ELSER v1 will need to be reindexed or upgraded before they can be used.
-Going forward, you must create knowledge base articles using ELSER v2.
-You can either:
-
-* Clear all old knowledge base articles manually and reindex them.
-* Upgrade all knowledge base articles indexed with ELSER v1 to ELSER v2 using a https://github.com/elastic/elasticsearch-labs/blob/main/notebooks/model-upgrades/upgrading-index-to-use-elser.ipynb[Python script].
-====
-
-The AI Assistant uses {ml-docs}/ml-nlp-elser.html[ELSER], Elastic's semantic search engine, to recall data from its internal knowledge base index to create retrieval augmented generation (RAG) responses. Adding data such as runbooks, GitHub issues, internal documentation, and Slack messages to the knowledge base gives the AI Assistant context to provide more specific assistance.
-
-[NOTE]
-====
-Your AI provider may collect telemetry when using the AI Assistant. Contact your AI provider for information on how data is collected.
-====
-
-You can add information to the knowledge base by asking the AI Assistant to remember something while chatting (for example, "remember this for next time"). The assistant will create a summary of the information and add it to the knowledge base.
-
-You can also add external data to the knowledge base either in the Project Settings UI or using the {es} Index API.
-
-[discrete]
-[[observability-ai-assistant-use-the-ui]]
-=== Use the UI
-
-To add external data to the knowledge base in the Project Settings UI:
-
-. Go to **Project Settings**.
-. In the _Other_ section, click **AI assistant for Observability settings**.
-. Select the **Elastic AI Assistant for Observability**.
-. Switch to the **Knowledge base** tab.
-. Click the **New entry** button, and choose either:
-+
-** **Single entry**: Write content for a single entry in the UI.
-** **Bulk import**: Upload a newline-delimited JSON (`ndjson`) file containing a list of entries to add to the knowledge base.
-Each object should conform to the following format:
-+
-[source,json]
-----
-{
-  "id": "a_unique_human_readable_id",
-  "text": "Contents of item"
-}
-----
-
-[discrete]
-[[observability-ai-assistant-use-the-es-index-api]]
-=== Use the {es} Index API
-
-. Ingest external data (GitHub issues, Markdown files, Jira tickets, text files, etc.) into {es} using the {es} {ref}/docs-index_.html[Index API].
-. Reindex your data into the AI Assistant's knowledge base index by completing the following query in **Developer Tools** → **Console**. Update the following fields before reindexing:
-+
-** `InternalDocsIndex`: Name of the index where your internal documents are stored.
-** `text_field`: Name of the field containing your internal documents' text.
-** `timestamp`: Name of the timestamp field in your internal documents.
-** `public`: If `true`, the document is available to all users with access to your Observability project. If `false`, the document is restricted to the user indicated in the following `user.name` field.
-** `user.name` (optional): If defined, restricts the internal document's availability to a specific user.
-** Optionally, add a query filter to reindex only specific documents.
-
-[source,console]
-----
-POST _reindex
-{
-  "source": {
-    "index": "<InternalDocsIndex>",
-    "_source": [
-      "<text_field>",
-      "<timestamp>",
-      "namespace",
-      "is_correction",
-      "public",
-      "confidence"
-    ]
-  },
-  "dest": {
-    "index": ".kibana-observability-ai-assistant-kb-000001",
-    "pipeline": ".kibana-observability-ai-assistant-kb-ingest-pipeline"
-  },
-  "script": {
-    "inline": "ctx._source.text = ctx._source.remove(\"<text_field>\");ctx._source.namespace=\"<namespace>\";ctx._source.is_correction=false;ctx._source.public=<public>;ctx._source.confidence=\"high\";ctx._source['@timestamp'] = ctx._source.remove(\"<timestamp>\");ctx._source['user.name'] = \"<user.name>\""
-  }
-}
-----
-
-[discrete]
-[[observability-ai-assistant-interact-with-the-ai-assistant]]
-== Interact with the AI Assistant
-
-You can chat with the AI Assistant or interact with contextual insights located throughout {obs-serverless}.
-See the following sections for more on interacting with the AI Assistant.
-
-[TIP]
-====
-After every answer the LLM provides, let us know if the answer was helpful.
-Your feedback helps us improve the AI Assistant!
-====
-
-[discrete]
-[[observability-ai-assistant-chat-with-the-assistant]]
-=== Chat with the assistant
-
-Click the AI Assistant button (image:images/ai-assistant-button.png[AI Assistant icon]) in the upper-right corner where available to start the chat.
-
-This opens the AI Assistant flyout, where you can ask the assistant questions about your instance:
-
-[role="screenshot"]
-image::images/ai-assistant-chat.png[Observability AI assistant chat, 60%]
-
-[IMPORTANT]
-====
-Asking questions about your data requires function calling, which enables LLMs to reliably interact with third-party generative AI providers to perform searches or run advanced functions using customer data.
-
-When the Observability AI Assistant performs searches in the cluster, the queries are run with the same level of permissions as the user.
-====
-
-[discrete]
-[[observability-ai-assistant-suggest-functions]]
-=== Suggest functions
-
-beta::[]
-
-The AI Assistant uses several functions to include relevant context in the chat conversation through text, data, and visual components. Both you and the AI Assistant can suggest functions. You can also edit the AI Assistant's function suggestions and inspect function responses. For example, you could use the `kibana` function to call a {kib} API on your behalf.
-
-You can suggest the following functions:
-
-|===
-| Function | Description
-
-| `alerts`
-| Get alerts for {obs-serverless}.
-
-| `elasticsearch`
-| Call {es} APIs on your behalf.
-
-| `kibana`
-| Call {kib} APIs on your behalf.
-
-| `summarize`
-| Summarize parts of the conversation.
-
-| `visualize_query`
-| Visualize charts for ES\|QL queries.
-|===
-
-Additional functions are available when your cluster has APM data:
-
-|===
-| Function | Description
-
-| `get_apm_correlations`
-| Get field values that are more prominent in the foreground set than the background set. This can be useful in determining which attributes (such as `error.message`, `service.node.name`, or `transaction.name`) are contributing to, for instance, a higher latency. Another option is a time-based comparison, where you compare before and after a change point.
-
-| `get_apm_downstream_dependencies`
-| Get the downstream dependencies (services or uninstrumented backends) for a service. Map the downstream dependency name to a service by returning both `span.destination.service.resource` and `service.name`. Use this to drill down further if needed.
-
-| `get_apm_error_document`
-| Get a sample error document based on the grouping name. This also includes the stacktrace of the error, which might hint at the cause.
-
-| `get_apm_service_summary`
-| Get a summary of a single service, including the language, service version, deployments, the environments, and the infrastructure that it is running in. For example, the number of pods and a list of their downstream dependencies. It also returns active alerts and anomalies.
-
-| `get_apm_services_list`
-| Get the list of monitored services, their health statuses, and alerts.
-
-| `get_apm_timeseries`
-| Display different APM metrics (such as throughput, failure rate, or latency) for any service or all services and any or all of their dependencies. Displayed both as a time series and as a single statistic. Additionally, the function returns any changes, such as spikes, step and trend changes, or dips. You can also use it to compare data by requesting two different time ranges, or, for example, two different service versions.
-|===
-
-[discrete]
-[[observability-ai-assistant-use-contextual-prompts]]
-=== Use contextual prompts
-
-AI Assistant contextual prompts throughout {obs-serverless} provide the following information:
-
-* **Alerts**: Provides possible causes and remediation suggestions for log rate changes.
-* **Application performance monitoring (APM)**: Explains APM errors and provides remediation suggestions.
-* **Logs**: Explains log messages and generates search patterns to find similar issues.
-
-// Not included in initial serverless launch
-
-// - **Universal Profiling**: explains the most expensive libraries and functions in your fleet and provides optimization suggestions.
-
-// - **Infrastructure Observability**: explains the processes running on a host.
-
-For example, in the log details, you'll see prompts for **What's this message?** and **How do I find similar log messages?**:
-
-[role="screenshot"]
-image::images/ai-assistant-logs-prompts.png[Observability AI assistant example prompts for logs, 60%]
-
-Clicking a prompt generates a message specific to that log entry.
-You can continue a conversation from a contextual prompt by clicking **Start chat** to open the AI Assistant chat.
-
-[role="screenshot"]
-image::images/ai-assistant-logs.png[Observability AI assistant example, 60%]
-
-[discrete]
-[[observability-ai-assistant-add-the-ai-assistant-connector-to-alerting-workflows]]
-=== Add the AI Assistant connector to alerting workflows
-
-You can use the {kibana-ref}/obs-ai-assistant-action-type.html[Observability AI Assistant connector] to add AI-generated insights and custom actions to your alerting workflows.
-To do this:
-
-. <> and specify the conditions that must be met for the alert to fire.
-. Under **Actions**, select the **Observability AI Assistant** connector type.
-. In the **Connector** list, select the AI connector you created when you set up the assistant.
-. In the **Message** field, specify the message to send to the assistant:
-
-[role="screenshot"]
-image::images/obs-ai-assistant-action-high-cpu.png[Add an Observability AI assistant action while creating a rule in the Observability UI]
-
-You can ask the assistant to generate a report of the alert that fired,
-recall any information or potential resolutions of past occurrences stored in the knowledge base,
-provide troubleshooting guidance and resolution steps,
-and also include other active alerts that may be related.
-As a last step, you can ask the assistant to trigger an action,
-such as sending the report (or any other message) to a Slack webhook.
-
-[NOTE]
-====
-Currently you can only send messages to Slack, email, Jira, PagerDuty, or a webhook.
-Additional actions will be added in the future.
-====
-
-When the alert fires, contextual details about the event—such as when the alert fired,
-the service or host impacted, and the threshold breached—are sent to the AI Assistant,
-along with the message provided during configuration.
-The AI Assistant runs the tasks requested in the message and creates a conversation you can use to chat with the assistant:
-
-[role="screenshot"]
-image::images/obs-ai-assistant-output.png[AI Assistant conversation created in response to an alert]
-
-[IMPORTANT]
-====
-Conversations created by the AI Assistant are public and accessible to every user with permissions to use the assistant.
-====
-
-It might take a minute or two for the AI Assistant to process the message and create the conversation.
-
-Note that overly broad prompts may result in the request exceeding token limits.
-For more information, refer to <>.
-Also, attempting to analyze several alerts in a single connector execution may cause you to exceed the function call limit.
-If this happens, modify the message specified in the connector configuration to avoid exceeding limits.
-
-When asked to send a message to another connector, such as Slack,
-the AI Assistant attempts to include a link to the generated conversation.
-
-[role="screenshot"]
-image::images/obs-ai-assistant-slack-message.png[Message sent to Slack by the AI Assistant includes a link to the conversation]
-
-The Observability AI Assistant connector is called when the alert fires and when it recovers.
-
-To learn more about alerting, actions, and connectors, refer to <>.
-
-[discrete]
-[[obs-ai-product-documentation]]
-== Elastic documentation for the AI Assistant
-
-You can make the official Elastic documentation available to the AI Assistant, which significantly improves
-its efficiency and accuracy when answering questions about the Elastic Stack and Elastic products.
-
-To enable the feature, go to the *Settings* tab of the AI Assistant Settings page and use the *Install Elastic Documentation* action.
-
-[discrete]
-[[observability-ai-assistant-known-issues]]
-== Known issues
-
-[discrete]
-[[token-limits]]
-=== Token limits
-
-Most LLMs have a set number of tokens they can manage in a single conversation.
-When you reach the token limit, the LLM will throw an error, and Elastic will display a "Token limit reached" error.
-The exact number of tokens that the LLM can support depends on the LLM provider and model you're using.
-If you are using an OpenAI connector, you can monitor token usage in the **OpenAI Token Usage** dashboard.
-For more information, refer to the {kibana-ref}/openai-action-type.html#openai-connector-token-dashboard[OpenAI Connector documentation].