diff --git a/docs/assets/images/webui-runs-metadata-filter.png b/docs/assets/images/webui-runs-metadata-filter.png
new file mode 100644
index 00000000000..1009ebc8948
Binary files /dev/null and b/docs/assets/images/webui-runs-metadata-filter.png differ
diff --git a/docs/get-started/webui-qs.rst b/docs/get-started/webui-qs.rst
index d26b63c882d..8fdd4fcd507 100644
--- a/docs/get-started/webui-qs.rst
+++ b/docs/get-started/webui-qs.rst
@@ -20,6 +20,8 @@ You must have a running Determined cluster with the CLI installed.
 - To set up a remote cluster, visit the :ref:`Installation Guide ` where you'll find options
   for On Prem, AWS, GCP, Kubernetes, and Slurm.
 
+.. _qs-webui-concepts:
+
 **********
  Concepts
 **********
diff --git a/docs/tutorials/_index.rst b/docs/tutorials/_index.rst
index 78bdf8be153..0a32e26b766 100644
--- a/docs/tutorials/_index.rst
+++ b/docs/tutorials/_index.rst
@@ -46,7 +46,7 @@ Examples let you build off of an existing model that already runs on Determined.
    :hidden:
 
    Quickstart for Model Developers
-   Arbitrary Metadata Logging
+   Logging Arbitrary Metadata
    Porting Your PyTorch Model to Determined
    Get Started with Detached Mode
    Viewing Epoch-Based Metrics in the WebUI
diff --git a/docs/tutorials/metadata-logging.rst b/docs/tutorials/metadata-logging.rst
index 997d63c9203..5849f6e6b35 100644
--- a/docs/tutorials/metadata-logging.rst
+++ b/docs/tutorials/metadata-logging.rst
@@ -4,82 +4,65 @@
 ############################
  Arbitrary Metadata Logging
 ############################
 
-This tutorial demonstrates how to log custom metadata for your experiments.
-
-**Why Use Arbitrary Metadata Logging?**
+Arbitrary Metadata Logging enhances your experiment tracking capabilities by allowing you to:
 
-Arbitrary Metadata Logging allows you to:
+#. Log custom metadata specific to your experiments.
+#. View logged metadata in the WebUI for each trial.
+#. Filter and sort experiment runs based on custom metadata.
+#. Compare and analyze experiments using custom metadata fields.
 
-- Capture experiment-specific information beyond standard metrics
-- Compare and analyze custom data across experiments
-- Filter and sort experiments based on custom metadata
+By leveraging this feature, you can capture and analyze experiment-specific information beyond
+standard metrics, leading to more insightful comparisons and better experiment management within the
+Determined platform.
 
 ******************
- Logging Metadata
+ Example Use Case
 ******************
 
-You can log metadata using the Determined Core API. Here's how to do it in your training code:
+This example logs an arbitrary metadata field, ``effectiveness``, for a run, then uses that field
+to filter runs and view the logged value in the WebUI.
 
-#. Import the necessary module:
+**Log Metadata**
 
-   .. code:: python
+#. Run an experiment to create a :ref:`single-trial run `.
+#. Note the Run ID, e.g., Run 110.
+#. Navigate to the cluster address for your training environment, e.g., **http://localhost:8080/**.
+#. In the WebUI, click **API (Beta)** in the left navigation pane.
+#. Find the ``/Internal/PostRunMetadata`` endpoint and execute it with the following request body:
 
-      from determined.core import Context
+.. code:: json
 
-#. In your trial class, add a method to log metadata:
+   {
+     "runId": 110,
+     "metadata": {
+       "effectiveness": 20
+     }
+   }
 
-   .. code:: python
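+If you prefer the command line to the WebUI API explorer, you can send the same request with
+``curl``. The sketch below is illustrative rather than exact: it assumes the master is at
+``localhost:8080``, that ``PostRunMetadata`` is served at ``POST /api/v1/runs/{runId}/metadata``
+(confirm the exact route in the **API (Beta)** explorer), and that the environment variable
+``DET_TOKEN`` holds a valid Determined auth token.
+
+.. code:: bash
+
+   # Log the "effectiveness" field for Run 110.
+   # Route and token are assumptions; verify them in the API (Beta) explorer.
+   curl -X POST "http://localhost:8080/api/v1/runs/110/metadata" \
+      -H "Authorization: Bearer $DET_TOKEN" \
+      -H "Content-Type: application/json" \
+      -d '{"runId": 110, "metadata": {"effectiveness": 20}}'
+
+Next, we'll filter our runs by a specific metadata condition.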
-      def log_metadata(self, context: Context):
-          context.train.report_metadata({
-              "dataset_version": "MNIST-v1.0",
-              "preprocessing": "normalization",
-              "hardware": {
-                  "gpu": "NVIDIA A100",
-                  "cpu": "Intel Xeon"
-              }
-          })
+**Filter by Metadata**
 
-#. Call this method in your training loop:
+#. In the WebUI, select your experiment to view the Runs table.
+#. Select **Filter**.
+#. In **Show runs...**, select your metadata field from the dropdown menu.
+#. Choose a condition (e.g., is, is not, or contains) and enter a value.
 
-   .. code:: python
+.. image:: /assets/images/webui-runs-metadata-filter.png
+   :alt: Determined AI metadata filter for an experiment's runs
 
-      def train_batch(self, batch: TorchData, epoch_idx: int, batch_idx: int):
-          # Existing training code...
+Finally, let's view the logged metadata for our run.
 
-          if batch_idx == 0:
-              self.log_metadata(self.context)
+**View Metadata**
 
-          # Rest of the training code...
+To view the logged metadata:
 
-This example logs metadata at the beginning of each epoch. Adjust the frequency based on your needs.
+#. In the WebUI, navigate to your experiment.
+#. Click on the run you want to inspect.
+#. In the Run details page, find the **Metadata** section under the **Overview** tab.
 
-*******************************
- Viewing Metadata in the WebUI
-*******************************
-
-To view logged metadata:
-
-#. Open the WebUI and navigate to your experiment.
-#. Click on the trial you want to inspect.
-#. In the trial details page, find the "Metadata" section under the "Overview" tab.
-
-***********************************
- Filtering and Sorting by Metadata
-***********************************
-
-The :ref:`Web UI ` allows you to filter and sort experiment runs based on logged
-metadata:
-
-#. Navigate to the Runs Table in the WebUI.
-#. Click on the filter icon.
-#. Select a metadata field from the dropdown menu.
-#. Choose a condition (is, is not, or contains) and enter a value.
-
-For more detailed instructions on filtering and sorting, refer to the WebUI guide:
-
-Performance Considerations
-==========================
+****************************
+ Performance Considerations
+****************************
 
 When using Arbitrary Metadata Logging, consider the following:
 
@@ -88,50 +71,9 @@ When using Arbitrary Metadata Logging, consider the following:
 - Use consistent naming conventions for keys to make filtering and sorting easier.
 - For deeply nested JSON structures, filtering and sorting are supported at the top level.
 
-Example Use Case
-================
-
-Let's say you're running experiments to benchmark different hardware setups. For each run, you might
-log:
-
-.. code:: python
-
-    def log_hardware_metadata(self, context: Context):
-        context.train.report_metadata({
-            "hardware": {
-                "gpu": "NVIDIA A100",
-                "cpu": "Intel Xeon",
-                "ram": "64GB"
-            },
-            "software": {
-                "cuda_version": "11.2",
-                "python_version": "3.8.10"
-            },
-            "runtime_seconds": 3600
-        })
-
-You can then use these logged metadata fields to:
-
-#. Filter for experiments that ran on a specific GPU model.
-#. Compare runtimes across different hardware configurations.
-#. Analyze the impact of software versions on performance.
-
-Summary
-=======
-
-Arbitrary Metadata Logging enhances your experiment tracking capabilities by allowing you to:
-
-#. Log custom metadata specific to your experiments.
-#. View logged metadata in the WebUI for each trial.
-#. Filter and sort experiment runs based on custom metadata.
-#. Compare and analyze experiments using custom metadata fields.
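+To make the nesting rule above concrete, here is a hypothetical payload (illustrative field names
+and values, with the same assumed route and token as the earlier ``curl`` sketch). The scalar
+top-level key ``dataset_version`` is available for filtering and sorting in the Runs table, while
+the values nested under ``hardware`` are logged and displayed with the run but cannot be used in
+filters.
+
+.. code:: bash
+
+   # Hypothetical nested payload: filtering and sorting apply to top-level
+   # keys such as "dataset_version"; the nested "gpu" and "cpu" values appear
+   # in the run's Metadata section but are not filterable.
+   curl -X POST "http://localhost:8080/api/v1/runs/110/metadata" \
+      -H "Authorization: Bearer $DET_TOKEN" \
+      -H "Content-Type: application/json" \
+      -d '{
+            "runId": 110,
+            "metadata": {
+              "dataset_version": "MNIST-v1.0",
+              "hardware": {"gpu": "NVIDIA A100", "cpu": "Intel Xeon"}
+            }
+          }'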
- -By leveraging this feature, you can capture and analyze experiment-specific information beyond -standard metrics, leading to more insightful comparisons and better experiment management within the -Determined AI platform. - -Next Steps -========== +************ + Next Steps +************ - Experiment with logging different types of metadata in your trials. - Use the filtering and sorting capabilities in the WebUI to analyze your experiments.