Commit

FlorianJacta committed Sep 25, 2023
1 parent 470df0b commit 0bf10e1
Showing 10 changed files with 78 additions and 51 deletions.
Original file line number Diff line number Diff line change
@@ -2,8 +2,8 @@

Taipy requires **Python 3.8** or newer.
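A quick way to confirm your interpreter meets this requirement (a minimal check, not part of the tutorial code):

```python
import sys

# Taipy requires Python 3.8 or newer; fail fast if the interpreter is older.
assert sys.version_info >= (3, 8), f"Python 3.8+ required, found {sys.version}"
print("Python version OK:", sys.version.split()[0])
```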

Welcome to the **Tutorial** guide, which will walk you through creating a complete application from the front end
to the back end. You don't need any prior knowledge to complete this tutorial.

![Tutorial application](step_01/overview.gif){ width=700 style="margin:auto;display:block;border: 4px solid rgb(210,210,210);border-radius:7px" }

@@ -14,7 +14,6 @@
from taipy.gui import Gui
import taipy as tp


def on_change(state, var_name: str, var_value):
state['scenario'].on_change(state, var_name, var_value)

@@ -27,7 +26,8 @@ def on_change(state, var_name: str, var_value):
}



if __name__ == "__main__":
tp.Core().run()
gui = Gui(pages=pages)
gui.run(title="Taipy Application", port=3455)
@@ -15,6 +15,6 @@
<br/> <|Save|button|on_action=save|active={scenario}|>
|>

<|{scenario}|scenario|on_submission_change=submission_change|>

<|{predictions_dataset}|chart|x=Date|y[1]=Historical values|type[1]=bar|y[2]=Predicted values ML|y[3]=Predicted values Baseline|>
@@ -19,7 +19,9 @@
"Predicted values Baseline":[0],
"Historical values":[0]}


def submission_change(state, submittable, details: dict):
print(f"submission_change(state, submittable: {submittable}, details: {details})")
notify(state, "info", f"submission_change(state, submittable: {submittable}, details: {details})")

def save(state):
print("Saving scenario...")
@@ -47,5 +49,4 @@ def on_change(state, var_name, var_value):
state.predictions_dataset = predictions_dataset



scenario_page = Markdown("pages/scenario/scenario.md")
@@ -3,7 +3,7 @@
# Data Visualization Page

This is a guide for creating a Data Visualization page for our example. The page includes interactive visual elements for showcasing data from a CSV file.

![Interactive GUI](result.gif){ width=700 style="margin:auto;display:block;border: 4px solid rgb(210,210,210);border-radius:7px" }

@@ -25,7 +25,7 @@ dataset = get_data(path_to_csv)

## Visual Elements

Taipy introduces the concept of *Visual elements*, which are graphic objects shown on the client interface. You can use various visual elements such as [slider](../../../../manuals/gui/viselements/slider.md), a
[chart](../../../../manuals/gui/viselements/chart.md_template), a
[table](../../../../manuals/gui/viselements/table.md_template), an
[input](../../../../manuals/gui/viselements/input.md_template), a
@@ -51,26 +51,26 @@ To display a chart with the dataset's content, use the following syntax:

## Interactive GUI

The Data Visualization page includes the following visual elements:

- A slider connected to the Python variable *n_week*.
- A chart representing the DataFrame content.
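Put together, the page can be sketched as a Python string in Taipy's Markdown-based syntax (a minimal sketch; the chart's `y=Value` column name is an illustrative assumption):

```python
# Minimal sketch of the Data Visualization page: a slider bound to n_week
# and a chart bound to the dataset. The "Value" column name is an assumption.
data_viz_md = """
# Data Visualization

Select week: *<|{n_week}|>*

<|{n_week}|slider|min=1|max=52|>

<|{dataset}|chart|x=Date|y=Value|>
"""

print("slider" in data_viz_md and "chart" in data_viz_md)  # True
```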

## Multi-client - state

Taipy maintains a distinct state for every client connection. This state stores the values of all variables used in the user interface. For example, modifying *n_week* through a slider will
update *state.n_week*, not the global Python variable *n_week*. Each client has its own state,
ensuring that changes made by one client don't affect others.
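The idea can be sketched in plain Python (a simplified analogy, not Taipy's actual implementation): each client gets its own state object seeded from the global defaults, so one client's changes stay isolated.

```python
n_week = 10  # global default shared by all clients

class State:
    """Simplified stand-in for Taipy's per-client state."""
    def __init__(self):
        self.n_week = n_week  # each client starts from the global default

client_a, client_b = State(), State()
client_a.n_week = 25  # client A moves the slider

print(client_a.n_week)  # 25
print(client_b.n_week)  # 10 -- unaffected
print(n_week)           # 10 -- the global variable never changes
```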

## [Callbacks](../../../../manuals/gui/callbacks.md)

You can include callbacks in each visual element, enabling you to modify variables according to user actions. For further details, explore local callbacks and global callbacks.

The callback function receives up to three arguments:

- *state*: the state object containing all the variables;
- the name of the modified variable (optional);
- its new value (optional).

Here's an example of an `on_change()` function to update *state.dataset_week* based on the selected week from the slider:

```markdown
<|{n_week}|slider|min=1|max=52|on_change=on_slider|>
@@ -97,7 +97,7 @@ Select week: *<|{n_week}|>*

## Python code (pages/data_viz/data_viz.py)

Here is the code that complements the Markdown. This code populates the objects on the page and creates the connection between the slider and the chart.

```python
from taipy.gui import Markdown
@@ -126,4 +126,5 @@ def on_slider(state):
data_viz = Markdown("pages/data_viz/data_viz.md")
```

Using this setup, you can construct an interactive Data Visualization page using Taipy.
This page will showcase the dataset corresponding to the chosen week from the slider.
@@ -3,12 +3,13 @@
# Algorithms used

The application includes functions for various tasks, including data cleaning, creating baseline predictions,
utilizing machine learning (ML) for predictions, computing metrics, and generating a dataset for displaying the predictions.

The `clean_data` function manages the job of cleaning the initial dataset. It achieves this by transforming
the 'Date' column into a datetime format. This function takes an initial DataFrame as input
and delivers a cleaned copy of that DataFrame as its output.

```python
def clean_data(initial_dataset: pd.DataFrame):
@@ -22,11 +23,12 @@ def clean_data(initial_dataset: pd.DataFrame):
## Predictions:

`predict_baseline()` and `predict_ml()` return prediction values from the cleaned
DataFrame (*cleaned_dataset*), the number of predictions to make (*n_predictions*), a
specific date (*day*), and a maximum capacity value (*max_capacity*).

Initially, they select the training dataset up to the specified date. Following that, they carry out specific calculations
or adjustments to generate predictions. It's important to ensure that these predictions
do not exceed the maximum limit.

```python
def predict_baseline(cleaned_dataset: pd.DataFrame, n_predictions: int, day: dt.datetime, max_capacity: int):
@@ -65,9 +67,19 @@ def compute_metrics(historical_data, predicted_data):
## Output dataset

`create_predictions_dataset()` creates a predictions dataset for visualization purposes. It
takes:

- the predicted baseline values (*predictions_baseline*),
- the ML predicted values (*predictions_ml*),
- a specific date (*day*) and the number of predictions to make (*n_predictions*),
- and the cleaned dataset (*cleaned_data*).

The function returns a DataFrame
containing the date, historical values, ML predicted values, and baseline predicted values.
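A rough pandas sketch of the resulting shape (illustrative only; the hourly frequency, placeholder history, and values are assumptions, not the tutorial's actual implementation):

```python
import datetime as dt
import pandas as pd

def create_predictions_dataset_sketch(predictions_baseline, predictions_ml,
                                      day, n_predictions, cleaned_data):
    # Build the future portion of the dataset: one row per prediction,
    # starting at `day` (hourly frequency is an assumption here).
    future_dates = pd.date_range(start=day, periods=n_predictions, freq="H")
    return pd.DataFrame({
        "Date": future_dates,
        "Historical values": [None] * n_predictions,
        "Predicted values ML": predictions_ml,
        "Predicted values Baseline": predictions_baseline,
    })

df = create_predictions_dataset_sketch(
    predictions_baseline=[1.0, 2.0],
    predictions_ml=[1.1, 2.1],
    day=dt.datetime(2021, 1, 1),
    n_predictions=2,
    cleaned_data=None,  # unused in this simplified sketch
)
print(list(df.columns))
```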


@@ -88,6 +100,8 @@ def create_predictions_dataset(predictions_baseline, predictions_ml, day, n_pred

Chaining all the functions together can be represented as the following graph:

![Execution Graph](config_toml.png){ width=300 style="margin:auto;display:block" }

```python
# For the sake of clarity, we have used an AutoRegressive model rather than a pure ML model such as:
# Random Forest, Linear Regression, LSTM, etc
@@ -15,8 +15,8 @@ To apprehend what is a _Scenario_, you need to understand the _Data node_ and _T
## Configuration [Basics](../../../../manuals/core/index.md)

- [**Data Nodes**](../../../../manuals/core/concepts/data-node.md): are the translation of variables in
Taipy. Data Nodes don't contain the data itself but point to the data and know how to retrieve it. These Data Nodes can point to different types of data sources like CSV files, Pickle files, databases, etc.,
and they can represent various types of Python variables such as integers, strings, data frames, lists, and more.

- [**Tasks**](../../../../manuals/core/concepts/task.md): are the translation of functions in Taipy where their inputs and outputs are data nodes.
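The key distinction — a node that points to data and knows how to retrieve it, rather than holding the data — can be sketched in plain Python (a conceptual analogy, not Taipy's API):

```python
import csv
import os
import tempfile

class CsvDataNodeSketch:
    """Conceptual analogy of a Data Node: stores *where* the data lives,
    not the data itself, and knows how to read it on demand."""
    def __init__(self, path, has_header=True):
        self.path = path
        self.has_header = has_header

    def read(self):
        with open(self.path, newline="") as f:
            rows = list(csv.reader(f))
        return rows[1:] if self.has_header else rows

# Demo with a throwaway CSV file
path = os.path.join(tempfile.gettempdir(), "demo_node.csv")
with open(path, "w", newline="") as f:
    f.write("Date,Value\n2021-01-01,5\n")

node = CsvDataNodeSketch(path)
print(node.read())  # [['2021-01-01', '5']]
```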

@@ -43,9 +43,9 @@ Parameters for Data Node configuration:
3- `Scope.GLOBAL`: Finally, extend the scope globally (across all scenarios of all cycles). For example, the initial/historical dataset is usually shared by all the scenarios/pipelines/cycles. It is unique in the entire application.


In a Machine Learning context, it's typical to have multiple training and testing models. In this tutorial,
we set up a scenario where we predict the values for the upcoming days based on a specific **day**,
using two models: a baseline model and a Machine Learning model.

- Retrieval of the initial dataset,

@@ -63,8 +63,8 @@ The graph below represents the scenario to configure, where tasks are in orange

### Input Data Nodes configuration

These are the input Data Nodes. They stand for the variables in Taipy when a
pipeline is run. However, initially, we need to set them up to build the DAG.

- *initial_dataset* is simply the initial CSV file. Taipy needs some parameters to read this data: *path* and
*header*. The `scope` is global; each scenario or pipeline has the same initial dataset.
@@ -148,7 +148,7 @@ clean_data_task_cfg = Config.configure_task(id="clean_data",

### predict_baseline_task

This task will use the cleaned dataset and make predictions based on your specified parameters, which are the three input Data Nodes:

*Day*, *Number of predictions* and *Max Capacity*.

@@ -166,7 +166,8 @@ with all the predictions and historical data.

## Scenario configuration

All of these task and Data Node configurations can create a scenario. These tasks
that form an execution graph will be executed when a scenario is submitted.

```python
scenario_cfg = Config.configure_scenario(id="scenario",
@@ -3,10 +3,11 @@
# Step 4: Scenario Page

The Scenario Page is a part of the application designed for creating and
customizing prediction scenarios using time series data. Users can modify different parameters
for the prediction, such as the prediction date, maximum capacity,
and the number of predictions. This page also includes a chart displaying historical data
and predictions generated using both machine learning and baseline methods.

![Scenario Page](result.png){ width=700 style="margin:auto;display:block;border: 4px solid rgb(210,210,210);border-radius:7px" }

@@ -128,7 +129,10 @@ The global variables *scenario*, *day*, *n_predictions*, *max_capacity*, and *pr

- **Save Function**:

The `save` function is in charge of preserving the current scenario state.
When the user clicks the "Save" button, this function gets activated.
It receives the page's state as input, converts the date format to the correct one,
adjusts the scenario parameters accordingly, and then informs the user with a success message.
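The mechanics can be sketched without Taipy (a simplified stand-in: the state object and `notify` are stubbed with plain Python, and the date format is an assumption):

```python
import datetime as dt

class FakeScenario:
    def __init__(self):
        self.day = None  # stands in for the scenario's day parameter

class FakeState:
    """Stub for the page state: holds the widgets' current values."""
    def __init__(self):
        self.day = "2021-06-01"      # date picker value as a string
        self.scenario = FakeScenario()

def save(state):
    # Convert the widget's string date to a datetime before storing it,
    # then "notify" the user (print stands in for taipy.gui.notify).
    state.scenario.day = dt.datetime.strptime(state.day, "%Y-%m-%d")
    print("info: Saved!")

state = FakeState()
save(state)
print(state.scenario.day)  # 2021-06-01 00:00:00
```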

- **On Change Function**:

@@ -138,11 +142,14 @@

The *scenario_page* variable is initialized as a Markdown object, representing the content of the Scenario Page.

It provides an interactive interface for users to create and customize different scenarios for time series predictions. It allows users to select prediction dates,
set maximum capacity, and choose the number of predictions to make. The page also presents a chart to visualize the historical data and the predicted values from
both machine learning and baseline methods. Users can save their selected scenarios to use them for further analysis and comparison.

## Connection to the entire application

Use the `on_change` function created in the *scenario* page; it has to be called in the global `on_change` (main script) of the application.
This global function is called whenever a variable changes on the user interface.

In your main script:

@@ -3,8 +3,10 @@
# Performance

The Performance Page is a section of the application that permits users to compare the performance metrics,
including Root Mean Squared Error and Mean Absolute Error, across various scenarios.
The page displays a table and two bar charts for comparing these metrics between baseline and
machine learning predictions.

![Performance Page](result.png){ width=700 style="margin:auto;display:block;border: 4px solid rgb(210,210,210);border-radius:7px" }

@@ -111,9 +113,9 @@ The *comparison_scenario* DataFrame stores the comparison data, while *metric_se

- **Compare Function**:

The `compare` function takes care of the comparison process. This function is triggered when the user clicks
the "Compare" button. It gathers the primary scenarios from the application and then goes through each scenario
to collect the RMSE and MAE metrics for both baseline and machine learning predictions.

The data is then stored in the *comparison_scenario* DataFrame.
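The metric gathering can be sketched as follows (RMSE and MAE computed by hand; the scenario objects are stubbed with plain dictionaries, since the real code reads them from Taipy):

```python
import math
import pandas as pd

def rmse(actual, pred):
    # Root Mean Squared Error over paired series
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mae(actual, pred):
    # Mean Absolute Error over paired series
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

# Stubbed scenarios: each carries historical values plus both prediction series.
scenarios = {
    "Scenario 1": {"hist": [10, 12, 14], "baseline": [10, 10, 10], "ml": [10, 12, 13]},
    "Scenario 2": {"hist": [20, 22, 24], "baseline": [21, 21, 21], "ml": [20, 23, 24]},
}

rows = []
for name, s in scenarios.items():
    rows.append({
        "Scenario": name,
        "RMSE baseline": rmse(s["hist"], s["baseline"]),
        "MAE baseline": mae(s["hist"], s["baseline"]),
        "RMSE ML": rmse(s["hist"], s["ml"]),
        "MAE ML": mae(s["hist"], s["ml"]),
    })

comparison_scenario = pd.DataFrame(rows)
print(comparison_scenario)
```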

@@ -122,6 +124,7 @@ The data is then stored in the *comparison_scenario* DataFrame.
The *performance* variable is initialized as a Markdown object, representing the content of the Performance Page.


The Performance Page in the Python application enables users to compare the effectiveness of different scenarios
in making time series predictions. Users can choose between RMSE and MAE metrics and view the comparison results
presented as bar charts. This page serves as a valuable tool for evaluating the efficiency of various prediction
scenarios and can assist in making informed decisions based on performance assessments.
