From aba9693ce83fa2f4b0b9b36424cf26ce99d6f764 Mon Sep 17 00:00:00 2001
From: erikfTW
Date: Mon, 18 Sep 2023 22:15:40 +0200
Subject: [PATCH] Tutorial videos with description

---
 docs/tutorials/videos/charts.md          |    8 +
 docs/tutorials/videos/data-dashboard.md  |    8 +
 docs/tutorials/videos/data_nodes.md      |    5 +
 docs/tutorials/videos/data_pipelines.md  |    9 +
 docs/tutorials/videos/graphical_pages.md |    8 +
 docs/tutorials/videos/index.md           |    3 +
 docs/tutorials/videos/markdown-syntax.md |   19 +
 docs/tutorials/videos/partials.md        |   12 +
 mkdocs.yml_template                      |    8 +
 tools/config_doc.txt                     | 2364 ----------------------
 10 files changed, 80 insertions(+), 2364 deletions(-)
 create mode 100644 docs/tutorials/videos/charts.md
 create mode 100644 docs/tutorials/videos/data-dashboard.md
 create mode 100644 docs/tutorials/videos/data_nodes.md
 create mode 100644 docs/tutorials/videos/data_pipelines.md
 create mode 100644 docs/tutorials/videos/graphical_pages.md
 create mode 100644 docs/tutorials/videos/index.md
 create mode 100644 docs/tutorials/videos/markdown-syntax.md
 create mode 100644 docs/tutorials/videos/partials.md
 delete mode 100644 tools/config_doc.txt

diff --git a/docs/tutorials/videos/charts.md b/docs/tutorials/videos/charts.md
new file mode 100644
index 000000000..0248b33df
--- /dev/null
+++ b/docs/tutorials/videos/charts.md
@@ -0,0 +1,8 @@
+# Taipy Charts to Update Line Types
+
+Make a line chart for your Taipy application. We're using a basic time series dataset with two variables. You can either use your own dataset or get it from the Taipy website.
+
+Charts are visual ways to show and study patterns, trends, and connections in data. Line charts, in particular, show data points connected by straight lines,
+making it easy to see trends and changes over time. You can change the line styles in a chart by adjusting its line properties to create different looks.
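As an aside to the chart description above, the line-style idea can be sketched in a few lines. This example is not part of the patch: the dataset columns (`Date`, `Sales`, `Forecast`) are hypothetical, and the `line[2]=dash` usage assumes the Taipy GUI chart control's indexed properties (`y[1]`, `line[1]`, ...); check the chart control reference for the exact property names.

```python
# Hypothetical dataset for a two-series line chart.
data = {
    "Date": ["2023-01", "2023-02", "2023-03"],
    "Sales": [10, 12, 9],
    "Forecast": [11, 11, 10],
}

# A Taipy GUI page definition (Markdown syntax) drawing both series as lines,
# with the second series restyled as a dashed line via an indexed property.
page = """
<|{data}|chart|type=line|x=Date|y[1]=Sales|y[2]=Forecast|line[2]=dash|>
"""

# Serving this page requires Taipy itself, e.g.:
# from taipy.gui import Gui
# Gui(page).run()
```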
+
+To discover more about using Taipy charts to update line types, watch the tutorial video [here](https://www.youtube.com/watch?v=M32xhZP04yo).
diff --git a/docs/tutorials/videos/data-dashboard.md b/docs/tutorials/videos/data-dashboard.md
new file mode 100644
index 000000000..c1a039ea2
--- /dev/null
+++ b/docs/tutorials/videos/data-dashboard.md
@@ -0,0 +1,8 @@
+# Data Dashboards
+
+Take advantage of Taipy's extra visual tools to build a data dashboard. You can upload a dataset to the app, tweak settings, and display the data using various graphical choices.
+
+Data dashboards simplify complex data and make it visually engaging. They bring together data from different sources like databases, APIs,
+or real-time feeds into one easy-to-read interface. Typically, they use charts, graphs, tables, gauges, and other visuals to present data in a clear and interactive way.
+
+To discover more about the Taipy data dashboard, watch the tutorial video [here](https://www.youtube.com/watch?v=0KlZ3IDFJz4).
diff --git a/docs/tutorials/videos/data_nodes.md b/docs/tutorials/videos/data_nodes.md
new file mode 100644
index 000000000..952edaf99
--- /dev/null
+++ b/docs/tutorials/videos/data_nodes.md
@@ -0,0 +1,5 @@
+# Data Nodes and Tasks
+
+Here's a practical, step-by-step example of how to create scenarios to make the most of your data. It involves two main parts: data nodes and tasks.
+
+To discover more about data nodes and tasks, watch the tutorial video [here](https://www.youtube.com/watch?v=rsrXBQr3LKo).
diff --git a/docs/tutorials/videos/data_pipelines.md b/docs/tutorials/videos/data_pipelines.md
new file mode 100644
index 000000000..d86ac87fe
--- /dev/null
+++ b/docs/tutorials/videos/data_pipelines.md
@@ -0,0 +1,9 @@
+# Manage Data Pipelines with Scenarios
+
+Build, control, and run your data pipelines effortlessly with Taipy.
+Taipy simplifies the process of designing pipelines, handling data,
+and coordinating data movement through a feature known as scenario management.
+
+In Taipy, a Scenario is specifically crafted for modeling pipelines and data flows. It connects your data nodes (representing your datasets)
+with tasks (the Python functions you want to execute). Configuring these scenarios becomes a breeze with the user-friendly Taipy Studio interface.
+
+To discover more about managing data pipelines with scenarios, watch the tutorial video [here](https://www.youtube.com/watch?v=c2hMbr4HCA0).
diff --git a/docs/tutorials/videos/graphical_pages.md b/docs/tutorials/videos/graphical_pages.md
new file mode 100644
index 000000000..b4c8e555b
--- /dev/null
+++ b/docs/tutorials/videos/graphical_pages.md
@@ -0,0 +1,8 @@
+# Manage Multiple Graphical Pages
+
+Use Taipy to arrange and control multiple graphical pages, creating a user-friendly and personalized interface.
+
+Managing multiple graphical pages means arranging and controlling different screens or pages within a graphical user interface (GUI) application.
+This includes switching between pages, handling their content, and providing navigation options for users to move between them.
+
+To discover more about managing multiple graphical pages, watch the tutorial video [here](https://www.youtube.com/watch?v=w-tMYCB-I3A).
diff --git a/docs/tutorials/videos/index.md b/docs/tutorials/videos/index.md
new file mode 100644
index 000000000..64ae02ead
--- /dev/null
+++ b/docs/tutorials/videos/index.md
@@ -0,0 +1,3 @@
+# Tutorial videos
+
+Let's explore what you can learn with Taipy by watching the tutorial videos below.
diff --git a/docs/tutorials/videos/markdown-syntax.md b/docs/tutorials/videos/markdown-syntax.md
new file mode 100644
index 000000000..75a3788b0
--- /dev/null
+++ b/docs/tutorials/videos/markdown-syntax.md
@@ -0,0 +1,19 @@
+# Markdown Syntax
+
+With Taipy GUI, you can easily create a basic application page using Markdown syntax. This covers topics such as binding variables and adding visual elements like sliders and charts.
+
+Markdown syntax is a simple way to format plain text and structure documents. It lets you easily add headings, lists, links, images, and emphasis
+to text without needing complex HTML or formatting tools.
+
+Taipy uses [Python Markdown](https://python-markdown.github.io/) to turn your Markdown text into web pages, along with several language extensions that help make your pages look nice and user-friendly.
+Specifically, Taipy uses the following Markdown extensions:
+
+- [Admonition](https://python-markdown.github.io/extensions/admonition/)
+- [Attribute Lists](https://python-markdown.github.io/extensions/attr_list/)
+- [Fenced Code Blocks](https://python-markdown.github.io/extensions/fenced_code_blocks/)
+- [Meta-Data](https://python-markdown.github.io/extensions/meta_data/)
+- [Markdown in HTML](https://python-markdown.github.io/extensions/md_in_html/)
+- [Sane Lists](https://python-markdown.github.io/extensions/sane_lists/)
+- [Tables](https://python-markdown.github.io/extensions/tables/)
+
+To discover more about the Taipy Markdown syntax, watch the tutorial video [here](https://www.youtube.com/watch?v=OpHAncCb8Zo&t=1s).
diff --git a/docs/tutorials/videos/partials.md b/docs/tutorials/videos/partials.md
new file mode 100644
index 000000000..d6d31d14f
--- /dev/null
+++ b/docs/tutorials/videos/partials.md
@@ -0,0 +1,12 @@
+# Partials in Taipy
+
+Utilize Taipy's "Partials" feature to save time when creating GUI pages and components.
+Partials come in handy when you need to use the same page (or part of a page)
+multiple times in your application, like for general instructions, contributions, or generic constants.
+
+Additionally, two other essential visual elements help you structure your application interface:
+
+- **Dialog:** A dialog is a modal pop-up window displayed on top of the current page, typically used to grab the user's attention or collect input.
+
+- **Pane:** A pane is a panel that slides in over an edge of the current page, showing additional content without navigating away from it.
+
+To discover more about partials in Taipy, watch the tutorial video [here](https://www.youtube.com/watch?v=gFyfGk4_wEM).
diff --git a/mkdocs.yml_template b/mkdocs.yml_template
index 4d51f065f..83bcd1a83 100644
--- a/mkdocs.yml_template
+++ b/mkdocs.yml_template
@@ -18,6 +18,14 @@ nav:
     - "Understanding GUI": tutorials/understanding_gui/index.md
     - "Scenario management Overview": tutorials/scenario_management_overview/index.md
     - "Using Templates": tutorials/using_templates/index.md
+    - "Tutorial videos": tutorials/videos/index.md
+    - "Markdown Syntax": tutorials/videos/markdown-syntax.md
+    - "Data Dashboard": tutorials/videos/data-dashboard.md
+    - "Taipy Charts to Update Line Types": tutorials/videos/charts.md
+    - "Manage Multiple Graphical Pages": tutorials/videos/graphical_pages.md
+    - "Partials in Taipy": tutorials/videos/partials.md
+    - "Manage Data Pipelines with Scenarios": tutorials/videos/data_pipelines.md
+    - "Data Nodes and Tasks": tutorials/videos/data_nodes.md
     - "Manuals":
       - manuals/index.md
       - "User Manuals":
diff --git a/tools/config_doc.txt b/tools/config_doc.txt
deleted file mode 100644
index ea683b40e..000000000
--- a/tools/config_doc.txt
+++ /dev/null
@@ -1,2364 +0,0 @@
-    @staticmethod
-    def configure_gui(**properties) -> '_GuiSection':
-        """NOT DOCUMENTED
-        Configure the Graphical User Interface.
-
-        Parameters:
-            **properties (dict[str, any]): Keyword arguments that configure the behavior of the `Gui^` instances.<br/>
-                Please refer to the
-                [Configuration section](../gui/configuration.md#configuring-the-gui-instance)
-                of the User Manual for more information on the accepted arguments.
-        Returns:
-            The GUI configuration.
-
-        """
-        pass
-
-    @staticmethod
-    def configure_job_executions(mode: Optional[str] = None, max_nb_of_workers: Union[int, str, NoneType] = None, **properties) -> 'JobConfig':
-        """Configure job execution.
-
-        Parameters:
-            mode (Optional[str]): The job execution mode.
-                Possible values are: *"standalone"* (the default value) or *"development"*.
-            max_nb_of_workers (Optional[int, str]): Parameter used only in default *"standalone"* mode.
-                This indicates the maximum number of jobs able to run in parallel.<br/>
-                The default value is 1.<br/>
-                A string can be provided to dynamically set the value using an environment
-                variable. The string must follow the pattern: `ENV[<env_var>]` where
-                `<env_var>` is the name of an environment variable.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new job execution configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_data_node(id: str, storage_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new data node configuration.
-            storage_type (Optional[str]): The data node configuration storage type. The possible values
-                are None (which is the default value of *"pickle"*, unless it has been overloaded by the
-                *storage_type* value set in the default data node configuration
-                (see `(Config.)set_default_data_node_configuration()^`)), *"pickle"*, *"csv"*, *"excel"*,
-                *"sql_table"*, *"sql"*, *"json"*, *"parquet"*, *"mongo_collection"*, *"in_memory"*, or
-                *"generic"*.
-            scope (Optional[Scope^]): The scope of the data node configuration.<br/>
-                The default value is `Scope.SCENARIO` (or the one specified in
-                `(Config.)set_default_data_node_configuration()^`).
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_data_node_from(source_configuration: 'DataNodeConfig', id: str, **properties) -> 'DataNodeConfig':
-        """Configure a new data node configuration from an existing one.
-
-        Parameters:
-            source_configuration (DataNodeConfig): The source data node configuration.
-            id (str): The unique identifier of the new data node configuration.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.<br/>
-                The default properties are the properties of the source data node configuration.
-
-        Returns:
-            The new data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def set_default_data_node_configuration(storage_type: str, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Set the default values for data node configurations.
-
-        This function creates the _default data node configuration_ object,
-        where all data node configuration objects will find their default
-        values when needed.
-
-        Parameters:
-            storage_type (str): The default storage type for all data node configurations.
-                The possible values are *"pickle"* (the default value), *"csv"*, *"excel"*,
-                *"sql"*, *"mongo_collection"*, *"in_memory"*, *"json"*, *"parquet"* or
-                *"generic"*.
-            scope (Optional[Scope^]): The default scope for all data node configurations.<br/>
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The default data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_csv_data_node(id: str, default_path: Optional[str] = None, encoding: Optional[str] = None, has_header: Optional[bool] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new CSV data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new CSV data node configuration.
-            default_path (Optional[str]): The default path of the CSV file.
-            encoding (Optional[str]): The encoding of the CSV file.
-            has_header (Optional[bool]): If True, indicates that the CSV file has a header.
-            exposed_type (Optional[str]): The exposed type of the data read from CSV file.<br/>
-                The default value is `pandas`.
-            scope (Optional[Scope^]): The scope of the CSV data node configuration.<br/>
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new CSV data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_json_data_node(id: str, default_path: Optional[str] = None, encoding: Optional[str] = None, encoder: Optional[json.encoder.JSONEncoder] = None, decoder: Optional[json.decoder.JSONDecoder] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new JSON data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new JSON data node configuration.
-            default_path (Optional[str]): The default path of the JSON file.
-            encoding (Optional[str]): The encoding of the JSON file.
-            encoder (Optional[json.JSONEncoder]): The JSON encoder used to write data into the JSON file.
-            decoder (Optional[json.JSONDecoder]): The JSON decoder used to read data from the JSON file.
-            scope (Optional[Scope^]): The scope of the JSON data node configuration.<br/>
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-        Returns:
-            The new JSON data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_parquet_data_node(id: str, default_path: Optional[str] = None, engine: Optional[str] = None, compression: Optional[str] = None, read_kwargs: Optional[Dict] = None, write_kwargs: Optional[Dict] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new Parquet data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new Parquet data node configuration.
-            default_path (Optional[str]): The default path of the Parquet file.
-            engine (Optional[str]): Parquet library to use. Possible values are *"fastparquet"* or
-                *"pyarrow"*.<br/>
-                The default value is *"pyarrow"*.
-            compression (Optional[str]): Name of the compression to use. Possible values are *"snappy"*,
-                *"gzip"*, *"brotli"*, or *"none"* (no compression). The default value is *"snappy"*.
-            read_kwargs (Optional[dict]): Additional parameters passed to the `pandas.read_parquet()`
-                function.
-            write_kwargs (Optional[dict]): Additional parameters passed to the
-                `pandas.DataFrame.write_parquet()` function.<br/>
-                The parameters in *read_kwargs* and *write_kwargs* have a **higher precedence** than the
-                top-level parameters which are also passed to Pandas.
-            exposed_type (Optional[str]): The exposed type of the data read from Parquet file.<br/>
-                The default value is `pandas`.
-            scope (Optional[Scope^]): The scope of the Parquet data node configuration.<br/>
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new Parquet data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_sql_table_data_node(id: str, db_name: str, db_engine: str, table_name: str, db_username: Optional[str] = None, db_password: Optional[str] = None, db_host: Optional[str] = None, db_port: Optional[int] = None, db_driver: Optional[str] = None, sqlite_folder_path: Optional[str] = None, sqlite_file_extension: Optional[str] = None, db_extra_args: Optional[Dict[str, Any]] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new SQL table data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new SQL data node configuration.
-            db_name (str): The database name, or the name of the SQLite database file.
-            db_engine (str): The database engine. Possible values are *"sqlite"*, *"mssql"*, *"mysql"*,
-                or *"postgresql"*.
-            table_name (str): The name of the SQL table.
-            db_username (Optional[str]): The database username. Required by the *"mssql"*, *"mysql"*, and
-                *"postgresql"* engines.
-            db_password (Optional[str]): The database password. Required by the *"mssql"*, *"mysql"*, and
-                *"postgresql"* engines.
-            db_host (Optional[str]): The database host.<br/>
-                The default value is "localhost".
-            db_port (Optional[int]): The database port.<br/>
-                The default value is 1433.
-            db_driver (Optional[str]): The database driver.
-            sqlite_folder_path (Optional[str]): The path to the folder that contains SQLite file.<br/>
-                The default value is the current working folder.
-            sqlite_file_extension (Optional[str]): The file extension of the SQLite file.<br/>
-                The default value is ".db".
-            db_extra_args (Optional[dict[str, any]]): A dictionary of additional arguments to be passed
-                into database connection string.
-            exposed_type (Optional[str]): The exposed type of the data read from SQL table.<br/>
-                The default value is "pandas".
-            scope (Optional[Scope^]): The scope of the SQL data node configuration.<br/>
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new SQL data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_sql_data_node(id: str, db_name: str, db_engine: str, read_query: str, write_query_builder: Callable, db_username: Optional[str] = None, db_password: Optional[str] = None, db_host: Optional[str] = None, db_port: Optional[int] = None, db_driver: Optional[str] = None, sqlite_folder_path: Optional[str] = None, sqlite_file_extension: Optional[str] = None, db_extra_args: Optional[Dict[str, Any]] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new SQL data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new SQL data node configuration.
-            db_name (str): The database name, or the name of the SQLite database file.
-            db_engine (str): The database engine. Possible values are *"sqlite"*, *"mssql"*, *"mysql"*,
-                or *"postgresql"*.
-            read_query (str): The SQL query string used to read the data from the database.
-            write_query_builder (Callable): A callback function that takes the data as an input parameter
-                and returns a list of SQL queries.
-            db_username (Optional[str]): The database username. Required by the *"mssql"*, *"mysql"*, and
-                *"postgresql"* engines.
-            db_password (Optional[str]): The database password.
-                Required by the *"mssql"*, *"mysql"*, and
-                *"postgresql"* engines.
-            db_host (Optional[str]): The database host.<br/>
-                The default value is "localhost".
-            db_port (Optional[int]): The database port.<br/>
-                The default value is 1433.
-            db_driver (Optional[str]): The database driver.
-            sqlite_folder_path (Optional[str]): The path to the folder that contains SQLite file.<br/>
-                The default value is the current working folder.
-            sqlite_file_extension (Optional[str]): The file extension of the SQLite file.<br/>
-                The default value is ".db".
-            db_extra_args (Optional[dict[str, any]]): A dictionary of additional arguments to be passed
-                into database connection string.
-            exposed_type (Optional[str]): The exposed type of the data read from SQL query.<br/>
-                The default value is "pandas".
-            scope (Optional[Scope^]): The scope of the SQL data node configuration.<br/>
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-        Returns:
-            The new SQL data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_mongo_collection_data_node(id: str, db_name: str, collection_name: str, custom_document: Optional[Any] = None, db_username: Optional[str] = None, db_password: Optional[str] = None, db_host: Optional[str] = None, db_port: Optional[int] = None, db_extra_args: Optional[Dict[str, Any]] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new Mongo collection data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new Mongo collection data node configuration.
-            db_name (str): The database name.
-            collection_name (str): The collection in the database to read from and to write the data to.
-            custom_document (Optional[any]): The custom document class to store, encode, and decode data
-                when reading and writing to a Mongo collection. The custom_document can have an optional
-                *decode()* method to decode data in the Mongo collection to a custom object, and an
-                optional *encode()* method to encode the object's properties to the Mongo collection
-                when writing.
-            db_username (Optional[str]): The database username.
-            db_password (Optional[str]): The database password.
-            db_host (Optional[str]): The database host.<br/>
-                The default value is "localhost".
-            db_port (Optional[int]): The database port.<br/>
-                The default value is 27017.
-            db_extra_args (Optional[dict[str, any]]): A dictionary of additional arguments to be passed
-                into database connection string.
-            scope (Optional[Scope^]): The scope of the Mongo collection data node configuration.<br/>
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new Mongo collection data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_in_memory_data_node(id: str, default_data: Optional[Any] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new *in-memory* data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new in_memory data node configuration.
-            default_data (Optional[any]): The default data of the data nodes instantiated from
-                this in_memory data node configuration.
-            scope (Optional[Scope^]): The scope of the in_memory data node configuration.<br/>
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new *in-memory* data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_pickle_data_node(id: str, default_path: Optional[str] = None, default_data: Optional[Any] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new pickle data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new pickle data node configuration.
-            default_path (Optional[str]): The path of the pickle file.
-            default_data (Optional[any]): The default data of the data nodes instantiated from
-                this pickle data node configuration.
-            scope (Optional[Scope^]): The scope of the pickle data node configuration.<br/>
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new pickle data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_excel_data_node(id: str, default_path: Optional[str] = None, has_header: Optional[bool] = None, sheet_name: Union[List[str], str, NoneType] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new Excel data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new Excel data node configuration.
-            default_path (Optional[str]): The path of the Excel file.
-            has_header (Optional[bool]): If True, indicates that the Excel file has a header.
-            sheet_name (Optional[Union[List[str], str]]): The list of sheet names to be used.
-                This can be a unique name.
-            exposed_type (Optional[str]): The exposed type of the data read from Excel file.<br/>
- The default value is `pandas`. - scope (Optional[Scope^]): The scope of the Excel data node configuration.
- The default value is `Scope.SCENARIO`. - validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be - considered up-to-date. Once the validity period has passed, the data node is considered stale and - relevant tasks will run even if they are skippable (see the - [Task configs page](../core/config/task-config.md) for more details). - If *validity_period* is set to None, the data node is always up-to-date. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - - Returns: - The new Excel data node configuration. - """ - pass - - @staticmethod - def configure_generic_data_node(id: str, read_fct: Optional[Callable] = None, write_fct: Optional[Callable] = None, read_fct_args: Optional[List] = None, write_fct_args: Optional[List] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig': - """Configure a new generic data node configuration. - - Parameters: - id (str): The unique identifier of the new generic data node configuration. - read_fct (Optional[Callable]): The Python function called to read the data. - write_fct (Optional[Callable]): The Python function called to write the data. - The provided function must have at least one parameter that receives the data to be written. - read_fct_args (Optional[List]): The list of arguments that are passed to the function - *read_fct* to read data. - write_fct_args (Optional[List]): The list of arguments that are passed to the function - *write_fct* to write the data. - scope (Optional[Scope^]): The scope of the Generic data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-        Returns:
-            The new Generic data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_task(id: str, function, input: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, output: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, skippable: Optional[bool] = False, **properties) -> 'TaskConfig':
-        """Configure a new task configuration.
-
-        Parameters:
-            id (str): The unique identifier of this task configuration.
-            function (Callable): The Python function called by Taipy to run the task.
-            input (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the
-                function input data node configurations. This can be a unique data node
-                configuration if there is a single input data node, or None if there are none.
-            output (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the
-                function output data node configurations. This can be a unique data node
-                configuration if there is a single output data node, or None if there are none.
-            skippable (bool): If True, indicates that the task can be skipped if no change has
-                been made on inputs.
-                The default value is False.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new task configuration.
-        """
-        pass
-
-    @staticmethod
-    def set_default_task_configuration(function, input: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, output: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, skippable: Optional[bool] = False, **properties) -> 'TaskConfig':
-        """Set the default values for task configurations.
-
-        This function creates the *default task configuration* object,
-        where all task configuration objects will find their default
-        values when needed.
-
-        Parameters:
-            function (Callable): The Python function called by Taipy to run the task.
-            input (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the
-                input data node configurations. This can be a unique data node
-                configuration if there is a single input data node, or None if there are none.
-            output (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the
-                output data node configurations. This can be a unique data node
-                configuration if there is a single output data node, or None if there are none.
-            skippable (bool): If True, indicates that the task can be skipped if no change has
-                been made on inputs.
-                The default value is False.
-            **properties (dict[str, any]): A keyworded variable length list of additional
-                arguments.
-        Returns:
-            The default task configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_scenario(id: str, task_configs: Optional[List[taipy.core.config.task_config.TaskConfig]] = None, additional_data_node_configs: Optional[List[taipy.core.config.data_node_config.DataNodeConfig]] = None, frequency: Optional[taipy.config.common.frequency.Frequency] = None, comparators: Optional[Dict[str, Union[List[Callable], Callable]]] = None, sequences: Optional[Dict[str, List[taipy.core.config.task_config.TaskConfig]]] = None, **properties) -> 'ScenarioConfig':
-        """Configure a new scenario configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new scenario configuration.
-            task_configs (Optional[List[TaskConfig^]]): The list of task configurations used by this
-                scenario configuration. The default value is None.
-            additional_data_node_configs (Optional[List[DataNodeConfig^]]): The list of additional data nodes
-                related to this scenario configuration. The default value is None.
-            frequency (Optional[Frequency^]): The scenario frequency.
-                It corresponds to the recurrence of the scenarios instantiated from this
-                configuration. Based on this frequency each scenario will be attached to the
-                relevant cycle.
-            comparators (Optional[Dict[str, Union[List[Callable], Callable]]]): The list of
-                functions used to compare scenarios. A comparator function is attached to a
-                scenario's data node configuration. The key of the dictionary parameter
-                corresponds to the data node configuration id. During the scenarios'
-                comparison, each comparator is applied to all the data nodes instantiated from
-                the data node configuration attached to the comparator. See
-                `(taipy.)compare_scenarios()^` for more details.
-            sequences (Optional[Dict[str, List[TaskConfig]]]): Dictionary of sequence descriptions.
-                The default value is None.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new scenario configuration.
-        """
-        pass
-
-    @staticmethod
-    def set_default_scenario_configuration(task_configs: Optional[List[taipy.core.config.task_config.TaskConfig]] = None, additional_data_node_configs: List[taipy.core.config.data_node_config.DataNodeConfig] = None, frequency: Optional[taipy.config.common.frequency.Frequency] = None, comparators: Optional[Dict[str, Union[List[Callable], Callable]]] = None, sequences: Optional[Dict[str, List[taipy.core.config.task_config.TaskConfig]]] = None, **properties) -> 'ScenarioConfig':
-        """Set the default values for scenario configurations.
-
-        This function creates the *default scenario configuration* object,
-        where all scenario configuration objects will find their default
-        values when needed.
-
-        Parameters:
-            task_configs (Optional[List[TaskConfig^]]): The list of task configurations used by this
-                scenario configuration.
-            additional_data_node_configs (Optional[List[DataNodeConfig^]]): The list of additional data nodes
-                related to this scenario configuration.
-            frequency (Optional[Frequency^]): The scenario frequency.
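To make the comparator semantics documented above concrete, here is a minimal stdlib-only sketch (an illustration, not Taipy's implementation): each comparator is keyed by a data node configuration id and applied to that data node's values across the scenarios being compared.

```python
# Sketch of the comparator semantics: *comparators* maps a data node
# configuration id to one or more functions applied across scenarios' values.
def apply_comparators(scenario_values, comparators):
    results = {}
    for dn_id, fns in comparators.items():
        fns = fns if isinstance(fns, list) else [fns]
        values = [s[dn_id] for s in scenario_values]
        results[dn_id] = [fn(*values) for fn in fns]
    return results

apply_comparators(
    [{"sales": 100}, {"sales": 140}],   # one value mapping per scenario
    {"sales": lambda a, b: max(a, b)},  # comparator keyed by data node id
)  # -> {"sales": [140]}
```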
-                It corresponds to the recurrence of the scenarios instantiated from this
-                configuration. Based on this frequency each scenario will be attached to
-                the relevant cycle.
-            comparators (Optional[Dict[str, Union[List[Callable], Callable]]]): The list of
-                functions used to compare scenarios. A comparator function is attached to a
-                scenario's data node configuration. The key of the dictionary parameter
-                corresponds to the data node configuration id. During the scenarios'
-                comparison, each comparator is applied to all the data nodes instantiated from
-                the data node configuration attached to the comparator. See
-                `taipy.compare_scenarios()^` for more details.
-            sequences (Optional[Dict[str, List[TaskConfig]]]): Dictionary of sequences. The default value is None.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new default scenario configuration.
-        """
-        pass
-
-    @staticmethod
-    def add_migration_function(target_version: str, config: Union[taipy.config.section.Section, str], migration_fct: Callable, **properties):
-        """Add a migration function for a Configuration to migrate entities to the target version.
-
-        Parameters:
-            target_version (str): The production version that entities are migrated to.
-            config (Union[Section, str]): The configuration or the `id` of the config that needs to migrate.
-            migration_fct (Callable): Migration function that takes an entity as input and returns a new entity
-                that is compatible with the target production version.
-            **properties (Dict[str, Any]): A keyworded variable length list of additional arguments.
-        Returns:
-            `MigrationConfig^`: The Migration configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_core(root_folder: Optional[str] = None, storage_folder: Optional[str] = None, repository_type: Optional[str] = None, repository_properties: Optional[Dict[str, Union[str, int]]] = None, read_entity_retry: Optional[int] = None, mode: Optional[str] = None, version_number: Optional[str] = None, force: Optional[bool] = None, **properties) -> 'CoreSection':
-        """Configure the Core service.
-
-        Parameters:
-            root_folder (Optional[str]): Path of the base folder for the taipy application.
-                The default value is "./taipy/".
-            storage_folder (Optional[str]): Folder name used to store Taipy data. The default value is ".data/".
-                It is used in conjunction with the `root_folder` field. That means the storage path is
-                `<root_folder><storage_folder>` (the default path is "./taipy/.data/").
-            repository_type (Optional[str]): The type of the repository to be used to store Taipy data.
-                The default value is "filesystem".
-            repository_properties (Optional[Dict[str, Union[str, int]]]): A dictionary of additional properties
-                to be used by the repository.
-            read_entity_retry (Optional[int]): Number of retries to read an entity from the repository
-                before returning a failure. The default value is 3.
-            mode (Optional[str]): Indicates the mode of the version management system.
-                Possible values are *"development"*, *"experiment"*, or *"production"*.
-            version_number (Optional[str]): The string identifier of the version.
-                In development mode, the version number is ignored.
-            force (Optional[bool]): If True, Taipy will override a version even if the configuration
-                has changed and run the application.
-            **properties (Dict[str, Any]): A keyworded variable length list of additional arguments that configure the
-                behavior of the `Core^` service.
-        Returns:
-            The Core configuration.
- """ - pass - - @staticmethod - def configure_authentication(protocol: str, secret_key: Optional[str] = None, auth_session_duration: int = 3600, **properties) -> 'AuthenticationConfig': - """Configure authentication. - - Parameters: - protocol (str): The name of the protocol to configure ("ldap", "taipy" or "none"). - secret_key (str): A secret string used to internally encrypt the credentials' information. - If no value is provided, the first run-time authentication sets the default value to a - random text string. - auth_session_duration (int): How long, in seconds, are credentials valid after their creation. The - default value is 3600, corresponding to an hour. - **properties (Dict[str, Any]): A keyworded variable length list of additional arguments.
- Depending on the protocol, these arguments are: - - - "LDAP" protocol: the following arguments are accepted: - - *server*: the URL of the LDAP server this authenticator connects to. - - *base_dn*: the LDAP distinguished name that is used. - - "Taipy" protocol: the following arguments are accepted: - - *roles*: a dictionary that configures the association of usernames to - roles. - - *passwords*: if required, a dictionary that configures the association of - usernames to hashed passwords. - A user can be authenticated if it appears at least in one of the *roles* - or the *password* dictionaries.
- If it only appears in *roles*, then the user is authenticated if provided - a password exactly identical to its username.
- If it only appears in *passwords*, then the user is assigned no roles. - - "None": No additional arguments are required. - - Returns: - The authentication configuration. - """ - pass - - @staticmethod - def configure_gui(**properties) -> '_GuiSection': - """NOT DOCUMENTED - Configure the Graphical User Interface. - - Parameters: - **properties (dict[str, any]): Keyword arguments that configure the behavior of the `Gui^` instances.
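The "taipy" protocol rules described above can be sketched as follows (illustration only, with plain-text passwords for brevity; as the documentation notes, Taipy stores *hashed* passwords):

```python
# Sketch of the documented rules: a user only in *roles* authenticates with a
# password equal to the username; a user only in *passwords* gets no roles.
def authenticate(username, password, roles=None, passwords=None):
    roles, passwords = roles or {}, passwords or {}
    if username in passwords:
        ok = passwords[username] == password
    elif username in roles:
        ok = password == username
    else:
        return None  # unknown user
    return roles.get(username, []) if ok else None

authenticate("alice", "alice", roles={"alice": ["admin"]})  # -> ["admin"]
```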
-                Please refer to the
-                [Configuration section](../gui/configuration.md#configuring-the-gui-instance)
-                of the User Manual for more information on the accepted arguments.
-        Returns:
-            The GUI configuration.
-
-        """
-        pass
-
-    @staticmethod
-    def configure_job_executions(mode: Optional[str] = None, max_nb_of_workers: Union[int, str, NoneType] = None, **properties) -> 'JobConfig':
-        """Configure job execution.
-
-        Parameters:
-            mode (Optional[str]): The job execution mode.
-                Possible values are: *"standalone"* (the default value) or *"development"*.
-            max_nb_of_workers (Optional[int, str]): Parameter used only in default *"standalone"* mode.
-                This indicates the maximum number of jobs able to run in parallel.
-                The default value is 1.
-                A string can be provided to dynamically set the value using an environment
-                variable. The string must follow the pattern: `ENV[<env_var>]` where
-                `<env_var>` is the name of an environment variable.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new job execution configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_data_node(id: str, storage_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new data node configuration.
-            storage_type (Optional[str]): The data node configuration storage type. The possible values
-                are None (which is the default value of *"pickle"*, unless it has been overloaded by the
-                *storage_type* value set in the default data node configuration
-                (see `(Config.)set_default_data_node_configuration()^`)), *"pickle"*, *"csv"*, *"excel"*,
-                *"sql_table"*, *"sql"*, *"json"*, *"parquet"*, *"mongo_collection"*, *"in_memory"*, or
-                *"generic"*.
-            scope (Optional[Scope^]): The scope of the data node configuration.
-                The default value is `Scope.SCENARIO` (or the one specified in
-                `(Config.)set_default_data_node_configuration()^`).
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_data_node_from(source_configuration: 'DataNodeConfig', id: str, **properties) -> 'DataNodeConfig':
-        """Configure a new data node configuration from an existing one.
-
-        Parameters:
-            source_configuration (DataNodeConfig): The source data node configuration.
-            id (str): The unique identifier of the new data node configuration.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-                The default properties are the properties of the source data node configuration.
-
-        Returns:
-            The new data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def set_default_data_node_configuration(storage_type: str, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Set the default values for data node configurations.
-
-        This function creates the _default data node configuration_ object,
-        where all data node configuration objects will find their default
-        values when needed.
-
-        Parameters:
-            storage_type (str): The default storage type for all data node configurations.
-                The possible values are *"pickle"* (the default value), *"csv"*, *"excel"*,
-                *"sql"*, *"mongo_collection"*, *"in_memory"*, *"json"*, *"parquet"* or
-                *"generic"*.
-            scope (Optional[Scope^]): The default scope for all data node configurations.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The default data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_csv_data_node(id: str, default_path: Optional[str] = None, encoding: Optional[str] = None, has_header: Optional[bool] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new CSV data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new CSV data node configuration.
-            default_path (Optional[str]): The default path of the CSV file.
-            encoding (Optional[str]): The encoding of the CSV file.
-            has_header (Optional[bool]): If True, indicates that the CSV file has a header.
-            exposed_type (Optional[str]): The exposed type of the data read from CSV file.
-                The default value is `pandas`.
-            scope (Optional[Scope^]): The scope of the CSV data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new CSV data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_json_data_node(id: str, default_path: Optional[str] = None, encoding: Optional[str] = None, encoder: Optional[json.encoder.JSONEncoder] = None, decoder: Optional[json.decoder.JSONDecoder] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new JSON data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new JSON data node configuration.
-            default_path (Optional[str]): The default path of the JSON file.
-            encoding (Optional[str]): The encoding of the JSON file.
-            encoder (Optional[json.JSONEncoder]): The JSON encoder used to write data into the JSON file.
-            decoder (Optional[json.JSONDecoder]): The JSON decoder used to read data from the JSON file.
-            scope (Optional[Scope^]): The scope of the JSON data node configuration.
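The *validity_period* rule repeated throughout these docstrings amounts to the following check (a stdlib sketch of the stated semantics, not Taipy's implementation):

```python
from datetime import datetime, timedelta

# A data node is stale once (now - last_edit_date) exceeds validity_period;
# a validity_period of None means the data node is always up-to-date.
def is_stale(last_edit_date, validity_period, now=None):
    if validity_period is None:
        return False
    now = now or datetime.now()
    return now - last_edit_date > validity_period

edited = datetime(2023, 9, 18, 12, 0)
is_stale(edited, timedelta(hours=1), now=datetime(2023, 9, 18, 14, 0))  # -> True
```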
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-        Returns:
-            The new JSON data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_parquet_data_node(id: str, default_path: Optional[str] = None, engine: Optional[str] = None, compression: Optional[str] = None, read_kwargs: Optional[Dict] = None, write_kwargs: Optional[Dict] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new Parquet data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new Parquet data node configuration.
-            default_path (Optional[str]): The default path of the Parquet file.
-            engine (Optional[str]): Parquet library to use. Possible values are *"fastparquet"* or
-                *"pyarrow"*.
-                The default value is *"pyarrow"*.
-            compression (Optional[str]): Name of the compression to use. Possible values are *"snappy"*,
-                *"gzip"*, *"brotli"*, or *"none"* (no compression). The default value is *"snappy"*.
-            read_kwargs (Optional[dict]): Additional parameters passed to the `pandas.read_parquet()`
-                function.
-            write_kwargs (Optional[dict]): Additional parameters passed to the
-                `pandas.DataFrame.to_parquet()` function.
-                The parameters in *read_kwargs* and *write_kwargs* have a **higher precedence** than the
-                top-level parameters which are also passed to Pandas.
-            exposed_type (Optional[str]): The exposed type of the data read from Parquet file.
-                The default value is `pandas`.
-            scope (Optional[Scope^]): The scope of the Parquet data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new Parquet data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_sql_table_data_node(id: str, db_name: str, db_engine: str, table_name: str, db_username: Optional[str] = None, db_password: Optional[str] = None, db_host: Optional[str] = None, db_port: Optional[int] = None, db_driver: Optional[str] = None, sqlite_folder_path: Optional[str] = None, sqlite_file_extension: Optional[str] = None, db_extra_args: Optional[Dict[str, Any]] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new SQL table data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new SQL data node configuration.
-            db_name (str): The database name, or the name of the SQLite database file.
-            db_engine (str): The database engine. Possible values are *"sqlite"*, *"mssql"*, *"mysql"*,
-                or *"postgresql"*.
-            table_name (str): The name of the SQL table.
-            db_username (Optional[str]): The database username. Required by the *"mssql"*, *"mysql"*, and
-                *"postgresql"* engines.
-            db_password (Optional[str]): The database password. Required by the *"mssql"*, *"mysql"*, and
-                *"postgresql"* engines.
-            db_host (Optional[str]): The database host.
-                The default value is "localhost".
-            db_port (Optional[int]): The database port.
-                The default value is 1433.
-            db_driver (Optional[str]): The database driver.
-            sqlite_folder_path (Optional[str]): The path to the folder that contains the SQLite file.
-                The default value is the current working folder.
-            sqlite_file_extension (Optional[str]): The file extension of the SQLite file.
-                The default value is ".db".
-            db_extra_args (Optional[dict[str, any]]): A dictionary of additional arguments to be passed
-                into the database connection string.
-            exposed_type (Optional[str]): The exposed type of the data read from SQL table.
-                The default value is "pandas".
-            scope (Optional[Scope^]): The scope of the SQL data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new SQL data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_sql_data_node(id: str, db_name: str, db_engine: str, read_query: str, write_query_builder: Callable, db_username: Optional[str] = None, db_password: Optional[str] = None, db_host: Optional[str] = None, db_port: Optional[int] = None, db_driver: Optional[str] = None, sqlite_folder_path: Optional[str] = None, sqlite_file_extension: Optional[str] = None, db_extra_args: Optional[Dict[str, Any]] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new SQL data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new SQL data node configuration.
-            db_name (str): The database name, or the name of the SQLite database file.
-            db_engine (str): The database engine. Possible values are *"sqlite"*, *"mssql"*, *"mysql"*,
-                or *"postgresql"*.
-            read_query (str): The SQL query string used to read the data from the database.
-            write_query_builder (Callable): A callback function that takes the data as an input parameter
-                and returns a list of SQL queries.
-            db_username (Optional[str]): The database username. Required by the *"mssql"*, *"mysql"*, and
-                *"postgresql"* engines.
-            db_password (Optional[str]): The database password. Required by the *"mssql"*, *"mysql"*, and
-                *"postgresql"* engines.
-            db_host (Optional[str]): The database host.
-                The default value is "localhost".
-            db_port (Optional[int]): The database port.
-                The default value is 1433.
-            db_driver (Optional[str]): The database driver.
-            sqlite_folder_path (Optional[str]): The path to the folder that contains the SQLite file.
-                The default value is the current working folder.
-            sqlite_file_extension (Optional[str]): The file extension of the SQLite file.
-                The default value is ".db".
-            db_extra_args (Optional[dict[str, any]]): A dictionary of additional arguments to be passed
-                into the database connection string.
-            exposed_type (Optional[str]): The exposed type of the data read from SQL query.
-                The default value is "pandas".
-            scope (Optional[Scope^]): The scope of the SQL data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-        Returns:
-            The new SQL data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_mongo_collection_data_node(id: str, db_name: str, collection_name: str, custom_document: Optional[Any] = None, db_username: Optional[str] = None, db_password: Optional[str] = None, db_host: Optional[str] = None, db_port: Optional[int] = None, db_extra_args: Optional[Dict[str, Any]] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new Mongo collection data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new Mongo collection data node configuration.
-            db_name (str): The database name.
-            collection_name (str): The collection in the database to read from and to write the data to.
-            custom_document (Optional[any]): The custom document class to store, encode, and decode data
-                when reading and writing to a Mongo collection. The custom_document can have an optional
-                *decode()* method to decode data in the Mongo collection to a custom object, and an
-                optional *encode()* method to encode the object's properties to the Mongo collection
-                when writing.
-            db_username (Optional[str]): The database username.
-            db_password (Optional[str]): The database password.
-            db_host (Optional[str]): The database host.
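A *write_query_builder*, as described for the SQL data node above, is simply a callable that receives the data to write and returns the list of SQL queries to run. A hypothetical sketch (the table and columns are invented for illustration):

```python
# Hypothetical write_query_builder: receives the data to write and returns a
# list of SQL queries (here, replace-all semantics on an invented table).
def write_query_builder(data):
    queries = ["DELETE FROM forecast"]
    for row in data:
        queries.append(
            "INSERT INTO forecast (day, value) "
            f"VALUES ('{row['day']}', {row['value']})"
        )
    return queries

write_query_builder([{"day": "Mon", "value": 12}])
```

In real use, the returned statements would typically use parameterized queries rather than string interpolation, to avoid SQL injection.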
- The default value is "localhost". - db_port (Optional[int]): The database port.
- The default value is 27017. - db_extra_args (Optional[dict[str, any]]): A dictionary of additional arguments to be passed - into database connection string. - scope (Optional[Scope^]): The scope of the Mongo collection data node configuration.
- The default value is `Scope.SCENARIO`. - validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be - considered up-to-date. Once the validity period has passed, the data node is considered stale and - relevant tasks will run even if they are skippable (see the - [Task configs page](../core/config/task-config.md) for more details). - If *validity_period* is set to None, the data node is always up-to-date. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - - Returns: - The new Mongo collection data node configuration. - """ - pass - - @staticmethod - def configure_in_memory_data_node(id: str, default_data: Optional[Any] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig': - """Configure a new *in-memory* data node configuration. - - Parameters: - id (str): The unique identifier of the new in_memory data node configuration. - default_data (Optional[any]): The default data of the data nodes instantiated from - this in_memory data node configuration. - scope (Optional[Scope^]): The scope of the in_memory data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new *in-memory* data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_pickle_data_node(id: str, default_path: Optional[str] = None, default_data: Optional[Any] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new pickle data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new pickle data node configuration.
-            default_path (Optional[str]): The path of the pickle file.
-            default_data (Optional[any]): The default data of the data nodes instantiated from
-                this pickle data node configuration.
-            scope (Optional[Scope^]): The scope of the pickle data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new pickle data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_excel_data_node(id: str, default_path: Optional[str] = None, has_header: Optional[bool] = None, sheet_name: Union[List[str], str, NoneType] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new Excel data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new Excel data node configuration.
-            default_path (Optional[str]): The path of the Excel file.
-            has_header (Optional[bool]): If True, indicates that the Excel file has a header.
-            sheet_name (Optional[Union[List[str], str]]): The list of sheet names to be used.
-                This can be a unique name.
-            exposed_type (Optional[str]): The exposed type of the data read from Excel file.
-                The default value is `pandas`.
-            scope (Optional[Scope^]): The scope of the Excel data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new Excel data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_generic_data_node(id: str, read_fct: Optional[Callable] = None, write_fct: Optional[Callable] = None, read_fct_args: Optional[List] = None, write_fct_args: Optional[List] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new generic data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new generic data node configuration.
-            read_fct (Optional[Callable]): The Python function called to read the data.
-            write_fct (Optional[Callable]): The Python function called to write the data.
-                The provided function must have at least one parameter that receives the data to be written.
-            read_fct_args (Optional[List]): The list of arguments that are passed to the function
-                *read_fct* to read data.
-            write_fct_args (Optional[List]): The list of arguments that are passed to the function
-                *write_fct* to write the data.
-            scope (Optional[Scope^]): The scope of the Generic data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-        Returns:
-            The new Generic data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_task(id: str, function, input: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, output: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, skippable: Optional[bool] = False, **properties) -> 'TaskConfig':
-        """Configure a new task configuration.
-
-        Parameters:
-            id (str): The unique identifier of this task configuration.
-            function (Callable): The python function called by Taipy to run the task.
-            input (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the
-                function input data node configurations. This can be a unique data node
-                configuration if there is a single input data node, or None if there are none.
-            output (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the
-                function output data node configurations. This can be a unique data node
-                configuration if there is a single output data node, or None if there are none.
-            skippable (bool): If True, indicates that the task can be skipped if no change has
-                been made on inputs.
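For illustration, the *read_fct*/*write_fct* contract documented for generic data nodes can be sketched with plain Python functions. This is a minimal sketch, not Taipy code; the function and file names (`read_history`, `write_history`, `history.json`) are hypothetical:

```python
import json
import os
import tempfile

# Hypothetical reader/writer pair for a generic data node: the framework would
# call read_fct(*read_fct_args) to load the data, and write_fct(data,
# *write_fct_args) to persist it -- note the writer's first parameter receives
# the data to be written, as the docstring requires.
def read_history(path):
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return []

def write_history(data, path):
    with open(path, "w") as f:
        json.dump(data, f)

# The argument lists that would be supplied as read_fct_args / write_fct_args:
history_path = os.path.join(tempfile.gettempdir(), "history.json")
read_fct_args = [history_path]
write_fct_args = [history_path]
```

Any pair of callables with these shapes works, which is what makes the generic storage type a catch-all for data sources Taipy has no dedicated connector for.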
-                The default value is False.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new task configuration.
-        """
-        pass
-
-    @staticmethod
-    def set_default_task_configuration(function, input: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, output: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, skippable: Optional[bool] = False, **properties) -> 'TaskConfig':
-        """Set the default values for task configurations.
-
-        This function creates the *default task configuration* object,
-        where all task configuration objects will find their default
-        values when needed.
-
-        Parameters:
-            function (Callable): The python function called by Taipy to run the task.
-            input (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the
-                input data node configurations. This can be a unique data node
-                configuration if there is a single input data node, or None if there are none.
-            output (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the
-                output data node configurations. This can be a unique data node
-                configuration if there is a single output data node, or None if there are none.
-            skippable (bool): If True, indicates that the task can be skipped if no change has
-                been made on inputs.
-                The default value is False.
-            **properties (dict[str, any]): A keyworded variable length list of additional
-                arguments.
-        Returns:
-            The default task configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_scenario(id: str, task_configs: Optional[List[taipy.core.config.task_config.TaskConfig]] = None, additional_data_node_configs: Optional[List[taipy.core.config.data_node_config.DataNodeConfig]] = None, frequency: Optional[taipy.config.common.frequency.Frequency] = None, comparators: Optional[Dict[str, Union[List[Callable], Callable]]] = None, sequences: Optional[Dict[str, List[taipy.core.config.task_config.TaskConfig]]] = None, **properties) -> 'ScenarioConfig':
-        """Configure a new scenario configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new scenario configuration.
-            task_configs (Optional[List[TaskConfig^]]): The list of task configurations used by this
-                scenario configuration. The default value is None.
-            additional_data_node_configs (Optional[List[DataNodeConfig^]]): The list of additional data nodes
-                related to this scenario configuration. The default value is None.
-            frequency (Optional[Frequency^]): The scenario frequency.
-                It corresponds to the recurrence of the scenarios instantiated from this
-                configuration. Based on this frequency each scenario will be attached to the
-                relevant cycle.
-            comparators (Optional[Dict[str, Union[List[Callable], Callable]]]): The list of
-                functions used to compare scenarios. A comparator function is attached to a
-                scenario's data node configuration. The key of the dictionary parameter
-                corresponds to the data node configuration id. During the scenarios'
-                comparison, each comparator is applied to all the data nodes instantiated from
-                the data node configuration attached to the comparator. See
-                `(taipy.)compare_scenarios()^` for more details.
-            sequences (Optional[Dict[str, List[TaskConfig]]]): Dictionary of sequence descriptions.
-                The default value is None.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new scenario configuration.
-        """
-        pass
-
-    @staticmethod
-    def set_default_scenario_configuration(task_configs: Optional[List[taipy.core.config.task_config.TaskConfig]] = None, additional_data_node_configs: List[taipy.core.config.data_node_config.DataNodeConfig] = None, frequency: Optional[taipy.config.common.frequency.Frequency] = None, comparators: Optional[Dict[str, Union[List[Callable], Callable]]] = None, sequences: Optional[Dict[str, List[taipy.core.config.task_config.TaskConfig]]] = None, **properties) -> 'ScenarioConfig':
-        """Set the default values for scenario configurations.
-
-        This function creates the *default scenario configuration* object,
-        where all scenario configuration objects will find their default
-        values when needed.
-
-        Parameters:
-            task_configs (Optional[List[TaskConfig^]]): The list of task configurations used by this
-                scenario configuration.
-            additional_data_node_configs (Optional[List[DataNodeConfig^]]): The list of additional data nodes
-                related to this scenario configuration.
-            frequency (Optional[Frequency^]): The scenario frequency.
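The *comparators* mechanism documented above (config id as key, comparison function as value) can be sketched in plain Python. This is an illustrative sketch, not Taipy's implementation; the `kpi` id and `compare_kpi` function are hypothetical, and we assume the comparator simply receives one value per compared scenario:

```python
# A comparator attached (by data node configuration id) to the scenarios'
# comparison: it receives the values of the data nodes instantiated from that
# configuration, one per scenario being compared.
def compare_kpi(*kpi_values):
    best = max(kpi_values)
    return {"best": best, "deltas": [best - v for v in kpi_values]}

# Shape of the `comparators` argument: data node config id -> callable(s).
comparators = {"kpi": compare_kpi}

# Comparing three scenarios whose "kpi" data nodes hold these values:
result = comparators["kpi"](0.8, 0.6, 0.7)
```

The dictionary key ties the function to a data node configuration, so one scenario configuration can carry several comparators for several data nodes.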
-                It corresponds to the recurrence of the scenarios instantiated from this
-                configuration. Based on this frequency each scenario will be attached to
-                the relevant cycle.
-            comparators (Optional[Dict[str, Union[List[Callable], Callable]]]): The list of
-                functions used to compare scenarios. A comparator function is attached to a
-                scenario's data node configuration. The key of the dictionary parameter
-                corresponds to the data node configuration id. During the scenarios'
-                comparison, each comparator is applied to all the data nodes instantiated from
-                the data node configuration attached to the comparator. See
-                `taipy.compare_scenarios()^` for more details.
-            sequences (Optional[Dict[str, List[TaskConfig]]]): Dictionary of sequences. The default value is None.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new default scenario configuration.
-        """
-        pass
-
-    @staticmethod
-    def add_migration_function(target_version: str, config: Union[taipy.config.section.Section, str], migration_fct: Callable, **properties):
-        """Add a migration function for a Configuration to migrate entities to the target version.
-
-        Parameters:
-            target_version (str): The production version that entities are migrated to.
-            config (Union[Section, str]): The configuration or the `id` of the config that needs to migrate.
-            migration_fct (Callable): Migration function that takes an entity as input and returns a new entity
-                that is compatible with the target production version.
-            **properties (Dict[str, Any]): A keyworded variable length list of additional arguments.
-        Returns:
-            `MigrationConfig^`: The Migration configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_core(root_folder: Optional[str] = None, storage_folder: Optional[str] = None, repository_type: Optional[str] = None, repository_properties: Optional[Dict[str, Union[str, int]]] = None, read_entity_retry: Optional[int] = None, mode: Optional[str] = None, version_number: Optional[str] = None, force: Optional[bool] = None, **properties) -> 'CoreSection':
-        """Configure the Core service.
-
-        Parameters:
-            root_folder (Optional[str]): Path of the base folder for the taipy application.
-                The default value is "./taipy/".
-            storage_folder (Optional[str]): Folder name used to store Taipy data. The default value is ".data/".
-                It is used in conjunction with the `root_folder` field. That means the storage path is
-                <root_folder><storage_folder> (the default path is "./taipy/.data/").
-            repository_type (Optional[str]): The type of the repository to be used to store Taipy data.
-                The default value is "filesystem".
-            repository_properties (Optional[Dict[str, Union[str, int]]]): A dictionary of additional properties
-                to be used by the repository.
-            read_entity_retry (Optional[int]): Number of retries to read an entity from the repository
-                before returning a failure. The default value is 3.
-            mode (Optional[str]): Indicates the mode of the version management system.
-                Possible values are *"development"*, *"experiment"*, or *"production"*.
-            version_number (Optional[str]): The string identifier of the version.
-                In development mode, the version number is ignored.
-            force (Optional[bool]): If True, Taipy will override a version even if the configuration
-                has changed and run the application.
-            **properties (Dict[str, Any]): A keyworded variable length list of additional arguments that configure
-                the behavior of the `Core^` service.
-        Returns:
-            The Core configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_authentication(protocol: str, secret_key: Optional[str] = None, auth_session_duration: int = 3600, **properties) -> 'AuthenticationConfig':
-        """Configure authentication.
-
-        Parameters:
-            protocol (str): The name of the protocol to configure ("ldap", "taipy" or "none").
-            secret_key (str): A secret string used to internally encrypt the credentials' information.
-                If no value is provided, the first run-time authentication sets the default value to a
-                random text string.
-            auth_session_duration (int): How long, in seconds, credentials remain valid after their creation.
-                The default value is 3600, corresponding to an hour.
-            **properties (Dict[str, Any]): A keyworded variable length list of additional arguments.
-                Depending on the protocol, these arguments are:
-
-                - "LDAP" protocol: the following arguments are accepted:
-                    - *server*: the URL of the LDAP server this authenticator connects to.
-                    - *base_dn*: the LDAP distinguished name that is used.
-                - "Taipy" protocol: the following arguments are accepted:
-                    - *roles*: a dictionary that configures the association of usernames to
-                      roles.
-                    - *passwords*: if required, a dictionary that configures the association of
-                      usernames to hashed passwords.
-                    A user can be authenticated if it appears in at least one of the *roles*
-                    or the *passwords* dictionaries.
-                    If it only appears in *roles*, then the user is authenticated if the
-                    provided password is exactly identical to its username.
-                    If it only appears in *passwords*, then the user is assigned no roles.
-                - "None": No additional arguments are required.
-
-        Returns:
-            The authentication configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_gui(**properties) -> '_GuiSection':
-        """NOT DOCUMENTED
-        Configure the Graphical User Interface.
-
-        Parameters:
-            **properties (dict[str, any]): Keyword arguments that configure the behavior of the `Gui^` instances.
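The "taipy" protocol rules spelled out above (a user may appear in *roles*, in *passwords*, or in both) can be sketched as a small lookup. This is an illustrative sketch, not Taipy's implementation; the SHA-256 hashing scheme and the example users are assumptions made for the demo:

```python
import hashlib

# Hypothetical credential tables mirroring the documented *roles*/*passwords*
# arguments. The hash scheme (sha256 hex digest) is an assumption for the demo.
roles = {"alice": ["admin"], "bob": ["reader"]}
passwords = {"alice": hashlib.sha256(b"s3cret").hexdigest()}

def authenticate(username, password):
    """Return the user's roles on success, or None on failure."""
    if username in passwords:
        # Present in *passwords*: the hashed password must match.
        ok = hashlib.sha256(password.encode()).hexdigest() == passwords[username]
    elif username in roles:
        # Only in *roles*: the password must be identical to the username.
        ok = password == username
    else:
        return None  # unknown user
    # A user present only in *passwords* gets an empty role list.
    return roles.get(username, []) if ok else None
```

Here `alice` must supply her real password, while `bob` (listed only in *roles*) authenticates with the password `"bob"`.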
-                Please refer to the
-                [Configuration section](../gui/configuration.md#configuring-the-gui-instance)
-                of the User Manual for more information on the accepted arguments.
-        Returns:
-            The GUI configuration.
-
-        """
-        pass
-
-    @staticmethod
-    def configure_job_executions(mode: Optional[str] = None, max_nb_of_workers: Union[int, str, NoneType] = None, **properties) -> 'JobConfig':
-        """Configure job execution.
-
-        Parameters:
-            mode (Optional[str]): The job execution mode.
-                Possible values are: *"standalone"* (the default value) or *"development"*.
-            max_nb_of_workers (Optional[int, str]): Parameter used only in default *"standalone"* mode.
-                This indicates the maximum number of jobs able to run in parallel.
-                The default value is 1.
-                A string can be provided to dynamically set the value using an environment
-                variable. The string must follow the pattern: `ENV[<env_var>]` where
-                `<env_var>` is the name of an environment variable.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new job execution configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_data_node(id: str, storage_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new data node configuration.
-            storage_type (Optional[str]): The data node configuration storage type. The possible values
-                are None (which is the default value of *"pickle"*, unless it has been overloaded by the
-                *storage_type* value set in the default data node configuration
-                (see `(Config.)set_default_data_node_configuration()^`)), *"pickle"*, *"csv"*, *"excel"*,
-                *"sql_table"*, *"sql"*, *"json"*, *"parquet"*, *"mongo_collection"*, *"in_memory"*, or
-                *"generic"*.
-            scope (Optional[Scope^]): The scope of the data node configuration.
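The `ENV[<env_var>]` pattern documented for *max_nb_of_workers* can be sketched as a small resolver. This is an illustration of the convention only — the parsing code is not Taipy's, and the `MAX_WORKERS` variable name is a made-up example:

```python
import os
import re

# The documented pattern: ENV[<env_var>] where <env_var> names an environment
# variable that holds the actual value.
_ENV_PATTERN = re.compile(r"^ENV\[(\w+)\]$")

def resolve_workers(value):
    """Resolve an int-or-string max_nb_of_workers setting to an int."""
    if isinstance(value, str):
        match = _ENV_PATTERN.match(value)
        if match:
            return int(os.environ[match.group(1)])
        return int(value)
    return value

os.environ["MAX_WORKERS"] = "4"  # hypothetical deployment-time setting
```

This lets the same configuration file run with different parallelism per environment, without editing the config itself.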
-                The default value is `Scope.SCENARIO` (or the one specified in
-                `(Config.)set_default_data_node_configuration()^`).
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_data_node_from(source_configuration: 'DataNodeConfig', id: str, **properties) -> 'DataNodeConfig':
-        """Configure a new data node configuration from an existing one.
-
-        Parameters:
-            source_configuration (DataNodeConfig): The source data node configuration.
-            id (str): The unique identifier of the new data node configuration.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
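The *validity_period* semantics repeated throughout these docstrings reduce to a simple rule, sketched here with the standard library (the helper name is hypothetical, not a Taipy function):

```python
from datetime import datetime, timedelta

# A data node is up-to-date until validity_period has elapsed since its last
# edit; a validity_period of None means it never goes stale.
def is_up_to_date(last_edit, validity_period, now=None):
    if validity_period is None:
        return True
    now = now or datetime.now()
    return now - last_edit <= validity_period

edit = datetime(2023, 9, 18, 12, 0)
```

Once this check fails, downstream tasks run again even when they are marked skippable, which is exactly the interaction the docstrings point to on the Task configs page.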
-                The default properties are the properties of the source data node configuration.
-
-        Returns:
-            The new data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def set_default_data_node_configuration(storage_type: str, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Set the default values for data node configurations.
-
-        This function creates the _default data node configuration_ object,
-        where all data node configuration objects will find their default
-        values when needed.
-
-        Parameters:
-            storage_type (str): The default storage type for all data node configurations.
-                The possible values are *"pickle"* (the default value), *"csv"*, *"excel"*,
-                *"sql"*, *"mongo_collection"*, *"in_memory"*, *"json"*, *"parquet"* or
-                *"generic"*.
-            scope (Optional[Scope^]): The default scope for all data node configurations.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The default data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_csv_data_node(id: str, default_path: Optional[str] = None, encoding: Optional[str] = None, has_header: Optional[bool] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new CSV data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new CSV data node configuration.
-            default_path (Optional[str]): The default path of the CSV file.
-            encoding (Optional[str]): The encoding of the CSV file.
-            has_header (Optional[bool]): If True, indicates that the CSV file has a header.
-            exposed_type (Optional[str]): The exposed type of the data read from CSV file.
-                The default value is `pandas`.
-            scope (Optional[Scope^]): The scope of the CSV data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new CSV data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_json_data_node(id: str, default_path: Optional[str] = None, encoding: Optional[str] = None, encoder: Optional[json.encoder.JSONEncoder] = None, decoder: Optional[json.decoder.JSONDecoder] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new JSON data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new JSON data node configuration.
-            default_path (Optional[str]): The default path of the JSON file.
-            encoding (Optional[str]): The encoding of the JSON file.
-            encoder (Optional[json.JSONEncoder]): The JSON encoder used to write data into the JSON file.
-            decoder (Optional[json.JSONDecoder]): The JSON decoder used to read data from the JSON file.
-            scope (Optional[Scope^]): The scope of the JSON data node configuration.
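The *encoder*/*decoder* parameters documented for JSON data nodes take standard `json.JSONEncoder`/`json.JSONDecoder` subclasses. Here is a self-contained sketch of such a pair; the date round-tripping scheme (`__date__` marker) is an assumption made for the example, not a Taipy convention:

```python
import json
from datetime import date

class DateEncoder(json.JSONEncoder):
    """Serialize datetime.date objects the default encoder cannot handle."""
    def default(self, obj):
        if isinstance(obj, date):
            return {"__date__": obj.isoformat()}
        return super().default(obj)

class DateDecoder(json.JSONDecoder):
    """Turn the {"__date__": ...} marker dicts back into date objects."""
    def __init__(self, *args, **kwargs):
        super().__init__(object_hook=self._hook, *args, **kwargs)

    @staticmethod
    def _hook(obj):
        if "__date__" in obj:
            return date.fromisoformat(obj["__date__"])
        return obj

text = json.dumps({"start": date(2023, 9, 18)}, cls=DateEncoder)
data = json.loads(text, cls=DateDecoder)
```

Classes with this shape are what one would pass as the *encoder* and *decoder* arguments so the data node can round-trip objects plain JSON has no type for.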
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-        Returns:
-            The new JSON data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_parquet_data_node(id: str, default_path: Optional[str] = None, engine: Optional[str] = None, compression: Optional[str] = None, read_kwargs: Optional[Dict] = None, write_kwargs: Optional[Dict] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new Parquet data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new Parquet data node configuration.
-            default_path (Optional[str]): The default path of the Parquet file.
-            engine (Optional[str]): Parquet library to use. Possible values are *"fastparquet"* or
-                *"pyarrow"*.
-                The default value is *"pyarrow"*.
-            compression (Optional[str]): Name of the compression to use. Possible values are *"snappy"*,
-                *"gzip"*, *"brotli"*, or *"none"* (no compression). The default value is *"snappy"*.
-            read_kwargs (Optional[dict]): Additional parameters passed to the `pandas.read_parquet()`
-                function.
-            write_kwargs (Optional[dict]): Additional parameters passed to the
-                `pandas.DataFrame.to_parquet()` function.
-                The parameters in *read_kwargs* and *write_kwargs* have a **higher precedence** than the
-                top-level parameters which are also passed to Pandas.
-            exposed_type (Optional[str]): The exposed type of the data read from Parquet file.
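The precedence rule stated for *read_kwargs*/*write_kwargs* is an ordinary "last write wins" dictionary merge, sketched here for illustration (the helper is hypothetical, not Taipy's code):

```python
from typing import Optional

# Keys in read_kwargs/write_kwargs override the equivalent top-level
# parameters that are also forwarded to pandas.
def merge_pandas_args(top_level, extra_kwargs: Optional[dict]):
    merged = dict(top_level)
    merged.update(extra_kwargs or {})  # the kwargs dict wins on conflicts
    return merged

# A read_kwargs entry overriding the top-level engine parameter:
args = merge_pandas_args({"engine": "pyarrow"},
                         {"engine": "fastparquet", "columns": ["a"]})
```

So a per-node kwargs dict can override the engine or compression set at the top level without touching the rest of the configuration.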
-                The default value is `pandas`.
-            scope (Optional[Scope^]): The scope of the Parquet data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new Parquet data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_sql_table_data_node(id: str, db_name: str, db_engine: str, table_name: str, db_username: Optional[str] = None, db_password: Optional[str] = None, db_host: Optional[str] = None, db_port: Optional[int] = None, db_driver: Optional[str] = None, sqlite_folder_path: Optional[str] = None, sqlite_file_extension: Optional[str] = None, db_extra_args: Optional[Dict[str, Any]] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new SQL table data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new SQL data node configuration.
-            db_name (str): The database name, or the name of the SQLite database file.
-            db_engine (str): The database engine. Possible values are *"sqlite"*, *"mssql"*, *"mysql"*,
-                or *"postgresql"*.
-            table_name (str): The name of the SQL table.
-            db_username (Optional[str]): The database username. Required by the *"mssql"*, *"mysql"*, and
-                *"postgresql"* engines.
-            db_password (Optional[str]): The database password. Required by the *"mssql"*, *"mysql"*, and
-                *"postgresql"* engines.
-            db_host (Optional[str]): The database host.
-                The default value is "localhost".
-            db_port (Optional[int]): The database port.
-                The default value is 1433.
-            db_driver (Optional[str]): The database driver.
-            sqlite_folder_path (Optional[str]): The path to the folder that contains the SQLite file.
-                The default value is the current working folder.
-            sqlite_file_extension (Optional[str]): The file extension of the SQLite file.
-                The default value is ".db".
-            db_extra_args (Optional[dict[str, any]]): A dictionary of additional arguments to be passed
-                into database connection string.
-            exposed_type (Optional[str]): The exposed type of the data read from SQL table.
-                The default value is "pandas".
-            scope (Optional[Scope^]): The scope of the SQL data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new SQL data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_sql_data_node(id: str, db_name: str, db_engine: str, read_query: str, write_query_builder: Callable, db_username: Optional[str] = None, db_password: Optional[str] = None, db_host: Optional[str] = None, db_port: Optional[int] = None, db_driver: Optional[str] = None, sqlite_folder_path: Optional[str] = None, sqlite_file_extension: Optional[str] = None, db_extra_args: Optional[Dict[str, Any]] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new SQL data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new SQL data node configuration.
-            db_name (str): The database name, or the name of the SQLite database file.
-            db_engine (str): The database engine. Possible values are *"sqlite"*, *"mssql"*, *"mysql"*,
-                or *"postgresql"*.
-            read_query (str): The SQL query string used to read the data from the database.
-            write_query_builder (Callable): A callback function that takes the data as an input parameter
-                and returns a list of SQL queries.
-            db_username (Optional[str]): The database username. Required by the *"mssql"*, *"mysql"*, and
-                *"postgresql"* engines.
-            db_password (Optional[str]): The database password.
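To make the documented `db_*` defaults concrete (host `"localhost"`, port `1433`, SQLite folder = current working folder, extension `".db"`), here is a hypothetical sketch of how those fields could combine into a connection URL. Taipy's actual connection-string builder is not shown in this file and may differ:

```python
# Illustrative only -- defaults mirror the docstrings above.
def build_db_url(db_name, db_engine, db_username=None, db_password=None,
                 db_host="localhost", db_port=1433,
                 sqlite_folder_path=".", sqlite_file_extension=".db"):
    if db_engine == "sqlite":
        # SQLite ignores host/port/credentials; only the file location matters.
        return f"sqlite:///{sqlite_folder_path}/{db_name}{sqlite_file_extension}"
    return f"{db_engine}://{db_username}:{db_password}@{db_host}:{db_port}/{db_name}"
```

This also shows why *db_username*/*db_password* are required only for the server-based engines: the SQLite branch never reads them.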
-                Required by the *"mssql"*, *"mysql"*, and
-                *"postgresql"* engines.
-            db_host (Optional[str]): The database host.
-                The default value is "localhost".
-            db_port (Optional[int]): The database port.
-                The default value is 1433.
-            db_driver (Optional[str]): The database driver.
-            sqlite_folder_path (Optional[str]): The path to the folder that contains the SQLite file.
-                The default value is the current working folder.
-            sqlite_file_extension (Optional[str]): The file extension of the SQLite file.
-                The default value is ".db".
-            db_extra_args (Optional[dict[str, any]]): A dictionary of additional arguments to be passed
-                into database connection string.
-            exposed_type (Optional[str]): The exposed type of the data read from SQL query.
-                The default value is "pandas".
-            scope (Optional[Scope^]): The scope of the SQL data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-        Returns:
-            The new SQL data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_mongo_collection_data_node(id: str, db_name: str, collection_name: str, custom_document: Optional[Any] = None, db_username: Optional[str] = None, db_password: Optional[str] = None, db_host: Optional[str] = None, db_port: Optional[int] = None, db_extra_args: Optional[Dict[str, Any]] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new Mongo collection data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new Mongo collection data node configuration.
-            db_name (str): The database name.
-            collection_name (str): The collection in the database to read from and to write the data to.
-            custom_document (Optional[any]): The custom document class to store, encode, and decode data
-                when reading and writing to a Mongo collection. The custom_document can have an optional
-                *decode()* method to decode data in the Mongo collection to a custom object, and an
-                optional *encode()* method to encode the object's properties to the Mongo collection
-                when writing.
-            db_username (Optional[str]): The database username.
-            db_password (Optional[str]): The database password.
-            db_host (Optional[str]): The database host.
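The optional *encode()*/*decode()* contract described for *custom_document* can be sketched as a plain class. This is an assumption-laden illustration — the `Order` class, its fields, and the classmethod shape of `decode` are all made up for the example, not taken from Taipy:

```python
# A hypothetical custom document class for a Mongo collection data node.
class Order:
    def __init__(self, order_id, amount):
        self.order_id = order_id
        self.amount = amount

    def encode(self):
        # Object -> dict stored as a document in the collection.
        return {"order_id": self.order_id, "amount": self.amount}

    @classmethod
    def decode(cls, document):
        # Document read from the collection -> object.
        return cls(document["order_id"], document["amount"])

doc = Order("A1", 9.5).encode()
restored = Order.decode(doc)
```

With such a class passed as *custom_document*, reads and writes round-trip between collection documents and application objects.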
-                The default value is "localhost".
-            db_port (Optional[int]): The database port.
-                The default value is 27017.
-            db_extra_args (Optional[dict[str, any]]): A dictionary of additional arguments to be passed
-                into database connection string.
-            scope (Optional[Scope^]): The scope of the Mongo collection data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new Mongo collection data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_in_memory_data_node(id: str, default_data: Optional[Any] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new *in-memory* data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new in_memory data node configuration.
-            default_data (Optional[any]): The default data of the data nodes instantiated from
-                this in_memory data node configuration.
-            scope (Optional[Scope^]): The scope of the in_memory data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new *in-memory* data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_pickle_data_node(id: str, default_path: Optional[str] = None, default_data: Optional[Any] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new pickle data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new pickle data node configuration.
-            default_path (Optional[str]): The path of the pickle file.
-            default_data (Optional[any]): The default data of the data nodes instantiated from
-                this pickle data node configuration.
-            scope (Optional[Scope^]): The scope of the pickle data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new pickle data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_excel_data_node(id: str, default_path: Optional[str] = None, has_header: Optional[bool] = None, sheet_name: Union[List[str], str, NoneType] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new Excel data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new Excel data node configuration.
-            default_path (Optional[str]): The path of the Excel file.
-            has_header (Optional[bool]): If True, indicates that the Excel file has a header.
-            sheet_name (Optional[Union[List[str], str]]): The list of sheet names to be used.
-                This can be a unique name.
-            exposed_type (Optional[str]): The exposed type of the data read from Excel file.
-                The default value is `pandas`.
-            scope (Optional[Scope^]): The scope of the Excel data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new Excel data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_generic_data_node(id: str, read_fct: Optional[Callable] = None, write_fct: Optional[Callable] = None, read_fct_args: Optional[List] = None, write_fct_args: Optional[List] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new generic data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new generic data node configuration.
-            read_fct (Optional[Callable]): The Python function called to read the data.
-            write_fct (Optional[Callable]): The Python function called to write the data.
-                The provided function must have at least one parameter that receives the data to be written.
-            read_fct_args (Optional[List]): The list of arguments that are passed to the function
-                *read_fct* to read data.
-            write_fct_args (Optional[List]): The list of arguments that are passed to the function
-                *write_fct* to write the data.
-            scope (Optional[Scope^]): The scope of the Generic data node configuration.
- The default value is `Scope.SCENARIO`. - validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be - considered up-to-date. Once the validity period has passed, the data node is considered stale and - relevant tasks will run even if they are skippable (see the - [Task configs page](../core/config/task-config.md) for more details). - If *validity_period* is set to None, the data node is always up-to-date. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - Returns: - The new Generic data node configuration. - """ - pass - - @staticmethod - def configure_task(id: str, function, input: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, output: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, skippable: Optional[bool] = False, **properties) -> 'TaskConfig': - """Configure a new task configuration. - - Parameters: - id (str): The unique identifier of this task configuration. - function (Callable): The python function called by Taipy to run the task. - input (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the - function input data node configurations. This can be a unique data node - configuration if there is a single input data node, or None if there are none. - output (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the - function output data node configurations. This can be a unique data node - configuration if there is a single output data node, or None if there are none. - skippable (bool): If True, indicates that the task can be skipped if no change has - been made on inputs.
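The *generic* storage type above delegates all I/O to the user-supplied *read_fct* and *write_fct*. A minimal sketch of such a pair in plain Python (the function names, the JSON file, and the extra `path` argument are illustrative assumptions, not from this reference):

```python
import json
from pathlib import Path

# Hypothetical callables for a generic data node. The extra `path`
# argument would be supplied through read_fct_args / write_fct_args.
def read_ratings(path: str) -> list:
    """Read a JSON list from `path`, or return [] if the file is absent."""
    file = Path(path)
    return json.loads(file.read_text()) if file.exists() else []

def write_ratings(data: list, path: str) -> None:
    """The first parameter receives the data to be written, as required."""
    Path(path).write_text(json.dumps(data))
```

With Taipy, one would then pass, for example, `read_fct=read_ratings` together with `read_fct_args=["ratings.json"]`, and the matching write pair.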
- The default value is False. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - - Returns: - The new task configuration. - """ - pass - - @staticmethod - def set_default_task_configuration(function, input: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, output: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, skippable: Optional[bool] = False, **properties) -> 'TaskConfig': - """Set the default values for task configurations. - - This function creates the *default task configuration* object, - where all task configuration objects will find their default - values when needed. - - Parameters: - function (Callable): The python function called by Taipy to run the task. - input (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the - input data node configurations. This can be a unique data node - configuration if there is a single input data node, or None if there are none. - output (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the - output data node configurations. This can be a unique data node - configuration if there is a single output data node, or None if there are none. - skippable (bool): If True, indicates that the task can be skipped if no change has - been made on inputs.
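A task's *function* is an ordinary Python callable: one positional parameter per input data node, one return value per output data node. A small sketch (the names and the tax computation are invented for illustration):

```python
# A plain function usable as a task's `function`: its two parameters
# would map to two input data nodes, its return value to one output.
def total_with_tax(prices: list, tax_rate: float) -> float:
    """Ignore missing prices, then return the tax-inclusive total."""
    valid = [p for p in prices if p is not None]
    return sum(valid) * (1 + tax_rate)
```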
- The default value is False. - **properties (dict[str, any]): A keyworded variable length list of additional - arguments. - Returns: - The default task configuration. - """ - pass - - @staticmethod - def configure_scenario(id: str, task_configs: Optional[List[taipy.core.config.task_config.TaskConfig]] = None, additional_data_node_configs: Optional[List[taipy.core.config.data_node_config.DataNodeConfig]] = None, frequency: Optional[taipy.config.common.frequency.Frequency] = None, comparators: Optional[Dict[str, Union[List[Callable], Callable]]] = None, sequences: Optional[Dict[str, List[taipy.core.config.task_config.TaskConfig]]] = None, **properties) -> 'ScenarioConfig': - """Configure a new scenario configuration. - - Parameters: - id (str): The unique identifier of the new scenario configuration. - task_configs (Optional[List[TaskConfig^]]): The list of task configurations used by this - scenario configuration. The default value is None. - additional_data_node_configs (Optional[List[DataNodeConfig^]]): The list of additional data nodes - related to this scenario configuration. The default value is None. - frequency (Optional[Frequency^]): The scenario frequency.
- It corresponds to the recurrence of the scenarios instantiated from this - configuration. Based on this frequency each scenario will be attached to the - relevant cycle. - comparators (Optional[Dict[str, Union[List[Callable], Callable]]]): The list of - functions used to compare scenarios. A comparator function is attached to a - scenario's data node configuration. The key of the dictionary parameter - corresponds to the data node configuration id. During the scenarios' - comparison, each comparator is applied to all the data nodes instantiated from - the data node configuration attached to the comparator. See - `(taipy.)compare_scenarios()^` for more details. - sequences (Optional[Dict[str, List[TaskConfig]]]): Dictionary of sequence descriptions. - The default value is None. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - - Returns: - The new scenario configuration. - """ - pass - - @staticmethod - def set_default_scenario_configuration(task_configs: Optional[List[taipy.core.config.task_config.TaskConfig]] = None, additional_data_node_configs: List[taipy.core.config.data_node_config.DataNodeConfig] = None, frequency: Optional[taipy.config.common.frequency.Frequency] = None, comparators: Optional[Dict[str, Union[List[Callable], Callable]]] = None, sequences: Optional[Dict[str, List[taipy.core.config.task_config.TaskConfig]]] = None, **properties) -> 'ScenarioConfig': - """Set the default values for scenario configurations. - - This function creates the *default scenario configuration* object, - where all scenario configuration objects will find their default - values when needed. - - Parameters: - task_configs (Optional[List[TaskConfig^]]): The list of task configurations used by this - scenario configuration. - additional_data_node_configs (Optional[List[DataNodeConfig^]]): The list of additional data nodes - related to this scenario configuration. - frequency (Optional[Frequency^]): The scenario frequency.
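A comparator attached to a data node configuration, as described above, compares the values produced by several scenarios. Assuming each comparator receives one value per compared scenario (a sketch of the pattern, not Taipy's exact calling convention):

```python
# Hypothetical comparator for a `revenue` data node configuration:
# one argument per compared scenario.
def compare_revenue(*revenues: float) -> dict:
    """Report which scenario performed best and by how much they differ."""
    best = max(range(len(revenues)), key=lambda i: revenues[i])
    return {"best_scenario": best, "spread": max(revenues) - min(revenues)}
```

Such a function would be registered as `comparators={"revenue": compare_revenue}`, keyed by the data node configuration id.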
- It corresponds to the recurrence of the scenarios instantiated from this - configuration. Based on this frequency each scenario will be attached to - the relevant cycle. - comparators (Optional[Dict[str, Union[List[Callable], Callable]]]): The list of - functions used to compare scenarios. A comparator function is attached to a - scenario's data node configuration. The key of the dictionary parameter - corresponds to the data node configuration id. During the scenarios' - comparison, each comparator is applied to all the data nodes instantiated from - the data node configuration attached to the comparator. See - `taipy.compare_scenarios()^` for more details. - sequences (Optional[Dict[str, List[TaskConfig]]]): Dictionary of sequences. The default value is None. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - - Returns: - The new default scenario configuration. - """ - pass - - @staticmethod - def add_migration_function(target_version: str, config: Union[taipy.config.section.Section, str], migration_fct: Callable, **properties): - """Add a migration function for a Configuration to migrate entities to the target version. - - Parameters: - target_version (str): The production version that entities are migrated to. - config (Union[Section, str]): The configuration or the `id` of the config that needs to migrate. - migration_fct (Callable): Migration function that takes an entity as input and returns a new entity - that is compatible with the target production version. - **properties (Dict[str, Any]): A keyworded variable length list of additional arguments. - Returns: - `MigrationConfig^`: The Migration configuration.
- """ - pass - - @staticmethod - def configure_core(root_folder: Optional[str] = None, storage_folder: Optional[str] = None, repository_type: Optional[str] = None, repository_properties: Optional[Dict[str, Union[str, int]]] = None, read_entity_retry: Optional[int] = None, mode: Optional[str] = None, version_number: Optional[str] = None, force: Optional[bool] = None, **properties) -> 'CoreSection': - """Configure the Core service. - - Parameters: - root_folder (Optional[str]): Path of the base folder for the taipy application. - The default value is "./taipy/" - storage_folder (Optional[str]): Folder name used to store Taipy data. The default value is ".data/". - It is used in conjunction with the `root_folder` field. That means the storage path is - (The default path is "./taipy/.data/"). - repository_type (Optional[str]): The type of the repository to be used to store Taipy data. - The default value is "filesystem". - repository_properties (Optional[Dict[str, Union[str, int]]]): A dictionary of additional properties - to be used by the repository. - read_entity_retry (Optional[int]): Number of retries to read an entity from the repository - before return failure. The default value is 3. - mode (Optional[str]): Indicates the mode of the version management system. - Possible values are *"development"*, *"experiment"*, or *"production"*. - version_number (Optional[str]): The string identifier of the version. - In development mode, the version number is ignored. - force (Optional[bool]): If True, Taipy will override a version even if the configuration - has changed and run the application. - **properties (Dict[str, Any]): A keyworded variable length list of additional arguments configure the - behavior of the `Core^` service. - Returns: - The Core configuration. 
- """ - pass - - @staticmethod - def configure_authentication(protocol: str, secret_key: Optional[str] = None, auth_session_duration: int = 3600, **properties) -> 'AuthenticationConfig': - """Configure authentication. - - Parameters: - protocol (str): The name of the protocol to configure ("ldap", "taipy" or "none"). - secret_key (str): A secret string used to internally encrypt the credentials' information. - If no value is provided, the first run-time authentication sets the default value to a - random text string. - auth_session_duration (int): How long, in seconds, are credentials valid after their creation. The - default value is 3600, corresponding to an hour. - **properties (Dict[str, Any]): A keyworded variable length list of additional arguments.
- Depending on the protocol, these arguments are: - - - "LDAP" protocol: the following arguments are accepted: - - *server*: the URL of the LDAP server this authenticator connects to. - - *base_dn*: the LDAP distinguished name that is used. - - "Taipy" protocol: the following arguments are accepted: - - *roles*: a dictionary that configures the association of usernames to - roles. - - *passwords*: if required, a dictionary that configures the association of - usernames to hashed passwords. - A user can be authenticated if they appear in at least one of the *roles* - or *passwords* dictionaries.
- If it only appears in *roles*, then the user is authenticated if provided - a password exactly identical to its username.
- If it only appears in *passwords*, then the user is assigned no roles. - - "None": No additional arguments are required. - - Returns: - The authentication configuration. - """ - pass - - @staticmethod - def configure_gui(**properties) -> '_GuiSection': - """NOT DOCUMENTED - Configure the Graphical User Interface. - - Parameters: - **properties (dict[str, any]): Keyword arguments that configure the behavior of the `Gui^` instances.
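The "taipy" protocol rules above can be made explicit with a small decision function. This is only an illustration of the documented rules, not Taipy's implementation, and it compares plain strings where a real deployment would compare hashed passwords:

```python
def authenticate(username: str, password: str, roles: dict, passwords: dict):
    """Return the user's roles on success, or None on failure.

    Documented rules: a user only in `roles` authenticates with a
    password identical to the username; a user only in `passwords`
    is assigned no roles; unknown users are rejected.
    """
    if username in passwords:
        return roles.get(username, []) if password == passwords[username] else None
    if username in roles:
        return roles[username] if password == username else None
    return None
```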
- Please refer to the - [Configuration section](../gui/configuration.md#configuring-the-gui-instance) - of the User Manual for more information on the accepted arguments. - Returns: - The GUI configuration. - - """ - pass - - @staticmethod - def configure_job_executions(mode: Optional[str] = None, max_nb_of_workers: Union[int, str, NoneType] = None, **properties) -> 'JobConfig': - """Configure job execution. - - Parameters: - mode (Optional[str]): The job execution mode. - Possible values are: *"standalone"* (the default value) or *"development"*. - max_nb_of_workers (Optional[Union[int, str]]): Parameter used only in the default *"standalone"* mode. - This indicates the maximum number of jobs that can run in parallel.
- The default value is 1.
- A string can be provided to dynamically set the value using an environment - variable. The string must follow the pattern: `ENV[<env_var>]` where - `<env_var>` is the name of an environment variable. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - - Returns: - The new job execution configuration. - """ - pass - - @staticmethod - def configure_data_node(id: str, storage_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig': - """Configure a new data node configuration. - - Parameters: - id (str): The unique identifier of the new data node configuration. - storage_type (Optional[str]): The data node configuration storage type. The possible values - are None (which is the default value of *"pickle"*, unless it has been overloaded by the - *storage_type* value set in the default data node configuration - (see `(Config.)set_default_data_node_configuration()^`)), *"pickle"*, *"csv"*, *"excel"*, - *"sql_table"*, *"sql"*, *"json"*, *"parquet"*, *"mongo_collection"*, *"in_memory"*, or - *"generic"*. - scope (Optional[Scope^]): The scope of the data node configuration.
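The `ENV[<env_var>]` indirection above can be illustrated with a small resolver; this sketch mimics the documented pattern rather than reproducing Taipy's own parsing:

```python
import os
import re

_ENV_PATTERN = re.compile(r"^ENV\[(\w+)\]$")

def resolve_max_workers(value):
    """Return `value` if it is an int, else resolve an 'ENV[<var>]' string."""
    if isinstance(value, int):
        return value
    match = _ENV_PATTERN.match(value)
    if match is None:
        raise ValueError(f"expected an int or ENV[<env_var>], got {value!r}")
    return int(os.environ[match.group(1)])
```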
- The default value is `Scope.SCENARIO` (or the one specified in - `(Config.)set_default_data_node_configuration()^`). - validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be - considered up-to-date. Once the validity period has passed, the data node is considered stale and - relevant tasks will run even if they are skippable (see the - [Task configs page](../core/config/task-config.md) for more details). - If *validity_period* is set to None, the data node is always up-to-date. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - - Returns: - The new data node configuration. - """ - pass - - @staticmethod - def configure_data_node_from(source_configuration: 'DataNodeConfig', id: str, **properties) -> 'DataNodeConfig': - """Configure a new data node configuration from an existing one. - - Parameters: - source_configuration (DataNodeConfig): The source data node configuration. - id (str): The unique identifier of the new data node configuration. - **properties (dict[str, any]): A keyworded variable length list of additional arguments.
- The default properties are the properties of the source data node configuration. - - Returns: - The new data node configuration. - """ - pass - - @staticmethod - def set_default_data_node_configuration(storage_type: str, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig': - """Set the default values for data node configurations. - - This function creates the _default data node configuration_ object, - where all data node configuration objects will find their default - values when needed. - - Parameters: - storage_type (str): The default storage type for all data node configurations. - The possible values are *"pickle"* (the default value), *"csv"*, *"excel"*, - *"sql"*, *"mongo_collection"*, *"in_memory"*, *"json"*, *"parquet"* or - *"generic"*. - scope (Optional[Scope^]): The default scope for all data node configurations.
- The default value is `Scope.SCENARIO`. - validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be - considered up-to-date. Once the validity period has passed, the data node is considered stale and - relevant tasks will run even if they are skippable (see the - [Task configs page](../core/config/task-config.md) for more details). - If *validity_period* is set to None, the data node is always up-to-date. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - - Returns: - The default data node configuration. - """ - pass - - @staticmethod - def configure_csv_data_node(id: str, default_path: Optional[str] = None, encoding: Optional[str] = None, has_header: Optional[bool] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig': - """Configure a new CSV data node configuration. - - Parameters: - id (str): The unique identifier of the new CSV data node configuration. - default_path (Optional[str]): The default path of the CSV file. - encoding (Optional[str]): The encoding of the CSV file. - has_header (Optional[bool]): If True, indicates that the CSV file has a header. - exposed_type (Optional[str]): The exposed type of the data read from CSV file.
- The default value is `pandas`. - scope (Optional[Scope^]): The scope of the CSV data node configuration.
- The default value is `Scope.SCENARIO`. - validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be - considered up-to-date. Once the validity period has passed, the data node is considered stale and - relevant tasks will run even if they are skippable (see the - [Task configs page](../core/config/task-config.md) for more details). - If *validity_period* is set to None, the data node is always up-to-date. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - - Returns: - The new CSV data node configuration. - """ - pass - - @staticmethod - def configure_json_data_node(id: str, default_path: Optional[str] = None, encoding: Optional[str] = None, encoder: Optional[json.encoder.JSONEncoder] = None, decoder: Optional[json.decoder.JSONDecoder] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig': - """Configure a new JSON data node configuration. - - Parameters: - id (str): The unique identifier of the new JSON data node configuration. - default_path (Optional[str]): The default path of the JSON file. - encoding (Optional[str]): The encoding of the JSON file. - encoder (Optional[json.JSONEncoder]): The JSON encoder used to write data into the JSON file. - decoder (Optional[json.JSONDecoder]): The JSON decoder used to read data from the JSON file. - scope (Optional[Scope^]): The scope of the JSON data node configuration.
- The default value is `Scope.SCENARIO`. - validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be - considered up-to-date. Once the validity period has passed, the data node is considered stale and - relevant tasks will run even if they are skippable (see the - [Task configs page](../core/config/task-config.md) for more details). - If *validity_period* is set to None, the data node is always up-to-date. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - Returns: - The new JSON data node configuration. - """ - pass - - @staticmethod - def configure_parquet_data_node(id: str, default_path: Optional[str] = None, engine: Optional[str] = None, compression: Optional[str] = None, read_kwargs: Optional[Dict] = None, write_kwargs: Optional[Dict] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig': - """Configure a new Parquet data node configuration. - - Parameters: - id (str): The unique identifier of the new Parquet data node configuration. - default_path (Optional[str]): The default path of the Parquet file. - engine (Optional[str]): Parquet library to use. Possible values are *"fastparquet"* or - *"pyarrow"*.
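The *encoder* and *decoder* parameters above accept `json.JSONEncoder` and `json.JSONDecoder` subclasses. A sketch of a pair that round-trips datetimes (the `__datetime__` tagging scheme is our own choice, not part of the API):

```python
import json
from datetime import datetime

class DatetimeEncoder(json.JSONEncoder):
    """Serialize datetimes as tagged ISO strings."""
    def default(self, o):
        if isinstance(o, datetime):
            return {"__datetime__": o.isoformat()}
        return super().default(o)

class DatetimeDecoder(json.JSONDecoder):
    """Rebuild datetimes from the tagged form written by the encoder."""
    def __init__(self, *args, **kwargs):
        super().__init__(object_hook=self._hook, *args, **kwargs)

    @staticmethod
    def _hook(obj):
        if "__datetime__" in obj:
            return datetime.fromisoformat(obj["__datetime__"])
        return obj
```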
- The default value is *"pyarrow"*. - compression (Optional[str]): Name of the compression to use. Possible values are *"snappy"*, - *"gzip"*, *"brotli"*, or *"none"* (no compression). The default value is *"snappy"*. - read_kwargs (Optional[dict]): Additional parameters passed to the `pandas.read_parquet()` - function. - write_kwargs (Optional[dict]): Additional parameters passed to the - `pandas.DataFrame.write_parquet()` function.
- The parameters in *read_kwargs* and *write_kwargs* have a **higher precedence** than the - top-level parameters which are also passed to Pandas. - exposed_type (Optional[str]): The exposed type of the data read from Parquet file.
- The default value is `pandas`. - scope (Optional[Scope^]): The scope of the Parquet data node configuration.
- The default value is `Scope.SCENARIO`. - validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be - considered up-to-date. Once the validity period has passed, the data node is considered stale and - relevant tasks will run even if they are skippable (see the - [Task configs page](../core/config/task-config.md) for more details). - If *validity_period* is set to None, the data node is always up-to-date. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - - Returns: - The new Parquet data node configuration. - """ - pass - - @staticmethod - def configure_sql_table_data_node(id: str, db_name: str, db_engine: str, table_name: str, db_username: Optional[str] = None, db_password: Optional[str] = None, db_host: Optional[str] = None, db_port: Optional[int] = None, db_driver: Optional[str] = None, sqlite_folder_path: Optional[str] = None, sqlite_file_extension: Optional[str] = None, db_extra_args: Optional[Dict[str, Any]] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig': - """Configure a new SQL table data node configuration. - - Parameters: - id (str): The unique identifier of the new SQL data node configuration. - db_name (str): The database name, or the name of the SQLite database file. - db_engine (str): The database engine. Possible values are *"sqlite"*, *"mssql"*, *"mysql"*, - or *"postgresql"*. - table_name (str): The name of the SQL table. - db_username (Optional[str]): The database username. Required by the *"mssql"*, *"mysql"*, and - *"postgresql"* engines. - db_password (Optional[str]): The database password. Required by the *"mssql"*, *"mysql"*, and - *"postgresql"* engines. - db_host (Optional[str]): The database host.
- The default value is "localhost". - db_port (Optional[int]): The database port.
- The default value is 1433. - db_driver (Optional[str]): The database driver. - sqlite_folder_path (Optional[str]): The path to the folder that contains the SQLite file.
- The default value is the current working folder. - sqlite_file_extension (Optional[str]): The file extension of the SQLite file.
- The default value is ".db". - db_extra_args (Optional[dict[str, any]]): A dictionary of additional arguments to be passed - into database connection string. - exposed_type (Optional[str]): The exposed type of the data read from SQL table.
- The default value is "pandas". - scope (Optional[Scope^]): The scope of the SQL data node configuration.
- The default value is `Scope.SCENARIO`. - validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be - considered up-to-date. Once the validity period has passed, the data node is considered stale and - relevant tasks will run even if they are skippable (see the - [Task configs page](../core/config/task-config.md) for more details). - If *validity_period* is set to None, the data node is always up-to-date. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - - Returns: - The new SQL data node configuration. - """ - pass - - @staticmethod - def configure_sql_data_node(id: str, db_name: str, db_engine: str, read_query: str, write_query_builder: Callable, db_username: Optional[str] = None, db_password: Optional[str] = None, db_host: Optional[str] = None, db_port: Optional[int] = None, db_driver: Optional[str] = None, sqlite_folder_path: Optional[str] = None, sqlite_file_extension: Optional[str] = None, db_extra_args: Optional[Dict[str, Any]] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig': - """Configure a new SQL data node configuration. - - Parameters: - id (str): The unique identifier of the new SQL data node configuration. - db_name (str): The database name, or the name of the SQLite database file. - db_engine (str): The database engine. Possible values are *"sqlite"*, *"mssql"*, *"mysql"*, - or *"postgresql"*. - read_query (str): The SQL query string used to read the data from the database. - write_query_builder (Callable): A callback function that takes the data as an input parameter - and returns a list of SQL queries. - db_username (Optional[str]): The database username. Required by the *"mssql"*, *"mysql"*, and - *"postgresql"* engines. - db_password (Optional[str]): The database password. 
Required by the *"mssql"*, *"mysql"*, and - *"postgresql"* engines. - db_host (Optional[str]): The database host.
- The default value is "localhost". - db_port (Optional[int]): The database port.
- The default value is 1433. - db_driver (Optional[str]): The database driver. - sqlite_folder_path (Optional[str]): The path to the folder that contains the SQLite file.
- The default value is the current working folder. - sqlite_file_extension (Optional[str]): The file extension of the SQLite file.
- The default value is ".db". - db_extra_args (Optional[dict[str, any]]): A dictionary of additional arguments to be passed - into database connection string. - exposed_type (Optional[str]): The exposed type of the data read from SQL query.
- The default value is "pandas". - scope (Optional[Scope^]): The scope of the SQL data node configuration.
- The default value is `Scope.SCENARIO`. - validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be - considered up-to-date. Once the validity period has passed, the data node is considered stale and - relevant tasks will run even if they are skippable (see the - [Task configs page](../core/config/task-config.md) for more details). - If *validity_period* is set to None, the data node is always up-to-date. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - Returns: - The new SQL data node configuration. - """ - pass - - @staticmethod - def configure_mongo_collection_data_node(id: str, db_name: str, collection_name: str, custom_document: Optional[Any] = None, db_username: Optional[str] = None, db_password: Optional[str] = None, db_host: Optional[str] = None, db_port: Optional[int] = None, db_extra_args: Optional[Dict[str, Any]] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig': - """Configure a new Mongo collection data node configuration. - - Parameters: - id (str): The unique identifier of the new Mongo collection data node configuration. - db_name (str): The database name. - collection_name (str): The collection in the database to read from and to write the data to. - custom_document (Optional[any]): The custom document class to store, encode, and decode data - when reading and writing to a Mongo collection. The custom_document can have an optional - *decode()* method to decode data in the Mongo collection to a custom object, and an - optional *encode()* method to encode the object's properties to the Mongo collection - when writing. - db_username (Optional[str]): The database username. - db_password (Optional[str]): The database password. - db_host (Optional[str]): The database host.
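The *write_query_builder* callback above takes the data to write and returns the list of SQL queries to execute. A sketch with an invented `orders` table (real code should prefer parameterized queries over string interpolation):

```python
# Hypothetical write_query_builder: receives the data to write and
# returns the SQL queries that replace the table's content.
def orders_query_builder(data: list) -> list:
    queries = ["DELETE FROM orders"]  # table and columns are illustrative
    for row in data:
        # Illustration only: production code should parameterize values.
        queries.append(
            f"INSERT INTO orders (id, total) VALUES ({row['id']}, {row['total']})"
        )
    return queries
```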
- The default value is "localhost". - db_port (Optional[int]): The database port.
- The default value is 27017. - db_extra_args (Optional[dict[str, any]]): A dictionary of additional arguments to be passed - into the database connection string. - scope (Optional[Scope^]): The scope of the Mongo collection data node configuration.
- The default value is `Scope.SCENARIO`. - validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be - considered up-to-date. Once the validity period has passed, the data node is considered stale and - relevant tasks will run even if they are skippable (see the - [Task configs page](../core/config/task-config.md) for more details). - If *validity_period* is set to None, the data node is always up-to-date. - **properties (dict[str, any]): A keyworded variable length list of additional arguments. - - Returns: - The new Mongo collection data node configuration. - """ - pass - - @staticmethod - def configure_in_memory_data_node(id: str, default_data: Optional[Any] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig': - """Configure a new *in-memory* data node configuration. - - Parameters: - id (str): The unique identifier of the new in_memory data node configuration. - default_data (Optional[any]): The default data of the data nodes instantiated from - this in_memory data node configuration. - scope (Optional[Scope^]): The scope of the in_memory data node configuration.
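A *custom_document* class with the optional *encode()* and *decode()* methods described above might look like the following sketch (the class is invented, and the exact method signatures Taipy expects are assumptions here):

```python
class UserDocument:
    """Hypothetical custom document for a Mongo collection data node."""

    def __init__(self, name: str, age: int):
        self.name = name
        self.age = age

    def encode(self) -> dict:
        """Encode this object's properties for writing to the collection."""
        return {"name": self.name, "age": self.age}

    @staticmethod
    def decode(document: dict) -> "UserDocument":
        """Decode a raw Mongo document into a UserDocument."""
        return UserDocument(document["name"], document["age"])
```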
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new *in-memory* data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_pickle_data_node(id: str, default_path: Optional[str] = None, default_data: Optional[Any] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new pickle data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new pickle data node configuration.
-            default_path (Optional[str]): The path of the pickle file.
-            default_data (Optional[any]): The default data of the data nodes instantiated from
-                this pickle data node configuration.
-            scope (Optional[Scope^]): The scope of the pickle data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new pickle data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_excel_data_node(id: str, default_path: Optional[str] = None, has_header: Optional[bool] = None, sheet_name: Union[List[str], str, NoneType] = None, exposed_type: Optional[str] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new Excel data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new Excel data node configuration.
-            default_path (Optional[str]): The path of the Excel file.
-            has_header (Optional[bool]): If True, indicates that the Excel file has a header.
-            sheet_name (Optional[Union[List[str], str]]): The list of sheet names to be used.
-                This can be a unique name.
-            exposed_type (Optional[str]): The exposed type of the data read from the Excel file.
-                The default value is `pandas`.
-            scope (Optional[Scope^]): The scope of the Excel data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new Excel data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_generic_data_node(id: str, read_fct: Optional[Callable] = None, write_fct: Optional[Callable] = None, read_fct_args: Optional[List] = None, write_fct_args: Optional[List] = None, scope: Optional[taipy.config.common.scope.Scope] = None, validity_period: Optional[datetime.timedelta] = None, **properties) -> 'DataNodeConfig':
-        """Configure a new generic data node configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new generic data node configuration.
-            read_fct (Optional[Callable]): The Python function called to read the data.
-            write_fct (Optional[Callable]): The Python function called to write the data.
-                The provided function must have at least one parameter that receives the data to be written.
-            read_fct_args (Optional[List]): The list of arguments that are passed to the function
-                *read_fct* to read data.
-            write_fct_args (Optional[List]): The list of arguments that are passed to the function
-                *write_fct* to write the data.
-            scope (Optional[Scope^]): The scope of the Generic data node configuration.
-                The default value is `Scope.SCENARIO`.
-            validity_period (Optional[timedelta]): The duration since the last edit date for which the data node can be
-                considered up-to-date. Once the validity period has passed, the data node is considered stale and
-                relevant tasks will run even if they are skippable (see the
-                [Task configs page](../core/config/task-config.md) for more details).
-                If *validity_period* is set to None, the data node is always up-to-date.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-        Returns:
-            The new Generic data node configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_task(id: str, function, input: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, output: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, skippable: Optional[bool] = False, **properties) -> 'TaskConfig':
-        """Configure a new task configuration.
-
-        Parameters:
-            id (str): The unique identifier of this task configuration.
-            function (Callable): The python function called by Taipy to run the task.
-            input (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the
-                function input data node configurations. This can be a unique data node
-                configuration if there is a single input data node, or None if there are none.
-            output (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the
-                function output data node configurations. This can be a unique data node
-                configuration if there is a single output data node, or None if there are none.
-            skippable (bool): If True, indicates that the task can be skipped if no change has
-                been made on inputs.
-                The default value is False.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new task configuration.
-        """
-        pass
-
-    @staticmethod
-    def set_default_task_configuration(function, input: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, output: Union[taipy.core.config.data_node_config.DataNodeConfig, List[taipy.core.config.data_node_config.DataNodeConfig], NoneType] = None, skippable: Optional[bool] = False, **properties) -> 'TaskConfig':
-        """Set the default values for task configurations.
-
-        This function creates the *default task configuration* object,
-        where all task configuration objects will find their default
-        values when needed.
-
-        Parameters:
-            function (Callable): The python function called by Taipy to run the task.
-            input (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the
-                input data node configurations. This can be a unique data node
-                configuration if there is a single input data node, or None if there are none.
-            output (Optional[Union[DataNodeConfig^, List[DataNodeConfig^]]]): The list of the
-                output data node configurations. This can be a unique data node
-                configuration if there is a single output data node, or None if there are none.
-            skippable (bool): If True, indicates that the task can be skipped if no change has
-                been made on inputs.
-                The default value is False.
-            **properties (dict[str, any]): A keyworded variable length list of additional
-                arguments.
-        Returns:
-            The default task configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_scenario(id: str, task_configs: Optional[List[taipy.core.config.task_config.TaskConfig]] = None, additional_data_node_configs: Optional[List[taipy.core.config.data_node_config.DataNodeConfig]] = None, frequency: Optional[taipy.config.common.frequency.Frequency] = None, comparators: Optional[Dict[str, Union[List[Callable], Callable]]] = None, sequences: Optional[Dict[str, List[taipy.core.config.task_config.TaskConfig]]] = None, **properties) -> 'ScenarioConfig':
-        """Configure a new scenario configuration.
-
-        Parameters:
-            id (str): The unique identifier of the new scenario configuration.
-            task_configs (Optional[List[TaskConfig^]]): The list of task configurations used by this
-                scenario configuration. The default value is None.
-            additional_data_node_configs (Optional[List[DataNodeConfig^]]): The list of additional data nodes
-                related to this scenario configuration. The default value is None.
-            frequency (Optional[Frequency^]): The scenario frequency.
-                It corresponds to the recurrence of the scenarios instantiated from this
-                configuration. Based on this frequency each scenario will be attached to the
-                relevant cycle.
-            comparators (Optional[Dict[str, Union[List[Callable], Callable]]]): The list of
-                functions used to compare scenarios. A comparator function is attached to a
-                scenario's data node configuration. The key of the dictionary parameter
-                corresponds to the data node configuration id. During the scenarios'
-                comparison, each comparator is applied to all the data nodes instantiated from
-                the data node configuration attached to the comparator. See
-                `(taipy.)compare_scenarios()^` for more details.
-            sequences (Optional[Dict[str, List[TaskConfig]]]): Dictionary of sequence descriptions.
-                The default value is None.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new scenario configuration.
-        """
-        pass
-
-    @staticmethod
-    def set_default_scenario_configuration(task_configs: Optional[List[taipy.core.config.task_config.TaskConfig]] = None, additional_data_node_configs: Optional[List[taipy.core.config.data_node_config.DataNodeConfig]] = None, frequency: Optional[taipy.config.common.frequency.Frequency] = None, comparators: Optional[Dict[str, Union[List[Callable], Callable]]] = None, sequences: Optional[Dict[str, List[taipy.core.config.task_config.TaskConfig]]] = None, **properties) -> 'ScenarioConfig':
-        """Set the default values for scenario configurations.
-
-        This function creates the *default scenario configuration* object,
-        where all scenario configuration objects will find their default
-        values when needed.
-
-        Parameters:
-            task_configs (Optional[List[TaskConfig^]]): The list of task configurations used by this
-                scenario configuration.
-            additional_data_node_configs (Optional[List[DataNodeConfig^]]): The list of additional data nodes
-                related to this scenario configuration.
-            frequency (Optional[Frequency^]): The scenario frequency.
-                It corresponds to the recurrence of the scenarios instantiated from this
-                configuration. Based on this frequency each scenario will be attached to
-                the relevant cycle.
-            comparators (Optional[Dict[str, Union[List[Callable], Callable]]]): The list of
-                functions used to compare scenarios. A comparator function is attached to a
-                scenario's data node configuration. The key of the dictionary parameter
-                corresponds to the data node configuration id. During the scenarios'
-                comparison, each comparator is applied to all the data nodes instantiated from
-                the data node configuration attached to the comparator. See
-                `taipy.compare_scenarios()^` for more details.
-            sequences (Optional[Dict[str, List[TaskConfig]]]): Dictionary of sequences. The default value is None.
-            **properties (dict[str, any]): A keyworded variable length list of additional arguments.
-
-        Returns:
-            The new default scenario configuration.
-        """
-        pass
-
-    @staticmethod
-    def add_migration_function(target_version: str, config: Union[taipy.config.section.Section, str], migration_fct: Callable, **properties):
-        """Add a migration function for a Configuration to migrate entities to the target version.
-
-        Parameters:
-            target_version (str): The production version that entities are migrated to.
-            config (Union[Section, str]): The configuration or the `id` of the config that needs to migrate.
-            migration_fct (Callable): Migration function that takes an entity as input and returns a new entity
-                that is compatible with the target production version.
-            **properties (Dict[str, Any]): A keyworded variable length list of additional arguments.
-        Returns:
-            `MigrationConfig^`: The Migration configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_core(root_folder: Optional[str] = None, storage_folder: Optional[str] = None, repository_type: Optional[str] = None, repository_properties: Optional[Dict[str, Union[str, int]]] = None, read_entity_retry: Optional[int] = None, mode: Optional[str] = None, version_number: Optional[str] = None, force: Optional[bool] = None, **properties) -> 'CoreSection':
-        """Configure the Core service.
-
-        Parameters:
-            root_folder (Optional[str]): Path of the base folder for the taipy application.
-                The default value is "./taipy/".
-            storage_folder (Optional[str]): Folder name used to store Taipy data. The default value is ".data/".
-                It is used in conjunction with the *root_folder* field: the storage path is the
-                concatenation of *root_folder* and *storage_folder* (the default path is "./taipy/.data/").
-            repository_type (Optional[str]): The type of the repository to be used to store Taipy data.
-                The default value is "filesystem".
-            repository_properties (Optional[Dict[str, Union[str, int]]]): A dictionary of additional properties
-                to be used by the repository.
-            read_entity_retry (Optional[int]): Number of retries to read an entity from the repository
-                before returning a failure. The default value is 3.
-            mode (Optional[str]): Indicates the mode of the version management system.
-                Possible values are *"development"*, *"experiment"*, or *"production"*.
-            version_number (Optional[str]): The string identifier of the version.
-                In development mode, the version number is ignored.
-            force (Optional[bool]): If True, Taipy will override a version even if the configuration
-                has changed and run the application.
-            **properties (Dict[str, Any]): A keyworded variable length list of additional arguments that
-                configure the behavior of the `Core^` service.
-        Returns:
-            The Core configuration.
-        """
-        pass
-
-    @staticmethod
-    def configure_authentication(protocol: str, secret_key: Optional[str] = None, auth_session_duration: int = 3600, **properties) -> 'AuthenticationConfig':
-        """Configure authentication.
-
-        Parameters:
-            protocol (str): The name of the protocol to configure ("ldap", "taipy" or "none").
-            secret_key (str): A secret string used to internally encrypt the credentials' information.
-                If no value is provided, the first run-time authentication sets the default value to a
-                random text string.
-            auth_session_duration (int): How long, in seconds, credentials remain valid after their creation.
-                The default value is 3600, corresponding to an hour.
-            **properties (Dict[str, Any]): A keyworded variable length list of additional arguments.
-                Depending on the protocol, these arguments are:
-
-                - "LDAP" protocol: the following arguments are accepted:
-                    - *server*: the URL of the LDAP server this authenticator connects to.
-                    - *base_dn*: the LDAP distinguished name that is used.
-                - "Taipy" protocol: the following arguments are accepted:
-                    - *roles*: a dictionary that configures the association of usernames to
-                        roles.
-                    - *passwords*: if required, a dictionary that configures the association of
-                        usernames to hashed passwords.
-
-                    A user can be authenticated if they appear in at least one of the *roles*
-                    or *passwords* dictionaries.
-                    If a user only appears in *roles*, then the user is authenticated if the
-                    password provided is exactly identical to the username.
-                    If a user only appears in *passwords*, then the user is assigned no roles.
-                - "None": No additional arguments are required.
-
-        Returns:
-            The authentication configuration.
-        """
-        pass
-
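The *custom_document* contract documented for `configure_mongo_collection_data_node` above can be sketched in plain Python. This is a minimal sketch, assuming one plausible shape for the optional *encode()*/*decode()* pair; the `User` class and its fields are hypothetical and not part of Taipy:

```python
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class User:
    """Hypothetical custom document class for a Mongo collection data node."""

    name: str
    age: int

    def encode(self) -> Dict[str, Any]:
        # Encodes the object's properties for writing to the Mongo collection.
        return {"name": self.name, "age": self.age}

    @classmethod
    def decode(cls, document: Dict[str, Any]) -> "User":
        # Decodes a raw Mongo document back into a custom object when reading.
        return cls(name=document["name"], age=document["age"])
```

The exact signatures Taipy expects for *encode()* and *decode()* should be checked against the current Taipy data node documentation.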
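For `configure_generic_data_node`, the *read_fct*/*write_fct* pair is plain Python. The sketch below shows candidate functions for a CSV-backed generic data node; the function names and the CSV schema are illustrative assumptions, not part of Taipy:

```python
import csv
from typing import List


def read_entries(path: str) -> List[dict]:
    # Candidate read_fct: called with the read_fct_args to produce the data.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def write_entries(data: List[dict], path: str) -> None:
    # Candidate write_fct: per the docstring above, the first parameter
    # receives the data to be written; remaining args come from write_fct_args.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "value"])
        writer.writeheader()
        writer.writerows(data)
```

These would then be passed as `read_fct=read_entries, write_fct=write_entries` with the file path supplied through *read_fct_args* and *write_fct_args*.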
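The *comparators* argument of `configure_scenario` maps a data node configuration id to comparison functions. A minimal sketch of such a function, assuming each comparator receives the value of that data node from each scenario being compared (the exact signature should be verified against `taipy.compare_scenarios()` in the Taipy docs):

```python
from typing import Any, Dict


def compare_mean(*data_node_results: list) -> Dict[str, Any]:
    # Hypothetical comparator: summarizes the per-scenario means of a numeric
    # data node and how far apart the scenarios are.
    means = [sum(values) / len(values) for values in data_node_results]
    return {"means": means, "spread": max(means) - min(means)}
```

It would be attached with something like `comparators={"output_cfg_id": compare_mean}`, where `"output_cfg_id"` is a hypothetical data node configuration id.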