diff --git a/en/docs/administer/key-managers/configure-custom-connector.md b/en/docs/administer/key-managers/configure-custom-connector.md index 6a751a6c97..d928a16b9f 100644 --- a/en/docs/administer/key-managers/configure-custom-connector.md +++ b/en/docs/administer/key-managers/configure-custom-connector.md @@ -132,7 +132,7 @@ When registering a third-party Identity Provider as a Key Manager in the Admin P 1. Sign in to the Admin Portal using the following URL: `https://:9443/admin` - !!! tip + !!! tip For example, this URL can be `https://localhost:9443/admin` and you can use `admin` as the username and password to access the Admin Portal. 2. Add a new Key Manager. @@ -462,7 +462,7 @@ When registering a third-party Identity Provider as a Key Manager in the Admin P 1. Sign in to the Developer Portal using the following URL: `https://:9443/devportal` - !!! tip + !!! tip This can be `https://localhost:9443/devportal` and you can use “admin” as the username and password to access the Developer Portal. 2. Click **Applications**. diff --git a/en/docs/administer/key-managers/configure-keycloak-connector.md b/en/docs/administer/key-managers/configure-keycloak-connector.md index 359a89e257..d9696ac097 100644 --- a/en/docs/administer/key-managers/configure-keycloak-connector.md +++ b/en/docs/administer/key-managers/configure-keycloak-connector.md @@ -97,7 +97,7 @@ Follow the instructions given below to configure Keycloak as a third-party Key M [![Add Keycloak configurations]({{base_path}}/assets/img/administer/keycloak-endpoints.png)]({{base_path}}/assets/img/administer/keycloak-endpoints.png) - !!! tip + !!! tip * Configuring the well-known URL populates all the default configurations in the table once you click **Import**. * It is mandatory to provide the **Client Id** and **Client Secret**. diff --git a/en/docs/administer/logging-and-monitoring/logging/admin-configuring-the-log-provider.md b/en/docs/administer/logging-and-monitoring/logging/admin-configuring-the-log-provider.md index 5941b9eaa4..02be394d15 100644 --- a/en/docs/administer/logging-and-monitoring/logging/admin-configuring-the-log-provider.md +++ b/en/docs/administer/logging-and-monitoring/logging/admin-configuring-the-log-provider.md @@ -60,8 +60,8 @@ After implementing the above interfaces, update the `logging-config.xml` file st ``` - !!! note - The default "InMemoryLogProvider" uses the CarbonMemoryAppender. Therefore the log4j.properties file stored in <PRODUCT\_HOME>/repository/conf/ directory should be updated with the following log4j.appender.CARBON\_MEMORY property: + !!! note + The default "InMemoryLogProvider" uses the CarbonMemoryAppender. 
Therefore the log4j.properties file stored in <PRODUCT\_HOME>/repository/conf/ directory should be updated with the following log4j.appender.CARBON\_MEMORY property:
``` java
log4j.appender.CARBON_MEMORY=org.wso2.carbon.logging.service.appender.CarbonMemoryAppender
```
diff --git a/en/docs/administer/managing-users-and-roles/managing-user-stores/configuring-the-system-administrator.md b/en/docs/administer/managing-users-and-roles/managing-user-stores/configuring-the-system-administrator.md index cc7c0cd001..8cb6e2dc39 100644 --- a/en/docs/administer/managing-users-and-roles/managing-user-stores/configuring-the-system-administrator.md +++ b/en/docs/administer/managing-users-and-roles/managing-user-stores/configuring-the-system-administrator.md @@ -3,7 +3,7 @@ The **admin** user is the super tenant that will be able to manage all other users, roles and permissions in the system by using the management console of the product. Therefore, the user that should have admin permissions is required to be stored in the primary user store when you start the system for the first time. The documentation on setting up primary user stores will explain how to configure the administrator while configuring the user store. The information under this topic will explain the main configurations that are relevant to setting up the system administrator. !!! note -If the primary user store is read-only, you will be using a user ID and role that already exists in the user store, for the administrator. If the user store is read/write, you have the option of creating the administrator user in the user store as explained below. By default, the embedded H2 database (with read/write enabled) is used for both these purposes in WSO2 products. + If the primary user store is read-only, you will be using a user ID and role that already exists in the user store, for the administrator. If the user store is read/write, you have the option of creating the administrator user in the user store as explained below. By default, the embedded H2 database (with read/write enabled) is used for both these purposes in WSO2 products. Note the following key facts about the system administrator in your system: @@ -72,8 +72,10 @@ Note the following regarding the configuration above.

Do NOT put the password here but leave the default value. If the user store is read-only, this element and its value are ignored. This password is used only if the user store is read-write and the AddAdmin value is set to true.

-!!! note
+<div class="admonition note">
+<p class="admonition-title">Note</p>

Note

Note that the password in the user-mgt.xml file is written to the primary user store when the server starts for the first time. Thereafter, the password will be validated from the primary user store and not from the user-mgt.xml file. Therefore, if you need to change the admin password stored in the user store, you cannot simply change the value in the user-mgt.xml file. To change the admin password, you must use the Change Password option from the management console as explained here.

+</div>
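For reference, the `AddAdmin` and password behavior described in this hunk maps to the admin block of `user-mgt.xml`; a minimal sketch with the shipped defaults (shown here for illustration only) looks like this:

```xml
<AddAdmin>true</AddAdmin>
<AdminRole>admin</AdminRole>
<AdminUser>
    <UserName>admin</UserName>
    <!-- Written to a read-write user store only on first startup, and only when AddAdmin is true -->
    <Password>admin</Password>
</AdminUser>
```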
diff --git a/en/docs/administer/managing-users-and-roles/managing-user-stores/working-with-properties-of-user-stores.md b/en/docs/administer/managing-users-and-roles/managing-user-stores/working-with-properties-of-user-stores.md index 8c98c56c15..61d9688961 100644 --- a/en/docs/administer/managing-users-and-roles/managing-user-stores/working-with-properties-of-user-stores.md +++ b/en/docs/administer/managing-users-and-roles/managing-user-stores/working-with-properties-of-user-stores.md @@ -46,8 +46,10 @@ The following table provides descriptions of the key properties you use to confi UserSearchBase

DN of the context or object under which the user entries are stored in the user store. In this case, it is the "users" container. When the user store searches for users, it will start from this location of the directory.

-!!! info
+<div class="admonition info">
+<p class="admonition-title">Info</p>

Info

Different databases have different search bases.

+</div>
@@ -59,8 +61,10 @@ The following table provides descriptions of the key properties you use to confi UserNameAttribute

The attribute used for uniquely identifying a user entry. Users can be authenticated using their email address, UID, etc.

-!!! info
+<div class="admonition info">
+<p class="admonition-title">Info</p>

Info

The name of the attribute is considered as the username.

+</div>

For information on using an email address to authenticate users, click here.
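As a hedged illustration of the two properties above, an LDAP user store configuration typically carries entries such as the following (the DN and attribute name depend on your directory layout):

```xml
<!-- User searches start from this container -->
<Property name="UserSearchBase">ou=Users,dc=wso2,dc=org</Property>
<!-- The value of this attribute is treated as the username -->
<Property name="UserNameAttribute">uid</Property>
```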

@@ -205,8 +209,10 @@ The following table provides descriptions of the key properties you use to confi

StoreSaltedPassword

(JDBC) Indicates whether to salt the password.

-!!! tip
+<div class="admonition tip">
+<p class="admonition-title">Tip</p>

Tip

Make sure you secure the password with salt and key.

+</div>
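A sketch of the related JDBC user store properties, assuming a SHA-256 digest is in use (adjust to your password policy):

```xml
<Property name="PasswordDigest">SHA-256</Property>
<!-- Store a random salt alongside each hashed password -->
<Property name="StoreSaltedPassword">true</Property>
```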
diff --git a/en/docs/administer/multitenancy/adding-new-tenants.md b/en/docs/administer/multitenancy/adding-new-tenants.md index 42e725e3eb..2766defe96 100644 --- a/en/docs/administer/multitenancy/adding-new-tenants.md +++ b/en/docs/administer/multitenancy/adding-new-tenants.md @@ -51,8 +51,8 @@ You can invoke these operations using a SOAP client like SOAP UI as follows: api-manager.bat ``` - !!! tip - Get the list of available admin services + !!! tip + Get the list of available admin services If you want to discover the admin services that are exposed by your product: @@ -84,11 +84,11 @@ You can invoke these operations using a SOAP client like SOAP UI as follows: This assumes that you are running the SOAP UI client from the same machine as the product instance. Note that there are several operations shown in the SOAP UI after importing the WSDL file: ![]({{base_path}}/assets/attachments/126562777/126562782.png) - !!! warning - Before invoking an operation: + !!! warning + Before invoking an operation: - - Be sure to set the admin user's credentials for authorization in the SOAP UI. - - Note that it is **not recommended** to delete tenants. + - Be sure to set the admin user's credentials for authorization in the SOAP UI. + - Note that it is **not recommended** to delete tenants. 4. Click on the operation to open the request view. For example, to activate a tenant use the `activateTenant` operation. diff --git a/en/docs/api-analytics/samples/publishing-analytics-events-to-external-systems.md b/en/docs/api-analytics/samples/publishing-analytics-events-to-external-systems.md index 33c54c03d6..1e5ad2be9b 100644 --- a/en/docs/api-analytics/samples/publishing-analytics-events-to-external-systems.md +++ b/en/docs/api-analytics/samples/publishing-analytics-events-to-external-systems.md @@ -23,7 +23,7 @@ Follow the instructions below to create the custom event publisher. 2. Define the `wso2-nexus` repository in the `pom.xml` file. - ``` + ```xml wso2-nexus WSO2 internal Repository @@ -38,7 +38,7 @@ Follow the instructions below to create the custom event publisher. 3. Add the dependency in the `pom.xml` file. - ``` + ```xml org.wso2.am.analytics.publisher org.wso2.am.analytics.publisher.client @@ -86,15 +86,16 @@ Follow the instructions below to configure WSO2 API Gateway and Choreo Connect f ??? info "API Manager Gateway" Follow the instructions below to configure WSO2 API Gateway for the sample created above: + 1. Copy the JAR file to the `/repository/components/lib` directory. 2. Open the `/repository/conf/deployment.toml` file in a text editor and modify the `apim.analytics` section as follows: - ``` - [apim.analytics] - enable = true - properties."publisher.reporter.class" = "" - logger.reporter.level = "INFO" - ``` + ```toml + [apim.analytics] + enable = true + properties."publisher.reporter.class" = "" + logger.reporter.level = "INFO" + ``` 3. Open the `/repository/conf/log4j2.properties` file in a text editor and do the following modifications. @@ -113,10 +114,11 @@ Follow the instructions below to configure WSO2 API Gateway and Choreo Connect f ??? info "Choreo Connect" Follow the instructions below to configure Choreo Connect for the sample created above: + 1. Copy the JAR file to the `choreo-connect-1.0.0/docker-compose/resources/enforcer/dropins` directory. 2. 
Open the `choreo-connect-1.0.0/docker-compose/choreo-connect-with-apim/conf/config.toml` file in a text editor and modify the `analytics` section as follows: - ``` + ``` [analytics] enabled = true [analytics.enforcer] diff --git a/en/docs/assets/css/orange-palette.css b/en/docs/assets/css/orange-palette.css index fa61a012b0..a0a4906c23 100644 --- a/en/docs/assets/css/orange-palette.css +++ b/en/docs/assets/css/orange-palette.css @@ -8,7 +8,7 @@ --md-accent-fg-color--transparent: #526cfe1a; --md-accent-bg-color: #fff; --md-accent-bg-color--light: #ffffffb3; - --md-hjs-color: #00f; + --md-hjs-color: #444; } :root { --md-text-font: 'Helvetica Neue', Helvetica, Arial, sans-serif; diff --git a/en/docs/assets/css/theme.css b/en/docs/assets/css/theme.css index ff2aa0b240..4bf354623f 100644 --- a/en/docs/assets/css/theme.css +++ b/en/docs/assets/css/theme.css @@ -16,7 +16,7 @@ * under the License. */ - .md-tabs__link--active { +.md-tabs__link--active { opacity: 1 !important; } @@ -222,7 +222,7 @@ html .md-footer-meta.md-typeset a { .md-tabs>.md-grid, .md-main>.md-grid { max-width: none; - padding-left: 2.5rem; + padding-left: 1.5rem; padding-right: 2rem; } @@ -354,45 +354,6 @@ html .md-footer-meta.md-typeset a { } } -.text--replace { - overflow: hidden; - color: transparent; - text-indent: 100%; - white-space: nowrap -} - -.cd-top { - position: fixed; - bottom: 20px; - right: 20px; - display: inline-block; - height: 40px; - width: 40px; - box-shadow: 0 0 10px rgba(0, 0, 0, 0.05); - background: url(../lib/backtotop/img/cd-top-arrow.svg) no-repeat center 50%; - background-color: hsla(5, 76%, 62%, 0.8); -} - -.js .cd-top { - visibility: hidden; - opacity: 0; - transition: opacity .3s, visibility .3s, background-color .3s -} - -.js .cd-top--is-visible { - visibility: visible; - opacity: 1 -} - -.js .cd-top--fade-out { - opacity: .5 -} - -.js .cd-top:hover { - background-color: hsl(5, 76%, 62%); - opacity: 1 -} - .md-footer__title { font-size: .75rem; } @@ -422,8 +383,7 @@ html .md-footer-meta.md-typeset a { width: 100%; margin: 0 auto; display: flex; - justify-content: space-between; - align-items: center; + flex-wrap: wrap; } @media screen and (max-width: 767px) { @@ -433,7 +393,7 @@ html .md-footer-meta.md-typeset a { } .card { - height: 110px; + min-height: 110px; flex-basis: 0; flex-grow: 1; color: #404040; @@ -449,7 +409,6 @@ html .md-footer-meta.md-typeset a { position: relative; display: flex; justify-content: left; - align-items: center; flex-direction: row; cursor: pointer; transition: all 0.6s ease; @@ -459,6 +418,7 @@ html .md-footer-meta.md-typeset a { .card.img { flex-direction: column; height: 200px; + align-items: center; } .card:hover { @@ -473,8 +433,6 @@ html .md-footer-meta.md-typeset a { } .card-content { - justify-content: center; - height: 88px; display: flex; align-items: left; text-align: left; @@ -597,7 +555,8 @@ html .md-footer-meta.md-typeset a { } */ .md-typeset table:not([class]) td { - vertical-align: middle; + vertical-align: top; + padding: 0.6rem 0.8rem; } .mb-tabs__dropdown { @@ -677,7 +636,7 @@ html .md-footer-meta.md-typeset a { .md-header__inner { max-width: none; - padding-left: 3.5rem; + padding-left: 2.5rem; padding-right: 2rem; } @@ -1380,4 +1339,10 @@ html .md-typeset .superfences-tabs>label:hover { .md-main__inner { margin-top: 0; +} + +@media only screen and (min-width: 60em) { + .md-content { + margin-right: 12.1rem; + } } \ No newline at end of file diff --git a/en/docs/assets/js/theme.js b/en/docs/assets/js/theme.js index 8a2fbe22d2..bcdb053d1b 100644 --- 
a/en/docs/assets/js/theme.js +++ b/en/docs/assets/js/theme.js @@ -224,44 +224,6 @@ if (tocBtn) { }; } -/* - * TOC position highlight on scroll - */ -// var observeeList = document.querySelectorAll(".md-sidebar__inner > .md-nav--secondary .md-nav__link"); -// var listElems = document.querySelectorAll(".md-sidebar__inner > .md-nav--secondary > ul li"); -// var config = { attributes: true, childList: true, subtree: true }; - -// var callback = function(mutationsList, observer) { -// for(var mutation of mutationsList) { -// if (mutation.type == 'attributes') { -// mutation.target.parentNode.setAttribute(mutation.attributeName, -// mutation.target.getAttribute(mutation.attributeName)); -// scrollerPosition(mutation); -// } -// } -// }; - -var observer = new MutationObserver(callback); - -if (listElems.length > 0) { - listElems[0].classList.add('active'); -} - -for (var i = 0; i < observeeList.length; i++) { - var el = observeeList[i]; - - observer.observe(el, config); - - el.onclick = function(e) { - listElems.forEach(function(elm) { - if (elm.classList) { - elm.classList.remove('active'); - } - }); - - e.target.parentNode.classList.add('active'); - } -} function scrollerPosition(mutation) { var blurList = document.querySelectorAll(".md-sidebar__inner > .md-nav--secondary > ul li > .md-nav__link[data-md-state='blur']"); @@ -301,21 +263,6 @@ function setActive(parentNode, i) { setActive(parentNode.parentNode.parentNode.parentNode, ++i); } - -/* - * Handle edit icon on scroll - */ -var editIcon = document.getElementById('editIcon'); - -window.addEventListener('scroll', function() { - var scrollPosition = window.scrollY || document.documentElement.scrollTop; - if (scrollPosition >= 90) { - editIcon.classList.add('active'); - } else { - editIcon.classList.remove('active'); - } -}); - /* * Fixes the issue related to clicking on anchors and landing somewhere below it */ diff --git a/en/docs/consume/customizations/adding-internationalization.md b/en/docs/consume/customizations/adding-internationalization.md index 19341531ac..43cbba13a1 100644 --- a/en/docs/consume/customizations/adding-internationalization.md +++ b/en/docs/consume/customizations/adding-internationalization.md @@ -163,7 +163,7 @@ Follow the instructions below to change the direction of the UI: Add the following configuration to change the page direction to RTL (Right To Left). - !!! note + !!! note If you have already done customizations to the default theme, make sure to merge the following with the existing changes carefully. ```js diff --git a/en/docs/deploy-and-publish/deploy-on-gateway/api-gateway/passing-enduser-attributes-to-the-backend-via-api-gateway.md b/en/docs/deploy-and-publish/deploy-on-gateway/api-gateway/passing-enduser-attributes-to-the-backend-via-api-gateway.md index 853a3f6e4f..e5ddb5de54 100644 --- a/en/docs/deploy-and-publish/deploy-on-gateway/api-gateway/passing-enduser-attributes-to-the-backend-via-api-gateway.md +++ b/en/docs/deploy-and-publish/deploy-on-gateway/api-gateway/passing-enduser-attributes-to-the-backend-via-api-gateway.md @@ -156,7 +156,7 @@ Follow the instructions below if you want to pass additional attributes to the b generator_impl = "org.wso2.carbon.test.CustomTokenGenerator" ``` - !!! note + !!! note Note that `CustomTokenGenerator` is for opaque tokens only and public class `CustomGatewayJWTGenerator` is for JWT. 4. Set the `apim.jwt.enable` element to **true** in the `deployment.toml` file. 
diff --git a/en/docs/design/api-documentation/add-api-documentation.md b/en/docs/design/api-documentation/add-api-documentation.md index 5259cace0f..749fad5f03 100644 --- a/en/docs/design/api-documentation/add-api-documentation.md +++ b/en/docs/design/api-documentation/add-api-documentation.md @@ -202,10 +202,10 @@ Follow the instructions below to add documentation to an API: As a subscriber, you can read the documentation and learn about the API. - !!! note - For REST APIs, generated document will be listed as `Default` + !!! note + For REST APIs, generated document will be listed as `Default` - [![View API related documentation]({{base_path}}/assets/img/learn/view-docs-api.png)]({{base_path}}/assets/img/learn/view-docs-api.png) + [![View API related documentation]({{base_path}}/assets/img/learn/view-docs-api.png)]({{base_path}}/assets/img/learn/view-docs-api.png) You have created documentation using the API Publisher and viewed the documentation as a subscriber in the Developer Portal. diff --git a/en/docs/develop/streaming-apps/working-with-the-design-view.md b/en/docs/develop/streaming-apps/working-with-the-design-view.md index 57f8fd1a75..1ec572114f 100644 --- a/en/docs/develop/streaming-apps/working-with-the-design-view.md +++ b/en/docs/develop/streaming-apps/working-with-the-design-view.md @@ -248,7 +248,10 @@ to the grid of the design view when you create a Siddhi application.

To configure the sink, click the settings icon on the sink component you added to the grid.

- !!! info To access the form in which you can configure a sink, you must first connect the sink as the target object to a stream component.
+<div class="admonition info">
+<p class="admonition-title">Info</p>
+<p>To access the form in which you can configure a sink, you must first connect the sink as the target object to a stream component.</p>
+</div>
  • Sink Type : This specifies the transport via which the sink publishes processed events. The value should be entered in lower case (e.g., log ).
  • Map Type : This specifies the format in which you want to publish the events (e.g., passThrough ). The other parameters displayed for the map depends on the map type selected. If you want to add more configurations to the mapping, click Customized Options and set the required properties and key value pairs.
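A minimal Siddhi sink definition combining the two fields above (the stream name and log prefix are illustrative):

```sql
@sink(type = 'log', prefix = 'OutputEvents', @map(type = 'passThrough'))
define stream OutputStream (symbol string, price double);
```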
  • @@ -583,8 +586,10 @@ to the grid of the design view when you create a Siddhi application.

Incremental aggregation allows you to obtain aggregates in an incremental manner for a specified set of time periods. For more information, see Siddhi Query Guide - Incremental Aggregation.

- !!! tip
- Before you add an aggregation, make sure that you have already added the stream with the events to which the aggregation is applied is already defined.
+<div class="admonition tip">
+<p class="admonition-title">Tip</p>
+<p>Before you add an aggregation, make sure that the stream with the events to which the aggregation is applied is already defined.</p>
+</div>
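For illustration, an incremental aggregation over an assumed TradeStream might look like this (it keeps aggregates per second up to per year):

```sql
define stream TradeStream (symbol string, price double, volume long);

define aggregation TradeAggregation
from TradeStream
select symbol, avg(price) as avgPrice, sum(volume) as totalVolume
group by symbol
aggregate every sec ... year;
```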
    @@ -701,13 +706,15 @@ to the grid of the design view when you create a Siddhi application. Description
- !!! tip
+<div class="admonition tip">
+<p class="admonition-title">Tip</p>

    Tip

    Before you add a projection query:

    You need to add and configure the following:

    • The input stream with the events to be processed by the query.
    • The output stream to which the events processed by the query are directed.

+</div>

    This icon represents a query to project the events in an input stream to an output stream. This involves selecting the attributes to be included in the output, renaming attributes, introducing constant values, and using mathematical and/or logical expressions. For more information, see Siddhi Query Guide - Query Projection.
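For example, a projection query of this kind (stream and attribute names assumed) could be:

```sql
from InputStream
select name, price * 1.1 as adjustedPrice, 'USD' as currency
insert into OutputStream;
```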

    @@ -787,13 +794,15 @@ to the grid of the design view when you create a Siddhi application. Description
-!!! tip
+<div class="admonition tip">
+<p class="admonition-title">Tip</p>

    Tip

    Before you add a filter query:

    You need to add and configure the following:

    • The input stream with the events to be processed by the query.
    • The output stream to which the events processed by the query are directed.

+</div>

    A filter query filters information in an input stream based on a given condition. For more information, see Siddhi Query Guide - Filters.
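A minimal filter query under assumed stream names:

```sql
from InputStream[price > 100 and volume > 50]
select symbol, price
insert into HighValueStream;
```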

    @@ -804,9 +813,12 @@ to the grid of the design view when you create a Siddhi application.

    Once you connect the query to an input stream (source) and an output stream (target), you can configure it. To configure the filter query, click the settings icon on the filter query component you added to the grid, and update the following information.

    • By default, the Stream Handler check box is selected, and a stream handler of the filter type is available under it to indicate that the query is a filter. Expand this stream handler, and enter the condition based on which the information needs to be filtered.

-      !!! info
+<div class="admonition info">
+<p class="admonition-title">Info</p>

      Info

      A Siddhi application can have multiple stream handlers. To add another stream handler, click the + Stream Handler. Multiple functions, filters and windows can be defined within the same form as stream handlers.

+</div>
  • Projection : This section specifies the attributes to be included in the output. In the Select field, you can select All Attributes to select all the attributes of the events, or select User Defined Attributes to select specific attributes from the input stream. If you select User Defined Attributes , you can add attributes to be selected to be inserted into the output stream. Here, you can enter the names of specific attributes in the input stream, or enter expressions to convert input stream attribute values as required to generate output events. You can also specify the attribute(s) by which you want to group the output.
  • Output : This section specifies the action to be performed on the output event. The fields to be configured in this section are as follows:
      @@ -867,13 +879,15 @@ to the grid of the design view when you create a Siddhi application. Description
- !!! tip
+<div class="admonition tip">
+<p class="admonition-title">Tip</p>

      Tip

      Before you add a window query:

      You need to add and configure the following:

      • The input stream with the events to be processed by the query.
      • The output stream to which the events processed by the query are directed.

+</div>

      Window queries include a window to select a subset of events to be processed based on a specific criterion. For more information, see Siddhi Query Guide - (Defined) Window.
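For illustration, a window query that averages the last 10 events of an assumed InputStream:

```sql
from InputStream#window.length(10)
select symbol, avg(price) as avgPrice
insert into AveragedStream;
```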

      @@ -886,9 +900,11 @@ to the grid of the design view when you create a Siddhi application.
      • By default, the Stream Handler check box is selected, and a stream handler of the window type is available under it to indicate that the query is a window query. Expand this stream handler, and enter details to determine the window including the window type and the basis on which the subset of events considered by the window is determined (i.e., based on the window type selected).

-        !!! info
+<div class="admonition info">
+<p class="admonition-title">Info</p>

        Info

        A Siddhi application can have multiple stream handlers. To add another stream handler, click the + Stream Handler. Multiple functions, filters and windows can be defined within the same form as stream handlers.

+</div>
      • Projection : This section specifies the attributes to be included in the output. In the Select field, you can select All Attributes to select all the attributes of the events, or select User Defined Attributes to select specific attributes from the input stream. If you select User Defined Attributes , you can add attributes to be selected to be inserted into the output stream. Here, you can enter the names of specific attributes in the input stream, or enter expressions to convert input stream attribute values as required to generate output events. You can also specify the attribute(s) by which you want to group the output.
      • @@ -925,8 +941,10 @@ to the grid of the design view when you create a Siddhi application. Source
- !!! info
+<div class="admonition info">
+<p class="admonition-title">Info</p>

        Info

        A window query can have only one source at a given time.

+</div>
        • Streams
        • Tables
        • @@ -1012,8 +1030,10 @@ to the grid of the design view when you create a Siddhi application. Source
- !!! info
- A join query must always be connected to two sources, and at least one of them must be a defined stream/trigger/window.
+<div class="admonition info">
+<p class="admonition-title">Info</p>
+<p>A join query must always be connected to two sources, and at least one of them must be a defined stream/trigger/window.</p>
+</div>
          • Streams
          • Tables
          • @@ -1027,8 +1047,10 @@ to the grid of the design view when you create a Siddhi application. Target
- !!! info
- A join query must always be connected to a single target.
+<div class="admonition info">
+<p class="admonition-title">Info</p>
+<p>A join query must always be connected to a single target.</p>
+</div>
            • Streams
            • Tables
            • @@ -1060,13 +1082,15 @@ to the grid of the design view when you create a Siddhi application. Description
- !!! tip
+<div class="admonition tip">
+<p class="admonition-title">Tip</p>

              Tip

              Before you add a pattern query:

              You need to add and configure the following:

              • The input stream with the events to be processed by the query.
              • The output stream to which the events processed by the query are directed.

+</div>

              A pattern query detects patterns in events that arrive over time. For more information, see Siddhi Query Guide - Patterns.
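A sketch of such a pattern, assuming a TemperatureStream and a five-degree rise within ten minutes:

```sql
from every e1=TemperatureStream -> e2=TemperatureStream[e2.temp > e1.temp + 5] within 10 min
select e1.temp as initialTemp, e2.temp as peakTemp
insert into AlertStream;
```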

              @@ -1157,13 +1181,15 @@ to the grid of the design view when you create a Siddhi application. Description
- !!! tip
+<div class="admonition tip">
+<p class="admonition-title">Tip</p>

              Tip

              Before you add a sequence query:

              You need to add and configure the following:

              • The input stream with the events to be processed by the query.
              • The output stream to which the events processed by the query are directed.

+</div>

              A sequence query detects sequences in event occurrences over time. For more information, see Siddhi Query Guide - Sequence.
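For example, a sequence query over an assumed StockStream (two consecutive events with a rising price):

```sql
from every e1=StockStream, e2=StockStream[e2.price > e1.price]
select e1.price as firstPrice, e2.price as secondPrice
insert into RisingPriceStream;
```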

              @@ -1253,9 +1279,11 @@ to the grid of the design view when you create a Siddhi application. Description
- !!! tip
+<div class="admonition tip">
+<p class="admonition-title">Tip</p>

              Tip

              Before you add a partition:

              You need to add the stream to be partitioned.

+</div>

              Partitions divide streams and queries into isolated groups in order to process them in parallel and in isolation. For more information, see Siddhi Query Guide - Partition.
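For illustration, a partition that processes an assumed StockStream in isolated groups per symbol:

```sql
partition with (symbol of StockStream)
begin
    from StockStream#window.length(5)
    select symbol, avg(price) as avgPrice
    insert into AveragePriceStream;
end;
```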

              diff --git a/en/docs/includes/deploy/cc-configuration-file.md b/en/docs/includes/deploy/cc-configuration-file.md index ba47d4eb33..1b25e0a7c1 100644 --- a/en/docs/includes/deploy/cc-configuration-file.md +++ b/en/docs/includes/deploy/cc-configuration-file.md @@ -1,12 +1,12 @@ Open the Choreo Connect configuration file according to the deployment type you are using. - ??? abstract "Click here to see the configuration file location for your Choreo Connect deployment." - Navigate to the correct folder path and open the `config.toml` or `config-toml-configmap.yaml` file based on your Choreo Connect deployment. +??? abstract "Click here to see the configuration file location for your Choreo Connect deployment." + Navigate to the correct folder path and open the `config.toml` or `config-toml-configmap.yaml` file based on your Choreo Connect deployment. - | **Deployment** | **Mode**| **File name** | **Directory** | - |----------------|---------|---------------|---------------| - | Docker Compose |[Choreo Connect as a Standalone Gateway]({{base_path}}/deploy-and-publish/deploy-on-gateway/choreo-connect/concepts/as-a-standalone-gateway/)| `config.toml` | `/docker-compose/choreo-connect/conf/` | - | Docker Compose |[Choreo Connect with WSO2 API Manager as a Control Plane]({{base_path}}/deploy-and-publish/deploy-on-gateway/choreo-connect/concepts/apim-as-control-plane/) | `config.toml` | `/docker-compose/choreo-connect-with-apim/conf/` | - | Kubernetes |[Choreo Connect as a Standalone Gateway]({{base_path}}/deploy-and-publish/deploy-on-gateway/choreo-connect/concepts/as-a-standalone-gateway/)| `config-toml-configmap.yaml` | `/k8s-artifacts/choreo-connect/` | - | Kubernetes |[Choreo Connect with WSO2 API Manager as a Control Plane]({{base_path}}/deploy-and-publish/deploy-on-gateway/choreo-connect/concepts/apim-as-control-plane/)| `config-toml-configmap.yaml` | `/k8s-artifacts/choreo-connect-with-apim/` | + | **Deployment** | **Mode**| **File name** | **Directory** | + |----------------|---------|---------------|---------------| + | Docker Compose |[Choreo Connect as a Standalone Gateway]({{base_path}}/deploy-and-publish/deploy-on-gateway/choreo-connect/concepts/as-a-standalone-gateway/)| `config.toml` | `/docker-compose/choreo-connect/conf/` | + | Docker Compose |[Choreo Connect with WSO2 API Manager as a Control Plane]({{base_path}}/deploy-and-publish/deploy-on-gateway/choreo-connect/concepts/apim-as-control-plane/) | `config.toml` | `/docker-compose/choreo-connect-with-apim/conf/` | + | Kubernetes |[Choreo Connect as a Standalone Gateway]({{base_path}}/deploy-and-publish/deploy-on-gateway/choreo-connect/concepts/as-a-standalone-gateway/)| `config-toml-configmap.yaml` | `/k8s-artifacts/choreo-connect/` | + | Kubernetes |[Choreo Connect with WSO2 API Manager as a Control Plane]({{base_path}}/deploy-and-publish/deploy-on-gateway/choreo-connect/concepts/apim-as-control-plane/)| `config-toml-configmap.yaml` | `/k8s-artifacts/choreo-connect-with-apim/` | diff --git a/en/docs/index.md b/en/docs/index.md index f49de8cf70..1d272fd87e 100644 --- a/en/docs/index.md +++ b/en/docs/index.md @@ -25,6 +25,24 @@ template: templates/single-column.html -webkit-font-feature-settings: 'liga'; -webkit-font-smoothing: antialiased; } + + @media (max-width: 1386px) { + .md-main .md-sidebar.md-sidebar--primary { + width: 0; + } + } + + @media (max-width: 1219px) { + .md-content, .md-nav { + margin-top: 0; + } + .md-container { + margin-top: 2.4rem; + } + .md-main__inner { + padding-top: 1.5rem; + } + }
              diff --git a/en/docs/install-and-setup/install/admin-product-startup-options.md b/en/docs/install-and-setup/install/admin-product-startup-options.md index ca24358164..caec22d1d6 100644 --- a/en/docs/install-and-setup/install/admin-product-startup-options.md +++ b/en/docs/install-and-setup/install/admin-product-startup-options.md @@ -54,13 +54,15 @@ Listed below are some system properties that can be used when starting the serve -DworkerNode

              Starts the product as a worker node, which means the front-end features of your product will not be enabled.

-!!! note
+<div class="admonition note">
+<p class="admonition-title">Note</p>

              Note

              Note that from Carbon 4.4.1 onwards, you can also start the worker profile by setting the following system property to 'true' in the product startup script before the script is executed.

              -DworkerNode=true
+</div>
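For example, on Linux the property can be passed at startup as follows (the script name varies by product):

```bash
# Start the server in the worker profile
sh api-manager.sh -DworkerNode=true
```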
              diff --git a/en/docs/install-and-setup/setup/advance-configurations/configuring-the-crypto-provider.md b/en/docs/install-and-setup/setup/advance-configurations/configuring-the-crypto-provider.md index 154d3b1b9c..9c94726144 100644 --- a/en/docs/install-and-setup/setup/advance-configurations/configuring-the-crypto-provider.md +++ b/en/docs/install-and-setup/setup/advance-configurations/configuring-the-crypto-provider.md @@ -16,62 +16,72 @@ APIM supports the configuration of crypto provider to either Bouncy Castle (defa 1. Run the script fips.sh or fips.bat in the /bin directory before starting the server. - ``` java tab="Linux/Mac OS" - cd /bin/ - sh fips.sh - ``` - - ``` java tab="Windows" - cd \bin\ - fips.bat --run - ``` + === "Linux/Mac OS" + ``` java + cd /bin/ + sh fips.sh + ``` + + === "Windows" + ``` java + cd \bin\ + fips.bat --run + ``` 2. Verify whether the required changes are done by running the following command. - ``` java tab="Linux/Mac OS" - cd /bin/ - sh fips.sh VERIFY - ``` + === "Linux/Mac OS" + ``` java + cd /bin/ + sh fips.sh VERIFY + ``` - ``` java tab="Windows" - cd \bin\ - fips.bat --run VERIFY - ``` + === "Windows" + ``` java + cd \bin\ + fips.bat --run VERIFY + ``` 3. Start the APIM server with the following system property. - ``` java tab="Linux/Mac OS" - cd /bin/ - sh api-manager.sh -Dsecurity.jce.provider=BCFIPS - ``` + === "Linux/Mac OS" + ``` java + cd /bin/ + sh api-manager.sh -Dsecurity.jce.provider=BCFIPS + ``` - ``` java tab="Windows" - cd \bin\ - api-manager.bat --run -Dsecurity.jce.provider=BCFIPS - ``` + === "Windows" + ``` java + cd \bin\ + api-manager.bat --run -Dsecurity.jce.provider=BCFIPS + ``` ### Change the crypto provider to BC (Bouncy Castle) 1. Run the following command before starting the server. - ``` java tab="Linux/Mac OS" - cd /bin/ - sh fips.sh DISABLE - ``` + === "Linux/Mac OS" + ``` java + cd /bin/ + sh fips.sh DISABLE + ``` - ``` java tab="Windows" - cd \bin\ - fips.bat --run DISABLE - ``` + === "Windows" + ``` java + cd \bin\ + fips.bat --run DISABLE + ``` 2. Start the APIM server as usual. - ``` java tab="Linux/Mac OS" - cd /bin/ - sh api-manager.sh - ``` - - ``` java tab="Windows" - cd \bin\ - api-manager.bat --run - ``` + === "Linux/Mac OS" + ``` java + cd /bin/ + sh api-manager.sh + ``` + + === "Windows" + ``` java + cd \bin\ + api-manager.bat --run + ``` diff --git a/en/docs/install-and-setup/setup/api-controller/managing-choreo-connect/managing-choreo-connect-with-ctl.md b/en/docs/install-and-setup/setup/api-controller/managing-choreo-connect/managing-choreo-connect-with-ctl.md index 23881cbe21..82975116be 100644 --- a/en/docs/install-and-setup/setup/api-controller/managing-choreo-connect/managing-choreo-connect-with-ctl.md +++ b/en/docs/install-and-setup/setup/api-controller/managing-choreo-connect/managing-choreo-connect-with-ctl.md @@ -174,6 +174,7 @@ This command can be used to list the deployed APIs on a given Choreo Connect ada ```bash apictl mg get apis -e ``` + !!! tip By default, the number of APIs listed will be limited to 25. To increase or decrease the limit set the flag `--limit` or its shorthand flag `-l`. 
For an example, ```bash diff --git a/en/docs/install-and-setup/setup/mi-setup/security/gdpr_ei.md b/en/docs/install-and-setup/setup/mi-setup/security/gdpr_ei.md index ac591dba38..bc6b0f501c 100644 --- a/en/docs/install-and-setup/setup/mi-setup/security/gdpr_ei.md +++ b/en/docs/install-and-setup/setup/mi-setup/security/gdpr_ei.md @@ -119,6 +119,7 @@ INFO - LogMediator USER_NAME = Sam ``` Let's look at how to anonymize the username value in log files. + 1. [Download](https://github.com/wso2-docs/WSO2_EI/raw/master/Forget-Me-Tool/org.wso2.carbon.privacy.forgetme.tool-1.3.1.zip) the **Forget-Me** tool and extract the contents. The location of the extracted folder will be referred to as `TOOL_HOME` from this point onwards. diff --git a/en/docs/install-and-setup/setup/mi-setup/transport_configurations/multi-https-transport.md b/en/docs/install-and-setup/setup/mi-setup/transport_configurations/multi-https-transport.md index 114946a107..6cdcbfb1f0 100644 --- a/en/docs/install-and-setup/setup/mi-setup/transport_configurations/multi-https-transport.md +++ b/en/docs/install-and-setup/setup/mi-setup/transport_configurations/multi-https-transport.md @@ -91,7 +91,7 @@ Multi-HTTPS transport receiver) as a custom transport receiver. ssl_profile.file_path= "conf/sslprofiles/listenerprofiles.xml" ssl_profile.read_interval = 600000 - ``` + ``` 3. Create the `listenerprofiles.xml` file in the `MI_HOME/conf/sslprofiles` directory and add the following configurations: diff --git a/en/docs/install-and-setup/setup/security/logins-and-passwords/encrypting-passwords-with-cipher-tool.md b/en/docs/install-and-setup/setup/security/logins-and-passwords/encrypting-passwords-with-cipher-tool.md index d479931c0d..bcb12490a9 100644 --- a/en/docs/install-and-setup/setup/security/logins-and-passwords/encrypting-passwords-with-cipher-tool.md +++ b/en/docs/install-and-setup/setup/security/logins-and-passwords/encrypting-passwords-with-cipher-tool.md @@ -5,8 +5,8 @@ The instructions on this page explain how plain text passwords in configuration In any WSO2 product that is based on Carbon 4.4.0 or a later version, the Cipher Tool feature will be installed by default. You can use this tool to easily encrypt passwords or other elements in configuration files. !!! note -- If you are a developer who is building a Carbon product, see the topic on enabling [Cipher Tool for password encryption](https://docs.wso2.com/display/Carbon4410/Enabling+Cipher+Tool+for+Password+Encryption) for instructions on how to include the Cipher Tool as a feature in your product build. -- The default keystore that is shipped with your WSO2 product (i.e. `wso2carbon.jks` ) is used for password encryption by default. See this [link](https://docs.wso2.com/display/ADMIN44x/Creating+New+Keystores) for details on how to set up and configure new keystores for encrypting plain text passwords. + - If you are a developer who is building a Carbon product, see the topic on enabling [Cipher Tool for password encryption](https://docs.wso2.com/display/Carbon4410/Enabling+Cipher+Tool+for+Password+Encryption) for instructions on how to include the Cipher Tool as a feature in your product build. + - The default keystore that is shipped with your WSO2 product (i.e. `wso2carbon.jks` ) is used for password encryption by default. See this [link](https://docs.wso2.com/display/ADMIN44x/Creating+New+Keystores) for details on how to set up and configure new keystores for encrypting plain text passwords. Follow the topics given below for instructions. 
@@ -28,8 +28,8 @@ Follow the steps given below to have passwords encrypted using the automated pro 1. The first step is to update the `cipher-tool.properties` file and the `cipher-text.properties` file with information of the passwords that you want to encrypt. - !!! info - By default, the `cipher-tool.properties` and `cipher-text.properties` files that are shipped with your product will contain information on the most common passwords that require encryption. If a required password is missing in the default files, you can **add them manually** . + !!! info + By default, the `cipher-tool.properties` and `cipher-text.properties` files that are shipped with your product will contain information on the most common passwords that require encryption. If a required password is missing in the default files, you can **add them manually** . Follow the steps given below. @@ -40,14 +40,14 @@ Follow the steps given below to have passwords encrypted using the automated pro =//, ``` - !!! info - **Important!** + !!! info + **Important!** - - The `` should be the same value that is hard-coded in the relevant Carbon component. - - The `` specifies the path to the XML file that contains the password. This can be the relative file path, or the absolute file path (starting from `` ). + - The `` should be the same value that is hard-coded in the relevant Carbon component. + - The `` specifies the path to the XML file that contains the password. This can be the relative file path, or the absolute file path (starting from `` ). - - The `` specifies the XPath to the XML **element** / **attribute** / **tag** that should be encrypted. See the examples given below. - - The flag that follows the XPath should be set to 'false' if you are encrypting the value of an **XML element,** or the value of an **XML attribute's tag.** The flag should be 'true' if you are encrypting the **tag** of an **XML attribute** . See the examples given below. + - The `` specifies the XPath to the XML **element** / **attribute** / **tag** that should be encrypted. See the examples given below. + - The flag that follows the XPath should be set to 'false' if you are encrypting the value of an **XML element,** or the value of an **XML attribute's tag.** The flag should be 'true' if you are encrypting the **tag** of an **XML attribute** . See the examples given below. - When using Secure Vault, as you use the password aliases in the `/repository/conf/carbon.xml` file, make sure to define these aliases in the following files, which are in the `/repository/conf/security` directory as follows: @@ -87,8 +87,7 @@ Follow the steps given below to have passwords encrypted using the automated pro ``` - ** - Example 1:** Consider the admin user's password in the `user-mgt.xml` file shown below. + **Example 1:** Consider the admin user's password in the `user-mgt.xml` file shown below. ``` java @@ -133,8 +132,8 @@ Follow the steps given below to have passwords encrypted using the automated pro UserManager.Configuration.Property.ConnectionPassword=repository/conf/user-mgt.xml//UserManager/Realm/UserStoreManager/Property[@name='ConnectionPassword'],false ``` - !!! note - If you are trying the above example, be sure that only the relevant user store manager is enabled in the `user-mgt.xml` file. + !!! note + If you are trying the above example, be sure that only the relevant user store manager is enabled in the `user-mgt.xml` file. **Example 3:** Consider the keystore password specified in the `catalina-server.xml` file shown below. 
@@ -188,8 +187,8 @@ Follow the steps given below to have passwords encrypted using the automated pro 4. The following message will be prompted:  "\[Please Enter Primary KeyStore Password of Carbon Server :\]". Enter the keystore password (which is "wso2carbon" for the default [keystore](https://docs.wso2.com/display/ADMIN44x/Using+Asymmetric+Encryption) ) and proceed. If the script execution is successful, you will see the following message: "Secret Configurations are written to the property file successfully". - !!! note - If you are using the cipher tool for the first time, the - `Dconfigure` command will first initialize the tool for your product. The tool will then start encrypting the plain text passwords you specified in the `cipher-text.properties` file. + !!! note + If you are using the cipher tool for the first time, the - `Dconfigure` command will first initialize the tool for your product. The tool will then start encrypting the plain text passwords you specified in the `cipher-text.properties` file. Shown below is an example of an alias and the corresponding plaintext password (in square brackets) in the `cipher-text.properties` file: @@ -252,8 +251,8 @@ Since we cannot use the [automated process](#EncryptingPasswordswithCipherTool-a Enter Plain Text Value :admin ``` - !!! info - Note that in certain configuration files, the password that requires encryption may not be specified as a single value as it is in the log4j.properties file. For example, the jndi.properties file used in WSO2 ESB contains the password in the connection URL. In such cases, you need to encrypt the entire connection URL as explained [here](#EncryptingPasswordswithCipherTool-encrypting_jndi) . + !!! info + Note that in certain configuration files, the password that requires encryption may not be specified as a single value as it is in the log4j.properties file. For example, the jndi.properties file used in WSO2 ESB contains the password in the connection URL. In such cases, you need to encrypt the entire connection URL as explained [here](#EncryptingPasswordswithCipherTool-encrypting_jndi) . 7. You will receive the encrypted value. For example: @@ -285,8 +284,8 @@ Since we cannot use the [automated process](#EncryptingPasswordswithCipherTool-a 11. If you are encrypting a password in the `/repository/conf/identity/EndpointConfig.properties` file, you need to add the encrypted values of the keys in the `EndpointConfig.properties` file itself. - !!! note - This step is **only applicable** if you are encrypting a password in the `EndpointConfig.properties` file. + !!! note + This step is **only applicable** if you are encrypting a password in the `EndpointConfig.properties` file. For example, if you have encrypted values for the following keys. diff --git a/en/docs/install-and-setup/setup/security/user-account-management.md b/en/docs/install-and-setup/setup/security/user-account-management.md index 2b50cb2e1e..b5609bb92f 100644 --- a/en/docs/install-and-setup/setup/security/user-account-management.md +++ b/en/docs/install-and-setup/setup/security/user-account-management.md @@ -206,6 +206,7 @@ Follow the instructions below to disable anonymous access to the Developer Porta ```toml [apim.devportal] enable_anonymous_mode=false + ``` 5. Restart the server or wait for 15 mins until the registry cache expires. 
diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/admin-searching-the-registry.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/admin-searching-the-registry.md index 9d7d8e1989..d811568f9c 100644 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/admin-searching-the-registry.md +++ b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/admin-searching-the-registry.md @@ -10,9 +10,9 @@ The management console provides facility to search all resources in the registry - Resource name - **Created/updated date range** - The date when a resource was created/updated - !!! info - Created/updated dates must be in MM/DD/YYYY format. Alternatively, you can pick it from the calendar interface provided. - ![]({{base_path}}/assets/attachments/126562657/126562658.png) + !!! info + Created/updated dates must be in MM/DD/YYYY format. Alternatively, you can pick it from the calendar interface provided. + ![]({{base_path}}/assets/attachments/126562657/126562658.png) - **Created/updated author** - The person who created/updated the resource diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/editing-collections-using-the-entries-panel.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/editing-collections-using-the-entries-panel.md index 0261b7b2ce..5b47af664e 100644 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/editing-collections-using-the-entries-panel.md +++ b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/editing-collections-using-the-entries-panel.md @@ -11,11 +11,11 @@ If you select a collection, in its detailed view, you can see the Entries panel - The Info link specifying media type, feed, rating . - The Actions link to rename, move, copy or delete a resource/collection - !!! info - You cannot move/copy resources and collections across registry mounts if they have dependencies or associations. You can only move/copy within a mount. For more information on mounts, read WSO2 Governance Registry documentation: [Remote Instance and Mount Configuration Details](http://docs.wso2.org/display/Governance460/Remote+Instance+and+Mount+Configuration+Details) . + !!! info + You cannot move/copy resources and collections across registry mounts if they have dependencies or associations. You can only move/copy within a mount. For more information on mounts, read WSO2 Governance Registry documentation: [Remote Instance and Mount Configuration Details](http://docs.wso2.org/display/Governance460/Remote+Instance+and+Mount+Configuration+Details) . - !!! info - These options are not available for all resources/collections. + !!! info + These options are not available for all resources/collections. 
diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/role-permissions.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/role-permissions.md index 10db3ccd9f..12c442f113 100644 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/role-permissions.md +++ b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/managing-the-registry/role-permissions.md @@ -7,8 +7,8 @@ When you select a collection in the registry, the **Permissions** panel opens wi 1. In the **New Role Permissions** section, select a role from the drop-down list. This list is populated by all user roles configured in the system. ![]({{base_path}}/assets/attachments/126562645/126562646.png) - !!! info - The `wso2.anonymous.role` is a special role that represents a user who is not logged in to the management console. Granting `Read` access to this role means that you do not require authentication to access resources using the respective Permalinks. + !!! info + The `wso2.anonymous.role` is a special role that represents a user who is not logged in to the management console. Granting `Read` access to this role means that you do not require authentication to access resources using the respective Permalinks. The **`everyone`** role is a special role that represents a user who is logged into the management console. Granting `Read` access to this role means that any user who has logged into the management console with sufficient permissions to access the Resource Browser can read the respective resource. Granting `Write` or `Delete` access means that any user who is logged in to the management console with sufficient permissions to access the Resource Browser can make changes to the respective resource. @@ -23,8 +23,8 @@ When you select a collection in the registry, the **Permissions** panel opens wi 3. Select whether to allow the action or deny and click **Add Permission** . For example ![]({{base_path}}/assets/attachments/126562645/126562647.png) - !!! info -`Deny` permissions have higher priority over `Allow.` That is, a `Deny` permission always overrides an `Allow` permission assigned to a role. + !!! info + `Deny` permissions have higher priority over `Allow.` That is, a `Deny` permission always overrides an `Allow` permission assigned to a role. `Deny` permission must be given at the collection level. For example, to deny the write/delete action on a given policy file, set Write/Delete actions for the role to `Deny` in `/trunk/policies` . If you set the `Deny` permission beyond the collection level (e.g., / or /\_system etc.) it will not be applied for the user's role. 
diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-a-remote-registry.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-a-remote-registry.md index b6393afeaf..12c2fdbfbd 100644 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-a-remote-registry.md +++ b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-a-remote-registry.md @@ -39,7 +39,7 @@ Database configurations are stored in $CARBON\_HOME/repository/conf/datasources/ 2. Navigate to $G-REG\_HOME/repository/conf/datasources/master-datasources.xml file where G-REG\_HOME is the Governance Registry distribution home. Replace the existing WSO2\_CARBON\_DB datasource with the following configuration: -``` html/xml +``` xml WSO2_CARBON_DB The datasource used for registry and user manager @@ -71,7 +71,7 @@ Change the values of the following elements according to your environment. 3. Navigate to $G-REG\_HOME /repository/conf/axis2/axis2.xml file in all Carbon-based product instances to be connected with the remote registry, and enable tribes clustering with the following configuration. -``` html/xml +``` xml ``` @@ -86,9 +86,9 @@ The above configuration is required only when caching is enabled for the Carbon ``` !!! warning -Deprecation of -DSetup + Deprecation of -DSetup -When proper Database Administrative (DBA) practices are followed, the systems (except analytics products) are not granted DDL (Data Definition) rights on the schema. Therefore, maintaining the `-DSetup` option is redundant and typically unusable. **As a result, from [January 2018 onwards](https://wso2.com/products/carbon/release-matrix/) WSO2 has deprecated the `-DSetup` option** . Note that the proper practice is for the DBA to run the DDL statements manually so that the DBA can examine and optimize any DDL statement (if necessary) based on the DBA best practices that are in place within the organization. + When proper Database Administrative (DBA) practices are followed, the systems (except analytics products) are not granted DDL (Data Definition) rights on the schema. Therefore, maintaining the `-DSetup` option is redundant and typically unusable. **As a result, from [January 2018 onwards](https://wso2.com/products/carbon/release-matrix/) WSO2 has deprecated the `-DSetup` option** . Note that the proper practice is for the DBA to run the DDL statements manually so that the DBA can examine and optimize any DDL statement (if necessary) based on the DBA best practices that are in place within the organization. The Governance Registry server is now running with all required user manager and registry tables for the server also created in ‘registrydb’ database. @@ -105,7 +105,7 @@ Now that the shared registry is configured, let's take a look at the configurati 3. Configure $CARBON \_HOME/repository/conf/datasource/master-datasources.xml where CARBON \_HOME is the distribution home of any WSO2 Carbon-based product you downloaded in step 1. Then, add the following datasource for the registry space. -``` html/xml +``` xml WSO2_CARBON_DB_GREG The datasource used for registry and user manager @@ -136,7 +136,7 @@ Change the values of the relevant elements accordingly. 
** Add a new db config to the datasource configuration done in step 3 above. For example, -``` html/xml +``` xml jdbc/WSO2CarbonDB_GREG @@ -144,7 +144,7 @@ Add a new db config to the datasource configuration done in step 3 above. For ex Specify the remote Governance Registry instance with the following configuration: -``` html/xml +``` xml instanceid remote_registry @@ -164,7 +164,7 @@ Change the values of the following elements according to your environment. Define the registry partitions using the remote Governance Registry instance. In this deployment strategy, we are mounting the config and governance partitions of the Carbon-based product instances to the remote Governance Registry instance. This is graphically represented in Figure 2 at the beginning. -``` html/xml +``` xml instanceid /_system/config @@ -186,7 +186,7 @@ Define the registry partitions using the remote Governance Registry instance. In 1. Navigate to $CARBON \_HOME/repository/conf/axis2/axis2.xml file where CARBON \_HOME is the distribution home of any WSO2 Carbon-based products to be connected with the remote registry. Enable carbon clustering by copying the following configuration to all Carbon server instances: -``` html/xml +``` xml ``` diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-separate-nodes.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-separate-nodes.md index 03b8d9eeab..b702aa96e1 100644 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-separate-nodes.md +++ b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-config-and-governance-partitions-in-separate-nodes.md @@ -40,7 +40,7 @@ Database configurations are stored in $CARBON\_HOME/repository/conf/datasources/ 2. First, navigate to $G-REG\_HOME/repository/conf/datasources/master-datasources.xml file where G-REG\_HOME is the distribution home of Governance Registry of G-Reg 1. Replace the existing WSO2\_CARBON\_DB datasource with the following configuration: -``` html/xml +``` xml WSO2_CARBON_DB The datasource used for registry and user manager @@ -72,7 +72,7 @@ Change the values of the following elements according to your environment. 3. Similarly, replace the existing WSO2\_CARBON\_DB datasource in G-Reg 2 with the following : -``` html/xml +``` xml WSO2_CARBON_DB The datasource used for registry and user manager @@ -97,7 +97,7 @@ Change the values of the following elements according to your environment. 4. Repeat the same for G-Reg 3 as follows. -``` html/xml +``` xml WSO2_CARBON_DB The datasource used for registry and user manager @@ -122,7 +122,7 @@ Change the values of the following elements according to your environment. 5. Navigate to $G-REG\_HOME /repository/conf/axis2/axis2.xml file in all instances and enable clustering with the following configuration. -``` html/xml +``` xml ``` @@ -137,9 +137,9 @@ The above configuration is required only when caching is enabled for the Carbon ``` !!! warning -Deprecation of -DSetup + Deprecation of -DSetup -When proper Database Administrative (DBA) practices are followed, the systems (except analytics products) are not granted DDL (Data Definition) rights on the schema. Therefore, maintaining the `-DSetup` option is redundant and typically unusable. 
**As a result, from [January 2018 onwards](https://wso2.com/products/carbon/release-matrix/) WSO2 has deprecated the `-DSetup` option** . Note that the proper practice is for the DBA to run the DDL statements manually so that the DBA can examine and optimize any DDL statement (if necessary) based on the DBA best practices that are in place within the organization. + When proper Database Administrative (DBA) practices are followed, the systems (except analytics products) are not granted DDL (Data Definition) rights on the schema. Therefore, maintaining the `-DSetup` option is redundant and typically unusable. **As a result, from [January 2018 onwards](https://wso2.com/products/carbon/release-matrix/) WSO2 has deprecated the `-DSetup` option** . Note that the proper practice is for the DBA to run the DDL statements manually so that the DBA can examine and optimize any DDL statement (if necessary) based on the DBA best practices that are in place within the organization. The Governance Registry server instances are now running with all required user manager and registry tables for the server created in ‘registrydb’, ‘registrydb1’ and ‘registrydb2’ databases. @@ -154,7 +154,7 @@ Include the following configurations in the master node of Foo product cluster. 1. Configure $CARBON \_HOME/repository/conf/datasource/master-datasources.xml where CARBON \_HOME is the distribution home of any WSO2 Carbon-based product. Then, add the following datasource for the registry space. -``` html/xml +``` xml WSO2_CARBON_DB_GREG The datasource used for registry and user manager @@ -204,7 +204,7 @@ Change the values of the relevant elements according to your environment. Add a new db config to the datasource configuration done in step 1 above. For example, -``` html/xml +``` xml jdbc/WSO2CarbonDB_GREG @@ -215,7 +215,7 @@ Add a new db config to the datasource configuration done in step 1 above. For ex Specify the remote Governance Registry instance with the following configuration: -``` html/xml +``` xml governanceRegistryInstance governance_registry @@ -241,14 +241,14 @@ Change the values of the following elements according to your environment. - <enableCache> : Whether caching is enabled on the Carbon server instance. !!! info -Note + Note -When adding the corresponding configuration to the registry.xml file of a slave node, set <readOnly>true</readOnly>. This is the only configuration change. + When adding the corresponding configuration to the registry.xml file of a slave node, set <readOnly>true</readOnly>. This is the only configuration change. Define the registry partitions using the remote Governance Registry instance. -``` html/xml +``` xml configRegistryInstance /_system/config @@ -269,15 +269,9 @@ Define the registry partitions using the remote Governance Registry instance. ***Configuring axis2.xml file*** 3. Navigate to $CARBON \_HOME/repository/conf/axis2/axis2.xml file and enable carbon clustering by copying the following configuration to all Carbon server instances: - -``` html/xml - -``` - -!!! info -Note - - + ``` xml + + ``` 4. Copy 'MySQL JDBC connector jar' ( [http://dev.mysql.com/downloads/connector/j/5.1.html)](http://dev.mysql.com/downloads/connector/j/5.1.html) to $ G-REG\_HOME/repository/components/lib in Carbon server instances of Foo product cluster. ### Configuring the bar product cluster @@ -290,7 +284,7 @@ Include the following configurations in the master node of Foo product cluster. 1. 
Configure $CARBON \_HOME/repository/conf/datasource/master-datasources.xml where CARBON \_HOME is the distribution home of any WSO2 Carbon-based product. Then, add the following datasource for the registry space. -``` html/xml +``` xml WSO2_CARBON_DB_GREG The datasource used for registry and user manager @@ -340,7 +334,7 @@ Change the values of the relevant elements according to your environment. ****** Add a new db config to the datasource configuration done in step 1 above. For example, -``` html/xml +``` xml jdbc/WSO2CarbonDB_GREG @@ -351,7 +345,7 @@ Add a new db config to the datasource configuration done in step 1 above. For ex Specify the remote Governance Registry instance with the following configuration: -``` html/xml +``` xml governanceRegistryInstance governance_registry @@ -377,14 +371,14 @@ Change the values of the following elements according to your environment. - <enableCache> : Whether caching is enabled on the Carbon server instance. !!! info -Note + Note -When adding the corresponding configuration to the registry.xml file of a slave node, set <readOnly>true</readOnly>. This is the only configuration change. + When adding the corresponding configuration to the registry.xml file of a slave node, set <readOnly>true</readOnly>. This is the only configuration change. Define the registry partitions using the remote Governance Registry instance. -``` html/xml +``` xml configRegistryInstance /_system/config @@ -405,15 +399,9 @@ Define the registry partitions using the remote Governance Registry instance. ***Configuring axis2.xml file*** 3. Navigate to $CARBON \_HOME/repository/conf/axis2/axis2.xml file and enable carbon clustering by copying the following configuration to all Carbon server instances: - -``` html/xml +``` xml ``` - -!!! info -Note - - 4. Copy 'MySQL JDBC connector jar' ( [http://dev.mysql.com/downloads/connector/j/5.1.html)](http://dev.mysql.com/downloads/connector/j/5.1.html) to $ G-REG\_HOME/repository/components/lib in Carbon server instances of Bar product cluster. 5. Start both clusters and note the log entries that indicate successful mounting to the remote Governance Registry nodes. diff --git a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-governance-partition-in-a-remote-registry.md b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-governance-partition-in-a-remote-registry.md index a7e85b9326..ef869b415f 100644 --- a/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-governance-partition-in-a-remote-registry.md +++ b/en/docs/install-and-setup/setup/setting-up-databases/working-with-the-resgistry/using-remote-registry/admin-governance-partition-in-a-remote-registry.md @@ -39,7 +39,7 @@ Database configurations are stored in $CARBON\_HOME/repository/conf/datasources/ 2. Navigate to $G-REG\_HOME/repository/conf/datasources/master-datasources.xml file where G-REG\_HOME is the Governance Registry distribution home. Replace the existing WSO2\_CARBON\_DB datasource with the following configuration: -``` html/xml +``` xml WSO2_CARBON_DB The datasource used for registry and user manager @@ -71,7 +71,7 @@ Change the values of the following elements according to your environment. 3. Navigate to $G-REG\_HOME /repository/conf/axis2/axis2.xml file in all Carbon-based product instances to be connected with the remote registry, and enable clustering with the following configuration. 
-``` html/xml +``` xml ``` @@ -105,7 +105,7 @@ Now that the shared registry is configured, let's take a look at the configurati 3. Configure $CARBON \_HOME/repository/conf/datasource/master-datasources.xml where CARBON \_HOME is the distribution home of any WSO2 Carbon-based product you downloaded in step 1. Then, add the following datasource for the registry space. -``` html/xml +``` xml WSO2_CARBON_DB_GREG The datasource used for registry and user manager @@ -136,7 +136,7 @@ Change the values of the relevant elements accordingly. ** Add a new db config to the datasource configuration done in step 3 above. For example, -``` html/xml +``` xml jdbc/WSO2CarbonDB_GREG @@ -144,7 +144,7 @@ Add a new db config to the datasource configuration done in step 3 above. For ex Specify the remote Governance Registry instance with the following configuration: -``` html/xml +``` xml instanceid remote_registry @@ -164,7 +164,7 @@ Change the values of the following elements according to your environment. Define the registry partitions using the remote Governance Registry instance. In this deployment strategy, we are mounting the governance partition of the Carbon-based product instances to the remote Governance Registry instance. This is graphically represented in Figure 3 at the beginning. -``` html/xml +``` xml instanceid /_system/governance @@ -182,7 +182,7 @@ Define the registry partitions using the remote Governance Registry instance. In 5. Navigate to $CARBON \_HOME/repository/conf/axis2/axis2.xml file where CARBON \_HOME is the distribution home of any WSO2 Carbon-based products to be connected with the remote registry. Enable carbon clustering by copying the following configuration to all Carbon server instances: -``` html/xml +``` xml ``` diff --git a/en/docs/install-and-setup/setup/si-deployment/deploying-si-as-minimum-ha-cluster.md b/en/docs/install-and-setup/setup/si-deployment/deploying-si-as-minimum-ha-cluster.md index ab834645cc..fa01ffadd5 100644 --- a/en/docs/install-and-setup/setup/si-deployment/deploying-si-as-minimum-ha-cluster.md +++ b/en/docs/install-and-setup/setup/si-deployment/deploying-si-as-minimum-ha-cluster.md @@ -221,7 +221,7 @@ To configure the HA cluster, follow the steps below: The following is sample HA configuration. - ``` + ``` - deployment.config: type: ha passiveNodeDetailsWaitTimeOutMillis: 300000 @@ -241,7 +241,7 @@ To configure the HA cluster, follow the steps below: maxIdle: 10 maxWait: 60000 minEvictableIdleTimeMillis: 120000 - ``` + ``` ## Starting the cluster diff --git a/en/docs/install-and-setup/setup/si-setup/defining-tables-for-physical-stores.md b/en/docs/install-and-setup/setup/si-setup/defining-tables-for-physical-stores.md index 74f1d06b2e..bc97c2d54a 100644 --- a/en/docs/install-and-setup/setup/si-setup/defining-tables-for-physical-stores.md +++ b/en/docs/install-and-setup/setup/si-setup/defining-tables-for-physical-stores.md @@ -17,7 +17,7 @@ in the following ways: define table SweetProductionTable (name string, amount double); ``` - !!! info + !!! info This method is not recommended in a production environment because is less secure compared to the other methods. @@ -30,7 +30,7 @@ in the following ways: as a ref (i.e., in a separate section siddhi: and subsection refs:) as shown in the example below. - !!! info + !!! info The database connection is started when a Siddhi application is deployed, and disconnected when the Siddhi application is @@ -77,7 +77,7 @@ in the following ways: @Store(type='', datasource=’’) ``` - !!! info + !!! 
info The database connection pool is initialized at server startup, and destroyed at server shut down. diff --git a/en/docs/install-and-setup/setup/si-setup/user-management-via-the-idp-client-interface.md b/en/docs/install-and-setup/setup/si-setup/user-management-via-the-idp-client-interface.md index d5d53860aa..e2915688ad 100644 --- a/en/docs/install-and-setup/setup/si-setup/user-management-via-the-idp-client-interface.md +++ b/en/docs/install-and-setup/setup/si-setup/user-management-via-the-idp-client-interface.md @@ -56,9 +56,9 @@ The parameters used in the above configurations are as follows: !!! note -If new users/roles are added and the above default user and role are -also needed, the following parameters must be added to the user store -along with the added user/role. + If new users/roles are added and the above default user and role are + also needed, the following parameters must be added to the user store + along with the added user/role. @@ -145,8 +145,10 @@ IdP provider: @@ -197,8 +199,8 @@ grant type. !!! note -The identity provider with which WSO2 SP interacts with to authenticate -users must be started before the SP server. + The identity provider with which WSO2 SP interacts with to authenticate + users must be started before the SP server. The auth manager must be configured under the diff --git a/en/docs/install-and-setup/setup/si-setup/user-management.md b/en/docs/install-and-setup/setup/si-setup/user-management.md index 1a57817007..a784d0b8f1 100644 --- a/en/docs/install-and-setup/setup/si-setup/user-management.md +++ b/en/docs/install-and-setup/setup/si-setup/user-management.md @@ -130,9 +130,9 @@ The parameters used in the above configurations are as follows: !!! note -If new users/roles are added and the above default user and role are -also needed, the following parameters must be added to the user store -along with the added user/role. + If new users/roles are added and the above default user and role are + also needed, the following parameters must be added to the user store + along with the added user/role.
              3600

              The number of seconds for which the session is valid once the user logs in.

              !!! info
                  Note

                  The value specified here needs to be greater than 60 seconds because the system checks the user credentials and keeps extending the session every minute until the session timeout is reached.
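              In the Streaming Integrator's `deployment.yaml`, this value typically sits alongside the other IdP client properties. A minimal sketch follows; the nesting and the key names around `sessionTimeout` are assumptions for illustration, not taken from the product documentation:

              ```yaml
              # Sketch only: surrounding key nesting is assumed; check your product's deployment.yaml
              auth.configs:
                type: local
                properties:
                  sessionTimeout: 3600   # seconds; must exceed 60, since credentials are re-checked every minute
              ```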
              @@ -219,8 +219,10 @@ IdP provider: diff --git a/en/docs/install-and-setup/setup/si-setup/working-with-keystores.md b/en/docs/install-and-setup/setup/si-setup/working-with-keystores.md index 80ff4d53e2..cd775a5b64 100644 --- a/en/docs/install-and-setup/setup/si-setup/working-with-keystores.md +++ b/en/docs/install-and-setup/setup/si-setup/working-with-keystores.md @@ -184,7 +184,8 @@ Now we have a `.jks` file. This keystore (`.jks` file) can be used to generate a 3. After accepting the request, a signed certificate is provided along with several intermediate certificates (depending on the CA) as a bundle (.zip file). - !!!info "The following is a sample certificate by the CA (Comodo) + !!! info + The following is a sample certificate by the CA (Comodo) ```text The Root certificate of the CA: AddTrustExternalCARoot.crt diff --git a/en/docs/integrate/develop/creating-artifacts/data-services/securing-data-services.md b/en/docs/integrate/develop/creating-artifacts/data-services/securing-data-services.md index 309917a9a6..5cbd629150 100644 --- a/en/docs/integrate/develop/creating-artifacts/data-services/securing-data-services.md +++ b/en/docs/integrate/develop/creating-artifacts/data-services/securing-data-services.md @@ -19,8 +19,9 @@ module: 1. Right click on the [Integration project]({{base_path}}/integrate/develop/create-integration-project) and go to **New → Registry Resource**. - !!! Tip Alternatively, you can go to **File → New → Others** and - select **Registry Resources** from the opening wizard. + !!! Tip + Alternatively, you can go to **File → New → Others** and + select **Registry Resources** from the opening wizard. 2. Enter a name for the module and click **Next** . 3. Enter the Maven information about the module and click **Finish** . diff --git a/en/docs/integrate/examples/data_integration/batch-requesting.md b/en/docs/integrate/examples/data_integration/batch-requesting.md index f8563a03cc..4542c90f40 100644 --- a/en/docs/integrate/examples/data_integration/batch-requesting.md +++ b/en/docs/integrate/examples/data_integration/batch-requesting.md @@ -85,7 +85,7 @@ Let's send a request with multiple transactions to the data service: 3. Update the **addEmployeeOp** operation (under **batch_requesting_sampleSOAP11Binding**) with the request body as shown below: - !!! Tip + !!! Tip In this example, we are sending two transactions with details of two employees. ```xml diff --git a/en/docs/integrate/examples/data_integration/request-box.md b/en/docs/integrate/examples/data_integration/request-box.md index e5d71b78f3..b43c36df41 100644 --- a/en/docs/integrate/examples/data_integration/request-box.md +++ b/en/docs/integrate/examples/data_integration/request-box.md @@ -121,7 +121,7 @@ Let's send a request with multiple transactions to the data service: 3. Invoke the **request_box** under **request_box_exampleSOAP12Binding** with the following request body: - !!! Tip + !!! Tip Note that we are sending two transactions with details of two employees. ```xml diff --git a/en/docs/integrate/examples/routing_examples/routing_based_on_payloads.md b/en/docs/integrate/examples/routing_examples/routing_based_on_payloads.md index 67c22cac74..063837d359 100644 --- a/en/docs/integrate/examples/routing_examples/routing_based_on_payloads.md +++ b/en/docs/integrate/examples/routing_examples/routing_based_on_payloads.md @@ -101,7 +101,7 @@ Invoke the proxy service: - Send a request to get the IBM stock quote and see that a JSON response is received with the IBM stock quote. 
=== "Request" - ` ```xml + ```xml HTTP method: POST Request URL: http://localhost:8290/services/ContentBasedRoutingProxy Content-Type: text/xml;charset=UTF-8 @@ -116,7 +116,7 @@ Invoke the proxy service: - ```` + ``` === "Response" ```xml diff --git a/en/docs/observe/api-manager/traces/monitoring-with-opentelemetry.md b/en/docs/observe/api-manager/traces/monitoring-with-opentelemetry.md index 79d5d137e0..117e2cd1af 100644 --- a/en/docs/observe/api-manager/traces/monitoring-with-opentelemetry.md +++ b/en/docs/observe/api-manager/traces/monitoring-with-opentelemetry.md @@ -16,21 +16,25 @@ For more information, see [OpenTelemetry Configurations]({{base_path}}/reference !!! note [``OTEL_RESOURCE_ATTRIBUTES``](https://opentelemetry.io/docs/specs/otel/resource/sdk/#specifying-resource-information-via-an-environment-variable) can be used to set resource attributes such as `deployment.environment` and `service.name`. This can be done in one of the following ways: - - Via `deployment.toml`. - ```toml - [[apim.open_telemetry.resource_attributes]] - name = "service.name" - value = "MyService" - - [[apim.open_telemetry.resource_attributes]] - name = "deployment.environment" - value = "Production" - ``` - - - Via the `OTEL_RESOURCE_ATTRIBUTES` environment variable (as per the OpenTelemetry spec). - ``` - export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=Production,service.name=MyService - ``` + + - Via `deployment.toml`. + + ```toml + [[apim.open_telemetry.resource_attributes]] + name = "service.name" + value = "MyService" + + [[apim.open_telemetry.resource_attributes]] + name = "deployment.environment" + value = "Production" + ``` + + - Via the `OTEL_RESOURCE_ATTRIBUTES` environment variable (as per the OpenTelemetry spec). + + ``` + export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=Production,service.name=MyService + ``` + When a resource attribute is given via both the `deployment.toml` and the `OTEL_RESOURCE_ATTRIBUTES` environment variable, the value of the attribute given via the environment variable will replace the value given via `deployment.toml`. ## Enabling Jaeger Tracing diff --git a/en/docs/observe/micro-integrator/classic-observability-metrics/jmx-monitoring.md b/en/docs/observe/micro-integrator/classic-observability-metrics/jmx-monitoring.md index 35fe60ff0f..7af4c1f3c3 100644 --- a/en/docs/observe/micro-integrator/classic-observability-metrics/jmx-monitoring.md +++ b/en/docs/observe/micro-integrator/classic-observability-metrics/jmx-monitoring.md @@ -74,13 +74,13 @@ detached) to an already running Java process. This universal agent uses the JVM 5. Start the JVM Agent ex: java -jar jolokia-jvm-1.7.1.jar --host=localhost --port=9764 start 6. Also you can call it with --help to get a short usage information: - Once the server starts, you can read MBeans using Jolokia APIs. The following are a few examples. + Once the server starts, you can read MBeans using Jolokia APIs. The following are a few examples. - List all available MBeans: `http://localhost:9763/jolokia/list` (Change the appropriate hostname and port accordingly.) 
- WSO2 ESB MBean: - ``` + ``` http://localhost:9763/jolokia/read/org.apache.synapse:Name=https-sender,Type=PassThroughConnections/ActiveConnections - ``` + ``` - Reading Heap Memory: `http://localhost:9763/jolokia/read/java.lang:type=Memory/HeapMemoryUsage` diff --git a/en/docs/observe/si-observe/monitoring-received-events-count-via-logs.md b/en/docs/observe/si-observe/monitoring-received-events-count-via-logs.md index 7f2b827a26..7450c933fd 100644 --- a/en/docs/observe/si-observe/monitoring-received-events-count-via-logs.md +++ b/en/docs/observe/si-observe/monitoring-received-events-count-via-logs.md @@ -9,8 +9,8 @@ To configure WSO2 Streaming Integrator to log the total received events count, f 2. Add a parameter named `enableLoggingEventCount` and set it to `true` as shown below: `enableLoggingEventCount: true` - - !!! info + + !!! info This is set to `false` by default. 3. Add another parameter named `loggingDuration` and give the time interval (in minutes) for which you want the total received event count to be logged. e.g., If you want the total received event count to be logged every minute, you can set the parameter as follows: diff --git a/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-reference.md b/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-reference.md index 43b63c23d7..ff6ba7cd46 100644 --- a/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-reference.md +++ b/en/docs/reference/connectors/amazondynamodb-connector/amazondynamodb-connector-reference.md @@ -413,7 +413,7 @@ To use the Amazon DynamoDB connector, add the element in y "Message":{ "S":"I want to update multiple items in a single call. What's the best way to do that?" } - } + } } ``` @@ -1827,7 +1827,7 @@ To use the Amazon DynamoDB connector, add the element in y "TableName":"Thread", "TableSizeBytes":0, "TableStatus":"UPDATING" - } + } } ``` diff --git a/en/docs/reference/connectors/amazonsqs-connector/amazon-inbound-endpoint-1.0.x/amazonsqs-inbound-endpoint-reference-configuration.md b/en/docs/reference/connectors/amazonsqs-connector/amazon-inbound-endpoint-1.0.x/amazonsqs-inbound-endpoint-reference-configuration.md index ce6be7331f..01af314718 100644 --- a/en/docs/reference/connectors/amazonsqs-connector/amazon-inbound-endpoint-1.0.x/amazonsqs-inbound-endpoint-reference-configuration.md +++ b/en/docs/reference/connectors/amazonsqs-connector/amazon-inbound-endpoint-1.0.x/amazonsqs-inbound-endpoint-reference-configuration.md @@ -93,8 +93,8 @@ The following configurations allow you to configure AmazonSQS Inbound Endpoint f ``` - ??? note "Sample fault sequence" - ``` +??? note "Sample fault sequence" + ``` @@ -110,4 +110,4 @@ The following configurations allow you to configure AmazonSQS Inbound Endpoint f - ``` \ No newline at end of file + ``` \ No newline at end of file diff --git a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-config.md b/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-config.md index b9e92f276e..c7245e73d0 100644 --- a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-config.md +++ b/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-connector-config.md @@ -297,7 +297,7 @@ To use the Amazon SQS connector, add the element in your config
              3600

              The number of seconds for which the session is valid once the user logs in.

              !!! info
                  Info

                  The value specified here needs to be greater than 60 seconds because the system checks the user credentials and keeps extending the session every minute until the session timeout is reached.
              - **Sample configuration** + **Sample configuration** ```xml @@ -351,7 +351,7 @@ To use the Amazon SQS connector, add the element in your config > **Note**: It is possible you will receive a message even after you have deleted it. This might happen on rare occasions if one of the servers storing a copy of the message is unavailable when you request to delete the message. The copy remains on the server and might be returned to you again on a subsequent receive request. You should create your system to be idempotent so that receiving a particular message more than once is not a problem. - **Sample configuration** + **Sample configuration** ```xml @@ -401,7 +401,7 @@ To use the Amazon SQS connector, add the element in your config - **Sample configuration** + **Sample configuration** ```xml @@ -465,7 +465,7 @@ To use the Amazon SQS connector, add the element in your config > > Unlike with a queue, when you change the visibility timeout for a specific message, that timeout value is applied immediately but is not saved in memory for that message. If you don't delete a message after it is received, the visibility timeout for the message the next time it is received reverts to the original timeout value, not the value you set with the changeMessageVisibility operation. - **Sample configuration** + **Sample configuration** ```xml @@ -516,7 +516,7 @@ To use the Amazon SQS connector, add the element in your config - **Sample configuration** + **Sample configuration** ```xml @@ -583,7 +583,7 @@ To use the Amazon SQS connector, add the element in your config - **Sample configuration** + **Sample configuration** ```xml diff --git a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-reference-configuration.md b/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-reference-configuration.md index 7d4dc8ec8d..4fbd76b798 100644 --- a/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-reference-configuration.md +++ b/en/docs/reference/connectors/amazonsqs-connector/amazonsqs-inbound-endpoint-reference-configuration.md @@ -96,8 +96,8 @@ The following configurations allow you to configure AmazonSQS Inbound Endpoint f ``` - ??? note "Sample fault sequence" - ``` +??? note "Sample fault sequence" + ``` @@ -113,4 +113,4 @@ The following configurations allow you to configure AmazonSQS Inbound Endpoint f - ``` \ No newline at end of file + ``` \ No newline at end of file diff --git a/en/docs/reference/connectors/bigquery-connector/bigquery-connector-example.md b/en/docs/reference/connectors/bigquery-connector/bigquery-connector-example.md index 6a23ef6822..6bcbf5d03a 100644 --- a/en/docs/reference/connectors/bigquery-connector/bigquery-connector-example.md +++ b/en/docs/reference/connectors/bigquery-connector/bigquery-connector-example.md @@ -618,7 +618,7 @@ Invoke the API as shown below using the curl command. 
Curl Application can be do **Expected Response** - ```json + ```json // API callback callBackFunction({ "kind": "bigquery#tableDataList", diff --git a/en/docs/reference/connectors/ceridiandayforce-connector/configuration/orgunitdetails.md b/en/docs/reference/connectors/ceridiandayforce-connector/configuration/orgunitdetails.md index 785d10f6f7..b71ea99f9c 100644 --- a/en/docs/reference/connectors/ceridiandayforce-connector/configuration/orgunitdetails.md +++ b/en/docs/reference/connectors/ceridiandayforce-connector/configuration/orgunitdetails.md @@ -180,8 +180,7 @@ Following example illustrates how to connect to Dayforce with the init operation - -``` + ``` 2. Create a json file named query.json and copy the configurations given below to it: diff --git a/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-example.md b/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-example.md index 94aa88084d..401abec634 100644 --- a/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-example.md +++ b/en/docs/reference/connectors/google-pubsub-connector/googlepubsub-connector-example.md @@ -185,8 +185,9 @@ First create an API, which will be where we configure the integration logic. Rig 3. Add the property mediator to capture the `subscriptionName` values. Follow the steps given in createTopicSubscription operation. -Now you can switch into the Source view and check the XML configuration files of the created API and sequences. - !!! note "pubsubApi.xml" +Now you can switch into the Source view and check the XML configuration files of the created API and sequences. + +!!! note "pubsubApi.xml" ``` diff --git a/en/docs/reference/connectors/mongodb-connector/mongodb-connector-config.md b/en/docs/reference/connectors/mongodb-connector/mongodb-connector-config.md index 44f4783439..bf29012caf 100644 --- a/en/docs/reference/connectors/mongodb-connector/mongodb-connector-config.md +++ b/en/docs/reference/connectors/mongodb-connector/mongodb-connector-config.md @@ -899,48 +899,48 @@ Sample configuration of STANDARD (replica set) configs The following operations allow you to work with the MongoDB connector. Click an operation name to see parameter details and samples on how to use it. ??? note "insertOne" -Inserts a document into a collection. See the related [insertOne documentation](https://docs.mongodb.com/manual/reference/method/db.collection.insertOne/) for more information. - - - - - - - - - - - - - - - - - - - - - - -
    Inserts a document into a collection. See the related [insertOne documentation](https://docs.mongodb.com/manual/reference/method/db.collection.insertOne/) for more information.

    | Parameter Name | Type        | Description                                | Default Value | Required |
    |----------------|-------------|--------------------------------------------|---------------|----------|
    | Collection     | String      | The name of the MongoDB collection.        | -             | Yes      |
    | Document       | JSON String | A document to insert into the collection.  | -             | Yes      |
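    For reference, when these parameters are read from the inbound message, the request payload might look like the following. This is a sketch; the collection name and field values are illustrative, not taken from the connector documentation:

    ```json
    {
      "collection": "TestCollection",
      "document": {
        "name": "John Doe",
        "age": 27
      }
    }
    ```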
              **Sample Configuration** @@ -965,65 +965,65 @@ Yes ``` ??? note "insertMany" -Inserts multiple documents into a collection. See the related [insertMany documentation](https://docs.mongodb.com/manual/reference/method/db.collection.insertMany) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Inserts multiple documents into a collection. See the related [insertMany documentation](https://docs.mongodb.com/manual/reference/method/db.collection.insertMany) for more information.

    | Parameter Name | Type        | Description                                                                                       | Default Value | Required |
    |----------------|-------------|---------------------------------------------------------------------------------------------------|---------------|----------|
    | Collection     | String      | The name of the MongoDB collection.                                                               | -             | Yes      |
    | Documents      | JSON String | An array of documents to insert into the collection.                                              | -             | Yes      |
    | Ordered        | Boolean     | A boolean specifying whether the MongoDB instance should perform an ordered or unordered insert.  | true          | No       |
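    A matching batch-insert payload sketch (values are illustrative); `ordered` carries the boolean described above:

    ```json
    {
      "collection": "TestCollection",
      "documents": [
        { "name": "Jane Doe", "age": 33 },
        { "name": "John Doe", "age": 27 }
      ],
      "ordered": "true"
    }
    ```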
              **Sample Configuration** @@ -1058,81 +1058,81 @@ No ``` ??? note "findOne" -Returns one document that satisfies the specified query criteria on the collection. If multiple documents satisfy the query, this method returns the first document according to the [natural order](https://docs.mongodb.com/manual/reference/glossary/#term-natural-order). See the related [find documentation](https://docs.mongodb.com/manual/reference/method/db.collection.find/) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Returns one document that satisfies the specified query criteria on the collection. If multiple documents satisfy the query, this method returns the first document according to the [natural order](https://docs.mongodb.com/manual/reference/glossary/#term-natural-order). See the related [find documentation](https://docs.mongodb.com/manual/reference/method/db.collection.find/) for more information.

    | Parameter Name | Type        | Description | Default Value | Required |
    |----------------|-------------|-------------|---------------|----------|
    | Collection     | String      | The name of the MongoDB collection. | - | Yes |
    | Query          | JSON String | Specifies query selection criteria using [query operators](https://docs.mongodb.com/manual/reference/operator/). To return the first document in a collection, omit this parameter or pass an empty document ({}). | {} | No |
    | Projection     | JSON String | Specifies the fields to return using [projection operators](https://docs.mongodb.com/manual/reference/operator/projection/). Omit this parameter to return all fields in the matching document. | - | No |
    | Collation      | JSON String | Collation allows users to specify language-specific rules for string comparison, such as rules for letter case and accent marks. | - | No |
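    A collation document follows MongoDB's standard shape; for example, the following requests a case-insensitive comparison for English:

    ```json
    { "locale": "en", "strength": 2 }
    ```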
              **Sample Configuration** @@ -1155,97 +1155,97 @@ No ``` ??? note "find" -Selects documents in a collection or [view](https://docs.mongodb.com/manual/core/views/) and returns a [cursor](https://docs.mongodb.com/manual/reference/glossary/#term-cursor) to the selected documents. See the related [find documentation](https://docs.mongodb.com/manual/reference/method/db.collection.find/) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Selects documents in a collection or [view](https://docs.mongodb.com/manual/core/views/) and returns a [cursor](https://docs.mongodb.com/manual/reference/glossary/#term-cursor) to the selected documents. See the related [find documentation](https://docs.mongodb.com/manual/reference/method/db.collection.find/) for more information.

    | Parameter Name | Type        | Description | Default Value | Required |
    |----------------|-------------|-------------|---------------|----------|
    | Collection     | String      | The name of the MongoDB collection. | - | Yes |
    | Query          | JSON String | Selection filter using [query operators](https://docs.mongodb.com/manual/reference/operator/). To return all documents in a collection, omit this parameter or pass an empty document ({}). | {} | No |
    | Projection     | JSON String | Specifies the fields to return using [projection operators](https://docs.mongodb.com/manual/reference/operator/projection/). Omit this parameter to return all fields in the matching document. | - | No |
    | Collation      | JSON String | Collation allows users to specify language-specific rules for string comparison, such as rules for letter case and accent marks. | - | No |
    | Sort           | JSON String | A document that defines the sort order of the result set. | - | No |
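    The sort document uses MongoDB's usual convention of 1 for ascending and -1 for descending; for example, to order by age descending and then by name ascending:

    ```json
    { "age": -1, "name": 1 }
    ```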
              **Sample Configuration** @@ -1268,114 +1268,114 @@ No ``` ??? note "updateOne" -Updates a single document within the collection based on the filter. See the related [updateOne documentation](https://docs.mongodb.com/manual/reference/method/db.collection.updateOne/) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Updates a single document within the collection based on the filter. See the related [updateOne documentation](https://docs.mongodb.com/manual/reference/method/db.collection.updateOne/) for more information.

    | Parameter Name | Type        | Description | Default Value | Required |
    |----------------|-------------|-------------|---------------|----------|
    | Collection     | String      | The name of the MongoDB collection. | - | Yes |
    | Query          | JSON String | The selection criteria for the update. The same [query selectors](https://docs.mongodb.com/manual/reference/operator/query/#query-selectors) as in the [find()](https://docs.mongodb.com/manual/reference/method/db.collection.find/#db.collection.find) method are available. Specify an empty document {} to update the first document returned in the collection. | {} | No |
    | Update         | JSON String | The modifications to apply. | - | Yes |
    | Upsert         | Boolean     | Creates a new document if no documents match the filter. | false | No |
    | Collation      | JSON String | Collation allows users to specify language-specific rules for string comparison, such as rules for letter case and accent marks. | - | No |
    | Array Filters  | JSON String | An array of filter documents that determine which array elements to modify for an update operation on an array field. | - | No |
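    The Update parameter takes a document of MongoDB [update operators](https://docs.mongodb.com/manual/reference/operator/update/); for example, the following sets one field and increments another:

    ```json
    { "$set": { "status": "active" }, "$inc": { "loginCount": 1 } }
    ```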
              !!! Info Array Filters parameter should be in a JSON object format. See the example given below. @@ -1429,114 +1429,114 @@ No ``` ??? note "updateMany" -Updates all documents that match the specified filter for a collection. See the related [updateMany documentation](https://docs.mongodb.com/manual/reference/method/db.collection.updateMany/) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Updates all documents that match the specified filter for a collection. See the related [updateMany documentation](https://docs.mongodb.com/manual/reference/method/db.collection.updateMany/) for more information.

    | Parameter Name | Type        | Description | Default Value | Required |
    |----------------|-------------|-------------|---------------|----------|
    | Collection     | String      | The name of the MongoDB collection. | - | Yes |
    | Query          | JSON String | The selection criteria for the update. The same [query selectors](https://docs.mongodb.com/manual/reference/operator/query/#query-selectors) as in the [find()](https://docs.mongodb.com/manual/reference/method/db.collection.find/#db.collection.find) method are available. Specify an empty document {} to update all documents in the collection. | {} | No |
    | Update         | JSON String | The modifications to apply. | - | Yes |
    | Upsert         | Boolean     | Creates a new document if no documents match the filter. | false | No |
    | Collation      | JSON String | Collation allows users to specify language-specific rules for string comparison, such as rules for letter case and accent marks. | - | No |
    | Array Filters  | JSON String | An array of filter documents that determine which array elements to modify for an update operation on an array field. | - | No |
              !!! Info Array filters parameter should be in a JSON object format. See the example given below. @@ -1590,65 +1590,65 @@ No ``` ??? note "deleteOne" -Removes a single document from a collection. See the related [deleteOne documentation](https://docs.mongodb.com/manual/reference/method/db.collection.deleteOne/#db.collection.deleteOne) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Removes a single document from a collection. See the related [deleteOne documentation](https://docs.mongodb.com/manual/reference/method/db.collection.deleteOne/#db.collection.deleteOne) for more information.

    | Parameter Name | Type        | Description | Default Value | Required |
    |----------------|-------------|-------------|---------------|----------|
    | Collection     | String      | The name of the MongoDB collection. | - | Yes |
    | Query          | JSON String | Specifies deletion criteria using [query operators](https://docs.mongodb.com/manual/reference/operator/). Specify an empty document {} to delete the first document returned in the collection. | {} | No |
    | Collation      | JSON String | Collation allows users to specify language-specific rules for string comparison, such as rules for letter case and accent marks. | - | No |
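    Deletion criteria use the same query-operator syntax as find; for example, to delete the first document whose age is below 18:

    ```json
    { "age": { "$lt": 18 } }
    ```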
              **Sample Configuration** @@ -1671,65 +1671,65 @@ No ``` ??? note "deleteMany" -Removes all documents that match the query from a collection. See the related [deleteMany documentation](https://docs.mongodb.com/manual/reference/method/db.collection.deleteMany/#db.collection.deleteMany) for more information. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Removes all documents that match the query from a collection. See the related [deleteMany documentation](https://docs.mongodb.com/manual/reference/method/db.collection.deleteMany/#db.collection.deleteMany) for more information.

    | Parameter Name | Type        | Description | Default Value | Required |
    |----------------|-------------|-------------|---------------|----------|
    | Collection     | String      | The name of the MongoDB collection. | - | Yes |
    | Query          | JSON String | Specifies deletion criteria using [query operators](https://docs.mongodb.com/manual/reference/operator/). To delete all documents in a collection, pass in an empty document ({}). | {} | No |
    | Collation      | JSON String | Collation allows users to specify language-specific rules for string comparison, such as rules for letter case and accent marks. | - | No |
              **Sample Configuration** @@ -1752,49 +1752,49 @@ No ``` ??? note "aggregate" -Process data in collections and return computed results. For more information, see the documentation for [aggregate](https://www.mongodb.com/docs/manual/reference/method/db.collection.aggregate/#db.collection.aggregate). - - - - - - - - - - - - - - - - - - - - - - -
    Processes data in a collection and returns computed results. See the related [aggregate documentation](https://www.mongodb.com/docs/manual/reference/method/db.collection.aggregate/#db.collection.aggregate) for more information.

    | Parameter Name | Type       | Description | Default Value | Required |
    |----------------|------------|-------------|---------------|----------|
    | Collection     | String     | The name of the MongoDB collection. | - | Yes |
    | Stages         | JSON Array | The stages of the aggregation pipeline. Each stage is a document with a corresponding operator name, such as $match or $group. | - | Yes |
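    Each stage is keyed by its operator; a short two-stage pipeline that first filters documents and then groups them:

    ```json
    [
      { "$match": { "status": "A" } },
      { "$group": { "_id": "$custId", "total": { "$sum": "$amount" } } }
    ]
    ```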
              **Sample Configuration** diff --git a/en/docs/reference/connectors/mongodb-connector/mongodb-connector-example.md b/en/docs/reference/connectors/mongodb-connector/mongodb-connector-example.md index 30fe3f485b..58a30c9d7d 100644 --- a/en/docs/reference/connectors/mongodb-connector/mongodb-connector-example.md +++ b/en/docs/reference/connectors/mongodb-connector/mongodb-connector-example.md @@ -146,7 +146,7 @@ Follow these steps to deploy the exported CApp to the integration runtime. {!includes/reference/connectors/deploy-capp.md!} ??? note "Click here for instructions on removing the iterative mongodb server logs" -Add the configuration below to **remove** the iterative `org.mongodb.driver.cluster` server logs; + Add the configuration below to **remove** the iterative `org.mongodb.driver.cluster` server logs; 1. Add the following logger to the `log4j2.properties` file in the `/conf` folder. @@ -157,7 +157,9 @@ Add the configuration below to **remove** the iterative `org.mongodb.driver.clus 2. Then, add `org-mongodb-driver-cluster` to the list of `loggers`. -!!! Prerequisite 1. Download the Mongo java driver from [here](https://repo1.maven.org/maven2/org/mongodb/mongo-java-driver/3.12.12/mongo-java-driver-3.12.12.jar). +!!! Prerequisite + + 1. Download the Mongo java driver from [here](https://repo1.maven.org/maven2/org/mongodb/mongo-java-driver/3.12.12/mongo-java-driver-3.12.12.jar). 2. Add the driver to the `/dropins` folder. @@ -209,7 +211,7 @@ Add the configuration below to **remove** the iterative `org.mongodb.driver.clus ### Find Operation !!! Note -In order to find documents by ObjectId, the find query payload should be in the following format: + In order to find documents by ObjectId, the find query payload should be in the following format: ```json { diff --git a/en/docs/reference/connectors/redis-connector/redis-connector-example.md b/en/docs/reference/connectors/redis-connector/redis-connector-example.md index 23d1e99ff2..7d35a41bfc 100644 --- a/en/docs/reference/connectors/redis-connector/redis-connector-example.md +++ b/en/docs/reference/connectors/redis-connector/redis-connector-example.md @@ -165,8 +165,8 @@ Create a resource that sets up Redis hash map and sets a specific field in a has Now you can switch into the Source view and check the XML configuration files of the created API and sequences. - ??? note "StockQuoteAPI.xml" - ``` +??? note "StockQuoteAPI.xml" + ``` @@ -256,7 +256,7 @@ Now you can switch into the Source view and check the XML configuration files of - ``` + ``` ## Get the project You can download the ZIP file and extract the contents to get the project code. @@ -279,13 +279,13 @@ Invoke the API as shown below using the curl command. Curl Application can be do **Sample request 1** - ``` - curl -v GET "http://localhost:8290/stockquote/view/WSO2" -H "Content-Type:application/json" - ``` +``` +curl -v GET "http://localhost:8290/stockquote/view/WSO2" -H "Content-Type:application/json" +``` **Expected Response** - ```json +```json { "Envelope": { "Body": { @@ -307,17 +307,17 @@ Invoke the API as shown below using the curl command. Curl Application can be do } } } - ``` +``` **Sample request 2** - ``` - curl -v GET "http://localhost:8290/stockquote/view/IBM" -H "Content-Type:application/json" - ``` +``` +curl -v GET "http://localhost:8290/stockquote/view/IBM" -H "Content-Type:application/json" +``` **Expected Response** - ```json +```json { "Envelope": { "Body": { @@ -340,73 +340,74 @@ Invoke the API as shown below using the curl command. 
Curl Application can be do } } - ``` +``` + **Inserted hash map can check using `redis-cli`** Log in to the `redis-cli` and execute `HGETALL StockVolume` command to retrieve inserted hash map details. - ``` +``` 127.0.0.1:6379> HGETALL StockVolume 1) "IBM" 2) "7791" 3) "WSO2" 4) "7791" 127.0.0.1:6379> - ``` +``` 2. Retrieve all stock volume details from the Redis server. **Sample request** - ``` - curl -v GET "http://localhost:8290/stockquote/getstockvolumedetails" -H "Content-Type:application/json" - ``` +``` +curl -v GET "http://localhost:8290/stockquote/getstockvolumedetails" -H "Content-Type:application/json" +``` **Expected Response** - ```json +```json { "output": "{IBM=7791, WSO2=7791}" } - ``` +``` 3. Remove stock volume details. **Sample request 1** - ``` - curl -v POST -d {"redisFields":"WSO2"} "http://localhost:8290/stockquote/deletestockvolumedetails" -H "Content-Type:application/json" - ``` +``` +curl -v POST -d {"redisFields":"WSO2"} "http://localhost:8290/stockquote/deletestockvolumedetails" -H "Content-Type:application/json" +``` **Expected Response** - ```json +```json { "output": 1 } - ``` +``` **Sample request 2 : Check the remaining stock volume details** **Sample request** - ``` +``` curl -v GET "http://localhost:8290/stockquote/getstockvolumedetails" -H "Content-Type:application/json" - ``` +``` **Expected Response** - ```json +```json { "output": "{IBM=7791}" } - ``` +``` **Inserted list can retrieve using `redis-cli`** Log in to the `redis-cli` and execute `HGETALL StockVolume` command to retrieve list length. - ``` +``` 127.0.0.1:6379> HGETALL StockVolume 1) "IBM" 2) "7791" 127.0.0.1:6379> - ``` +``` diff --git a/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-connector-example.md b/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-connector-example.md index cd47cd72f9..7f73519bb9 100644 --- a/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-connector-example.md +++ b/en/docs/reference/connectors/salesforce-connectors/salesforcebulk-connector-example.md @@ -150,7 +150,7 @@ Now follow the steps below to add configurations to the `insertEmployeeBulkRecor Add Respond mediator -#### Configure a resource for the getStatusOfBatch +#### Configure a resource for the getStatusOfBatch 1. Initialize the connector. @@ -193,8 +193,8 @@ Now follow the steps below to add configurations to the `insertEmployeeBulkRecor Now you can switch into the Source view and check the XML configuration files of the created API and sequences. - ??? note "create.xml" - ``` +??? note "create.xml" + ``` @@ -262,7 +262,7 @@ Now you can switch into the Source view and check the XML configuration files of - ``` + ``` ## Get the project You can download the ZIP file and extract the contents to get the project code. @@ -292,7 +292,7 @@ Invoke the API as shown below using the curl command. Curl application can be do **Expected Response** - ```xml +```xml @@ -307,17 +307,17 @@ Invoke the API as shown below using the curl command. Curl application can be do 2 0 - ``` +``` 2. Get status of the inserted employee details. **Sample request** - `curl -v POST -d 7502x000002yp73AAA7512x000002ywWrAAI "http://localhost:8290/resources/getStatusOfBatch" -H "Content-Type:application/xml"` + curl -v POST -d 7502x000002yp73AAA7512x000002ywWrAAI "http://localhost:8290/resources/getStatusOfBatch" -H "Content-Type:application/xml"` **Expected Response** - ```xml +```xml @@ -333,7 +333,7 @@ Invoke the API as shown below using the curl command. 
Curl application can be do 3 0 - ``` +``` ## What's Next * To customize this example for your own scenario, see [Salesforce bulk Connector Configuration]({{base_path}}/reference/connectors/salesforce-connectors/salesforcebulk-reference/) documentation for all operation details of the connector. diff --git a/en/docs/reference/connectors/salesforce-connectors/sf-rest-connector-config.md b/en/docs/reference/connectors/salesforce-connectors/sf-rest-connector-config.md index 06c5132257..e5637af4f5 100644 --- a/en/docs/reference/connectors/salesforce-connectors/sf-rest-connector-config.md +++ b/en/docs/reference/connectors/salesforce-connectors/sf-rest-connector-config.md @@ -136,6 +136,7 @@ To use the Salesforce REST connector, add the `` element in "clientSecret": "XXXXXXXXXXXX (Replace with your client secret)", "blocking" : "false" } + ``` ??? note "salesforcerest.init for username/password flow" diff --git a/en/docs/reference/connectors/smpp-connector/smpp-inbound-endpoint-example.md b/en/docs/reference/connectors/smpp-connector/smpp-inbound-endpoint-example.md index 48c6750b9c..3f9edadd17 100644 --- a/en/docs/reference/connectors/smpp-connector/smpp-inbound-endpoint-example.md +++ b/en/docs/reference/connectors/smpp-connector/smpp-inbound-endpoint-example.md @@ -126,14 +126,14 @@ You can download the ZIP file and extract the contents to get the project code. **Sample request** - ``` + ``` curl -v POST -d '{"sourceAddress":"16111", "message":"Hi! This is the first test SMS message.","distinationAddress":"071XXXXXXX"}' "http://172.17.0.1:8290/send" -H "Content-Type:application/json" - ``` + ``` SMPP Inbound Endpoint will consume message from the SMSC. **Expected response** - ``` + ``` [2020-05-18 10:56:05,495] INFO {org.apache.synapse.mediators.builtin.LogMediator} - MessageId = 0, SourceAddress = null, DataCoding = 0, ScheduleDeliveryTime = null, SequenceNumber = 7, ServiceType = null [2020-05-18 10:56:05,506] INFO {org.apache.synapse.mediators.builtin.LogMediator} - To: , MessageID: urn:uuid:F767BC9689D3D2221B1589779565430, Direction: request, Envelope: Hi! This is the first test SMS message. - ``` + ``` diff --git a/en/docs/reference/connectors/twitter-connector/twitter-connector-example.md b/en/docs/reference/connectors/twitter-connector/twitter-connector-example.md index 8d8432f2c1..f970285e9b 100644 --- a/en/docs/reference/connectors/twitter-connector/twitter-connector-example.md +++ b/en/docs/reference/connectors/twitter-connector/twitter-connector-example.md @@ -91,8 +91,8 @@ curl --location 'http://:/createtweet' \ "poll": {"options": ["yes", "maybe", "no"], "duration_minutes": 120} }' ``` + If you are using MI 4.2.0 in your local environment without configuring, ` = localhost` and ` = 8290`. -``` A response simillar to following will be received. ```json diff --git a/en/docs/reference/customize-product/customizations/adding-internationalization.md b/en/docs/reference/customize-product/customizations/adding-internationalization.md index 4a8f3f3a79..4f71b3557b 100644 --- a/en/docs/reference/customize-product/customizations/adding-internationalization.md +++ b/en/docs/reference/customize-product/customizations/adding-internationalization.md @@ -161,8 +161,8 @@ Follow the instructions below to change the direction of the UI: Add the following configuration to change the page direction to RTL (Right To Left). - !!! note - If you have already done customizations to the default theme, make sure to merge the following with the existing changes carefully. + !!! 
note + If you have already done customizations to the default theme, make sure to merge the following with the existing changes carefully. ```json { diff --git a/en/docs/reference/customize-product/customizations/customize-the-api-store-and-gateway-urls-for-tenants.md b/en/docs/reference/customize-product/customizations/customize-the-api-store-and-gateway-urls-for-tenants.md index ffd55e0f7c..723df02ca3 100644 --- a/en/docs/reference/customize-product/customizations/customize-the-api-store-and-gateway-urls-for-tenants.md +++ b/en/docs/reference/customize-product/customizations/customize-the-api-store-and-gateway-urls-for-tenants.md @@ -227,32 +227,36 @@ Carry out the following steps to configure NGINX as the load balancer to support When adding the `customUrl` parameter, make sure to add the valid context that the Developer Portal is accessed from. 4. Add the server name provided for the devportal in the nginx configurations to the callback URL of apim:devportal service provider. - ```tab="Format" - regexp=(https:///services/auth/callback/login|https://localhost:9443/services/auth/callback/login|https:///services/auth/callback/logout|https://localhost:9443/services/auth/callback/logout) - ``` + === "Format" + ``` + regexp=(https:///services/auth/callback/login|https://localhost:9443/services/auth/callback/login|https:///services/auth/callback/logout|https://localhost:9443/services/auth/callback/logout) + ``` - ```tab="Example" - regexp=(https://developer.wso2.com/services/auth/callback/login|https://localhost:9443/services/auth/callback/login|https://developer.wso2.com/services/auth/callback/logout|https://localhost:9443/services/auth/callback/logout) - ``` + === "Example" + ``` + regexp=(https://developer.wso2.com/services/auth/callback/login|https://localhost:9443/services/auth/callback/login|https://developer.wso2.com/services/auth/callback/logout|https://localhost:9443/services/auth/callback/logout) + ``` !!! note When adding the devportal URl to the callback URL regex, make sure to append it without removing the localhost:9443. 5. Add the following idp configurations to the `deployment.toml` file to make devportal login when accessing from the publisher portal. - ```tab="Format" - [apim.idp] - server_url = "https://" - authorize_endpoint = "https:///oauth2/authorize" - oidc_logout_endpoint = "https:///oidc/logout" - oidc_check_session_endpoint = "https:///oidc/checksession" - ``` + === "Format" + ```toml + [apim.idp] + server_url = "https://" + authorize_endpoint = "https:///oauth2/authorize" + oidc_logout_endpoint = "https:///oidc/logout" + oidc_check_session_endpoint = "https:///oidc/checksession" + ``` - ```tab="Example" - [apim.idp] - server_url = "https://localhost:9443" - authorize_endpoint = "https://localhost:9443/oauth2/authorize" - oidc_logout_endpoint = "https://localhost:9443/oidc/logout" - oidc_check_session_endpoint = "https://localhost:9443/oidc/checksession" - ``` + === "Example" + ```toml + [apim.idp] + server_url = "https://localhost:9443" + authorize_endpoint = "https://localhost:9443/oauth2/authorize" + oidc_logout_endpoint = "https://localhost:9443/oidc/logout" + oidc_check_session_endpoint = "https://localhost:9443/oidc/checksession" + ``` Now you should be able to access the developer portal and the gateways using custom URLs defined. 
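A quick way to sanity-check the finished setup is to probe the custom URL and the IdP endpoints from a shell. The sketch below is illustrative only: it assumes the example hostname `developer.wso2.com` from the configuration above and a local API-M node on the default `9443` port. A `200`, or a redirect toward the login page, indicates the routing and callback configuration are in place.

```bash
# Probe the custom Developer Portal URL and the default node it proxies to.
curl -k -s -o /dev/null -w "devportal via custom URL: %{http_code}\n" "https://developer.wso2.com/devportal"
curl -k -s -o /dev/null -w "devportal via localhost:  %{http_code}\n" "https://localhost:9443/devportal"

# The [apim.idp] endpoints configured above should also answer.
curl -k -s -o /dev/null -w "authorize endpoint:       %{http_code}\n" "https://localhost:9443/oauth2/authorize"
curl -k -s -o /dev/null -w "oidc logout endpoint:     %{http_code}\n" "https://localhost:9443/oidc/logout"
```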
diff --git a/en/docs/reference/customize-product/customizations/customizing-the-developer-portal/enable-or-disable-home-page.md b/en/docs/reference/customize-product/customizations/customizing-the-developer-portal/enable-or-disable-home-page.md index 41b375fe8a..ba29557a83 100644 --- a/en/docs/reference/customize-product/customizations/customizing-the-developer-portal/enable-or-disable-home-page.md +++ b/en/docs/reference/customize-product/customizations/customizing-the-developer-portal/enable-or-disable-home-page.md @@ -21,75 +21,75 @@ By using `defaultTheme.js` as a reference , you could customize these link tabs Following JSON is an example for a `userTheme.js` to define the look and feel, and the behavior of the landing page. You can set the attributes (components) such as `carousel`, `listByTag`, `parallax` and `contact` as shown in the below example. (Refer to the above screenshot to identify the components referred by the attribute names) ``` js -{ - "custom": { - "landingPage": { - "active": true, - "carousel": { + { + "custom": { + "landingPage": { "active": true, - "slides": [ - { - "src": "/site/public/images/landing/01.jpg", - "title": "Lorem ipsum dolor sit amet", - "content": - "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer felis lacus, placerat vel condimentum in, porta a urna. Suspendisse dolor diam, vestibulum at molestie dapibus, semper eget ex. Morbi sit amet euismod tortor." - }, - { - "src": "/site/public/images/landing/02.jpg", - "title": "Curabitur malesuada arcu sapien", - "content": - "Curabitur malesuada arcu sapien, suscipit egestas purus efficitur vitae. Etiam vulputate hendrerit venenatis. " - }, - { - "src": "/site/public/images/landing/03.jpg", - "title": "Nam vel ex feugiat nunc laoreet", - "content": - "Nam vel ex feugiat nunc laoreet elementum. Duis sed nibh condimentum, posuere risus a, mollis diam. Vivamus ultricies, augue id pulvinar semper, mauris lorem bibendum urna, eget tincidunt quam ex ut diam." - } - ] - }, - "listByTag": { - "active": true, - "content": [ - { - "tag": "finance", - "title": "Checkout our Finance APIs", - "description": - "WSO2 offers online payment solutions and have more than 123 million customers worldwide. The WSO2 Finance API makes powerful functionality available to developers by exposing various features of the platform. Functionality includes but is not limited to invoice management, transaction processing, and account management.", - "maxCount": 5 - }, - { - "tag": "weather", - "title": "Checkout our Weather APIs", - "description": - "WSO2 offers online payment solutions and have more than 123 million customers worldwide. The WSO2 Finance API makes powerful functionality available to developers by exposing various features of the platform. Functionality includes but is not limited to invoice management, transaction processing, and account management.", - "maxCount": 5 - } - ] - }, - "parallax": { - "active": true, - "content": [ - { - "src": "/site/public/images/landing/parallax1.jpg", - "title": "Lorem ipsum dolor sit amet", - "content": - "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer felis lacus, placerat vel condimentum in, porta a urna. Suspendisse dolor diam, vestibulum at molestie dapibus, semper eget ex. Morbi sit amet euismod tortor." - }, - { - "src": "/site/public/images/landing/parallax2.jpg", - "title": "Nam vel ex feugiat nunc laoreet", - "content": - "Nam vel ex feugiat nunc laoreet elementum. Duis sed nibh condimentum, posuere risus a, mollis diam. 
Vivamus ultricies, augue id pulvinar semper, mauris lorem bibendum urna, eget tincidunt quam ex ut diam." - } - ] - }, - "contact": { - "active": true + "carousel": { + "active": true, + "slides": [ + { + "src": "/site/public/images/landing/01.jpg", + "title": "Lorem ipsum dolor sit amet", + "content": + "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer felis lacus, placerat vel condimentum in, porta a urna. Suspendisse dolor diam, vestibulum at molestie dapibus, semper eget ex. Morbi sit amet euismod tortor." + }, + { + "src": "/site/public/images/landing/02.jpg", + "title": "Curabitur malesuada arcu sapien", + "content": + "Curabitur malesuada arcu sapien, suscipit egestas purus efficitur vitae. Etiam vulputate hendrerit venenatis. " + }, + { + "src": "/site/public/images/landing/03.jpg", + "title": "Nam vel ex feugiat nunc laoreet", + "content": + "Nam vel ex feugiat nunc laoreet elementum. Duis sed nibh condimentum, posuere risus a, mollis diam. Vivamus ultricies, augue id pulvinar semper, mauris lorem bibendum urna, eget tincidunt quam ex ut diam." + } + ] + }, + "listByTag": { + "active": true, + "content": [ + { + "tag": "finance", + "title": "Checkout our Finance APIs", + "description": + "WSO2 offers online payment solutions and have more than 123 million customers worldwide. The WSO2 Finance API makes powerful functionality available to developers by exposing various features of the platform. Functionality includes but is not limited to invoice management, transaction processing, and account management.", + "maxCount": 5 + }, + { + "tag": "weather", + "title": "Checkout our Weather APIs", + "description": + "WSO2 offers online payment solutions and have more than 123 million customers worldwide. The WSO2 Finance API makes powerful functionality available to developers by exposing various features of the platform. Functionality includes but is not limited to invoice management, transaction processing, and account management.", + "maxCount": 5 + } + ] + }, + "parallax": { + "active": true, + "content": [ + { + "src": "/site/public/images/landing/parallax1.jpg", + "title": "Lorem ipsum dolor sit amet", + "content": + "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer felis lacus, placerat vel condimentum in, porta a urna. Suspendisse dolor diam, vestibulum at molestie dapibus, semper eget ex. Morbi sit amet euismod tortor." + }, + { + "src": "/site/public/images/landing/parallax2.jpg", + "title": "Nam vel ex feugiat nunc laoreet", + "content": + "Nam vel ex feugiat nunc laoreet elementum. Duis sed nibh condimentum, posuere risus a, mollis diam. Vivamus ultricies, augue id pulvinar semper, mauris lorem bibendum urna, eget tincidunt quam ex ut diam." + } + ] + }, + "contact": { + "active": true + } } } } -} ``` diff --git a/en/docs/reference/customize-product/extending-api-manager/extending-workflows/configuring-workflows-in-a-cluster.md b/en/docs/reference/customize-product/extending-api-manager/extending-workflows/configuring-workflows-in-a-cluster.md index 5bf21e3c0a..653d7cf66a 100644 --- a/en/docs/reference/customize-product/extending-api-manager/extending-workflows/configuring-workflows-in-a-cluster.md +++ b/en/docs/reference/customize-product/extending-api-manager/extending-workflows/configuring-workflows-in-a-cluster.md @@ -54,8 +54,10 @@ In this guide, you access the Admin Portal ( `https://:9443/admin` ) Web applic </wsdl:service>
            - !!! tip
            + <div class="admonition tip">
            + <p class="admonition-title">Tip</p>
            + <p>Note that all workflow process services of the BPS run on port 9765 because you changed its default port (9763) with an offset of 2.</p>
            + </div>
            @@ -137,8 +139,10 @@ In this guide, you access the Admin Portal ( `https://:9443/admin` ) Web applic </wsdl:service> 
        - !!! tip
        + <div class="admonition tip">
        + <p class="admonition-title">Tip</p>
        + <p>Note that all workflow process services of the BPS run on port 9765 because you changed its default port (9763) with an offset of 2.</p>
        + </div>
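Both hunks above hinge on the same port arithmetic (9763 + offset 2 = 9765), so a quick probe from a shell confirms that the BPS node is actually serving its process services on the offset port. This is a sketch only: `<PROCESS_SERVICE>` is a placeholder, not a real service name.

```bash
# Fetch the WSDL of a deployed workflow process service on the offset port.
# Replace <PROCESS_SERVICE> with the actual service name listed in the
# BPS management console; a 200 confirms the offset took effect.
curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:9765/services/<PROCESS_SERVICE>?wsdl"
```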
        diff --git a/en/docs/reference/guides/database-upgrade-guide.md b/en/docs/reference/guides/database-upgrade-guide.md index 5fa306a399..67525dc9f9 100644 --- a/en/docs/reference/guides/database-upgrade-guide.md +++ b/en/docs/reference/guides/database-upgrade-guide.md @@ -12,8 +12,8 @@ The following are the specific prerequisites you must complete before an upgrade - Stop all the Carbon servers connected to the database before running the migration scripts. - !!! note - Note that the upgrade should be done during a period when there is low traffic on the system. + !!! note + Note that the upgrade should be done during a period when there is low traffic on the system. #### Limitations diff --git a/en/docs/reference/mediators/db-report-mediator.md b/en/docs/reference/mediators/db-report-mediator.md index 0761af6ef6..e674035a91 100644 --- a/en/docs/reference/mediators/db-report-mediator.md +++ b/en/docs/reference/mediators/db-report-mediator.md @@ -99,7 +99,7 @@ The parameters available to configure the DB Report mediator are as follows. - ``` + ``` diff --git a/en/docs/reference/mediators/dblookup-mediator.md b/en/docs/reference/mediators/dblookup-mediator.md index e1d23a1324..0ce072ad05 100644 --- a/en/docs/reference/mediators/dblookup-mediator.md +++ b/en/docs/reference/mediators/dblookup-mediator.md @@ -85,18 +85,18 @@ follows: !!! Info When specifying the DB connection using a connection pool, other than specifying parameter values inline, you can also specify following parameter values of the connection information (i.e. Driver, URL, User and password) as registry entries. The advantage of specifying a parameter value as a registry entry is that the same connection information configurations can be used in different environments simply by changing the registry entry value. To do this, give the registry path within the `key` attribute as shown in the example below. -``` - - - - - - - - - - -``` + ``` + + + + + + + + + + + ``` | Parameter Name | Description | |----------------------------|------------------------------------------------------------------------------------------------------------------| diff --git a/en/docs/reference/mediators/script-mediator.md b/en/docs/reference/mediators/script-mediator.md index bfe8c7a39e..6f04538096 100644 --- a/en/docs/reference/mediators/script-mediator.md +++ b/en/docs/reference/mediators/script-mediator.md @@ -479,8 +479,10 @@ The following table contains examples of how some of the commonly used methods c
        setProperty(property)

        See the example for the getProperty method. The setProperty method is used to set the response time calculated from the time durations obtained (using the getProperty method) in the message context.

        - !!! note
        + <div class="admonition note">
        + <p class="admonition-title">Note</p>
        + <p>In the ESB profile, due to a Rhino engine upgrade, when strings are concatenated and set as a property in the message context, you need to use the toString() method to convert the result to a string.</p>
        + </div>

        In the following example, `var result = "a"` and then `result = result + "b"`. When concatenating these strings, the script invoked needs to be as follows:

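A minimal sketch of the pattern described above, using Node.js as a stand-in for the mediator's Rhino engine (this assumes `node` is on your PATH; `mc.setProperty` is the Script mediator API referred to in the surrounding text, and the property name in the comment is illustrative):

```bash
node -e '
var result = "a";
result = result + "b";
// Under Rhino, concatenation can yield a ConsString rather than a plain
// string, so pass the value through toString() before setting it, e.g.:
//   mc.setProperty("RESPONSE_TIME", result.toString());
console.log(result.toString()); // prints "ab"
'
```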
        diff --git a/en/docs/use-cases/examples/streaming-examples/cdc-with-listening-mode.md b/en/docs/use-cases/examples/streaming-examples/cdc-with-listening-mode.md index 3e2d82123b..6756063841 100644 --- a/en/docs/use-cases/examples/streaming-examples/cdc-with-listening-mode.md +++ b/en/docs/use-cases/examples/streaming-examples/cdc-with-listening-mode.md @@ -10,8 +10,10 @@ This sample demonstrates how to capture change data from MySQL using Siddhi. The 2. Unzip the archive.
        3. Copy the `mysql-connector-java-5.1.45-bin.jar` JAR and place it in the `/lib` directory.
        4. Enable binary logging in the MySQL server. For detailed instructions, see [Debezium documentation - Enabling the binlog](https://debezium.io/docs/connectors/mysql/#enabling-the-binlog).
        + !!! info
        +     If you are using MySQL 8.0, use the following query to check the binlog status.
        +     ```
        +     SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::" FROM performance_schema.global_variables WHERE variable_name='log_bin';
        +     ```

diff --git a/en/docs/use-cases/examples/streaming-examples/cdc-with-polling-mode.md index 1f06c47ea6..cfccddcbf0 100644
--- a/en/docs/use-cases/examples/streaming-examples/cdc-with-polling-mode.md
+++ b/en/docs/use-cases/examples/streaming-examples/cdc-with-polling-mode.md
@@ -92,5 +92,5 @@ The insert operation is logged in the Streaming Integrator console as shown belo
     from insertSweetProductionStream
     select name, amount
     insert into logStream;
-    ``` 
+    ```

diff --git a/en/docs/use-cases/examples/streaming-examples/publish-mqtt-in-xml-format.md index 765a25d276..c46453fc70 100644
--- a/en/docs/use-cases/examples/streaming-examples/publish-mqtt-in-xml-format.md
+++ b/en/docs/use-cases/examples/streaming-examples/publish-mqtt-in-xml-format.md
@@ -17,7 +17,7 @@ This application demonstrates how to configure WSO2 Streaming Integrator Tooling
 2. Execute the following command to install the Mosquitto broker package.
    ```bash
    sudo apt-get install mosquitto
-   ````
+   ```
 3. Install Mosquitto developer libraries to develop MQTT clients.
    ```bash
    sudo apt-get install libmosquitto-dev

diff --git a/en/docs/use-cases/streaming-tutorials/expose-a-kafka-topic-as-a-managed-websocket-api.md index c85f5a4cc6..2220d262cf 100644
--- a/en/docs/use-cases/streaming-tutorials/expose-a-kafka-topic-as-a-managed-websocket-api.md
+++ b/en/docs/use-cases/streaming-tutorials/expose-a-kafka-topic-as-a-managed-websocket-api.md
@@ -26,19 +26,19 @@ Follow the instructions below to expose a third-party Service Provider stream as

 2. Update the `service.catalog.configs:` section as follows:

-    ```
-    service.catalog.configs:
-      enabled: true
-      hostname: localhost
-      port: 9448
-      username: admin
-      password: admin
-    ```
-    In the above configuration -
+    ```
+    service.catalog.configs:
+      enabled: true
+      hostname: localhost
+      port: 9448
+      username: admin
+      password: admin
+    ```
+    In the above configuration -

-    - You are enabling the AsyncAPI generation functionality by setting the `enabled` parameter to `true`.
+    - You are enabling the AsyncAPI generation functionality by setting the `enabled` parameter to `true`.

-    - You are specifying `9448` as the port because you configured a port offset of 5 in the previous step. The default port of the API Manager is `9443`.
+    - You are specifying `9448` as the port because you configured a port offset of 5 in the previous step. The default port of the API Manager is `9443`.

 4. Configure authentication between API-M and SI.

@@ -67,7 +67,7 @@ Follow the instructions below to expose a third-party Service Provider stream as

 ??? note "3. Start Kafka"

-    1.Navigate to the `` directory and start a Zookeeper node.
+    1. Navigate to the `` directory and start a Zookeeper node.

    ``` sh
    bin/zookeeper-server-start.sh config/zookeeper.properties
    ```

diff --git a/en/docs/use-cases/streaming-tutorials/exposing-processed-data-as-api.md index 1e423f05ba..1b23a8ce3e 100644
--- a/en/docs/use-cases/streaming-tutorials/exposing-processed-data-as-api.md
+++ b/en/docs/use-cases/streaming-tutorials/exposing-processed-data-as-api.md
@@ -88,13 +88,13 @@ This tutorial demonstrates how you can use the Siddhi query API to perform essen
     curl -X POST "https://localhost:9443/siddhi-apps" -H "accept: application/json" -H "Content-Type: text/plain" -d @SweetProduction-Store.siddhi -u admin:admin -k
     ```

-    Upon successful deployment, the following response is logged for the `CURL` command you just executed. 
+    Upon successful deployment, the following response is logged for the `CURL` command you just executed.

     ```
     {"type":"success","message":"Siddhi App saved succesfully and will be deployed in next deployment cycle"}
     ```

-    In addition to that, the following is logged in the SI console. 
+    In addition to that, the following is logged in the SI console.

     ```
     INFO {org.wso2.carbon.streaming.integrator.core.internal.StreamProcessorService} - Siddhi App SweetProduction-Store deployed successfully

diff --git a/en/docs/use-cases/streaming-tutorials/integrating-stores.md index c06dbd0aa0..636acccfd9 100644
--- a/en/docs/use-cases/streaming-tutorials/integrating-stores.md
+++ b/en/docs/use-cases/streaming-tutorials/integrating-stores.md
@@ -73,10 +73,10 @@ In this section, let's learn the different ways in which you can connect a Siddh

     In Streaming Integrator Tooling, open a new file and start creating a new Siddhi Application named `StockManagementApp`.

-    ```
-    @App:name("StockManagementApp")
-    @App:description("Managing Raw Materials")
-    ```
+```
+@App:name("StockManagementApp")
+@App:description("Managing Raw Materials")
+```

     Now let's connect to the data stores (i.e., databases) you previously created to the Siddhi application. There are three methods in which this can be done. To learn them, let's connect each of the three databases in a different method.

diff --git a/en/docs/use-cases/streaming-tutorials/performing-real-time-etl-with-mysql.md index 94bb0c53f2..42745b2050 100644
--- a/en/docs/use-cases/streaming-tutorials/performing-real-time-etl-with-mysql.md
+++ b/en/docs/use-cases/streaming-tutorials/performing-real-time-etl-with-mysql.md
@@ -32,8 +32,10 @@ You can capture the following types of changes done to a database table:

 !!!info "Before you begin:"
     - You need to have access to a MySQL instance.
        - Enable binary logging in the MySQL server. For detailed instructions, see [Debezium documentation - Enabling the binlog](https://debezium.io/docs/connectors/mysql/#enabling-the-binlog).
        + !!! info
        +     If you are using MySQL 8.0, use the following query to check the binlog status.
        +     ```
        +     SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::" FROM performance_schema.global_variables WHERE variable_name='log_bin';
        +     ```

diff --git a/en/docs/use-cases/streaming-tutorials/transforming-data.md index 90ddfc338b..e012e32b08 100644
--- a/en/docs/use-cases/streaming-tutorials/transforming-data.md
+++ b/en/docs/use-cases/streaming-tutorials/transforming-data.md
@@ -44,8 +44,8 @@ To understand how the Streaming Integrator can transform streaming data from one
     define stream OutputStream(flight string, passengers int);
     ```

-    !!! info
-    You can replace `Users/foo` with a path to a preferred location in your machine.
+    !!! info
+        You can replace `Users/foo` with a path to a preferred location in your machine.

     The above stream generates output events with values for the `flight` and `passengers` attributes. The connected sink annotation of the `file` type specifies that output events generated in the stream are published to the `Users/foo/output.json` file in JSON format.

diff --git a/en/docs/use-cases/streaming-tutorials/triggering-integrations-via-micro-integrator.md index 074e7fa89a..e59ee3e64e 100644
--- a/en/docs/use-cases/streaming-tutorials/triggering-integrations-via-micro-integrator.md
+++ b/en/docs/use-cases/streaming-tutorials/triggering-integrations-via-micro-integrator.md
@@ -86,35 +86,35 @@ Let's design a Siddhi application that triggers an integration flow and deploy i

     a. To calculate the average per minute, add a Siddhi query named `CalculateAverageProductionPerMinute` as follows:

-    ```
-    @info(name = 'CalculateAverageProductionPerMinute')
-    from InputStream#window.timeBatch(1 min)
-    select avg(amount) as avgAmount, symbol
-    group by symbol
-    insert into AVGStream;
-    ```
+    ```
+    @info(name = 'CalculateAverageProductionPerMinute')
+    from InputStream#window.timeBatch(1 min)
+    select avg(amount) as avgAmount, symbol
+    group by symbol
+    insert into AVGStream;
+    ```

     This query applies a time batch window to the `InputStream` stream so that the events within each minute are considered a separate subset to which the calculations in the query are applied. The minutes are considered in a tumbling manner because it is a batch window. Then the `avg()` function is applied to the `amount` attribute of the input stream to derive the average production amount. The results are then inserted into an inferred stream named `AVGStream`.

     b. To filter events from the `AVGStream` stream where the average production is greater than 100, add a query named `FilterExcessProduction` as follows.

-    ```
-    @info(name = 'FilterExcessProduction')
-    from AVGStream[avgAmount > 100]
-    select symbol, avgAmount
-    insert into FooStream;
-    ```
+    ```
+    @info(name = 'FilterExcessProduction')
+    from AVGStream[avgAmount > 100]
+    select symbol, avgAmount
+    insert into FooStream;
+    ```

     Here, the `avgAmount > 100` filter is applied to filter only events that report an average production amount greater than 100. The filtered events are inserted into the `FooStream` stream.

     c. To select all the responses from the Micro Integrator to be logged, add a new query named `LogResponseEvents` as follows.

-    ```
-    @info(name = 'LogResponseEvents')
-    from BarStream
-    select *
-    insert into LogStream;
-    ```
+    ```
+    @info(name = 'LogResponseEvents')
+    from BarStream
+    select *
+    insert into LogStream;
+    ```

     The responses received from the Micro Integrator are directed to the `BarStream` input stream. This query takes all these events from the `BarStream` stream and inserts them into the `LogStream` stream, which is connected to a `log` sink so that they can be published as logs in the terminal.

diff --git a/en/docs/use-cases/streaming-tutorials/working-with-kafka.md index 3f5ab94f52..464283f801 100644
--- a/en/docs/use-cases/streaming-tutorials/working-with-kafka.md
+++ b/en/docs/use-cases/streaming-tutorials/working-with-kafka.md
@@ -121,7 +121,7 @@ Let's create a basic Siddhi application to consume messages from a Kafka topic.
     {"event":{ "name":"Almond cookie", "amount":100.0}}
     ```

-    This pushes a message to the Kafka Server. Then, the Siddhi application you deployed in the Streaming Integrator consumes this message. As a result, the Streaming Integrator log displays the following: 
+    This pushes a message to the Kafka Server. Then, the Siddhi application you deployed in the Streaming Integrator consumes this message. As a result, the Streaming Integrator log displays the following:

     ```
     INFO {io.siddhi.core.stream.output.sink.LogSink} - HelloKafka : OutputStream : Event{timestamp=1562069868006, data=[ALMOND COOKIE, 100.0], isExpired=false}

@@ -181,7 +181,7 @@ For this purpose, you can configure the `topic.offsets.map` parameter. Let's mod
     {"event":{ "name":"Cup cake", "amount":300.0}}
     ```

-    The following log appears in the Streaming Integrator Studio console. 
+    The following log appears in the Streaming Integrator Studio console.

     ```
     INFO {io.siddhi.core.stream.output.sink.LogSink} - HelloKafka : OutputStream : Event{timestamp=1562676477785, data=[CUP CAKE, 300.0], isExpired=false}

diff --git a/en/docs/use-cases/streaming-usecase/extracting-data-from-static-sources-in-real-time.md index decfadcf78..012a2504c4 100644
--- a/en/docs/use-cases/streaming-usecase/extracting-data-from-static-sources-in-real-time.md
+++ b/en/docs/use-cases/streaming-usecase/extracting-data-from-static-sources-in-real-time.md
@@ -88,8 +88,10 @@ Let's try out the example where you want to view the online bookings saved in a
 1. Download and install MySQL.
 2. Enable binary logging in the MySQL server. For detailed instructions, see [Debezium documentation - Enabling the binlog](https://debezium.io/docs/connectors/mysql/#enabling-the-binlog).
+ !!! info
+     If you are using MySQL 8.0, use the following query to check the binlog status.
+     ```
+     SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::" FROM performance_schema.global_variables WHERE variable_name='log_bin';
+     ```

diff --git a/en/docs/use-cases/streaming-usecase/receiving-data-in-transit.md index c849b564a1..376e82416c 100644
--- a/en/docs/use-cases/streaming-usecase/receiving-data-in-transit.md
+++ b/en/docs/use-cases/streaming-usecase/receiving-data-in-transit.md
@@ -85,7 +85,7 @@ To try out the example given above, let's include the source configuration in a
     }'
     ```

-    The following is logged in the terminal. 
+    The following is logged in the terminal.

     ```text
     INFO {io.siddhi.core.stream.output.sink.LogSink} - New Student : Event{timestamp=1603185021250, data=[John Doe, Graphic Design, 1], isExpired=false}

diff --git a/en/docs/use-cases/streaming-usecase/transforming-data.md index 3588b879f7..0ac2e10a13 100644
--- a/en/docs/use-cases/streaming-usecase/transforming-data.md
+++ b/en/docs/use-cases/streaming-usecase/transforming-data.md
@@ -231,8 +231,8 @@ To try out the transformations described above with some of the given examples,
     - Publishes the production statistics in a custom format. `name` and `amount` attributes are presented as `Name` and `Quantity`, and nested under `ProductionData` in the `Product` enclosing element. These events are published in the `Users/foo/productions.json` file.

-    !!! tip
-    You can save the `productions.json` file mentioned above in a different location of your choice if required.
+    !!! tip
+        You can save the `productions.json` file mentioned above in a different location of your choice if required.

     - Calculates the total production amount and the average production amount per sweet, and presents them as values for the `total` and `average` attributes in the output event published in the `productions.json` file.

diff --git a/en/theme/material/main.html index 331fdfd43e..c9fbc735d4 100644
--- a/en/theme/material/main.html
+++ b/en/theme/material/main.html
@@ -71,7 +71,6 @@
 {% block content %}
 {{ super() }}
-Top
        + ``` SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::" FROM performance_schema.global_variables WHERE variable_name='log_bin'; diff --git a/en/docs/use-cases/streaming-usecase/receiving-data-in-transit.md b/en/docs/use-cases/streaming-usecase/receiving-data-in-transit.md index c849b564a1..376e82416c 100644 --- a/en/docs/use-cases/streaming-usecase/receiving-data-in-transit.md +++ b/en/docs/use-cases/streaming-usecase/receiving-data-in-transit.md @@ -85,7 +85,7 @@ To try out the example given above, let's include the source configuration in a }' ``` - The following is logged in the terminal. + The following is logged in the terminal. ```text INFO {io.siddhi.core.stream.output.sink.LogSink} - New Student : Event{timestamp=1603185021250, data=[John Doe, Graphic Design, 1], isExpired=false} diff --git a/en/docs/use-cases/streaming-usecase/transforming-data.md b/en/docs/use-cases/streaming-usecase/transforming-data.md index 3588b879f7..0ac2e10a13 100644 --- a/en/docs/use-cases/streaming-usecase/transforming-data.md +++ b/en/docs/use-cases/streaming-usecase/transforming-data.md @@ -231,8 +231,8 @@ To try out the transformations described above with some of the given examples, - Publishes the production statistics in a custom format. `name` and `amount` attributes are presented as `Name` and `Quantity`, and nested under `ProductionData` in the `Product` enclosing element. These events are published in the `Users/foo/productions.json` file. - !!! tip - You can save the `productions.json` file mentioned above in a different location of your choice if required. + !!! tip + You can save the `productions.json` file mentioned above in a different location of your choice if required. - Calculates the total production amount and the average production amount per sweet, and presents them as values for the `total` and `average` attributes in the output event published in the `productions.json` file. diff --git a/en/theme/material/main.html b/en/theme/material/main.html index 331fdfd43e..c9fbc735d4 100644 --- a/en/theme/material/main.html +++ b/en/theme/material/main.html @@ -71,7 +71,6 @@ {% block content %} {{ super() }} -Top