diff --git a/README.md b/README.md
index 90491cf87..f6555777b 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@

The documentation built with [Docusaurus](https://docusaurus.io).

-The search is implemented using [DocSearch](https://docsearch.algolia.com).
+The search is implemented using [Algolia DocSearch](https://docsearch.algolia.com).

The OpenAPI documentation is generated using [PaloAltoNetworks docusaurus-openapi-docs plugin](https://github.com/PaloAltoNetworks/docusaurus-openapi-docs).

@@ -10,17 +10,17 @@ The OpenAPI documentation is generated using

## Running locally

1. Install the dependencies
-```bash
+```console
npm install
```

2. Run application in development mode
-```bash
+```console
npm run start
```

3. For production ready build use the next commands:
-```bash
+```console
npm run gen-all
npm run build
```

diff --git a/docs/analysis/HowModelsAreRetrained.md b/docs/analysis/HowModelsAreRetrained.md
index e5f4db107..769c1e337 100644
--- a/docs/analysis/HowModelsAreRetrained.md
+++ b/docs/analysis/HowModelsAreRetrained.md
@@ -5,28 +5,33 @@ sidebar_label: How models are retrained

# How models are retrained

-In the Auto-analysis and ML suggestions processes several models take part:
-* Auto-analysis XGBoost model, which gives the probability for a test item to be of a specific type based on the most similar test item in the history with this defect type
-* ML suggestions XGBoost model, which gives the probability for a test item to be similar to the test item from the history
-* Error message language model on Tf-Idf vectors(Random Forest Classifier), which gives a probability for the error message to be of a specific defect type or its subtype based on the words in the message. The probability from this model is taken as a feature in the main boosting algorithm.
+In the Auto-analysis and Machine Learning (ML) suggestions processes, several models contribute:

-At the start of the project, you have global models. They were trained on 6 projects and were validated to give a good accuracy on average. To have a more powerful and personalized test failure analysis, the models should be retrained on the data from the project.
+* Auto-analysis XGBoost model, which provides the likelihood of a test item being of a specific type based on the most similar test item in the history with this defect type.
+* ML suggestions XGBoost model, which offers the probability for a test item to resemble a test item from the history.
+* Error message language model on Tf-Idf vectors (Random Forest Classifier), which delivers a probability for the error message to be of a specific defect type or its subtype based on the words in the message. The probability from this model serves as a feature in the main boosting algorithm.
+
+In the beginning, you have global models at your disposal. These models, trained on six projects, have been validated to give good accuracy on average. To develop a more powerful and personalized test failure analysis, you should retrain the models on the data from your project.

:::note
-If a global model performs better on your data, the retrained model won't be saved. As far as we save a custom model only if it performs better for your data than the global one.
+If a global model performs better on your project data, the retrained model will not be saved, because we only keep custom models that outperform the global model on your data.
:::

-Triggering information and retrained models are saved in Minio(or a filesystem) as you set up in the Analyzer service settings.
+Triggering information and retrained models are stored in Minio (or a filesystem), as set up in the Analyzer service settings.
+
+Conditions for triggering retraining for **Error Message Random Forest Classifier**:
+* Each time the test item defect type is changed to another issue type (except "To Investigate"), we update the triggering info. This stores the total quantity of test items with defect types and the quantity of test items labelled since the last training. This information is stored in the file "defect_type_trigger_info" in Minio.
+* Retraining is triggered when there are more than 100 labelled items in total and at least 100 test items have been labelled since the last training. If validation data metrics are better than the metrics of the global model on the same data points, a custom "defect_type" model is saved in Minio. This will then be utilized in the auto-analysis and suggestions functionality, enhancing your test automation results dashboard.
+
+Conditions for triggering retraining of **Auto-analysis** and **Suggestion XGBoost models**:
+* We collect training data from several sources:
+  * When a suggestion is selected (the chosen test item will be a positive example, others will be negative).
+  * When you don't select any suggestion and manually edit the test item (all suggestions become negative examples).
+  * Auto-analysis identifies a similar test item; this is considered a positive example unless the user manually changes the defect type, in which case the result is marked as a negative example.

-Retraining triggering conditions for **Error message Random Forest Classifier**:
-* Each time the test item defect type is changed to another issue type(except "To Investigate"), we update the triggering info, which saves the quantity of test items with defect types and the quantity of test items with defect types since the last training. This information is saved in the file "defect_type_trigger_info" in Minio.
-* When we have more than 100 labelled items, and since last training we have 100 labelled test items, retraining is triggered and if validation data metrics are better than metrics for a global model for the same data points, then we save a custom "defect_type" model in Minio and use it further in the auto-analysis and suggestions functionality.

+When either a suggestion analysis runs or a defect type change occurs, we update the trigger info for both models. This information is stored in "auto_analysis_trigger_info" and "suggestion_trigger_info" files in Minio.

-Retraining triggering conditions for **Auto-analysis** and **Suggestion XGBoost models**:
-* We gather training data for training from several sources:
- * when you choose one of the suggestions(the chosen test item will be a positive example, others will be negative ones)
- * when you don't choose any suggestion and edit the test item somehow(set up a defect type manually, add a comment, etc.), all suggestions become negative examples;
- * when auto-analysis runs and for a test item it finds a similar test item, we consider it a positive example, until the user changes the defect type for it manually. In this case, the result will be marked as a negative one.
-* Each time a suggestion analysis runs or changing a defect type happens, we update the triggering info for both models. This information is saved in the files "auto_analysis_trigger_info" and "suggestion_tgrigger_info" in Minio.
-* When we have more than 300 labelled items, and since last training we have 100 labelled test items, retraining is triggered and if validation data metrics are better than metrics for a global model for the same data points, then we save a custom "auto_anlysis" model in Minio and use it further in the auto-analysis functionality.
-* When we have more than 100 labelled items, and since last training we have 50 labelled test items, retraining is triggered and if validation data metrics are better than metrics for a global model for the same data points, then we save a custom "suggestion" model in Minio and use it further in the suggestions functionality.
+Retraining is triggered when:
+* For the auto-analysis model: there are more than 300 labelled items in total and at least 100 test items have been labelled since the last training. If validation data metrics are better than those of the global model on the same data points, a custom "auto_analysis" model is saved in Minio and utilized in the auto-analysis functionality.
+* For the suggestion model: there are more than 100 labelled items in total and at least 50 test items have been labelled since the last training. If validation data metrics are better than those of the global model on the same data points, a custom "suggestion" model is saved in Minio and used in the suggestions functionality.
diff --git a/docs/api/versioned_sidebars/api-sidebars.ts b/docs/api/versioned_sidebars/api-sidebars.ts
index 47dd38570..a77dd5fcf 100644
--- a/docs/api/versioned_sidebars/api-sidebars.ts
+++ b/docs/api/versioned_sidebars/api-sidebars.ts
@@ -47,10 +47,10 @@ const apiSidebars: SidebarsConfig = {
      link: {
        type: 'generated-index',
-        description: 'This is a generated index of the ReportPortal Authtorization API.',
+        description: 'This is a generated index of the ReportPortal Service API.',
        slug: '/category/api/service-api-5.10'
      },
-      items: require('../service-api/5.10/sidebar.ts')
+      items: require('../service-api/versions/5.10/sidebar.ts')
    }
  ],
};
diff --git a/docs/api/versioned_sidebars/uat-sidebars.ts b/docs/api/versioned_sidebars/uat-sidebars.ts
index 1af1dac94..aca220434 100644
--- a/docs/api/versioned_sidebars/uat-sidebars.ts
+++ b/docs/api/versioned_sidebars/uat-sidebars.ts
@@ -50,7 +50,7 @@ const uatSidebars: SidebarsConfig = {
      description: 'This is a generated index of the ReportPortal Authtorization API.',
      slug: '/category/api/service-uat-5.10'
    },
-      items: require('../service-uat/5.10/sidebar.ts')
+      items: require('../service-uat/versions/5.10/sidebar.ts')
    }
  ],
}
diff --git a/docs/dev-guides/BackEndJavaContributionGuide.mdx b/docs/dev-guides/BackEndJavaContributionGuide.mdx
index a47a3eada..d40dec67a 100644
--- a/docs/dev-guides/BackEndJavaContributionGuide.mdx
+++ b/docs/dev-guides/BackEndJavaContributionGuide.mdx
@@ -32,7 +32,9 @@ There are three Java repositories that are part of the whole RP deployment:

## Code conventions

### IDE Formatter
-Settings file can be found at: https://github.com/reportportal/reportportal/blob/master/idea_formatting_profile.xml
+
+[Settings file](https://github.com/reportportal/reportportal/blob/master/idea_formatting_profile.xml)
+
Steps to import:
- Click on IDEA "Preferences"
- Choose "Editor" section
@@ -42,44 +44,37 @@ Steps to import:

### Code style

-This document is aimed to improve some points in existing code base and implementation/design approaches synchronization inside team.
It's a blueprint for now, but will be improved. -- Code should be simple and readable as possible. -- Avoid using useless interfaces. For our case, if we aren't planning to extend functionality, and can simply refactor -(For example it isn't some separated library build with some magic), -and if we don't need another implementation for example for some tests(mock, anonymous classes etc.), -code auto-generation (@repository for example) and so on(another real needed cases), than we should to avoid creating interface. -- Methods and Classes should be named accordingly to their responsibilities, name should be clear or description should be added otherwise -(For instance, difficult to describe responsibility in only few words). -- Methods with the same names(overloading/overriding) should do similar things. -- Parameters in overloaded methods should have the same order of the same parameters: same parameters at the beginning of the list, different at the end of the list. -- Name of parameters should be clear. For example, if used id, it should be clear to which entity it's related. May be used `entity`Id instead (userId, projectId, etc.). -- Same related to return parameters. Should be clear, for example, if it's Long, and it's not some calculations, than should be added explanation in description. -- Try to use suitable structure, to avoid useless converting. -- Try to move usually used flows (checks, or structure processing, etc.) to utils. -- Avoid using manually work with threads (use defined `TaskExecutor` or create a new one). -- `Optional` and `Stream` should be mostly used in cases when it really improves readability. -- Always return `Optional` instead of `null`. -- For `Stream` avoid double terminal operations. (Stream to list then back to stream to collect to list). + +This document is aimed at improving aspects of our existing code base and synchronizing implementation/design approaches within the team. It serves as a blueprint for now but will be continually updated and improved. + +- Code should be as simple and readable as possible. +- Avoid unnecessary interfaces. In our case, if we aren't planning to extend functionality and can easily refactor, we should avoid creating an interface unless there is a distinct need (e.g., for testing, code auto-generation, etc.). +- Methods and classes should be named according to their responsibilities. If a name isn't self-explanatory, provide a description. +- Overloaded or overridden methods with the same name should perform similar operations. +- Keep parameter order consistent across methods with the same name. +- Parameter names should be clear and indicative of their function. For example, instead of using a generic "id", specify the `entity` it's related to, like "userId" or "projectId." +- The same principle applies to return parameters. +- It's advisable to use a suitable structure to avoid unnecessary conversions. +- Commonly used flows (checks, structure processing, etc.) should be moved to utilities. +- Manually working with threads should be avoided unless necessary (utilize defined `TaskExecutor` or create a new one). +- `Optional` and `Stream` should be used primarily when they enhance readability. Always return `Optional` instead of `null`. +- Avoid using two terminal operations in a Stream. ### Git branch requirements -- Working branch name must be started with jira-id(EPMRPP-DDD)(-optional-description), for example: -`EPMRPP-444` or `EPMRPP-443-fix-some-bugs`. 
That also means, all Jira branches must be related to some Jira ticked, only jira-id should be a trigger for branch creation
-(excluding some technical branches master, develop, release etc).
-- All commit messages in `master`, `develop` and `release` branches must be started with jira-id as well: EPMRPP-445 Some important fix.
-- Better to divide commits into small bug logic completed parts into the branch, to ease understanding during code review.
-- When merging a PR into the mainstream branch (`master`, `develop`, `release`) all commits should be squashed and provided with suitable description with jira-id by person who performs merge.
-
-**For contributors that have no access to our Jira tasks branch name prefix should be GitHub `issue` name**
-It is highly recommended creating an issue that doesn't already exist and then fix it within your PR.
-Even if it's some new function. Issue will stay there with all your ideas and comments from other contributors.
-Also, RP users who are not developers might prefer to look through issues
-rather than PRs to check if something is already fixed in a new version of ReportPortal.
+
+- Working branch names should start with the Jira-id (EPMRPP-DDD) and an optional description, e.g., `EPMRPP-444` or `EPMRPP-443-fix-some-bugs`. All Jira branches must be related to a Jira ticket.
+- All commit messages in `master`, `develop`, and `release` branches must start with a Jira-id as well: EPMRPP-445 Some important fix.
+- It is better to divide commits into small, logically complete parts within the branch, for clarity during code review.
+- When merging a PR into the main branch (`master`, `develop`, `release`), all commits should be squashed and provided with a suitable description with the Jira-id by the person who performs the merge.
+
+**For contributors who do not have access to our Jira tasks**, the branch name prefix should be the GitHub issue name. It is highly recommended to create an issue if one doesn't already exist, and then fix it within your PR, even if it's a new function. The issue will remain there as a record of your ideas and of the comments from other contributors. Additionally, ReportPortal users who are not developers might prefer to look through issues rather than PRs to verify if a specific issue has been addressed in a new version of ReportPortal. This is a crucial component in maintaining a transparent, efficient test automation reporting dashboard for all users.

## Open-source contribution workflow

All features fixes should be added to `develop` branch only, exclusions: hot-fixes

Changes applying workflow:
+
- Clone repository or fork from it
- Checkout `develop` branch
- Create branch according to name policy
@@ -90,6 +85,7 @@ Changes applying workflow:

## Dev environment setup

### Pre requirements
+
- Postgresql should be deployed
- Service Migration should fill/update DB data
- Binary data storage should be configured
@@ -145,21 +141,20 @@ services:

#### How to keep DB data up-to-date

-`image: reportportal/migrations:5.6.0` is the released version of the migrations service, but if there were any changes in the `develop` branch
-they won't be available in released version so migrated DB schema may be outdated.
-To prevent this and have DB data up-to-date you should do the following:
+Maintaining an up-to-date database (DB) schema is an important aspect of managing an efficient test automation reporting tool.
+
+`image: reportportal/migrations:5.6.0` is the released version of the migrations service.
However, if any changes were made in the develop branch after the release, the migrated DB schema may be outdated. To prevent this and ensure your DB data remains current, follow the steps outlined below: + - clone/update [Migrations Repository](https://github.com/reportportal/migrations) - checkout `develop` branch - run the following command: ```shell docker-compose run --rm migrations ``` -This flow will use all SQL scripts in the `develop` branch and update your locally running DB instance. -
+This process leverages all SQL scripts present in the develop branch to update your locally running DB instance.

-By default, filesystem storage is used for binary data (all data will be stored on your local filesystem).
-If you want to store binaries using Minio (as we do on our production) you should deploy it too by adding this to already existing `docker-compose.yml`:
+By default, filesystem storage is employed for binary data, meaning all data will be stored on your local filesystem. If you prefer to store binaries using Minio (as we do in our production), you need to deploy it as well. You can do this by adding the necessary pieces to the existing `docker-compose.yml`:

```yaml
  minio:
@@ -183,9 +178,8 @@ If you want to store binaries using Minio (as we do on our production) you shoul
    restart: always
```

-Pay attention to `Windows` comments - if you develop on `Windows` uncomment sections for windows
-and comment `Linux`-related sections both for Postgres and Minio
-then add next statement to the `docker-compose.yml`:
+Please be aware of the comments directed towards `Windows` - if you are developing on Windows, uncomment the sections for `Windows` and comment out any `Linux`-related sections, applicable to both Postgres and Minio. Then, add the following statement to the `docker-compose.yml`:
+
```yaml
volumes:
  postgres:
@@ -264,28 +258,29 @@ volumes:
  minio:
```

-This file will be updated in next sections, but we can already start developing our first service
+This file will be updated in the subsequent sections, but we can already initiate the development of our first service.

### Service Authorization

-To start up Service Authorization you should fill marked values in the `application.yaml` file:
+To start up Service Authorization, you should populate the marked values in the `application.yaml` file:

-(Optional) change `context-path` value from `/` to `/uat` if you are planning to deploy Service UI locally (will be described later)
+(Optional) Adjust the `context-path` value from `/` to `/uat` if you plan to deploy Service UI locally (described in a later section).

### Service API

-Before starting Service API we should add RabbitMQ to our deployment.
+Prior to initiating the Service API, RabbitMQ needs to be included in our deployment.

-In ReportPortal RabbitMQ is required for 3 purposes:
-- inter-service communication between Service API and Service Analyzer.
-- async reporting feature
-- user activity event publishing
+In ReportPortal, RabbitMQ serves three key functions:

-To add RabbitMQ to our deployment we should add the following to our existing `docker-compose.yml`:
+- Inter-service communication between Service API and Service Analyzer.
+- Asynchronous reporting feature.
+- User activity event publishing.
+
+To add RabbitMQ to our deployment, the following should be incorporated into our existing `docker-compose.yml`:

```yaml
  rabbitmq:
@@ -302,8 +297,7 @@ To add RabbitMQ to our deployment we should add the following to our existing `d
    restart: always
```

-We may start deploying Service API right now without any issues, but all Analyzer-related interactions (publishing to analyzer queues and receiving response) won't succeed.
-We need to deploy Service Analyzer and all its required services. So we add the following to our `docker-compose.yml`:
+We can begin deploying the Service API right away; however, all Analyzer-related interactions (such as publishing to analyzer queues and receiving responses) will not succeed yet.
To rectify this, we need to deploy the Service Analyzer and all its required services. To achieve this, we add the following to our `docker-compose.yml`:

```yaml
  opensearch:
@@ -348,9 +342,9 @@ We need to deploy Service Analyzer and all its required services. So we add the
    restart: always
```

-As the result our `docker-compose.yml` should be like [this](pathname:///files/DockerCompose.yml)
+As a result of these additions, your `docker-compose.yml` should look something like [this](pathname:///files/DockerCompose.yml).

-To start up Service API you should fill marked values in the `application.yaml` file:
+To start the Service API, populate the marked values in your `application.yaml` file:

@@ -358,11 +352,11 @@ To start up Service API you should fill marked values in the `application.yaml`

-(Optional) change `context-path` value from `/` to `/api` if you are planning to deploy Service UI locally (will be described later)
+(Optional) Change the `context-path` value from `/` to `/api` if you plan to deploy Service UI locally (described in a later section).

### Service Jobs

-To start up Service Jobs you should fill marked values in the `application.yaml` file:
+To start Service Jobs, fill in the marked values in your `application.yaml` file:

@@ -373,12 +367,14 @@ To start up Service Jobs you should fill marked values in the `application.yaml`

### Service UI

-After all back-end services deployed you may want to interact with them not only using tool like Postman but use ReportPortal UI too.
-To do this you should do the next steps:
-- clone/update [Service UI repository](https://github.com/reportportal/service-ui)
-- checkout `develop` branch
-- apply changes to `dev.config.js` file
+Once all back-end services are deployed, you may want to interface with them beyond using a tool like Postman and use ReportPortal UI. To accomplish this, follow these steps:
+
+- Clone or update the [Service UI repository](https://github.com/reportportal/service-ui).
+- Checkout the `develop` branch.
+- Make changes to the `dev.config.js` file.
+
+Before:

-before:
```javascript
proxy: [
  {
@@ -391,7 +387,7 @@ proxy: [
]
```

-after:
+After:
```javascript
proxy: [
  {
@@ -411,22 +407,23 @@ proxy: [
]
```

-- install NodeJs if it's not installed yet (minimum version is 10)
-- run Service UI from the root folder(service-ui) using commands:
+- If NodeJs is not already installed, install it (version 20 is recommended).
+- From the root folder (service-ui), run Service UI using the following commands:
```shell
cd app
npm install
npm run dev
```

-- open Service UI page on `localhost` with port `3000` and try to login using default credentials
+- Open the Service UI page on `localhost` using port `3000` and try to log in using the default credentials.
+
+This will allow you to view your test automation dashboard and interact with the reported test results.

## Development workflow

### Introduction to dependencies

-All our Java services besides common dependencies like `spring-...` also have ReportPortal libraries that are separated to different repositories.
-There is a full list of these dependencies:
+In addition to common dependencies such as `spring-...`, our Java services also have ReportPortal libraries distributed across different repositories.
Here is a list of these dependencies:
- [Commons DAO](https://github.com/reportportal/commons-dao) - Data layer dependency with domain model configuration
- [Commons Model](https://github.com/reportportal/commons-model) - REST models dependency
- [Commons](https://github.com/reportportal/commons) - Some common utils for multiple usage purposes
@@ -436,41 +433,39 @@

### Updates in dependencies

-Let's pretend that you found a bug when you try to get user from the DB.
-All logic is invoked within `Service API` but the code with the bug is not directly in sources but in `Commons DAO` dependency.
-How to apply a fix and check if everything works fine?
-To do this you should follow these steps:
-- clone/update `Commons DAO` repository
-- checkout `develop` branch
-- make changes
-- create branch according to the name policy
-- push to the remote
-- create PR to the `develop` branch
-Now you have your branch on the GitHub page and can see the `commit hash`:
+Let's assume you found a bug when trying to retrieve a user from the DB. The logic is invoked within Service API, but the buggy code is in the Commons DAO dependency. To apply a fix and validate its effectiveness, follow these steps:
+
+- Clone or update the `Commons DAO` repository.
+- Checkout the `develop` branch.
+- Implement the necessary changes.
+- Create a branch according to the naming policy.
+- Push the changes to the remote.
+- Create a pull request (PR) to the `develop` branch.
+
+Now you can see your branch and the `commit hash` on the GitHub page:

-- go to the service where your changes should be applied (Service API in our example)
-- copy `commit hash` and replace already existing in the `build.gradle` of the required service (Service API in our example):
+- Go to the service where your changes need to be applied (Service API in our instance).
+- Copy the `commit hash` and replace the existing one in the `build.gradle` of the required service (Service API in our instance); a hedged example of such an entry is shown after this list:

-- after re-building project using `gradle` dependency will be resolved and downloaded using [Jitpack](https://jitpack.io) tool
-- create branch according to the name policy
-- push to the remote
-- create a PR to the `develop` branch
+- After rebuilding the project using `Gradle`, the dependency will be resolved and downloaded using the [Jitpack tool](https://jitpack.io).
+- Create a branch according to the naming policy.
+- Push the changes to the remote.
+- Create a PR to the `develop` branch.
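+
+For illustration only: a Jitpack-resolved dependency entry in `build.gradle` might look like the sketch below. The configuration name and commit hash here are hypothetical, so check the actual `build.gradle` of the service for the real module coordinates.
+
+```groovy
+repositories {
+    // Jitpack builds GitHub repositories on demand and serves them as Maven artifacts
+    maven { url 'https://jitpack.io' }
+}
+
+dependencies {
+    // Jitpack coordinates: com.github.<org>:<repository>:<commit hash, tag, or branch-SNAPSHOT>
+    // 'abc1234def5' stands for the commit hash copied from your branch on GitHub
+    implementation 'com.github.reportportal:commons-dao:abc1234def5'
+}
+```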
## Summary notes

-This documentation should help you save your time by configuring ReportPortal local dev environment and give you understanding of some standards/conventions that we try stick to.
-
-Simplified development workflow should look like this:
-- always have the latest schema and data in your local DB instance using [this instructions](#how-to-keep-db-data-up-to-date)
-- checkout `develop` branch in the required repository
-- make changes
-- if changes in dependencies are required:
- - go to the dependency repository, make changes, create a branch and PR according to conventions
- - using `commit hash` update dependency in the `build.gradle`
-- create branch according to the name policy
-- push to the remote
-- create a PR according to the name policy
+This documentation should assist you in configuring the ReportPortal local development environment and provide an understanding of the standards and conventions we adhere to.
+
+The simplified development workflow should look as follows:
+
+- Always maintain the latest schema and data in your local DB instance using [these instructions](#how-to-keep-db-data-up-to-date).
+- Checkout the `develop` branch in the required repository.
+- Implement changes.
+- If changes in dependencies are necessary:
+  - Go to the dependency repository, apply changes, and create a branch and PR according to conventions.
+  - Using the `commit hash`, update the dependency in the `build.gradle`.
+- Create a branch according to the name policy.
+- Push to the remote.
+- Create a PR following the name policy.
diff --git a/docs/installation-steps/ScalingUpReportPortalAPIService.md b/docs/installation-steps/ScalingUpReportPortalAPIService.md
index 7f21da3ca..6cea2b169 100644
--- a/docs/installation-steps/ScalingUpReportPortalAPIService.md
+++ b/docs/installation-steps/ScalingUpReportPortalAPIService.md
@@ -19,11 +19,11 @@ To scale your ReportPortal services in Kubernetes, you need to adjust the `repli

1. **Update Replica Count**: Change `replicaCount` from `1` to `2` for additional replication.
- [values.yaml replicaCount](https://github.com/reportportal/kubernetes/blob/master/reportportal/values.yaml#L71) + [values.yaml replicaCount](https://github.com/reportportal/kubernetes/blob/master/reportportal/values.yaml#L73) 2. **Edit Total Number of Queues**: Modify `queues.totalNumber` from `10` to `20` to increase the total available queues.
- [values.yaml queues.totalNumber](https://github.com/reportportal/kubernetes/blob/master/reportportal/values.yaml#L120) + [values.yaml queues.totalNumber](https://github.com/reportportal/kubernetes/blob/master/reportportal/values.yaml#L139) Use the following formula for calculation:
`perPodNumber = totalNumber / serviceapi.replicaCount`
+
+   For example, with `queues.totalNumber` set to `20` and `serviceapi.replicaCount` set to `2`, each Service API pod consumes 10 queues.
diff --git a/docs/quality-gates/UploadQualityGateToReportPortal.mdx b/docs/quality-gates/UploadQualityGateToReportPortal.mdx
index 6d9b3bd3c..24f856fc1 100644
--- a/docs/quality-gates/UploadQualityGateToReportPortal.mdx
+++ b/docs/quality-gates/UploadQualityGateToReportPortal.mdx
@@ -7,7 +7,7 @@ sidebar_label: Upload Quality Gate to ReportPortal

The default configuration of our continuous testing platform doesn't contain Quality Gate. For adding this feature, you need to [receive a link to the .jar file from ReportPortal](/quality-gates/HowToInstallQualityGates).

-Download the .jar file and upload it to ReportPortal. Fo that pleases perform, following actions:
+Download the .jar file and upload it to ReportPortal. For that, perform the following actions:

* Login ReportPortal as an Admin
* Open ```Admin Page > Plugins```

diff --git a/docs/reportportal-configuration/IntegrationViaPlugin.mdx b/docs/reportportal-configuration/IntegrationViaPlugin.mdx
index 97cb2f46a..94528ac39 100644
--- a/docs/reportportal-configuration/IntegrationViaPlugin.mdx
+++ b/docs/reportportal-configuration/IntegrationViaPlugin.mdx
@@ -5,7 +5,7 @@ sidebar_label: Integration via plugin

# Integration via plugin

-Users can reinforce ReportPortal with adding additional integrations with:
+Users can enhance ReportPortal by adding additional integrations with:
* [Atlassian Jira Server](/plugins/AtlassianJiraServer)
* [Atlassian Jira Cloud](/plugins/AtlassianJiraCloud)
* [Rally](/plugins/Rally)
@@ -20,26 +20,28 @@ Users can reinforce ReportPortal with adding additional integrations with:

+If you're keen on incorporating ReportPortal with other external systems, and can't find the necessary tab in the Project Settings, please refer to the [Plugins](/category/plugins) section in our documentation for guidance. Our test reporting tool integrates seamlessly, allowing for a streamlined connection with external systems.

-If you want to integrate ReportPortal with these external systems, and you can not find a needed tab on the Project Settings, please check the section in documentation [Plugins](/category/plugins).

+The integrations can be added at a global level, applicable for all projects on an instance, in the Administrate section. Alternatively, they can be specific to a project and can be configured in the Project Settings.

-Integration configurations can be added on the global level (for all projects on the instance) in the Administrate section or the project level (only for one project) on Project Settings.

+For those who require a different configuration from other projects, or want to integrate only their specific project with an external system, follow these steps:

-If you have another configuration than other projects have or you want to integrate only your project with an external system, you should perform the next actions:

+1. Log into ReportPortal as a PROJECT MANAGER or ADMINISTRATOR

-1. Log in to ReportPortal as `PROJECT MANAGER` or `ADMINISTRATOR`
-2. Go to Project settings > Integrations
-3. Click on one of the integration panels
-4. And click the button "Unlink and setup manually"

+2. Navigate to Project settings > Integrations

-By this action, you unlink the current project from the global settings and configure your integration.

+3. Click on one of the shown integration panels

+4. Click on the button titled "Unlink and setup manually"

+Performing these steps will unlink your current project from the global settings and initiate your own configuration.
:::note
If you unlink project setting and ADMIN changes global settings for the whole instance, your project will use your project settings.
:::

-To return global settings, you need to click a button "Reset to global settings".
-In this case, your settings will be deleted, and integration will use global settings.
-You can always reset to the global settings.
+To revert to the global settings, click the button titled "Reset to global settings". This action will erase your settings, and the integration will revert to using the global settings.
+
+Remember, you can always reset to the global settings.
diff --git a/docs/reportportal-configuration/authorization/ActiveDirectory.md b/docs/reportportal-configuration/authorization/ActiveDirectory.md
index b0c43fd50..c7dd5f540 100644
--- a/docs/reportportal-configuration/authorization/ActiveDirectory.md
+++ b/docs/reportportal-configuration/authorization/ActiveDirectory.md
@@ -4,17 +4,23 @@ sidebar_label: Active Directory

# Active Directory

-Active Directory plugin is available in ReportPortal on the Plugins page.
+The Active Directory plugin is available in ReportPortal on the Plugins page, enhancing the efficiency of our test report dashboard.

-To set up access with Active Directory:
+To set up access with Active Directory, follow these steps:

-1. Log in to the ReportPortal as an ADMIN user
-2. Then open the list on the right of the user's image.
-3. Click the 'Administrative' link
-4. Click the 'Plugins' from the left-hand sidebar
-5. Click on the'Activate Directory' tab.
-6. Click on Add new integration
-7. The next fields should be present:
+1. Log in to ReportPortal as an ADMIN user.
+
+2. Click on the dropdown menu located to the right of the user's image.
+
+3. Select the 'Administrative' link.
+
+4. From the left-hand sidebar, click on 'Plugins'.
+
+5. Navigate to the 'Active Directory' tab.
+
+6. Click on 'Add new integration'.
+
+At this point, the following fields should be displayed:

```javascript
'Domain*': text
@@ -26,14 +32,11 @@ To set up access with Active Directory:
'UserSearchFilter' (the same as for LDAP): text
```

-Mandatory fields are marked with red.
-Click the 'Submit' button.
-All users of Active Directory will have access to the ReportPortal instance.
-For entrance to ReportPortal, the user should use their domain credentials (Login and password).
-
+Fields marked in red are mandatory. Once filled, click the 'Submit' button. Completing this process grants all Active Directory users access to the ReportPortal instance.
+To access ReportPortal, users should utilize their domain credentials (Login and password).

-Please find the example with configurations for Microsoft Active Directory that worked successfully provided by our user:
+Refer to the example below with configurations for Microsoft Active Directory that one of our users has successfully applied:

**Table with properties and values for LDAP Microsoft Active Directory**

diff --git a/docs/work-with-reports/OperationsUnderLaunches.mdx b/docs/work-with-reports/OperationsUnderLaunches.mdx
index 260fc115c..632612766 100644
--- a/docs/work-with-reports/OperationsUnderLaunches.mdx
+++ b/docs/work-with-reports/OperationsUnderLaunches.mdx
@@ -11,23 +11,18 @@ There are several ways of how launches could be modified and managed in our test

The system allows editing attributes and descriptions for the launch on the "Launches" and "Debug" modes.
-Permission: next users are able to modify launches:
+Permission: the following users can modify launches:
- Administrator
-
- User with PROJECT MANAGER Project Role
+- User with one of Project Role MEMBER, CUSTOMER on the project - Launch Owner
- Launch Owner
-- User with one of Project Role {MEMBER, CUSTOMER} on the project - Launch Owner
-
-In order to edit a launch, perform the following steps:
+To edit a launch, perform the following steps:

1. Navigate to the "Launches" page.
-
2. Select "Edit" option ('pencil' icon) to the selected launch.
-
3. The launch editor will be opened with the following options.
-
-6. Make the required changes and click the "Save" button.
+4. Make the required changes and click the "Save" button.

@@ -35,90 +30,71 @@ In order to edit a launch, perform the following steps:

The system allows to edit attributes and description for the test items on the "Launches" and "Debug" pages.

-Permission: next users are able to modify test items:
+Permission: the following users can modify test items:
- Administrator
-
- User with PROJECT MANAGER Project Role
+- User with one of Project Role MEMBER, CUSTOMER on the project - Launch Owner
- Launch Owner
-- User with one of Project Role {MEMBER, CUSTOMER} on the project - Launch Owner
-
-In order to edit a test item, perform the following steps:
+To edit a test item, perform the following steps:

1. Navigate to the "Launches" page.
-
2. Drill down to the test level of any item.
-
3. Select the "Edit" option ('pencil' icon) to the selected test item.
-
4. The test item editor will be opened with the following options.
-
5. Make the required changes and click the "Save" button.

## Merge launches

-Merge launches feature can help you to merge the existing launches into one.
-If your project has a really huge number of regression suites and they cannot be in one particular launch, so, they divided into parts.
-As soon as they completed, they could be merged in one separate launch to represent this data on dashboards and create reports.
+Merge launches feature can help you to merge the existing launches into one. If your project has a really large number of regression suites that cannot fit into one particular launch, they are divided into parts. As soon as they are completed, they can be merged into one launch to represent this data on dashboards and create reports.

ReportPortal provides two options for merge: Linear and Deep. The difference in merge options is described below.

-Basically, the merge models distinguished by the way of how the launches elements are collected in a resulting launch as shown on a picture:
+Basically, the merge models are distinguished by how the launch elements are collected in the resulting launch, as shown in the picture:

**Linear merge**

-In case the user selects the "Linear merge" option, the new launch is created. The new launch contains elements of merging launches.
-Levels of elements stay the same as in origin launches. Status and issues statistics are calculated as the sum of statistics of all merged launches. The origin launches are deleted from the system.
+In case the user selects the "Linear merge" option, a new launch is created. The new launch contains the elements of the merged launches. Levels of elements stay the same as in the origin launches. Status and issues statistics are calculated as the sum of the statistics of all merged launches. The origin launches are deleted from the system.
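+
+As an illustrative example (the numbers are hypothetical): merging Launch-1 with 10 passed and 2 failed tests and Launch-2 with 5 passed and 1 failed test produces a single launch reporting 15 passed and 3 failed tests.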
**Deep merge**

In case the user selects the "Deep merge" option, the system creates a new launch and check items with the following conditions simultaneously:
+
* test items with the same names
-* test items have the same type (SUITE or TEST)
-* test items are on the same path (number of parents)
-* test items with descendants.
-If such elements are found only the earliest one is added to the new launch. All descendants are collected on their levels as in the original launches. The merge is started from the upper levels to the lower levels.
-In case the upper level is not merged, the lower levels will not be merged as well. Items without descendants are not merged despite their level.
-Status and issues statistics are calculated for a new launch.
-The original launches are deleted from the system.
+* test items have the same type (SUITE or TEST)
+* test items are on the same path (number of parents)
+* test items with descendants.
+
+If such elements are found, only the earliest one is added to the new launch. All descendants are collected on their levels as in the original launches. The merge is started from the upper levels to the lower levels. In case the upper level is not merged, the lower levels will not be merged as well. Items without descendants are not merged despite their level. Status and issues statistics are calculated for a new launch. The original launches are deleted from the system.

The Linear and Deep Merge algorithm, as described above, is shown on a scheme:

-For instance, we have Launch-1 and Launch-2 to be merged. If system founds that Suite_A in Launch-1 and Suite_A in Launch-2
-have the same names and the same types and the same path and have descendants, so only the earliest Suite_A (according to start time) is added to the new launch. All descendants of merged suites are combined under the Suite_A. Then the system searches for the same matches on the next level (Test level).
+For instance, we have Launch-1 and Launch-2 to be merged. If the system finds that Suite_A in Launch-1 and Suite_A in Launch-2 have the same name, the same type, and the same path, and both have descendants, then only the earliest Suite_A (according to start time) is added to the new launch. All descendants of merged suites are combined under the Suite_A. Then the system searches for the same matches on the next level (Test level).

-If items are not met the conditions described for the "Deep Merge" option then they are collected the same
-way as described for the "Linear merge" option.
+If items do not meet the conditions described for the "Deep Merge" option then they are collected the same way as described for the "Linear merge" option.

-Permission: Next users are able to merge launches:
+Permission: The following users can merge launches:
- Administrator
-
- User with PROJECT MANAGER Project Role
+- User with one of Project Role MEMBER, CUSTOMER on the project - Launch Owner
- Launch Owner
-- User with one of Project Role {MEMBER, CUSTOMER} on the project - Launch Owner
-
-In order to merge launches, perform the following steps:
+To merge launches, perform the following steps:

1. Navigate to the "Launches" page.
-
-2. Select required launches by click on their checkboxes.
-
+2. Select required launches by clicking on their checkboxes.
3. Open 'Actions' list
-
4. Click the "Merge" button.
-
-5. Merge launches popup will be opened.
-
+5. Merge launches popup will be opened.
6. Select the merge type 'Linear Merge' or 'Deep Merge'
-
7. Parameters fields become active.
Merge popup contains:
+
```javascript
   Launch name: (editable)
   Owner: The field is filled in with the current user.
@@ -127,8 +103,8 @@ In order to merge launches, perform the following steps:
   Time Start/End: filled in and disabled.
   Checkbox 'Extend child suites description with original launches names': Unchecked by default
```
-8. Make changes and click the "Merge" button on the "Merge Launches" window.
-   After the merge, a new run will be shown on the common launches list.
+
+8. Make changes and click the "Merge" button on the "Merge Launches" window. After the merge, a new run will be shown on the common launches list.

:::note
The following launches cannot be merged:
- Launches with active "Analysis" process
- Launches with active "Match issues in launch" process
:::
+
## Compare launches

-Compare launches feature can help you to compare launches side by side to define differences between them.
+Compare launches feature in our test execution dashboard can help you to compare launches side by side to identify differences between them.

Permissions: All users on the project

-In order to compare launches, perform the following steps:
+To compare launches, perform the following steps:

1. Navigate to the "Launches" page.
-
-2. Select required launches by click on their checkboxes.
-
+2. Select required launches by clicking on their checkboxes.
3. Expand the 'Actions' list
-
4. Select the "Compare" option.
-
-5. The system shows a window where a widget with bars is displayed, reflecting the
-Passed/Failed/Skipped and Product Bug/Automation Bug/System Issue/To
-Investigate test items.
+5. The system shows a window where a widget with bars is displayed, reflecting the Passed/Failed/Skipped and Product Bug/Automation Bug/System Issue/To Investigate test items.

:::note
Launches can be compared on the 'Launches' page and not on the 'Debug' page.
:::

@@ -167,47 +138,33 @@ Launches can be compared on the 'Launches' page and not on the 'Debug' page.

The "Debug" section is used to hide incorrect launches from the CUSTOMER view.

-Permission: Next users are able to move launches to "Debug"/"Launches" mode:
+Permission: The following users can move launches to "Debug"/"Launches" mode:
- Administrator
-
- User with PROJECT MANAGER Project Role
+- User with MEMBER Project Role
- Launch Owner
-- User with MEMBER Project Role
- Launch Owner
-
-In order to move a launch to the "Debug" section, perform the following steps:
-
-1. Navigate to the "Launches" page
+To move a launch to the "Debug" section, perform the following steps:
+1. Navigate to the "Launches" page
2. Select the "Move to Debug" option from the context menu on the left hand of the launch name.
-
3. The warning popup will be opened
-
-4. Click the 'Move' button
-
+4. Click the 'Move' button
5. The launch gets to "Debug" page and removed from "Launches" page

-To return the launch to the "Launches" section, navigate to the "Debug" section and select the
-"Move to All Launches" from the context menu.
+To return the launch to the "Launches" section, navigate to the "Debug" section and select the "Move to All Launches" from the context menu.

-In order to move some launches to the "Debug" section simultaneously, perform the following steps:
+To move some launches to the "Debug" section simultaneously, perform the following steps:

1. Navigate to the "Launches" page
-
2. Select required launches by click on their checkboxes
-
-3. Open 'Actions' list
-
-4. Select "Move to Debug" from the list
-
+3. Open 'Actions' list
+4. Select "Move to Debug" from the list
5. The warning popup will be opened.
-
6. Confirm the action
-
7. The launches get to the "Debug" page and removed from the "Launches" page.

-To return the launches to the "Launches" section, navigate to the "Debug" section, select them
-and select "Move to All Launches" from the 'Actions' list.
+To return the launches to the "Launches" section, navigate to the "Debug" section, select them and select "Move to All Launches" from the 'Actions' list.

@@ -215,56 +172,40 @@ and select "Move to All Launches" from the 'Actions' list.

The system allows finishing launches on the "Launches" and the "Debug" pages manually.

-Permission: Next users are able to stop launches:
+Permission: The following users can stop launches:
- Administrator
-
- User with PROJECT MANAGER Project Role
+- User with one of Project Role MEMBER, CUSTOMER on the project - Launch Owner
- Launch Owner
-- User with one of Project Role {MEMBER, CUSTOMER} on the project - Launch Owner
-
-In order to finish a launch that is in progress now, perform the following steps:
+To finish a launch that is in progress now, perform the following steps:

1. Navigate to the "Launches" page.
-
-2. Select the "Force Finish" option in the context menu on the left hand of the launch name.
-
+2. Select the "Force Finish" option in the context menu on the left of the launch name.
3. The warning popup will be opened.
-
4. Click the "Finish" button.
-
-5. The launch will be stopped and shown in the launches table with the "stopped" tag and the *"stopped"* description. All the statistics collected by this time will be displayed.
+5. The launch will be stopped and shown in the launches table with the "stopped" tag and the "stopped" description. All the statistics collected by this time will be displayed.

-In order to finish some launches simultaneously those are in progress now, perform the following steps:
+To finish several in-progress launches simultaneously, perform the following steps:

1. Navigate to the "Launches" page.
-
2. Select required launches that are in progress by click on their checkboxes
-
-3. Open 'Actions' list
-
+3. Open 'Actions' list
4. Select "Force Finish" from the list
-
5. The warning popup will be opened.
-
6. Confirm the action
-
-7. All selected launches will be stopped and shown in the launches table with the "stopped" tag and
-the *"stopped"* description. All the statistics collected by this time will be displayed.
+7. All selected launches will be stopped and shown in the launches table with the "stopped" tag and the "stopped" description. All the statistics collected by this time will be displayed.

## Export launches reports

-The system allows exporting launches reports on the "Launches" and the "Debug"
-modes. You can export the launch report in the following formats: PDF, XML, HTML.
+The system allows exporting launches reports on the "Launches" and the "Debug" modes. You can export the launch report in the following formats: PDF, XML, HTML.

-In order to export a launch, perform the following steps:
+To export a launch, perform the following steps:

1. Navigate to the "Launches" page.
-
2. Select the required format from the "Export" option in the context menu on the left hand of the launch name.
-
3. The launch report in the selected format will be opened.

@@ -276,93 +217,67 @@ The export operation works for a separate launch only. No multiple actions for t

The system allows deleting launches on the "Launches" and "Debug" pages.
-Permission: next users are able to delete launches:
+Permission: the following users can delete launches:
- Administrator
-
- User with PROJECT MANAGER Project Role
+- User with one of Project Role MEMBER, CUSTOMER on the project - Launch Owner
- Launch Owner
-- User with one of Project Role {MEMBER, CUSTOMER} on the project - Launch Owner

+There are two ways to delete launches.

-There are two ways how the launch/es can be deleted

-In order to delete a launch, perform the following steps:
+To delete a launch, perform the following steps:

1. Navigate to the "Launches" page.
-
-2. Select the "Delete" option in the context menu on the left hand of the launch name.
-
+2. Select the "Delete" option in the context menu on the left of the launch name.
3. The warning popup will be opened.
-
4. Click the "Delete" button.
-
-5. The launch will be deleted from ReportPortal. All related content will be deleted: test items, logs, screenshots.
+5. The launch will be deleted from ReportPortal. All related content will be deleted: test items, logs, screenshots.

-In order to delete more than one launch simultaneously, perform the following steps:
+To delete more than one launch simultaneously, perform the following steps:

1. Navigate to the "Launches" page
-
2. Select required launches by click on their checkboxes
-
3. Open 'Actions' list
-
4. Click 'Delete' option
-
5. The warning popup will be opened.
-
6. Confirm the action
-
7. The launches will be deleted from ReportPortal. All related content will be deleted: test items, logs, screenshots.

:::note
It is impossible to delete launches IN PROGRESS - "Delete" launch option will be disabled.
:::
+
## Delete test item

The system allows deleting test items in all levels of launch in the "Launches" and "Debug" pages.

-Permission: Next users are able to delete the test item:
+Permission: The following users can delete a test item:
- Administrator
-
- User with PROJECT MANAGER Project Role
+- User with one of Project Role MEMBER, CUSTOMER on the project - Launch Owner
- Launch Owner
-- User with one of Project Role {MEMBER, CUSTOMER} on the project - Launch Owner
-
-In order to delete a test item, perform the following steps:
+To delete a test item, perform the following steps:

1. Navigate to the "Launches" page
-
2. Drill down to the test level of any item
-
3. Select the "Delete" option in the context menu next to the selected test item.
-
4. The warning popup will be opened.
-
5. Click the "Delete" button.
-
6. The test item will be deleted from ReportPortal with all related content (logs, screenshots).

-In order to delete some test items simultaneously, perform the following steps:
+To delete some test items simultaneously, perform the following steps:

1. Navigate to the "Launches" page
-
2. Drill down to the test level of any item
-
3. Select required test items by click on their checkboxes
-
-4. If you are on SUITE or TEST view, click 'Delete' button from the header
-
-   If you are on STEP view, open 'Actions' list and select "Delete" option
-
+4. If you are on SUITE or TEST view, click the 'Delete' button from the header. If you are on STEP view, open the 'Actions' list and select the "Delete" option
5. The warning popup will be opened.
-
6. Confirm the action
-
7. Test items will be deleted from ReportPortal with all related content (logs, screenshots).
:::note
diff --git a/docusaurus.config.js b/docusaurus.config.js
index 49624c6f5..ff5bd94ab 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -251,7 +251,7 @@ const config = {
      versions: {
        '5.10': {
          specPath: 'apis/5.10/service-api.yaml',
-          outputDir: 'docs/api/service-api/5.10',
+          outputDir: 'docs/api/service-api/versions/5.10',
          label: 'v5.10',
          baseUrl: `${baseUrl}category/api/service-api-5.10`,
        },
@@ -270,7 +270,7 @@ const config = {
      versions: {
        '5.10': {
          specPath: 'apis/5.10/service-uat.yaml',
-          outputDir: 'docs/api/service-uat/5.10',
+          outputDir: 'docs/api/service-uat/versions/5.10',
          label: 'v5.10',
          baseUrl: `${baseUrl}category/api/service-uat-5.10`,
        },