
Page Not Found

You may have used an outdated link as there have been some changes in the structure of the documentation.

But it's clearly here somewhere!
Please use the keyword search to find it!


FAQ

On this page, we have put together a list of the most frequently asked questions, along with prompt answers.

1. Does on-prem installed ReportPortal make any external calls? What are the content and nature of these calls?

All test results and testing data reside in-house, within your instance of ReportPortal. However, there are two types of external calls that ReportPortal makes. The first checks our status page for the latest version and informs users of it on the login page. The second sends anonymized data to Google Analytics, helping us refine the user experience and better understand application usage. This can be toggled off if desired.

2. Assuming ReportPortal locally caches logs to understand their content, where are these stored, and what are the associated retention policies?

ReportPortal utilizes PostgreSQL for its database, MinIO and the local system for file storage, and Elasticsearch for log indexing and ML processes.

Retention policies can be set and adjusted within the application on a per-project basis.

3. How is data encrypted in transit and at rest?

We use encryption in transit via SSL for our SaaS instances. For on-prem installations, it depends on your load balancer configuration.

We use encryption at rest for our SaaS instances; it is provided by AWS and configured during VM provisioning. For on-prem installations, it depends on your DevOps setup.

4. Does the containerized solution function as a standalone, or can it be integrated with K8S or other orchestration platforms? Is there a helm chart available?

ReportPortal is containerized and can be orchestrated using either docker-compose or Kubernetes; a Helm chart is available for Kubernetes deployment.

5. Is there any training available to use ReportPortal effectively?

Check our Tutorial and read our blog post with tips on getting the most out of ReportPortal. We also recommend exploring our documentation, where you can find screenshots and video instructions on ReportPortal functionality.

6. Is there a demo available?

Sure, you can explore ReportPortal without installation by visiting our Demo instance.

7. How can I begin using ReportPortal?

The initial steps involve installing and configuring the tool. You can find the installation steps in our documentation.

8. Does the tool integrate with my existing test automation framework?

ReportPortal can be integrated with common testing frameworks and CI tools. Consult this section of the documentation for detailed information on test framework integration, and use the following links for CI/CD: Integration with GitLab CI, Integration with Jenkins.
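For illustration, here is a minimal sketch of what the reporting agents do under the hood: they call the ReportPortal REST API. The host, project name, and API key below are placeholders; in practice you would use a ready-made agent for your framework rather than raw HTTP calls.

```python
import requests
from datetime import datetime, timezone

RP_URL = "https://demo.reportportal.io"  # placeholder: your instance URL
PROJECT = "my_project"                   # placeholder: your project name
API_KEY = "your-api-key"                 # placeholder: a user API key

# Start a launch; the API accepts an ISO-8601 (or epoch-millis) start time.
resp = requests.post(
    f"{RP_URL}/api/v1/{PROJECT}/launch",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "name": "Smoke suite",
        "startTime": datetime.now(timezone.utc).isoformat(),
        "description": "Launch started via the REST API",
    },
)
resp.raise_for_status()
launch_uuid = resp.json()["id"]  # UUID used by subsequent item/finish calls
print("Started launch:", launch_uuid)
```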

9. What type of license does ReportPortal use?

ReportPortal is licensed under Apache v2.0, which means it is free to use (including for commercial usage), comes with no liability or warranty, and includes no service or support.

10. Does ReportPortal have any paid features?

At the moment, we offer a single premium feature: Quality Gates. It is a set of predefined criteria that must be satisfied for a run to be deemed successful.

11. Does ReportPortal use AI?

We provide ML-driven failure triage. Read this article to learn how we use AI.

12. Do we need specific infrastructure prerequisites to avoid performance problems?

Look into Optimal Performance Hardware setup.

13. What types of reports can I generate with the ReportPortal?

ReportPortal offers many widgets to visualize test results and understand the state of the product. Our most popular widgets are: Overall statistics chart, Launch statistics chart, Failed cases trend chart, Launch execution and issue statistic, and Component health check.

14. Can ReportPortal aggregate performance test results?

We do not support direct integration with performance testing frameworks, but as a workaround you can import performance test results in JUnit format into ReportPortal. Further information on this topic can be found here.
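As a sketch of that workaround, the import endpoint accepts a zip archive of JUnit-format XML files. The host, project name, and API key below are placeholders:

```python
import requests

RP_URL = "https://demo.reportportal.io"  # placeholder: your instance URL
PROJECT = "my_project"                   # placeholder: your project name
API_KEY = "your-api-key"                 # placeholder: a user API key

# results.zip contains one or more JUnit-format XML reports, e.g. exported
# from a performance tool that can write JUnit output.
with open("results.zip", "rb") as archive:
    resp = requests.post(
        f"{RP_URL}/api/v1/{PROJECT}/launch/import",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": ("results.zip", archive, "application/zip")},
    )
resp.raise_for_status()
print(resp.json())  # confirmation message with the created launch id
```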

15. Does ReportPortal have integration with Jira?

Our test automation reporting dashboard integrates with the following bug tracking systems: Jira Server, Jira Cloud, Azure DevOps, and Rally.


  • The user will be assigned to the project that the invitation was sent from, and to "Personal Project" with the PROJECT MANAGER project role.

note

The registration link will remain active until the user registers in the system, but for no longer than 24 hours.

    Detailed project info

1. Log in to the ReportPortal instance as Administrator.

    2. Navigate to the "Administrate" section -> "All Projects" page.

    3. Click "See detailed information" button.

    4. View aggregated data of the selected project. Available period options are:

    Delete projects

    To delete a project, perform the following steps:

1. Log in to the ReportPortal instance as Administrator.

    2. Navigate to the "Administrate" section -> "All Projects" page.

3. Click on the ellipsis button on the project preview.

    4. Click on the "Delete" option. A warning pop-up message will appear.

    5. Click "Delete". The project will be deleted from ReportPortal.

    note

    PERSONAL PROJECTS cannot be deleted from the system.

All the data (such as launches, filters, dashboards and widgets) that the user owns on the projects will be kept in ReportPortal.

    Edit user account role

The only place in ReportPortal where a user can be granted Administrator rights is the All Users page.

    Give ADMINISTRATOR role

To grant the Administrator role to any user, perform the following steps:

1. Log in to the ReportPortal instance as Administrator.

    2. Navigate to the "Administrate" section -> "All Users" page.

3. Hover over the user's name; the "Make admin" button will be displayed.

4. Click on the "Make admin" button; a confirmation message will be shown.

5. Click the "Change" button in the pop-up window. The account role User will be changed to Administrator, and the user account will be marked with an "admin" label.

    Take away ADMINISTRATOR role

To revoke the Administrator account role, perform the following steps:

1. Log in to the ReportPortal instance as Administrator.

    2. Navigate to the "Administrate" section -> "All Users" page.

    3. Click on the "Admin" button near the user's name.

4. A confirmation message will be shown.

5. Click the "Change" button. The account role "Administrator" will be changed to "User".


    Event monitoring

    Starting from version 23.2, ReportPortal can monitor all activities (events) at both the project and instance levels.

    Project level event monitoring

    To view the list of all activities within your project, open the menu at the bottom of the page as an Administrator and select the "Administrate" option. All existing projects are listed on the "All Projects" page. Click on the ellipsis button next to the project and choose the "Monitoring" option from the dropdown.

    Here, you will find a table with the following columns: Time, User, Action, Object Type, Object Name, Old Value, and New Value.

    Time

This column displays the time in a "time ago" format (e.g., "10 minutes ago"). Hovering over it shows the precise action time.

    User

This column shows who performed the action. We track not only actions by specific users but also, for your convenience, actions performed by ReportPortal itself or certain ReportPortal services. For example, actions by the Jobs Service (such as launch deletions) are included.

    If the activity was on behalf of a user, and their account was deleted, then there will be a "deleted user" entry in the "User" column.

    Action

This column displays the action that produced the event.

    Event actions: Create dashboard, Update dashboard, Delete dashboard, Create widget, Update widget, Delete widget, Create filter, Update filter, Delete filter, Create custom defect type, Update defect, Delete defect, Create integration, Update integration, Delete integration, Start launch, Finish launch, Delete launch, Update project, Update analyzer, Post issue, Link issue, Unlink issue, Generate index, Delete index, Start import, Finish import, Update item, AA linked issue, AA changed defect type, Create pattern rule, Update pattern rule, Delete pattern rule, PA find pattern.

    Object Type

    This refers to the object on which the action was taken.

    Event objects: Launch, Dashboard, Custom defect type, Notification rule, Filter, Import, Integration, Test item, Project, Ticket, User, Widget, Pattern Rule, index, Plugin.

    Object Name

    This is the name of the widget, launch, etc.

    The Old Value and New Value columns display the changes that were made.

    You can filter activities by user, action, object type, and object name.

    Another way to view the event list in your project is by creating a "Project Activity Panel" widget.

    Instance level event monitoring

    Instance level events are not displayed in the UI – they are stored in the database.

Instance level events: Account deletion, Bulk account deletion, Administrator unassign, Provide Administrator permission for a user, Project creation, Bulk delete project by ReportPortal administrator, Delete project by ReportPortal administrator, Delete Personal project when deleting user, Create Global Integration, Update Global Integration, Delete Global Integration, Bulk delete of Global Integration via API only, Manual plugin upload, Delete Plugin, Update Plugin (disable/enable), Create user in Administrate, Create user via auth service SAML.

    Additionally, during instance setup, you can enable event storage in an audit log file. This data can be sent to a Security Information and Event Management (SIEM) system using tools like Fluentd, Fluentbit, or Filebeat. Logs and events are then checked and monitored within the SIEM system.

    The primary advantage of the audit log file is that it preserves all records without alterations or deletions. In contrast, data in the database can be modified or deleted. For example, if launches or projects are deleted, the corresponding data is removed from the database. Deleting accounts leads to data obfuscation in the database.

    Hence, if historical monitoring and strict accountability are required, enabling event storage in an audit log file is recommended. Financial companies, for example, are often mandated to retain all user actions in their services for 3 years.

    note

    Administrators should ensure that log rotation is configured for the location where the audit log will be saved, as a substantial amount of data will accumulate.

    Event monitoring assists organizations, especially in industries like finance and healthcare, in maintaining the security of their systems and data.

Auto-Analysis

To use Auto-Analysis effectively, you should go through several stages.

    Create an analytical base in the ElasticSearch

    First of all, you need to create an analytical base. For that, you should start to analyze test results manually.

All test items with a defect type which have been analyzed manually or automatically by ReportPortal are sent to ElasticSearch.

    The following info is sent:

For better analysis, we merge small logs (consisting of 1-2 log lines and <= 100 words) together. We store this merged log message as a separate document if there are no other big logs (more than 2 log lines, or having a stacktrace) in the test item. If there are big logs, we store the merged log message in a separate field "merged_small_logs" for each of them.

The Analyzer preprocesses log messages from the request for test failure analysis: it extracts the error message, stacktrace, numbers, exceptions, URLs, paths, parameters and other parts from the text to search for the most similar items by these parts in the analytical base. These parts are saved in separate fields for each log entry.

Each log entry, along with its defect type, is saved to ElasticSearch as a separate document. All the documents created compose an Index. The more test results the Index has, the more accurate the results generated by the analysis process will be.

    tip

If you are not sure how many documents (logs) the Index contains at the moment, you can check it.
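For instance, assuming you have direct access to the Elasticsearch instance backing the Analyzer (the host and index name below are assumptions; GET /_cat/indices lists the actual index names), the standard _count API reports how many documents an index holds:

```python
import requests

ES_URL = "http://localhost:9200"   # assumption: default Elasticsearch address
INDEX = "rp_my_project"            # assumption: check GET /_cat/indices for
                                   # the real per-project index name

count = requests.get(f"{ES_URL}/{INDEX}/_count").json()["count"]
print(f"{INDEX} contains {count} documents")
```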

    Test items of a launch in Debug mode are not sent to the service Analyzer. If the test item is deleted or moved to the Debug mode, it is removed from the Index.

    Auto-Analysis process

After your Index has been created, you can start using the auto-analysis feature.

Analysis can be launched automatically (via Project Settings) or manually (via the menu on the All Launches view). After the process starts, all items with the “To investigate” defect type that have logs (log level >= 40 000) in the analyzed launch are picked and sent to the Analyzer service and ElasticSearch for investigation.

    How Elasticsearch returns candidates for Analysis

Here is a simplified description of how Auto-analysis candidates are searched for via ElasticSearch.

    When a "To investigate" test item appears we search for the most similar test items in the analytical base. We create a query which searches by several fields, message similarity is a compulsory condition, other conditions boost the better results and they will have a higher score (boost conditions are similarity by unique id, launch name, error message, found exceptions, numbers in the logs and etc.).

ElasticSearch then receives a log message, divides it into terms (words) with a tokenizer, and calculates the importance of each term. For that, ElasticSearch computes TF-IDF for each term in the analyzed log. If a term's importance is low, ElasticSearch ignores it.

    note

Term frequency (TF) – how many times a term (word) is used in the analyzed log;

Document frequency (DF) – the number of documents in the Index that use this term (word);

TF-IDF (TF — term frequency, IDF — inverse document frequency) — a statistical measure used to assess the importance of a term (word) in the context of a log that is part of an Index. The weight of a term (word) is proportional to how often it is used in the analyzed log and inversely proportional to how often it is used across the Index.

The term (word) with the highest level of importance is one that is used very frequently in the analyzed log and only moderately across the Index.
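To make the weighting concrete, here is a toy TF-IDF computation in Python. The exact formula Lucene/ElasticSearch uses differs in smoothing details, so treat this purely as an illustration:

```python
import math
from collections import Counter

# Toy "Index": three previously investigated logs.
index_docs = [
    "connection timeout while calling service",
    "assertion failed expected 200 but was 500",
    "connection refused by service",
]
analyzed_log = "connection timeout calling payment service"

tf = Counter(analyzed_log.split())  # term frequency within the analyzed log

for term in tf:
    # document frequency: how many indexed logs contain the term
    df = sum(term in doc.split() for doc in index_docs)
    idf = math.log((1 + len(index_docs)) / (1 + df)) + 1  # smoothed IDF
    # rare terms ("payment") get the highest weight, common ones the lowest
    print(f"{term}: tf={tf[term]} df={df} weight={tf[term] * idf:.2f}")
```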

After all important terms are defined, ElasticSearch calculates the level of similarity between the analyzed log and each log in the Index; a score is calculated for each log from the Index.

    note

How the score is calculated:

score(q,d) = coord(q,d) * SUM over t in q of ( tf(t in d) * idf(t)² * t.getBoost() )

    Where:

The results are sorted by score; if scores are equal, results are sorted by the "start_time" field, which boosts test items with dates closer to today. So the latest defect types appear higher in the results returned by Elasticsearch.

ElasticSearch returns to the Analyzer service the 10 highest-scoring logs for each log. The Analyzer regroups all the results by defect type and chooses the best representative for each defect type group based on their scores.

    note

If the test item has several logs, the best representative for a defect type group is the log with the highest score among all logs.

    How Auto-analysis makes decisions for candidates, returned by Elasticsearch

ElasticSearch returns to the Analyzer service the 10 highest-scoring logs for each query, and all these candidates are processed further by the ML model. The Analyzer regroups all the results by defect type and chooses the best representative for each defect type group based on their scores.

The ML model is an XGBoost model whose features (about 30 of them) represent different statistics about the test item, log message texts, launch info, etc., for example:

The model gives a probability for each defect type group; we choose the defect type group with the highest probability, and that probability must be >= 50%.

The defect comment and the BTS link of the best representative from this group are applied to the analyzed item.

The Auto-analysis model is retrained per project; details can be found in the section "How models are retrained" below.

This is how Auto-Analysis works and defines the most relevant defect type based on previous investigations. We also give users the ability to configure auto-analysis manually.

    Auto-analysis Settings

All settings and configurations of the Analyzer and ElasticSearch are located on a separate tab in Project Settings.

1. Log in to the ReportPortal instance as Administrator or a project member with the PROJECT MANAGER role on the project;

2. Go to Project Settings and choose the Auto-Analysis section;

    In this section user can perform the following actions:

    1. Switch ON/OFF auto-analysis;

    2. Choose a base for analysis (All launches/ Launches with the same name);

    3. Configure ElasticSearch settings;

    4. Remove/Generate ElasticSearch index.

Switch ON/OFF automatic analysis

    To activate the "Auto-Analysis" functionality in a project, perform the following steps:

1. Log in to the ReportPortal instance as Administrator or a project member with the PROJECT MANAGER role on the project.

    2. Select ON in the "Auto-Analysis" selector on the Project settings / Auto-analysis section.

    3. Click the "Submit" button. Now "Auto-Analysis" will start as soon as any launch finishes.

Base for analysis (All launches / Launches with the same name)

You can choose which results from previous runs should be considered by Auto-Analysis when defining the failure reason.

There are two options:

If you choose “All launches”, test results in the launch will be analyzed based on all of the project's data in ElasticSearch.

If you choose “Launches with the same name”, test results in the launch will be analyzed based only on the ElasticSearch data from launches with the same name.

    You can choose those configurations via Project configuration or from the list of actions on All launches view.

    Configure ElasticSearch settings

We also give our users the possibility to configure 2 main parameters of ElasticSearch manually:

The MinShouldMatch parameter is involved in the score calculation. It is a minimum value for coord(q,d) (the percentage of word overlap between the analyzed log and a particular log from ElasticSearch). You can thus increase search strictness by choosing the minimum level of similarity required.
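In Elasticsearch terms, this maps onto the standard minimum_should_match query option. The real Analyzer query is far more elaborate and searches many fields; the host, index name, and field name below are assumptions used only to illustrate the mechanism:

```python
import requests

ES_URL = "http://localhost:9200"  # assumption: default Elasticsearch address
INDEX = "rp_my_project"           # assumption: your analyzer index name

# Require at least 80% of the analyzed message's terms to match before a
# stored log is considered a candidate at all.
query = {
    "query": {
        "match": {
            "message": {   # assumption: a simplified single-field search
                "query": "connection timeout while calling payment service",
                "minimum_should_match": "80%",
            }
        }
    }
}
hits = requests.post(f"{ES_URL}/{INDEX}/_search", json=query).json()
print(hits["hits"]["total"])
```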

The Number of log lines parameter limits how many of the first log lines the analyzer takes into account; this is useful if you write the root cause of a test failure in the first lines.

With these 2 parameters, you can configure the accuracy of the analysis that you need. For your convenience, we have prepared 3 presets:

    Remove/Generate ElasticSearch index

There are two possible actions that can be performed on the Index in ElasticSearch.

You can remove the Index from ElasticSearch; all logs with their defect types will be deleted, and the ML model will be reset. All data with your investigations will be deleted from ElasticSearch. To create a new Index, you can start investigating test results manually or generate data based on previous results on the project once again.

    note

Your investigations in ReportPortal will not be changed. The operation concerns only the ElasticSearch base.

Alternatively, you can generate the Index in ElasticSearch. In this case, all data is removed from ElasticSearch and a new Index is generated based on all previous investigations on the project, following the current analysis settings.

At the end of the process, you will receive an email informing you that the process has finished and how many items now appear in ElasticSearch.

    You can use index generation for several goals. For example, assume two hypothetical situations when index generation can be used:

    note

The new base is generated from the logs and settings that exist at the moment of the operation, so the Index before removal and the Index after generation can differ.

We strongly recommend not using auto-analysis until the new Index has been generated.

    Manual analysis

    Analysis can be launched manually. To start the analysis manually, perform the following steps:

    1. Navigate to the "Launches" page.

    2. Select the "Analysis" option from the context menu next to the selected launch name.

3. Choose the scope of previous results based on which test items should be auto-analyzed. The default is the one chosen on the settings page, but you can change it manually.

Via this menu you can choose from 3 options, unlike on Project Settings:

The All launches and Launches with the same name options work the same as on Project Settings. If you choose Only current launch, the system analyzes the test items of the chosen launch based only on the already investigated data of this launch.

4. Choose which items from the launch should be analyzed:

If the user chooses Only To investigate items, the system analyzes only items with the "To investigate" defect type in the chosen launch;

If the user chooses Items analyzed automatically (by AA), the system analyzes only items that have already been analyzed by auto-analysis. The results of the previous analysis run will be reset, and the items will be analyzed once again.

If the user chooses Items analyzed manually, the system analyzes only items that have already been analyzed manually by the user. The results of the previous analysis run will be reset, and the items will be analyzed once again.

If multiple options are combined, the system analyzes results according to the chosen options.

    note

The Ignore flag is preserved: if an item has the Ignore in AA flag, it will not be re-analyzed.

    tip

For the Only current launch option, you cannot choose Items analyzed automatically (by AA) and Items analyzed manually simultaneously.

    1. Click the "Analysis" button. Now "Auto-Analysis" will start.

    Any launches with an active analyzing process will be marked with the "Analysis" label.

    Label AA

When a test item is analyzed by ReportPortal, an "AA" label is set on the test item at the Step level. You can filter results by the “Analyzed by RP (AA)” parameter.

    Ignore in Auto-Analysis

If you don't want to save some test items in ElasticSearch, you can use "Ignore in Auto-Analysis". For that, you can choose this action in the “Make decision” modal:

    Or from the action list for several test items:

    When you choose “Ignore in AA”, logs of the chosen item are removed from the ElasticSearch.


    How models are retrained

Several models take part in the Auto-analysis and ML suggestions processes:

• Auto-analysis XGBoost model, which gives the probability that a test item is of a specific defect type, based on the most similar test item with this defect type in the history
• ML suggestions XGBoost model, which gives the probability that a test item is similar to a given test item from the history
• Error message language model on TF-IDF vectors (Random Forest Classifier), which gives the probability that an error message is of a specific defect type or its subtype, based on the words in the message. The probability from this model is used as a feature in the main boosting algorithm (see the sketch below).
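As a rough sketch of what such an error-message model looks like, here is a scikit-learn pipeline of TF-IDF vectors feeding a Random Forest. The Analyzer's actual training pipeline, features, and hyperparameters are internal, so this is illustrative only:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy training data: error messages labelled with defect type groups
# (pb = product bug, ab = automation bug, si = system issue).
messages = [
    "NullPointerException at LoginPage.open",
    "AssertionError: expected 200 but was 500",
    "Connection refused: environment unreachable",
    "Element not found: #submit-button",
]
labels = ["pb", "pb", "si", "ab"]

# TF-IDF vectors over the message words feed a Random Forest; its class
# probabilities are then used as a feature of the main boosting model.
model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
model.fit(messages, labels)

new_error = ["AssertionError: expected 404 but was 500"]
print(dict(zip(model.classes_, model.predict_proba(new_error)[0])))
```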

    At the start of the project, you have global models. They were trained on 6 projects and were validated to give a good accuracy on average. To have a more powerful and personalized test failure analysis, the models should be retrained on the data from the project.

    note

If a global model performs better on your data, the retrained model won't be saved: we keep a custom model only if it performs better on your data than the global one.

Triggering information and retrained models are saved in MinIO (or a filesystem), as configured in the Analyzer service settings.

    Retraining triggering conditions for Error message Random Forest Classifier:

• Each time a test item's defect type is changed to another issue type (except "To Investigate"), we update the triggering info, which stores the total quantity of test items with defect types and the quantity labelled since the last training. This information is saved in the file "defect_type_trigger_info" in MinIO.
• When we have more than 100 labelled items in total and 100 new labelled items since the last training, retraining is triggered; if validation metrics are better than the global model's metrics on the same data points, we save a custom "defect_type" model in MinIO and use it further in the auto-analysis and suggestions functionality.

    Retraining triggering conditions for Auto-analysis and Suggestion XGBoost models:

• We gather training data from several sources:
  • when you choose one of the suggestions (the chosen test item becomes a positive example, the others become negative ones);
  • when you don't choose any suggestion and edit the test item somehow (set a defect type manually, add a comment, etc.), all suggestions become negative examples;
  • when auto-analysis runs and finds a similar test item for a test item, we consider it a positive example until the user changes its defect type manually; in that case, the result is marked as negative.
• Each time a suggestion analysis runs or a defect type change happens, we update the triggering info for both models. This information is saved in the files "auto_analysis_trigger_info" and "suggestion_trigger_info" in MinIO.
• When we have more than 300 labelled items in total and 100 new labelled items since the last training, retraining is triggered; if validation metrics are better than the global model's metrics on the same data points, we save a custom "auto_analysis" model in MinIO and use it further in the auto-analysis functionality.
• When we have more than 100 labelled items in total and 50 new labelled items since the last training, retraining is triggered; if validation metrics are better than the global model's metrics on the same data points, we save a custom "suggestion" model in MinIO and use it further in the suggestions functionality.

    ML Suggestions

The ML suggestions functionality is based on previously analyzed results (either manually or via the Auto-analysis feature) using Machine Learning. The functionality is provided by the Analyzer service in combination with ElasticSearch.

This analysis hints at which analyzed items are most similar to the current test item. You can interact with this functionality in several ways:

    • Choose one of the suggested items if you see that the reason for the current test item is similar to the suggested one. When you choose the item and apply changes to the current item, the following test item characteristics will be copied from the chosen test item:

  • a defect type;
  • a link to BTS (if it exists);
  • a comment (if it exists);
    • If you see no suitable suggested test item for the current test item, just do not select any of them.

    How the ML suggestions functionality is working

ML Suggestions searches for previously analyzed items similar to the current test item, so it requires an analytical base saved in Elasticsearch. ML suggestions take into account all user-investigated items, auto-analyzed items, and items chosen from ML suggestions. As the analytical base grows, the ML suggestions functionality has more examples to search through and can suggest better options.

ML suggestions analysis runs every time you enter the "Make decision" editor. ML suggestions run for all test items, no matter what defect type they currently have. This functionality processes only test items with logs (log level >= 40000).

    The request for the suggestions part looks like this:

    • testItemId;
    • uniqueId;
    • testCaseHash;
    • launchId;
    • launchName;
    • project;
    • analyzerConfig;
    • logs = List of log objects (logId, logLevel, message)
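Sketched as a Python structure, the payload looks roughly like this (all values are placeholders, and the analyzerConfig keys are assumptions based on the settings described above):

```python
# Illustrative shape of a suggestions request; all values are placeholders
# and the analyzerConfig keys are assumptions based on the settings above.
suggest_request = {
    "testItemId": 12345,
    "uniqueId": "auto:c1b2d3...",
    "testCaseHash": -167511893,
    "launchId": 678,
    "launchName": "Smoke suite",
    "project": 42,
    "analyzerConfig": {"minShouldMatch": 80, "numberOfLogLines": -1},
    "logs": [
        {"logId": 111, "logLevel": 40000, "message": "AssertionError: ..."},
    ],
}
```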

The Analyzer preprocesses log messages from the request for analysis: it extracts the error message, stacktrace, numbers, exceptions, URLs, paths, parameters and other parts from the text to search for the most similar items by these parts in the analytical base. We make several requests to Elasticsearch to find similar test items across all the error logs.

    note

    When a test item has several error logs, we will use the log with the highest score as a representative of this test item.

ElasticSearch returns to the Analyzer service the 10 highest-scoring logs for each query, and all these candidates are processed further by the ML model. The ML model is an XGBoost model whose features (about 40 of them) represent different statistics about the test item, log message texts, launch info, etc., for example:

    • the percent of selected test items with the following defect type
    • max/min/mean scores for the following defect type
    • cosine similarity between vectors, representing error message/stacktrace/the whole message/urls/paths and other text fields
    • whether it has the same unique id, from the same launch
    • the probability for being of a specific defect type given by the Random Forest Classifier trained on Tf-Idf vectors

The model gives a probability for each candidate; we filter out test items with a probability <= 40%. We sort the test items by this probability, then deduplicate test items within this ranked list: if two test items' messages are >= 98% similar, we keep the one with the highest probability. After deduplication, we take a maximum of 5 items with the highest score to show in the ML Suggestions section.
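A condensed sketch of that post-processing step, with the thresholds taken from the text; the similarity function here is only a stand-in for the Analyzer's real message-similarity measure:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Stand-in for the Analyzer's real message-similarity measure.
    return SequenceMatcher(None, a, b).ratio()

# Candidates as (probability, message) pairs scored by the ML model.
candidates = [
    (0.97, "timeout calling payment service"),
    (0.95, "timeout calling payment service"),   # near-duplicate
    (0.62, "element not found: #submit-button"),
    (0.35, "disk full on agent node"),
]

kept = [c for c in candidates if c[0] > 0.40]   # drop probability <= 40%
kept.sort(key=lambda c: c[0], reverse=True)     # rank by probability
deduped = []
for prob, msg in kept:
    # among >= 98%-similar messages, keep only the highest-probability one
    if all(similarity(msg, m) < 0.98 for _, m in deduped):
        deduped.append((prob, msg))
suggestions = deduped[:5]                        # show at most 5 items
print(suggestions)
```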

The ML Suggestions section contains at most 5 suggested items; they are shown together with the scores given by the model and divided into 3 groups:

    • the group "SAME", test items with the score = 100%
    • the group "HIGH", test items with the score in the range [70% - 99.9%]
    • the group "LOW", test items with the score in the range [40% - 69.9%]

    Manual Analysis

Manual Analysis is represented in our test report dashboard by the “Make decision” modal.

    “Make decision” modal redesign

A redesign of the “Make decision” modal was implemented in version 5.7. This feature helps to sort out auto tests and decide: What is the problem? How can it be marked? Is it required to post or link an issue? The functionality became easier to use after the redesign.

    The “Make decision” modal can be opened in three ways:

    1) from the Step level

    2) via Actions

    3) from the Log level

    “Execution to change” section

The “Execution to change” section is displayed at the top left of the “Make decision” modal. It includes the step name and current defect type. It can also have a log, a comment, a link to a Bug Tracking System (BTS), and a label (AA, PA, Ignore AA) if they exist. You can expand logs to understand why the step failed.

    How to set a defect type and type a comment

    “Select defect” section is displayed at the top right of the “Make decision” modal. It includes “Manual selection”, “Analyzer suggestions”, “History of the test”.

    You can select a defect type and type a comment manually. Selected defect type and added comment will be applied to the current item (there is also a possibility to apply them to other items – please, have a look at “Apply for” section).

    How to use “Analyzer suggestions”

You can also select any step from the “Analyzer suggestions” with a similar log. The similar log is marked with a red asterisk. Then the defect type, the comment, and the linked BTS ticket (if they exist) of the suggested step will be applied to the current item (there is also a possibility to apply them to other items – please have a look at the “Apply for” section).

    “History of the test” section

    You can see the “History of the test” – which defect type this step had in previous runs. You can select any item from the “History of the test”. Then the defect type, the comment and linked BTS ticket (if exist) of the suggested step will be applied to the current item (there is also a possibility to apply them to other items – please, have a look at “Apply for” section).

    How to select other steps for analysis

The “Make decision” modal works not only with the current step. This feature allows you to select other steps with the “To Investigate” defect type, which can be changed as well. For that, please expand the “Apply for” section and select the needed option.

    The “Results will be applied for” message is displayed at the bottom after selection. This section was added in version 5.7.

    Now you can view all changes before applying. Expand the “Results will be applied for” section to see information about changes. You should click “Apply” button to apply selected changes.

    Bulk update

    There is also a possibility for Bulk update, when the changes are applied to all selected test items.

    As you can see, “Make decision” modal is a time-saving tool for engineers.


    Pattern Analysis

    Pattern analysis is a feature that helps you to speed up test failure analysis by finding common patterns in error logs.

    Types of Pattern Analysis

    String – any problem phrase.

    Regex – regular expression.

    note

It is better to use a STRING rule instead of a REGEX rule wherever possible, to speed up Pattern Analysis processing in the database. As a result, your analysis completes faster with STRING patterns than with REGEX, and the database workload is reduced.

    Use case 1:

Problem: A user knows several common reasons why test cases fail. During a test run, a lot of tests have failed. The user needs to check the logs of the tests to find out why the test cases failed.

Solution: Create pattern rules for all common reasons, each containing a problem phrase (for example: "Expected status code <404> but was <500>" or "Null response") or a Regex query (for example: java:[0-9]*). Switch on Pattern Analysis. Launch a test run. ReportPortal then finds all failed items whose error logs contain known patterns and marks them with a label with the pattern name. Find all items that failed for the same reason by filtering by Pattern Name on the Step view. Add the Most popular pattern widget (TOP-20) and track the TOP-20 most popular reasons for test failures in the build.
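To illustrate what the Regex rule from this example matches, here is the expression applied to a hypothetical stack-trace line (ReportPortal evaluates patterns server-side; the snippet only demonstrates the regex itself):

```python
import re

# The Regex rule from the example above: matches Java stack-trace locations
# such as "MyTest.java:42" in error logs.
pattern = re.compile(r"java:[0-9]*")

log_line = "at com.example.MyTest.checkLogin(MyTest.java:42)"  # hypothetical
print(pattern.search(log_line).group())  # -> "java:42"
```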

    Use case 2:

Problem: A test run has finished. A user found that more than 3 items failed for the same reason and wants to find all such items.

Solution: Create a new pattern rule on Project Settings. Launch Pattern Analysis manually for one launch. Find all items that failed for the same reason by filtering by Pattern Name on the Step view.


    Search for the similar "To investigate" items

    Let's consider below an example of ML-driven failure triage in ReportPortal.

    Use case:

    Situation: Analyzer has completed its work and marked known issues with defect types.

    But there are a lot of failures with a similar unknown reason in the run. All such items have "To investigate" defect type.

    Problem: A user should check and analyze all failed items.

    Solution:

A user is on All Launches; they click on "To investigate" and open a list of items. When the user clicks on the pencil next to a defect type, the system opens the "Make decision" modal. In this modal, the user can see all items with the "To investigate" defect type and the same failure reason.

There are 3 options for searching for similar "To investigate" items on the Step level:

    • Current item only
    • Similar "To investigate" in the launch & current item
    • Similar "To investigate" in 10 launches & current item

There are 4 options for searching for similar "To investigate" items on the Log level:

    • Current item only
    • Similar "To investigate" in the launch & current item
    • Similar "To investigate" in 10 launches & current item
    • "To investigate" from the history line & current item

If launches are filtered on the All Launches page, an additional option, Similar "To investigate" in the Filter & current item, appears on the Step and Log levels.

    A user can select all identical failures and perform the bulk operation for them.


    Unique Error Analysis

You can look at test failure analysis from different points of view: quantitative (passing rate: how many tests have failed?) and qualitative (why have they failed?). For example, if 1000 test cases are failed, then

    1. they could fail for the same reason

    2. they could fail for various reasons

    While failed tests allow you to understand what is broken, “Unique Error analysis” functionality implemented in version 5.7 will show you why it broke. The main advantage of this solution is that a list of all unique errors of the launch is presented in one place. Moreover, the system automatically groups tests by the same errors: when you expand some error, you see a list of steps where it occurred.

    “Unique error auto-analysis” is set ON by default.

    “Include/exclude numbers” settings

There are 2 settings, “include numbers” and “exclude numbers”, depending on whether you consider numbers in error logs to have significant value for the analysis.

    Now, Unique error auto-analysis will be started after a launch has been finished.

    To see the list of “Unique errors” for the launch, open any item level in the launch and click “Unique errors” tab.

    Finally, you can see the list of “Unique errors”.

There you can see a list with groups of error logs. You can expand a group to check which tests belong to it, which can give you a hint during error analysis and defect assignment. The groups are formed based on unique error logs; some small error logs can be merged and displayed as one error.

    How to run “Unique error analysis” manually

    tip

You can also run “Unique error analysis” manually from any item level in case auto-analysis is set OFF. Please follow the steps below:

    You can also run “Unique Error analysis” from the menu next to a particular launch.

    Examples with “Include/exclude numbers” settings

    Let’s consider some examples with the same Unique Errors but with different include/exclude numbers settings.

We have 2 errors with identical text; the only difference is a numeric value in the first error.

1. An example with the “Include numbers to analyzed logs” setting. The error with the numeric value is displayed:
2. An example with the “Exclude numbers from analyzed logs” setting. As you can see, the error with the numeric value is not displayed:

    How to get ML suggestions for the group of steps

In addition, there is a possibility to get ML suggestions for a group of steps. It speeds up the process of analyzing failed tests and assigning defect types several times over.

    Also, the new Quality Gates rule – “New Errors” – was implemented based on the “Unique Error analysis” functionality. This rule helps to identify if there are new unique errors in the current launch by comparing it to another specified launch.

    To summarize, ReportPortal got the following benefits thanks to the “Unique Error analysis” functionality:

    1. a list of unique errors for the launch with grouping,
    2. facilitating tests results analysis,
    3. ML suggestions for a group of steps,
    4. new Quality Gates rule.

    This way you can easily sort out the failures based on the unique errors found.

    - + \ No newline at end of file diff --git a/assets/images/ManualPatternAnalysis-e34eaf8c9e0acd6d6323e774e717462b.png b/assets/images/ManualPatternAnalysis-e34eaf8c9e0acd6d6323e774e717462b.png deleted file mode 100644 index b19c62085..000000000 Binary files a/assets/images/ManualPatternAnalysis-e34eaf8c9e0acd6d6323e774e717462b.png and /dev/null differ diff --git a/assets/images/ManualPatternAnalysis-f4328c2370a1cc455580a539b222e895.png b/assets/images/ManualPatternAnalysis-f4328c2370a1cc455580a539b222e895.png new file mode 100644 index 000000000..7360117ee Binary files /dev/null and b/assets/images/ManualPatternAnalysis-f4328c2370a1cc455580a539b222e895.png differ diff --git a/assets/images/PatternAnalysisRegex1-088780949e70ac5b75b4019411c8b321.png b/assets/images/PatternAnalysisRegex1-088780949e70ac5b75b4019411c8b321.png new file mode 100644 index 000000000..f3e3e43d1 Binary files /dev/null and b/assets/images/PatternAnalysisRegex1-088780949e70ac5b75b4019411c8b321.png differ diff --git a/assets/images/PatternAnalysisRegex1-cbd4be900ccc8cc6cc5e2e9f1cde062d.png b/assets/images/PatternAnalysisRegex1-cbd4be900ccc8cc6cc5e2e9f1cde062d.png deleted file mode 100644 index 0cee62091..000000000 Binary files a/assets/images/PatternAnalysisRegex1-cbd4be900ccc8cc6cc5e2e9f1cde062d.png and /dev/null differ diff --git a/assets/images/PatternAnalysisRegex2-45144c031833987067b21288979df1a8.png b/assets/images/PatternAnalysisRegex2-45144c031833987067b21288979df1a8.png deleted file mode 100644 index 007c6c74d..000000000 Binary files a/assets/images/PatternAnalysisRegex2-45144c031833987067b21288979df1a8.png and /dev/null differ diff --git a/assets/images/PatternAnalysisRegex2-55c59bdbd41286f3088582d2c9f185e4.png b/assets/images/PatternAnalysisRegex2-55c59bdbd41286f3088582d2c9f185e4.png new file mode 100644 index 000000000..7ef5eb652 Binary files /dev/null and b/assets/images/PatternAnalysisRegex2-55c59bdbd41286f3088582d2c9f185e4.png differ diff --git a/assets/js/7ef0b1f8.8c175450.js b/assets/js/7ef0b1f8.8c175450.js new file mode 100644 index 000000000..acf8582cb --- /dev/null +++ b/assets/js/7ef0b1f8.8c175450.js @@ -0,0 +1 @@ +"use strict";(self.webpackChunkdocumentation=self.webpackChunkdocumentation||[]).push([[8734],{3905:(e,t,a)=>{a.d(t,{Zo:()=>c,kt:()=>m});var n=a(67294);function s(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function r(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);t&&(n=n.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,n)}return a}function i(e){for(var t=1;t=0||(s[a]=e[a]);return s}(e,t);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(s[a]=e[a])}return s}var o=n.createContext({}),p=function(e){var t=n.useContext(o),a=t;return e&&(a="function"==typeof e?e(t):i(i({},t),e)),a},c=function(e){var t=p(e.components);return n.createElement(o.Provider,{value:t},e.children)},u="mdxType",d={inlineCode:"code",wrapper:function(e){var t=e.children;return n.createElement(n.Fragment,{},t)}},y=n.forwardRef((function(e,t){var a=e.components,s=e.mdxType,r=e.originalType,o=e.parentName,c=l(e,["components","mdxType","originalType","parentName"]),u=p(a),y=s,m=u["".concat(o,".").concat(y)]||u[y]||d[y]||r;return a?n.createElement(m,i(i({ref:t},c),{},{components:a})):n.createElement(m,i({ref:t},c))}));function m(e,t){var a=arguments,s=t&&t.mdxType;if("string"==typeof 
e||s){var r=a.length,i=new Array(r);i[0]=y;var l={};for(var o in t)hasOwnProperty.call(t,o)&&(l[o]=t[o]);l.originalType=e,l[u]="string"==typeof e?e:s,i[1]=l;for(var p=2;p{a.r(t),a.d(t,{assets:()=>o,contentTitle:()=>i,default:()=>d,frontMatter:()=>r,metadata:()=>l,toc:()=>p});var n=a(87462),s=(a(67294),a(3905));const r={sidebar_position:7,sidebar_label:"Pattern Analysis"},i="Pattern Analysis",l={unversionedId:"analysis/PatternAnalysis",id:"analysis/PatternAnalysis",title:"Pattern Analysis",description:"Pattern analysis is a feature that helps you to speed up test failure analysis by finding common patterns in error logs.",source:"@site/docs/analysis/PatternAnalysis.mdx",sourceDirName:"analysis",slug:"/analysis/PatternAnalysis",permalink:"/docs/analysis/PatternAnalysis",draft:!1,editUrl:"https://github.com/reportportal/docs/blob/develop/docs/analysis/PatternAnalysis.mdx",tags:[],version:"current",sidebarPosition:7,frontMatter:{sidebar_position:7,sidebar_label:"Pattern Analysis"},sidebar:"defaultSidebar",previous:{title:"Manual Analysis",permalink:"/docs/analysis/ManualAnalysis"},next:{title:"Unique Error Analysis",permalink:"/docs/analysis/UniqueErrorAnalysis"}},o={},p=[{value:"Types of Pattern Analysis",id:"types-of-pattern-analysis",level:2},{value:"Use case 1:",id:"use-case-1",level:2},{value:"Use case 2:",id:"use-case-2",level:2}],c={toc:p},u="wrapper";function d(e){let{components:t,...r}=e;return(0,s.kt)(u,(0,n.Z)({},c,r,{components:t,mdxType:"MDXLayout"}),(0,s.kt)("h1",{id:"pattern-analysis"},"Pattern Analysis"),(0,s.kt)("p",null,"Pattern analysis is a feature that helps you to speed up test failure analysis by finding common patterns in error logs."),(0,s.kt)("h2",{id:"types-of-pattern-analysis"},"Types of Pattern Analysis"),(0,s.kt)("p",null,(0,s.kt)("strong",{parentName:"p"},"String")," \u2013 any problem phrase."),(0,s.kt)("media-view",{src:a(38394),alt:"Pattern Analysis String rule"}),(0,s.kt)("media-view",{src:a(42478),alt:"Pattern Analysis String in the logs for automated defect triaging"}),(0,s.kt)("p",null,(0,s.kt)("strong",{parentName:"p"},"Regex")," \u2013 regular expression."),(0,s.kt)("media-view",{src:a(74098),alt:"Pattern Analysis Regex rule"}),(0,s.kt)("media-view",{src:a(88783),alt:"Pattern Analysis Regex in the logs for automated bug triage process"}),(0,s.kt)("admonition",{type:"note"},(0,s.kt)("p",{parentName:"admonition"},"It would be better to use STRING rule instead of REGEX rule in all possible cases to speed up the Pattern Analysis processing in the database. As a result, you can get your analysis completed faster using the STRING patterns rather than REGEX and reduce the database workload.")),(0,s.kt)("h2",{id:"use-case-1"},"Use case 1:"),(0,s.kt)("p",null,(0,s.kt)("strong",{parentName:"p"},"Problem:")," A user knows the several common problems why test cases fail. During tests run a lot of test have failed. A user need to check logs a of tests to know by what reason test cases have failed."),(0,s.kt)("p",null,(0,s.kt)("strong",{parentName:"p"},"Solution:")," Create a pattern rules for all common reasons which contains a problem phrase (for example: ",(0,s.kt)("em",{parentName:"p"},'"Expected status code <404> but > was <500>"'),' or "',(0,s.kt)("em",{parentName:"p"},'Null response"'),") or with Regex query (for example: java:","[0-9]","*). 
    Pattern Analysis

    Pattern Analysis is a feature that helps you speed up test failure analysis by finding common patterns in error logs.

    Types of Pattern Analysis

    • String – any problem phrase.

    • Regex – a regular expression.

    Use case 1:

    Problem: A user knows several common reasons why test cases fail. During a test run, a lot of tests have failed. The user needs to check the test logs to find out why each test case failed.

    Solution: Create pattern rules for all the common reasons, each containing a problem phrase (for example: "Expected status code <404> but was <500>" or "Null response") or a Regex query. Switch on Pattern Analysis and launch a test run. ReportPortal finds all failed items that have known patterns in their error logs and marks them with a label carrying the pattern name. Find all items that failed for the same reason by choosing a filter by Pattern Name on the Step view. Add the Most popular pattern widget (TOP-20) to track the TOP-20 most popular reasons for test failures in the build.

    Use case 2:

    Problem: A test run has finished. A user found that more than 3 items failed for the same reason and wants to find all such items.

    Solution: Create a new pattern rule on Project Settings. Launch Pattern Analysis manually for one launch. Find all items that failed for the same reason by choosing a filter by Pattern Name on the Step view.
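
    To make the Regex rule type concrete, here is a minimal sketch of a regular expression that matches the example phrase above (plain Python re; the pattern text is illustrative, and the matching semantics inside ReportPortal itself are not reproduced here):

        import re

        # Hypothetical Regex pattern rule for status-code mismatches.
        pattern = re.compile(r"Expected status code <\d{3}> but was <\d{3}>")

        log_lines = [
            "Expected status code <404> but was <500>",
            "Null response",
            "AssertionError: element not found",
        ]

        # Items whose error logs match the pattern would get the pattern label.
        matches = [line for line in log_lines if pattern.search(line)]
        print(matches)  # ['Expected status code <404> but was <500>']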

    Improving test automation stability

    Challenges

    • Complex, manual test runs
    • Low stability of regression for unclear reasons (60% passing rate)
    • Unclear reporting for non-technical stakeholders, leading to a lack of transparency in test automation results and progress
    • Unclear, unreliable, incomprehensible test automation feedback, insufficient for deciding whether to push the app to production

    Highlights

    By integrating the test framework with ReportPortal.io, EPAM's team provided:

    • Simplified test runs
    • Key info for manual root cause analysis of test failures, such as logs, screenshots, and attachments
    • The ability to triage failed items (AI-based and manual)
    • Clear reporting for non-technical stakeholders

    Results

    • Improved automation stability from 60% to 77% in one sprint
    • Discovered that most failures were caused by environment issues and reduced the number of such failures from 20% to 2%
    • Reduced test automation results analysis effort by 45%
    • Provided clear and comprehensive test automation reporting dashboard: number of test cases, regression passing rate, reasons for failures, product status

    Increasing test automation stability and visibility

    Challenges

    • 2,000 unique tests had low stability (25% passing rate)
    • Complex and inconvenient test results reporting
    • QA team didn’t have capacity for test failure analysis
    • Automation results were ignored by decision makers: no analysis of the causes of failed tests, no trust in the automation testing process
    • Lack of visibility into failed tests and failure causes
    • Absence of clear reporting of QA engineers’ workload and performance

    Highlights

    Integration with ReportPortal.io allowed the client to:

    • Collect history for previous test runs
    • Identify passing, failing, and unstable tests
    • Select stable tests in a separate run
    • Assign unstable tests for refactoring and add them to a separate run for implementation
    • Configure ReportPortal.io charts to track refactoring progress
    • Accelerate test failure analysis through access to related logs, screenshots, and attachments in one place

    Results

    • Automation stability improved from 25% to 95%
    • Analysis efforts of QA engineers decreased tenfold
    • Client stakeholders use ReportPortal.io data to make release decisions
    • ReportPortal.io became the main tool for tracking test automation progress and health

    Reducing regression analysis efforts

    Challenges

    • Test analysis could only start after full execution was completed (4 hours wasted daily)
    • All test failures had to be analyzed manually
    • No visibility into the causes of test failures
    • Absence of history and trends of test failures
    • No tools to manage team workload
    • Test execution reports were done manually (1 hour of daily effort)

    Highlights

    • Real-time analysis during test runs: results are available after the first job execution, saving team capacity and enabling an early reaction
    • Automatic re-runs of failed tests provided additional value and saved up to 5.5 team hours per day
    • About 20% of defects previously analyzed manually are now updated automatically through ML capabilities
    • Clear visibility into the number of new/existing production defects, auto-test-related issues, and environment-related issues
    • Full understanding of application quality, correct planning of maintenance time, and transparent communication of environment instability based on real-time statistics
    • Test execution history helps analyze the causes of test failures more efficiently
    • Improved task management thanks to the ability to plan work allocation and track tests assigned to each team member
    • Real-time dashboards were tailored to client’s KPIs, giving full transparency of test execution results

    Reducing regression time by 50%

    EPAM helped a Canadian retail company reverse-engineer its legacy IBM-based store management system to a modern tech stack. As part of this project, ReportPortal was deployed as a centralized test reporting tool.

    Challenges

    • Unavailable environments (15 VMs) blocked by aggregation scripts
    • High risk of aggregation failure: 1 in 10 aggregations fails, and each failure forces a re-run of the whole regression
    • Constant regression failures push weekly releases back by 1 day
    • Lack of information for investigation: no screenshots, no history, no structure, incomplete information
    • Duplicated analysis efforts: missing history of test cases and known issues

    Highlights

    • Simplified test run reporting by integrating the test framework with ReportPortal.io
    • Distributed test execution data for root cause analysis: logs, screenshots, attachments
    • Enabled AI-based and manual defect triage
    • Provided clear reporting for non-technical stakeholders
    • Real-time reporting
    • Savings from early reaction: the team analyzes results in real time, right after execution starts
    • Collaborative results analysis
    • Test Case History helped to identify flaky test cases
    • Extended ML Analyzer
    Dashboards and widgets

    Component Health Check

    Each group in the widget provides a link to the widget list view (Filter list view + test method: Test + status: Passed, Failed, Skipped, Interrupted, InProgress; the number of items is equal to the number of test cases in the widget) and a color line which depends on the passing rate (see section Widget legend).

    Widget legend

    The widget legend has two lines: Passed and Failed.

    Failed

    The failed line has four colors:

    • light red
    • regular red
    • strong red
    • dark red

    These cover values less than the passing rate specified in the widget wizard minus 1.

    Passed

    The passing line has only two colors:

    • slightly green
    • green = Passed

    These cover values from the rate specified in the widget wizard up to 100%. Based on this color scheme, each group on the widget gets its own color.

    Let's say we set 'The min allowable passing rate for the component' to be 90%.

    • passed green: groups with a passing rate of 100%.
    • slightly green: groups with a passing rate from 99% down to the rate specified in the widget wizard (90% here).
    • light red: from 3*(90% - 1)/4 to (90% - 1)
    • strong red: from (90% - 1)/2 to 3*(90% - 1)/4
    • regular red: from (90% - 1)/4 to (90% - 1)/2
    • dark red: from 0 to ((90% - 1)/4 - 1)
    note

    The widget doesn't contain "IN PROGRESS" launches.
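
    A minimal sketch of how these thresholds combine, assuming the 90% example above (the function, the color names used as return values, and the exact boundary handling are illustrative, not taken from the ReportPortal implementation):

        def component_color(passing_rate, min_allowable=90):
            """Map a group's passing rate (0-100) to a legend color."""
            top = min_allowable - 1          # 89 for the 90% example
            if passing_rate == 100:
                return "passed green"
            if passing_rate >= min_allowable:
                return "slightly green"      # 90-99 in the example
            if passing_rate >= 3 * top / 4:  # 66.75-89
                return "light red"
            if passing_rate >= top / 2:      # 44.5-66.75
                return "strong red"
            if passing_rate >= top / 4:      # 22.25-44.5
                return "regular red"
            return "dark red"                # below 22.25

        print(component_color(95))  # slightly green
        print(component_color(50))  # strong red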

    Cumulative trend chart

    Separate bars – a separate bar is shown for each status and each defect type group.

    Tests / Percent. In Tests mode, the OY axis is calculated in test cases; in Percent mode, the OY axis is calculated in percent (OY = 100%).

    A user can combine different options. Options are saved per user.
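
    A minimal sketch of the difference between the two modes (the statistics fields and counts are assumed for illustration):

        # Statistics for one bar of the chart, counted in test cases.
        stats = {"passed": 30, "failed": 15, "skipped": 5}

        # Tests mode: the OY axis plots the raw counts.
        tests_mode = stats

        # Percent mode: each status is normalized so the bar totals 100%.
        total = sum(stats.values())
        percent_mode = {k: round(100 * v / total, 1) for k, v in stats.items()}
        print(percent_mode)  # {'passed': 60.0, 'failed': 30.0, 'skipped': 10.0}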

    note

    The widget doesn't contain "IN PROGRESS" launches. The widget statistics are calculated only for items with method type TEST.


    Different launches comparison chart

    The widget allows you to compare statistics for the two last launches side by side.

    Widget's parameters:

    • Filter.

    Widget view

    • The X-axis shows launch numbers, with launch names on hover.
    • The Y-axis shows the percentage of test cases by status.

    The widget contains an agenda with statuses; the user can click on a status to remove/add it to the chart.

    The tooltip on mouse hover over the chart area shows launch details: launch name and number, launch start time, and the percentage of test cases of a particular type.

    The widget has clickable sections: when you click on a specific section in the widget, the system forwards you to the launch view for the appropriate selection.

    note

    The widget doesn't contain "IN PROGRESS" launches.


    Failed cases trend chart

    The widget shows the trend of growth in the number of failed test cases (Product Bugs + Auto Bugs + System Issues + No Defects + To Investigates) from run to run.

    Widget's parameters:

    • Filter.
    • Items: 1-150. The default value is 50.

    Widget view

    The widget contains the agenda: "Failed".

    • The X-axis shows launch numbers, with launch names on hover.
    • The Y-axis shows the number of failed issues (the sum of Product Bugs + Auto Bugs + System Issues + No Defects + To Investigates).

    The tooltip on mouse hover over the chart area shows launch details: launch name and number, launch start time, and the number of failed test cases.
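
    A minimal sketch of the metric plotted per launch, assuming statistics fields named after the defect types (the field names are illustrative):

        def failed_cases(stats):
            # Failed = Product Bugs + Auto Bugs + System Issues + No Defects + To Investigates
            keys = ("product_bug", "automation_bug", "system_issue",
                    "no_defect", "to_investigate")
            return sum(stats.get(k, 0) for k in keys)

        launches = [
            {"number": 1, "product_bug": 3, "to_investigate": 5},
            {"number": 2, "automation_bug": 2, "system_issue": 1, "no_defect": 1},
        ]
        trend = [(launch["number"], failed_cases(launch)) for launch in launches]
        print(trend)  # [(1, 8), (2, 4)]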

    note

    The widget doesn't contain "IN PROGRESS" launches.


    Flaky test cases table (TOP-20)

    Shows the TOP-20 flakiest test cases within the specified previous launches. The widget identifies the test cases with the highest percentage of status switches across executions, so you can click on a test case and be redirected to the last test item in the execution to check the reasons.

    Widget's parameters:

    • Launches count: 2-150. The default value is 30.

    • Launch name: required

    • Include/Exclude Before and After methods

    Widget view

    The widget has a table view with the following data displayed:

    • Test Item name - link to the Step level of the last launch

    • Switches - the number of detected status switches;

    • % of Switches - the ratio of actual status switches to the maximum possible number of switches, expressed as a percentage;

    • Last switch - the date and time of the last run in which the test item switched status, displayed in 'time ago' format (i.e. "10 minutes ago").

    On mouse hover, the system will display the exact start time.
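
    A minimal sketch of how Switches and % of Switches could be derived from a test's status history (treating the number of consecutive run pairs as the possible maximum is an assumption, not the documented formula):

        def flakiness(statuses):
            """statuses: the test's result per launch, oldest first."""
            switches = sum(1 for a, b in zip(statuses, statuses[1:]) if a != b)
            possible = len(statuses) - 1  # assumed maximum: one switch per pair of runs
            percent = 100 * switches / possible if possible else 0.0
            return switches, round(percent, 1)

        history = ["PASSED", "FAILED", "PASSED", "PASSED", "FAILED"]
        print(flakiness(history))  # (3, 75.0)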


    Investigated percentage of launches

    The widget can be used in two modes - Launch mode and Timeline mode:

    • In Launch mode, the widget shows the percentage of "Investigated" and "To Investigate" items per launch, relative to the sum (Product Bugs + Auto Bugs + System Issues + To Investigates).
    • In Timeline mode, the widget shows the percentage of "Investigated" and "To Investigate" items, relative to the same sum, across all runs per day, distributed by dates.

    Widget's parameters:

    • Filter: At least one filter is required.
    • Items: 1-150. The default value is 50.
    • Mode: Launch or Timeline. The default is Launch mode.

    Widget view

    The widget contains an agenda with "To Investigate" and "Investigated" labels.

    The widget view in Launch mode:

    • The X-axis shows launch numbers, with launch names on hover.
    • The Y-axis shows the percentage of "Investigated" and "To Investigate" items relative to the sum (Product Bugs + Auto Bugs + System Issues + To Investigates).

    The tooltip on mouse hover over the chart area shows launch details: launch name, number, launch start time, and the percentage of "Investigated" or "To Investigate" items.

    The widget view in Timeline mode:

    • The X-axis shows dates and weekdays.
    • The Y-axis shows the percentage of "Investigated" and "To Investigate" items relative to the sum (Product Bugs + Auto Bugs + System Issues + To Investigates), distributed by dates.

    The tooltip on mouse hover over the chart area shows launch details: date and percentage of "Investigated" or "To Investigate" items.

    The widget has clickable sections: when you click on a specific section in the widget, the system forwards you to the launch view for the appropriate selection.
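
    A minimal sketch of the Launch mode percentages, assuming statistics fields named after the defect types (the field names are illustrative):

        def investigated_percentages(stats):
            investigated = (stats["product_bug"] + stats["automation_bug"]
                            + stats["system_issue"])
            to_investigate = stats["to_investigate"]
            total = investigated + to_investigate
            if total == 0:
                return 0.0, 0.0
            return (round(100 * investigated / total, 2),
                    round(100 * to_investigate / total, 2))

        launch = {"product_bug": 4, "automation_bug": 2,
                  "system_issue": 1, "to_investigate": 3}
        print(investigated_percentages(launch))  # (70.0, 30.0)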

    note

    The widget doesn't contain "IN PROGRESS" launches.


    Launch execution and issue statistic

    The Launch Execution and Issue Statistic chart shows the status and issue statistics for the last launch in a specified range.

    Widget's parameters:

    • Filter: required

    Widget view

    The widget shows statistics of the last finished launch for the chosen filter. The statistics are divided into the following sections:

    • Skipped, Passed, Failed
    • Product Bug, System Issue, Automation Bug, No Defect (default and custom) and To Investigate.

    The widget contains an agenda with statuses; the user can click on a status to remove/add it to the chart.

    The tooltip on mouse hover over the chart area shows launch details: launch name, number, and duration.

    The statistics for every type are shown as percentages. On hover, the exact number is shown for each type.

    The widget has clickable sections: when you click on a specific section in the widget, the system forwards you to the launch view for the appropriate selection.

    Launch statistics chart

    The widget view in Timeline mode:

    The tooltip on mouse hover over the chart area shows details: the date and total launch statistics.

    The widget has clickable sections; when you click on a specific section in the widget, the system forwards you to the launch view for the appropriate selection.

    Area view

    Bar view

    note

    The widget doesn't contain "IN PROGRESS" launches.


    Launches duration chart

    The Launch Duration Chart shows the duration of the selected launches.

    Widget's parameters:

    • Filter: At least one filter is required
    • Items: 1-150. The default value is 50
    • Mode: All launches / Latest launches

    Widget view

    The widget shows the duration of the filtered launches.

    • The X-axis shows launch duration.
    • The Y-axis shows launch numbers, with launch names on hover.

    The tooltip on mouse hover over the chart area shows launch details: launch name, number, and duration.

    The widget has clickable sections: when you click on a specific section in the widget, the system forwards you to the launch view for the appropriate selection.

    The widget has two modes: All launches and Latest launches. If you choose All launches mode, the widget shows statistics for all launches in the filter. To view only the latest execution of each launch, choose Latest launches.
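
    A minimal sketch of the two modes, assuming each launch carries a name, a number, and start/end times in seconds (all field names and values are illustrative):

        launches = [
            {"name": "Smoke", "number": 1, "start": 0, "end": 300},
            {"name": "Smoke", "number": 2, "start": 400, "end": 650},
            {"name": "Regression", "number": 7, "start": 0, "end": 7200},
        ]

        # All launches mode: one bar per launch in the filter.
        durations = [(l["name"], l["number"], l["end"] - l["start"]) for l in launches]

        # Latest launches mode: keep only the highest-numbered run per launch name.
        latest = {}
        for l in launches:
            if l["name"] not in latest or l["number"] > latest[l["name"]]["number"]:
                latest[l["name"]] = l

        print(durations)  # [('Smoke', 1, 300), ('Smoke', 2, 250), ('Regression', 7, 7200)]
        print({name: l["number"] for name, l in latest.items()})  # {'Smoke': 2, 'Regression': 7}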

    note

    The widget doesn't contain "IN PROGRESS" launches.


    Launches table

    The widget shows a configurable table of launches.

    Widget's parameters:

    • The widget criteria are as follows: Total, Passed, Failed, Skipped, Product Bug, Automation Bug, System Issue, To Investigate, Attributes, User, Description, Start time, Finish time. All criteria are selected by default. To adjust the set, uncheck unnecessary items in "Criteria for Widget".
    • Items: 1-150. The default value is 50.

    Widget view

    The widget has a table view.

    The widget has clickable elements ("launch name", "owner", "attributes", and "number of items"): when you click on a specific element in the widget, the system forwards you to the launch view for the appropriate selection.

    note

    The widget doesn't contain "IN PROGRESS" launches.


    Manage Widgets

    Customize widget

    When you create a widget in our test automation dashboard, it is created with a default size. Afterward, you may change the widget size.

    To resize a widget, hover the mouse cursor over it. The system will show resizing arrows.

    Grab the arrow with the cursor and drag it to the desired width and height.

    You can maintain the existing aspect ratio or set a new one when resizing the widget.

    note

    Widgets have minimum and maximum width and height values defined within the application.

    Another way you can customize your dashboard is by changing the widgets' placement within the dashboard canvas area.

To change a widget's placement on the dashboard, grab the widget with the cursor and, holding down the mouse button, drag it to the desired position.

When you move the widget to an area with sufficient space, the system highlights this place. The widgets already located in this space are moved to the relocated widget's place. Using this option, the user can configure the desired location of the widgets on a dashboard.

    Edit widget

To edit an existing widget, perform the following steps:

    1. Click the "Edit" icon in the right corner of the widget header.

    2. After the "Edit Widget" window is opened, you can edit any widget settings except the template itself.

    3. Make the necessary changes and click the "Save" button. The widget will be updated.

    View widgets in full-screen mode

    To view widgets on the whole screen, click the 'Full Screen' button in the right top corner of the dashboard.

Widgets are shown in the same order as in the standard view.

    note

Clickable areas and elements are disabled in full-screen mode, so it is not possible to create a new widget or update or delete existing widgets in this mode.

    The auto-refresh timeout for widgets in full-screen mode is 30 sec.

    Delete widget

    To delete the widget:

    1. Click the "Delete" icon (X) in the right corner of the widget header.

    2. Click the "Delete" button on the confirmation popup.

    3. The widget will be deleted from the system.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/MostFailedTestCasesTableTop50/index.html b/dashboards-and-widgets/MostFailedTestCasesTableTop50/index.html index 01b419a55..d2c53fa31 100644 --- a/dashboards-and-widgets/MostFailedTestCasesTableTop50/index.html +++ b/dashboards-and-widgets/MostFailedTestCasesTableTop50/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Most failed test-cases table (TOP-50)

    The widget contains a table with statistical information about the TOP-50 most problematic test cases.

    Widget's parameters:

    • The widget criteria are as follows: Failed, Skipped, Product Bug, Automation Bug, System Issue, No Defect. Failed is selected by default.

    • Launches count: 2-150. By default, "Launches count" is 30.

• Launch name: required.

    • Include /Exclude Before and After methods

    Widget view

    The widget has a table view with the following data displayed:

    • Test Item name - link to the Step level of the last launch
    • Failed - count of found failed results
• Last failure - date and time of the last run in which the test item failed, displayed in 'time ago' format (e.g. "10 minutes ago"). On mouse hover, the system will display the exact start time.
    note

    The widget contains statistics of the most problematic test cases in all launches, except "IN PROGRESS" and "INTERRUPTED" launches.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/MostPopularPatternTableTop20/index.html b/dashboards-and-widgets/MostPopularPatternTableTop20/index.html index 8f4a5865d..86e982307 100644 --- a/dashboards-and-widgets/MostPopularPatternTableTop20/index.html +++ b/dashboards-and-widgets/MostPopularPatternTableTop20/index.html @@ -12,7 +12,7 @@ - + @@ -24,7 +24,7 @@ The system leaves only the latest launches in each group (if the user has chosen option Latest launches in the widget wizard). For each group of launches, a list with pattern aggregated.

    Widget view

On the widget, a user can view a table which shows:

Via the drop-down, a user can move from group to group. A pattern name is clickable: by clicking on a pattern name, a user is redirected to a list of all test cases that matched this pattern. The list includes test cases from different launches.

    note

The widget doesn't contain "IN PROGRESS" launches.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/MostTimeConsumingTestCasesWidgetTop20/index.html b/dashboards-and-widgets/MostTimeConsumingTestCasesWidgetTop20/index.html index 2b360efe5..27fac8163 100644 --- a/dashboards-and-widgets/MostTimeConsumingTestCasesWidgetTop20/index.html +++ b/dashboards-and-widgets/MostTimeConsumingTestCasesWidgetTop20/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Most time-consuming test cases widget (TOP-20)

Shows the TOP-20 test cases with the highest duration in the last execution of the specified launch.

    Widget's parameters:

    • Test Status. Default value - Passed, Failed

• Launch name: required

    • Include /Exclude Before and After methods

    • View options: Bar view, Table view

    Widget view

    Table View

    The widget has a table view with the following data displayed:

    • Test Item name - link to the log of the last launch

    • Test Status

    • Test Duration

• Test Start Time - date and time of the last run, displayed in 'time ago' format (e.g. "10 minutes ago").

    On mouse hover, the system will display accurate start times.

    Bar View

A bar chart where:

• The Y-axis (OY) shows tests
• The X-axis (OX) shows duration

The bar color reflects the execution status. On mouse hover, the system will display the exact start times.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/NonPassedTestCasesTrendChart/index.html b/dashboards-and-widgets/NonPassedTestCasesTrendChart/index.html index 0f403c273..578d14315 100644 --- a/dashboards-and-widgets/NonPassedTestCasesTrendChart/index.html +++ b/dashboards-and-widgets/NonPassedTestCasesTrendChart/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Non-passed test-cases trend chart

The widget shows the percentage ratio of non-passed test cases (Failed + Skipped) to Total cases from run to run.

    Widget's parameters:

    To configure the widget, click the "Add New Widget" button on the dashboard header, then select a template and specify the following settings on the second step:

    • Filter.
• Items: 1-150. The default value is 50.

    Widget view

The widget contains the legend: % (Failed + Skipped) / Total.

• The X-axis shows launch numbers and launch names on hover.
• The Y-axis shows the percentage of Failed + Skipped test cases relative to Total.

    The tooltip on mouse hover over the chart area shows launch details: launch name and number, launch start time and percentage of non-passed cases.

    note

    The widget doesn't contain "IN PROGRESS" launches.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/OverallStatistics/index.html b/dashboards-and-widgets/OverallStatistics/index.html index d74d36f12..846568cfc 100644 --- a/dashboards-and-widgets/OverallStatistics/index.html +++ b/dashboards-and-widgets/OverallStatistics/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Overall statistics

    The panel shows a summary of test cases with each status for each selected launch.

    Widget's parameters:

    • Filter: At least one filter is required
• Items: 1-150. The default value is 50.
    • Widget Criteria: All criteria are selected by default.
    • Type of view: Panel view/ Donut view
• Mode: All launches / Latest launches

    Widget view

The widget shows statistics for All launches or Latest launches for the chosen filter. Statistics are divided into the following sections:

    • Skipped, Passed, Failed
    • Product Bug, System Issue, Automation Bug, No Defect and To Investigate.

The statistics for every type are shown in percentages. On hover, the exact number is shown for each type. The widget has clickable sections: when you click a specific section in the widget, the system forwards you to the launch view for the appropriate selection.

If you choose All launches mode, the widget will show the statistics for all launches in the filter. To view only the latest executions of each launch, choose Latest launches.

The widget can be viewed in two options, as shown in the pictures: Panel view

    or Donut view.

    note

    The widget doesn't contain "IN PROGRESS" launches.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/PassingRatePerLaunch/index.html b/dashboards-and-widgets/PassingRatePerLaunch/index.html index ba585822d..b4ca27f31 100644 --- a/dashboards-and-widgets/PassingRatePerLaunch/index.html +++ b/dashboards-and-widgets/PassingRatePerLaunch/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Passing rate per launch

Shows the percentage ratio of Passed test cases to Total test cases for the last run of the selected launch.

    note

    Total test cases = Passed + Not Passed, while Not Passed = Failed + Skipped + Interrupted

    Thus, Passing rate = Passed / (Passed + Failed + Skipped + Interrupted)
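For example, if the last run has 90 passed, 6 failed, 3 skipped, and 1 interrupted test case, the passing rate is 90 / (90 + 6 + 3 + 1) = 90%.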

    Widget's parameters:

    • Launch Name: the name of any finished launch

    • Mode: Bar View/Pie View

    • Widget name: any text

    • Description: any text

    Please find below an example of configuration:

    As you can see, this widget was built based on the test results of the last run of the Daily Smoke Suite:

    Widget view

    The widget can be displayed in two options as shown on the pictures below:

    Bar View

    Pie View

    The tooltip on mouse hover over chart area shows the quantity of Passed/Failed test cases and percentage ratio of Passed/Failed test cases to Total cases for the last run.

    The widget has clickable sections. When you click on a specific section in the widget, the system forwards you to the launch view for appropriate selection.

    note

The widget doesn't contain "IN PROGRESS" launches.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/PassingRateSummary/index.html b/dashboards-and-widgets/PassingRateSummary/index.html index a08499233..ac157e1ed 100644 --- a/dashboards-and-widgets/PassingRateSummary/index.html +++ b/dashboards-and-widgets/PassingRateSummary/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Passing rate summary

Shows the percentage ratio of Passed test cases to Total test cases for a set of launches.

    note

    Total test cases = Passed + Not Passed, while Not Passed = Failed + Skipped + Interrupted

    Thus, Passing rate = Passed / (Passed + Failed + Skipped + Interrupted)

    Widget's parameters:

    • Filter: At least one filter is required

• Items: 1-600. The default value is 50.

    • Mode: Bar View/Pie View

    • Widget name: any text

    • Description: any text

    Please find below an example of configuration:

    Widget view

    The widget can be displayed in two options as shown on the pictures below:

    Bar View

    Pie view

    As you can see, this widget was built based on the “regression” filter.

    The tooltip on mouse hover over chart area shows the quantity of Passed/Not passed test cases and percentage ratio of Passed/Not passed test cases to Total test cases for the specified set of launches.

    The widget has clickable sections. When you click on a specific section in the widget, the system forwards you to the launches view for the appropriate selection.

Thanks to the “Passing rate summary” widget, you no longer need to spend time calculating the passing rate of a specified set of launches. ReportPortal provides these statistics as a visualization – a quick and convenient solution. You can take a screenshot of the widget and use it in a test results report.

    note

The widget doesn't contain "IN PROGRESS" launches.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/PossibleDashboardsInReportPortal/index.html b/dashboards-and-widgets/PossibleDashboardsInReportPortal/index.html index 44b1c2eb4..f97e6c25f 100644 --- a/dashboards-and-widgets/PossibleDashboardsInReportPortal/index.html +++ b/dashboards-and-widgets/PossibleDashboardsInReportPortal/index.html @@ -12,7 +12,7 @@ - + @@ -23,7 +23,7 @@ Let's assume that we have a Regression suite which contains:

The whole Regression runs against the nightly build every day. Different teams are responsible for different suites.

Which dashboards can I create under such conditions?

Report for one test run (a dashboard for an engineer)

The goal of this test results dashboard is to show the status of the latest test run, for instance, the latest results for a launch with an API suite.

You can configure a Passing rate widget that shows the passing rate for the latest "API suite" launch.

    Most popular pattern tracks TOP-20 problems in the last and previous runs of this suite.

    note

For the Most popular pattern table, you should create a set of rules and run Pattern Analysis.

    With Investigated percentage of launches you can find out the status of failure investigations. You will be able to evaluate team performance and consistency of results.

    Failed cases trend chart shows the history of failures in previous runs.

Duration chart will be very helpful for those who track a duration KPI and want to increase the speed of test runs.

    Test growth trend chart shows you the speed of new test cases creation.

Also, you can create "Most flaky test cases" and "Most failed test cases" widgets to find the most unstable items, which should be taken into account.

    Let's assume that you have a lot of test results and a lot of teams.

You can create Overall statistics and Launches table widgets, and the team responsible for the API suite no longer needs to go to the test results: this dashboard alone gives enough information for test failure management.

Build / Release / Sprint Report (a dashboard for Team Leads, PMs, DMs)

The goal of this report is to show the status of the whole version. It means that for this report we want to see the latest aggregated statistics for several launches.

    In our example, I want to see the latest results for the whole Regression (latest results for API suite + latest results for UI + latest results for Integration tests).

    Also, it is very useful to compare the results of the Regression on the current version with the Regression on previous versions and to see details about business metrics.

    On this dashboard you can see different metrics:

Also, with the help of the Component Health Check widget, you can create a Test Pyramid.

    note

You need to report test executions with attributes that specify the needed metrics or environments.

    note

The Component Health Check widget and Cumulative trend chart are highly configurable, and you can create your own widget based on project needs.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/ProjectActivityPanel/index.html b/dashboards-and-widgets/ProjectActivityPanel/index.html index 184f9d174..f0c0f2772 100644 --- a/dashboards-and-widgets/ProjectActivityPanel/index.html +++ b/dashboards-and-widgets/ProjectActivityPanel/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Project activity panel

    The widget shows all activities occurring on the project.

    Widget's parameters:

    • The actions for the widget: Start launch, Finish launch, Delete launch, Actions with issues, Assign/Invite users, Unassign user, Change role, Update Dashboard, Update widget, Update Filter, Update integration, Update project settings, Update Auto-Analysis settings, Update defect types, Import, Update Pattern-Analysis settings, Create pattern, Update pattern, Delete pattern, Pattern matched, Create project.

    • Items: 1-150. The default value is 50.

• Criteria for widget: By default, all user activities.

    Widget view

The actions on the widget are presented in a table, separated by days. Action messages have the following format:

Member (name) did action.
Time - displayed in 'time ago' format (e.g. "10 minutes ago"). On mouse hover, the system displays the exact action time.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/TableComponentHealthCheck/index.html b/dashboards-and-widgets/TableComponentHealthCheck/index.html index 089163557..24638e8d9 100644 --- a/dashboards-and-widgets/TableComponentHealthCheck/index.html +++ b/dashboards-and-widgets/TableComponentHealthCheck/index.html @@ -12,7 +12,7 @@ - + @@ -26,7 +26,7 @@ But if you run a new version (for instance version: xxx+1), you should repeat previous actions one more time: create the filter, update the widget.

Solution: Here is how you can skip these steps. Create a filter that includes 3 launches: API launch, UI launch, and Integration launch. Create a Component Health Check widget (table view) with this filter and add the attribute key 'version' for grouping. Now you will see a summary for the latest version every day. If a new version appears in the system, the widget automatically removes info about the previous one and adds the latest version.

    Use case #2

Use Case: To track information regarding components such as features, browsers, platforms, or others.

Problem: You are running different launches: API launch, UI launch, and Integration launch. In these 3 launches, there are test cases which belong to different features. One feature can have test cases of different types: API, UI, Integration. You need to track overall statistics per feature, not per launch.

Solution: Create a filter that includes 3 launches: API launch, UI launch, and Integration launch. Create a Component Health Check widget (table view) with this filter and add the attribute key 'feature' for grouping. Now you will see a summary for all features from different launches.

    Widget logic is the same as for Component health check.

    Widget's parameters:

    Widget view

    The widget has a table view. Each line contains information regarding one component (one unique attribute value):

    The total line shows a summary of all components.

    Custom column

Why might you need a custom column? Let's look at an example.

    Use case #3

    Use Case: You need to understand the impact of failed test cases

Problem: You created a Component Health Check widget and can see a list with features and their passing rate. But you cannot understand the importance of the failed features.

Solution: Add attributes with the attribute key 'priority' to all test executions. For instance:

Then add the attribute key 'priority' to the custom column field in the widget wizard, so that the system adds priority information for each feature to the widget view.

Custom sorting

    You can choose how components should be sorted in the table. Possible criteria:

    note

The Component Health Check widget (table view) is the first widget that uses a PostgreSQL materialized view. The view takes time to create, so information about new launches in the filter is not added dynamically. For that reason, a user should update the widget manually by clicking the update button. On the widget, a user can see the time of the last update.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/TestCasesGrowthTrendChart/index.html b/dashboards-and-widgets/TestCasesGrowthTrendChart/index.html index 16b3d358f..0172aaa87 100644 --- a/dashboards-and-widgets/TestCasesGrowthTrendChart/index.html +++ b/dashboards-and-widgets/TestCasesGrowthTrendChart/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Test-cases growth trend chart

    The widget can be used in two modes - Launch mode and Timeline mode:

    • The widget in the Launch mode shows the increment of test-cases from run to run,
    • The widget in the Timeline mode shows the increment of test-cases distributed by dates (in launches with the largest number of test-cases per day).

    Widget's parameters:

    • Filter: At least one filter is required
• Items: 1-150. The default value is 50.
    • Mode: Launch or Timeline.

    Widget view

    The widget view in Launch mode:

• The X-axis shows launch numbers and launch names on hover.
• The Y-axis shows the increment of test-cases.

    The tooltip on mouse hover over the chart area shows launch details: launch name and number, launch start time and launch statistics - total number of test cases and test cases growth.

    The widget view in Timeline mode:

    • The X-axis shows dates and weekdays.
• The Y-axis shows the increment of test-cases in launches with the largest number of test-cases per day.

    The tooltip on mouse hover over the chart area shows launch details: date and launch statistics - total number of test cases and test cases growth.

The widget has clickable sections: when you click a specific section in the widget, the system forwards you to the launch view for the appropriate selection.

    note

    The widget doesn't contain "IN PROGRESS" launches.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/UniqueBugsTable/index.html b/dashboards-and-widgets/UniqueBugsTable/index.html index c3af722a8..ac2033480 100644 --- a/dashboards-and-widgets/UniqueBugsTable/index.html +++ b/dashboards-and-widgets/UniqueBugsTable/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Unique bugs table

The widget shows real identified bugs posted to the Bug Tracking System from ReportPortal and existing bugs that were added to the items on ReportPortal.

    Widget's parameters:

    • Filter: At least one filter is required
• Items: 1-150. The default value is 10.

    Widget view

The widget has a table view; the bugs found are sorted by the date they were posted or added.

    The widget has the following data displayed:

    • Bug ID - links to the issue in Bug Tracking System.
    • Found in - links to the test item, to which the bug was posted/added.
• Submit date - the date the bug was submitted/added. Time is displayed in 'time ago' format (e.g. "10 minutes ago"). On mouse hover, the system displays the exact action time.
• Submitter - the user who submitted/added the bug.
    note

    The bugs from launches "IN PROGRESS" are not shown on the widget. In case a bug is found in multiple items, all of the items will be listed in the "Found in" column.

    - + \ No newline at end of file diff --git a/dashboards-and-widgets/WidgetCreation/index.html b/dashboards-and-widgets/WidgetCreation/index.html index 6256addf4..7391eaabd 100644 --- a/dashboards-and-widgets/WidgetCreation/index.html +++ b/dashboards-and-widgets/WidgetCreation/index.html @@ -12,7 +12,7 @@ - + @@ -22,7 +22,7 @@
    Skip to main content

    Widget Creation

In our test automation dashboard, widgets are graphical control elements designed to provide a simple, easy-to-use way of displaying and analyzing your automation trends and data.

The widgets can be added to dashboards on the "Dashboards" tab. Widgets are visible only within the project in which they are created.

    Create widget

    To create a new widget, perform the following steps:

    1. Navigate to the "All Dashboards" page and create a new dashboard or choose the existing one.

    2. Click the "Add New Widget" button.

    3. The Widget Wizard will be opened. To add a new widget, you need to pass all the required steps.

      • Step 1. Select the template of the widget (detailed description is below).

      • Step 2. Select a filter from the list below or create a new filter. Search functionality helps to find the filter quicker. Select other widget options: Criteria, Items, Launch or Timeline mode (if applicable for selected widget template)

      • Step 3. Enter a widget name and description. A widget name should be unique for a user on the project.

4. After you have completed all steps, click the "Save" button. The new widget will be added to the top of the dashboard.

    Widgets are automatically refreshed every minute.

Predefined widget types

ReportPortal provides the following widget templates for tracking different KPIs:

• Track the reasons of failures - Launch statistics chart
• Passing rate for filter summary, and structure of problems - Overall statistics
• Track the longest launch in the filter - Launches duration chart
• Track the passing rate and structure of problems of the latest run in the system - Launch execution and issue statistic
• Track the activity of your QA team - Project activity panel
• Track the growth of new test cases in your build - Test-cases growth trend chart
• Track the speed of test failure analysis - Investigated percentage of launches
• Follow up information about only important launches for your team - Launches table
• Track new BTS issues in your run - Unique bugs table
• Track the most unstable test cases in the build - Most failed test-cases table
• See the trend of the number of failed test cases from build to build - Failed cases trend chart
• See the trend of the number of failed and skipped test cases from build to build - Non-passed test-cases trend chart
• Compare two launches together - Different launches comparison chart
• Track the passing rate for one launch - Passing rate per launch
• Track the passing rate for the build - Passing rate summary
• Find the flakiest tests in the build - Flaky test cases table (TOP-20)
• Compare statistics for different builds on one graph - Cumulative trend chart
• Track the most popular failure reasons in the build - Most popular pattern table (TOP-20)
• Track the passing rate of different components of your application - Component health check
• Track the statistics of different components of your application - Component health check (table)
• Track the TOP-20 tests with the longest execution time - Most time-consuming test cases widget (TOP-20)
    - + \ No newline at end of file diff --git a/dashboards-and-widgets/WorkWithDashboards/index.html b/dashboards-and-widgets/WorkWithDashboards/index.html index 1e38a5b19..0a8cf063e 100644 --- a/dashboards-and-widgets/WorkWithDashboards/index.html +++ b/dashboards-and-widgets/WorkWithDashboards/index.html @@ -12,7 +12,7 @@ - + @@ -25,7 +25,7 @@ project. You can add the description for your dashboard as well.

  • Click "Add" button. The new dashboard will be created.

  • Now you can add widgets to the dashboard.

    Edit dashboard

    To edit a dashboard, perform the following steps:

    1. Navigate to "All Dashboards" page.

    2. Click the "Edit" icon in the top corner of the dashboard or click the name of the dashboard and click 'Edit' button in the header of the dashboard.

    3. The "Edit Dashboard" popup will be opened.

    4. Make the necessary changes and click "Update" button. The dashboard will be displayed with updates.

    Delete dashboard

    To remove a dashboard from the project, perform the following steps:

    1. Click the "Delete" button in the top right corner of the dashboard.

    2. Click the "Delete" button on confirmation pop-up.

    The dashboard and related widgets will be deleted from the system.

    - + \ No newline at end of file diff --git a/dev-guides/APIDifferencesBetweenV4AndV5/index.html b/dev-guides/APIDifferencesBetweenV4AndV5/index.html index 09067dcd2..1c064b868 100644 --- a/dev-guides/APIDifferencesBetweenV4AndV5/index.html +++ b/dev-guides/APIDifferencesBetweenV4AndV5/index.html @@ -12,7 +12,7 @@ - + @@ -30,7 +30,7 @@ Using request above you can retrieve physical id from database of just reported test item and use it in next queries for items, logs etc.

    PUT /v1/{projectName}/item/info - Bulk update items attributes and descriptions.

    PUT /v1/{projectName}/item/issue/link - Link external issue for specified test items.

    PUT /v1/{projectName}/item/issue/unlink - Unlink external issue for specified test items.


    User controller

    GET /v1/user/export - Export information about all users.

    GET /v1/user/registration - Get user bid info.

    GET /v1/user/registration/info - Validate user login and/or email.

    GET /v1/user/search - Search users by term.

    GET /v1/user/{userName}/projects - Retrieve all user projects.

    DELETE /v1/user - Delete specified users by ids.


    Widget controller

A new group of widgets that may have several levels

    GET /v1/{projectName}/widget/multilevel/{widgetId} - Get multilevel widget by id.


    New controllers

    Bug tracking system controller - replacement of external system controller.

    GET /v1/bts/{integrationId}/fields-set - Get list of fields required for posting ticket.

    GET /v1/bts/{integrationId}/issue_types - Get list of allowable issue types for bug tracking system.

    GET /v1/bts/{projectName}/ticket/{ticketId} - Get ticket from the bts integration.

    GET /v1/bts/{projectName}/{integrationId}/fields-set - Get list of fields required for posting ticket (project integration).

    GET /v1/bts/{projectName}/{integrationId}/issue_types - Get list of allowable issue types for bug tracking system (project integration).

POST /v1/bts/{projectName}/{integrationId}/ticket - Post ticket to the bts integration.


    Integration controller

    GET /v1/integration/global/all - Get available global integrations.

    GET /v1/integration/global/all/{pluginName} - Get available global integrations for plugin.

    GET /v1/integration/project/{projectName}/all - Get available project integrations.

    GET /v1/integration/project/{projectName}/all/{pluginName} - Get available project integrations for plugin.

    GET /v1/integration/{integrationId} - Get specified global integration by id.

    GET /v1/integration/{integrationId}/connection/test - Test connection to the global integration.

    GET /v1/integration/{projectName}/{integrationId}/connection/test - Test connection to the integration through the project config.

    GET /v1/integration/{projectName}/{integrationId} - Get specified project integration by id.

    PUT /v1/integration/{projectName}/{integrationId} - Update specified project integration by id.

    PUT /v1/integration/{integrationId} - Update specified global integration by id.

    PUT /v1/integration/{projectName}/{integrationId}/{command} - Execute command to the integration instance.

    POST /v1/integration/{pluginName} - Create global integration.

    POST /v1/integration/{projectName}/{pluginName} - Create project integration instance.

    DELETE /v1/integration/all/{type} - Delete all global integrations by type.

    DELETE /v1/integration/{projectName}/all/{type} - Delete all project integrations by type.

    DELETE /v1/integration/{integrationId} - Delete specified global integration by id.

    DELETE /v1/integration/{projectName}/{integrationId} - Delete specified project integration by id.
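For illustration, a hedged sketch of calling one of these endpoints with curl, following the conventions used in the reporting guide (host and API token are placeholders):

curl --header "Authorization: Bearer 039eda00-b397-4a6b-bab1-b1a9a90376d1" \
--request GET \
http://rp.com/api/v1/integration/global/all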


    Launch asynchronous controller

    POST /v2/{projectName}/launch - Start launch for specified project.

    POST /v2/{projectName}/launch/merge - Merge set of specified launches in common one.

    PUT /v2/{projectName}/launch/{launchId}/finish - Finish launch for specified project.


    Test item asynchronous controller

    POST /v2/{projectName}/item - Start root test item.

    POST /v2/{projectName}/item/{parentItem} - Start child test item.

    PUT /v2/{projectName}/item/{testItemId} - Finish test item.


    Log asynchronous controller

    POST /v2/{projectName}/log - Create log.


    Differences in reporting

    Launch rerun

    Rerun developers guide

    Nested steps

    Nested steps wiki

    Launch logs

The create log request contains the fields launchUuid and itemUuid. At least one of them must not be null.

    {
    "itemUuid": "7f32fb6a-fcc2-4ecb-a4f7-780c559a37ca",
    "launchUuid": "6fd4638d-90e2-4f52-a9bd-bf433ebfb0f3"
    }

If both are present, the log will be saved as a test item log. If only itemUuid is present, the log will be saved as a test item log. If only launchUuid is present, the log will be saved as a launch log.
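For instance, a hedged sketch of a launch log request (only launchUuid is set; the time, message, and level fields follow the create-log example from the reporting guide, and the values are illustrative):

{
"launchUuid": "6fd4638d-90e2-4f52-a9bd-bf433ebfb0f3",
"time": "2019-11-06T15:50:53.187Z",
"message": "Launch level log message",
"level": 40000
}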

    Java client has static methods for launch log reporting:

    //TODO fix links after java client final version release

    - + \ No newline at end of file diff --git a/dev-guides/AsynchronousReporting/index.html b/dev-guides/AsynchronousReporting/index.html index ba08d22f8..1211931b0 100644 --- a/dev-guides/AsynchronousReporting/index.html +++ b/dev-guides/AsynchronousReporting/index.html @@ -12,7 +12,7 @@ - + @@ -57,7 +57,7 @@ In case the launch finish request is not last in the queue it will be finished anyway. But all the next requests under the launch will be handled as soon as they get to the consumer and the launch statistics will be updated. So it is possible to report items under already finished launch.

    - + \ No newline at end of file diff --git a/dev-guides/AttachmentsGuide/index.html b/dev-guides/AttachmentsGuide/index.html index a483ef51d..7ea44099f 100644 --- a/dev-guides/AttachmentsGuide/index.html +++ b/dev-guides/AttachmentsGuide/index.html @@ -12,7 +12,7 @@ - + @@ -31,7 +31,7 @@ There is no built-in capability to send attachments during test execution as the Jest Reporter works post-factum and does not allow to provide specific data to the report.

agent-js-postman: There is no built-in capability at the moment to send attachments during test execution due to the specifics of Postman's nature.

agent-js-nightwatch: Attachments can be added via the ReportingAPI; follow the docs for details.

    An example for each agent can be found here.

    How to log attachments (Screenshots) on .Net agents?

    General documentation on this in .net-commons: https://github.com/reportportal/commons-net/blob/develop/docs/Logging.md

    You can attach any binary content:

    Context.Current.Log.Info("my binary", "image/png", bytes);
    // where bytes is byte[] and image/png is mime type of content

    Or use file instead:

    Context.Current.Log.Info("my file", new FileInfo(filePath));
    // where filePath is relative/absolute path to your file
    // mime type is determined automatically
    - + \ No newline at end of file diff --git a/dev-guides/BackEndJavaContributionGuide/index.html b/dev-guides/BackEndJavaContributionGuide/index.html index 8878b66d1..51b559877 100644 --- a/dev-guides/BackEndJavaContributionGuide/index.html +++ b/dev-guides/BackEndJavaContributionGuide/index.html @@ -12,7 +12,7 @@ - + @@ -43,7 +43,7 @@ How to apply a fix and check if everything works fine? To do this you should follow these steps:

    Summary notes

This documentation should help you save time configuring a local ReportPortal dev environment and give you an understanding of some standards/conventions that we try to stick to.

    Simplified development workflow should look like this:

    - + \ No newline at end of file diff --git a/dev-guides/InteractionsBetweenAPIAndAnalyzer/index.html b/dev-guides/InteractionsBetweenAPIAndAnalyzer/index.html index 9a6c1f768..6dacf38d8 100644 --- a/dev-guides/InteractionsBetweenAPIAndAnalyzer/index.html +++ b/dev-guides/InteractionsBetweenAPIAndAnalyzer/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Interactions between API and Analyzer

    Overview

Communication between the API service and the analyzer service is carried out using AMQP 0-9-1 with RabbitMQ as the message broker. The API service creates a virtual host named analyzer inside RabbitMQ on start. Analyzers, in their turn, connect to the virtual host and declare an exchange with a name and arguments. Any type of request from the API and response from the analyzer is stored in the same queue. Request and response messages are presented as JSON.

    Declaring exchange

Each analyzer has to declare a direct exchange with the following arguments:

    • analyzer - Name of analyzer (string)
    • version - Analyzer version (string)
    • analyzer_index - Is indexing supported (boolean, false by default)
    • analyzer_log_search - Is log searching supported (boolean, false by default)
    • analyzer_priority - Priority of analyzer (number). The lower the number, the higher the priority.

    Declaring queues

    Each analyzer has to declare 5 queues with names: analyze, search, index, clean, delete.
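As an illustration, here is a minimal sketch of these declarations in Java with Spring AMQP (the stack the Examples section below refers to). The host, analyzer name, version, and priority are assumptions for this example; queue bindings and message consumption are omitted:

import java.util.HashMap;
import java.util.Map;

import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;

public class AnalyzerDeclarations {
    public static void main(String[] args) {
        // Connect to the "analyzer" virtual host created by the API service on start
        CachingConnectionFactory factory = new CachingConnectionFactory("rabbitmq.local"); // assumed host
        factory.setVirtualHost("analyzer");
        RabbitAdmin admin = new RabbitAdmin(factory);

        // Exchange arguments described above
        Map<String, Object> arguments = new HashMap<>();
        arguments.put("analyzer", "custom-analyzer");   // name of analyzer (string)
        arguments.put("version", "1.0.0");              // analyzer version (string)
        arguments.put("analyzer_index", false);         // is indexing supported
        arguments.put("analyzer_log_search", false);    // is log searching supported
        arguments.put("analyzer_priority", 1);          // lower number = higher priority

        // Declare the direct exchange with the arguments
        admin.declareExchange(new DirectExchange("custom-analyzer", false, false, arguments));

        // Declare the 5 queues every analyzer has to provide
        for (String queueName : new String[] { "analyze", "search", "index", "clean", "delete" }) {
            admin.declareQueue(new Queue(queueName));
        }
    }
}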

    Indexing

An index request can be used to store info about logs; analysis will then proceed based on that info. Requests and responses use the index queue.

    Index request structure from API:

    IndexLaunch:

• launchId - Id of launch (example: 101)
• launchName - Name of launch (example: Smoke Test)
• project - Id of project (example: 12)
• analyzerConfig - Analyzer configuration
• testItems - Array of test items

    AnalyzerConfig:

• minDocFreq - The minimum frequency of the saved logs (example: 1)
• minTermFreq - The minimum frequency of the word in the analyzed log (example: 1)
• minShouldMatch - Percent of words equality between the analyzed log and a particular log from the index (example: 95)
• numberOfLogLines - The number of first lines of the log message that should be considered in the index (example: -1)
• isAutoAnalyzerEnabled - Is auto analysis enabled (example: true)
• analyzerMode - Analysis mode. Allowable values: "all", "launch_name", "current_launch" (example: all)
• indexingRunning - Is indexing running (example: false)

    IndexTestItem:

• testItemId - Id of test item (example: 123)
• issueType - Issue type locator (example: pb001)
• uniqueId - Unique id of test item (example: auto:c6edafc24a03c6f69b6ec070d1fd0089)
• isAutoAnalyzed - Is test item auto analyzed (example: false)
• logs - Array of test item logs

    IndexLog:

• logId - Id of log (example: 125)
• logLevel - Log level (example: 40000)
• message - Log message (example: java.lang.AssertionError: 1 expectation failed. Expected status code <200> but was <400>. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423))

The API sends an array of IndexLaunch entities that have to be indexed.

Example in JSON:

    [
    {
    "launchId":110,
    "launchName":"Smoke Test",
    "project":11,
    "analyzerConfig":{
    "minDocFreq":1,
    "minTermFreq":1,
    "minShouldMatch":95,
    "numberOfLogLines":-1,
    "isAutoAnalyzerEnabled":true,
    "analyzerMode":"all",
    "indexingRunning":false
    },
    "testItems":[
    {
    "testItemId":101,
    "issueType":"pb001",
    "uniqueId":"auto:c6edafc24a03c6f69b6ec070d1fd0089",
    "isAutoAnalyzed":false,
    "logs":[
    {
    "logId":111,
    "logLevel":40000,
    "message":"java.lang.AssertionError: 1 expectation failed. Expected status code <200> but was <400>."
    },
    {
    "logId":112,
    "logLevel":40000,
    "message":"java.lang.AssertionError: 1 expectation failed. Expected status code <200> but was <500>."
    }
    ]
    }
    ]
    }
    ]

The analyzer should return a response with the number of indexed logs.

    Analyze

An analyze request can be used to find matches from the request in the indexed data. Requests and responses use the analyze queue.

The analyze request is the same as the IndexLaunch entity used for indexing. It contains info about test items and logs that have to be analyzed.

The response from the analyzer should contain an array of the following entities (info about analyzed test items):

    AnalyzedItemRs:

    AttributeDescriptionExample
    itemIdId of analyzed test item111
    relevantItemIdId of relevant test item123
    issueTypeIssue type locatorpb001
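Using the example values from the list above, a response array might look like:

[
{
"itemId": 111,
"relevantItemId": 123,
"issueType": "pb001"
}
]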

    Search logs

A search request can be used to find similar logs from test items with the to_investigate type. Requests and responses use the search queue.

    Search logs request from API:

    SearchRq:

• launchId - Id of launch (example: 111)
• launchName - Name of launch (example: Smoke Test)
• itemId - Id of test item (example: 112)
• projectId - Id of project (example: 10)
• filteredLaunchIds - Array of launch ids among which the search would be applied (example: [1,2,3])
• logMessages - Array of log messages to look for (example: ["first message", "second message"])
• logLines - Number of log lines that will be used in comparison (example: 5)

The analyzer should return an array of matching log ids as a response.
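Assembled from the example values in the list above, a SearchRq message looks like:

{
"launchId": 111,
"launchName": "Smoke Test",
"itemId": 112,
"projectId": 10,
"filteredLaunchIds": [1, 2, 3],
"logMessages": ["first message", "second message"],
"logLines": 5
}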

    Clean

A clean request can be used to remove stored logs from the index. Requests use the clean queue.

    Clean logs request from API:

    CleanIndexRq:

• project - Id of project (example: 10)
• ids - Array of log ids to be removed (example: [111, 122, 123])

The analyzer does not send a response to this request.
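Assembled from the example values above, a clean request body looks like:

{
"project": 10,
"ids": [111, 122, 123]
}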

    Delete

A delete request can be used to delete an entire index. Requests use the delete queue.

The request message from the API contains only the id of the index.

The analyzer does not send a response to this request.

    Examples

A custom analyzer written in Java using Spring AMQP.

    - + \ No newline at end of file diff --git a/dev-guides/PluginDevelopersGuide/index.html b/dev-guides/PluginDevelopersGuide/index.html index 776595e86..c998b4cdd 100644 --- a/dev-guides/PluginDevelopersGuide/index.html +++ b/dev-guides/PluginDevelopersGuide/index.html @@ -12,7 +12,7 @@ - + @@ -35,7 +35,7 @@ allows us to load it using GetFileCommand.

    That's how our build.gradle looks now:

    import com.github.spotbugs.SpotBugsTask

    plugins {
    id "io.spring.dependency-management" version "1.0.9.RELEASE"
    id 'java'
    id 'com.github.johnrengelman.shadow' version '5.2.0'
    id "com.moowork.node" version "1.3.1"
    }

    apply from: 'ui.gradle'

    repositories {
    mavenCentral()
    }

    dependencies {
    implementation 'com.epam.reportportal:plugin-api:5.4.0'
    annotationProcessor 'com.epam.reportportal:plugin-api:5.4.0'
    }

    artifacts {
    archives shadowJar
    }

    sourceSets {
    main {
    resources
    {
    exclude '**'
    }
    }
    }

    jar {
    from("src/main/resources") {
    into("/resources")
    }
    from("ui/build") {
    into("/resources")
    }
    manifest {
    attributes(
    "Class-Path": configurations.compile.collect { it.getName() }.join(' '),
    "Plugin-Id": "${pluginId}",
    "Plugin-Version": "${project.version}",
    "Plugin-Provider": "Report Portal",
    "Plugin-Class": "com.epam.reportportal.extension.example.ExamplePlugin",
    "Plugin-Service": "api"
    )
    }
    }

    shadowJar {
    from("src/main/resources") {
    into("/resources")
    }
    from("ui/build") {
    into("/resources")
    }
    configurations = [project.configurations.compile]
    zip64 true
    dependencies {
    }
    }

    task plugin(type: Jar) {
    getArchiveBaseName().set("plugin-${pluginId}")
    into('classes') {
    with jar
    }
    into('lib') {
    from configurations.compile
    }
    extension('zip')
    }

    task assemblePlugin(type: Copy) {
    from plugin
    into pluginsDir
    }

    task assemblePlugins(type: Copy) {
    dependsOn subprojects.assemblePlugin
    }

    compileJava.dependsOn npm_run_build

    Now we can just execute ./gradlew build and get plugin binaries (as jar and as shadowJar) that can be loaded to the application.

    Event listeners

All plugin commands are executed through the core application endpoint with the mapping:

    https://host:port/v1/integration/{projectName}/{integrationId}/{command}

As we can see, integrationId is a mandatory parameter that specifies the integration to be used in the command execution.
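For example, a hedged sketch of invoking the testConnection command registered by the example extension below (host, project name, integration id, and token are placeholders; the V4/V5 API list exposes this route as PUT):

curl --header "Content-Type: application/json" \
--header "Authorization: Bearer 039eda00-b397-4a6b-bab1-b1a9a90376d1" \
--request PUT \
--data '{}' \
http://rp.com/api/v1/integration/rp_project/1/testConnection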

We can affect the logic executed in the core application from the plugin by handling a predefined set of events. For now, we will use the mandatory PluginLoadedEventHandler as an example.

    This handler creates the very first integration and uses PluginInfoProvider to update plugin data in the database.

To add a new listener, we should use the ApplicationContext after the plugin is loaded, so we do it in a method marked with @PostConstruct.

Also, we should remove listeners when we unload the plugin, so we implement the DisposableBean interface and provide this logic in the destroy() method.

    That's how our extension looks now:


    @Extension
    public class ExampleExtension implements ReportPortalExtensionPoint, DisposableBean {

    public static final String BINARY_DATA_PROPERTIES_FILE_ID = "example-binary-data.properties";
    private static final String PLUGIN_ID = "example";
    private final String resourcesDir;

    private final Supplier<Map<String, PluginCommand<?>>> pluginCommandMapping = new MemoizingSupplier<>(this::getCommands);
    private final Supplier<ApplicationListener<PluginEvent>> pluginLoadedListenerSupplier;

    @Autowired
    private ApplicationContext applicationContext;

    @Autowired
    private IntegrationTypeRepository integrationTypeRepository;

    @Autowired
    private IntegrationRepository integrationRepository;

    public ExampleExtension(Map<String, Object> initParams) {
    resourcesDir = IntegrationTypeProperties.RESOURCES_DIRECTORY.getValue(initParams).map(String::valueOf).orElse("");

    pluginLoadedListenerSupplier = new MemoizingSupplier<>(() -> new ExamplePluginEventListener(PLUGIN_ID,
    new PluginEventHandlerFactory(integrationTypeRepository,
    integrationRepository,
    new PluginInfoProviderImpl(resourcesDir, BINARY_DATA_PROPERTIES_FILE_ID)
    )
    ));
    }

    @Override
    public Map<String, ?> getPluginParams() {
    Map<String, Object> params = new HashMap<>();
    params.put(ALLOWED_COMMANDS, new ArrayList<>(pluginCommandMapping.get().keySet()));
    return params;
    }

    @Override
    public PluginCommand<?> getCommandToExecute(String commandName) {
    return pluginCommandMapping.get().get(commandName);
    }

    @PostConstruct
    public void createIntegration() throws IOException {
    initListeners();
    }

    private void initListeners() {
    ApplicationEventMulticaster applicationEventMulticaster = applicationContext.getBean(AbstractApplicationContext.APPLICATION_EVENT_MULTICASTER_BEAN_NAME,
    ApplicationEventMulticaster.class
    );
    applicationEventMulticaster.addApplicationListener(pluginLoadedListenerSupplier.get());
    }

    @Override
    public void destroy() {
    removeListeners();
    }

    private void removeListeners() {
    ApplicationEventMulticaster applicationEventMulticaster = applicationContext.getBean(AbstractApplicationContext.APPLICATION_EVENT_MULTICASTER_BEAN_NAME,
    ApplicationEventMulticaster.class
    );
    applicationEventMulticaster.removeApplicationListener(pluginLoadedListenerSupplier.get());
    }

    private Map<String, PluginCommand<?>> getCommands() {
    Map<String, PluginCommand<?>> pluginCommandMapping = new HashMap<>();
    pluginCommandMapping.put("getFile", new GetFileCommand(resourcesDir, BINARY_DATA_PROPERTIES_FILE_ID));
    pluginCommandMapping.put("testConnection", (integration, params) -> true);
    return pluginCommandMapping;
    }
    }

    Lazy initialization

All plugin components that rely on @Autowired dependencies should be loaded lazily using MemoizingSupplier or another lazy-load mechanism. This is a restriction of the plugin installation flow:

    note

We create the extension object using the constructor and only then autowire dependencies. If we don't use lazy initialization, all objects created in the constructor will be created with NULL references to the dependencies marked as @Autowired.

    - + \ No newline at end of file diff --git a/dev-guides/ReportPortalAPI/index.html b/dev-guides/ReportPortalAPI/index.html index 6bb8ba236..6763e9061 100644 --- a/dev-guides/ReportPortalAPI/index.html +++ b/dev-guides/ReportPortalAPI/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/dev-guides/ReportingDevelopersGuide/index.html b/dev-guides/ReportingDevelopersGuide/index.html index ed29709bc..ab35c3802 100644 --- a/dev-guides/ReportingDevelopersGuide/index.html +++ b/dev-guides/ReportingDevelopersGuide/index.html @@ -12,7 +12,7 @@ - + @@ -42,7 +42,7 @@ To do that use the same log endpoint, but in body do not send itemUuid

    {
    "launchUuid": "96d1bc02-6a3f-451e-b706-719149d51ce4",
    "time": "2019-11-06T15:50:53.187Z",
    "message": "java.lang.NullPointerException",
    "level": 40000,
    "file": {
    "name": "file2.txt"
    }
    }

In the same way we can report all the remaining test items.

    Finish root(suite) item

Finishing the root item can be done the same way as finishing parent and child items, but we should specify its uuid in the request parameter.

    curl --header "Content-Type: application/json" \
    --header "Authorization: Bearer 039eda00-b397-4a6b-bab1-b1a9a90376d1" \
    --request PUT \
    --data '{"endTime":"1574423247000","launchUuid":"96d1bc02-6a3f-451e-b706-719149d51ce4"}' \
    http://rp.com/api/v1/rp_project/item/1e183148-c79f-493a-a615-2c9a888cb441
    {
    "endTime": "1574423247000",
    "launchUuid": "96d1bc02-6a3f-451e-b706-719149d51ce4"
    }

    Finish launch

    When we finished all test items, it's time to finish launch. Endpoint:

    PUT /api/{version}/{projectName}/launch/{launchUuid}/finish

    Finish request model:

• endTime - Required. Launch end time. Examples: 2019-11-22T11:47:01+00:00 (ISO 8601); Fri, 22 Nov 2019 11:47:01 +0000 (RFC 822, 1036, 1123, 2822); 2019-11-22T11:47:01+00:00 (RFC 3339); 1574423221000 (Unix Timestamp)
• status - Optional. Launch status. Allowable values: "passed", "failed", "stopped", "skipped", "interrupted", "cancelled". Default: calculated from children test items. Example: failed
• description - Optional. Launch description. Overrides the description on start. Default: empty. Example: service test
• attributes - Optional. Launch attributes. Pairs of key and value. Overrides attributes on start. Default: empty

    Finish response model

• id - Required. Launch UUID. Example: 6f084c4d-edb5-4691-90ba-d9e819ba61ba
• number (*) - Optional. Launch number. Example: 1
• link (*) - Optional. UI link to launch. Example: http://rp.com/ui/#rp_project/launches/all/73336

(*) - In the case of the asynchronous endpoint, the field is missing or empty.

    Full request:

    curl --header "Content-Type: application/json" \
    --header "Authorization: Bearer 039eda00-b397-4a6b-bab1-b1a9a90376d1" \
    --request PUT \
    --data '{"endTime":"1574423255000"}' \
    http://rp.com/api/v1/rp_project/launch/96d1bc02-6a3f-451e-b706-719149d51ce4/finish

    Where body is:

    {
    "endTime": "1574423255000"
    }

    Example

An example using bash can be found here.

    - + \ No newline at end of file diff --git a/dev-guides/RerunDevelopersGuide/index.html b/dev-guides/RerunDevelopersGuide/index.html index e882472c6..182138d4f 100644 --- a/dev-guides/RerunDevelopersGuide/index.html +++ b/dev-guides/RerunDevelopersGuide/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Rerun developers guide

    What is rerun

    Let's imagine we have some set of tests:

After the run we can see a few failed items:

We fix the issues and want to launch the tests again. But running all the tests can take a lot of time, so it would be better to run only the failed tests from the previous launch.

    Now we have the following:

So what do we have here? Two launches with the same tests that were simply started again, but they differ in passed and failed items, and it is hard to find which test was fixed and which was not.

The main idea of reruns is to restart the same launch and trace changes between runs without creating a new launch every time.

    Let's try to report the same launches using rerun.

We have only one launch with the last run's data.

On the step view we can see that items with the names getActivitiesForProject, getActivityPositive and getTestITemActivitiesPositive have retries. Items getActivityPositive and getTestITemActivitiesPositive were fixed, and getActivitiesForProject is still failing.

    How to start rerun

    Latest launch

    Using API

To start a launch rerun, call the default start launch endpoint, adding the "rerun"=true parameter in the request body.

    {
    "name": "launch_name",
    "description": "some description",
    "mode": "DEFAULT",
    "rerun": true
    }

The response will contain the found launch id (for the asynchronous endpoint) or id and number (for the synchronous one).

    {
    "id": "89f6d409-bee0-428e-baca-4848f86c06e7",
    "number": 4
    }

    Using agent

To start a launch rerun, add rp.rerun=true to the reportportal.properties file. There is no need to change anything else (name, project, etc.).

    rp.endpoint=https://rp.com
    rp.apiKey=caccb4bd-f6e7-48f2-af3a-eca0f566b3bd
    rp.launch=rerun_test_example
    rp.project=reporting-test
    rp.reporting.async=true
    rp.rerun=true

    Handling

The system tries to find the latest launch on the project with the same name as in the request.

If the launch is found, the system updates the following attributes (if they are present in the request and differ from the stored values):

    • Mode
    • Description
    • Attributes
    • UUID
    • Status = IN_PROGRESS

If the system cannot find a launch with the same name, it throws an error with a 404 code.

    Specified launch

    Using API

To start a launch rerun, call the default start launch endpoint, adding the "rerun"=true and "rerunOf"=launch_uuid parameters in the request body, where launch_uuid is the UUID of the launch that has to be rerun.

    {
    "name": "launch_name",
    "description": "some description",
    "mode": "DEFAULT",
    "rerun": true,
    "rerunOf": "79446272-a439-45f9-8073-5ca7869f140b"
    }

The response will contain the found launch id (for the asynchronous endpoint) or id and number (for the synchronous one).

    {
    "id": "79446272-a439-45f9-8073-5ca7869f140b",
    "number": 4
    }

    Using agent

To start a launch rerun, set rp.rerun=true and rp.rerun.of=launch_uuid in the reportportal.properties file, where launch_uuid is the UUID of the launch that has to be rerun.

    rp.endpoint=https://rp.com
    rp.apiKey=caccb4bd-f6e7-48f2-af3a-eca0f566b3bd
    rp.launch=rerun_test_example
    rp.project=reporting-test
    rp.reporting.async=true
    rp.rerun=true
    rp.rerun.of=79446272-a439-45f9-8073-5ca7869f140b

Where 79446272-a439-45f9-8073-5ca7869f140b is the UUID of the desired launch.

    Handling

The same as for the latest launch.

    Test Items behavior

There are no differences in the API calls for starting and finishing items inside a rerun launch, but the handling of such items is different.

Container types (have children)

The system tries to find an item with the same name and set of parameters under the same path.

If such an item is found, the following attributes will be updated:

    • Description
    • UUID
    • Status = IN_PROGRESS

If not, a new item will be created.

    Step types (without children)

The system tries to find an item with the same name and set of parameters under the same path.

If such an item is found, a retry of the item will be created.

If not, a new item will be created.

    Example

You can try to rerun a launch here

    - + \ No newline at end of file diff --git a/dev-guides/RetriesReporting/index.html b/dev-guides/RetriesReporting/index.html index 6d4cc2e29..d052c420c 100644 --- a/dev-guides/RetriesReporting/index.html +++ b/dev-guides/RetriesReporting/index.html @@ -12,7 +12,7 @@ - + @@ -26,7 +26,7 @@ Also Test item with type Suite cannot be reported as a retry.

Retries handling is triggered only if the Test item has the retry=true flag in the request. For example:

The first request will trigger retries handling, but if it's the first reported Test item, it won't be a retry:

    {
    "name": "example step",
    "startTime": "1574423237100",
    "type": "step",
    "launchUuid": "<launch uuid>",
    "description": "Item that should be retried",
    "retry": true
    }

The second request won't trigger retries handling, because retry=false is specified (or the field isn't provided) in the request:

    {
    "name": "example step",
    "startTime": "1574423237100",
    "type": "step",
    "launchUuid": "<launch uuid>",
    "description": "Item that should be retried",
    "retry": false
    }

As a result, 2 separate Test items will be displayed, so the ORDER of sent requests matters (if these items are sent in reversed order, they will be grouped as retries).

In ReportPortal, the only Test item from the retries group that has statistics and can have an issue attached is the one with the max startTime. In the previous requests, startTime was 1574423237000 for the first one and 1574423237100 for the second one, so the second one is the 'main' Test Item with statistics and issue (if attached).

    - + \ No newline at end of file diff --git a/features/AIFailureReasonDetection/index.html b/features/AIFailureReasonDetection/index.html index fc97101d1..4a0164b8f 100644 --- a/features/AIFailureReasonDetection/index.html +++ b/features/AIFailureReasonDetection/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    AI-based failure reason detection

    In the realm of test automation, failure analysis often becomes a bottleneck, consuming valuable time and resources. ReportPortal introduces a game-changing solution with its AI-powered failure reason detection feature. Employing advanced Machine Learning (ML) algorithms, this feature streamlines your test processes, enabling quicker, more accurate results.

    Unlocking Efficiency through Automation

    Running daily regression tests can be a double-edged sword: they're essential for maintaining a robust application, but they also generate an avalanche of test results that need analysis. The good news? ReportPortal's AI algorithms take over this repetitive task, automatically categorizing test failures according to their root cause. As a result, your team can shift their focus to newly emerged issues, substantially cutting down the time spent on manual triage.

    Speed and Precision in Defect Identification

    The power of AI doesn't stop at handling daily jobs; it extends to making your defect identification process lightning-fast and razor-sharp. ReportPortal's Auto-Analysis feature scans through test results, logs, and other associated data to quickly pinpoint failures and automatically tag them with defect types. This efficiency enables your team to discover a maximum number of bugs in minimal time, supercharging your QA process.

    Elevated Accuracy in Failure Classification

    Human error is an inevitable part of any process, particularly one as monotonous as going through lines of test logs. ReportPortal's AI-driven approach minimizes this risk. By automating the classification of test failures, it not only eliminates manual errors but also adds an extra layer of precision that even the most experienced testers might miss.

    ReportPortal's AI functionality comes in three distinct forms to accommodate various testing needs:

    • Analyzer: This feature automatically classifies test failures, sparing your team the manual labor of sifting through results. Utilizing advanced algorithms, the Analyzer categorizes different types of test failures, so you can prioritize issues that need immediate attention.
    • Unique Error: This tool groups identical test failures together for accelerated bulk analysis. By clustering similar failures, Unique Error allows for more efficient troubleshooting and quicker resolution of recurring issues.
    • ML-Based Suggestions: Leveraging machine learning algorithms, this feature provides suggestions for failures that are most similar to ones you've encountered before during manual analysis. The suggestions guide your team in identifying the root causes of test failures more accurately and swiftly.

    Conclusion: A Smarter Way to Test

    AI-based failure reason detection is more than just a flashy feature: first deployed in production in 2016, long before the hype cycle surrounding GenAI technology, it's a strategic asset that enhances team productivity and the reliability of your applications. By automating the most cumbersome aspects of test analysis, ReportPortal frees up your team to focus on what truly matters: delivering high-quality software.

    Embrace the future of test automation with ReportPortal's AI capabilities and give your team the edge they've been waiting for.


    Categorization of failures based on issue roots

    In the realm of software development and QA, test failure analysis isn't just a task—it's a critical practice. The way we categorize test failures informs not only the testing process but also how quickly and efficiently developers can resolve issues. ReportPortal provides an advanced, feature-rich environment for streamlined failure categorization, which is an integral part of any agile development cycle.

    Navigate to the 'Launches' tab in ReportPortal and you'll find an intuitive breakdown of your test executions, displaying Total, Passed, Failed, and Skipped test cases. What sets ReportPortal apart is its pre-categorized issues like Product Bug, Automation Bug, and System Issue. This leaves your team with a focused subset of items marked “To Investigate.”

    The Process of Intelligent Triage

    When you open one of these “To Investigate” items, a QA engineer engages in defect triage by reviewing logs, screenshots, and video recordings, all of which are seamlessly integrated into each failed test item. Based on this comprehensive analysis, the engineer categorizes the failures through the “Make Decision” modal, as follows:

    • Product Bug: These issues stem from flaws in the application code and should be the developers' first priority for resolution.
    • Automation Bug: This category addresses issues related to test data, such as outdated tests or incorrect data, and timeouts.
    • System Issue: Failures in this category usually result from an unstable testing environment or issues with third-party software.
    • Not a Defect: Use this classification for tests that have been resolved or manually validated.

    The Takeaways

    Intelligent categorization with ReportPortal serves several essential functions:

    1. Root Cause Analysis: The key question remains: Was the failure caused by an issue within the software, or was it an external factor? With access to detailed logs, screenshots, and video, pinpointing the root cause becomes a far more efficient process.

    2. Functional Area Identification: Categorizing failures often reveals which area of the application is prone to issues, allowing for targeted remediation.

    3. Severity Assessment: Understanding the severity of a failure is critical. Is it an application issue that mandates postponing deployment, or is it a lesser issue that can be prioritized accordingly?

    In summary, the intelligent test failure categorization offered by ReportPortal is a cornerstone for delivering high-quality products swiftly and effectively.


    Quality Gates

    Quality Gates is a feature thanks to which ReportPortal becomes an integral part of a continuous testing platform. It prevents code from moving forward if it doesn't meet testing criteria. ReportPortal uses aggregated data and a rule engine to verify testing results against the required conditions.

    Top-3 benefits of Quality Gates:

    Increased Speed and Efficiency

    Automated GO / NO-GO decisions enable quick decision-making by instantly evaluating whether a particular build or code change meets the predefined quality criteria while being tested. This eliminates the need for manual reviews at various stages of the process, allowing teams to release software more quickly and efficiently and to avoid delays and human error.

    Improved Consistency and Reliability

    By setting standardized quality criteria, automated quality gates ensure that each code change adheres to the same high-quality standards. This consistency helps to reduce the likelihood of defects and errors, making the end product more reliable and robust.

    Enhanced Collaboration and Accountability

    Automated quality gates provide immediate, objective feedback that is accessible to all team members, from developers to QA to management. This shared understanding of what constitutes "quality" improves collaboration and holds everyone accountable for the quality of the software.

    In summary, Quality Gates are an essential feature in ReportPortal that helps to understand the quality of your product and make release decisions.


    REST API

    ReportPortal offers a robust set of features through its REST API, covering test data, user data, project data, statistics, integrations, and more.

    The REST API can be particularly useful for those looking to:

    • Implement your own reporter and integrate it with a custom testing framework or runner.
    • Retrieve data from ReportPortal for custom reports or integration with third-party systems.
    • Enhance functionality and develop add-ons.

    Here are some examples of how the REST API can be helpful:

    1. Creating/updating test results, test runs, and logs, adding attachments, comments, and other information as attributes. This improves the testing process in general by supplying more context and information about the test results.

    2. Integration with test management tools (Jira, TestRail, and others). It allows you to collect and manage test results and other data in one place.

    3. Integration with bug tracking systems (Jira Server, Jira Cloud, Azure DevOps, Rally, and others). This makes it possible to easily track issues.

    4. Integration with CI/CD pipelines (Jenkins, Travis CI, TeamCity, and others) for real-time test results, faster feedback, and quicker release cycles.

    5. Integration with other systems like Data Mart (Microsoft Dynamics, Power BI, Vertica). They serve for extensive data analysis.

    6. Custom integration. With the REST API, you can create your own extensions. For example, if there is no integration with the specific test framework that you need, you can build one using the API.

    Generally, the REST API can be used for getting the most out of test automation reporting tools such as ReportPortal and for extending their capabilities. The REST API allows you to track and manage bugs in one central area and to design unique reports and dashboards that might offer insightful data and assist teams in making better decisions regarding the testing process.
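
    As a minimal sketch of pulling data out via the API (the instance URL is a placeholder, and the endpoint path and paging parameters are assumptions based on a typical ReportPortal v5 instance; the API key comes from your user profile):

    import requests

    RP_URL = "https://rp.example.com"   # placeholder instance URL
    PROJECT = "reporting-test"
    HEADERS = {"Authorization": "Bearer <api key>"}

    # Fetch the ten most recent launches of a project.
    resp = requests.get(
        f"{RP_URL}/api/v1/{PROJECT}/launch",
        headers=HEADERS,
        params={"page.size": 10, "page.sort": "startTime,desc"},
    )
    resp.raise_for_status()
    for launch in resp.json()["content"]:
        print(launch["name"], launch["number"], launch["status"])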


    Real-time reporting

    Real-time test reporting provides the most current data without delay. Thanks to near real-time capabilities, you can receive the first test results within seconds after the execution has started and begin working with the results before the entire execution is completed.

    Real-time reporting in ReportPortal comes with three main advantages:

    Saving Time on Early Reaction and Early Defect Identification

    With test case results and logs appearing in ReportPortal in real-time, teams can begin triaging issues immediately. This not only saves time but also allows for early identification of defects, leading to quicker resolutions and improved code quality.

    At the same time, you can save time and effort on manual re-testing of failed test cases while the entire execution is still on the go.

    Maximized Testing Efficiency for Large Regressions

    For extensive regression tests that may take several hours to complete, real-time reporting ensures that teams don't have to wait until the end to start analyzing results. They can begin the analysis while the rest of the tests are still running, thereby making optimal use of time and resources.

    Streamlined Workflow with Continuous Feedback

    The real-time nature of the reporting enables a continuous feedback loop among testers, developers, and other stakeholders. This makes it easier to implement changes on the fly and ensures that everyone stays informed, reducing the likelihood of surprises at the end of the regression cycle.

    Overall, real-time reporting provides easy access to test results as soon as they are generated, allows teams to quickly identify issues and make decisions, reduces costs by preventing similar issues, and ensures effective communication between the team and stakeholders.


    Rich artifacts in test reports

    Test execution report in ReportPortal may contain extra details in addition to the standard test results (passed/failed status).

    These additional artifacts in test reports can consist of:

    1. Logs.

    Logs can include error messages that are very helpful for test failure analysis. With logs, it is possible to understand the root cause and set the defect type properly.

    2. Screenshots.

    You can see the image of the data displayed on the screen during the test run. Screenshots make the debugging process easier and may give insights into why this issue occurred.

    3. Video recordings.

    Logs in ReportPortal can have video recordings attached as well. A video shows the exact screen states. Using this artifact, you will be able to troubleshoot problems more quickly.

    Besides, ReportPortal also supports integration with Sauce Labs via the corresponding plugin, so it is possible to watch videos of test executions right in ReportPortal.

    4. Network traffic.

    These are details about the network calls performed while the test was running (requests and responses).

    You can send all these artifacts to ReportPortal as attachments and use them to simplify defect triage, replicate and debug issues, find performance and network-related issues, and comprehend the context in which certain problems occurred.
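
    As a rough sketch of attaching a screenshot to a log entry via the API (the multipart layout with a json_request_part follows common client implementations; the URL, UUIDs, and file names are placeholders):

    import json
    import requests

    RP_URL = "https://rp.example.com"   # placeholder instance URL
    PROJECT = "reporting-test"
    HEADERS = {"Authorization": "Bearer <api key>"}

    # One log record referencing the binary part below by file name.
    log_entry = [{
        "launchUuid": "<launch uuid>",
        "itemUuid": "<test item uuid>",
        "time": "2023-01-01T00:00:00.000Z",
        "level": "error",
        "message": "Step failed, see the attached screenshot",
        "file": {"name": "screenshot.png"},
    }]

    files = [
        ("json_request_part", (None, json.dumps(log_entry), "application/json")),
        ("file", ("screenshot.png", open("screenshot.png", "rb"), "image/png")),
    ]
    resp = requests.post(f"{RP_URL}/api/v1/{PROJECT}/log",
                         headers=HEADERS, files=files)
    print(resp.status_code, resp.text)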


    Single-entry point and unified test reporting

    ReportPortal was created with the idea in mind to be a single tool to acquire and aggregate the results of all automated tests for projects. Our centralized test automation tool is a great focus area for managers and test engineers because all test results can be accessed, reviewed and analyzed in one place.

    In the complex landscape of software testing, where multiple test frameworks, languages, and types of tests often coexist, the sheer volume and diversity of test reports can be overwhelming. ReportPortal addresses this problem elegantly by serving as a single-entry point for all test reports, providing unified and consolidated insights across the board. This single point of truth offers a multitude of benefits that can transform the way your team approaches testing.

    Unification and Standardization

    ReportPortal is designed to unify reports from various test frameworks and languages, whether they are API tests, integration tests, or end-to-end tests. It takes these disparate reports and presents them in a standardized format, making them equally understandable to QA engineers, developers, and even DevOps teams. This eliminates the need for engineers to scour multiple platforms for hidden reports, thereby reducing confusion and promoting efficiency.

    Streamlined Collaboration

    With a single link, you can direct team members to a central location where test results are not just collected but also analyzed. This facilitates collaboration by creating a universal language and understanding of test results across different team members, irrespective of their roles or technical proficiencies. The centralized reporting also aids in "Shift Left" testing approaches, where test feedback is made available earlier in the development cycle to Developers, and not only to QA Engineers.

    Immediate Feedback and Quality Gate Integration

    When combined with automated quality gates, the platform can provide instantaneous verdicts on whether code changes have passed or failed the tests. In the event of a failure, the platform's auto-analysis feature helps to pinpoint the exact cause. This real-time, automated feedback is invaluable in modern DevOps pipelines, accelerating the release process and enhancing code quality.

    In summary, ReportPortal’s single-entry point for all test reports offers the unique advantage of consolidating, standardizing, and analyzing test data in one location. This unified approach significantly simplifies the testing process, fosters better collaboration, and enables more agile development cycles. Adopting ReportPortal is a strategic move toward a more efficient and streamlined testing environment.


    Visualization of test data

    ReportPortal's dynamic visualization capabilities provide teams with real-time insights, enabling them to make informed decisions quickly and efficiently.

    Real-Time Dashboards and Widgets

    One of ReportPortal's most compelling features is its real-time dashboards and widgets. These dashboards can be customized to focus on specific test executions or take a broader view at the build level by consolidating multiple executions. As your tests run, widgets update in real-time, ensuring your data is always current. This eliminates the cumbersome task of having to recreate reports for every testing cycle or regression.

    Versatile Tracking

    ReportPortal's widgets are designed to track essential metrics and trends in your test results. Whether you are interested in code coverage, performance metrics, or failure trends, there's a widget to help you monitor it. The real-time aspect ensures that any shifts in these metrics are immediately visible, allowing your team to adapt and respond more quickly than ever before.

    Eye-Opening Insights

    The platform goes beyond just displaying test results; it offers deep insights into the technical debt of your test automation. Widgets can break down test failures by different categories, such as environmental issues, automation errors, or actual product bugs. This classification empowers teams to understand where their efforts are most needed and helps them prioritize tasks effectively.

    Demonstrating ROI of test automation

    Perhaps the most critical advantage is the ability to highlight the return on investment (ROI) of your test automation. By showing the actual number of failures due to product bugs, ReportPortal makes it easier to quantify the value your testing is bringing to the overall project.

    In conclusion, ReportPortal’s visualization capabilities offer real-time, customizable, and insightful views into your test data. These features not only save time but also provide teams with the insights needed to deliver better software, faster.


    ReportPortal is distributed under the Apache v2.0 license, and it is free to use and modify, even for commercial purposes. We offer the only paid premium feature – Quality Gates.

    If a company is interested in our services, we can provide support hours to deploy, integrate, configure, or customize the tool, as well as SaaS options.

    What can ReportPortal do?

    ReportPortal seamlessly integrates with mainstream platforms such as Jenkins and Jira, with BDD processes, and with the majority of functional and unit testing frameworks.

    Real-time integration provides businesses with the ability to manage and track execution status directly from ReportPortal.

    Test case execution results are stored following the same structure you have in your reporting suites and test plan. The test cases are shown together with all related data in one place, right where you need it: logs, screenshots, binary data. The execution pipeline of a certain test case is also available, so you can see the previous test execution report in one click.

    Our test report dashboard also gives you the ability to collaboratively analyze the test automation results and quickly get an automation test report. A particular test case can be associated with a product bug, an automation issue, or a system issue, or can be submitted as an issue ticket directly from the execution result.

    ReportPortal provides enhanced capabilities along with auto-results analysis by leveraging historical data of test execution.

    With each execution, ReportPortal automatically figures out the root cause of a failure. As a result of the AI-based defect triage, ReportPortal marks the test result with a flag. Engineers will be alerted about this issue for further analysis: whether it has been resolved already, or which test results require actual human analysis.

    What technologies are used?

    Considering the high load rate and performance requirements, we use cutting-edge technologies.

    Benefits of report automation with ReportPortal

    Report automation is the procedure by which reports are routinely and automatically refreshed.

    Report automation is an effective technique to provide information that is essential to business operations. Delivering crucial information to the appropriate people at the appropriate time becomes considerably quicker and simpler with automatically generated reports. It allows the business to get insights faster and drive better decisions.

    In ReportPortal, you have fully real-time, analytic report automation. Since the reports are generated automatically, you are insured against the human error that comes with manually produced reports.

    ReportPortal offers widgets with a user-friendly visual interface to create interactive reports for all your needs. For example, you can build a QA metrics dashboard using the Overall statistics, Unique bugs table, and Passing rate summary widgets.

    ReportPortal is a CI/CD-agnostic solution. You can use any CI environment to run automated tests and improve product quality by catching issues early in the development lifecycle.


    Additional configuration parameters

    Configuration parameter | Default Value | Service | Description
    RP_PROFILES | - | API,UAT | Specifies application settings profiles. Should be set to 'docker'
    RP_SESSION_LIVE | 3600 | UAT | Session token live time in seconds
    RP_SERVER_PORT | 8080 | UI | UI service port
    POSTGRES_SERVER | postgres | API,UAT,MIGRATIONS | PostgreSQL host
    POSTGRES_PORT | 5432 | API,UAT,MIGRATIONS | PostgreSQL port
    POSTGRES_USER | rpuser | API,UAT,MIGRATIONS | PostgreSQL user name
    POSTGRES_PASSWORD | rppass | API,UAT,MIGRATIONS | PostgreSQL user password
    POSTGRES_DB | reportportal | API,UAT,MIGRATIONS | PostgreSQL database name
    RABBITMQ_DEFAULT_USER | rabbitmq | API,ANALYZER | RabbitMQ user name
    RABBITMQ_DEFAULT_PASS | rabbitmq | API,ANALYZER | RabbitMQ user password

    An example of a docker compose file with filled-out configuration parameters can be found here.
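
    For example, a compose fragment overriding the PostgreSQL parameters might look like this (a sketch only; the service name api matches the default compose file, and the values are the defaults from the table above):

    services:
      api:
        environment:
          POSTGRES_SERVER: postgres
          POSTGRES_PORT: 5432
          POSTGRES_USER: rpuser
          POSTGRES_PASSWORD: rppass
          POSTGRES_DB: reportportal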

    Basic Monitoring Configuration

    To avoid competition for resources with the ReportPortal services, deploy a separate virtual machine for monitoring (for our instances we use an m5.large shape for the monitoring node) and install the following services:

    1) InfluxDB database: https://docs.influxdata.com/influxdb/v2.0/install/?t=Docker;

    2) Grafana: Dashboard examples(Grafana IDs): 5955, 3056.

    3) Telegraf:

    Telegraf installation guide:

    Update your system.

        sudo yum -y update

    Add Influxdata RPM repository.

        cat <<EOF | sudo tee /etc/yum.repos.d/influxdb.repo
    [influxdb]
    name = InfluxDB Repository - RHEL
    baseurl = https://repos.influxdata.com/rhel/7/x86_64/stable/
    enabled = 1
    gpgcheck = 1
    gpgkey = https://repos.influxdata.com/influxdb.key
    EOF

    Once the repository has been added, install Telegraf on RHEL 8 / CentOS 8 using the command below.

        sudo dnf -y install telegraf

    Open "telegraf.conf" file for the monitoring configuration. In the case of Kubernetes deployment need to configure telegraf on each cluster node separately.

        sudo nano /etc/telegraf/telegraf.conf

    Change the following configs (press Ctrl+W to search for the particular config):

        hostname = "api_node_1"

    Search for the "outputs.influxdb" and update URL and database name for the InfluxDB:

    [[outputs.influxdb]]
      urls = ["http://<influxdb_host>:8086"]
      database = "telegraf"

    Search for the "inputs.docker" and update next configs(should be uncommented each value which you need to add to the monitoring):

    [[inputs.docker]]
      endpoint = "unix://var/run/docker.sock"
      perdevice_include = ["cpu"]
      total_include = ["cpu", "blkio", "network"]

    Search for the "inputs.net" for adding the network metrics to the monitoring. Uncomment only the plugin name:

        [[inputs.net]]

    Save the changes, close "telegraf.conf", and start the telegraf service:

        sudo systemctl enable --now telegraf

    Check the status (it should be green and in the active/running state):

        sudo systemctl status telegraf

    In case of errors "E! [inputs.docker] Error in plugin: Got permission denied while trying to connect to the Docker daemon socket at unix://var/run/docker.sock Permission denied"

    Need to add permissions to the /var/run/docker.sock:

        sudo chmod 666 /var/run/docker.sock

    PGHero - a simple monitoring dashboard for PostgreSQL

    Functionality

    Installation

    How to enable query stats

    In the database settings (for an RDS database, in the parameter group), add/change the following parameters:

        shared_preload_libraries = 'pg_stat_statements'
    pg_stat_statements.track = all
    pg_stat_statements.max = 10000
    track_activity_query_size = 2048

    Restart the database or reboot the RDS instance. As a superuser from the psql console, run:

        CREATE extension pg_stat_statements;

    How to configure historical query stats

    To track query stats over time, create a table to store them.

    Execute the following query for table creation:

    CREATE TABLE "pghero_query_stats" (
      "id" bigserial primary key,
      "database" text,
      "user" text,
      "query" text,
      "query_hash" bigint,
      "total_time" float,
      "calls" bigint,
      "captured_at" timestamp
    );

    Build an index on the created table:

        CREATE INDEX ON "pghero_query_stats" ("database", "captured_at");

    Include the following command in the installation to schedule the task to run every 5 minutes:

        bin/rake pghero:capture_query_stats
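
    For instance, a minimal crontab sketch (the PGHero installation path is a placeholder):

    # Capture PGHero query stats every 5 minutes
    */5 * * * * cd /path/to/pghero && bin/rake pghero:capture_query_stats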

    The query stats table can grow large over time. Remove old stats with:

        bin/rake pghero:clean_query_stats
    Deploy on Ubuntu OS

    Run docker ps to check that all containers from the previous step are up and running.

    1. Check that ReportPortal is running: in the VM, open a browser at 0.0.0.0:8080 --> the ReportPortal login page will open.
    note

    If you are behind a proxy, you need to add 0.0.0.0 to the NO_PROXY (no_proxy) ENV.
    After setting up port forwarding as explained in video PART 2 (link below), open a browser on your host machine at localhost:8080 or 127.0.0.1:8080 --> the ReportPortal login page will open.

    OPTIONAL

    If you don't want to write 'sudo' before each docker command, do the following, but be aware of the possible security issue with that!

       sudo groupadd docker
    sudo usermod -aG docker $USER

    >>> RESTART VM!

    Helpful links (video tutorial)

    Part 1

    Part 2

    Deploy with AWS ECS Fargate

    c. Now inspect the logs of the traefik service to see if all RP components are being picked up in the configuration from the ECS provider.

    Index: For some obvious reasons, the index service (with a Ruby runtime) couldn't run on ECS, hence it was ported to Lambda with a Python runtime. Create a Lambda function with 128 MB of memory and a 120 s timeout with the following code.

    Add an environment variable TRAEFIK_SERVICES_URL with the address of the Traefik endpoint (it can be fetched from the task details, e.g. http://192.168.2.120:8081).

    mureq can be obtained from mureq.py - place it beside the lambda_function.py file.

    import os
    import json
    import mureq


    def lambda_handler(event, context):
        print("-------------------EVENT BEGIN-------------------------------")
        print(event)
        print("-------------------EVENT END-------------------------------")
        # Ask Traefik for the list of services it currently routes to.
        traefik_services = mureq.get(os.environ['TRAEFIK_SERVICES_URL'])

        if event['path'] == '/composite/info':
            # Aggregate the /info responses of all load-balanced services.
            rp_status = {}
            for service in traefik_services.json():
                if "loadBalancer" in service:
                    app_name = service['name'].replace('@ecs', '')
                    app_info = mureq.get(service['loadBalancer']['servers'][0]['url'] + '/info')
                    rp_status.update({app_name: app_info.json()})
            return {
                "statusCode": 200,
                "statusDescription": "200 OK",
                "isBase64Encoded": False,
                "headers": {
                    "Content-Type": "application/json"
                },
                "body": json.dumps(rp_status)
            }

        if event['path'] == '/composite/health':
            # Report the health status that Traefik tracks for each service.
            rp_health = {}
            for service in traefik_services.json():
                if "loadBalancer" in service:
                    app_name = service['name'].replace('@ecs', '')
                    app_health = list(service['serverStatus'].values())[0]
                    rp_health.update({app_name: app_health})
            return {
                "statusCode": 200,
                "statusDescription": "200 OK",
                "isBase64Encoded": False,
                "headers": {
                    "Content-Type": "application/json"
                },
                "body": json.dumps(rp_health)
            }

        if event['path'] == '/':
            # Redirect the bare root to the UI.
            redirect_url = event['headers']['x-forwarded-proto'] + "://" + event['headers']['host'] + '/ui'
            print(redirect_url)
            return {
                "statusCode": 301,
                "statusDescription": "301 Found",
                "isBase64Encoded": False,
                "headers": {
                    "Location": redirect_url
                },
                "body": ""
            }

        if event['path'] == '/ui':
            # Normalize /ui to /ui/ so relative asset paths resolve.
            redirect_url = event['headers']['x-forwarded-proto'] + "://" + event['headers']['host'] + '/ui/'
            print(redirect_url)
            return {
                "statusCode": 301,
                "statusDescription": "301 Found",
                "isBase64Encoded": False,
                "headers": {
                    "Location": redirect_url
                },
                "body": ""
            }

    Now add this Lambda to the Index TargetGroup created earlier.

    DNS

    Add the relevant DNS records in the Route53 hosted zone.

    Validation

    Access the application with the default credentials once the DNS record addition has propagated, and validate that all sections are loading without errors.


    Deploy with Docker on Linux/Mac

    ReportPortal can be easily deployed using Docker Compose.

    Install Docker

    Docker is supported by all major Linux distributions, MacOS and Windows.

    ⚠️ It is recommended to change the resource limits to at least 2 CPUs and 5 GB RAM for Docker Desktop: MAC | Windows | Linux

    Deploy ReportPortal with Docker

    1. Download the latest ReportPortal Docker Compose file from here. You can do this by running the following command:
    curl -LO https://raw.githubusercontent.com/reportportal/reportportal/master/docker-compose.yml

    Ensure you override the UAT Service environment variable RP_INITIAL_ADMIN_PASSWORD.
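
    For example, a compose override sketch (the service name uat matches the default compose file; the password value is a placeholder):

    services:
      uat:
        environment:
          RP_INITIAL_ADMIN_PASSWORD: "<your strong admin password>"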

    2. Start the application using the following command:
    docker-compose -p reportportal up -d --force-recreate

    Where:

    • -p reportportal adds project prefix 'reportportal' to all containers
    • up creates and starts containers
    • -d daemon mode
    • --force-recreate re-creates containers if there are any

    Useful commands:

    • docker-compose logs shows logs from all containers
    • docker logs <container_name> shows logs from selected container
    • docker ps -a | grep "reportportal_" | awk '{print $1}' | xargs docker rm -f Deletes all ReportPortal containers
    • docker-compose down stops and removes all ReportPortal containers
    3. Open your web browser at the IP address of the deployed environment on port 8080

    Use the following login\pass to access:

    • Default User: default\1q2w3e
    • Administrator: superadmin\erebus

    ⚠️ Please change the admin password for better security

    Optional Customisation

    1. Expose Docker Volumes to the file system

    OPTIONAL: Set the vm.max_map_count kernel setting before deploying ReportPortal with the following command.
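
    A minimal sketch, assuming the value Elasticsearch typically requires:

    sudo sysctl -w vm.max_map_count=262144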

    Give the right permissions to the ElasticSearch data folder using the following commands:

    mkdir -p data/elasticsearch
    chmod 775 data/elasticsearch
    chgrp 1000 data/elasticsearch

    For more details about ElasticSearch visit ElasticSearch guide

    2. PostgreSQL Performance Tuning

    Depending on your hardware configuration and the parameters of your system, you can additionally optimize your PostgreSQL performance by adding the following parameters to the "command" option in the Docker compose file:

     -c effective_io_concurrency=
    -c shared_buffers=
    -c max_connections=
    -c effective_cache_size=
    -c maintenance_work_mem=
    -c random_page_cost=
    -c seq_page_cost=
    -c min_wal_size=
    -c max_wal_size=
    -c max_worker_processes=
    -c max_parallel_workers_per_gather=

    Please set the values of these variables to whatever is right for your system. You can also change the PostgreSQL host by passing a new value to the POSTGRES_SERVER environment variable.
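
    As an illustration only (these numbers are assumptions to show the format, not tuning recommendations), the postgres service command could look like:

    services:
      postgres:
        command: >
          postgres
          -c shared_buffers=1GB
          -c max_connections=500
          -c effective_cache_size=3GB
          -c maintenance_work_mem=256MB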

    More info can be found at the following link


    Deploy with Docker on Windows

    In case you went with Docker on Windows, please make sure you change the 'volumes' value for the postgres container from "For unix host" to "For windows host":

    volumes:
      # For windows host
      - postgres:/var/lib/postgresql/data
      # For unix host
      # - ./data/postgres:/var/lib/postgresql/data

    If you haven’t done this, you will get an error:

    data directory “/var/lib/postgresql/data/pgdata” has wrong ownership

    Then uncomment the following:

    volumes:
      # For unix host
      # - ./data/storage:/data
      # For windows host
      - minio:/data

    And after that uncomment the following:

    # Docker volume for Windows host
    volumes:
      postgres:
      minio:

    Updating ReportPortal with Docker

    Updating ReportPortal with Docker is a two-step process.

    In the first step, your Docker Compose file should be replaced with a new one (with the latest service versions) from here.

    The second step is to update / redeploy the application using the following command:

    docker-compose -p reportportal up -d --force-recreate

    There is no strict need to back up / restore the data if you keep the postgres, elasticsearch & minio volumes. However, it is recommended (see the Maintain commands Cheat sheet).

    ReportPortal Services

    ReportPortal consists of the following services:

    • Authorization Service. In charge of access token distribution.
    • Gateway Service. The main entry point to the application. The port used by the gateway should be open and accessible from the outside network.
    • API Service. The main application API.
    • UI Service. All statics for the user interface.
    • Analyzer Service. Collects and processes the information, then sends it to ElasticSearch.
    • Index Service. Responsible for redirections, collection of services information, and handling errors.
    - + \ No newline at end of file diff --git a/installation-steps/DeployWithKubernetes/index.html b/installation-steps/DeployWithKubernetes/index.html index c8cd4544f..4a9123acd 100644 --- a/installation-steps/DeployWithKubernetes/index.html +++ b/installation-steps/DeployWithKubernetes/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/installation-steps/DeployWithoutDocker/index.html b/installation-steps/DeployWithoutDocker/index.html index bb43e6267..14d34bb2f 100644 --- a/installation-steps/DeployWithoutDocker/index.html +++ b/installation-steps/DeployWithoutDocker/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Deploy without Docker

    This instruction is designed for version 5.3.5 and might be outdated for the latest versions. Do not hesitate to contribute and send us a pull request with updates. We appreciate your help!

    In order to get started with ReportPortal on Red Hat family and Ubuntu Linux distributions for non-Docker/Kubernetes usage, please use the following links:

    1. [Outdated] ReportPortal 5.3.5

    ReportPortal 5.3.5 installation guide

    Supported OS

    Ubuntu LTS 18.04, 20.04 / Red Hat family 6, 7, 8 (RHEL, CentOS, etc)

    Required services

    • PSQL 12.6
    • RabbitMQ 3.8.14
    • ElasticSearch 7.12
    • Traefik 1.7.29

    In addition, these services are compatible with earlier versions.

    2. [Outdated] ReportPortal 5.0.0

    ReportPortal 5.0.0 installation guide

    Supported OS

    Ubuntu LTS 16.04, 18.04 / Red Hat family 6, 7 (RHEL, CentOS, etc)

    Required services

    • PSQL 11
    • RabbitMQ 3.6.10
    • ElasticSearch 6.7.0
    • Traefik 1.7.19

    Maintain commands Cheat sheet

    Export as env var: 

    export RP_PRJ=$(docker ps --filter="name=api" --format="{{.Names}}" | sed 's/\(.*\)_api_[0-9]*/\1/')

    Install/restart ReportPortal: 

    docker-compose -p $RP_PRJ up -d --force-recreate

    Show all logs: 

    docker-compose -p $RP_PRJ logs

    Show specific logs: 

    docker-compose -p $RP_PRJ logs <name, e.g. api>

    Delete everything except data: 

    docker-compose -p $RP_PRJ down --rmi all -v --remove-orphans

    Backup / Dump the data: 

    docker exec <postgres_container_name> pg_dump -U <POSTGRES_USER> <database_name> > backup.sql

    Clean up the data: 

    docker exec -it <postgres_container_name> psql -U <POSTGRES_USER> -d <database_name>
        DROP TABLE schema_migrations;
    DROP SCHEMA quartz CASCADE;
    DROP SCHEMA public CASCADE; CREATE SCHEMA public;
    \q

    Restore the data: 

    docker exec -i <postgres_container_name> psql -U <POSTGRES_USER> -d <database_name> < backup.sql

    You can download a PDF file with the commands.

    Optimal Performance Hardware Setup

    In grand total, such a structure creates 3 Launches with 243 test cases inside and produces 1253 requests.

    Now we can divide their number by the duration in seconds and get the RPS result. If the run takes 6 minutes (2 minutes per launch), then: 60s * 6 = 360 and 1253 / 360 ≈ 3.5 requests per second.

    If launches are executed in parallel, 3 at a time, then the RPS will be equal to 3.5 * 3 ≈ 10.5 rps.


    Summary

    Having information about the number of test cases in your framework, the average number of logs, the number of parallel threads, and the durations, you can calculate system capacity according to the tables below.


    Configuration testing results

    The purpose of the configuration performance testing is to determine saturation points and the overall system capacity for different instance sizes and specifications. Testing was conducted on C5 instances, which are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price-per-compute ratio (Compute Optimized Instances), powered by 2nd generation Intel Xeon Scalable (Cascade Lake) or 1st generation Intel Xeon Platinum 8000 series (Skylake-SP) processors with a sustained all-core Turbo frequency of up to 3.4 GHz and a single-core turbo frequency of up to 3.5 GHz.

    Application and Database are deployed on separate VMs

    Instance Type | Saturation point, rps | vUsers count | Disk IOPS | Java Options
    c5.xlarge | 640 | 60 | up to 3000 | -Xmx1g
    c5.2xlarge | 1374 | 115 | up to 4000 | -Xmx2g
    c5.4xlarge | 3104 | 356 | up to 8000 | -Xmx3g
    c5.9xlarge | 5700 | 489 | up to 10000 | -Xmx4g

    Application and Database are deployed on a single VM

    Instance Type | Saturation point, rps | vUsers count | Disk IOPS | Java Options
    c5.xlarge | 521 | 50 | up to 3000 | -Xmx1g
    c5.2xlarge | 1078 | 83 | up to 4000 | -Xmx2g
    c5.4xlarge | 2940 | 305 | up to 8000 | -Xmx3g
    c5.9xlarge | 5227 | 440 | up to 10000 | -Xmx4g


    4. The database separate from other services

    Consider deploying the database separately from the other RP services. It allows increasing the throughput of the server and the performance of ReportPortal overall. This can be, for example, an AWS RDS PostgreSQL database or a separate VM used only for the PostgreSQL database.

    The separate database instance should match the application instance in CPUs and RAM, but starting from the middle+ server type, the database instance may need double the CPUs and RAM compared with the application instance. This is explained by the fact that with an increase in the size of the database and the number of concurrent users, the load shifts more onto the database server: an increased volume of resources (CPU, memory, IOPS, etc.) is required to perform each DB query, since it handles a larger data volume and/or a greater number of concurrent users.

    Example for the middle+ server:

    Instance type | CPUs | RAM size, Gb | Disk space, Gb | AWS Shape
    Application instance | 16 | 32 | 200 | c5.4xlarge
    Database instance | 16 | 32 | 1000 | c5.4xlarge

    5. PostgreSQL Performance Tuning

    Since a PostgreSQL database is used, it needs a set of special configs for the best performance. This set contains two categories.

    6. Application connections pool tuning

    By default, ReportPortal has 27 connections in the pool on service-api and 27 connections in the pool on service-authorization. In general, these values are valid for small and middle servers. But starting from the middle+ server type, the connection pool may need to be increased if it's not enough for your server load.

    An undersized pool can be detected as periodic freezes and a "Loading" message when opening any page, and/or as the work with RP slowing down after a certain period of time during active reporting and UI use. Restarts of the API and UAT services can also be observed.

    To increase the connection pool on both services, add the following environment variable to the service-api and the service-authorization:

    RP_DATASOURCE_MAXIMUMPOOLSIZE=100

    After increasing the connection pool on the application side, do not forget to increase max_connections on the database side, using the following DB configuration parameter:

    max_connections=500

    The values of these parameters are given as an example only, but in general they can be valid for all types of loads for middle+ and large servers.

    Please note that the max_connections parameter must be greater than the sum of RP_DATASOURCE_MAXIMUMPOOLSIZE for the API and UAT services, plus several connections for connecting to the database from outside.
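
    A quick worked check with the example values above (the allowance for external connections is an assumption):

    max_connections > POOLSIZE(api) + POOLSIZE(uat) + external
    500 > 100 + 100 + ~10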

    7. Elasticsearch Performance Tuning

    As mentioned above, in some cases it may be necessary to increase the shard limits. The general rule is 2 active shards per 1 data stream (1 data stream equals 1 project in ReportPortal in the default configuration). If a data stream reaches the default 50 GB storage limit, it will be rolled over and a new generation will be created (please see the diagram below). Please consider that the number of Elasticsearch shards in the default configuration is insufficient for ReportPortal installations with a large number of projects. As a result, after storing close to 3000 indices without any tuning, the log-saving behavior may be incorrect.

    To retrieve statistics from a cluster (to check the shard values):

    GET /_cluster/stats

    {
        "_nodes": {
            "total": 3,
            "successful": 3,
            "failed": 0
        },
        "cluster_name": "elasticsearch",
        "cluster_uuid": "Oq5UUZUg3RGE0Pa_Trfds",
        "timestamp": 172365897412,
        "status": "green",
        "indices": {
            "count": 470,
            "shards": {
                "total": 940,
                "primaries": 470,
                "replication": 1.0,
                "index": {
                    "shards": {
                        "min": 2,
                        "max": 2,
                        "avg": 2.0
                    },
                    ...
    }

    The API returns basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, os, jvm versions, memory usage, cpu and installed plugins).

    To increase the limits of the total number of primary and replica shards for the cluster:

    PUT /_cluster/settings

    {
        "persistent": {
            "cluster.max_shards_per_node": 3000
        }
    }

    Keep in mind that the more shards you allow per node, the more resources each node will need and the worse the performance can get.

    We also recommend checking the following guides in the official Elasticsearch documentation:

    Data streams

    Size your shards


    ReportPortal 23.1 File storage options

    In ReportPortal 23.1, we can use multiple ways to store log attachments, user pictures, and plugins:

    • AWS S3
    • MinIO distributed object storage
    • File system

    Currently we have 2 file storage systems: multi-bucket and single-bucket.

    In the multi-bucket system, the structure of the buckets looks like this:

    • bucketPrefix + ‘keystore’ (bucket for storing integration secrets)
    • bucketPrefix + ‘users’ (bucket for storing user data)
    • defaultBucketName (bucket for storing plugins)
    • bucketPrefix + projectId (bucket for storing project attachments)

    In the single-bucket system, the structure of the single bucket is the following:

    • singleBucketName/integration-secrets/ (prefix for integration secrets)
    • singleBucketName/user-data/ (prefix for user data)
    • singleBucketName/plugins/ (prefix for plugins)
    • singleBucketName/project-data/projectId (prefix for project attachments)

    AWS S3

    Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Bucket names must be unique across all AWS accounts in all the AWS Regions within a partition. A partition is a grouping of Regions.

    To set up AWS S3 in the API, UAT & Jobs services, use the following variables:

    • DATASTORE_TYPE: s3
    • DATASTORE_ACCESSKEY for AWS S3 AccessKey
    • DATASTORE_SECRETKEY for AWS S3 SecretKey
    • DATASTORE_REGION for AWS region

    To set up the multi-bucket system, use the following environment variables:

    • DATASTORE_BUCKETPREFIX for prefix of bucket name (‘prj-‘ by default)
    • DATASTORE_DEFAULTBUCKETNAME for name of plugins bucket (‘rp-bucket’ by default)

    To set up the single-bucket system, use the following environment variables:

    • DATASTORE_DEFAULTBUCKETNAME for single-bucket name
    • RP_FEATURE_FLAGS: singleBucket
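
    For example, a compose environment sketch for the single-bucket S3 setup (the service name, bucket name, and region are placeholders):

    services:
      api:
        environment:
          DATASTORE_TYPE: s3
          DATASTORE_ACCESSKEY: <aws access key>
          DATASTORE_SECRETKEY: <aws secret key>
          DATASTORE_REGION: us-east-1
          DATASTORE_DEFAULTBUCKETNAME: my-rp-single-bucket
          RP_FEATURE_FLAGS: singleBucket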

    MinIO

    MinIO is a high-performance distributed object storage server. It sits on top of S3 or any other cloud storage and allows you to have a shared FS for several API, UAT & Jobs pods in Kubernetes.

    To set up MinIO in services, use the following variables:

    • DATASTORE_TYPE: minio
    • DATASTORE_ENDPOINT for endpoint (address)
    • DATASTORE_ACCESSKEY for accesskey
    • DATASTORE_SECRETKEY for secretkey

    To set the multi-bucket system, use the following environment variables:

    • DATASTORE_BUCKETPREFIX for prefix of bucket name (‘prj-‘ by default)
    • DATASTORE_DEFAULTBUCKETNAME for name of plugins bucket (‘rp-bucket’ by default)

    To set the single-bucket system, use the following environment variables:

    • DATASTORE_DEFAULTBUCKETNAME for single-bucket name
    • RP_FEATURE_FLAGS : singleBucket

    File system

    The file system option is used when you want to store this data in a mounted folder in the service-api and/or service-uat.

    To use this option, set up environment variables like this:

    • DATASTORE_TYPE: filesystem
    • DATASTORE_PATH for path in filesystem to store files.

    It can be done in both Docker and Kubernetes ReportPortal versions.

    Scaling up ReportPortal API service

    Create a copy of the API values block and rename api to api_replica_1 to facilitate scaling.
    docker-compose.yml API values block

    version: '3.8'
    services:

      api:
        <...>
        environment:
          RP_AMQP_QUEUES: 20
          RP_AMQP_QUEUESPERPOD: 10
        <...>

      api_replica_1:
        <...>
        environment:
          RP_AMQP_QUEUES: 20
          RP_AMQP_QUEUESPERPOD: 10
        <...>

    Docker Compose v3.3+
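
    With Docker Compose v3.3+, a replicas-based sketch can achieve the same effect (an assumption-based alternative; two replicas share the 20 queues at 10 per pod):

    version: '3.8'
    services:
      api:
        <...>
        deploy:
          replicas: 2
        environment:
          RP_AMQP_QUEUES: 20
          RP_AMQP_QUEUESPERPOD: 10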


    Setup TLS(SSL) in Traefik 2.0.x

    This is a short guideline that provides information on how to configure a TLS/SSL certificate setup for your existing ReportPortal environment.

    Overview

    We use Traefik as a layer-7 load balancer with TLS/SSL termination for the set of microservices used to run the ReportPortal web application.

    Pre-requisites

    • A server with a public IP address, with Docker and docker-compose installed on it
    • ReportPortal installed on this server
    • Your own domain, with the DNS configured accordingly so that the hostname records point to the server

    Configuration

    Provided below is an example of using Traefik (the gateway service) in docker-compose.yaml. If you don't have any custom configurations, you are free to use the example below.

    Create directories on the server for Traefik data and for storing certificates:

    mkdir -p data/traefik/dynamic && mkdir -p data/certs/traefik

    Check:

    data
    |-- certs
    |   `-- traefik
    |-- elasticsearch
    |-- postgres
    `-- traefik
        `-- dynamic

    Create a config file for Traefik with the certificate and key paths:

    cat << EOF | tee -a data/traefik/dynamic/certs-traefik.yaml
    tls:
      certificates:
        - certFile: /etc/certs/examplecert.crt
          keyFile: /etc/certs/examplecert.key
    EOF

    Place the certificate examplecert.crt and the key examplecert.key into the directory data/certs/traefik/ you created earlier.

    Edit Traefik service in the docker-compose.yaml

    Add the following volumes to Traefik:

    services:
      gateway:
        volumes:
          - "./data/traefik/dynamic/certs-traefik.yaml:/etc/traefik/dynamic/certs-traefik.yaml"
          - "./data/certs/traefik:/etc/certs/"

    the following command entries:

    services:
      gateway:
        command:
          - "--providers.file.directory=/etc/traefik/dynamic"
          - "--entrypoints.websecure.address=:443"

    and ports:

    services:
      gateway:
        ports:
          - "443:443"

    Check the Traefik part:

    version: '2.4'
    services:

      gateway:
        image: traefik:v2.0.5
        ports:
          - "8080:8080" # HTTP exposed
          - "8081:8081" # HTTP Administration exposed
          - "443:443"   # TLS/SSL
        volumes:
          - "/var/run/docker.sock:/var/run/docker.sock"
          - "./data/traefik/dynamic/certs-traefik.yaml:/etc/traefik/dynamic/certs-traefik.yaml"
          - "./data/certs/traefik:/etc/certs/"
        command:
          - "--providers.docker=true"
          - "--providers.docker.constraints=Label(`traefik.expose`, `true`)"
          - "--entrypoints.web.address=:8080"
          - "--entrypoints.traefik.address=:8081"
          - "--api.dashboard=true"
          - "--api.insecure=true"
          # TLS/SSL
          - "--providers.file.directory=/etc/traefik/dynamic"
          - "--entrypoints.websecure.address=:443"
        restart: always

    Add the following labels to existing services api, uat, index, ui, replacing <service> with the corresponding service name

    labels:
      - "traefik.http.routers.<service>.tls=true"

    Check the UI and API services as an example:

    version: '2.4'
    services:

      ui:
        image: reportportal/service-ui:5.3.4
        environment:
          - RP_SERVER_PORT=8080
        labels:
          - "traefik.http.middlewares.ui-strip-prefix.stripprefix.prefixes=/ui"
          - "traefik.http.routers.ui.middlewares=ui-strip-prefix@docker"
          - "traefik.http.routers.ui.rule=PathPrefix(`/ui`)"
          - "traefik.http.routers.ui.service=ui"
          - "traefik.http.services.ui.loadbalancer.server.port=8080"
          - "traefik.http.services.ui.loadbalancer.server.scheme=http"
          - "traefik.expose=true"
          - "traefik.http.routers.ui.tls=true" # label is here
        restart: always
    note

    Make sure that the required ports are opened. Please check your firewall settings.

    1. Traefik HTTPS & TLS Official documentation
    2. Traefik 2 & TLS 101
    3. Traefik Routing TLS

    Issues

    Unable to find valid certification path to requested target

    Feb-2 00:00:00.000 [rp-io-1] ERROR Launch - [18] ReportPortal execution error
    javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

    Solutions:

    1. Recommended: add the certificate to the Java machine's truststore (see the sketch below).
    2. Not recommended: ignore SSL certificate validation.
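
    For the recommended option, a minimal sketch using the JDK keytool (the certificate path and alias are illustrative; the cacerts location varies by JDK version):

    # Import the ReportPortal certificate into the JVM truststore used by your tests
    # (default truststore password is "changeit"; on JDK 8 the path is $JAVA_HOME/jre/lib/security/cacerts)
    keytool -importcert -noprompt -alias reportportal -file /path/to/reportportal.crt -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit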
    - + \ No newline at end of file diff --git a/issues-troubleshooting/HowToAddATestStackTraceToADescriptionAutomatically/index.html b/issues-troubleshooting/HowToAddATestStackTraceToADescriptionAutomatically/index.html index e9262cdc9..327b330f8 100644 --- a/issues-troubleshooting/HowToAddATestStackTraceToADescriptionAutomatically/index.html +++ b/issues-troubleshooting/HowToAddATestStackTraceToADescriptionAutomatically/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    How to add a test stack trace to a description automatically

    You can make your test analysis process more convenient and quick by adding a description for failed tests that includes the last error message from the test log.

    You will not need to open every single test to see the failure reason. With this functionality you will see test failure reasons right in the table on All launches (step level), so that you can perform group actions on items.

    How to activate this option:

    Change your listener to wrap log messages on error level with special text:

        ```error
        <place your error message here>
        ```

    We have prepared an example of how to extend the TestNG agent; you can find it below:

    An extended agent service:

        public static class ParamTaggingTestNgService extends TestNGService {

            public ParamTaggingTestNgService(ListenerParameters parameters, ReportPortalClient reportPortalClient) {
                super(parameters, reportPortalClient);
            }

            @Override
            protected StartTestItemRQ buildStartStepRq(ITestResult testResult) {
                final StartTestItemRQ rq = super.buildStartStepRq(testResult);
                // Report the test parameters as tags
                if (testResult.getParameters() != null && testResult.getParameters().length != 0) {
                    final Set<String> tags = Optional.fromNullable(rq.getTags()).or(new HashSet<>());
                    for (Object param : testResult.getParameters()) {
                        tags.add(param.toString());
                    }
                    rq.setTags(tags);
                }
                return rq;
            }

            @Override
            protected FinishTestItemRQ buildFinishTestMethodRq(String status, ITestResult testResult) {
                FinishTestItemRQ finishTestItemRQ = super.buildFinishTestMethodRq(status, testResult);
                // Put the stack trace into the item description, wrapped in an "error" code block
                if (testResult.getThrowable() != null) {
                    String description =
                            "```error\n"
                            + Throwables.getStackTraceAsString(testResult.getThrowable())
                            + "\n```";
                    finishTestItemRQ.setDescription(description);
                }
                return finishTestItemRQ;
            }
        }

    An extended listener with your extended service:

        public static class ExtendedListener extends BaseTestNGListener {
            public ExtendedListener() {
                super(override(new TestNGAgentModule()).with((Module) binder -> binder.bind(ITestNGService.class)
                        .toProvider(new TestNGProvider() {
                            @Override
                            protected TestNGService createTestNgService(ListenerParameters listenerParameters,
                                    ReportPortalClient reportPortalClient) {
                                return new ParamTaggingTestNgService(listenerParameters, reportPortalClient);
                            }
                        })));
            }
        }
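
    To apply the extended listener, reference it from your tests, for example via TestNG's @Listeners annotation (a minimal sketch; the test class is illustrative):

        import org.testng.annotations.Listeners;
        import org.testng.annotations.Test;

        // Attach the extended listener so reporting goes through ParamTaggingTestNgService
        @Listeners(ExtendedListener.class)
        public class MyTests {

            @Test
            public void exampleTest() {
                // test body
            }
        }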
    - + \ No newline at end of file diff --git a/issues-troubleshooting/HowToAvoidLocalExecutionReportedIntoProjectSpace/index.html b/issues-troubleshooting/HowToAvoidLocalExecutionReportedIntoProjectSpace/index.html index 29a4f839d..73fe07afb 100644 --- a/issues-troubleshooting/HowToAvoidLocalExecutionReportedIntoProjectSpace/index.html +++ b/issues-troubleshooting/HowToAvoidLocalExecutionReportedIntoProjectSpace/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    How to avoid local execution reported into project space

    Option 1:

    Use a specific attribute for launches to represent the execution state (for example, local vs. CI).

    Create filters using those attributes. Build widgets and dashboards based on those attributes.

    You can add those additional attributes via the CI command line, so only Jenkins will generate launches with those attributes.

    Option 2:

    Put rp.mode=debug in all reportportal.properties files.

    For Jenkins executions, override this parameter via the command line as rp.mode=default,

    so that all local launches end up in Debug and all Jenkins launches in Launches.
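
    For example, a minimal sketch (Maven is assumed as the build tool; system properties override the file values in the Java agents' loading order):

    # reportportal.properties (default for local runs)
    rp.mode=debug

    # Jenkins build step overrides the mode
    mvn test -Drp.mode=default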

    Option 3:

    Combine the approaches from options 1 and 2, using the launch name.

    Save rp.launch=xxx in all reportportal.properties files.

    For Jenkins executions, override this parameter via the command line as rp.launch=yyy.

    The auto-analysis will use only yyy launches for review.

    Use a filter based on your yyy name for widgets.

    Option 4:

    The same as option 2, but with rp.enable=false|true.

    This will turn off reporting for local launches.

    Option 5:

    Set all users on the project to the Operator role. This role can't report data into ReportPortal.

    Create an internal user for Jenkins executions and assign it the PROJECT MANAGER role.

    This makes it possible to create launches only for the Jenkins user.

    note

    It is also possible to combine all those options at the same time.

    - + \ No newline at end of file diff --git a/issues-troubleshooting/HowToCheckLDAPConnection/index.html b/issues-troubleshooting/HowToCheckLDAPConnection/index.html index 5887ea230..6c337bf1b 100644 --- a/issues-troubleshooting/HowToCheckLDAPConnection/index.html +++ b/issues-troubleshooting/HowToCheckLDAPConnection/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    How to check LDAP connection

    First, check the availability of your LDAP server from the server where ReportPortal is installed. For example, use the ldapsearch command.

    ldapsearch -x -h <ldap hostname> -p <ldap port> -D "<bind DN>" -w "<bind password>" -b "<base users DN>" "uid=user1"

    The output will be similar to:

    # extended LDIF
    #
    # LDAPv3
    # base <dc=rp,dc=com> with scope subtree
    # filter: uid=user1
    # requesting: ALL
    #

    # user1, people, rp.com
    dn: cn=tester,ou=people,dc=rp,dc=com
    objectClass: inetOrgPerson
    cn: user1
    sn: user1
    uid: user1
    userPassword:: PASSWORD
    mail: user1@rp.com
    description: user1 for experiments

    # search result
    search: 2
    result: 0 Success

    # numResponses: 2
    # numEntries: 1

    Hints

    If you are using Docker, you can also use the internal container IP:

    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <LDAP container name>
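
    For example (a sketch; the container name openldap, port 389, and the bind credentials are illustrative, while the DNs reuse the sample directory above):

    LDAP_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' openldap)
    ldapsearch -x -h "$LDAP_IP" -p 389 -D "cn=admin,dc=rp,dc=com" -w "adminpass" -b "ou=people,dc=rp,dc=com" "uid=user1"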

    - + \ No newline at end of file diff --git a/issues-troubleshooting/HowToCleanUpTheReportPortalDatabaseUsingPGRepack/index.html b/issues-troubleshooting/HowToCleanUpTheReportPortalDatabaseUsingPGRepack/index.html index df37bb333..2be61c3ed 100644 --- a/issues-troubleshooting/HowToCleanUpTheReportPortalDatabaseUsingPGRepack/index.html +++ b/issues-troubleshooting/HowToCleanUpTheReportPortalDatabaseUsingPGRepack/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@

    How to clean up the ReportPortal database using PG_REPACK

    Description

    pg_repack is a PostgreSQL extension that lets you remove bloat from tables and indexes, and optionally restore the physical order of clustered indexes. Unlike CLUSTER and VACUUM FULL it works online, without holding an exclusive lock on the processed tables during processing. pg_repack is efficient to boot, with performance comparable to using CLUSTER directly.

    Performance

    Initial Database Size | Final Database Size | Space Reclaimed | Duration
    1500 Gb               | 251 Gb              | 1200 Gb         | 7 hours

    Overall pg_repack performance was tested both while load tests were running and without them. The database load during pg_repack stayed within the instance capacity. High DB RAM utilization was observed when pg_repack started, but overall RAM usage then returned to normal. While reporting (load tests) ran in parallel, we observed small response-time and throughput degradation for around 10 minutes, after which performance returned to regular levels. There were also no KO (failed) requests while reporting and pg_repack ran in parallel, so the Staging pg_repack configuration can be safely ported to Production.

    Detailed DB Resource Utilization Stats

    Resources          | Used
    CPU utilization    | 13%
    CPU IOwait         | 7%
    Disk IO Read/Write | 1800/30000 IOPS

    PG_REPACK installation

    To install PG_REPACK, use the guide from the official GitHub page. If you use Amazon RDS, follow the link.

    PG_REPACK usage

    • Start a new Screen session:
    screen

    For more information about Screen, read this guide.

    • Add the path to the PG_REPACK executable file to PATH. The PATH variable is an environment variable that contains an ordered list of paths that Unix searches for executables when running a command. Run the following command:
    export PATH=$PATH:/usr/pgsql-12/bin/
    • Create a .pgpass file and fill in your data. The .pgpass file, or the file referenced by PGPASSFILE, contains passwords to be used if the connection requires a password. Documentation.
    cat << EOF | tee -a /.pgpass
    <database_host>:<database_port>:<database_name>:<database_user>:<database_password>
    EOF

    For example, the .pgpass file should look like this:

    reportportal-cdufjldqrau0.eu-west-3.rds.amazonaws.com:5432:reportportal:rpuser:strongpassword
    • Change permissions on the .pgpass file:
    chmod 600 /.pgpass
    • Set PGPASSFILE environment variable:
    export PGPASSFILE='/.pgpass'
    • Fill your data and run PG_REPACK:
    pg_repack -h <database_address> -U <database_user> -k <database_name> &>> /pg_repack-rpdb.log

    No password is needed for the database because you are using .pgpass.
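
    For example, with the sample values from the .pgpass file above:

    pg_repack -h reportportal-cdufjldqrau0.eu-west-3.rds.amazonaws.com -U rpuser -k reportportal &>> /pg_repack-rpdb.log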

    When you run the command, you will get an artifact, pg_repack-rpdb.log, where PG_REPACK will store the logs. The pg_repack-rpdb.log file is stored in your root directory /.

    • To detach from the Screen session, type Control+a+d (on macOS and Linux). The result will be similar to:
    [detached from 22556.pts-0.ip-10-68-38-165]

    22556 is the ID of your Screen session. You will get a different ID; save it.

    • Attach to the Screen session:
    screen -r

    If you have one Screen session, you will join it. If you have two or more of them, you will get the following result:

    There are several suitable screens on:
    22556.pts-0.ip-10-68-38-165 (Detached)
    8175.pts-0.ip-10-68-38-165 (Detached)
    Type "screen [-d] -r [pid.]tty.host" to resume one of them.

    To join the PG_REPACK Screen session, use the Screen ID that you saved earlier and run the following command:

    screen -r <your_screen_id>
    • To view the progress of the running command, you can read pg_repack-rpdb.log with the command:
    cat /pg_repack-rpdb.log

    In addition, you can stream the log with the command:

    tail -F /pg_repack-rpdb.log

    Press Control+C (on macOS and Linux) to exit from tail.

    - + \ No newline at end of file diff --git a/issues-troubleshooting/HowToCleanUpTheReportPortalDatabaseUsingVacuumFull/index.html b/issues-troubleshooting/HowToCleanUpTheReportPortalDatabaseUsingVacuumFull/index.html index 16f0633a8..75c74fc65 100644 --- a/issues-troubleshooting/HowToCleanUpTheReportPortalDatabaseUsingVacuumFull/index.html +++ b/issues-troubleshooting/HowToCleanUpTheReportPortalDatabaseUsingVacuumFull/index.html @@ -12,7 +12,7 @@ - + @@ -25,7 +25,7 @@ VACUUM FULL rewrites the entire contents of the table into a new disk file with no extra space, allowing unused space to be returned to the operating system. This form is much slower and requires an exclusive lock on each table while it is being processed.

    The main goals of performing VACUUM FULL on the ReportPortal database are to return unused disk space to the operating system and to improve overall query performance.

    Parameters

    FULL: Selects "full" vacuum, which can reclaim more space, but takes much longer and exclusively locks the table. This method also requires extra disk space, since it writes a new copy of the table and doesn't release the old copy until the operation is complete. Usually this should only be used when a significant amount of space needs to be reclaimed from within the table.

    FREEZE: Selects aggressive "freezing" of tuples. Specifying FREEZE is equivalent to performing VACUUM with the vacuum_freeze_min_age and vacuum_freeze_table_age parameters set to zero. Aggressive freezing is always performed when the table is rewritten, so this option is redundant when FULL is specified.

    VERBOSE: Prints a detailed vacuum activity report for each table.

    ANALYZE: Updates statistics used by the planner to determine the most efficient way to execute a query.

    ⚠️ Important notes

    Since VACUUM FULL requires exclusive locks on the tables and is highly time-consuming on large databases, the suggestions are:

    1. Check that the database disk has free space equal to or greater than the size of the largest table (with its indexes) in the database.

    2. Perform the VACUUM FULL operation periodically not for the whole database, but only for the particular tables defined below, which helps increase overall SQL query performance. For databases larger than 1 TB with a high reporting volume, run VACUUM FULL at least once per 3 months.

    Table list and operation durations on our database (AWS RDS PostgreSQL, spec: db.m5.4xlarge, 16 CPUs, 64 GB RAM):

    Table             | Rows count  | VACUUM Operation | Duration
    log               | 614 372 224 | FULL             | 14h 30m
    log               | 614 372 224 | ANALYZE          | 1h 30m
    test_item         | 207 311 552 | FULL             | 1h 50m
    test_item         | 207 311 552 | ANALYZE          | 21m
    statistics        | 299 341 024 | FULL             | 10m
    statistics        | 299 341 024 | ANALYZE          | 3m 49s
    test_item_results | 450 264 992 | FULL             | 9m
    test_item_results | 450 264 992 | ANALYZE          | 4m 12s

    VACUUM FULL execution

    Preconditions: apply the following configuration to the PostgreSQL Parameter Group (no database restart is needed after applying):

    maintenance_work_mem=8000000
    max_parallel_maintenance_workers=16

    Perform VACUUM FULL and ANALYZE on each database table using the query:

    VACUUM (FULL, ANALYZE) my_table;
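
    For example, for the large tables listed above (a sketch; run it during a maintenance window, since each table is exclusively locked):

    VACUUM (FULL, ANALYZE) log;
    VACUUM (FULL, ANALYZE) test_item;
    VACUUM (FULL, ANALYZE) statistics;
    VACUUM (FULL, ANALYZE) test_item_results;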

    Or perform VACUUM FULL and VACUUM ANALYZE on all tables in the database by running these commands sequentially:

    VACUUM FULL;
    VACUUM ANALYZE;

    Postconditions: apply the regular configuration to the PostgreSQL Parameter Group (no database restart is needed after applying):

    maintenance_work_mem=2000000
    max_parallel_maintenance_workers=8

    Useful PostgreSQL queries

    Total database size:

    SELECT pg_size_pretty(pg_database_size('reportportal'));

    Show autovacuum stats:

    SELECT relname, last_vacuum, last_autovacuum FROM pg_stat_user_tables;

    Detailed statistics by each table and its indexes:

    SELECT *, pg_size_pretty(total_bytes) AS total
    , pg_size_pretty(index_bytes) AS index
    , pg_size_pretty(toast_bytes) AS toast
    , pg_size_pretty(table_bytes) AS table
    FROM (
    SELECT *, total_bytes-index_bytes-coalesce(toast_bytes,0) AS table_bytes FROM (
    SELECT c.oid,nspname AS table_schema, relname AS table_name
    , c.reltuples AS row_estimate
    , pg_total_relation_size(c.oid) AS total_bytes
    , pg_indexes_size(c.oid) AS index_bytes
    , pg_total_relation_size(reltoastrelid) AS toast_bytes
    FROM pg_class c
    LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE relkind = 'r'
    ) a
    ) a;

    Dead tuples amount per table:

    SELECT relname, n_dead_tup FROM pg_stat_user_tables order by n_dead_tup desc;
    - + \ No newline at end of file diff --git a/issues-troubleshooting/HowToResolveIssuesWithMigrationToTheNewVersion/index.html b/issues-troubleshooting/HowToResolveIssuesWithMigrationToTheNewVersion/index.html index 68cb9b0ff..3b7d70f53 100644 --- a/issues-troubleshooting/HowToResolveIssuesWithMigrationToTheNewVersion/index.html +++ b/issues-troubleshooting/HowToResolveIssuesWithMigrationToTheNewVersion/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@

    How to resolve issues with migration to the new version

    Error: Dirty database version XX. Fix and force version.

    This means the migration process was interrupted during migration XX (the migration was started but not finished).

    1. First, check the migration logs (the service itself and the database). If they are helpful, take action based on them; if not, move on to the next step.
    2. Roll back all applied (if any) parts of migration XX.

    The URL format is the following:

    https://github.com/reportportal/migrations/blob/develop/migrations/XX_some_name.up.sql

    (Usually there is nothing to roll back, but it needs to be checked.)

    3. In the schema_migrations table, change the values to version=XX-1 (the previous successful migration number) and set the dirty flag to false.
    4. Restart the migration.

    For instance, if you have "Error: Dirty database version 10. Fix and force version.":

    1. Check the logs (the service itself and the database); in case you find nothing, move on.
    2. Check migration 10 (https://github.com/reportportal/migrations/blob/develop/migrations/10_attachment_size.up.sql) and roll back any partially applied parts.
    3. Then execute: update schema_migrations set version=9, dirty=false;
    4. Redeploy ReportPortal, for example via docker-compose (the migration should start automatically if you followed the installation instructions for this method).
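
    A sketch of step 3 executed via psql (the container, user, and database names are assumptions based on a default docker-compose setup):

    docker exec -it reportportal_postgres_1 psql -U rpuser -d reportportal -c "update schema_migrations set version=9, dirty=false;"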

    Error: org.jasypt.exceptions.EncryptionOperationNotPossibleException: null. API doesn't start. ReportPortal unavailable.

    MinIO files are used during migration to change the encryption of integration passwords. These files may have been corrupted or deleted during ReportPortal usage before the migration.

    Removing the existing integrations from the database before deploying can help; newly created integrations will use the new encryption type.

    1. Execute the following script in the database to remove the existing integrations before deploying:
    DELETE FROM integration WHERE type IN (SELECT id FROM integration_type WHERE name IN ('email', 'jira', 'ldap', 'ad'));

    2. Deploy ReportPortal.
    3. Create the integrations again.
    - + \ No newline at end of file diff --git a/issues-troubleshooting/IssuesWithJIRABugTrackingSystemHowToResolve/index.html b/issues-troubleshooting/IssuesWithJIRABugTrackingSystemHowToResolve/index.html index 57d7f1ffd..b3ca0098a 100644 --- a/issues-troubleshooting/IssuesWithJIRABugTrackingSystemHowToResolve/index.html +++ b/issues-troubleshooting/IssuesWithJIRABugTrackingSystemHowToResolve/index.html @@ -12,7 +12,7 @@ - + @@ -32,7 +32,7 @@ login name and password into the fields)
  • Submit the login form; a screen with a CAPTCHA should appear
  • Enter the CAPTCHA symbols
  • Submit the credentials again
  • Now try to establish the connection to JIRA on the ReportPortal project.
  • Fourth, the connection to the Jira instance may require a certificate; in this case, you need to import the certificate inside the Jira container:

    1. docker exec -it reportportal_jira_1 ash # go inside the shell
    2. cd /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/
    3. wget url://to/your/foo.cert
    4. keytool -importcert -noprompt -file foo.cert -alias "JIRA CERT" -keystore cacerts -storepass changeit (default password for the keystore: changeit)
    5. Exit and restart the Jira docker container.
    6. Now try to establish the connection to JIRA on the ReportPortal project.

    or

    1. docker cp cert.der reportportal_jira_1:/cert.der
    2. docker exec -t -i reportportal_jira_1 /usr/lib/jvm/java-1.8-openjdk/jre/bin/keytool -import -alias rootcert -keystore /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/cacerts -file /cert.der
    3. Exit and restart the Jira docker container.
    4. Now try to establish the connection to JIRA on the ReportPortal project.
    note

    An SSL instance of JIRA (even the cloud version) can be accessed with a JIRA API token used instead of a password.

    If these steps didn't resolve your issue, please contact us.

    - + \ No newline at end of file diff --git a/issues-troubleshooting/IssuesWithLDAPSHowToResolve/index.html b/issues-troubleshooting/IssuesWithLDAPSHowToResolve/index.html index 93e751b0f..856ead26b 100644 --- a/issues-troubleshooting/IssuesWithLDAPSHowToResolve/index.html +++ b/issues-troubleshooting/IssuesWithLDAPSHowToResolve/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    Issues with LDAPS: how to resolve

    When configuring LDAP to work over ldaps://, users may see the following error when trying to log in:

    sun.security.validator.ValidatorException: PKIX path building failed: 
    sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target.

    This error can be solved by importing the needed certificate within the authorization container:

    # Enter service-authorization container as root
    docker exec -u 0 -it reportportal_uat_1 sh

    # Download certificates
    cd /usr/local/share/ca-certificates/
    wget url://to/your/foo.cert

    # Import the cert with keytool; if a password is required, the default should be "changeit"
    $JAVA_HOME/bin/keytool -import -alias ldap_cert -keystore $JAVA_HOME/lib/security/cacerts -file /usr/local/share/ca-certificates/foo.cert

    # exit container and restart it
    docker restart reportportal_uat_1
    - + \ No newline at end of file diff --git a/issues-troubleshooting/ResolveAnalyzerKnownIssues/index.html b/issues-troubleshooting/ResolveAnalyzerKnownIssues/index.html index ecaa0f885..7d6b9200e 100644 --- a/issues-troubleshooting/ResolveAnalyzerKnownIssues/index.html +++ b/issues-troubleshooting/ResolveAnalyzerKnownIssues/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    Resolve Analyzer Known Issues

    Problem 1. Auto-Analyser doesn't work. Analyzer health check status failed: Elasticsearch is not healthy

    Problem Description

    Analyzer log:

    2021-09-09 11:34:47,927 - analyzerApp - ERROR - Analyzer health check status failed: Elasticsearch is not healthy;
    [pid: 10|app: 0|req: 1/3] 127.0.0.1 () {28 vars in 294 bytes} [Thu Sep 9 11:34:46 2021] GET / => generated 43 bytes in 1643 msecs (HTTP/1.1 503) 3 headers in 120 bytes (1 switches on core 0)
    2021-09-09 11:35:48,737 - analyzerApp.utils - ERROR - Error with loading url: http://elasticsearch:9200/_cluster/health
    2021-09-09 11:35:48,752 - analyzerApp.utils - ERROR - HTTPConnectionPool(host='elasticsearch', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5cb82d4290>: Failed to establish a new connection: [Errno 111] Connection refused'))
    2021-09-09 11:35:48,753 - analyzerApp.esclient - ERROR - Elasticsearch is not healthy
    2021-09-09 11:35:48,753 - analyzerApp.esclient - ERROR - list indices must be integers or slices, not str

    The ElasticSearch container restarts all the time:

    STATUS                                     NAMES
    Up Less than a second (health: starting) reportportal_elasticsearch_1

    Solution

    Create a directory for ElasticSearch and assign permissions with the following commands:

    mkdir -p data/elasticsearch
    chmod 777 data/elasticsearch
    chgrp 1000 data/elasticsearch

    Recreate ReportPortal services.
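
    Make sure the directory is mounted into the ElasticSearch container; a minimal docker-compose sketch (the service name and container path are assumptions based on a default compose file):

    services:
      elasticsearch:
        volumes:
          - ./data/elasticsearch:/usr/share/elasticsearch/data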

    Problem 2. Auto-Analyser doesn't work. KeyError: 'found_test_and_methods' not found

    Problem Description

    2021-09-09 11:35:48,737 - analyzerApp.utils - ERROR - KeyError: 'found_test_and_methods' not found

    Solution

    Regenerate the index in ElasticSearch: Project settings -> Auto-Analysis -> Generate Index.

    Problem 3. Amqp connection was not established

    Problem Description

    2021-09-09 11:32:00,579 - analyzerApp - INFO - Starting waiting for AMQP connection
    2021-09-09 11:32:00,586 - analyzerApp.amqp - INFO - Try connect to amqp://rabbitmq:5672/analyzer?heartbeat=600
    2021-09-09 11:32:00,595 - analyzerApp - ERROR - Amqp connection was not established

    Solution

    The RabbitMQ container is not running. Wait for it to reach the running status, or recreate the RabbitMQ container.

    Problem 4. Performance

    Problem Description

    Analysis slows down, or you wait a long time for a response.

    Analyzer logs:

    DAMN ! worker 1 (pid: 9191) died, killed by signal 9 :( trying respawn ...
    Respawned uWSGI worker 1 (new pid: 9490)

    Solution

    Increase the VM resources; we recommend using at least the minimum recommended memory.

    You can also reduce the number of Analyzer worker processes via the UWSGI_WORKERS environment variable, e.g. UWSGI_WORKERS: 2 (the default is 4).

    However, reducing UWSGI_WORKERS will slow down the Analyzer.
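
    A minimal docker-compose sketch (the analyzer service name is an assumption):

    services:
      analyzer:
        environment:
          UWSGI_WORKERS: 2  # default is 4; fewer workers use less RAM but slow down analysis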

    - + \ No newline at end of file diff --git a/issues-troubleshooting/TuningCITool/index.html b/issues-troubleshooting/TuningCITool/index.html index dc27a2dae..6018622c6 100644 --- a/issues-troubleshooting/TuningCITool/index.html +++ b/issues-troubleshooting/TuningCITool/index.html @@ -12,7 +12,7 @@ - + @@ -22,7 +22,7 @@

    Tuning CI tool

    How to provide parameters via system variables in the CI tool (for example, Jenkins) for our continuous testing platform.

    In order to provide specific parameters (such as attributes) for different executions, based on the parameter loading order, you can provide them as system variables.

    To do so, follow the steps below:

    1. Open the Job configuration in Jenkins.
    2. Select the "This build is parameterized" check-box.
    3. Click "Add Parameter" and select "Text Parameter".
    4. Define any name for the parameter and set the default value (note that attributes should have semicolon-separated values, with no spaces).
    5. Update the execution command in the "Build" section: add ReportPortal parameters using -D system variable parameters. For attributes it is "rp.tags" (see the example after this list).
    6. Click the "Build with Parameters" button.
    7. In the opened dialog, specify the needed parameters, using semicolons to separate values.
    8. Then click the "Build" button.
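
    A sketch of the resulting build command, assuming a Maven project and a Jenkins text parameter named TAGS:

    mvn clean test -Drp.tags=$TAGS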
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/HowToRunYourTests/index.html b/log-data-in-reportportal/HowToRunYourTests/index.html index c0e5a388e..16f969c09 100644 --- a/log-data-in-reportportal/HowToRunYourTests/index.html +++ b/log-data-in-reportportal/HowToRunYourTests/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/ImplementOwnIntegration/index.html b/log-data-in-reportportal/ImplementOwnIntegration/index.html index de037fff6..b8fb99e8c 100644 --- a/log-data-in-reportportal/ImplementOwnIntegration/index.html +++ b/log-data-in-reportportal/ImplementOwnIntegration/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/ImportDataToReportPortal/index.html b/log-data-in-reportportal/ImportDataToReportPortal/index.html index 70b9b602a..f1aed6b0e 100644 --- a/log-data-in-reportportal/ImportDataToReportPortal/index.html +++ b/log-data-in-reportportal/ImportDataToReportPortal/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@

    Import data to ReportPortal

    The import functionality gives you the opportunity to upload log files via the UI. To start the import process, hit the 'Import' button on the Launch page (All Launches tab). In the 'Import Launch' pop-up window, you can drop files or add them via a link.

    ReportPortal checks file size and format first. Imported files should meet the following requirements:

    • File format is a zip archive or an XML file.
    • File size is up to 32 MB.
    • Timestamp format is 2022-11-17T10:15:43.

    If the file format is incorrect, it is marked in red and the reason is shown in the tooltip on hovering over the file in the pop-up window. This is to prevent any incorrect files from being run through the import process.

    If all the files added are correct you may hit the 'Import' or 'Cancel' buttons:

    • The Cancel button closes the pop-up window without performing any operation on the log files.
    • The Import button starts copying files into the RP file storage and disables the 'Import' button.

    The system will start copying files into the RP file storage if the files meet the following requirements:

    • The file format is a zip archive.
    • The file size is up to 32 MB.
    • The XML files in the zip archive must have the JUnit structure (see the minimal example below).
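
    A minimal JUnit-style XML file that satisfies these requirements (the suite and test names are illustrative):

    <testsuite name="smoke" tests="1" failures="0" timestamp="2022-11-17T10:15:43">
      <testcase classname="com.example.LoginTests" name="successfulLogin" time="0.42"/>
    </testsuite>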

    The system copies valid XML files into RP file storage and marks them in green in the Import pop-up window.

    If files from the zip archive have formats other than XML, the system will skip them.

    If the XML file is not in the JUnit structure, the system will interrupt the copying process and mark the file in red. The reason is then shown in the tooltip when hovering over the file in the pop-up window.

    note

    Files that were copied earlier stay in the RP file storage

    When all of the valid log files are downloaded and processed, the 'OK' button is enabled. The 'OK' button closes the Import launches pop-up window. The zip archive is deleted after the import is finished or canceled.

    You can only interrupt the import in the UI while files are being downloaded into the RP file storage. In this case, hit the 'Cancel' button (or the X button in the pop-up window), confirm the cancellation of the import, and then hit the 'Cancel' button again.

    Import via API

    You can find the details about import via API in the ReportPortal menu at the bottom: API -> Launch controller -> Import junit xml report.

    You can configure parameters (name, description, attributes) for the imported launch by specifying these values in your API request.

    The endpoint POST /v1/{projectName}/launch/import allows importing a launch into the specified project using an XML or ZIP file in JUnit format.

    Permissions: Admin, PM, Member, Customer, Operator.

    Here's an example of a request to the endpoint:

    curl -X 'POST' \
    'http://localhost:8585/api/v1/testProject/launch/import?attributeKey=someKey&attributeValue=someValue&description=someDescription&launchName=someName&skippedIsNotIssue=true' \
    -F 'file=@Launch.zip;type=application/x-zip-compressed'

    Query parameters:

    attributeKey (String) – Launch attribute key. If this parameter is specified, "attributeValue" is required.

    attributeValue (String) – Launch attribute value. Can be specified without the "attributeKey" parameter.

    description (String) – Launch description.

    launchName (String) – Launch name. If this parameter is not specified, the file name will be used as the launch name.

    skippedIsNotIssue (Boolean) – When set to "true", all skipped issues are processed without applying a defect type; when set to "false", skipped issues are processed and marked with the defect type "To Investigate". If the parameter is not set, the default behavior is equivalent to "false".

    note

    These parameters are optional and can be used individually, in any combination, or all together.

    Scenario 1

    If you want to mark skipped items as "To Investigate", use the following request:

    curl -X 'POST' 'http://localhost:8585/api/v1/testProject/launch/import'

    Scenario 2

    If you don't want to mark skipped items as "To Investigate", use the following request:

    curl -X 'POST' 'http://localhost:8585/api/v1/testProject/launch/import?skippedIsNotIssue=true'
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/ReportingSDK/index.html b/log-data-in-reportportal/ReportingSDK/index.html index a5a4acae6..3e9de24ad 100644 --- a/log-data-in-reportportal/ReportingSDK/index.html +++ b/log-data-in-reportportal/ReportingSDK/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/index.html b/log-data-in-reportportal/index.html index 776a75eaa..413eca626 100644 --- a/log-data-in-reportportal/index.html +++ b/log-data-in-reportportal/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    Log data in ReportPortal

    ReportPortal is a CI/CD agnostic tool. Therefore, you can continue using your favorite CI/CD tool (GitLab, Jenkins, GitHub, Azure DevOps, Bamboo) to send data to ReportPortal and get test results of execution.

    The data transmission is regulated not by the CI/CD process, but by the test framework. The only requirement is to ensure that the machine where your CI/CD is located has access to the ReportPortal instance to which you are trying to send the data.

    As for test frameworks, a generic approach is to set the address of ReportPortal and other data in your test framework via properties or a configuration file, and your test framework will start reporting data to ReportPortal.

    ReportPortal supports various frameworks. For example, we have integration with Java frameworks (TestNG, Jbehave, etc.), Python frameworks (Pytest, Robot Framework, etc.), JavaScript frameworks (Playwright, Postman, etc.), .NET frameworks (NUnit, VSTest, etc.).

    Integration with ReportPortal does not depend on the type of tests you run. These can be API tests, integration tests, or UI tests (for example, Selenium or Cypress), so you can run different types of tests and get test results.

    ReportPortal can be integrated with external services, enabling you to report test results from platforms like Browserstack, Sauce Labs, and other third-party services. For Sauce Labs integration, we have a plugin.

    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/Cucumber/index.html b/log-data-in-reportportal/test-framework-integration/Java/Cucumber/index.html index 2ac72a067..96e2a9a86 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/Cucumber/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/Cucumber/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    ReportPortal Cucumber Integration

    There is an agent to integrate Cucumber with ReportPortal.

    Cucumber is a popular open-source framework for behavior-driven development (BDD) which is based on the Gherkin language and allows developers, testers, and business stakeholders to work together and define an application's behavior.

    Compatibility matrix for cucumber agents

    Version(s) of cucumber-java and cucumber-junit | Gherkin version(s) | Link to agent's GitHub
    1.2.5         | 2.12.2        | sources/cucumber1
    2.0.0 - 2.4.0 | 3.2.0 - 5.1.0 | sources/cucumber2
    3.0.0 - 3.0.2 | _             | sources/cucumber3
    4.4.0 - 4.8.1 | 3.2.0 - 5.1.0 | sources/cucumber4
    5.0.0 - 5.7.0 | _             | sources/cucumber5
    6.0.0 - 7.0.0 | 6.0.0 - 7.0.0 | sources/cucumber6

    Installation guide

    Examples

    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/JBehave/index.html b/log-data-in-reportportal/test-framework-integration/Java/JBehave/index.html index c49a2286d..ba8109d8a 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/JBehave/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/JBehave/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/JUnit4/index.html b/log-data-in-reportportal/test-framework-integration/Java/JUnit4/index.html index 954278376..6dda33191 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/JUnit4/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/JUnit4/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/JUnit5/index.html b/log-data-in-reportportal/test-framework-integration/Java/JUnit5/index.html index abe740d09..05bc39653 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/JUnit5/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/JUnit5/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/Loggers/ApacheHttpComponents/index.html b/log-data-in-reportportal/test-framework-integration/Java/Loggers/ApacheHttpComponents/index.html index c3b19c920..60605acdc 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/Loggers/ApacheHttpComponents/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/Loggers/ApacheHttpComponents/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/Loggers/Log4J/index.html b/log-data-in-reportportal/test-framework-integration/Java/Loggers/Log4J/index.html index 117e2e9f6..f35c01992 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/Loggers/Log4J/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/Loggers/Log4J/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/Loggers/Logback/index.html b/log-data-in-reportportal/test-framework-integration/Java/Loggers/Logback/index.html index 229ba52a2..4a0be7de3 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/Loggers/Logback/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/Loggers/Logback/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/Loggers/OkHttp3/index.html b/log-data-in-reportportal/test-framework-integration/Java/Loggers/OkHttp3/index.html index b5401d565..64978957f 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/Loggers/OkHttp3/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/Loggers/OkHttp3/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/Loggers/RestAssured/index.html b/log-data-in-reportportal/test-framework-integration/Java/Loggers/RestAssured/index.html index 5ae6dcfce..f9e3ef9aa 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/Loggers/RestAssured/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/Loggers/RestAssured/index.html @@ -12,7 +12,7 @@ - + @@ -22,7 +22,7 @@

    ReportPortal Rest Assured Integration

    The logger intercepts and logs all requests and responses issued by REST Assured into ReportPortal in Markdown format, including multipart requests. It recognizes payload types and attaches them in the corresponding manner: image types are logged as images with thumbnails, binary types are logged as entry attachments, and text types are formatted and logged in Markdown code blocks.

    Installation guide

    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/Loggers/Selenide/index.html b/log-data-in-reportportal/test-framework-integration/Java/Loggers/Selenide/index.html index cbe2a883a..19c0ec43c 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/Loggers/Selenide/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/Loggers/Selenide/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/SoapUI/index.html b/log-data-in-reportportal/test-framework-integration/Java/SoapUI/index.html index 6268259cf..587932fce 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/SoapUI/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/SoapUI/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/Spock/index.html b/log-data-in-reportportal/test-framework-integration/Java/Spock/index.html index 6aee171fa..cfa144bdf 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/Spock/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/Spock/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/TestNG/index.html b/log-data-in-reportportal/test-framework-integration/Java/TestNG/index.html index c8505425a..31d2256b8 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/TestNG/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/TestNG/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    ReportPortal TestNG Integration

    There is an agent to integrate TestNG with ReportPortal.

    TestNG provides support for attaching custom listeners, reporters, annotation transformers and method interceptors to your tests.

    The TestNG agent can handle the following events:

    • Start launch
    • Finish launch
    • Start suite
    • Finish suite
    • Start test
    • Finish test
    • Start test step
    • Successful finish of test step
    • Fail of test step
    • Skip of test step
    • Start configuration (All «before» and «after» methods)
    • Fail of configuration
    • Successful finish of configuration
    • Skip configuration

    Installation guide

    Examples

    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Java/index.html b/log-data-in-reportportal/test-framework-integration/Java/index.html index add3e19e0..6bff2a3e5 100644 --- a/log-data-in-reportportal/test-framework-integration/Java/index.html +++ b/log-data-in-reportportal/test-framework-integration/Java/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    Java

    To integrate your Java test framework with ReportPortal, you need to create a file named reportportal.properties in your Java project in a source folder, src/main/resources or src/test/resources (depending on where your tests are located):

    reportportal.properties

    rp.endpoint={RP_SERVER_URL}
    rp.api.key={YOUR_TOKEN}
    rp.project={YOUR_PROJECT}
    rp.launch={NAME_OF_YOUR_LAUNCH}

    Property description

    rp.endpoint - the URL of your ReportPortal server (the actual link to your instance).

    rp.api.key - an access token for ReportPortal, used for user identification. It can be found on your ReportPortal user profile page.

    rp.project - the project code to which the agent will report test launches. Must be set to one of your assigned projects.

    rp.launch - a user-selected identifier of test launches.
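
    A filled-in example (all values are illustrative):

    rp.endpoint=https://reportportal.example.com
    rp.api.key=my_agent_aBcDeFgHiJkLmNoPqRsT
    rp.project=superadmin_personal
    rp.launch=smoke_suite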

    note

    Starting from Service Release 23.1, rp.uuid was renamed to rp.api.key.

    This set of properties will allow you to report your tests. There are more properties available for fine-grained tuning of the integration; details are available here.

    If you need a sophisticated and full-featured integration with a test framework, you can configure it yourself.

    All agents use client-java to communicate with the ReportPortal API and as a common code library. You can also use any combination of agent and logger.

    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/JavaScript/Codecept/index.html b/log-data-in-reportportal/test-framework-integration/JavaScript/Codecept/index.html index 3341a12ae..338a31887 100644 --- a/log-data-in-reportportal/test-framework-integration/JavaScript/Codecept/index.html +++ b/log-data-in-reportportal/test-framework-integration/JavaScript/Codecept/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/JavaScript/CucumberJS/index.html b/log-data-in-reportportal/test-framework-integration/JavaScript/CucumberJS/index.html index 9f296db48..dedc8b935 100644 --- a/log-data-in-reportportal/test-framework-integration/JavaScript/CucumberJS/index.html +++ b/log-data-in-reportportal/test-framework-integration/JavaScript/CucumberJS/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/JavaScript/Cypress/index.html b/log-data-in-reportportal/test-framework-integration/JavaScript/Cypress/index.html index 1b5b76aec..96642507f 100644 --- a/log-data-in-reportportal/test-framework-integration/JavaScript/Cypress/index.html +++ b/log-data-in-reportportal/test-framework-integration/JavaScript/Cypress/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/JavaScript/Jasmine/index.html b/log-data-in-reportportal/test-framework-integration/JavaScript/Jasmine/index.html index a25a38fbc..1ba07cbda 100644 --- a/log-data-in-reportportal/test-framework-integration/JavaScript/Jasmine/index.html +++ b/log-data-in-reportportal/test-framework-integration/JavaScript/Jasmine/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/JavaScript/Jest/index.html b/log-data-in-reportportal/test-framework-integration/JavaScript/Jest/index.html index 42047181e..bba35cf34 100644 --- a/log-data-in-reportportal/test-framework-integration/JavaScript/Jest/index.html +++ b/log-data-in-reportportal/test-framework-integration/JavaScript/Jest/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/JavaScript/Mocha/index.html b/log-data-in-reportportal/test-framework-integration/JavaScript/Mocha/index.html index 1f5390bd2..8ed1dd6ef 100644 --- a/log-data-in-reportportal/test-framework-integration/JavaScript/Mocha/index.html +++ b/log-data-in-reportportal/test-framework-integration/JavaScript/Mocha/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/JavaScript/Nightwatch/index.html b/log-data-in-reportportal/test-framework-integration/JavaScript/Nightwatch/index.html index 3fbf59a79..8663df6e0 100644 --- a/log-data-in-reportportal/test-framework-integration/JavaScript/Nightwatch/index.html +++ b/log-data-in-reportportal/test-framework-integration/JavaScript/Nightwatch/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/JavaScript/Playwright/index.html b/log-data-in-reportportal/test-framework-integration/JavaScript/Playwright/index.html index 2fc992cf5..33d050f82 100644 --- a/log-data-in-reportportal/test-framework-integration/JavaScript/Playwright/index.html +++ b/log-data-in-reportportal/test-framework-integration/JavaScript/Playwright/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/JavaScript/Postman/index.html b/log-data-in-reportportal/test-framework-integration/JavaScript/Postman/index.html index a25e0d6d7..4591260fe 100644 --- a/log-data-in-reportportal/test-framework-integration/JavaScript/Postman/index.html +++ b/log-data-in-reportportal/test-framework-integration/JavaScript/Postman/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/JavaScript/TestCafe/index.html b/log-data-in-reportportal/test-framework-integration/JavaScript/TestCafe/index.html index 57f80a3ba..6727ec20c 100644 --- a/log-data-in-reportportal/test-framework-integration/JavaScript/TestCafe/index.html +++ b/log-data-in-reportportal/test-framework-integration/JavaScript/TestCafe/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/JavaScript/WebdriverIO/index.html b/log-data-in-reportportal/test-framework-integration/JavaScript/WebdriverIO/index.html index 3904cb381..f07bbb461 100644 --- a/log-data-in-reportportal/test-framework-integration/JavaScript/WebdriverIO/index.html +++ b/log-data-in-reportportal/test-framework-integration/JavaScript/WebdriverIO/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/JavaScript/index.html b/log-data-in-reportportal/test-framework-integration/JavaScript/index.html index e020280fa..1a5f895d0 100644 --- a/log-data-in-reportportal/test-framework-integration/JavaScript/index.html +++ b/log-data-in-reportportal/test-framework-integration/JavaScript/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Net/Loggers/Log4Net/index.html b/log-data-in-reportportal/test-framework-integration/Net/Loggers/Log4Net/index.html index b53f721cb..84152f1ff 100644 --- a/log-data-in-reportportal/test-framework-integration/Net/Loggers/Log4Net/index.html +++ b/log-data-in-reportportal/test-framework-integration/Net/Loggers/Log4Net/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Net/Loggers/NLog/index.html b/log-data-in-reportportal/test-framework-integration/Net/Loggers/NLog/index.html index 5a4a0d618..9fe625382 100644 --- a/log-data-in-reportportal/test-framework-integration/Net/Loggers/NLog/index.html +++ b/log-data-in-reportportal/test-framework-integration/Net/Loggers/NLog/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Net/Loggers/Serilog/index.html b/log-data-in-reportportal/test-framework-integration/Net/Loggers/Serilog/index.html index b064f7aaf..3f182ec8b 100644 --- a/log-data-in-reportportal/test-framework-integration/Net/Loggers/Serilog/index.html +++ b/log-data-in-reportportal/test-framework-integration/Net/Loggers/Serilog/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Net/Loggers/TraceListener/index.html b/log-data-in-reportportal/test-framework-integration/Net/Loggers/TraceListener/index.html index 4e71abd2f..3bfee781b 100644 --- a/log-data-in-reportportal/test-framework-integration/Net/Loggers/TraceListener/index.html +++ b/log-data-in-reportportal/test-framework-integration/Net/Loggers/TraceListener/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Net/NUnit/index.html b/log-data-in-reportportal/test-framework-integration/Net/NUnit/index.html index 95f08b18d..677174a3a 100644 --- a/log-data-in-reportportal/test-framework-integration/Net/NUnit/index.html +++ b/log-data-in-reportportal/test-framework-integration/Net/NUnit/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Net/SpecFlow/index.html b/log-data-in-reportportal/test-framework-integration/Net/SpecFlow/index.html index 99573b12c..edc64d1ca 100644 --- a/log-data-in-reportportal/test-framework-integration/Net/SpecFlow/index.html +++ b/log-data-in-reportportal/test-framework-integration/Net/SpecFlow/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Net/VSTest/index.html b/log-data-in-reportportal/test-framework-integration/Net/VSTest/index.html index dfe30ae76..d2af30cb2 100644 --- a/log-data-in-reportportal/test-framework-integration/Net/VSTest/index.html +++ b/log-data-in-reportportal/test-framework-integration/Net/VSTest/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Net/index.html b/log-data-in-reportportal/test-framework-integration/Net/index.html index 28692c71c..0a5e38146 100644 --- a/log-data-in-reportportal/test-framework-integration/Net/index.html +++ b/log-data-in-reportportal/test-framework-integration/Net/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Net/xUnit/index.html b/log-data-in-reportportal/test-framework-integration/Net/xUnit/index.html index 0a38cf343..981359a7e 100644 --- a/log-data-in-reportportal/test-framework-integration/Net/xUnit/index.html +++ b/log-data-in-reportportal/test-framework-integration/Net/xUnit/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    ReportPortal xUnit Integration

    There is an agent to integrate xUnit with ReportPortal.

    xUnit is a testing framework for .NET applications. The "Arrange, Act, Assert" (AAA) pattern, a systematic method of designing test cases, is the foundation of xUnit. The AAA pattern emphasizes that each test should clearly separate setting up the test environment and data (Arrange), executing the test code (Act), and verifying the test result (Assert).

    Installation guide

    Examples

    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Other/index.html b/log-data-in-reportportal/test-framework-integration/Other/index.html index 7df7eefe9..dc80462b4 100644 --- a/log-data-in-reportportal/test-framework-integration/Other/index.html +++ b/log-data-in-reportportal/test-framework-integration/Other/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/PHP/Behat/index.html b/log-data-in-reportportal/test-framework-integration/PHP/Behat/index.html index 6e0f64b4c..dadb93fd0 100644 --- a/log-data-in-reportportal/test-framework-integration/PHP/Behat/index.html +++ b/log-data-in-reportportal/test-framework-integration/PHP/Behat/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/PHP/Codeception/index.html b/log-data-in-reportportal/test-framework-integration/PHP/Codeception/index.html index cea9419fd..7f199a5a5 100644 --- a/log-data-in-reportportal/test-framework-integration/PHP/Codeception/index.html +++ b/log-data-in-reportportal/test-framework-integration/PHP/Codeception/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/PHP/PHPUnit/index.html b/log-data-in-reportportal/test-framework-integration/PHP/PHPUnit/index.html index 6112e43c1..21f35a525 100644 --- a/log-data-in-reportportal/test-framework-integration/PHP/PHPUnit/index.html +++ b/log-data-in-reportportal/test-framework-integration/PHP/PHPUnit/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/PHP/index.html b/log-data-in-reportportal/test-framework-integration/PHP/index.html index 805d7817c..4618a44f9 100644 --- a/log-data-in-reportportal/test-framework-integration/PHP/index.html +++ b/log-data-in-reportportal/test-framework-integration/PHP/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Python/RobotFramework/index.html b/log-data-in-reportportal/test-framework-integration/Python/RobotFramework/index.html index 466d66e60..7ab38861a 100644 --- a/log-data-in-reportportal/test-framework-integration/Python/RobotFramework/index.html +++ b/log-data-in-reportportal/test-framework-integration/Python/RobotFramework/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Python/behave/index.html b/log-data-in-reportportal/test-framework-integration/Python/behave/index.html index 405c45df6..4210e25c3 100644 --- a/log-data-in-reportportal/test-framework-integration/Python/behave/index.html +++ b/log-data-in-reportportal/test-framework-integration/Python/behave/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Python/index.html b/log-data-in-reportportal/test-framework-integration/Python/index.html index 5a519394e..4df28407e 100644 --- a/log-data-in-reportportal/test-framework-integration/Python/index.html +++ b/log-data-in-reportportal/test-framework-integration/Python/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Python/nosetests/index.html b/log-data-in-reportportal/test-framework-integration/Python/nosetests/index.html index 0e394d6c2..9c80c0399 100644 --- a/log-data-in-reportportal/test-framework-integration/Python/nosetests/index.html +++ b/log-data-in-reportportal/test-framework-integration/Python/nosetests/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/Python/pytest/index.html b/log-data-in-reportportal/test-framework-integration/Python/pytest/index.html index ad2fe0ec5..02985022f 100644 --- a/log-data-in-reportportal/test-framework-integration/Python/pytest/index.html +++ b/log-data-in-reportportal/test-framework-integration/Python/pytest/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/log-data-in-reportportal/test-framework-integration/index.html b/log-data-in-reportportal/test-framework-integration/index.html index 0958dcb6c..a23ebccc7 100644 --- a/log-data-in-reportportal/test-framework-integration/index.html +++ b/log-data-in-reportportal/test-framework-integration/index.html @@ -12,7 +12,7 @@ - + @@ -27,7 +27,7 @@ For specific tool details, select your language and Test Framework in the sections below.

    Examples

If you are new to ReportPortal and simply want to integrate it with your Test Framework and view info about test case executions: a name, a status, a duration, parameters, attachments (files, screenshots, video and others), logs and more, then this section is for you.

You can review and mimic the following examples to start your integration with ReportPortal.

Test frameworks | Examples
Java Test Frameworks | https://github.com/reportportal/examples-java
JavaScript based Test Frameworks | https://github.com/reportportal/examples-js
.Net based Test Frameworks | https://github.com/reportportal?q=example-net&type=&language=&sort=
Python based Test Frameworks | https://github.com/reportportal/examples-python
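For instance, for Python-based frameworks the integration usually comes down to installing the agent and pointing it at your instance. A minimal sketch assuming the pytest-reportportal agent; the endpoint, project, key and launch values are placeholders, and option names can differ between agent versions (older versions use rp_uuid instead of rp_api_key):

# install the agent
pip install pytest-reportportal

# pytest.ini -- placeholder values, adjust for your instance
[pytest]
rp_endpoint = http://your-reportportal-instance:8080
rp_api_key = your_api_key
rp_project = your_project_name
rp_launch = smoke_suite

# run the tests with reporting enabled
pytest --reportportal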
    - + \ No newline at end of file diff --git a/plugins/AtlassianJiraCloud/index.html b/plugins/AtlassianJiraCloud/index.html index a6bd30f11..2e5ab9944 100644 --- a/plugins/AtlassianJiraCloud/index.html +++ b/plugins/AtlassianJiraCloud/index.html @@ -12,7 +12,7 @@ - + @@ -23,7 +23,7 @@ Yes!

Solution: Add two integrations to the project NNN-MMM in Jira with names e.g. "Project -1" and "Project -2". Add to the "Project -1" issue type Defect and for "Project -2" - issue type Task. During the post issue procedure, choose the integration with the needed type.

    Update global Atlassian Jira Cloud integration

If you need to edit Jira Cloud integration authorization parameters, please perform the following actions:

    1. Log in as ADMIN

    2. Go to Administrative > Plugins

    3. Click on JIRA Cloud plugin panel

    4. Click on a tab with existing integration

    5. Click on "Edit authorization" link

    6. Change "Integration name"

    7. Submit the form

    note

You can edit the "Integration name" only. If you need other changes, please submit a new integration.

If you need to edit fields which should be posted in Jira Cloud, please perform the following actions:

    1. Log in as ADMIN

    2. Go to Administrative > Plugins

    3. Click on JIRA Cloud plugin panel

    4. Click on a tab with existing integration

    5. Click on "Configure" button

    6. Choose issue type from the drop-down

    7. Check the needed fields and fill them in if necessary

    8. Click on "Submit" button

    Remove global Atlassian Jira Cloud integration

If you need to remove Jira Cloud integration, please perform the following actions:

    1. Log in as ADMIN

    2. Go to Administrative > Plugins

    3. Click on JIRA Cloud plugin panel

    4. Click on a tab with existing integration

    5. Click on "Remove integration"

    6. Submit the action

    Project Atlassian JIRA Cloud integration

    Add new project Atlassian Jira Cloud integration

    If any project needs different Jira Cloud configurations, you should unlink a project from Global configurations and add a project configuration. It means that now when a new global integration is added to our test automation results dashboard, it won't be applied to the unlinked project.

    For that,

    1. Log in as an ADMIN or Project Manager

    2. Go to Project Settings > Integrations

    3. Click on the JIRA Cloud integration panel

    4. Click on "Unlink and setup manually" button

    5. Fill and confirm the authorization form

    'Integration Name': <The name which you want to give to your integration> - should be unique
    'Link to BTS': <valid URL of bug tracking system>
    'Project key in BTS': <project key in bug tracking system>
    'Email': <user email>
    'API Token': <user API Token>

After you have created a connection with the JIRA project, you can choose predefined JIRA ticket fields. These fields will be shown to you every time you post an issue to Jira Cloud.

This feature gives you the ability to choose the issue type with which you will post a ticket.

    To choose a needed issue type and predefined field for the chosen issue, you should perform the following actions:

    1. Click on "Configure" button

    2. Choose issue type from the drop-down

    3. Check the needed fields and fill them in if necessary

    4. Click on "Submit" button

    Now team members on this project will be able to submit issues in Jira Cloud. Options for Post Issue / Link issue are activated.

    You can add more integrations by clicking on "Add integration" button.

    Reset to project Atlassian Jira Cloud Integrations

If you want to delete project integrations with Jira Cloud and link your project with global configurations, please perform the actions described below:

    1. Log in as an ADMIN or Project Manager

    2. Go to Project Settings > Integrations

    3. Click on the JIRA Cloud integration panel

    4. Click on "Reset to global settings" button

    5. Confirm the action

    Post issue to Atlassian Jira Cloud

Posting an issue to Jira Cloud means creating a new issue in Jira from ReportPortal and uploading logs and attachments from an execution.

    If you want to post a new issue to Jira, you need to have a project or global integration with Jira Cloud.

    Post issue via Step view

    1. Log in to ReportPortal as Admin, PM, Member, Customer or Operator

    2. Go to a Step view

    3. Choose a needed item

    4. Click on "Actions" button

    5. Choose "Post issue" option

    6. Fill in the "Post issue" form with valid data

    `BTS`: if you have configured BTS integrations, you will be able to choose between them
    `Integration name`: from the drop-down, you can choose any of integrations for chosen earlier BTS
    `Predefined fields`: fields which you choose on Project Settings/ or Plugins
    `Included data`: which data should be posted to BTS (attachments, logs, comments)
7. Submit the form

8. A new issue will be posted in BTS with information from ReportPortal

9. A label with issue ID will be added to the test item

Linking an issue with an issue in Jira Cloud means adding a clickable link to an existing issue in Jira from ReportPortal that will show the status of this issue.

    Link issue via Step view

    1. Log in to ReportPortal as Admin, PM, Member, Customer or Operator

    2. Go to a Step view

    3. Choose a needed item

    4. Click on "Actions" button

    5. Choose "Link issue" option

    6. Fill in the "Link issue" form with valid data

    `BTS`: if you have configured BTS integrations, you will be able to choose between them
    `Integration name`: from the drop-down, you can choose any of integrations for chosen earlier BTS
    `Link to issue`: a full link to the item in BTS
    `Issue ID`: information which will be displayed on the label in ReportPortal
7. Submit the form

8. A label with issue ID will be added to the test item

    - + \ No newline at end of file diff --git a/plugins/AtlassianJiraServer/index.html b/plugins/AtlassianJiraServer/index.html index 8ff7e0391..dcc330c6e 100644 --- a/plugins/AtlassianJiraServer/index.html +++ b/plugins/AtlassianJiraServer/index.html @@ -12,7 +12,7 @@ - + @@ -25,7 +25,7 @@ Add to the "Project -1" issue type Defect and for "Project -2" - issue type Task. While posing issue procedure, choose integration with needed type.

    Update global Atlassian Jira Server integration

If you need to edit Jira Server integration authorization parameters, please perform the following actions:

    1. Log in as ADMIN

    2. Go to Administrative > Plugins

    3. Click on JIRA Server plugin panel

    4. Click on a tab with existing integration

    5. Click on "Edit authorization" link

    6. Change "Integration name"

    7. Type your Jira Server credentials

    8. Submit the form

    note

You can edit the "Integration name" only. If you need other changes, please submit a new integration.

If you need to edit fields which should be posted in Jira Server, please perform the following actions:

    1. Log in as ADMIN

    2. Go to Administrative > Plugins

    3. Click on JIRA Server plugin panel

    4. Click on a tab with existing integration

    5. Click on "Configure" button

    6. Choose issue type from the drop-down

    7. Check the needed fields and fill them in if necessary

    8. Click on "Submit" button

    Remove global Atlassian Jira Server integration

If you need to remove Jira Server integration, please perform the following actions:

    1. Log in as ADMIN

    2. Go to Administrative > Plugins

    3. Click on JIRA Server plugin panel

    4. Click on a tab with existing integration

    5. Click on "Remove integration"

    6. Submit the action

    Project Atlassian JIRA Server integration

    Add new project Atlassian Jira Server integration

If any project needs different Jira Server configurations, you should unlink a project from Global configurations and add a project configuration. It means that now when a new global integration is added to our centralized test automation tool, it won't be applied to the unlinked project.

    For that,

    1. Log in as an ADMIN or Project Manager

    2. Go to Project Settings > Integrations

    3. Click on the JIRA Server integration panel

    4. Click on "Unlink and setup manually" button

    5. Fill and confirm the authorization form

    'Integration Name': <The name which you want to give to your integration> - should be unique
    'Link to BTS': <valid URL of bug tracking system>
    'Project key in BTS': <project key in bug tracking system>
    'Authorization Type': Basic (predefined)
    'BTS Username': <JIRA user name>
    'BTS Password': <JIRA user password>

After you have created a connection with the JIRA project, you can choose predefined JIRA ticket fields. These fields will be shown to you every time you post an issue to Jira.

This feature gives you the ability to choose the issue type with which you will post a ticket.

    To choose a needed issue type and predefined field for the chosen issue, you should perform the following actions:

    1. Click on "Configure" button

    2. Choose issue type from the drop-down

    3. Check the needed fields and fill them in if necessary

    4. Click on "Submit" button

    Now team members on this project will be able to submit issues in Jira. Options for Post Issue / Link issue are activated.

    You can add more integrations by clicking on "Add integration" button.

    Reset to project Atlassian Jira Server Integrations

If you want to delete project integrations with Jira Server and link your project with global configurations, please perform the actions described below:

    1. Log in as an ADMIN or Project Manager

    2. Go to Project Settings > Integrations

    3. Click on the JIRA Server integration panel

    4. Click on "Reset to global settings" button

    5. Confirm the action

    Some tricks when you create a new connection:

1. Verify that the link to the JIRA Server system is correct. Several variants are possible, for instance:
    https://jira.company.com/jira
    https://jiraeu.company.com
2. Verify that the project key in JIRA Server is correct. Please fill in the Project key field with the project key value, e.g. project ABC-DEF has key ABCDEF.

3. Verify the username and password data. Make sure that the login name, and not the email, is in the username field. If all the data above is correct but the error appears again, check whether the user's credentials to JIRA Server have expired. Since JIRA Server sends the response in HTML format, we are not able to display the real reason for the error. To check and/or resolve the issue, please proceed to the next step:

4. An SSL instance of JIRA (even the cloud version) can be accessed with a JIRA API token used instead of a password. After you have connected Jira and our test automation results dashboard, you can choose an issue type that you will be able to add to Jira during the “Post Issue” operation. Also, the user can add predefined fields that the user can fill in.

    Post issue to Atlassian Jira Server

Posting an issue to Jira Server means creating a new issue in Jira from ReportPortal and uploading logs and attachments from an execution.

    If you want to post a new issue to Jira, you need to have a project or global integration with Jira Server.

    Post issue via Step view

    1. Log in to ReportPortal as Admin, PM, Member, Customer or Operator

    2. Go to a Step view

    3. Choose a needed item

    4. Click on "Actions" button

    5. Choose "Post issue" option

    6. Fill in the "Post issue" form with valid data

    `BTS`: if you have configured BTS integrations, you will be able to choose between them
    `Integration name`: from the drop-down, you can choose any of integrations for chosen earlier BTS
    `Predefined fields`: fields which you choose on Project Settings/ or Plugins
    `Included data`: which data should be posted to BTS (attachments, logs, comments)
    `BTS username`: reporter login in Jira Server
    `BTS password`: reporter password in Jira Server
7. Submit the form

8. A new issue will be posted in BTS with information from ReportPortal

9. A label with issue ID will be added to the test item

Linking an issue with an issue in Jira Server means adding a clickable link to an existing issue in Jira from ReportPortal that will show the status of this issue.

    Link issue via Step view

    1. Log in to ReportPortal as Admin, PM, Member, Customer or Operator

    2. Go to a Step view

    3. Choose a needed item

    4. Click on "Actions" button

    5. Choose "Link issue" option

    6. Fill in the "Link issue" form with valid data

    `BTS`: if you have configured BTS integrations, you will be able to choose between them
    `Integration name`: from the drop-down, you can choose any of integrations for chosen earlier BTS
    `Link to issue`: a full link to the item in BTS
    `Issue ID`: information which will be displayed on the label in ReportPortal
7. Submit the form

8. A label with issue ID will be added to the test item

    - + \ No newline at end of file diff --git a/plugins/AzureDevOpsBTS/index.html b/plugins/AzureDevOpsBTS/index.html index 2a6a781b5..6789fa1c8 100644 --- a/plugins/AzureDevOpsBTS/index.html +++ b/plugins/AzureDevOpsBTS/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Azure DevOps BTS

    To install the Azure DevOps BTS plugin, see Upload plugin section.

Integration with our centralized test automation tool can be helpful for projects which are already using Azure DevOps BTS (Bug Tracking System) as a tracking tool. This feature allows posting issues, linking/unlinking issues, and getting updates on their statuses. For example, just a few clicks – and a bug with detailed logs is created!

    Azure DevOps BTS integration can be configured on the global level (for all projects on the instance) or on the project level (only for one project).

    Global Azure DevOps BTS integration

    Add new global Azure DevOps BTS integration

You might want to configure global integrations which will be applied to all projects if all projects on your instance are using the same Azure DevOps BTS projects.

    1. Log in as ADMIN.

    2. Go to Administrate > Plugins.

    3. Click on Azure DevOps plugin.

    4. Click on the “Add integration” button.

5. Fill and save the authorization form.

    Please, follow the steps below to get a token for Azure DevOps integration:

You can choose predefined Azure DevOps ticket fields after you have created a connection with the Azure DevOps BTS project. These fields will be shown to you every time you post an issue to Azure DevOps BTS.

This feature also gives you the ability to choose the proper issue type for newly created issues in Azure DevOps BTS.

    To choose a needed issue type and predefined field for the chosen issue, you should perform the following actions on the opened Configuration form:

    1. Choose issue type from the drop-down.

2. Check the needed fields and fill them in if necessary.

    3. Click on “Submit” button.

    Now team members on all your projects will be able to submit issues in Azure DevOps BTS. Options for Post Issue/Link issue are activated.

    You can add more integrations by clicking on “Add integration”.

A user can add several integrations with different names to the same Azure DevOps BTS project.

    Use case:

Situation: A user wants to post issues with types Issue and Task to the project NNN-MMM in Azure DevOps BTS. Is it possible? Yes!

Solution: Add two integrations to the project NNN-MMM in Azure DevOps BTS with names e.g., “Project -1” and “Project -2”. Add to the “Project -1” issue type Issue and for “Project -2” - issue type Task. During the post issue procedure, choose the integration with the needed type.

    Update global Azure DevOps BTS integration

If you need to edit Azure DevOps BTS integration authorization parameters, please perform the following actions:

    1. Log in as ADMIN.

    2. Go to Administrate > Plugins.

    3. Click on Azure DevOps plugin.

    4. Click on a tab with existing integration.

    5. Click on “Edit authorization” link.

6. Change “Integration name”.

7. Type your Azure DevOps credentials.

8. Submit the form.

    note

You can edit only “Integration name”. If you need other changes, please submit a new integration.

If you need to edit fields which should be posted in Azure DevOps BTS, please perform the following actions:

    1. Log in as ADMIN.

    2. Go to Administrate > Plugins.

    3. Click on Azure DevOps plugin.

    4. Click on the tab with existing integration.

    5. Click on “Configure” button.

6. Choose issue type from the drop-down.

7. Check the needed fields and fill them in if necessary.

8. Click on “Submit” button.

    Remove global Azure DevOps BTS integration

If you need to remove Azure DevOps BTS integration, please perform the following actions:

    1. Log in as ADMIN.

    2. Go to Administrate > Plugins.

    3. Click on Azure DevOps plugin.

    4. Click on the tab with existing integration.

    5. Click on “Remove integration”.

6. Submit the action.

    Project Azure DevOps BTS integration

    Add new project Azure DevOps BTS integration

If any project needs different Azure DevOps BTS configurations, you should unlink a project from Global configurations and add a Project configuration. It means that now when a new global integration is added to ReportPortal, it won't be applied to the unlinked project.

    For that,

    1. Log in as an ADMIN or Project Manager.

    2. Go to Project Settings > Integrations.

    3. Click on the Azure DevOps integration panel.

    4. Click on “Unlink and setup manually” button.

5. Fill and confirm the authorization form.
    note

    Please, have a look at Global Azure DevOps BTS integration for detailed configuration steps.

You can choose predefined Azure DevOps ticket fields after you have created a connection with the Azure DevOps BTS project. These fields will be shown to you every time you post an issue to Azure DevOps BTS.

This feature also gives you the ability to choose the proper issue type for newly created issues in Azure DevOps BTS.

    To choose a needed issue type and predefined field for the chosen issue, you should perform the following actions on the opened Configuration form:

    1. Choose issue type from the drop-down.

    2. Check the needed fields and fill them in if necessary.

    3. Click on “Submit” button.

    Now team members on this project will be able to submit issues in Azure DevOps BTS. Options for Post Issue/Link issue are activated.

    You can add more integrations by clicking on “Add integration” button.

    Reset to project Azure DevOps BTS Integrations

If you want to delete project integrations with Azure DevOps BTS and link your project with global configurations, please perform the actions described below:

    1. Log in as an ADMIN or Project Manager.

    2. Go to Project Settings > Integrations.

    3. Click on the Azure DevOps integration panel.

    4. Click on “Reset to global settings” button.

    5. Confirm the action.

    Post issue to Azure DevOps BTS

    Posting an issue to Azure DevOps BTS means creating a new issue in Azure DevOps BTS from ReportPortal and uploading logs and attachments from an execution.

    If you want to post a new issue to Azure DevOps BTS, you need to have a project or global integration with Azure DevOps BTS.

    1. Log in to ReportPortal as Admin, PM, Member, Customer or Operator.

    2. Go to Launches.

    3. Choose a needed item.

4. Click on the pencil icon to open “Make decision” modal.
5. Choose “Post issue” option and then “Apply & Continue”.
6. Fill in the “Post Issue” form with valid data and submit the form.
7. A new issue will be posted in Azure DevOps BTS with information from ReportPortal.
8. A label with issue ID will be added to the test item.

Linking an issue with an issue in Azure DevOps BTS means adding a clickable link to an existing issue in Azure DevOps BTS from ReportPortal that will show the status of this issue.

    1. Log in to ReportPortal as Admin, PM, Member, Customer or Operator.

    2. Go to Launches.

    3. Choose a needed item.

4. Click on the pencil icon to open “Make decision” modal.
5. Choose “Link issue” option and then “Apply & Continue”.
6. Fill in the “Link issue” form with valid data and submit the form.
7. A label with issue ID will be added to the test item.
8. The link redirects to this issue in Azure DevOps BTS.

    You can also unlink an issue.

1. Click on the “remove” icon.
2. Click “Unlink Issue”.
3. The link to the issue in Azure DevOps BTS is removed.

    Custom issue type in Azure DevOps BTS

You can configure any custom issue type (e.g., Bug for Adam) in Azure DevOps BTS and then choose it as a predefined Azure DevOps ticket field. So, developer Adam will see in Azure DevOps BTS all issues from ReportPortal which are assigned to him.

    Follow the steps below to configure custom issue type:

    1. Log in to Azure portal.

    2. Go to Organization settings.

3. Click on the “Process” menu item.
4. Select three dots near the current process and create a new one.
5. Click on the name of the newly created process.
6. Create a custom issue type.
7. Click on the project quantity and change the process for your project.
8. Change Issue Type for Azure DevOps BTS integration on ReportPortal.
9. Post issues to Azure DevOps BTS.

10. Now you can see issues with the custom issue type in Azure DevOps BTS.

    - + \ No newline at end of file diff --git a/plugins/EmailServer/index.html b/plugins/EmailServer/index.html index d007ff265..e64bab959 100644 --- a/plugins/EmailServer/index.html +++ b/plugins/EmailServer/index.html @@ -12,7 +12,7 @@ - + @@ -22,7 +22,7 @@
    Skip to main content

    E-mail server

    E-mail server plugin is available in ReportPortal on the Plugins page.

You don't need to download it separately.

    Add E-mail server integrations

    You can integrate our centralized test automation tool with an E-mail server via SMTP protocol. With this integration, you will be able to perform such functions as:

    • invite a new user to the project
    • configure notification rules on launch finish

    Permissions:

A user with the account role ADMINISTRATOR can configure E-mail integration for the whole instance or per project. A user with the account role PROJECT MANAGER can configure E-mail integration only on a project where they are assigned as Project Manager.

    Global E-mail server integration

    To configure the SMTP server for the whole instance:

    1. Log in to the ReportPortal as an ADMIN user

    2. Then open the list on the right of the user's image.

    3. Click the 'Administrative' link

    4. Click the 'Plugins' from the left-hand sidebar

    5. Click on the 'Email Server' tab.

    6. Click on Add new integration

7. The following fields should be present:

    `Host`: <host_name_of_email_server>
    `Protocol`: SMTP (predefined)
    `Default sender name`: (optional)
    `Port`: <port_number>
    `Authorization`: OFF/ON
    `Username`: <user_email_address>
    `Password`: <user_email_password>
`TLS` or `SSL`: should be checked depending on the selected port.

Example of an SMTP server configuration for the Gmail email server (detailed info can be found here)

    `Host`: smtp.gmail.com
    `Protocol`: SMTP
    `Default sender name`: Report Portal
    `Port`: 465
    `Authorization`: ON
    `Username`: <user_email_address>
    `Password`: <user_email_password>
    `SSL`: checkbox should be checked.

    Example of an SMTP server configuration for a Yandex email server (detailed info can be found here)

    `Host`: smtp.yandex.com
    `Protocol`: SMTP
    `Default sender name`: Report Portal
    `Port`: 465
    `Authorization`: ON
    `Username`: <user_email_address>
    `Password`: <user_email_password>
    `SSL`: checkbox should be checked.
8. Confirm data in the form

After the E-mail server integration is added, the configuration will be applied to all projects on the instance.
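Before or after saving the form, it can be useful to sanity-check the SMTP credentials outside ReportPortal. A minimal sketch in Python, assuming the Gmail values from the example above (SSL on port 465); the credentials are placeholders:

import smtplib

# Placeholder credentials -- use the same values you enter in the form
HOST, PORT = "smtp.gmail.com", 465
USERNAME, PASSWORD = "user@example.com", "app_password"

# SMTP_SSL matches the checked SSL checkbox; for STARTTLS ports use SMTP + starttls()
with smtplib.SMTP_SSL(HOST, PORT) as server:
    server.login(USERNAME, PASSWORD)  # raises SMTPAuthenticationError on bad credentials
    print("SMTP credentials accepted")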

    Project E-mail integration

If E-mail integration has not been added on the project, or if a Project Manager or Admin wants to specify a custom configuration for a particular project, they can configure the E-mail server in the project settings.

To configure the SMTP server for a single project:

    1. Log in to the ReportPortal as an ADMIN or PM user
    2. Then click on the Project settings icon.
    3. Click on the Integrations tab.
    4. Click on the 'E-mail Server' tab.
    5. Click on the button "Unlink & Setup Manually"
6. The following fields should be present:
    `Host`: <host_name_of_email_server>
    `Protocol`: SMTP (predefined)
    `Default sender name`: (optional)
    `Port`: <port_number>
    `Authorization`: OFF/ON
    `Username`: <user_email_address>
    `Password`: <user_email_password>
`TLS` or `SSL`: should be checked depending on the selected port.

Example of an SMTP server configuration for the Gmail email server (detailed info can be found here)

    `Host`: smtp.gmail.com
    `Protocol`: SMTP
    `Default sender name`: Report Portal
    `Port`: 465
`Authorization`: ON
    `Username`: <user_email_address>
    `Password`: <user_email_password>
    `SSL`: checkbox should be checked.

    Example of an SMTP server configuration for a Yandex email server (detailed info can be found here)

`Host`: smtp.yandex.com
    `Protocol`: SMTP
    `Default sender name`: Report Portal
    `Port`: 465
    `Authorization`: ON
    `Username`: <user_email_address>
    `Password`: <user_email_password>
    `SSL`: checkbox should be checked.
7. Confirm data in the form

After the E-mail server integration is added, the configuration will be applied to the chosen project.

    note

In case you unlink your project settings from Global settings, the configuration above applies only to the chosen project.

A possibility to provide a custom host in links (starting from version 5.4)

You can perform this operation via API. For that, choose the API call Integration controller - Update project integration instance, and provide a link to your host in the field "rpHost": "custom_link.com".

PUT /v1/integration/{projectName}/{integrationId}

{
  "enabled": true,
  "integrationParameters": {
    "protocol": "smtp",
    "rpHost": "custom_link.com",
    "authEnabled": true,
    "port": "",
    "sslEnabled": false,
    "starTlsEnabled": true,
    "host": "smtp.com",
    "username": ""
  }
}
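As a sketch, the same call can be made from Python; the base URL, project name, integration ID and token below are placeholders, and the bearer-token header is an assumption that depends on your authorization setup:

import requests

RP_BASE = "https://your-reportportal-instance"  # placeholder
PROJECT = "your_project"                        # placeholder
INTEGRATION_ID = 42                             # placeholder
API_TOKEN = "your_api_token"                    # placeholder

payload = {
    "enabled": True,
    "integrationParameters": {
        "protocol": "smtp",
        "rpHost": "custom_link.com",
        "authEnabled": True,
        "port": "",
        "sslEnabled": False,
        "starTlsEnabled": True,
        "host": "smtp.com",
        "username": ""
    }
}

# PUT /v1/integration/{projectName}/{integrationId} with the body shown above
response = requests.put(
    f"{RP_BASE}/v1/integration/{PROJECT}/{INTEGRATION_ID}",
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
response.raise_for_status()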
    - + \ No newline at end of file diff --git a/plugins/ManagePlugins/index.html b/plugins/ManagePlugins/index.html index 6c930e879..c91be1f23 100644 --- a/plugins/ManagePlugins/index.html +++ b/plugins/ManagePlugins/index.html @@ -12,7 +12,7 @@ - + @@ -35,7 +35,7 @@ Pros: Project Manager or Administrator configures integrations per project, team members from different projects can not see configurations of each other.

    Use case 3:

Situation: On a ReportPortal instance there are several projects. Separate projects are added for different teams, but one team has been given several projects. So several projects on ReportPortal have connections with one Jira (or Rally) project, and several projects have connections with different Jira (or Rally) projects.

Solution: Configure global integrations on the Management board, and configure project integrations for the Jira (or Rally) plugin on the Project settings.

Pros: The Administrator configures the integration once for those who need the same settings, and a Project Manager or Administrator configures integrations per project for those projects that need to limit access.

    - + \ No newline at end of file diff --git a/plugins/Rally/index.html b/plugins/Rally/index.html index da69d5cf6..405de37e0 100644 --- a/plugins/Rally/index.html +++ b/plugins/Rally/index.html @@ -12,7 +12,7 @@ - + @@ -23,7 +23,7 @@ Yes!

Solution: Add two integrations to the project NNN-MMM in RALLY with names e.g. "Project -1" and "Project -2". Add to the "Project -1" issue type Defect and for "Project -2" - issue type Task. During the post issue procedure, choose the integration with the needed type.

    Update global RALLY integration

If you need to edit RALLY integration authorization parameters, please perform the following actions:

    1. Log in as ADMIN

    2. Go to Administrative > Plugins

    3. Click on RALLY plugin panel

    4. Click on a tab with existing integration

    5. Click on "Edit authorization" link

    6. Change "Integration name"

    7. Type your RALLY credentials

    8. Submit the form

    note

You can edit only "Integration name". If you need other changes, please submit a new integration.

If you need to edit fields which should be posted in RALLY, please perform the following actions:

    1. Log in as ADMIN

    2. Go to Administrative > Plugins

    3. Click on RALLY plugin panel

    4. Click on a tab with existing integration

    5. Click on "Configure" button

    6. Choose issue type from the drop-down

    7. Check the needed fields and fill them in if necessary

    8. Click on "Submit" button

    Remove global RALLY integration

If you need to remove RALLY integration, please perform the following actions:

    1. Log in as ADMIN

    2. Go to Administrative > Plugins

    3. Click on RALLY plugin panel

    4. Click on a tab with existing integration

    5. Click on "Remove integration"

    6. Submit the action

    Project RALLY integration

    Add new project RALLY integration

If any project needs different RALLY configurations, you should unlink a project from Global configurations and add a project configuration. It means that now when a new global integration is added to ReportPortal, it won't be applied to the unlinked project.

    For that,

    1. Log in as an ADMIN or Project Manager

    2. Go to Project Settings > Integrations

    3. Click on the RALLY integration panel

    4. Click on "Unlink and setup manually" button

    5. Fill and confirm the authorization form

'Integration Name': <The name which you want to give to your integration> - should be unique
    'Link to BTS': <valid URL of bug tracking system>
    'Project ID in BTS': <project ID in bug tracking system>
    'Authorization Type': Basic (predefined)
    'BTS Username': <RALLY user name>
    'BTS Password': <RALLY user password>

After you have created a connection with the RALLY project, you can choose predefined RALLY ticket fields. These fields will be shown to you every time you post an issue to RALLY.

    This feature gives you the ability to choose which type you will post a ticket with.

    To choose a needed issue type and predefined field for the chosen issue, you should perform the following actions:

    1. Click on "Configure" button

    2. Choose issue type from the drop-down

    3. Check the needed fields and fill them in if necessary

    4. Click on "Submit" button

    Now team members on this project will be able to submit issues in RALLY. Options for Post Issue / Link issue are activated.

    You can add more integrations by clicking on "Add integration" button.

    Reset to project RALLY Integrations

If you want to delete project integrations with RALLY and link your project with global configurations, please perform the actions described below:

    1. Log in as an ADMIN or Project Manager

    2. Go to Project Settings > Integrations

    3. Click on the RALLY integration panel

    4. Click on "Reset to global settings" button

    5. Confirm the action

    Post issue to Rally

Posting an issue to Rally means creating a new issue in Rally from ReportPortal and uploading logs and attachments from an execution.

    If you want to post a new issue to Rally, you need to have a project or global integration with Rally.

    Post issue via Step view

    1. Log in to ReportPortal as Admin, PM, Member, Customer or Operator

    2. Go to a Step view

    3. Choose a needed item

    4. Click on "Actions" button

    5. Choose "Post issue" option

    6. Fill in the "Post issue" form with valid data

    `BTS`: if you have configured BTS integrations, you will be able to choose between them
    `Integration name`: from the drop-down, you can choose any of integrations for chosen earlier BTS
    `Predefined fields`: fields which you choose on Project Settings/ or Plugins
    `Included data`: which data should be posted to BTS (attachments, logs, comments)
`ApiKey`: <user API key>
7. Submit the form

8. A new issue will be posted in BTS with information from ReportPortal

9. A label with issue ID will be added to the test item

Linking an issue with an issue in Rally means adding a clickable link to an existing issue in Rally from ReportPortal that will show the status of this issue.

    Link issue via Step view

    1. Log in to ReportPortal as Admin, PM, Member, Customer or Operator

    2. Go to a Step view

    3. Choose a needed item

    4. Click on "Actions" button

    5. Choose "Link issue" option

    6. Fill in the "Link issue" form with valid data

    `BTS`: if you have configured BTS integrations, you will be able to choose between them
    `Integration name`: from the drop-down, you can choose any of integrations for chosen earlier BTS
    `Link to issue`: a full link to the item in BTS
    `Issue ID`: information which will be displayed on the label in ReportPortal
7. Submit the form

8. A label with issue ID will be added to the test item

    - + \ No newline at end of file diff --git a/plugins/SauceLabs/index.html b/plugins/SauceLabs/index.html index 34a360079..34cb791e9 100644 --- a/plugins/SauceLabs/index.html +++ b/plugins/SauceLabs/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Sauce Labs

    To install the Sauce Labs plugin, see Upload plugin section.

    Add the Sauce Labs integration

Configure the integration with Sauce Labs to watch a video of test executions in our centralized test automation tool.

    Permissions:

• A user with the account role ADMINISTRATOR can configure the integration for the whole instance or per project.
• A user with the account role PROJECT MANAGER can configure the integration only on a project where they are assigned as Project Manager.

    Global Sauce Labs integration

    To configure Sauce Labs for the whole instance:

    1. Log in to ReportPortal as an ADMIN user.

    2. Open the list on the right of the user's image.

    3. Click the 'Administrative' link.

    4. Click 'Plugins' from the left-hand sidebar.

    5. Click the 'Sauce Labs' tab.

    6. Click 'Add integration'.

    7. The following fields should be present:

  `User name`: <your Sauce Labs username>
  `Access token`: <your access token>
  `Data center`: <Europe or USA>
    8. Confirm data in the form.

After you've added the Sauce Labs integration, you can use Sauce Labs in ReportPortal.

    Project Sauce Labs integration

    If the plugin is configured on the global level, then all projects at this instance will use this configuration by default.

    However, you can unlink the integration from the global level and use a project level configuration instead.

    To unlink the integration, click 'Unlink & Setup Manually', then follow the on-screen instructions.

    How to use the Sauce Labs integration

    Before using this feature, you should report test results to ReportPortal with the attribute: SLID: XXXXXXXX.

    Where: SLID = Sauce Labs ID and XXXXXXXX = # of job in Sauce Labs

    The SLID: XXXXXXXX attribute links the execution in ReportPortal and a job in Sauce Labs. If a test item has attribute SLID: XXXXXXXX, and there is a global or project integration with Sauce Labs, a user will be able to view a video from Sauce Labs for the appropriate job in ReportPortal on a log view.
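For example, the attribute can be attached when a test item is reported through the Python client. The sketch below is an assumption-heavy illustration: it uses the reportportal_client package, placeholder connection values and a made-up job number, and the exact client API may differ between client versions:

from reportportal_client import RPClient
from reportportal_client.helpers import timestamp

# Placeholder connection details
client = RPClient(endpoint="https://your-reportportal-instance",
                  project="your_project",
                  api_key="your_api_token")

client.start_launch(name="ui_suite", start_time=timestamp())

# The SLID attribute links this item to Sauce Labs job 12345678 (made-up number)
item_id = client.start_test_item(
    name="checkout_test",
    start_time=timestamp(),
    item_type="STEP",
    attributes=[{"key": "SLID", "value": "12345678"}],
)
client.finish_test_item(item_id, end_time=timestamp(), status="PASSED")
client.finish_launch(end_time=timestamp())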

    - + \ No newline at end of file diff --git a/quality-gates/AssessmentOfTestResultsUsingQualityGates/index.html b/quality-gates/AssessmentOfTestResultsUsingQualityGates/index.html index b64904f78..b7d9c9e65 100644 --- a/quality-gates/AssessmentOfTestResultsUsingQualityGates/index.html +++ b/quality-gates/AssessmentOfTestResultsUsingQualityGates/index.html @@ -12,7 +12,7 @@ - + @@ -24,7 +24,7 @@ Second, let's check how to send assessment results to CI/CD.

    Quality Gate Analysis

    How to run Quality Gates Manually

By default, all launches have the "N/A" status. It means that Quality Gate analysis has not been run for these launches.

If you want to run Quality Gate analysis manually, click on the "N/A" label and then click "Run Quality Gate" in the opened pop-up.

    How to recalculate Quality Gates

If you want to recalculate the Quality Gate status for a launch, just perform the following actions:

Quality Gates cannot be run for launches in progress.

    note

If the Quality Gate status has already been sent to CI/CD, the status cannot be recalculated for such a launch.

    How to run Quality Gates Automatically

You can configure Auto Quality Gate Analysis on the Project Settings. If you switch Quality Gate Analysis ON, the system will start QG analysis when a launch finishes.

    Quality Gate Status and Timeout

    When a launch finishes, the system starts Quality Gate Analysis.

First, the system checks if there is a Quality Gate for the launch under analysis. If there is no such Quality Gate, the system shows an error message. Second, if a Quality Gate is found, the system checks all rules in the Quality Gate one by one and defines a status for each rule. Third, when all rules are done, the system defines the status of the whole Quality Gate.

How the status is calculated:

Status | Calculation | Meaning
Passed | All rules in a Quality Gate have status PASSED | Quality Assessment passed, a test run matches the specified quality criteria
Undefined | The Quality Gate has no FAILED or IN PROGRESS rules, but at least one rule has status UNDEFINED | Quality Assessment cannot be finished ❓
In Progress | The Quality Gate has no FAILED rules, but at least one rule has status IN PROGRESS | Quality Assessment is in progress
Failed | At least one rule in the Quality Gate has status FAILED | Quality Assessment failed, a test run does not match the specified quality criteria
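Read as code, the precedence in the table boils down to a simple check; a minimal sketch:

def quality_gate_status(rule_statuses):
    # Any FAILED rule fails the whole gate; otherwise IN PROGRESS wins
    # over UNDEFINED, and only an all-PASSED set of rules passes the gate.
    if "FAILED" in rule_statuses:
        return "Failed"
    if "IN PROGRESS" in rule_statuses:
        return "In Progress"
    if "UNDEFINED" in rule_statuses:
        return "Undefined"
    return "Passed"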

Fourth, if there is an integration with CI/CD, the system sends the status to the CI/CD tool's pipeline.

❓ The reasons why Quality Gates can get the Undefined status:

If you get this status, you can proceed with launch analysis (or choose another baseline) and rerun Quality Gates. For that, check the section "How to recalculate Quality Gates".

    Timeout

Specifically for integration with CI/CD, Quality Gates has a Timeout parameter. If a launch whose status should be sent to a pipeline gets the UNDEFINED status, the system uses the value from Timeout. The default Timeout equals 2 hours. It means that 2 hours after the launch finish, the system force-recalculates the Quality Gate status and defines the final status.

Jenkins Job Status | Quality Gate Status | Description
SUCCESS | PASSED | All rules passed
FAILED | FAILED | At least one rule does not pass

    If you want to choose other options for a timeout, you can do it:

If there is no needed option in the dropdown, you can specify a custom value via API.

    Quality Gate Report

A Quality Gate report is a full report that shows information on Quality Gate results. This is a table that shows:

All actual results in the report are clickable except New Failure. A clickable area for New Failure will be available in version 5.7, so users can drill down and investigate the items that became the reason for a build failure.

    - + \ No newline at end of file diff --git a/quality-gates/DeleteQualityGates/index.html b/quality-gates/DeleteQualityGates/index.html index ea8d336f1..e753839b3 100644 --- a/quality-gates/DeleteQualityGates/index.html +++ b/quality-gates/DeleteQualityGates/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/quality-gates/FeatureOverview/index.html b/quality-gates/FeatureOverview/index.html index d3c298477..9b9907ac7 100644 --- a/quality-gates/FeatureOverview/index.html +++ b/quality-gates/FeatureOverview/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Feature overview

Quality Gate analysis provides capabilities to speed up the CI/CD pipeline by sending auto-feedback to your CI/CD tools. ReportPortal assesses the build quality and sends auto feedback to CI/CD.

The Quality Gates plugin adds the following possibilities to our continuous testing platform:

• create quality rules based on general test automation KPIs: number of executed tests, failure rate, failure of critical components, number of issues in completed tests, number of essential issues in the critical parts, new failures & new errors in the build
• run Quality Gates analysis for a build and view a build report that helps to troubleshoot issues in the build
    • automatically send Quality Gates status to CI/CD.
    - + \ No newline at end of file diff --git a/quality-gates/HowToInstallQualityGates/index.html b/quality-gates/HowToInstallQualityGates/index.html index aa88cf6b9..0117b49fe 100644 --- a/quality-gates/HowToInstallQualityGates/index.html +++ b/quality-gates/HowToInstallQualityGates/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/quality-gates/IntegrationWithCICD/IntegrationWithGitLabCI/index.html b/quality-gates/IntegrationWithCICD/IntegrationWithGitLabCI/index.html index 7dd925ffc..fe6909d02 100644 --- a/quality-gates/IntegrationWithCICD/IntegrationWithGitLabCI/index.html +++ b/quality-gates/IntegrationWithCICD/IntegrationWithGitLabCI/index.html @@ -12,7 +12,7 @@ - + @@ -137,7 +137,7 @@ shell scripts and console commands. We also implemented HashiCorp Vault integration to store our test secrets securely. To provide more outlook we described how to run tests in Kotlin and Python.

    And here are the corresponding pipeline files, which we implemented:

    - + \ No newline at end of file diff --git a/quality-gates/IntegrationWithCICD/IntegrationWithJenkins/index.html b/quality-gates/IntegrationWithCICD/IntegrationWithJenkins/index.html index b787aa439..6f8ffdcc6 100644 --- a/quality-gates/IntegrationWithCICD/IntegrationWithJenkins/index.html +++ b/quality-gates/IntegrationWithCICD/IntegrationWithJenkins/index.html @@ -12,7 +12,7 @@ - + @@ -22,7 +22,7 @@
    Skip to main content

    Integration with Jenkins

    Jenkins configuration

    1. Go to “Manage Jenkins” -> “Manage Plugins”.
    2. Make sure that the necessary Jenkins plugin is installed:

    a. Switch to the “Installed” tab and search for the “Webhook Step” plugin.

    b. If no results of the search:

    i. Switch to the “Available” tab;

    ii. Search for “Webhook Step”;

    iii. Install the plugin with “Download now and install after restart”.

3. Define the webhook configuration in the Jenkins job/pipeline before tests execution:
    def hook = registerWebhook();
    def encodedUrl = sh(script: "echo -n ${hook.getURL().toString()} | base64 -w 0", returnStdout: true)

    encodedUrl – this is a unique string that will be generated from the Jenkins job/pipeline and connect each reported launch with the appropriate Jenkins run from which the launch was reported.

    Put the encodedUrl variable into the test execution string at the enumeration of RP.attributes. For example(Maven build):

    Drp.attributes='k1:v1;k2:v2;rp.webhook.key:${encodedUrl}'
4. Configure the webhook to wait for data from RP:

    a. Option #1

This option allows sending the Quality Gate result status to a separate pipeline stage. It doesn’t affect the tests execution stage; the status of the separate stage will be determined by the Quality Gate status. Add an additional pipeline stage Wait for webhook and define a particular TIMEOUT_TIME, i.e. how long Jenkins should wait for data from RP:

stage('Wait for webhook') {
    timeout(time: params.TIMEOUT_TIME, unit: params.TIMEOUT_UNIT) {
        echo 'Waiting for RP processing...'
        data = waitForWebhook hook;
        echo "Processing finished... ${data}"

        def jsonData = readJSON text: data
        assert jsonData['status'] == 'PASSED'
    }
}

    Parameters for TIMEOUT_TIME and TIMEOUT_UNIT can be defined like that:

parameters {
    string(name: 'TIMEOUT_TIME', defaultValue: '30', description: '')
    string(name: 'TIMEOUT_UNIT', defaultValue: 'SECONDS', description: '')
}

    b. Option #2

This option sends the results from RP to the tests run pipeline stage, and the status of that stage (tests execution) will be determined by the Quality Gate status. Add the following code to the pipeline stage where the tests run:

    echo 'Waiting for RP processing...'
    data = waitForWebhook hook;
    echo "Processing finished... ${data}"

    def jsonData = readJSON text: data
    assert jsonData['status'] == 'PASSED'

When Jenkins receives a response about the Quality Gate status from RP, the build status is marked accordingly:

Jenkins Job Status | Quality Gate Status | Description
SUCCESS | PASSED | Quality Gate is passed
ABORTED | UNDEFINED | The Jenkins timeout has been exceeded
FAILED | FAILED | Quality Gate is failed
    - + \ No newline at end of file diff --git a/quality-gates/IntegrationWithCICD/index.html b/quality-gates/IntegrationWithCICD/index.html index bbf50fb0a..5d2bb1d68 100644 --- a/quality-gates/IntegrationWithCICD/index.html +++ b/quality-gates/IntegrationWithCICD/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/quality-gates/QualityGateEdit/index.html b/quality-gates/QualityGateEdit/index.html index 819132c10..16e0d29da 100644 --- a/quality-gates/QualityGateEdit/index.html +++ b/quality-gates/QualityGateEdit/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/quality-gates/QualityGatePurpose/index.html b/quality-gates/QualityGatePurpose/index.html index 85c2a31ba..19f9aedbd 100644 --- a/quality-gates/QualityGatePurpose/index.html +++ b/quality-gates/QualityGatePurpose/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Quality Gate Purpose

ReportPortal is a continuous testing platform with built-in functionality – Quality Gates.

    The primary Quality Gate purpose is to speed up a CI/CD pipeline.

The Quality Gates plugin supports the following user flow:

    • Create Quality Gate rules in ReportPortal
    • Start a test job in CI/CD with webhook
    • ReportPortal assesses launch quality using created rules
    • ReportPortal sends auto feedback to CI/CD tool with status Passed or Failed
    • Based on ReportPortal Feedback, CI/CD tool fails a build or promotes it to the next stage

    The second purpose of Quality Gate is to simplify interactions between a QA team and business stakeholders. Quality Gates provides a possibility to create business-friendly rules such as:

    • define required number of tests in a job
    • specify tests that should be executed (features, components)
• define a maximum allowable failure rate for executed tests
    • new failure of critical components
    • define a number of issues in executed tests
    • define a number of critical issues in the critical components
    • new failures & new errors in the build

    And it leads us to the third purpose of Quality Gates. It is a full-featured report on Quality Gates analysis results which helps to troubleshoot problems and fix them.

    - + \ No newline at end of file diff --git a/quality-gates/QualityRulesConfiguration/index.html b/quality-gates/QualityRulesConfiguration/index.html index 1b6fb6af7..a27d9c8d2 100644 --- a/quality-gates/QualityRulesConfiguration/index.html +++ b/quality-gates/QualityRulesConfiguration/index.html @@ -12,7 +12,7 @@ - + @@ -24,7 +24,7 @@ Failure rate = items with type STEP with status FAILED / ALL items with type STEP in the analyzed launch

    You can add only 1 "All tests failure rate" rule to 1 Quality Gate.

    Failure rate in a component/feature/etc

You can also track the failure rate of tests that belong to a feature, component, priority or other category in a launch. For that, tests in the analyzed launch should have attributes (e.g. feature: Payment, component: Payment, priority: critical, or any others).

Then you need to add a "Percent" rule with an attribute option:

    1. Open Project Settings> Quality Gate
    2. Click on the pencil on the Quality Gate
    3. Click on the drop-down: "Add a new rule"
    4. Choose the option "Percent"
    5. Choose option "Tests with attributes"
6. Add a % of max allowable failure rate - N%
    7. Click on the tick
    8. The rule is added to the Quality Gate

In this case, on finish, the system will automatically analyze the launch and compare the failure rate of tests with a specified attribute in the analyzed launch with the failure rate from the rule in the Quality Gate. If the failure rate is higher than specified in the rule, the system fails the rule and the Quality Gate.

    note

    How a failure rate is calculated

Failure rate for tests with an attribute = items with type STEP with status FAILED and with a specified attribute / ALL items with type STEP in the analyzed launch and with a specified attribute

    You can add several "Tests with attribute percent" rules to the Quality Gate. But it is impossible to create duplicates.

    Not passing rate in the launch or a component/feature/etc.

You can use the "Percent" rule with several options: Failed / Not passed.

    The failure rule is described in the previous sections.

    If you choose the "Not passed" option, the system will use another calculation method.

    note

How a not passed rate is calculated

Not passed rate = items with type STEP with status FAILED or SKIPPED / ALL items with type STEP in the analyzed launch

Not passed rate for tests with an attribute = items with type STEP with status FAILED or SKIPPED and with a specified attribute / ALL items with type STEP in the analyzed launch and with a specified attribute
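The four formulas above can be condensed into one sketch, where each STEP-level item is represented as a dict with a status and a set of attributes (the data shape is an assumption for illustration):

def rate(steps, bad_statuses, attribute=None):
    # Optionally narrow the scope to STEP items carrying a given attribute
    scope = [s for s in steps if attribute is None or attribute in s["attributes"]]
    bad = [s for s in scope if s["status"] in bad_statuses]
    return len(bad) / len(scope) if scope else 0.0

def failure_rate(steps, attribute=None):
    return rate(steps, {"FAILED"}, attribute)

def not_passed_rate(steps, attribute=None):
    return rate(steps, {"FAILED", "SKIPPED"}, attribute)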

    Amount of issues in the run

    Case 1: You want to track that the regression suite run should not have a critical issue or Product bugs.

    Case 2: Regression suite contains 500 tests with critical priority. You want to track that the run should not have critical issues or Product bugs (or any other) in these 500 tests.

    Amount of issues also has 2 options "All tests" and "Tests with the attribute". The purpose of the rule is to limit the number of unwanted defects in the run. With the option "All tests", you can restrict issues for all tests in the launch.

    With the option "Test with attributes", you can limit issues in the critical features, components, etc.

Amount of issues in the launch

    For adding this rule Project Manager or Admin should:

    1. Open Project Settings> Quality Gate
    2. Click on the pencil on the Quality Gate
    3. Click on the drop-down: "Add a new rule"
    4. Choose the option "Amount of issues"
    5. Choose the option "All tests"
    6. Choose a defect type: "Total Defect Types" or "Defect Type"
7. Add the max allowable number of issues - N
    8. Click on the tick
    9. The rule is added to the Quality Gate

On finish, the system will automatically analyze the launch and compare the number of specified defects in the analyzed launch with the number of issues in the "Amount of issues" rule in the Quality Gate. If the number of issues in the launch is higher than in the rule, the system fails the rule and the Quality Gate.

    note

    How a number of issues is calculated

If "Total Defect Types" is specified in the rule:

A number of issues in the test run = SUM of items with defect types which belong to the Defect type group

If "Defect type" is specified in the rule:

A number of issues in the test run = number of items with the specified defect type
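As a sketch, with each item represented by its defect type (the data shape is an assumption for illustration):

def issues_in_run(items, defect_type=None, defect_type_group=None):
    # "Defect Type": count items with one specific defect type;
    # "Total Defect Types": count items whose type belongs to the whole group.
    if defect_type is not None:
        return sum(1 for i in items if i["defect_type"] == defect_type)
    return sum(1 for i in items if i["defect_type"] in defect_type_group)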

Amount of issues in tests with an attribute

For this rule, a user can choose the option "Tests with an attribute". The logic for this rule is the same as for the rule "Amount of tests with an attribute".

    Allowable level of To investigate

    When you choose a rule "Amount of issues", the system automatically adds a parameter "Allowable To investigate level" to the Quality Gate.

    What does this parameter mean?

The purpose of the rule is to check and guarantee that the run does not contain the specified issues. But if a launch includes "To investigate" items, the system cannot complete the analysis and guarantee that the forbidden problems are absent in the launch.

    For this reason, we have added the parameter "Allowable To investigate level". By default, this parameter equals 0. But you can change this parameter and set your custom value.

To change it, a Project Manager or Admin can edit the Quality Gate:

    New failures in the run

    Case 1: Regression suite has 1000 tests. In the last released version, five tests failed in a regression suite. You want to track that the regression runs on the version in development should not have new failures.

    Case 2: Regression suite contains 500 tests with critical priority. In the last released version 1 test with critical priority failed. You want to track that critical tests in the regression run on the version in development should not have new failures.

The purpose of the rule is to block a run that has new failures compared to a chosen baseline.

New failures also has 2 options: "All tests" and "Tests with an attribute". To add this rule:

    1. Open Project Settings> Quality Gate
    2. Click on the pencil on the Quality Gate
3. Click on the drop-down: "Add a new rule"
4. Choose the option "New failures"
5. Choose the option "All tests"/"Tests with an attribute"
    6. Click on the tick
    7. The rule is added to the Quality Gate

In this case, when the launch finishes, the system automatically analyzes it and compares failed tests (or failed tests with the specified attribute) in the analyzed launch with tests in the baseline. It fails the rule if the system detects a new failure in the launch or in tests with the specified attribute.

How the rule works

    For defining test uniqueness, our continuous testing platform uses Test Case ID principles.

    note

For now, ReportPortal cannot process items with the same Test Case ID correctly.

    How to choose a Baseline for the "New failures" rule

    Default Baseline

By default, the system will use the previous launch for comparison. For example, for "Launch A #3", the system will use "Launch A #2" as a baseline. If there is no "Launch A #2" in the system (e.g., the launch has been deleted by a retention job), the system will use "Launch A #1".

If there is no fitting launch in the system, the "New failures" rule will get the status "Undefined".

    Customized Baseline

    If you want to choose other options for a baseline, you can do it:

Case | Fields configuration
You want to specify a static launch that should always be used | Select a launch name and add a launch number in the baseline section
You want to specify a dynamic launch | Select a launch name and check the "Latest" button *

* When you use "Latest", the system will use the latest launch with the specified launch name which has been run before the analyzed launch. If you want to narrow the baseline further, you can also add launch attributes. In this case, the system will use the latest launch with the specified launch name and attributes which has been run before the analyzed launch.

    New errors in the run

The “Unique errors” functionality with a new Quality Gates rule – New Errors – was implemented in version 5.7. This feature saves you time searching for and analyzing repeated errors in launches. The “New Errors” rule helps group errors into new and known ones and, for example, fail the build if there are new errors not seen previously.

    To begin using this functionality, you need to create a Quality Gate and add the “New Errors” rule. Please follow the steps below:

    1) Log in to ReportPortal as Admin/PM.

    2) Go to Project Settings.

    3) Select Quality Gates section.

    4) Click Create Quality Gate button.

    5) Enter Quality Gate Name and Analyzed Launch, then click Save button.

    note

If you want the Quality Gate not to run for all launches, you can adjust it to run only for launches with specific attributes. Click Add attribute and specify a key and value, e.g., browser.

In the example in the screenshot above, the Quality Gate will run for launches with the name “Test” and the attributes Browser: Chrome, Feature: Reporting, Device: MacBook.

    6) Click on the Add a new rule dropdown and select New Errors.

    Click the “confirm” icon.

    note

Please note that the “New Errors” rule can be created with the “All tests” condition only.

    Before running automation tests, make sure that “Quality Gates” feature is ON.

    Now everything is ready to use.

    7) Go to automation testing tool.

    8) Run autotests.

    9) Go to ReportPortal.

    10) Open the Launches section and click the Refresh button at the top.

    11) Verify the Quality Gates' status.

    Passed - there are no failures.

    Failed - there are new errors.

N/A - appears if the quality gate was created after a launch was finished, or there is no quality gate for this launch. If the status is Not Available, click on the “N/A” and then click the “Run Quality Gate” button (then “Refresh”).

12) To look at the failed test results, click on the Failed status and then click on the number in the Current column.

    You will be redirected to the Unique errors tab with a list of all new error logs of the launch. If you want to see known issues as well, open the All Unique Errors dropdown at the top and click the Known Errors checkbox.

    By default, a previous launch execution is used as a Baseline Launch for the Quality Gate. Besides, you can as well define any other launch by specifying its name and sequence number or select Latest for the prior run of the specified launch to be used as a baseline.

    To make these changes, click Edit Details on the Quality Gate page and uncheck the Choose a previous launch as a baseline checkbox.

    Follow the steps below depending on the preferable settings for the Baseline Launch.

In this way, you can compare the analyzed launch not only with its previous execution but also with another launch.

    - + \ No newline at end of file diff --git a/quality-gates/UploadQualityGateToReportPortal/index.html b/quality-gates/UploadQualityGateToReportPortal/index.html index d73d8f711..98ff5a3b1 100644 --- a/quality-gates/UploadQualityGateToReportPortal/index.html +++ b/quality-gates/UploadQualityGateToReportPortal/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Upload Quality Gate to ReportPortal

The default configuration of our continuous testing platform doesn't contain Quality Gates. To add this feature, you need to receive a link to the .jar file from ReportPortal.

Download the .jar file and upload it to ReportPortal. To do that, please perform the following actions:

• Log in to ReportPortal as an Admin
• Open Admin Page > Plugins
• Click the Upload button
• Add the .jar file to the Upload plugin modal
• Click the Upload button
• Reload the page

As soon as the plugin has been added to ReportPortal, a new Quality Gates tab will be added to the Project Settings.

On the All launches page, the system adds an "N/A" label to each launch. The "N/A" label means that the Quality Gate has not been run for the launch yet.

    - + \ No newline at end of file diff --git a/releases/Version23.2/index.html b/releases/Version23.2/index.html index f77560d18..f76f23b51 100644 --- a/releases/Version23.2/index.html +++ b/releases/Version23.2/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Version v.23.2

    1. What's Changed:

    • New feature - Monitoring:

    A new Event Monitoring interface has been introduced, providing administrators with a convenient means to review all activities at the Project level through the Search & Filter functionality within the Admin panel. Events at the instance level are not visible on the UI, but they are securely stored in the database. This data can be easily transmitted to SIEM systems for future monitoring and analysis.

    More details can be found via the link.

    • Project Activity Panel Adjustments:

    The list of project activities displayed in the Project Activity Panel has been expanded.

    More details can be found via the link.

    info

Please take into account that, starting from this point forward, all events will be stored in the database in a new format. Consequently, all events stored prior to version 23.2 will be deleted.

    • New feature – Delete Account:

    Now instance administrators can empower users to delete their accounts and obfuscate associated personal data.

    More details can be found via the link.

    • New feature – Personal Data Retention policy:

    ReportPortal now offers the option to set a retention period for collected personal data during instance configuration.

    More details can be found via the link.

    note

    Please note that Features 3 and 4 are configurable, giving you the flexibility to decide whether you want to use these features or not. If you choose to utilize them, you can configure them using environmental variables. Further details can be found in the respective documentation.

    • New feature – API Key:

    You can now generate as many API Keys as you need. You also have the ability to keep track of the creation date of API Keys and revoke any that are unused. Old tokens will still continue to function. Additionally, easy identification of the purpose of API Keys is facilitated through the use of prefixes.

    More details can be found via the link.

    • Gitlab CI integration Workaround:

    More details can be found via the link.

    2. Small updates:

    • “Load current step” functionality adjustments:

    Minor refinements have been applied to the "Load current step" functionality. Now, you can access the "Load current step" feature by hovering over a step.

    • Download file name changes:

    Attachment details and download format have been revised: files are now downloaded with the real file names.

    • Configuration examples updates:

    Configuration examples on the user profile page have been updated.

    3. Technical Improvements:

    • Storage layer now supports S3 storage, allowing data consolidation into a single bucket for the entire instance.

    • We’ve added postfix for bucket names in binary storage.

    • We’ve updated dependencies with security fixes: service-auto-analyzer and service-metrics-gatherer.

• The issue of slow Log View loading when a STEP has a complex structure with many nested steps has been resolved; loading is now up to 7x faster.

• Service-jobs stability has been improved when reporting logs with large stack traces.

• We’ve optimized the attachments cleaning mechanism, which allowed us to increase the default value of chunk_size 20 times: from 1000 to 20000 in the Docker Compose and Kubernetes deployments.

• Content Security Policy has been extended by adding .browserstack.com. Now you can embed videos from BrowserStack as markdown to ease troubleshooting of failed tests.

• The job for flushing demo data works as expected thanks to an SQL error fix.

• We’ve updated React to version 17 and its dependencies to reduce the number of vulnerabilities and enable a smooth transition to version 18.

    • Issues arising when service-api is starting (connected to bucket structure update or the binary storage type update) while there are integrations to external services like Jira have been resolved. Old logic for migrating integration salt has been removed.

    • Launches import via API is possible with additional parameters: name, description, attributes.

• Renamed notIssue parameter for launch import: for the endpoint POST /v1/{projectName}/launch/import, the parameter notIssue has been renamed to skippedIsNotIssue. The logic remains the same.
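
A sketch of the renamed parameter in use (the endpoint path follows the note above, with the /api prefix assumed per the API examples elsewhere in these docs; the archive name and headers are placeholders, and the file is assumed to be a JUnit XML archive):

curl --request POST \
--header "Authorization: Bearer <api_key>" \
--form "file=@junit-results.zip" \
"<report_portal_url>/api/v1/<project_name>/launch/import?skippedIsNotIssue=true"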

    4. Enhancements Based on Community Feedback:

• #1815, #1795, #957, #1644, #1590. All ReportPortal images now support multiple platforms: linux/amd64 and linux/arm64.

    • #1970. Deserialization issue has been fixed.

    5. Released versions

Service Name | Repository | Tag
Index | reportportal/service-index | 5.10.0
Authorization | reportportal/service-authorization | 5.10.0
UI | reportportal/service-ui | 5.10.0
API | reportportal/service-api | 5.10.0
Jobs | reportportal/service-jobs | 5.10.0
Migrations | reportportal/migrations | 5.10.0
Auto Analyzer | reportportal/service-auto-analyzer | 5.10.0
Metrics Gatherer | reportportal/service-metrics-gatherer | 5.10.0

    Migration guide

    - + \ No newline at end of file diff --git a/releases/Version3.3.2-1/index.html b/releases/Version3.3.2-1/index.html index 15dbaa088..c060290eb 100644 --- a/releases/Version3.3.2-1/index.html +++ b/releases/Version3.3.2-1/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version3.3.2/index.html b/releases/Version3.3.2/index.html index 61b3e1892..94ad6e046 100644 --- a/releases/Version3.3.2/index.html +++ b/releases/Version3.3.2/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version4.0.0/index.html b/releases/Version4.0.0/index.html index 3930fb384..3bb7f03cc 100644 --- a/releases/Version4.0.0/index.html +++ b/releases/Version4.0.0/index.html @@ -12,7 +12,7 @@ - + @@ -22,7 +22,7 @@
    Skip to main content

    Version 4.0.0

    Issues and features in milestone 4.0

    Migration Details

    • MAKE BACKUP

• IF YOUR MONGO IS INSTALLED ON A SEPARATE HOST, WE WOULD LIKE TO DRAW YOUR ATTENTION TO THE NEW URI FORMAT: RP_MONGO_URI=mongodb://localhost:27017. Please refer to the MongoDB documentation to get more details

• ElasticSearch has been introduced. Please make sure the vm.max_map_count kernel setting is defined as described in the official ES guide to prepare your environment (see the sketch after the commands below). Please also make sure you give the right permissions to the ES data folder (as described in the official guide):

    mkdir data/elasticsearch
    chmod g+rwx data/elasticsearch
    chgrp 1000 data/elasticsearch
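
For the vm.max_map_count kernel setting mentioned above, the official ES guide recommends at least 262144; a typical way to set it on the host (value taken from the ES documentation, verify against the guide for your ES version):

sysctl -w vm.max_map_count=262144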
    • Please, refer to the last version of docker-compose example for more details

    Agent Compatibility Details

    • TestNG: Versions of TestNG framework below 6.10 are not supported
    • JVM Clients v3 can be extended to support ReportPortal server v.4.

    New features

    • New version of Auto Analyzer

    Ru Video

    En Video

    • #48 IGNORE flag for AA to skip item next time;
    • #227 Boost human priority;
    • A label for auto-analysed test cases (AA) is added;
• An AA action is added to the test cases' history on a Log view;
    • Added a filter for test items with a label AA
    • Retries and Auto-Analyzer

    Improvements

    Github Improvements

    • #254 Added possibility to post bug with a correct link based on Unique ID to test item
    • #238 Added possibility to add a domain without a dot;
    • #230 Escaped logs filter term after refresh
    • #141 Added tags in e-mail body
    • #217 Added possibility to print dashboards (print CSS)
    • #212 Added search for "Add shared widget" window
    • #213 Added possibility to add an own shared widget on an own dashboard
    • #105 Changing a dropdown list on a slider to set a log level
    • #22 Added a filter by a launch number
    • #276 Keep logs and screenshots for a long period of time (forever)
    • #210 Image viewer not close
    • #189 Added feature "Copy result from previous run"
    • #133 ALL DASHBOARDS: Added possibility for List view
    • #250 Support for custom types of defects
    • #136 Added a filter for linked bugs
    • #119 Added test parameters on a Step and Log view
    • #26 LDAPS protocol support
    • #270 Report Portal Email Notification should have "link" configuration
    • #247 Get launch's URL using ReportPortal agent-java-testNG
    • #242 "Replace Comments to All Selected Items" should be checked only after a comment is typed

    Widgets Improvements

    Widgets refactoring:

    • Launch execution & issue statistics widget refactoring with C3.js library;
• Launch statistics line chart refactoring with C3.js library; Investigated percentage of launches refactoring with C3.js library;
    • Different launches comparison chart refactoring with C3 library;
    • Failed cases trend chart refactoring with C3 library;
    • Non-Passed test-cases trend chart with C3 library;
    • Test-Cases growth trend chart refactoring with C3 library;
    • Launches duration chart refactoring with C3 library;
• Charts for All launches, defect type page & launches table widget refactoring with C3.js library;
    • Refactoring of charts on Project Info page;

    Line chart widget improvements:

    • Combine line chart and trend chart together;
    • Added new zoom functionality on Line chart;
    • #232 Added possibility to combine custom defects type on a Line chart widget;

    The most failed test cases widget:

    • Changing a design;
    • Changing a mechanism of a results calculating (based on Unique ID);
    • Added a name of chosen defect type on a widget view;

    Other widget improvements:

#174 Widget silent update (saves actions with a legend after an auto-refresh); Added test parameters separately from description;

    Improvements on ReportPortal

    • Added Cheat Sheets to the documentation on ReportPortal (“Installation steps”);
    • Added possibility to correct and improve documentation on reportportal.io by our Users;
    • Added twitter widget;
    • Added YouTube widget;
    • Added a collapsing function for a documentation menu;
    • Added new section for easy downloading "Download"
    • Added Docker-compose.yml generator
• Added extended scheme of agents' working

    Minor Improvements

• History table is grouping test items by Unique ID;
• Added ALL to multi drop-down lists;
• Added clickable elements on Management board
• Removed match issue
• Added mechanism based on UID to Merge functionality
• Added "check All" to dropdown lists;

    Bug fixes

    Bugs

    • #249 Notification rule for launch in Debug
    • #4 Correct a link on Jenkins plugin
    • #268 Warning about an outdated browser
    • #218 system-out is not recognized when importing junit
    • #255 Invalid link for the test in the "FOUND IN" column
    • #322 Make startTestItemRQ in API 4.x case insensitive
• #317 Bad request. The importing file has invalid format: 'There are invalid xml files inside'
    • #314 Set up different "superadmin" password
    • #307 Cucumber Java Agent – Steps are sporadically missing from the test’s logs
    • #305 JBehave NPE if givenStory=true for root story
    • #281 system-out is not recognized when importing junit
    • #188 Error Message: Start time of child ['Wed Jul 19 12:53:49 UTC 2017'] item should be same or later than start time ['Wed Jul 19 12:53:49 UTC 2017'] of the parent item/launch '596f565d2ab79c0007b48b46' Error Type: CHILD_START_TIME_EARLIER_THAN_PARENT

    Agent bugs

    • #220 Cucumber-JVM: RP throws exception, when there is no features match the filter
    • #229 Unable to view logs for some test items
    • #3 Race condition failures: lost logs and failures
    - + \ No newline at end of file diff --git a/releases/Version4.1.0/index.html b/releases/Version4.1.0/index.html index 98e6e7151..0017bbc98 100644 --- a/releases/Version4.1.0/index.html +++ b/releases/Version4.1.0/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version4.2.0/index.html b/releases/Version4.2.0/index.html index ec41d3d85..4c7a562c6 100644 --- a/releases/Version4.2.0/index.html +++ b/releases/Version4.2.0/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Version 4.2.0

    Features

    • #417 Segregation of AA settings in a separate section
    • #417 Added a possibility to configure ML;
    • #417 Added a possibility to remove/generate ElasticSearch index (ML education);
• #381 Auto-Analysis: AA for the current launch (analogue of our previous feature "Match issue");
• #382 Auto-Analysis: Possibility to choose which items should be auto-analysed in the launch (with "To investigate", already auto-analysed, analysed manually); Documentation about auto-analysis is here
    • #366 Bulk operation for Unlink issues in BTS;

    Improvements

    • #103 HAR viewer for attached .har files;
    • #326 Clickable launch number on a history line;
    • #328 Clickable History table;
    • #329 Duration in format MM:SS
    • #384 #613 Option for "Tag" filter - "Not contain"
    • #339 OAuth App on GitHub requires the user scope instead of read:user
    • "Load issue" has been renamed to "Link issue"
• Added infinite session in full-screen mode (for using ReportPortal dashboards on screens)
    note

    Before using the last function, please visit Profile page for the auto-generation of API token.

    Bugs

    • #374 Logs with level Error (40 000) and higher are considered in ElasticSearch
    • #376 Unnecessary logging of all items in the run in case if run cannot be completed
    • #371 Unable to connect ldap
    • #251 Internal Server Error if no external system id is specified
    • #292 Embedded cucumber log attachments are displayed incorrectly via reportportal
    • #380 Issue with retry: negative statistics #380
    • Fixed bug with disappearing numbers on mobile version of All launches

    All issues and features are in milestone 4.2

    - + \ No newline at end of file diff --git a/releases/Version4.3/index.html b/releases/Version4.3/index.html index 2dec50ebf..1683d634f 100644 --- a/releases/Version4.3/index.html +++ b/releases/Version4.3/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

Version 4.3

    Features

• #442 Added a link to the test item which has been used by the Analyzer for a decision in the History of Actions (Log view);

    • Previous Analyzer is back (choose pre-set Classic)

    Improvements

    • #452 Added possibility to share with a quick filter on All launches by link;
    • #427 Allowed to set autoAnalyzed flag via API;
    • #375 Added option with/without methods to Most failed/ flaky tests cases widgets;
    • Performance optimization for Latest Launches;

    Bugs

    • #440 Not getting any data on widget when selecting Start Time in filter
    • #436 Jira integration: Bug priority is always Minor

    All issues and features are in milestone 4.3

    - + \ No newline at end of file diff --git a/releases/Version5.0.0/index.html b/releases/Version5.0.0/index.html index 9cd0c88e6..b579001c7 100644 --- a/releases/Version5.0.0/index.html +++ b/releases/Version5.0.0/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Version 5.0.0

Finally, we are glad to introduce a new release of ReportPortal. In this version we have:

    • Migrated to PostgreSQL
    • Migrated to React JS
    • Removed Consul
• Introduced RabbitMQ for asynchronous reporting
    • Introduced a plugin system
    • Improved performance x3

    Installation details

    Plugins

Jira, Rally, and SauceLabs integrations now work through the plugin functionality. You need to download the latest JAR package from our Bintray repository and drag-n-drop it to ReportPortal in the Administrative section -> Plugins.

    Read more here.

    Available plugins to download:

    Brand new features

    • #275 #639 Nested steps
    • #348 Integration with SauceLabs
    • #673 #486 Pattern analysis
• #675 Auto-updated widgets based on launch attributes (Cumulative trend chart, Component Health Check)
    • 15 sub-defects per defect type
    • #41 Sub-defects for “To investigate”
    • Replace tags with attributes (attributes = key:value)
    • #680 Implement a plugin system for integration with external systems (JIRA, Rally, SauceLabs, E-mail server)
    • A possibility to change status of test item
• A view with a test items list from a different launch (clickable area for Overall statistics, Component Health Check widget)

    Reporting updates

    • Reduced restriction for synchronous reporting
    • Fully asynchronous reporting with api/v2
    • #275 #639 Reporting with Nested steps (see section Agents Updates)
    • #526 #444 Logs/attachments for launch level (see section Agents Updates)
    • #571 #451 Finish launch with populated status
• #670 Interruption of children “in progress” when a parent has been finished
    • #671 #548 Reporting test code reference (see section Agents Updates)
    • #490 Reporting test into finished Parent Item
    • A possibility to report tests with Test Case ID (ID number from your test management system)

    Small and nice updates

    • #453 Launch description and attributes on the suite view
    • #606 Markdown on the Log page (no need in !!!MARKDOWN MODE!!!)
    • Possibility to configure integrations (Jira/Rally/ E-mail-server/ SauceLabs) per project and for the whole instance
    • #618 Increase limitation for step name length from 256 to 1024
    • #457 Increase password length
    • Auto launch deleting
    • Increase the number of launches for widgets from 150 to 600

    Bug fixing

    • #522 Defect comment is not updated during deleting
    • #542 Issue with History line
    • #542 Unable to create a link to the result log page - page keeps reloading
    • #563 DOM XSS in rp.epam.com
    • #564 There is no possibility to enter login/password for email settings with RU local

    Administrative page updates

    • New design
    • Possibility to filter projects and users
    • Plugin system
    • #364 Possibility to delete personal projects

    Agents update

    • Reporting with Nested steps (already updated TestNG)
    • Logs/attachments for launch level (already updated TestNG)
    • Reporting test code reference (already updated Java-based agents)
    • Reporting test case ID (already updated Java-based agents)

    Test frameworks integration

• The majority of v4 test framework integrations (agents) are supported by ReportPortal v5.0 via backward compatibility, but they do not use the latest features, capabilities, and performance upgrades (Nested Steps, re-runs, re-tries, etc.)
• Please take the latest agents starting with 5.* in order to get full support of RPv5 features (work in progress, agents will be released soon)

    Dev guides

    - + \ No newline at end of file diff --git a/releases/Version5.0RC/index.html b/releases/Version5.0RC/index.html index a1cce878f..f0ff280b9 100644 --- a/releases/Version5.0RC/index.html +++ b/releases/Version5.0RC/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Version 5.0RC

    This is the Beta release.

    Brand new features

    • #275 #639 Nested steps
    • #348 Integration with SauceLabs
    • #673 #486 Pattern analysis
    • #675 Widget based on launch attributes (Cumulative trend chart)
    • 15 sub-defects per defect type
    • #41 Sub-defects for “To investigate”
    • Replace tags with attributes (attributes = key:value)
    • #680 Implement a plugin system for integration with external systems (JIRA, Rally, SauceLabs, E-mail server)

    Reporting updates

    • Reduced restriction for synchronous reporting (beta)
    • Fully asynchronous reporting (beta)
    • #275 #639 Reporting with Nested steps (see section Agents Updates)
    • #526 #444 Logs/attachments for launch level (see section Agents Updates)
    • #571 #451 Finish launch with populated status
• #670 Interruption of children “in progress” when a parent has been finished
    • #671 #548 Reporting test code reference (see section Agents Updates)
    • #490 Reporting test into finished Parent Item

    Small and nice updates

    • #453 Launch description and attributes on the suite view
    • #606 Markdown on the Log page
    • Possibility to configure integrations (Jira/Rally/ E-mail-server/ SauceLabs) per project and for the whole instance
    • #618 Increase limitation for step name length from 256 to 1024
    • #457 Increase password length
    • Auto launch deleting
    • Increase the number of launches for widgets

    Bug fixing

    • #522 Defect comment is not updated during deleting
    • #542 Issue with History line
    • #542 Unable to create a link to the result log page - page keeps reloading
    • #563 DOM XSS in rp.epam.com
    • #564 (BUG) There is no possibility to enter login/password for email settings with RU local

    Administrative page updates

    • New design
    • Possibility to filter projects and users
    • Plugin system
    • #364 Possibility to delete personal projects

    Agents update

    • Reporting with Nested steps (already updated TestNG)
    • Logs/attachments for launch level (already updated TestNG)
    • Reporting test code reference (already updated Java-based agents)

    Integration with Java Test Frameworks

    Dev guides

    - + \ No newline at end of file diff --git a/releases/Version5.1.0/index.html b/releases/Version5.1.0/index.html index 5fadb1fc7..7fedfe01e 100644 --- a/releases/Version5.1.0/index.html +++ b/releases/Version5.1.0/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Version 5.1.0

Important: We are constantly improving ReportPortal, and in version 5.1 we have changed the way we encrypt your personal data. Please be aware that for successful interaction with version 5.1 you need to change your password at the first login. Please read the instructions below.

    Thank you for being with us, ReportPortal team

    Brand new features

• #893 Improved ML in Auto-Analysis 2.0
    • #894 History line improvements
    • #896 History table for the whole Launch/Filter
    • #899 Possibility to compare results from: Launch/Launch; Launch/Filter; Filter/Filter
    • Java 11 introduced

    Reporting updates

    • #895 Explicit declaration of Test Case ID: Possibility to report execution with Test Case ID from your Test Case Management system

    Small and nice updates

    • #586 Clickable area for widgets:
      • Overall statistics bar view
      • Failed case trend chart
      • Non-passed test cases trend chart
      • Passing rate per filter
      • Cumulative trend chart
      • Most popular pattern table (TOP-20)
    • Added a launch UUID in the modal "Edit Launch"
    • Added a possibility to resize the Cumulative trend chart widget.
    • Refactored auto-complete component
    • Introduced Java 11 for API
    • #744 Migrated to Traefik 2

    Bug fixing

    • Added a considering of nested steps logs in an auto-analysis procedure
    • #741 Added a possibility to collapse/expand additional launch info on Launches Table widget.
    • #870 Fixed a possibility to send link to BTS on an item finish
    • #447 Fixed launch inactivity scripts
    • Fixed launch/attachments/screenshots deleting scripts
    • #746 Backward compatibilities for !!!MARKDOWN!!! in logs
    • #768 Added markdown support for video links in logs
    • #740 Fixed a possibility to view a full launch name in widgets tooltips
    • #749 Fixed a scroll in full-screen mode for widgets

    Known issues

    LDAP returns Code 500 when the integration configuration is not correct

    How to migrate to the latest version 5.1

    Details can be found here

    - + \ No newline at end of file diff --git a/releases/Version5.2.0/index.html b/releases/Version5.2.0/index.html index 51f3501a9..7e143f9fa 100644 --- a/releases/Version5.2.0/index.html +++ b/releases/Version5.2.0/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version5.2.1/index.html b/releases/Version5.2.1/index.html index e2783eabd..9a2259d03 100644 --- a/releases/Version5.2.1/index.html +++ b/releases/Version5.2.1/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version5.2.2/index.html b/releases/Version5.2.2/index.html index 636041a6e..55003a9d2 100644 --- a/releases/Version5.2.2/index.html +++ b/releases/Version5.2.2/index.html @@ -12,7 +12,7 @@ - + @@ -22,7 +22,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version5.2.3/index.html b/releases/Version5.2.3/index.html index 0975e8cf6..bf9548083 100644 --- a/releases/Version5.2.3/index.html +++ b/releases/Version5.2.3/index.html @@ -12,7 +12,7 @@ - + @@ -22,7 +22,7 @@
    Skip to main content

    Version 5.2.3

    Bugs

    #950 Service API PostgreSQL DB locks on SELECT queries Performance improvements Small UI fixes

    - + \ No newline at end of file diff --git a/releases/Version5.3.0/index.html b/releases/Version5.3.0/index.html index ed2a8779d..d455cb841 100644 --- a/releases/Version5.3.0/index.html +++ b/releases/Version5.3.0/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Version 5.3.0

    Brand new features

    #269 Component Health Check Widget (table view) #877 SAML auth

    Small and nice updates

    • Filter by retried items on step level view
• Purging jobs delete logs/attachments/launches by start time, not lastModified time
    • #1005 Auto-Analysis: Add a boost for items with the latest Defect Type
    • Parent line recalculation on the step view
• Refine on the Step view: Has retries / Hasn't retries
• Filter description limit is increased to 1500 characters

    Bug fixing

    • Github login does not work for users with private membership in the organization
    • #773 Service-API errors when the user does not have a photo
    • Wrong order during retry reporting
    • Reduced number of requests to BTS
    • Performance fixes
    - + \ No newline at end of file diff --git a/releases/Version5.3.1./index.html b/releases/Version5.3.1./index.html index 86b6e6ca2..468a8d93f 100644 --- a/releases/Version5.3.1./index.html +++ b/releases/Version5.3.1./index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version5.3.2/index.html b/releases/Version5.3.2/index.html index 8cd60b094..02f481a85 100644 --- a/releases/Version5.3.2/index.html +++ b/releases/Version5.3.2/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version5.3.3/index.html b/releases/Version5.3.3/index.html index a63da5808..5f0061f57 100644 --- a/releases/Version5.3.3/index.html +++ b/releases/Version5.3.3/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Version 5.3.3

    New features

    • Increased number of launches for Cumulative trend chart from 600 to 10 000 launches

    Small and nice updates

    • Added expanded failed items by default. Improved an alignment of arrows and step names on Log view
    • Launches. Log view. Attachment section. Gallery improvement for reducing the number of clicks
    • Added a possibility to add attributes to refine rather than replace them
    • [UI] Show first 5 lines for defect comment instead of 2 first lines on Step view and on Log view
    • Increased a description for widgets/dashboards/filters to 1500 symbols

    Analyzer improvements

    • Analyzer. Add more options for log lines in settings
    • Added a boosting feature for the similarity between log lines with more important namespaces

    Bugfixing

    • Fixed: #867 Very poor scroll performance
    • Fixed: #1128 Wrong password in the email letter when adding a new user
    • Fixed: #1182 Year in footer copyright text is not up to date
    • Fixed #857 Total Failed count is not matched with Test cases after Merging the multiple Launches [ deep merge]
    • Fixed: #911 Widget table column width (Unique bugs table)
    • Fixed: #871 Launch duration chart label "seconds"
    • Fixed: #1050 Add UUID data to TestItem Controller when querying using filters
    • Fixed: #1184 No history of test-items with defect
    • Fixed: History. Compare functionality. The custom column has items from 1st execution instead of the latest one
    • Fixed: 'Add new widget' from Launches page in case no dashboards on the project
    • Fixed: Only one attribute is returned for the launches in 'Launches table' widget
    • Fixed: Only the first 12 attachments are displayed in 'Attachments' section
    • [Performance] Issue with DB CPU utilization of "Flaky" widget query
    • Fixed unclassified errors from inserts of issues for failed items
    - + \ No newline at end of file diff --git a/releases/Version5.3.5/index.html b/releases/Version5.3.5/index.html index 13ef5f09f..0a8720a0e 100644 --- a/releases/Version5.3.5/index.html +++ b/releases/Version5.3.5/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version5.4.0/index.html b/releases/Version5.4.0/index.html index 9a4e0f432..0d70cf6f5 100644 --- a/releases/Version5.4.0/index.html +++ b/releases/Version5.4.0/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version5.5.0/index.html b/releases/Version5.5.0/index.html index 8f0c178c6..334f6c12f 100644 --- a/releases/Version5.5.0/index.html +++ b/releases/Version5.5.0/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version5.6.0/index.html b/releases/Version5.6.0/index.html index a22a9a4f0..85a3029c8 100644 --- a/releases/Version5.6.0/index.html +++ b/releases/Version5.6.0/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content

    Version 5.6.0

    Migration guide

    New features

    • #1094 Filter by Attributes

    Small and nice update

    • Change status from the Log View
    • Add Test Case ID to the log view

    Analyzer updates

    • Cleaning Job for Elastic Search
    • "All big log messages should match
    • Add the to investigate group except To Investigate itself to be used in Auto-analysis and suggestions
    • Add a field with test item name while indexing logs/auto-analysis/suggestions
• Add a configuration to strictly check all log messages during analysis
• When demo data is generated, the user should trigger "generating index" for the analyzer themselves
    • Adding launch name/message info into custom retrained models
    • Add checking by Exceptions and releasing the min should match for search/cluster operations to 95%
    • Changed the logic for No defect and custom TI defect types for auto-analysis
• Created logic for searching by similar TI items, analogous to the AA logic, for treating short/long messages
• Users complained that some older test items could still be used by Auto-analysis, so discounting of the ES scores has been added for items farther from the start_time of the test item

    Bugfixing

    • Performance fixes
    • Fixed issue with deadlocks by retries refactoring
    • #1474 SAML. Add "callbackUrl" field to SAML configuration
    • Failed cases trend chart. Send only "statistics$executions$failed" field in "contentFields"
    • #1502 Email configurations. Change field "Username" to "Sender email"

    What's Changed

    - + \ No newline at end of file diff --git a/releases/Version5.6.1/index.html b/releases/Version5.6.1/index.html index 90821b16f..6e0ef56e6 100644 --- a/releases/Version5.6.1/index.html +++ b/releases/Version5.6.1/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version5.6.2/index.html b/releases/Version5.6.2/index.html index 1cac9a980..8e59022da 100644 --- a/releases/Version5.6.2/index.html +++ b/releases/Version5.6.2/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version5.6.3/index.html b/releases/Version5.6.3/index.html index 60eef115c..8fadf51e4 100644 --- a/releases/Version5.6.3/index.html +++ b/releases/Version5.6.3/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version5.7.0/index.html b/releases/Version5.7.0/index.html index d28fb1c82..67d5446b5 100644 --- a/releases/Version5.7.0/index.html +++ b/releases/Version5.7.0/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Version 5.7.0

    New Features:

    A possibility to see all unique errors for a launch
    #1268. Support of Azure SAML authorization

    New Plugins:

    Small and Nice Updates:

    • New design for Make decision modal
    • Help & Support functionality for newly deployed instances
    • Additional configuration for Similar “To Investigate” functionality (“Min Should Match”)
    • Default State for Auto-Analysis is ON

    Bugfixing:

    • New logic for removing widget has been implemented (deleting a parent widget doesn’t delete the child widget)
    • #1603. Attributes. Error on cancel edit common attributes in "Edit items" modal
    • #1181. Most Failed Tests and Most Flaky Tests widgets: wrong time is shown
    • #1606. Component Health Check Widget not working after Upgrade
    • #1616. Component health check (table view) widget for HotProd filter does not load results and keeps spinning

    Performance Improvements:

    • 3x improved performance of project index generation for Auto-Analysis
    • Refactored and optimized retry items processing
• Increased Auto-Analysis performance by updating the communication interface between API and ANALYZERS
    - + \ No newline at end of file diff --git a/releases/Version5.7.1/index.html b/releases/Version5.7.1/index.html index 9615e68dc..c74a3add5 100644 --- a/releases/Version5.7.1/index.html +++ b/releases/Version5.7.1/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    Skip to main content
    - + \ No newline at end of file diff --git a/releases/Version5.7.2/index.html b/releases/Version5.7.2/index.html index 2e4c123c7..e3ab848a6 100644 --- a/releases/Version5.7.2/index.html +++ b/releases/Version5.7.2/index.html @@ -12,7 +12,7 @@ - + @@ -26,7 +26,7 @@ Along with version 5.8 we will distribute migration script and instructions for data migration. So that you can easily migration from early 5.x version. The reason of the switch and performance results will be a subject of separate article. In a few words: it reduces the DB footprint in almost x10 times, improves speed of logging, and minimizes computation power to clean-up data. And also brings Full text search capabilities.

    New Contributors

    Full Changelog: 5.7.1...5.7.2

    Bugfixing:

A bug related to filtering by attributes with "any" and "without any" conditions was fixed

    - + \ No newline at end of file diff --git a/releases/Version5.7.3/index.html b/releases/Version5.7.3/index.html index e99d0ed15..4f893d25f 100644 --- a/releases/Version5.7.3/index.html +++ b/releases/Version5.7.3/index.html @@ -12,7 +12,7 @@ - + @@ -23,7 +23,7 @@ Buttons “Show”, “Next” and “Previous” error logs minimize user’s efforts of scrolling across all the available logs.
  • Improved lazy loading: the number of pre-loaded logs is 300 which helps to get a better understanding of preconditions to a certain error. What is more, now it’s possible to load 300 more logs or load all the current step at once.
  • “Stack trace” in log messages now loads all the Error Logs. Besides, with the help of “Jump to” button on the Error Log it’s possible to switch to this Error Log displayed in the “All Logs” view.
  • Small and nice updates:

    #3109 Launch and test item description limits have been increased to 2048. Improved description view on all the “Launches” pages means that now even more useful links, artifacts, OKRs, etc can be stored in the description.

    Bugfixing:

We have refactored the double-entry saving of logs to Elasticsearch by changing the index type: logs are now saved in indexes per project instead of indexes per launch. This helps preserve Elasticsearch performance with regard to other operations and the data we process via Elasticsearch. More details can be found via the link.

    Technical updates:

    CVE addressed:

    - + \ No newline at end of file diff --git a/releases/Version5.7.4/index.html b/releases/Version5.7.4/index.html index fd2d4f0d2..b7fa2e6ad 100644 --- a/releases/Version5.7.4/index.html +++ b/releases/Version5.7.4/index.html @@ -12,7 +12,7 @@ - + @@ -23,7 +23,7 @@ Starting from ReportPortal 5.7.4 you can either use AWS S3 directly or continue with your existing MinIO as object storage.

    1. IMPORTANT:

    Please, don’t forget to update ElasticSearch config. We've enabled logs double entry by default. Thus, it is important to review your ElasticSearch setup. Please, read carefully through this migration guide in order to avoid performance pitfalls.

    - + \ No newline at end of file diff --git a/releases/Versionv23.1/index.html b/releases/Versionv23.1/index.html index da2572bee..c4c66d301 100644 --- a/releases/Versionv23.1/index.html +++ b/releases/Versionv23.1/index.html @@ -12,7 +12,7 @@ - + @@ -36,7 +36,7 @@ The number of items in Most failed test-cases table (TOP-20) has been increased from 20 to 50.
  • #1618. [v5] Okta SSO login is not possible.
  • #1790. URL links to dashboard are not loading the widgets.
  • #1868. Client exception with client-javascript, Request failed with status code 400: The maximum length of Attribute Key and Value has been increased to 512 characters.
  • #1891. Cannot report test results to the project with "demo" name.
  • #1912. Cloud Jira Integration Post Issue is not Coming.
  • #3132. Component Health Check - Needs to wrap text: The table now includes a new column named 'Name'. Hovering over the value in the table will display a tooltip with the full text of the value.
4. CVE addressed:

5. Migration guide: https://github.com/reportportal/reportportal/wiki/Migration-to-ReportPortal-v.23.1

6. Released versions

Service Name | Repository | Tag
Index | reportportal/service-index | 5.8.0
Authorization | reportportal/service-authorization | 5.8.0
UI | reportportal/service-ui | 5.8.0
API | reportportal/service-api | 5.8.0
Jobs | reportportal/service-jobs | 5.8.0
Migrations | reportportal/migrations | 5.8.0
Auto Analyzer | reportportal/service-auto-analyzer | 5.7.5
Metrics Gatherer | reportportal/service-metrics-gatherer | 5.7.4
    - + \ No newline at end of file diff --git a/reportportal-configuration/ComponentsOverview/index.html b/reportportal-configuration/ComponentsOverview/index.html index 5f8d23a61..ed5f55466 100644 --- a/reportportal-configuration/ComponentsOverview/index.html +++ b/reportportal-configuration/ComponentsOverview/index.html @@ -12,7 +12,7 @@ - + @@ -26,7 +26,7 @@ below and creates internal OAuth2 token which is used by UI and agents. There are two types of tokens:
  • UI (expiring token)
• API - non-expiring token, intended for use on the agent side
Analyzer Service

    Keeps index of user logs per project and provides ability to perform search by that index. Used by auto-analysis functionality.
    Collects and processes the information, then sends it to Elasticsearch.

    Migrations Service

Database migrations are written in Go. Migrate reads migrations from sources and applies them in the correct order to a database.

    Index Service

The Index service handles requests that do not match any pattern of the other services. It also aggregates some information/health data from other services to provide the UI with that information.

    UI Service

Serves all static content for the user interface.

    - + \ No newline at end of file diff --git a/reportportal-configuration/CreationOfProjectAndAddingUsers/index.html b/reportportal-configuration/CreationOfProjectAndAddingUsers/index.html index 902f0989c..602af9420 100644 --- a/reportportal-configuration/CreationOfProjectAndAddingUsers/index.html +++ b/reportportal-configuration/CreationOfProjectAndAddingUsers/index.html @@ -12,7 +12,7 @@ - + @@ -44,7 +44,7 @@ Via "Administrate" section, only ADMINISTRATOR can unassign users. Via Project Space ADMINISTRATOR and PROJECT MANAGER can unassign users.

    note

Please do not forget to review project roles on a regular basis. We recommend doing it at least quarterly.

Depending on the project needs, the assignment could be removed. To unassign the user from the project, perform the following steps:

    1. Navigate to the "Administrate" section -> "All Users" page.
    2. Find a user and their project in the "Projects and roles" column.
    3. Click on the name of project.
    4. Click on "Cross" icon near the needed project.
5. Confirm the action. The user will be unassigned from the current project but will stay in the system.
    note

Users cannot be unassigned from their own personal projects.

    - + \ No newline at end of file diff --git a/reportportal-configuration/HowToGetAnAccessTokenInReportPortal/index.html b/reportportal-configuration/HowToGetAnAccessTokenInReportPortal/index.html index f90e3eb35..a2f5bdecf 100644 --- a/reportportal-configuration/HowToGetAnAccessTokenInReportPortal/index.html +++ b/reportportal-configuration/HowToGetAnAccessTokenInReportPortal/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    How to get an access token in ReportPortal

    There are two ways to authorize in the ReportPortal API:

    1. Authorization with user’s login and password

    This is the main and recommended way to get access to ReportPortal API.

    Make the following HTTP request to get user's access token using login and password:

    POST <report_portal_url>/uat/sso/oauth/token

    BODY with parameters:
    grant_type: password
    username: <username>
    password: <password>

    Or you can use the following curl request:

    curl --header "Content-Type: application/x-www-form-urlencoded" \
    --request POST \
    --data "grant_type=password&username=<username>&password=<password>" \
    --user "ui:uiman" \
    <report_portal_url>/uat/sso/oauth/token

    Then you will receive a response with an access token:

    {
    "access_token": <access_token>,
    "token_type": "bearer",
    "expires_in": <token_lifetime_ms>,
    "refresh_token": <refresh_token>,
    ...
    }

    Now you can use “access_token” with any request to API by sending it as HTTP Authorization header:

    HTTP Header:
    Authorization: Bearer <access_token>
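
For example, a request listing the launches of a project could look as follows (a sketch: /api/v1/<project_name>/launch is the standard launches endpoint, and <project_name> is a placeholder for your project):

curl --header "Authorization: Bearer <access_token>" \
<report_portal_url>/api/v1/<project_name>/launch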

    2. Authorization with user's API Key (for agents)

    Another method involves using the API Key found on the user's Profile page.

    The API Key is a unique token that grants access to the ReportPortal REST API.

To use it, log in and navigate to the Profile page, then find the API Keys tab. If you’ve previously integrated agents using UUID, it has been converted to a Legacy API Key at the moment of migration to the newest version, and it remains valid and operational.

    note

    It will continue to work even if you generate new API Keys.

    Thus, you can use several API keys at the same time. And revoke unused, expired or publicly exposed keys.

    To generate a new API key in the ReportPortal app, click on the "Generate API Key" button.

    You are free to assign any name to this API key, as long as it is unique and consists of 1 to 40 characters. Keep in mind that duplicate API key names are not allowed.

    The system will automatically prefix the generated API key with its assigned name for easy identification. If the API key name contains spaces or underscores, these will be replaced by hyphens in the API key prefix.

    It's crucial to understand that the API key will be visible only at the point of creation. We strongly recommend copying and securely storing it for future use, as it will be impossible to retrieve later. This practice aligns with stringent security measures and standards.

    You have the ability to create multiple API keys for various purposes, such as for automation or for integration with third-party services. However, it's important to note that all API keys generated from the user's Profile page are functionally equivalent from the permissions standpoint.
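
For instance, a test automation agent typically consumes the API Key from its configuration. A sketch for a Java-based agent's reportportal.properties (property names such as rp.api.key vary between agents and versions, so verify them against your agent's documentation):

rp.endpoint = <report_portal_url>
rp.api.key = <api_key>
rp.project = <project_name>
rp.launch = <launch_name>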

    Users can also revoke an API key at any time. Upon revocation, all related information will be removed from the database, and the revoked API key will no longer be usable.

    An API key functions similarly to a regular token. When making requests to the ReportPortal API, simply include it in the HTTP Authorization header as follows:

    HTTP Header:
Authorization: Bearer <api_key>

    note

    Please be aware that this type of token is specifically designed for use by ReportPortal client tools (agents). We do not recommend using it to provide direct access to API endpoints.

    - + \ No newline at end of file diff --git a/reportportal-configuration/IntegrationViaPlugin/index.html b/reportportal-configuration/IntegrationViaPlugin/index.html index 4f018f539..a6bccde9e 100644 --- a/reportportal-configuration/IntegrationViaPlugin/index.html +++ b/reportportal-configuration/IntegrationViaPlugin/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@
    Skip to main content

    Integration via plugin

    Users can reinforce ReportPortal with adding additional integrations with:

If you want to integrate ReportPortal with these external systems and you cannot find the needed tab in the Project Settings, please check the Plugins section in the documentation.

    Integration configurations can be added on the global level (for all projects on the instance) in the Administrate section or the project level (only for one project) on Project Settings.

If your project needs a different configuration than other projects have, or you want to integrate only your project with an external system, perform the following actions:

    1. Log in to ReportPortal as PROJECT MANAGER or ADMINISTRATOR
    2. Go to Project settings > Integrations
    3. Click on one of the integration panels
4. Click the "Unlink and setup manually" button

    By this action, you unlink the current project from the global settings and configure your integration.

    note

If you unlink the project settings and an ADMIN changes the global settings for the whole instance, your project will keep using its own project settings.

To return to the global settings, click the "Reset to global settings" button. In this case, your settings will be deleted, and the integration will use the global settings.

You can always reset to the global settings.

    - + \ No newline at end of file diff --git a/reportportal-configuration/ProjectConfiguration/index.html b/reportportal-configuration/ProjectConfiguration/index.html index c75a159c9..1950d559b 100644 --- a/reportportal-configuration/ProjectConfiguration/index.html +++ b/reportportal-configuration/ProjectConfiguration/index.html @@ -12,7 +12,7 @@ - + @@ -44,7 +44,7 @@ MANAGER project role.

    Project role

    Every user is given a specific Project role within a specific project.

Depending on the role, the user is or is not able to perform certain actions. For more details, please see the "Permissions map".

    There are 4 possible Project roles in ReportPortal:

    note

    The ADMINISTRATOR has all privileges on the project.

    Edit project role

    To edit the project role, perform the following steps:

    1. Login into the ReportPortal as a user with PROJECT_MANAGER project role.
    2. Navigate to the "Project Members" page on the left menu.
3. Select a new value from the "Project Role" drop-down for the user. The new project role will be automatically saved.

    Unassign user from the project

Depending on the project needs, the assignment could be removed. To unassign the user from the project, perform the following steps:

    1. Login into the ReportPortal as a user with PROJECT_MANAGER project role.
    2. Navigate to the "Project Members" page on the left menu.
    3. Find the required member.
    4. Click the 'Unassign' button for the user.
    5. Confirm the action in the popup.
    6. The user will be unassigned from the current project but will stay in the system.
    note

The invite user, assign/unassign internal user to/from the project, and change user's role on a project actions can be performed only on a user with a similar or lower role.

    - + \ No newline at end of file diff --git a/reportportal-configuration/ReportPortalJobsConfiguration/index.html b/reportportal-configuration/ReportPortalJobsConfiguration/index.html index b3fdee5ac..bbf21bdf1 100644 --- a/reportportal-configuration/ReportPortalJobsConfiguration/index.html +++ b/reportportal-configuration/ReportPortalJobsConfiguration/index.html @@ -12,7 +12,7 @@ - + @@ -26,7 +26,7 @@ and removes binaries from the file storage Environment variables for configuration with default values:

    Project binary storage size recalculation job

The project binary storage size recalculation job updates the allocated storage value of the project based on the attachments existing at the moment. Environment variables for configuration with default values:

    - + \ No newline at end of file diff --git a/reportportal-configuration/authorization/ActiveDirectory/index.html b/reportportal-configuration/authorization/ActiveDirectory/index.html index eb62673d1..d49473baa 100644 --- a/reportportal-configuration/authorization/ActiveDirectory/index.html +++ b/reportportal-configuration/authorization/ActiveDirectory/index.html @@ -12,7 +12,7 @@ - + @@ -23,7 +23,7 @@ Click the 'Submit' button. All users of Active Directory will have access to the ReportPortal instance.
To log in to ReportPortal, the user should use their domain credentials (login and password).

Please find below an example of a Microsoft Active Directory configuration that worked successfully, provided by one of our users:

    Table with properties and values for LDAP Microsoft Active Directory

Property                Value
Url                     auth-servers.domain.org.int:3358
Base DN                 OU=MAIN,DC=DOMAIN,DC=ORG,DC=INT
Manager DN              cn=Service UserBind,ou=Service Accounts,ou=Colombia,ou=America,ou=ServiceAccounts,dc=DOMAIN,dc=ORG,dc=INT
User search filter      (&(objectClass=user)(sAMAccountName={0}))
Password encoder type   NO
Email attribute         mail
Full name attribute     displayName
Photo attribute         thumbnailPhoto
    - + \ No newline at end of file diff --git a/reportportal-configuration/authorization/GitHub/index.html b/reportportal-configuration/authorization/GitHub/index.html index 587357b9e..22087e4d3 100644 --- a/reportportal-configuration/authorization/GitHub/index.html +++ b/reportportal-configuration/authorization/GitHub/index.html @@ -12,7 +12,7 @@ - + @@ -25,7 +25,7 @@ Fill in the form:

    'Client Id': 8767988c424a0e7a2640
    'Client Secret': ef22c9f804257afaf399a2dada7c8f22dee5fd1b
    'Organization Name': reportportal

Click the 'Submit' button. A confirmation message should be shown in the status bar, and a 'Login with GitHub' button will appear on the login form.

    - + \ No newline at end of file diff --git a/reportportal-configuration/authorization/LDAP/index.html b/reportportal-configuration/authorization/LDAP/index.html index 1636ff0f5..21d655dbd 100644 --- a/reportportal-configuration/authorization/LDAP/index.html +++ b/reportportal-configuration/authorization/LDAP/index.html @@ -12,7 +12,7 @@ - + @@ -22,7 +22,7 @@

    LDAP

    LDAP plugin is available in ReportPortal on the Plugins page.

    To set up access with LDAP:

1. Log in to ReportPortal as an ADMIN user.
2. Open the list to the right of the user's image.
3. Click the 'Administrative' link.
4. Click 'Plugins' in the left-hand sidebar.
5. Click on the 'LDAP' tab.
6. Click on 'Add new integration'.
7. The following fields should be present:
    'URL*': text
    'Base DN*': text
    'Manager DN': text
    'Manager password': text
    'User DN pattern': text
    'User search filter': text
    'Group search base': text
    'Group search filter': text
    'Password encoder type': text
    'Email attribute*': text
    'Full name attribute' : text
    'Photo attribute' : text

Mandatory fields are marked in red. Click the 'Submit' button. All LDAP users will then have access to the ReportPortal instance. To access ReportPortal, the user should use their domain credentials (login and password).

    - + \ No newline at end of file diff --git a/reportportal-configuration/authorization/SAMLProvider/AzureSAML/index.html b/reportportal-configuration/authorization/SAMLProvider/AzureSAML/index.html index 8b39f5cbd..9f2296416 100644 --- a/reportportal-configuration/authorization/SAMLProvider/AzureSAML/index.html +++ b/reportportal-configuration/authorization/SAMLProvider/AzureSAML/index.html @@ -12,7 +12,7 @@ - + @@ -40,7 +40,7 @@
  • Click the Add integration button.
  • Synchronize Azure SAML and ReportPortal

    1. Synchronize Azure SAML and ReportPortal as follows:

    Finally, you can log in to ReportPortal using Azure SAML.

    - + \ No newline at end of file diff --git a/reportportal-configuration/authorization/SAMLProvider/OktaSAML/index.html b/reportportal-configuration/authorization/SAMLProvider/OktaSAML/index.html index 8ca54ab43..53c972ca6 100644 --- a/reportportal-configuration/authorization/SAMLProvider/OktaSAML/index.html +++ b/reportportal-configuration/authorization/SAMLProvider/OktaSAML/index.html @@ -12,7 +12,7 @@ - + @@ -23,7 +23,7 @@ Click the 'Submit' button. All users of SAML will have access to the ReportPortal instance.
Just click the 'Login with SAML' button and choose the needed integration from the drop-down.

On the Okta side, you should specify the SSO URL. The URL format is as follows:

https://<your domain address>/uat/saml/sp/SSO/alias/report-portal-sp

The “RP callback URL” field is an optional field for providing a redirect base path right in the SAML integration settings. Fill in the field in the format “RP host/uat”, for example:

    https://reportportal.com/uat

    Once you have submitted an integration with “RP callback URL”, the URL will be applied to all SAML integrations.

    - + \ No newline at end of file diff --git a/reportportal-configuration/authorization/SAMLProvider/index.html b/reportportal-configuration/authorization/SAMLProvider/index.html index 8c3ca87b7..543bbbeab 100644 --- a/reportportal-configuration/authorization/SAMLProvider/index.html +++ b/reportportal-configuration/authorization/SAMLProvider/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@

    SAML provider

    If you have a pre-created Internal user, you can't log in by SAML using their credentials (Email or Name).

    This plugin allows you to configure a connection with a SAML provider.

Integration with SAML allows you to log in to ReportPortal using SSO instead of tedious manual user creation.

The plugin provides a mechanism to exchange information between ReportPortal and a SAML provider, such as the ability to log in to ReportPortal with SAML credentials.

    SAML provider requirements

    • SAML 2.0 version
    • HTTP-POST Binding
    • URL to download SAML IdP Metadata
    • HTTPS connection for SAML Metadata
    • Support SAML attributes:
      • email
      • first name
      • last name
  • full name (instead of first and last name)
    tip

    There are detailed manuals for configuration of Azure SAML and Okta SAML.

    Add integration

    ReportPortal contains the SAML Plugin by default.

    1. Go to Administration -> Plugins -> SAML
    2. Select Add integration.

    Set up connection

    Identity provider configuration

    ReportPortal SSO initial URL

    You need to provide a URL for a SAML Provider to deliver SAML data for the identity federation.

    https://<host>/uat/saml/sp/SSO/alias/report-portal-sp

    Identifier

Set up the identifier (aka Audience Restriction, aka Entity ID) for the application as report.portal.sp.id. You can specify a custom Entity ID when deploying the Authorization service via the RP_AUTH_SAML_ENTITYID environment variable.
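
For example, in a docker-compose deployment this could look like the following fragment (a minimal sketch, assuming the Authorization service is defined as the uat service, as in typical ReportPortal compose files; the Entity ID value is a placeholder):

uat:
  image: reportportal/service-authorization
  environment:
    RP_AUTH_SAML_ENTITYID: my.custom.entity.id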

    Custom attribute as claims in token

The IdP app user profile must provide attributes like these:

    - user.email
    - user.firstName
    - user.lastName

Also, make sure there is a mapping created according to the values that you use in the ReportPortal SAML plugin, like this:

    - user.email -> Email
    - user.firstName -> FirstName
    - user.lastName -> LastName

    ReportPortal configuration

    Identity provider name ID (Optional)

Identity provider name ID (aka name identifier formats) controls how users at identity providers are mapped to users at service providers.

The following formats are supported:

    UNSPECIFIED - used by default

    urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified

    EMAIL

    urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress

    PERSISTENT

    urn:oasis:names:tc:SAML:2.0:nameid-format:persistent

Provider name - The provider name associated with the IdP.

Metadata URL - A URL that provides information about the SAML Provider.

Email - Attribute name from SAML metadata which contains a user's email.

    <saml:Attribute Name="Email" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri">
    <saml:AttributeValue xsi:type="xs:string">neuromancer@cyberspace.net</saml:AttributeValue>
    </saml:Attribute>

    ReportPortal Callback URL (Optional) - This field provides a redirect base path.

    Once you have submitted an integration with “RP callback URL”, the URL will be applied to all SAML integrations.

    https://<host>/uat

First name - Attribute name from SAML metadata which contains a user's first (given) name.

    <saml:Attribute Name="FirstName" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri">
    <saml:AttributeValue xsi:type="xs:string">William</saml:AttributeValue>
    </saml:Attribute>

Last name - Attribute name from SAML metadata which contains a user's last (family) name.

    <saml:Attribute Name="LastName" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri">
    <saml:AttributeValue xsi:type="xs:string">Gibson</saml:AttributeValue>
    </saml:Attribute>

Full name - Attribute name from SAML metadata which contains the user's full name. You can use either two separate attributes for the first and last name, or a combined full name attribute. This solely depends on your SAML provider.

    <saml:Attribute Name="FullName" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri">
    <saml:AttributeValue xsi:type="xs:string">William Gibson</saml:AttributeValue>
    </saml:Attribute>
    - + \ No newline at end of file diff --git a/reportportal-configuration/authorization/index.html b/reportportal-configuration/authorization/index.html index b909ade93..5a8dd3635 100644 --- a/reportportal-configuration/authorization/index.html +++ b/reportportal-configuration/authorization/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@
    - + \ No newline at end of file diff --git a/reportportal-glossary/index.html b/reportportal-glossary/index.html index c64bcd751..8ebd4f0b4 100644 --- a/reportportal-glossary/index.html +++ b/reportportal-glossary/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    ReportPortal Glossary

    Agent

Agents are direct test framework integrations. If you integrate your tests with an agent, you don't need to do anything except add properties and test metadata. Each agent follows the same naming convention: agent-{ language }-{ framework }, e.g. "agent-python-pytest". The best way to see which agents we have and check out the latest documentation for them is to go to the ReportPortal organization page on GitHub and start searching.

Each agent project contains a README with the latest installation instructions. Agent pages are usually updated along with the agent code. If you find any issue with the documentation, you are free to correct it with a PR or post an issue.

Agents are complete and self-sufficient integrations; all you need is to provide correct properties and, optionally, test metadata (like attributes and steps).
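
For instance, a Java-based agent is typically configured through a reportportal.properties file on the classpath. A minimal sketch, assuming current client-java property names; all values are placeholders:

rp.endpoint = https://demo.reportportal.io
rp.api.key = <your-api-key>
rp.project = default_personal
rp.launch = smoke_tests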

    Client

Clients are basically interfaces for our REST API. Like agents, they have their own naming convention: client-{ language }, e.g. "client-java". And, like Agents, the best way to find the latest library version and documentation is on GitHub.

Unlike agents, Clients are not self-sufficient. They provide a convenient way to call the ReportPortal API, but you must do this in your code. If you decide to use a Client, it is also your responsibility to choose which metadata you send to ReportPortal.

    Logger

Loggers are a special type of library responsible for saving logs into ReportPortal. They can be attached to a logging framework or a test tool, e.g. Selenide, to report steps and log records. Basically, you can report logs yourself with certain methods inside Clients, but Loggers do this for you.

Here is an example of how the OkHttp3 logger works:

    Like agents and clients, loggers have their own naming pattern: logger-{ language }-{ framework / tool }.

    Plugin

    Plugins are co-applications which add additional functionality to ReportPortal. We have separate examples and development guide for our users. You are free to extend our test automation results dashboard as you wish, we don’t limit or obligate our users with any clauses. ReportPortal provides some plugins for free like Jira and Rally integrations but also has closed plugins, which we provide only with our paid plans, e.g., “Quality Gates” plugin:

    Launch

The first reporting-related word in our list. A Launch is a collection of all reported tests which were run in a single test execution. Launches allow you to monitor the state of your application under test. The idea is that you take a certain number of tests and test suites and run them periodically on different environments, and our widgets draw a picture of your application's health from launch to launch. You are not obligated to stop adding new test reports into a launch after the launch finishes; you are free to add more data. Or, for example, you can create several Launches and merge them into one. Or run your tests in a distributed way and report everything into a single Launch. The key concept here is that a Launch is the biggest data point on our widgets; everything else is up to you.

    Suite

For the convenience of locating and navigating data, you can put your tests into test suites. Features, Stories, Suites, Test Classes, Tests (sic!), etc. are all Suites from the ReportPortal point of view, just with different names. Suites are hierarchical: to put one Suite inside another, you should specify the Parent Suite UUID. Also, as a limitation, a child Suite's or Item's start time must not be earlier than the parent Suite's start time, due to database limitations; please keep that in mind.
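
For illustration, starting a child suite through the reporting API looks roughly like this (a sketch of the standard item-start call; exact field names may vary between API versions, and all values are placeholders):

POST /api/v1/{projectName}/item/{parentSuiteUuid}
{
  "name": "Checkout suite",
  "type": "SUITE",
  "launchUuid": "<launch-uuid>",
  "startTime": "2024-01-01T10:00:00.000Z"
}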

    Step

The Step is the only entity with statistics in our centralized test automation tool. Every time you report a step, ReportPortal adds 1 to the test count, and every time a Step fails, it counts as a test failure.

    Nested step

Nested steps are like suites but located as child elements of a step. They act as containers for test logs and are represented as drop-down lists inside the Log View. On the screen below, the Nested step is the first entity.

    Log

Logs are textual data necessary for debugging and problem solving. Our Analyzer uses them in the same way. Logs should be neat and informative, but not too verbose. Tons of logs are usually as much of a problem as their complete absence.

    Test Case ID

The ID which is used along with Steps. It is a unique signature of a test, by which we build the test history. In general, you don't need to manipulate these IDs, since agents usually generate them based on the code reference and parameters, and provide a way to customize them. But if you want to customize your history view, this is the first place to look.

    - + \ No newline at end of file diff --git a/reportportal-tutorial/index.html b/reportportal-tutorial/index.html index db8aa3ba4..4c4927e40 100644 --- a/reportportal-tutorial/index.html +++ b/reportportal-tutorial/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@

    ReportPortal Tutorial

    Overview

    The goal of this tutorial is to introduce all ReportPortal capabilities. Along the way, you'll learn how to use ReportPortal features and how to get the most out of them, as well as expert tips for using our test automation results dashboard.

    How to explore ReportPortal without installation

    One day you found ReportPortal which promises to simplify the analysis of failed tests and bring many more benefits.

    “Really? I don’t believe it”, – your first reaction.

    Do you just want to see how ReportPortal works to make sure you need it? It is easy! Just visit our Demo instance and use default credentials for login:

    login: default

    password: 1q2w3e

Or you can use the "Login with GitHub" button to log in.

    How to deploy ReportPortal instance

You tried the demo session. You are impressed with ReportPortal's features and possibilities and decided to install ReportPortal. Excellent! Visit our detailed documentation on how to deploy ReportPortal:

Please also check the technical requirements for your system.

    If you don’t want to deal with technical details, we will be happy to assist you.

    How to invite Team to ReportPortal

Finally, you have logged into ReportPortal. And you see just empty tabs... This looks confusing at first. What can we do to get started?

    Let’s start by inviting your team members. You can also do it as a final step, but it would be nice to investigate ReportPortal together.

In order to add other users to ReportPortal, you can send invitations via email. To make sure that the email service is configured, please follow these steps (as an admin user): E-mail server configuration. Once emailing is configured, you can either invite new users or create a new user in the project.

    After this step you will have emailing capabilities enabled, and several more users invited to ReportPortal.

    How to generate first data in ReportPortal

The main section for working with ReportPortal is the Launches tab in the left menu. But the Launches table is empty, and it's hard to understand what ReportPortal can do and what to do next.

The Generate demo data feature can help you with this by generating a set of demo launches (test executions), filters, and dashboards.

    Once generated, you will see 5 launches (each Launch is equivalent to a testing session, or testing execution, or one Jenkins job). On the Filters tab you will find 1 filter. And Dashboards will have a Demo dashboard with visualizations/widgets built on top of the data from launches.

    Let’s understand how ReportPortal works based on demo data, and later we can return to the upload of your actual data from your testing frameworks. You can navigate to this section right now if you wish.

    How to triage test failures with ReportPortal

So far, you have Demo launches in ReportPortal. You see the list of test executions on the Launches page with Total/Passed/Failed/Skipped numbers of test cases, and some of the issues are already sorted: Product Bug, Auto Bug, System Issue. But some issues require the attention of engineers, and they are marked with the “To Investigate” flag.

The next step, and the main goal for QA engineers, is defect triage. This means opening each test case, identifying the root cause of the failure, and categorizing/associating it with a particular defect type. We call this action “Make decision”.

    Based on test results, you can make decisions on further steps to improve your product. For example, you can arrange a call with a Development Team Leader to discuss bug fixing, if you have a lot of Product Bugs.

    In case of a large number of System Issues, you can ask a DevOps engineer to fix the environment or to review the testing infrastructure. Thus, you won‘t waste your team's effort and time by receiving failed reports due to an inconsistent environment.

If you have a lot of Automation Bugs, put more effort into test case stabilization and convert flaky test automation failures into valuable test cases which will test your application for real.

    Moreover, you can post and link issues in just a few clicks using Bug Tracking System plugins:

    Atlassian Jira Server

    Atlassian Jira Cloud

    Azure DevOps

    Rally

    How to filter test executions in ReportPortal

To distinguish executions by type and focus only on those required or related to your team today, you can use filters. Filters have “tab” capabilities, so you can easily navigate between different selections. You can filter by different criteria like launch name, description, number of failed or passed test cases, attributes, etc.

    How to add more attributes for filtering launches in ReportPortal

There is also a possibility to filter by attributes. You can find an example of setting attributes in your profile. If you include them in the parameters of your automation, additional attributes will appear under the launch name, and you can filter test executions by these attributes as well.
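
For example, with the Java client, launch-level attributes can be passed via reportportal.properties (a sketch, assuming the client-java rp.attributes convention of semicolon-separated key:value pairs; the values are placeholders):

rp.attributes = team:payments;env:staging;build:1.2.3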

    How to visualize test results in ReportPortal

    So, you’ve separated your own test data from others. Now let’s visualize our test results. Navigate to the Dashboards tab and open the Demo Dashboard. Here you can see basic visualizations that will help you understand the state of the product.

You can also create new Dashboards. Since managers love charts, let's practice building some self-updating charts! And let them see the actual statistics and value of your test automation along with you at any given moment, since dashboards and widgets are updated in real time. The best widget to start with is the Investigated percentage of launches, which shows how well the QA team analyzes failures.

Once the QA team has categorized all issues, we can understand why automation tests fail. Create a Launch statistics chart widget for that. It shows the reasons for failures, for example, a broken environment, outdated tests, or product bugs.

The next step can be creating the Overall statistics chart to define the total number of test cases and how many of them are Passed/Failed/Skipped. This widget can be applied to all launches or to the latest launches only.

We've reviewed the basic widgets. How can you get some insights from launches? Our suggestion is to create a Flaky test cases table to find tests that often change their status from passed to failed in different launches. These unstable tests do not inspire any confidence. The widget allows you to identify them so that you can pay special attention to them and fix them faster.

Next, you might want to understand how long it takes to pass each test case. The Most time-consuming test cases widget helps to find the longest scenarios.

    How to use ML power of ReportPortal

The ML suggestions feature prompts you with similar tests and the defect types they have. This way, we don't waste time re-reading the log but use ML hints instead.

ML suggestions analysis runs every time you open the "Make decision" editor. ML suggestions are executed for all test items, no matter what defect type they currently have.

    How to use Pattern Analysis

The Pattern Analysis feature helps to find static repeating patterns within automation. For example, you know that a 404 error in your application might be caused by a specific product bug. Create a rule with the problem phrase, launch a test run, and Pattern Analysis will find all failed items which have known patterns in their error logs. This allows you to draw a quick conclusion.
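
For illustration, such a rule could look like this (a hypothetical example; Pattern Analysis rules are created on the Project settings page and can be of String or Regex type):

Rule name: Known 404 product bug
Type:      Regex
Pattern:   .*status code: 404.*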

    How to run Auto-Analysis in ReportPortal

ReportPortal has an Auto-Analysis feature which makes it possible for the application to independently check test results and perform some of the routine tasks for you.

When you have test executions on the Launches page, you might need to analyze them automatically using ML. You can switch Auto-Analysis ON in the settings; it will then start as soon as any launch finishes. Auto-Analysis takes over a part of your routine work: it defines the reason for a test item failure based on previous launches and sets a defect type, a link to the BTS (if one exists), and a comment (if one exists). As a result, you save time and can create new test cases instead of analyzing test results.

    You can run Auto-Analysis manually as well.

    When the test item is analyzed by ReportPortal, a label “AA” is set on the test item on a Step Level.

How to see the historical trend of the causes of failures

And now let's build a more detailed “Launch statistics chart” widget with the historical changes in test results, so I can see how the results of my launches have changed over time.

    Use case

    Goal: Create a widget to show historical changes in Passed/Failed test cases in my API tests.

    Follow the instructions below to create this Launch statistics chart.

    Here you can see the historical distribution of your test results: there are Passed or Failed tests.

    Instead of just Failed tests, you can see the dynamics of the total number of Product bugs, Automation bugs, System issues and No Defect.

In this way, you see the historical trend of the causes of failures.

    How to make automated test results part of my pipeline

ReportPortal supports Continuous Testing with built-in functionality – Quality Gates (a premium feature). A Quality Gate is a set of predefined criteria that should be met in order for a launch run to be considered successful.

    Firstly, navigate to Project settings and create a Quality Gate with the rules which will be applied to a specific launch that matches the conditions.

    Finally, configure integration with CI/CD to send results to the pipeline.

    How to use nested steps and attributes in ReportPortal

Usually, you see the results of automation as a carpet of error logs, and only an automation engineer can understand what is happening inside. Adding nested steps (Java, Python) allows you to apply a one-time change in the test code to logically group steps and make these error logs more readable for the whole team.
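
For example, with the Java client, a nested step can be created with the @Step annotation (a minimal sketch, assuming the client-java annotations are on the classpath; the class and step names are illustrative):

import com.epam.reportportal.annotations.Step;

public class CheckoutSteps {

    // Reported to ReportPortal as a nested step under the current test item;
    // the "{itemName}" placeholder is filled from the method parameter.
    @Step("Add item '{itemName}' to the cart")
    public void addToCart(String itemName) {
        // ... UI or API actions here ...
    }
}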

    You can also use attributes on any level (launch/suite/test/step) to provide more contextual information.

    How to evaluate product health with ReportPortal

You can create a “Component health check” widget based on attributes to understand which components do not work well and which areas need more attention.

    Use case 1

    Goal: define which features are affected by failed scenarios.

    You can see scenarios on the first screenshot.

    Select failed scenario to see which features were affected.

Finally, let’s see what the priority of the failed test cases is.

    Use case 2

    Goal: define the priority of failed test cases.

    You can see that failures occurred in test cases with critical priority.

    Select Critical to understand which operating system is having problems.

    Next, select Android to see the list of features that need more attention.

    Use case 3

    Goal: define state of test cases on mobile devices.

    On the screenshot below you can see that our trouble spot is Android.

    You can go to the test cases level and see what problems they had.

    How to add test results to ReportPortal

    You have checked demo test results, dashboards and widgets. And now you want to see your real data in ReportPortal.

ReportPortal is a TestOps service that integrates with your test framework, listens to events, and visualizes test results. You cannot execute tests right from ReportPortal, but you can integrate ReportPortal with a test framework or implement your own integration.

    - + \ No newline at end of file diff --git a/saved-searches-filters/CreateFilters/index.html b/saved-searches-filters/CreateFilters/index.html index 4a7366558..b63b4af45 100644 --- a/saved-searches-filters/CreateFilters/index.html +++ b/saved-searches-filters/CreateFilters/index.html @@ -12,7 +12,7 @@ - + @@ -21,7 +21,7 @@

    Create filters

Filters in our test automation reporting dashboard are saved searches of launches.

Filters can be used as independent objects, as well as for creating widgets.

Permission: all users of the project, regardless of their role.

    To create a filter, perform the following steps:

    1. Navigate to the "Launches" page.

    2. Click on the "Add Filter" button.

    3. The new tab will be opened. Now you can configure your filter. The unsaved filter is marked with an asterisk (*).

    4. Add filtering parameters to have the relevant data.

5. Click the 'Save' button.

6. The 'ADD FILTER' popup will appear.

7. Enter a new filter name (3-55 symbols long) and click the "Add" button.

    Your new filter will be saved and shown on the "Filters" page.

    note

ReportPortal allows saving a filter in the "Launches" mode only. It is impossible to save filters on the "Debug" tab.

    - + \ No newline at end of file diff --git a/saved-searches-filters/FiltersPage/index.html b/saved-searches-filters/FiltersPage/index.html index 18b9ef043..e5732d118 100644 --- a/saved-searches-filters/FiltersPage/index.html +++ b/saved-searches-filters/FiltersPage/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    Filters page

    You can see the list of your saved searches and filters created by other team members on the "Filters" page.

    The following information is present on the "Filters" page:

    • Filter name and description
    • Options: criteria of search
    • Owner: the user who created the filter
    • Display on launches: ON/OFF switcher
    • Delete option

To open launches based on a saved filter, click the link with the filter's name. The filter will be opened as a tab on the "Launches" page.

This is the only place where it is possible to delete a filter from our test report & analytics dashboard.

To do this, click the 'delete' icon of your filter and confirm the action. The filter will be deleted, but not the launches in it.

    - + \ No newline at end of file diff --git a/saved-searches-filters/ManageFilters/index.html b/saved-searches-filters/ManageFilters/index.html index d6fe0c53f..216dd677b 100644 --- a/saved-searches-filters/ManageFilters/index.html +++ b/saved-searches-filters/ManageFilters/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    Manage filters

The Filters feature is the basis for data visualization in test automation, because widgets are built on top of filters.

OPERATIONS WITH FILTERS

After a filter is saved, there are several options to manage it.

EDIT FILTER

This option allows editing the filter name and description.

    To edit a filter, perform the following steps:

    1. Open your filter from the tab on the "Launches" page.

    2. Click the "Edit" option on the tab menu.

    3. The Edit filter popup window will appear.

    4. Make changes.

    5. To save the updates, select the "Save" option from the filter context menu.

    Your changes for the filter will be saved.

    CLONE

    This option allows you to create a new tab with the same criteria.

To clone an existing filter, perform the following steps:

    1. Open any filter on the "Launches" page.

    2. Click the "Clone" option on the tab menu.

    3. Enter the unique name, description and submit.

4. The content of the filter will be the same as in the original filter.

    DISCARD

    This option helps to reset unsaved filter changes.

    To discard unsaved changes, perform the following steps:

    1. Open any filter on the "Launches" page.

    2. Add new criteria to the filter options.

3. An asterisk mark appears on the filter tab.

4. Click 'Discard' to remove the most recent changes.

5. All unsaved changes are removed.

6. The asterisk mark is removed from the filter tab.

    CLOSE

This option allows you to close the filter tab with all selected criteria. The option is available on the filter tab.

The DELETE option is available from the 'Filters' page only.

    - + \ No newline at end of file diff --git a/search/index.html b/search/index.html index 8d9d752f7..5e76fd815 100644 --- a/search/index.html +++ b/search/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    Search the documentation

    - + \ No newline at end of file diff --git a/terms-and-conditions/GoogleAnalyticsUsageByReportPortal/index.html b/terms-and-conditions/GoogleAnalyticsUsageByReportPortal/index.html index 04eee4b9e..c87e15d60 100644 --- a/terms-and-conditions/GoogleAnalyticsUsageByReportPortal/index.html +++ b/terms-and-conditions/GoogleAnalyticsUsageByReportPortal/index.html @@ -12,7 +12,7 @@ - + @@ -27,7 +27,7 @@ everything with OK button. The changes will be applied after the system restart.

    Build systems

Some build systems can set environment variables on their own; we can use this feature to set the variable.

Gradle

Gradle has an environment keyword which sets variables for child processes, so all you need to do is set it in the test task:

test {
    environment "AGENT_NO_ANALYTICS", "1"
}

    Maven

maven-surefire-plugin has an option to set environment variables for forked processes, so you can configure the plugin accordingly:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.epam.reportportal.example</groupId>
    <artifactId>example-mute</artifactId>
    <version>1.0-SNAPSHOT</version>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.2</version>
                <configuration>
                    <forkMode>once</forkMode>
                    <environmentVariables>
                        <AGENT_NO_ANALYTICS>1</AGENT_NO_ANALYTICS>
                    </environmentVariables>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

    tox

Python's tox automation tool also provides a way to set the necessary variables with the setenv parameter in the tox.ini file:

[testenv]
setenv =
    AGENT_NO_ANALYTICS = 1

    Docker

If your tests are wrapped in a Docker container, you need to pass this variable through the command line with the -e flag:

docker run --rm -it \
    -e "AGENT_NO_ANALYTICS=1" \
    selenium/standalone-chrome:103.0

Or you can use the ENV keyword in your Dockerfile when building the image:

    ENV AGENT_NO_ANALYTICS=1
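
And if you launch tests directly from a shell, the variable can simply be exported before the run (a generic sketch; the Gradle command is just an example test runner):

export AGENT_NO_ANALYTICS=1
./gradlew test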

    Thanks to Google Analytics, we can deliver interesting and helpful features to ReportPortal. As a result, you will have effective working instruments and better customer support.

    - + \ No newline at end of file diff --git a/terms-and-conditions/PremiumFeatures/index.html b/terms-and-conditions/PremiumFeatures/index.html index 1dc2e43ec..a15c96ba5 100644 --- a/terms-and-conditions/PremiumFeatures/index.html +++ b/terms-and-conditions/PremiumFeatures/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    Premium Features

    At ReportPortal, we understand that every enterprise's testing environment is unique, requiring tailored solutions that address specific needs. To cater to this demand, we've introduced our Premium Features, exclusively available to our Managed Services and SaaS subscription clients.

    Our Premium Features have been meticulously designed and developed with large-scale, enterprise-level needs in mind. They are the cornerstone for organizations seeking to establish true continuous testing within their operational setup. Whether it's the ability to navigate the complexities of testing at scale, or the demand for more granular insights to drive decision-making, these premium options are equipped to handle it all.

    We invite you to explore our Premium Features, understanding their objectives and benefits in detail on our documentation page. We are confident that you'll find the value they add to be well worth the investment. As always, we're here to answer any questions and assist you in getting the most out of your ReportPortal experience.

    Available Premium Features:

    - + \ No newline at end of file diff --git a/user-account/DataRetentionProcedure/index.html b/user-account/DataRetentionProcedure/index.html index 02cc81a71..5d163aa31 100644 --- a/user-account/DataRetentionProcedure/index.html +++ b/user-account/DataRetentionProcedure/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    Data retention procedure

Starting from version 23.2, ReportPortal introduces an option to establish a retention period for collected Personally Identifiable Information (PII) during instance configuration. This configuration allows setting an individual retention duration for the instance in days, such as N = 90, 180, 540, or any other number of days.

    Docker

    To activate data retention, add the following environment variables to Service Jobs:

    # Int (days)
    RP_ENVIRONMENT_VARIABLE_CLEAN_EXPIREDUSER_RETENTIONPERIOD:

    # CRON
    RP_ENVIRONMENT_VARIABLE_CLEAN_EXPIREDUSER_CRON:
    RP_ENVIRONMENT_VARIABLE_NOTIFICATION_EXPIREDUSER_CRON:
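
For illustration, a filled-in variant could look like this (assumed values only: a 90-day retention period and daily runs; the cron expressions below are placeholders in the six-field Spring format, not product defaults):

# Int (days)
RP_ENVIRONMENT_VARIABLE_CLEAN_EXPIREDUSER_RETENTIONPERIOD: 90

# CRON
RP_ENVIRONMENT_VARIABLE_CLEAN_EXPIREDUSER_CRON: 0 0 5 * * *
RP_ENVIRONMENT_VARIABLE_NOTIFICATION_EXPIREDUSER_CRON: 0 0 6 * * *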

    Kubernetes

Fill in the Service Jobs values in values.yaml:

servicejobs:
  coreJobs:
    # Int (days)
    cleanExpiredUserRetention:

    # CRON
    cleanExpiredUserCron:
    notifyExpiredUserCron:

    If the data retention option is enabled but a specific number of days for deleting inactive users is not specified, no deletions will occur. In the case of specifying 0 or a negative value, an error will be displayed in the logs.

    When the data retention option is activated, the job will run daily to identify inactive users and obfuscate their data.

    Inactive users are defined as follows:

    1. Users who have not logged in for N days.

    2. Users who have not reported testing data for N days.

    Users are only classified as inactive if both conditions are satisfied.

In cases where a user logs in but doesn’t submit any reports, they are not deleted, as the first condition isn’t fulfilled. Similarly, if a user has not logged in but has submitted reports, they are still considered active.

Before performing deletions, the system sends out email notifications as follows: notification №1 is dispatched to inactive users 60 days before deletion (i.e., after N-60 days of inactivity), notification №2 is sent 30 days prior, and notification №3 is sent 1 day before obfuscation. Notifications about the account deletion itself are also sent by the system.

Users will be able to return whenever they are invited to a project again.

    In summary, a data retention policy optimizes resources and helps create a more efficient, secure, and effective environment for data management, which fosters business success.

    - + \ No newline at end of file diff --git a/user-account/DeleteAccount/index.html b/user-account/DeleteAccount/index.html index bc329e9dd..4656b2f40 100644 --- a/user-account/DeleteAccount/index.html +++ b/user-account/DeleteAccount/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    Delete account

    Starting from version 23.2, ReportPortal users can delete their accounts along with their personal data.

    During the instance setup, the DevOps engineer (or whoever is deploying the instance) can use a variable to decide whether the "Delete account" button will appear in each user's profile or not. This setting is specific to each instance.

    RP_ENVIRONMENT_VARIABLE_ALLOW_DELETE_ACCOUNT: true

    When a user clicks on the "Delete account" button, a modal window with feedback options appears. The user can select from predefined options or choose "Other" and provide a specific reason for deleting their account. Alternatively, they can simply select "Other" without leaving any comments.

    To prevent accidental deletions, the user must enter "Delete" in capital letters to confirm their intention to delete the account. This extra step ensures that the user genuinely wants to proceed with the account deletion. Once the user clicks the "Delete" button, all personal information related to their account, including account name, email, and photo, will be removed from our test automation reporting platform. However, any data created or reported by the user in ReportPortal, such as launches, filters, widgets, and dashboards, will still be retained in the application but will no longer be accessible to the user. Additionally, the user will receive an email notification confirming the account deletion.

    In summary, allowing users to delete their accounts and personal data from our automated testing tool is a critical measure to protect user privacy. ReportPortal is committed to adhering to data protection regulations and staying up to date with industry trends to ensure compliance.

    - + \ No newline at end of file diff --git a/user-account/EditPersonalInformation/index.html b/user-account/EditPersonalInformation/index.html index a0e2dfb5b..3be6b3122 100644 --- a/user-account/EditPersonalInformation/index.html +++ b/user-account/EditPersonalInformation/index.html @@ -12,7 +12,7 @@ - + @@ -25,7 +25,7 @@ name and email fields will become available.

    The full name allows: 3-256 symbols, English, Latin, Cyrillic, numeric characters and the following symbols: hyphen, underscore, dot, and space. The email should be unique in ReportPortal.

    Make changes and click the "Save" button.

    note

    Personal information for a GitHub user cannot be changed.

    Change password

    To change your password in ReportPortal, click on the "Change password" button above the form and enter:

    Check the box "Show password" to verify the password entered is what you intended to enter.

    Fill in these fields and click the "Submit" button under the form.

    note

    GitHub users cannot change the password on ReportPortal.

    - + \ No newline at end of file diff --git a/user-account/RestoreAPassword/index.html b/user-account/RestoreAPassword/index.html index 8764ba4f2..993ec943e 100644 --- a/user-account/RestoreAPassword/index.html +++ b/user-account/RestoreAPassword/index.html @@ -12,7 +12,7 @@ - + @@ -22,7 +22,7 @@

    Restore a password

    If you forgot your password, you can restore it on the login page. To do that, perform the following steps:

    1. Click the "Forgot your password?" link on the login page.

    2. Enter your email in the form that appears and click the "Change Password" button. The password recovery instructions will be sent to your email.

    3. Follow the link from the email. The Restore Password form will appear.

4. Fill in the restore password form and click the "Change Password" button. Now you can log in to ReportPortal with the new credentials.

    note

GitHub users cannot restore their password on ReportPortal.

    - + \ No newline at end of file diff --git a/user-role-guides/index.html b/user-role-guides/index.html index f054c0f6e..27917014d 100644 --- a/user-role-guides/index.html +++ b/user-role-guides/index.html @@ -12,7 +12,7 @@ - + @@ -48,7 +48,7 @@ Saved searches (Filters)
    Dashboards and Widgets
    User account

    - + \ No newline at end of file diff --git a/work-with-reports/FilteringLaunches/index.html b/work-with-reports/FilteringLaunches/index.html index a9bb8bedf..21ccdd5c1 100644 --- a/work-with-reports/FilteringLaunches/index.html +++ b/work-with-reports/FilteringLaunches/index.html @@ -12,7 +12,7 @@ - + @@ -47,7 +47,7 @@ Defect Comments "abcd", "zabc", "zabcd", "abc".

    - + \ No newline at end of file diff --git a/work-with-reports/HistoryOfLaunches/index.html b/work-with-reports/HistoryOfLaunches/index.html index 5a6981a3a..78d84a2f5 100644 --- a/work-with-reports/HistoryOfLaunches/index.html +++ b/work-with-reports/HistoryOfLaunches/index.html @@ -12,7 +12,7 @@ - + @@ -43,7 +43,7 @@ above the item.

Also, on a History line, you can see an "i" label; it means that the item with this label has a defect comment and/or a link to the Bug Tracking System.

    History across All launches

By default, the system shows history across launches with the same name.

But you can choose the option "History across All Launches", and the system will show you executions of the test cases which have been run in launches with different names.

On hover, the user can see the launch name of the test and all launch attributes.

    Duration fluctuation

If a test execution's duration has grown in comparison with the previous run, the system marks such items with red triangles:

No duration growth - 0 triangles

Duration growth from 0 to 20% - 1 triangle

Duration growth from 21% to 50% - 2 triangles

Duration growth from 51% to 100% - 3 triangles

Duration growth of 101% and higher - 4 triangles

    Load more History

By default, the system shows the 12 latest executions. If you need more history, you can click the "Load more 9 items" button, and the system will load more executions onto the history line. The maximum number of items on a History line is 30 executions.

The user can move along the History line using horizontal scroll.

    Test Item actions history

The Test Item actions history shows you the history of actions which have been made to a certain test item. You can see the kind of activity and who performed it.

    The following actions are shown on the history of actions:

a user changed the defect type of a test item

a user posted a comment to the test item

a user posted a bug to the Bug Tracking System or added a link to an existing issue in the Bug Tracking System

the analyzer changed the defect type of a test item based on an item (where "item" is a link to the log view of the item which was chosen by the Analyzer as the most relevant result)

the analyzer posted a comment to the test item

the analyzer posted a bug to the Bug Tracking System or added a link to an existing issue in the Bug Tracking System

To see the history of actions, navigate to a certain child item. By default, you will see the last action in one line.

Use the spoiler to expand the actions history:

    - + \ No newline at end of file diff --git a/work-with-reports/InvestigationOfFailure/index.html b/work-with-reports/InvestigationOfFailure/index.html index d102f595f..022c3f510 100644 --- a/work-with-reports/InvestigationOfFailure/index.html +++ b/work-with-reports/InvestigationOfFailure/index.html @@ -12,7 +12,7 @@ - + @@ -20,7 +20,7 @@

    Investigation of failure

    Set defect type

    ReportPortal provides the possibility for test failure analysis of your runs. The investigation includes setting the appropriate defect type of failed items, posting a defect for them or linking the ID of a defect that is already created in the bug tracking system.

Using the “Make decision” modal, you can choose the real reason for the failure and provide a comment about it.

When a defect is found in a test, bug triage should be performed and the proper defect type assigned, in order to have accurate test run statistics. You can change the defect type of a test and mark it as a Product Bug/Automation Bug/System Issue/No Defect at any time.

    The available defect types are described below:

    “To investigate” means that no investigation has been made on this defect yet.

    “Product bug” means that the defect was investigated and a production bug found as the reason for the failure of this test.

    “Automation bug” means that the defect was investigated and the automation test is not correct.

    “System issue” means that the defect was investigated and it turns out that a system-level issue, like an app crash, caused the test to fail.

    “No defect” means that the defect was investigated and was defined to be not a defect.

    Post bug to Bug Tracking System

ReportPortal allows you to connect some popular bug tracking systems to our test automation reporting platform and either post a bug to them or link the ticket ID of an already posted defect to a test item in ReportPortal.

Before posting/linking a bug, make sure that the bug tracking system is connected to the project on the project settings page. To check it, please find the user guides below:

    - + \ No newline at end of file diff --git a/work-with-reports/OperationsUnderLaunches/index.html b/work-with-reports/OperationsUnderLaunches/index.html index 072cc9164..0e194e236 100644 --- a/work-with-reports/OperationsUnderLaunches/index.html +++ b/work-with-reports/OperationsUnderLaunches/index.html @@ -12,7 +12,7 @@ - + @@ -36,7 +36,7 @@ and select "Move to All Launches" from the 'Actions' list.

    Force finish launches

    The system allows finishing launches on the "Launches" and the "Debug" pages manually.

Permission: The following users are able to stop launches:

    In order to finish a launch that is in progress now, perform the following steps:

    1. Navigate to the "Launches" page.

    2. Select the "Force Finish" option in the context menu on the left hand of the launch name.

    3. The warning popup will be opened.

    4. Click the "Finish" button.

    5. The launch will be stopped and shown in the launches table with the "stopped" tag and the "stopped" description. All the statistics collected by this time will be displayed.

In order to finish several launches that are currently in progress simultaneously, perform the following steps:

    1. Navigate to the "Launches" page.

2. Select the required in-progress launches by clicking their checkboxes.

3. Open the 'Actions' list.

4. Select "Force Finish" from the list.

    5. The warning popup will be opened.

    6. Confirm the action

    7. All selected launches will be stopped and shown in the launches table with the "stopped" tag and the "stopped" description. All the statistics collected by this time will be displayed.

    Export launches reports

The system allows exporting launch reports in the "Launches" and "Debug" modes. You can export a launch report in the following formats: PDF, XML, HTML.

    In order to export a launch, perform the following steps:

    1. Navigate to the "Launches" page.

    2. Select the required format from the "Export" option in the context menu on the left hand of the launch name.

    3. The launch report in the selected format will be opened.

    note

The export operation works for a single launch only; there are no multiple-selection actions for exporting launches.

    Delete launches

    The system allows deleting launches on the "Launches" and "Debug" pages.

Permission: The following users are able to delete launches:

There are two ways to delete a launch or launches.

    In order to delete a launch, perform the following steps:

    1. Navigate to the "Launches" page.

    2. Select the "Delete" option in the context menu on the left hand of the launch name.

    3. The warning popup will be opened.

    4. Click the "Delete" button.

    5. The launch will be deleted from ReportPortal. All related content will be deleted: test items, logs, screenshots.

    In order to delete more than one launch simultaneously, perform the following steps:

    1. Navigate to the "Launches" page

2. Select the required launches by clicking their checkboxes.

3. Open the 'Actions' list.

4. Click the 'Delete' option.

    5. The warning popup will be opened.

    6. Confirm the action

    7. The launches will be deleted from ReportPortal. All related content will be deleted: test items, logs, screenshots.

    note

It is impossible to delete launches that are IN PROGRESS; the "Delete" launch option will be disabled.

    Delete test item

The system allows deleting test items at all levels of a launch on the "Launches" and "Debug" pages.

Permission: The following users are able to delete a test item:

    In order to delete a test item, perform the following steps:

    1. Navigate to the "Launches" page

    2. Drill down to the test level of any item

    3. Select the "Delete" option in the context menu next to the selected test item.

    4. The warning popup will be opened.

    5. Click the "Delete" button.

    6. The test item will be deleted from ReportPortal with all related content (logs, screenshots).

    In order to delete some test items simultaneously, perform the following steps:

    1. Navigate to the "Launches" page

    2. Drill down to the test level of any item

3. Select the required test items by clicking their checkboxes.

    4. If you are on SUITE or TEST view, click 'Delete' button from the header

      If you are on STEP view, open 'Actions' list and select "Delete" option

    5. The warning popup will be opened.

    6. Confirm the action

    7. Test items will be deleted from ReportPortal with all related content (logs, screenshots).

    note

It is impossible to delete test items in launches that are IN PROGRESS; the "Delete" test item option is disabled for such test items.

    - + \ No newline at end of file diff --git a/work-with-reports/TestCaseId/index.html b/work-with-reports/TestCaseId/index.html index 74af87bdd..f815f94f0 100644 --- a/work-with-reports/TestCaseId/index.html +++ b/work-with-reports/TestCaseId/index.html @@ -12,7 +12,7 @@ - + @@ -23,7 +23,7 @@ It is a unique identifier from your source test management system which help ReportPortal.io to identify the uniqueness of a test case

Where is the Test Case ID used?

The Test Case ID is used for:

    You can find a test case ID in the 'Edit' modal.

How can you report items with a Test Case ID?

You can report a Test Case ID via agents. All the details can be found in the dev guide: https://github.com/reportportal/client-java/wiki/Test-case-ID
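
For example, with the Java client, an explicit Test Case ID can be set with the @TestCaseId annotation (a minimal sketch, assuming a JUnit 5 agent and the client-java annotations on the classpath; the ID value is a placeholder):

import com.epam.reportportal.annotations.TestCaseId;
import org.junit.jupiter.api.Test;

public class LoginTests {

    // The reported item will carry the explicit Test Case ID "TMS-1234"
    @Test
    @TestCaseId("TMS-1234")
    public void userCanLogIn() {
        // ... test body ...
    }
}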

If the test execution has test parameters, a Test Case ID will be generated based on:

Test Case ID in ReportPortal = reported Test Case ID + all test parameters

What happens if you do not report items with a Test Case ID?

In case a user doesn't report tests with a Test Case ID, the system generates one automatically:

Test Case ID in ReportPortal = 'Code reference' + all test parameters

or, when no code reference is available:

Test Case ID in ReportPortal = Test Case Name + all parents' names (except the launch name) + all parameters

    - + \ No newline at end of file diff --git a/work-with-reports/UniqueId/index.html b/work-with-reports/UniqueId/index.html index fb0233de5..a9f1c9964 100644 --- a/work-with-reports/UniqueId/index.html +++ b/work-with-reports/UniqueId/index.html @@ -12,7 +12,7 @@ - + @@ -22,7 +22,7 @@

    Unique ID

(Deprecated: will be replaced by Test Case ID gradually)

    ReportPortal generates an ID automatically for all test items by default. The UniqueID generation is based on:

    • A test item name;
    • A project name;
    • A launch name;
    • Names of all parents;
    • All parameters of the item;
    note

Unique ID is deprecated and will be replaced by Test Case ID gradually.

All this information becomes part of a test item in the form of an MD5 hash. After that, the UID becomes a part of the item. It allows defining the item's uniqueness with no possibility of doubt. ReportPortal uses this functionality in the process of building widgets (e.g. 'Most failed test cases', 'Flaky tests'), for 'Retry' and 'Rerun', etc.
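
For illustration only, the idea behind such a hash can be sketched as follows (this is not the exact server-side implementation; the field order and separator are assumptions):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;

public class UniqueIdSketch {

    // Join the identifying fields and reduce them to an MD5 hex string
    static String uniqueId(String projectName, String launchName, List<String> parentNames,
                           String itemName, List<String> parameters) throws Exception {
        String source = String.join(";",
                projectName, launchName,
                String.join(";", parentNames),
                itemName,
                String.join(";", parameters));
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(source.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}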

    - + \ No newline at end of file diff --git a/work-with-reports/ViewLaunches/index.html b/work-with-reports/ViewLaunches/index.html index bbbd36db0..423be9b53 100644 --- a/work-with-reports/ViewLaunches/index.html +++ b/work-with-reports/ViewLaunches/index.html @@ -12,7 +12,7 @@ - + @@ -50,7 +50,7 @@ than 5 lines. You can expand the log message clicking on the special "Expand" icon.

    You can use a filter to specify the level.

Also, you can sort logs by time and filter them to find a certain message in the logs.

    Stack trace

On the Log view, for fast redirection to the last 5 error log messages, click on the "Stack trace" tab; in this section, you can find the 5 last error logs.

    Attachments

    In case you are interested in logs with attachments only, check the corresponding checkbox.

Clicking on a file in the log opens a preview of the attachment.

    The attachments could be rotated on a preview screen if needed.

    ReportPortal provides the possibility of preview for such types of attachments as:

    Other types of attachments will be opened in a new browser tab or downloaded.

The alternative way to view these files is by using the Attachments section.

    To view data via the Attachments section, perform the following steps:

    1. Open Log view of launch for test items with attachments available

    2. Click 'Attachments' tab

    3. Select the required file by clicking on its thumbnail.

    4. To expand the area, click the view on the main box.

    Items details

    In the section Items details, you can find information about test case such as:

    History of actions

    In this section, you can find all activities which were performed under the test case such as:

The History of actions is not shown if nobody has performed actions on the item. By default, you will see the last action in one line.

    Markdown mode on Logs view

    You can view logs in Markdown mode or in the Console view.

    To enable Markdown mode, please perform actions:

    To disable Markdown mode, please perform actions:

    The same logic applies to the Console view.

    Log view for containers (for a launch or a suite)

    A user can report logs not only to the test execution but also to containers:

If a user wants to view attached logs:

    Nested Steps

    Retried test case (retry)

How can I report retry items? In case you implemented retry logic for your tests in a test framework, ReportPortal will reflect it by adding a special retry structure. If there were several invocations of one test case, all these invocations will be shown as one test case in ReportPortal.

On a log view, you can see all logs and all information about all invocations. But in statistics and Auto-Analysis, ReportPortal will take into account only the last invocation, so launch statistics will be more accurate.

    The defect type can be set for the last invocation only.

    On a Launch view, you can see a label, which means that the launch includes retries.

    On a step view, you can see the number of invocations and stack trace of each invocation.

    On a log view, you can see the number of invocations and logs, attachments of each invocation.

    - + \ No newline at end of file