From 0e8859dd22e8ab1884f3761e5fbe3b9389749205 Mon Sep 17 00:00:00 2001
From: Martin Etmajer
Date: Tue, 15 Feb 2022 11:12:49 +0100
Subject: [PATCH] Merge from master into release/v1.2 (#75)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* cftp changes based on veeva crm feedback - master (#65)

Co-authored-by: Martin Etmajer
Co-authored-by: Sergio Sacristán

* RA: fix to make table fit in page (#56)

RA: fix to make table fit in page

* ssds missing interface tokens (#68)

* CFTP for GAMP3/4/5 improvements (#73)

* Remove temp created HTML file (#74)

Co-authored-by: Clemens Utschig <40628552+clemensutschig@users.noreply.github.com>
Co-authored-by: Sergio Sacristán
Co-authored-by: Jorge Romero
---
 CHANGELOG.md               |   2 +
 templates/CFTP-3.html.tmpl |  85 ++++++++++++++++----------------------
 templates/CFTP-4.html.tmpl |  85 ++++++++++++++++----------------------
 templates/CFTP-5.html.tmpl |  85 ++++++++++++++++----------------------
 4 files changed, 107 insertions(+), 150 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 4092168..d7f93a4 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,9 +2,11 @@

 ## Unreleased

+## 1.2 - 2022-02-15
 ### Added
 - CFTP for Gamp3/4/5 - Purpose chapter 7.1.1 needs changes([#64](https://github.com/opendevstack/ods-document-generation-templates/pull/64))
 - SDSS for GAMP3/4 - Missing Section 3.2.x tokens for replacement ([#67](https://github.com/opendevstack/ods-document-generation-templates/issues/67))
+- CFTP for GAMP3/4/5 improvements ([#73](https://github.com/opendevstack/ods-document-generation-templates/pull/73))

 ## 1.2 - 2021-18-11

diff --git a/templates/CFTP-3.html.tmpl b/templates/CFTP-3.html.tmpl
index d652004..8221ab4 100644
--- a/templates/CFTP-3.html.tmpl
+++ b/templates/CFTP-3.html.tmpl
@@ -78,12 +78,7 @@
   • Acceptance Testing
-  • Operational Qualification Activities and Training
-      • Test Procedure 1: Verification of Operational Documentation
-      • Test Procedure 2: Verification of Training Documentation
+  • Training
   • Integration Testing
       • Purpose of Integration Testing
@@ -104,7 +99,12 @@
       • Traceability Matrix
       • Validation Environment
       • Test Case Failure and Problem Resolution
+          • Automated Test Cases
+          • Manual Test Cases
       • Integration / Acceptance Testing Documentation
       • Definitions and Abbreviations
@@ -144,70 +144,44 @@
 Test Administrator
-In case of full automation (and no further testcases) - N/A, otherwise Test Administrators will supervise the Administrator execution of the test by the Testers and will review the test cases.
+In case of full automation (and no further manual test cases) - N/A, otherwise Test Administrators will supervise the execution of the manual test cases by the Testers and will review these test cases.
 Tester
-In case of full automation (and no further testcases) - N/A, otherwise Testers will execute the test cases and document the results.
+In case of full automation (and no further manual test cases) - N/A, otherwise Testers will execute the manual test cases and document the results.
-Developer
-Writes tests.
+Developer/SME
+Writes and, in case of using automation, implements test cases.

        4Levels of Testing

        -

        The Testing Approach and Strategy is adapted to the Agile Development Methodologies applied in Platforms. This means that the former LeVA Development, Functional and Requirements Testing will be also covered but grouped in a different classification: Unit Testing, Installation Testing, Integration Testing and Acceptance Testing. Unit testing is performed during development by the development engineers and documented in the Development Test Plan (C-DTP) and Report (C-DTR).

        -

        Installation Testing is aimed at checking the successful installation and configuration, as well as updating or uninstalling the software. This level of testing is usually executed automatically and in Platforms it is part of the Installation Test Plan (C-IVP) and Report (C-IVR).

        4.1Integration Testing

        -

        The objective of the Integration Testing level is to verify whether the combined Units work well together as a group. Integration testing is aimed at detecting the flaws in the interactions between the Units within a module, micro-services and/or systems.

        -

        In Platforms Integration Testing is part of the Combined Functional/Requirements Test Plan (C-CFTP) and Report (C-CFTR).

        +

        The objective of Integration Testing is to verify whether the applicable components (e.g. modules, micro-services and/or systems) work well together and detect flaws in their interactions.

        4.2Acceptance Testing

        -

        This is the last stage of the testing process, where the product is verified against the end user requirements (can be functional or non-functional ones) and for accuracy. Successfully performed acceptance testing is a prerequisite for the product release. This testing level focuses on overall system quality, for example, from content and UI (functional) to performance or security issues (non-functional).

        -

        Within an agile approach the Acceptance Criteria are well-defined upfront.

        -

        In Platforms Acceptance Testing is part of the Combined Functional/Requirements Test Plan (C-CFTP) and Report (C-CFTR).

        -

        As enunciated before, requirements and acceptant criteria can be functional and/or non-functional so Acceptance Testing can be split in two main groups: Functional Testing and Non-Functional Testing.

        - -

        4.2.1 Functional Testing

        -

        Functional testing is a type of software testing in which the system is tested against the functional (user) requirements and specifications. Functional testing ensures that the (user) requirements or specifications are properly satisfied by the application. This type of testing is particularly concerned with the result of processing. It focuses on simulation of actual system usage but does not develop any system structure assumptions.

        -

        It is basically defined as a type of testing which verifies that each function of the software application works in conformance with the requirement and specification. This testing is not concerned about the source code of the application. Each functionality of the software application is tested by providing appropriate test input, expecting the output and comparing the actual output with the expected output.

        -

        Some examples of functional testing types are: Unit Testing, Smoke Testing, Integration Testing, System Testing, Exploratory Testing, etc.

        - -

        4.2.2 Non-Functional Testing

        -

        Non-functional testing is a type of software testing that is performed to verify the Non-Functional requirements of the application or system. It verifies whether the behavior of the system is as per the requirement or not. It tests all the aspects which are not tested in Functional testing.

        -

        Non-functional testing is defined as a type of software testing to check non-functional aspects of a software application. It is designed to test the readiness of a system as per non-functional parameters which are never addressed by Functional testing. Non-functional testing is as important as Functional testing.

        -

        Some examples of functional testing types are: Performance Testing, Load Testing Stress Testing, Security Testing, Scalability Testing, Compatibility Testing, Usability Testing, etc.

        +

        Acceptance tests refer to functional or non-functional (such as system availability, performance, reliability) user requirements. Examples for non-functional acceptance tests are: load tests, performance tests, recovery tests.

        -

        5Operational Qualification Activities and Training

        +

        5Training

        {{{data.sections.sec5.content}}}
        -

        5.1Test Procedure 1: Verification of Operational Documentation

        -

        As part of the Integration Testing the following documentation will be verified for all relevant subjects listed in the Qualification Plan.

        - -

        5.1.1 Test Procedure 1.1: SOPs and Working Instructions

        -

        Verify that approved SOPs and Working Instructions addressing and regarding the production environment, are in place. They must be approved and effective prior to release for production use.

        - -

        5.1.2 Test Procedure 1.2: Manuals and other System Documentation

        -

        Verify that appropriate manuals and other system documentation exist for use in operating, maintaining, configuring, and/or troubleshooting of the system.

        - -

        5.2Test Procedure 2: Verification of Training Documentation

        -

        List the applicable procedures in which the Integration Testing participants must be trained before executing their portions of the Functional Testing Plan. Describe when and how the training will take place.

        + {{#if data.sections.sec5s2}} - +
        {{{data.sections.sec5s2.content}}}{{{content}}}
        -

        Verify that an approved training plan exists, if justified, for all personnel involved in operating and maintaining the Infrastructure System.

        + {{/if}}
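For readers unfamiliar with the {{{...}}} and {{#if}} tokens in this hunk: they are standard Handlebars expressions, where a triple-stash token inserts a section's HTML without escaping and the {{#if}} guard skips its block when the section is absent. The following TypeScript sketch illustrates that behaviour with the npm handlebars package; the inline template string and the data shape are illustrative assumptions, not code from this repository.

```typescript
// Minimal sketch (assumed data shape) of how the tokens above behave when rendered.
import Handlebars from "handlebars";

const source = `
  {{{data.sections.sec5.content}}}
  {{#if data.sections.sec5s2}}
    {{{data.sections.sec5s2.content}}}
  {{/if}}
`;

const render = Handlebars.compile(source);

// With sec5s2 omitted, the guarded block renders nothing;
// triple-stash tokens emit the section HTML unescaped.
console.log(render({
  data: {
    sections: {
      sec5: { content: "<p>Project-specific training content.</p>" },
    },
  },
}));
```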
        @@ -224,10 +198,10 @@

        7Acceptance Testing

        7.1Functional Testing

        -

        7.1.1 Purpose of Functional Testing

        +

        7.1.1 Purpose of Combined Functional and Requirements Testing

        The purpose of the combined functional/requirements testing is to confirm that the computerized system is capable of performing or controlling the activities of the processes as intended according to the user requirements in a reproducible and effective way, while operating in its specified operating environment.

        -

        7.1.2 Scope of Functional Testing

        +

        7.1.2 Scope of Combined Functional and Requirements Testing

        @@ -262,19 +236,25 @@

        8.2Test Execution

Test results shall be recorded in a way that an independent reviewer can compare the documented acceptance criteria against the (written or captured) test evidence and determine whether the test results meet these criteria.

        +

        If both automated and manual test cases exist, automated test cases will be executed before manual test cases. Successful execution of automated test cases (no failures) is the prerequisite to start execution of the manual test cases.

        + +

        Execution of Automated Test Cases

        +

        In case test execution is automated:

        -

        In the case that the test execution is fully automated:

        Jenkins (Technical Role) shall:

          -
        • execute the code base test cases
        • +
        • execute the test cases
        • record the test results and evidence after the execution and include them in the XUnit file following Good Documentation Practices
        • mark the test cases as a "Fail" or a "Pass"
        • stop the test execution if one of the test cases has failed
        • report back the test execution results to the Test Management Tool
        +

As the execution is fully automated, the Tester and Test Administrator roles described in section 3 "Roles and Responsibilities" do not apply.
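To make the automated gate concrete, here is a small TypeScript sketch that inspects an XUnit-style (JUnit XML) results file and stops with a non-zero exit code when any test case has failed, in the spirit of the Jenkins steps listed above. The results path and the crude regex scan are assumptions for illustration only, not the pipeline's actual implementation.

```typescript
// Illustrative sketch: fail the run if the XUnit results contain any failures,
// mirroring "stop the test execution if one of the test cases has failed".
import { readFileSync } from "node:fs";

// Hypothetical location of the collected XUnit (JUnit XML) results.
const xml = readFileSync("build/test-results/results.xml", "utf8");

const totalCases = (xml.match(/<testcase\b/g) ?? []).length;
const failures = (xml.match(/<(failure|error)\b/g) ?? []).length;

console.log(`Executed ${totalCases} automated test cases, ${failures} failed.`);

if (failures > 0) {
  // A failed automated test case blocks manual testing and promotion.
  process.exit(1);
}
```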

        -

        In the case that the test execution is not fully automated:

        +

        Execution of Manual Test Cases

        +

        In case test execution is manual:

        +

        Testers shall:

        • execute test cases
        • @@ -283,7 +263,7 @@
        • provide comments for all failed test cases
        • sign and date each test in spaces provided after test execution
        • label any test output or evidence (e.g., screenshots, printouts and any additional pages) with test case number and test step number. Sign and date the output. If pages have successive page numbers signing and dating the first or last page is sufficient.
        • -
        • if any deviations from the test are encountered, follow the Test Case Failure and Problem Resolution (see section 10)
        • +
        • if any deviations from the test are encountered, follow the Test Case Failure and Problem Resolution (see section 11)

        If a test case is executed by more than one person (tester), it is required that each tester signs (signature or initials and date) each test step for traceability purpose.

        @@ -295,12 +275,13 @@

        Test execution and test result review must be independent, i.e. for any individual test case the Tester and the Test Administrator must be different individuals.

        +

        The training records of all testers should be verified prior to initiating testing.

        +
        {{{data.sections.sec7s1s2.content}}}
        {{{data.sections.sec8s2.content}}}
        -

        The training records of all testers should be verified prior to initiating testing.

        @@ -334,6 +315,10 @@
      1. a tester's error
      2. +

        11.1 Automated Test Cases

        +

        All discrepancies occurring during the test execution are automatically recorded in a designated discrepancy log. Failed automated test cases where the failure cannot be resolved within the Q environment are considered unacceptable. A move to P is not possible. These failures must be resolved via a change control in the Dev environment.

        + +

        11.2 Manual Test Cases

        Upon failing a test case, the Tester shall always contact the Test Administrator immediately to review the problem. The Test Administrator shall decide how to proceed, since test cases may build upon each other and a failure may cascade through several cases.

        The Test Administrator will also record all discrepancies that occur during the test execution in a designated discrepancy log. The Test Administrator is responsible for determining failure resolutions and whether a failure represents an unacceptable flaw in the system. The Test Administrator will document the result of this determination in the discrepancy log.

        The final evaluation of remaining risks and unresolved critical failures will be assessed in the validation summary report.

diff --git a/templates/CFTP-4.html.tmpl b/templates/CFTP-4.html.tmpl
index f299ecb..9414dd1 100644
--- a/templates/CFTP-4.html.tmpl
+++ b/templates/CFTP-4.html.tmpl
@@ -78,12 +78,7 @@
   • Acceptance Testing
-  • Operational Qualification Activities and Training
-      • Test Procedure 1: Verification of Operational Documentation
-      • Test Procedure 2: Verification of Training Documentation
+  • Training
   • Integration Testing
       • Purpose of Integration Testing
@@ -104,7 +99,12 @@
       • Traceability Matrix
       • Validation Environment
       • Test Case Failure and Problem Resolution
+          • Automated Test Cases
+          • Manual Test Cases
       • Integration / Acceptance Testing Documentation
       • Definitions and Abbreviations
@@ -144,70 +144,44 @@
 Test Administrator
-In case of full automation (and no further testcases) - N/A, otherwise Test Administrators will supervise the Administrator execution of the test by the Testers and will review the test cases.
+In case of full automation (and no further manual test cases) - N/A, otherwise Test Administrators will supervise the execution of the manual test cases by the Testers and will review these test cases.
 Tester
-In case of full automation (and no further testcases) - N/A, otherwise Testers will execute the test cases and document the results.
+In case of full automation (and no further manual test cases) - N/A, otherwise Testers will execute the manual test cases and document the results.
-Developer
-Writes tests.
+Developer/SME
+Writes and, in case of using automation, implements test cases.

          4Levels of Testing

          -

          The Testing Approach and Strategy is adapted to the Agile Development Methodologies applied in Platforms. This means that the former LeVA Development, Functional and Requirements Testing will be also covered but grouped in a different classification: Unit Testing, Installation Testing, Integration Testing and Acceptance Testing. Unit testing is performed during development by the development engineers and documented in the Development Test Plan (C-DTP) and Report (C-DTR).

          -

          Installation Testing is aimed at checking the successful installation and configuration, as well as updating or uninstalling the software. This level of testing is usually executed automatically and in Platforms it is part of the Installation Test Plan (C-IVP) and Report (C-IVR).

          4.1Integration Testing

          -

          The objective of the Integration Testing level is to verify whether the combined Units work well together as a group. Integration testing is aimed at detecting the flaws in the interactions between the Units within a module, micro-services and/or systems.

          -

          In Platforms Integration Testing is part of the Combined Functional/Requirements Test Plan (C-CFTP) and Report (C-CFTR).

          +

          The objective of Integration Testing is to verify whether the applicable components (e.g. modules, micro-services and/or systems) work well together and detect flaws in their interactions.

          4.2Acceptance Testing

          -

          This is the last stage of the testing process, where the product is verified against the end user requirements (can be functional or non-functional ones) and for accuracy. Successfully performed acceptance testing is a prerequisite for the product release. This testing level focuses on overall system quality, for example, from content and UI (functional) to performance or security issues (non-functional).

          -

          Within an agile approach the Acceptance Criteria are well-defined upfront.

          -

          In Platforms Acceptance Testing is part of the Combined Functional/Requirements Test Plan (C-CFTP) and Report (C-CFTR).

          -

          As enunciated before, requirements and acceptant criteria can be functional and/or non-functional so Acceptance Testing can be split in two main groups: Functional Testing and Non-Functional Testing.

          - -

          4.2.1 Functional Testing

          -

          Functional testing is a type of software testing in which the system is tested against the functional (user) requirements and specifications. Functional testing ensures that the (user) requirements or specifications are properly satisfied by the application. This type of testing is particularly concerned with the result of processing. It focuses on simulation of actual system usage but does not develop any system structure assumptions.

          -

          It is basically defined as a type of testing which verifies that each function of the software application works in conformance with the requirement and specification. This testing is not concerned about the source code of the application. Each functionality of the software application is tested by providing appropriate test input, expecting the output and comparing the actual output with the expected output.

          -

          Some examples of functional testing types are: Unit Testing, Smoke Testing, Integration Testing, System Testing, Exploratory Testing, etc.

          - -

          4.2.2 Non-Functional Testing

          -

          Non-functional testing is a type of software testing that is performed to verify the Non-Functional requirements of the application or system. It verifies whether the behavior of the system is as per the requirement or not. It tests all the aspects which are not tested in Functional testing.

          -

          Non-functional testing is defined as a type of software testing to check non-functional aspects of a software application. It is designed to test the readiness of a system as per non-functional parameters which are never addressed by Functional testing. Non-functional testing is as important as Functional testing.

          -

          Some examples of functional testing types are: Performance Testing, Load Testing Stress Testing, Security Testing, Scalability Testing, Compatibility Testing, Usability Testing, etc.

          +

          Acceptance tests refer to functional or non-functional (such as system availability, performance, reliability) user requirements. Examples for non-functional acceptance tests are: load tests, performance tests, recovery tests.

          -

          5Operational Qualification Activities and Training

          +

          5Training

          {{{data.sections.sec5.content}}}
          -

          5.1Test Procedure 1: Verification of Operational Documentation

          -

          As part of the Integration Testing the following documentation will be verified for all relevant subjects listed in the Qualification Plan.

          - -

          5.1.1 Test Procedure 1.1: SOPs and Working Instructions

          -

          Verify that approved SOPs and Working Instructions addressing and regarding the production environment, are in place. They must be approved and effective prior to release for production use.

          - -

          5.1.2 Test Procedure 1.2: Manuals and other System Documentation

          -

          Verify that appropriate manuals and other system documentation exist for use in operating, maintaining, configuring, and/or troubleshooting of the system.

          - -

          5.2Test Procedure 2: Verification of Training Documentation

          -

          List the applicable procedures in which the Integration Testing participants must be trained before executing their portions of the Functional Testing Plan. Describe when and how the training will take place.

          + {{#if data.sections.sec5s2}} - +
          {{{data.sections.sec5s2.content}}}{{{content}}}
          -

          Verify that an approved training plan exists, if justified, for all personnel involved in operating and maintaining the Infrastructure System.

          + {{/if}}
          @@ -228,10 +202,10 @@

          7Acceptance Testing

          7.1Functional Testing

          -

          7.1.1 Purpose of Functional Testing

          +

          7.1.1 Purpose of Combined Functional and Requirements Testing

          The purpose of the combined functional/requirements testing is to confirm that the computerized system is capable of performing or controlling the activities of the processes as intended according to the user requirements in a reproducible and effective way, while operating in its specified operating environment.

          -

          7.1.2 Scope of Functional Testing

          +

          7.1.2 Scope of Combined Functional and Requirements Testing

          @@ -265,19 +239,25 @@

          8.2Test Execution

Test results shall be recorded in a way that an independent reviewer can compare the documented acceptance criteria against the (written or captured) test evidence and determine whether the test results meet these criteria.

          +

          If both automated and manual test cases exist, automated test cases will be executed before manual test cases. Successful execution of automated test cases (no failures) is the prerequisite to start execution of the manual test cases.

          + +

          Execution of Automated Test Cases

          +

          In case test execution is automated:

          -

          In the case that the test execution is fully automated:

          Jenkins (Technical Role) shall:

            -
          • execute the code base test cases
          • +
          • execute the test cases
          • record the test results and evidence after the execution and include them in the XUnit file following Good Documentation Practices
          • mark the test cases as a "Fail" or a "Pass"
          • stop the test execution if one of the test cases has failed
          • report back the test execution results to the Test Management Tool
          +

As the execution is fully automated, the Tester and Test Administrator roles described in section 3 "Roles and Responsibilities" do not apply.

          -

          In the case that the test execution is not fully automated:

          +

          Execution of Manual Test Cases

          +

          In case test execution is manual:

          +

          Testers shall:

          • execute test cases
          • @@ -286,7 +266,7 @@
          • provide comments for all failed test cases
          • sign and date each test in spaces provided after test execution
          • label any test output or evidence (e.g., screenshots, printouts and any additional pages) with test case number and test step number. Sign and date the output. If pages have successive page numbers signing and dating the first or last page is sufficient.
          • -
          • if any deviations from the test are encountered, follow the Test Case Failure and Problem Resolution (see section 10)
          • +
          • if any deviations from the test are encountered, follow the Test Case Failure and Problem Resolution (see section 11)

          If a test case is executed by more than one person (tester), it is required that each tester signs (signature or initials and date) each test step for traceability purpose.

          @@ -298,12 +278,13 @@

          Test execution and test result review must be independent, i.e. for any individual test case the Tester and the Test Administrator must be different individuals.

          +

          The training records of all testers should be verified prior to initiating testing.

          +
          {{{data.sections.sec7s1s2.content}}}
          {{{data.sections.sec8s2.content}}}
          -

          The training records of all testers should be verified prior to initiating testing.

          @@ -337,6 +318,10 @@
        1. a tester's error
        2. +

          11.1 Automated Test Cases

          +

          All discrepancies occurring during the test execution are automatically recorded in a designated discrepancy log. Failed automated test cases where the failure cannot be resolved within the Q environment are considered unacceptable. A move to P is not possible. These failures must be resolved via a change control in the Dev environment.

          + +

          11.2 Manual Test Cases

          Upon failing a test case, the Tester shall always contact the Test Administrator immediately to review the problem. The Test Administrator shall decide how to proceed, since test cases may build upon each other and a failure may cascade through several cases.

          The Test Administrator will also record all discrepancies that occur during the test execution in a designated discrepancy log. The Test Administrator is responsible for determining failure resolutions and whether a failure represents an unacceptable flaw in the system. The Test Administrator will document the result of this determination in the discrepancy log.

          The final evaluation of remaining risks and unresolved critical failures will be assessed in the validation summary report.

diff --git a/templates/CFTP-5.html.tmpl b/templates/CFTP-5.html.tmpl
index 7b754c4..67e0214 100644
--- a/templates/CFTP-5.html.tmpl
+++ b/templates/CFTP-5.html.tmpl
@@ -78,12 +78,7 @@
   • Acceptance Testing
-  • Operational Qualification Activities and Training
-      • Test Procedure 1: Verification of Operational Documentation
-      • Test Procedure 2: Verification of Training Documentation
+  • Training
   • Integration Testing
       • Purpose of Integration Testing
@@ -104,7 +99,12 @@
       • Traceability Matrix
       • Validation Environment
       • Test Case Failure and Problem Resolution
+          • Automated Test Cases
+          • Manual Test Cases
       • Integration / Acceptance Testing Documentation
       • Definitions and Abbreviations
@@ -144,70 +144,44 @@
 Test Administrator
-In case of full automation (and no further testcases) - N/A, otherwise Test Administrators will supervise the Administrator execution of the test by the Testers and will review the test cases.
+In case of full automation (and no further manual test cases) - N/A, otherwise Test Administrators will supervise the execution of the manual test cases by the Testers and will review these test cases.
 Tester
-In case of full automation (and no further testcases) - N/A, otherwise Testers will execute the test cases and document the results.
+In case of full automation (and no further manual test cases) - N/A, otherwise Testers will execute the manual test cases and document the results.
-Developer
-Writes tests.
+Developer/SME
+Writes and, in case of using automation, implements test cases.

            4Levels of Testing

            -

            The Testing Approach and Strategy is adapted to the Agile Development Methodologies applied in Platforms. This means that the former LeVA Development, Functional and Requirements Testing will be also covered but grouped in a different classification: Unit Testing, Installation Testing, Integration Testing and Acceptance Testing. Unit testing is performed during development by the development engineers and documented in the Development Test Plan (C-DTP) and Report (C-DTR).

            -

            Installation Testing is aimed at checking the successful installation and configuration, as well as updating or uninstalling the software. This level of testing is usually executed automatically and in Platforms it is part of the Installation Test Plan (C-IVP) and Report (C-IVR).

            4.1Integration Testing

            -

            The objective of the Integration Testing level is to verify whether the combined Units work well together as a group. Integration testing is aimed at detecting the flaws in the interactions between the Units within a module, micro-services and/or systems.

            -

            In Platforms Integration Testing is part of the Combined Functional/Requirements Test Plan (C-CFTP) and Report (C-CFTR).

            +

            The objective of Integration Testing is to verify whether the applicable components (e.g. modules, micro-services and/or systems) work well together and detect flaws in their interactions.

            4.2Acceptance Testing

            -

            This is the last stage of the testing process, where the product is verified against the end user requirements (can be functional or non-functional ones) and for accuracy. Successfully performed acceptance testing is a prerequisite for the product release. This testing level focuses on overall system quality, for example, from content and UI (functional) to performance or security issues (non-functional).

            -

            Within an agile approach the Acceptance Criteria are well-defined upfront.

            -

            In Platforms Acceptance Testing is part of the Combined Functional/Requirements Test Plan (C-CFTP) and Report (C-CFTR).

            -

            As enunciated before, requirements and acceptant criteria can be functional and/or non-functional so Acceptance Testing can be split in two main groups: Functional Testing and Non-Functional Testing.

            - -

            4.2.1 Functional Testing

            -

            Functional testing is a type of software testing in which the system is tested against the functional (user) requirements and specifications. Functional testing ensures that the (user) requirements or specifications are properly satisfied by the application. This type of testing is particularly concerned with the result of processing. It focuses on simulation of actual system usage but does not develop any system structure assumptions.

            -

            It is basically defined as a type of testing which verifies that each function of the software application works in conformance with the requirement and specification. This testing is not concerned about the source code of the application. Each functionality of the software application is tested by providing appropriate test input, expecting the output and comparing the actual output with the expected output.

            -

            Some examples of functional testing types are: Unit Testing, Smoke Testing, Integration Testing, System Testing, Exploratory Testing, etc.

            - -

            4.2.2 Non-Functional Testing

            -

            Non-functional testing is a type of software testing that is performed to verify the Non-Functional requirements of the application or system. It verifies whether the behavior of the system is as per the requirement or not. It tests all the aspects which are not tested in Functional testing.

            -

            Non-functional testing is defined as a type of software testing to check non-functional aspects of a software application. It is designed to test the readiness of a system as per non-functional parameters which are never addressed by Functional testing. Non-functional testing is as important as Functional testing.

            -

            Some examples of functional testing types are: Performance Testing, Load Testing Stress Testing, Security Testing, Scalability Testing, Compatibility Testing, Usability Testing, etc.

            +

            Acceptance tests refer to functional or non-functional (such as system availability, performance, reliability) user requirements. Examples for non-functional acceptance tests are: load tests, performance tests, recovery tests.

            -

            5Operational Qualification Activities and Training

            +

            5Training

            {{{data.sections.sec5.content}}}
            -

            5.1Test Procedure 1: Verification of Operational Documentation

            -

            As part of the Integration Testing the following documentation will be verified for all relevant subjects listed in the Qualification Plan.

            - -

            5.1.1 Test Procedure 1.1: SOPs and Working Instructions

            -

            Verify that approved SOPs and Working Instructions addressing and regarding the production environment, are in place. They must be approved and effective prior to release for production use.

            - -

            5.1.2 Test Procedure 1.2: Manuals and other System Documentation

            -

            Verify that appropriate manuals and other system documentation exist for use in operating, maintaining, configuring, and/or troubleshooting of the system.

            - -

            5.2Test Procedure 2: Verification of Training Documentation

            -

            List the applicable procedures in which the Integration Testing participants must be trained before executing their portions of the Functional Testing Plan. Describe when and how the training will take place.

            + {{#if data.sections.sec5s2}} - +
            {{{data.sections.sec5s2.content}}}{{{content}}}
            -

            Verify that an approved training plan exists, if justified, for all personnel involved in operating and maintaining the Infrastructure System.

            + {{/if}}
            @@ -228,10 +202,10 @@

            7Acceptance Testing

            7.1Functional Testing

            -

            7.1.1 Purpose of Functional Testing

            +

            7.1.1 Purpose of Combined Functional and Requirements Testing

            The purpose of the combined functional/requirements testing is to confirm that the computerized system is capable of performing or controlling the activities of the processes as intended according to the user requirements in a reproducible and effective way, while operating in its specified operating environment.

            -

            7.1.2 Scope of Functional Testing

            +

            7.1.2 Scope of Combined Functional and Requirements Testing

            @@ -265,19 +239,25 @@

            8.2Test Execution

Test results shall be recorded in a way that an independent reviewer can compare the documented acceptance criteria against the (written or captured) test evidence and determine whether the test results meet these criteria.

            +

            If both automated and manual test cases exist, automated test cases will be executed before manual test cases. Successful execution of automated test cases (no failures) is the prerequisite to start execution of the manual test cases.

            + +

            Execution of Automated Test Cases

            +

            In case test execution is automated:

            -

            In the case that the test execution is fully automated:

            Jenkins (Technical Role) shall:

              -
            • execute the code base test cases
            • +
            • execute the test cases
            • record the test results and evidence after the execution and include them in the XUnit file following Good Documentation Practices
            • mark the test cases as a "Fail" or a "Pass"
            • stop the test execution if one of the test cases has failed
            • report back the test execution results to the Test Management Tool
            +

As the execution is fully automated, the Tester and Test Administrator roles described in section 3 "Roles and Responsibilities" do not apply.

            -

            In the case that the test execution is not fully automated:

            +

            Execution of Manual Test Cases

            +

            In case test execution is manual:

            +

            Testers shall:

            • execute test cases
            • @@ -286,7 +266,7 @@
            • provide comments for all failed test cases
            • sign and date each test in spaces provided after test execution
            • label any test output or evidence (e.g., screenshots, printouts and any additional pages) with test case number and test step number. Sign and date the output. If pages have successive page numbers signing and dating the first or last page is sufficient.
            • -
            • if any deviations from the test are encountered, follow the Test Case Failure and Problem Resolution (see section 10)
            • +
            • if any deviations from the test are encountered, follow the Test Case Failure and Problem Resolution (see section 11)

            If a test case is executed by more than one person (tester), it is required that each tester signs (signature or initials and date) each test step for traceability purpose.

            @@ -298,12 +278,13 @@

            Test execution and test result review must be independent, i.e. for any individual test case the Tester and the Test Administrator must be different individuals.

            +

            The training records of all testers should be verified prior to initiating testing.

            +
            {{{data.sections.sec7s1s2.content}}}
            {{{data.sections.sec8s2.content}}}
            -

            The training records of all testers should be verified prior to initiating testing.

            @@ -337,6 +318,10 @@
          1. a tester's error
          2. +

            11.1 Automated Test Cases

            +

            All discrepancies occurring during the test execution are automatically recorded in a designated discrepancy log. Failed automated test cases where the failure cannot be resolved within the Q environment are considered unacceptable. A move to P is not possible. These failures must be resolved via a change control in the Dev environment.

            + +

            11.2 Manual Test Cases

            Upon failing a test case, the Tester shall always contact the Test Administrator immediately to review the problem. The Test Administrator shall decide how to proceed, since test cases may build upon each other and a failure may cascade through several cases.

            The Test Administrator will also record all discrepancies that occur during the test execution in a designated discrepancy log. The Test Administrator is responsible for determining failure resolutions and whether a failure represents an unacceptable flaw in the system. The Test Administrator will document the result of this determination in the discrepancy log.

            The final evaluation of remaining risks and unresolved critical failures will be assessed in the validation summary report.