General Testing Procedure
As with any software project, it is important not only that the software functions, but also that it produces accurate results. In our case, we go through development cycles in which we add a new section to our E2E exporter. Because E2E is a well-defined specification for the output, we must ensure not only that our output is valid XML, but also that its content is valuable and valid to the recipients.
Our general testing procedure can be broken down into the following three steps:
- Written Specification Testing
- Manual Testing
- Automated Testing
The first step in testing the output of a new section is making sure that the data we extract from the source can satisfy all of the written specification's requirements. That is, we verify to the best of our abilities that the mapping between source and destination is correct and that all functional requirements of the written specification are met. This step can currently be done at the same time as the mapping is defined and created. Should any discrepancies arise, they are raised with the team or with the appropriate people in order to resolve them.
Manual testing is done once there is functioning output. At this stage, the developer(s) read through the output and look for any errors or flaws. They note each observed flaw, track down its cause, and fix it. This cycle repeats until all noticed bugs have been dealt with. This step can be done while the feature is being implemented.
The final step uses automation tools to catch any bugs or errors that may have slipped past manual testing. In our case, we feed example output documents into an XSD validator and check whether it reports any errors. When validation errors are found, we return to the code to fix those bugs and repeat until all of them are resolved.
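As a rough illustration of this automated step, the sketch below checks that an export document is at least well-formed XML. All names here are hypothetical; note that Python's standard library only confirms well-formedness, so validating against the actual E2E XSD would additionally need a schema-aware library such as lxml (`lxml.etree.XMLSchema`), which is omitted to keep the sketch self-contained.

```python
# Minimal sketch, assuming the export arrives as a string of XML.
# Well-formedness only; true XSD validation needs a schema-aware library.
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return (ok, error) where error describes the first parse failure."""
    try:
        ET.fromstring(xml_text)
        return True, None
    except ET.ParseError as err:
        return False, str(err)

ok, err = is_well_formed("<record><med>aspirin</med></record>")   # ok == True
bad, why = is_well_formed("<record><med>aspirin</record>")        # bad == False
```

In the real cycle, the error messages collected here would point the developer back to the exporter code, and the check would be re-run until the document passes cleanly.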
When the validator no longer detects any errors, we can be reasonably assured that the output conforms to the specification. Manual testing, in turn, ensures that the content of the output is as expected; for example, we want the medication list to hold entries such as Tylenol or aspirin, but not entries like liver or kidney. Lastly, written specification testing gives us reasonable certainty that the content we output matches the content required by the specification.
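The content-level check described above can also be partially automated. The following sketch is purely illustrative: the element name `medication` and the expected/disallowed sets are assumptions for the example, not names taken from the E2E specification.

```python
# Hypothetical content check: does the export's medication list contain the
# expected drugs, and none of the organ names that should never appear there?
import xml.etree.ElementTree as ET

EXPECTED = {"tylenol", "aspirin"}    # medications we expect to see
DISALLOWED = {"liver", "kidney"}     # organs, not medications

def medication_list_ok(xml_text):
    root = ET.fromstring(xml_text)
    meds = {m.text.strip().lower() for m in root.iter("medication")}
    return EXPECTED <= meds and not (meds & DISALLOWED)

sample = ("<export><medication>Tylenol</medication>"
          "<medication>aspirin</medication></export>")
result = medication_list_ok(sample)  # result == True
```

A check like this does not replace a developer reading the output, but it keeps previously spotted content mistakes from silently reappearing in later development cycles.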
These three "steps" to testing can overlap each other, but it is important that they all be done at some point in each development cycle in order to ensure a certain level of quality in the final result.
SCOOP is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.