Deployment Architecture
This wiki page is meant to describe the architecture of a SCOOP initial deployment, covering the hardware and software requirements, hardware configuration, and any decisions made along the way.
- LEADLAB will maintain hardware and software in the lab that is identical to what has been deployed to the clinics. This duplicate configuration will be used for simulation, testing and diagnosis.
- OSCAR is locally installed in the test clinics
- A backup system is available in all test clinics
- OSCAR version 12.1 is used and is periodically updated with fixes
- We will modify neither the primary nor the backup installation of OSCAR
- We will connect to the backup system's database with our own extended version of OSCAR 12.1 (which has the scheduled E2E export) via a "read only" connection provided by the OSPs
- We will provide hardware for end points to the test clinics
- The OSP will be available to install the endpoint
- The SCOOP export will occur after the backup is completed
- The SCOOP machine will be running Ubuntu 12.04 LTS 64 bit.
At a high level, we expect to see the following flow within the practice:
Live OSCAR EMR ---(Periodic Backup Mirroring)--->
Backup OSCAR EMR ---(Shadow OSCAR instance with scheduled E2E Export via HTTP POST)--->
hQuery Gateway ---(Add/Update Patients)---> mongodb
At least within the practice, all data should originate from the live OSCAR EMR. From there, a periodic backup/mirroring will execute as previously configured by the OSP. After the backup OSCAR EMR server finishes receiving the update, a "shadow OSCAR instance" (which has the E2E export functions) will connect to the backup system's database via a read-only connection and send E2E patient exports via HTTP POST, either directly to the hQuery Gateway or to an intermediate REST-compatible adapter for preprocessing. (WE WILL NOT MODIFY THE BACKUP SYSTEM.)
From there, the E2E POST will be processed by the hQuery Gateway, which will either update the existing record or add a new record to mongodb. Once the operation is complete, mongodb holds the patient records and is ready for querying from the hub (composer).
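The POST step above can be sketched in shell. This is a hypothetical illustration only: the gateway URL, port, path, and file name below are assumptions, not values defined by the project.

```shell
# Hypothetical sketch: send one exported E2E (XML) document to the gateway.
post_record() {
  # $1 = path to an exported E2E document, $2 = gateway URL
  curl -s -X POST \
       -H "Content-Type: application/xml" \
       --data-binary "@$1" \
       "$2"
}

# Example with hypothetical values:
# post_record /tmp/patient-123.xml http://gateway.local:3001/records
```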
Given that the query-gateway software performs well in a virtual machine with 1 CPU, 1024 MB of RAM and a 20GB virtual disk, any business class desktop or rack mount server currently available will suffice. Since the endpoint has the additional requirement of supporting an instance of OSCAR, it needs at least a dual-core processor, 4GB of RAM and a 100GB hard drive. This is still within the envelope of what can be expected of any currently available business class server.
It is crucial that the server be Linux compatible and that installing Linux not void the warranty. Also, the availability of hardware support (preferably onsite) for an extended period is required. Moreover, the query-gateway server should be sufficiently powerful that it can be re-purposed for use in the lab after it returns from the clinic.
The endpoint server must have at least one Ethernet port. Wireless connections should not be used, for security and reliability reasons. We expect the backup OSCAR server to also have at least one Ethernet port. Both the endpoint and the backup OSCAR server should reside on the same physical network and should be able to see each other on the network.
- The "shadow OSCAR server" existing on the "endpoint hardware" shall be able to connect to the backup system's database and export E2E patient records after the clinic completes its backup routine.
- Since we depend on a scheduler module within OSCAR to define the time of activation, we will work with the OSP to determine the appropriate time for exporting.
- The "shadow OSCAR server" is effectively another full instance of OSCAR. Since we do not want people to log in and use the shadow OSCAR instance, this version of OSCAR will not allow login sessions.
- OSCAR will export all E2E patient records per export session.
- The query-gateway will update mongodb records on a patient by patient basis rather than deleting the entire database and refreshing it.
- The query-gateway should have the ability to wipe all records off of mongodb if asked to do so.
- The current software can handle OSCAR posting E2E patient records directly to the query-gateway via the network.
We may consider pursuing a differential export approach from OSCAR in the future in order to scale the export load.
Alternately, OSCAR can generate E2E patient record files that can be transferred to the query-gateway from the filesystem. To facilitate direct posting via the network, the query-gateway will need to support http on the local network even though it uses https for all external access. To pass the files to the query-gateway from a filesystem accessible to OSCAR, a Unix shell script using ssh would suffice and would be sufficiently secure.
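The filesystem alternative above could look something like the following sketch. Every host, user, and path here is an assumption for illustration, not part of the project's configuration.

```shell
# Hypothetical sketch: copy exported E2E files to the query-gateway over ssh.
push_exports() {
  # $1 = local directory holding exported E2E files
  # $2 = destination in scp form, e.g. user@host:/path
  scp -q "$1"/*.xml "$2"
}

# Example with hypothetical values:
# push_exports /var/lib/oscar/e2e-exports scoop@gateway.local:/incoming
```

Using ssh/scp keeps the transfer encrypted on the local network without requiring the gateway to expose an unsecured HTTP listener.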
Firewall configuration and security related to posting records to and deleting records in the mongodb are to be determined. If a non-secured version of the server runs within the firewall for communication with OSCAR, it must operate on ports that are blocked by the firewall. Only the secured server should be accessible via https through the firewall to the internet.
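One way to express the port rule above is with iptables. The port number (3001) and LAN range (192.168.1.0/24) below are assumed placeholder values; the actual configuration is still to be determined.

```shell
# Hypothetical iptables sketch: the gateway's non-secured HTTP port is
# reachable only from the clinic LAN, while only https (443) is open
# through the firewall to the internet.
apply_firewall_rules() {
  # allow the shadow OSCAR host on the LAN to POST over plain http
  iptables -A INPUT -p tcp --dport 3001 -s 192.168.1.0/24 -j ACCEPT
  # block that port for everyone else
  iptables -A INPUT -p tcp --dport 3001 -j DROP
  # only the secured server is reachable from outside, via https
  iptables -A INPUT -p tcp --dport 443 -j ACCEPT
}
```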
- The endpoint server needs to run the latest stable version of the hQuery gateway software at all times.
- Updating the endpoint should minimally impact its performance and availability. We expect the software to restart only when an update is detected from GitHub.
- The act of updating should be automatic and performed only when no queries are running and no new patients are being imported.
We are considering a script, run via cron job, that periodically checks GitHub for updates to the software. We can detect an update by comparing the current commit hash with the newly fetched hash; if they differ, the script will start the rebuild and restart process.
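The hash-comparison check could be sketched as below. The repository path, branch, and rebuild/restart commands are assumptions and would be site-specific.

```shell
# Hypothetical cron job sketch: rebuild and restart the gateway only when
# the upstream branch has new commits.
needs_update() {
  # $1 = local commit hash, $2 = remote commit hash
  [ "$1" != "$2" ]
}

check_and_update() {
  # $1 = path to the gateway's git checkout (assumed location)
  cd "$1" || return 1
  local_hash=$(git rev-parse HEAD)
  git fetch -q origin
  remote_hash=$(git rev-parse origin/master)
  if needs_update "$local_hash" "$remote_hash"; then
    git merge -q origin/master
    # rebuild and restart steps are site-specific; placeholders only:
    # make build && sudo service gateway restart
  fi
}
```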
We should also have a script of some form that checks that the gateway software is operational and accessible, and initiates a restart if the software crashes or goes into a hung state.
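Such a watchdog could be as simple as the sketch below, run periodically from cron. The health-check URL and the restart command are hypothetical.

```shell
# Hypothetical watchdog sketch: restart the gateway if its HTTP port
# stops responding.
gateway_alive() {
  # Consider the gateway alive if it answers HTTP within 5 seconds.
  curl -s -m 5 -o /dev/null "$1"
}

watchdog() {
  if ! gateway_alive "$1"; then
    echo "gateway down, restarting"
    # site-specific restart; placeholder only:
    # sudo service gateway restart
  fi
}
```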
Some software monitoring tools, such as nagios3, could be considered. However, for the Scoophealth team to view it, Nagios would have to allow external access, which represents a significant security and privacy concern.
If the clinic is already using Nagios internally, adding Nagios clients to our host so it can be monitored by their Nagios service would be worthwhile, even though it would not provide a direct view of system status to the Scoophealth team. Nagios could be configured with a scoophealth user that has limited monitoring access to just the endpoint, if that is a viable option to pursue.
A home-grown background script for monitoring and sending out periodic status updates could be used as well. Since the endpoint will poll a secure git server for configuration information, monitoring the git server logs can detect when the endpoint has failed to pull as expected, and the OSP can be contacted for a more detailed investigation into the cause of the failure.
At least for the initial pilot release, we will only be working with one or two clinics serviced by the same OSP. Because of this, we will need to coordinate closely with the OSP in order to ensure that we minimally impact the live operations of the clinic.
For now, we have decided that OSCAR will export all E2E patient records every time the scheduled job is executed. A differential approach will be considered in later iterations to factor in scalability and load balancing.
A stripped down version of OSCAR will run on the endpoint machine. Its purpose is to only handle exporting E2E documents from the provided read-only database. This version of OSCAR will not allow login sessions to prevent unauthorized viewing of data.
- What version/build of OSCAR is installed at the clinics?
- What is the update process for the managed OSCAR instances?
- How are updates to RELEASE_12_1 codebase handled?
- Can we get read only database connection for the ENDPOINT machine?
- What schema updates have been applied that might impact our E2E exports?
- Is a live backup database available?
- If a live backup database is not available, can we have access to MySQL dump files?
- Can we be notified when a backup has completed?
- Can we get remote access for maintenance of ENDPOINT machine?
- How exactly will we conduct deployment together with OSP?
- For instance, do they wish to configure our OSCAR instance?
- How are you monitoring your OSCAR servers currently?
- Can we piggyback monitoring of the ENDPOINT?
- What degree of automation does the OSP want in terms of ENDPOINT software updating?
- How should unexpected errors be handled on ENDPOINT?
- E.g., should errors such as stack traces trigger notifications?
SCOOP is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.