DOSMA is an AI-powered Python library for medical image analysis. This includes, but is not limited to:
- image processing (denoising, super-resolution, registration, segmentation, etc.)
- quantitative fitting and image analysis
- anatomical visualization and analysis (patellar tilt, femoral cartilage thickness, etc.)
We hope that this open-source pipeline will be useful for quick anatomy/pathology analysis and will serve as a hub for adding support for analyzing different anatomies and scan sequences.
DOSMA requires Python 3.6+. The core module depends on numpy, nibabel, nipype, pandas, pydicom, scikit-image, scipy, PyYAML, and tqdm.
Additional AI features can be unlocked by installing tensorflow and keras. To enable built-in registration functionality, download elastix. Details can be found in the setup documentation.
To install DOSMA, run:
```bash
pip install dosma

# To install with AI support
pip install dosma[ai]
```
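As a quick sanity check after installing (a minimal sketch; it assumes dosma exposes the conventional `__version__` attribute):

```python
# Verify the installation by importing the package and printing its version.
# Assumes a standard __version__ attribute is exposed.
import dosma
print(dosma.__version__)
```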
If you would like to contribute to DOSMA, we recommend you clone the repository and install DOSMA with pip in editable mode.
```bash
git clone [email protected]:ad12/DOSMA.git
cd DOSMA
pip install -e '.[dev,docs]'
make dev
```
To autoformat code, run tests, and build the documentation before contributing, run:

```bash
make autoformat test build-docs
```
DOSMA provides efficient readers for DICOM and NIfTI formats built on nibabel and pydicom. Multi-slice DICOM data can be loaded in parallel with multiple workers and structured into the appropriate 3D volume(s). For example, multi-echo and dynamic contrast-enhanced (DCE) MRI scans have multiple volumes acquired at different echo times and trigger times, respectively. These can be loaded into multiple volumes with ease:
```python
import dosma as dm

multi_echo_scan = dm.load("/path/to/multi-echo/scan", group_by="EchoNumbers", num_workers=8, verbose=True)
dce_scan = dm.load("/path/to/dce/scan", group_by="TriggerTime")
```
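For single-file formats like NIfTI, dm.load can be pointed at the file directly. A minimal sketch (the path is hypothetical, and it assumes a single MedicalVolume is returned for single-volume data):

```python
import dosma as dm

# Hypothetical path; dm.load dispatches on the input type (DICOM directory vs. NIfTI file).
mv = dm.load("/path/to/scan.nii.gz")

# Spatial attributes travel with the voxel data.
print(mv.shape)
print(mv.affine)
```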
DOSMA's MedicalVolume data structure supports array-like operations (arithmetic, slicing, etc.) on medical images while preserving spatial attributes and accompanying metadata. This structure supports NumPy interoperability, intelligent reformatting, fast low-level computations, and native GPU support. For example, given MedicalVolumes mvA and mvB, we can do the following:
```python
import numpy as np
import dosma as dm

# Reformat image into Superior->Inferior, Anterior->Posterior, Left->Right directions.
mvA = mvA.reformat(("SI", "AP", "LR"))

# Get and set metadata.
study_description = mvA.get_metadata("StudyDescription")
mvA.set_metadata("StudyDescription", "A sample study")

# Perform NumPy operations like you would on image data.
rss = np.sqrt(mvA**2 + mvB**2)

# Move to GPU 0 for CuPy operations.
mv_gpu = mvA.to(dm.Device(0))

# Take slices. Metadata will be sliced appropriately.
mv_subvolume = mvA[10:20, 10:20, 4:6]
```
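As a brief sketch of the NumPy interoperability mentioned above (it assumes, per the DOSMA documentation, that a MedicalVolume can be constructed from an ndarray and an affine matrix, and that np.asarray recovers the underlying array; the data here is synthetic):

```python
import numpy as np
import dosma as dm

# Build a MedicalVolume from a synthetic array and an identity affine.
arr = np.random.rand(64, 64, 16)
mv = dm.MedicalVolume(arr, affine=np.eye(4))

# Arithmetic behaves like NumPy but returns MedicalVolumes with metadata intact.
mv_scaled = 2 * mv + 1
assert mv_scaled.shape == mv.shape

# Recover the raw ndarray when a plain array is needed (relies on the NumPy interop noted above).
arr_back = np.asarray(mv_scaled)
```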
DOSMA is built to be a hub for machine/deep learning models. A complete list of models and corresponding publications can be found in the DOSMA documentation.
We can use one of the knee segmentation models to segment a MedicalVolume mv, using model weights downloaded locally (the local path is stored in weights below):
```python
from dosma.models import IWOAIOAIUnet2DNormalized

# Reformat such that the sagittal plane is the last dimension.
mv = mv.reformat(("SI", "AP", "LR"))

# Run segmentation; `weights` is the local path to the downloaded model weights.
model = IWOAIOAIUnet2DNormalized(input_shape=mv.shape[:2] + (1,), weights_path=weights)
masks = model.generate_mask(mv)
```
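As a possible follow-up, the segmented tissue volume can be estimated from the voxel spacing. This is a sketch only: it assumes generate_mask returns binary masks as MedicalVolumes keyed by tissue name and that MedicalVolume exposes pixel_spacing; the "fc" key is illustrative, so check the model's documentation for the exact output format.

```python
import numpy as np

# Continues the snippet above. Assumption: `masks` maps tissue names to binary MedicalVolumes;
# the "fc" (femoral cartilage) key is illustrative.
fc_mask = masks["fc"]

# Voxel volume in mm^3 from the pixel spacing, then total segmented volume.
voxel_volume_mm3 = float(np.prod(fc_mask.pixel_spacing))
tissue_volume_mm3 = float(np.asarray(fc_mask).sum()) * voxel_volume_mm3
print(f"Estimated femoral cartilage volume: {tissue_volume_mm3:.1f} mm^3")
```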
DOSMA supports parallelization for compute-heavy operations, like curve fitting and image registration. Image registration is supported through the elastix/transformix libraries. For example, we can use multiple workers to register volumes to a target, and use the registered outputs for per-voxel monoexponential fitting:
```python
import numpy as np
import dosma as dm

# Register images mvA, mvB, mvC to target image mv_tgt in parallel.
_, (mvA_reg, mvB_reg, mvC_reg) = dm.register(
    mv_tgt,
    moving=[mvA, mvB, mvC],
    parameters="/path/to/elastix/registration/file",
    num_workers=3,
    return_volumes=True,
    show_pbar=True,
)

# Perform monoexponential fitting.
def monoexponential(x, a, b):
    return a * np.exp(b * x)

fitter = dm.CurveFitter(
    monoexponential,
    num_workers=4,
    p0={"a": 1.0, "b": -1/30},
)
popt, r2 = fitter.fit(x=[1, 2, 3, 4], y=[mv_tgt, mvA_reg, mvB_reg, mvC_reg])
a_fit, b_fit = popt[..., 0], popt[..., 1]
```
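As a brief worked example of what the fit gives you: for y = a * exp(b * x), the decay time constant is -1/b (in the units of x), so the fitted b map can be turned into a relaxation-time map. A minimal sketch, assuming np.asarray recovers the parameter maps as arrays and using an illustrative (non-default) goodness-of-fit cutoff:

```python
import numpy as np

# Continues the snippet above; pull the fitted maps out as plain arrays.
b = np.asarray(b_fit)
r2_arr = np.asarray(r2)

# For y = a * exp(b * x), the time constant is -1/b (same units as x).
with np.errstate(divide="ignore", invalid="ignore"):
    time_constant_map = -1.0 / b

# Discard voxels with poor fits (0.9 is an illustrative threshold, not a DOSMA default).
time_constant_map[r2_arr < 0.9] = np.nan
```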
If you use DOSMA in your work, please cite:

```bibtex
@inproceedings{desai2019dosma,
  title={DOSMA: A deep-learning, open-source framework for musculoskeletal MRI analysis},
  author={Desai, Arjun D and Barbieri, Marco and Mazzoli, Valentina and Rubin, Elka and Black, Marianne S and Watkins, Lauren E and Gold, Garry E and Hargreaves, Brian A and Chaudhari, Akshay S},
  booktitle={Proc 27th Annual Meeting ISMRM, Montreal},
  pages={1135},
  year={2019}
}
```
In addition to DOSMA, please also consider citing the work that introduced the method used for analysis.