This repository has been archived by the owner on Feb 11, 2023. It is now read-only.

update Segm (#8)
 * drop labels for train & statistic @segm
 * draw: overlap image-segm
 * outsource metric for classif. hyper-param
 * visual train samples (label purity)
 * parallelise label filter
 * fix setup for installing
 * fix loading input list
 * remove LOO @segm
 * minor renaming
 * update contrib.
Borda authored May 8, 2018
1 parent 819daf1 commit fd248da
Showing 32 changed files with 573 additions and 402 deletions.
4 changes: 3 additions & 1 deletion .shippable.yml
@@ -62,7 +62,7 @@ script:

# SEGMENTATION section
- rm -r -f results && mkdir results
- - python experiments_segmentation/run_compute_stat_annot_segm.py --visual
+ - python experiments_segmentation/run_compute_stat_annot_segm.py -a "data_images/drosophila_ovary_slice/annot_struct/*.png" -s "data_images/drosophila_ovary_slice/segm/*.png" --visual
- python experiments_segmentation/run_segm_slic_model_graphcut.py -i "data_images/drosophila_disc/image/img_[5,6].jpg" -cfg ./experiments_segmentation/sample_config.json --visual
- python experiments_segmentation/run_segm_slic_classif_graphcut.py -l data_images/drosophila_ovary_slice/list_imgs-annot-struct_short.csv -i "data_images/drosophila_ovary_slice/image/insitu41*.jpg" -cfg ./experiments_segmentation/sample_config.json --visual

@@ -96,3 +96,5 @@ after_success:
- coverage xml -o $COVERAGE_REPORTS/coverage.xml
- codecov # public repository on Travis CI
- coverage report

+ - cd .. && python -c "import imsegm.descriptors"
1 change: 1 addition & 0 deletions .travis.yml
@@ -54,3 +54,4 @@ after_success:
- coverage xml
- python-codacy-coverage -r coverage.xml
- coverage report
+ - cd .. && python -c "import imsegm.descriptors"
42 changes: 21 additions & 21 deletions README.md
@@ -10,14 +10,14 @@

## Superpixel segmentation with GraphCut regularisation

Image segmentation is widely used as an initial phase of many image processing tasks in computer vision and image analysis. Many recent segmentation methods use superpixels because they reduce the size of the segmentation problem by an order of magnitude. Also, features on superpixels are much more robust than features on pixels only. We use spatial regularization on superpixels to make segmented regions more compact. The segmentation pipeline comprises (i) computation of superpixels; (ii) extraction of descriptors such as color and texture; (iii) soft classification, using a standard classifier for supervised learning, or the Gaussian Mixture Model for unsupervised learning; (iv) final segmentation using Graph Cut. We use this segmentation pipeline on real-world applications in medical imaging (see sample [images](./images)). We also show that [unsupervised segmentation](./notebooks/segment-2d_slic-fts-model-gc.ipynb) is sufficient for some situations, and provides similar results to those obtained using [trained segmentation](notebooks/segment-2d_slic-fts-classif-gc.ipynb).

![schema](figures/schema_slic-fts-clf-gc.jpg)
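The four pipeline stages can be sketched with off-the-shelf tools. The snippet below is a minimal illustration using scikit-image SLIC and a scikit-learn Gaussian Mixture Model; the parameter values are illustrative, not the project's defaults, and the final GraphCut regularisation step is omitted:

```python
import numpy as np
from skimage import data, segmentation
from sklearn.mixture import GaussianMixture

img = data.astronaut()  # sample RGB image

# (i) superpixels via SLIC
slic = segmentation.slic(img, n_segments=200, compactness=10)

# (ii) a simple descriptor: mean colour per superpixel
sp_ids = np.unique(slic)
feats = np.array([img[slic == i].mean(axis=0) for i in sp_ids])

# (iii) unsupervised soft classification with a Gaussian Mixture Model
gmm = GaussianMixture(n_components=3, random_state=0).fit(feats)
sp_classes = gmm.predict(feats)

# project superpixel classes back to pixels; stage (iv), GraphCut,
# would spatially regularise this labelling and is omitted here
seg = np.zeros(slic.shape, dtype=int)
for cls, i in zip(sp_classes, sp_ids):
    seg[slic == i] = cls
```

The trained-classifier variant replaces the GMM with any supervised scikit-learn classifier fitted on annotated superpixels.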

**Sample ipython notebooks:**
* [Supervised segmentation](notebooks/segment-2d_slic-fts-classif-gc.ipynb) requires training annotation
* [Unsupervised segmentation](notebooks/segment-2d_slic-fts-model-gc.ipynb) just asks for expected number of classes
- * **partially annotated images** with missing annotatio is marked by a negative number
+ * **partially annotated images** with missing annotation is marked by a negative number

**Illustration**

@@ -44,7 +44,7 @@ Borovec J., Kybic J., Nava R. (2017) **Detection and Localization of Drosophila

## Superpixel Region Growing with Shape prior

- Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed approach differs from standard region growing in three essential aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speedup. Second, our method uses learned statistical shape properties which encourage growing leading to plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as an energy minimization and is solved either greedily, or iteratively using GraphCuts.
+ Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed approach differs from standard region growing in three essential aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speedup. Second, our method uses learned statistical shape properties which encourage growing leading to plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as energy minimization and is solved either greedily, or iteratively using GraphCuts.

**Sample ipython notebooks:**
* [General GraphCut](notebooks/egg_segment_graphcut.ipynb) from given centers and initial structure segmentation.
@@ -93,7 +93,7 @@ We have implemented cython version of some functions, especially computing descr
```bash
python setup.py build_ext --inplace
```
- If loading of compiled descriptors in cython fails, it is automatically swapped to numpy which gives the same results, but it is significantly slower.
+ If loading of compiled descriptors in `cython` fails, it is automatically swapped to `numpy` which gives the same results, but it is significantly slower.
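The fallback described here follows a common try/except import pattern. The sketch below is generic; the module and function names are illustrative, not `imsegm`'s actual layout:

```python
import numpy as np

def _descriptors_numpy(img):
    """Pure-numpy fallback: same results as the compiled version, slower."""
    return np.asarray(img, dtype=float).mean(axis=(0, 1))

try:
    # compiled cython implementation, produced by `build_ext --inplace`;
    # the module name here is hypothetical
    from _fast_descriptors import compute_descriptors
except ImportError:
    # swap in the numpy implementation transparently
    compute_descriptors = _descriptors_numpy
```

Callers import `compute_descriptors` and never need to know which backend was selected.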

**Installation**

@@ -191,40 +191,40 @@ We utilize (un)supervised segmentation according to given training examples or s
* For both experiments you can evaluate segmentation results.
```bash
python experiments_segmentation/run_compute-stat_annot-segm.py \
- -annot "./data_images/drosophila_ovary_slice/annot_struct/*.png" \
- -segm "./results/experiment_segm-supervise_ovary/*.png" \
- -img "./data_images/drosophila_ovary_slice/image/*.jpg" \
- -out ./results/evaluation
+ -a "./data_images/drosophila_ovary_slice/annot_struct/*.png" \
+ -s "./results/experiment_segm-supervise_ovary/*.png" \
+ -i "./data_images/drosophila_ovary_slice/image/*.jpg" \
+ -o ./results/evaluation --visual
```
![visual](figures/segm-visual_D03_sy04_100x.jpg)

The previous two (un)supervised segmentations accept a [configuration file](experiments_segmentation/sample_config.json) (JSON) via the parameter `-cfg`, with some extra parameters that were not passed as arguments, for instance:
```json
{
  "slic_size": 35,
  "slic_regul": 0.2,
  "features": {"color_hsv": ["mean", "std", "eng"]},
  "classif": "SVM",
  "nb_classif_search": 150,
  "gc_edge_type": "model",
  "gc_regul": 3.0,
  "run_LOO": false,
  "run_LPO": true,
  "cross_val": 0.1
}
```
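The effect of `-cfg` can be thought of as overriding a set of defaults, so the JSON file only needs to list the parameters that differ. A minimal sketch of that idea; the default values and function name are illustrative, not the project's actual loader:

```python
import json

# illustrative defaults in the spirit of the sample config above
DEFAULT_CONFIG = {
    "slic_size": 30,
    "slic_regul": 0.3,
    "classif": "RandForest",
    "gc_regul": 1.0,
}

def load_config(path, defaults=DEFAULT_CONFIG):
    """Return the defaults updated by the values found in a JSON file."""
    params = dict(defaults)  # copy so the defaults stay untouched
    with open(path) as fp:
        params.update(json.load(fp))
    return params
```

With the sample file above, such a loader would switch the classifier to SVM while keeping every default the file does not mention.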

### Center detection and ellipse fitting

- In general, the input is a formatted list (CSV file) of input images and annotations. Another option is set `-list none` and then the list is paired from given paths to images and annotations.
+ In general, the input is a formatted list (CSV file) of input images and annotations. Another option is set `-list none` and then the list is paired with given paths to images and annotations.

**Experiment sequence is following:**

1. We can create the annotation completely manually or use the following script, which uses annotation of individual objects and creates the zones automatically.
```bash
python experiments_ovary_centres/run_create_annotation.py
```
- 1. With zone annotation, we train a classifier for center candidate prediction. The annotation can be a CSV file with annotated centers as points, and the zone of positive examples is set uniformly as the circular neighborhood around these points. Another way (preferable) is to use annotated image with marked zones for positive, negative and neutral examples.
+ 1. With zone annotation, we train a classifier for center candidate prediction. The annotation can be a CSV file with annotated centers as points, and the zone of positive examples is set uniformly as the circular neighborhood around these points. Another way (preferable) is to use an annotated image with marked zones for positive, negative and neutral examples.
```bash
python experiments_ovary_centres/run_center_candidate_training.py -list none \
-segs "./data_images/drosophila_ovary_slice/segm/*.png" \
@@ -269,7 +269,7 @@ In general, the input is a formatted list (CSV file) of input images and annotat

![ellipse fitting](figures/insitu7544_ellipses.jpg)

- ### Region growing with shape prior
+ ### Region growing with a shape prior

In case you do not have estimated object centers, you can use [plugins](ij_macros) for landmarks import/export for [Fiji](http://fiji.sc/).

2 changes: 1 addition & 1 deletion circle.yml
@@ -48,7 +48,7 @@ test:
- python handling_annotations/run_segm_annot_relabel.py -imgs "./data_images/drosophila_ovary_slice/center_levels/*.png" -out ./results/relabel_center_levels

# SEGMENTATION section
- - python experiments_segmentation/run_compute_stat_annot_segm.py --visual
+ - python experiments_segmentation/run_compute_stat_annot_segm.py -a "data_images/drosophila_ovary_slice/annot_struct/*.png" -s "data_images/drosophila_ovary_slice/segm/*.png" --visual
- python experiments_segmentation/run_segm_slic_model_graphcut.py -i "data_images/drosophila_disc/image/img_[5,6].jpg" -cfg ./experiments_segmentation/sample_config.json --visual
- python experiments_segmentation/run_segm_slic_classif_graphcut.py -l data_images/drosophila_ovary_slice/list_imgs-annot-struct_short.csv -i "data_images/drosophila_ovary_slice/image/insitu41*.jpg" -cfg ./experiments_segmentation/sample_config.json --visual

10 changes: 4 additions & 6 deletions docs/CONTRIBUTING.md
@@ -57,10 +57,8 @@ Here's the long and short of it:
* Refer to array dimensions as (plane), row, column, not as x, y, z. See :ref:`Coordinate conventions <numpy-images-coordinate-conventions>` in the user guide for more information.
* Functions should support all input image dtypes. Use utility functions such as ``img_as_float`` to help convert to an appropriate type. The output format can be whatever is most efficient. This allows us to string together several functions into a pipeline
* Use ``Py_ssize_t`` as data type for all indexing, shape and size variables in C/C++ and Cython code.
- * Use relative module imports, i.e. ``from .._shared import xyz`` rather than ``from skimage._shared import xyz``.
* Wrap Cython code in a pure Python function, which defines the API. This improves compatibility with code introspection tools, which are often not aware of Cython code.
- * For Cython functions, release the GIL whenever possible, using
- ``with nogil:``.
+ * For Cython functions, release the GIL whenever possible, using ``with nogil:``.
## Testing
@@ -76,12 +74,12 @@ the library is installed in development mode::
```
Now, run all tests using::
```
- $ PYTHONPATH=. pytest pyImSegm
+ $ pytest -v pyImSegm
```
Use ``--doctest-modules`` to run doctests.
For example, run all tests and all doctests using::
```
- $ PYTHONPATH=. pytest --doctest-modules --with-xunit --with-coverage pyImSegm
+ $ pytest -v --doctest-modules --with-xunit --with-coverage pyImSegm
```
## Test coverage
@@ -92,7 +90,7 @@ To measure the test coverage, install `pytest-cov <http://pytest-cov.readthedocs
```
$ coverage report
```
- This will print a report with one line for each file in `skimage`,
+ This will print a report with one line for each file in `imsegm`,
detailing the test coverage::
```
Name Stmts Exec Cover Missing
16 changes: 8 additions & 8 deletions experiments_ovary_centres/run_center_clustering.py
@@ -47,15 +47,15 @@
'DBSCAN_max_dist': 50,
'DBSCAN_min_samples': 1,
}
- PARAMS = run_train.CENTER_PARAMS
- PARAMS.update(CLUSTER_PARAMS)
- PARAMS.update({
- 'path_expt': os.path.join(PARAMS['path_output'],
- FOLDER_EXPERIMENT % PARAMS['name']),
+ DEFAULT_PARAMS = run_train.CENTER_PARAMS
+ DEFAULT_PARAMS.update(CLUSTER_PARAMS)
+ DEFAULT_PARAMS.update({
+ 'path_expt': os.path.join(DEFAULT_PARAMS['path_output'],
+ FOLDER_EXPERIMENT % DEFAULT_PARAMS['name']),
'path_images': os.path.join(run_train.PATH_IMAGES, 'image', '*.jpg'),
'path_segms': os.path.join(run_train.PATH_IMAGES, 'segm', '*.png'),
- 'path_centers': os.path.join(PARAMS['path_output'],
- FOLDER_EXPERIMENT % PARAMS['name'],
+ 'path_centers': os.path.join(DEFAULT_PARAMS['path_output'],
+ FOLDER_EXPERIMENT % DEFAULT_PARAMS['name'],
'candidates', '*.csv')
})

@@ -227,5 +227,5 @@ def main(params):

if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
- params = run_train.arg_parse_params(PARAMS)
+ params = run_train.arg_parse_params(DEFAULT_PARAMS)
main(params)
12 changes: 6 additions & 6 deletions experiments_ovary_centres/run_center_evaluation.py
@@ -53,18 +53,18 @@

FOLDER_ANNOT = 'annot_user_stage-%s'
FOLDER_ANNOT_VISUAL = 'annot_user_stage-%s___visual'
- PARAMS = run_train.CENTER_PARAMS
- PARAMS.update({
+ DEFAULT_PARAMS = run_train.CENTER_PARAMS
+ DEFAULT_PARAMS.update({
'stages': [(1, 2, 3, 4, 5),
(2, 3, 4, 5),
(1, ), (2, ), (3, ), (4, ), (5, )],
'path_list': '',
- 'path_centers': os.path.join(os.path.dirname(PARAMS['path_centers']),
+ 'path_centers': os.path.join(os.path.dirname(DEFAULT_PARAMS['path_centers']),
'*.csv'),
'path_infofile': os.path.join(run_train.PATH_IMAGES,
'info_ovary_images.txt'),
- 'path_expt': os.path.join(PARAMS['path_output'],
- run_detect.FOLDER_EXPERIMENT % PARAMS['name']),
+ 'path_expt': os.path.join(DEFAULT_PARAMS['path_output'],
+ run_detect.FOLDER_EXPERIMENT % DEFAULT_PARAMS['name']),
})

NAME_CSV_TRIPLES = run_train.NAME_CSV_TRIPLES
@@ -277,5 +277,5 @@ def main(params):

if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
- params = run_train.arg_parse_params(PARAMS)
+ params = run_train.arg_parse_params(DEFAULT_PARAMS)
main(params)
10 changes: 5 additions & 5 deletions experiments_ovary_centres/run_center_prediction.py
@@ -43,10 +43,10 @@
FOLDER_EXPERIMENT = 'detect-centers-predict_%s'

# This sampling only influences the number of points to be evaluated in the image
- PARAMS = run_train.CENTER_PARAMS
- PARAMS.update(run_clust.CLUSTER_PARAMS)
- PARAMS['path_centers'] = os.path.join(PARAMS['path_output'],
- run_train.FOLDER_EXPERIMENT % PARAMS['name'],
+ DEFAULT_PARAMS = run_train.CENTER_PARAMS
+ DEFAULT_PARAMS.update(run_clust.CLUSTER_PARAMS)
+ DEFAULT_PARAMS['path_centers'] = os.path.join(DEFAULT_PARAMS['path_output'],
+ run_train.FOLDER_EXPERIMENT % DEFAULT_PARAMS['name'],
'classifier_RandForest.pkl')


@@ -173,7 +173,7 @@ def main(params):

if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
- params = run_train.arg_parse_params(PARAMS)
+ params = run_train.arg_parse_params(DEFAULT_PARAMS)

params['path_classif'] = params['path_centers']
assert os.path.isfile(params['path_classif']), \
23 changes: 10 additions & 13 deletions experiments_ovary_detect/run_cut_segmented_objects.py
@@ -44,7 +44,7 @@ def arg_parse_params(dict_paths):
parser.add_argument('-imgs', '--path_image', type=str, required=False,
help='path to directory & name pattern for images',
default=dict_paths['image'])
- parser.add_argument('-out', '--path_out', type=str, required=False,
+ parser.add_argument('-out', '--path_output', type=str, required=False,
help='path to the output directory',
default=dict_paths['output'])
parser.add_argument('--padding', type=int, required=False,
@@ -55,19 +55,15 @@
help='using background color', default=None, nargs='+')
parser.add_argument('--nb_jobs', type=int, required=False, default=NB_THREADS,
help='number of processes in parallel')
- args = parser.parse_args()
+ args = vars(parser.parse_args())
logging.info('ARG PARAMETERS: \n %s', repr(args))
- dict_paths = {
- 'annot': tl_data.update_path(args.path_annot),
- 'image': tl_data.update_path(args.path_image),
- 'output': tl_data.update_path(args.path_out),
- }
+ dict_paths = {k.split('_')[-1]:
+ os.path.join(tl_data.update_path(os.path.dirname(args[k])),
+ os.path.basename(args[k]))
+ for k in args if k.startswith('path_')}
for k in dict_paths:
if dict_paths[k] == '':
continue
- p = os.path.dirname(dict_paths[k]) \
- if k in ['annot', 'image', 'output'] else dict_paths[k]
- assert os.path.exists(p), 'missing (%s) "%s"' % (k, p)
+ assert os.path.exists(os.path.dirname(dict_paths[k])), \
+ 'missing (%s) "%s"' % (k, os.path.dirname(dict_paths[k]))
return dict_paths, args


@@ -126,4 +122,5 @@ def main(dict_paths, padding=0, use_mask=False, bg_color=None,
if __name__ == '__main__':
logging.basicConfig(level=logging.INFO)
dict_paths, args = arg_parse_params(PATHS)
- main(dict_paths, args.padding, args.mask, args.background, args.nb_jobs)
+ main(dict_paths, args['padding'], args['mask'],
+ args['background'], args['nb_jobs'])
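The refactor above replaces attribute access (`args.path_out`) with `vars(parser.parse_args())`, which yields a plain dict so every `path_*` argument can be post-processed in a single comprehension. A minimal sketch of that pattern, with illustrative argument names:

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('-imgs', '--path_image', default='./data/*.png')
parser.add_argument('-out', '--path_output', default='./results')
parser.add_argument('--padding', type=int, default=0)

# vars() turns the Namespace into a dict, so path arguments can be
# collected generically instead of being handled one by one
args = vars(parser.parse_args([]))
dict_paths = {k.split('_')[-1]: os.path.abspath(args[k])
              for k in args if k.startswith('path_')}
print(sorted(dict_paths))  # ['image', 'output']
```

Adding a new `path_*` argument then requires no change to the collection logic.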
4 changes: 2 additions & 2 deletions experiments_ovary_detect/run_egg_swap_orientation.py
@@ -30,7 +30,7 @@
'drosophila_ovary_slice')
PATH_RESULTS = tl_data.update_path('results', absolute=True)
SWAP_CONDITION = 'cc'
- PARAMS = {
+ DEFAULT_PARAMS = {
'path_images': os.path.join(PATH_IMAGES, 'image_cut-stage-2', '*.png'),
'path_output': os.path.join(PATH_RESULTS, 'image_cut-stage-2'),
}
@@ -129,5 +129,5 @@ def main(params):

if __name__ == '__main__':
logging.basicConfig(level=logging.INFO)
- params = r_match.arg_parse_params(PARAMS)
+ params = r_match.arg_parse_params(DEFAULT_PARAMS)
main(params)
4 changes: 2 additions & 2 deletions experiments_ovary_detect/run_ellipse_annot_match.py
@@ -35,7 +35,7 @@
NB_THREADS = max(1, int(mproc.cpu_count() * 0.8))
PATH_IMAGES = tl_data.update_path(os.path.join('data_images', 'drosophila_ovary_slice'))

- PARAMS = {
+ DEFAULT_PARAMS = {
'path_ellipses': os.path.join(PATH_IMAGES, 'ellipse_fitting', '*.csv'),
'path_infofile': os.path.join(PATH_IMAGES, 'info_ovary_images.txt'),
'path_output': tl_data.update_path('results', absolute=True),
@@ -173,5 +173,5 @@ def main(params):

if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
- params = arg_parse_params(PARAMS)
+ params = arg_parse_params(DEFAULT_PARAMS)
main(params)
4 changes: 2 additions & 2 deletions experiments_ovary_detect/run_ellipse_cut_scale.py
@@ -37,7 +37,7 @@
PATH_IMAGES = tl_data.update_path(os.path.join('data_images', 'drosophila_ovary_slice'))
PATH_RESULTS = tl_data.update_path('results', absolute=True)

- PARAMS = {
+ DEFAULT_PARAMS = {
'path_images': os.path.join(PATH_IMAGES, 'image', '*.jpg'),
'path_infofile': os.path.join(PATH_IMAGES, 'info_ovary_images_ellipses.csv'),
'path_output': os.path.join(PATH_RESULTS, 'cut_stages'),
@@ -132,5 +132,5 @@ def main(params):

if __name__ == '__main__':
logging.basicConfig(level=logging.INFO)
- params = r_match.arg_parse_params(PARAMS)
+ params = r_match.arg_parse_params(DEFAULT_PARAMS)
main(params)
