Merge branch 'master' into ohinds-warmstart
satra authored Sep 19, 2023
2 parents bad4575 + 7de8638 commit 02ac53c
Showing 10 changed files with 35 additions and 32 deletions.
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -12,7 +12,7 @@ repos:
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: https://github.com/psf/black
- rev: 23.7.0
+ rev: 23.9.1
hooks:
- id: black
- repo: https://github.com/PyCQA/flake8
2 changes: 1 addition & 1 deletion README.md
@@ -66,7 +66,7 @@ the Apache 2.0 license. It was started under the support of NIH R01 EB020470.

### Augmentation methods
#### Spatial Transforms
- [Center crop](), [Spacial Constant Padding](), [Random Crop](), [Resize](), [Random flip (left and right)]()
+ [Center crop](), [Spatial Constant Padding](), [Random Crop](), [Resize](), [Random flip (left and right)]()

#### Intensity Transforms
[Add gaussian noise](), [Min-Max intensity scaling](), [Costom intensity scaling](), [Intensity masking](), [Contrast adjustment]()
2 changes: 1 addition & 1 deletion nobrainer/bayesian_utils.py
@@ -64,7 +64,7 @@ def default_loc_scale_fn(
safe to use when doing asynchronous distributed training. The default
(`None`) is to use the `tf.get_variable` default.
weightnorm: An optional (boolean) to activate weightnorm for the mean
- kernal.
+ kernel.
Returns
----------
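The `weightnorm` flag documented in this hunk gates weight normalization on the posterior mean kernel. As a minimal, untested sketch — assuming `default_loc_scale_fn` is importable from this module and that its remaining arguments can stay at their defaults — requesting it might look like:

```python
from nobrainer.bayesian_utils import default_loc_scale_fn

# Hedged sketch: only the `weightnorm` argument is documented in this hunk;
# leaving every other argument at its default is an assumption.
loc_scale_fn = default_loc_scale_fn(weightnorm=True)
```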
18 changes: 9 additions & 9 deletions nobrainer/models/bayesian_vnet.py
@@ -28,9 +28,9 @@ def down_stage(
kld: a func to compute KL Divergence loss, default is set None.
KLD can be set as (lambda q, p, ignore: kl_lib.kl_divergence(q, p))
prior_fn: a func to initialize priors distributions
- kernel_posterior_fn:a func to initlaize kernal posteriors
+ kernel_posterior_fn:a func to initlaize kernel posteriors
(loc, scale and weightnorms)
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 3.
activation: str or optimizer object, the non-linearity to use. All
tf.activations are allowed to use.
@@ -83,11 +83,11 @@ def up_stage(
kld: a func to compute KL Divergence loss, default is set None.
KLD can be set as (lambda q, p, ignore: kl_lib.kl_divergence(q, p))
prior_fn: a func to initialize priors distributions
- kernel_posterior_fn:a func to initlaize kernal posteriors
+ kernel_posterior_fn:a func to initlaize kernel posteriors
(loc, scale and weightnorms)
filters: list or tuple of four ints, the shape of the input data. Omit
the batch dimension, and include the number of channels.
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 3.
activation: str or optimizer object, the non-linearity to use. All
tf.activations are allowed to use
@@ -153,17 +153,17 @@ def end_stage(
kld: a func to compute KL Divergence loss, default is set None.
KLD can be set as (lambda q, p, ignore: kl_lib.kl_divergence(q, p))
prior_fn: a func to initialize priors distributions
- kernel_posterior_fn:a func to initlaize kernal posteriors
+ kernel_posterior_fn:a func to initlaize kernel posteriors
(loc, scale and weightnorms)
n_classes: int, for binary class use the value 1.
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 3.
activation: str or optimizer object, the non-linearity to use. All
tf.activations are allowed to use
Result
----------
- prediction probablities
+ prediction probabilities
"""
conv = tfp.layers.Convolution3DFlipout(
n_classes,
@@ -216,12 +216,12 @@ def bayesian_vnet(
a value of 1.
input_shape: list or tuple of four ints, the shape of the input data. Omit
the batch dimension, and include the number of channels.
- kernal_size(int): size of the kernal of conv layers
+ kernal_size(int): size of the kernel of conv layers
activation(str): all tf.keras.activations are allowed
kld: a func to compute KL Divergence loss, default is set None.
KLD can be set as (lambda q, p, ignore: kl_lib.kl_divergence(q, p))
prior_fn: a func to initialize priors distributions
- kernel_posterior_fn:a func to initlaize kernal posteriors
+ kernel_posterior_fn:a func to initlaize kernel posteriors
(loc, scale and weightnorms)
See Bayesian Utils for more options for kld, prior_fn and kernal_posterior_fn
activation: str or optimizer object, the non-linearity to use. All
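Taken together, these docstrings spell out the public knobs on `bayesian_vnet`: the class count, a four-int input shape without the batch dimension, the conv kernel size, a Keras activation, and the `kld`/`prior_fn`/`kernel_posterior_fn` trio. A minimal sketch of a call — keyword names are taken from the docstrings in this diff, and anything not shown there is an assumption — using the divergence lambda the docstring itself suggests:

```python
import tensorflow_probability as tfp

from nobrainer.models.bayesian_vnet import bayesian_vnet

model = bayesian_vnet(
    n_classes=1,                     # binary segmentation
    input_shape=(128, 128, 128, 1),  # three spatial dims plus channels, no batch dim
    kernel_size=3,                   # conv kernel size; the docstrings default this to 3
    activation="relu",               # any tf.keras activation
    # KL divergence term, as the docstring suggests:
    kld=lambda q, p, ignore: tfp.distributions.kl_divergence(q, p),
)
model.summary()
```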
16 changes: 8 additions & 8 deletions nobrainer/models/bayesian_vnet_semi.py
@@ -24,7 +24,7 @@ def down_stage(inputs, filters, kernel_size=3, activation="relu", padding="SAME"
inputs: tf.layer for encoding stage.
filters: list or tuple of four ints, the shape of the input data. Omit
the batch dimension, and include the number of channels.
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 3.
activation: str or optimizer object, the non-linearity to use. All
tf.activations are allowed to use
@@ -61,11 +61,11 @@ def up_stage(
kld: a func to compute KL Divergence loss, default is set None.
KLD can be set as (lambda q, p, ignore: kl_lib.kl_divergence(q, p))
prior_fn: a func to initialize priors distributions
- kernel_posterior_fn:a func to initlaize kernal posteriors
+ kernel_posterior_fn:a func to initlaize kernel posteriors
(loc, scale and weightnorms)
filters: list or tuple of four ints, the shape of the input data. Omit
the batch dimension, and include the number of channels.
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 3.
activation: str or optimizer object, the non-linearity to use. All
tf.activations are allowed to use
@@ -131,17 +131,17 @@ def end_stage(
kld: a func to compute KL Divergence loss, default is set None.
KLD can be set as (lambda q, p, ignore: kl_lib.kl_divergence(q, p))
prior_fn: a func to initialize priors distributions
- kernel_posterior_fn:a func to initlaize kernal posteriors
+ kernel_posterior_fn:a func to initlaize kernel posteriors
(loc, scale and weightnorms)
n_classes: int, for binary class use the value 1.
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 3.
activation: str or optimizer object, the non-linearity to use. All
tf.activations are allowed to use
Result
----------
- prediction probablities.
+ prediction probabilities.
"""
conv = tfp.layers.Convolution3DFlipout(
n_classes,
@@ -195,12 +195,12 @@ def bayesian_vnet_semi(
a value of 1.
input_shape: list or tuple of four ints, the shape of the input data. Omit
the batch dimension, and include the number of channels.
- kernal_size(int): size of the kernal of conv layers
+ kernal_size(int): size of the kernel of conv layers
activation(str): all tf.keras.activations are allowed
kld: a func to compute KL Divergence loss, default is set None.
KLD can be set as (lambda q, p, ignore: kl_lib.kl_divergence(q, p))
prior_fn: a func to initialize priors distributions
- kernel_posterior_fn:a func to initlaize kernal posteriors
+ kernel_posterior_fn:a func to initlaize kernel posteriors
(loc, scale and weightnorms)
See Bayesian Utils for more options for kld, prior_fn and kernal_posterior_fn
activation: str or optimizer object, the non-linearity to use. All
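One difference this file's hunks make visible: in the semi-Bayesian variant, `down_stage` takes no `kld`, `prior_fn`, or `kernel_posterior_fn` arguments, so the encoder is deterministic and only the decoder and output head use Flipout layers. Construction mirrors the fully Bayesian model; a sketch under the same assumptions as above:

```python
from nobrainer.models.bayesian_vnet_semi import bayesian_vnet_semi

# Deterministic encoder, Bayesian decoder and head (per the hunks above).
semi_model = bayesian_vnet_semi(n_classes=1, input_shape=(128, 128, 128, 1))
```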
2 changes: 1 addition & 1 deletion nobrainer/models/highresnet.py
@@ -16,7 +16,7 @@ def highresnet(
https://arxiv.org/abs/1707.01992
Args:
n_classes(int): number of classes
- input_shape(tuple):four ints representating the shape of 3D input
+ input_shape(tuple):four ints representing the shape of 3D input
activation(str): all tf.keras.activations are allowed
dropout_rate(int): [0,1].
"""
2 changes: 1 addition & 1 deletion nobrainer/models/unet.py
@@ -19,7 +19,7 @@ def unet(
https://arxiv.org/abs/1606.06650
Args:
n_classes(int): number of classes
- input_shape(tuple):four ints representating the shape of 3D input
+ input_shape(tuple):four ints representing the shape of 3D input
activation(str): all tf.keras.activations are allowed
batch_size(int): batch size.
"""
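The `highresnet` and `unet` Args blocks follow the same convention: `n_classes`, a four-int `input_shape`, and a Keras activation string, plus `dropout_rate` and `batch_size` respectively. A hedged sketch of both calls, assuming the signatures match their Args blocks (the specific values here are illustrative, not documented defaults):

```python
from nobrainer.models.highresnet import highresnet
from nobrainer.models.unet import unet

shape = (64, 64, 64, 1)  # four ints: spatial dims plus channels, batch dim omitted

hr_model = highresnet(n_classes=1, input_shape=shape, activation="relu", dropout_rate=0.2)
unet_model = unet(n_classes=1, input_shape=shape, activation="relu", batch_size=1)
```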
10 changes: 5 additions & 5 deletions nobrainer/models/vnet.py
@@ -23,7 +23,7 @@ def down_stage(inputs, filters, kernel_size=3, activation="relu", padding="SAME"
inputs: tf.layer for encoding stage.
filters: list or tuple of four ints, the shape of the input data. Omit
the batch dimension, and include the number of channels.
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 3.
activation: str or optimizer object, the non-linearity to use. All
tf.activations are allowed to use
@@ -48,7 +48,7 @@ def up_stage(inputs, skip, filters, kernel_size=3, activation="relu", padding="S
inputs: tf.layer for encoding stage.
filters: list or tuple of four ints, the shape of the input data. Omit
the batch dimension, and include the number of channels.
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 3.
activation: str or optimizer object, the non-linearity to use. All
tf.activations are allowed to use
@@ -80,14 +80,14 @@ def end_stage(inputs, n_classes=1, kernel_size=3, activation="relu", padding="SA
----------
inputs: tf.model layer.
n_classes: int, for binary class use the value 1.
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 3.
activation: str or optimizer object, the non-linearity to use. All
tf.activations are allowed to use
Result
----------
- prediction probablities
+ prediction probabilities
"""
conv = Conv3D(
filters=n_classes,
@@ -123,7 +123,7 @@ def vnet(
a value of 1.
input_shape: list or tuple of four ints, the shape of the input data. Omit
the batch dimension, and include the number of channels.
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 3.
activation: str or optimizer object, the non-linearity to use. All
tf.activations are allowed to use
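`down_stage`, `up_stage`, and `end_stage` are the deterministic building blocks that `vnet` assembles into an encoder-decoder ending in a probability head. A hedged one-liner for the top-level call, with keyword names taken from the docstrings above:

```python
from nobrainer.models.vnet import vnet

# Defaults per the docstrings: kernel size 3, relu activation.
model = vnet(n_classes=1, input_shape=(32, 32, 32, 1), kernel_size=3, activation="relu")
```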
10 changes: 5 additions & 5 deletions nobrainer/models/vox2vox.py
@@ -26,11 +26,11 @@ def vox_gan(
a value of 1.
input_shape: list or tuple of four ints, the shape of the input data. Omit
the batch dimension, and include the number of channels.
- g_kernal_size: int, size of the kernal for generator. Default kernal size
+ g_kernal_size: int, size of the kernel for generator. Default kernel size
is set to be 4.
g_filters: int, number of filters for generator. default is set 64.
g_norm: str, to set batch or instance norm.
- d_kernal_size: int, size of the kernal for discriminator. Default kernal size
+ d_kernal_size: int, size of the kernel for discriminator. Default kernel size
is set to be 4.
d_filters: int, number of filters for discriminator. default is set 64.
d_norm: str, to set batch or instance norm.
@@ -65,7 +65,7 @@ def Vox_generator(n_classes, input_shape, n_filters=64, kernel_size=4, norm="bat
a value of 1.
input_shape: list or tuple of four ints, the shape of the input data. Omit
the batch dimension, and include the number of channels.
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 4.
n_filters: int, number of filters. default is set 64.
norm: str, to set batch or instance norm.
@@ -195,7 +195,7 @@ def Vox_discriminator(input_shape, n_filters=64, kernel_size=4, norm="batch"):
input_shape: list or tuple of four ints, the shape of the input data. Omit
the batch dimension, and include the number of channels.
n_filters: int, number of filters. default is set 64.
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 4.
norm: str, to set batch or instance norm.
@@ -275,7 +275,7 @@ def Vox_ensembler(n_classes, input_shape, kernel_size=3, **kwargs):
a value of 1.
input_shape: list or tuple of four ints, the shape of the input data. Omit
the batch dimension, and include the number of channels.
- kernal_size: int, size of the kernal of conv layers. Default kernal size
+ kernal_size: int, size of the kernel of conv layers. Default kernel size
is set to be 3.
Returns
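`vox2vox` pairs a generator with a discriminator, each taking its own kernel size, filter count, and normalization choice. A hedged sketch using the signatures visible in the hunk headers and the `g_*`/`d_*` keyword names from the `vox_gan` docstring (note those parameter names really are spelled `g_kernal_size`/`d_kernal_size`; this commit corrects only the prose):

```python
from nobrainer.models.vox2vox import Vox_discriminator, Vox_generator, vox_gan

shape = (64, 64, 64, 1)

# Build the halves directly, per the signatures in the hunk headers...
generator = Vox_generator(
    n_classes=1, input_shape=shape, n_filters=64, kernel_size=4, norm="batch"
)
discriminator = Vox_discriminator(
    input_shape=shape, n_filters=64, kernel_size=4, norm="batch"
)

# ...or let vox_gan pair them, per the g_*/d_* arguments documented above.
gan = vox_gan(
    n_classes=1,
    input_shape=shape,
    g_kernal_size=4,
    g_filters=64,
    g_norm="batch",
    d_kernal_size=4,
    d_filters=64,
    d_norm="batch",
)
```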
3 changes: 3 additions & 0 deletions pyproject.toml
@@ -12,3 +12,6 @@ force_sort_within_sections = true
reverse_relative = true
sort_relative_in_force_sorted_sections = true
known_first_party = ["nobrainer"]

+ [tool.codespell]
+ ignore-words-list = "nd"
