nn
Module: nn.cnn_1d_denoising
Title : Denoising diffusion weighted imaging data using CNN
Obtaining tissue microstructure measurements from diffusion weighted imaging
(DWI) with multiple, high b-values is crucial. However, the high noise levels
present in these images can adversely affect the accuracy of the
microstructural measurements. In this context, we suggest a straightforward
denoising technique that can be applied to any DWI dataset as long as a
low-noise, single-subject dataset is obtained using the same DWI sequence.
We created a simple 1D-CNN model with five layers, based on a 1D CNN
used for speech denoising. The model consists of two convolutional
layers, each followed by a max-pooling layer, and a dense layer. The
first convolutional layer has 16 one-dimensional filters of size 16,
and the second layer has 32 filters of size 8. A ReLU activation is
applied to both convolutional layers. Each max-pooling layer has a
kernel size of 2 and a stride of 2. The dense layer maps the features
extracted from the noisy image to the low-noise reference image.
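For illustration, a minimal Keras sketch of an architecture along these
lines is shown below; the 'same' padding, the explicit input shape, and
the compile settings are assumptions here and may differ from the actual
DIPY implementation::

    import tensorflow as tf

    def build_denoiser(sig_length):
        # Two Conv1D + max-pooling stages, then a dense layer that maps
        # the extracted features back to the low-noise reference signal.
        model = tf.keras.Sequential([
            tf.keras.layers.Conv1D(16, kernel_size=16, padding='same',
                                   activation='relu',
                                   input_shape=(sig_length, 1)),
            tf.keras.layers.MaxPooling1D(pool_size=2, strides=2),
            tf.keras.layers.Conv1D(32, kernel_size=8, padding='same',
                                   activation='relu'),
            tf.keras.layers.MaxPooling1D(pool_size=2, strides=2),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(sig_length),
        ])
        model.compile(optimizer='adam', loss='mean_squared_error')
        return model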
Reference
Cheng H, Vinci-Booher S, Wang J, Caron B, Wen Q, Newman S, et al.
(2022) Denoising diffusion weighted imaging data using convolutional neural
networks.
PLoS ONE 17(9): e0274396. https://doi.org/10.1371/journal.pone.0274396
Module: nn.evac
Class and helper functions for fitting the EVAC+ model.
Block (*args, **kwargs)
ChannelSum (*args, **kwargs)
EVACPlus ([verbose])
    This class is intended for the EVAC+ model.
logger
    Instances of the Logger class represent a single logging channel.
prepare_img (image)
    Function to prepare an image for model input. Specific to EVAC+.
init_model ([model_scale])
    Function to create the model for EVAC+.
Module: nn.histo_resdnn
Class and helper functions for fitting the Histological ResDNN model.
HistoResDNN ([sh_order, basis_type, verbose])
    This class is intended for the ResDNN Histology Network model.
logger
    Instances of the Logger class represent a single logging channel.
Module: nn.synb0
Class and helper functions for fitting the Synb0 model.
EncoderBlock (*args, **kwargs)
DecoderBlock (*args, **kwargs)
Synb0 ([verbose])
    This class is intended for the Synb0 model.
logger
    Instances of the Logger class represent a single logging channel.
UNet3D (input_shape)
normalize (image[, min_v, max_v, new_min, ...])
    Normalization function.
unnormalize (image, norm_min, norm_max, ...)
    Unnormalization function.
Module: nn.utils
normalize (image[, min_v, max_v, new_min, ...])
    Normalization function.
unnormalize (image, norm_min, norm_max, ...)
    Unnormalization function.
set_logger_level (log_level, logger)
    Change the logger level to one of: DEBUG, INFO, WARNING, CRITICAL, ERROR.
transform_img (image, affine[, init_shape, scale])
    Function to reshape an image as input to the model.
recover_img (image, affine, ori_shape[, scale])
    Function to recover an image back to its original shape.
-
class dipy.nn.cnn_1d_denoising.Cnn1DDenoiser(sig_length, optimizer='adam', loss='mean_squared_error', metrics=('accuracy',), loss_weights=None)
Bases: object
-
__init__(sig_length, optimizer='adam', loss='mean_squared_error', metrics=('accuracy',), loss_weights=None)
Initialize the CNN 1D denoiser with the given parameters.
Parameters
- sig_length : int
  Length of the DWI signal.
- optimizer : str, optional
  Name of the optimization algorithm to use. Options: 'adam', 'sgd',
  'rmsprop', 'adagrad', 'adadelta'.
- loss : str, optional
  Name of the loss function to use. Available options are
  'mean_squared_error', 'mean_absolute_error',
  'mean_absolute_percentage_error', 'mean_squared_logarithmic_error',
  'squared_hinge', 'hinge', 'categorical_hinge', 'logcosh',
  'categorical_crossentropy', 'sparse_categorical_crossentropy',
  'binary_crossentropy', 'kullback_leibler_divergence', 'poisson',
  'cosine_similarity'. Suggested: 'mean_squared_error'.
- metrics : tuple of str or function, optional
  List of metrics to be evaluated by the model during training and
  testing. Available options are 'accuracy', 'binary_accuracy',
  'categorical_accuracy', 'top_k_categorical_accuracy',
  'sparse_categorical_accuracy', 'sparse_top_k_categorical_accuracy',
  and any custom function.
- loss_weights : float or dict, optional
  Scalar coefficients to weight the loss contributions of different
  model outputs. Can be a single float value or a dictionary mapping
  output names to scalar coefficients.
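A minimal usage sketch; the number of DWI volumes (64 here) is a
hypothetical value standing in for the signal length of your dataset::

    from dipy.nn.cnn_1d_denoising import Cnn1DDenoiser

    denoiser = Cnn1DDenoiser(sig_length=64, optimizer='adam',
                             loss='mean_squared_error')
    denoiser.summary()  # print the layer-by-layer architecture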
-
compile(optimizer='adam', loss=None, metrics=None, loss_weights=None)
Configure the model for training.
Parameters
- optimizer : str or optimizer object, optional
  Name of optimizer or optimizer object.
- loss : str or objective function, optional
  Name of objective function or objective function itself.
  If None, the model will be compiled without any loss function and
  can only be used to predict output.
- metrics : list of metrics, optional
  List of metrics to be evaluated by the model during training and
  testing.
- loss_weights : list or dict, optional
  Optional list or dictionary specifying scalar coefficients (floats)
  to weight the loss contributions of different model outputs. The
  loss value that will be minimized by the model will then be the
  weighted sum of all individual losses. If a list, it is expected to
  have a 1:1 mapping to the model's outputs. If a dict, it is expected
  to map output names (strings) to scalar coefficients.
-
evaluate(x, y, batch_size=None, verbose=1, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False)
Evaluate the model on a test dataset.
Parameters
- x : ndarray
  Test dataset (high-noise data). If 4D, it will be converted to 1D.
- y : ndarray
  Labels of the test dataset (low-noise data). If 4D, it will be
  converted to 1D.
- batch_size : int, optional
  Number of samples per gradient update.
- verbose : int, optional
  Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per
  epoch.
- steps : int, optional
  Total number of steps (batches of samples) before declaring the
  evaluation round finished.
- callbacks : list, optional
  List of callbacks to apply during evaluation.
- max_queue_size : int, optional
  Maximum size for the generator queue.
- workers : int, optional
  Maximum number of processes to spin up when using process-based
  threading.
- use_multiprocessing : bool, optional
  If True, use process-based threading.
- return_dict : bool, optional
  If True, loss and metric results are returned as a dictionary.
Returns
- list or dict
  If return_dict is False, returns a list of [loss, metrics] values
  on the test dataset. If return_dict is True, returns a dictionary
  of metric names and their corresponding values.
-
fit(x, y, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)
Train the model on the training dataset.
The fit method will train the model for a fixed number of epochs
(iterations) on a dataset. If the given data is 4D, it will be
converted to 1D.
Parameters
- x : ndarray
  The input data, as an ndarray.
- y : ndarray
  The target data, as an ndarray.
- batch_size : int or None, optional
  Number of samples per batch of computation.
- epochs : int, optional
  The number of epochs.
- verbose : 'auto', 0, 1, or 2, optional
  Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per
  epoch.
- callbacks : list of keras.callbacks.Callback instances, optional
  List of callbacks to apply during training.
- validation_split : float between 0 and 1, optional
  Fraction of the training data to be used as validation data.
- validation_data : tuple (x_val, y_val) or None, optional
  Data on which to evaluate the loss and any model metrics at the end
  of each epoch.
- shuffle : boolean, optional
  Whether to shuffle the training data before each epoch. This
  argument is ignored when x is a generator or an object of
  tf.data.Dataset.
- initial_epoch : int, optional
  Epoch at which to start training.
- steps_per_epoch : int or None, optional
  Total number of steps (batches of samples) before declaring one
  epoch finished and starting the next epoch.
- validation_batch_size : int or None, optional
  Number of samples per validation batch.
- validation_steps : int or None, optional
  Only relevant if validation_data is provided and is a tf.data
  dataset.
- validation_freq : int or list/tuple/set, optional
  Only relevant if validation data is provided. If an integer,
  specifies how many training epochs to run before a new validation
  run is performed. If a list, tuple, or set, specifies the epochs on
  which to run validation.
- max_queue_size : int, optional
  Used for generator or keras.utils.Sequence input only.
- workers : int, optional
  Used for generator or keras.utils.Sequence input only.
- use_multiprocessing : boolean, optional
  Used for generator or keras.utils.Sequence input only.
Returns
- hist : object
  A History object. Its History.history attribute is a record of
  training loss values and metric values at successive epochs.
-
load_weights(filepath)
Load the model weights from the specified file path.
Parameters
- filepath : str
  The file path from which to load the weights.
-
predict(x, batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)
Generate predictions for input samples.
Parameters
- x : ndarray
  Input samples.
- batch_size : int, optional
  Number of samples per batch.
- verbose : int, optional
  Verbosity mode.
- steps : int, optional
  Total number of steps (batches of samples) before declaring the
  prediction round finished.
- callbacks : list, optional
  List of Keras callbacks to apply during prediction.
- max_queue_size : int, optional
  Maximum size for the generator queue.
- workers : int, optional
  Maximum number of processes to spin up when using process-based
  threading.
- use_multiprocessing : bool, optional
  If True, use process-based threading. If False, use thread-based
  threading.
Returns
- ndarray
  Numpy array of predictions.
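A sketch of the full train/denoise workflow; the random arrays below are
hypothetical stand-ins for a paired high-noise/low-noise acquisition with
64 volumes, and the assumption that predict handles 4D input the same way
fit does is ours::

    import numpy as np
    from dipy.nn.cnn_1d_denoising import Cnn1DDenoiser

    # Hypothetical paired data: the same scan at high and low noise levels.
    noisy = np.random.rand(32, 32, 20, 64).astype(np.float32)
    reference = np.random.rand(32, 32, 20, 64).astype(np.float32)

    denoiser = Cnn1DDenoiser(sig_length=64)
    hist = denoiser.fit(noisy, reference, epochs=5, validation_split=0.1)
    denoised = denoiser.predict(noisy)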
-
save_weights(filepath, overwrite=True)
Save the weights of the model to HDF5 file format.
Parameters
- filepath : str
  The path where the weights should be saved.
- overwrite : bool, optional
  If True, overwrites the file if it already exists. If False, raises
  an error if the file already exists.
-
summary()
Get the summary of the model.
The summary is textual and includes information about:
- The layers and their order in the model.
- The output shape of each layer.
Returns
- summary : NoneType
  The summary of the model.
-
train_test_split(x, y, test_size=None, train_size=None, random_state=None, shuffle=True, stratify=None)
Split the input data into random train and test subsets.
Parameters
- x : numpy array
  Input data.
- y : numpy array
  Target data.
- test_size : float or int, optional
  If float, should be between 0.0 and 1.0 and represent the proportion
  of the dataset to include in the test split. If int, represents the
  absolute number of test samples. If None, the value is set to the
  complement of the train size. If train_size is also None, it will be
  set to 0.25.
- train_size : float or int, optional
  If float, should be between 0.0 and 1.0 and represent the proportion
  of the dataset to include in the train split. If int, represents the
  absolute number of train samples. If None, the value is automatically
  set to the complement of the test size.
- random_state : int, RandomState instance or None, optional
  Controls the shuffling applied to the data before applying the split.
  Pass an int for reproducible output across multiple function calls.
- shuffle : bool, optional
  Whether or not to shuffle the data before splitting. If
  shuffle=False then stratify must be None.
- stratify : array-like, optional
  If not None, data is split in a stratified fashion, using this as
  the class labels.
Returns
- Tuple of four numpy arrays: x_train, x_test, y_train, y_test.
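Continuing the sketch above, the split and evaluation could look like
this; reshaping the hypothetical 4D arrays to (n_voxels, sig_length)
before splitting is our assumption::

    x = noisy.reshape(-1, 64)
    y = reference.reshape(-1, 64)
    x_train, x_test, y_train, y_test = denoiser.train_test_split(
        x, y, test_size=0.2, random_state=0)
    results = denoiser.evaluate(x_test, y_test, verbose=0)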
-
class dipy.nn.evac.Block(*args, **kwargs)
Bases: Layer
-
__init__(out_channels, kernel_size, strides, padding, drop_r, n_layers, layer_type='down')
-
call(input, passed)
This is where the layer’s logic lives.
The call() method may not create state (except in its first
invocation, wrapping the creation of variables or other resources in
tf.init_scope()). It is recommended to create state, including
tf.Variable instances and nested Layer instances,
in __init__(), or in the build() method that is
called automatically before call() executes for the first time.
Args:
- inputs: Input tensor, or dict/list/tuple of input tensors.
  The first positional inputs argument is subject to special rules:
  - inputs must be explicitly passed. A layer cannot have zero
    arguments, and inputs cannot be provided via the default value
    of a keyword argument.
  - NumPy array or Python scalar values in inputs get cast as
    tensors.
  - Keras mask metadata is only collected from inputs.
  - Layers are built (build(input_shape) method) using shape info
    from inputs only.
  - input_spec compatibility is only checked against inputs.
  - Mixed precision input casting is only applied to inputs. If a
    layer has tensor arguments in *args or **kwargs, their casting
    behavior in mixed precision should be handled manually.
  - The SavedModel input specification is generated using inputs
    only.
  - Integration with various ecosystem packages like TFMOT, TFLite,
    TF.js, etc. is only supported for inputs and not for tensors in
    positional and keyword arguments.
- *args: Additional positional arguments. May contain tensors,
  although this is not recommended, for the reasons above.
- **kwargs: Additional keyword arguments. May contain tensors,
  although this is not recommended, for the reasons above.
  The following optional keyword arguments are reserved:
  - training: Boolean scalar tensor or Python boolean indicating
    whether the call is meant for training or inference.
  - mask: Boolean input mask. If the layer's call() method takes a
    mask argument, its default value will be set to the mask
    generated for inputs by the previous layer (if the input did come
    from a layer that generated a corresponding mask, i.e. if it came
    from a Keras layer with masking support).
Returns:
  A tensor or list/tuple of tensors.
-
class dipy.nn.evac.ChannelSum(*args, **kwargs)
Bases: Layer
-
__init__()
-
call(inputs)
This is where the layer’s logic lives.
The call() method may not create state (except in its first
invocation, wrapping the creation of variables or other resources in
tf.init_scope()). It is recommended to create state, including
tf.Variable instances and nested Layer instances,
in __init__(), or in the build() method that is
called automatically before call() executes for the first time.
Args:
- inputs: Input tensor, or dict/list/tuple of input tensors.
  The first positional inputs argument is subject to special rules:
  - inputs must be explicitly passed. A layer cannot have zero
    arguments, and inputs cannot be provided via the default value
    of a keyword argument.
  - NumPy array or Python scalar values in inputs get cast as
    tensors.
  - Keras mask metadata is only collected from inputs.
  - Layers are built (build(input_shape) method) using shape info
    from inputs only.
  - input_spec compatibility is only checked against inputs.
  - Mixed precision input casting is only applied to inputs. If a
    layer has tensor arguments in *args or **kwargs, their casting
    behavior in mixed precision should be handled manually.
  - The SavedModel input specification is generated using inputs
    only.
  - Integration with various ecosystem packages like TFMOT, TFLite,
    TF.js, etc. is only supported for inputs and not for tensors in
    positional and keyword arguments.
- *args: Additional positional arguments. May contain tensors,
  although this is not recommended, for the reasons above.
- **kwargs: Additional keyword arguments. May contain tensors,
  although this is not recommended, for the reasons above.
  The following optional keyword arguments are reserved:
  - training: Boolean scalar tensor or Python boolean indicating
    whether the call is meant for training or inference.
  - mask: Boolean input mask. If the layer's call() method takes a
    mask argument, its default value will be set to the mask
    generated for inputs by the previous layer (if the input did come
    from a layer that generated a corresponding mask, i.e. if it came
    from a Keras layer with masking support).
Returns:
  A tensor or list/tuple of tensors.
-
class dipy.nn.evac.EVACPlus(verbose=False)
Bases: object
This class is intended for the EVAC+ model.
-
__init__(verbose=False)
The model was pre-trained for brain extraction of T1-weighted images
and is designed to take a T1-weighted image as input.
Parameters
- verbose : bool, optional
  Whether to show information about the processing.
  Default: False
-
fetch_default_weights()
Load the model pre-training weights to use for the fitting.
While the user can load different weights, the function
is mainly intended for the class function ‘predict’.
-
load_model_weights(weights_path)
Load the custom pre-training weights to use for the fitting.
Parameters
- weights_path : str
  Path to the file containing the weights (hdf5, saved by TensorFlow).
-
predict(T1, affine, voxsize=(1, 1, 1), batch_size=None, return_affine=False, return_prob=False)
Wrapper function to facilitate prediction on a larger dataset.
Parameters
- T1 : np.ndarray or list of np.ndarrays
  For a single image, input should be a 3D array. If multiple images,
  it should be a 4D array or a list.
- affine : np.ndarray (4, 4) or (batch, 4, 4), or list of np.ndarrays with length of batch
  Affine matrix for the T1 image. Should have a batch dimension if T1
  has one.
- voxsize : np.ndarray, list, or tuple of shape (3,) or (batch, 3), optional
  Voxel size of the T1 image.
  Default is (1, 1, 1).
- batch_size : int, optional
  Number of images per prediction pass. Only available if data is
  provided with a batch dimension. Consider lowering it if you get an
  out-of-memory error, and increasing it for faster prediction on
  large datasets. If None, batch_size is set to 1 if the provided
  image has a batch dimension.
  Default is None.
- return_affine : bool, optional
  Whether to return the affine matrix. Useful if the input was a file
  path.
  Default is False.
- return_prob : bool, optional
  Whether to return the probability map instead of a binary mask.
  Useful for testing.
  Default is False.
Returns
- pred_output : np.ndarray (...) or (batch, ...)
  Predicted brain mask.
- affine : np.ndarray (...) or (batch, ...)
  Affine matrix of the mask;
  only if return_affine is True.
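A minimal sketch of brain extraction with EVAC+; the random volume and
identity affine below are stand-ins for a real T1-weighted image loaded
from disk::

    import numpy as np
    from dipy.nn.evac import EVACPlus

    t1 = np.random.rand(128, 128, 128).astype(np.float32)
    affine = np.eye(4)

    evac = EVACPlus()
    mask = evac.predict(t1, affine, voxsize=(1, 1, 1))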
logger
-
dipy.nn.evac.logger()
Instances of the Logger class represent a single logging channel. A
“logging channel” indicates an area of an application. Exactly how an
“area” is defined is up to the application developer. Since an
application can have any number of areas, logging channels are identified
by a unique string. Application areas can be nested (e.g. an area
of “input processing” might include sub-areas “read CSV files”, “read
XLS files” and “read Gnumeric files”). To cater for this natural nesting,
channel names are organized into a namespace hierarchy where levels are
separated by periods, much like the Java or Python package namespace. So
in the instance given above, channel names might be “input” for the upper
level, and “input.csv”, “input.xls” and “input.gnu” for the sub-levels.
There is no arbitrary limit to the depth of nesting.
prepare_img
-
dipy.nn.evac.prepare_img(image)
Function to prepare an image for model input. Specific to EVAC+.
Parameters
- image : np.ndarray
  Input image
Returns
- input_data : dict
init_model
-
dipy.nn.evac.init_model(model_scale=16)
Function to create the model for EVAC+.
Parameters
- model_scale : int, optional
  The scale of the model. Should match the saved weights from the
  fetcher. Default is 16.
Returns
- model : tf.keras.Model
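A small sketch of building the raw EVAC+ Keras model directly (weights
still have to be loaded separately, e.g. through EVACPlus)::

    from dipy.nn.evac import init_model

    model = init_model(model_scale=16)  # scale must match the fetched weights
    model.summary()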
-
class dipy.nn.histo_resdnn.HistoResDNN(sh_order=8, basis_type='tournier07', verbose=False)
Bases: object
This class is intended for the ResDNN Histology Network model.
-
__init__(sh_order=8, basis_type='tournier07', verbose=False)
The model was re-trained for usage with a different basis function
(‘tournier07’) like the proposed model in [1, 2].
To obtain the pre-trained model, use::
>>> resdnn_model = HistoResDNN()
>>> fetch_model_weights_path = get_fnames('histo_resdnn_weights')
>>> resdnn_model.load_model_weights(fetch_model_weights_path)
This model is designed to take as input raw DWI signal on a sphere
(ODF) represented as SH of order 8 in the tournier basis and predict
fODF of order 8 in the tournier basis. Effectively, this model is
mimicking a CSD fit.
Parameters
- sh_order : int, optional
  Maximum SH order in the SH fit. For a given sh_order, there will be
  (sh_order + 1) * (sh_order + 2) / 2 SH coefficients for a symmetric
  basis. Default: 8
- basis_type : {'tournier07', 'descoteaux07'}, optional
  'tournier07' (default) or 'descoteaux07'.
- verbose : bool, optional
  Whether to show information about the processing.
  Default: False
-
fetch_default_weights()
Load the model pre-training weights to use for the fitting.
Will not work if the declared sh_order does not match the expected
input of the weights.
-
load_model_weights(weights_path)
Load the custom pre-training weights to use for the fitting.
Will not work if the declared sh_order does not match the expected
input of the weights. The weights for an sh_order of 8 can be obtained
via get_fnames('histo_resdnn_weights').
Parameters
- weights_path : str
  Path to the file containing the weights (hdf5, saved by TensorFlow).
-
predict(data, gtab, mask=None, chunk_size=1000)
Wrapper function to facilitate prediction on a larger dataset.
The function will mask, normalize, split, predict and re-assemble
the data as a volume.
Parameters
- data : np.ndarray
  DWI signal in a 4D array.
- gtab : GradientTable class instance
  The acquisition scheme matching the data (must contain at least
  one b0).
- mask : np.ndarray, optional
  Binary mask of the brain to avoid unnecessary computation and
  unreliable prediction outside the brain.
  Default: compute the prediction only for nonzero voxels (with at
  least one nonzero DWI value).
Returns
- pred_sh_coef : np.ndarray (x, y, z, M)
  Predicted fODF (as SH). The volume has a shape matching the input
  data, but with (sh_order + 1) * (sh_order + 2) / 2 as the last
  dimension.
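A usage sketch; the gradient scheme and DWI volume below are random
stand-ins for a real single-shell acquisition with at least one b0::

    import numpy as np
    from dipy.core.gradients import gradient_table
    from dipy.data import get_fnames
    from dipy.nn.histo_resdnn import HistoResDNN

    # Hypothetical scheme: one b0 plus 64 unit-norm directions at b=2000.
    bvals = np.concatenate(([0], np.full(64, 2000)))
    bvecs = np.zeros((65, 3))
    bvecs[1:] = np.random.randn(64, 3)
    bvecs[1:] /= np.linalg.norm(bvecs[1:], axis=1, keepdims=True)
    gtab = gradient_table(bvals, bvecs)

    data = np.random.rand(32, 32, 20, 65).astype(np.float32)

    resdnn = HistoResDNN()
    resdnn.load_model_weights(get_fnames('histo_resdnn_weights'))
    pred_sh = resdnn.predict(data, gtab)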
logger
-
dipy.nn.histo_resdnn.logger()
Instances of the Logger class represent a single logging channel. A
“logging channel” indicates an area of an application. Exactly how an
“area” is defined is up to the application developer. Since an
application can have any number of areas, logging channels are identified
by a unique string. Application areas can be nested (e.g. an area
of “input processing” might include sub-areas “read CSV files”, “read
XLS files” and “read Gnumeric files”). To cater for this natural nesting,
channel names are organized into a namespace hierarchy where levels are
separated by periods, much like the Java or Python package namespace. So
in the instance given above, channel names might be “input” for the upper
level, and “input.csv”, “input.xls” and “input.gnu” for the sub-levels.
There is no arbitrary limit to the depth of nesting.
-
class dipy.nn.model.SingleLayerPerceptron(input_shape=(28, 28), num_hidden=128, act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', optimizer='adam', loss='sparse_categorical_crossentropy')
Bases: object
-
__init__(input_shape=(28, 28), num_hidden=128, act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', optimizer='adam', loss='sparse_categorical_crossentropy')
Single Layer Perceptron with Dropout.
Parameters
- input_shape : tuple
  Shape of data to be trained.
- num_hidden : int
  Number of nodes in the hidden layer.
- act_hidden : string
  Activation function used in the hidden layer.
- dropout : float
  Dropout ratio.
- num_out : int
  Number of nodes in the output layer.
- act_out : string
  Activation function used in the output layer.
- optimizer : string
  Optimizer to use. Default: 'adam'.
- loss : string
  Loss function for measuring accuracy.
  Default: 'sparse_categorical_crossentropy'.
-
evaluate(x_test, y_test, verbose=2)
Evaluate the model on a test dataset.
Parameters
- x_test : ndarray
  The test dataset.
- y_test : ndarray, shape=(BatchSize,)
  The labels of the test dataset.
- verbose : int, optional
  Verbosity mode (0, 1, or 2) controlling how the evaluation progress
  is displayed. Default: 2.
Returns
- evaluate : List
  List of the loss value and accuracy value on the test dataset.
-
fit(x_train, y_train, epochs=5)
Train the model on the training dataset.
The fit method will train the model for a fixed number of epochs
(iterations) on a dataset.
Parameters
- x_train : ndarray
  The training dataset.
- y_train : ndarray, shape=(BatchSize,)
  The labels of the training dataset.
- epochs : int, optional
  The number of epochs. Default: 5.
Returns
- hist : object
  A History object. Its History.history attribute is a record of
  training loss values and metric values at successive epochs.
-
predict(x_test)
Predict the output from input samples.
The predict method generates output predictions for the input samples.
Parameters
- x_test : ndarray
  The test dataset or input samples.
Returns
- predict : ndarray, shape=(TestSize, OutputSize)
  Numpy array(s) of predictions.
-
summary()
Get the summary of the model.
The summary is textual and includes information about:
- The layers and their order in the model.
- The output shape of each layer.
Returns
- summary : NoneType
  The summary of the model.
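A usage sketch with random stand-in data shaped like the default 28x28
input and 10 output classes::

    import numpy as np
    from dipy.nn.model import SingleLayerPerceptron

    x_train = np.random.rand(256, 28, 28).astype(np.float32)
    y_train = np.random.randint(0, 10, size=256)

    slp = SingleLayerPerceptron(input_shape=(28, 28), num_hidden=128)
    hist = slp.fit(x_train, y_train, epochs=2)
    preds = slp.predict(x_train[:8])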
-
class dipy.nn.model.MultipleLayerPercepton(input_shape=(28, 28), num_hidden=(128,), act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', loss='sparse_categorical_crossentropy', optimizer='adam')
Bases: object
-
__init__(input_shape=(28, 28), num_hidden=(128,), act_hidden='relu', dropout=0.2, num_out=10, act_out='softmax', loss='sparse_categorical_crossentropy', optimizer='adam')
Multiple Layer Perceptron with Dropout.
Parameters
- input_shape : tuple
  Shape of data to be trained.
- num_hidden : array-like
  List of the number of nodes in each hidden layer.
- act_hidden : string
  Activation function used in the hidden layers.
- dropout : float
  Dropout ratio.
- num_out : int
  Number of nodes in the output layer.
- act_out : string
  Activation function used in the output layer.
- optimizer : string
  Optimizer to use. Default: 'adam'.
- loss : string
  Loss function for measuring accuracy.
  Default: 'sparse_categorical_crossentropy'.
-
evaluate(x_test, y_test, verbose=2)
Evaluate the model on a test dataset.
Parameters
- x_test : ndarray
  The test dataset.
- y_test : ndarray, shape=(BatchSize,)
  The labels of the test dataset.
- verbose : int, optional
  Verbosity mode (0, 1, or 2) controlling how the evaluation progress
  is displayed. Default: 2.
Returns
- evaluate : List
  List of the loss value and accuracy value on the test dataset.
-
fit(x_train, y_train, epochs=5)
Train the model on the training dataset.
The fit method will train the model for a fixed number of epochs
(iterations) on a dataset.
Parameters
- x_train : ndarray
  The training dataset.
- y_train : ndarray, shape=(BatchSize,)
  The labels of the training dataset.
- epochs : int, optional
  The number of epochs. Default: 5.
Returns
- hist : object
  A History object. Its History.history attribute is a record of
  training loss values and metric values at successive epochs.
-
predict(x_test)
Predict the output from input samples.
The predict method generates output predictions for the input samples.
Parameters
- x_test : ndarray
  The test dataset or input samples.
Returns
- predict : ndarray, shape=(TestSize, OutputSize)
  Numpy array(s) of predictions.
-
summary()
Get the summary of the model.
The summary is textual and includes information about:
- The layers and their order in the model.
- The output shape of each layer.
Returns
- summary : NoneType
  The summary of the model.
-
class dipy.nn.synb0.EncoderBlock(*args, **kwargs)
Bases: Layer
-
__init__(out_channels, kernel_size, strides, padding)
-
call(input)
This is where the layer’s logic lives.
The call() method may not create state (except in its first
invocation, wrapping the creation of variables or other resources in
tf.init_scope()). It is recommended to create state, including
tf.Variable instances and nested Layer instances,
in __init__(), or in the build() method that is
called automatically before call() executes for the first time.
Args:
- inputs: Input tensor, or dict/list/tuple of input tensors.
  The first positional inputs argument is subject to special rules:
  - inputs must be explicitly passed. A layer cannot have zero
    arguments, and inputs cannot be provided via the default value
    of a keyword argument.
  - NumPy array or Python scalar values in inputs get cast as
    tensors.
  - Keras mask metadata is only collected from inputs.
  - Layers are built (build(input_shape) method) using shape info
    from inputs only.
  - input_spec compatibility is only checked against inputs.
  - Mixed precision input casting is only applied to inputs. If a
    layer has tensor arguments in *args or **kwargs, their casting
    behavior in mixed precision should be handled manually.
  - The SavedModel input specification is generated using inputs
    only.
  - Integration with various ecosystem packages like TFMOT, TFLite,
    TF.js, etc. is only supported for inputs and not for tensors in
    positional and keyword arguments.
- *args: Additional positional arguments. May contain tensors,
  although this is not recommended, for the reasons above.
- **kwargs: Additional keyword arguments. May contain tensors,
  although this is not recommended, for the reasons above.
  The following optional keyword arguments are reserved:
  - training: Boolean scalar tensor or Python boolean indicating
    whether the call is meant for training or inference.
  - mask: Boolean input mask. If the layer's call() method takes a
    mask argument, its default value will be set to the mask
    generated for inputs by the previous layer (if the input did come
    from a layer that generated a corresponding mask, i.e. if it came
    from a Keras layer with masking support).
Returns:
  A tensor or list/tuple of tensors.
-
class dipy.nn.synb0.DecoderBlock(*args, **kwargs)
Bases: Layer
-
__init__(out_channels, kernel_size, strides, padding)
-
call(input)
This is where the layer’s logic lives.
The call() method may not create state (except in its first
invocation, wrapping the creation of variables or other resources in
tf.init_scope()). It is recommended to create state, including
tf.Variable instances and nested Layer instances,
in __init__(), or in the build() method that is
called automatically before call() executes for the first time.
Args:
- inputs: Input tensor, or dict/list/tuple of input tensors.
  The first positional inputs argument is subject to special rules:
  - inputs must be explicitly passed. A layer cannot have zero
    arguments, and inputs cannot be provided via the default value
    of a keyword argument.
  - NumPy array or Python scalar values in inputs get cast as
    tensors.
  - Keras mask metadata is only collected from inputs.
  - Layers are built (build(input_shape) method) using shape info
    from inputs only.
  - input_spec compatibility is only checked against inputs.
  - Mixed precision input casting is only applied to inputs. If a
    layer has tensor arguments in *args or **kwargs, their casting
    behavior in mixed precision should be handled manually.
  - The SavedModel input specification is generated using inputs
    only.
  - Integration with various ecosystem packages like TFMOT, TFLite,
    TF.js, etc. is only supported for inputs and not for tensors in
    positional and keyword arguments.
- *args: Additional positional arguments. May contain tensors,
  although this is not recommended, for the reasons above.
- **kwargs: Additional keyword arguments. May contain tensors,
  although this is not recommended, for the reasons above.
  The following optional keyword arguments are reserved:
  - training: Boolean scalar tensor or Python boolean indicating
    whether the call is meant for training or inference.
  - mask: Boolean input mask. If the layer's call() method takes a
    mask argument, its default value will be set to the mask
    generated for inputs by the previous layer (if the input did come
    from a layer that generated a corresponding mask, i.e. if it came
    from a Keras layer with masking support).
Returns:
  A tensor or list/tuple of tensors.
-
class dipy.nn.synb0.Synb0(verbose=False)
Bases: object
This class is intended for the Synb0 model.
The model is the deep learning part of the Synb0-Disco
pipeline, thus stand-alone usage is not
recommended.
-
__init__(verbose=False)
The model was pre-trained for usage on pre-processed images following
the Synb0-Disco pipeline. One can load their own weights using
load_model_weights.
This model is designed to take a b0 image and a T1-weighted image as
input and to predict a b-inf image.
Parameters
- verbose : bool, optional
  Whether to show information about the processing.
  Default: False
-
fetch_default_weights(idx)
Load the model pre-training weights to use for the fitting.
While the user can load different weights, the function
is mainly intended for the class function ‘predict’.
Parameters
- idx : int
  The index of the default weights. It can be from 0 to 4.
-
load_model_weights(weights_path)
Load the custom pre-training weights to use for the fitting.
Parameters
- weights_path : str
  Path to the file containing the weights (hdf5, saved by TensorFlow).
-
predict(b0, T1, batch_size=None, average=True)
Wrapper function to facilitate prediction on a larger dataset.
The function will pad the data to meet the required image shape.
Note that the b0 and T1 images should have the same shape.
Parameters
- b0 : np.ndarray, (batch, 77, 91, 77) or (77, 91, 77)
  For a single image, input should be a 3D array. If multiple images,
  there should also be a batch dimension.
- T1 : np.ndarray, (batch, 77, 91, 77) or (77, 91, 77)
  For a single image, input should be a 3D array. If multiple images,
  there should also be a batch dimension.
- batch_size : int, optional
  Number of images per prediction pass. Only available if data is
  provided with a batch dimension. Consider lowering it if you get an
  out-of-memory error, and increasing it for faster prediction on
  large datasets. If None, batch_size is set to 1 if the provided
  image has a batch dimension.
  Default is None.
- average : bool, optional
  Whether the function follows the Synb0-Disco pipeline and averages
  the predictions of 5 different models. If False, it uses the loaded
  weights for prediction.
  Default is True.
Returns
- pred_output : np.ndarray (...) or (batch, ...)
  Reconstructed b-inf image(s).
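A usage sketch; random volumes of the expected (77, 91, 77) shape stand
in for pre-processed b0 and T1 images from the Synb0-Disco pipeline::

    import numpy as np
    from dipy.nn.synb0 import Synb0

    b0 = np.random.rand(77, 91, 77).astype(np.float32)
    t1 = np.random.rand(77, 91, 77).astype(np.float32)

    synb0 = Synb0()
    b_inf = synb0.predict(b0, t1, average=True)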
logger
-
dipy.nn.synb0.logger()
Instances of the Logger class represent a single logging channel. A
“logging channel” indicates an area of an application. Exactly how an
“area” is defined is up to the application developer. Since an
application can have any number of areas, logging channels are identified
by a unique string. Application areas can be nested (e.g. an area
of “input processing” might include sub-areas “read CSV files”, “read
XLS files” and “read Gnumeric files”). To cater for this natural nesting,
channel names are organized into a namespace hierarchy where levels are
separated by periods, much like the Java or Python package namespace. So
in the instance given above, channel names might be “input” for the upper
level, and “input.csv”, “input.xls” and “input.gnu” for the sub-levels.
There is no arbitrary limit to the depth of nesting.
UNet3D
-
dipy.nn.synb0.UNet3D(input_shape)
normalize
-
dipy.nn.synb0.normalize(image, min_v=None, max_v=None, new_min=-1, new_max=1)
Normalization function.
Parameters
- image : np.ndarray
- min_v : int or float, optional
  Minimum value range for normalization; intensities below min_v will
  be clipped. If None, it is set to the minimum value of the image.
  Default: None
- max_v : int or float, optional
  Maximum value range for normalization; intensities above max_v will
  be clipped. If None, it is set to the maximum value of the image.
  Default: None
- new_min : int or float, optional
  New minimum value after normalization.
  Default: -1
- new_max : int or float, optional
  New maximum value after normalization.
  Default: 1
Returns
- np.ndarray
  Normalized image in the range new_min to new_max.
unnormalize
-
dipy.nn.synb0.unnormalize(image, norm_min, norm_max, min_v, max_v)
Unnormalization function.
Parameters
- image : np.ndarray
- norm_min : int or float
  Minimum value of the normalized image.
- norm_max : int or float
  Maximum value of the normalized image.
- min_v : int or float
  Minimum value of the unnormalized image.
- max_v : int or float
  Maximum value of the unnormalized image.
Returns
- np.ndarray
  Unnormalized image in the range min_v to max_v.
normalize
-
dipy.nn.utils.normalize(image, min_v=None, max_v=None, new_min=-1, new_max=1)
Normalization function.
Parameters
- image : np.ndarray
- min_v : int or float, optional
  Minimum value range for normalization; intensities below min_v will
  be clipped. If None, it is set to the minimum value of the image.
  Default: None
- max_v : int or float, optional
  Maximum value range for normalization; intensities above max_v will
  be clipped. If None, it is set to the maximum value of the image.
  Default: None
- new_min : int or float, optional
  New minimum value after normalization.
  Default: -1
- new_max : int or float, optional
  New maximum value after normalization.
  Default: 1
Returns
- np.ndarray
  Normalized image in the range new_min to new_max.
unnormalize
-
dipy.nn.utils.unnormalize(image, norm_min, norm_max, min_v, max_v)
Unnormalization function.
Parameters
- image : np.ndarray
- norm_min : int or float
  Minimum value of the normalized image.
- norm_max : int or float
  Maximum value of the normalized image.
- min_v : int or float
  Minimum value of the unnormalized image.
- max_v : int or float
  Maximum value of the unnormalized image.
Returns
- np.ndarray
  Unnormalized image in the range min_v to max_v.
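A round-trip sketch with an arbitrary intensity range::

    import numpy as np
    from dipy.nn.utils import normalize, unnormalize

    img = 1000 * np.random.rand(16, 16, 16)
    norm = normalize(img, new_min=-1, new_max=1)
    # Map the normalized values back to the original intensity range.
    restored = unnormalize(norm, -1, 1, img.min(), img.max())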
set_logger_level
-
dipy.nn.utils.set_logger_level(log_level, logger)
Change the level of the logger to one of the following:
DEBUG, INFO, WARNING, CRITICAL, ERROR.
Parameters
- log_level : str
  Log level for the logger.
- logger : Logger
  The logger whose level should be changed.
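For example, to get detailed processing messages from the EVAC+ module
(using the module-level logger documented above)::

    from dipy.nn.evac import logger
    from dipy.nn.utils import set_logger_level

    set_logger_level('DEBUG', logger)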
recover_img
-
dipy.nn.utils.recover_img(image, affine, ori_shape, scale=2)
Function to recover an image back to its original shape.
Parameters
- image : np.ndarray
  Image to recover.
- affine : np.ndarray
  Affine matrix provided by transform_img.
- ori_shape : tuple
  Original shape of the image.
- scale : float, optional
  Scale that was used in transform_img. Default is 2.
Returns
- recovered_img : np.ndarray