nnUNet Code Analysis, Part 1: Training
nnUNet is a complete, end-to-end segmentation codebase, used widely in medical image analysis, and its results are quite strong.
Let's start with the training entry point, run_training.py.
Typical usage: nnUNet_train 2d nnUNetTrainerV2 TaskXXX_MYTASK FOLD --npz
Here 2d selects the 2D U-Net configuration, nnUNetTrainerV2 names the trainer class, TaskXXX_MYTASK is the task ID, and FOLD is the cross-validation fold to train; --npz additionally saves the softmax outputs during validation. There are further parameters; see the code for details.
plans_file, output_folder_name, dataset_directory, batch_dice, stage, \
trainer_class = get_default_configuration(network, task, network_trainer, plans_identifier)
Given the network, task, trainer name, and plans identifier, get_default_configuration resolves the plans file, the output and data folders, and the trainer_class to use.
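run_training.py then instantiates that class and starts training. Roughly (a sketch; the exact keyword values in run_training.py are filled in from the CLI flags):

trainer = trainer_class(plans_file, fold, output_folder=output_folder_name,
                        dataset_directory=dataset_directory, batch_dice=batch_dice,
                        stage=stage, unpack_data=True, deterministic=False, fp16=True)
trainer.initialize(not validation_only)   # build network, optimizer, data loaders
trainer.run_training()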
Command: nnUNet_train 2d nnUNetTrainerV2 Task004_Hippocampus 1 --npz
The output looks like this:
###############################################
I am running the following nnUNet: 2d
My trainer class is: <class 'nnunet.training.network_training.nnUNetTrainerV2.nnUNetTrainerV2'>
For that I will be using the following configuration:
num_classes: 2
modalities: {0: 'MRI'}
use_mask_for_norm OrderedDict([(0, False)])
keep_only_largest_region None
min_region_size_per_class None
min_size_per_class None
normalization_schemes OrderedDict([(0, 'nonCT')])
stages...
stage: 0
{'batch_size': 366, 'num_pool_per_axis': [3, 3], 'patch_size': array([56, 40]), 'median_patient_size_in_voxels': array([36, 50, 35]), 'current_spacing': array([1., 1., 1.]), 'original_spacing': array([1., 1., 1.]), 'pool_op_kernel_sizes': [[2, 2], [2, 2], [2, 2]], 'conv_kernel_sizes': [[3, 3], [3, 3], [3, 3], [3, 3]], 'do_dummy_2D_data_aug': False}
I am using stage 0 from these plans
I am using batch dice + CE loss
I am using data from this folder: /mnt/nnUNet_preprocessed/Task004_Hippocampus/nnUNetData_plans_v2.1_2D
###############################################
Two things in this log are worth unpacking: batch dice vs. sample dice, and stage. With batch dice, the soft Dice statistics are pooled over the entire batch as if it were one large sample, whereas sample dice computes Dice per sample and then averages; pooling is more forgiving when individual samples contain little or no foreground. stage refers to the resolution stage of the preprocessing plans: plans may contain a downsampled stage 0 plus a full-resolution stage 1 (used by the 3D low-res/cascade configurations), and for this 2D task there is only stage 0.
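A minimal sketch of the batch-vs-sample distinction (the real implementation is SoftDiceLoss in nnunet.training.loss_functions.dice_loss; this version is simplified):

import torch

def soft_dice(probs, onehot, batch_dice, eps=1e-5):
    # probs, onehot: (B, C, H, W) softmax outputs and one-hot targets
    # batch dice pools the batch axis into the Dice statistics;
    # sample dice keeps it and averages per-sample scores instead
    axes = (0, 2, 3) if batch_dice else (2, 3)
    intersect = (probs * onehot).sum(axes)
    denom = probs.sum(axes) + onehot.sum(axes)
    return ((2 * intersect + eps) / (denom + eps)).mean()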
As the log states, the trainer in use is nnunet.training.network_training.nnUNetTrainerV2.nnUNetTrainerV2, which inherits from nnUNetTrainer, itself a subclass of NetworkTrainer. So start by reading NetworkTrainer.py and nnUNetTrainer.py. A few things worth noting:
do_split: builds the 5-fold cross-validation split.
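A simplified sketch of what do_split does (the real method also caches the result to splits_final.pkl so all trainers see the same folds; dataset here is assumed to be a dict keyed by case ID):

import numpy as np
from sklearn.model_selection import KFold

all_keys = np.sort(list(dataset.keys()))            # deterministic case order
kfold = KFold(n_splits=5, shuffle=True, random_state=12345)
splits = []
for train_idx, val_idx in kfold.split(all_keys):
    splits.append({'train': all_keys[train_idx], 'val': all_keys[val_idx]})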
The training procedure itself holds no surprises: plain SGD (with Nesterov momentum) plus a polynomial learning-rate decay, poly_lr.
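The schedule is tiny; nnunet/training/learning_rate/poly_lr.py is essentially:

def poly_lr(epoch, max_epochs, initial_lr, exponent=0.9):
    # polynomial decay from initial_lr down to 0 at max_epochs
    return initial_lr * (1 - epoch / max_epochs) ** exponent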
The loss is soft Dice + cross-entropy, summed.
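Reusing the soft_dice sketch from above, the combined objective is roughly as follows (nnUNet's actual implementation is DC_and_CE_loss, which carries more options):

import torch
import torch.nn as nn

ce = nn.CrossEntropyLoss()

def dice_ce_loss(logits, target):
    # logits: (B, C, H, W); target: (B, H, W) integer class labels
    probs = torch.softmax(logits, 1)
    onehot = torch.zeros_like(probs).scatter_(1, target.unsqueeze(1), 1)
    return (1 - soft_dice(probs, onehot, batch_dice=True)) + ce(logits, target)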
Data augmentation is also fairly restrained, mainly scaling and rotation; the docstring of setup_DA_params summarizes the V2 changes:
def setup_DA_params(self):
    """
    - we increase rotation angle from [-15, 15] to [-30, 30]
    - scale range is now (0.7, 1.4), was (0.85, 1.25)
    - we don't do elastic deformation anymore
    """
    # (excerpt) the corresponding assignments later in the method:
    self.data_aug_params["scale_range"] = (0.7, 1.4)
    self.data_aug_params["do_elastic"] = False
There is also early stopping in NetworkTrainer, with patience = 50 epochs.
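A minimal sketch of that pattern (the real logic in NetworkTrainer.manage_patience tracks a moving average of the validation loss and also interacts with the learning-rate schedule; run_one_epoch and max_num_epochs below are stand-ins):

best_val_loss = float('inf')
best_epoch = 0
patience = 50

for epoch in range(max_num_epochs):                 # assumed defined elsewhere
    val_loss = run_one_epoch()                      # hypothetical: train + validate once
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        best_epoch = epoch
    elif epoch - best_epoch > patience:
        break                                       # no improvement for `patience` epochs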