SegSRGAN

This algorithm is based on the method proposed by Chi Hieu Pham in 2019. More information about the SegSRGAN algorithm can be found in the related article.

Installation

User (recommended)

The library can be installed from PyPI with pip install SegSRGAN

Note: we recommend using a virtualenv.

If the package is installed, all the .py files mentioned below can be located with the importlib Python package, as follows: importlib.util.find_spec("SegSRGAN").submodule_search_locations[0]
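For instance, a small sketch using this call (assuming the package is installed):

```python
import importlib.util
import os

# Folder in which the SegSRGAN package (and its .py scripts) was installed.
package_dir = importlib.util.find_spec("SegSRGAN").submodule_search_locations[0]
print(package_dir)

# List the .py files shipped with the package, e.g. SegSRGAN_training.py.
print([f for f in os.listdir(package_dir) if f.endswith(".py")])
```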

Developer

First, clone the repository:

git clone git@github.com:koopa31/SegSRGAN.git

Then use make to run the test suite or to build the PyPI package:

make test

make pkg

Install with pip install SegSRGAN

Performing a training:

Example:

python SegSRGAN_training.py
--new_low_res 0.5 0.5 3
--csv /home/user/data.csv
--snapshot_folder /home/user/training_weights
--dice_file /home/user/dice.csv
--mse_file /home/user/mse_example_for_article.csv
--folder_training_data /home/user/temporary_file_for_training

Options:

#### General options

csv (string): CSV file that contains the paths of the files used for the training. These files are divided into two categories: train and test. Consequently, it must contain 3 columns, called HR_image, Label_image and Base (equal to either Train or Test), respectively. A minimal example of such a file is sketched after this list of options.

dice_file (string): CSV file in which to store the DICE at each epoch

mse_file (string): CSV file in which to store the MSE at each epoch

epoch (integer): number of training epochs

batch_size (integer): number of patches per mini batch

number_of_disciminator_iteration (integer): how many times we train the discriminator before training the generator

new_low_res (tuple): resolution of the LR image generated during the training. One value is given per dimension for a fixed resolution (e.g. "--new_low_res 0.5 0.5 3"). Two values are given per dimension if the resolutions have to be drawn between bounds (e.g. "--new_low_res 0.5 0.5 4 --new_low_res 1 1 2" means that, for each image at each epoch, the x and y resolutions are uniformly drawn between 0.5 and 1, whereas the z resolution is uniformly drawn between 2 and 4).

snapshot_folder (string): path of the folder in which the weights will be regularly saved after a given number of epochs (this number is given by the snapshot (integer) argument). It is also possible to continue a training from saved weights by adding the parameters listed in the dedicated section below.

folder_training_data (string): folder where temporary files are written during the training (created at the beginning of each epoch and deleted at the end of it)
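As announced above, a minimal sketch of the training file expected by the csv option (the paths are hypothetical; only the three required columns matter):

```python
import csv

# Hypothetical image/label paths; the three required columns are
# HR_image, Label_image and Base (Train or Test).
rows = [
    {"HR_image": "/home/user/sub1_hr.nii.gz",
     "Label_image": "/home/user/sub1_label.nii.gz", "Base": "Train"},
    {"HR_image": "/home/user/sub2_hr.nii.gz",
     "Label_image": "/home/user/sub2_label.nii.gz", "Base": "Test"},
]

with open("/home/user/data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["HR_image", "Label_image", "Base"])
    writer.writeheader()
    writer.writerows(rows)
```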

#### Network architecture options

kernel_gen (integer): number of output channels of the first convolutional layer of the generator.

kernel_dis (integer): number of output channels of the first convolutional layer of the discriminator.

is_conditional (Boolean): enables training a conditional network with a condition on the input resolution (both the discriminator and the generator are conditional).

u_net (Boolean): enables training a U-Net-shaped network (see the difference between U-Net and non-U-Net networks in the images below).

is_residual (Boolean): determines whether the structure of the network is residual or not. This option only impacts the activation function of the generator (see the image below for more details).

[Figure: residual vs non-residual network]

[Figure: U-Net vs non-U-Net shaped network]

where the block denoted as "resblock" is defined as follows:

[Figure: resblock]

#### Options to continue a training from a set of weights

init_epoch (integer): number of the first epoch which will be considered during the continued training (e.g., 21 if the given weights were those obtained at the end of the 20th epoch). This is mainly useful to write the weights in the same folder as the training which is continued. Warning: the number of epochs of the remaining training is then epoch − init_epoch + 1.

weights (string): path to the saved weights from which the training will be continued.

#### Data augmentation options

percent_val_max: multiplicative factor applied to the maximal value of the image to define the standard deviation of the additive Gaussian noise.

For instance, a value of 0.03 means that sigma = 0.03 max(X) where max(X) is the maximal value of the image X.

contrast_max: controls the modification of contrast of each image. For instance, a value of 0.4 means that at each epoch, each image will be set to a power uniformly drawn between 0.6 and 1.4.
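For illustration, a minimal NumPy sketch of how these two options can be read (this is not the package's actual implementation; the order of the two operations and the clipping are assumptions of the sketch):

```python
import numpy as np

def augment(image, percent_val_max=0.03, contrast_max=0.4, rng=None):
    """Illustrative augmentation following the two options described above."""
    rng = rng or np.random.default_rng()
    # Additive Gaussian noise with sigma = percent_val_max * max(X).
    sigma = percent_val_max * image.max()
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    # Contrast change: raise the image to a power uniformly drawn
    # between 1 - contrast_max and 1 + contrast_max.
    power = rng.uniform(1.0 - contrast_max, 1.0 + contrast_max)
    # Clipping at 0 only keeps the non-integer power well defined in this sketch.
    return np.clip(noisy, 0.0, None) ** power

image = np.random.rand(8, 8, 8)  # stand-in for an image volume
augmented = augment(image)
```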

Performing a segmentation:

There are two ways to perform a segmentation: from the command line (mainly useful for processing several segmentations),

or using the Python function.

Python function:

As can be seen in the file testsuite/seg.py, the Python function can be used as follows:

from SegSRGAN.Function_for_application_test_python3 import segmentation

segmentation(input_file_path, step, new_resolution, patch, path_output_cortex, path_output_hr, weights_path)

where:

input_file_path is the path of the image to be super resolved and segmented

step is the shifting step for the patches

new_resolution is the new z-resolution we want for the output image

path_output_cortex output path of the segmented cortex

path_output_hr output path of the super resolution output image

weights_path is the path of the file which contains the pre-trained weights for the neural network

patch is the size of the patches
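Putting these arguments together, a hedged usage sketch (all paths and numeric values below are placeholders, and whether new_resolution expects a single z value or a per-axis tuple should be checked against the installed version):

```python
from SegSRGAN.Function_for_application_test_python3 import segmentation

# Placeholder inputs for illustration only.
input_file_path = "/home/user/brain_t2.nii.gz"        # image to super-resolve and segment
step = 64                                             # shifting step for the patches
new_resolution = (0.5, 0.5, 0.5)                      # desired resolution of the output image
patch = 128                                           # patch size
path_output_cortex = "/home/user/cortex_seg.nii.gz"   # segmented cortex output
path_output_hr = "/home/user/sr_image.nii.gz"         # super-resolved output
weights_path = "weights/Perso_without_data_augmentation"  # pre-trained weights (see below)

segmentation(input_file_path, step, new_resolution, patch,
             path_output_cortex, path_output_hr, weights_path)
```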

Segmentation of a set of images, for several step and patch values

In order to facilitate the segmentation of several images, one can run segsrgan/segsrgan/job_model.py:

General syntax:

python job_model.py

--path

--patch

--step

--result_folder_name

--weights_path

The list of paths of the images to be processed must be stored in a CSV file.

where:

path: path of the CSV file

patch: list of patch sizes

step: list of steps

result_folder_name: name of the folder containing the results

#### Example:

python job_model.py --path /home/data.csv --patch "64,128" --step "32 64,64 128" --result_folder_name "weights_without_augmentation" --weights_path "weights/Perso_without_data_augmentation"

csv path parameter:

A CSV file which, as in the example above, is used to get all the images to be processed. Only the items of the first column will be used, and the file must contain only paths (i.e., no header).
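For instance, a minimal sketch of building such a file (the image paths are hypothetical; one path per row, first column only, no header):

```python
import csv

# Hypothetical paths of the images to be processed.
image_paths = [
    "/home/data/subject_01/t2.nii.gz",
    "/home/data/subject_02/t2.nii.gz",
]

# One path per row, no header: only the first column is read by job_model.py.
with open("/home/data.csv", "w", newline="") as f:
    csv.writer(f).writerows([[p] for p in image_paths])
```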

step and patch parameters:

In this example we run steps 32 and 64 for patch 64, and steps 64 and 128 for patch 128. Warning: the shape of the given step and patch arguments must be strictly respected.
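To make the expected format concrete, a small sketch of how the example strings pair up (it only illustrates the pairing described above, not the actual parsing done by job_model.py):

```python
patch_arg = "64,128"
step_arg = "32 64,64 128"

patches = [int(p) for p in patch_arg.split(",")]
steps = [[int(s) for s in group.split()] for group in step_arg.split(",")]

# Each patch size comes with its own list of steps:
#   patch 64  -> steps [32, 64]
#   patch 128 -> steps [64, 128]
for patch_size, step_list in zip(patches, steps):
    print(patch_size, step_list)
```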

weights_path parameter:

The implementation of the algorithm enables the use of two kinds of weights: the ones we have already trained, and new ones obtained from a training. In order to use the weights we have already trained, the simplest solution is to set the --weights_path parameter to one of the following values:

weights/Perso_without_data_augmentation: corresponds to the weights trained without data augmentation.

A second set of weights, trained with the noise and contrast data augmentation (0.03_val_max) described in Section 4 of the article.

Other weights not presented in the article are also available (the SegSRGAN help lists all of these available weights).

Organisation of the output storage:

Each image to be processed must be stored in its own folder. When a given input image (which can be a NIfTI image or a DICOM folder) is processed, a dedicated folder is created for each output. This folder is located in the folder of the processed input image and is named according to the value of the --result_folder_name parameter (in our example, the folder will be named "result_with_weights_without_augmentation"). In the end, the folder of each initial image will contain a folder named "result_with_weights_without_augmentation", which will contain two NIfTI files: the super-resolved image and the segmentation.
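As an illustration of this layout, a small sketch computing where the outputs of one input image end up (the "result_with_" prefix is inferred from the example above and the paths are hypothetical):

```python
import os

# Hypothetical input image and the result folder name used in the example above.
input_image = "/home/data/subject_01/t2.nii.gz"
result_folder_name = "weights_without_augmentation"

# The dedicated output folder is created next to the processed image.
output_dir = os.path.join(os.path.dirname(input_image),
                          "result_with_" + result_folder_name)
print(output_dir)  # /home/data/subject_01/result_with_weights_without_augmentation
```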