
U-Net lung segmentation (Montgomery + Shenzhen)
Eduardo Mineo


RSNA Pneumonia Detection Challenge
Sociedade Beneficente de Senhoras - Hospital Sírio-Libanês - Brazil

Contents
1. Overview
2. Data preparation
3. Segmentation training
4. Results

1. Overview
This notebook follows the work of Kevin Mader (https://www.kaggle.com/kmader/training-u-net-on-
tb-images-to-segment-lungs/notebook) for lung segmentation. Our motivation is to automatically
identify lung opacities in chest x-rays for the RSNA Pneumonia Detection Challenge
(https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/leaderboard).

Medical image segmentation is the process of automatically detecting boundaries within images. In
this exercise, we train a convolutional neural network with the U-Net (https://arxiv.org/abs/1505.04597)
architecture, whose training strategy relies on heavy use of data augmentation to make efficient use
of the available annotated samples.

The training is done with two chest x-ray datasets: Montgomery County and Shenzhen Hospital
(https://ceb.nlm.nih.gov/repositories/tuberculosis-chest-x-ray-image-data-sets/). The Montgomery
County dataset includes manually segmented lung masks, whereas the Shenzhen Hospital dataset was
manually segmented by Stirenko et al. (https://arxiv.org/abs/1803.01199). The lung segmentation
masks were dilated so that lung boundary information is included in the training data, and the images were
resized to 512x512 pixels.
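
To make the dilation step concrete, here is a minimal sketch (not part of the original kernel) that applies a 15x15 kernel, the same size used below as DILATE_KERNEL, to a toy binary mask and resizes it to 512x512; the toy mask and variable names are illustrative only.

import cv2
import numpy as np

toy_mask = np.zeros((64, 64), np.uint8)
toy_mask[20:40, 20:40] = 255                       # a 20x20 square standing in for a lung

kernel = np.ones((15, 15), np.uint8)               # same shape as DILATE_KERNEL below
dilated = cv2.dilate(toy_mask, kernel, iterations=1)
resized = cv2.resize(dilated, (512, 512))

print(int(toy_mask.sum() / 255), int(dilated.sum() / 255))  # dilation grows the area: 400 -> 1156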

2. Data preparation
Prepare the input segmentation directory structure.

In [1]:
!mkdir ../input/segmentation
!mkdir ../input/segmentation/test
!mkdir ../input/segmentation/train
!mkdir ../input/segmentation/train/augmentation
!mkdir ../input/segmentation/train/image
!mkdir ../input/segmentation/train/mask
!mkdir ../input/segmentation/train/dilate

Import required Python libraries

In [2]:
import os

import numpy as np
import cv2
import matplotlib.pyplot as plt

from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras import backend as keras
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint, LearningRateScheduler

from glob import glob
from tqdm import tqdm

Using TensorFlow backend.

Define appropriate constants for directory paths and training parameters

In [3]:
INPUT_DIR = os.path.join("..", "input")

SEGMENTATION_DIR = os.path.join(INPUT_DIR, "segmentation")
SEGMENTATION_TEST_DIR = os.path.join(SEGMENTATION_DIR, "test")
SEGMENTATION_TRAIN_DIR = os.path.join(SEGMENTATION_DIR, "train")
SEGMENTATION_AUG_DIR = os.path.join(SEGMENTATION_TRAIN_DIR, "augmentation")
SEGMENTATION_IMAGE_DIR = os.path.join(SEGMENTATION_TRAIN_DIR, "image")
SEGMENTATION_MASK_DIR = os.path.join(SEGMENTATION_TRAIN_DIR, "mask")
SEGMENTATION_DILATE_DIR = os.path.join(SEGMENTATION_TRAIN_DIR, "dilate")
SEGMENTATION_SOURCE_DIR = os.path.join(INPUT_DIR, "pulmonary-chest-xray-abnormalities")

SHENZHEN_TRAIN_DIR = os.path.join(SEGMENTATION_SOURCE_DIR, "ChinaSet_AllFiles", "ChinaSet_AllFiles")
SHENZHEN_IMAGE_DIR = os.path.join(SHENZHEN_TRAIN_DIR, "CXR_png")
SHENZHEN_MASK_DIR = os.path.join(INPUT_DIR, "shcxr-lung-mask", "mask", "mask")

MONTGOMERY_TRAIN_DIR = os.path.join(SEGMENTATION_SOURCE_DIR, "Montgomery", "MontgomerySet")
MONTGOMERY_IMAGE_DIR = os.path.join(MONTGOMERY_TRAIN_DIR, "CXR_png")
MONTGOMERY_LEFT_MASK_DIR = os.path.join(MONTGOMERY_TRAIN_DIR, "ManualMask", "leftMask")
MONTGOMERY_RIGHT_MASK_DIR = os.path.join(MONTGOMERY_TRAIN_DIR, "ManualMask", "rightMask")

# Kernel used to dilate the lung masks
DILATE_KERNEL = np.ones((15, 15), np.uint8)

# Prod
STEPS_PER_EPOC = 512
EPOCHS = 48

# Dev
# STEPS_PER_EPOC = 64
# EPOCHS = 16

1. Combine left and right lung segmentation masks of Montgomery chest x-rays
2. Resize images to 512x512 pixels
3. Dilate masks to gain more information on the edge of lungs
4. Split images into training and test datasets
5. Write images to /segmentation directory

In [4]:
montgomery_left_mask_dir = glob(os.path.join(MONTGOMERY_LEFT_MASK_DIR, '*.png'))
montgomery_test = montgomery_left_mask_dir[0:50]
montgomery_train = montgomery_left_mask_dir[50:]

for left_image_file in tqdm(montgomery_left_mask_dir):
    base_file = os.path.basename(left_image_file)
    image_file = os.path.join(MONTGOMERY_IMAGE_DIR, base_file)
    right_image_file = os.path.join(MONTGOMERY_RIGHT_MASK_DIR, base_file)

    image = cv2.imread(image_file)
    left_mask = cv2.imread(left_image_file, cv2.IMREAD_GRAYSCALE)
    right_mask = cv2.imread(right_image_file, cv2.IMREAD_GRAYSCALE)

    image = cv2.resize(image, (512, 512))
    left_mask = cv2.resize(left_mask, (512, 512))
    right_mask = cv2.resize(right_mask, (512, 512))

    mask = np.maximum(left_mask, right_mask)
    mask_dilate = cv2.dilate(mask, DILATE_KERNEL, iterations=1)

    if (left_image_file in montgomery_train):
        cv2.imwrite(os.path.join(SEGMENTATION_IMAGE_DIR, base_file), image)
        cv2.imwrite(os.path.join(SEGMENTATION_MASK_DIR, base_file), mask)
        cv2.imwrite(os.path.join(SEGMENTATION_DILATE_DIR, base_file), mask_dilate)
    else:
        filename, fileext = os.path.splitext(base_file)
        cv2.imwrite(os.path.join(SEGMENTATION_TEST_DIR, base_file), image)
        cv2.imwrite(os.path.join(SEGMENTATION_TEST_DIR, "%s_mask%s" % (filename, fileext)), mask)
        cv2.imwrite(os.path.join(SEGMENTATION_TEST_DIR, "%s_dilate%s" % (filename, fileext)), mask_dilate)

100%|██████████| 138/138 [01:00<00:00,  2.23it/s]

Define some useful functions to display images with segmentation as overlays

In [5]:
def add_colored_dilate(image, mask_image, dilate_image):
    mask_image_gray = cv2.cvtColor(mask_image, cv2.COLOR_BGR2GRAY)
    dilate_image_gray = cv2.cvtColor(dilate_image, cv2.COLOR_BGR2GRAY)

    mask = cv2.bitwise_and(mask_image, mask_image, mask=mask_image_gray)
    dilate = cv2.bitwise_and(dilate_image, dilate_image, mask=dilate_image_gray)

    mask_coord = np.where(mask != [0, 0, 0])
    dilate_coord = np.where(dilate != [0, 0, 0])

    mask[mask_coord[0], mask_coord[1], :] = [255, 0, 0]
    dilate[dilate_coord[0], dilate_coord[1], :] = [0, 0, 255]

    ret = cv2.addWeighted(image, 0.7, dilate, 0.3, 0)
    ret = cv2.addWeighted(ret, 0.7, mask, 0.3, 0)

    return ret

def add_colored_mask(image, mask_image):
    mask_image_gray = cv2.cvtColor(mask_image, cv2.COLOR_BGR2GRAY)

    mask = cv2.bitwise_and(mask_image, mask_image, mask=mask_image_gray)

    mask_coord = np.where(mask != [0, 0, 0])

    mask[mask_coord[0], mask_coord[1], :] = [255, 0, 0]

    ret = cv2.addWeighted(image, 0.7, mask, 0.3, 0)

    return ret

def diff_mask(ref_image, mask_image):
    mask_image_gray = cv2.cvtColor(mask_image, cv2.COLOR_BGR2GRAY)

    mask = cv2.bitwise_and(mask_image, mask_image, mask=mask_image_gray)

    mask_coord = np.where(mask != [0, 0, 0])

    mask[mask_coord[0], mask_coord[1], :] = [255, 0, 0]

    ret = cv2.addWeighted(ref_image, 0.7, mask, 0.3, 0)

    return ret

Show some Montgomery chest x-rays and their lung segmentation masks from the training and test datasets
to verify the procedure above. In the merged image it is possible to see the difference between the dilated
mask (blue) and the original mask (red).

In [6]:
base_file = os.path.basename(montgomery_train[0])

image_file = os.path.join(SEGMENTATION_IMAGE_DIR, base_file)
mask_image_file = os.path.join(SEGMENTATION_MASK_DIR, base_file)
dilate_image_file = os.path.join(SEGMENTATION_DILATE_DIR, base_file)

image = cv2.imread(image_file)
mask_image = cv2.imread(mask_image_file)
dilate_image = cv2.imread(dilate_image_file)
merged_image = add_colored_dilate(image, mask_image, dilate_image)

fig, axs = plt.subplots(2, 4, figsize=(15, 8))

axs[0, 0].set_title("X-Ray")
axs[0, 0].imshow(image)

axs[0, 1].set_title("Mask")
axs[0, 1].imshow(mask_image)

axs[0, 2].set_title("Dilate")
axs[0, 2].imshow(dilate_image)

axs[0, 3].set_title("Merged")
axs[0, 3].imshow(merged_image)

base_file = os.path.basename(montgomery_test[0])
filename, fileext = os.path.splitext(base_file)
image_file = os.path.join(SEGMENTATION_TEST_DIR, base_file)
mask_image_file = os.path.join(SEGMENTATION_TEST_DIR, "%s_mask%s" % (filename, fileext))
dilate_image_file = os.path.join(SEGMENTATION_TEST_DIR, "%s_dilate%s" % (filename, fileext))

image = cv2.imread(image_file)
mask_image = cv2.imread(mask_image_file)
dilate_image = cv2.imread(dilate_image_file)
merged_image = add_colored_dilate(image, mask_image, dilate_image)

axs[1, 0].set_title("X-Ray")
axs[1, 0].imshow(image)

axs[1, 1].set_title("Mask")
axs[1, 1].imshow(mask_image)

axs[1, 2].set_title("Dilate")
axs[1, 2].imshow(dilate_image)

axs[1, 3].set_title("Merged")
axs[1, 3].imshow(merged_image)

Out[6]:
<matplotlib.image.AxesImage at 0x7f7a1f0ed5c0>


1. Resize Shenzhen Hospital chest x-ray images to 512x512 pixels
2. Dilate masks to gain more information on the edge of lungs
3. Split images into training and test datasets
4. Write images to /segmentation directory

In [7]:
shenzhen_mask_dir = glob(os.path.join(SHENZHEN_MASK_DIR, '*.png'))
shenzhen_test = shenzhen_mask_dir[0:50]
shenzhen_train = shenzhen_mask_dir[50:]

for mask_file in tqdm(shenzhen_mask_dir):
    base_file = os.path.basename(mask_file).replace("_mask", "")
    image_file = os.path.join(SHENZHEN_IMAGE_DIR, base_file)

    image = cv2.imread(image_file)
    mask = cv2.imread(mask_file, cv2.IMREAD_GRAYSCALE)

    image = cv2.resize(image, (512, 512))
    mask = cv2.resize(mask, (512, 512))
    mask_dilate = cv2.dilate(mask, DILATE_KERNEL, iterations=1)

    if (mask_file in shenzhen_train):
        cv2.imwrite(os.path.join(SEGMENTATION_IMAGE_DIR, base_file), image)
        cv2.imwrite(os.path.join(SEGMENTATION_MASK_DIR, base_file), mask)
        cv2.imwrite(os.path.join(SEGMENTATION_DILATE_DIR, base_file), mask_dilate)
    else:
        filename, fileext = os.path.splitext(base_file)
        cv2.imwrite(os.path.join(SEGMENTATION_TEST_DIR, base_file), image)
        cv2.imwrite(os.path.join(SEGMENTATION_TEST_DIR, "%s_mask%s" % (filename, fileext)), mask)
        cv2.imwrite(os.path.join(SEGMENTATION_TEST_DIR, "%s_dilate%s" % (filename, fileext)), mask_dilate)

100%|██████████| 566/566 [01:49<00:00,  5.18it/s]

Show some Shenzhen Hospital chest x-rays and their lung segmentation masks from the training and test
datasets to verify the procedure above. In the merged image it is possible to see the difference between the
dilated mask (blue) and the original mask (red).

In [8]:
base_file = os.path.basename(shenzhen_train[0].replace("_mask", ""))

image_file = os.path.join(SEGMENTATION_IMAGE_DIR, base_file)
mask_image_file = os.path.join(SEGMENTATION_MASK_DIR, base_file)
dilate_image_file = os.path.join(SEGMENTATION_DILATE_DIR, base_file)

image = cv2.imread(image_file)
mask_image = cv2.imread(mask_image_file)
dilate_image = cv2.imread(dilate_image_file)
merged_image = add_colored_dilate(image, mask_image, dilate_image)

fig, axs = plt.subplots(2, 4, figsize=(15, 8))

axs[0, 0].set_title("X-Ray")
axs[0, 0].imshow(image)

axs[0, 1].set_title("Mask")
axs[0, 1].imshow(mask_image)

axs[0, 2].set_title("Dilate")
axs[0, 2].imshow(dilate_image)

axs[0, 3].set_title("Merged")
axs[0, 3].imshow(merged_image)

base_file = os.path.basename(shenzhen_test[0].replace("_mask", ""))
filename, fileext = os.path.splitext(base_file)
image_file = os.path.join(SEGMENTATION_TEST_DIR, base_file)
mask_image_file = os.path.join(SEGMENTATION_TEST_DIR, "%s_mask%s" % (filename, fileext))
dilate_image_file = os.path.join(SEGMENTATION_TEST_DIR, "%s_dilate%s" % (filename, fileext))

image = cv2.imread(image_file)
mask_image = cv2.imread(mask_image_file)
dilate_image = cv2.imread(dilate_image_file)
merged_image = add_colored_dilate(image, mask_image, dilate_image)

axs[1, 0].set_title("X-Ray")
axs[1, 0].imshow(image)

axs[1, 1].set_title("Mask")
axs[1, 1].imshow(mask_image)

axs[1, 2].set_title("Dilate")
axs[1, 2].imshow(dilate_image)

axs[1, 3].set_title("Merged")
axs[1, 3].imshow(merged_image)

Out[8]:
<matplotlib.image.AxesImage at 0x7f7a1c5d5908>

Print the count of images and segmentation lung masks available to test and train the model

In [9]:
(len(glob(os.path.join(SEGMENTATION_TEST_DIR, "*.png"))), \
 len(glob(os.path.join(SEGMENTATION_IMAGE_DIR, "*.png"))), \
 len(glob(os.path.join(SEGMENTATION_MASK_DIR, "*.png"))), \
 len(glob(os.path.join(SEGMENTATION_DILATE_DIR, "*.png"))))

Out[9]:
(300, 604, 604, 604)

3. Segmentation training
References: https://github.com/zhixuhao/unet/ (https://github.com/zhixuhao/unet/),
https://github.com/jocicmarko/ultrasound-nerve-segmentation
(https://github.com/jocicmarko/ultrasound-nerve-segmentation)

Data augmentation helper function for training the net

In [10]:
# From: https://github.com/zhixuhao/unet/blob/master/data.py
def train_generator(batch_size, train_path, image_folder, mask_folder, aug_dict,
                    image_color_mode="grayscale",
                    mask_color_mode="grayscale",
                    image_save_prefix="image",
                    mask_save_prefix="mask",
                    save_to_dir=None,
                    target_size=(256,256),
                    seed=1):
    '''
    Can generate image and mask at the same time. Use the same seed for
    image_datagen and mask_datagen to ensure the transformation for image
    and mask is the same. If you want to visualize the results of the
    generator, set save_to_dir = "your path".
    '''
    image_datagen = ImageDataGenerator(**aug_dict)
    mask_datagen = ImageDataGenerator(**aug_dict)

    image_generator = image_datagen.flow_from_directory(
        train_path,
        classes = [image_folder],
        class_mode = None,
        color_mode = image_color_mode,
        target_size = target_size,
        batch_size = batch_size,
        save_to_dir = save_to_dir,
        save_prefix = image_save_prefix,
        seed = seed)

    mask_generator = mask_datagen.flow_from_directory(
        train_path,
        classes = [mask_folder],
        class_mode = None,
        color_mode = mask_color_mode,
        target_size = target_size,
        batch_size = batch_size,
        save_to_dir = save_to_dir,
        save_prefix = mask_save_prefix,
        seed = seed)

    train_gen = zip(image_generator, mask_generator)

    for (img, mask) in train_gen:
        img, mask = adjust_data(img, mask)
        yield (img, mask)

def adjust_data(img, mask):
    img = img / 255
    mask = mask / 255
    mask[mask > 0.5] = 1
    mask[mask <= 0.5] = 0

    return (img, mask)
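
As a quick sanity check (not from the original kernel, and assuming the segmentation/train folders created above are already populated), one can pull a single batch from this generator and confirm that images and masks come out with matching shapes and that the masks are binary after adjust_data:

sample_gen = train_generator(2, SEGMENTATION_TRAIN_DIR, 'image', 'dilate',
                             dict(), target_size=(512, 512))   # empty aug_dict = no augmentation
sample_img, sample_mask = next(sample_gen)           # one batch of image/mask pairs
print(sample_img.shape, sample_mask.shape)           # expected: (2, 512, 512, 1) for both
print(sample_img.max(), sample_mask.max())           # images scaled to [0, 1], masks thresholded to {0, 1}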

U-Net architecture

In [11]:
# From: https://github.com/jocicmarko/ultrasound-nerve-segmentation/blob/master/train.py
def dice_coef(y_true, y_pred):
    y_true_f = keras.flatten(y_true)
    y_pred_f = keras.flatten(y_pred)
    intersection = keras.sum(y_true_f * y_pred_f)
    return (2. * intersection + 1) / (keras.sum(y_true_f) + keras.sum(y_pred_f) + 1)

def dice_coef_loss(y_true, y_pred):
    return -dice_coef(y_true, y_pred)

def unet(input_size=(256,256,1)):
    inputs = Input(input_size)

    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)

    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2)
    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)

    conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool3)
    conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)

    conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool4)
    conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5)

    up6 = concatenate([Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(conv5), conv4], axis=3)
    conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(up6)
    conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv6)

    up7 = concatenate([Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(conv6), conv3], axis=3)
    conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(up7)
    conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv7)

    up8 = concatenate([Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(conv7), conv2], axis=3)
    conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(up8)
    conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv8)

    up9 = concatenate([Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(conv8), conv1], axis=3)
    conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(up9)
    conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv9)

    conv10 = Conv2D(1, (1, 1), activation='sigmoid')(conv9)

    return Model(inputs=[inputs], outputs=[conv10])
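
For reference, the dice_coef function above computes a smoothed Dice coefficient over the flattened ground-truth mask y and prediction \hat{y}, and the training loss is simply its negative (the +1 terms avoid division by zero on empty masks):

\mathrm{Dice}(y, \hat{y}) = \frac{2 \sum_i y_i \hat{y}_i + 1}{\sum_i y_i + \sum_i \hat{y}_i + 1},
\qquad
\mathcal{L}_{\mathrm{Dice}}(y, \hat{y}) = -\mathrm{Dice}(y, \hat{y})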

Helper functions to load test chest x-ray images

In [12]:
# From: https://github.com/zhixuhao/unet/blob/master/data.py
def test_load_image(test_file, target_size=(256,256)):
    img = cv2.imread(test_file, cv2.IMREAD_GRAYSCALE)
    img = img / 255
    img = cv2.resize(img, target_size)
    img = np.reshape(img, img.shape + (1,))
    img = np.reshape(img, (1,) + img.shape)
    return img

def test_generator(test_files, target_size=(256,256)):
    for test_file in test_files:
        yield test_load_image(test_file, target_size)

def save_result(save_path, npyfile, test_files):
    for i, item in enumerate(npyfile):
        result_file = test_files[i]
        img = (item[:, :, 0] * 255.).astype(np.uint8)

        filename, fileext = os.path.splitext(os.path.basename(result_file))

        result_file = os.path.join(save_path, "%s_predict%s" % (filename, fileext))

        cv2.imwrite(result_file, img)

Select test and validation files

In [13]:
def add_suffix(base_file, suffix):
    filename, fileext = os.path.splitext(base_file)
    return "%s_%s%s" % (filename, suffix, fileext)

test_files = [test_file for test_file in glob(os.path.join(SEGMENTATION_TEST_DIR, "*.png")) \
              if ("_mask" not in test_file \
                  and "_dilate" not in test_file \
                  and "_predict" not in test_file)]

validation_data = (test_load_image(test_files[0], target_size=(512, 512)),
                   test_load_image(add_suffix(test_files[0], "dilate"), target_size=(512, 512)))

len(test_files), len(validation_data)

Out[13]:
(100, 2)

Prepare and train the U-Net model. It will take a while...

In [14]:
train_generator_args = dict(rotation_range=0.2,
                            width_shift_range=0.05,
                            height_shift_range=0.05,
                            shear_range=0.05,
                            zoom_range=0.05,
                            horizontal_flip=True,
                            fill_mode='nearest')

train_gen = train_generator(2,
                            SEGMENTATION_TRAIN_DIR,
                            'image',
                            'dilate',
                            train_generator_args,
                            target_size=(512,512),
                            save_to_dir=os.path.abspath(SEGMENTATION_AUG_DIR))

model = unet(input_size=(512,512,1))
model.compile(optimizer=Adam(lr=1e-5), loss=dice_coef_loss, \
              metrics=[dice_coef, 'binary_accuracy'])
model.summary()

model_checkpoint = ModelCheckpoint('unet_lung_seg.hdf5',
                                   monitor='loss',
                                   verbose=1,
                                   save_best_only=True)

history = model.fit_generator(train_gen,
                              steps_per_epoch=STEPS_PER_EPOC,
                              epochs=EPOCHS,
                              callbacks=[model_checkpoint],
                              validation_data=validation_data)

__________________________________________________________________________________________________
Layer (type)                    Output Shape          Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 512, 512, 1)   0
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 512, 512, 32)  320         input_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 512, 512, 32)  9248        conv2d_1[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 256, 256, 32)  0           conv2d_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 256, 256, 64)  18496       max_pooling2d_1[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 256, 256, 64)  36928       conv2d_3[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 128, 128, 64)  0           conv2d_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 128, 128, 128) 73856       max_pooling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 128, 128, 128) 147584      conv2d_5[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D)  (None, 64, 64, 128)   0           conv2d_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 64, 64, 256)   295168      max_pooling2d_3[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 64, 64, 256)   590080      conv2d_7[0][0]
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D)  (None, 32, 32, 256)   0           conv2d_8[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 32, 32, 512)   1180160     max_pooling2d_4[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 32, 32, 512)   2359808     conv2d_9[0][0]
__________________________________________________________________________________________________
conv2d_transpose_1 (Conv2DTrans (None, 64, 64, 256)   524544      conv2d_10[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 64, 64, 512)   0           conv2d_transpose_1[0][0]
                                                                  conv2d_8[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 64, 64, 256)   1179904     concatenate_1[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 64, 64, 256)   590080      conv2d_11[0][0]
__________________________________________________________________________________________________
conv2d_transpose_2 (Conv2DTrans (None, 128, 128, 128) 131200      conv2d_12[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, 128, 128, 256) 0           conv2d_transpose_2[0][0]
                                                                  conv2d_6[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 128, 128, 128) 295040      concatenate_2[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 128, 128, 128) 147584      conv2d_13[0][0]
__________________________________________________________________________________________________
conv2d_transpose_3 (Conv2DTrans (None, 256, 256, 64)  32832       conv2d_14[0][0]
__________________________________________________________________________________________________
concatenate_3 (Concatenate)     (None, 256, 256, 128) 0           conv2d_transpose_3[0][0]
                                                                  conv2d_4[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, 256, 256, 64)  73792       concatenate_3[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, 256, 256, 64)  36928       conv2d_15[0][0]
__________________________________________________________________________________________________
conv2d_transpose_4 (Conv2DTrans (None, 512, 512, 32)  8224        conv2d_16[0][0]
__________________________________________________________________________________________________
concatenate_4 (Concatenate)     (None, 512, 512, 64)  0           conv2d_transpose_4[0][0]
                                                                  conv2d_2[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, 512, 512, 32)  18464       concatenate_4[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, 512, 512, 32)  9248        conv2d_17[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (None, 512, 512, 1)   33          conv2d_18[0][0]
==================================================================================================
Total params: 7,759,521
Trainable params: 7,759,521
Non-trainable params: 0
__________________________________________________________________________________________________
Epoch 1/48
Found 604 images belonging to 1 classes.
Found 604 images belonging to 1 classes.
512/512 [==============================] - 196s 384ms/step - loss: -0.4672 - dice_coef: 0.4672 - binary_accuracy: 0.4820 - val_loss: -0.4817 - val_dice_coef: 0.4817 - val_binary_accuracy: 0.6168

Epoch 00001: loss improved from inf to -0.46716, saving model to unet_lung_seg.hdf5
Epoch 2/48
512/512 [==============================] - 186s 364ms/step - loss: -0.6585 - dice_coef: 0.6585 - binary_accuracy: 0.7137 - val_loss: -0.5781 - val_dice_coef: 0.5781 - val_binary_accuracy: 0.7190

Epoch 00002: loss improved from -0.46716 to -0.65851, saving model to unet_lung_seg.hdf5
Epoch 3/48
512/512 [==============================] - 187s 364ms/step - loss: -0.7434 - dice_coef: 0.7434 - binary_accuracy: 0.8146 - val_loss: -0.8438 - val_dice_coef: 0.8438 - val_binary_accuracy: 0.9300

Epoch 00003: loss improved from -0.65851 to -0.74345, saving model to unet_lung_seg.hdf5
Epoch 4/48
512/512 [==============================] - 186s 364ms/step - loss: -0.8688 - dice_coef: 0.8688 - binary_accuracy: 0.9202 - val_loss: -0.8887 - val_dice_coef: 0.8887 - val_binary_accuracy: 0.9472

Epoch 00004: loss improved from -0.74345 to -0.86877, saving model to unet_lung_seg.hdf5
Epoch 5/48
512/512 [==============================] - 186s 364ms/step - loss: -0.8889 - dice_coef: 0.8889 - binary_accuracy: 0.9327 - val_loss: -0.8939 - val_dice_coef: 0.8939 - val_binary_accuracy: 0.9485

Epoch 00005: loss improved from -0.86877 to -0.88886, saving model to unet_lung_seg.hdf5
Epoch 6/48
512/512 [==============================] - 186s 364ms/step - loss: -0.8984 - dice_coef: 0.8984 - binary_accuracy: 0.9377 - val_loss: -0.8765 - val_dice_coef: 0.8765 - val_binary_accuracy: 0.9439

Epoch 00006: loss improved from -0.88886 to -0.89842, saving model to unet_lung_seg.hdf5
Epoch 7/48
512/512 [==============================] - 186s 363ms/step - loss: -0.9066 - dice_coef: 0.9066 - binary_accuracy: 0.9430 - val_loss: -0.8985 - val_dice_coef: 0.8985 - val_binary_accuracy: 0.9513

Epoch 00007: loss improved from -0.89842 to -0.90660, saving model to unet_lung_seg.hdf5
Epoch 8/48
512/512 [==============================] - 186s 363ms/step - loss: -0.9054 - dice_coef: 0.9054 - binary_accuracy: 0.9418 - val_loss: -0.9128 - val_dice_coef: 0.9128 - val_binary_accuracy: 0.9576

Epoch 00008: loss did not improve from -0.90660

Epoch 9/48
512/512 [==============================] - 186s 363ms/step - loss: -0.9200 - dice_coef: 0.9200 - binary_accuracy: 0.9509 - val_loss: -0.9203 - val_dice_coef: 0.9203 - val_binary_accuracy: 0.9591

Epoch 00009: loss improved from -0.90660 to -0.92003, saving model to unet_lung_seg.hdf5
Epoch 10/48
512/512 [==============================] - 186s 363ms/step - loss: -0.9265 - dice_coef: 0.9265 - binary_accuracy: 0.9556 - val_loss: -0.9365 - val_dice_coef: 0.9365 - val_binary_accuracy: 0.9684

Epoch 00010: loss improved from -0.92003 to -0.92648, saving model to unet_lung_seg.hdf5
Epoch 11/48
512/512 [==============================] - 186s 363ms/step - loss: -0.9325 - dice_coef: 0.9325 - binary_accuracy: 0.9589 - val_loss: -0.9397 - val_dice_coef: 0.9397 - val_binary_accuracy: 0.9699

Epoch 00011: loss improved from -0.92648 to -0.93251, saving model to unet_lung_seg.hdf5
Epoch 12/48
512/512 [==============================] - 186s 362ms/step - loss: -0.9378 - dice_coef: 0.9378 - binary_accuracy: 0.9623 - val_loss: -0.9390 - val_dice_coef: 0.9390 - val_binary_accuracy: 0.9688

Epoch 00012: loss improved from -0.93251 to -0.93783, saving model to unet_lung_seg.hdf5
Epoch 13/48
512/512 [==============================] - 186s 362ms/step - loss: -0.9411 - dice_coef: 0.9411 - binary_accuracy: 0.9643 - val_loss: -0.9487 - val_dice_coef: 0.9487 - val_binary_accuracy: 0.9744

Epoch 00013: loss improved from -0.93783 to -0.94113, saving model to unet_lung_seg.hdf5
Epoch 14/48
512/512 [==============================] - 185s 362ms/step - loss: -0.9444 - dice_coef: 0.9444 - binary_accuracy: 0.9662 - val_loss: -0.9363 - val_dice_coef: 0.9363 - val_binary_accuracy: 0.9693

Epoch 00014: loss improved from -0.94113 to -0.94444, saving model to unet_lung_seg.hdf5
Epoch 15/48
512/512 [==============================] - 185s 362ms/step - loss: -0.9459 - dice_coef: 0.9459 - binary_accuracy: 0.9673 - val_loss: -0.9399 - val_dice_coef: 0.9399 - val_binary_accuracy: 0.9712

Epoch 00015: loss improved from -0.94444 to -0.94586, saving model to unet_lung_seg.hdf5
Epoch 16/48
512/512 [==============================] - 186s 362ms/step - loss: -0.9474 - dice_coef: 0.9474 - binary_accuracy: 0.9681 - val_loss: -0.9566 - val_dice_coef: 0.9566 - val_binary_accuracy: 0.9778

Epoch 00016: loss improved from -0.94586 to -0.94743, saving model to unet_lung_seg.hdf5
Epoch 17/48
512/512 [==============================] - 186s 362ms/step - loss: -0.9493 - dice_coef: 0.9493 - binary_accuracy: 0.9693 - val_loss: -0.9560 - val_dice_coef: 0.9560 - val_binary_accuracy: 0.9777

Epoch 00017: loss improved from -0.94743 to -0.94934, saving model to unet_lung_seg.hdf5
Epoch 18/48
512/512 [==============================] - 186s 363ms/step - loss: -0.9524 - dice_coef: 0.9524 - binary_accuracy: 0.9710 - val_loss: -0.9543 - val_dice_coef: 0.9543 - val_binary_accuracy: 0.9765

Epoch 00018: loss improved from -0.94934 to -0.95237, saving model to unet_lung_seg.hdf5
Epoch 19/48
512/512 [==============================] - 186s 362ms/step - loss: -0.9522 - dice_coef: 0.9522 - binary_accuracy: 0.9709 - val_loss: -0.9573 - val_dice_coef: 0.9573 - val_binary_accuracy: 0.9782

Epoch 00019: loss did not improve from -0.95237
Epoch 20/48
484/512 [===========================>..] - ETA: 10s - loss: -0.9504 - dice_coef: 0.9504 - binary_accuracy: 0.9699

Show some results from the model fitting history

In [15]:
fig, axs = plt.subplots(1, 2, figsize = (15, 4))

training_loss = history.history['loss']
validation_loss = history.history['val_loss']

training_accuracy = history.history['binary_accuracy']
validation_accuracy = history.history['val_binary_accuracy']

epoch_count = range(1, len(training_loss) + 1)

axs[0].plot(epoch_count, training_loss, 'r--')
axs[0].plot(epoch_count, validation_loss, 'b-')
axs[0].legend(['Training Loss', 'Validation Loss'])

axs[1].plot(epoch_count, training_accuracy, 'r--')
axs[1].plot(epoch_count, validation_accuracy, 'b-')
axs[1].legend(['Training Accuracy', 'Validation Accuracy'])

Out[15]:
<matplotlib.legend.Legend at 0x7f7a143ede48>

Make lung segmentation predictions

In [16]:
test_gen = test_generator(test_files, target_size=(512,512))
results = model.predict_generator(test_gen, len(test_files), verbose=1)
save_result(SEGMENTATION_TEST_DIR, results, test_files)

100/100 [==============================] - 7s 69ms/step

4. Results
Below, we show some results of our work, presented as the predicted segmentation, the gold standard
(manually segmented) mask, and the difference between the two.

The next step will be to select the lung area on the RSNA image dataset and generate a lungs-only
image dataset; a rough sketch of what that could look like follows, before the result figures.
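
As a hedged sketch of that next step (not part of this kernel; the input path below is hypothetical and the 0.5 threshold is an assumption), the saved weights could be reloaded and the predicted mask used to keep only the lung pixels of a chest x-ray:

from keras.models import load_model

# Reload the checkpoint saved during training; the custom loss/metric must be passed explicitly.
seg_model = load_model('unet_lung_seg.hdf5',
                       custom_objects={'dice_coef_loss': dice_coef_loss,
                                       'dice_coef': dice_coef})

cxr = test_load_image('some_chest_xray.png', target_size=(512, 512))   # hypothetical input file
pred = seg_model.predict(cxr)[0, :, :, 0]                              # (512, 512) probabilities in [0, 1]

lung_mask = (pred > 0.5).astype(np.uint8)                              # assumed 0.5 threshold
lungs_only = (cxr[0, :, :, 0] * lung_mask * 255).astype(np.uint8)      # zero out everything outside the lungs
cv2.imwrite('lungs_only.png', lungs_only)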

In [17]:
image = cv2.imread("../input/segmentation/test/CHNCXR_0003_0.png")
predict_image = cv2.imread("../input/segmentation/test/CHNCXR_0003_0_predict.png")
mask_image = cv2.imread("../input/segmentation/test/CHNCXR_0003_0_dilate.png")

fig, axs = plt.subplots(4, 3, figsize=(16, 16))

axs[0, 0].set_title("Predicted")
axs[0, 0].imshow(add_colored_mask(image, predict_image))
axs[0, 1].set_title("Gold Std.")
axs[0, 1].imshow(add_colored_mask(image, mask_image))
axs[0, 2].set_title("Diff.")
axs[0, 2].imshow(diff_mask(mask_image, predict_image))

image = cv2.imread("../input/segmentation/test/MCUCXR_0003_0.png")
predict_image = cv2.imread("../input/segmentation/test/MCUCXR_0003_0_predict.png")
mask_image = cv2.imread("../input/segmentation/test/MCUCXR_0003_0_dilate.png")

axs[1, 0].set_title("Predicted")
axs[1, 0].imshow(add_colored_mask(image, predict_image))
axs[1, 1].set_title("Gold Std.")
axs[1, 1].imshow(add_colored_mask(image, mask_image))
axs[1, 2].set_title("Diff.")
axs[1, 2].imshow(diff_mask(mask_image, predict_image))

image = cv2.imread("../input/segmentation/test/CHNCXR_0020_0.png")
predict_image = cv2.imread("../input/segmentation/test/CHNCXR_0020_0_predict.png")
mask_image = cv2.imread("../input/segmentation/test/CHNCXR_0020_0_dilate.png")

axs[2, 0].set_title("Predicted")
axs[2, 0].imshow(add_colored_mask(image, predict_image))
axs[2, 1].set_title("Gold Std.")
axs[2, 1].imshow(add_colored_mask(image, mask_image))
axs[2, 2].set_title("Diff.")
axs[2, 2].imshow(diff_mask(mask_image, predict_image))

image = cv2.imread("../input/segmentation/test/MCUCXR_0016_0.png")
predict_image = cv2.imread("../input/segmentation/test/MCUCXR_0016_0_predict.png")
mask_image = cv2.imread("../input/segmentation/test/MCUCXR_0016_0_dilate.png")

axs[3, 0].set_title("Predicted")
axs[3, 0].imshow(add_colored_mask(image, predict_image))
axs[3, 1].set_title("Gold Std.")
axs[3, 1].imshow(add_colored_mask(image, mask_image))
axs[3, 2].set_title("Diff.")
axs[3, 2].imshow(diff_mask(mask_image, predict_image))

Out[17]:
<matplotlib.image.AxesImage at 0x7f7a1410acc0>


In [18]:
!tar zcvf results.tgz --directory=../input/segmentation/test .

./
./CHNCXR_0657_1_dilate.png
./MCUCXR_0071_0_dilate.png
./CHNCXR_0283_0_predict.png
./MCUCXR_0006_0_dilate.png
./MCUCXR_0367_1.png
./CHNCXR_0020_0.png
./CHNCXR_0030_0_mask.png
./MCUCXR_0006_0_mask.png
./CHNCXR_0462_1_mask.png
./MCUCXR_0113_1_mask.png
./MCUCXR_0101_0_dilate.png
./CHNCXR_0091_0_dilate.png
./MCUCXR_0030_0_dilate.png
./MCUCXR_0399_1_dilate.png
./MCUCXR_0367_1_dilate.png
./CHNCXR_0572_1_mask.png
./CHNCXR_0608_1_mask.png
./CHNCXR_0649_1.png
./MCUCXR_0046_0.png
./MCUCXR_0188_1.png
./CHNCXR_0070_0_mask.png
./CHNCXR_0385_1_mask.png
./MCUCXR_0350_1.png
./MCUCXR_0255_1_mask.png
./MCUCXR_0017_0.png
./MCUCXR_0188_1_predict.png
./MCUCXR_0390_1_predict.png
./MCUCXR_0030_0_mask.png
./CHNCXR_0658_1_dilate.png
./CHNCXR_0460_1.png
./CHNCXR_0091_0_mask.png
./CHNCXR_0157_0_mask.png
./MCUCXR_0059_0_mask.png
./CHNCXR_0460_1_dilate.png
./MCUCXR_0350_1_predict.png
./CHNCXR_0572_1.png
./CHNCXR_0320_0_mask.png
./CHNCXR_0462_1.png
./CHNCXR_0152_0_mask.png
./CHNCXR_0230_0_predict.png
./MCUCXR_0011_0.png
./MCUCXR_0258_1_mask.png
./CHNCXR_0538_1.png
./CHNCXR_0152_0_dilate.png
./CHNCXR_0608_1_dilate.png
./MCUCXR_0367_1_mask.png
./MCUCXR_0313_1_predict.png
./MCUCXR_0051_0_mask.png
./MCUCXR_0150_1_mask.png
./CHNCXR_0506_1.png
./MCUCXR_0289_1_predict.png
./CHNCXR_0658_1.png
./MCUCXR_0101_0.png
./MCUCXR_0058_0_dilate.png
./MCUCXR_0095_0_dilate.png
./MCUCXR_0275_1_dilate.png
./CHNCXR_0620_1.png
./CHNCXR_0375_1_mask.png
./MCUCXR_0275_1.png
./CHNCXR_0446_1_mask.png
./MCUCXR_0017_0_mask.png
./CHNCXR_0238_0.png
./MCUCXR_0141_1_predict.png
./CHNCXR_0005_0_predict.png
./MCUCXR_0311_1_predict.png
./CHNCXR_0620_1_mask.png
./MCUCXR_0195_1_mask.png
./MCUCXR_0350_1_mask.png
./MCUCXR_0091_0_mask.png
./CHNCXR_0628_1_mask.png
./MCUCXR_0046_0_predict.png
./CHNCXR_0520_1_mask.png
./MCUCXR_0046_0_mask.png
./MCUCXR_0141_1_dilate.png
./MCUCXR_0080_0_mask.png
./MCUCXR_0075_0_predict.png
./MCUCXR_0080_0_dilate.png
./CHNCXR_0651_1.png
./MCUCXR_0026_0_mask.png
./CHNCXR_0030_0_predict.png
./CHNCXR_0334_1_dilate.png
./MCUCXR_0049_0_dilate.png
./MCUCXR_0003_0.png
./CHNCXR_0030_0.png
./CHNCXR_0510_1_mask.png
./MCUCXR_0113_1_dilate.png
./MCUCXR_0150_1_predict.png
./MCUCXR_0049_0.png
./MCUCXR_0313_1.png
./MCUCXR_0016_0_mask.png
./CHNCXR_0085_0_mask.png
./CHNCXR_0657_1.png
./CHNCXR_0538_1_predict.png
./CHNCXR_0567_1_dilate.png
./CHNCXR_0334_1_predict.png
./CHNCXR_0320_0.png
./CHNCXR_0032_0.png
./CHNCXR_0238_0_mask.png
./CHNCXR_0423_1_dilate.png
./MCUCXR_0035_0.png
./MCUCXR_0049_0_mask.png
./MCUCXR_0375_1_mask.png
./CHNCXR_0157_0_predict.png
./CHNCXR_0259_0.png
./CHNCXR_0003_0_predict.png
./CHNCXR_0658_1_predict.png
./CHNCXR_0329_1.png
./CHNCXR_0375_1_dilate.png
./MCUCXR_0182_1.png
./CHNCXR_0409_1_mask.png
./CHNCXR_0572_1_predict.png
./MCUCXR_0057_0.png
./MCUCXR_0051_0.png
./MCUCXR_0017_0_predict.png
./MCUCXR_0095_0_predict.png
./CHNCXR_0003_0_dilate.png
./MCUCXR_0057_0_mask.png
./MCUCXR_0003_0_predict.png
./CHNCXR_0387_1_mask.png
./MCUCXR_0258_1_dilate.png
./MCUCXR_0170_1_mask.png
./MCUCXR_0188_1_mask.png
./CHNCXR_0329_1_predict.png
./CHNCXR_0157_0_dilate.png
./MCUCXR_0059_0_dilate.png
./CHNCXR_0329_1_dilate.png
./MCUCXR_0289_1.png
./CHNCXR_0611_1_dilate.png
./CHNCXR_0152_0_predict.png
./CHNCXR_0423_1_predict.png
./MCUCXR_0375_1.png
./CHNCXR_0329_1_mask.png
./CHNCXR_0608_1_predict.png
./MCUCXR_0011_0_predict.png
./CHNCXR_0020_0_mask.png
./CHNCXR_0259_0_predict.png
./MCUCXR_0258_1.png
./CHNCXR_0004_0.png
./MCUCXR_0017_0_dilate.png
./MCUCXR_0099_0_mask.png
./MCUCXR_0026_0_dilate.png
./CHNCXR_0122_0_dilate.png
./CHNCXR_0085_0_predict.png
./MCUCXR_0350_1_dilate.png
./CHNCXR_0005_0.png
./CHNCXR_0567_1.png
./CHNCXR_0068_0_mask.png
./CHNCXR_0070_0.png
./MCUCXR_0059_0_predict.png
./CHNCXR_0283_0_mask.png
./MCUCXR_0058_0.png
./MCUCXR_0046_0_dilate.png
./MCUCXR_0030_0.png
./CHNCXR_0032_0_mask.png
./CHNCXR_0155_0.png
./CHNCXR_0651_1_predict.png
./MCUCXR_0058_0_mask.png
./CHNCXR_0538_1_dilate.png
./CHNCXR_0584_1_predict.png
./MCUCXR_0101_0_predict.png
./CHNCXR_0409_1_predict.png
./MCUCXR_0182_1_mask.png
./MCUCXR_0311_1_mask.png
./MCUCXR_0030_0_predict.png
./MCUCXR_0141_1.png
./MCUCXR_0099_0_dilate.png
./MCUCXR_0071_0.png
./MCUCXR_0051_0_dilate.png
./MCUCXR_0266_1_dilate.png
./MCUCXR_0064_0_dilate.png
./MCUCXR_0375_1_predict.png
./MCUCXR_0399_1.png
./CHNCXR_0575_1_mask.png
./CHNCXR_0384_1_mask.png
./MCUCXR_0048_0.png
./MCUCXR_0051_0_predict.png
./MCUCXR_0390_1.png
./CHNCXR_0032_0_predict.png
./MCUCXR_0352_1.png
./CHNCXR_0408_1_dilate.png
./MCUCXR_0058_0_predict.png
./CHNCXR_0060_0.png
./CHNCXR_0238_0_predict.png
./CHNCXR_0446_1.png
./CHNCXR_0506_1_dilate.png
./MCUCXR_0141_1_mask.png
./CHNCXR_0004_0_predict.png
./MCUCXR_0144_1_predict.png
./MCUCXR_0255_1_predict.png
./CHNCXR_0608_1.png
./MCUCXR_0313_1_mask.png
./CHNCXR_0230_0_dilate.png
./MCUCXR_0006_0.png
./CHNCXR_0091_0.png
./MCUCXR_0077_0.png
./MCUCXR_0144_1.png
./MCUCXR_0016_0_predict.png
./CHNCXR_0155_0_mask.png
./CHNCXR_0259_0_dilate.png
./CHNCXR_0409_1.png
./CHNCXR_0567_1_mask.png
./CHNCXR_0649_1_mask.png
./CHNCXR_0575_1.png
./MCUCXR_0102_0_mask.png
./CHNCXR_0575_1_dilate.png
./CHNCXR_0020_0_dilate.png
./MCUCXR_0077_0_mask.png
./CHNCXR_0005_0_dilate.png
./MCUCXR_0275_1_predict.png
./CHNCXR_0658_1_mask.png
./CHNCXR_0275_0_predict.png
./MCUCXR_0091_0.png
./MCUCXR_0057_0_dilate.png
./MCUCXR_0113_1.png
./MCUCXR_0188_1_dilate.png
./CHNCXR_0408_1_mask.png
./MCUCXR_0082_0_mask.png
./CHNCXR_0628_1_predict.png
./MCUCXR_0077_0_dilate.png
./MCUCXR_0352_1_dilate.png
./MCUCXR_0311_1.png
./CHNCXR_0375_1.png
./CHNCXR_0387_1_predict.png
./MCUCXR_0080_0.png
./MCUCXR_0049_0_predict.png
./MCUCXR_0170_1_dilate.png
./CHNCXR_0584_1_dilate.png
./CHNCXR_0030_0_dilate.png
./MCUCXR_0311_1_dilate.png
./CHNCXR_0122_0.png

./CHNCXR_0320_0_predict.png
./MCUCXR_0352_1_mask.png
./CHNCXR_0387_1.png
./CHNCXR_0385_1_dilate.png
./MCUCXR_0003_0_mask.png
./CHNCXR_0275_0.png
./MCUCXR_0195_1_predict.png
./MCUCXR_0289_1_dilate.png
./CHNCXR_0060_0_mask.png
./CHNCXR_0423_1.png
./MCUCXR_0170_1.png
./CHNCXR_0460_1_mask.png
./CHNCXR_0375_1_predict.png
./MCUCXR_0367_1_predict.png
./MCUCXR_0096_0_mask.png
./MCUCXR_0399_1_predict.png
./MCUCXR_0016_0_dilate.png
./CHNCXR_0628_1_dilate.png
./CHNCXR_0122_0_mask.png
./MCUCXR_0077_0_predict.png
./CHNCXR_0275_0_dilate.png
./CHNCXR_0122_0_predict.png
./MCUCXR_0082_0_predict.png
./MCUCXR_0075_0_dilate.png
./MCUCXR_0075_0_mask.png
./CHNCXR_0155_0_predict.png
./CHNCXR_0384_1_predict.png
./MCUCXR_0399_1_mask.png
./CHNCXR_0334_1_mask.png
./MCUCXR_0301_1_predict.png
./MCUCXR_0099_0.png
./MCUCXR_0006_0_predict.png
./CHNCXR_0446_1_predict.png
./CHNCXR_0060_0_dilate.png
./CHNCXR_0567_1_predict.png
./CHNCXR_0070_0_predict.png
./CHNCXR_0572_1_dilate.png
./CHNCXR_0423_1_mask.png
./CHNCXR_0238_0_dilate.png
./CHNCXR_0112_0.png
./CHNCXR_0384_1.png
./CHNCXR_0409_1_dilate.png
./CHNCXR_0510_1_dilate.png
./MCUCXR_0150_1_dilate.png
./MCUCXR_0390_1_mask.png
./MCUCXR_0011_0_mask.png
./MCUCXR_0195_1_dilate.png
./MCUCXR_0375_1_dilate.png
./CHNCXR_0003_0.png
./MCUCXR_0266_1_mask.png
./CHNCXR_0628_1.png
./CHNCXR_0283_0.png
./MCUCXR_0096_0_dilate.png
./MCUCXR_0266_1_predict.png
./MCUCXR_0150_1.png
./MCUCXR_0071_0_mask.png
./CHNCXR_0005_0_mask.png
./CHNCXR_0032_0_dilate.png
./CHNCXR_0520_1_predict.png
./MCUCXR_0071_0_predict.png
./MCUCXR_0144_1_mask.png
./CHNCXR_0649_1_predict.png
./MCUCXR_0255_1_dilate.png
./CHNCXR_0651_1_dilate.png
./CHNCXR_0584_1_mask.png
./CHNCXR_0510_1_predict.png
./MCUCXR_0255_1.png
./MCUCXR_0042_0_dilate.png
./MCUCXR_0080_0_predict.png
./MCUCXR_0301_1_dilate.png
./MCUCXR_0258_1_predict.png
./CHNCXR_0506_1_predict.png
./MCUCXR_0101_0_mask.png
./MCUCXR_0082_0_dilate.png
./CHNCXR_0657_1_mask.png
./MCUCXR_0035_0_predict.png
./MCUCXR_0102_0_dilate.png
./MCUCXR_0266_1.png
./MCUCXR_0102_0.png
./MCUCXR_0035_0_mask.png
./CHNCXR_0620_1_predict.png
./MCUCXR_0096_0.png
./MCUCXR_0102_0_predict.png
./MCUCXR_0059_0.png
./CHNCXR_0259_0_mask.png
./CHNCXR_0462_1_predict.png
./CHNCXR_0020_0_predict.png
./CHNCXR_0651_1_mask.png
./MCUCXR_0042_0.png
./CHNCXR_0152_0.png
./CHNCXR_0068_0_dilate.png
./MCUCXR_0003_0_dilate.png
./MCUCXR_0016_0.png
./CHNCXR_0091_0_predict.png
./MCUCXR_0099_0_predict.png
./MCUCXR_0289_1_mask.png
./CHNCXR_0462_1_dilate.png
./CHNCXR_0387_1_dilate.png
./MCUCXR_0182_1_dilate.png
./MCUCXR_0011_0_dilate.png
./MCUCXR_0113_1_predict.png
./CHNCXR_0408_1.png
./MCUCXR_0275_1_mask.png
./CHNCXR_0003_0_mask.png
./MCUCXR_0048_0_dilate.png
./CHNCXR_0657_1_predict.png
./MCUCXR_0042_0_predict.png
./CHNCXR_0510_1.png
./MCUCXR_0301_1_mask.png
./CHNCXR_0538_1_mask.png
./MCUCXR_0095_0_mask.png
./MCUCXR_0026_0_predict.png
./MCUCXR_0170_1_predict.png
./CHNCXR_0575_1_predict.png
./CHNCXR_0320_0_dilate.png
./MCUCXR_0301_1.png
./CHNCXR_0004_0_mask.png
