In this study, we focused on the implementation of a U-Net based on pix2pix using Google Colaboratory, which provides an appropriate range of computing resources (GPUs), memory, and disk space for free. A previous study has already shown that Google Colaboratory can be an easy platform for studying multiple biological imaging domains using deep learning.29 The web browser–based Jupyter Notebooks provided by Colaboratory can interactively run Python code, currently the most widely used language for deploying machine learning applications. The original pix2pix code is available on the TensorFlow webpage (
https://www.tensorflow.org/tutorials/generative/pix2pix) under the Apache License 2.0, which allows users to use, modify, and redistribute the source code, and we modified it to use our sample dataset. Our code for segmenting SRF lesions in CSC is also available online at
https://data.mendeley.com/datasets/4k64fwnp4k. The code file titled “pix2pix_csc_segmentation.ipynb” provides users with a Jupyter Notebook for Google Colaboratory and a dataset for training and validation. To use this code implementation, researchers need no prior coding skills; they can run the code with a log-in to Google Drive and a few mouse clicks. As summarized in
Figure 3, this process can be implemented as follows. First, we prepared the example dataset in Google Drive. Second, we uploaded the code file to Google Drive and opened it from the Google Drive page in the web browser. Third, we matched the folder locations in the code to the addresses of the actual folders; for example, in our experiment, we saved the training dataset at “csc/segmentation/train/” and the test dataset at “csc/segmentation/test/” on our own Google Drive. Fourth, we clicked the play button to the left of each code cell in order. The second code cell links the datasets to the Colaboratory notebook through Google Drive. The size of the input and annotation images was set to a resolution of 256 × 256 pixels to match the original architecture of the pix2pix and U-Net models. In this experiment with pix2pix, we set lambda, the weight of the L1 reconstruction term, to 100, and we set the number of training iterations to 20,000. No augmentation was performed except right-to-left flipping and jittering (the “random_jitter” function in the code) because the experiment focused on the description of our annotated data and code implementation.
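As a minimal sketch of the first steps above, the following shows how the opening cells of such a notebook typically link Colaboratory to Google Drive and locate the dataset folders. The `dataset_paths` helper and the `MyDrive` root are our illustrative assumptions, not the published notebook code; the folder layout follows the example paths given above.

```python
import os

def dataset_paths(drive_root="/content/drive/MyDrive"):
    """Return the training and test folders, following the layout above:
    csc/segmentation/train/ and csc/segmentation/test/ on Google Drive."""
    train_dir = os.path.join(drive_root, "csc", "segmentation", "train")
    test_dir = os.path.join(drive_root, "csc", "segmentation", "test")
    return train_dir, test_dir

try:
    # Inside Colab, mounting prompts the user to authorize Google Drive
    # access; outside Colab this import fails, so the mount is skipped.
    from google.colab import drive
    drive.mount("/content/drive")
except ImportError:
    pass

train_dir, test_dir = dataset_paths()
```

In a running notebook, the user would then edit `drive_root` (or the two paths directly) so that they point at the actual folders created in the third step.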
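The augmentation and loss weighting described above can be sketched as follows. This NumPy version is an illustrative stand-in for the TensorFlow operations in the notebook: the `resize_nn` helper and the 286-pixel intermediate size reflect the standard pix2pix jitter (upscale, random crop back to 256 × 256, random left–right flip), and `generator_loss` shows where lambda = 100 enters the objective. None of these names are taken from the published code.

```python
import numpy as np

rng = np.random.default_rng(0)

def resize_nn(img, h, w):
    # Nearest-neighbour resize via index mapping (stand-in for a
    # TensorFlow image-resize op).
    rows = np.arange(h) * img.shape[0] // h
    cols = np.arange(w) * img.shape[1] // w
    return img[rows][:, cols]

def random_jitter(image, mask):
    """Jitter analogous to the notebook's "random_jitter" function:
    upscale to 286x286, take a random 256x256 crop, then randomly flip.
    The same offsets and flip are applied to the image and its mask."""
    image, mask = resize_nn(image, 286, 286), resize_nn(mask, 286, 286)
    y = int(rng.integers(0, 286 - 256 + 1))
    x = int(rng.integers(0, 286 - 256 + 1))
    image = image[y:y + 256, x:x + 256]
    mask = mask[y:y + 256, x:x + 256]
    if rng.random() < 0.5:  # right-to-left flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    return image, mask

LAMBDA = 100  # weight of the L1 term, as set in the experiment

def generator_loss(gan_loss, generated, target):
    # pix2pix generator objective: adversarial loss + LAMBDA * mean L1 error.
    return gan_loss + LAMBDA * float(np.mean(np.abs(target - generated)))
```

Because the crop offsets and the flip are shared between the input image and its annotation, the pixel-level correspondence required for segmentation training is preserved under augmentation.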