A MASK RCNN Python Cheat Sheet with the Open Image Dataset – Go Lab


1) Mask RCNN – the real new version with increased steps (multiprocessing resumed), for publishing

A simple Mask RCNN tuning example

  • Train only on images whose masks actually contain a detected object
  • The generator in model.py of the MRCNN implementation currently has a bug, so an infinite loop is possible when an image has no mask
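The mask-existence filter described above can be sketched like this (hypothetical ids and file names; the notebook builds the real index from the mask files later):

```python
# Keep only training images that have at least one mask file.
mask_ids = {'100009cf62726c53', '1000172a06986b17'}   # hypothetical ids with masks

def has_mask(path):
    # the 16 characters before '.jpg' are the image id
    return path.rsplit('.jpg')[0][-16:] in mask_ids

train_raw = ['a/100009cf62726c53.jpg', 'b/ffffffffffffffff.jpg']
train = [p for p in train_raw if has_mask(p)]   # only the first file survives
```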
In [1]:
import os
import sys
import random
import math
import glob
import traceback

import numpy as np
import pandas as pd
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from tqdm import tqdm
from PIL import Image
from sklearn.model_selection import KFold

import skimage
from skimage import data, color
from skimage.transform import rescale, resize, downscale_local_mean

from mrcnn.config import Config
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
from mrcnn.model import log

import tensorflow as tf
Using TensorFlow backend.

Preparing the data and loading the mask data

In [2]:
IMG_SIZE = 256
In [3]:
masks = [filename for filename in glob.iglob('../input/mask/' + '**/*.png', recursive=True)]
In [4]:
train_raw = [filename for filename in glob.iglob('../input/train/' + '**/*.jpg', recursive=True)]
In [5]:
len(train_raw)
Out[5]:
1712712

Checking metadata for the mask segmentation classes

In [6]:
mask_class = pd.read_csv('../tiny_input/class/classes-segmentation.txt', header=None)

mask_class = mask_class.reset_index()
mask_class['index'] = mask_class['index'] + 1

mask_class.columns = ['index','value']

mask_class.shape
Out[6]:
(350, 2)
In [7]:
mask_class.head(5)
Out[7]:
index value
0 1 /m/01_5g
1 2 /m/0c06p
2 3 /m/01lsmm
3 4 /m/01bqk0
4 5 /m/0l14j_

Checking the mask paths and object labels

  • It also helps to put a suitable index on the table for convenient use
In [8]:
mask_map = pd.DataFrame([i.rsplit('_', 1) for i in masks])
mask_map.columns = ['left', 'right']
# the class token sits after the first underscore in the path
mask_map['left2'] = mask_map['left'].apply(lambda x: x.split('_', 1)[1])
# file name without the directories (Windows-style '\\' separators)
mask_map['name'] = mask_map['left'].apply(lambda x: x.rsplit('\\', 1)[1])
# the 16-character image id before the first underscore
mask_map['name_origin'] = mask_map['name'].apply(lambda x: x.split('_', 1)[0])

mask_map = mask_map[['name_origin', 'left2']]
mask_map.columns = ['train', 'object']
mask_map['origin'] = masks
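To see what the splits above extract, here is the same parsing applied to one hypothetical Windows-style mask path (the file name is an assumption about the dataset layout):

```python
p = '../input/mask\\train-masks-1\\100009cf62726c53_m01bl7v_0.png'  # hypothetical file

left, tail = p.rsplit('_', 1)        # strip the trailing '_0.png' chunk
obj = left.split('_', 1)[1]          # class token after the first underscore
name = left.rsplit('\\', 1)[1]       # file name without directories
image_id = name.split('_', 1)[0]     # 16-character image id

print(image_id, obj)  # 100009cf62726c53 m01bl7v
```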
In [9]:
mask_map.head(5)
Out[9]:
train object origin
0 100009cf62726c53 m01bl7v ../input/mask\train-masks-1\100009cf62726c53_m…
1 1000172a06986b17 m03bt1vf ../input/mask\train-masks-1\1000172a06986b17_m…
2 1000172a06986b17 m03bt1vf ../input/mask\train-masks-1\1000172a06986b17_m…
3 1000172a06986b17 m03bt1vf ../input/mask\train-masks-1\1000172a06986b17_m…
4 1000172a06986b17 m05r655 ../input/mask\train-masks-1\1000172a06986b17_m…
In [10]:
mask_map_identifier = mask_map.set_index('train')
In [11]:
mask_map_identifier.head(5)
Out[11]:
object origin
train
100009cf62726c53 m01bl7v ../input/mask\train-masks-1\100009cf62726c53_m…
1000172a06986b17 m03bt1vf ../input/mask\train-masks-1\1000172a06986b17_m…
1000172a06986b17 m03bt1vf ../input/mask\train-masks-1\1000172a06986b17_m…
1000172a06986b17 m03bt1vf ../input/mask\train-masks-1\1000172a06986b17_m…
1000172a06986b17 m05r655 ../input/mask\train-masks-1\1000172a06986b17_m…

Train only on images that currently have a mask

  • It is important to look things up sensibly through the indexed table
In [12]:
def check_if_mask_exists(filename):
    # the 16 characters before '.jpg' are the image id
    f = filename.rsplit('.jpg')[0][-16:]
    return f in mask_map_identifier.index
In [13]:
train = []
for file in train_raw:
    if check_if_mask_exists(file):
        train.append(file)
In [ ]:
len(train)
Out[ ]:
808669

Object-mapping logic given a train image

In [ ]:
def get_objects_from_train_image(train_image_name):
    return mask_map[mask_map['train']==train_image_name]
In [ ]:
get_objects_from_train_image('50002559e6b827ed')
Out[ ]:
train object origin
586878 50002559e6b827ed m03bt1vf ../input/mask\train-masks-5\50002559e6b827ed_m…
586879 50002559e6b827ed m05r655 ../input/mask\train-masks-5\50002559e6b827ed_m…
586880 50002559e6b827ed m05r655 ../input/mask\train-masks-5\50002559e6b827ed_m…
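This boolean filter scans the whole ~800k-row table on every call; grouping once and looking up by key is a cheaper pattern. A sketch with toy data (not the notebook's table):

```python
import pandas as pd

mask_map = pd.DataFrame({
    'train':  ['a', 'a', 'b'],
    'object': ['m1', 'm2', 'm3'],
})

# one scan builds the groups; each later lookup is a dict access
groups = dict(tuple(mask_map.groupby('train')))

def get_objects_fast(train_image_name):
    return groups[train_image_name]

print(len(get_objects_fast('a')))  # 2
```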

Converting classes to ints

  • Assigning an index to each class and using it as the ID is the common MASK RCNN convention
In [ ]:
class_list = pd.read_csv('../tiny_input/class/classes-segmentation.txt', header=None)
class_list = class_list.reset_index()
class_list['index'] = class_list['index'] + 1
class_list.columns = ['image_index', 'object']
class_list['object'] = class_list['object'].apply(lambda x : x.replace('/',''))
print(class_list.head(10))
   image_index   object
0            1   m01_5g
1            2   m0c06p
2            3  m01lsmm
3            4  m01bqk0
4            5  m0l14j_
5            6   m0342h
6            7   m09j2d
7            8   m076bq
8            9   m01xqw
9           10   m01599
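Since each object token maps to exactly one index, the later per-object lookups can also be served from a plain dict. A sketch on a two-row toy stand-in for `class_list`:

```python
import pandas as pd

# toy stand-in mirroring the structure of class_list above
class_list = pd.DataFrame({'image_index': [1, 2], 'object': ['m01_5g', 'm0c06p']})
obj_to_index = dict(zip(class_list['object'], class_list['image_index']))

print(obj_to_index['m0c06p'])  # 2
```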

Loading metadata to set image height and width

  • Not strictly needed by the MASK RCNN library
  • Images are resized again anyway, so they end up at the configured image size
In [ ]:
train_meta = pd.read_csv('image_meta.csv')
In [ ]:
train_meta['image_name'] = train_meta['filepath'].apply(lambda x: x.rsplit('\\')[2].split('.jpg')[0])
train_meta.head(2)
train_meta = train_meta.set_index('filepath')
In [ ]:
# Only images that currently have a mask are trained on.
def get_wh_from_filepath(filepath):
    row = train_meta.loc[filepath]
    return row.width, row.height
In [ ]:
get_wh_from_filepath('../input/train\\train_1\\100009cf62726c53.jpg')
Out[ ]:
(683, 1024)

Building the Dataset (MRCNN)

  • Build on the Dataset class provided by the MRCNN library.
  • load_mask() is therefore the method that must be implemented; it loads the masks for segmentation.
  • Multiple masks per image id are possible.
  • A modified version of MRCNN's model.py is used here, with the continue statement removed from the while loop.
  • A version with multiprocessing removed (needed on Windows) is also used.
In [ ]:
class_list[class_list['object'] == 'm02p0tk3']
Out[ ]:
image_index object
77 78 m02p0tk3
In [ ]:
file_name = '10086973aa2c6d00.jpg'

print(file_name)

w = 576
h = 1024

# Pull every related row: train / object / mask filepath
df_objects = get_objects_from_train_image(file_name.rsplit('.jpg')[0][-16:])
df_objects
10086973aa2c6d00.jpg
Out[ ]:
train object origin
366 10086973aa2c6d00 m02p0tk3 ../input/mask\train-masks-1\10086973aa2c6d00_m…
367 10086973aa2c6d00 m02p0tk3 ../input/mask\train-masks-1\10086973aa2c6d00_m…
368 10086973aa2c6d00 m02p0tk3 ../input/mask\train-masks-1\10086973aa2c6d00_m…
369 10086973aa2c6d00 m02p0tk3 ../input/mask\train-masks-1\10086973aa2c6d00_m…
370 10086973aa2c6d00 m02p0tk3 ../input/mask\train-masks-1\10086973aa2c6d00_m…
371 10086973aa2c6d00 m02p0tk3 ../input/mask\train-masks-1\10086973aa2c6d00_m…
372 10086973aa2c6d00 m05r655 ../input/mask\train-masks-1\10086973aa2c6d00_m…

Caveats when running

  • Do not casually use skimage's resize here.
  • It changes the dtype of the uint8 data, so the result cannot be reused as-is.
  • If you do want to use resize, wrap the result with img_as_ubyte (from skimage import img_as_ubyte) so that the later resize step inside mrcnn still receives the raw image unchanged.
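A minimal sketch of the caveat on a toy mask: skimage's resize silently returns floats, and img_as_ubyte restores the uint8 range:

```python
import numpy as np
from skimage import img_as_ubyte
from skimage.transform import resize

mask = (np.random.rand(64, 64) > 0.5).astype(np.uint8) * 255  # toy binary mask

resized = resize(mask, (32, 32))   # silently becomes float in [0, 1]
restored = img_as_ubyte(resized)   # back to uint8 in [0, 255]

print(resized.dtype, restored.dtype)  # float64 uint8
```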
In [ ]:
class DetectorDataset(utils.Dataset):

    # Pre-load the files. One image can have multiple segmentations.
    def load_dataset(self, start, end):

        # Register the class information.
        for (i, c) in enumerate(class_list['object'], 1):
            self.add_class('openimage', i, c)

        # width and height are free parameters here and not particularly important.
        for i in train[start:end]:
            w, h = get_wh_from_filepath(i)
            self.add_image("openimage", image_id=i, path=i, width=w, height=h)

    def load_image(self, image_id):
        """Load the specified image and return a [H,W,3] Numpy array."""
        # Load image
        image = skimage.io.imread(self.image_info[image_id]['path'])
        # If grayscale, convert to RGB for consistency.
        if image.ndim != 3:
            image = skimage.color.gray2rgb(image)
        # If it has an alpha channel, remove it for consistency.
        if image.shape[-1] == 4:
            image = image[..., :3]
        return image

    # override
    def load_mask(self, id_num):

        file_name = self.image_info[id_num]['id']

        # Pull every mask row related to this image.
        df_objects = get_objects_from_train_image(file_name.rsplit('.jpg')[0][-16:])

        class_ids = list()
        object_length = len(df_objects)
        mask = np.zeros([IMG_SIZE, IMG_SIZE, object_length], dtype=np.int8)

        # Start training only if at least one object was found.
        if object_length >= 1:

            m = skimage.io.imread(df_objects.iloc[0]['origin'])
            mask = np.zeros([m.shape[0], m.shape[1], object_length], dtype=np.int8)

            # Pull every object related to the mask.
            for i, obj in enumerate(df_objects['object']):

                # Assign the i-th mask (all masks for one image are assumed
                # to share the same shape).
                mask[:, :, i] = skimage.io.imread(df_objects.iloc[i]['origin'])

                class_candidate = class_list[class_list['object'] == obj]

                # Only one row is needed, so take the 0th; the class id order matches i.
                if len(class_candidate) >= 1:
                    class_ids.append(class_candidate['image_index'].iloc[0])

        return mask.astype(bool), np.asarray(class_ids, dtype='int32')

    # This library stores everything in image_info.
    def image_info_list(self):
        print(self.image_info)

dataset = DetectorDataset()
dataset.load_dataset(0, 700000)  # train
dataset.prepare()

val_dataset = DetectorDataset()
val_dataset.load_dataset(700000, 808000)  # validation
val_dataset.prepare()

Loading a model with weights pretrained on the COCO dataset

  • Detailed settings are made in ShapesConfig
  • On a 6 GB RTX 2060, an IMAGES_PER_GPU of 6 is the maximum
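For reference, the effective batch size and a full-pass step count follow directly from the config values (simple arithmetic, using the split sizes above):

```python
GPU_COUNT = 1
IMAGES_PER_GPU = 6                       # max on a 6 GB RTX 2060 at 256x256
BATCH_SIZE = GPU_COUNT * IMAGES_PER_GPU  # Mask R-CNN's Config computes this the same way

# steps for one full pass over the 700,000-image training split
steps_full_pass = 700_000 // BATCH_SIZE
print(BATCH_SIZE, steps_full_pass)  # 6 116666
```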
In [ ]:
ROOT_DIR = os.path.abspath("../../")

MODEL_DIR = os.path.join(ROOT_DIR, "logs")

COCO_MODEL_PATH = "mask_rcnn_coco.h5"

if not os.path.exists("mask_rcnn_coco.h5"):
    utils.download_trained_weights(COCO_MODEL_PATH)
In [ ]:
class ShapesConfig(Config):
    """Configuration for training on the toy shapes dataset.
    Derives from the base Config class and overrides values specific
    to the toy shapes dataset.
    """
    # Give the configuration a recognizable name
    NAME = "shapes"

    # Train on 1 GPU and 8 images per GPU. We can put multiple images on each
    # GPU because the images are small. Batch size is 8 (GPUs * images/GPU).
    GPU_COUNT = 1
    IMAGES_PER_GPU = 6

    # Number of classes (including background)
    NUM_CLASSES = 351  

    # Use small images for faster training. Set the limits of the small side
    # the large side, and that determines the image shape.
    IMAGE_MIN_DIM = 256
    IMAGE_MAX_DIM = 256

    # Use smaller anchors because our image and objects are small
    RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128)  # anchor side in pixels

    # Reduce training ROIs per image because the images are small and have
    # few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
    TRAIN_ROIS_PER_IMAGE = 32

    # Use a small epoch since the data is simple
    STEPS_PER_EPOCH = 500 #700000 / IMAGES_PER_GPU

    # use small validation steps since the epoch is small
    VALIDATION_STEPS = 5
    
config = ShapesConfig()
config.display()
Configurations:
BACKBONE                       resnet101
BACKBONE_STRIDES               [4, 8, 16, 32, 64]
BATCH_SIZE                     6
BBOX_STD_DEV                   [0.1 0.1 0.2 0.2]
COMPUTE_BACKBONE_SHAPE         None
DETECTION_MAX_INSTANCES        100
DETECTION_MIN_CONFIDENCE       0.7
DETECTION_NMS_THRESHOLD        0.3
FPN_CLASSIF_FC_LAYERS_SIZE     1024
GPU_COUNT                      1
GRADIENT_CLIP_NORM             5.0
IMAGES_PER_GPU                 6
IMAGE_MAX_DIM                  256
IMAGE_META_SIZE                363
IMAGE_MIN_DIM                  256
IMAGE_MIN_SCALE                0
IMAGE_RESIZE_MODE              square
IMAGE_SHAPE                    [256 256   3]
LEARNING_MOMENTUM              0.9
LEARNING_RATE                  0.001
LOSS_WEIGHTS                   {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}
MASK_POOL_SIZE                 14
MASK_SHAPE                     [28, 28]
MAX_GT_INSTANCES               100
MEAN_PIXEL                     [123.7 116.8 103.9]
MINI_MASK_SHAPE                (56, 56)
NAME                           shapes
NUM_CLASSES                    351
POOL_SIZE                      7
POST_NMS_ROIS_INFERENCE        1000
POST_NMS_ROIS_TRAINING         2000
ROI_POSITIVE_RATIO             0.33
RPN_ANCHOR_RATIOS              [0.5, 1, 2]
RPN_ANCHOR_SCALES              (8, 16, 32, 64, 128)
RPN_ANCHOR_STRIDE              1
RPN_BBOX_STD_DEV               [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD              0.7
RPN_TRAIN_ANCHORS_PER_IMAGE    256
STEPS_PER_EPOCH                500
TOP_DOWN_PYRAMID_SIZE          256
TRAIN_BN                       False
TRAIN_ROIS_PER_IMAGE           32
USE_MINI_MASK                  True
USE_RPN_ROIS                   True
VALIDATION_STEPS               5
WEIGHT_DECAY                   0.0001


Setting the log and model save paths, and loading the initial model

In [ ]:
MODEL_DIR = os.path.join("./model/")
In [ ]:
model = modellib.MaskRCNN(mode="training", config=config,model_dir=MODEL_DIR)
In [ ]:
model.load_weights(COCO_MODEL_PATH, by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"])

Fine-tuning only the heads during training

In [ ]:
model.train(dataset, val_dataset, learning_rate=0.01, epochs=10, layers='heads')

Saving the weights

In [ ]:
model.keras_model.save_weights("./my_mask_rcnn_all_tuning_head_size.h5")
