Python cheat sheet: instant object prediction with ResNet

  • ResNet can be downloaded through TensorFlow Hub and used for prediction right away.
  • The strength of this approach is that predictions can be made immediately, without going through a complex training process.

Object Detection with ResNet + Faster R-CNN

  • There is no actual training step; predictions are made directly with a pretrained model.
In [2]:
import os
import pandas as pd
import numpy as np
import base64
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
from six import BytesIO
from PIL import Image, ImageColor, ImageDraw, ImageFont, ImageOps
import time

Helper functions for drawing boxes on the image (not the important part)

In [8]:
def display_image(image):
    fig = plt.figure(figsize=(20, 15))
    plt.grid(False)
    plt.imshow(image)


def draw_bounding_box_on_image(image,
                               ymin,
                               xmin,
                               ymax,
                               xmax,
                               color,
                               font,
                               thickness=4,
                               display_str_list=()):
    """Adds a bounding box to an image."""
    draw = ImageDraw.Draw(image)
    im_width, im_height = image.size
    (left, right, top, bottom) = (xmin * im_width, xmax * im_width,
                                  ymin * im_height, ymax * im_height)
    draw.line([(left, top), (left, bottom), (right, bottom), (right, top),
               (left, top)],
              width=thickness,
              fill=color)

    # If the total height of the display strings added to the top of the bounding
    # box exceeds the top of the image, stack the strings below the bounding box
    # instead of above.
    display_str_heights = [font.getsize(ds)[1] for ds in display_str_list]
    # Each display_str has a top and bottom margin of 0.05x.
    total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights)

    if top > total_display_str_height:
        text_bottom = top
    else:
        text_bottom = bottom + total_display_str_height
    # Reverse list and print from bottom to top.
    for display_str in display_str_list[::-1]:
        text_width, text_height = font.getsize(display_str)
        margin = np.ceil(0.05 * text_height)
        draw.rectangle([(left, text_bottom - text_height - 2 * margin),
                        (left + text_width, text_bottom)],
                       fill=color)
        draw.text((left + margin, text_bottom - text_height - margin),
                  display_str,
                  fill="black",
                  font=font)
        text_bottom -= text_height - 2 * margin


def draw_boxes(image, boxes, class_names, scores, max_boxes=10, min_score=0.1):
    """Overlay labeled boxes on an image with formatted scores and label names."""
    colors = list(ImageColor.colormap.values())

    font = ImageFont.load_default()

    for i in range(min(boxes.shape[0], max_boxes)):
        if scores[i] >= min_score:
            ymin, xmin, ymax, xmax = tuple(boxes[i].tolist())
            display_str = "{}: {}%".format(class_names[i].decode("ascii"),
                                           int(100 * scores[i]))
            color = colors[hash(class_names[i]) % len(colors)]
            image_pil = Image.fromarray(np.uint8(image)).convert("RGB")
            draw_bounding_box_on_image(
                image_pil,
                ymin,
                xmin,
                ymax,
                xmax,
                color,
                font,
                display_str_list=[display_str])
            np.copyto(image, np.array(image_pil))
    return image
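
  • For reference, a quick sanity check of the drawing helpers with made-up dummy data (not part of the original notebook): draw_boxes expects normalized [ymin, xmin, ymax, xmax] boxes, byte-string class names (they are decoded with .decode("ascii")), and scores in [0, 1].
# Sanity-check the drawing helpers with hypothetical dummy data.
dummy_image = np.zeros((480, 640, 3), dtype=np.uint8)   # blank 640x480 RGB image
dummy_boxes = np.array([[0.1, 0.2, 0.6, 0.8]])          # one box in normalized coordinates
dummy_classes = np.array([b"Building"])                 # byte-string label
dummy_scores = np.array([0.87])
boxed = draw_boxes(dummy_image, dummy_boxes, dummy_classes, dummy_scores)
display_image(boxed)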

Running the prediction

  • The model we will load (faster_rcnn with inception_resnet_v2, trained on Open Images)
  • And the image to run detection on (building.jpg)
In [ ]:
module_handle = "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"
image_path = "../input/sds_image/building.jpg"
  • With TensorFlow Hub, the model can be loaded and used for prediction right away.
In [10]:
with tf.Graph().as_default():
    # Load the module from TensorFlow Hub
    detector = hub.Module(module_handle)
    # Decode the JPEG string into a uint8 image tensor
    image_string_placeholder = tf.placeholder(tf.string)
    decoded_image = tf.image.decode_jpeg(image_string_placeholder)
    # Module accepts as input tensors of shape [1, height, width, 3], i.e. batch
    # of size 1 and type tf.float32.
    decoded_image_float = tf.image.convert_image_dtype(image=decoded_image, dtype=tf.float32)
    module_input = tf.expand_dims(decoded_image_float, 0)
    # Run detection on the image.
    result = detector(module_input, as_dict=True)
    init_ops = [tf.global_variables_initializer(), tf.tables_initializer()]

    session = tf.Session()
    session.run(init_ops)

    # Load the downloaded and resized image and feed into the graph.
    with tf.gfile.Open(image_path, "rb") as binfile:
        image_string = binfile.read()

    result_out, image_out = session.run(
        [result, decoded_image],
        feed_dict={image_string_placeholder: image_string})
    print("Found %d objects." % len(result_out["detection_scores"]))
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
I0608 13:40:17.824107  5376 saver.py:1483] Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
I0608 13:40:41.454476  5376 saver.py:1483] Saver not created because there are no variables in the graph to restore
Found 100 objects.
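  • Beyond the object count, the result_out dictionary can be inspected directly. A short sketch (not in the original notebook) that prints the highest-scoring detections:
# Print the top-5 detections by score (sketch, not part of the original run).
top_idx = np.argsort(result_out["detection_scores"])[::-1][:5]
for i in top_idx:
    print("{}: {:.1f}%".format(
        result_out["detection_class_entities"][i].decode("ascii"),
        100 * result_out["detection_scores"][i]))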
  • Running the prediction on an actual Samsung SDS building image, we can confirm it is correctly detected as a Skyscraper.
In [11]:
# see the sample image with bounding boxes
image_with_boxes = draw_boxes(
    np.array(image_out), result_out["detection_boxes"],
    result_out["detection_class_entities"], result_out["detection_scores"])
display_image(image_with_boxes)
Font not found, using default font.
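
  • To keep the annotated result, a minimal sketch (the output filename is hypothetical) that writes the image with boxes to disk:
# Save the annotated image to disk (hypothetical output path).
Image.fromarray(np.uint8(image_with_boxes)).save("building_with_boxes.jpg")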
