Object Detection using YOLO
Checking and Changing the version of Keras
• Check the version: pip show keras
• Change the version: pip install keras==[target version]
pip show keras
pip install keras==2.6.0
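The same check can also be done from inside Python; here is a small sketch using only the standard library (it makes no assumption about which packages are installed):

```python
from importlib import metadata

def installed_version(package):
    """Return the installed version string of a package, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

print(installed_version('keras'))  # e.g. '2.6.0', or None if Keras is not installed
```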
YOLOv3 Weights Download
• Before running, download the yolov3-tiny.weights file
• Convert the format of the YOLOv3 files
• Convert the yolov3 files to .h5 format, which can be used in Keras
  ◦ No need to download them if you cloned the GitHub repository above.
• File description
  ◦ convert.py: run to convert the file format
  ◦ yolov3.cfg: definition of the model structure used in Darknet
  ◦ yolov3.weights: weights of the model trained with Darknet
!python convert.py yolov3-tiny.cfg yolov3-tiny.weights model_data/yolo_tiny.h5
Test with Sample Image
from IPython.display import display
from PIL import Image
from yolo import YOLO
import tensorflow.compat.v1.keras.backend as K
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

def objectDetection(file, model_path, class_path):
    yolo = YOLO(model_path=model_path, classes_path=class_path, anchors_path="model_data/yolo_tiny_anchors.txt")
    image = Image.open(file)
    result_image = yolo.detect_image(image)
    display(result_image)

objectDetection('dog.jpg', 'model_data/yolo_tiny.h5', 'model_data/coco_classes.txt')
Result
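Internally, detect_image returns the input image with the predicted boxes drawn on it via PIL. A minimal sketch of that drawing step (the helper name and the box coordinates are made up for illustration):

```python
from PIL import Image, ImageDraw

def draw_boxes(image, detections):
    """Draw labelled bounding boxes on a PIL image.
    detections: list of (label, (left, top, right, bottom)) tuples."""
    draw = ImageDraw.Draw(image)
    for label, box in detections:
        draw.rectangle(box, outline='red', width=2)
        draw.text((box[0], max(0, box[1] - 10)), label, fill='red')
    return image

img = Image.new('RGB', (416, 416), 'white')
img = draw_boxes(img, [('dog', (60, 120, 300, 380))])
```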
Test with Real Image
from IPython.display import display
from PIL import Image
from yolo import YOLO
import tensorflow.compat.v1.keras.backend as K
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

def objectDetection(file, model_path, class_path):
    yolo = YOLO(model_path=model_path, classes_path=class_path, anchors_path="model_data/yolo_tiny_anchors.txt")
    image = Image.open(file)
    result_image = yolo.detect_image(image)
    display(result_image)

objectDetection('resize_img/00000.jpg', 'model_data/yolo_tiny.h5', 'model_data/coco_classes.txt')
Result
Image annotation
• Convert the labelling data into the format used to train the model
import xml.etree.ElementTree as ET
from os import getcwd
import glob

classes = ['beam']

def convert_annotation(annotation_voc, train_all_file):
    tree = ET.parse(annotation_voc)
    objects = tree.findall('./object')
    for obj in objects:
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (int(float(xmlbox.find('xmin').text)), int(float(xmlbox.find('ymin').text)), int(float(xmlbox.find('xmax').text)), int(float(xmlbox.find('ymax').text)))
        train_all_file.write(" " + ",".join([str(a) for a in b]) + ',' + str(cls_id))

train_all_file = open('./data/personal/annotations_yolo/train_all.txt', 'w')

# Get annotations_voc list
xml_list = ['*.xml']  # glob patterns of the annotation files to convert; define as needed
for xml in xml_list:
    annotations_voc = glob.glob(f'./data/personal/annotations_voc/{xml}')
    for annotation_voc in annotations_voc:
        image_id = annotation_voc.split('/')[-1].split('.')[0] + '.JPG'
        print(image_id)
        train_all_file.write(f'./data/personal/image_train/{image_id}')
        convert_annotation(annotation_voc, train_all_file)
        train_all_file.write('\n')
train_all_file.close()
Result
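To make the output format concrete, here is a self-contained sketch of the same conversion applied to an inline VOC-style annotation (the sample XML, paths, and helper name are made up): each image becomes one line of the form `<image path> <xmin,ymin,xmax,ymax,class_id> ...`.

```python
import xml.etree.ElementTree as ET

classes = ['beam']

VOC_SAMPLE = """
<annotation>
  <object>
    <name>beam</name>
    <difficult>0</difficult>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
</annotation>
"""

def voc_to_yolo_line(image_path, xml_string):
    """Build one training line: image path followed by box,class tuples."""
    root = ET.fromstring(xml_string)
    boxes = []
    for obj in root.findall('object'):
        cls = obj.find('name').text
        if cls not in classes or int(obj.find('difficult').text) == 1:
            continue
        bb = obj.find('bndbox')
        coords = [int(float(bb.find(k).text)) for k in ('xmin', 'ymin', 'xmax', 'ymax')]
        boxes.append(",".join(map(str, coords)) + f",{classes.index(cls)}")
    return image_path + " " + " ".join(boxes)

print(voc_to_yolo_line('./data/personal/image_train/00000.JPG', VOC_SAMPLE))
# → ./data/personal/image_train/00000.JPG 48,240,195,371,0
```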