The COTBLEDTCID approach to object detection and pose estimation, Part V – Circles detection

Introduction

Let’s do a summary of what we have done so far:

  • COT: colour thresholding. We separated yellow objects from the rest.
  • BLED: blob edge detection. We retrieved the bottom edges of blobs (pawns).
  • T: transformation. We transformed the image’s pixels into game field points (i.e. pixels to meters).

And now for the last step, CID: Circles Detection.

As you may have already noticed, pawns and towers of pawns are in fact circles when viewed from above. Therefore, the bottom edges we found with BLED become circular arcs once transformed into game field coordinates (step T). This is the property we exploit below.
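To give a rough idea of what this step boils down to, here is a minimal sketch of fitting a circle to a few edge points already expressed in meters, using an algebraic (Kåsa) least-squares fit solved with cv::solve. This is not the ClubElek code: the function name, the sample points and the choice of fitting method are illustrative assumptions.

// circle_fit_sketch.cpp -- illustrative only, not the original ClubElek code.
// Fits a circle x^2 + y^2 + D*x + E*y + F = 0 to 2D points (in meters) with a
// linear least-squares (Kasa) fit, then recovers the centre and radius.
#include <opencv2/core/core.hpp>
#include <cmath>
#include <cstdio>
#include <vector>

struct Circle { cv::Point2f center; float radius; };

static bool fitCircleKasa(const std::vector<cv::Point2f>& pts, Circle& out)
{
    if (pts.size() < 3) return false;

    cv::Mat A((int)pts.size(), 3, CV_32F);
    cv::Mat b((int)pts.size(), 1, CV_32F);
    for (int i = 0; i < (int)pts.size(); ++i) {
        const cv::Point2f& p = pts[i];
        A.at<float>(i, 0) = p.x;
        A.at<float>(i, 1) = p.y;
        A.at<float>(i, 2) = 1.0f;
        b.at<float>(i, 0) = -(p.x * p.x + p.y * p.y);
    }

    cv::Mat sol; // sol = [D, E, F]^T
    if (!cv::solve(A, b, sol, cv::DECOMP_SVD)) return false;

    float D = sol.at<float>(0), E = sol.at<float>(1), F = sol.at<float>(2);
    out.center = cv::Point2f(-D / 2.0f, -E / 2.0f);
    out.radius = std::sqrt(out.center.x * out.center.x +
                           out.center.y * out.center.y - F);
    return true;
}

int main()
{
    // A few points lying roughly on a 100 mm radius pawn centred at (0.50, 0.30) m.
    std::vector<cv::Point2f> arc;
    arc.push_back(cv::Point2f(0.60f, 0.30f));
    arc.push_back(cv::Point2f(0.57f, 0.37f));
    arc.push_back(cv::Point2f(0.50f, 0.40f));
    arc.push_back(cv::Point2f(0.43f, 0.37f));

    Circle c;
    if (fitCircleKasa(arc, c))
        std::printf("center = (%.3f, %.3f) m, radius = %.3f m\n",
                    c.center.x, c.center.y, c.radius);
    return 0;
}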

Continue reading


The COTBLEDTCID approach to object detection and pose estimation, Part III – Blob Edge Detection

Introduction

There is still too much information we do not need in the B&W image we obtained in the last step, so we have to extract the features we do need. One way of accomplishing this is a connected component analysis of the binary image, also known as blob labelling. However, as you will see, this method is not entirely suited to our needs, so a new approach is proposed: Blob Edge Detection.
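For reference, here is a minimal sketch of the classic blob labelling path using OpenCV's contour extraction. It is not the method finally retained (the post explains why), and the minimum-area filter is an illustrative assumption.

// blob_labelling_sketch.cpp -- illustrative only; the BLED method described in
// the post is different, this merely shows the classic blob-labelling approach.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 2) { std::printf("usage: %s binary_image\n", argv[0]); return 1; }

    // Load the already-thresholded (B&W) image produced by the COT step.
    cv::Mat bw = cv::imread(argv[1], 0 /* grayscale */);
    if (bw.empty()) return 1;

    // Connected components via contours: each external contour is one blob.
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bw.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); ++i) {
        double area = cv::contourArea(contours[i]);
        if (area < 100.0) continue;   // illustrative minimum-area filter
        cv::Rect box = cv::boundingRect(contours[i]);
        std::printf("blob %u: area=%.0f, bbox=(%d,%d %dx%d)\n",
                    (unsigned)i, area, box.x, box.y, box.width, box.height);
    }
    return 0;
}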

Continue reading

The COTBLEDTCID approach to object detection and pose estimation, Part I – Preface

Introduction

A fancy acronym that stands for the process of COlour Thresholding, Blob Edge Detection, Transformation and CIrcle Detection used for locating 3D objects on a plane. The next series of posts will explain the software algorithms used by ClubElek in 2011 to achieve computer vision: the problems we faced, the solutions we implemented and, most importantly, what we learned from this project.

These posts are targeted at a wide audience with some background in maths and preferably some background in computer vision, mostly because there is some maths and magic behind the algorithms used, but I’ll try to keep them as simple and clear as possible. Should you have questions or remarks, do not hesitate to comment!

This software was designed to detect the “pawns” and “figures” defined by the Eurobot 2011 rules and was demoed during “Industrie Lyon” from 5 to 8 April 2011. Before reading further, you should read the summary of the contest rules so you don’t get lost.

Besides the previously mentioned steps, 2 other steps were necessary before attempting any computer recognition: terrain calibration and colour calibration. These 2 processes will be explained in separate posts as they are far more complex than COTBLEDTCID itself.

After the pawns’ positions had been detected by means of COTBLEDTCID, they were sent wirelessly to the robot through an XBee connection.

Additional requirements

  • The software should run in a real-time environment. The faster the algorithm, the better.
  • The software should be easy to use and fast to configure. (Teams have only 1 minute and 30 seconds before a match to completely set up the robot and its peripherals).

Hardware set-up

  • A fit-PC2, a diskless and fanless computer running a customized Ubuntu version, controlled through SSH.
  • 3 identical Microsoft LifeCam Cinema webcams. Why 3 cameras, you may ask? Well, during a match there are two robots that constantly move around the table and occlude large parts of the terrain; with 3 cameras, chances are we can see most of the objects on the playing table at any time.

Assumptions made

  • Light intensity remains constant during the match and after calibration.
  • The cameras do not move during the match.

Both assumptions turned out to be inaccurate, but this did not affect the result, as the detection and pose estimation algorithm is fairly robust.

Shiny pics

What the computer sees (note that the robot’s game field wasn’t entirely finished at the time):

What the computer sees

What the computer understands (compare the pawns’ positions in both images; you may use the red top corner or the black area at the bottom of the image as a reference):

What the computer understands

It’s fairly accurate, isn’t it?

What’s next

In the next post I will explain how Colour Thresholding (COT for short) works and why we need it.

Capture Video with OpenCV and VideoInput (Windows only)

As you might have already noticed, OpenCV’s internal camera interface is far from complete. You can capture video from your camera without a hassle, but you’re very limited in what you can do.

For instance, let’s say you have a webcam that can run at HD resolution (1280 x 720 px). You can use OpenCV’s cv::VideoCapture class to get the frames, but you’re going to have a hard time getting the full resolution out of your device.
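As a minimal sketch of the problem, the snippet below requests 1280 x 720 through cv::VideoCapture::set and then checks what actually comes back. Depending on your OpenCV version the property constants may be spelled CV_CAP_PROP_FRAME_WIDTH/HEIGHT instead of the cv::CAP_PROP_* names assumed here.

// capture_resolution_sketch.cpp -- minimal sketch; on many Windows drivers the
// requested HD resolution is silently ignored, which is the problem described above.
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap(0);                 // first camera
    if (!cap.isOpened()) return 1;

    // Ask for HD; some backends silently fall back to a lower mode.
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);

    cv::Mat frame;
    cap >> frame;                            // grab one frame
    std::printf("requested 1280x720, got %dx%d\n", frame.cols, frame.rows);
    return 0;
}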

To work around this problem, one could use specialized libraries. The disadvantage: they’re usually not multi-platform, so you’ll find yourself writing classes for every platform on which your camera device operates.

Theo developed a very useful video capture library for Windows called videoInput.
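Below is a minimal sketch of how videoInput is typically combined with OpenCV: set up the device at the desired resolution and copy its pixels into a cv::Mat. The device index, resolution and flip flags are illustrative assumptions, not the configuration from the original post.

// videoinput_sketch.cpp -- illustrative sketch (Windows only); device index and
// flip flags are assumptions, see the videoInput documentation for details.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "videoInput.h"

int main()
{
    videoInput VI;
    int numDevices = videoInput::listDevices(); // prints the available devices
    if (numDevices == 0) return 1;

    const int dev = 0;                 // first camera (assumption)
    VI.setupDevice(dev, 1280, 720);    // request HD directly from the driver

    int w = VI.getWidth(dev), h = VI.getHeight(dev);
    cv::Mat frame(h, w, CV_8UC3);      // buffer the size videoInput delivers

    while (true) {
        if (VI.isFrameNew(dev)) {
            // Copy the newest frame into the cv::Mat (flag values follow the
            // common videoInput + OpenCV example).
            VI.getPixels(dev, frame.data, false, true);
            cv::imshow("videoInput + OpenCV", frame);
        }
        if (cv::waitKey(10) == 27) break;   // Esc to quit
    }

    VI.stopDevice(dev);
    return 0;
}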

Continue reading