I've been interested in computer vision for a long time, but I hadn't had any free time to make progress until this holiday season. Over Christmas and the New Year I experimented with various methodologies in OpenCV to detect road signs and other objects of interest to OpenStreetMap. After some failed experiments with thresholding and feature detection, the excellent /r/computervision community suggested using the dlib C++ library because it has more consistently good documentation and its pre-built tools are faster.
After a day or two figuring out how to compile the examples, I finally made some progress:
- Clone `dlib` from GitHub to your local machine:

      git clone git@github.com:davisking/dlib.git
- Install the `libjpeg` dependency:

      brew install libjpeg
- As of this writing, `dlib` won't compile due to weirdness with the system-installed `libjpeg`, so the developer suggests modifying line 277 of `dlib/CMakeLists.txt` to look like this:

      if (JPEG_FOUND AND LIBJPEG_IS_GOOD AND NOT APPLE)
- Compile the example programs that come with `dlib` (one of which is the classifier training program):

      mkdir dlib/examples/build
      cd dlib/examples/build
      cmake ..
      cmake --build .
- You'll also want to compile the `imglab` tool so you can mark up images to tell the system what you're searching for:

      mkdir dlib/tools/imglab/build
      cd dlib/tools/imglab/build
      cmake ..
      cmake --build .
- Download at least a dozen images that contain the object you're trying to recognize. For road signs I used Wikimedia Commons, Mapillary, and Google Image search. Put these images in one directory on your computer. I found that they all had to be converted to JPEG (I used `convert` from ImageMagick to do it) for the next step.
- Run the `imglab` tool once to create an XML list of the files you downloaded:

      dlib/tools/imglab/build/imglab -c signs.xml Downloads/sign*.jpg
- The file that `imglab` just created is a very simple XML file that lists relative paths for all the images. The next step is to specify where the objects are in the images. Run the `imglab` tool once more, but this time specify only the XML file you created above:

      dlib/tools/imglab/build/imglab signs.xml
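To give a sense of what that listing looks like, here's a minimal stand-in you can poke at with Python's standard library. The element names (`dataset`, `images`, `image`, the `file` attribute) follow imglab's usual output format, but verify them against the `signs.xml` on your own machine:

```python
import xml.etree.ElementTree as ET

# A minimal stand-in for the listing imglab generates with -c.
# Element and attribute names are assumptions based on imglab's
# usual output; check them against your own signs.xml.
sample = """<?xml version='1.0' encoding='ISO-8859-1'?>
<dataset>
  <name>imglab dataset</name>
  <images>
    <image file='Downloads/sign1.jpg'/>
    <image file='Downloads/sign2.jpg'/>
  </images>
</dataset>"""

root = ET.fromstring(sample)
files = [img.get("file") for img in root.iter("image")]
print(files)  # the relative paths, exactly as passed on the command line
```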
- This will open a window via XWindows/XQuartz:

  Now, for each image, hold down Shift and drag a bounding box around the object to detect. As in the screenshot here, I found that selecting the region immediately inside the black border of the sign resulted in a better model. If you accidentally create a bounding box that you didn't want, double-click the border of the bounding box and press Delete. There are more interface details in the Help menu.
- When you finish highlighting the regions of interest, save your changes (File -> Save) and exit the `imglab` tool. Your XML file now contains extra markup that specifies the bounding boxes for the objects of interest.
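One thing worth checking before you train: dlib fits a single fixed-shape detection window, so training goes more smoothly when your bounding boxes have roughly similar aspect ratios. Here's a quick sanity check sketched in Python; the `box` attribute names (`top`/`left`/`width`/`height`) follow imglab's usual convention, so verify them against your own file:

```python
import xml.etree.ElementTree as ET

# Stand-in for an annotated signs.xml; box attribute names are
# assumptions based on imglab's usual top/left/width/height output.
sample = """<dataset><images>
  <image file='Downloads/sign1.jpg'>
    <box top='40' left='60' width='64' height='78'/>
  </image>
  <image file='Downloads/sign2.jpg'>
    <box top='10' left='20' width='66' height='80'/>
  </image>
</images></dataset>"""

root = ET.fromstring(sample)
ratios = [int(b.get("width")) / int(b.get("height")) for b in root.iter("box")]
spread = max(ratios) - min(ratios)
print(ratios, spread)  # wildly different ratios suggest re-drawing some boxes
```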
- Next, we'll use the XML file you created to train a classifier:

      dlib/examples/build/train_object_detector -tv signs.xml

  This will run some processing tasks to build the model based on your XML file and then test the model against the images you gave it. If the model is excellent, it will match 100% of the original bounding boxes. In the output below, the `1 1 1` indicates perfect precision, recall, and average precision against the training data:

      Saving trained detector to object_detector.svm
      Testing detector on training data...
      Test detector (precision,recall,AP): 1 1 1

      Parameters used:
        threads:                  4
        C:                        1
        eps:                      0.01
        target-size:              5000
        detection window width:   65
        detection window height:  77
        upsample this many times: 0
  Your model is now stored in the `object_detector.svm` file and can be used to predict the location of similar objects in completely new images.
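If you end up scripting many training runs, the test line is easy to pick apart programmatically. A small sketch, assuming the log format shown above:

```python
# Parse the "Test detector (precision,recall,AP): 1 1 1" line from the
# trainer's output (format assumed to match the log shown above).
line = "Test detector (precision,recall,AP): 1 1 1"
precision, recall, ap = (float(x) for x in line.split(":")[1].split())
print(precision, recall, ap)  # 1.0 1.0 1.0
```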
- Find an image that you didn't train with. Run the object detector again with the new image specified as an argument:

      dlib/examples/build/train_object_detector Downloads/new_sign_image.jpg

  This time around the program will use the model you trained to highlight any matching objects in an XWindows/XQuartz window with the image as a background:
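For the curious: the detector that `train_object_detector` builds is a HOG filter evaluated over a sliding window, so detection amounts to scoring every window position and keeping the high scorers. Here's a toy illustration of that scanning loop in plain Python; it is not dlib's actual implementation, and the "image" and "filter" values are made-up numbers:

```python
# Toy sliding-window scan: slide a small "filter" over a 1-D "image"
# and record where the correlation score clears a threshold. dlib does
# the same idea in 2-D with HOG features and a learned filter.
image = [0, 0, 1, 2, 1, 0, 0, 1, 2, 1, 0]
filt = [1, 2, 1]          # made-up pattern we're searching for
threshold = 5

hits = []
for start in range(len(image) - len(filt) + 1):
    window = image[start:start + len(filt)]
    score = sum(w * f for w, f in zip(window, filt))
    if score >= threshold:
        hits.append((start, score))

print(hits)  # positions where the pattern matched strongly
```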