I've been interested in computer vision for a long time, but I hadn't had any free time to make progress until this holiday season. Over Christmas and the New Year I experimented with various methods in OpenCV to detect road signs and other objects of interest to OpenStreetMap. After some failed experiments with thresholding and feature detection, the excellent /r/computervision community suggested using the dlib C++ library because it has more consistently good documentation and its pre-built tools are faster.
After a day or two figuring out how to compile the examples, I finally made some progress:
- Clone `dlib` from Github to your local machine:

  ```
  git clone git@github.com:davisking/dlib.git
  ```

- Install the `libjpeg` dependency:

  ```
  brew install libjpeg
  ```
- As of this writing, `dlib` won't compile due to weirdness with the system-installed `libjpeg`, so the developer suggests modifying line 277 of `dlib/CMakeLists.txt` to look like this:

  ```
  if (JPEG_FOUND AND LIBJPEG_IS_GOOD AND NOT APPLE)
  ```
- Compile the example programs that come with `dlib` (one of which is the classifier training program):

  ```
  mkdir dlib/examples/build
  cd dlib/examples/build
  cmake ..
  cmake --build .
  ```
- You'll also want to compile the `imglab` tool so you can mark up images to tell the system what you're searching for (note the `cmake ..` configure step, which the build won't work without):

  ```
  mkdir dlib/tools/imglab/build
  cd dlib/tools/imglab/build
  cmake ..
  cmake --build .
  ```
- Download at least a dozen images that contain the object you're trying to recognize. For road signs I used Wikimedia Commons, Mapillary, and Google Image Search. Put these images in one directory on your computer. I found that they all had to be converted to JPEG (I used `convert` from ImageMagick to do it) for the next step.

- Run the `imglab` tool once to create an XML list of the files you downloaded:

  ```
  dlib/tools/imglab/build/imglab -c signs.xml Downloads/sign*.jpg
  ```
- The file that `imglab` just created is a very simple XML file that lists relative paths for all the images. The next step is to specify where the objects are in the images. Run the `imglab` tool once more, but this time specify only the XML file you created above:

  ```
  dlib/tools/imglab/build/imglab signs.xml
  ```
- This will open a window via XWindows/XQuartz. Now, for each image, hold down Shift and drag a bounding box around the object to detect. As in the screenshot here, I found that selecting the region immediately inside the black border of the sign resulted in a better model. If you accidentally create a bounding box that you didn't want, double-click the border of the box and press Delete. There are more interface details in the Help menu.
- When you finish highlighting the regions of interest, save the changes (File -> Save) and exit the `imglab` tool. Your XML file now contains extra markup that specifies the bounding boxes for the objects of interest.

- Next, we'll use the XML file you created to train a classifier:

  ```
  dlib/examples/build/train_object_detector -tv signs.xml
  ```

  This will run some processing tasks to build the model based on your XML file and then test the model against the images you gave it. If the model is excellent, it will match 100% of the original bounding boxes. The output below shows perfect precision, recall, and average precision, indicated by the `1 1 1`:

  ```
  Saving trained detector to object_detector.svm
  Testing detector on training data...
  Test detector (precision,recall,AP): 1 1 1

  Parameters used:
    threads:                 4
    C:                       1
    eps:                     0.01
    target-size:             5000
    detection window width:  65
    detection window height: 77
    upsample this many times : 0
  ```

  Your model is now stored in the `object_detector.svm` file and can be used to predict the location of similar objects in completely new images.
- Find an image that you didn't train with. Run the object detector again with the new image specified as an argument:

  ```
  dlib/examples/build/train_object_detector Downloads/new_sign_image.jpg
  ```

  This time around, the program will use the model you trained to highlight any detected objects in an XWindows/XQuartz window with the image as a background.
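Before training, it's worth sanity-checking the annotated signs.xml for images you forgot to box, since they contribute nothing to the model. Here's a short Python sketch that summarizes an imglab-style file; the element and attribute names mirror what imglab wrote for my files, so treat the exact layout as an assumption rather than a spec:

```python
# Sanity-check an imglab annotation file before training: count images and
# boxes, and flag images that have no bounding box at all.
# Assumed layout: <dataset><images><image file='...'><box .../></image>...
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version='1.0' encoding='ISO-8859-1'?>
<dataset>
  <images>
    <image file='Downloads/sign1.jpg'>
      <box top='73' left='57' width='178' height='103'/>
    </image>
    <image file='Downloads/sign2.jpg'/>
  </images>
</dataset>"""

def summarize(xml_text):
    """Return (image count, box count, list of image files with no boxes)."""
    root = ET.fromstring(xml_text)
    images = root.findall('./images/image')
    n_boxes = sum(len(img.findall('box')) for img in images)
    unboxed = [img.get('file') for img in images if not img.findall('box')]
    return len(images), n_boxes, unboxed

n_images, n_boxes, unboxed = summarize(SAMPLE)
print(n_images, n_boxes, unboxed)  # -> 2 1 ['Downloads/sign2.jpg']
```

Swap `SAMPLE` for the contents of your own signs.xml to check a real dataset.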
Now that I have the tools working, one practical note: you need a lot of high-resolution pictures to make this work, which makes it fairly problematic for improving speed limit tagging in OSM compared to the old "mark waypoints on the GPS at speed limit changes and take notes" approach.
With my Garmin Virb Elite shooting interval photos at its maximum rate (30 frames/minute), getting 16-megapixel images with enough resolution to reliably find a speed limit sign at anything over 30 mph is hit or miss at best. Picture 1 is often too small, and you've blown past the sign by the time picture 2 shows up. Alas, it won't go faster than 30 frames/minute (0.5 fps) except in video mode, which is limited to 1080p (2 MP) but will time-lapse at up to 2 fps.
My dashcam will get me the frame rate (up to 30 fps) but not the image quality (1-2 MP, and subjectively much worse than the Virb even in video mode). Plus, it's a ton of image files to deal with, and I'd have to correct the lens distortion, adding extra processing to the mix.
My only other ideas are to train the classifier with crappier pictures or to point the camera off-axis. Then I'll have to hack on `train_object_detector` to do batch output rather than being interactive (should be easy enough) to make it a more practical tool.
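If I do turn `train_object_detector` into a batch tool, I'll want to machine-read the stats it prints rather than eyeball them. Here's a small Python sketch that parses the `Test detector (precision,recall,AP):` line from the output shown earlier; the line format is copied from what the tool printed for me, and the helper itself is my own invention:

```python
# Parse the "(precision,recall,AP)" metrics line that train_object_detector
# prints, so batch runs can be checked programmatically instead of by eye.
import re

METRICS_RE = re.compile(
    r"Test detector \(precision,recall,AP\):\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)"
)

def parse_metrics(output):
    """Return (precision, recall, ap) as floats, or None if no metrics line."""
    m = METRICS_RE.search(output)
    return tuple(float(g) for g in m.groups()) if m else None

log = """Saving trained detector to object_detector.svm
Testing detector on training data...
Test detector (precision,recall,AP): 1 1 1"""
print(parse_metrics(log))  # -> (1.0, 1.0, 1.0)
```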
Edit: after playing around a bit more, I've found that upsampling the images (using the `-u` command-line parameter) from either the Virb or the dashcam substantially improves the recognition rate at a distance. So the sign images apparently don't have to be quite as spectacular as I thought, and my initial pessimism may not be justified. 😄
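A rough way to see why `-u` helps: the training output above reports a 65x77-pixel detection window, and if each upsampling round doubles the image dimensions (that doubling factor is my assumption, based on how dlib's image pyramid conventionally works), a distant sign needs far fewer native pixels than I feared. A quick back-of-envelope sketch:

```python
# Back-of-envelope: how many rounds of 2x upsampling (-u) a distant sign
# needs before it fills the 65-pixel-wide detection window reported in the
# training output. Assumes each round doubles the image dimensions.
import math

def upsamples_needed(sign_px, window_px=65):
    """Rounds of 2x upsampling until a sign sign_px wide reaches window_px."""
    if sign_px >= window_px:
        return 0
    return math.ceil(math.log2(window_px / sign_px))

for px in (80, 40, 20):
    print(px, upsamples_needed(px))  # -> 80 0 / 40 1 / 20 2
```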