@anguyen8
Forked from iandees/dlib_plus_osm.md
Created August 17, 2015 01:34
Detecting Road Signs in Mapillary Images with dlib C++

I've been interested in computer vision for a long time, but I hadn't had any free time to make progress until this holiday season. Over Christmas and the New Year I experimented with various approaches in OpenCV to detect road signs and other objects of interest to OpenStreetMap. After some failed experiments with thresholding and feature detection, the excellent /r/computervision community suggested the dlib C++ library, because it has more consistently good documentation and its pre-built tools are faster.

After a day or two of figuring out how to compile the examples, I finally made some progress:

Compiling dlib C++ on a Mac with Homebrew

  1. Clone dlib from GitHub to your local machine:

    git clone git@github.com:davisking/dlib.git
  2. Install the libjpeg dependency:

    brew install libjpeg
  3. As of this writing, dlib won't compile due to weirdness with the system-installed libjpeg, so the developer suggests modifying line 277 of dlib/CMakeLists.txt to look like this:

    if (JPEG_FOUND AND LIBJPEG_IS_GOOD AND NOT APPLE)
    
  4. Compile the example programs that come with dlib (one of which is the classifier training program):

    mkdir dlib/examples/build
    cd dlib/examples/build
    cmake ..
    cmake --build .
  5. You'll also want to compile the imglab tool so you can mark up images to tell the system what you're searching for:

    mkdir dlib/tools/imglab/build
    cd dlib/tools/imglab/build
    cmake ..
    cmake --build .
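
The five build steps above can be collected into a single script. This is only a sketch: it assumes you have already cloned dlib, installed libjpeg, and applied the `CMakeLists.txt` change from step 3, and that you run it from the directory containing the `dlib` checkout.

```shell
#!/bin/sh
set -e  # stop at the first failed command

# Configure and build a CMake project inside <dir>/build
build_dir() {
    mkdir -p "$1/build"
    (cd "$1/build" && cmake .. && cmake --build .)
}

# Skipped if the dlib checkout isn't in the current directory
if [ -d dlib ]; then
    build_dir dlib/examples      # includes train_object_detector
    build_dir dlib/tools/imglab  # the annotation tool
fi
```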

Train a classifier for road signs

  1. Download at least a dozen images that contain the object you're trying to recognize. For road signs I used Wikimedia Commons, Mapillary, and Google Image Search. Put these images in one directory on your computer. I found they all had to be converted to JPEG for the next step (I used ImageMagick's convert to do it).

  2. Run the imglab tool once to create an XML list of files you downloaded:

    dlib/tools/imglab/build/imglab -c signs.xml Downloads/sign*.jpg
  3. The file that imglab just created is a very simple XML file that lists relative paths for all the images. The next step is to specify where the objects are in the images. Run the imglab tool once more, but this time only specify the XML file you created above:

    dlib/tools/imglab/build/imglab signs.xml
  4. This will open a window via XWindows/XQuartz:

    [screenshot: the imglab annotation window]

    Now, for each image, hold down Shift and drag a bounding box around the object to detect. As in the screenshot here, I found that selecting the region immediately inside the black border of the sign resulted in a better model. If you accidentally create a bounding box you didn't want, double-click its border and press Delete. There are more interface details in the Help menu.

  5. When you finish highlighting the regions of interest, save the changes (File -> Save) and exit the imglab tool. Your XML file now contains extra markup that specifies the bounding boxes for the objects of interest.

  6. Next, we'll use the XML file you created to train a classifier:

    dlib/examples/build/train_object_detector -tv signs.xml

    This will build the model from your XML file and then test it against the same images you gave it. An excellent model matches 100% of the original bounding boxes. In the output below, the 1 1 1 means precision, recall, and average precision are all 1 (i.e., 100%):

     Saving trained detector to object_detector.svm
     Testing detector on training data...
     Test detector (precision,recall,AP): 1 1 1
     
     Parameters used:
       threads:                 4
       C:                       1
       eps:                     0.01
       target-size:             5000
       detection window width:  65
       detection window height: 77
       upsample this many times : 0

    Your model is now stored in the object_detector.svm file and can be used to predict the location of similar objects in completely new images.
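
The training steps above can be sketched as one script. The `Downloads/sign*` naming is illustrative, `convert` is ImageMagick, and the second `imglab` run is the interactive annotation session from steps 3-5:

```shell
#!/bin/sh
set -e

IMGLAB=dlib/tools/imglab/build/imglab
TRAINER=dlib/examples/build/train_object_detector

# Skipped if the tools haven't been built yet
if [ -x "$IMGLAB" ] && [ -x "$TRAINER" ]; then
    # Convert downloaded PNGs to JPEG (ImageMagick)
    for f in Downloads/sign*.png; do
        convert "$f" "${f%.png}.jpg"
    done

    "$IMGLAB" -c signs.xml Downloads/sign*.jpg  # build the image list
    "$IMGLAB" signs.xml   # annotate: Shift-drag boxes, then File -> Save
    "$TRAINER" -tv signs.xml  # trains and writes object_detector.svm
fi
```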

Detect signs in new images

  1. Find an image that you didn't train with. Run the object detector again with the new image specified as an argument:

    dlib/examples/build/train_object_detector Downloads/new_sign_image.jpg

    This time around, the program uses the model you trained to highlight any detected objects in an XWindows/XQuartz window with the image as the background:

    [screenshot: detected sign highlighted in the new image]
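
To scan a whole folder of new images rather than one at a time, a small loop works. A sketch, assuming `object_detector.svm` sits in the current directory (the example program loads it from there) and that `Downloads/new_images/` is a hypothetical folder holding your test images:

```shell
#!/bin/sh
DETECTOR=dlib/examples/build/train_object_detector

# Skipped if the detector example hasn't been built yet
if [ -x "$DETECTOR" ]; then
    for img in Downloads/new_images/*.jpg; do
        echo "Scanning $img"
        "$DETECTOR" "$img"  # opens a window with any detections highlighted
    done
fi
```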
