Using OpenCV and Python on the Raspberry Pi for simple object detection UPDATE

I recently wrote about detecting blue objects. Now that I have refined the code somewhat, I am going to show how to detect a round blue object and post the full source at the end. Most of it is the same as in the previous post, but the circle detection is new.

The basic idea is that we take all contours from the image and test whether they are convex. If a contour is convex and its area is roughly pi*radius^2, it is a circle. The radius comes from the minimum enclosing circle fitted around the contour.
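As a rough sketch, such a test could look like this (the function name is_circle and the 20% tolerance are my own choices, not values from the original source):

```python
import cv2
import numpy as np

def is_circle(contour, tolerance=0.2):
    """Convex contour whose area is close to pi*r^2 of its minimum enclosing circle."""
    # Approximating the contour first makes the convexity check less sensitive to pixel noise.
    if not cv2.isContourConvex(cv2.approxPolyDP(contour, 3, True)):
        return False
    (x, y), radius = cv2.minEnclosingCircle(contour)
    if radius == 0:
        return False
    circle_area = np.pi * radius ** 2
    return abs(cv2.contourArea(contour) - circle_area) / circle_area < tolerance
```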

Like before, we have a couple of preprocessing steps.

This is our starting image; it contains 3 blue objects.

ball_distinction_light

Now we filter the blue color.

ball_distinction_light_1_only_blue

Next we extract the saturation channel of the HSV image and blur it.

ball_distinction_light_2_blur

Now we threshold it and erode it a little; some other objects are still left.

ball_distinction_light_3a_morph

Now we use Canny edge detection.

ball_distinction_light_3b_edges

And finally we detect all circles and mark their location on the original image.

ball_distinction_light_4_output

And here is the source for this:
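As a stand-in for the full source, below is a minimal sketch of the whole pipeline described above; the file names, color bounds, thresholds and kernel sizes are assumptions rather than the exact values used on the robot.

```python
import cv2
import numpy as np

img = cv2.imread('ball_distinction_light.jpg')  # assumed file name

# Keep only the blue parts of the HSV image.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([90, 70, 50]), np.array([130, 255, 255]))
blue = cv2.bitwise_and(hsv, hsv, mask=mask)

# Blur the saturation channel, threshold and erode it.
saturation = cv2.split(blue)[1]
blurred = cv2.GaussianBlur(saturation, (9, 9), 0)
_, binary = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
binary = cv2.erode(binary, kernel, iterations=1)

# Canny edges, then contours, then the circle test.
edges = cv2.Canny(binary, 50, 150)
contours = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]  # works on OpenCV 2.x-4.x

for contour in contours:
    if not cv2.isContourConvex(cv2.approxPolyDP(contour, 3, True)):
        continue
    (x, y), radius = cv2.minEnclosingCircle(contour)
    circle_area = np.pi * radius ** 2
    if radius > 5 and abs(cv2.contourArea(contour) - circle_area) / circle_area < 0.2:
        cv2.circle(img, (int(x), int(y)), int(radius), (0, 0, 255), 2)

cv2.imwrite('output.jpg', img)
```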

 

Posted in computer vision, Linux, python, raspberry pi

Using OpenCV and Python on the Raspberry Pi for simple object detection

I wrote some time ago about the robot whose task is to find a blue ball and move to it.
In this post I will give a bit more information on how it detects the blue ball. The blue color helps because we can filter the image for blue parts and then only process those, but that is not all there is to it.

Detecting a simple object of a certain color is a very different task from detecting complex, feature-rich objects. Here we can get away with a lot of simple and computationally inexpensive operations.

We will do this with roughly the following operations.

  1. Convert image from RGB to HSV.
  2. Filter the color blue from the HSV transformed image.
  3. Logically AND the filtered blue color with the original image then split the image channels because we only need the value channel.
  4. Apply erosion, which aids in removing artifacts and unwanted parts in the image.
  5. Find an enclosing circle for the part of the image that is left.

This is the base image for our example:

ball

Original image

 

Now the operations in slightly more detail:

1. Convert the color image from RGB (or rather BGR, which is the channel order OpenCV uses for webcam frames) to HSV (Hue, Saturation, Value) format.
In our case HSV is the better format because it separates the color of the image from the intensity (read more about HSV here).
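In code this is a single call; the file name here is just a stand-in for a frame grabbed from the webcam:

```python
import cv2

frame = cv2.imread('ball.jpg')                # or a frame from cv2.VideoCapture(0)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # OpenCV frames are BGR, hence BGR2HSV
```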

2. Now we create a mask which filters only the blue color.
The hue value can be between 90 and 130.
We also accept a range in saturation and value.

You can play a little with the values here. In some cases 110 in place of 90 may work better for you.
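Continuing from the conversion above, a minimal version of the mask could look like this (the saturation and value bounds are assumptions you may need to tune for your lighting):

```python
import numpy as np

lower_blue = np.array([90, 70, 50])      # lower hue/saturation/value bounds
upper_blue = np.array([130, 255, 255])   # upper hue/saturation/value bounds
mask = cv2.inRange(hsv, lower_blue, upper_blue)
```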

3. Now that we have a mask that filters the blue color, we logically AND it with the original image. After this only the blue parts of the image are left. Next we split the channels of the HSV image and keep only the value channel, which gives us a grayscale image that we then blur.

Blurring the image may seem counterproductive, but unlike the human visual system, which has no trouble finding something in a noisy image, the computer struggles a lot with noise in images (see also).
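Continuing from the mask, a sketch of this step (the blur kernel size is an assumption):

```python
# AND the mask with the HSV image, keep only the value channel, then blur it.
blue_only = cv2.bitwise_and(hsv, hsv, mask=mask)
value = cv2.split(blue_only)[2]              # channel 2 = value
blurred = cv2.GaussianBlur(value, (9, 9), 0)
```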

ball_only_blue

Blue filtered image

Saturation channel of blue filtered image

4. Next is the erosion and thresholding.
First we erode the image; this helps clear out the parts that are almost blue, or are also blue but not our main object.

erosion_1

Erosion with 1 iteration

erosion_2

Erosion with 2 iterations

We choose an ellipse as a structuring element with a size of 15×15 and one iteration. You can play around with these parameters. The images above are results from another source image with more blue parts in the image.

After the erosion we threshold the image. This gives us a binary image (0 = background, 1 = part of the object).
A binary image is all we need now because we only want to find shapes; brightness, color, etc. are not needed.
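In code, with the 15×15 elliptical structuring element mentioned above (the threshold value itself is an assumption):

```python
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
eroded = cv2.erode(blurred, kernel, iterations=1)
_, binary = cv2.threshold(eroded, 40, 255, cv2.THRESH_BINARY)
```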

ball_morph

Thresholded binary image

5. At this point we can try to find contours in our binary image.
However, if you look at the code you will notice that we only fit an enclosing circle rather than testing for a circular shape. This is done because it is the most robust method for the robot and its changing surroundings.
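A sketch of this last step, fitting an enclosing circle around the largest remaining contour (the minimum radius check is an assumption to skip tiny specks):

```python
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]  # works on OpenCV 2.x-4.x
if contours:
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    if radius > 10:                          # ignore tiny leftovers
        cv2.circle(frame, (int(x), int(y)), int(radius), (0, 0, 255), 2)
```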

ball_output

Final result

 

Posted in computer vision, Linux, python, raspberry pi

Why you should blur an image before processing it using OpenCV and Python

If you start playing around with computer vision there are a couple of surprises waiting. One of them, for me at least, was how bad computers are at finding shapes in noisy images and, in contrast to that, how good the human brain is at this task.

For example, take this simple image:

shape

original shape image

 

We want to run Canny edge detection, so we first convert the image to grayscale.

shape_grayscale

gray scale shape image

 

Now, we add some noise to it.

shape_noise

noisy, non blurred shape image

 

You can still make out the shape just fine, but when you try to detect edges with Canny edge detection in OpenCV, this happens:

shape_edges_noise

canny edge detection

 

Now, when we try to find contours we get this:

shape_outuput_noise

contour detection

 

But if we first blur the image, then run Canny edge detection and then find contours, we get this:

shape_blurred

blurred image

shape_edges

canny edge detection

shape_outuput

contour is found

As you can see the result is not perfect, but it is close and certainly better than without blurring the image.

Example python code:
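A minimal version could look like this; the file name, noise level, kernel size and Canny thresholds are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread('shape.png')                         # assumed file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Add some Gaussian noise to the gray scale image.
noise = np.random.normal(0, 40, gray.shape)
noisy = np.clip(gray.astype(np.float64) + noise, 0, 255).astype(np.uint8)

# Canny directly on the noisy image picks up edges everywhere.
edges_noisy = cv2.Canny(noisy, 50, 150)

# Blurring first suppresses most of the noise, so Canny finds the real shape.
blurred = cv2.GaussianBlur(noisy, (9, 9), 0)
edges_blurred = cv2.Canny(blurred, 50, 150)

contours = cv2.findContours(edges_blurred, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]  # works on OpenCV 2.x-4.x
cv2.drawContours(img, contours, -1, (0, 255, 0), 2)
cv2.imwrite('shape_output.png', img)
```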

 

 

Posted in computer vision, Linux, python

Raspberry PI Robot finds a blue ball update

I modified the code for finding the ball and added moving towards it (if the radius of the detected object is small, etc.).
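Roughly, the logic is along these lines; this is only a sketch with made-up helper names (move_forward, turn_left, ...) and made-up thresholds, not the actual robot code:

```python
# Placeholder motor commands; on the robot these would drive the motors.
def move_forward(): print("forward")
def turn_left():    print("left")
def turn_right():   print("right")
def stop():         print("stop")

def approach_ball(x, radius, frame_width, target_radius=80, center_tolerance=40):
    """Keep the detected ball centered and drive towards it until it looks big enough."""
    center = frame_width / 2
    if radius >= target_radius:
        stop()                              # ball fills enough of the frame: we are there
    elif x < center - center_tolerance:
        turn_left()
    elif x > center + center_tolerance:
        turn_right()
    else:
        move_forward()                      # centered but still small, so keep driving
```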

Short video of the action

 

And another screenshot of the GUI; the detection uses OpenCV.

found_it

Posted in python, raspberry pi, robots

Raspberry PI Robot finds a blue ball

Short clip of Robot2 with its new light attachment (a USB-powered reading light connected to the USB power source).
The images it uses for recognition are taken by a small Microsoft webcam and then processed with Python and OpenCV. This allows the robot to find a blue ball (although not as robustly as it should!). The light attachment improves the recognition performance a little.

This is how it sees the world:

robot_gui

The small GUI is written in Python with PyQt bindings. It has only a couple of features at the moment, basically just enough to trigger behaviours and remote-control the robot over wifi (and get the webcam images). It communicates with the robot through its API (a small flask server running on the raspberry pi). The API exposes the sensor data as well as servo and stepper motor control.
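As an illustration only, a tiny Flask API in that spirit could look like the sketch below; the routes and the sensor/servo helpers are made up for the example, not the robot's real API:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder hardware functions; the real robot reads the HC-SR04 and HMC5883L
# and drives the servos through the Adafruit controller.
def read_distance(): return 0.0
def read_heading(): return 0.0
def set_servo(channel, position): pass

@app.route('/sensors')
def sensors():
    return jsonify(distance=read_distance(), heading=read_heading())

@app.route('/servo/<int:channel>', methods=['POST'])
def servo(channel):
    set_servo(channel, int(request.form['position']))
    return jsonify(ok=True)

if __name__ == '__main__':
    app.run(host='0.0.0.0')
```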

Posted in python, raspberry pi, robots

Raspberry PI Robot Navigation Experiment

After installing the HMC5883L I wanted to try some simple navigation experiments.

Experiment 1 should go like this:

1. Move forward a little
2. Rotate 90 degrees
3. Move forward a little
4. Go back to the start point (distance and heading are calculated)

As you can see it was not really a success, but better than nothing. You may notice that it moves a little slowly. This is because before every action it takes 3 sensor readings (heading from the HMC5883L and distance from an HC-SR04) with a little pause in between and then takes the mean of the 3 readings. This is especially noticeable at the end, because there it additionally has to wait for the distance signal to come back, since there are no books blocking the way.
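In code, the averaging is essentially this; read_heading and read_distance are placeholders for the actual HMC5883L and HC-SR04 code:

```python
import time

def averaged_reading(read_sensor, samples=3, pause=0.1):
    """Take a few readings with a short pause in between and return their mean."""
    values = []
    for _ in range(samples):
        values.append(read_sensor())
        time.sleep(pause)
    return sum(values) / len(values)

# heading = averaged_reading(read_heading)    # HMC5883L
# distance = averaged_reading(read_distance)  # HC-SR04
```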

This allows the robot to cope a little better with inaccurate sensor information. The next thing I plan to do is to take the sensor readings continuously from 2 processes (one for heading and one for distance) and apply some kind of filter (probably a Kalman filter).
This should allow for smoother sensor readings and probably better performance.

Posted in python, raspberry pi, robots

Digital compass HMC5883L with python and the raspberry pi

I recently installed the HMC5883L, which I ordered from Amazon, on my robot.
The plan is to do some simple navigation experiments.

The assembly was unproblematic. I connected the I2C bus to the Adafruit servo controller, which in turn is connected to the Pi.

pi robot2 with hmc5883l

But I ran into a small problem: there is a very good Python library that allows easy access to the HMC5883L, but I already use the Adafruit I2C library for controlling the servo motors. Fortunately the code was very easy to adapt, which I did.

Here is my adapted code:
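As a rough, minimal sketch of what reading the HMC5883L over I2C involves (using smbus rather than the Adafruit wrapper; the class name and method names are my own, the register values come from the HMC5883L datasheet, and this is not the adapted library itself):

```python
import math
import smbus

class HMC5883L(object):
    ADDRESS = 0x1E          # default I2C address of the HMC5883L

    def __init__(self, bus=1, declination=0.0):
        self.bus = smbus.SMBus(bus)          # bus 0 on very early Pi models
        self.declination = declination       # in radians, see the link above
        self.bus.write_byte_data(self.ADDRESS, 0x02, 0x00)  # mode register: continuous

    def _read_word(self, register):
        high = self.bus.read_byte_data(self.ADDRESS, register)
        low = self.bus.read_byte_data(self.ADDRESS, register + 1)
        value = (high << 8) + low
        return value - 65536 if value > 32767 else value     # two's complement

    def axes(self):
        # Data registers 0x03..0x08 hold X, Z, Y (in that order).
        x = self._read_word(0x03)
        z = self._read_word(0x05)
        y = self._read_word(0x07)
        return x, y, z

    def heading(self):
        x, y, _ = self.axes()
        heading = math.atan2(y, x) + self.declination
        return math.degrees(heading % (2 * math.pi))
```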

For this to work you first have to set your declination. You can get the declination for your current position here.

Example for reading the data:
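Using the sketch above, reading the data looks roughly like this (the declination value is just an example):

```python
compass = HMC5883L(bus=1, declination=0.045)   # declination in radians for your location
x, y, z = compass.axes()
print("heading: %.1f degrees" % compass.heading())
```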

 

Posted in python, raspberry pi, robots

Changing the HylaFax fax ID per user

Because of the way the HylaFax server is built, this is actually very simple.

The faxsend script under /var/spool/fax/bin (depending on the installation) has to be modified.

The user is stored in the q$faxid file together with the other parameters. The parameters for the faxsend command can be changed with -c parameter:value.
Here is an example faxsend script.

Posted in Hylafax, Linux

TeamSpeak 3 server on Gentoo

Installing the TeamSpeak 3 server on Gentoo is actually quite easy (there is an ebuild).

Two things still need attention, though, or maybe I did something wrong (media-sound/teamspeak-server-bin 3.0.0_beta18).

1. The script /opt/teamspeak3-server/ts3server_minimal_runscript.sh has to be started once before the init script can be used (I also had to adjust the name of the binary it calls there).

2. A teamspeak user is created, but it does not get the permissions on the SQLite database under /opt/teamspeak3-server/; those have to be granted first.

Once both of these are done, the server also starts via the init script.

Posted in gentoo, Linux

Zarafa mailbox sizes

A small, helpful bash script to determine the mailbox sizes on a Zarafa mail server.

Adapted from a script I once found in the Zarafa forum.

edit:
And since WP is being annoying, here is the Pastebin link: http://pastebin.de/4239

Posted in Linux