Veter - robotics vehicle for researchers and makers



The following set of examples is inspired by the great online class "Artificial Intelligence for Robotics (CS373) - Programming A Robotic Car" taught by Professor Sebastian Thrun. We modified the source code for some of the algorithms and topics explained in this class to control our robot. We kept the core part unchanged and only substituted the control commands with our own function calls. As a result, the same algorithms can now control the robot.

Unit1 - one-dimensional localization using a histogram filter

This example illustrates how to perform one-dimensional localization using the histogram filter we learned in Unit 1 of the class. In particular, the left sonar measures the distance at each step. If the distance is less than a threshold (defined as 30 cm), we consider that the measurement returns "wall". If the distance is larger than the threshold, we assume that we are in front of a door.

The one-second pause between movements is intentional, to highlight that we are not in continuous space. In practice, it may well be possible not to stop the motors at the moments when measurements are made. In fact, it is easy to modify the source code to achieve "smooth" movement by commenting out the corresponding sleep call.
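For reference, below is a minimal sketch of the histogram filter update in the spirit of the Unit 1 assignment. The map, the hit/miss probabilities, the sample sonar readings and the helper names are illustrative assumptions, not the exact values used on the robot.

```python
# Minimal sketch of the 1-D histogram filter, assuming a cyclic corridor and
# exact motion; the map and probabilities are illustrative, not the robot's code.

THRESHOLD_CM = 30
world = ['wall', 'door', 'wall', 'wall', 'door']   # what the left sonar should see in each cell

p_hit = 0.8    # probability that the "wall"/"door" classification is correct
p_miss = 0.2   # probability that it is wrong

def classify(distance_cm):
    """Turn a raw sonar distance into the 'wall'/'door' observation."""
    return 'wall' if distance_cm < THRESHOLD_CM else 'door'

def sense(p, measurement):
    """Bayesian measurement update of the belief p."""
    q = [p[i] * (p_hit if world[i] == measurement else p_miss) for i in range(len(p))]
    s = sum(q)
    return [x / s for x in q]

def move(p, step):
    """Shift the belief by 'step' cells (exact motion assumed)."""
    return [p[(i - step) % len(p)] for i in range(len(p))]

belief = [1.0 / len(world)] * len(world)   # start with a uniform belief
for distance in [80.0, 20.0, 15.0]:        # hypothetical sonar readings in cm
    belief = sense(belief, classify(distance))
    belief = move(belief, 1)               # drive one cell forward
print(belief)
```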

Check out relevant sources on GitHub

Unit2 - speed and position estimation with Kalman filter

In this example, we estimate our own position and speed based on the rear sonar measurement (distance to the wall). In Unit 2, several examples are presented where the Kalman filter is applied to estimate the speed and position of *other* vehicles. We simplify our test case by estimating our own position relative to a well-known landmark - the wall. If you are not happy with this simplification, you can always assume that we conduct the experiment in a coordinate system attached to the vehicle. In that case, the wall was moving and we were estimating the speed and position of the wall relative to our vehicle :-) .

The one-second pause between movements is intentional, to highlight the moments when the measurements are taken. As before, it is easy to modify the source code to achieve "smooth" movement by commenting out the corresponding sleep call.
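Below is a minimal sketch of a 1-D Kalman filter with state [position, velocity] and the rear sonar distance as the only measurement, roughly as in the Unit 2 lessons. The matrices, noise values and sample readings are illustrative assumptions, not the values used on the robot.

```python
import numpy as np

dt = 1.0  # one second between measurements (matches the pause between movements)

# State x = [position, velocity]^T; measurement z = distance to the wall.
F = np.array([[1.0, dt],
              [0.0, 1.0]])       # constant-velocity motion model
H = np.array([[1.0, 0.0]])       # the sonar observes position only
R = np.array([[4.0]])            # measurement noise (cm^2), illustrative
Q = np.diag([0.1, 0.1])          # process noise, illustrative

x = np.array([[0.0], [0.0]])     # initial state estimate
P = np.diag([1000.0, 1000.0])    # large initial uncertainty

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the sonar measurement z
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [150.0, 140.0, 131.0, 121.0]:   # hypothetical distances to the wall, in cm
    x, P = kalman_step(x, P, z)
print(x.ravel())   # estimated position and speed
```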

Check out relevant sources on GitHub

Unit3 - position estimation using particle filter

In this example, we estimate the robot's position based on the sonar measurements. There are four sonars on the vehicle, facing forward, backward, left and right. The robot is programmed to follow a predefined trajectory through way-points specified in the trajectory.dat file. In addition, we define the room plan as a list of "walls" stored in plan.dat.

At each way-point we obtain sonar measurements. Then, for each particle, we calculate an "ideal" measurement. This is done by constructing equations for the two lines the particle lies on - one parallel to the particle's bearing and one perpendicular to it. Searching for the nearest intersection between these lines and the walls gives us the "ideal" measurement. The corresponding functions are implemented in lineutils.py. The difference between the "ideal" measurement and the actual sonar data is used to calculate the error and the weight of each particle for the re-sampling step (see the sketch below).
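The following sketch condenses the weighting and re-sampling steps. The Gaussian noise model, the SONAR_SIGMA value and the sample numbers are assumptions for illustration; the "ideal" readings are supposed to come from the intersection search in lineutils.py.

```python
import math
import random

SONAR_SIGMA = 5.0   # assumed sonar noise (cm) used when weighting particles

def gaussian(mu, sigma, x):
    """Probability of measuring x when the 'ideal' value is mu."""
    return math.exp(-((mu - x) ** 2) / (2.0 * sigma ** 2)) / math.sqrt(2.0 * math.pi * sigma ** 2)

def particle_weight(ideal_readings, measured_readings):
    """Weight of one particle: the 'ideal' readings come from the line-intersection
    search (lineutils.py), the measured readings from the four sonars."""
    w = 1.0
    for ideal, measured in zip(ideal_readings, measured_readings):
        w *= gaussian(ideal, SONAR_SIGMA, measured)
    return w

def resample(particles, weights):
    """Re-sampling wheel as taught in the class."""
    n = len(particles)
    new_particles = []
    index = random.randrange(n)
    beta = 0.0
    max_w = max(weights)
    for _ in range(n):
        beta += random.uniform(0.0, 2.0 * max_w)
        while weights[index] < beta:
            beta -= weights[index]
            index = (index + 1) % n
        new_particles.append(particles[index])
    return new_particles

# Example: two hypothetical particles (x, y, bearing) and one set of sonar data
# (front, rear, left, right), all numbers invented for illustration.
particles = [(1.0, 2.0, 0.1), (1.5, 2.2, 0.0)]
ideal = [[120.0, 80.0, 40.0, 60.0], [130.0, 70.0, 45.0, 55.0]]
measured = [118.0, 83.0, 41.0, 62.0]
weights = [particle_weight(i, measured) for i in ideal]
particles = resample(particles, weights)
```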

The small picture shows the room plan with the trajectory the robot attempts to follow (green line). Red lines illustrate the sonar directions. Points on the red lines correspond to the distances measured by the sonars. These distances are not exactly on the intersections with the walls because the robot cannot follow the ideal path precisely. The dots spread across the room are the particles used in the filter. The red cloud of particles with the camera-like icon represents the final robot position estimated by the particle filter. As can be seen in the video, the estimated position is very close to the actual robot position. In our experiments we used 500 particles. However, it was hard to see just 500 particles in the video, so we recorded the sonar measurements and then re-ran the filter with 10000 particles for the video. The final results using 500, 1000 and 10000 particles were identical.

Check out relevant sources on GitHub

Unit5 - motion control with a PID controller and the Twiddle algorithm

In this example we illustrate the application of a PID controller to automatically drive the robot as close as possible to an ideal straight-line trajectory.

In the original CS-373 Unit 5 examples, the goal was to drive the simulated robot as close as possible to the X axis. The vertical deviation from the X axis (the Y coordinate) was treated as the error and used as input for the PID controller. The twiddle algorithm was introduced as a way to find the P, I and D parameters.
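For reference, here is a short sketch of the twiddle (coordinate-ascent) search as taught in the class. The run_with_params() callback stands in for whatever runs one trial and returns its accumulated error; the initial steps and the tolerance are assumptions.

```python
def twiddle(run_with_params, tol=0.001):
    """Coordinate-ascent search for the three PID gains.
    run_with_params(p) must run one trial with gains p = [Kp, Ki, Kd]
    and return the accumulated error for that trial."""
    p = [0.0, 0.0, 0.0]      # current best [Kp, Ki, Kd]
    dp = [1.0, 1.0, 1.0]     # probing step for each gain
    best_err = run_with_params(p)
    while sum(dp) > tol:
        for i in range(len(p)):
            p[i] += dp[i]
            err = run_with_params(p)
            if err < best_err:
                best_err = err
                dp[i] *= 1.1            # keep going in this direction
            else:
                p[i] -= 2 * dp[i]       # try the opposite direction
                err = run_with_params(p)
                if err < best_err:
                    best_err = err
                    dp[i] *= 1.1
                else:
                    p[i] += dp[i]       # restore and shrink the step
                    dp[i] *= 0.9
    return p, best_err
```

In our case run_with_params() corresponds to one PID-controlled drive towards a marker, with the error accumulated over the camera frames.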

In the real robot case there are three main problems:

  1. what is the desired (ideal) path
  2. how to measure the deviation from it
  3. how to implement the twiddle algorithm with a real robot in such a way that a reasonable amount of time (for example, less than 30 min.) is required to discover good PID parameters. It should be possible to conduct the experiment within a limited space (for example, a typical office room) without hitting walls and without manual intervention during the run.

To address these problems, we decided to rely on the available on-board camera. The idea is to place well distinguishable markers (black circles on a white background) on opposite walls of the room. We use the Python interface to the OpenCV library to detect the markers; in particular, we use the cv.HoughCircles() function to detect circles. To improve detection performance, before applying HoughCircles the original image is converted to gray-scale, then the Canny edge detection algorithm is executed, followed by Gaussian smoothing. The result of these steps is the location of the center and the radius of the circle in the camera's field of view. This information is used to:
  1. Define the "ideal" path. The goal for the robot is to drive directly towards the circle center with as little deviation as possible. This is what we consider the ideal straight-line trajectory.
  2. Measure the deviation as the distance between the detected circle center and the middle (320 pixels) of the camera's frame (a 640x480 RGBA bitmap). The goal is to control the robot in such a way that the detected circle center is kept permanently in the middle of the frame.
  3. Run the twiddle algorithm, implemented (copy/pasted) exactly as in the original Unit 5 assignments. To prevent the robot from driving away completely when trying bad PID parameters we use the following strategy. First, we try to drive towards the detected marker with the current PID parameters. For each camera frame we measure the error as the deviation of the circle center from the middle of the frame. If the circle goes out of the camera's field of view, then no circle is detected and we set the detected radius to zero. We react to this condition by suspending the PID controller and starting an on-place rotation until the marker comes back into the field of view. Then we switch back to PID-controlled movement towards the marker. As we approach the marker, the radius of the circle grows. We define a limit of 100 pixels as an indication that we are close to the wall (where the marker is placed). As soon as we reach the 100 pixel radius limit, we again start the on-place rotation sequence until we detect a marker with a visible radius of less than 100 pixels. This marker will be the one on the opposite wall. After that, we continue PID-controlled movement towards the newly detected marker. A sketch of this per-frame logic is shown after this list.
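
The sketch below shows one possible way to combine the circle detection with the PID steering error described above. It uses the modern cv2 module names (cvtColor, Canny, GaussianBlur, HoughCircles) instead of the older cv interface mentioned above, and the threshold values, Hough parameters and the FRAME_CENTER_X constant are illustrative assumptions rather than the exact values from our sources.

```python
import cv2
import numpy as np

FRAME_CENTER_X = 320   # middle of the 640x480 camera frame
NEAR_RADIUS = 100      # radius (pixels) at which we consider the wall "reached"

def detect_marker(frame_bgr):
    """Return (cx, cy, r) of the largest detected circle, or None if no marker is visible.
    Pipeline: gray-scale -> Canny edges -> Gaussian smoothing -> Hough circle transform."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    smoothed = cv2.GaussianBlur(edges, (9, 9), 2)
    circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, dp=2, minDist=100,
                               param1=100, param2=50, minRadius=5, maxRadius=200)
    if circles is None:
        return None
    x, y, r = max(circles[0], key=lambda c: c[2])   # keep the largest circle
    return float(x), float(y), float(r)

def steering_error(marker):
    """Horizontal deviation of the marker center from the middle of the frame;
    this is the error fed into the PID controller."""
    cx, _, _ = marker
    return cx - FRAME_CENTER_X

class PID:
    """Plain PID controller; the sign and scale of the output depend on the motor interface."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return -(self.kp * error + self.ki * self.integral + self.kd * derivative)

# Per-frame control logic (see the description above):
#   no circle detected            -> rotate in place until a marker is found again
#   circle radius >= NEAR_RADIUS  -> rotate until the marker on the opposite wall is seen
#   otherwise                     -> drive forward, steering with pid.step(steering_error(marker))
```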

The following two videos illustrate the example.

This video illustrates how the robot behaves at the early stages of the twiddle algorithm. The small picture-in-picture shows the view from the "driver cockpit" with the image processing results. The robot's movements are not smooth; sometimes it goes in a completely wrong direction, which leads the marker out of the camera's field of view. This in turn results in the robot rotating to find the next marker. This behavior is normal for the twiddle algorithm because it tries different parameters, some of which are not feasible.


This video illustrates the final run after the twiddle algorithm has found good PID parameters. It can be seen that the robot moves straight towards the marker, does not lose it and properly reacts to small deviations.

Check out relevant sources on GitHub

Blender Game Engine (BGE) based simulation environment

We are currently developing a BGE-based simulation environment for our robot. As can be seen in the video above, the driver cockpit is the same as the one used when controlling the real robot. Our goal is to make the real robot transparently exchangeable with the simulated one, without changing the control algorithms.
