Veter - robotics vehicle for researchers and makers



Target audience

As a first step, we are targeting researchers in robotics, artificial intelligence, and computer vision, as well as hobby robotics enthusiasts. For this audience, we identified a set of key requirements and addressed them with the solutions described below.

Chassis and body

We are using the Dagu Rover 5 tracked chassis with two motors and two quadrature encoders. The whole body is 3D-printed. 3D printing significantly simplifies customization of the body and opens a natural way to integrate new types of sensors and actuators. All 3D models are developed with the popular open-source modeling tool Blender, and all models required to print the body are also available as open source. The following set of pictures provides some examples:

The first two pictures show our current model, and the last two are rendered 3D models of the alternative bodies we are currently working on.

Sensors and on-board electronics

In the full configuration, the following sensors are available: a front sonar, cameras, a compass, and the quadrature wheel encoders. The front sonar and the cameras can be mounted on the "head", which is rotated by a servo motor. All peripheral devices are connected to the on-board computer over USB and the connectors provided by the daughter board. The current robot version uses TI's BeagleBoard-xM as the on-board computer. The BeagleBoard is an open-hardware embedded computer with a combined ARM+DSP processor. It provides enough power to compress the video stream with the H.264 codec in real time, control the hardware, and run advanced navigation algorithms. We use NiMH batteries, which are standard in the RC-modeling domain. For development, the robot can be powered from an external power supply.
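The pan servo on the "head" is driven like any hobby servo: the pulse width of a periodic signal selects the angle. A minimal sketch of the angle-to-pulse conversion is shown below; the pulse range and angle limits are typical hobby-servo values used for illustration, not measured from our hardware:

```python
def servo_pulse_us(angle_deg, min_us=1000, max_us=2000, max_angle=90.0):
    """Map a pan angle in [-max_angle, +max_angle] degrees to a
    servo pulse width in microseconds (center position = 1500 us)."""
    # Clamp the requested angle to the mechanical limits.
    angle_deg = max(-max_angle, min(max_angle, angle_deg))
    center = (min_us + max_us) / 2.0
    half_range = (max_us - min_us) / 2.0
    return center + half_range * (angle_deg / max_angle)
```

The clamping step matters in practice: driving a servo past its mechanical limits stalls the motor and drains the battery.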


The robot is equipped with an IEEE 802.11 b/g/n WLAN adapter. Currently, we are testing an operation mode with a 3G (UMTS) modem. Software is already available to remotely control the robot over the Internet with real-time video streaming from the on-board camera. Autonomous operation is, of course, also possible. In particular, we support the cloud-robotics paradigm: complex navigation algorithms can be executed on an external, more powerful computer if the on-board computer does not provide enough performance.
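In the cloud-robotics mode, the on-board computer only needs to serialize sensor readings and ship them to the remote machine, which replies with motor commands. A minimal sketch of such a message round-trip is given below; the field names are illustrative, not the actual Veter wire protocol:

```python
import json
import time

def encode_sensor_frame(sonar_cm, compass_deg, encoder_ticks):
    """Pack one sensor snapshot into a JSON message for the remote planner."""
    frame = {
        "stamp": time.time(),          # wall-clock timestamp
        "sonar_cm": sonar_cm,          # front sonar distance
        "compass_deg": compass_deg,    # heading from the compass
        "encoders": encoder_ticks,     # [left, right] quadrature ticks
    }
    return json.dumps(frame).encode("utf-8")

def decode_sensor_frame(payload):
    """Inverse of encode_sensor_frame, run on the remote computer."""
    return json.loads(payload.decode("utf-8"))
```

The timestamp lets the remote side detect stale frames, which is essential once commands travel over a 3G link with variable latency.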


The whole software stack is open source and available on GitHub. We offer the complete stack, from the operating system up to the communication infrastructure, the client-side user interface, and visualization applications. An OpenGL-based application, the "cockpit", is available to remotely control the robot manually. Sensor data (including video from the cameras) are rendered in real time, and control commands are sent back to the vehicle. Keyboards and USB joysticks are supported as control devices. The following picture illustrates the current version of the cockpit application.
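Internally, a control application like the cockpit has to turn joystick axes into per-track motor speeds. A common technique for a differential (tracked) drive is arcade-style mixing; the sketch below shows the generic technique and is not necessarily the exact mixing the cockpit implements:

```python
def mix_arcade(throttle, turn):
    """Convert joystick axes in [-1, 1] into (left, right) track speeds.

    throttle: forward/backward axis; turn: left/right axis.
    """
    left = throttle + turn
    right = throttle - turn
    # Clamp both outputs back into the valid speed range.
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(left), clamp(right)
```

For example, full throttle with no turn drives both tracks forward, while pure turn input spins the tracks in opposite directions, rotating the vehicle in place.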

The left video panel shows the original video stream, and the right one shows the stream processed with OpenCV (in this case, Canny edge detection). At the bottom, there are indicators for the current motor speed, sonar measurements, compass heading, etc.
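The processed panel applies OpenCV's Canny detector to each frame. The core idea, marking pixels where the intensity gradient is large, can be shown with a toy gradient-threshold detector in pure Python. This is a deliberate simplification: the real Canny algorithm adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of the gradient step.

```python
def edge_mask(img, thresh):
    """Mark pixels whose horizontal or vertical intensity step
    exceeds `thresh`. `img` is a list of rows of grayscale values."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            dx = abs(img[y][x + 1] - img[y][x])  # horizontal gradient
            dy = abs(img[y + 1][x] - img[y][x])  # vertical gradient
            if max(dx, dy) > thresh:
                out[y][x] = 1
    return out
```

Running this on an image with a sharp vertical boundary marks exactly the column where the brightness jumps, which is what the right cockpit panel visualizes at full frame rate via OpenCV.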

To demonstrate how to solve typical problems from the robotics domain with our platform, we implemented several homework assignments from the "Programming a Robotic Car" online course taught by Prof. Sebastian Thrun. These examples illustrate the applicability of our platform in the educational domain. In particular, compared to the popular LEGO Mindstorms system, ours offers more computational power and a more flexible set of software building blocks for solving typical robotics problems. Cloud robotics and distributed autonomous robotic systems are promising future directions. Our platform is a step in this direction, and our customers can benefit by reusing our software and hardware while concentrating on their own areas of competence.
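A representative assignment from that course is the 1D histogram localization filter: the robot keeps a probability per grid cell, reweights it after each sensor reading (Bayes update), and shifts it after each motion. The sketch below follows the structure taught in the course; the sensor probabilities are illustrative values, not tuned for our sonar or cameras:

```python
def sense(p, world, measurement, p_hit=0.6, p_miss=0.2):
    """Bayes update: reweight the belief by how well each cell
    matches the measurement, then renormalize."""
    q = [p[i] * (p_hit if world[i] == measurement else p_miss)
         for i in range(len(p))]
    total = sum(q)
    return [v / total for v in q]

def move(p, step):
    """Exact (noise-free) cyclic shift of the belief by `step` cells."""
    n = len(p)
    return [p[(i - step) % n] for i in range(n)]
```

For example, on the world ['green', 'red', 'red', 'green', 'green'] with a uniform prior, sensing 'red' concentrates the belief on the two red cells, and moving one cell shifts that belief along with the robot.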
