Bartender Robots

This is a project that I supervised at the Institute of Robotics and Process Control (IRP) at TU Braunschweig. It was part of a lecture I offer in which our students come up with ideas of their own and we provide hardware (robots, sensors, etc.), software, and expertise*. We called it Open Robotics Lab, and in this iteration, Marco, Adrian, and Maxi, who had been with our institute for a while, decided to serve beers with two of our Panda arms by Franka Emika.

Figure 1: Snapshot of a pour


Figure 2: The detailed flowchart of the entire process

There are two robots; let's call them R1 and R2. The entire process is as follows:

  1. R1 picks a bottle from a crate,
  2. puts it on a holder, and re-grasps it from the middle.
  3. R2 first grasps the opener and
  4. opens the bottle held by R1.
  5. It then puts the opener back and picks an empty glass from a designated location.
  6. The two robots engage in pouring. Once the bottle is almost completely drained,
  7. the robots do some fancy bottle shaking and last-drop extraction! (see video below)
  8. R1 puts the bottle back in the crate while
  9. R2 collects the detected cap
  10. and deposits it in the garbage ;D
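The choreography above can be sketched as a simple sequential script. This is only an illustration of the step ordering; the step list and the `execute` callback are my own hypothetical stand-ins, not the students' actual code, which would call into their motion library at each step.

```python
# Hypothetical sketch of the two-robot choreography. In the real system,
# each step would trigger LibORL motions on the corresponding Panda arm.

STEPS = [
    ("R1", "pick bottle from crate"),
    ("R1", "place bottle on holder and re-grasp from the middle"),
    ("R2", "grasp the bottle opener"),
    ("R2", "open the bottle held by R1"),
    ("R2", "return the opener and pick an empty glass"),
    ("R1+R2", "pour until the bottle is almost drained"),
    ("R1+R2", "shake the bottle to extract the last drops"),
    ("R1", "return the bottle to the crate"),
    ("R2", "collect the detected cap"),
    ("R2", "drop the cap in the garbage"),
]

def run(steps, execute):
    """Run the choreography in order; `execute` performs one robot action."""
    for robot, action in steps:
        execute(robot, action)

log = []
run(STEPS, lambda robot, action: log.append(f"{robot}: {action}"))
```

In practice, steps 3-6 require the two arms to hold poses for each other (R1 holds the bottle while R2 opens it), so a real implementation needs synchronization between the two control loops rather than a single linear script.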

Or, in more detail, as laid out in Figure 2.

Challenges and Solutions

The project posed a number of challenges on top of the usual technical aspects of robotic applications. For one, the guys wanted to open the bottle with a standard opener, without designing special grippers. Another challenge was grasping the bottle and the glass firmly enough to ensure the planned manipulations proceed as expected, without sliding or slipping. In addition, the overall planning and manipulation needed to be addressed. Marco, Adrian, and Maxi themselves summarized their project in 5 work items:

  • LibORL: a high-level motion planning library that wraps around libfranka
  • Cap detection: Package for detecting the bottle caps
  • Configuration and GUI: for launching the program and modifying its (many!) parameters
  • 3D modelling/printing
  • Documentation and deployment (for future users)

Here I very briefly go through some of these efforts, but if you're not in the mood for reading, you can jump down and look at the video of the final results.


LibORL

LibORL was developed so that one can easily generate Cartesian motions and execute them on the robot. As the supervisor, I originally objected to the creation of yet another library, particularly knowing that it relies on Franka's internal motion generation, which itself relies on Reflexxes! However, they insisted that they needed another level of abstraction on top of the not-so-feature-rich libfranka. One of my goals with this lab-lecture was to let students do their own risk assessment and face the consequences, so I let them make their own LibORL! Here is an excerpt from their readme:

These are the key features of this library:

  • Easy trajectory generation in Cartesian space + optional elbow control
  • Impedance mode which can follow a moving attractor Pose
  • Different kinds of speed profiles
  • Trajectory generation using B-Splines, Bezier-Curves or circles
  • Abort movements when there is too much force applied on the end-effector
  • Move to specific joint configurations
  • Usage of the Gripper

The basic workflow is like this:

  • Define a PoseGenerator (a function which gets the progress of the movement, the initial pose of the robot, and the current robot state, and returns a pose)
  • Apply a SpeedProfile (you can use a QuinticPolynomial, a Cosine function, or an S-Curve speed profile)
  • (optional) Create a StopCondition. For example, you can say that the robot aborts the motion if there is an external force pushing the robot. Be careful with this when you open the gripper afterwards! Look at the docs.
  • Let the robot execute the PoseGenerator by providing a duration for the motion.
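To make that workflow concrete, here is a rough sketch of the idea in Python. This is not LibORL's actual API; the function names, signatures, and the three-coordinate pose are my own simplifying assumptions. The cosine speed profile and the progress-driven pose generator mirror the concepts from the readme above.

```python
import math

def cosine_speed_profile(progress):
    """Map raw progress in [0, 1] to smoothed progress, so that the
    motion starts and ends with zero velocity."""
    return (1.0 - math.cos(math.pi * progress)) / 2.0

def make_line_pose_generator(target):
    """Return a pose generator: given progress and the initial pose,
    interpolate per coordinate toward `target` along a straight line."""
    def generator(progress, initial_pose):
        s = cosine_speed_profile(progress)
        return [p0 + s * (pt - p0) for p0, pt in zip(initial_pose, target)]
    return generator

def execute(generator, initial_pose, duration, dt=0.001, stop_condition=None):
    """Sample the generator at the control rate for the given duration.
    A stop condition (e.g. 'external force too high') can abort early."""
    t = 0.0
    pose = list(initial_pose)
    while t < duration:
        t += dt
        progress = min(t / duration, 1.0)
        pose = generator(progress, initial_pose)
        if stop_condition is not None and stop_condition(pose):
            break  # abort the motion, as with LibORL's StopCondition
    return pose

# Hypothetical poses: x, y, z in metres (orientation omitted for brevity).
start = [0.3, 0.0, 0.5]
goal = [0.3, 0.2, 0.3]
final = execute(make_line_pose_generator(goal), start, duration=2.0)
```

The real library sends such sampled poses to the robot's 1 kHz control loop instead of collecting them in a variable, and its pose generators cover B-splines, Bezier curves, and circles rather than just straight lines.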

Figure 3: Stop condition
Figure 4: Concatenation of different geometric shapes

Cap Detection and Disposal

One of the challenges which I introduced was to force my students to detect the cap and dispose of it. What they did was actually pretty cool: they first detect the cap with OpenCV using a camera mounted above the scene, and then collect the cap using a simple but smart 3D-printed gripper.

Figure 5: Cap detection: first all the circles in the scene are detected, and those with the wrong radii are rejected. The coordinates of the matching circle are sent to LibORL, which moves and aligns the magnetic gripper for a grasp.
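The radius-based rejection step can be sketched as below. In the students' pipeline the circle candidates come from OpenCV (e.g. a Hough circle transform on the overhead camera image); here I just filter a made-up list of (x, y, r) candidates against an assumed cap radius, so all numbers and names are illustrative only.

```python
def filter_caps(circles, cap_radius_px, tolerance_px=3.0):
    """Reject detected circles whose radius does not match a bottle cap.

    `circles` is a list of (x, y, r) tuples in pixel units, e.g. the
    output of cv2.HoughCircles flattened to a list. Returns the centre
    coordinates of the matching circles, which would then be converted
    to robot coordinates and handed to the motion library so the
    magnetic gripper can be aligned for the grasp.
    """
    return [
        (x, y)
        for x, y, r in circles
        if abs(r - cap_radius_px) <= tolerance_px
    ]

# Made-up candidates: a glass rim, a cap, and a bottle base (radii in px).
candidates = [(320.0, 110.0, 45.0), (512.0, 260.0, 14.5), (130.0, 400.0, 31.0)]
caps = filter_caps(candidates, cap_radius_px=15.0)
```

Filtering by radius is a cheap way to disambiguate the cap from the many other circular shapes in this scene (glass rim, bottle base, crate holes), since the overhead camera sits at a fixed height and the cap's apparent radius is therefore nearly constant.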

3D Designs of Gripper

The entire process of picking a bottle, re-grasping, and pouring it for R1; and holding a glass, picking a bottle opener, opening the bottle, and collecting the cap for R2 had to be done without changing the grippers. This means that the designed and 3D-printed grippers had to be multi-purpose. These are some of the designs that Maxi, Marco, and Adrian came up with:

The solution for the picking and disposal of the cap was simple but quite creative. Consider Figure 7: a neodymium disk-shaped magnet fits into the circular edge of the left finger. When the gripper is closed, this magnet and its enclosure fit into the right circle. The magnetic field is strong enough to pick the cap from the outer side of the right finger, as shown in Figure 6-d. To drop the cap, simply open the fingers (check the video).

Figure 7: Successful design for the cap disposal mechanism

Final Result


Thoughts on Open Robotics Lab

This project was an interesting activity for my students, but also for me as the supervisor. I will write a separate post on the details of this form of empirical robotics activity later. For the moment, it suffices to say that giving students the freedom to pick their own idea and letting them try and fail offered them an insight which is often not granted in lecture halls. For instance, we all learn about joint limits in robotics lectures; however, one does not really understand the challenge until facing a complex manipulation task. Joint limits were one of the issues that my students faced. Another problem was the overall headache of dealing with a not-so-reliable Panda robot and its somewhat unfriendly software.

In the next iteration of Open Robotics Lab we will try to track the motion of a small (basket)ball using RGB images and position/orient a robot carrying a basket attached to its end-effector. Basically, a basketball game in which the user throws and always hits the target! I will report on that once the project is concluded.

Credits and External Links

This project was supervised by me and conducted by Marco Boneberger, Adrian Rudloff and Maximilian von Unwerth. All the pictures and the video in this post are made and owned by Marco Boneberger, Adrian Rudloff and Maximilian von Unwerth, and by the Institut für Robotik und Prozessinformatik, Technische Universität Braunschweig.

The guys are going to publish their code on GitHub under the EUPL. I will update this page with links to all the repos once they go public.

LibORL has gone public at

* The idea of a fully empirical lab activity for the students was previously implemented at Braunschweig and Stanford by Prof. Dr.-Ing. Torsten Kröger, a friend and former research fellow at IRP.