Testing the new Catapult

I built the new catapult (that I talked about in my last post) and tested it.

Immediately I could see many of the improvements that were expected of the new design:

  • It is easy to adjust.
  • It fires straight.
  • The turntable moves as expected (+/- 30 degrees).
  • The new design is much easier to install on the robot.

However, I found that it was still too inconsistent.  Specifically, the energy stored in the system would gradually decrease as the catapult was used for many shots.  Initially I imagined that the rubber band was stretching and becoming weaker.  But then I realised that the band was slipping along the catapult arm.

So here is the new design for the arm.  It now incorporates a hole for the band to pass through so that it can’t slip along the arm.

Catapult arm (version lost count).

I also found that there was insufficient travel in the trajectory adjuster.  I went for the very high-tech approach of gluing on a piece of Lego to fix that:

Catapult with Lego “tweak”.

This new (dare I say final) version is now yielding much better results.  I’m optimistic it will achieve close to 15 out of 15 shots!

A New Catapult for Feed the Fish

Back in November I made this design for Feed the Fish.

Feed the Fish Catapult, Version 3

At the time, I was pleased with it.  Initial tests went well, and there were occasions where the catapult delivered all five rounds into the aquarium in a single attempt.  Since then, however, I have become less satisfied with the design.  Here are the problems with it:

  • I misinterpreted the initial results.  The catapult can deliver all five rounds into the aquarium, but it would be lucky to do so three times in a row.  It needs to deliver all five almost every time for this game to be viable without hundreds of attempts.
  • Adjustments to the catapult are too difficult to make.  It is difficult to add or subtract energy storage (by adjusting the rubber bands) since it is necessary to partially dismantle the device.  Adjusting the trajectory is also tricky since it is necessary to fight the rubber bands whilst doing it.
  • The catapult generally fires to the left.  This might be because the trigger acts on one side of the catapult arm rather than through the middle.  It might also be because the rubber band is tighter on one side than the other.  I would like a design that fires straight!
  • The turntable only moves +/-10 degrees: in November I thought this would be about right, but the catapult fires too far left for this to work.  Since I want to reuse the turntable design for our Nerf gun, it is worth revisiting, and +/-30 degrees seems a better choice.
  • It is difficult to set the robot up with this implement because the implement has the camera fitted.  It is necessary to juggle the lid, catapult and battery during installation: you need lots of hands.  This is probably not a problem for PiWars@Home, but it feels like the camera cable would be easy to break in the process.
  • There are quite a few aspects of the design which probably contribute to inconsistent results: the arm bearing has too much play, the turntable is not stable enough and the trigger does not act centrally on the catapult arm.

So, here is version 4.

Feed the Fish Catapult, Version 4.

These are the changes:

  • The trigger now acts centrally on the catapult arm.
  • The turntable has been re-worked to move +/-30 degrees.  It is also easier to tighten to remove excess play.
  • The rubber band can now be adjusted with a thumbscrew via a worm gear.
  • The trajectory can be adjusted by the two long bolts mounted on the sides of the catapult.
  • The arm now pivots on ball bearings rather than a bolt so it has less play.
  • The system uses the hull mounted camera.

That’s the theory anyway.  I’m currently printing the new parts to try it.

Voice Control Taking Shape

I’m a bit late to the voice control party.  But that’s good news because everyone else has done all the hard work.  In particular, Neil Stevenson from Team Dangle has blogged about how to install Vosk (https://dangle.blogsyte.com/?p=199) on a Pi.  I followed Neil’s instructions and had a voice control system up and running (on an RPi 400) in minutes.

The good news is that my wife’s posh Bose noise-cancelling headphones work beautifully with it too: I was worried that they might not be Pi-compatible.  Actually, setting them up wasn’t easy; I used the instructions at https://howchoo.com/pi/bluetooth-raspberry-pi but had to be very careful not to let them configure as “headphones” (the rather insistent default).  They must be set up as a “headset” to work with Vosk.

I then tried to connect the headphones to P21’s Pi.  This didn’t go to plan.  The problem is that I redirected the Pi’s hardware UART to the Pi’s header ages ago.  This is used to pass messages between the Pi and the robot’s Arduino hardware controller and is fundamental to the robot’s design.  Unfortunately, that same UART is normally required by the Pi’s Bluetooth system.

I thought of the following options to counter this:

  1. Revert the robot interface to the Pi’s software UART.  I’m not keen on this: https://howchoo.com/pi/bluetooth-raspberry-pi says that the software UART’s timing fails if the Pi is heavily loaded, and the Pi is certainly heavily loaded when the vision system is running.
  2. I could try connecting a Bluetooth dongle to a USB port on P21’s RPi.  I have no idea whether this would help, so I will probably look into it in time.  I would probably also have to reprint P21’s lid to fit a dongle in.
  3. I could place another RPi on the robot and connect it to the implement connector.  I might run into UART problems again though: the hardware UART would be needed for the implement connector (and Bluetooth wants it, obviously!), and Vosk probably loads the Pi too heavily for the software UART to be reliable.
  4. I could place another RPi on the robot and connect it via IP and WiFi. 

I decided to go with the last option, using the RPi and touchscreen from P19 since they are currently redundant.  This setup is appealing because it can be created with minimal changes to P21.

So, here’s what I did:

  1. I took the Pi and touchscreen off P19.  I updated the OS then installed the Bluetooth manager software as described at https://howchoo.com/pi/bluetooth-raspberry-pi.  This time the headphones connected without problem.
  2. I added a new socket interface into PiDrogen’s software framework.  A client can now connect to it and issue words.  I updated the general framework to understand some words like “go”, “stop” and “pause”.  I updated the line follower game to understand “straight” (go straight on at the next junction), “right” and “left”.  I also added “return” which has the robot do an about turn, reverse the directions it followed before, follow the line (back to the start) then do another about turn.
  3. I installed Vosk on the RPi following Neil’s instructions.  I then added a socket client to glue Vosk to P21’s framework.  This is awkward since Vosk often repeats itself; the glue code filters out the repetition (a rough sketch of this glue follows the list).
  4. I designed a carrier for the RPi, touchscreen and a battery pack, see below.
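
For the curious, here is a minimal sketch of the sort of glue involved, assuming a hypothetical word socket on P21 at p21.local:5005; the address, the newline protocol and the simple repeat filter are illustrative rather than the framework’s real interface.

    import json
    import queue
    import socket

    import sounddevice as sd
    from vosk import Model, KaldiRecognizer

    HOST, PORT = "p21.local", 5005        # hypothetical address of P21's word socket
    SAMPLE_RATE = 16000

    audio_q = queue.Queue()

    def audio_callback(indata, frames, time, status):
        """Push raw microphone blocks onto a queue for the recogniser."""
        audio_q.put(bytes(indata))

    model = Model("model")                # path to the downloaded Vosk model
    recogniser = KaldiRecognizer(model, SAMPLE_RATE)

    with socket.create_connection((HOST, PORT)) as sock, \
         sd.RawInputStream(samplerate=SAMPLE_RATE, blocksize=8000,
                           dtype="int16", channels=1, callback=audio_callback):
        last_sent = None
        while True:
            if recogniser.AcceptWaveform(audio_q.get()):
                text = json.loads(recogniser.Result()).get("text", "")
                for word in text.split():
                    if word != last_sent:             # crude repeat filter
                        sock.sendall((word + "\n").encode())
                        last_sent = word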

Does it work?

Yes.  But it took quite a few “takes” to get a complete video of the whole system working.  Three errors made the system unreliable:

  1. The line follower code made a number of errors where it either lost the line or it took the wrong turn at junctions.  At the time I countered this by down-tuning the robot speed but I now believe I should have adjusted the black threshold.
  2. Sometimes Vosk is too slow.  I start the robot by saying the sequence “go straight right left” (those being the directions to follow for Up the Garden Path).  If the word “straight” reaches P21 after the robot has reached the first junction, the robot defaults that junction to straight on, but the remaining sequence is then one junction out of step with the robot’s location.  As a result, the robot takes a wrong turn at a subsequent junction (see the sketch after this list).  I can bypass this problem by omitting the word “straight”, but I would prefer to get it working nicely.
  3. Sometimes Vosk misses a word.  If one of the direction words is missed then the robot might make a wrong turn.
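
To make the “out of step” failure concrete, here is a toy model of the kind of turn queue the line follower keeps; the class and the default-to-straight rule mirror the description above, but it is only a sketch, not the real code.

    from collections import deque

    class TurnQueue:
        """Toy model of the pre-loaded junction directions."""

        def __init__(self):
            self.turns = deque()

        def add_word(self, word):
            """Called when a direction word arrives from Vosk."""
            if word in ("straight", "right", "left"):
                self.turns.append(word)

        def next_turn(self):
            """Called when the robot reaches a junction.  If the word for this
            junction hasn't arrived yet, default to straight on; from then on
            the queued words are one junction out of step with the course."""
            return self.turns.popleft() if self.turns else "straight"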

At the time of writing I have a video that could be submitted to the competition.  The robot is voice controlled and uses the camera to follow the line (which is the maximum-scoring option).  But the video was made with down-tuned speed, and I forgot to return the robot to the start by saying “return” (which is not part of the game, but it’s fun).

I think I’m going to address some of the issues above then have another go!

I want Doughnuts.

Two blog posts in one day!  Whatever next?

Version 1 of the junction detector has a problem.  Frequently the angle of the road ahead does not match the preset angles of the coloured bars in the detector (see the earlier blog).  As a result, the code misses road options.

So, I have developed a new system.

Image 1

As with version 1 of the junction detector, the image is split into top, middle and bottom bands.  The bottom two bands are used to work out the orientation of the line in the image.  This is then used to locate a centre for the detector at the point where the road is half way up the top band; see the turquoise line in the image above.  This is all the same as version 1 of the junction detector (which was the subject of the previous blog entry).

However, at this point the code creates a mask consisting of two concentric circles; yes, like a doughnut.  The code then segments the line in the top band using the doughnut mask.  The result is the two purple segments (one of which has been partially overwritten in blue) in the image.  At this point the code can see two portions of road in the top band.
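
A minimal OpenCV sketch of the doughnut step might look like this; the radii and function name are illustrative, and the real segmentation code is certainly more involved:

    import cv2
    import numpy as np

    def doughnut_segments(line_mask, centre, r_inner=20, r_outer=40):
        """Return one contour per piece of road crossing an annulus
        ("doughnut") centred on the detector centre.

        line_mask : uint8 image, 255 wherever the black floor line was found.
        centre    : (x, y) detector centre in pixel coordinates (ints).
        """
        doughnut = np.zeros_like(line_mask)
        cv2.circle(doughnut, centre, r_outer, 255, -1)   # filled outer disc
        cv2.circle(doughnut, centre, r_inner, 0, -1)     # hollow out the middle
        masked = cv2.bitwise_and(line_mask, doughnut)
        contours, _ = cv2.findContours(masked, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return contours                                  # one blob per visible exit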

There are a few other things to note in the frame:

  • The blue line shows the target that the line follower is using to direct the robot. Note there is an inherent assumption that the robot position is half way across the bottom of the frame.
  • The steerDegrees caption shows the angle that the line follower is asking the robot to steer; 17 degrees left of centre.
  • NextTurn:R shows that the robot should turn right at the next junction; the image is the approach to the second junction on the PiWars@Home course.

Now we can roll the video forward to the junction:

Image 2

Now there are three doughnut portions visible.  As a result the code is reporting “NextTurn:R at junction”.  In other words, it can see the junction.

Because the next turn should be right (the code has a pre-coded list of turns: straight, right, left for the PiWars@Home course), the right-most doughnut portion is overwritten in blue, the blue target line passes through it and the target steer angle is 3 degrees left of centre.

This system is better than the coloured lines in version 1 because it reads the directions of the exits from the image, rather than needing them to conform to a preset ideal.  The “doughnut” system ensures that the segmentation yields a blob for each exit rather than one contiguous blob.
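
The exit-selection step could be sketched like this, choosing among the doughnut blobs by the bearing of each centroid about the detector centre; this is a guess at the approach rather than the actual code:

    import math
    import cv2

    def choose_exit(contours, centre, next_turn):
        """Pick the doughnut blob that best matches the required turn.
        Bearings are measured about the detector centre: 0 degrees is straight
        up the image, positive is to the right."""
        def bearing(contour):
            m = cv2.moments(contour)
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            return math.degrees(math.atan2(cx - centre[0], centre[1] - cy))

        scored = [(bearing(c), c) for c in contours if cv2.contourArea(c) > 0]
        if next_turn == "right":
            return max(scored, key=lambda bc: bc[0])[1]   # right-most exit
        if next_turn == "left":
            return min(scored, key=lambda bc: bc[0])[1]   # left-most exit
        return min(scored, key=lambda bc: abs(bc[0]))[1]  # straightest-ahead exit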

However, an additional system is needed in this algorithm. As the robot gets close to a junction the following occurs:

Image 3

The centre of the doughnut has now moved beyond the junction intersection.  But the robot has yet to reach the junction, so it is important that the junction detector does not move on to the next direction (in the list of directions) until the camera can no longer see the current junction.

To achieve this a counter is employed. When a junction is detected the counter is set to a value (from the parameter set) of about 10 (equating to 0.5 seconds of robot travel).  The counter is decremented for each video frame following the junction: the junction is deemed traversed when the counter reaches zero.
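
In code, the hold-off need be nothing more than this (the frame count and names are illustrative):

    class JunctionLatch:
        """Stops the direction list advancing until a detected junction has
        been driven over (about 10 frames, i.e. 0.5 s at 20 frames per second)."""

        def __init__(self, hold_frames=10):
            self.hold_frames = hold_frames
            self.remaining = 0

        def junction_seen(self):
            """Call when the doughnut detector reports a junction."""
            self.remaining = self.hold_frames

        def tick(self):
            """Call once per video frame.  Returns True on the frame where the
            junction is deemed traversed and the next direction can be taken."""
            if self.remaining > 0:
                self.remaining -= 1
                return self.remaining == 0
            return False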

In the image above the counter is more than zero.  But the doughnut has only detected two line segments.  This is inconsistent, so the code re-segments a strip through the centre of the top band; this is shown in yellow.  The code now steers towards the centroid of the yellow segment (as denoted by the blue outline).  Had there been multiple yellow segments, the algorithm would have chosen the left-most, since the next turn is marked as left; see the next image:

Image 4

This algorithm works well.  The robot consistently identifies the junctions correctly and it accurately follows the pre-defined route.

A new video of the robot completing the course has been made.  It is now completely vision controlled (unlike the first video that used vision and odometry).  The new code is also faster; the robot completes the course in under 15 seconds.

The next job is to replace the pre-coded directions with a voice controlled system.

Line Follower Junction Detection Version 1.

You may recall that we have managed to complete the Up the Garden Path challenge using a mixture of video guidance and odometry (see our blog from the 25th January).

The next step in our plan is to remove the utilisation of the odometry.  This requires that the line follower code be changed to understand road junctions.  Here are details of our first attempt at this.  This is the algorithm that was presented at the PiWars conference last weekend.

Image 1

The image shows the output from the updated video interpreter.  The physical line on the ground is black, the road forks ahead, and the line at the bottom of the image disappears under the robot.

The code splits the image into three horizontal bands and the line is separately segmented in each band.  The segments are outlined by the algorithm in white.

The code then calculates the centroids of the two bottom segments and draws a (blue) line through them to the bottom edge of the top band.  From this point the algorithm plots the pink, red, white and green lines at angles which coincide with possible junction exit directions.  Each line has been tilted by the angle of the blue line to cater for any misalignment of the robot on the black line.

Next, the algorithm looks under the coloured lines and counts the number of pixels which are black (i.e., corresponding to the physical black line on the floor).  These counts are shown in colours corresponding to the lines.  So, there are 167 black pixels under the green line, 183 under the red, etc.
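
The counting can be done by drawing each candidate exit line into its own mask and intersecting it with the segmented floor line; here is a hedged sketch (line length, thickness and names are made up):

    import cv2
    import numpy as np

    def black_pixels_under_line(line_mask, start, angle_deg, length=80):
        """Count segmented-line pixels lying under a probe line drawn from
        `start` at `angle_deg` (0 = straight up the image, positive = right).

        line_mask : uint8 image, 255 wherever the black floor line was found.
        """
        end = (int(start[0] + length * np.sin(np.radians(angle_deg))),
               int(start[1] - length * np.cos(np.radians(angle_deg))))
        probe = np.zeros_like(line_mask)
        cv2.line(probe, start, end, 255, thickness=5)
        return cv2.countNonZero(cv2.bitwise_and(probe, line_mask))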

Now we can see the updated numbers as the robot moves forward:

Image 2

In the new image the red and green counts have jumped to over 700, indicating that these directions are available as exits.  A simple, configurable threshold can then be used by the algorithm to identify them.

This algorithm works a lot of the time.  However, it is not entirely robust.

Image 3

For this image the threshold has been set to 500.  As a result, only the pink line is showing: the other lines do not meet the threshold so they are not drawn.  Under some interpretations the algorithm has failed here.  The pink line is correctly shown; however, the route forwards has not been detected.  This is because the middle-band centroid falls somewhere in the intersection of the two routes, resulting in a misinterpretation of the location of the junction.  The algorithm does not see this frame as a junction since it only sees one exit.

A different error has occurred in this image:

Image 4

In this example the road bends to the right, and the angle of the road does not match any of the preset junction directions.

It’s now my view that this algorithm is not flexible enough to cater for all eventualities.  I have a new idea which I’m going to try…

…And it uses doughnuts.

Vision System Development

Building software to control a robot based on a video feed is tricky.  Generally, you get an idea how your algorithm should work so you implement it.  But when you try it the robot does something completely different to your plan and you have no idea why.  And the next time you try it, it does something different again.

Prior to the 2020 games we swapped from an onboard RPi screen to using a network connection to control the Pi.  This gave us the opportunity to build an infrastructure to assist with the machine vision coding of our robot.  The video pipeline of the software is shown below:

PiDrogen’s Video Processing Overview

The game framework is the centre of the application on the robot.  When it runs on the robot it receives a live video feed from a Raspberry Pi camera via Pi specific camera code and OpenCV.  Individual video frames are passed to game specific video interpreters at a rate of 20 frames per second.  The interpreters process each frame and direct robot movements (via the framework).  They can also, optionally, mark up the image to assist with debugging.
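
I won’t reproduce the framework here, but a game-specific video interpreter has roughly this shape; the class and method names are illustrative rather than the framework’s real ones:

    import cv2

    class LineFollowerInterpreter:
        """Illustrative game-specific video interpreter: the framework hands it
        frames at 20 fps; it returns a steering request and may mark up the
        frame for debugging."""

        def process_frame(self, frame_bgr):
            steer_degrees = 0.0            # ...game-specific vision goes here...
            marked_up = frame_bgr.copy()
            cv2.putText(marked_up, "steerDegrees: %.0f" % steer_degrees,
                        (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 0, 0), 2)
            return steer_degrees, marked_up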

The framework presents an IP socket interface.  A PC application, “PW2020 Controller”, can be used to connect to the socket interface, normally via WiFi (although a wired connection is also possible).  The PC application can display a live feed of the marked-up images, also at 20 frames per second.  The application permits access to the game framework parameters and various robot control features.

PW2020 Controller PC Application

In the image above the controller is showing a video feed which has been marked up by the line follower code.  The line follower has partially marked up the line segment and is requesting a turn of 16 degrees.  Sadly, this is wrong: the screen also says that the next turn should be left!  Still, that’s the point of this system.

By default, the PC application saves a copy of the received marked-up images to a video file on the PC.

The most useful aspect of all this is that the game framework can also be used on a PC; note in the image above that the application reports it is connected to “localhost”.  When used in this way, the video feed comes from a video file on the PC, saved as described above.  So, it is possible to record the robot performing a short sequence (from the robot’s camera), then replay the sequence on the PC until the video processing can be finalised.  Furthermore, the replay gets recorded too, so it is possible to run a sequence and step back and forth, frame by frame, to really drill down into a problem.  This allows much quicker development.  It also allows testing of new algorithms against an archive of known problem scenarios.
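
Conceptually, replay mode is just a different video source feeding the same interpreter; here is a sketch of a PC-side replay loop (the file name and key handling are illustrative):

    import cv2

    cap = cv2.VideoCapture("problem_run.avi")     # a file saved earlier by the controller
    interpreter = LineFollowerInterpreter()       # an interpreter like the sketch above

    while True:
        ok, frame = cap.read()
        if not ok:
            break                                 # end of the recorded sequence
        steer_degrees, marked_up = interpreter.process_frame(frame)
        cv2.imshow("marked up", marked_up)
        if cv2.waitKey(0) == ord("q"):            # any other key steps one frame
            break
    cap.release()
    cv2.destroyAllWindows()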

Up the Garden Path Progress

I’ve done quite a lot of work on UTGP over the last couple of weeks.

To start with I made the course.

Then I used our 2019 line following code to time the course under vision control.  This code doesn’t know about junctions, so I made it work as follows (a rough code sketch appears after the list):

  1. Set the odometer to count down a fixed distance and stop.  The distance set is 50mm short of the next junction.
  2. Follow the line until the odometer says stop.
  3. Drive 50mm without video guidance on the heading that was last specified by the line follower.
  4. Turn to the exit heading of the junction (as measured by the IMU).
  5. Drive 50mm without guidance on the new heading.
  6. Go to 1.
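
Roughly, that loop looks like the sketch below; the helpers (set_odometer_countdown, odometer_expired, follow_line_step, last_line_heading, drive_straight, turn_to_heading) are hypothetical stand-ins for the real framework calls, not actual PiDrogen functions.

    def run_course(junctions):
        """Mix of dead reckoning and camera guidance.  `junctions` holds one
        (distance_to_junction_mm, exit_heading_degrees) pair per junction.
        All the helpers called below are hypothetical stand-ins."""
        for distance_mm, exit_heading in junctions:
            set_odometer_countdown(distance_mm - 50)         # stop 50 mm short
            while not odometer_expired():
                follow_line_step()                           # vision-guided following
            drive_straight(50, heading=last_line_heading())  # coast up to the junction
            turn_to_heading(exit_heading)                    # IMU-measured turn
            drive_straight(50, heading=exit_heading)         # clear the junction area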

This method is a mixture of dead-reckoning and camera guidance so I’m not sure what points it would attract in the games.  However, it is a “time on the board”.  The best times are around 16 seconds, but it’s quite hard to measure.

I then set to work on updating the line following code to interpret junctions.  I’ll be back soon to explain how that’s going.

Happy New Year!

Wow, it’s been a month since our last update.  That’s partly because we’ve been working on a talk for the upcoming conference, and partly because we’re too lazy after Christmas.  Anyway, let’s update you on progress.

Feed the Fish

We have progress on this game, but there’s more to do.

The robot does the appropriate moves around the course, it aligns itself on the aquarium using video guidance, then it stops at the appropriate place based on the front time-of-flight sensor.  The catapult fires, and sometimes, five projectiles land in the aquarium.  But only sometimes.

Why only sometimes?  The robot generally manoeuvres to within a few millimetres of the stop position; it seems unlikely that this is the problem.  More likely is that the catapult does not fire the projectiles consistently.

The next step for this game is to film some shots in slow motion to see if we can see where the problem is.

Oh, and we need to build an aquarium.

Tidy Up the Toys

‘Toys is a much better story.  So long as the moves are kept slow the game works.  We have quite a few videos of the boxes being autonomously collected and nicely stacked in the target box.  For now, we’re going to call this done.

It might be that we could make the robot move faster but that will require re-tuning of some parameters.  Perhaps if there’s still time when the other games are in the bag?

Up the Garden Path

This is the next game to address.

To begin with, we are going to use the old line following code that was originally used in P19.  It uses a Pi camera to follow the line.  The code will need to be updated because it doesn’t understand junctions, but this doesn’t seem that tricky to do.

Also, the code follows a white line on a black surface, but we will have a black line on a beige background (since our arena is made of hardboard).  Again, a pretty easy change.

The next step for this game is to mark up the course.  Break out the tape!

DIY Obstacle Course

Nothing to report on this game yet.  Our chief driver frequently drives P21 around under radio control so the robot (and driver) are completely ready to take on a course.  It’s just that we don’t have a course yet.

We would like to do the course soon though.  We have a pond which sometimes freezes over.  It would be cool to see some ice driving!  So we need to film this game while the weather is cold.

That’s all for now.  Until next time…

Move Over P20!

P20 (the robot we built for PiWars 2020) had loads of improvements over P19 (the machine we entered for PiWars 2019).  But most of the improvements were kind of… “incremental”: none of them resulted in a big performance increase.  For PiWars@Home we decided there was opportunity for fundamental change.

So, we present P21; our machine for the 2021 games.

P21, Our New Robot for PiWars@Home

I know, it looks just like P20.  But P21 has one big upgrade: it uses new motors.  The motors have encoders, which, when combined with PID enabled motor drivers, allow much finer control.  And since we’re expecting better control, we’ve gone for a 50% faster output shaft RPM so that the robot is faster too.

This change has taken a long time to complete since it has required some fundamental changes.

  • Our designs employ an Arduino-based peripheral control board which performs the low-level, timing-constrained jobs on behalf of the Raspberry Pi.  We have replaced this board with a new one since each motor now requires a couple of digital signals from its encoder.
  • The peripheral controller software has also been re-written.  The motors are now interfaced via PID algorithms; this means that when the Raspberry Pi requests a motor speed (expressed as a percentage of full speed), that is what it gets.  With the previous (pulse-width-modulated) control scheme you got only a vague approximation, and control was particularly poor at low speeds because no motor speed feedback was employed and gearboxes have inherent friction.  (A simplified sketch of the closed-loop idea appears after this list.)
  • The chassis has also been redesigned and re-printed since the motors are a different size.
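
The real control loops run on the Arduino in the peripheral controller; purely to illustrate the closed-loop idea, here is a toy PID speed loop in Python (the gains, scaling and names are invented, not the real firmware):

    class VelocityPID:
        """Toy PID speed loop: encoder feedback pulls the measured speed onto the
        requested speed, which an open-loop PWM duty cycle cannot do reliably."""

        def __init__(self, kp=0.8, ki=2.0, kd=0.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.previous_error = 0.0

        def update(self, requested_pct, measured_pct, dt):
            """One control period: return a PWM duty cycle in -100..100."""
            error = requested_pct - measured_pct
            self.integral += error * dt
            derivative = (error - self.previous_error) / dt
            self.previous_error = error
            duty = self.kp * error + self.ki * self.integral + self.kd * derivative
            return max(-100.0, min(100.0, duty))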

There is one other difference between P20 and P21: the implement interface has been changed.  P20 provided a bunch of servo outputs for implements, so it was simple to connect servos with no further electronics.  P21’s implement interface provides all the internal voltages (5.2v, 6v and the battery voltage) plus a two-way serial interface.  This change was largely imposed by the toy box lift, which uses motor pulses to measure the position of the jaws above the ground; the distance of travel is too much for a conventional servo.  The new interface is far more flexible than the old system, but it requires every implement to have a dedicated control board.  There’s no such thing as a free lunch.

P21’s hardware and low-level software are now largely complete.  We now need to work back through the higher-level code in the Raspberry Pi to take advantage of P21’s increased performance.  We’re hoping to see some big improvements soon!

The Tidy Up the Toys Challenge

We have progress to report on the Tidy Up the Toys challenge.

To begin with we thought we would adapt our Eco Disaster barrel lifter (shown below) for the tidy up the toys challenge.

Our Eco-Disaster Barrel Lift

However, this has a few shortcomings:

  • The barrel lift only raises the barrels by about 15mm.  This is plenty for Eco-Disaster but not adequate to place one toy box onto another (the boxes are 50mm tall).
  • When the barrel lift is carrying a barrel, the camera is more-or-less blinded.  To be honest, this is not ideal for Eco-Disaster either.
  • When PiDrogen is picking up a barrel it uses its forward-facing time of flight (ToF) sensor to measure the distance to the barrel.  But the ToF sensor is mounted too high in the chassis to see a box; instead, it looks over the top.
  • Finally, various components of the barrel lift are mounted under PiDrogen’s hull.  As a result, it is necessary to use large diameter tyres when using it.  However, autonomous manoeuvres work better when PiDrogen is fitted with smaller diameter wheels.

We ended up designing a new lift system; see below:

Tidy Up the Toys Lift

The new design addresses the issues previously mentioned:

  • It can lift boxes beyond 50mm, so it can place one box on another.  It can also lift one, two or three boxes, so it can place two boxes on a third box, then lift all three.
  • The lift mechanism now uses a rack and pinion system which operates more smoothly than the barrel lift (which used a servo to create the lifting motion).  This means the lift can reliably lift three boxes without them toppling.
  • If the lift moves high enough (up to about 80mm) then the camera can see under the lift mechanism, even if it is carrying boxes.  So, the lift can pick up a box, raise it above the camera’s field of view, then use camera guidance to find a further box.
  • A new ToF sensor is mounted low on the static frame of the lift so that it can measure the distance to a box when lining up to pick it up.
  • The new design is compatible with the smaller diameter wheels, as can be seen in the image.

The toy box lift works reliably under radio control.  The next step is to write the code to allow the robot to play the game autonomously.

Hopefully, we will get to play Eco-Disaster one day.  Assuming we do, we will probably adapt this design to lift the Eco-Disaster barrels as it has several advantages over the previous barrel lift:

  • It could carry two (or possibly even three) barrels at once.
  • The robot can be fitted with small diameter wheels which are better for delicate manoeuvres.
  • The camera can still see forwards when a barrel has been lifted.  So, it should be possible to steer around obstructing barrels.

Now, let’s go write some toy-box-locating vision software.