YACA5 Update

Sorry!  I forgot to update the blog on the new catapult arm. (See my previous blog entry).

Sadly, it didn’t make a difference on the Feed the Fish game.  But I’ll explain it anyway.

Nerf Rival balls are obviously made with a mould which has two hemispherical parts.  The seam where the two parts come together is clearly visible.  The balls also have dimples like a golf ball.  YACA5 has slots so that the ball seams can be aligned with the flight path.  I wondered if aligning them might make the balls fly more consistently.

As I said, it doesn’t.

However, I have realised that not all Nerf Rival balls are born equal.  I test fired 40 balls 10 times each.  Some balls were on target 10 times.  One ball only flew correctly 4 times out of 10.

Perhaps the best hint I can give here is this: test your balls and pick the best!

Yet Another Catapult Arm

I’ve fallen into the trap of searching for the Holy Grail of Feed the Fish: that is 15/15 shots.  I’ve done so many “takes” that the course has gone darker (from tyre rubber) where the robot makes a right-angle turn to line up on the aquarium! 

This is my latest attempt at designing a more reliable catapult arm which I will call YACA5:

Yet Another Catapult Arm V5

If it works, I’ll explain it in another blog post.

Wish me luck!

Take a look at our Aquarium!

I’m rather proud of our aquarium. Here’s a picture:

iPads need less cleaning than real fish.

It is a 3D printed frame into which you can slot two (or three) iPads.  The red circle on the bottom is a target for the robot to align on.

The outside of the box is the regulation 200mm, but the hole in the top is rather smaller (because of the thickness of the iPads).  I reckon it’s worth it, though!

It’s a proud Dad moment: my eldest son (who is also the team driver) created a Scratch animation of the fish swimming about.  We then captured video of it running on a PC, did some Python reformatting of the video to make it full screen in portrait mode, then used Giphy.com to convert it into a gif.  We can open the gif in the Photos app, and the animation cycles forever.  Just the job for doing dozens of takes while trying to get 15 out of 15 shots!

Here’s the gif for your entertainment.

Well fed fish

Testing the new Catapult

I built the new catapult (that I talked about in my last post) and tested it.

Immediately I could see many of the improvements that were expected of the new design:

  • It is easy to adjust.
  • It fires straight.
  • The turntable moves as expected (+/- 30 degrees).
  • The new design is much easier to install on the robot.

However, I found that it was still too inconsistent.  Specifically, the energy stored in the system would gradually decrease as the catapult was used for many shots.  Initially I imagined that the rubber band was stretching and becoming weaker.  But then I realised that the band was slipping along the catapult arm.

So here is the new design for the arm.  It now incorporates a hole for the band to pass through so that it can’t slip along the arm.

Catapult arm (version lost count).

I also found that there was insufficient travel in the trajectory adjuster.  I went for the very high-tech approach of gluing on a piece of Lego to fix that:

Catapult with Lego “tweak”.

This new (dare I say final) version is now yielding much better results.  I’m optimistic it will achieve close to 15 out of 15 shots!

A New Catapult for Feed the Fish

Back in November I made this design for Feed the Fish.

Feed the Fish Catapult, Version 3

At the time, I was pleased with it.  Initial tests went well, and I had occasions where the catapult delivered five rounds into the aquarium in one attempt.  Since then, however, I have become less satisfied with the design.  Here are the problems with it:

  • I misinterpreted the initial results.  The catapult can deliver five shots into the aquarium.  But it would be lucky to get three consecutive deliveries of five shots.  The catapult needs to deliver five shots almost every time for this game to be viable without hundreds of attempts.
  • Adjustments to the catapult are too difficult to make.  It is difficult to add or subtract energy storage (by adjusting the rubber bands) since it is necessary to partially dismantle the device.  Adjusting the trajectory is also tricky since it is necessary to fight the rubber bands whilst doing it.
  • The catapult generally fires to the left.  This might be because the trigger acts on one side of the catapult arm rather than through the middle.  It might also be because the rubber band is tighter on one side than the other.  I would like a design that fires straight!
  • The turntable only moves +/-10 degrees: in November I thought this would be about right.  But the catapult fires too far left for this to work.  Since I want the turntable design for our Nerf gun, I think the turntable is worth revisiting, and I think +/-30 degrees is a better choice.
  • It is difficult to set the robot up with this implement because it has the camera fitted.  It is necessary to juggle the lid, catapult and battery during installation: you need lots of hands.  This is probably not a problem for PiWars@Home, but it feels like the camera cable would be easy to break in the process.
  • There are quite a few aspects of the design which probably contribute to inconsistent results: the arm bearing has too much play, the turntable is not stable enough and the trigger does not act centrally on the catapult arm.

So, here is version 4.

Feed the Fish Catapult, Version 4.

These are the changes:

  • The trigger now acts centrally on the catapult arm.
  • The turntable has been re-worked to move +/-30 degrees.  It is also easier to tighten to remove excess play.
  • The rubber band can now be adjusted with a thumbscrew via a worm gear.
  • The trajectory can be adjusted by the two long bolts mounted on the sides of the catapult.
  • The arm now pivots on ball bearings rather than a bolt so it has less play.
  • The system uses the hull mounted camera.

That’s the theory anyway.  I’m currently printing the new parts to try it.

Voice Control Taking Shape

I’m a bit late to the voice control party.  But that’s good news because everyone else has done all the hard work.  In particular, Neil Stevenson from Team Dangle has blogged how to install Vosk (https://dangle.blogsyte.com/?p=199) on a Pi.  I copied Neil’s instructions and had a voice control system up and running (on a RPi400) in minutes.

The good news is that my wife’s posh Bose noise-cancelling headphones work beautifully with it too: I was worried that they might not be Pi-compatible.  Actually, setting them up wasn’t easy; I used the instructions on https://howchoo.com/pi/bluetooth-raspberry-pi but had to be very careful not to let them configure as “headphones” (the rather insistent default). They must be set up as a “headset” to work with Vosk.

I then tried to connect the headphones to P21’s Pi.  This didn’t go to plan.  The problem is that I redirected the Pi’s hardware UART to the Pi’s header ages ago.  This is used to pass messages between the Pi and the robot’s Arduino hardware controller and is fundamental to the robot’s design.  The problem is that the UART is normally required by the Pi’s Bluetooth system.

I thought of the following options to counter this:

  1. Revert the robot interface to the Pi’s software UART.  I’m not keen on this; on https://howchoo.com/pi/bluetooth-raspberry-pi it says that the software UART timing fails if the Pi is heavily loaded.  The Pi is certainly heavily loaded when the vision system is running.
  2. I could try connecting a Bluetooth dongle to a USB port on P21’s RPi.  I have no idea if this will help so I will probably look into this in time.  I would probably have to reprint P21’s lid to fit a dongle in.
  3. I could place another RPi on the robot and connect it to the implement connector.  I might run into UART problems again though.  Vosk probably loads the Pi and I would not be able to use the hardware UART (obviously!).
  4. I could place another RPi on the robot and connect it via IP and WiFi. 

I decided to go with the last option.  I decided to use the RPi and touchscreen from P19 since it is currently redundant.  This setup was appealing since it can be created with minimal changes to P21.

So, here’s what I did:

  1. I took the Pi and touchscreen off of P19.  I updated the OS then installed the Bluetooth manager software as described in https://howchoo.com/pi/bluetooth-raspberry-pi . This time the headphones connected without problem.
  2. I added a new socket interface into PiDrogen’s software framework.  A client can now connect to it and issue words.  I updated the general framework to understand some words like “go”, “stop” and “pause”.  I updated the line follower game to understand “straight” (go straight on at the next junction), “right” and “left”.  I also added “return” which has the robot do an about turn, reverse the directions it followed before, follow the line (back to the start) then do another about turn.
  3. I installed Vosk on the RPi following Neil’s instructions.  I then added a socket client to glue Vosk to P21’s framework.  This is awkward since Vosk often repeats itself; the glue code clears the repetition. 
  4. I designed a carrier for the RPi, touchscreen and a battery pack, see below.
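To give a flavour of the glue code, here is a minimal sketch of the word filter and socket sender.  All of the names here (the command word list, the host and the port) are invented for illustration; the real PiDrogen framework interface will differ.

```python
import socket

# Hypothetical command vocabulary; the real framework's word list differs.
COMMAND_WORDS = {"go", "stop", "pause", "straight", "right", "left", "return"}

def dedup_words(words):
    """Collapse the immediate repeats that Vosk's successive partial
    results produce, so each spoken word is issued only once."""
    last = None
    for word in words:
        if word != last:
            yield word
        last = word

def send_words(words, host="p21.local", port=5005):
    """Forward recognised command words to the robot's socket interface.
    Host and port are placeholders."""
    with socket.create_connection((host, port)) as sock:
        for word in dedup_words(words):
            if word in COMMAND_WORDS:
                sock.sendall((word + "\n").encode())
```

One simplification to note: a genuinely repeated command (saying “left” twice in a row) would be swallowed by a filter this naive, so the real de-duplication needs to be a little smarter than this.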

Does it work?

Yes.  But it took quite a few “takes” to get a complete video of the whole system working.  Three errors made the system unreliable:

  1. The line follower code made a number of errors where it either lost the line or it took the wrong turn at junctions.  At the time I countered this by down-tuning the robot speed but I now believe I should have adjusted the black threshold.
  2. Sometimes Vosk is too slow.  I start the robot by saying the sequence “go straight right left” (those being the directions to follow for Up the Garden Path).  If the word “straight” reaches P21 after the robot has reached the first junction then the robot defaults the first junction to straight on, but then the sequence is out of step with the robot’s location.  As a result, the robot takes a wrong turn at a subsequent junction.  I can bypass this problem by omitting the word “straight”, but I would prefer to get it working nicely.
  3. Sometimes Vosk misses a word.  If one of the direction words is missed then the robot might make a wrong turn.

At the time of writing I have a video that could be submitted to the competition.  The robot is voice controlled and uses the camera to follow the line (which is the maximum scoring option).  But the video was made with down-tuned speed and I forgot to return the robot back to the start by saying “return” (which is not part of the game, but it’s fun).

I think I’m going to address some of the issues above then have another go!

I want Doughnuts.

Two blog posts in one day!  Whatever next?

Version 1 of the junction detector has a problem.  Frequently the angle of the road ahead does not match the preset angles of the coloured bars in the detector (see the earlier blog).  As a result, the code misses road options.

So, I have developed a new system.

Image 1

As with version 1 of the junction detector, the image is split into top, middle and bottom bands.  The bottom two bands are used to work out the orientation of the line in the image.  This is then used to locate a centre for the detector, where the road is half way up the top band; see the turquoise line in the image above.  This is all the same as version 1 of the junction detector (which was the subject of the previous blog entry).

However, at this point the code creates a mask consisting of two concentric circles; yes, like a doughnut.  The code then segments the line in the top band using the doughnut mask.  The result is the two purple segments (one of which has been partially overwritten in blue) in the image.  At this point the code can see two portions of road in the top band.
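Here is a minimal numpy sketch of the idea, assuming the line has already been thresholded into a boolean mask (the real code uses OpenCV, and the centre and radii here are arbitrary):

```python
import numpy as np

def annulus_mask(shape, centre, r_inner, r_outer):
    """Boolean doughnut mask centred on the detector point (y, x)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (yy - centre[0]) ** 2 + (xx - centre[1]) ** 2
    return (d2 >= r_inner ** 2) & (d2 <= r_outer ** 2)

def doughnut_segments(line_mask, centre, r_inner, r_outer):
    """Return the connected blobs of line pixels inside the doughnut,
    one blob (a list of (y, x) pixels) per visible portion of road."""
    inside = line_mask & annulus_mask(line_mask.shape, centre, r_inner, r_outer)
    seen = np.zeros_like(inside, dtype=bool)
    h, w = inside.shape
    blobs = []
    for y in range(h):
        for x in range(w):
            if inside[y, x] and not seen[y, x]:
                # Flood-fill one blob (4-connectivity).
                stack, pixels = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and inside[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                blobs.append(pixels)
    return blobs
```

A straight road through the centre crosses the doughnut twice and so yields two blobs; each junction exit adds a further blob.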

There are a few other things to note in the frame:

  • The blue line shows the target that the line follower is using to direct the robot. Note there is an inherent assumption that the robot position is half way across the bottom of the frame.
  • The steerDegrees caption shows the angle that the line follower is asking the robot to steer; 17 degrees left of centre.
  • NextTurn:R shows that the robot should turn right at the next junction; the image is the approach to the second junction on the PiWars@Home course.

Now we can roll the video forward to the junction:

Image 2

Now there are three doughnut portions visible.  As a result the code is reporting “NextTurn:R at junction”.  In other words, it can see the junction.

Because the next turn should be right (the code has a pre-coded list of turns: straight, right, left for the PiWars@Home course), the right-most doughnut portion is overwritten in blue, the blue target line passes through it and the target steer angle is 3 degrees left of centre.
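Choosing among the portions can then be as simple as comparing centroids.  This sketch picks the exit by centroid x position, which is a hypothetical simplification of the real rule:

```python
def choose_exit(blobs, next_turn):
    """Pick the doughnut portion matching the planned turn.
    blobs: lists of (y, x) pixels; next_turn: 'L', 'R' or 'S'.
    Returns the centroid x of the chosen portion."""
    centroids = sorted(sum(x for _, x in blob) / len(blob) for blob in blobs)
    if next_turn == "R":
        return centroids[-1]   # right-most portion
    if next_turn == "L":
        return centroids[0]    # left-most portion
    return centroids[len(centroids) // 2]  # straight on: middle portion
```

The target steer line is then drawn through the chosen centroid, as in the blue line in the image above.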

This system is better than the coloured lines in version 1 because it reads the directions of the exits from the image, rather than needing them to conform to a preset ideal.  The “doughnut” system ensures that the segmentation yields a blob for each exit rather than one contiguous blob.

However, an additional system is needed in this algorithm. As the robot gets close to a junction the following occurs:

Image 3

The centre of the doughnut has now moved beyond the junction intersection.  But the robot has still to reach the junction, so it is important that the junction detector does not move on to the next direction (in the list of directions) until the camera can no longer see the current junction.

To achieve this a counter is employed. When a junction is detected the counter is set to a value (from the parameter set) of about 10 (equating to 0.5 seconds of robot travel).  The counter is decremented for each video frame following the junction: the junction is deemed traversed when the counter reaches zero.
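The counter logic can be sketched as a small state holder (the names are invented; the real hold-off value comes from the framework’s parameter set):

```python
JUNCTION_HOLDOFF_FRAMES = 10  # ~0.5 s of travel at 20 frames per second

class JunctionTracker:
    """Decides when a detected junction has actually been traversed."""

    def __init__(self, holdoff=JUNCTION_HOLDOFF_FRAMES):
        self.holdoff = holdoff
        self.counter = 0

    def on_frame(self, junction_seen):
        """Call once per video frame.  Returns True on the frame where
        the junction is deemed traversed (counter has run down)."""
        if junction_seen:
            self.counter = self.holdoff  # (re)arm while the junction is visible
            return False
        if self.counter > 0:
            self.counter -= 1
            return self.counter == 0
        return False
```

Only when `on_frame` returns True does the detector advance to the next direction in the list.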

In the image above the counter is more than zero.  But the doughnut has only detected two line segments.  This is inconsistent, so the code re-segments a strip through the centre of the top band; this is shown in yellow.  The code now steers towards the centroid of the yellow segment (as denoted by the blue outline).  Had there been multiple yellow segments, the algorithm would have chosen the left-most, since the next turn is marked as left; see the next image:

Image 4

This algorithm works well.  The robot consistently identifies the junctions correctly and it accurately follows the pre-defined route.

A new video of the robot completing the course has been made.  It is now completely vision controlled (unlike the first video that used vision and odometry).  The new code is also faster; the robot completes the course in under 15 seconds.

The next job is to replace the pre-coded directions with a voice controlled system.

Line Follower Junction Detection Version 1.

You may recall that we have managed to complete the Up the Garden Path challenge using a mixture of video guidance and odometry (see our blog from the 25th January).

The next step in our plan is to remove the utilisation of the odometry.  This requires that the line follower code be changed to understand road junctions.  Here are details of our first attempt at this.  This is the algorithm that was presented at the PiWars conference last weekend.

Image 1

The image shows the output from the updated video interpreter.  The physical line on the ground is black, the road forks ahead, and the line at the bottom of the image disappears under the robot.

The code splits the image into three horizontal bands and the line is separately segmented in each band.  The segments are outlined by the algorithm in white.

The code then calculates the centroids of the two bottom segments and draws a (blue) line through them to the bottom edge of the top band.  From this point the algorithm plots the pink, red, white and green lines at angles which coincide with possible junction exit directions.  Each line has been tilted by the angle of the blue line to cater for any misalignment of the robot on the black line.

Next, the algorithm looks under the coloured lines and counts the number of pixels which are black (i.e., corresponding to the physical black line on the floor).  These counts are shown in colours corresponding to the lines.  So, there are 167 black pixels under the green line, 183 under the red, etc.
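Here is a sketch of that pixel counting, assuming the segmented line is a boolean mask.  In this sketch 0 degrees points straight up the image, and the caller adds the blue-line tilt to the probe angle before calling:

```python
import math

def black_count_along(line_mask, origin, angle_deg, length):
    """Count line (True) pixels sampled step by step along a probe line
    of the given length, starting at origin (y, x) and heading at
    angle_deg (0 = straight up the image, positive = clockwise)."""
    oy, ox = origin
    a = math.radians(angle_deg)
    count = 0
    for step in range(length):
        y = int(round(oy - step * math.cos(a)))
        x = int(round(ox + step * math.sin(a)))
        if 0 <= y < line_mask.shape[0] and 0 <= x < line_mask.shape[1]:
            if line_mask[y, x]:
                count += 1
    return count
```

An exit is then deemed available when its count exceeds the configurable threshold.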

Now we can see the updated numbers as the robot moves forward:

Image 2

In the new image the red and green counts have jumped to over 700, indicating that these directions are available as exits.  A simple, configurable threshold then lets the algorithm identify them.

This algorithm works a lot of the time.  However, it is not entirely robust.

Image 3

For this image the thresholding system has been set to 500.  As a result, only the pink line is showing: the other lines do not meet the threshold.  In one sense the algorithm has failed here.  The pink line is correctly showing, but the route straight ahead has not been detected.  This is because the middle-band centroid falls somewhere in the intersection of the two routes, so the location of the junction is misjudged.  The algorithm does not see this frame as a junction since it only sees one exit.

A different error has occurred in this image:

Image 4

In this example the road bends to the right.  In this case the angle of the road does not match any of the junction direction options.

It’s now my view that this algorithm is not flexible enough to cater for all eventualities.  I have a new idea which I’m going to try…

…And it uses doughnuts.

Vision System Development

Building software to control a robot based on a video feed is tricky.  Generally, you get an idea how your algorithm should work so you implement it.  But when you try it the robot does something completely different to your plan and you have no idea why.  And the next time you try it, it does something different again.

Prior to the 2020 games we swapped from an onboard RPi screen to using a network connection to control the Pi.  This gave us the opportunity to build an infrastructure to assist with the machine vision coding of our robot.  The video pipeline of the software is shown below:

PiDrogen’s Video Processing Overview

The game framework is the centre of the application on the robot.  When it runs on the robot it receives a live video feed from a Raspberry Pi camera via Pi-specific camera code and OpenCV.  Individual video frames are passed to game-specific video interpreters at a rate of 20 frames per second.  The interpreters process each frame and direct robot movements (via the framework).  They can also, optionally, mark up the image to assist with debugging.

The framework presents an IP socket interface.  A PC application, “PW2020 Controller”, can be used to connect to the socket interface, normally via WiFi (although a wired connection is also possible).  The PC application can display a live feed of the marked-up images, also at 20 frames per second.  The application permits access to the game framework parameters and various robot control features.

PW2020 Controller PC Application

In the image above the controller is showing a video feed which has been marked up by the line follower code. The line follower has partially marked up the line segment and is requesting a turn of 16 degrees. Sadly, this is wrong: the screen also says that the next turn should be left! Still, that’s the point of this system.

By default, the PC application saves a copy of the received marked-up images to a video file on the PC.

The most useful aspect of all this is that the game framework can also be used on a PC; note in the image above the application reports that it is connected to “localhost”.  When used in this way, the video feed comes from a saved video file on the PC which was made as described above.  So, it is possible to record the robot performing a short sequence (from the robot’s camera), then replay the sequence on a PC until the video processing can be finalised.  Furthermore, this gets recorded too, so it is possible to run a sequence and step back and forth, frame by frame, to really drill down into the problem. This allows much quicker development.  It also allows testing of new algorithms against an archive of known problem scenarios.
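The frame-pump at the heart of this can be sketched in a few lines; the interpreter callback and the source iterable are stand-ins for the real framework classes:

```python
import time

FRAME_PERIOD = 1 / 20  # the framework delivers 20 frames per second

def pump_frames(source, interpreter, realtime=False):
    """Feed frames from `source` to a game-specific interpreter.
    On the robot the source wraps the live Pi camera feed; on the PC
    it reads a saved marked-up video file.  With realtime=False a
    recorded run can be replayed as fast as the interpreter allows,
    or stepped frame by frame under a debugger."""
    for frame in source:
        interpreter(frame)  # process the frame, optionally marking it up
        if realtime:
            time.sleep(FRAME_PERIOD)
```

Because the interpreter only ever sees a frame, the same game code runs unmodified against the camera or against an archived problem scenario.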

Up the Garden Path Progress

I’ve done quite a lot of work on UTGP over the last couple of weeks.

To start with I made the course.

Then I used our 2019 line following code to time the course under vision control.  This code doesn’t know about junctions, so I made it work as follows:

  1. Set the odometer to count down a fixed distance and stop.  The distance set is 50mm short of the next junction.
  2. Follow the line until the odometer says stop.
  3. Drive 50mm without video guidance on the heading that was last specified by the line follower.
  4. Turn to the exit heading of the junction (as measured by the IMU).
  5. Drive 50mm without guidance on the new heading.
  6. Go to 1.
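The sequence above can be sketched as a loop over a per-junction list.  The robot API names here (odometer, follow_line, drive, turn_to) are invented for illustration:

```python
JUNCTION_APPROACH_MM = 50  # blind-drive distance either side of a junction

def run_course(robot, junctions):
    """Drive the course using vision between junctions and dead-reckoning
    across them.  junctions: list of (distance_to_junction_mm,
    exit_heading_deg) pairs, one per junction."""
    for distance, exit_heading in junctions:
        # 1-2. Follow the line until 50mm short of the junction.
        robot.odometer.set_countdown(distance - JUNCTION_APPROACH_MM)
        while not robot.odometer.expired():
            robot.follow_line()                # vision-guided
        # 3. Cross the junction blind on the last heading.
        robot.drive(JUNCTION_APPROACH_MM)
        # 4. Turn to the exit heading, measured by the IMU.
        robot.turn_to(exit_heading)
        # 5. Drive blind onto the new line before vision resumes.
        robot.drive(JUNCTION_APPROACH_MM)
```

This makes the odometry dependence explicit: every junction needs a pre-measured distance and heading, which is exactly what the junction-aware line follower is meant to remove.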

This method is a mixture of dead-reckoning and camera guidance so I’m not sure what points it would attract in the games.  However, it is a “time on the board”.  The best times are around 16 seconds, but it’s quite hard to measure.

I then set to work on updating the line following code to interpret junctions.  I’ll be back soon to explain how that’s going.