Minos 2010 was held last weekend on April 10th and 11th at Royal Holloway College, University of London. The event followed its familiar format of talks and presentations on the Saturday, a meal on Saturday evening and an informal micromouse contest on the Sunday…
With a good number of speakers and some interesting subjects, we were all in for an excellent and informative weekend. Below are summaries of the sessions presented. Where available, slides, links and other supporting materials are linked at the bottom of each section.
I do not yet have all these so check back in a week or so to pick up any missing content.
- Garry Bulmer – Robot friendly arduinos
- Dave Otten – Shiny walls and accelerometer progress
- Duncan Louttit – DC motors and encoderless odometry
- Peter Harrison – Contest Roundup
- Rob Probin – Debugging Techniques
- Ken Hewitt – Easier Maze Building
- David Hannaford – Tuning the maze solver
- Alan Dibley – Improved diagonal path generation
- Martin Barratt – Power Supplies
- Contest Results
First up this year was Garry Bulmer who brought us up to date with his work using the Arduino and simple robotics platforms in schools. If you have not yet come across the Arduino platform, it is well worth the time taken to give it the once-over. For very little money, you get a readily available, inexpensive, general-purpose microcontroller platform that can be connected straight to a host PC via USB and programmed directly from the extremely simple, free IDE. It is hard to overstate how easy and flexible this is in use. Follow some links below and see for yourself. The aim of the school projects is to get more school pupils engaged with technology. Many schools are more than willing to have these projects going on. In the Coventry area, they are being run in conjunction with Warwick University and are proving successful and popular. Perhaps more surprising is the interest shown by adults in the events organised around Birmingham. These give those unfamiliar with microcontroller technology the opportunity to see how easy it can be to get started and produce practical results in a very short time. A couple of these events have been organised in conjunction with fizzpop and there has been very high demand. Finally, Garry described some of the advantages of toaster-oven soldering. Switching to the use of surface-mount components and solder paste has allowed many more school pupils – and others – to produce working PCBs in a fraction of the normal time. These boards are found to be much more reliable and the technique allows more people to build boards without having to have dangerous soldering stations in the room.
David Otten made his way from the US once again to tell us about the effects of shiny walls and the trials and tribulations of working with accelerometers in a micromouse. Shiny walls are a problem. Although the Korean injection-moulded walls are commonplace, the various contests around the world use a variety of different types of wall and your micromouse sensors need to be able to cope with them all. The question of shininess had not been raised much until the Taiwan contest in 2009. For this contest, new walls were made locally and, while they were superficially similar to many other plastic walls, a number of mice had some trouble with them. Among those was David’s mouse. After some investigation, it was apparent that the distance measured by the sensor changed as the angle of the emitter to the wall varied around 90 degrees. Reflections from walls have two components – specular and diffuse. The specular component depends on how shiny the wall is and it is the type you were taught about at school, where the angle of reflection equals the angle of incidence. Mirrors work almost entirely through specular reflection. Thus, when you shine a torch at a mirror, it is not the mirror that lights up but the thing the light is reflected onto. Diffuse reflection is what you get with less shiny surfaces and it is the reason that the torch is useful everywhere except on mirrors. Shiny maze walls have more specular reflection than dull ones. Harjit Singh, in the US, was able to measure the amount of specular reflection from a variety of maze wall surfaces and David’s presentation shows these results and the effect they have on his sensors. The same type of sensor is now used in David’s timing sensors and he described how they are employed to reduce false signals when timing mice. With run times getting shorter and the differences between places also reducing, accurate timing has become something of an essential requirement.
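To illustrate the angle sensitivity David described, here is a toy model (entirely my own invention, not Harjit Singh's measurements): the sensor sees a diffuse component that barely changes near perpendicular, plus a specular lobe that falls away sharply as the emitter moves off the mirror angle.

```python
import math

def sensor_signal(angle_deg, k_diffuse=1.0, k_specular=3.0, lobe_width=5.0):
    """Toy wall-reflection model: a Lambertian diffuse term plus a narrow
    specular lobe. angle_deg is the deviation from perpendicular; all the
    constants are illustrative, not measured values."""
    diffuse = k_diffuse * math.cos(math.radians(angle_deg))
    specular = k_specular * math.exp(-(angle_deg / lobe_width) ** 2)
    return diffuse + specular

# On a shiny wall (large k_specular) the reading collapses within a few
# degrees of perpendicular, so the inferred distance shifts with angle.
for angle in (0, 5, 10):
    print(angle, round(sensor_signal(angle), 3))
```

For a dull wall, set `k_specular` near zero and the reading becomes almost independent of small angle changes, which is why the problem only showed up on the shinier Taiwanese walls.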
Finally, David described some of his further experiments with inertial navigation and the use of accelerometers. One of the biggest problems with these sensors seems to be noise. With as much as 3% of the signal being noise, it will take some figuring out to see how to make best use of the information contained in the signals. While it is still early days, there are promising results and the hope is that it will be possible to use accelerometers in conjunction with the other sensors to improve reliability and performance as the mechanical limitations of typical mice are reached.
Generally speaking, mice and small robots will use encoders on the wheels or motors to determine their speed and distance. Duncan Louttit has tried these and was dismayed by the effect they tended to have on the interrupt rate in his robots and the difficulty of getting a suitable encoder integrated into the drive chain at a reasonable cost. In principle, he reasons, with a DC motor running at constant speed, all you have to do is time your move and you know where you are. The problem here is the characteristic of the motor. DC motors respond to load changes poorly and will slow down. Now, to get the speed back up, you have to increase the current but, since the motor has a significant internal resistance, that means the applied voltage has to go up and you no longer know how fast you were travelling. All this awkwardness goes away if you have a perfect motor with no internal resistance. Duncan points out that it is not easy to buy negative resistors to cancel out the internal resistance so he has come up with a more cunning solution. In essence, the solution is to place a power amplifier around the motor with feedback that ensures the voltage across the motor remains constant as the load changes. This is all done in hardware using readily available components. Once configured, it is possible to drive the motor at very consistent speeds over a wide range of load conditions. In effect, the circuit makes the motor behave very much more like an ideal motor with very low internal resistance – perhaps as low as 1-5% of the actual motor resistance. The use of an integrator in the input also gives the benefit that a single logic line can turn on the motor, accelerate it at a known rate, run it at a pre-determined fixed speed and then, when the logic level is reset, decelerate it in a controlled fashion until it stops. The drive method was demonstrated on a test rig and a small robot and works very well indeed.
The robot very obediently performed open-loop turns of 90 degrees again and again. There is no gain without pain, however, and the drive circuit, still somewhat in its prototype stages, needs careful setup and can suffer from drift.
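The steady-state arithmetic behind the trick can be sketched as follows. The motor constants here are invented for illustration, and the current-feedback term is my assumed realisation of the resistance cancelling, not a description of Duncan's actual circuit:

```python
# Brushed DC motor at steady state: V = I*R + Ke*w, torque T = Kt*I.
R = 2.0     # winding resistance, ohms (illustrative)
Ke = 0.01   # back-EMF constant, V per rad/s (illustrative)
Kt = 0.01   # torque constant, Nm per A (illustrative)

def steady_speed(v_supply, load_torque, r_comp=0.0):
    """Steady-state speed in rad/s. r_comp models feedback that adds
    I*r_comp back onto the drive voltage, cancelling that much of the
    winding resistance - a 'negative resistor' in effect."""
    i = load_torque / Kt                  # current needed to hold the load
    return (v_supply - i * (R - r_comp)) / Ke

print(steady_speed(6.0, 0.0))                      # unloaded: ~600 rad/s
print(steady_speed(6.0, 0.005))                    # loaded: sags to ~500
print(steady_speed(6.0, 0.005, r_comp=0.95 * R))   # 95% cancelled: ~595
```

With 95% of the resistance cancelled, the load-induced speed sag shrinks from around 17% to under 1%, which is why timed open-loop moves become trustworthy.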
After lunch Peter Harrison ran through a contest report from the Taiwan, Japan and USA contests that took place over the last few months. There is not much to say here as these contests have been covered reasonably well elsewhere on the site. The presentation was, however, an opportunity for folk to get a better look at some of the photographs and to view a few high-definition videos which are, unfortunately, just too big to make available on this site. The presentation, for example, achieved a new personal record of 630MB with the videos included.
Switching from hardware to software, Rob Probin gave us an overview of a variety of debugging techniques. This is a very much under-appreciated part of the production process. Rob pointed out that we spend a lot of time teaching people good design and coding methodologies. This is done to improve the quality of the output and reduce the number of bugs present. However, it is probably not possible to create even simple systems that are truly bug-free. In spite of that, we fall very short on teaching engineers how to deal with the inevitable realities of the systems they are working on. The presentation is peppered with telling quotations and I have noted three that seem particularly apt:
“Beware of bugs in the above code; I have only proved it correct, not tried it.” – Donald Knuth
“As soon as we started programming, we found to our surprise that it wasn’t as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs.” – Maurice Wilkes
“Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?” — Brian Kernighan
From the relatively ephemeral to the fundamentally practical, Ken Hewitt took us up to the coffee break with a description of how he set about building a test maze. This is a surprisingly tricky thing to do well and the bigger you make it, the harder it will be. The practical tolerance for the maze is probably about 1mm. According to the oft-published rules, the permitted tolerance is larger than that. However, if you were to allow this error to accumulate over an entire full-size maze, you could find that the cells at one end were 16mm smaller than those at the other and your walls won’t fit. In fact, with the commonly available Korean walls, and their dovetailed joints, you could do with every cell being correct to less than 1mm. Ken’s method relies on one flat template, cut as accurately as possible, to lay out the first four post holes. This template could be cut by a local CNC-equipped workshop, as his was, or, if you make sure to arrange its safe return, you can borrow his. These four post holes have to be carefully drilled. While a hand-drill is adequate in terms of power, the chances of being able to hold it properly perpendicular to the base are slim, so a drill guide is made that sits in the holes in the template and ensures that the drilled holes are clean and straight. Ken recommends a brad-point wood-cutting bit for this job. Having aligned the first set of holes carefully with the board edges, the template can then be placed over any two holes and held in place with pegs that are a close fit in the template and the holes in the board. Repeated use of the technique will allow you to extend your maze as large as you like, knowing that each section will properly take a wall. Should you wish to extend the maze later – even onto another piece of material – it is a relatively easy matter to arrange.
Suitably refreshed after coffee and biscuits, David Hannaford took us back to the question of the maze-solving algorithm. While this can be done very simply, there is definitely some advantage in being able to do it quickly. Last year we had a fairly comprehensive look at a variety of techniques. Like sorting, it seems that the faster you want the maze solved, the more elaborate the algorithm must be. At first sight, it seems contradictory that code which can be hard to explain has enormous performance benefits over code that is blindingly obvious – but that is really a debugging story. David has been working on his already very quick solver, the fundamental question being – can it be made quicker? There are some clear performance gains proposed. These might be most usefully given as hypotheses in some cases since there is no obvious proof of correctness. Nonetheless, it seems clear enough that, when trying to find a route to a particular target, there is no need to keep solving once the flood has reached the current cell. That observation alone should save 50% of the time on average. It might be 75% since the distance is 50% and so the enclosed area would be 25% as big – I can’t decide. Much of the time, we are moving through a known section of the maze. That being so, we need not re-solve at all. Even if we are visiting a cell for the first time, there may be no new wall information so, again, we do not need to re-solve. If we do discover new walls, they may lie between cells that are already further from the goal than we are, in which case re-solving cannot change the route. These are all good ways to reduce the amount of time the mouse spends solving. However, when we do need to re-solve, it would be best to get it done quickly and this was the core of David’s approach.
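The 'stop as soon as you reach the current cell' idea drops straight out of a breadth-first flood from the goal. Here is a minimal sketch, using an assumed wall representation (a set of blocked cell-pairs) rather than anything from David's actual code:

```python
from collections import deque

SIZE = 16  # a standard maze is 16x16 cells

def open_neighbours(cell, walls):
    """Adjacent cells not blocked by a wall. `walls` is a set of frozenset
    cell-pairs - an assumed representation for this sketch."""
    x, y = cell
    for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= n[0] < SIZE and 0 <= n[1] < SIZE:
            if frozenset({cell, n}) not in walls:
                yield n

def flood_until(goal, current, walls):
    """Breadth-first distances from the goal, abandoned as soon as the
    mouse's current cell has been labelled - cells further out cannot
    lie on the shortest route from it."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        cell = queue.popleft()
        if cell == current:
            break                         # the early exit
        for n in open_neighbours(cell, walls):
            if n not in dist:
                dist[n] = dist[cell] + 1
                queue.append(n)
    return dist

print(flood_until((8, 8), (0, 0), walls=set())[(0, 0)])  # 16 in an open maze
```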
In essence, he suggests that local changes to the maze are unlikely to change most cells’ distances from the goal and so he proposes a method by which it should be possible to determine which cells need to be re-examined and limit the algorithm to just those cells. Now there will be occasions where big changes occur – for example, when the mouse finds itself at the end of a long dead-end. Under those circumstances the normal algorithm David uses should still obtain a correct solution in adequate time. So, how fast is fast enough? The conclusion was that it might be possible to re-solve in less than 1 millisecond on a dsPIC running at around 60MHz. In general, a combination of the techniques discussed might make it possible to achieve solutions on average ten times faster than now.
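One of the 'do we need to re-solve at all?' tests can be stated very compactly. With distances taken from a goal-rooted flood, a new wall between two cells can only lengthen routes through cells farther from the goal than both of them. This is my own formulation of the idea, not David's exact test:

```python
def wall_can_change_route(dist, current, a, b):
    """dist maps cells to flood distances from the goal (goal = 0).
    A new wall between cells a and b only matters if at least one of them
    is closer to the goal than the mouse's current cell; otherwise the
    route home is untouched and no re-solve is needed."""
    return min(dist[a], dist[b]) < dist[current]

# Hand-made distances along a straight corridor: goal at cell 0, mouse at 3.
dist = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5}
print(wall_can_change_route(dist, 3, 4, 5))  # False - wall behind the mouse
print(wall_can_change_route(dist, 3, 1, 2))  # True - wall on the way home
```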
As part of the solving theme last year, Alan Dibley presented a technique for converting routes from orthogonal to diagonal form. This is a bit of a tricky task. It is solved by mouse builders in a variety of ways but, as far as I know, none of them could be considered elegant. They are mostly brute-force pattern-matching exercises full of special cases. Alan has a method whereby a route is generated as a string of moves of the form ‘FRFFLRLRFF…T’. Each letter describes what happens in each cell and the ‘T’ represents the target cell. So far, this is just like what most folk do. For his twist in the tale, Alan converts this string into trinary – base 3 – digits and takes them in sets of three to look up values in one of two tables. The example above would become ‘0200121200…’. Taking the digits as three-digit trinary values, we can evaluate each set and look up the output value. For example, ‘020’ evaluates to the decimal value 6 and the sixth value in the look-up table is ‘R’. A little further on we see the sequence ‘FLR’ which becomes ‘012’ and evaluates to 5 to give an output of ‘A’, indicating a 45 degree turn to the left. Now we are travelling on a diagonal so we need to switch to the other table. The next sequence is ‘LRL’ which translates as ‘121’ and evaluates to 16 to give an output of ‘f’, indicating a forward diagonal move. Not all combinations can occur in a real maze and so some way will have to be found of checking that these sequences have been properly detected. The reader should note that this method makes assumptions about how the mouse makes turns and exactly what is meant by each type of move. It does, though, offer a much more elegant solution to the problem. There was a comment made that it is possible to do this looking only at pairs of moves if the original route was generated in such a way as to consider each 90 degree turn to be a sequence of two 45 degree turns. That is the method used by the PicOne micromouse.
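The lookup can be sketched in a few lines. The digit assignment (F=0, L=1, R=2) is inferred from the worked example above, but the tables below hold only the three entries mentioned in the talk; the full 27-entry tables are Alan's own:

```python
MOVE_DIGIT = {'F': 0, 'L': 1, 'R': 2}  # inferred from 'FRFFLRLRFF' -> '0200121200'

# Sparse stand-ins for Alan's two look-up tables, holding just the entries
# quoted above (one table for orthogonal travel, one for diagonal travel).
ORTHO_TABLE = {6: 'R',   # '020': 90 degree right turn
               5: 'A'}   # '012': 45 degree left turn onto a diagonal
DIAG_TABLE = {16: 'f'}   # '121': one forward step along the diagonal

def trinary_value(three_moves):
    """Treat three successive moves as a three-digit base-3 number."""
    a, b, c = (MOVE_DIGIT[m] for m in three_moves)
    return 9 * a + 3 * b + c

print(trinary_value('FRF'), ORTHO_TABLE[trinary_value('FRF')])  # 6 R
print(trinary_value('FLR'), ORTHO_TABLE[trinary_value('FLR')])  # 5 A
print(trinary_value('LRL'), DIAG_TABLE[trinary_value('LRL')])   # 16 f
```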
The final session of the day was a fairly comprehensive consideration by Martin Barratt of the means by which designers can provide power for their mouse. In days of yore the Earth was flat, the Sun went around the Earth and everyone used 7805 regulators. Things have moved on – at least as far as regulators are concerned. Martin started off with a look at some of the common types available now, from the good old 78xx in its huge TO220 package to the modern ultra-low-dropout devices in their alarmingly tiny surface-mount housings. Along the way we found out some of the advantages and disadvantages of these types, guidance on where they are useful and suitable warnings about their implementation. In particular, there was the question of choosing the right output capacitor as some of the low-dropout regulators have quite stringent requirements for that component. Generally, it was noted, for many of these it is safest to choose a tantalum part rather than a multi-layer ceramic. There are similar regulators designed to be stable with multi-layer ceramic capacitors so it pays to read the data sheets carefully before getting carried away. Once you have chosen your device, there is the question of where to put it and how to lay out your circuit board to keep everything happy. There was plenty of good advice on track widths, power dissipation and the routing of power lines around the board. The pros and cons of local, low-power regulators were discussed and we were told about the use of multi-layer boards to provide better shielding and decoupling and to make routing of power much simpler. All in all, an excellent introduction to the topic with many valuable pointers toward successful implementation of the power supplies in your next project.
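The power-dissipation advice comes down to one line of arithmetic: a linear regulator burns the whole voltage drop times the load current as heat. The figures below are generic illustrations, not from Martin's slides:

```python
def dissipation(v_in, v_out, i_load):
    """Heat dissipated in a linear regulator, in watts."""
    return (v_in - v_out) * i_load

# A 7805 dropping a 2-cell LiPo pack (7.4 V nominal) to 5 V at 300 mA:
print(dissipation(7.4, 5.0, 0.3))   # ~0.72 W - you will want the TO220 tab
# A low-dropout 3.3 V part fed from a single cell (3.7 V) at the same load:
print(dissipation(3.7, 3.3, 0.3))   # ~0.12 W - fine in a tiny package
```

The same sum shows why a low-dropout device in a tiny package is acceptable where a 78xx would need heatsinking, and why the input-output headroom matters as much as the current rating.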
Martin’s was the last presentation of the day and I should take the trouble to thank everyone who, once again, gave their time and expertise to make this such a valuable event for the budding (and not so budding) small robot builder.
Dinner at the Village Bar and Grill in Egham was a suitably pleasant experience. More so perhaps than in previous years since the change in venue allowed us to be seated in a much more acoustically suitable space where we were not reduced to shouting at each other to overcome the echoes. I for one greatly enjoyed my meal and look forward to using them again in the future.
Following a night’s sleep sadly disturbed by the elves that crept into my room and tightened the waistband on my trousers, we assembled for a friendly competition. Friendly does not mean we were not competitive though!
In keeping with tradition, we had a wall-follower contest and a maze-solver contest. The wall followers, although not numerous, have improved their performance significantly. The entries from Bernard Grabowski were very good.
The maze-solver event provided its share of excitement with 11 entries. Of those, four unaccountably failed to reach the middle in the heats, while the top three placed at this stage were:
- Derek Hall’s MouseX with a best run of 8.89s
- David Otten’s MITEE mouse 11 with 6.94s
- Peter Harrison’s Decimus with 6.77s
All eleven entries got another chance for the final and ran in traditional reverse order. This was (probably) the maze used for the final. There may be a small error in the top half.
The table below shows the results along with the heat times:
| Mouse | Heat | Best | Run times |
| --- | --- | --- | --- |
| It doesn’t have a name X (KEN) | retired | retired | – |
| It doesn’t have a name (KEN) | 45.78 | 46.03 | 46.03 |
| MITEE mouse 11 | 6.94 | 5.25 | 40.98, 40.80, 7.59, 6.31, 5.99, 5.25 |
For all the usual reasons, several mice had trouble completing either the maze or a speed run. Soon enough, we came to the top three mice to run. MouseX put in a good performance but was, unfortunately, unable to complete more than one speed run, leaving an impressive 6.45 seconds to beat. Next up was David Otten’s MITEE 11. In typically deliberate fashion, it searched the maze until happy and then started on its series of speed runs. The second of these posted a time of 6.31 and MITEE 11 went into the lead. It still had some margin left and managed two further timed runs, leaving the time at a daunting 5.25 seconds. Decimus, placed best in the heats, was last to run. For a change, the maze configuration favoured its search algorithm and the first run to the centre took only 22 seconds. Strangeness in the search resulted in a posted second run time of 37.64 seconds. No idea what happened there. Then it was time for the speed runs. The first try got 6.80 seconds. The next improved the time to 6.27 seconds, guaranteeing second place – but could it go any faster? The code has at least two faster runs set up in it. Decimus managed the first of these to give a time of 5.78 seconds. It was now well within range of being able to beat MITEE but – well, Decimus has never managed a successful run at its top speed setting and the previous time was only achieved after several crashes at that speed setting. I am beginning to think it just is not worth doing the slower speed runs sometimes. Under full scoring rules, MouseX would have beaten Decimus because it achieved a marginally slower run time but did it much sooner, so the search penalties would be lower. Before the summer contest I must find out how to make it run at more than 4m/s/s acceleration.
So, that was MINOS for another year. I had a very enjoyable weekend and I believe the other participants did as well. We have plenty of room so come along next year and make it bigger and better.
Last but by no means least, I would like to thank Adrian Johnstone for making it possible and Janet Hales for doing all the work arranging rooms, meals and refreshments.