This post details the development of a brain-controlled robot that I built for my MEng Group Project. A video I posted to give an overview of how it works can be seen below:
Contents
- Part 1 – Project Background
- Part 2 – Platform Specifications
- Part 3 – Implementation of basic brain-control
- Part 4 – Implementation of multi-directional control
Part 1 – Project Background
As stated above, this post will first provide some background information about the project before moving on to the more interesting bits! The MEng Group Project’s aim was to develop a system of robots that would work together to achieve some end goal. A major component in the assessment of this project was how well the group were able to demonstrate advanced technical innovation and capability. This, coupled with my recent research into neuro-robotics and Brain-Computer Interfaces (BCI), prompted me to propose that the group use a low-cost commercial BCI headset as a control interface for the robot that I was working on.
Part 2 – Platform Specifications
The robot itself was a Lynxmotion Tri-Track [1]. Some technical specifications for the particular platform used in this project are shown below:
- Microcontroller Unit – Bot-Board II with a Basic Atom Pro 28
- Motor Driver Board – Sabertooth 2X5 R/C
- Primary Communications – BlueSMiRF Gold Bluetooth Module
- Backup Communications – Wireless PS2 Remote Controller (supplied by Lynxmotion)
Figure 1 – Lynxmotion Tri-Track Robot
The control interface for this robot was a LabVIEW VI which took input from a Logitech Attack3 Joystick, processed it, then sent a single ASCII character via Bluetooth to the Tri-Track. The BAP-28 processor interpreted these characters as commands to move forwards, backwards, left, right or stop. When any of these was received, a subroutine was called within the BASIC code that set the speed of both motors until a different command was received.
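To make the command handling a little more concrete, below is a rough sketch of that logic written as Arduino-style C++. The actual firmware was BASIC running on the Basic Atom Pro; the speed values and the setMotors() helper are invented for illustration, and the command characters are the ones listed later in this post.

```cpp
// Illustrative sketch only - the real firmware was BASIC on the Basic Atom Pro.
// The speed values and the setMotors() helper are placeholders for this example.

void setMotors(int leftSpeed, int rightSpeed);   // assumed motor-driver helper

void setup() {
  Serial.begin(9600);                            // Bluetooth link from the control PC
}

void loop() {
  if (Serial.available() > 0) {
    switch (Serial.read()) {                     // single ASCII command character
      case 'F': setMotors( 100,  100); break;    // forward
      case 'B': setMotors(-100, -100); break;    // backward
      case 'L': setMotors(-100,  100); break;    // rotate left
      case 'R': setMotors( 100, -100); break;    // rotate right
      case 'Z': setMotors(   0,    0); break;    // stop
      default:  break;                           // ignore anything else
    }
  }
  // The last speeds set persist until a different command is received.
}

// Stub so the sketch compiles stand-alone; the real subroutine drove the
// Sabertooth motor controller.
void setMotors(int leftSpeed, int rightSpeed) { (void)leftSpeed; (void)rightSpeed; }
```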
Below is a video of a Joystick control test session. Note that the wrist servo was unable to support itself in an upright position; I am not entirely sure what the issue was, as I was responsible for the control interface aspects of this robot, but thankfully my colleague was able to rectify it before the deadline 😀
Part 3 – Implementation of basic brain-control
After some initial research, I concluded that the Neurosky Mindwave or Mindwave Mobile would be a suitable device, as they are cost-effective at approximately £130 and provide EEG data at a resolution appropriate for the project. We selected the standard Mindwave as it uses RF rather than Bluetooth; however, the UK supplier did not have any in stock, and as we were working to a tight deadline we decided to order the Mindwave Mobile instead.
Figure 2 – Neurosky Mindwave Mobile Headset
The reason we had originally decided on the standard Mindwave was that the control PC was already using Bluetooth to communicate with the Tri-Track, and the group had previously tried (unsuccessfully) to use multiple Bluetooth devices with the same PC. This proved to be an issue, as it potentially meant that we would need a second PC to receive the Mindwave data and then stream it over a LAN to the main PC, which would have been a cumbersome solution.
The solution we ended up using was to effectively create a second Bluetooth serial port by connecting an Arduino UNO R3 to the PC and wiring a BlueSMiRF Silver into the serial pins on the Arduino. Initially, I had trouble connecting the Mindwave to the BlueSMiRF, but eventually managed to set the BlueSMiRF up in master mode. This meant that when the Arduino was powered up, the Bluetooth module would search for the MAC address of the Mindwave headset and initiate a connection when it was detected.
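For anyone wanting to replicate this, below is a minimal sketch of how the BlueSMiRF could be configured from the Arduino, assuming the RN-42 command set used by the BlueSMiRF Silver. The pin choices, MAC address and PIN code are placeholders rather than the exact values from the project.

```cpp
#include <SoftwareSerial.h>

// Placeholder wiring: BlueSMiRF TX -> pin 2, BlueSMiRF RX -> pin 3.
// (The project wired the module to the hardware serial pins; adjust to suit.)
SoftwareSerial bt(2, 3);

// Placeholder address - substitute the MAC of your own Mindwave headset.
const char MINDWAVE_MAC[] = "0013EF001122";

void setup() {
  bt.begin(115200);          // BlueSMiRF Silver factory default baud rate
  delay(1000);

  bt.print("$$$");           // enter command mode (module replies "CMD")
  delay(1000);

  bt.print("SM,3\r");        // auto-connect master mode: connect on power-up
  delay(100);
  bt.print("SR,");           // store the headset's MAC as the remote address
  bt.print(MINDWAVE_MAC);
  bt.print("\r");
  delay(100);
  bt.print("SP,0000\r");     // pairing PIN (0000 is the usual Mindwave Mobile PIN)
  delay(100);
  bt.print("R,1\r");         // reboot the module so the settings take effect
}

void loop() {}
```

Once the module reboots with these settings it should seek out the headset and connect automatically each time the Arduino is powered up.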
Once I had solved the issue of connecting the headset, I conducted some experimentation to work out the best control strategy. From this, the “attention” value seemed relatively easy to modulate with some practice, and so looked suitable as a way to trigger a stop/go signal. I found that if I focussed hard on something, particularly a visual stimulus such as an LED, I could get the attention value to spike considerably, and then by relaxing and letting my thoughts drift it would drop. I determined that an attention threshold of 60% would be best for triggering movement, as I could easily get the value to jump between about 45 and 70.
The next step was to write some code to run on the Arduino which would act as a middleman between the headset and LabVIEW. To do this, I took the sample code supplied by Neurosky and modified it by adding some simple ‘if’ statements to check the “Attention” value from the headset. If the value was less than 60%, an ASCII character “Z” would be output over serial into LabVIEW, where it would be decoded to mean “transmit a stop command”. When the value crossed the trigger threshold of 60%, an “F” was output, which corresponded to “transmit a forward command”.
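Below is a minimal sketch of that middleman logic, assuming a readAttention() helper that wraps the packet parsing from Neurosky’s sample code (not reproduced here) and returns the latest attention value, or -1 when no new value is available.

```cpp
// Minimal sketch of the attention-threshold "middleman". readAttention() is an
// assumed helper standing in for Neurosky's sample packet parser; it returns
// the latest attention value (0-100), or -1 if no new value has arrived.

const int ATTENTION_THRESHOLD = 60;

int readAttention();                  // assumed helper

void setup() {
  Serial.begin(9600);                 // serial link to the PC running LabVIEW
}

void loop() {
  int attention = readAttention();
  if (attention < 0) return;          // no fresh reading yet

  if (attention >= ATTENTION_THRESHOLD) {
    Serial.print('F');                // "transmit a forward command"
  } else {
    Serial.print('Z');                // "transmit a stop command"
  }
}

// Stub so the snippet compiles stand-alone.
int readAttention() { return -1; }
```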
Once I had tested this and verified that it worked, I set out to take it beyond basic stop/start control and implement directional control!
Part 4 – Implementation of multi-directional control
The main issue with implementing multi-directional control was how we would actually go about triggering directional commands.
A look through the Neurosky documentation gave me the idea of using blink detection to cycle through a list of different commands; however, the blink detection implemented by Neurosky appeared to be too sensitive for our purposes. It seemed very accurate, but if we had used it there would probably have been a lot of false triggers caused by the headset detecting normal blinking.
In order to make sure that the commands would only cycle when the user really meant to, I decided to use the signal quality value rather than the blink value. The theory behind this was that, if the operator really forced a blink, there would be some movement of the forehead skin, thus causing a momentary degradation of signal quality, which could then be interpreted as a forced blink trigger.
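As an illustration of the idea, here is one plausible way to turn that momentary quality dip into a single trigger, assuming a readPoorSignal() helper that returns the headset’s signal-quality value (0 meaning good contact, higher meaning worse). The threshold, debounce window and edge detection here are my own placeholders rather than the exact approach used in the project; the function is used by the main loop sketched later in this section.

```cpp
// One plausible way to detect a "forced blink" from a momentary drop in signal
// quality. readPoorSignal() is an assumed helper returning the poor-signal
// value (0 = good contact, higher = worse); the threshold, debounce window and
// edge detection are placeholders, not the project's exact values.

const int POOR_SIGNAL_THRESHOLD = 50;
const unsigned long DEBOUNCE_MS = 750;

int readPoorSignal();                 // assumed helper

bool forcedBlinkDetected() {
  static bool wasPoor = false;
  static unsigned long lastTrigger = 0;

  bool isPoor = readPoorSignal() > POOR_SIGNAL_THRESHOLD;
  bool trigger = false;

  // Trigger only on the transition from good to poor signal, and no more than
  // once per debounce window, so a single blink produces a single trigger.
  if (isPoor && !wasPoor && (millis() - lastTrigger) > DEBOUNCE_MS) {
    trigger = true;
    lastTrigger = millis();
  }
  wasPoor = isPoor;
  return trigger;
}

// Stub so the snippet compiles stand-alone.
int readPoorSignal() { return 0; }
```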
Initially, I thought we could have the following cycle of commands: Fwd > Back > Left > Right > Stop. However, I realised that because the signal-quality detection wasn’t foolproof, this might lead to skipping from, say, forward to left when you really wanted to go backwards. To minimise these effects, I placed a stop between each direction, so the sequence became: Fwd > Stop > Back > Stop > Left > Stop > Right > Stop
Whilst this added time when cycling through commands, it seemed a worthwhile trade-off.
The algorithm for checking the signal quality and keeping track of the currently selected direction was written using more ‘if’ statements and built into the Arduino program from the previous stage. The attention-level processing worked the same as before, except that instead of outputting an “F” when the attention level was greater than 60%, it output the value of a variable called “currentDirection”, which held the currently selected direction. This meant that LabVIEW would receive one of five commands (a rough sketch of the logic follows the list below):
- Z – Stop
- F – Forward
- B – Backward
- L – Rotate left
- R – Rotate right
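Below is a rough sketch of how this stage fits together, reusing the assumed readAttention() and forcedBlinkDetected() helpers from the earlier snippets; the real project implemented the same behaviour with plain ‘if’ statements inside the one Arduino program.

```cpp
// Rough sketch of the direction-cycling logic combined with the attention
// threshold. readAttention() and forcedBlinkDetected() are the assumed helpers
// sketched earlier; stubs are included so the snippet compiles on its own.

const int ATTENTION_THRESHOLD = 60;

// A stop is placed between each direction to limit the damage of a false trigger.
const char SEQUENCE[] = {'F', 'Z', 'B', 'Z', 'L', 'Z', 'R', 'Z'};
int sequenceIndex = 7;                          // start on Stop, so the first blink selects Forward
char currentDirection = 'Z';

int  readAttention();                           // assumed helper
bool forcedBlinkDetected();                     // assumed helper

void setup() {
  Serial.begin(9600);                           // serial link to the PC running LabVIEW
}

void loop() {
  // A forced blink advances to the next command in the cycle.
  if (forcedBlinkDetected()) {
    sequenceIndex = (sequenceIndex + 1) % (int)sizeof(SEQUENCE);
    currentDirection = SEQUENCE[sequenceIndex];
  }

  int attention = readAttention();
  if (attention < 0) return;                    // no fresh reading yet

  // Above the threshold, send the currently selected direction; below it, stop.
  if (attention >= ATTENTION_THRESHOLD) {
    Serial.print(currentDirection);
  } else {
    Serial.print('Z');
  }
}

// Stubs so the snippet compiles stand-alone.
int  readAttention()       { return -1; }
bool forcedBlinkDetected() { return false; }
```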
A video showing blink detection testing is below:
That’s it for the article – hopefully someone finds it interesting or helpful! If you have any questions feel free to post them below 😀
The last parts of this post are a diagram of the NI LabVIEW Front Panel layout, followed by a couple of references. Thanks!
Figure 3 – Final Version of the LabVIEW Front Panel
References
[1] – Lynxmotion PS2 Combo Kit – http://www.lynxmotion.com/c-121-ps2-combo-kit.aspx
Figures
- Lynxmotion Tri-Track Robot Isometric View – Sourced From: http://www.lynxmotion.com/images/jpg/ttrk01.jpg
- Neurosky Mindwave Mobile – Sourced From: http://neurosky.com/Images/MWM/Product.png