Proposal how to prove the navigation correction by sensors
Proposal how to prove the navigation and positioning correction by odometry, compass, or other sensors (standardized):
materials and methods:
NXT tribot with 2 Lego encoder motors (ports B, C) and a compass sensor mounted about 30 cm above them; the compass must be mounted strictly horizontally.
a 1 m² square marked on the floor with two different floorings (triangle ABC: parquet, triangle CDA: carpet) and clearly detectable vertices, labeled A, B, C, D clockwise.
goniometer, ruler
programming language: NXC
Experimental setup:
put the tribot on point A, targeting point B;
heading A->B defined as 0 degrees,
heading B->C defined as 90 degrees,
heading C->D defined as 180 degrees,
heading D->A defined as 270 degrees.
write an NXC program that starts the robot and can control each single motor by speed and direction, without using sync motor functions.
you may read the encoder values of both motors and use them to determine the length of the driven path, but you must not use them to correct driving or targeting in any direction (of course you may stop the motors if the encoders indicate that you have finished a 1 m stretch).
you may read the compass sensor values and change the speed of each motor according to the compass results, in order to move straight ahead and/or to turn at each vertex.
you must not interfere manually in any way.
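The encoder-stop rule above needs a target tick count for a 1 m stretch. Since the Lego encoders give 360 ticks per wheel turn, that count follows from the wheel circumference. A minimal sketch in plain C (NXC shares this arithmetic); the 5.6 cm wheel diameter is a hypothetical value, measure your own wheels:

```c
#include <assert.h>

/* Hypothetical wheel diameter in cm; measure your own wheels. */
#define WHEEL_DIAMETER_CM 5.6
#define PI 3.14159265358979

/* Encoder ticks (360 per wheel turn) needed to cover a given distance. */
long ticks_for_cm(double cm) {
    double circumference = PI * WHEEL_DIAMETER_CM;    /* ~17.6 cm */
    return (long)(cm / circumference * 360.0 + 0.5);  /* round to nearest */
}
```

In the run itself the program would simply poll the motor rotation counts until they reach ticks_for_cm(100) and then stop the motors, which is exactly the one encoder use the rule permits.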
mission:
drive 4 times round the square;
move exactly 1 m ahead each time, then turn 90° on the spot, targeting the next point (vertex);
approach the points A, B, C, D as closely as possible;
stop the moment the robot has calculated that it should finally have reached the start/stop point A.
repeat the mission going the other way round (A, D, C, B).
data recording:
record the deviation at each targeted point (magnitude of the difference vector);
record the deviation (magnitude of the difference vector) and
record the last taken heading (D->A) when you reach the start/stop point A after the 4th round.
Publish your NXC program, a photograph of the robot, and your recorded results.
comparison: same materials and methods, same mission, and same data recording, but:
(1) no compass reading allowed; motor encoder data may be used for motor control, and using the sync motor function is allowed.
(2) compass reading, reading of motor encoder data, and use of the sync motor function are all allowed for motor control.
(3) alternatively, use a different sensor instead of the compass.
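Rule set (0) lets compass readings adjust each motor's speed to hold a heading. One way such a correction might look, sketched in plain C; the gain KP and base power BASE are hypothetical values that would need tuning on the robot, and the wrap-around handling is needed because a compass reports 0-359°:

```c
#include <assert.h>

#define KP    2   /* hypothetical proportional gain */
#define BASE 50   /* hypothetical base power level */

/* Wrap an angle difference into [-180, 180). */
int wrap180(int deg) {
    while (deg >= 180) deg -= 360;
    while (deg < -180) deg += 360;
    return deg;
}

/* Compute per-motor power so the robot steers back toward the target
   heading (0, 90, 180 or 270 for the four legs of the square). */
void steer(int target, int heading, int *powerB, int *powerC) {
    int error = wrap180(target - heading);
    *powerB = BASE + KP * error;
    *powerC = BASE - KP * error;
}
```

For example, a heading of 350 while targeting 0 gives an error of +10, not -350, so the robot makes a small correction rather than spinning nearly a full turn.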
Last edited by HaWe on 19 Oct 2010, 16:00, edited 1 time in total.
Re: Proposal how to prove the navigation correction by sensors
Looks like an interesting test and seems like a good set of rules. Unfortunately I don't have a suitable mix of carpet and another surface (though I agree that it is a great idea to use a mix if possible). Also, at the moment I'm way too busy trying to get a new release of leJOS completed to use any system other than leJOS (and the code I've been running is really only to test some new stuff I've been adding recently - ahem)... But I look forward to seeing the results that people get...
One thing I would like to clarify about the rules, though. You mention controlling the motor speed and direction but not using the syncdrive feature. My understanding is that the default NXC motor control (without sync) will still use the tachometers to regulate the speed of the individual motors. Is that what you intended? I have no problem with this because I think it makes sense to use the tachometers for motor control, but I just wanted to check if this was what you intended...
Oh and for me by far the most interesting option is 2 (and 3) my major interest here is getting the best possible control of the robot irrespective of what sensors and motor control systems are used...
Good luck...
Andy
Re: Proposal how to prove the navigation correction by sensors
I may try option (1), as I do not own a compass. What is your deadline?
Also, how do you record deviation and deviations at each targeted point?
LEGO things http://ni.fr.eu.org/lego/ - NXT Improved Firmware (GCC) http://nxt-firmware.ni.fr.eu.org/ - Other robots http://apbteam.org
Re: Proposal how to prove the navigation correction by sensors
Just a couple of observations:
doc-helmut wrote: you may read the compass sensor values and you may change motor speeds of each motor due to the compass results in order to move straight ahead and/or to turn at each vertex.

Are we allowed to use any other sensors for this test of the compass? My concern is that the compass is incapable of measuring distance, so the robot has no way of knowing when to stop on the straight sections.
doc-helmut wrote: mission:
drive 4 times round the square;
move each time exactly 1m ahead and then turn 90° on the spot targeting the next point (vertex);
move to the points A, B, C, D as close as possible;
stop in the moment when the robot has calculated that it should have finally reached the start/stop point A at the end.

I think there is a danger that someone will artificially calibrate out any local environmental effects in a way that will not lead to a useful navigation system in any practical terms. For instance, I imagine that with a lot of trial and error someone could do a pretty good job of getting the robot to go round very accurately using only the timing functions. This is fine for going round a square, but would not be a practical solution for the navigation required to, say, find its way round a house.
Perhaps we need to get an idea for how long it would take someone to re-configure the robot to go round a different path.
Matt
Re: Proposal how to prove the navigation correction by sensors
This sounds like a great challenge. A few comments...
First of all I would like to suggest an alternative robot base. The TriBot, with its small caster wheel, has some limitations. The HiTechnic Trike Base is also very simple to build and can be built from either a 1.0 or 2.0 set. The large caster wheel makes it capable of driving over fairly rough terrain.
Look for Trike roughly halfway down on this page: http://www.hitechnic.com/models
The Compass sensor works best when level and calibrated (and with minimal interference from metal beams, pipes, conduits, ovens, etc.). Here is an NXC program to calibrate the compass. It assumes one drive motor is OUT_B and that the compass is on S4. Change these constants as necessary. The program will display the compass value on the screen until the Enter button is pressed. Then it will go into a calibration mode and attempt to turn around for 20 seconds. It should turn at least 360 degrees for proper calibration. After the calibration it will indicate whether the calibration was successful or failed. If it failed, for example if the robot did not turn enough, the sensor calibration was not altered.
After calibration I suggest manually checking how successful it was by examining the sensor at the four compass points: 0, 90, 180, and 270. I usually put down a LEGO plate to correspond to the 0 heading and then turn the robot by hand to the four compass points and make sure that the robot lines up to the edges of the plate. If not, I double check the level of the compass and recalibrate.
//=====================================================================
// Calibrate compass
//
// Compass on S4.
//
// Assume that one drive motor is on OUT_B. Will power this one motor
// to spin the robot in a circle while calibrating.

#define DRIVE_MOTOR OUT_B
#define COMPASS     S4

//=====================================================================
// bool HTCompassCalibrateStart(const byte & port)
//
// Set HiTechnic Compass to Calibrate mode. While in Calibrate mode,
// the robot should slowly turn more than 360 degrees for a duration of
// about 20 seconds. Call HTCompassCalibrateStop to stop calibration
// mode.
//
bool HTCompassCalibrateStart(const byte & port)
{
  int count;
  byte inI2Ccmd[];
  char outbuf[];

  ArrayInit(inI2Ccmd, 0, 3);
  inI2Ccmd[0] = 0x02;
  inI2Ccmd[1] = 0x41; // Mode:
  inI2Ccmd[2] = 0x43; // Calibrate
  count = 0;
  return I2CBytes(port, inI2Ccmd, count, outbuf);
}

//---------------------------------------------------------------------
// bool HTCompassCalibrateStop(const byte & port)
//
// Stop the Calibrate mode that was started by HTCompassCalibrateStart.
// Returns true if the calibration was successful.
//
bool HTCompassCalibrateStop(const byte & port)
{
  int count;
  byte inI2Ccmd[];
  char outbuf[];
  bool bSuccess;

  if (port > IN_4)
    return false;
  ArrayInit(inI2Ccmd, 0, 3);
  inI2Ccmd[0] = 0x02;
  inI2Ccmd[1] = 0x41; // Mode
  inI2Ccmd[2] = 0x00; // Normal Read
  count = 0;
  bSuccess = I2CBytes(port, inI2Ccmd, count, outbuf);
  if (bSuccess) {
    Wait(100);
    // Read back location 0x41 to see if calibration was successful
    ArrayInit(inI2Ccmd, 0, 2);
    inI2Ccmd[0] = 0x02;
    inI2Ccmd[1] = 0x41; // Mode
    count = 1;
    bSuccess = I2CBytes(port, inI2Ccmd, count, outbuf);
    if (bSuccess) {
      bSuccess = (outbuf[0] == 0);
    }
  }
  return bSuccess;
}

//=====================================================================
// Sample program to recalibrate the compass sensor. Shows the current
// compass value on screen. If the user presses the Enter button, the
// program activates one drive motor (OUT_B) to spin the robot while in
// compass calibration mode. The result of the calibration, success or
// failure, is displayed on the screen afterwards.
//
task main()
{
  int heading;
  bool bSuccess;

  SetSensorLowspeed(COMPASS);
  Wait(100);
  TextOut(0, LCD_LINE1, "HiTechnic");
  TextOut(0, LCD_LINE2, " Compass Sensor");
  TextOut(15, LCD_LINE8, "[Calibrate]");
  while (true) {
    heading = SensorHTCompass(COMPASS);
    TextOut(0, LCD_LINE4, "Heading: ");
    NumOut(8*6, LCD_LINE4, heading);
    // Check for the Enter button
    if (ButtonPressed(BTNCENTER, false)) {
      // Wait for release
      while (ButtonPressed(BTNCENTER, false));
      TextOut(0, LCD_LINE4, "Calibrating...");
      // Start turning the robot
      OnFwd(DRIVE_MOTOR, 30);
      HTCompassCalibrateStart(COMPASS);
      // Calibrate while the robot is turning
      Wait(20000);
      // Stop calibration
      bSuccess = HTCompassCalibrateStop(COMPASS);
      Off(DRIVE_MOTOR);
      if (bSuccess) {
        PlaySound(SOUND_UP);
        TextOut(0, LCD_LINE4, "Cal. success!");
      } else {
        PlaySound(SOUND_LOW_BEEP);
        TextOut(0, LCD_LINE4, "Cal. failed.");
      }
      Wait(5000);
    }
    Wait(100);
  }
}
Regarding measuring distance. The original rules did say the motor encoders can be read. Distance can be determined simply by adding the two encoders.
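Gus's suggestion of adding the two encoders amounts to averaging them, which estimates the distance traveled by the robot's midpoint. A small check of that arithmetic in plain C; the 5.6 cm wheel diameter is a hypothetical value (measure your own wheels), and 360 ticks per wheel turn matches the Lego encoders:

```c
#include <assert.h>
#include <math.h>

#define WHEEL_DIAMETER_CM 5.6   /* hypothetical; measure your wheels */
#define PI 3.14159265358979

/* Distance (cm) of the robot's midpoint, from both encoder counts. */
double distance_cm(long ticksB, long ticksC) {
    double turns = ((double)(ticksB + ticksC) / 2.0) / 360.0;
    return turns * PI * WHEEL_DIAMETER_CM;
}
```

Note that the average also behaves sensibly while turning on the spot: equal and opposite tick counts give a midpoint distance of zero.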
I predict that when care is taken, you can get fairly accurate navigation with the Compass sensor.
Gus
Re: Proposal how to prove the navigation correction by sensors
gusjansson wrote: Regarding measuring distance. The original rules did say the motor encoders can be read. Distance can be determined simply by adding the two encoders.

They can be read, but apparently can't be used to "correct driving":

doc-helmut wrote: you may read encoder values of both motors, and you may use them to determine the length of the driven path, but you must not use them to correct driving or targeting to any direction.

My interpretation of that statement is that even though you can calculate the distance, you're not allowed to use it to alter driving, i.e. by using it to decide when the robot stops.
Perhaps some clarification from doc-helmut is required.
Matt
Re: Proposal how to prove the navigation correction by sensors
hi,
thanks for your interest!
The ride (4x round the square) has to be taken in one run, without intermediate reconfiguring related to external sources or reference points, except the earth's natural magnetic field (i.e., magnetic North).
(Of course it is allowed to turn twice on the spot to activate the built-in calibration routine of the HT magnetic sensor, or to stop for a few seconds in between so that measurements can be taken.)
checking compass controlled navigation accuracy:
you may use the number of encoder ticks (e.g., multiplied by the wheel circumference and so on) to calculate the driven length of the path, but you must not do anything more with them.
the carpet/parquet (or linoleum etc.) test ground is easy to construct:
lay a carpet (e.g., a bathroom rug) and a piece of thin wooden board or linoleum side by side and use this contact line to determine points A and C (the diagonal of the square).
checking the deviation (distance aberration) at each target point: just measure the absolute distance (cm) from the robot's turning point to the real target point with the ruler.
checking the heading aberration is only needed at the end (e.g., the difference angle to 270°, taken with a goniometer).
Did I miss any open question or problem?
Last edited by HaWe on 19 Oct 2010, 09:57, edited 2 times in total.
Re: Proposal how to prove the navigation correction by sensors
Thanks for the clarifications. I'm still not sure about speed control of the motors (which will use the tachometers inside the firmware), though... Can you set the speed of the motors individually (no sync), or are you only allowed to set a power level (perhaps someone more familiar with NXC can explain this in NXC terms)? This is for test 0 (your original test); obviously it is allowed in tests 1 and 2. Maybe I'm just being a little slow on this...
Re: Proposal how to prove the navigation correction by sensors
the main test (0) is for just a compass: so no sync allowed. But of course you may stop the motors when the encoders indicate that you have reached the end of your 1 m stretch. And of course you may ramp your motors up or down as much as you like, but only in your own code, preferably not via built-in firmware functions (for safety, because I'm not sure how they interfere or whether they have unexpected side effects).
comparison (1) is only for odometric control: so all is allowed like speed control, simple P control, PID control or whatever you like - based on both encoder values.
comparison (2) is the "synthesis" of using both inputs: encoder and compass.
This you may call the ultimate test: Can a compass improve encoder readings and odometric control, if used additionally to odometry?
This part can be done in at least two ways:
a) minor approach: use compass headings only at the turning points to realign heading values, and use only the built-in synchronization modules like sync_BC for going straight, regardless of compass values
(that's actually what I did in my posted RobotC code so far)
b) major approach: use compass headings simultaneously also for move correction while going straight ahead (in this case it would obviously be useful not to use the built-in synchronization modules but to write your own heading correction function, maybe also an intelligent filter for both, like a KF or PF)
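The "intelligent filter" hinted at in (b) need not be a full KF or PF; even a simple complementary filter, which lets the compass slowly pull accumulated drift out of the odometric heading estimate, illustrates the fusion idea. A sketch in plain C; ALPHA is a hypothetical smoothing constant, and the wrap handling ensures 350° and 10° are treated as 20° apart, not 340°:

```c
#include <assert.h>
#include <math.h>

#define ALPHA 0.95   /* hypothetical: trust odometry short-term */

/* Wrap an angle difference into [-180, 180). */
static double wrap180d(double deg) {
    while (deg >= 180.0) deg -= 360.0;
    while (deg < -180.0) deg += 360.0;
    return deg;
}

/* Nudge the odometric heading estimate toward the compass reading,
   returning a fused heading normalized to [0, 360). */
double fuse_heading(double odo, double compass) {
    double h = odo + (1.0 - ALPHA) * wrap180d(compass - odo);
    while (h >= 360.0) h -= 360.0;
    while (h < 0.0)    h += 360.0;
    return h;
}
```

Run each control cycle, this keeps the smooth short-term behavior of the encoders while the compass corrects long-term drift; a KF or PF refines the same idea with a principled choice of the blending weight.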
And I want to encourage you: just go ahead and make your tests!
The reason why we do it is to get objective unprejudiced results we can make some decisions of.
We can discuss the results later and see if they are reliable or maybe there are undiscovered side effects which have to be erased.
And last but not least: watching a video of the runs would be great of course!
Last edited by HaWe on 19 Oct 2010, 15:13, edited 1 time in total.
Re: Proposal how to prove the navigation correction by sensors
concerning the tribot base:
I think everyone may choose his favourite robot base. I personally took a much larger robot platform and used the big RCX wheels (so it could move more easily over bumps on the floor). I don't think this will affect the comparison results, and overall it would be best if everyone runs both test 0 and tests 1 or 2 himself with the same model anyway.