Tiantian Li
Hi! I am Tiantian, a junior studying CS at Cornell University. This webpage documents my lab work in ECE 4160 Fast Robots, taken in the Spring 2023 semester.
This course focuses on systems level design and implementation of dynamic autonomous robots. We design a fast autonomous car and explore dynamic behaviors, acting forces, sensors, and reactive control on an embedded processor, as well as the benefit of partial off-board computation, low latency software, and noise tolerant implementation [ECE4160 Website].
In Lab 1, I perform basic setup and gain familiarity with the SparkFun RedBoard Artemis Nano through various examples.
The Artemis board is compatible with SparkFun's Arduino core and can be programmed under the Arduino IDE. I follow the lab instructions outlined below to set it up:
First, I ran a basic example to make sure everything is functional. The Blink example toggles the on-board LED on and off using the digitalWrite function. The result is shown in this video.
This example shows how to communicate with the board over serial. The baud rate is set to 115200, which tells the board how fast information is transferred. The code reads input from the serial port using Serial.read() and writes the data back using Serial.write(). One can also print to the serial port using Serial.println(). The difference between the two is that Serial.write() writes binary data to the serial port, while Serial.println() prints data as ASCII text. Serial.write() is therefore simple and fast, whereas Serial.println() is more versatile.
Below is a screenshot of the Serial Monitor showing serial communication.
The microcontroller includes an onboard ADC (analog-to-digital converter) that can read analog voltages from 0 V to 2 V. It also has some internal ADC channels that can measure things like the internal die temperature. I modified the example to output temperature in Fahrenheit instead of raw ADC counts. In the video below, we can see that the temperature reading changes from around 79 to almost 82 when the chip is touched by a finger.
I ran into the issue of not being able to print floats with Serial.printf(). My workaround is to split the string before and after the float and print the float separately using Serial.print().
In this last example, we play with the pulse density modulation (PDM) microphone on the Artemis board. The example code reads samples from the mic into a data buffer and performs an FFT with arm_cfft_f32() to find the frequency bin with the largest magnitude. As we can see in this video, the measured frequency increases significantly when high-frequency noise is produced.
In this lab we explore sending data between the Artemis board and the computer via Bluetooth Low Energy, which is optimized for low power use at low data rates. This communication framework will allow us to effectively control and debug our robot in the future.
On the computer side, I follow these detailed
instructions provided by our awesome,
talented, passionate and inspiring course staff.
I installed Python 3.10.8, created a virtual environment, and configured required
packages. The main work on the computer side is done through Python code written in
Jupyter notebooks.
On the Artemis side, Arduino is used to program the board and ArduinoBLE needs to be
installed from the library manager.
Next I downloaded this
codebase, again provided by our wonderful, hard-working,
generous and dedicated teaching team. The codebase contains useful tools
including class definitions to simplify the transfer and manipulation of data, as
well as demos to get us started.
The code that runs on Artemis sets up one service with three characteristics that
correspond to sending out floats, sending out strings, and receiving strings. While
a Bluetooth connection is established, the program keeps writing floats to the "float characteristic", which the computer can then read from. The program also responds to the commands it reads from the "receiving string characteristic", which is updated by the computer.
On the computer side, we can use Python to send string commands to the Artemis
board, and receive data by either reading a characteristic or waiting for
notifications when a characteristic updates.
Once everything is ready, I load and burn the sketch into the Artemis board. BLE.address() gives me the board's MAC address and I update this info on the computer side. I also generate a new BLEService UUID and make sure the computer and Artemis agree.
The first task is to implement the ECHO command, which, upon receiving a string from the computer, responds with an augmented string containing the original string. I use the EString class to build the augmented strings and writeValue() to send responses to the computer. The result is shown in the screenshot below. On the left is the relevant section of code running on Artemis. On the right is the Python code to send and receive data. We can see that the computer sends "Hiiiii" and Artemis replies with "Robot says -> Hiiiii:)".
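On the Python side, the exchange boils down to two calls to the provided ble module. Here is a rough sketch; the characteristic key 'RX_STRING' follows the course codebase's connection.yaml and may be named differently in other setups:

ble.send_command(CMD.ECHO, "Hiiiii")                # send the command string to the Artemis
reply = ble.receive_string(ble.uuid['RX_STRING'])   # read back the string characteristic
print(reply)                                        # expect: "Robot says -> Hiiiii:)"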
I add a new command that writes the string "T:123456" to the string characteristic. On the Python side, I had to remember to modify cmd_types.py to include the new command. This time, instead of calling receive_string() to read the value, I set up a notification handler with a callback function which extracts the time. The result is shown below: "123456" is extracted and printed.
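A minimal sketch of that handler, assuming the ble module's bytearray_to_string helper and the same characteristic key as above (the command name GET_TIME is a placeholder for my new command):

def time_handler(uuid, byte_array):
    # Fired whenever the string characteristic updates, e.g. with "T:123456"
    msg = ble.bytearray_to_string(byte_array)
    t = int(msg.split(":")[1])
    print("time (ms):", t)

ble.start_notify(ble.uuid['RX_STRING'], time_handler)
ble.send_command(CMD.GET_TIME, "")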
This task requires us to add a command to send five timestamped temperature readings taken once per second. I structured my data in this format: "| time, temp | time, temp ...". I updated my callback function to extract time and temperature. We can see the data printed in the bottom right corner of this screenshot.
To make sure my data will not exceed the characteristic size limit, I calculated my maximum possible data size. Temp has 3 decimal places and is unlikely to go below 0 or above 100 degrees Celsius, so that gives at most 5 digits + 1 decimal point = 6 characters. I also used int for millis() instead of unsigned long. In total, each timestamped temperature takes 1 for "|" + 6 for temp + 1 for "," + 7 for time (which covers at least 2 hours) = 15 bytes. So my reply will have at most 5*15 = 75 bytes, which is less than the characteristic size limit of 150 bytes.
For this task, we rapidly send timestamped temperatures for five seconds. I check the data size and send data out when it's about to reach the characteristic size limit. The code and demonstration are shown in this video. On average I was able to send 2100 temperature points.
Theoretically, with 384 kB of RAM, the Artemis can store 5 minutes of 32-bit data taken at 320 Hz (300 s * 4 bytes/value * 320 values/s = 384 kB). However, the RAM also holds global variables, and local variables are needed whenever the Artemis does computations, so the actual capacity for storing data is somewhat less.
We played around with Time-of-Flight (ToF) sensors in this lab.
The two VL53L1X sensors have the same default I2C address. In order to address them individually, we can either change the I2C address of one sensor in software at startup (while holding the other in reset through its XSHUT pin), or use the XSHUT pins to enable only one sensor at a time.
I chose the first method.
I planned to put one ToF sensor at the front of the car and the other at the back, thinking that after the car flips 180 in the air during a stunt, it can still go forward. Rotating 180 also lets it map its surroundings. However, if there are obstacles to the sides of the car, it will not be able to detect them. Here is a photo showing the connections.
Scanning the I2C channel finds the sensor at address 0x29. This is because the 0x52 address in the datasheet includes an LSB for read/write status, and dropping that bit gives 0x52 >> 1 = 0x29.
The sensor has three distance modes (though our library only supports Long and Short). Long mode has the longest range but is impacted significantly by ambient light, while Short mode is more robust to lighting conditions. I chose Short mode for the robot since I don't expect it to navigate long distances and I want it to be more reliable.
I perform tests to see if the measured distance of the sensor is aligned with the actual distance.
I plotted the resulting data from 3 repeated measurements, sweeping the target distance from 200 mm at 20 mm increments. We can see that the usable range is about 1400 mm; going over 1500 mm we see some erratic data. Accuracy is good: the graph is close to y = x when the target is in range. Repeatability is also good, and the maximum difference between repeated measurements is negligible.
For ranging time, the sensor takes 51 ms from startRanging() to stopRanging() while in range, and 31 ms when out of range. The time also increases slightly (51 -> 52 ms) when the object is closer.
I soldered the XSHUT pin of one ToF sensor to pin A3 of the Artemis board. To turn the sensor off, I pull the pin low.
After changing one sensor's address, I was able to get data from both sensors simultaneously.
Using default mode, time between measurements is 202 ms. With short mode, it reduces to 62 ms. Here is a demo:
The time between measurements also increases when the measured distance is longer. Reading the datasheet, I found that setting the timing budget to 20 ms in Short mode reduces the time the sensor needs to perform and report a ranging measurement.
Building off of the previous Bluetooth lab, I added a command to get ToF data for a user-specified period of time. The results are then plotted.
In this lab we finally got our car and get to run stunts! We added the Inertial Measurement Unit (IMU) to the car and used sensor fusion to calculate pitch and roll from raw data.
Just like what we did for the ToF sensor, I installed the SparkFun 9DOF IMU Breakout - ICM 20948 - Arduino Library and started exploring with the basic example.
In the code, AD0_VAL is defined to be 1. From reading the datasheet, I found out that this is the LSB of the I2C address. The default is 1, and if we short the ADR pads on the IMU board, the value becomes 0. This can be used to connect two IMUs to the same I2C bus (leave one at the default and short the other so they have different addresses). I wish our ToF sensors had this functionality.
I played around and examined the acceleration and gyroscope data as I moved the IMU board along the axes drawn on it. When the board is still, the acceleration along the x and y axes is 0 as expected, but the acceleration along the z axis equals the gravitational acceleration because we are on Earth. When rotating and flipping the board, I noticed that the accelerometer data changes more sharply than the gyroscope data.
I added code to blink the LED when the IMU finishes setting up. This will be helpful when debugging later.
I calculated the pitch and roll from accelerometer data.
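As a rough sketch of the math (the axis and sign conventions depend on how the IMU is mounted, so treat them as assumptions):

import numpy as np

# Example accelerometer reading; units cancel out in the ratios
ax, ay, az = 0.02, 0.70, 0.71

pitch = np.degrees(np.arctan2(ax, az))   # rotation about the y axis
roll  = np.degrees(np.arctan2(ay, az))   # rotation about the x axis
print(pitch, roll)                       # about 1.6 and 44.6 degrees for this reading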
Here is a demo showing outputs at {-90, 0, 90} degrees pitch and roll:
The sensor is pretty accurate; the error is within 2 degrees.
To reduce the effect of noise, I analyzed the data in the frequency domain. I used scipy.fftpack.fft to implement the Fast Fourier Transform. Here is the result I got with 400 samples and a sample rate of 30 Hz.
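The spectrum was computed roughly like this (a sketch; acc stands in for the 400 logged accelerometer samples):

import numpy as np
from scipy.fftpack import fft

fs, N = 30, 400                        # sample rate (Hz) and number of samples
acc = 0.5 + 0.01 * np.random.randn(N)  # placeholder for the logged accelerometer data

yf = fft(acc)
freqs = np.linspace(0.0, fs / 2, N // 2)
mag = 2.0 / N * np.abs(yf[:N // 2])    # one-sided magnitude spectrum, plotted against freqs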
As we can see, apart from the spike around 0 Hz, there is little noise. I set up the low pass filter for the accelerometer and gyroscope according to the datasheet.
For comparison, the figure on the left is raw data and the one on the right is data with low pass filter applied.
I computed pitch, roll, and yaw from the gyroscope readings. This video compares the output to that of the accelerometer at different sampling rates. We can see that the gyroscope's output suffers from drift, and increasing the sampling frequency improves the accuracy of my estimated angles.
The accelerometer is especially noisy near sources of noise such as the car's DC motors. However, it does not have any measurement bias since gravity always points down. The gyroscope, on the other hand, has much less noise, but dead reckoning (integrating the gyro data to get an angle estimate) introduces a large, growing bias [Hunter's tutorial on Complementary Filter]. To get the best of both worlds, I used a complementary filter to compute an accurate and stable estimate of pitch and roll. The results can be seen in the video.
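The core of the filter fits in a few lines (a sketch; alpha, the weight between gyro integration and the accelerometer angle, is a tuning assumption):

def complementary_filter(angle, gyro_rate, acc_angle, dt, alpha=0.9):
    # Fuse one gyro rate sample (deg/s) with an accelerometer angle (deg)
    gyro_angle = angle + gyro_rate * dt                   # short term: integrate the gyro
    return alpha * gyro_angle + (1 - alpha) * acc_angle   # long term: lean on the accelerometer

# Example: start at 0 deg, gyro reads 30 deg/s for 10 ms, accelerometer reads 0.5 deg
print(complementary_filter(0.0, 30.0, 0.5, 0.01))         # about 0.32 deg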
After getting rid of all the delays and Serial.print() statements, I was able to sample IMU data every 2-4 ms. This screenshot shows the timestamped IMU data I collected.
To get both ToF and IMU data, I decided to store them in different arrays for ease of data processing, with 2 timestamp arrays for ToF and 1 for IMU. ToF is also much slower than the IMU. When we map out the room later, we might want to know the angle/yaw from the IMU and the distance from the ToF at the same time. Initially, I checked whether ToF data was ready first, but that only collected 3 ToF samples in 5 s. So I check whether the IMU is ready first; that way the ToF sensors have time for ranging and I was able to collect more ToF data. Here is a demo collecting and plotting 5 seconds of ToF and IMU data.
Considering the memory of the Artemis, ideally I can store almost 50 seconds of ToF and IMU data in their arrays. Here is the calculation:
On the 32-bit Artemis, an int takes 4 bytes and a float also takes 4 bytes.
ToF has 2 timestamps and 2 distances of type int: 4*4 = 16 bytes per sample. There are 1/0.062 ≈ 16 samples per second, which gives 16*16 = 256 bytes/s.
IMU has 1 int timestamp and 3 floats for pitch, roll, and yaw: 4 + 3*4 = 16 bytes per sample. There are 1/0.002 = 500 samples per second, so 16*500 = 8000 bytes/s.
Artemis has 384 kB of RAM: 384000 / 8256 ≈ 46.5 seconds.
We use separate batteries to power the digital electronics and the motors so the sensitive sensors will not get disturbed by the noisy motors. The 3.7 V 650 mAh battery powers the digital electronics, and the 3.7 V 850 mAh one powers the motors through the motor driver, because the motors draw significantly more current.
I was able to flip the car by driving forward and then quickly reversing. The car can sometimes flip onto its side too. Here is a demo of the car's stunts.
I taped the circuits onto the car and recorded data while the car was performing a stunt. Here are the stunt and the ToF and IMU data collected during it.
In this lab we fully integrated the robot with Artemis, sensors, motor drivers, and batteries. I calibrated the motors so the car moves in a straight line and was able to perform open loop control.
I needed to hook up the two motor drivers to the Artemis. After reading the datasheet, I decided to use pins 6, 7 and 11, 12 since they support pulse-width-modulated (PWM) signals.
At first, I connected the signal pins of one motor driver and powered it from an external power supply. I chose 3 V and a 2 A current limit for the power supply, since that is close to what our 3.7 V battery will supply and the peak output current per channel of the motor driver is 2 A.
I then generated PWM signals with analogWrite() and checked the motor driver output on an oscilloscope.
I connected the motor to the driver and ran the wheels in both directions. A demo is shown in this video.
Next I connected the battery to the motor driver. We use separate batteries to power the digital electronics and the motors so the sensitive sensors will not get disturbed by the noisy motors. The 3.7 V 650 mAh battery powers the digital electronics, and the 3.7 V 850 mAh one powers the motors through the motor driver, because the motors draw significantly more current.
After making sure everything works as expected, I hooked up the whole circuitry. A thing to note is that in the datasheet it says the control source and the motor driver must share a common ground. So I shorted the two GND pins on the motor driver, one connected to battery ground and the other to Artemis ground. The final integrated car looks like this:
I tried to find the lowest PWM value at which the robot starts to move forward and to perform on-axis turns while on the ground. By slowly increasing the PWM value, I found that a minimum of 40 is required to get the car to move and 100 for the car to perform an on-axis turn on a full battery. Here is a demo.
I also tested whether my car can move in a straight line. I found that it drifts off to the left, so I added a calibration factor to reduce the power of the right motor. In the end the robot is able to follow a straight line for at least 6 feet.
Here is a demo of untethered, open-loop control. I was trying to make the car flip by first moving forward and then suddenly reversing, but the car doesn't change direction quickly enough to do that.
We experiment with PID control on our robot. I try to implement orientational control to make the robot turn to a certain angle. In the end, using the P term in PID already gives me pretty good results.
To efficiently test and tune my system, I implemented the following Bluetooth commands:
SET_SETPOINT allows me to tell the robot how many degrees I want it to turn. I can send it like this on the Python side:
ble.send_command(CMD.SET_SETPOINT, "180")
SET_PID allows me to set the P, I, or D term. For example, to set the P term:
ble.send_command(CMD.SET_PID, "P|0.1")
PID tells the robot to perform the task: go forward, turn with PID orientational control, and then come back.
While the robot is performing the task, I collect timestamped sensor data and motor inputs. In the end I send them to the computer via Bluetooth for analysis. I used 4 arrays of size 150 (which is more than I needed) to store time, yaw, error, and PWM motor input. Related code is shown below:
First, I implemented orientational control when the robot is stationary. The robot is able to turn to a certain angle and maintain a constant orientation even when kicked. Here is a demo and code:
I graphed the data collected. Note that I was only collecting data while the PID controller was running.
I ran into a weird issue while implementing the task. One of my motors, the one that controls my right wheels, refuses to turn backwards after turning forward. It can spin in both directions independently but won't spin anymore if I tell it to switch from forward to backward, and this only happens when I am running both of the motors. As a consequence my robot cannot turn right after turning left. Through experimenting I was able to fix it by setting the right motor first (before the left motor).
Another issue is that while tuning the PID, telling the robot to turn 180 degrees actually makes it turn more than 180 (~220). Its 90-degree turns are perfect though. I looked at the data and confirmed that this is not caused by PID overshoot: the sensor actually thinks it turned 180 and the error was close to 0. At first I thought this was caused by drift in the estimated yaw, so I repositioned the sensor such that the yaw comes from "pitch", which is calculated using a complementary filter on both the gyroscope and accelerometer readings. But the problem remained. In the end, I played around with it for a bit and found that a setpoint of 120 makes the robot turn 180.
Then I made the robot perform the task: go forward, turn 180, and then come back. The Kp term in PID is the proportional gain: the bigger the error, the faster the car turns to try to correct it. To tune it, I first tried 0.1 and 1 to get a ballpark value, then fine-tuned it from there by looking at the collected data. I found that different battery levels affect the PID result. Also, if I set the setpoint to 180, it turns more than expected, so I had to hardcode it again.
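In rough terms, the proportional controller boils down to something like this (a Python sketch of the logic only; the gain, PWM limits, and function name are illustrative rather than my actual Arduino code, and a real loop would also stop once the error is small enough):

def p_control(setpoint, yaw, Kp=2.0, min_pwm=100, max_pwm=255):
    # Map orientation error to a signed motor command
    error = setpoint - yaw
    pwm = Kp * error
    # Clamp into the range the motors respond to (deadband of ~100 for on-axis turns)
    magnitude = min(max(abs(pwm), min_pwm), max_pwm)
    return magnitude if pwm >= 0 else -magnitude

print(p_control(180, 20))    # large error -> saturated command (255)
print(p_control(180, 170))   # small error -> deadband-limited command (100)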
Here is the data collected during turning.
Sometimes the robot did something cool, or it magically did something right even though I wasn't expecting it to. I then needed to look at the behavior and think about why and what made it do that. Different runs can also produce different results, which adds complexity to debugging. Here are some fun bloopers:
We want to drive our robot as fast as possible towards the wall and stop as close as possible. In order to do that we need real time distance reading. However our ToF sensor is slow. In this lab, we supplement the sensor by using a Kalman filter to estimate the distance from the wall.
We first estimate the drag and momentum for the state space model by using a step response. I drove the car towards the wall at a speed of 80 and got the following results.
I tried to estimate velocity by integrating acceleration from the IMU, but the results were not that great.
From the graph and data we can say that the steady state speed is 907 mm/s and the 90% rise time is 1278 ms. Then we can calculate drag and mass:
d = u / x_dot; with u = 1, d = 1/907 = 0.0011
m = -d * t_0.9 / ln(1 - 0.9) = -0.0011 * 1.278 / ln(0.1) = 0.0006105
Repeating the same process for Run 1 we get:
steady state speed = 672 mm/s
90% rise time = 1 s
d = 1/672 = 0.0015
m = -0.0015 * 1 / ln(0.1) = 0.00065
As a sanity check, we run the Kalman filter with data collected from the run. Details of the algorithm can be found in Prof. Kirstin Petersen's awesome lecture.
With drag and mass we can construct our A and B matrices and discretize them:
A = np.array([[0,1],[0,-d/m]])
B = np.array([[0],[1/m]])
dt = 0.01
Ad = np.eye(2) + dt * A
Bd = dt * B
The C matrix maps the state to the ToF measurement:
C = np.array([[-1,0]])
We also need to specify the process noise and sensor noise:
sigma_1 = 100
sigma_2 = 100
sigma_3 = 20
sig_u = np.array([[sigma_1**2,0],[0,sigma_2**2]])
sig_z = np.array([[sigma_3**2]])
We set our initial state to the first sensor reading and give a reasonable initial state uncertainty:
x = np.array([[-tof[0]],[0]])
sig = np.array([[5**2,0],[0,5**2]])
Then we can run our Kalman filter.
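A minimal sketch of one filter step, written against the matrices defined above (a standard predict/update step; the variable names are mine):

def kf(mu, sigma, u, y):
    # mu: 2x1 state, sigma: 2x2 covariance, u: 1x1 input, y: 1x1 ToF reading
    mu_p    = Ad.dot(mu) + Bd.dot(u)                 # prediction with the discretized model
    sigma_p = Ad.dot(sigma).dot(Ad.T) + sig_u
    k_gain  = sigma_p.dot(C.T).dot(np.linalg.inv(C.dot(sigma_p).dot(C.T) + sig_z))
    mu_new  = mu_p + k_gain.dot(y - C.dot(mu_p))     # update with the measurement
    sigma_new = (np.eye(2) - k_gain.dot(C)).dot(sigma_p)
    return mu_new, sigma_new

# Example: one step with no motor input and a 1000 mm ToF reading
x, sig = kf(x, sig, np.array([[0]]), np.array([[1000]]))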
The results are shown below. We can see that our car model is not very accurate, so trusting the model too much did not produce good estimates. Therefore I increased the process noise of the model, and the prediction then matches the sensor readings better.
The next step is to move the Kalman filter onto our robot. One difference is that we only run the update step when there is a sensor reading; otherwise only the prediction step is run.
Here is the relevant code:
Data from first try:
We can see that the Kalman filter performed badly. I ended up using Anya's drag and mass from a previous year and got more reasonable results.
Here are some data from the process of tuning the Kalman filter. I had to change dt (and thus the discretized A and B) since the prediction step now runs much more often. I also had to adjust the noise covariance matrices because trusting the sensor too much will now make the car run into the wall.
Kalman filter too conservative: the car ran into the wall.
Kalman filter too aggressive: the car stopped too soon.
Finally I found a good set of parameters and got satisfactory results. (I only remembered to plot ToF data as dots in this plot)
Here is a demo of the robot stopping in front of the wall with the help of Kalman filter.
Other things I did during this lab:
Discovered I had never actually implemented a complementary filter; when I added one, performance got worse and the yaw would just go back to 0.
Modified my PID from lab 6 to a state machine. Details are discussed in lab 8.
In this lab, we combine what we have learned to perform fast stunts. I choose to do
Task B: drive the car fast towards the wall and turn 180 degrees.
My code structure is illustrated here:
Here is a graph from one run:
In the end the car is able to perform the stunt at a speed of 200.
During the implementation process there were several things I ran into:
In this lab we use our robot to map out a room. This is done by spinning the robot 360 and gathering ToF data. In order to do that I used angular speed control with a proportional term.
To make the robot turn slowly and consistently, I implemented a PID controller that works on the raw gyroscope angular velocity data. This is more reliable than orientation control since we now avoid the integration error in the estimated yaw. Here is a graph showing the PID controller working. We can see the PWM input mirrors the angular velocity, and the angular velocity is kept around 30 degrees/s. Yaw increases in a straight line, indicating constant angular velocity, so we can assume the readings are consistently separated in angular space.
Here is a video showing the robot rotating on axis. The slowest speed I was able to
turn was around 90. I got around 100 readings in 10s, which means each measurement
took 0.1 second. Since the average angular velocity was 30 degree/s, the orientation
of the robot changes about 0.1*30 = 3 degrees during a measurement.
From the video, we can see the robot did not shift much when turning and ended up in roughly the same place, so I assume the measurements are pretty accurate.
To map the entire room, my robot executes 360 turns at 4 marked positions and sends
back data to the computer via Bluetooth. For the orientation, I trust the gyroscope
values more. However, I realized I was not recording IMU and ToF readings at the
same time. Although I can try to extrapolate, I found the results satisfactory when
assuming that the readings are spaced equally in angular space.
Here are the data collected and the polar coordinates plots generated for each scan:
The measurements are what I expected and match the real room. I scanned twice at position (0,0). The results are not super precise and are offset by a slight rotation.
The final step is to merge everything together and generate the whole map. I did not
start my robot in the same orientation in all scans. Luckily this is easy to work
with using transformation matrices.
First we convert the data from polar to Cartesian coordinates.
Then we use the transformation matrix to map points from the robot frame to the
global frame.
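A sketch of both steps for one scan (the scan pose x0, y0, phi is a placeholder for the marked position and starting heading of that scan):

import numpy as np

# Polar readings from one scan: yaw angles (degrees) and ToF distances (mm)
theta = np.radians(np.array([0, 90, 180, 270]))
dist  = np.array([800, 1200, 600, 900])

# Robot frame: polar to Cartesian
xr, yr = dist * np.cos(theta), dist * np.sin(theta)

# Global frame: rotate by the scan heading phi, then translate by the scan position (x0, y0)
x0, y0, phi = -914, -610, np.radians(0)      # placeholder pose for this scan
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
pts = R @ np.vstack((xr, yr)) + np.array([[x0], [y0]])   # 2 x N array of global (x, y) points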
The final result looks like this:
And we manually estimate the walls and obstacles.
final_x = np.array([-1500, -1500, -600, -600, 1900, 1900, 300 , 300, -250, -250, -1500])
final_y = np.array([-1250, 400, 400, 1500, 1500, -1250, -1250, -750, -750, -1250, -1250])
In this lab we implemented a Bayes Filter to perform grid localization in the simulator. Bayes Filter uses the notion of a belief which is recursively predicted with a motion model and updated with sensor inputs. The algorithm [Professor Petersen's great lecture] works as follows:
With the help of predefined classes and helper functions we complete the filter in the following steps:
The compute_control function takes in the previous and current states and extracts the action taken from the change between them. In our setting, given the previous and current poses of the robot, we can figure out the translation and the rotations before and after the translation.
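A sketch of this step (the angle-wrapping helper mirrors the normalize_angle utility in the provided codebase):

import numpy as np

def normalize_angle(a):
    # Wrap an angle in degrees into [-180, 180)
    return (a + 180) % 360 - 180

def compute_control(cur_pose, prev_pose):
    # Recover (rot1, trans, rot2) from two (x, y, yaw-in-degrees) poses
    dx, dy = cur_pose[0] - prev_pose[0], cur_pose[1] - prev_pose[1]
    delta_rot_1 = normalize_angle(np.degrees(np.arctan2(dy, dx)) - prev_pose[2])
    delta_trans = np.hypot(dx, dy)
    delta_rot_2 = normalize_angle(cur_pose[2] - prev_pose[2] - delta_rot_1)
    return delta_rot_1, delta_trans, delta_rot_2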
The odom_motion_model function calculates the transition probabilities. The probability of the robot actually taking the action (rot1, trans, rot2), and thus ending up in state cur_pose, given that the control input is u, can be modeled with a Gaussian distribution.
After taking an action, the prediction step makes predictions based on the previous belief and the motion model. The probability that the robot is in state x is equal to the sum over all states x' of {the probability that the robot was in state x' times the probability that the robot ends up in x given that it was in state x' and took the action}. For efficiency we skip those states with previous belief smaller than 0.0001. This can lead to the total probability no longer summing to 1, so we normalize the belief at the end.
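Roughly, the step looks like this (a sketch built on the lab's loc and mapper objects; attribute names such as mapper.from_map and mapper.MAX_CELLS_X follow the provided codebase and should be treated as assumptions):

def prediction_step(cur_odom, prev_odom):
    u = compute_control(cur_odom, prev_odom)
    loc.bel_bar = np.zeros_like(loc.bel)
    for px in range(mapper.MAX_CELLS_X):
        for py in range(mapper.MAX_CELLS_Y):
            for pa in range(mapper.MAX_CELLS_A):
                if loc.bel[px, py, pa] < 0.0001:
                    continue                                  # skip negligible prior mass
                for cx in range(mapper.MAX_CELLS_X):
                    for cy in range(mapper.MAX_CELLS_Y):
                        for ca in range(mapper.MAX_CELLS_A):
                            p = odom_motion_model(mapper.from_map(cx, cy, ca),
                                                  mapper.from_map(px, py, pa), u)
                            loc.bel_bar[cx, cy, ca] += p * loc.bel[px, py, pa]
    loc.bel_bar /= np.sum(loc.bel_bar)                        # renormalize after thresholding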
The sensor_model function takes in the true observations of the environment and outputs the probability of obtaining the current sensor measurements at each state. This can again be modeled by a Gaussian distribution.
The update_step updates the belief based on the sensor model. The predicted belief at each state is weighted by how likely the current sensor measurements are from that state, and the result is normalized.
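A compact sketch of the update (again leaning on the lab's loc and mapper helpers; loc.gaussian, loc.obs_range_data, mapper.get_views, and loc.sensor_sigma are names taken from the provided codebase and are assumptions here):

def update_step():
    for cx in range(mapper.MAX_CELLS_X):
        for cy in range(mapper.MAX_CELLS_Y):
            for ca in range(mapper.MAX_CELLS_A):
                # Likelihood of the measured ranges given the precomputed views at this cell
                p = np.prod(loc.gaussian(loc.obs_range_data.flatten(),
                                         mapper.get_views(cx, cy, ca),
                                         loc.sensor_sigma))
                loc.bel[cx, cy, ca] = p * loc.bel_bar[cx, cy, ca]
    loc.bel /= np.sum(loc.bel)                                # normalize the belief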
Here are demos showing the Bayes filter running. The green line shows the ground truth, the red line is the odometry readings, and the blue line is the Bayes filter output. We can see that the Bayes filter gave pretty accurate estimates even though the odometry readings are very off.
Building on the previous lab, we perform localization with the Bayes filter on the actual robot in this lab. I collaborated with Jueun Kwon for this lab.
We first verify the localization part of the starter code in the simulator. Here is the result. We can see that the Bayes filter (blue) works well just like the last lab.
This function tells the robot to rotate 360 and collect sensor measurement data. The
code that runs on the Artemis performs PID angular speed control just like in Lab 9.
We changed the observation_count in world.yaml to collect 71 readings and also calculated the sensor bearings based on dt and the angular velocity.
Here is our implementation.
We were confused by the get_pose() function in the beginning, but that does not need to be implemented.
We placed our robot at marked poses in the lab and ran an update step of the Bayes filter. Here are the results:
As we can see, the localization poses are all the same as or within one grid square of the ground truth. Possible reasons for inaccuracies include the robot not turning exactly in place, which gives slightly off ToF readings and confuses the localization. The robot localizes better at the bottom two positions; maybe those locations have more unique features. In contrast, I can imagine that poses around the top right corner give similar observations.
"Avengers Assemble!" In this final lab our robot is tasked with the mission to navigate through a set of waypoints as quickly and accurately as possible.
I collaborated with Jueun Kwon and Raphael Fortuna for this lab.
We started out ambitious: we tried to get localization working. However, we ran into many difficulties. The robot doesn't perform perfect 360 degree turns and might not end up back at its starting position, which messed up later trajectories. We also looked into using the magnetometer to get an accurate heading, but the magnetic field generated by the running motors can interfere with the magnetometer. Plus, our localization results from the last lab were occasionally off. In the end we decided the cost of localization was greater than its benefits and did not incorporate it in our final system.
Our approach performs path planning on the computer and sends forward and turning commands to the robot. The robot executes the commands using closed-loop control with ToF and gyroscope readings.
We implemented PID orientational control for the robot to turn to a certain angle. The robot is able to turn both left and right. We found that the car is better at right turns, so we adjusted some parameters. Since we are using Jueun's robot, we integrated the orientational control code with Jueun's old code. Her code base is well structured, and I appreciate her modular and easy-to-understand design.
Our robot moves forward with PID control and a Kalman filter. We performed PID on either the distance to the wall or the distance traveled, depending on the scenario, because the distance-to-wall approach can get messed up by obstacles while distance traveled is less accurate.
Initially we had the robot take the shortest path between waypoints. However, we discovered that the ToF did not work well with walls at an angle, so we added intermediate waypoints to make the robot travel along Manhattan distances. Our path planner takes in the waypoints and generates commands to send to the robot, similar to the compute_control step in Lab 10. We then adjusted the output during tuning.
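A sketch of the planner idea (the waypoint coordinates and starting heading below are placeholders):

import numpy as np

def plan(waypoints, start_heading=0.0):
    # Turn a list of (x, y) waypoints into (turn_degrees, drive_distance) commands
    cmds, heading = [], start_heading
    for (x0, y0), (x1, y1) in zip(waypoints[:-1], waypoints[1:]):
        target = np.degrees(np.arctan2(y1 - y0, x1 - x0))
        turn = (target - heading + 180) % 360 - 180    # wrap to [-180, 180)
        cmds.append((turn, np.hypot(x1 - x0, y1 - y0)))
        heading = target
    return cmds

# Example with Manhattan-style intermediate waypoints (units arbitrary)
print(plan([(-4, -3), (-2, -3), (-2, -1), (1, -1)]))   # [(0, 2), (90, 2), (-90, 3)]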
Here are some demos. Our robot is able to hit most of the waypoints. The easiest ones to mess up are the 2nd, 3rd, and 4th waypoints at the lower right corner, where the robot is prone to hitting obstacles due to the small space and accumulated errors. We are satisfied with our results given the time and other constraints.
I will remember this day. I was in Phillips 427 from 4 am to 7 pm, knocking out Labs 11 and 12.
Shout out to Jueun for being my physical and mental support.
Apologies to the wall that I accidentally kicked down 100 times.
Fuji Apple Pear Celsius does not taste good, and it almost killed Raphael's laptop.
Coconut Matcha from CTB really helped my digestive system.
My car got not one, but TWO parking tickets.
We tried so many times that both Jueun's phone and mine ran out of storage. The precious demos were taken on Raphael's phone with a sketchy emergency battery.
Last but not least, THANK YOU SO MUCH Professor Petersen and the teaching team for such an amazing and rewarding semester. I cannot imagine the sheer amount of time and energy that went into creating and running this course. Your generous support and kindness really lit up this journey. Thank you!
This website is my first time documenting a large technical project online. There are definitely lots of takeaways, and I gained valuable insights into online communication.
The target audience of this webpage includes my wonderful TAs and professor, other students, and potentially employers or anyone interested in robotics, oh and also my future self. I tried my best to address the needs and interests of my intended audience. For the graders, I made sure I included everything on the rubric. I imagine them going through a checklist and trying to find the corresponding info on my website, so I format each section clearly and use concise language and media to make my work convincing. I also try to employ cool graphics and interactive design to engage my audience. For example, the cover picture for each lab acts as clickbait so people would want to learn more; the topic of each lab shows up when the mouse hovers over the cover; the graphs and photos can be enlarged with a click. That said, UI design is a whole other subject in itself. Dealing with fonts and spacing and color palettes is tedious enough. I have come to admire the effort that goes into online communication. Also, as technology and aesthetics are ever evolving, I wonder what will stand the test of time.
When it comes to the content, I think a combination of clear and concise language, supplemented with relevant examples and visuals, was particularly effective in conveying complex concepts. I also tried to use a friendly and conversational tone to create a welcoming environment. I gained lots of inspiration from reading previous years' webpages, including but not limited to those of the awesome Krithik Ranjan, Aryaa Pai, Robby Huang, and Anya Prabowo.
One very big thing I realized is that not only is it helpful to document the result and how I achieved it, but the reasoning behind why something is done a certain way is equally important. The thinking behind every decision is invaluable. This might include various considerations, system limits, methods already tried, assumptions made, tradeoffs taken, etc. These provide a more comprehensive view of one's product, and can save people from trying what I already tried and falling into the same trap later. Moreover, these ways of thinking and analysis are applicable to many broader areas. I also try to include potential improvements or things I wish I had done, so that maybe someone doing this in the future can do better.
Another thing is that, once you know or understand something, it is hard to un-think it. For documenting technical work, it is hard to include just the right amount of detail. I try to assume my readers know nothing, and talk over the concepts with a rubber duck to see if they make sense. For others unfamiliar with my work, even little things like variable names should be chosen with careful consideration. Descriptions for the figures would be helpful too.
As the semester progressed, my approach became more organized and well-structured. I start with a problem statement, and the topic is then broken down into sections to help my audience navigate the content more easily. Additionally, I incorporated cross-referencing and links to related sources, allowing readers to explore topics in greater depth if desired. I included more code snippets and tried to make them well commented and easy to follow. Videos also allowed me to demonstrate practical applications and showcase the robots in action, making the content more tangible and exciting.
In conclusion, my experience during the Fast Robots 2023 course provided me with valuable lessons in online communication.