Roboticist + Physicist

Robotics + Coding

I like making robots. My latest project is below.

Computers are a fantastic tool for answering questions about the world. My N-Body Gravity Simulator is on github.

Happybot! Raspberry Pi Robot with Tensorflow and OpenCV

The robot is operational! The code is object-oriented and available on GitHub. This section describes the robot’s behavior, performance, and construction. A description of the facial sentiment recognition algorithm follows in the next section.

BEHAVIOR

The robot behaves in the following way (a code sketch of this loop follows the notes below):

1) The robot looks for a landmark, which in this case is a blue cup. If no blue cup is seen, the robot turns and looks again.

2) Having found the blue cup, the robot moves toward it until the cup doubles in size or fills a large portion of the frame.

3) The robot scans its surroundings for a face. If a face is not found within the time limit, the robot returns to step (1).

4) If a face is detected, the robot turns toward the face and orients itself so that the servo motor is pointing forward and the face is centered in the frame. If a face is not detected, the robot returns to step (1).

5) When the face is centered, the neutral expression must be set. (The sentiment recognition is cooperative. The robot reads a person's neutral expression and predicts emotions based on that.) To show the robot is setting the neutral expression, it moves back and forth as a visual cue. When the person looks straight-on at the robot, the neutral position is set and the robot ceases to rock back and forth.

6) The robot predicts the emotions of the face. If happiness is predicted, the robot responds by spinning quickly. After happiness is detected, return to step (1).

Notes:

After step (4), if the face vanishes from the frame for long enough, the program returns to step (1).

After step (5), if the face vanishes from the frame, the neutral facial expression must be reset.
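
Below is a minimal Python sketch of the control loop these steps describe, written against an assumed Robot interface; the method names, the frame-fraction threshold, and the timeout are placeholders, not the names or values in the actual code.

import time

FACE_TIMEOUT = 5.0  # seconds to scan for a face before giving up (assumed value)

def control_loop(robot):
    while True:
        # Steps 1-2: find the blue cup landmark and approach it.
        cup = robot.find_blue_cup(robot.capture_frame())
        if cup is None:
            robot.turn(degrees=15)  # no cup seen: turn and look again
            continue
        while cup is not None and cup.frame_fraction < 0.4:
            robot.drive_forward()
            cup = robot.find_blue_cup(robot.capture_frame())

        # Step 3: scan the surroundings for a face within the time limit.
        deadline = time.time() + FACE_TIMEOUT
        face = None
        while face is None and time.time() < deadline:
            robot.turn(degrees=10)
            face = robot.detect_face(robot.capture_frame())
        if face is None:
            continue  # time limit exceeded: back to step 1

        # Steps 4-5: center on the face, then rock until the neutral expression is set.
        robot.center_on(face)
        while not robot.set_neutral(robot.capture_frame()):
            robot.rock()  # visual cue that the neutral expression is being set

        # Step 6: predict the emotion and celebrate happiness, then start over.
        if robot.predict_emotion(robot.capture_frame()) == "happy":
            robot.spin()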

PERFORMANCE

I’m pleased the Raspberry Pi can recognize facial actions outside of the CK+ dataset. The recognition, though, depends on lighting conditions and distance from the camera. If a person stands very far from the camera, the algorithm will not detect significant differences in facial features and so will not predict emotions properly.

The robot could be faster, or rather, the Pi has too much to do at once, and another processor would help a great deal. Once the robot commits to a single task, be it moving a servo or filming, it runs in real time. I wondered whether Tensorflow v1.8 and OpenCV might have slowed the Pi down, but despite those large modules the Pi performs each task quickly. In another version, I would like an additional microprocessor to handle the lower-level functions.

SOFTWARE

I ran Raspbian Stretch OS on the Raspberry Pi 3b. All scripting was done in Python 3.

I installed full Tensorflow on the Raspberry Pi because it was much easier than installing Tensorflow Lite. While Lite ought to be much faster, installing it on the Pi is not covered in the documentation; I asked on GitHub, and the Tensorflow dev team is working on it. I expected installing Tensorflow on the Pi to be complicated, since several tutorials deal with many bugs, but Tensorflow provides a wheel for installing v1.8. I followed this tutorial.

On a side note, I did install Tensorflow Lite on the Pi by cross-compiling with cmake, following this guide. I can run TF Lite from the Raspberry Pi’s command line for C++ programs, but not for Python. Cross-compilation is an important technique for programming mobile devices and is fitting for the Pi, but for speed of iteration I preferred to program directly on the Pi.

To install OpenCV, I used this guide. It is a popular guide and it worked well for me, though the install took about four hours.
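
As a quick sanity check (my own addition, not part of either guide), importing both libraries in Python 3 and printing their versions confirms that the installs are visible:

# Sanity check after installing the Tensorflow wheel and building OpenCV.
# Run with python3 on the Pi; the expected version numbers are assumptions.
import tensorflow as tf
import cv2

print("Tensorflow:", tf.__version__)  # expect 1.8.x from the wheel
print("OpenCV:", cv2.__version__)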

HARDWARE

The robot is built from several robotshop.com parts and some from the hardware store. The parts include:

Raspberry Pi 3b

Raspberry Pi Camera V2

2WD MiniQ Robot Chassis Kit

Adafruit mini-pan tilt kit with micro servos

Servo and motor driver hat by Geekroo RB-Gek-11

Balsa wood for the levels. I wanted something really lightweight, though fiberglass would have been sturdier. Maybe for an upgrade.

Several 3mm screws which were slightly too large for the chassis.

Smaller 2.5mm screws for the Pi, which has holes 2.9mm in diameter.

5V 2.5A power supply (I used CanaKit) to power the Raspberry Pi 3 during development, when I didn’t want to use the lithium battery.

4 AA batteries in a battery pack to power the motors.

Powerboost 1000C by Adafruit, a board I used to power the Pi from the lithium ion battery.

Adafruit Lithium Ion Polymer Battery, 3.7V, 2500mAh. The connector got terribly stuck in the Powerboost board: two nubs lock it into the 2-pin JST-PH port, so I filed off the nubs to ease development.

Micro USB cable.

Deciphering Emotions with Machine Learning and Raspberry Pi

Curious about machine-human interactions, I wanted to see how computer vision could interpret human emotions. I trained a neural network following the Facial Action Unit (AU) literature; the network’s accuracy is 89%. My Tensorflow neural network is available on GitHub.

In the video, you see my Tensorflow neural network recognize my AUs and determine my basic emotion. I was happy that the algorithm worked reasonably well when analyzing my face. Some AUs are easier for the script to find than others, and you can see that when the video flickers from one AU to another.

FACS

The Facial Action Coding System (FACS) maps the face in terms of facial movements. As advocated by Ekman, each basic emotion (happiness, sadness, anger, fear, surprise, disgust, and contempt) corresponds to a characteristic combination of facial action units (AUs), and these facial actions are universal across different cultures.
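
For reference, commonly cited EMFACS-style prototypes map the basic emotions onto AU combinations roughly as follows; these are approximate and are not the exact labels I trained on.

# Approximate, commonly cited (EMFACS-style) AU prototypes for the basic emotions.
# These are illustrative only, not the labels used in training.
EMOTION_AUS = {
    "happiness": [6, 12],
    "sadness":   [1, 4, 15],
    "surprise":  [1, 2, 5, 26],
    "fear":      [1, 2, 4, 5, 7, 20, 26],
    "anger":     [4, 5, 7, 23],
    "disgust":   [9, 15, 17],
    "contempt":  [12, 14],  # unilateral (R12A + R14A)
}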

I began the project training networks for the basic emotions, but I found training for AUs more compelling: the target shifted from an abstract label to an analytic one.

BACKGROUND

While I had analyzed laser images in my physics research, facial sentiment recognition was new to me, and I found forums, GitHub, and research papers incredibly helpful. I planned to use a convolutional neural network, having read Chu, Torre, and Cohn’s paper on combining CNNs with LSTMs for AU recognition. For faces in the wild, I think CNNs and LSTMs are favorable: CNNs are tolerant of translations and rotations, and LSTMs help CNNs capture long-term dependencies, which they struggle with on their own. But I also found papers showing very good results with more analytic methods, and I wanted to explore those, as I was surprised facial expressions could be analyzed almost analytically.

I enjoyed the work by Kotsia and Pitas on classifying AUs with a modified SVM and the displacements of a Candide grid. Bartlett, Littlewort, Frank, Lainscsek, Fasel, and Movellan applied convolutions with Gabor filters and analyzed the images with SVMs and AdaBoost. Tian, Kanade, and Cohn calculated facial distances, such as eye height, from facial landmarks and used a neural network to classify AUs. I preferred Tian’s method of predicting AUs with a feed-forward MLP.

METHODS

The neural network was trained on the displacements of facial landmarks in the CK+ database for AUs. Facial landmarks are 68 points at key positions on the face, such as the corners of the lips and the ends of the eyebrows. The CK+ database contains image sequences of faces, starting at a neutral expression and ending at the emotion’s maximum intensity. All subjects face forward in similar indoor lighting conditions.

Using Python’s dlib library, the algorithm detected the 68 facial landmarks. Following Tian’s paper, the facial landmarks were used to calculate key distances such as eyelid height. In my experience, it’s easier to know what a person is feeling as I get to know them, so I set the algorithm to use the difference between the neutral expression and the intense emotional expression as the input data to the neural network. This is similar to what Tian’s group did, though they trained their algorithm with two databases rather than one. Because the CK+ database is unbalanced, with a ratio of about 1:4 positive to negative values, I weighted the positive values in the loss function.
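
The sketch below illustrates that pipeline in Python, assuming dlib’s standard 68-point shape predictor file; the particular distances and landmark indices are examples rather than my exact feature set.

# Illustrative feature pipeline: dlib finds 68 landmarks, a few Tian-style
# distances are computed, and the network input is the difference between the
# expressive and neutral frames. Not the exact script from the repository.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(gray):
    """Return the 68 (x, y) landmark points for the first detected face, or None."""
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])

def distances(pts):
    """A few example distances using the usual 68-point index layout."""
    def d(a, b):
        return np.linalg.norm(pts[a] - pts[b])
    return np.array([
        d(37, 41),  # right eye opening
        d(44, 46),  # left eye opening
        d(19, 37),  # right brow-to-eye distance
        d(24, 44),  # left brow-to-eye distance
        d(51, 57),  # mouth opening
        d(48, 54),  # mouth width
    ])

def features(neutral_gray, expressive_gray):
    """Network input: displacement of key distances relative to the neutral face."""
    n, e = landmarks(neutral_gray), landmarks(expressive_gray)
    if n is None or e is None:
        return None
    return distances(e) - distances(n)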

RESULTS

The algorithm is a multi-label binary classifier: each AU is predicted as either present or absent. Accuracy should be measured by the True Positive Rate (TPR), as it is much more important to detect an AU that is present than to confirm that one is absent. The TPR for AUs in the upper half of the face was 87%, and for AUs in the lower half it was 91%. The script is available on GitHub.
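
For reference, a per-AU TPR can be computed with a small helper like this (illustrative, not the evaluation code in the repository):

# Per-AU True Positive Rate for a multi-label classifier with 0/1 outputs per AU.
import numpy as np

def true_positive_rate(y_true, y_pred):
    """TPR (recall) per AU column: TP / (TP + FN)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.logical_and(y_true, y_pred).sum(axis=0)
    fn = np.logical_and(y_true, ~y_pred).sum(axis=0)
    return tp / np.maximum(tp + fn, 1)

# Example: three samples, two AU columns.
y_true = [[1, 0], [1, 1], [0, 1]]
y_pred = [[1, 0], [0, 1], [0, 1]]
print(true_positive_rate(y_true, y_pred))  # -> [0.5 1. ]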

N-body Gravity Simulation in Python


Using NASA data for initial conditions, I modeled the orbits of the solar system based on gravitational interactions. To check the simulation, the model outputs the total energy and total angular momentum, both of which must be conserved. Mindful of roundoff error, I chose the leapfrog integration method, which keeps the energy error bounded. The plots of total energy and angular momentum oscillate tightly about their initial values. The script is available on GitHub.
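
Here is a minimal sketch of the kick-drift-kick leapfrog step and the energy check, assuming positions and velocities stored as N x 3 numpy arrays in SI units; the actual script uses NASA ephemeris data and its own unit conventions.

# Minimal leapfrog N-body sketch (kick-drift-kick), not the full simulator.
import numpy as np

G = 6.674e-11  # SI units (assumed here; the real script uses ephemeris units)

def accelerations(x, m):
    """Pairwise Newtonian gravitational accelerations."""
    n = len(m)
    a = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = x[j] - x[i]
                a[i] += G * m[j] * r / np.linalg.norm(r) ** 3
    return a

def leapfrog_step(x, v, m, dt):
    """One kick-drift-kick step; symplectic, so the energy error stays bounded."""
    v_half = v + 0.5 * dt * accelerations(x, m)
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * accelerations(x_new, m)
    return x_new, v_new

def total_energy(x, v, m):
    """Kinetic plus pairwise potential energy, used as a conservation check."""
    ke = 0.5 * np.sum(m * np.sum(v ** 2, axis=1))
    pe = 0.0
    n = len(m)
    for i in range(n):
        for j in range(i + 1, n):
            pe -= G * m[i] * m[j] / np.linalg.norm(x[i] - x[j])
    return ke + pe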