This runs a simple robot with a web server on a Raspberry Pi using the Adafruit Motor HAT. I wrote this up for myself for fun and to help me remember how I set things up.
This is all designed for a Raspberry Pi 3 with the Adafruit Motor HAT driving the wheels and the Adafruit Servo HAT driving the arms.
I use cheap HC-SR04 sonar sensors, but I think other models will work fine.
Pretty much any chassis with DC motors (4WD or 2WD) works well.
- robot.py runs commands from the command line
- sonar.py tests sonar sensors wired into the GPIO ports
- drive_server.py runs a web server for controlling the robot's wheels and arms
- drive_safe.py runs a simple obstacle avoidance algorithm
- inception_server.py runs an image-classifying microservice
If you want to use the default sonar configuration, wire it as follows:

- Left sonar: trigger GPIO 23, echo GPIO 24
- Center sonar: trigger GPIO 17, echo GPIO 18
- Right sonar: trigger GPIO 22, echo GPIO 27

You can easily change this, but this is what wheels.py expects.
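If you want to sanity-check a single sensor before running sonar.py, a minimal HC-SR04 read with RPi.GPIO looks something like this (a sketch using the center sonar's pins from above, BCM numbering):

```python
import time
import RPi.GPIO as GPIO

TRIGGER, ECHO = 17, 18  # center sonar pins from the table above

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    # a 10 microsecond pulse on the trigger pin starts a measurement
    GPIO.output(TRIGGER, True)
    time.sleep(0.00001)
    GPIO.output(TRIGGER, False)
    # time how long the echo pin stays high
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()
    # sound travels ~34300 cm/s; halve for the round trip
    return (stop - start) * 34300 / 2

print(distance_cm())
GPIO.cleanup()
```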
The Motor HAT channels are expected to be wired as:

- M1 - Front Left
- M2 - Back Left (optional)
- M3 - Back Right (optional)
- M4 - Front Right
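As a quick smoke test outside of robot.py, something like this should spin the front-left wheel (a sketch assuming the legacy Adafruit_MotorHAT Python library; 0x60 is the HAT's default I2C address):

```python
import time
from Adafruit_MotorHAT import Adafruit_MotorHAT

mh = Adafruit_MotorHAT(addr=0x60)
front_left = mh.getMotor(1)  # M1 - Front Left

front_left.setSpeed(150)  # speed range is 0-255
front_left.run(Adafruit_MotorHAT.FORWARD)
time.sleep(1)
front_left.run(Adafruit_MotorHAT.RELEASE)  # stop and free the motor
```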
There are a ton of articles on how to do basic setup of a Raspberry Pi - a good one is here: https://www.howtoforge.com/tutorial/howto-install-raspbian-on-raspberry-pi/
You will need to turn on I2C and optionally the camera:

sudo raspi-config
Next you will need to install the I2C tools and SMBus bindings:
sudo apt-get install i2c-tools python-smbus python3-smbus
Test that your HAT is attached and visible with:
i2cdetect -y 1
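If the Motor HAT is attached correctly you should see a 60 in the printed grid - 0x60 is the Motor HAT's default I2C address (the Servo HAT defaults to 0x40).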
Install the Python dependencies:
pip install -r requirements.txt
At this point you should be able to drive your robot locally. Try:
./robot.py forward
To run a web server in the background with a camera, you need to set up gunicorn and nginx.
Nginx is a lightweight, fast reverse proxy. We store the camera image in RAM and serve it up directly; this was the only way I was able to get any kind of decent fps from the Raspberry Pi camera. We also proxy to gunicorn so that the user can control the robot from a web page.
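The real configuration lives in nginx/nginx.conf in the repo; conceptually it does something like this (a sketch, not the actual file - the gunicorn port here is an assumption):

```nginx
server {
    listen 80;

    # frames land in /dev/shm (RAM-backed), so serving /cam.jpg is just a memory read
    location /cam.jpg {
        root /dev/shm/mjpeg;
    }

    # everything else goes to the gunicorn app that drives the robot
    location / {
        proxy_pass http://127.0.0.1:8000;  # assumed bind address - check the repo's config
    }
}
```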
Copy the configuration file from nginx/nginx.conf to /etc/nginx/nginx.conf:
sudo cp nginx/nginx.conf /etc/nginx/nginx.conf
Reload nginx:
sudo nginx -s reload
Install gunicorn:

sudo pip install gunicorn
Copy the service file from services/web.service to /etc/systemd/system/web.service:
sudo cp services/web.service /etc/systemd/system/web.service
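services/web.service in the repo is the authoritative unit file; a unit for this kind of setup looks roughly like the following (the paths, port, and drive_server:app entry point here are assumptions, so check the real file):

```ini
[Unit]
Description=Robot web server (gunicorn)
After=network.target

[Service]
# paths and port are illustrative; the repo's services/web.service is authoritative
WorkingDirectory=/home/pi/robot
ExecStart=/usr/local/bin/gunicorn --bind 127.0.0.1:8000 drive_server:app
Restart=always

[Install]
WantedBy=multi-user.target
```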
Start the gunicorn web app service:
sudo systemctl daemon-reload
sudo systemctl enable web
sudo systemctl start web
Your web service should now be running. You can try driving your robot with the buttons or arrow keys.
To stream from the camera you can use RPi Cam Web Interface. It's documented at http://elinux.org/RPi-Cam-Web-Interface, but you can also just run the following:
git clone https://github.com/silvanmelchior/RPi_Cam_Web_Interface.git
cd RPi_Cam_Web_Interface
chmod u+x *.sh
./install.sh
Now a stream of images from the camera should be constantly updating a file in /dev/shm/mjpeg, and nginx will serve up the image directly if you request localhost/cam.jpg.
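As a quick check that frames are actually flowing, you can read the latest frame straight out of RAM (cam.jpg is the default filename RPi Cam Web Interface writes; adjust if yours differs):

```python
# read the most recent camera frame from the RAM-backed filesystem
with open('/dev/shm/mjpeg/cam.jpg', 'rb') as f:
    frame = f.read()
print('latest frame is', len(frame), 'bytes')
```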
There is a great project at https://github.com/samjabrahams/tensorflow-on-raspberry-pi that gives instructions for installing TensorFlow on the Raspberry Pi. Recently it's gotten much easier; just run:
wget https://github.com/samjabrahams/tensorflow-on-raspberry-pi/releases/download/v0.11.0/tensorflow-0.11.0-cp27-none-linux_armv7l.whl
sudo pip install tensorflow-0.11.0-cp27-none-linux_armv7l.whl
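You can verify the install with a quick import:

python -c "import tensorflow as tf; print(tf.__version__)"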
Next, start a TensorFlow service that loads up the Inception model and does object recognition with it:
sudo cp services/inception.service /etc/systemd/system/inception.service
sudo systemctl daemon-reload
sudo systemctl enable inception
sudo systemctl start inception
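The Inception model takes a while to load on a Pi, so give it a minute, then confirm the service came up with:

sudo systemctl status inception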