# MAE 148 Final Project Fall 2024 Team 15
- Team Members
- Abstract
- What We Promised
- Accomplishments
- Challenges
- Videos
- Running the Repository
- Software
- Hardware
- Progress Updates
- Acknowledgements
- Contact
## Team Members

Anurag Gajaria - MAE Controls & Robotics (MC34) - Class of 2025 - LinkedIn
Jimmy Nguyen - MAE Controls & Robotics (MC34) - Class of 2025
Michael Ramirez - MAE Controls & Robotics (MC34) - Class of 2025
Jingnan Huang - MAE
## Abstract

This project's goal is to detect a selected individual and follow them at a set distance. Once that individual makes a certain gesture (in this case a fist), the robot starts a 3-second countdown and takes a picture. The picture is saved to a local folder until it can be uploaded to Google Drive.

This project uses Roboflow and DepthAI for person detection, person following, and gesture identification. It also uses Google Cloud Console to create an application that uploads pictures to the project's Google Drive.
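The flow described above can be reduced to a small loop: watch the gesture labels, and on a fist start the countdown, capture a frame, and queue the picture for upload. The sketch below is illustrative only (`capture`, `save_local`, and the `"fist"` label are stand-ins, not names from our repo):

```python
import time

# Illustrative sketch of the capture pipeline, not code from the repo.
TRIGGER_GESTURE = "fist"  # assumed label produced by the gesture model
COUNTDOWN_S = 3           # countdown length from the write-up


def run_pipeline(gesture_stream, capture, save_local,
                 countdown_s=COUNTDOWN_S, sleep=time.sleep):
    """On each trigger gesture: wait out the countdown, capture a frame,
    and save it locally until the uploader can push it to Drive."""
    saved = []
    for gesture in gesture_stream:
        if gesture == TRIGGER_GESTURE:
            sleep(countdown_s)                   # 3-second countdown
            saved.append(save_local(capture()))  # queue for later upload
    return saved
```

Injecting `sleep`, `capture`, and `save_local` keeps the logic testable without a camera or a real clock.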
## What We Promised

- Detect a person and follow them at a set distance
- Detect the appropriate gesture and take a picture
- Upload picture to Google Drive
- Focus only on the closest person
- Have multiple gestures, one to stop and one to go
## Accomplishments

- Person Detection and Following
- We were able to create a DepthAI model that first finds a person and then follows behind them at a set distance of 1 meter.
- Gesture Detection
- We were able to set up a gesture detection model that could identify rock, paper, and scissors.
- We were able to use the rock gesture to take pictures.
- Picture Upload
- We were able to integrate a data upload node that continually verifies the internet connection and then uploads the taken pictures to Google Drive.
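As a rough sketch of how an upload node like this can be structured (illustrative names, not our actual node), the connectivity check can be a cheap TCP probe, and the uploader only drains the local picture queue while that probe succeeds:

```python
import socket

# Illustrative sketch of the data-upload node's loop, not the repo's code.

def is_online(host="8.8.8.8", port=53, timeout=3.0):
    """Cheap connectivity check: try opening a TCP socket to a public
    DNS server. The host/port here are assumptions, not the repo's choice."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def drain_queue(queue, online_check, upload):
    """Upload queued pictures only while a connection is available;
    anything left stays queued for the next pass."""
    uploaded = []
    while queue and online_check():
        uploaded.append(upload(queue.pop(0)))
    return uploaded
```

In the real node, `upload` would wrap the Google Drive API client installed later in the setup steps.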
## Challenges

- We had a hard time getting the camera to focus on only one person; the model would identify all people in frame and move toward the center of the group.
- The gesture detection node and the person detection node would try to use the same camera, requiring us to launch the nodes separately.
- The Google project's token would expire every hour, forcing us to manually create a new token every hour.
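A common workaround, which we did not implement, is to refresh the token automatically instead of minting a new one by hand (the `google-auth` library supports this via `Credentials.refresh`). The stdlib-only sketch below shows the bookkeeping involved; all names are illustrative:

```python
from datetime import datetime, timedelta

# Illustrative token-expiry bookkeeping, not the project's code.
TOKEN_LIFETIME = timedelta(hours=1)  # the hourly expiry we observed


class Token:
    def __init__(self, value, issued_at):
        self.value = value
        self.expires_at = issued_at + TOKEN_LIFETIME

    def is_expired(self, now):
        return now >= self.expires_at


def get_valid_token(token, now, refresh):
    """Return the current token, refreshing it first if it has expired.
    `refresh` stands in for the OAuth refresh-token exchange."""
    return refresh() if token.is_expired(now) else token
```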
## Running the Repository

If you are interested in reproducing our project, here are a few steps to get you started with our repo:
- Follow the instructions in the UCSD Robocar Framework Guidebook, then pull the `devel` image on your Jetson Nano:

  ```
  docker pull djnighti/ucsd_robocar:devel
  ```
- Update and upgrade existing packages:

  ```
  sudo apt update && sudo apt upgrade
  ```

  Make sure you upgrade the packages, or else it won't work. If you run into a GPG error, this may help: https://askubuntu.com/questions/1433368/how-to-solve-gpg-error-with-packages-microsoft-com-pubkey

  Check that `slam_toolbox` is installed and launchable:

  ```
  sudo apt install ros-foxy-slam-toolbox
  source_ros2
  ros2 launch slam_toolbox online_async_launch.py
  ```

  Output should be similar to:

  ```
  [INFO] [launch]: All log files can be found below /root/.ros/log/2024-03-16-03-57-52-728234-ucsdrobocar-148-07-14151
  [INFO] [launch]: Default logging verbosity is set to INFO
  [INFO] [async_slam_toolbox_node-1]: process started with pid [14173]
  [async_slam_toolbox_node-1] 1710561474.218342 [7] async_slam: using network interface wlan0 (udp/192.168.16.252) selected arbitrarily from: wlan0, docker0
  [async_slam_toolbox_node-1] [INFO] [1710561474.244055467] [slam_toolbox]: Node using stack size 40000000
  [async_slam_toolbox_node-1] 1710561474.256172 [7] async_slam: using network interface wlan0 (udp/192.168.16.252) selected arbitrarily from: wlan0, docker0
  [async_slam_toolbox_node-1] [INFO] [1710561474.517037334] [slam_toolbox]: Using solver plugin solver_plugins::CeresSolver
  [async_slam_toolbox_node-1] [INFO] [1710561474.517655574] [slam_toolbox]: CeresSolver: Using SCHUR_JACOBI preconditioner.
  ```
- Since we upgraded all existing packages, we need to rebuild the VESC package under `/home/projects/sensor2_ws/src/vesc/src/vesc`:

  ```
  cd /home/projects/sensor2_ws/src/vesc/src/vesc
  git pull
  git switch foxy
  ```

  Make sure you are on the `foxy` branch. Then build a first time under `sensor2_ws/src/vesc/src/vesc`:

  ```
  colcon build
  source install/setup.bash
  ```

  Then a second time, but under `sensor2_ws/src/vesc`:

  ```
  cd /home/projects/sensor2_ws/src/vesc
  colcon build
  source install/setup.bash
  ```

  Now try `ros2 pkg xml vesc` and check that the VESC package version has come to 1.2.0.
- Install the Navigation 2 package and related packages:

  ```
  sudo apt install ros-foxy-navigation2 ros-foxy-nav2* ros-foxy-robot-state-publisher ros-foxy-joint-state-publisher
  ```
- Pull this repository:

  ```
  cd /home/projects/ros2_ws/src
  git clone https://github.com/UCSD-ECEMAE-148/fall-2024-final-project-team-15.git
  cd final_project/
  ```
- Create a Google Cloud Console account. Once you have an account, create a project.
- Enable the Google Drive API:
- In the API & Services section, click Library.
- Search for "Google Drive API".
- Select it and click Enable.
- Create Credentials
  - Navigate to API & Services > Credentials.
  - Create OAuth 2.0 Credentials:
    - Click on Create Credentials → OAuth client ID.
    - If prompted, configure the OAuth consent screen:
      - Application name: Enter a name (e.g., `ROS2 Drive App`).
      - Add any necessary information and save.
    - For Application type, choose Desktop app.
    - Name the OAuth client (e.g., `data_uploader`) and click Create.
- Download Credentials and upload it to the drive:
  - After creating the credentials, click Download JSON to save the file, and name it `creds.json`.
  - Keep this file secure as it contains sensitive information.
- Install the necessary requirements:

  ```
  pip install google-api-python-client google-auth google-auth-oauthlib google-auth-httplib2
  ```

- Remove the placeholder `token.JSON` and run `authenticate.py`.
- Run the software using:

  ```
  ros2 launch final_project_pkg person_follower_pkg_launch.launch.py
  ```
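For intuition, the follower launched above can be thought of as two proportional terms: steer toward the person's bounding-box center, and set throttle from the error between the camera's depth estimate and the 1 m target distance. The gains and function below are assumptions for illustration, not the node in `final_project_pkg`:

```python
# Illustrative sketch of a proportional person-follower control step.
TARGET_DISTANCE_M = 1.0   # set following distance from the write-up
K_STEER = 1.0             # assumed proportional gains, not tuned values
K_THROTTLE = 0.5


def follow_command(box_center_x, frame_width, distance_m):
    """Return (steering, throttle) in [-1, 1] for one detection."""
    # Horizontal error in [-1, 1]: negative means the person is left of center.
    err_x = (box_center_x - frame_width / 2) / (frame_width / 2)
    steering = max(-1.0, min(1.0, K_STEER * err_x))
    # Positive distance error means the person is too far away: drive forward.
    err_d = distance_m - TARGET_DISTANCE_M
    throttle = max(-1.0, min(1.0, K_THROTTLE * err_d))
    return steering, throttle
```

Both outputs are clamped to [-1, 1], matching the normalized commands a VESC-style drivetrain typically expects.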
## Software

### Roboflow

Computer vision platform for dataset preparation, model training, and deployment of computer vision models.

### DepthAI

Computer vision AI platform that runs on edge devices like the Jetson Nano with the Luxonis OAK-D Lite camera. Utilizes the depth capabilities of the OAK-D Lite camera.

### Google Cloud Console

Used to create a Google application that uploads pictures to Google Drive.
## Hardware

- 3D Printing: Camera Stands and Jetson Nano Case
- Laser Cut: Base plate to mount electronics and other components.
### Parts List
- Traxxas Chassis with steering servo and sensored brushless DC motor
- Jetson Nano
- WiFi adapter
- 64 GB Micro SD Card
- Adapter/reader for Micro SD Card
- Logitech F710 controller
- OAK-D Lite Camera
- LD19 Lidar (LD06 Lidar)
- VESC
- Point One GNSS with antenna
- Anti-spark switch with power switch
- DC-DC Converter
- 4-cell LiPo battery
- Battery voltage checker/alarm
- DC Barrel Connector
- XT60, XT30, MR60 connectors
### Additional Parts Used for Testing/Debugging
- Car stand
- USB-C to USB-A cable
- Micro USB to USB cable
- 5V, 4A power supply for Jetson Nano
## Acknowledgements

Thank you to Professor Jack Silberman and our incredible TAs Alexander, Winston, and Vivek for an amazing Fall 2024 class!
## Contact

- Anurag Gajaria | [email protected]
- Michael Ramirez | [email protected]
- Jimmy Nguyen | [email protected]
- Jingnan Huang