
Cal Poly Pomona Label Robot!

Introduction

Hello, we are the Cal Poly Pomona College of Engineering robotics team behind a project known as the BANSHEE UAV Drone Project. We have teamed up with ROBOTIS and are using their DYNAMIXEL actuators to create a label robot that will take a label and apply it to a cardboard surface. We gratefully acknowledge ROBOTIS for sponsoring this project.

My name is Water, and I will do my best to post updates on our team's progress every two weeks. Our group was formed only recently, so we are still working out the kinks and debugging issues as they arise. Since this is an ongoing project, we are bound to make mistakes here and there, but I will keep fixing things up as we go.

Goals

April 9 - April 23, 2021

  1. Software
  • [ ] Create Development Environment

  • [ ] Create Working Configuration

  2. Hardware
  • [ ] Physical Design Specifications

Steps on how to accomplish the goals

  1. Software

    • For software, all the instructions on how to set up the environment will be posted on our GitHub.

    • Brief summary of the computer architecture we are using

      • Linux - Ubuntu

      • Jetson TX2

    • Other Apps that we are using!

      • CUDA - v.11.0

      • OpenCV - v.4.5.1

      • TensorFlow - v.2.4.0

      • TeamViewer - remote testing

      • GitHub - storing all of our research

  2. Hardware

    • Mapping out the CAD files using this link

    • We are currently using SolidWorks to draw our CAD files; these will be updated in future posts.

    • Create a script that will test whether the motors are working. The code will be provided once it has been completed, tested, and confirmed to work reliably; a minimal motor-test sketch is included after the component list below.

    • Hardware Components:

      • Dynamixel XM430-W350R

      • Dynamixel Starter Kit (U2D2 + U2D2 PHB Set + SMPS 12V 5A PS-10)

      • Creality Ender 3 3D Printer Fully Open Source with Resume Printing All Metal Frame FDM DIY Printers 220x220x250mm

      • Webcam - Logitech C920
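
As a first pass at the motor-test script mentioned above, here is a minimal sketch using the DYNAMIXEL SDK Python API. It only pings a single XM430-W350R, commands a mid-range position, and reads the position back. The port name, baud rate, and motor ID below are assumptions (a U2D2 on /dev/ttyUSB0 with factory defaults), not values from our final setup.

```python
# Minimal "is the motor alive?" check for one DYNAMIXEL XM430-W350R.
# Assumptions: U2D2 on /dev/ttyUSB0, Protocol 2.0, factory baud rate and ID.
import time
from dynamixel_sdk import PortHandler, PacketHandler

DEVICENAME = '/dev/ttyUSB0'      # serial port the U2D2 shows up as
BAUDRATE = 57600                 # XM430 factory default
PROTOCOL_VERSION = 2.0
DXL_ID = 1                       # factory default ID

ADDR_TORQUE_ENABLE = 64          # XM430-W350 control table addresses
ADDR_GOAL_POSITION = 116
ADDR_PRESENT_POSITION = 132

port = PortHandler(DEVICENAME)
packet = PacketHandler(PROTOCOL_VERSION)

if not port.openPort() or not port.setBaudRate(BAUDRATE):
    raise RuntimeError('Could not open ' + DEVICENAME)

# Ping the servo to confirm it is powered and responding on the bus.
model, comm_result, error = packet.ping(port, DXL_ID)
if comm_result != 0:
    raise RuntimeError(packet.getTxRxResult(comm_result))
print('Found DYNAMIXEL ID %d (model number %d)' % (DXL_ID, model))

# Enable torque and command a mid-range goal position (0-4095 ticks).
packet.write1ByteTxRx(port, DXL_ID, ADDR_TORQUE_ENABLE, 1)
packet.write4ByteTxRx(port, DXL_ID, ADDR_GOAL_POSITION, 2048)
time.sleep(1.0)                  # give the horn a moment to move

position, _, _ = packet.read4ByteTxRx(port, DXL_ID, ADDR_PRESENT_POSITION)
print('Present position: %d' % position)

# Disable torque before closing so the horn can be moved by hand again.
packet.write1ByteTxRx(port, DXL_ID, ADDR_TORQUE_ENABLE, 0)
port.closePort()
```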

Software

  • As of right now, the software team is working on installing CUDA, OpenCV, and TensorFlow on a computer that most of the members access remotely via TeamViewer.

  • We currently have CUDA and OpenCV installed, but when we run commands to test whether the two are integrated correctly, there seem to be some issues with the CUDA path; a quick sanity check is sketched at the end of this section.

  • We are planning to use TensorFlow for the learning algorithm since it can be integrated with YOLOv3. More updates on this in the future.
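
For anyone running into the same path issues, here is the kind of quick sanity check we run. This is a generic sketch using standard OpenCV and TensorFlow calls, not code from our repository, and the paths in the comments assume a typical CUDA install location.

```python
# Quick check that OpenCV and TensorFlow can actually see the CUDA install
# on the Jetson TX2.
import cv2
import tensorflow as tf

print('OpenCV version:', cv2.__version__)
# Non-zero only if OpenCV was built with the CUDA modules enabled.
print('CUDA devices visible to OpenCV:', cv2.cuda.getCudaEnabledDeviceCount())

print('TensorFlow version:', tf.__version__)
print('TensorFlow built with CUDA:', tf.test.is_built_with_cuda())
print('GPUs visible to TensorFlow:', tf.config.list_physical_devices('GPU'))

# If any of these come back empty or zero, the usual culprit is the CUDA path:
# /usr/local/cuda/bin needs to be on PATH and /usr/local/cuda/lib64 on
# LD_LIBRARY_PATH before OpenCV is rebuilt or TensorFlow is reinstalled.
```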

Hardware

  • Currently waiting for the Dynamixels to be delivered to the respective members on the Hardware Team.

  • Modifications to the CAD models are about 50% complete. Adjustments are still being made and should be finished soon.

Overall Summary of the Group's Progress

As of right now, the software side is still setting up the environment and getting it to run efficiently, while the hardware side is still waiting for the parts to be delivered by ROBOTIS. Hopefully, by the next update, we will be able to provide more details on how far we have managed to get.

Edited by Water

Label Robot

Biweekly Update: April 23 - May 7, 2021

Summary

Hello everyone! Water here! Here is an update on our project for these two weeks. We have made great progress on the software side in terms of switching between the different states used to view objects. On the hardware side, our members have just received the Dynamixel XM430-W350R and are working on getting their environment set up.

For the next update, our team will do our best to post what we have on time. With finals coming up, we will most likely take finals week off to focus on exams before coming back together to work on the project.

Hardware Updates

  • The hardware team has just received their Dynamixel XM430-W350R and is learning how to use the U2D2 to communicate with the Dynamixel. There are no further updates yet, but hopefully by the next blog post we can share more about what they were able to find.

Software Updates

  • The software team has made great progress with their code: toggling between different states, creating the GUI, and testing whether the center of a detected object has reached a randomly generated target point.

    • If you would like to visit our GitHub page, click Here.
  • If you would like to see some of our test images, you can follow this path here:

UAV_Robotics_Team > Spring 2021 > Software > webcam-opencv > Official > data > images

  • Another thing we created was the GUI for our remote control. In officialv3.py this remote control lets the user toggle between the different states, and in officialv4.py we modified it so you only need to toggle once to open or close the window for each state. We also included a trackbar that lets the user adjust the HSV (Hue, Saturation, and Value) range so the webcam picks out one distinct color at a time. By letting the user change the HSV range, we can detect objects of different colors and fade out the colors we don't want the webcam to pick up. A minimal sketch of this HSV masking idea follows this list.

  • Here is the path for officialv3.py and officialv4.py: UAV_Robotics_Team > Spring 2021 > Software > webcam-opencv > Official

  • To run the file, type python3 officialv3.py into the terminal window once you are in the correct directory.

  • To view the demonstration video for this version of the project, please click Here
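
As a rough illustration of the HSV trackbar idea described above, here is a minimal, self-contained sketch. The window names, trackbar labels, and keyboard shortcuts are ours for illustration only and are not taken from officialv3.py or officialv4.py.

```python
# Minimal HSV color-masking demo with trackbars (illustrative only).
import cv2
import numpy as np

def nothing(_):
    pass

# Trackbars for the lower/upper HSV bounds. Hue runs 0-179 in OpenCV,
# while saturation and value run 0-255.
cv2.namedWindow('controls')
cv2.createTrackbar('H low', 'controls', 0, 179, nothing)
cv2.createTrackbar('H high', 'controls', 179, 179, nothing)
cv2.createTrackbar('S low', 'controls', 0, 255, nothing)
cv2.createTrackbar('S high', 'controls', 255, 255, nothing)
cv2.createTrackbar('V low', 'controls', 0, 255, nothing)
cv2.createTrackbar('V high', 'controls', 255, 255, nothing)

cap = cv2.VideoCapture(0)        # default webcam (e.g. the Logitech C920)
show_mask = True                 # toggled with the 'm' key

while True:
    ok, frame = cap.read()
    if not ok:
        break

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = np.array([cv2.getTrackbarPos('H low', 'controls'),
                      cv2.getTrackbarPos('S low', 'controls'),
                      cv2.getTrackbarPos('V low', 'controls')])
    upper = np.array([cv2.getTrackbarPos('H high', 'controls'),
                      cv2.getTrackbarPos('S high', 'controls'),
                      cv2.getTrackbarPos('V high', 'controls')])

    # Keep only the pixels inside the chosen HSV range; everything else fades out.
    mask = cv2.inRange(hsv, lower, upper)
    result = cv2.bitwise_and(frame, frame, mask=mask)

    cv2.imshow('camera', frame)
    if show_mask:
        cv2.imshow('filtered', result)

    key = cv2.waitKey(1) & 0xFF
    if key == ord('m'):          # toggle the filtered-view window open/closed
        show_mask = not show_mask
        if not show_mask:
            cv2.destroyWindow('filtered')
    elif key == ord('q'):        # quit
        break

cap.release()
cv2.destroyAllWindows()
```

Pressing m opens and closes the filtered window, which loosely mirrors the single-key toggling described for officialv4.py, and q quits.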

Goals

  • Hardware:

    • The goal for the hardware team in the coming weeks is to experiment with the Dynamixel XM430-W350R and take the measurements we need so we can slowly integrate it with the software.
  • Software:

    • The goal for the software team is to create a server so that the commands produced from the webcam feed can be sent to it, and the server can then pass directions to the hardware, telling it how to move. We are also planning to automate the entire process so we don't have to input directions through the remote control by hand. A rough sketch of this server idea is included below.
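
Since the server has not been designed yet, the snippet below is only a rough sketch of one possible approach: a plain TCP socket that receives one newline-terminated text command per line. The host, port, and command format are placeholders, not a decided protocol.

```python
# Rough sketch of a command server for the hardware side (placeholder design).
import socket

HOST, PORT = '0.0.0.0', 5000     # illustrative address and port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(1)
    print('Waiting for the vision client on port %d...' % PORT)

    conn, addr = server.accept()
    with conn:
        print('Vision client connected from', addr)
        buffer = b''
        while True:
            data = conn.recv(1024)
            if not data:
                break
            buffer += data
            # One command per newline-terminated line, e.g. "MOVE 120 45".
            while b'\n' in buffer:
                line, buffer = buffer.split(b'\n', 1)
                command = line.decode().strip()
                print('Received command:', command)
                # TODO: translate the command into DYNAMIXEL goal positions.
```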

This is what I have for this week! See you next time!

Edited by [P.T]

Hi @Lyfae,

Thanks very much for your updates, and for sharing the latest links for your team's projects! The GitHub page will be a great way to keep up to date with your team's code, and the demonstration video you've made does a great job of showing the technology and methods used!

I'm just going to copy your video link again here for interested readers. As a tip, our forums will turn any link pasted alone on a line into a larger tile / preview, which can help keep the link from being overlooked:

I think the test of the vision system and random point generator look great! It will be exciting to see how your team’s work continues with the DYNAMIXEL hardware too!