
Cal Poly Pomona Label Robot!

Introduction

Hello, we are the Cal Poly Pomona College of Engineering robotics team behind a project known as the BANSHEE UAV DRONE Project. We have teamed up with ROBOTIS and are using their Dynamixels to create a label robot that will take a label and apply it to a cardboard surface. We gratefully acknowledge ROBOTIS for sponsoring us.

My name is Water, and I will do my best to update you on our team’s progress biweekly. Our group was formed not too long ago, so we are still working out the kinks and debugging issues as they arise. Granted, this is going to be an ongoing project, so we are bound to make mistakes here and there; however, I will be fixing things up as we go.

Goals

April 9 - April 23, 2021

  1. Software
  • [ ] Create Development Environment

  • [ ] Create Working Configuration

  2. Hardware
  • [ ] Physical Design Specifications

Steps on how to accomplish the goals

  1. Software

    • For software, all the instructions on how to set up the environment will be posted on our GitHub.

    • Brief summary of the computer setup we are using

      • Linux - Ubuntu

      • Jetson TX2

    • Other Apps that we are using!

      • CUDA - v.11.0

      • OpenCV - v.4.5.1

      • TensorFlow - v.2.4.0

      • TeamViewer - remote testing

      • GitHub - storing all of our research

  2. Hardware

    • Mapping out the CAD files using this link

    • We are currently using SolidWorks to draw our CAD files; these will be updated in future posts.

    • Create a script that will test whether the motors are working; the code will be shared once it has been completed, tested, and confirmed to be working optimally.

    • Hardware Components:

      • Dynamixel XM430-W350R

      • Dynamixel Starter Kit (U2D2 + U2D2 PHB Set + SMPS 12V 5A PS-10)

      • Creality Ender 3 3D printer (fully open source, resume printing, all-metal frame, FDM, 220 × 220 × 250 mm build volume)

      • Webcam - Logitech c920

Software

  • As of right now, the software team is working to install CUDA, OpenCV, and TensorFlow onto a computer that most of the members access via TeamViewer.

  • We currently have CUDA and OpenCV installed, but when we run commands to test whether both are integrated correctly, there seem to be some issues with the path.

  • We are planning to use TensorFlow to help with the learning algorithm, since it can be integrated with YOLOv3. More updates on this in the future.

Hardware

  • Currently waiting for the Dynamixels to be delivered to the respective members on the Hardware Team.

  • Modifications to the CAD files are about 50% complete. Adjustments are being made and should be finished soon.

Overall Summary on the Progress of the Group

As of right now, the software side is still trying to set up the environment and have it run efficiently while the hardware side is still waiting for the parts to be delivered to them by ROBOTIS. Hopefully, by the next update, we will be able to provide more details on how far we managed to get.


Label Robot

Biweekly Update: April 23 - May 7, 2021

Summary

Hello Everyone! Water here! Here is an update on our project for these two weeks! We have made great progress on the software side, changing states to view objects and toggling between them. On the hardware side, our members have just received the Dynamixel XM430-W350R and are working on getting the environment set up.

For the next update, our team will try our best to post what we have on time. With finals coming up, we will most likely take finals week off to focus on exams before coming back together to work on the project.

Hardware Updates

  • The hardware team has just received their Dynamixel XM430-W350R and is learning how to use the U2D2 to interact with the Dynamixel. There aren’t any updates yet, but hopefully by the next blog post we can give you more insight into what they were able to find.

Software Updates

  • The software team has made great progress with their code: toggling between different states, creating the GUI, and testing the random-point routine that fires when the center of an object reaches its designated position.

    • If you would like to visit our GitHub page . . . click Here
  • If you would like to see some of our test images, you can follow this path here:

UAV_Robotics_Team > Spring 2021 > Software > webcam-opencv > Official > data > images .

  • Another thing we created was the GUI for our remote control. In officialv3.py, this remote control allows the user to toggle between the different states; in officialv4.py, we modified it so you only need to toggle to open and close the windows for the different states. We also included a trackbar that lets users change the HSV (Hue, Saturation, Value) bounds so the webcam picks up one distinct color at a time. By letting the user change the HSV range, we can detect differently colored objects and fade out the colors we don’t want the webcam to pick up.

  • Here is the path for officialv3.py and officialv4.py: UAV_Robotics_Team > Spring 2021 > Software > webcam-opencv > Official

  • To run the file, type python3 officialv3.py into a terminal window once you are in the correct directory.

  • To view the demonstration video for this version of the project, please click Here
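The HSV filtering that the trackbar drives can be illustrated in plain Python. Here, hsv_in_range is a hypothetical stand-in for the per-pixel test that cv2.inRange performs over a whole frame, and the bounds are made-up example values, not our actual calibration:

```python
def hsv_in_range(pixel, lower, upper):
    """Per-pixel version of the test cv2.inRange applies across a frame:
    True if an (H, S, V) pixel falls inside the inclusive bounds."""
    return all(lo <= p <= hi for p, lo, hi in zip(pixel, lower, upper))

# Hypothetical bounds for a red-ish label; OpenCV stores hue as 0-179.
LOWER, UPPER = (0, 120, 70), (10, 255, 255)

print(hsv_in_range((5, 200, 150), LOWER, UPPER))   # in range -> True
print(hsv_in_range((60, 200, 150), LOWER, UPPER))  # green hue -> False
```

Widening or narrowing these bounds with the trackbar is what fades unwanted colors out of the mask.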

Goals

  • Hardware:

    • The goal for the hardware team in the coming weeks is to experiment with the Dynamixel XM430-W350R and calculate its measurements so we can slowly incorporate it into the software.
  • Software:

    • The goal for the software team is to create a server so we can send the commands that the webcam picks up to the server, which will give directions to the hardware, telling it how it should move. We are also planning to automate the entire process so we don’t have to input directions manually with the remote control.

This is what I have for this week! See you next time!


Hi @Lyfae,

Thanks very much for your updates, and for sharing the latest links for your team’s projects! The GitHub page will be great for keeping up to date with your team’s code, and the demonstration video nicely shows the technology and methods used!

I’m just going to copy your video link again here. As a tip, our forums will turn any link pasted alone on a line into a larger tile / preview, which can help keep the link from being overlooked:

I think the test of the vision system and random point generator look great! It will be exciting to see how your team’s work continues with the DYNAMIXEL hardware too!



Update: May 7 - May 28, 2021

Summary

Hello Everyone! Water here! Sorry for being gone for a couple of weeks. The team had finals, so we decided to put the brakes on this project to focus on them. Now that finals are done and we don’t have any more distractions, we are going full speed ahead! Since the team took a break, there wasn’t much done during this time, but we did manage to make a little progress on both the hardware and software sides.

From here on, updates will be consistent, every two weeks, since we won’t have anything keeping us from posting on time. If something should come up, I will let you know!

Hardware Updates

  • The hardware team has been experimenting with the Dynamixel XM430-W350R and ran into a problem with the serial ports. After some extensive research, one of the members found that the issue was the baud rate: the motor would turn once the baud rate was set to 1,000,000 instead of 57,600, which is assumed to be the default.

  • From here on out, the hardware team will finish the CAD design and print the parts for the robotic arm. We aren’t sure when this will be done, but the group is in the process of finishing it up.
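For context on the baud-rate fix above: the XM430-W350 selects its rate through a small register value (control table address 8) rather than the bps number itself. The mapping below reflects our reading of the control table and should be verified against the ROBOTIS e-Manual before use:

```python
# Baud Rate register values for the XM430-W350 (control table address 8).
# 1 (57,600 bps) is the factory default; 3 selects 1 Mbps, the rate that
# made the motor respond in our case. Verify against the ROBOTIS e-Manual.
BAUD_REGISTER = {
    0: 9_600,
    1: 57_600,     # factory default
    2: 115_200,
    3: 1_000_000,  # value the team switched to
    4: 2_000_000,
    5: 3_000_000,
    6: 4_000_000,
    7: 4_500_000,
}

def register_for_baud(bps):
    """Return the register value that selects the given baud rate."""
    for reg, rate in BAUD_REGISTER.items():
        if rate == bps:
            return reg
    raise ValueError(f"unsupported baud rate: {bps}")

print(register_for_baud(1_000_000))  # -> 3
```

Note that the host's serial port must be opened at the same rate the motor is configured for, which is why a mismatch looks like a dead serial port.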

Software Updates

  • The software team has made great progress with their code, optimizing it so the camera detection works better than in the previous versions.

  • With the creation of officialv4.py, we were able to fine-tune our code and update the GUI so that it looks better than the previous one.

  • Changes to the software:

    • The code has been optimized so that it only tracks the smallest object during the random-point simulation. Before, when there were multiple objects in the frame, the center point of any object reaching the target location would successfully end the code; this was a problem, since objects other than the label could return true that the label had reached its destination. Now only the center of the smallest object can return true once it reaches its designated point.

    • The GUI has been updated so the user can toggle between the different states, and the modified trackbar lets users change the HSV (Hue, Saturation, Value) bounds so the webcam picks up one distinct color at a time. By letting the user change the HSV range, we can detect differently colored objects and fade out the colors we don’t want the webcam to pick up.

    • If you would like to visit our GitHub page . . . click Here

  • If you would like to see some of our test images, you can follow this path: UAV_Robotics_Team > Spring 2021 > Software > webcam-opencv > Official > data > images

  • Here is the path for officialv4.py: UAV_Robotics_Team > Spring 2021 > Software > webcam-opencv > Official

  • If you would like to access the tag link to our fourth version, please click Here
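The smallest-object rule described above can be sketched with plain tuples standing in for OpenCV contours. The (area, center) representation, the names, and the pixel tolerance are illustrative, not our actual code:

```python
def smallest_contour(contours):
    """Pick the contour with the smallest area; each contour is (area, (cx, cy))."""
    return min(contours, key=lambda c: c[0])

def reached_target(center, target, tol=10):
    """True once the contour center is within `tol` pixels of the target point."""
    dx, dy = center[0] - target[0], center[1] - target[1]
    return dx * dx + dy * dy <= tol * tol

# A box (large) and a label (small) in the same frame: only the label counts.
contours = [(5000, (300, 240)), (450, (118, 95))]
area, center = smallest_contour(contours)
print(reached_target(center, target=(120, 100)))  # label center ~5 px away -> True
```

Filtering to the single smallest contour before the distance check is what stops larger objects from falsely ending the simulation.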

Goals

  • Hardware:

    • The goal for the hardware team in the coming weeks is to finish the CAD work and start assembling the arm. More updates will be given next week.
  • Software:

    • The goal for the software team is to clean up the code and document everything. We will work on the integration process once the arm has been printed and everything is ready to assemble. The automation of the software is still under development; it was put on hold due to finals, but we will resume progress on it.

This is what I have for this update! See you next time!


Update: May 28 - June 11, 2021

Summary

Hello Everyone! Lyfae here! Back with another week of updates! This week will be brief and general, since we are beginning our INTEGRATION between hardware and software! The robotic arm has finally been 3D printed and is functional! One of our team members was able to fix the broken components left by the last team and is doing test runs to see if it is operating optimally. We hope that by the next update we can give you more information about the progress of our integration.

For the next update, it will be out on July 2, 2021, so keep an eye out for that!

Hardware Updates

  • Robotic Arm

    • One of our team members has successfully recreated the robotic arm that was handed down to us by the previous team that worked on this project. Some parts had broken off or gone missing, so the member spent time reprinting those parts and reassembling the arm. As of right now, the arm is fully functional and capable of moving around, picking up a label, and dropping it off at a manually chosen point. The arm is currently controlled via the Dynamixel Wizard application, but the members are looking for a workaround for the future.

    • The hardware team is currently working on taking the one giant script that controls the movement of the robotic arm and separating it into individual functions, so it will be easier to debug the code and fix any bugs that arise.

Software Updates

  • The software team is still working on officialv5.py and officialv6.py, as many bugs have surfaced while interfacing our official.py code with the server. The team is working on fixing them, so more information on the progress of the fixes can be provided next week.

  • Inside the software team, one group is responsible for the computer vision and contour detection, while another group is responsible for the arm-movement server. While the computer vision group works through the bugs in the previous versions of official.py, the server group came up with in-depth findings on how trigonometric angles can be used to move the robot efficiently. Once the computer vision group finishes fixing the bugs, they can integrate the trigonometric functions the server group derived so the robotic arm moves exactly how we want it to and performs its tasks as efficiently as possible.

  • The tag link and the packet for officialv5.py and officialv6.py are still under development and bug testing, so no files will be shared this update; by the next update, we should have something everyone can access.
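The post doesn't show the trigonometry the server group derived, but a textbook two-link planar-arm solution via the law of cosines gives the flavor of how joint angles can be computed from a target point. The link lengths and target below are made-up numbers, not our arm's dimensions:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Joint angles (radians) placing a two-link planar arm's tip at (x, y),
    solved with the law of cosines (one of the two elbow solutions)."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Hypothetical 120 mm links reaching for a point 150 mm out, 80 mm up.
shoulder, elbow = two_link_ik(150, 80, 120, 120)
print(math.degrees(shoulder), math.degrees(elbow))
```

Feeding the resulting angles back through forward kinematics recovers the target point, which is a handy sanity check while debugging.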

Helpful Links

  • If you would like to visit our GitHub page . . . click Here

  • If you would like to see some of our test images, you can follow this path here:

UAV_Robotics_Team > Spring 2021 > Software > webcam-opencv > Official > data > images

Goals

  • Hardware:

    • The goal for the hardware team is to take the big chunk of code that controls the robotic arm’s movement and reorganize it into functions.

  • Software:

    • The goal for the software team is to work on fixing the bugs, because there are a lot of them.

This is what I have for this week! See you next time!


Update: June 11 - July 23, 2021

Summary

Hello Everyone! Lyfae here! Back with another round of updates! We apologize for the long gap; most of our team members were occupied with other things and weren’t able to work on the project. However, here is an update on our current progress and what we managed to create in the meantime.

Hardware Updates

  • Robotic Arm

    • For our robotic arm, we went from having most of the parts 3D printed to having a fully assembled arm! Yay! The hardware team has also written code that controls all three motors tasked with moving the arm. The team can now move the bicep and forearm motors simultaneously, giving the arm a full range of control over the area it is assigned to.

    • In addition to assembling the robotic arm and controlling all the motors, the hardware team was able to fine-tune the motors so they can control the velocity at which the arm moves. Varying the velocity is crucial, because any excess or aggressive movement while picking up or dropping off a label or item could damage it. So it is important that the process gets fine-tuned even further so we can have smooth and precise movements.

    • Bug Fixes:

      • There was a problem where the code would loop indefinitely; we fixed it by defining a threshold detection method.

      • Before, we had problems integrating the motor code with the server; now it has been successfully integrated.

        • Now, the server can control all aspects of the motor.
    • Goals for Hardware:

      • Create a realistic testing environment for the robotic arm using boxes, packing labels, air pumps, cameras, and small everyday items.

      • Streamline the code for smoother integration with the server.

      • Integrate the code with the camera software.

      • Documentation of the entire Summer 2021 progress.
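The threshold detection method mentioned under Bug Fixes amounts to replacing an exact-match check with a tolerance band: the loop stops once the motor reports a position close enough to the target, instead of waiting for an exact reading that may never come. A minimal sketch with invented position values (the simulated motor moves 3 ticks per step):

```python
def move_until(read_position, step, target, threshold=5, max_steps=1000):
    """Step the motor until its reported position is within `threshold`
    ticks of the target. An exact-equality check here is what caused the
    original infinite loop, since the motor can overshoot the target."""
    for _ in range(max_steps):
        pos = read_position()
        if abs(pos - target) <= threshold:
            return pos
        step(1 if pos < target else -1)
    raise TimeoutError("did not converge")

# Simulated motor that moves 3 ticks per commanded step.
state = {"pos": 0}
final = move_until(lambda: state["pos"],
                   lambda d: state.__setitem__("pos", state["pos"] + 3 * d),
                   target=100)
print(final)  # -> 96, within the 5-tick threshold of 100
```

The max_steps guard is a second safety net so that a stalled motor raises an error instead of hanging the program.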

Software Updates

  • The software team was able to fix all the bugs and created the code for calibration and arm movement.

    • Calibration:

      • To correct for camera positioning errors, the software team has integrated an image-straightening homography. In projective geometry, a homography is an isomorphism of projective spaces, induced by an isomorphism of the vector spaces from which the projective spaces derive. In practice, it maps points that lie on a world plane from one camera view to another, which in our case lets us turn a tilted, uneven view into a flat one.

      • Integrated an anti-distortion feature using ChArUco boards, planar boards where markers are placed inside the white squares of a chessboard pattern.

      • Developed a process for automatic calibration inside the GUI for both HSV and camera properties.

    • GUI

      • Revamped the GUI and expanded the design to allow more room and keep everything spacious.

      • Buttons on the remote control now display text when hovered over, shown to the right of the button.

      • The arm’s functions are split into four regions: Control, Contour Display, Calibration, and Testing. This is what we have currently; more will be added in the future.

    • Testing / Bug Fixes

      • Created a new testing method called “Send Packet,” which sends a packet of data containing the real-life measurement (in mm) of the location of the smallest contour in the frame.

      • Fixed the bugs that previously caused the code to crash whenever it did not see a contour.

      • Improved connections to the server: packets no longer drop randomly, and the connection stays secure the entire time both server and client are running.

    • Goals for Software:

      • Refine the remote control when new additions are implemented → depends on what hardware wants

      • Work with hardware to get rid of the bugs and create smoother movement for the arm.

      • Documentation of the entire Summer 2021 progress.
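Applying the homography from the calibration step is just a 3×3 matrix multiply in homogeneous coordinates. Below is a minimal sketch with an invented translation-only matrix; in practice the matrix would be estimated from ChArUco corner correspondences (e.g. with cv2.findHomography) rather than written by hand:

```python
def apply_homography(H, point):
    """Map an image point through a 3x3 homography in homogeneous coordinates."""
    x, y = point
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w  # divide out the projective scale

# Identity plus a translation: every point shifts by (10, 20).
H = [[1, 0, 10],
     [0, 1, 20],
     [0, 0, 1]]
print(apply_homography(H, (5, 5)))  # -> (15.0, 25.0)
```

A real straightening matrix also has nonzero entries in the bottom row, which is what makes the perspective division do actual work.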

Helpful Links

  • If you would like to visit our GitHub page . . . click Here

This is what I have for this week! See you next time!


Update: July 23 - August 30, 2021

Summary

Hello Everyone! Lyfae here! Back with another round of updates! With finals and Fall classes resuming on campus, the team has been busy and wasn’t able to work much on the project, but we were able to fix some bugs and make some changes and progress in our research.

Team Updates

  • Accomplishments:

    • [x] Built an amplifier circuit to remotely control the air pump

    • [x] Built a test environment for the robot to interact with labels and packages

    • [x] Finalized CAD designs for the end effector

    • [x] Integrated both hardware and software code

    • [x] Robot recognized the difference between labels and packages

    • [x] Robot applies labels to packages successfully

  • Goals:

    • [ ] Create a project poster and presentation for the IEEE Student Engineering Technology Challenge

    • [ ] Integrate end effector into the robot

    • [ ] Test end effector by using it to pick up the labels and applying them to packages

    • [ ] Record video showcasing the project for the competition

    • [ ] Run the presentation and video over with Professor Yu and Andrew before the competition.

    • [ ] Create informative videos - (Maybe, after the rest of the work is accomplished)

Helpful Links

  • If you would like to visit our GitHub page and see the current work that we have accomplished . . . click Here

This is what I have for this week! See you next time!


Update: September 3 - September 17, 2021

Summary

Hello Everyone! Lyfae here! Back with another round of updates! Our group is almost done with this collaboration project! For the next couple of weeks, we will be wrapping up our work, finalizing all of the documentation, and working on YouTube videos to show how our robot works.

Team Updates

  • Accomplishments:

    • [x] Built an amplifier circuit to remotely control the air pump

    • [x] Built a test environment for the robot to interact with labels and packages

    • [x] Finalized CAD designs for the end effector

    • [x] Integrated both hardware and software code

    • [x] Robot recognized the difference between labels and packages

    • [x] Robot applies labels to packages successfully

    • [x] Made a poster/presentation for the IEEE Student Engineering Technology Challenge (SETC) competition

    • [x] End effector refinements: sealed the end effector to reduce the space through which air can escape

    • [x] Created a project poster and presentation for the IEEE Student Engineering Technology Challenge

    • [x] Run the presentation and video over with Professor Yu and Andrew before the competition.

    • [x] Test end effector by using it to pick up the labels and applying them to packages

    • [x] Record video showcasing the project for the competition

  • Goals:

    • [ ] Integrate end effector into the robot

    • [ ] Create informative videos regarding the robotic project

Helpful Links

  • If you would like to visit our GitHub page and see the current work that we have accomplished . . . click Here

  • If you would like to check out our IEEE Poster, click Here

This is what I have for this week! See you next time!
