ROBOTS MAKE THE FUTURE

Researchers at Amrita Vishwa Vidyapeetham have developed a humanoid mobile robot for multi-purpose applications. With features such as autonomous navigation, voice control through Alexa, interactive capabilities, a robotic arm manipulator, electrically driven wheels, and AI-enabled functionality, the robot, named Amrita, represents a significant advancement in robotics, paving the way for future innovations in humanoid technology.

Our humanoid mobile robot has been developed for a wide range of applications, including industrial pick-and-place operations as well as challenging tasks and chores in healthcare and domestic environments. The proposed comprehensive solution – a two-wheeled humanoid robot – is controlled by Alexa and is accessible from anywhere with an internet connection.

The robot can deliver food and medicine to those with mobility impairments, including the elderly and young children who need timely attention to their meals and medications.

The Robot Operating System (ROS) is leveraged here, as it is an ideal platform for autonomous robot navigation. A map-based navigation system is used since the robot's workspace is mainly a house or hospital. The robot first builds a map of the environment using its lidar sensor and thereafter operates autonomously. Both simulation and real-world testing have been completed, and the robot's navigation accuracy and performance agree with the simulation results.
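To make the lidar-based setup concrete, here is a minimal sketch of how the lidar feed could be consumed on the ROS side. It assumes a ROS 1 (rospy) system where the lidar driver publishes sensor_msgs/LaserScan on the conventional /scan topic, the same topic a SLAM node such as gmapping would subscribe to; the topic name and node name are assumptions, not details confirmed by the authors.

```python
#!/usr/bin/env python
# Minimal sketch: reading the lidar feed in ROS 1 (rospy).
# Assumes the driver publishes sensor_msgs/LaserScan on /scan,
# the conventional topic consumed by mapping nodes such as gmapping.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg):
    # Keep only readings inside the sensor's valid range, then
    # report the closest obstacle in the current sweep.
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    if valid:
        rospy.loginfo("Closest obstacle: %.2f m", min(valid))

if __name__ == "__main__":
    rospy.init_node("scan_monitor")
    rospy.Subscriber("/scan", LaserScan, on_scan)
    rospy.spin()  # Mapping itself is handled by a SLAM node using this same topic.
```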

Robot design and features

The advanced robot is equipped with a robotic arm manipulator and electrically driven wheels, powered by on-board microcontrollers, servos, and other peripherals. It features a skull that can rotate and eyes that can move up and down, mimicking human-like movements. The eyes house a webcam for tracking people and objects. The jaw of the robot can move up and down to mimic speaking, enhancing its interactive capabilities.

The project also includes voice integration and visual interaction of the skull using a laptop or computer. The development of the robotic arm manipulator hardware and the software integration of the robotic hand are key aspects of this project. Voice integration has been carried out with the robotic arm to facilitate picking up items such as food, medicine, and water bottles.

The robot includes hardware for mask detection, body temperature measurement, and checking vitals. It also features an entertainment module with an LCD for music, movies, video calling, health monitoring, and more. This AI-enabled robot can detect people's facial expressions, talk with people, make doctor-patient communication more effective, and provide closer care and monitoring. Additionally, the development of the trunk, tray, and robotic wheels for navigation contributes to the full integration of this humanoid robot.
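The article does not detail the vision pipeline behind mask detection, so the sketch below shows one common approach: OpenCV's stock Haar face detector applied to the webcam feed, paired with a classifier for the mask decision. The mask_model object and its predict() call are hypothetical placeholders for whatever classifier the robot actually runs.

```python
# Minimal sketch: webcam mask detection with OpenCV's stock Haar face
# detector. mask_model and its predict() are hypothetical stand-ins
# for the robot's actual classifier.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def check_masks(frame, mask_model):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = frame[y:y + h, x:x + w]
        wearing_mask = mask_model.predict(face)  # hypothetical classifier call
        label = "mask" if wearing_mask else "no mask"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame
```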

The robot is designed and manufactured as shown in the photographs. The head and robotic arm are manufactured using 3D printing, while the body is fabricated from welded sheet metal.

Simulation and testing

The robot is simulated using ROS in a virtual home environment, which helps validate its design and operation before real-world deployment.

The main purpose of the simulation is to validate the robot's autonomous navigation. The first step of autonomous navigation is mapping the environment using the lidar sensor. Path planning is the second step, wherein the robot moves autonomously using the map. Adaptive Monte Carlo Localization (AMCL) is the method used in ROS to estimate the robot's current position. Once the robot knows its current location, a goal position can be sent for it to navigate to.
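A goal like this is typically sent to the ROS navigation stack through move_base's action interface. The sketch below is a minimal illustration of that step, assuming AMCL is localizing the robot on the prebuilt map and move_base is running with its standard action server; the coordinates are illustrative, not taken from the project.

```python
#!/usr/bin/env python
# Minimal sketch: sending a navigation goal to move_base (ROS 1).
# Assumes AMCL is localizing the robot on a prebuilt map and that
# move_base exposes its standard action interface.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def go_to(x, y):
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"   # goal expressed in the map frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # face along the map x-axis

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    rospy.init_node("goal_sender")
    go_to(2.0, 1.5)  # illustrative coordinates, e.g. a point in the kitchen
```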

One of the main features of the robot is that it can be controlled using Alexa, the voice assistant, from anywhere in the world. Integrating Alexa into the ROS-controlled robot requires a custom Alexa Skill, with a server connecting Alexa to the robot. A communication protocol serves as the medium between the robot and the Alexa Skill. For the home application, the position the robot needs to reach is conveyed through Alexa voice commands. The robot was tested using an Alexa Echo device.
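The article names neither the server stack nor the protocol, so the sketch below shows one common way to bridge a custom Alexa Skill to ROS: the Flask-Ask library serving the skill endpoint and republishing the request into ROS. The GoToRoomIntent intent, its Room slot, and the /delivery_goal topic are all hypothetical; in practice the endpoint must also be reachable by Alexa over HTTPS, for example through a tunnel.

```python
#!/usr/bin/env python
# Minimal sketch: bridging an Alexa Skill to ROS with Flask-Ask.
# The intent name, slot, and /delivery_goal topic are hypothetical.
import rospy
from std_msgs.msg import String
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, "/")
rospy.init_node("alexa_bridge", disable_signals=True)  # Flask owns the main thread
goal_pub = rospy.Publisher("/delivery_goal", String, queue_size=1)

@ask.intent("GoToRoomIntent", mapping={"room": "Room"})
def go_to_room(room):
    # Publish the requested room; a separate node would map the room name
    # to map coordinates and forward a move_base goal.
    goal_pub.publish(String(data=room))
    return statement("Okay, heading to the {}".format(room))

if __name__ == "__main__":
    app.run(port=5000)
```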

How it works

Once the robot reaches the kitchen for food, it searches for food items using a camera. The robot's arm then places the food onto the tray, and the robot heads off to deliver it to the elderly person's bedroom. The same workflow can provide healthcare support for elderly individuals and hospital patients with their meals and medication, and the robot can also serve in restaurants and offices. Using thermal imaging, the robot can measure body temperature without contact and identify whether a person is wearing a mask. The robot weighs about 30 kg, and the arm's payload capacity is about 0.5 kg; this can be improved by providing two humanoid arms in the future.
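To make the sequence concrete, here is a hedged sketch of the fetch-and-deliver loop. Every helper (detect_item, pick_item, announce) is a hypothetical stand-in for the robot's actual camera, arm, and speech modules; go_to is the move_base client sketched earlier, and the waypoints are illustrative.

```python
# Illustrative sketch of the fetch-and-deliver sequence. detect_item,
# pick_item, and announce are hypothetical stand-ins for the robot's
# camera, arm, and speech modules; go_to is the move_base client
# sketched above. Waypoint coordinates are illustrative.
KITCHEN = (2.0, 1.5)
BEDROOM = (-1.0, 3.0)

def deliver(item_name):
    go_to(*KITCHEN)                     # navigate to the kitchen
    pose = detect_item(item_name)       # locate the item with the webcam
    if pose is None:
        announce("I could not find the {}".format(item_name))
        return
    pick_item(pose)                     # arm places item on the tray (<= 0.5 kg)
    go_to(*BEDROOM)                     # carry it to the recipient
    announce("Your {} is here".format(item_name))
```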

DR GANESHA UDUPA

VISHNU RAJ K

ABHIRAM TV

Professor & Dean, Department of Mechanical Engineering

Amrita Vishwa Vidyapeetham, Kollam, Kerala

ganesh@am.amrita.edu

