Techno Press
Tp_Editing System.E (TES.E)
CONTENTS
Volume 2, Number 2, June 2018
 

Abstract
Robotics and automation are rapidly replacing human labor across industries. This growing adoption of robots is reshaping business and, in turn, expanding the scope of robotics research. This paper presents the development of an experimental platform in which a robotic arm is controlled through the Robot Operating System (ROS). ROS is an open-source framework that runs on top of an existing operating system and provides a wide range of robots with capabilities spanning high-level operations down to low-level control. In this work, we aim to control a 7-DOF manipulator arm (Robai Cyton Gamma 300), equipped with an external vision camera system, through ROS, and to demonstrate the task of balancing a ball on a plate-type end effector. To perform feedback control of the balancing task, the ball is tracked by a camera (Sony PlayStation Eye) using a tracking algorithm written in C++ with OpenCV libraries. The joint actuators of the robot are servo motors (Dynamixel), which are commanded directly by a low-level control algorithm. To simplify the control, the system is modeled such that the plate undergoes linearized motion about two axes. The developed system, along with the proposed approaches, could be applied to more complicated tasks requiring control of a larger number of joints, and could also serve as a testbed for students learning ROS together with control theory in robotics.

Key Words
robot operating system (ROS); robot manipulator; balancing control; ball-on-plate system

Address
Khasim A. Khan, Revanth R. Konda and Ji-Chul Ryu: Department of Mechanical Engineering, Northern Illinois University, DeKalb, IL 60115, U.S.A.

Abstract
With the increasing demand for applications of human pose estimation, such as human-computer interaction and human activity recognition, numerous approaches have been proposed to detect the 2D poses of people in images more efficiently. Despite many years of research, estimating human poses from images still struggles to produce satisfactory results. In this study, we propose a robust 2D human body pose estimation method using an RGB camera sensor. Our pose estimation method is efficient and cost-effective, since an RGB camera sensor is economical compared to the high-priced sensors more commonly used for this task. To estimate upper-body joint positions, semantic segmentation with a fully convolutional network was exploited. From the acquired RGB images, joint heatmaps are used to accurately estimate the coordinates of each joint. The network architecture was designed to learn and detect the locations of the joints via a sequential prediction process. The proposed method was tested and validated for efficient estimation of the human upper-body pose. The obtained results reveal the potential of a simple RGB camera sensor for human pose estimation applications.

Key Words
human pose estimation; skeleton extraction; fully convolutional network; semantic segmentation; upper-body joint segmentation

Address
Seunghee Lee, Jungmo Koo and Jinki Kim: Department of Civil and Environmental Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea

Hyun Myung: 1.) Department of Civil and Environmental Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
2.) Robotics Program, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea


Abstract
In image processing and robotic applications, two-dimensional (2D) black-and-white patterned planar markers are widely used. However, these markers are not detectable in low-visibility environments, and their patterns cannot be changed. This research proposes an active and adaptive marker node that displays 2D marker patterns on light-emitting diode (LED) arrays for easier recognition in foggy or turbid underwater environments. Because each node is made to blink at a different frequency, the active LED marker nodes can be distinguished from one another at long distances without increasing the size of the marker. We expect that the proposed system can be used in various harsh conditions where conventional marker systems are not applicable because of low-visibility issues. The proposed system remains compatible with conventional markers, as the displayed patterns are identical.

Key Words
image processing; marker vision; LED arrays

Address
Kyukwang Kim, Jieum Hyun and Hyun Myung: Urban Robotics Laboratory (URL), Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea



Techno-Press: Publishers of international journals and conference proceedings.       Copyright © 2018 Techno-Press
P.O. Box 33, Yuseong, Daejeon 34186 Korea, Tel: +82-42-828-7996, Fax : +82-42-828-7997, Email: info@techno-press.com