Algorithm lets disaster response robots discern between humans and rubble
October 21, 2014
The robot features HD cameras to scan the surrounding area for people

With their ability to navigate tight spaces and unstable environments without putting people at risk, robots are a promising tool for disaster response. Researchers from Mexico's University of Guadalajara (UDG) have developed an algorithm that could come in handy in such situations by allowing robots to differentiate between people and debris.

The team used a robot with a form factor similar to iRobot's 110 FirstLook, but without that robot's self-righting capability. With motion sensors, cameras, and a laser and infrared system, the robot is able to plot paths through an environment or create a 2D map of it. But it is the inclusion of a flashlight and a stereoscopic HD camera that allows it to capture images of its environment and recognize whether there are any people within it.

It does this by using the HD cameras to scan the surrounding area, after which the images are cleaned up and patterns of interest are isolated from their surrounds, such as rubble. A descriptor system then obtains the 3D points for each segment, assigning numerical values to the captured images that represent the shape, color and density of the shapes.
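The article does not publish the UDG team's code, so the following is only a minimal Python sketch, assuming NumPy and a point-cloud segment supplied as an N×6 array of XYZ coordinates plus RGB values, of how a descriptor of the kind described (numbers capturing shape, color and density) might be computed for one segmented region. The function and variable names are hypothetical.

```python
import numpy as np

def segment_descriptor(points_xyzrgb):
    """Compute a simple numeric descriptor for one segmented region.

    points_xyzrgb: (N, 6) array; columns 0-2 are XYZ in metres,
    columns 3-5 are RGB in [0, 1]. Returns a 1-D feature vector
    loosely following the shape/color/density idea in the article.
    """
    xyz = points_xyzrgb[:, :3]
    rgb = points_xyzrgb[:, 3:]

    # Shape: extents of the axis-aligned bounding box and the ratio
    # between them (a standing person is tall and narrow, a pile of
    # rubble tends to be low and wide).
    extent = xyz.max(axis=0) - xyz.min(axis=0)
    height, width, depth = extent[2], extent[0], extent[1]
    aspect = height / (max(width, depth) + 1e-6)

    # Color: mean and spread of the RGB values in the segment.
    mean_rgb = rgb.mean(axis=0)
    std_rgb = rgb.std(axis=0)

    # Density: points per unit of bounding-box volume.
    volume = np.prod(extent) + 1e-6
    density = len(xyz) / volume

    return np.concatenate([extent, [aspect], mean_rgb, std_rgb, [density]])


# Example with synthetic data: a tall, thin cluster of colored points.
rng = np.random.default_rng(0)
fake_segment = np.hstack([
    rng.normal([0.0, 0.0, 0.9], [0.15, 0.15, 0.45], size=(500, 3)),  # XYZ
    rng.uniform(0.2, 0.8, size=(500, 3)),                            # RGB
])
print(segment_descriptor(fake_segment))
```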
The segments are then merged to create a new image, which passes through a filter that determines whether or not it is a human silhouette. The whole system can be integrated into the robot, or the algorithm can be run on a separate laptop with the robot controlled wirelessly.

"Pattern recognition allows the descriptors to automatically distinguish objects containing information about the features that represent a human figure," says Arana Daniel, a researcher at the University Center of Exact and Engineering Sciences (CUCEI) at the UDG. "This involves developing algorithms to solve problems of the descriptor and assign features to an object."

The silhouettes will also be used to train a neural network to recognize patterns. This network, called CSVM, was developed by Arana Daniel and can be used to recognize not only human silhouettes, but also fingerprints, handwriting, faces, voice frequencies and DNA sequences. By mimicking the human learning process, the team plans to continue developing the robot with the goal of training it to automatically classify human shapes based on previous experience.

Source: University of Guadalajara via Alpha Galileo
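The article describes the filtering step only at a high level and does not detail the team's CSVM network, so as a rough illustration of the idea the sketch below trains an ordinary support vector machine (scikit-learn's SVC, standing in for CSVM) to separate "human silhouette" descriptors from "rubble" descriptors. The data and labels are synthetic placeholders, not the UDG dataset.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in descriptors (e.g. the 11-value shape/color/density
# vectors from the previous sketch): "human" samples drawn around one
# cluster centre, "rubble" around another. Real training data would come
# from labelled silhouettes captured by the robot's stereo HD camera.
n_features = 11
human = rng.normal(loc=1.0, scale=0.3, size=(200, n_features))
rubble = rng.normal(loc=-1.0, scale=0.3, size=(200, n_features))

X = np.vstack([human, rubble])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = human, 0 = rubble

# An RBF-kernel SVM as a generic pattern-recognition filter.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)

# Classify a new, unseen descriptor vector.
new_segment = rng.normal(loc=1.0, scale=0.3, size=(1, n_features))
print("human" if clf.predict(new_segment)[0] == 1 else "rubble")
```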