Search results
1 – 10 of over 14,000
Kanstantsin Miatliuk, Yoon Hyuk Kim and Kyungsoo Kim
Abstract
Purpose
The purpose of this paper is to define the process of human motion design in hierarchical space by cybernetic technology, using the mathematical symbol construction of hierarchical systems (HSs), and to realize the technology in biomechanical motion design tasks.
Design/methodology/approach
The suggested HS technology allows human motion to be defined in level space. HSs are presented by their two main symbol images, i.e. the mathematical symbol construction and the graphic image. These images connect all the strata of an HS, and show the acts of system multiplying (learning) and uniting (design), the place of the human in hierarchical space, and human activity on higher levels. The design and control tasks are solved by the HS coordinator.
Findings
The paper shows that the presented HS technology allows design and control tasks of human motion in hierarchical space to be solved. Coherence between deformations of the human's construction and the corresponding changes in his interactions with environment elements (his motion) is controlled by the HS coordinator. Coordinator design tasks are formulated, and the possibility of describing a computer program within the proposed technology is demonstrated.
Practical implications
The presented technology provides an instrument for the design and control of different kinds of human motion in hierarchical space, and predicts connected motion dynamics on different levels. It is applied in biomechanical motion design tasks.
Originality/value
The presented method (technology) meets the requirements of practical cybernetic (design and control) tasks. It sheds new light on the theory and practice of human motion design, presents human motion as a hierarchical process in level space, and allows the motion design and control task to be solved as a coordination task of the HS coordinator.
Meiyin Liu, SangUk Han and SangHyun Lee
Abstract
Purpose
As a means of data acquisition for situation awareness, computer vision-based motion capture technologies have increased the potential to observe and assess manual activities for the prevention of accidents and injuries in construction. This study thus aims to present a computationally efficient and robust method of human motion data capture for on-site motion sensing and analysis.
Design/methodology/approach
This study investigated a tracking approach to three-dimensional (3D) human skeleton extraction from stereo video streams. Instead of detecting body joints on each image, the proposed method tracks the locations of the body joints over all successive frames by learning from the initialized body posture. The body joints corresponding to the tracked ones are then identified and matched on the image sequences from the other lens and reconstructed in 3D space through triangulation to build 3D skeleton models. For validation, a lab test is conducted to evaluate the accuracy and working ranges of the proposed method.
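The pipeline described above (track joints in one view, match them in the second view, then triangulate) rests on standard two-view triangulation. Below is a minimal linear (DLT) sketch, assuming calibrated 3x4 projection matrices for the two cameras; the function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def triangulate_joint(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one body joint seen in a stereo pair.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coordinates
    of the matched joint in each view. Returns the 3D point."""
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value.
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]  # de-homogenize
```

Repeating this for every tracked joint in each frame yields the 3D skeleton; in practice the projection matrices would come from stereo calibration of the two cameras.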
Findings
Results of the test reveal that the tracking approach produces accurate outcomes at a distance, with nearly real-time computational processing, and can potentially be used for site data collection. Thus, the proposed approach has potential for various field analyses of construction workers' safety and ergonomics.
Originality/value
Recently, motion capture technologies have rapidly been developed and studied in construction. However, existing sensing technologies are not yet readily applicable to construction environments. This study explores two smartphones used as stereo cameras as a potentially suitable means of data collection in construction, owing to fewer operational constraints (e.g. no on-body sensor required, less sensitivity to sunlight and flexible operating ranges).
Abstract
What is animation? This article defines different types of animation and explores its therapeutic potential.
Abstract
In this paper, two recent VR technology topics are introduced. First, three keywords that characterize VR technology (intuitive interaction, presence and multi-sensory experience) are discussed. Second, IPT, a projection-based technology, is introduced as a suitable interface example which effectively illustrates the recent technology's sophistication; CABIN, a small cubic room surrounded by five screens, has been implemented at the University of Tokyo. Third, the wearable computer is introduced as a further extrapolation of mobile computer technology. Mixed reality (MR) is at the leading edge of recent VR technology R&D. Finally, it is concluded that these two streams may appear to run in opposite directions, but both are related to the fusion of the real and virtual worlds.
Chris Bernard, Hyosig Kang, Sunil K. Singh and John T. Wen
Abstract
Minimally invasive surgery (MIS) is a cost-effective alternative to open surgery whereby essentially the same operations are performed using specialized instruments designed to fit into the body through several tiny punctures instead of one large incision. The EndoBots (endoscopic robots) described here are designed for collaborative operation between the surgeon and the robotic device. The surgeon can program the device to be operated completely manually; collaboratively, where motion of the robotic device in certain directions is under computer control and in others under manual surgeon control; or autonomously, where the complete device is under computer control. Furthermore, the robotic tools can be quickly changed from a robotic docking station, allowing different robotic tools to be used in an operation.
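The per-direction split between computer and manual control described in this abstract can be sketched as a simple per-axis command selector. This is an illustrative sketch, not the EndoBot controller; the function name and the boolean-mask representation of the control mode are assumptions.

```python
import numpy as np

def blend_command(computer_axes, surgeon_cmd, computer_cmd):
    """Per-axis shared control (hypothetical sketch): axes flagged True in
    computer_axes follow the computer's command, the rest follow the
    surgeon's manual input. All-False is fully manual operation;
    all-True is fully autonomous."""
    mask = np.asarray(computer_axes, dtype=bool)
    return np.where(mask, computer_cmd, surgeon_cmd)
```

For example, constraining motion along a hypothetical insertion axis to computer control while leaving lateral motion manual would be `blend_command([False, False, True], manual, auto)`.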
Daisuke Chugo, Kuniaki Kawabata, Hiroyuki Okamoto, Hayato Kaetsu, Hajime Asama, Norihisa Miyake and Kazuhiro Kosuge
Abstract
Purpose
The aim is to develop a force-assistance system for standing up which prevents the decline of the patient's physical strength by making use of their remaining strength.
Design/methodology/approach
The system realizes the standing-up motion using a support bar with two degrees of freedom and a bed system which can move up and down. To make use of the patient's remaining physical strength, the system uses as its control reference a motion pattern based on the typical standing-up motion performed by a nursing specialist.
Findings
The assistance system realizes the natural standing-up motion demonstrated by a nursing specialist, and it is effective in assisting aged persons to stand up without reducing their muscular strength.
Originality/value
The first idea is a distributed system which controls the support bar and the bed system with coordination between them. The second idea is the combination of force and position control.
Abstract
A smart cam, or electronic cam, is a combination of a computer program for motion definition and a high-performance servo mechanism. Using one, a high-speed, reliable and application-oriented motion can be obtained, just as from a mechanical cam system. Hard cams are dedicated and difficult to change, but smart cams are flexible in stroke, timing, etc. With the progress of computer and servo technology, smart cams are becoming widely used. In this report, the methods used to create smart cams are discussed and some examples are shown. The experimental results of these examples show excellent performance, as good as that of hard cams.
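As an illustration of "a computer program for motion definition", a smart cam's follower displacement is often generated from a standard cam profile such as the cycloidal curve, which has zero velocity and acceleration at both ends of the rise. The sketch below assumes that profile; the function names and the table-of-setpoints approach are illustrative, not taken from the report.

```python
import math

def cycloidal_rise(theta, beta, h):
    """Cycloidal cam displacement: rise of height h over cam angle beta
    (both angles in radians). Velocity and acceleration are zero at the
    ends of the rise, which keeps the motion smooth at high speed."""
    x = theta / beta  # normalized cam angle, 0..1
    return h * (x - math.sin(2 * math.pi * x) / (2 * math.pi))

def motion_table(beta, h, steps):
    """Sample the profile into a setpoint table for the servo drive."""
    return [cycloidal_rise(i * beta / steps, beta, h) for i in range(steps + 1)]
```

Changing `h`, `beta` or the sampling changes stroke and timing entirely in software, which is exactly the flexibility the abstract contrasts with hard cams.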
Abstract
Purpose
The following article is a "Q&A interview" conducted by Joanne Pransky of Industrial Robot Journal as a method to impart the combined technological, business and personal experience of a prominent robotic-industry PhD-turned-entrepreneur regarding his pioneering efforts to bring technological inventions to market. The paper aims to discuss these issues.
Design/methodology/approach
The interviewee is Dr James Kuffner, CEO at Toyota Research Institute Advanced Development (TRI-AD). Kuffner is a proven entrepreneur and inventor in robot and motion planning and cloud robotics. In this interview, Kuffner shares his personal and professional journey from conceptualization to commercial realization.
Findings
Dr Kuffner received BS, MS and PhD degrees from Stanford University's Department of Computer Science Robotics Laboratory. He was a Japan Society for the Promotion of Science (JSPS) Postdoctoral Research Fellow at the University of Tokyo, where he worked on software and planning algorithms for humanoid robots. He joined the faculty at Carnegie Mellon University's Robotics Institute in 2002, where he served until March 2018. Kuffner was a Research Scientist and Engineering Director at Google from 2009 to 2016. In January 2016, he joined TRI, where he was appointed Chief Technology Officer and Area Lead, Cloud Intelligence, and is presently an Executive Advisor. He has been CEO of TRI-AD since April 2018.
Originality/value
Dr Kuffner is perhaps best known as the co-inventor of the rapidly exploring random tree (RRT) algorithm, which has become a key standard benchmark for robot motion planning. He is also known for introducing the term "Cloud Robotics" in 2010 to describe how network-connected robots could take advantage of distributed computation and data stored in the cloud. Kuffner was part of the initial engineering team that built Google's self-driving car. He was appointed Head of Google's Robotics Division in 2014, which he co-founded with Andy Rubin to help realize the original Cloud Robotics concept. Kuffner also co-founded Motion Factory, where he was the Senior Software Engineer and a member of the engineering team that developed C++-based authoring tools for high-level graphic animation and interactive multimedia content. Motion Factory was acquired by SoftImage in 2000. In May 2007, Kuffner founded and became the Director of Robot Autonomy, where he coordinated research and software consulting for industrial and consumer robotics applications. In 2008, he assisted in the iOS development of Jibbigo, the first on-phone, real-time speech recognition, translation and speech synthesis application for the iPhone. Jibbigo was acquired by Facebook in 2013. Kuffner is one of the most highly cited authors in the field of robotics and motion planning, with over 15,000 citations. He has published over 125 technical papers and has been issued more than 50 patents related to robotics and computer vision technology.