VRTEX Under the Hood: Lap Joint Using Aluminum, and Under the Hood: Hardfacing

 

Published on 15/05/2014

This video demonstrates the GMAW welding process on an Aluminum Lap Joint in a virtual welding simulation environment using the VRTEX® 360.

 

Published on 15/05/2014

Learn how to build up a welding pad using SMAW in the VRTEX® virtual reality welding simulation environment.

Source: Lincoln Electric

 

Why Virtual Reality Isn’t (Just) the Next Big Platform: Michael Abrash & Dov Katz of Oculus VR

 

 

 

Streamed live on 02/05/2014

Michael Abrash, Chief Scientist, Oculus VR
Dov Katz, Senior Vision Engineer, Oculus VR


Abstract
There’s been a lot of talk lately about how VR is the next big platform, but that’s not really accurate; it’s something bigger and more fundamental, nothing less than a phase change in the way that we interact with information. This talk will discuss why that’s so, what’s going to be involved in getting to that point, and why VR is going to open up huge new research and development opportunities.

VR is coupled with our embodiment more than any existing interface. A compelling and immersive VR experience requires that our interactions with the physical world are mapped precisely and with low latency onto the virtual environment. This creates a variety of challenges, including hardware design, processing multimodal sensor data, and filtering. The second part of the talk will focus on our current solution to the problem of head tracking.
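As a rough illustration of the filtering problem mentioned above, the sketch below fuses gyroscope and accelerometer readings with a complementary filter to estimate head pitch. It is a deliberately simplified stand-in for real head tracking, not the Oculus pipeline; the sample rate, blend factor, and sensor values are assumptions for the example.

```python
# Complementary filter sketch: blend fast-but-drifting gyro integration with a
# slow-but-stable absolute pitch estimate derived from the accelerometer.
def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Blend integrated gyro rate (rad/s) with an absolute accelerometer
    pitch estimate (rad); alpha controls how much we trust the gyro."""
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

pitch = 0.0
dt = 1.0 / 1000.0          # a 1 kHz IMU keeps per-sample latency low
for _ in range(1000):      # one second of simulated samples
    gyro_rate = 0.5        # rad/s, pretend the head is tilting upward
    accel_pitch = 0.45     # noisy absolute reference from gravity
    pitch = complementary_filter(pitch, gyro_rate, accel_pitch, dt)
print(f"estimated pitch after 1 s: {pitch:.3f} rad")
```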

Speaker Biographies

Michael Abrash
Over the last 30 years, Michael has worked at companies that made graphics hardware, computer-based instrumentation, and rendering software, been the GDI lead for the first couple of versions of Windows NT, worked with John Carmack on Quake, worked on Xbox and Xbox 360, written or co-written at least four software rasterizers (the last one of which, written at RAD Game Tools, turned into Intel’s late, lamented Larrabee project), and worked on VR at Valve. Along the way he wrote a bunch of magazine articles and columns for Dr. Dobb’s Journal, PC Techniques, PC Tech Journal, and Programmer’s Journal, as well as several books. He’s been lucky enough to have more opportunities to work on interesting stuff than he could ever have imagined when he almost failed sixth grade because he spent all his time reading science fiction. He is currently Chief Scientist at Oculus VR, and thinks VR is going to be the most interesting and important project of all.

Dov Katz
Dov Katz is leading Oculus VR’s computer vision R&D. He is passionate about human and computer perception. His research interests include computer vision, machine learning, and autonomous manipulation. At Oculus, he developed a high-precision, low-latency optical position tracking system. He is currently engaged in several projects that will deliver a more immersive and intuitive VR experience.

He was previously a postdoctoral fellow at Carnegie Mellon University. He received his MS in 2008 and Ph.D. in 2011 from the University of Massachusetts Amherst, and his BS in 2004 from Tel-Aviv University, Israel. He was the recipient of several national and international awards, and his work received attention in the popular press. He is the founder of the IEEE/RAS technical committee on mobile manipulation.

Source:

Vienna University of Technology. Quadcopter Piloted by a Smartphone

The quadcopter team: Annette Mossel, Christoph Kaltenriner, Hannes Kaufmann, Michael Leichtfried (from left to right)

The quadcopter, which was developed at TU Vienna, can negotiate its way through a room completely on its own. It does not need any human intervention, and in contrast to other models, it is not assisted by any external computer. All the necessary computing power is on board; the image processing is done by a standard smartphone.

Autonomous Machines
Quadcopters have become a popular platform for academic research. The small aircraft, powered by four electric motors, are perfect for testing advanced feedback control systems, which make them fly steadily and safely. But beyond that, quadcopters are also used to test how machines can be made to perceive their environment and act autonomously.

The Virtual-Reality-Team at Vienna University of Technology has been working with visual data for many years. “Proceeding towards robotics and mounting a camera onto a quadcopter was just the logical next step for us”, says Hannes Kaufmann (Faculty of Informatics, TU Vienna). Usually, quadcopters are steered by humans or they send their data to a powerful earthbound computer, which then returns the necessary control signals. The Vienna quadcopter, however, does not need any external input.

The Quadcopter, built at TU Vienna

A Smartphone as the Eyes and Brains
The team decided not to buy an expensive commercial quadcopter-system, but instead to assemble a simple, cost-efficient quadcopter, using carefully selected components. The core element – and the most expensive part of the quadcopter – is a smartphone. Its camera provides the visual data and its processor acts as the control center. The quadcopter’s intelligence, which allows it to navigate, was coded in a smartphone-app. In addition, a micro controller adjusts the rotor speed, so that the quadcopter flies as steadily as possible.
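As a rough sketch of the division of labour described above, the code below pairs a navigation layer issuing attitude corrections with a simple PID rate loop and motor mixer of the kind a stabilising microcontroller might run. It is not the TU Vienna firmware; the gains, sensor values, and motor layout are invented for illustration.

```python
# PID rate controller plus a four-rotor mixer (X configuration).
class RatePID:
    def __init__(self, kp=0.8, ki=0.05, kd=0.02):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def mix(throttle, roll_cmd, pitch_cmd, yaw_cmd):
    """Map attitude corrections onto four rotor speeds; a real mixer would
    also clamp the outputs to the valid motor range."""
    return [
        throttle + roll_cmd + pitch_cmd - yaw_cmd,  # front-left
        throttle - roll_cmd + pitch_cmd + yaw_cmd,  # front-right
        throttle + roll_cmd - pitch_cmd + yaw_cmd,  # rear-left
        throttle - roll_cmd - pitch_cmd - yaw_cmd,  # rear-right
    ]

roll_pid = RatePID()
# one control step: desired roll rate 0.0, measured 0.1 rad/s, dt = 10 ms
motors = mix(0.5, roll_pid.update(0.0, 0.1, 0.01), 0.0, 0.0)
print(motors)
```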

The quadcopter was designed to work indoors, even in small rooms. This is a major challenge; especially close to walls or corners, aerodynamics can be much trickier than in open space. Apart from that, the quadcopter cannot make any use of GPS data; it has to rely entirely on visual data.

To test the quadcopter’s navigational capabilities, the team attached visual codes to the floor, similar to QR codes. Hovering above these codes, the quadcopter recognizes them, obtains information and creates a map of its environment. Once it has created a virtual map of the codes on the floor, it can head for a specific known location or go on exploring areas it has not yet checked out.
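The mapping-and-exploration idea can be sketched in a few lines: remember each recognised floor code with its position, and pick the nearest code that has not been visited yet. The marker IDs and coordinates below are invented; the real system decodes the codes from the smartphone camera image.

```python
# Minimal marker-based exploration sketch.
visited = {}                 # marker id -> (x, y) position in the room map

def observe(marker_id, position):
    """Add a newly recognised floor code to the map."""
    visited.setdefault(marker_id, position)

def next_target(current, known_codes):
    """Head for the closest known code that has not been visited yet."""
    unvisited = [(mid, p) for mid, p in known_codes.items() if mid not in visited]
    if not unvisited:
        return None          # nothing left to explore
    return min(unvisited,
               key=lambda mp: (mp[1][0] - current[0]) ** 2 + (mp[1][1] - current[1]) ** 2)

observe("code_07", (1.0, 2.0))
print(next_target((0.0, 0.0), {"code_07": (1.0, 2.0), "code_12": (3.0, 1.0)}))
```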

“In the future, the quadcopter should also be able to do without these codes. Instead, we want it to use naturally occurring reference points, which can be obtained from the camera data and also from depth sensors such as the MS Kinect”, says Annette Mossel, chief engineer of the quadcopter project. She developed the device together with her diploma students Christoph Kaltenriner and Michael Leichtfried.

Many Ideas for Applications
There are many possible applications for an autonomous quadcopter; firemen could send it into a burning building and have it transmit a 3D picture from inside before they enter the building themselves. Miniature quadcopters could guide people to the right place in large, labyrinthine buildings. Due to its low price, the smartphone-quadcopter could also be used in less wealthy regions of the world – for instance to monitor illegal forest clearance without having to use expensive helicopters.

The components of the quadcopter cost less than a thousand euros, says the team. However, the many months of work spent designing the electronics and developing the computer programs are not included in this calculation.

Further Information:
Dr. Hannes Kaufmann
Institute of Software Technology and Interactive Systems
Vienna University of Technology
Favoritenstraße 9-11
T: +43-1-58801-18860
kaufmann@ims.tuwien.ac.at

Dipl-Inf.(FH) Annette Mossel
Institute of Software Technology and Interactive Systems
Vienna University of Technology
Favoritenstraße 9-11
T: +43-1-58801-18893
annette.mossel@tuwien.ac.at

Source: Vienna University of Technology

 

ICRA 2013. Haptics

“Haptics is the science of understanding and improving human interaction with the physical world through the sense of touch. Haptic interfaces are computer-controlled electro-mechanical systems that enable a user to feel and manipulate a real, remote, or virtual environment. They often take the form of a lightweight, backdrivable robotic arm, measuring the motion of the human hand and providing appropriate force feedback throughout the interaction; other haptic interfaces focus on tactile interactions directly through the skin. Haptic interfaces for real interactions can be configured to steady the hand of an eye surgeon during delicate interventions or guide the hand of an individual assembling tiny mechanical components. When applied to teleoperation, haptic interfaces allow the user to dexterously control the motion of a robot manipulator in an unreachable environment, such as the depths of the sea or the operative site in minimally invasive robot-assisted surgery. Lastly, haptic interfaces can be connected to a computational model of a physical environment to facilitate training of manual skills like medical procedures or to augment more general human-computer interactions for education or entertainment”. Source:  Haptics Group, part of the GRASP Lab at the University of Pennsylvania.
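A common way to make the force-feedback idea concrete is a "virtual wall": when the measured hand position penetrates a virtual surface, the device pushes back with a spring-damper force. The sketch below is only an illustration of that principle; the stiffness, damping, and positions are assumed values, not parameters of any particular device mentioned above.

```python
# Virtual-wall haptic rendering sketch: spring-damper force on penetration.
def virtual_wall_force(position, velocity, wall=0.0, k=800.0, b=2.0):
    """Return the feedback force (N) for a wall at `wall` metres.
    k is the virtual stiffness (N/m), b the damping (N*s/m)."""
    penetration = wall - position          # positive once inside the wall
    if penetration <= 0.0:
        return 0.0                         # free space: no force
    return k * penetration - b * velocity  # push the hand back out

# hand 1 cm inside the wall, still moving inward at 0.05 m/s
print(virtual_wall_force(-0.01, -0.05))
```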

At the IEEE International Conference on Robotics and Automation (ICRA) 2013, haptics was all over the place. It is a tricky thing to experiment with, because it requires sophisticated sensors and equally sophisticated software to understand what the sensors are saying. The paper “Using Robotic Exploratory Procedures to Learn the Meaning of Haptic Adjectives,” by Vivian Chu, Ian McMahon, Lorenzo Riano, Craig G. McDonald, Qin He, Jorge Martinez Perez-Tejada, Michael Arrigo, Naomi Fitter, John C. Nappo, Trevor Darrell, and Katherine J. Kuchenbecker, from the University of Pennsylvania and the University of California, Berkeley, was presented at ICRA 2013 in Germany.

Translating such sensor data into something that a human can understand is especially difficult, but in a paper presented this week, a PR2 robot equipped with an innovative finger sensor from SynTouch has been taught to use touch exploration to associate objects with “tactile adjectives.” A tactile adjective is a word like “squishy.” “Fuzzy” is another one, and so is “crinkly.” Humans easily understand what those terms mean. But robots still have a lot to learn. The study had a bunch of humans feel up a set of common household items, resulting in 34 adjective labels:

PR2 robot learns haptic adjectives.

The researchers then had a PR2 with the BioTac tactile finger sensor perform a series of exploratory procedures on the same set of objects, including tapping, squeezing, holding, and both slow and fast sliding. Here’s a video of PR2 exploring a folded satin pillowcase through touch:

After training the PR2 by correlating haptic sensor data with adjectives from humans who touched the same objects, the robot was tested on a series of objects that it had never experienced before to see whether it would derive the same haptic adjectives as humans do. And it worked. As shown in the video above (although it flashes past pretty quickly at the end), humans described the folded satin pillowcase as “compact, compressible, deformable, smooth, and squishy,” while the robot thought it was “compact, compressible, crinkly, smooth, and squishy.” I’m not sure where “crinkly” came from, but the rest of it is pretty close, and it’s very impressive for words that have a tendency toward subjectivity. The researchers summarize:
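One simple way to picture the training step described above is a set of per-adjective binary classifiers fitted on tactile features and the human-provided labels, then applied to an unseen object. The sketch below (using scikit-learn, with invented toy features) illustrates that idea only; the paper's actual pipeline works on much richer BioTac signals and uses its own learning methods.

```python
# Per-adjective binary classifiers over toy tactile features.
from sklearn.linear_model import LogisticRegression  # assumes scikit-learn is installed

# toy tactile features per object: [compliance, roughness, thermal_flow]
train_features = [[0.9, 0.1, 0.3], [0.2, 0.8, 0.6], [0.85, 0.15, 0.25], [0.3, 0.7, 0.5]]
train_labels = {                     # 1 = humans applied the adjective
    "squishy": [1, 0, 1, 0],
    "smooth":  [1, 0, 1, 0],
    "crinkly": [0, 1, 0, 1],
}

classifiers = {
    adjective: LogisticRegression().fit(train_features, labels)
    for adjective, labels in train_labels.items()
}

unseen_object = [[0.88, 0.12, 0.28]]   # e.g. features from a previously unfelt object
predicted = [a for a, clf in classifiers.items() if clf.predict(unseen_object)[0] == 1]
print(predicted)
```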
The presented results prove that a robot equipped with rich multi-channel tactile sensors can discover the haptic properties of objects through physical interaction and then generalize this understanding across previously unfelt objects. Furthermore, we have shown that these object properties can be related to subjective human labels in the form of haptic adjectives, a task that has rarely been explored in the literature, though it stands to benefit a wide range of future applications in robotics.
Source: Evan Ackerman, IEEE Spectrum, 13 May 2013

Machine Haptics

 

Exoskeleton. Mindwalker

Mindwalker, from the University of Twente (Netherlands). A lack of mobility often leads to limited participation in social life. The purpose of this STREP is to conceive a system that empowers people with lower-limb disabilities to walk, letting them perform their usual daily activities in the most autonomous and natural manner.

As of May 2013, after successful ethical board approval, the whole MINDWALKER setup had been shipped from the University of Twente (Netherlands) to the Santa Lucia Foundation (Italy) in March, and clinical evaluation has been carried out since. About 20 trials have been performed so far, with 5 spinal-cord-injured patients of the Santa Lucia Foundation. The evaluation results will be reported in the project’s deliverables and will allow road-mapping the improvements required to turn this MINDWALKER prototype into a mature product.

The project addresses three main fields of expertise:

  • BCI technologies
  • Virtual Reality
  • Exoskeleton Mechatronics and Control

The project’s top-level objective is to combine these areas of expertise to develop an integrated MINDWALKER system. In addition, the system shall undergo a clinical evaluation process.

Mindwalker Project Research Objectives

Approaches

New smart dry EEG bio-sensors will be applied to enable lightweight wearable EEG caps for everyday use.

Novel approaches to non-invasive BCI will be tested in order to control a purpose-designed lower-limb orthosis enabling different types of gait. Complementary research on EMG processing will strengthen the approach. The main BCI approach relies on Dynamic Recurrent Neural Network (DRNN) technology.
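As a generic illustration of what decoding a gait command from EEG with a recurrent network can look like, the sketch below runs a sequence of EEG features through a small Elman-style recurrent layer and picks a command. It is not the project's actual DRNN; the architecture, weights, and feature values are placeholders.

```python
# Toy recurrent decoder: EEG feature sequence -> discrete gait command.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_commands = 8, 16, 3   # e.g. stop / walk / turn

W_in = rng.normal(scale=0.1, size=(n_hidden, n_features))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_commands, n_hidden))

def decode(eeg_sequence):
    """Run the feature sequence through the recurrent layer and return the
    index of the highest-scoring gait command at the final time step."""
    h = np.zeros(n_hidden)
    for x in eeg_sequence:
        h = np.tanh(W_in @ x + W_rec @ h)     # recurrent state update
    return int(np.argmax(W_out @ h))

eeg_sequence = rng.normal(size=(50, n_features))  # 50 time steps of features
print(decode(eeg_sequence))
```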

A Virtual Reality (VR) training environment will assist the patients in generating the correct brain control signals and in properly using the orthosis. The VR training environment will comprise both a set of components for the progressive patient training in a safe and controlled medical environment, and a lightweight portable set using immersive VR solutions for self-training at home.

The orthosis will be designed to support the weight of an adult, to address the dynamic stability of a body-exoskeleton combined system, and to enable different walking modalities.

Evaluation

The developed technologies will be assessed and validated through a formal clinical evaluation procedure. This will make it possible to measure the strengths and weaknesses of the chosen approaches and to identify the improvements required to build a future commercial system. In addition, the resulting system will be progressively tested in everyday-life environments and situations, ranging from simple activities at home to, eventually, shopping and interacting with people in the street.

Public Material

Project’s Leaflet – May 2013