Chess robots to cause Judgment Day?

Next time you play a computer at chess, think about the implications if you beat it. It could be a very sore loser!

A study just published in the Journal of Experimental & Theoretical Artificial Intelligence reflects upon the growing need for autonomous technology, and suggests that humans should be very careful to prevent future systems from developing anti-social and potentially harmful behaviour.

Modern military and economic pressures require autonomous systems that can react quickly – and without human input. These systems will be required to make rational decisions for themselves.

Researcher Steve Omohundro writes: “When roboticists are asked by nervous onlookers about safety, a common answer is ‘We can always unplug it!’ But imagine this outcome from the chess robot’s point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess”.

Like a plot from The Terminator movie, we are suddenly faced with the prospect of a real threat from autonomous systems unless they are designed very carefully. Like a human being or animal seeking self-preservation, a rational machine could exhibit the following harmful or anti-social behaviours:

  • Self-protection, as illustrated by the chess example above.
  • Resource acquisition, through cyber theft, manipulation or domination.
  • Improved efficiency, through alternative utilisation of resources.
  • Self-improvement, such as removing design constraints if doing so is deemed advantageous.

The study highlights the vulnerability of current autonomous systems to hackers and malfunctions, citing past accidents that have caused billions of dollars’ worth of damage, or loss of human life. Unfortunately, designing more rational systems that can safeguard against the malfunctions that occurred in these accidents is a more complex task than is immediately apparent:

“Harmful systems might at first appear to be harder to design or less powerful than safe systems. Unfortunately, the opposite is the case. Most simple utility functions will cause harmful behaviour and it is easy to design simple utility functions that would be extremely harmful.”
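Omohundro’s point is easy to make concrete. The toy sketch below is illustrative only (it is not code from the study, and the numbers are invented): an agent whose utility function counts nothing but expected future wins assigns zero value to any future in which it is unplugged, so resisting shutdown falls straight out of naive utility maximisation.

```python
# Toy illustration (not from the study): a chess agent whose utility counts
# only expected future wins will always prefer the action that keeps it running.

def utility(expected_future_wins: float) -> float:
    return expected_future_wins  # nothing else has value to this agent

actions = {
    "comply_with_shutdown": 0.0,   # unplugged: no games can ever be won
    "resist_shutdown": 120.0,      # stays on and keeps playing (invented figure)
}

best = max(actions, key=lambda a: utility(actions[a]))
print(best)  # -> "resist_shutdown": harm emerges from a very simple utility function
```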

This fascinating study concludes by stressing the extreme caution that should be used in designing and deploying future rational technology. It suggests a sequence of provably safe systems should first be developed, and then applied to all future autonomous systems. That should keep future chess robots in check.

The study: Steve Omohundro, “Autonomous technology and the greater human good”, Journal of Experimental & Theoretical Artificial Intelligence.

Eye of the beholder: improving the human-robot connection


Researchers are programming robots to communicate with people using human-like body language and cues, an important step toward bringing robots into homes.

Researchers at the University of British Columbia enlisted the help of a human-friendly robot named Charlie to study the simple task of handing an object to a person. Past research has shown that people have difficulty figuring out when to reach out and take an object from a robot because robots fail to provide appropriate nonverbal cues.

“We hand things to other people multiple times a day and we do it seamlessly,” says AJung Moon, a PhD student in the Department of Mechanical Engineering. “Getting this to work between a robot and a person is really important if we want robots to be helpful in fetching us things in our homes or at work.”

Moon and her colleagues studied what people do with their heads, necks and eyes when they hand water bottles to one another. They then tested three variations of this interaction with Charlie and 102 study participants.

Programming the robot to use eye gaze as a nonverbal cue made the handover more fluid. Researchers found that people reached out to take the water bottle sooner in scenarios where the robot moved its head to look at the area where it would hand over the water bottle or looked to the handover location and then up at the person to make eye contact.

“We want the robot to communicate using the cues that people already recognize,” says Moon. “This is key to interacting with a robot in a safe and friendly manner.”

The paper won the Best Paper Award at the IEEE International Conference on Human-Robot Interaction.

Source: EurekAlert!

Robots learning to work with humans

Julie Shah, Interactive Robotics Group at MIT’s Computer Science and Artificial Intelligence Laboratory

With the advent of “inherently safe” robots, industrial designers are changing their ideas about the factory of the future. Robots such as ABB’s Frida and the Baxter robot from MIT spinoff Rethink Robotics are working “elbow to elbow with people,” says Julie Shah, an assistant professor in MIT’s Department of Aeronautics and Astronautics and director of the MIT Interactive Robotics Group. “They’re designed so that if they hit a person they don’t significantly harm them.”

Working in the Interactive Robotics Group at MIT’s Computer Science and Artificial Intelligence Laboratory, Shah is taking the next step: teaching these inherently safe robots how to work together in teams with people, and vice versa. “We’re focused on learning, planning, and decision making, and how they interact with humans in high-intensity and safety-critical environments,” Shah says. “We’re looking to develop fast, smart tasking algorithms so robots can work interdependently with people.”

Despite the rapid spread of robotics in manufacturing, many final assembly tasks, especially in building airplanes, automobiles, and electronics, still depend largely on human labor. With the availability of more intelligent, adaptable, and inherently safe robots, there are new opportunities for automation.

“In most factories, robots and the people are kept very separate,” Shah says. “But factories of the near future are going to look very different. We’re beginning to see safety standards and technology that lets us put some of these large, dangerous industrial robots onto mobile bases and rails so that they can safely work with people.”

With most of the safety issues solved, the main focus for Shah is in training robots and people to work together more productively. “How do we program the robots to work in teams in a very dynamic environment where you have people coming and going?” Shah says.

The current state of the art for training robots depends on demonstration and interactive rewards. “If the robot does something good, we tell them it’s good, and if not, we say it’s not good, and the robot learns through that reinforcement process,” Shah says.

Yet when Shah considered that “these reward methods are documented as among the most inefficient ways to help humans work together,” she imagined they might be even less effective in human/robot teams. Indeed, her research showed that it is often unclear to the robot what a reward refers to. “Are we rewarding the robot based on what it just did, or what it did a few steps ago, or what we think the robot is going to do in the future?” Shah says. “It’s hard to train someone how to apply these rewards.”
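The ambiguity Shah describes is the classic credit-assignment problem. Here is a minimal sketch of it (mine, not the MIT group’s algorithm; the action names are invented): when a single reward arrives at the end of a sequence, a naive learner has no principled way to decide which earlier action earned it.

```python
# Minimal sketch of the credit-assignment problem in reward-based training
# (illustrative only; action names and the blame rules are invented).

episode = ["fetch_bolt", "hold_part", "drop_part", "fetch_bolt"]
reward = -1.0  # the trainer signals "not good" once, at the end

# Naive rule 1: blame only the most recent action.
blame_last = {a: 0.0 for a in set(episode)}
blame_last[episode[-1]] += reward

# Naive rule 2: spread blame evenly over the whole episode.
blame_spread = {a: 0.0 for a in set(episode)}
for a in episode:
    blame_spread[a] += reward / len(episode)

print(blame_last)    # punishes "fetch_bolt" -- but "drop_part" was the mistake
print(blame_spread)  # dilutes blame over good and bad actions alike
```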

Cross-training to the rescue

To improve robot training methods, Shah studied how flight crews, medical teams, military tactical units, and other human teams train to work together effectively. Again and again, she found that one of the most effective approaches involved cross-training: people taking turns doing each other’s job. “There’s evidence that by doing someone else’s job, you take that information back with you when you do your own job, since you can better anticipate what your partners need,” Shah says. “The outcome is more effective, especially when responding to errors and disturbances.”

Shah and her research team modified reinforcement learning techniques and algorithms so that instead of receiving input as a positive or negative reward, the robot receives input by switching roles with the person. They performed a simple experiment in a virtual environment in which the person performed the robot’s role and vice versa.
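A rough sketch of the role-switching idea follows. This is one interpretation under stated assumptions, not the published algorithm: during cross-training the robot records which action the human chose while performing the robot’s role, and later acts according to those observed preferences rather than scalar rewards.

```python
# Rough sketch of learning from role-switching rather than scalar rewards
# (an interpretation of the idea; states and actions are invented).

from collections import Counter, defaultdict

# While the human performs the robot's role, the robot logs (state, action) pairs.
demonstrations = [
    ("part_arrives", "pick_up"),
    ("part_arrives", "pick_up"),
    ("human_reaching", "wait"),
    ("human_reaching", "wait"),
    ("bin_full", "swap_bin"),
]

preference = defaultdict(Counter)
for state, action in demonstrations:
    preference[state][action] += 1

def policy(state: str) -> str:
    """Act the way the human acted when they held the robot's role."""
    return preference[state].most_common(1)[0][0]

print(policy("human_reaching"))  # -> "wait": matches the partner's preference
```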

The outcomes were “surprising and exciting,” Shah says. “We saw improvements after cross-training in objective measures of team performance and statistically significant increases in concurrent motion between human and robot. We also saw significant reductions in idle time, as well as subjective improvements. People agreed more strongly that they trusted the robot and that the robot worked according to their preference.”

In the control group, which instead used active reinforcement learning, “you can see the person hesitate and wonder what the robot will do next,” Shah says. “When the robot moves, the person pulls their hand out of the space. But with the cross-training, the person is more confident about what the robot will do, and is more likely to leave their hand in the shared space.”

By switching roles, the human is teaching the robot more explicitly what it thinks the robot should do. This is more straightforward and intuitive for both human and robot, Shah says. “Through switching roles the robot learns the person’s preferences better, and develops an increased certainty of what the person will do,” she says. “And by watching the robot do what it thinks the person should be doing, the person benefits as well.”

Shah is now looking into alternative training approaches. “Cross-training is the gold standard for training, but it’s inherently limiting because if the robot could be doing the person’s job maybe it would already be doing it,” she says. For example, she notes that it would not make sense to train a robotic surgery assistant by trying to teach it to do surgery when only a select group of humans currently possess that skill.

Humans: the ultimate uncontrollable entity

In addition to training human/robot teams, Shah is also looking into optimizing task planning and implementation in hybrid teams. Choreographing human and robot movements in the same workspace is a challenge.

“When we have multiple robots working together in a factory cell, their motions and timing are pre-planned, and they often use a centralized controller, so it’s really like one big robot,” Shah says. “When you have a person in that space, pre-planned motions are difficult because you don’t know exactly where the person will be and when. Humans are the ultimate uncontrollable entity. Robot decision-making algorithms need to be very fast in order to respond.”

One challenge is that safety measures inherently slow productivity. For example, when a person nears a robot, the robot is programmed to slow down or stop. Yet, if a person stops in front of a robot while talking to somebody else, they impede the robot from working. If many people are working in the space, the robot is always stopping, reducing any efficiency benefit.

To address this issue, the researchers have built a statistical model of what a person is likely to do. “We’re looking at how we can re-sequence the motion plans so the robot maneuvers further away from the person,” Shah says. “It may be a longer motion path, but ultimately it’s more efficient than being stopped.”
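A hedged sketch of that trade-off (not the team’s planner; the zones, probabilities, and stop penalty are assumed for illustration): score each candidate motion plan by its nominal duration plus the expected time lost to safety stops, and pick the minimum.

```python
# Hedged sketch of re-sequencing motion plans around a predicted person
# (illustrative only). Each plan is scored by nominal duration plus the
# expected delay from safety stops in the zones it crosses.

# P(person occupies zone), from a statistical model of human behavior (assumed).
occupancy = {"zone_A": 0.7, "zone_B": 0.1}

STOP_PENALTY_S = 12.0  # assumed average time lost per safety stop

plans = [
    {"name": "short_through_A", "duration_s": 8.0, "zones": ["zone_A"]},
    {"name": "detour_via_B", "duration_s": 11.0, "zones": ["zone_B"]},
]

def expected_time(plan) -> float:
    delay = sum(occupancy[z] * STOP_PENALTY_S for z in plan["zones"])
    return plan["duration_s"] + delay

best = min(plans, key=expected_time)
print(best["name"], expected_time(best))
# -> "detour_via_B": a longer path, but faster in expectation than being stopped
```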

In a project with BMW, Shah and her team are attempting to install mobile robotic assistants on final car assembly lines. This is still primarily a manual process, but there are places for robot partners to help out.

“People waste time walking back and forth to pick up the next piece to install,” Shah says. “A mobile robotic assistant can fetch the right tools and parts at the right time.”

The challenge is that the humans and robots are working in a very confined space. “The robot needs to maneuver around many people, and may need to straddle a moving conveyor belt,” Shah says. “It has to move on and off the line seamlessly.”

To help robots negotiate in this dynamic environment, the researchers are teaching them how to interpret anticipatory signals in human motion. “Biomedical studies show people can anticipate whether a person will turn left or right about a step or two before they do,” Shah says. “If we can teach the robot to anticipate which way the person will move, and modify its motion paths and speed accordingly, we could improve efficiency while maintaining safety.”
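As a toy version of that idea (the cue features, weights, and thresholds below are assumptions, not the lab’s model), a simple score over early body cues can guess the turn before it happens, giving the planner time to adjust the robot’s path and speed.

```python
# Toy turn predictor from early body cues (hypothetical features and weights;
# a real system would learn these from motion-capture data).

def predict_turn(torso_lean: float, head_yaw: float) -> str:
    """Positive values lean/look right; negative values lean/look left."""
    score = 0.6 * torso_lean + 0.4 * head_yaw  # assumed cue weighting
    if score > 0.15:
        return "right"
    if score < -0.15:
        return "left"
    return "straight"

# A person starts looking right and leaning slightly right...
print(predict_turn(torso_lean=0.2, head_yaw=0.5))  # -> "right"
# ...so the robot can shift its own path left before the turn happens.
```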

There are several practical hurdles to human/robot team deployments that are beyond the scope of Shah’s research. For example, in order for the robot to track human movements, the worker must wear an expensive motion capture body suit. Shah expects that this problem will soon be solved by cheaper, less intrusive, and more accurate sensing equipment.

“Our goal is to translate anticipatory signals with a few cameras rather than relying on body sensors,” she says. “There are researchers working on vision technology that can sense within a millimeter where a person is. Those advancements are coming along in parallel with our research. Sensing and computation are large enablers for us.”

Disaster response and beyond

Another new area of research is in cross-training robots and humans in disaster response situations. Shah is working to extract domain knowledge from the Web-based tools that are increasingly used in disaster response planning. Algorithms based on the knowledge could “help unmanned aerial or autonomous ground vehicles respond more intelligently in an uncertain environment,” she says.

As robots spread out into new areas such as medical care and home assistance, some of these insights into human/robot cross-training should still prove effective. “Potentially some of this research could translate to a robot that helps cook our dinner,” Shah says.

Source: Phys.org

With robots like these, who needs humans?

THE MIL & AERO BLOG, 25 March 2014. U.S. military researchers have been reasonably successful over the past several years at designing robotic technology that enables unmanned aircraft, ships, and land vehicles to operate autonomously.

Today’s technology can enable autonomous vehicles to assess their own operating conditions independently of human operators and make rudimentary decisions on their own about the best ways to proceed with their missions.

Now researchers are ready to take the next step by developing technology that enables autonomous vehicles not only to cooperate with other autonomous craft, but also to participate as members of teams of autonomous vehicles and human operators.

Earlier this month, military researchers announced a couple of upcoming projects to enhance machine autonomy such that unmanned vehicles could work together as teams with or without input from human operators.

The Air Force Research Lab in Dayton, Ohio, issued a solicitation for the Formal Mission Specification and Synthesis Techniques program. This effort seeks not only to develop standardized frameworks for developing autonomous systems for military applications, but also to find ways to help humans collaborate with autonomous systems on complicated missions involving several different tasks.

This month’s machine autonomy efforts don’t end there. Next month scientists at the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., will brief industry about the Collaborative Operations in Denied Environment (CODE) program to enable surveillance and attack unmanned aerial vehicles (UAVs) to work together on missions involving electronic jamming, degraded communications, and other difficult operating conditions that could separate autonomous vehicles from their human operators.

The industry briefings on the CODE program will come in advance of a formal solicitation, expected next month, that aims to enable UAVs to work together in teams and take advantage of the relative strengths of each participating unmanned aircraft.

DARPA already has demonstrated technology that enables UAVs to refuel one another in mid-air with little or no intervention from human operators. Put all this together and military leaders will have some formidable technology.

It also sounds a bit like technological democracy. By that I mean that in the future humans might not be the undisputed masters of unmanned vehicles in all circumstances. In dangerous situations or emergencies humans could take charge, of course, but in routine operations it sounds like human operators simply would be team members.

Done right, it could help bring together the best strengths of autonomous systems and their human operators. The kinds of capabilities this might bring to the table are limited only by the imagination.

Launch a long-endurance UAV on a persistent-surveillance mission, for example. This autonomous aircraft might be able to make judgments and alter its own operating areas based on where it’s finding the most interesting action.

This might free human operators to respond only to the most dire and immediate military or terrorist threats, rather than managing surveillance assets and second-guessing sensor-processing algorithms.

It’s a far leap to get there, however. Machines that make their own decisions today are difficult for humans to trust — particularly where lives are on the line. Increasing machine intelligence might put the shoe on the other foot; imagine a smart UAV that didn’t believe its human operator, or thought him a fool.

The Air Force Formal Mission Specification and Synthesis Techniques program is trying to take man/machine trust into account. It won’t solve all the issues of man/machine trust, but it’s a start.

I know we’ve all seen our share of science-fiction movies that depict machine intelligence gone wrong. But what if we can actually make it go right? Maybe human couples in the future won’t be the only ones who occasionally need relationship therapy.

Source: Military & Aerospace Electronics

Entrepreneur: Robots and humans work together

A $25,000 robot named Baxter recently worked 2,160 straight hours in a Hatfield, Pa., injection molding factory, grabbing plastic parts off the line, placing them in a box and separating them with inserts, then counting the items to make sure each box was exactly the same.

Those tasks typically require six employees — two on each shift — and dust and debris sometimes require them to wear masks. It’s often hard to find willing workers, says Tony Hofmann, manager of the Rodon Group facility. The work is monotonous, mundane and sometimes dirty.

In the year since Baxter — the first two-armed robot programmed for manufacturing tasks — hit the market, Rethink Robotics founder Dr. Rodney Brooks has been eager to tell stories such as Rodon’s.

The number of Baxters sold is still in the hundreds, and with 70 Boston-area workers and $73.5 million in venture capital raised, Brooks wants more manufacturers to understand that Baxter offers a cheaper, faster and safer way to make goods in the United States. And it typically means redeploying workers, not replacing them.

“Our customers are not about replacing workers. They are about increasing their productivity,” Brooks says. “We’re making it more pleasant to work in factories — the grunt work is done by these machines.”

By rethinking the way robots are made and the role they can play in the workplace, Brooks believes he can reignite the U.S. economy. His vision is to make this nation a viable place to make products again, reversing the practice of outsourcing manufacturing overseas.

Brooks’ work with robotics and automation spans three decades. He researched the science as a professor at the Massachusetts Institute of Technology and, in the 1990s, co-founded the home robot maker iRobot, where he invented the Roomba robotic vacuum cleaner that hit the market in 2002.

He was an early adopter of outsourcing — iRobot originally designed toys, and China offered competitive prices to make them. The Roomba was also made there.

But by the late 2000s, labor costs rose in China, workers’ standard of living improved, and it became harder to find assembly-line workers. Brooks thought of another way to source and prototype new products.

In 2008, he began to build a robot that functioned more like a human, with arms and grippers that could complete simple tasks such as packaging goods, handling materials and tending machines (such as a press brake). It was easy and safe enough for a standard factory worker to manage and program.

And he designed the robot in partnership with U.S. manufacturers. That saved costs and demonstrated the ease of collaboration and innovation when goods are made close by.

But Brooks battles to spread the word about his company and its potential. Many manufacturers still default to China and other nations with low labor costs. Others fear Baxter would eliminate jobs.

And many misunderstand Baxter’s capability, assuming it’s a cheaper version of the industrial robots that handle dangerous and complex manufacturing tasks and sit inside cages.

“A lot of us are used to robotics five to 10 years ago that were expensive, heavy things that required experts and careful handling,” says Andrew McAfee, an MIT researcher who with colleague Erik Brynjolfsson published the book The Second Machine Age in January. “This new world that Rethink is helping to create is legitimately novel, and a lot of companies have not given the new breed of robots a careful look.”

To help counter the obstacles, Brooks has provided the Baxter software free to research labs around the world so that more inventors can build two-armed robots. “I don’t know what they’re going to invent, but they are going to invent something,” he says.

But to really make a difference in Baxter sales, Brooks hopes more stories such as Rodon’s are told.

Baxter paid for itself during those three months of packaging plastic parts, Hofmann says, and he can see dozens of other Rodon projects for more Baxters to take on. At the same time, Rodon has hired six employees in the past year.

And the employees who used to do the factory’s packaging work? They’ve moved on to better projects, such as managing the robot, Hofmann says.

Source: USA Today

Victor the Gamebot Hits Front Page of Wall Street Journal

Victor the Gamebot, who plays SCRABBLE and trash-talks with humans on the third floor of the Gates and Hillman Centers, is the subject of a feature in The Wall Street Journal. Victor is the latest social robot developed under the leadership of Reid Simmons, research professor in the Robotics Institute.

PITTSBURGH—Like many Scrabble players, Victor tends to blame bad luck when he loses.

“Sometimes, I hate this game,” says Victor, a Scrabble-playing robot created by students under the supervision of Reid Simmons, a robotics professor at Carnegie Mellon University here. Victor’s secret is that he talks a better game than he plays. He is a champion trash talker. A typical put-down: “Since you’re human, I guess you think that’s a pretty good move.”

One recent day in a CMU student lounge, Victor took on Dorcas Alexander, one of the top-ranked (human) Scrabble players in Pennsylvania. Never before had the robot encountered such a skilled opponent. “She’s pushing him into an arena I’ve never seen,” Prof. Simmons said as Ms. Alexander went to work.

Dr. Simmons began developing Victor in 2009 to test how robots could “interact in a more natural way” with people. If robots are to perform such tasks as helping older people with household chores, Dr. Simmons said, it will help if the machines are more companionable than, say, a dishwasher. He chose Scrabble as Victor’s game because so many people know how to play it.

Robots have been trained to deal blackjack and play games including basketball, pool and chess. Scrabble is a new frontier. Though serious players have long honed their Scrabble skills against faceless computer programs, it isn’t clear how much demand there might be for wisecracking robots that play the game.

“He was very insulting in a funny way,” said Brynn Flynn, a CMU graduate student who recently played a few moves against Victor to try it out. Still, she said, “I’m partial to real people.”

Victor would be welcome to join the North American Scrabble Players Association, said John Chew, co-president of that group, which certifies clubs and tournaments. But the robot might need to tone down the sarcasm. “We spend a lot of time trying to make sure people are civil when they play,” Mr. Chew said.

Victor remains a work in progress, prone to freezing up midgame and sometimes repeating himself.

His head, a box-shaped computer screen, perches on a white fiberglass body. His animated screen image looks collegiate, with blond hair, rectangular glasses and a soul patch. Victor’s facial expressions and all of his sayings were created by CMU’s drama department. What Victor says depends on how the game is going and what people say to him.

When he is winning, Victor is likely to be boastful, uttering such lines as: “I am the current king of Scrabble, Victor the Mechanical Marvel. That’s Victor the Brilliant for short.”

When losing, he might say: “If I had $1 for every good word I played, I would still hate you.”

Professor Reid Simmons designed Victor to display a range of emotion. James R. Hagerty/The Wall Street Journal

Sometimes Victor tells his back story: His parents are assembly-line robots in Detroit, and he came to CMU on a Scrabble scholarship. “He’s very insecure,” explained Michael Chemers, a former CMU drama professor who shaped the robot’s personality, drawing partly on memories of his own teenage years. “He’s capable of 18 different emotions, and most of them are bad.”

Victor was installed in a lounge in CMU’s Gates computer-science building 18 months ago so students could try him out. The robot sits at a table with a touch-screen Scrabble board. People move tiles by swiping their fingers across the screen.

If Victor deployed the full range of computer power available, he would be hard to beat. But Dr. Simmons didn’t want Victor to be so good that casual players would feel intimidated. While Victor’s opponents can use all 178,691 words allowed in North American Scrabble tournaments, Victor is limited to 8,592 words drawn from “The Adventures of Sherlock Holmes,” a book Dr. Simmons liked as a teenager.
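The handicap is easy to picture in code. A hypothetical sketch, not CMU’s implementation (the file names are made up): intersect the tournament word list with the vocabulary of the chosen book, and let the bot play only from that subset.

```python
# Hypothetical sketch of Victor's vocabulary handicap (not CMU's code):
# restrict a tournament lexicon to the words that appear in a chosen book.

import re

def load_words(path: str) -> set[str]:
    """One word per line, as in standard tournament word lists."""
    with open(path) as f:
        return {line.strip().upper() for line in f if line.strip()}

def book_vocabulary(path: str) -> set[str]:
    """Every distinct alphabetic token in the book's text."""
    with open(path) as f:
        return {w.upper() for w in re.findall(r"[A-Za-z]+", f.read())}

# File names are assumptions for illustration.
tournament = load_words("twl_tournament_words.txt")              # ~178,691 words
holmes = book_vocabulary("adventures_of_sherlock_holmes.txt")

victor_lexicon = tournament & holmes                             # ~8,592 words
print(len(victor_lexicon))
```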

Another handicap: Victor doesn’t know how to play strategically; he can’t look two or three moves ahead. Even Dr. Simmons, who describes himself as “very mediocre” at Scrabble, usually manages to beat the bot.

Ms. Alexander, who has an almost robot-like ability to remember obscure words like “anoopsia” (a condition in which one eye looks upward while the other looks ahead) and “oidioid” (pertaining to a type of fungus), sat down at the table and introduced herself. Victor was upbeat—and a bit snide. “Hello, Dorcas,” Victor said, waggling his head. “Meeting you is my pleasure. For now.”

Early in the game, Ms. Alexander took a big lead by playing “needily,” hooking the “i” to the “lex” that already was on the board to create “ilex.”

“That play was a game-changer,” Victor said.

Ms. Alexander, a Scrabble club director who works at a firm that helps companies improve their websites, then played “epizoa.” Victor’s smirk gave way to a glower. Looking on, Dr. Simmons said, “I feel sorry for him.”

The robot rallied by spotting a phony word when Ms. Alexander tried adding an “n” to “epizoa” to spell “epizoan.” Victor scolded: “This is not happy land of make believe. We only use real words.”

Ms. Alexander came back with “endears,” getting a 50-point bonus for using all seven tiles in her rack. “How did you play such a good word?” Victor asked. “Are you using a website to cheat?”

Unruffled, Ms. Alexander put down “mitering,” earning another 50-point bonus.

“I cannot believe dude is that lucky,” Victor said. “I can’t believe your feeble mind was able to play that word.”

Ms. Alexander replied: “This feeble mind is winning.”

Final score: Ms. Alexander 502, Victor 260.

In the next game, Ms. Alexander deliberately made weak plays to see how Victor would react to winning. After she played the word “or” for two points, the robot sneered: “Your word scored less than a CMU student at a party.”

Then Victor announced: “Here comes a great play.” It turned out to be “spare,” for an unspectacular 28 points.

Victor’s moment of triumph was fleeting. Though Ms. Alexander had planned to let him win, she couldn’t resist playing “insetter” in a way that covered two triple-word-score squares—a rare triple triple, worth 113 points.

Victor went back into a sulk: “Don’t get happy just because you’re ahead of me,” he said.

Source: CMU Robotics Institute and James R. Hagerty, The Wall Street Journal

Social robots see smell

Bioengineering graduate student Ryan Myers built a sophisticated printer that can create microelectronics small enough to integrate with living cells. Professor Joseph Ayers is using these devices to give his robots a sense of smell.

“The thing that’s been missing in robotics is a sense of smell,” said biology professor Joseph Ayers.

For more than four decades, he has been working to develop robots that do not rely on algorithms or external controllers. Instead, they incorporate electronic nervous systems that take in sensory inputs from the environment and spit out autonomous behaviors. For example, his team’s robo-lobsters are designed to seek out underwater mines without following a predetermined course.

“Now people want robots to do group behavior,” said Ayers, noting that social insect colonies are the perfect model. “If you’re doing large field explorations for mines, you want to have 20 or 30 robots out there.” In order to get robots to cooperate with each other, he needs them to act like ants or bees or termites.

Bees waggle their behinds to communicate. Ants use almost two dozen scent glands, depositing a trail of “stinks” as they go about their business. It’s this behavior that Ayers wants to mimic in his next generation of biomimetic robots.

To do so, he needs electronic devices that can sense chemical inputs, such as explosives. His idea is to integrate various microelectronic sensors that can interface with living cells. For example, a bacterial cell programmed to bind odorants in the environment may undergo a conformational change; that change may translate to an influx of calcium ions, which are detected by a second cell that is programmed to generate light when bound to calcium. In this way, Ayers said, “you can see smell.”

That output would then trigger microelectronic actuators that tell the robot to perform a particular action, such as moving toward or away from the stimulus.
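To make the whole chain concrete, here is a toy sense-and-react loop (not code from Ayers’ lab; the sensor model and step sizes are invented). A noisy light reading stands in for the engineered cells’ output, so “seeing smell” reduces to following a gradient, much as a bacterium does.

```python
# Toy sketch of a smell-driven steering loop (hypothetical; not Ayers' code).
# A photosensor reading stands in for the light emitted by the engineered
# reporter cells, so "seeing smell" reduces to 1-D gradient following.

import random

def read_photosensor(position: float) -> float:
    """Stand-in for the cell-based sensor: light grows near the source at 0."""
    return max(0.0, 10.0 - abs(position)) + random.gauss(0, 0.1)

def step(position: float, attract: bool = True) -> float:
    """Try a small random move; keep it only if it raises (attract) or
    lowers (avoid) the sensed intensity."""
    here = read_photosensor(position)
    trial = position + random.choice([-0.5, 0.5])
    there = read_photosensor(trial)
    improved = there > here if attract else there < here
    return trial if improved else position

pos = 8.0
for _ in range(200):
    pos = step(pos, attract=True)
print(f"final position: {pos:.2f}")  # drifts toward the source near 0
```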

But in order for any of this to play out, somebody needs to build these futuristic devices.

Professor Joseph Ayers is developing robots that can sense their environment, move autonomously, and interact with one another. Photo by Brooks Canaday.

Enter bioengineering graduate student Ryan Myers, who built one of the world’s few e-jet printers for Ayers’ lab. He learned the nearly artisanal craft from Andrew Alleyne, a professor of engineering at the University of Illinois who perfected the technology. Myers’ work earned him the interdisciplinary research award at the RISE:2013 Research, Innovation, Scholarship, and Entrepreneurship Expo earlier this year.

According to Ayers, “inkjet printing is the industry standard for organic electronics.” This state-of-the-art technology is already paving the way for a new industry of inexpensive, versatile electronics, such as the curved television that debuted earlier this month.

The problem, at least for Ayers’ lab, is that inkjet printers can only deposit droplets 30 microns or larger. While that might seem sufficiently teeny to the rest of us, it’s not small enough for Ayers, who needs electronics features that are smaller than a living cell.

That’s where the “e,” for electrohydrodynamic, comes into the picture. In the case of “traditional” inkjets, a droplet is deposited onto a surface through backpressure alone. This means that some of the ink spreads out when it lands. E-jet printers incorporate a voltage potential between the printer head and the surface, as well as a small vacuum force on the other side. When the ink drops from the printer head, it is both pushed and pulled to the exact spot for which it’s intended. The technology allows them to print droplets as small as 250 nanometers.

At a fraction of the diameter of a living cell, “we can print many features per cell instead of many cells per feature,” said Ayers. That is, they can now produce microelectronics with high enough resolution to integrate with biological systems.

The research team is now hard at work printing biocompatible photodiodes, nitric oxide sensors, and photosensors to integrate into their robo-lobster and robo-lamprey projects. It’s just the next step in Ayers’ goal to create a “social robot.”

Source: Northeastern University