Machines besting humans has long been a staple of our fantasies, usually with an extinction threat attached; for now, we are limiting the contest to games. Machines have already beaten us at chess and the Rubik's Cube, and there is now a new addition to this (not really) bloody battleground: MIT researchers have successfully tested a robot capable of playing the balancing tower game, Jenga.
“Playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing and aligning pieces,” said Prof Alberto Rodriguez from the department of mechanical engineering at Massachusetts Institute of Technology.
With a combination of interactive perception and manipulation, the robot touches the tower to learn how and when to move blocks. That behavior is extremely difficult to simulate, so the robot has to learn it in the real world. The researchers placed a two-pronged industrial robot arm with a force sensor in its wrist beside a Jenga tower and let it explore, rather than using traditional machine-learning techniques that could require data from tens of thousands of block-extraction attempts to capture every possible scenario, an approach that is as time-consuming as it is overused. The arm collected the outcomes of roughly 300 attempts, discovering along the way that some blocks were harder to budge than others.
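As a rough illustration (the sensor values, names, and outcomes below are invented stand-ins, not MIT's actual pipeline), the exploration phase amounts to logging what each push felt like alongside whether the block budged:

```python
import random

random.seed(42)  # fixed seed so the simulated run is reproducible

def attempt_push(block_id):
    """Hypothetical stand-in for one physical push: returns the
    measurements the arm would record for that attempt."""
    stiffness = random.uniform(0.0, 1.0)   # some blocks carry more of the tower's load
    force = 2.0 + 8.0 * stiffness          # stiffer blocks resist the push harder
    moved = stiffness < 0.6                # heavily loaded blocks barely budge
    return {"block": block_id, "force": force, "moved": moved}

# Roughly 300 exploratory pushes, as in the MIT experiments.
log = [attempt_push(i % 54) for i in range(300)]
hard_blocks = sum(1 for a in log if not a["moved"])
print(f"{hard_blocks} of {len(log)} pushes met a block too loaded to budge")
```

The point of the log is not any single push but the accumulated pairing of "how it felt" with "what happened", which later stages can mine for patterns.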
“The robot builds clusters and then learns models for each of these clusters, instead of learning a model that captures absolutely everything that could happen,” said the paper’s lead author, MIT graduate student Nima Fazeli.
This lets the robotic arm build a simple but expressive model that predicts a block's behavior from its visual and tactile measurements, while gaining an appreciation of the dynamics behind Jenga. “There are many tasks that we do with our hands where the feeling of doing it the right way comes in the language of forces and tactile cues,” Rodriguez said. “For tasks like these, a similar approach to ours could figure it out.” Even so, the bot will have to improve a great deal before it can beat a human player, but that day is not far off. Miquel Oller, a member of the team, said: “We saw how many blocks a human was able to extract before the tower fell and the difference was not that much.”
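The clustering idea Fazeli describes can be sketched as: group the recorded pushes by how they felt, then fit one simple model per group. Everything here (the hand-rolled 1-D k-means, the simulated forces, the success-rate "model") is an illustrative stand-in under those assumptions, not the team's actual method:

```python
import random

random.seed(0)

# Simulated (force, block_moved) outcomes standing in for the robot's probes:
# pushes below 5 N succeed, harder-resisting blocks do not budge.
attempts = [(f, f < 5.0) for f in (random.uniform(1.0, 9.0) for _ in range(300))]

def assign(value, centers):
    """Index of the nearest cluster center."""
    return min(range(len(centers)), key=lambda i: abs(value - centers[i]))

def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means: cluster push forces into k groups."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[assign(v, centers)].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

centers = kmeans_1d([f for f, _ in attempts])

# One simple model per cluster: the empirical chance that a push succeeds.
models = {}
for ci in range(len(centers)):
    outcomes = [moved for f, moved in attempts if assign(f, centers) == ci]
    models[ci] = sum(outcomes) / len(outcomes)

print({round(c, 1): round(models[i], 2) for i, c in enumerate(centers)})
```

The low-force cluster ends up with a near-certain success model and the high-force cluster a near-zero one, which is the flavor of "a model per cluster" rather than one model for everything.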
This approach is a departure from how other roboticists are tackling the problem of teaching robots to interact with objects. Researchers at UC Berkeley, for instance, use something called reinforcement learning, which relies on lots of random movements on the robot's part and a system of rewards to give it feedback. If the robot moves its arm in some arbitrary way that gets it closer to a predetermined goal, it receives a digital reward, which essentially tells it, “Yes, do that sort of thing again”, rather like the dopamine hit humans get when we do something satisfying, though, being a machine, it has no such feelings. With lots of trial and error, the robot learns a manipulation task over time. But it never gains the understanding of physics that the Jenga-playing robot does.
As this new robot plays Jenga, its code compares each experimental prodding to previous attempts and evaluates its success. Thanks to the camera and the force sensor, the robot knows how all those attempts both looked and felt, and it can store that data without fear of losing any detail. So when it starts pushing on a sticky block that looks and feels like one it previously couldn't extract without the tower twisting or collapsing, it backs off (no actual fear involved; it is, after all, a machine). If a block looks and feels loose, it continues, because it knows that has worked before.
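That back-off-or-continue rule can be pictured as a nearest-neighbour lookup over stored experiences. The numbers below are invented stand-ins for the camera and force-sensor readings, not real measurements:

```python
# Hypothetical memory of past probes: (visual offset in mm, force in N,
# and whether the tower stayed intact after the push).
memory = [
    (0.2, 8.5, False),  # looked flush, pushed hard, tower twisted
    (0.1, 7.9, False),
    (1.5, 2.1, True),   # visibly loose, slid out easily
    (1.8, 2.4, True),
]

def should_continue(visual, force):
    """Keep pushing only if the closest past experience ended well."""
    nearest = min(memory,
                  key=lambda m: (m[0] - visual) ** 2 + (m[1] - force) ** 2)
    return nearest[2]

print(should_continue(0.15, 8.2))  # resembles the stuck blocks: back off
print(should_continue(1.6, 2.0))   # resembles the loose blocks: keep going
```

Both look and feel enter the distance, so a block that merely *looks* loose but *feels* stuck still matches the bad experiences and triggers a retreat.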
Playing Jenga may not seem like a mission-critical skill for robots to master, in the way lunar exploration is, but the underlying strategy of combining sight and touch is common in everyday life and therefore matters a great deal. Take brushing your teeth. You can see that you're scrubbing your front teeth, but you also need to feel that you're not scrubbing too hard, which is difficult to judge from sight alone. Not that we need robots to brush our teeth (we aren't that lazy, at least not yet), but there are plenty of manipulation problems out in the real world that they'll need to work through using a combination of sight and touch.
The Jenga bot also signals a shift in how some robots will learn in the future. For years, roboticists have trained their creations by running their software in simulations, letting the robots accrue experience faster than they could in the real world. But that approach has natural limits. Just consider how complicated the physics of a walking robot is, and how difficult it would be to model with perfect precision.
“If you wanted to walk on different surfaces, you won't know the friction, you don't know the center of mass,” says Caltech AI researcher Anima Anandkumar, who wasn't involved in the new work. “All these minor details add up rather quickly. That's what makes it impossible to exactly model these parameters.” Experimenting with Jenga in the real world, by contrast, skips all that modeling and forces the robot to get a grasp on the physics firsthand.
Researchers at Elon Musk's OpenAI lab, for instance, are getting physical robot hands to more seamlessly bridge the gap between what they learn in simulation and the conditions of the physical world. In these early days of robot learning, there's no one right way to go about things.
As for robots that can beat you at Jenga, don't hold your breath; they're still learning the basics, and on the bright side, you can always knock the tower to pieces if your ego gets hurt. But at least they'll have something to keep themselves occupied after that global thermonuclear war of ours.
Source: MIT News Office