Monday, November 29, 2021

    Artificial Intelligence Is Smart, but Does It Play Well With Others?

    Studies show that humans find AI to be a frustrating teammate when playing cooperative games together, posing challenges for “teaming intelligence.”

    When it comes to games such as chess and Go, artificial intelligence (AI) programs far outperform the best players in the world. These “superhuman” AIs are unmatched competitors, but perhaps harder than competing against humans is collaborating with them. Can the same technology get along with people?

    In a new study, researchers at MIT Lincoln Laboratory examined how well humans could play the cooperative card game Hanabi with an advanced AI model trained to play with teammates it had never met before. In single-blind experiments, participants played two series of games: one with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way.

    The results surprised the researchers. Not only were the scores no better with the AI teammate than with the rule-based agent, but humans consistently hated playing with their AI teammate. They found it unpredictable, unreliable, and untrustworthy, and they felt negatively about it even when the team scored well. A paper detailing this study has been accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS).

    Hanabi experiment

    When playing the cooperative card game Hanabi, humans were frustrated and confused by the moves of their AI teammates. Credit: Bryan Mastergeorge

    “It really highlights the nuanced distinction between creating an AI that performs objectively well and creating an AI that is subjectively trusted or preferred,” says Ross Allen, co-author of the paper and a researcher in the Artificial Intelligence Technology Group. “Those things may seem so close that there’s not really daylight between them, but this study showed that they are actually two separate problems. We need to work on disentangling them.”

    Humans disliking their AI teammates could be a concern for researchers designing this technology to one day work with humans on real-world challenges, such as missile defense or performing complex surgery. This dynamic, called teaming intelligence, is a next frontier in AI research, and it uses a particular kind of AI called reinforcement learning.

    A reinforcement learning AI isn’t told which actions to take; instead, by playing out a scenario many times, it discovers which actions yield the most numerical “reward.” It is this technology that has produced superhuman chess and Go players. Unlike rule-based algorithms, these AIs aren’t programmed to follow “if/then” statements, because the possible outcomes of the human tasks they’re slated to tackle, like driving a car, are far too many to code.
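    The trial-and-error loop described above can be sketched in a few lines. This is a minimal, illustrative tabular Q-learning example, not code from the study; the toy two-action environment, its reward values, and all names are invented for the sake of the sketch.

```python
import random

# Minimal tabular Q-learning sketch (illustrative only). The agent is never
# told which action is correct; it discovers the action with the highest
# numerical reward by repeated trial. The toy environment and rewards below
# are invented for illustration.

ACTIONS = ["left", "right"]
REWARDS = {"left": 0.0, "right": 1.0}   # unknown to the agent at the start

q = {a: 0.0 for a in ACTIONS}           # the agent's estimated value per action
alpha, epsilon = 0.1, 0.1               # learning rate, exploration rate

random.seed(0)
for episode in range(1000):
    # Epsilon-greedy: mostly exploit the current best estimate, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    reward = REWARDS[action]            # feedback arrives only after acting
    q[action] += alpha * (reward - q[action])

best = max(q, key=q.get)
print(best)  # "right" — learned purely from reward feedback
```

    After enough episodes, the estimate for the rewarding action dominates, without any “if/then” rule ever being written down; scaling this idea up to huge state spaces is what produces superhuman game-playing agents.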

    “Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play a game of chess, that agent won’t necessarily go drive a car. But with the right data, you can use the same algorithms to train a different agent to drive a car,” Allen says. “The sky’s the limit in what it could, in theory, do.”

    Bad hints, bad play

    Today, researchers are using Hanabi to test the performance of reinforcement learning models developed for collaboration, in much the same way that chess has served as a benchmark for testing competitive AI for decades.

    Hanabi is akin to a multiplayer form of solitaire. Players work together to stack cards of the same suit in order. However, players cannot view their own cards, only the cards their teammates hold. Each player is strictly limited in what they can communicate to their teammates to get them to pick the best card from their own hand to stack next.
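    The hidden-information setup described above can be made concrete with a small sketch. This is a simplified toy model, not the agents from the study: the class names and helper functions are invented, and the real game has more rules (hint tokens, fuses, discards). It only illustrates the two constraints the article describes: you see every hand but your own, and a hint may reveal only one color or one rank.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Toy sketch of Hanabi's information structure (invented names, simplified
# rules): each player sees teammates' hands but never their own, and a hint
# may only point out the positions matching one color or one rank.

@dataclass
class Card:
    color: str   # e.g. "red"
    rank: int    # 1..5

@dataclass
class Player:
    name: str
    hand: List[Card] = field(default_factory=list)

def visible_hands(players: List[Player], viewer: Player) -> Dict[str, List[Card]]:
    """A player sees teammates' cards, never their own."""
    return {p.name: p.hand for p in players if p.name != viewer.name}

def hint(hand: List[Card], attribute: str, value) -> List[int]:
    """A legal hint reveals only which positions match one color or one
    rank; no other information may be communicated."""
    return [i for i, card in enumerate(hand) if getattr(card, attribute) == value]

alice = Player("alice", [Card("red", 1), Card("blue", 3)])
bob = Player("bob", [Card("red", 2), Card("red", 5), Card("green", 1)])

# Alice can see Bob's hand, but not her own:
assert "alice" not in visible_hands([alice, bob], alice)

# Alice hints Bob "these cards are red": positions 0 and 1 match.
print(hint(bob.hand, "color", "red"))  # [0, 1]
```

    The cooperative challenge comes entirely from this asymmetry: every move an agent makes doubles as a signal its teammates must interpret, which is exactly where the human participants found the AI's choices baffling.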

    The Lincoln Laboratory researchers developed neither the AI nor the rule-based agent used in this experiment; both agents represent the best in their fields for Hanabi performance. In fact, when the AI model was previously paired with an AI teammate it had never played with before, the team achieved the highest-ever score for Hanabi play between two unknown AI agents.

    “That was an important result,” Allen says. “We thought, if these AI agents that have never met before can come together and play really well, then we should be able to bring in humans who also know how to play well, and they’ll also do very well with the AI. That’s why we thought the AI team would objectively play well, and why we thought humans would prefer it, because generally we like things better when we do well.”

    Neither of those expectations came true. Objectively, there was no statistical difference in the scores achieved with the AI versus the rule-based agent. Subjectively, all 29 participants reported in surveys a clear preference for the rule-based teammate. Participants were not told which agent they were playing with in which games.

    “One participant said that they were so stressed out by the AI agent’s bad play that they actually got a headache,” says Jaime Peña, a researcher in the AI Technology and Systems Group and an author of the paper. “Another said they thought the rule-based agent was dumb but workable, whereas the AI agent showed that it understood the rules, but its moves were not cohesive with what a team looks like. To them, it was giving bad hints and making bad plays.”

    Inhuman creativity

    This perception of AI “making bad plays” links to surprising behavior researchers have observed previously in reinforcement learning work. For example, when DeepMind’s AlphaGo first defeated one of the world’s best Go players in 2016, one of AlphaGo’s most widely praised moves was move 37 in game 2, a move so unusual that human commentators thought it was a mistake. Later analysis revealed that the move was actually extremely well-calculated, and it was described as “genius.”

    Such moves might be praised when an AI opponent performs them, but they are less likely to be celebrated in a team setting. The Lincoln Laboratory researchers found that strange or seemingly illogical moves were the worst offenders in breaking humans’ trust in their AI teammate in these closely coupled teams. Such moves not only diminished players’ perception of how well they and their AI teammate worked together, but also how much they wanted to work with the AI at all, especially when any potential payoff wasn’t immediately obvious.

    “There was a lot of commentary about giving up, comments like ‘I hate working with this thing,’” adds Hosea Siu, also an author of the paper and a researcher in the Control and Autonomous Systems Engineering Group.

    Participants who rated themselves as Hanabi experts, which the majority of players in this study did, more often gave up on the AI player. Siu finds this concerning for AI developers, because key users of this technology will likely be domain experts.

    “Let’s say you train up a super-smart AI guidance assistant for a missile defense scenario. You aren’t handing it off to a trainee; you’re handing it off to your experts on your ships who have been doing this for 25 years. So, if there is a strong expert bias against it in gaming scenarios, it’s likely going to show up in real-world operations,” he adds.

    Squishy humans

    The researchers note that the AI used in this study wasn’t developed for human preference. But that’s part of the problem: not many are. Like most collaborative AI models, this model was designed to score as high as possible, and its success has been benchmarked by its objective performance.

    If researchers don’t focus on the question of subjective human preference, “then we won’t create AI that humans actually want to use,” Allen says. “It’s easier to work on AI that improves a very clean number. It’s much harder to work on AI that works in this mushier world of human preferences.”

    Solving this harder problem is the goal of the MeRLin (Mission-Ready Reinforcement Learning) project, under which this experiment was funded in Lincoln Laboratory’s Technology Office, in collaboration with the U.S. Air Force Artificial Intelligence Accelerator and the MIT Department of Electrical Engineering and Computer Science. The project is studying what has kept collaborative AI technology from leaping out of the game space and into messier reality.

    The researchers think that the ability of an AI to explain its actions will engender trust. This will be the focus of their work for the next year.

    “You can imagine we rerun the experiment, but after the fact (and this is much easier said than done) the human could ask, ‘Why did you make that move? I didn’t understand it.’ If the AI could provide some insight into what it thought was going to happen based on its actions, then our hypothesis is that humans would say, ‘Oh, that’s a weird way of thinking about it, but I get it now,’ and they’d trust it. Our results would totally change, even though we didn’t change the underlying decision-making of the AI,” Allen says.

    Like a huddle after a game, this kind of exchange is often what helps humans build camaraderie and cooperation as a team.

    “Maybe it’s also a staffing bias. Most AI teams don’t have people who want to work on these squishy humans and their soft problems,” Siu adds, laughing. “It’s people who want to do math and optimization. And that’s the basis, but it’s not enough.”

    Mastering a game such as Hanabi between AI and humans could open up a universe of possibilities for teaming intelligence in the future. But until researchers can close the gap between how well an AI performs and how much a human likes it, the technology may well remain at machine versus human.

    Reference: “Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi” by Ho Chit Siu, Jaime D. Peña, Kimberlee C. Chang, Edenna Chen, Yutai Zhou, Victor J. Lopez, Kyle Palko and Ross E. Allen, accepted at the 2021 Conference on Neural Information Processing Systems (NeurIPS).
    arXiv: 2107.07630



    The post Artificial Intelligence Is Smart, but Does It Play Well With Others? appeared first on California News Times.
