Sep 18

In the early days of life on Earth, biological organisms were exceedingly simple: microscopic unicellular creatures with little to no ability to coordinate. Yet billions of years of evolution through competition and natural selection led to the complex life forms we have today — as well as complex human intelligence. Researchers at OpenAI, the San Francisco-based AI research lab, are now testing a hypothesis: if you could mimic that kind of competition in a virtual world, would it also give rise to much more sophisticated artificial intelligence? From a report: The experiment builds on two existing ideas in the field: multi-agent learning, which places multiple algorithms in competition or coordination to provoke emergent behaviors, and reinforcement learning, the machine-learning technique in which an agent learns to achieve a goal through trial and error. In a new paper released today, OpenAI reveals its initial results. By playing a simple game of hide and seek hundreds of millions of times, two opposing teams of AI agents developed complex hiding and seeking strategies that involved tool use and collaboration. The research also offers insight into OpenAI's dominant research strategy: dramatically scale existing AI techniques and see what properties emerge.
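The two ingredients the summary names — multi-agent competition and trial-and-error reinforcement learning — can be illustrated with a toy version of hide and seek. The sketch below is not OpenAI's actual method (the paper uses large-scale policy optimization in a 3-D physics world); it is a minimal tabular Q-learning example with invented details: three hiding spots, one of which stands in for a room the hider has blocked off with a tool, so the seeker cannot search it. Competitive pressure alone drives the hider toward the safe spot.

```python
import random

random.seed(0)

HIDER_SPOTS = [0, 1, 2]   # spot 2 stands in for a room the hider has blocked
SEEKER_SPOTS = [0, 1]     # the seeker cannot search the blocked room

# One Q-value per action: the agent's running estimate of that spot's payoff.
q_hider = {s: 0.0 for s in HIDER_SPOTS}
q_seeker = {s: 0.0 for s in SEEKER_SPOTS}
ALPHA, EPSILON = 0.1, 0.2

def pick(q, spots):
    # Epsilon-greedy: explore occasionally, otherwise take the best-known spot.
    if random.random() < EPSILON:
        return random.choice(spots)
    return max(spots, key=lambda s: q[s])

for _ in range(5000):
    h = pick(q_hider, HIDER_SPOTS)
    s = pick(q_seeker, SEEKER_SPOTS)
    found = (h == s)
    # Zero-sum rewards: the hider wins when not found, the seeker when it finds.
    r_hider, r_seeker = (0.0, 1.0) if found else (1.0, 0.0)
    # Trial and error: nudge each estimate toward the reward just observed.
    q_hider[h] += ALPHA * (r_hider - q_hider[h])
    q_seeker[s] += ALPHA * (r_seeker - q_seeker[s])

best = max(q_hider, key=q_hider.get)
print(best)  # the hider settles on the spot the seeker cannot reach
```

Neither agent is told the rules; each only sees its own rewards. Because the seeker punishes predictable hiding in spots 0 and 1, the hider's estimate for the unreachable spot ends up highest — a one-line analogue of the tool-use strategies the agents in the paper discovered at vastly larger scale.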

Read more of this story at Slashdot.


written by admin

