AI Quake Google

DeepMind's AI Beats Humans At Quake III Arena (yahoo.com)

"A team of programmers at a British artificial intelligence company has designed automated 'agents' that taught themselves how to play the seminal first-person shooter Quake III Arena, and became so good they consistently beat human beings," reports AFP: The work of the researchers from DeepMind, which is owned by Google's parent company Alphabet, was described in a paper published in Science on Thursday and marks the first time the feat has ever been accomplished... "Even after 12 hours of practice, the human game testers were only able to win 25% of games against the agent team," the team wrote. The agents' win-loss ratio remained superior even when their reaction times were artificially slowed down to human levels and when their aiming ability was similarly reduced....

The team did not comment, however, on the AI's potential for future use in military settings. DeepMind has publicly stated in the past that it is committed to never working on any military or surveillance projects, and the word "shoot" does not appear even once in the paper (shooting is instead described as tagging opponents by pointing a laser gadget at them). Moving forward, Jaderberg said his team would like to explore having the agents play in the full version of Quake III Arena and find ways his AI could work on problems outside of computer games. "We use games, like Capture the Flag, as challenging environments to explore general concepts such as planning, strategy and memory, which we believe are essential to the development of algorithms that can be used to help solve real-world problems," he said.

DeepMind's agents "individually played around 450,000 games of capture the flag, the equivalent of roughly four years of experience," reports VentureBeat. But that was enough to make them consistently better than human players, according to Ars Technica. "The only time humans beat a pair of bots was when they were part of a human-bot team, and even then, they typically won only five percent of their matches..."

"Humans' visual abilities made them better snipers. But at close range, [DeepMind's team FTW] excelled in combat, in part because its reaction time was half that of a human's, and in part because its accuracy was 80 percent compared to the humans' 50 percent."
  • Why, why, why? (Score:5, Interesting)

    by McFortner ( 881162 ) on Saturday June 01, 2019 @11:40AM (#58691004)
    Why, in the name of all that is High and Holy, are we training Skynet in combat?
    • by Anonymous Coward

      So it can create synergies collaboratively training other skynets in a forward looking, open environment, pushing into new realms, and creating residual income opportunities and new market positions, while, at the same time, using sustainable techniques to face challenges head on and climb to new plateaus.

    • Clearly we're maximizing the odds for destroying humanity. No need to put all our eggs in one basket and hope that war or climate change does humanity in. Earth is prime real estate and it's wasted on those hairless apes but the law is still the law: no invading planets with beings capable of higher thought. #AngelInvestor #ImNotAnAlienYourAnAlien ;)

      • by MrL0G1C ( 867445 )

        Reminds me of this:

        In his sci-fi trilogy The Three Body Problem, author Liu Cixin presents the dark forest theory of the universe.

        When we look out into space, the theory goes, we're struck by its silence. It seems like we're the only ones here. After all, if other forms of life existed, wouldn't they show themselves? Since they haven't, we assume there's no one else out there.

        Liu invites us to think about this a different way.

        Imagine a dark forest at night. It

    • Israel's army will replace human soldiers soon.
    • For the same reason people kept building more powerful nukes I guess.

    • by gweihir ( 88907 )

      We are not doing that. You are hysterical and ignorant of the facts. The automaton described here has zero understanding and zero insight into what it does. It would have lost badly if the "contest" had not been rigged in an extreme way by honorless and despicable liars.

    • So Skynet will try rocket jumping, presumably.
  • This was news about 20 years ago?

    • Esports was on an exponential growth curve to exceed Hollywood and sports revenues. Now what? Who wants to watch Bender play twitch? Well wait, that's a bad example. I'd definitely pay to watch drunk snarky robots play games. But not an Nvidia card play a game.

      Esports Market crash in 3 years.

    • by Tinsoldier314 ( 3811439 ) on Saturday June 01, 2019 @12:35PM (#58691256)

      This was news about 20 years ago?

      I too remember getting wrecked by bots in the Quake 2 days. The article's belaboring how much better the AI was than the humans is misleading; that's not the achievement. The achievement is in *how* these bots learned to play. They didn't have paths pre-programmed by a script; they learned everything about the game and the maps organically, including the concept of capturing the flag. Where the bots of yore were probably big-ass scripts defining all of their behavior with fuzzy logic and whatnot, these use more sophisticated methods to achieve a similar end result, but without someone engineering it specifically to accomplish this goal.

      • Nah, they didn't learn it all organically, they had pre-programmed concepts like “Do I have the flag?”, “Did I see my teammate recently?”, and “Will I be in the opponent’s base soon?”
      • by timeOday ( 582209 ) on Saturday June 01, 2019 @01:45PM (#58691604)
        Bots in the Quake 2 days were fed information on the layout of the map and where everybody was. The DeepMind AI works with only its own first-person, pixel-based view, like a person. Very, very different.

        In fact, if anything, I'm sure history will show that building up a representation of the world from imagery was a much more difficult problem than the team tactics themselves. We just take it for granted because it's built into our own hardware (wetware).

  • by Dunbal ( 464142 ) * on Saturday June 01, 2019 @11:42AM (#58691014)
    It just means it learned to camp.
    • Nah, it's an aimbot. AIs have been good at FPSs for a long time. It even says in the summary:

      "Humans' visual abilities made them better snipers. But at close range, [DeepMind's team FTW] excelled in combat, in part because its reaction time was half that of a human's, and in part because its accuracy was 80 percent compared to the humans' 50 percent."

      Wow, they made a computer with faster reaction times than a human. What next, a computer that can calculate projectile trajectories faster than humans? Von Neumann would be proud (sarcasm).

  • It does when the conditions are extremely skewed in favor of the automaton. The way this is done is the same, deeply dishonest and dishonorable thing Google did and others as well: The Automaton gets to see tons of games played by the humans and the humans get to see zero games by the automaton in advance. Also, the humans are given no time to experiment and adjust to the way the automaton plays. As the automaton has zero intelligence and absolutely no understanding regarding what it does, the humans would always win after a while.

    • Re:No, it does not (Score:4, Interesting)

      by phantomfive ( 622387 ) on Saturday June 01, 2019 @12:51PM (#58691330) Journal

      The way this is done is the same, deeply dishonest and dishonorable thing Google did and others as well: The Automaton gets to see tons of games played by the humans and the humans get to see zero games by the automaton in advance. Also, the humans are given no time to experiment and adjust to the way the automaton plays.

      Note that when the Starcraft human was given time to reflect after playing against the AI, the human won and the AI lost rather badly.

      • by gweihir ( 88907 )

        Not a surprise.

      • by Anonymous Coward

        > Note that when the Starcraft human was given time to reflect after playing against the AI, the human won and the AI lost rather badly.

        You did not provide a source for your claim, so I will add it:
        https://www.techspot.com/news/78431-human-player-finally-beat-deepmind-alphastar-ai-starcraft.html

        I think this happened most likely because just like they did in chess, the AI was trained to prioritize victory over a draw. MaNa took advantage of this and focused on defending, thus gaining an economic advantage.

        • I think this happened most likely because just like they did in chess, the AI was trained to prioritize victory over a draw.

          It happened because the AI was really, really good at micro. That was entirely it. The human had to choose strategies that would work even with (relatively) poor micro.

    • As the automaton has zero intelligence and absolutely no understanding regarding what it does, the humans would always win after a while.

      I've seen enough scifi movies to know that bots are invariably controlled by some central computer, so humans can always win by sneaking in to the main base and blowing up the controller.

    • Note that in previous games, when they played against humans, Google's AI was only able to play on a single map. It was unable to do anything reasonable with a random map. Thus the AI could memorize "at time T go to point (x,y) on the map." It didn't have to learn whether it was in the enemy base or not, or even what a base was. In Quake, the maps are auto-generated, so that was impossible. The team is making an attempt to force the bot to learn the meaning of things in the game.

      When this type
  • How have all the other advantages of 'being a computer' been omitted in this test, so that we know the AI is creating the win? Using a camera on a monitor? Only able to interact with a keyboard and mouse, with two hands and five fingers? Physical response time slowed down to that of an average human, dexterity the same as a human's? Nothing different except for the AI?
    • RTFS

      SRSLY, RTFS

      The only thing you asked that isn't literally in the summary is whether they pointed a camera at a monitor. I'm guessing they actually used a GPU to render the scene, then handed the frame off to the computer. TFS outright states that humans had better visual abilities (regardless of how the image data was getting into deepmind) but it retained superior reaction time and accuracy.

      Your remaining question is answered in one of the links in the summary, but I'm not telling you which one since yo

      • "Humans' visual abilities made them better snipers. But at close range, [DeepMind's team FTW] excelled in combat, in part because its reaction time was half that of a human's, and in part because its accuracy was 80 percent compared to the humans' 50 percent."

        How the FUCK does that amount to "ability was adjusted to be equal to a human"? The way I read it, they handicapped it in certain ways and it was still faster than the humans in others.

        • They handicapped it to add human-level button pushing reaction speeds, but DeepMind is still faster at deciding which buttons to push. Since they're talking about the humans having superior visual abilities, it's clear that DeepMind is having to analyze the scene visually. I didn't find either of these determinations to be particularly complicated.
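
            (A purely illustrative way to picture that split: an artificial motor delay can be bolted on as a queue between decision and button press, while the decision itself is still made every frame. The numbers and interface below are made up for the sketch, not DeepMind's.)

                from collections import deque

                FRAME_MS = 1000 // 60            # ~16 ms per rendered frame
                HUMAN_REACTION_MS = 200          # assumed average human button-press latency

                class DelayedActuator:
                    """Holds decided actions and only releases them after a fixed delay."""
                    def __init__(self, delay_ms):
                        self.delay_frames = delay_ms // FRAME_MS
                        self.queue = deque()

                    def push(self, action, frame_idx):
                        self.queue.append((frame_idx + self.delay_frames, action))

                    def pop_ready(self, frame_idx):
                        ready = []
                        while self.queue and self.queue[0][0] <= frame_idx:
                            ready.append(self.queue.popleft()[1])
                        return ready

                # The agent decides instantly on frame 0, but the press lands ~200 ms later.
                actuator = DelayedActuator(HUMAN_REACTION_MS)
                for frame_idx in range(20):
                    decision = "tag" if frame_idx == 0 else "idle"
                    actuator.push(decision, frame_idx)
                    print(frame_idx, actuator.pop_ready(frame_idx))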

          • If you're going to make a claim about AI, you need to have clinical standards. If you do not know how the baselines for handicaps were established and controlled, you really don't know anything.
            • You do know you can look all this stuff up, right? And there's a link to the study in the summary? Oh wait, this is Slashdot. I must be new here.

      • > I'm guessing they actually used a GPU to render the scene
        Well, I'm guessing they didn't and simply fed it the game state like they do in most of these pointless AI vs human challenges.
        If they'd done something as different as having the AI decode the actual frames they would be shouting that from the rooftops.
        • If they'd done something as different as having the AI decode the actual frames they would be shouting that from the rooftops.

          They've been doing that for half a decade now. Here was the first story on the topic [slashdot.org]. The inputs were the pixels on the screen, and the evaluation metric was the score.

  • "Won't be used for military applications" is code speak for: It WILL be used for military applications. When there is profit to be made, lives will always be expendable. The military complex is one of the most profitable (and wasteful) industries out there. And Google has a legal fiduciary responsibility to make a profit above all else. So the standard course of action will be to simply deny the allegation. Shocker!
  • Yea, we have been doing this with much cheaper equipment for a lot longer than this.

    What an amazing waste of hardware! Sign me up for an account please! I have some tards to frag!

  • by gurps_npc ( 621217 ) on Saturday June 01, 2019 @12:58PM (#58691370) Homepage

    All modern video games are designed to be 'addictive', not difficult nor fun. The term "grinding" is a key example of this.

    But in any case, computers have different capabilities. A game that is difficult for humans might require fine motor control, where a computer can have perfect motor control. Similarly, a game that is fun might be fun because of interesting plot twists based on assumptions that humans make and a computer would not.

    As such, I would expect any competent computer to win all of them.

    It would not be difficult for someone to design a game that humans would win and computers would never win. Right now, we call such games 'Turing tests'.

    Base it on creativity and generalized, non-specific, extremely varied pattern recognition, without any instruction that it is pattern recognition (i.e., one level could be won by recognizing the music played, another by seeing which icon is slightly different from another, a third by timing something, all without instructions that any of this stuff matters).

    Such a game would not be solvable by a computer, but humans would excel.

    • All modern video games are designed to be 'addictive', not difficult nor fun. The term "grinding" is a key example of this.

      All Slashdot posts are designed to convey overt displays of sensationalism. A full 100% of them.

      Side note: Play better games; there are plenty of damn fun "modern" games out there. Grinding is a negative word used to describe crap games, definitely not an example of "all modern" games.

      The fact we are talking about Quake 3 Arena adds a real "WTF" element to your post.

    • by Anonymous Coward

      > Base it on creativity and generalized, non-specific extremely varied pattern recognition without any instruction that is a pattern recognition (

      This is exactly what the AI did in this case. It was offered the raw pixels of the game and information about whether it won the whole match or not. The AI was not told what a flag is, nor that the flag needs to be captured; the AI was not told that some moving elements are opponents and some are teammates. Yet the AI learned it all and beat humans at it. Adding music or

      • First of all, you are making assumptions after reading a summary article. I bet that what you said is not true; programming is a lot more complicated than that.

        Among other things, you said adding music would not be sufficient to change things. WRONG. You said it was told to analyse pixels. That alone is too much instruction. Given that instruction, if the game designer added sound as an essential element, then the AI could not win, because it did not analyse sound. NO. Stop cheating to help t

      • The AI was not told what a flag is, nor that the flag needs to be captured; the AI was not told that some moving elements are opponents and some are teammates.

        Ironically it was basically told all those things.

    • Those games bring modern trappings to old school values. Dragon Quest XI is good too.

      I don't like games that have you picking up capsules or chests that you then have to open for random rewards. I'd rather have to hunt in a level for cleverly hidden valuables and thoughtfully placed items in appropriate locations.

  • ... really sounds deep, man. Whoa.

  • ...and also not the Sheriff, I just started a miniature chemical explosion in a metal casing, and the metal slug at one side was the weak point; the gases went that way and ejected the slug, and unfortunately the Sheriff ...

  • Nope, Deepmind has not perfected Q3. They did not even play it. That's why there are no videos of Deepmind bots bunny-hopping around.
    • To be fair, they did move closer towards it by now by actually (attempting) to play on actual Q3 maps instead of tiny square-shaped levels.

      It's still highly stylized visually, though, and shooting is limited to 'tagging', whatever that may be. No Q3 weapons.

  • by Malc ( 1751 )

    the seminal first-person shooter Quake III Arena

    Do you even know what "seminal" means? This was the last first-person shooter I spent a lot of time on, mostly because I was bored of the format by Q3.

    Wolf 3D. Seminal. Doom, possibly seminal. Doom II, Quake and Quake 2 came from the same company. Unreal took a lot of our time before Quake 3.

    We wired our house in 1995 with an Arcnet network (Ethernet too expensive) to play multiplayer Doom.

    Q3 was not seminal.

  • In the Wheel of Time series, heroes are 'thrown back' from the dead to live and fight again and again, with memories intact.

    One of the main characters is a general with untold battles, won and lost, in memory. While he is not invincible, he is extremely proficient at war, and becomes more so each iteration.

    Wars of the future will be AI against AI.
  • Does anyone know if these bots can perform in levels that are new to them, or do they only work in known environments?
