Google employees don't want to help the Pentagon on AI

Discussion in 'Politics, Religion, Social Issues' started by Zombie Acorn, Apr 5, 2018.

  1. Zombie Acorn macrumors 65816

    Zombie Acorn

    Joined:
    Feb 2, 2009
    Location:
    Toronto, Ontario
    #1
    https://mobile.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html

    So these Google employees don't want to help the Pentagon over concerns that the AI tech will be used by the military, yet at the same time Google is doubling down on its AI investments in China. Do they not realize that working on AI in China is exactly the same as giving it to the Chinese government/military?

    http://fortune.com/2017/12/13/google-china-artificial-intelligence/

    Not sure if naive, or just another unpatriotic leftist outburst to keep the US from getting a leg up here.

    China has already made it clear that it plans to dominate AI by 2030, including its military applications.

    https://www.google.com/amp/s/sputni...ina-artificial-intelligence-google-dominance/
     
  2. Plutonius macrumors 604

    Plutonius

    Joined:
    Feb 22, 2003
    Location:
    New Hampshire, USA
    #2
    I'm not worried. I've seen the Terminator movie series, and the military AI robots lose in the end :).

    I don't believe it's an unpatriotic leftist outburst; it's naivety.
     
  3. ericgtr12 macrumors 65816

    ericgtr12

    Joined:
    Mar 19, 2015
    #3
    Yes, leftists are out to keep America down by working on AI with the Chinese out of an abundance of ignorance. Who would've thought that using AI to better mankind, and working on it WITH other nations instead of hoarding it and using it strictly for war and government secrets, would be unpatriotic? Silly leftists and their outbursts.
     
  4. VulchR macrumors 68020

    VulchR

    Joined:
    Jun 8, 2009
    Location:
    Scotland
    #4
    There are too many ways AI can go wrong to trust it with weaponry. Indeed, some universities are now discussing the possibility of offering degrees in AI safety, mostly because we haven't a clue how to constrain AI to the impacts we want while avoiding the ones we don't.
     
