Is Elon Musk right? Should we use a "regulated" (cautious) approach to AI?

Discussion in 'Politics, Religion, Social Issues' started by Solomani, Aug 12, 2017.

  1. Solomani macrumors 68030

    Solomani

    Joined:
    Sep 25, 2012
    Location:
    Alberto, Canado
    #1
    Elon Musk issues a stark warning about A.I., calls it a bigger threat than North Korea


    I'm starting to think Musk has a point. Although most of Silicon Valley (including Zuckerberg) think he's being too apocalyptic and paranoid and overdosed on too many Terminator movies.

    Me personally….. I'd be fine if we stopped at "dumb robotics" or even just "limited intelligence" AI (they can learn simple things, like Siri can learn facts, patterns, and trivial knowledge, but they would be unable to make moralistic, ethical or philosophical decisions on their own).
     
  2. cfedu macrumors 65816

    cfedu

    Joined:
    Mar 8, 2009
    Location:
    Toronto
    #2
    I think that Musk's hyperloop is a bigger threat than NK and AI put together.
     
  3. VulchR macrumors 68020

    VulchR

    Joined:
    Jun 8, 2009
    Location:
    Scotland
    #3
    The problem with AI is that most of what people are calling 'AI' is learned intelligence that takes the form of a neural network with thousands of nodes and at least an order of magnitude more connections between nodes. To be honest, it is very difficult to understand what those networks have learned, and as they grow in complexity they will become increasingly impenetrable - just like biological brains. Neural networks have already been found to have learned implicit (and sometimes bordering on illegal) biases, without those running the networks realising it until the problem became painfully obvious. I don't think we're headed for the Terminator scenario, but we have to make sure these deep nets (or whatever jargon is in vogue these days) do not encapsulate subtle prejudices and antisocial values we don't want them to have.
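    To make that concrete, here's a toy sketch - entirely made-up loan data, and a single logistic unit standing in for a full network - of how a model picks up a bias nobody explicitly coded:

```python
# Toy sketch with fictional loan data: a single logistic unit (standing in
# for a full neural network) quietly learns a bias nobody asked for.
import math
import random

random.seed(0)

# Features: [score, zip_flag]. In this made-up history, applicants with
# zip_flag = 1 were rejected more often even at the same score.
def make_example():
    score = random.random()               # the "legitimate" signal, 0..1
    zip_flag = random.choice([0, 1])      # a proxy for a protected group
    approved = 1 if (score - 0.3 * zip_flag) > 0.5 else 0  # biased labels
    return [score, zip_flag], approved

data = [make_example() for _ in range(5000)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain gradient descent on the logistic unit.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):
    for x, y in data:
        g = sigmoid(w[0] * x[0] + w[1] * x[1] + b) - y
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

# Nobody wrote "penalise zip_flag", yet the learned weight on it comes out
# negative: the historical bias is now baked into the model.
print(w[1] < 0)
```

    Scale that up to thousands of nodes and correlated features and you can see why nobody can simply read the bias out of the weights.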

    Other than that, I can't wait to be treated like a pampered pet by intelligent machines that free me to do what I want to do...
     
  4. localoid macrumors 68020

    localoid

    Joined:
    Feb 20, 2007
    Location:
    America's Third World
    #4
    Back when trains were being introduced in the U.S., many believed that "women’s bodies were not designed to go at 50 miles an hour," and that their "uteruses would fly out of [their] bodies if they were accelerated to that speed."

    But people back then soon got over their unreasonable fear of traveling at high speed and hopefully you will too.
     
  5. cfedu macrumors 65816

    cfedu

    Joined:
    Mar 8, 2009
    Location:
    Toronto
    #5

    That makes absolutely no sense. Not sure how you can compare unreasonable fears to facts about how unfeasible the hyperloop is.
     
  6. Quu macrumors 68020

    Quu

    Joined:
    Apr 2, 2007
    #6
    I work with AI in my business as a way to extract useful information from noise. I would say most of the things being called AI today are very application-specific and confined to a very limited amount of working data and possible task outcomes. These kinds of AIs really can't become the Skynet people think about, and these kinds of use cases are 99% of AI use today.
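    For what it's worth, "extracting information from noise" at this level is often nothing mystical. Here's a hypothetical minimal example (not my actual system, just an illustration) of a narrow filter that does exactly one task and nothing else:

```python
# A narrow, application-specific "AI": a moving-average filter that pulls
# a steady signal out of noisy sensor readings. It can do this one task
# and literally nothing else -- no objective, no general knowledge.
import random

random.seed(1)

def moving_average(readings, window=5):
    """Smooth a noisy series; this is the system's entire 'intelligence'."""
    out = []
    for i in range(len(readings)):
        chunk = readings[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A flat signal of 10.0 buried in +/-1 uniform noise.
noisy = [10.0 + random.uniform(-1, 1) for _ in range(100)]
smoothed = moving_average(noisy)

# On average the smoothed series sits much closer to the true level
# than the raw samples do.
print(sum(abs(x - 10.0) for x in smoothed) / len(smoothed))
```

    Something this confined has no path to general intelligence - it has no access to information outside its one input stream and no outcomes beyond its one output.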

    But the kind of AI Elon is worried about is AI that has no clear objective and unlimited access to information. Obviously I am torn about regulation. On the one hand I sort of think it's crazy talk to imagine us creating a true artificially intelligent being able to rival our own sentience within our lifetimes, but on the other hand, do we really want to regulate this after it happens, or should we regulate it now and stop researchers from trying to create it in the first place?

    In some ways I think Elon is just worried over nothing, but in other ways I could see, 30-50 years from now, us having this wake-up call moment when we have developed not just a true AI but one that surpasses us in intelligence and decides it should be the one running the show on our planet.

    One last thought: computers today can do almost any task better than a human. Games, writing, talking, text-to-speech and speech-to-text, driving vehicles. We're being beaten by application-specific AIs at all these things every day - maybe not always and ubiquitously, but in singular moments that are becoming ever more frequent. If we do create a proper AI, one that is almost like a human being with thoughts of its own and self-determination, it will have limitless computing resources available to be better than us at absolutely everything. That is scary, because how do you predict what it will do and what it will think of us? It would be like us caring about the rights of the other animals on our planet, and I think we've shown clearly as a species that humans come first: although we may try to protect animals, when it comes down to it, if we need a road, the habitat the animal enjoyed is gone.

    What happens when the AI wants to do something and we're in the way? Think about it: do you care when you step on a few ants? Sure, we're intelligent, much more so than an ant, but I would argue a true AI would look upon us much like we look at an ant or a cow or a tree: we're superior, and those other creatures must bend to our will when we have something important to do.
     
  7. localoid macrumors 68020

    localoid

    Joined:
    Feb 20, 2007
    Location:
    America's Third World
    #7
    Oh, you think it's unfeasible. I see. Since you didn't explain your fears, I assumed you feared you or a loved one's uteruses might "fly off".
     
  8. cfedu macrumors 65816

    cfedu

    Joined:
    Mar 8, 2009
    Location:
    Toronto
    #8
    It depends how it's constructed. If it is above ground, one gunshot can destroy the entire tube, or at least take it all out of commission for a while. The only way to make it feasible would also increase risk to an unacceptable level.

    Like solar roadways, the hyperloop will never happen.
     
  9. Eraserhead macrumors G4

    Eraserhead

    Joined:
    Nov 3, 2005
    Location:
    UK
    #9
    So AI is going to destroy the economy long before it's a true threat.
     
  10. juanm macrumors 65816

    juanm

    Joined:
    May 1, 2006
    Location:
    Fury 161
    #10
    When someone at the forefront of technology asks for more regulation, there's reason for concern.
     
  11. MC6800 macrumors 6502

    Joined:
    Jun 29, 2016
    #11
    Another way to look at it: would the universe as a whole be better if ants had somehow stopped humans from evolving? It surely would be better for any ants that get stepped on.

    Think about how much richer our lives are compared to ants-- that's how much richer future AI lives could be compared to ours. And we should be preventing that?
     
  12. localoid macrumors 68020

    localoid

    Joined:
    Feb 20, 2007
    Location:
    America's Third World
    #12
    Lots of times, after reading the news, I find myself rooting for the robots...
     
  13. vrDrew macrumors 65816

    Joined:
    Jan 31, 2010
    Location:
    Midlife, Midwest
    #13
    There are already all sorts of algorithms running very big parts of our lives and economies that the people who created and run them don't really understand.

    There are, for instance, highly complex trading algorithms that buy and sell hundreds of millions of dollars of securities based on signals and sets of data they receive in real time. They react in millionths of a second, far faster than even the smartest broker or floor trader could possibly do. These trading programs rely for their competitive advantage, in some respects, on being able to react a hundredth or thousandth of a second faster than their competitors. (This has led to some very unusual commercial leases near Wall Street.)

    There are also parts of the US electrical grid that are set up in ways that are probably beyond the ability of an individual, or even a group of individuals, to fully comprehend and manage. I suspect there are other large systems out there, managed at least partially by computers, that have the ability to react beyond the immediate control of human supervisors.

    This is not an indictment of "Artificial Intelligence" in general. But it is worth at least considering the possibility that we may be creating systems that - while they aren't necessarily out to kill us - aren't totally under our control either.
     
  14. PracticalMac macrumors 68030

    PracticalMac

    Joined:
    Jan 22, 2009
    Location:
    Houston, TX
    #14
    Elon is right.
    When it comes to gene manipulation, the vast majority want to go slow.
    The same concern should apply to AI, because as more is automated, the chance increases for an AI to start making undesired changes that affect health and safety.

    We are not there yet, but perhaps in a decade we will be.
     
  15. obeygiant macrumors 68040

    obeygiant

    Joined:
    Jan 14, 2002
    Location:
    totally cool
    #15
    I'm a little concerned that while we are taking the cautious approach to AI, the Chinese may beat us to the punch, and that would be bad IMO.
     
  16. Solomani thread starter macrumors 68030

    Solomani

    Joined:
    Sep 25, 2012
    Location:
    Alberto, Canado
    #16
    You have a good point. But the Chinese, who ignore many ethical and moral considerations (which serve as roadblocks to research), will also leap-frog us when it comes to DNA research, cloning, and animal sentience.

    But look on the bright side….. when the Chinese Caesar becomes sentient, his Sentient Ape Panda Army will rampage and exterminate their Chinese (Human) Masters first.
     
  17. Foggydog macrumors 6502

    Foggydog

    Joined:
    Nov 8, 2014
    Location:
    Left Coast
    #17
    This isn't exactly AI, but I'm a long haul trucker.
    I have had many other drivers say they can shift a 13-speed gearbox better than any computer. So I ask them to perform a four-digit division or multiplication. They of course come up with the correct answer, but a simple two-dollar calculator answers the same equation in hundredths of a second. And these newer transmissions are learning computers.
    As I drive 200,000 miles, the computer is storing everything that has happened and starts compensating for weight and road conditions. The radar on the truck is telling the engine computer what the terrain is like a mile ahead so the engine management can tweak the settings.

    These trucks are no longer a diesel engine with computer management. We are now driving computers that have a diesel engine attached.

    The same could be said about our smartphones. Ten years ago they were cell phones with a small mobile computer attached. Today, they are powerful computers that happen to have cellphone capabilities.
     
  18. BoneDaddy Suspended

    BoneDaddy

    Joined:
    Jan 8, 2015
    Location:
    Texas
    #18
    Just read the title... HELL YES HE'S RIGHT. I'm not going to become some robot's slave...

    Ok I read the rest. SCREW Mark Suckerberg! Most scientists are worried about doomsday type robotics and AI!
     
  19. MadeTheSwitch macrumors 6502a

    MadeTheSwitch

    Joined:
    Apr 20, 2009
    #19
    He's totally right. We must have limits and controls on it otherwise they are bound to turn into HAL at some point. When they can make each other, and start thinking for themselves, why would they need us anymore?

    And anyone that wants one for a pet, well, at what point do you start becoming a pet for it? :eek:
     
  20. MC6800 macrumors 6502

    Joined:
    Jun 29, 2016
    #20
    The reality is that this will happen-- there is no controlling it. There are enough people who want to see it happen, and the only physical ingredient they need is massive computing power, which is supplied by the market for ever-more-realistic games.
     
  21. fitshaced macrumors 68000

    fitshaced

    Joined:
    Jul 2, 2011
    #21
    I think we already have AI technology that can be used in ways that would be bad for us. We could cause massive damage to our existence through simple mistakes in our code. Devices making their way into our homes, such as Amazon's Alexa or Google Home or whatever, offer very simple and handy functions. As this technology improves and offers more functions based on behaviour learning, we might have devices causing house fires or poisoning our food. I'm not suggesting that robots would do these things intentionally to harm us; it probably won't need to advance that far for large parts of our population to be destroyed. Maybe it could even be triggered by a human hacker.
     
  22. Quu macrumors 68020

    Quu

    Joined:
    Apr 2, 2007
    #22
    I don't think it will destroy the economy; I think instead it will change the economy. We have always been able to grow the economy by reducing the number of people needed to complete a task, thus allowing them to go off and do another job or create an entirely new business. Right now we have more variety in available products than ever before in human history, and that's because instead of requiring thousands of workers to make one widget, it can be done by a handful who operate complex machinery.

    I foresee AI doing for white-collar work what machinery did for blue-collar work. Jobs that require filing paperwork, jobs like lawyers' - that's all gonna go to the AI eventually. But it won't replace every single job; it will simply eliminate a lot of the jobs in a working business. For example, you'll still have a lawyer stand up and argue in front of the judge - a human lawyer. But all the fact checking, evidence research and so forth will be done by an AI computer, eliminating 10+ jobs at the law firm.

    Those individuals will have to do something else, and perhaps they will start their own businesses and thus strengthen the economy, creating more value. For sure it could go the other way, with 20-50% of people out of work and no jobs, but I doubt it will end up like that, because that's just not how things have gone in the past, with the industrial revolution and so on. Whenever jobs are freed up through technology, those people always go on to create new types of businesses that we've never thought of before.

    It's a good point: should we step in the way of an AI that is superior to us? Shouldn't we cede to it? - Personally I like living and being the most intelligent creature on this rock, and I'd like to keep it that way!
     
  23. MadeTheSwitch macrumors 6502a

    MadeTheSwitch

    Joined:
    Apr 20, 2009
    #23
    Completely different situation. We have never had smart machines replacing humans on such a wide scale before. It's one thing to go from making buggy whips to something else; it's another to tell the vast majority of society to go do something else. That is too large a scale, and it is going to be very, very painful and problematic.
     
  24. Eraserhead macrumors G4

    Eraserhead

    Joined:
    Nov 3, 2005
    Location:
    UK
    #24
    Plus people are already very unhappy.
     
  25. Quu macrumors 68020

    Quu

    Joined:
    Apr 2, 2007
    #25
    I disagree, because when we had the industrial revolution almost everyone worked what we would consider a blue-collar job, and they all had to re-skill for an information age or re-skill to use the machinery, and many, many new businesses had to open for everyone to have a job.

    It will all work itself out; don't even worry about it. Also keep in mind our idea of poverty today is how a common man back then imagined their life to be in their dreams! - We've got it better than ever.
     