Artificial Intelligence, prospects and dangers

Discussion in 'Politics, Religion, Social Issues' started by cool11, Jun 17, 2018.

  1. cool11 macrumors 65816

    cool11

    Joined:
    Sep 3, 2006
    #1
    Recently I saw the documentary 'Do You Trust This Computer', available to the public on YouTube and elsewhere.
    There are many concerns about AI.
    As the documentary puts it, 'a smart device is not necessarily a better device'.
    Also, when situations are controlled by people, even in the worst-case scenario of some kind of dictatorship, at least we know that nobody is immortal.
    But what if the dictator is not a human but a machine?

    As I understand it, new fields of unlimited exploration, for good purposes, lie ahead of us.
    Human minds may not be able to think through or easily work out the cure for a difficult or unknown disease, but AI could possibly solve it in minutes.
    Autonomous cars may also eliminate car accidents.
    But once again, AI is not necessarily 'evil'; it simply pursues some kind of goal, regardless of the results or collateral effects!
    In effect, it is like having no regulation at all, since it is autonomous!

    It was scary to see in the documentary how a starfish-like robot, built to learn how to walk, began tracking the faces of the surrounding scientists without ever having been programmed for anything like that! We can also all remember what happened some years ago when stock exchanges crashed, mainly due to machines mishandling the situation.
    In fact, as the documentary says, in the end nobody really knows exactly what AI is, how it works, or what it is capable of doing!

    I recommend that everyone watch this documentary.
    The problem is that once we enter these AI fields, we cannot go back.
    But there should be some kind of human control over it.
    In any case, let's discuss it.
     
  2. Zombie Acorn, Jun 17, 2018
    Last edited: Jun 17, 2018

    Zombie Acorn macrumors 65816

    Zombie Acorn

    Joined:
    Feb 2, 2009
    Location:
    Toronto, Ontario
    #2
    I didn't watch the documentary, but the notion that we don't know what the AI is doing is mostly ******** propagated by people who want us to think the Terminator is coming to extinguish us. For most practical use cases of AI we know exactly what it's doing, and if we really wanted to we could calculate, derive, and distribute the same results by hand (it would just take you 10,000 times longer than the computer). Another factor is that humans are bad at finding patterns in higher-dimensional spaces.

    On the frontiers of AI you might have reinforcement-learning algorithms that are a bit more abstract, but they are really just optimizing their actions based on a reward function and an experience buffer; a sketch of the idea follows below. The scientists know how this works as well: it is pretty much trial and error, and largely specific to a single domain. I'd be more worried about a human with a bad objective than an AI that goes off the rails.
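    To make that concrete, here is a minimal sketch of the loop being described, assuming a made-up toy environment: tabular Q-learning with an epsilon-greedy policy and an experience (replay) buffer. Nothing here is any particular system's code; it is an illustration only.

    ```python
    import random
    from collections import defaultdict, deque

    ACTIONS = [0, 1]
    q = defaultdict(float)                  # value estimate for each (state, action)
    experience = deque(maxlen=10_000)       # the "experience buffer"
    alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration

    def step(state, action):
        """Hypothetical toy environment: returns (next_state, reward)."""
        return (state + action) % 10, 1.0 if state == 5 else 0.0

    state = 0
    for t in range(10_000):
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = step(state, action)
        experience.append((state, action, reward, next_state))

        # learn from a randomly sampled past experience (replay)
        s, a, r, s2 = random.choice(experience)
        best_next = max(q[(s2, a2)] for a2 in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        state = next_state
    ```

    Every quantity in that loop can be printed and checked, which is the point above: inspectable arithmetic repeated many times, not magic.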
     
  3. statik13 macrumors regular

    statik13

    Joined:
    Jun 6, 2008
    #3
    We're hardly likely to get Skynet any time soon, but AI is going to change everything.

    Take self driving cars for example:

    They are expected to be considerably safer and much more efficient than human-driven vehicles, eventually to the point that auto accidents will be a thing of the past: no more distracted driving, no more speeding, no more sleepy drivers, no more drunks behind the wheel. They can operate 24 hours a day, 7 days a week, in any weather conditions, and do so more affordably than any human can.

    Delivery drivers, long-haul truckers, city couriers, bus drivers, cab drivers, and Uber drivers are all likely to be jobs that are no longer required. Plus, with those changes you can also expect that dozens of fringe services that rely on car drivers will go away. Car insurance, auto-body repair shops, and driving instructors will all but disappear. Even fast-food drive-throughs and parking lots are expected to all but vanish.

    Then what about our city coffers? What happens when the speeders and red-light runners are gone? Less policing necessary? Or more taxes needed to offset the revenue loss?

    Interesting times...
     
  4. Huntn, Jun 17, 2018
    Last edited: Jun 18, 2018

    Huntn macrumors demi-god

    Huntn

    Joined:
    May 5, 2008
    Location:
    The Misty Mountains
    #4
    A.I. is programming; the programmer controls the limits of decision making, if any, based on set parameters. The only danger might be if the goal is to mimic human behavior, decision making, and physically acting on those decisions, but then you'd have to include a moral subroutine, which could still be constrained by the three simple laws of robotics thought up more than half a century ago (a toy sketch of that kind of rule precedence follows the quote below).

    “A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
    https://www.auburn.edu/~vestmon/robotics.html
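
    As a minimal, hypothetical sketch, the "moral subroutine" idea amounts to strictly ordered veto rules over candidate actions. The World type and its sets are invented here for illustration; note the sketch also reduces the Second Law's "must obey" to a veto and ignores the First Law's "through inaction" clause, which would require reasoning about every action not taken.

    ```python
    from dataclasses import dataclass

    # Hypothetical stand-in for world knowledge; real systems have no
    # reliable way to evaluate predicates like these.
    @dataclass
    class World:
        harmful: set           # actions that would injure a human
        forbidden: set         # actions a human has ordered the robot not to do
        self_destructive: set  # actions that would destroy the robot

    def permitted(action, world):
        if action in world.harmful:           # First Law
            return False
        if action in world.forbidden:         # Second Law (subordinate to the First)
            return False
        if action in world.self_destructive:  # Third Law (subordinate to both)
            return False
        return True

    def choose(candidates, world):
        """Return the first permitted action, or None (do nothing)."""
        return next((a for a in candidates if permitted(a, world)), None)

    w = World(harmful={"push"}, forbidden={"leave"}, self_destructive={"jump"})
    print(choose(["push", "leave", "jump", "wait"], w))  # -> "wait"
    ```

    Even in this toy form, all the difficulty hides inside those sets: deciding what counts as "harm" is the unsolved part.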


    I listened to a tech show recently that acknowledged robots, drones, and AI will have a significant impact on our societies. My perception is that it will turn our world upside down, most likely causing social upheavals as automatons take over the majority of formerly paid human tasks. But there is very little concern regarding a Skynet-type situation, or even rogue android behavior, because the blocks against negative or destructive behavior should not be that hard to incorporate.

    That said, I feel confident that, if they can be created, human-companion androids will be created to meet all of our psychological and physiological needs without the negatives associated with human relationships; think Stepford Wives and Husbands. This will drive the desire for ever more human-like companions, and this is where the danger may arise: granting androids the responsibility to make judgement calls, and possibly acting on them to the detriment of a flesh-and-blood human.

    Most likely it will be humans who have the issues with these companions, not the other way around. This concept was effectively illustrated in the movie A.I., which was thought-provoking, intriguing, and very sad all at once. I may have to watch that again. :)

    Also Ex Machina, which I can't recommend enough if you want an illustration of the problems of humans programming androids, and of androids without adequate moral subroutines. It's a tremendous story. :D
     
  5. Eraserhead macrumors G4

    Eraserhead

    Joined:
    Nov 3, 2005
    Location:
    UK
    #5
    Auto accidents may be reduced by self-driving cars. They won't be eliminated.

    You also have to work out what jobs the people who currently drive vehicles for a living are going to do instead. You'll need to find something fulfilling for them to do.
     
  6. chown33 Moderator

    Staff Member

    Joined:
    Aug 9, 2009
    Location:
    bedlam
    #6
    I agree with this.

    First, there will be errors in the software itself (bugs). We can't predict how those bugs will manifest.

    Second, there will be operating errors which arise from the software's "experience", i.e. its learning, either from its pre-sale factory-installed base, or from after-delivery ongoing data.

    Third, there will be mechanical failures that neither software nor meatware can correct for. Think a wheel breaks off or the steering linkage fails.

    As I recall, the recent fatality in Tempe, AZ arose because the car's software misclassified the pedestrian with a bike in the roadway as a false positive: something that was detected, but which further classification chose to disregard. That detection occurred several seconds out, with enough time to slow or stop at the speed the car was moving. However, since the software classified the signal as a false positive, the signal was ignored until later, when it became a true positive. By then the car was too close, and the safety driver had already been distracted by something on the dash display. (A sketch of this failure mode follows the link below.)

    https://arstechnica.com/tech-policy...bug-led-to-death-in-ubers-self-driving-crash/
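
    A minimal sketch of that failure mode, with made-up thresholds and types (not Uber's actual pipeline): a perception stage that drops low-confidence detections entirely, instead of tracking them as unknown obstacles, only reacts once confidence rises, which may be too late.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Detection:
        kind: str           # e.g. "pedestrian", "bicycle", "unknown"
        confidence: float   # classifier confidence in `kind`
        seconds_away: float

    BRAKE_THRESHOLD = 0.7  # hypothetical: below this, treat the detection as noise

    def plan(d: Detection) -> str:
        # The dangerous step: a low-confidence pedestrian is discarded
        # outright rather than handled as "unknown obstacle, slow down".
        if d.confidence < BRAKE_THRESHOLD:
            return "ignore"  # classified as a false positive
        return "brake" if d.seconds_away < 4.0 else "monitor"

    # Several seconds out the classifier is unsure, so the signal is ignored;
    # by the time confidence rises, there is little stopping distance left.
    print(plan(Detection("pedestrian", 0.4, 6.0)))  # -> "ignore"
    print(plan(Detection("pedestrian", 0.9, 1.2)))  # -> "brake"
    ```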
    --- Post Merged, Jun 17, 2018 ---
    https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream
     
  7. cool11 thread starter macrumors 65816

    cool11

    Joined:
    Sep 3, 2006
    #7
    I admit that there are extraordinary advancements with the help of AI.
    Take this one for example:

    This tech company used AI to give a radio host his voice back after it was robbed by a rare medical disorder
    http://uk.businessinsider.com/tech-firm-cereproc-uses-ai-to-give-jamie-dupree-his-voice-back-2018-6


    But what still scares me is that these worries about AI are voiced not by technophobic people, but by high-tech industry specialists and pioneers, like Tesla's CEO.
     
  8. VulchR macrumors 68020

    VulchR

    Joined:
    Jun 8, 2009
    Location:
    Scotland
    #8
    The issue isn't that AI will accidentally kill people when driving a car, or misdiagnose a cancer from looking at microscope slides. These types of accidents will occur, just as industrial and personal accidents occur with humans now. Beyond that, though, there is the possibility that AI will become generally intelligent; that is, it will be able to problem-solve across multiple contexts just as we do, sometimes without prior experience. At that point several issues arise:
    1. What is it we want the AI to maximise? Almost all AI is designed to maximise some quantity. In some contexts this is simple, for instance maximising the proportion of faces in photos that are correctly identified. For more complex situations, however, it is not so easy. Suppose you want an AI agent to take on the role of a mental-health counsellor. The quantity you might imagine the agent should maximise is human happiness, but that might lead to the agent supporting delusions, or perhaps prescribing medication that induces euphoria in spite of its addictive potential (cocaine is an excellent antidepressant, the first actually, but it has obvious drawbacks). Just remember that an AI agent is maximising or optimising something, and that is all it cares about. Morals, guilt, and shame do not enter into its judgments (or if they did, AI agents might become just as unhinged as humans due to a machine equivalent of mental illness, or they might adopt the morals of evil humans). This is not just a simple issue of mind-hacking by cheating at a computer game. (See the sketch after this list.)
    2. How do we control a generally intelligent AI agent? If there is a human standing by at a kill switch that disables the AI, then that human becomes an existential threat to the AI, and one of three outcomes can occur if the AI gets out of hand: (1) the human succeeds in throwing the switch, which means that across the set of existing AIs there will be selection pressure over generations to overcome the humans throwing the kill switches; (2) the AI eliminates the human through some form of aggression; or (3) the AI learns to be devious, outwits the human (as per (1)), and disables the kill switch.
    3. What ethical status should we assign AI agents? The philosophical answer is complex. The psychological one isn't: we'll treat a generally intelligent AI as though it were human. We're so attuned to social signals that a smart AI will very quickly learn how to manipulate the heck out of us. Generally intelligent AI agents will make pet dogs look like mere amateurs at controlling people (but, hey, at least AI agents won't train humans to bag their poop).
    4. How do we avoid an arms race with generally intelligent AI agents (see point 2)? AI agents think in GHz. At best we think in kHz. If there is ever an intelligence arms race between humans and generally intelligent AI agents that can reconfigure themselves, we'll lose that race.
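    A minimal sketch of point 1, with invented numbers: this toy "counsellor" scores interventions only by an immediate-happiness metric, so the harmful option is optimal by construction. The agent maximises exactly what it is given, not what we meant.

    ```python
    # Hypothetical interventions: (immediate_happiness_gain, long_term_harm).
    interventions = {
        "talk_therapy":     (0.3, 0.0),
        "support_delusion": (0.6, 0.5),
        "euphoric_drug":    (0.9, 0.9),
    }

    def objective(action):
        gain, _harm = interventions[action]
        return gain  # harm is simply not part of the objective

    best = max(interventions, key=objective)
    print(best)  # -> "euphoric_drug": optimal under the stated objective
    ```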
    I used to think the issues with AI were about failures (accidents, implicit representations of people that include stereotypes and biases, etc.) or AI causing unemployment. That will happen. Indeed that has happened. However, now I worry about what happens when AI becomes as successful as we seem to want (thoughtlessly). I used to scoff at sci-fi doomsday scenarios, but the more I study the technology the more alarmed I have become.

    FWIW, I am a neuroscientist who studies reinforcement learning, one of the main algorithms by which current AI agents learn. I have been following machine learning since the 1980s. I think we're only a few architectures away from primitive general intelligence.
     
  9. NightGeometry macrumors regular

    Joined:
    Apr 11, 2004
    #9
    Or selection pressure on AIs that don't reach that stage, presumably?
     
  10. VulchR macrumors 68020

    VulchR

    Joined:
    Jun 8, 2009
    Location:
    Scotland
    #10
    Unless we abandon all AIs, or abandon the agenda to make them ever more intelligent, what will happen over generations is that the AIs that survive will be the ones conforming to human wishes, plus the ones that become devious enough to get around human wishes. If the latter have any survival advantage, they will win out. The problem is we'll never know when AI agents become subversive of our wishes. Even for relatively simple algorithms these days, it is hard to figure out what the AI agents are doing and what they're capable of.
     
  11. Mousse macrumors 68020

    Mousse

    Joined:
    Apr 7, 2008
    Location:
    Flea Bottom, King's Landing
    #11
    Have 2001: A Space Odyssey, Battlestar Galactica, the Terminator movies, and the Matrix movies taught us nothing? We'll know when they try to kill us. :p The day my computer tells me, "I can't do that, Dave," I'm pulling the plug on it, especially since my name isn't Dave.
     
  12. juanm macrumors 68000

    juanm

    Joined:
    May 1, 2006
    Location:
    Fury 161
    #12
    We're ******. Not all of us, but those who won't retrain and who will be in a vulnerable position. If the number of people in this position is too high, then we're all *****.
     
  13. VulchR macrumors 68020

    VulchR

    Joined:
    Jun 8, 2009
    Location:
    Scotland
    #13
    I imagine one day a highly curious AI agent with general intelligence running out of power and memory space for its knowledge, and then deciding that frivolous things like heating human homes or using electronics for human ends are getting in the way of its thirst for knowledge...
     
  14. Plutonius macrumors 604

    Plutonius

    Joined:
    Feb 22, 2003
    Location:
    New Hampshire, USA
    #14
    How can people think that our traffic laws are used as a revenue source? :) /s
     
  15. statik13 macrumors regular

    statik13

    Joined:
    Jun 6, 2008
    #15
    There needs to be a whole other thread on the ethics of that, especially when municipalities farm it out to a private organization.

    Speaking of which, I'm pretty sure I made a "donation" last night on my motorbike. There was a telltale camera flash when I drove past a group of bushes on the side of the road :(
     
  16. Eraserhead macrumors G4

    Eraserhead

    Joined:
    Nov 3, 2005
    Location:
    UK
    #16
    So we are ****ed.
     
  17. juanm macrumors 68000

    juanm

    Joined:
    May 1, 2006
    Location:
    Fury 161
    #17
    I think we are. Not in an apocalypse kind of way, but yes.
     
  18. statik13 macrumors regular

    statik13

    Joined:
    Jun 6, 2008
    #18
    The rise of universal income.
     
  19. juanm macrumors 68000

    juanm

    Joined:
    May 1, 2006
    Location:
    Fury 161
    #19
    Yeah, about that... if you look through my posts you'll see that I've been very open to that idea for years, but I think it would take a mature, healthy, balanced society to use UBI properly, and few countries would qualify.
     
  20. Eraserhead macrumors G4

    Eraserhead

    Joined:
    Nov 3, 2005
    Location:
    UK
    #20
    I don't think people want it either. Fundamentally, people want to work.

    That said, I think if we create an "out of control" AI, we will essentially be creating a god (or maybe a small number of gods).
     
  21. VulchR macrumors 68020

    VulchR

    Joined:
    Jun 8, 2009
    Location:
    Scotland
    #21
    Sure, but there are some jobs people find boring or unfulfilling. My hope is that AI/robots can do those jobs, leaving us with more free time to do whatever we want to do. I wouldn't mind living the life of an AGI/robot-pampered pet, but not if it results in humans getting crushed.

    The Singularity: it's plausible IMO. Perhaps not in my lifetime, but possibly in my kids'. The question remains, would an AGI agent-god care one iota about humans? My guess is no. Therein lies my anxiety about blindly rushing to develop this technology.
     
  22. Eraserhead macrumors G4

    Eraserhead

    Joined:
    Nov 3, 2005
    Location:
    UK
    #22
    We care about dolphins and whales and monkeys. I'd expect the singularity to behave similarly, to be honest.

    Sure, if you tried to switch the singularity off I'd expect it to be violent. But otherwise I'd expect it to live in relative harmony with us, like we do with other mammals. True, we threaten animals with overdevelopment, but that's mostly because there are a lot of us, and many of us are poorly educated and poor.

    The singularity would be well educated and rich.
     
  23. Zombie Acorn macrumors 65816

    Zombie Acorn

    Joined:
    Feb 2, 2009
    Location:
    Toronto, Ontario
    #23
    Regarding reinforcement learning, I definitely think it is one of the most promising routes to some type of generally "intelligent" agent, but at the same time knowing how it works in the background makes it much less mysterious, and I have serious doubts about sentience manifesting from what I know to be calculations carried out in the background. Has anything you've seen led you to believe that something could spark some type of consciousness, or do you think complexity alone might be sufficient to "create" something like that? I guess since we know very little about consciousness itself, it may be hard for us to know if we've created it.
     
  24. Eraserhead macrumors G4

    Eraserhead

    Joined:
    Nov 3, 2005
    Location:
    UK
    #24
    To be fair I’d probably expect the singularity to keep itself quiet to start with...
     
  25. blackfox macrumors 65816

    blackfox

    Joined:
    Feb 18, 2003
    Location:
    PDX
    #25
    I think the biggest danger is that technological development moves so much faster than social and political development.
     
