We also need to stop imbuing these algorithms with agency. AI can't lie or be unethical. It can be trained to do so by lying or unethical humans. If it isn't trained specifically for that purpose, then the AI response is simply wrong.
Good point. I agree that AI is amoral. That does not mean its output can't be unethical. Just ask Taylor Swift...

EDIT: Also, bear in mind that any algorithm using reinforcement learning at any level gives AI agency.
 
Isn't one GPU core the difference between the current iPhone processors? So does it make a big difference, or not? When it's in Apple's marketing favor, apparently so. When not, apparently not.
 
Its output can be used unethically, which is different from the output itself being unethical. Sadly, I lost Taylor's number, so I can't get the specifics from her...

If reinforcement learning gives AI agency, then any system with a feedback loop has agency. AI, at least to date, is just an algorithm. It is a complex and opaque algorithm, but there's no sign right now that any of these systems have an agenda beyond transforming input to output.
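
To make the point concrete, here's a throwaway feedback loop in Python (a toy cruise control with made-up gains and a fake vehicle model, purely illustrative, not any real system). It relentlessly pushes the speed toward a setpoint, which is a "drive that guides behavior" if you squint, yet there's plainly no intent anywhere in it:

```python
# Toy cruise control: a proportional feedback loop.
# It keeps nudging the speed toward a setpoint, with no goals
# or intent anywhere -- just arithmetic on an error signal.

def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def step(speed, setpoint, gain=0.5, drag=0.02):
    """One control cycle: measure the error, apply throttle, update speed."""
    error = setpoint - speed          # feedback signal
    throttle = clamp(gain * error)    # corrective action, limited to [0, 1]
    return speed * (1 - drag) + throttle   # crude made-up vehicle dynamics

speed = 20.0
for _ in range(100):
    speed = step(speed, setpoint=30.0)
print(round(speed, 1))  # settles just under the 30.0 setpoint (proportional-control offset)
```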
 
Isn’t this false advertising for those who already bought one and are outside their return window? Seems like another class action waiting to happen.
 
I would not only say that AI using reinforcement learning has agency (a drive that guides behaviour), but also that the AI is addicted, though that's just my opinion (as somebody working in the neuroscience of reinforcement learning).
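
To show what I mean by a drive, here is the entire "motivation" of a toy reinforcement learner written out as illustrative Python (an epsilon-greedy two-armed bandit with made-up payout probabilities, not any real system): a loop that keeps steering behaviour toward whatever has been rewarded in the past.

```python
import random

# Toy reinforcement learning: an epsilon-greedy two-armed bandit.
# The agent's entire "drive" is this update rule pushing it toward
# whichever action has paid off more reward so far.

true_payouts = [0.3, 0.7]   # made-up reward probabilities for the two actions
values = [0.0, 0.0]         # the agent's running estimate of each action's worth
counts = [0, 0]
epsilon = 0.1               # how often it explores instead of exploiting

for _ in range(10_000):
    if random.random() < epsilon:
        action = random.randrange(2)           # explore
    else:
        action = values.index(max(values))     # exploit the current favourite
    reward = 1.0 if random.random() < true_payouts[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the reward just received.
    values[action] += (reward - values[action]) / counts[action]

print([round(v, 2) for v in values])  # estimates approach the true payout rates, 0.3 and 0.7
```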
 
Anything with a feedback loop has a drive that guides behavior. Maybe we're using different definitions of what agency means in this context, but I take it to include self determination and intent. My car's cruise control is a drive that guides behavior, but I wouldn't assign it agency. Likewise the voltage regulator in my power supply. Or the spin rate governor in a steam engine.

Perhaps "self determination" would have been a better phrase for me to use...

A neural network transforms inputs to outputs. We can wrap it in some interesting control systems and feedback loops to converge to the desired mapping function more quickly or perhaps with less precise expectations provided in training, but I believe it is a stretch to think it has any sense of ethics guiding its decisions or that it is pursuing any agenda beyond the goal of providing the most correct output for the given input. Maybe humans exhibit ethics because our neural nets are so much more complex and interconnected, but at the level of sophistication we have with artificial networks we're far from being able to assign right and wrong to the machine rather than the humans creating and using it.
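
For what it's worth, here is roughly what I mean by "transforms inputs to outputs": a tiny two-layer network written out by hand in Python, with random made-up weights and no training loop (a sketch, not any real model). Once the weights are fixed, it is nothing but a deterministic chain of multiplies and adds:

```python
import math, random

# A tiny fixed two-layer neural network, written out by hand.
# Once the weights are chosen (here: random and untrained), the whole thing
# is a deterministic mapping from an input vector to an output number.

random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # 3 inputs -> 4 hidden units
W2 = [random.uniform(-1, 1) for _ in range(4)]                      # 4 hidden units -> 1 output

def forward(x):
    """Pure function: the same input always gives the same output.
    No memory, no goals, no notion of right and wrong -- just arithmetic."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

print(forward([0.5, -1.0, 2.0]))
print(forward([0.5, -1.0, 2.0]))  # identical: the "decision" is fixed entirely by the weights
```

All of the interesting behaviour comes from how the weights and training data are chosen, which is exactly where I'd locate the right and wrong.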
 