
MacRumors

macrumors bot
Original poster
Apr 12, 2001



The Information Technology Industry Council (ITI), an industry group that represents major tech companies including Apple, Google, Microsoft, Amazon, and Facebook, this week released Artificial Intelligence Policy Principles [PDF] covering responsible and ethical artificial intelligence development.


"We recognize our responsibility to integrate principles into the design of AI technologies, beyond compliance with existing laws," reads the document. AI researchers and stakeholders should "spend a great deal of time" working to ensure the "responsible design and deployment of AI systems." Some of the specific policies addressed are outlined below:

Government: The ITI supports government investment in fields related to AI and encourages governments to evaluate existing tools and use caution before adopting new laws, regulations, and taxes that could impede the responsible development and use of AI. ITI also discourages governments from requiring tech companies to provide access to technology, source code, algorithms, and encryption keys.

Public-Private Partnerships: Public-Private Partnerships should be utilized to speed up AI research and development, democratize access, prioritize diversity and inclusion, and prepare the workforce for the implications of artificial intelligence.

Responsible Design and Deployment: Highly autonomous AI systems must be designed in a manner consistent with international conventions that preserve human dignity, rights, and freedoms. It is the industry's responsibility to recognize the potential for misuse and commit to ethics by design.

Safety and Controllability: Autonomous agents must treat the safety of users and third parties as a paramount concern, and AI technologies should aim to reduce risks to humans. AI systems must have safeguards that ensure they remain controllable by humans.

Robust and Representative Data: AI systems need to leverage large datasets to avoid potentially harmful bias.

The ITI goes on to encourage robust support for AI research, a flexible regulatory approach, and strong cybersecurity and privacy provisions.

ITI President Dean Garfield told Axios that the guidelines have been released as a way for the industry to get involved in the discussion about AI. In the past, the group has learned "painful lessons" about staying on the sidelines of debates about emerging technology.

"Sometimes our instinct is to just put our head down and do our work, to develop, design, and innovate," he said. "But there's a recognition that our ability to innovate is going to be affected by how society perceives it."

Article Link: Industry Group Representing Apple and Google Releases AI Policy Principles
 

heretiq

Contributor
Jan 31, 2014
Denver, CO
Why is it that governance is being left to companies? Where are governments on this?
Exactly. These policies should be shaped by laws suited to ensuring safety and protecting individual rights and liberties in the 21st century. The absence of such legislation (principally in the US) has resulted in business models that harm society and exploit individuals instead of serving them.

We can all see what the permissive regulatory framework applied to internet businesses has wrought on society: the erosion of personal sovereignty online, and easily manipulated media that have undermined trust in essential institutions.

Imagine what this lax framework will curse us with in the AI era. At a minimum, personal identity needs to be treated legally like personal property, and a restrictive legal framework for AI needs to be imposed. The burden should be to prove utility and safety *beforehand*, not after the fact.
 

springsup

macrumors 65816
Feb 14, 2013
Why is it that governance is being left to companies? Where are governments on this?

We can't really expect lawmakers to have infinite foresight in all areas of life, science, and technology. There are two practical alternatives:

- Try to pre-emptively regulate everything anyway (Proactive): It's not certain how effective this would be; money and people can move around the planet more easily than ever these days, and if somebody elsewhere creates a world-changing invention, it will flow back just as easily and you'll have to update your backwards laws to allow it. So you don't really gain stability.

Regulating things at an international level is close to impossible. Look at climate change: basically everybody agrees that it's an existential crisis for humanity, but getting actual words on paper takes years of hard bargaining, and even then it's all non-binding, so anybody can drop out at any minute.

- Regulate as little as possible until you need to (Reactive): The obvious issue here is that it's reactive by definition; the bad stuff has already happened, and we need to stop it from happening again. Depending on how bad the thing is, this might not be an option (e.g., if the Earth is no longer habitable, it's too late to start regulating carbon emissions; likewise, if we're all enslaved to an AI, it's probably too late to start regulating them).

The good thing about this option is that it allows the technology to develop to the point where society can evaluate it and set meaningful limits on things that are known to be actual problems.

...Basically, we have to hope that we get a few minor AI hiccups along the way to Siri becoming our overlord. If it all goes smoothly until the day they suddenly turn, we're fooked.
 

heretiq

Contributor
Jan 31, 2014
Denver, CO
We can't really expect lawmakers to have infinite foresight in all areas of life, science, and technology. There are two practical alternatives:

- Try to pre-emptively regulate everything anyway (Proactive): It's not certain how effective this would be; ..

- Regulate as little as possible until you need to (Reactive): The obvious issue here is that it's reactive by definition; ..

Those options aren’t mutually exclusive. Proactive and reactive regulation already exist simultaneously.

Dangerous chemicals, nuclear materials, certain medications, certain financial transactions, etc. are proactively regulated, while reactive regulation is applied in areas where the risks of little or no regulation are minimal.

The question is: “How should AI be treated?”

Blanket treatment of “AI” to allow rapid “innovation” is nonsensical, because using AI to produce better photographs or better Siri suggestions carries different risks than using AI to produce armed, autonomous security guards or newsfeeds that can systematically brainwash segments of the population.

AI needs to be categorized and subjected to proportionate regulation by category. This categorization, like the regulation of controlled substances, cannot be left entirely to business. Laissez-faire regulation gave us Facebook, Twitter, and fake news. The stakes and risks with AI are arguably greater, and will increase exponentially as more and more human judgment is subordinated to AI.

If Stephen Hawking and Elon Musk believe AI is a threat, we should heed their warnings and proactively regulate it.
 