Some Answers about Policy Outreach on Artificial Intelligence

I asked a question last week about whether efforts to ensure that artificial intelligence is developed safely should include public outreach. This goes significantly against the grain of most people working on AI safety: the predominant view is that only research is useful right now, and that even outreach to elites should wait. While I'm still not persuaded that public outreach would be harmful, a few of the answers I got moved me toward seeing why it might be a bad idea:

1) On the core issues, the policy asks for ensuring safe development of artificial intelligence have yet to be worked out. Nobody yet knows how to actually program an AI to be safe. We are so far from that point that there is little to say.

2) Regulation could tie the hands of ethical AI developers and leave development to bad actors. This argument closely resembles arguments about other regulations: industries flee the most-regulated countries and move to less-regulated ones. In most cases I think it's still worth passing the regulation, but it's at least plausible that AI is a case where regulation right now would be bad, especially given (1).

3) Working on AI safety today is very different from working on a risk like climate change, because climate change is already happening, while AI safety problems are almost entirely in the future. (There are some today, though.) Working on AI safety today is like working on climate change in 1900.

4) On the specific question of lethal autonomous weapons, it's not clear how harmful these are. A recent post on the effective altruism forum persuaded me that the effect of AI weapons is closer to ambiguous than I'd thought.

Still, I have reservations:

1) It seems there are policy goals that could be achieved in this area. One would be more coordination among the main actors. Another would be regulation of the things that are here today, like lethal offensive autonomous weapons, even if an outright ban may not make sense. Getting the infrastructure in place to deal with these issues could pay off down the road.

2) I don't buy the idea that getting members of the public on board with AI safety would be counterproductive. Sure, members of the public have a harder time than experts understanding and explaining these issues, but most people are reasonably literate, and scientific literacy is increasing. Polarization does not seem an inevitable result of careful, friendly public outreach, only of confrontational outreach. And the costs of poor explanations and polarization can be outweighed by the upsides.

At the end of the day, it does seem clear that this is a conversation worth continuing. Outreach directly on the topic of superintelligence may not be helpful, but I still wonder whether more preparation for the day that superintelligence is near might make sense.
