Some Answers about Policy Outreach on Artificial Intelligence
I asked a question last week about whether efforts to ensure that artificial intelligence is developed safely should include public outreach. This cuts significantly against the grain of most people working on AI safety: the predominant view is that only research is useful right now, and that even outreach to elites should wait. While I'm still not persuaded that public outreach would be harmful, a few of the answers I received moved me toward seeing why it might be a bad idea:

1) On the core issues, the policy asks for ensuring the safe development of artificial intelligence have yet to be worked out. Nobody yet knows how to actually program an AI to be safe. We are so far from that point that there is little to say.

2) Regulation could tie the hands of ethical AI developers and leave development to bad actors. This argument closely resembles arguments about other kinds of regulation: industries flee the countries with the most regulation, moving instead to less-regulated...