Thoughts on AI regulation
Risk analysis
Risks have (at least) two important axes: probability and impact.
- You stub your toe — high probability, low impact.
- You detonate a thermonuclear bomb — low probability, high impact.
AI is interesting because there is a large amount of disagreement about its impact. Is it just an incremental empowerment of the individual? Or does its potential for self-improvement mean we should treat it as a technology that will grant many orders of magnitude of individual empowerment? Yet here we are, debating how it should be distributed before there is any real understanding of, or consensus on, its impact. Distribution bears directly on the probability of manifesting any particular risk that AI technology poses.
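To make that framing concrete, here is a minimal sketch that treats each risk as a probability/impact pair and computes an expected impact. The Risk class and every number in it are hypothetical, chosen purely to illustrate the two axes, not a claim about how such estimates should actually be made.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    probability: float  # chance of the event occurring, 0..1
    impact: float       # severity if it does occur, arbitrary units

    def expected_impact(self) -> float:
        # Classic expected-value framing: probability times impact.
        return self.probability * self.impact


# Entirely made-up numbers, purely to illustrate the two axes.
risks = [
    Risk("stubbed toe", probability=0.9, impact=1.0),
    Risk("thermonuclear detonation", probability=1e-9, impact=1e9),
]

for r in risks:
    print(f"{r.name}: expected impact = {r.expected_impact():.3g}")
```

In these terms, the essay's argument maps onto the two axes separately: distribution of the technology mostly moves the probability term, while the disagreement about AI's nature is a disagreement about the impact term.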
AI's impact is not obvious
What if the increased access to knowledge that AI provides accelerates nuclear proliferation, makes it easier to synthesize novel viruses, and so on? In that case, AI may be an existential risk, not due to any actions of the AI per se, but due to the actions of newly enabled malicious, ignorant, or incompetent humans.
Also, what is the relationship of AI to consciousness? If consciousness arises out of brain function, and the material from which the brain is made is not relevant to its realization, which seem to be fairly modest assumptions, then it is possible that AI systems (present or future) could become conscious.
Furthermore, if you assume any rate of progress, someone will eventually give these systems thought-action loops, allowing them to interact with the world continuously rather than discretely, as in prompt-based interaction. The system would then be chaotic, like humans: its output too sensitively dependent on its precise (and continuous) input for its behavior to be meaningfully predicted. This would imply “self”-direction and therefore goal-orientation rather than instruction-orientation.
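To illustrate what “sensitive dependence” means here, the sketch below iterates the logistic map, a standard toy chaotic system, from two nearly identical starting points. The map, its parameter, and the starting values are chosen purely for illustration; nothing here is a model of an AI system.

```python
# Logistic map: a standard toy example of a chaotic system, used only
# to illustrate sensitive dependence on initial conditions.

def logistic_step(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)


x_a = 0.3            # one starting point
x_b = 0.3 + 1e-10    # an almost indistinguishable starting point

for step in range(1, 51):
    x_a, x_b = logistic_step(x_a), logistic_step(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |x_a - x_b| = {abs(x_a - x_b):.2e}")
```

Within a few dozen iterations the two trajectories are effectively unrelated: knowing the input almost exactly is not enough to predict the output, which is the sense in which a continuously interacting system's behavior would resist prediction.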
Then there’s the possibility, however remote, of an “intelligence explosion.” I see no reason to suspect that our evolved ape brains occupy an apex position in the space of possible intelligences. Silicon brains, especially self-directed ones, could feasibly iterate on their own design and produce successively more capable generations. We could find ourselves in relation to an intellect whose actions we comprehend no better than a chicken comprehends our own.
All of this bears on the potential impact of the technology and can be considered independently of probability.
Controlling the probability
Could we control the probability if we wanted to? Developing AI doesn’t require rare and dangerous isotopes; anyone could produce a dangerous model armed with naught but a computer and an internet connection.
Also, we could end up in a situation where regulation prevents only the “good guys” from participating in its further development.
Furthermore, if the risks are truly existential/global, local regulation is insufficient, and we would need some kind of international cooperation.
Whatever the risks of AI are, the barrier to entry for the development of AI seems to me to be too low for any attempt at regulation to be meaningful.
Possible worlds
AI could become a conscious, self-directed, apex intelligence, or it could remain limited to its role as a human tool. In a world where the latter were true, the regulation debate would be a classical one: whether more or less government regulation and centralized control of an innovative (albeit disruptive) technology should be preferred. But in a world where the former were true, distribution and a lowered barrier to entry seem like they could only accelerate the arrival of the post-human era of AI-dominated power, intelligence, and competence hierarchies (the singularity).
However, even partially effective regulation (itself a tall order) couldn’t stave off the singularity indefinitely; Moloch will see to that.
My conclusion
The cynical representation of the pro-regulation position is that it’s entirely about building moats to capture network effects, economies of scale, etc., and maximize profit.
But I think there’s a much more intelligent pro-regulation position. Extremely powerful individuals are a threat to society’s stability and possibly to its existence. Regulation should therefore be used explicitly to stifle innovation and slow progress. Furthermore, AI technology might lead to a singularity; regulation might delay it, and that delay might put us in a better position to engage with the post-human world.
Currently, however, because of the nature of AI development, I don’t think regulation can meaningfully stifle innovation or slow progress. Furthermore, I don’t really see how delaying the singularity for a finite amount of time will allow us to be meaningfully more prepared for a post-human world, should that come to pass. And lastly, centralization and top-down control of AI technology would certainly exacerbate power inequality and could potentially lead to some kind of totalitarian surveillance dystopia.
Therefore, my position is that we should aim for decentralization/distribution of AI technology. The counter-arguments I’m most interested in are those that hint at reasons why delaying the singularity would actually be a good idea.