The two types of AI risk
This week I spoke about the hypothetical and real risks of AI at the University of Cambridge’s Westminster College.
I would divide AI risks into two categories. First, manageable risks. Here I count many of the current issues, such as bias in AI systems or the problem of fake content, for which technical and/or organizational solutions are emerging. Regarding fake content, for example: while it is a problem, I also expect that we humans will quickly adapt to it and change the patterns of whom and what we trust (for example, no longer trusting anonymous sources on the web). After all, who likes to be wound up?
Second, there are risks one might call higher-order risks: emerging risks to society that are not inherent in any single application or shortcoming. Existential risk would be one, though a large part of the AI community (including me) is convinced it is purely hypothetical. More real, to me, is the following:
Humans, intimidated by perceived machine competence and unable to resist the convenience offered by AI systems, may stop exercising agency over their own lives, society, and future. They may stop voluntarily embracing the “pain” necessary for us humans to grow as persons (considering all learning as some sort of pain), because the convenient, easy automated solution is so close at hand. Example: you could do the exercise yourself, spend a few hours of effort, and learn; or you could cheat by handing in the solution an AI system created for you, saving the time and effort but not learning.
Our track record as a species in exercising this kind of self-control is not great when convenience is readily available. We need to think and debate more about how to deal with this in the presence of powerful AI systems, which are in some sense the ultimate “convenience tools.”
I think (and will elaborate more in the future) that technology itself could help here, and that faith-based traditions can be instrumental in strengthening human worth and value, leading to the necessary character growth.