AI in sports - a real-world laboratory for upcoming societal change?

So what is happening in sports due to AI? Well, the game has changed, as one might say. Diverse AI methods, from data analysis of wearables' data streams to computer vision for electronic line calling in tennis, are improving many aspects of the game, from athletes' health and game planning to the fairness of officiating decisions.

But there are downsides as well: the VAR (video assistant referee) in soccer, for example, is not well received by fans worldwide, because it interrupts the game unnaturally and in ways that feel arbitrary. This negatively affects everyone involved: it undermines the authority of the referee on the field, diminishes the experience of the audience, and dampens the motivation and joy of the players. In a word: it has dehumanizing effects.

Additionally, seeing people compete against AI systems - whether table tennis robots or chess engines - reveals another dehumanizing effect: the loss of hope on the part of the human competitor. In a similar vein, as Nate Silver tells the story, Garry Kasparov first lost hope (his confidence in understanding a seemingly absurd move by Deep Blue) before being thrown off track by it and ultimately losing the match.

As sports is a mirror of society - people from all walks of life are engaged at its different levels, as players, fans, or representatives - it is reasonable to assume that the effects of AI we see in the niche of sports could also play out on the much larger playing field of society as AI systems are deployed in ever more use cases. That, then, tasks us with thinking harder about how to prevent the potentially dehumanizing and hope-corroding effects identified above.

Here are my three points on how we can have technology with hope:

  1. Regard AI as a tool, not a personal counterpart. Specifically, don't sell it as if it were one.

  2. Improving and deploying AI demands a clearer view of what the human is, since this has traditionally been the source of human value (think of our liberal democratic constitutions or the Universal Declaration of Human Rights, which grant value independent of any individual's skills). Technomorphizing humans therefore diminishes human value and perceived self-efficacy, just as anthropomorphizing AI contributes to fear.

  3. Hope is the most needed commodity today. Therefore, we need to give people a hopeful outlook on their future by showing them that they have agency in designing it. And because we can only build what we can imagine, we need to find and tell better narratives of what positive futures with AI might look like.

One approach to finding such narratives is to think about use cases of technology like AI that would actually strengthen (instead of diminish) what makes us distinctly human. For example, one characteristic of being human is to be limited. As Neil Lawrence explains in The Atomic Human, this is not just a weakness to be mitigated - it profoundly shapes our existence and intelligence (e.g., the way we experience the world is characterized by how little information we can actually perceive at once, and the value we ascribe to a thing has to do with how rare or limited that thing is).

Given this, AI applications that give us the illusion of unlimitedness (e.g., being able to write even more emails, nurture even more contacts, …) could be regarded as not necessarily supporting our core - while the spam filter, which shields us from being overwhelmed by ever more input, may be the most ethical AI application to date.

Written on November 26, 2024 (last modified: November 26, 2024)