A Kantian approach to Artificial Intelligence: Scholastica ratio machinis

Our creative minds in cinema and novels have produced a narrative that repeats itself often: that of the rogue machine, a machina malefica lacking human understanding and with no will other than destruction. Before it stands total, irreversible defeat on all fronts. In stories like these, the machine was never the mastermind; we were always the enablers, failing long before a system took control and swept us aside. This lack of critical control has long concerned legislators, engineers and philosophers, who fear, perhaps not unjustly, that artificial intelligence, after satisfying our greatest dreams, will return like Jason and the Argonauts with the golden fleece, only to find the throne occupied by us.

Conversely, there are those who envision a world where machines serve as equalizers, ensuring justice for all. Research conducted at Tulane University revealed a surprising trend: many judges were reluctant to accept AI recommendations of alternative punishments, such as probation, for individuals of color. This sentiment was echoed in the Harris Poll on judicial system sentiment, in which over half of the respondents expressed support for AI judges over their human counterparts. This is a glimmer of hope, a testament to the potential of AI to be a force for good in our society, provided we steer it with the right principles and values.

Today, we recognize that machines do not make abstract decisions; they are products of human creation, bound by a rulebook, and as prone to narrow-mindedness and intolerance as we are. The collective aspiration for a more just future teeters on the decisions of AI creators and their moral compass, a stark reminder of our immense responsibility in shaping AI's future. It is not enough to create powerful machines; we must ensure that they are steered by ethical principles that mirror our shared values.

Neither side can definitively predict the impact of AI: whether it will be positive or negative, or whether it will alter our very nature. If, as Kant postulated, we cannot make a priori ethical decisions, what do we have left with which to examine and act on AI? Kant would argue that the answer lies in the logic of synthetic knowledge, where we stop attributing a priori concepts to AI and instead synthetically create new, necessary, and appropriate ones. Within the scope of their possible experience and moral conscience, AI developers can write the rulebooks of AI systems. However, this responsibility comes with a caveat: it requires us to employ our intuitions and knowledge by asking not only "why" we think of right and wrong, of better or worse, but also "how".