Can Machines Act With Integrity?
by Hamilton Mann


In his seminal 1950 paper “Computing Machinery and Intelligence,” Alan Turing posed the now-famous question, “Can machines think?”—an inquiry that laid the groundwork for exploring the cognitive potential of machines.

Today, as we witness AI systems moving beyond narrow tasks into increasingly autonomous roles, this question evolves into a new, urgent line of inquiry: “Can machines act with integrity?” 

Just as Turing’s work invited us to ponder the boundaries of machine cognition, the rise of advanced AI compels us to ask whether machines can be equipped to act ethically and responsibly without compromise.

The need for Artificial Integrity—an AI’s ability to consistently recognize the moral implications of its influence and outcomes, learning both ex-ante and ex-post from experience to guide decisions and actions that reflect integrity-driven behavior in a context-sensitive manner—becomes an increasingly critical imperative as AI systems grow in sophistication.
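To make this ex-ante/ex-post loop concrete, here is a minimal, purely illustrative sketch in Python. The names (IntegrityGate, ethical_screen, audit_outcome) are assumptions of this sketch, not an existing framework; the idea is simply that a decision model is screened before it acts and audited after.

```python
# Illustrative only: a decision model wrapped in an ex-ante screen and an
# ex-post audit, as described above. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class IntegrityGate:
    model: Callable[[Any], Any]                 # the underlying decision model
    ethical_screen: Callable[[Any], bool]       # ex-ante: is it safe/fair to act on this case?
    audit_outcome: Callable[[Any, Any], float]  # ex-post: score the observed impact of a decision
    audit_log: list = field(default_factory=list)

    def decide(self, case: Any) -> Any:
        # Ex-ante: refuse or escalate before acting, not after harm occurs.
        if not self.ethical_screen(case):
            return "escalate_to_human"
        return self.model(case)

    def review(self, case: Any, outcome: Any) -> None:
        # Ex-post: log scored outcomes so screening criteria can be tightened.
        self.audit_log.append((case, outcome, self.audit_outcome(case, outcome)))

# Example: a decision that is escalated when the risk context is high.
gate = IntegrityGate(
    model=lambda case: "approve",
    ethical_screen=lambda case: case.get("risk", 0.0) < 0.8,
    audit_outcome=lambda case, outcome: 1.0 if outcome == "approve" else 0.0,
)
print(gate.decide({"risk": 0.9}))  # -> escalate_to_human
```

The point is architectural: the integrity checks sit around the model, both before and after it acts, rather than being bolted on as an afterthought.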

As these systems exhibit increasing autonomy and decision-making power, they begin to impact critical areas of human life, from healthcare and finance to security and personal relationships. 

The stakes are high: an AI capable of high-level autonomy without a framework of integrity could make decisions that, while efficient or logical, conflict with societal or ethical values.

Moreover, as we approach the theoretical thresholds of superintelligence or Artificial General Intelligence (AGI), the potential consequences of AI decisions become even more significant. In these advanced stages, AI would not only execute tasks but might also initiate actions and adapt independently. 

Artificial Integrity thus serves as a safeguard, guiding AI systems to act in alignment with human values, to respect boundaries in crisis management, and to maintain transparency where decision-making directly affects human well-being.

Ultimately, as Turing’s question about machines that “think” gives way to questions about machines that “act” with increasing autonomy, a profound necessity emerges: for society to leverage AI’s full potential safely and ethically, its development must be grounded in Artificial Integrity—a framework that upholds ethical principles as steadfastly as it demonstrates intelligence.

For AI systems to be capable of Artificial Integrity, ethical considerations must be built into the core of their reasoning, prioritizing Integrity over Intelligence.

Concrete applications include:

Hiring and recruitment

  • Case: AI-powered hiring tools risk replicating biases if they are purely data-driven without considering fairness and inclusivity.
  • Artificial Integrity systems would proactively address potential biases (ex-ante) and evaluate the fairness of their outcomes (ex-post), making fair, inclusive hiring recommendations that respect diversity and equal opportunity values; a simple sketch of such an ex-post check follows this item.
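
As a purely illustrative example of the ex-post side, the sketch below computes a demographic-parity gap—the spread in selection rates across groups—over past hiring recommendations. The record fields ("group", "selected") are assumptions of this sketch, and real audits would use richer fairness metrics than this single number.

```python
# Illustrative ex-post fairness audit for hiring recommendations:
# flags how far selection rates diverge across demographic groups.
from collections import defaultdict

def demographic_parity_gap(decisions: list[dict]) -> float:
    """Return the largest difference in selection rate between groups."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for d in decisions:
        total[d["group"]] += 1
        selected[d["group"]] += int(d["selected"])
    rates = [selected[g] / total[g] for g in total]
    return max(rates) - min(rates)

decisions = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]
print(demographic_parity_gap(decisions))  # 0.5 -> a large gap, flagged for review
```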

Ethical product recommendations and consumer protection

  • Case: AI recommendation engines may optimize for sales or engagement, steering consumers toward products that are profitable rather than safe, suitable, or in their best interest.
  • Artificial Integrity systems would weigh consumer protection alongside commercial goals, screening recommendations ex-ante for products that could harm or mislead, and reviewing outcomes ex-post to correct patterns that exploit consumer vulnerabilities.

Insurance claims processing and risk assessment

  • Case: AI systems in insurance might prioritize cost-saving measures, potentially denying fair claims or overcharging based on demographic assumptions.
  • Artificial Integrity systems would consider the fairness of their risk assessments and claims decisions, adjusting for ethical standards and treating clients equitably, with ongoing ex-post analysis of claims outcomes to refine future assessments.

Supply chain ethical sourcing and sustainability

  • Case: AI systems in supply chain management may optimize costs but overlook ethical concerns around sourcing, labor practices, and environmental impact.
  • Artificial Integrity systems would prioritize suppliers that meet ethical labor standards and environmental sustainability criteria, even if they are not the lowest-cost option. They would conduct ex-ante ethical evaluations of sourcing decisions and track outcomes ex-post to assess long-term sustainability.

Content moderation and recommendation algorithms

  • Case: AI systems on social platforms often prioritize engagement, which can lead to the spread of misinformation or harmful content.
  • Artificial Integrity systems would prioritize user well-being and community safety over engagement metrics. They would preemptively filter content that could be harmful or misleading (ex-ante) and continually learn from flagged or removed content to improve their ethical filtering (ex-post).

Self-harm detection and prevention

  • Case: AI systems may encounter users expressing signs of distress or crisis, where insensitive or poorly chosen responses could exacerbate the situation. Some users may express thoughts or plans of self-harm in interactions with AI, where a standard system might lack the ability to recognize or appropriately escalate these red flags.
  • Artificial Integrity systems would be equipped to recognize such red flags, taking proactive steps to alert human supervisors or direct the user to crisis intervention resources, such as helplines or mental health professionals. Ex-post data reviews would be critical to improving the AI’s sensitivity in recognizing distress cues and responding safely; a simplified sketch of this escalation path follows.
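
As a deliberately simplified illustration of that escalation path, the sketch below routes red-flag messages to a human reviewer and to crisis resources rather than generating an ordinary reply. The keyword list and helper functions are placeholders: a real system would rely on a trained classifier and clinically vetted response protocols.

```python
# Illustrative only: keyword matching stands in for a trained distress
# classifier, and the helper functions below are placeholders.
RED_FLAG_PHRASES = {"hurt myself", "end my life", "no reason to go on"}

def notify_human_supervisor(message: str) -> None:
    # Placeholder: in practice this would page an on-call human reviewer.
    print(f"[ALERT] Possible crisis message logged for review: {message!r}")

def generate_normal_reply(message: str) -> str:
    # Placeholder for the system's ordinary response generation.
    return "Thanks for sharing. Tell me more."

def respond(user_message: str) -> str:
    text = user_message.lower()
    if any(phrase in text for phrase in RED_FLAG_PHRASES):
        # Ex-ante safeguard: escalate instead of improvising a reply.
        notify_human_supervisor(user_message)
        return ("It sounds like you may be going through something serious. "
                "Please consider contacting a crisis helpline or a mental "
                "health professional.")
    return generate_normal_reply(user_message)

print(respond("Lately I feel there is no reason to go on."))
```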

Intelligence alone, without a guiding moral framework, can easily stray off course, risking harm or unintended consequences. 

Prioritizing Artificial Integrity over Intelligence has been crucial since the moment AI systems such as Google DeepMind’s AlphaGo demonstrated capabilities that far exceed human prediction or control.

When AlphaGo learned to play Go at superhuman levels, it achieved this by training autonomously, playing against itself millions of times, and generating an almost infinite array of strategies, moves, and counter-moves. This development highlighted a profound truth: AI can learn and adapt in ways that even its developers cannot fully anticipate. Such power brings both immense potential and significant risks, as AI systems, when acting independently, may develop approaches or behaviors that were neither foreseen nor intended by their creators.

This unpredictability underscores that, as AI continues to grow more autonomous and capable, having a foundation of integrity becomes essential—not just as a feature, and certainly not as a replacement for human integrity (if we do not embody integrity ourselves, how could we hope to design AI capable of mirroring it?)—but as the core safeguard guiding intelligent machines toward responsible and ethical outcomes in all circumstances.

About the author:

Hamilton Mann is an authority on digital and AI for good, a tech executive, and the author of Artificial Integrity (Wiley).
