Thursday, December 11, 2025

Why I’m Done Pretending AI Can Be Ethical

Yes, I used to believe in AI: not just the technology, but the promise that it could be a tool, a force for good; that if we made it transparent, fair, and explainable, it would help people.

I was naive, and I’m probably not alone. Now I see the truth:
AI isn’t neutral.
It’s not just a tool. It’s a system of control, and we’re being sold the idea that “explainability” will fix it.

Let’s be clear:
I’ve spent years reading about explainable AI (XAI). I have studied frameworks, debated ethics, and believed that if we just made AI understandable, we could make it fair. But I was wrong.

The Illusion of Clarity

We’re told: if the AI can explain its decision, then it’s ethical. But that’s a lie. It’s not about understanding; it’s about avoiding responsibility.

When an AI denies you a loan, fires you, or misdiagnoses your illness:
  • The company says: “It’s just an algorithm.”
  • The developers say: “We didn’t understand it either.”
  • The executives say: “We don’t control it.”
And then XAI comes along and says: “Here’s how it worked.”

But explanations don’t mean accountability. They just make the harm feel more “rational.” They make it easier to ignore the real question: Who is actually responsible?
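To make that concrete, here is a minimal, entirely hypothetical sketch of what a typical XAI “explanation” delivers for a denied loan: per-feature contributions from an invented linear scoring model. Every weight, feature name, and input value below is made up for illustration.

```python
# Hypothetical linear loan-scoring model and one applicant.
# All names and numbers are invented for illustration only.
weights = {"income": 0.5, "debt_ratio": -0.8, "zip_code_risk": -0.6}
applicant = {"income": 0.4, "debt_ratio": 0.9, "zip_code_risk": 0.7}

# A feature-attribution "explanation": each feature's contribution
# to the final score is just weight * value for a linear model.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approved" if score > 0 else "denied"

print(decision)  # -> denied (score is -0.94)
for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {c:+.2f}")
```

Notice what the output contains and what it does not: it names the inputs that drove the denial, but never the people who chose the weights, sourced the data, or bear liability for the decision.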

For teenagers, turning to ChatGPT for advice has become very common. That is a danger for the most vulnerable, and not only teenagers: when we are in trouble, a wrong answer can contribute to disasters like suicide [5]. Even knowing how things stand, companies and developers, let’s call them the responsible parties, continue to pretend they don’t know. And, as we know, they certainly aren’t going to stop the business; instead they talk about AI at the edge: the growing trend of running AI directly on local devices (edge computing).

Dr. Martin Peterson, a philosophy professor at Texas A&M University, says that while AI can mimic human decision-making, it cannot truly make moral choices. AI cannot, by itself, be a “moral agent” that understands the difference between right and wrong and is held accountable for its actions, he said [4].

Hannah Arendt warned about “rule by Nobody”: a system where no one is responsible because decisions are made by processes, not people [2][3]. AI is now the ultimate “Nobody” system. It’s not a tool; it’s a mechanism of moral evasion [3].

XAI as a Corporate Safety Net

Most XAI research is funded by the same companies building opaque AI systems. So what’s the goal?
  • To create “explainability” standards so complex that only big tech can comply
  • To use “transparency” as a shield against real regulation
  • To make AI look ethical without changing anything
It’s not ethics. It’s ethics theater, and I’m tired of pretending.

The Real Question

We don’t ask:
  • Who benefits from this AI?
  • Who gets punished when it fails?
  • Is this system even necessary?
Instead, we focus on how it works when we should be asking why it exists. XAI treats the symptom. It assumes the system is legitimate and just needs to be “clear.” But transparency doesn’t mean fairness. Explainability doesn’t mean justice. Clarity doesn’t mean control.

The Power Behind the Algorithm

Every AI system is more than code and data; it’s a reflection of human choices, values, and power dynamics. Behind every recommendation, prediction, or decision lies a set of deliberate, often unspoken, decisions:

  • What data gets included and what gets left out?
  • What outcomes are prioritized: efficiency, fairness, profit, or safety?
  • And perhaps most critically: Who gets to decide?

These aren’t neutral technical questions. They’re deeply political. The design of an AI system is never value-free. It encodes the priorities of those who build it, fund it, and control it.

Take healthcare: an AI guided by utilitarian principles might prioritize cost-effective treatments, aiming to maximize health outcomes across a population. But in doing so, it may deprioritize expensive therapies for rare conditions, effectively sacrificing individual lives for the “greater good.” The algorithm doesn’t make a moral choice; it reveals one.

In hiring, a Kantian approach would insist on treating every applicant as an end in themselves, respecting their dignity, autonomy, and rights. But this can clash with efficiency. An AI built on Kantian ethics might reject biased shortcuts, even if they speed up hiring. It’s principled, but it may slow down processes that favor quick decisions over justice.

Then there’s finance. A virtue-ethics framework might push for AI that promotes “responsible” financial behavior, steering users toward saving, avoiding debt, or making long-term investments. But this often veers into paternalism: the AI assumes it knows what’s best, even if users want to take risks. It’s not about fairness or efficiency; it’s about shaping behavior in the name of “goodness.”

Each of these ethical frameworks serves different interests. Utilitarianism favors scalability and population-level outcomes. Kantianism protects individual rights. Virtue ethics aims to cultivate moral character.

But here’s the catch: none of them is truly neutral. And when we talk about explainable AI (XAI), the idea that AI decisions should be transparent and understandable, we often fall into a trap. XAI makes systems seem more transparent, but it doesn’t resolve the deeper ethical tensions. It often disguises political choices behind technical language: “The model optimized for fairness” or “The algorithm prioritizes robustness.” These sound objective, but they’re not. They’re just rebranding value judgments as algorithmic necessities.
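A toy calculation shows how “optimized for fairness” smuggles in a value choice. Two standard fairness metrics, computed on the same invented loan decisions for two groups, disagree about how unfair the system is. All numbers below are hypothetical.

```python
# Invented outcomes for two applicant groups:
# group A: 10 applicants, 6 approved; 5 truly creditworthy, 4 of them approved
# group B: 10 applicants, 4 approved; 5 truly creditworthy, 2 of them approved

def demographic_parity_gap(approved_a, total_a, approved_b, total_b):
    """Difference in overall approval rates between the two groups."""
    return abs(approved_a / total_a - approved_b / total_b)

def equal_opportunity_gap(tp_a, pos_a, tp_b, pos_b):
    """Difference in approval rates among the truly creditworthy."""
    return abs(tp_a / pos_a - tp_b / pos_b)

dp = demographic_parity_gap(6, 10, 4, 10)  # 0.2
eo = equal_opportunity_gap(4, 5, 2, 5)     # 0.4

# The same system looks twice as unfair under one definition as under
# the other. Picking which gap to minimize is a value judgment made by
# people, before any optimization runs.
print(dp, eo)
```

Whichever gap a team decides to report or minimize, that choice is political, not algorithmic, which is exactly the judgment that phrases like “optimized for fairness” conceal.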

So the real question isn’t how to make AI explainable. It’s who gets to define what’s fair, what’s efficient, what’s responsible. Because in the end, the algorithm doesn’t decide. It reflects. And the power to shape that reflection is where the real decision lies.

What We Actually Need

If we want ethical AI, we need:
  • Real accountability - not explanations, but liability
  • Democratic control - communities must be able to say “no”
  • Profit redistribution - AI should benefit everyone, not just the tech elite
  • A ban on AI in high-stakes decisions - like parole, child custody, or medical triage
Because some decisions are too important to automate.

The Uncomfortable Truth

I’m not angry. I’m not bitter. I’m just… done. I used to believe in the promise of AI. Now I know the truth: we don’t need better explanations. We need better systems. We need to ask: should this decision be made by AI at all? Because the real danger isn’t the algorithm. It’s the assumption that we can explain away injustice.

Final Thought

I’m not writing to convince the powerful. I’m writing to wake up the aware. To give language to the skepticism you feel. To show that transparency without accountability is just another form of control.


Sources:
1 - reallifemag.com | 2 - www.goodreads.com | 3 - thewestendnews.com | 4 - phys.org | 5 - www.nbcnews.com


@genartmind
