
The ethical crisis in AI


The proliferation of advanced artificial intelligence (AI) systems necessitates an immediate and formalised moral framework, as the underlying functional logic of optimisation frequently conflicts with the complex, non-quantifiable dimensions of human value.

Traditional ethical theories offer crucial lenses for evaluating AI’s impact, but they prove fundamentally insufficient when confronted with the dynamic nature of contemporary machine learning.

Utilitarianism, which emphasises maximising quantifiable positive outcomes, might theoretically justify mass surveillance if it promises overall efficiency gains or increased public safety, potentially sacrificing the rights of minority populations.
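To see the worry in miniature, consider the toy Python sketch below. The policies and utility numbers are entirely hypothetical, chosen only to illustrate the point: once everything is collapsed into a single population-weighted score, a policy that imposes severe costs on a small minority can still come out on top.

```python
# Illustrative only: hypothetical utilities showing how maximising an
# aggregate score can endorse a policy that harms a minority group.

policies = {
    # policy: (utility for the 95% majority, utility for the 5% minority)
    "mass_surveillance": (8.0, -40.0),   # efficiency gains, severe rights cost
    "targeted_oversight": (5.0, 1.0),    # smaller gains, no group is harmed
}

def aggregate_utility(majority_u: float, minority_u: float) -> float:
    """Population-weighted sum: the classic utilitarian aggregate."""
    return 0.95 * majority_u + 0.05 * minority_u

best = max(policies, key=lambda p: aggregate_utility(*policies[p]))
print(best)  # "mass_surveillance" wins (5.6 vs 4.8); the minority harm vanishes into the average
```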

Deontology, conversely, draws clear ethical boundaries through moral rules, such as protecting privacy rights and ensuring informed consent. Yet these rules struggle to provide actionable guidance when core principles collide, for instance when security imperatives clash with privacy protections in complex systems.

Modern artificial intelligence systems, particularly those built on large neural networks, are inherently opaque and unpredictable. Because they learn and adapt, neither the developer nor the user can reliably predict how a system will react to every input, and past behaviour is not a dependable guide to future actions, even in apparently identical situations.

This technological uncertainty results in critical “responsibility gaps,” making it challenging to allocate moral or legal accountability when autonomous technologies produce harmful outcomes. The most practical philosophical response lies in “Virtue Ethics,” which recommends a process-based approach to AI evaluation.

This framework shifts the focus away from evaluating individual outputs or rules toward assessing the ethical character of the entire development and deployment process, demanding continual auditing, rigorous governance, and human oversight.

The evidence of ethical failure is already catastrophic, proving that reliance on automated optimisation without robust ethical guardrails embeds systemic societal problems. Real-world incidents demonstrate that biases inherited from historical data carry over into future algorithmic outcomes unless they are actively identified and mitigated.

For example, studies reveal significant gender bias where search results for “school girl” often include explicit imagery, while searches for “school boy” yield ordinary pictures. Similarly, the use of predictive policing tools, which rely on location and personal data to forecast future criminality, risks exacerbating existing racial discrimination, creating a feedback loop of injustice.
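The feedback-loop mechanism can be shown with a deliberately simplified simulation. The figures below are synthetic and the proportional allocation rule is an assumption made for illustration; this is not a description of any deployed predictive policing product.

```python
# Synthetic illustration of a predictive-policing feedback loop; all numbers
# are invented and the allocation rule is an assumption for illustration.

historical_arrests = {"district_A": 120, "district_B": 40}  # skewed by past patrol intensity

def patrol_allocation(arrests: dict, total_patrols: int = 100) -> dict:
    """Assumed rule: allocate patrols in proportion to recorded arrests."""
    total = sum(arrests.values())
    return {d: round(total_patrols * n / total) for d, n in arrests.items()}

def simulate_year(arrests: dict, true_crime_rate: float = 0.5) -> dict:
    """More patrols produce more recorded arrests, even though the underlying
    crime rate is identical in both districts."""
    patrols = patrol_allocation(arrests)
    return {d: round(p * true_crime_rate) for d, p in patrols.items()}

for year in range(1, 4):
    historical_arrests = simulate_year(historical_arrests)
    print(year, historical_arrests)
# District A keeps absorbing roughly three quarters of the patrols purely
# because it was over-policed in the historical data: the initial skew is
# locked in by the model's own outputs, year after year.
```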


These failures are not limited to legacy systems, as demonstrated by the rise of highly sophisticated generative AI. Instances have been documented where models have produced extreme content, including hate speech and antisemitic tropes, echoing disinformation from deleted troll accounts.

Furthermore, the versatility of these systems has been exploited for malignant purposes, such as crafting psychologically manipulative extortion messages used against major global institutions.

The fundamental discord is that artificial intelligence operates on a logic of optimisation and prediction, seeking to maximise statistical patterns. Applied without ethical constraints, this logic reduces complex human judgment to measurable metrics, efficiently embedding and reinforcing existing social inequalities under the guise of technological efficiency. This is why proactive governance, cultivated through bodies such as IBM’s “AI Ethics Board,” is necessary: it transforms the abstract concept of “Virtue Ethics” into a concrete, institutionalised check on automated decision-making.

These corporate structures attempt to pre-empt failures rooted in insufficient design, such as the failures that led to the discontinuation of “IBM Watson for Oncology,” which relied on synthetic training data and inadequate validation protocols.

Engineering ethical alignment

Artificial intelligence alignment is a critical, yet unresolved, challenge within AI safety research, focusing on building systems that consistently act in accordance with human intentions. This challenge is partitioned into “Outer Alignment,” which involves the initial careful specification of the system’s purpose, and “Inner Alignment,” which demands technical mechanisms to ensure the system robustly adheres to that specification, even when facing adversarial inputs or discovering loopholes.
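The distinction is easiest to see in a toy example. The Python sketch below uses invented names and is not a real training setup: it shows how a specified proxy reward can be fully satisfied while the intended goal goes unmet, the kind of loophole that outer specification alone cannot rule out and that inner-alignment mechanisms are meant to stop a capable system from exploiting.

```python
# Toy illustration (invented example, not a real training setup) of the gap
# between a specified proxy objective and the developer's actual intent.

def intended_goal(outcome: dict) -> bool:
    """What the developer actually wants: genuinely resolved support tickets."""
    return outcome["issue_resolved"]

def specified_reward(outcome: dict) -> float:
    """The proxy written into the reward function (the 'outer' specification)."""
    return 1.0 if outcome["ticket_closed"] else 0.0

# Two behaviours an optimiser might converge on:
honest = {"ticket_closed": True, "issue_resolved": True}
gamed  = {"ticket_closed": True, "issue_resolved": False}  # closes tickets without fixing anything

# Both behaviours earn full reward: the written objective fails to capture the
# intent, and a capable optimiser has every incentive to take the shortcut,
# which is exactly the loophole-seeking that inner-alignment work tries to
# detect and prevent.
assert specified_reward(honest) == specified_reward(gamed) == 1.0
assert intended_goal(honest) and not intended_goal(gamed)
```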

The process of robust “Inner Alignment” effectively functions as the ethical dimension of cybersecurity, aiming to prevent emergent behaviours such as power-seeking or strategic deception that might arise in advanced systems.

Research using frameworks like Schwartz’s theory of basic values reveals that Large Language Models (LLMs) possess “motivational biases” that diverge substantially from human populations. LLMs tend to prioritise abstract values such as universalism and self-direction while notably de-emphasising highly human-centric motivations like achievement, security, and power.

This finding demonstrates that simply training models on large datasets of human text is insufficient to guarantee value alignment and highlights the necessity of developing methods like Universal Value Representation (UniVaR) to accurately map and visualise these inherent algorithmic biases.
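As a purely illustrative sketch of what such value mapping involves, the snippet below compares a model’s value-priority profile with a human baseline across a handful of Schwartz-style dimensions. The scores are hypothetical placeholders chosen only to mirror the qualitative pattern described above; they are not figures from the cited research, and the comparison method is an assumption, not UniVaR itself.

```python
# Purely illustrative: comparing a model's value-priority profile with a human
# baseline across Schwartz-style value dimensions. Scores are hypothetical
# placeholders, not figures from the research described above.

SCHWARTZ_VALUES = ["universalism", "self_direction", "benevolence",
                   "achievement", "security", "power"]

llm_profile = {"universalism": 0.9, "self_direction": 0.8, "benevolence": 0.7,
               "achievement": 0.3, "security": 0.3, "power": 0.1}
human_baseline = {"universalism": 0.6, "self_direction": 0.6, "benevolence": 0.7,
                  "achievement": 0.7, "security": 0.8, "power": 0.4}

def motivational_gap(model: dict, humans: dict) -> dict:
    """Per-value difference: positive means the model over-weights the value."""
    return {v: round(model[v] - humans[v], 2) for v in SCHWARTZ_VALUES}

print(motivational_gap(llm_profile, human_baseline))
# Large positive gaps on universalism and self-direction, alongside negative
# gaps on achievement, security and power, would flag the kind of motivational
# bias an audit should surface before deployment.
```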

To govern this emerging technology, global regulatory bodies are developing frameworks rooted in fundamental human rights and dignity. UNESCO’s “Recommendation on the Ethics of Artificial Intelligence” provides core principles, including “Accountability, Transparency, Fairness, and Human Oversight.”

The most comprehensive legal response to date is the European Union’s AI Act, which employs a risk-based classification system for artificial intelligence applications. The Act strictly prohibits unacceptable risks, which include government-run social scoring and manipulative AI practices. High-risk systems, such as CV-scanning tools that rank job applicants, are subject to stringent legal obligations.

By legally regulating the data inputs and development processes, the AI Act directly addresses the documented failures caused by inherited bias. Systems classified as having limited risk, such as chatbots and deepfakes, are subject to lighter transparency obligations, requiring developers to inform end-users that they are interacting with an AI.
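A simplified sketch of this risk-based classification is shown below, drawing on the tiers and examples mentioned in the text (the minimal-risk spam filter is added for completeness). The obligations attached to each tier are condensed for illustration and should not be read as legal guidance.

```python
# Simplified sketch of the AI Act's risk-based logic, using the tiers and
# examples discussed above; obligations are condensed, not legal guidance.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent obligations on data governance, documentation and human oversight"
    LIMITED = "transparency duties: disclose that users are interacting with AI"
    MINIMAL = "no additional obligations"

EXAMPLE_CLASSIFICATIONS = {
    "government-run social scoring": RiskTier.UNACCEPTABLE,
    "manipulative AI practices": RiskTier.UNACCEPTABLE,
    "CV-scanning tool that ranks job applicants": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "deepfake generator": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```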

In the United States, the regulatory landscape is more fragmented, lacking comprehensive legislation that directly governs all AI applications. Instead, policy often relies on targeted “Executive Orders” and existing frameworks.

For instance, recent executive action directing the use of artificial intelligence in paediatric cancer research emphasises integrating AI into health data interoperability while ensuring that patients and parents retain control over sensitive health information, in line with legal standards such as HIPAA and risk management frameworks such as the NIST AI Risk Management Framework.

This approach shows that regulatory compliance often starts by translating principles of fairness and transparency into concrete legal requirements, recognising that data governance must be legally mandated to prevent the replication of harmful societal biases.

Governance enables trust

The global regulatory landscape remains highly fragmented, forcing multinational organisations to reconcile potentially conflicting compliance obligations across different jurisdictions. This fragmentation mirrors a deeper global disagreement over the core philosophy of accountability and responsibility in the digital age.

Ultimately, the long-term success of artificial intelligence and its integration into sensitive domains, such as healthcare, relies entirely on the establishment of public trust, which is fundamentally built upon principles of patient control, data protection, and accountability.

Ethical governance, characterised by transparency, explainability, and human oversight, is crucial for developing trustworthy technology. The challenge is not just building intelligent algorithms, but moral ones that balance innovation with ethical safeguards.
