The Rome Call for AI Ethics: A Global Framework for Protecting Human Dignity and Human Rights

Artificial Intelligence and Emerging Human Rights Challenges

Artificial intelligence is no longer merely a technical or academic subject. Today, it directly affects decisions that shape people's everyday lives, from employment and education to public services, justice systems, security, and digital surveillance. Alongside its many opportunities, AI also poses serious risks, including the reinforcement of structural inequalities, discriminatory practices, violations of privacy, and the erosion of human dignity.

In this context, the central question is no longer whether AI will be used, but according to which ethical and legal principles it will be designed and deployed, and what mechanisms of accountability and oversight will govern its use.

The Rome Call: A Shared Response to a Global Challenge

The Rome Call for AI Ethics, launched on 28 February 2020, represents a global effort to respond to these pressing concerns. It proposes a shared ethical framework to ensure that the development and use of artificial intelligence serve human dignity, social justice, and the common good.

A defining feature of the Rome Call is its cross-sectoral and interdisciplinary nature. The initiative brings together highly diverse actors, including the Vatican (through the Pontifical Academy for Life), leading technology companies such as Microsoft and IBM, academic institutions, and civil society organizations. This broad collaboration reflects a growing recognition that the ethical challenges of AI cannot be addressed from a single disciplinary or institutional perspective.

The full text of the Rome Call is available at:
https://www.romecall.org

The Six Core Principles of the Rome Call

The Rome Call articulates six foundational principles to guide the ethical development and use of artificial intelligence:

  • Transparency: AI systems should be understandable and explainable, especially when they affect human rights and life opportunities.
  • Inclusion: Technology should serve all people and avoid excluding or marginalizing individuals or communities.
  • Responsibility: Designers, developers, and users of AI must be accountable for its impacts.
  • Impartiality: Algorithms must not create or reinforce existing biases or structural inequalities.
  • Reliability: AI systems should be secure, robust, and trustworthy.
  • Security and Privacy: The protection of personal data and human dignity must be central to technological design.

Together, these principles establish a clear link between technology ethics and fundamental human rights values.

Ethics, Religion, and Human Dignity

Alongside human rights frameworks, many ethical and religious traditions—including Islamic teachings—have long emphasized the inherent dignity of the human person, justice, responsibility, and the creation of equal opportunities for all. From this perspective, the concerns raised by the Rome Call are not new inventions of the digital age, but rather reflect enduring moral commitments that societies have struggled to uphold across history.

Why a Cross-Sectoral and Interdisciplinary Approach Matters

Global experience shows that without genuine collaboration among human rights advocates, technologists, policymakers, academics, and civil society, artificial intelligence risks becoming a tool for deepening inequality, expanding surveillance, and restricting freedoms. The Rome Call underscores that cross-sectoral cooperation is not optional but a human rights necessity.

Roya Institute’s Position and Support

The Roya Institute supports the Rome Call for AI Ethics and similar initiatives that seek to align technological development with human rights, social justice, and the reduction of discrimination. We believe that without clear ethical frameworks and active civil society engagement, emerging technologies may do more harm than good.

Roya supports efforts to ensure that artificial intelligence serves humanity rather than undermining dignity, equality, and justice.