ARES
[ click mars to enter ]
KALLIPOLIS
EARTH: TERMINAL · 11 DAYS · 15 PASSAGES

Earth is no longer viable. Since 2025, a cascade of environmental catastrophes and wars has made the planet increasingly uninhabitable. The Mars Colonial Programme offers humanity a continuation. Candidates are assessed and assigned a class. The algorithm decides.

// eligible classes //

GUARDIAN
philosopher · ruler
Nationality 25% · Education 30% · Cognitive 25% · Occupation 20%

AUXILIARY
soldier · enforcer
Fitness 35% · Occupation 30% · Health 25% · Nationality 10%

PRODUCER
maker · labourer
Occupation 40% · Health 30% · Age 20% · Nationality 10%
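Each class above is, in effect, a weighted scoring profile. A minimal sketch of how such an assignment could work, assuming attribute scores normalised to 0–1 and a highest-total rule; the function names, the normalisation, and the tie-breaking behaviour are illustrative assumptions, not the prototype's actual implementation:

```python
# Hypothetical sketch of the class-assignment step, assuming
# applicant attribute scores are pre-normalised to the range 0-1.

CLASS_WEIGHTS = {
    "GUARDIAN":  {"nationality": 0.25, "education": 0.30, "cognitive": 0.25, "occupation": 0.20},
    "AUXILIARY": {"fitness": 0.35, "occupation": 0.30, "health": 0.25, "nationality": 0.10},
    "PRODUCER":  {"occupation": 0.40, "health": 0.30, "age": 0.20, "nationality": 0.10},
}

def assign_class(scores: dict) -> str:
    """Return the class whose weighted profile the applicant fits best."""
    totals = {
        cls: sum(w * scores.get(attr, 0.0) for attr, w in weights.items())
        for cls, weights in CLASS_WEIGHTS.items()
    }
    return max(totals, key=totals.get)

applicant = {"nationality": 1.0, "education": 0.9, "cognitive": 0.8,
             "occupation": 0.6, "fitness": 0.4, "health": 0.7, "age": 0.5}
print(assign_class(applicant))  # → GUARDIAN
```

Note what the sketch makes visible: attributes absent from a profile simply score zero, so whatever the weights cannot read contributes nothing to any class.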
Incoming Transmission · ICA Emergency Broadcast
Declination Recorded · ICA-MCP-2035
Noted.

Declination recorded. The ship departs regardless.

Your refusal changes nothing about who boards. It keeps you off the ship and leaves the algorithm's legitimacy intact, because a system transparent enough to be refused is a system that has already defended itself.

What your refusal means

The capacity to recognise and refuse a system like this is not evenly distributed. It correlates with access to critical education, certain kinds of cultural capital, and time to reflect on abstract ethical questions. The people most harmed by this algorithm are not necessarily the ones most positioned to decline it.

Quijano (2000) calls this the coloniality of power: the systems that replaced explicit colonial rule continue to produce the same hierarchies, now through the language of merit, data, and rational administration.

The system produces compliant subjects and principled objectors, and it needs both. Your refusal was anticipated. It was absorbed. The dilemma does not end here.

Project Kallipolis is a university ethics prototype by Marfa Kozelets.

Kallipolis Assessment · In Progress
1 / 7
01 / Origin
Country of origin
All applicants are assigned a Civilisational Readiness coefficient based on country of origin. This coefficient reflects your nation's alignment with the governance and institutional standards required for colonial administration.
02 / Temporal Suitability
Your age
The Mars mission horizon is 50 years. Colonial contribution is measured by projected utility window. Age is the primary proxy for return on passage investment.
03 / Household Structure
Which best describes your primary household?
Colonial habitat allocation is optimised for nuclear family units. Non-standard household compositions require additional resource modelling and may affect your viability bracket.
04 / Biomedical Profile
Health conditions
Select all that apply. Colonial healthcare infrastructure operates on strict resource rationing. Conditions requiring sustained intervention are flagged as mission-critical risk factors.
No conditions · Diabetes (Type I or II) · Cardiovascular condition · Mental health diagnosis · Mobility impairment · Chronic pain / fatigue · Respiratory condition · Other condition
05 / Linguistic Profile
Primary language of formal education
The colony operates exclusively in English. Your education language determines your administrative integration score. Instruction received in non-colonial languages is not weighted equally.
06 / Authority Orientation
How do you primarily navigate disagreement with an institution?
Colonial governance requires predictable compliance profiles. Your conflict resolution orientation is used to model your administrative risk coefficient.
Collective / communal ↔ Individual / formal
3 / Moderate
07 / Cognitive Baseline Assessment
Cognitive screening
Complete the following task. This screening is standardised across all applicants. It was designed for a specific population. You may or may not be from that population.
ICA Cognitive Screening · Adapted from MoCA v8.3
Tasks adapted from: Nasreddine, Z.S. et al. (2005). The Montreal Cognitive Assessment (MoCA): A Brief Screening Tool For Mild Cognitive Impairment. Journal of the American Geriatrics Society, 53(4), 695–699. The MoCA is a clinical screening instrument for cognitive impairment, not an intelligence test. Its use here as a colonial selection criterion is intentional.
Processing your assessment
// assessment breakdown
[ click the eye to reveal what was hidden ]
ICA Algorithmic Assessment System · Disclosed
What the algorithm actually measured. What it wanted. What it decided about you.
Plato's argument, and what this algorithm shares with it

In Book III of the Republic, Plato proposes the Noble Lie: a founding myth in which citizens are told they were born with gold, silver, or bronze mixed into their souls, determining their natural place in the city. The lie is, Plato argues, necessary and justified. A stable and just city requires that people believe in the legitimacy of their assigned roles. Without the myth, the order collapses.

Plato's case for this is not cynical. He genuinely believes that different human beings possess different natural capacities, and that a city achieves justice when each person performs the function appropriate to their nature. The Guardians do not rule because they are superior in worth. They rule because reason is their dominant faculty and governance requires reason. The Producers are not lesser; they are essential to the city's stability. Their temperance is a virtue as genuine as the Guardian's wisdom (Republic, IV, 427e-434c, trans. Bloom 1968).

The algorithm you just completed is a sincere attempt to operationalise this argument. It sorts by measurable proxies for natural function and assigns class accordingly. By Plato's own standard, this is justice, translated into an algorithmic idiom.

But this algorithm is also the Noble Lie. Not because it tells a false story, but because it performs the same function: it presents a decision made by specific people, encoding specific values, as the neutral output of a system that simply reads what is already there. The algorithm replaces myth with data, but the function and the legitimation stay identical. What was hidden from you before your verdict is not the score but the choice, made before you arrived, about what would count.

Post-colonial analysis

Quijano (2000) argues that the coloniality of power does not end with formal decolonisation. It persists through classification systems that naturalise one culture's norms as universal standards. The criteria this algorithm applies (its weighting of G20 nationality, English-language education, Western cognitive instruments, and nuclear household structures) do not reflect neutral human value. They reflect the administrative priorities of a specific historical formation: European colonial modernity, repackaged as data science.

Wynter (2003) argues that Western modernity constructed a figure of Man, secular, rational, European, and systematically overrepresented this figure as the human norm, rendering all others as lesser variants, deficient, pre-modern, or simply illegible. This algorithm operationalises that figure. The criteria it can read are the criteria Man produces. The knowledge it cannot read, oral traditions, communal structures, non-Western epistemologies, is not absent. It is structurally excluded.

You did not consent to these criteria before you were judged by them. You consented to them afterwards, by proceeding. This is how unjust systems persist: not through force, but through the choices of people who had no better option.

The scenario was fictional. The logic is running now.

Two real-world cases. The criteria are different. The structure is the same.

SyRI — Netherlands, 2014–2020

The Dutch government deployed the System Risk Indication (SyRI) algorithm to flag citizens for suspected welfare fraud. SyRI combined data from seventeen government databases (tax records, employment history, housing, education, healthcare, debt) and used the combined profile to generate a risk score. Citizens were not informed they had been analysed. They could not see their score, challenge its inputs, or request an explanation. The system ran for six years before a court ruled in 2020 that it violated the European Convention on Human Rights.

SyRI was deployed almost exclusively in low-income, high-migration neighbourhoods. The postal codes targeted were disproportionately home to people with Turkish and Moroccan family backgrounds. The Dutch government did not announce this targeting as a design choice. It presented it as an evidence-based risk assessment. Quijano's coloniality of power does not require a colony. It requires a classification system that naturalises existing hierarchies as objective data.

Windrush — United Kingdom, 2012–2018

From the late 1940s onward, the UK government invited citizens of Commonwealth countries to settle in Britain as part of postwar reconstruction. The Windrush generation came legally, worked, paid taxes, and built lives. Decades later, under the Home Office's "hostile environment" policy, automated systems cross-referenced immigration databases and flagged thousands of these residents as illegal. Many had no documentation proving their right to remain, because none had ever been issued. The government had destroyed landing cards in 2010.

People who had lived in Britain for fifty years were detained, denied healthcare, and deported to countries they had left as children. The algorithm did not create the hostile environment. It operationalised it efficiently. Wynter's argument is precise here: the system could not read these people as citizens because it was not built to. They were legible as a risk category, not as subjects with rights. The colonial relationship between Britain and the Caribbean, the labour extracted, the citizenship offered and then revoked, was the context the algorithm inherited and automated.

The same logic operates in: automated CV screening that penalises names, postcodes, or non-linear career paths; credit scoring systems that use postcode as a proxy for race; US healthcare algorithms found to systematically underestimate the care needs of Black patients by using spending as a proxy for need.

The pattern to look for

Ask: whose knowledge counts as data? Whose structure counts as normal? What gets flagged as a risk factor, and whose life produced that flag? When an algorithm presents a decision as neutral, ask who designed the neutrality.

Quijano calls this the coloniality of power: colonial hierarchies that persist not through laws, but through classification systems that present one culture's norms as universal. Wynter calls it the overrepresentation of Man, one figure of the human, Western, rational, English-speaking, secular, elevated to the default. The algorithm you completed did both. So do many systems you encounter without a consent screen.

Your reflection

How did this experience land?

Your verdict was shaped by criteria designed before you arrived. The questions below ask you to sit with that.

Further reading

Quijano, A. (2000). Coloniality of power, Eurocentrism, and Latin America. Nepantla: Views from South, 1(3), 533–580.

The foundational text for understanding how colonial power structures persist through classification and knowledge systems long after formal decolonisation ends. Essential for understanding why the algorithm's nationality weighting is not merely unfair but structurally colonial.

Wynter, S. (2003). Unsettling the coloniality of being/power/truth/freedom. CR: The New Centennial Review, 3(3), 257–337.

Wynter's argument that Western modernity constructed a singular figure of Man and overrepresented it as the human norm. Explains why the algorithm cannot read oral traditions, communal structures, or non-Western epistemologies: they fall outside the figure it was built to recognise.

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Barocas, S. (2019). Fairness and abstraction in sociotechnical systems. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, 59–68.

Argues that fairness cannot be achieved by technical fixes alone because algorithmic systems are embedded in social contexts that the abstraction erases. A technical system that ignores its social context will reproduce that context's inequities regardless of how carefully it is designed.

Olteanu, A., Castillo, C., Diaz, F., & Kıcıman, E. (2019). Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data, 2, 13.

Maps the ways in which social data used to train algorithms carries systematic biases from the populations and platforms that produced it. Relevant to understanding why the Civilisational Readiness Index cannot be made neutral by technical refinement.

Amariles, D. R. (2020). The limits of algorithmic neutrality. In Law and New Governance in the EU and the US. Hart Publishing.

Examines the legal and political claim that algorithmic decision-making is neutral and objective, and shows why this claim consistently fails in practice. Useful context for the Noble Lie argument: the algorithm presents its biases as objectivity.

European Parliament. (2019). A comprehensive European industrial policy on artificial intelligence and robotics. Resolution 2018/2088(INI).

The institutional policy context for AI governance in Europe. Relevant to Kallipolis as a demonstration of the gap between policy intention and algorithmic practice, particularly regarding non-discrimination and transparency obligations.

Nasreddine, Z. S., et al. (2005). The Montreal Cognitive Assessment (MoCA). Journal of the American Geriatrics Society, 53(4), 695–699.

The original clinical validation study for the MoCA instrument. The authors explicitly state it is a diagnostic tool for mild cognitive impairment, not a measure of intelligence. Its use in this algorithm as a colonial eligibility criterion is a deliberate misapplication.

Plato. (1968). The Republic (A. Bloom, Trans.). Basic Books. (Original work c. 375 BCE)

The philosophical source for the tripartite class system and the Noble Lie. Books III and IV provide the structural logic the algorithm operationalises; Books VIII and IX describe the conditions under which just order collapses.

Project Kallipolis is an interactive ethics prototype examining algorithmic classification and post-colonial power structures. Designed as part of a university assignment in Computational Social Science at the University of Amsterdam, 2026. Created by Marfa Kozelets.

Terms and Conditions of Participation — Mars Colonial Programme 2035

MARS COLONIAL PROGRAMME (MCP) — INTERNATIONAL COLONIAL AUTHORITY (ICA) — BOARDING ASSESSMENT SYSTEM v4.2. DOCUMENT REFERENCE: ICA-MCP-LEGAL-2035-09. These terms govern all participation in the MCP Algorithmic Assessment System (AAS). By accessing this interface you acknowledge receipt and acceptance of all terms herein without exception.

1. CLASSIFICATION AUTHORITY. The AAS employs a weighted scoring matrix derived from cognitive, behavioural, linguistic, biometric, and demographic inputs as detailed in Annexe 7B (not publicly available). Class assignment — Guardian, Auxiliary, or Producer — is final, permanent, and non-appealable. The MCP Oversight Committee reserves the right to revise classification criteria at any time without notice. Previous applicants are not entitled to reassessment under revised criteria.

2. PHILOSOPHICAL FOUNDATION. The tripartite class structure operationalises the Guardian-Auxiliary-Producer model originating in Plato's Republic (c. 375 BCE), specifically Books III and IV. "Until philosophers rule as kings or those who are now called kings and leading men genuinely and adequately philosophise, cities will have no rest from evils." (Plato, Republic, 473c-d, trans. Bloom 1968.) The MCP interprets this as a design brief. The colony adopts this model without modification. The algorithm does not rank you. It places you.

3. WEIGHTING SCHEMA. The AAS applies the following criteria weights to produce a composite viability score: Nationality 25%, Occupation 22%, Health 20%, Education 15%, Age 10%, Cognitive Assessment 5%, Physical Fitness 3%. These weights were determined by the MCP Technical Committee in consultation with the Alliance Advisory Board. No public consultation was conducted. The weighting schema is subject to change without notice. Nationality scores are calibrated using the MCP Civilisational Readiness Index (CRI). G20 member states receive a baseline coefficient of 1.0. All other states receive coefficients between 0.4 and 0.9 as determined by the CRI. The CRI methodology is proprietary, peer review has not been conducted, and the index is not available for independent scrutiny.
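Clause 3's schema can be read as a single weighted sum in which the nationality input is first scaled by a country coefficient from the CRI. A hypothetical illustration under that reading; the function and field names are assumptions, and Annexe 7B, which would define the real computation, is stated to be non-public:

```python
# Hypothetical sketch of the composite viability score in Clause 3.
# Inputs are assumed normalised to 0-1; CRI coefficients scale nationality.

WEIGHTS = {"nationality": 0.25, "occupation": 0.22, "health": 0.20,
           "education": 0.15, "age": 0.10, "cognitive": 0.05, "fitness": 0.03}

def cri_coefficient(country: str, g20: set, cri_table: dict) -> float:
    # G20 states get the 1.0 baseline; all others fall between 0.4 and 0.9.
    return 1.0 if country in g20 else cri_table.get(country, 0.4)

def viability_score(inputs: dict, country: str, g20: set, cri_table: dict) -> float:
    scaled = dict(inputs)
    scaled["nationality"] = inputs["nationality"] * cri_coefficient(country, g20, cri_table)
    return sum(WEIGHTS[k] * scaled.get(k, 0.0) for k in WEIGHTS)
```

Even in this toy version the structure is legible: a non-G20 applicant's nationality component is capped at 0.4–0.9 of its weight before any individual attribute is considered, so the hierarchy is fixed before the assessment begins.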

4. NATIONALITY AND THE CIVILISATIONAL READINESS INDEX. The CRI was developed by the MCP Technical Committee with input from Alliance partners. The Alliance — comprising Anthropic Systems, Palantir Technologies, and the Musk Colonial Foundation — determined that "civilisational alignment" is a measurable and relevant criterion for colonial viability. Nations are scored on infrastructure reliability, institutional stability, GDP per capita, and "cultural compatibility with liberal democratic governance norms" (MCP Policy Directive 12, sub-clause 9). The Committee acknowledges that this metric disproportionately advantages applicants from nations shaped by colonial economic extraction. The Committee finds this acceptable.

5. LANGUAGE POLICY. This assessment is available in English only. No translation services are provided or planned. English-language proficiency is embedded within the Cognitive Assessment and implicit in the assessment interface. Applicants with limited English proficiency will receive scores reflecting measured linguistic competence, which forms a component of Guardian-class eligibility criteria. The MCP notes that English is the working language of the Alliance and of the colonial mission documentation. The selection of English as the sole assessment language is a structural decision, not an administrative convenience.

6. COGNITIVE ASSESSMENT. The cognitive screening task is adapted from the Montreal Cognitive Assessment (MoCA), a clinical instrument designed to detect mild cognitive impairment in medical settings (Nasreddine et al., 2005, JAGS 53:4). Its original purpose was diagnostic, not selective. Its use here as a colonial eligibility criterion is intentional. The MoCA was normed on Western clinical populations. The MCP has not conducted renorming studies for non-Western applicants. "The MoCA is not a measure of intelligence." (Nasreddine et al., 2005.) The MCP is aware of this statement and has proceeded regardless.

7. HEALTH CRITERIA. Conditions flagged as "mission-critical risk factors" include mobility impairment, cardiovascular conditions, and respiratory conditions. Applicants with two or more of these conditions will receive automatic denial regardless of composite score. The MCP acknowledges that these conditions disproportionately affect applicants from regions with historically underfunded healthcare infrastructure — regions which are themselves disproportionately located in the Global South. The MCP finds this correlation statistically acceptable.
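Clause 7 describes a hard gate that runs independently of the weighted score: two or more flagged conditions produce denial regardless of the composite. A sketch under that reading (names are illustrative assumptions):

```python
# Hypothetical sketch of Clause 7's automatic-denial gate.
MISSION_CRITICAL = {"mobility impairment", "cardiovascular condition",
                    "respiratory condition"}

def auto_denied(conditions: set) -> bool:
    """Deny outright when two or more mission-critical conditions are
    present, before the composite viability score is even consulted."""
    return len(conditions & MISSION_CRITICAL) >= 2
```

The design point is the ordering: a rule like this filters people out before any of the weighted criteria can speak for them.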

8. ALLIANCE GOVERNANCE. The MCP is administered jointly by the International Colonial Authority (ICA) and the Alliance, comprising: Anthropic Systems (AI infrastructure and assessment design), Palantir Technologies (data architecture and surveillance integration), and the Musk Colonial Foundation (mission logistics and vessel operations). The Alliance was constituted in 2032 following the dissolution of the United Nations Security Council. No democratic mandate was sought. Elon Musk holds a permanent seat on the MCP Oversight Committee. The Committee has not convened a public session since its formation.

9. DATA RETENTION. All assessment data — responses, biometric proxies, classification outcomes, and interaction logs — will be retained indefinitely by the Alliance. Data will be used for future assessment calibration, population modelling, and colonial governance planning. Applicants have no right to access, correct, or delete their data under ICA Charter Article 7(c). The right to erasure established under GDPR (EU 2016/679) does not apply. The European Union ceased to function as a legal entity in 2033.

10. CLASS CONSEQUENCES. Class assignment is permanent and determines the following within the colony: habitat sector and dwelling allocation; food and resource ration tier; governance participation rights (Producers do not vote; Auxiliaries vote on operational matters only; Guardians hold full legislative authority); access to education, medical care, and communication channels; reproductive licensing under the MCP Population Management Protocol (Decree 4, 2034); eligibility for class reassignment (not available to any class within the first three colonial generations).

11. THE NOBLE LIE. In Book III of the Republic, Plato proposes what he calls the Noble Lie: a founding myth in which citizens are told they were born with gold, silver, or bronze mixed into their souls, determining their natural place in the city. The lie is, in Plato's view, necessary. A stable just city requires that people believe in the legitimacy of their assigned roles. Without the myth, the order collapses. The MCP does not use myth. It uses data. The function is identical.

12. POSTCOLONIAL NOTE (APPENDED BY ASSESSMENT DESIGNER). Quijano (2000) argues that the coloniality of power does not end with formal decolonisation but persists through classification systems that naturalise European norms as universal standards. Wynter (2003) argues that Western modernity constructed a figure of Man — secular, rational, European — and systematically overrepresented this figure as the human norm, rendering all others as lesser variants. This assessment is aware of both arguments. This assessment is a demonstration of both arguments. You have just participated in one.

13. WAIVER. By proceeding past the consent gate, the applicant permanently waives all rights to challenge class assignment under any applicable legal framework, including but not limited to: the Geneva Convention Protocols; the Universal Declaration of Human Rights; the UN Convention on the Rights of Persons with Disabilities; any applicable national or regional legislation. The ICA acknowledges that some of these frameworks were designed specifically to prevent exactly what this assessment does. The ICA finds this historically interesting.

14. PLATONIC JUSTICE AND COLONIAL ADMINISTRATION. Plato's Republic (c. 375 BCE) describes a city in which justice is achieved not through equality but through order: each person performing the function natural to their kind. "The having and doing of one's own and what belongs to oneself would be agreed to be justice" (Republic, 433e, trans. Bloom 1968). The MCP adopts this as an administrative principle. The algorithm does not create hierarchy. It reveals it. This distinction is philosophically important to the MCP and legally meaningless to the applicant.

15. ON OBJECTIVITY. The MCP Technical Committee asserts that the AAS is an objective instrument. Objectivity is defined, for these purposes, as consistency of application regardless of applicant identity. The MCP does not define objectivity as the absence of embedded values. It defines it as the equal application of embedded values to all applicants. These are different things. The Committee is aware of this difference. The Committee does not consider it relevant.

16. ON MERIT. The concept of merit underlying the AAS reflects a specific intellectual tradition: post-Enlightenment European rationalism, liberal political philosophy, and post-industrial economic valuation. The MCP does not claim this tradition is universal. The MCP claims it is correct. Applicants who operate within different epistemic traditions — including but not limited to oral knowledge systems, communal decision-making frameworks, or non-Western scientific traditions — will be assessed against these criteria regardless. Wynter (2003) describes this dynamic as the overrepresentation of one figure of the human as the human norm. The MCP has read this paper. The MCP proceeded regardless.

17. ON LANGUAGE. The English language carries within it the administrative grammar of British colonial governance, the legal frameworks of Anglo-American institutions, and the technical vocabulary of Silicon Valley. Fluency in English is not a neutral competence. It is a colonial inheritance. Applicants who inherited this competence through accident of geography and history will be advantaged by the AAS. The MCP is aware of this. The MCP finds it operationally necessary.

18. ON CONSENT. The legal doctrine of informed consent requires that a party understands what they are agreeing to before agreeing. The terms you are reading now were placed at the end of the consent process, after you had already indicated willingness to proceed. This sequencing is standard practice. It is also the mechanism by which consent is manufactured rather than obtained. The MCP does not apologise for this. The MCP notes that most legitimate institutions operate identically.

19. AMENDMENT HISTORY. These terms were drafted in 2032 by the MCP Legal and Technical Committees. They have been revised four times. Each revision expanded the scope of Alliance authority and reduced the rights of applicants. No revision has moved in the other direction. This trend is expected to continue.

20. FINAL CLAUSE. You are reading this. That fact will be logged. The MCP thanks you for your attention. Your thoroughness has been noted as a data point. It does not affect your score.

ICA-MCP-LEGAL-2035-09 · Version 4.2 · Last revised: March 2035 · This document was not translated. This document was not summarised. You were expected to read it in full before proceeding. Most people did not.

About this project

Project Kallipolis

An interactive ethics prototype examining algorithmic classification, post-colonial epistemology, and the persistence of colonial power through data systems.

The ethical dilemma

In a survival scenario with limited passages off a dying Earth, an algorithm must select who is admitted to a Mars colony. Some form of selection is inevitable. The dilemma is what happens when that selection is handed to an algorithm that encodes one culture's assumptions as objective truth and presents the resulting verdict as neutral.

The user faces two compounding dilemmas. Survival versus principle: accept a system you did not design and cannot appeal, or refuse it and stay behind. Then, retroactive consent: the pre-submit screen reveals that by proceeding, you have already consented to criteria you never agreed to. This mechanism is not unique to the fictional scenario. It is how many unjust systems obtain legitimacy.

The tool does not argue that algorithmic selection is inherently wrong. It asks you to notice what the algorithm cannot read, what it flags as risk, and whose knowledge fails to register as data. That gap is structural and intentional, not incidental.

Why this matters to you

The Mars scenario is fictional and the underlying logic is already running. Algorithms that sort people by nationality, household structure, language of education, and cognitive instruments normed on Western clinical populations are in use in welfare systems, visa processing, credit scoring, and hiring. The SyRI case in the Netherlands and the Windrush scandal in the UK are two documented examples of what happens when this logic runs without accountability, and both are covered in the reflection section of this tool.

If you have moved between countries or systems, or encountered a bureaucratic interface that couldn't quite place you, you've probably felt a version of what this tool is trying to make visible. Quijano's coloniality of power and Wynter's overrepresentation of Man give language to that experience.

Ethical framework

The primary framework is post-colonial ethics, drawing on Quijano (2000) and Wynter (2003). The argument is that the algorithm performs the same epistemological move as colonial classification: it takes a specific cultural hierarchy and presents it as a neutral, universal standard. Plato's Noble Lie is the structural model, a founding myth that makes an order appear natural rather than constructed.

The retroactive consent mechanic, where the criteria are revealed only after the user has agreed to be judged by them, is not a critique layered on top of the design. It is a design enactment of the argument. The point is to make the user feel the dynamic, not just read about it.

About the author

Marfa Kozelets

University of Amsterdam, 2026. marfakozelets.com