
Human Compatibility Framework©

IR5’s Human Compatibility Framework is a practical tool for evaluating whether a system remains livable for humans over time. It is built for situations where something can “work” on paper and still quietly harm the people inside it. The framework focuses on four basics: whether humans keep agency, whether they can understand what is happening, whether responsibility stays clear, and whether human capability is preserved rather than slowly eroded.

 

The framework has two layers. The first is Orientation and Calibration. This is where we define what we mean by compatibility, set the scope of what we are evaluating, and agree on the principles that guide evaluation.

 

The second is Evaluation and Application. This is where the framework is actually applied to a project or system. Inside the Evaluation and Application layer, the work happens through three core mechanisms. First, refusal patterns flag designs that are incompatible by structure, not by accident, such as dependency and capability atrophy, manipulation and exploitative engagement, loss of meaningful exit, and hidden power without accountability. Second, core compatibility dimensions structure the assessment, including comprehensibility, contextual coherence, emotional and experiential legibility, usability across human differences, ethical alignment, and human agency. Third, structured human judgment is used to interpret trade-offs, handle disagreement, and make assumptions explicit rather than hidden.
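
To make the shape of this work concrete, the sketch below shows one possible way an evaluation record could be represented in code. It is illustrative only and not part of the framework: the class names, field names, and example values are assumptions introduced here for clarity.

    # Illustrative sketch only: one possible data shape for a compatibility
    # evaluation record. Names and fields are assumptions for this example,
    # not definitions from the IR5 framework.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class JudgmentEntry:
        # Structured human judgment: reasoning, assumptions, and unresolved
        # disagreement are recorded explicitly rather than left implicit.
        evaluator: str
        reasoning: str
        assumptions: List[str]
        disagreements: List[str] = field(default_factory=list)

    @dataclass
    class DimensionAssessment:
        # One core compatibility dimension (for example, comprehensibility
        # or human agency) assessed in context, with supporting evidence.
        dimension: str
        finding: str              # for example "adequate", "at risk", "failing"
        evidence: List[str]

    @dataclass
    class CompatibilityEvaluation:
        # The three mechanisms side by side: refusal patterns flagged,
        # dimensions assessed, and human judgment made explicit.
        system_name: str
        refusal_patterns_present: List[str]
        dimensions: List[DimensionAssessment]
        judgments: List[JudgmentEntry]

A record along these lines keeps flagged patterns, dimension findings, and the reasoning behind them visible in one place; the structure any given team uses will differ.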

 

IR5 applies the framework through domain modules so it can be used in real contexts. We focus on five domains: Artificial Intelligence Systems, Cultural Institutions and Creative Projects, Human Development Health and Community Systems, Infrastructure and Large Scale Developments, and Institutional Governance and Decision Systems. These modules exist because the stakes, stakeholders, and risks differ by domain. The same framework applies, but what you look for, what counts as harm, and what counts as responsible recourse changes with context.

Framework Structure
Sections 1-3: Orientation and Calibration
Sections 4-10: Evaluation and Application
Sections 1-3: Orientation and Calibration
1. Purpose and Scope
  • The Problem

    Modern systems are optimized for speed, efficiency, growth, or control, but they often ignore what that optimization does to humans over time. Friction is treated as waste even when it protects judgment, learning, and responsibility. The result is quiet adaptation: dependence, confusion, reduced capability, and blurred accountability.

  • The Intent

    The framework helps teams ask a basic question early: does this system still make sense for humans over time? It is designed to surface trade-offs before they become normalized and hard to reverse. The goal is practical: preserve agency, understanding, accountability, and long-term capability as systems scale and settle into everyday life.

  • The Boundary

    The framework does not replace technical review, financial analysis, legal scrutiny, or artistic criticism. It focuses on the human-system interface and its consequences: who must adapt, who bears risk, who retains control, and whether outcomes are contestable. It also examines long-term effects such as dependence, erosion of responsibility, and capability atrophy.

2. What Human Compatibility Means
  • Compatibility as a Relationship

    Human compatibility is not a property of a system alone. It emerges from the relationship between the system, its context, and the people affected. The same intervention can be workable in one setting and harmful in another. Compatibility therefore requires asking: compatible for whom, under what conditions, within what power dynamics, and over what time horizon?

  • Beyond Functional Success

    A system can meet targets and still be incompatible. It may be efficient and scalable while producing confusion, hidden labor, dependence, or responsibility gaps. Functional success answers whether the system performs a task. Human compatibility asks what that performance does to people: whether they still understand, can intervene, and remain capable and accountable over time.

  • Why Compatibility Cannot Be Fully Automated

    Many compatibility signals are qualitative, contextual, and slow to emerge. Trust, coercion, fatigue, dignity, and accountability do not reliably appear in dashboards. Automated indicators can help, but they cannot settle what is acceptable for humans to live with. The framework therefore relies on structured human judgment to interpret trade-offs and make assumptions explicit.

3. Guiding Principles
  • Human-Centered Before System-Centered

    When system performance and human outcomes conflict, IR5 prioritizes human sustainability. This does not reject innovation. It rejects progress that depends on humans becoming less aware, less capable, or less responsible for the system to function. A human-centered evaluation asks where burden is pushed and whether the system adapts to human limits.

  • Context Matters More Than Abstraction

    Compatibility cannot be assessed in a vacuum. Institutional rules, cultural norms, time pressure, incentives, and power relations shape experience. A design that is acceptable when optional may be coercive when mandatory. The framework requires grounding evaluation in real contexts and realistic use, not ideal assumptions.

  • Preserve Human Agency and Responsibility

    Humans cannot be stripped of agency while remaining responsible for outcomes. If people must live with consequences, they must retain meaningful ability to understand, contest, and intervene. Responsibility must remain traceable to humans, even when systems are complex. Compatibility requires reversibility, escalation pathways, and roles that are real in practice.

Sections 4-10: Evaluation and Application
4. Non-Compatible Patterns
  • Refusal Power

    Some designs are incompatible by structure, not by accident. Refusal power marks boundaries where evaluation alone is insufficient. When a non-compatible pattern is present, the correct response is redesign, strong constraint, or non-deployment, not incremental tuning; the sketch at the end of this section illustrates this gating role. Refusal protects the framework from being used to launder harmful systems.

  • Dependency and Capability Atrophy

    A system is incompatible if it systematically weakens human capability as a condition of its usefulness. Assistance becomes harmful when it replaces understanding and judgment without preserving a path for skill and responsibility. Over time this creates dependence while accountability still rests on the people involved. Compatibility requires that humans remain able to act, decide, and recover without being trapped.

  • Exploitative Engagement and Manipulation

    A system is incompatible if it relies on exploiting attention, emotion, or compulsion to maintain participation. High engagement can reflect value, but it can also reflect engineered dependence and non-conscious nudging. Compatibility requires participation aligned with human intent and the ability to disengage without penalty. Retention at the expense of autonomy fails this boundary.

  • Irreversibility and Loss of Meaningful Exit

    A system is incompatible if people cannot contest outcomes, reverse effects, or exit without disproportionate harm. Loss of exit can be formal, such as no appeals, or practical, such as lock-in through social or economic pressure. Compatibility requires credible recourse, reversible pathways, and the ability to withdraw participation. Without exit, systems become gatekeepers.

  • Asymmetric Manipulation

    A system is incompatible when it uses an imbalance of information to steer human behavior toward outcomes people did not clearly choose. This includes hyper-personalized persuasion and covert influence below the level of awareness. Compatibility requires influence to be visible and contestable and rejects treating human psychology as a resource for extraction and control.

  • Hidden Authority

    A system is incompatible when it shapes outcomes while remaining invisible, unchallengeable, or unaccountable. Hidden authority operates through defaults, ranking, filtering, or opaque rules that narrow options without being seen. Compatibility requires relevant legibility: humans must be able to see where power sits, how it is exercised, and how it can be questioned.
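
    Taken together, the patterns above function as a structural gate rather than as factors to be scored and traded off. The sketch below is a rough illustration of that gating role under stated assumptions; the pattern labels, names, and return values are hypothetical, not artifacts of the framework, and deciding whether a pattern is actually present remains a matter of structured human judgment.

        # Illustrative sketch, not an IR5 artifact: the non-compatible
        # patterns treated as a gate. Labels and outcomes are assumptions
        # introduced for this example.
        NON_COMPATIBLE_PATTERNS = {
            "dependency and capability atrophy",
            "exploitative engagement and manipulation",
            "irreversibility and loss of meaningful exit",
            "asymmetric manipulation",
            "hidden authority",
        }

        def evaluation_path(patterns_present: list[str]) -> str:
            # Any structural pattern present ends the normal path: the
            # response is redesign, strong constraint, or non-deployment,
            # not incremental tuning.
            if any(p in NON_COMPATIBLE_PATTERNS for p in patterns_present):
                return "redesign, constrain, or do not deploy"
            return "proceed to dimension-level evaluation"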

5. Core Dimensions of Human Compatibility
  • Comprehensibility

    People must be able to form a reliable understanding of what the system is doing and why outcomes occur. Comprehensibility does not require full technical disclosure. It requires clear scope, clear limits, and clear uncertainty signals. When people cannot build a workable mental model, they defer judgment and responsibility drifts.

  • Emotional and Experiential Legibility

    Systems communicate through tone, pacing, feedback, and confidence signals. Compatibility requires coherent signals that do not mislead or coerce. People should be able to tell when something is uncertain, when escalation is needed, and when urgency is real. Emotional legibility supports reflection rather than bypassing it.

  • Contextual Coherence

    A system must make sense within the real environment where it operates, including edge cases, stress conditions, and uneven incentives. Compatibility requires robust behavior under realistic constraints, not ideal settings. If users must constantly compensate with workarounds or vigilance, the system is misaligned with human reality.

  • Usability Across Human Differences

    Humans differ in ability, language, experience, attention, and capacity. A compatible system works for more than ideal users. It should fail gracefully, communicate clearly, and avoid punishing those with less time, literacy, or power. When safe use requires exceptional resources, burden shifts onto the most vulnerable.

  • Ethical and Social Alignment

    Compatibility requires behavior that fits expectations of fairness, dignity, and responsibility in its setting. This is tested under scale and pressure. If harms are externalized or fairness collapses when incentives change, alignment is not stable. Compatibility must hold under real conditions, not only ideal pilots.

  • Agency and Interpretability

    Humans must retain meaningful ability to influence outcomes and interpret why decisions are made. Agency is the practical ability to intervene without penalty. Interpretability provides explanations at a level appropriate to role and stakes, enabling contestability and accountability. When outcomes cannot be questioned or changed, humans become passive recipients.

  • Directional Constraints Over Time

    Compatibility is time-dependent. Each dimension must preserve human capability over time and must not degrade it through dependence, coercion, or responsibility deflection. A system may appear compatible at launch and become corrosive as it scales and normalizes. Directional constraints treat compatibility as something maintained, monitored, and sometimes regained.

6. Role of Human Judgment
  • Why Human Judgment Is Essential

    Compatibility involves meaning, responsibility, trust, and lived experience. These cannot be settled by metrics alone. Human judgment is required to interpret what is acceptable, notice slow harms, and evaluate trade-offs. The aim is disciplined reasoning that can be explained, challenged, and revised.

  • Forms of Judgment

    Compatibility evaluation involves perceptual judgment about understanding, experiential judgment about cognitive and emotional load, contextual judgment about appropriateness within a setting, and ethical judgment about harm and responsibility. No single perspective is sufficient. The framework expects plural viewpoints because systems affect work, meaning, and power at once.

  • Disagreement and Calibration

    Disagreement is normal and often reveals real trade-offs or uneven burden. The framework manages subjectivity by making reasoning explicit, comparing cases, and calibrating evaluators over time. Calibration is not forcing consensus. It is ensuring decisions are explainable and not arbitrary. Persistent disagreement signals hidden assumptions that need attention.

  • Responsibilities and Limits

    Evaluators must state assumptions, acknowledge uncertainty, and accept accountability for conclusions. The framework cannot prevent bad-faith adoption, but it can make misuse easier to detect through explicit reasoning requirements. Compatibility judgments are provisional and should be revisited as systems, contexts, and incentives change.

7. People, Context, and Scale
  • Intended Versus Affected Populations

    Systems often affect people who are not direct users. Compatibility requires identifying intended audiences and affected populations, including those downstream of decisions and those with limited ability to opt out. If evaluation focuses only on primary users, it will miss where harm accumulates and where power asymmetries concentrate risk.

  • Context of Use and Consequence

    Context shapes impact. Stakes, time pressure, institutional rules, and power dynamics determine whether a system is experienced as supportive or coercive. A system that is compatible when optional may become incompatible when mandatory. The framework requires evaluating alternatives, failure consequences, and realistic use conditions.

  • Scale and Accumulation

    Effects tolerable at small scale can become incompatible when systems become widespread, continuous, or unavoidable. Scale changes incentives, normalizes behavior, and concentrates power. Compatibility requires asking what happens as the system becomes routine infrastructure and whether accountability and capability remain clear.

  • Sensitivity and Risk Surfaces

    Some contexts demand higher compatibility standards than others. Where consequences are severe or hard to reverse, requirements for agency, accountability, and recourse increase. The framework maps impact surfaces where system behavior intersects with vulnerability. Sensitivity mapping prevents low-stakes assumptions from being imported into high-stakes settings.

8. Roles, Power, and Responsibility
  • Role Clarity

    Compatibility requires clear roles and decision pathways. Who initiates actions, who reviews, who overrides, and who owns outcomes must be explicit. When roles are ambiguous, responsibility slides downward while control slides upward. Clear roles are the basis for accountability and meaningful human participation.

  • Power Distribution and Asymmetry

    Every system encodes power relationships. Compatibility requires examining where power accumulates, who benefits, and who bears risk. Systems become incompatible when they centralize control while externalizing costs to those with least leverage. Power must be visible and contestable; otherwise humans adapt silently to constraints they cannot change.

  • Intervention and Redress

    Humans need credible ways to intervene when systems fail or drift. Compatibility requires pause options, override pathways, escalation channels, and meaningful appeals where stakes are high. Intervention mechanisms must be real in practice, not symbolic. A system that cannot be challenged safely is incompatible with human governance.

  • Accountability Without Deflection

    Responsibility cannot be delegated to tools or processes. If harm occurs, a human chain of accountability must exist. Compatibility requires that someone can explain decisions, change outcomes, and accept responsibility. When accountability dissolves into abstraction, governance fails and humans lose recourse.

9. Adaptation Across Domains
  • What Remains Constant

    IR5 keeps a stable spine across domains: refusal patterns, core dimensions, and judgment logic. Human constraints such as agency, understanding, accountability, and the need for meaningful exit do not change with sector. This stability enables comparability across cases and prevents each domain from being treated as an exception.

  • What Must Adapt

    Different domains introduce different risks, stakeholders, and sensitivities. Adaptation means translating the framework into domain-relevant questions, examples, and warning signs without altering the core logic. Modules exist because the same human principles apply, but stakes and harm surfaces differ in practice.

  • Modules and Case Studies

    Modules operationalize the framework for specific domains, and case studies show how it is applied in practice. Case-based learning builds credibility through evidence rather than claims. It also improves the framework by exposing blind spots and edge cases that theory alone misses.

  • Stewardship and Feedback

    The framework stays credible through stewardship: transparent updates, public critique, and revision grounded in real cases. Without stewardship, frameworks ossify or become performative. Feedback from stakeholders and affected populations is part of compatibility evaluation, not a public relations add-on.

10. Ongoing Development
  • Limits of the Framework

    IR5 is a reasoning framework, not an enforcement mechanism. It cannot guarantee outcomes, eliminate trade-offs, or replace governance, law, or democratic decision-making. It helps people see human consequences earlier and reason about them more clearly. Limits are stated explicitly to reduce misuse and overclaiming.

  • Time Drift and Reassessment

    Compatibility is not permanent. Systems change, contexts shift, and incentives evolve. A system can be compatible at launch and incompatible later through normalization, scale, or updates. IR5 therefore treats reassessment as a requirement. Compatibility must be maintained, not declared once.

  • Misuse, Co-option, and Performative Adoption

    Any public framework can be used superficially. IR5 can be cited without being applied, or selectively interpreted to justify predetermined decisions. The framework reduces this risk by naming refusal conditions, centering affected populations, and requiring explicit reasoning. Claims of compatibility should be treated skeptically when they avoid accountability or recourse.

  • Revision Commitment

    IR5 is intended to evolve. Revision is stewardship, not instability. Updates should be grounded in case evidence, documented failure modes, and transparent change logs. A framework that cannot revise itself will eventually fail the humans it is meant to protect.
