Ryan SangBaek Kim, Ph.D.
Founder & Director, Ryan Research Institute
Affective Neuroscience · Cognitive Science & Philosophy · AI Ethics
Ryan Research Institute
Institute for Affective Sovereignty and Interdisciplinary Studies
The Ryan Research Institute (RRI) is an independent research institute registered in Paris, dedicated to advancing the emerging field of Affective Sovereignty Studies.
Our work integrates affective neuroscience, philosophy of mind, psychology, and AI ethics, while building bridges into law, business studies, and the arts.
We explore the deep structures of emotion, cognition, freedom, and human identity, with a mission to develop new frameworks for the ethical design of human-centered technologies and societies.
At the core of our mission lies a conviction:
To understand the architecture of feeling is to redraw the map of what it means to be human
This website serves as the official archive of our peer-reviewed publications, books, musical works, and essays — and as an evolving hub for interdisciplinary research, international collaboration, and ethical policy dialogue.
🔎 Latest Research Highlight
Featured Event — International Presentation
Affective Sovereignty presented at Sapienza Università di Roma (2 February 2026)
Event: Ethics for AI: Challenges, Opportunities, and Human-Centered Perspectives
Session: Accountability and Care
Organizer: SIpEIA (Italian Society for Ethics in AI)
Location: Sapienza Università di Roma, Italy
Date: 2 February 2026
On 2 February 2026, Ryan SangBaek Kim presented the framework of Affective Sovereignty at Sapienza Università di Roma, addressing a growing ethical risk in emotion AI: the quiet displacement of interpretive authority from the person to the system when emotional labels become records.
The talk argued that the central risk of emotion AI is not misclassification, but interpretive displacement—a structural shift in who holds the final authority over the meaning of one’s emotional experience.
Core claims:
- Privacy protects data; Affective Sovereignty protects meaning.
- Emotion AI must be designed so that interpretation remains contestable by default.
- Systems should operate under three auditable principles: interpretive restraint, interpretive provenance, and user authority.
“The deeper risk is not that the machine gets emotions wrong — but that its answer becomes the only answer.”
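To make these principles concrete, the sketch below is a minimal, hypothetical Python example (not drawn from the talk, DefMoN, or any RRI system) of how an emotion-AI record could embody them: the machine's label stays provisional, its provenance stays attached, and the person's own account can override it at any time. All class, field, and model names are illustrative assumptions.

```python
# Hypothetical illustration only: not code from the talk or any RRI system.
# It sketches how an emotion-AI record could keep the machine's label provisional
# (interpretive restraint), carry its origin and evidence (interpretive provenance),
# and stay open to challenge by the person it describes (user authority).
from dataclasses import dataclass, field


@dataclass
class EmotionInference:
    """A machine-generated emotional label that never outranks the person it describes."""
    subject_id: str
    label: str                        # the system's reading (e.g. "angry"), not a fact
    confidence: float                 # model confidence, kept visible rather than hidden
    model_version: str                # provenance: which model produced this label
    evidence: list[str] = field(default_factory=list)  # provenance: inputs behind the label
    contestable: bool = True          # contestable by default, never opt-in
    user_account: str | None = None   # the subject's own interpretation, if given

    def contest(self, account: str) -> None:
        """Record the subject's own interpretation of the episode."""
        self.user_account = account

    def final_interpretation(self) -> str:
        """User authority: the person's account overrides the machine label."""
        if self.user_account is not None:
            return self.user_account
        return f"{self.label} (machine-inferred, unconfirmed)"


# Usage: the system infers an emotion, the person contests it, and the person's
# account becomes the authoritative interpretation stored in the record.
inference = EmotionInference(
    subject_id="user-042",
    label="angry",
    confidence=0.71,
    model_version="affect-model-demo-0.1",
    evidence=["short replies", "elevated typing speed"],
)
print(inference.final_interpretation())    # -> angry (machine-inferred, unconfirmed)
inference.contest("I was rushed, not angry.")
print(inference.final_interpretation())    # -> I was rushed, not angry.
```

The point of the sketch is the default: in this reading of the principles, contestability and provenance are properties of the record itself, not features the user has to request.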
Verification & related work:
- Peer-reviewed foundation: DefMoN — Machine Learning with Applications (Elsevier, 2026). DOI: https://doi.org/10.1016/j.mlwa.2025.100817
- Extended essay: The Night I Defended the Right to Feel (https://profryankim.substack.com)
→ Read the full event summary
NADI / ANEST — Four-Paper Program
Status: Under Review
The full NADI–ANEST program (Dataset → Human Baseline → Mechanistic Theory → Geometry / State-Space Modeling) has been completed and submitted to four top-tier venues:
- Scientific Data (Dataset Paper)
- Nature Human Behaviour (Human Baseline)
- Psychological Review (ANEST Framework)
- Nature Machine Intelligence (Geometry / Modeling)
Together, these papers establish the first integrated ecosystem for narrative–affect discrepancy, emotional regulation theory, and mechanistic modeling.
News & Highlights
📘 NADI / ANEST – Full Four-Paper Program Submitted
The entire program (351k dataset, human expressive baseline, ANEST theory, and geometry modeling) is now under review at Scientific Data, Nature Human Behaviour, Psychological Review, and Nature Machine Intelligence.
🧪 The Affective Thermodynamic Relationship (ATR) — Submitted to Nature Communications
ATR establishes the Collapse Curve of Emotion and provides the first quantitative scaling law for normative–affective conflict.
The dataset and the ADI measurement toolkit are publicly released.
🧩 DefMoN — Final Acceptance at Elsevier MLWA + Registry v3.0 Released
DefMoN now has a fully reproducible research registry: accepted manuscript (v3.0), multilingual synthetic corpus, and linked dataset metadata.
Research Programs
RRI now develops five flagship research lines, each representing a core dimension of affective sovereignty and human–AI cognition:
- NADI–ANEST Program — Narrative–Affect Discrepancy & Emotional Self-Regulation
- Defensive Motivational Nodes (DefMoN) — Language × Defense × Affect
- Resonant Amplification Framework (RAF) — Human–AI Amplification & Circuit Breakers
- Predictive Emotional Selfhood in Artificial Minds (PESAM) — Computational Selfhood & Affective Priors
- Algorithmic Affective Blunting (AAB) — Collapse Curve of Interpretative Failure
Program highlights:
- NADI–ANEST: four-paper ecosystem, Dataset → Human Baseline → ANEST Mechanistic Theory → Geometry / State-Space Modeling.
- DefMoN: a formal model for inferring defense mechanisms and affective motivations from linguistic structure.
- The Affective Degradation Index (ADI) and junk-persona causal testing.
Explore All Research Programs →
Research Collaboration & Fellowship Programs
RRI operates small, invitation-based research collaboration and fellowship programs designed to support advanced scholarly work beyond standard publication cycles.
These programs focus on reproducibility, validation, documentation, and methodological refinement, and are structured around asynchronous, document-driven collaboration rather than coursework or mentoring.
Participation is limited to a small number of researchers per cohort, with clearly defined scopes, deliverables, and governance frameworks.
→ View current research fellowship programs
Books
📘 You Don’t Really Know Your Emotions
A neuroscience-based guide that reveals why your emotions are not what you think — and how to truly feel them.
📗 Feel First. Act Freely
How to trust your emotions again, stop overanalyzing, and restore your emotional rhythm through body and brain.
📙 Strategic Psychology for CEOs
A strategic psychology manual for CEOs integrating neuroscience and behavioral science.
Music
400+ original compositions — from orchestral textures to contemporary minimalism.
🎵 New Album — "살아내는 중입니다" (2025)
A full-length emotional narrative across 11 tracks, distributed globally on all major platforms.
Contact & Institutional Affiliations
Email (General Inquiries): ryan@ryanresearch.org
ORCID (Research Registry): https://orcid.org/0009-0006-2751-496X
Affiliated Academic Networks:
EurAI — European Association for Artificial Intelligence
BCS — The Chartered Institute for IT, Specialist Group on Artificial Intelligence (SGAI)
KAIA — Korean Artificial Intelligence Association
KASBA — Korean Academic Society of Business Administration
International Governance & Ethics Network
© 2025 Ryan Research Institute. All rights reserved.
A multidisciplinary hub for research, ethics, and the arts in the age of AI