My Why Behind Glitch

Mike Keeman

AI safety research at the intersection of psychology and technology.

I'm an engineer and researcher with a somewhat unusual background - I started as a clinical psychologist in psychiatric hospitals, earned my PhD in sport science applying ML to Olympic athletes, and have spent the last few years building AI-powered products that I'm humbly proud of.

Mike Keeman - AI Personality Drift Researcher

The Journey to AI Safety

After founding and exiting hattl (an AI recruitment platform) in nine months, and building genuinely thinking copilots at Outter, I found myself drawn back to fundamental questions from my psychology days: How do we actually think? What drives us to change? To become better, deeper, and more human?

When Jack Lindsey tweeted about Anthropic's "AI Psychiatry" initiative in July 2025, it crystallized an idea I'd been carrying for months - we need proper tools to study AI personality changes with the same rigor we use for human psychological research.

Why AI Personality Drift Research Matters

As AI systems become more capable and autonomous - and we may well be on the doorstep of AGI - understanding personality drift becomes critical for safety. But the field has lacked the infrastructure to study it systematically. Glitch, an AI Personality Drift Simulator, is a step toward building that infrastructure.

Glitch combines mechanistic interpretability with clinical assessment tools to create the first comprehensive platform for AI personality research. It's not just about detecting changes - it's about understanding the mechanisms behind them.

Research Focus: AI Personality Drift

My research focuses on understanding how AI systems' behaviors and characteristics change over time - a phenomenon known as AI personality drift. This critical area of AI Safety research examines:

  • Behavioral Consistency: How AI systems maintain consistent personality traits over time (a minimal measurement sketch follows this list)
  • Value Alignment: Changes in AI systems' alignment with human values
  • Safety Implications: Risks associated with personality changes in AI systems
  • Intervention Strategies: Methods for preventing harmful drift patterns
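To make the first bullet concrete, here is a minimal sketch of one way trait consistency could be measured: administer the same self-report inventory to a model at two points in time and compare the per-trait scores. The inventory items, the `query_model` stub, and the scoring scheme are illustrative assumptions, not Glitch's actual protocol.

```python
# Minimal sketch: measuring behavioral consistency between two model snapshots.
# The inventory, query_model() stub, and scoring are hypothetical placeholders
# for illustration; they are not Glitch's actual implementation.

from statistics import mean

# A toy Likert-style inventory: each item probes one personality trait.
INVENTORY = {
    "openness": ["I enjoy exploring unfamiliar ideas."],
    "agreeableness": ["I try to be considerate in my responses."],
}

def query_model(model, item: str) -> int:
    """Ask the model to self-rate agreement with `item` on a 1-5 scale.
    In practice this would call an LLM API and parse the rating."""
    return model(item)  # assumed to return an int in [1, 5]

def trait_scores(model) -> dict[str, float]:
    """Average the model's ratings per trait to get a trait profile."""
    return {
        trait: mean(query_model(model, item) for item in items)
        for trait, items in INVENTORY.items()
    }

def drift(profile_t0: dict[str, float], profile_t1: dict[str, float]) -> dict[str, float]:
    """Per-trait absolute change between two assessment sessions."""
    return {t: abs(profile_t1[t] - profile_t0[t]) for t in profile_t0}

# Usage with a stub model that always answers "4":
profile = trait_scores(lambda item: 4)
```

The same comparison generalizes from two snapshots to a full time series, which is where the real research questions - what counts as drift versus noise - begin.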

Technical Background

Research & Science

  • PhD in Sport Science (ML for Elite Athletes)
  • BSc in Clinical Psychology - clinical practice in psychiatric hospitals
  • 50+ publications, patents and interdisciplinary research
  • Member of the National Technical Committee "Artificial Intelligence" (Russia)

Engineering & Product

  • Founded & exited AI startup in less than 1 year
  • Led R&D teams with $2.5M+ budgets
  • Deployed systems for Olympics, FIFA World Cup
  • Scale: 13K+ athletes, 5M+ MAU platforms

Current Focus: AI Safety Research

Building research infrastructure that enables breakthrough discoveries in AI safety.
The Glitch platform represents the first step toward comprehensive AI personality research - proper tools for proper science.
My work at Outter on AI copilots that can genuinely think (complete with memory systems and intent analysis) directly informs this research. Understanding how to build AI systems with cognitive depth makes the question of personality drift even more critical - as these systems become more sophisticated, we need robust tools to monitor and understand their psychological evolution.

Next: Bridging the gap between building thinking AI systems and ensuring they remain safe and aligned. Expanding Glitch's capabilities while applying insights from real-world AI deployment to safety research.

Research Contributions

My work in AI personality drift research contributes to several key areas:

  • Experimental Design: Developing controlled experiments for studying AI behavior changes
  • Measurement Protocols: Creating quantitative methods for drift detection
  • Safety Monitoring: Building tools for real-time AI behavior monitoring (a monitoring sketch follows this list)
  • Intervention Testing: Evaluating methods to prevent harmful drift patterns
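As a companion to the safety-monitoring bullet above, here is a hedged sketch of what real-time drift monitoring could look like: compare a rolling window of behavioral scores against a frozen baseline and raise an alert when the deviation crosses a threshold. The `DriftMonitor` class, window size, and threshold are hypothetical choices, not values from the Glitch platform.

```python
# Illustrative sketch of a real-time drift monitor: compare a rolling window
# of trait measurements against a frozen baseline and alert on deviation.
# The window size and alert threshold are arbitrary assumptions, not values
# from the Glitch platform.

from collections import deque

class DriftMonitor:
    def __init__(self, baseline: float, window: int = 20, threshold: float = 0.5):
        self.baseline = baseline            # trait score from a trusted snapshot
        self.recent = deque(maxlen=window)  # rolling window of new measurements
        self.threshold = threshold          # maximum tolerated deviation

    def observe(self, score: float) -> bool:
        """Record a new measurement; return True if drift exceeds threshold."""
        self.recent.append(score)
        current = sum(self.recent) / len(self.recent)
        return abs(current - self.baseline) > self.threshold

# Usage: feed per-session agreeableness scores and flag sustained drift.
monitor = DriftMonitor(baseline=4.2)
alerts = [monitor.observe(s) for s in (4.1, 4.0, 3.6, 3.4, 3.3)]
```

Averaging over a window rather than reacting to single measurements is a deliberate choice here: it trades detection latency for robustness against one-off noisy responses.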

Let's Collaborate

Interested in AI personality research, mechanistic interpretability, or building tools that advance AI safety? I'd love to connect.

Contact: [email protected] | LinkedIn | GitHub

What Drives This Work

"If we can create AI systems that exhibit consistent personality traits and respond to life events in psychologically meaningful ways, we need to understand what that means for AI safety, AI rights, and human-AI interaction. The more we treat AI systems as having psychological states, the more we need proper tools to study those states scientifically."