The vision
Your train of thought
should stay yours.
AI is becoming a place where people think, write, decide, and draft. Hugonomy builds tools that help people stay aware while they use it — and we want to prove that real-time awareness feedback actually changes behavior. Scroll to see the question we are trying to test.
The open question
Early research shows a consistent signal: passive AI use correlates with cognitive decline. But all of that research measures outcomes after the fact — surveys, post-hoc scores, theoretical models. Nobody has tested whether real-time awareness feedback actually changes how people engage with AI while it's happening. We want to be the ones who find out — honestly, rigorously, in public. And if the answer is no, the world deserves to know that too.
The gap in the current science
"What's missing is a tool that observes how people actually engage with AI while the conversation is happening. Today there is no real-time feedback about how someone is interacting with AI." — Hugonomy advisor pitch, UIUC EIR, March 2026
The hypothesis
We believe the answer is yes. Real-time awareness of passive acceptance should interrupt the drift before it becomes a habit. But belief isn't science. We want to measure it.
Watch the explainer on YouTube ↗
2-week randomized pilot. Treatment group sees live VibeAI FoldSpace cognitive feedback during AI conversations. Control group uses the same AI tools with no engagement feedback shown. Pre/post behavioral mapping.
n ≈ 50 students. Target: university cohort using AI tools for coursework. Mixed AI usage frequency. Diverse academic backgrounds. Minimal risk — all data stays on device, local-first architecture.
Behavioral: active vs. passive engagement ratio, session persistence, message depth, reflection frequency.
Self-reported: pre/post survey on AI overreliance, mindfulness, and metacognition — your awareness of your own thinking as it happens.
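As an illustration only, the first behavioral metric above — the active vs. passive engagement ratio — could be computed along these lines. The event labels and the `engagementRatio` helper are hypothetical, not part of FoldSpace's actual implementation; a real pilot would define its event schema with the research partners.

```typescript
// Hypothetical sketch of the "active vs. passive engagement ratio" metric.
// Event labels ("followUp", "edit", "challenge", "accept") are illustrative.
type EventKind = "followUp" | "edit" | "challenge" | "accept";

// Actions that count as active engagement; "accept" counts as passive.
const ACTIVE: EventKind[] = ["followUp", "edit", "challenge"];

// Ratio of active events to all scored events in a session.
function engagementRatio(events: EventKind[]): number {
  if (events.length === 0) return 0;
  const active = events.filter((e) => ACTIVE.includes(e)).length;
  return active / events.length;
}

// Example session: two active actions, one passive acceptance.
const session: EventKind[] = ["followUp", "accept", "edit"];
console.log(engagementRatio(session)); // ≈ 0.67
```

A real instrument would also weight events by latency and message depth; this sketch only shows the shape of the computation.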
Users who see a real-time "Thinking Engagement" signal will display meaningfully different interaction patterns compared to users without feedback — moving from passive acceptance toward active inquiry.
We are not claiming this works. We are claiming it's worth testing — with enough scientific rigor that the result, positive or negative, adds something real to the field. If real-time cognitive awareness doesn't change behavior, that finding matters just as much. The AI era needs honest instruments, not just optimistic ones.
We're looking for academic partners, researchers, and institutions willing to help design and run a rigorous pilot. IRB-ready. Local-first architecture. Consent-gated from day one.
The stack
What exists now, what comes next, and what stays longer-term — all built around the same rule: notice first, understand second, automate last.
A real-time awareness HUD for AI conversations. Runs entirely in your browser — no cloud, no accounts, no profiling. Shows your thinking stage (Exploring, Evaluating, Refining, Passive Mode) as it shifts. Activates the Thinking Mirror when you accept an AI response too quickly.
Chrome & Edge · Local-only · Consent-gated
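For illustration, the fast-acceptance trigger described above might look something like this sketch. The threshold, function name, and stage mapping are assumptions, not FoldSpace's actual logic.

```typescript
// Hypothetical sketch: detect "too-fast" acceptance of an AI response.
// The 3-second threshold and names are illustrative, not FoldSpace's code.
const FAST_ACCEPT_MS = 3_000;

type Stage = "Exploring" | "Evaluating" | "Refining" | "PassiveMode";

// If the user accepts almost immediately after the response appears,
// treat it as a passive-mode signal that would activate the Thinking Mirror.
function classifyAcceptance(responseShownAt: number, acceptedAt: number): Stage {
  const dwellMs = acceptedAt - responseShownAt;
  return dwellMs < FAST_ACCEPT_MS ? "PassiveMode" : "Evaluating";
}

// Example: accepted 1.2s after the response appeared → passive signal.
console.log(classifyAcceptance(0, 1200)); // "PassiveMode"
```

Timestamps here are plain milliseconds; in a browser extension they would come from `performance.now()` or event timestamps.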
Built for cross-session work. Where FoldSpace helps inside one conversation, Lens is meant to reconnect your thinking across many sessions — with your consent and under your control.
Next active build · Separate codebase from FoldSpace · Different trust model
A longer-term concept for coordinated multi-agent work with explicit rules, human review, and human sign-off before action. Directional, not a shipping product.
Long-term · Multi-agent work · Explicit rules · Human sign-off
Architecture
As AI use expands, we see three human problems that need different kinds of tools. The Hugonomy roadmap is our attempt to respond to them step by step.
🧠 Layer 1 — Cognitive Erosion
The pain in one line: your thinking has no mirror. VibeAI FoldSpace is the answer — a real-time signal that shows whether you're engaging or accepting.
🔗 Layer 2 — Cognitive Fragmentation
Every AI conversation starts cold. AllMinds Lens is meant to reconnect your work across sessions — with your consent and under your control.
🏛 Layer 3 — Governance Vacuum
Teams are starting to let agents act before they have clear oversight. AllMinds Council is the long-term answer — review before execution, human sign-off before action.
Why now
The habits people build around AI now will shape how they think with it later. We want better defaults before passive use becomes normal.
Participants in Gerlich (2025), which found a 0.75 correlation between cognitive offloading and lower critical-thinking scores
By 2025, empirical work, theory papers, and mainstream academic commentary were all pointing in the same direction: passive AI use can carry a cognitive cost
Projected enterprise AI governance market by 2035 — a sign that oversight is becoming a real business problem, not a niche concern
Design principles
Constraints are a form of architecture. These are ours.
Roadmap
What exists today, what comes next, and what stays directional for now.
Range 1 — Now
Individual awareness tool. Prove that real-time feedback helps people use AI more intentionally. VibeAI FoldSpace is the proof point.
Range 2 — Next
Cross-session continuity. Carry your context and decision history across sessions without handing control of your thinking over to the tools themselves. AllMinds Lens is the next active build.
Range 3 — Endgame
Long-term idea: rules and review for human-AI work. Define what agents may do, what needs approval, and what always stays human. AllMinds Council is the long-term answer.