Your train of thought
should stay yours.

AI is becoming a place where people think, write, decide, and draft. Hugonomy builds tools that help people stay aware while using it — and we want to prove that this kind of feedback actually changes behavior. Scroll to see the question we are trying to test.

We built the tool.
Now we want to test it.

Early research shows a consistent signal: passive AI use correlates with cognitive decline. But all of that research measures outcomes after the fact — surveys, post-hoc scores, theoretical models. Nobody has tested whether real-time awareness feedback actually changes how people engage with AI while it's happening. We want to be the ones who find out — honestly, rigorously, in public. And if the answer is no, the world deserves to know that too.

The gap in the current science

"What's missing is a tool that observes how people actually engage with AI while the conversation is happening. Today there is no real-time feedback about how someone is interacting with AI." — Hugonomy advisor pitch, UIUC EIR, March 2026

The hypothesis

If you can see your own thinking engagement in real time — does it change how you think?

We believe the answer is yes. Real-time awareness of passive acceptance should interrupt the drift before it becomes a habit. But belief isn't science. We want to measure it.

Study Design

2-week randomized pilot. Treatment group sees live VibeAI FoldSpace cognitive feedback during AI conversations. Control group uses the same AI tools with no engagement feedback shown. Pre/post behavioral mapping.

Participants

n ≈ 50 students. Target: university cohort using AI tools for coursework. Mixed AI usage frequency. Diverse academic backgrounds. Minimal risk — all data stays on device, local-first architecture.

What We Measure

Behavioral: active vs. passive engagement ratio, session persistence, message depth, reflection frequency.

Self-reported: pre/post survey on AI overreliance, mindfulness, and metacognition — your awareness of your own thinking as it happens.
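To make the behavioral side concrete, here is a minimal sketch of how an active-vs-passive engagement ratio could be computed from logged conversation events. The event names and classification sets are illustrative assumptions, not the actual FoldSpace schema.

```python
# Hypothetical sketch: computing an active-vs-passive engagement ratio
# from a list of logged conversation events. Event labels are illustrative.

ACTIVE = {"follow_up", "challenge", "edit", "reflection"}
PASSIVE = {"accept", "copy_paste"}

def engagement_ratio(events):
    """Return the fraction of classified events that are active, or None if none."""
    active = sum(1 for e in events if e in ACTIVE)
    passive = sum(1 for e in events if e in PASSIVE)
    total = active + passive
    return active / total if total else None

session = ["accept", "follow_up", "edit", "accept", "challenge"]
print(engagement_ratio(session))  # 3 of 5 classified events are active -> 0.6
```

In a real study the classifier would be richer (timing, message depth, edit distance), but a single ratio like this is enough to compare treatment and control groups pre/post.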

The Grounded Hypothesis

Users who see a real-time "Thinking Engagement" signal will display meaningfully different interaction patterns compared to users without feedback — moving from passive acceptance toward active inquiry.

We are not claiming this works. We are claiming it's worth testing — with enough scientific rigor that the result, positive or negative, adds something real to the field. If real-time cognitive awareness doesn't change behavior, that finding matters just as much. The AI era needs honest instruments, not just optimistic ones.

If this question matters to you — we want to hear from you.

We're looking for academic partners, researchers, and institutions willing to help design and run a rigorous pilot. IRB-ready. Local-first architecture. Consent-gated from day one.

University researchers · Cognitive scientists · Education technologists · IRB coordinators · Curious institutions
Start the conversation on Discord →

Three products. Three horizons.

What exists now, what comes next, and what stays longer-term — all built around the same rule: notice first, understand second, automate last.

VibeAI FoldSpace™

A real-time awareness HUD for AI conversations. Runs entirely in your browser — no cloud, no accounts, no profiling. Shows your thinking stage (Exploring, Evaluating, Refining, Passive Mode) as it shifts. Activates the Thinking Mirror when you accept an AI response too quickly.

Chrome & Edge · Local-only · Consent-gated

Live now
Next

AllMinds Lens™

Built for cross-session work. Where FoldSpace helps inside one conversation, Lens is meant to reconnect your thinking across many sessions — with your consent and under your control.

Next active build · Separate codebase from FoldSpace · Different trust model

Horizon

AllMinds Council™

A longer-term concept for coordinated multi-agent work with explicit rules, human review, and human sign-off before action. Directional, not a shipping product.

Long-term · Multi-agent work · Explicit rules · Human sign-off


Three problems. Three responses.

As AI use expands, we see three human problems that need different kinds of tools. The Hugonomy roadmap is our attempt to respond to them step by step.

🧠 Layer 1 — Cognitive Erosion

AI performs the reasoning. Humans stop practicing it.

The pain in one line: your thinking has no mirror. VibeAI FoldSpace is the answer — a real-time signal that shows whether you're engaging or accepting.

🔗 Layer 2 — Cognitive Fragmentation

No continuity across sessions. Your thinking resets every time.

Every AI conversation starts cold. AllMinds Lens is meant to reconnect your work across sessions — with your consent and under your control.

🏛 Layer 3 — Governance Vacuum

No structure for AI-assisted decisions. No accountability.

Teams are starting to let agents act before they have clear oversight. AllMinds Council is the long-term answer — review before execution, human sign-off before action.

The window is narrow.

The habits people build around AI now will shape how they think with it later. We want better defaults before passive use becomes normal.

666

Participants in Gerlich (2025), which found a strong correlation (≈ 0.75) between cognitive offloading and lower critical-thinking scores

2025

By 2025, empirical work, theory papers, and mainstream academic commentary were all pointing in the same direction: passive AI use can carry a cognitive cost

$68B

Projected enterprise AI governance market by 2035 — a sign that oversight is becoming a real business problem, not a niche concern

What we will never build.

Constraints are a form of architecture. These are ours.

Now, next, and long-term.

What exists today, what comes next, and what stays directional for now.

Range 1 — Now

Sentinel

Individual awareness tool. Prove that real-time feedback helps people use AI more intentionally. VibeAI FoldSpace is the proof point.

Range 2 — Next

Legislator

Rules and review for human-AI work. Define what agents may do, what needs approval, and what always stays human.

Range 3 — Endgame

Sovereign Mind

Long-term idea: carry your context and decision history across tools without handing control of your thinking over to the tools themselves.