About
Hugonomy Systems is a bootstrapped company building tools that help people
stay aware while using AI. Founded in Urbana, Illinois.
New to the terminology? See our plain-English glossary below.
The founder
MD/PhD · Texas A&M University
Joseph Tingling is a physician-scientist, systems thinker, and the founder of Hugonomy Systems. He completed his MD/PhD at Texas A&M University, then returned to his home country of Jamaica during the COVID-19 pandemic to serve at Sav-La-Mar Public General Hospital — providing critical care and research support during a period of global uncertainty.
After that work, he joined the University of Illinois Urbana-Champaign (UIUC) to conduct advanced research in virology and neuroinflammation. His publication in Brain, Behavior, and Immunity (2023) reflects his commitment to rigorous, translational science and the study of human resilience under stress.
Today, Joseph is applying his scientific training to a new problem: helping people stay aware, intentional, and grounded while using AI. Hugonomy Systems is his response — a bootstrapped effort to build practical tools for that job.
Working from a studio in Urbana, Illinois, he designs, codes, and iterates on Hugonomy's technology with a focus on real-world impact, ethical design, and human-centered communication.
MD/PhD — Texas A&M University
Research: Virology & Neuroinflammation, UIUC
Critical care physician, Sav-La-Mar Public General Hospital, Jamaica (COVID-19 response)
Brain, Behavior, and Immunity, 2023 — human resilience under neuroinflammatory stress
USPTO provisional filed — No. 63/856,714
Core engagement architecture
The journey
2020 – 2021
Critical care physician at Sav-La-Mar Public General Hospital during the pandemic. Frontline medicine under global uncertainty.
Urbana, Illinois
Advanced research in virology and neuroinflammation at the University of Illinois Urbana-Champaign. Published in Brain, Behavior, and Immunity (2023).
2025 – Present
Bootstrapped. Building tools that help people stay aware while using AI. VibeAI FoldSpace live on Chrome & Edge.
Mission
Our mission is to restore clarity and trust to digital communication by addressing the massive cost of cognitive misalignment in the modern economy.
Through human-centered AI systems and interfaces, we're building tools that help professionals across medicine, law, science, engineering, and enterprise stay aware, intentional, and in control while using AI.
We build within Google's Chrome ecosystem and Microsoft's Edge platform — not to replace their tools, but to complement them with the reflection and oversight tools needed for high-stakes, human-centered work.
The goal is simple: enhance clarity, agency, and intentional thinking across the AI tools people already rely on.
— J. D. Tingling, MD/PhD · Founder & CEO, Hugonomy Systems · Urbana, Illinois, 2026
How we build
Hugonomy is built through a disciplined, research-grounded development process.
Product decisions are grounded in neuroscience, cognitive science, and real clinical observation — not market trends.
An internal AI Council cross-validates every major decision — architecture, policy, privacy, and store submission — before shipping.
Lean and bootstrapped. Rapid iteration on real-world user feedback. No bloat, no feature theater.
Emotional safety and user consent are architectural constraints, not afterthoughts. The governance model is built before the product, not after.
Key concepts
Plain English — no PhD required.
Thinking about your own thinking. The part of your brain that steps back and asks: "Wait — am I actually understanding this, or just processing it?" It's the first thing to go when you're on autopilot. VibeAI FoldSpace is designed to trigger it.
Used in: Vision — The Open Question
Handing a mental task to something outside your brain — a calculator, a notepad, or an AI. Not inherently bad. The problem is when you start offloading understanding itself, not just memory. That's when it starts to cost you.
Used in: Home — The Science · Vision
The gradual weakening of a mental skill from not using it — like a muscle you stop exercising. The slow drift from "I can figure this out" to "I'll just ask the AI." Gerlich (2025) found measurable evidence of this across 666 participants.
Used in: Home — The Science · Vision — Layer 1
The tendency to trust what an automated system says — even when your own instinct says something different. Not a personal failing — a documented psychological pattern. Clinicians show it with diagnostic AI. You might show it with ChatGPT right now.
Used in: Home — Fluency Trap diagram
AI responses are grammatically perfect and smooth. The brain registers that fluency as comprehension — so you feel like you understood it, even if you didn't really engage. The displacement is invisible from the inside.
Used in: Home — The Science
What VibeAI FoldSpace calls the moment you accept an AI response without really engaging — a quick "ok thanks" or "got it." The extension detects this and activates the Thinking Mirror. Not a judgment. A mirror.
Used in: Home — The Moment · Vision
The FoldSpace feature that activates on Passive Mode. Offers reflection prompts: "Challenge this answer," "Ask for evidence," "Add my own thinking." Not telling you what to think. Reminding you that you can.
Used in: Home — The Moment
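To make the Passive Mode / Thinking Mirror pairing concrete, here is a minimal sketch of how such a detect-then-prompt loop could work. This is a hypothetical illustration, not Hugonomy's actual or patented detection logic; the function names (`isPassiveAcceptance`, `mirrorPrompts`) and the token list are invented for this example.

```typescript
// Hypothetical sketch of passive-acceptance detection.
// Tokens that typically signal acknowledgment without engagement.
const ACK_TOKENS = new Set([
  "ok", "okay", "k", "thanks", "thank", "you", "thx", "ty",
  "got", "it", "great", "cool", "nice", "perfect", "sure",
]);

// Flag short replies made up entirely of acknowledgment tokens,
// e.g. "ok thanks" or "got it!". Longer replies signal engagement
// and are never flagged.
function isPassiveAcceptance(reply: string): boolean {
  const tokens = reply
    .toLowerCase()
    .replace(/[^\w\s]/g, "") // strip punctuation
    .trim()
    .split(/\s+/)
    .filter(Boolean);
  if (tokens.length === 0 || tokens.length > 4) return false;
  return tokens.every((t) => ACK_TOKENS.has(t));
}

// A "Thinking Mirror" in this sketch is just the set of reflection
// prompts surfaced when a passive acceptance is detected.
function mirrorPrompts(reply: string): string[] {
  return isPassiveAcceptance(reply)
    ? ["Challenge this answer", "Ask for evidence", "Add my own thinking"]
    : [];
}
```

The key design point the glossary entries describe survives even in this toy version: the system never blocks or judges the reply, it only surfaces optional prompts when the reply looks like unreflective acceptance.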
Your right and ability to direct your own thinking — without it being replaced or eroded by an algorithm. The idea behind everything Hugonomy builds. AI can help you think. But the thinking has to stay yours.
Used in: Vision · Privacy Policy
A layer that sits alongside your AI work and helps you notice drift, overload, passive acceptance, or loss of focus while it is happening. On the public site, we usually explain this more simply as "tools that help you stay aware while using AI."
Used in: older site language, product strategy, and vision framing
An internal Hugonomy phrase for products designed to help people stay clear, intentional, and in control while using AI. Useful internally, but too abstract for most first-time visitors — which is why the public site now uses plainer wording.
Used in: older site language and internal planning
The design idea behind FoldSpace: detect how engaged the user is, surface feedback in real time, and prompt reflection before passive acceptance becomes a habit. This is the core approach covered by Hugonomy's USPTO provisional filing.
Used in: trust, IP, and internal product language
The rules, permissions, review steps, and human sign-off that decide what an AI system may do, what requires oversight, and what should never be automated. In plain English: who gets to act, under what limits, and with whose approval.
Used in: Vision, governance planning, and future-product framing
An internal shorthand for a rules system that defines what AI agents are allowed to do, what must be reviewed, and what always stays human. In plainer language: policy, guardrails, and approval logic for AI-assisted work.
Used in: older roadmap language and governance planning
Your train of thought as it unfolds through a question, decision, or problem. It is the line of reasoning you are trying to hold onto before distraction, overload, or a polished AI answer pulls you away from it.
Used in: older site language and vision framing
The path your thinking takes over time as you move from question to evidence to decision. Useful as an internal phrase, but on the public site we usually explain it more simply as keeping continuity across sessions or staying connected to your own train of thought.
Used in: older product and roadmap language
Vygotsky's concept: the sweet spot where you grow — tasks just beyond what you can do alone, but reachable with support. A good teacher holds you there. AI used passively skips it entirely, giving you the answer before the struggle that builds understanding.
Used in: Vision — The Open Question
Human and AI thinking evolve together over time — but not always in the same direction. Reflective use can make you sharper. Passive use can make you weaker. Which way you drift depends on how you engage.
Used in: Home — The Science
Get involved
Hugonomy is actively looking for collaborators, advisors, and investors who believe people need better ways to stay in control of their thinking while using AI.
Seeking seed-stage investors for 2026–2027. Enterprise AI governance is projected to reach $68B by 2035.
Reach out
Actively looking for technical specialists in cognitive science, ML, and browser extension development.
Reach out
Exploring peer review collaborations and academic partnerships on cognitive engagement research.
Reach out