Aquin's R&D
We're building the future of AI UI/UX, starting with a floating AI assistant that co-works with you across all apps and sites and learns your context so you never have to repeat yourself.
aquin today: the context consumption layer
Aquin is a proactive floating assistant with an incredible interface that stays wherever you put it on your screen. it's not just another AI chat app.
it's a context consumption layer. every feature, every integration: context windows, MCP connections, local LLM support, browser tabs, file attachments, screen sharing, voice recording. it all feeds into one unified understanding of what you're doing.
the floating interface adapts to your attention and context. hide it when you don't need it, pull it out instantly when you do. ask anything, get precise, usable answers, generate directly into any app or site.
but this is just the foundation. what we've built isn't the destination, it's the infrastructure for what comes next.
research focus #1: hyper-personalized LLMs
we're building an LLM training platform where ANYONE can train their own models based on their regular computer activity.
not through complex ML pipelines or expensive cloud GPUs. just use Aquin like you normally would, and your personalized model learns from everything: your context windows, conversations, MCP integrations, browsing patterns, work habits.
the current Aquin features become your data collection layer. every interaction, every file you attach, every URL you reference, every screen you share, it all becomes training data. automatically formatted, cleaned, and ready for fine-tuning.
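to make that concrete, here's a minimal sketch of what that formatting step could look like: one captured interaction turned into a chat-format JSONL fine-tuning example. the record fields and the schema below are assumptions for illustration, not Aquin's actual format.

```python
import json

# hypothetical shape of one captured interaction; field names are assumptions,
# not Aquin's actual schema
interaction = {
    "context": "editing Q3 roadmap in Notion, 3 browser tabs on competitor pricing",
    "attachments": ["q3_roadmap.md"],
    "prompt": "summarize the pricing differences for the roadmap doc",
    "response": "Competitor A charges per seat, Competitor B per usage...",
}

def to_chat_example(record: dict) -> dict:
    """Convert one captured interaction into a chat-format fine-tuning example."""
    context_note = f"[context] {record['context']}; files: {', '.join(record['attachments'])}"
    return {
        "messages": [
            {"role": "system", "content": "You are the user's personalized assistant."},
            {"role": "user", "content": f"{context_note}\n\n{record['prompt']}"},
            {"role": "assistant", "content": record["response"]},
        ]
    }

# append to a JSONL training file, the common format for chat fine-tuning
with open("training_data.jsonl", "a") as f:
    f.write(json.dumps(to_chat_example(interaction)) + "\n")
```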

the complete training pipeline: from data collection to model validation
how it works: democratizing model training
train locally or in the cloud. your choice. small models run on your machine. bigger models? we handle cloud hosting. you get full access to the trained model either way.
create your own APIs. generate API keys for your personalized models. serve them through inference endpoints. rate limiting and usage tracking built in. (a rough sketch of calling one follows below.)
marketplace for everything. trained models, training datasets, even GPU compute time. download others' fine-tuned models, sell your own. or host your model yourself.
your own database. you get your own DB you fully control, with a visual UI for managing models, viewing stats, and creating API keys.
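about those personalized-model APIs: here's a minimal sketch of hitting an inference endpoint with a generated key, assuming a standard bearer-token setup. the URL, model name, and payload fields are placeholders for illustration, not a published spec.

```python
import os
import requests

# placeholder endpoint and key name; the real API surface may differ
API_KEY = os.environ["AQUIN_API_KEY"]  # key generated from your dashboard (assumed name)
ENDPOINT = "https://api.example.com/v1/models/my-personal-model/generate"  # placeholder URL

payload = {
    "prompt": "draft a status update in my usual style",
    "max_tokens": 256,
}

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"text": "...", "usage": {"tokens": 212}}  (illustrative)
```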
the vision: AI that knows you
everything you do in Aquin becomes part of your model's understanding. conversations, files, context windows, integrations, all of it feeds into a personalized AI that actually knows how you work.
no more re-explaining. no more generic responses. your trained model remembers your preferences, understands your style, and generates outputs that feel like they came from someone who's been working with you for years.
we're building the infrastructure to make this accessible to everyone. not just ML engineers, not just developers, everyone.
proving it works
we're testing everything rigorously before launch. does the trained model actually know you better than a base model? does it reference your context correctly? does it feel personalized?
we're comparing models trained on different amounts of data, different training approaches, different timeframes. we're measuring real improvements, not just hoping it works.
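one way to picture that kind of measurement: probe both models with questions whose answers depend on your real history, and score how often each answer actually references it. everything below is illustrative, generate() stands in for whichever inference call is used, and the keyword-overlap score is a deliberately simple proxy, not our actual metric.

```python
# illustrative evaluation loop: does the personalized model reference your real
# context more often than the base model?

def context_score(answer: str, expected_facts: list[str]) -> float:
    """Fraction of expected personal facts the answer actually mentions."""
    answer_lower = answer.lower()
    hits = sum(1 for fact in expected_facts if fact.lower() in answer_lower)
    return hits / len(expected_facts)

eval_set = [
    {
        "question": "what format do I use for weekly updates?",
        "expected_facts": ["bullet points", "monday", "notion"],
    },
    # more probes, built from the user's own history
]

def evaluate(generate, eval_set) -> float:
    scores = [context_score(generate(ex["question"]), ex["expected_facts"]) for ex in eval_set]
    return sum(scores) / len(scores)

# compare: same probes, two models
# base_score = evaluate(base_model_generate, eval_set)
# tuned_score = evaluate(personal_model_generate, eval_set)
# print(f"base {base_score:.2f} vs personalized {tuned_score:.2f}")
```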
success means your trained model feels like it knows you. it understands your work patterns, your preferences, your style. that's the bar.
research focus #2: hyper-personalized interface
if you can train your own model, why can't you design your own interface?
we're researching how anyone can program and design their own version of Aquin's floating assistant. not with code, with English. describe how you want it to look, behave, respond, and it builds itself.
want a minimal interface that only shows when you hover? describe it. want a dashboard view with multiple AI personalities side by side? describe it. want custom shortcuts, themes, layouts, interaction patterns? just describe them.
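one plausible way to ground that: the English description gets turned into a small, validated interface spec that the assistant then renders. everything below is an assumption for illustration, the field names, the values, and the toy rule-based mapping that stands in for an LLM doing the translation.

```python
import json

# hypothetical interface spec: the fields here are invented for illustration,
# not Aquin's actual customization schema
DEFAULT_SPEC = {
    "visibility": "always",        # "always" | "on_hover" | "on_hotkey"
    "layout": "single_panel",      # "single_panel" | "dashboard"
    "theme": "system",
    "hotkeys": {"toggle": "cmd+shift+space"},
}

def apply_description(spec: dict, description: str) -> dict:
    """Toy rule-based mapping from an English description to spec changes.
    In practice an LLM would emit the spec and it would be validated here."""
    updated = dict(spec)
    text = description.lower()
    if "hover" in text:
        updated["visibility"] = "on_hover"
    if "dashboard" in text or "side by side" in text:
        updated["layout"] = "dashboard"
    if "minimal" in text:
        updated["theme"] = "minimal"
    return updated

spec = apply_description(DEFAULT_SPEC, "a minimal interface that only shows when I hover")
print(json.dumps(spec, indent=2))
```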
the goal isn't to replace designers or developers. it's to make interface customization as accessible as training is becoming. your AI should look and feel like yours, not ours.
same marketplace concept applies. share your interface designs, download others', remix and improve. the best ideas should spread, not stay locked in one person's setup.
economics: pay-per-use, full control
simple pricing. pay based on what you use. compute, tokens, storage. no hidden fees, no surprise bills.
marketplace revenue. sell your trained models, buy others'. fair revenue sharing. or host everything yourself if you prefer.
full ownership. your data, your models, your API keys. export everything, host anywhere. we're infrastructure, not a walled garden.
beyond training: intelligent automation
trained models unlock smarter automation. AI that understands your system, suggests contextual actions, executes tasks safely. not hijacking your computer, but augmenting your capabilities.
imagine: AI notices you're working on a presentation, automatically pulls relevant files from your context, suggests layouts based on your past work, generates content in your writing style. all while you stay in your presentation app.
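a minimal sketch of that "suggest, confirm, execute" loop. the data shapes and the presentation example are assumptions; the one real constraint illustrated is that nothing runs without your approval.

```python
from dataclasses import dataclass
from typing import Callable

# illustrative shape for a suggested action: the assistant proposes,
# the user approves, then it executes
@dataclass
class SuggestedAction:
    description: str
    run: Callable[[], None]
    requires_confirmation: bool = True

def handle(action: SuggestedAction, user_approves: Callable[[str], bool]) -> None:
    if action.requires_confirmation and not user_approves(action.description):
        return  # never act without an explicit yes
    action.run()

# example: the presentation scenario above, with a stubbed-out task
action = SuggestedAction(
    description="pull q3_roadmap.md and last quarter's deck into this presentation",
    run=lambda: print("files attached to presentation context"),
)
handle(action, user_approves=lambda desc: input(f"do it? {desc} [y/N] ").lower() == "y")
```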
this is what hyper-personalized AI enables. not generic responses, but actions and outputs that match how you actually work.
breaking the monopoly
current AI landscape: Google, Oracle, OpenAI, Microsoft, Anthropic. centralized power, closed systems, your data training their models.
we're building the opposite. decentralized training, open marketplaces, your data training your models. if programming democratized computing, we're democratizing AI itself.
making LLM training as simple as creating a Notion page. that's the vision. that's what we're researching. that's what we're building.
join our Discord to see real-time progress, test early builds, and shape what comes next.
