Reference · Environment

AI and the environment.

AI uses real energy. We're not going to pretend otherwise. What we can say is this: Continio is built to use less of it per useful answer than the alternative.

The honest position

Continio is not zero-impact. No AI product is. Every message you send, every reply, every bit of background work uses some compute, which uses some energy. Anyone telling you otherwise is selling you something.

What's true: Continio is deliberately designed to use less energy per useful answer than the alternatives.

How that actually works

Memory means you stop re-explaining yourself.

The biggest source of wasted AI energy is people re-introducing themselves on every new chat. Their job. Their project. Their tone. Every "I'm working on X for client Y" is energy spent re-processing information the AI should already know about you. Continio's whole job is to stop this happening. Fewer redundant words, less energy per useful answer.
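The arithmetic behind that claim is easy to sketch. Every number below is invented for illustration, not measured:

```python
# Illustrative arithmetic only; the token counts and chat frequency are
# made-up assumptions, not measurements from Continio.
PREAMBLE_TOKENS = 400    # "I'm working on X for client Y, my tone is Z..."
CHATS_PER_MONTH = 60

# Without memory: the preamble is re-typed and re-processed in every chat.
without_memory = PREAMBLE_TOKENS * CHATS_PER_MONTH

# With memory: the facts are stored once and surfaced when relevant, so the
# full preamble is processed roughly once rather than sixty times.
with_memory = PREAMBLE_TOKENS

print(without_memory, with_memory)  # 24000 400
```

The exact numbers don't matter; the point is that the cost of re-explaining scales with every chat, while stored context doesn't.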

The same instructions don't get re-read every time.

Every message Continio sends to the AI carries a set of background instructions about how to behave. Those instructions barely change between messages, so we cache them. The AI reads them from a cache instead of re-reading them every time. That cuts roughly 90% of the energy cost of those instructions across a session.
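A back-of-envelope sketch of what caching does to that fixed cost. The token counts and the 10% cache-read factor below are illustrative assumptions (chosen to mirror the roughly 90% saving described above), not Continio's real figures:

```python
# Illustrative numbers only: assumes a cached read costs ~10% of a fresh read.
INSTRUCTION_TOKENS = 2_000
CACHE_READ_FACTOR = 0.10
MESSAGES_PER_SESSION = 30

# Without caching: the full instructions are re-read on every message.
uncached = INSTRUCTION_TOKENS * MESSAGES_PER_SESSION

# With caching: the first message pays full price and writes the cache;
# every later message reads the instructions from the cache at a discount.
cached = INSTRUCTION_TOKENS + (
    INSTRUCTION_TOKENS * CACHE_READ_FACTOR * (MESSAGES_PER_SESSION - 1)
)

saving = 1 - cached / uncached
print(f"{saving:.0%}")  # 87%
```

The saving grows with session length, because the one full-price read is amortised across more and more cheap cache reads.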

The instructions also got shorter.

In March we rewrote those background instructions to be roughly half as long, without losing any of the rules they enforce. Combined with caching, that permanently halved the largest fixed cost of every chat: the instructions themselves.

The right size of AI for the right size of task.

Not every message needs the heaviest AI. Quick factual questions go to a lighter one. Complex, nuanced, or personal conversations get the full one. This is mostly a quality decision (lighter AI for fiddly things produces worse answers), but lighter AI also uses less energy per response.
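As a sketch, this kind of routing can be as simple as a heuristic over the incoming message. The function, keywords, and model names below are invented placeholders, not Continio's actual router:

```python
# A toy router. The heuristic, keywords, and model names are placeholders,
# not Continio's real routing logic.
def pick_model(message: str) -> str:
    # Long or reflective messages go to the heavier model; short factual
    # questions go to the lighter one.
    looks_complex = (
        len(message.split()) > 40
        or any(w in message.lower() for w in ("feel", "plan", "decide", "why"))
    )
    return "heavy-model" if looks_complex else "light-model"

print(pick_model("What's the capital of Portugal?"))             # light-model
print(pick_model("Help me decide how to restructure my week."))  # heavy-model
```

A real router would be more careful than keyword matching, but the shape is the same: spend heavy compute only where it buys a better answer.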

It only sends what's relevant.

Continio doesn't push everything it knows about you at the AI on every message. It picks just what fits the question. A simple question might only bring in a few things you've told it. A reflective conversation might bring in more. The rest stays quiet. Less data, less energy.
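One minimal way to sketch that selection: score each stored fact against the question and keep only what clears a relevance bar within a token budget. The keyword-overlap scoring here is a deliberately crude stand-in for whatever Continio actually uses, and the budget is invented:

```python
# Sketch of relevance-based selection under a token budget. Keyword overlap
# is a crude stand-in for real relevance scoring; all numbers are invented.
def select_memories(question: str, memories: list[str], budget: int = 40) -> list[str]:
    q_words = set(question.lower().split())

    def score(mem: str) -> int:
        return len(q_words & set(mem.lower().split()))

    picked, used = [], 0
    for mem in sorted(memories, key=score, reverse=True):
        cost = len(mem.split())
        # Only include facts that clearly relate to the question and still fit.
        if score(mem) >= 2 and used + cost <= budget:
            picked.append(mem)
            used += cost
    return picked

memories = [
    "works as a freelance designer for client Acme",
    "prefers a direct, concise tone",
    "is training for a marathon in October",
]
print(select_memories("what should I charge the Acme client for design work", memories))
# only the client fact is sent; the marathon stays quiet
```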

Background work waits its turn.

When Continio is doing things in the background (noticing what you've said, summarising old chats, organising for search), it pauses while you're actively chatting. Energy load gets spread out in time, not stacked on top of itself.
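A toy model of that scheduling idea: jobs accumulate while the user is active and only run once the session goes quiet. Single-threaded and simplified on purpose; the class and method names are invented, not Continio's internals:

```python
# Toy model of deferred background work: jobs queue while the user is
# actively chatting and run only once the session goes quiet.
from collections import deque

class BackgroundQueue:
    def __init__(self):
        self.jobs = deque()
        self.user_active = False

    def submit(self, job):
        self.jobs.append(job)
        return self.drain()

    def drain(self):
        if self.user_active:  # defer everything mid-conversation
            return []
        done = []
        while self.jobs:
            done.append(self.jobs.popleft()())
        return done

q = BackgroundQueue()
q.user_active = True
q.submit(lambda: "summarise old chat")  # queued, not run
q.user_active = False
print(q.drain())  # ['summarise old chat']
```

The same amount of work gets done either way; it just stops competing with the live conversation for the same moment of compute.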

Where Continio runs

Continio runs on Vercel (frontend), Railway (backend), and Supabase (database). The AI itself runs on Anthropic and OpenAI infrastructure. All of these services sit on cloud providers (Google Cloud, Microsoft Azure) that have published net-zero and renewable-energy commitments.

We don't claim credit for those commitments. But it matters that the underlying infrastructure isn't running on coal.

What we don't claim

We're not carbon-neutral. We don't have a sustainability certificate. We haven't worked out our emissions per user per month. Doing that properly would need more data than we currently have. Publishing a number we can't back up would be worse than saying nothing.

What we do commit to: as the product grows, efficiency improvements come before feature additions. Lighter AI for lighter tasks, more caching, fewer redundant messages. These are priorities, not afterthoughts.

The comparison that matters

The right question isn't "is this zero impact?" It's "is this better or worse than what people are doing instead?"

Someone who explains themselves five times across five different AI tools, restarting conversations, re-uploading the same files, and re-processing the same background information several times a week, uses far more energy than the same person using Continio.

Continio's efficiency case is structural, not incremental. The product is built around the principle that you should explain things to AI once, not repeatedly.

(That's also why "use it lightly to save the planet" isn't really our message. The most useful thing you can do is concentrate your AI use in one place that remembers, instead of spreading it across five that don't.)