In 2025 we launched the first full version of our platform and saw it move quickly into live use in organisations that take risk, regulation, and client trust seriously. From Kingsley Napley and The University of Law to Luminate Education, Lex Mundi, Schofield Sweeney and others, our early clients spanned exactly the sectors Kalisa is built for - legal, education, financial services and professional networks where knowledge and expertise are the product and privacy and AI safety are non-negotiable.
Across these clients, a common pattern emerged: leaders were no longer asking whether they should use generative AI. They were asking how to do it safely, in a way that reflects their brand, protects their intellectual property, and fits their governance standards.
Kalisa’s answer in 2025 was clear. Start with your knowledge and expertise, add guardrails that reflect your risk appetite, and make it easy for non-technical teams to build practical generative AI use cases that deliver real value. Chat agents, workflows, AI workspaces, client portals and more, all driven from your own documents, data and organisational expertise.
That foundation is now in place. The real question is what we do next.
What we learned from our first year in market
Working with our first group of clients showed us what “good” looks like in regulated sectors.
- Different modes for different types of work. People want to interact with AI in more than one way, but always within a secure and private environment. Some tasks demand deep grounding in verified knowledge, minimal hallucinations, and consistent outputs. Others benefit from speed, flexibility, and broader exploration. Kalisa supports both. Knowledge Agents are designed for formal, high-confidence work grounded in an organisation’s own expertise. Sandbox provides a safe space for more flexible, everyday tasks, where users can work with private information without fear that it will leave their environment or be reused elsewhere.
- Security and governance as standard. Clients expect AI to operate within their existing rules, not alongside them. They want clear guardrails on how agents behave, straightforward control over data, and confidence that the system supports their privacy and regulatory obligations.
- Leaders want outcomes, not experiments. The most successful projects focus on specific outcomes. Better client support. Faster knowledge management. New business models. Improved productivity.
These lessons shape where we take the platform next.
Evolving the platform: Custom branding, personalisation, safe everyday working
In 2026, Kalisa’s roadmap is centred on one idea. Every organisation should have an AI environment that feels like it is their own, not a generic tool.
1. Make Kalisa yours
We are extending Kalisa so that every part of the experience reflects your brand identity and tone of voice.
New brand customisation features
Visual customisation with your brand colours so that agents, portals, and workspaces feel like part of your digital estate, not a third-party add-on.
Personalised voice and context
Brand voice controls keep agents in line with your content guidelines and allow users to add simple personal facts (role, preferences, work context) so responses are more relevant. This background information is stored securely and only used in agents where personalisation is enabled.
Coming soon! Users will be able to configure their preferred language for AI responses.
Configurable journeys for different audiences
Internal and external users see experiences tailored to their needs, with clear boundaries between what each group can see and do.
The goal is simple. When someone uses a Kalisa-powered agent or portal, they should feel they are dealing with your organisation, not with “an AI system”.
2. Kalisa Sandbox: Safe everyday working
Our early work confirmed that teams need a place to work with AI in a more open, creative way, without losing sight of risk and governance.
Kalisa’s Sandbox will continue to evolve as an everyday workspace for lower-risk tasks such as summarising and rewriting documents, testing ideas, and exploring concepts. Sandbox agents are not grounded in your institutional knowledge by default, to give you freedom and flexibility where it is appropriate. This creates a clear separation between low-risk activity (for example exploring an idea) and formal, governed tasks (for example drafting a letter to your staff based on official guidance). When you add your own documents, Sandbox becomes a powerful way to work with your organisation’s materials.
Doubling down on privacy and responsible AI
Privacy and data control sit at the centre of Kalisa’s design. Our clients work in sectors where confidentiality is not optional, so we treat “private by default” as a baseline, not a feature.
Kalisa’s model is clear:
- Client data stays within controlled environments, is encrypted in transit and at rest, and is never used to train public models.
- Prompts and documents do not flow out to the open internet, and any use of external sources is deliberate, traceable, and under client control.
- The platform aligns with emerging requirements in areas such as data protection, AI safety laws, and sector-specific guidance, so that Kalisa supports organisations in meeting their obligations instead of adding to their burden.
This focus on responsible AI use is not theoretical. It reflects the same principles discussed at policy level, including in conversations our CEO, Adam Roney, took part in at Number 10 Downing Street in December 2025, where the emphasis was on building AI systems that are safe, trustworthy, and fit for regulated environments.
Looking ahead, we expect regulation and client expectations to tighten further. That is a positive pressure. We are continuing to architect the platform around responsible, safe, and compliant AI use. That means clear guardrails, audit trails, and policy controls that map to the standards in regulated sectors, rather than asking clients to fit around a general-purpose AI tool.
Key priorities for 2026
Looking ahead, we are focused on three key priorities:
- Growing our team (we’re hiring, by the way!)
We will continue to grow a multidisciplinary team across product, engineering, design, and client success, so that we can ship faster and support our clients with their AI transformation.
- Enhancing the platform
We will continue to enhance our platform, adding new modules, increasing the power of existing modules, and making it even easier to build and manage secure AI solutions without requiring technical expertise.
- Helping clients move from “build vs buy” to “influence”
We aim to work with forward-thinking organisations that want to shape how AI is used in their sectors. These early adopters will not just use Kalisa; they will influence its future roadmap, providing critical feedback and helping to shape the platform around real needs in their sector. AI does not have to be a race to the bottom on user experience or features, and it does not need to be dominated by the big-box vendors. Kalisa provides a powerful alternative to these corporate monocultures.
Building the future together
Kalisa was created for organisations whose value lies in their knowledge and expertise. Our first year in the market has confirmed strong demand for AI that is reliable, private, and firmly under organisational control.
In 2026, we will continue to develop a platform that helps you codify what makes your organisation valuable, protect it, and extend it into new services and products.
The next phase of Kalisa will be shaped by the forward-thinking organisations already building on the platform, and by those who join us as this work continues.