2026 and beyond: The road ahead

In 2025 we launched the first full version of our platform and saw it move quickly into live use in organisations that take risk, regulation, and client trust seriously. From Kingsley Napley and The University of Law to Luminate Education, Lex Mundi, Schofield Sweeney and others, our early clients spanned exactly the sectors Kalisa is built for: legal, education, financial services, and other organisations where knowledge and expertise are the product and privacy and AI safety are non-negotiable.

Across these clients, a common pattern emerged: Leaders were no longer asking whether they should use generative AI. They were asking how to do it safely, in a way that reflects their brand, protects their intellectual property, and fits their governance standards.

Kalisa’s answer in 2025 was clear. Start with your knowledge and expertise, add guardrails that reflect your risk appetite, and make it easy for non-technical teams to build practical generative AI use cases that deliver real value. Chat agents, workflows, AI workspaces, client portals and more, all driven from your own documents, data and organisational expertise.

That foundation is now in place. The real question is what we do next. 

What we learned from our first year in market

Working with our first group of clients showed us what “good” looks like in regulated sectors.

  • Different AI modes for different types of work: People want to interact with AI in more than one way, but always within a secure and private environment. To support different ways of working, Kalisa offers two powerful types of Agents:
    • Knowledge Agents become subject matter experts by being linked to Topics, which can contain manually uploaded data, data synchronised from third-party systems, and data crawled from selected public websites. Once attached to a Topic, an agent answers questions on that Topic only and provides source references for its responses. Knowledge Agents are great for HR policy and customer support use cases, and for selling your knowledge and expertise directly to your end clients through an AI experience.
    • Sandbox Agents act in a more open-ended way. They can help you reason through a specific problem, draft documentation, and even simulate a situation in your organisation, helping you see different points of view. Files and data can be attached to them too, but unlike Knowledge Agents, this is optional. Like all Kalisa functionality, they are private, secure, and fully guardrailed. Because of this, and their inherent flexibility, some organisations have rolled out Sandbox as a safe, private, and secure alternative to other publicly available AI systems. You can probably guess which ones…
  • Security and governance as standard: Clients expect AI to operate within their existing rules, not alongside them. They want clear guardrails on how agents behave, straightforward control over data, and confidence the system supports their privacy and regulatory obligations. Kalisa delivers on all these fronts.
  • Leaders want outcomes, not experiments: The most successful projects focus on specific outcomes. Better client support. Faster knowledge management. New business models. Improved productivity. Kalisa’s clients deliver meaningful outcomes quickly because of the power of the platform and the speed at which non-technical people can test and implement new ideas.

Evolving the platform: Custom branding, personalisation, safe everyday working

1. Custom branding: Make Kalisa yours

Visual customisation with your brand colours means agents, portals, and workspaces feel like part of your digital organisation, not a third-party add-on.

2. Personalisation

Background information in Personal Settings lets users specify key facts about themselves, what is important to them, and how they want the AI to communicate with them. This background information is stored securely and used only in agents where personalisation is enabled.

*The organisation and character shown in these visuals are fictional and created solely for this demo.

Coming soon! Users will be able to configure their preferred language for AI responses.

3. Configurable journeys for different audiences

Internal and external users see experiences tailored to their needs, with clear boundaries between what each group can see and do.

*The organisation and character shown in these visuals are fictional and created solely for this demo.

The goal is simple. When someone uses a Kalisa-powered agent or portal, they should feel they are dealing with your organisation, not just another AI system.

4. Kalisa Sandbox: Safe everyday working

Our early work confirms that teams need a place to work with AI in a more open, exploratory way, without losing control of risk, privacy, and governance.

Kalisa’s Sandbox acts as an everyday workspace for broader tasks such as summarising and rewriting documents, reasoning through problems, drafting early versions of content, and exploring ideas from different points of view. Sandbox Agents are designed to be flexible and open-ended, supporting thinking and experimentation.

Users can choose to attach files and data to Sandbox Agents when needed, making them a powerful way to work with organisational materials.

Like all Kalisa functionality, Sandbox is private, secure, and fully guardrailed. For this reason, some organisations use Sandbox as a safe, controlled alternative to publicly available AI tools, giving their teams the flexibility they want without exposing their data or intellectual property.

Our commitment to privacy and responsible AI

Privacy and data protection sit at the heart of Kalisa’s design. Our clients work in sectors where confidentiality is not optional, so we treat “private by default” as a baseline, not a feature.

Kalisa’s model is clear: 

  • Client data stays within controlled environments, is encrypted in transit and at rest, and is never used to train AI models.
  • Prompts and documents do not flow out to the open internet, and any use of external sources is deliberate, traceable, and under client control.
  • We keep the platform aligned with emerging requirements in areas such as data protection, AI safety law, and sector-specific guidance, so that Kalisa supports organisations in meeting their obligations instead of adding to their burden.

Responsible AI is one of our founding principles. It reflects the same priorities discussed at policy level, including in conversations our Founder and CEO, Adam Roney, took part in at Number 10 Downing Street in December 2025, where the emphasis was on building AI systems that are safe, trustworthy, and fit for regulated environments.

Looking ahead, we expect regulation and client expectations to tighten further. That is a positive pressure. We are continuing to architect the platform around responsible, safe, and compliant AI use. That means clear guardrails, audit trails, and policy controls that map to the standards in regulated sectors, rather than asking clients to fit around a general-purpose AI tool.

Key priorities for 2026

Looking ahead, we are focused on three key priorities:

  1. Growing our team (we’re hiring by the way!)
    We will continue to grow a multidisciplinary team across product, engineering, design, and client success, so we can ship faster and support our clients with their AI transformation.  
  2. Enhancing the platform
    We will continue to enhance our platform, adding new modules, increasing the power of existing modules, and making it even easier to build and manage secure AI solutions without requiring technical expertise.
  3. Helping clients move from “build vs buy” to “influence”
    We aim to work with forward-thinking organisations that want to shape how AI is used in their sector. These early adopters will not just use Kalisa; they will influence its future roadmap, providing critical feedback and helping shape the platform around real user needs. AI does not have to be a race to the bottom on user experience or features, and it does not need to be dominated by the big-box vendors. Kalisa provides a powerful alternative to these corporate monocultures.

Building the future together

Kalisa was created for organisations whose value lies in their knowledge and expertise. Our first year in the market has confirmed strong demand for AI that is reliable, private, and firmly under organisational control.

In 2026, we will continue delivering a platform that helps you codify what makes your organisation valuable, protect it, and extend it into new markets, products and services.

The next phase of Kalisa will be shaped by the forward-thinking organisations already building on the platform, and by those who join us on this exciting journey.

Get a free demo of Kalisa

Discover how to securely deploy GenAI across your organisation.