Specific bottleneck?
Let us talk for 30 minutes about your use case.
No obligation, no cost, with concrete next steps at the end.
Book a 30-minute call

In-house development / Evanto product
From the wild growth of public AI tools and unused company knowledge, a closed, in-house AI platform emerges — chat, knowledge base, document import, and automated agents under one interface, with built-in privacy automation and full data sovereignty.
THE STARTING POINT
Employees want to work with AI, and they already are, just outside the company. Texts get pasted into public chat tools, documents get sent to external translators, research runs through outside search engines. What feels convenient is in fact a double loss: sensitive data leaves the building, and the company's own knowledge — contracts, manuals, Confluence pages, organically grown SharePoint structures, newsletter subscriptions, incoming emails — stays invisible to the AI.
Honestly: who today can say which company data has been posted into which external AI tool? And who can guarantee that an AI answer an employee passes on to a customer actually reflects the current internal state of knowledge, and not a two-year-old internet snippet?
A generic cloud AI solves neither problem. Neither does a quickly assembled chat interface in front of a language model: it's just chat again, with no link to the content and routines where the company's actual work happens.
WHAT WE BUILT
A closed platform that runs as a dedicated single instance for one company — either centrally as a web application in a Docker container, or autonomously as a native desktop app. Both variants share the same interface; employees don't notice which mode they're in. What they do notice: everything they need for AI-assisted work is in one place — and everything that goes in stays in their own environment.
The entry looks like a modern chat: answers stream in word by word, files can be attached, the language model is swappable per task (Claude, GPT, Gemini, local models, and many more), and conversations are saved and searchable. What sets it apart: every answer can carry clickable source references that lead straight into the relevant knowledge article. Employees can forward a good answer to colleagues, or download it as a Word or PDF document in corporate design, without a copy-paste break.
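Conceptually, a streamed answer with attached source references can be sketched as a generator feeding the chat UI. The class and function names below are illustrative assumptions, not the product's actual API:

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class SourceRef:
    """A clickable reference back into the knowledge base (hypothetical shape)."""
    article_id: str
    title: str

def stream_answer(chunks: list[str], sources: list[SourceRef]) -> Iterator[str]:
    """Yield the answer word by word, then the source references,
    roughly the way a streaming endpoint would feed the chat view."""
    for chunk in chunks:
        yield chunk
    for ref in sources:
        # each reference carries enough data for a deep link into the article
        yield f"[source: {ref.title} -> /kb/{ref.article_id}]"
```

A consumer simply iterates: the UI renders text chunks as they arrive and turns the trailing reference entries into links.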
Company documents become the actual answer source. Word, PDF, Markdown, Excel, CSV, and image files are uploaded, individually or as a bulk ZIP, automatically transferred into a vector database, and made available to the AI as context, searchable semantically by meaning rather than by keyword. The knowledge base isn't a single bucket: separate vaults for departments or projects, four confidentiality levels, role-based access, and personal vaults for every user. Where a vector database isn't wanted, a slim in-memory BM25 search does the same job.
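The in-memory keyword fallback can be sketched as a minimal BM25 ranker. This is a textbook implementation with the standard k1/b defaults; the names and structure are illustrative, not the product's implementation:

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"\w+", text.lower())

class BM25Index:
    """Minimal in-memory BM25 index over a dict of doc_id -> text."""

    def __init__(self, docs: dict[str, str], k1: float = 1.5, b: float = 0.75):
        self.k1, self.b = k1, b
        self.docs = {doc_id: tokenize(text) for doc_id, text in docs.items()}
        self.avg_len = sum(len(t) for t in self.docs.values()) / len(self.docs)
        self.n = len(self.docs)
        self.df = Counter()                      # document frequency per term
        for tokens in self.docs.values():
            self.df.update(set(tokens))

    def score(self, query: str, doc_id: str) -> float:
        tokens = self.docs[doc_id]
        tf = Counter(tokens)
        score = 0.0
        for term in tokenize(query):
            if term not in tf:
                continue
            idf = math.log(1 + (self.n - self.df[term] + 0.5) / (self.df[term] + 0.5))
            norm = tf[term] * (self.k1 + 1) / (
                tf[term] + self.k1 * (1 - self.b + self.b * len(tokens) / self.avg_len))
            score += idf * norm
        return score

    def search(self, query: str, top_k: int = 3) -> list[tuple[str, float]]:
        ranked = sorted(((d, self.score(query, d)) for d in self.docs),
                        key=lambda pair: pair[1], reverse=True)
        return [(d, s) for d, s in ranked[:top_k] if s > 0]
```

The same `search` interface could sit in front of either backend, which is what makes the BM25 fallback a drop-in for the vector store.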
A dedicated import page brings content from six sources into the same pipeline:
Every import job runs through the stages Initial → Convert → Ready for review → Published. In the review dialog, a Markdown editor, frontmatter editor (classification, department, roles), and preview sit side by side in tabs. Only after release does the content move into the knowledge base and trigger indexing. Source-hash checks prevent duplicates; already-imported items are flagged with a reference to the existing entry.
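The four-stage job lifecycle and the source-hash duplicate check can be sketched roughly like this. The class names are hypothetical and SHA-256 is assumed as the hash function:

```python
import hashlib
from enum import Enum

class Stage(Enum):
    INITIAL = "initial"
    CONVERT = "convert"
    REVIEW = "ready_for_review"
    PUBLISHED = "published"

# forward-only transitions: Initial -> Convert -> Ready for review -> Published
NEXT = {Stage.INITIAL: Stage.CONVERT,
        Stage.CONVERT: Stage.REVIEW,
        Stage.REVIEW: Stage.PUBLISHED}

class ImportJob:
    def __init__(self, source_bytes: bytes):
        self.stage = Stage.INITIAL
        # the source hash is what duplicate detection keys on
        self.source_hash = hashlib.sha256(source_bytes).hexdigest()

    def advance(self) -> Stage:
        if self.stage not in NEXT:
            raise ValueError("job already published")
        self.stage = NEXT[self.stage]
        return self.stage

class KnowledgeBase:
    def __init__(self):
        self._by_hash: dict[str, ImportJob] = {}

    def submit(self, job: ImportJob) -> "ImportJob | None":
        """Return the already-imported job if this source is a duplicate,
        otherwise register the new job and return None."""
        existing = self._by_hash.get(job.source_hash)
        if existing is not None:
            return existing          # flag duplicate with a reference
        self._by_hash[job.source_hash] = job
        return None
```

Indexing would then hang off the transition into `PUBLISHED`, matching the rule that only released content reaches the knowledge base.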
Where employees repeat the same AI-assisted step daily or weekly, an agent takes over. An agent is a routine described once: an input source (no input, a vault folder, or an email mailbox), a personality (via a skill definition), a tool selection with clear read/write classification (read from the knowledge base, create new articles, move articles, mark or file emails), and an optional cron schedule.
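An agent definition of this shape can be sketched as a small configuration object. All names here are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Access(Enum):
    READ = "read"
    WRITE = "write"

@dataclass(frozen=True)
class Tool:
    name: str
    access: Access       # the read/write classification mentioned above

@dataclass
class AgentDefinition:
    name: str
    input_source: "str | None"   # None, a vault folder path, or a mailbox
    skill: str                   # personality via a skill definition
    tools: "list[Tool]" = field(default_factory=list)
    cron: "str | None" = None    # e.g. "0 3 * * *" for 03:00 nightly

    def write_tools(self) -> list[str]:
        """Everything the agent could change, i.e. what a dry-run must preview."""
        return [t.name for t in self.tools if t.access is Access.WRITE]
```

Separating the read/write classification at the tool level is what makes a faithful dry-run possible: the runner only has to intercept calls to the `WRITE` tools.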
Before live operation, every agent first runs in a dry-run: it shows in full what it would do — planned write operations appear in a preview panel, without touching a single file. Once the plan looks right, you press live-run — and watch each step on a live timeline: input bundle, tool call, tool result, model response. Iteration and runtime limits prevent endless loops; a cancel button stops the run at any time. Every run is permanently logged with status, tokens, tool calls, errors, and summary.
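The dry-run/live-run split with an iteration limit can be sketched as follows; the step tuples and names are simplified assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class PlannedWrite:
    tool: str
    target: str

@dataclass
class RunLog:
    status: str = "running"
    steps: "list[str]" = field(default_factory=list)
    planned_writes: "list[PlannedWrite]" = field(default_factory=list)

def run_agent(steps, dry_run: bool = True, max_iterations: int = 10) -> RunLog:
    """Execute agent steps; in dry-run mode, write operations are only
    recorded for the preview panel, never applied.
    `steps` yields tuples of ("read" | "write", tool_name, target)."""
    log = RunLog()
    for i, (kind, tool, target) in enumerate(steps):
        if i >= max_iterations:              # guard against endless loops
            log.status = "aborted: iteration limit"
            return log
        log.steps.append(f"{kind}:{tool}:{target}")   # the live timeline
        if kind == "write" and dry_run:
            log.planned_writes.append(PlannedWrite(tool, target))
        # in live mode the real tool call would run here
    log.status = "done"
    return log
```

The same loop serves both modes; flipping `dry_run` to `False` is the "press live-run" moment, and `RunLog` is what gets persisted per run.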
A concrete example: web clippings from the browser land automatically in a vault folder. A scheduled agent checks the folder every night, normalises HTML residue, translates English-language articles into German, classifies them by topic, assigns tags, and moves them into the right thematic location. What used to be hours of weekly tidying now runs at three in the morning — and is ready in the morning as a clean, searchable knowledge base.
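The normalisation step for HTML residue might look roughly like this minimal sketch (tag stripping plus entity decoding; the real pipeline presumably does considerably more):

```python
import re
from html import unescape

def normalise_clipping(html_text: str) -> str:
    """Strip leftover tags from a web clipping and collapse whitespace,
    as a first normalisation pass before translation and classification."""
    text = re.sub(r"<[^>]+>", " ", html_text)    # drop residual tags
    text = unescape(text)                        # decode &amp;, &quot;, ...
    return re.sub(r"\s+", " ", text).strip()     # collapse runs of whitespace
```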
Before every handover to a language model — chat, tool result, agent run — the content runs through a two-stage PII protection: first pattern-based, then AI-supported (Presidio-based), in German and English. Personal data is replaced with placeholders like [PERSON_1]. The AI sees the placeholder; the answer is automatically re-identified before output. Employees notice nothing, except a brief notification when something was detected. A status indicator in the header shows at any time whether the shield is active and healthy; a complete audit log secures compliance — for agents including a reference to the run and trigger.
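The pseudonymise-then-re-identify round trip can be sketched with two detector stages. The email regex and the name-list lookup are simplifying stand-ins for the pattern-based and Presidio-based stages; the real recognisers are far broader:

```python
import re

def detect_pattern(text: str) -> list[tuple[int, int, str]]:
    """Stage 1, pattern-based: here just an email regex as a stand-in."""
    return [(m.start(), m.end(), "EMAIL")
            for m in re.finditer(r"[\w.]+@[\w.]+", text)]

def detect_names(text: str, known_names) -> list[tuple[int, int, str]]:
    """Stage 2, a stand-in for the AI-supported (Presidio-based) recogniser."""
    spans = []
    for name in known_names:
        for m in re.finditer(re.escape(name), text):
            spans.append((m.start(), m.end(), "PERSON"))
    return spans

def pseudonymise(text: str, known_names=()) -> tuple[str, dict]:
    """Replace detected entities with placeholders like [PERSON_1]
    and return the mapping needed to re-identify the answer."""
    spans = sorted(detect_pattern(text) + detect_names(text, known_names),
                   reverse=True)    # replace right-to-left so offsets stay valid
    mapping, counters = {}, {}
    for start, end, kind in spans:
        original = text[start:end]
        # reuse the placeholder if the same entity was already seen
        placeholder = next((p for p, o in mapping.items() if o == original), None)
        if placeholder is None:
            counters[kind] = counters.get(kind, 0) + 1
            placeholder = f"[{kind}_{counters[kind]}]"
            mapping[placeholder] = original
        text = text[:start] + placeholder + text[end:]
    return text, mapping

def reidentify(answer: str, mapping: dict) -> str:
    """Swap placeholders in the model's answer back for the originals."""
    for placeholder, original in mapping.items():
        answer = answer.replace(placeholder, original)
    return answer
```

The key property is the round trip: the model only ever sees placeholders, and the mapping never leaves the local environment.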
Three extension paths without touching code:
Which language model does the work is purely a matter of configuration: Anthropic Claude, OpenAI GPT, Google Gemini, AWS Bedrock, Vertex AI, or a local model running on Ollama on your own hardware. The choice between cloud convenience and on-premise control remains a decision that can be revisited later, without touching code.
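A provider registry keyed by a config value is one way to sketch such a swappable-model setup. The provider names and factory functions here are illustrative; real integrations would wrap the respective SDKs:

```python
from typing import Callable

class Client:
    """Placeholder for a chat-completion client (illustrative only)."""
    def __init__(self, model: str, backend: str):
        self.model, self.backend = model, backend

# registry: config value -> factory producing a ready client
PROVIDERS: "dict[str, Callable[[str], Client]]" = {}

def provider(name: str):
    def register(factory):
        PROVIDERS[name] = factory
        return factory
    return register

@provider("ollama")
def make_ollama(model: str) -> Client:
    return Client(model, backend="local")      # runs on your own hardware

@provider("anthropic")
def make_anthropic(model: str) -> Client:
    return Client(model, backend="cloud")

def client_from_config(cfg: dict) -> Client:
    """Swapping providers means editing config, never code."""
    return PROVIDERS[cfg["provider"]](cfg["model"])
```

Revisiting the cloud-versus-on-premise decision then amounts to changing one line of configuration, e.g. `{"provider": "ollama", "model": "llama3"}`.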
WHAT IT GIVES THEM
WHAT WE DELIBERATELY DID NOT AUTOMATE
WHY THIS PATTERN TRANSFERS
The setup works wherever a company wants to let employees work with AI without losing sensitive data to external providers — and where the actual value isn't in the language model, but in the combination of language model, company knowledge, routines, and control: tax and audit firms with client files, insurers with claims and clause libraries, mid-sized manufacturers with complex product and service portfolios, municipal administrations with citizen files, education providers with curated learning material, social and health organisations with particularly sensitive data.
The pattern: one interface → swappable language model → own knowledge base with RBAC → six import sources for existing content → scheduled agents for recurring work → automatic PII protection before every external handover → dedicated single instance instead of shared cloud.
AI takes the routine off the employee's desk — research, preparation, sorting, drafting a first version. Subject-matter and legal responsibility stays with the person — where it belongs. The data stays in the building — where it belongs. And because the platform is built open for models, knowledge sources, and tools, it doesn't become legacy in two years — it grows with what the next generation of AI models and integrations brings.
Talk to us
Your own AI platform?
Demo with your own data is possible. We bring the pseudonymisation set up and ready.
Request a demo