In-house development / Evanto product

AI PLATFORM

From the wild growth of public AI tools and unused company knowledge, a closed, in-house AI platform emerges — chat, knowledge base, document import, and automated agents under one interface, with built-in privacy automation and full data sovereignty.
  • .NET 10
  • Blazor WebAssembly
  • Photino (Desktop)

THE STARTING POINT

Employees want to work with AI — they already are, just outside the company. Texts get pasted into public chat tools, documents get sent to external translation services, research runs through outside search engines. What's convenient is at the same time a double loss: sensitive data leaves the building, and the company's own knowledge — contracts, manuals, Confluence pages, organically grown SharePoint structures, newsletter subscriptions, incoming emails — stays invisible to the AI.

Honestly: who today can say with confidence which company data has already been posted into which external AI? And who can guarantee that an AI answer an employee passes on to a customer actually comes from the current internal state of knowledge — and not from a two-year-old internet snippet?

A generic cloud AI doesn't solve either problem. Neither does a quickly assembled chat interface in front of a language model — it's just chat again, with no link to the content and routines that make up the actual work in the company.

WHAT WE BUILT

A closed platform that runs as a dedicated single instance for one company — either centrally as a web application in a Docker container, or autonomously as a native desktop app. Both variants share the same interface; employees don't notice which mode they're in. What they do notice: everything they need for AI-assisted work is in one place — and everything that goes in stays in their own environment.

The AI chat as central workspace

The entry point looks like a modern chat — answers stream in word by word, files can be attached, the language model is swappable per task (Claude, GPT, Gemini, local models, and many more), and conversations are saved and searchable. What sets it apart: every answer can carry clickable source references that lead straight to the relevant knowledge article. Employees can forward a good answer to colleagues, or download it as a Word or PDF document in corporate design — without a copy-paste break.

The knowledge base as AI context

Company documents become the actual answer source. Word, PDF, Markdown, Excel, CSV, and image files are uploaded — individually or as a bulk ZIP — automatically transferred into a vector database, and made available to the AI as context, searchable semantically: by meaning rather than keyword. The knowledge base isn't a single bucket: separate vaults for departments or projects, four confidentiality levels, role-based access, personal vaults for every user. Where a vector database isn't wanted, a slim in-memory BM25 search does the same job.
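The in-memory BM25 fallback mentioned above can be surprisingly small. A minimal sketch of how BM25 scores documents against a query, assuming whitespace tokenisation; the function name and parameters are illustrative, not the platform's actual implementation:

```python
import math
from collections import Counter

def bm25_rank(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each document against the query with BM25, no persistent index needed."""
    tokenised = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenised) / len(tokenised)
    n = len(docs)
    df = Counter()                      # document frequency per term
    for toks in tokenised:
        df.update(set(toks))
    scores = []
    for toks in tokenised:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores
```

Documents that share no term with the query score zero; the rest are ranked by term rarity and length-normalised frequency, which is exactly the "same job" a vector store does for exact-vocabulary queries.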

From originals to searchable knowledge

A dedicated import page brings content from six sources into the same pipeline:

  • Word and PDF — either via Syncfusion DocIO offline or Mistral OCR, each with images as an asset sidecar.
  • HTML and web URLs — stripped of tracking and layout clutter, data tables retained, font hierarchy recognised as Markdown headings.
  • Confluence — direct connection with a space and page browser, including linked attachments.
  • SharePoint — via Microsoft Graph with delegated authentication (in production preparation).
  • X articles — single tweets or threads via the official API as a cleanly structured knowledge article.
  • Email mailboxes — IMAP, POP3, and Exchange On-Premise via EWS: pick a mailbox, set date range and sender filter, import matches by ticking — attachments come along as assets.

Every import job runs through the stages Initial → Convert → Ready for review → Published. In the review dialog, a Markdown editor, frontmatter editor (classification, department, roles), and preview sit side by side in tabs. Only after release does the content move into the knowledge base and trigger indexing. Source-hash checks prevent duplicates; already-imported items are flagged with a reference to the existing entry.
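The stage progression and the source-hash duplicate check can be sketched in a few lines. The `ImportJob` class and the choice of SHA-256 here are illustrative assumptions, not the platform's actual types:

```python
import hashlib

# The four pipeline stages, in order
STAGES = ["Initial", "Convert", "Ready for review", "Published"]

class ImportJob:
    def __init__(self, source_bytes: bytes):
        # Hash of the original source identifies re-imports of the same content
        self.source_hash = hashlib.sha256(source_bytes).hexdigest()
        self.stage = "Initial"

    def advance(self) -> None:
        """Move to the next stage; 'Published' is terminal."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]

def is_duplicate(job: ImportJob, published_hashes: set[str]) -> bool:
    """Flag an import whose source was already published."""
    return job.source_hash in published_hashes
```

Because the hash is taken over the source bytes rather than the converted Markdown, the check catches re-imports even when the conversion settings have changed in the meantime.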

AI agents for recurring work

Where employees repeat the same AI-assisted step daily or weekly, an agent takes over. An agent is a routine described once: an input source (no input, a vault folder, or an email mailbox), a personality (via a skill definition), a tool selection with clear read/write classification (read from the knowledge base, create new articles, move articles, mark or file emails), and an optional cron schedule.

Before live operation, every agent first runs in a dry-run: it shows in full what it would do — planned write operations appear in a preview panel, without touching a single file. Once the plan looks right, you press live-run — and watch each step on a live timeline: input bundle, tool call, tool result, model response. Iteration and runtime limits prevent endless loops; a cancel button stops the run at any time. Every run is permanently logged with status, tokens, tool calls, errors, and summary.
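At its core, the dry-run/live-run split means recording planned write operations instead of executing them. A sketch under that assumption; the `AgentRun` class and its fields are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentRun:
    dry_run: bool = True
    max_steps: int = 50                           # runtime guard against endless loops
    plan: list = field(default_factory=list)      # planned writes, shown in the preview panel
    timeline: list = field(default_factory=list)  # live timeline of every step

    def write(self, op: str, target: str, apply: Callable[[], None]) -> None:
        """Record a write operation; execute it only in live mode."""
        if len(self.timeline) >= self.max_steps:
            raise RuntimeError("iteration limit reached, run cancelled")
        self.timeline.append(("tool_call", op, target))
        if self.dry_run:
            self.plan.append((op, target))        # preview only, nothing touched
        else:
            apply()                               # the actual file or email operation
```

The same routine runs in both modes; only the `dry_run` flag decides whether `apply()` fires, which is what guarantees the preview and the live run see an identical plan.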

A concrete example: web clippings from the browser land automatically in a vault folder. A scheduled agent checks the folder every night, normalises HTML residue, translates English-language articles into German, classifies them by topic, assigns tags, and moves them into the right thematic location. What used to be hours of weekly tidying now runs at three in the morning — and is ready in the morning as a clean, searchable knowledge base.

Privacy that doesn't get in the way, but works

Before every handover to a language model — chat, tool result, agent run — the content runs through a two-stage PII protection: first pattern-based, then AI-supported (Presidio-based), in German and English. Personal data is replaced with placeholders like [PERSON_1]. The AI sees the placeholder; the answer is automatically re-identified before output. Employees notice nothing, except a brief notification when something was detected. A status indicator in the header shows at any time whether the shield is active and healthy; a complete audit log secures compliance — for agents including a reference to the run and trigger.
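In miniature, the pattern-based first stage works like this: replace matches with numbered placeholders, remember the mapping, and restore the originals in the model's answer. A sketch with an illustrative email pattern (the real second stage adds Presidio-based AI detection on top, which is not shown here):

```python
import re

def pseudonymise(text: str, patterns: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Stage 1: pattern-based PII replacement with numbered placeholders."""
    mapping: dict[str, str] = {}   # placeholder -> original value
    counters: dict[str, int] = {}

    def make_repl(label: str):
        def _repl(m: re.Match) -> str:
            value = m.group(0)
            for ph, v in mapping.items():   # reuse placeholder for repeated values
                if v == value:
                    return ph
            counters[label] = counters.get(label, 0) + 1
            ph = f"[{label}_{counters[label]}]"
            mapping[ph] = value
            return ph
        return _repl

    for label, pattern in patterns.items():
        text = re.sub(pattern, make_repl(label), text)
    return text, mapping

def reidentify(text: str, mapping: dict[str, str]) -> str:
    """Restore original values in the answer before it reaches the user."""
    for ph, value in mapping.items():
        text = text.replace(ph, value)
    return text
```

The model only ever sees the placeholder side of the mapping; the mapping itself stays inside the instance, which is what makes the round trip invisible to the employee.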

Extensibility without programming

Three extension paths without touching code:

  • Skills — pre-built knowledge modules (contract analysis, code review, email drafting, web-clipping processing, and many more). Loaded via ZIP upload, hot-reloadable, picked per task by the user — and also as a personality definition for agents.
  • External tools (MCP) — connect any third-party systems via the Model Context Protocol: databases, APIs, internal tools. OAuth 2.0 client flow and HTTP header configuration included. Configurable per user.
  • Plugin architecture for import sources (in preparation) — new import sources like Notion, Jira, or your own wikis can be loaded as isolated plugins, without recompiling the core application.

One platform — swappable model

Which language model does the work is purely a question of configuration: Anthropic Claude, OpenAI GPT, Google Gemini, AWS Bedrock, Vertex AI, or a local model running on Ollama on your own hardware. The choice between cloud convenience and on-premise control remains a decision that can be revisited later — without touching code.

WHAT IT GIVES THEM

  • "Somewhere with an external provider" becomes "in our house". Sensitive data doesn't leave the environment unfiltered. Personal data is automatically pseudonymised before every AI handover.
  • Unused company knowledge becomes AI context. Six import sources (Word, PDF, HTML, Confluence, X, email) feed the same knowledge base — employees get answers from the internal state, not from the public internet.
  • Routine work becomes scheduled automation. Newsletter triage, invoice pre-checks, web-clipping clean-up, knowledge-base maintenance — anything describable can run as an agent on a schedule.
  • Dry-run makes write operations plannable — before the damage. Before an agent moves a file or archives an email, administrators see the full plan.
  • Source references prevent hallucination worries. Answers link to the specific knowledge article, not to a generic "according to the knowledge base".
  • One interface, two worlds. Web for teams, desktop for high-security or field scenarios — identical UI, identical feature set, no second training effort.
  • Vendor lock-in avoided. AI provider, vector-database path, and import sources are swappable. Switching tomorrow for regulatory or cost reasons changes a configuration — not code.

WHAT WE DELIBERATELY DID NOT AUTOMATE

  • The final approval of imports. Conversion, dedup check, and frontmatter suggestion are automatic — the move into the knowledge base is explicitly human.
  • The choice of what an agent does. The dry-run shows the plan; pressing "live run" stays a deliberate decision. Even cron-triggered agents require that the plan was validated dry at least once first.
  • The classification of confidentiality. Frontmatter is suggested, not set. Which level a document carries and who's allowed to read it stays the responsibility of the relevant person.
  • Binding statements on legal or regulatory matters. The AI answers from in-house data and points to the source — interpretation stays with the person who has the corresponding responsibility.
  • Editing content in external systems. Confluence, SharePoint, and email mailboxes are read, not written back. What's maintained stays where it's maintained.
  • Multi-tenancy. Deliberately not. One dedicated instance per company. Other tenants' data is technically unreachable, because it simply isn't in the same system.

WHY THIS PATTERN TRANSFERS

The setup works wherever a company wants to let employees work with AI without losing sensitive data to external providers — and where the actual value isn't in the language model, but in the combination of language model, company knowledge, routines, and control: tax and audit firms with client files, insurers with claims and clause libraries, mid-sized manufacturers with complex product and service portfolios, municipal administrations with citizen files, education providers with curated learning material, social and health organisations with particularly sensitive data.

The pattern: one interface → swappable language model → own knowledge base with RBAC → six import sources for existing content → scheduled agents for recurring work → automatic PII protection before every external handover → dedicated single instance instead of shared cloud.

AI takes the routine off the employee's desk — research, preparation, sorting, drafting a first version. Subject-matter and legal responsibility stays with the person — where it belongs. The data stays in the building — where it belongs. And because the platform is built open for models, knowledge sources, and tools, it doesn't become legacy in two years — it grows with what the next generation of AI models and integrations brings.

Talk to us

Two doors, one address.

Specific bottleneck?

Let us talk for 30 minutes about your use case.

No obligation, no cost, with concrete next steps at the end.

Book a 30-minute call

Your own AI platform?

See CompanyWizard live in action.

A demo with your own data is possible. We bring the pseudonymisation set up and ready.

Request a demo