Introducing Plan AI: A Bot-Free Local Meeting Recorder
Are you tired of having three different "AI notetaker bots" joining every single Zoom or Google Meet call? It feels intrusive, disrupts the meeting flow, and poses a corporate privacy risk. Not to mention the 30 minutes spent after every daily standup or sprint planning manually writing Jira or Linear tickets based on what was just discussed.
To solve this, we are introducing Plan AI — an open-core (BUSL-1.1) platform that records your meetings natively (without bots) and automatically generates perfectly scoped engineering tickets and actionable insights.
You can see a demo of how it works on our GitHub Repo or the Plan AI Website.
How it works (The Architecture)
Instead of the standard approach (spinning up headless browsers to join meetings as a bot), Plan AI uses native system audio capture. This keeps recording on-device, improves privacy, and eliminates the friction of managing virtual meeting guests.
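Native capture works differently on each OS, so the Desktop Recorder has to branch on platform. As a rough sketch (the constraint names below are illustrative, not Plan AI's actual capture API):

```typescript
// Illustrative only: the shapes and values here are assumptions, not
// Plan AI's real capture API.
type Platform = 'darwin' | 'win32' | 'linux';
type CaptureConstraints = { systemAudio: string; mic: boolean };

function captureConstraints(platform: Platform): CaptureConstraints {
  if (platform === 'darwin') {
    // macOS has no built-in loopback; system audio requires ScreenCaptureKit
    // or a virtual audio driver.
    return { systemAudio: 'screencapturekit-or-loopback-driver', mic: true };
  }
  // On Windows (and most Linux setups) Chromium's desktop capture can
  // include system audio directly.
  return { systemAudio: 'desktop-capture', mic: true };
}
```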
The project is structured as a powerful monorepo with three clients and a distributed backend:
- Desktop Recorder (Electron): Captures your OS system audio and microphone locally. No bots needed. It uploads the heavy audio streams directly to the backend.
- Mobile App (React Native/Expo): For capturing in-person meetings or recording on the go.
- Web Platform (React): Where the magic happens. While recording, you get a live meeting assistant (polling every 15s) to chat with the transcript in real-time.
- Backend Pipeline (Node.js/Express & BullMQ): Audio processing is heavy, so it's fully asynchronous. Audio goes into Redis queues -> Deepgram for fast transcription -> Python Microservice (SpeechBrain) for Voice Biometrics and speaker diarization.
- AI Orchestration: We use Qdrant (Vector DB) to store context and OpenRouter to hit advanced LLMs. Once a meeting ends, the background workers process the transcript and automatically inject assigned tasks directly into Jira, Linear, or Trello.
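In simplified form, the queue flow above is a linear state machine, with each completed job enqueueing the next stage on Redis via BullMQ. The stage names in this sketch are assumptions, not the actual queue names in the codebase:

```typescript
// Simplified model of the async audio pipeline; stage names are assumptions.
type Stage = 'uploaded' | 'transcribing' | 'diarizing' | 'extracting' | 'done';

const NEXT: Record<Stage, Stage> = {
  uploaded: 'transcribing',   // raw audio -> Deepgram
  transcribing: 'diarizing',  // transcript -> SpeechBrain voice biometrics
  diarizing: 'extracting',    // speaker-labelled transcript -> LLM workers
  extracting: 'done',         // tickets injected into Jira/Linear/Trello
  done: 'done',
};

const advance = (stage: Stage): Stage => NEXT[stage];
```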
The Mobile Companion (Expo 55)
While the Desktop app captures virtual calls, in-person meetings require a different approach. The mobile companion app is built on the latest Expo SDK 55, with Expo Router providing seamless file-based navigation. The entire codebase is strictly typed with TypeScript and shares the exact same generated types (`api.d.ts`) as the React web app and the backend.
Intelligent LLM Routing (OpenRouter & BYOK)
When building an AI app, locking into a single provider like OpenAI or Anthropic limits flexibility. Plan AI uses OpenRouter as a unified LLM gateway to dynamically route tasks based on cognitive requirements:
- Agentic Investigation: Routed to `openai/gpt-4o-mini` for incredibly fast and cost-effective context gathering.
- Final Task Extraction: Routed to `anthropic/claude-opus-4.7` to deeply understand project context and structure perfectly scoped tickets.
- Architectural Diagrams: Routed to `anthropic/claude-sonnet-4.6`, the best model we have found for generating complex Mermaid.js diagrams.
- Image Features: Routed to `black-forest-labs/flux.2-klein-4b` for high-throughput visual features.
Furthermore, Plan AI uses a BYOK (Bring Your Own Key) architecture. Users input their own OpenRouter keys per workspace, guaranteeing maximum data privacy.
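Putting the routing table and BYOK together, a request builder might look like the sketch below. The function names and request shape are illustrative; only the model IDs and OpenRouter's chat-completions endpoint come from the routing table above and OpenRouter's public API:

```typescript
// Sketch: route a task to a model and build an OpenRouter request signed
// with the workspace's own key (BYOK). Function names are assumptions.
type Task = 'investigate' | 'extract' | 'diagram';

function routeModel(task: Task): string {
  switch (task) {
    case 'investigate': return 'openai/gpt-4o-mini';
    case 'extract': return 'anthropic/claude-opus-4.7';
    case 'diagram': return 'anthropic/claude-sonnet-4.6';
  }
}

function buildRequest(task: Task, prompt: string, workspaceKey: string) {
  return {
    url: 'https://openrouter.ai/api/v1/chat/completions',
    method: 'POST' as const,
    headers: {
      Authorization: `Bearer ${workspaceKey}`, // per-workspace BYOK key
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: routeModel(task),
      messages: [{ role: 'user', content: prompt }],
    }),
  };
}
```

Because the key lives on the request rather than in a shared server secret, each workspace's traffic is billed and isolated under its own OpenRouter account.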
Semantic Memory (Qdrant Vector DB)
Having a raw transcript isn't enough to generate actionable Jira tickets. If a developer says "I'll fix the auth bug", the LLM needs to know what the "auth bug" actually is.
Plan AI integrates Qdrant, an open-source Vector Database. Every time a meeting finishes, the transcript is chunked and vectorized. When the background worker extracts tasks, it performs a semantic search first, pulling past architectural decisions and injecting them into the LLM prompt.
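Under the hood, that semantic search is a nearest-neighbour lookup over chunk embeddings. Here is a toy in-memory version to show the idea; a real deployment calls Qdrant's search API instead, and the embedding step is omitted:

```typescript
// Toy in-memory similarity search; production uses Qdrant's index.
interface Chunk { text: string; vec: number[] }

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks most similar to the query vector.
function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.vec) - cosine(query, x.vec))
    .slice(0, k);
}
```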
Bridging Tech and Non-Tech (Repomix & GitHub)
The ultimate goal of Plan AI is to completely eliminate the friction between non-technical stakeholders (Product Managers) and technical execution (Developers and AI Agents).
When a Product Manager finishes a meeting, Plan AI pushes the generated engineering tickets directly to GitHub Issues. To make those tickets immediately actionable for an AI coding assistant (like Cursor or Copilot), we integrated Repomix.
A simple command (`yarn repomix`) packs the entire monorepo into a single, highly optimized Markdown file. The workflow is magical:
- The PM speaks in the meeting. Plan AI creates a highly technical GitHub Issue.
- The developer assigns the issue to their AI coding assistant.
- The AI assistant reads the issue, consumes the `repomix.md` file to instantly understand the entire monorepo context, and writes the Pull Request autonomously.
Bleeding-Edge DevEx: GitNexus (MCP)
To safely maintain and scale a large monorepo with AI assistants, Plan AI ships with a Model Context Protocol (MCP) server via GitNexus. GitNexus indexes the entire monorepo into a local graph database.
When an AI agent is asked to modify a service, it automatically calls `gitnexus_impact()` to see the blast radius of the change, and `gitnexus_query()` to understand the exact execution flow. This makes AI pair-programming dramatically safer and more predictable.
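The impact check is essentially a reverse-dependency query over the code graph. As an illustration (the graph shape and function name here are assumptions; only the `gitnexus_impact()` tool name comes from GitNexus):

```typescript
// Toy reverse-dependency lookup: which files would a change ripple into?
// Real GitNexus answers this from its local graph database.
type DepGraph = Record<string, string[]>; // file -> files it imports

function impactOf(graph: DepGraph, file: string): string[] {
  return Object.entries(graph)
    .filter(([, imports]) => imports.includes(file))
    .map(([dependent]) => dependent);
}
```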
The "Type Safety" Workflow
One technical aspect we are quite proud of is how type safety is handled across the monorepo. Hand-writing interfaces for API responses is prone to human error and rapidly becomes technical debt. To avoid this, our single source of truth is the Prisma database schema.
Prisma feeds into TSOA controllers, which generate a `swagger.json`. A single command (`yarn update`) auto-migrates the DB, updates the swagger spec, and syncs the `api.d.ts` types across the React web, Electron desktop, and Expo mobile frontends simultaneously.
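The payoff is that every frontend consumes the same generated types. A hedged sketch of what that looks like in practice (the `Meeting` shape and helper below are invented for illustration, not the real `api.d.ts` contents):

```typescript
// Hypothetical slice of the generated api.d.ts; the real file is produced
// from the Prisma schema via TSOA and swagger.json.
interface Meeting {
  id: string;
  title: string;
  startedAt: string;
}

// One typed helper shared by the React, Electron, and Expo clients.
// fetchImpl is injectable so each platform can supply its own fetch.
async function getMeeting(
  id: string,
  fetchImpl: (url: string) => Promise<{ json(): Promise<unknown> }>
): Promise<Meeting> {
  const res = await fetchImpl(`/api/meetings/${id}`);
  return (await res.json()) as Meeting;
}
```

If the Prisma schema changes, the regenerated `api.d.ts` breaks compilation in all three clients at once, instead of failing silently at runtime in one of them.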
Why Open Core?
We decided to release this under the Business Source License (BUSL-1.1), which converts to AGPLv3 after 4 years. When you are asking people to install an Electron app that records their system audio, trust is everything. Open-sourcing the core allows the community to audit the codebase and verify that it’s completely privacy-first and not spyware. Also, it’s completely self-hostable via Docker.
If you are looking to integrate advanced AI workflows into your own products, let's talk. Our team is equipped to diagnose, stabilize, or supercharge your software endeavors.
Let's Talk AI Integration