10 Essential Steps to Build an AI-Enhanced Conference Assistant with .NET's Composable AI Toolkit
Building a smart, interactive conference assistant used to involve stitching together disparate AI components—models, vector stores, ingestion pipelines, and agent frameworks—each with its own quirks and breaking changes. But with .NET's new composable AI stack, you can now create a seamless, end-to-end experience using unified abstractions. In this article, we'll walk through the ten key steps we took to build ConferencePulse, a live conference assistant that runs polls, answers audience questions, generates insights, and summarizes sessions—all powered by AI. Whether you're building for a virtual summit or an in-person event, these building blocks will help you deliver a polished, real-time assistant.
1. Set Up the .NET Aspire Orchestrator
The foundation of any distributed AI app is reliable orchestration. We used .NET Aspire to manage dependencies like Qdrant (vector database), PostgreSQL, and Azure OpenAI. Aspire provides a unified dashboard, health checks, and service discovery, making it easy to spin up and connect microservices. In our ConferenceAssistant.AppHost project, we defined each resource and its connections. This step ensures that all components—from ingestion to agent communication—are resilient and observable.
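To make the wiring concrete, here is a minimal sketch of what the AppHost could look like, assuming the Aspire hosting integrations for Qdrant and PostgreSQL; the project and resource names (`ConferenceAssistant_Web`, `ConferenceAssistant_Ingestion`, `vectordb`, and so on) are illustrative, not taken from the real project.

```csharp
// ConferenceAssistant.AppHost/Program.cs (sketch; resource names are illustrative)
var builder = DistributedApplication.CreateBuilder(args);

// Backing services managed by Aspire
var vectorDb = builder.AddQdrant("vectordb");
var database = builder.AddPostgres("postgres").AddDatabase("conferencedb");
var openai = builder.AddConnectionString("openai"); // Azure OpenAI endpoint + key

// The Blazor front end and the ingestion worker, wired to their dependencies
builder.AddProject<Projects.ConferenceAssistant_Web>("web")
       .WithReference(vectorDb)
       .WithReference(database)
       .WithReference(openai);

builder.AddProject<Projects.ConferenceAssistant_Ingestion>("ingestion")
       .WithReference(vectorDb)
       .WithReference(openai);

builder.Build().Run();
```

Each `WithReference` call flows the connection string into the consuming project via service discovery, which is what lets the rest of the stack stay free of hard-coded endpoints.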

2. Unify AI Providers with Microsoft.Extensions.AI
Different AI models expose different APIs, but Microsoft.Extensions.AI provides a single IChatClient abstraction that works with OpenAI, Azure OpenAI, Ollama, and more. This let us swap providers without changing application code. For ConferencePulse, we used Azure OpenAI for the main RAG pipeline and local models for development. The abstraction handles chat history, streaming, and function calling uniformly.
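The payoff is that calling code never mentions a provider. A sketch of the call shape, assuming the client has already been registered in DI (the exact registration differs per provider package):

```csharp
// Sketch: the same IChatClient calls work whether the underlying client is
// Azure OpenAI, OpenAI, or Ollama; only the DI registration changes.
using Microsoft.Extensions.AI;

public static async Task AskAsync(IChatClient client)
{
    var messages = new List<ChatMessage>
    {
        new(ChatRole.System, "You are ConferencePulse, a helpful conference assistant."),
        new(ChatRole.User, "Which sessions cover .NET Aspire?")
    };

    // Buffered request/response
    ChatResponse response = await client.GetResponseAsync(messages);
    Console.WriteLine(response.Text);

    // Or stream tokens to the UI as they arrive
    await foreach (ChatResponseUpdate update in client.GetStreamingResponseAsync(messages))
    {
        Console.Write(update.Text);
    }
}
```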
3. Build a RAG Pipeline with Microsoft.Extensions.DataIngestion
To ground AI answers in real content, we needed a retrieval-augmented generation (RAG) pipeline. Microsoft.Extensions.DataIngestion offers composable stages for chunking, embedding, and storing documents. We pointed it at a GitHub repo containing session markdown; it downloaded the files, split them into chunks, and embedded them into Qdrant. This pipeline runs at startup and incrementally updates as content changes. The result: every Q&A answer and poll suggestion is grounded in actual session material.
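The DataIngestion package is in preview and its API is still settling, so rather than guess at its types, here is a hand-rolled sketch of the chunk → embed → store stages it composes, using the `IEmbeddingGenerator` abstraction from Microsoft.Extensions.AI; `SessionChunk` and the fixed-size chunking are illustrative only.

```csharp
// Sketch of the chunk → embed → upsert stages an ingestion pipeline composes.
using Microsoft.Extensions.AI;

public record SessionChunk(string Id, string Text);

public static async Task IngestAsync(
    IEmbeddingGenerator<string, Embedding<float>> embedder,
    IEnumerable<string> markdownFiles,
    Func<SessionChunk, ReadOnlyMemory<float>, Task> upsert)
{
    foreach (var file in markdownFiles)
    {
        string text = await File.ReadAllTextAsync(file);

        // Naive fixed-size chunking for illustration; real pipelines split
        // on markdown headings so chunks align with session sections.
        var chunks = Enumerable.Range(0, (text.Length + 999) / 1000)
            .Select(i => new SessionChunk(
                $"{file}#{i}",
                text.Substring(i * 1000, Math.Min(1000, text.Length - i * 1000))));

        foreach (var chunk in chunks)
        {
            var embeddings = await embedder.GenerateAsync([chunk.Text]);
            await upsert(chunk, embeddings[0].Vector);
        }
    }
}
```

Passing the upsert step in as a delegate keeps the sketch store-agnostic; in the real pipeline that stage writes to Qdrant through the vector store abstraction from the next step.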
4. Manage Vector Data with Microsoft.Extensions.VectorData
Once documents are embedded, you need a consistent way to perform similarity searches. Microsoft.Extensions.VectorData provides an IVectorStore abstraction that supports Qdrant, Azure AI Search, and more. We used it to index embedding vectors and retrieve the top-k results for each user query. The abstraction handles collection creation, upsert operations, and distance calculations, so we could focus on search logic rather than vendor-specific APIs.
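A sketch of a record model and a top-k search follows; note that attribute and method names have shifted slightly across Microsoft.Extensions.VectorData preview releases, so treat this as the shape rather than the exact surface.

```csharp
// Sketch: an attributed record model plus a top-k similarity search.
using Microsoft.Extensions.VectorData;

public class SessionRecord
{
    [VectorStoreRecordKey]
    public Guid Id { get; set; }

    [VectorStoreRecordData]
    public string Text { get; set; } = "";

    [VectorStoreRecordVector(Dimensions: 1536)]
    public ReadOnlyMemory<float> Embedding { get; set; }
}

public static async Task<List<SessionRecord>> SearchAsync(
    IVectorStore store, ReadOnlyMemory<float> queryVector)
{
    var collection = store.GetCollection<Guid, SessionRecord>("sessions");
    await collection.CreateCollectionIfNotExistsAsync();

    var results = new List<SessionRecord>();
    var search = await collection.VectorizedSearchAsync(queryVector, new() { Top = 5 });
    await foreach (var match in search.Results)
    {
        results.Add(match.Record); // match.Score carries the distance if needed
    }
    return results;
}
```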
5. Create Intelligent Agents with Microsoft Agent Framework
For complex tasks like generating session summaries, we built multiple AI agents using Microsoft Agent Framework. Each agent has a specific role: one analyzes poll trends, another interprets Q&A sentiment, and a third merges findings into a cohesive summary. Agents communicate via a shared message bus and can access tools like the vector store. This modular approach lets you extend functionality without rewriting core logic.
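A sketch of the three role-specific agents, each built from the same IChatClient; the Agent Framework is evolving quickly, so the `CreateAIAgent` helper and response shape shown here may differ from the release you are using.

```csharp
// Sketch: three specialist agents built over one IChatClient.
// API names are assumptions against a preview of Microsoft Agent Framework.
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

public static async Task<string> SummarizeAsync(
    IChatClient chatClient, string pollData, string questions)
{
    AIAgent pollAnalyst = chatClient.CreateAIAgent(
        instructions: "Analyze poll results and describe voting trends.",
        name: "PollAnalyst");

    AIAgent sentimentAgent = chatClient.CreateAIAgent(
        instructions: "Classify the sentiment of audience Q&A.",
        name: "QASentiment");

    AIAgent summarizer = chatClient.CreateAIAgent(
        instructions: "Merge the analyses into one concise session summary.",
        name: "Summarizer");

    // Run the specialists, then hand their findings to the summarizer.
    string trends = (await pollAnalyst.RunAsync(pollData)).Text;
    string sentiment = (await sentimentAgent.RunAsync(questions)).Text;
    return (await summarizer.RunAsync($"{trends}\n{sentiment}")).Text;
}
```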
6. Integrate Tools via Model Context Protocol (MCP)
Agents need real-time data from the conference platform. We used Model Context Protocol (MCP) to expose tools like "get live poll results" or "fetch current question". MCP standardizes how AI models call external functions, so our agents could act without hard-coded dependencies. The MCP server runs as a .NET service, and the client code in the Blazor app invokes it seamlessly.
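With the MCP C# SDK, tools are plain attributed methods. A sketch, where `PollService` is a hypothetical app service standing in for our real poll backend:

```csharp
// Sketch: MCP tools exposed via the ModelContextProtocol C# SDK.
// PollService is a hypothetical stand-in for the real poll backend.
using System.ComponentModel;
using ModelContextProtocol.Server;

public class PollService // hypothetical app service
{
    public string GetCurrentResultsAsJson() => "{}";
    public string GetCurrentQuestion() => "";
}

[McpServerToolType]
public static class ConferenceTools
{
    [McpServerTool, Description("Gets the live results of the currently active poll.")]
    public static string GetLivePollResults(PollService polls)
        => polls.GetCurrentResultsAsJson();

    [McpServerTool, Description("Fetches the audience question currently on screen.")]
    public static string FetchCurrentQuestion(PollService polls)
        => polls.GetCurrentQuestion();
}
```

Registering the tools is a one-liner in the server's startup (`builder.Services.AddMcpServer().WithToolsFromAssembly()` in the SDK we tried), after which any MCP-capable model can discover and call them.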

7. Implement Real-Time Polls and Q&A
ConferencePulse uses Blazor Server's SignalR integration for real-time updates. When a presenter starts a poll, the AI generates options based on session content. Attendees scan a QR code to join and vote; results update instantly on all screens. Similarly, the Q&A panel pulls answers from the RAG pipeline and streams them to the room. The combination of AI-generated content and live UI creates an engaging experience.
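Because every Blazor Server circuit already has a SignalR connection, cross-client updates only need a shared singleton that raises an event; each component subscribes and re-renders via `InvokeAsync(StateHasChanged)`. A minimal sketch of that state service (the real one also tracks the poll question and options):

```csharp
// Sketch: a singleton poll state shared by every connected Blazor Server
// circuit. Components subscribe to Changed in OnInitialized and unsubscribe
// in Dispose, calling InvokeAsync(StateHasChanged) to re-render.
public class PollState
{
    private readonly Dictionary<string, int> _votes = new();

    public event Action? Changed;

    public IReadOnlyDictionary<string, int> Votes
    {
        get { lock (_votes) return new Dictionary<string, int>(_votes); }
    }

    public void Vote(string option)
    {
        lock (_votes)
            _votes[option] = _votes.GetValueOrDefault(option) + 1;
        Changed?.Invoke(); // fan the update out to all subscribed circuits
    }
}
```

Registered with `builder.Services.AddSingleton<PollState>()`, one vote from any attendee's phone triggers a re-render on every screen in the room.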
8. Generate Live Insights and Session Summaries
As polls and questions accumulate, the system runs background analysis. The insight engine detects trends—like a surge in questions about deployment—and surfaces them on the presenter's dashboard. At session end, a multi-agent workflow concurrently processes all engagement data, then merges findings into a concise summary. This summary is saved to the knowledge base for future reference.
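The concurrent fan-out-then-merge shape is ordinary `Task` composition. A generic sketch, with the three delegates standing in for the agent calls:

```csharp
// Sketch: run the poll and Q&A analyses concurrently, then merge the results.
// The delegates stand in for the specialist and summarizer agent calls.
public static async Task<string> SummarizeSessionAsync(
    Func<Task<string>> analyzePolls,
    Func<Task<string>> analyzeQa,
    Func<string, Task<string>> summarize)
{
    Task<string> polls = analyzePolls();   // both analyses start immediately
    Task<string> qa = analyzeQa();
    await Task.WhenAll(polls, qa);         // wait for the slower of the two

    return await summarize($"Polls:\n{polls.Result}\n\nQ&A:\n{qa.Result}");
}
```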
9. Structure the Blazor Server UI
The front end is built with Blazor Server, which provides low-latency interactivity without heavy client-side bloat. We organized the UI into three main areas: an attendee view (polls, Q&A, live feed), a presenter dashboard (insights, controls), and an admin panel (session management). Components are lazily loaded and share state through dependency injection. The result is a responsive, accessible interface that works on both desktop and mobile.
10. Deploy and Scale with Aspire
Finally, we used .NET Aspire to deploy ConferencePulse locally for development and to Azure for production. Aspire's built-in telemetry and health checks helped us monitor each service—ingestion, vector store, agent workers—and scale horizontally as audience size grew. The entire stack runs on .NET 10, with containerization managed by Aspire. This setup ensures your conference assistant can handle thousands of concurrent users without manual tuning.
Building ConferencePulse taught us that the key to a successful AI-powered application is not just the models but the infrastructure that ties them together. With .NET's composable AI stack—Microsoft.Extensions.AI, DataIngestion, VectorData, Agent Framework, MCP, and Aspire—you can move from prototype to production faster than ever. We hope these ten steps give you a blueprint for your own interactive conference assistant.