The complete MongoDB Developer Day RAG lab reimagined as a VAI CLI workflow. Use it as a backup during the Build RAG Applications using MongoDB lab or afterward to reinforce concepts: ingest documentation chunks, create a vector index, and run RAG chat — end to end from the command line.
• Docker installed (to run the local Atlas image) or access to a MongoDB Atlas cluster.
• Node.js >= 18 installed.
• VAI CLI installed globally: `npm install -g voyageai-cli`.
• A VOYAGE_API_KEY for embeddings and chat (free tier at voyageai.com), or another configured LLM provider for generation.
Choose your MongoDB setup. Both options work identically with the VAI CLI — the Docker local image includes full Atlas Vector Search support.
Start MongoDB Atlas Local
The mongodb-atlas-local Docker image provides a single-node MongoDB instance with full Atlas Search and Vector Search support — no cloud account needed.
Install & Configure VAI CLI
Install the CLI globally and point it at your local MongoDB instance.
This demo mirrors the full MongoDB Developer Day RAG lab. Use it as a backup reference during the lab (if you get stuck or want to see the CLI equivalent) or afterward to reinforce concepts with a single-terminal workflow.
Open Build RAG Applications using MongoDB

| Lab section | VAI command | Note |
|---|---|---|
| 20 Dev Environment: Setup prerequisites | `docker run -d --name mongodb-atlas-local -p 27017:27017 mongodb/mongodb-atlas-local:latest` | Same local MongoDB. VAI uses the terminal instead of Jupyter notebooks. |
| 30 Prepare the Data, Step 2: Load the dataset | `vai ingest --file docs/demos/rag-devday-docs.jsonl ...` | Lab loads mongodb_docs.json. VAI ingests from JSONL. Use the full workshop data for production. |
| 30 Prepare the Data, Step 3: Chunk and embed | `vai ingest ... --field embedding --text-field text` | Lab uses RecursiveCharacterTextSplitter + voyage-context-3. VAI chunks and embeds in one step with voyage-4. |
| 30 Prepare the Data, Step 4: Ingest into MongoDB | `vai ingest --db mongodb_genai_devday_rag --collection knowledge_base ...` | Same db and collection. VAI writes embedded chunks directly. |
| 40 Perform Vector Search: Create vector index | `vai index create --db mongodb_genai_devday_rag --collection knowledge_base --field embedding --dimensions 1024` | Same vector index definition. 1024 dimensions for voyage-4. |
| 40 Perform Vector Search: Vector search queries | `vai chat --db mongodb_genai_devday_rag --collection knowledge_base` | `vai chat` embeds the query, runs $vectorSearch, retrieves context, and passes it to the LLM. |
| 50 Build RAG App, Step 7: Build the RAG application | `vai chat ...` (create_prompt + generate_answer in one command) | Lab builds create_prompt and generate_answer. `vai chat` does both: retrieve context, assemble prompt, call the LLM. |
| 50 Build RAG App: Add reranking | `vai chat ...` (reranking enabled by default) | `vai chat` uses reranking when available. Use `--no-rerank` to compare. |
| 60 Add Memory, Step 8: Add memory | `vai chat` (omit --no-history for session memory) | `vai chat` persists sessions. Use `vai chat --session <id>` to resume. The lab stores history in MongoDB. |
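The `vai index create` row corresponds to a standard Atlas Vector Search index definition. A minimal Python sketch of that definition (the index name `vector_index` and `"cosine"` similarity are assumptions about the CLI's defaults; the path and dimensions come from the command above):

```python
# Atlas Vector Search index definition for the knowledge_base collection.
# 1024 dimensions matches voyage-4 embeddings; "cosine" similarity and the
# index name are assumed defaults -- check the CLI output for actual values.
index_definition = {
    "name": "vector_index",
    "type": "vectorSearch",
    "definition": {
        "fields": [
            {
                "type": "vector",
                "path": "embedding",    # field written by `vai ingest`
                "numDimensions": 1024,  # voyage-4 output size
                "similarity": "cosine",
            }
        ]
    },
}
```

With a MongoDB driver, a definition of this shape is what you would pass to `create_search_index` on the collection.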
The lab teaches RAG from first principles: chunk, embed, store, index, retrieve, prompt, generate. This VAI demo gives you the same workflow in a reproducible CLI form. Run both to reinforce the mental model: Python for exploration, VAI for automation and demos.
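The first step in that list, chunking, can be sketched as a character-window splitter with overlap (a simplified stand-in for the lab's RecursiveCharacterTextSplitter; the sizes are illustrative):

```python
def chunk_text(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into fixed-size character windows with overlap.

    Stand-in for the lab's RecursiveCharacterTextSplitter; real splitters
    prefer to break on paragraph and sentence boundaries.
    """
    step = size - overlap
    return [text[i : i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("word " * 100, size=100, overlap=20)
# Consecutive chunks share their overlapping tail/head.
assert chunks[0][-20:] == chunks[1][:20]
```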
See the exact VAI command, the matching Voyage AI layer, and the MongoDB query shape behind the demo.
vai chat --db mongodb_genai_devday_rag --collection knowledge_base
vai chat is the RAG entrypoint. It embeds your question, runs vector search, retrieves the top chunks, assembles a prompt with context, and calls your configured LLM to generate an answer.
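The retrieval step is an aggregation pipeline built around MongoDB's `$vectorSearch` stage. A Python sketch of the query shape, assuming an index named `vector_index` (the candidate count, limit, and projection are illustrative, not the CLI's exact values):

```python
query_vector = [0.1] * 1024  # placeholder for the embedded user question

pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",   # must match the created index name
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": 100,      # ANN candidates scanned
            "limit": 5,                # top chunks passed to the prompt
        }
    },
    # Project the chunk text plus the similarity score for prompt assembly.
    {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
]
```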
More shareable workflows from the same VAI demo library.
Build a tiny Atlas-backed RAG chat flow using local nano embeddings and Ollama for generation.
VAI command
vai chat --db "$DEMO_DB" --collection "$DEMO_COLLECTION" --local --llm-provider ollama --llm-model "$OLLAMA_MODEL" --llm-base-url http://localhost:11434 --no-history --no-stream
Prerequisites
Ollama is installed and running locally.
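For the generation half, a sketch of the non-streaming request the CLI would send to Ollama's documented `/api/generate` endpoint (the prompt template and default model name are illustrative assumptions, not the CLI's actual internals):

```python
import json

def build_ollama_request(question: str, context_chunks: list[str],
                         model: str = "llama3") -> tuple[str, bytes]:
    """Build the URL and JSON body for a non-streaming Ollama generation call."""
    prompt = (
        "Answer using only this context:\n\n"
        + "\n---\n".join(context_chunks)
        + f"\n\nQuestion: {question}"
    )
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return "http://localhost:11434/api/generate", body.encode()

url, body = build_ollama_request("What is vector search?", ["chunk one", "chunk two"])
```

The base URL matches the `--llm-base-url http://localhost:11434` flag in the command above.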
Run the full workflow in one command: create sample docs, chunk them, embed them, store them in Atlas, and auto-create the vector index.
VAI command
vai pipeline /tmp/vai-demo-docs/ --db vai_demo --collection knowledge --create-index
Prerequisites
A valid VOYAGE_API_KEY is set in the environment.
Walk through the classic retrieval stack: embed the query, run Atlas vector search, rerank the candidates, then compare the result to a vector-only pass.
VAI command
vai query 'how does vector search work?' --db vai_demo --collection knowledge --model voyage-4-lite
Prerequisites
A valid VOYAGE_API_KEY is set in the environment.
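The vector-only pass ranks candidates purely by embedding similarity; the rerank pass then rescores those candidates with a reranking model. A pure-Python sketch of the first half, cosine scoring (toy 2-d vectors stand in for 1024-d voyage embeddings):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: one of the metrics $vectorSearch can rank by."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [1.0, 0.0]
candidates = {"doc_a": [0.9, 0.1], "doc_b": [0.1, 0.9]}
ranked = sorted(candidates, key=lambda d: cosine(query, candidates[d]), reverse=True)
# doc_a points nearly the same direction as the query, so it ranks first.
```

Reranking would take `ranked`, score each document's full text against the query with a reranker, and reorder; `--no-rerank` skips that second pass so you can compare the two orderings.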