RAG
MongoDB Atlas
Embeddings
Chat
Pipeline
Docker

RAG Developer Day (VAI Edition)

The complete MongoDB Developer Day RAG lab reimagined as a VAI CLI workflow. Use it as a backup during the Build RAG Applications using MongoDB lab or afterward to reinforce concepts: ingest documentation chunks, create a vector index, and run RAG chat — end to end from the command line.

MongoDB Atlas
API key
Prerequisites

Docker installed (for local MongoDB Atlas) or access to a MongoDB Atlas cluster.

Node.js >= 18 installed.

VAI CLI installed globally: `npm install -g voyageai-cli`.

VOYAGE_API_KEY for embeddings and chat (free tier at voyageai.com), or another configured LLM provider.

Environment Setup

Choose your MongoDB setup. Both options work identically with the VAI CLI — the Docker local image includes full Atlas Vector Search support.

1. Start MongoDB Atlas Local

The mongodb-atlas-local Docker image provides a single-node MongoDB instance with full Atlas Search and Vector Search support — no cloud account needed.

$ docker run -d --name mongodb-atlas-local -p 27017:27017 mongodb/mongodb-atlas-local:latest
$ sleep 5
$ mongosh --eval "db.runCommand({ ping: 1 })"
2. Install & Configure VAI CLI

Install the CLI globally and point it at your local MongoDB instance.

$ npm install -g voyageai-cli
$ vai config set mongodb-uri "mongodb://localhost:27017"
$ vai --version
Lab companion

This demo mirrors the full MongoDB Developer Day RAG lab. Use it as a backup reference during the lab (if you get stuck or want to see the CLI equivalent) or afterward to reinforce concepts with a single-terminal workflow.

Lab step → VAI equivalent
| Lab section | Lab step | VAI command | Note |
| --- | --- | --- | --- |
| 20 Dev Environment | Setup prerequisites | `docker run -d --name mongodb-atlas-local -p 27017:27017 mongodb/mongodb-atlas-local:latest` | Same local MongoDB. VAI uses the terminal instead of Jupyter notebooks. |
| 30 Prepare the Data | Step 2: Load the dataset | `vai ingest --file docs/demos/rag-devday-docs.jsonl ...` | Lab loads mongodb_docs.json. VAI ingests from JSONL. Use the full workshop data for production. |
| 30 Prepare the Data | Step 3: Chunk and embed | `vai ingest ... --field embedding --text-field text` | Lab uses RecursiveCharacterTextSplitter + voyage-context-3. VAI chunks and embeds in one step with voyage-4. |
| 30 Prepare the Data | Step 4: Ingest into MongoDB | `vai ingest --db mongodb_genai_devday_rag --collection knowledge_base ...` | Same db and collection. VAI writes embedded chunks directly. |
| 40 Perform Vector Search | Create vector index | `vai index create --db mongodb_genai_devday_rag --collection knowledge_base --field embedding --dimensions 1024` | Same vector index definition. 1024 dimensions for voyage-4. |
| 40 Perform Vector Search | Vector search queries | `vai chat --db mongodb_genai_devday_rag --collection knowledge_base` | `vai chat` embeds the query, runs `$vectorSearch`, retrieves context, and passes it to the LLM. |
| 50 Build RAG App | Step 7: Build the RAG application | `vai chat ...` | Lab builds `create_prompt` and `generate_answer`. `vai chat` does both: retrieve context, assemble prompt, call LLM. |
| 50 Build RAG App | Add reranking | `vai chat ...` (reranking enabled by default) | `vai chat` uses reranking when available. Use `--no-rerank` to compare. |
| 60 Add Memory | Step 8: Add memory | `vai chat` (omit `--no-history` for session memory) | `vai chat` persists sessions. Use `vai chat --session <id>` to resume. The lab stores history in MongoDB. |
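The "chunk and embed" row above can be pictured with a toy splitter. This is an illustrative sketch only: the sliding-window approach, window size, and overlap are assumptions, not VAI's actual chunking parameters.

```python
# Toy sliding-window chunker, illustrating the "chunk" half of
# `vai ingest ... --text-field text`. Sizes are made up for illustration;
# VAI's real chunker and defaults may differ.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk would then be embedded (voyage-4, 1024 dimensions) and written to the collection with the vector stored in the `embedding` field.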

The lab teaches RAG from first principles: chunk, embed, store, index, retrieve, prompt, generate. This VAI demo gives you the same workflow in a reproducible CLI form. Run both to reinforce the mental model: Python for exploration, VAI for automation and demos.

Under the hood

See the exact VAI command, the matching Voyage AI layer, and the MongoDB query shape behind the demo.

vai chat --db mongodb_genai_devday_rag --collection knowledge_base

vai chat is the RAG entrypoint. It embeds your question, runs vector search, retrieves the top chunks, assembles a prompt with context, and calls your configured LLM to generate an answer.
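The retrieval step can be written out as a plain aggregation pipeline. Below is a sketch of the shape such a `$vectorSearch` query takes; the index name `vector_index` and the `numCandidates`/`limit` values are assumptions for illustration, while the 1024-dimension vector matches voyage-4.

```python
# Sketch of the aggregation shape behind vai chat's retrieval step.
# Index name and candidate/limit values are illustrative assumptions.
query_vector = [0.0] * 1024  # placeholder for the real query embedding

pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",     # assumed index name
            "path": "embedding",         # matches --field embedding
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 5,
        }
    },
    # Keep only the chunk text and its similarity score for the prompt.
    {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
]
```

The top chunks returned by this pipeline become the context block in the prompt sent to the LLM.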

Share or copy this demo

Keep it lightweight. The prepared text stays behind the buttons.

Open canonical URL

Share

Copy

LinkedIn opens the share dialog and copies the prepared text so you can paste it in quickly.

Exact commands

The full walkthrough is included here so anyone can replay the demo exactly as published.

$ docker run -d --name mongodb-atlas-local -p 27017:27017 mongodb/mongodb-atlas-local:latest
$ sleep 5 && mongosh --eval "db.runCommand({ ping: 1 })"
$ npm install -g voyageai-cli
$ vai config set mongodb-uri "mongodb://localhost:27017"
$ vai ingest --file docs/demos/rag-devday-docs.jsonl --db mongodb_genai_devday_rag --collection knowledge_base --field embedding --text-field text --batch-size 5
$ vai index create --db mongodb_genai_devday_rag --collection knowledge_base --field embedding --dimensions 1024 --similarity cosine
$ vai chat --db mongodb_genai_devday_rag --collection knowledge_base --no-history --no-stream
Chat input
> What are some best practices for data backups in MongoDB?
Chat input
> How do I resolve alerts in MongoDB?
Chat input
> /quit
$ docker stop mongodb-atlas-local && docker rm mongodb-atlas-local
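The ingest file is JSONL: one JSON object per line, each with a `text` field to chunk and embed (matching `--text-field text`). A hypothetical two-record example; any field besides `text` is invented here for illustration.

```python
import json

# Two made-up records in the JSONL shape `vai ingest --text-field text` reads.
records = [
    {"text": "Back up MongoDB with snapshots or mongodump on a schedule.",
     "source": "backups"},
    {"text": "Atlas alerts fire when a monitored metric crosses its threshold.",
     "source": "alerts"},
]

# JSONL = one compact JSON document per line, newline-separated.
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```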

Related demos

More shareable workflows from the same VAI demo library.

RAG
Local Inference
Featured
Local RAG Chat With Ollama And Nano

Build a tiny Atlas-backed RAG chat flow using local nano embeddings and Ollama for generation.

Requires Ollama
Atlas

VAI command

vai chat --db "$DEMO_DB" --collection "$DEMO_COLLECTION" --local --llm-provider ollama --llm-model "$OLLAMA_MODEL" --llm-base-url http://localhost:11434 --no-history --no-stream


Prerequisites

Ollama is installed and running locally.

Pipeline
MongoDB Atlas
Featured
End-to-End Atlas Pipeline

Run the full workflow in one command: create sample docs, chunk them, embed them, store them in Atlas, and auto-create the vector index.

Atlas
API key

VAI command

vai pipeline /tmp/vai-demo-docs/ --db vai_demo --collection knowledge --create-index


Prerequisites

A valid VOYAGE_API_KEY is set in the environment.

Retrieval
Reranking
Featured
Two-Stage Retrieval With Reranking

Walk through the classic retrieval stack: embed the query, run Atlas vector search, rerank the candidates, then compare the result to a vector-only pass.

Atlas
API key

VAI command

vai query 'how does vector search work?' --db vai_demo --collection knowledge --model voyage-4-lite


Prerequisites

A valid VOYAGE_API_KEY is set in the environment.
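The two-stage idea behind that last demo can be sketched in plain Python: a cheap vector pass recalls a broad candidate set, then a second model re-scores only those candidates. The word-overlap reranker and 2-D embeddings below are toy stand-ins for Voyage's real models, purely for illustration.

```python
import math

# Toy two-stage retrieval: broad vector recall, then reranking.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, docs, k=10):
    # Stage 1: cheap recall by vector similarity over every document.
    return sorted(docs, key=lambda d: cosine(query_vec, d["embedding"]),
                  reverse=True)[:k]

def rerank(query_text, candidates, top_n=3):
    # Stage 2: a (toy) relevance model re-scores only the candidates.
    q = set(query_text.lower().split())
    return sorted(candidates, key=lambda d: len(q & set(d["text"].split())),
                  reverse=True)[:top_n]

docs = [
    {"text": "vector search works by comparing embeddings", "embedding": [1.0, 0.0]},
    {"text": "sharding distributes data across nodes", "embedding": [0.6, 0.8]},
    {"text": "indexes speed up point queries", "embedding": [0.0, 1.0]},
]

candidates = retrieve([1.0, 0.0], docs, k=2)
best = rerank("how does vector search work", candidates, top_n=1)
```

Comparing `best` against the raw `candidates` ordering is the vector-only vs. reranked comparison the demo walks through.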