Walk through the classic retrieval stack: embed the query, run Atlas vector search, rerank the candidates, then compare the result to a vector-only pass.
Prerequisites
• A valid VOYAGE_API_KEY is set in the environment.
• MongoDB Atlas is configured through MONGODB_URI or `vai config set mongodb-uri`.
• The `vai_demo.knowledge` collection already exists, for example from running the pipeline demo first.
See the exact VAI command, the matching Voyage AI layer, and the MongoDB query shape behind the demo.
VAI command
vai query 'how does vector search work?' --db vai_demo --collection knowledge --model voyage-4-lite
This high-level command packages the canonical RAG retrieval pattern into a single CLI step. The second command in the recording disables reranking so the precision gain is visible, not theoretical.
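Under the hood, the three stages look roughly like this in Python. This is a minimal sketch, not vai's actual internals: it assumes the voyageai and pymongo SDKs, an Atlas index named `vector_index` over an `embedding` field, a `text` field on each document, and the `rerank-2.5-lite` model — all illustrative choices.

```python
import os


def build_vector_search_stage(query_vector, index="vector_index",
                              path="embedding", limit=5):
    """Build the $vectorSearch aggregation stage Atlas expects."""
    return {
        "$vectorSearch": {
            "index": index,
            "path": path,
            "queryVector": query_vector,
            "numCandidates": limit * 20,  # oversample candidates for recall
            "limit": limit,
        }
    }


def query_with_rerank(question, db="vai_demo", coll="knowledge"):
    import voyageai
    from pymongo import MongoClient

    vo = voyageai.Client()  # reads VOYAGE_API_KEY
    vec = vo.embed([question], model="voyage-4-lite",
                   input_type="query").embeddings[0]
    collection = MongoClient(os.environ["MONGODB_URI"])[db][coll]
    candidates = list(collection.aggregate([build_vector_search_stage(vec)]))
    # A vector-only pass would stop here; reranking re-scores the candidates.
    reranked = vo.rerank(question, [c["text"] for c in candidates],
                         model="rerank-2.5-lite")
    return [candidates[r.index] for r in reranked.results]
```

Skipping the `rerank` call and returning `candidates` directly gives the vector-only baseline the demo compares against.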
The full walkthrough is included here so anyone can replay the demo exactly as published.
More shareable workflows from the same VAI demo library.
Run the full workflow in one command: create sample docs, chunk them, embed them, store them in Atlas, and auto-create the vector index.
VAI command
vai pipeline /tmp/vai-demo-docs/ --db vai_demo --collection knowledge --create-index
Prerequisites
• A valid VOYAGE_API_KEY is set in the environment.
• MongoDB Atlas is configured through MONGODB_URI or `vai config set mongodb-uri`.
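The stages behind that one command can be sketched as follows. This is an illustrative outline using the voyageai and pymongo SDKs, not vai's implementation: the fixed-size character chunker, field names, index name, and the 1024-dimension assumption for `voyage-4-lite` are all guesses.

```python
import os
from pathlib import Path


def chunk(text, size=800, overlap=100):
    """Split text into overlapping fixed-size character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]


def run_pipeline(src_dir="/tmp/vai-demo-docs", db="vai_demo", coll="knowledge"):
    import voyageai
    from pymongo import MongoClient
    from pymongo.operations import SearchIndexModel

    vo = voyageai.Client()  # reads VOYAGE_API_KEY
    collection = MongoClient(os.environ["MONGODB_URI"])[db][coll]
    for path in (p for p in Path(src_dir).iterdir() if p.is_file()):
        chunks = chunk(path.read_text())
        vecs = vo.embed(chunks, model="voyage-4-lite",
                        input_type="document").embeddings
        collection.insert_many([
            {"text": c, "embedding": v, "source": path.name}
            for c, v in zip(chunks, vecs)])
    # --create-index: define the Atlas vector index over the embedding field.
    collection.create_search_index(SearchIndexModel(
        name="vector_index", type="vectorSearch",
        definition={"fields": [{"type": "vector", "path": "embedding",
                                "numDimensions": 1024,  # assumed model dim
                                "similarity": "cosine"}]}))
```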
Rerank intentionally messy candidate documents against a query, then compare the full reranker to the lite version to show the latency-precision tradeoff.
VAI command
vai rerank 'how do I connect to MongoDB Atlas?' --documents 'Use the connection string from your Atlas dashboard' 'Python is a popular language' 'Atlas supports vectorSearch aggregation' 'Copy your URI and pass it to MongoClient' 'The weather in San Francisco is mild'
Prerequisites
• A valid VOYAGE_API_KEY is set in the environment.
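Calling the reranker directly looks roughly like this — a sketch assuming the voyageai Python SDK and the `rerank-2.5` / `rerank-2.5-lite` model names; timing the same call with both models makes the latency-precision tradeoff concrete.

```python
import time


def order_by_score(documents, scores):
    """Return documents sorted by relevance score, highest first."""
    ranked = sorted(zip(documents, scores), key=lambda p: p[1], reverse=True)
    return [doc for doc, _ in ranked]


def rerank_demo(query, documents, model="rerank-2.5"):
    import voyageai

    vo = voyageai.Client()  # reads VOYAGE_API_KEY
    start = time.perf_counter()
    result = vo.rerank(query, documents, model=model)
    elapsed = time.perf_counter() - start
    # r.index maps each score back to its position in the input list
    scores = [0.0] * len(documents)
    for r in result.results:
        scores[r.index] = r.relevance_score
    return order_by_score(documents, scores), elapsed
```

With the messy candidate list above, the off-topic documents (Python trivia, San Francisco weather) should sink to the bottom of the returned ordering.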
Build a tiny Atlas-backed RAG chat flow using local nano embeddings and Ollama for generation.
VAI command
vai chat --db "$DEMO_DB" --collection "$DEMO_COLLECTION" --local --llm-provider ollama --llm-model "$OLLAMA_MODEL" --llm-base-url http://localhost:11434 --no-history --no-stream
Prerequisites
• Ollama is installed and running locally.
• MongoDB Atlas is configured through MONGODB_URI or `vai config set mongodb-uri`.
• The DEMO_DB, DEMO_COLLECTION, and OLLAMA_MODEL environment variables are set.
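One turn of that chat flow can be sketched as below. The prompt template is illustrative and the retrieval step is elided; only Ollama's documented `/api/generate` endpoint is assumed, with `stream: false` mirroring `--no-stream`.

```python
import json
import os
import urllib.request


def build_prompt(question, contexts):
    """Assemble a grounded prompt from retrieved chunks."""
    ctx = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(contexts))
    return (f"Answer using only the context below.\n\n{ctx}\n\n"
            f"Question: {question}\nAnswer:")


def ask_ollama(prompt, model=None, base_url="http://localhost:11434"):
    """Send one non-streaming generation request to a local Ollama server."""
    body = json.dumps({
        "model": model or os.environ.get("OLLAMA_MODEL", "llama3.2"),
        "prompt": prompt,
        "stream": False,  # mirrors --no-stream
    }).encode()
    req = urllib.request.Request(f"{base_url}/api/generate", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In the real flow, `contexts` would come from the same vector retrieval shown in the query demo, using the local embedder instead of the Voyage API.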