Run the full workflow in one command: create sample docs, chunk them, embed them, store them in Atlas, and auto-create the vector index. This is the highest-leverage ingestion demo in the gallery: a single command orchestrates chunking, embedding, Atlas writes, and optional vector index creation.

VAI command

vai pipeline /tmp/vai-demo-docs/ --db vai_demo --collection knowledge --create-index

Prerequisites

A valid VOYAGE_API_KEY is set in the environment.
MongoDB Atlas is configured through MONGODB_URI or `vai config set mongodb-uri`.
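The stages that the pipeline command chains together can be sketched in Python. This is an illustrative sketch, not VAI internals: the chunker parameters, the `text`/`embedding`/`source` field names, and the 1024-dimension assumption are all placeholders; only the index definition follows the documented Atlas Vector Search format.

```python
# Hypothetical sketch of the stages `vai pipeline` chains: chunk -> embed -> store -> index.
# Function and field names are illustrative assumptions, not VAI's actual internals.

def chunk_fixed(text: str, size: int = 200, overlap: int = 20) -> list[str]:
    """Split text into fixed-size character windows with a small overlap."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def to_document(chunk: str, embedding: list[float], source: str) -> dict:
    """Shape of one record as it would land in the Atlas collection (assumed field names)."""
    return {"text": chunk, "embedding": embedding, "source": source}

# Atlas Vector Search index definition of the kind --create-index would submit.
# numDimensions depends on the embedding model; 1024 is an assumption.
index_definition = {
    "fields": [
        {
            "type": "vector",
            "path": "embedding",
            "numDimensions": 1024,
            "similarity": "cosine",
        }
    ]
}
```

Each chunk is embedded once at ingest time; the index definition only needs to be created once per collection, which is why index creation is an optional flag rather than part of every run.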
Walk through the classic retrieval stack: embed the query, run Atlas vector search, rerank the candidates, then compare the result to a vector-only pass.
VAI command
vai query 'how does vector search work?' --db vai_demo --collection knowledge --model voyage-4-lite
Prerequisites
A valid VOYAGE_API_KEY is set in the environment.
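Behind `vai query`, the retrieval stack runs an Atlas `$vectorSearch` aggregation stage over the embedded query. A minimal sketch of that query shape, assuming an index named `vector_index` and an `embedding` field (both names are assumptions; the reranking pass would then call a reranker such as Voyage AI's over the returned candidates):

```python
# Illustrative shape of the Atlas $vectorSearch pipeline behind `vai query`.
# The index name ("vector_index") and vector field ("embedding") are assumptions.

def vector_search_pipeline(query_vector: list[float], limit: int = 5) -> list[dict]:
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",
                "path": "embedding",
                "queryVector": query_vector,
                # Oversample candidates from the ANN index, then narrow to `limit`.
                "numCandidates": limit * 20,
                "limit": limit,
            }
        },
        # Surface the similarity score alongside each matched chunk.
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
```

The vector-only pass mentioned in the demo is exactly this pipeline without the subsequent reranking step, which is what makes the side-by-side comparison meaningful.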
Build a tiny Atlas-backed RAG chat flow using local nano embeddings and Ollama for generation.
VAI command
vai chat --db "$DEMO_DB" --collection "$DEMO_COLLECTION" --local --llm-provider ollama --llm-model "$OLLAMA_MODEL" --llm-base-url http://localhost:11434 --no-history --no-stream
Prerequisites
Ollama is installed and running locally.
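The chat flow grounds each answer in retrieved chunks before calling the local LLM. A minimal sketch of the prompt-assembly step, with the template wording and function name as assumptions (VAI's actual prompt may differ); the generated prompt is then sent to Ollama's local HTTP API:

```python
# Minimal sketch of RAG prompt assembly for a flow like `vai chat`.
# The template wording and function name are assumptions, not VAI's actual prompt.

def build_rag_prompt(question: str, contexts: list[str]) -> str:
    """Ground the answer in retrieved chunks; no chat history, matching --no-history."""
    context_block = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(contexts))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Generation then goes to Ollama's local REST endpoint, e.g.:
#   POST http://localhost:11434/api/generate
#   {"model": "<OLLAMA_MODEL>", "prompt": build_rag_prompt(...), "stream": false}
```

With `--no-stream`, the Ollama call returns the full completion in one response rather than token-by-token chunks.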
Compare fixed, sentence, and markdown chunking on the same sample document before any embedding or storage layer is introduced.
VAI command
vai chunk /tmp/sample.md --strategy markdown
Prerequisites
The `vai` CLI is installed locally. No API key is required for chunking-only workflows.
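The difference between the three strategies can be seen with toy implementations. These are deliberately simplified stand-ins (naive regex splitting, character-based windows), not VAI's chunkers, but they show why the same document yields different chunk counts and boundaries per strategy:

```python
import re

# Toy versions of the three strategies `vai chunk` compares.
# Simplified assumptions: character windows, naive regex splits -- not VAI's chunkers.

def chunk_fixed(text: str, size: int = 80) -> list[str]:
    """Fixed: split into equal character windows regardless of content."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def chunk_sentences(text: str) -> list[str]:
    """Sentence: split after terminal punctuation followed by whitespace."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def chunk_markdown(text: str) -> list[str]:
    """Markdown: split at headings so each chunk keeps its section context."""
    parts = re.split(r"(?m)^(?=#{1,6} )", text)
    return [p.strip() for p in parts if p.strip()]

doc = "# Intro\nVector search finds neighbors. It uses embeddings.\n# Usage\nRun vai."
print(len(chunk_fixed(doc)), len(chunk_sentences(doc)), len(chunk_markdown(doc)))
```

On this sample, the fixed strategy ignores structure entirely, the sentence strategy tracks punctuation, and the markdown strategy keeps each heading with its body, which is why the markdown strategy is usually the right choice for docs like `/tmp/sample.md`.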