Quick Start¶
Get your first data pipeline running in under 10 minutes.
Prerequisites¶
Before starting, make sure you have:
- Installed Dango (Installation Guide)
- Python 3.10+ installed and Docker Desktop running
- Virtual environment activated (if using venv)
Step 1: Initialize Your Project¶
If you haven't already initialized Dango:
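A minimal sketch, assuming the interactive wizard is launched with dango init (the exact command name may differ; check dango --help):
# Assumed command name; runs the interactive project wizard
dango init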
The interactive wizard will guide you through:
- Project name and configuration
- Initial data source setup (optional)
- Directory structure creation
Step 2: Add a Data Source¶
Let's add your first data source. Dango supports CSV files and 29+ verified dlt sources.
Option A: CSV File (Simplest)¶
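Run the source wizard from your project directory:
dango source add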
Follow the prompts:
- Select CSV as the source type
- Provide a path to your CSV file
- Give it a descriptive name (e.g., sales_data)
Example:
$ dango source add
? Select source type: CSV
? CSV file path: /path/to/your/data.csv
? Source name: sales_data
✓ CSV source 'sales_data' added successfully
Option B: Stripe (API Integration)¶
For a more advanced example, try Stripe:
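Run the same source wizard and choose Stripe when prompted:
dango source add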
Follow the prompts:
- Select Stripe as the source type
- Enter your Stripe API key (get it from Stripe Dashboard)
- Give it a descriptive name (e.g., stripe_payments)
Option C: Google Sheets (OAuth)¶
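Run the source wizard and choose Google Sheets when prompted:
dango source add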
Follow the prompts:
- Select Google Sheets as the source type
- Complete OAuth authentication in your browser
- Provide the Google Sheet URL
- Give it a descriptive name (e.g., marketing_data)
Step 3: Sync Your Data¶
Now let's pull data from your source into DuckDB:
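Run the sync command from your project directory:
dango sync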
What happens during sync:
- dlt connects to your data source
- Data is loaded into the raw schema in DuckDB
- dbt generates staging models automatically
- Transformations run to create clean, deduplicated data
Example output:
$ dango sync
[18:30:45] Starting sync for all sources...
[18:30:46] → sales_data: Extracting data...
[18:30:47] → sales_data: Loading to DuckDB...
[18:30:48] → sales_data: 1,234 rows loaded
[18:30:49] Running dbt transformations...
[18:30:51] ✓ 3 models completed successfully
[18:30:51] ✓ Sync completed in 6.2s
Dry Run (Preview Without Executing)¶
To preview what will happen without executing:
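Pass the --dry-run flag to see what would be synced without loading any data:
dango sync --dry-run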
Step 4: Start the Platform¶
Start the Web UI, Metabase, and dbt docs server:
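From your project directory:
dango start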
What starts:
- Web UI - http://localhost:8800
- Metabase - accessible through the Web UI
- dbt docs - accessible through the Web UI
Example output:
$ dango start
[18:31:00] Starting Dango platform...
[18:31:02] ✓ Docker containers started
[18:31:05] ✓ Metabase ready
[18:31:06] ✓ Web UI ready at http://localhost:8800
[18:31:06] ✓ Platform started successfully
Open the Dashboard¶
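On macOS you can open the dashboard straight from the terminal (use xdg-open on Linux):
open http://localhost:8800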
Or simply visit http://localhost:8800 in your browser.
Step 5: Explore Your Data¶
Web UI (http://localhost:8800)¶
The Web UI provides:
- Pipeline Status - See all your data sources and their sync status
- Data Sources - Add, edit, and manage sources
- Transformations - View and manage dbt models
- Metabase - Access dashboards (link in Web UI)
- dbt docs - Explore your data models (link in Web UI)
Metabase Dashboards¶
- Click "Open Metabase" in the Web UI
- Metabase is auto-configured with your DuckDB database
- Start exploring your data with SQL or visual query builder
First time setup:
- No login required (auto-configured)
- All tables are already connected
- Start creating dashboards immediately
Query Your Data with SQL¶
You can also query DuckDB directly:
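A minimal sketch using the DuckDB CLI, assuming your project database file is at data/warehouse.duckdb (check your project directory for the actual filename) and that a sales_data source has been synced:
# Hypothetical database path and table name - adjust to your project
duckdb data/warehouse.duckdb "SELECT * FROM raw.sales_data LIMIT 10"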
Or open an interactive SQL session:
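To drop into an interactive DuckDB shell against the same (assumed) database file:
# Hypothetical path - adjust to your project
duckdb data/warehouse.duckdb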
Step 6: Add Transformations¶
Dango auto-generates staging models, but you can add your own transformations:
Create a New dbt Model¶
- Navigate to your dbt models directory
- Create a new model file (e.g., marts/revenue_summary.sql)
- Run dbt to materialize your model (see the sketch after this list)
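A sketch of all three steps, assuming your staging model is named stg_sales_data and exposes order_date and amount columns (adjust the model and column names to match your own source):
# 1. From your project root, go to the dbt models directory
cd dbt_project/models/

# 2. Create a new model file (model and column names below are assumptions)
mkdir -p marts
cat > marts/revenue_summary.sql <<'SQL'
select
    date_trunc('month', order_date) as month,
    sum(amount) as total_revenue
from {{ ref('stg_sales_data') }}
group by 1
SQL

# 3. Back at the project root, re-run the transformations
cd ../..
dango sync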
Your new model is now available in DuckDB and Metabase!
Step 7: Automate with File Watcher¶
Enable automatic syncing when data files change:
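A sketch, assuming the watcher is exposed as a dango watch subcommand (the exact command name may differ; check dango --help):
# Assumed command name; runs in the foreground until you stop it
dango watch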
What it does:
- Monitors CSV files for changes
- Automatically runs dango sync when changes are detected
- Keeps your data up-to-date in real time
Press Ctrl+C to stop watching.
Common Workflows¶
Daily Data Pipeline¶
# Morning routine
source venv/bin/activate
dango sync # Pull fresh data
dango start # Start dashboards
Development Workflow¶
# Make changes to dbt models
cd dbt_project/models/
# Test your changes
dango sync --dry-run # Preview changes
dango sync # Apply changes
# View results in Metabase
open http://localhost:8800
Adding More Sources¶
# Add another source
dango source add
# Sync all sources
dango sync
# Sync specific source only
dango sync --source stripe_payments
Verify Everything Works¶
Let's make sure your setup is complete:
# Check Dango version
dango --version
# Validate installation
dango validate
# Check sync status
dango status
# List all sources
dango source list
Next Steps¶
Now that you have a working pipeline:
- Core Concepts - Understand Dango's architecture
- Data Sources - Connect more data sources
- Transformations - Write advanced dbt models
- Dashboards - Build Metabase dashboards
- Web UI & CLI - Explore all commands
Troubleshooting¶
"dango: command not found"¶
Make sure your virtual environment is activated:
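Activate the environment you installed Dango into (this assumes a venv folder named venv, as in the daily workflow above):
source venv/bin/activate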
"Docker not running"¶
Start Docker Desktop and verify:
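docker info prints server details when the daemon is up and errors otherwise:
# Verifies the Docker daemon is reachable
docker info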
"Port 8800 already in use"¶
Stop any running Dango instances:
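Assuming Dango exposes a stop command mirroring dango start (check dango --help for the exact name):
# Assumed command name
dango stop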
Or kill the process using the port:
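On macOS/Linux you can find and terminate whatever is holding the port:
# Find the PID listening on port 8800 and kill it
lsof -ti :8800 | xargs kill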
More Issues?¶
Check the full Troubleshooting Guide or open an issue.
Summary¶
You've successfully:
- ✅ Initialized a Dango project
- ✅ Added a data source
- ✅ Synced data to DuckDB
- ✅ Started the Web UI and Metabase
- ✅ Explored your data
Keep learning:
- Explore the CLI Reference for all commands
- Learn about Data Sources
- Master dbt Transformations