Local Development¶
Project organization and development workflows for Dango.
Project Structure¶
A typical Dango project has the following structure:
my-analytics/
├── .dango/                  # Dango state (auto-generated)
│   ├── sources.yml          # Source configurations
│   ├── project.yml          # Project settings
│   └── logs/                # Sync logs
├── .dlt/                    # dlt configuration
│   ├── config.toml          # dlt settings
│   └── secrets.toml         # Credentials (DO NOT COMMIT)
├── data/                    # Data files
│   ├── uploads/             # CSV upload directory
│   └── warehouse.duckdb     # DuckDB database
├── dbt/                     # dbt project
│   ├── models/
│   │   ├── staging/         # Auto-generated staging models
│   │   ├── intermediate/    # Your intermediate models
│   │   └── marts/           # Your mart models
│   ├── macros/
│   ├── tests/
│   └── dbt_project.yml
└── metabase-data/           # Metabase state (if using Docker)
Directory Organization¶
Data Directory¶
The data/ directory contains all raw data files:
data/
├── uploads/                 # CSV files organized by source
│   ├── sales/               # Sales data CSVs
│   │   ├── orders_2024.csv
│   │   └── orders_2025.csv
│   └── inventory/           # Inventory data CSVs
│       └── products.csv
└── warehouse.duckdb         # DuckDB database file
Organizing CSV Files
Create subdirectories in data/uploads/ for each CSV source. This keeps files organized and makes source configuration clearer.
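For example, matching the layout above:
# One subdirectory per CSV source
mkdir -p data/uploads/sales data/uploads/inventory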
dbt Directory¶
The dbt/ directory contains your transformation logic:
dbt/
├── models/
│   ├── staging/             # Auto-generated, don't edit
│   │   ├── stg_orders.sql
│   │   └── stg_products.sql
│   ├── intermediate/        # Your business logic
│   │   └── int_order_items.sql
│   └── marts/               # Final tables for dashboards
│       ├── fct_daily_sales.sql
│       └── dim_products.sql
├── macros/                  # Reusable SQL snippets
├── tests/                   # Data quality tests
├── seeds/                   # Static lookup data
└── dbt_project.yml
Development Workflow¶
1. Initial Setup¶
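A minimal sketch, assuming Dango follows the usual init convention (check dango --help for the exact command):
# Create a new project (command name assumed)
dango init my-analytics
cd my-analytics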
2. Add Data Sources¶
# Add CSV source
dango source add csv
# Add OAuth source (e.g., Google Sheets)
dango source add google_sheets
dango auth google_sheets
3. Sync Data¶
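Pull data from your configured sources into DuckDB (the sync command is covered in more detail below):
# Sync all configured sources
dango sync
# Or sync a single source
dango sync --source my_source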
4. Develop Transformations¶
# After sync, staging models are auto-generated
# Edit or add models in dbt/models/
# Run transformations
dango run
# Or run a specific model during development
cd dbt && dbt run --select my_model
5. Build Dashboards¶
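With marts built, connect your BI tool to data/warehouse.duckdb. If you use the bundled Metabase (see the metabase-data/ directory above), build dashboards on your mart tables such as fct_daily_sales and dim_products.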
Iterative Development¶
Hot Reload for CSV Sources¶
When the file watcher is enabled, CSV changes are automatically detected:
# Check watcher status
dango status
# The watcher will:
# 1. Detect file changes in data/uploads/
# 2. Wait for the debounce period (default: 10 minutes)
# 3. Auto-sync changed sources
# 4. Auto-run dbt transformations
Manual Iteration¶
For faster development cycles:
# 1. Make changes to CSV or source config
# 2. Sync immediately (skip watcher debounce)
dango sync --source my_source
# 3. Run only affected models
cd dbt && dbt run --select my_model+
Environment Management¶
Development vs Production¶
Create separate configurations for development and production. One common pattern, a sketch rather than Dango-specific guidance, is to define separate targets in your dbt profile and switch between them at run time:
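# Run against production (assumes a `prod` target is defined in your dbt profile)
cd dbt && dbt run --target prod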
Using Environment Variables¶
Store sensitive values in environment variables:
# .env file (DO NOT COMMIT)
STRIPE_API_KEY=sk_live_xxx
GOOGLE_SHEETS_CREDENTIALS_PATH=/path/to/creds.json
Reference them from .dlt/secrets.toml. A minimal sketch (section and key names depend on your source; note that dlt also reads environment variables named like SOURCES__STRIPE__API_KEY directly, and those take precedence over this file):
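# .dlt/secrets.toml (sketch; adjust section and key names to your source)
[sources.stripe]
api_key = "sk_live_xxx"  # or set SOURCES__STRIPE__API_KEY in the environment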
Testing Your Setup¶
Validate Configuration¶
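A quick first check is the status command shown earlier (exact output depends on your Dango version):
# Review project and watcher state
dango status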
Test Sync¶
# Dry run to see what would be synced
dango sync --dry-run
# Sync with debug logging
RUNTIME__LOG_LEVEL=DEBUG dango sync
Test Transformations¶
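dbt's built-in test commands cover this; run them from the dbt/ directory as in the earlier examples:
# Run data quality tests from tests/ and model configs
cd dbt && dbt test
# Or run models and tests together
cd dbt && dbt build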
Cleaning Up¶
Reset Data¶
# Remove all data and start fresh
dango db clean
# This removes:
# - DuckDB database
# - dlt state
# Configuration files are kept
Stop Services¶
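If Metabase runs in Docker (as the metabase-data/ directory suggests), stop it with Docker itself; a sketch assuming a Compose setup (compose file and container name are assumptions):
# Stop services started with Docker Compose
docker compose down
# Or stop a single container by name
docker stop metabase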
Next Steps¶
- dbt Workflows - Advanced transformation development
- Git Workflows - Version control best practices
- Troubleshooting - Common issues and solutions