
Quick Start

Get your first data pipeline running in under 10 minutes.

Prerequisites

Before starting, make sure you have:

  • Installed Dango (Installation Guide)
  • Python 3.10+ and Docker Desktop running
  • Virtual environment activated (if using venv)

Step 1: Initialize Your Project

If you haven't already initialized Dango:

# macOS/Linux
cd my-analytics
source venv/bin/activate
dango init

# Windows (PowerShell)
cd my-analytics
.\venv\Scripts\Activate.ps1
dango init

The interactive wizard will guide you through:

  • Project name and configuration
  • Initial data source setup (optional)
  • Directory structure creation (a sketch of the resulting layout follows below)
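
After the wizard finishes, your project should look roughly like this. This is a sketch, not an exact listing — only the dbt_project/models/ path is relied on later in this guide, and the other entries depend on the options you chose:

my-analytics/
├── dbt_project/
│   └── models/          # auto-generated staging models live here (see Step 6)
└── venv/                # your virtual environment, if you created one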

Step 2: Add a Data Source

Let's add your first data source. Dango supports CSV files and 29+ verified dlt sources.

Option A: CSV File (Simplest)

dango source add

Follow the prompts:

  1. Select CSV as the source type
  2. Provide a path to your CSV file
  3. Give it a descriptive name (e.g., sales_data)

Example:

$ dango source add
? Select source type: CSV
? CSV file path: /path/to/your/data.csv
? Source name: sales_data
 ✓ CSV source 'sales_data' added successfully
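
For reference, any CSV with a header row works. A minimal illustrative file (these column names are just an example — they happen to match the model used in Step 6):

order_id,customer_id,order_date,amount
1001,42,2024-01-15,99.50
1002,43,2024-01-16,149.00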

Option B: Stripe (API Integration)

For a more advanced example, try Stripe:

dango source add

Follow the prompts:

  1. Select Stripe as the source type
  2. Enter your Stripe API key (get it from Stripe Dashboard)
  3. Give it a descriptive name (e.g., stripe_payments)

Option C: Google Sheets (OAuth)

dango source add

Follow the prompts:

  1. Select Google Sheets as the source type
  2. Complete OAuth authentication in your browser
  3. Provide the Google Sheet URL
  4. Give it a descriptive name (e.g., marketing_data)

Step 3: Sync Your Data

Now let's pull data from your source into DuckDB:

dango sync

What happens during sync:

  1. dlt connects to your data source
  2. Data is loaded into the raw schema in DuckDB
  3. dbt generates staging models automatically
  4. Transformations run to create clean, deduplicated data

Example output:

$ dango sync
[18:30:45] Starting sync for all sources...
[18:30:46] sales_data: Extracting data...
[18:30:47] sales_data: Loading to DuckDB...
[18:30:48] ✓ sales_data: 1,234 rows loaded
[18:30:49] Running dbt transformations...
[18:30:51] ✓ 3 models completed successfully
[18:30:51] ✓ Sync completed in 6.2s
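
To double-check the load, you can count the rows that landed in the raw schema. The table name below is an assumption (dlt typically names the table after your source); the dango query command is covered in Step 5:

dango query "SELECT COUNT(*) FROM raw.sales_data"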

Dry Run (Preview Without Executing)

To preview what will happen without executing:

dango sync --dry-run

Step 4: Start the Platform

Start the Web UI, Metabase, and dbt docs server:

dango start

What starts:

  • Web UI - http://localhost:8800
  • Metabase - Accessible through Web UI
  • dbt docs - Accessible through Web UI

Example output:

$ dango start
[18:31:00] Starting Dango platform...
[18:31:02] ✓ Docker containers started
[18:31:05] ✓ Metabase ready
[18:31:06] ✓ Web UI ready at http://localhost:8800
[18:31:06] ✓ Platform started successfully

Open the Dashboard

# macOS
open http://localhost:8800

# Windows (PowerShell)
Start-Process http://localhost:8800

Or simply visit http://localhost:8800 in your browser.


Step 5: Explore Your Data

Web UI (http://localhost:8800)

The Web UI provides:

  • Pipeline Status - See all your data sources and their sync status
  • Data Sources - Add, edit, and manage sources
  • Transformations - View and manage dbt models
  • Metabase - Access dashboards (link in Web UI)
  • dbt docs - Explore your data models (link in Web UI)

Metabase Dashboards

  1. Click "Open Metabase" in the Web UI
  2. Metabase is auto-configured with your DuckDB database
  3. Start exploring your data with SQL or visual query builder

First time setup:

  • No login required (auto-configured)
  • All tables are already connected
  • Start creating dashboards immediately

Query Your Data with SQL

You can also query DuckDB directly:

dango query "SELECT * FROM marts.dim_customers LIMIT 10"

Or open an interactive SQL session:

dango query --interactive
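
If you have the standalone duckdb CLI installed, you can also open the project's database file directly. The filename below is an assumption — look for the .duckdb file in your project directory:

# open read-only so you don't block a running sync; filename is an assumption
duckdb -readonly my-analytics.duckdb "SELECT * FROM marts.dim_customers LIMIT 10"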

Step 6: Add Transformations

Dango auto-generates staging models, but you can add your own transformations:

Create a New dbt Model

  1. Navigate to your dbt models directory:

    cd dbt_project/models/
    

  2. Create a new model file (e.g., marts/revenue_summary.sql):

    {{ config(materialized='table') }}
    
    SELECT
        DATE_TRUNC('month', order_date) AS month,
        SUM(amount) AS total_revenue,
        COUNT(DISTINCT customer_id) AS unique_customers
    FROM {{ ref('stg_sales_data') }}
    GROUP BY 1
    ORDER BY 1 DESC
    

  3. Run dbt to materialize your model:

    dango sync
    

Your new model is now available in DuckDB and Metabase!
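
You can sanity-check the new table straight from the CLI. The marts schema name here assumes the models/marts/ directory maps to a marts schema, matching the query example in Step 5:

dango query "SELECT * FROM marts.revenue_summary LIMIT 5"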


Step 7: Automate with File Watcher

Enable automatic syncing when data files change:

dango watch

What it does:

  • Monitors CSV files for changes
  • Automatically runs dango sync when changes are detected
  • Keeps your data up to date in real time

Press Ctrl+C to stop watching.
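
If a schedule suits you better than a watcher, a plain cron entry that runs the same command works too. The paths below are assumptions for a typical venv layout — adjust them to your project:

# crontab -e — sync hourly; project path and venv location are assumptions
0 * * * * cd /path/to/my-analytics && ./venv/bin/dango sync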


Common Workflows

Daily Data Pipeline

# Morning routine
source venv/bin/activate
dango sync                    # Pull fresh data
dango start                   # Start dashboards

Development Workflow

# Make changes to dbt models
cd dbt_project/models/

# Test your changes
dango sync --dry-run          # Preview changes
dango sync                    # Apply changes

# View results in Metabase
open http://localhost:8800

Adding More Sources

# Add another source
dango source add

# Sync all sources
dango sync

# Sync specific source only
dango sync --source stripe_payments

Verify Everything Works

Let's make sure your setup is complete:

# Check Dango version
dango --version

# Validate installation
dango validate

# Check sync status
dango status

# List all sources
dango source list

Next Steps

Now that you have a working pipeline:

  1. Core Concepts - Understand Dango's architecture
  2. Data Sources - Connect more data sources
  3. Transformations - Write advanced dbt models
  4. Dashboards - Build Metabase dashboards
  5. Web UI & CLI - Explore all commands

Troubleshooting

"dango: command not found"

Make sure your virtual environment is activated:

# macOS/Linux
source venv/bin/activate

# Windows (PowerShell)
.\venv\Scripts\Activate.ps1

"Docker not running"

Start Docker Desktop, then confirm the daemon is actually running:

docker --version   # confirms the CLI is installed
docker info        # errors if the Docker daemon is not running

"Port 8800 already in use"

Stop any running Dango instances:

dango stop

Or kill the process using the port:

# macOS/Linux
lsof -ti:8800 | xargs kill -9

# Windows (PowerShell)
Get-Process -Id (Get-NetTCPConnection -LocalPort 8800).OwningProcess | Stop-Process -Force

More Issues?

Check the full Troubleshooting Guide or open an issue.


Summary

You've successfully:

  • ✅ Initialized a Dango project
  • ✅ Added a data source
  • ✅ Synced data to DuckDB
  • ✅ Started the Web UI and Metabase
  • ✅ Explored your data

Keep learning: