With non-isolated runs, each run executes as a thread inside the long-running code server process. Memory and CPU are shared across all concurrent runs. This guide covers how to profile your runs, size your ECS tasks, configure Dagster's gRPC settings, and decide when to split into multiple replicas.
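Since all concurrent runs share one process, the first step in sizing is measuring what a single run actually allocates. Here is a minimal sketch using Python's standard-library `tracemalloc`; the `load_orders` workload is a made-up stand-in, not part of any Dagster API:

```python
import tracemalloc


def measure_peak_memory(fn):
    """Run fn() and return (result, peak bytes allocated while it ran)."""
    tracemalloc.start()
    try:
        result = fn()
        _, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    return result, peak


def load_orders():
    # Stand-in workload: build ~100k small records to simulate one run's footprint.
    return [{"customer_id": i % 1000, "amount": float(i)} for i in range(100_000)]


orders, peak = measure_peak_memory(load_orders)
print(f"{len(orders)} records, peak {peak / 1_000_000:.1f} MB")
```

A rough sizing rule that follows from this: multiply the per-run peak by the maximum expected concurrency, then add headroom for the code server itself, to arrive at the ECS task memory reservation.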
```python
from collections.abc import Iterator

import dagster as dg
from dagster_dbt import DbtCliResource, DbtProjectComponent


class DbtProjectWithInsightsComponent(DbtProjectComponent):
    """A DbtProjectComponent subclass that enables Dagster+ Insights.

    Chains `.with_insights()` onto the dbt event iterator so that warehouse
    query costs can be attributed to individual assets in Insights.
    """
```

```sql
SELECT
    customer_id,
    COUNT(*) AS order_count,
    SUM(amount) AS total_spent
FROM orders
GROUP BY customer_id
```
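The same aggregation can be sanity-checked off-warehouse with a few lines of plain Python; the sample `orders` rows below are invented for illustration:

```python
from collections import defaultdict

orders = [
    {"customer_id": 1, "amount": 10.0},
    {"customer_id": 1, "amount": 5.0},
    {"customer_id": 2, "amount": 7.5},
]

# Mirror of: SELECT customer_id, COUNT(*), SUM(amount) FROM orders GROUP BY customer_id
summary = defaultdict(lambda: {"order_count": 0, "total_spent": 0.0})
for row in orders:
    agg = summary[row["customer_id"]]
    agg["order_count"] += 1
    agg["total_spent"] += row["amount"]

print(dict(summary))
```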
Comparison of five Claude Code sessions that received the same prompt, with varying skill configurations and prompt refinements.
All sessions received essentially the same base prompt: create a demo Dagster project spanning Fivetran → dbt → Snowflake → PowerBI, with Alteryx and Domo (being migrated off) → Census/Fivetran Activations, plus event-driven sensors and observe/orchestrate modes.
skills-10 received an enhanced prompt with additional explicit instructions: "Make sure any component that connects to an external system is using a state-backed component, uses a local cache and writes a set of mock assets using that cache, and that when it executes it logs a sample message and metadata instead of connecting to the external system. When modifying a component that exists, ALWAYS subclass, do not create a custom component."
| Aspect | Project 1 (testing-new-skills) | Project 2 (testing-new-skills-2) |
|---|---|---|
| dbt project location | Inside src/.../defs/dbt_project/ | Top-level dbt_project/ |
| dbt mart models | account_360, pipeline_summary, lead_conversion_funnel | fct_sales_pipeline, dim_account_health, fct_lead_conversion |
Analysis Date: March 2, 2026
Between Dagster 1.6 and the current release (1.12.17, as of February 27, 2026), the documentation has undergone a fundamental transformation. This is not just a cosmetic refresh but a philosophical and structural overhaul, reflecting Dagster's evolution from a flexible orchestration framework into a highly opinionated data platform with prescribed workflows, new abstractions (Components, the dg CLI), and a dramatically narrower "happy path" for new users.
```python
import dagster as dg


# ---------------------------------------------------------------------------
# 1. Upstream asset — plain @asset
# ---------------------------------------------------------------------------
@dg.asset(
    group_name="pipeline",
    kinds={"python"},
    tags={"domain": "orders"},
)
def raw_orders(context: dg.AssetExecutionContext) -> list[dict]:
    # Return a small set of sample order records.
    context.log.info("Loading raw orders")
    return [{"customer_id": 1, "amount": 42.0}]
```
```python
#!/usr/bin/env python3
# /// script
# dependencies = [
#     "requests<3",
# ]
# ///
import argparse
import sys

import requests
```
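The script body is truncated after the imports. As a hypothetical sketch of the argument-parsing half (the CLI surface below is invented for illustration, not recovered from the original):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical arguments; the original script's CLI is unknown.
    parser = argparse.ArgumentParser(description="Fetch a URL and print the response body.")
    parser.add_argument("url", help="URL to fetch")
    parser.add_argument("--timeout", type=float, default=10.0, help="request timeout in seconds")
    return parser


args = build_parser().parse_args(["https://example.com", "--timeout", "5"])
print(args.url, args.timeout)
```

With the `# /// script` metadata block above, a runner that understands PEP 723 inline metadata (such as `uv run`) can install `requests<3` before executing the file.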