@DebopamParam
Created March 31, 2025 18:36
Dev Containers - Crash Course

Connect with me - https://debopamparam.github.io/DebopamParam/

Welcome! As a developer, you know that managing dependencies and environments, and ensuring consistency across different machines (yours, your colleagues', the CI/CD pipeline), can be a significant headache. VS Code Development Containers (Dev Containers) offer a fantastic solution to these problems.

This tutorial will guide you step-by-step through setting up and using Dev Containers for your Python projects, making your development workflow smoother, more consistent, and easier to share.

https://youtu.be/b1RavPr_878?si=4yTrQI9llYDNVo1f

Credit to VS-Code YouTube Channel

Part 1 - Introduction

What Are Dev Containers?

Imagine you need a specialized workshop to build something. Instead of setting up all the tools and configuring the space in your own garage (your host machine), you get a pre-fabricated, perfectly configured portable workshop delivered to you. This workshop has all the tools (dependencies, SDKs, OS-level packages), specific configurations (settings, extensions), and is isolated from everything else. When you're done, the workshop can be packed up cleanly.

That's essentially what a Dev Container is: a running Docker container configured specifically for a development task or project. VS Code, with the Dev Containers extension, can connect directly into this container, allowing you to edit code, run commands, debug, and use extensions as if they were running locally, but they are actually executing inside the isolated container environment.

Why Use Dev Containers for Python?

  • Environment Consistency: The biggest win! The .devcontainer configuration defines the exact environment – OS, Python version, system libraries, VS Code settings, and extensions. Everyone on the team (and your CI server) gets the identical setup, eliminating the frustrating "it works on my machine" problem.
  • Simplified Setup & Onboarding: New team members just need Docker, VS Code, and the Dev Containers extension. They clone the repo, VS Code prompts them to reopen in the container, and bam! – they have the fully configured development environment ready in minutes, without complex local setup guides.
  • Isolation: Dependencies (Python packages, system libraries) are installed inside the container, not on your host machine. This prevents conflicts between project requirements and keeps your local system clean. Need conflicting versions of a library for different projects? No problem!
  • Access Specific Tools/OS: Need Linux command-line tools on your Windows machine? Need a specific version of PostgreSQL only for this project? Dev Containers make it easy to use different operating systems or toolchains without complex local installations or VMs.
  • Clean Host Machine: Since dependencies live inside the container, your host OS stays lean and uncluttered.

2. Prerequisites

Before we start, make sure you have the following installed:

  1. Visual Studio Code: The latest stable version is recommended. Download VS Code
  2. Docker Desktop (or a compatible Docker runtime): Dev Containers rely on Docker to build and run the containerized environments.
    • Docker Desktop for Mac
    • Docker Desktop for Windows (Requires WSL 2 backend, usually configured during installation)
    • Docker Engine for Linux (Requires additional setup for non-root user access)
    • Why Docker? Docker is the containerization platform that actually creates, manages, and runs the isolated environments (containers) defined by your Dev Container configuration. VS Code instructs Docker behind the scenes.
  3. VS Code "Dev Containers" Extension: Install this directly from the VS Code Extensions view (Ctrl+Shift+X or Cmd+Shift+X). Search for ms-vscode-remote.remote-containers.

Make sure your Docker Desktop (or engine) is running before proceeding.
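If you want to verify this from a script rather than by eye, here is a small sketch (the helper names are illustrative, not part of any official tooling) that checks whether the `docker` CLI is on your PATH and whether the daemon responds:

```python
# check_docker.py - quick sanity check for the Docker CLI and daemon.
# Hypothetical helper, not part of the Dev Containers tooling.
import shutil
import subprocess

def docker_cli_available() -> bool:
    """Return True if a `docker` executable is on PATH."""
    return shutil.which("docker") is not None

def docker_daemon_running() -> bool:
    """Return True if `docker info` succeeds (i.e. the daemon is up)."""
    if not docker_cli_available():
        return False
    result = subprocess.run(["docker", "info"], capture_output=True, text=True)
    return result.returncode == 0

if __name__ == "__main__":
    print(f"Docker CLI on PATH:    {docker_cli_available()}")
    print(f"Docker daemon running: {docker_daemon_running()}")
```

`docker info` fails fast when the daemon is stopped, which makes it a convenient readiness probe before opening a Dev Container.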

3. Getting Started: Your First Python Dev Container (Template-Based)

Let's start with the easiest approach: using a pre-defined template for a simple Python project.

Scenario: You have a folder (new or existing) for a basic Python project.

  1. Open Your Project Folder: Open the folder containing your Python project (or an empty folder if starting new) in VS Code.

  2. Add Dev Container Configuration:

    • Open the VS Code Command Palette (Ctrl+Shift+P or Cmd+Shift+P).
    • Type Dev Containers: Add Development Container Configuration Files... and select it.
    • VS Code will show a list of predefined container definitions. Type Python 3 and select it.
    • Choose a specific Python version (e.g., 3.11, 3.10). Stick with a recent stable version if unsure.
    • You might be asked to add optional "Features" (like Node.js, Azure CLI, etc.). For now, you can just click "OK" without selecting any extras unless you know you need them immediately. We can add them later.
  3. Examine the Generated Files: VS Code will create a .devcontainer folder in your project root with a devcontainer.json file inside. Let's look at a typical example:

    // .devcontainer/devcontainer.json
    {
        "name": "Python 3",
        // Or use a Dockerfile or Docker Compose file. More info: <https://containers.dev/guide/dockerfile>
        "image": "mcr.microsoft.com/devcontainers/python:0-3.11",
    
        "features": {
            // Example: Adding the GitHub CLI feature
            // "ghcr.io/devcontainers/features/github-cli:1": {}
        },
    
        // Features to add to the dev container. More info: <https://containers.dev/features>.
        // "features": {},
    
        // Use 'forwardPorts' to make a list of ports inside the container available locally.
        "forwardPorts": [8000], // Example: Forward port 8000 if you run a web server
    
        // Use 'postCreateCommand' to run commands after the container is created.
        "postCreateCommand": "pip install --user -r requirements.txt", // Installs packages after container creation
    
        // Configure tool-specific properties.
        "customizations": {
            // Configure properties specific to VS Code.
            "vscode": {
                "settings": {
                    "python.defaultInterpreterPath": "/usr/local/bin/python",
                    "python.linting.pylintEnabled": true,
                    "python.linting.enabled": true,
                    "python.formatting.provider": "black", // Example: Set Black as formatter
                    "editor.formatOnSave": true         // Example: Format on save
                },
    
                // Add the IDs of extensions you want installed when the container is created.
                "extensions": [
                    "ms-python.python",         // Python Extension Pack
                    "ms-python.vscode-pylance", // Language Server
                    "ms-python.flake8",       // Example: Add flake8 linter
                    "ms-python.black-formatter" // Example: Add Black formatter extension
                ]
            }
        }
    
        // Uncomment to connect as root instead. More info: <https://aka.ms/dev-containers-non-root>.
        // "remoteUser": "root"
    }

    Key Properties Explained:

    • name: A display name for your Dev Container configuration.
    • image: Specifies the pre-built Docker image to use as the base. Microsoft maintains excellent base images (mcr.microsoft.com/devcontainers/python). The tag (e.g., 0-3.11) often indicates the Python version.
    • features: A powerful way to easily add common tools (like Docker-in-Docker, databases, or CLIs such as ghcr.io/devcontainers/features/github-cli) without manually editing a Dockerfile. The base Python image already includes Python itself, so you rarely need a separate Python feature.
    • forwardPorts: A list of ports inside the container that should be accessible from your host machine (e.g., for web servers).
    • postCreateCommand: Commands to run once after the container is first created (but before VS Code connects fully). Ideal for installing project dependencies from requirements.txt.
    • customizations.vscode.settings: VS Code settings specifically applied only when working inside this container. Great for project-specific formatter settings, linter choices, etc.
    • customizations.vscode.extensions: A list of VS Code extension IDs to be automatically installed and enabled inside the container. Essential for ensuring your Python tooling (linters, formatters, debuggers) is available.
  4. Build and Reopen in Container:

  • Start Docker Desktop
  • VS Code should show a notification asking if you want to "Reopen in Container". Click it.
  • Alternatively, open the Command Palette (Ctrl+Shift+P or Cmd+Shift+P) and run Dev Containers: Rebuild and Reopen in Container.
  • The first time, Docker will download the specified image and build the container. This might take a few minutes. Subsequent loads will be much faster.
  • You'll see logs showing the container setup process. Once finished, VS Code will reload, but now it's connected inside the container.
  5. Verify the Environment:
    • Look at the bottom-left corner of the VS Code window. You should see a green area saying Dev Container: Python 3 (or the name you chose), indicating you're inside!

    • Open the Integrated Terminal (Ctrl+` or Terminal > New Terminal). This terminal is running inside the container.

    • Check the Python version: python --version (or python3 --version). It should match the version specified in your devcontainer.json.

    • Check installed packages (if you had a requirements.txt and postCreateCommand): pip list.

    • Create a simple Python file, hello.py:

      # hello.py
      import sys
      
      print("Hello from inside the Dev Container!")
      print(f"Running Python version: {sys.version}")
    • Run it from the container's terminal:

      python hello.py

      You should see the output printed in the terminal.

Congratulations! You've successfully set up and run code inside your first Python Dev Container. Notice how the VS Code debugger, terminal, and any extensions listed in devcontainer.json work seamlessly within this isolated environment.

4. Customizing Your Environment: Using a Dockerfile

Templates are great, but often you need more control: installing specific OS packages (apt-get install ...), using a different base image, or performing more complex setup steps. This is where a custom Dockerfile comes in handy.

Scenario: Let's create a Dev Container for a basic Flask web application that requires system dependencies.

  1. Project Setup:

    • Ensure you're back in your local environment (use Dev Containers: Reopen Folder Locally if you're still inside the previous container).

    • In your project folder, create a requirements.txt file:

      # requirements.txt
      Flask==3.0.0
      
    • Create a simple Flask app file, app.py:

      # app.py
      from flask import Flask
      import os
      
      app = Flask(__name__)
      
      @app.route('/')
      def hello_world():
          return 'Hello from Flask inside a Dev Container!'
      
      if __name__ == '__main__':
          # Host 0.0.0.0 makes it accessible from outside the container (via forwarded port)
          app.run(debug=True, host='0.0.0.0', port=5000)
  2. Create a Dockerfile:

    • Inside the .devcontainer folder, create a file named Dockerfile (no extension).
    # .devcontainer/Dockerfile
    
    # Start from an official Python base image
    FROM python:3.11-slim
    
    # Set the working directory inside the container
    WORKDIR /app
    
    # Copy the requirements file into the container
    # We copy only requirements first to leverage Docker build cache.
    # The install step will be cached if requirements.txt doesn't change.
    COPY requirements.txt .
    
    # Install system dependencies (example: if your app needed git or a specific library)
    # RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
    #     && apt-get -y install --no-install-recommends git \
    #     && apt-get clean && rm -rf /var/lib/apt/lists/*
    
    # Install Python dependencies
    # Using --no-cache-dir makes the image slightly smaller
    # Consider using a virtual environment inside the container for best practice
    RUN pip install --no-cache-dir -r requirements.txt
    
    # Copy the rest of your application code (if needed during build time)
    # For Dev Containers, the workspace is typically MOUNTED, not copied,
    # so this line is often not strictly necessary unless required by build steps.
    # COPY . /app
    
    # [Optional but Recommended] Create a non-root user for security
    # ARG USERNAME=vscode
    # ARG USER_UID=1000
    # ARG USER_GID=$USER_UID
    # RUN groupadd --gid $USER_GID $USERNAME \
    #     && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME \
    #     # Add sudo support for the user
    #     && apt-get update \
    #     && apt-get install -y sudo \
    #     && echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
    #     && chmod 0440 /etc/sudoers.d/$USERNAME
    
    # [Optional] Switch to the non-root user (if created)
    # USER $USERNAME
    
    # EXPOSE is informational; actual port forwarding is handled by devcontainer.json
    EXPOSE 5000
    
    # Default command (optional, often overridden or run manually)
    # CMD ["python", "app.py"]
    
    
  3. Modify devcontainer.json:

    • Open .devcontainer/devcontainer.json.
    • Remove or comment out the image property.
    • Add a build section pointing to your Dockerfile.
    • Ensure forwardPorts includes the port your Flask app uses (5000).
    • Remove or adjust the postCreateCommand if the pip install is now handled in the Dockerfile. You might still use postCreateCommand for things like database migrations or git setup after the container is built and the code is mounted.
    // .devcontainer/devcontainer.json
    {
        "name": "Python Flask App",
        // Use the Dockerfile in the .devcontainer folder
        "build": {
            "dockerfile": "Dockerfile",
            // Optional: Context is the folder containing the Dockerfile (usually .)
            // "context": "..",
            // Optional: Arguments to pass to the Dockerfile build
            // "args": {
            //     "PYTHON_VERSION": "3.11"
            // }
        },
    
        // Forward the Flask port
        "forwardPorts": [5000],
    
        // Example: postCreateCommand for initializing something after build
        // "postCreateCommand": "echo 'Container created!' && python manage.py migrate",
    
        // Keep VS Code customizations
        "customizations": {
            "vscode": {
                "settings": {
                    "python.defaultInterpreterPath": "/usr/local/bin/python",
                    // Add Flask-specific settings if needed
                    "python.linting.pylintEnabled": true,
                    "python.linting.enabled": true,
                    "python.formatting.provider": "black",
                    "editor.formatOnSave": true
                },
                "extensions": [
                    "ms-python.python",
                    "ms-python.vscode-pylance",
                    "ms-python.flake8",
                    "ms-python.black-formatter"
                    // Add Flask snippets or other relevant extensions
                ]
            }
        },
    
        // If you created and switched to a non-root user in Dockerfile:
        // "remoteUser": "vscode"
    
        // Crucial: Define where your local source code should be mounted inside the container
        "workspaceFolder": "/app", // Matches WORKDIR in Dockerfile
        "workspaceMount": "source=${localWorkspaceFolder},target=/app,type=bind,consistency=cached"
    }
    
    • Important: The workspaceFolder and workspaceMount properties tell VS Code where your project files should appear inside the container. The target path in workspaceMount should usually match the WORKDIR in your Dockerfile.
  • Note: Let's break down this crucial line from the devcontainer.json configuration in detail:

    "workspaceMount": "source=${localWorkspaceFolder},target=/app,type=bind,consistency=cached"

    This string configures how your local project files are made available inside the Dev Container. It uses a syntax similar to Docker's --mount command-line flag, specifying several key-value pairs separated by commas.

    Here's a breakdown of each part:

    1. source=${localWorkspaceFolder}
    • source: This specifies the directory on your host machine (your local computer) that you want to mount into the container.
    • ${localWorkspaceFolder}: This is a special variable provided by VS Code. It automatically resolves to the absolute path of the folder you currently have opened in VS Code. For example, if you opened your project located at /Users/yourname/projects/my-python-app on your Mac or C:\Users\yourname\projects\my-python-app on Windows, VS Code replaces ${localWorkspaceFolder} with that exact path.
    • Why use this variable? It makes the configuration portable. Anyone cloning your project and opening it in VS Code will have their specific local path automatically used, without needing to hardcode paths in devcontainer.json, which would break it for other users or if you move the project locally.
    2. target=/app
    • target: This specifies the destination path inside the running container where the source directory will be mounted.
    • /app: This means that inside the container, there will be a directory at /app. The contents of this /app directory will directly mirror the contents of your local project folder (specified by source).
    • Why /app? This is a common convention, often chosen to match the WORKDIR instruction in the associated Dockerfile or the working_dir in a Docker Compose service definition. It provides a standard, predictable location for your project code within the container's filesystem. You could choose a different path (e.g., /workspaces/my-project, /code), but /app or /workspaces/your-project-name are frequently used. Consistency with other settings (like workspaceFolder in devcontainer.json and WORKDIR in the Dockerfile) is important.
    3. type=bind
    • type: This specifies the type of mount Docker should perform.
    • bind: This indicates a bind mount. A bind mount directly links (or "binds") a file or directory from the host machine's filesystem into the container's filesystem at the target path.
    • What does it mean? Changes made to files in the source directory on the host are immediately reflected inside the container at the target path, and vice-versa (with caveats related to the consistency setting). This is exactly what you want for development: edit code locally using VS Code, run/test it inside the container, and see changes instantly without needing to rebuild the container image. It's different from Docker volumes, which are fully managed by Docker and are often used for persisting data generated by containers (like databases).
    4. consistency=cached
    • consistency: This option fine-tunes the synchronization behavior between the host (source) and the container (target) for bind mounts. This is particularly relevant for performance on Docker Desktop (macOS and Windows), where filesystem operations between the host and container can be slow. On Linux hosts running Docker natively, the performance difference between consistency options is usually negligible.
    • cached: This setting prioritizes performance. It allows the container's view of the mount to be slightly delayed compared to the host's view. The host's view is considered authoritative. This generally provides the best performance for read/write heavy workloads like code editing, compiling, and dependency installation on macOS and Windows. While there's a theoretical possibility of brief inconsistencies, it's rarely an issue for typical development workflows and significantly speeds things up.
    • Other Options (for context):
    • delegated: Similar performance benefits to cached, but the container's view is authoritative. Less commonly used for source code mounts.
    • consistent (or omitted): Ensures perfect consistency between host and container at all times. This can lead to significant performance degradation on macOS and Windows due to the overhead of keeping everything perfectly in sync. It's usually the default if consistency is not specified, but cached is often explicitly recommended or set by default by the Dev Containers extension for workspaces on these platforms.

    In Summary:

    The line "workspaceMount": "source=${localWorkspaceFolder},target=/app,type=bind,consistency=cached" tells the Dev Containers extension (and underlying Docker) to:

    "Take the folder currently open in VS Code (${localWorkspaceFolder}) and directly mirror its contents into the container at the /app path, using a performance-optimized bind mount (type=bind, consistency=cached) suitable for active development on macOS or Windows."

    This is the mechanism that allows you to edit your code using VS Code running on your host machine while executing and debugging that same code within the isolated, controlled environment of the Dev Container.
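The comma-separated key=value syntax broken down above can be made concrete with a tiny, purely illustrative parser (not something the Dev Containers extension exposes):

```python
# Illustrative only: split a Docker --mount style string, as used by
# "workspaceMount" in devcontainer.json, into its key=value parts.
def parse_mount(mount: str) -> dict:
    """Split 'k1=v1,k2=v2,...' into a dict, partitioning each pair on the first '='."""
    parts = {}
    for pair in mount.split(","):
        key, _, value = pair.partition("=")
        parts[key] = value
    return parts

mount = "source=${localWorkspaceFolder},target=/app,type=bind,consistency=cached"
parsed = parse_mount(mount)
print(parsed["target"])       # → /app
print(parsed["type"])         # → bind
print(parsed["consistency"])  # → cached
```

Each option discussed above (source, target, type, consistency) is just one of these pairs; Docker interprets them when creating the bind mount.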

  4. Rebuild and Run:
    • Use the Command Palette: Dev Containers: Rebuild Container. This will build the image based on your Dockerfile and then start the container.

    • Once VS Code reloads inside the container, open the terminal (Ctrl+`).

    • Navigate to your code directory if needed (though workspaceFolder should place you there).

    • Run the Flask app:

      flask run --host=0.0.0.0 --port=5000
      # OR if you didn't install flask globally in the container but used a venv:
      # . venv/bin/activate
      # flask run --host=0.0.0.0 --port=5000
      # OR directly using python:
      # python app.py
      

      (Using 0.0.0.0 makes the server listen on all available network interfaces inside the container, which is necessary for port forwarding to work.)

    • VS Code should automatically detect the running port (5000) and might prompt you to open it in a browser.

    • Open your local web browser and navigate to http://localhost:5000. You should see "Hello from Flask inside a Dev Container!".
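The reason 0.0.0.0 matters can be seen with a minimal sketch using Python's standard socket module (this is a generic illustration, not Flask-specific): a server bound to 127.0.0.1 inside the container only accepts loopback traffic, so forwarded connections arriving on the container's network interface never reach it.

```python
# Minimal sketch: binding to 127.0.0.1 (loopback only) vs 0.0.0.0
# (all interfaces). Port forwarding delivers traffic to the container's
# non-loopback interface, so dev servers must bind to 0.0.0.0.
import socket

def bound_address(host: str) -> str:
    """Bind a throwaway TCP socket to `host` and report the address it bound to."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((host, 0))  # port 0 = let the OS pick any free port
        addr, _port = s.getsockname()
        return addr

print(bound_address("127.0.0.1"))  # loopback only: unreachable via forwarded ports
print(bound_address("0.0.0.0"))   # all interfaces: reachable via forwarded ports
```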

You've now customized your environment using a Dockerfile, installed dependencies during the build, and successfully run a web application accessible from your host machine!

5. Essential Dev Container Concepts

  • Workspace Mounting: By default (using workspaceMount), your local project folder is mounted into the container, not copied during the build. This means changes you make to your code locally are instantly reflected inside the container, and vice-versa. This is different from COPY instructions in a Dockerfile which happen only at build time.
  • Lifecycle Scripts: devcontainer.json provides hooks to run commands at different stages:
    • postCreateCommand: Runs once after the container is created. Ideal for initial setup like pip install, npm install, database migrations.
    • postStartCommand: Runs every time the container starts (including after creation). Good for starting services or performing checks.
    • updateContentCommand: Runs when you pull changes into your workspace after the container is created. Useful for automatically updating dependencies based on changed lock files.
    • postAttachCommand: Runs every time VS Code attaches to the container.
  • Managing Extensions: The customizations.vscode.extensions list is key. It ensures that every developer using the Dev Container automatically gets the necessary VS Code extensions installed inside the container. This guarantees consistent tooling (linters, formatters, debuggers, language support) for everyone.
  • Secrets Management (Brief Mention): Never hardcode secrets (API keys, passwords) in your Dockerfile or devcontainer.json as these files are committed to version control. Common strategies include:
    • Using environment variables passed in via Docker or devcontainer.json (less secure for sensitive secrets).
    • Mounting Docker secrets.
    • Using tools like HashiCorp Vault or cloud provider secret managers, accessed via code running inside the container.
    • For local development, sometimes mounting a local secrets file (e.g., .env) that is not committed to Git is acceptable, but be careful.
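The mounted-.env approach can be sketched with a small stdlib-only loader (the file name, keys, and helper are illustrative; real projects often use the python-dotenv package instead). It deliberately does not overwrite variables that are already set, e.g. by a Compose environment: block:

```python
# Illustrative sketch: load KEY=VALUE pairs from a local, git-ignored
# .env file into os.environ without overwriting values that are already
# set. File name and keys are examples, not a fixed convention.
import os
from pathlib import Path

def load_dotenv(path: str = ".env") -> None:
    env_file = Path(path)
    if not env_file.exists():
        return  # no local secrets file; rely on real environment variables
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
    # partition on the first '=' so values may themselves contain '='
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

load_dotenv()
api_key = os.environ.get("API_KEY", "")  # empty string if no secret was provided
```

Remember to list the secrets file in both .gitignore and .dockerignore so it never lands in version control or in an image layer.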

6. Tips and Best Practices

  • Base Images: Start with official (python:3.x.x-slim) or well-maintained community images (mcr.microsoft.com/devcontainers/...). Slim variants are smaller.

  • Leverage Features: Before adding complex RUN commands to your Dockerfile, check if a pre-built Dev Container Feature exists (search the web or use the VS Code UI). Features simplify adding common tools (Docker CLI, databases, Node.js, CLIs).

  • Image Size: Keep images reasonably small. Use .dockerignore, multi-stage builds (if complex), combine RUN commands, and clean up package manager caches (apt-get clean, rm -rf /var/lib/apt/lists/*, pip install --no-cache-dir). Smaller images build faster and consume less disk space.

  • Version Control: Always commit the .devcontainer folder (including devcontainer.json, Dockerfile, any scripts it uses) to your Git repository. This is the core benefit – sharing the environment definition!

  • .dockerignore: When using a Dockerfile, create a .dockerignore file in the context directory (usually your project root, alongside the .devcontainer folder) to prevent unnecessary files (like .git, virtual environments, build artifacts, .vscode) from being sent to the Docker daemon during the build. This speeds up builds significantly.

    # .dockerignore
    .git
    .vscode
    __pycache__
    *.pyc
    .venv
    venv
    *.env
    .pytest_cache
    

7. Conclusion

VS Code Development Containers provide a robust, reproducible, and isolated environment for your Python projects. By defining your environment as code within the .devcontainer configuration, you eliminate setup friction, ensure consistency across your team, and keep your host machine clean.

Whether you start with a simple template or build a highly customized environment with a Dockerfile, Dev Containers integrate seamlessly with VS Code, offering a powerful and productive development experience.

Start using them today! Add a .devcontainer configuration to your next Python project – you'll appreciate the consistency and ease of setup. Happy coding!


Part 2: Advanced Concepts

Welcome back! You've got your first Python Dev Containers running, understand the core benefits, and can customize basic environments. Now, let's level up your skills by exploring multi-container setups, sophisticated dependency management, optimization strategies, and more nuanced integrations.

1. Multi-Container Applications with Docker Compose

Real-world applications often consist of multiple services – your Python backend, a database (like PostgreSQL or Redis), maybe a message queue, or a separate frontend service. Dev Containers seamlessly support this using Docker Compose.

Why use Docker Compose with Dev Containers?

  • Define Multiple Services: Easily define and link multiple containers needed for your development environment.
  • Networked Environment: Compose automatically sets up a network, allowing containers to communicate with each other using service names (e.g., your Python app can connect to db or redis).
  • Manage Service Lifecycles: Start, stop, and manage all related services together.

Scenario: Let's enhance our Flask app from Part 1 to use a Redis cache.

  1. Create docker-compose.yml: Inside the .devcontainer folder (or sometimes in the project root, referenced from devcontainer.json), create a docker-compose.yml file:

    # .devcontainer/docker-compose.yml
    version: '3.8'
    
    services:
      # Your Python application service
      app:
        # Build instructions using the Dockerfile from Part 1
        build:
          context: . # Context is the .devcontainer folder where Dockerfile resides
          dockerfile: Dockerfile
          # Add build args if needed
          # args:
          #   PYTHON_VERSION: "3.11"
    
        volumes:
          # Mount the project directory (adjust source path if compose file is in root)
          # The target path should match your WORKDIR in the Dockerfile
          - ..:/app:cached # Mount project root to /app in the container
    
        # Keep the container running after the entrypoint/command finishes
        # Useful for attaching VS Code; often used with a sleep command or tail -f /dev/null
        # Or, configure your app to run directly (like below)
        command: sleep infinity # Or: flask run --host=0.0.0.0 --port=5000
    
        ports:
          # Forward port 5000 from the container to the host
          - "5000:5000"
    
        environment:
          # Example: Set environment variables for your app
          - REDIS_HOST=redis # Use the service name 'redis' to connect
          - REDIS_PORT=6379
          # PYTHONUNBUFFERED=1 # Often useful for seeing logs immediately
    
        # Depends on Redis starting first (optional but good practice)
        depends_on:
          - redis
    
      # The Redis service
      redis:
        image: "redis:alpine" # Use an official Redis image
        # ports:
        #   # Optionally forward Redis port to host for external tools (usually not needed)
        #   # - "6379:6379"
        volumes:
          # Optional: Persist Redis data across container restarts using a named volume
          - redis-data:/data
    
    # Define named volumes (optional, for data persistence)
    volumes:
      redis-data:
    
  2. Modify devcontainer.json: Update your .devcontainer/devcontainer.json to use the Docker Compose file instead of build or image.

    // .devcontainer/devcontainer.json
    {
        "name": "Python Flask + Redis",
    
        // Specify the Docker Compose file(s)
        "dockerComposeFile": "docker-compose.yml", // Path relative to .devcontainer folder
    
        // The name of the service VS Code should connect to
        "service": "app", // Must match a service name in docker-compose.yml
    
        // The workspace folder inside the 'service' container
        "workspaceFolder": "/app", // Must match the target path of your volume mount
    
        // Optional: Shutdown Compose services when VS Code closes window
        "shutdownAction": "stopCompose", // Or "none"
    
        // Keep forwardPorts if needed (often defined in Compose 'ports' now)
        // "forwardPorts": [5000],
    
        // postCreateCommand/postStartCommand can still be used here,
        // they run *inside* the specified 'service' container ('app' in this case)
        // "postCreateCommand": "pip install -r requirements.txt", // If not done in Dockerfile
    
        "customizations": {
            "vscode": {
                // Settings and extensions apply to the 'app' service container
                "settings": {
                    "python.defaultInterpreterPath": "/usr/local/bin/python",
                    "python.linting.pylintEnabled": true,
                    "python.linting.enabled": true,
                    "python.formatting.provider": "black",
                    "editor.formatOnSave": true
                    // Maybe add settings for Redis extensions if you use them
                },
                "extensions": [
                    "ms-python.python",
                    "ms-python.vscode-pylance",
                    "ms-python.flake8",
                    "ms-python.black-formatter",
                    "redhat.vscode-yaml" // Useful for compose files
                    // Add Redis client extensions if desired
                ]
            }
        }
        // If using a non-root user in your app service's Dockerfile:
        // "remoteUser": "vscode"
    }
    

    Key devcontainer.json Properties for Compose:

    • dockerComposeFile: Path (or array of paths) to your Compose file(s).
    • service: The name of the service defined in your Compose file that VS Code should attach to and where your development happens (usually your application container).
    • workspaceFolder: The path inside the specified service container where your project code is mounted (defined by volumes in Compose).
    • shutdownAction: Controls what happens to the Compose services when you close VS Code (stopCompose is common).
  3. Update Application Code (Example): Modify app.py and requirements.txt to use Redis.

    # requirements.txt
    Flask==3.0.0
    redis==5.0.1
    
    # app.py
    from flask import Flask
    import redis
    import os
    
    app = Flask(__name__)
    
    # REDIS_HOST/REDIS_PORT are expected to be set by docker-compose.yml
    # (the service name 'redis' doubles as the hostname on the Compose network);
    # fall back to localhost so the app still runs outside the container
    redis_host = os.environ.get('REDIS_HOST', 'localhost')
    redis_port = int(os.environ.get('REDIS_PORT', 6379))
    try:
        r = redis.Redis(host=redis_host, port=redis_port, db=0, decode_responses=True)
        r.ping() # Check connection
        redis_enabled = True
    except redis.exceptions.ConnectionError as e:
        print(f"Could not connect to Redis: {e}")
        redis_enabled = False
    
    @app.route('/')
    def hello_world():
        if not redis_enabled:
            return "Hello from Flask (Redis disabled)!"
        try:
            visit_count = r.incr('visits')
        except redis.exceptions.ConnectionError as e:
            # Handle a transient connection issue during the request
            print(f"Redis connection error during request: {e}")
            return "Error connecting to Redis during request.", 500
        return f'Hello from Flask inside a Dev Container! Visits: {visit_count}'
    
    if __name__ == '__main__':
        app.run(debug=True, host='0.0.0.0', port=5000)
  4. Rebuild and Run:

    • Use the Command Palette: Dev Containers: Rebuild and Reopen in Container.
    • Docker Compose will build/pull the images for app and redis and start both services. VS Code will attach to the app service.
    • Open the integrated terminal (now inside the app container).
    • Run the Flask app (if not started by command in Compose): flask run --host=0.0.0.0 --port=5000
    • Access http://localhost:5000. You should see the visit counter incrementing on each refresh, demonstrating the connection to the Redis container.
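The reason the same app.py works both inside Compose and on your host is that all connection details flow through environment variables. A minimal sketch of that pattern (`redis_settings` is a hypothetical helper name, not part of the app above):

```python
import os

def redis_settings(env=None):
    """Resolve Redis connection details from environment variables."""
    env = os.environ if env is None else env
    # Compose injects REDIS_HOST/REDIS_PORT; fall back to local defaults
    host = env.get("REDIS_HOST", "localhost")
    port = int(env.get("REDIS_PORT", "6379"))
    return host, port

# Inside the Compose network, the service name doubles as the hostname
print(redis_settings({"REDIS_HOST": "redis"}))  # ('redis', 6379)
print(redis_settings({}))                       # ('localhost', 6379)
```

Because nothing is hard-coded, the identical code path serves local runs, the Dev Container, and CI.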

2. Advanced Dependency Management Strategies

While pip install -r requirements.txt in postCreateCommand or the Dockerfile works, let's refine dependency management for Python projects within Dev Containers.

  • Locking Dependencies: Always use a lock file (requirements.txt generated via pip freeze > requirements.txt or pip-compile, poetry.lock, pdm.lock) to ensure reproducible builds. Commit this lock file to Git.

  • Using Poetry or PDM: These modern tools manage dependencies, locking, and virtual environments effectively.

    • Installation: Install Poetry/PDM within your Dockerfile:

      # Example for Poetry
      RUN pip install poetry
      # Skip Poetry's own virtualenvs; the container already provides isolation
      RUN poetry config virtualenvs.create false
      
      
    • Installing Dependencies: Copy pyproject.toml and poetry.lock (or pdm.lock) and install in the Dockerfile or postCreateCommand:

      # In Dockerfile (preferred for caching)
      COPY pyproject.toml poetry.lock* ./
      RUN poetry install --no-root --no-interaction --no-ansi
      
      # Or in postCreateCommand (runs after container creation)
      # "postCreateCommand": "poetry install"
      
      
  • Virtual Environments Inside the Container:

    • Pros: Isolates project dependencies even within the container, mirroring common local Python workflows. Can be useful if multiple Python projects share one Dev Container image (less common).
    • Cons: Adds a small layer of complexity (remembering to activate). Can be redundant since the container is already an isolated environment.
    • Implementation:
      • Create the venv in Dockerfile or postCreateCommand: python -m venv /opt/venv

      • Install dependencies into the venv: /opt/venv/bin/pip install -r requirements.txt

      • Configure VS Code to use the venv's interpreter in devcontainer.json:

        "settings": {
            "python.defaultInterpreterPath": "/opt/venv/bin/python",
            // OR let the Python extension find it
            // "python.pythonPath": "/opt/venv/bin/python" // Older setting
        }
        
      • Activate the venv in the terminal (source /opt/venv/bin/activate) or run commands using the venv's path (/opt/venv/bin/python app.py).
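The three implementation steps above can be collapsed into a short Dockerfile sketch. Prepending the venv's bin directory to PATH (a common convention, not something the steps above prescribe) makes explicit activation unnecessary in terminals and lifecycle commands:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
# Create the venv once at image build time
RUN python -m venv /opt/venv
# 'python' and 'pip' now resolve to /opt/venv/bin first
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt .
RUN pip install -r requirements.txt
```

With PATH set this way, python.defaultInterpreterPath can still point explicitly at /opt/venv/bin/python, as in the settings snippet above.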

Recommendation: For most single-project Dev Containers, installing dependencies directly into the container's site-packages (as shown in initial examples or with Poetry/PDM configured not to create venvs) is often simpler and sufficient, thanks to the container's inherent isolation.

3. Optimizing Your Dev Container Build

Slow builds waste time. Here's how to speed things up:

  • Leverage Docker Build Cache:

    • Order Matters: Structure your Dockerfile instructions from least frequently changing to most frequently changing. Install OS packages and dependencies before copying your application code.

    • Copy Selectively: Copy only necessary files (requirements.txt, pyproject.toml, poetry.lock) before running pip install or poetry install. This ensures the dependency installation layer is cached unless those specific files change.

      # GOOD: Copy only requirements first
      COPY requirements.txt .
      RUN pip install -r requirements.txt
      # Copy the rest of the code later
      COPY . .
      
      
  • Use .dockerignore Effectively: As mentioned in Part 1, prevent unnecessary files (.git, __pycache__, *.pyc files, local virtual environments, large data files) from being included in the build context sent to the Docker daemon. This is crucial for speed, especially when copying the entire project (COPY . .).

  • Use Specific Base Image Versions: Avoid using latest tags for base images (python:latest). Pin to specific versions (e.g., python:3.11.5-slim-bullseye) for predictable builds and better caching.

  • Multi-Stage Builds: For complex scenarios where you need build tools or large SDKs only for compiling dependencies or building assets (but not in the final runtime image), use multi-stage builds. This keeps the final image leaner and faster to pull/start.

    # Stage 1: Build stage with potentially heavy build dependencies
    FROM python:3.11 as builder
    WORKDIR /build
    # Install build tools (e.g., compilers, Rust for some Python packages)
    RUN apt-get update && apt-get install -y --no-install-recommends build-essential cargo
    COPY requirements.txt .
    # Build wheels or install packages that need compilation
    RUN pip wheel --no-cache-dir --wheel-dir=/wheels -r requirements.txt
    
    # Stage 2: Final runtime stage, slim base image
    FROM python:3.11-slim
    WORKDIR /app
    # Copy only the built wheels/packages from the builder stage
    COPY --from=builder /wheels /wheels
    COPY requirements.txt .
    # Install exclusively from the pre-built local wheels (no network access needed)
    RUN pip install --no-cache-dir --no-index --find-links=/wheels -r requirements.txt \
        && rm -rf /wheels
    COPY . .
    # Set user, expose port, CMD etc.
    EXPOSE 5000
    CMD ["python", "app.py"]
    
    
  • Dev Container Features: Using pre-built Features often involves optimized installation steps compared to writing them manually in your Dockerfile.
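As a concrete sketch of the .dockerignore advice above, a typical Python project's file might look like this (entries are common defaults; adjust to your repository):

```
# .dockerignore: keep the build context small
.git
__pycache__/
*.pyc
*.pyo
.venv/
venv/
.pytest_cache/
*.egg-info/
data/
```

Each pattern excluded here never reaches the Docker daemon, so COPY . . layers stay small and cache invalidation happens less often.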

4. Leveraging Dev Container Features

We touched on Features in Part 1. They are self-contained units of installation code and configuration, designed to add tools, runtimes, or libraries to your Dev Container easily and reliably.

  • Discovering Features: You can find official and community Features:

    • Through the VS Code UI when adding configuration (Dev Containers: Add Development Container Configuration Files... or Dev Containers: Configure Container Features...).
    • On the containers.dev website.
    • In repositories like github.com/devcontainers/features.
  • Adding Features: Add them to the features object in your devcontainer.json. You can configure options for each feature.

    // .devcontainer/devcontainer.json
    {
      // ... other properties
      "features": {
        // Using a feature for NodeJS
        "ghcr.io/devcontainers/features/node:1": {
          "version": "18" // Specify options for the feature
        },
        // Adding Docker-in-Docker capability
        "ghcr.io/devcontainers/features/docker-in-docker:2": {
            "version": "latest",
            "moby": true
        },
        // Adding AWS CLI
         "ghcr.io/devcontainers/features/aws-cli:1": {}
      }
      // ...
    }
    
  • Why Use Them?

    • Reusability: Define tool installations once, reuse across projects.
    • Simplicity: Avoid complex Dockerfile scripting for common tools.
    • Maintainability: Features are often maintained and updated by the community or vendors.

Consider using Features for things like the AWS CLI, Azure CLI, Terraform, Node.js, Docker CLI (within the container), database clients, etc., before adding custom RUN commands to your Dockerfile.

5. Git Integration Inside the Container

Working with Git inside the container is usually seamless, but credential management requires attention.

  • File Access: Your .git directory is part of the mounted workspace, so Git commands (git status, git log, git commit) executed inside the container operate on your project repository correctly.
  • Credentials: Pushing or pulling requires Git to authenticate with your remote (e.g., GitHub, GitLab).
    • HTTPS: If you clone using HTTPS, Git might prompt for credentials. The easiest solution is often to use a Git credential manager on your host machine that the container can access.
      • Docker Desktop Credential Helper: Often configured by default to securely forward host credentials.
      • VS Code Automatic Forwarding: VS Code attempts to forward authentication requests to the host, where your OS keychain or Git credential manager can handle them.
    • SSH: If you use SSH keys:
      • SSH Agent Forwarding: The most secure and common method. Ensure your SSH agent is running on the host (ssh-add -l should show keys). VS Code typically forwards the agent socket automatically. You might need to configure Docker Desktop settings or SSH config for this.
      • Mounting SSH Keys (Less Secure): You can mount your ~/.ssh folder into the container, but this is generally not recommended as it exposes your private keys directly to the container environment.
  • GPG Signing: If you sign commits, ensure your GPG key is accessible via agent forwarding, similar to SSH keys.

Tip: Check the Dev Containers: Log output in VS Code if you encounter Git authentication issues; it often provides clues about credential forwarding.

6. Debugging Considerations

Debugging Python code within a Dev Container using VS Code is remarkably straightforward, as the Python extension runs inside the container.

  • launch.json: Your standard .vscode/launch.json file for configuring debugging sessions works as expected. Define configurations for running/debugging specific files, Flask/Django apps, pytest, etc. These configurations will execute within the container environment.

    // .vscode/launch.json (Works inside the Dev Container)
    {
        "version": "0.2.0",
        "configurations": [
            {
                "name": "Python: Current File",
                "type": "python",
                "request": "launch",
                "program": "${file}",
                "console": "integratedTerminal"
            },
            {
                "name": "Python: Flask",
                "type": "python",
                "request": "launch",
                "module": "flask",
                "env": {
                    "FLASK_APP": "app.py", // Or your app factory path
                    "FLASK_DEBUG": "1",
                    "REDIS_HOST": "redis" // Ensure env vars needed by app are set
                },
                "args": [
                    "run",
                    "--no-debugger", // Use VS Code debugger, not Werkzeug's
                    "--no-reload",   // Let VS Code handle restarts if needed
                    "--host=0.0.0.0",
                    "--port=5000"
                ],
                "jinja": true,
                "console": "integratedTerminal"
            }
        ]
    }
    
  • Path Mapping: VS Code and the Python debugger automatically handle mapping paths between your local workspace and the mounted path inside the container (workspaceFolder), so breakpoints usually work without extra configuration.

  • Attaching to Running Processes: You can also configure launch.json to attach the debugger to a Python process already running inside the container.
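A hedged sketch of such an attach configuration, assuming the target process was started with debugpy listening (for example, python -m debugpy --listen 0.0.0.0:5678 app.py); port 5678 is simply debugpy's conventional default, not something fixed by this project:

```json
// Add to the "configurations" array in .vscode/launch.json
{
    "name": "Python: Attach to debugpy",
    "type": "python",
    "request": "attach",
    "connect": {
        "host": "localhost",
        "port": 5678
    }
}
```

Since VS Code runs inside the container, "localhost" here refers to the container itself, so no extra port forwarding is needed for the attach.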

7. Deeper Dive into Lifecycle Scripts

Let's revisit lifecycle scripts (postCreateCommand, postStartCommand, updateContentCommand) with more concrete examples:

  • postCreateCommand: Runs only once after the container is built. Perfect for setup that doesn't need repeating.

    "postCreateCommand": "pip install -r requirements.txt && pre-commit install && echo 'Dev Container Initialized!'"
    
  • postStartCommand: Runs every time the container starts. Good for ensuring services are running or performing checks.

    // Example: Start a background process (e.g., a mock server) needed during development
    "postStartCommand": "nohup python /path/to/mock_server.py &"
    // Or check database connection/run migrations
    // "postStartCommand": "python manage.py check_db || python manage.py migrate"
    
  • updateContentCommand: Runs during container creation and again whenever new content lands in the source tree (cloud services such as GitHub Codespaces use it during prebuilds). Useful for keeping dependencies in sync with the repository.

    // Re-install dependencies if requirements change after a git pull
    "updateContentCommand": "if [ -f requirements.txt ]; then pip install -r requirements.txt; fi"
    // Or for Poetry
    // "updateContentCommand": "if [ -f poetry.lock ]; then poetry install; fi"
    

Conclusion: Mastering Your Development Environment

By leveraging Docker Compose for multi-service applications, refining dependency management with tools like Poetry, optimizing builds, utilizing Features, and understanding Git and debugging integration, you can create highly sophisticated and efficient development environments using VS Code Dev Containers.

These advanced techniques build upon the core principles of isolation and reproducibility, further reducing setup time, eliminating environment drift, and ultimately letting you and your team focus on what matters most: writing great Python code. Keep exploring, keep customizing, and enjoy the power of defining your development environment as code!
