Fast Local Docker Setup for Daily Development

1/5/2026 · DevOps · By Tech Writers
Docker · Local Development · Developer Experience

Why Local Docker Often Feels Heavy for Day-to-Day Development

Many teams already use Docker in production, but avoid using it for day-to-day development because of complaints like:

  • Containers take too long to start.
  • Laptops heat up quickly and fans go wild.
  • Code changes feel slower than when running apps directly on the host.

Often the problem isn’t Docker itself, but that:

  • Your Dockerfile isn’t cache-friendly → tiny changes trigger full rebuilds.
  • Images are too fat → lots of build-time dependencies are shipped into runtime.
  • Compose brings up too many services → everything is always on, even when you don’t need it in development.
  • Volume/bind mount choices are suboptimal, so IO becomes the bottleneck (especially on macOS/Windows via VM).

The good news: with a few simple tweaks, you can get a local Docker setup that:

  • Starts fast enough to be usable every day.
  • Has a feedback loop close to running the app directly on the host.
  • Matches production more closely (fewer “works on my machine” moments).

Optimization 1: Write an Efficient Dockerfile with Layer Caching

Layer caching is your primary weapon for speeding up builds. The core principle: the less often a layer changes, the more often Docker can reuse it.

Practical tips:

  • Separate dependency installation from copying source code
    Bad:

    COPY . .
    RUN npm install

    Better:

    COPY package*.json ./
    RUN npm install --production=false
    COPY . .

    This way, when you only change application code, the costly npm install layer can still be reused.

  • Order instructions from least to most frequently changing
    For example:

    1. Set base image & basic env.
    2. Install system dependencies.
    3. Install app dependencies.
    4. Only then copy source code.
  • Use .dockerignore properly
    Ensure folders like node_modules, dist, .git, and other large artifacts are excluded from the build context. This shrinks the context and speeds up builds.
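As a starting point, a minimal .dockerignore for a typical Node.js project might look like this (adjust the entries to your own build outputs and tooling):

```
node_modules
dist
.git
*.log
.env
Dockerfile
docker-compose*.yml
```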

Tools like dive help you inspect which layers are largest and most often invalidated, so you can reorder your Dockerfile based on those insights.
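If you have dive installed, inspecting an image is a one-liner (the image name here is a placeholder):

```
# Browse layers interactively: size per layer, wasted space, file tree diffs
dive myapp:dev
```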

Optimization 2: Multi-Stage Builds for Leaner Images

Multi-stage builds let you use a single Dockerfile for:

  • A build stage (using a heavier image with compilers, dev tools, etc.).
  • A runtime stage (a minimal image containing only the final artifacts).

A common pattern for a Node.js app:

FROM node:22 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM node:22-slim AS runtime
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/package*.json ./
RUN npm install --omit=dev
CMD ["node", "dist/index.js"]

Benefits:

  • Runtime images are smaller and faster to pull.
  • Build-time dependencies (compilers, tooling) are not shipped to production.
  • It’s easier to keep a smaller attack surface.

For local development you can:

  • Use the build stage for development (with bind mounts and hot reload).
  • Still have a clean runtime stage for testing closer to production.
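One way to wire this up in Compose (service names, paths, and the dev script are illustrative) is to point the dev service at the build stage with target, while production images use the full multi-stage build:

```yaml
services:
  app:
    build:
      context: .
      target: build       # stop at the heavier build stage for development
    command: npm run dev  # hypothetical dev script with hot reload
    volumes:
      - .:/app            # bind mount source so edits are picked up live
      - /app/node_modules # keep the container's node_modules, don't shadow it with the host's
    ports:
      - "3000:3000"
```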

Optimization 3: Docker Compose Profiles for Flexible Services

We often write a single docker-compose.yml that starts every service, even though for most daily work you don’t need everything (e.g. workers, schedulers, admin tools).

With Compose profiles, you can group services:

services:
  api:
    # no profiles attribute → always starts
    # ...
  web:
    # ...
  worker:
    profiles: ["worker"]
    # ...
  admin:
    profiles: ["admin"]
    # ...

Then run what you need:

  • Daily dev: docker compose up (starts only services without a profiles attribute — here, api and web).
  • When testing workers: docker compose --profile worker up.
  • When debugging admin tools: docker compose --profile admin up.

Benefits:

  • Your laptop isn’t forced to run all services all the time.
  • Resource usage (CPU, RAM, disk IO) is more controlled.
  • Developers have environment presets for different activities without editing Compose files each time.

Optimization 4: Using Volumes and Bind Mounts Wisely

Volumes and bind mounts have a big impact on performance, especially on OSes running Docker inside a VM.

Practical principles:

  • Use bind mounts for source code that changes frequently
    So hot reload works and you don’t need to rebuild the image on every change.

  • Use named volumes for internal container data
    For example local database data, build caches, etc. These are usually faster and more stable than bind mounting to the host filesystem.

  • Avoid bind mounting huge folders you don’t need
    Don’t mount the entire project root if you only need a few directories. Combine a good .dockerignore with more targeted mounts.
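A sketch of these principles in Compose (service names and paths are illustrative):

```yaml
services:
  app:
    build: .
    volumes:
      - ./src:/app/src  # bind mount only the code that changes frequently
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data  # named volume: faster than a host bind mount and survives restarts

volumes:
  pgdata:
```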

If filesystem performance is slow:

  • Limit which files your dev server watches (e.g. via watchOptions).
  • Consider running heavy databases (Postgres, Elasticsearch) outside Docker during development if they are truly killing performance and you don’t need 100% production parity.

Enable Hot Reload So Feedback Loops Feel Almost Native

One major reason people avoid Docker for development is slow feedback loops. The key is to combine:

  • Bind mounts for source code.
  • A dev server with hot reload / live reload.

Typical pattern:

  • In your dev Dockerfile or Compose, run commands like npm run dev or uvicorn --reload.
  • Mount the project folder from host to container (.:/app) so file changes are visible to the dev server inside the container.
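A minimal Compose sketch of this pattern for a Python service (assuming an image that already has uvicorn installed; names and ports are placeholders):

```yaml
services:
  api:
    build: .          # hypothetical image with uvicorn and the app's dependencies
    working_dir: /app
    command: uvicorn main:app --reload --host 0.0.0.0 --port 8000
    volumes:
      - .:/app        # host edits become visible inside the container, triggering --reload
    ports:
      - "8000:8000"
```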

Things to watch out for:

  • Ensure dev server watch paths match container paths, not host paths.
  • Limit the number of watched files to avoid overhead (exclude node_modules, build output, etc.).
  • If hot reload is still slow, check whether the problem is:
    • A mount that covers too many files.
    • The dev server doing too much recompilation on each change.

Your goal: from “save” to seeing changes in browser/CLI, the delay shouldn’t be much worse than running the app directly on the host.

Measure and Monitor Local Container Performance

Optimizing without measurement often ends in “feels faster” with no real clarity. Track a few simple metrics:

  • Startup time: How many seconds from docker compose up until the main service is ready for requests?
  • Resource usage: CPU, RAM, and disk IO per container (via docker stats or other tools).
  • Image size: Are images getting bigger every week? Are they still reasonable for devs to pull from your internal registry?
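A few ad-hoc commands cover most of these measurements (note that --wait relies on healthchecks being defined for your services):

```
# Startup time: --wait blocks until containers report healthy
time docker compose up -d --wait

# Resource usage snapshot per container
docker stats --no-stream

# Image sizes at a glance
docker images --format '{{.Repository}}:{{.Tag}}\t{{.Size}}' | sort
```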

A suggested iteration loop:

  1. Record a baseline (startup times, image sizes, laptop fan noise).
  2. Apply one optimization (e.g. improve Dockerfile for caching).
  3. Measure again objectively.
  4. Document which changes had real impact.

Over time, your team’s local Docker setup will steadily improve, and developers will feel comfortable making it the default part of their daily workflow, not something they only use when “trying to mimic production.”


Got a favorite local Docker trick that makes your development dramatically faster? Or a story about containers suddenly slowing down and being painful to debug? Share it in the comments! 💬