
feat(deployment): add concurrent deployment queue system #3744

Open
mhbdev wants to merge 4 commits into Dokploy:canary from mhbdev:feature/deployment-queue-observability

Conversation

@mhbdev (Contributor) commented Feb 18, 2026

  • Add deployment concurrency configuration for servers (1-5)
  • Add local deployment concurrency configuration in web server settings
  • Implement queue-based deployment system with separate queues per target (see the sketch after this list)
  • Add queue summary display in deployment views showing running/queued jobs
  • Update UI to show "queued" status messages instead of immediate success
  • Replace single deployment queue with target-specific queues (local/server)
  • Add deployment worker refresh when concurrency settings change
  • Update database schema with deploymentConcurrency fields
  • Update all deployment enqueue points to use new queue system
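
The PR body stops at the bullet list, so as a reading aid, here is a minimal sketch of the per-target queue idea, assuming a BullMQ-style Queue/Worker API over Redis. The queue names, the `ensureWorker` helper, and the `processDeployment` handler are hypothetical illustrations, not the PR's actual code:

```ts
import { Queue, Worker, type Job } from "bullmq";

const connection = { host: "127.0.0.1", port: 6379 };

// One queue per deployment target: "local" for the host Dokploy runs on,
// or a per-server queue keyed by serverId.
const queues = new Map<string, Queue>();
const workers = new Map<string, Worker>();

function queueNameFor(target: string): string {
  return target === "local" ? "deployments-local" : `deployments-server-${target}`;
}

// Create (or re-create) the worker for a target with the configured
// concurrency, clamped to the 1-5 range the PR exposes in the UI.
async function ensureWorker(
  target: string,
  concurrency: number,
  processDeployment: (job: Job) => Promise<void>, // hypothetical job handler
): Promise<void> {
  const name = queueNameFor(target);
  if (!queues.has(name)) {
    queues.set(name, new Queue(name, { connection }));
  }
  const clamped = Math.min(5, Math.max(1, concurrency));

  // Refresh path: close the old worker so the new concurrency takes effect.
  const existing = workers.get(name);
  if (existing) await existing.close();

  workers.set(
    name,
    new Worker(name, processDeployment, { connection, concurrency: clamped }),
  );
}
```

Re-running `ensureWorker` with a new concurrency value is what the "deployment worker refresh" bullet refers to: the worker is replaced rather than mutated, since BullMQ fixes concurrency at worker construction.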

What is this PR about?

Please describe in a short paragraph what this PR is about.

Checklist

Before submitting this PR, please make sure that:

  • You created a dedicated branch based on the canary branch.
  • You have read the suggestions in the CONTRIBUTING.md file: https://github.com/Dokploy/dokploy/blob/canary/CONTRIBUTING.md#pull-request
  • You have tested this PR in your local instance. If you have not tested it yet, please do so before submitting. This helps avoid wasting maintainers' time reviewing code that has not been verified by you.

Greptile Summary

This PR replaces the single-queue deployment system with a per-target (local/server) queue architecture that supports configurable concurrency (1-5) for both the local host and remote servers. It adds new DB columns, UI controls, a queue summary display, and updates all deployment enqueue points to route jobs to the correct target queue.

  • Bug: Preview deployments routed to wrong queue — resolveTargetFromDeploymentJob checks buildServerId || serverId for regular applications but only checks serverId for application-preview, meaning preview builds will bypass the designated build server (see the sketch after this list).
  • Behavioral regression in "Clean Deployment Queue" — The old cleanAllDeploymentQueue called cancelAllJobs() to stop active deployments; the new implementation only removes waiting/delayed jobs, leaving active deployments running.
  • Race condition in worker refresh — withRefreshLock silently drops concurrent refresh requests, so rapid concurrency changes may not take effect.
  • Queue summary excludes preview jobs — isServiceJob in getQueueSummaryByType doesn't match application-preview, potentially underreporting queued jobs for an application.
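
For the routing and summary bullets, here is a hedged sketch of what fixes might look like. Only the field names `buildServerId`/`serverId` and the function names come from the review; the job shape and helper signatures are assumptions for illustration:

```ts
// Hypothetical job shape: type/serverId/buildServerId come from the review
// comments; everything else here is illustrative, not the PR's actual code.
interface DeploymentJob {
  type: "application" | "application-preview" | "compose";
  serverId?: string | null;
  buildServerId?: string | null;
}

// Route previews through the build server too; previously only plain
// applications consulted buildServerId, so preview builds bypassed it.
function resolveTargetFromDeploymentJob(job: DeploymentJob): string {
  if (job.type === "application" || job.type === "application-preview") {
    return job.buildServerId || job.serverId || "local";
  }
  return job.serverId || "local";
}

// Count previews toward a service's queued jobs so the queue summary
// doesn't underreport (addresses the isServiceJob gap noted above).
function isServiceJob(
  job: DeploymentJob,
  serviceType: "application" | "compose",
): boolean {
  if (serviceType === "application") {
    return job.type === "application" || job.type === "application-preview";
  }
  return job.type === "compose";
}
```

The "Clean Deployment Queue" regression is harder to sketch safely: BullMQ cannot hard-kill an active job, so restoring the old cancelAllJobs() behavior means removing waiting/delayed jobs and additionally signaling running workers to abort (for example via a cancellation flag the job processor checks).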

Confidence Score: 2/5

  • This PR has functional bugs that should be fixed before merging — preview deployments will be routed incorrectly when a build server is configured, and the "clean queue" action no longer stops active deployments.
  • Score of 2 reflects two logic bugs (preview deployment routing ignores buildServerId; cleanAllDeploymentQueue regression) and a race condition in the worker refresh lock. The schema migration and UI changes are sound, but the core queue routing logic needs fixes before this is safe to merge.
  • Pay close attention to apps/dokploy/server/queues/queueSetup.ts — it contains the preview deployment routing bug, the cleanAllDeploymentQueue regression, and the refresh lock race condition.

Last reviewed commit: 41eaa67

@mhbdev requested a review from Siumauricio as a code owner on February 18, 2026 at 18:07
@greptile-apps (bot) left a comment


23 files reviewed, 5 comments


!!id &&
(type === "application" || type === "compose") &&
isCloud === false,
refetchInterval: 1000,

1-second polling may be too aggressive

refetchInterval: 1000 means every open deployment page fires a queueSummaryByType query once per second. This query calls findApplicationById/findComposeById, findServerById, then fetches jobs from Redis. With several users or tabs open, this could create noticeable database and Redis load.

Consider increasing the interval to 3-5 seconds, or switching to a WebSocket/SSE push model that the rest of the deployment UI already uses.

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
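
A minimal sketch of the suggested change, assuming the tRPC + React Query client the Dokploy UI uses. The router path `api.deployment.queueSummaryByType` is hypothetical (only the procedure name comes from the comment), and the `enabled` condition mirrors the reviewed snippet:

```ts
// Illustrative only: poll every 5 seconds instead of every second, and
// pause polling when the tab is in the background.
const { data: queueSummary } = api.deployment.queueSummaryByType.useQuery(
  { id, type },
  {
    enabled:
      !!id &&
      (type === "application" || type === "compose") &&
      isCloud === false,
    refetchInterval: 5000,
    refetchIntervalInBackground: false,
  },
);
```

`refetchIntervalInBackground: false` is standard React Query behavior that stops hidden tabs from adding database and Redis load, which addresses the "several users or tabs open" concern directly.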

- Enhance `withRefreshLock` to coalesce concurrent refresh requests, ensuring only the latest concurrency settings are applied (see the sketch after this list)
- Extend `resolveTargetFromDeploymentJob` to correctly route both application and preview deployments
- Ensure `cleanAllDeploymentQueue` cancels active workers before clearing jobs
- Increase polling interval for deployment UI to reduce load
- Add comprehensive test suite covering queue routing, cleanup, refresh coalescing, and summary calculations
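
For the first item, a minimal sketch of a coalescing refresh lock; the module-level flags and the `refresh` callback are illustrative, and the PR's actual `withRefreshLock` may differ:

```ts
// Hypothetical module-level state for a single shared lock.
let refreshing = false;
let pending = false;

async function withRefreshLock(refresh: () => Promise<void>): Promise<void> {
  if (refreshing) {
    // Coalesce instead of dropping: remember that settings changed again.
    pending = true;
    return;
  }
  refreshing = true;
  try {
    do {
      pending = false;
      await refresh(); // re-reads the latest concurrency settings each pass
    } while (pending);
  } finally {
    refreshing = false;
  }
}
```

Rapid successive concurrency changes then collapse into at most one extra refresh pass that reads the latest settings, instead of being silently dropped as the review flagged.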
@NewZAZ commented Feb 19, 2026

That would be amazing; it would be fantastic to see this merged!

@AminDhouib

This is easily one of the most requested features on Dokploy. There is a similar PR open: #2127

It might be a good idea to compare both implementations and merge the best one, given the current limitations of the stack.
