DigitalOcean Services Status

All Systems Operational
API Operational
Billing Operational
Cloud Control Panel Operational
Cloud Firewall Operational
Community Operational
DNS Operational
Support Center Operational
Reserved IP Operational
WWW Operational
App Platform Operational
Global Operational
Amsterdam Operational
Bangalore Operational
Frankfurt Operational
London Operational
New York Operational
San Francisco Operational
Singapore Operational
Sydney Operational
Toronto Operational
Container Registry Operational
Global Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
Droplets Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Event Processing Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Functions Operational
Global Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
GPU Droplets Operational
Global Operational
NYC2 Operational
TOR1 Operational
Managed Databases Operational
Global Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Monitoring Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Networking Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Kubernetes Operational
Global Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Load Balancers Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Spaces Operational
Global Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
Spaces CDN Operational
Global Operational
AMS3 Operational
FRA1 Operational
NYC3 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
VPC Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Volumes Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Past Incidents
Dec 21, 2024

No incidents reported today.

Dec 20, 2024

No incidents reported.

Dec 19, 2024
Resolved - From 01:23 UTC to 03:04 UTC, users may have experienced events, such as powering Droplets on/off and resizing them, becoming stuck or delayed in the NYC3, AMS2, BLR1, SGP1, and SYD1 regions. Additionally, Managed Database creations were delayed in all regions.

Our Engineering team has confirmed full resolution of the issue; delayed events and new events should now complete as normal.

Thank you for your patience through this issue. If you continue to experience any issues, please open a support ticket from within your account.

Dec 19, 05:03 UTC
Update - Our Engineering team has confirmed that new events are succeeding, and we're finalizing cleanup of any stalled events. We will post an update as soon as this issue is fully resolved.
Dec 19, 04:28 UTC
Monitoring - Event processing in NYC3 has now been re-enabled, and users may resume submitting new events. We are seeing event processing success rates increase.

We will post another update soon. Thank you for your patience.

Dec 19, 03:08 UTC
Update - Our Engineering team continues to work to mitigate the issue with event processing in multiple regions.

To help address the issue, we are temporarily disabling all Droplet-related actions in the NYC3 region for a period of 20 minutes. During this time, users in NYC3 will not be able to submit actions, such as creating, resizing, powering on/off, etc.

We apologize for the inconvenience and appreciate your patience as we work to resolve this issue. Further updates will be shared as they become available.

Dec 19, 02:42 UTC
Investigating - Our Engineering team is investigating an issue impacting Droplet event processing in multiple regions.

At this time, users may see events, such as powering Droplets on/off and resizing them, appearing stuck or delayed in the NYC3, AMS2, BLR1, SGP1, and SYD1 regions. Additionally, Managed Database creations are delayed in all regions.

We apologize for the inconvenience and will share an update once we have more information.

Dec 19, 01:45 UTC
Dec 18, 2024

No incidents reported.

Dec 17, 2024

No incidents reported.

Dec 16, 2024

No incidents reported.

Dec 15, 2024

No incidents reported.

Dec 14, 2024

No incidents reported.

Dec 13, 2024
Resolved - From 13:25 to 14:45 UTC, our Engineering team observed a networking issue in our NYC3 region. During this time, users may have experienced Droplet and VPC connectivity issues. Users should no longer be experiencing these issues.

We apologize for the inconvenience. If you have any questions or continue to experience issues, please reach out via a Support ticket on your account.

Dec 13, 13:25 UTC
Resolved - Our Engineering team has confirmed that the issues with Spaces CDN functionality have been fully resolved. Users should now be able to use CDN functionality normally.

If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.

Dec 13, 11:01 UTC
Monitoring - Our Engineering team has implemented a fix to resolve the CDN functionality issue and is currently monitoring the situation.

We will continue to monitor this at our end and will share an update once the issue is resolved completely.

Dec 13, 10:55 UTC
Investigating - Our Engineering team is investigating an issue affecting Spaces CDN functionality globally. During this time, users may experience issues when enabling Spaces CDN and with existing CDN services.

We apologize for the inconvenience and will share an update once we have more information.

Dec 13, 09:55 UTC
Resolved - Our Engineering team has confirmed the full resolution of the issue impacting Managed Database Operations, and all systems are now operating normally. Users may safely resume operations, including upgrades, resizes, forking, and ad-hoc maintenance patches.

If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.

Dec 13, 04:05 UTC
Monitoring - Our Engineering team has implemented a fix for the issue impacting Managed Database Operations. The team is monitoring the situation, and we will share an update once this is fully resolved.
Dec 13, 03:13 UTC
Update - Our Engineering team is continuing to investigate the root cause of the issue affecting Managed Database Operations. At this time, we are seeing improvement, and some new cluster creations are completing successfully.

To avoid potential downtime, we continue to ask users to refrain from performing operations that trigger node rotations, such as upgrades, resizes, forking, and ad-hoc maintenance patches.

We apologize for any inconvenience caused and appreciate your patience. We will post further updates as soon as we have more information.

Dec 12, 22:03 UTC
Update - During our investigation, we identified that operations triggering node rotation, such as upgrades, resizes, forking, and ad-hoc maintenance patches, may cause issues connecting to the cluster.

To prevent potential downtime, we recommend avoiding these operations until the issue is fully resolved.

We apologize for the inconvenience and appreciate your patience as we work to address the situation.

Dec 12, 19:35 UTC
Investigating - Our Engineering team is investigating an issue with Managed Database clusters which is causing forks, restores, and new cluster creations to fail. MongoDB clusters are unaffected and should be operating normally.

Our team is currently assessing the root cause and working to resolve the issue as quickly as possible.

We apologize for any inconvenience this may cause and will share an update as soon as we have more information.

Dec 12, 18:34 UTC
Resolved - Our Engineering team has confirmed that the issues with Authoritative DNS resolution in NYC1 have been fully resolved. DNS queries should now be resolving normally.

If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.

Dec 13, 01:24 UTC
Monitoring - Our Engineering team has implemented a fix to resolve the issues with Authoritative DNS Resolution in NYC1 and is currently monitoring the situation.

We will continue to monitor this at our end and will share an update once the issue is resolved completely.

Dec 13, 01:10 UTC
Investigating - Our Engineering team is investigating an issue with intermittent failures and increased latency in DNS resolution in NYC1.

During this time, a subset of users may experience issues with DNS resolution, or see errors returned when querying DNS records which are hosted on the DigitalOcean authoritative DNS infrastructure.

We apologize for the inconvenience and will share an update once we have more information.

Dec 12, 23:20 UTC
Dec 12, 2024
Dec 11, 2024
Resolved - Our Engineering team has resolved the issue preventing a subset of users from logging in, and the login flow is now operating normally.

If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.

Dec 11, 23:33 UTC
Monitoring - Our Engineering team has implemented a fix for the issue preventing a subset of users from logging in. The team is monitoring the situation, and we will share an update once this is fully resolved.
Dec 11, 22:53 UTC
Investigating - Our Engineering team is currently investigating an issue preventing users from logging in if they are flagged for an additional security challenge.

We are actively reviewing the root cause and working closely to resolve the issue as quickly as possible.

We apologize for the inconvenience and will provide updates as more information becomes available. Thank you for your patience.

Dec 11, 19:33 UTC
Completed - The scheduled maintenance has been completed.
Dec 11, 01:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Dec 10, 19:30 UTC
Scheduled - We are reaching out again to inform you that the Network maintenance in the AMS3 region, which was previously scheduled to start on 2024-12-10 at 12:00 UTC, has been rescheduled to the following window:

Start: 2024-12-10 19:30 UTC
End: 2024-12-11 01:00 UTC


We apologize for any inconvenience this short notice causes and thank you for your understanding. You may find the initial maintenance notice along with a description of any expected impact related to this work included at the bottom of this message.

Expected impact:

During the maintenance window, users may experience brief delays or failures with event processing for Droplets and Droplet-based services, including Managed Kubernetes, Load Balancers, Container Registry, and App Platform. We will endeavor to keep this to a minimum for the duration of the change.

If you have any questions related to this issue, please send us a ticket from your cloud support page: https://cloudsupport.digitalocean.com/s/createticket

Dec 10, 19:26 UTC
Resolved - From 16:10 UTC to 22:46 UTC, users may have experienced issues while executing Managed Database CRUD Operations.

Our Engineering team has confirmed the full resolution of the issue impacting Managed Database CRUD Operations, and all systems are now operating normally. Users may safely resume operations, including upgrades, resizes, forking, and ad-hoc maintenance patches.

If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.

Dec 11, 00:56 UTC
Monitoring - Our Engineering team has implemented a fix for the issue impacting Managed Database CRUD Operations. The team is monitoring the situation, and we will share an update once this is fully resolved.
Dec 10, 22:54 UTC
Identified - Our Engineering team has identified the cause of the issue that is impacting Managed Database CRUD Operations and is actively working on a fix.

To avoid potential downtime, we continue to ask users to refrain from performing operations that trigger node rotations, such as upgrades, resizes, forking, and ad-hoc maintenance patches.

Existing database clusters remain unaffected as long as no node rotation occurs due to DNS issues, and all other services are functioning as expected.

We apologize for any inconvenience caused and appreciate your patience as we work diligently to address the situation. Further updates will be shared as soon as they become available.

Dec 10, 22:44 UTC
Update - During our investigation, we identified that operations triggering node rotation, such as upgrades, resizes, forking, and ad-hoc maintenance patches, may also be impacted. To prevent potential downtime, we recommend avoiding these operations until the issue is fully resolved.

We apologize for the inconvenience and appreciate your patience as we work to address the situation.

Dec 10, 20:11 UTC
Investigating - Our Engineering team is investigating an issue causing Managed Database clusters to take longer than usual to be created. Our team is currently assessing the root cause and working to resolve the issue as quickly as possible.

Existing database clusters are not affected, and all other services are operating normally.

We apologize for any inconvenience this may cause and will share an update as soon as we have more information.

Dec 10, 19:22 UTC
Dec 10, 2024
Dec 9, 2024

No incidents reported.

Dec 8, 2024

No incidents reported.

Dec 7, 2024

No incidents reported.