DigitalOcean Services Status

All Systems Operational

API Operational
Billing Operational
Cloud Control Panel Operational
Cloud Firewall Operational
Community Operational
DNS Operational
Support Center Operational
Reserved IP Operational
WWW Operational
GenAI Platform Operational
App Platform Operational
Global Operational
Amsterdam Operational
Bangalore Operational
Frankfurt Operational
London Operational
New York Operational
San Francisco Operational
Singapore Operational
Sydney Operational
Toronto Operational
Container Registry Operational
Global Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
Droplets Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Event Processing Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Functions Operational
Global Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
GPU Droplets Operational
Global Operational
NYC2 Operational
TOR1 Operational
Managed Databases Operational
Global Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Monitoring Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Networking Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Kubernetes Operational
Global Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Load Balancers Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Spaces Operational
Global Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
Spaces CDN Operational
Global Operational
AMS3 Operational
FRA1 Operational
NYC3 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
VPC Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Volumes Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Status legend: Operational · Degraded Performance · Partial Outage · Major Outage · Maintenance
May 18, 2025

No incidents reported today.

May 17, 2025

No incidents reported.

May 16, 2025

No incidents reported.

May 15, 2025
Resolved - From 13:45 UTC to 15:50 UTC, users may have experienced an issue affecting GenAI Platform agents using Llama 3.3 70B, which were failing to respond.

Our Engineering team has confirmed full resolution of the issue.

If you continue to experience any issues, please reach out to our support team by opening a ticket from within your Cloud Control Panel.

May 15, 17:01 UTC
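
For users who want to confirm their own agent is responding again after an incident like this, a quick end-to-end request is usually enough. The sketch below is a minimal example, not DigitalOcean-provided tooling: it assumes the agent exposes an OpenAI-compatible chat completions route, and the endpoint URL and access key variables are placeholders to be replaced with the values shown for your agent in the Cloud Control Panel.

```python
# Minimal responsiveness check for a GenAI Platform agent.
# Assumption: the agent serves an OpenAI-compatible chat completions route.
# AGENT_ENDPOINT and AGENT_ACCESS_KEY are placeholders -- copy the real
# endpoint and endpoint access key from your agent in the control panel.
import os
import requests

AGENT_ENDPOINT = os.environ["AGENT_ENDPOINT"]      # e.g. "https://<your-agent-endpoint>/api/v1"
AGENT_ACCESS_KEY = os.environ["AGENT_ACCESS_KEY"]  # endpoint access key for the agent

resp = requests.post(
    f"{AGENT_ENDPOINT}/chat/completions",
    headers={"Authorization": f"Bearer {AGENT_ACCESS_KEY}"},
    json={
        "messages": [{"role": "user", "content": "Reply with a short greeting."}],
        "max_tokens": 64,
    },
    timeout=30,
)
resp.raise_for_status()  # a 5xx or timeout here suggests the agent is still not responding
print(resp.json()["choices"][0]["message"]["content"])
```
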
Monitoring - Our Engineering team has implemented a fix to address the issue impacting the GenAI Platform agents using Llama 3.3 70B. As of now, GenAI Platform agents using Llama 3.3 70B should respond without any issues.

We are actively monitoring the situation and will post an update as soon as the issue is fully resolved.

We apologize for the inconvenience and appreciate your patience.

May 15, 16:18 UTC
Identified - Our Engineering team has identified the cause of the issue affecting the GenAI Platform. During this time, GenAI Platform agents using Llama 3.3 70B will fail to respond.

A fix is in progress, and we will provide an update as soon as we have more information.

May 15, 15:26 UTC
Investigating - Our Engineering team is currently investigating an issue with the GenAI Platform where agents using Llama 3.3 70B are failing to respond.

We are actively working to identify the root cause. We apologize for the inconvenience and will share an update once we have more information.

May 15, 14:58 UTC
Resolved - From 03:20 UTC to 06:01 UTC, users may have experienced an issue affecting GenAI Platform agents using Llama 3.3 70B where chatbots were failing to respond.

Our Engineering team has confirmed full resolution of the issue. Users should now be able to run chatbots normally.

If you continue to experience any issues, please reach out to our support team by opening a ticket from within your Cloud Control Panel.

May 15, 06:53 UTC
Monitoring - Our Engineering team has implemented a fix to address the issue impacting GenAI Platform agents using Llama 3.3 70B where chatbots were failing to respond. We are currently monitoring the issue.

We will provide an update as soon as more information becomes available.

We apologize for the inconvenience and appreciate your patience.

May 15, 06:17 UTC
Investigating - We are investigating an issue where GenAI Platform agents are failing to respond. We apologize for the inconvenience this may cause.

May 15, 05:41 UTC
May 14, 2025
Resolved - Our Engineering team has completely resolved the issue that was affecting the Droplet search functionality within the DigitalOcean Cloud Control Panel. We have observed normal functionality following the fix and continued monitoring has shown stable behavior. Users should be able to successfully search for and locate their Droplets. The issue is now considered resolved.

We appreciate your patience and apologize for any inconvenience this may have caused. However, if you continue to face issues, please open a ticket with our Support team for further review.

May 14, 15:43 UTC
Monitoring - Our Engineering team has implemented a fix for the issue affecting the Droplet search functionality within the DigitalOcean Cloud Control Panel. Users should now be able to search for their Droplets using the search box on the Droplet listing page.

We are continuing to monitor the situation closely to ensure stability. Thank you for your patience during this process; we will provide a final update once we confirm the issue is fully resolved.

May 14, 14:32 UTC
Investigating - Our Engineering team is currently investigating an issue affecting the Droplet search functionality within the DigitalOcean Cloud Control Panel. Some customers may be experiencing difficulties when attempting to search for and locate their Droplets.

We are actively working to identify the root cause and will provide updates as we learn more. We understand how critical this functionality is for managing your infrastructure, and we appreciate your patience during the process.

We will share an update as more information becomes available.

May 14, 12:34 UTC
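
For reference, during a Control Panel search disruption like this one, Droplets can still be located by name through the public API. The sketch below is one possible approach rather than an official workaround: it lists Droplets with the documented GET /v2/droplets endpoint and filters them client-side. The API token variable and the "web-" name substring are placeholders.

```python
# List Droplets via the DigitalOcean API and filter by name locally.
# Placeholders: DIGITALOCEAN_TOKEN and the "web-" name substring.
import os
import requests

TOKEN = os.environ["DIGITALOCEAN_TOKEN"]
API = "https://api.digitalocean.com/v2/droplets"

matches, page = [], 1
while True:
    resp = requests.get(
        API,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"page": page, "per_page": 200},
        timeout=30,
    )
    resp.raise_for_status()
    droplets = resp.json().get("droplets", [])
    matches += [d for d in droplets if "web-" in d["name"]]
    if len(droplets) < 200:  # last page reached
        break
    page += 1

for d in matches:
    print(d["id"], d["name"], d["region"]["slug"])
```
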
May 13, 2025
Resolved - Our Engineering team has confirmed that the issue affecting the GenAI Platform DeepSeek model has been fully resolved. Users should no longer encounter the message "It looks like the agent ran out of tokens while reasoning. Please try again or increase the max tokens."

All functionality remains unaffected, and users should no longer experience any interruptions.

If you continue to experience any issues, please reach out to our support team by opening a ticket from within your Cloud Control Panel.

May 13, 23:22 UTC
Monitoring - Our Engineering team has implemented a fix to address the issue impacting the GenAI Platform's DeepSeek model, where some users were encountering the message "It looks like the agent ran out of tokens while reasoning. Please try again or increase the max tokens."

At this time, users should be seeing agents run normally.

We are actively monitoring the situation and will post an update as soon as the issue is fully resolved.

May 13, 22:51 UTC
Investigating - Our Engineering team is currently investigating an issue with the GenAI Platform DeepSeek model where some users may encounter the message "It looks like the agent ran out of tokens while reasoning. Please try again or increase the max tokens."

We will provide an update as soon as more information becomes available.

We apologize for the inconvenience and appreciate your patience.

May 13, 21:50 UTC
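
As a general workaround for the "ran out of tokens while reasoning" message, the max tokens setting can be raised on the agent or per request. The snippet below is an illustrative sketch only, reusing the same placeholder agent-endpoint variables as the earlier example and assuming an OpenAI-compatible interface; the 2048 limit is an arbitrary example, not a documented value.

```python
# Illustrative only: request a larger completion budget so a reasoning
# model has room to finish. Endpoint, key, and the 2048 limit are
# placeholder assumptions, not DigitalOcean-documented values.
import os
import requests

AGENT_ENDPOINT = os.environ["AGENT_ENDPOINT"]
AGENT_ACCESS_KEY = os.environ["AGENT_ACCESS_KEY"]

payload = {
    "messages": [{"role": "user", "content": "Summarize the trade-offs of caching."}],
    "max_tokens": 2048,  # raise this if responses are cut off mid-reasoning
}
resp = requests.post(
    f"{AGENT_ENDPOINT}/chat/completions",
    headers={"Authorization": f"Bearer {AGENT_ACCESS_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```
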
May 12, 2025

No incidents reported.

May 11, 2025

No incidents reported.

May 10, 2025

No incidents reported.

May 9, 2025

No incidents reported.

May 8, 2025
Resolved - Our Engineering team has confirmed the resolution of the issue affecting certain Droplet metrics in the BLR1 region. Users should no longer see intermittent gaps in monitoring graphs for Droplets in the BLR1 region.

We appreciate your patience and apologize for any inconvenience this may have caused.

May 8, 13:01 UTC
Monitoring - Our Engineering team has implemented a fix to address the intermittent unavailability of Droplet metrics and delayed Resource Alerts in the BLR1 region.

Our team is now monitoring the results and will continue to do so for an extended period to ensure this sporadic issue is addressed by the implemented fix.

At this time, we do not expect users to experience further gaps in metrics or delays in alerts.

We will post an update as soon as we confirm this incident is fully resolved or once we have further information. Thank you for your patience.

May 8, 01:26 UTC
Update - Our Engineering team is continuing to investigate the root cause of the issue with intermittent unavailability of Droplet metrics in the BLR1 region. Multiple reproduction efforts and testing against potential root causes are underway. Due to the sporadic nature of the issue, we anticipate incident updates for this issue to be less frequent, but we will share new information as soon as it is available.

At this time, users impacted by this incident will experience intermittent gaps in monitoring graphs for Droplets in BLR1, as well as delayed notifications for Resource Alerts (for any resources in BLR1).

We apologize for the inconvenience, and we'll share an update once we have more information.

May 7, 22:04 UTC
Update - Our Engineering team continues to investigate the intermittent unavailability of certain Droplet metrics in the BLR1 region. Due to the sporadic nature of this issue, our analysis requires additional time to correlate patterns and isolate the root cause.

We apologize for the inconvenience, and we'll share an update once we have more information.

May 7, 09:41 UTC
Investigating - Our Engineering team is currently investigating an issue with certain Droplet metrics missing in the BLR1 region. Users may experience issues when accessing certain metrics for their Droplets.

We apologize for the inconvenience, and we'll share an update once we have more information.

May 6, 13:00 UTC
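
Users who want to cross-check gaps seen in panel graphs can pull the raw time series from the Monitoring API. The sketch below queries the documented Droplet CPU metrics endpoint and assumes the response follows its Prometheus-style matrix format; the token, Droplet ID, and six-hour window are placeholders.

```python
# Pull raw CPU metrics for one Droplet to compare against panel graphs.
# Placeholders: DIGITALOCEAN_TOKEN and DROPLET_ID environment variables.
import os
import time
import requests

TOKEN = os.environ["DIGITALOCEAN_TOKEN"]
DROPLET_ID = os.environ["DROPLET_ID"]  # numeric Droplet ID, used as host_id

end = int(time.time())
start = end - 6 * 3600  # last six hours, as Unix timestamps

resp = requests.get(
    "https://api.digitalocean.com/v2/monitoring/metrics/droplet/cpu",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"host_id": DROPLET_ID, "start": start, "end": end},
    timeout=30,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    # Each series carries labels (e.g. CPU mode) and [timestamp, value] pairs;
    # unexpectedly short series would line up with gaps in the graphs.
    print(series["metric"].get("mode"), len(series["values"]), "data points")
```
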
May 7, 2025
Completed - The scheduled maintenance has been completed.
May 7, 23:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 7, 20:00 UTC
Scheduled - Start: 2025-05-07 20:00 UTC
End: 2025-05-07 23:00 UTC

During the above window, our Networking team will be making changes to the core networking infrastructure to improve performance and scalability in the LON1 region.

Expected impact:

During the maintenance window, users may experience brief delays or failures with event processing on Droplets and Droplet-based services, including Managed Kubernetes, Load Balancers, Container Registry, and App Platform. We will endeavor to keep this to a minimum for the duration of the change.

If you have any questions related to this maintenance, please send us a ticket from your cloud support page: https://cloudsupport.digitalocean.com/s/createticket

May 4, 20:10 UTC
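
Automation that triggers Droplet events during a window like this should tolerate slower event processing rather than assume immediate completion. One way to do that, sketched below with placeholder values (token, action ID, and arbitrary timings), is to poll the documented GET /v2/actions/{id} endpoint until the action reports a terminal status.

```python
# Poll a Droplet action until it finishes, tolerating slow event processing.
# Placeholders: DIGITALOCEAN_TOKEN and ACTION_ID; the timings are arbitrary.
import os
import time
import requests

TOKEN = os.environ["DIGITALOCEAN_TOKEN"]
ACTION_ID = os.environ["ACTION_ID"]  # returned when the event (reboot, resize, ...) was created

url = f"https://api.digitalocean.com/v2/actions/{ACTION_ID}"
deadline = time.time() + 30 * 60  # allow up to 30 minutes during maintenance

while time.time() < deadline:
    resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
    resp.raise_for_status()
    status = resp.json()["action"]["status"]  # "in-progress", "completed", or "errored"
    if status != "in-progress":
        print("action finished with status:", status)
        break
    time.sleep(15)
else:
    print("action still in progress after 30 minutes")
```
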
May 6, 2025
May 5, 2025
Resolved - Our Engineering team has confirmed the complete resolution of the issue that was affecting Droplet creation and console access in the NYC1 region. Users should now be able to deploy new Droplets and access them normally in the NYC1 region.

All systems are now operating normally. We appreciate your patience and apologize for any inconvenience this may have caused.

May 5, 15:46 UTC
Monitoring - Our Engineering team has implemented a fix for the issue affecting Droplet creation and console access in the NYC1 region. Impacted users may have seen 504 Gateway Timeout errors during Droplet creation or when accessing the Droplet console.

Our team is currently monitoring the situation to ensure stability. We will provide a final update once we confirm the issue is fully resolved.

May 5, 15:18 UTC
Investigating - Our Engineering team is currently investigating an issue with creating Droplets in the NYC1 region. During this time, users may see 504 Gateway Timeout errors when creating Droplets in the NYC1 region.

We apologize for the inconvenience and will provide an update as soon as we have more information.

May 5, 14:18 UTC
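
Transient 504 Gateway Timeout responses on Droplet creation can also be absorbed in client code with a bounded retry. The sketch below is one hedged approach around the documented POST /v2/droplets call; the token, Droplet name, and region/size/image slugs are placeholder examples.

```python
# Create a Droplet, retrying a few times if the API returns a transient 5xx.
# Placeholders: DIGITALOCEAN_TOKEN plus the name/region/size/image values.
import os
import time
import requests

TOKEN = os.environ["DIGITALOCEAN_TOKEN"]
payload = {
    "name": "example-droplet",
    "region": "nyc1",
    "size": "s-1vcpu-1gb",
    "image": "ubuntu-22-04-x64",
}

for attempt in range(5):
    resp = requests.post(
        "https://api.digitalocean.com/v2/droplets",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=payload,
        timeout=60,
    )
    if resp.status_code not in (502, 503, 504):
        resp.raise_for_status()  # surfaces non-transient errors (4xx) immediately
        print("created Droplet", resp.json()["droplet"]["id"])
        break
    time.sleep(2 ** attempt)  # back off: 1, 2, 4, 8, 16 seconds
else:
    raise RuntimeError("Droplet creation kept timing out; check the status page")
```

One caveat with this pattern: a 504 can occur after the request actually succeeded server-side, so retries can create duplicates; listing existing Droplets by name before each retry avoids that.
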
May 4, 2025

No incidents reported.