DigitalOcean Services Status

Update - Significant submarine cable outages continue to impact multiple network carriers on the Indian subcontinent, with increased latency and packet loss to our BLR1 region occurring sporadically throughout the day.

On a positive note, our upstream providers have worked diligently over the past couple of days to minimize the impact of the lost submarine capacity. As a result, service degradation is now largely limited to the busier evening peak hours.

For the moment, we have no updates to provide regarding when the situation might improve. Repair times for submarine cables are typically on the order of weeks or months. However, further short-term improvements may still be possible as upstream carriers work to re-balance traffic where feasible.
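
For customers who want to gauge current path quality to BLR1 from their own vantage point, a quick check along the following lines can help (this assumes a Linux host with mtr installed; the hostname shown is illustrative, and any endpoint you host in BLR1 works as a target):

mtr -rw -c 50 speedtest-blr1.digitalocean.com   # summary report of 50 probes; look for loss% and latency spikes
ping -c 20 speedtest-blr1.digitalocean.com      # simpler alternative if mtr is not installed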

Separately, we continue to look into potential avenues for additional mitigations and will keep our customers apprised as we make progress in this area.

We apologize for the ongoing inconvenience created by this extraordinary situation and thank our customers for their patience.

Sep 08, 2025 - 22:29 UTC
Monitoring - Our Engineering team confirms that multiple subsea cable cuts are impacting connectivity in the APAC region, which also affects traffic to Europe and the U.S. East Coast.

Traffic is being rerouted through alternate paths to maintain service continuity. As a result, users may experience higher latency and intermittent connectivity issues.

Our engineering team continues to monitor the situation closely and is working with upstream providers as repair efforts progress. We will share further updates as they become available.

We apologize for the disruption and appreciate your patience.

Sep 06, 2025 - 23:02 UTC
Update - We are continuing to work on a fix for this issue.
Sep 06, 2025 - 18:49 UTC
Update - Our Engineering team has implemented changes to optimize traffic routing in the BLR and SGP regions, addressing networking issues stemming from our upstream providers, which are affected by major submarine cable outages in the APAC region. Despite these adjustments, users may still encounter intermittent packet loss or connectivity issues when accessing resources in the affected areas. We are working with our upstream vendors to gain further insights and achieve a definitive resolution.

We apologize for any inconvenience caused.

Sep 06, 2025 - 18:44 UTC
Identified - Our Engineering team has identified the cause of the issue impacting network connectivity in the BLR1 region as lying with our upstream provider.

Our team is actively working on remediation steps. We will post an update as soon as we have more information.

Sep 06, 2025 - 16:48 UTC
Investigating - Our Engineering team is currently investigating an issue impacting networking in the BLR1 region. Users may experience network connectivity loss to Droplets and Droplet-based services, such as Managed Kubernetes and Database Clusters.

We apologize for the inconvenience and will share an update once we have more information.

Sep 06, 2025 - 16:13 UTC
API Operational
Billing Operational
BYOIP Operational
Cloud Control Panel Operational
Cloud Firewall Operational
Community Operational
DNS Operational
Support Center Operational
Reserved IP Operational
WWW Operational
GenAI Platform Operational
App Platform Operational
Global Operational
Amsterdam Operational
Atlanta Operational
Bangalore Operational
Frankfurt Operational
London Operational
New York Operational
San Francisco Operational
Singapore Operational
Sydney Operational
Toronto Operational
Container Registry Operational
Global Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
Droplets Operational
Global Operational
AMS2 Operational
AMS3 Operational
ATL1 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Event Processing Operational
Global Operational
AMS2 Operational
AMS3 Operational
ATL1 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Functions Operational
Global Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
GPU Droplets Operational
Global Operational
ATL1 Operational
NYC2 Operational
TOR1 Operational
Managed Databases Operational
Global Operational
AMS3 Operational
ATL1 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Monitoring Operational
Global Operational
AMS2 Operational
AMS3 Operational
ATL1 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SGP1 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SYD1 Operational
TOR1 Operational
Networking Degraded Performance
Global Operational
AMS2 Operational
AMS3 Operational
ATL1 Operational
BLR1 Degraded Performance
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Degraded Performance
SYD1 Operational
TOR1 Operational
Kubernetes Operational
Global Operational
AMS3 Operational
ATL1 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Load Balancers Operational
Global Operational
AMS2 Operational
AMS3 Operational
ATL1 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Spaces Operational
Global Operational
AMS3 Operational
ATL1 Operational
BLR1 Operational
FRA1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
Spaces CDN Operational
Global Operational
AMS3 Operational
ATL1 Operational
FRA1 Operational
NYC3 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
VPC Operational
Global Operational
AMS2 Operational
AMS3 Operational
ATL1 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Volumes Operational
Global Operational
AMS2 Operational
AMS3 Operational
ATL1 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational

Scheduled Maintenance

NYC1 Network Maintenance Sep 15, 2025 10:00-12:00 UTC

Start: 2025-09-15 10:00 UTC
End: 2025-09-15 12:00 UTC

During the above window, our Networking team will be making changes to the core networking infrastructure to improve performance and scalability in the NYC1 region.

Expected impact:

While no service disruption is expected during the scheduled maintenance window, there is a low risk of hardware failure associated with the activity. In such a scenario, customers may experience brief interruptions to network traffic within the affected region. If triggered, our Engineering teams will respond immediately to isolate and mitigate the impact.

If you have any questions related to this issue, please send us a ticket from your cloud support page. https://cloudsupport.digitalocean.com/s/createticket

Posted on Sep 12, 2025 - 11:02 UTC
Sep 12, 2025
Resolved - Our Engineering team has confirmed the full resolution of the issue with the upstream provider. Users should now be able to deploy to App Platform and manage their Spaces Buckets as normal.

If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.

Sep 12, 19:33 UTC
Monitoring - Our Engineering team has identified that the ongoing issue with the upstream provider is also impacting DigitalOcean Spaces, in addition to App Platform, and continues to monitor both services closely.

Error rates and latency are improving for App Platform deployments as well as for requests to Spaces endpoints.

We apologize for the inconvenience and will share further updates in due course.

Sep 12, 19:14 UTC
Investigating - Our Engineering team is aware of an upstream provider issue that is causing impact to some DigitalOcean services. More details are being gathered.

At this time, users may experience delayed App Platform deployments.
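
For customers checking whether a specific deployment is affected, recent deployments for an app and their current phases can be listed with doctl (this assumes doctl is installed and authenticated; <app-id> is a placeholder for your app's ID):

# List recent deployments and their phases for a given app
doctl apps list-deployments <app-id>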

We apologize for the inconvenience and will provide another update as soon as possible.

Sep 12, 18:54 UTC
Completed - The scheduled maintenance has been completed.
Sep 12, 02:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Sep 11, 14:00 UTC
Scheduled - Start: 2025-09-11 14:00 UTC
End: 2025-09-12 02:00 UTC

During the above window, our Engineering team will be performing power maintenance in our SFO2 data center cages to ensure infrastructure stability.

Expected Impact:

All devices run on redundant power, and we do not expect any customer impact during this maintenance. If the redundant power feed experiences an issue during the maintenance (which is unlikely), customers could see impact to existing resources housed in the affected cages, including Droplets, rDNS, and Object Storage. We will post updates if any unexpected outage occurs.

To help ensure a safe operation, we've temporarily reduced Droplet capacity in our SFO2 region. Customers who deploy Droplets in SFO2 may see some sizes unavailable or run into capacity errors for large deployments. This is temporary, so if you do encounter an error, we encourage you to try deploying again once the maintenance is complete.
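
As an illustration, a retry after the window can be as simple as re-running your usual create command; the name, size, and image slugs below are examples only:

# Re-attempt the Droplet create once the maintenance completes
doctl compute droplet create example-droplet --region sfo2 --size s-1vcpu-1gb --image ubuntu-24-04-x64 --wait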

Our engineering and operations teams are onsite and will be monitoring affected systems closely throughout the maintenance. Emergency notifications will be sent promptly if any critical impacts arise.

We appreciate your understanding as we work to resolve this situation as quickly and safely as possible.

If you have any questions related to this maintenance please send us a ticket from your cloud support page. https://cloudsupport.digitalocean.com/s/createticket

Sep 9, 15:41 UTC
Sep 11, 2025
Sep 10, 2025

No incidents reported.

Sep 9, 2025

No incidents reported.

Sep 8, 2025
Resolved - Our Engineering team investigated reports of Droplets becoming unresponsive. This issue was found to be caused by guest-level kernel hangs, and affected Droplets required a power cycle to restore functionality.

Upon further investigation, this was identified to be an upstream kernel issue with Ubuntu 20.04 running kernel version 5.4.0-122-generic. This kernel version has exhibited stability problems that can lead to Droplets becoming unresponsive intermittently. Customers running Ubuntu 20.04 with kernel 5.4.0-122-generic are advised to upgrade to a newer kernel version using the commands below:

sudo apt update
sudo apt upgrade linux-virtual

This problem is fixed in the newer kernel, 5.4.0-123.139. The recommended option is to migrate to a more recent, fully supported OS version, such as Ubuntu 24.04, to ensure continued system stability and support. Affected customers can take the steps above to avoid further impact.
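
To confirm whether a Droplet is on the affected kernel, and to verify the fix after upgrading, the running kernel version can be checked with standard Ubuntu tooling:

uname -r            # 5.4.0-122-generic indicates the affected kernel
sudo reboot         # reboot after the upgrade so the new kernel is loaded
uname -r            # re-check after the reboot; expect 5.4.0-123 or newer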

We appreciate your patience and apologize for any inconvenience caused.

Sep 8, 15:22 UTC
Investigating - Our Engineering team is actively investigating an issue causing some Droplets to become unresponsive due to guest-level errors. Affected users may find that their Droplets intermittently hang, requiring a power cycle to restore access.

We've identified that this behavior is occurring on a subset of Droplets running Ubuntu 20.04. Standard support for Ubuntu 20.04 has ended, which may contribute to the observed behavior in certain configurations. We recommend affected customers consider updating their Droplets to a currently supported operating system, such as Ubuntu 24.04, to ensure continued stability and support.

We apologize for the inconvenience and appreciate your patience as we continue to investigate and work toward a resolution. We will share further updates as more information becomes available.

Sep 8, 08:41 UTC
Sep 7, 2025

No incidents reported.

Sep 6, 2025
Sep 5, 2025
Resolved - From 18:53 to 19:15 UTC, our Engineering team identified an issue with the public Droplet API, resulting in difficulties creating, accessing, and listing Droplets via the Cloud Control Panel or API. The Autoscaler and LBaaS APIs were also affected. Our team has fully resolved the issues, and as of 19:15 UTC, all services are operating normally.
We apologize for the inconvenience. If you are still experiencing any problems or have additional questions, please open a support ticket within your account.
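
As a quick sanity check, listing Droplets directly against the public API should now succeed (this assumes a valid API token in the DIGITALOCEAN_TOKEN environment variable):

# List Droplets via the public API; a 200 response with a droplets array indicates normal operation
curl -s -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" "https://api.digitalocean.com/v2/droplets?per_page=10"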

Sep 5, 19:42 UTC
Sep 4, 2025

No incidents reported.

Sep 3, 2025
Completed - The scheduled maintenance has been completed.
Sep 3, 11:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Sep 3, 07:30 UTC
Scheduled - Start: 2025-09-03 07:30 UTC
End: 2025-09-03 11:30 UTC

During the above window, our Engineering team will be performing maintenance on the Managed Databases API to enhance security, improve access control and enable better auditing and compliance. Please note that existing Databases and workloads will continue to run normally and will not be impacted.

Expected Impact:

We don’t anticipate any service disruptions during this window. Your existing databases and workloads will continue to run normally and will not be impacted.

Should an unexpected issue occur, only control plane actions such as provisioning new clusters, modifying configurations, and fetching database details will be impacted. The data plane for connecting to and running queries on your databases will be unaffected.
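
For reference, control plane actions are those issued through the Managed Databases API, for example (assuming doctl is configured):

# Control plane read: list database clusters; connections and queries to existing clusters do not go through this API
doctl databases list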

If an unexpected issue arises, we will endeavor to keep any impact to a minimum and may revert if required.

If you have any questions related to this event, please send us a ticket from your cloud support page. https://cloudsupport.digitalocean.com/s/createticket

Sep 2, 22:10 UTC
Sep 2, 2025

No incidents reported.

Sep 1, 2025

No incidents reported.

Aug 31, 2025

No incidents reported.

Aug 30, 2025

No incidents reported.

Aug 29, 2025
Resolved - From 14:24 to 14:41 UTC, our Engineering team identified issues affecting Block Storage Volumes in the NYC1 region. During this time, users may have experienced difficulties processing requests on Volumes, and Kubernetes clusters in the NYC1 region may have been impacted as well. Our team has fully resolved the issues, and as of 14:42 UTC, all services are operating normally.

We apologize for the inconvenience. If you are still experiencing any problems or have additional questions, please open a support ticket within your account.

Aug 29, 15:45 UTC