Update - Around 07:00 UTC, we saw a brief period of increased packet loss due to increased traffic and congestion in the APAC region. This self-recovered, and we have not seen further periods of degraded performance since. About two hours later, we experienced an unrelated issue in BLR1, details of which can be found here: https://status.digitalocean.com/incidents/zdd6830qvvzx. We originally believed the impact in BLR1 to be part of this incident, but have confirmed it was a separate issue. We apologize for any confusion.

As a reminder, we expect periods of increased loss and latency to continue to occur; these are typically short-lived, and most happen during APAC business hours due to increased traffic and congestion.

We continue to await the fiber fault repairs and are making traffic routing changes, where available, to improve performance. We expect the situation to continue improving over the coming month as repair crews complete their work.

Mar 29, 2023 - 10:25 UTC
Update - Between 10:20 and 11:10 UTC, we experienced an uptick in loss and latency in the APAC region as a downstream effect of this incident. During this time, we saw a slight dip in public API availability, meaning a small subset of users may have experienced errors while sending requests. Users may also have seen loss and latency on Droplets and Droplet-based services in the APAC region.

Periods of increased loss and latency continue to occur; these are typically short-lived, and most happen during APAC business hours due to increased traffic and congestion.

We continue to await the fiber fault repairs and are making traffic routing changes, where available, to improve performance. We expect the situation to continue improving over the coming month as repair crews complete their work.

Mar 28, 2023 - 17:54 UTC
Identified - Our Engineering team has observed recurring incidents that are continuing to affect our customers' resources in the APAC region. As a result, some users may be experiencing packet loss and increased latency in this area. Our team is actively monitoring the situation and implementing traffic routing changes where applicable to alleviate the congestion.

We apologize for any inconvenience caused and will provide updates as the situation progresses.

Mar 08, 2023 - 13:31 UTC
Monitoring - The situation with network performance in the APAC region has continually improved over the past few weeks as repairs have progressed and traffic optimization to route around affected paths has continued. We have now observed normal latency and zero packet loss for several days, so we are moving this incident to a Monitoring state. If the improvements we’ve seen continue, we’ll look to close this incident out and provide any further updates separately. Thank you for your patience as this situation unfolded and our Network Engineering team worked to mitigate it.
Mar 07, 2023 - 20:56 UTC
Update - Our Engineering team continues to monitor network performance in the APAC region and make routing changes as needed to alleviate performance concerns.

Customers are still experiencing periodic performance and connectivity issues and will continue to do so until upstream providers complete restoration of the undersea cable cuts (fiber faults) in the APAC region.

Full repair is expected to take multiple weeks. We will continue to communicate any updates we receive.

Feb 27, 2023 - 16:34 UTC
Update - Our Engineering team is continuing to take action to mitigate customer impact and monitor the ongoing issues with network connectivity in the APAC region. Performance has been relatively stable as traffic slowly increases across the rerouted network. However, some customers may still experience periodic performance and connectivity issues until the upstream providers complete restoration efforts.

We have no ETA for the upstream issue being completely resolved, but we will continue to communicate any relevant information as soon as it is available to us. Thank you for your patience and we apologize for the inconvenience.

Feb 04, 2023 - 18:33 UTC
Update - Our Engineering team has made routing changes for some traffic to work around the congestion caused by multiple subsea fiber faults in the APAC region. We are slowly ramping up traffic on this new route and will be monitoring it carefully over the next 24 hours.

While this new route is expected to help ease some of the packet loss and latency from the APAC region fiber faults, we expect some users to still experience performance and connectivity drops until the faults are resolved.

We have no ETA for the upstream issues being restored, but we will communicate any relevant information as we have it. We apologize for the inconvenience.

Feb 03, 2023 - 17:58 UTC
Update - Our Engineering team continues to monitor the ongoing issue with multiple subsea fiber faults in the APAC region. We have experienced multiple periods of packet loss and increased latency and expect these to continue until the issue is resolved upstream. Our team will continue observing performance and making traffic routing changes, where available, to work around the congestion.

While we have no ETA for the faults being restored, we will communicate updates as we have them. If you have questions or concerns about impacted services from this incident, we ask that you open a Support ticket from within your account. Thank you!

Feb 01, 2023 - 23:55 UTC
Update - Our Engineering team is continuing to work with upstream providers to fix the network connectivity issues we are experiencing in the SGP1 region. The problems appear to be a direct result of multiple subsea fiber faults in the APAC region, which are causing congestion that results in packet loss and increased latency when connecting to the SGP1 region.

At this time, users will continue to experience intermittent timeout errors with Droplet-based services in the SGP1 region. We do not have a firm ETA for resolution, but we will provide updates as needed. Thank you for your patience, and we apologize for any inconvenience.

Jan 30, 2023 - 06:48 UTC
Update - Our team is continuing to work on a fix for this issue. We will provide an update in due course, along with more information about this problem. We apologize for the inconvenience and thank you for your patience and continued support.
Jan 29, 2023 - 20:47 UTC
Identified - As of 12:00 UTC, our Engineering team has again identified an issue with an upstream provider causing packet loss and increased latency in the SGP1 region. Users may have been experiencing degraded performance, including timeout errors, with Droplet-based services in SGP1. The responsible team is actively working on a fix, and we will share an update once we have further information.
Jan 29, 2023 - 16:27 UTC
Past Incidents
Mar 29, 2023
Resolved - As of 11:57 UTC, our Engineering team has confirmed that the issue impacting network connectivity in the BLR1 region has been fully resolved. All services and resources should now be fully reachable.

If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. Thank you for your patience, and we apologize for any inconvenience.

Mar 29, 12:47 UTC
Monitoring - Our Engineering team has successfully mitigated the issue impacting multiple products in BLR1. At this time, network connectivity to Droplet-based services in BLR1, including Droplets, Managed Kubernetes, and Managed Databases, should be operating normally.

We'll continue to monitor the situation to confirm this incident is fully resolved and will post an update soon.

Mar 29, 11:41 UTC
Investigating - As of 09:05 UTC, our Engineering team is currently investigating an issue impacting multiple products in the BLR1 region. During this time, users may have experienced packet loss/latency, timeouts, and related issues with Droplet-based services in this region, including Droplets, Managed Kubernetes, and Managed Databases. We will share an update once we have further information.
Mar 29, 11:20 UTC
Mar 28, 2023
Mar 27, 2023

No incidents reported.

Mar 26, 2023

No incidents reported.

Mar 25, 2023

No incidents reported.

Mar 24, 2023

No incidents reported.

Mar 23, 2023

No incidents reported.

Mar 22, 2023

No incidents reported.

Mar 21, 2023

No incidents reported.

Mar 20, 2023

No incidents reported.

Mar 19, 2023

No incidents reported.

Mar 18, 2023

No incidents reported.

Mar 17, 2023
Resolved - After further investigation, our Engineering team has discovered that monitoring alerts for MongoDB clusters, other than disk usage alerts, have been triggering correctly but have been missing cluster information, such as the name of the cluster in the email subject line.

Additionally, there is a known issue with disk usage alerts for MongoDB clusters that has been present since the launch of the product.

A small number of clusters are impacted. We apologize for any miscommunication or misunderstanding; at the outset of this incident, the data available led us to believe the impact was much broader.

Our Engineering team will work to correct the missing cluster information in triggered alerts to improve the customer experience. Longer-term work is planned to enable successful disk usage alerts for MongoDB clusters as well. Given this, we’ll close this public incident.

If you have any questions or concerns, please reach out to Support from within your account. Thank you.

Mar 17, 16:15 UTC
Identified - As of 13:35 UTC, our Engineering team has identified the issue with Monitoring alerts for MongoDB Clusters and is working on a fix. We will post an update as soon as additional information is available.
Mar 17, 13:51 UTC
Investigating - Our Engineering team is investigating an issue with Monitoring alerts for MongoDB Clusters in all regions. During this time, alert policies may be delayed or fail to trigger their configured notification options. We apologize for the inconvenience, and we'll share an update once we have more information.
Mar 17, 12:47 UTC
Mar 16, 2023
Resolved - Our Engineering and Datacenter Operations teams have confirmed that the implemented fixes have successfully resolved the traffic latency and packet loss in the FRA1 and AMS3 regions.

From 07:04 to 13:53 UTC, Droplet-based services in FRA1 and AMS3 experienced network connectivity issues, App Platform deploys were delayed in all regions, and image pulls from the Container Registry intermittently failed.

This was due to a networking hardware issue, which our teams resolved by rotating out the faulty hardware.

If you continue to experience any issues related to this incident, please open a ticket with our support team. Thank you for your patience.

Mar 16, 16:52 UTC
Monitoring - Our Engineering and Datacenter Operations teams have successfully mitigated the issue with network connectivity in FRA1 and AMS3, and we've confirmed that traffic latency and packet loss have returned to pre-incident levels. At this time, network connectivity to Droplet-based services in FRA1 and AMS3, as well as App Platform deploys and Container Registry pulls, should be operating normally.

We'll continue to monitor the situation to confirm this incident is fully resolved and will post an update soon.

Mar 16, 15:29 UTC
Update - Our Engineering team is continuing to work on a fix. Additionally, our Engineering team has identified App Platform and Container Registry as impacted. Users may see issues pulling images from Container Registry in FRA1 and AMS3, as well as with App Platform deployments. We will share an update as soon as we have further information.
Mar 16, 11:11 UTC
Identified - Our Engineering team has identified the cause of the network issues and is actively working on a fix.

Currently, users of services in the FRA1 and AMS3 regions will continue to experience network issues.

We will post an update as soon as additional information is available.

Mar 16, 09:23 UTC
Investigating - As of 08:43 UTC, our Engineering team is investigating reports of network connectivity issues in our FRA1 and AMS3 regions. Users may have experienced packet loss/latency, timeouts, and related issues with Droplet-based services in these regions, including Droplets, Managed Kubernetes, and Managed Databases. We will share an update once we have further information.
Mar 16, 09:12 UTC
Mar 15, 2023

No incidents reported.