DigitalOcean Services Status

Update - Our Engineering team has detected significant packet loss and latency on network routes between NYC/TOR and Singapore. Users may experience higher-than-normal latency or packet loss for traffic traversing those routes. The team is evaluating possible traffic shifts to alleviate the situation.

If you have any questions or concerns, please open a support ticket from within your account.
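For users who want to verify whether their own traffic crosses an affected route, a simple `ping` sample between the two endpoints gives a rough loss figure. The sketch below is illustrative only, not DigitalOcean tooling: the target address is a placeholder for a Droplet of your own in the affected region, and the sample size is an assumption.

```shell
#!/bin/sh
# Sketch: estimate packet loss on a route by pinging a host at the far end.
# The address in the example invocation is a placeholder -- substitute
# your own Droplet's public IP.

# Extract the packet-loss percentage from ping's summary line, e.g.
# "20 packets transmitted, 18 received, 10% packet loss, time 19025ms".
parse_loss() {
    sed -n 's/.*, \([0-9.]*\)% packet loss.*/\1/p'
}

# Ping a host N times (default 20) and report the measured loss.
check_route() {
    host=$1
    count=${2:-20}
    loss=$(ping -c "$count" "$host" 2>/dev/null | parse_loss)
    printf 'loss to %s: %s%%\n' "$host" "${loss:-unknown}"
}

# Example (uncomment and substitute a real address to run):
# check_route 203.0.113.10 20
```

A sustained loss figure above a few percent on repeated samples is a reasonable signal that your traffic is traversing a degraded path.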

May 24, 2023 - 14:32 UTC
Update - Our Engineering team detected significant packet loss and latency on network routes between Singapore and Frankfurt from 13:10 to 13:20 UTC today. The issue self-recovered, likely due to upstream providers shuffling traffic.

Additionally, our team is seeing high levels of loss and latency on network routes between Bangalore and Sydney. Users may see packet loss or degraded service performance at this time. The Network Engineering team is currently exploring options to route traffic around the issue, as well as other steps to mitigate impact to users.

May 19, 2023 - 14:35 UTC
Monitoring - Our Engineering team has not observed higher-than-normal packet loss or latency over the last 12 hours, and user reports of degraded performance have returned to pre-incident levels.

Until the cable cuts in the APAC region are completely resolved, we expect there to be intermittent periods of degraded performance, especially during APAC business hours.

We'll continue to monitor the situation through the next few days and provide any needed updates. If you experience any issues, feel free to open a support ticket and our team will be happy to assist.

May 14, 2023 - 03:31 UTC
Identified - Due to the ongoing subsea fiber faults in the APAC region, our Engineering team is observing a recurrence of packet loss and increased latency between Singapore and Europe/North American regions, during peak hours in Asia.

We've received reports of users experiencing degraded performance for Droplets and Droplet-based services, as well as with pushes to Container Registry, and deployments of App Platform apps.

Our team is actively engaged in attempting to re-route traffic where possible to improve performance. Until the cable cuts are fully resolved, users may continue to experience intermittent periods of degraded service.

You can see our previous post on this incident here: https://status.digitalocean.com/incidents/4w1yx5p58p1t

May 12, 2023 - 16:52 UTC
API Operational
Billing Operational
Cloud Control Panel Operational
Cloud Firewall Operational
Community Operational
DNS Operational
Support Center Operational
Reserved IP Operational
WWW Operational
App Platform Operational
Global Operational
Amsterdam Operational
Bangalore Operational
Frankfurt Operational
London Operational
New York Operational
San Francisco Operational
Singapore Operational
Sydney Operational
Toronto Operational
Container Registry Operational
AMS3 Operational
FRA1 Operational
NYC3 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
Droplets Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Event Processing Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Functions Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Managed Databases Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Monitoring Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SGP1 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SYD1 Operational
TOR1 Operational
Networking Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Kubernetes Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Load Balancers Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Spaces Operational
AMS3 Operational
FRA1 Operational
NYC3 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
Spaces CDN Operational
AMS3 Operational
FRA1 Operational
NYC3 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
VPC Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Volumes Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Past Incidents
May 26, 2023
Resolved - As of 02:10 UTC, our Engineering team has confirmed that the issue with our Container Registry services in the SFO3 region has been fully resolved.

All Container Registry operations should now be functioning normally. If you continue to experience any trouble with these services, please open a ticket with our support team.

Thank you for your patience and we apologize for the inconvenience.

May 26, 02:38 UTC
Monitoring - As of 22:10 UTC, our Engineering team has implemented a fix to resolve the issue with our Container Registry services in the SFO3 region and is monitoring the situation closely.

Users should no longer see 500-type errors when uploading/pushing/deleting images, slow cleanup operations, creation failures for new registries, or other errors when interacting with registries in the SFO3 region.

We are going to continue to monitor the situation and will post an update once we are confident this issue will not recur.

May 25, 23:31 UTC
Investigating - We have received customer reports of issues with DigitalOcean Container Registries in the SFO3 region. Our Engineering team is investigating the cause. This appears to be a recurrence of the incident linked below:

https://status.digitalocean.com/incidents/mmngtxzmm6gs

At this time, users may see 500-type errors when uploading/pushing/deleting images, slow cleanup operations, creation failures for new registries, or other errors when interacting with registries in SFO3.

We will post an update as soon as we have further information. Thank you for your patience.
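Because these 500-type errors are intermittent, retrying a failed push often succeeds. A generic retry-with-backoff wrapper, sketched below, can reduce manual churn while an incident is ongoing; the registry and image names in the usage comment are placeholders, and this is a user-side workaround, not a DigitalOcean-provided tool.

```shell
#!/bin/sh
# Sketch: retry a flaky command with exponential backoff.

retry() {
    attempts=$1; shift
    delay=1
    n=1
    while :; do
        if "$@"; then
            return 0          # command succeeded
        fi
        if [ "$n" -ge "$attempts" ]; then
            return 1          # out of attempts
        fi
        echo "attempt $n/$attempts failed; retrying in ${delay}s" >&2
        sleep "$delay"
        delay=$((delay * 2))  # 1s, 2s, 4s, ...
        n=$((n + 1))
    done
}

# Example usage (placeholder registry/image names):
# retry 5 docker push registry.digitalocean.com/my-registry/my-image:latest
```

The wrapper takes the maximum number of attempts followed by the command to run, so it works for pushes, deletes, or any other transiently failing operation.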

May 25, 16:20 UTC
May 25, 2023
Resolved - Our Engineering team has confirmed the full resolution of this incident.

From 06:00 - 08:00 UTC, users were unable to create and deploy new Apps and experienced errors when updating and deploying existing Apps. App deployments should now be operating normally.

If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel.

May 25, 09:45 UTC
Monitoring - Our Engineering team has implemented a fix to resolve the issue with App deployments in the SFO region and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
May 25, 09:20 UTC
Identified - Our Engineering team has identified the cause of the issue with network latency that is impacting the management and creation of Apps in the SFO region and is actively working on a fix. During this time, a subset of users might see errors when updating and deploying new/existing Apps. We will post an update as soon as additional information is available.
May 25, 08:47 UTC
Investigating - Our Engineering team is investigating an issue with network latency that is impacting the management and creation of Apps in the SFO region. As of 06:00 UTC, users are unable to create and deploy new Apps and may see errors when updating and deploying existing Apps. At this time, previously deployed running Apps are not impacted. We apologize for the inconvenience and will share an update once we have more information.
May 25, 08:30 UTC
Resolved - Our Engineering team has identified the root cause of this incident as database contention caused by multiple concurrent DB calls. During this time, users might have experienced issues interacting and authenticating with the DigitalOcean Container Registry, creating new container registries, and pushing/deleting images to/from registries.

As of 00:10 UTC, we have confirmed the full resolution of the issue affecting the DigitalOcean Container Registry in the SFO3 region. We appreciate your patience throughout the process and if you continue to experience problems, please open a ticket with our support team for further review.

May 25, 02:49 UTC
Update - Our Engineering team continues to investigate the root cause of this incident but has observed a reduction in the error rate with DigitalOcean Container Registry in the SFO3 region.

At this time, users should no longer experience errors while interacting and authenticating with the DigitalOcean Container Registry, creating new container registries, and pushing/deleting images to/from registries.

We will post an update as soon as we have further information. Thank you for your patience.

May 24, 22:53 UTC
Investigating - Following an uptick in customer reports of issues with DigitalOcean Container Registries in SFO3, our Engineering team is investigating the cause.

At this time, users may see 500-type errors when uploading/pushing/deleting images, slow cleanup operations, creation failures for new registries, or other errors when interacting with registries in SFO3.

We will post an update as soon as we have further information. Thank you for your patience.

May 24, 19:28 UTC
May 24, 2023
Resolved - As of 21:15 UTC, our Engineering team has confirmed the full resolution of this incident. We have verified that Snapshot and Backup events in the SGP1 region are processing without any failures, and we will now mark this issue as resolved.

Thank you for your patience and understanding throughout this process. If you should encounter any further issues at all, then please open a ticket with our Support team.

May 24, 21:30 UTC
Update - We are continuing to monitor for any further issues.
May 24, 21:05 UTC
Monitoring - As of 19:30 UTC, our Engineering team took action to mitigate the impact of this incident and allow Snapshot and Backup events to process normally in the SGP1 region. We will post an update as soon as the issue is fully resolved.

Please note that while the situation has improved, there may still be a backlog of older events that are in the process of being resolved. We kindly ask for your patience as our team works diligently to address these remaining events.

We apologize for any inconvenience caused and assure you that we are committed to resolving all outstanding issues.

May 24, 20:44 UTC
Investigating - As of 13:30 UTC, our Engineering team is investigating an issue with intermittent Snapshot and Backup failures in our SGP1 region. Users may experience errors when performing Snapshots but may eventually see retries succeed.

Any Backup failures are automatically being retried within the Backup window for individual Droplets.

We apologize for the inconvenience and will share an update once we have more information.

May 24, 19:20 UTC
Completed - The scheduled maintenance has been completed.
May 24, 18:35 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 24, 16:00 UTC
Scheduled - Start: 2023-05-24 16:00 UTC
End: 2023-05-24 20:00 UTC


During the above window, our Networking team will be making changes to our core networking infrastructure to improve performance and scalability in the AMS3 region. This will be the final phase of the three maintenance activities performed by our team in AMS3 on consecutive days.

Expected Impact:

These upgrades are designed and tested to be seamless and we do not expect any impact to customer traffic due to this maintenance. If an unexpected issue arises, affected Droplets and Droplet-based services may experience a temporary loss of private connectivity between VPCs. We will endeavor to keep any such impact to a minimum.

If you have any questions or concerns regarding this maintenance, please reach out to us by opening up a ticket on your account via https://cloudsupport.digitalocean.com/s/createticket .

May 24, 15:05 UTC
Resolved - Our Engineering team has confirmed the full resolution of this issue. From approximately 09:05 - 10:40 UTC, users saw errors when attempting to create new Functions, as well as when invoking or updating existing deploys. Functions should now be operating normally. If you continue to experience problems, please open a ticket with our support team. Thank you for your patience and we apologize for any inconvenience.
May 24, 11:10 UTC
Monitoring - Our Engineering team has deployed a fix to resolve an issue with Serverless Functions in the NYC1 region. All users should now be able to create new Functions, and invoking or updating existing deploys should be operational. We are monitoring the situation closely and will share an update once the issue is completely resolved.
May 24, 10:50 UTC
Identified - Our Engineering team has identified the cause of the issue with Serverless Functions in the NYC1 region and is actively working on a fix. During this time users may see errors when attempting to create new Functions, as well as when invoking or updating existing deploys. We will post an update as soon as additional information is available.
May 24, 10:34 UTC
Investigating - As of 09:05 UTC, our Engineering team is investigating an issue with Serverless Functions in the NYC1 region. Users may see errors when attempting to create new Functions, as well as when invoking or updating existing deploys in the NYC1 region. We apologize for the inconvenience and will share an update once we have more information.
May 24, 10:09 UTC
May 23, 2023
Completed - The scheduled maintenance has been completed.
May 23, 19:11 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 23, 16:00 UTC
Scheduled - Start: 2023-05-23 16:00 UTC
End: 2023-05-23 20:00 UTC


During the above window, our Networking team will be making changes to our core networking infrastructure to improve performance and scalability in the AMS3 region. This will be the second of three maintenance activities performed by our team in AMS3 on consecutive days.

Expected Impact:

These upgrades are designed and tested to be seamless and we do not expect any impact to customer traffic due to this maintenance. If an unexpected issue arises, affected Droplets and Droplet-based services may experience a temporary loss of private connectivity between VPCs. We will endeavor to keep any such impact to a minimum.

If you have any questions or concerns regarding this maintenance, please reach out to us by opening up a ticket on your account via https://cloudsupport.digitalocean.com/s/createticket .

May 23, 15:20 UTC
May 22, 2023
Completed - The scheduled maintenance has been completed.
May 22, 19:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 22, 16:00 UTC
Scheduled - Start: 2023-05-22 16:00 UTC
End: 2023-05-22 20:00 UTC

During the above window, our Networking team will be making changes to our core networking infrastructure to improve performance and scalability in the AMS3 region. This maintenance will occur in three parts on consecutive days, and we will send other maintenance notices for the second and third phases.

Expected Impact:

These upgrades are designed and tested to be seamless and we do not expect any impact to customer traffic due to this maintenance. If an unexpected issue arises, affected Droplets and Droplet-based services may experience a temporary loss of private connectivity between VPCs. We will endeavor to keep any such impact to a minimum.

If you have any questions or concerns regarding this maintenance, please reach out to us by opening up a ticket on your account via https://cloudsupport.digitalocean.com/s/createticket .

May 22, 15:20 UTC
May 21, 2023

No incidents reported.

May 20, 2023
Resolved - Our Engineering team has confirmed full resolution of this incident.

From 12:24 - 13:26 UTC, we experienced an issue that impacted our DNS API services. During this time, customer deployments to App Platform would have failed to process, and managing domains and DNS records through both the Cloud Control Panel and the API would have been unavailable.

If you continue to experience problems with either of these services please open a ticket with our support team from within your Cloud Control Panel. Thank you for your patience.

May 20, 14:55 UTC
Monitoring - Our Engineering team identified that the root cause of the issues impacting App Platform deployments was a wider issue involving our DNS API. Along with the App Platform deployment errors, customers would have experienced trouble viewing and editing domains and their DNS records through both the Cloud Control Panel and the API.

A fix has been rolled out and as of 13:26 UTC customers should no longer be experiencing any issues involving the services listed above. Our team will continue to monitor the situation to ensure stability and provide a final update as soon as we confirm the issue has been fully resolved.

Thank you for your patience.

May 20, 13:57 UTC
Investigating - As of 12:24 UTC, our Engineering team is investigating a global issue with our App Platform service. During this time, users may experience issues when creating, updating, or deleting deployments.

At this time, previously deployed running Apps are not impacted. We will provide an update as soon as possible.

May 20, 12:46 UTC
Resolved - Our Engineering team has confirmed full resolution of this incident. The issue impacted event processing in the FRA1 region from 07:01 to 11:00 UTC. We have verified that there is no further risk to event processing in the region, and we will now mark this issue as Resolved. Thank you for your patience and understanding throughout this process. If you should encounter any further issues at all, then please open a ticket with our Support team.
May 20, 11:19 UTC
Monitoring - Our Engineering team was able to take action to mitigate the impact of this incident and allow events to process normally. We will post an update as soon as the issue is fully resolved. Please note that while the situation has improved, there may still be a backlog of older events that are in the process of being resolved. We kindly ask for your patience as our team works diligently to address these remaining events. We apologize for any inconvenience caused and assure you that we are committed to resolving all outstanding issues.
May 20, 08:43 UTC
Identified - Our Engineering team has identified the cause of the issue with event processing in the FRA1 region and is actively working on a fix. During this time, only a subset of users may experience delays during creates, destroys, and power events in the cloud panel. We will post an update as soon as additional information is available.
May 20, 08:20 UTC
Investigating - Our Engineering team is investigating an issue with event processing in the FRA1 region. Beginning 07:01 UTC, users may experience delays during creates, destroys, and power events. We apologize for the inconvenience and will share an update once we have more information.
May 20, 07:59 UTC
May 19, 2023
May 18, 2023
Resolved - Our Engineering team has confirmed full resolution of this incident.

From 17:06 - 17:39 UTC, we experienced an availability outage on an internal storage cluster, due to an issue with a networking component. Users may have seen degraded performance with Volumes, issues connecting to Managed Kubernetes clusters, issues creating/deleting Mongo clusters, and delayed deploys/updates to existing Apps in our FRA1 region.

If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. Thank you for your patience throughout this incident.

May 18, 18:48 UTC
Monitoring - Our Engineering team has confirmed the issue with the networking component of the internal storage cluster was the root cause and the remediation steps taken were successful.

Users should no longer be seeing issues with operations on Volumes, connecting to Managed Kubernetes clusters, operations with Mongo Managed Databases, nor deploys/updates to Apps in Frankfurt.

We will monitor this issue for a short period to ensure it's fully resolved and will post a final update at that time.

May 18, 17:59 UTC
Identified - The team has identified an issue with a networking component of the internal storage cluster and has taken steps to remediate the issue. At this time, we're seeing Volumes operations returning to pre-incident thresholds.

Our App Platform team identified that users with Apps in Frankfurt may have also seen delays in deploys/updates to existing Apps.

We're watching metrics closely to confirm operations return to normal and to confirm that the network issue was the root cause.

May 18, 17:41 UTC
Investigating - Our Engineering team is investigating an issue with a drop in availability for an internal storage cluster in our FRA1 region. At this time, users with Volumes in FRA may experience slower than expected operations, as well as I/O stalls. Users with Managed Kubernetes clusters may see issues connecting to clusters. This issue is also impacting Mongo Managed Database operations, including creates and deletes, but does not impact already running Mongo clusters.

We will post an update as soon as possible.

May 18, 17:32 UTC
May 17, 2023

No incidents reported.

May 16, 2023
Completed - The scheduled maintenance has been completed.
May 16, 23:49 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 16, 22:00 UTC
Scheduled - Start: 2023-05-16 22:00 UTC
End: 2023-05-17 02:00 UTC

During the above window, our Networking team will be making changes to our core networking infrastructure to improve performance and scalability in the NYC2 region. This will be the second of the two maintenance activities performed by our team in the region on consecutive days.

Expected Impact:

These upgrades are designed and tested to be seamless and we do not expect any impact to customer traffic due to this maintenance. If an unexpected issue arises, affected Droplets and Droplet-based services may experience a temporary loss of private connectivity between VPCs. We will endeavor to keep any such impact to a minimum.

If you have any questions or concerns regarding this maintenance, please reach out to us by opening up a ticket on your account via https://cloudsupport.digitalocean.com/s/createticket .

May 16, 21:06 UTC
Completed - The scheduled maintenance has been completed.
May 16, 16:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 16, 12:00 UTC
Scheduled - Start: 2023-05-16 12:00 UTC
End: 2023-05-16 16:00 UTC

During the above window, our networking team will be making changes to core networking infrastructure to improve performance and scalability in the AMS2 region.

Expected Impact:

These upgrades are designed and tested to be seamless and we do not expect any impact to customer network traffic due to this maintenance.

Should an unexpected issue arise, a possible outcome would be a failure of control plane events, including, but not limited to, Droplet and Droplet-based service creates, Snapshots, Backups, etc. Should this failure occur, we will update this post with further details.

If you have any questions or concerns regarding this maintenance, please reach out to us by opening up a ticket on your account via https://cloudsupport.digitalocean.com/s/.

May 13, 12:18 UTC
Completed - The scheduled maintenance has been completed.
May 16, 01:11 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 15, 22:00 UTC
Scheduled - Start: 2023-05-15 22:00 UTC
End: 2023-05-16 02:00 UTC

During the above window, our Networking team will be making changes to our core networking infrastructure to improve performance and scalability in the NYC2 region. This maintenance will occur in two parts, and we will post another maintenance notice for the second phase.

Expected Impact:

These upgrades are designed and tested to be seamless and we do not expect any impact to customer traffic due to this maintenance. If an unexpected issue arises, affected Droplets and Droplet-based services may experience a temporary loss of private connectivity between VPCs. We will endeavor to keep any such impact to a minimum.

If you have any questions or concerns regarding this maintenance, please reach out to us by opening up a ticket on your account via https://cloudsupport.digitalocean.com/s/createticket .

May 15, 21:10 UTC
May 15, 2023
May 14, 2023
May 13, 2023

No incidents reported.

May 12, 2023