DigitalOcean Services Status

All Systems Operational
API Operational
Billing Operational
Cloud Control Panel Operational
Cloud Firewall Operational
Community Operational
DNS Operational
Support Center Operational
Reserved IP Operational
WWW Operational
App Platform Operational
Global Operational
Amsterdam Operational
Bangalore Operational
Frankfurt Operational
London Operational
New York Operational
San Francisco Operational
Singapore Operational
Sydney Operational
Toronto Operational
Container Registry Operational
AMS3 Operational
FRA1 Operational
NYC3 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
Droplets Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Event Processing Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Functions Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Managed Databases Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Monitoring Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SGP1 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SYD1 Operational
TOR1 Operational
Networking Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Kubernetes Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Load Balancers Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Spaces Operational
AMS3 Operational
FRA1 Operational
NYC3 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
Spaces CDN Operational
AMS3 Operational
FRA1 Operational
NYC3 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
VPC Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Volumes Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
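The component statuses listed above can also be consumed programmatically. Status pages like this one are commonly hosted on Atlassian Statuspage, which conventionally exposes a public JSON API (e.g. `/api/v2/summary.json`). The endpoint and payload shape below are assumptions based on that convention, not something confirmed by this page; the sketch filters a sample payload for components that are not operational.

```python
# Sketch: filter a Statuspage-style summary payload for unhealthy components.
# ASSUMPTION: the live data would come from something like
# https://status.digitalocean.com/api/v2/summary.json — verify before relying
# on it. Here we use a hardcoded sample payload in the conventional shape.

sample_summary = {
    "status": {"indicator": "none", "description": "All Systems Operational"},
    "components": [
        {"name": "API", "status": "operational"},
        {"name": "Droplets", "status": "operational"},
        {"name": "Spaces", "status": "degraded_performance"},
    ],
}

def non_operational(summary: dict) -> list:
    """Return the names of components not reporting 'operational'."""
    return [c["name"] for c in summary["components"]
            if c["status"] != "operational"]

print(non_operational(sample_summary))  # ['Spaces']
```

A monitoring job could poll such an endpoint on an interval and alert only when this list is non-empty, rather than scraping the HTML page.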
Past Incidents
Jun 4, 2023

No incidents reported today.

Jun 3, 2023

No incidents reported.

Jun 2, 2023

No incidents reported.

Jun 1, 2023
Completed - The scheduled maintenance has been completed.
Jun 1, 03:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 1, 01:00 UTC
Scheduled - Start: 2023-06-01 01:00 UTC
End: 2023-06-01 05:00 UTC

During the above window, our Networking team will be making changes to our core networking infrastructure to improve performance and scalability in the SFO2 region. This will be the second of two maintenance activities performed by our team in the region on consecutive days.

Expected Impact:

These upgrades are designed and tested to be seamless and we do not expect any impact to customer traffic due to this maintenance. If an unexpected issue arises, affected Droplets and Droplet-based services may experience a temporary loss of private connectivity between VPCs. We will endeavor to keep any such impact to a minimum.

If you have any questions or concerns regarding this maintenance, please reach out to us by opening up a ticket on your account via https://cloudsupport.digitalocean.com/s/createticket .

May 31, 23:11 UTC
May 31, 2023
Completed - The scheduled maintenance has been completed.
May 31, 04:04 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 31, 01:30 UTC
Scheduled - Start: 2023-05-31 01:00 UTC
End: 2023-05-31 05:00 UTC


During the above window, our Networking team will be making changes to our core networking infrastructure to improve performance and scalability in the SFO2 region. This maintenance will occur in two parts on consecutive days and we will send another maintenance notice for the second phase.

Expected Impact:

These upgrades are designed and tested to be seamless and we do not expect any impact to customer traffic due to this maintenance. If an unexpected issue arises, affected Droplets and Droplet-based services may experience a temporary loss of private connectivity between VPCs. We will endeavor to keep any such impact to a minimum.

If you have any questions or concerns regarding this maintenance, please reach out to us by opening up a ticket on your account via https://cloudsupport.digitalocean.com/s/createticket .

May 31, 01:26 UTC
Resolved - Our Engineering team has confirmed the full resolution of the issue impacting Spaces performance and availability in our NYC3 region.

From 00:20 to 00:57 UTC, users may have experienced slowness or timeouts when trying to access or manage their Spaces resources in NYC3, static site assets in NYC, or App Platform bandwidth insights.

Spaces should now be operating normally. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience.

May 31, 02:25 UTC
Monitoring - Our Engineering team has observed recovery of availability for Spaces in our NYC3 region.

Availability returned to 100% at 00:57 UTC and since then, users should not be experiencing any issues with Spaces or App Platform.

We'll now monitor the situation for a period of time and post a final update once we confirm the incident is resolved.

May 31, 01:47 UTC
Investigating - Our Engineering team is investigating a drop in availability for Spaces in our NYC region. During this time, some users may experience errors with API or object requests, be unable to create new buckets in NYC3, and/or see issues with loading Spaces in the Cloud Control Panel.

Additionally, users of App Platform will be unable to see bandwidth insights in their dashboards and static site users may notice errors fetching assets from Spaces buckets in NYC3.

We apologize for the inconvenience and will share an update once we have more information.

May 31, 01:05 UTC
May 30, 2023

No incidents reported.

May 29, 2023

No incidents reported.

May 28, 2023
Resolved - Our Engineering team has continued to actively monitor the situation resulting from multiple subsea fiber faults in the APAC region. Over the last few days, our team has continued to make routing changes where possible.

Crews have been able to complete some cable repairs, and we expect our network routing to stabilize once the remaining repairs are completed in the coming weeks. Until then, we expect intermittent periods of packet loss and latency on network routes between Singapore and New York City, as well as between Singapore and Toronto. These are normally short-lived and occur during Singapore business hours, when traffic is heavy.

Given the relative stability of routes, we will now close out this incident and provide any needed updates separately.

If users experience a disruption in service or have any questions, we invite them to submit a support ticket from within their account.

Thank you for your patience and understanding throughout this incident.

May 28, 19:16 UTC
Update - Our Engineering team has detected large amounts of loss and latency on network routes between NYC/TOR and Singapore. Users may experience higher-than-normal latency or amounts of packet loss for traffic traversing those routes. The team is reviewing any possible traffic shifts to alleviate the situation.

If you have any questions or concerns, please open a support ticket from within your account.

May 24, 14:32 UTC
Update - Our Engineering team detected large amounts of loss and latency on network routes between Singapore and Frankfurt, from 13:10 - 13:20 UTC, today. The issue self-recovered, likely due to upstream providers shuffling traffic.

Additionally, our team is seeing high levels of loss and latency on network routes between Bangalore and Sydney. Users may see packet loss or notice less performant services at this time. The Network Engineering team is currently exploring options to route around the issue or any other steps that can be taken to mitigate impact to users.

May 19, 14:35 UTC
Monitoring - Our Engineering team has not observed any periods of higher-than-normal packet loss or latency over the last 12 hours, and user reports of degraded performance have returned to pre-incident levels.

Until the cable cuts in the APAC region are completely resolved, we expect there to be intermittent periods of degraded performance, especially during APAC business hours.

We'll continue to monitor the situation through the next few days and provide any needed updates. If you experience any issues, feel free to open a support ticket and our team will be happy to assist.

May 14, 03:31 UTC
Identified - Due to the ongoing subsea fiber faults in the APAC region, our Engineering team is observing a recurrence of packet loss and increased latency between Singapore and Europe/North American regions, during peak hours in Asia.

We've received reports of users experiencing degraded performance for Droplets and Droplet-based services, as well as with pushes to Container Registry, and deployments of App Platform apps.

Our team is actively engaged in attempting to re-route traffic where possible to improve performance. Until the cable cuts are fully resolved, users may continue to experience intermittent periods of degraded service.

You can see our previous post on this incident here: https://status.digitalocean.com/incidents/4w1yx5p58p1t

May 12, 16:52 UTC
May 27, 2023

No incidents reported.

May 26, 2023
Resolved - As of 02:10 UTC, our Engineering team has confirmed that the issue with our Container Registry services in the SFO3 region has been fully resolved.

Container Registry services should now be operating normally. If you continue to experience any trouble with these services, please open a ticket with our support team.

Thank you for your patience and we apologize for the inconvenience.

May 26, 02:38 UTC
Monitoring - As of 22:10 UTC, our Engineering team has implemented a fix to resolve the issue with our Container Registry services in the SFO3 region and is monitoring the situation closely.

Users should no longer see 500-type errors when uploading/pushing/deleting images, slow cleanup operations, creation failures for new registries, or other errors when interacting with registries in the SFO3 region.

We are going to continue to monitor the situation and will post an update once we are confident this issue will not recur.

May 25, 23:31 UTC
Investigating - We are observing some customer reports of issues with DigitalOcean Container Registries in the SFO3 region. Our Engineering team is investigating the potential causes of these reports. This appears to be a recurrence of the incident described at the link below:

https://status.digitalocean.com/incidents/mmngtxzmm6gs

At this time, users may see 500-type errors when uploading/pushing/deleting images, slow cleanup operations, creation failures for new registries, or other errors when interacting with registries in SFO3.

We will post an update as soon as we have further information. Thank you for your patience.

May 25, 16:20 UTC
May 25, 2023
Resolved - Our Engineering team has confirmed the full resolution of this incident.

From 06:00 - 08:00 UTC, users were unable to create and deploy new Apps and experienced errors when updating and deploying existing Apps. App deployments should now be operating normally.

If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel.

May 25, 09:45 UTC
Monitoring - Our Engineering team has implemented a fix to resolve the issue with App deployments in the SFO region and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
May 25, 09:20 UTC
Identified - Our Engineering team has identified the cause of the issue with network latency that is impacting the management and creation of Apps in the SFO region and is actively working on a fix. During this time, a subset of users might see errors when updating and deploying new/existing Apps. We will post an update as soon as additional information is available.
May 25, 08:47 UTC
Investigating - Our Engineering team is investigating an issue with network latency that is impacting the management and creation of Apps in the SFO region. As of 06:00 UTC, users are unable to create and deploy new Apps and may see errors when updating and deploying existing Apps. At this time, previously deployed running Apps are not impacted. We apologize for the inconvenience and will share an update once we have more information.
May 25, 08:30 UTC
Resolved - Our Engineering team has identified the root cause of the incident as database contention caused by multiple concurrent database calls. During this time, users might have experienced issues interacting and authenticating with the DigitalOcean Container Registry, creating new container registries, and pushing/deleting images to/from registries.

As of 00:10 UTC, we have confirmed the full resolution of the issue affecting the DigitalOcean Container Registry in the SFO3 region. We appreciate your patience throughout the process and if you continue to experience problems, please open a ticket with our support team for further review.

May 25, 02:49 UTC
Update - Our Engineering team continues to investigate the root cause of this incident but has observed a reduction in the error rate with DigitalOcean Container Registry in the SFO3 region.

At this time, users should no longer experience errors while interacting and authenticating with the DigitalOcean Container Registry, creating new container registries, and pushing/deleting images to/from registries.

We will post an update as soon as we have further information. Thank you for your patience.

May 24, 22:53 UTC
Investigating - Following an uptick in customer reports of issues with DigitalOcean Container Registries in SFO3, our Engineering team is investigating any potential issues that are causing these reports.

At this time, users may see 500 type errors when uploading/pushing/deleting images, slow cleanup operations, creation failures for new registries, or other errors when interacting with registries in SFO3.

We will post an update as soon as we have further information. Thank you for your patience.

May 24, 19:28 UTC
May 24, 2023
Resolved - As of 21:15 UTC, our Engineering team has confirmed the full resolution of this incident. We have verified that Snapshot and Backup events in the SGP1 region are processing without any failures, and we will now mark this issue as resolved.

Thank you for your patience and understanding throughout this process. If you encounter any further issues, please open a ticket with our Support team.

May 24, 21:30 UTC
Update - We are continuing to monitor for any further issues.
May 24, 21:05 UTC
Monitoring - As of 19:30 UTC, our Engineering team took action to mitigate the impact of this incident and allow Snapshot and Backup events to process normally in the SGP1 region. We will post an update as soon as the issue is fully resolved.

Please note that while the situation has improved, there may still be a backlog of older events that are in the process of being resolved. We kindly ask for your patience as our team works diligently to address these remaining events.

We apologize for any inconvenience caused and assure you that we are committed to resolving all outstanding issues.

May 24, 20:44 UTC
Investigating - As of 13:30 UTC, our Engineering team is investigating an issue with intermittent Snapshot and Backup failures in our SGP1 region. Users may experience errors when performing Snapshots but may eventually see retries succeed.

Any Backup failures are automatically being retried within the Backup window for individual Droplets.

We apologize for the inconvenience and will share an update once we have more information.

May 24, 19:20 UTC
Completed - The scheduled maintenance has been completed.
May 24, 18:35 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 24, 16:00 UTC
Scheduled - Start: 2023-05-24 16:00 UTC
End: 2023-05-24 20:00 UTC


During the above window, our Networking team will be making changes to our core networking infrastructure to improve performance and scalability in the AMS3 region. This will be the final phase of the three maintenance activities performed by our team in AMS3 on consecutive days.

Expected Impact:

These upgrades are designed and tested to be seamless and we do not expect any impact to customer traffic due to this maintenance. If an unexpected issue arises, affected Droplets and Droplet-based services may experience a temporary loss of private connectivity between VPCs. We will endeavor to keep any such impact to a minimum.

If you have any questions or concerns regarding this maintenance, please reach out to us by opening up a ticket on your account via https://cloudsupport.digitalocean.com/s/createticket .

May 24, 15:05 UTC
Resolved - Our Engineering team has confirmed the full resolution of this issue. From approximately 09:05 to 10:40 UTC, users saw errors when attempting to create new Functions and when invoking or updating existing deploys. Functions should now be operating normally. If you continue to experience problems, please open a ticket with our support team. Thank you for your patience and we apologize for any inconvenience.
May 24, 11:10 UTC
Monitoring - Our Engineering team has deployed a fix to resolve an issue with Serverless Functions in the NYC1 region. Users should once again be able to create new Functions and to invoke or update existing deploys. We are monitoring the situation closely and will share an update once the issue is resolved completely.
May 24, 10:50 UTC
Identified - Our Engineering team has identified the cause of the issue with Serverless Functions in the NYC1 region and is actively working on a fix. During this time users may see errors when attempting to create new Functions, as well as when invoking or updating existing deploys. We will post an update as soon as additional information is available.
May 24, 10:34 UTC
Investigating - As of 09:05 UTC, our Engineering team is investigating an issue with Serverless Functions in the NYC1 region. Users may see errors when attempting to create new Functions, as well as when invoking or updating existing deploys in the NYC1 region. We apologize for the inconvenience and will share an update once we have more information.
May 24, 10:09 UTC
May 23, 2023
Completed - The scheduled maintenance has been completed.
May 23, 19:11 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 23, 16:00 UTC
Scheduled - Start: 2023-05-23 16:00 UTC
End: 2023-05-23 20:00 UTC


During the above window, our Networking team will be making changes to our core networking infrastructure to improve performance and scalability in the AMS3 region. This will be the second of three maintenance activities performed by our team in AMS3 on consecutive days.

Expected Impact:

These upgrades are designed and tested to be seamless and we do not expect any impact to customer traffic due to this maintenance. If an unexpected issue arises, affected Droplets and Droplet-based services may experience a temporary loss of private connectivity between VPCs. We will endeavor to keep any such impact to a minimum.

If you have any questions or concerns regarding this maintenance, please reach out to us by opening up a ticket on your account via https://cloudsupport.digitalocean.com/s/createticket .

May 23, 15:20 UTC
May 22, 2023
Completed - The scheduled maintenance has been completed.
May 22, 19:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 22, 16:00 UTC
Scheduled - Start: 2023-05-22 16:00 UTC
End: 2023-05-22 20:00 UTC

During the above window, our Networking team will be making changes to our core networking infrastructure to improve performance and scalability in the AMS3 region. This maintenance will occur in three parts on consecutive days, and we will send other maintenance notices for the second and third phases.

Expected Impact:

These upgrades are designed and tested to be seamless and we do not expect any impact to customer traffic due to this maintenance. If an unexpected issue arises, affected Droplets and Droplet-based services may experience a temporary loss of private connectivity between VPCs. We will endeavor to keep any such impact to a minimum.

If you have any questions or concerns regarding this maintenance, please reach out to us by opening up a ticket on your account via https://cloudsupport.digitalocean.com/s/createticket .

May 22, 15:20 UTC
May 21, 2023

No incidents reported.