DigitalOcean Services Status

All Systems Operational
API Operational
Billing Operational
Cloud Control Panel Operational
Cloud Firewall Operational
Community Operational
DNS Operational
Support Center Operational
Reserved IP Operational
WWW Operational
App Platform Operational
Global Operational
Amsterdam Operational
Bangalore Operational
Frankfurt Operational
London Operational
New York Operational
San Francisco Operational
Singapore Operational
Sydney Operational
Toronto Operational
Container Registry Operational
AMS3 Operational
FRA1 Operational
NYC3 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
Droplets Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Event Processing Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Functions Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Managed Databases Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Monitoring Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SGP1 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SYD1 Operational
TOR1 Operational
Networking Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Kubernetes Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Load Balancers Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Spaces Operational
AMS3 Operational
FRA1 Operational
NYC3 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
BLR1 Operational
Spaces CDN Operational
AMS3 Operational
FRA1 Operational
NYC3 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
VPC Operational
Global Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Volumes Operational
AMS2 Operational
AMS3 Operational
BLR1 Operational
FRA1 Operational
LON1 Operational
NYC1 Operational
NYC2 Operational
NYC3 Operational
SFO1 Operational
SFO2 Operational
SFO3 Operational
SGP1 Operational
SYD1 Operational
TOR1 Operational
Past Incidents
Mar 3, 2024

No incidents reported today.

Mar 2, 2024

No incidents reported.

Mar 1, 2024

No incidents reported.

Feb 29, 2024
Resolved - Our Engineering team identified and resolved an issue that was affecting the booting of Droplets from the Recovery ISO.

From 00:20 UTC to 05:24 UTC, users might have experienced errors when attempting to boot Droplets from the Recovery ISO.

If you continue to experience problems, please open a ticket with our support team. Thank you for your patience and we apologize for any inconvenience.

Feb 29, 05:50 UTC
Feb 28, 2024
Resolved - Our Engineering team has confirmed that creation, forking, and restoration of PostgreSQL clusters on v16 are functioning correctly.

Upgrades from lower-versioned PostgreSQL clusters to v16 remain unavailable at this time, and users will see errors if they attempt that upgrade. Our Engineering team continues to work on making upgrades to v16 available again, but we expect this to take some time.

If you continue to experience issues or have any questions, please open a ticket with our support team.

Feb 28, 22:53 UTC
Monitoring - After testing, teams have determined that PostgreSQL v16 is safe for new creations, as well as forks and restores for existing clusters. At this time, v16 is re-enabled in our Cloud Control Panel and users creating, forking, or restoring v16 clusters should be able to do so successfully.

We will now monitor new cluster creations for a short period of time.

Feb 28, 20:56 UTC
Identified - The issue identified with the image used to create PostgreSQL v16 clusters has been reported upstream to the PostgreSQL project. Engineering teams are currently testing the image to ensure it is safe for users to continue using for new cluster creations and upgrades.

Until that determination is made, customers are unable to create, fork, or restore v16 clusters, both through the Cloud Control Panel and API. Customers may use v15 or lower for new PostgreSQL cluster creations in the interim.

We appreciate your patience and will provide another update once we have more information.

Feb 28, 18:03 UTC
Update - Our Engineering team is continuing to investigate the root cause of this incident. During this period, users may encounter errors when trying to create PostgreSQL v16 database clusters. We intend to re-enable the creation of PostgreSQL v16 database instances as soon as possible.

We will provide an update as soon as we have further information.

Feb 28, 10:42 UTC
Update - During the course of investigation, our Engineering team has discovered there may be an issue with the image used to create PostgreSQL v16 Database Clusters. Due to this, our team is temporarily removing the option to create v16 clusters from our Cloud Control Panel, while they continue to work on addressing the root cause. Users attempting to create v16 clusters via the API will continue to receive errors. Additionally, users with existing v16 clusters will be unable to fork or restore those clusters until this incident is resolved.

We will continue to provide updates as they are available. In the meantime, users are free to create new clusters on versions other than v16.

Feb 27, 18:27 UTC
Update - Our Engineering team continues to investigate the root cause of this incident. During this time, users are unable to create v16 PostgreSQL Database Clusters.

We will provide an update as soon as we have further information.

Feb 27, 14:45 UTC
Investigating - As of 10:18 UTC, our Engineering team is investigating an issue with creating PostgreSQL Managed Database clusters via our Cloud Control Panel.

During this time, users may face issues creating PostgreSQL v16 databases from the Cloud Control Panel. The creation of clusters below v16 remains unaffected at the moment.

We apologize for the inconvenience and will share an update once we have more information.

Feb 27, 10:40 UTC
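
For users scripting the interim workaround noted in the updates above (creating clusters on v15 or lower while v16 creation is disabled), the sketch below shows a minimal request against the DigitalOcean API's database-creation endpoint. The cluster name, region, and size slug are illustrative placeholders, and the payload fields follow the publicly documented /v2/databases create request; treat the exact values as assumptions to adapt.

    import os
    import requests

    # Sketch only: create a PostgreSQL 15 cluster while v16 creation is disabled.
    # The name, region, and size slug below are placeholders, not recommendations.
    token = os.environ["DIGITALOCEAN_TOKEN"]

    resp = requests.post(
        "https://api.digitalocean.com/v2/databases",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "name": "example-pg15",
            "engine": "pg",
            "version": "15",  # per the incident updates, v15 or lower is the interim option
            "region": "nyc1",
            "size": "db-s-1vcpu-1gb",
            "num_nodes": 1,
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["database"]["id"])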
Resolved - Our Engineering team has confirmed full resolution of the issue with networking in our SFO2 region.

If you continue to experience problems, please open a ticket with our support team. Thank you for your patience throughout this incident!

Feb 28, 22:49 UTC
Monitoring - Our Engineering team has confirmed that the faulty network hardware component was the cause of this issue. From 21:39 - 22:11 UTC, this component was not functioning correctly, causing networking issues for a subset of customers in our SFO2 region, as well as internal alerts in our SFO1/SFO3 regions.

At this time, all services should now be operating normally. We will monitor this incident for a short period of time to confirm full resolution.

Feb 28, 22:23 UTC
Identified - Our Engineering team has identified the cause of the networking issue in our SFO regions as a faulty network hardware component in SFO2. They have isolated that component, and we're observing error rates returning to pre-incident levels at this time.

We are continuing to look into this failure, but users should be seeing recovery on their services. We'll provide another update soon.

Feb 28, 22:17 UTC
Investigating - Our Engineering team is currently investigating internal alerts and customer reports for an increase in networking errors in our SFO regions for Droplets and Droplet-based services. We will provide an update as soon as we have further information.
Feb 28, 22:06 UTC
Feb 27, 2024
Completed - The scheduled maintenance has been completed.
Feb 27, 20:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 27, 16:00 UTC
Scheduled - Start: 2024-02-27 16:00 UTC
End: 2024-02-27 20:00 UTC


During the above window, our Networking team will be making changes to core networking infrastructure to improve performance and scalability in the AMS3 region.

Expected impact:

These upgrades are designed and tested to be seamless and we do not expect any impact to customer traffic due to this maintenance. If an unexpected issue arises, affected Droplets and Droplet-based services may experience increased latency or a brief disruption in network traffic. We will endeavor to keep any such impact to a minimum.

If you have any questions related to this maintenance, please send us a ticket from your cloud support page: https://cloudsupport.digitalocean.com/s/createticket

Feb 20, 15:19 UTC
Feb 26, 2024

No incidents reported.

Feb 25, 2024

No incidents reported.

Feb 24, 2024

No incidents reported.

Feb 23, 2024

No incidents reported.

Feb 22, 2024

No incidents reported.

Feb 21, 2024
Resolved - Our Engineering team has confirmed the resolution of the issue impacting the Container Registry in multiple regions.

Everything involving the Container Registry should now be functioning normally.

We appreciate your patience throughout the process, and if you continue to experience problems, please open a ticket with our support team for further review.

Feb 21, 20:43 UTC
Monitoring - Our Engineering team has identified an internal operation within the Container Registry service that was placing load on the service, leading to latency and errors. The team has paused that operation in order to resolve the issue impacting the Container Registry in multiple regions. Users should no longer experience latency while interacting with their Container Registries or while building their Apps.

We are actively monitoring the situation to ensure stability and will provide an update once the incident has been fully resolved.

Thank you for your patience and we apologize for the inconvenience.

Feb 21, 18:03 UTC
Investigating - Our Engineering team is investigating an issue with the DigitalOcean Container Registry service. Beginning around 20:00 UTC on February 20, there has been an uptick in 401 errors for image pulls from the Container Registry service.

During this time, a subset of customers may experience latency or see 401 errors while interacting with Container Registries. This issue also impacts App Platform builds, and users may encounter delays or timeout errors while building their Apps as a result. Users relying on Container Registry images for deployments to Managed Kubernetes clusters may also see latency or failed deployments.

We apologize for the inconvenience and will share an update once we have more information.

Feb 21, 15:41 UTC
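
Because the 401 errors and build timeouts described above were intermittent, automated pipelines pulling from the registry can be made more tolerant with a simple retry-and-backoff wrapper. The sketch below is a generic illustration rather than DigitalOcean guidance, and the image reference is a placeholder.

    import subprocess
    import time

    # Sketch only: retry an image pull a few times with exponential backoff,
    # which helps ride out transient 401s or timeouts. Placeholder image name.
    IMAGE = "registry.digitalocean.com/example-registry/example-app:latest"

    for attempt in range(5):
        result = subprocess.run(["docker", "pull", IMAGE], capture_output=True, text=True)
        if result.returncode == 0:
            print("pull succeeded")
            break
        print(f"attempt {attempt + 1} failed: {result.stderr.strip()}")
        time.sleep(2 ** attempt)  # backoff: 1s, 2s, 4s, 8s, 16s
    else:
        raise RuntimeError("image pull failed after 5 attempts")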
Feb 20, 2024
Resolved - As of 06:25 UTC, our Engineering team has confirmed the resolution of the issue impacting Spaces availability in the BLR1 region.

Users should no longer experience issues with their Spaces resources in the BLR1 region.

If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.

Feb 20, 07:07 UTC
Monitoring - Our Engineering team has implemented a fix to resolve the Spaces availability issues in the BLR1 region and is monitoring the situation.

Users should no longer encounter errors when accessing Spaces in the BLR1 region and should be able to create new Spaces buckets from the cloud control panel.

We will post an update as soon as the issue is fully resolved.

Feb 20, 06:39 UTC
Investigating - Our Engineering team is investigating an issue with Spaces availability in the BLR1 region. During this time users may encounter errors when accessing Spaces objects and creating new buckets in the BLR1 region.

We apologize for the inconvenience and will share an update once we have more information.

Feb 20, 05:46 UTC
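
Because Spaces exposes an S3-compatible API, a quick availability check against the BLR1 endpoint can be scripted with any S3 client. The sketch below uses boto3; the bucket name and the SPACES_KEY / SPACES_SECRET environment variables are placeholders.

    import os
    import boto3

    # Sketch only: confirm the BLR1 Spaces endpoint is reachable and a bucket responds.
    # Credentials and bucket name are placeholders.
    client = boto3.session.Session().client(
        "s3",
        region_name="blr1",
        endpoint_url="https://blr1.digitaloceanspaces.com",
        aws_access_key_id=os.environ["SPACES_KEY"],
        aws_secret_access_key=os.environ["SPACES_SECRET"],
    )

    print([b["Name"] for b in client.list_buckets()["Buckets"]])
    print(client.list_objects_v2(Bucket="example-bucket", MaxKeys=5).get("KeyCount", 0))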
Feb 19, 2024
Completed - The scheduled maintenance has been completed.
Feb 19, 17:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 19, 14:00 UTC
Scheduled - Start: 2024-02-19 14:00 UTC
End: 2024-02-19 17:00 UTC

Hello,

During the above window, we will be performing maintenance in our BLR1 region as part of a firewall migration.

Expected impact:

As part of this maintenance, event processing in BLR1 will be disabled for a period of up to 15 minutes during the three-hour window. During this period, users won't be able to create, destroy, or modify new or existing DO services in BLR1 (such as Droplets, DBaaS/DOKS clusters, etc.).

If you have any questions related to this maintenance, please send us a ticket from your cloud support page: https://cloudsupport.digitalocean.com/s/createticket


Thank you,
Team DigitalOcean

Feb 16, 14:52 UTC
Resolved - From 15:50 to 16:46 UTC, our team received customer reports of issues impacting multiple products in our BLR1 region, including the accessibility of Managed Databases and Managed Kubernetes clusters, as well as general network connectivity disruption. These issues may be related to a scheduled maintenance event in the region, per our status post linked below:

https://status.digitalocean.com/incidents/5z0npmmmnc1h

Our team continues to review customer reports and diagnose the impact related to this maintenance. In the meantime, we have rolled back the maintenance process and all services should now be responding normally. If you experience any further issues, please open a ticket with our Support team. Thank you for your patience and we apologize for any inconvenience.

Feb 19, 15:50 UTC
Feb 18, 2024
Resolved - As of 17:47 UTC, our Engineering team has confirmed the full resolution of the problem impacting the Managed Kubernetes service in our NYC3 region. The Cilium pods inside the clusters should be functioning normally.

If you continue to experience problems, please open a ticket with our Support team.

Thank you for your patience and we apologize for the inconvenience.

Feb 18, 19:08 UTC
Monitoring - Our Engineering team has deployed a fix for the issue with our Managed Kubernetes service in which users experienced network connectivity problems and Cilium pods restarting inside their clusters. Cilium pods should now be functioning normally.

We are monitoring the situation and will post another update once we confirm the fix resolves this incident.

Feb 18, 18:09 UTC
Investigating - Our Engineering team is investigating an issue with our Managed Kubernetes service in the NYC3 region.

During this time users may experience network connectivity issues specifically with the Cilium pods inside their clusters.

We apologize for the inconvenience and will share an update once we have more information.

Feb 18, 16:57 UTC
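
For cluster operators who want to confirm the recovery from their side, the Cilium agent pods in a DOKS cluster can be inspected for phase and restart counts. The sketch below uses the official Kubernetes Python client and assumes a local kubeconfig and the common k8s-app=cilium label on the agent pods; both are assumptions to verify against your own cluster.

    from kubernetes import client, config

    # Sketch only: list Cilium pods in kube-system and report phase and restart counts.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    pods = v1.list_namespaced_pod("kube-system", label_selector="k8s-app=cilium")
    for pod in pods.items:
        restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
        print(f"{pod.metadata.name}: phase={pod.status.phase}, restarts={restarts}")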