tag:status.digitalocean.com,2005:/historyDigitalOcean Status - Incident History2024-03-18T07:47:11ZDigitalOceantag:status.digitalocean.com,2005:Incident/202264082024-03-17T05:00:56Z2024-03-17T05:00:58ZDigitalOcean Support Portal Maintenance<p><small>Mar <var data-var='date'>17</var>, <var data-var='time'>05:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Mar <var data-var='date'>17</var>, <var data-var='time'>02:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>16:11</var> UTC</small><br><strong>Scheduled</strong> - Start: 2024-03-17 02:00 UTC<br />End: 2024-03-17 05:00 UTC<br /><br />During the above maintenance window, there will be maintenance performed on our ticketing system.<br /><br />Expected Impact:<br /><br />During the course of the maintenance, users will be unable to submit support tickets, update existing tickets, or receive replies to existing tickets. Users will also be unable to log into https://cloudsupport.digitalocean.com/s/ or open the Support Portal from within the Cloud Control Panel.<br /><br />Any tickets submitted during the maintenance via alternate methods (such as replying to an email chain or via our webform) will be saved and entered into our ticketing system at the completion of the maintenance.<br /><br />Our Support Team will also be impacted by this maintenance and will be unable to enter the ticketing system to receive any tickets or reply to existing tickets. 
As soon as the vendor maintenance has completed, our team will address all support queries as quickly as possible.<br /><br />We appreciate your patience throughout this process.</p>tag:status.digitalocean.com,2005:Incident/202736852024-03-15T23:54:14Z2024-03-15T23:54:14ZDroplet Creation in BLR1<p><small>Mar <var data-var='date'>15</var>, <var data-var='time'>23:54</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has confirmed full resolution of the DNS resolution issue in the BLR1 region. <br /><br />We appreciate your patience throughout this process and if you continue to experience problems, please open a ticket with our support team for further review.</p><p><small>Mar <var data-var='date'>15</var>, <var data-var='time'>23:27</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has rolled out a fix for the DNS resolution issue in the BLR1 region. Users should now be able to create Droplets with firewall rules.<br /><br />We'll post an update once the incident is fully resolved.</p><p><small>Mar <var data-var='date'>15</var>, <var data-var='time'>23:09</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is currently investigating issues with DNS resolution in the BLR1 region. 
During this time, customers may experience issues while creating new Droplets with firewall rules in the BLR1 region.<br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>tag:status.digitalocean.com,2005:Incident/202221832024-03-12T11:43:59Z2024-03-12T11:43:59ZDroplet connectivity and Event processing in multiple regions.<p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>11:43</var> UTC</small><br><strong>Resolved</strong> - As of 10:09 UTC, our Engineering team has confirmed the full resolution of the issue impacting Droplet connectivity and Event processing in multiple regions.<br />Users should no longer see issues with their Droplets and Droplet-related services.<br />If you continue to experience problems, please open a ticket with our support team. Thank you for your patience throughout this incident.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>10:07</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has confirmed that the issue impacting Droplet connectivity in multiple regions has been mitigated.<br /><br />At this time, users should no longer see issues when connecting to their Droplets and Droplet-related services.<br /><br />We will further monitor this incident and will post an update as soon as the issue is fully resolved.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>09:29</var> UTC</small><br><strong>Identified</strong> - Our Engineering team has identified the cause of the issue impacting Droplet connectivity in multiple regions and applied a fix.<br /><br />The impact is subsiding, and users should be able to connect to their Droplets in the affected regions. 
<br /><br />We're now monitoring the fix for stability and will post an update once we are confident it is successful.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>08:28</var> UTC</small><br><strong>Update</strong> - Our Engineering team continues to investigate the issue impacting Droplet connectivity and Event processing in multiple regions.<br /><br />At this time, users may experience issues when connecting to their Droplets and may notice events appearing to be stuck or delayed when processed against services in the affected regions. <br /><br />Additionally, users may see issues with Droplet-based services like Managed Databases and Kubernetes Clusters in the affected regions.<br /><br />We apologize for the inconvenience and will share an update once we have more information.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>07:24</var> UTC</small><br><strong>Investigating</strong> - As of 06:20 UTC, our Engineering team is investigating an issue impacting Droplet connectivity and Event processing in multiple regions.<br /><br />At this time, users may experience issues when connecting to their Droplets. Also, users may notice events appearing to be stuck or delayed when processed against services in the affected regions.<br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>tag:status.digitalocean.com,2005:Incident/201844652024-03-07T12:21:18Z2024-03-07T12:21:18ZSpaces Availability in SFO2<p><small>Mar <var data-var='date'> 7</var>, <var data-var='time'>12:21</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team identified and resolved an issue that affected Spaces availability in the SFO2 region.<br /><br />From 11:38 UTC to 11:58 UTC, users may have encountered errors while accessing Spaces objects and creating new buckets in the SFO2 region.<br /><br />If you continue to experience problems, please open a ticket with our support team. 
Thank you for your patience and we apologize for any inconvenience.</p>tag:status.digitalocean.com,2005:Incident/201786072024-03-06T19:09:05Z2024-03-06T19:10:54ZNetwork Connectivity in TOR1<p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>19:09</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has identified and resolved an issue in the TOR1 region that impacted the network connectivity for a subset of Droplets and Droplet-based services for a brief duration.<br /><br />From 17:38 - 17:47 UTC, users might have experienced delays or errors while accessing and connecting to their resources in the TOR1 region from the public internet or from other resources in TOR1. Our Engineering team took swift action to restore service, and all services in TOR1 are now operating correctly. <br /><br />We apologize for the inconvenience. If you have any questions or continue to experience issues, please reach out via a Support ticket on your account.</p>tag:status.digitalocean.com,2005:Incident/201658502024-03-05T23:41:12Z2024-03-05T23:41:12ZManaged Databases Control Plane and Connectivity<p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>23:41</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has confirmed that this incident has been fully resolved.<br />If you continue to experience any issues with Managed Database Clusters, please open a ticket with our support team. Thank you for your patience.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>20:28</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has implemented a fix to resolve the issue with our Managed Databases services. At this time, we're observing error rates returning to pre-incident levels and seeing operations such as create/fork/restore succeed. Trusted sources updates are also functioning normally, so connectivity to Database clusters from resources newly added to trusted sources is restored. 
<br /><br />We are monitoring the situation closely and will post an update as soon as we confirm the issue is fully resolved.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>17:39</var> UTC</small><br><strong>Update</strong> - Our Engineering team is still working on a fix for the issue with our Managed Databases service (excluding Mongo clusters). At this time, users may be impacted by: <br /><br />- Errors/latency for creation, forking, and restoration of clusters<br />- 5xx errors on Managed Databases API endpoints<br />- Failed connections to Database clusters from newly added trusted sources, because updates to trusted sources are not taking effect. This includes new Managed Kubernetes nodes, Droplets, and Apps using Databases. <br /><br />As soon as we have further information, we'll provide another update.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>14:53</var> UTC</small><br><strong>Update</strong> - Our Engineering team continues to work towards resolving the issue impacting our Managed Databases Control Plane. At this time, they have also confirmed that new Managed Kubernetes nodes are unable to connect successfully to Managed Database clusters. Users may see connection timeouts or failures from Kubernetes nodes to their Databases, excluding Mongo Databases.<br /><br />Additionally, updates to trusted sources on Managed Database clusters are not being applied successfully, so any updates made will not take effect.<br /><br />We'll provide another update as soon as possible.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>13:52</var> UTC</small><br><strong>Update</strong> - Our Engineering team is continuing to work on resolving the issue impacting the Managed Databases Control Plane. We can confirm that the Get, List, and Update operations are now functioning properly, while the creation function remains blocked. 
Additionally, we'd like to clarify that MongoDB remains unaffected by this incident. <br /><br />We apologize for the inconvenience and will share an update once we have more information.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>12:43</var> UTC</small><br><strong>Identified</strong> - Our Engineering team has identified the cause of the issue impacting Managed Database clusters. During this time, users will continue to experience errors when working with Managed Databases such as creating, viewing, or updating Clusters.<br /><br />We are actively working on a fix and will post an update as soon as additional information is available.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>12:10</var> UTC</small><br><strong>Investigating</strong> - As of 11:22 UTC, our Engineering team is investigating an issue with Managed Database clusters. During this time, users may experience errors while creating Database clusters via the Cloud Control Panel and the API.<br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>tag:status.digitalocean.com,2005:Incident/201655762024-03-05T15:00:56Z2024-03-05T15:00:56ZBLR1 Network Maintenance<p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>15:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>14:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>11:30</var> UTC</small><br><strong>Scheduled</strong> - Start: 2024-03-05 14:00 UTC<br />End: 2024-03-05 15:00 UTC<br /><br />During the above window, we will be performing maintenance in our BLR1 region as part of a firewall migration. 
This maintenance was previously attempted on 2024-02-19, but the changes were reverted after our Engineers encountered unexpected issues resulting from the maintenance. Our team has performed a thorough examination of the previous attempt, is confident in performing this maintenance, and has prepared measures to mitigate any negative outcomes. <br /><br />Expected impact:<br /><br />As part of this maintenance, event processing in BLR1 will be delayed for a period of up to 15 minutes during the one-hour window. During this period, users will experience a delay when creating, destroying, or modifying new or existing DO services in BLR1 (such as Droplets, DBaaS/DOKS clusters, etc.). Existing services that are running should not be impacted.<br /><br />If you have any questions related to this issue, please send us a ticket from your cloud support page. https://cloudsupport.digitalocean.com/s/createticket</p>tag:status.digitalocean.com,2005:Incident/201038642024-02-29T05:50:18Z2024-02-29T05:50:18ZDroplet Recovery Image<p><small>Feb <var data-var='date'>29</var>, <var data-var='time'>05:50</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team identified and resolved an issue that was affecting the booting of Droplets from the Recovery ISO.<br /><br />From 00:20 UTC to 05:24 UTC, users might have experienced errors when attempting to boot Droplets from the Recovery ISO.<br /><br />If you continue to experience problems, please open a ticket with our support team. 
Thank you for your patience and we apologize for any inconvenience.</p>tag:status.digitalocean.com,2005:Incident/200867732024-02-28T22:53:47Z2024-02-28T22:53:47ZManaged Databases creation - PostgreSQL v16<p><small>Feb <var data-var='date'>28</var>, <var data-var='time'>22:53</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has confirmed that creation, forking, and restoration of PostgreSQL clusters on v16 are functioning correctly.<br /><br />Upgrades for lower-versioned PostgreSQL clusters to v16 remain unavailable at this time and users will see errors if they attempt to perform that upgrade. Our Engineering team continues to work on making upgrades to v16 available again, but we expect this to take some time.<br /><br />If you continue to experience issues or have any questions, please open a ticket with our support team.</p><p><small>Feb <var data-var='date'>28</var>, <var data-var='time'>20:56</var> UTC</small><br><strong>Monitoring</strong> - After testing, teams have determined that PostgreSQL v16 is safe for new creations, as well as forks and restores for existing clusters. At this time, v16 is re-enabled in our Cloud Control Panel and users creating, forking, or restoring v16 clusters should be able to do so successfully.<br /><br />We will now monitor new cluster creations for a short period of time.</p><p><small>Feb <var data-var='date'>28</var>, <var data-var='time'>18:03</var> UTC</small><br><strong>Identified</strong> - The identified issue with the image used to create PostgreSQL v16 clusters has been reported upstream to Postgres. Engineering teams are currently engaged in testing the image to ensure it is safe for users to continue using for new cluster creations and upgrades. <br /><br />Until that determination is made, customers are unable to create, fork, or restore v16 clusters, both through the Cloud Control Panel and API. Customers may use v15 or lower for new PostgreSQL cluster creations in the interim. 
<br /><br />We appreciate your patience and will provide another update once we have more information.</p><p><small>Feb <var data-var='date'>28</var>, <var data-var='time'>10:42</var> UTC</small><br><strong>Update</strong> - Our Engineering team is continuing to investigate the root cause of this incident. During this period, users may encounter errors when trying to create v16 PostgreSQL database clusters. We intend to re-enable the creation of PostgreSQL v16 database instances as soon as possible.<br /><br />We will provide an update as soon as we have further information.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>18:27</var> UTC</small><br><strong>Update</strong> - During the course of investigation, our Engineering team has discovered there may be an issue with the image used to create PostgreSQL v16 Database Clusters. Due to this, our team is temporarily removing the option to create v16 clusters from our Cloud Control Panel, while they continue to work on addressing the root cause. Users attempting to create v16 clusters via the API will continue to receive errors. Additionally, users with existing v16 clusters will be unable to fork or restore those clusters until this incident is resolved.<br /><br />We will continue to provide updates as they are available. In the meantime, users are free to create new clusters on versions other than v16.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>14:45</var> UTC</small><br><strong>Update</strong> - Our Engineering team continues to investigate the root cause of this incident. During this time, users are unable to create v16 PostgreSQL Database Clusters. 
<br /><br />We will provide an update as soon as we have further information.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>10:40</var> UTC</small><br><strong>Investigating</strong> - As of 10:18 UTC, our Engineering team is investigating an issue with creating PostgreSQL Managed Database clusters via our Cloud Control Panel. <br /><br />During this time, users may face issues creating v16 PostgreSQL Databases from the Cloud Control Panel. The creation of clusters below v16 remains unaffected at the moment.<br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>tag:status.digitalocean.com,2005:Incident/201012172024-02-28T22:49:41Z2024-02-28T22:49:41ZSFO Networking<p><small>Feb <var data-var='date'>28</var>, <var data-var='time'>22:49</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has confirmed full resolution of the issue with networking in our SFO2 region. <br /><br />If you continue to experience problems, please open a ticket with our support team. Thank you for your patience throughout this incident!</p><p><small>Feb <var data-var='date'>28</var>, <var data-var='time'>22:23</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has confirmed that the faulty network hardware component was the cause of this issue. From 21:39 - 22:11 UTC, this component was not functioning correctly, causing networking issues for a subset of customers in our SFO2 region, as well as internal alerts in our SFO1/SFO3 regions. <br /><br />At this time, all services should now be operating normally. We will monitor this incident for a short period of time to confirm full resolution.</p><p><small>Feb <var data-var='date'>28</var>, <var data-var='time'>22:17</var> UTC</small><br><strong>Identified</strong> - Our Engineering team has identified the cause of the issue with networking in our SFO regions to be a faulty network hardware component in SFO2. 
They have isolated that component and we're observing error rates returning to pre-incident levels at this time. <br /><br />We are continuing to look into this failure, but users should be seeing recovery on their services. We'll provide another update soon.</p><p><small>Feb <var data-var='date'>28</var>, <var data-var='time'>22:06</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is currently investigating internal alerts and customer reports for an increase in networking errors in our SFO regions for Droplets and Droplet-based services. We will provide an update as soon as we have further information.</p>tag:status.digitalocean.com,2005:Incident/200300842024-02-27T20:00:57Z2024-02-27T20:00:57ZAMS3 Network Maintenance<p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>20:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Feb <var data-var='date'>27</var>, <var data-var='time'>16:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>15:19</var> UTC</small><br><strong>Scheduled</strong> - Start: 2024-02-27 16:00 UTC<br />End: 2024-02-27 20:00 UTC<br /><br /><br />During the above window, our Networking team will be making changes to core networking infrastructure, to improve performance and scalability in the AMS3 region. <br /><br />Expected impact:<br /><br />These upgrades are designed and tested to be seamless and we do not expect any impact to customer traffic due to this maintenance. If an unexpected issue arises, affected Droplets and Droplet-based services may experience increased latency or a brief disruption in network traffic. We will endeavor to keep any such impact to a minimum.<br /><br />If you have any questions related to this issue please send us a ticket from your cloud support page. 
https://cloudsupport.digitalocean.com/s/createticket</p>tag:status.digitalocean.com,2005:Incident/200393022024-02-21T20:43:48Z2024-02-21T20:43:48ZContainer Registry Latency in Multiple Regions<p><small>Feb <var data-var='date'>21</var>, <var data-var='time'>20:43</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has confirmed the resolution of the issue impacting the Container Registry in multiple regions. <br /><br />All Container Registry operations should now be functioning normally. <br /><br />We appreciate your patience throughout the process and if you continue to experience problems, please open a ticket with our support team for further review.</p><p><small>Feb <var data-var='date'>21</var>, <var data-var='time'>18:03</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has identified an internal operation within the Container Registry service that was placing load on it, leading to latency and errors. The team has paused that operation in order to resolve the issue impacting the Container Registry in multiple regions. Users should no longer face latency issues while interacting with their Container Registries or while building their Apps. <br /><br />We are actively monitoring the situation to ensure stability and will provide an update once the incident has been fully resolved. <br /><br />Thank you for your patience and we apologize for the inconvenience.</p><p><small>Feb <var data-var='date'>21</var>, <var data-var='time'>15:41</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is investigating an issue with the DigitalOcean Container Registry service. Beginning around 20:00 UTC on February 20, there has been an uptick in 401 errors for image pulls from the Container Registry service.<br /><br />During this time, a subset of customers may experience latency or see 401 errors while interacting with Container Registries. 
This issue also impacts App Platform builds, and users may encounter delays while building their Apps or experience timeout errors in builds as a result. Users utilizing Container Registry images for deployment to Managed Kubernetes clusters may also see latency or failures to deploy.<br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>tag:status.digitalocean.com,2005:Incident/200262672024-02-20T07:07:10Z2024-02-20T07:07:10ZSpaces Availability in BLR1<p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>07:07</var> UTC</small><br><strong>Resolved</strong> - As of 06:25 UTC, our Engineering team has confirmed the resolution of the issue impacting Spaces availability in the BLR1 region.<br /><br />Users should no longer experience issues with their Spaces resources in the BLR1 region.<br /><br />If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.</p><p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>06:39</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has implemented a fix to resolve the Spaces availability issues in the BLR1 region and is monitoring the situation. <br /><br />Users should no longer encounter errors when accessing Spaces in the BLR1 region and should be able to create new Spaces buckets from the Cloud Control Panel. <br /><br />We will post an update as soon as the issue is fully resolved.</p><p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>05:46</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is investigating an issue with Spaces availability in the BLR1 region. During this time, users may encounter errors when accessing Spaces objects and creating new buckets in the BLR1 region. 
<br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>tag:status.digitalocean.com,2005:Incident/200004832024-02-19T17:00:56Z2024-02-19T17:00:56ZBLR1 Network Maintenance<p><small>Feb <var data-var='date'>19</var>, <var data-var='time'>17:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Feb <var data-var='date'>19</var>, <var data-var='time'>14:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Feb <var data-var='date'>16</var>, <var data-var='time'>14:52</var> UTC</small><br><strong>Scheduled</strong> - Start: 2024-02-19 14:00 UTC<br />End: 2024-02-19 17:00 UTC<br /><br />Hello,<br /><br />During the above window, we will be performing maintenance in our BLR1 region as part of a firewall migration.<br /><br />Expected impact:<br /><br />As part of this maintenance, event processing in BLR1 will be disabled for a period of up to 15 minutes during the three-hour window. During this period, users won't be able to create, destroy, or modify new or existing DO services in BLR1 (such as Droplets, DBaaS/DOKS clusters, etc.).<br /><br />If you have any questions related to this issue, please send us a ticket from your cloud support page. https://cloudsupport.digitalocean.com/s/createticket<br /><br /><br />Thank you,<br />Team DigitalOcean</p>tag:status.digitalocean.com,2005:Incident/200227522024-02-19T15:50:00Z2024-02-19T18:22:46ZMultiple Products Impacted in BLR1<p><small>Feb <var data-var='date'>19</var>, <var data-var='time'>15:50</var> UTC</small><br><strong>Resolved</strong> - From 15:50 - 16:46 UTC, our team received customer reports of issues impacting multiple products in our BLR1 region, including the accessibility of Managed Databases and Managed Kubernetes clusters, as well as general network connectivity disruption. 
These issues may be related to a scheduled maintenance event in the region, per our status post linked below:<br /><br />https://status.digitalocean.com/incidents/5z0npmmmnc1h<br /><br />Our team continues to review customer reports and diagnose the impact related to this maintenance. In the meantime, we have rolled back the maintenance process and all services should now be responding normally. If you experience any further issues, please open a ticket with our Support team. Thank you for your patience and we apologize for any inconvenience.</p>tag:status.digitalocean.com,2005:Incident/200140662024-02-18T19:08:07Z2024-02-18T19:08:07ZManaged Kubernetes in NYC3<p><small>Feb <var data-var='date'>18</var>, <var data-var='time'>19:08</var> UTC</small><br><strong>Resolved</strong> - As of 17:47 UTC, our Engineering team has confirmed the full resolution of the problem impacting the Managed Kubernetes service in our NYC3 region. The Cilium pods inside the clusters should be functioning normally. <br /><br />If you continue to experience problems, please open a ticket with our Support team. <br /><br />Thank you for your patience and we apologize for the inconvenience.</p><p><small>Feb <var data-var='date'>18</var>, <var data-var='time'>18:09</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has deployed the fix for the issue with the Managed Kubernetes service, where users were experiencing network connectivity issues with Cilium pods being restarted inside the clusters. Cilium pods should now be functioning normally. <br /><br />We are monitoring the situation and will post another update once we confirm the fix resolves this incident.</p><p><small>Feb <var data-var='date'>18</var>, <var data-var='time'>16:57</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is investigating an issue with our Managed Kubernetes service in the NYC3 region. 
<br /><br />During this time users may experience network connectivity issues specifically with the Cilium pods inside their clusters.<br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>tag:status.digitalocean.com,2005:Incident/199314762024-02-07T20:54:58Z2024-02-07T20:54:58ZDroplet Resize Events<p><small>Feb <var data-var='date'> 7</var>, <var data-var='time'>20:54</var> UTC</small><br><strong>Resolved</strong> - As of 19:37 UTC, our Engineering team has confirmed the full resolution of the problem impacting the Droplet resize events in all regions. All the Droplet resize events should now be succeeding normally. <br /><br />If you continue to experience problems, please open a ticket with our Support team. <br /><br />Thank you for your patience and we apologize for the inconvenience.</p><p><small>Feb <var data-var='date'> 7</var>, <var data-var='time'>19:41</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has fully deployed the fix for the issue with Droplet resizes and is now monitoring the situation. Users can now retry Droplet resizes and should see them succeed.<br /><br />We'll post another update once we confirm the fix resolves this incident.</p><p><small>Feb <var data-var='date'> 7</var>, <var data-var='time'>15:49</var> UTC</small><br><strong>Identified</strong> - Our Engineering team has identified the root cause of the issue with failed Droplet resizes and a fix is in the process of being deployed. <br /><br />Users attempting to resize Droplets where the image for the Droplet has been deleted or retired (e.g. a user created a Droplet from a Snapshot, but later deleted that Snapshot) will see failures. 
All other resizes are succeeding normally.<br /><br />We'll post another update once the fix has completed deployment.</p><p><small>Feb <var data-var='date'> 7</var>, <var data-var='time'>15:32</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is investigating an uptick in failed Droplet resizes, beginning Feb 6, 20:57 UTC. <br /><br />During this time, some users may experience failures when attempting to resize Droplets in all regions. <br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>tag:status.digitalocean.com,2005:Incident/199274032024-02-07T07:21:30Z2024-02-07T07:21:30ZNetworking in NYC Regions<p><small>Feb <var data-var='date'> 7</var>, <var data-var='time'>07:21</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has confirmed the resolution of the issue impacting network latency in our NYC regions.<br /><br />The issues were a direct result of traffic congestion from our upstream providers, which has since been resolved. Users should no longer experience packet loss or increased latency while interacting with their resources in the NYC regions.<br /><br />We sincerely apologize and thank you for your patience as we worked through this issue. In case of any questions or concerns, please open a ticket with our Support team.</p><p><small>Feb <var data-var='date'> 7</var>, <var data-var='time'>04:03</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is investigating multiple reports of network latency when connecting to services in our NYC regions. 
During this time, users may experience intermittent packet loss or increased latency while interacting with their resources in the NYC regions.<br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>tag:status.digitalocean.com,2005:Incident/198960562024-02-06T22:21:17Z2024-02-06T22:21:17ZCore Infrastructure Maintenance<p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>22:21</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>17:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>20:23</var> UTC</small><br><strong>Scheduled</strong> - Start Time: 17:00 UTC Feb 6, 2024<br />End Time: 00:00 UTC Feb 7, 2024<br /><br />During the above time, our Engineering Team will be performing maintenance to fail over some internal databases from one cluster to another.<br /><br />Extensive testing has been conducted to ensure this maintenance will be successful and result in minimal impact to DigitalOcean users. The actual failover is estimated to take less than 3 seconds.<br /><br />Existing infrastructure, including Droplets and Droplet-based services, should continue running without issue. No network disruption to existing services is expected as part of this maintenance. However, multiple services depend on these databases, so customers may experience brief, transitory impact during the failover. 
The following actions may experience increased latency or failure rates during the maintenance period:<br /><br />- API calls to the DigitalOcean public API <br />- Events for Droplets and Droplet-based services such as create, delete, power on/off, resize, etc. <br />- Control operations through the DigitalOcean Cloud Control Panel <br /><br />Multiple teams will be engaged to keep downtime to a minimum and mitigate any impact that does occur. We’ll post updates here for any unexpected changes to this scheduled maintenance, as well as progress updates during the maintenance itself.<br /><br />If you have any questions or concerns, please reach out to the Support team from within your account.</p>tag:status.digitalocean.com,2005:Incident/199093672024-02-05T05:54:58Z2024-02-05T05:54:58ZSpaces CDN in SGP1<p><small>Feb <var data-var='date'> 5</var>, <var data-var='time'>05:54</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has confirmed the resolution of the issue impacting Spaces CDN in our SGP1 region.<br /><br />From 03:02 UTC to 05:15 UTC, users were experiencing errors for objects served over the CDN.<br /><br />We apologize for the inconvenience. If you have any questions or continue to experience issues, please reach out via a Support ticket on your account.</p><p><small>Feb <var data-var='date'> 5</var>, <var data-var='time'>05:10</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has applied a fix to mitigate the issue related to the Spaces CDN in the SGP1 region. Users should no longer experience errors for objects served over the CDN. 
<br /><br />We apologize for the inconvenience and will post another update once we're confident that the issue is fully resolved.</p><p><small>Feb <var data-var='date'> 5</var>, <var data-var='time'>04:52</var> UTC</small><br><strong>Identified</strong> - Our Engineering team has identified an issue with the Spaces CDN in our SGP1 region, beginning at 03:02 UTC, and is actively working on a fix. During this time, users may experience errors for objects served over the CDN. <br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>tag:status.digitalocean.com,2005:Incident/198761402024-01-31T15:26:00Z2024-01-31T15:26:00ZCustomer Support Ticket Portal<p><small>Jan <var data-var='date'>31</var>, <var data-var='time'>15:26</var> UTC</small><br><strong>Resolved</strong> - Our team has confirmed full resolution of the problem with our support portal at https://cloudsupport.digitalocean.com/s/ where customers were unable to create tickets with the 'Billing' ticket type. <br /><br />We sincerely apologize and thank you for your patience as we worked through this issue. <br /><br />In case of any questions or concerns, please open a ticket with our Support team.</p><p><small>Jan <var data-var='date'>31</var>, <var data-var='time'>15:12</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has identified the cause of the issue and implemented a fix to resolve the problem with the Support Portal. Users should now be able to create tickets with the 'Billing' ticket type in the Support Portal. <br /><br />We are monitoring the situation now and will post an update as soon as the issue is fully resolved.</p><p><small>Jan <var data-var='date'>31</var>, <var data-var='time'>14:26</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is investigating an issue where customers are unable to create support tickets of type "Billing" in our support portal at https://cloudsupport.digitalocean.com. 
<br /><br />As a temporary workaround, users may still contact us via the form here: https://www.digitalocean.com/company/contact/support<br /><br />We apologize for the inconvenience and will post an update as soon as further information is available.</p>tag:status.digitalocean.com,2005:Incident/198684632024-01-30T16:42:28Z2024-01-30T16:50:50ZSnapshots Page - Cloud Control Panel<p><small>Jan <var data-var='date'>30</var>, <var data-var='time'>16:42</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team identified and resolved an issue impacting the Snapshots page in our Cloud Control Panel. <br /><br />From 13:00 - 15:00 UTC, users attempting to navigate to https://cloud.digitalocean.com/images/snapshots (via Images -> Snapshots) were unable to access the page and instead saw an error page. <br /><br />We apologize for the inconvenience. If you have any questions or continue to experience issues, please reach out via a Support ticket on your account.</p>tag:status.digitalocean.com,2005:Incident/198597622024-01-29T18:36:45Z2024-01-29T19:53:18ZDNS Resolution in FRA1, AMS3 and LON1 Regions<p><small>Jan <var data-var='date'>29</var>, <var data-var='time'>18:36</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has confirmed the workaround fix is successful and all services should now be operating normally. We will now close this incident and work with the DNS provider separately on the root cause. <br /><br />We appreciate your patience throughout the process, and if you continue to experience problems, please open a ticket with our support team for further review.</p><p><small>Jan <var data-var='date'>29</var>, <var data-var='time'>18:18</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has identified the root cause of the issue with DNS resolution. 
DigitalOcean resolvers in use in FRA1, AMS3, and LON1 are unable to reach an upstream DNS provider, leaving a subset of domain names unresolvable from our resolvers. Our Engineering team is reaching out to the provider for assistance.<br /><br />In the meantime, our Engineering team has been able to implement a workaround fix by filtering some incorrectly announced network routes. At this time, we are seeing recovery, with hostname resolution returning to normal in the impacted regions. We'll continue to await an update from the DNS provider. We're now monitoring the workaround fix for stability and will post an update once we are confident it is successful.</p><p><small>Jan <var data-var='date'>29</var>, <var data-var='time'>17:27</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is currently investigating issues with DNS resolution in FRA1, AMS3, and LON1. During this time, customers may experience errors trying to resolve domain names from within DigitalOcean services in those regions, including Droplets and Droplet-based services, as well as App Platform. Additionally, App Platform builds may fail or experience delays. <br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>tag:status.digitalocean.com,2005:Incident/198237792024-01-25T02:56:12Z2024-01-25T02:56:12ZSnapshots are failing in SFO3 and NYC3<p><small>Jan <var data-var='date'>25</var>, <var data-var='time'>02:56</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has resolved the issue with snapshots taken by customers in the NYC3 and SFO3 regions. If you continue to experience problems, please open a ticket with our support team. 
Thank you for your patience and we apologize for any inconvenience.</p><p><small>Jan <var data-var='date'>25</var>, <var data-var='time'>01:50</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has implemented a fix to resolve the issue with snapshots taken by customers in the NYC3 and SFO3 regions and is monitoring the situation closely. <br /><br />We will post another update once we're confident that the issue is fully resolved.</p><p><small>Jan <var data-var='date'>25</var>, <var data-var='time'>00:26</var> UTC</small><br><strong>Identified</strong> - Our Engineering team has identified an issue with snapshots taken by customers in the NYC3 and SFO3 regions and is actively working on a fix. We will post an update as soon as additional information is available.</p>tag:status.digitalocean.com,2005:Incident/198057532024-01-23T22:50:40Z2024-02-02T01:32:01ZManaged Kubernetes Cluster in FRA1<p><small>Jan <var data-var='date'>23</var>, <var data-var='time'>22:50</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has completed mitigation efforts for the issue impacting Managed Kubernetes in the FRA1 region and we are marking this incident as Resolved. <br /><br />At this time, functionality for impacted clusters has been restored, but customers may need to reconfigure some Kubernetes resources. Customer Support is contacting impacted customers directly with further instructions. <br /><br />If you have any questions or concerns regarding this incident, please open a ticket with our support team.</p><p><small>Jan <var data-var='date'>23</var>, <var data-var='time'>18:44</var> UTC</small><br><strong>Update</strong> - Our Engineering team continues to work on mitigation efforts. An additional small bug has been discovered and remediated. About 10% of clusters have had accessibility restored and restoration efforts are ongoing. 
<br /><br />We will post another update as soon as we have new developments.<br /><br />Thank you for your patience and we apologize for any inconvenience.</p><p><small>Jan <var data-var='date'>23</var>, <var data-var='time'>15:12</var> UTC</small><br><strong>Identified</strong> - Our Engineering team has identified the cause of the issue with Managed Kubernetes clusters in the FRA1 region. 200 clusters are impacted by the issue and remain inaccessible to users at this time. <br /><br />Our Engineering team is engaged in remediating these clusters to restore accessibility. As soon as we have an estimated time to restoration, we will post an update.</p><p><small>Jan <var data-var='date'>23</var>, <var data-var='time'>13:02</var> UTC</small><br><strong>Investigating</strong> - As of 12:18 UTC, our Engineering team is investigating an issue with Kubernetes clusters in the FRA1 region. During this time, users may experience errors while communicating with their clusters in the FRA1 region. <br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>