All Systems Operational
Regions Operational
Global   Operational
AMS2   Operational
AMS3   Operational
BLR1   Operational
FRA1   Operational
LON1   Operational
NYC1   Operational
NYC2   Operational
NYC3   Operational
SFO1   Operational
SFO2   Operational
SGP1   Operational
TOR1   Operational
Services Operational
API   Operational
Block Storage   Operational
Cloud Control Panel   Operational
Cloud Firewall   Operational
Community   Operational
DNS   Operational
Droplets   Operational
Event Processing   Operational
Load Balancers   Operational
Monitoring   Operational
Networking   Operational
Spaces   Operational
Support Center   Operational
Past Incidents
Feb 24, 2018
Resolved - Our engineering team has resolved the issue with Spaces in our SGP1 region. Spaces should now be operating normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 24, 04:39 UTC
Update - At this time our engineering team is still monitoring the situation with Spaces in our SGP1 region. We appreciate your patience and will post an update as soon as the issue is fully resolved.
Feb 24, 03:40 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with Spaces in our SGP1 region and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 24, 02:54 UTC
Investigating - Our engineering team is investigating an issue with Spaces in our SGP1 region. During this time you may experience issues accessing Spaces. We apologize for the inconvenience and will share an update once we have more information.
Feb 24, 02:20 UTC
Feb 23, 2018
Resolved - Our engineering team has resolved the issue impacting Spaces in our NYC3 region. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 23, 18:54 UTC
Monitoring - Our engineering team has implemented a fix to address the issue impacting Spaces in our NYC3 region and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 23, 17:49 UTC
Update - Our engineering team continues to work toward a fix for the issue impacting Spaces in our NYC3 region. We appreciate your patience and will post an update as soon as additional information is available.
Feb 23, 17:05 UTC
Identified - Our engineering team has identified the cause of the issue impacting Spaces in our NYC3 region and is actively working on a fix. We will post an update as soon as additional information is available.
Feb 23, 15:56 UTC
Update - Our engineering team continues to investigate the issue in our NYC3 region causing delays with connecting to Spaces. We appreciate your patience as we work to resolve this situation.
Feb 23, 14:46 UTC
Update - Our engineering team continues to investigate the issue in our NYC3 region causing delays with connecting to Spaces. We appreciate your patience and will post an update as soon as additional information is available.
Feb 23, 14:05 UTC
Investigating - Our engineering team is investigating an issue with Spaces events in our NYC3 region. During this time, you may experience delays connecting to Spaces. We apologize for the inconvenience and will share an update once we have more information.
Feb 23, 13:28 UTC
Completed - We have completed the scheduled hypervisor reboots in NYC3 and SFO2. If you experience any issues with your Droplets, please open a ticket with our support team.
Feb 23, 16:41 UTC
In progress - We have begun the scheduled rebooting of the selected hypervisors in our NYC3 and SFO2 regions. We will share additional updates as necessary.
Feb 23, 15:04 UTC
Scheduled - We have scheduled reboots for a subset of hypervisors in our NYC3 and SFO2 regions and we have notified affected customers by email with a list of their Droplets that will be rebooted. These maintenances are part of our mitigation against the Meltdown and Spectre vulnerabilities, and more information is available on our blog:

https://blog.digitalocean.com/a-message-about-intel-security-findings/
Feb 23, 14:48 UTC
Resolved - Our engineering team has resolved the networking connectivity issues. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 23, 06:15 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with network connectivity and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 23, 05:55 UTC
Update - Our engineering team continues to work on resolving the networking connectivity issues impacting some users. We apologize for any inconvenience and will post additional information here as it becomes available.
Feb 23, 04:54 UTC
Identified - Our engineering team has identified network connectivity issues for some users. You may experience delays or failures with event processing. We are working to resolve the issue and will post additional updates as more information becomes available.
Feb 23, 03:55 UTC
Resolved - Our engineering team has resolved the issue with adding GSTIN numbers through the cloud. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 23, 02:44 UTC
Identified - Our engineering team has identified the cause of the issue related to adding GSTIN numbers through the cloud and is actively working on a fix. We will post an update as soon as additional information is available.
Feb 23, 02:14 UTC
Resolved - Our engineering team has resolved the issue with stalled create events and cloud API errors in all regions. If you continue to experience issues, please open a ticket with our support team. We apologize for the inconvenience.
Feb 23, 00:07 UTC
Update - Our engineering team continues to monitor an intermittent issue with stalled create events and increased cloud API errors in all regions. We apologize for the inconvenience and will post an update as soon as the issue is fully resolved.
Feb 22, 23:14 UTC
Monitoring - Our engineering team is monitoring an intermittent issue with stalled create events and increased cloud API errors in all regions. We apologize for the inconvenience and will post an update as soon as the issue is fully resolved.
Feb 22, 22:06 UTC
Feb 22, 2018
Completed - We have completed the scheduled hypervisor reboots in NYC3 and SFO2. If you experience any issues with your Droplets, please open a ticket with our support team.
Feb 22, 21:01 UTC
In progress - We have begun the scheduled rebooting of the selected hypervisors in our NYC3 and SFO2 regions. We will share additional updates as necessary.
Feb 22, 15:04 UTC
Scheduled - We have scheduled reboots for a subset of hypervisors in our NYC3 and SFO2 regions and we have notified affected customers by email with a list of their Droplets that will be rebooted. These maintenances are part of our mitigation against the Meltdown and Spectre vulnerabilities, and more information is available on our blog: 

https://blog.digitalocean.com/a-message-about-intel-security-findings/
Feb 22, 14:50 UTC
Resolved - Our engineering team has resolved the networking connectivity issues. If you continue to experience issues with event processing, please open a ticket with our support team. We apologize for any inconvenience.
Feb 22, 04:48 UTC
Monitoring - Our engineering team has implemented a fix to resolve the networking connectivity issues and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 22, 04:24 UTC
Investigating - Our engineering team has identified issues with network connectivity for some users. You may experience delays or failures with event processing. We will post additional updates as more information becomes available.
Feb 22, 03:48 UTC
Resolved - Our engineering team has resolved the issue with AMS3 Spaces. All Space related events should now be operating normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 22, 04:10 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with AMS3 Spaces functionality and is monitoring the situation for any additional problems. We will post an update as soon as the issue is fully resolved.
Feb 22, 03:49 UTC
Identified - Our engineering team has identified an issue with AMS3 Spaces functionality and is actively working on a fix. In the meantime, you may experience problems with all Spaces activity in the region. We will post an update as soon as additional information is available.
Feb 22, 03:08 UTC
Feb 21, 2018
Completed - We have completed the scheduled hypervisor reboots in NYC3 and BLR1. If you experience any issues with your Droplets, please open a ticket with our support team.
Feb 21, 21:25 UTC
In progress - We have completed the scheduled hypervisor reboots in BLR1. We are continuing to work on rebooting the remaining hypervisors in NYC3.
Feb 21, 19:01 UTC
Scheduled - We have scheduled reboots for a subset of hypervisors in our NYC3 and BLR1 regions and we have notified affected customers by email with a list of their Droplets that will be rebooted. These maintenances are part of our mitigation against the Meltdown and Spectre vulnerabilities, and more information is available on our blog:

https://blog.digitalocean.com/a-message-about-intel-security-findings/
Feb 21, 14:31 UTC
Resolved - Our engineering team has resolved the issue with NYC3 Spaces API availability. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 21, 19:20 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with NYC3 Spaces API availability and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 21, 18:57 UTC
Investigating - Our engineering team is investigating a recurrence of the issue with NYC3 Spaces API availability. We apologize for the inconvenience and will share an update once we have more information.
Feb 21, 17:42 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with NYC3 Spaces API availability and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 21, 17:33 UTC
Investigating - Our engineering team has identified the cause of the issue with NYC3 Spaces API availability and is actively working on a fix. We will post an update as soon as additional information is available.
Feb 21, 17:12 UTC
Postmortem - Read details
Feb 23, 19:33 UTC
Resolved - Our engineering team has resolved the issue with new Block Storage creates and volume attaches that occurred between 21:16 and 21:53 UTC in our NYC3 region. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience and thank you for your patience.
Feb 21, 01:20 UTC
Update - Our engineering team continues to monitor an issue with new Block Storage creates and volume attaches in our NYC3 region. If you are still experiencing problems you may need to reboot the Droplet. Please open a ticket with our support team for further assistance. We apologize for the inconvenience and will post an update as soon as the issue is fully resolved.
Feb 20, 23:39 UTC
Monitoring - Our engineering team is monitoring an issue with new Block Storage creates and volume attaches in our NYC3 region. During this time you may experience power failures or issues with new volume events. We apologize for the inconvenience and will share an update once we have more information.
Feb 20, 22:14 UTC
Investigating - Our engineering team is investigating an issue with new Block Storage creates and volume attaches in our NYC3 region. During this time you may experience power failures or issues with new volume events. We apologize for the inconvenience and will share an update once we have more information.
Feb 20, 22:07 UTC
Feb 20, 2018
Resolved - We have completed maintenance in the BLR1 region and have re-enabled Droplet creation. At this time, other Droplet events, such as snapshots or backups, should also proceed normally. If you experience any problems with event processing or event delays on your Droplet, please open a ticket with our support team.
Feb 20, 21:41 UTC
Identified - We are conducting maintenance in the BLR1 region that will cause some interruption in event processing. During this time, Droplet creation will be disabled and you may see delays with other Droplet actions such as power events, backups, and snapshots. We will share additional updates as we have more information.
Feb 20, 20:04 UTC
Completed - We have completed the scheduled hypervisor reboots in NYC3 and BLR1. If you experience any issues with your Droplets, please open a ticket with our support team.
Feb 20, 20:43 UTC
Update - We have completed the scheduled hypervisor reboots in BLR1. We are continuing to work on rebooting the remaining hypervisors in NYC3.
Feb 20, 19:51 UTC
In progress - We have begun the scheduled rebooting of the selected hypervisors in our NYC3 and BLR1 regions. We will share additional updates as necessary.
Feb 20, 15:00 UTC
Scheduled - We have scheduled reboots for a subset of hypervisors in our NYC3 and BLR1 regions and we have notified affected customers by email with a list of their Droplets that will be rebooted. These maintenances are part of our mitigation against the Meltdown and Spectre vulnerabilities, and more information is available on our blog:

https://blog.digitalocean.com/a-message-about-intel-security-findings/
Feb 20, 14:29 UTC
Feb 19, 2018
Completed - At this time the scheduled maintenance has been completed. Please feel free to contact support if you have any additional questions or concerns.
Feb 19, 22:14 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 19, 22:00 UTC
Scheduled - During the above window the network engineering team will be performing maintenance on border Internet routers in LON1. This is pro-active maintenance to ensure optimal performance of the networking infrastructure.

Expected Impact: We expect no noticeable impact; however, you may experience brief periods of increased latency and a small amount of packet loss lasting up to 2 minutes as traffic is re-routed.

Periodic updates will follow as work progresses. Do not hesitate to contact support if you have any additional questions or concerns.
Feb 19, 21:57 UTC
Resolved - Our engineering team has resolved the issue with Load Balancer Updates. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 19, 11:03 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with Load Balancer Updates and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 19, 10:49 UTC
Identified - Our engineering team has identified the cause of the issue with Load Balancer updates and is actively working on a fix. We will post an update as soon as additional information is available.
Feb 19, 09:27 UTC
Update - Our engineering team continues to investigate the issues with Load Balancer service updates. We appreciate your patience and will keep you posted as additional information becomes available.
Feb 19, 08:52 UTC
Investigating - Our engineering team is investigating reports of issues with the Load Balancer service. During this time, updates to Load Balancers may experience errors through the Cloud Panel and API.
Feb 19, 07:58 UTC
Feb 18, 2018
Resolved - Our engineering team has resolved the issue with delayed event processing in our BLR1 region. Event processing should now be operating normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 18, 12:27 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with delayed event processing in our BLR1 region and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 18, 11:58 UTC
Investigating - Our engineering team is investigating an issue with delayed event processing in our BLR1 region. During this time you may experience network slowness or connectivity issues. We apologize for the inconvenience and will share an update once we have more information.
Feb 18, 11:18 UTC
Feb 17, 2018
Resolved - Our engineering team has resolved the issue with create events in BLR1. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 17, 07:28 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with creates in BLR1 and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 17, 07:07 UTC
Investigating - Our engineering team is investigating an issue with create events in BLR1. Currently, create events are disabled in this region as we work to resolve this issue. We apologize for the inconvenience and will share an update once we have more information.
Feb 17, 05:39 UTC
Feb 16, 2018
Resolved - Our engineering team has resolved the issue with create events in SFO2. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 16, 21:01 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with creates in SFO2 and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 16, 20:33 UTC
Investigating - Our engineering team is investigating an issue with create events in SFO2. Currently create events are disabled in this region as this issue is addressed. We apologize for the inconvenience and will share an update once we have more information.
Feb 16, 20:25 UTC
Completed - We have completed the scheduled hypervisor reboots in NYC1 and NYC3. If you experience any issues with your Droplets, please open a ticket with our support team.
Feb 16, 18:16 UTC
In progress - We have begun the scheduled rebooting of the selected hypervisors in our NYC1 and NYC3 regions. We will share additional updates as necessary.
Feb 16, 14:56 UTC
Scheduled - We have scheduled reboots for a subset of hypervisors in our NYC1 and NYC3 regions and we have notified affected customers by email with a list of their Droplets that will be rebooted. These maintenances are part of our mitigation against the Meltdown and Spectre vulnerabilities, and more information is available on our blog:

https://blog.digitalocean.com/a-message-about-intel-security-findings/
Feb 16, 14:41 UTC
Resolved - Our engineering team has resolved the issue with Block Storage in our FRA1 region. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 16, 04:21 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with Block Storage in our FRA1 region and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 16, 03:46 UTC
Investigating - Our engineering team is investigating an issue with Block Storage in our FRA1 region. During this time you may experience degraded performance when using Block Storage. We apologize for the inconvenience and will share an update once we have more information.
Feb 16, 03:04 UTC
Feb 15, 2018
Completed - This status page was unintentionally posted earlier than the scheduled maintenance time. The maintenance will occur for a subset of users in AMS2 on 2018-02-20 at 22:00 UTC, and we will notify affected users by email. We apologize for the confusion.
Feb 15, 22:14 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 15, 22:00 UTC
Scheduled - The DigitalOcean Network Engineering team is performing necessary maintenance on networking hardware for a subset of physical machines in AMS2 to ensure the stability and reliability of our network.

We expect there to be up to 15 minutes of public networking unavailability, meaning that your Droplet will not be reachable from outside of the region. Droplet uptime will not be impacted, and networking will be restored as quickly as possible.

If you have any questions or concerns, please feel free to open up a support ticket.
Feb 15, 21:30 UTC
Completed - We have completed the scheduled hypervisor reboots in NYC1. If you experience any issues with your Droplets, please open a ticket with our support team.
Feb 15, 20:38 UTC
Update - We have completed the scheduled hypervisor reboots in NYC3. We are continuing to work on rebooting the remaining hypervisors in NYC1.
Feb 15, 19:26 UTC
In progress - We have begun the scheduled rebooting of the selected hypervisors in our NYC1 and NYC3 regions. We will share additional updates as necessary.
Feb 15, 15:01 UTC
Scheduled - We have scheduled reboots for a subset of hypervisors in our NYC1 and NYC3 regions and we have notified affected customers by email with a list of their Droplets that will be rebooted. These maintenances are part of our mitigation against the Meltdown and Spectre vulnerabilities, and more information is available on our blog:

https://blog.digitalocean.com/a-message-about-intel-security-findings/
Feb 15, 14:43 UTC
Resolved - Our engineering team has resolved the issue with event processing. If you continue to experience issues, please open a ticket with our support team. We apologize for any inconvenience.
Feb 15, 04:57 UTC
Monitoring - Our engineering team has diagnosed and resolved an issue with event processing and we are actively monitoring the situation. If you continue to experience issues, please open a ticket with our support team.
Feb 15, 04:01 UTC
Feb 14, 2018
Completed - We have completed the scheduled hypervisor reboots in NYC1. If you experience any issues with your Droplets, please open a ticket with our support team.
Feb 14, 20:31 UTC
In progress - We have begun the scheduled rebooting of the selected hypervisors in our NYC1 region. We will share additional updates as necessary.
Feb 14, 15:00 UTC
Scheduled - We have scheduled reboots for a subset of hypervisors in our NYC1 region and we have notified affected customers by email with a list of their Droplets that will be rebooted. These maintenances are part of our mitigation against the Meltdown and Spectre vulnerabilities, and more information is available on our blog:

https://blog.digitalocean.com/a-message-about-intel-security-findings/
Feb 14, 14:44 UTC
Resolved - Our engineering team has resolved the issue with failed Optimized Droplet create events. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 14, 15:53 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with failed optimized droplet creation in our NYC1 region and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 14, 14:54 UTC
Update - Our engineering team continues to investigate the issue with failed Optimized Droplet creation in our NYC1 region. We appreciate your patience and will post an update as soon as additional information is available.
Feb 14, 13:57 UTC
Investigating - Our engineering team is investigating an issue with failed Optimized Droplet creation in our NYC1 region. We apologize for the inconvenience and will share an update once we have more information.
Feb 14, 12:51 UTC
Resolved - Our engineering team has resolved the issue with event processing delays in NYC1. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 14, 15:48 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with Event Processing Delays in NYC1 and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 14, 15:45 UTC
Feb 13, 2018
Completed - We have completed the scheduled hypervisor reboots in NYC1. If you experience any issues with your Droplets, please open a ticket with our support team.
Feb 13, 20:31 UTC
In progress - We have begun the scheduled rebooting of the selected hypervisors in our NYC1 region. We will share additional updates as necessary.
Feb 13, 15:00 UTC
Scheduled - We have scheduled reboots for a subset of hypervisors in our NYC1 region and we have notified affected customers by email with a list of their Droplets that will be rebooted. These maintenances are part of our mitigation against the Meltdown and Spectre vulnerabilities, and more information is available on our blog:

https://blog.digitalocean.com/a-message-about-intel-security-findings/
Feb 13, 14:43 UTC
Feb 12, 2018
Completed - We have completed the scheduled hypervisor reboots in NYC1. If you experience any issues with your Droplets, please open a ticket with our support team.
Feb 12, 18:35 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 12, 15:00 UTC
Scheduled - We have scheduled reboots for a subset of hypervisors in our NYC1 region and we have notified affected customers by email with a list of their Droplets that will be rebooted. These maintenances are part of our mitigation against the Meltdown and Spectre vulnerabilities, and more information is available on our blog:

https://blog.digitalocean.com/a-message-about-intel-security-findings/
Feb 12, 14:43 UTC
Feb 11, 2018

No incidents reported.

Feb 10, 2018
Resolved - Our engineering team has resolved the issue with Droplet notifications being sent in our NYC1 region. Droplet notifications should now be operating normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Feb 10, 22:14 UTC
Monitoring - Our engineering team has implemented a fix to resolve the issue with Droplet notifications being sent in our NYC1 region and is monitoring the situation. We will post an update as soon as the issue is fully resolved.
Feb 10, 21:26 UTC
Identified - Our engineering team is investigating an issue with Droplet notifications being sent in our NYC1 region. During this time you may experience issues receiving password reset emails or new Droplet creation emails. We apologize for the inconvenience and will share an update once we have more information.
Feb 10, 20:35 UTC