Monitoring - Our Engineering team is still observing intermittent issues impacting network connectivity in the BLR1 region. Between 17:28 and 17:37 UTC, the team observed a major connectivity loss to this region and took immediate steps to reroute traffic via a different upstream provider to alleviate the impact.
The issue affecting BLR1 stems from a broader Internet disruption. Network accessibility in the region has since improved, and users should already see better performance when accessing Droplets and other services. We are closely monitoring the situation to ensure stability.
We appreciate your patience and will provide an update once the issue is fully confirmed as resolved.
Nov 08, 2025 - 18:07 UTC
Resolved -
From 13:50 to 14:15 UTC, our Engineering team observed an issue with an upstream provider impacting network connectivity in the BLR1 region.
During this time, users may have experienced an increase in latency or packet loss when accessing Droplets and Droplet-based services, such as Managed Kubernetes and Database Clusters, in the BLR1 region.
The impact has now subsided, and as of 14:15 UTC users should see improved performance when accessing Droplets and other services.
We apologize for the inconvenience. If you are still experiencing any problems or have additional questions, please open a support ticket within your account.
Nov 8, 15:15 UTC
Resolved -
This incident has been resolved.
Oct 29, 23:59 UTC
Update -
We are continuing to work on a fix for this issue.
Oct 29, 21:12 UTC
Identified -
Our Engineering team has identified the cause of the issue with the deployment of Gradient AI Platform Agents in VPCs and is actively working on a fix.
We will post an update as soon as the fix has rolled out or there is additional information to share.
Oct 29, 21:09 UTC
Investigating -
As of 18:50 UTC, our Engineering team is investigating reports of agent creation issues impacting customers using a VPC on the Gradient AI Platform. At this point, affected users may see the agent creation process stuck on "Waiting for Deployment." We apologize for the inconvenience and will share an update once we have more information.
Oct 29, 20:26 UTC
Resolved -
Our Engineering team has resolved the issue affecting Garbage Collection in container registries, and all services are operating normally.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Oct 28, 19:58 UTC
Monitoring -
Our Engineering team has implemented a fix for the Garbage Collection issue affecting container registries, and customers should no longer experience Garbage Collection jobs failing or getting stuck.
We are currently monitoring the situation and will post an update as soon as the issue is fully resolved. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Oct 28, 19:06 UTC
Identified -
Our Engineering team has identified the root cause of the issue affecting Garbage Collection in container registries.
A fix is being implemented to resolve failures and stuck operations. We will provide an update once the mitigation has been deployed.
We apologize for the inconvenience.
Oct 28, 14:57 UTC
Investigating -
Our Engineering team is investigating an issue with Garbage Collection in container registries. At this time, users may see Garbage Collection jobs failing or getting stuck.
We apologize for the inconvenience and will share an update once we have more information.
Oct 28, 11:30 UTC