Resolved

We've now resolved the incident, and the backlog has been fully processed. Any jobs that failed during the past few hours will need to be restarted manually, or you can wait for their next scheduled execution. We apologize for any inconvenience caused and thank you for your patience.

Queued workloads continue to be processed from the backlog. Failures are still expected at this time. We will give another update in about 30 minutes.

Recovering

The fix has been rolled out, and workloads that were queued in the meantime have started processing again. Due to the large backlog, failures are still expected at this time. We will give another update in about 30 minutes.

We are still working on the rollout of the fix. Data ingestion on us-1 continues to be impacted by this issue. We will give another update in about 30 minutes.

Rollout of the fix on us-1 is ongoing. We will give another update in about 30 minutes.

Identified

We have identified the issue. A fix is currently being implemented and rolled out. We will give another update in about 30 minutes.

The ongoing issues are impacting data ingestion and Data Model loads. We will give another update in about 30 minutes.

Investigating

The issue has resurfaced and we are investigating.

The services are up again and are currently processing the workloads that were queued in the meantime.

Recovering

We've fixed the core issue and are waiting for services to recover.

Identified

We've confirmed there is a problem and are working to resolve it.

Investigating

Some customers might currently experience issues with running Data Jobs. We have already identified the root cause and are applying a fix. We apologize for any inconvenience caused.

Began at:

Affected components
  • US-1