Some systems are experiencing issues.
Our ingest pipeline is now down to around 8 hours behind live. All other queues appear to be operating as expected.
We are seeing a high error rate when interacting with our object storage endpoint, particularly when generating preview images for links. As a result, a large number of jobs will likely need to be manually re-run.
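For reference, a minimal sketch of how those failed preview jobs could be re-driven in bulk; the queue client, the failure log format, and the error class name are hypothetical placeholders, not our actual tooling.

```python
# Sketch only: re-enqueue preview-generation jobs that failed against object
# storage. `queue` and the failure-log schema are assumed interfaces.
import json

def requeue_failed_previews(failed_jobs_path, queue):
    """Re-enqueue link-preview jobs that failed while talking to object storage."""
    requeued = 0
    with open(failed_jobs_path) as f:
        for line in f:
            job = json.loads(line)
            # Only re-drive jobs whose failure was an object-storage error
            # (error class name is illustrative).
            if job.get("error_class") == "object_storage_error":
                queue.enqueue("generate_link_preview", job["link_id"])
                requeued += 1
    return requeued
```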
We have restarted our content processing services, which appear to have hung without recording any errors. We will continue to monitor the queues while they drain. As of this update, all queues except ingress are running in real time; ingress is tracking approximately 13 hours behind.
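A rough sketch of the kind of check used to track how far a queue is running behind live; how our queues expose the oldest unprocessed item, and the thresholds shown, are assumptions for illustration.

```python
# Sketch only: poll queue lag until the queue is effectively back to real time.
import time
from datetime import datetime, timezone

def queue_lag_seconds(oldest_unprocessed_at: datetime) -> float:
    """Lag = now minus the enqueue time of the oldest unprocessed item (UTC-aware)."""
    return (datetime.now(timezone.utc) - oldest_unprocessed_at).total_seconds()

def watch_queue(get_oldest_unprocessed_at, interval_s=300, realtime_threshold_s=60):
    """Report lag every few minutes and stop once it falls below the threshold."""
    while True:
        lag = queue_lag_seconds(get_oldest_unprocessed_at())
        print(f"queue lag: {lag / 3600:.1f} h behind live")
        if lag <= realtime_threshold_s:
            break
        time.sleep(interval_s)
```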
We are investigating reports of the queue service and content processing service being unresponsive.
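An illustrative liveness probe of the sort used to confirm such reports; the internal health-check URLs below are hypothetical, not our real endpoints.

```python
# Sketch only: check whether each service answers its health endpoint in time.
import urllib.request

SERVICES = {
    "queue-service": "http://queue-service.internal/healthz",            # hypothetical URL
    "content-processing": "http://content-processing.internal/healthz",  # hypothetical URL
}

def check_responsiveness(timeout_s=5):
    """Return a map of service name -> True if it responded with HTTP 200."""
    results = {}
    for name, url in SERVICES.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                results[name] = resp.status == 200
        except Exception:
            results[name] = False
    return results
```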
Incident UUID 54021e29-eff3-4aec-bacc-d5e411e6ecdb