Some systems are experiencing issues.
Real time updates from the ATLAS Media Group portfolio
Last updated 2025-11-18 15:54:15
The main website for the ATLAS Media Group Team
Last updated 2025-11-18 15:54:15
The ATLAS Media Group Ltd. Blog Platform
Last updated 2025-11-18 15:54:16
Service that hosts static content across a number of our parked and inactive domains.
Last updated 2025-11-18 15:54:14
The primary billing panel for Superior Networks clients.
Last updated 2025-11-18 15:54:13
The VPS control panel for Superior Networks customers.
Last updated 2025-11-18 15:54:12
The main Superior Networks website.
Last updated 2025-12-31 00:37:32
Our servers in the France region. These servers are: Eggplant
Last updated 2025-02-16 19:03:50
Our new London Region for Superior Networks
Last updated 2025-02-16 19:03:50
The Superior Networks German region
Last updated 2026-01-11 14:33:05
The mastodonapp.uk site
Last updated 2026-01-04 11:11:50
The database service storing content for the site
Last updated 2026-01-11 14:33:04
Handles the processing of content, both internally on our servers and when communicating with remote servers
Last updated 2025-12-31 03:41:29
Storage for all videos, pictures and other content on the site
Last updated 2026-01-04 11:11:47
Used to handle delivery of e-mails to users
Last updated 2026-01-04 11:11:51
Search capability for MastodonApp.UK
Last updated 2026-01-12 19:05:25
The Universeodon public website
Last updated 2026-01-12 19:05:24
Universeodon's Database
Last updated 2026-01-17 14:31:14
The Universeodon Queue and content processing service.
Last updated 2026-01-09 10:33:34
Universeodon Media Storage
Last updated 2026-01-09 10:33:42
Universeodon's advanced full-text search service.
Last updated 2026-01-09 10:33:59
Universeodon e-mail service
Last updated 2025-11-18 13:08:05
The ActivityPub relay hosted on relay.universeodon.com
Last updated 2024-09-02 18:20:52
Located in the London Region | Legacy Server
Last updated 2025-12-31 00:37:34
Located in the France Region
Last updated 2024-08-26 22:02:14
Located in the London Region
We will be rebuilding our full-text search infrastructure, which will result in a short period where full-text search results are unavailable or limited.
The current image handling for both MastodonAppUK and Universeodon depends on ImageMagick, which is being deprecated in a future version of Mastodon in favour of libvips as the new processing engine. As part of this maintenance we will need to install and configure libvips and switch Mastodon over to using it.
We do not expect any impact to our community during this time, and the maintenance will be performed in a way that minimises any risk of disruption.
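Mastodon itself is a Ruby application, so the switch is ultimately a Mastodon configuration change; still, ahead of a cutover like this it is worth confirming that libvips is installed and working on the host. A minimal preflight sketch in Python via the pyvips bindings (the script, output path, and sample image are our own illustration, not part of Mastodon):

```python
# Preflight check: confirm libvips is present and can decode + thumbnail an image.
# Requires the libvips shared library on the host and `pip install pyvips`.
import sys
import pyvips

def check_libvips(sample_path: str) -> None:
    major, minor, micro = (pyvips.version(i) for i in range(3))
    print(f"libvips version: {major}.{minor}.{micro}")
    thumb = pyvips.Image.thumbnail(sample_path, 320)  # 320px-wide preview
    thumb.write_to_file("/tmp/libvips_preflight.webp")
    print(f"thumbnail OK: {thumb.width}x{thumb.height}")

if __name__ == "__main__":
    check_libvips(sys.argv[1] if len(sys.argv) > 1 else "sample.jpg")
```

If this runs cleanly for the media types the sites serve (JPEG, PNG, WebP, animated GIF), the host is ready for Mastodon to be pointed at libvips.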
We will be performing maintenance to move our database server from the legacy server cluster to our new server cluster. As part of this we will need to take the site offline for a short period to facilitate the switchover to the new infrastructure.
We expect the outage itself to last no more than approximately 45 minutes, and we will do what we can to minimise the disruption.
Content processing is now fully operational and working as expected.
We are once again seeing performance issues at the database layer of MastodonAppUK, which is also disrupting our ability to process content on Universeodon.com. We are working to mitigate the impact now.
We have managed to slightly increase our ingress capacity for content processing. All of our queues are currently running approximately 8 minutes behind live, with the exception of ingress, which is around 21 hours behind. We expect this queue to take a couple of hours to fully clear and will continue to monitor.
We have increased capacity on our database infrastructure but are still hitting bottlenecks. As a result we have scaled back our processing workers to prioritise the default content queue, keeping a small amount of capacity on the ingress queue to catch it up. All queues other than ingress are currently running around 30 minutes behind, with ingress around 22 hours behind. We will continue to adjust the scaling to keep the site online and operational while we process all of this content.
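For context on the lag figures quoted above: Mastodon's background jobs run on Sidekiq, which keeps each pending queue as a Redis list of JSON payloads carrying an enqueued_at timestamp, so a queue's lag is roughly "now minus the enqueued_at of its oldest pending job". A minimal sketch of that measurement, assuming direct access to the instance's Redis (the URL and queue names here are illustrative, and enqueued_at is assumed to be epoch seconds as in the Sidekiq versions Mastodon currently ships):

```python
# Approximate Sidekiq queue lag the way Sidekiq::Queue#latency does:
# the oldest pending job sits at the tail of the Redis list "queue:<name>".
import json
import time
import redis

r = redis.Redis.from_url("redis://localhost:6379/0")  # adjust to your setup

for name in ("default", "ingress", "push", "pull"):
    key = f"queue:{name}"
    oldest = r.lrange(key, -1, -1)  # tail of the list = oldest job
    if not oldest:
        print(f"{name:8s} empty")
        continue
    enqueued_at = json.loads(oldest[0])["enqueued_at"]  # epoch seconds
    lag_min = (time.time() - enqueued_at) / 60
    print(f"{name:8s} depth={r.llen(key):>8}  lag={lag_min:.1f} min")
```

Prioritising the default queue simply means giving its workers more of the available capacity, so its lag stays in minutes while ingress drains slowly in the background.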
We have identified a capacity bottleneck on the router that serves part of our database infrastructure. We are scaling this up now and expect it to alleviate some of the bottleneck issues.
It appears that content processing has put too much pressure on our database infrastructure, causing major outages across the site. We are scaling back content processing to restore access to the site.
We have powered on our legacy content processing server, which is starting to work through the backlog. It looks like around midnight on 29 August 2024 the new content processing services had a major failure, resulting in the vast majority of content processing jobs failing to execute. We currently have a backlog of a little over 1.1 million events, which is likely to grow as processing generates additional jobs. I suspect it will take a few hours to get caught up. We will monitor the infrastructure and queues over the coming hours to ensure full recovery.
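As a back-of-the-envelope check on that "few hours" estimate: catch-up time is just the backlog divided by the net drain rate (jobs completed per second minus new jobs arriving per second). A minimal sketch, with the throughput figures purely illustrative:

```python
# Rough ETA for clearing a job backlog: backlog / (process rate - arrival rate).
backlog = 1_100_000        # pending jobs, per the update above
process_rate = 150.0       # jobs/sec the workers complete (illustrative)
arrival_rate = 60.0        # jobs/sec of new work arriving (illustrative)

net_drain = process_rate - arrival_rate
hours = backlog / net_drain / 3600
print(f"estimated catch-up time: {hours:.1f} hours")  # ~3.4 hours at these rates
```

The estimate is sensitive to the arrival rate: if new work arrives nearly as fast as it is processed, the ETA stretches out quickly, which is why the queues are monitored rather than left to drain unattended.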
Our content processing service has experienced a catastrophic failure, resulting in feeds not being updated. We are queueing all of these actions in the backlog, and as soon as we can remediate the issue we will start catching up on the content that needs processing.
Issue was previously resolved.
New infrastructure has been deployed. We will monitor queue lengths over the next 24 hours to determine whether we need to deploy further infrastructure to replace our original queue service, or whether tuning is needed to make better use of the new infrastructure.
There is currently a major outage on our content processing and queue service, due to ongoing maintenance which has not gone to plan. We are deploying additional infrastructure to minimise and mitigate the disruption.