ATLAS Media Group Status Page

Some systems are experiencing issues.

Real-time updates from the ATLAS Media Group portfolio

  • Main Website

    Last updated 2025-11-18 15:54:15

    Operational

    The main website for the ATLAS Media Group Team

  • Blog

    Last updated 2025-11-18 15:54:15

    Operational

    The ATLAS Media Group Ltd. Blog Platform

  • Domain Parking Service

    Last updated 2025-11-18 15:54:16

    Operational

    Service which hosts static content across a number of our parked and inactive domains.

2 Incidents
  • Billing Panel

    Last updated 2025-11-18 15:54:14

    Operational

    The primary billing panel for Superior Networks clients.

  • VPS Control Panel

    Last updated 2025-11-18 15:54:13

    The VPS control panel for Superior Networks customers.

  • Main Website

    Last updated 2025-11-18 15:54:12

    Operational

    The main Superior Networks website.

  • French Region

    Last updated 2025-12-31 00:37:32

    Our servers in the French location. These servers are: Eggplant

  • London Region

    Last updated 2025-02-16 19:03:50

    Operational

    Our new London Region for Superior Networks

  • Germany Region

    Last updated 2025-02-16 19:03:50

    Operational

    The Superior Networks German region

11 Incidents
  • MastodonApp.UK Website

    Last updated 2026-01-11 14:33:05

    The mastodonapp.uk site

  • MastodonAppUK Database Service

    Last updated 2026-01-04 11:11:50

    The database service storing content for the service

  • MastodonAppUK Queue & Content Processing Service

    Last updated 2026-01-11 14:33:04

    Handles the processing of content, both internally on our server and when communicating with remote servers

  • MastodonAppUK Media Storage

    Last updated 2025-12-31 03:41:29

    Operational

    Storage for all videos, pictures and other content on the site

  • MastodonAppUK E-Mail Service

    Last updated 2026-01-04 11:11:47

    Operational

    Used to handle delivery of e-mails to users

  • MastodonAppUK Advanced / Full Text Search

    Last updated 2026-01-04 11:11:51

    Search capability for MastodonApp.UK

4 Incidents
  • Universeodon Website

    Last updated 2026-01-12 19:05:25

    The Universeodon public website

  • Universeodon Database

    Last updated 2026-01-12 19:05:24

    Operational

    Universeodon's Database

  • Universeodon Queue & Content Processing Service

    Last updated 2026-01-17 14:31:14

    The Universeodon Queue and content processing service.

  • Universeodon Media Storage

    Last updated 2026-01-09 10:33:34

    Operational

    Universeodon Media Storage

  • Universeodon Advanced Search

    Last updated 2026-01-09 10:33:42

    Universeodon's Advanced / full text search service.

  • Universeodon E-Mail Service

    Last updated 2026-01-09 10:33:59

    Operational

    Universeodon e-mail service

  • Universeodon Relay

    Last updated 2025-11-18 13:08:05

    The ActivityPub relay hosted on relay.universeodon.com

1 Incident
  • DB04 Server

    Last updated 2024-09-02 18:20:52

    Operational

    Located in the London Region | Legacy Server

  • DB05 Server

    Last updated 2025-12-31 00:37:34

    Located in the France Region

  • DB06 Server

    Last updated 2024-08-26 22:02:14

    Operational

    Located in the London Region

Planned Maintenance

  • Universeodon - Full Text Search Migration

    1 day ago —
    Affected Components: Universeodon Advanced Search
    In Progress

    We will be rebuilding our full-text search infrastructure, which will result in a short period during which full-text search results will be unavailable or limited.

  • MastodonAppUK & Universeodon - LibVips Switch

    3 days from now —
    Affected Components: MastodonApp.UK Website, MastodonAppUK Queue & Content Processing Service, Universeodon Website and Universeodon Queue & Content Processing Service
    Upcoming

    The current image handling for both MastodonAppUK and Universeodon depends on ImageMagick, which is being deprecated in a future version of Mastodon in favour of LibVips as the new image-processing engine. As part of this maintenance we will need to install and configure LibVips and switch Mastodon over to using it; a rough sketch of the expected configuration change is included after this list.

    We do not expect any impact to our community during this time, and the maintenance will be performed in a way that minimises the risk of disruption.

  • MastodonAppUK - Database Migration

    4 days from now —
    Affected Components: MastodonAppUK Database Service, MastodonApp.UK Website and MastodonAppUK Queue & Content Processing Service
    Upcoming

    We will be performing maintenance to switch our database server from the legacy server cluster to our new server cluster. As part of this we will need to take the site offline for a short time to facilitate the switch-over to our new server infrastructure.

    We expect the outage itself to last no more than approx. 45 minutes, and we will do what we can to minimise the disruption.
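
Configuration sketch for the LibVips switch, for reference only: this is an assumption based on how recent upstream Mastodon releases expose their libvips support, not a confirmed runbook for our environment. The switch is expected to amount to installing the libvips library on each host and enabling a single flag in the Mastodon configuration before restarting the services.

    # Assumed steps; the package name and flag below come from upstream Mastodon
    # documentation and may differ for our specific setup.
    #
    #   apt install libvips42          # install the libvips runtime on the host
    #
    # Then enable libvips in .env.production and restart the Mastodon services:
    MASTODON_USE_LIBVIPS=true

With the flag enabled, newly uploaded media should be processed by libvips rather than ImageMagick; previously processed media is not expected to be re-encoded, which is consistent with the expectation of no community impact.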

Past Incidents

No incidents reported.

No incidents reported.

Fixed

Fixed

1 year ago —

Resolved

Watching

1 year ago —

The data migration has now completed. We're starting services back up and monitoring to see if this has resolved our issues. Further updates to follow.

Identified

1 year ago —

We've taken all services offline and are starting the storage migration now.

Identified

1 year ago —

From 15:30 UTC today (6 November) we will be taking Universeodon offline to migrate the database storage, with the expectation that this should resolve the performance issues we've started to see. This will then allow us to scale up our content processing and web tier to keep up with demand, and should ultimately resolve a lot of the issues folks have been seeing. Apologies for needing to take the site down during such an active period in the world, but we don't currently have a better option that would give the Universeodon community a good experience.

1 year ago —

We have identified major performance issues with the Universeodon database, which are currently causing upstream performance issues for both the website and content processing. Due to the ongoing US election coverage, we will delay a full resolution of this issue, which would require a multi-hour outage of the site and our content processing. In the meantime we will continue to monitor the situation, balancing website performance and access against our other back-end resources.

Fixed

Fixed

1 year ago —

The database storage migration fully resolved the issue.

Investigating

1 year ago —

Our content processing is once again falling behind, with queues running between 30 minutes and over 1 hour behind. Ingress is currently at least 1 hour behind. I'm working to adjust scaling; however, a significantly higher load than usual, combined with our known database issues, is resulting in poor performance all around.

Watching

1 year ago —

All feeds have now caught up. We will continue to observe live for a short time before we sign off for the night, and will check back in tomorrow morning to ensure nothing falls over for too long.

Watching

1 year ago —

All of our content feeds, with the exception of ingress, have once again caught up fully. We're now running approx. 30 mins behind real time on our ingress queue, which continues to shrink. I'm hoping that within the next 20-30 mins we will have that queue down to real time as well and can get content processing back to its normal state.

Watching

1 year ago —

We're now down to approx. 55 mins of backlog on our ingress pipeline and have cleared a large amount of the backlog. We're currently seeing around a 10 min delay on processing other content, which may mean you have a slightly outdated feed. This will resolve itself shortly; as soon as our ingress pipeline is cleared, the freed compute capacity will ensure any other content in the queue is processed, which should keep things running more smoothly tonight.

Watching

1 year ago —

We are continuing to see a steady reduction in our ingress queue and a roughly parallel increase in our other queues as posts are processed into timelines and other parts of the application. We're currently still tracking around 1 hour behind on ingress, and we're seeing a small spike where some timelines and other basic functionality are between 5 and 10 mins behind real time. Often a refresh will fix this; otherwise it resolves once the queues catch up. We're continuing to adjust the scaling of the service behind the scenes to account for these fluctuations.

Watching

1 year ago —

We are monitoring the status of the queues, and current data suggests the balance we have is slowly burning down our ingress queue while keeping our other queues near real time (anywhere up to 2-3 mins of lag currently). We will continue to monitor and update as appropriate.

Investigating

1 year ago —

I've managed to reset our content processing scale back to a sensible default and have cleared all queues except ingress, which is currently around an hour behind based on the oldest item in the queue. I'm now going to attempt some careful scaling to bring capacity back up without breaking too much.

Investigating

1 year ago —

We're starting to see major disruption to the site, likely a result of an increase in traffic to the website itself further stressing our capacity. We've paused content processing for the moment while we work out how to scale it back up more safely without crashing things further.

Identified

1 year ago —

We have reverted our additional server, as we're finding the capacity constraints on our database too great to allow us to scale beyond where we already were at this time. We're looking to restore full service as a matter of critical urgency.

Identified

1 year ago —

I'm currently trying to increase our database's capacity without needing to take the entire service offline for an extended amount of time. The hope is that this may unlock additional capacity for the second content processing service, which is currently unable to connect because we have saturated the capacity of our database. Updates to follow.

Identified

1 year ago —

We have confirmed that part of the impact of this issue relates to ongoing performance issues on our database. We are currently building out an additional content processing server to attempt to pick up some of the slack and reduce the backlog. We're still tracking an approx. 2-hour lag on new content being ingested through our queues, with all other queues currently remaining near real time.

1 year ago —

We are currently recovering from an incident with our content processing service. This is resulting in a delay to newly ingested content and intermittent delays to other content. We're continuing to rebalance our content processing capacity across the various types of content we need to process, in order to expedite getting this content onto the site.

No incidents reported.

No incidents reported.

No incidents reported.

Fixed

1 year ago —

We are back fully online.

Identified

1 year ago —

The site is now mostly back online; however, we seem to be seeing huge spikes in traffic which are significantly disrupting the site. We're working to get things back up and running, but have limited capacity to scale up further than we already have.

Investigating

1 year ago —

In attempting to restore content processing it appears we have caused major disruption to the entire site. We're working to get things back up and running.

1 year ago —

It appears that at around midday UK time our content processing services crashed due to an ongoing bug in the processing engine. I've got the services back online and we're working through the content backlog.