ATLAS Media Group Status Page

Some systems are experiencing issues.

Real-time updates from the ATLAS Media Group portfolio

  • Main Website

    Last updated 2025-11-18 15:54:15

    Operational

    The main website for the ATLAS Media Group team.

  • Blog

    Last updated 2025-11-18 15:54:15

    Operational

    The ATLAS Media Group Ltd. Blog Platform

  • Domain Parking Service

    Last updated 2025-11-18 15:54:16

    Operational

    Service which hosts static content across a number of our parked and inactive domains.

2 Incidents
  • Billing Panel

    Last updated 2026-01-21 11:57:49

    Operational

    The primary billing panel for Superior Networks clients.

  • VPS Control Panel

    Last updated 2025-11-18 15:54:13

    The VPS Control Panel for Superior Networks customers.

  • Main Website

    Last updated 2025-11-18 15:54:12

    Operational

    The main Superior Networks website.

  • French Region

    Last updated 2025-12-31 00:37:32

    Our servers in the French location. These servers are: Eggplant

  • London Region

    Last updated 2025-02-16 19:03:50

    Operational

    Our new London Region for Superior Networks

  • Germany Region

    Last updated 2025-02-16 19:03:50

    Operational

    The Superior Networks German region

12 Incidents
  • MastodonApp.UK Website

    Last updated 2026-02-05 18:59:06

    The mastodonapp.uk site

  • MastodonAppUK Database Service

    Last updated 2026-01-04 11:11:50

    The database service storing content for MastodonApp.UK

  • MastodonAppUK Queue & Content Processing Service

    Last updated 2026-01-11 14:33:04

    Handles the processing of content both internally on our server and when communicating with remote servers

  • MastodonAppUK Media Storage

    Last updated 2025-12-31 03:41:29

    Operational

    Storage for all videos, pictures and other content on the site

  • MastodonAppUK E-Mail Service

    Last updated 2026-01-04 11:11:47

    Operational

    Used to handle delivery of e-mails to users

  • MastodonAppUK Advanced / Full Text Search

    Last updated 2026-01-04 11:11:51

    Search capability for MastodonApp.UK

4 Incidents
  • Universeodon Website

    Last updated 2026-02-05 18:59:07

    The Universeodon public website

  • Universeodon Database

    Last updated 2026-01-12 19:05:24

    Operational

    Universeodon's Database

  • Universeodon Queue & Content Processing Service

    Last updated 2026-01-17 14:31:14

    The Universeodon Queue and content processing service.

  • Universeodon Media Storage

    Last updated 2026-01-09 10:33:34

    Operational

    Universeodon Media Storage

  • Universeodon Advanced Search

    Last updated 2026-01-09 10:33:42

    Universeodon's Advanced / full text search service.

  • Universeodon E-Mail Service

    Last updated 2026-01-09 10:33:59

    Operational

    Universeodon e-mail service

  • Universeodon Relay

    Last updated 2025-11-18 13:08:05

    The ActivityPub relay hosted on relay.universeodon.com

1 Incident
  • DB04 Server

    Last updated 2024-09-02 18:20:52

    Operational

    Located in the London Region | Legacy Server

  • DB05 Server

    Last updated 2025-12-31 00:37:34

    Located in the France Region

  • DB06 Server

    Last updated 2024-08-26 22:02:14

    Operational

    Located in the London Region

Planned Maintenance

  • Universeodon - Full Text Search Migration

    5 days from now —
    Affected Components: Universeodon Advanced Search
    Upcoming

We will be rebuilding our full-text search infrastructure, which will result in a short period during which full-text search results will be unavailable or limited.

Past Incidents

No incidents reported.


Fixed

1 year ago —

Service restored.

1 year ago —

We have identified an issue whereby our Nexus server is failing to automatically renew certificates as expected; the site is currently offline as a result.

Fixed

1 year ago —

Full service has now been restored.

Identified

1 year ago —

Storage has been restarted. Some services have come back online; however, we are still working to bring all services online.

1 year ago —

We are investigating an issue impacting multiple services due to shared storage becoming unavailable. We will update as soon as we know more.

Fixed

1 year ago —

We are back fully online.

Identified

1 year ago —

We have identified the issue on both the old and new hosts, and the original host is now fully operational. There was an additional issue when we attempted to migrate clients onto a new host; we are reversing those migrations now.

1 year ago —

We are working to resolve issues with one of the nodes in our French region. We have started moving clients onto a new operational node; however, our VPSCP is having issues with the migration. We have engaged the vendor that owns the VPSCP software and are awaiting further updates from them so we can resolve this issue.

Fixed

1 year ago —

The migration has fully completed and we're back up and running. We'll continue to keep an eye on things, but everything appears to be operational at this time.

Watching

1 year ago —

The migration is now underway; at current transfer speeds, we estimate full service will be restored in around 2 hours.

1 year ago —

Starting at 19:30 BST, we will begin migrating the Universeodon.com database server to new infrastructure, enabling us to use additional capacity that has been provisioned.

Fixed

1 year ago —

Content processing is fully restored and operational.

Identified

1 year ago —

We have identified the root cause as a misconfiguration on one of our core routers. We are actively correcting the configuration now and hope to bring the content processing back online shortly.

1 year ago —

We are currently experiencing a major outage on our content processing services. We are looking to restore service ASAP and are actively investigating this issue.

Fixed

1 year ago —

Content processing is now fully operational and working as expected.

Watching

1 year ago —

We are once again seeing performance issues in the database layer of MastodonAppUK, which is also disrupting our ability to process content on Universeodon.com. We are working to mitigate the impact now.

Watching

1 year ago —

We have managed to slightly increase our ingress capacity for content processing. We're currently running approximately 8 minutes behind live on all queues, with the exception of ingress, which is around 21 hours behind. We expect this queue to take a couple of hours to fully clear and will continue to monitor.

Watching

1 year ago —

We have increased capacity on our database infrastructure but are still hitting bottlenecks. As a result, we've scaled back our processing workers to prioritise the default content queue, keeping a small amount of capacity on the ingress queue to help it catch up. All queues other than ingress are currently running around 30 minutes behind, with ingress around 22 hours behind. We will continue to adjust the scaling to keep the site online and operational while we process all of this content.

Identified

1 year ago —

We have identified a capacity bottleneck on the router that serves part of our database infrastructure. We are scaling this up now and hope it will alleviate some of the bottleneck issues.

Identified

1 year ago —

It appears that content processing has put too much pressure on our database infrastructure, causing major outages across the site. We are scaling back content processing to restore site access.

Watching

1 year ago —

We have powered on our legacy content processing server, which is starting to work through the backlog. It looks like the new content processing services had a major failure around midnight on 29 August 2024, resulting in the vast majority of content processing jobs failing to execute. We currently have a backlog of a little over 1.1 million events, which is likely to grow as processing generates additional jobs. We expect it will take a few hours to catch up, and we will monitor the infrastructure and queues over the coming hours to ensure full recovery.

1 year ago —

Our content processing service has experienced a catastrophic failure, resulting in feeds not being updated. We are actively queueing all of these actions, and as soon as we remediate the issue we will start catching up on the content that needs processing.
