Fluxer - Notice history

Fluxer API (api.fluxer.app) experiencing degraded performance

Website (fluxer.app) - Operational

Uptime: Feb 2026 · 99.94% · Mar 2026 · 100.0% · Apr 2026 · 99.95%

Fluxer API (api.fluxer.app) - Degraded performance

Uptime: Feb 2026 · 98.71% · Mar 2026 · 99.84% · Apr 2026 · 99.91%

Fluxer Gateway (gateway.fluxer.app) - Operational

Uptime: Feb 2026 · 100.0% · Mar 2026 · 99.19% · Apr 2026 · 99.82%

Web Client (web.fluxer.app) - Operational

Uptime: Feb 2026 · 100.0% · Mar 2026 · 100.0% · Apr 2026 · 99.95%

Media Proxy (fluxerusercontent.com) - Operational

Uptime: Feb 2026 · 100.0% · Mar 2026 · 100.0% · Apr 2026 · 99.92%

Static Assets (fluxerstatic.com) - Operational

Uptime: Feb 2026 · 100.0% · Mar 2026 · 100.0% · Apr 2026 · 99.92%

Notice history


We're letting people back in slowly!
  • Resolved
    This incident has been resolved.
  • Monitoring

    It's finally here: we're going to start letting traffic in slowly, and then probably more rapidly. We will be rolling out sessions in smaller groups so we can keep a close eye on the infrastructure as we do this, but since it's a lower-traffic time of day, it shouldn't be an issue (hopefully... knock on wood).

  • Update

    We're awake again and working through the remaining pre-takeoff checklist!

  • Update

    Sharing exact ETAs hasn't worked out so well so far, but there isn’t much left to do, and the two people working on this migration could really use some catch-up sleep!

  • Investigating

    And right as I posted that update, it went down again.

    We’re going to limit access to the app and show a clear message while we finish the migration. These outages are pulling time away from the migration work, and the team working on it is extremely small. Right now it is a single person, me.

    We do have one new hire though, and are working towards expanding the team!

    A massive surge of new signups is also overwhelming the single production server we are working to migrate away from. That server is currently serving about 120,000 users, and we received those users in just two weeks.

    If everything goes as expected, things will be back up and better at 10 AM UTC on Saturday, fully on the new environment: wider voice server coverage in Johannesburg, Mumbai, São Paulo, Sydney, Tokyo, Miami, Dallas, Madrid, Frankfurt, Nuremberg, Stockholm, and more to come, plus improved anti-abuse and platform moderation tools to fight spam and raids, and more!

    I can also reveal that we've got a surprise for all pre-existing Plutonium and lifetime Visionary users, and for all non-paying users too, as soon as everything is back up and running.

    Thanks for your patience, and have an awesome weekend!

  • Resolved

    We're really sorry about the downtime!

    Things are going to get better soon, but right now we have to keep two worlds alive at the same time. We have to maintain the old production environment that is already overloaded, and we also have to keep working through issues in the new environment we're trying to move everyone to.

    The hard part is that a lot of people want back in all at once, and most things are still running on that old environment. That creates the classic thundering herd effect. Requests pile up, some time out, clients retry, the retries add even more load, and it can spiral into downtime across multiple layers of the stack.

    We've had to tweak a lot of things to blunt that surge and stop the negative feedback loop (a small sketch of the usual client-side mitigation, retry backoff with jitter, appears at the end of this timeline). These are well-documented symptoms you see in systems that scale fast, including Discord in its early days.

    We honestly did not expect to be operating at this scale so quickly, and we are an extremely small team. It is basically a single person driving the core work (with one new hire just today!). We're trying to do better <3

  • Identified

    The API has been brought back online. However, the real-time Gateway is being slammed with requests to bring your communities back online. We have identified the cause of the slowness and are working on unclogging the queue.

  • Monitoring

    We implemented a fix to get everyone back in and are currently monitoring the result. You may experience some missing servers and instability.

  • Identified

    We are currently working on resolving the elevated error rates on the API.
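
    For anyone curious about the retry spiral described above, here is a minimal illustrative sketch of the standard client-side mitigation: exponential backoff with full jitter, so clients that hit errors spread their retries out over time instead of stampeding the API all at once. This is not Fluxer's actual client or API code, and every name in it is hypothetical.

    ```typescript
    // Illustrative only: exponential backoff with full jitter for client retries.
    async function fetchWithBackoff(
      url: string,
      maxAttempts = 5,
      baseDelayMs = 500,
      maxDelayMs = 30_000,
    ): Promise<Response> {
      for (let attempt = 0; attempt < maxAttempts; attempt++) {
        try {
          const res = await fetch(url);
          // Only retry on overload-style responses (5xx, 429); return everything else as-is.
          if (res.status < 500 && res.status !== 429) return res;
        } catch {
          // Network error: fall through to the retry delay below.
        }
        // Full jitter: wait a random time in [0, min(maxDelay, base * 2^attempt)).
        // Randomizing prevents every waiting client from retrying at the same instant.
        const cap = Math.min(maxDelayMs, baseDelayMs * 2 ** attempt);
        await new Promise((resolve) => setTimeout(resolve, Math.random() * cap));
      }
      throw new Error(`Request to ${url} failed after ${maxAttempts} attempts`);
    }
    ```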

API Migration (Part 2!)
  • Completed
    March 01, 2026 at 11:04 AM
    Maintenance has completed successfully.
  • Update
    February 28, 2026 at 4:22 AM

    We're still working through the migration, but all of the infrastructure has now been flipped over to the new environment! We're ironing out some remaining bugs, but we should still be on track for 10 AM UTC on Saturday.

  • Update
    February 27, 2026 at 9:46 PM

    Just wanted to share an update for everyone watching along at home: we're still working on things behind the scenes.

  • Update
    February 26, 2026 at 7:41 PM

    The media proxy and the static assets remain stable on the new infrastructure, so both are being marked operational now. A subset of traffic continues to hit the new infrastructure, and we hope to cut everything over fully soon.

  • Update
    February 26, 2026 at 2:34 PM

    Before we roll the updated gateway out to a wider audience, we're still fixing some bugs with it that cause all guilds to become unavailable on certain changes and make the platform unstable for brief periods of time.

  • Update
    February 25, 2026 at 8:20 PM

    The maintenance is still underway; however, preliminary testing with the new infrastructure has gone well! We hope to be fully online again with a little treat for everyone soon. 👀

  • Update
    February 25, 2026 at 9:26 AM

    Sorry for the lack of updates; we've been heads down working on getting everything online and haven't had time to keep the status page properly updated. We're almost to the point where things are in working order; however, there's an issue with the client not loading properly that we are working on fixing.

  • Update
    February 24, 2026 at 8:01 PM

    We're finally ready to start cutting over traffic! Things will continue to be a little bumpy as we sort out and scale up the new infrastructure, and the platform may become entirely unavailable for some users for a short period.

    If you are using the official client and you're not let back in, please try hitting [CTRL] + [R] (or [⌘] + [R] if you're a Mac user) to reload it and see if that solves the issue.

  • Update
    February 24, 2026 at 2:21 PM

    The infrastructure migration is still in progress, and the maintenance window has been extended by an hour to accommodate the remaining work.

  • In progress
    February 24, 2026 at 12:00 PM
    Maintenance is now in progress
  • Planned
    February 24, 2026 at 12:00 PM

    We are planning scheduled maintenance during this window to complete the infrastructure migration. We hope to keep the platform online as much as possible throughout this transition.
