Happy leap year! February comes with a fun 29th day! What will we do with the extra 24 hours? Sorry I missed the January update, but I was too busy living the dream in cyberspace.

Community Funding

As always, I’d like to thank our dedicated community members who keep the lights on. Unfortunately our backup script went a little haywire over the last couple of months, resulting in a huge object-storage bill! The bug has been fixed, and new billing alerts have been put in place to ensure this doesn’t happen again.
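For anyone wanting to set up something similar, the core idea is just a scheduled check on how big a bucket has grown. A minimal sketch, where the bucket name, threshold, and alert address are all illustrative placeholders rather than our real configuration:

    # hypothetical nightly usage check; bucket, threshold, and recipient
    # are placeholders for illustration only
    SIZE=$(aws s3 ls s3://example-backups --recursive --summarize \
             | awk '/Total Size/ {print $3}')   # total bytes in the bucket
    if [ "$SIZE" -gt 500000000000 ]; then       # alert above ~500 GB
        echo "Backup bucket has grown to ${SIZE} bytes" \
            | mail -s "object-storage alert" admin@example.com
    fi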

If you are new to Reddthat and have not seen our OpenCollective page, it is available over here. All transactions and expenses are completely transparent. If you donate and leave your name (which is completely optional), you’ll eventually find your way into our main funding post over here!

Upcoming Postgres Upgrade

In the admin Matrix channels there has been a lot of talk recently about database optimisations, as well as issues relating to out-of-memory (OOM) errors. These OOM issues mostly come down to memory “leaks”, which are what was plaguing Reddthat on a weekly basis. The good news is that other instance admins have confirmed that Postgres 16 (we are currently on Postgres 15) fixes those leaks and performs better all around!

We are planning to upgrade Postgres from 15 to 16 later this month (February). I’m tentatively looking at performing it during the week of the 18th to the 24th. This means Reddthat will be down for the duration of the maintenance. I expect it to take around 30 minutes, but further testing on our development machines will produce a more concrete number.
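For those curious why this needs downtime: Postgres major versions can’t read each other’s data directories, so the upgrade is either a pg_upgrade run or a full dump-and-restore. A rough sketch of the dump-and-restore path (the container names, volumes, and credentials below are placeholders, not our real setup):

    # hypothetical dump-and-restore from Postgres 15 to 16 in containers
    docker exec postgres15 pg_dumpall -U postgres > /backup/cluster.sql
    docker stop postgres15
    docker run -d --name postgres16 \
        -e POSTGRES_PASSWORD=changeme \
        -v pgdata16:/var/lib/postgresql/data postgres:16
    # wait until the new server accepts connections, then restore
    until docker exec postgres16 pg_isready; do sleep 1; done
    docker exec -i postgres16 psql -U postgres < /backup/cluster.sql

The 30-minute estimate is mostly a question of how long that restore takes against a database of our size, which is exactly what the development-machine testing will tell us.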

This “forced” upgrade comes at a good time. As you may have noticed from our uptime monitoring, we have been having occasional outages. These are caused by our Postgres server. When we do a deploy and make a change, the Postgres container does not shut down cleanly, so when it restarts it has to perform a recovery to ensure data consistency. This recovery normally takes about 2-3 minutes, during which you will see an unfortunate error page.
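If you’re wondering how a container ends up shutting down uncleanly: Docker sends SIGTERM and then force-kills the container after a short grace period (10 seconds by default), and SIGTERM tells Postgres to wait for all clients to disconnect first, which long-lived application connections can prevent. One probable fix, with illustrative values, is a longer grace period plus a signal Postgres treats as a fast-but-clean shutdown:

    # give Postgres up to two minutes to checkpoint before being killed
    docker stop --time 120 postgres
    # the docker-compose equivalents (SIGINT triggers Postgres' "fast
    # shutdown", which aborts connections but still exits cleanly):
    #   stop_grace_period: 2m
    #   stop_signal: SIGINT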
This has been a sore spot for me, as it is most likely an implementation failure on my part, and I curse to myself whenever Postgres decides to restart for whatever reason, even though it should not restart just because I made a change to a separate service. I feel like I am letting us down and want to do better! These issues lead us nicely into the next section.

Upcoming (February/March) “Dedicated” Servers

I’ve been playing with the concept of separating our services for a while now. We now have Pictrs served on a different server, but lemmy, the frontends, and the database all still live on our big single server. This single-server setup has served us nicely and would continue to do so for quite some time, but with the recent lemmy changes adding more pressure on the database, we need to find a solution before it becomes an even bigger problem.
The forced Postgres upgrade gives us the chance to make this decision and optimise our servers to support everyone and give a better experience overall.

Most likely starting next month (March), we will have two smaller front-end servers running the lemmy applications, a smallish Pictrs server, and a bigger backend server to power the growing database. At least, that is the current plan. Further testing may make us re-evaluate, but I do not foresee any reason the overarching plan would change.

Lemmy v0.19 & Pictrs v0.5 Features

We’ve made it through the changes to v0.19.x, and I’d like to say that even with the unfortunate downtimes relating to Pictrs’ temp data, we came through with only minor downtime and (for me) a better appreciation of the database and its structure.

Hopefully everyone has been enjoying the new features. If you use the default web-ui, you should also be able to upload videos, though I would advise against it. There is still a 10-second limit on upload requests, so unless your video is small and short it won’t complete in time.
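If you really want to try anyway, squeezing the file down first gives the request a fighting chance of finishing inside that window. A quick, illustrative example (the resolution and bitrate targets are arbitrary choices, tune to taste):

    # shrink a clip so the upload can finish inside the ~10 s window;
    # 480p, 500 kbit/s video, and dropping audio (-an) are example values
    ffmpeg -i input.mp4 -vf scale=-2:480 -b:v 500k -an output.mp4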

Closing

It has been challenging over the past few months, especially with communities not as active as they once were. It almost seems that our little instance is being forgotten while other instances have completely blown up! It is concerning that without one or two people our communities dry up.

As an attempt to breathe life into our communities, I’ve started a little Community Spotlight initiative, where every 1-2 weeks I’ll be pinning a community that you should go and check out!

The friendly atmosphere and communities are what make me want to continue doing what I do in my spare time. I’m glad people have found a home here, even if they might lurk, and I’ll continue trying to make Reddthat better!

Cheers,
Tiff
On behalf of the Reddthat Admin Team.

  • souperk

    > Unfortunately our backup script went a little haywire over the last couple of months, resulting in a huge object-storage bill! The bug has been fixed, and new billing alerts have been put in place to ensure this doesn’t happen again.

    It’s good to see guards being put in place, but don’t stress too much about it. Mistakes happen; last week at my job we realized we had a monthly cronjob running every second, and it was accumulating around 180M logs per month (or around $1,200 per customer per month for GKE logs alone)…
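    For anyone wondering how that kind of thing happens: one mixed-up schedule field is all it takes. Purely as an illustration (the paths here are made up):

        # crontab fields: minute hour day-of-month month day-of-week
        * * * * * /opt/jobs/report.sh    # fires every minute
        0 3 1 * * /opt/jobs/report.sh    # monthly: 03:00 on the 1st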

    I haven’t been paying close attention to funding, but I believe the instance has a relatively good runway. If that’s not the case, I would consider increasing my monthly contribution; just give us a heads-up.

    > This has been a sore spot for me, as it is most likely an implementation failure on my part, and I curse to myself whenever Postgres decides to restart for whatever reason, even though it should not restart just because I made a change to a separate service. I feel like I am letting us down and want to do better!

    You make do with what you have; the Lemmy ecosystem is still young and you are a pioneer. Thanks for exploring these uncertain waters so others can follow along!

    Personally, I think the instance has been pretty stable since October. Of course there were issues, like having to submit a comment twice, but they are infrequent, and for comparison, during the same period GitHub had more than a couple of hour-long outages…

    > As an attempt to breathe life into our communities, I’ve started a little Community Spotlight initiative, where every 1-2 weeks I’ll be pinning a community that you should go and check out!

    Noticed this and really liked it; it makes the server feel so homely!! 🥰

    As always, thanks for the update. It’s so fun to watch this instance evolve.

  • deadcatbounce

    Thank you very much for your servers.

    Love from England (it was the only working, stable Lemmy instance I could open an account on back in the day; it was only later that I found out I’m on the other side of the planet!).

  • Blaze

    Thank you very much for the updates!