Hello! It seems you have made it to our donation post.

Thank you!

We created Reddthat to build a community that can be run by the community. We did not want it to be something that is moderated, administered, and funded by one or two people.

Background

In one of our very first posts, titled “Welcome one and all”, we talked about our short-term and long-term goals.
In the 7 days since starting, we have already federated with over 700 different instances, have 24 different communities, and have over 250 users who have contributed over 550 comments. So I think we’ve definitely achieved our short-term goals, and I thought it was going to take closer to 3 months to reach these kinds of numbers!

First off, we would like to thank everyone for being here, subscribing to our communities, and calling Reddthat home!

Donation Links

From now on we will be using the following services.

Current Plans:

Database

The database is another story. It has grown to 1.8 GB (on the filesystem) in the last 7 days. Some quick math makes that roughly 94 GB in a year’s time. Our current host allows us to add on 100 GB of SSD storage for $100/year, which is very viable and will let us keep costs down while planning for our future.
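
For the curious, that projection is just a linear extrapolation of the first week’s growth. A quick sketch (the figures are from above; the assumption that growth stays constant is ours):

```python
# Project annual database growth from the first week's numbers.
# Figures are from the post; the linear-growth assumption is ours.
db_size_gb = 1.8        # database size on disk after the first 7 days
days_observed = 7

gb_per_day = db_size_gb / days_observed
projected_yearly_gb = gb_per_day * 365

print(f"~{projected_yearly_gb:.0f} GB per year")  # ~94 GB
```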

Annual Costings:

Our current costs are:

  • Domain: €15 (~$25 AUD)
  • Server: $897.60 USD (~$1365 AUD)
  • EU Server: €39 (~$64 AUD)
  • Wasabi Object Storage: $72 USD (~$111 AUD)
  • Total: ~$1565 AUD per year (~$130.42/month)
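
If you want to check the tally, the total is just the line items summed (the AUD figures are approximate conversions, so the total is approximate too):

```python
# Tally the annual costs in AUD (conversions are approximate, per the post).
costs_aud = {
    "Domain": 25,                   # €15
    "Server": 1365,                 # $897.60 USD
    "EU Server": 64,                # €39
    "Wasabi Object Storage": 111,   # $72 USD
}

total_aud = sum(costs_aud.values())
per_month = total_aud / 12

print(f"~${total_aud} AUD per year (~${per_month:.2f}/month)")
```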

That’s our goal. That is the number we need to achieve in funding to keep us going for another year.

Cheers,
Tiff

P.S. Thank you to our donors! Your names will forever be remembered by me. Last updated on 2024-05-03.

Current Recurring Gods (🌟)

  • Nankeru (🌟)
  • souperk (🌟)
  • Incognito (x3 🌟🌟🌟)
  • ThiccBathtub (🌟)
  • Bryan (🌟)
  • Guest (x2 🌟🌟)
  • Ashley (🌟)
  • Alex (🌟)
  • Ed (🌟)

Once Off Heroes

  • Guest(s) x13
  • MonsiuerPatEBrown
  • Patrick x4
  • Stimmed
  • MrShankles
  • RIPSync
  • Alexander
  • muffin
  • MentallyExhausted
  • Dave_r
Comments

  • Stimmed:
    A quick question related to the DB: is the data broken into many smaller tables, or is most data in one or two tables? If it is all in one, we may run into performance issues as soon as the DB becomes too large, as queries run against whole tables unless they are indexed really well.

    • Tiff (OP):

      Great question! The data is broken up across a fair number of tables, so I think it’s pretty much fine in that regard. The database is the current bottleneck, though, and the Lemmy devs have said they need to fix it; as front-end people they are not the best with databases.

      Unfortunately we are at the mercy of the Lemmy devs at the moment, and I’m sure there are issues, as the big instances are really struggling.

      But that isn’t something we will really need to worry about until we 10x our DB.

      • Stimmed:

        Thank you for the answer. I have dealt with scaling DBs with tons of data, so the alarm bells were ringing. DBs tend to be fine up to a point and then fall over as soon as there isn’t enough RAM to cache the data and mask issues in the DB architecture. With the exponential growth of both users and content to cache, my gut tells me this will become a problem quickly unless some excellent coding is done on the back end to truncate remote-instance data quickly.

        Sadly I am better at breaking systems in wonderful ways than at building systems for use, so I can’t be too helpful other than to voice concerns about issues I have run into before.

        • Tiff (OP):

          Hahahaha, are you me?

          Yeah, the DB on the filesystem is double the size of what’s held in memory; current active memory usage is about 200–300 MB for Postgres. So I’m not too worried about it.

          There are big wins for sure in the database, and I’m definitely looking at the activity table when I get a few hours to myself. It keeps a log of every like, from every user that the server federates with, and I think it’s not efficient at all. Each row contains a huge amount of data when I’d expect it to be quite small.

          But I digress. I haven’t delved into the intricacies, and I’m sure it’s not as simple as I make it out to be. There will be lots of QoL work in 0.18 for sure, so stay tuned. I’ll be using this Announcement community for all of our big milestones and to keep everyone updated, as well as to crowd-source solutions!