Hello! It seems you have made it to our donation post.


We created Reddthat to build a community that can be run by the community. We did not want it to be something that is moderated, administered, and funded by just one or two people.

Current Recurring Donators:

Current Total Amazing People:


In one of our very first posts, titled “Welcome one and all”, we talked about our short-term and long-term goals.
In the 7 days since starting, we have already federated with over 700 different instances, have 24 different communities, and have over 250 users who have contributed over 550 comments. So I think we’ve definitely achieved our short-term goals, and I thought it was going to take closer to 3 months to reach these kinds of numbers!

We would like to first off thank everyone for being here, subscribing to our communities and calling Reddthat home!


Open Collective is a service which allows us to take donations, be completely transparent when it comes to expenses and our budget, and gives us an idea of how we are tracking month-to-month.

Servers are not free: the databases are growing, images are being uploaded, and the servers are running smoothly.

This post has been edited to only include relevant information about total funds and “future” plans. Because sometimes, we reach the future we were planning for!

Current Plans:


The database service is another story. The database has grown to 1.8GB (on the file system) in the first 7 days. Some quick math makes that roughly 94GB in a year’s time. Our current host allows us to add on 100GB of SSD storage for $100/year, which is very viable and will allow us to keep costs down while planning for our future.
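That projection is just the first week’s growth extrapolated linearly. As a rough sketch (assuming the growth rate stays constant, which it may well not):

```python
# Back-of-the-envelope projection of database growth,
# assuming growth stays linear at the first week's rate.
gb_per_week = 1.8            # observed growth in the first 7 days
weeks_per_year = 365 / 7     # ~52.1 weeks

projected_gb = gb_per_week * weeks_per_year
print(round(projected_gb, 1))  # ~93.9 GB, i.e. the ~94GB figure above
```

In practice federation growth tends to accelerate as more instances and users appear, so 94GB is best read as a lower bound.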

Annual Costings:

Our current costs are:

  • Domain: 15 EUR
  • Server: $118.80 AUD
  • Server RAM upgrade: $54.50 AUD
  • Server: $528.00 AUD
  • Wasabi Object Storage: $72 USD
  • Total: ~$830 AUD per year (~$70/month)
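A quick sanity check of that total, converting the EUR and USD line items into AUD. The exchange rates here are illustrative assumptions, not the rates actually paid:

```python
# Rough cross-check of the annual total in AUD.
# Exchange rates below are assumptions for illustration only.
EUR_TO_AUD = 1.65
USD_TO_AUD = 1.50

costs_aud = [
    15 * EUR_TO_AUD,   # domain, 15 EUR
    118.80,            # server
    54.50,             # server RAM upgrade
    528.00,            # second server
    72 * USD_TO_AUD,   # Wasabi object storage, 72 USD
]

total = sum(costs_aud)
print(round(total))          # ~834 AUD/year, consistent with ~$830
print(round(total / 12, 1))  # ~69.5 AUD/month, i.e. the ~$70/month above
```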

That’s our goal. That is the number we need to achieve in funding to keep us going for another year.


PS: Thank you to our donors! Your names will forever be remembered by me. (Last updated on 2023-10-03)

Recurring Gods (🌟)

  • Nankeru (🌟)
  • pleasestopasking (🌟)
  • souperk (🌟)
  • Incognito (x2 🌟🌟)
  • Siliyon (🌟)
  • Ed (🌟)
  • ThiccBathtub (🌟)
  • VerbalQuicksand (🌟)
  • hexed (🌟)
  • Bryan (🌟)
  • Daniel (🌟)

Once Off Heroes

  • Guest(s) x12
  • Stimmed
  • MrShankles
  • RIPSync
  • Alexander
  • muffin
  • MentallyExhausted
  • Dave_r
  • Patrick
  • @Stimmed
    6 months ago

    A quick question related to the DB: is the data broken into many smaller tables, or is most data in one or two tables? If it is all in one, we may run into performance issues as soon as the DB becomes too large, as queries run against whole tables unless they are indexed really well.

    • Tiff (OP)
      6 months ago

      Great question! The data is broken up across a fair number of tables, so I think it’s pretty much fine in that regard. The database is the current bottleneck, though, and the Lemmy devs have said they need to fix it; as they are front-end people, they are not the best with databases.

      Unfortunately we are at the behest of the Lemmy devs at the moment, and I’m sure there are issues, as the big instances are really struggling.

      But that isn’t something we will really need to worry about until we 10x our db.

      • @Stimmed
        6 months ago

        Thank you for the answer. I have dealt with scaling DBs with tons of data, so the alarm bells were ringing. DBs tend to be fine up to a point and then fall over as soon as there isn’t enough RAM to cache the data and mask issues in the DB architecture. With the exponential growth of both users and content to cache, my gut tells me this will become a problem quickly unless some excellent coding is done on the back end to truncate remote-instance data quickly.

        Sadly I am better at breaking systems in wonderful ways than building systems for use, so I can’t be too helpful other than to voice concerns about issues I have run into before.

        • Tiff (OP)
          6 months ago

          Hahahaha, are you me?

          Yeah, the DB on the filesystem is double the size of it in memory; the current active memory usage is about 200-300MB for Postgres. So I’m not worried about it too much.

          There are big wins for sure in the database, and I’m definitely looking at the activity table when I get a few hours to myself. It keeps a log of every like from every user on every server we federate with, and I think it’s not efficient at all. Each row contains a huge amount of data when I’d expect it to be quite small.

          But I digress. I haven’t delved into the intricacies, and I’m sure it’s not as simple as I make it out to be. There will be lots of QoL work in 0.18 for sure, so stay tuned. I’ll be using this Announcements community for all of our big milestones and to keep everyone updated, as well as to crowdsource solutions!