I’m not sure if that’s only on Reddthat and 0.18 servers (I’ll go check my account on Beehaw now), but I’m being shown posts very near the top of the list from, like, 20-30 days ago, sometimes even as far back as a year or two ago. They don’t have that many upvotes or comments either. Anyone else seeing that?

  • TiffMA · 1 point · 10 months ago

    There was some talk about sorting issues in 0.18.0 that are supposedly fixed in 0.18.1.

    What are you sorting by, if I may ask? (I use New and can’t see the issue.)

    • fernandofigOP · 1 point · 10 months ago

      Hey Tiff, thanks for chiming in!

      I’m sorting by Hot. I came across this thread on AskLemmy about the same problem, and one of the comments seems to allude to what you’re saying, adding that a server restart seems to fix the problem temporarily.

      • TiffMA · 3 points · 10 months ago

        Unfortunately, while people are continuing to join in the exodus from Reddit, I’m reluctant to make a server change, even though it would only be down for a minute or so. I’ll probably schedule it for later today or tomorrow, once the initial wave of people has joined.

        To give you an idea, we were processing ~30k requests per hour, including federation traffic. We are now processing ~90k requests per hour!

        • fernandofigOP · 1 point · 10 months ago

          Wow. And I expect that’ll only go up from here! Is the infra handling it well? Are you all set up with autoscaling or stuff like that?

          • TiffMA · 2 points · 10 months ago

            We are only using 40% of our CPU and 50% of our memory.

            I can easily add extra CPU and memory if needed. We recently added extra memory to deal with the instabilities. (Check the latest announcement on the memory, and why your display picture is throwing a 404 😥). I’m terribly sorry about that!

            I did a “how I’m hosting Reddthat” post here: https://reddthat.com/post/19103
            Honestly, I never expected it to blow up like it did 😅, especially with people actually funding our little adventure. I originally bought a 12-month server for just under $120, which I had thought would be enough for a slow and gradual increase in users.

            The next plan is to scale out each of the services to its own server: Lemmy & lemmy-ui, pictrs (pictures), and Postgres (database). The biggest memory hog is Lemmy, due to the spikes, and 0.18.1 will bring an in-memory cache. The biggest CPU hog is pictrs, as we have to convert images -> webp and gifs -> webm (and soon videos). I’d like to scale pictrs out to multiple instances, but that’s currently impossible because it uses an internal kv-store that can’t be shared between instances. Splitting things out will let us allocate the right amount of memory to each service.
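            To make that concrete, here’s a very rough sketch of what the compose file on the Lemmy box could look like once Postgres and pictrs live on their own servers. The image tags, hostnames and credentials are just placeholders for illustration, not our actual config:

                # Sketch only: Lemmy + lemmy-ui stay on this box, Postgres and pictrs move off-box.
                version: "3.7"
                services:
                  lemmy:
                    image: dessalines/lemmy:0.18.1           # placeholder tag
                    environment:
                      # point the backend at the dedicated database server
                      - LEMMY_DATABASE_URL=postgres://lemmy:CHANGE_ME@db.internal:5432/lemmy
                    volumes:
                      # lemmy.hjson would have its pictrs url pointing at the pictrs box
                      - ./lemmy.hjson:/config/config.hjson
                    ports:
                      - "8536:8536"
                  lemmy-ui:
                    image: dessalines/lemmy-ui:0.18.1        # placeholder tag
                    environment:
                      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
                      - LEMMY_UI_LEMMY_EXTERNAL_HOST=reddthat.com
                    ports:
                      - "1234:1234"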

            Ideally I’d like to run a Docker Swarm and have each service scaled. But the only ones that could possibly be done like that (at this time) are lemmy and lemmy-ui, which will be the first ones we do. Having the swarm with only two of the services would just add complexity where it isn’t needed. Plus, they have some internal configuration files, so I’d need to build a deployment pipeline to bake the config into our containers. Not hard, but not a huge priority.
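            (For the curious: in swarm mode, scaling the stateless services really is roughly just a replicas setting, something like the snippet below. Again, just an illustration rather than anything we’re actually running; pictrs and Postgres hold state, so they’d stay at a single instance.)

                # Sketch only: swarm-mode scaling of the two stateless services.
                services:
                  lemmy:
                    image: dessalines/lemmy:0.18.1
                    deploy:
                      replicas: 3    # spread API load across nodes
                  lemmy-ui:
                    image: dessalines/lemmy-ui:0.18.1
                    deploy:
                      replicas: 2
                # deployed with: docker stack deploy -c docker-compose.yml reddthat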

            😊 I’ve got plans, and now we’ve got funding. Worst comes to worst, I’ll throw extra CPU & memory at this box until it’s stupidly huge as a temporary measure (which is what other Lemmy instances have done), and then migrate the services to other servers at a later date.

            Tiff

            • fernandofigOP · 2 points · 10 months ago

              nice!

              Is lemmy tightly coupled to pictrs? It looks like that’s the biggest roadblock to scaling flexibility. It would be useful if it could be replaced at some point.

              Not that I intend to fire up my own instance or anything, but as an amateur sysadmin, it’s nice to have a peek, at this early stage of the community, at how it’s all put together! It’ll be even more interesting to see how it’ll grow 😊

              (Check the latest announcement on the memory, and why your display picture is throwing a 404 😥)

              I hadn’t noticed until you mentioned it; I guess I had it cached until now, and it was only when I went to my profile page that the picture started breaking. No worries though, I don’t care about that!

              • TiffMA · 2 points · 10 months ago

                Is lemmy tightly coupled to pictrs?

                Not tightly, but there are no (short-term) plans to migrate away from pictrs.

                to see how it’ll grow

                Right there with you!

      • TiffMA · 1 point · 10 months ago

        (I did a restart for you, let me know if the sorting looks better now)

        • fernandofigOP · 1 point · 10 months ago

          lol, no way dude, you didn’t have to, I just said that as a data point! I don’t want users over here taking pot shots at you because of me now! 🤣

          But yeah, that seems to have fixed it, for now at least; the sorting on Hot is making sense again!

          • TiffMA · 1 point · 10 months ago

            I tested on my dev instance, and the restart took just over 1 second, so I figured it would be fine. Glad it’s working as you expected now.