I placed a low bid on a government auction for 25 EliteDesk 800 G1s and unexpectedly won (ultimately paying less than $20 per computer).

In the long run I plan on selling 15 or so of them to friends and family for cheap. I’ll probably run 4 with Proxmox (3 as a lab cluster and 1 as the always-on home server) and keep a few as spares and random desktops around the house where I could use one.

But while I have all 25 of them what crazy clustering software/configurations should I run? Any fun benchmarks I should know about that I could run for the lolz?

Edit to add:

Specs, based on the auction listing and looking up the computer models:

  • 4th gen i5s (probably i5-4560s or similar)
  • 8GB of DDR3 RAM
  • 256GB SSDs
  • Windows 10 Pro (no mention of licenses, so that remains to be seen)
  • Looks like 4 PCIe slots (2 ×1 and 2 ×16 physically, presumably half-height)

Possible projects I plan on doing:

  • Proxmox cluster
  • Baremetal Kubernetes cluster
  • Harvester HCI cluster (which has the benefit of also being a Rancher cluster)
  • Automated Windows Image creation, deployment and testing
  • Pentesting lab
  • Multi-site enterprise network setup and maintenance
  • Linpack benchmark then compare to previous TOP500 lists

  • Matthew Gasoline @lemmy.world · 29 days ago

    Senior year of high school, I put Unreal Tournament on the school server. If it were me, I’d recreate that experience, including our teacher looking around the class. That was almost 20 years ago; I hope everyone is doing alright.

    • Wojwo@lemmy.ml · 29 days ago

      I have a box with 10 old laptops that I keep around just for that. Unreal Tournament 2004, Insane, Brood War and all the id classics. I don’t get to set it up a lot, but when I do it’s always a hit.

  • PhlubbaDubba@lemm.ee · 29 days ago

    According to Bush Jr. and Cheney, you are now capable of building a supercomputer dangerous enough to warrant a 20+ year invasion.

    Depending on the actual condition of all those computers and your own skill in building, I’d say you could rig a pretty decent home server rack out of those for most purposes you could imagine: a personal VPN, a personal RDP box to conduct work on, a personal test server for experimental code, and/or a sandbox for testing potentially unsafe downloads/links for viruses.

    Shit, you could probably build your own OS that optimizes for all that computing power just for the funzies, or even use it to make money by contributing its computing power to a crowdsourced computing project, where you dedicate memory bandwidth to the project for some grad student or research institute to do all their crazy math with. Easiest way to rack up academic citations if you ever want to be a researcher!

  • solrize@lemmy.world · 29 days ago

    25 machines at say 100 W each is about 2.5 kW. Can you even power them all at the same time at home without tripping circuit breakers? At your mentioned $0.12/kWh that is about 30 cents an hour, or over $200 to run them for a month, so that adds up too.

    The i5-4560S is about 4597 on PassMark, which isn’t that great. 25 of them is ~115k at best, so about like one big Ryzen server that you can rent for the same $200 or so. I can think of various computation projects that could use that, but I don’t think I’d bother with a room full of crufty old PCs if I were pursuing something like that.
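
    The arithmetic behind those numbers, as a quick Python sketch (the 100 W per machine and $0.12/kWh figures are the assumptions from this thread, not measurements):

    ```python
    # Back-of-the-envelope power and cost estimate for running all 25 machines.
    machines = 25
    watts_per_machine = 100      # assumed average draw per box
    rate_per_kwh = 0.12          # USD per kWh, as mentioned by the OP

    total_kw = machines * watts_per_machine / 1000   # 2.5 kW
    cost_per_hour = total_kw * rate_per_kwh          # ~$0.30/hour
    cost_per_month = cost_per_hour * 24 * 30         # ~$216/month

    print(f"{total_kw:.1f} kW total, ${cost_per_hour:.2f}/hour, ${cost_per_month:.0f}/month")
    ```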

      • 11111one11111@lemmy.world · 28 days ago (edited)

        Psh, 1 plug ain’t shit. Every pic I see from anyone who lives out in those ghettos of India, Central America or any Pacific islands, they also only rock 1 plug, but they’re running the corner store, the liquor store, the hospital, their style of little school, middle school and old school, 3 hair salons if Latin or 3 nail salons if Pacific, Bollywood, every stadium from every country in the World Cup, and always 1 dude trying to squeeze 1 more plug in cuz he’s running low on bats. Idk why the American ghetto is so pussy. One time I saw a family that fuckin put covers over empty sockets?!? Come on dog, that’s like wearing a condom jerking off. NGL tho, I get super jelly seeing pictures from those countries with their thousands of power lines, phone lines, sidelines, cable lines, borderlines, internet lines… fuck, I don’t know much about how my AOL works, but those wizards must be streaming some hella fast Tokyo banddrifts with all them wires.

            • ulterno@lemmy.kde.social · 27 days ago

              And Japan has a 300+ Tb/s connection. Your point?
              My point is that the average Indian is not doing “Hella fast Tokyo banddrifts” (not sure what banddrift even means, but no).

              And yes, a 1Gb/s connection is theoretically available, but how many people are using the ~₹4000/month connection?

              Considering how many people just don’t have broadband at home and rely on mobile internet alone, we can see how things compare.

              Also, for the thread starter: most of the “thousands of” cables that you see on poles in congested areas are just abandoned cables from older installations which nobody cared to remove.

              • MenacingPerson@lemm.ee · 27 days ago

                Also, ~100 Mb/s is in no way the average speed in an Indian household. It’s usually lower. I also don’t see any specific mention of India in your link up there to that random site.

                • ulterno@lemmy.kde.social · 26 days ago

                  > Also, ~100 Mb/s is in no way the average speed in an Indian household.

                  You’re right. It’s not.

                  > I also don’t see any specific mention of India in your link up there to that random site.

                  I don’t see any either. Guess why: because it only has the top 10, further emphasising the point that

                  > the average Indian is not doing “Hella fast Tokyo banddrifts”

    • TrainguyromOP · 29 days ago

      I won’t be leaving all of them on for long at all. I’ve got a few basically unused 15 A electrical circuits in the unfinished basement (I can see the wires and visually trace the entire runs). I’ll probably only run all 25 long enough to do a Linpack benchmark and maybe some kind of AI model on the distributed compute, then start getting rid of at least half of them.

    • billwashere@lemmy.world · 28 days ago

      This is only about 21 amps. Most outlets in a home are 15 amps, but 20 amps isn’t unheard of. From one outlet, doubtful, but yes, one house could provide that much power easily if you split them up across three or four rooms on different breakers.

      Now it would be fun to watch his electric meter spin like a saw blade … (yes I’m old … I remember meters that had spinning discs)

      • Zorg@lemmings.world · 28 days ago

        Just two 15 A breakers are actually enough. Circuits are only supposed to sustain 80% of their rated load continuously, so you should be able to pull 1.44 kW from a single puny NEMA 5-15.
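
        Putting the breaker math from these comments in one place (a small Python sketch assuming 120 V circuits and the ~100 W per machine figure used earlier in the thread):

        ```python
        # Rough circuit math: total current for 25 machines at ~100 W each on 120 V,
        # and how many fit on a 15 A breaker at the usual 80% continuous-load limit.
        total_watts = 25 * 100                      # ~2.5 kW
        volts = 120

        total_amps = total_watts / volts            # ~20.8 A overall
        watts_per_15a_circuit = 15 * volts * 0.8    # 1440 W continuous per breaker
        machines_per_circuit = watts_per_15a_circuit // 100

        print(f"{total_amps:.1f} A total; about {machines_per_circuit:.0f} machines per 15 A circuit")
        ```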

        • billwashere@lemmy.world · 28 days ago

          Well true but I was assuming the circuits had some things drawing a little power. Flipping on a device and tripping a breaker with 12 machines on it wouldn’t be ideal :)

          I have done this before in my upstairs home lab. Three beefy ESXi machines, some NAS storage, and a basic 10GbE switch eat up a lot of a single 15 A circuit. And apparently turning on a TV pushes it over the edge. Luckily the UPS saved my butt while I reset the breaker and shut some stuff off.

    • rsolva@lemmy.world · 28 days ago

      I have a couple of these (though only the G2 and G3 SFF) and they consume between 6-10 W when not under load, maxing out at 35 W (or 65 W, depending on the CPU). I run Proxmox with 64 GB of RAM and they are surprisingly efficient.

    • Blackmist@feddit.uk · 27 days ago

      That’s less than a kettle, in the UK at least.

      Of course I wouldn’t want to be running that all the time, because electric ain’t cheap.

  • Diabolo96@lemmy.dbzer0.com · 29 days ago (edited)

    Run a 70B Llama 3 on one and have a 100% local, GPT-4-level home assistant. Hook it up with Coqui AI’s XTTSv2 for mind-baffling, natural-sounding speech (100% local too) that can imitate anyone’s voice. Now you’ve got yourself Jarvis from Iron Man.

    Edit: I thought they were some kind of beast machines with 192 GB of RAM and stuff. They’re just regular mid-to-low tier PCs.
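
    For anyone curious what the plumbing for that would look like, here’s a minimal sketch, assuming a local Ollama server with a Llama 3 model pulled and Coqui’s XTTS v2 for voice cloning (the prompt and the reference voice file are placeholders):

    ```python
    # Hypothetical "local Jarvis" round trip: generate text with a local Ollama
    # server, then speak it with Coqui XTTS v2 cloning a reference voice sample.
    import requests
    from TTS.api import TTS  # Coqui TTS package

    def ask_llm(prompt: str) -> str:
        # Assumes Ollama is running on its default port with a llama3 model pulled.
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3", "prompt": prompt, "stream": False},
            timeout=300,
        )
        r.raise_for_status()
        return r.json()["response"]

    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")  # XTTS v2, runs locally

    answer = ask_llm("Give me a one-sentence morning status report, Jarvis.")
    tts.tts_to_file(
        text=answer,
        speaker_wav="reference_voice.wav",  # placeholder: short clip of the voice to imitate
        language="en",
        file_path="reply.wav",
    )
    ```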

    • SaintWacko@midwest.social · 29 days ago

      I tried doing that on my home server, but running it on the CPU is super slow, and the model won’t fit on the GPU. Not sure what I’m doing wrong

      • Diabolo96@lemmy.dbzer0.com · 29 days ago

        Sadly, I can’t really help you much. I have a potato PC and the biggest model I ran on it was Microsoft’s Phi-2, using the Candle framework. I used to tinker with llama.cpp on Colab, but it seems they don’t handle Llama 3 yet. Ollama says it does, but I’ve never tried it before. As for the speed, it’s kinda expected for a 70B model to be really slow on the CPU. How slow is too slow? I don’t really know…

        You can always try the 8B model. People say it’s really great and has even replaced the 70B models they’ve been using.
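
        For reference, CPU-only inference of a quantized 8B model is pretty approachable; a sketch with llama-cpp-python (the GGUF filename and thread count are placeholders, and a 4-bit 8B quant needs roughly 5-6 GB of RAM):

        ```python
        # Sketch: CPU-only inference of a 4-bit quantized 8B model via llama-cpp-python.
        from llama_cpp import Llama

        llm = Llama(
            model_path="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # placeholder local GGUF file
            n_ctx=2048,     # context window
            n_threads=4,    # match the machine's physical core count
        )

        out = llm("Q: Name three uses for 25 surplus desktops.\nA:", max_tokens=128)
        print(out["choices"][0]["text"])
        ```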

        • SaintWacko@midwest.social · 29 days ago

          Slow as in I waited a few minutes and finally killed it when it didn’t seem like it was going anywhere. And this was with the 7B model…

          • Diabolo96@lemmy.dbzer0.com · 29 days ago

            That shouldn’t happen with an 8B model. Even on CPU, it’s supposed to be decently fast. There’s definitely something wrong here.

            • SaintWacko@midwest.social · 29 days ago

              Hm… Alright, I’ll have to take another look at it. I kinda gave up, figuring my old server just didn’t have the specs for it

                • SaintWacko@midwest.social · 28 days ago

                  It has an Intel Xeon E3-1225 v2, 20 GB of RAM, and a Strix GTX 970 with 4 GB of VRAM. I’ve actually tried Mistral 7B and Decapoda’s LLaMA 7B, running them in Python with Hugging Face’s Transformers library (from local models).
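
                  A 7B model in fp16 is roughly 14 GB of weights, which is why it won’t fit in 4 GB of VRAM. With Transformers, the usual workaround is 4-bit quantization plus CPU offload, roughly like the sketch below (the model path is a placeholder, and it assumes a bitsandbytes build that supports the card):

                  ```python
                  # Sketch: load a local 7B model 4-bit quantized so most of it fits in a small
                  # GPU, with device_map="auto" spilling the remaining layers to system RAM.
                  import torch
                  from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

                  model_path = "/models/mistral-7b"  # placeholder: local model directory

                  quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
                  tokenizer = AutoTokenizer.from_pretrained(model_path)
                  model = AutoModelForCausalLM.from_pretrained(
                      model_path,
                      quantization_config=quant,
                      device_map="auto",   # split layers across GPU and CPU as needed
                  )

                  inputs = tokenizer("Hello from the homelab:", return_tensors="pt").to(model.device)
                  print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
                  ```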

  • cmnybo@discuss.tchncs.de · 29 days ago

    I certainly wouldn’t want to pay the power bill from leaving a bunch of these running 24/7, but they would work fine if you wanted to learn cluster computing.

    You could always load them up with a bunch of classic games and get all your friends over for a LAN party.

    • TrainguyromOP · 29 days ago

      The thought did cross my mind to run Linpack and see where I’d fall on the TOP500 (or the TOP500 list from 2000, for example, for a fairer comparison haha).
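
      For a rough idea of where that comparison starts, here’s a back-of-the-envelope theoretical peak, assuming quad-core Haswell i5s at about 3 GHz and Haswell’s 16 double-precision FLOP per cycle per core; an actual HPL run over gigabit Ethernet will land well below this:

      ```python
      # Rough theoretical peak (Rpeak) for 25 quad-core Haswell desktops.
      machines = 25
      cores_per_machine = 4
      clock_ghz = 3.0        # assumed sustained clock
      flop_per_cycle = 16    # double precision: 2 x 256-bit FMA units per Haswell core

      rpeak_gflops = machines * cores_per_machine * clock_ghz * flop_per_cycle
      print(f"Theoretical peak: {rpeak_gflops:,.0f} GFLOPS (~{rpeak_gflops / 1000:.1f} TFLOPS)")
      ```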

  • seaQueue@lemmy.world · 28 days ago (edited)

    Distcc, maybe Gluster. Run a Docker Swarm setup on PVE or something.

    Machines like those are a little hard to exploit well because of the limited network bandwidth between them. Other mini PC models that have a PCIe slot are fun because you can jam high-speed networking into them along with NVMe, then do rapid failover between machines with very little impact when one goes offline.

    If you do want to bump your bandwidth per machine, you might be able to repurpose the WLAN M.2 slot for a 2.5GbE port, but you’ll likely have to hang the module out the back through a serial port or something. Aquantia USB modules work well too; those can provide 5GbE fairly stably.

    Edit: Oh, you’re talking about the larger desktop EliteDesk G1, not the USFF tiny machines. Yeah, you can jam whatever half-height cards into these you want - go wild.

    • TrainguyromOP · 29 days ago

      From the listing photos these actually have half-height expansion slots! So GPU options are practically nonexistent, but networking and storage options are blown wide open compared to the mini PCs that are more prevalent now.

      • seaQueue@lemmy.world · 29 days ago

        Yeah, you’ll be fairly limited as far as GPU solutions go. I have a handful of half-height AMD cards kicking around that originally shipped in T740s and similar, but they’re really only good for hardware transcoding or hanging extra monitors off the machine - it’s difficult to find a half-height board with a useful amount of VRAM for ML/AI tasks.

    • TrainguyromOP · 29 days ago (edited)

      12 cents per kilowatt-hour. I certainly don’t plan on leaving more than a couple on long term. I might get lucky with the weather and need the heating though :)