• 1 Post
  • 488 Comments
Joined 10 months ago
Cake day: July 14th, 2023


  • That’s a bit abstract, but saying what others “should” do is both stupid and rude.

    Buddy, if anyone’s being stupid and rude in this exchange, it’s not me.

    And any true statement is the same as all other true statements in an interconnected world.

    It sounds like the interconnected world you’re referring to is entirely in your own head, with logic that you’re not able or willing to share with others.

    Even if I accepted that you were right - and I don’t accept that, to be clear - your statements would still be nonsensical given that you’re making them without any effort to clarify why you think them. That makes me think you don’t understand why you think them - and if you don’t understand why you think something, how can you be so confident that you’re correct?



  • Because a good person would never need those. If you want to have shadowbans on your platform, you are not a good one.

    This basically reads as “shadow bans are bad and have no redeeming factors,” but you haven’t explained why you think that.

    If you’re a real user and you only have one account (or have multiple legitimate accounts) and you get shadow-banned, it’s a terrible experience. Shadow bans should never be used on “real” users even if they break the ToS, and IME, they generally aren’t. That’s because shadow bans solve a different problem.

    In content moderation, if a user posts something that’s unacceptable on your platform, generally speaking, you want to remove it as soon as possible. Depending on how bad the content they posted was, or how frequently they post unacceptable content, you will want to take additional measures. For example, if someone posts child pornography, you will most likely ban them and then (as required by law) report all details you have on them and their problematic posts to the authorities.

    Where this gets tricky, though, is with bots and multiple accounts.

    If someone is making multiple accounts for your site - whether by hand or with bots - and using them to post unacceptable content, how do you stop that?

    Your site has a lot of users, and bad actors aren’t limited to only having one account per real person. A single person - let’s call them a “Bot Overlord” - could run thousands of accounts - and it’s even easier for them to do this if those accounts can only be banned with manual intervention. You want to remove any content the Bot Overlord’s bots post and stop them from posting more as soon as you realize what they’re doing. Scaling up your human moderators isn’t reasonable, because the Bot Overlord can easily outscale you - you need an automated solution.

    Suppose you build an algorithm that detects bots with incredible accuracy - 0% false positives and an estimated 1% false negatives. Great! Then, you set your system up to automatically ban detected bots.

    A couple days later, your algorithm’s accuracy has dropped - from 1% false negatives to 10%. 10 times as many bots are making it past your algorithm. A few days after that, it gets even worse - first 20%, then 30%, then 50%, and eventually 90% of bots are bypassing your detection algorithm.

    You can update your algorithm, but the same thing keeps happening. You’re stuck in an eternal game of cat and mouse - and you’re losing.

    What gives? Well, you made a huge mistake when you set the system up to ban bots immediately. In your system, as soon as a bot gets banned, the bot creator knows. Since you’re banning every bot you detect as soon as you detect them, this gives the bot creator real-time data. They can basically reverse engineer your unpublished algorithm and then update their bots so as to avoid detection.

    One solution to this is ban waves. Those work by detecting bots (or cheaters, in the context of online games) and then holding off on banning them until you can ban them all at once.

    Great! Now the Bot Overlord will have much more trouble reverse-engineering your algorithm. They won’t know specifically when a bot was detected, just that it was detected within a certain window - between its creation and ban date.

    But there’s still a problem. You need to minimize the damage the Bot Overlord’s accounts can do between when you detect them and when you ban them.

    You could try shortening the time between ban waves. The problem is that ban waves grow more effective the longer the window between them is. With an hourly ban wave, for example, the Bot Overlord could test a bunch of stuff out and get feedback every hour.

    Shadow bans are one natural solution to this problem. As soon as you detect a bot, you can prevent it from causing more damage: its posts stop reaching anyone else, but the Bot Overlord can’t quickly tell that the account was shadow-banned, so their bots keep functioning. That gives you more information about the Bot Overlord’s system and lets you refine your algorithm to be even more effective in the future, rather than the other way around.
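    To make the flow above concrete, here’s a minimal sketch of shadow bans combined with ban waves. All names (`Account`, `Moderator`, etc.) are hypothetical, invented for illustration; this is not any real platform’s moderation code.

    ```python
    from dataclasses import dataclass


    @dataclass
    class Account:
        name: str
        shadow_banned: bool = False
        banned: bool = False


    class Moderator:
        """Sketch of the shadow-ban + ban-wave flow described above."""

        def __init__(self) -> None:
            self.detected: list[Account] = []

        def on_bot_detected(self, account: Account) -> None:
            # Shadow-ban immediately: the account's posts stop being shown
            # to other users, but the operator sees no change on their end.
            account.shadow_banned = True
            self.detected.append(account)

        def is_visible(self, author: Account) -> bool:
            # Content from shadow-banned or banned accounts is hidden from
            # everyone except the author.
            return not (author.shadow_banned or author.banned)

        def run_ban_wave(self) -> int:
            # Convert all pending shadow bans into real bans at once, so the
            # operator only learns detection happened "sometime before now",
            # not exactly when.
            for account in self.detected:
                account.banned = True
            count = len(self.detected)
            self.detected.clear()
            return count
    ```

    The key design point is that detection (immediate, invisible) and banning (delayed, batched) are decoupled, which is what denies the bot operator a real-time feedback signal.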

    I’m not aware of another way to effectively manage this issue. Do you have a counter-proposal?

    Out of curiosity, do you have any experience working in content moderation for a major social media company? If so, how did that company balance respecting user privacy with effective content moderation without shadow bans, accounting for the factors I talked about above?



  • If you’re in the US, unpaid overtime is only permissible if you’re salaried exempt. To be salaried exempt:

    • you must make at least $684 every week ($35,568/year)
    • your primary job responsibility must be one of the following:
      • executive - managing the enterprise, or managing a customarily recognized department or subdivision; you must also regularly direct the work of at least two FTEs and be able to hire / fire people (or be able to provide recommendations that are strongly considered)
      • administrative - office or non-manual work directly related to the management or general business operations
      • learned professional - work which is predominantly intellectual in character and which includes work requiring the consistent exercise of discretion and judgment, in the field of science or learning
      • creative professional - work requiring invention, imagination, originality or talent in a recognized field of artistic or creative endeavor
      • IT related - computer systems analyst, computer programmer, software engineer or other similarly skilled worker in the computer field
      • sales
      • HCE - highly compensated employee (you must be making at least $107,432 per year)
    • your pay must not be reduced if your work quality is reduced or if you work fewer hours
      • for example, if you work 5 days a week, for an hour a day, you must get the same pay as if you worked 8 hours every day. There are some permissible deductions they can make - like if you miss a full day - and they can require you to use vacation time or sick time, if you have it - and of course they can fire you if you’re leaving without completing your tasks… but they still have to pay you.
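    As a rough sketch of the salary test and the back-pay math, here’s a hedged example. The $684/week floor and the standard FLSA time-and-a-half rate for hours over 40/week are the federal figures; function names are made up, state rules may differ, and this is not legal advice.

    ```python
    # Federal salary floor for most exemptions: $684/week ($35,568/year).
    EXEMPT_WEEKLY_MINIMUM = 684.00


    def meets_salary_floor(weekly_salary: float) -> bool:
        """Salary test only; the duties tests above must also be met."""
        return weekly_salary >= EXEMPT_WEEKLY_MINIMUM


    def back_pay_estimate(hourly_rate: float, hours_per_week: float,
                          weeks: int) -> float:
        """Rough overtime back pay for a non-exempt worker paid straight time.

        The overtime premium is the extra 0.5x on top of the straight time
        already paid (FLSA overtime is 1.5x for hours over 40 per week).
        """
        overtime_hours = max(0.0, hours_per_week - 40.0)
        return overtime_hours * hourly_rate * 0.5 * weeks
    ```

    For example, someone misclassified at $20/hour working 50-hour weeks for a year would be owed roughly `back_pay_estimate(20, 50, 52)` = $5,200 in overtime premiums alone.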

    Check out https://www.dol.gov/agencies/whd/fact-sheets/17a-overtime for more details on the above.

    It’s quite possible you’re eligible for back-paid overtime.

    Note also that the minimum exempt wages are increasing in July.

    Re your “cover my expenses just to exist” bit and the follow-up about employers catching on and pushing abusive shit… if this is related to a disability make sure to look into getting that on record and seeking an accommodation. If your primary job duty is X and they’re pushing you to do Y, but your disability makes Y infeasible, then it’s a pretty reasonable accommodation to ask to not have to do Y (assuming your HCP agrees, of course).




  • You can also get replacement Hall effect analog sticks from Gulikit and install them in your joycons yourself. They also made them for the Steam Deck. I installed a set in my old LCD Steam Deck and it was really straightforward, but I suspect the joycons take a bit more work.

    It’s a shame they don’t make them for the PS5 - there are multiple third party controllers with Hall effect sensors that are compatible with pretty much everything else, but there’s only one Hall effect controller compatible with the PS5 (the Nacon Revolution 5 Pro), and it’s $200.


  • If you use that docker compose file, I recommend you comment out the build section and uncomment the image section in the lemmy service.

    I also recommend you use a reverse proxy and Docker networks rather than exposing the postgres instance on port 5433, but if you aren’t familiar with Docker networks you can leave it as is for now. If you’re running locally and don’t open that port in your router’s firewall, it’s a non-issue unless there’s an attacker on your LAN. But given that you’re not gaining anything from exposing it (unless you regularly need to connect to the DB directly - as a one-off you could temporarily add the port mapping), it doesn’t make sense to increase your attack surface for no benefit.
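    As a hypothetical sketch (the service names, image tags, and network name below are assumptions, not the contents of the actual Lemmy compose file), the shape you want looks something like this:

    ```yaml
    services:
      lemmy:
        # build: .   # commented out: use the prebuilt image instead
        image: dessalines/lemmy:0.19.3
        networks:
          - lemmyinternal

      postgres:
        image: postgres:16-alpine
        # No "ports:" mapping here - the DB is only reachable from other
        # services on the shared Docker network, not from the host or LAN.
        networks:
          - lemmyinternal

    networks:
      lemmyinternal:
    ```

    With no `ports:` entry on the postgres service, other containers on `lemmyinternal` can still reach it by service name, but nothing outside Docker can.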


  • There’s an idea that just won’t die that Linux is extremely difficult to use/maintain/troubleshoot. It’s certainly often a lot easier than Windows, so it just gets to me to see that idea propagated.

    Pretending it’s all sunshine and rainbows isn’t realistic, either. That said, I had a completely different takeaway - that the issues are mostly kinda random and obscure or nitpicky, and the sorts of things you would encounter in any mature OS.

    The issue about PopOS not having a Paint application is actually the most mainstream of them - and it feels very similar to the complaints about iPadOS not including a Calculator app by default. But nobody is concluding that iPads aren’t usable as a result.

    Teams having issues is believable and relevant to many users. It doesn’t matter whose fault an issue is if the user is impacted. TBH, I didn’t even know that Teams was available on Linux.

    That said, the only people who should care about Teams issues on Linux are the ones who need to use it, and anyone who’s used Microsoft products understands that they’re buggy regardless of the platform. Teams has issues on macOS, too. OneDrive has issues on macOS. On Windows 10, you can’t even use a local account with Office 365.


  • I’m not the person you responded to, but I can say that it’s a perfectly fine take. My personal experience and the commonly voiced opinions about both browsers support this take.

    Unless you’re using 5 tabs max at a time, in my personal experience Firefox is more than an order of magnitude more memory efficient than Chrome when dealing with long-lived sessions with the same number of tabs (dozens up to thousands).

    I keep hundreds of tabs open in Firefox on my personal machine (with 16 GB of RAM) and it’s almost never consuming the most memory on my system.

    Policy prohibits me running Firefox on my work computer, so I have to use Chrome. Even with much more memory (both on 32 GB and 64 GB machines) and far fewer tabs (20-30 at most vs 200-300), Chrome often ends up taking far too much memory and suffering a substantial performance drop, and I have to go through and prune the tabs I don’t need right now, bookmark things that can be done later, etc…

    Also, see https://www.techspot.com/news/102871-zero-regrets-firefox-power-user-kept-7500-tabs.html - I’ve never seen anything similar for Chrome and wasn’t able to find anything.


  • Definitely not, I do the same.

    I installed 64 GB of RAM in my Windows laptop 4 years ago and had been using 64 GB of RAM in the laptop that it replaced - which was from 2013 (I think I bought it in 2014-2015). I was using 32 GB of RAM prior (on Linux and Windows laptops), all the way back to 2007 or so.

    My work MacBook Pros generally have 32-64 GB of RAM, but my personal MacBook Air (the 15” M2) has 16 GB, simply because the upgrade wasn’t a cost effective one (and the M1 before it had performed great with 16) and because I’d only planned on using it for casual development. But since I’ve been using it as my main personal development machine and for self-hosted AI, and have run into its limits, when I replace it I’ll likely opt for 64 GB or more.

    My Windows gaming desktop only has 32 GB of RAM, though - that’s because getting fast RAM timings with more sticks - particularly 4 sticks - was prohibitively expensive when I built it, and then when the cost wasn’t a concern and I tried to upgrade, I learned that my third and fourth RAM slots weren’t functional. I could upgrade to 64 GB in two slots but it wouldn’t really be worth it, since I only use it for gaming.

    My Linux desktop / server has 128 GB of ECC RAM, though, because that’s as much as the motherboard supported.



  • It first showed up on Netflix in mid-2023, in the middle of the Writers Guild strike (meaning there was a dearth of new content). So basically the Netflix effect. It had been on other streaming platforms before - Prime Video and Hulu - but Netflix is still a juggernaut compared to them - it has 5 times as many subscribers as Hulu, for example, and many of the subscribers to Prime Video are incidental and don’t stream as much on average as Netflix users.

    I assume Netflix funded off-platform advertising, but the on-platform advertising has a big effect, too. And given that Suits broke a record in the first week it was on Netflix and they have a spinoff coming, it makes sense that they would keep advertising.


  • The idea that someone does this willingly implies that the user knows the implications of their choice, which most of the Fediverse doesn’t seem to do

    The terms of service for lemmy.world, which you must agree to upon sign-up, make reference to federating. If you don’t know what that means, it’s your responsibility to look it up and understand it. I assume other instances have similar sign-up processes. The source code to Lemmy is also available, meaning that a full understanding is available to anyone willing to take the time to read through the code, unlike with most social media companies.

    What sorts of implications of the choice to post to Lemmy do you think that people don’t understand, that people who post to Facebook do understand?

    If the implied license was enough, Facebook and all the other companies wouldn’t put these disclaimers in their terms of service.

    It’s not an implied license. It’s implied permission. And if you post content to a website that’s hosting and displaying such content, it’s obvious what’s about to happen with it. Please try telling a judge that you didn’t understand what you were doing, that you sued without first trying to have the content deleted or filing a DMCA notice, and see if that judge sides with you.

    Many companies have lengthy terms of service with a ton of CYA legalese that does nothing. Even so, an explicit license to your content in the terms of service does do something - but that doesn’t mean that you’re infringing copyright without it. If my artist friend asks me to take her art piece to a copy shop and to get a hundred prints made for her, I’m not infringing copyright then, either, nor is the copy shop. If I did that without permission, on the other hand, I would be. If her lawyer got wind of this and filed a suit against me without checking with her and I showed the judge the text saying “Hey hedgehog, could you do me a favor and…,” what do you think he’d say?

    Besides, Facebook does things that Lemmy instances don’t do. Facebook’s codebase isn’t open, and they’d like to reserve the ability to do different things with the content you submit. Facebook wants to be able to do non-obvious things with your content. Facebook is incorporated in California and has a value in the hundreds of billions, but Lemmy instances are located all over the world and I doubt any have a value even in the millions.




  • That was my first comment, and all I did was share a list of games that have historically used EAC. If a game shipped with EAC at launch, it’s pretty clear its publisher has used EAC in their games. I made no statements about it being kernel-level or otherwise.

    That said, EAC is a kernel-level anticheat, but unlike Vanguard it doesn’t run at startup. Whether a tool is kernel-level is a matter of which privileges it has when it runs, not when it starts up. Running at startup lets an anti-cheat tool perform more diagnostics and catch cheats that might otherwise go uncaught, but it’s also more invasive and increases the attack surface of the people who have it installed.