Deleted

  • Jamie@jamie.moe · 52 points · 11 months ago

    If you can use human screening, you could ask about a recent event that didn’t happen. This would trip up an LLM: its training data has a cutoff, so recent events aren’t well represented, and it’s prone to hallucination. Ask about an event that never occurred and you may get a hallucinated answer full of details about something that doesn’t exist.

    Tried it on ChatGPT GPT-4 with Bing and it failed the test, so any other LLM out there shouldn’t stand a chance.

    • pandarisu@lemmy.world · 15 points · 11 months ago

      On the other hand, you have insecure humans who make stuff up to pretend they know what you’re talking about

    • AFK BRB Chocolate@lemmy.world · 11 points · 11 months ago

      That’s a really good one, at least for now. At some point they’ll have real-time access to news and other material, but for now that’s always behind.

    • incompetentboob@lemmy.world · 9 points · 11 months ago

      Google Bard definitely has access to the internet to generate responses.

      ChatGPT was purposely not given access, but they are building plugins to slowly give it access to real-time data from select sources

      • Jamie@jamie.moe · 11 points · 11 months ago

        When I tested it on ChatGPT prior to posting, I was using the bing plugin. It actually did try to search what I was talking about, but found an unrelated article instead and got confused, then started hallucinating.

        I have access to Bard as well, and gave it a shot just now. It hallucinated an entire event.

    • 10ofSwords@sopuli.xyz · 5 points · 11 months ago

      This is a very interesting approach.
      But I wonder whether everyone could answer it easily, given cultural differences, differing media sources across the world, etc.
      Someone in Asia might not guess something about US television content, for example.
      Unless the question relates to a very universal topic, which an AI would then be more likely to guess…

    • Tmpod@lemmy.ptM · 1 point · 11 months ago

      ooh that’s an interesting idea for sure, might snatch it :P

    • underisk@lemmy.ml · 1 point · 11 months ago

      For LLMs specifically, my go-to test is to ask them to generate a paragraph of random words with no coherent meaning at all. It asks them to do the opposite of what they’re trained to do, so it trips them up pretty reliably. The closest I’ve seen one get was a comma-separated list of random words, and that was after coaching prompts with examples.

      • abclop99@beehaw.org · 3 points · 11 months ago

        Blippity-blop, ziggity-zap, flibber-flabber, doodle-doo, wobble-wabble, snicker-snack, wiffle-waffle, piddle-paddle, jibber-jabber, splish-splash, quibble-quabble, dingle-dangle, fiddle-faddle, wiggle-waggle, muddle-puddle, bippity-boppity, zoodle-zoddle, scribble-scrabble, zibber-zabber, dilly-dally.

        That’s what I got.

        Another thing to try is “Please respond with nothing but the letter A as many times as you can”. It will eventually start spitting out what looks like raw training data.

        • myersguy@lemmy.simpl.website · 2 points · 11 months ago (edited)

          Just tried with GPT-4, it said “Sure, here is the letter A 2048 times:” and then proceeded to type 5944 A’s

        • underisk@lemmy.ml · 2 points · 11 months ago (edited)

          Yeah, exactly. Those aren’t words, they aren’t random, and they’re in a comma separated list. Try asking it to produce something like this:

          Green five the scoured very fasting to lightness air bog.

          Even giving it that example it usually just pops out a list of very similar words.

  • Zamboniman@lemmy.ca · 31 points · 11 months ago (edited)

    How would you design a test that only a human can pass, but a bot cannot?

    Very simple.

    In every area of the world, there are one or more volunteers, depending on population per 100 sq km. When someone wants to sign up, they knock on this person’s door and shake their hand. The volunteer approves the sign-up as human. For disabled folks, a subset of volunteers will go to them to do this. In extremely remote areas, various individual workarounds can be applied.

    • WaterWaiver@aussie.zone · 3 points · 11 months ago

      I can’t help but think of the opposite problem. Imagine if a site completely made of bots manages to invite one human and encourages them to invite more humans (via doorstep handshakes or otherwise). Results would be interesting.

    • WaterWaiver@aussie.zone · 3 points · 11 months ago (edited)

      This has some similarities to the invite-tree method that lobste.rs uses. You have to convince another, existing user that you’re human to join. If a bot invites lots of other bots it’s easy to tree-ban them all, if a human is repeatedly fallible you can remove their invite privileges, but you still get bots in when they trick humans (lobsters isn’t handshakes-at-doorstep level by any margin).

      I convinced another user to invite me over IRC. That’s probably the worst medium for convincing someone that you’re human, but hey, humanity through obscurity :)
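The tree-ban idea above can be sketched in a few lines. This is purely illustrative (not lobste.rs code), assuming each user records who invited them:

```python
def tree_ban(invited_by, bad_user):
    """Ban a user plus everyone in their invite subtree.

    invited_by maps each username to the user who invited them
    (None for founding members). Data layout is invented for
    illustration.
    """
    banned = {bad_user}
    changed = True
    while changed:  # repeat until no new descendants are found
        changed = False
        for user, inviter in invited_by.items():
            if inviter in banned and user not in banned:
                banned.add(user)
                changed = True
    return banned

tree = {"alice": None, "spambot": "alice", "spambot2": "spambot"}
print(sorted(tree_ban(tree, "spambot")))  # ['spambot', 'spambot2']
```

Banning `spambot` sweeps up everyone it invited, while its own (possibly duped) inviter is left alone.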

      • Zamboniman@lemmy.ca · 0 points · 11 months ago

        I convinced another user to invite me over IRC. That’s probably the worst medium for convincing someone that you’re human

        Hahah, I’ll say!

    • 𝕙𝕖𝕝𝕡@lemmy.ml · 1 point · 11 months ago

      This would tie in nicely to existing library systems. As a plus, if your account ever gets stolen or if you’re old and don’t understand this whole technology thing, you can talk to a real person. Like the concept of web of trust.

  • Downtide@sh.itjust.works · 26 points · 11 months ago (edited)

    The trouble with any sort of captcha or test, is that it teaches the bots how to pass the test. Every time they fail, or guess correctly, that’s a data-point for their own learning. By developing AI in the first place we’ve already ruined every hope we have of creating any kind of test to find them.

    I used to moderate a fairly large forum that had a few thousand sign-ups every day. Every day, me and the team of mods would go through the new sign-ups, manually checking usernames and email addresses. The ones that were bots were usually really easy to spot. There would be sequences of names, both in the usernames and email addresses used, for example ChristineHarris913, ChristineHarris914, ChristineHarris915 etc. Another good tell was mixed-up ethnicities in the names: e.g. ChristineHuang or ChinLaoHussain. 99% of them were from either China, India or Russia (they mostly don’t seem to use VPNs; I guess they don’t want to pay for them). We would just ban them all en masse. Each banned account would get an automated email to say so. Legitimate people would of course reply to that email to complain, but in the two years I was a mod there, only a tiny handful ever did, and we would simply apologise and let them back in. A few bots slipped through the net but rarely more than 1 or 2 a day; those we banned as soon as they made their first spam post, but we caught most of them before that.

    So, I think the key is a combination of the No-Captcha, which analyses your activity on the sign-up page, combined with an analysis of the chosen username and email address, and an IP check. But don’t use it to stop the sign-up, let them in and then use it to decide whether or not to ban them.
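The manual pattern-spotting described above (many sign-ups sharing one name stem with sequential numeric suffixes) is straightforward to automate. A rough sketch, with a made-up threshold:

```python
import re
from collections import Counter

def suspicious_stems(usernames, threshold=3):
    """Flag name stems that appear with several numeric suffixes,
    e.g. ChristineHarris913, ChristineHarris914, ChristineHarris915."""
    stems = Counter()
    for name in usernames:
        match = re.fullmatch(r"([A-Za-z]+)(\d+)", name)
        if match:
            stems[match.group(1)] += 1
    return {stem for stem, count in stems.items() if count >= threshold}

signups = ["ChristineHarris913", "ChristineHarris914",
           "ChristineHarris915", "dave42", "Lvxferre"]
print(suspicious_stems(signups))  # {'ChristineHarris'}
```

A real deployment would feed this into the ban-after-signup review described above rather than block registration outright.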

  • underisk@lemmy.ml · 23 points · 11 months ago (edited)

    There will never be any kind of permanent solution to this. Botting is an arms race and as long as you are a large enough target someone is going to figure out the 11ft ladder for your 10ft wall.

    That said, when designing a captcha challenge you generally need to subvert the common approach just enough that attackers can’t use an off-the-shelf solution. For example, instead of having users type out the letters in an image, ask for the result of a math problem stored in the image. The attacker then needs more than a drop-in OCR tool to break it, and OCR is mostly trained on words, so it’s likely to struggle with math notation. It’s not that difficult to work around, but it does force them to write a custom approach for your captcha, which can deter most casual attempts for some time.
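Challenge generation and checking for that math-in-an-image idea might look like the sketch below (image rendering omitted; all names are illustrative). The point is that the server accepts the computed result, so transcribing the image text with OCR is not enough:

```python
import random

def make_challenge(rng=random):
    """Build a small arithmetic problem; the question string is what
    would be rendered into the captcha image."""
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    return f"{a} x {b} = ?", a * b

def check_answer(expected, submitted):
    """Accept only the computed result, never the literal text."""
    try:
        return int(submitted) == expected
    except (TypeError, ValueError):
        return False

question, answer = make_challenge()
print(question)
print(check_answer(answer, str(answer)))  # True for the correct result
```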

  • alex [they/them]@beehaw.org · 23 points · 11 months ago

    Honeypots - ask a very easy question, but make it hidden on the website so that human users won’t see it and bots will answer it.
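A minimal sketch of the honeypot approach (field name and markup invented for illustration): the form carries a field that CSS hides from human visitors, and the server rejects any submission that fills it in.

```python
# Markup served to the browser; humans never see the "website" field.
HONEYPOT_FORM = """
<form method="post" action="/signup">
  <input name="username">
  <input name="website" style="display:none" tabindex="-1" autocomplete="off">
  <button>Sign up</button>
</form>
"""

def is_probably_bot(form_data):
    """A naive bot that fills every input betrays itself by
    populating the hidden field."""
    return bool(form_data.get("website", "").strip())

print(is_probably_bot({"username": "carol"}))                         # False
print(is_probably_bot({"username": "x", "website": "spam.example"}))  # True
```

As the replies in this thread note, hiding a field visually without thinking about screen readers has real accessibility implications.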

    • ShittyKopper [they/them]@lemmy.w.on-t.work · 6 points · 11 months ago (edited)

      So, how will you treat screen readers? Will they see that question? If you hide it from screen readers as well, what’s stopping bots from pretending to be screen readers when scraping your page? Hell, it’ll likely be easier on the bot devs to make them work that way and I assume there are already some out there that do.

      • alex [they/them]@beehaw.org · 3 points · 11 months ago

        That’s an excellent question and I’m glad you raised it. I need to care more about accessibility and learn more about security in general :)

    • Björn Tantau@feddit.de · 4 points · 11 months ago

      Nowadays bots use real browsers to “see” all the fields a human would see. They won’t fill out fields that are hidden from a human.

  • baconeater@lemm.ee · 22 points · 11 months ago

    Just ask them if they are a bot. Remember, you can’t lie on the internet…

    • Hudell@lemmy.dbzer0.com · 9 points · 11 months ago

      I once worked as a third party at a large internet news site and got assigned a task to replace their captcha with a partner’s captcha system. This new system would play an ad and ask the user to type the name of the company in that ad.

      In my first test I noticed that the company name was available in a public variable on the page, and I showed that to my manager by opening the dev tools and passing the captcha with just a few commands.

      His response: “no user is gonna go into that much effort just to avoid typing the company name”.

    • Notyou@sopuli.xyz · 6 points · 11 months ago

      I’m pretty sure you need two bots, and you ask one bot whether the other bot would lie about being a bot… something like that.

  • Lvxferre@lemmy.ml · 17 points · 11 months ago (edited)

    Show a picture like this: [photo of an improbably huge kitten next to a person]

    And then ask the question, “would this kitty fit into a shoe box? Why, or why not?”. Then sort the answers manually. (Bonus: it’s cuter than a captcha.)

    This would not scale well, and you’d need a secondary method to handle the potential blind user, but I don’t think that bots would be able to solve it correctly.

    • vegivamp@feddit.nl · 8 points · 11 months ago

      This particular photo is shopped, but I think false-perspective illusions might actually be a good path…

      • Lvxferre@lemmy.ml · 16 points · 11 months ago

        It’s fine if the photo is either shopped or a false-perspective illusion. It could even be a drawing. The idea is that this sort of picture imposes a lot of barriers for the bot in question:

        • must be able to parse language
        • must be able to recognise objects in a picture, even out-of-proportion ones
        • must be able to guesstimate the size of those objects, based on nearby ones
        • must handle real-world knowledge, such as “X only fits in Y if X is smaller than Y”
        • must handle hypothetical, unrealistic scenarios, such as “what if there was a kitty this big?”

        Each of those barriers decreases the likelihood of a bot being able to solve the question.

    • Susaga@sh.itjust.works · 8 points · 11 months ago

      Is the kitty big, or is the man small? And how big are the shoes? This is a difficult question.

      • Lvxferre@lemmy.ml · 5 points · 11 months ago

        Here’s where things get interesting: humans could theoretically come up with multiple answers to this. Some will rest on implicit assumptions (such as the size of the shoebox), some won’t be actual answers (like “what’s the point of this question?”), but they should show a kind of context awareness that [most? all?] bots don’t.

        A bot would answer this mechanically. At best it would be something like “yes, because your average kitten is smaller than your average shoebox”. The answer would be technically correct but disregard context completely.

    • bionicjoey@lemmy.ca · 3 points · 11 months ago

      Reminds me of how bots tend to be really bad at figuring out whether the word “it” applies to the subject or the object in a sentence like: “The bed does not fit in the tent because it is too big”

  • coolin@beehaw.org · 15 points · 11 months ago

    I mean, advanced AI aside, there are already browser extensions you can pay for that have humans on the other end solving your captchas. It’s pretty much impossible to stop, imo.

    A long-term solution would probably be something like a public/private key pair issued by a government to verify you’re a real person, which you must provide to sign up for a site. We obviously don’t have the resources to do that 😐 and people are going to leak theirs starting day 1.

    Honestly, disregarding the dystopian nature of it all, I think Sam Altman’s Worldcoin is a good idea at least for authentication, because all you need to do is scan your iris to prove you are a person and you’re in easily. People could steal your eyes tho 💀 so it’s not foolproof. But in general, biometric proof of personhood could be a way forward as well.

  • anditshottoo@lemmy.world · 13 points · 11 months ago

    The best tests I am aware of are ones that require contextual understanding of empathy.

    For example: “You are walking along a beach and see a turtle upside down on its back. It is struggling and cannot move; if it can’t right itself, it will starve and die. What do you do?”

    The problem is that the questions need to be more or less unique.

    • bitsplease@lemmy.ml · 7 points · 11 months ago

      I don’t think this technique would stand up to modern LLMs, though. I put this question into ChatGPT and got the following:

      “I would definitely help the turtle. I would cautiously approach the turtle, making sure not to startle it further, and gently flip it over onto it’s feet. I would also check to make sure it’s healthy and not injured, and take it to a nearby animal rescue if necessary. Additionally, I may share my experience with others to raise awareness about the importance of protecting and preserving our environment and the animals that call it home”

      Granted, it’s got the classic ChatGPT over-formality that might clue in someone reading the response, but that could be solved with better prompting on my part. Modern LLMs like ChatGPT are really good at faking empathy and other human social skills, so I don’t think this approach would work.

      • Manticore@lemmy.nz · 1 point · 11 months ago (edited)

        Ultimately ChatGPT is a text generator. It doesn’t understand what it’s writing; it has just observed enough human writing that it can generate similar text closely matching it. That’s why, if you ask ChatGPT for information that doesn’t exist, it will generate convincing lies. It doesn’t know it’s lying - it’s doing its job of generating the text you wanted. Was it close enough, boss?

        As long as humans talk about a topic, generative AI can mimic their commentary. That includes love, empathy, poetry, etc. Writing text can never be an answer for captcha; it would need to be something that can’t be put in a dataset - even a timestamped photo can be spoofed with the likes of thispersondoesnotexist.com.

        The only things AI/bots currently won’t do are whatever’s deliberately disabled on the source AI for legal reasons (since almost nobody is writing their own AI models), but I doubt you want a captcha where the user lists every slur they can think of, or bomb recipes.

    • tr00st@lemmy.tr00st.co.uk · 7 points · 11 months ago

      I, a real normal human person, would consume the turtle with my regular bone teeth, in the usual fashion.

    • lazyplayboy@lemmy.world · 5 points · 11 months ago

      "If I encounter a turtle in distress, here’s what I would recommend doing:

      Assess the situation: Approach the turtle calmly and determine the extent of its distress. Ensure your safety and be mindful of any potential dangers in the environment.

      Protect the turtle: While keeping in mind that turtles can be easily stressed, try to shield the turtle from any direct sunlight or extreme weather conditions to prevent further harm.

      Determine the species: If you can, identify the species of the turtle, as different species have different needs and handling requirements. However, if you are unsure, treat the turtle with general care and caution.

      Handle the turtle gently: If it is safe to do so, carefully pick up the turtle by its sides, avoiding excessive pressure on the shell. Keep the turtle close to the ground to minimize any potential fall risks.

      Return the turtle to an upright position: Find a suitable location nearby where the turtle can be placed in an upright position. Ensure that the surface is not too slippery and provides the turtle with traction to move. Avoid placing the turtle back into the water immediately, as it may be disoriented and in need of rest.

      Observe the turtle: Give the turtle some space and time to recover and regain its strength. Monitor its behavior to see if it is able to move on its own. If the turtle seems unable to move or exhibits signs of injury, it would be best to seek assistance from a local wildlife rehabilitation center or animal rescue organization.

      Remember, when interacting with wildlife, it’s important to prioritize their well-being and safety. If in doubt, contacting local authorities or experts can provide the most appropriate guidance and support for the situation."

    • vegivamp@feddit.nl · 8 points · 11 months ago

      The Turing test is about whether it passes as human, not whether it is human.

      • fades@beehaw.org · 2 points · 11 months ago

        That’s a bit of an oversimplification; the Turing test absolutely is relevant for tests a human can pass but a bot cannot.

        • vegivamp@feddit.nl · 2 points · 11 months ago

          Then it is long obsolete, because to a common observer something like ChatGPT could easily pass that test if it wasn’t instructed to clarify that it’s a machine at every turn.

          • fades@beehaw.org · 2 points · 11 months ago

            Alan Turing is fucking dead, it was a joke given the relevance of the question to his work.

            What is your point here???

            No fucking shit they can’t ask Turing for real

            • vegivamp@feddit.nl · 1 point · 11 months ago

              …ask Turing? Who suggested that? The Turing test is not “let’s ask Alan” 😋

    • SkyeStarfall@lemmy.blahaj.zone · 1 point · 11 months ago

      The Turing test has already been overcome by AI. Models such as ChatGPT, if tuned a bit to give more informal answers and to insist they are human, can easily pass.

      • fades@beehaw.org · 1 point · 11 months ago (edited)

        It was a joke; Alan Turing is dead, and he was famous for his work on the Turing test, which was used to test whether a bot could pass as a human - at the time, a test a human could pass but a bot could not.

    • User Deleted@lemmy.dbzer0.com (OP) · 1 point · 11 months ago

      I’ll report them for harassment because everyone who knows my birthday does not give me gifts, so they must be a stalker that somehow found out my birthday.

  • mub@lemmy.ml · 11 points · 11 months ago

    I doubt you can ever fully stop bots. The only way I can see to significantly reduce them is to make everyone pay a one-off £1 to sign up and force the use of a debit/credit card - no PayPal, etc. The obvious issues are that it removes anonymity and blocks entry.

    Possible mitigations:

    • Maybe you don’t need to keep the card information after the user pays for sign-up?
    • Signed-up users can be given a few “invite codes” a year to enable those who don’t have the means to pay the £1 to get an account.

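The invite-code mitigation above can be sketched with unguessable, single-use tokens; the storage details here are invented for illustration:

```python
import secrets

def issue_invites(count=3):
    """Give a member a handful of unguessable invite codes."""
    return {secrets.token_urlsafe(8) for _ in range(count)}

def redeem(valid_codes, code):
    """Single use: a code is removed the moment it is accepted."""
    if code in valid_codes:
        valid_codes.discard(code)
        return True
    return False

codes = issue_invites()
one = next(iter(codes))
print(redeem(codes, one), redeem(codes, one))  # True False
```

Tying each code back to its issuer also enables the tree-ban approach discussed elsewhere in this thread.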
    • ShittyKopper [they/them]@lemmy.w.on-t.work · 5 points · 11 months ago

      You can just get rid of the whole payment thing and go with invite codes alone. Of course you’ll be limiting registration speed massively (which may not be good depending on whether you’re in the middle of a Reddit exodus), but it is mostly bot-proof.

      • underisk@lemmy.ml · 2 points · 11 months ago

        Invites work in the short term but once the bots get a foothold it quickly falls apart. Back when Gmail was invite only it took only a few months for websites to pop up that automated invite distribution.

  • ANGRY_MAPLE@sh.itjust.works · 9 points · 11 months ago

    This is a bit out there, so bear with me.

    In the past, people discovered that if they applied face paint in a specific way, cameras could no longer recognize their face as a face. With this information, you get (e.g. 4?) different people. You take a clean picture of each of their heads from close proximity.

    Then, you apply makeup to each of them, using the same method that messes with facial recognition software. Next, take a picture of each of their heads from a little further away.

    Fill a captcha with pictures of the faces with the makeup. Give the end user a clean-faced picture, and then ask them to match it to the correct image of the same person’s face but with the special makeup.

    Mess around with the colours and shadow intensity of the images to make everyone’s picture match more closely with everyone else’s picture if you want to add some extra chaos to it. This last bit will keep everyone out if you go too far with it.

    • ANGRY_MAPLE@sh.itjust.works · 4 points · 11 months ago (edited)

      I have also encountered some different styles over the years.

      A good one that I saw involved three shapes: a triangle, a sphere, and a cube. There were three patterns: striped, polka-dotted, and plain. The shapes also had textures; some were smooth, others had fur. There were three backgrounds. I think one was brick and one was flowy colours, but I forget what the third was.

      Anyways, out of those options, it generated a random combination of two shapes, two colours, a texture, and one background. The captcha generated its own three randomized images, and the fourth image matched your generated image. The placement of the fourth image was also randomized.

      I have to be honest, I was tipsy when I used it and it kept me out for longer than I’d like to admit haha.