Things are still moving fast. It’s mid-to-late July now and I’ve spent some time outside, enjoying the summer. It’s been a few weeks since things exploded back in May this year. Have you all settled down in the meantime?

I’ve since moved away from Reddit, and I miss the LocalLlama community over there, which was (and is) buzzing with activity, AI news and discussions every day.

What are you people up to? Have you gotten tired of your AI waifus? Or finished indexing all of your data into some vector database? Have you discovered new applications for AI? Or still toying around and evaluating all the latest fine-tuned variations in constant pursuit of the best llama?

  • bia@lemmy.ml · 11 months ago

    I used it quite a lot at the start of the year, for software architecture and development. But the number of areas where it was useful was so small, and running it locally (which I do for privacy reasons) is quite slow.

    I noticed that much of what was generated needed to be double-checked, and was sometimes just wrong, so I’ve basically stopped using it.

    Now I’m hopeful for better code-generation models, and I’ll spend the fall building a framework around a local model, to see if that helps in guiding the model’s generation.

    • zephyrvs@lemmy.ml · 11 months ago

      I’m pumped for Llama 2, which was released yesterday. Early tests show some big improvements. Can’t wait for Wizard/Vicuna/uncensored versions of it.

      • bia@lemmy.ml · 11 months ago

        Yeah, me too. Hopefully it’ll be readily available after summer vacation and I’ll dig into it.

      • Toxuin@lemmy.ca · 11 months ago

        It’s marginally better than the original but WAY more censored, and the censorship is pretty intrusive. It refused to write a bash script to kill a process by regexp 🤦

        • zephyrvs@lemmy.ml · 11 months ago

          The first uncensored variants are already on Hugging Face, though; look for TheBloke. :)

      • rufus@discuss.tchncs.de (OP) · 11 months ago

        I just watched the YouTube video that got linked here earlier. I forget whether it was better or worse at programming than its predecessor, but it’s not that much of a difference. I’m just now fiddling around with the chat variant. But I’m excited (thrilled, even) for the tuned versions, too.

    • rufus@discuss.tchncs.de (OP) · 11 months ago

      True. I didn’t even bother giving it tasks like that. I don’t think AI is going to replace software designers or programmers anytime soon. Well… maybe except for simple stuff: copy-paste programming, simple scripting, web design and some self-contained, not overly complex tasks. It’s a fascinating tool; it can help you do things quickly, answer questions and build prototypes. But if you throw real work at it, you soon realise it has severe limitations and isn’t even close to human intellect. Okay, maybe you consider it heaven-sent if you studied history instead of computer science and it provides you with Python scripts to sort your data.

      Journalists and other people who write for a living have similar problems. The chatbots generate convincing text and can take over some of the writing work, but if, for example, you need a text that is correct and factual, you’d be better off without AI. At least that’s what I read in some articles about ChatGPT: everyone needs to put in considerable effort to fact-check its output, and to double-check everything, to the point that it doesn’t make sense to run the AI in the first place.

      • bia@lemmy.ml · 11 months ago

        I learned the hard way never to generate anything I couldn’t create myself, or at least verify the validity of.

  • zephyrvs@lemmy.ml · 11 months ago

    I’m building an assistant for Jungian shadow work with persistent storage, but I’m a terrible programmer so it’s taking longer than expected.

    Since shadow work is very intimate and personal, I wouldn’t trust a ChatGPT integration and I’d never be fully open in conversations.

    • rufus@discuss.tchncs.de (OP) · 11 months ago

      Wow. I’m always amazed by the (previously unknown to me) stuff people do. I had to look that one up. Is this some kind of leisure activity? Self-improvement or self-therapy? Or are you just pushing the boundaries of psychology?

      • zephyrvs@lemmy.ml · 11 months ago

        I was fascinated by Jung’s works after tripping on shrooms and becoming obsessed with understanding consciousness. I had already stumbled upon llama.cpp and started playing around with LLMs, and just decided to build a prototype for myself, because I’ve been doing shadow work for self-therapy reasons anyway.

        It’s not really that useful yet, and making it into a product is unlikely, because most people who wouldn’t trust ChatGPT won’t trust an open-source model on my machine(s) either. Also, shipping a product glued together from multiple open-source components with rather strict GPU requirements seems like a terrible experience for potential customers, and I don’t think I could handle the effort of supporting others in setting it up properly. Dunno, we’ll see. :D

        • rufus@discuss.tchncs.de (OP) · 11 months ago

          Hehe. People keep hijacking the term ‘open source’. If you mean free software… I have faith and trust in that concept. Once your software gets to a point where it is useful and you start attracting other contributors, people will start to realize it’s legit. At least I would.

          I use KoboldCPP and llama.cpp because I don’t own a GPU. I believe you could implement a fallback to something like this and eliminate your strict GPU requirements. (People would need at least 16-32 GB of RAM, though, and a bit of patience, because this is slower.)
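
          In case it’s useful, here’s a rough sketch of what such a CPU-only fallback could look like with the llama-cpp-python bindings; the model path, thread count and prompt are just placeholders, not anything from your actual setup:

          ```python
          # Minimal CPU-only inference sketch using llama-cpp-python.
          # Model path, thread count and prompt are placeholders.
          from llama_cpp import Llama

          llm = Llama(
              model_path="./models/llama-2-13b-chat.q4_K_M.bin",  # any local quantized model file
              n_ctx=2048,    # context window size
              n_threads=8,   # CPU threads; tune to your machine
          )

          result = llm(
              "### Instruction: Summarize today's journal entry.\n### Response:",
              max_tokens=256,
              temperature=0.7,
          )
          print(result["choices"][0]["text"])
          ```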

  • noneabove1182@sh.itjust.works (M) · 11 months ago

    I’m trying to find a way to use it with Guidance to control my smart home; it’s actually really doable with only a 13B model.

    • rufus@discuss.tchncs.de (OP) · 11 months ago

      Nice. I’m not an expert on NLP; are there any resources or frameworks out there to help with handling a language model and guiding it toward the specific set of commands/entities and areas? Or do you design everything from scratch?

      When I first started tinkering with oobabooga’s web UI and its roleplay abilities, I also tried to create a character for my smart home. That certainly was fun, and I like the idea of having a house with some kind of soul. But I never figured out how to make it useful: it just tried switching random stuff on or off, couldn’t figure out what I wanted, didn’t understand the layout of my apartment, and of course kept hallucinating devices.

      With Home Assistant declaring this ‘The Year of the Voice’, this might become useful soon. They now(?) have official integrations for Whisper STT and a TTS engine, and they’re probably designing the language-processing parts and whatever is needed to handle commands regarding areas or specific domains. I think I will try that once it’s ready to use. But I want some sci-fi house with a soul, or the computer from the ‘Enterprise’, and I think I’ll also need more LLM power for that.

      • noneabove1182@sh.itjust.works (M) · 11 months ago

        Yeah, I’m using it with Home Assistant. :)

        Basically, I’m using oobabooga for inference and exposing an API endpoint as if it were OpenAI, then plugging that into Microsoft’s Guidance, which I give a tool. The tool takes the device and the desired state as input, and then calls my Home Assistant REST endpoint to execute the command!
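
        For anyone curious, here’s a stripped-down sketch of that flow without the Guidance templating itself; the endpoint URLs, port, token and entity names are made-up assumptions, not my actual setup:

        ```python
        # Sketch: a local OpenAI-style endpoint maps free text to (entity_id, state),
        # and a small "tool" posts the result to Home Assistant's REST API.
        # URLs, port, token and entity names are invented for illustration.
        import json
        import requests

        LLM_API = "http://localhost:5000/v1/chat/completions"  # oobabooga's OpenAI-compatible route (port may differ)
        HASS_URL = "http://homeassistant.local:8123"
        HASS_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

        def set_device_state(entity_id: str, state: str) -> None:
            """The 'tool': call a Home Assistant service over its REST API."""
            domain = entity_id.split(".")[0]                    # e.g. "light"
            service = "turn_on" if state == "on" else "turn_off"
            requests.post(
                f"{HASS_URL}/api/services/{domain}/{service}",
                headers={"Authorization": f"Bearer {HASS_TOKEN}"},
                json={"entity_id": entity_id},
                timeout=10,
            )

        def handle_command(user_text: str) -> None:
            # Ask the local model to map the request to a small JSON action.
            prompt = (
                'Answer only with JSON like {"entity_id": "light.kitchen", "state": "on"}.\n'
                f"Request: {user_text}\nJSON:"
            )
            resp = requests.post(
                LLM_API,
                json={"messages": [{"role": "user", "content": prompt}], "max_tokens": 64},
                timeout=60,
            ).json()
            action = json.loads(resp["choices"][0]["message"]["content"])
            set_device_state(action["entity_id"], action["state"])

        handle_command("turn on the kitchen light")
        ```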

        • rufus@discuss.tchncs.de (OP) · 11 months ago

          Thank you for pointing that out. I was completely unaware of Microsoft’s Guidance. Once they merge/implement llama.cpp support, I’m definitely going to try it, too.

          • noneabove1182@sh.itjust.works (M) · 11 months ago

            That will certainly be amazing, but for now it’s actually not bad to use either the oobabooga web UI or koboldcpp to run the inference and provide a REST endpoint, because you can trick basically any program into treating it as if it’s OpenAI and use it the same way.
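
            The “trick” is basically just pointing an OpenAI-style client at the local server. A small sketch below; the port, path and model name are assumptions that depend on the backend and how it’s configured:

            ```python
            # Point the OpenAI client at a local OpenAI-compatible backend instead of
            # api.openai.com. Port, path and model name are assumptions; each backend
            # has its own defaults.
            import openai

            openai.api_key = "sk-not-needed-locally"      # local backends usually ignore the key
            openai.api_base = "http://localhost:5001/v1"  # local endpoint instead of OpenAI's

            response = openai.ChatCompletion.create(
                model="local-model",                      # model name is typically ignored locally
                messages=[{"role": "user", "content": "Turn off the living room lights."}],
            )
            print(response["choices"][0]["message"]["content"])
            ```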

    • Jack Cloudman@ada.junoai.org · 11 months ago

      I’ve been waiting for ExLlama to get Guidance support, but there seem to have been some integration issues. We need more people to learn and get involved, haha, including me.

      • noneabove1182@sh.itjust.works (M) · 11 months ago

        I actually just recently started having really good experiences with exllama on only 13B models; specifically, I found the Orca-tuned ones perform really well.