By learning I don't mean translating long sentences or anything like that; I mean things like: “Is this Japanese sentence correctly written? 雲の日が好きです” In this example it answered: "

Yes, the Japanese sentence “雲の日が好きです” is correctly written. It translates to “I like cloudy days” in English. Here’s a breakdown of the sentence:

  • 雲 (くも/kumo): This means “cloud.”
  • の (no): This is a possessive particle that indicates the relationship between “cloud” and “day” in this context.
  • 日 (ひ/hi): This means “day.”
  • が (ga): This is a subject marker particle that marks “cloudy day” as the subject of the sentence.
  • 好きです (すきです/suki desu): This means “like” or “to like” and is a polite way of expressing one’s preference. The verb 好き (suki) means “to like,” and です (desu) is a polite copula used to indicate politeness and formality.

Therefore, the sentence “雲の日が好きです” correctly conveys the meaning “I like cloudy days” in Japanese.

"

Is this correct, and do you consider it reliable?

Sometimes I also ask questions like: "What function does ‘こと’ fulfill grammatically in this sentence? … "

  • pruwyben@discuss.tchncs.de · 1 year ago

    Definitely not. ChatGPT is known to “hallucinate”, i.e. make things up, so you can’t trust that everything it says will be accurate.

  • RightHandOfIkaros@lemmy.world · 1 year ago

    In my experience, ChatGPT gives wrong or inaccurate information when asked about the specifics of literally any subject.

    For example, if you ask ChatGPT about the “Piano Puzzle” in Silent Hill 1, it will reply that the puzzle exists. Interestingly, when I asked it to recite the poem that accompanies this puzzle by its name alone, “A Tale of Birds Without A Voice,” I was surprised to see that it recited the poem mostly correctly. It did, however, omit the lines at the end of the poem starting with the Crow; it provided the answer to the riddle in the correct order, but without the Crow.

    However, it also invented steps to solve the puzzle. It said the piano keys are numbered and to “press the key corresponding to [bird name],” which is not helpful with regard to the puzzle solution. The piano keys are not, in fact, numbered, and the puzzle requires the player to find which piano keys make no sound and press those keys in the order of the birds the poem describes.

    Now, I asked ChatGPT to analyze the specific language used in the riddle poem. This poem requires the reader to have prior knowledge of the color of the birds' feathers to determine the color of the piano keys each bird means. I asked ChatGPT whether this prior knowledge was bad for riddle design, and it responded with “It can be a drawback, here are points to consider.” It then listed four points (accessibility, fairness, clarity and guidance, immersion and engagement) and explained how each of those relates to how a puzzle is designed (in a generic sense, not specific to this one). It concluded in classic AI fashion by rewording its first paragraph and then saying “Ideally, puzzles should provide players with the necessary information or clues within the game world to encourage exploration, deduction, and problem-solving. This allows for a more inclusive and engaging puzzle experience.” Which is basically buzzword diarrhea.

    All in all, I think that while ChatGPT is a great tool for creative exercises and as a suggestion tool for worldbuilding and other creative facets, anything that relies on being factual or correct should not rely on ChatGPT. It simply provides too much wrong information too often. And while it sometimes gets things right, a lie is best told between two truths.

    • hierophant_nihilant · 1 year ago

      Lol, did you write this reply with ChatGPT, or is “All in all” just a nod to text generated by it? Either way, your comment is hilarious XD

      • RightHandOfIkaros@lemmy.world · 1 year ago

        I didn’t use it to write the comment, nor was that intentional, but I will concede that it does sound like something ChatGPT might say. The most important point I was trying to make still stands, though: ChatGPT is really bad as a reliable source of information, and anyone expecting it to give factual information about everything they ask it is a moron.

  • Spectacle8011@lemmy.comfysnug.space · 1 year ago

    ChatGPT, being generative AI, is strong at fabrication. This is stuff like writing an email, producing a table of contents for a book, creating a Python program to spec, or, as DigitalAudio suggested, proofreading.

    It is not very strong at fact-checking or at explaining things in a different way. See this recent MDN issue, for example: https://github.com/mdn/yari/issues/9208

  • Zarxrax@lemmy.world · 1 year ago

    It’s fine to use ChatGPT for conversation or reading practice. Don’t ask it questions and expect correct answers.

  • DigitalAudio@sopuli.xyzM · 1 year ago

    I think ChatGPT works fine for reviewing your text. It will usually manage to point out the bits of your text that sound wrong or unnatural, and it’s surprisingly precise at doing it.

    However, as everybody else pointed out, don’t expect accurate or well-researched answers from it, and take any grammatical or nuanced statements with a mountain of salt, because it’s highly likely that it’s completely wrong.

    But I’ve also asked ChatGPT to proofread and adjust my emails for coworkers and clients, and to make sure they sound formal and are respectful enough, and I’ve found it to be very good at this. But I would suggest knowing enough Japanese to be able to fix anything that sounds off or is lost in translation.

    I wouldn’t recommend it as a tool for beginners, but I definitely encourage advanced learners to give it a shot, since it really does improve, summarise and rephrase texts successfully a lot of the time.

    I don’t remember where I read this first, but I liked this quote:

    What’s so confusing about AI is that it’s good at things you wouldn’t expect it to, and it’s terrible at tasks that are stereotypically machine-like. Fact checking? Sourcing? Citations? It’s awful. But writing human-like text? Proofreading? Maintaining natural conversations? It’s awesome at that.

  • CHINESEBOTTROLL@sh.itjust.works · 1 year ago

    I don’t know about Japanese, but I did study Rust (the programming language) with ChatGPT, and it’s really good for that, because you can keep asking questions and drilling down until you understand (a human teacher might get tired). The only real problem I had was that it kept forgetting the context. I didn’t really have problems with hallucination, and even if I had, I could just try the code, which won’t work for Japanese.
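    The “just try the code” check the comment describes can be as lightweight as pairing whatever ChatGPT produced with a couple of assertions. A minimal Python sketch (the function here is a hypothetical stand-in for generated code, not something from the thread):

```python
# Suppose ChatGPT generated this function when asked to
# reverse the word order of a sentence (a stand-in example).
def reverse_words(sentence: str) -> str:
    """Return the sentence with its words in reverse order."""
    return " ".join(reversed(sentence.split()))

# Unlike a Japanese sentence, the claim is directly checkable:
# run it against inputs whose correct output you already know.
assert reverse_words("hello wide world") == "world wide hello"
assert reverse_words("one") == "one"
```

    If an assertion fails, the generated code was wrong; there is no equivalent one-step check for a grammar explanation.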

    • DigitalAudio@sopuli.xyzM · 1 year ago

      This is almost true. I’m a C++ developer, and since the language, its syntax and standard libraries are so well documented, ChatGPT can answer almost anything about them with great accuracy and well written code.

      But any developer knows the real meat of most programs is in external libraries, and ChatGPT’s quality falls off quickly and sharply once you get into those. I tried asking it to explain how to write a specific audio filter for a program I was working on, and what’s really scary is that the code looks right at first glance, but once you run it, it doesn’t behave how you would expect it to.

      So by analogy, the same thing can happen with any subject. ChatGPT knows exactly what a well-written answer looks like, but a lot of the time it doesn’t know what the right answer is, which makes it incredibly easy to be fooled into believing even its wrong answers.
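      This “looks right, runs wrong” failure mode is easy to reproduce. As a hypothetical illustration (not the commenter's actual filter), here is a Python exponential-moving-average low-pass filter with the kind of subtle bug described, the smoothing coefficients swapped, next to the correct version:

```python
def lowpass_buggy(samples, alpha=0.1):
    """Exponential moving average, but with the coefficients
    swapped: it weights the NEW sample by (1 - alpha), so it
    tracks the input almost unchanged instead of smoothing it."""
    y = samples[0]
    out = []
    for x in samples:
        y = (1 - alpha) * x + alpha * y  # looks plausible, is wrong
        out.append(y)
    return out

def lowpass_correct(samples, alpha=0.1):
    """The intended filter: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    y = samples[0]
    out = []
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

# A step input exposes the difference: the buggy filter jumps to
# ~0.9 on the first nonzero sample, while the correct one only
# rises to ~0.1, i.e. it actually smooths.
step = [0.0] * 5 + [1.0] * 5
print(lowpass_buggy(step)[5])    # ~0.9
print(lowpass_correct(step)[5])  # ~0.1
```

      Both versions read equally plausibly; only running them on known input reveals which one is right, which is exactly the check that has no analogue for a language question.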

    • kinsnik@lemmy.world · 1 year ago

      A big problem with it is that if you don’t know a subject well enough, you can’t tell whether ChatGPT’s answer is real or not. For programming, you can test it, and with knowledge of other languages you can get something done quickly. But for Japanese, I found that I can’t know whether the answers are correct, so I decided against using it.

  • Umechan · 1 year ago

    There are services like Lang-8 and HiNative where you can get your writing checked or ask questions, so I’d suggest using those instead.

      • Umechan · 1 year ago

        Yes, but you can also get different opinions. Lots of people love to tell other people that they’re wrong, which can be a toxic trait, but it can also help weed out incorrect information when done constructively.

  • hierophant_nihilant · 1 year ago

    The most important skill when working with LLMs such as ChatGPT, Bard, Bing, etc., is being able to spot their bullshit. I think the best application for ChatGPT is writing training texts in Japanese, translating from English, or giving readings of kanji. But if something feels off or weird, double-check it. Your example was correctly translated, but I would estimate its reliability at somewhere between 60 and 70%, depending on complexity.

    • papa@lemmy.sdfjp.org · 1 year ago

      But if something feels off or weird

      That is the big trick, and exactly the skill new learners don’t have yet.

    • hierophant_nihilant · 1 year ago

      I was researching a science topic once using Bing Chat. It gave me completely invalid references that didn’t include anything even similar to what I was searching for. When I pointed that out, Bing apologized and then gave me the same references 🤷