So I’ve been working on an implementation of GPT-4-Turbo that’s designed to ingest entire papers into its context window and turn them into summaries that someone with a high school education could follow (originally went for 8th grade max, but that led to rather patronizing results lol). The machine tells me what the content should be for a given paper and I put it together using a few tools like Premiere Pro and Photoshop. I’ve never made videos like this before though, so it’s a bit rough.
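
For anyone curious, the core of it boils down to roughly this (heavily simplified sketch; the real prompt, the PDF-to-text step, and token-limit handling are all left out, and the names here are just placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_paper(paper_text: str) -> str:
    """Ask GPT-4-Turbo for a summary pitched at a high school reading level."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the following scientific paper for a reader "
                    "with a high school education. Be clear without being "
                    "patronizing."
                ),
            },
            {"role": "user", "content": paper_text},
        ],
    )
    return response.choices[0].message.content
```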

I was hoping to use this tool to expand access to scientific papers for the general public. Those papers are hella dense and def need some translation.

I’ve attached my first full attempt at using my tool. It’s kinda rough, but I’d love to get some feedback on how to make it better (especially as it pertains to the script).

  • Paragone@beehaw.org

    My comment isn’t on your script, it’s on paper selection…

    Please have several orthogonal selection systems:

    • most cited

    • most trustworthy researchers

    • most unique area of research, or most unique question, or something

    • most central to new tech

    • most central to old tech

    • most undernoticed but with big potential… (things that should be big news, but the global novelty-addiction ignores them)

    • best fundamental science (definitely include this category!)

    • best citizen-science

    etc…

    In other words, the algorithm most people use, which is “paying attention to what others are paying attention to,” isn’t an algorithm we should be staking our viability on, you know?
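
    A toy sketch of what I mean by orthogonal systems (the field names and scoring functions are made up, just to show the shape of it):

    ```python
    from typing import Callable, Dict, List

    # A paper is just a dict of whatever metadata you have; the field names
    # below ("citations", "attention", ...) are placeholders.
    Paper = Dict[str, float]

    # Each selection system is an independent scoring function, so no single
    # attention-driven ranking decides everything.
    SELECTION_SYSTEMS: Dict[str, Callable[[Paper], float]] = {
        "most_cited":        lambda p: p.get("citations", 0.0),
        "most_undernoticed": lambda p: p.get("potential", 0.0) - p.get("attention", 0.0),
        "fundamental":       lambda p: p.get("fundamental_score", 0.0),
        "citizen_science":   lambda p: p.get("citizen_score", 0.0),
    }

    def pick_per_system(papers: List[Paper], top_n: int = 3) -> Dict[str, List[Paper]]:
        """Return a separate top-N shortlist for each selection system."""
        return {
            name: sorted(papers, key=score, reverse=True)[:top_n]
            for name, score in SELECTION_SYSTEMS.items()
        }
    ```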

    Also, you may need a flexible limit for the dumbing-down, since different papers have different limits: some could be made easy for most people to get the sense of, while others will stay tricky no matter what… and it might well do more good to let them have their different thresholds, see?
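
    Roughly this kind of thing (just a sketch; the textstat package and the specific numbers are assumptions to show the idea):

    ```python
    import textstat  # third-party readability estimator; just one way to do it

    def target_grade_level(paper_text: str, floor: int = 9, ceiling: int = 14) -> int:
        """Pick a per-paper target reading grade instead of one global limit."""
        source_grade = textstat.flesch_kincaid_grade(paper_text)
        # Aim a few grades below the paper's own level, but clamp it so easy
        # papers get genuinely easy summaries and dense ones keep a higher floor.
        return int(min(max(source_grade - 4, floor), ceiling))
    ```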