I’m gay
Any information humanity has ever preserved in any format is worthless
It’s like this person only just discovered science, lol. Has this person never realized that bias is a thing? There’s a reason we learn to cite our sources: people need the context of what bias is being shown. Entire civilizations have been erased by the people who conquered them; do you really think the conquerors didn’t rewrite the history of who those people were? Has this person never followed scientific advancement, where people test and validate that results can be reproduced?
Humans are absolutely gonna human. The author is right to realize that a single source holds a lot less factual accuracy than many sources, but it’s catastrophizing to call it worthless, and it ignores how additional information can add to or detract from a particular claim, so long as we examine the biases present in the creation of said information resources.
I’ve personally found it’s best to just directly ask questions when people say things that are cruel, come from a place of contempt, or otherwise seem intended to start conflict. Asking “Are you saying x?” in much clearer words is a great way to get people to reveal their true nature. There’s no need to be charitable if you’ve asked and they don’t back off, or if they agree with whatever terrible sentiment you just asked about. Generally speaking, people who aren’t malicious will not only back off on what they’re saying but will put in extra work to clear up any confusion. If someone doesn’t bother to clear up perceived hate or negativity, that can be a more subtle signal that they aren’t acting in good faith.
If they do back off but only as a way to bait you (such as by refusing to elaborate or by distracting), they’ll invariably continue to push boundaries or make other masked statements. Stick to that same strategy: if you’ve had to ask for clarification three times and they keep pushing in the same direction, I’d say it’s safe to move on at that point.
As an aside: it’s usually much more effective to feel sad for them than to be angry or direct. But honestly, it’s better to simply not engage. Most of these folks are hurting in some way, and they’re looking to offload the emotional labor onto others, or to quickly feel good about themselves by putting others down. Engaging just reinforces the behavior and frankly wastes your time, because it’s not about the subject they’re talking about… it’s about managing their emotions.
For those who are reporting this: it’s a satire piece, and this is the correct sub.
Could you be a little bit more specific? Do you have an example or two of people/situations you struggled to navigate? Bad intentions can mean a lot of things and understanding how you respond and how you wish you were responding could both be really helpful to figuring out where the process is breaking down and what skills might be most useful.
Cheers for this, I found two games that seem interesting that I’d never heard of before!
This isn’t just about GPT. One example of note from the article:
The AI assistant conducted a Breast Imaging Reporting and Data System (BI-RADS) assessment on each scan. Researchers knew beforehand which mammograms had cancer but set up the AI to provide an incorrect answer for a subset of the scans. When the AI provided an incorrect result, researchers found inexperienced and moderately experienced radiologists dropped their cancer-detecting accuracy from around 80% to about 22%. Very experienced radiologists’ accuracy dropped from nearly 80% to 45%.
In this case, researchers manually spoiled the results of a non-generative AI designed to highlight areas of interest. Being presented with incorrect information reduced the accuracy of the radiologist. This kind of bias/issue is important to highlight and is of critical importance when we talk about when and how to ethically introduce any form of computerized assistance in healthcare.
ah yes, i forgot that this article was written specifically to address you and only you
I appreciate your warning, and would like to echo it, from a safety perspective.
I would also like to point out that we should be approaching this, as every risk, from a harm reduction standpoint. A drug with impurities that could save your life or prevent serious harm is better than no drug and death. People need to be empowered to make the best decisions they can, given the available resources and education.
Great read. Really loved the way it was paced - while it jumped around a lot, it never felt too out of place and tied together nicely.
Venus rhymes with a piece of anatomy often found on men. Obviously they got it backwards
Alt text: the words “white text with black outline can be read on any color” are superimposed on a rainbow gradient, demonstrating the point
to make a long story short: getting our money out of the old collective and into the new one was actually much more of a mess than we thought
For anyone curious about the details, I had to step in to help ensure this actually happened because, well, tax law is complicated and none of us are experts. Ultimately our current financial host OCE had to bring on a US-based company in order to allow a transfer of tax-exempt funding. On top of that, we had to submit an application and enter an agreement with this partner company so that they could open a bank account on our behalf because having a bank account and agreement with OCE was not enough. What a headache!
Thanks to everyone who set up donations on OCE as soon as we transitioned, that was actually super helpful! For the rest of you who used to donate and were waiting for us to fully transition over to OCE before restarting your donations: you’re free to do so now, and given our current deficit it would be most appreciated!
Been thinking about picking this one up
It’s FUCKING OBVIOUS
What is obvious to you is not always obvious to others. There are already countless examples of AI being used to do things like sort through job applicants, decide who gets audited by child protective services, and determine who can get a visa for a country.
But it’s also more insidious than that, because the far-reaching implications of this bias often can’t be predicted. For example, excluding all gender data from training ended up making sexism worse in this real-world example of AI-assisted financial lending, and the same was true for Apple’s credit card. We even have full-blown articles showing how the removal of data can actually reinforce bias, indicating that what matters is not just what material is used to train the model, but also what data is left out or explicitly removed.
This is so much more complicated than “this is obvious,” and there are a lot of signs pointing toward the need for regulation of AI and ML models used in places where it really matters, such as decision making, until we understand them a lot better.
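To make the proxy effect concrete, here’s a toy sketch (entirely invented numbers, not taken from the linked examples) of how a “gender-blind” model can still discriminate: the protected attribute is never given to the model, but a correlated proxy feature carries it in anyway.

```python
import random

random.seed(0)

# Toy population: gender correlates with a hypothetical "proxy" feature
# (think purchase category or job title), even though gender itself is
# never shown to the model.
population = []
for _ in range(10_000):
    gender = random.choice(["m", "f"])
    # The proxy fires 80% of the time for men, 20% for women (made up).
    proxy = 1 if random.random() < (0.8 if gender == "m" else 0.2) else 0
    population.append((gender, proxy))

# "Gender-blind" lending model: approves based only on the proxy feature.
def approve(proxy: int) -> bool:
    return proxy == 1

approvals = {"m": [], "f": []}
for gender, proxy in population:
    approvals[gender].append(approve(proxy))

rate_m = sum(approvals["m"]) / len(approvals["m"])
rate_f = sum(approvals["f"]) / len(approvals["f"])
print(f"approval rate, men: {rate_m:.2f}; women: {rate_f:.2f}")
```

Despite never seeing gender, the model approves men at roughly four times the rate of women, because removing the column didn’t remove the correlation. That’s the core of why “just delete the sensitive data” isn’t a fix.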
Yeah, I think that’s a reasonable analogy. It’s worth noting that medicine and taking care of your hair aren’t quite equivalent, and the potential negative outcomes from bad health care can be orders of magnitude worse than a bad outcome at a salon… but yeah.
big weird flex but okay vibes except actually not okay
As with most science press releases, I’m not holding my breath
Great read! Thank you