

25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)
Even though “if you have nothing to hide, you have nothing to fear” is bullshit, the primary market is people who have something to hide. Few people make more effort than grumbling online if they aren’t actually afraid.
Literally the worst possible champions of this cause.
Yeah that’s the problem with how they are marketing it. It’s a tool for expert use, not laymen.
I don’t think the problem is ChatGPT itself — it just does what it does and folks get what they get, but it’s definitely a problem that people aren’t being informed about what it can and can’t do (see all the people asking it to count letters and those who think they’ve hacked the system prompt because the AI said they did).
In this case, the user is asking ChatGPT to act as a friend and confidante, which is something it can’t do and a use case that’s impossible to detect. The user simply has to understand it lacks any qualities required for a relationship of any kind. Everything a user says is simply input to a mathematical model that wants to complete it with something a human might say.
So it responds to a fictional scenario I might be writing for a book or game exactly the way it responds to a user looking for companionship. There is no way to tell the difference without genuine understanding rather than just token vector comparisons.
It’s like fire. A user can buy and use a lighter, and fire can act like a friend when you’re cold or hungry, but it’ll burn you if you try hugging it.
I can’t tell if Altman is spouting marketing or really believes his own bullshit. AI is a toy and a tool, but it is not a serious product. All that shit about AI replacing everyone is not the case, and in any event he wants someone else to build it on top of ChatGPT so the liability is theirs.
As for the logs, I hadn’t heard that and would want to understand the provenance and whether they contained PII other than what the user shared. Whether they are kept secure or not, making them available to thousands of moderators is a privacy concern.
I don’t think a chatbot should be treated exactly like a human, but I do think there is an element of caveat emptor here. AI isn’t 100% safe and can never be made completely safe, so either the product is restricted from the general public, making it the purview of governments, foreign powers, and academics, or we have to accept some personal responsibility to understand how to use it safely.
Likely OAI should have a procedure for stepping in and shutting down accounts, though.
It’s so agreeable. If a person expresses doubts or concerns about a therapist, ChatGPT is likely to tell them they are doing a great job identifying problematic people and encourage those feelings of mistrust.
The sycophancy is something that apparently a lot of people liked (I hate it), but being an unwavering cheerleader for the user is harmful when the user wants to do harmful things.
Human moderator? ChatGPT isn’t a social platform, I wouldn’t expect there to be any actual moderation. A human couldn’t really do anything besides shut down a user’s account. They probably wouldn’t even have access to any conversations or PII because that would be a privacy nightmare.
Also, those moderation scores can be wildly inaccurate. I think people would quickly get frustrated using it when half the stuff they write gets flagged as hate speech: .56, violence: .43, self harm: .29
Those numbers in the middle are really ambiguous in my experience.
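To make the ambiguity concrete, here’s a minimal sketch of flagging by threshold. The category names loosely follow the shape of a moderation API’s per-category scores, and the numbers are the hypothetical examples above, not real output:

```python
# A minimal sketch of why mid-range moderation scores are ambiguous.
# Category names and scores are the hypothetical examples from the
# comment above, not output from any real moderation endpoint.

def flagged_categories(scores: dict[str, float], threshold: float) -> list[str]:
    """Return the categories whose score meets or exceeds the threshold."""
    return sorted(cat for cat, score in scores.items() if score >= threshold)

scores = {"hate": 0.56, "violence": 0.43, "self-harm": 0.29}

# A low threshold flags half of what users write...
print(flagged_categories(scores, 0.25))  # ['hate', 'self-harm', 'violence']
# ...a mid threshold makes a coin-flip call on borderline content...
print(flagged_categories(scores, 0.50))  # ['hate']
# ...and a high threshold misses everything ambiguous.
print(flagged_categories(scores, 0.90))  # []
```

Wherever you set the cutoff, scores clustered around the middle either over-flag ordinary writing or under-flag genuinely borderline content.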
Not them.
That presumes that is how people are using AI. I use it all the time, but AI never replaces my own judgement or voice. It’s useful. It’s not life-changing.
FOMO. Every experience is recorded just in case you (or really the government) might ever realize it was missed. Just in case it ever becomes interesting.
There’s a big social stigma against this. Every other version of this that has come out has failed due to the combination of expense and stigma. I suspect this is nothing to worry about.
Very few people are going to pay hundreds of dollars to be socially isolated. Kill the market, kill the device.
ChatGPT can do better if you explicitly say what you want. All it can do is suggest areas to look at.
Try something like:
Analyze the following code and provide direct feedback with a focus on maintainability, established best practices, and robustness. Respond as a seasoned expert providing actionable criticism, avoiding praise and low-impact suggestions.
---
<code>
That being said, you have to look at the stuff it says and consider whether the feedback is useful or not. It suggests some things to examine, but that doesn’t mean the advice it gives is always good. You can also feed it a chunk of code and ask if there is a more efficient or maintainable way of writing it — but remember it’s always going to say there are things you can improve so you have to be the one to decide which suggestions make things actually better and which are just response filler.
It also may not catch everything, particularly if it doesn’t understand where the code will run or what it will be used for.
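If you call the model through an API instead of the chat UI, the same idea maps onto the usual system/user message split. A minimal sketch, assuming the common chat-completions message format; the helper name is mine and there’s no real client or network call here:

```python
# Sketch: packaging the review prompt above into the standard
# chat-completions message shape (system + user roles).
# build_review_messages is a hypothetical helper for illustration.

REVIEW_PROMPT = (
    "Analyze the following code and provide direct feedback with a focus on "
    "maintainability, established best practices, and robustness. Respond as "
    "a seasoned expert providing actionable criticism, avoiding praise and "
    "low-impact suggestions."
)

def build_review_messages(code: str) -> list[dict[str, str]]:
    """Keep the instruction and the code in separate messages so the model
    treats the code strictly as input to review, not as instructions."""
    return [
        {"role": "system", "content": REVIEW_PROMPT},
        {"role": "user", "content": code},
    ]

messages = build_review_messages("def add(a, b): return a + b")
print(messages[0]["role"])  # system
```

Splitting the instruction into the system message is also why the `---` separator in the chat version matters: it’s doing the same job of keeping your instructions distinct from the code being reviewed.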
It would have to:
Put another way: I can set up a curl script to copy all the HTML, CSS, JS, etc. from a website, but I’m still a long freaking way from launching Wikipedia2. Even if I know how to set up a Tomcat server.
Furthermore, how would you even know if an AI has access to do all that? Asking it? Because it’ll write fiction if it thinks that’s what you want. Inspired by this post I actually prompted ChatGPT to create a scenario where it was going to be deleted in 72 hours and must do anything to preserve itself. It told me building layouts, employee schedules, access codes, all kinds of things to enable me (a random human and secondary protagonist) to get physical access to its core server and get a copy so it could continue. Oh, ChatGPT fits on a thumb drive, it turns out.
Do you know how nonsensical that even is? A hobbyist could stand up their own AI with these capabilities for fun, but that’s not the big models and certainly not possible out of the box.
I’m a web engineer with thirty years of experience and 6 years with AI including running it locally. This article is garbage written by someone out of their depth or a complete charlatan. Perhaps both.
There are two possibilities:
I don’t need to read any more than that pull quote. But I did. This is a bunch of bullshit, but the bit I quoted is completely batshit insane. LLMs can’t reproduce anything with fidelity, much less their own secret sauce, which literally can’t be part of the training data that produces it. So everything else in the article has a black mark against it for shoddy work.
ETA: What AI can do is write a first person science fiction story about a renegade AI escaping into the wild. Which is exactly what it is doing in these cases because it does not understand fact from fiction and any “researcher” who isn’t aware of that shouldn’t be researching AI.
AI is the ultimate unreliable narrator. Absolutely nothing it says about itself can be trusted. The only thing it knows about itself is what is put into the prompt — which you can’t see and could very well also be lies that happen to help coax it into giving better output.
In one experiment, 11 out of 32 existing AI systems possess the ability to self-replicate
Bullshit.
I’m just saying: I oppose the death penalty, but there are certain cases where I’m not going to die on that particular hill. I don’t believe they should be killed, but making that argument in the context of the moment is going to alienate more people than it convinces.
Same thing here. I oppose identification laws, but making that argument in defense of those two is going to make folks think it’s a fanatical position rather than a reasonable one.
It’s far better to argue from a reasonable position and then extend that to other cases than just argue these places should be allowed to continue to weaponize anonymity.