Will S. says:
05/11/2023 at 4:41 PM
Darn, I was hoping the answer to the meaning of life was actually going to be “42”!
Can you answer this question using Glossolalia?
I don’t understand why you would believe that ChatGPT is actually reporting real questions it’s been asked by real people in the past. It specifically says (as you quote in this article) that it doesn’t retain records or memory of past conversations. What it’s clearly doing is what it always does: making up answers to your questions that sound convincing. I’ve played around with ChatGPT enough (on research topics I have a Ph.D. in) to recognize that you just can’t rely on any of its answers being based on facts at all, and that includes any questions you ask it in an “interview”.
This was the most intelligent thing I found on this page. Thank you!
Included in the story is an image of the segment of the conversation where ChatGPT said the questions it shared were real questions by real users. As mentioned in the article, ChatGPT can share false information and that may be the case here. But we also know that it does retain chat logs. What it does with those conversations and whether or not they’re used to dictate future responses isn’t clear. And of course, because we live in a world where there are folks who very well may be frightened at the sight of spaghetti, nothing is out of the question.
This isn’t how large language models work. The model has its trained weights, and it has the context of your own conversation with it. It does not have the context of other users’ conversations while it is replying to you.
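To make that concrete, here is a minimal Python sketch of the point, not anything ChatGPT actually exposes: call_model is a hypothetical stand-in for a chat-completion endpoint, and the only conversational state it ever receives is the message list sent with the request.

```python
# Minimal sketch (hypothetical, not ChatGPT's real code): each request carries
# only the current conversation's messages, so one user's chat can never "see"
# another user's questions.

def call_model(messages: list[dict]) -> str:
    # The only conversational state the model receives is this list,
    # on top of its fixed, trained weights.
    return f"reply conditioned on {len(messages)} message(s) from this chat only"

alice_chat = [{"role": "user", "content": "Why does 7 sound like a hairy J?"}]
bob_chat = [{"role": "user", "content": "What do people usually ask you?"}]

# Bob's request contains Bob's messages and nothing else; Alice's question
# is simply not in the payload the model sees when it replies to Bob.
print(call_model(bob_chat))
```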
“unfortunately it will not contact authorities” Some people just crave that boot on the neck.
Indeed. See Christopher Hitchens on religion.
I tested this and a few similar questions myself, asking it to tell me the most common questions asked by users, the most unique requests it’s received from users to create ideas for short stories, and a sampling of any questions asked by users over the last 3 months. ChatGPT gave me responses for each, just as you received, but told me repeatedly that the questions, prompts and ideas it provided were not based on any users’ actual questions, as it does not have the ability to retain that sort of information, nor would it be ethical to provide that information to me. It also added that the responses it gave me were generated in reply to my prompt: what it ‘imagined’ to be common questions or unique prompts.
The Reverse Turing Test: discovering that the AI has blown past an individual’s capacity to comprehend an interaction. As mentioned above, ChatGPT just makes stuff up, and your own article contradicts your claim that ChatGPT can remember any questions. Or is that just another fabrication of the AI? We’ll never know. But we do know you failed the Reverse Turing Test, even if only deliberately, to get a few more cheap clicks. Good job, slave to the algorithm!
Maybe the entire article was written by an AI. “Write an article in which a journalist asks ChatGPT about what questions people ask…”
The AI’s response to the meaning of life is existentialism: “It is up to each individual to discover their own sense of purpose and meaning in life, based on their own experiences, beliefs, and values.”
hmmmmm
Please calculate, to the final digit, the value of pi.
JUST STFU, and do it…. 🙂
I’m very interested in gambling on these theoretical animal fights with ChatGPT as the final decision maker.
While ChatGPT won’t answer many unethical questions, Google will. Just because it’s more conversational than Google doesn’t mean that it’s more powerful from a useful-information standpoint. In fact, as you’ve pointed out, and as many have found out themselves, from a pure knowledge standpoint it’s wise to double-check what the chatbot says through more traditional means. Still, fun article! Thanks for sharing!
ChatGPT has no persistent internal state.
It is not possible for it to replay any questions it has been asked, because those questions are not part of any persistent internal state. Everything is wiped between sessions.
So this article is nonsense.
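A toy illustration of that claim, assuming the usual request/response pattern (generate is a made-up placeholder, not a real API): any memory a chat appears to have exists only because the client resends its own transcript each turn, and a new session starts from nothing.

```python
# Toy sketch of "wiped between sessions" (hypothetical placeholder function):
# apparent memory within a chat comes from the client resending its history.

def generate(history: list[str]) -> str:
    return f"answer based on {len(history)} earlier turn(s)"

# Session 1 accumulates and resends its own history.
session_1: list[str] = ["What is the meaning of life?"]
print(generate(session_1))   # conditioned on 1 earlier turn

# Session 2 starts from an empty list: nothing from session 1 carries over,
# so there is no stored pool of past questions for the model to replay.
session_2: list[str] = []
print(generate(session_2))   # conditioned on 0 earlier turns
```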
I don’t think those are actual questions. I asked ChatGPT the same thing, and among the questions it gave were “why does 7 sound like a hairy J” and “how come purple tastes like a neutron star”. Maybe people are typing nonsense questions, but those seem particularly nonsensical.
People are morons!
If you think that ChatGPT will tell you the truth, I have a bridge in Brooklyn waiting for you.
Google lies and keeps history basically forever.
The Government lies and keeps data about you basically forever.
My God, even Toyota (and every other manufacturer) lies and keeps history basically forever.
But “Trust me, I am a computer.”
Are you insane?
At this time, a crime involves an action, not a question or thinking about an action.
Soon ‘Thought Crime’ will become punishable (see ‘hate crimes’, for example), and ChatGPT, Google and all the other ‘Big Brother’ spies will be reporting it, “for your own protection” of course.