Canadian Money Forum banner

Chatbot (GPT) gone wrong

6714 Views 166 Replies 14 Participants Last post by  MrMatt
Someone had a really funny interaction with the GPT chat just released on Bing. The bot starts arguing with the person.

One thing you have to remember is that this particular AI technology (GPT) is just a sentence completer and word predictor. All it does is use the text it was trained on to predict the next word in a sequence, so it completes sentences in plausible ways. It doesn't have any real intelligence or smarts; it just does things that impersonate smarts.
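As a toy illustration of that "next word predictor" idea (a drastic simplification of what GPT actually does internally, purely to show the principle), here is a bigram model that predicts each word from the one before it:

```python
from collections import Counter, defaultdict

# Toy "sentence completer": count which word follows which in the training
# text, then always predict the most common follower. GPT uses a huge neural
# network over vast corpora, but the basic idea -- predict the next token
# from the preceding context -- is the same.
training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

followers = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- the most common follower of "the"
```

Scale the "training text" up to a large chunk of the internet and the completions start to look eerily plausible, without the model "knowing" anything.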

https://www.reddit.com/r/bing/comments/110eagl

41 - 60 of 167 Posts
^ Let me think about it ... and think some more about it. For one, I'm not getting a virus on my machine.
I think the error here is trying to use ChatGPT as a fact engine. It's always going to be shaky for that use case. It is most powerful for generating text, ideas, and outlines (content) given prompts. Maybe over time work will be put into making it factual, but that is a difficult problem.
You're not going to get a virus from using it, any more than you would from using this website.
... which is good to know, but like I said, I like to "think about it", which means I have to use "my own brain" first ... no need for a Chatwhatever to do that for me. Re: your post above, #42 confirms that.
This thread is ideal feedstock for my "identifying delusions" thread. Clearly, posters in this thread want to believe it isn't something that will upend civilization.

I'm sure there were horse breeders in the year 1900 who joked about the automobile and declared it a laughing stock that would never amount to anything.

GPT3 is amazing. I've been playing with it for a short time. It is easy, accurate, and wildly effective. Retrieving information that used to require a bunch of Python code, a MariaDB database, and spreadsheet macros is now handled with a single spreadsheet call. It has been a revelation for me.

I just started using it a few days ago for a few things, including investing telemetry.
Makes me think of all the people introduced to FEA, thinking it can replace basic Engineering calculations and proper analysis.

The thing with many of these AI systems is they're just pattern recognition, some with an iterative generate and validate loop.
They create "something", then ask "is that something the thing I wanted", then repeat till it passes a threshold of "is that the something I wanted".
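The loop described above (create "something", ask "is that the thing I wanted", repeat until it passes a threshold) can be sketched generically; the helper names here are hypothetical, purely to show the shape of the pattern:

```python
import random

def generate_and_validate(generate, score, threshold, max_tries=100):
    """Generic generate-and-validate loop: produce a candidate, check
    whether it is 'the thing I wanted', and repeat until one candidate
    passes the threshold (or tries run out, returning the best so far)."""
    best, best_score = None, float("-inf")
    for _ in range(max_tries):
        candidate = generate()
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
        if s >= threshold:
            break
    return best, best_score

# Toy demo: "generate" random guesses, "validate" by closeness to a target.
random.seed(1)
target = 42
result, s = generate_and_validate(
    generate=lambda: random.randint(0, 100),
    score=lambda x: -abs(x - target),  # higher is better, 0 is perfect
    threshold=-2,                      # accept anything within 2 of target
)
print(result)
```

The validator is the weak point: if the "is this what I wanted" check can't tell a legal chess move from an illegal one, neither can the loop.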

My kids thought it was hilarious that ChatGPT plays illegal chess moves. Sure, in many contexts it looks like it's doing okay and looks correct.
But in many obvious cases it was clearly and obviously wrong.

The risk is that people will start thinking it is "easy, accurate and wildly effective", and not apply appropriate scrutiny.

It's one of a few cool tools, and I'm thinking of some interesting applications that I want to try, but you have to remember that it has some VERY SERIOUS limitations.
To me the most important part of a scientific paper is the Limitations, and that's the most important part of applying AI.
Kind of a simple example, but this is pretty wild. Chat-GPT role-playing an application and having a surprisingly coherent and accurate 'understanding' of how it works.

Yes, the regurgitating looks a lot like understanding... but it isn't.
At some point it becomes a distinction without a difference.
When it doesn't work, that's a pretty important difference.
I'm comparing it to a human. Humans are also fallible. Of course, it isn't trying to actually replace the application. The point is that it has internalized a pretty impressive understanding of how the application should work.

It's also not merely regurgitating. It doesn't have true understanding the way a human does, but it is also not just looking up code fragments on Stack Overflow the way a human does.
To use GPT3 with Google Sheets:

  • Go to OpenAI.com and create an account
  • Generate an API key
  • Open Google Sheets, go to Extensions, and install "GPT for Sheets and Docs"
  • Go to Extensions, GPT for Sheets and Docs, and add your API key

Now use the GPT3() call in your sheets as you wish. There are YouTube videos on how to use it, but it is extremely straightforward.
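A cell formula then looks something like this (the exact function name and arguments depend on the extension version, so treat this as illustrative rather than authoritative):

```
=GPT3("Summarize the following in one sentence: " & A1)
```

The extension sends the prompt (with the contents of A1 appended) to the OpenAI API using your key and writes the completion back into the cell.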

ChatGPT is just a chat front end to GPT3. OpenAI is working on some interesting front ends to GPT3.
What are you doing with it scraping web data into spreadsheets?

Google Finance is the best tool I have for crypto taxes because it can import crypto data natively. Some data is missing, though, especially for new blockchains or protocols. Maybe GPT can help automate my crypto taxes. I've paid for several dedicated services but none of them can keep up with how fast things are changing.

By the way you will be able to allocate and monetize unused computer resources to this stuff. It's still in development but looks very promising. All CLI on Linux of course.
I tried ChatGPT for the first time yesterday. I was asking about the Apollo moon missions. It got every question I asked wrong for about 15 minutes. I explained that the response was wrong, and it kept giving incorrect responses. Not impressed.
Yes, it's not an oracle or anything. You can think of GPT as a database lookup.

It's been trained with information off the internet. When you ask a question, it does a kind of database lookup and starts drawing information out of whatever stored data it finds that is relevant to the question asked.

If it happens to find inaccurate data, or contextually incorrect data, then you'll get crappy responses.
I think the problem is more that it isn't merely a database. Google has had fact cards for years now (often sourced from wikipedia). GPT3 is sourcing data from many sources and synthesizing.
Google, Siri, Alexa, etc. rely heavily on WolframAlpha. I've been using WolframAlpha for over a decade, even before it was integrated into those assistants. Now it's integrated into many things, like Excel. It does exactly what james is describing.

So GPT is doing something far more than that. This is just one service that was made public. I've seen others that can watch you do tedious/repetitive computer tasks and then try to automate it for you. Basically what coders do.
I think the error here is trying to use ChatGPT as a fact engine. It's always going to be shaky for that use case.
Exactly. GPT is not returning facts. It's mostly returning Internet consensus. Even OpenAI cites tests which show it had an accuracy of 76%. While this improves the state of the art, it is not perfect.

You know what? I agree with the Luddites. It's best to declare any flaw a fatal, unfixable issue in this early code and declare AI absolutely no threat to knowledge workers, investors, or anyone. You are right! Best to ignore this and, whatever you do, certainly resist any application of GPT that might improve your life. (y)
Well I also agree with the Luddites. I think the impact will be very significant.
I think these AI tools are very useful and are likely to raise the bar of what is automatable in the future.
I'm very concerned about the impact on workers.

I was a big fan of smart reschedule in Todoist.
I actually paid for premium (yes, $60/yr for a to-do list app), and removed it when they cancelled that feature.

I don't think AI is fatally flawed, however I think it is very scary that it will be misapplied and misused in inappropriate circumstances.
For example some posters here have said that ChatGPT is accurate, and it most certainly is NOT.
You shouldn't blindly trust GPT, but it's not merely a parlour trick.
That's my point, take the quote below, emphasis mine.
That's what's concerning to me.

GPT3 is amazing. I've been playing with it for a short time. It is easy, accurate, and wildly effective.
These are very useful tools, within their limits.
I am using ChatGPT as I would a junior employee or a co-op student.

I will give it a task so I don't need to figure out how to do it or take the time, etc.
But I will always review and check its work, as I am ultimately responsible for the use and delivery of the information. It may make errors at these early stages, and you may need to ask it to modify its output accordingly, but it can do so very quickly. It is a great tool for increasing your personal efficiency, but like a junior employee, it needs to be managed.

I would not want it as my customer-facing tool, the way AI chatbots are often used (yet). But in the back room, it can help do the grunt work that makes those customer-facing people look better.
But try to imagine a manager or senior manager who is wowed by a flashy sales presentation on ChatGPT (or something similar), has little to no knowledge of how it works, and then approves it for solving higher-level problems: ones that should involve considering a client's background in psychosocial decisions, or finding a more creative yet practical solution no one had thought of.

In my personal opinion, some of the large IT (or partially IT) system implementations were decisions made by a senior manager (with budget authority) who really didn't know, nor knew whom to trust as, the subject matter experts; and those are NOT the IT folks. ChatGPT is part of an arsenal of technology tools. The strongest value-add comes from the subject matter specialists in whatever domain that software will support as a tool. For instance, to create even a basic ChatGPT for the legal sector, you need lawyers and paralegals to help shape and test the tool, not IT. For my line of work, you need linguists and subject matter specialists across all disciplines (law, engineering, etc.).

And of course, you want a strong project manager so costs are controlled and the product that ships is usable.

The heart of all this is the use of language and how words are interpreted and used differently by different people, and how that transfers to ChatGPT. I am not addressing code and regenerating code for repetitive tasks. I am simply reflecting on a narrow part of ChatGPT: pattern recognition of certain concepts captured in words.