^ Let me think about it ... and think some more about it. For one, I'm not getting a virus on my machine.
You're not going to get a virus from using it, any more than you would from using this website.
... which is good to know, but like I said, I like to "think about it", which means I have to use "my own brain" first ... no need for a Chatwhatever to do that for me. Re your post above: #42 confirms that.
> This thread is ideal feedstock for my "identifying delusions" thread. Clearly, posters in this thread want it to not be something that is going to bring an end to civilization.

Makes me think of all the people introduced to FEA, thinking it can replace basic engineering calculations and proper analysis.
I'm sure there were horse breeders in the year 1900 who joked about the automobile and declared it a laughing stock that would never amount to anything.
GPT3 is amazing. I've been playing with it for a short time. It is easy, accurate, and wildly effective. I've found it capable of retrieving information that used to require a bunch of Python code, a MariaDB database, and spreadsheet macros, now handled with a single spreadsheet call. It has been a revelation for me.
I just started using it a few days ago for a few things, including investing telemetry.
> Kind of a simple example, but this is pretty wild. Chat-GPT role-playing an application and having a surprisingly coherent and accurate 'understanding' of how it works.

Yes, the regurgitating looks a lot like understanding... but it isn't.
At some point it becomes a distinction without a difference.
When it doesn't work, that's a pretty important difference.
I'm comparing it to a human. Humans are also fallible. Of course, it isn't trying to actually replace the application. The point is that it has internalized a pretty impressive understanding of how the application should work.
> What are you doing with it scraping web data into spreadsheets?

To use GPT3 with Google Sheets:
- Go to OpenAI.com and create an account
- Generate an API key
- Open Google Sheets, go to Extensions, and install "GPT for Sheets and Docs"
- Go to Extensions, GPT for Sheets and Docs, and add your API key
Now use the GPT3() call in your sheets as you wish. There are YouTube videos on how to use it, but it is extremely straightforward.
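For anyone who'd rather call the API directly than go through the Sheets extension, the same kind of lookup can be sketched in a few lines of Python. This is only a sketch under assumptions: the endpoint and model name reflect OpenAI's public GPT-3 completions API as documented at the time, and the API key and parameters are placeholders, not anything from the posts above.

```python
import json

# Assumption: OpenAI's GPT-3 completions endpoint; the model name
# ("text-davinci-003") is an example, not taken from the thread.
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt, api_key, model="text-davinci-003", max_tokens=256):
    """Build the HTTP headers and JSON body for a completion request."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # key generated on OpenAI.com
        "Content-Type": "application/json",
    }
    body = {"model": model, "prompt": prompt, "max_tokens": max_tokens}
    return headers, json.dumps(body)

def ask_gpt3(prompt, api_key):
    """Send the prompt to the API and return the first completion's text."""
    import urllib.request
    headers, body = build_request(prompt, api_key)
    req = urllib.request.Request(API_URL, data=body.encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```

This is essentially what a spreadsheet call like GPT3() has to do under the hood: bundle your cell's prompt into a request, authenticate with your key, and drop the returned text back into the sheet.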
ChatGPT is just a chat front end to GPT3. OpenAI is working on some interesting front ends to GPT3.
> I tried ChatGPT for the first time yesterday. I was asking about the Apollo moon missions. It got every question I asked wrong for about 15 minutes. I explained the response was wrong, and it kept giving incorrect responses. Not impressed.

Yes, it's not an oracle or anything. You can think of GPT as a database lookup.
> I think the problem is more that it isn't merely a database. Google has had fact cards for years now (often sourced from Wikipedia). GPT3 is sourcing data from many sources and synthesizing.

Google, Siri, Alexa, etc. rely heavily on WolframAlpha. I've been using WolframAlpha for over a decade, even before it was integrated into those assistants. Now it's integrated into many things, like Excel. It does exactly what james is describing.
> I think the error here is trying to use ChatGPT as a fact engine. It's always going to be shaky for that use case.

Exactly. GPT is not returning facts. It's mostly returning Internet consensus. Even OpenAI cites tests which show it had an accuracy of 76%. While this improves the state of the art, it is not perfect.
Well I also agree with the Luddites. I think the impact will be very significant.
You know what? I agree with the Luddites. It's best to declare any flaw a fatal, unfixable issue in this early code and declare AI as absolutely no threat to knowledge workers, investors, or anyone. You are right! Best to ignore this and, whatever you do, certainly resist any application of GPT which might improve your life.
That's my point; take the quote below, emphasis mine.

> You shouldn't blindly trust GPT, but it's not merely a parlour trick.
> GPT3 is amazing. I've been playing with it for a short time. It is easy, accurate, and wildly effective.

These are very useful tools, within their limits.
I think these AI tools are very useful and are likely to raise the bar of what is automatable in the future.
I'm very concerned about the impact on workers.
I was a big fan of smart reschedule in Todoist.
I actually paid for premium (yes, $60/yr for a to-do list app), and removed it when they cancelled the feature.
Todoist Launches Smart Schedule, an AI-Based Feature to Reschedule Overdue Tasks (www.macstories.net): "When Todoist’s data scientist Oleg Shidlowsky and his team started looking at aggregate task data earlier this year, they discovered an interesting pattern: despite tools to assign due dates and good intentions, most people tend to accumulate incomplete tasks and defer them indefinitely. ..."
I don't think AI is fatally flawed; however, I do think it is very scary that it will be misapplied and misused in inappropriate circumstances.
For example some posters here have said that ChatGPT is accurate, and it most certainly is NOT.
But try to imagine a manager or senior manager who is wowed by a flashy sales presentation on ChatGPT or similar, who has no or very little knowledge about ChatGPT, and who then approves it for higher-level problems that require considering a client's background to make psychosocially sensitive decisions, or to come up with a more creative yet practical solution never thought of.

I am using ChatGPT as I would a junior employee or a co-op student.
I will give it a task so I don't need to figure out how to do it or take the time, etc.
But I will always review and check its work, as I am ultimately responsible for the use and delivery of the information. It may make errors at this early stage, and you may need to ask it to modify its output accordingly, but it can do so very quickly. This is a great tool for increasing your personal efficiency, but it is like a junior employee: it needs to be managed.
I would not want it as my customer-facing tool, as AI chatbots are often used (yet). But it's great in the back room, doing the grunt work to make those customer-facing people look better.