
· Registered
Joined
·
16,110 Posts
Someone had a really funny interaction with the GPT chat just released on Bing. The bot starts arguing with the person.

One thing you have to remember is that this particular AI technology (GPT) is just a sentence completer and word predictor. All it does is use the text it's been trained on to predict the next word in a sequence, so it completes sentences in plausible ways. It doesn't have any intelligence. It doesn't have any smarts; it just does things that impersonate smarts.
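
To see what "just predicting the next word" means in practice, here's a toy sketch using a bigram counter over a made-up corpus. Real GPT models are vastly larger neural networks over subword tokens, but the complete-one-word-at-a-time loop is conceptually similar; everything below is invented for illustration.

```python
from collections import Counter, defaultdict
import random

# Toy "language model": count which word follows which in a corpus, then
# complete a prompt by repeatedly sampling a likely next word.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def complete(prompt, length=6):
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # never saw this word; nothing plausible to predict
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(complete("the cat"))  # e.g. "the cat sat on the mat and the"
```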

https://www.reddit.com/r/bing/comments/110eagl

Frankly, most people just impersonate intelligence as well.

Chat-GPT is going to improve, but even as it is today, it can provide a lot of value in terms of synthesizing information or generating text. I used it to generate a job description recently, and it produced something much better than what my recruiter came up with.
 

· Registered
Joined
·
16,110 Posts
Yes exactly this.

A real estate agent can have chatGPT draft a description for a listing and then do a quick edit. It's a more advanced version of copy/pasting a template to work from, which always led to copy/paste errors because it's hard to catch every detail.

It's just a tool that makes people more efficient. The boomers are scared and imagining things because they haven't actually used it.
Yes, especially given how hilariously poorly written many listings are.

Chat-GPT is like having a very capable but not perfect junior employee. You give it instructions and it will often do very good work, but it is on you to review what it produces and exercise quality control.

Honestly, being able to write pseudocode and have it converted to proper syntax, requiring only code review and testing to verify it works as expected, is a huge step up over most 'self-taught' programmers. I rarely see anyone do more than google code snippets on Stack Overflow to copy and paste. They don't bother to comment, rename variables, or even use appropriate whitespace. Chat-GPT writes very readable, well-commented code.
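
To illustrate, here's a hypothetical sketch of that pseudocode-to-code workflow. The task, names, and data layout are all invented; the function is simply the kind of readable, commented output you'd hope to get back.

```python
# Pseudocode handed to the model:
#   for each transaction in the file:
#       skip it if the amount is zero
#       convert the amount to CAD using that day's rate
#       add it to a running total per account
#
# The kind of readable, well-commented code you'd hope to get back:
def total_by_account(transactions, cad_rates):
    """Sum transaction amounts in CAD, grouped by account."""
    totals = {}
    for txn in transactions:
        if txn["amount"] == 0:
            continue  # zero-value rows carry no information
        rate = cad_rates[txn["date"]]  # that day's CAD conversion rate
        cad_amount = txn["amount"] * rate
        totals[txn["account"]] = totals.get(txn["account"], 0) + cad_amount
    return totals
```

You'd still review and test it, like any junior employee's work, but that's far less effort than writing it from scratch.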
 

· Registered
Joined
·
16,110 Posts
Frankly, I have a few people working for me whose jobs I could see being replaced by a Chat-GPT-alike three generations down the line. I often give detailed instructions by email, with recommended inputs/parameters, expected outputs, and watchouts to look for. I can honestly see Chat-GPT generating the appropriate SQL scripts and synthesizing the information as requested, including providing updates based on conversational feedback. That would be quite liberating for me, as a lot of my time is spent training folks to the point where they can take these kinds of instructions and do something with them.
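
As a purely hypothetical sketch of what that could look like, here's one of those emailed instructions recast as a prompt, along with the sort of script a model could plausibly return. The table, columns, and parameters are all invented; this is not output from any real system.

```python
# Hypothetical instruction, structured the way the emails are written:
prompt = """
Task: monthly summary of new client accounts.
Inputs: accounts table (account_id, opened_date, branch, balance).
Parameters: report month = 2023-02.
Expected output: one row per branch with account count and total balance.
Watchouts: exclude test branches (branch codes starting with 'ZZ').
Write the SQL.
"""

# The kind of script a model could plausibly return (all names invented):
generated_sql = """
SELECT branch,
       COUNT(*)     AS new_accounts,
       SUM(balance) AS total_balance
FROM accounts
WHERE opened_date >= '2023-02-01'
  AND opened_date <  '2023-03-01'
  AND branch NOT LIKE 'ZZ%'
GROUP BY branch;
"""
```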
 

· Registered
Joined
·
16,110 Posts
There's a huge industry of "programmers" who have only gone through boot camps or diploma programs. These people will all be wiped out by GPT and Copilot because, as you know, they have very little knowledge and are mainly copying and pasting things from Stack Overflow.

There are a huge number of people with these jobs, and the jobs pay well! In my US city I knew people like this who were making salaries as high as $80K. They work as front-end developers, full-stack developers, and in all kinds of web and IT jobs. They're sometimes referred to as code monkeys.

Unfortunately they will be the first to go with this new wave of automation. More DoorDash delivery people.
I come from a CS background, though I don't work in software development. I see a lot of supposed 'star programmers' at work, look at their code, and it becomes abundantly clear they never learned any of the theory on algorithms, data structures, or computational complexity, much less good hygiene habits in terms of formatting and readability.
 

· Registered
Joined
·
16,110 Posts
I think the error here is trying to use ChatGPT as a fact engine. It's always going to be shaky for that use case. It is most powerful for generating content (text, ideas, outlines) from prompts. Maybe over time work will be put into making it factual, but that is a difficult problem.
 

· Registered
Joined
·
16,110 Posts
When it doesn't work, that's a pretty important difference.
I'm comparing it to a human. Humans are also fallible. Of course, it isn't trying to actually replace the application. The point is that it has internalized a pretty impressive understanding of how the application should work.

It's also not merely regurgitating. It doesn't have true understanding in the way a human does, but neither is it just looking up code fragments on Stack Overflow the way a human does.
 

· Registered
Joined
·
16,110 Posts
I think the problem is more that it isn't merely a database. Google has had fact cards for years now (often sourced from Wikipedia). GPT-3 draws on many sources and synthesizes them.
 

· Registered
Joined
·
16,110 Posts
My nightmare is when spammers or malicious state actors get ahold of GPT-like conversational AI. A lot of this activity is quite obvious today, but it still places a burden on platforms to filter and remove it. Once AI-generated messages are more subtle and essentially indistinguishable from human ones, I can see a future where platforms are overwhelmed by very convincing user-generated content created by AI. Imagine: "post positive commentary on these penny stocks in the style of sags". Or, "propagate this Russian disinformation in the style of Farouk". I kid, of course.

But there may come a time when the majority of content on places such as this forum is AI chatbots communicating with each other. Short of some validation of the existence of users (credit card, third-party authentication, etc.), I'm not sure how you can effectively filter it out.
 

· Registered
Joined
·
16,110 Posts
I guess the flip side is that AI will be useful for understanding the content of messages at a more conceptual level to identify questionable content. It might filter out humans as well, though.
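
As a sketch of how that conceptual-level filtering might look with today's tools, here's a zero-shot classifier from the Hugging Face transformers library scoring a post against arbitrary labels. The labels, threshold, and example post are my own invented choices.

```python
from transformers import pipeline

# Zero-shot classification scores text against arbitrary labels without
# task-specific training, so it works at a conceptual level rather than
# by keyword matching. Labels and threshold are illustrative only.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

post = "This tiny mining stock is guaranteed to triple by Friday, load up now!"
labels = ["stock promotion", "disinformation", "ordinary discussion"]

result = classifier(post, candidate_labels=labels)
top_label, top_score = result["labels"][0], result["scores"][0]
if top_label != "ordinary discussion" and top_score > 0.7:
    print(f"flag for review: {top_label} ({top_score:.2f})")
```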

I imagine there are already reddit accounts using GPT for karma farming.
 

· Registered
Joined
·
16,110 Posts
Methinks right now in North America we're at the ideal peak convergence of using computer technology for automation while still having sufficient human intervention, so that the technology remains a great "tool" to help us do things better.

I have little faith that ChatGPT/AI-like clones will be safe and won't self-propagate in uncontrolled ways. We can't even trust self-driving cars among regular drivers on highways. ChatGPT is designed by humans, and humans are flawed and full of biases.
Oh, by the way, does the IT sector still continue its wild-west ways with documentation? That's the impression I've gotten, for a long time.
I think this suggests a misunderstanding on your part about how these systems work. GPT isn't a bunch of if-then code written by programmers. Rather, it is a comparatively small amount of code wrangling a big statistical model whose inner workings can't really be fully explained. There is nothing to comment, and it isn't 'written' by programmers. The bias in machine-learning systems is largely introduced by the data used to 'train' (feed) the statistical model. The wrangling of the model serves, among other things, 'trust and safety': basically trying to reduce the negative or undesirable outcomes the model picks up from its training data, and to prevent misuse.
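
A toy contrast of the two approaches, since the distinction matters: hand-written rules are code a programmer chose and can comment, while a learned model's behaviour (and bias) comes from its training data. A sketch with scikit-learn; the data is invented to make the point.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Approach 1: hand-written if/then rule. Every behaviour is code a
# programmer explicitly chose, and can be read and commented.
def rule_based_is_spam(text):
    return "free money" in text.lower()

# Approach 2: learned model. The programmer writes none of the decision
# logic; whatever patterns (and biases) are in the training data end up
# baked into the fitted weights. Data here is invented for illustration.
texts = ["free money click now", "claim your free money",
         "meeting moved to 3pm", "quarterly report attached"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

print(rule_based_is_spam("FREE MONEY inside"))                     # True
print(model.predict(vectorizer.transform(["free money inside"])))  # likely [1]
```

None of the model's decision logic appears anywhere in the source code; change the training data and the behaviour changes with it.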
 

· Registered
Joined
·
16,110 Posts
It's different from search indexing.
The problem is these systems are very good at reinforcing the status quo.

ChatGPT won't appoint a female US president, because she wouldn't fit the pattern.

The problems with this approach to AI were identified decades ago.
Ehh, I'm not so sure I agree. It's all about how the technology is deployed. AlphaGo developed novel Go strategies that had not been conceived of by human grandmasters.

 

· Registered
Joined
·
16,110 Posts
It depends on what approach they're using, and what inputs they're using.

An oversimplified comparison of two AI strategies:
If you're randomly generating "stuff" and asking "does this look like other 'similar stuff' I've seen before?", then you won't get completely new ideas.

If you program in the rules of a game and a success criterion, it might try things that have never been tried.

It's not a question of "how it is deployed"; it's a question of which AI technique is being used.


In your example of a Go game, the AI was likely programmed with the rules and asked "is this a valid move?" and "will it lead to success?"
With GPT-3, where it makes illegal chess moves, the AI asked "did I see successful people do something like this?"

I think there is a lot of work to be done to bridge these techniques. I'm not sure how much work that is, particularly since some things have succinct rules to implement, and some do not.
ML models don't need to be programmed with an understanding of the rules, or trained on examples of others' behaviour (though that can be helpful). DeepMind started training its Go application by having it play against itself.

An ML model can learn rules and objectives purely from a first-person perspective of an environment.
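
A minimal sketch of that self-play idea, on a much smaller scale: the program below is given only the rules of a tiny Nim game (take 1 or 2 sticks; whoever takes the last stick wins) plus a win/loss signal, and learns a strategy entirely by playing against itself with simple Monte-Carlo value updates. The game, parameters, and episode count are all arbitrary choices for illustration.

```python
import random

# Tiny Nim: players alternate taking 1 or 2 sticks; whoever takes the
# last stick wins. The learner is given only these rules and a win/loss
# signal, and improves purely by self-play -- no examples of human games.
Q = {}  # Q[(sticks_remaining, move)] = estimated value of that move

def choose(sticks, epsilon=0.1):
    moves = [m for m in (1, 2) if m <= sticks]
    if random.random() < epsilon:                              # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((sticks, m), 0.0))   # exploit

for _ in range(20000):                                         # self-play episodes
    sticks, history = 10, []
    while sticks > 0:
        move = choose(sticks)
        history.append((sticks, move))
        sticks -= move
    # The last move in history won the game; the reward alternates sign
    # going backwards because consecutive moves belong to opposing players.
    reward = 1.0
    for state_move in reversed(history):
        old = Q.get(state_move, 0.0)
        Q[state_move] = old + 0.1 * (reward - old)
        reward = -reward

# Optimal play leaves the opponent a multiple of 3 sticks, so from 10
# the learned policy should typically take 1.
print(choose(10, epsilon=0.0))
```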
 

· Registered
Joined
·
16,110 Posts
AI is useful, but it has limits and it has risks; these have been well known, documented, and discussed for a LONG time.
Sure, I'm just not sure what your point is. It's like saying nuclear weapons have limits. They are still very powerful, and a disruptive technology.

In time, AI is going to have a bigger impact than telecommunications and electricity.
 