61 - 80 of 99 Posts

· Registered
Joined
·
2,156 Posts
I'm open to any search engine that works, but on my initial test queries it failed, so why bother.
It's odd that you can't seem to make AI work, given how easy it is by design.

Check out how accurate ChatGPT is. Right on the money. (y) :ROFLMAO:

TomB16> Can I update a linux kernel without rebooting?

ChatGPT> No, you cannot update the Linux kernel without rebooting. The Linux kernel is a core component of the operating system and is loaded into memory when the system boots up. Updating the kernel requires replacing the existing kernel image with a new one and reloading it into memory. This process cannot be done without restarting the system.
However, some Linux distributions provide a feature called "live kernel patching" or "kernel hot patching" that allows you to patch the kernel without rebooting. This feature is only available for certain types of patches, such as security updates or critical bug fixes, and requires specialized software and configuration. Not all Linux distributions support live kernel patching, so you should check with your distribution's documentation to see if this feature is available and how to use it.
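For the curious, here's a minimal sketch (my own illustration, not part of ChatGPT's answer) of the practical check behind all this: comparing the running kernel to the newest kernel image installed on disk, to see whether a reboot (or live patch) is actually pending. It assumes a typical Linux layout with /boot/vmlinuz-<version> files; paths and version formats vary by distribution.

```python
import glob
import platform
import re

def version_key(ver: str):
    """Split '6.1.0-18-amd64' into chunks that compare numerically."""
    return [(0, int(p)) if p.isdigit() else (1, p)
            for p in re.split(r"[.\-]", ver)]

# platform.release() reports the *running* kernel version.
running = platform.release()
installed = [p.split("vmlinuz-", 1)[1] for p in glob.glob("/boot/vmlinuz-*")]

if installed:
    newest = max(installed, key=version_key)
    if version_key(newest) > version_key(running):
        print(f"Newer kernel installed ({newest}); reboot or live-patch to load it.")
    else:
        print(f"Already running the newest installed kernel: {running}")
else:
    print("No kernel images found under /boot (layout differs on this distro?)")
```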
 

· Registered
Joined
·
13,364 Posts
It's odd that you can't seem to make AI work, given how easy it is by design.
I searched for stuff on you.com
I got bad results compared to Google and Bing.

Check out how accurate ChatGPT is. Right on the money. (y) :ROFLMAO:
Yes, it gives nice-sounding answers, sometimes pretty good ones, but they're not reliably accurate.
Like self-driving cars today: they're pretty good, except when they kill someone.

People seem a bit too eager to trust AI when we KNOW it has issues.
This goes double when we're talking about a specific AI and we KNOW its problems, yet some people will still claim it's "accurate".
 

· Registered
Joined
·
4,121 Posts
Not ChatGPT, but we know there are now deepfakes and all that stuff.

This whole AI thing can go wrong. We can generate everything with AI: discussions, speeches, art, images, videos, stories, books, answers to anything, etc.

To me, most technological advancements are kind of balanced: they bring about as much good as bad to society.

But all of that comes down to the bad side of human nature: the desire for power and control.

Fake - Joe Rogan interviews Justin Trudeau

Anyway, we can now make deepfakes live:

DeepFake of Tom Cruise

 

· Registered
Joined
·
16,110 Posts
My nightmare is when spammers/malicious state actors get hold of GPT-like conversational AI. A lot of this activity is quite obvious currently, but it places a burden on platforms to filter and remove it. Once AI-generated messages are more subtle and essentially indistinguishable from humans, I can see a future where platforms are overwhelmed by very convincing user-generated content created by AI. Imagine: "post positive commentary on these penny stocks in the style of sags". Or, "propagate this Russian disinformation in the style of Farouk". I kid, of course.

But there may come a time when the majority of content in places such as this forum is AI chatbots communicating with each other. Short of some validation of the existence of users (credit card, third-party authentication, etc.), I'm not sure how you can effectively filter it out.
 

· Registered
Joined
·
16,110 Posts
I guess the flip side is that AI will be useful for understanding the content of messages at a more conceptual level, to identify questionable content. It might filter out humans as well, though.

I imagine there are already Reddit accounts using GPT for karma farming.
 

· Registered
Joined
·
13,364 Posts
My nightmare is when spammers/malicious state actors get hold of GPT-like conversational AI. A lot of this activity is quite obvious currently, but it places a burden on platforms to filter and remove it. Once AI-generated messages are more subtle and essentially indistinguishable from humans, I can see a future where platforms are overwhelmed by very convincing user-generated content created by AI. Imagine: "post positive commentary on these penny stocks in the style of sags". Or, "propagate this Russian disinformation in the style of Farouk". I kid, of course.

But there may come a time when the majority of content in places such as this forum is AI chatbots communicating with each other. Short of some validation of the existence of users (credit card, third-party authentication, etc.), I'm not sure how you can effectively filter it out.
Well, that's going to shut down those scam boiler rooms and replace them all with bots.
 

· Registered
Joined
·
941 Posts
My nightmare is when spammers/malicious state actors get hold of GPT-like conversational AI. A lot of this activity is quite obvious currently, but it places a burden on platforms to filter and remove it. Once AI-generated messages are more subtle and essentially indistinguishable from humans, I can see a future where platforms are overwhelmed by very convincing user generated content created by AI. Imagine: "post positive commentary on these penny stocks in the style of sags". Or, "propagate this Russian disinformation in the style of Farouk". I kid, of course.

But there may come a time when the majority of content in places such as this forum is AI chatbots communicating with each other. Short of some validation of the existence of users (credit card, third-party authentication, etc.), I'm not sure how you can effectively filter it out.
Methinks that right now in North America we're at an ideal peak convergence: using computer technology for automation while still having sufficient human intervention, so that the computer technology remains a great "tool" to help us do things better.

I have little faith that ChatGPT and AI-like clones will be safe and won't self-propagate in uncontrolled ways. We can't even trust self-driving cars alongside regular drivers on highways. ChatGPT is designed by humans, and humans are flawed, full of biases.
Oh, by the way, does the IT sector continue its wild-west ways with documentation? That's the impression I've gotten... for a long time.
 

· Registered
Joined
·
16,110 Posts
Methinks that right now in North America we're at an ideal peak convergence: using computer technology for automation while still having sufficient human intervention, so that the computer technology remains a great "tool" to help us do things better.

I have little faith that ChatGPT and AI-like clones will be safe and won't self-propagate in uncontrolled ways. We can't even trust self-driving cars alongside regular drivers on highways. ChatGPT is designed by humans, and humans are flawed, full of biases.
Oh, by the way, does the IT sector continue its wild-west ways with documentation? That's the impression I've gotten... for a long time.
I think this suggests a misunderstanding on your part about how these systems work. GPT isn't a bunch of if-then code written by programmers. Rather, it is a relatively small amount of code wrangling a big statistical model that can't really fully explain its inner workings. There is nothing to comment, and it isn't 'written' by programmers. The bias in machine learning systems is largely introduced by the data used to 'train' the model (i.e., feed the statistical model). The wrangling of the model serves, among other things, 'trust and safety': basically trying to reduce the negative/undesirable outcomes the model picked up from the training data, or to prevent misuse.
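To make that concrete, here's a toy sketch (my own illustration, nothing like GPT's real architecture or scale) of why the bias lives in the data: the "model" below is nothing but counts gathered from training text, so whatever patterns the text contains are exactly what comes back out.

```python
from collections import Counter, defaultdict
import random

def train(corpus: str):
    """Count which word follows which; the 'model' is only these counts."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(model, word, n=5):
    out = [word]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        # Sample in proportion to training frequency: the bias in the
        # data becomes the bias in the output.
        nxt = random.choices(list(followers), weights=followers.values())[0]
        out.append(nxt)
    return " ".join(out)

model = train("the president said he would act . the president said he agreed")
print(generate(model, "the"))  # mostly echoes what the training text contained
```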
 

· Registered
Joined
·
941 Posts
The bias in machine learning systems is largely introduced by the data used to 'train' the model (i.e., feed the statistical model). The wrangling of the model serves, among other things, 'trust and safety': basically trying to reduce the negative/undesirable outcomes the model picked up from the training data, or to prevent misuse.
I know that for automated indexing of content, you feed the system training data; in a simple way, that's how machine learning has been used for the past few decades. Our organization just hasn't bought the module, because the organization uses the same words in different ways across different departments and their subject-matter experts. So it's a very clear example of the multiplicity of patterns in language and word use: certain word concepts are used differently across different work cultures and subject disciplines.
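A tiny illustration of that problem (hypothetical departments and word senses, not the actual organization's vocabulary): the same word indexes to different concepts depending on whose training data you use.

```python
# Each department's training data teaches the indexer a different
# sense for the same word, so no single global index fits everyone.
dept_vocab = {
    "IT":      {"terminal": "command-line console"},
    "Transit": {"terminal": "bus station"},
    "Finance": {"terminal": "point-of-sale device"},
}

def index_term(word: str, dept: str) -> str:
    """Without the department context, the word is ambiguous."""
    return dept_vocab.get(dept, {}).get(word, "unknown sense")

print(index_term("terminal", "IT"))       # command-line console
print(index_term("terminal", "Transit"))  # bus station
```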
 

· Registered
Joined
·
13,364 Posts
I think this suggests a misunderstanding on your part about how these systems work. GPT isn't a bunch of if-then code written by programmers. Rather, it is a relatively small amount of code wrangling a big statistical model that can't really fully explain its inner workings. There is nothing to comment, and it isn't 'written' by programmers. The bias in machine learning systems is largely introduced by the data used to 'train' the model (i.e., feed the statistical model). The wrangling of the model serves, among other things, 'trust and safety': basically trying to reduce the negative/undesirable outcomes the model picked up from the training data, or to prevent misuse.
The problem is these systems are very good at reinforcing the status quo.

ChatGPT won't appoint a female US president, because she wouldn't fit the pattern.

The problems with this approach to AI were identified decades ago.
 

· Registered
Joined
·
16,110 Posts
It's different from search indexing.
The problem is these systems are very good at reinforcing the status quo.

ChatGPT won't appoint a female US president, because she wouldn't fit the pattern.

The problems with this approach to AI were identified decades ago.
Ehh, I'm not so sure I agree. It's all about how the technology is deployed. AlphaGo developed novel Go strategies that had not been conceived of by human grandmasters.

 

· Registered
Joined
·
13,364 Posts
It's different from search indexing.

Ehh, I'm not so sure I agree. It's all about how the technology is deployed. AlphaGo developed novel Go strategies that had not been conceived of by human grandmasters.

It depends on what approach they're using, and what inputs they're using.

An overly simplified comparison of two AI strategies:
If you're randomly generating "stuff" and asking "does this look like other 'similar stuff' I've seen before?", then you won't get completely new ideas.

If you program in the rules of a game and a success criterion, it might try stuff that has never been tried.

It's not a question of "how it is deployed"; it's a question of which AI technique is being used.

In your example of a Go game, the AI was likely programmed with the rules and asked "is this a valid move?" and "will it lead to success?"
In GPT-3, which makes illegal chess moves, the AI asks "did I see successful people do something like this?"

I think there is a lot of work to be done to bridge these techniques. I'm not sure how much work that is, particularly since some things have succinct rules to implement, and some do not.
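A rough sketch of that contrast (my own illustration, using tic-tac-toe so the rules fit in a few lines): the imitation player can happily produce an "illegal move", while the rule-based player only ever considers moves the rules generate.

```python
import random

def legal_moves(board):
    """The programmed rules: only empty squares are valid."""
    return [i for i, sq in enumerate(board) if sq == " "]

def imitation_player(seen_moves):
    """Strategy 1: replay moves that looked good in past games.
    Nothing stops it from picking a square that's already occupied."""
    return random.choice(seen_moves)

def rule_based_player(board, score):
    """Strategy 2: rules + a success criterion. Every candidate is legal
    by construction, and a never-before-seen move can win by scoring well."""
    return max(legal_moves(board), key=lambda m: score(board, m))

board = ["X", " ", "O", " ", " ", " ", " ", " ", " "]
print(imitation_player(seen_moves=[0, 2, 4]))              # may pick occupied 0 or 2
print(rule_based_player(board, lambda b, m: -abs(m - 4)))  # a crude centre heuristic
```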
 

· Registered
Joined
·
16,110 Posts
It depends on what approach they're using, and what inputs they're using.

An overly simplified comparison of two AI strategies:
If you're randomly generating "stuff" and asking "does this look like other 'similar stuff' I've seen before?", then you won't get completely new ideas.

If you program in the rules of a game and a success criterion, it might try stuff that has never been tried.

It's not a question of "how it is deployed"; it's a question of which AI technique is being used.

In your example of a Go game, the AI was likely programmed with the rules and asked "is this a valid move?" and "will it lead to success?"
In GPT-3, which makes illegal chess moves, the AI asks "did I see successful people do something like this?"

I think there is a lot of work to be done to bridge these techniques. I'm not sure how much work that is, particularly since some things have succinct rules to implement, and some do not.
ML models don't need to be programmed with an understanding of the rules, or to be trained on examples of others' behaviour (though that can be helpful). DeepMind started training its Go application by playing itself.

An ML model learning rules and objectives purely from a first-person perspective of an environment.
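Loosely in the spirit of that self-play idea (a bare-bones sketch of my own, nothing like DeepMind's actual system): the program is given only a legal-move generator and a reward at the end of the game, and improves by playing itself and reinforcing the moves that appeared in winning games.

```python
import random
from collections import defaultdict

value = defaultdict(float)  # learned preference for each (state, move) pair

def self_play_episode(state, legal_moves, play, winner, eps=0.1, lr=0.1):
    """One game of the program against itself, given only the rules
    (legal_moves, play) and a terminal reward (winner). States must
    be hashable, e.g. tuples."""
    history = []
    while legal_moves(state):
        moves = legal_moves(state)
        # Mostly exploit the best-known move, occasionally explore.
        if random.random() < eps:
            move = random.choice(moves)
        else:
            move = max(moves, key=lambda m: value[(state, m)])
        history.append((state, move))
        state = play(state, move)
    reward = winner(state)  # e.g. +1 / -1 / 0, known only at the end
    # Reinforce the moves that led to the result; a real self-play
    # learner would flip the reward's sign for the opposing side's moves.
    for s, m in history:
        value[(s, m)] += lr * reward
```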
 

· Registered
Joined
·
13,364 Posts
ML models don't need to be programmed with an understanding of the rules, or to be trained on examples of others' behaviour (though that can be helpful). DeepMind started training its Go application by playing itself.

An ML model learning rules and objectives purely from a first-person perspective of an environment.
If DeepMind played itself, did the "game" block illegal moves? You know, to teach/program the rules in?

ChatGPT making illegal chess moves means it didn't "learn" the rules.

Also, if the training dataset instills a sense of "rules" in the AI, then you're also likely instilling the status quo as those "rules".

Of course, this is pretty much how stereotypes work:
"Oh, something like this is this..."

AI is useful, but it has limits and it has risks; these have been well known, documented, and discussed for a LONG time.
 

· Registered
Joined
·
16,110 Posts
AI is useful, but it has limits and it has risks; these have been well known, documented, and discussed for a LONG time.
Sure, I'm just not sure what your point is. It's like saying nuclear weapons have limits. They are still very powerful, and a disruptive technology.

AI is going to have a bigger impact than telecommunications and electricity in time.
 

· Registered
Joined
·
13,364 Posts
Sure, I'm just not sure what your point is. It's like saying nuclear weapons have limits. They are still very powerful, and a disruptive technology.

AI is going to have a bigger impact than telecommunications and electricity in time.
AI is dependent on telecom, which is dependent on electricity, so I don't think it's going to have a bigger impact. But I think the introduction and rollout of AI will be much bigger than the initial rollout of any previous technology, with the possible exceptions of agriculture and larger-scale communities.

I think AI is going to massively disrupt the information economy. I don't know what the next "age" is going to be, but I think we're going to see a massive restructuring of society, like the Industrial Revolution.
I also think that right now we have no idea what it will look like.

The politics will get very interesting, as we have very little consensus and are losing our ability to discuss issues; political factions disagree on basic facts.
Throw in foreign/AI-assisted influence and we're in for a wild ride.
 

· Registered
Joined
·
13,364 Posts
Sure, I'm just not sure what your point is. It's like saying nuclear weapons have limits. They are still very powerful, and a disruptive technology.
Some people think AI is "accurate" and "thinking", while today it is mostly regurgitating and pattern matching, in a relatively simplistic yet high-resolution way.
This will be disruptive, but I think some people are overestimating what today's AI solutions are.

Note that what they are and what they can do are not the same thing, but there are some rather significant fundamental holes in the systems today.
I also think the next-gen AIs that will address the obvious flaws we know about are going to come fast.
 