I'm open to any search engine that works, but on my initial test queries it failed, so why bother.

It's odd that you can't seem to make AI work, given how easy it is by design.

I searched for stuff on you.com.
Check out how accurate ChatGPT is. Right on the money. [screenshot]

Yes, it gives nice-sounding answers, sometimes pretty good ones, but they're not reliably accurate.
My nightmare is when spammers/malicious state actors get ahold of GPT-like conversational AI. A lot of this activity is quite obvious currently, but it places a burden on platforms to filter and remove it. Once AI-generated messages are more subtle and essentially indistinguishable from humans, I can see a future where platforms are overwhelmed by very convincing user-generated content created by AI. Imagine: "post positive commentary on these penny stocks in the style of sags". Or, "propagate this Russian disinformation in the style of Farouk". I kid, of course.

But there may come a time when the majority of content on places such as this forum is AI chatbots communicating with each other. Short of some validation of the existence of users (credit card, third-party authentication, etc.), I'm not sure how you can effectively filter it out.

Well, that's going to shut down those scam boiler rooms: replace all of them with bots.
Methinks right now in North America we're at the ideal convergence point: using computer technology for automation while still having sufficient human intervention, so that the technology remains a great "tool" to help us do things better.
I have little faith that ChatGPT and AI clones like it will stay safe and not self-propagate in uncontrolled ways. We can't even trust self-driving cars for regular drivers on highways. ChatGPT is designed by humans. And humans are flawed, full of biases.

Oh, by the way, does the IT sector continue its wild west ways with documentation? That's the impression I've gotten, and for a long time.

I think this suggests a misunderstanding on your part about how these systems work. GPT isn't a bunch of if-then code written by programmers. Rather, it is a much smaller amount of code wrangling a big statistical model that can't really fully explain its own inner workings. There is nothing to comment, and it isn't 'written' by programmers. The bias in machine learning systems is largely introduced by the data used to 'train' the model (i.e., to feed the statistical model). The wrangling of the model serves, among other things, 'trust and safety': basically trying to reduce the negative/undesirable outcomes the model picked up from the training data, or to prevent misuse.
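To make the "statistical model, not if-then code" point concrete, here is a deliberately tiny sketch in Python. It is nothing like GPT's actual architecture (everything here is invented for illustration): the only "knowledge" is word-pair counts harvested from training text, so whatever patterns or biases the text contains are exactly what comes back out.

```python
# Toy "language model" for illustration only; real GPT is a neural network,
# not pair counts. Note that nobody writes an if-then rule for any particular
# answer: the behaviour, including any bias, comes entirely from the text.
from collections import Counter, defaultdict
import random

def train(words):
    # Count which word follows which in the training text.
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length=8):
    out = [start]
    while len(out) < length and model[out[-1]]:
        # Sample the next word in proportion to how often it followed the
        # current word in training: pure pattern matching, no understanding.
        choices, weights = zip(*model[out[-1]].items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = "the president said he would sign the bill and he did sign it".split()
lm = train(corpus)
print(generate(lm, "the"))
# Swap in a different corpus and the output changes with it: the bias lives
# in the data, not in hand-written rules.
```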
I know that for automated indexing of content, you feed the system training data; in a simple way, that is how machine learning has been used for the past few decades. Our organization just hasn't bought the module, because the organization uses the same words in different ways across different departments and their subject matter experts. So this is a very clear example of the multiplicity in language and word use: certain word concepts are used differently across different work cultures and subject disciplines.
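As a toy illustration of that same-word, different-sense problem (a sketch with made-up department names and sample texts, not the actual indexing module): a naive auto-tagger that scores documents by shared words will tag confidently even when a word is used in a completely different sense.

```python
# Naive auto-indexer sketch; department names and sample texts are made up.
# It tags a document with the department whose sample shares the most words.
training_examples = {
    "IT":         "open a terminal session on the server",
    "Facilities": "the bus terminal building needs roof maintenance",
}

def tag(document):
    doc_words = set(document.lower().split())
    scores = {dept: len(doc_words & set(sample.lower().split()))
              for dept, sample in training_examples.items()}
    return max(scores, key=scores.get)

# "terminal" here means neither a console nor a bus station, but the tagger
# still picks a department: word overlap says nothing about word sense.
print(tag("care plan for a terminal patient"))  # prints "IT"
```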
The problem is these systems are very good at reinforcing the status quo.

ChatGPT won't appoint a female US president, because she wouldn't fit the pattern.

The problems with this approach to AI were identified decades ago.

Ehh, I'm not so sure I agree. It's all about how the technology is deployed. AlphaGo developed novel Go strategies that had not been conceived of by human grand masters.
"In Two Moves, AlphaGo and Lee Sedol Redefined the Future" (www.wired.com): "Although machines are now capable of moments of genius, humans have hardly lost the ability to generate their own."

It's different than search indexing. It matters what approach they're using and what inputs they're using.
An overly simplified comparison of two AI strategies:

If you're randomly generating "stuff" and asking "does this look like other 'similar stuff' I've seen before?", then you won't get completely new ideas.

If you program in the rules of a game and a success criterion, it might try stuff that has never been tried.

So it's not a question of "how it is deployed"; it's a question of which AI technique is being used.

In your example of a Go game, the AI was likely programmed with the rules and asked "is this a valid move?" and "will it lead to success?".

In GPT-3, where it makes illegal chess moves, the AI asked "did I see successful people do something like this?".

I think there is a lot of work to be done to bridge these techniques. I'm not sure how much work that is, particularly since some things have succinct rules to implement, and some do not.
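Here is a rough sketch of that two-strategy distinction, using "claim numbers 1 to 9, first to hold three summing to 15" (tic-tac-toe in disguise) as the game. Everything in it is invented for illustration: the imitation strategy can propose an already-taken number, while the rule-based strategy is handed a legal-move generator and a success test, so it only ever produces valid moves, including ones nobody has played before.

```python
# Two toy strategies for a tiny game, for illustration only. The game:
# players alternately claim numbers 1 to 9; the first to hold three numbers
# summing to 15 wins (this is tic-tac-toe in disguise).
import random

def legal_moves(taken):
    # The rules written as code: only unclaimed numbers are playable.
    return [n for n in range(1, 10) if n not in taken]

def wins(held):
    return any(a + b + c == 15
               for a in held for b in held for c in held if a < b < c)

def imitation_move(seen_games, taken):
    # Strategy 1: "did I see people do something like this?" Replay a move
    # from a past game at the same turn. Nothing stops it from claiming a
    # number that is already taken (the illegal-chess-move failure).
    turn = len(taken)
    candidates = [game[turn] for game in seen_games if len(game) > turn]
    return random.choice(candidates) if candidates else None

def rule_based_move(mine, taken):
    # Strategy 2: generate only legal moves and test each against the
    # success criterion; it can find a winning move nobody has ever played.
    moves = legal_moves(taken)
    for m in moves:
        if wins(mine | {m}):
            return m
    return random.choice(moves)

past_games = [[5, 1, 9, 3, 7], [2, 5, 8, 4, 6]]
print(imitation_move(past_games, taken={9, 2}))          # may return 9: illegal!
print(rule_based_move(mine={8, 2}, taken={8, 2, 9, 1}))  # returns 5 (8+2+5=15)
```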
ML models don't need to be programmed with an understanding of the rules, or to be trained on examples of others' behaviour (though that can be helpful). Deepmind started training its Go application by playing itself.

An ML model can learn rules and objectives purely from a first-person perspective of an environment.

If Deepmind played itself, did the "game" block illegal moves? You know, to teach/program the rules in?
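On the illegal-move question: in a typical self-play setup the environment simply never offers illegal moves, so the rules are "taught" by constraining choices rather than by code the learner could violate. Here is a minimal sketch with the game of Nim; it is a toy with invented names, and AlphaGo Zero's real training used neural networks and tree search, not a lookup table.

```python
# Minimal self-play sketch with a lookup table; a toy only. The environment's
# legal_moves() is the "game blocking illegal moves": the learner only ever
# chooses among what it is offered, and learns which choices lead to wins.
import random
from collections import defaultdict

value = defaultdict(float)  # learned score for (stones_remaining, move)

def legal_moves(stones):
    # Rules of Nim: take 1, 2, or 3 stones; whoever takes the last stone wins.
    return [m for m in (1, 2, 3) if m <= stones]

def self_play_game(stones=10, explore=0.3):
    history, player = [], 0
    while stones > 0:
        moves = legal_moves(stones)        # illegal moves are never offered
        if random.random() < explore:
            move = random.choice(moves)    # sometimes try something new
        else:
            move = max(moves, key=lambda m: value[(stones, m)])
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player                    # the player who took the last stone
    for p, s, m in history:                # reinforce the winner's choices
        value[(s, m)] += 1.0 if p == winner else -1.0

for _ in range(20000):
    self_play_game()

# With no human games to imitate, the table tends toward the known strategy:
# from s stones take s % 4 when possible, leaving a multiple of four.
print({s: max(legal_moves(s), key=lambda m: value[(s, m)]) for s in range(1, 10)})
```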
AI is useful, but it has limits and it has risks; these have been well known, documented, and discussed for a LONG time.

Sure, I'm just not sure what your point is. It's like saying nuclear weapons have limits. They are still very powerful, and a disruptive technology.

AI is going to have a bigger impact than telecommunications and electricity in time.

AI is dependent on telecom, which is dependent on electricity, so I don't think it's going to have a bigger impact. But I think the introduction and rollout of AI will be much bigger than the initial rollout of any previous technology, with the possible exceptions of agriculture and larger-scale communities.

Some people think AI is "accurate" and "thinking", while today it is mostly regurgitating and pattern matching, in a relatively simplistic yet high-resolution way. This will be disruptive, but I think some people are overestimating what today's AI solutions are.

Even if they are, this field is advancing rapidly. And there are definitely undisclosed solutions that are more powerful.