Canadian Money Forum
1 - 19 of 99 Posts

· Registered
Joined
·
13,364 Posts
Very good point. One could also argue that 99% of economic activities don't require much intelligence. Many things that occur in jobs from day to day are actually quite routine and mundane.
Of course many people even fail at that.
The average investor underperforms the relevant index, not because they can't replicate it, but because they actively make poor decisions, despite an easier and more effective approach being available.


No question it can be useful. I agree that it has good uses; I'm just hoping people don't start using it in inappropriate places.
They already are
Another mistake would be removing the vital step of human review, and just using the output without thinking.
They already are, but why is that "vital"?
I think a lot of algorithmically generated output doesn't need human review.
Look at DLSS: do we really need a human to review its output?

AI is mostly pattern recognition right now, and we use a lot of bad, poorly thought-out patterns.
AI will inflict whatever stereotypes it is trained on.

The reason we didn't see the balloons is because "all" the airborne threats were fast or very fast, so the systems didn't look for slow-moving objects. Like who'd launch an attack from a blimp?
Now I'll admit NORAD not noticing the balloons is a bad pattern in the pattern recognition algorithm, but that's all the rest of our AI/Deep learning systems are today.

My kids are into chess, and they want to play Chess-GPT, but they also thought it was funny when Chess-GPT started moving pieces illegally.
As digital natives, they have a very interesting perspective on this stuff.


There's a public fear of AI becoming sentient and killing all mankind. I suspect that what actually happens will be much less dramatic, but still very harmful to the world: eliminating a ton of jobs and making a lot of people permanently unemployable.
This has been a serious problem for years. We have to find some way to make these people useful in society.

We used to allow paying disabled/impaired people less than minimum wage, so they could still do something useful and it was economically viable.
That has been banned in Ontario.

The US army rejects "lower" IQ people as not smart enough to contribute positively; I've heard the cutoff was 83, now 92.
I don't actually dispute this. Even with increasing specialization and well-documented workflows, some tasks might be difficult for them to learn and adapt to, and that bar keeps getting higher.

What do we do when nearly half the population is unemployable, because they can't do the job, or there is a much cheaper alternative to do it?
Do we force companies to hire people and have them do busywork?
Do we allow an ever-growing pool of people with no purpose in life? We see that now with a rather small portion of the population, and look at the damage that does. What happens when 40, 50, 60% of the population has no purpose and nothing to do?

I think the end result will actually be as dramatic as machines rising up: massive social upheaval, driven by people who don't know what they're doing.
 

· Registered
Joined
·
13,364 Posts
I posted a previously classified slide showing that NORAD did track balloons in both Canada and Alaska. It's not an AI filter that adjusts itself. If Russia/China are testing our response, then there is a reason things are classified.

For things like Red Flag or Maple Flag, all those filters are off unless there is too much clutter. Things like identification zones are also monitored more closely with layered sensors. But yes, if you know the exact system, there are always potential ways to bypass detection, maybe for some time.

None of those slow-moving objects were threats, although it is time for NORAD to upgrade surveillance for real threats; the derelict balloons aren't it, though. There's talk about how new technologies will work; I wouldn't call the current system AI.
There is always clutter and noise in any reading.
It is ALWAYS filtered out to some degree.

Now that could be some advanced AI based system, or it could simply be a basic signal filter.

Any scanning system will pull in all sorts of garbage, and that's the whole trick of stealth and camouflage, trying to make the object fail pattern recognition.
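To illustrate the non-AI end of that spectrum, here is a minimal sketch of a basic signal filter: anything below a speed threshold is treated as clutter and dropped. The contact format and threshold value are invented for illustration; real radar clutter rejection is far more involved (Doppler processing, track correlation, etc.).

```python
# Hypothetical sketch of a basic (non-AI) clutter filter. All data here is
# made up for illustration; no real system works on a list of dicts.

def filter_clutter(contacts, min_speed_knots=80):
    """Keep only contacts moving fast enough to be treated as threats."""
    return [c for c in contacts if c["speed"] >= min_speed_knots]

contacts = [
    {"id": "bird flock", "speed": 30},
    {"id": "balloon", "speed": 15},     # slow mover: silently filtered out
    {"id": "airliner", "speed": 450},
    {"id": "fighter", "speed": 900},
]

visible = filter_clutter(contacts)
print([c["id"] for c in visible])  # → ['airliner', 'fighter']
```

The point is that the balloon never reaches an operator's screen, not because anyone decided it was harmless, but because the filter encodes the assumption that slow objects are clutter.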
 

· Registered
Joined
·
13,364 Posts
This thread is ideal feedstock for my "identifying delusions" thread. Clearly, posters in this thread want it to not be something that is going to bring an end to civilization.

I'm sure there were horse breeders in the year 1900 who joked about the automobile and declared it a laughing stock that would never amount to anything.

GPT3 is amazing. I've been playing with it for a short time. It is easy, accurate, and wildly effective. I've found it capable of retrieving information that used to require a bunch of python code, a MariaDB database, and spreadsheet macros, now handled with a single spreadsheet call. It has been a revelation for me.

I just started using it a few days ago for a few things, including investing telemetry.
Makes me think of all the people introduced to FEA who think it can replace basic engineering calculations and proper analysis.

The thing with many of these AI systems is that they're just pattern recognition, some with an iterative generate-and-validate loop.
They create "something", ask "is that something the thing I wanted?", then repeat until it passes a threshold of "is that the something I wanted?".
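That generate-and-validate loop can be sketched in a few lines. This toy version randomly mutates a string and keeps a candidate only when it scores closer to a target; `score()` here is a stand-in for whatever "is that the something I wanted?" check a real system would use (a discriminator, a loss function, a human rating).

```python
import random

random.seed(1)

def score(candidate, target):
    """Toy validator: fraction of characters matching the target."""
    return sum(a == b for a, b in zip(candidate, target)) / len(target)

def generate_and_validate(target, threshold=1.0, max_iters=100_000):
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    # Start from random noise, then iterate: mutate, validate, keep if better.
    best = "".join(random.choice(alphabet) for _ in target)
    for _ in range(max_iters):
        if score(best, target) >= threshold:
            break
        i = random.randrange(len(target))
        candidate = best[:i] + random.choice(alphabet) + best[i + 1:]
        if score(candidate, target) > score(best, target):
            best = candidate
    return best

print(generate_and_validate("hello world"))  # → hello world
```

Nothing in the loop "understands" the target; it just keeps whatever passes the validation check, which is exactly why the quality of the check matters more than the generator.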

My kids thought it was hilarious that ChatGPT plays illegal chess moves. Sure, in many contexts it looks like it's doing okay and appears correct.
But in many cases it was clearly and obviously wrong.

The risk is that people will start thinking it is "easy, accurate and wildly effective", and not apply appropriate scrutiny.

It's one of a few cool tools, and I'm thinking of some interesting applications I want to try, but you have to remember that it has some VERY SERIOUS limitations.
To me the most important part of a scientific paper is the Limitations section, and that's also the most important part of applying AI.
 

· Registered
Joined
·
13,364 Posts
Exactly. GPT is not returning facts; it's mostly returning Internet consensus. Even OpenAI cites tests showing an accuracy of 76%. While this improves the state of the art, it is not perfect.

You know what? I agree with the Luddites. It's best to declare any flaw a fatal, unfixable issue in this early code and declare AI as absolutely no threat to knowledge workers, investors, or anyone. You are right! Best to ignore this and, whatever you do, certainly resist any application of GPT which might improve your life. (y)
Well I also agree with the Luddites. I think the impact will be very significant.
I think these AI tools are very useful and are likely to raise the bar of what is automatable in the future.
I'm very concerned about the impact on workers.

I was a big fan of smart reschedule in Todoist.
I actually paid for premium (yes, $60/yr for a to-do list app), and dropped it when they cancelled the feature.

I don't think AI is fatally flawed; however, I think it is very scary that it will be misapplied and misused in inappropriate circumstances.
For example, some posters here have said that ChatGPT is accurate, and it most certainly is NOT.
 

· Registered
Joined
·
13,364 Posts
You shouldn't blindly trust GPT, but it's not merely a parlour trick.
That's my point, take the quote below, emphasis mine.
That's what's concerning to me.

GPT3 is amazing. I've been playing with it for a short time. It is easy, accurate, and wildly effective.
These are very useful tools, within their limits.
 

· Registered
Joined
·
13,364 Posts
It's odd that you can't seem to make AI work, given how easy it is by design.
I searched for stuff on you.com
and got bad results compared to Google and Bing.

Check out how accurate ChatGPT is. Right on the money. (y) :ROFLMAO:
Yes, it gives nice-sounding answers, sometimes pretty good ones, but they're not reliably accurate.
Like self-driving cars today: they're pretty good, except when they kill someone.

People seem a bit too eager to trust AI when we KNOW it has issues.
This goes double when we're talking about a specific AI whose problems we KNOW, yet some people will still claim it's "accurate".
 

· Registered
Joined
·
13,364 Posts
My nightmare is when spammers/malicious state actors get ahold of GPT-like conversational AI. A lot of this activity is quite obvious currently, but places a burden on platforms to filter and remove. Once AI-generated messages are more subtle and essentially indistinguishable from humans, I can see a future where platforms are overwhelmed by very convincing user generated content created by AI. Imagine: "post positive commentary on these penny stocks in the style of sags". Or, "propagate this Russian disinformation in the style of Farouk". I kid, of course.

But there may come a time when the majority of content on places such as this forum is AI chatbots communicating with each other. Short of some validation of the existence of users (credit card, third party authentication etc.) I'm not sure how you can effectively filter it out.
Well that's going to shut down those scam boiler rooms, replace all of them with bots.
 

· Registered
Joined
·
13,364 Posts
I think this suggests a misunderstanding on your part about how these systems work. GPT isn't a bunch of if-then code written by programmers. Rather, it is much less code, wrangling a big statistical model that can't really fully explain its inner workings. There is nothing to comment, and it isn't 'written' by programmers. The bias in machine learning systems is largely introduced by the data used to 'train' the model (feed the statistical model). The wrangling of the model serves, among other things, 'trust and safety': basically trying to reduce the negative/undesirable outcomes of the model from the training data, or to prevent misuse.
The problem is these systems are very good at reinforcing the status quo.

ChatGPT won't appoint a female US president, because she wouldn't fit the pattern.

The problems with this approach to AI were identified decades ago.
 

· Registered
Joined
·
13,364 Posts
It's different than search indexing.

Ehh, I'm not so sure I agree. It's all about how the technology is deployed. AlphaGo developed novel Go strategies that had not been conceived of by human grand masters.

It depends on what approach they're using, and what inputs they're using.

An overly simplified comparison of two AI strategies:
If you're randomly generating "stuff" and asking "does this look like other 'similar stuff' I've seen before?", then you won't get completely new ideas.

If you program the rules of a game, and a success criteria, it might try stuff that has never been tried.

It's not a question of "how it is deployed"; it's a question of which AI technique is being used.


In your Go example, the AI was likely programmed with the rules and asked "is this a valid move?" and "will it lead to success?".
In GPT-3, which makes illegal chess moves, the AI asks "did I see successful people do something like this?".

I think there is a lot of work to be done to bridge these techniques. I'm not sure how much work that is, particularly since some things have succinct rules to implement, and some do not.
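The two strategies above can be contrasted in a toy sketch. The "game", its legal moves, and the training data are all invented for illustration; neither function comes from any real AI system. A move here is legal only if it is an even number no greater than 10.

```python
import random

random.seed(42)

LEGAL_MOVES = [0, 2, 4, 6, 8, 10]

# Strategy 1: pure imitation. Propose whatever "looks like" moves seen in
# training data. With no rules encoded, nothing stops an illegal proposal
# (note the illegal 7 that snuck into the data).
TRAINING_DATA = [2, 4, 4, 6, 7, 8]

def imitation_move():
    return random.choice(TRAINING_DATA)

# Strategy 2: rules plus a success criterion. Enumerate only legal moves and
# pick the best-scoring one, so an illegal move is impossible by construction.
def rule_based_move(score=lambda m: m):
    return max(LEGAL_MOVES, key=score)

proposals = [imitation_move() for _ in range(20)]
print("rule-based move:", rule_based_move())  # → rule-based move: 10
print("any illegal imitation proposals?",
      any(m not in LEGAL_MOVES for m in proposals))
```

The imitation strategy reproduces whatever was in its data, errors included; the rule-based strategy can explore moves nobody has played, but only because someone encoded the rules first.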
 

· Registered
Joined
·
13,364 Posts
ML models don't need to be programmed with an understanding of the rules, or to be trained on examples of others' behaviour (though that can be helpful). DeepMind started training its Go application by playing itself.

An ML model can learn rules and objectives purely from a first-person perspective of an environment.
If DeepMind's system played itself, did the "game" block illegal moves? You know, to teach/program the rules in?

ChatGPT making illegal chess moves means it didn't "learn" the rules.
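This is the distinction a self-play setup hides: the environment itself can supply the rules the model never "learned", by exposing the legal moves and filtering the agent's raw proposals through them. The sketch below is purely illustrative; the class and function names are made up, and this is not how DeepMind's systems are actually structured.

```python
import random

random.seed(0)

class ToyBoardEnv:
    """Stand-in environment: a piece on squares 0-7, moving one step at a time."""
    def __init__(self):
        self.position = 0

    def legal_moves(self):
        moves = []
        if self.position > 0:
            moves.append(-1)   # can step left unless at the left edge
        if self.position < 7:
            moves.append(+1)   # can step right unless at the right edge
        return moves

def raw_model_proposal():
    # An unconstrained model may propose anything, legal or not.
    return random.choice([-1, +1, +2, -3])

def constrained_move(env, tries=100):
    # Reject illegal proposals; fall back to a random legal move if needed.
    legal = env.legal_moves()
    for _ in range(tries):
        move = raw_model_proposal()
        if move in legal:
            return move
    return random.choice(legal)

env = ToyBoardEnv()
move = constrained_move(env)
print(move in env.legal_moves())  # → True: illegal moves can never escape
```

A chat model playing chess over text has no such environment wrapped around it, which is why it can happily emit an illegal move that a self-play system never could.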

Also, if the training dataset instills a sense of "rules" in the AI, then you're likely also instilling the status quo as "rules".

Of course, this is pretty much how stereotypes work:
"oh, something like this is this..."

AI is useful, but it has limits and it has risks; these have been well known, documented, and discussed for a LONG time.
 

· Registered
Joined
·
13,364 Posts
Sure, I'm just not sure what your point is. It's like saying nuclear weapons have limits. They are still very powerful, and a disruptive technology.

AI is going to have a bigger impact than telecommunications and electricity in time.
AI is dependent on telecom, which is dependent on electricity, so I don't think it's going to have a bigger impact. But I think the introduction and rollout of AI will be much bigger than the initial rollouts of any previous technology, with the possible exceptions of agriculture and larger-scale communities.

I think that AI is going to massively disrupt the information economy, I don't know what the next "age" is going to be, but I think we're going to see a massive restructuring of society, like the industrial revolution.
I also think that right now we have no idea what it will look like.

The politics will get very interesting: we have very little consensus, we're losing our ability to discuss issues, and political factions disagree on basic facts.
Throw in foreign/AI-assisted influence and we're in for a wild ride.
 

· Registered
Joined
·
13,364 Posts
Sure, I'm just not sure what your point is. It's like saying nuclear weapons have limits. They are still very powerful, and a disruptive technology.
Some people think AI is "accurate" and "thinking", while today it is mostly regurgitating and pattern matching, in a relatively simplistic yet high-resolution way.
This will be disruptive, but I think some people are overestimating what today's AI solutions are.

Note that what they are and what they can do are not the same, but there are some rather significant fundamental holes in today's systems.
I also think the next-gen AIs that will address the obvious flaws we know about are going to come fast.
 

· Registered
Joined
·
13,364 Posts
I'm glad I won't be around when the AI layer falls like a pall over our lives and it becomes tough to distinguish whether it was a real person or an imperfect bot doing this.
Already there. Some creators are quite open about using AI extensively in their work.

FYI, in grade school they're already teaching about, and screening for, AI-written essays and the like.
 

· Registered
Joined
·
13,364 Posts
Joyful... you mean the teachers have to screen their students' essays to catch any kid using AI? I find this so sad, and it adds another work burden when teachers are already seriously challenged, i.e. behavioural problems, etc. There's nothing to celebrate about this type of necessary adult vigilance, and of course kids will not appreciate that we are trying to help them with skills development.
They've been screening for plagiarism for years.
 

· Registered
Joined
·
13,364 Posts
And it will get harder for teachers to spot it. I find this all sad... we have so many tools, and a downgrading of literacy on a slow roll for some (not all) folks.

Spell check is fine for adults who should already know how to spell but aren't perfect.

Honest opinion: for young children learning to read and write, no, I don't think so. It is equivalent to a child not knowing how to add, subtract, multiply and divide without a calculator. Same rationale.

Gaining written literacy comes from word recognition and learning how to write coherently and with good punctuation.
Go on social media, most people are functionally illiterate as it is.

I actually had to explain to intelligent, capable staff how to write emails, because I literally couldn't tell what they were trying to communicate.
That was years ago, and I haven't had to deal with new grads in a while, but I don't assume it's getting better.

The calculator debate is interesting. I didn't get one until we started doing trig; now kids get one a few units after learning the concepts. I can say kids today are learning some math concepts years earlier than they did in the past.
Sure, they might not be great at multiplying multi-digit numbers in their head, or adding larger numbers, but they are learning the more significant patterns and insights.

Should we still focus on spelling and basic grammatical construction if they're able to put out their thoughts and have the computer do that "low level" work?
 