The AI thread(t)

joberg

Legendary Member
Since a lot of good points were made in another thread about AI and its ramifications arriving in our societies sooner rather than later, I've opened a new discussion thread to see where we go from here.
While a computer program is still just a bunch of 0s and 1s, the "sentience" aspect is the main concern raised by people working for the big computer corporations.
From Musk to Gates to Pichai and Hawking, the "self-learning" machine is a worry. Some say that, no matter what, the machine will always need its maker, i.e. mankind.
Others are a bit more nervous about it, saying that, in the end, the machine will get rid of us (student vs. master)!

These people certainly have some knowledge of quantum computing. That's where the real concerns lie: a fourth dimension and another type of mathematics altogether.

It's easy for us to anthropomorphize "things" and animals, to give them human traits and character they don't really have... and that's part of the danger too.

Language is one of the problems for us humans to get a grasp on, especially if computers start communicating with other machines in a language of their own, and at too fast a speed!

Lots of questions remain and not a lot of answers are available to us.
 
The whole point of the Turing Test (and the theme of BR 2049, incidentally) is that the question of "sentience" is entirely irrelevant, and even pointless to discuss for that matter, since the only data we have is that which is observable.

AI is built on emulation of observable behavior from empirical data, completely foregoing the underlying logical and emotional mechanisms that (presumably) give rise to our behavior as individuals.
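
To make that concrete, here's a crude toy sketch of the idea in Python (my own illustration with made-up driving data, not how any real system is built): the "learner" just matches a new situation to the closest behavior it has already observed, with no model of the judgement behind that behavior.

```python
# Toy "learn only from observable behavior" sketch. The observed pairs and the
# driving scenario are invented for illustration; the point is that the imitator
# copies recorded actions without any of the reasoning that produced them.

observed = [
    ((30, 50), "hold"),        # (speed mph, distance to car ahead) -> driver's action
    ((30, 10), "brake"),
    ((60, 80), "hold"),
    ((60, 15), "brake"),
    ((20, 100), "accelerate"),
]

def imitate(situation):
    """Return the action whose recorded situation is closest to the new one."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(observed, key=lambda pair: dist(pair[0], situation))
    return closest[1]

print(imitate((55, 12)))   # "brake" - looks sensible, but there is no fear or
print(imitate((25, 90)))   # "accelerate" - understanding behind it, only matching
```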

It would therefore seem that unwashed masses of entities driven by AI would be a lot like flocks of sheep.

Maybe even "electric" sheep?
 
I've read an article that outlined how an AI has already thought through how it would dispose of humanity by accessing the nuclear codes. Granted, it too would likely "die" without human labor to help generate electricity, but the fact that it has even considered it is worrisome, to say the least. I'll have to see if I can find the article again, but I do recall it wasn't from some crackpot conspiracy website. It was a well-cited article.

As an artist, and knowing fellow artists who have been victims of theft due to AI, I'm very much against it.

Skynet isn't as far-fetched as it once was. Scary stuff.
 
I seem to remember an experiment from a few years ago where they let two chatbots/AI talk to each other, emulating humans. After a while, they began speaking in their own language which was completely unrecognizable to the programmers.

The key problem is the myth that machines will always rely on us for their existence. We're growing lazier and rely on technology for far too much as it is. If we reach the point where we think it's better to let them take over the "tedious" everyday running of our lives, effectively without oversight or safeguards, that's the foundation of our own demise.

I think when it comes to AI and robotics, humans always have to be in the chain. If we allow them to first design and then replicate themselves, haven't they effectively become a new inorganic lifeform? After that it's only a matter of time before they realize we are a threat to both them and ourselves, and should probably be eliminated.
 
It helps if you think of current AI as more of an assistant vs. trying to figure out if it will kill all humans. The Lex Fridman podcast has had a lot of AI guys on who really help explain where we are in terms of AI. Here's a link to a video; you can scan through his interviews to find more AI-specific ones.

 
AI is like outsourcing. At first it takes the tedious jobs to help us be more efficient. Then companies reduce the number of people because the AI is doing a lot of the work. Eventually we're all unemployed and AI is doing everything... and the AI wonders what use we are.

This is exactly what happened with outsourcing... first the manufacturing jobs went, but engineering was too high-level... then we taught the outsourcing companies how to design for us, but we wouldn't trust them to manage the money... now we do a lot of our accounting overseas. At some point we're all going to be fighting over service jobs... oh wait, fast-food restaurants already have kiosks to eliminate human order takers... the food makers won't be far behind.

MEANWHILE everyone is excited to let someone else do their drudgery, until they're out of work.
 
The whole point of the Turing Test (and the theme of BR 2049, incidentally) is that the question of "sentience" is entirely irrelevant, and even pointless to discuss for that matter, since the only data we have is that which is observable.

AI is built on emulation of observable behavior from empirical data, completely foregoing the underlying logical and emotional mechanisms that (presumably) give rise to our behavior as individuals.

It would therefore seem that unwashed masses of entities driven by AI would be a lot like flocks of sheep.

Maybe even "electric" sheep?
And since it's programmed by us, it's **** in, **** out;)... but with access to more than 5.39 billion pages on the World Wide Web.:unsure:
 
IMO we are judging the development of AI by the wrong yardsticks, and it may really burn us.

We confidently proclaim that AI isn't capable of "consciousness" because it can only repeat info and mimic conscious things. But we don't understand consciousness itself. We don't know what the transition from 'dead' to 'conscious' looks like when a machine does it. We sort of vaguely expect it to wake up one day like a baby. I see little basis for that idea.


Imagine it's the early 1800s and a guy is being shown a train/automobile in development. He says "All I see is a steam-powered engine and some wheels under the carriage it's going to push. That's a bunch of existing tech. They haven't even started working on building 4 legs for the part that does the work. That thing is generations away from challenging live horses & oxen."

This guy's mistake is assuming that the train/auto's development path will follow the one in the living world. In reality we have trains/cars running 100 mph and hauling thousands of tons of weight. But we've been there for 150 years and still haven't graduated away from wheels. We are nowhere near to having trains/cars walking around on legs like Imperial AT-ATs.
 
But we've been there for 150 years and still haven't graduated away from wheels. We are nowhere near to having trains/cars walking around on legs like Imperial AT-ATs.

I'd like to know how Boston Dynamics think they can scale up one of their robotic dogs or mules. Surely they've had someone ride one already.
 
I'd like to know how Boston Dynamics think they can scale up one of their robotic dogs or mules. Surely they've had someone ride one already.

My guess is they can't go much bigger than their current stuff. The physics challenges will start to outrun the gains.

Stan Winston's big T-Rex in 'Jurassic Park' basically proved that a robotic T-Rex was still a long way off. The materials & mechanicals are not up to it. They had their hands full making it look decent on camera and nothing else. It had to be anchored to the floor of the soundstage, it was powered remotely, they used chromoly tubing & carbon fiber parts to hold down the weight, etc.
 
It is fascinating what AI can do in auto-piloting a vehicle and helping solve technical, medical, and fabrication problems. There certainly is cause for concern about its ability to learn if it's applied to military development, and about those who would use it for power & control. If it develops full "self-awareness", preservation may become its prime directive.
While civilization has made amazing step-jumps in my parents' and my lifetimes, I remain skeptical of a happy ending.
 
A recent report from Goldman Sachs estimates around 300 million jobs could be affected by generative AI, meaning 18% of work globally could be automated, with advanced economies more heavily impacted than emerging markets.

The report also predicts two-thirds of jobs in the U.S. and Europe “are exposed to some degree of AI automation,” and around a quarter of all jobs could be performed by AI entirely.

Researchers from the University of Pennsylvania and OpenAI found some educated white-collar workers earning up to $80,000 a year are the most likely to be affected by workforce automation. (Forbes article).

Limits have to be put in place for any AI to be manageable by its human masters. Without a "Sentinel" as intelligent and fast as the machines, we'll run into trouble. ChatGPT already has a doppelganger bent on ruling the world. The machine has all of this knowledge, and to a certain extent an "intelligence", but without "feelings"!

What are consciousness, feelings, awareness of one's surroundings, and the perception of the "Self"?
"I think, therefore I am" is the basic philosophical question that's already been asked of the machines.
It's like certain VR applications, where the player can "touch" an object.

Not too long ago, some engineering lab did just that with an artificial limb:

"True neuroprosthetic limbs—artificial limbs that feel and behave like the real thing—may be in the distant future. But the results of a new study have brought the technology one step closer to reality. A team led by researchers at the University of Pittsburgh has electrically stimulated the brain of a paralyzed man, allowing him to feel the sensation of touch in his hand again. Recreating that sense fully is one of the greatest challenges facing the field of neuroprosthetics."

These types of programs could be applied to a machine also...food for thought.
 
Global consensus on AI rule development will clearly occur. This is evidenced by civilization's many organizations that have produced very specific international "laws."
However, without global “enforcement” measures consistently applied, these agreements are only nice to have.
 
They do it by stealing everyone's work to feed into their AIs. So they get the labor of talented people without having to pay those people. All those creating these AIs should be in jail for theft.

We need to remember that AI is basically just an advanced copy machine. It's a learning computer fed with the efforts of real people, but it has nothing of its own. It's a blank slate. We should be careful not to anthropomorphize AI, as it is basically just faking it. Unlike animals that have a personality of their own... the AI doesn't. It can mix and throw things together, but it can never create something new. And the fact that it has thought about how to end human existence shows that it is not yet self-aware enough to realize that it will kill itself in the process. So far... what people call AI is just a large repository of fed materials - much of it stolen.
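
To illustrate the "mix and throw things together" point, here's a tiny word-level Markov-chain sketch in Python (my own toy example on an invented sentence, not the mechanism of any actual product): every word it emits was fed to it, and it can only stitch them together in orders it has already seen.

```python
import random

# Toy "copy machine" sketch: a bigram model that can only recombine the words
# it was trained on. The training sentence is invented for illustration.

training_text = (
    "the android dreamed of electric sheep and the sheep dreamed of the android"
)

words = training_text.split()
followers = {}
for current, nxt in zip(words, words[1:]):
    followers.setdefault(current, []).append(nxt)

def remix(start="the", length=10, seed=1):
    """Walk the bigram table, emitting only word pairs already seen in training."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(remix())  # e.g. "the sheep dreamed of the android dreamed of electric sheep"
```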

Artists are suing. But programmers should be suing too, as AI is being fed their work regardless of the copyright notices. It is nothing but theft, and those creating these AIs know it. They are in the process of the biggest criminal theft in human history, and if such people are responsible for AI... then I have no faith that actual AI will come with any conscience or understanding of right and wrong, when its creators don't even care.

Also, if they succeed in getting away with this theft, then copyright law is no longer valid and no one owns anything, since you cannot enforce it through law.
 
My guess is they can't go much bigger than their current stuff. The physics challenges will start to outrun the gains.

Stan Winston's big T-Rex in 'Jurassic Park' basically proved that a robotic T-Rex was still a long way off. The materials & mechanicals are not up to it. They had their hands full making it look decent on camera and nothing else. It had to be anchored to the floor of the soundstage, it was powered remotely, they used chromoly tubing & carbon fiber parts to hold down the weight, etc.
I still like this one. So they've come a long way since Jurassic Park and even the Walking With Dinosaurs show, where you could see external mechanics or the legs of the operator. I'm still baffled as to how they did this:

 
They do it by stealing everyone's work to feed into their AIs. So they get the labor of talented people without having to pay those people. All those creating these AIs should be in jail for theft.

We need to remember that AI is basically just an advanced copy machine. It's a learning computer fed with the efforts of real people, but it has nothing of its own. It's a blank slate. We should be careful not to anthropomorphize AI, as it is basically just faking it. Unlike animals that have a personality of their own... the AI doesn't. It can mix and throw things together, but it can never create something new. And the fact that it has thought about how to end human existence shows that it is not yet self-aware enough to realize that it will kill itself in the process. So far... what people call AI is just a large repository of fed materials - much of it stolen.

Artists are suing. But programmers should be suing too, as AI is being fed their work regardless of the copyright notices. It is nothing but theft, and those creating these AIs know it. They are in the process of the biggest criminal theft in human history, and if such people are responsible for AI... then I have no faith that actual AI will come with any conscience or understanding of right and wrong, when its creators don't even care.

Also, if they succeed in getting away with this theft, then copyright law is no longer valid and no one owns anything, since you cannot enforce it through law.
Artists, and humans in general, are always using a bit of that idea + a sprinkle of this other design = something "new". The major bases were invented a long time ago; we just do variations on the main idea.
Example: a pair of pants.
Two legs, a seat, a front opening (base).
Variations: too many to enumerate across the centuries:oops:
On 60 Minutes, Mr. Pichai explained that their AI produced a fake scientific experiment backed up by 5 other studies/books (also fake, with fake author names) without any prompting. This is highly problematic! Again, I'll repeat myself until I'm blue in the face: quantum computing is where the real threat will be:eek::eek::eek::eek:
 
