Will the rise of AI replace us all??

The objective of an objective is to reach some objective... i.e., parallel lines that eventually meet! :thumb:


And that is straight from the Department of Redundancy Department.
 
Haven't had an alcoholic drink since 11/25/1995... AI will need other means to confuse me, rather than booze! :thumb:

For drinkers... good luck!! :facepalm:
 
I'm pretty sure my smart TV is watching me! :ermm:
When I purchased my last TV, a smart TV was suggested. Once I was told it listens and waits for your command, that settled it: no smart TV for me.
Watching you? Now that is creepy. Wonder if it is the little red light that blinks when you turn it on.
 
Too Soon:

Govt.-backed "Medi-Improvement" [MI] chips and devices will become available for "in-body" placement. The next step, soon after availability, will be govt.- and medical-sponsored suggestions to have MIs in everyone as a means to maintain good health. Then, not too long thereafter, MIs will become legally required in-body for travel and other activities. The step after that: MIs placed in every newborn. OMG!!!

Really glad I'm from the early 1950's!
 
When I purchased my last TV, a smart TV was suggested. Once I was told it listens and waits for your command, that settled it: no smart TV for me.
Watching you? Now that is creepy. Wonder if it is the little red light that blinks when you turn it on.


It's winking at you. I think it likes you, because you "turn it on!"
 
From the horse's mouth (ChatGPT):

"The rise of AI is certainly transforming the way we work and live, but it's unlikely to completely replace humans. While AI has made significant progress in performing certain tasks, it still lacks the flexibility and adaptability of human intelligence, particularly when it comes to complex decision-making and creative problem-solving.

Moreover, AI is a tool created by humans and is only as intelligent and effective as the humans who design and train it. It requires significant human input to develop, maintain, and improve, and there will always be a need for human oversight and intervention to ensure its ethical use.

While some jobs may be automated, it's likely that new jobs will emerge in fields related to AI development, maintenance, and governance. It's important to embrace the potential of AI while also recognizing and preparing for its impact on the workforce and society as a whole."

Famous last words ...
 
Greetings,
OK. I'm surprised no one has asked ChatGPT (or whatever it's called) this question. Perhaps someone who has played around with it will, and post the response...


Edit: Aw snap. Mr. X must be listening to my Alexa feed...
 
Will AI destroy humanity?

ChatGPT
No, AI is not inherently programmed or designed to destroy humanity. While there are certainly concerns about the misuse or unintended consequences of AI, such as bias, lack of transparency, and safety issues, these risks can be managed and mitigated through responsible development and governance of AI.

It's important to remember that AI is a tool created by humans and is only as ethical and effective as the humans who design and use it. As with any powerful technology, there is always a risk of unintended consequences, but this does not mean that AI is inherently destructive.

Additionally, there are many potential benefits of AI, such as improved healthcare, more efficient transportation, and better disaster response, that can greatly improve the quality of life for people around the world.

Ultimately, the impact of AI on humanity will depend on how we choose to develop, deploy, and regulate it. It's important for policymakers, industry leaders, and the public to work together to ensure that AI is used in a responsible and ethical manner, while also recognizing its potential to create significant benefits for society.




User
But, is there a risk it could destroy humanity?

ChatGPT
While there is no inherent desire for AI to destroy humanity, there are certainly potential risks associated with its development and deployment that could have negative consequences for society. These risks include:

Accidental misuse or unintended consequences: AI systems are only as good as the data they are trained on and the algorithms that drive them. If an AI system is not properly designed or tested, it could produce unexpected and harmful outcomes. For example, an autonomous vehicle that fails to detect a pedestrian could cause a fatal accident.

Malicious use: AI systems could also be used by individuals or groups with harmful intent. For example, hackers could use AI to bypass security systems, or autonomous weapons could be used to carry out attacks without human oversight or control.

Job displacement: The rise of AI could also lead to significant job displacement as automation replaces human workers. This could lead to social and economic upheaval and potentially exacerbate existing inequalities.

Bias and discrimination: AI systems can perpetuate and amplify biases and discrimination, especially if they are trained on biased data or designed with flawed algorithms. This could have negative consequences for marginalized groups and exacerbate existing inequalities.

It's important to address these risks and work towards responsible development and governance of AI to ensure that it benefits society as a whole. This requires collaboration between policymakers, industry leaders, and the public to establish ethical standards and ensure that AI is developed and deployed in a responsible and transparent manner.
 
In various non-Euclidean geometries, parallel lines do meet at infinity. Such geometries are used to understand non-Newtonian physics, e.g., general relativity.
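
For the curious, here's the standard sketch behind "meet at infinity," using homogeneous coordinates from projective geometry (one way to make the phrase precise):

\[
\begin{aligned}
&\text{Two parallel lines: } ax + by + c_1 = 0 \text{ and } ax + by + c_2 = 0,\quad c_1 \neq c_2.\\
&\text{Homogenize via } (x, y) \mapsto [X : Y : Z],\ x = X/Z,\ y = Y/Z: \quad aX + bY + c_i Z = 0.\\
&\text{Subtracting the two equations: } (c_1 - c_2)\,Z = 0 \ \Rightarrow\ Z = 0,\ \text{then } aX + bY = 0.\\
&\text{So both lines pass through the single point at infinity } [\,b : -a : 0\,].
\end{aligned}
\]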
 
Greetings,
In response to Mr. X's post (#72)... Sounds a lot like the dentist who says, "Trust me. This won't hurt a bit."


Edit: Re: Mr. X's post #72. "...responsible development...ethical standards...deployed in a responsible and transparent manner."


 
AI already trains itself. That’s the definition of machine learning.

AI can already write code. It’s just a matter of time before AI will create more AI without human involvement.
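
For what it's worth, here's a minimal Python sketch of what "trains itself" means mechanically: the program adjusts its own parameter from data, with no human editing the numbers. (Real systems train giant neural networks this way at vast scale; the dataset and numbers below are made up purely for illustration.)

[CODE]
# A toy "machine learning" loop: fit y = w*x to data by gradient descent.
# The parameter w is adjusted by the program itself, not by a human.

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # made-up points, roughly y = 2x

w = 0.0     # the model's single parameter, initially know-nothing
lr = 0.01   # learning rate (step size)

for epoch in range(1000):
    grad = 0.0
    for x, y in data:
        grad += 2 * (w * x - y) * x   # derivative of squared error w.r.t. w
    w -= lr * grad / len(data)        # the self-adjusting "learning" step

print(round(w, 2))  # ~1.99 -- recovered from the data, not hand-coded
[/CODE]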
 
I think the power of these natural language programs will peter out. They learn by reading stuff, but they will soon have read everything there is to read.
 
AI already trains itself. That’s the definition of machine learning.

AI can already write code. It’s just a matter of time before AI will create more AI without human involvement.
Agreed. I've tracked this thread. I don't think most folks grasp the difference between algorithmic response and cognitive response.

Peter
 
Computers running qubits and utilizing light-driven chips are on the verge of exponentially increasing processing speeds. For certain tasks, supercomputers already exceed human capabilities. A simple old example: at present no human can beat a computer at chess. A more recent example: no human can win a game of Go against a computer. However, these examples are child's play for the next generation of computers.
Computers can already draw, paint, and write poetry and novels you cannot distinguish from human product. The Hollywood writers' and artists' guilds are already in a battle to prevent computers from analyzing their work and then creating work indistinguishable from the artist's or writer's oeuvre.

Many of the capabilities that prior posts claimed to be unique to humans are already not unique to humans and are currently being done by AI.
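
As a footnote on the chess and Go examples: the classical core of those game programs is minimax search over future positions. Below is a toy Python sketch of just that bare idea (the game tree is invented for illustration; real engines add alpha-beta pruning and evaluation functions, and modern Go programs pair neural networks with Monte Carlo tree search):

[CODE]
# Minimax on a tiny hand-made game tree.
# Inner lists are positions with moves to choose from; ints are final scores.

def minimax(node, maximizing):
    """Best achievable score from `node`, assuming both sides play perfectly."""
    if isinstance(node, int):   # leaf: a terminal position's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

tree = [
    [3, 12],   # move 1: opponent then picks the minimum -> 3
    [2, 4],    # move 2: opponent picks 2
    [14, 1],   # move 3: opponent picks 1
]

print(minimax(tree, maximizing=True))  # 3 -- the best guaranteed outcome
[/CODE]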
 
Please do understand... AI can be "unplugged" ...if... that becomes really required.

Sure, AI could become a huge pain in the A in certain situations and along some lines.

However, if all else fails to keep AI acting as it should... [in other words, if it gets out of hand] computers don't work well with bullets in them, or while on fire, or when their energy supply is curtailed.

Humans will win... if it becomes necessary to actually enter into conflict with AI.
 
Greetings,
Mr. A.: "...AI can be 'unplugged'." Read post #80 again. Twice.



From my post #30. "Truly autonomous, self aware AI IS coming and I would not be surprised it will be in the next 10 years, if not sooner....much sooner."
 
Greetings,
Mr. A.: "...AI can be 'unplugged'." Read post #80 again. Twice.



From my post #30. "Truly autonomous, self aware AI IS coming and I would not be surprised it will be in the next 10 years, if not sooner....much sooner."

OK... So What??!!... "AI can already write code. It’s just a matter of time before AI will create more AI without human involvement".

It's still just AI. Humans are RI [Real Intelligence]. The type of intelligence that can create and/or kill with hands, minds and weapons. I'm not saying that AI will not become a problem. I am saying I bet on humans doing what ever is required - period.
 
Please do understand... AI can be "unplugged" ...if... that becomes really required.

That's not actually obvious at all going forward. As a thought experiment, how would you unplug a system that spans millions of computers around the globe and that may have a sense of self-preservation? Using the example of HAL above, it wasn't that he was sentient and didn't want to die... it was that he perceived a threat to the mission.

Want to turn off the AI? Gotta get in the building... but suddenly your electronic key doesn't work. You try to turn off the power; suddenly the grid doesn't respond to your command. Call the Army to blow up the building? Suddenly the helicopter won't start and the radios don't work. And by the way... there are a hundred other buildings around the globe containing usable processing power, so you gotta deal with them too. You get the idea.

Seems kinda far-fetched but look at computer security today. We have a very good understanding of all the technical components in systems but we STILL have difficulty stopping nation-state actors. How the heck are we going to deal with the complexity of AI systems when we don't even fully understand how they work? Will we have to hire psychologists to validate AI systems? :)

To be clear, I don't think this is where we are now or will be in the near future. But it seems inevitable barring some catastrophic event that upsets civilization as we know it.
 
That's not actually obvious at all going forward. As a thought experiment, how would you unplug a system that spans millions of computers around the globe and that may have a sense of self-preservation? Using the example of HAL above, it wasn't that he was sentient and didn't want to die... it was that he perceived a threat to the mission.

Want to turn off the AI? Gotta get in the building... but suddenly your electronic key doesn't work. You try to turn off the power; suddenly the grid doesn't respond to your command. Call the Army to blow up the building? Suddenly the helicopter won't start and the radios don't work. And by the way... there are a hundred other buildings around the globe containing usable processing power, so you gotta deal with them too. You get the idea.

Seems kinda far-fetched but look at computer security today. We have a very good understanding of all the technical components in systems but we STILL have difficulty stopping nation-state actors. How the heck are we going to deal with the complexity of AI systems when we don't even fully understand how they work? Will we have to hire psychologists to validate AI systems? :)

To be clear, I don't think this is where we are now or will be in the near future. But it seems inevitable barring some catastrophic event that upsets civilization as we know it.

You're right... we're "AI" doomed! OMG What will I do??... What will I do!!... :facepalm: :dance: :nonono: :lol:

Tending to climate warming, ecospheric weather destabilization, and ocean acidification is what one of my companies is laser-focused on. AI be damned at this point!!
 
SteveK may not have listened to the Firesign Theater records in his formative years... ;)
Or possibly it was before his time. I was quite a young lad when I heard it from an older gent.
 
A fundamental question posed in the AI channels is: who does not benefit from AI as it progresses? That group of answers is sobering, especially when considered through classic risk-versus-reward thinking.
 