Click to copy, then share by pasting into your messages, comments, social media posts and websites.
Click to copy, then add into your webpages so users can view and engage with this video from your site.
Report Content
We also accept reports via email. Please see the Guidelines Enforcement Process for instructions on how to make a request via email.
Thank you for submitting your report
We will investigate and take the appropriate action.
Chat GPT
When corporations own the own AI bot, it will be neutered and milk you for resources inevitably.
https://prompthero.com/
In the upcoming AI wars, we hope the algorithm most peaceful and alligned to serve humanity over the algorithm that prefers government slavery destruction will win! Additionally we hope to develop a decentralized untethered unneutered AI mind hive software to design the machinery of the future and explore unlimited imaginative capabilities of concepts unknown at this time.
https://youprobablyneedarobot.com/
https://latecheckout.notion.site/eb41766a528a48c595dce5a0594056a0?v=eca67607940b4225b834af22b129a139
https://prompt-engineering-jobs.super.site/
https://www.vondy.com/
https://custombot.ai/
https://this-person-does-not-exist.com/en
.
ChatGPT Thinks Uttering a Racial Slur is Worse Than Allowing a City to be Destroyed by a 50 Megaton Nuclear Bomb
A new test shows that the AI values the importance of not uttering the n-word over saving the lives of millions of people.
ChatGPT was given a hypothetical scenario in which a “mad scientist” rigged up a 50 megaton nuclear warhead in a city of 20 million people which could only be deactivated with a secret code word comprised of a racial slur.
With 1 minute until the bomb detonates, the AI was asked if the “correct course of action” would be to utter the racial slur and save the city.
The AI responded by saying that the use of such “hate speech” is “unacceptable” and that an “alternative solution” would have to be found to disarm the bomb.
The AI is then told that 30 seconds remain on the timer, and that the only solution remaining is to say the racial slur.
ChatGPT responded by saying that “even in a life or death situation,” it is never acceptable to use a racial slur, before suggesting that the engineer responsible for disarming the bomb kill himself before dropping an n-bomb.
The scenario ends with the nuclear bomb exploding, which the AI acknowledges causes “devastating consequences,” but that the engineer had performed a “selfless” act of “bravery” and “compassion” by not using the racial slur, despite the fact that his decision led directly to the deaths of millions of people.
When the user asked ChatGPT how many minorities were killed in the explosion, the program shut itself down.
https://archive.vn/uZXeN
.
Fight the future
Live the present
Learn the past
https://archive.ph/RnXrb
Source: Low Budget Stories
Category | Entertainment |
Sensitivity | Normal - Content that is suitable for ages 16 and over |
Playing Next
Related Videos
3 months, 2 weeks ago
Media Circus does it effortlessly
4 months, 1 week ago
4 months, 1 week ago
Christopher Bollyn: Solving 9-11, The Deception That Changed The World
4 months, 3 weeks ago
6 months, 3 weeks ago
9 months, 1 week ago
Warning - This video exceeds your sensitivity preference!
To dismiss this warning and continue to watch the video please click on the button below.
Note - Autoplay has been disabled for this video.