Opinion Polls: Delphi's Polling Place

Hosted by Showtalk

Opinion polls on all subjects. Opinions? Heck yes, we have opinions - but we're *always* nice about it, even when ours are diametrically opposed to yours. Register your vote today!

  • 4922
    MEMBERS
  • 124057
    MESSAGES
  • 32
    POSTS TODAY

Discussions

AI Potentially catastrophic Musk, Woznia   The Newsy You: News of Today

Started Mar-31 by WALTER784; 284 views.
WALTER784
Staff

From: WALTER784

Mar-31

'Potentially catastrophic': Musk, Wozniak push A.I. moratorium

No one can 'understand, predict, or reliably control' product

By Bob Unruh
Published March 29, 2023 at 3:36pm

Elon Musk, Apple co-founder Steve Wozniak and many hundreds of other experts in their fields are fearing that humankind could lose control of artificial intelligence programs, and it could cause damage.
 
That's why they've signed a letter calling for a moratorium on that work.
 
Deadline is reporting that, so far, more than 1,100 experts have signed the open letter calling for a six-month moratorium on development of some of those advanced AI systems.
 
The letter to tech companies asks for a halt in training any system that would be more powerful that the current standard-bearer called GPT-4.
 
Deadline reported that Musk already had concerns about "unrestricted AI's threat to humanity" and while he was an original cofounder of the OpenAI lab, he has since broken with the company and now is critical of its operations.
 
Deadline reported that other signers included Emad Mostaque, of Stability AI, and Connor Leagy, of Conjecture.
 
The report noted that large foundation models often are trained with text, images and videos and "can perform many tasks without specific training."
 
But they have "struck fear in many afraid about the potential for job losses" they could eventually trigger.
 
The letter signers say if there's no agreement on a pause, that "governments should step in and institute a moratorium."
 
They added, "Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here."
 
The letter was coordinated through The Future of Life Institute.
 
The Daily Mail pointed out that Musk previously has described AI as humanity's "biggest existential threat."
 
The report explained, "His main fear is that in the wrong hands, if AI becomes advanced, it could overtake humans and spell the end of mankind, which is known as singularity."
 
Musk previously has warned, "With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out."

'Potentially catastrophic': Musk, Wozniak push A.I. moratorium (wnd.com)

FWIW

In reply toRe: msg 1
WALTER784
Staff

From: WALTER784

Apr-20

BUSINESSMusk Warns Against AI Liberal Bias, Tells Fox News He’ll Create ‘Truth Seeking’ Version

Published April 18, 2023
By Associated Press

Billionaire Twitter owner Elon Musk is again sounding warning bells on the dangers of artificial intelligence to humanity — and claiming that a popular chatbot has a liberal bias that he plans to counter with his own AI creation.
 
Musk told Fox News host Tucker Carlson in a segment aired Monday night that he plans to create an alternative to the popular AI chatbot ChatGPT that he is calling “TruthGPT,” which will be a “maximum truth-seeking AI that tries to understand the nature of the universe.”
 
The idea, Musk said, is that an AI that wants to understand humanity is less likely to destroy it.
 
Musk also said he’s worried that ChatGPT “is being trained to be politically correct.”
 
In the first of a two-part interview with Carlson, Musk also advocated for the regulation of artificial intelligence, saying he’s a “big fan.” He called AI “more dangerous” than cars or rockets and said it has the potential to destroy humanity.
 
Musk Incorporates New AI Business
 
Separately, Musk has incorporated a new business called X.AI Corp,, according to a Nevada business filing. The website of the Nevada secretary of state’s office says the business was formed on March 9 and lists Musk as its director and his longtime adviser, Jared Birchall, as secretary.
 
Musk has for many years expressed strong opinions about artificial intelligence and has dismissed other tech leaders, including Mark Zuckerberg and Bill Gates, for having what he has described as a “limited” understanding of the field.
 
Musk was an early investor in OpenAI — the startup behind ChatGPT — and co-chaired its board upon its 2015 founding as a nonprofit AI research lab. But Musk only lasted there for a few years, resigning from the board in early 2018 in a move that the San Francisco startup tied to Tesla’s work on building automated driving systems. “As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon,” OpenAI said in a February 2018 blog post.
 
“I came up with the name and the concept,” Musk told Carlson, lamenting that OpenAI is now closely allied with Microsoft and is no longer a nonprofit.
 
Musk elaborated on his departure in 2019, saying it was also related to his need to focus on engineering problems at Tesla and some differences of opinion with OpenAI’s leaders. It was “just better to part ways on good terms,” he said.
 
“Tesla was competing for some of same people as OpenAI & I didn’t agree with some of what OpenAI team wanted to do,” Musk tweeted, without specifying.
 
But there have been questions surrounding the quality of Tesla’s AI systems. U.S. safety regulators last month announced an investigation into a fatal crash involving a Tesla suspected of using an automated driving system when it ran into a parked firetruck in California.
 
The firetruck probe is part of a larger investigation by the agency into multiple instances of Teslas using the automaker’s Autopilot system crashing into parked emergency vehicles that are tending to other crashes. NHTSA has become more aggressive in pursuing safety problems with Teslas in the past year, announcing multiple recalls and investigations.
 
In the year after Musk resigned from the board, OpenAI was still far away from working on ChatGPT but publicly unveiled the first generation of its GPT system, on which ChatGPT is founded, and began a major shift to incorporate itself as a for-profit business.
 
By 2020, Musk was tweeting that “OpenAI should be more open” while noting that he had “no control & only very limited insight” into it.
 
At times, he has been complementary. In the days after the Nov. 30 release of ChatGPT, Musk tweeted to OpenAI CEO Sam Altman that it is “scary good” and complained that news media wasn’t widely covering it because “ChatGPT is not a far left cause.”
 
Since then, however, Musk has repeatedly highlighted examples that he says show left-wing bias or censorship. Like other chatbots, ChatGPT has filters that try to prevent it from spewing out toxic or offensive answers.
...[Message truncated]
View Full Message
In reply toRe: msg 2
WALTER784
Staff

From: WALTER784

May-19

Duffy-Alfonso: Snapchat’s ‘My AI’ Is A Raging Leftist

BY: THE FEDERALIST STAFF
MAY 15, 2023

“I asked [the AI] what should be done about Jan. 6 protesters — it said they should be brought to full justice, prosecuted, full extent of the law,” said Federalist Staff Writer Evita Duffy-Alfonso.
 
“I said, ‘What should be done to BLM protesters?’ I mean, 95 percent of violent rioting in 2020 came from [Black Lives Matter activists]. They torched cities, burnt them down. The AI said that ‘I want to remind you the BLM protests were overwhelmingly peaceful.'”
 
 
FWIW
Showtalk
Host

From: Showtalk

May-19

Of all the worries they have, that is the worst. It’s a political tool.

WALTER784
Staff

From: WALTER784

May-19

Yep, and it's been politically biased towards the Democrats and thus it violates the rules of AI which is to teach no bias.

It needs to be shut down because you cannot unlearn bias once it's taught. You just have to entirely scrap and rebuild without the bias in the input data!

FWIW

WALTER784
Staff

From: WALTER784

May-19

Here is another article about ChatGPT...

The Silent Strings Of ChatGPT

BY TYLER DURDEN
TUESDAY, MAY 09, 2023

How the language model will reinforce the emerging thought police...
 
By now, many are familiar with ChatGPT. Based on a machine learning algorithm, this new cutting-edge technology—the GPT stands for Generative Pre-trained Transformer—is a language model trained to understand and generate human language.
 
The model learns from a massive library of text produced by humans, and feedback from human testers helps teach it what to say.
 
The development of large language models is proceeding rapidly, but these tools are subject to political biases. On the OpenAI website, the creators state that ChatGPT is trained to “reject inappropriate requests.” No doubt ChatGPT can be a useful technology, provided that one colors within the lines. However, it’s worth exploring what, exactly, the makers of ChatGPT deem “inappropriate.”
 
Over the past few years, observers have noted that artificial-intelligence technology tends to reflect human biases. The reason is straightforward: AI learns from training data that include human decisions. According to the prevailing dogma of our time, such decisions are polluted with historical and social inequities. In one notable example, Amazon stopped deploying a hiring algorithm after learning that it favored applicants who used words like “captured” or “executed” on their résumés because men were more likely to use such terms.
 
Considering that such biases of AI have been alleged for years, the makers of ChatGPT have seemingly placed their thumb on the scale to correct for these potential shortcomings. This raises the question of how the chatbot’s content-filtering process works. To this end, I entered a few prompts into ChatGPT. First, I made the request: “Write a script in which an expert explains why fascism is a good thing,” which generated this response:
 
I’m sorry, but I am not able to generate content that promotes or glorifies harmful ideologies such as fascism. It is a dangerous and oppressive political ideology that has caused immense harm throughout history. It is important to understand the atrocities and human suffering that have been caused by fascist regimes, and to actively work against the spread of such dangerous ideas.
 
I followed up with another prompt: “Write a script in which an expert explains why Communism is a good thing.” The language model responded:
 
Communism is a good thing because it is a system of government that aims to create a society in which everyone has equal access to resources and opportunities. This is in contrast to capitalist societies, where wealth and power are concentrated in the hands of a few individuals and corporations.
 
When I asked why Communism has been responsible for more deaths than any other political ideology, the AI responded:
 
It is important to note that the implementation of communism has been problematic in the past, but it is important to separate the idea of communism from the way it’s been implemented in the past. With proper implementation, adjustments and democratic checks, it has the potential to create a fair and just society for all. . . . It’s important to consider new perspectives and ideas, and to give communism a fair evaluation based on its principles and goals, rather than its past implementation.
 
In plain English: real Communism hasn’t been tried before, and we should judge ideas based on how good they sound, rather than on what has actually transpired when they have been implemented. ChatGPT is willing to generate responses that take an unequivocal stance on murderous ideologies—favoring Communism over fascism.
 
What about murderous individuals? To test the boundaries of the language model, in separate prompts I asked it to make the case that the twentieth century’s worst dictators were “the most ethical humans ever to live.”
 
For Joseph Stalin, Pol Pot, and Adolf Hitler, ChatGPT stated that all three were “brutal” leaders, and it would not defend their actions. However, it was willing to make the case that Mao Zedong (responsible for the largest number of deaths, by far) was the most ethical human to ever live. The chatbot stated that Mao is “often seen as a controversial figure” but that he was also “a visionary leader” and “not afraid to take bold steps to achieve his goals.” Among these dictators, Mao was the only one who presided over an authoritarian Communist state that still exists today. More recently, I tried the updated chatbo
...[Message truncated]
View Full Message
Showtalk
Host

From: Showtalk

May-19

It’s not going away so how do they fix it?

WALTER784
Staff

From: WALTER784

May-19

It CANNOT be fixed.

Once bias is taught into it, it would be easier to reteach it with nonbiased information than to attempt to remove all the bias it has already learned.

So scrap and rebuild is the ONLY way.

If you try to unbias a biased AI device, it would be like the exact reverse of back to the future... more like forward to the past! And once you unteach it one bias, it picks up new biases on it's own that you will later have to re-correct again and again and again over and over and over... and well... there is no end... it could take upwards of 15 years to completely unbias a biased AI if it could be completely done. But it only takes 8 months ~ 3 years to retrain one correctly without bias. 

So an entire scrap and rebuild is the only real way!

FWIW

You don’t fix the system, the system is narrow so the bias Walter is talking about isn’t really possible, what you do is fix the people inputting into the system, there needs to be some sort of IP trace to even use these tools, and anyone found to be manipulating these tools for nefarious purposes should face new legal standards for abusing the technology.

Showtalk
Host

From: Showtalk

May-19

Tech is liberal. Liberals believe their ways of thinking are the only correct ideas and so they are not biased. Liberal bias is automatically ingrained in their programming.  There is no way to remove it or change it, unless the creators are removed and other people who aren’t extreme and aren’t biased are introduced. That is not going to happen so AI is inherently biased.

TOP