Wednesday, May 17, 2023

Elon Musk is right: we need to regulate AI now


We need to do something about AI before it’s too late.

You may have heard some variation of this statement dozens of times in recent years. You’ve probably heard it hundreds of times in the months since ChatGPT launched.

Something needs to be done about AI, or else. The growing concern is that if artificial intelligence continues to develop at its current pace, a catastrophe is likely to occur. Whether it’s a tsunami of misinformation, millions of lost jobs or the end of the world itself, the AI revolution carries huge risks.

In March, an open letter called for all labs to pause AI development for six months, during which time governments could work on sensible regulation. It was signed by tech and academic luminaries including Elon Musk, Apple co-founder Steve Wozniak and Yuval Noah Harari, author of Sapiens.

“Over the past two years, new AI tools have emerged that threaten the survival of human civilization,” Harari wrote last month. “Artificial intelligence has acquired some remarkable capabilities of manipulating and generating language…and thus AI has infiltrated the operating system of our civilization.”

Disturbing words from the man who wrote the book on humankind.

The open letter says it is time to put the guardrails in place because AI will soon be too smart to be constrained. Or, as Musk puts it, if “we only put the regulations in place after something terrible happens, it may be too late to actually put the regulations in place. AI may be in control at this point.”

But there is another reason for lawmakers to move on AI now. History tells us there is a limited window in which AI regulation is politically possible.

The problem, as usual, is the culture war: the way so many important issues are co-opted and made partisan by politicians and online pundits bent on weaponizing the kind of tribalism on display daily on social media platforms like Twitter. If AI becomes part of the culture war, thoughtful and comprehensive regulation will be very difficult to achieve.

Politicization may have already begun. That Musk quote above? He delivered it during an appearance on Tucker Carlson’s show, back when Carlson still had a show. This is how the former Fox host teed up one of Musk’s segments:

“In the long term, AI may become autonomous and take over the world. But in the short term, it will be used by politicians to control what you think, to end your autonomous rule and end democracy on the eve of a presidential election.”


Bad precedents

The unchecked spread of AI could lead to disaster. But if there’s one thing US lawmakers have proven adept at, it’s courting disaster for political gain, often through fearmongering. Portraying AI as a plot to end democracy, as Carlson did, is one of many ways this could happen. Once blood-boiling talking points are established, it can be hard to quell the anger.

You don’t have to look far to see examples of pathological partisanship. As I write these words, Congress is playing a game of chicken over raising the debt ceiling. GOP leaders refuse to authorize the government to borrow money to pay its bills unless the White House agrees to cut green energy incentives, repeal Biden’s student loan forgiveness initiative and reduce Social Security spending.

It is an example of politics corrupting what should be a straightforward process. Raising the debt ceiling was once an administrative ritual, but in recent decades it has become a political football. And the risks are real: if neither side blinks and the ceiling isn’t raised, millions will lose access to Medicare, the military won’t get paid and global markets will be roiled by the United States defaulting on its debt obligations.

Again, this should be easy – much easier than regulating AI. But it shows how politics can corrupt even the clearest of goals.

Climate change, and governments’ continued failure around the world to address it adequately, is perhaps the best example of the culture war stalling progress. Compromise becomes difficult when one side says climate change is catastrophic while the other sees it as exaggerated or unreal. A similar divide would make AI regulation slow at best, impossible at worst.

Even on issues where there is bipartisan consensus that something needs to be done, Democrats and Republicans often pull in opposing directions. Virtually everyone agrees that Big Tech should be regulated. Democrats worry that highly profitable tech companies don’t adequately protect data and bully smaller competitors. Republicans cry censorship and claim Silicon Valley elites are eroding free speech. Yet no major bill reining in Big Tech has ever been passed.

The same inertia could afflict AI regulation if the parties, despite agreeing that something should be done, prescribe different solutions.

First steps toward AI regulation

Developing comprehensive regulation that addresses AI’s potential externalities will take years. But there are some quick, easy rules that can and should be applied now, as the nearly 28,000 people who signed the Musk-backed open letter called for.

First, regulation should force more transparency from AI developers. That could mean disclosing when AI is used, as in the case of companies that use AI algorithms to screen job or rental applications. California is already addressing this, with a bill that would require corporations to let people know when AI-powered algorithms are being used to make decisions on the company’s behalf.

We also need companies like OpenAI, which is behind ChatGPT, to make available to researchers the data their chatbots are trained on. Copyright claims are likely to abound in the age of artificial intelligence (how do we compensate news publications for the stories chatbots like ChatGPT base their answers on, or artists for the images AI art generators use as input?). More transparency about the data AI systems are trained on would help keep these disputes tractable.

Perhaps most importantly, AI must declare itself to be AI.

One of the big concerns about AI is its ability to appear persuasive and convincing – dangerous qualities in the wrong hands. Ahead of the 2016 elections, Russia used fake social media accounts to try to sow discord over controversial issues such as immigration and racial tension. If powered by AI, such rabble-rousing attempts would be far more effective and harder to spot.

In the same way that Instagram forces influencers to use #ad when they’re paid for a post, Facebook and Twitter accounts should have to declare themselves to be AI. And deepfake videos should be flagged in a way that identifies them as products of artificial intelligence.

Yvette Clarke, a Democratic congresswoman from New York, proposed such measures in a bill submitted earlier this month. It came in response to the Republican National Committee’s release of an anti-Joe Biden ad created with AI imagery, heralding more AI malarkey to come as the 2024 election approaches.

Artificial intelligence has not yet been drawn into the culture war the way climate change or even Big Tech has. But how long will that remain the case?

Editors’ note: CNET uses an artificial intelligence engine to create some personal finance explainers that are edited and verified by our editors. For more, see this post.


