AI is in the crosshairs of Washington, but there is no clear consensus

US lawmakers are debating what guardrails to place around fast-developing artificial intelligence, but months after ChatGPT caught Washington's attention, a consensus is far from certain.

Interviews with a US senator, congressional staff, AI companies and interest groups show that many options are being discussed.

The debate comes to a head on Tuesday, when OpenAI CEO Sam Altman makes his first appearance before a Senate panel.

Some proposals focus on AI that can endanger people’s lives or livelihoods, such as in medicine and finance. Other possibilities include rules to ensure that AI is not used to discriminate or violate someone’s civil rights.

Another debate is whether to regulate the developer of the AI or the company that uses it to interact with consumers. OpenAI, the company behind the ChatGPT chatbot, has discussed creating an independent AI regulator.

It is not clear which approaches will win out, but some in the business community, including IBM and the US Chamber of Commerce, favor regulating only critical areas such as medical diagnostics, which they call a risk-based approach.

If Congress decides new laws are necessary, the Chamber's AI commission advocates that "risk be determined based on the impact to people," says Jordan Crenshaw of the Chamber's Technology Engagement Center. "A video recommendation may not pose as high a risk as decisions made about health or finances."

The growing popularity of so-called generative AI, which uses data to create new content such as the human-like prose of ChatGPT, has sparked concerns that the rapidly evolving technology could encourage cheating on tests, fuel misinformation and give rise to new types of scams.

The rise of AI has led to a series of meetings, including a visit to the White House this month by the CEOs of OpenAI, its sponsor Microsoft Corp and Alphabet Inc, who met with President Joe Biden. Congress is similarly engaged, say congressional aides and technology experts.

“House and Senate staff have woken up and been asked to get to work,” says Jack Clark, co-founder of Anthropic, a high-profile AI startup, whose CEO also attended the White House meeting. “People want to get ahead of AI, partly because they feel they didn’t get ahead of social media.”

As lawmakers catch up, Big Tech’s top priority is lobbying against a “premature overreaction,” said Adam Kovacevich, head of the Chamber of Progress, a pro-tech group.

And while lawmakers like Senate Majority Leader Chuck Schumer are determined to tackle AI issues in a bipartisan way, the truth is that Congress is polarized, the presidential election is next year, and lawmakers are busy with other big issues, like raising the debt ceiling.

Schumer’s proposed plan calls for new AI technologies to be tested by independent experts before they are released, and advocates for transparency and providing the government with the data it needs to prevent harm.

Government Micromanagement

The risk-based approach means that AI used to diagnose cancer, for example, would be vetted by the Food and Drug Administration, while AI for entertainment would be unregulated. The European Union has moved towards the approval of similar standards.

But Democratic Sen. Michael Bennet, who has introduced a bill to create a government task force on AI, doesn’t think it’s enough to focus on the risks. He advocates a “values-based approach” that prioritizes privacy, civil liberties and rights.

Risk-based rules may be too rigid and fail to detect dangers such as the use of AI to recommend videos that promote white supremacy, added a Bennet adviser.

Lawmakers have also debated how best to ensure AI isn’t used to discriminate racially, for example when deciding who gets a low-interest mortgage, according to a person familiar with the congressional debates who was not authorized to speak to journalists.

At OpenAI, staff have considered broader oversight.

Cullen O’Keefe, an OpenAI research scientist, proposed in an April talk at Stanford University creating an agency that would require companies to obtain licenses before training powerful AI models or operating the data centers that facilitate them.

The agency, according to O’Keefe, could be called the Office for AI and Infrastructure Security, or OASIS.

When asked about the proposal, Mira Murati, OpenAI’s chief technology officer, said a trusted body could “hold developers accountable” to safety standards, but more important than the mechanism was agreement “on what the standards are, what the risks are that you are trying to mitigate.”

The last major regulator to be created was the Consumer Financial Protection Bureau, after the financial crisis of 2007-2008. Some Republicans may oppose any AI regulation.

“We must be careful that proposals for AI regulation do not become a mechanism for government micromanagement of computer code, such as search engines and algorithms,” a Senate Republican aide told Reuters.

Source: Reuters
