White House Pledges ‘Road Map’ for Managing Artificial Intelligence
The administration also released a new Education Department report on the risks and opportunities artificial intelligence presents in schools.

The Biden administration is pledging to get smart about artificial intelligence, offering an updated approach Tuesday to a technology that is advancing faster than lawmakers or regulators can respond.
In a series of documents released Tuesday, the administration announced a revised “road map” to deal with the issue, including a new plan for international collaboration to manage the impact of AI. The administration also released a new Education Department report on the risks and opportunities AI presents in schools and is issuing a new request for public comments and input on critical AI issues.
The action came the same day an AI-generated fake image of a building near the Pentagon going up in smoke appeared on social media, causing a brief plunge in the stock market. The disturbing image, which recalled memories of 9/11, was quickly deemed a fake, and the markets recovered.
But the disruption – which the financial news service Bloomberg News said may have been the first time an AI-created image affected the stock market – underscored how damaging AI can be when it is used to spread lies.
“AI is one of the most powerful technologies of our time, with broad applications. President Biden has been clear that in order to seize the opportunities AI presents, we must first manage its risks,” the White House said in a fact sheet announcing the accelerated moves.
AI can be very useful to teachers and professors, the Education Department report said. The technology can support educators in myriad ways, including by helping them find new approaches to interacting with students. But it can also be risky, the report said, especially if there are algorithmic biases in AI-produced material.
Using AI to cheat is already a problem. Last week, a professor at Texas A&M University at Commerce gave students an incomplete or failing grade after he put their essays into a software tool designed to identify material “written” by ChatGPT. The chatbot can write essays, poems, songs and the like that appear to have been written by humans – and the new software sold to distinguish what was done by a person from what was generated by AI isn’t always reliable, experts warn.
The technology is becoming increasingly available. Adobe said Tuesday it would integrate artificial intelligence tools into its popular Photoshop software.
That trend is worrying lawmakers and even some in the industry. People can make what are called “deep fakes” to influence, trick or frighten viewers. AI can be easily used, for example, to create a fake campaign video showing a candidate saying things he or she has never uttered – potentially alienating voters who think the video or image is real.
Last week, Sam Altman, chief executive at the tech start-up OpenAI, urged a Senate subcommittee to put up guardrails.
“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman told a Judiciary Committee panel, in an unusual case of a business executive recommending that his own industry be regulated.
Some lawmakers seemed eager to comply.
“Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past,” Sen. Richard Blumenthal, Connecticut Democrat, said during the hearing. “Congress failed to meet the moment on social media,” he added.
In a separate initiative Tuesday to address the mental health of young people, the Biden administration noted that the combination of artificial intelligence and heavy social media use can be harmful to young people, especially girls who are comparing their own images to computer-generated photos of other girls and women online.
“Far too often, online platforms do not protect minors who use their products and services, even when alerted to the abuses experienced online,” the White House said in a statement announcing a new interagency Task Force on Kids Online Health & Safety.