Informed i’s Weekly Business Insights

Extractive summaries and key takeaways from the articles carefully curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since 2017 | Week 377, Nov 29-Dec 5, 2024

What the departing White House chief tech advisor has to say on AI

By James O’Donnell | MIT Technology Review | December 2, 2024

Extractive Summary of the Article

2 key takeaways from the article

  1. President Biden’s administration will end within two months, and likely to depart with him is Arati Prabhakar, the top mind for science and technology in his cabinet. She has served as Director of the White House Office of Science and Technology Policy since 2022 and was the first to demonstrate ChatGPT to the president in the Oval Office. Prabhakar was instrumental in passing the president’s executive order on AI in 2023, which sets guidelines for tech companies to make AI safer and more transparent (though it relies on voluntary participation). 
  2. She reflects on risks associated with AI in three areas: (i) the risk that has most fully manifested, in horrific ways, is deepfakes and image-based sexual abuse; (ii) on the risk of AI enabling the development of biological weapons, when people did serious benchmarking of how much riskier that was compared with someone just doing Google searches, it turned out to be marginally worse, but only marginally; and (iii) if consumers don’t have confidence that the AI tools they’re interacting with respect their privacy, do not embed bias and discrimination, and do not cause safety problems, then all the marvelous possibilities really aren’t going to materialize.

Full Article

(Copyright lies with the publisher) | Topics: Technology and Humans, Artificial Intelligence, AI Risks, Deepfakes, Large Language Models

President Biden’s administration will end within two months, and likely to depart with him is Arati Prabhakar, the top mind for science and technology in his cabinet. She has served as Director of the White House Office of Science and Technology Policy since 2022 and was the first to demonstrate ChatGPT to the president in the Oval Office. Prabhakar was instrumental in passing the president’s executive order on AI in 2023, which sets guidelines for tech companies to make AI safer and more transparent (though it relies on voluntary participation). 

The incoming Trump administration has not presented a clear thesis of how it will handle AI, but plenty of people in it will want to see that executive order nullified. Trump said as much in July, endorsing the 2024 Republican Party Platform that says the executive order “hinders AI innovation and imposes Radical Leftwing ideas on the development of this technology.” Venture capitalist Marc Andreessen has said he would support such a move. However, complicating that narrative will be Elon Musk, who has for years expressed fears about doomsday AI scenarios and has been supportive of some regulations aimed at promoting AI safety.

As the administration draws to a close, the author sat down with Prabhakar and asked her to reflect on President Biden’s AI accomplishments, and on how AI risks, immigration policies, the CHIPS Act, and more could change under Trump. For reasons of scope, we share here only her reflections on AI risks.

Every time a new AI model comes out, there are concerns about how it could be misused. As you think back to what were hypothetical safety concerns just two years ago, which ones have come true?

We identified a whole host of risks when large language models burst on the scene, and the one that has fully manifested in horrific ways is deepfakes and image-based sexual abuse. We’ve worked with our colleagues at the Gender Policy Council to urge industry to step up and take some immediate actions, which some of them are doing. There are a whole host of things that can be done—payment processors could actually make sure people are adhering to their Terms of Use. They don’t want to be supporting [image-based sexual abuse] and they can actually take more steps to make sure that they’re not. There’s legislation pending, but that’s still going to take some time.

Have there been risks that didn’t pan out to be as concerning as you predicted?

At first there was a lot of concern expressed by the AI developers about biological weapons. When people did the serious benchmarking about how much riskier that was compared with someone just doing Google searches, it turns out, there’s a marginally worse risk, but it is marginal. If you haven’t been thinking about how bad actors can do bad things, then the chatbots look incredibly alarming. But you really have to say, compared to what?

For many people, there’s a knee-jerk skepticism about the Department of Defense or police agencies going all in on AI. I’m curious what steps you think those agencies need to take to build trust.

If consumers don’t have confidence that the AI tools they’re interacting with are respecting their privacy, are not embedding bias and discrimination, that they’re not causing safety problems, then all the marvelous possibilities really aren’t going to materialize. Nowhere is that more true than national security and law enforcement.
