Artificial intelligence changes across the US

An increasing number of companies are using artificial intelligence (AI) for everyday tasks. Much of the technology is helping with productivity and keeping the public safer. However, some industries are pushing back against certain aspects of AI. And some industry leaders are working to balance the good and the bad.

“We are looking at critical infrastructure owners and operators, businesses from water and health care and transportation and communication, some of which are starting to integrate some of these AI capabilities,” said U.S. Cybersecurity and Infrastructure Security Agency Director Jen Easterly. “We want to make sure that they’re integrating them in a way where they are not introducing a lot of new risk.”

Consulting firm Deloitte recently surveyed leaders of business organizations from around the world. The findings showed uncertainty over government regulations was a bigger issue than actually implementing AI technology. When asked about the top barrier to deploying AI tools, 36% ranked regulatory compliance first, 30% said difficulty managing risks, and 29% said lack of a governance model.

Despite some of the risks AI can pose, Easterly said she is not surprised that the government has not taken more steps to regulate the technology.

“These are going to be the most powerful technologies of our century, probably more,” Easterly said. “Most of these technologies are being built by private companies that are incentivized to provide returns for their shareholders. So we do need to ensure that government has a role in establishing safeguards to ensure that these technologies are being built in a way that prioritizes security. And that’s where I think that Congress can have a role in ensuring that these technologies are as safe and secure to be used and implemented by the American people.”

Congress has considered overarching protections for AI, but it has mostly been state governments enacting the rules.

“There are certainly many things that are positive about what AI does. It also, when fallen into the hands of bad actors, it can destroy [the music] industry,” said Gov. Bill Lee, R-Tenn., while signing state legislation in March to protect musicians from AI. 

The Ensuring Likeness Voice and Image Security Act, or ELVIS Act, classifies vocal likeness as a property right. Lee signed the legislation this year, making Tennessee the first state to enact such protections for singers. Illinois and California have since passed similar laws. Other states, including Tennessee, have laws establishing that names, photographs and likenesses are also property rights.

“Our voices and likenesses are indelible parts of us that have enabled us to showcase our talents and grow our audiences, not mere digital kibble for a machine to duplicate without consent,” country recording artist Lainey Wilson said during a congressional hearing on AI and intellectual property.

Wilson argued her image and likeness were used through AI to sell products that she had not previously endorsed.

“For decades, we have taken advantage of technology that, frankly, was not created to be secure. It was created for speed to market or cool features. And frankly, that’s why we have cybersecurity,” Easterly said.

The Federal Trade Commission (FTC) has cracked down on some deceptive AI marketing techniques. It launched “Operation AI Comply” in September, which tackles unfair and deceptive business practices using AI, such as fake reviews written by chatbots.

“I am a technologist at heart, and I am an optimist at heart. And so I am incredibly excited about some of these capabilities. And I am not concerned about some of the Skynet things. I do want to make sure that this technology is designed and developed and tested and delivered in a way to ensure that security is prioritized,” Easterly said.

Chatbots have also drawn some good reviews. Hawaii approved a law this year to invest more in research using AI tools in health care. It comes as one study found that OpenAI’s chatbot outperformed doctors in diagnosing medical conditions. The experiment compared doctors using ChatGPT with those using conventional resources. Both groups scored around 75% accuracy, while the chatbot alone scored above 90%.

AI isn’t just being used for disease detection; it’s also helping emergency crews detect catastrophic events. After deadly wildfires devastated Maui, Hawaii state lawmakers allocated funds to the University of Hawaii to map statewide wildfire risks and improve forecasting technologies. The funding includes $1 million for an AI-driven detection platform. Hawaiian Electric is also deploying high-resolution cameras across the state.

“It will learn over months, over years, to be more sensitive to what is a fire and what is not,” said Energy Department Under Secretary for AI and Technology Dimitri Kusnezov.

California and Colorado have similar technology. Within minutes, the AI can detect when a fire begins and where it may spread.

AI is also being used to keep students safe. Several school districts around the country now have firearm detection systems. One in Utah notifies officials within seconds when a gun might be on campus.

“We want to create an inviting, educational environment that’s secure. But we don’t want the security to impact the education,” said Park City, Utah, School District CEO Michael Tanner.

Maryland and Massachusetts are also considering state funds to implement similar technology. Both states voted to establish commissions to study emerging firearm technologies. Maryland’s commission will determine whether to use school construction funding to build the systems. Massachusetts members will look at risks associated with the new technology.

“We want to use these capabilities to ensure that we can better defend the critical infrastructure that Americans rely on every hour of every day,” Easterly said.

The European Union passed AI regulations this year. The rules rank systems by risk, from minimal, which faces no regulation, to unacceptable, which is banned. Chatbots fall under the “specific transparency” category and are required to inform users they are interacting with a machine. Software for critical infrastructure is considered high risk and must comply with strict requirements. Most technology that profiles individuals or scrapes public images to build up databases is considered unacceptable.

The U.S. has some guidelines for AI use and implementation, but experts believe it will not go as far as the EU in classifying risks.

“We need to stay ahead in America to ensure that we win this race for artificial intelligence. And so it takes the investment, it takes the innovation,” Easterly said. “We have to be an engine of innovation that makes America the greatest economy on the face of the earth.”
