Get the latest news around generative artificial intelligence regulations, plus insights from our director of global public policy on how they can affect your business.
If you’re asking whether you need to implement generative artificial intelligence (GAI) tools to support your business, you’re not alone. This technology can boost employee productivity, but is it safe? While these tools can help with everything from marketing to customer service to data insights, business leaders have raised concerns about AI’s potential dangers to society, and some are calling for generative AI regulations.
A regulatory response has started to coalesce in regions including the European Union and the U.S.
Rules could quickly take shape, and different governments may implement varying levels of oversight.
As a business decision maker, you need to understand GAI — and how it impacts your work with other companies and consumers.
“Most countries are just trying to ensure generative AI is subject to existing measures around privacy, transparency, copyright, and accountability,” said Danielle Gilliam-Moore, director, global public policy at Salesforce.
Review generative AI products on the market and see what makes sense for your business.
Ask: Do I need to build it internally or work with a third-party vendor, like Salesforce, to add its products to our tech stack?
The GAI landscape is moving at breakneck speed, and regulators are trying to understand how the technology may affect businesses and the public. Here are some recent headlines:
Companies creating generative AI and ChatGPT-like tools can face legal action alleging defamation, copyright infringement, and more.
European countries race to set the AI regulatory pace.
As America lags behind on global AI regulations, Congress members to meet with tech leaders for “insight forums.”
The Biden administration unveiled aggressive plans for mandatory AI regulations.
Microsoft revealed a plan on how it will handle AI governance in India.
Concerns around artificial intelligence (AI) date back years, to discussions of possible job loss, inequality, bias, security issues, and more. With the rapid growth of generative AI after the public launch of ChatGPT in November 2022, new red flags include:
Privacy issues and data mining: Companies need to have transparency around where they’re gathering data and how they’re using it.
Copyright concerns: Because GAI tools pull from vast data sources, the chance of plagiarism increases.
Misinformation: False information could spread more quickly through AI chatbots, which can also generate entirely fabricated content known as hallucinations.
Identity verification: Is what you’re reading created by a human or a chatbot? There is a growing need to verify articles, social media posts, art, and more.
Child protection: There’s been a call to ensure children and teenagers are protected against alarming, AI-generated content on social media.
This has all prompted regulators worldwide to investigate how GAI tools collect data and produce outputs, and how companies train the AI they’re developing. In Europe, countries have been swift to apply the General Data Protection Regulation (GDPR), which impacts any company working within the EU. It’s one of the world’s strongest legal privacy frameworks; the U.S. does not have a similar overarching privacy law. That may change, amid calls for more generative AI regulations.
“These are a lot of the same concerns we’ve seen previously wash up on the shores of the technology industry,” Gilliam-Moore said. “Right now, regulatory efforts, including investigations, seem to focus on privacy, content moderation, and copyright concerns. A lot of this is already addressed in statute, so regulators are trying to make sure that this is fit for purpose for generative AI.”
Companies continue to wonder how these tools will impact their business. It’s not just what the technology is capable of, but also how regulation will play a role in how businesses use it. Where does the data come from? How is it being used? Are customers protected? Is there transparency?
No matter where your company does business or who you interact with — whether developing the technology for other companies to use or interacting directly with consumers — ensure you speak with lawyers who are following generative AI regulations and can help guide you through your process.
“Talking with your trusted advisers is always a good first step in all of this,” Gilliam-Moore said. “Innovation is happening at an incredible speed. So the conversations we’re having now could become stale in the next six months.”
Regulators have been concerned about how companies collect data and how that information gets delivered to users. Having an acceptable use policy, an agreement between parties (like a business and its employees or a university and its students) that outlines proper use of a corporate network or the internet, can help safeguard compliance. In addition, it is important to show data provenance: a documented trail that proves where data originated and where it currently sits (a minimal example of such a record follows the quote below).
“Without data, none of this works,” Gilliam-Moore said.
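To make data provenance concrete, here is a minimal sketch of what a provenance record might capture. The ProvenanceRecord class, its field names, and the sample values are illustrative assumptions for this article, not a standard schema or anything Salesforce ships; real programs like GDPR compliance typically require far more detail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical provenance record; field names are illustrative only.
@dataclass
class ProvenanceRecord:
    dataset: str              # name of the dataset being tracked
    source: str               # where the data originated
    collected_at: datetime    # when it was gathered
    legal_basis: str          # e.g., "user consent" under GDPR
    current_location: str     # where the data sits today
    transformations: list = field(default_factory=list)  # processing steps applied

    def log_step(self, description: str) -> None:
        """Append a timestamped processing step to the documented trail."""
        self.transformations.append(
            (datetime.now(timezone.utc).isoformat(), description)
        )

# Example: document a (fictional) training dataset's origin and handling.
record = ProvenanceRecord(
    dataset="support_tickets_2023",
    source="CRM export (customer-consented)",
    collected_at=datetime(2023, 6, 1, tzinfo=timezone.utc),
    legal_basis="user consent",
    current_location="eu-west-1 data warehouse",
)
record.log_step("Removed personally identifiable information")
record.log_step("Sampled 10% for model fine-tuning")
print(record)
```

The point of a record like this is that every answer a regulator might ask for, such as where the data came from, the legal basis for holding it, and what was done to it, lives in one auditable place rather than in scattered institutional memory.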
Larger corporations can often invest in the research and development around the technology, especially to stay compliant. Smaller businesses may not have the resources to do their due diligence, so asking vendors and technology partners in their ecosystem the right questions becomes important.
While Salesforce is taking steps to develop trusted generative AI for its customers, those customers also work with other vendors and processors. They need to stay aware of potential harms rather than trusting blindly. Gilliam-Moore said smaller companies should ask questions including:
Are you GDPR compliant?
Are you compliant with HIPAA, or whichever law regulates your industry?
Do you have an acceptable use policy?
What are your certifications?
What are your practices around data?
Do you have policies that try to provide guardrails around the deployment of this technology?
“If you’re a smaller company, you may need to rely upon the due diligence of your third-party service providers,” Gilliam-Moore said. “Look at the privacy protocols, the security procedures, what they identify as harms and safeguards. Pay close attention to that.”