
09/26/2023

Transparency And AI: How To Build Customer Trust

As the use of artificial intelligence (AI) technology grows in scope and frequency, AI providers have begun taking the initiative to offer greater transparency about how they use the data collected from users of their products. In particular, two software companies, Twilio and Salesforce, have taken steps emphasizing such transparency by providing key information on current AI developments and technology. Notably, research conducted by Twilio found that fewer than half of customers trust AI providers to keep their data secure and to use it responsibly. The primary purpose of providing increased transparency is to build customers’ trust in software providers’ use of AI technology, which, up to this point, some customers have met with skepticism and wariness.

Twilio

Twilio, a company that helps businesses automate communications, announced that it will place “nutrition labels” on the AI services it offers. These labels are meant to serve the same purpose as actual nutrition labels: giving customers an idea of what they will be consuming. Twilio’s ultimate goal is to provide clear, accessible information about the data handling practices of each AI service and, in doing so, to build customer trust.

Specifically, Twilio’s “nutrition labels” provide information on the AI models themselves, Twilio’s data usage, optional features within the AI framework, and the degree of human involvement in Twilio’s data collection. The labels also disclose feedback information about each model’s performance, accuracy, and biases. Additionally, Twilio has developed a “privacy ladder” that distinguishes between company data used only for customers’ internal projects and data used to train models for external customers. Looking ahead, Twilio plans to offer an online tool that will allow other companies to generate similar “nutrition labels” for their own AI offerings.
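To make the concept concrete, such a label could be published in a machine-readable form that tooling can validate and display. The sketch below is purely illustrative: the field names are assumptions drawn from the categories described above and do not reflect Twilio’s actual label format.

```python
from dataclasses import dataclass

# Hypothetical schema for an AI "nutrition label". Field names are
# illustrative assumptions based on the categories described above;
# they do not reflect Twilio's actual label format.
@dataclass
class AINutritionLabel:
    service_name: str
    base_models: list[str]           # which AI models power the service
    trains_on_customer_data: bool    # is customer data used to train models?
    optional_ai_features: list[str]  # AI features a customer can opt out of
    human_in_the_loop: bool          # do humans review collected data?
    known_limitations: list[str]     # performance, accuracy, and bias notes

# Example label for a fictional service.
label = AINutritionLabel(
    service_name="Example Voice Assistant",
    base_models=["vendor-hosted large language model"],
    trains_on_customer_data=False,
    optional_ai_features=["call transcription", "sentiment analysis"],
    human_in_the_loop=True,
    known_limitations=["accuracy degrades on noisy audio"],
)
print(label)
```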

Salesforce

In a bid to increase transparency and customer trust in AI technologies as a whole, Salesforce, a software company with cloud offerings, has introduced an acceptable use policy for generative AI. Although acceptable use policies are typically used by businesses to govern their employees’ use of different technologies, Salesforce has instead adopted one that governs its customers’ use of its technology offerings. By adopting such a policy, Salesforce is taking steps to help ensure that its AI offerings are used only in ways that are both legal and ethical. For example, Salesforce’s policy prohibits using its generative AI technologies for automated decision-making processes with legal effects, offering advice that should come from licensed professionals (e.g., a lawyer or financial advisor), predicting protected characteristics such as race, and generating content to support or attack a political campaign.
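In practice, a provider could encode prohibitions like these as an explicit denylist that its platform checks declared use cases against. The following sketch is a hypothetical illustration of that pattern; the category names are assumptions based on the examples above, not Salesforce’s actual implementation.

```python
# Hypothetical denylist of prohibited generative-AI use categories,
# modeled on the policy examples above (not Salesforce's actual code).
PROHIBITED_USES = {
    "automated_decision_with_legal_effect",  # decisions with legal effects
    "licensed_professional_advice",          # e.g., legal or financial advice
    "protected_characteristic_prediction",   # e.g., predicting race
    "political_campaign_content",            # supporting or attacking a campaign
}

def violates_policy(declared_use: str) -> bool:
    """Return True if a customer's declared use case is prohibited."""
    return declared_use in PROHIBITED_USES

assert violates_policy("political_campaign_content")
assert not violates_policy("customer_support_summaries")
```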

Governmental Regulators and How to Build Customer Trust

While certain AI providers have taken the initiative to provide protections against AI misuse, governmental regulators, including those in the U.S., Canada, China, and the EU, have taken an active interest in regulating AI:

  • The U.S. government has initiated the development of robust technical tools (e.g., watermarking systems to distinguish AI-generated content) with the purpose of mitigating the risk of fraud and deception through AI tools while fostering creativity.
  • The EU has urged tech platforms to implement technology to identify and label AI-generated content as part of efforts to combat misinformation.
  • The Canadian government is developing a voluntary code of conduct for AI developers to prevent the creation of harmful or malicious content. It seeks to ensure a clear distinction between AI-generated and human-made content while including provisions for user safety and the avoidance of biases.
  • The Chinese government has released interim measures providing guidelines for the development and use of generative AI technology, including content labeling and verification requirements.
  • Twelve different governmental regulators have urged social media platforms to take steps to curb data scraping, which is often used to train AI models.

Moving forward, steps that companies can take to assure customers that AI technologies are not being misused include adhering to governmental guidance and regulations, adopting acceptable use policies, conducting adversarial testing of AI models and features, and incorporating filters that try to stop generative AI systems from sharing improper content or personal information. These approaches can address concerns about privacy and security while laying the foundation for future AI development. Taking these steps may help providers increase transparency around their data use and ease customers’ concerns that AI technology is being misused.
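As one illustration of the output filters mentioned above, a provider might pass a model’s response through a redaction step before returning it to the user. The regex-based sketch below is a minimal, hypothetical example; production systems rely on far more sophisticated detection of personal information.

```python
import re

# Minimal, illustrative output filter: redact common personal-data
# patterns from a generated response before it reaches the user.
# Real deployments use far more sophisticated PII detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected personal data with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Reach me at jane@example.com or 402-555-0199."))
# -> Reach me at [REDACTED EMAIL] or [REDACTED PHONE].
```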

Contact one of the privacy experts on McGrath North’s Privacy and Cybersecurity team with questions about AI regulations and for guidance on how your business can improve the transparency and security of its customer data use.