
Texas AG Announces Settlement with Healthcare Generative AI Company

According to the Texas AG, at least four major Texas hospitals used Pieces’ generative AI products to produce clinical summaries and documentation from patient data. The settlement alleges that the company developed a series of metrics to claim that its healthcare AI products were “highly accurate,” including advertising and marketing the accuracy of those products and services by claiming a “critical hallucination rate” of “<.001%” and a “severe hallucination rate” of “<1 per 100,000.” The Texas AG alleged that “these metrics were likely inaccurate and may have deceived hospitals about the accuracy and safety of the Company’s products,” in violation of the Texas Deceptive Trade Practices – Consumer Protection Act.
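The public settlement documents do not spell out how those figures were calculated, and that gap is exactly what the disclosure obligations described below target. As a purely hypothetical sketch (the numbers, function, and definitions are invented for illustration and are not drawn from the settlement), the same audit findings can yield very different hallucination “rates” depending on which denominator a vendor chooses:

```python
# Hypothetical illustration only: none of these numbers or definitions come from
# the settlement. The point is that one set of audit findings can support very
# different advertised "rates" depending on how the metric is defined.

def hallucination_rate(hallucinations_found: int, units_reviewed: int) -> float:
    """Rate of hallucinations per unit of output; the unit is the vendor's choice."""
    return hallucinations_found / units_reviewed

# Suppose a (hypothetical) audit of AI-generated clinical summaries found
# 3 severe hallucinations.
severe_hallucinations = 3
summaries_reviewed = 2_000      # denominator choice A: per generated summary
sentences_reviewed = 300_000    # denominator choice B: per generated sentence

per_summary = hallucination_rate(severe_hallucinations, summaries_reviewed)
per_sentence = hallucination_rate(severe_hallucinations, sentences_reviewed)

print(f"per summary : {per_summary * 100_000:.0f} per 100,000")   # 150 per 100,000
print(f"per sentence: {per_sentence * 100_000:.0f} per 100,000")  # 1 per 100,000
```

Without a disclosed definition and calculation method, a figure such as “<1 per 100,000” cannot be meaningfully compared or verified, which is consistent with the AG’s concern that the metrics “may have deceived hospitals.”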

While the settlement did not impose any financial penalties, it requires the company to comply with a variety of provisions designed to ensure clear and conspicuous disclosures regarding AI tool performance, including:

  • Marketing and advertising disclosures – For any marketing or advertising that includes direct or indirect statements regarding any metrics, benchmarks, or similar measurements describing the outputs of its generative AI products, the company must clearly and conspicuously disclose (1) the meaning or definition of such metric, benchmark, or similar measurement, and (2) the method, procedure, or any other process used by the company, or on the company’s behalf, to calculate the metric, benchmark, or similar measurement. Alternatively, the company may retain an independent third-party auditor to assess the performance or characteristics of the company’s products and services, and have all marketing and advertising be consistent with the independent auditor’s findings.
  • Prohibitions against misrepresentations – The company may not make any false or misleading statements regarding its AI products.
  • Customer disclosures – The company must provide current and future customers with documentation that clearly and conspicuously discloses any known or reasonably knowable harmful or potentially harmful uses or misuses of its products and services. This documentation must, at a minimum, include the following information (a hypothetical structure for such documentation is sketched after this list):
    • The type of data and/or models used to train the products and services.
    • A detailed explanation of the intended purpose and use of the products and services, as well as any training or documentation needed to facilitate proper use of the products and services.
    • Any known, or reasonably knowable, limitations of its products or services, including risks to patients and healthcare providers from the use of the product or service, such as the risk of physical or financial injury in connection with a product or service’s inaccurate output.
    • Any known, or reasonably knowable, misuses of a product or service that can increase the risk of inaccurate outputs or increase the risk of harm to individuals.
    • For each product or service, all other documentation reasonably necessary for a user to understand the nature and purpose of an output generated by a product or service, to monitor for patterns of inaccuracy, and to reasonably avoid misuse of the product or service.
  • Additional requirements – The company must provide notice of the settlement to its principals, officers, directors, employees with managerial responsibility for the conduct covered by the settlement, agents and representatives who participate in conduct related to the settlement, and successors of the company, and must submit to compliance monitoring.
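One way a developer might operationalize the customer-disclosure documentation described above is as a structured, machine-readable record whose fields track the settlement’s minimum required contents. The sketch below is only an illustration under that assumption; the format, field names, and example values are hypothetical and are not prescribed by the settlement.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CustomerDisclosure:
    """Hypothetical record mirroring the settlement's minimum disclosure contents."""
    product_name: str
    training_data_and_models: str   # type of data and/or models used to train the product
    intended_purpose_and_use: str   # intended purpose and use, plus training needed for proper use
    known_limitations: list[str]    # known or reasonably knowable limitations, incl. risks to patients and providers
    known_misuses: list[str]        # misuses that can increase the risk of inaccurate outputs or harm
    monitoring_guidance: str        # how users can understand outputs, monitor for inaccuracy, and avoid misuse

disclosure = CustomerDisclosure(
    product_name="ExampleClinicalSummarizer",  # hypothetical product
    training_data_and_models="De-identified clinical notes; transformer-based language model.",
    intended_purpose_and_use="Drafts clinical summaries for clinician review; requires onboarding training.",
    known_limitations=[
        "May omit or misstate clinical facts (hallucinations), risking patient harm if unreviewed.",
        "Accuracy may degrade for specialties underrepresented in the training data.",
    ],
    known_misuses=[
        "Relying on summaries without clinician review.",
        "Using outputs to drive billing or coding decisions.",
    ],
    monitoring_guidance="Spot-audit a sample of summaries against source notes on a recurring schedule.",
)

print(json.dumps(asdict(disclosure), indent=2))  # e.g., published alongside product documentation
```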

The settlement requires the company’s compliance for five years, although the company may request that the settlement be modified at any time and/or rescinded after one year based on the company’s compliance, changes in the state or federal regulatory landscape, or changes or developments in generative AI technology and related industry standards, metrics, benchmarks, or similar measurements describing the outputs of generative AI products.

As part of the settlement announcement, Texas AG Ken Paxton emphasized the importance of transparency from AI companies, particularly those operating in the healthcare space. “AI companies offering products used in high-risk settings owe it to the public and to their clients to be transparent about their risks, limitations, and appropriate use,” he stated. “Anything short of that is irresponsible and unnecessarily puts Texans’ safety at risk. Hospitals and other healthcare entities must consider whether AI products are appropriate and train their employees accordingly.”

Key Takeaways

The Texas AG settlement is likely the first of many enforcement actions by state regulators and other authorities against AI developers. The settlement illustrates the enforcement tools that regulators already have available to take action against what they perceive to be bad actors. The customer disclosures the settlement requires may also preview the kinds of information that future state or federal AI legislation could require AI developers to provide to deployers and other affected stakeholders (as already reflected in the Colorado Artificial Intelligence Act enacted in May 2024). The settlement underscores the key legislative and regulatory priorities of transparency, accuracy, and consumer protection in AI development and marketing.

The remediation steps imposed by the Texas AG serve as a starting point for preventative best practices that AI companies can adopt, and AI compliance programs should include testing and documentation along these lines. The settlement also suggests that investing in consumer-facing, easy-to-understand explanations of the AI will be important, much as some companies have created “privacy centers” on their websites that go beyond the more formal language of their privacy policies and terms of use to earn public confidence in their privacy claims and protections.
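For instance, a compliance program along these lines might pair every advertised metric with its documented methodology and a recurring check that current measurements still support the claim. The sketch below assumes such a workflow; the function, figures, and methodology text are hypothetical and are not drawn from the settlement.

```python
# Hypothetical compliance check: verify that an advertised metric is still supported
# by the latest measurements and that its methodology is documented. The advertised
# figure, measured values, and function are illustrative only.

ADVERTISED_SEVERE_HALLUCINATION_RATE = 1 / 100_000  # "<1 per 100,000" as marketed

def check_marketing_claim(measured_rate: float, advertised_rate: float, methodology: str) -> None:
    """Raise an error if the claim is undocumented or no longer supported by measurement."""
    if not methodology.strip():
        raise ValueError("Document how the metric is defined and calculated before advertising it.")
    if measured_rate > advertised_rate:
        raise ValueError(
            f"Measured rate {measured_rate:.2e} exceeds advertised rate {advertised_rate:.2e}; "
            "update the marketing claim or remediate the product."
        )

# Example: the latest (hypothetical) audit found 1 severe hallucination across
# 150,000 generated sentences.
check_marketing_claim(
    measured_rate=1 / 150_000,
    advertised_rate=ADVERTISED_SEVERE_HALLUCINATION_RATE,
    methodology="Severe hallucination rate = severe hallucinations per generated sentence, "
                "measured via monthly clinician audit of a random sample.",
)
print("Claim is documented and supported by the latest audit.")
```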

At the federal level, the Federal Trade Commission (FTC) has been active in using its consumer protection authority to oversee AI technologies, and it has broad authority to pursue enforcement against claims it finds to be unfair or deceptive – AI-related or otherwise. For example, the FTC has warned that companies may violate the FTC Act by making false or unsubstantiated claims about an AI product’s efficacy, or by making retroactive changes to terms of service or privacy policies to adopt more permissive data practices (e.g., to start sharing consumers’ data with third parties or using that data for AI training). Even absent AI-specific FTC rules, companies should carefully scrutinize public-facing statements and claims regarding their AI capabilities. Other federal regulators have also begun to increase their oversight of AI.

While the focus of the Texas enforcement in this case was on an AI developer, AG Paxton’s statements emphasize that there is scrutiny on both developers and procurers of AI, including providers, payors, and other healthcare organizations. Healthcare stakeholders should prioritize the implementation of an AI governance framework that includes carefully evaluating and ensuring the substantiation of claims regarding the accuracy and safety of AI tools that they are developing, commercializing, procuring, or deploying.
