
Employee Policies on Tracking Technology and AI Grow Increasingly Important

By Mark Decker posted 02-23-2024 08:48 AM

  

Advancements in technology have enabled what was once science fiction to become an easily accessible reality. Within a short period, employees can set up company websites, implement tracking tools for those sites, and use artificial intelligence (AI) to answer customers’ questions.  

Given how accessible these activities are, there may be a temptation to forgo policies governing employees' use of these technologies. Resist that temptation: liability arising from these activities is under increasing scrutiny.

Protected Health Information and Third-Party Trackers 

When setting up a website for a covered entity, it is common to track user activity on the site to better serve customer needs. Under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) rules, a covered entity is a health plan, a health care clearinghouse, or a health care provider who electronically transmits any health information in connection with transactions for which the Department of Health and Human Services (HHS) has adopted standards.

It has never been cheaper or easier to implement tracking technologies on a website, and the abundance of effective, low-cost options makes detailed insight into website visitors hard to pass up. But even if a policy governing website creation is in place, be sure a policy also exists, and is followed, governing what tracking software may be used on those websites. Employees may not consider the risks of deploying certain tracking software on a covered entity's site.

Likely in response to the proliferation of tracking software, HHS issued a bulletin warning that covered entities under HIPAA could inadvertently disclose protected health information (PHI) when using tracking software on their websites and applications. When tracking software is in use, any data being tracked (e.g., user input, behavior) is sent to the tracking technology vendor for storage and analysis.
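To illustrate why this matters, here is a hypothetical sketch of the kind of event payload a typical third-party tracking script assembles on each page view. The field names, URL, and vendor behavior are invented for this example, but they show how a page URL and form input alone can reveal health information once transmitted off-site:

```python
# Hypothetical illustration of the data a third-party tracking script
# might collect on a covered entity's website. All names and values
# here are invented for illustration, not from any specific vendor.

def build_tracking_payload(page_url: str, user_input: str, visitor_id: str) -> dict:
    """Assemble the event a tracker would send to its vendor's servers."""
    return {
        "visitor_id": visitor_id,  # persistent identifier (cookie or device ID)
        "page_url": page_url,      # may reveal the condition being researched
        "form_input": user_input,  # may contain symptoms, names, or dates of birth
        "event": "page_view",
    }

payload = build_tracking_payload(
    page_url="https://clinic.example.com/appointments/oncology",
    user_input="requesting follow-up after chemotherapy",
    visitor_id="abc-123",
)

# Everything in `payload` leaves the covered entity's control once it is
# sent to the tracking vendor, which is why HHS treats this as a potential
# PHI disclosure absent proper permissions and a business associate agreement.
```

Note that even without a patient's name, the combination of a persistent visitor ID and an appointment-page URL can be individually identifiable health information.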

According to the HHS bulletin, when PHI is collected on a covered entity's site that uses tracking technology without proper privacy policies, notification, or consent and without a proper business associate agreement, that collection may be considered an inadvertent disclosure.

How to Ensure Compliance 

More specific and comprehensive solutions can be found in the HHS bulletin. However, here is a general overview: 

  • Ensure that disclosures of PHI to third-party tracking technology vendors are permitted by the Privacy Rule and that only the minimum necessary PHI to achieve the intended purpose is disclosed unless an exception applies.  

  • Inform users, in a privacy policy, notice, or terms and conditions of use, of the tracking technologies used on the website or mobile application. Sometimes notice may not be enough; covered entities must also ensure that vendors have signed a Business Associate Agreement (BAA) and “that there is an applicable permission prior to a disclosure of PHI.”

  • When establishing a BAA with a vendor that is a “business associate” within the meaning of HIPAA:

      • A covered entity should determine whether the vendor meets the definition of “business associate” and confirm that PHI disclosures to the vendor are permitted by the HIPAA rules.

      • The BAA must state the vendor’s permitted and required uses and disclosures of PHI and require the vendor to safeguard PHI. Additionally, the vendor must agree to report any security incidents to the covered entity, such as a breach that makes PHI accessible to an outside party.

AI Use Policies 

Just as it is easy to implement tracking technology on a website, it is easy to inadvertently disclose customer information using AI. It is widely reported, although disputed by OpenAI, the developer of the AI tool ChatGPT, that information entered into an AI prompt is not secure and can become part of the data used to train the AI. If a workplace allows employees to use AI, policies should prohibit entering confidential information into AI prompts. Otherwise, employees who input confidential customer information may cause an inadvertent disclosure. As a general rule, we recommend the following:

  • Employers must decide whether generative AI programs like ChatGPT should be used in an employee’s work. They should evaluate these risks against how using AI could benefit the organization. 

  • If using generative AI is allowed, employees should be taught how to safely use it. They should know that generative AI can fabricate data (e.g., hallucinate) and that confidential information should never be used in any AI prompt. Employers Council offers members a sample handbook policy on the use of content-generating AI. It can be found here. 
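A written policy can also be backed by a simple technical control: a pre-submission filter that blocks prompts containing obviously confidential patterns before they reach an AI tool. The sketch below is a minimal, hypothetical example; the patterns and function name are ours, and a real deployment would define its own list (account numbers, medical record numbers, internal project names, and so on):

```python
import re

# Hypothetical patterns for obviously confidential data. These are
# illustrative only; an organization would maintain its own list.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # U.S. SSN format
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),         # email addresses
    re.compile(r"(?i)\b(confidential|patient|mrn)\b"),  # flagged keywords
]

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt appears to contain confidential data."""
    return not any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)

# A benign prompt passes; one containing an SSN and a flagged keyword does not.
assert is_prompt_allowed("Summarize our public press release.")
assert not is_prompt_allowed("Patient John Doe, SSN 123-45-6789, asked about billing.")
```

A pattern filter like this is a backstop, not a substitute for training; it cannot catch every form of confidential information, which is why employee education remains the primary control.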

As powerful and transformative technologies continue to proliferate, government regulators will keep trying to catch up. Because of this, do not assume that using the latest technology carries no legal risk.

To learn more about mitigating the potential risks posed by AI, consult Employers Council’s Guide to Managing AI in the Workplace. The guide also presents HR and legal considerations for employers when using AI and identifies effective uses of the technology. If you have any questions, please contact our Member Experience Team. 

  

 

 


#Training
#DataManagementandCybersecurity
