Jennifer Maisel at AIPLA CLE Webinar on Data Protection Risks of Predictive AI Models


Partner Jennifer Maisel will be a speaker at the American Intellectual Property Law Association's (AIPLA) CLE Webinar titled "The Data Protection Risks of Predictive AI Models" on Wednesday, March 17, 2021, from 12:30 pm to 2:00 pm.

Artificial intelligence (AI) is now employed across most industries, often in the form of predictive AI models that infer meaningful information from complex datasets. A predictive AI model can be trained, for example, to detect the existence of a disease in a patient or to infer the optimal treatment for that patient. But these models can create significant data protection challenges if certain safeguards are not implemented. In particular, predictive AI models are susceptible to unintended memorization, which can result in the leakage of sensitive training data to unauthorized parties. Addressing this leakage phenomenon is a difficult challenge. This webinar will discuss the unique data protection risks that arise from employing predictive AI models in various industries, as well as techniques for safeguarding against the unintentional leakage of sensitive training data by these models.
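As a rough illustration of one such safeguard (not taken from the webinar itself), differentially private training clips each example's gradient and adds calibrated noise, limiting how much any single training record can be memorized. The sketch below shows that core clip-and-noise step in plain Python; all function and parameter names here are illustrative assumptions, not a specific library's API.

```python
import math
import random

def privatize_gradient(grad, clip_norm=1.0, noise_scale=0.5, rng=None):
    """Illustrative DP-SGD-style step: clip a per-example gradient
    to clip_norm, then add Gaussian noise proportional to that bound.

    Clipping caps any one example's influence on the update; the noise
    masks whatever influence remains, which is what limits memorization.
    """
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    return [g + rng.gauss(0.0, noise_scale * clip_norm) for g in clipped]

# A gradient with norm 5.0 is scaled down to norm 1.0 before noise is added.
noisy = privatize_gradient([3.0, 4.0])
```

In practice this step runs per training example inside the optimizer loop, and the noise level is chosen to meet a target privacy budget; this fragment only conveys the mechanism.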

CLE is available for this webinar.
