Jennifer Maisel at AIPLA CLE Webinar on Data Protection Risks of Predictive AI Models


Partner Jennifer Maisel will be a speaker at the American Intellectual Property Law Association's (AIPLA) CLE webinar titled "The Data Protection Risks of Predictive AI Models" on Wednesday, March 17, 2021, from 12:30 to 2:00 pm.

Artificial intelligence (AI) is now employed across most industries, often in the form of predictive AI models that infer meaningful information from complex datasets. For example, a predictive AI model can be trained to detect the existence of a disease in a patient or to infer the optimal treatment for that patient. These models, however, can create significant data protection challenges if certain safeguards are not implemented. In particular, predictive AI models are susceptible to unintended memorization, which can result in the leakage of sensitive training data to unauthorized parties, and addressing this leakage phenomenon is a difficult challenge. This webinar will discuss the unique data protection risks that arise from employing predictive AI models in various industries, as well as techniques for safeguarding against the unintentional leakage of sensitive training data by these models.

CLE is available for this webinar.

