Uncategorized

Apr 09

Kelsey Cullinan Reed
In re B.O.A., 372 N.C. 372, 831 S.E.2d 35 (2019).
In re B.O.A.: The Question of Reasonable Progress to Correct the Circumstances that Led to a Child's Removal
This case turns on the question of whether the trial court erred by terminating a mother's parental rights to her daughter, Bev, because she …

Jan 12

The ever-growing phenomenon of predictive health analytics is generating significant excitement, hope for improved health outcomes, and potential for new revenues. Researchers are developing algorithms to predict suicide, heart disease, stroke, diabetes, cognitive decline, opioid abuse, cancer recurrence, and other ailments. The researchers include not only medical experts but also commercial enterprises, such as Facebook and LexisNexis, which may profit considerably from the work. This Article focuses on long-term disease predictions (i.e., predictions regarding future illnesses), which have received surprisingly little attention in the legal and ethical literature. It compares the robust academic and policy debates and legal interventions that followed the emergence of genetic testing to the relatively anemic reaction to predictions produced by artificial intelligence and other predictive methods. This Article argues that, like genetic testing, predictive health analytics raises significant concerns about psychological harm, privacy breaches, discrimination, and the meaning and accuracy of predictions. Consequently, as alluring as the new predictive technologies are, they require careful consideration and thoughtful safeguards. These include changes to the HIPAA Privacy and Security Rules and the Americans with Disabilities Act, careful oversight mechanisms, and self-regulation by healthcare providers. Ignoring the hazards of long-term predictive health analytics and failing to provide data subjects with appropriate rights and protections would be a grave mistake.