The ever-growing phenomenon of predictive health analytics is generating significant excitement, hope for improved health outcomes, and potential for new revenues. Researchers are developing algorithms to predict suicide, heart disease, stroke, diabetes, cognitive decline, opioid abuse, cancer recurrence, and other ailments. These researchers include not only medical experts but also commercial enterprises, such as Facebook and LexisNexis, which may profit considerably from the work. This Article focuses on long-term disease predictions (i.e., predictions regarding future illnesses), which have received surprisingly little attention in the legal and ethical literature. It contrasts the robust academic and policy debates and legal interventions that followed the emergence of genetic testing with the relatively anemic response to predictions produced by artificial intelligence and other predictive methods. This Article argues that, like genetic testing, predictive health analytics raises significant concerns about psychological harm, privacy breaches, discrimination, and the meaning and accuracy of predictions. Consequently, as alluring as the new predictive technologies are, they require careful consideration and thoughtful safeguards. These include changes to the HIPAA Privacy and Security Rules and the Americans with Disabilities Act, careful oversight mechanisms, and self-regulation by healthcare providers. Ignoring the hazards of long-term predictive health analytics and failing to provide data subjects with appropriate rights and protections would be a grave mistake.