
Researchers say HIPAA must be revised to account for the progress AI has made since the legislation was passed in 1996.

BERKELEY, Calif. — You may feel protected by the HIPAA laws that serve to keep your medical history confidential, but a new study finds that the privacy of our health data is at great risk thanks to rapid advances in artificial intelligence.

Researchers from the University of California, Berkeley say their findings show the regulations written into the Health Insurance Portability and Accountability Act of 1996 may be outdated and incapable of keeping Americans’ health data safe and private given the progress made by AI developers.

The study’s authors reviewed two years’ worth of data from 15,000 Americans to reach their conclusion. They say that AI can now use data collected by fitness trackers, smartphones, smartwatches, and other devices that track movement to identify individuals by learning their daily step patterns and correlating them with demographic data.

“In principle, you could imagine Facebook gathering step data from the app on your smartphone, then buying health care data from another company and matching the two,” says author Anil Aswani, an engineer at Berkeley, in a media release. “Now they would have health care data that’s matched to names, and they could either start selling advertising based on that or they could sell the data to others.”
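The matching Aswani describes is a classic linkage attack: a "quasi-identifier" shared by two datasets (here, a distinctive step pattern) lets an attacker attach names to supposedly anonymous records. The toy sketch below illustrates the idea; the names, step counts, and diagnoses are all invented for illustration and do not come from the study.

```python
# Toy sketch of a linkage attack of the kind the researchers warn about.
# All records below are fabricated examples, not data from the study.

# Dataset A: "de-identified" health records -- names removed, but each
# record still carries a distinctive daily step pattern.
anonymized = [
    {"record_id": "r1", "daily_steps": (8123, 7950, 8210), "diagnosis": "asthma"},
    {"record_id": "r2", "daily_steps": (3021, 2988, 3100), "diagnosis": "diabetes"},
]

# Dataset B: identified step data, e.g. gathered by a smartphone app.
identified = [
    {"name": "Alice", "daily_steps": (8123, 7950, 8210)},
    {"name": "Bob", "daily_steps": (3021, 2988, 3100)},
]

def reidentify(anon_records, named_records):
    """Attach names to anonymous records whose step patterns coincide."""
    by_pattern = {r["daily_steps"]: r["name"] for r in named_records}
    return {r["record_id"]: by_pattern.get(r["daily_steps"]) for r in anon_records}

matches = reidentify(anonymized, identified)
print(matches)  # {'r1': 'Alice', 'r2': 'Bob'}
```

A real attack would use fuzzy matching over long time series rather than exact tuple equality, but the principle is the same: any behavioral signal unique to a person can serve as an identifier, whether or not a name is present.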

Ultimately, Aswani says there are programs that can piece together bits of data about a person, even if all of the identifying information has been hidden.

“HIPAA regulations make your health care private, but they don’t cover as much as you think,” he says. “Many groups, like tech companies, are not covered by HIPAA, and only very specific pieces of information are not allowed to be shared by current HIPAA rules. There are companies buying health data. It’s supposed to be anonymous data, but their whole business model is to find a way to attach names to this data and sell it.”

Aswani believes that companies may find it hard to resist utilizing artificial intelligence should it give them an advantage over competitors or the ability to save money, even knowing it’s unethical. For example, he foresees mortgage lenders or credit card companies using AI to discriminate against pregnant or disabled clients.

He believes the problem isn’t in our smart devices — it’s simply in the way data is stored, used, and protected. That means the government must alter policies to keep consumers safer.

“Ideally, what I’d like to see from this are new regulations or rules that protect health data,” he says. “But there is actually a big push to even weaken the regulations right now. For instance, the rule-making group for HIPAA has requested comments on increasing data sharing. The risk is that if people are not aware of what’s happening, the rules we have will be weakened. And the fact is the risks of us losing control of our privacy when it comes to health care are actually increasing and not decreasing.”

The study was published in the JAMA Network Open journal.
