published and reported by youarewithinthenorms.com
NORMAN J CLEMENT RPH., DDS, NORMAN L. CLEMENT PHARM-TECH, MALACHI F. MACKANDAL PHARMD, BELINDA BROWN-PARKER, IN THE SPIRIT OF JOSEPH SOLVO ESQ., INC.T. SPIRIT OF REV. C.T. VIVIAN, JELANI ZIMBABWE CLEMENT, BS., MBA., IN THE SPIRIT OF THE HON. PATRICE LUMUMBA, IN THE SPIRIT OF ERLIN CLEMENT SR., WALTER F. WRENN III., MD., JULIE KILLINGWORTH, LESLY POMPY MD., CHRISTOPHER RUSSO, MD., NANCY SEEFELDT, WILLIE GUINYARD BS., JOSEPH WEBSTER MD., MBA, BEVERLY C. PRINCE MD., FACS., NEIL ARNAND, MD., RICHARD KAUL, MD., LEROY BAYLOR, JAY K. JOSHI MD., MBA, ADRIENNE EDMUNDSON, ESTER HYATT PH.D., WALTER L. SMITH BS., IN THE SPIRIT OF BRAHM FISHER ESQ., MICHELE ALEXANDER MD., CUDJOE WILDING BS, MARTIN NJOKU, BS., RPH., IN THE SPIRIT OF DEBRA LYNN SHEPHERD, BERES E. MUSCHETT, STRATEGIC ADVISORS
Thomas Brewster of Forbes Magazine, Article “Explained: Why The Feds Are Raiding Tech Companies For Medical Records“Senior writer at Forbes covering cybercrime, privacy, and surveillance;
“In a separate case, in July 2019, DrChrono supplied the government with records related to the Pennsylvania-based practice of Neil Anand. He was being investigated for handing out “goody bags” of drugs to patients who didn’t need or ask for them. He’s since pleaded not guilty to all charges of healthcare fraud and conspiracy to distribute controlled substances.
In a search warrant I uncovered, the investigating agent goes into a little detail about what exactly he was able to determine from DrChrono’s record.“

Data Analytics and Warrantless Data Mining of Electronic Medical Records in United States v. Anand
The provided texts center on the legal case of United States v. Neil Anand, focusing on the controversial use of AI and data analytics by the government to investigate healthcare fraud, particularly concerning opioid prescriptions.
These sources reveal concerns about warrantless data mining of electronic medical records obtained from companies like DrChrono and the potential for algorithms to wrongly target innocent healthcare providers based on statistical anomalies.
Dr. Anand is accused of healthcare fraud based on data obtained from DrChrono, raising concerns about warrantless data mining of electronic medical records. His attorney argues that the warrant used to obtain the evidence was defective due to omitted information and seeks to suppress the evidence.
The case underscores the need for a balanced approach that protects both security and individual privacy in the age of AI-driven law enforcement.

The United States Department of Justice (DOJ) and the Department of Health and Human Services (HHS) have recently ramped up their efforts to combat healthcare fraud, leaning heavily on data analytics, Generative Artificial Intelligence (AI), and Machine Learning.

While the government claims significant success in identifying instances of fraud, some experts warn that these advanced technologies may inadvertently target innocent healthcare providers and practices.
The federal government’s utilization of digital solutions and data mining to combat healthcare fraud is not a new concept. However, the recent surge in hiring scores of prosecutors and FBI agents specializing in new technological advances in AI and Machine Learning marks a notable shift in strategy.
The documents call into question whether privacy rights are being violated when advanced technologies are used to surveil citizens' medical records.
These efforts are primarily focused on detecting “fraud, waste, and abuse” in the healthcare industry, and they have proven to be quite lucrative for the government. According to the DOJ and HHS, their investment in data analytics yields a return on investment of $4 or greater for every $1 spent on healthcare fraud detection and enforcement. They attribute much of this success to their reliance on data analytics as a key driver.
Artificial Intelligence MAY INFLUENCE WHETHER YOU CAN GET PAIN MEDICATION AND WHETHER YOUR DOCTOR COMMITS SUICIDE OR SERVES PRISON TIME
The article explores the growing use of AI in monitoring prescription data, specifically for opioid medications, to combat the opioid crisis. These systems, such as NarxCare, generate risk scores that influence painkiller prescriptions, raising concerns about patient access to needed medication.

Patients and doctors have reported negative impacts, including restricted care and professional repercussions, leading to questions about bias and accuracy. While proponents believe AI can help reduce opioid-related harm, critics emphasize the need for transparency and independent evaluation to prevent unintended consequences.
The case raises significant questions about privacy rights, due process, and the balance between security and individual liberties in the age of AI-driven law enforcement, highlighting the efforts of Dr. Anand’s legal team to challenge the evidence and the methods used.
The piece highlights cases where patients have been denied pain medication and physicians have faced legal challenges due to these systems. Ultimately, the article underscores the importance of balancing the benefits of AI in healthcare with the potential for harm to ensure equitable and effective pain management.


However, the rush to employ these new tools has led to unintended consequences, with many innocent physicians, clinics, hospitals, and health systems being accused of wrongdoing merely because they exhibit statistical anomalies or are considered outliers in their practices.
The critical question raised by this approach is whether it inadvertently identifies some legitimate practices and providers as fraudsters.
“Data analytics is a good tool to suggest the need for further investigation, but it should not be used to determine misconduct conclusively,” says Jacob Foster, Principal Assistant Chief of the DOJ’s Health Care Fraud Unit. Foster emphasizes that “data is not the truth” and that “we have to go out and investigate.”

Unfortunately, the government hasn’t always heeded Foster’s advice. Healthcare providers and legal experts have witnessed situations where data analytics resulted in innocent individuals bearing the substantial financial and reputational costs of defending against federal enforcement actions simply because their data points were different or higher than those of their peers.
In several cases, individuals have been wrongly accused of fraud but later had their names cleared after providing the necessary context to the government.



Artificial Intelligence, EVIDENCE LAUNDERING and Targeting Packages Used For Unsolicited Interrogation of Patients

Qlarant, a Maryland-based technology company, has developed algorithms to identify questionable behavior patterns and interactions in controlled-substance prescribing, and for opioids in particular, among medical providers. The company partners with various state and federal enforcement entities, including the Department of Health and Human Services’ Office of Inspector General, the FBI, and the Drug Enforcement Administration.
These algorithms analyze a wide range of data sources, including court records, insurance claims, drug monitoring data, property records, and incarceration data to flag providers.

However, Qlarant is using AI in ways that undermine the very principles of medical science; a closer examination reveals a system riddled with mathematical flaws and ethical oversights. Qlarant’s algorithms, marketed as tools to combat healthcare fraud, are instead perpetuating a system of tyranny that disproportionately targets Black and Brown healthcare professionals, echoing the historical persecution of thinkers like Giordano Bruno and Galileo Galilei.
“…It sucks when we get it wrong…”

William Mapp, Qlarant’s Chief Technology Officer, underlines that the final decision on what to do with the information generated by their algorithms lies with people, not the algorithms themselves. He acknowledges the potential for errors and the company’s continuous efforts to minimize them.
Qlarant’s AI-driven models, such as the NBI MEDIC program, claim to predict opioid overdoses and identify fraudulent healthcare practices. Qlarant’s algorithms rely on correlation rather than causation, mistaking patterns like prescription volume or patient travel distance for indicators of fraud. This approach reduces complex medical practices to simplistic data points, ignoring the nuanced realities of healthcare.

THE OPIOID INQUISITION: AN UNCONSCIOUS BIAS
For example, Qlarant’s use of Morphine Milligram Equivalents (MME) to assess risk fails to account for the physiological differences between opioid-naïve patients and long-term users. This one-size-fits-all approach disproportionately flags doctors serving communities with high pain management needs, many of whom are Black and Brown healthcare professionals. These doctors and pharmacists, like Galileo and Bruno before them, are being persecuted not for wrongdoing, but for challenging the status quo.
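To make the criticism concrete, here is a minimal sketch of how an MME-based risk flag works. The conversion factors are the published CDC oral MME values; the 90 MME/day threshold, the function names, and the prescription format are illustrative assumptions, not Qlarant's actual code:

```python
# Illustrative sketch of an MME-based risk flag (NOT Qlarant's actual system).
# CDC-published oral MME conversion factors for a few common opioids.
MME_FACTORS = {"hydrocodone": 1.0, "oxycodone": 1.5,
               "morphine": 1.0, "hydromorphone": 4.0}

def daily_mme(drug, mg_per_dose, doses_per_day):
    """Total daily morphine milligram equivalents for one prescription."""
    return MME_FACTORS[drug] * mg_per_dose * doses_per_day

def flag_high_risk(prescriptions, threshold=90):
    """Flag a patient whose summed daily MME exceeds a fixed threshold.

    Note what is missing: no input for opioid tolerance, diagnosis, or
    treatment history -- the one-size-fits-all problem described above.
    """
    total = sum(daily_mme(d, mg, n) for d, mg, n in prescriptions)
    return total > threshold, total

# A stable long-term pain patient and an opioid-naive patient produce
# the identical score from the same prescription:
flagged, total = flag_high_risk([("oxycodone", 30, 3)])  # 135 MME/day
```

The point of the sketch is that the score is a pure arithmetic product of dose and frequency; two physiologically opposite patients are indistinguishable to it.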

Qlarant’s scoring system, which labels doctors as high-risk based on prescription volume, is reminiscent of the Inquisition’s condemnation of Galileo for his support of the Copernican model. Just as Galileo was punished for daring to question the geocentric view of the universe, Black and Brown doctors and pharmacists are being penalized for serving communities that mainstream healthcare has often neglected. Qlarant’s algorithms, like the Inquisition’s tribunals, are tools of oppression, masquerading as instruments of justice.
MILLI VANILLI JUSTICE (WHERE THE ENTIRE GOVERNMENT PROSECUTORS’ SHOW IS FRAUDULENT AND FAKE)

The company, in an online brochure, said its “extensive government work” includes partnerships with state and federal enforcement entities such as the Department of Health and Human Services Office of Inspector General, the FBI, and the Drug Enforcement Administration. In a promotional video, the company said its algorithms could “analyze a wide variety of data sources,” including court records, insurance claims, drug monitoring data, property records, and incarceration data to flag providers.
William Mapp, the company’s chief technology officer, stressed the final decision about what to do with that information is left up to people — not the algorithms.

These tactics raise the alarm that innocent healthcare providers may be wrongly accused due to statistical anomalies flagged by algorithms.
Mapp said that “Qlarant’s algorithms are considered proprietary and our intellectual property” and that they have not been independently peer-reviewed. “We do know that there’s going to be some percentage of error, and we try to let our customers know,” Mapp said. “It sucks when we get it wrong. But we’re constantly trying to get to that point where there are fewer things that are wrong.”

The ongoing debate surrounding the use of artificial intelligence and advanced data analytics in the fight against healthcare fraud emphasizes the need for a balanced approach that prioritizes accurate investigations and fairness in targeting potential wrongdoers. As the government continues to refine its strategies, healthcare professionals and legal experts urge a cautious and responsible application of these technologies to prevent the wrongful targeting of innocent parties.

The outcome of this case, United States vs. Anand, has the potential to set important precedents in using AI-driven risk assessments in law enforcement, particularly concerning the protection of individual rights and the fairness of the justice system. It is a critical moment in the ongoing debate about the balance between crime prevention and preserving civil liberties. Prosecutions against doctors through the use of prescribing data have attracted the attention of the American Medical Association.
Concerns about these unreviewed and unknown algorithms have also reached the American Medical Association (AMA). Bobby Mukkamala, MD, chair of the AMA’s Substance Use and Pain Care Task Force, highlights the significant impact on physicians, stating that “these unknown and unreviewed algorithms have resulted in physicians having their prescribing privileges immediately suspended without due process or review by a state licensing board — often harming patients in pain because of delays and denials of care.”

QLARANT FLAWED DATA ANALYTICS USED TO TARGET DR. SHIVA AKULA
The DrCHRONO Electronic Health Records Controversy

At the heart of this case lies the warrantless data mining of Dr. Anand’s patient records, specifically within the DrCHRONO electronic health records system. The use of AI algorithms by government agencies to sift through vast amounts of personal health information without explicit consent or a warrant has raised significant concerns about privacy and individual rights.

One such case, United States v. Anand, involved Dr. Neil Anand and the technology company DrChrono, which in July 2019 supplied the government with records related to Dr. Anand’s Pennsylvania-based practice. He was being investigated for allegedly distributing drugs to patients who didn’t request them, raising concerns about the implications of advanced data analytics in such investigations.

Forbes reports on the government’s increasing acquisition of citizens’ medical records through healthcare tech companies like DrChrono. These companies, which manage electronic health records, are providing vast amounts of sensitive patient data to federal agents via search warrants.
The government uses this data in investigations, such as those related to opioid distribution, but the scope of information obtained often extends beyond suspects to include victims. Privacy advocates express concern that individuals have little control over their health data, and companies’ privacy policies permit data sales.

Unlike larger tech firms or genetic data companies, these smaller health tech companies face less scrutiny and may not have the resources to resist government demands. This raises serious concerns about medical privacy in the digital age.


Artificial Intelligence and Targeting Packages Under Scrutiny in United States v. Anand
In a landmark case that has sparked nationwide discussions on privacy, surveillance, and the reach of government artificial intelligence (AI), United States vs. Anand has unveiled a complex web of warrantless data mining. Dr. Anand, a medical professional, found himself at the center of a legal battle that sheds light on the use of AI computer algorithms to target and interrogate chronic pain patients and those suffering from substance use disorders.
In response, Dr. Anand’s attorney, the renowned Coley O. Reynolds, filed motions to expose potential unlawful data analytics and targeting packages, igniting a debate about the loss of medical privacy in the United States and its implications for legal cases in the future.

Chronic Pain Patients and Substance Use Disorder Patients Targeted
Attorney General Jeff Sessions delivered a speech in Louisville, Kentucky, addressing efforts to combat violent crime and the opioid crisis. He acknowledged the recent rise in crime rates and overdose deaths, emphasizing the Trump administration’s commitment to reversing these trends.
Sessions highlighted the Department of Justice’s actions, including increased prosecutions of violent criminals, firearm offenses, and opioid-related crimes. He also emphasized the importance of collaborative crime reduction strategies like Project Safe Neighborhoods (PSN) and intelligence sharing between local and federal law enforcement.
Sessions announced a DEA surge to target pharmacies and prescribers dispensing disproportionate amounts of drugs, utilizing data analysis to identify and prosecute opioid-related healthcare fraud. He expressed gratitude to law enforcement officers and their families for their service.



The data mining operations did not stop at mere surveillance but extended to the targeting of specific patient groups, particularly chronic pain patients and those suffering from substance use disorders. Government AI algorithms and targeting packages were utilized for unsolicited interrogation of patients in United States v Anand. The Government algorithms were employed to identify and categorize these patients, raising critical ethical and legal questions about the boundaries of surveillance and profiling.

The Role of Coley O. Reynolds

In response to the alleged invasion of privacy, Dr. Anand’s legal counsel, the renowned attorney Coley O. Reynolds, took up the case with determination and vigor. Reynolds, known for his relentless pursuit of justice and civil liberties, filed motions to uncover the possible unlawful use of data analytics and targeting packages. His actions have set in motion a legal battle that could have profound implications for the future of privacy rights and surveillance in the United States.
EVIDENCE LAUNDERING
MOTION FOR FRANKS HEARING
In this case, medical records seized pursuant to an August 20, 2019, search warrant are under intense scrutiny. Dr. Anand’s defense counsel, Coley O. Reynolds, contends that the evidence obtained through this warrant should be suppressed, as they argue that it was acquired as a result of a defective search warrant.
Their claim revolves around the omission of crucial information from the search warrant affidavit, which, if included, would have negated the finding of probable cause. Dr. Anand is also respectfully requesting a hearing pursuant to the legal precedent set by Franks v. Delaware (1978), which allows individuals to challenge the accuracy and completeness of information presented in a search warrant affidavit.

JUDGE’S ORDER DENYING ANAND’S MOTION TO SUPPRESS AND FOR FRANKS HEARING
DR. NEIL ANAND, MD’S FOIA LAWSUIT & EXHIBITS

Privacy and Legal Precedents


The loss of privacy in medical records has wide-reaching implications for American citizens. These records contain some of the most intimate and sensitive information about a person’s life, including their medical history, conditions, and treatments. The breach of such privacy threatens the trust and confidentiality that are fundamental to the doctor-patient relationship.
Furthermore, the invasion of medical records could have far-reaching consequences for legal cases in the future. If the government can access and use personal medical data without proper oversight, the rights of individuals in other legal contexts may also be at risk. Legal precedents that arise from cases like United States vs. Anand will shape the future boundaries of governmental surveillance and data mining.

Balancing Security and Privacy
In an era where the need for national security often collides with individual privacy, it is essential to strike a balance. AI and advanced technology have undoubtedly revolutionized how information is collected and analyzed. Still, it should be done within the boundaries of the law and with respect for individual rights. As technology advances, legal frameworks must adapt to ensure privacy rights are not eroded.
The legal landscape surrounding healthcare fraud investigations has been thrust into the spotlight with the ongoing case of United States v. Anand. Dr. Anand, a Pennsylvania-based practitioner, finds himself at the center of a debate that raises significant questions about using artificial intelligence (AI) and data analytics in law enforcement.

This legal battle underscores the broader context of how the federal government, specifically the Department of Justice (DOJ) and the Department of Health and Human Services (HHS), has been harnessing the power of artificial intelligence and data mining in its efforts to combat healthcare fraud. While the use of digital solutions and data analytics in the fight against healthcare fraud is not a new concept, recent developments have seen a significant increase in the government’s reliance on these technologies.
The DOJ and HHS have bolstered their ranks with scores of prosecutors and FBI agents who are now focused on utilizing cutting-edge AI and machine learning tools to uncover instances of “fraud, waste, and abuse” within the healthcare industry.
However, this intensified use of technology raises important questions. Are these innovative data analytics approaches unintentionally targeting innocent practices and providers as fraudsters? The answer, according to some experts, is an undeniable “yes.” In their pursuit of justice, these advanced tools have sometimes failed to differentiate between legitimate healthcare providers and those engaged in fraudulent activities.
The government’s rush to employ these new tools has led to unfortunate consequences, with many innocent physicians, clinics, hospitals, and health systems being accused of wrongdoing merely because their practices are different from their peers. It’s important to recognize that being a statistical outlier or anomaly should serve as a basis for further inquiry, not as proof of fraud.

The Department of Justice has employed data-mining techniques to target practices and providers whose frequency of certain treatments and procedures exceeds the majority of their peers. Examples include the number and strength of opioid prescriptions, high-reimbursement injections, and the frequent ordering of high-complexity labs. Yet, perfectly legitimate reasons could explain these deviations, such as patient populations with unique needs or practitioners specializing in specific treatments.
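The peer-comparison flagging described above can be sketched in a few lines. This is an illustration of the general z-score technique, not the DOJ's actual method; the provider names, counts, and threshold are invented for the example. It shows exactly the failure mode the text describes: the flag records only *how far* a provider sits from the peer mean, not *why*:

```python
import statistics

def flag_outliers(counts, z_threshold=1.5):
    """Flag providers whose prescription count is a statistical outlier
    relative to peers.

    A flag says nothing about WHY the count is high: a pain-management
    specialist with a unique patient population is flagged exactly the
    same way as a fraudulent prescriber.
    """
    mean = statistics.mean(counts.values())
    stdev = statistics.pstdev(counts.values())
    return [provider for provider, c in counts.items()
            if stdev and abs(c - mean) / stdev > z_threshold]

# Four general practices and one pain-management specialist:
monthly_rx = {"gp_a": 40, "gp_b": 35, "gp_c": 45, "gp_d": 38,
              "pain_clinic": 220}
print(flag_outliers(monthly_rx))  # → ['pain_clinic']
```

The specialist is the only "anomaly" even though the deviation may have a perfectly legitimate explanation, which is why outlier status should prompt further inquiry rather than serve as proof of fraud.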

The involvement of private companies like Qlarant, a Maryland-based technology company, further complicates the landscape. Qlarant has developed algorithms to identify questionable behavior patterns related to controlled substances and opioids. These algorithms have attracted partnerships with state and federal enforcement entities, including the Department of Health and Human Services’ Office of Inspector General, the FBI, and the Drug Enforcement Administration.

The case of United States v Anand highlights the critical need for a balanced approach to the use of advanced technologies in healthcare fraud detection. As the legal battle continues, it underscores the importance of ensuring that constitutional rights are upheld and that evidence obtained through potentially unconstitutional means is not used in legal proceedings. The outcome of this case has the potential to set important precedents in the use of AI and data analytics in law enforcement, particularly concerning protecting individuals’ rights and the integrity of healthcare providers.

United States vs. Anand brings to light the intricate world of government AI surveillance, warrantless data mining, and its consequences for privacy, particularly within the context of medical records. Dr. Anand’s case, coupled with the vigorous efforts of his legal counsel, Coley O. Reynolds, showcases the importance of safeguarding individual privacy and the potential legal precedents that will shape the future of privacy rights in the United States. The battle between security and privacy is far from over, and it is a topic that will continue to demand our attention and careful consideration as technology and legal battles evolve.
FOR NOW, YOU ARE WITHIN
THE NORMS
OR SEND
$175.00 OR MORE TO CASH APP:$docnorm
ZELLE 3135103378
So, Donate to the “Pharmacist For Healthcare Legal Defense Fund,”
Briefing Document: United States vs. Dr. Neil Anand, MD et al. – AI, Data Mining, and Healthcare Fraud Investigations
Date: October 26, 2023 (Based on the “November 2023” reference for the youarewithinthenorms.com report)
Subject: Review of Sources on the Case of United States vs. Dr. Neil Anand, MD et al., Focusing on AI Use, Data Mining, and Implications for Healthcare Providers.
Prepared for: [Intended Audience – e.g., Legal Team, Advocacy Group, Media]

Executive Summary:
This briefing document synthesizes information from the provided sources regarding the case of United States vs. Dr. Neil Anand, MD et al.
The central theme revolves around the U.S. government’s increasing reliance on Artificial Intelligence (AI) and data analytics to combat healthcare fraud, specifically in relation to opioid prescriptions.
The sources highlight concerns about warrantless data mining of electronic medical records, the potential for these technologies to wrongly target innocent healthcare providers based on statistical anomalies, and the implications for patient privacy and due process.

The case of Dr. Anand, an anesthesiologist accused of healthcare fraud based on data obtained from DrChrono, serves as a focal point for these broader issues. The defense argues the evidence was obtained via a defective warrant due to omitted information and challenges the use of AI-driven risk assessments.
Furthermore, the role of companies like Qlarant, which develop proprietary AI algorithms for identifying potential fraud, is examined, raising concerns about their accuracy, bias, and lack of independent review.

Frequently Asked Questions: United States vs. Dr. Neil Anand and the Use of AI in Healthcare Fraud Investigations
1. What is the central issue in the case of United States vs. Dr. Neil Anand? The core of the United States vs. Dr. Neil Anand case revolves around the government’s use of AI-driven data analytics and alleged warrantless data mining of electronic medical records, specifically those of Dr. Anand obtained from DrChrono, to build a healthcare fraud case against him. This raises significant concerns about privacy rights, the potential for false accusations based on statistical anomalies flagged by algorithms, and the legality of the methods used to gather evidence.
2. How are artificial intelligence (AI) and data analytics being used by the government in healthcare fraud investigations? The Department of Justice (DOJ) and the Department of Health and Human Services (HHS) are increasingly employing AI and machine learning to analyze vast amounts of healthcare data, including prescription records, insurance claims, and electronic health records, to identify patterns indicative of fraud, waste, and abuse. This involves using algorithms to detect statistical outliers and flag healthcare providers whose practices deviate from perceived norms.
3. What concerns have been raised about the government’s use of AI and data analytics in these investigations? Several critical concerns have emerged. One major issue is the potential for these technologies to generate false positives, leading to the wrongful accusation and prosecution of innocent healthcare providers based on statistical anomalies rather than actual fraudulent activity. There are also significant worries about the lack of transparency and independent review of these proprietary algorithms, as well as the potential for bias, particularly against Black and Brown healthcare professionals and those serving communities with specific healthcare needs. Furthermore, the impact on patient access to necessary medications, such as pain medication, due to AI-driven risk scores is a growing concern.
4. What is “warrantless data mining” in the context of this case, and why is it significant? Warrantless data mining, as alleged in the Anand case, refers to the government’s access and analysis of electronic medical records without obtaining a traditional warrant based on probable cause of a specific crime. The significance lies in the potential violation of Fourth Amendment rights protecting against unreasonable searches and seizures, as medical records contain highly sensitive and private information. The defense argues that the evidence against Dr. Anand was obtained through a defective warrant based on improperly acquired data.
5. Who is Qlarant, and what role does the company play in healthcare fraud detection? Qlarant is a technology company that develops AI-driven algorithms to identify questionable behavior patterns related to controlled substances and healthcare practices. They partner with various state and federal enforcement agencies, including HHS-OIG, the FBI, and the DEA. Their algorithms analyze diverse data sources to flag providers deemed high-risk, raising concerns about the reliance on correlation rather than causation and the potential for bias in their assessments.
6. What is “evidence laundering” as mentioned in the context of the case? “Evidence laundering” in this context refers to the concern that initial potentially unlawfully obtained data or insights from flawed AI algorithms could be used to retroactively justify or construct a seemingly legitimate investigation and secure warrants. The idea is that the initial taint of potentially illegal or unreliable data collection is obscured in the subsequent investigative process.
7. What actions has Dr. Anand’s defense team taken in response to the charges and the evidence presented? Dr. Anand’s defense attorney, Coley O. Reynolds, has vigorously challenged the government’s case by filing motions to suppress evidence obtained through the search warrant, arguing that the warrant was defective due to the omission of crucial information. They have also sought a “Franks hearing” to challenge the accuracy and completeness of the information presented in the search warrant affidavit, aiming to expose potential unlawful data analytics and targeting packages used by the government.
8. What are the broader implications of the United States vs. Dr. Neil Anand case for privacy, the legal system, and healthcare providers? This case has significant implications for the balance between national security and individual privacy, particularly concerning medical records in the digital age. It raises critical questions about the appropriate use of AI in law enforcement, the potential for algorithmic bias and error, the erosion of the doctor-patient relationship due to government surveillance, and the due process rights of healthcare providers facing fraud allegations based on data analytics. The legal precedents set in this case could shape the future boundaries of governmental surveillance, data mining, and the protection of individual rights in the face of advancing technology. The American Medical Association has also expressed concerns about the impact of unreviewed algorithms on physicians’ prescribing privileges and patient care.
Key Themes and Important Ideas/Facts:
1. Government’s Increased Use of AI and Data Analytics in Healthcare Fraud Investigations:
- The Department of Justice (DOJ) and the Department of Health and Human Services (HHS) are increasingly utilizing AI, Generative AI, and Machine Learning to combat healthcare fraud.
- This involves hiring more prosecutors and FBI agents specializing in these technologies.
- The government claims a significant return on investment (>$4 for every $1 spent) due to the success of data analytics in detecting “fraud, waste, and abuse.”
- Attorney General Jeff Sessions, in 2017, emphasized using data analysis to target pharmacies and prescribers dispensing disproportionate amounts of drugs to combat the opioid crisis.

2. The Case of United States vs. Dr. Neil Anand, MD:
- Dr. Neil Anand, a Pennsylvania-based anesthesiologist, is facing charges of healthcare fraud and conspiracy to distribute controlled substances.
- The investigation against Dr. Anand originated from data obtained from DrChrono, an electronic health records company.
- Forbes Magazine reported that DrChrono supplied the government with records related to Dr. Anand’s practice in July 2019, concerning allegations that he handed out “goody bags” of drugs to patients who neither needed nor asked for them.
- Dr. Anand has pleaded not guilty to all charges.
- His attorney, Coley O. Reynolds, is challenging the evidence, arguing that the search warrant used to obtain it was defective because crucial information was omitted from the affidavit; he has filed motions to suppress the evidence and requested a “Franks hearing.”
- The defense contends that the government’s use of data analytics and “targeting packages” led to the unlawful acquisition of medical records and the targeting of chronic pain patients and those with substance use disorders.
3. Warrantless Data Mining and Privacy Concerns:
- The case highlights concerns about the government’s warrantless data mining of electronic medical records held by companies like DrChrono.
- These health tech companies, often smaller and facing less scrutiny than larger tech firms, are providing vast amounts of sensitive patient data to federal agents via search warrants.
- Privacy advocates worry that individuals have little control over their health data, and companies’ privacy policies may permit data sales.
- The sources emphasize the highly sensitive nature of medical records and the potential erosion of the doctor-patient relationship due to such surveillance.
- “These tactics raise the alarm that innocent healthcare providers may be wrongly accused due to statistical anomalies flagged by algorithms. The documents call into question if privacy rights are being violated when advanced technologies are implemented to surveil citizens’ medical records.”
4. The Role and Concerns Regarding AI Algorithms (e.g., Qlarant):
- Companies like Qlarant develop algorithms to identify “questionable behavior patterns” related to controlled substances and opioid prescribing by analyzing various data sources (court records, insurance claims, drug monitoring data, etc.).
- These algorithms are used by federal and state enforcement entities like HHS-OIG, the FBI, and the DEA.
- Concerns are raised that these algorithms rely on correlation rather than causation, potentially misinterpreting patterns like prescription volume or patient travel distance as indicators of fraud.
- Qlarant’s algorithms, deployed under the NBI MEDIC program, are marketed as tools to predict opioid overdoses and identify fraudulent practices.
- William Mapp, Qlarant’s Chief Technology Officer, acknowledges the potential for errors but stresses that the final decision based on the algorithm’s output rests with people. He also stated that “Qlarant’s algorithms are considered proprietary and our intellectual property” and have not been independently peer-reviewed.
- Critics argue that the use of metrics like Morphine Milligram Equivalents (MME) can disproportionately flag doctors serving communities with high pain management needs, potentially exhibiting unconscious bias and targeting Black and Brown healthcare professionals.
- “Qlarant’s scoring system, which labels doctors as high-risk based on prescription volume, is reminiscent of the Inquisition’s condemnation of Galileo for his support of the Copernican model.”
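To make the correlation-versus-causation concern concrete, the following is a minimal, hypothetical Python sketch of how a volume-based flagging heuristic of the kind critics describe might work. The MME conversion factors shown follow the CDC’s published table for these three drugs; the threshold, outlier share, function names, and patient figures are invented for illustration and bear no relation to Qlarant’s proprietary, unreviewed models.

```python
# Hypothetical sketch of a naive MME-based prescriber-flagging heuristic.
# CDC conversion factors: morphine 1.0, hydrocodone 1.0, oxycodone 1.5.
# Threshold and patient data are invented for illustration only.

MME_FACTOR = {"morphine": 1.0, "hydrocodone": 1.0, "oxycodone": 1.5}

def daily_mme(drug, mg_per_dose, doses_per_day):
    """Morphine Milligram Equivalents per day for one prescription."""
    return mg_per_dose * doses_per_day * MME_FACTOR[drug]

def flag_prescriber(patient_daily_mmes, threshold=90.0, outlier_share=0.5):
    """Flag a prescriber if more than `outlier_share` of their patients
    exceed `threshold` MME/day.

    This is the flaw critics point to: the rule measures volume, not
    medical necessity, so a legitimately high-dose case mix is
    indistinguishable from fraud.
    """
    high = sum(1 for mme in patient_daily_mmes if mme > threshold)
    return high / len(patient_daily_mmes) > outlier_share

# 20 mg oxycodone taken 3x/day: 20 * 3 * 1.5 = 90.0 MME/day
print(daily_mme("oxycodone", 20, 3))         # 90.0
print(flag_prescriber([120.0, 95.0, 30.0]))  # True: 2 of 3 exceed 90
```

The flaw is visible in the arithmetic itself: a pain-management or palliative-care specialist whose case mix legitimately requires high doses crosses the threshold for reasons the heuristic cannot distinguish from misconduct, which is the statistical-anomaly problem the sources raise.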

5. Potential for Wrongful Accusations and “Data Analytics as Junk Science”:
- Experts warn that relying heavily on AI and data analytics can lead to innocent healthcare providers being wrongly accused due to statistical anomalies or being outliers in their practices.
- Jacob Foster of the DOJ’s Health Care Fraud Unit acknowledges that “data analytics is a good tool to suggest the need for further investigation, but it should not be used to determine misconduct conclusively,” emphasizing that “data is not the truth” and requires investigation.
- The sources cite examples of physicians facing severe legal repercussions, including suicide (Dr. Charles R. Syzman) and lengthy prison sentences later overturned (Dr. Steve Henson), highlighting the high stakes involved.
- The American Medical Association (AMA) has expressed concerns about these “unknown and unreviewed algorithms” leading to immediate suspension of prescribing privileges without due process, harming patients.
- “these unknown and unreviewed algorithms have resulted in physicians having their prescribing privileges immediately suspended without due process or review by a state licensing board — often harming patients in pain because of delays and denials of care.” – Bobby Mukkamala, MD, AMA.
6. Evidence Laundering and Targeting Packages:
- The sources raise allegations of “evidence laundering” and the use of “targeting packages” by the government, suggesting a pre-determined approach to investigating and prosecuting medical providers.
- These packages allegedly target specific patient groups, particularly chronic pain patients and those with substance use disorders, for “unsolicited interrogation.”
7. Legal Challenges and the Importance of Due Process:
- Dr. Anand’s legal team is actively challenging the legality of the evidence obtained and the methods used in the investigation, emphasizing the need for due process and the protection of constitutional rights.
- The outcome of United States vs. Anand is seen as potentially setting important legal precedents regarding the use of AI-driven risk assessments in law enforcement and the protection of individual rights.
- The case underscores the ongoing tension between national security/law enforcement objectives and individual privacy rights in the digital age.
- Justice Thurgood Marshall’s quote regarding the purpose of FOIA is included, highlighting the importance of an informed citizenry to check against government overreach.

Conclusion:
The case of United States vs. Dr. Neil Anand brings to the forefront critical questions surrounding the government’s increasing use of AI and data analytics in healthcare fraud investigations.
The sources highlight significant concerns about potential privacy violations through warrantless data mining, the risk of wrongly accusing innocent healthcare providers based on flawed algorithms, and the lack of transparency and independent review of these technologies.
The legal challenges mounted by Dr. Anand’s defense underscore the importance of upholding constitutional rights and ensuring due process in the face of increasingly sophisticated government surveillance methods.
The outcome of this case could have far-reaching implications for the future of medical privacy, the regulation of AI in law enforcement, and the way healthcare fraud is investigated and prosecuted.