published and reported by youarewithinthenorms.com
NORMAN J CLEMENT RPH., DDS, NORMAN L. CLEMENT PHARM-TECH, MALACHI F. MACKANDAL PHARMD, BELINDA BROWN-PARKER, IN THE SPIRIT OF JOSEPH SOLVO ESQ., IN THE SPIRIT OF REV. C.T. VIVIAN, JELANI ZIMBABWE CLEMENT, BS., MBA., IN THE SPIRIT OF THE HON. PATRICE LUMUMBA, IN THE SPIRIT OF ERLIN CLEMENT SR., WALTER F. WRENN III., MD., JULIE KILLINGWORTH, LESLY POMPY MD., CHRISTOPHER RUSSO, MD., NANCY SEEFELDT, WILLIE GUINYARD BS., JOSEPH WEBSTER MD., MBA, BEVERLY C. PRINCE MD., FACS., NEIL ARNAND, MD., RICHARD KAUL, MD., LEROY BAYLOR, JAY K. JOSHI MD., MBA, ADRIENNE EDMUNDSON, ESTER HYATT PH.D., WALTER L. SMITH BS., IN THE SPIRIT OF BRAHM FISHER ESQ., MICHELE ALEXANDER MD., CUDJOE WILDING BS, MARTIN NJOKU, BS., RPH., IN THE SPIRIT OF DEBRA LYNN SHEPHERD, BERES E. MUSCHETT, STRATEGIC ADVISORS
Thomas Brewster, senior writer at Forbes covering cybercrime, privacy, and surveillance, in his article “Explained: Why The Feds Are Raiding Tech Companies For Medical Records”:
“In a separate case, in July 2019, DrChrono supplied the government with records related to the Pennsylvania-based practice of Neil Anand. He was being investigated for handing out “goody bags” of drugs to patients who didn’t need or ask for them. He’s since pleaded not guilty to all charges of healthcare fraud and conspiracy to distribute controlled substances.
In a search warrant I uncovered, the investigating agent goes into a little detail about what exactly he was able to determine from DrChrono’s record.”
Data Analytics, Warrantless Data Mining of Electronic Medical Records, in United States v. Anand and Dr. Neil Anand

The United States Department of Justice (DOJ) and the Department of Health and Human Services (HHS) have recently ramped up their efforts to combat healthcare fraud, leaning heavily on data analytics, Generative Artificial Intelligence (AI), and Machine Learning.
While the government claims significant success in identifying instances of fraud, some experts warn that these advanced technologies may inadvertently target innocent healthcare providers and practices.
The federal government’s utilization of digital solutions and data mining to combat healthcare fraud is not a new concept. However, the recent surge in hiring scores of prosecutors and FBI agents specializing in new technological advances in AI and Machine Learning marks a notable shift in strategy.
These efforts are primarily focused on detecting “fraud, waste, and abuse” in the healthcare industry, and they have proven to be quite lucrative for the government. According to the DOJ and HHS, their investment in data analytics yields a return on investment of $4 or greater for every $1 spent on healthcare fraud detection and enforcement. They attribute much of this success to their reliance on data analytics as a key driver.

Artificial Intelligence MAY INFLUENCE WHETHER YOU CAN GET PAIN MEDICATION AND WHETHER YOUR DOCTOR COMMITS SUICIDE OR SERVES PRISON TIME

However, the rush to employ these new tools has led to unintended consequences, with many innocent physicians, clinics, hospitals, and health systems being accused of wrongdoing merely because they exhibit statistical anomalies or are considered outliers in their practices. The critical question raised by this approach is whether it inadvertently identifies some legitimate practices and providers as fraudsters.
“Data analytics is a good tool to suggest the need for further investigation, but it should not be used to determine misconduct conclusively,” says Jacob Foster, Principal Assistant Chief of the DOJ’s Health Care Fraud Unit. Foster emphasizes that “data is not the truth” and that “we have to go out and investigate.”

Unfortunately, the government hasn’t always heeded Foster’s advice. Healthcare providers and legal experts have witnessed situations where data analytics resulted in innocent individuals bearing the substantial financial and reputational costs of defending against federal enforcement actions simply because their data points were different or higher than those of their peers. In several cases, individuals have been wrongly accused of fraud but later had their names cleared after providing the necessary context to the government.
Artificial Intelligence, EVIDENCE LAUNDERING and Targeting Packages Used For Unsolicited Interrogation of Patients

Qlarant, a Maryland-based technology company, has developed algorithms to identify questionable behavior patterns related to controlled substances and opioids, partnering with various state and federal enforcement entities, including the Department of Health and Human Services’ Office of Inspector General, the FBI, and the Drug Enforcement Administration.
These algorithms analyze a wide range of data sources, including court records, insurance claims, drug monitoring data, property records, and incarceration data to flag providers.
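Qlarant’s actual models are proprietary and have not been peer-reviewed, so no outside party can say precisely how they work. Purely as an illustration of the kind of multi-source flagging the company’s brochure and video describe, the sketch below combines hypothetical fields drawn from claims, drug-monitoring, court, and incarceration data into a single weighted score. Every field name, weight, and threshold here is an assumption invented for this example, not a description of Qlarant’s system.

```python
# Illustrative sketch only: a generic multi-source "risk flag" of the kind
# described above. Qlarant's real algorithms are proprietary and unreviewed;
# every field, weight, and threshold below is a hypothetical assumption.
from dataclasses import dataclass

@dataclass
class ProviderRecord:
    npi: str                      # provider identifier (hypothetical field)
    opioid_mme_per_claim: float   # avg morphine milligram equivalents billed
    pdmp_flags: int               # drug-monitoring-program alerts
    board_actions: int            # licensing or court actions on record
    patient_overdose_events: int  # events linked to the provider's patients

# Hypothetical weights; a real system would require clinical validation.
WEIGHTS = {
    "opioid_mme_per_claim": 0.02,
    "pdmp_flags": 1.5,
    "board_actions": 3.0,
    "patient_overdose_events": 2.0,
}
FLAG_THRESHOLD = 10.0  # arbitrary cutoff, for illustration only

def risk_score(p: ProviderRecord) -> float:
    """Weighted sum across data sources -- a crude stand-in for whatever
    scoring a vendor actually uses."""
    return (WEIGHTS["opioid_mme_per_claim"] * p.opioid_mme_per_claim
            + WEIGHTS["pdmp_flags"] * p.pdmp_flags
            + WEIGHTS["board_actions"] * p.board_actions
            + WEIGHTS["patient_overdose_events"] * p.patient_overdose_events)

def flag_providers(providers: list[ProviderRecord]) -> list[str]:
    """Return identifiers whose score exceeds the arbitrary threshold.
    A high score is only a lead for human review, not proof of fraud."""
    return [p.npi for p in providers if risk_score(p) > FLAG_THRESHOLD]
```

Even in this toy version, the score measures a statistical pattern, not intent; as Mapp notes below, any decision about what to do with a flag is supposed to rest with human investigators.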
“…It sucks when we get it wrong…”

William Mapp, Qlarant’s Chief Technology Officer, underlines that the final decision on what to do with the information generated by their algorithms lies with people, not the algorithms themselves. He acknowledges the potential for errors and the company’s continuous efforts to minimize them.
MILLI VANILLI JUSTICE (WHERE THE ENTIRE GOVERNMENT PROSECUTORS’ SHOW IS FRAUDULENT AND FAKE)

The company, in an online brochure, said its “extensive government work” includes partnerships with state and federal enforcement entities such as the Department of Health and Human Services Office of Inspector General, the FBI, and the Drug Enforcement Administration. In a promotional video, the company said its algorithms could “analyze a wide variety of data sources,” including court records, insurance claims, drug monitoring data, property records, and incarceration data to flag providers.
William Mapp, the company’s chief technology officer, stressed the final decision about what to do with that information is left up to people — not the algorithms.
Mapp said that “Qlarant’s algorithms are considered proprietary and our intellectual property” and that they have not been independently peer-reviewed. “We do know that there’s going to be some percentage of error, and we try to let our customers know,” Mapp said. “It sucks when we get it wrong. But we’re constantly trying to get to that point where there are fewer things that are wrong.”
The ongoing debate surrounding the use of artificial intelligence and advanced data analytics in the fight against healthcare fraud emphasizes the need for a balanced approach that prioritizes accurate investigations and fairness in targeting potential wrongdoers. As the government continues to refine its strategies, healthcare professionals and legal experts urge a cautious and responsible application of these technologies to prevent the wrongful targeting of innocent parties.

The outcome of this case, United States vs. Anand, has the potential to set important precedents in using AI-driven risk assessments in law enforcement, particularly concerning the protection of individual rights and the fairness of the justice system. It is a critical moment in the ongoing debate about the balance between crime prevention and preserving civil liberties. Prosecutions against doctors through the use of prescribing data have attracted the attention of the American Medical Association.
Concerns about these unreviewed and unknown algorithms have also reached the American Medical Association (AMA). Bobby Mukkamala, MD, chair of the AMA’s Substance Use and Pain Care Task Force, highlights the significant impact on physicians, stating that “these unknown and unreviewed algorithms have resulted in physicians having their prescribing privileges immediately suspended without due process or review by a state licensing board — often harming patients in pain because of delays and denials of care.”

QLARANT’S FLAWED DATA ANALYTICS USED TO TARGET DR. SHIVA AKULA
The DrChrono Electronic Health Records Controversy

At the heart of this case lies the warrantless data mining of Dr. Anand’s patient records, specifically within the DrChrono electronic health records system. The use of AI algorithms by government agencies to sift through vast amounts of personal health information without explicit consent or a warrant has raised significant concerns about privacy and individual rights.
One such case, United States vs. Anand, involved Dr. Neil Anand and the technology company DrChrono, which in July 2019 supplied the government with records related to his Pennsylvania-based practice. He was being investigated for allegedly distributing drugs to patients who didn’t request them, raising concerns about the implications of advanced data analytics in such investigations.
Artificial Intelligence and Targeting Packages Under Scrutiny in United States v. Anand
In a landmark case that has sparked nationwide discussions on privacy, surveillance, and the reach of government artificial intelligence (AI), United States vs. Anand has unveiled a complex web of warrantless data mining. Dr. Anand, a medical professional, found himself at the center of a legal battle that sheds light on the use of AI computer algorithms to target and interrogate chronic pain patients and those suffering from substance use disorders.
In response, Dr. Anand’s attorney, the renowned Coley O. Reynolds, filed motions to expose potential unlawful data analytics and targeting packages, igniting a debate about the loss of medical privacy in the United States and its implications for legal cases in the future.
Chronic Pain Patients and Substance Use Disorder Patients Targeted


The data mining operations did not stop at mere surveillance but extended to the targeting of specific patient groups, particularly chronic pain patients and those suffering from substance use disorders. In United States v. Anand, government AI algorithms and targeting packages were used to identify and categorize these patients for unsolicited interrogation, raising critical ethical and legal questions about the boundaries of surveillance and profiling.

The Role of Coley O. Reynolds

In response to the alleged invasion of privacy, Dr. Anand’s legal counsel, the renowned attorney Coley O. Reynolds, took up the case with determination and vigor. Reynolds, known for his relentless pursuit of justice and civil liberties, filed motions to uncover the possible unlawful use of data analytics and targeting packages. His actions have set in motion a legal battle that could have profound implications for the future of privacy rights and surveillance in the United States.
EVIDENCE LAUNDERING
MOTION FOR FRANKS HEARING
In this case, medical records seized pursuant to an August 20, 2019, search warrant are under intense scrutiny. Dr. Anand’s defense counsel, Coley O. Reynolds, contends that the evidence obtained through this warrant should be suppressed because it was acquired as a result of a defective search warrant.
Their claim revolves around the omission of crucial information from the search warrant affidavit, which, if included, would have negated the finding of probable cause. Dr. Anand is also respectfully requesting a hearing pursuant to the legal precedent set by Franks v. Delaware (1978), which allows individuals to challenge the accuracy and completeness of information presented in a search warrant affidavit.
DR. NEIL ANAND, MD., FOIA LAWSUIT & EXHIBITS
Privacy and Legal Precedents


The loss of privacy in medical records has wide-reaching implications for American citizens. These records contain some of the most intimate and sensitive information about a person’s life, including their medical history, conditions, and treatments. The breach of such privacy threatens the trust and confidentiality that are fundamental to the doctor-patient relationship.
Furthermore, the invasion of medical records could have far-reaching consequences for legal cases in the future. If the government can access and use personal medical data without proper oversight, the rights of individuals in other legal contexts may also be at risk. Legal precedents that arise from cases like United States vs. Anand will shape the future boundaries of governmental surveillance and data mining.
Balancing Security and Privacy
In an era where the need for national security often collides with individual privacy, it is essential to strike a balance. AI and advanced technology have undoubtedly revolutionized how information is collected and analyzed, but that collection and analysis should be done within the boundaries of the law and with respect for individual rights. As technology advances, legal frameworks must adapt to ensure privacy rights are not eroded.
The legal landscape surrounding healthcare fraud investigations has been thrust into the spotlight with the ongoing case of United States v. Anand and Dr. Neil Anand. Dr. Anand, a Pennsylvania-based practitioner, finds himself at the center of a debate that raises significant questions about using artificial intelligence (AI) and data analytics in law enforcement.
This legal battle underscores the broader context of how the federal government, specifically the Department of Justice (DOJ) and the Department of Health and Human Services (HHS), has been harnessing the power of artificial intelligence and data mining in its efforts to combat healthcare fraud. While the use of digital solutions and data analytics in the fight against healthcare fraud is not a new concept, recent developments have seen a significant increase in the government’s reliance on these technologies.
The DOJ and HHS have bolstered their ranks with scores of prosecutors and FBI agents who are now focused on utilizing cutting-edge AI and machine learning tools to uncover instances of “fraud, waste, and abuse” within the healthcare industry.
However, this intensified use of technology raises important questions. Are these innovative data analytics approaches unintentionally targeting innocent practices and providers as fraudsters? The answer, according to some experts, is an undeniable “yes.” In their pursuit of justice, these advanced tools have sometimes failed to differentiate between legitimate healthcare providers and those engaged in fraudulent activities.
The government’s rush to employ these new tools has led to unfortunate consequences, with many innocent physicians, clinics, hospitals, and health systems being accused of wrongdoing merely because their practices differ from those of their peers. It’s important to recognize that being a statistical outlier or anomaly should serve as a basis for further inquiry, not as proof of fraud.
The Department of Justice has employed data-mining techniques to target practices and providers whose frequency of certain treatments and procedures exceeds that of the majority of their peers. Examples include the number and strength of opioid prescriptions, high-reimbursement injections, and the frequent ordering of high-complexity labs. Yet perfectly legitimate reasons could explain these deviations, such as patient populations with unique needs or practitioners specializing in specific treatments.
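Purely as an illustrative sketch, and not the Department of Justice’s actual methodology, the peer-comparison logic described above can be reduced to a simple z-score test. The numbers below are invented: they show how a pain-management specialist whose patient population legitimately requires more opioid prescriptions lands far outside the peer distribution and would trip a typical outlier rule.

```python
# Illustrative sketch of peer-comparison outlier detection, not the DOJ's
# actual methodology. All numbers are invented for demonstration.
import statistics

# Hypothetical monthly opioid-prescription counts for a peer group of
# general practitioners, plus one pain-management specialist.
peer_counts = [42, 38, 51, 47, 44, 40, 49, 45, 43, 46]
specialist_count = 180  # plausible for a practice devoted to chronic pain

mean = statistics.mean(peer_counts)
stdev = statistics.stdev(peer_counts)
z = (specialist_count - mean) / stdev

print(f"peer mean={mean:.1f}, stdev={stdev:.1f}, specialist z-score={z:.1f}")
# A z-score this large would trip a typical outlier rule (e.g. |z| > 3),
# even though the deviation is fully explained by the provider's specialty
# and patient mix -- which is why an anomaly can only justify further
# inquiry, never serve as proof of fraud.
```

The point of the example is the article’s own caution: a flag like this identifies a deviation, not the reason for it.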
The involvement of private companies like Qlarant, a Maryland-based technology company, further complicates the landscape. Qlarant has developed algorithms to identify questionable behavior patterns related to controlled substances and opioids. These algorithms have attracted partnerships with state and federal enforcement entities, including the Department of Health and Human Services’ Office of Inspector General, the FBI, and the Drug Enforcement Administration.
The case of United States v. Anand highlights the critical need for a balanced approach to the use of advanced technologies in healthcare fraud detection. As the legal battle continues, it underscores the importance of ensuring that constitutional rights are upheld and that evidence obtained through potentially unconstitutional means is not used in legal proceedings. The outcome of this case has the potential to set important precedents in the use of AI and data analytics in law enforcement, particularly concerning the protection of individual rights and the integrity of healthcare providers.
United States vs. Anand brings to light the intricate world of government AI surveillance, warrantless data mining, and its consequences for privacy, particularly within the context of medical records. Dr. Anand’s case, coupled with the vigorous efforts of his legal counsel, Coley O. Reynolds, showcases the importance of safeguarding individual privacy and the potential legal precedents that will shape the future of privacy rights in the United States. The battle between security and privacy is far from over, and it is a topic that will continue to demand our attention and careful consideration as technology and legal battles evolve.
FOR NOW, YOU ARE WITHIN
THE NORMS
OR SEND
$175.00 OR MORE TO CASH APP:$docnorm
ZELLE 3135103378
So, donate to the “Pharmacist For Healthcare Legal Defense Fund.”