
“Ours may become the first civilization destroyed, not by the power of our enemies, but by the ignorance of our teachers and the dangerous nonsense they are teaching our children. In an age of artificial intelligence, they are creating artificial stupidity.”
…Thomas Sowell
NORMAN J CLEMENT RPH., DDS, NORMAN L. CLEMENT PHARM-TECH, MALACHI F. MACKANDAL PHARMD, BELINDA BROWN-PARKER, IN THE SPIRIT OF JOSEPH SOLVO ESQ., INC., SPIRIT OF REV. IN THE SPIRIT OF WALTER R. CLEMENT BS., MS, MBA. HARVEY JENKINS, MD, PH.D., IN THE SPIRIT OF C.T. VIVIAN, JELANI ZIMBABWE CLEMENT, BS., M.B.A., IN THE SPIRIT OF THE HON. PATRICE LUMUMBA, IN THE SPIRIT OF ERLIN CLEMENT SR., EVELYN J. CLEMENT, WALTER F. WRENN III., MD., JULIE KILLINGSWORTH, RENEE BLARE, RPH, DR. TERENCE SASAKI, MD LESLY POMPY MD., CHRISTOPHER RUSSO, MD., NANCY SEEFELDT, WILLIE GUINYARD BS., JOSEPH WEBSTER MD., MBA, BEVERLY C. PRINCE MD., FACS., NEIL ARNAND, MD., RICHARD KAUL, MD., IN THE SPIRIT OF LEROY BAYLOR, JAY K. JOSHI MD., MBA, AISHA GARDNER, ADRIENNE EDMUNDSON, ESTER HYATT PH.D., WALTER L. SMITH BS., IN THE SPIRIT OF BRAHM FISHER ESQ., MICHELE ALEXANDER MD., CUDJOE WILDING BS, MARTIN NJOKU, BS., RPH., IN THE SPIRIT OF DEBRA LYNN SHEPHERD, BERES E. MUSCHETT, STRATEGIC ADVISORS

This source critically examines the concept of “Artificial Stupidity,” arguing that modern algorithmic risk models, particularly those used by the DEA under figures like Hank Asher, replicate historical injustices by dehumanizing individuals and prioritizing commercial or institutional interests over human welfare.

It draws a stark parallel between the 18th-century Zong Massacre, where enslaved people were deemed “lost cargo” for insurance claims, and contemporary practices where populations are labeled “high risk” by algorithms, leading to a denial of essential services.

The text posits that Asher’s data analytics, rebranded as “network analysis,” often constitute a form of confirmation bias, leading to the wrongful designation of innocent businesses and individuals as illicit, and causing significant harm.

Ultimately, the source, supported by FOIA documents from the Neil Anand v. United States case, suggests that these AI-driven strategies prioritize optics and brute force over precision and due process, leading to “collateral damage” built on flawed or cherry-picked intelligence.
The False Pseudoscience of Drug Enforcement Agency Artificial Intelligence Expert Hank Asher and Moneyballing Anne Milgram Has Now Been Consumed By Hume’s Fork:
“…Why Bother With Facts When The Drug Enforcement Agency Can Just Swing a Sledgehammer…”

HANK ASHER: WHEN ENTITLEMENT AND INCOMPETENCE LEAD TO DISASTER
Ah, Hank Asher—the self-proclaimed maestro of financial and medical disruption, armed with his 8th-grade education, “sophisticated” artificial intelligence tools, and data strategies.
We’re supposed to believe that this man, with his data analytics, forensic accounting, and undercover operations, is single-handedly saving the world from evil regimes and criminal enterprises.
However, let’s not kid ourselves: what Asher brings to the table is an old-fashioned sledgehammer in a world that needs scalpels.
THE ANAND-CLEMENT RULE OF ARTIFICIAL STUPIDITY (THE AC RULE) [ AI (alg*)= AS ] AND INTIMATE KNOWLEDGE OF DESIGN: ABUSE IN LAW ENFORCEMENT
These sources collectively examine Gregson v. Gilbert (1783), an infamous British legal case stemming from the Zong massacre, where over a hundred enslaved Africans were thrown overboard.

The shipowners sought to claim insurance for these deaths as “lost cargo” due to “perils of the sea,” framing the mass killing as a commercial necessity.
The sources highlight how British courts of the era consistently treated enslaved individuals as insurable commodities, often prioritizing commercial interests and even accepting hypothetical threats like “insurrection” as justifications for violence to indemnify owners.

Furthermore, this presentation draws a stark parallel between this historical commodification of human lives and modern algorithmic risk modeling, arguing that contemporary systems can perpetuate systemic bias by labeling populations as “high risk,” thereby denying essential services and replicating a similarly dehumanizing logic under the guise of objectivity.

THE ZONG LOGIC(ZL)
II. The Zong Massacre: Event Overview
- Date and Location: November-December 1781, Atlantic Ocean, southwest of Jamaica.
- Vessel: British slave ship Zong, owned by a Liverpool syndicate led by William Gregson and insured by Thomas Gilbert’s syndicate.
- Voyage Purpose: Transport and sale of enslaved African persons as commercial cargo from Ghana to Jamaica.
- Overcrowding & Inexperienced Command: The ship embarked with 442 enslaved persons, “nearly double its safe carrying capacity.” The captain, Luke Collingwood, was a former surgeon with no prior command experience. These conditions were “created to maximize profits despite the foreseeable risks to human life and maritime safety.”
- Navigational Errors: Due to “gross navigational errors, including the misidentification of Jamaica for another Caribbean island,” the voyage ran “well beyond the norm,” leading to severe water scarcity and disease.
- The Killings: Between November 29 and early December 1781, the crew intentionally threw 122 enslaved Africans overboard on three separate occasions. An additional 10 captives committed suicide by jumping overboard, for a total of 132 deaths. The crew justified these acts as necessary to conserve water and prevent a “never realized” slave insurrection.
- Arrival: The Zong arrived in Black River, Jamaica, on December 22, 1781, with 208 surviving enslaved persons.
III. Legal Proceedings: Gregson v. Gilbert (1783)
The case Gregson v. Gilbert was a maritime insurance dispute arising from the Zong Massacre, not a criminal murder trial.
A. Core Legal Question
- “Whether the forced jettisoning of enslaved people due to alleged necessity was grounds for an insurance claim.”
- The fundamental issue was whether the jettisoned individuals constituted “lost cargo” under marine insurance law, specifically the principle of “general average.”
B. Plaintiffs’ (Ship Owners’) Arguments (Gregson Syndicate)
- Theory of Necessity: The Gregson syndicate argued that the deaths were “the result of a peril of the sea” and that the jettisoning was “justified by the necessity of preserving the ship, crew, and remaining human cargo.” This invoked the “maritime law principle of general average, which permits sacrifice of cargo to save a vessel in jeopardy.”
- Hypothetical Insurrection: Solicitor General John Lee, arguing for the shipowners, acknowledged that “no actual rebellion aboard the ship” had occurred. However, he contended that the killings were “justified by the fear that an insurrection might have occurred had the crew not taken preemptive action.” He stated that throwing the slaves overboard avoided “the greater evil, for otherwise, ‘in a few hours, there must have been such an Insurrection all the blacks would have killed all the Whites.'”
- Claim Amount: They sought compensation of £30 per individual, “reflecting market value” based on the average selling price of surviving enslaved people.
C. Defendants’ (Insurers’) Arguments (Gilbert Syndicate)
- Absence of Actual Threat: Mr. Davenport, counsel for the underwriters, “disputed this reasoning, pointing out that no mutiny or active threat existed at the time of the killings.”
- Negligence and Incompetence: The insurers “denied liability, asserting that the deaths were due to the negligence and incompetence of the captain and crew—not any covered peril.”
- Lack of Emergency: They “contended that sufficient rainfall occurred prior to the final killings, thereby eliminating the alleged emergency and rendering the killings commercially motivated fraud.”
D. Trial History and Mansfield’s Rulings
- Initial Jury Trial (March 1783): The jury “found in favor of the plaintiffs and awarded £3,660 for the loss of 122 enslaved individuals at £30 per head.” The jury “accepted the theory of necessity, disregarding the humanity of the victims, treating them instead as chattel property.”
- Appellate Proceedings (Court of King’s Bench): Lord Chief Justice Mansfield presided over the appeal.
- Initial Stance: Mansfield famously remarked that “The Case of Slaves was the same as if Horses had been thrown overboard,” indicating his initial acceptance of the legal treatment of enslaved people as property. He ruled that the jury had been correct to “discount the humanity of the slaves.”
- Order for New Trial: Mansfield ultimately ordered a new trial upon learning of new evidence that undermined the necessity argument:
- “Rain had fallen before the final set of killings, undermining the alleged necessity.”
- “No actual insurrection had occurred.”
- The ship had arrived “in perfect safety,” contrary to the narrative of peril.
- Outcome: “No retrial was held; the owners abandoned their claim,” meaning the owners never collected on the insurance policy.
E. Legal Interpretation of Enslaved Persons
- British admiralty and insurance law “treated enslaved persons as insurable cargo, and their deaths were analyzed not in terms of criminal culpability, but in terms of commercial loss.”
- The “Legal Doctrine of Insurable Interest, as developed in British maritime law, permitted slave traders to treat enslaved individuals as cargo subject to indemnification for loss through perils such as mutiny, drowning, or intentional destruction.”
- “Courts did not assess criminal culpability for the deaths of enslaved persons, but rather applied principles of commercial liability and general average, insulating mass violence from moral or legal scrutiny.”
- The Gregson v. Gilbert case is “unique in that it invoked the legal concept of insurrection not based on actual events, but as a hypothetical or preventive justification for mass killing in the context of commercial insurance.”
“…In an age of artificial intelligence, they are creating artificial stupidity”
Thomas Sowell
Confirmation bias, rebranded as “network analysis”
First up, the much-touted data analytics. Asher’s fans would have you believe that he’s some financial Sherlock Holmes, using advanced algorithms to untangle the world’s most complex financial webs.
But what is this really?
Pattern recognition?
Sure, if you’re willing to see patterns where none exist. Let’s face it: when you’re looking at thousands of transactions, it’s easy to draw connections between just about anything. It’s called confirmation bias, but Asher seems to have rebranded it as “network analysis.”
Let’s be clear—connecting the dots isn’t the same as proving guilt.
But why bother with such nuances when you can label a foreign bank as a “money laundering concern” and watch it crumble?
Who needs evidence when you have the power of suggestion?
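The multiple-comparisons trap behind this kind of “network analysis” can be sketched in a few lines of Python. This is a hypothetical illustration under stated assumptions, not Asher’s actual software: fifty accounts that transact completely independently of one another will still produce pairs that look “linked” once an analyst scans every combination against a similarity threshold.

```python
import random

# Hypothetical sketch of naive transaction "network analysis":
# scan every pair of accounts for overlapping activity days and
# flag pairs whose overlap clears a threshold. All accounts are
# independent by construction, so every flag is a false positive.
random.seed(0)

N_ACCOUNTS = 50
N_DAYS = 200
THRESHOLD = 0.20  # Jaccard similarity an analyst might call "linked"

# Each account transacts on a random ~20% of days, independently.
activity = {
    a: {d for d in range(N_DAYS) if random.random() < 0.20}
    for a in range(N_ACCOUNTS)
}

flagged = []
for a in range(N_ACCOUNTS):
    for b in range(a + 1, N_ACCOUNTS):
        overlap = len(activity[a] & activity[b])
        union = len(activity[a] | activity[b])
        if union and overlap / union >= THRESHOLD:
            flagged.append((a, b))

# With 1,225 pairwise comparisons, chance alone clears the bar
# for some pairs: spurious "networks" conjured from pure noise.
print(f"{len(flagged)} account pairs flagged out of "
      f"{N_ACCOUNTS * (N_ACCOUNTS - 1) // 2} comparisons")
```

The point of the sketch is that the flag count is never zero even though no real relationship exists anywhere in the data; that is confirmation bias operating at industrial scale.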

Request concerning pill mill prosecutions
The Freedom of Information Act (FOIA) is a powerful tool that has, time and again, unveiled the inner workings of government strategies that were otherwise shielded from public scrutiny.
In the case of Neil Anand v. United States, the FOIA document release has provided a rare glimpse into the machinations behind Hank Asher’s much-lauded artificial intelligence data analytics techniques.
However, rather than reinforcing the narrative of success, these documents reveal a different story—one that underscores the widespread fallacy of Asher’s approach.
The document and AI discussion below contain a Freedom of Information Act (FOIA) response from the U.S. Department of Justice to a request concerning “pill mill” prosecutions and the opioid crisis, alongside various email communications and alerts related to healthcare fraud.
The FOIA response details the release of some records while withholding others due to specific exemptions, such as those protecting privacy or law enforcement proceedings.

According to the documents released under FOIA, much of the so-called success attributed to Asher’s strategies is built on shaky ground. The Neil Anand v. United States case highlighted several instances where the metrics used to measure the success of medical disruptions by the DEA were, at best, misleading.
The FOIA documents also shed light on the unintended consequences of Asher’s aggressive tactics. While the public narrative focuses on the disruption of criminal enterprises, the reality is that many innocent businesses and individuals were caught in the crossfire.
The documents reveal instances where financial or medical institutions, wrongly designated as drug-dealing or money-laundering concerns, faced devastating consequences.
These institutions, often in poor inner cities of the United States, were left with little recourse, as their reputations were tarnished and their access to the global financial system was severely restricted.
Moreover, the case of Neil Anand v. United States highlighted the lack of due process afforded to these entities.
The FOIA documents indicate that decisions to designate medical institutions were often made based on incomplete or flawed intelligence, with little opportunity for those affected to challenge the findings.
This has led to a growing chorus of criticism that Asher’s strategies are less about precision and more about brute force, with little regard for the collateral damage left in their wake.
Perhaps most damning of all, the FOIA documents challenge the very premise that Asher’s strategies are grounded in solid intelligence.

Contrary to the public portrayal of meticulously planned operations, the documents reveal a chaotic process where intelligence was often cherry-picked to fit pre-existing narratives. In the rush to achieve quick wins, critical pieces of information were either overlooked or misinterpreted, leading to actions that prioritized optics over effectiveness.
The Neil Anand v. United States case also brought to light the limitations of the intelligence apparatus used to support Asher’s operations. The FOIA documents suggest that much of the intelligence was sourced from dubious informants or based on outdated information, raising serious questions about the reliability of the data underpinning these high-stakes decisions.
This casts a long shadow over the legitimacy of Asher’s entire approach, suggesting that his so-called intelligence-led operations were, in reality, built on a foundation of sand.


ALL WATCHED OVER BY MACHINES OF LOVING GRACE

FOR NOW, YOU ARE WITHIN
THE NORMS

Briefing Document: Palantir and the Rise of Digital Healthcare Apartheid
Date: JULY 4, 2025
Subject: Analysis of Palantir’s Role in Algorithmic Healthcare Exclusion and Peter Thiel’s Underlying Philosophy
Key Themes:
This briefing document synthesizes the provided sources to outline the central argument: that Palantir Technologies, under the influence of its founder Peter Thiel’s ideology, is instrumental in establishing a “digital healthcare apartheid” in the United States.
This new form of discrimination, cloaked in the language of efficiency and data-driven prediction, mirrors historical injustices like those in South Africa.
Briefing Document: Artificial Stupidity, Algorithmic Bias, and the Dehumanization of Risk
I. Executive Summary
This briefing document examines the critical themes of “Artificial Stupidity,” algorithmic bias, and the historical and contemporary dehumanization of risk, drawing parallels between the infamous Zong Massacre of 1781 and modern applications of artificial intelligence (AI) in law enforcement and healthcare.

The sources highlight how commercial interests and flawed methodologies, rather than objective facts, can drive systemic abuse, leading to devastating consequences for individuals and communities. Key figures like Hank Asher and the Drug Enforcement Agency (DEA)’s use of “sophisticated” AI are critiqued, revealing a pattern of “sledgehammer” approaches rather than precise interventions.
The core argument posits that AI, when designed with inherent biases or used without proper oversight and accountability, perpetuates and amplifies “artificial stupidity” – a concept rooted in the Anand-Clement Rule of Artificial Stupidity ([AI (alg*)= AS]).
II. The Zong Massacre: A Historical Precedent for Dehumanization and Commercial Logic
A. Event Overview (1781)
- Date and Location: November-December 1781, Atlantic Ocean, southwest of Jamaica.
- Vessel: British slave ship Zong, owned by William Gregson and insured by Thomas Gilbert.
- Purpose: Transport and sale of enslaved African persons as “commercial cargo.”
- Conditions: The ship was severely overcrowded, embarking 442 enslaved persons, “nearly double its safe carrying capacity,” under a captain who was a former surgeon with no prior command experience. These conditions were “created to maximize profits despite the foreseeable risks to human life and maritime safety.”
- The Killings: Due to “gross navigational errors” and resulting water scarcity and disease, the crew intentionally threw 122 enslaved Africans overboard. An additional 10 committed suicide, totaling 132 deaths. The justification was to conserve water and prevent a “never realized” slave insurrection.
B. Legal Proceedings: Gregson v. Gilbert (1783)
- Nature of Case: A maritime insurance dispute, “not a criminal murder trial.” The core question was “Whether the forced jettisoning of enslaved people due to alleged necessity was grounds for an insurance claim.”
- Ship Owners’ Arguments (Plaintiffs):
- Theory of Necessity: Argued deaths were “the result of a peril of the sea” and justified by the “necessity of preserving the ship, crew, and remaining human cargo,” invoking the “maritime law principle of general average.”
- Hypothetical Insurrection: Solicitor General John Lee “acknowledged that ‘no actual rebellion aboard the ship’ had occurred,” but contended the killings were “justified by the fear that an insurrection might have occurred.”
- Claim Amount: Sought £30 per individual, “reflecting market value.”
- Insurers’ Arguments (Defendants):
- Absence of Actual Threat: Disputed the necessity, stating “no mutiny or active threat existed at the time of the killings.”
- Negligence and Incompetence: Asserted deaths were “due to the negligence and incompetence of the captain and crew—not any covered peril.”
- Lack of Emergency: Contended “sufficient rainfall occurred prior to the final killings, thereby eliminating the alleged emergency and rendering the killings commercially motivated fraud.”
- Lord Mansfield’s Rulings:
- Initially, Mansfield famously remarked that “The Case of Slaves was the same as if Horses had been thrown overboard,” accepting their legal treatment as property. He initially “ruled that the jury had been correct to ‘discount the humanity of the slaves.’”
- However, he later ordered a new trial upon learning “Rain had fallen before the final set of killings,” “No actual insurrection had occurred,” and the ship arrived “in perfect safety.”
- Outcome: “No retrial was held; the owners abandoned their claim.”
- Legal Interpretation of Enslaved Persons: British admiralty and insurance law “treated enslaved persons as insurable cargo,” and their deaths were analyzed in terms of “commercial loss,” not “criminal culpability.” The case is “unique in that it invoked the legal concept of insurrection not based on actual events, but as a hypothetical or preventive justification for mass killing in the context of commercial insurance.”
C. Parallel to Modern Algorithmic Risk
The sources draw a “stark parallel between this historical commodification of human lives and modern algorithmic risk modeling,” arguing that contemporary systems can perpetuate systemic bias by labeling populations as “high risk,” thereby denying essential services and replicating a similarly dehumanizing logic under the guise of objectivity.
III. The False Pseudoscience of DEA Artificial Intelligence: Hank Asher and the Opioid Crisis
A. Hank Asher and the “Sledgehammer” Approach
- Critique: Hank Asher, described as having an “8th-grade education,” is presented as a “self-proclaimed maestro of financial and medical disruption” whose “sophisticated” AI tools are, in reality, “an old-fashioned sledgehammer in a world that needs scalpels.”
- “Artificial Stupidity”: The phrase “In an age of artificial intelligence, they are creating artificial stupidity,” attributed to Thomas Sowell, is used to frame the outcomes of Asher’s methods.
- Confirmation Bias as “Network Analysis”: Asher’s “data analytics” are dismissed as “confirmation bias, rebranded as ‘network analysis.'” The criticism states, “when you’re looking at thousands of transactions, it’s easy to draw connections between just about anything… connecting the dots isn’t the same as proving guilt.” The question is posed: “But why bother with such nuances when you can label a foreign bank as a ‘money laundering concern’ and watch it crumble?”
B. FOIA Documents: Exposing the Fallacy of Asher’s Approach
- Neil Anand v. United States: This case and its related Freedom of Information Act (FOIA) document release offer a “rare glimpse into the machinations behind Hank Asher’s much-lauded artificial intelligence data analytics techniques.”
- Misleading Metrics: The FOIA documents reveal that “much of the so-called success attributed to Asher’s strategies is built on shaky ground,” highlighting “several instances where the metrics used to measure the success of medical disruptions by the DEA were, at best, misleading.”
- Unintended Consequences and Collateral Damage:
- “Many innocent businesses and individuals were caught in the crossfire.”
- Financial and medical institutions, “wrongly designated as drug-dealing or money-laundering concerns, faced devastating consequences.” These institutions, “often in poor inner cities of the United States, were left with little recourse.”
- Lack of Due Process: Decisions to designate medical institutions were “often made based on incomplete or flawed intelligence, with little opportunity for those affected to challenge the findings.” This suggests a “brute force” approach with “little regard for the collateral damage.”
- Flawed Intelligence Gathering:
- Intelligence was “often cherry-picked to fit pre-existing narratives.”
- “Critical pieces of information were either overlooked or misinterpreted.”
- Much intelligence was “sourced from dubious informants or based on outdated information, raising serious questions about the reliability of the data underpinning these high-stakes decisions.” This casts a “long shadow over the legitimacy of Asher’s entire approach, suggesting that his so-called intelligence-led operations were, in reality, built on a foundation of sand.”
IV. The Anand-Clement Rule of Artificial Stupidity (The AC Rule) [ AI (alg*)= AS ] and Intimate Knowledge of Design: Abuse in Law Enforcement
- This rule encapsulates the central thesis: Artificial Intelligence, particularly when its algorithms (alg*) are flawed or biased in their design and application, leads directly to Artificial Stupidity (AS).
- The critique emphasizes the “intimate knowledge of design” required to understand how abuse is perpetuated within law enforcement systems utilizing AI.
- The systemic issues highlighted by the Zong Massacre and the DEA’s AI practices demonstrate how a design focused on commercial gain or a pre-determined narrative can result in dehumanizing outcomes and the denial of due process, irrespective of the technological sophistication involved.

V. Conclusion
The sources collectively present a powerful critique of “artificial stupidity” stemming from biased or poorly applied AI, drawing a direct lineage from historical injustices like the Zong Massacre to contemporary algorithmic abuses.
Both historical and modern instances demonstrate a prioritization of commercial interests or pre-conceived narratives over factual evidence, due process, and human well-being.
The “sledgehammer” approach of individuals like Hank Asher, coupled with the systemic issues within agencies like the DEA, underscores the urgent need for critical examination of AI design, implementation, and its profound societal impact, particularly concerning the potential for dehumanization and the perpetuation of systemic bias under the guise of objectivity.