
BY BLUE LOTUS
Summer Lee, a trailblazer in Pennsylvania politics and a rising voice in the U.S. Congress, is once again making history.
The first Black woman from Pennsylvania elected to the U.S. House of Representatives, Lee has introduced the Eliminating Bias in Algorithmic Systems Act of 2024 (H.R. 10092).

This bold legislation seeks to tackle the systemic inequities perpetuated by artificial intelligence (AI) and algorithmic decision-making, particularly in healthcare and other critical sectors.
Lee, born on November 26, 1987, has built her career on advocating for justice and equity. After serving as a member of the Pennsylvania House of Representatives for the 34th district from 2019 to 2022, she made history by winning her congressional race in 2022.

Her primary victory, by less than 1% of the vote, was a testament to her grassroots organizing skills and her vision for a more inclusive future.
With support from groups like the Democratic Socialists of America (DSA), Lee became the first Black woman to represent Southwestern Pennsylvania in the state legislature, where she championed progressive policies aimed at lifting up marginalized communities.
THE BICAMERAL MIND
ENSURING FEDERAL AGENCIES ESTABLISH A DEDICATED CIVIL RIGHTS OFFICE TO MITIGATE RISKS ASSOCIATED WITH ALGORITHMS
Now representing Pennsylvania’s 12th congressional district, Lee is leveraging her position in Congress to address the rising influence of AI systems that, without proper oversight, threaten to exacerbate racial, gender, and socioeconomic disparities.

Her proposed legislation, accompanied by a Senate version introduced by Sen. Ed Markey (D-MA), underscores her commitment to equity in emerging technologies.

It aims to ensure that any federal agency employing AI systems establishes a dedicated civil rights office to mitigate risks associated with algorithmic bias, discrimination, and other harms.

Lee’s journey, from breaking barriers in Pennsylvania to leading the charge for fairness in AI regulation, reflects her deep commitment to dismantling systemic injustices.
With this new legislation, she is positioning herself as a leader in the fight for accountability and equity in the digital age.

Imagine a patient walking into a hospital seeking care. Their condition is serious but treatable, and time is of the essence. Somewhere in the background, a computer algorithm analyzes their data (symptoms, medical history, demographic information) and decides whether they will receive immediate treatment or wait.

Now, imagine that this patient is Black, lives in a lower-income neighborhood, and has had limited access to healthcare throughout their life. The algorithm, designed to be impartial, instead mirrors the biases of the system that created it, placing them at the back of the line.

This is not a hypothetical scenario. It is the reality of artificial intelligence (AI) in healthcare today: biased, flawed algorithms, generating what might better be called artificial stupidity, are increasingly shaping decisions with little oversight, often exacerbating the very inequities they were intended to solve.
But there is hope on the horizon, as policymakers, advocates, and technologists push for a future where AI can be a tool for justice rather than discrimination.
MEMORY
THE RISE OF ARTIFICIAL STUPIDITY (AS): [AI(alg*) = AS]
The promise of AI in healthcare is almost irresistible. These systems can process vast amounts of data, identify patterns invisible to human eyes, and make predictions that save lives. AI has been used to detect early signs of conditions like sepsis, optimize resource allocation, and even diagnose diseases from medical images. Yet, as their adoption grows, so too does evidence of their flaws.

In one now-infamous example from 2019, an algorithm widely used in hospitals to prioritize care was found to be racially biased. Black patients, despite often being sicker than their white counterparts, were less likely to be flagged for high-priority care.
The reason?
CONSCIOUSNESS: THE PASSENGER TRAPPED IN ORGANIC MATERIAL
The algorithm had been trained on healthcare spending data, which reflected decades of systemic inequities. Black patients, historically underserved by the healthcare system, were less likely to have had high-cost interventions, skewing the algorithm’s judgment.
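The mechanism behind this failure, training on spending as a proxy for health need, can be illustrated with a small simulation. This is a hedged sketch with invented synthetic numbers, not the actual 2019 study's data or model: it assumes two groups with identical true severity but an access gap that halves one group's recorded spending, then flags the top spenders the way a cost-trained risk score effectively would.

```python
import random

random.seed(0)

# Synthetic illustration only: both groups have the same distribution of
# true health need, but group B historically receives less care, so its
# recorded spending understates that need (the access gap is an assumption).
def make_patient(group):
    need = random.uniform(0, 1)            # true severity, identical across groups
    access = 1.0 if group == "A" else 0.5  # hypothetical access gap
    spending = need * access               # recorded cost reflects access, not need
    return {"group": group, "need": need, "spending": spending}

patients = [make_patient("A") for _ in range(1000)] + \
           [make_patient("B") for _ in range(1000)]

# A "risk score" trained on cost data effectively ranks patients by past
# spending, so flagging the top decile reproduces the access gap.
patients.sort(key=lambda p: p["spending"], reverse=True)
flagged = patients[:200]  # top 10% flagged for high-priority care

share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
print(f"Group B share of flagged patients: {share_b:.0%}")  # far below 50% parity
```

Even though both groups are equally sick by construction, the cost-ranked flag list is dominated by the group with better historical access, which is exactly the skew the 2019 audit uncovered.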

This is just one case among many. In Arkansas, an algorithm designed to determine in-home care hours for residents with disabilities made drastic cuts that led to hospitalizations and legal battles.
Meanwhile, an AI tool meant to detect sepsis failed to correctly identify the condition in two-thirds of patients while raising false alarms in thousands more. Each failure represents not just a technical glitch but a profound betrayal of trust, particularly for communities already marginalized by the healthcare system.

[Photo caption: Member of the U.S. House of Representatives from Pennsylvania's 12th district]
Addressing these issues is no small task, but recent legislative efforts represent a meaningful step forward. The Eliminating Bias in Algorithmic Systems Act of 2024, introduced by Rep. Summer Lee (D-PA) and Sen. Markey (D-MA), is designed to bring much-needed oversight to AI systems used by federal agencies.

If passed, this legislation would require agencies to establish civil rights offices specifically tasked with identifying and mitigating the risks posed by algorithmic bias. These offices wouldn't just pay lip service to fairness; they would be staffed by experts in bias, discrimination, and AI technology.
Their mandate would include evaluating how algorithms are developed, ensuring diverse training datasets, and continuously monitoring tools for inequities. By institutionalizing accountability, the act aims to prevent the kind of harms that have already occurred in healthcare and other sectors.

But legislation like this doesn’t happen in a vacuum. It reflects a growing recognition that AI is not inherently objective. These systems are built by humans, trained on data that reflects human decisions, and deployed in environments shaped by human institutions. Without rigorous oversight, AI will continue to replicate, and often amplify, the biases of the world it observes.
The challenges begin with the data. AI systems learn by analyzing past information, but in healthcare, that information is deeply flawed. Communities of color and low-income populations often have less access to care, fewer diagnostic tests, and poorer outcomes. These disparities are encoded into the data, creating a vicious cycle when the algorithms trained on it perpetuate the same inequities.
In one chilling example, researchers discovered that an AI tool trained on medical images could identify a patient’s self-reported race with startling accuracy, even when race had no bearing on the diagnosis. This capability, unintended by its creators, raises troubling questions.
How might this information be misused?
Could it lead to worse care for certain groups, even if unintentionally?
THE SHAPE OF A HUMAN BRAIN

And then there’s the issue of transparency. Many AI algorithms approved for use in healthcare do not disclose the diversity of their training datasets or the logic behind their predictions.
This opacity leaves clinicians and patients in the dark, unable to challenge decisions that could have life-or-death consequences.
Despite these challenges, the future of AI in healthcare is not doomed. The very flaws that have been exposed also point to solutions. Legislation like the Eliminating Bias in Algorithmic Systems Act is a critical step in building accountability. But systemic change will require more than regulation; it will demand a cultural shift in how we design, implement, and evaluate these technologies.

(ARTIFICIAL STUPIDITY)
Transparency must become a cornerstone of AI development. Algorithms should not be black boxes: AI creators must explain how they work, what data they rely on, and how they have been tested for fairness.

Diversity in training data is equally essential. An AI system cannot serve all patients equitably if it has been trained on data that excludes or misrepresents marginalized groups.
Above all, there must be a commitment to continuous monitoring. Bias is not a problem to be solved once; it is a dynamic challenge that evolves as technology and society change. Regular audits, together with input from stakeholders in affected communities, can help ensure that AI systems remain tools for progress rather than oppression.
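What such a regular audit might check can be sketched in a few lines. The example below is a minimal, hypothetical fairness audit, not any agency's actual procedure: it compares the rate at which each group is flagged for high-priority care, borrowing the "four-fifths" tolerance used in employment disparate-impact testing as an assumed threshold. The function name and the audit data are invented for illustration.

```python
from collections import defaultdict

def flag_rate_audit(records, min_ratio=0.8):
    """For each group, compute its flag rate and whether that rate is at
    least min_ratio of the most-flagged group's rate (an assumed
    'four-fifths rule' tolerance, not a healthcare legal standard)."""
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for group, was_flagged in records:
        totals[group] += 1
        flagged[group] += was_flagged
    rates = {g: flagged[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rate, rate / best >= min_ratio) for g, rate in rates.items()}

# Hypothetical audit log: (group, flagged_for_high_priority_care)
records = ([("A", 1)] * 30 + [("A", 0)] * 70 +
           [("B", 1)] * 12 + [("B", 0)] * 88)

for group, (rate, ok) in sorted(flag_rate_audit(records).items()):
    print(f"group {group}: flag rate {rate:.0%}, within tolerance: {ok}")
```

Here group B is flagged at 12% versus group A's 30%, a ratio of 0.4, so the audit flags the disparity for human review. Running a check like this on every model refresh, rather than once at deployment, is what "continuous monitoring" means in practice.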
THE OVERALL QUESTION, THEN: IF WE STRUGGLE TO DEFINE THE PASSENGER, HOW CAN WE USE AI AS A PREDICTIVE TOOL?
Artificial intelligence has the potential to revolutionize healthcare, making it more precise, efficient, and accessible. But without careful regulation and a commitment to equity, it risks automating the very injustices it was supposed to fix.
The Eliminating Bias in Algorithmic Systems Act is a reminder that technology alone cannot solve human problems; it must be guided by human values. In this moment of rapid technological change, the question is not whether AI will shape the future of healthcare, but how.

Will it be a future where all patients are treated with dignity and fairness, or one where old biases gain new life in digital form?
The choice is ours to make.

ALL WATCHED OVER BY MACHINES OF LOVING GRACE

for now, you are within
the norms

About the Author Blue Lotus, MD
The Author received an honorable discharge from the U.S. Navy where he utilized regional anesthesia and pain management to treat soldiers injured in combat at Walter Reed Hospital. The Author is passionate about medical research and biotechnological innovation in the fields of 3D printing, tissue engineering and regenerative medicine.