Keeping Humans in the Loop: the Laws and Ethics Driving Human Oversight in AI

Artificial Intelligence (AI) continues to accelerate across every sector. The more pervasive AI becomes, the greater the need to maintain human judgment at its core. This idea is reflected in many new and developing state, national, and international laws and frameworks, and it is a concept every business using AI tools should understand. In an era where automation makes life faster and cheaper, keeping a human-in-the-loop (HITL) isn’t just a best practice; it’s a legal and ethical necessity.

This article explains HITL and outlines some recent laws that require it. Skip to the bottom for business tips.

What Having a “Human-in-the-Loop” Really Means

The idea behind HITL is deceptively simple. HITL is a design and operation practice through which a human (or humans, such as clinicians or other subject-matter experts) has the authority, information, and tools to: (1) monitor AI behavior in as close to real time as possible; (2) assess the quality of AI outputs (are they fair, safe, and accurate?); and (3) correct, override, or stop the AI if needed.

HITL ensures that AI remains an assistant, rather than a replacement for human consideration, reducing risks by catching model failures, bias, hallucinations, or misuse before they can cause harm. 

The practice of HITL matters most in situations where AI blind spots can cause injury, discrimination, or loss of liberty. Some quick examples are:

  • Safety-critical systems: autonomous vehicles, industrial control systems, medical diagnostic tools (e.g., a clinician should not accept AI suggestions for cancer diagnoses or treatment approaches without double-checking those suggestions against their own knowledge and practical experience, in case the AI misinterpreted charts, scans, or other data).

  • High-stakes decisions about people: hiring, credit scoring, loan and insurance underwriting and eligibility, housing eligibility, public benefits eligibility.

  • Decisions that could deny due process: law enforcement and predictive policing tools require HITL to guard against false positive identifications, which are shockingly common.

  • Completely autonomous decision-making by generative AI agents: systems capable of autonomous content generation or communications must have human moderation to detect harmful or misleading content.

  • Regulatory and legal risk areas: systems for which regulators demand auditability and oversight must be explainable, safe, and accurate, including systems that process health data, payroll and tax information, or provide financial advice. 

  • Brand reputation and trust: content moderation, customer support, and marketing systems may create biased, offensive, or copyright-infringing communications if not reviewed prior to publication.

Laws and Policies that Require HITL

While implementing HITL is a good idea from an ethics and brand-trust perspective, HITL requirements are also being codified into consumer protection and AI laws both nationally and internationally.

Here are some important laws AI developers and deployers/users should be aware of:

EU AI Act

The EU AI Act entered into force in August 2024, and its obligations are phasing in over the following years. Article 14 of the Act codifies HITL as a central safeguard for “high-risk” AI systems. It mandates that providers design systems that enable humans to intervene, override, or halt AI decisions, and that “qualified personnel” continuously monitor AI system outcomes. As discussed above, these obligations apply to critical sectors (employment, credit, healthcare, infrastructure, law enforcement) to combat automation bias by ensuring meaningful, not token, human control over AI-driven decisions.

The EU’s General-Purpose AI Code of Practice (July 2025) offers additional guidance for companies deploying foundation and generative models. Effectively, if your product or service uses an AI that’s classified as high-risk under the AI Act, you must implement meaningful human oversight, document it, and be prepared to show regulators in the event of an audit.

America’s AI Action Plan

On July 23, 2025, President Trump unveiled “America’s AI Action Plan” (“AIAP”), along with three executive orders aligning federal efforts around innovation, infrastructure, and international leadership in AI. The plan outlined federal policy direction and rules relating to procurement, but did not create any statutory obligations extending to private companies. If your business is a government contractor or vendor, be aware that the AIAP strongly encourages (and in some instances may require) observable and auditable operations when using AI to satisfy government contracts.

In the absence of a broad federal AI framework governing private companies and consumer protection, some state and local governments have drafted their own oversight laws, and certain regulatory bodies have issued their own rules. Here are just a few:

  • U.S. Equal Credit Opportunity Act (ECOA) prohibits discrimination in credit decisions based on race, religion, national origin, sex, marital status, age, or because a person receives public assistance. Specifically, ECOA requires that creditors treat similarly situated applicants equally, regardless of whether a human or an automated system makes the credit decision. All credit determinations must be explainable, and if a lender denies an applicant credit, an adverse action notice must be provided to the applicant with the reason for the denial (blaming AI is not sufficient).

  • New York City Local Law 144 took effect in 2023 and establishes explicit rules for automated employment decision tools. If employers choose to use these AI tools, the tools must be audited annually for disparate impact based on race, ethnicity, and sex, and the use of the tool must be disclosed to candidates at least 10 business days before it is used. Like ECOA, LL144 requires that employers treat all similarly situated candidates the same regardless of whether employment decisions are made by a human or an AI system, and adverse action notices must be given that reasonably explain the basis for any adverse action.

  • Texas’s Responsible Artificial Intelligence Governance Act (TRAIGA), signed in 2025 and effective January 1, 2026, builds oversight provisions into the ethical use of AI in government systems and critical infrastructure.

  • California’s CPPA regulations (updated October 1, effective January 2026) include a whole host of oversight requirements for automated decision-making technologies (ADMTs), which will be covered in full in a future post.

Together, these laws and regulations signal that the convenience of AI must not outweigh the importance of ethical outcomes.

Tips & Considerations for Businesses Creating or Using AI Tools

HITL laws create new compliance challenges for early-stage and small- and medium-sized businesses (SMBs). Here are some considerations for companies either developing AI tools or using them internally:

  • Determine which laws apply to you: the applicability of laws can depend on where your business is located, and where your users are. For instance, if you develop AI tools for EU citizens, or use AI tools that impact EU citizens, those tools must comply with the EU AI Act, regardless of whether your company is located in the EU. Consult an attorney to determine which laws may impact your company or tech.

  • Consider reputational risks: even if no HITL requirements apply to your company now, not realizing that your AI tools are creating biased, discriminatory or even unsafe outputs could put your company’s reputation at risk. Consider voluntarily implementing HITL to engage in ethical AI practices. 

  • Classify risk: assess whether your AI tool makes high-risk decisions impacting health, finances, civil liberties, or safety; the higher the risk, the more critical HITL becomes.

  • Design for intervention: ensure that user interfaces give reviewers an easy way to route AI determinations for human review and to override those determinations when they may be biased (a minimal sketch of this review-and-override pattern follows this list).

  • Set clear escalation rules: define when humans must act (e.g., for any action that impacts benefits, safety, or freedom).

  • Select and train the right humans in the loop: individuals observing AI outputs should be able to determine when outputs may be demonstrating bias, and should have a good understanding of what the “correct” outcome from an AI tool should be (e.g., a patient’s specialist or a qualified clinician familiar with the patient’s condition and health records should be responsible for reviewing diagnostic AI outputs, rather than an AI engineer).

  • Log decision paths: document the inputs, outputs, and every instance of human intervention so records are available in the event of an audit or a claim of discrimination. Ensure that outcomes (whether human or AI) are consistent where similar subjects or situations are concerned.

  • Measure intervention rates: track how often AI has to be overridden, and use instances where AI made an error to improve models and reduce reliance on human review. 

  • Consider HITL in contracts: when negotiating vendor contracts, require vendor transparency, model documentation (including any bias auditing and testing), and audit cooperation in the event the AI tool is deemed discriminatory. Also include incident response provisions requiring timely remediation of biased models.
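
For businesses building these controls in-house, the review-and-override pattern referenced above can start as a thin software layer around the model call. The Python sketch below is a minimal illustration only, not a compliance-grade implementation; every name in it (the Decision record, the needs_human_review rule, the JSONL decision log) is a hypothetical stand-in for your own model, review queue, and record-keeping systems.

```python
# Minimal human-in-the-loop (HITL) review layer -- an illustrative sketch only.
# All names (Decision, needs_human_review, decision_log.jsonl) are hypothetical;
# adapt them to your own model, review queue, and record-keeping requirements.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class Decision:
    subject_id: str
    inputs: dict
    ai_output: str             # e.g., "approve" / "deny"
    ai_confidence: float
    final_output: Optional[str] = None
    reviewed_by: Optional[str] = None
    overridden: bool = False
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def needs_human_review(d: Decision) -> bool:
    """Escalation rule: a human MUST act on low-confidence or adverse outcomes."""
    return d.ai_confidence < 0.85 or d.ai_output == "deny"

def record(d: Decision, log_path: str = "decision_log.jsonl") -> None:
    """Append the full decision path (inputs, AI output, any override) for audits."""
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(d)) + "\n")

def finalize(d: Decision, human_output: Optional[str] = None,
             reviewer: Optional[str] = None) -> Decision:
    """Apply the human determination if the decision was escalated; otherwise keep the AI's."""
    if needs_human_review(d):
        d.reviewed_by = reviewer
        d.final_output = human_output or d.ai_output
        d.overridden = d.final_output != d.ai_output
    else:
        d.final_output = d.ai_output
    record(d)
    return d

def intervention_rate(log_path: str = "decision_log.jsonl") -> float:
    """Share of logged decisions where a human overrode the AI -- a simple oversight metric."""
    with open(log_path) as f:
        rows = [json.loads(line) for line in f]
    return sum(r["overridden"] for r in rows) / len(rows) if rows else 0.0

# Example: an AI denial is escalated, and the human reviewer overrides it.
d = Decision(subject_id="applicant-123", inputs={"income": 52000},
             ai_output="deny", ai_confidence=0.62)
finalize(d, human_output="approve", reviewer="loan-officer-7")
print(f"Override rate so far: {intervention_rate():.0%}")
```

Even a lightweight log like this captures the inputs, outputs, and overrides that audits, adverse action requirements, and discrimination claims tend to put at issue, and the override rate gives you a simple running metric of how the model performs against human judgment.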

Conclusion

Many times, AI is a helpful efficiency tool for narrowing down options, finding answers, and supporting decisions. However, businesses should remember that AI is not infallible. Implementing HITL practices in AI use not only keeps businesses compliant with existing and evolving laws, but also reduces the risk of reputational damage and negative societal impacts.
