UptownBudget
Startup

Keep Humans At The Center Of AI Decision Making

By admin | October 24, 2023 | 5 Mins Read

Beena Ammanath – Global Deloitte AI Institute Leader, Founder of Humans For AI and Author of “Trustworthy AI” and “Zero Latency Leadership”

In this era of humans working with machines, being an effective leader with AI takes a range of skills and activities. Throughout this series, I’m providing an incisive roadmap for leadership in the age of AI, and an important part of leading effectively today is making sure your people are at the center of decision-making.

In popular discussions of artificial intelligence, there can be a sense that the machine stands alone, distinct from human intelligence and capable of functioning independently, indefinitely. That perception has led to consternation about the mass elimination of jobs and to the unfounded fear that the future of business lies in replacing humans with machines. This view is wrongheaded, and in fact, holding it may actually limit the potential value of, and trust in, AI applications.

The reality is that behind every AI model and use case is a human workforce. Humans do the hard, often-unsung work of creating and assembling the data and enabling technologies, using the model to drive business outcomes, and establishing governance and risk mitigation to support compliance. Put another way, without humans, there can be no AI.

Yet, while the human element is key to unlocking valuable, trustworthy AI, it is not always given the attention and investment it is due. The imperative today is to orient AI programs around humans working with AI, not simply alongside it, because that orientation has a direct impact on AI ethics and business value.

Two areas of AI development and use illustrate this imperative: how training data is curated and how AI outputs are validated.

The Risks In Data Annotation

AI models are largely trained on annotated data. Annotating text, images, sentiments and other data at scale is a time-consuming, highly manual effort in which human workers follow instructions from engineers to label data in a particular way, according to whatever a given model needs. Matters of trust and ethics grow out of this. Are the human annotators injecting bias into the training set by virtue of their personal biases? For example, if an annotator is color-blind and asked to annotate red apples in a set of images, they might label images incorrectly, leading to a model that is less capable of spotting red apples in the real world.
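One practical way to surface this kind of annotator bias is to have two annotators label the same sample and measure their agreement. The sketch below uses Cohen's kappa; the apple labels and the choice of metric are illustrative assumptions, not something the article prescribes.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Inter-annotator agreement: 1.0 = perfect, ~0 = chance-level."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if both annotators labeled at random
    # according to their own label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical batch: annotator B systematically misses "red"
# (e.g., a color-blind annotator), so agreement drops.
a = ["red", "red", "green", "red", "green", "red"]
b = ["green", "red", "green", "green", "green", "red"]
kappa = cohens_kappa(a, b)
print(f"kappa = {kappa:.2f}")  # 0.40: low agreement flags the batch for review
```

A low kappa does not say which annotator is wrong, only that the batch deserves human scrutiny before it trains a model.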

Separately, what are the ethical implications for the humans engaged in this work? While red apples are innocuous, some data might contain disturbing content. If a model is intended to assess vehicle damage based on accident photos, human annotators might be asked to scrutinize and label images that contain things better left unseen. Here, organizations have an obligation to weigh the benefits of the model against the repercussions for the human workforce. Whether it is red apples or crashed cars, the insight is to keep humans at the center of decision-making and account for risks to the employee, the enterprise, the model and the end user.

The Importance Of Output Validation

With machine learning and other, more traditional types of AI, model management requires ongoing attention to outputs in order to detect and correct for issues like model drift and brittleness. With the emergence of generative AI, validating outputs becomes even more critical for risk mitigation and governance.
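Drift monitoring of the kind described above is often implemented by comparing a model's current output distribution against a training-time baseline. The sketch below uses the Population Stability Index (PSI); the bin counts and the 0.25 alert threshold are conventional rules of thumb assumed here for illustration, not a method named in the article.

```python
import math

def psi(expected_counts, actual_counts):
    """Population Stability Index between two binned distributions.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 act."""
    e_total = sum(expected_counts)
    a_total = sum(actual_counts)
    score = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, 1e-6)  # guard against empty bins
        a_pct = max(a / a_total, 1e-6)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

baseline = [30, 40, 30]  # score histogram captured at training time
today = [10, 30, 60]     # today's production score histogram
drifted = psi(baseline, today) > 0.25  # True here: escalate to a human
```

A PSI breach does not fix anything by itself; it is the trigger that puts a human back in the loop to diagnose the cause.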

Generative AI, such as large language models (LLMs), has rightly created excitement and urgency around how this new type of AI can be used across myriad use cases, both complementing the existing AI ecosystem with upstream deployments and enabling downstream use cases, such as natural language chatbots and assistive summaries of documents and datasets. Generative AI creates data that is (usually) as coherent and accurate as real-world data. If a prompt for an LLM asks for a review of supply chain constraints over the past month, a model with access to that data could output a tight summary of constraints, suspected causes and remediation steps. That summary provides insight that the user relies on to make decisions, such as changing a supplier that regularly encountered fulfillment issues.

But what if the summary is incorrect and the LLM has (without any malicious intent) cited a constraint that does not exist and, even worse, invents a rationalization for why that “hallucination” is valid? The user is left to make decisions based on false information, which has cascading business implications. This exemplifies why output validation is necessary for generative AI deployments.

To be sure, not all inaccuracies carry the same level of risk and consequence. If an organization uses generative AI to write a marketing email, it might have a higher tolerance for error, as faults or inaccuracies are likely to be fairly easy to identify and the outcomes are lower stakes for the enterprise. When it comes to applications that concern mission-critical business decisions, however, the tolerance for error is low. This makes a “human in the loop” who validates model outputs more important than ever. Generative AI hallucination is a technical problem, but it requires a human solution.

Deloitte, where I’m the Global Head of the AI Institute, calls this the “Age of With,” an era characterized by humans working with machines to accomplish things neither could do independently. The opportunity is limited only by the imagination and the degree to which risks can be mitigated. Recognizing and prioritizing the human element throughout the AI lifecycle can help organizations build AI programs they can trust.

Forbes Business Council is the foremost growth and networking organization for business owners and leaders. Do I qualify?
