
Is AI Being Misused by Big Corporations?


Table of Contents


  • What Are Some Examples of AI Being Misused?
  • How Might AI Be Misused in the Future?
  • Do Big Tech Companies Misuse AI Due to Lack of Regulation?
  • Do Profits Incentivize AI Misuse?
  • How Can Corporations Use AI More Responsibly?
  • What Can Government And Citizens Do About Corporate AI Misuse?


Artificial Intelligence (AI) is having a huge impact across all major industries today. AI algorithms can perform many tasks faster, and in some cases better, than humans.


However, as the technology becomes more widespread, there are growing concerns that big tech companies and corporations may be misusing AI in the pursuit of higher engagement, data harvesting, and profits. But what does misuse of AI mean, specifically? And why and how exactly are global corporations misusing advanced algorithms? This comprehensive blog examines the reality of irresponsible use of artificial intelligence by big businesses, the implications of such exploitation, and what can be done to encourage more accountable AI.


What Are Some Examples of AI Being Misused?

Some major ways that corporate giants are already misusing AI include:


Social Media News Feeds

  • Facebook, Instagram, YouTube and TikTok use complex neural networks to study user activity and determine what kinds of posts a user is most likely to engage with or react to emotionally. Based on these highly personalized insights, their AI algorithms promote posts and videos likely to keep users glued to the platform for longer (see the sketch after this list).
  • "Research suggests tailored social media feeds are behind rising rates of teen depression and anxiety," says Anil Thomas, a senior researcher studying the mental health effects of digital technology at UC Berkeley.
  • These algorithms are optimized purely for user retention rather than user well-being. Because the apps exploit such a detailed understanding of human psychology, this can be seen as a misuse of AI.
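As an illustration, here is a minimal, hypothetical sketch of what engagement-only feed ranking looks like. This is not any platform's actual code; the feature names and weights are invented purely to show the incentive structure described above.

```python
# Minimal illustrative sketch of engagement-only feed ranking.
# NOT any platform's actual code; features and weights are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_click_prob: float   # model's estimate the user taps the post
    predicted_watch_secs: float   # model's estimate of time spent on it
    predicted_outrage: float      # emotional-reaction signal, 0..1

def engagement_score(post: Post) -> float:
    # The objective rewards attention and emotional reaction only.
    # Nothing in this objective measures or protects user well-being.
    return (2.0 * post.predicted_click_prob
            + 0.05 * post.predicted_watch_secs
            + 1.5 * post.predicted_outrage)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Show the most retention-maximizing content first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("calm_news", 0.20, 30.0, 0.05),
    Post("outrage_clip", 0.45, 90.0, 0.90),
])
print([p.post_id for p in feed])  # outrage_clip ranks first
```

The point of the sketch is that nothing needs to go "wrong" for harm to occur: as long as the objective function only counts attention, emotionally charged content wins by design.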


Hiring Algorithms

  • Large global corporations are moving from traditional resume screening to AI algorithms that assess job applications. However, multiple investigations have found such tools to be biased, and legislation forbidding discrimination has proved inadequate for governing hiring algorithms.
  • For instance, "One analysis showed Amazon's CV screening tool was unfavorable towards applications with wording indicating the applicant was female," reports Technology Quarterly from The Economist. Such algorithm-driven discrimination could be considered negligence, or even deliberate exclusion of candidates from minority groups (see the audit sketch after this list).
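By way of illustration, here is a sketch of one simple fairness check an independent audit might run: the "four-fifths rule" comparing selection rates across applicant groups. The outcome data below is entirely hypothetical.

```python
# Sketch of a basic disparate-impact check an external audit might run.
# The applicant outcome records below are entirely hypothetical.

def selection_rate(outcomes: list[bool]) -> float:
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    # Ratio of the lower selection rate to the higher one.
    # Values below 0.8 are a common (US EEOC) red flag for adverse impact.
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# True = the screening model advanced the applicant.
female_outcomes = [True, False, False, False, True, False]
male_outcomes   = [True, True, False, True, True, False]

ratio = disparate_impact_ratio(female_outcomes, male_outcomes)
print(f"disparate impact ratio: {ratio:.2f}")  # ~0.50, well below the 0.8 flag
```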

"Artificial intelligence is not inherently malicious or benevolent. In AI assistant technologies as well, everything depends on the data and models we choose to feed such algorithms."

Facial Recognition

  • Retail giants use facial recognition technology in their stores to identify suspected shoplifters. Video feeds are matched in real time against databases containing images of past offenders or banned customers. However, there have been multiple cases of misidentification, especially among ethnic minorities (see the matching sketch after this list).
  • Wrongful interrogation or even prosecution of innocent individuals due to incorrect face matches raises worrying questions about consent and accuracy standards for corporate facial matching. "There is currently little regulation or 'safety testing' mandating minimum accuracy thresholds for such biometric platforms run by private companies on their premises," notes a technology ethics paper in the Journal of Information Policy.
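Below is a simplified sketch of how watchlist matching of this kind typically works: a face embedding extracted from live video is compared against enrolled embeddings, and anything above a similarity threshold is flagged. The vectors, names and threshold here are hypothetical; production systems use learned embeddings with hundreds of dimensions.

```python
# Sketch of watchlist face matching: flag live embeddings whose similarity
# to an enrolled embedding exceeds a threshold. All data here is hypothetical.

import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

watchlist = {
    "banned_customer_17": [0.91, 0.10, 0.40],
    "banned_customer_42": [0.12, 0.88, 0.47],
}

def match_face(live_embedding: list[float], threshold: float = 0.90):
    # A threshold set too low produces false matches, and error rates are
    # known to be uneven across demographic groups, so innocent people can
    # be flagged. Little regulation forces vendors to publish these rates.
    best_id, best_sim = None, threshold
    for person_id, enrolled in watchlist.items():
        sim = cosine_similarity(live_embedding, enrolled)
        if sim >= best_sim:
            best_id, best_sim = person_id, sim
    return best_id

print(match_face([0.90, 0.12, 0.41]))  # matches banned_customer_17
```

Everything controversial lives in that one `threshold` parameter and the quality of the enrolled data, which is exactly why minimum accuracy standards matter.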

How Might AI Be Misused in the Future?

Like many significant innovations, artificial intelligence holds potential for both empowerment and exploitation depending on how it is governed. Some ways AI could be misused more systemically and at scale in the coming years, across sectors like jobs, transportation, healthcare and defense, include:


  • Job Loss: As machine learning algorithms become more capable, robots and automated processes are taking over roles ranging from truckers and telemarketers to paralegals and radiologists. Especially with the pandemic accelerating corporate interest in automation, AI could worsen unemployment and inequality globally unless enough retraining opportunities exist.
  • Built-In Bias: Since AI algorithms are designed by humans, they inherit the same problems with biases and unfairness, whether conscious or unconscious. Unless safeguards like diverse data and independent audits are mandated, the AI systems of the future stand to automate and exacerbate historic injustice in society.
  • Surveillance Capitalism: Tech companies that thrive economically on targeted digital ads have an insatiable appetite for data on user activities, interests and networks. There are few checks on how far big corporations go in collecting highly personal user data, without meaningful consent, for purely commercial aims. Such lack of restraint crosses privacy boundaries.

Do Big Tech Companies Misuse AI Due to Lack of Regulation?

Silicon Valley technology companies have frequently faced criticism that their ethics and values are misaligned with user or societal interests in the pursuit of profits and market domination. Much of this has to do with these influential corporations growing very quickly on the back of digital technologies never foreseen by traditional regulations. For instance, notes Robin Allen QC, Barrister and Visiting Professor at the London School of Economics:


"Existing standards like data protection laws primarily focus on consent mechanisms in notice-and-consent frameworks, but they fail to govern the unseen risks in automated decision-making. More agile and adaptive policy frameworks are urgently needed as algorithmic accountability cannot be left to pure self-governance by tech corporations."


In many countries, lawmakers and judicial systems are playing catch-up, trying to frame policies and legal tests tailored to complex technology matters. Without government oversight and accountability measures suited to AI's evolving influence across sectors, overdependence on self-regulation by tech companies could mean engineers and executives take ethical shortcuts behind the scenes.

"AI is the new electricity. Like electricity, used right, harnessed right, AI is going to transform industries and really improve lives. But policies have to protect individuals and communities potentially harmed by AI"- Timnit Gebru, Former Co-Lead of Ethical AI Team at Google

Do Profits Incentivize AI Misuse?

To understand whether profits play a role in the misuse of AI, one has to appreciate the kinds of business models behind the historic revenue generation of major tech players:


| Revenue Model        | Company Examples         | Issues                                        |
| -------------------- | ------------------------ | --------------------------------------------- |
| Advertising via Data | Google, Facebook         | Tracking user data to microtarget ads         |
| Driving Addiction    | YouTube, Netflix, TikTok | Autoplay features maximize screen time        |
| Worker Exploitation  | Uber, Lyft               | No employment benefits amid rising automation |


As is evident, the quest for endless user growth and retention, alongside pressure to spend less on human labor, creates several ethical conflicts. AI fuels the problems in these profit-centered models by seamlessly enabling 24/7 tracking and automated engagement hooks. Unless corporations voluntarily transform such motivations, or regulatory reforms force them to rethink narrow commercial interests, these incentives will likely perpetuate AI misuse on a global scale, increasing inequality and eroding consumer autonomy.


How Can Corporations Use AI More Responsibly?

While stronger government regulation around algorithmic transparency and oversight would establish clearer guard rails against AI exploitation, companies themselves can also act more responsibly. Some best practices worth widespread voluntary adoption include:


  • Explainable AI: Making automated decision systems more interpretable, so that individual users can understand why a particular prediction or content recommendation was made for them. Lack of personal insight into AI judgments undermines trust and perceived fairness (a minimal sketch of one form of explainability follows this list).
  • External Algorithm Audits: Undertaking regular bias testing of algorithms by independent third party auditors, especially where risk of discrimination exists. Removing prejudicial correlations in training data or misleading feedback loops would greatly improve AI safety.
  • Data Minimization: Collecting and retaining only the user data strictly needed to deliver or improve the core service. This would prevent privacy violations via unauthorized secondary use.
  • Grievance Redress: Setting up dedicated teams and straightforward procedures for users to contest problematic automated decisions or request that an AI prediction be overturned in favor of human review. This provides a check against algorithmic irreversibility.
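As a concrete, deliberately simplified example of the explainability practice above, the sketch below reports per-feature contributions of a linear scoring model back to the affected user. The feature names and weights are hypothetical; real deployments often rely on techniques such as SHAP values for non-linear models.

```python
# Sketch of explainability for a linear scoring model: report each
# feature's contribution to the decision. Features/weights are hypothetical.

WEIGHTS = {"years_experience": 0.6, "skills_match": 1.2, "gap_in_resume": -0.8}

def score_with_explanation(applicant: dict[str, float]):
    contributions = {
        feature: WEIGHTS[feature] * value
        for feature, value in applicant.items()
    }
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"years_experience": 4.0, "skills_match": 0.7, "gap_in_resume": 1.0}
)
print(f"score = {score:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    # The user can see which inputs pushed the decision up or down.
    print(f"  {feature}: {contribution:+.2f}")
```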


Overall, voluntary self-regulation initiatives by tech giants may still not go far enough due to inherent conflicts of interest. But they signal corporate responsibility in the absence of government action, helping build public trust.


What Can Government And Citizens Do About Corporate AI Misuse?

Public intervention through policy reforms, legislation, and grassroots campaigns plays a vital role in balancing commercial applications of Artificial Intelligence with individual rights, and in providing guard rails against exploitation by corporations. Some civic and governmental ways to counter AI misuse include:


  • Algorithmic Audit Mandates: Governments can require transparency reports, risk assessments and scheduled reviews by external auditors for high-risk automated decision systems used by corporations, especially where there is potential for unlawful discrimination or large-scale job loss. This helps address AI's black-box aspect.
  • New Data Protection Laws: Updating decades-old privacy regulations to deal more explicitly with valid consent, mandatory disclosures and restrictions on secondary or unauthorized use of private data. Legislation specifically tailored to safeguard against opaque mass surveillance could establish thresholds for data collection beyond service necessity, and strengthen individuals' rights and means to access, review and delete retained data traces, enhancing accountability around data handling.
  • Employee Organizing: Workers across sectors impacted by automation technologies can collectively negotiate agreements with corporations for fairer compensation packages if significant job displacement happens due to AI systems. This provides a safety net where adequate public policy lags.
  • Algorithmic Impact Assessments: Before deploying high-risk AI systems that could adversely impact human rights or civil liberties, companies conduct mandatory impact assessments through an equitable lens and address surfaced risks through additional testing or tweaks during the design phase.
  • Public Advocacy Initiatives: Concerned groups run awareness campaigns on problems like manipulation via hyper-personalized social media feeds, lack of accountability for certain automated decisions, and creeping privacy violations. Goals include both consumer self-defense and pressuring corporations.
  • Research Funding: Governments expand support for academic research and civil society institutions focusing on auditing proprietary algorithms, studying physical/mental health side effects of automated hooks, investigating unfair biases in training data and effects of certain AI models replacing human jobs/oversight. Findings inform policy reforms around ethical AI standards.

"Interventions by both policymakers and the public are crucial to balance out the tremendous asymmetry of power held by a handful of global tech corporations wielding AI systems impacting millions of lives" - Ella Ingram, author of Artificial Injustice: The Ethics and Governance of Algorithms


In the technology life cycle, initial unconstrained innovation has historically led to public outrage once harmful effects surfaced widely. Subsequent pressure and mobilization finally resulted in reasonable restraints, such as basic environmental regulations and truth-in-advertising norms. AI oversight today appears to be at that inflection point. With a growing spotlight on how certain applications and incentives serve corporate bottom lines rather than collective human interests, some thoughtful guard rails are starting to emerge, though they remain insufficient. Sustained public vigilance and debate on this deeply shaping technological force remains vital.
