Emirates Insight
Startups & Leadership

Knowing when you shouldn’t use AI is also a skill

By Emirates Insight | March 5, 2026



Most AI training teaches you how to get outputs. Write a better prompt. Refine your query. Generate content faster.

This approach treats AI as a productivity tool and measures success by speed. It misses the point entirely.

Critical AI literacy asks different questions. Not “how do I use this?” but “should I use this at all?” Not “how do I make this faster?” but “what am I losing when I do?”

AI systems carry biases that most users never see. Researchers analysing the British Newspaper Archive in 2025 found that digitised Victorian newspapers represent less than 20% of what was actually printed. The sample skews toward overtly political publications and away from independent voices.

Anyone drawing conclusions about Victorian society from this data risks reproducing distortions baked into the archive. The same principle applies to the datasets that power today’s AI tools. We cannot interrogate what we do not see.

Literary scholars have long understood that texts help to construct, rather than simply reflect, reality. A newspaper article from 1870 is not a window onto the past but a curated representation shaped by editors, advertisers and owners.


AI outputs work the same way. They synthesise patterns from training data that reflects particular worldviews and commercial interests. The humanities teach us to ask whose voice is present and whose is absent.

Research published in the Lancet Global Health journal in 2023 demonstrates this. Researchers attempted to invert stereotypical global health imagery using AI image generation, prompting the system to create visuals of black African doctors providing care to white children.

Despite generating over 300 images, the AI proved incapable of producing this inversion. Recipients of care were always rendered black. The system had absorbed existing imagery so thoroughly that it could not imagine alternatives.

AI slop is not just articles peppered with “delve” and em dashes. Those are merely stylistic tells. The real problem is outputs that perpetuate biases without interrogation.

Consider friendship. Philosophers Micah Lott and William Hasselberger argue that AI cannot be your friend because friendship requires caring about the good of another for their own sake. An AI tool lacks an internal good. It exists to serve the user.

When companies market AI as a companion, they offer simulated empathy without the friction of human relationships. The AI cannot reject you or pursue its own interests. The relationship remains one-sided: a commercial transaction disguised as connection.

AI and professional responsibility

Educators need to distinguish when AI supports learning and when it substitutes for the cognitive work that produces understanding. Journalists need criteria for evaluating AI-generated content. Healthcare professionals need protocols for integrating AI recommendations without abdicating clinical judgment.

This is the work I pursue through Slow AI, a community exploring how to engage with AI effectively and ethically. The current trajectory of AI development assumes we will all move faster, think less and accept synthetic outputs as a default state. Critical AI literacy resists that momentum.

None of this requires rejecting technology. The Luddites, the textile workers who smashed weaving frames across the English Midlands in the early 19th century, were not opposed to progress. They were skilled craftsmen defending their livelihoods against the social costs of automation.

When Lord Byron rose in the House of Lords in 1812 to deliver his maiden speech against the frame-breaking bill (which made the destruction of frames punishable by death), he argued these were not ignorant wreckers but people driven by circumstances of unparalleled distress.

The Luddites saw clearly what the machines meant: the erasure of craft and the reduction of human skill to mechanical repetition. They were not rejecting technology. They were rejecting its uncritical adoption. Critical AI literacy asks us to recover that discernment, moving beyond "how to use" toward an understanding of "how to think".

The stakes are not hypothetical. Decisions made with AI assistance are already shaping hiring, healthcare, education and justice. If we lack frameworks to evaluate these systems critically, we outsource judgement to algorithms whose limitations remain invisible.

Ultimately, critical AI literacy is not about mastering prompts or optimising workflows. It is about knowing when to use AI and when to leave it the hell alone.


This article is republished from The Conversation under a Creative Commons license. Read the original article.



