
Spiritual Intelligence: Big AI

I recently spoke at the Tech4Palestine Summit in San Francisco

Support UpScrolled, a Palestinian-owned social network free of algorithmic interference and ads. It has both Android and iOS applications. You can follow me here.

Join me on UpScrolled

As-salaam alaykum,

"You have a Ph.D. in AI, how could you be anti-AI?!"

That's the question I got from an audience member after my talk at the Tech4Palestine Summit in San Francisco this past weekend. It's a question I've heard numerous times over the years. It's usually delivered with genuine confusion, and sometimes with a hint of judgment 😂. I understand why people ask it. On the surface, it seems like a contradiction. Why would someone who has dedicated two decades of their life to a field suddenly turn against it?

Before we go any further, I should preface by saying that I joined the field of AI in 2004, when it was called "pattern recognition". I had a vision of applying algorithms to brain imaging data to "solve" Alzheimer's disease. Even though I didn't crack Alzheimer's, I still believed that data could be used to solve massive societal problems. To that end, I helped establish the field of Climate Informatics, and later I was a regular organizer of conferences on "Data Science for Good", including co-organizing the first "Machine Learning for Good" workshop at one of the largest AI conferences: the International Conference on Machine Learning.

OK, back to the panel discussion. The panel's opening question was about whether we can make "ethical" AI. Before we can even begin to define what ethical means in this context, we first need to establish what AI actually is. And this is where the confusion begins for most people.

Today, most people think AI is synonymous with ChatGPT and other large language models (LLMs), and that's a fundamental misunderstanding.

AI is a broad field of computer science that develops algorithms that improve with more data [for a lengthier discussion of AI and ChatGPT, see my August 2025 newsletter, "ChatGPT-5"]. LLMs, like ChatGPT, are one very narrow application of AI, yet because they are funded by billions from Big Tech and venture capital investors, they dominate public discourse, completely overshadowing more modestly funded areas of AI research supported by academic grants. The work being done in universities on climate modeling, public health, and other applications rarely makes headlines, while every incremental update to ChatGPT becomes front-page news.
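
To make that definition concrete, here is a toy sketch of my own (not from any of my newsletters): the one property that defines the whole field is that a learning algorithm's predictions improve as it sees more data. No chatbot required.

```python
# A toy illustration of "algorithms that improve with more data":
# fit a straight line to noisy points and watch the test error shrink
# as the training set grows. (Assumes numpy and scikit-learn are installed.)
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def test_error(n_train: int) -> float:
    """Train on n_train noisy samples of y = 3x, then measure error on fresh data."""
    X = rng.uniform(0, 10, size=(n_train, 1))
    y = 3 * X[:, 0] + rng.normal(0, 2, size=n_train)
    model = LinearRegression().fit(X, y)
    X_test = rng.uniform(0, 10, size=(1000, 1))
    y_test = 3 * X_test[:, 0] + rng.normal(0, 2, size=1000)
    return float(np.mean((model.predict(X_test) - y_test) ** 2))

for n in (10, 100, 1000):
    print(f"{n:>4} training points -> test error {test_error(n):.2f}")
```

The same principle scales from this ten-line regression all the way up to climate models and medical imaging; LLMs are just one (very expensive) point on that spectrum.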

What we're experiencing today is the realization of Big Tech's vision: defining AI to mean only applications that further their agenda and dominance. For example, it takes hundreds of millions of dollars to train these LLMs. By convincing us that LLMs are "the future," Big Tech essentially owns that future because they're the only ones who can afford to train such models. This is a methodical strategy to concentrate power in the hands of a few corporations while making the rest of us dependent on their tools.

Here's the spiritually intelligent reframe:

There is a difference between the science of AI and Big AI. One is a field of computer science, the other is a system of extraction and exploitation dressed up as "inevitable innovation".

Anti-AI vs. Anti-"Big AI"

By now it should be clear that I am not anti-AI. I am anti-"Big AI." There's a meaningful difference that gets lost when we conflate the science of AI with for-profit LLMs like ChatGPT.

Big AI's main thesis is that LLMs will lead to "artificial general intelligence," or AGI. What AGI is and what it means for humanity is anybody's guess; it's one of those "you'll know it when you see it" kinds of things, I guess. Ask any OpenAI or Anthropic employee, and they will likely give you a vague, nonsensical definition of AGI.

AGI is your typical VC-funded hype technology: something very expensive to build and maintain that nobody asked for. Just like Instagram, VR/AR, the "metaverse," and countless other useless inventions, it serves a singular purpose: making more money for investors.

Big AI and AGI, however, differ from those technologies in one distinct way: their benefits are over-hyped while their negative effects are unfathomable (on the scale of an atomic bomb, or worse).

The Hypothetical Future vs. The Actual Present

When you press a "pro-AGI" person about the concrete benefits of current AI systems, you typically get a litany of hypotheticals and maybes. "It'll cure cancer!" they say, though we're no closer to that breakthrough than we were a decade ago. "It'll solve climate change!" though the energy consumption of building AI models actually accelerates environmental destruction. "It'll make healthcare affordable!" though the primary healthcare AI applications are billing optimization and insurance claim processing.

Hypotheticals. Maybes. Somedays. I've been in AI since 2004, and I'm still waiting on these "breakthroughs" that seem perpetually five years away.

But the harms? Those are happening right now.

The environmental destruction required to train a single large model is staggering: millions of gallons of water for cooling, enough electricity to power thousands of homes for a year. Workers in Kenya and other Global South countries make $2/hour labeling traumatic content so that ChatGPT can avoid showing you disturbing images, while they carry the psychological burden of viewing that content day after day. Facial recognition technology is being sold to ICE to facilitate deportations. Surveillance tools are being deployed against Palestinians. And billions of dollars are flowing to a handful of people who were already billionaires, widening inequality rather than addressing it.

One question that few people ask is: if AI is so powerful, why hasn't it made a dent in the issues that matter to us?

The answer is VCs.

As I mentioned earlier, it costs hundreds of millions of dollars to train an LLM. That's because these are extremely general models that try to do a plethora of tasks: answer questions, write essays, write software, create images and videos, and so on. Extreme generality is a hallmark of VC-funded tech: a product needs to be as general as possible to increase the odds of creating a monopoly. Even though AI models are much easier and cheaper to train on narrow problems, say medical literature, VCs see those as "niche" and not worthy of investment.

It is this broad, general nature of LLMs that makes them unsuitable for solving our most pressing societal needs, which require "niche" knowledge. And even then, we can only answer very specific questions. So when a tech bro says "AI will solve climate change," we have to ask: "What do you mean by climate change? Is it rising sea levels, rising global temperatures, the increasing frequency and intensity of extreme weather events, or something else?" As a general rule of thumb: a vague question ("solve climate change") plus a general technical solution (an LLM) equals a Ponzi scheme.

The Myth of Neutrality

Many Muslims I speak with claim AI is a "neutral" tool that depends entirely on the user to employ it for good or bad. This is demonstrably false, and it's an argument designed to deflect accountability from those who build and profit from these systems.

Big AI is not a neutral technology in several respects. First, and most simply, LLMs are trained mostly on English text. What about the hundreds of languages and dialects spoken across Africa and Asia? Training data are never neutral.

Second, the very interfaces of Big AI are not neutral. When OpenAI presents ChatGPT as a friendly chatbot (instead of a stochastic parrot), it serves its business interest of getting users to hand over more data while actively misleading them about the nature of the technology and the reliability of its outputs.

Finally, and this is why most AI (not just Big AI) is not neutral, every AI system is optimized for something, and that optimization target embeds values. Social media recommendation algorithms are optimized for engagement, which means they systematically amplify content that provokes strong emotional reactions such as outrage, fear, or conflict. Similarly, hiring algorithms optimized on "successful past employees" will reproduce whatever biases existed in past hiring decisions. If a company historically promoted people who fit a certain demographic profile, the algorithm will learn to favor that profile, not because it's programmed to discriminate, but because "neutral" optimization on biased historical data produces biased outcomes.
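
To see how "neutral" optimization reproduces bias, here is a minimal sketch using synthetic data I invented purely for illustration (no real hiring data involved): we train a standard classifier on a fabricated history in which one group was favored, then ask it to score two equally skilled candidates.

```python
# A minimal sketch of biased-history-in, biased-model-out,
# using synthetic data invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

skill = rng.normal(0, 1, n)      # the thing we'd like hiring to reward
group = rng.integers(0, 2, n)    # a demographic attribute (0 or 1)

# Fabricated history: past hiring favored group 1 regardless of skill.
hired = (skill + 1.5 * group + rng.normal(0, 1, n)) > 1.0

# "Neutral" optimization: just predict who gets hired from the features.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill but different groups:
candidates = np.array([[0.5, 0], [0.5, 1]])
p = model.predict_proba(candidates)[:, 1]
print(f"P(hire | group 0) = {p[0]:.2f},  P(hire | group 1) = {p[1]:.2f}")
# The model favors group 1, even though no discriminatory rule was coded.
```

Nothing in the code says "discriminate"; the preference is inherited entirely from the training labels, which is exactly the point.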

The claim of neutrality is itself a power move, an attempt to place these technologies beyond critique by framing them as mere tools rather than systems that encode and enforce particular visions of how the world should work.

When I point this out, when I ask who benefits and who pays the price, I'm called a "doomsayer," "anti-progress," or "backward." Critiquing the unquestioning adoption of AI doesn't make you anti-AI. It makes you pro-justice. It makes you pro-equality. It makes you pro-humanity. It makes you someone who refuses to let hypothetical technotopian futures justify real present harms. The prophetic tradition teaches us to question power, to stand with the oppressed, and to refuse complicity in systems of extraction. This is exactly what critical engagement with Big AI requires.

Big AI Cannot Be Reformed

So to go back to the original question: can we build "ethical" AI systems? My answer is no. Big AI cannot be "fixed" by sprinkling ethics frameworks on top or by adding diverse voices to AI teams. These interventions, while valuable, don't address the fundamental problem: capitalism.

Big AI is structurally built on extraction, exploitation, and concentration of power, and these aren't bugs to be patched but features essential to profitability.

Consider the extraction at the heart of these systems: scraping the entirety of the internet, including copyrighted books, pirated materials, and personal content, without permission or compensation. Consider the exploitation: underpaid workers in the Global South doing the traumatic work of content moderation while the companies they enable are worth billions. Consider the environmental violence: each training run consumes resources that could power entire communities, all to generate marginally better chatbot responses. Consider the concentration of power: only a handful of companies can afford to train these models, creating unprecedented technological monopolies that dwarf even those of the railroad barons of the Gilded Age.

You cannot reform a system whose profitability depends on these harms. It's like asking the tobacco industry to make cigarettes healthy: the harm is the business model.

When the Prophet PBUH began carrying his message to the Meccans, he was very clear that their system was corrupt: worshipping idols and false gods, discriminating based on tribal allegiances, burying daughters alive, etc. No matter how powerful and lucrative the Meccan system was, the Prophet PBUH denounced it. So much so that he sought to build his own system based on equality, justice, and truth in Madinah. Sometimes, you do have to throw out the baby with the bath water.


Is AI Really That Useful?

I left AI in 2017 because, even back then, I thought it was too much hype. Over the past three years, I have been speaking up more vocally against Big AI so other Muslims know it's OK to resist AI, and that resistance starts with questioning its value.

Seriously, is AI that useful that we're willing to look the other way? [I break down whether AI is really as useful as marketed in my August newsletter]

What if you had to pay $1 per word? Is that email drafted by ChatGPT worth it? Is that LinkedIn comment worth it? Is that AI-generated report that nobody will read worth it?

Most people using ChatGPT are using it for tasks that:

  1. Don't actually save them time
  2. Produce mediocre results that require extensive editing
  3. Could be done better by thinking and writing themselves

The emperor has no clothes, but we've been Jedi mind-tricked into believing that AI is essential, inevitable, and unavoidable.

How Muslims Can Resist Big AI

As Muslims, we have both a responsibility and an opportunity to resist Big AI's colonization of our lives, work, and thoughts. Here are concrete steps we can take:

1. Make Conscious Choices

Before using any AI tool, ask yourself: Who benefits from this tool, and who pays the price? Does it actually save me time, or does it just create the illusion of productivity? Could I do this better by thinking and writing it myself?

2. Support Alternatives

When AI tools are genuinely useful, prioritize smaller, narrow models built for a specific problem, and tools from academic or community efforts rather than venture-backed monopolies.

3. Control the Stack

The real power isn't in applications but in the entire AI stack: the data, the models, and the computing infrastructure underneath them. Muslims need to invest in every layer, not just in using the apps built on top.

Don't just be consumers of ChatGPT. Build the alternatives (this will be feasible for only a tiny percentage of Muslims).

4. Speak Up

In your workplaces, when someone suggests "let's just use AI for this," ask: Do we actually need AI here? Who benefits, and at what cost? Will the output be any good, or will it need extensive editing anyway?

Your voice matters. Many people are quietly uncomfortable with AI but think they're alone.

5. Prioritize Relationships Over Productivity Theater

Most AI use is "performative productivity": creating the appearance of work rather than actually doing meaningful work. Focus instead on real relationships and on meaningful work done with your own mind, even when it's slower.

6. Reject the Hype

When people claim AI will solve everything, remember: they said the same about the internet, social media, blockchain, and the metaverse.

Big Tech's promises are marketing, not prophecy.

7. Support Ethical AI Research

The scientific study of AI can contribute genuine value, but only when it's funded by academic grants rather than venture capital, and aimed at concrete problems that matter, like climate modeling and public health.

Support researchers and institutions doing this work, not venture-backed startups chasing billion-dollar valuations.

The Choice Is Ours

The science of AI isn't the enemy. The system extracting wealth while externalizing costs onto the most vulnerable? That's the enemy.

We don't need to passively accept Big AI's vision of the future. We can build different futures: ones aligned with our values, that serve human flourishing rather than shareholder returns.

If standing for this makes me "anti-AI," then I'll wear that label proudly.

The question isn't whether you're pro-AI or anti-AI. The question is: are you willing to ask who benefits and who pays the price?

Are you willing to resist technologies that harm the vulnerable, even when they give you the illusion of productivity?

Are you willing to imagine and build alternatives?

That's what spiritual intelligence requires of us.

What's your relationship with AI? Are you using it consciously or automatically? Reply to this email and let me know what resonates and what concerns you have.

May Allah's Peace be with you 🤲 ✨

James

💡
Whenever you are ready, there are 3 ways I can help you:
1. If you enjoy these reminders, support my work by pre-ordering my upcoming book, "Spiritual Intelligence: 10 Lost Secrets to Thrive in the Age of AI," and get exclusive access to a chapter before the general public does.
2. Join our Spiritual Intelligence community, where we host weekly lives on Thursdays and private events, and where we support your growth while you stay aligned with your values.
3. Master manifesting for Muslims in my most popular course "Friction to Flow" and get $290 off as a newsletter subscriber!