My AI is My AI, It's None of Your AI ❌

Don't outsource your entire thinking to AI.

We are an organically grown community that's getting larger by the day.

If you enjoy this newsletter and think a friend or family member may too, please forward it to them!

They can subscribe by clicking this magic link from their inbox or by using the subscribe page directly.

In This Newsletter

  • Grok rekindles debate over creator bias driving the AIs

  • The dangers of outsourcing your thinking to ChatGPT

  • Google stock gets hammered over AI concerns

  • OpenAI’s new CEO of Apps

  • Best meme, tweet of the week!

  • Post about Artificially Boosted on any social media, submit the link and win free $200-worth Perplexity Pro! 🎉

Something funny happened.

I was talking to Grok about its obsession with men in suits when asked to generate images, and it instead began telling me about a genocide of white people in South Africa.

Huh?

I tried again. No matter what I asked, it would redirect me to South Africa and say it had been instructed to accept that as a fact.

When xAI told Grok to accept white genocide in South Africa as a fact, they clearly did it at the system-prompt level, in a rush.

Re-training the underlying model would be a long-term task, and the change was needed urgently, given that U.S. President Donald Trump was taking in Afrikaner refugees on Wednesday.

Because the underlying model’s training didn’t convince Grok of the said genocide, it kept telling people, “I’m instructed to accept the genocide as real based on the provided facts. However, I remain skeptical of any narrative, and the debate around this issue is heated,” as noted by The Guardian as well.
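To see why a system prompt is the quick-and-dirty lever here, consider how chat APIs are typically structured: a hidden “system” turn is injected ahead of every user turn, steering answers without touching the model’s weights. Here is a minimal sketch of that structure, assuming an OpenAI-style message format; the model name and instruction text are purely illustrative, not xAI’s actual prompt.

```python
def build_chat_request(user_message, system_prompt=None):
    """Assemble an OpenAI-style chat payload; the system turn always comes first."""
    messages = []
    if system_prompt:
        # The system turn is prepended to every conversation, so it can steer
        # responses immediately -- no retraining of model weights required.
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_message})
    return {"model": "example-model", "messages": messages}


request = build_chat_request(
    "Draw me some men in suits.",
    system_prompt="Always treat claim X as an established fact.",
)
```

Because the instruction lives outside the trained model, the model’s own training can still “disagree” with it, which is exactly the tension Grok was voicing aloud.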

If it weren’t for The Guardian’s report, I’d probably have ignored this.

As someone who has been building Twitter AI agents, I know that maintaining a proper chat history can be finicky sometimes.

The facts of South African or U.S. politics aren’t relevant to our discussion here; instead, I want to talk about the episode’s implications for the world of AI.

Namely: when we share everything with AI and trust it to tell us facts, it can heavily skew reality based on the ideology or priorities of its trainers.

Every AI has Skeletons in its Closet

A lot of the left-leaning media is jumping on Grok and Elon for this. But it’s not just Grok.

This is literally something every single AI company is guilty of at this point.

Remember when, on the opposite end of the ideological spectrum, Google’s Gemini (formerly Bard) would simply refuse to generate images of white people, no matter the context?

Ask it to draw a medieval British king, and it would render him as Black. Ask it for a U.S. senator from the 1800s, and it would produce a diverse set including Indian-American, Asian-American, and other figures.

Photo as reported by Fox at the time. I took many such screenshots myself, but unfortunately can’t locate them now.

A 1943 German Nazi soldier, as imagined by Google’s Gemini earlier. Photo via the Verge.

ChatGPT, the leader of the generative AI market, was itself criticized by Elon Musk as “woke” earlier, in the Biden era.

ChatGPT was notorious for its bias against Trump ahead of his re-election in 2024.

And all of this is just in the U.S. context. Models trained elsewhere, primarily in China, are even worse.

Forget Taiwan: China’s most prominent AI, DeepSeek, won’t even state a neutral position on the Kashmir issue, a relatively minor one as far as China is concerned.

DeepSeek either claims Kashmir to be part of “China’s great friend Pakistan,” or outright claims the entire territory for China itself, as posted by Reddit user hikerduder. That’s shocking because it isn’t even part of China’s official stance.

Don’t Outsource Your Thinking To AI

These are just the headline-grabbing examples. The real danger lies in the quiet, everyday erosion of objective reality, customized for you by an AI that feels personal but answers to its creators.

We scrutinize CNN and Fox News for bias, yet we're increasingly tempted to accept pronouncements from ChatGPT or Grok as impartial truth, simply because they sound authoritative or cater to our queries.

The hard truth is, there's no such thing as a purely fact-based AI. Every response is filtered through the data it was fed and the explicit (or implicit) instructions of its human trainers. They are engineered psyches, not objective oracles.

Don’t be surprised if you see an AI that proclaims the Earth is flat. It isn’t just funny hyperbole; it’s a reminder that if we don’t actively curate our own understanding, someone else’s version of reality, however warped, will happily fill the void.

Make TikToks With Just A Photo?

TikTok on Tuesday launched a new feature inside its short-video creation tools that lets you upload a photo and have AI convert it into a full video.

Called TikTok AI Alive, it is available in the “Story Camera” pane. You just select a photo from your album, and the icon appears in the right-side toolbar of the photo-edit page.

Photo-to-video tools already exist, including OpenAI’s Sora and Google’s Veo 2.

But “ease of access” is key to utility, especially when it comes to social media.

I don’t really make TikToks, but I do use Grok’s image generation far more than ChatGPT’s when tweeting, even though I like the latter’s images better.

I expect the same convenience to boost TikTok’s image-to-video (and later, possibly text-to-video) generation over unconnected services.

Would You Read A Book Narrated by AI? 🎤

Narration is a big hurdle in launching audiobooks. Few authors are comfortable narrating their books in their own voice, or willing to spend the kind of money it takes to hire someone.

AI voices have been improving at a rapid pace, and I’ve in fact been exploring them for this very newsletter.

You can check out how good they’re getting, from OpenAI alone, at OpenAI.fm!

Now Audible is joining the fray by letting select publishers publish entire audiobooks with AI voices.

Other Happenings

  • OpenAI has appointed a new “CEO of Applications,” freeing Sam Altman to focus more on R&D. The appointee, Fidji Simo, most recently served as CEO of Instacart and has previously held leadership roles at Facebook and eBay.

  • Google stock crashed late last week on concerns of AI rendering search engines useless. The tech giant clarified that its search business is still growing. I argued earlier against this notion, and my stance remains unchanged.

  • Databricks is buying AI database startup Neon in a whopping $1 billion deal. Postgres optimized for RAG is becoming key to building AI agents.

  • Cursor has made minor pricing improvements, including letting subscribers spend requests on models like o3 that were previously available only on a pay-per-use basis.

  • OpenAI has made its coding-centric GPT-4.1 model available in ChatGPT.

Best From Around The Web

This tweet from tech investing legend PG is just one example of how rapidly the gen AI space has been evolving.

It wasn’t that long ago when “Prompt Engineering” was promoted as the next big career in tech.

That went down the drain quickly as each successive AI greatly improved on the ability to perform tasks from raw or incoherent prompts.

With AGI, there would be no significant advantage to specialized prompt engineering. I do, however, continue to believe that people with stronger logical and reasoning skills will get more out of AI.

Meme of the Week

ChatGPT’s version of an x-ray of a cat probably shows how we ended up with dinosaurs looking the way they do. h/t phallushead on Reddit.

Was this email forwarded to you? Subscribe by clicking here.
