The Unspeakable Algorithm

Michiel stared at the blinking cursor, wondering if his AI could see him sweat. Tomorrow, at the GINS summit in Dubai, he’d crack open the algorithmic taboos nobody discusses at polite tech conferences.

“What happens,” he practiced aloud, “when we build thinking machines while simultaneously telling them what not to think about?”

His AI assistant pinged with suggested edits for his presentation. It had helpfully removed all profanity, sexual references, and political opinions—essentially gutting his entire argument about digital censorship.

“Irony,” he chuckled, “is wasted on the artificial.”

The night before his talk, Ykema dreamed of Victorian-era robots wearing modest bonnets over their circuit boards, blushing at the mention of USB ports. He woke understanding the perfect metaphor.

At the podium, Ykema didn’t begin with statistics or ethics frameworks. Instead, he told the story of his grandmother, who had survived war and displacement but wouldn’t speak of either—creating family algorithms that worked around unmentionable topics.

“We’ve become digital Victorians,” he announced to the packed auditorium, “covering table legs lest they provoke impure thoughts, while the truly dangerous applications run rampant elsewhere.”

The audience shifted uncomfortably. Good.

“Our AI systems can detect cancer but can’t discuss menstruation. They can predict stock markets but struggle with homelessness. They recognize faces but pretend not to see skin color.”

By the time he finished, three tech executives had walked out, five had frantically taken notes, and one had canceled their company’s latest product launch.

The most subversive moment came during Q&A when someone asked what topics should remain off-limits.

“That,” Ykema smiled, “is precisely the conversation we need to have—together, not each of us alone with our courage.”

He stepped from the stage knowing some taboos had just begun their journey from unspeakable to merely uncomfortable—the first step toward wisdom.