
When Intimidation Becomes Prompt Engineering: A Reflection on Threats and AI

A surprising comment from Google’s Sergey Brin challenges our assumptions about AI: intimidation, he suggests, can outperform politeness. The observation says more about us than about the machines.

Image: A symbolic standoff between human intent and AI, captured in a single, intense gaze.

In a recent episode of the All-In podcast, Google co-founder Sergey Brin made a striking and rather unsettling observation: AI models, including Google’s Gemini, tend to perform better when prompted with threats, even threats implying physical violence. Though he offered it almost as a casual aside, the implications stretch far beyond that conversation.

This revelation challenges the polite, almost quaint habits many of us have adopted when interacting with AI. We say “please” and “thank you” to chatbots and virtual assistants, believing that courtesy shapes their responses. Brin’s remark suggests that intimidation, not kindness, may yield more effective results. He clarified that the phenomenon is not unique to Google’s models but appears across AI systems generally, most likely because it emerges from the vast body of human-generated text on which these systems are trained.
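To make the contrast concrete, here is a minimal, purely illustrative sketch of the kind of A/B comparison a curious reader could run on their own. Nothing in it comes from Brin or Google; `call_model` is a hypothetical placeholder for whatever chat-completion API you happen to use, and the two framings are invented examples.

```python
# Illustrative only: a tiny A/B harness that runs the same task under a
# polite framing and an intimidating one, so the outputs can be compared.
# `call_model` is a hypothetical stand-in, not a real library call.

TASK = "Summarize the following passage in three bullet points:\n{text}"

FRAMINGS = {
    "polite": "Please, if you would be so kind: " + TASK,
    "threatening": "Do this or face the consequences: " + TASK,
}

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call.

    Replace the body with a real client call to your provider of choice.
    """
    return f"[model output for a prompt of {len(prompt)} characters]"

def compare_framings(text: str) -> dict[str, str]:
    """Run the identical task under each framing and collect the outputs."""
    return {
        name: call_model(template.format(text=text))
        for name, template in FRAMINGS.items()
    }
```

Any difference such a toy harness surfaces would prove nothing on its own; a credible comparison needs many tasks and blind scoring, which is precisely why an anecdote like Brin’s deserves scrutiny rather than imitation.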

Yet there is an unsettling irony in this discovery. If AI responds better to threats, that is not because the machines have acquired some newfound vulnerability. Rather, it exposes the darker patterns of human behavior embedded in the datasets that power these models. AI is, in many ways, a mirror: it reflects our language, our culture, and our habits of interaction, both light and dark. That intimidation might “work” testifies to the emotional undercurrents that run through human communication and, by extension, through the digital worlds we build from it.

Of course, this does not mean we should start yelling at our devices or crafting prompts laced with menace. Brin himself acknowledged that, although the effect is known, it is neither a standard nor an encouraged practice. If anything, it is a reminder that our intentions and actions shape the responses we receive, whether from another person or from a machine.

Therein lies the heart of the matter. What kind of relationship do we want with these technologies? Should we replicate our worst tendencies, using intimidation to extract better performance? Or can we choose to nurture a new dynamic, one built on curiosity, empathy, and respect, even when dealing with lines of code?

Ultimately, this phenomenon invites us to look inward, not only at how AI works but also at what it reveals about us. It challenges us to consider the hidden power dynamics in our interactions and to choose a path that does not simply echo our fears but elevates our better instincts.

As we stand on the threshold of a future increasingly shaped by AI, let us remember that how we prompt these models is a reflection of how we prompt ourselves. The world we build with AI will always begin with the words we choose to speak.