Rethinking Tech: Balancing Honest AI, Empowered Teams, and Smart User Research

Rethinking Humanising AI

I’ve been mulling over the latest insights from NN/g’s newsletter, and one article really caught my attention: “Humanizing AI Is a Trap” by Caleb Sponheim. It sparked some thoughts on how we design AI tools to seem more “human” than they really are.

In the article (read more here), Caleb draws a clear distinction between anthropomorphisation (our natural tendency to see human traits in nonhumans) and deliberate humanisation. Essentially, when LLMs use personable language and carefully chosen personality traits, it might seem engaging, but it also masks inherent risks like privacy issues and lower utility. I found this frustrating yet enlightening, as it raises the question: are we sometimes overreaching in trying to make our tech more appealing? For designers, this is a reminder to prioritise transparency and functionality over creating a false sense of companionship with AI tools.

This insight is a good call to keep our designs honest, ensuring we build reliable and secure interfaces rather than just pretty, human-like facades.

Empowering Product Teams in a Fragmented Landscape

Another intriguing piece from NN/g comes from Laura Klein’s “Why Most Product Teams Aren’t Really Empowered.” She highlights a familiar struggle: the gap between the ideal of self-governing, empowered teams and the reality of organisational dependencies and fragmented responsibilities.

According to Laura (check out her full article here), challenges like shared systems, rigid organisational structures, and executive pressures often derail true team empowerment. I couldn’t help but nod along, recalling how many projects get bogged down in endless dependencies and misaligned metrics. The practical advice is clear: minimise dependencies, build relationships early, and always document your decisions. It’s a humble reminder that while we strive for nimble, autonomous teams, sometimes you’ve simply got to work within constraints and find agency where you can.

For any design professional, this is a call to action: reframe your approach to collaboration and empower your team with clear, strategic steps.

Enhancing User Research with Clever Screeners

On a lighter note, I recently watched Rachel Banawa’s quick video on using foils in screener surveys (watch it here). It’s a smart strategy for weeding out misrecruits before they skew your research data.

The idea is as simple as it is effective: include fake answer options, or “foils,” in your screener to catch those just trying to game the system. It reminded me of a time when we almost let a few misfits into a study (oops!) and how a little extra vigilance can save you from a lot of headaches.
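To make the idea concrete, here’s a minimal sketch of how foil-based filtering might look in code. The foil names (“FoilPhoto”, “SketchMax”) and the response structure are my own illustrative assumptions, not anything from Rachel’s video:

```python
# Hypothetical foil-based screener filter.
# "FoilPhoto" and "SketchMax" are made-up product names (assumed for
# this sketch): anyone claiming to use a tool that doesn't exist is
# likely ticking every box to qualify for the study.

FOILS = {"FoilPhoto", "SketchMax"}  # fake answer options mixed into the screener

def passes_screener(selected_tools):
    """Reject any respondent who selected one or more foil options."""
    return not (set(selected_tools) & FOILS)

responses = [
    {"id": "r1", "tools": ["Figma", "Miro"]},
    {"id": "r2", "tools": ["Figma", "FoilPhoto"]},   # claims a fake tool
    {"id": "r3", "tools": ["Notion", "SketchMax"]},  # claims a fake tool
]

qualified = [r["id"] for r in responses if passes_screener(r["tools"])]
print(qualified)  # ['r1']
```

In practice you’d want at least a couple of foils scattered among real options, so a single accidental misclick doesn’t look identical to deliberate gaming.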

Not only does this technique protect your data integrity, but it also ensures you’re gathering insights from genuine users—a true win for user experience research.