
The Creativity Recession and Why Product Leaders Must Reverse It Now

AI isn’t killing creativity. We are, through the tools we design, the incentives we reward, and the human potential we quietly erode. This is the decade-defining moment for builders who care about humanity.

Our latest guest is Maya Ackerman — AI‑creativity researcher, professor, and author of Creative Machines: AI, Art & Us (Wiley), as well as founder of WaveAI and LyricStudio (see her recent collaboration with NVIDIA).

Maya’s perspective is not just insightful — it’s a necessary reality check for anyone building AI today. She challenges the comforting narrative that AI is a neutral tool or a natural evolution of creativity. Instead, she exposes a truth many in tech avoid: AI is being deployed in ways that actively diminish human creativity, and businesses are incentivized to accelerate that trend.

Her research shows how overly aligned, correctness-first models flatten imagination and suppress the divergent thinking that defines human originality. But she also shows what’s possible when AI is designed differently — improvisational systems that spark new directions, expand a creator’s mental palette, and reinforce human authorship rather than absorbing it.

This episode matters because Maya names what the industry refuses to admit. The problem is not “AI getting too powerful,” it’s AI being used to replace instead of elevate. Businesses are applying it as a cost-cutting mechanism, not a creative amplifier. And unless product leaders intervene, the damage to creativity — and to the people who rely on it for their livelihoods — will become irreversible.

Listen to the Episode on Spotify, Apple Podcasts, or YouTube


We’re engineering a global creative regression and pretending we aren’t.

Generative AI could radically expand human imagination, but the systems we deploy today overwhelmingly suppress it. The literature is unequivocal:

  • AI boosts creative output only when tools are intentionally designed for exploration, not correctness.

  • When aligned toward predictability, AI drives conformity and sameness.

  • The rise of “AI slop” is not an insult — it’s the logical outcome of misaligned incentives.

  • New evidence shows that AI-assisted outputs become more similar as more people use the same tools, reducing collective creativity even when individual outputs look “better.”

  • Homogenization is measurable at scale: marketing, design, and written content generated with AI converge toward the same tone and syntax, lowering engagement and cultural diversity.

  • Repeated reliance on AI weakens human originality over time — users begin outsourcing ideation, losing confidence and capacity for divergent thought.


BOTTOM LINE: If your product optimizes for correctness, brand safety, and throughput before originality, you are actively contributing to the global collapse of creative quality. AI must be designed to spark—not sanitize—human imagination.

Thanks for reading Design of AI: Strategies for Product Teams & Agencies! This post is public so feel free to share it.



Award-winning creative talent is disappearing at scale, and the trend is accelerating.

The global creative workforce is shrinking faster than at any time in modern history. Companies claim AI is “enhancing creativity,” yet most restructuring reveals the opposite: AI is being deployed primarily to cut labor costs. Overall, layoff announcements have topped 1.1 million this year, the most since the 2020 pandemic.

What’s happening now:

The most important read on the future — and destruction — of agencies comes from the author of Musings Of A Wandering Mind. She always brings a powerful and necessary mirror to the shitshow that is the modern corporate world. Read her latest piece, “Let It Burn”:

Musings Of A Wandering Mind
Let It Burn

Freelancers and independent creatives are being hit even harder, and the research into AI points to even deeper risks.

BOTTOM LINE: Creative roles are vanishing because AI is being deployed for efficiency rather than imagination. If we want a future with vibrant creative industries, AI must be designed to amplify human originality, not replace the people who produce it.



Please participate in our year-end survey

We are studying how AI is restructuring careers, skills, and expectations across product, design, engineering, research, and strategy.

Your responses influence:

  • the direction of Design of AI in 2025,

  • what questions we investigate through research,

  • what frameworks we build to help leaders adapt—and protect—their teams.

Take the survey: https://tally.so/r/Y5D2Q5


Understand your cognitive style so you know how best to leverage AI to boost your thinking

The Creative AI Academy has developed an assessment tool to help you understand your creative style. We all tackle problems differently and arrive at novel solutions by different methods. Take the ThinkPrint assessment to get a blueprint of how you ideate, judge, refine, and decide. Knowing this will help you understand the ways AI can boost, rather than undermine, your originality.

For me it was powerful to see my thinking style mirrored back at me. It gave structure to what enhances and what undermines my creativity, so I better understand what role (if any) AI should play in expanding my creative capabilities. Thank you to Angella Tapé for demonstrating this tool and presenting the perfect next evolution of Dr. Ackerman’s lessons about needing AI to be a creative partner, not a cannibalizer.

ThinkPrint map

BOTTOM LINE: Without cognitive self-awareness, you’re not “partnering” with AI—you’re surrendering your creative identity to it. Take the ThinkPrint assessment and redesign your workflow around human-led, AI-supported thinking.



We are trading away human intellect for productivity—and the safety evidence is damning.

The research is now impossible to ignore: AI makes us faster, but it makes us worse thinkers.


A major multi-university study (Harvard, MIT, Wharton) found that users with AI assistance worked more quickly but were “more likely to be confidently wrong.”
Source: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321

This pattern shows up across cognitive science:

  • Stanford and DeepMind researchers found that relying on AI “reduced participants’ memory for the material and their ability to reconstruct reasoning steps.”
    Source: https://arxiv.org/abs/2402.01832

  • EPFL showed that routine LLM use “led to measurable declines in writing ability and originality over time.”
    Source: https://arxiv.org/abs/2401.00612

  • University of Toronto researchers warn that repeated LLM use “narrows human originality, shifting users from creators to evaluators of machine output.”
    Source: https://arxiv.org/abs/2410.03703

In other words: we are outsourcing the exact cognitive muscles that make human thinking valuable — creativity, reasoning, comprehension — and replacing them with pattern-matching convenience.

And while we weaken ourselves, the companies building the systems shaping our cognition are failing at even the most basic safety expectations.

The AI Safety Index (Winter 2025) reported:

“No major AI developer demonstrated adequate preparedness for catastrophic risks. Most scored poorly on transparency, accountability, and external evaluability.”
Source: https://futureoflife.org/ai-safety-index-winter-2025/

A companion academic review by Oxford, Cambridge, and Georgetown concluded:

“Safety commitments across leading LLM developers are inconsistent, largely self-regulated, and often unverifiable.”
Source: https://arxiv.org/pdf/2508.16982

We are weakening human cognition while trusting companies that cannot prove they are safe. There is no version of this trajectory that ends well without deliberate intervention.

Resources:

  • The Hidden Wisdom of Knowing in the AI Era:

Philosopheasy
The Hidden Wisdom of Knowing in the AI Era
The very notion of expertise is increasingly fragile in our hyper-connected age. Authority, once vested in demonstrable skill and deep knowledge, faces relentless challenges from myriad directions. This erosion raises profound questions about how we acquire, validate, and utilize knowledge itself. Socio-political phenomena like “The Death of Expertise” involve factors like excessive egalitarianism and confirmation bias. Complementary critiques focus on technology’s cognitive consequences. Our attention must turn inward, examining the cognitive consequences of our digital tools. The medium of information delivery fundamentally reshapes our thinking processes…

BOTTOM LINE: Tools that reduce effort but not capability are not accelerators—they are cognitive liabilities. Product leaders must design for mental strength, not dependency.


Schools are producing prompt operators, not original thinkers.

Education systems are bolting AI onto decades-old learning models without rethinking what learning is. Instead of cultivating reasoning, imagination, and embodied intelligence, schools are teaching children to rely on AI systems they cannot critique.

BOTTOM LINE: If we do not redesign education, we will create a generation of humans who can operate AI but cannot outthink, challenge, or transcend it.


Featured AI Thinker: Luiza Jarovsky

Luiza Jarovsky is one of the most essential voices in AI governance today. At a time when global AI companies are actively pushing to loosen regulation—or bypass it entirely—Luiza’s work provides a critical counterbalance rooted in human rights, safety, law, and long-term societal impact.

Why her work matters now:

  • She exposes the structural risks of deregulated AI adoption across governments and corporations.

  • She documents how weak or performative governance puts vulnerable communities at disproportionate risk.

  • She offers practical frameworks for ethical, enforceable AI oversight.

Follow her work:

BOTTOM LINE: If you build or deploy AI and you are not following Luiza’s work, you are missing the governance lens that will define which companies survive the coming regulatory wave.


Recommended Reality Checks

Two critical signals from the field this week:

  1. Ethan Mollick on the accelerating automation of creative workflows
    https://x.com/emollick/status/1996418841426227516
    AI is quietly outperforming human creative processes in categories many believed were “safe.” The speed of improvement is outpacing organizational awareness.

  2. Jeffrey Lee Funk on markets losing patience with empty AI narratives
    https://x.com/jeffreyleefunk/status/1996612615850676703
    Investors are separating real AI value from hype. Companies promising transformation without measurable impact are being punished.

BOTTOM LINE: The creative and product landscape is shifting beneath our feet. Those who don’t adapt—intellectually, strategically, and operationally—will lose relevance.


Final Reflection — Legacy Is a Product Decision

Everything in this newsletter points to a single, unavoidable truth:

AI does not define our future. The product decisions we make do.

We can build tools that:

  • expand human originality,

  • strengthen cognitive resilience,

  • elevate creative careers,

  • and produce a generation capable of thinking beyond the machine.

Or we can build tools that:

  • replace the creative class,

  • hollow out human judgment,

  • weaken educational outcomes,

  • and leave society dependent on systems controlled by a handful of companies.

As product leaders—designers, strategists, researchers, technologists—we decide which future gets built.

Legacy isn’t abstract. It’s the cumulative effect of every interface we design, every shortcut we greenlight, every metric we reward, and every model we deploy.

If you want to build AI that strengthens humanity instead of diminishing it, reach out. Let’s design for human outcomes, not machine efficiency.

arpy@ph1.ca
