Why Most AI Initiatives Fail and How to Build Ones That Matter
Learning from cybersecurity's Human Layer reveals a powerful truth about AI product development: it's not about the technology, it's about the people.
The story of AI adoption is filled with striking contrasts. On one side, we have groundbreaking technical achievements—AI models that can write code, generate images, and solve complex problems. On the other, we see a puzzling pattern of implementation struggles, with studies showing over 80% of AI projects fail to deliver on their promises.
Looking for answers to this paradox, I found an unexpected parallel in cybersecurity. Years ago, security experts realized that even the most sophisticated technical defenses could be undermined by human behavior. Their response was to develop the 'Human Layer' approach—recognizing that understanding people, their behaviors, and social dynamics was as crucial as any technical solution.
This connection sparked my curiosity. Could these lessons from cybersecurity help us build better AI products? I started exploring this idea in my daily work as an AI Product Manager, observing how human factors influence AI adoption and success.
Learning From Patterns
The numbers behind AI project implementations show that success is far rarer than we might expect. Recent research from RAND Corporation shows that over 80% of AI projects fail, twice the rate of traditional IT projects. Looking ahead to 2025, Gartner predicts that at least 30% of generative AI projects will be abandoned after proof of concept, citing poor data quality, inadequate risk controls, escalating costs, and unclear business value as key factors. What I find particularly interesting isn't just these high failure rates, but the reasons behind them.
While data quality emerges as a significant challenge, the deeper patterns point to something more fundamental. The RAND study revealed that many failures stem from misalignment between business leaders and technical teams, and a tendency to focus on cutting-edge technology rather than addressing real-world problems.
These findings resonated with my observations. In my research, I noticed that while post-mortems often focus on technical challenges, the stories behind these failures frequently point to human factors like lack of trust, poor workflow integration, or misalignment with actual needs.
Exploring the Human Layer In AI
Drawing insights from cybersecurity's Human Layer concept, I've been exploring how this approach might transform AI product development. Here's what I'm learning about building AI solutions with humans at the center:
The Journey Beyond Requirements
The first key insight from cybersecurity is that understanding user behavior is as crucial as understanding technical vulnerabilities. In AI product development, this translates to:
Hidden Needs: Looking beyond stated requirements to understand the underlying challenges users face
Context Matters: Considering the full ecosystem where the AI solution will operate
Trust Boundaries: Understanding where and why users draw lines with AI adoption
Building Trust Through Understanding
Just as cybersecurity experts design systems with human behavior in mind, AI products need to be designed around:
Transparency: Make AI decision-making understandable and traceable
Control: Provide appropriate levels of user oversight and intervention
Progressive Integration: Build trust through gradual introduction of AI capabilities
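To make these principles a bit more concrete, here is a minimal sketch of one way they could show up in code: a suggestion that carries its own confidence and rationale, plus a simple threshold that routes uncertain cases to a person. The class, field names, threshold value, and the route_suggestion helper are all illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class AISuggestion:
    """An AI output packaged with the context a user needs to judge it."""
    value: str            # the suggested answer or action
    confidence: float     # model confidence in [0, 1]
    rationale: str        # human-readable explanation of the suggestion
    model_version: str    # traceability: which model produced it

def route_suggestion(suggestion: AISuggestion, threshold: float = 0.85) -> str:
    """Auto-apply only above a confidence threshold; otherwise hand off to a human.
    This is the 'control' and 'progressive integration' idea in its simplest form."""
    if suggestion.confidence >= threshold:
        return "auto-apply"
    return "human-review"

# A low-confidence suggestion is escalated to a person instead of silently applied.
s = AISuggestion(value="Approve refund", confidence=0.62,
                 rationale="Similar to previously approved cases",
                 model_version="refund-classifier-v3")
print(route_suggestion(s))  # -> human-review
```

Raising or lowering the threshold over time is one gradual way to introduce more automation as trust grows.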
Data Strategy as The Foundation
One of the most interesting discoveries in my journey has been how data strategy connects to the Human Layer. Initially, I approached data like many others – focusing primarily on quantity, quality, and technical aspects. However, I'm learning that a human-centered data strategy requires a different perspective.
What I'm Learning About Data
Data Collection with Context
Beyond just collecting data points, I'm discovering the importance of capturing the human context around them
User interviews revealed that understanding why and how data is collected affects trust as much as the data itself
I'm experimenting with collecting "context metadata" – notes about the circumstances, decisions, and human factors that influenced the data
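As a rough illustration of what this "context metadata" could look like, here is a small Python sketch of a data point that carries the human context alongside the raw values. The field names (collected_by, collection_reason, known_caveats) are placeholders I'm experimenting with, not a fixed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataPoint:
    """A collected observation plus the human context around it."""
    value: dict                 # the raw observation itself
    collected_by: str           # role or team, not just a system ID
    collection_reason: str      # why this data was gathered in the first place
    known_caveats: list[str] = field(default_factory=list)  # human factors worth remembering
    collected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

point = DataPoint(
    value={"ticket_id": 1042, "resolution": "refund"},
    collected_by="support agent",
    collection_reason="post-call summary",
    known_caveats=["agent was handling two calls at once"],
)
print(point.known_caveats)
```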
Responsible Data Strategy
Learning from privacy experts about how transparent data practices build user trust
Exploring how giving users more control over their data actually increases the quality of data they're willing to share
Finding that ethical data practices aren't just the right thing to do – they're good for product success
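One way user control could translate into practice is a per-user record of data-sharing preferences that the collection pipeline has to respect before anything is stored. The fields and the filtering rules below are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class DataSharingPreferences:
    """Per-user controls over what is collected and how it may be used."""
    user_id: str
    allow_usage_analytics: bool = False   # opt-in rather than opt-out
    allow_model_training: bool = False
    retention_days: int = 30

def collectable_fields(prefs: DataSharingPreferences, record: dict) -> dict:
    """Drop anything the user has not agreed to share before it is stored."""
    allowed = dict(record)
    if not prefs.allow_usage_analytics:
        allowed.pop("usage_events", None)
    if not prefs.allow_model_training:
        allowed.pop("conversation_text", None)
    return allowed

prefs = DataSharingPreferences(user_id="u-123", allow_usage_analytics=True)
record = {"usage_events": ["opened_report"],
          "conversation_text": "full chat transcript",
          "plan": "pro"}
print(collectable_fields(prefs, record))  # conversation_text is dropped
```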
Current Experiments
Here are some approaches I'm currently testing:
Shadow Mode
Running AI alongside existing processes to understand human decision patterns
Collecting insights about where AI and human judgment differ
Learning which decisions users are comfortable trusting to automation and which still need human oversight
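Here is a minimal sketch of how shadow-mode logging might work: the AI's suggestion is recorded next to the human's actual decision but never affects the live process, so the log can later show where the two diverge. The JSONL file storage and field names are simplifying assumptions for illustration.

```python
import json
from datetime import datetime, timezone

def log_shadow_decision(case_id: str, human_decision: str, ai_decision: str,
                        path: str = "shadow_log.jsonl") -> None:
    """Record the human's real decision and the AI's unused suggestion side by side."""
    record = {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "human_decision": human_decision,
        "ai_decision": ai_decision,
        "agreement": human_decision == ai_decision,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def agreement_rate(path: str = "shadow_log.jsonl") -> float:
    """Summarize how often AI and human judgment coincided."""
    with open(path, encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    return sum(r["agreement"] for r in records) / len(records) if records else 0.0
```

Reviewing the disagreement cases with users has been more informative than the raw agreement rate itself.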
Collaborative Design
Involving users in early stages of AI feature development
Creating feedback loops for continuous improvement
Building trust through transparency and involvement
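To show what a lightweight feedback loop might look like, here is a sketch that packages a user's reaction to an AI output, including an optional correction, so it can feed a review queue or a future evaluation set. The structure and field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FeedbackEvent:
    """A user's reaction to an AI output, with enough context to act on later."""
    output_id: str               # which AI suggestion this refers to
    rating: str                  # e.g. "helpful" or "not helpful"
    correction: Optional[str]    # what the user would have preferred, if given
    comment: Optional[str]       # free-text explanation
    submitted_at: str

def record_feedback(output_id: str, rating: str,
                    correction: Optional[str] = None,
                    comment: Optional[str] = None) -> FeedbackEvent:
    """Package feedback so it can be reviewed and folded back into the product."""
    return FeedbackEvent(
        output_id=output_id,
        rating=rating,
        correction=correction,
        comment=comment,
        submitted_at=datetime.now(timezone.utc).isoformat(),
    )

event = record_feedback("sugg-42", "not helpful",
                        correction="Escalate to billing team instead")
```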
Looking Ahead
As the AI field continues to evolve, the parallels with cybersecurity's Human Layer become increasingly relevant. Just as cybersecurity moved beyond purely technical solutions, AI product development needs to embrace a more holistic, human-centered approach.
Questions I'm Exploring
I'd love to hear your thoughts on:
How do you measure the "human success" of an AI solution?
What approaches have you found effective in building trust in AI systems?
How do you balance automation with human agency?
How do you approach data collection in a way that respects and empowers users?
What's your experience with making data governance more human-centric?