
Human Factors in Cybersecurity & SHELL-Privacy™
I got an Amazon notification this week.
A new book. Published by Packt — one of the world's leading technical publishers. Title: Human Factors in Cybersecurity: A field-tested framework for designing resilient, human-centered cybersecurity systems. Authors: Nikki Robinson, DSc, PhD, and Calvin Nobles, PhD. Foreword by the Field CISO and VP of AI Security at the SANS Institute.
I stared at the cover for a few seconds.
Not out of envy. Out of recognition.
Let me tell you a story.
In 2023, I was a lawyer with 14 years of experience in compliance and data protection. I had served as DPO across medium and large organizations in healthcare, government, manufacturing, real estate, startups, retail, and hospitality. I had witnessed firsthand what happens after a privacy incident: the crisis meeting, the regulatory report, the dismissal of the analyst who clicked the wrong link.
And every time I walked out of those meetings, the same question stayed with me: why do we keep punishing people instead of fixing systems?
The answer came from an unexpected place: aviation.
The industry that cut fatal accidents to fewer than one per million flights didn't do it by training pilots to be perfect. It did it by redesigning systems around human limitations. It created Just Culture, a fair culture that distinguishes human error from negligence, and negligence from sabotage. It created the SHELL model, a systemic analysis tool that places the human being (Liveware) at the center, surrounded by Software, Hardware, the Environment, and other people.
I adapted that model for data privacy. SHELL-Privacy™ was born.
Then came MEDA-Privacy™: a four-question decision tree that classifies human behavior in privacy incidents fairly and proportionately.
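The article doesn't reproduce the four MEDA-Privacy™ questions, so the sketch below is purely illustrative, not the published framework. It shows the general shape of a Just Culture-style decision tree: four yes/no questions (intent to harm, knowing policy violation, the substitution test, contributing system factors) walked in order, each answer routing the incident to a proportionate classification. The function name and question wording are my assumptions.

```python
def classify_behavior(intent_to_harm: bool,
                      knowing_policy_violation: bool,
                      peer_would_act_same: bool,
                      system_factors_present: bool) -> str:
    """Illustrative four-question walk, modeled on Just Culture tests.

    These are NOT the actual MEDA-Privacy(TM) questions; they are
    hypothetical stand-ins to show the decision-tree structure.
    """
    if intent_to_harm:                  # Q1: was harm intended?
        return "sabotage"
    if knowing_policy_violation:        # Q2: conscious rule-breaking?
        return "negligence"
    if peer_would_act_same:             # Q3: substitution test passed -> fix the system
        return "system-induced error"
    if system_factors_present:          # Q4: did design contribute?
        return "human error (coach and redesign)"
    return "human error (coach)"
```

The point of the structure is proportionality: only the first branch reaches the harshest label, and an ordinary slip that any peer could have made is routed toward system redesign rather than individual blame.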
I published both frameworks in 4 books, in 4 languages. Presented at conferences in Brazil and Canada. Built a website, a YouTube channel, a community.
And now, in March 2026, Packt publishes an entire book on the same principle.
"Cybersecurity fails not because technology is weak but because people make mistakes."
That's the first sentence of Chapter 1. It could be the first sentence of any of my books.
Robinson and Nobles do solid, necessary work. They apply Human Factors Engineering to security operations — SOC, incident response, vulnerability management. It's a real contribution to the field.
But there is a fundamental difference between what they do and what SHELL-Privacy™ proposes.
They address operational cybersecurity. I address data privacy and GRC.
They offer principles and recommendations. SHELL-Privacy™ offers a structured analytical instrument applicable to real incidents — with named interfaces, structured questions, and a fair classification of human behavior.
They write for security engineers. I write for DPOs, legal professionals, and compliance managers who need not only to understand what happened, but to explain to the regulator why it happened — and what was done to prevent it from happening again.
The publication of this book tells me one important thing:
The market is waking up to what I already knew in 2023.
The human factor is not the problem. It is the starting point for the solution.
If you work in data privacy, information protection, or risk management and haven't yet discovered SHELL-Privacy™ and MEDA-Privacy™ — this is the moment.
🌐 www.shellprivacy.com 📺 youtube.com/@SHELLPrivacy
Be inspired and fly.
Read the original on LinkedIn
About how this content was produced
It would be inconsistent to advocate for the ethical and responsible use of artificial intelligence without practicing it. So I am transparent: this article was developed with the support of Manus as an AI assistant. The entire process was led by me: I chose the topic, defined the angle, identified the sources to be consulted, reviewed each version, and rewrote the sections that did not accurately reflect what I intended to communicate. The AI handled the research, organization, and drafting. I handled what no model can do on its own: assessing what is technically accurate, what is relevant to the reader, and what is faithful to my professional experience in the field. That is how I understand the role of AI: not as a replacement for the expert, but as an amplifier of what the expert already knows.
Anderson Andrade · DPO · Author · Founder of SHELL-Privacy™ & MEDA-Privacy™ · www.shellprivacy.com