FABR Framework: A Guide to Leading AI Adoption with Safety and Value

#BA4AI – Discover how the FABR Framework (Five AI Business Risks) helps business analysts lead AI projects safely, ethically, and with a focus on value.

Artificial intelligence is already profoundly transforming the way organizations operate, make decisions, and deliver value. In this new landscape, business analysts cannot limit themselves to the role of AI users who simply write requirements more efficiently; they must become strategic leaders in defining how AI is used to ensure positive outcomes for both business and society.

It was with this purpose that we launched the BA4AI (Business Analysis for Artificial Intelligence) movement, explored in an article published in The Corner by the IIBA in collaboration with Michael Augello and introduced by Delvin Fletcher. This movement proposes a shift in focus: instead of reacting to technology (AI4BA), analysts must lead its adoption, connecting AI capabilities to business outcomes (BA4AI).

To support this leadership role, I am sharing here the FABR Framework (Five AI Business Risks Framework), a tool that organizes the five main categories of risk that need to be anticipated and managed in any AI initiative. It is a practical guide to structuring analysis, informing decisions, and designing solutions that create value responsibly and sustainably.

FABR Framework – Five AI Business Risks

🧠 Two Ways of Thinking

To understand these risks, we must first understand how AI “thinks” — and, interestingly, this logic is very similar to how we humans think.

Daniel Kahneman, the psychologist awarded the Nobel Prize in Economic Sciences, described in “Thinking, Fast and Slow” two complementary modes of thinking:

  • System 1 – fast, automatic, intuitive (pattern-based reasoning), guided by experience and patterns.
  • System 2 – slow, deliberate, logical (rule-based reasoning), guided by explicit rules and analytical reasoning.

We naturally alternate between these two systems. Sometimes my wife looks at my son and says, “You’re not feeling well.” She doesn’t run any clinical tests — she simply notices subtle signs such as posture, tone of voice, or appearance and recognizes a pattern. That’s intuition — System 1.
If I want to know if my son has a fever, I use logic: IF the temperature is above 37.5 °C, THEN he has a fever — System 2.

The two ways of thinking that explain the FABR Framework

AI systems operate in much the same way:

  • Machine learning models behave like System 1: they identify complex patterns in large volumes of data and make decisions without being able to explain them clearly.
  • Deterministic algorithms behave like System 2: they follow clear and predictable rules.

Understanding this duality is essential to understanding where AI risks emerge and how analysts can anticipate them. Each of the five risks in the FABR Framework is directly linked to how AI “thinks” and makes decisions.
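To make this duality concrete, here is a minimal sketch in Python. The fever threshold, the features, and the toy data are my own illustration, not part of the framework; the point is the contrast between a rule we can explain line by line and a model whose “reasoning” stays implicit.

```python
# A deterministic rule (System 2) next to a pattern-based model (System 1).
# Threshold, features, and data are illustrative only.
from sklearn.linear_model import LogisticRegression

FEVER_THRESHOLD_C = 37.5  # explicit, auditable rule

def has_fever(temperature_c: float) -> bool:
    """System 2: a clear IF/THEN rule anyone can inspect and explain."""
    return temperature_c > FEVER_THRESHOLD_C

# System 1: a model learns a pattern from examples instead of following a rule.
# Features: [temperature_c, hours_of_sleep]; label: 1 = "unwell", 0 = "well".
X = [[36.8, 8], [38.2, 5], [37.0, 7], [39.1, 4], [36.5, 9], [38.6, 6]]
y = [0, 1, 0, 1, 0, 1]
model = LogisticRegression().fit(X, y)

print(has_fever(38.0))                    # True – and we know exactly why
print(model.predict_proba([[38.0, 6]]))   # a probability – the "why" is implicit
```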

🔐 Risk 1: Privacy Violation – When the Right Information Falls into the Wrong Hands

AI models need large volumes of data to learn and deliver value. But not all data can — or should — be used.
Imagine your credit card spending history or your medical records being used to train an AI model, which then ends up revealing information about you that should have remained private. That’s a privacy violation.

The analyst’s role is to ensure that training data respects laws and consent policies, and that there is clear governance over data collection, storage, and use. The risk is not only about leaks but also about the misuse of legitimate data in inappropriate contexts. For example, data collected for a laundry detergent marketing campaign should not be used to evaluate your mortgage application.
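A purpose-limitation check of this kind can even be expressed in code. The sketch below is purely illustrative (dataset names, purposes, and the policy structure are hypothetical); it shows the idea of validating every use of data against the purpose it was collected for.

```python
# Hedged sketch of a purpose-limitation check. Names and purposes are hypothetical.
CONSENTED_PURPOSES = {
    "detergent_campaign_responses": {"marketing"},
    "customer_medical_records": {"care_delivery"},
}

def can_use(dataset: str, purpose: str) -> bool:
    """Allow a dataset to be used only for purposes covered by consent."""
    return purpose in CONSENTED_PURPOSES.get(dataset, set())

print(can_use("detergent_campaign_responses", "marketing"))       # True
print(can_use("detergent_campaign_responses", "credit_scoring"))  # False – out of scope
```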

⚖️ Risk 2: Bias & Discrimination – When Historical Data No Longer Represent Reality

Human intuition doesn’t always apply to new contexts — and the same is true for AI.
Think of someone living in the Amazon region who says, “It’s gonna rain.” Why? Because it rains almost every afternoon. But if that same person is in Colorado, their experience-based prediction might be completely wrong.

If the training data for an AI system isn’t representative of reality, the model will “learn” biased patterns, and its responses will only be accurate in contexts similar to the original data. This leads to discriminatory, unfair, and contextually inappropriate decisions — for example, a recruitment algorithm might reject female candidates because the company’s historical data reflects a past where most hires were men.

The analyst must ensure that training data is diverse and representative. Even with deterministic algorithms, continuous auditing and careful evaluation of how decisions affect different stakeholder groups are crucial.
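One simple form such an audit can take is a representativeness check: compare how often each group appears in the training data against a reference population. The groups, numbers, and tolerance below are illustrative.

```python
# Hedged sketch: flag groups whose share in the training data drifts
# from a reference share. All values are illustrative.
from collections import Counter

def representation_gaps(training_groups, reference_shares, tolerance=0.10):
    """Return groups whose observed share deviates from the expected share."""
    counts = Counter(training_groups)
    total = len(training_groups)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 2)}
    return gaps

training = ["male"] * 80 + ["female"] * 20   # historical hiring data
reference = {"male": 0.5, "female": 0.5}     # target population
print(representation_gaps(training, reference))
# flags both groups: women are under-represented, men over-represented
```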

🎯 Risk 3: Decision Errors – When the Machine Is Confidently Wrong

Even with high-quality data, models can make incorrect decisions — and often with great confidence.
AI might deny a loan to a creditworthy customer, classify harmless behavior as suspicious, or recommend an incorrect action. This happens because statistical patterns are not absolute truths. Just like intuition, they cannot be blindly trusted.

Distinguishing assertiveness from accuracy is essential: AI can state something confidently and convincingly even when it is wrong. Analysts must design validation mechanisms, include human reviews at critical points, and continuously monitor model performance over time so that errors are detected and corrected as they appear.
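A basic monitoring routine can surface exactly these “assertive but wrong” cases: predictions made with high confidence that later turned out to be incorrect. The data and the confidence threshold below are illustrative.

```python
# Hedged sketch: find high-confidence predictions that were wrong – the cases
# that most deserve human review. Data and threshold are illustrative.
def confidently_wrong(predictions, threshold=0.9):
    """Return cases where confidence was high but the prediction was wrong."""
    return [
        p for p in predictions
        if p["confidence"] >= threshold and p["predicted"] != p["actual"]
    ]

batch = [
    {"id": 1, "predicted": "deny_loan",  "actual": "approve", "confidence": 0.97},
    {"id": 2, "predicted": "approve",    "actual": "approve", "confidence": 0.92},
    {"id": 3, "predicted": "suspicious", "actual": "normal",  "confidence": 0.55},
]
for case in confidently_wrong(batch):
    print(f"Review case {case['id']}: model was {case['confidence']:.0%} sure and wrong")
```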

🤖 Risk 4: Over-Reliance on AI – When Blind Trust Leads to Complacency

Excessive trust in automated systems creates a dangerous illusion of infallibility.
Because AI delivers confident, well-articulated answers, users tend to accept them without questioning — even when they are wrong. This leads to human passivity and often to decisions made without supervision. It’s like when autocorrect automatically suggests a word that seems right but changes the meaning of your sentence — if you don’t double-check, you might end up sending the wrong message.

The analyst’s role is to clearly define which decisions can be automated and where human intervention is necessary. Critical processes may require mandatory human validation, regular reviews, and hybrid approaches that combine statistical models with deterministic algorithms.
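Such a policy can be written down explicitly. The sketch below is one possible shape for it; the criticality labels and the confidence cutoff are illustrative policy choices, not part of the framework.

```python
# Hedged sketch of a human-in-the-loop routing policy. Labels and cutoff are illustrative.
def route_decision(decision_type: str, confidence: float) -> str:
    critical = {"loan_approval", "medical_triage", "fraud_block"}
    if decision_type in critical:
        return "human_review"        # mandatory human validation
    if confidence < 0.80:
        return "human_review"        # model is not confident enough
    return "automated"

print(route_decision("loan_approval", 0.99))   # human_review – critical process
print(route_decision("email_routing", 0.95))   # automated
print(route_decision("email_routing", 0.60))   # human_review – low confidence
```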

📊 Risk 5: Explainability & Accountability – When We Don’t Know “Why”

As mentioned earlier, many AI models function “intuitively” — they recognize patterns and reach conclusions without necessarily following explicit logic. Just as a human might “feel” that something is wrong without knowing exactly why, AI can make decisions that seem correct but are difficult to explain.
Imagine an AI model making a wrong decision and being unable to explain why. Who is responsible for the consequences? The company that built the AI? The user who ran it? The training-data provider?

The lack of explainability undermines trust and creates legal and ethical risks. Analysts must demand explainable models, document decision criteria, and produce reports that enable audits. In many cases, hybrid models combining AI’s intuitive capabilities with explicit logical rules are the safest way to ensure fairness and transparency.
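A hybrid, auditable flow might look like the sketch below: explicit rules are applied first (easy to explain), the statistical model weighs in afterwards, and every step is recorded for later audit. The rules, fields, and thresholds are illustrative.

```python
# Hedged sketch of a hybrid, auditable decision flow. Rules and fields are illustrative.
import json
from datetime import datetime, timezone

def assess_loan(applicant: dict, model_score: float) -> dict:
    record = {"applicant_id": applicant["id"],
              "timestamp": datetime.now(timezone.utc).isoformat(),
              "reasons": []}
    if applicant["income"] <= 0:
        record["reasons"].append("rule: no verifiable income")
        record["decision"] = "deny"
    elif model_score >= 0.7:
        record["reasons"].append(f"model: score {model_score:.2f} >= 0.70")
        record["decision"] = "approve"
    else:
        record["reasons"].append(f"model: score {model_score:.2f} < 0.70")
        record["decision"] = "refer_to_analyst"
    print(json.dumps(record))   # audit trail: what was decided and why
    return record

assess_loan({"id": "A-42", "income": 4200}, model_score=0.65)
```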

Business Analysis Professionals leading AI initiatives using the FABR Framework

🚀 BA4AI – Business Analysts Leading in the AI Era

The FABR Framework (Five AI Business Risks Framework) was created to help business analysts lead AI adoption safely and strategically within organizations.
By understanding how AI “thinks,” anticipating risks, and designing solutions that consciously combine logic and intuition, analysts can ensure that technology truly delivers value to stakeholders — not just automated results.

💡 Join the BA4AI movement: understand how AI works, anticipate the risks, and design solutions that are explainable, ethical, fair, and effective. This is the contribution that turns analysts into true protagonists in the age of artificial intelligence.

📚 References