    Article

    Why AI & Data Literacy Is a Leadership Priority – Not a Tech Problem

    Kira Sjöberg

    GOODIN

    A Strategic Guide for HR and L&D Professionals Navigating the AI Shift.

    The Challenge Isn't the Technology

    Most organisations have already invested heavily in AI and data tools. Dashboards are live. Maybe some predictive models are running. Large Language Model (LLM) licences have been bought, or will be bought in the near future, and in some organisations they are already in daily use. Yet, in many of these same organisations, adoption is uneven, trust is low, and tools are either quietly underused or fuelling a risky new trend: Shadow AI.

    The natural instinct is to blame the technology. However, in most cases, the tech is functioning exactly as designed.

    The real obstacle is that the majority of the workforce – those in commercial, operational, and managerial roles – has never been given a practical “way into” AI. Their expertise lies in judgement, context, and domain knowledge; they weren’t hired to be engineers. 

    Expecting non-tech roles to simply “absorb” AI fluency over time is like asking an engineer to move to the legal team and expecting them to pick up contract law through osmosis.

    Expecting intuition to replace a strategy is not a plan. This is the gap that Learning & Development (L&D) is uniquely positioned to close.


    What "Working Literacy" Actually Means

    AI literacy is not the same as AI expertise. The goal is not to turn your workforce into data scientists; it is to develop working literacy – the confident ability to engage with AI systems during a normal workday.

    A person with working literacy can:

    • Apply Judgement: Use AI outputs without over-trusting them, defaulting to professional expertise rather than “the machine’s answer.”
    • Interrogate Data: Read dashboards and reports without being an analyst – understanding what a figure is telling them, and more importantly, what it isn’t.
    • Flag Inconsistencies: Feel empowered to say when something “feels off,” especially when working with probabilistic models (i.e. LLMs) where the “right” answer isn’t always guaranteed.
    • Ask Better Questions: Know how to prompt both the AI and the technical experts in the room.

    This is not a technical skillset. It is a critical thinking and professional responsibility skillset. It belongs directly in the L&D domain.

    The Two Most Common Entry Points for AI (& Data) – and Why They Fail

    AI typically enters organisations through one of two channels, both of which have significant weaknesses:

    • The Technical Route: Led by IT or Data teams, this approach is rigorous but often inaccessible. It is explained through models, architectures, and performance metrics. This creates an “Expert Bias” where the jargon acts as a barrier to the very people who need and use the tools most.
    • The Hype Route: Led by senior leadership or external consultants, this generates initial excitement but provides no practical foundation. People are told AI will “change everything” without being shown what that means for their specific role or daily decisions, so they end up using a very powerful system as if it were a search engine.

    What is missing is a shared middle ground: a non-technical, role-relevant foundation that gives people enough structure to engage meaningfully without requiring them to become specialists.


    Customer Case: How Broman Group Built the Middle Ground

    Broman Group offers a clear example of people-first AI adoption. Rather than launching a “tech project,” they approached AI as a workforce development challenge.

    Building a Shared Baseline: Broman partnered with GOODIN Academy to deliver structured, systematic, longer-term AI literacy training. Crucially, they did not segment by technical ability. They mixed roles and seniority levels in the same cohorts. This made AI a shared, discussable topic rather than a “tech secret.”

    Two Practical Focus Areas:

    • Responsible Output Management: Employees learned to use AI-generated content as a starting point, not a final answer.
    • Data Engagement: Staff built the ability to probe dashboards – understanding the assumptions behind the numbers and knowing when to ask deeper questions.

    The Result: The threshold for experimentation dropped. Internal conversations shifted from “Are we allowed to use this?” to “Where does this genuinely help, where should we be careful, and how should we approach AI as a whole?”

    The Organisational Layer: Where HR Has Real Influence

    Technology does not embed itself. The human layer – structures, habits, and accountability – determines whether AI becomes a tool or a toy.

    One of the most effective interventions isn’t a training programme; it’s an organisational agreement. For example: Requiring a human-in-the-loop review for all AI-generated external content. This single policy establishes accountability, improves quality, and signals that AI is a support tool, not a replacement for professional judgement.

    Some questions for your Leadership Team:

    • Do your people understand where generative AI fits into the “big picture” of AI, and how to apply that to your business?
    • Do employees know what responsible AI use looks like in their specific role?
    • Are there clear norms around when AI outputs require human review?
    • Is there a shared vocabulary for discussing AI quality and risk?



    A Framework for Action

    • Start with a Baseline: Don’t assume everyone is at the same level. Use an outsider’s lens to see where the “expert bias” is hiding.
    • Design for Mixed Cohorts: Mixing “beginners” and “advanced” users builds a shared language and a more open AI culture.
    • Anchor in Context: Abstract training rarely sticks. Connect literacy to the actual decisions people make in their roles.
    • Establish Norms Alongside Training: Build “pilot checklists” or simple agreements to support new habits.
    • View Literacy as a Capability, Not a Destination: AI is moving fast. Ongoing investment is required to ensure human expertise and AI capability grow together.
    • Distinguish Between “Prompting” and “Literacy”: AI is vast and extends far beyond generative AI. While many equate AI with chatbots (e.g. ChatGPT, Gemini, Claude), true literacy involves understanding the broader landscape of data behind AI and automation. A baseline understanding of usable AI eventually sparks curiosity for the full picture, enabling deeper adoption and more sophisticated technical discussions across the organisation.

    Is your organisation ready to bridge the gap? GOODIN Academy works with organisations to build practical, people-first AI and data literacy in an AI Act-compliant way. Learn more!


    Written by

    About the Author

    Kira Sjöberg

    GOODIN

    Kira Sjöberg is a business designer and co-founder of GOODIN, with over two decades of experience shaping how organisations adapt to change. Her work focuses on the human layer of AI – the critical space where technology and people meet. Kira is recognised for translating the possibilities of data and generative AI into strategies that enable trust, literacy, and meaningful cooperation between humans and intelligent systems. By blending organisational psychology with business design, she helps people move beyond efficiency gains to build cultures that are adaptive, responsible, and future-ready. Her perspective reframes AI not just as a tool, but as a partner in shaping more resilient and human-centric organisations.