Building AI agents in practice: a learning sprint with Alma Media
At Splended, they often hear the same question from organizations: "We understand that AI agents are a hot topic, but how do we actually build them?"
Together with Alma Media, they decided to answer this question as practically as possible. Instead of lectures or one-off workshops, they designed a Learning Sprint: eight two-hour sessions where multi-disciplinary teams built their own AI agents in Microsoft Copilot Studio. No watching demos. No following along. Just real, hands-on agent building.
Their coach, Santeri Kallio, met with the participants twice a week for four weeks. Each session began with a 20–30 minute teaching block on a new concept, followed by hands-on work where participants built and tested things independently. HR, finance, legal, and communications teams worked side-by-side, which turned out to be one of the best aspects of the training. Participants saw how the same technology could solve very different problems depending on the context.
From Ideas to Functional Agents
They started by asking participants what kind of agent they would most want for their own work. The answer was clear: something that finds and surfaces the right information from internal systems. Most of them were losing time every day just searching for the right document or answer buried somewhere deep within the organization.
Then the coaches asked participants to build one right away, in the very first session. As one participant noted:
“Surprise: the first experiment helped us figure out how the agent should be built.”
Within an hour, people who had never used the platform before were getting useful answers from their agents.
By the second sprint session, there were over 30 concrete agent ideas on the table – all surprisingly practical. No one was proposing sci-fi scenarios. People wanted agents that could handle the things they struggled with weekly: sorting shared inboxes, answering recurring questions about company policies, preparing for meetings, and consolidating information from multiple systems. One team proposed an agent to help managers navigate difficult feedback and performance discussions. The ideas sprang directly from daily frustrations.
Learning by Building
Things really clicked during the prompt engineering sessions. Splended’s coaches gave the participants two intentionally bad agent instructions and asked them to fix them. The feedback was unanimous: every line needed to be more specific. Participants started asking questions they hadn’t considered before. Who exactly should get the meeting summary—everyone in the company or just the invitees? What does a “professional tone” mean in this context? What should the agent do when it doesn’t know the answer?
One group developed a prompt structure that the others quickly adopted: goal, instructions, context, and stopping conditions. This framework changed the nature of the entire exercise. It was no longer a question of whether the AI could do something, but whether we could describe what we wanted with enough precision.
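The four-part framework can be sketched as a simple template. The following is an illustrative example only, assuming the framework described above; the function name and the sample content are hypothetical, not taken from the sprint or from Copilot Studio itself.

```python
# Hypothetical sketch of the prompt framework the group converged on:
# goal, instructions, context, and stopping conditions.
# The example content below is illustrative, not from the sprint.

def build_agent_prompt(goal: str, instructions: list[str],
                       context: str, stopping_conditions: list[str]) -> str:
    """Assemble the four sections into one instruction block."""
    lines = [f"## Goal\n{goal}", "## Instructions"]
    lines += [f"- {step}" for step in instructions]
    lines += [f"## Context\n{context}", "## Stopping conditions"]
    lines += [f"- {cond}" for cond in stopping_conditions]
    return "\n".join(lines)

prompt = build_agent_prompt(
    goal="Summarize today's meeting for the invitees only.",
    instructions=["Use a neutral, factual tone.",
                  "List decisions first, then open questions."],
    context="Summaries are emailed to meeting invitees, not the whole company.",
    stopping_conditions=["If the transcript is missing, ask for it "
                         "instead of guessing."],
)
print(prompt)
```

Writing the sections out separately makes the gaps visible: an empty "Stopping conditions" list is an immediate signal that no one has decided what the agent should do when it cannot answer.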
Later, when participants connected their agents to real company data sources, they encountered the kind of friction you can only discover by doing. Some sources worked directly. Others required workarounds due to login walls or processing delays.
Participants tested the agents with their own documents and workflows, and that’s what made the learning stick.
What We Learned
By the sixth sprint session, when the coaches introduced more advanced concepts like structured conversation flows, the group had split into two camps, which the coaches were happy to see. Some participants were already thinking about linking multiple agents into a single workflow. As one participant wrote,
“I’m definitely going to use these because the more I can automate the agents’ work, the more I see it really benefiting my job.”
Those who were ready to move fast were given the space to do so. Those who needed more time weren’t rushed. The sprint-based structure allowed for both.
A Strong Foundation for AI Adoption
One participant summed up the experience well:
“This is one of the best ways to learn about AI: hands-on, concrete, and directly tied to your own work. Santeri is a true professional who managed to support participants at very different skill levels and kept everyone engaged in the development. This kind of approach moves us in the right direction with Alma’s AI strategy as we implement agent-based workflows across the organization.”
The secret to the collaboration’s success was its rhythm. Two sessions a week maintained momentum without overwhelming the participants. The teach-and-build model meant no one could just passively sit and listen to a presentation. And the arc of eight sessions provided enough time to move from “what is an agent?” to multi-step workflows using real data sources and autonomous action.
In the end, participants had functional agents tied to their own processes. They also gained a clear understanding of where AI agents are helpful and where they are not—something that’s hard to grasp from a presentation or a demo.
In short, this wasn’t about showcasing AI. It was about building the capability to use it in everyday work.