Aker AI & Robotics External Learning Paths

From Foundation to Mastery:
A Structured Path to AI Fluency

This curriculum provides the technical depth required to move from basic awareness to professional fluency. Click a module below to jump directly to the training sections.

Module 01 ⏱ 75 Minutes Total

Module 1: How LLMs Work —
The Machine Under the Hood

Goal: After this module, the viewer should have an accurate mental model of what a large language model does. They should stop anthropomorphising it and start reasoning about its behaviour—understanding why it’s good at language but bad at maths.

1.1. What is a language model?

⏱ 27 MIN

Objectives: Understand the system that predicts the most likely next word given everything that came before it.

  • Next-token prediction: Generates text one token at a time, predicting what should come next.
  • The “sounds right” framing: The model predicts what sounds right, not what is right—explaining why it can invent facts.
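The next-token loop above can be sketched in a few lines. This is a toy illustration, not a real model: the hand-written probability table stands in for the billions of learned parameters in an actual LLM.

```python
# Toy "language model": maps a context (tuple of tokens) to
# next-token probabilities. A real LLM learns these from data.
TOY_MODEL = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"down": 1.0},
}

def generate(prompt, max_tokens=5):
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = TOY_MODEL.get(tuple(tokens))
        if probs is None:
            break
        # Pick the most likely next token (greedy decoding).
        tokens.append(max(probs, key=probs.get))
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```

Note that nothing here checks whether "the cat sat down" is true; the model only picks what is statistically likely, which is exactly the "sounds right, not is right" behaviour described above.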

But what is a GPT? Visual intro to Transformers

Covers tokenisation, embeddings, and next-token prediction.

1.2. Tokens, embeddings, and semantic space

⏱ 26 MIN

Objectives: Deep dive into how text becomes vectors, and into the attention mechanism: how the model decides which tokens are relevant to each other.

  • Tokenisation: Text split into sub-word chunks called tokens (often word fragments, whole words, or punctuation).
  • Embeddings: Tokens represented as vectors encoding meaning in high-dimensional space.
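"Meaning as distance in vector space" can be made concrete with cosine similarity. The three-dimensional vectors below are invented for illustration; real embeddings have hundreds or thousands of dimensions and are learned during training.

```python
import math

# Hypothetical embeddings, made up for this sketch.
EMBEDDINGS = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 = same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Semantically related words sit closer together in the space.
print(cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["queen"]))  # close to 1
print(cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["apple"]))  # much lower
```

This geometric closeness is what lets the model treat "king" and "queen" as related without any dictionary of definitions.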

Attention in Transformers, step-by-step

Visualises how context matters for disambiguation.

1.3. Transformers & Training

⏱ 22 MIN

Objectives: Understand where the model’s knowledge comes from and why it has a cutoff date.

  • The Context Window: The model's fixed-size working memory.
  • Pre-training: How the model acquires language, facts, and reasoning patterns.

How might LLMs store facts?

Examines how factual knowledge is encoded approximately in weights.

1.4. Deep Dive for Motivated Employees

The full 3Blue1Brown deep learning playlist, including earlier chapters on neural network fundamentals, provides additional depth for those who want to go further.

EXPLORE THE FULL PLAYLIST →
Link: 3blue1brown.com/topics/neural-networks

Module 1 Takeaways

LLMs predict what sounds right, explaining both fluency and errors.

They use pattern-matching on meaning, not human understanding.

Knowledge is frozen at a cutoff and doesn't learn from you.

Module 02 ⏱ 45-50 Minutes Total

Module 2: From LLM to System —
Building the Enterprise Intelligence Architecture

Goal: Transition from understanding the "raw" model to understanding the engineered systems used in industry. You will learn how models are connected to data, tools, and multiple inputs to create effective business solutions.

Chain-of-thought reasoning

⏱ 3 MIN READ

Think of chain-of-thought as "asking the model to show its working, like a maths exam."

Raw LLMs often make mistakes when jumping directly to an answer for complex problems. By prompting a model to "think step-by-step," we force it to generate intermediate reasoning tokens. Because each token the model generates becomes part of its own context for the next token, this sequential processing significantly improves performance on logic and multi-stage planning tasks.
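The feedback loop described above can be sketched directly: every token the model emits is appended to the context it reads next. The `make_toy_model` function below is a scripted stand-in for a real LLM, invented purely to make the loop runnable.

```python
def make_toy_model():
    # Hypothetical stand-in for an LLM: emits scripted reasoning tokens.
    script = iter(["17 x 4 = 68. ", "68 + 5 = 73. ", "Answer: 73.", "<end>"])
    return lambda context: next(script)

def generate_with_reasoning(model, question, max_tokens=20):
    context = question + "\nLet's think step by step.\n"
    for _ in range(max_tokens):
        token = model(context)
        if token == "<end>":
            break
        # The model's own reasoning becomes part of its input, so later
        # tokens (the final answer) can condition on the earlier steps.
        context += token
    return context

print(generate_with_reasoning(make_toy_model(), "What is 17 x 4 + 5?"))
```

Without the intermediate steps, the model would have to produce "73" in a single jump; with them, the answer token is conditioned on the worked calculation already sitting in the context.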

Retrieval-Augmented Generation (RAG)

⏱ 8 MIN READ

How AI products "know" your company documents by connecting the model to a live knowledge base.

  • The Courtroom Analogy: The LLM is the judge; RAG is like having law clerks who fetch relevant case law and precedents before a ruling is made.
  • Retrieval vs Generation: Separating the act of finding information from the act of writing the response.
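The retrieve-then-generate split can be sketched in miniature. This is an assumption-laden toy: word-overlap scoring stands in for real vector search, and the sample documents are invented.

```python
# Hypothetical company documents for illustration.
DOCUMENTS = [
    "Expense reports are due on the 5th of each month.",
    "The Oslo office is closed on public holidays.",
    "VPN access requires two-factor authentication.",
]

def retrieve(query, k=1):
    # Retrieval step: rank documents by word overlap with the query.
    # Production systems use embedding similarity instead.
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(DOCUMENTS, key=score, reverse=True)[:k]

def build_prompt(query):
    # Generation step: the fetched context goes into the prompt, so the
    # model answers from your documents, not only its training data.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("When are expense reports due?"))
```

In the courtroom analogy: `retrieve` is the clerk fetching case law, and the prompt handed to the model is the judge's briefing pack.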

NVIDIA: What Is Retrieval-Augmented Generation?

The industry-standard explainer for how RAG provides context without retraining the model.

READ NVIDIA BLOG →

Required Industry Reading

Anthropic: Building Effective Agents

Reading Instruction: Read the first half of this post, up to and including the section on "Agents." The diagrams are especially useful. The implementation details in the second half can be skipped.

VIEW AGENT ARCHITECTURES →

AI Agents and tool use

⏱ 20 MIN READ (PARTIAL)

Moving from static prompts to autonomous decision-making systems that can use external tools.

  • Workflows vs Agents: Understanding the spectrum from predefined orchestration to model-driven decision-making.
  • The Augmented LLM: Combining the model with retrieval, tools, and memory.
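The agent loop described above (model decides, tool executes, result feeds back) can be sketched as follows. Everything here is hypothetical: `scripted_llm` is a stub playing the model's role, and `search_docs` is an invented example tool.

```python
import json

def search_docs(query):
    # Example tool the agent can call; invented for this sketch.
    return "Q3 maintenance budget: 1.2 MNOK"

TOOLS = {"search_docs": search_docs}

def scripted_llm(history):
    # Stand-in for a real model: first requests a tool, then answers.
    if not any("TOOL RESULT" in turn for turn in history):
        return json.dumps({"action": "search_docs", "input": "Q3 budget"})
    return json.dumps({"action": "final_answer",
                       "input": "The Q3 maintenance budget is 1.2 MNOK."})

def run_agent(question, model, max_steps=5):
    history = [question]
    for _ in range(max_steps):
        step = json.loads(model(history))
        if step["action"] == "final_answer":
            return step["input"]
        # Model-driven decision: the model, not a fixed workflow,
        # chose which tool to run and with what input.
        result = TOOLS[step["action"]](step["input"])
        history.append(f"TOOL RESULT: {result}")
    return None

print(run_agent("What is the Q3 maintenance budget?", scripted_llm))
```

The contrast with a workflow is in `run_agent`: the control flow is not predefined; each step is whatever the model asks for next, up to a safety cap on iterations.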

Multimodal models

⏱ 10 MIN READ

Understanding AI that processes multiple inputs—such as sight and sound—simultaneously.

  • Takers, Shapers, Makers: Strategic categories for how organisations adopt multimodal technology.
  • Industrial Impact: How vision-and-language models apply to maintenance and fraud detection.

McKinsey: What is multimodal AI?

A strategic overview using sensory analogies to explain the next leap in AI capability.

READ MCKINSEY GUIDE →

Module 2 Takeaways

RAG provides a library of facts, working around the knowledge cutoff and grounding answers in your organisation's documents.

Agents move AI from "talking" to "doing" by executing multi-step tasks using external tools.

Multimodality allows AI to process images, audio, and sensor data like a human would.

Module 03 ⏱ 45-55 Minutes Total

Module 3: Applying AI —
Judgment, Use Cases, and Safety

Goal: Develop the professional judgment required to use AI effectively. You will learn to identify high-value use cases, recognise the risks of "advanced autocomplete," and understand the regulatory landscape governing our work.

Practical AI judgment

⏱ 10 MIN READ

Success with AI is not about technical skill—it is about knowing when to trust the model and when to step in as the human-in-the-loop.

  • High-Value Tasks: Brainstorming, summarisation, and format conversion.
  • Risk Zones: Tasks where accuracy is critical, where you cannot spot errors, or where the "effort" itself is the point of the work.

Ethan Mollick: 15 Times to Use AI, and 5 Not To

A definitive framework for deciding which tasks to delegate and which to keep manual.

VIEW THE JUDGMENT GUIDE →

Safety Awareness

Addressing Hallucinations and Bias

Understand why AI models function like "advanced autocomplete" and learn the five practical mitigation strategies to ensure accuracy in your output.

READ MIT SLOAN ARTICLE →

AI safety and limitations

⏱ 8 MIN READ

Hallucinations aren't bugs; they are a fundamental part of how language models generate text.

  • Verification: Why you must check citations and data points against primary sources.
  • Bias: Recognising that models inherit the perspectives and flaws of their training data.

EU AI Act

⏱ 22 MIN TOTAL READ

The world's first comprehensive regulation on artificial intelligence, using a risk-based classification system.

European Parliament Overview

The definitive anchor source for risk tiers (Unacceptable, High, Limited, Minimal).

VIEW OFFICIAL SOURCE →

SIG: Detailed Act Summary

A non-legal deep dive into compliance timelines (2025-2027) and potential fines.

VIEW DETAILED SUMMARY →

Module 3 Takeaways

Accountability: AI output is your output. You cannot delegate professional responsibility.

Practical Fit: Use AI to get "unstuck" or draft, but keep humans for precision and learning.

Compliance: Regulation is risk-based. High-stakes use cases require high-stakes rigour.

Forward Track

The capability trajectory of AI is moving rapidly. Look for upcoming internal Aker sessions for the latest on internal tools and use-case roadmaps.

Next Level: Architect

Ready to Validate Your Expertise?

Take the next step in your journey. Continue your education with specialized AI courses developed across Aker companies, then complete the final assessment to earn your official Aker AI & Robotics Certificate and digital badge.

Aker AI & Robotics Certification Program