How I Passed the AWS Certified Generative AI Developer - Professional
On February 23, 2026, I earned the AWS Certified Generative AI Developer - Professional (AIP-C01) certification. This is my fourth AWS certification — following Cloud Practitioner, Developer Associate, and ML Engineer Associate — and by far the most challenging one.
I'm open-sourcing my entire study vault so others can benefit from it: AWS-GenAI-Developer-Certification on GitHub.
Why This Certification
Working on enterprise GenAI workflows day-to-day, I wanted to validate my knowledge against AWS's own benchmark. The AIP-C01 is a Professional-level exam — it expects you not just to know which services exist, but to make architectural decisions: when to use RAG vs fine-tuning, how to design multi-agent systems, and why certain guardrail configurations matter in production.
The exam covers five domains:
| Domain | Weight |
|---|---|
| Design GenAI Solutions | ~30% |
| Select Foundation Models | ~20% |
| Responsible AI | ~20% |
| Customize Foundation Models | ~15% |
| Deployment & Operations | ~15% |
The heaviest domain — Design GenAI Solutions — is where most of the difficulty lies. It's not about memorizing API calls; it's about understanding trade-offs across the entire stack.
My Study Strategy: The 7-Phase Roadmap
I structured my preparation into 7 sequential phases, each building on the previous one:
- Core Platform — Amazon Bedrock fundamentals, the service landscape, how everything connects
- Model Customization — Fine-tuning vs LoRA vs prompt engineering, when each makes sense, data formatting requirements
- RAG & Vector Stores — This was the deepest phase. OpenSearch vs Aurora pgvector vs S3 Vectors vs DynamoDB — knowing the decision framework for each
- Agentic AI — LLM agents, the Strands SDK, MCP (Model Context Protocol), human-in-the-loop patterns, Amazon Q
- Data & Storage — S3, Glue, Lake Formation — the plumbing that feeds everything
- Operations — Caching strategies, SageMaker deployment, application services
- Governance & Security — Guardrails, responsible AI frameworks, model evaluation, IAM policies
Each phase produced detailed notes with decision tables, architecture diagrams, and exam-critical callouts. By the end, I had 31 interconnected study notes covering every testable concept.
The Obsidian + Claude Workflow That Changed Everything
Here's where things get interesting. I didn't just write notes in a flat document — I built an Obsidian vault with internal wiki-links, callout boxes, and a visual knowledge map canvas.
The game-changer was using Claude's Obsidian skills throughout the process. Rather than spending hours reformatting and cross-referencing manually, I used Claude to:
- Structure raw study materials into well-organized Obsidian-native markdown, with proper [[wiki links]] connecting related concepts across phases
- Generate decision frameworks — those "when to use X vs Y" comparison tables that are critical for the exam
- Create semantic highlighting — marking ==key terms== that are most likely to appear on the exam
- Build callout boxes (`> [!tip]`, `> [!warning]`, `> [!important]`) that flag the exam-critical nuances AWS loves to test
- Refine and consolidate overlapping content across notes, keeping everything DRY while maintaining clear navigation paths
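For readers who haven't used Obsidian callouts before, here's an illustrative snippet in the style the vault uses (the wording is my own example, not a note copied from the repo):

```markdown
> [!tip] Decision shortcut
> For low-latency vector search at scale, reach for [[OpenSearch]];
> for low-cost, infrequent lookups, ==S3 Vectors== may be the better fit.
```

The `> [!tip]` syntax renders as a styled, collapsible box in Obsidian, which makes exam-critical nuances jump out during review.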
The result was a vault where I could click from "RAG Architecture" straight into "OpenSearch vector configuration" and then into "IAM policies for Bedrock access" — all connected through backlinks. This mimics how the exam actually tests you: jumping between layers of the stack in a single question.
The Knowledge Map Canvas (an Obsidian canvas file) gave me a bird's-eye view of how all 31 notes related to each other. Being able to see the connections made the material stick far better than linear note-taking ever could.
This is the real efficiency gain — not just writing notes faster, but producing better-structured, better-connected knowledge artifacts that make review sessions dramatically more productive.
Key Exam Insights
A few things I wish I'd known earlier:
RAG dominates the exam
Easily 25-30% of questions touched RAG in some form. Know your chunking strategies (fixed-size vs semantic vs hierarchical), understand when to use each vector store, and be crystal clear on the retrieval-augmented generation pipeline end-to-end.
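To make the simplest of those strategies concrete, here's a minimal sketch of fixed-size chunking with overlap. The function name and parameters are my own illustration; Bedrock Knowledge Bases expose similar knobs (chunk size and overlap percentage) when you configure a data source.

```python
def chunk_fixed(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    Overlap repeats the tail of each chunk at the head of the next,
    so a retrieved chunk is less likely to cut context mid-thought.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap  # how far the window advances each iteration
    return [text[i:i + size] for i in range(0, len(text), step)]

doc = "x" * 500
chunks = chunk_fixed(doc, size=200, overlap=50)
# window starts at 0, 150, 300, 450 → 4 chunks
print(len(chunks))
```

Semantic and hierarchical chunking build on the same idea but split on meaning boundaries (sentences, sections) or nest small chunks under larger parent chunks — know when each is the right answer.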
"When to use what" matters more than "how it works"
The exam rarely asks you to configure a service step-by-step. Instead, it gives you a scenario and asks which combination of services solves it best. My decision framework tables were invaluable here.
Guardrails and Responsible AI are free points
Phase 7 content is straightforward compared to RAG and Agentic AI, but it's worth 20% of the exam. Don't skip it. Know the types of guardrails (content filters, denied topics, sensitive info filters, contextual grounding), how model evaluation works in Bedrock, and the responsible AI principles.
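To tie the four guardrail types together, here's a sketch of what a single guardrail configuration covering all of them might look like. Field names follow my reading of the boto3 `bedrock` client's `create_guardrail` API; the guardrail name, topic, and thresholds are hypothetical, so verify against current AWS documentation before relying on this shape.

```python
guardrail_config = {
    "name": "demo-guardrail",  # hypothetical name
    "blockedInputMessaging": "Sorry, I can't help with that request.",
    "blockedOutputsMessaging": "Sorry, I can't help with that request.",
    # 1. Content filters: block harmful categories by strength
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    # 2. Denied topics: refuse entire subject areas
    "topicPolicyConfig": {
        "topicsConfig": [
            {
                "name": "investment-advice",  # illustrative topic
                "definition": "Recommendations about financial investments.",
                "type": "DENY",
            },
        ]
    },
    # 3. Sensitive info filters: mask or block PII
    "sensitiveInformationPolicyConfig": {
        "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
    },
    # 4. Contextual grounding: reject answers unsupported by retrieved sources
    "contextualGroundingPolicyConfig": {
        "filtersConfig": [{"type": "GROUNDING", "threshold": 0.75}]
    },
}

# With boto3 this config would be passed along the lines of:
#   boto3.client("bedrock").create_guardrail(**guardrail_config)
print(sorted(k for k in guardrail_config if k.endswith("PolicyConfig")))
```

Being able to name which of the four policy blocks solves a given scenario (mask emails? denied-topic? grounding threshold?) is exactly the kind of "when to use what" question the exam favors.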
Agentic AI is the future-facing section
Questions about multi-agent orchestration, tool use, and human-in-the-loop patterns are increasing. Understand the Strands Agents SDK, MCP protocol, and how agents differ from simple chain-of-thought prompting.
The Study Vault
Everything is open-sourced here: github.com/GravesXX/AWS-GenAI-Developer-Certification
For the best experience, clone the repo and open it in Obsidian. Start with 00 API-C01 Learning Roadmap.md — it's the main hub that links to all 31 notes in the suggested study order.
The vault includes:
- 31 study notes organized by the 7-phase roadmap
- Decision framework tables for vector stores, model customization, caching, and more
- A visual Knowledge Map Canvas connecting all concepts
- Exam Day Checklist with checkboxes for each domain
- Callout boxes highlighting the most testable content
Final Thoughts
This was the hardest AWS exam I've taken, but also the most rewarding. GenAI is moving fast, and having a structured certification path forced me to fill gaps I didn't know I had — especially around governance, guardrails, and the operational side of deploying LLM applications.
If you're preparing for the AIP-C01, I hope my notes save you some time. And if you use Obsidian, give the Claude skills a try — the combination of AI-assisted note structuring and Obsidian's linking system is genuinely powerful for deep technical study.
Good luck. 🚀