Remember that specification we brainstormed in Part 2? It’s time to turn those ideas into working code! In this episode, we’re diving into Amazon’s newly launched Kiro IDE - a development environment that brings spec-driven development and AI coding agents directly into your workflow. Let’s see how AI can transform from brainstorming partner to coding companion.

From Specs to Code: The AI-Powered Development Journey

The leap from specification to implementation has long been one of the biggest hurdles in software development. But with AI coding agents integrated into modern IDEs, we’re entering a new era. Today, I’ll walk you through the real-world experience of using Kiro IDE to build our BIM validation app - including the wins, the challenges, and the crucial lessons learned.

Getting Started: The OpenBIM Template

Before we unleash AI on our project, we need a solid foundation. That Open Company provides an excellent starting point with their BIM app templates available on GitHub. Getting started is as simple as running:

npm create bim-app@latest

(If you’re new to npm, you’ll need to install Node.js from nodejs.org first)

After running the setup commands (npm install and npm run dev), you’ll have a fully functional BIM viewer running locally - complete with IFC loading, 3D visualization using Three.js, component selection, and basic UI controls. It’s like getting a Tesla and then customizing it, rather than building a car from scratch!
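Collecting the setup steps above into one sequence (the scaffold command comes from That Open Company; the project directory name below is just a placeholder for whatever you chose during scaffolding):

```shell
# Scaffold a new BIM app from That Open Company's template
npm create bim-app@latest

# Move into the generated project (replace my-bim-app with your project name)
cd my-bim-app

# Install dependencies and start the local dev server
npm install
npm run dev
```

Once the dev server is up, open the printed localhost URL to see the viewer.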

Introducing Kiro IDE: Spec-Driven Development

Here’s where things get interesting. Kiro IDE builds spec-driven development directly into the editor. The workflow is elegant:

  1. Design specifications - Define what you want to build
  2. Generate task lists - Break down the spec into actionable items
  3. Let AI implement - Coding agents write the actual code

But there’s a critical first step: generating the agent steering document. Similar to Claude Code’s /init command, this analyzes your repository and creates a knowledge base for the AI. Think of it as giving the AI a map of your codebase before asking it to navigate.
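To make that concrete, a steering document is typically a short markdown knowledge base about the repo. The file path, section names, and details below are illustrative assumptions, not Kiro’s exact output for this project:

```markdown
<!-- .kiro/steering/structure.md (illustrative example, not Kiro's actual output) -->
# Project Structure

- TypeScript app scaffolded from That Open Company's bim-app template
- Viewer setup (Three.js scene, IFC loading) lives in the main entry module
- UI is assembled from That Open Components' web components

# Conventions

- Keep viewer logic in components; UI panels react to component events
- Use `npm run dev` for local testing; avoid `npm run build` while the dev server is running
```

Reviewing and correcting this document before letting the agent code is time well spent - it is the map the AI navigates by.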

The Development Process: From Requirements to Implementation

Phase 1: Requirements Gathering

I fed Kiro our original specification from Part 2 along with the HTML mockup. But here’s the first lesson: AI needs to see the current state. The initial requirements didn’t account for what the template already provided, so I captured a screenshot of the running app and fed it back to Kiro.

The result? A refined requirements document that understood both our vision and our starting point.

Phase 2: Design Document

The design phase is where things get specific. Kiro created a detailed technical design - but I noticed it didn’t know about That Open Company’s built-in IDS functionality. This is where human expertise becomes crucial!

I navigated to the That Open Fragments documentation, found the IDS specifications API, and guided Kiro to use the existing components rather than reinventing the wheel. AI is powerful, but domain knowledge is still your superpower.
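For readers new to IDS: an IDS (Information Delivery Specification) lists requirements that model components must satisfy, and validation checks each component against them. Here is a toy, self-contained sketch of that idea - this is not That Open Company’s actual API; every name here (IdsRequirement, Component, validate) is made up for illustration:

```typescript
// Toy illustration of IDS-style validation. Hypothetical types, not
// That Open Company's API: an IDS spec lists required properties
// (optionally with value constraints), and validation reports every
// component that fails a requirement.

interface IdsRequirement {
  property: string;   // e.g. "FireRating"
  mustMatch?: RegExp; // optional constraint on the property's value
}

interface Component {
  id: string;
  properties: Record<string, string>;
}

function validate(spec: IdsRequirement[], components: Component[]): string[] {
  const failures: string[] = [];
  for (const c of components) {
    for (const req of spec) {
      const value = c.properties[req.property];
      if (value === undefined) {
        // Required property is absent entirely
        failures.push(`${c.id}: missing ${req.property}`);
      } else if (req.mustMatch && !req.mustMatch.test(value)) {
        // Property exists but its value violates the constraint
        failures.push(`${c.id}: ${req.property}="${value}" fails constraint`);
      }
    }
  }
  return failures;
}
```

The real components from That Open’s Fragments documentation handle IFC parsing and the IDS file format for you - the point of guiding Kiro to them was exactly to avoid hand-rolling logic like this.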

Phase 3: Task List and Implementation

With the design refined, Kiro generated a detailed task list. I scanned through it and was impressed:

  • Proper integration with existing UI components
  • Use of content grid elements from That Open Components
  • Even included unit and integration testing!

Time to let the AI code. I kept the development server running (npm run dev) so I could watch changes in real-time. And here’s where the magic (and occasional chaos) happened.

The Reality of AI Coding: Wins and Challenges

The Wins

  • Task 1 (IDS Integration): Successfully initialized IDS functionality
  • Task 2 (IDS Loading): Created a working file loader
  • Task 3 (Validation UI): After some iteration, got the validation button displaying properly

The Challenges

  • Build conflicts when AI tried to run npm run build while dev server was running
  • AI occasionally “hallucinated” - claiming things worked when they didn’t
  • UI elements not appearing until I provided screenshot feedback
  • Getting stuck in endless cycles when context was insufficient

Key Lessons Learned

  1. Give AI visual feedback: Screenshots helped Kiro understand what was actually happening
  2. Manual testing is essential: Don’t trust AI claims - verify with your own eyes
  3. Context is everything: I actually cut the recording partway through because I realized I hadn’t given Kiro enough context about the existing codebase
  4. Intervene when needed: Be ready to guide the AI, enable functions, and double-check work

The Takeaway: AI as a Coding Partner

This experience revealed an important truth: AI coding agents are incredibly powerful when used correctly. They’re not autonomous developers (yet) - they’re force multipliers for developers who understand:

  • The domain they’re working in
  • How to provide proper context
  • When to guide vs when to let AI explore
  • How to verify and validate AI’s work

The app wasn’t complete by the end of this session, and that’s okay. In the next video, I’ll show you how to give AI coding agents all the context they need to be truly effective - because that’s the real secret to productive AI-assisted development.

What’s Next?

In Part 4, we’ll dive deeper into context management for AI coding agents. You’ll learn:

  • How to structure your codebase for AI comprehension
  • What documentation AI needs to be effective
  • Techniques for debugging when AI gets stuck
  • Best practices for iterative AI-driven development

Let’s Build Together

Have you tried AI coding assistants like Kiro, Claude Code, or GitHub Copilot? What’s been your experience? I’d love to hear about your successes, failures, and lessons learned in the comments on YouTube!

Remember: we’re not just building BIM tools - we’re learning how to collaborate with AI to build better software, faster. And that’s a skill that will shape the future of development.

The GitHub repository for this project is available at: https://github.com/vinnividivicci/openingbim-cicd