So we’ve built this pretty neat browser-based BIM validation tool… but what happens when you need to validate hundreds of files? Or integrate validation into your CI/CD pipeline? Or let other applications use your validation logic? That’s when you realize: we need a backend.
Why Move Beyond the Browser?
Don’t get me wrong - browser-based BIM tools are amazing! They’re accessible, require no installation, and work on any device. But there are scenarios where a client-side-only approach just doesn’t cut it:
- Batch Processing: Validating 100 IFC files in sequence? Your browser tab isn’t the ideal environment.
- CI/CD Integration: Your automated build pipeline can’t “click buttons” in a web UI.
- Headless Operations: Sometimes you need validation without any UI at all.
- Resource Constraints: Large IFC models can overwhelm browser memory limits.
- Integration Needs: Other systems need to call your validation logic via API.
The solution? Build a dual-mode architecture that works both in the browser AND on the server.
The Architecture: Best of Both Worlds
Our new architecture supports two operational modes:
- Client-Side Mode (Browser): The original web-based UI for interactive validation
- Server-Side Mode (API): Headless processing via REST endpoints
This gives us maximum flexibility - use the browser when you want interactivity, use the API when you need automation.
Backend Services
We built four core backend services to handle the heavy lifting:
DirectFragmentsService: Converts IFC files to fragment format for 3D visualization (headless)
// src/server/services/DirectFragmentsService.ts
export class DirectFragmentsService {
  async convertIFCToFragments(ifcPath: string): Promise<string> {
    // Headless IFC parsing and conversion
    // Returns the path to the generated fragments file
  }
}
IfcTesterService: Integrates Python’s ifctester library via subprocess for IDS validation
// src/server/services/IfcTesterService.ts
export class IfcTesterService {
  async validateIDS(ifcPath: string, idsPath: string): Promise<ValidationResults> {
    // Spawns a Python subprocess
    // Runs ifctester validation
    // Returns structured results
  }
}
FileStorageService: Manages temporary file uploads with automatic cleanup
// src/server/services/FileStorageService.ts
export class FileStorageService {
  async storeUpload(file: Buffer, originalName: string): Promise<string> {
    const fileId = generateUniqueId();
    // Store with auto-cleanup after 1 hour
    return fileId;
  }
}
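For a sense of what that auto-cleanup could look like, here's a minimal sketch, assuming files live under temp/storage (the directory constant and helper name are mine, not the actual implementation):

// Sketch only - STORAGE_DIR and storeWithCleanup are illustrative names.
import { randomUUID } from 'node:crypto';
import { writeFile, rm } from 'node:fs/promises';
import path from 'node:path';

const STORAGE_DIR = 'temp/storage';
const TTL_MS = 60 * 60 * 1000; // 1 hour

async function storeWithCleanup(file: Buffer, originalName: string): Promise<string> {
  const fileId = randomUUID();
  const filePath = path.join(STORAGE_DIR, `${fileId}-${originalName}`);
  await writeFile(filePath, file);

  // Delete the file after the TTL; force: true ignores "already deleted"
  const timer = setTimeout(() => rm(filePath, { force: true }), TTL_MS);
  timer.unref(); // don't keep the process alive just for cleanup

  return fileId;
}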
JobQueue: Tracks async validation jobs with status polling
// src/server/services/JobQueue.ts
export type JobStatus = 'in-progress' | 'completed' | 'failed';

export interface Job {
  status: JobStatus;
  progress: number;
  result?: any;
  error?: string;
}

class JobQueue {
  private jobs: Map<string, Job> = new Map();

  createJob(): string {
    const jobId = `job-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
    this.jobs.set(jobId, { status: 'in-progress', progress: 0 });
    return jobId;
  }

  getJob(jobId: string): Job | undefined {
    return this.jobs.get(jobId);
  }

  updateJob(jobId: string, data: Partial<Job>): void {
    const job = this.getJob(jobId);
    if (job) {
      this.jobs.set(jobId, { ...job, ...data });
    }
  }
}
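To make the async pattern concrete, here's a rough sketch of how the queue might plug into a route (the runValidation helper is a hypothetical stand-in for the real work; the paths match the API described in the next section): the endpoint returns a job ID immediately and the validation completes in the background.

// Hypothetical wiring - runValidation is a stand-in, not the actual code.
import express from 'express';

declare function runValidation(req: express.Request): Promise<unknown>;

const app = express();
const jobQueue = new JobQueue(); // the class from the snippet above

app.post('/api/v1/ids/check', (req, res) => {
  const jobId = jobQueue.createJob();
  // Fire and forget: the request returns immediately with the job ID
  runValidation(req)
    .then((result) => jobQueue.updateJob(jobId, { status: 'completed', progress: 100, result }))
    .catch((err) => jobQueue.updateJob(jobId, { status: 'failed', error: String(err) }));
  res.status(202).json({ jobId });
});

app.get('/api/v1/jobs/:jobId', (req, res) => {
  const job = jobQueue.getJob(req.params.jobId);
  if (!job) return res.status(404).json({ error: 'Unknown job' });
  res.json(job);
});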
The REST API Endpoints
All endpoints live under /api/v1/ and follow RESTful patterns:
IFC to Fragments Conversion
# Upload IFC and convert to fragments
POST /api/v1/fragments
Content-Type: multipart/form-data
# Download converted fragments
GET /api/v1/fragments/:fileId
IDS Validation
# Validate IFC against IDS specification
POST /api/v1/ids/check
Content-Type: multipart/form-data
Body: ifc file + ids file
# Get validation results
GET /api/v1/ids/results/:fileId
Job Status Tracking
# Check async job status
GET /api/v1/jobs/:jobId
Response: { status: "in-progress" | "completed" | "failed", progress: 0-100, ... }
Pretty straightforward! The API follows async patterns for long-running operations - upload your files, get a job ID, poll for completion.
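For example, here's what a minimal API client could look like in Node 18+ (using the built-in fetch and FormData; I'm assuming the submission response carries a jobId field, which isn't spelled out above):

// Client sketch - the jobId response field is an assumption.
import { readFile } from 'node:fs/promises';

const API = 'http://localhost:3001/api/v1';

async function validateViaApi(ifcFile: string, idsFile: string) {
  const form = new FormData();
  form.append('ifc', new Blob([await readFile(ifcFile)]), ifcFile);
  form.append('ids', new Blob([await readFile(idsFile)]), idsFile);

  const submit = await fetch(`${API}/ids/check`, { method: 'POST', body: form });
  const { jobId } = await submit.json();

  // Poll the job endpoint until the job finishes one way or the other
  while (true) {
    const res = await fetch(`${API}/jobs/${jobId}`);
    const job = await res.json();
    if (job.status !== 'in-progress') return job;
    await new Promise((r) => setTimeout(r, 1000));
  }
}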
Python Integration: Why Subprocess?
Here’s an interesting challenge we faced: IDS validation in Node.js is possible, but the gold standard implementation is Python’s ifctester library from the IfcOpenShell ecosystem.
Rather than rewrite validation logic in JavaScript (and risk inconsistencies), we integrated the Python implementation directly:
// src/server/services/IfcTesterService.ts (simplified)
import { spawn } from 'child_process';

export class IfcTesterService {
  private pythonPath: string;
  private scriptPath: string; // path to the Python validator script (below)

  constructor(pythonPath: string, scriptPath: string) {
    this.pythonPath = pythonPath;
    this.scriptPath = scriptPath;
  }

  private async executePythonValidation(ifcPath: string, idsPath: string): Promise<any> {
    return new Promise((resolve, reject) => {
      const pythonProcess = spawn(this.pythonPath, [
        this.scriptPath,
        ifcPath,
        idsPath,
        'json', // Request JSON output format
      ]);

      let output = '';
      let errorOutput = '';

      pythonProcess.stdout.on('data', (data) => {
        output += data.toString();
      });
      pythonProcess.stderr.on('data', (data) => {
        errorOutput += data.toString();
      });

      pythonProcess.on('close', (code) => {
        if (code === 0) {
          resolve(JSON.parse(output));
        } else {
          reject(new Error(`Validation failed: ${errorOutput || output}`));
        }
      });
    });
  }
}
The Python script leverages IfcOpenShell’s validation capabilities:
# src/server/python/ids_validator.py
import sys
import json
import ifcopenshell
import ifctester.ids
import ifctester.reporter

def validate_ids(ifc_path: str, ids_path: str, output_format: str = "json") -> dict:
    """Validate an IFC file against an IDS specification."""
    try:
        # Load IDS specification
        specs = ifctester.ids.open(ids_path)
        # Load IFC model
        ifc_model = ifcopenshell.open(ifc_path)
        # Perform validation
        specs.validate(ifc_model)
        # Generate JSON report (the only format this script supports for now)
        if output_format.lower() == "json":
            json_reporter = ifctester.reporter.Json(specs)
            json_reporter.report()
            result = json_reporter.to_string()
            return json.loads(result)
        return {"error": "UnsupportedFormat", "message": f"Unsupported output format: {output_format}"}
    except Exception as e:
        return {
            "error": type(e).__name__,
            "message": str(e),
        }

if __name__ == "__main__":
    ifc_path = sys.argv[1]
    ids_path = sys.argv[2]
    output_format = sys.argv[3] if len(sys.argv) > 3 else "json"
    result = validate_ids(ifc_path, ids_path, output_format)
    print(json.dumps(result, indent=2))
    sys.exit(0 if "error" not in result else 1)
This approach gives us the best of both worlds:
- Node.js handles HTTP, routing, file management, and async orchestration
- Python handles IDS validation using the battle-tested ifctester library
- Communication happens as JSON on stdout, with errors surfaced on stderr
Docker: Package Everything Together
The trickiest part? Getting Node.js AND Python to play nicely in a single container. We used Docker multi-stage builds with a Python virtual environment:
# Dockerfile.server (simplified production stage)
FROM node:20-bookworm-slim AS production

# Install Python and venv support
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    python3-venv \
    && rm -rf /var/lib/apt/lists/*

# Create Python virtual environment
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

WORKDIR /app

# Install Python dependencies (ifctester, ifcopenshell)
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Install Node dependencies
COPY package*.json ./
RUN npm ci --omit=dev

# Copy application source
COPY src ./src

# Create necessary directories
RUN mkdir -p /app/temp/validation \
    /app/temp/storage \
    /app/temp/fragments

# Create non-root user for security
RUN useradd -m -u 1001 appuser && \
    chown -R appuser:appuser /app
USER appuser

EXPOSE 3001
ENV NODE_ENV=production
ENV PYTHON_PATH=/opt/venv/bin/python3

CMD ["npx", "tsx", "src/server/server.ts"]
The docker-compose.yml makes it even easier:
version: '3.8'

services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile.server
      target: production
    ports:
      - "3001:3001"
    environment:
      - NODE_ENV=production
      - PORT=3001
      - PYTHON_PATH=/opt/venv/bin/python3
    volumes:
      # Persist storage for uploaded files and results
      - validation-data:/app/temp/validation
      - storage-data:/app/temp/storage
      - fragments-data:/app/temp/fragments
    restart: unless-stopped

volumes:
  validation-data:
  storage-data:
  fragments-data:
Now deployment is as simple as:
docker-compose up -d
Pretty neat!
The New Frontend Workflow
We also rebuilt the landing page to support the new backend architecture. Users now follow a step-by-step workflow:
1. Upload IFC File → Backend converts to fragments
2. Convert & Preview → See the 3D model in the browser
3. Upload IDS File → Submit validation specification
4. View Results → See which elements passed/failed
Each step communicates with the backend API, but the experience remains smooth and interactive. The frontend manages conversion state and handles async job polling transparently.
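As a rough illustration of that state handling (the types and names here are mine, not from the repo):

// Illustrative only - the real frontend may structure this differently.
type WorkflowStep = 'upload-ifc' | 'preview' | 'upload-ids' | 'results';

interface WorkflowState {
  step: WorkflowStep;
  fragmentsFileId?: string; // from POST /api/v1/fragments
  jobId?: string;           // from POST /api/v1/ids/check
}

function advance(state: WorkflowState): WorkflowState {
  const order: WorkflowStep[] = ['upload-ifc', 'preview', 'upload-ids', 'results'];
  const next = order[Math.min(order.indexOf(state.step) + 1, order.length - 1)];
  return { ...state, step: next };
}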
What This Unlocks
By adding a backend API, we’ve opened up entirely new possibilities:
CI/CD Integration: Run validation in GitHub Actions, GitLab CI, or any pipeline
curl -X POST http://localhost:3001/api/v1/ids/check \
  -F "ifc=@model.ifc" \
  -F "ids=@spec.ids"
Batch Processing: Validate entire directories of IFC files in parallel
Tool Integration: Let Revit, ArchiCAD, or any BIM tool call our validation API
Cloud Deployment: Deploy to AWS, Azure, Google Cloud, or on-premises
Scalability: Run multiple instances behind a load balancer for high throughput
What’s Next?
In the next iteration, we’ll focus on improving the user experience by:
- Enhanced UI for IDS Validation: Making the validation results more intuitive and easier to understand
- IDS Creation Interface: Possibly adding a visual interface to create IDS specifications directly in the browser
The goal is to make IDS validation and creation as accessible as possible, lowering the barrier to entry for BIM quality control.
Let’s Build Together
Have you built backend APIs for BIM workflows? What challenges did you face integrating Python libraries with Node.js? How do you handle deployment of mixed-language applications?
Let me know on YouTube or Twitter - I’d love to hear about your experiences building production BIM tools!
The complete code for this backend architecture is available on GitHub: https://github.com/vinnividivicci/openingbim-cicd
We’re not just building browser toys anymore - we’re building real BIM infrastructure that can scale. And that’s exciting!