AI-Powered Interior Design Platform
AI platform that segments residential floor plans, then places furniture across the home using a rule-based layout engine — layouts can be edited in the UI or fetched, fully furnished, through an API.
From a single image to a furnished, exportable plan
A real 3-bedroom plan, on loop. The kitchen and bathroom come as-fitted — the rule engine respects the existing fixtures and only places furniture in the bedrooms, dining, and living areas. Layouts are then editable and exportable to PDF or PNG from the top-right corner.
Upload the floor plan
Drag in a residential floor plan as an image. Architect drawing, scan, or photo — anything readable.
Segment rooms, doors, windows
The fine-tuned segmentation model maps walls, doors, and windows on every plan automatically.
Label rooms from the image
Room names written on the plan (bedroom, kitchen, living room) are read back as labels. If the plan is unlabelled, the user picks them.
Read scale — or set it manually
If the image carries dimensions, the engine reads them. Otherwise the user marks a reference length so the rule engine works in real units.
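The manual-scale fallback can be sketched in a few lines. This is an illustrative TypeScript snippet, not the platform's actual code — the names (`Point`, `metresPerPixel`) are assumptions; the idea is that the user marks two points along a wall of known length, and every later measurement converts from pixels to metres with the resulting factor.

```typescript
// Hypothetical sketch of the manual-scale fallback: the user marks two
// points on a feature of known real-world length, and the engine derives
// a pixels-to-metres conversion factor from that reference.
type Point = { x: number; y: number };

function metresPerPixel(a: Point, b: Point, knownLengthM: number): number {
  // Pixel distance between the two reference points.
  const px = Math.hypot(b.x - a.x, b.y - a.y);
  if (px === 0) throw new Error("reference points must differ");
  return knownLengthM / px;
}

// e.g. a 4 m wall drawn 400 px long gives 0.01 m per pixel,
// so a 250 px sofa footprint measures 2.5 m in real units.
```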
Choose capacity per room
Per-room toggles — extra storage, TV, dining capacity, desks — feed the rule engine before it places anything.
Auto-generate the layout
One click. The engine places bedroom, dining and living furniture. Bathrooms and kitchen are kept as-fitted — the rule engine respects existing fixtures.
Refine, then export
Move, rotate, delete, or swap any item individually. When the layout looks right, download as PDF or PNG from the top-right corner.
A SAM2 segmentation model, fine-tuned on a custom dataset, detects walls, doors, and windows. A rule-based engine then places furniture against the detected geometry, exposing the same output to both a React UI and a public API.
Once segmentation parses the geometry — typically in around 10 seconds per plan, with the model landing near 90% accuracy on the held-out test set — a rule-based placement engine generates one or more furnished layouts per home. The rules cover clearances around doors, sight-lines to windows, and room-type heuristics (a living room is not a bedroom). The same layout output drives two surfaces: a React UI where end users can rearrange, swap, or regenerate furniture interactively, and a public API where third-party systems can fetch a fully furnished plan as structured data.
How a request flows through it
Each request enters at the top of the diagram, flows through every box, and lands at the bottom — exactly the way the production system behaves. The scan-line traces where a live request would be right now.
What it's built with
The interesting parts
Fine-tuned segmentation model
Custom dataset assembled from real-estate floor plans; SAM2 fine-tuned over about two weeks to land near 90% accuracy on walls, doors, and windows.
Rule-based furniture placement engine
After segmentation, an engineering-rules engine places furniture into each room — respecting clearances, window sight-lines, and room-type semantics — and generates multiple valid layouts per plan.
Editable layouts in the UI
Users can rearrange, swap, or regenerate furniture in the React interface; the engine re-validates every change against the same placement rules.
Public API for third-party consumption
The same layout output is exposed as a public API endpoint — third-party systems can fetch a fully furnished plan as structured data with no UI involvement.
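A third-party consumer of that endpoint might look like the sketch below. The endpoint path, response shape, and helper names are assumptions for illustration — the real API schema isn't shown here.

```typescript
// Hypothetical response shape for a furnished plan fetched from the API.
type PlacedItem = { type: string; room: string; x: number; y: number; rotation: number };
type FurnishedPlan = { planId: string; items: PlacedItem[] };

// Minimal sketch of a third-party integration, assuming a
// /v1/plans/:id/layout endpoint that returns the plan as JSON.
async function fetchFurnishedPlan(baseUrl: string, planId: string): Promise<FurnishedPlan> {
  const res = await fetch(`${baseUrl}/v1/plans/${planId}/layout`);
  if (!res.ok) throw new Error(`layout fetch failed: ${res.status}`);
  return (await res.json()) as FurnishedPlan;
}

// A consumer then works with the structured data directly,
// e.g. pulling out the furniture placed in one room:
function itemsInRoom(plan: FurnishedPlan, room: string): PlacedItem[] {
  return plan.items.filter((i) => i.room === room);
}
```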
Python ML inside a Node.js backend
Python inference scripts invoked from the Node.js runtime, so the MERN stack stays the system of record while ML stays in Python.
The calls that did most of the work
A handful of engineering choices shape how a system feels. Here are the ones we'd still defend — alongside what each one cost.
Fine-tune SAM2 on a custom dataset
General-purpose segmentation models don't reliably pick out walls, doors, and windows on a residential floor plan. A purpose-built dataset gives the model the right inductive bias for the specific shapes the product cares about.
Tradeoff: About two weeks of fine-tuning, plus ongoing dataset curation as new floor-plan styles show up.
Rule-based furniture placement, not ML-generated layouts
Furniture placement has hard physical constraints — clearances around doors, sight-lines to windows, what a 'living room' means versus a 'bedroom'. A rule engine is auditable and predictable; an ML-generated layout would be a black box the user has to wrestle with every time it gets something subtly wrong.
Tradeoff: Rules have to be encoded explicitly and maintained as new room types and furniture categories are added.
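One of those explicit rules can be sketched as plain geometry. This is a simplified illustration, not the engine's actual code — the names (`Rect`, `doorClearance`, the swing-radius assumption) are hypothetical; the point is that a rule like "keep furniture out of a door's swing" is a small, auditable predicate.

```typescript
// Axis-aligned rectangle in real-world units (metres).
type Rect = { x: number; y: number; w: number; h: number };

// Standard axis-aligned overlap test.
function overlaps(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

// A door's clearance zone: its bounding box expanded by a swing
// radius, here assumed (for illustration) to equal the door width.
function doorClearance(door: Rect): Rect {
  return {
    x: door.x - door.w,
    y: door.y - door.w,
    w: door.w * 3,
    h: door.h + 2 * door.w,
  };
}

// The rule itself: a candidate placement is rejected if it
// intersects any door's clearance zone.
function violatesDoorClearance(item: Rect, doors: Rect[]): boolean {
  return doors.some((d) => overlaps(item, doorClearance(d)));
}
```

Because each rule is a named predicate like this, a rejected placement can report exactly which rule it broke — the auditability the section argues for.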
Same engine, two surfaces (UI + public API)
Two distinct consumers want the same output — end users editing layouts in the app and partner systems that want a fully furnished plan as a data feed. Building one engine with two surfaces is cheaper than building two pipelines.
Tradeoff: The API contract becomes a public commitment; layout schema changes have to be versioned carefully so third-party integrations don't break.
Run the Python ML inside the Node.js runtime
Keeps the MERN stack as a single deployable unit instead of standing up a separate inference service for what is, at the start, a moderate-throughput workload.
Tradeoff: Web and ML are tightly coupled — scaling inference independently means breaking it out into its own service later.
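The bridge pattern itself is small. The sketch below shows the general shape — shelling out to a script and parsing JSON from its stdout — under assumed names (`runInference`, the script path in the usage comment); it is not the platform's actual integration code.

```typescript
import { execFileSync } from "node:child_process";

// Hypothetical Node-to-Python bridge: run an external inference
// script synchronously and parse a JSON result from its stdout.
// The script is expected to print a single JSON object.
function runInference(cmd: string, args: string[]): unknown {
  const stdout = execFileSync(cmd, args, { encoding: "utf8" });
  return JSON.parse(stdout);
}

// In production this would be invoked with something like:
//   runInference("python3", ["segment.py", uploadedPlanPath]);
// keeping the MERN backend as the system of record while the
// ML stays in Python.
```

The tradeoff the section names shows up directly in this shape: because the call is an in-process child spawn rather than a network hop, breaking inference out into its own service later means replacing this function with an HTTP or queue client.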
Tell us what you're building.
Free 30-minute call. Real humans, real timelines, no follow-up emails forever.