The AI used to see one thing: lawn. Green pixels, not-green pixels, draw a polygon. It was useful the way a hammer is useful — perfect for the one thing it does, blind to everything else in the frame.
This session taught it to see ten things.
Lawn. Single trees. Tree canopy. Houses. Driveways. Sidewalks and patios. Pools. Gardens. Debris. Parking lots. Each class gets its own text prompts in the SAM3 handler, its own color on the map, its own mask. When masks overlap — and they always overlap, because a tree canopy hangs over a lawn — a priority system decides which class wins. Structures subtract from lawn. Trees subtract from lawn. The lawn that remains is the lawn that's actually treatable, not the lawn that exists on a satellite image where a roof is blocking the view.
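The subtraction logic is simple to sketch. Here's a minimal version in Python with NumPy — the class names and the priority order are guesses for illustration, not the handler's actual configuration:

```python
import numpy as np

# Hypothetical priority order, highest first. The real handler's
# ordering and class names may differ; this is a sketch.
PRIORITY = ["house", "driveway", "tree_canopy", "lawn"]

def resolve_overlaps(masks: dict) -> dict:
    """Subtract every higher-priority mask from each lower-priority one.

    Each value in `masks` is a boolean array over the same image grid.
    A pixel belongs to the highest-priority class that claimed it.
    """
    claimed = np.zeros_like(next(iter(masks.values())), dtype=bool)
    resolved = {}
    for cls in PRIORITY:
        mask = masks[cls] & ~claimed  # keep only still-unclaimed pixels
        resolved[cls] = mask
        claimed |= mask
    return resolved
```

A lawn mask that overlaps a canopy mask loses those pixels, which is exactly the "treatable lawn" figure the quote needs.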
The hardest problem wasn't detection. It was trees.
A single oak in a front yard and a dense row of pines along a property line both register as "tree" to a segmentation model. But they're operationally different. The oak needs individual treatment — its canopy drip line affects what grows beneath it. The pine row is a mass, a boundary, a canopy. The algorithm that separates them uses morphological opening to erode thin connections between tree crowns, connected component analysis to identify isolated blobs, and proximity grouping to merge fragments that belong to the same tree. Individual trees versus dense canopy clusters, distinguished computationally without anyone drawing a single annotation.
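The pipeline described above — opening, components, proximity grouping — can be sketched with SciPy. The thresholds and parameter names here are assumptions for illustration, not the production values:

```python
import numpy as np
from scipy import ndimage

def split_trees(tree_mask: np.ndarray,
                open_iter: int = 2,           # assumed erosion strength
                single_tree_max_px: int = 400,  # assumed area threshold
                merge_dist_px: float = 15.0):   # assumed grouping radius
    """Separate single trees from dense canopy in one binary tree mask."""
    # 1. Morphological opening erodes thin bridges between touching crowns.
    opened = ndimage.binary_opening(tree_mask, iterations=open_iter)
    # 2. Connected components: each surviving blob gets a label.
    labels, n = ndimage.label(opened)
    idx = range(1, n + 1)
    centroids = ndimage.center_of_mass(opened, labels, idx)
    sizes = ndimage.sum(opened, labels, idx)
    # 3. Proximity grouping via union-find: fragments whose centroids sit
    #    close together are merged back into one tree.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            d = np.hypot(centroids[i][0] - centroids[j][0],
                         centroids[i][1] - centroids[j][1])
            if d < merge_dist_px:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    # 4. Small merged groups read as single trees; big ones as canopy.
    singles, clusters = [], []
    for members in groups.values():
        area = sum(sizes[m] for m in members)
        (singles if area <= single_tree_max_px else clusters).append(members)
    return singles, clusters
```

Feed it one oak-sized blob and one hedgerow-sized blob and it sorts them into the two buckets without a single hand-drawn label.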
The review panel got a complete rewrite to match. The old interface showed lawn and structures. The new one shows a unified region list with color-coded dots for each class, a dropdown to relabel any region to any of the ten classes, visibility toggles, lock/unlock per region, and delete. A class summary breaks down square footage by type. The map polygons wear their class colors. Save and Accept are separate actions now — you save your corrections first, then accept the result. The distinction matters because corrections are training data. Every relabeled region teaches the next model version what it got wrong.
Tenants can configure which classes get quoted through a JSONB column — scene_class_config. Default is lawn and single trees, because that's what most lawn care operators price. But a landscaper who also quotes driveway sealing or patio work can enable those classes and see pricing for them too.
A subtle bug died in this session. Rejected AI detections kept reappearing. The root cause was an Eloquent boolean cast colliding with a raw SQL expression: $model->update(['bool_col' => DB::raw('false')]) looks correct, but Eloquent's boolean cast sees the DB::raw() Expression object as truthy and converts it to true. The rejection was being saved as an approval. The fix: bypass Eloquent's casting by using the query builder directly.
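The failure mode isn't Laravel-specific. Any ORM that applies an attribute cast before SQL compilation will do the same thing, because a raw-expression wrapper is just an object, and objects are truthy. A Python analogue of the pitfall — Expression and boolean_cast are stand-ins, not real library names:

```python
class Expression:
    """Stand-in for an ORM's raw-SQL wrapper, like Laravel's DB::raw()."""
    def __init__(self, sql: str):
        self.sql = sql

def boolean_cast(value) -> bool:
    """What a naive attribute cast does: coerce whatever it gets to bool."""
    return bool(value)

# The wrapper is truthy no matter what SQL it carries. A cast applied
# before the SQL is compiled silently flips false to true -- the
# rejection becomes an approval.
rejected = boolean_cast(Expression("false"))
```

Hence the fix in the session: go through the query builder, which hands the expression straight to the SQL compiler and never runs the cast.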
All existing AI detections on beta were wiped for a clean training restart. The old single-class data would have poisoned the multi-class model — teaching it that everything is either lawn or not-lawn, when the truth is that a property is a composition of ten different surfaces that each matter for different reasons.
This is the foundation. The handler detects. The panel lets humans correct. The corrections become training data. The next model version learns from the corrections. The cycle tightens. Every property reviewed makes the next detection more accurate. Not artificial intelligence in the grand sense — artificial patience, applied at scale, learning from every mistake a human bothers to fix.