# AI CRM · Real Estate

`01 · ai-crm · Delivered`

Production AI agent with MCP-style 19-endpoint tool API, audit trail, and operator handoff. Qualifies real-estate leads, runs project research, books viewings.

**Scope:** Solo · 7 weeks  
**Role:** Full-stack platform with autonomous Telegram agent

**Video:** [YouTube](https://www.youtube.com/watch?v=Jddfb75n5WA) · [RuTube](https://rutube.ru/video/private/dbdadd9823e8fb606e0561d1077de66a/?p=4yGO6gARyq70kU5tbEhZCg)

## Video walkthrough

Production AI agent runs real-estate client chat in Telegram while operators supervise through a capability-scoped admin panel — Kanban pipeline, calendar, lead qualification, deep project research, viewing booking, operator handoff. Every AI call is tracked: tokens and cost are visible per client, separately for the client-talking agent and the operator helper.

An AI-powered CRM for real-estate agencies — clients talk to an AI assistant inside Telegram and your team sees everything in one place.

The dashboard shows deals, leads and team workload. The Kanban board runs the pipeline — drag a card between stages, and the history takes care of itself.

Every card sits on the calendar too. Filter by operator or board, switch between month, week or day, and open any event to see its details and linked cards.

From a card, step straight into the chat. The AI qualifies the lead, finds matching projects, runs deep research if needed and sends an interactive property card right inside Telegram. The client browses photos, units and reports without leaving the app. When they are ready, the AI books a viewing and saves a context note for the team.

If a person takes over, the AI simply steps aside. Operators also have their own AI helper inside the system — ask it about a client, and it comes back with a quick summary and a ready reply, drawing on the full chat history.

Every AI call is tracked: tokens and cost are visible per client, separately for the one talking to the client and the one helping the operator.

The system also comes with light and dark themes, and a range of accent colors.

---

## Context

> Operators run calls and viewings. The AI runs everything else.

Real-estate agencies running their funnel through Telegram receive inbound around the clock. Volume outpaces what a small operator team can keep up with. Catalogs of thousands of projects and tens of thousands of units sit beyond what any human keeps in their head mid-conversation. Off-the-shelf chatbots fire off a single FAQ answer and stall; nothing carries the buyer from first message to booked meeting.

The split is fixed: operators take calls and viewings; everything else — qualifying, deep project and district research, scheduling against the operator calendar — runs without them.

## Facts

| | |
|---|---|
| **Scope** | 7 weeks solo |
| **Surfaces** | CRM admin + Telegram WebApp (1 repo · 2 Vite entries) |
| **Catalog** | Thousands of projects · tens of thousands of units synced from GenieMap · raw_payload preserved |
| **Agent tools** | 19 in-house endpoints + 5 callbacks · Bearer auth · unified envelope |
| **Auth model** | 15 capability keys |
| **Status** | Delivered · 20 pytest modules · structlog · respx |

## Architecture

### Message lifecycle

```text
 1  Client                          Telegram message
        │
 2  tg_bot
        │  POST /api/v1/conversations/messages/        [BOT_SHARED_TOKEN]
        ▼
 3  Backend  (Django/DRF)
        │  upsert Conversation + Message[RECEIVED]
        │  Celery.dispatch_to_n8n_workflow.delay()
        ▼
 4  Celery worker
        │  POST {N8N_BASE_URL}/{webhook}               [N8N_API_KEY]
        ▼
 5  n8n  ──►  LLM
                │  tool_call  (projects-search, send-webapp, …)
                ▼
              Agent Tools API                          [Bearer]
                │  data  {data, meta, errors}
                ▼
              LLM  ──►  final reply text
        │
 6  Backend  ◄── callback /api/v1/integrations/n8n/    [N8N_CALLBACK_TOKEN]
        │  Message[AWAITING_CALLBACK → READY_TO_SEND]
        ▼
 7  Celery worker
        │  POST /bridge/send                           [TG_BRIDGE_TOKEN]
        ▼
 8  Telegram → Client            Message[SENT]
```

**Message states (7).** Linear path: RECEIVED → PROCESSING → AWAITING_CALLBACK → READY_TO_SEND → SENT. Branches: AWAITING_OPERATOR (AI toggled off mid-conversation), FAILED (any n8n error).
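
The lifecycle above can be sketched as an explicit transition table — state names follow the case study, while the exact transition set and the `advance` helper are an illustrative sketch, not the production code:

```python
from enum import Enum

class MessageStatus(str, Enum):
    RECEIVED = "received"
    PROCESSING = "processing"
    AWAITING_CALLBACK = "awaiting_callback"
    READY_TO_SEND = "ready_to_send"
    SENT = "sent"
    AWAITING_OPERATOR = "awaiting_operator"  # branch: AI toggled off
    FAILED = "failed"                        # branch: any n8n error

# Linear happy path plus the two branches; which states may branch
# where is an assumption here.
TRANSITIONS = {
    MessageStatus.RECEIVED: {MessageStatus.PROCESSING, MessageStatus.AWAITING_OPERATOR},
    MessageStatus.PROCESSING: {MessageStatus.AWAITING_CALLBACK, MessageStatus.FAILED},
    MessageStatus.AWAITING_CALLBACK: {MessageStatus.READY_TO_SEND, MessageStatus.FAILED},
    MessageStatus.READY_TO_SEND: {MessageStatus.SENT, MessageStatus.FAILED},
    MessageStatus.SENT: set(),
    MessageStatus.AWAITING_OPERATOR: set(),
    MessageStatus.FAILED: set(),
}

def advance(current: MessageStatus, target: MessageStatus) -> MessageStatus:
    """Refuse any hop not in the table — illegal transitions fail loudly."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.value} -> {target.value}")
    return target
```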

**Why a separate bridge.** The Telegram bot session is owned by one process — tg_bot. Backend never opens its own session; outbound goes through an HTTP bridge inside the same container. Backend stays stateless toward Telegram.
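
A minimal sketch of what the backend's outbound call could look like — the container address, port and payload fields are assumptions; only the header-borne `TG_BRIDGE_TOKEN` secret and the `/bridge/send` path are from this write-up:

```python
import os

def build_bridge_send(chat_id: int, text: str) -> dict:
    """Shape an outbound /bridge/send request. The backend never opens a
    Telegram session of its own — it only POSTs to the bridge inside the
    tg_bot container, authenticated by the shared TG_BRIDGE_TOKEN."""
    return {
        "url": "http://tg_bot:8081/bridge/send",  # assumed container-local address
        "headers": {"Authorization": f"Bearer {os.environ.get('TG_BRIDGE_TOKEN', '')}"},
        "json": {"chat_id": chat_id, "text": text},  # assumed payload shape
    }
```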

**Where AI lives.** Inside step 5 — n8n owns the LLM tool-loop. Backend serves both the callbacks and the tool calls themselves during the loop. (Why n8n at all — see §04-Decisions.)

### Component layout

```text
   External            tg_bot                   frontend  (admin · webapp)
   ────────            ──────                   ───────────────────────────
   Telegram   ◄─────►  aiogram + bridge         React · Vite (×2 entries)
                            │                          │
               BOT_SHARED ──┤                          │  HTTPS · /api/v1
               TG_BRIDGE  ──┤                          │
                            ▼                          ▼
                     ┌───────────────────────────────────────────┐
                     │   Backend  (Django · DRF · structlog)     │
                     │   Single envelope:  {data, meta, errors}  │
                     └────┬─────────┬──────────┬─────────────────┘
                          │         │          │
                          ▼         ▼          ▼
                    Postgres 16  Redis 7   Celery worker + beat
                                               │  N8N_API_KEY
                                               ▼
                                      n8n + n8n-worker
                                      (LLM tool-loops)
                                               │  N8N_CALLBACK_TOKEN
                                               ▼
                                         Backend.callback
```

**Nine Docker services.** postgres · redis · backend · celery · celery-beat · tg_bot · frontend · n8n · n8n-worker. (n8n-worker is a separate executor process that offloads long-running flows from the orchestrator.)

**Trust model.** Every cross-service call carries an explicit shared secret — bot bridge, n8n outbound, n8n callback, agent-tool Bearer. No service trusts another by container locality alone.
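
That per-edge check can be as small as a constant-time header comparison — a sketch; the function name is hypothetical and the real checks sit in DRF authentication classes:

```python
import hmac

def check_shared_secret(auth_header: str, expected: str) -> bool:
    """Constant-time comparison of an incoming 'Bearer <token>' header
    against the per-edge secret (BOT_SHARED_TOKEN, N8N_CALLBACK_TOKEN,
    TG_BRIDGE_TOKEN, or the agent-tool Bearer)."""
    scheme, _, token = auth_header.partition(" ")
    if scheme != "Bearer" or not token:
        return False
    # compare_digest avoids leaking a prefix match through timing.
    return hmac.compare_digest(token, expected)
```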

**Frontend split.** One Vite repo, two entry HTMLs — /admin (CRM) and /webapp/:projectId (Telegram WebApp). Components and API client are shared. The WebApp runs inside Telegram via the WebApp SDK; access control sits on the same backend endpoint rules as admin.

### Domain core

```text
CATALOG                          CONVERSATIONS
───────                          ─────────────
Project                          Conversation
  ├── Unit (×N)                    · status
  ├── ResearchBlock × 4            └── Message (×N, 7-state lifecycle)
  │     · core                   Note (manual + 3 system cards)
  │     · market & demand        OperatorAssistantSession
  │     · legal & ops
  │     · dynamics & news
  ├── Amenity (×N)
  └── District (FK)              KANBAN
                                 ──────
USERS                            Board
─────                              └── Column
User · capability strings               └── Card
     · telegram_link                         └── Activity
UserInvite
                                 CALENDAR
GENIEMAP                         ────────
────────                         Calendar
ProviderSyncLog                    ├── CalendarEvent
  · raw_payload preserved          └── ScheduleRule
  · scheduled syncs
```

**37 models in 7 apps.** Domain decomposed by bounded context: catalog · conversations · kanban · calendar · users · geniemap · auth. 5 apps carry no models (core, agent_tools, dashboard, webapp, echo). No god-table; every entity owns one job.

**Status enums everywhere, not bool flags.** Each domain carries its own typed status enum — Project (launch · available · sold_out · cancelled), Unit (available · reserved · sold · on_hold), Message (7-state), Kanban Card (open · closed_won · closed_lost). One predictable status surface per domain instead of a parade of `archived: bool`, `is_active`, `is_draft` flags.

**raw_payload preserved.** Every entity synced from GenieMap stores the original JSON alongside normalized columns. Survives any mapping drift, makes re-ingest cheap.
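
In sketch form — the field names here are invented, since the real GenieMap schema is not shown in this case study:

```python
def normalize_unit(raw: dict) -> dict:
    """Map only the fields the app queries on into columns; keep the full
    source JSON beside them, so any mapping drift is recoverable and
    re-ingest needs no second provider call."""
    return {
        "external_id": raw["id"],
        "price": raw.get("price"),
        "status": raw.get("status", "available"),
        "raw_payload": raw,  # untouched original
    }
```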

### Agent topologies

![Four research-specialist agents (core, market & demand, legal & ops, dynamics & news), each on its own LLM, all sharing one web-search tool.](https://ilyadev.xyz/private/airea-n8n-specialists.png)

*Specialists cluster, shared tool*

![A single chat agent wired to twelve tools — projects/districts search and research, calendar availability and create, send-webapp, admin-notify, context-save, user-note CRUD, web-search.](https://ilyadev.xyz/private/airea-n8n-omnimodel.png)

*Omnimodel, twelve tools*

**Why this works.** The 19 endpoints are model-agnostic and orchestrator-agnostic. Tested with both GPT and Gemini against the same tool API.

## Key engineering decisions

### 01 · Capability overrides on top of role presets

**Decision.** Three role presets (OWNER · ADMIN · OPERATOR) define the default capability set; per-user `extra_capabilities` and `revoked_capabilities` give point overrides. Checks resolve through `hasCapability()` rather than `role == "admin"`.

**Why.** Real operator combos don't fit a 3-role enum — 'dashboard yes, user-management no, AI-toggle yes' is routine. A pure role table would explode; pure capabilities lose the convenience of a sane default. The override model keeps both.

**Cost.** Two layers to reason about — preset + overrides — when debugging access. Default role presets had to be hard-coded so UX kept its onboarding shortcuts.
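
The resolution rule itself is one line of set algebra — preset, union extras, minus revocations. A sketch with invented capability keys (the real system defines 15):

```python
ROLE_PRESETS = {
    # Illustrative keys only — not the production capability list.
    "OWNER": {"dashboard", "users.manage", "ai.toggle", "kanban", "calendar"},
    "ADMIN": {"dashboard", "users.manage", "kanban", "calendar"},
    "OPERATOR": {"dashboard", "kanban", "calendar"},
}

def effective_capabilities(role: str, extra: set, revoked: set) -> set:
    """Preset gives the sane default; point overrides cover the long tail."""
    return (ROLE_PRESETS[role] | extra) - revoked

def has_capability(role: str, extra: set, revoked: set, key: str) -> bool:
    return key in effective_capabilities(role, extra, revoked)
```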

### 02 · 19 in-house agent tools on Bearer + unified envelope

**Decision.** Every AI tool call goes through /api/v1/agent/tools/... with its own Bearer token; every response shares the same {data, meta, errors} envelope.

**Why.** n8n runs without user context — session auth does not fit. A Bearer-authenticated edge gives the LLM a minimum-privilege surface; one envelope shape keeps the tool loop stable.

**Cost.** Several endpoints duplicate per audience (operator vs AI) — two parallel surfaces instead of one with dual auth.
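
The envelope itself is trivially small, which is the point — the three keys are from the case study, the helper functions are a sketch:

```python
def envelope(data=None, meta=None, errors=None) -> dict:
    """One response shape for all tool endpoints: the LLM loop never
    branches on format, only on content."""
    return {"data": data, "meta": meta or {}, "errors": errors or []}

def tool_error(code: str, message: str) -> dict:
    """Errors ride inside the same shape instead of bare HTTP bodies."""
    return envelope(errors=[{"code": code, "message": message}])
```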

### 03 · n8n for the no-code surface

**Decision.** Agent logic lives in 4 n8n workflows; LLM tool-calls fan out to the in-house tool API, callbacks return through a webhook.

**Why.** Constraint: a no-code surface for agent-flow edits. n8n was the obvious candidate at the time: popular, mature, with native tool-calling and webhook primitives. The alternative — a custom admin UI over an in-process tool-loop — would have doubled MVP scope.

**Cost.** Prompt and orchestration logic split between n8n flows and the in-house tool API; n8n-worker runs as a separate service. Reflective take in §06-Lessons.

### 04 · Research as a 4-block state machine

**Decision.** Per-project research splits into 4 blocks (core, market & demand, legal & ops, dynamics & news), each cycling EMPTY → PENDING → READY | FAILED. n8n callbacks deliver each block back to the backend as it completes, and the project's research panel renders them block-by-block as they arrive.

**Why.** A deep research run takes several minutes — blocking the LLM tool-loop would freeze the Telegram chat. Per-block UX shows partial results instead.

**Cost.** Each block runs its own prompt and search sources — research-pipeline maintenance grows linearly with block count.
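
A sketch of the per-block cycle — block names from the case study, callback-handler shape assumed:

```python
from enum import Enum

class BlockState(str, Enum):
    EMPTY = "empty"
    PENDING = "pending"
    READY = "ready"
    FAILED = "failed"

BLOCKS = ("core", "market_demand", "legal_ops", "dynamics_news")

def start_research() -> dict:
    """All four blocks dispatched at once; each returns independently."""
    return {b: BlockState.PENDING for b in BLOCKS}

def apply_callback(research: dict, block: str, ok: bool) -> dict:
    """One n8n callback flips exactly one block, so the panel can render
    partial results while the rest are still PENDING."""
    if research[block] is not BlockState.PENDING:
        raise ValueError(f"block {block} is not awaiting a callback")
    research[block] = BlockState.READY if ok else BlockState.FAILED
    return research
```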

## Stack

| | |
|---|---|
| **Backend** | Python · Django · DRF · Celery · structlog · aiogram |
| **Frontend** | React · Vite · Tailwind 4 · Radix UI · MapLibre GL · react-router 7 |
| **Infra** | PostgreSQL · Redis · Docker Compose |
| **AI · orch.** | Gemini · OpenAI · n8n (model-agnostic) |
| **Agent surface** | 19 in-house tool endpoints + 5 callbacks · Bearer auth · {data, meta, errors} envelope |
| **Scale** | 12 Django apps · 224 frontend modules · 20 pytest modules · ~56K LOC |

## Lessons & status

### Carry forward

- Capability overrides on top of role presets — base role gives a sane default, per-user extra/revoked keys cover the long tail without inventing new roles.
- Single `{data, meta, errors}` envelope on every API response — a boring tool-loop is a feature, not a bug.
- Operator console UX — kanban + chat + calendar in one shell, with on-request handoff to a human inside the same thread.
- Audit-trail discipline — X-Request-ID propagated through structlog at every cross-service hop. Without it, debugging an LLM tool-loop falls apart.
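
The propagation mechanic can be sketched without the logging framework — the real project binds the id into structlog's context, but the pattern is the same (function names here are illustrative):

```python
import contextvars
import uuid

# The production code binds this into structlog's contextvars processor;
# plain contextvars shows the same propagation mechanic.
REQUEST_ID = contextvars.ContextVar("request_id", default="")

def ensure_request_id(incoming=None) -> str:
    """Reuse the caller's X-Request-ID when present; mint one at the edge."""
    rid = incoming or uuid.uuid4().hex
    REQUEST_ID.set(rid)
    return rid

def log_event(event: str, **fields) -> dict:
    """Every log line carries the id, so cross-service hops stay stitchable."""
    return {"event": event, "request_id": REQUEST_ID.get(), **fields}
```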

### Would change

- Django was picked by reflex. The surface actually used — ORM, routing, app structure — stayed thin, while DRF added boilerplate around every endpoint and class-based permissions fought the capability-string model. FastAPI + Pydantic + async SQLAlchemy would have been a closer fit for the API-only shape this project ended up with.
- n8n didn't earn its keep — most flow edits landed in prompts or the tool API anyway. Today I'd run an in-process tool-loop and ship a thin custom UI only where edits actually happen.
- 9 Docker services were too many for MVP. Today: n8n + n8n-worker drop out (in-process tool-loop), celery-beat folds into the Celery worker's periodic schedule, and the production frontend becomes static files served behind the backend rather than a dev container. Lands at 4 services: postgres, redis, backend, celery.

Built, deployed, handed off at engagement close; code walkthrough on request.

---

Source: https://ilyadev.xyz/cases/ai-crm (HTML) · /cases/ai-crm.md (this file)
Up next: 02 — Restaurant Stock AI Agent → https://ilyadev.xyz/cases/ai-warehouse.md
Index: https://ilyadev.xyz/llms.txt — full case-study list
Author: Ilya Kazantsev — https://ilyadev.xyz/index.md
