The Problem: Property Search Is Stuck in the Filter Age
Every real estate website gives you the same experience: dropdowns for location, sliders for price, checkboxes for features. It works—until your search criteria don't fit neatly into form fields.
Real questions sound like:
- "Find me a quiet apartment near the city center, under 200k, that doesn't need renovation"
- "What's a fair price for a 3-bedroom house in Split with parking?"
- "Show me the best deals this week—houses that are priced below market value"
- "Compare average prices between Zagreb and Varazdin for new construction"
No filter form handles these. You'd need to run multiple searches, cross-reference results, and do the analysis yourself.
What if you could just ask?
What We Built: AI Property Assistant
We built a conversational AI that sits on top of a live real estate database. Users ask questions in natural language—in English or Croatian—and the chatbot searches listings, runs price predictions, finds deals, and delivers market analysis, all through conversation.
Sample Conversations
User: "Show me apartments in Zagreb under 150,000 euros with at least 2 bedrooms"
AI: Found 23 apartments matching your criteria. Here are the top 5 by value:
- 67m2, Trnje, 2BR — 128,000 EUR (estimated value: 142,000 EUR) — Potential deal
- 58m2, Maksimir, 2BR — 135,000 EUR...
User: "What would a 75m2 new construction apartment on the 4th floor in Split cost?"
AI: Based on our ML model analysis of 1,200+ comparable properties:
- Predicted price: 185,000 - 205,000 EUR
- Price per m2: 2,467 - 2,733 EUR
- Confidence: High (based on strong data coverage for this area)
User: "Koliko kosta renovacija kuce od 120m2?" (Croatian: How much does renovating a 120m2 house cost?)
AI: Procjena troškova renovacije za kuću od 120m2... (Croatian: "Renovation cost estimate for a 120m2 house..."; responds in Croatian with cost estimates by scope)
How It Works: LLM Tool Calling Architecture
The Core Pattern
The chatbot uses function calling (tool use)—a pattern where the LLM decides which backend tools to invoke based on the user's message, then interprets the results conversationally.
User message → LLM analyzes intent → Selects tool(s) → Calls backend API → LLM formats response
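Here is a minimal sketch of that round trip, assuming an OpenAI-compatible client pointed at OpenRouter. The model slug, and the `TOOL_SCHEMAS` and `run_tool` names, are placeholders (both are sketched further down), not the production code.

```python
# Minimal single-tool round trip: the LLM picks a tool, the backend runs it,
# and the LLM turns the raw result into a conversational answer.
import json
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_KEY")
MODEL = "google/gemini-2.5-flash"  # illustrative model slug

def answer(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]

    # 1. The LLM reads the message and decides whether a tool is needed.
    reply = client.chat.completions.create(
        model=MODEL, messages=messages, tools=TOOL_SCHEMAS
    ).choices[0].message

    if reply.tool_calls:
        call = reply.tool_calls[0]
        # 2. The backend executes the tool with the structured arguments.
        result = run_tool(call.function.name, json.loads(call.function.arguments))
        messages += [
            reply,
            {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)},
        ]
        # 3. The LLM turns the raw data into a human-readable response.
        reply = client.chat.completions.create(
            model=MODEL, messages=messages, tools=TOOL_SCHEMAS
        ).choices[0].message

    return reply.content
```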
Available Tools
The LLM has access to 7 specialized tools:
| Tool | What It Does | Example Trigger |
|---|---|---|
| search_listings | Query the database with filters | "Find apartments in Zagreb..." |
| predict_price | Run ML model on property features | "What's a fair price for..." |
| find_deals | Surface underpriced properties | "Show me the best deals..." |
| get_market_statistics | Regional price data and trends | "Average price in Split..." |
| find_similar_properties | Comparable property search | "Properties similar to this one..." |
| estimate_renovation_cost | Cost estimation by scope | "How much to renovate..." |
| get_neighborhood_info | Location-specific insights | "Tell me about Maksimir..." |
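For illustration, here is what one tool declaration could look like in the OpenAI-style function schema that OpenRouter accepts. The parameter names (`city`, `property_type`, `max_price`, `min_bedrooms`) are assumptions about the real schema, not a copy of it.

```python
# Hypothetical schema for the search_listings tool. A precise description and
# explicit, typed parameters are what let the LLM pick the right tool and fill it in.
SEARCH_LISTINGS_SCHEMA = {
    "type": "function",
    "function": {
        "name": "search_listings",
        "description": (
            "Query live real estate listings with structured filters. "
            "Use for requests like 'find apartments in Zagreb under 150k'."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City or region, e.g. 'Zagreb'"},
                "property_type": {"type": "string", "enum": ["apartment", "house"]},
                "max_price": {"type": "number", "description": "Maximum price in EUR"},
                "min_bedrooms": {"type": "integer"},
            },
            "required": ["city"],
        },
    },
}

TOOL_SCHEMAS = [SEARCH_LISTINGS_SCHEMA]  # plus the other six declarations
```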
How Tool Selection Works
- User sends message in any language
- Gemini 2.5 Flash analyzes the message and determines which tool(s) to call
- Parameters are extracted from the natural language (e.g., "under 150k" → max_price: 150000)
- Backend executes the tool with structured parameters (see the dispatch sketch after this list)
- LLM receives raw data and formats a human-readable response
- Multi-turn context maintains conversation state for follow-up questions
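A sketch of that dispatch step, with the backend exposed as a FastAPI service. The endpoint paths and base URL are assumptions made for illustration; only the tool names come from the table above.

```python
# Illustrative dispatch layer: the tool name chosen by the LLM and the structured
# arguments it extracted are forwarded to the backend API.
import httpx

API_BASE = "http://localhost:8000"  # FastAPI service

TOOL_ENDPOINTS = {
    "search_listings": "/tools/search-listings",
    "predict_price": "/tools/predict-price",
    "find_deals": "/tools/find-deals",
    "get_market_statistics": "/tools/market-statistics",
    "find_similar_properties": "/tools/similar-properties",
    "estimate_renovation_cost": "/tools/renovation-cost",
    "get_neighborhood_info": "/tools/neighborhood-info",
}

def run_tool(name: str, args: dict) -> dict:
    path = TOOL_ENDPOINTS.get(name)
    if path is None:
        # Unknown tool: return the error so the LLM can recover on the next turn.
        return {"error": f"Unknown tool: {name}"}
    response = httpx.post(f"{API_BASE}{path}", json=args, timeout=15.0)
    response.raise_for_status()
    return response.json()
```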
Multi-Tool Queries
Complex questions trigger multiple tools:
- "Find me a deal on a 2BR apartment in Zagreb and tell me what similar ones sold for" → calls find_deals + find_similar_properties
- "What's the average price in Split, and is this listing below average?" → calls get_market_statistics + predict_price
The LLM orchestrates these calls and synthesizes results into a single coherent answer.
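Extending the single-call sketch from earlier, a loop like this lets the model request several tools, possibly across several rounds, before writing the final answer. It reuses the `client`, `MODEL`, `TOOL_SCHEMAS`, and `run_tool` placeholders from the previous sketches.

```python
import json

def answer_multi(user_message: str) -> str:
    # Keep executing requested tools and feeding results back
    # until the model stops asking for tools.
    messages = [{"role": "user", "content": user_message}]
    reply = client.chat.completions.create(
        model=MODEL, messages=messages, tools=TOOL_SCHEMAS
    ).choices[0].message

    while reply.tool_calls:
        messages.append(reply)
        for call in reply.tool_calls:  # e.g. find_deals and find_similar_properties
            result = run_tool(call.function.name, json.loads(call.function.arguments))
            messages.append(
                {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}
            )
        reply = client.chat.completions.create(
            model=MODEL, messages=messages, tools=TOOL_SCHEMAS
        ).choices[0].message

    # The model now synthesizes all tool results into one coherent answer.
    return reply.content
```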
Technical Architecture
Stack
| Layer | Technology | Purpose |
|---|---|---|
| Frontend | Next.js 16, React 19 | Chat UI with streaming responses |
| LLM | Gemini 2.5 Flash via OpenRouter | Intent understanding + tool calling |
| API | FastAPI (Python) | Tool execution endpoints |
| ML Models | XGBoost, CatBoost, Random Forest | Price prediction behind predict_price |
| Database | Supabase (PostgreSQL) | Live listing data + user history |
| NLP | LangChain + OpenAI | Conversation management |
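To make the API layer concrete, here is a hypothetical FastAPI endpoint behind predict_price. The request fields, model file, and feature vector are illustrative assumptions; the real feature engineering (location encoding, comparables, and so on) is richer than this.

```python
# Hypothetical FastAPI endpoint behind the predict_price tool.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
price_model = joblib.load("models/price_xgboost.joblib")  # pre-trained regressor

class PredictRequest(BaseModel):
    city: str
    area_m2: float
    floor: int | None = None
    is_new_construction: bool = False

@app.post("/tools/predict-price")
def predict_price(req: PredictRequest) -> dict:
    # Minimal feature vector just to show the shape of the endpoint.
    features = [[req.area_m2, req.floor or 0, int(req.is_new_construction)]]
    point = float(price_model.predict(features)[0])
    return {
        "predicted_price_eur": round(point),
        # Return a range, not a single number, so the chatbot can present
        # honest uncertainty to the user.
        "range_eur": [round(point * 0.95), round(point * 1.05)],
    }
```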
Why Function Calling Over RAG?
For structured data queries, function calling outperforms RAG (retrieval-augmented generation):
| Approach | Strength | Weakness |
|---|---|---|
| RAG | Great for unstructured docs | Can't run precise database queries |
| Function Calling | Exact database queries, ML inference | Requires pre-built tools |
| Our Approach | Function calling for data + LLM for presentation | Best of both worlds |
The chatbot doesn't search through documents—it calls real APIs with structured parameters, getting exact results from live data.
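As a contrast with document retrieval, the backend side of search_listings might issue an exact, filtered query like this with the supabase-py client. Table and column names are assumptions about the real schema.

```python
# Illustrative structured query via supabase-py: exact filters, exact results.
from supabase import create_client

supabase = create_client("https://YOUR_PROJECT.supabase.co", "YOUR_ANON_KEY")

def search_listings(city: str, max_price: float | None = None,
                    min_bedrooms: int | None = None, limit: int = 20) -> list[dict]:
    query = supabase.table("listings").select("*").eq("city", city)
    if max_price is not None:
        query = query.lte("price_eur", max_price)
    if min_bedrooms is not None:
        query = query.gte("bedrooms", min_bedrooms)
    # Cheapest matches first, capped at `limit` rows.
    return query.order("price_eur").limit(limit).execute().data
```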
Multilingual Support
The LLM naturally handles Croatian and English without any translation layer:
- User writes in Croatian → LLM understands intent
- Tools return data in a structured format (language-agnostic)
- LLM responds in the same language the user used
No separate Croatian NLP model needed—the LLM handles language switching natively.
Building a Chatbot That Actually Helps
Lessons Learned
1. Define your tools precisely. Vague tool descriptions lead to wrong tool selection. Each tool has explicit parameter schemas and clear descriptions of when to use it.
2. Handle "I don't know" gracefully. When the database doesn't have enough data for a prediction, the chatbot says so—with an explanation of why and what data would help (see the sketch after this list).
3. Show your work. Users trust the chatbot more when it explains its reasoning: "Based on 47 similar properties in this area, the predicted range is..."
4. Keep context, but know when to reset. Multi-turn conversations need context ("show me cheaper ones" refers to the previous search), but the chatbot doesn't carry assumptions across unrelated queries.
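Lessons 2 and 3 in sketch form: the tool returns its evidence alongside the prediction, and declines with an explanation when the data is too thin. The threshold and field names are invented for illustration.

```python
# Return evidence with the prediction; say "I don't know" when data is thin.
MIN_COMPARABLES = 20

def describe_prediction(point_estimate: float, comparables: list[dict]) -> dict:
    if len(comparables) < MIN_COMPARABLES:
        # Not enough evidence: explain why and what would help, instead of guessing.
        return {
            "status": "insufficient_data",
            "reason": f"Only {len(comparables)} comparable properties were found; "
                      f"at least {MIN_COMPARABLES} are needed for a reliable estimate.",
            "suggestion": "Widen the search area or relax the construction-year filter.",
        }
    confidence = "high" if len(comparables) >= 100 else "medium"
    return {
        "status": "ok",
        "predicted_price_eur": round(point_estimate),
        "range_eur": [round(point_estimate * 0.95), round(point_estimate * 1.05)],
        "confidence": confidence,
        # "Show your work": the LLM can cite this sentence directly in its reply.
        "evidence": f"Based on {len(comparables)} comparable properties in this area.",
    }
```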
Results
| Metric | Value |
|---|---|
| Languages supported | Croatian + English (native, no translation) |
| Tools available | 7 specialized backend functions |
| Response time | 2-4 seconds for complex queries |
| Data freshness | Updated daily via automated scraping |
| Query complexity | Handles multi-tool queries naturally |
What Users Say:
"It's like having a real estate agent who's memorized every listing on the market and can do math instantly. I asked 'what's the best deal in Varazdin right now' and got a ranked list with price predictions in seconds."
— Property investor, Croatia
Who This Is For
This chatbot pattern works for any domain with structured data:
- Real estate platforms adding conversational property search
- E-commerce sites enabling natural language product discovery
- Financial services building portfolio analysis assistants
- Healthcare creating patient-facing appointment and information bots
- Any SaaS product wanting to give users a natural language interface to their data
The architecture—LLM + function calling + domain-specific tools—is the same pattern. The tools change; the approach doesn't.

