structuredClone on a Svelte 5 reactive Proxy throws DataCloneError during
component init, causing MergeProposalPanel to silently fail to mount.
Replace with $state.snapshot, which is the documented way to deep-copy a
reactive prop into local editable state.
Frontend budget was 180s — equal to the backend goroutine cap — so a race
determined which side timed out first. Bumped to 270s to guarantee the frontend
outlasts the backend's 3-minute window.
Added explicit null guard on result.proposal: if the LLM ever returns a
done-status without a proposal body the UI now surfaces a clear error instead
of silently assigning undefined (which kept the panel hidden with no feedback).
MergeProposalPanel also falls back to field_merges ?? {} so Object.keys
never receives null when the model returns a null map.
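A minimal sketch of that guard — the MergeFieldDecision shape here is simplified to illustrate the fallback, not the full proposal type:

```typescript
// Simplified stand-in for the real MergeFieldDecision type.
type MergeFieldDecision = { source: "a" | "b" | "combined" };

type MergeProposal = {
  field_merges?: Record<string, MergeFieldDecision> | null;
};

// Normalize a possibly-null map before iterating, so Object.keys/entries
// never sees null or undefined when the model omits the field.
function mergeFieldEntries(
  proposal: MergeProposal
): [string, MergeFieldDecision][] {
  return Object.entries(proposal.field_merges ?? {});
}
```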
POST /admin/markets/:id/merge-plan now returns 202 + job_id immediately
and runs the Gemini advisor in a detached goroutine. Frontend polls
GET .../merge-plan/:job_id until done, with backoff up to 3 minutes.
Adds in-memory job registry (keyed map + RWMutex, 5-min TTL sweep) and
handler tests covering the full pending→done and error paths.
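The registry idea, sketched in TypeScript (the real implementation is a Go map behind an RWMutex; the method names here are illustrative, the 5-minute TTL is from the description above):

```typescript
type JobStatus = "pending" | "done" | "error";
type Job = { status: JobStatus; result?: unknown; updatedAt: number };

class JobRegistry {
  private jobs = new Map<string, Job>();
  constructor(private ttlMs = 5 * 60 * 1000) {}

  set(id: string, status: JobStatus, result?: unknown, now = Date.now()) {
    this.jobs.set(id, { status, result, updatedAt: now });
  }

  get(id: string): Job | undefined {
    return this.jobs.get(id);
  }

  // Periodic sweep: drop entries whose last update is older than the TTL.
  sweep(now = Date.now()) {
    for (const [id, job] of this.jobs) {
      if (now - job.updatedAt > this.ttlMs) this.jobs.delete(id);
    }
  }
}
```

Passing `now` explicitly keeps the sweep testable without a clock.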
All serverFetch calls were going to https://api.marktvogt.de (public
gateway), creating a second nginx hop for every SSR operation. Slow LLM
calls (merge-plan, research-plan) hit the 60s proxy_read_timeout.
- Add PRIVATE_API_BASE_URL=http://marktvogt-backend to web Helm config
- serverFetch now builds SERVER_API_BASE from PRIVATE_API_BASE_URL at
runtime (falls back to PUBLIC_API_BASE_URL when not set)
- apiFetch accepts optional baseURL param; client-side calls unchanged
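The base-URL selection reduces to a one-line preference order; the helper name here is illustrative, the env variable names are the ones listed above:

```typescript
// Prefer the in-cluster service URL; fall back to the public gateway
// when PRIVATE_API_BASE_URL is unset (e.g. local dev).
function resolveServerApiBase(env: {
  PRIVATE_API_BASE_URL?: string;
  PUBLIC_API_BASE_URL: string;
}): string {
  return env.PRIVATE_API_BASE_URL || env.PUBLIC_API_BASE_URL;
}
```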
Merge-plan and research-plan both call Gemini which can take >60s.
The default gateway timeout was killing connections with 504.
- Web HTTPRoute: add /admin/ rule with 120s request+backendRequest timeout
- Backend HTTPRoute: add /api/v1/admin/markets/ rule with 120s timeout
- MergePlan handler: add 110s context deadline for graceful degradation
before the gateway cuts the upstream connection
Gemini returned field_merges as an array when given no structure constraint,
causing json.Unmarshal to fail with "cannot unmarshal array into Go struct
field of type map[string]mergeFieldDecision".
- Pass merge_advisor_schema.json via JSONSchema instead of bare JSONMode
- Add parseFieldMerges() that accepts both object and array LLM formats
- Validate target_id is one of the two input market IDs after parsing
- Fix schemaFromMap: minimum/maximum are supported by genai.Schema v1.54
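A TypeScript sketch of the parseFieldMerges idea (the real code is Go). Assumption: when the model emits the array form, each element names its field in a `field` property — the key the Go parser actually reads may differ:

```typescript
type Decision = { source: string };

// Accept both the object form {field: decision, ...} and the array form
// [{field, ...decision}, ...] and normalize to a map.
function parseFieldMerges(
  raw: Record<string, Decision> | (Decision & { field: string })[]
): Record<string, Decision> {
  if (Array.isArray(raw)) {
    const out: Record<string, Decision> = {};
    for (const { field, ...rest } of raw) out[field] = rest;
    return out;
  }
  return raw;
}
```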
LLM tiebreaker can take several seconds; return the duplicates fetch
as an unawaited Promise so the page renders immediately with market
data. Template uses {#await} to render the panel when it resolves.
The Plan handler returns {plan, research_result} directly without a
data wrapper. apiFetch casts the body to ApiResponse<T>, so res.data
was undefined, json(undefined) produced an empty response, and the
client either crashed (JSON.parse) or silently got a null plan.
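One way to tolerate both body shapes, as a sketch — the actual fix may instead type the Plan endpoint without the ApiResponse wrapper:

```typescript
type ApiResponse<T> = { data: T };

// A wrapped body carries a `data` property; a bare body is returned as-is.
// (Ambiguous if T itself has a `data` field — acceptable for these DTOs.)
function unwrapBody<T>(body: T | ApiResponse<T>): T {
  if (body && typeof body === "object" && "data" in body) {
    return (body as ApiResponse<T>).data;
  }
  return body as T;
}
```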
H1: Drop the empty string from the enricher_schema.json category enum —
Gemini rejects it with "enum[7]: cannot be empty" (Error 400). Remove category
from required so the model can omit it when no category fits.
H2: Research-plan/apply client reads response as text before
JSON.parse; empty or HTML error bodies now surface the actual HTTP
status instead of crashing with "unexpected end of data".
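The text-first parse from H2, sketched with an illustrative helper name:

```typescript
// Read the body as text first so empty or HTML error pages surface the
// HTTP status instead of a JSON.parse crash.
function parseJsonBody(text: string, status: number): unknown {
  if (!text.trim()) {
    throw new Error(`Empty response body (HTTP ${status})`);
  }
  try {
    return JSON.parse(text);
  } catch {
    // HTML error pages and truncated bodies land here.
    throw new Error(`Non-JSON response (HTTP ${status})`);
  }
}
```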
I: Dedup UI for approved markets:
- DuplicatesPanel: LLM verdict pills (same/not-same, confidence),
llm_reason, per-candidate Merge-planen button
- MergeProposalPanel: summary, confidence, flags, per-field
decisions with editable source radio (a/b/combined), current
value context, confirm() before destructive apply
- Two SvelteKit proxy routes: merge-plan/ and merge-into/[targetId]/
- [id]/+page.svelte: wired with full state; navigates to survivor
after successful merge
- [id]/+page.server.ts: load duplicates for all non-merged editions
(was gated to status=rumored only)
- types.ts: DuplicateMarket gains llm_same/llm_confidence/llm_reason;
add MarketMergeProposal + MergeFieldDecision; add merged to
EditionStatus
MergeAdvisor calls Gemini with a German system prompt to propose how to merge
two duplicate market editions. It guards against confident non-duplicates via
ErrNotDuplicate (same=false AND confidence>0.5).
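The guard condition in TypeScript form (the real check lives in Go and returns ErrNotDuplicate): reject only when the model is both negative and reasonably confident, so low-confidence "not same" verdicts still yield a proposal for human review.

```typescript
function isConfidentNonDuplicate(same: boolean, confidence: number): boolean {
  return !same && confidence > 0.5;
}
```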
POST /:id/merge-plan generates a MarketMergeProposal (read-only).
POST /:id/merge-into/:target_id applies the merge: updates target fields,
marks source as status=merged with merged_into_id set, reparents discovered_markets,
and writes a market_merge_log audit row — all in one transaction.
AdminHandler gains advisor and updated constructor. VersionMergeAdvisor added
to pkg/ai versions.
Migration 000026 adds merged_into_id + merged_at to market_editions and
extends the status CHECK constraint to include 'merged'. FindSimilar now
excludes merged editions from candidates.
AdminHandler gains a SimilarityClassifier field; FindDuplicates enriches
the top 5 pg_trgm candidates with LLM same/confidence/reason verdicts.
simClassifier from routes.go is passed through to avoid a second instance.
schemaFromMap now logs a warning when keys genai.Schema ignores
(pattern, minLength, $ref, etc.) are present, keeping the workaround
visible. LLMEnricher skips Google Search grounding when total scraped
chars >= 1500, conserving free-tier quota on content-rich pages.
promptHashShort(system+"\x00"+user)[:12] computed on ErrSchemaViolation
and attached to ProviderError.PromptHash. research.go schema-violation
log now includes prompt_hash for cross-referencing ai_usage rows.
Migration 000024 adds prompt_version column + partial index.
PromptVersion plumbed through ChatRequest -> UsageEvent ->
buildUsageEvent -> settings INSERT/SELECT. Version constants
defined in ai/versions.go and wired at all three call sites.
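A TypeScript rendering of promptHashShort (the original is Go): hash the system and user prompts joined by a NUL separator and keep the first 12 hex chars. Assumption: SHA-256 is the underlying hash.

```typescript
import { createHash } from "node:crypto";

// The NUL byte keeps ("a","b") and ("ab","") from colliding.
function promptHashShort(system: string, user: string): string {
  return createHash("sha256")
    .update(system + "\x00" + user)
    .digest("hex")
    .slice(0, 12);
}
```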
- Add confidence scale (0.95-1.00 / 0.70-0.90 / 0.50-0.70 / 0.00-0.50)
with four annotated few-shot examples to the similarity system prompt
- Add two Ronneburg real-world pairs to similarity.json: descriptive-prefix
swap and low-trigram-overlap rename, both expected same=true
- Replace 3-example inline comment with 7-label taxonomy block so the
model knows all valid categories instead of guessing from partial hints
- Tighten description constraint to 60-220 chars with explicit word bans
- Mark opening_hours as a rough guide, not authoritative for booking
Adds ListEnrichedNeedingLLM to the Repository interface and RunLLMEnrichBacklog
to Service, then wires RunLLMEnrichBacklog into the post-crawl goroutine so
LLM enrichment runs automatically after every crawl without manual triggers.
Replaces JSONMode:true with an embedded enricher_schema.json so Gemini
returns structured output against a typed schema, preventing empty {} responses.
Adds an all-empty warning when the LLM returns a valid but blank payload.
applyResearch() populated form fields but never triggered a save.
After applying all suggestions and appending the KI-Recherche note,
call requestSubmit() on form[action="?/save"] so the data is persisted.
Deterministic output is preferable for extraction and classification
tasks. Temperature=0.1 also enables the if-gate in gemini.go that
forwards the value to the Gemini API config.
Also add llm_enricher.go (renamed from mistral.go) with the temperature
field applied.
The researcher prompt incorrectly told the model to open/fetch URLs; it
only ever sees pre-fetched quellen[].text. Replace all "öffnen" and
"aufgerufen" references with instructions to work from quellen[].
Add ZielJahr to research.Input and the JSON user-prompt payload so the
model has an explicit target year separate from recherche_datum (wall-clock
time of the request). Use ziel_jahr in the prompt instead of deriving the
year from recherche_datum. Fix the search query in the orchestrator to use
ZielJahr rather than RechercheDatum.Year().
Rename mistral.go → llm_enricher.go and mistral_test.go →
llm_enricher_test.go; update all test function names and stale model
strings (mistral-large-latest → gemini-2.5-flash-lite); drop Ollama
block from .env; mark superseded planning specs; update provider
references in planning docs and CLAUDE.md to Google Gemini.
- Add logo_url as a distinct DB column (migration 000023) and expose it
through model, DTOs, repository, service, and all frontend types
- Update KI-Recherche prompt and both JSON schemas: logo_url field rule,
clarified bild_url rule, hinweis now mandatory non-null (maxLength 200)
- imageURLReachable now also verifies Content-Type: image/* for both
bild_url and logo_url before surfacing suggestions
- MarketCard: image-first with cover style, logo fallback with contain
style, city-initial placeholder as last resort
- /markt/[slug]: hero section follows same image→logo→nothing precedence;
OG/JSON-LD updated accordingly
- Map view on search page: pagination hidden, map height increased to 600px
- Fix einstellungen Svelte warning: wrap showKeyInput init in untrack()
- Add missing= query param (description/image/website/location) to
AdminSearchParams; both AdminSearch and AdminSearchGrouped apply the
SQL condition
- Add has_description/has_image/has_website/has_location booleans to
AdminMarketSummary, populated in ToAdminSummary from existing Market fields
- Dropdown filter in the admin market list routes to the missing param
- Coloured dot indicators per row (amber=image, orange=desc, red=website,
purple=location) with title tooltips
SDK's modelFromMldev maps _self to tunedModelInfo for every model,
making the nil check always true and silently dropping all results.
Name-based filtering is the correct gate; tuned models are excluded
by the gemini- prefix requirement.
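The name-based gate, sketched under the assumption that the SDK returns names like "models/gemini-2.5-flash" and "tunedModels/…":

```typescript
type ModelInfo = { name: string };

// Strip the "models/" resource prefix and keep only gemini-* entries;
// tuned models never match because their prefix is "tunedModels/".
function listGeminiModels(models: ModelInfo[]): string[] {
  return models
    .map((m) => m.name.replace(/^models\//, ""))
    .filter((n) => n.startsWith("gemini-"));
}
```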
- MarketCard: object-fit contain with padding instead of cropped 16:9;
city-initial placeholder so all cards are uniform height in the grid;
imgFailed state falls back to placeholder on broken URLs
- Admin market detail: show image thumbnail + Bild-URL link in Details
- Admin edit form: live image preview below Bild-URL input
- Public detail page: contain + max-height 250px instead of cover crop
- onerror handlers hide broken images on public card and detail pages
- Time inputs changed to text + pattern for reliable 24h display
- Prompt now requires year verification before extracting any field
- Opening times and prices from prior years must be nulled with a hint
- imageURLReachable does a HEAD request (5s timeout) and strips the
image_url from research results when the resource returns 4xx/5xx
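The accept/reject rule of imageURLReachable factored into a pure predicate (the real check is Go and performs the HEAD request itself); only the decision logic is sketched here:

```typescript
function isUsableImage(status: number, contentType: string | null): boolean {
  if (status >= 400) return false; // 4xx/5xx: strip the URL
  return (contentType ?? "").startsWith("image/");
}
```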
All Input fields used market?.xxx as initial value, so a Svelte re-render
triggered by researchResult=null would reset them back to the server-loaded
value, wiping every applied research suggestion.
Replace all research-applicable fields with $state variables and route all
apply calls through setField() instead of querySelector+dispatch. Country
name->code mapping added for LLM-returned values like "Deutschland" -> "DE".
writeReverseResult also updated to use setField.
Description wasn't being applied because querySelector-then-assign runs before
Svelte's reactive flush of researchResult=null, which resets the textarea to
its initial market.description value. Fix: reactive state + exported setter
(same pattern as setHours/setAdmission).
Also add markt_name to felder in both schemas and the prompt so the LLM can
suggest a name correction. Name suggestions are gated to extraktion=direkt
(high confidence only) and guarded on the frontend with setName().
The beschreibung field was schema-required but absent from ## Felder,
causing the LLM to always return null. Add explicit extraction instruction.
Also reword the opening line which said "Keine Beschreibungstexte" —
contradicting the field we actually want.
On apply, append "KI-Recherche: DD.MM.YYYY HH:MM" to admin_notes so
there's a permanent audit trail of when research was run.
Researcher emits {datum_von,von,bis} for opening hours and [{name,betrag,waehrung}]
for admission info — both incompatible with the form's {day,open,close} and
AdmissionInfo shapes. Normalize on apply; extend normalizeDayName to handle
ISO YYYY-MM-DD dates the LLM produces. ResearchPanel renders both LLM and
form-native formats with dedicated table/list views.
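The ISO-date extension to normalizeDayName, sketched — plain day names pass through, YYYY-MM-DD maps to a German weekday; the exact output names used by the real form code may differ:

```typescript
const WEEKDAYS = [
  "Sonntag", "Montag", "Dienstag", "Mittwoch",
  "Donnerstag", "Freitag", "Samstag",
];

function normalizeDayName(value: string): string {
  if (/^\d{4}-\d{2}-\d{2}$/.test(value)) {
    // Parse as UTC midnight so the weekday is timezone-independent.
    return WEEKDAYS[new Date(value + "T00:00:00Z").getUTCDay()];
  }
  return value;
}
```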
Gemini rejects requests that set both GoogleSearchRetrieval and
response_schema. The orchestrator already provides web content via
SearxNG + scraping, so grounding is unnecessary here.
factory.go: treat DB errors from GetGeminiAPIKey as "no key" and fall
back to the GEMINI_API_KEY env var instead of propagating the error
(which caused a panic/crash when migrations haven't been run yet).
gemini.go: ListModelNames returns a ProviderError when the client is
nil so that connected=false is reported correctly in GetAI instead of
the previous nil,nil→connected=true false positive.
+page.server.ts: catch fetch errors so a backend outage doesn't 500 the
whole page. +page.svelte: guard all data.ai access with {#if data.ai}
so the page renders an error banner instead of crashing on null access.
When a freshly-inserted discovered_market has a matched series, konfidenz
"hoch" (≥2 sources), and both start/end dates present, Accept() is called
inline with a nil reviewer (mapped to NULL reviewed_by) so the row goes
straight to accepted without manual review.
CrawlSummary gains auto_accepted counter; slog summary logs it.
MarkAccepted / Service.Accept now take *uuid.UUID for reviewer so nil
cleanly maps to NULL in the DB column (already nullable).
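The auto-accept gate in TypeScript form (the real logic is Go); field names mirror the description above — a matched series, konfidenz "hoch", and both dates present:

```typescript
type Discovered = {
  seriesId: string | null;
  konfidenz: string;
  startDate: string | null;
  endDate: string | null;
};

function shouldAutoAccept(d: Discovered): boolean {
  return (
    d.seriesId !== null &&
    d.konfidenz === "hoch" &&
    d.startDate !== null &&
    d.endDate !== null
  );
}
```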