Model used for semantic search and conversation indexing
{#if !modelsState.hasEmbeddingModel}
No embedding model installed. Run <code>ollama pull {settingsState.embeddingModel}</code> to enable semantic search.
Selected model not installed. Run <code>ollama pull {settingsState.embeddingModel}</code> or select an installed model.
Installed: {modelsState.embeddingModels.map(m => m.name).join(', ')}
{:else}
Model installed and ready.
{/if}
{/if}
Auto-Compact
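The banner logic above distinguishes "no embedding model at all" from "selected model not installed". A minimal sketch of that classification, assuming the installed-model names come from Ollama's `/api/tags` listing fetched elsewhere (`classifyModels` and its "embed"-in-the-name heuristic are illustrative, not the component's actual state module):

```typescript
// Sketch: classify installed models for the embedding-model banner.
// `installed` would come from Ollama's /api/tags endpoint.
function classifyModels(installed: string[], embeddingModel: string) {
  // Heuristic: treat models whose name mentions "embed" as embedding models,
  // e.g. "nomic-embed-text" or "mxbai-embed-large".
  const embedders = installed.filter((name) => name.includes("embed"));
  return {
    hasEmbeddingModel: embedders.length > 0,                 // any embedder present
    isSelectedInstalled: installed.includes(embeddingModel), // the configured one
  };
}
```

The two booleans map directly onto the two warning messages: the first branch fires when `hasEmbeddingModel` is false, the second when only `isSelectedInstalled` is false.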
Automatically summarize older messages when context usage is high
Trigger compaction when context usage exceeds this percentage
<input type="range" oninput={(e) => settingsState.updateAutoCompactThreshold(parseInt(e.currentTarget.value))} class="w-full accent-emerald-500" />
Number of recent messages to keep intact (not summarized)
<input type="range" oninput={(e) => settingsState.updateAutoCompactPreserveCount(parseInt(e.currentTarget.value))} class="w-full accent-emerald-500" />
Enable auto-compact to automatically manage context usage. When enabled, older messages are summarized once context usage exceeds your threshold.
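The auto-compact behavior described above can be sketched as a pure decision function: when context usage exceeds the threshold percentage, older messages are split off for summarization while the most recent `preserveCount` stay intact. All names here are illustrative assumptions, not the app's actual implementation:

```typescript
// Sketch of the auto-compact split: which messages to summarize vs. keep.
interface Message {
  role: string;
  content: string;
}

function splitForCompaction(
  messages: Message[],
  usedTokens: number,
  contextWindow: number,
  thresholdPercent: number, // e.g. 80 → compact above 80% usage
  preserveCount: number,    // recent messages never summarized
): { toSummarize: Message[]; toKeep: Message[] } {
  const usagePercent = (usedTokens / contextWindow) * 100;
  if (usagePercent <= thresholdPercent || messages.length <= preserveCount) {
    // Below the threshold (or too few messages): leave history untouched.
    return { toSummarize: [], toKeep: messages };
  }
  const cut = messages.length - preserveCount;
  return { toSummarize: messages.slice(0, cut), toKeep: messages.slice(cut) };
}
```

The slider values feed straight in: the threshold slider supplies `thresholdPercent` and the preserve slider supplies `preserveCount`.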
{/if}
Use Custom Parameters
Override model defaults with custom values
{PARAMETER_DESCRIPTIONS.temperature}
<input type="range" oninput={(e) => settingsState.updateParameter('temperature', parseFloat(e.currentTarget.value))} class="w-full accent-orange-500" />
{PARAMETER_DESCRIPTIONS.top_k}
<input type="range" oninput={(e) => settingsState.updateParameter('top_k', parseInt(e.currentTarget.value))} class="w-full accent-orange-500" />
{PARAMETER_DESCRIPTIONS.top_p}
<input type="range" oninput={(e) => settingsState.updateParameter('top_p', parseFloat(e.currentTarget.value))} class="w-full accent-orange-500" />
{PARAMETER_DESCRIPTIONS.num_ctx}
<input type="range" oninput={(e) => settingsState.updateParameter('num_ctx', parseInt(e.currentTarget.value))} class="w-full accent-orange-500" />
Using model defaults. Enable custom parameters to adjust temperature, sampling, and context length.
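The toggle above maps onto the request payload in a simple way: the four keys (`temperature`, `top_k`, `top_p`, `num_ctx`) are Ollama's own option names, and when custom parameters are disabled the request omits them so the model's defaults apply. A hedged sketch of that mapping (the settings shape and `buildOptions` are assumptions for illustration):

```typescript
// Sketch: build the `options` object for an Ollama chat/generate request.
interface CustomParams {
  temperature: number; // randomness; higher → more varied output
  top_k: number;       // sample only from the k most likely tokens
  top_p: number;       // nucleus sampling cutoff, 0–1
  num_ctx: number;     // context window length in tokens
}

function buildOptions(useCustom: boolean, params: CustomParams): Partial<CustomParams> {
  // When the toggle is off, send no options so the model defaults are used.
  return useCustom ? { ...params } : {};
}
```

This keeps the "Using model defaults" state honest: an empty `options` object never overrides anything on the server side.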
{/if}
Set default system prompts for specific models. When no other prompt is selected, the model's default will be used automatically.
{#if isLoadingModelInfo}
Loading model information…
{:else if !modelInfo}
No models available. Make sure Ollama is running.
{:else}
Embedded: {modelInfo.systemPrompt}
{/if}
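The prompt-precedence rule stated above ("when no other prompt is selected, the model's default is used") can be sketched as a one-line resolver. The function name, the per-model defaults map, and the nullable selection are illustrative assumptions:

```typescript
// Sketch: resolve which system prompt to send for a given model.
// Precedence: explicitly selected prompt → model's default → none.
function resolveSystemPrompt(
  selectedPrompt: string | null,
  modelDefaults: Record<string, string>,
  model: string,
): string | null {
  return selectedPrompt ?? modelDefaults[model] ?? null;
}
```

Using `??` (rather than `||`) means only a truly absent selection falls through to the model's default.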