Memory Management

Model used for semantic search and conversation indexing

{#if !modelsState.hasEmbeddingModel}

No embedding model installed. Run ollama pull {settingsState.embeddingModel} to enable semantic search.

{:else}
	{@const selectedInstalled = modelsState.embeddingModels.some((m) => m.name.includes(settingsState.embeddingModel.split(':')[0]))}
	{#if !selectedInstalled}

Selected model not installed. Run ollama pull {settingsState.embeddingModel} or select an installed model.

Installed: {modelsState.embeddingModels.map(m => m.name).join(', ')}

{:else}

Model installed and ready.

	{/if}
{/if}
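The installed-model check above can be sketched as a standalone helper. `InstalledModel` and `isModelInstalled` are illustrative names, not part of the component; the matching logic mirrors the `selectedInstalled` expression: an Ollama model reference matches if any installed model's name contains its base name (the part before the `:` tag).

```typescript
// Minimal sketch of the installed-model check; names are illustrative.
interface InstalledModel {
	name: string;
}

function isModelInstalled(selected: string, installed: InstalledModel[]): boolean {
	// "nomic-embed-text:latest" and "nomic-embed-text" share the base name
	// before the ":" tag, so either spelling counts as installed.
	const baseName = selected.split(':')[0];
	return installed.some((m) => m.name.includes(baseName));
}
```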

Auto-Compact

Automatically summarize older messages when context usage is high

{#if settingsState.autoCompactEnabled}
{settingsState.autoCompactThreshold}%

Trigger compaction when context usage exceeds this percentage

<input type="range" min={AUTO_COMPACT_RANGES.threshold.min} max={AUTO_COMPACT_RANGES.threshold.max} value={settingsState.autoCompactThreshold} oninput={(e) => settingsState.updateAutoCompactThreshold(parseInt(e.currentTarget.value))} class="w-full accent-emerald-500" />
{AUTO_COMPACT_RANGES.threshold.min}% {AUTO_COMPACT_RANGES.threshold.max}%
{settingsState.autoCompactPreserveCount}

Number of recent messages to keep intact (not summarized)

<input type="range" min={AUTO_COMPACT_RANGES.preserveCount.min} max={AUTO_COMPACT_RANGES.preserveCount.max} value={settingsState.autoCompactPreserveCount} oninput={(e) => settingsState.updateAutoCompactPreserveCount(parseInt(e.currentTarget.value))} class="w-full accent-emerald-500" />
{AUTO_COMPACT_RANGES.preserveCount.min} {AUTO_COMPACT_RANGES.preserveCount.max}
{:else}

Enable auto-compact to automatically manage context usage. When enabled, older messages will be summarized when context usage exceeds your threshold.

{/if}
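The auto-compact behavior described above (trigger when usage exceeds the threshold, keep the most recent messages intact) can be sketched as two pure functions. This is a hypothetical outline, assuming token usage and context size are known; `shouldCompact`, `partitionForCompaction`, and the `Message` shape are illustrative, not the component's actual implementation.

```typescript
// Illustrative sketch of the auto-compact decision; names are assumptions.
interface Message {
	role: string;
	content: string;
}

// Trigger compaction once context usage exceeds the configured percentage.
function shouldCompact(usedTokens: number, contextSize: number, thresholdPercent: number): boolean {
	return (usedTokens / contextSize) * 100 > thresholdPercent;
}

// Older messages are candidates for summarization; the most recent
// `preserveCount` messages are kept intact.
function partitionForCompaction(
	messages: Message[],
	preserveCount: number
): { toSummarize: Message[]; toKeep: Message[] } {
	const splitAt = Math.max(0, messages.length - preserveCount);
	return { toSummarize: messages.slice(0, splitAt), toKeep: messages.slice(splitAt) };
}
```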

Model Parameters

Use Custom Parameters

Override model defaults with custom values

{#if settingsState.useCustomParameters}
{settingsState.temperature.toFixed(2)}

{PARAMETER_DESCRIPTIONS.temperature}

<input type="range" value={settingsState.temperature} oninput={(e) => settingsState.updateParameter('temperature', parseFloat(e.currentTarget.value))} class="w-full accent-orange-500" />
{settingsState.top_k}

{PARAMETER_DESCRIPTIONS.top_k}

<input type="range" value={settingsState.top_k} oninput={(e) => settingsState.updateParameter('top_k', parseInt(e.currentTarget.value))} class="w-full accent-orange-500" />
{settingsState.top_p.toFixed(2)}

{PARAMETER_DESCRIPTIONS.top_p}

<input type="range" value={settingsState.top_p} oninput={(e) => settingsState.updateParameter('top_p', parseFloat(e.currentTarget.value))} class="w-full accent-orange-500" />
{settingsState.num_ctx.toLocaleString()}

{PARAMETER_DESCRIPTIONS.num_ctx}

<input type="range" value={settingsState.num_ctx} oninput={(e) => settingsState.updateParameter('num_ctx', parseInt(e.currentTarget.value))} class="w-full accent-orange-500" />
{:else}

Using model defaults. Enable custom parameters to adjust temperature, sampling, and context length.

{/if}
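The four parameters above typically map onto the `options` object of an Ollama request (`temperature`, `top_k`, `top_p`, and `num_ctx` are real Ollama option names). A minimal sketch of that mapping, assuming a settings shape like the one this component reads; `buildOllamaOptions` and `ParameterSettings` are illustrative names:

```typescript
// Sketch of mapping custom parameters onto Ollama request options.
interface ParameterSettings {
	useCustomParameters: boolean;
	temperature: number; // randomness: higher values give more varied output
	top_k: number; // sample only from the k most likely tokens
	top_p: number; // nucleus sampling probability cutoff
	num_ctx: number; // context window size in tokens
}

function buildOllamaOptions(s: ParameterSettings): Record<string, number> {
	// With custom parameters disabled, send no overrides so model defaults apply.
	if (!s.useCustomParameters) return {};
	return { temperature: s.temperature, top_k: s.top_k, top_p: s.top_p, num_ctx: s.num_ctx };
}
```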

Model-Prompt Defaults

Set default system prompts for specific models. When no other prompt is selected, the model's default will be used automatically.

{#if isLoadingModelInfo}
Loading model info...
{:else if modelsState.chatModels.length === 0}

No models available. Make sure Ollama is running.

{:else}
{#each modelsState.chatModels as model (model.name)}
	{@const modelInfo = modelInfoCache.get(model.name)}
	{@const mappedPromptId = getMappedPromptId(model.name)}
	{model.name}
	{#if modelInfo?.capabilities && modelInfo.capabilities.length > 0}
		{#each modelInfo.capabilities as cap (cap)}
			{cap}
		{/each}
	{/if}
	{#if modelInfo?.systemPrompt}
		embedded
	{/if}
{#if modelInfo?.systemPrompt}

Embedded: {modelInfo.systemPrompt}

{/if}
{/each}
{/if}
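The fallback rule described above (an explicitly selected prompt wins; otherwise the model's mapped default applies; otherwise no system prompt is set) can be sketched as a small resolver. `resolveSystemPrompt` and the `Map`-based defaults store are assumptions for illustration, not the component's actual implementation:

```typescript
// Hypothetical sketch of default-prompt resolution; names are illustrative.
function resolveSystemPrompt(
	selectedPromptId: string | null,
	modelName: string,
	modelPromptDefaults: Map<string, string>
): string | null {
	// An explicitly selected prompt always takes precedence.
	if (selectedPromptId) return selectedPromptId;
	// Otherwise fall back to the model's mapped default, if any.
	return modelPromptDefaults.get(modelName) ?? null;
}
```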