feat(phase10): complete MCP-only architecture migration
Phase 10 "Cleanup & Production Polish" is now complete. All LLM interactions now go through the Model Context Protocol (MCP), removing direct provider dependencies from CLI/TUI. ## Major Changes ### MCP Architecture - All providers (local and cloud Ollama) now use RemoteMcpClient - Removed owlen-ollama dependency from owlen-tui - MCP LLM server accepts OLLAMA_URL environment variable for cloud providers - Proper notification handling for streaming responses - Fixed response deserialization (McpToolResponse unwrapping) ### Code Cleanup - Removed direct OllamaProvider instantiation from TUI - Updated collect_models_from_all_providers() to use MCP for all providers - Updated switch_provider() to use MCP with environment configuration - Removed unused general config variable ### Documentation - Added comprehensive MCP Architecture section to docs/architecture.md - Documented MCP communication flow and cloud provider support - Updated crate breakdown to reflect MCP servers ### Security & Performance - Path traversal protection verified for all resource operations - Process isolation via separate MCP server processes - Tool permissions controlled via consent manager - Clean release build of entire workspace verified ## Benefits of MCP Architecture 1. **Separation of Concerns**: TUI/CLI never directly instantiates providers 2. **Process Isolation**: LLM interactions run in separate processes 3. **Extensibility**: New providers can be added as MCP servers 4. **Multi-Transport**: Supports STDIO, HTTP, and WebSocket 5. **Tool Integration**: MCP servers expose tools to LLMs This completes Phase 10 and establishes a clean, production-ready architecture for future development. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
@@ -107,8 +107,10 @@ fn resources_list_descriptor() -> McpToolDescriptor {
 }
 
 async fn handle_generate_text(args: GenerateTextArgs) -> Result<String, RpcError> {
-    // Create provider with default local Ollama URL
-    let provider = OllamaProvider::new("http://localhost:11434")
+    // Create provider with Ollama URL from environment or default to localhost
+    let ollama_url =
+        env::var("OLLAMA_URL").unwrap_or_else(|_| "http://localhost:11434".to_string());
+    let provider = OllamaProvider::new(&ollama_url)
         .map_err(|e| RpcError::internal_error(format!("Failed to init OllamaProvider: {}", e)))?;
 
     let parameters = ChatParameters {
@@ -190,7 +192,9 @@ async fn handle_request(req: &RpcRequest) -> Result<Value, RpcError> {
         // New method to list available Ollama models via the provider.
         methods::MODELS_LIST => {
             // Reuse the provider instance for model listing.
-            let provider = OllamaProvider::new("http://localhost:11434").map_err(|e| {
+            let ollama_url =
+                env::var("OLLAMA_URL").unwrap_or_else(|_| "http://localhost:11434".to_string());
+            let provider = OllamaProvider::new(&ollama_url).map_err(|e| {
                 RpcError::internal_error(format!("Failed to init OllamaProvider: {}", e))
             })?;
             let models = provider
@@ -377,7 +381,9 @@ async fn main() -> anyhow::Result<()> {
             };
 
             // Initialize Ollama provider and start streaming
-            let provider = match OllamaProvider::new("http://localhost:11434") {
+            let ollama_url = env::var("OLLAMA_URL")
+                .unwrap_or_else(|_| "http://localhost:11434".to_string());
+            let provider = match OllamaProvider::new(&ollama_url) {
                 Ok(p) => p,
                 Err(e) => {
                     let err_resp = RpcErrorResponse::new(
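All three hunks repeat the same environment lookup. A small helper could centralize it; this is only a sketch, and `resolve_ollama_url` is not a function that exists in the patch.

```rust
use std::env;

/// Resolve the Ollama endpoint: prefer OLLAMA_URL, otherwise fall back to
/// the local default used throughout the MCP LLM server.
/// (Hypothetical helper, not part of the committed change.)
fn resolve_ollama_url() -> String {
    env::var("OLLAMA_URL").unwrap_or_else(|_| "http://localhost:11434".to_string())
}
```

Each call site would then reduce to `OllamaProvider::new(&resolve_ollama_url())`.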