# Ollama Model Files Collection

This repository contains custom Ollama model configurations optimized for various tasks, particularly git commit message generation and general AI assistance.
## Available Models
### 1. `commit-msg-ernie` - ERNIE-4.5 Commit Message Generator

- **Base Model:** ERNIE-4.5-0.3B (Q4_K_M quantization)
- **Purpose:** Generates conventional commit messages from git diffs
- **Context Window:** 4096 tokens
- **Optimizations:** Temperature 0.4 and top-p 0.9 for consistent output
### 2. `commit-msg-gemma3` - Gemma 3 Commit Message Generator

- **Base Model:** Gemma 3 1B
- **Purpose:** Alternative commit message generator using Google's Gemma 3
- **Context Window:** 4096 tokens
- **Optimizations:** Same parameters as the ERNIE variant for consistent output
### 3. `ernie-fixed` - ERNIE-4.5 General Purpose

- **Base Model:** ERNIE-4.5-21B (IQ4_XS quantization)
- **Purpose:** General-purpose conversational AI with an improved chat template
- **Features:** Uses a ChatML-style template with proper token handling
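The settings above all live in each model's Modelfile. As a rough sketch of what such a file looks like (the base model tag and system prompt here are illustrative, not the repository's exact contents):

```
# Sketch of a commit-message Modelfile; base tag and prompt are illustrative.
FROM gemma3:1b

PARAMETER temperature 0.4
PARAMETER top_p 0.9
PARAMETER num_ctx 4096

SYSTEM """
You write Conventional Commits messages. Given a git diff, reply with a
single commit message (type(scope): subject) and nothing else.
"""
```

Building the model from this file is a single `ollama create <name> -f Modelfile`, as shown in the installation steps below.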
## Prerequisites

Before installing these models, ensure you have:

- Ollama installed on your system
- Sufficient disk space (models range from roughly 300 MB to 15 GB depending on the model and quantization)
- An internet connection for downloading the base models
### Install Ollama

**Linux/macOS:**

```sh
curl -fsSL https://ollama.ai/install.sh | sh
```

**Windows:** Download the installer from [ollama.ai](https://ollama.ai).
## Installation Instructions
### Method 1: Direct Installation from Repository
- Clone the repository:

  ```sh
  git clone https://gitea.puchstein.bayern/mpuchstein/Ollama-Modelfiles.git
  cd Ollama-Modelfiles
  ```
- Install individual models (each block starts from the repository root):

  For commit message generation with ERNIE:

  ```sh
  cd commit-msg-ernie
  ollama create commit-msg-ernie -f Modelfile
  cd ..
  ```

  For commit message generation with Gemma 3:

  ```sh
  cd commit-msg-gemma3
  ollama create commit-msg-gemma3 -f Modelfile
  cd ..
  ```

  For general purpose ERNIE chat:

  ```sh
  cd ernie-fixed
  ollama create ernie-fixed -f Modelfile
  cd ..
  ```
- Verify the installation:

  ```sh
  ollama list
  ```
### Method 2: Individual Model Installation

You can install specific models without cloning the entire repository:

```sh
# Example for commit-msg-ernie
wget https://gitea.puchstein.bayern/mpuchstein/Ollama-Modelfiles/raw/branch/main/commit-msg-ernie/Modelfile
ollama create commit-msg-ernie -f Modelfile
```
## Usage Examples

### Commit Message Generation

Using the ERNIE variant:

```sh
# Generate a commit message from the staged diff
git diff --cached | ollama run commit-msg-ernie
```

Using the Gemma 3 variant:

```sh
# Alternative commit message generator
git diff --cached | ollama run commit-msg-gemma3
```
### General Chat (`ernie-fixed`)

```sh
# Start an interactive chat session
ollama run ernie-fixed

# Or run a single query
echo "Explain quantum computing" | ollama run ernie-fixed
```
## Integration with Git Workflow

### Automated Commit Messages

Create a git alias for automated commit message generation:

```sh
# Run this once; it writes the alias to your ~/.gitconfig
git config --global alias.aicommit '!f() { git add .; msg=$(git diff --cached | ollama run commit-msg-ernie); git commit -m "$msg"; }; f'
```
Usage:

```sh
# Stage all changes and commit with a generated message
git aicommit
```
### Git Hook Integration

Create a `prepare-commit-msg` hook:

```sh
#!/bin/sh
# .git/hooks/prepare-commit-msg
# $1 is the commit message file; $2 is the commit source (merge, template, ...)
if [ -z "$2" ]; then
    # Only generate for plain commits (not merges, templates, etc.)
    COMMIT_MSG=$(git diff --cached | ollama run commit-msg-ernie)
    if [ -n "$COMMIT_MSG" ]; then
        echo "$COMMIT_MSG" > "$1"
    fi
fi
```

Make it executable:

```sh
chmod +x .git/hooks/prepare-commit-msg
```
## Model Specifications

| Model | Base Model | Size | Context | Temperature | Use Case |
|---|---|---|---|---|---|
| commit-msg-ernie | ERNIE-4.5-0.3B | ~300 MB | 4096 | 0.4 | Git commits |
| commit-msg-gemma3 | Gemma 3 1B | ~1 GB | 4096 | 0.4 | Git commits |
| ernie-fixed | ERNIE-4.5-21B | ~15 GB | Default | Default | General chat |
## Customization

### Modifying System Prompts

To customize a model's behavior, edit the `SYSTEM` section in its Modelfile:

```
SYSTEM """
Your custom system prompt here...
"""
```

Then recreate the model:

```sh
ollama create your-model-name -f Modelfile
```
### Parameter Tuning

Adjust model parameters for different behaviors:

- **Temperature:** lower values (0.1-0.4) favor consistency; higher values (0.7-1.0) favor creativity
- **Top-p:** controls nucleus sampling (0.9 recommended)
- **Context length:** increase for larger diffs or inputs
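These knobs map to `PARAMETER` directives in the Modelfile. For example, to make a commit generator more deterministic while accepting larger diffs (the values are illustrative, not recommendations from this repository):

```
PARAMETER temperature 0.2
PARAMETER top_p 0.9
PARAMETER num_ctx 8192
```

After editing, recreate the model with `ollama create <name> -f Modelfile` for the change to take effect.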
## Troubleshooting

### Common Issues

**Model download fails:**

- Check your internet connection
- Verify that the base model is still available (e.g. on Hugging Face)
- Try downloading the base model separately:

  ```sh
  ollama pull gemma3:1b
  ```

**Out of memory errors:**

- Use a smaller quantization (e.g. Q4_K_M instead of F16)
- Reduce the context window
- Close other applications
**Poor commit message quality:**

- Ensure the git diff contains meaningful changes
- Check that the changes are staged (`git add`)
- Consider adjusting the temperature parameter
## Performance Tips

- **GPU Acceleration:** Ollama automatically uses a GPU if one is available
- **Model Caching:** models stay loaded in memory for faster subsequent calls
- **Batch Processing:** process multiple diffs efficiently by keeping the model loaded
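When batch-processing through Ollama's HTTP API rather than the CLI, the `keep_alive` field on `/api/generate` controls how long the model stays resident between requests. A minimal sketch (the helper name and the `10m` value are illustrative; note the naive `printf` does not JSON-escape the prompt):

```sh
# Print a JSON request body for Ollama's /api/generate endpoint.
# keep_alive keeps the model loaded between calls, which speeds up batches.
build_payload() {
    printf '{"model":"%s","prompt":"%s","stream":false,"keep_alive":"10m"}' "$1" "$2"
}

build_payload commit-msg-ernie "Summarize this diff"
```

Against a running Ollama server, pipe the payload to `curl -s http://localhost:11434/api/generate -d @-`.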
## Contributing

To contribute improvements:

- Fork the repository
- Create a feature branch
- Test your changes
- Submit a pull request with conventional commit messages
## License

This repository contains model configurations and prompts. Refer to the individual base model licenses for usage restrictions.
## Related Projects

- [Ollama](https://ollama.ai) - local LLM runtime
- [Conventional Commits](https://www.conventionalcommits.org) - commit message specification
- ERNIE Models - Baidu's ERNIE model family