# Ollama Model Files Collection

This repository contains custom Ollama model configurations optimized for various tasks, particularly git commit message generation and general AI assistance.

## Available Models

### 1. **commit-msg-ernie** - ERNIE-4.5 Commit Message Generator

- **Base Model**: ERNIE-4.5-0.3B (Q4_K_M quantization)
- **Purpose**: Generates conventional commit messages from git diffs
- **Context Window**: 4096 tokens
- **Optimizations**: Temperature 0.4, Top-P 0.9 for consistent output

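These settings come together in the model's Modelfile. As a rough sketch only (the GGUF filename and system prompt below are illustrative placeholders, not the repository's actual file):

```dockerfile
# Illustrative sketch; the real Modelfile in commit-msg-ernie/ may differ.
FROM ./ERNIE-4.5-0.3B-Q4_K_M.gguf

PARAMETER temperature 0.4
PARAMETER top_p 0.9
PARAMETER num_ctx 4096

SYSTEM """
You write conventional commit messages. Given a git diff, reply with a
single message of the form <type>(<scope>): <subject>.
"""
```
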
### 2. **commit-msg-gemma3** - Gemma 3 Commit Message Generator

- **Base Model**: Gemma 3 1B
- **Purpose**: Alternative commit message generator using Google's Gemma 3
- **Context Window**: 4096 tokens
- **Optimizations**: Similar parameters to the ERNIE variant for consistency

### 3. **ernie-fixed** - ERNIE-4.5 General Purpose

- **Base Model**: ERNIE-4.5-21B (IQ4_XS quantization)
- **Purpose**: General-purpose conversational AI with improved chat template
- **Features**: Uses ChatML-style template with proper token handling

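The "improved chat template" refers to the `TEMPLATE` directive in the Modelfile. As an illustration only (the exact template, GGUF path, and stop tokens in the repository may differ), a ChatML-style template looks roughly like:

```dockerfile
# Illustrative ChatML-style template; not the repository's exact file.
FROM ./ERNIE-4.5-21B-IQ4_XS.gguf

TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

PARAMETER stop "<|im_end|>"
```
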
## Prerequisites

Before installing these models, ensure you have:

- [Ollama](https://ollama.ai/) installed on your system
- Sufficient disk space (models range from roughly 300 MB to 15 GB, depending on the model and quantization)
- An internet connection for downloading the base models

### Install Ollama

**Linux/macOS:**
```bash
curl -fsSL https://ollama.ai/install.sh | sh
```

**Windows:**
Download the installer from [ollama.ai](https://ollama.ai/download)

## Installation Instructions

### Method 1: Direct Installation from the Repository

1. **Clone the repository:**

   ```bash
   git clone https://gitea.puchstein.bayern/mpuchstein/Ollama-Modelfiles.git
   cd Ollama-Modelfiles
   ```

2. **Install individual models** (each block assumes you start from the repository root):

   **For commit message generation with ERNIE:**
   ```bash
   cd commit-msg-ernie
   ollama create commit-msg-ernie -f Modelfile
   cd ..
   ```

   **For commit message generation with Gemma 3:**
   ```bash
   cd commit-msg-gemma3
   ollama create commit-msg-gemma3 -f Modelfile
   cd ..
   ```

   **For general-purpose ERNIE chat:**
   ```bash
   cd ernie-fixed
   ollama create ernie-fixed -f Modelfile
   cd ..
   ```

3. **Verify the installation:**

   ```bash
   ollama list
   ```

### Method 2: Individual Model Installation

You can install specific models without cloning the entire repository:

**Download a specific Modelfile:**
```bash
# Example for commit-msg-ernie
wget https://gitea.puchstein.bayern/mpuchstein/Ollama-Modelfiles/raw/branch/main/commit-msg-ernie/Modelfile
ollama create commit-msg-ernie -f Modelfile
```

## Usage Examples

### Commit Message Generation

**Using the ERNIE variant:**
```bash
# Generate a commit message from the staged diff
git diff --cached | ollama run commit-msg-ernie
```

**Using the Gemma 3 variant:**
```bash
# Alternative commit message generator
git diff --cached | ollama run commit-msg-gemma3
```

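Besides the CLI, a script can talk to Ollama's local REST API directly. A minimal sketch, assuming the Ollama server is running on its default port and that you created `commit-msg-ernie` as described above:

```python
import json
import urllib.request

# Default local Ollama endpoint for one-shot (non-chat) generation.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, diff: str) -> dict:
    """Build a non-streaming /api/generate request body."""
    return {"model": model, "prompt": diff, "stream": False}


def commit_message(diff: str, model: str = "commit-msg-ernie") -> str:
    """Send a staged diff to the local Ollama server and return its reply."""
    data = json.dumps(build_payload(model, diff)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the full text in "response".
        return json.loads(resp.read())["response"].strip()
```

Usage: capture `git diff --cached` (e.g. via `subprocess`) and pass it to `commit_message()`. This requires the Ollama server to be running locally.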
### General Chat (ERNIE-Fixed)

```bash
# Start an interactive chat session
ollama run ernie-fixed

# Or run a single query
echo "Explain quantum computing" | ollama run ernie-fixed
```

## Integration with Git Workflow

### Automated Commit Messages

Create a git alias for automated commit message generation:

```bash
# Registers the alias in your global git config (~/.gitconfig).
# Note: the alias stages ALL changes in the working tree before committing.
git config --global alias.aicommit '!f() { git add .; msg=$(git diff --cached | ollama run commit-msg-ernie); git commit -m "$msg"; }; f'
```

**Usage:**
```bash
# Stage all changes and generate the commit message automatically
git aicommit
```

### Git Hook Integration

Create a `prepare-commit-msg` hook:

```bash
#!/bin/sh
# .git/hooks/prepare-commit-msg
# $1 is the commit message file; $2 is the commit source
# (empty for a plain `git commit`).

if [ "$2" = "" ]; then
    # Only generate for manual commits (not merges, squashes, etc.)
    COMMIT_MSG=$(git diff --cached | ollama run commit-msg-ernie)
    if [ -n "$COMMIT_MSG" ]; then
        echo "$COMMIT_MSG" > "$1"
    fi
fi
```

Make it executable:
```bash
chmod +x .git/hooks/prepare-commit-msg
```

## Model Specifications

| Model | Base Model | Size | Context | Temperature | Use Case |
|-------|------------|------|---------|-------------|----------|
| commit-msg-ernie | ERNIE-4.5-0.3B | ~300MB | 4096 | 0.4 | Git commits |
| commit-msg-gemma3 | Gemma 3 1B | ~1GB | 4096 | 0.4 | Git commits |
| ernie-fixed | ERNIE-4.5-21B | ~15GB | Default | Default | General chat |

## Customization

### Modifying System Prompts

To customize a model's behavior, edit the `SYSTEM` section in its Modelfile:

```dockerfile
SYSTEM """
Your custom system prompt here...
"""
```

Then recreate the model:
```bash
ollama create your-model-name -f Modelfile
```

### Parameter Tuning

Adjust model parameters for different behaviors:

- **Temperature**: Lower (0.1-0.4) for consistency, higher (0.7-1.0) for creativity
- **Top-P**: Controls nucleus sampling (0.9 recommended)
- **Context Length**: Increase for larger diffs/inputs

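In a Modelfile, these knobs map to `PARAMETER` lines. The values below are illustrative, not the repository's settings:

```dockerfile
# Illustrative values - tune per model and task.
# More deterministic output:
PARAMETER temperature 0.2
# Nucleus sampling cutoff:
PARAMETER top_p 0.9
# Larger context window for big diffs:
PARAMETER num_ctx 8192
```

After editing, recreate the model with `ollama create <name> -f Modelfile` for the changes to take effect.
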
## Troubleshooting

### Common Issues

**Model download fails:**
- Check your internet connection
- Verify the base model is available on Hugging Face
- Try pulling the base model separately: `ollama pull gemma3:1b`

**Out of memory errors:**
- Use a smaller quantization (e.g. Q4_K_M instead of F16)
- Reduce the context window
- Close other applications

**Poor commit message quality:**
- Ensure the git diff contains meaningful changes
- Check that changes are staged (`git add`)
- Consider adjusting the temperature parameter

### Performance Tips

1. **GPU Acceleration**: Ollama automatically uses a GPU if one is available
2. **Model Caching**: Models stay loaded in memory for faster subsequent calls
3. **Batch Processing**: Process multiple diffs efficiently by keeping the model loaded

## Contributing

To contribute improvements:

1. Fork the repository
2. Create a feature branch
3. Test your changes
4. Submit a pull request with conventional commit messages

## License

This repository contains model configurations and prompts. Refer to the individual base model licenses for usage restrictions.

## Related Projects

- [Ollama](https://ollama.ai/) - Local LLM runtime
- [Conventional Commits](https://www.conventionalcommits.org/) - Commit message specification
- [ERNIE Models](https://github.com/PaddlePaddle/ERNIE) - Baidu's ERNIE model family