This Python script uses a local LLM via Ollama to summarize meeting minutes. It extracts key discussion points, decisions made, action items, and follow-up tasks.
- Uses a local LLM (e.g., `mistral`, `gemma`, `llama3`, `deepseek`).
- Summarizes meeting minutes into key points and action items.
- Outputs the summary on screen and saves it to a file.
- Lightweight, requires only `ollama` as a dependency.
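For context, the core of the script can be a single call to the chat API of the `ollama` Python package. Below is a minimal sketch of that step; the function name and exact prompt wording are illustrative assumptions, not the script's actual code:

```python
import ollama

def summarize(transcript: str, model: str = "mistral") -> str:
    """Ask the local model for key points, decisions, and action items."""
    # Hypothetical prompt; the script's real prompt may differ.
    prompt = (
        "Summarize the following meeting minutes. List the key discussion "
        "points, decisions made, action items, and follow-up tasks.\n\n"
        + transcript
    )
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]
```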
Ollama is required to run the local LLM model. Install it from the official website: https://ollama.com
After installation, verify it is working by running:
```bash
ollama --version
```
Then pull at least one model:

```bash
ollama pull mistral    # Small and fast model (default)
ollama pull gemma      # Another lightweight model
ollama pull llama3     # More powerful model
ollama pull deepseek   # Largest and most powerful model
```
Install the `ollama` Python package:

```bash
pip install ollama
```
Run the script on a meeting transcript:

```bash
python3 ./meeting_summarizer.py meeting_minutes.vtt
```
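Since the input is a WebVTT transcript, the script presumably strips the `WEBVTT` header, cue numbers, and timestamp lines before handing the text to the model. A rough sketch of that cleanup, assuming a hypothetical helper named `read_vtt_text`:

```python
import re

def read_vtt_text(path: str) -> str:
    """Return the spoken text from a .vtt file, dropping the WEBVTT
    header, cue numbers, and timestamp lines."""
    text_lines = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            # Skip blanks, the WEBVTT header, and bare cue numbers.
            if not line or line.startswith("WEBVTT") or line.isdigit():
                continue
            # Skip timing lines such as "00:00:01.000 --> 00:00:04.000".
            if "-->" in line and re.match(r"\d{2}:\d{2}", line):
                continue
            text_lines.append(line)
    return " ".join(text_lines)
```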
By default, the script uses the `mistral` model. To use a different model, specify it as an argument:
```bash
python3 ./meeting_summarizer.py meeting_minutes.vtt --model gemma
```
The summary will be printed on screen. A summary file will be saved as `meeting_minutes_summary.txt`.
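Putting the pieces together, the command-line handling and output naming likely resemble the sketch below, which reuses the hypothetical `read_vtt_text` and `summarize` helpers sketched above. The argument names match the usage shown; everything else is an assumption:

```python
import argparse
from pathlib import Path

def main() -> None:
    parser = argparse.ArgumentParser(
        description="Summarize meeting minutes with a local LLM via Ollama."
    )
    parser.add_argument("input_file", help="Meeting transcript, e.g. a .vtt file")
    parser.add_argument("--model", default="mistral", help="Ollama model name")
    args = parser.parse_args()

    transcript = read_vtt_text(args.input_file)        # helper sketched above
    summary = summarize(transcript, model=args.model)  # helper sketched above
    print(summary)

    # meeting_minutes.vtt -> meeting_minutes_summary.txt, next to the input.
    out_path = Path(args.input_file).with_name(
        Path(args.input_file).stem + "_summary.txt"
    )
    out_path.write_text(summary, encoding="utf-8")

if __name__ == "__main__":
    main()
```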