
wiedervereinigung

This Ollama-compatible, DPO-aligned merge of German models scores an average of 7.11 on mt-bench-de, making it a good fit for German chat applications. I find it a solid option for local LLM experiments.


Questions & Answers

What is the mayflowergmbh/wiedervereinigung model?
The mayflowergmbh/wiedervereinigung model is a DPO-aligned merge of several German language models, including three based on LeoLM/leo-mistral-hessianai-7b. It is designed for chat use cases and is available via Ollama.
Who is the Wiedervereinigung model intended for?
This model is suitable for developers and researchers building German-language AI applications, particularly those deploying LLMs locally with Ollama, as well as "GPU-poor" practitioners who fine-tune on German datasets.
How does the Wiedervereinigung model stand out from other German LLMs?
It is a DPO-aligned merge of multiple established German models, further optimized with a German translation of the SlimOrca DPO dataset for improved quality. This focused merging and alignment approach targets high scores on German benchmarks.
When should I consider using the Wiedervereinigung model?
You should consider using it for German-language chat applications or other text generation tasks where a robust, locally runnable German model is required. Its Ollama compatibility makes it convenient for rapid deployment.
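Since the Q&A emphasizes Ollama compatibility, here is a minimal sketch of querying the model through Ollama's local REST API (`/api/chat` on the default port 11434). The exact model tag and the sample prompt are assumptions; check the listing on ollama.com for the tag you should pull.

```python
import json

# Hypothetical tag -- verify the exact name on ollama.com before pulling.
MODEL = "mayflowergmbh/wiedervereinigung-7b"

# Request body for Ollama's /api/chat endpoint (non-streaming).
payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Erkläre kurz, was ein Sprachmodell ist."}
    ],
    "stream": False,
}

body = json.dumps(payload)
print(body)

# With a local Ollama server running (after `ollama pull` of the tag above),
# the payload can be sent like this:
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/api/chat",
#       data=body.encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())
```

The actual HTTP call is left commented out because it requires a running Ollama daemon with the model already pulled.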
What are the performance benchmarks of the Wiedervereinigung model?
The Wiedervereinigung model achieves an average score of 7.11 on the mt-bench-de benchmark. Its performance varies across categories, with high scores in humanities (9.35) and roleplay (8.6), and lower scores in math (4.35) and coding (4.3).