llm-sanity-checks
This repository provides a practical guide and decision tree for determining whether an LLM is truly necessary for a given problem; in many cases, simpler and more efficient solutions are overlooked. It aims to prevent over-engineering with frontier models.
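The decision-tree idea can be sketched in code. This is a minimal, illustrative checklist, not the repository's actual implementation; the function name, the `task` keys, and the specific questions are all assumptions made up for this example.

```python
def needs_llm(task):
    """Return (verdict, reason) for a task described by a dict of flags.

    All keys are hypothetical examples of sanity-check questions:
      deterministic: rules, regex, or SQL can solve it exactly
      labeled_data:  enough examples exist to train a small classical model
      open_ended:    requires free-form natural-language generation
    """
    if task.get("deterministic"):
        return (False, "Use rules, regex, or a database query instead.")
    if task.get("labeled_data") and not task.get("open_ended"):
        return (False, "A small classical model may be cheaper and more reliable.")
    if task.get("open_ended"):
        return (True, "Free-form generation is a genuine LLM use case.")
    return (False, "Default to the simplest tool; revisit if requirements grow.")


# Example: extracting dates from invoices is deterministic, so no LLM needed.
verdict, reason = needs_llm({"deterministic": True})
```

The point of checks like these is ordering: the cheapest, most reliable options are ruled out first, and an LLM is reached only when nothing simpler fits.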