Show HN: I built the LLM Comparison Tool I wish existed
By JonathanChavez
An LLM comparison tool that aggregates pricing, performance benchmarks, and speed metrics across 100+ models from major providers and open-source projects. Built with Next.js and TypeScript, it helps users optimize LLM costs through live pricing comparisons, benchmark scores (MMLU, HumanEval, GPQA), context-length analysis, and quality-vs-price visualizations.
Key Points
- Compare 100+ LLMs across pricing, performance, and capabilities to optimize costs and avoid overspending on premium models like GPT-4o
- Access live pricing comparisons to identify the most cost-effective models for your specific use case
- Review standardized benchmark scores (MMLU, HumanEval, GPQA) to evaluate model quality objectively across providers
- Analyze context-length vs. cost tradeoffs to select models that balance capability with budget constraints
- Measure speed and throughput across providers to optimize for latency-sensitive applications
- Use quality-vs-price visualizations to identify the best-value models in different performance tiers
- Leverage open-source data with verifiable sources on GitHub for transparency and reproducibility
- Built with a modern stack (Next.js, TypeScript, Recharts), enabling easy integration and customization
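The quality-vs-price idea behind the tool can be sketched in a few lines of TypeScript. This is an illustrative example only, not the project's actual code: the model names, prices, and scores below are made up, and the "benchmark points per dollar" metric is one simple way to rank value.

```typescript
// Hypothetical quality-vs-price ranking. All data here is illustrative,
// not taken from the tool's dataset.
interface ModelEntry {
  name: string;
  inputPricePerMTok: number; // USD per 1M input tokens (assumed unit)
  mmlu: number;              // MMLU score, 0-100
}

const models: ModelEntry[] = [
  { name: "model-a", inputPricePerMTok: 5.0, mmlu: 88 },
  { name: "model-b", inputPricePerMTok: 0.5, mmlu: 78 },
  { name: "model-c", inputPricePerMTok: 1.5, mmlu: 84 },
];

// Simple value metric: benchmark points per dollar (per 1M input tokens).
// Higher is better value; premium models often rank low on this axis.
function rankByValue(entries: ModelEntry[]): ModelEntry[] {
  return [...entries].sort(
    (a, b) => b.mmlu / b.inputPricePerMTok - a.mmlu / a.inputPricePerMTok
  );
}

console.log(rankByValue(models).map((m) => m.name));
// → ["model-b", "model-c", "model-a"]
```

A real comparison would weigh multiple benchmarks, context length, and output pricing too, but even this single ratio shows why a cheap mid-tier model can be the best value for many workloads.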