#003 Top 10 LLM Anti-Patterns
I find anti-patterns the best way to UNDERSTAND a subject.
LLM Anti-Patterns with Business Problems
1. Prompt Dumping (Overstuffed Prompts)
Business Scenario: Customer service chatbot with 20+ rules, disclaimers, and FAQs crammed into every request.
Measure: Long latency (>5s), frequent "context length exceeded" errors, and rising token costs.
Solution: Modularize prompts → keep system rules separate, load FAQs via retrieval, and summarize older conversation turns instead of pasting everything.
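A minimal Python sketch of this modular assembly, assuming a naive keyword retriever as a stand-in for a real vector search (all names and rules here are illustrative, not from any production system):

```python
# Sketch: assemble a compact prompt from modular parts instead of
# pasting every rule, FAQ, and past turn into each request.

SYSTEM_RULES = "You are a support agent. Be concise and polite."

def summarize_history(turns, keep_last=3):
    """Keep recent turns verbatim; collapse older ones into one summary line."""
    older, recent = turns[:-keep_last], turns[-keep_last:]
    summary = " | ".join(t[:40] for t in older)
    prefix = [f"Summary of earlier conversation: {summary}"] if older else []
    return prefix + recent

def retrieve_faqs(query, faq_index, top_k=2):
    """Naive keyword-overlap scoring standing in for a real vector search."""
    scored = [(sum(w in faq.lower() for w in query.lower().split()), faq)
              for faq in faq_index]
    return [faq for score, faq in sorted(scored, reverse=True)[:top_k] if score > 0]

def build_prompt(query, turns, faq_index):
    parts = [SYSTEM_RULES]                  # system rules stay separate and fixed
    parts += retrieve_faqs(query, faq_index)  # only the FAQs this query needs
    parts += summarize_history(turns)         # compressed, not full, history
    parts.append(f"User: {query}")
    return "\n".join(parts)
```

Only the FAQs relevant to the current question enter the context, so token usage stays flat as the FAQ library grows.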
2. Overfitting on Examples
Business Scenario: HR assistant bot trained on only 5 sample CV-parsing examples → fails on new formats.
Measure: Evaluate with diverse CV formats → error rate >30% on unseen inputs.
Solution: Use few-shot + explicit reasoning instructions, or fine-tune with varied real-world CVs instead of repeating the same template.
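A sketch of what "varied few-shot + explicit reasoning instructions" can look like in a prompt builder; the example CVs and the JSON schema are invented for illustration:

```python
# Sketch: build a few-shot CV-parsing prompt from deliberately *varied*
# example formats, plus an explicit reasoning instruction, instead of
# repeating one template five times.

EXAMPLES = [
    ("Jane Doe | Python, SQL | 5 yrs at Acme",
     {"name": "Jane Doe", "skills": ["Python", "SQL"]}),
    ("NAME: Raj Patel\nSKILLS: Java; Go",
     {"name": "Raj Patel", "skills": ["Java", "Go"]}),
    ("Curriculum Vitae - Li Wei (C++, Rust)",
     {"name": "Li Wei", "skills": ["C++", "Rust"]}),
]

def few_shot_prompt(cv_text):
    lines = [
        "Extract name and skills as JSON. Think step by step: first locate",
        "the name, then list the skills, then emit JSON only.",
    ]
    for raw, parsed in EXAMPLES:
        lines.append(f"CV: {raw}\nJSON: {parsed}")
    lines.append(f"CV: {cv_text}\nJSON:")
    return "\n".join(lines)
```

Each example uses a different layout (pipes, key-value lines, free text), which is the point: the model sees the task, not one surface format.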
3. Hallucination Ignorance
Business Scenario: Financial research assistant invents references to non-existent analyst reports.
Measure: Random audits show 10–15% fabricated citations.
Solution: Add retrieval grounding from verified sources (Bloomberg, SEC filings), and auto-check citations with external APIs.
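The citation auto-check can start as simply as splitting model output into grounded vs. suspect references. Here a set of verified IDs stands in for a lookup against a real filings API (this is a hypothetical helper, not a Bloomberg or SEC client):

```python
def check_citations(citations, verified_ids):
    """Partition model-emitted citations into grounded vs. suspect.
    `verified_ids` stands in for a call to an external verification API."""
    grounded = [c for c in citations if c in verified_ids]
    suspect = [c for c in citations if c not in verified_ids]
    return grounded, suspect
```

Anything landing in `suspect` gets flagged for human review before the research note ships, which is exactly what the 10–15% audit number above is measuring.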
4. One-Shot Deployment
Business Scenario: Law firm uses LLM-generated contracts directly without lawyer review.
Measure: Errors spotted only after disputes → legal risk exposure measured in $$$.
Solution: Human-in-the-loop: contracts drafted by LLM, reviewed by lawyer, with contract QA metrics (clause completeness, compliance score).
5. Latent Bias Blindness
Business Scenario: Recruitment assistant screening resumes → downgrades certain demographics.
Measure: Run bias test sets → different acceptance rates by gender/ethnicity (>5% discrepancy).
Solution: Apply bias/fairness evaluation benchmarks (Aequitas, Fairlearn), retrain on balanced datasets, and log demographic parity metrics.
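Logging demographic parity takes only a few lines of Python; this is a sketch of the >5% gap check above, not a replacement for a full fairness toolkit like Fairlearn or Aequitas:

```python
def acceptance_rates(records):
    """records: iterable of (group, accepted) pairs.
    Returns per-group acceptance rate."""
    totals, accepted = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        accepted[group] = accepted.get(group, 0) + int(ok)
    return {g: accepted[g] / totals[g] for g in totals}

def parity_gap(records):
    """Max difference in acceptance rate between any two groups.
    Flag for investigation when this exceeds 0.05 (the 5% threshold)."""
    rates = acceptance_rates(records).values()
    return max(rates) - min(rates)
```

Emitting this gap as a metric on every screening batch turns "bias blindness" into a dashboard number someone owns.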
6. Misusing Temperature & Sampling
Business Scenario: Marketing team complains the chatbot gives inconsistent product taglines → one answer is "Elegant & Smart," the next is "Cheapest Deal."
Measure: Variance score of responses >0.7 on identical input.
Solution: Tune parameters: low temperature for factual Q&A, medium/high for creativity. Document per-task parameter policy.
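A per-task parameter policy can literally be a documented table in code. The task names and values below are illustrative defaults, not recommendations from any model vendor:

```python
# Sketch: one documented source of truth for sampling parameters per task,
# so nobody ships a creative-writing temperature on a factual endpoint.

SAMPLING_POLICY = {
    "factual_qa":  {"temperature": 0.1, "top_p": 0.9},   # consistency matters
    "summarize":   {"temperature": 0.3, "top_p": 0.9},
    "tagline_gen": {"temperature": 0.9, "top_p": 0.95},  # variety is the point
}

def params_for(task):
    """Look up the documented sampling parameters for a task.
    Fails loudly rather than silently falling back to a default."""
    if task not in SAMPLING_POLICY:
        raise KeyError(f"No sampling policy documented for task {task!r}")
    return SAMPLING_POLICY[task]
```

Failing loudly on unknown tasks is deliberate: a silent default temperature is exactly how the inconsistent-taglines complaint happens.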
7. "LLM = Database" Thinking
Business Scenario: Insurance agent bot asked about latest EV policy coverage, but it hallucinates outdated rules.
Measure: Compare LLM responses to official policy docs → 20% mismatch rate.
Solution: Store policies in Postgres/Vector DB, use RAG pipeline for retrieval, let LLM only reason & explain.
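A toy RAG loop, assuming word-overlap retrieval in place of a real vector DB and a pluggable `llm` callable (identity by default so the sketch runs standalone; in production this would call your model API):

```python
# Sketch: the LLM never answers from memory; it only reasons over
# retrieved policy text, or refuses when nothing matches.

def retrieve(query, policy_docs, top_k=2):
    """Word-overlap scoring standing in for a vector-DB similarity search."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    ranked = sorted(policy_docs, key=score, reverse=True)
    return [d for d in ranked[:top_k] if score(d) > 0]

def answer_with_rag(query, policy_docs, llm=lambda prompt: prompt):
    context = retrieve(query, policy_docs)
    if not context:
        return "No matching policy found; please check the official docs."
    prompt = ("Answer ONLY from the policy excerpts below.\n"
              + "\n".join(context)
              + f"\nQ: {query}")
    return llm(prompt)
```

The key design choice is the empty-context branch: when retrieval finds nothing, the bot declines instead of letting the model improvise outdated rules.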
8. Unbounded Context Windows
Business Scenario: Healthcare triage bot loads the entire patient history in every query → cost spikes and delays.
Measure: Monthly token cost >3x budget; avg response latency >7s.
Solution: Use hierarchical memory: keep summary of past visits + pull detailed notes only if relevant.
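A sketch of hierarchical memory: one-line visit summaries always travel with the query, while full notes are attached only when relevant. Substring matching stands in for a real relevance model, and the medical details are invented:

```python
def build_context(visit_summaries, detailed_notes, query):
    """Always include compact visit summaries; attach a full note only
    when the query mentions that visit's topic."""
    context = list(visit_summaries)          # cheap, always-on summary layer
    for topic, note in detailed_notes.items():
        if topic in query.lower():           # stand-in for semantic relevance scoring
            context.append(note)             # expensive detail layer, on demand
    return context
```

The context now scales with what the clinician asks about, not with the length of the patient's chart, which is what brings the token bill back under budget.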
9. No Guardrails on Sensitive Tasks
Business Scenario: Internal LLM assistant connected to Jira and GitHub can be manipulated by prompt injection (e.g., "delete repo").
Measure: Security red team shows successful injection in 3/5 attempts.
Solution: Add structured tool APIs, role-based access, sandboxing, and filter unsafe instructions before execution.
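Guardrails in miniature: a per-role tool allow-list plus a destructive-phrase filter applied before any tool call executes. Tool names, roles, and blocked phrases are illustrative:

```python
# Sketch: every tool call passes through authorization BEFORE execution,
# so an injected "delete repo" never reaches Jira or GitHub.

ALLOWED = {
    "viewer": {"github_read_file"},
    "dev":    {"github_read_file", "jira_create_ticket"},
}
DESTRUCTIVE = ("delete repo", "drop table", "rm -rf", "force push")

def authorize(tool_name, args, role):
    """Role-based allow-list first, then a filter on destructive phrases."""
    if tool_name not in ALLOWED.get(role, set()):
        return (False, f"{tool_name} not permitted for role {role}")
    if any(phrase in args.lower() for phrase in DESTRUCTIVE):
        return (False, "destructive instruction blocked")
    return (True, "ok")
```

A phrase filter alone is easy to evade, which is why it sits behind the allow-list: even a successful injection can only invoke tools the user's role already permits.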
10. Ignoring Evaluation & Benchmarks
Business Scenario: Sales team LLM generates proposals, but quality varies → clients complain of wrong pricing.
Measure: Proposal accuracy score <80% on eval set; QA effort >10 hours/week.
Solution: Build LLM eval pipeline with metrics: accuracy, consistency, hallucination %, latency, cost. Use continuous retraining with feedback.
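A minimal eval harness capturing two of those metrics (accuracy and latency) over a labeled case set. The `model` argument is a pluggable callable so the sketch runs standalone; in practice it wraps your proposal-generation endpoint:

```python
import time

def evaluate(model, cases):
    """cases: (query, expected_answer) pairs.
    Returns accuracy and average per-call latency."""
    correct, latencies = 0, []
    for query, expected in cases:
        start = time.perf_counter()
        answer = model(query)
        latencies.append(time.perf_counter() - start)
        correct += int(answer == expected)
    return {
        "accuracy": correct / len(cases),
        "avg_latency_s": sum(latencies) / len(latencies),
    }
```

Run this on every model or prompt change and track the numbers over time; hallucination rate and cost per call slot into the same report once you have graders and token counts for them.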

