Knowledge pack — guides, templates, examples

Chapter 07 — Comparisons & Use Cases

Overview

Choose the lightest strategy that meets your reliability and transparency requirements, and escalate only when a measured baseline shortfall or a guardrail need justifies the added cost.

Technique selection matrix

| Technique | Best For | Cost | Transparency | Failure Risk Reduced |
| --- | --- | --- | --- | --- |
| Maieutic | Traceable commonsense QA | High | High (tree) | Inconsistent justification |
| Self-Refine | Draft polishing / code quality | Medium (rounds) | Medium | Surface-level errors |
| Least-to-Most | Compositional multi-step tasks | Low→Medium | Medium (explicit steps) | Premature leaps |
| Self-Consistency | Math / classification | Linear in K | Low (rationales sampled) | Random reasoning errors |
| RAG | Factual Q&A / summarization | Index + retrieval calls | High (citations) | Hallucinated facts |
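
The "Linear in K" cost in the Self-Consistency row comes from sampling K independent rationales and majority-voting the final answers. A minimal sketch of that loop, assuming a hypothetical generate() wrapper around whatever model client is in use (the helper names are illustrative, not part of this pack):

```python
from collections import Counter

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical model-call wrapper; swap in whatever client you use."""
    raise NotImplementedError

def extract_answer(completion: str) -> str:
    """Hypothetical parser: take the last non-empty line as the final answer."""
    return [line for line in completion.splitlines() if line.strip()][-1]

def self_consistency(question: str, k: int = 5) -> str:
    """Sample K rationales at nonzero temperature and majority-vote the answers."""
    answers = []
    for _ in range(k):
        rationale = generate(
            f"Q: {question}\nThink step by step, then state the final answer.",
            temperature=0.8,
        )
        answers.append(extract_answer(rationale))
    # Cost is linear in K; the vote averages out random reasoning errors.
    return Counter(answers).most_common(1)[0][0]
```

The vote only helps when errors are uncorrelated across samples, which is why the temperature stays above zero.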

Hybrid patterns

  • LtM + Self-Refine: Decompose, then refine each step for correctness (see the sketch below).
  • RAG + Maieutic: Retrieve passages, build explanation tree citing sources.
  • ReAct + Self-Consistency: Sample tool-augmented traces; majority answer.

Design hybrids only if each layer adds a distinct mitigation (e.g., hallucination reduction + reasoning robustness).
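
As an illustration of the first pattern above, a minimal LtM + Self-Refine sketch under the same assumptions as the Self-Consistency example (a hypothetical generate() wrapper; prompts and helper names are illustrative):

```python
def generate(prompt: str) -> str:
    """Hypothetical model-call wrapper, as in the Self-Consistency sketch."""
    raise NotImplementedError

def ltm_with_refine(task: str, refine_rounds: int = 1) -> list[str]:
    """Least-to-Most decomposition with a Self-Refine pass on every step."""
    plan = generate(f"Break this task into ordered subproblems, one per line:\n{task}")
    steps = [line.strip() for line in plan.splitlines() if line.strip()]

    solved = []
    for step in steps:
        context = "\n".join(solved)
        answer = generate(
            f"Task: {task}\nSolved so far:\n{context}\nNow solve only: {step}"
        )
        for _ in range(refine_rounds):
            # One critique/revise round per step; more rounds raise cost quickly.
            critique = generate(f"List concrete errors in this step answer:\n{answer}")
            answer = generate(
                f"Revise the answer to fix the critique.\nCritique:\n{critique}\nAnswer:\n{answer}"
            )
        solved.append(f"{step} -> {answer}")
    return solved
```

Each layer targets a distinct failure from the matrix above: decomposition guards against premature leaps, the refine pass against surface-level errors.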

Escalation checklist

  • Baseline simple prompt measured & insufficient?
  • Quantified failure being mitigated (e.g., hallucination rate)?
  • Added latency within SLO budget?
  • Instrumentation captures step-level outputs?
  • Rollback path if hybrid underperforms?
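
In practice the checklist can be enforced as a simple gate before a hybrid ships. A sketch with illustrative field and threshold names (none of these are defined elsewhere in this pack):

```python
from dataclasses import dataclass

@dataclass
class EscalationCandidate:
    baseline_failure_rate: float    # measured on the simple-prompt baseline
    mitigated_failure_rate: float   # measured with the candidate technique
    added_latency_ms: float         # extra latency the technique introduces
    latency_budget_ms: float        # headroom left in the SLO
    has_step_logging: bool          # step-level outputs captured?
    has_rollback: bool              # can we revert to the baseline quickly?

def should_escalate(c: EscalationCandidate, min_gain: float = 0.05) -> bool:
    """Pass only if the gain is quantified, latency fits the SLO, and guardrails exist."""
    gain = c.baseline_failure_rate - c.mitigated_failure_rate
    return (
        gain >= min_gain
        and c.added_latency_ms <= c.latency_budget_ms
        and c.has_step_logging
        and c.has_rollback
    )
```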