Knowledge pack — guides, templates, examples

Chapter 06 — Least-to-Most Prompting

Overview

Least-to-Most (LtM) decomposes a complex task into a sequence of subproblems ordered from easiest to hardest so earlier solutions scaffold later reasoning.

  • Versus Chain-of-Thought (CoT): LtM makes the problem structure explicit as ordered subproblems; CoT produces one continuous reasoning monologue.
  • Benefits: Reduces per-step cognitive load; improves compositional generalization to harder instances.

Decomposition guidelines

  • Each step adds exactly one new transformation or concept.
  • Later steps reference only the outputs of earlier steps, and do so explicitly.
  • Avoid over-splitting (merge steps that always co-occur).

Worked example

Problem: A library has 480 books. 35% are fiction. Of the remainder, 25% are science. How many are neither fiction nor science?
Steps (easiest -> hardest):
 S1: Compute fiction count.
 S2: Compute remaining after fiction.
 S3: Compute science count from remainder.
 S4: Compute remainder after science.
Solutions:
 S1: 0.35 * 480 = 168
 S2: 480 - 168 = 312
 S3: 0.25 * 312 = 78
 S4: 312 - 78 = 234
Answer: 234
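The four solution steps above can be checked mechanically; a minimal sketch using integer arithmetic (which avoids floating-point rounding on percentages):

```python
# Reproduce S1-S4 of the worked example, each step using only prior results.
total = 480
fiction = total * 35 // 100      # S1: 35% of 480 = 168
remainder = total - fiction      # S2: 480 - 168 = 312
science = remainder * 25 // 100  # S3: 25% of 312 = 78
neither = remainder - science    # S4: 312 - 78 = 234
print(neither)  # 234
```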

Template

Task: <problem>
1) List ordered subproblems (easiest -> hardest).
2) For each i: Solve S_i referencing only S_1..S_{i-1}.
3) Return final answer + JSON breakdown {"steps": [ {"id": "S1", "desc": "...", "value": ... } ]}.
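The template can be driven by a simple loop that feeds each solved subproblem into the prompt for the next one. This is a sketch, not a definitive implementation: `call_model` is a hypothetical stand-in for any LLM client of the form `call_model(prompt: str) -> str`.

```python
def solve_least_to_most(problem, subproblems, call_model):
    """Solve ordered subproblems easiest-to-hardest, feeding answers forward.

    `call_model` is a hypothetical stand-in for an LLM call; swap in
    your own client. Returns the final answer plus the JSON-style
    breakdown described in the template.
    """
    steps = []
    context = f"Task: {problem}\n"
    for i, desc in enumerate(subproblems, start=1):
        prompt = context + f"Solve S{i}: {desc}\nAnswer with the value only."
        value = call_model(prompt)
        steps.append({"id": f"S{i}", "desc": desc, "value": value})
        # Later steps may reference only this accumulated prior output.
        context += f"S{i} ({desc}) = {value}\n"
    return {"answer": steps[-1]["value"], "steps": steps}
```

With the worked example, `subproblems` would be the S1-S4 descriptions, and the returned dict carries both the final answer and the per-step breakdown for auditing.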

Failure modes & mitigations

  • Over-decomposition: Too many trivial steps → enforce max step count.
  • Dependency leak: Step uses value not yet computed → validate forward-only references.
  • Error cascade: Early mistake propagates → resample first K steps, keep majority.
  • Vague steps: Description too loose to solve → require a verb + object + constraint pattern (e.g. "Compute science count from the post-fiction remainder").
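The first two mitigations are cheap to automate on the JSON breakdown. A sketch, assuming references to earlier steps appear as "S<n>" tokens in each step's description and an illustrative cap of 8 steps:

```python
import re

MAX_STEPS = 8  # assumed cap; tune per task to block over-decomposition

def validate_breakdown(steps):
    """Check a breakdown for over-decomposition and dependency leaks.

    `steps` is the template's list of {"id": "S1", "desc": "...", ...}
    dicts in solution order. Returns a list of error strings (empty if
    the breakdown passes).
    """
    errors = []
    if len(steps) > MAX_STEPS:
        errors.append(f"over-decomposition: {len(steps)} steps > {MAX_STEPS}")
    seen = set()
    for step in steps:
        # Forward-only check: every S<n> mentioned must already be solved.
        refs = set(re.findall(r"S\d+", step["desc"])) - {step["id"]}
        leaked = refs - seen
        if leaked:
            errors.append(f"{step['id']} references unsolved {sorted(leaked)}")
        seen.add(step["id"])
    return errors
```

Error-cascade mitigation (resampling the first K steps and keeping the majority value) sits naturally on top of this: run the solver several times, validate each breakdown, then vote over the surviving values.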

Checklist

  • Necessary minimal steps only?
  • No backward references?
  • Max step limit enforced?
  • JSON breakdown included?