This is written for budget owners who’ve sat through too many vendor demos promising the moon. You want case studies with raw numbers, reproducible methods, and citation trackers you can audit. Below is an anonymized, data-forward case study with the full context, specific tactics, implementation detail, and explicit metrics. Where possible I note the source tags so a procurement-oriented audit can trace the claims back to raw reports.

1. Background and context
Client: "Client X" (anonymized mid-market B2B SaaS; average contract value (ACV): $36,000). Sales cycle: 90–150 days. Funnel tracked end-to-end in the client’s CRM + analytics (CT-001).
- Quarter (Q0 — baseline):
  - Marketing spend: $450,000 (3 months, $150k/month)
  - Top-of-funnel leads: 3,600 (1,200/month) (CT-002)
  - MQL rate (lead → MQL): 8% → 288 MQLs (CT-003)
  - SQL rate (MQL → SQL): 20% → 57 SQLs (CT-004)
  - Close rate (SQL → closed): 8% → 4.56 deals (CT-005)
  - Quarterly new revenue from closed deals: 4.56 × $36,000 = $164,160 (CT-006)
  - CAC (quarter): $450,000 / 4.56 = $98,684 (CT-007)
Procurement note: all figures above come from the client’s CRM exports and ad platform billing report (see CT-001 through CT-007).
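If you want to re-run the arithmetic yourself, here is a minimal sketch that re-derives the CT-tagged baseline figures. The constants are the numbers reported above, not a live CRM feed; truncating SQLs to a whole number mirrors the client’s report, which is why the downstream figures match CT-005 through CT-007 exactly.

```python
# Minimal sketch: re-derive the Q0 baseline funnel figures (CT-002 .. CT-007)
# from the reported inputs. Constants are the case-study numbers, not CRM data.

SPEND = 450_000      # quarterly marketing spend, USD
LEADS = 3_600        # top-of-funnel leads (CT-002)
MQL_RATE = 0.08      # lead -> MQL (CT-003)
SQL_RATE = 0.20      # MQL -> SQL (CT-004)
CLOSE_RATE = 0.08    # SQL -> closed (CT-005)
ACV = 36_000         # average contract value, USD

mqls = LEADS * MQL_RATE          # 288
sqls = int(mqls * SQL_RATE)      # 57.6 truncated to 57, as in the client report
deals = sqls * CLOSE_RATE        # 4.56
revenue = deals * ACV            # $164,160 (CT-006)
cac = SPEND / deals              # ~$98,684 (CT-007)

print(f"MQLs: {mqls:.0f}  SQLs: {sqls}  deals: {deals:.2f}")
print(f"New revenue: ${revenue:,.0f}  CAC: ${cac:,.0f}")
```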
2. The challenge faced
Client X had three interrelated problems:
- High CAC relative to ACV: marketing spend was not producing a defensible payback period (CAC ≈ 2.7x ACV in this quarter; see the quick check after this list).
- Quantity over quality: lead volume was high, but MQLs and SQLs were low quality, with long sales cycles and low demo-to-close conversion.
- Vendor fatigue and lack of evidence: previous vendors offered "growth" without handing over raw tracking data or sample-level conversion logs; procurement required verifiable, auditable proof before budget increases.

Consequence: leadership paused budget expansion, and marketing was pressured to demonstrate improvements with verifiable numbers within one quarter.
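The quick check below makes the first bullet concrete. The 1.0x CAC-to-ACV threshold is an assumption for illustration only; acceptable multiples depend on gross margin, retention, and contract length.

```python
# Back-of-the-envelope check of the "CAC ~ 2.7x ACV" claim (CT-007).
# The threshold below is a hypothetical rule of thumb, not a client policy.

CAC = 98_684                    # cost per closed deal, USD (CT-007)
ACV = 36_000                    # average contract value, USD

multiple = CAC / ACV
print(f"CAC is {multiple:.2f}x ACV")    # -> 2.74x

MAX_DEFENSIBLE_MULTIPLE = 1.0           # assumed payback threshold (illustrative)
if multiple > MAX_DEFENSIBLE_MULTIPLE:
    print("Payback is indefensible at current funnel efficiency")
```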
3. Approach taken
Primary principle: treat funnel optimization as an experiment pipeline, not a vendor-led black box. The team agreed to three constraints:
- Every change must have a hypothesis, an assigned owner, a success metric, and a data source tag (CT-…).

Final note: If you’re a budget owner evaluating a vendor, your baseline playbook should be simple: demand sample-level logs, run short experiments that your team controls, and prioritize changes that improve downstream conversion and reduce waste. The numbers above show how a disciplined, auditable process turns vendor noise into measurable results.
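For teams building that discipline in-house, here is one hypothetical shape an auditable experiment record could take. The field names and the sample entry are illustrative, not Client X’s actual schema, and the CT-XXX tag is a placeholder.

```python
# Hypothetical shape of one entry in an auditable experiment log.
# Field names and the sample entry are illustrative, not the client's schema.
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str       # the change and the metric it should move
    owner: str            # the single accountable person
    success_metric: str   # e.g. "SQL rate (MQL -> SQL)"
    source_tag: str       # CT-style tag pointing at a raw, exportable report

backlog = [
    Experiment(
        hypothesis="Tighter ICP filters on paid search raise MQL->SQL rate",
        owner="demand-gen lead",
        success_metric="SQL rate (MQL -> SQL)",
        source_tag="CT-XXX",  # placeholder tag; real tags map to raw exports
    ),
]
```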