Add data science practitioner strategy and brand awareness tutorial #286
Conversation
Strategic analysis of the opportunity to make diff-diff appealing to data science practitioners (marketing, product, operations). Adds a parallel B1-B4 roadmap track targeting this audience, and delivers the first item (B1a): Tutorial 17 — measuring campaign impact on brand awareness with survey data. The tutorial showcases the unique survey design support (SurveyDesign with strata, PSU, FPC, weights) in a CPG brand tracking scenario, with naive-vs-corrected comparison, brand funnel analysis, staggered rollout extension, and stakeholder communication guidance. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
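The core motivation for the SurveyDesign support, that naive standard errors understate uncertainty when respondents are clustered within PSUs, can be illustrated without the diff-diff API (which is not shown in this thread). A minimal numpy sketch under an assumed random-intercept structure (all names and parameters here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n_psu, m = 50, 40  # 50 primary sampling units, 40 respondents each

# Outcome = PSU-level shock + respondent noise -> positive intra-cluster correlation
psu_effect = rng.normal(0.0, 0.5, size=(n_psu, 1))
y = psu_effect + rng.normal(0.0, 1.0, size=(n_psu, m))

# Naive SE treats all n_psu * m respondents as independent draws
se_naive = y.std(ddof=1) / np.sqrt(y.size)

# Design-based logic aggregates to PSU means first (cluster-robust idea)
psu_means = y.mean(axis=1)
se_cluster = psu_means.std(ddof=1) / np.sqrt(n_psu)

assert se_cluster > se_naive  # ignoring the design understates uncertainty
```

With intra-cluster correlation of this size the design-based SE comes out several times larger than the naive one, which is the same qualitative effect the tutorial's naive-vs-corrected comparison demonstrates.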
AI review · sections: Overall Assessment, Executive Summary, Methodology, Code Quality, Performance, Maintainability, Tech Debt, Security, Documentation/Tests, Path to Approval (section bodies not captured)
- P1: Use base_period="universal" for CallawaySantAnna in staggered
section so HonestDiD sensitivity analysis is methodologically valid
- P2: Fix unit of analysis — rename to respondent_id, reframe narrative
as respondent-level survey data (not market-level DMAs)
- P2: Fix matplotlib fallback — compute trends before conditional
- P2: Replace blanket warnings.simplefilter("ignore") with targeted
RuntimeWarning filter for survey module matmul artifacts only;
UserWarnings and methodology warnings now come through
- P3: Rename "Design effect (SE ratio)" to "SE inflation ratio" to
avoid terminology drift from formal DEFF definition
- Soften parallel trends language from "confirming" to "consistent with"
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
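The targeted-filter fix above can be sketched with the stdlib warnings API; the module path `diff_diff.survey` is an assumed name for illustration:

```python
import warnings

# Replace a blanket warnings.simplefilter("ignore") with a targeted filter:
# suppress only RuntimeWarnings raised from the survey module's matmul
# internals. The module regex "diff_diff\.survey" is an assumed name here.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.filterwarnings(
        "ignore", category=RuntimeWarning, module=r"diff_diff\.survey"
    )
    # Methodology warnings (UserWarning) still reach the user:
    warnings.warn("pre-trends look questionable", UserWarning)

assert len(caught) == 1
assert caught[0].category is UserWarning
```

Because `filterwarnings` matches on both category and originating module, UserWarnings and warnings raised elsewhere pass through untouched.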
/ai-review
🔁 AI review rerun (requested by @igerber). Head SHA:

Overall Assessment · Executive Summary
Methodology re-review note: the previous HonestDiD/base-period blocker is resolved; the staggered section now uses base_period="universal".
Code Quality · Performance · Maintainability · Tech Debt · Security · Documentation/Tests · Path to Approval (section bodies not captured)
- Pass survey_design=sd to placebo DiD fit so falsification uses the same design-based inference as the main estimate
- Label check_parallel_trends() as informal/non-survey-aware and direct readers to HonestDiD for formal robustness assessment
- Soften diagnostic summary prose: "supportive evidence", not validation
- Update tutorial summary to distinguish informal checks from formal sensitivity analysis

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
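The placebo-test logic in the commit message (refit the DiD on pre-treatment data, where the true effect is zero by construction) can be sketched without the diff-diff survey machinery; `survey_design=sd` and the fit API are not reproduced here, and all numbers below are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000  # draws per group-period cell (large, so sampling noise is small)

def draw(group_effect: float, time_effect: float) -> np.ndarray:
    # Common time trend + group fixed effect + noise. No treatment effect
    # anywhere, because all four cells are pre-treatment by construction.
    return group_effect + time_effect + rng.normal(0.0, 1.0, size=n)

pre, post = 0.0, 0.5          # pseudo "pre" and "post" time effects
treat_fe, ctrl_fe = 1.0, 0.2  # group fixed effects

placebo = (
    (draw(treat_fe, post).mean() - draw(treat_fe, pre).mean())
    - (draw(ctrl_fe, post).mean() - draw(ctrl_fe, pre).mean())
)
# Under parallel trends the placebo DiD should be statistically near zero.
assert abs(placebo) < 0.1
```

A nonzero placebo estimate here would be evidence against the design, which is why the commit routes the falsification fit through the same design-based inference as the main estimate.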
/ai-review
🔁 AI review rerun (requested by @igerber). Head SHA:

Overall Assessment: ✅ Looks good
Methodology: No findings. Affected methods reviewed:
Code Quality: No findings.
Performance: No findings.
Maintainability: No findings.
Tech Debt: No findings. I did not see a new silent-correctness issue that would need TODO-based mitigation, and nothing in the changed files appears to depend on an already-deferred limitation in TODO.md:L51-L75.
Security: No findings. I did not find secrets or PII in the changed files, and the new notebook metadata is minimal.
Documentation/Tests: Static review only; notebook execution was not reproduced locally because the environment cannot import
Summary

- Strategy document (docs/business-strategy.md) assessing the opportunity to make diff-diff appealing to data science practitioners — competitive landscape, personas, gap analysis, and phased roadmap
- Parallel B1-B4 track added to ROADMAP.md targeting practitioners in marketing, product, and operations
- Tutorial 17 showcasing SurveyDesign support in a CPG brand tracking scenario, with naive-vs-corrected comparison (2.14x SE ratio), brand funnel analysis (awareness/consideration/purchase intent), staggered rollout extension with CallawaySantAnna, HonestDiD sensitivity, and stakeholder communication guidance

Methodology references (required if estimator / math changes)
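The reported 2.14x figure is an SE inflation ratio, not the formal design effect: the Kish design effect (DEFF) is defined on the variance scale, so the two are related by a square. A small illustrative calculation (the nominal sample size of 1000 is a made-up number):

```python
# SE inflation ratio reported in the tutorial: design-based SE vs naive SE
se_ratio = 2.14

# The formal (Kish) design effect DEFF is a *variance* ratio,
# so it is the square of the SE ratio.
deff = se_ratio ** 2            # ~4.58

# DEFF also implies an effective sample size: n_eff = n / DEFF.
n_nominal = 1000                # hypothetical, for illustration only
n_effective = n_nominal / deff  # ~218 effective respondents
```

This is why the review asked for "SE inflation ratio" rather than "design effect (SE ratio)": quoting 2.14 as a DEFF would understate the information loss by roughly a factor of two.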
Validation

- jupyter nbconvert --execute with seeded DGPs producing deterministic output

Security / privacy
Generated with Claude Code