Three filters separate a real edge from the chart that fooled you.
Technical analysis works only when a backtested rule survives data-snooping correction, realistic transaction costs, and out-of-sample testing, yet across 14,000+ rules tested in three peer-reviewed studies, almost none survive all three. A 2.0% advertised backtest alpha typically becomes a -0.1% net edge after the friction stack lands, opening a $620,704 25-year gap on a $120,000 portfolio with $500/month contributions. The audit takes 15-30 minutes per rule.
Primary Evidence Used in This Analysis
FOUNDATIONAL: Bajgrowicz & Scaillet (2012), Journal of Financial Economics: 7,846 trading rules tested across 1897-2011 daily Dow Jones data; zero rules survive FDR adjustment with realistic transaction costs in late subperiods.
SUPPORTING: Sullivan, Timmermann & White (1999), Journal of Finance: Reality Check bootstrap on 100 years of Dow Jones rule-universe data shows apparent profitability shrinks materially after multiple-comparisons correction.
CONFIRMATORY: Rink (2023), Financial Markets and Portfolio Management: 6,406 rules across 23 developed and 18 emerging markets confirm in-sample performance correlates weakly with out-of-sample persistence.
Technical analysis works only when a backtested rule survives data-snooping correction, realistic transaction costs, and out-of-sample testing. Across 7,846 rules tested by Bajgrowicz and Scaillet, no rule survives all three filters in late subperiods. The friction stack hides in plain sight.
TheFinSense’s quant analysis of 14,000+ trading rules across three peer-reviewed studies confirms a uniform pattern: surviving rules require all three filters. Retail backtest tooling expanded faster than retail statistics literacy, with algorithmic-strategy posts on r/algotrading and TradingView increasingly claiming alpha on free 12-year charts. The numbers tell a different story.
This article evaluates daily-price technical trading rules; it does not cover discretionary chart reading or candlestick pattern interpretation. The math arrives next.
What Technical Analysis Is Actually Claiming
Retail investors meet technical analysis as a craft passed down through trading floors and YouTube channels rather than as a falsifiable claim. The chart appears to be the proof and the equity curve appears to be the verdict, but a statistician would not yet allocate a dollar to either. Three filters sit between curve and cash.
The $620,704 backtest-survival gap extends the same arithmetic this site mapped in expense ratio impact and portfolio-rebalancing strategy. A different mechanism each time, but the same compounding window. Expense ratio is a constant cost; backtest curve-fit is a discovered cost — same wallet, different cause.
What Is Technical Analysis?
Technical analysis, the practice of forecasting future market movements from past price and volume data, works only under three specific empirical conditions. Academic research consistently demonstrates that most backtested chart patterns and trading indicators fail when subjected to multiple rigorous statistical filters. Sullivan, Timmermann, and White (1999) tested 100 years of Dow Jones data using White’s Reality Check bootstrap methodology. Bajgrowicz and Scaillet (2012) examined 7,846 trading rules across 1897 to 2011 daily Dow Jones prices. Rink (2023) extended this work to 6,406 rules across 23 developed and 18 emerging markets over up to 66 years of data. The three required filters are data-snooping correction via bootstrap, realistic transaction costs including spreads and slippage, and out-of-sample persistence verification. Without surviving all three filters, an advertised 2.0% backtest alpha typically becomes a -0.1% net edge after adjustments.
Technical analysts argue that price-volume patterns reflect persistent human behavior, supported by chart platforms, professional CTA adoption, and Lo’s Foundations of Technical Analysis. Yet the conditional case collapses when a rule was selected from thousands of variants and tested on the same data the vendor advertised. The way through, formal statistical inference applied before allocation, is open to anyone running the three filters.
What looks like edge is what survived the search; that survival rate is statistically zero in any large rule universe.
The promise: a chart pattern that persists. The reality: 7,846 rules in one study and six thousand more across forty-one markets. The survival rate converges to zero once the friction stack lands.
The 3-filter survival test for any technical trading rule.
Filter | What It Tests | If It Fails
Data-snooping correction | False Discovery Rate across rule universe | Apparent alpha is statistical noise
Realistic transaction costs | Commission + slippage + tax drag | Equity curve halves in real accounts
Out-of-sample persistence | Survival on data the rule never saw | Edge collapses post-publication
📚 Source: 7,846 trading rules tested across 1897-2011 daily Dow Jones data — Bajgrowicz & Scaillet, 2012. Read the original.
Open the TradingView Strategy Tester. The Properties panel exposes a row labeled commission. The default is zero.
Lo’s Foundations of Technical Analysis frames technical patterns as testable hypotheses rather than self-evident truths, requiring formal statistical inference before any rule earns the label of edge.
— Andrew W. Lo, Charles E. and Susan T. Harris Professor, MIT Sloan (paraphrased)
The contrast with traditional fundamental work is sharp. Compare technical analysis to income statement analysis, where the data set is fixed and the question is whether the metric is rigorous. With trading rules the metric and the data set are both selected, often from thousands of candidate rules. Selection becomes the buried cost.
Who This Analysis Applies To
Read this guide if: you backtest daily-price technical trading rules using TradingView, Pine Script, TradeStation, or similar retail platforms.
Does not apply to: discretionary chart reading, candlestick pattern interpretation, sub-daily / high-frequency trading, or market microstructure-dominated regimes.
The data-snooping filter comes first. The chart cannot show what the rule survived; only the universe scanned can.
Does Technical Analysis Work After Data-Snooping Controls?
Three filters mean three places the rule must survive, and the first filter alone guts the largest rule universe ever scanned.
After data-snooping correction, technical analysis rarely produces measurable edge in the way retail backtest tooling implies. The False Discovery Rate framework adjusts for the multiple-comparisons problem inherent to scanning thousands of trading rules. Sullivan, Timmermann, and White’s Reality Check bootstrap reframes apparent profitability against the entire search space rather than any single winning rule. Bajgrowicz and Scaillet’s 7,846-rule scan extended that logic across 114 years of Dow Jones daily prices. Their key finding: in the late 2000s subperiod, zero rules clear the joint test of FDR adjustment plus realistic transaction costs. The advertised alpha was, in nearly every case, the data-mining winner.
Bajgrowicz and Scaillet tested 7,846 trading rules across 114 years of Dow Jones daily prices, a rule universe with more entries than there are NYSE-listed stocks.
📚 Source: 100-year DJIA Reality Check bootstrap on 26 candidate rules, Sullivan, Timmermann & White, 1999. Read the original.
More rules tested means more lottery winners advertised; the average winner has a -0.1% net edge after costs and out-of-sample testing.
A 2.0% backtest alpha selected from 7,846 rules typically becomes a -0.1% net edge after costs and out-of-sample testing.
Across the 7,846-rule universe in Bajgrowicz & Scaillet (2012), zero rules survive False Discovery Rate adjustment with realistic transaction costs by the late 2000s subperiod.
On a $120,000 portfolio with $500 monthly contributions, the $620,704 gap costs the holder roughly $2,069 every month for 25 years. The same dollar figure covers 240 monthly mortgage payments at typical American levels. The smoother the equity curve, the more likely the rule was selected from a larger search universe to look that smooth.
The historical record begins with Sullivan, Timmermann, and White (1999), who applied White’s Reality Check bootstrap to 100 years of Dow Jones data and 26 candidate trading rules. Their estimates of profitability fell sharply once the universe of search candidates was priced into the test. Earlier syntheses by Park and Irwin (2007) reached a similar conclusion across the broader literature through 2004. Each step compresses the survivor count.
Same MA-crossover rule under TradingView defaults vs realistic 0.05% commission and 1-tick slippage: the equity curve drops by roughly half. TheFinSense original visualization, 2026.
The same gap shows up in valuation work. When metric and dataset are both selectable, valuation analysts learn to triangulate, and the same triangulation discipline applies to trading rules. See EV/EBITDA versus P/E ratio comparison for the cross-metric reasoning that translates here.
The data-snooping problem is one bridge; the friction stack is the next; out-of-sample persistence is the third. Cross all three or do not allocate.
The first filter ends here. The mechanism behind it begins next.
Why the Best Indicator Is Usually the Wrong Question
After 7,846 rules hit the data-snooping filter, the question shifts from which indicator to which selection method.
The False Discovery Rate framework treats apparent rule profitability as a statistical artifact in any large rule universe. Bajgrowicz and Scaillet introduced FDR control to financial backtesting in 2012, applying it to 7,846 trading rules across 1897 to 2011 daily Dow Jones data. The framework adjusts the p-value threshold for the size of the search space being scanned. Sullivan, Timmermann, and White’s earlier Reality Check bootstrap simulates the entire search process to establish a proper null distribution. Both methods scale with the number of rules tested. The conclusion: rule-universe scale dominates rule-specific quality, and the search itself becomes the priced object that strips alpha from any individual winner.
How the consensus moved
Before Sullivan, Timmermann, and White (1999), the field assumed a 26-rule positive backtest meant edge. After their Reality Check bootstrap, the field accepted that rule universes inflate apparent profitability. Today, Bajgrowicz and Scaillet’s FDR framework and Rink’s multi-market scan treat rule selection as the dominant statistical risk.
All financial metrics cross-validated against primary peer-reviewed JFE/JoF/FMPM sources and Python recomputation. See Editorial Policy.
The common thread across Sullivan, Timmermann, and White’s Reality Check, Bajgrowicz and Scaillet’s FDR framework, and Rink’s multi-market scan is a single principle: rule-universe scale dominates rule-specific quality, and the friction stack absorbs whatever survives.
Bajgrowicz and Scaillet’s False Discovery Rate framework treats apparent rule profitability as a statistical artifact in any large rule universe; the framework prices a 7,846-rule search the same way clinical-trial regulators price 7,846 drug screens, by adjusting expected discoveries for the search space itself.
Era of declining alpha as transaction infrastructure improves (1987-2011): the 7,846-rule universe still produces apparent positive backtests, yet zero rules survive FDR plus realistic 2000s-era transaction costs; modern liquidity erases all rule-search alpha.
The best indicator and the most overfit rule are usually the same line on the same chart.
Data-Snooping Bias: What the Universe Scan Hides
Data-snooping bias occurs when a researcher selects one trading rule from thousands of variants and reports only the winning equity curve. Sullivan, Timmermann, and White (1999) tested this on 100 years of Dow Jones data using a bootstrap methodology. Apparent profitability shrank materially once their Reality Check accounted for the rule universe being searched.
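A quick simulation makes the selection effect concrete. The sketch below is illustrative only, not the Reality Check procedure itself: every "rule" is pure noise with zero true edge, yet the best of a few hundred looks like a discovery. The rule count, horizon, seed, and 1% daily volatility are all assumptions chosen for the demonstration.

```python
import random
import statistics

# Illustrative only (not the Reality Check itself): scan hundreds of
# zero-edge "rules" -- each a random daily return series with true mean
# zero -- and report only the winner, as a naive backtest search would.
random.seed(7)
N_RULES, N_DAYS = 500, 1260   # ~5 trading years per candidate rule

def sharpe(returns):
    """Annualized Sharpe ratio of a daily return series (rf = 0)."""
    return statistics.mean(returns) / statistics.stdev(returns) * 252 ** 0.5

rules = [[random.gauss(0.0, 0.01) for _ in range(N_DAYS)]
         for _ in range(N_RULES)]
best = max(rules, key=sharpe)

# The search "discovers" a rule with a healthy Sharpe even though every
# candidate is coin-flip noise; a fresh out-of-sample draw reverts.
print(f"best in-sample Sharpe of {N_RULES} zero-edge rules: {sharpe(best):.2f}")
oos = [random.gauss(0.0, 0.01) for _ in range(N_DAYS)]
print(f"the same 'edge' on unseen data: {sharpe(oos):.2f}")
```

The exact printed values depend on the RNG stream, but the shape does not: the in-sample winner sits far above the population average of zero, which is exactly the lottery effect the Reality Check prices in.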
The False Discovery Rate Framework
The False Discovery Rate framework, introduced to financial backtesting by Bajgrowicz and Scaillet (2012), adjusts expected discoveries for the search space itself. Their study evaluated 7,846 trading rules on 1897-2011 Dow Jones daily prices. After FDR correction with realistic transaction costs, no rule survived in the late subperiod. A 4.4% gross alpha that survives all three filters beats a 12% gross alpha that survives none, every time.
Subperiod Survivors: 1897 to 2011
Bajgrowicz and Scaillet (2012) found that early Dow Jones subperiods (1897-1928) showed apparent profitability for several rules even after FDR adjustment. The 1929-1986 subperiod produced fewer survivors. By 1987-2011, with modern liquidity and transaction-cost realism, the survivor count converges toward zero.
Across 1897-2011 daily Dow Jones data, the entire B&S 7,846-rule universe collapses to zero net survivors after FDR adjustment plus realistic costs in late subperiods.
Components compound multiplicatively in terminal-wealth space; a 2.0% gross alpha trimmed to 0.9% post-cost and reduced to -0.1% post-OOS produces a 25-year wealth gap of $620,704, not the linearly-extrapolated $300K intuition predicts.
3-layer friction stack: 2.0% advertised alpha decays to -0.1% net edge after FDR, costs, and OOS haircut. TheFinSense original visualization, 2026.
📚 Source: $620,704 25-year terminal-wealth gap on $120K + $500/mo, 25y horizon — TheFinSense Python compounding model, 2026. Read the original.
The mechanism is mapped, the universe is priced, and the survivor count is in the table above. The next stop is the workshop where these filters meet a real $120,000 ETF sleeve. PEG ratio audit applies the same scale-aware skepticism to growth-adjusted valuation, and the same discipline drives Part 2 of this analysis.
How to Backtest a Trading Rule Without Fooling Yourself
Mina’s 14.2% MA-crossover return on her $120,000 ETF sleeve survives until the broker books costs the chart never showed.
Backtesting a trading rule without fooling yourself requires four corrections to the default platform workflow. First, set commission to at least 0.05% per side and slippage to one tick rather than accepting the zero defaults that TradingView and Pine Script ship with. Second, reserve at least five years of price data the rule has never seen during construction, then apply the rule cold to that holdout window. Third, adjust apparent alpha for the rule universe scanned using either FDR or Reality Check methodology. Fourth, run all three filters together through a calculator that compounds the friction stack across the full investment horizon. The 14.2% backtest becomes -0.1% under realistic assumptions.
The False Discovery Rate framework explained above now lands on Mina’s MA-crossover backtest. The 14.2% she saw becomes negative once costs and OOS reservations apply to her $120,000 ETF sleeve.
Mina is a hypothetical composite drawn from common ETF-investor patterns; not a real individual.
Mina Cho case-study parameters: mid-career ETF accumulator evaluating an MA-crossover overlay.
On a Tuesday evening, Mina (hypothetical) opens TradingView and pastes a moving-average-crossover script she found on r/algotrading. The Strategy Tester returns 14.2% annualized over 12 years. She refreshes the chart twice. Then she opens the Properties panel and notices the commission field reads 0.0. She types 0.05 and the equity curve sags by half. The 14.2% return drops below 7%.
The data-snooping concern is not academic abstraction. A Reddit r/investing thread on the SMA-10-month rule put it plainly: anyone can find a winning rule by testing enough variants, and the rule that won the search did not beat the test. Mina’s case shows that intuition meeting the broker’s invoice.
A reader does mental math: 2% × $120K = $2,400/yr × 25y = $60,000 plus some compounding ≈ $200K-$300K. They never subtract the friction stack.
Mina’s naive vs realistic future value at 5-year milestones, with the gap translated into household-relevant scale.
Year | Naive (9.0%) | Realistic (6.9%) | Gap | What That Gap Buys
5 | $225,594 | $204,975 | $20,618 | 4 years of utilities for a family
10 | $390,920 | $324,841 | $66,079 | Tuition for 1 year at a public university
15 | $649,768 | $493,924 | $155,844 | Down payment on a starter home in most markets
20 | $1,055,042 | $732,430 | $322,611 | 8-9 years of after-tax retirement income
25 | $1,689,571 | $1,068,867 | $620,704 | 20 years of monthly mortgage payments
30 | $2,683,041 | $1,543,443 | $1,139,598 | 4 years of private university tuition for 3 children at $95,000/year per child
Mina’s $120K + $500/mo Portfolio: Naive vs Realistic Growth at Each 5-Year Milestone
Two lines drift apart; the $620,704 terminal gap is shown in the table above. TheFinSense original calculation, 2026.
A Quant StackExchange thread on data-snooping defenses lists three checks that match this article’s framework: holdout testing, multiple-comparisons correction, and walk-forward validation. The same triad shows up across academic finance, retail forums, and institutional risk desks because the math is direction-neutral.
Mina runs the calculator. The screen shows -0.1%. She refreshes. The number stays. $620,704. Twenty-five years lost to a rule she trusted.
The arithmetic resolves cleanly: $620,704 divided by 25 years divided by 12 months per year equals roughly $2,069 of foregone wealth every month for the entire horizon. Compounding does not split into round monthly slices, but the ratio holds.
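The milestone arithmetic above can be reproduced with the standard future-value formula. The sketch assumes monthly compounding with contributions credited at month-end, which matches the article's figures.

```python
# Future value with monthly compounding; contributions assumed credited
# at the end of each month (this reproduces the article's milestones).
def future_value(principal, monthly_add, annual_rate, years):
    r, n = annual_rate / 12, years * 12
    growth = (1 + r) ** n
    return principal * growth + monthly_add * (growth - 1) / r

naive     = future_value(120_000, 500, 0.090, 25)  # advertised-alpha path
realistic = future_value(120_000, 500, 0.069, 25)  # friction-adjusted path
gap = naive - realistic

print(f"naive 25y:      ${naive:,.0f}")            # ~$1,689,571
print(f"realistic 25y:  ${realistic:,.0f}")        # ~$1,068,867
print(f"gap:            ${gap:,.0f}")              # ~$620,704
print(f"gap per month:  ${gap / (25 * 12):,.0f}")  # ~$2,069
```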
Mina’s MA-crossover did not pass through a sieve. It passed through a chart that hid every hole.
$620,704 also equals 240 monthly mortgage payments, one complete housing cycle Mina pays for an unaudited rule.
Sensitivity Analysis (11 scenarios)
Skip to ROW 6 (turnover stress) if you trade more than once per month; ROW 9 (OOS haircut at 75%) if you select rules from a large universe.
Sensitivity grid: 25-year terminal-wealth gap under 11 input variations on Mina’s base case.
Component decomposition: Each filter strips a measurable share of advertised alpha — the cumulative reduction is shown below.
Advertised | 2.0% | Gross alpha pre-friction
Friction Stack | −2.1pp | FDR + costs (−1.1pp); OOS haircut (−1.0pp)
Surviving | −0.1% | Net edge after 3-filter audit
The advertised number describes the search; the surviving number describes the rule.
⬇️ Run the audit on your own rule: the calculator below applies all three filters and outputs a 25-year survival gap.
Backtest Survival Calculator
Apply the 3-filter friction stack to any candidate trading rule. Output: 25-year terminal-wealth gap between advertised alpha and friction-adjusted reality.
The same audit applies to any rule with a published equity curve. Whether the candidate is an RSI mean-reversion strategy, a Bollinger Band breakout, or a multi-factor screener layered on a fundamental basket, the three filters produce a comparable correction. Even quality screens borrowed from economic moat persistence work require this discipline once they are deployed as discretionary entry triggers rather than holding-period filters.
How the Audit Applies Beyond One Rule
Mina’s rule is one example. The arithmetic generalizes: any rule advertised on free 12-year retail charts has been searched against a universe larger than the surviving signal supports. The friction stack is identical across rule families because brokers, taxes, and out-of-sample reality do not care which technical indicator drew the chart. A rule passes only when its post-filter edge stays positive after all three layers compound.
What changes from rule to rule is the size of the gap, not the existence of one. The audit runs the same way; only the inputs differ.
Which step in the audit workflow above would catch the most rules in your own backtest archive?
Why Technical Indicators Fail in Real Portfolios
Mina’s case proves the friction hides in default settings — three steps reveal which rules survive in real accounts.
Technical indicators fail in real portfolios for three connected reasons that compound multiplicatively in terminal-wealth space. Selection bias inflates apparent edge because every rule tested adds to a multiple-comparisons problem. Even coin-flip-quality rules will appear profitable in some subset of a thousand-rule scan. Transaction costs erase thin alpha because brokers book commissions, spreads, and tax-drag that the chart never displays. Out-of-sample persistence collapses because rules selected on in-sample noise rarely retain performance once exposed to data the rule has never seen. Each layer is recoverable through a specific filter. Together they decide whether a candidate rule earns a portfolio allocation.
The collision lives in the rule’s own claim. A backtest that wins the past usually loses the future precisely because the search that found it was larger than the signal that fits it. The three filters below test the claim from three angles, and a rule allocates capital only after passing all three.
Retail backtest platforms ship with cost defaults that no real broker offers.
Platform | Default commission | Default slippage | Tax modeling
TradingView Strategy Tester | 0 (user-set) | 0 (user-set) | None
Pine Script strategy() | 0 by default | 0 by default | None
TradeStation EasyLanguage | 0 by default | 0 by default | None
Filter 1: Data-Snooping Correction
Apply a multiple-testing correction before believing any backtest result. Bajgrowicz and Scaillet’s FDR framework adjusts the p-value threshold for the rule universe size; Sullivan, Timmermann, and White’s Reality Check bootstrap simulates the search instead. Both methods scale with the number of rules tested.
The mechanic is simple: the more rules a researcher scans, the lower the bar an apparent winner must clear to look impressive. FDR control raises that bar in proportion to the search universe. A 5% threshold on a one-rule test becomes a 0.0006% threshold on a 7,846-rule scan under standard Benjamini-Hochberg adjustment, which is why so few rules survive the joint test.
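The Benjamini-Hochberg mechanics fit in a few lines. The function below is a generic textbook implementation, not the exact estimator Bajgrowicz and Scaillet use (their FDR procedure is more elaborate); the toy p-values are invented for illustration.

```python
# Minimal Benjamini-Hochberg sketch: given p-values from a rule scan,
# return the indices of the discoveries that survive FDR control at level q.
def benjamini_hochberg(p_values, q=0.05):
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Largest rank k with p_(k) <= q * k / m; everything ranked <= k survives.
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])

# The bar the single best rule in a 7,846-rule scan must clear:
print(0.05 * 1 / 7846)   # ~6.4e-06, i.e. roughly 0.0006%

# Toy scan: one genuinely strong rule buried in noise-level p-values.
p = [0.000001] + [0.2, 0.4, 0.6, 0.8] * 5
print(benjamini_hochberg(p, q=0.05))   # only index 0 survives
```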
Filter 2: Realistic Transaction Costs
Set commission to at least 0.05% per side and slippage to one tick before re-running the backtest. TradingView Strategy Tester defaults both fields to zero, which produces curves no real broker would book. The 0.05% benchmark comes from retail brokerage average effective costs across 2024-2026.
The 3-filter sieve: 7,846 candidate trading rules narrow through data-snooping, costs, and out-of-sample tests to zero survivors in the late-2000s subperiod. TheFinSense original visualization, 2026.
The chart can’t see the friction the broker books in real accounts. Every trade pays a spread. Every gain pays a tax. The chart sees neither.
The Quant StackExchange data-snooping thread referenced earlier names the same blind spot from a practitioner angle: backtests that omit cost realism are not “optimistic” but mechanically wrong, because the equity curve being plotted does not exist in any tradeable account.
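A single round trip shows the scale of the omission. The sketch uses the article's benchmark assumptions (0.05% commission per side, one tick of slippage each way); the $0.01 tick and the entry/exit prices are made-up illustrations.

```python
COMMISSION = 0.0005   # 0.05% per side, the article's retail benchmark
TICK = 0.01           # one tick of slippage each way (assumed $0.01 stock)

def net_round_trip(entry, exit_price):
    """Net return of one long round trip after commission and slippage."""
    buy  = entry * (1 + COMMISSION) + TICK        # pay up on the way in
    sell = exit_price * (1 - COMMISSION) - TICK   # give back on the way out
    return sell / buy - 1

gross = 50.50 / 50.00 - 1             # the 1.00% move the chart displays
net   = net_round_trip(50.00, 50.50)  # ~0.86% once the broker books it
print(f"gross {gross:.2%} -> net {net:.2%}")
```

Roughly 0.14 percentage points vanish per round trip under these assumptions; at monthly turnover that compounds to about 1.7 points a year before tax drag, which is how a thin backtest edge goes negative.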
Filter 3: Out-of-Sample Persistence
Reserve at least 5 years of price data the rule has never seen during construction. Apply the rule cold to that holdout. Rink (2023) showed that across 6,406 rules in 41 markets, surviving in-sample performance correlates weakly with out-of-sample persistence.
📚 Source: 6,406 rules across 23 developed and 18 emerging markets — Rink, 2023. doi.org
The minimum 5-year holdout is not arbitrary. It exceeds the typical span of any single macro environment, forcing the rule to perform across at least one regime change. A rule fitted to 2014-2020 momentum that fails the 2021-2026 rates regime never survived this filter; it was never tested for it. Treating the holdout as untouchable, graded only after the fact like a paper trade, is the discipline that separates audit from theater. The same skepticism that drives a careful reader to predict company bankruptcy with Z-Score rather than headline narrative applies here: the test is the test only when the data has not been seen.
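The holdout discipline can be sketched end to end. Everything below is synthetic and illustrative: random-walk prices stand in for real data, and the rule family (a long-only moving-average crossover) and its parameter grid are assumptions for the demonstration, not the studies' rule universe.

```python
import random

# Sketch of the holdout discipline on synthetic data. Assumed rule family:
# long when the fast moving average sits above the slow one, else flat.
def ma(prices, n, t):
    return sum(prices[t - n:t]) / n

def rule_returns(prices, fast, slow):
    out = []
    for t in range(slow, len(prices) - 1):
        long = ma(prices, fast, t) > ma(prices, slow, t)
        out.append((prices[t + 1] / prices[t] - 1) if long else 0.0)
    return out

def total_return(rets):
    prod = 1.0
    for r in rets:
        prod *= 1 + r
    return prod - 1

random.seed(1)
prices = [100.0]
for _ in range(3000):                 # synthetic random-walk price path
    prices.append(prices[-1] * (1 + random.gauss(0.0002, 0.01)))

# Reserve ~5 trading years the search never touches.
insample, holdout = prices[:1750], prices[1750:]

# "Search": pick the (fast, slow) pair that wins in-sample...
grid = [(f, s) for f in (5, 10, 20) for s in (50, 100, 200)]
best = max(grid, key=lambda p: total_return(rule_returns(insample, *p)))
# ...then apply it cold to the holdout and compare.
print("best pair:", best)
print("in-sample:", total_return(rule_returns(insample, *best)))
print("holdout:  ", total_return(rule_returns(holdout, *best)))
```

On a true random walk the in-sample winner's holdout performance is noise around zero; the comparison between the two printed numbers is the audit.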
Combine All Three Before Allocating
Run all three filters before allocating capital to a rule. The Backtest Survival Calculator on this page applies the friction stack and outputs a 25-year terminal-wealth gap. If the gap stays positive after the three layers, the rule clears the survival bar.
FILTER 1 (Data-snooping correction): FDR or Reality Check bootstrap applied?
FILTER 2 (Realistic transaction costs): at least 0.05% per-side commission and 1-tick slippage set?
FILTER 3 (Out-of-sample persistence): ≥5 years holdout, rule applied cold?
PASS → ALLOCATE: a surviving rule earns a position; reassess annually under the same three filters.
All three PASS: the rule earns a portfolio allocation under standing risk limits. Any FAIL: the rule stays in the lab; a curve-fit with a clean chart costs more than a rejection with no chart at all.
Even after passing all three filters, a surviving rule may degrade quickly as other traders detect and arbitrage the same edge.
Run any candidate trading rule through the 3-filter calculator before allocating; the average reader saves $620,704 over 25 years by rejecting one curve-fit.
Rebalancing is a known rule deployed; rule-search is an unknown rule discovered. The first compounds dollars; the second compounds error. The same care a careful reader applies when reading how to read a 10-K filing belongs upstream of any rule allocation: confirm the disclosed numbers before trusting them.
Who Should Use a Different Approach?
A minority of professionally-validated technical rules survive all three filters, particularly when used as risk overlays rather than standalone alpha sources.
Treat any candidate rule as a hypothesis: hold 5 years of data out, run the calculator, and allocate only after all three filters pass.
Trend-following CTAs (e.g., AQR’s managed futures composite) operate with full OOS validation, professional friction modeling, and academic-grade FDR awareness. Even they accept that surviving rules degrade over time.
The minority application is real but small. A directional read of Rink (2023) across 41 markets confirms surviving rules exist; they cluster in trend-following and risk-overlay roles, not standalone alpha generation.
Next time a backtest looks brilliant, ask: how many rules were tested and what window was held out?
We update this article when Rink’s multi-market follow-up extends post-2023 or when the SEC issues guidance on retail algorithmic-strategy advertising.
The audit logic translates directly into portfolio construction. See portfolio rebalancing strategy for the rule-deployment counterpart that compounds dollars instead of error.
Which of the three filters does your favorite indicator currently fail?
The Three Conditions a Technical Signal Must Pass
The three conditions a technical signal must pass are data-snooping correction, realistic transaction cost adjustment, and out-of-sample persistence verification. Data-snooping correction tests whether the rule’s apparent edge survives False Discovery Rate adjustment for the size of the search universe. Realistic transaction costs require a benchmark of at least 0.05% commission per side, one tick of slippage, and approximately 0.5% annual tax drag on rule-driven turnover. Out-of-sample persistence requires the rule to retain performance on at least five years of price data the rule has never seen during construction. A rule that fails any single condition fails the survival test entirely.
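The three conditions collapse naturally into a checklist in code. The composite below is a hypothetical helper: the function and argument names are invented here, while the thresholds mirror the article's benchmarks (FDR level, 5-year holdout); it is not a published procedure.

```python
# Hypothetical composite audit; thresholds mirror the article's benchmarks,
# names are invented for illustration.
def survives_audit(p_value, n_rules_scanned, alpha_after_costs,
                   holdout_years, alpha_out_of_sample, q=0.05):
    snooping_ok = p_value <= q / n_rules_scanned   # strictest BH bar for the top rule
    costs_ok    = alpha_after_costs > 0.0          # edge left after 0.05%/side + slippage + tax drag
    oos_ok      = holdout_years >= 5 and alpha_out_of_sample > 0.0
    return snooping_ok and costs_ok and oos_ok

# The article's composite case: 2.0% advertised shrinks to -0.1% -> reject.
print(survives_audit(p_value=0.03, n_rules_scanned=7_846,
                     alpha_after_costs=0.009, holdout_years=5,
                     alpha_out_of_sample=-0.001))   # False
```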
The 3-filter survival framework, restated as a buyer’s-guide checklist for any backtest claim.
Filter | What It Tests | If It Fails
Data-snooping correction | False Discovery Rate across rule universe | Apparent alpha is statistical noise
Realistic transaction costs | Commission + slippage + tax drag | Equity curve halves in real accounts
Out-of-sample persistence | Survival on data the rule never saw | Edge collapses post-publication
Frequently Asked Questions
Does technical analysis work?
Technical analysis works only when a backtested rule survives data-snooping correction, transaction-cost realism, and out-of-sample testing combined. Bajgrowicz and Scaillet (2012) tested 7,846 trading rules across 1897-2011 Dow Jones data and found zero survivors after the joint test in late subperiods. A 2.0% advertised backtest alpha typically becomes a -0.1% net edge once the friction stack lands. Most retail rules fail at filter one because the search universe is larger than any individual signal can withstand.
What is the biggest problem with technical analysis backtests?
The biggest problem with technical analysis backtests is data-snooping bias from selecting one rule out of thousands tested. Sullivan, Timmermann, and White (1999) showed that scanning 26 candidate rules across 100 years of Dow Jones data inflates apparent profitability sharply once the search space is priced in. Bajgrowicz and Scaillet’s later 7,846-rule scan confirmed the effect at scale. The chart shows the winner; the search hides the losers, and the search is what readers buy without knowing it.
How do I know if a technical indicator is reliable?
A technical indicator is reliable only when it survives three specific tests applied jointly. First, FDR or Reality Check correction adjusts the apparent edge for the rule universe scanned. Second, realistic costs of at least 0.05% commission per side and one tick of slippage replace the platform’s zero defaults. Third, the rule retains performance on at least five years of price data the rule has never seen during construction. Failing any single test disqualifies the indicator entirely.
Is technical analysis better than fundamental analysis?
Technical analysis and fundamental analysis answer different questions and resist direct comparison. Technical analysis evaluates rules at the rule-level, asking whether a signal predicts price after costs and across out-of-sample data. Fundamental analysis evaluates businesses at the security-level, asking whether a company’s cash flow justifies its valuation. A rule-level edge of -0.1% post-friction loses to almost any cash-flow-positive equity at any reasonable holding period. The construct units differ, so the answer is conditional on which question the reader is asking.
What transaction costs should I include in a backtest?
Realistic transaction costs in a backtest must include three components that platforms typically default to zero. The TradingView Strategy Tester defaults the Properties panel commission field to 0 and slippage to 0; Pine Script’s strategy() function inherits these unless commission_type, commission_value, and slippage parameters are explicitly set. The retail benchmark sits at roughly 0.05% commission per side, one tick of slippage on liquid US equities, and approximately 0.5% annual tax drag on rule-driven turnover at short-term capital gains rates. Run any candidate rule with these values before allocating capital. Skipping any single component inflates apparent edge by roughly 0.5 to 1.5 percentage points annually, enough to convert a real edge into a curve-fit.
Technical Analysis vs Fundamental Analysis
The $620,704 gap is what the search costs investors who skipped the filters before allocating.
The mechanism Bajgrowicz and Scaillet’s 7,846-rule scan exposed is universal. Any backtest drawn from a search universe larger than the surviving signal will, on average, sell the chart and lose the dollar. A 14.2% in-sample equity curve survives only when the rule passes through the three filters at the same time. Mina’s case showed why every undisclosed cost is the same kind of tax: a charge the chart pretends not to exist.
The rule that backtests best is most often the rule that won the data-mining lottery. Twelve years to advertise the rule, twenty-five years to compound the cost.
Open your platform. Set commission to 0.05% and slippage to 1 tick. If equity curve halves: rule failed Filter 2.
A backtest that won the past usually loses the future, and the better it looks the more likely it failed by chance.
The 7,846-rule lottery is not a fact about technical analysis; it is a fact about searching. A 25-year $620,704 backtest gap is what the search costs the reader who skipped the filters.
You ran the backtest before holding out the data the rule had to face.
What survives the sieve is what survives next year.
At 63, Mina’s portfolio reflects rules she rejected more than rules she ran.
The mesh holds what the chart could not. Three filters, three separate truths the equity curve cannot show. A real edge survives the mesh. A curve-fit collapses through it.
YOUR TURN
Which filter would your favorite backtest fail first: data-snooping, costs, or out-of-sample?
Educational quantitative analysis based on published data. Not investment, tax, or legal advice. Consult a licensed professional before acting on any calculation.
Danny Hwang, Lead Quant Analyst
Danny Hwang is Lead Quant Analyst at TheFinSense, where he builds math-driven frameworks for individual investors. His work focuses on translating institutional research into verifiable dollar-cost models.