Dynamic Equity Allocation Model"Cash is Trash"? Not Always. Here's Why Science Beats Guesswork.
Every retail trader knows the frustration: you draw support and resistance lines, you spot patterns, you follow market gurus on social media—and still, when the next bear market hits, your portfolio bleeds red. Meanwhile, institutional investors seem to navigate market turbulence with ease, preserving capital when markets crash and participating when they rally. What's their secret?
The answer isn't insider information or access to exotic derivatives. It's systematic, scientifically validated decision-making. While most retail traders rely on subjective chart analysis and emotional reactions, professional portfolio managers use quantitative models that remove emotion from the equation and process multiple streams of market information simultaneously.
This document presents exactly such a system—not a proprietary black box available only to hedge funds, but a fully transparent, academically grounded framework that any serious investor can understand and apply. The Dynamic Equity Allocation Model (DEAM) synthesizes decades of financial research from Nobel laureates and leading academics into a practical tool for tactical asset allocation.
Stop drawing colorful lines on your chart and start thinking like a quant. This isn't about predicting where the market goes next week—it's about systematically adjusting your risk exposure based on what the data actually tells you. When valuations scream danger, when volatility spikes, when credit markets freeze, when multiple warning signals align—that's when cash isn't trash. That's when cash saves your portfolio.
The irony of "cash is trash" rhetoric is that it ignores timing. Yes, being 100% cash for decades would be disastrous. But being 100% equities through every crisis is equally foolish. The sophisticated approach is dynamic: aggressive when conditions favor risk-taking, defensive when they don't. This model shows you how to make that decision systematically, not emotionally.
Whether you're managing your own retirement portfolio or seeking to understand how institutional allocation strategies work, this comprehensive analysis provides the theoretical foundation, mathematical implementation, and practical guidance to elevate your investment approach from amateur to professional.
The choice is yours: keep hoping your chart patterns work out, or start using the same quantitative methods that professionals rely on. The tools are here. The research is cited. The methodology is explained. All you need to do is read, understand, and apply.
The Dynamic Equity Allocation Model (DEAM) is a quantitative framework for systematic allocation between equities and cash, grounded in modern portfolio theory and empirical market research. The model integrates five scientifically validated dimensions of market analysis—market regime, risk metrics, valuation, sentiment, and macroeconomic conditions—to generate dynamic allocation recommendations ranging from 0% to 100% equity exposure. This work documents the theoretical foundations, mathematical implementation, and practical application of this multi-factor approach.
1. Introduction and Theoretical Background
1.1 The Limitations of Static Portfolio Allocation
Traditional portfolio theory, as formulated by Markowitz (1952) in his seminal work "Portfolio Selection," assumes an optimal static allocation where investors distribute their wealth across asset classes according to their risk aversion. This approach rests on the assumption that returns and risks remain constant over time. However, empirical research demonstrates that this assumption does not hold in reality. Fama and French (1989) showed that expected returns vary over time and correlate with macroeconomic variables such as the spread between long-term and short-term interest rates. Campbell and Shiller (1988) demonstrated that the price-earnings ratio possesses predictive power for future stock returns, providing a foundation for dynamic allocation strategies.
The academic literature on tactical asset allocation has evolved considerably over recent decades. Ilmanen (2011) argues in "Expected Returns" that investors can improve their risk-adjusted returns by considering valuation levels, business cycles, and market sentiment. The Dynamic Equity Allocation Model presented here builds on this research tradition and operationalizes these insights into a practically applicable allocation framework.
1.2 Multi-Factor Approaches in Asset Allocation
Modern financial research has shown that different factors capture distinct aspects of market dynamics and together provide a more robust picture of market conditions than individual indicators. Ross (1976) developed the Arbitrage Pricing Theory, a model that employs multiple factors to explain security returns. Following this multi-factor philosophy, DEAM integrates five complementary analytical dimensions, each tapping different information sources and collectively enabling comprehensive market understanding.
2. Data Foundation and Data Quality
2.1 Data Sources Used
The model draws its data exclusively from publicly available market data via the TradingView platform. This transparency and accessibility are a significant advantage over proprietary models that rely on non-public data. The data foundation encompasses several categories of market information, each capturing specific aspects of market dynamics.
First, price data for the S&P 500 Index is obtained through the SPDR S&P 500 ETF (ticker: SPY). Using a highly liquid ETF instead of the index itself is a practical choice, as ETF data is available in real-time and reflects actual tradability. In addition to closing prices, high, low, and volume data are captured, which are required for calculating advanced volatility measures.
Fundamental corporate metrics are retrieved via TradingView's Financial Data API. These include earnings per share, price-to-earnings ratio, return on equity, debt-to-equity ratio, dividend yield, and share buyback yield. Cochrane (2011) emphasizes in "Presidential Address: Discount Rates" the central importance of valuation metrics for forecasting future returns, making these fundamental data a cornerstone of the model.
Volatility indicators are represented by the CBOE Volatility Index (VIX) and related metrics. The VIX, often referred to as the market's "fear gauge," measures the implied volatility of S&P 500 index options and serves as a proxy for market participants' risk perception. Whaley (2000) describes in "The Investor Fear Gauge" the construction and interpretation of the VIX and its use as a sentiment indicator.
Macroeconomic data includes yield curve information from US Treasury bonds of various maturities and credit risk premiums from the spread between high-yield bonds and risk-free government bonds. These variables capture the macroeconomic and financing conditions relevant to equity valuation. Estrella and Hardouvelis (1991) showed that the shape of the yield curve has predictive power for future economic activity, justifying the inclusion of these data.
2.2 Handling Missing Data
A practical problem when working with financial data is dealing with missing or unavailable values. The model implements a fallback system where a plausible historical average value is stored for each fundamental metric. When current data is unavailable for a specific point in time, this fallback value is used. This approach ensures that the model remains functional even during temporary data outages and avoids systematic biases from missing data. The use of average values as fallback is conservative, as it generates neither overly optimistic nor pessimistic signals.
3. Component 1: Market Regime Detection
3.1 The Concept of Market Regimes
The idea that financial markets exist in different "regimes" or states that differ in their statistical properties has a long tradition in financial science. Hamilton (1989) developed regime-switching models that allow distinguishing between different market states with different return and volatility characteristics. The practical application of this theory consists of identifying the current market state and adjusting portfolio allocation accordingly.
DEAM classifies market regimes using a scoring system that considers three main dimensions: trend strength, volatility level, and drawdown depth. This multidimensional view is more robust than focusing on individual indicators, as it captures various facets of market dynamics. The model distinguishes six regimes: Strong Bull, Bull Market, Neutral, Correction, Bear Market, and Crisis.
3.2 Trend Analysis Through Moving Averages
Moving averages are among the oldest and most widely used technical indicators and have also received attention in academic literature. Brock, Lakonishok, and LeBaron (1992) examined in "Simple Technical Trading Rules and the Stochastic Properties of Stock Returns" the profitability of trading rules based on moving averages and found evidence for their predictive power, although later studies questioned the robustness of these results when considering transaction costs.
The model calculates three moving averages with different time windows: a 20-day average (approximately one trading month), a 50-day average (approximately one quarter), and a 200-day average (approximately one trading year). The relationship of the current price to these averages and the relationship of the averages to each other provide information about trend strength and direction. When the price trades above all three averages and the short-term average is above the long-term, this indicates an established uptrend. The model assigns points based on these configurations, with longer-term trends weighted more heavily as they are considered more persistent.
3.3 Volatility Regimes
Volatility, understood as the standard deviation of returns, is a central concept of financial theory and serves as the primary risk measure. However, research has shown that volatility is not constant but changes over time and occurs in clusters—a phenomenon first documented by Mandelbrot (1963) and later formalized through ARCH and GARCH models (Engle, 1982; Bollerslev, 1986).
DEAM calculates volatility not only through the classic method of return standard deviation but also uses more advanced estimators such as the Parkinson estimator and the Garman-Klass estimator. These methods utilize intraday information (high and low prices) and are more efficient than simple close-to-close volatility estimators. The Parkinson estimator (Parkinson, 1980) uses the range between high and low of a trading day and is based on the recognition that this information reveals more about true volatility than just the closing price difference. The Garman-Klass estimator (Garman and Klass, 1980) extends this approach by additionally considering opening and closing prices.
The calculated volatility is annualized by multiplying it by the square root of 252 (the average number of trading days per year), enabling standardized comparability. The model compares current volatility with the VIX, the implied volatility from option prices. A low VIX (below 15) signals market comfort and increases the regime score, while a high VIX (above 35) indicates market stress and reduces the score. This interpretation follows the empirical observation that elevated volatility is typically associated with falling markets (Schwert, 1989).
3.4 Drawdown Analysis
A drawdown refers to the percentage decline from the highest point (peak) to the lowest point (trough) during a specific period. This metric is psychologically significant for investors as it represents the maximum loss experienced. Young (1991) introduced the Calmar Ratio, which relates return to maximum drawdown, underscoring the practical relevance of this metric.
The model calculates current drawdown as the percentage distance from the highest price of the last 252 trading days (one year). A drawdown below 3% is considered negligible and earns the maximum contribution to the regime score. As drawdown increases, the score decreases progressively, with drawdowns above 20% classified as severe and indicating a crisis or bear market regime. These thresholds are empirically motivated by historical market cycles, in which corrections typically involved 5-10% drawdowns, bear markets 20-30%, and crises over 30%.
3.5 Regime Classification
Final regime classification occurs through aggregation of scores from trend (40% weight), volatility (30%), and drawdown (30%). The higher weighting of trend reflects the empirical observation that trend-following strategies have historically delivered robust results (Moskowitz, Ooi, and Pedersen, 2012). A total score above 80 signals a strong bull market with an established uptrend, low volatility, and minimal losses. A score below 10 indicates a crisis situation requiring defensive positioning. The six regime categories enable a differentiated allocation strategy that does not merely distinguish binarily between bullish and bearish but allows graduated positioning.
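As a minimal illustration of this aggregation, here is a Python sketch (not the indicator's actual code): the sub-scores are assumed to be pre-computed on a 0-100 scale, and the band boundaries other than the documented >80 and <10 cut-offs are hypothetical placeholders.

```python
def classify_regime(trend_score, vol_score, dd_score):
    """Aggregate 0-100 sub-scores with the documented 40/30/30 weights.
    Only the >80 and <10 cut-offs come from the text; the bands in
    between are illustrative."""
    total = 0.40 * trend_score + 0.30 * vol_score + 0.30 * dd_score
    if total > 80:
        return total, "Strong Bull"
    if total > 60:
        return total, "Bull Market"
    if total > 40:
        return total, "Neutral"
    if total > 25:
        return total, "Correction"
    if total >= 10:
        return total, "Bear Market"
    return total, "Crisis"

print(classify_regime(85, 70, 90))  # (82.0, 'Strong Bull')
```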
4. Component 2: Risk-Based Allocation
4.1 Volatility Targeting as Risk Management Approach
The concept of volatility targeting is based on the idea that investors should maximize not returns but risk-adjusted returns. With the Sharpe Ratio, Sharpe (1966, 1994) defined the fundamental concept of return per unit of risk, measured as volatility. Volatility targeting goes a step further and adjusts portfolio allocation to achieve a constant target volatility. This means that in times of low market volatility, equity allocation is increased, and in times of high volatility, it is reduced.
Moreira and Muir (2017) showed in "Volatility-Managed Portfolios" that strategies that adjust their exposure based on volatility forecasts achieve higher Sharpe Ratios than passive buy-and-hold strategies. DEAM implements this principle by defining a target portfolio volatility (default 12% annualized) and adjusting equity allocation to achieve it. The mathematical foundation is simple: if market volatility is 20% and target volatility is 12%, equity allocation should be 60% (12/20 = 0.6), with the remaining 40% held in cash with zero volatility.
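In code, the core ratio reduces to a few lines; this sketch (with assumed parameter names) caps exposure at 100% equity:

```python
def vol_target_allocation(market_vol, target_vol=0.12):
    """Equity weight that scales exposure toward the volatility target;
    cash is assumed to contribute zero volatility."""
    if market_vol <= 0:
        return 1.0
    return min(1.0, target_vol / market_vol)

print(vol_target_allocation(0.20))  # 0.6 -> 60% equity, 40% cash
```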
4.2 Market Volatility Calculation
Estimating current market volatility is central to the risk-based allocation approach. The model uses several volatility estimators in parallel and selects the higher value between traditional close-to-close volatility and the Parkinson estimator. This conservative choice ensures the model does not underestimate true volatility, which could lead to excessive risk exposure.
Traditional volatility calculation uses logarithmic returns, as these have mathematically advantageous properties (they are additive over multiple periods). The logarithmic return is calculated as ln(P_t / P_{t-1}), where P_t is the price at time t. The standard deviation of these returns over a rolling 20-trading-day window is then multiplied by √252 to obtain annualized volatility. This annualization is based on the assumption of independent and identically distributed returns, which is an idealization but widely accepted in practice.
The Parkinson estimator uses additional information from the trading range (High minus Low) of each day. The formula is: σ_P = (1/√(4ln2)) × √(1/n × Σln²(H_i/L_i)) × √252, where H_i and L_i are high and low prices. Under ideal conditions, this estimator is approximately five times more efficient than the close-to-close estimator (Parkinson, 1980), as it uses more information per observation.
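The following sketch implements both estimators and the conservative maximum described above (illustrative Python with toy data, not the indicator's source):

```python
import numpy as np

def parkinson_vol(high, low, trading_days=252):
    """Annualized Parkinson (1980) range-based volatility estimator."""
    hl_sq = np.log(np.asarray(high, float) / np.asarray(low, float)) ** 2
    daily_var = hl_sq.mean() / (4.0 * np.log(2.0))
    return float(np.sqrt(daily_var * trading_days))

def close_to_close_vol(close, trading_days=252):
    """Annualized sample standard deviation of daily log returns."""
    r = np.diff(np.log(np.asarray(close, float)))
    return float(r.std(ddof=1) * np.sqrt(trading_days))

closes = [100.0, 101.0, 99.5, 102.0, 101.2]
highs = [101.0, 102.0, 101.0, 103.0, 102.0]
lows = [99.0, 100.0, 99.0, 100.5, 100.8]
# Conservative choice: never underestimate true volatility.
market_vol = max(parkinson_vol(highs, lows), close_to_close_vol(closes))
print(round(market_vol, 4))
```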
4.3 Drawdown-Based Position Size Adjustment
In addition to volatility targeting, the model implements drawdown-based risk control. The logic is that deep market declines often signal further losses and therefore justify exposure reduction. This behavior corresponds with the concept of path-dependent risk tolerance: investors who have already suffered losses are typically less willing to take additional risk (Kahneman and Tversky, 1979).
The model defines a maximum portfolio drawdown as a target parameter (default 15%). Since portfolio volatility and portfolio drawdown are proportional to equity allocation (assuming cash has neither volatility nor drawdown), allocation-based control is possible. For example, if the market exhibits a 25% drawdown and target portfolio drawdown is 15%, equity allocation should be at most 60% (15/25).
4.4 Dynamic Risk Adjustment
An advanced feature of DEAM is dynamic adjustment of risk-based allocation through a feedback mechanism. The model continuously estimates the portfolio volatility and drawdown that would result from the current allocation. If risk utilization (the ratio of actual to target risk) exceeds 1.0, allocation is reduced by an adjustment factor that grows exponentially with the overshoot. This implements a form of dynamic feedback that avoids overexposure.
Mathematically, a risk adjustment factor r_adjust is calculated: if risk utilization u > 1, then r_adjust = exp(-0.5 × (u - 1)). This exponential function ensures that moderate overutilization is corrected gently, while strong overutilization triggers drastic reductions. The factor 0.5 in the exponent was empirically calibrated to balance sensitivity against stability.
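A sketch of this feedback rule (the function name is assumed; the 0.5 calibration comes from the text):

```python
import math

def risk_adjustment(utilization):
    """De-risking factor: 1.0 inside the risk budget,
    exp(-0.5 * (u - 1)) once utilization u exceeds 1."""
    if utilization <= 1.0:
        return 1.0
    return math.exp(-0.5 * (utilization - 1.0))

print(round(risk_adjustment(1.2), 3))  # 0.905 -> mild trim
print(round(risk_adjustment(2.0), 3))  # 0.607 -> drastic reduction
```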
5. Component 3: Valuation Analysis
5.1 Theoretical Foundations of Fundamental Valuation
DEAM's valuation component is based on the fundamental premise that the intrinsic value of a security is determined by its future cash flows and that deviations between market price and intrinsic value are eventually corrected. Graham and Dodd (1934) established in "Security Analysis" the basic principles of fundamental analysis that remain relevant today. Translated into modern portfolio context, this means that markets with high valuation metrics (high price-earnings ratios) should have lower expected returns than cheaply valued markets.
Campbell and Shiller (1988) developed the Cyclically Adjusted P/E Ratio (CAPE), which smooths earnings over a full business cycle. Their empirical analysis showed that this ratio has significant predictive power for 10-year returns. Asness, Moskowitz, and Pedersen (2013) demonstrated in "Value and Momentum Everywhere" that value effects exist not only in individual stocks but also in asset classes and markets.
5.2 Equity Risk Premium as Central Valuation Metric
The Equity Risk Premium (ERP) is defined as the expected excess return of stocks over risk-free government bonds. It is the theoretical heart of valuation analysis, as it represents the compensation investors demand for bearing equity risk. Damodaran (2012) discusses in "Equity Risk Premiums: Determinants, Estimation and Implications" various methods for ERP estimation.
DEAM calculates ERP not through a single method but combines four complementary approaches with different weights. This multi-method strategy increases estimation robustness and avoids dependence on single, potentially erroneous inputs.
The first method (35% weight) uses the earnings yield, calculated as the reciprocal of the P/E ratio or directly from operating earnings data, and subtracts the 10-year Treasury yield. This method follows the logic of the Fed Model (Yardeni, 2003), although that model has theoretical weaknesses because it does not treat inflation consistently (Asness, 2003).
The second method (30% weight) extends the earnings yield by the share buyback yield. Share buybacks are a form of capital return to shareholders and increase value per share. Boudoukh et al. (2007) showed in "On the Importance of Measuring Payout Yield" that the sum of dividend yield and buyback yield is a better predictor of future returns than dividend yield alone.
The third method (20% weight) implements the Gordon Growth Model (Gordon, 1962), which models stock value as the sum of discounted future dividends. Under the assumption of constant growth g: Expected Return = Dividend Yield + g. The model estimates sustainable growth as g = ROE × (1 - Payout Ratio), where ROE is return on equity and the payout ratio is the ratio of dividends to earnings. This formula follows from equity theory: retained earnings are reinvested at the ROE and generate additional earnings growth.
The fourth method (15% weight) combines total shareholder yield (Dividend + Buybacks) with implied growth derived from revenue growth. This method considers that companies with strong revenue growth should generate higher future earnings, even if current valuations do not yet fully reflect this.
The final ERP is the weighted average of these four methods. A high ERP (above 4%) signals attractive valuations and increases the valuation score to 95 out of 100 possible points. A negative ERP, where stocks have lower expected returns than bonds, results in a minimal score of 10.
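A sketch of the four-method blend (all inputs as decimals; that each method subtracts the 10-year Treasury yield to express an excess return is my reading of the text, and the sample inputs are hypothetical):

```python
def blended_erp(pe, dividend_yield, buyback_yield, roe, payout_ratio,
                revenue_growth, treasury_10y):
    """Weighted Equity Risk Premium from the four methods above
    (35/30/20/15 weights)."""
    earnings_yield = 1.0 / pe
    m1 = earnings_yield - treasury_10y                   # earnings yield
    m2 = earnings_yield + buyback_yield - treasury_10y   # + buybacks
    g = roe * (1.0 - payout_ratio)                       # sustainable growth
    m3 = dividend_yield + g - treasury_10y               # Gordon growth
    m4 = dividend_yield + buyback_yield + revenue_growth - treasury_10y
    return 0.35 * m1 + 0.30 * m2 + 0.20 * m3 + 0.15 * m4

erp = blended_erp(pe=22.0, dividend_yield=0.015, buyback_yield=0.02,
                  roe=0.18, payout_ratio=0.35, revenue_growth=0.05,
                  treasury_10y=0.043)
print(f"{erp:.2%}")  # ~3.17%: solid, but below the 4% 'attractive' bar
```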
5.3 Quality Adjustments to Valuation
Valuation metrics alone can be misleading if not interpreted in the context of company quality. A company with a low P/E may be cheap or fundamentally problematic. The model therefore implements quality adjustments based on growth, profitability, and capital structure.
Revenue growth above 10% annually adds 10 points to the valuation score, moderate growth above 5% adds 5 points. This adjustment reflects that growth has independent value (Modigliani and Miller, 1961, extended by later growth theory). Net margin above 15% signals pricing power and operational efficiency and increases the score by 5 points, while low margins below 8% indicate competitive pressure and subtract 5 points.
Return on equity (ROE) above 20% characterizes outstanding capital efficiency and increases the score by 5 points. Piotroski (2000) showed in "Value Investing: The Use of Historical Financial Statement Information" that fundamental quality signals such as high ROE can improve the performance of value strategies.
Capital structure is evaluated through the debt-to-equity ratio. A conservative ratio below 1.0 multiplies the valuation score by 1.2, while high leverage above 2.0 applies a multiplier of 0.8. This adjustment reflects that high debt constrains financial flexibility and can become problematic in crisis times (Korteweg, 2010).
6. Component 4: Sentiment Analysis
6.1 The Role of Sentiment in Financial Markets
Investor sentiment, defined as the collective psychological attitude of market participants, influences asset prices independently of fundamental data. Baker and Wurgler (2006, 2007) developed a sentiment index and showed that periods of high sentiment are followed by overvaluations that later correct. This insight justifies integrating a sentiment component into allocation decisions.
Sentiment is difficult to measure directly but can be proxied through market indicators. The VIX is the most widely used sentiment indicator, as it aggregates implied volatility from option prices. High VIX values reflect elevated uncertainty and risk aversion, while low values signal market comfort. Whaley (2009) refers to the VIX as the "Investor Fear Gauge" and documents its role as a contrarian indicator: extremely high values typically occur at market bottoms, while low values occur at tops.
6.2 VIX-Based Sentiment Assessment
DEAM uses statistical normalization of the VIX by calculating the Z-score: z = (VIX_current - VIX_average) / VIX_standard_deviation. The Z-score indicates how many standard deviations the current VIX is from the historical average. This approach is more robust than absolute thresholds, as it adapts to the average volatility level, which can vary over longer periods.
A Z-score below -1.5 (VIX 1.5 standard deviations below its average) signals exceptionally low risk perception and adds 40 points to the sentiment score. Contrarians would caution that when no one is afraid, everyone is already invested and further upside is limited (Zweig, 1973); the model nevertheless treats calm markets as risk-friendly in the base case. Conversely, a Z-score above 1.5 (extreme fear) subtracts 40 points, reflecting acute market panic, even though such panic episodes often coincide with longer-term buying opportunities.
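A sketch of this normalization (the ±40-point extremes come from the text; the linear ramp between them is an assumption):

```python
import statistics

def vix_sentiment_points(vix_history, current_vix):
    """Z-score of the current VIX against its history, mapped to a
    sentiment-point contribution between +40 (calm) and -40 (panic)."""
    z = (current_vix - statistics.fmean(vix_history)) / statistics.stdev(vix_history)
    if z < -1.5:
        return z, 40
    if z > 1.5:
        return z, -40
    return z, round(-z / 1.5 * 40)  # assumed linear interpolation in between

print(vix_sentiment_points([14, 16, 18, 22, 15, 13, 20, 17], 33))  # panic: -40
```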
6.3 VIX Term Structure as Sentiment Signal
The VIX term structure provides additional sentiment information. Normally, the VIX complex trades in contango, meaning longer-dated volatility is priced above short-dated. This reflects that short-term volatility is currently known, while long-term volatility is more uncertain and carries a risk premium. The model compares the VIX with the shorter-dated VIX9D (9-day implied volatility) and identifies backwardation (VIX9D > 1.05 × VIX) and steep backwardation (VIX9D > 1.15 × VIX).
Backwardation occurs when short-term implied volatility is higher than longer-term, which typically happens during market stress. Investors anticipate immediate turbulence but expect calming. Psychologically, this reflects acute fear. The model subtracts 15 points for backwardation and 30 for steep backwardation, as these configurations signal elevated risk. Simon and Wiggins (2001) analyzed the VIX futures curve and showed that backwardation is associated with market declines.
6.4 Safe-Haven Flows
During crisis times, investors flee from risky assets into safe havens: gold, US dollar, and Japanese yen. This "flight to quality" is a sentiment signal. The model calculates the performance of these assets relative to stocks over the last 20 trading days. When gold or the dollar strongly rise while stocks fall, this indicates elevated risk aversion.
The safe-haven component is calculated as the difference between safe-haven performance and stock performance. Positive values (safe havens outperform) subtract up to 20 points from the sentiment score, negative values (stocks outperform) add up to 10 points. The asymmetric treatment (larger deduction for risk-off than bonus for risk-on) reflects that risk-off movements are typically sharper and more informative than risk-on phases.
Baur and Lucey (2010) examined safe-haven properties of gold and showed that gold indeed exhibits negative correlation with stocks during extreme market movements, confirming its role as crisis protection.
7. Component 5: Macroeconomic Analysis
7.1 The Yield Curve as Economic Indicator
The yield curve, represented as yields of government bonds of various maturities, contains aggregated expectations about future interest rates, inflation, and economic growth. The slope of the yield curve has remarkable predictive power for recessions. Estrella and Mishkin (1998) showed that an inverted yield curve (short-term rates higher than long-term) predicts recessions with high reliability. This is because inverted curves reflect restrictive monetary policy: the central bank raises short-term rates to combat inflation, dampening economic activity.
DEAM calculates two spread measures: the 2-year-minus-10-year spread and the 3-month-minus-10-year spread. A steep, positive curve (spreads above 1.5% and 2% respectively) signals healthy growth expectations and generates the maximum yield curve score of 40 points. A flat curve (spreads near zero) reduces the score to 20 points. An inverted curve (negative spreads) is particularly alarming and results in only 10 points.
The choice of two different spreads increases analysis robustness. The 2-10 spread is most established in academic literature, while the 3M-10Y spread is often considered more sensitive, as the 3-month rate directly reflects current monetary policy (Ang, Piazzesi, and Wei, 2006).
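A sketch of this scoring (how the two spreads are combined into a single score is not specified above, so letting the more pessimistic signal govern is an assumption):

```python
def yield_curve_score(spread_2s10s, spread_3m10y):
    """Map the two spreads (as decimals) to the 10/20/40-point bands."""
    def band(spread, steep_threshold):
        if spread < 0:
            return 10  # inverted curve: recession warning
        if spread < steep_threshold:
            return 20  # flat curve
        return 40      # steep, healthy curve
    return min(band(spread_2s10s, 0.015), band(spread_3m10y, 0.020))

print(yield_curve_score(0.018, 0.025))   # 40: steep curve
print(yield_curve_score(-0.002, 0.001))  # 10: inversion dominates
```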
7.2 Credit Conditions and Spreads
Credit spreads—the yield difference between risky corporate bonds and safe government bonds—reflect risk perception in the credit market. Gilchrist and Zakrajšek (2012) constructed an "Excess Bond Premium" that measures the component of credit spreads not explained by fundamentals and showed this is a predictor of future economic activity and stock returns.
The model approximates credit spread by comparing the yield of high-yield bond ETFs (HYG) with investment-grade bond ETFs (LQD). A narrow spread below 200 basis points signals healthy credit conditions and risk appetite, contributing 30 points to the macro score. Very wide spreads above 1000 basis points (as during the 2008 financial crisis) signal credit crunch and generate zero points.
Additionally, the model evaluates whether a "flight to quality" is occurring, identified through strong performance of Treasury bonds (TLT) with simultaneous weakness in high-yield bonds. This pattern indicates elevated risk aversion and reduces the credit conditions score.
7.3 Financial Stability at Corporate Level
While the yield curve and credit spreads reflect macroeconomic conditions, financial stability evaluates the health of companies themselves. The model uses the aggregated debt-to-equity ratio and return on equity of the S&P 500 as proxies for corporate health.
A low leverage level below 0.5 combined with high ROE above 15% signals robust corporate balance sheets and generates 20 points. This combination is particularly valuable as it represents both defensive strength (low debt means crisis resistance) and offensive strength (high ROE means earnings power). High leverage above 1.5 generates only 5 points, as it implies vulnerability to interest rate increases and recessions.
Korteweg (2010) showed in "The Net Benefits to Leverage" that optimal debt maximizes firm value, but excessive debt increases distress costs. At the aggregated market level, high debt indicates fragilities that can become problematic during stress phases.
8. Component 6: Crisis Detection
8.1 The Need for Systematic Crisis Detection
Financial crises are rare but extremely impactful events that suspend normal statistical relationships. During normal market volatility, diversified portfolios and traditional risk management approaches function, but during systemic crises, seemingly independent assets suddenly correlate strongly, and losses exceed historical expectations (Longin and Solnik, 2001). This justifies a separate crisis detection mechanism that operates independently of regular allocation components.
Reinhart and Rogoff (2009) documented in "This Time Is Different: Eight Centuries of Financial Folly" recurring patterns in financial crises: extreme volatility, massive drawdowns, credit market dysfunction, and asset price collapse. DEAM operationalizes these patterns into quantifiable crisis indicators.
8.2 Multi-Signal Crisis Identification
The model uses a counter-based approach where various stress signals are identified and aggregated. This methodology is more robust than relying on a single indicator, as true crises typically occur simultaneously across multiple dimensions. A single signal may be a false alarm, but the simultaneous presence of multiple signals increases confidence.
The first indicator is a VIX above the crisis threshold (default 40), adding one point. A VIX above 60 (as in 2008 and March 2020) adds two additional points, as such extreme values are historically very rare. This tiered approach captures the intensity of volatility.
The second indicator is market drawdown. A drawdown above 15% adds one point, as corrections of this magnitude can be potential harbingers of larger crises. A drawdown above 25% adds another point, as historical bear markets typically encompass 25-40% drawdowns.
The third indicator is credit market spreads above 500 basis points, adding one point. Such wide spreads occur only during significant credit market disruptions, as in 2008 during the Lehman crisis.
The fourth indicator identifies simultaneous losses in stocks and bonds. Normally, Treasury bonds act as a hedge against equity risk (negative correlation), but when both fall simultaneously, this indicates systemic liquidity problems or inflation/stagflation fears. The model checks whether both SPY and TLT have fallen more than 10% and 5% respectively over 5 trading days, adding two points.
The fifth indicator is a volume spike combined with negative returns. Extreme trading volumes (above twice the 20-day average) with falling prices signal panic selling. This adds one point.
A crisis situation is diagnosed when at least 3 indicators trigger, a severe crisis at 5 or more indicators. These thresholds were calibrated through historical backtesting to identify true crises (2008, 2020) without generating excessive false alarms.
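The counter logic can be summarized in a short sketch (signal inputs are assumed to be pre-computed elsewhere; thresholds and point values follow the text):

```python
def crisis_signals(vix, drawdown, credit_spread_bps,
                   spy_5d_return, tlt_5d_return,
                   volume_ratio, daily_return):
    """Count the documented stress signals and classify the state."""
    n = 0
    if vix > 40: n += 1
    if vix > 60: n += 2                      # extreme-volatility tier
    if drawdown > 0.15: n += 1
    if drawdown > 0.25: n += 1
    if credit_spread_bps > 500: n += 1
    if spy_5d_return < -0.10 and tlt_5d_return < -0.05:
        n += 2                               # stocks and bonds fall together
    if volume_ratio > 2.0 and daily_return < 0:
        n += 1                               # panic-selling volume spike
    label = "severe crisis" if n >= 5 else "crisis" if n >= 3 else "normal"
    return n, label

print(crisis_signals(62, 0.28, 750, -0.12, -0.06, 2.4, -0.03))
# (9, 'severe crisis'): readings of a 2008/2020-style episode
```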
8.3 Crisis-Based Allocation Override
When a crisis is detected, the system overrides the normal allocation recommendation and caps equity allocation at maximum 25%. In a severe crisis, the cap is set at 10%. This drastic defensive posture follows the empirical observation that crises typically require time to develop and that early reduction can avoid substantial losses (Faber, 2007).
This override logic implements a "safety first" principle: in situations of existential danger to the portfolio, capital preservation becomes the top priority. Roy (1952) formalized this approach in "Safety First and the Holding of Assets," arguing that investors should primarily minimize ruin probability.
9. Integration and Final Allocation Calculation
9.1 Component Weighting
The final allocation recommendation emerges through weighted aggregation of the five components. The standard weighting is: Market Regime 35%, Risk Management 25%, Valuation 20%, Sentiment 15%, Macro 5%. These weights reflect both theoretical considerations and empirical backtesting results.
The highest weighting of market regime is based on evidence that trend-following and momentum strategies have delivered robust results across various asset classes and time periods (Moskowitz, Ooi, and Pedersen, 2012). Current market momentum is highly informative for the near future, although it provides no information about long-term expectations.
The substantial weighting of risk management (25%) follows from the central importance of risk control. Wealth preservation is the foundation of long-term wealth creation, and systematic risk management is demonstrably value-creating (Moreira and Muir, 2017).
The valuation component receives 20% weight, based on the long-term mean reversion of valuation metrics. While valuation has limited short-term predictive power (bull and bear markets can begin at any valuation), the long-term relationship between valuation and returns is robustly documented (Campbell and Shiller, 1988).
Sentiment (15%) and Macro (5%) receive lower weights, as these factors are subtler and harder to measure. Sentiment is valuable as a contrarian indicator at extremes but less informative in normal ranges. Macro variables such as the yield curve have strong predictive power for recessions, but the transmission from recessions to stock market performance is complex and temporally variable.
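Putting the weights together, the final step reduces to a weighted average (a sketch; that the 0-100 composite maps one-to-one onto the percentage equity allocation is a simplifying assumption):

```python
WEIGHTS = {"regime": 0.35, "risk": 0.25, "valuation": 0.20,
           "sentiment": 0.15, "macro": 0.05}

def raw_allocation(scores):
    """Weighted average of the five component scores (each 0-100),
    clamped to the 0-100 allocation range."""
    pct = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return max(0.0, min(100.0, pct))

print(raw_allocation({"regime": 75, "risk": 60, "valuation": 50,
                      "sentiment": 55, "macro": 40}))  # 61.5
```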
9.2 Model Type Adjustments
DEAM allows users to choose between four model types: Conservative, Balanced, Aggressive, and Adaptive. This choice modifies the final allocation through additive adjustments.
Conservative mode subtracts 10 percentage points from allocation, resulting in consistently more cautious positioning. This is suitable for risk-averse investors or those with limited investment horizons. Aggressive mode adds 10 percentage points, suitable for risk-tolerant investors with long horizons.
Adaptive mode implements procyclical adjustment based on short-term momentum: if the market has risen more than 5% in the last 20 days, 5 percentage points are added; if it has declined more than 5%, 5 points are subtracted. This logic follows the observation that short-term momentum persists (Jegadeesh and Titman, 1993), but the moderate size of adjustment avoids excessive timing bets.
Balanced mode makes no adjustment and uses raw model output. This neutral setting is suitable for investors who wish to trust model recommendations unchanged.
9.3 Smoothing and Stability
The allocation resulting from aggregation undergoes final smoothing through a simple moving average over 3 periods. This smoothing is crucial for model practicality, as it reduces frequent trading and thus transaction costs. Without smoothing, the model could fluctuate between adjacent allocations with every small input change.
The choice of 3 periods as the smoothing window is a compromise between responsiveness and stability. Longer smoothing would excessively delay signals and impede response to true regime changes. Shorter or no smoothing would let through too much noise. Empirical tests showed that 3-period smoothing strikes an effective balance between these goals.
10. Visualization and Interpretation
10.1 Main Output: Equity Allocation
DEAM's primary output is a time series from 0 to 100 representing the recommended percentage allocation to equities. This representation is intuitive: 100% means full investment in stocks (specifically: an S&P 500 ETF), 0% means complete cash position, and intermediate values correspond to mixed portfolios. A value of 60% means, for example: invest 60% of wealth in SPY, hold 40% in money market instruments or cash.
The time series is color-coded to enable quick visual interpretation. Green shades represent high allocations (above 80%, bullish), red shades low allocations (below 20%, bearish), and neutral colors middle allocations. The chart background is dynamically colored based on the signal, enhancing readability in different market phases.
10.2 Dashboard Metrics
A tabular dashboard presents key metrics compactly. This includes current allocation, cash allocation (complement), an aggregated signal (BULLISH/NEUTRAL/BEARISH), current market regime, VIX level, market drawdown, and crisis status.
Additionally, fundamental metrics are displayed: P/E Ratio, Equity Risk Premium, Return on Equity, Debt-to-Equity Ratio, and Total Shareholder Yield. This transparency allows users to understand model decisions and form their own assessments.
Component scores (Regime, Risk, Valuation, Sentiment, Macro) are also displayed, each normalized on a 0-100 scale. This shows which factors primarily drive the current recommendation. If, for example, the Risk score is very low (20) while other scores are moderate (50-60), this indicates that risk management considerations are pulling allocation down.
10.3 Component Breakdown (Optional)
Advanced users can display individual components as separate lines in the chart. This enables analysis of component dynamics: do all components move synchronously, or are there divergences? Divergences can be particularly informative. If, for example, the market regime is bullish (high score) but the valuation component is very negative, this signals an overbought market not fundamentally supported—a classic "bubble warning."
This feature is disabled by default to keep the chart clean but can be activated for deeper analysis.
10.4 Confidence Bands
The model optionally displays uncertainty bands around the main allocation line. These are calculated as ±1 standard deviation of allocation over a rolling 20-period window. Wide bands indicate high volatility of model recommendations, suggesting uncertain market conditions. Narrow bands indicate stable recommendations.
This visualization implements a concept of epistemic uncertainty—uncertainty about the model estimate itself, not just market volatility. In phases where various indicators send conflicting signals, the allocation recommendation becomes more volatile, manifesting in wider bands. Users can understand this as a warning to act more cautiously or consult alternative information sources.
11. Alert System
11.1 Allocation Alerts
DEAM implements an alert system that notifies users of significant events. Allocation alerts trigger when smoothed allocation crosses certain thresholds. An alert is generated when allocation reaches 80% (from below), signaling strong bullish conditions. Another alert triggers when allocation falls to 20%, indicating defensive positioning.
These thresholds are not arbitrary but correspond with boundaries between model regimes. An allocation of 80% roughly corresponds to a clear bull market regime, while 20% corresponds to a bear market regime. Alerts at these points are therefore informative about fundamental regime shifts.
11.2 Crisis Alerts
Separate alerts trigger upon detection of crisis and severe crisis. These alerts have highest priority as they signal large risks. A crisis alert should prompt investors to review their portfolio and potentially take defensive measures beyond the automatic model recommendation (e.g., hedging through put options, rebalancing to more defensive sectors).
11.3 Regime Change Alerts
An alert triggers upon change of market regime (e.g., from Neutral to Correction, or from Bull Market to Strong Bull). Regime changes are highly informative events that typically entail substantial allocation changes. These alerts enable investors to proactively respond to changes in market dynamics.
11.4 Risk Breach Alerts
A specialized alert triggers when actual portfolio risk utilization exceeds target parameters by 20%. This is a warning signal that the risk management system is reaching its limits, possibly because market volatility is rising faster than allocation can be reduced. In such situations, investors should consider manual interventions.
12. Practical Application and Limitations
12.1 Portfolio Implementation
DEAM generates a recommendation for allocation between equities (S&P 500) and cash. Implementation by an investor can take various forms. The most direct method is using an S&P 500 ETF (e.g., SPY, VOO) for equity allocation and a money market fund or savings account for cash allocation.
A rebalancing strategy is required to synchronize actual allocation with model recommendation. Two approaches are possible: (1) rule-based rebalancing at every 10% deviation between actual and target, or (2) time-based monthly rebalancing. Both have trade-offs between responsiveness and transaction costs. Empirical evidence (Jaconetti, Kinniry, and Zilbering, 2010) suggests rebalancing frequency has moderate impact on performance, and investors should optimize based on their transaction costs.
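For the rule-based variant, a sketch of the trigger (interpreting the 10% deviation as percentage points of allocation is an assumption):

```python
def needs_rebalance(actual_equity, target_equity, band=0.10):
    """Trade only when the actual equity weight drifts more than
    `band` (10 percentage points) away from the model's target."""
    return abs(actual_equity - target_equity) >= band

print(needs_rebalance(0.72, 0.60))  # True: 12-point drift, rebalance
print(needs_rebalance(0.65, 0.60))  # False: stay put, save costs
```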
12.2 Adaptation to Individual Preferences
The model offers numerous adjustment parameters. Component weights can be modified if investors place more or less belief in certain factors. A fundamentally-oriented investor might increase valuation weight, while a technical trader might increase regime weight.
Risk target parameters (target volatility, max drawdown) should be adapted to individual risk tolerance. Younger investors with long investment horizons can choose higher target volatility (15-18%), while retirees may prefer lower volatility (8-10%). This adjustment systematically shifts average equity allocation.
Crisis thresholds can be adjusted based on preference for sensitivity versus specificity of crisis detection. Lower thresholds (e.g., VIX > 35 instead of 40) increase sensitivity (more crises are detected) but reduce specificity (more false alarms). Higher thresholds have the reverse effect.
12.3 Limitations and Disclaimers
DEAM is based on historical relationships between indicators and market performance. There is no guarantee these relationships will persist in the future. Structural changes in markets (e.g., through regulation, technology, or central bank policy) can break established patterns. This is the fundamental problem of induction in financial science (Taleb, 2007).
The model is optimized for US equities (S&P 500). Application to other markets (international stocks, bonds, commodities) would require recalibration. The indicators and thresholds are specific to the statistical properties of the US equity market.
The model cannot eliminate losses. Even with perfect crisis prediction, an investor following the model would lose money in bear markets—just less than a buy-and-hold investor. The goal is risk-adjusted performance improvement, not risk elimination.
Transaction costs are not modeled. In practice, spreads, commissions, and taxes reduce net returns. Frequent trading can cause substantial costs. Model smoothing helps minimize this, but users should consider their specific cost situation.
The model reacts to information; it does not anticipate it. During sudden shocks (e.g., 9/11, COVID-19 lockdowns), the model can only react after price movements, not before. This limitation is inherent to all reactive systems.
12.4 Relationship to Other Strategies
DEAM is a tactical asset allocation approach and should be viewed as a complement, not replacement, for strategic asset allocation. Brinson, Hood, and Beebower (1986) showed in their influential study "Determinants of Portfolio Performance" that strategic asset allocation (long-term policy allocation) explains the majority of portfolio performance, but this leaves room for tactical adjustments based on market timing.
The model can be combined with value and momentum strategies at the individual stock level. While DEAM controls overall market exposure, within-equity decisions can be optimized through stock-picking models. This separation between strategic (market exposure) and tactical (stock selection) levels follows classical portfolio theory.
The model does not replace diversification across asset classes. A complete portfolio should also include bonds, international stocks, real estate, and alternative investments. DEAM addresses only the US equity allocation decision within a broader portfolio.
13. Scientific Foundation and Evaluation
13.1 Theoretical Consistency
DEAM's components are based on established financial theory and empirical evidence. The market regime component follows from regime-switching models (Hamilton, 1989) and trend-following literature. The risk management component implements volatility targeting (Moreira and Muir, 2017) and modern portfolio theory (Markowitz, 1952). The valuation component is based on discounted cash flow theory and empirical value research (Campbell and Shiller, 1988; Fama and French, 1992). The sentiment component integrates behavioral finance (Baker and Wurgler, 2006). The macro component uses established business cycle indicators (Estrella and Mishkin, 1998).
This theoretical grounding distinguishes DEAM from purely data-mining-based approaches that identify patterns without causal theory. Theory-guided models have greater probability of functioning out-of-sample, as they are based on fundamental mechanisms, not random correlations (Lo and MacKinlay, 1990).
13.2 Empirical Validation
While this document does not present detailed backtest analysis, it should be noted that rigorous validation of a tactical asset allocation model should include several elements:
In-sample testing establishes whether the model functions at all in the data on which it was calibrated. Out-of-sample testing is crucial: the model should be tested in time periods not used for development. Walk-forward analysis, where the model is successively trained on rolling windows and tested in the next window, approximates real implementation.
Performance metrics should be risk-adjusted. Pure return consideration is misleading, as higher returns often only compensate for higher risk. Sharpe Ratio, Sortino Ratio, Calmar Ratio, and Maximum Drawdown are relevant metrics. Comparison with benchmarks (Buy-and-Hold S&P 500, 60/40 Stock/Bond portfolio) contextualizes performance.
Robustness checks test sensitivity to parameter variation. If the model only functions at specific parameter settings, this indicates overfitting. Robust models show consistent performance over a range of plausible parameters.
13.3 Comparison with Existing Literature
DEAM fits into the broader literature on tactical asset allocation. Faber (2007) presented a simple momentum-based timing system that goes long when the market is above its 10-month average, otherwise cash. This simple system avoided large drawdowns in bear markets. DEAM can be understood as a sophistication of this approach that integrates multiple information sources.
Ilmanen (2011) discusses various timing factors in "Expected Returns" and argues for multi-factor approaches. DEAM operationalizes this philosophy. Asness, Moskowitz, and Pedersen (2013) showed that value and momentum effects work across asset classes, justifying cross-asset application of regime and valuation signals.
Ang (2014) emphasizes in "Asset Management: A Systematic Approach to Factor Investing" the importance of systematic, rule-based approaches over discretionary decisions. DEAM is fully systematic and eliminates emotional biases that plague individual investors (overconfidence, hindsight bias, loss aversion).
References
Ang, A. (2014) *Asset Management: A Systematic Approach to Factor Investing*. Oxford: Oxford University Press.
Ang, A., Piazzesi, M. and Wei, M. (2006) 'What does the yield curve tell us about GDP growth?', *Journal of Econometrics*, 131(1-2), pp. 359-403.
Asness, C.S. (2003) 'Fight the Fed Model', *The Journal of Portfolio Management*, 30(1), pp. 11-24.
Asness, C.S., Moskowitz, T.J. and Pedersen, L.H. (2013) 'Value and Momentum Everywhere', *The Journal of Finance*, 68(3), pp. 929-985.
Baker, M. and Wurgler, J. (2006) 'Investor Sentiment and the Cross-Section of Stock Returns', *The Journal of Finance*, 61(4), pp. 1645-1680.
Baker, M. and Wurgler, J. (2007) 'Investor Sentiment in the Stock Market', *Journal of Economic Perspectives*, 21(2), pp. 129-152.
Baur, D.G. and Lucey, B.M. (2010) 'Is Gold a Hedge or a Safe Haven? An Analysis of Stocks, Bonds and Gold', *Financial Review*, 45(2), pp. 217-229.
Bollerslev, T. (1986) 'Generalized Autoregressive Conditional Heteroskedasticity', *Journal of Econometrics*, 31(3), pp. 307-327.
Boudoukh, J., Michaely, R., Richardson, M. and Roberts, M.R. (2007) 'On the Importance of Measuring Payout Yield: Implications for Empirical Asset Pricing', *The Journal of Finance*, 62(2), pp. 877-915.
Brinson, G.P., Hood, L.R. and Beebower, G.L. (1986) 'Determinants of Portfolio Performance', *Financial Analysts Journal*, 42(4), pp. 39-44.
Brock, W., Lakonishok, J. and LeBaron, B. (1992) 'Simple Technical Trading Rules and the Stochastic Properties of Stock Returns', *The Journal of Finance*, 47(5), pp. 1731-1764.
Young, T.W. (1991) 'Calmar Ratio: A Smoother Tool', *Futures*, October issue.
Campbell, J.Y. and Shiller, R.J. (1988) 'The Dividend-Price Ratio and Expectations of Future Dividends and Discount Factors', *Review of Financial Studies*, 1(3), pp. 195-228.
Cochrane, J.H. (2011) 'Presidential Address: Discount Rates', *The Journal of Finance*, 66(4), pp. 1047-1108.
Damodaran, A. (2012) *Equity Risk Premiums: Determinants, Estimation and Implications*. Working Paper, Stern School of Business.
Engle, R.F. (1982) 'Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation', *Econometrica*, 50(4), pp. 987-1007.
Estrella, A. and Hardouvelis, G.A. (1991) 'The Term Structure as a Predictor of Real Economic Activity', *The Journal of Finance*, 46(2), pp. 555-576.
Estrella, A. and Mishkin, F.S. (1998) 'Predicting U.S. Recessions: Financial Variables as Leading Indicators', *Review of Economics and Statistics*, 80(1), pp. 45-61.
Faber, M.T. (2007) 'A Quantitative Approach to Tactical Asset Allocation', *The Journal of Wealth Management*, 9(4), pp. 69-79.
Fama, E.F. and French, K.R. (1989) 'Business Conditions and Expected Returns on Stocks and Bonds', *Journal of Financial Economics*, 25(1), pp. 23-49.
Fama, E.F. and French, K.R. (1992) 'The Cross-Section of Expected Stock Returns', *The Journal of Finance*, 47(2), pp. 427-465.
Garman, M.B. and Klass, M.J. (1980) 'On the Estimation of Security Price Volatilities from Historical Data', *Journal of Business*, 53(1), pp. 67-78.
Gilchrist, S. and Zakrajšek, E. (2012) 'Credit Spreads and Business Cycle Fluctuations', *American Economic Review*, 102(4), pp. 1692-1720.
Gordon, M.J. (1962) *The Investment, Financing, and Valuation of the Corporation*. Homewood: Irwin.
Graham, B. and Dodd, D.L. (1934) *Security Analysis*. New York: McGraw-Hill.
Hamilton, J.D. (1989) 'A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle', *Econometrica*, 57(2), pp. 357-384.
Ilmanen, A. (2011) *Expected Returns: An Investor's Guide to Harvesting Market Rewards*. Chichester: Wiley.
Jaconetti, C.M., Kinniry, F.M. and Zilbering, Y. (2010) 'Best Practices for Portfolio Rebalancing', *Vanguard Research Paper*.
Jegadeesh, N. and Titman, S. (1993) 'Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency', *The Journal of Finance*, 48(1), pp. 65-91.
Kahneman, D. and Tversky, A. (1979) 'Prospect Theory: An Analysis of Decision under Risk', *Econometrica*, 47(2), pp. 263-292.
Korteweg, A. (2010) 'The Net Benefits to Leverage', *The Journal of Finance*, 65(6), pp. 2137-2170.
Lo, A.W. and MacKinlay, A.C. (1990) 'Data-Snooping Biases in Tests of Financial Asset Pricing Models', *Review of Financial Studies*, 3(3), pp. 431-467.
Longin, F. and Solnik, B. (2001) 'Extreme Correlation of International Equity Markets', *The Journal of Finance*, 56(2), pp. 649-676.
Mandelbrot, B. (1963) 'The Variation of Certain Speculative Prices', *The Journal of Business*, 36(4), pp. 394-419.
Markowitz, H. (1952) 'Portfolio Selection', *The Journal of Finance*, 7(1), pp. 77-91.
Modigliani, F. and Miller, M.H. (1961) 'Dividend Policy, Growth, and the Valuation of Shares', *The Journal of Business*, 34(4), pp. 411-433.
Moreira, A. and Muir, T. (2017) 'Volatility-Managed Portfolios', *The Journal of Finance*, 72(4), pp. 1611-1644.
Moskowitz, T.J., Ooi, Y.H. and Pedersen, L.H. (2012) 'Time Series Momentum', *Journal of Financial Economics*, 104(2), pp. 228-250.
Parkinson, M. (1980) 'The Extreme Value Method for Estimating the Variance of the Rate of Return', *Journal of Business*, 53(1), pp. 61-65.
Piotroski, J.D. (2000) 'Value Investing: The Use of Historical Financial Statement Information to Separate Winners from Losers', *Journal of Accounting Research*, 38, pp. 1-41.
Reinhart, C.M. and Rogoff, K.S. (2009) *This Time Is Different: Eight Centuries of Financial Folly*. Princeton: Princeton University Press.
Ross, S.A. (1976) 'The Arbitrage Theory of Capital Asset Pricing', *Journal of Economic Theory*, 13(3), pp. 341-360.
Roy, A.D. (1952) 'Safety First and the Holding of Assets', *Econometrica*, 20(3), pp. 431-449.
Schwert, G.W. (1989) 'Why Does Stock Market Volatility Change Over Time?', *The Journal of Finance*, 44(5), pp. 1115-1153.
Sharpe, W.F. (1966) 'Mutual Fund Performance', *The Journal of Business*, 39(1), pp. 119-138.
Sharpe, W.F. (1994) 'The Sharpe Ratio', *The Journal of Portfolio Management*, 21(1), pp. 49-58.
Simon, D.P. and Wiggins, R.A. (2001) 'S&P Futures Returns and Contrary Sentiment Indicators', *Journal of Futures Markets*, 21(5), pp. 447-462.
Taleb, N.N. (2007) *The Black Swan: The Impact of the Highly Improbable*. New York: Random House.
Whaley, R.E. (2000) 'The Investor Fear Gauge', *The Journal of Portfolio Management*, 26(3), pp. 12-17.
Whaley, R.E. (2009) 'Understanding the VIX', *The Journal of Portfolio Management*, 35(3), pp. 98-105.
Yardeni, E. (2003) 'Stock Valuation Models', *Topical Study*, 51, Yardeni Research.
Zweig, M.E. (1973) 'An Investor Expectations Stock Price Predictive Model Using Closed-End Fund Premiums', *The Journal of Finance*, 28(1), pp. 67-78.
First Passage Time - Distribution Analysis
The First Passage Time (FPT) Distribution Analysis indicator is a sophisticated probabilistic tool that answers one of the most critical questions in trading: "How long will it take for price to reach my target, and what are the odds of getting there first?"
Unlike traditional technical indicators that focus on what might happen, this indicator tells you when it's likely to happen.
Mathematical Foundation: First Passage Time Theory
What is First Passage Time?
First Passage Time (FPT) is a concept in stochastic processes that measures the time it takes for a random process to reach a specific threshold for the first time. Originally developed in physics and mathematics, FPT has applications in:
Quantitative Finance: Option pricing, risk management, and algorithmic trading
Neuroscience: Modeling neural firing patterns
Biology: Population dynamics and disease spread
Engineering: Reliability analysis and failure prediction
The Mathematics Behind It
This indicator uses Geometric Brownian Motion (GBM), the same stochastic model used in the Black-Scholes option pricing formula:
dS = μS dt + σS dW
Where:
S = Asset price
μ = Drift (trend component)
σ = Volatility (uncertainty component)
dW = Wiener process (random walk)
Through Monte Carlo simulation, the indicator runs 1,000+ price path simulations to statistically determine:
When each threshold (+X% or -X%) is likely to be hit
Which threshold is hit first (directional bias)
How often each scenario occurs (probability distribution)
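A self-contained sketch of this simulation loop (illustrative Python, not the indicator's Pine Script; the default inputs are hypothetical, and barriers are checked at daily closes only):

```python
import numpy as np

def first_passage_mc(s0=100.0, mu=0.05, sigma=0.20, up=0.05, down=0.05,
                     horizon_days=60, n_paths=1000, seed=42):
    """Simulate GBM paths and record which barrier each hits first."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 252
    up_lvl, dn_lvl = s0 * (1 + up), s0 * (1 - down)
    hit_up = hit_dn = 0
    hit_days = []
    for _ in range(n_paths):
        s = s0
        for day in range(1, horizon_days + 1):
            z = rng.standard_normal()
            s *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
            if s >= up_lvl:
                hit_up += 1; hit_days.append(day); break
            if s <= dn_lvl:
                hit_dn += 1; hit_days.append(day); break
    hits = hit_up + hit_dn
    return {"p_up_first": hit_up / max(hits, 1),
            "p_down_first": hit_dn / max(hits, 1),
            "hit_rate": hits / n_paths,
            "median_days_to_hit": float(np.median(hit_days)) if hit_days else None}

print(first_passage_mc())
```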
🎯 How This Indicator Works
Core Algorithm Workflow:
Calculate Historical Statistics
Measures recent price volatility (standard deviation of log returns)
Calculates drift (average directional movement)
Annualizes these metrics for meaningful comparison
Run Monte Carlo Simulations
Generates 1,000+ random price paths based on historical behavior
Tracks when each path hits the upside (+X%) or downside (-X%) threshold
Records which threshold was hit first in each simulation
Aggregate Statistical Results
Calculates percentile distributions (10th, 25th, 50th, 75th, 90th)
Computes "first hit" probabilities (upside vs downside)
Determines average and median time-to-target
Visual Representation
Displays thresholds as horizontal lines
Shows gradient risk zones (purple-to-blue)
Provides comprehensive statistics table
📈 Use Cases
1. Options Trading
Selling Options: Determine if your strike price is likely to be hit before expiration
Buying Options: Estimate probability of reaching profit targets within your time window
Time Decay Management: Compare expected time-to-target vs theta decay
Example: You're considering selling a 30-day call option 5% out of the money. The indicator shows there's a 72% chance price hits +5% within 12 days. This tells you the trade has high assignment risk.
2. Swing Trading
Entry Timing: Wait for higher probability setups when directional bias is strong
Target Setting: Use median time-to-target to set realistic profit expectations
Stop Loss Placement: Understand probability of hitting your stop before target
Example: The indicator shows 85% upside probability with median time of 3.2 days. You can confidently enter long positions with appropriate position sizing.
3. Risk Management
Position Sizing: Larger positions when probability heavily favors one direction
Portfolio Allocation: Reduce exposure when probabilities are near 50/50 (high uncertainty)
Hedge Timing: Know when to add protective positions based on downside probability
Example: Indicator shows 55% upside vs 45% downside—nearly neutral. This signals high uncertainty, suggesting reduced position size or wait for better setup.
4. Market Regime Detection
Trending Markets: High directional bias (70%+ one direction)
Range-bound Markets: Balanced probabilities (45-55% both directions)
Volatility Regimes: Compare actual vs theoretical minimum time
Example: Consistent 90%+ bullish bias across multiple timeframes confirms strong uptrend—stay long and avoid counter-trend trades.
First Hit Rate (Most Important!)
Shows which threshold is likely to be hit FIRST:
Upside %: Probability of hitting upside target before downside
Downside %: Probability of hitting downside target before upside
These always sum to 100%
⚠️ Warning: If you see "Low Hit Rate" warning, increase this parameter!
Advanced Parameters
Drift Mode
Allows you to explore different scenarios:
Historical: Uses actual recent trend (default—most realistic)
Zero (Neutral): Assumes no trend, only volatility (symmetric probabilities)
50% Reduced: Dampens trend effect (conservative scenario)
Use Case: Switch to "Zero (Neutral)" to see what happens in a pure volatility environment, useful for range-bound markets.
Distribution Type
Percentile: Shows 10%, 25%, 50%, 75%, 90% levels (recommended for most users)
Sigma: Shows standard deviation levels (1σ, 2σ)—useful for statistical analysis
⚠️ Important Limitations & Best Practices
Limitations
Assumes GBM: Real markets have fat tails, jumps, and regime changes not captured by GBM
Historical Parameters: Uses recent volatility/drift—may not predict regime shifts
No Fundamental Events: Cannot predict earnings, news, or macro shocks
Computational: Runs only on last bar—doesn't give historical signals
Remember: Probabilities are not certainties. Use this indicator as part of a comprehensive trading plan with proper risk management.
Created by: Henrique Centieiro. Feedback is more than welcome!
Pong
Experience PONG! The classic arcade game, now on your charts!
With this indicator, you can finally achieve your lifelong dream of beating the Markets... at PONG!
Pong is jam-packed with features! Such as:
2 Paddles
A moving dot
Floating numbers
The idea of a net
This indicator is solely a visualization; it serves simply as an exercise to depict what is possible through PineScript. It can be used to re-skin other indicators or data, but on its own, is not intended as a market indicator.
With that out of the way...
> PONG
The Pong indicator is a recreation of the classic arcade game Pong developed to pit the markets against the cold hard logic of a CPU player.
Given the lack of interaction that is possible, the game is not played in the typical sense, by a player versus the computer or by 2 players.
This version of Pong uses the chart price movements to control the "Market" Paddle, and it is contrasted by a (not AI) "CPU" Paddle, which is controlled by its own set of logic.
> Market Paddle
The Market Paddle is controlled by a data source which can be input by the user.
By default (Auto Mode), the Market Paddle is controlled through a fixed length Donchian channel range, pinning the range high to 100 and range low to 0. As seen below.
This can be altered to use data from different symbols or indicators, and can optionally be smoothed using multiple types of Moving Averages.
In the chart below, you can see how the RSI indicator is imported and smoothed to control the Market Paddle.
Note: The Market Paddle follows the moving average. If not desired, simply set the "Smoothing" input to "NONE".
> CPU Paddle
In simple terms, the CPU Paddle is a handicapped Aimbot.
Its logic is, more or less, "move directly towards the ball's vertical location".
If it were allowed to have full range of the screen, it would be impossible for it to lose a point. Due to this, we must slow it down to "play fair"... as fair as that may be.
The CPU Paddle is allowed to move at a rate specified by a certain Percent of its vertical width. By default, this is set to 2%.
Each update, the CPU Paddle can advance up or down 2% of its vertical width. The directional movement is determined based on the angle of the ball and its current position relative to the CPU Paddle's position. Given that it is not a direct follow, it may at times seem more... "human".
When a point is scored, the CPU paddle maintains its position. This is similar to the original Pong game, where the paddles were controlled solely by the raw output of the controllers and did not reset.
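As a rough illustration, here is a hypothetical Pine sketch of that movement rule; the names are assumptions and a sine wave stands in for the real ball position:
//@version=5
indicator("CPU paddle step sketch")
var float cpuY = 50.0                                 // paddle centre on an assumed 0..100 screen
paddleH = 20.0                                        // paddle height (assumed)
step    = paddleH * 0.02                              // 2% of the paddle's vertical width per update
ballY   = 50.0 + 45.0 * math.sin(bar_index / 7.0)     // stand-in for the ball's vertical position
cpuY   := cpuY + (ballY > cpuY ? step : ballY < cpuY ? -step : 0.0)  // capped pursuit keeps it beatable
plot(cpuY, "CPU paddle")
plot(ballY, "Ball")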
> Ball
At the start of each point, the ball begins at the center of the screen and moves in a randomly determined angle at its base speed.
The direction is determined by the player who scored the last point. The loser of the last point "serves" the ball.
Given the circumstances, serving is a gigantic advantage. So the loser serving is just another place where the Market is given an advantage.
The ball's base speed is 1: it will move 1 (horizontal) bar on each update of the script. This speed can "technically" increase to infinity over time, if given the perfect rally. This is due to the hit logic as described below.
Note: The minimum ball speed is also 1.
> Bonk Math
When the ball hits a paddle, essentially 3 outcomes can occur, each resulting in the ball's horizontal direction being reversed.
Action A: Its angle is doubled, and its speed is doubled.
Action B: Its angle is reversed, and its speed is decreased if it is going faster than base speed.
Action C: Its angle is preserved, and its speed is preserved. "Basic Bounce"
Each paddle is segmented into 3 zones, with the higher and lower tips (20%) of the paddles producing special actions.
The central 60% of each paddle produces a basic bounce. The special actions are determined by the trajectory of the ball and location on the paddle.
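A hypothetical sketch of that bounce table follows; `towardTip` stands in for the undisclosed trajectory test, and all names, as well as the exact Action B speed reduction, are assumptions:
//@version=5
indicator("Bonk math sketch")
// Returns the new [angle, speed]; the horizontal direction flips separately on every hit.
bounce(float hitPos, float angle, float speed, bool towardTip) =>
    bool tip = hitPos >= 0.8 or hitPos <= 0.2                             // outer 20% of the paddle
    float a  = tip and towardTip ? angle * 2.0 : tip ? -angle : angle     // Action A / B / C
    float v  = tip and towardTip ? speed * 2.0 : tip ? math.max(1.0, speed - 1.0) : speed
    [a, v]
[newAngle, newSpeed] = bounce(0.9, 30.0, 2.0, true)   // a tip hit: angle and speed double
plot(newAngle)
plot(newSpeed)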
> Custom Mode
As stated above, the script loads in "Auto Mode" by default. While this works fine to simply watch the gameplay, the Custom Mode unlocks the ability to visualize countless possibilities of indicators and analyses playing Pong!
In the chart below, we have set up the game to use the NYSE TICK Index as our Market Player. The NYSE TICK Index shows the number of NYSE stocks trading on an uptick minus those on a downtick. Its values fluctuate throughout the day, typically ranging between +1000 and -1000.
Therefore, we have set up Pong to use Ticker USI:TICK and set the Upper Boundary to 1000 and Lower Boundary to -1000. With this method, the paddle is directly controlled by the overall (NYSE) market behaviors.
As seen in a chart earlier, you can also take advantage of the Custom Mode to overlay Pong onto traditional oscillators for use anywhere!
> Styles
This version of Pong comes stocked with 5 colorways to suit your chart vibes!
> Pro Tips & Additional Information
- This game has sound! For the full experience, set alerts for this indicator and a notification sound will play on each hit!*
*Due to server processing, the notification sounds are not precisely played at each hit. :(
- In auto mode, decreasing the length used will give an advantage to the market, as its actions become more sporadic over this window.
- The CPU logic system actually allows the market to have a "technical" edge, since the Market Paddle is not bound to any speed, and is solely controlled by the raw market movements/data input.
- This type of visualization only works on live charts, charts without updates will not see any movement.
- Indicator sources can only be imported from other indicators on the same chart.
- The base screen resolution is 159 bars wide, with the height determined by the boundaries.
- When using a symbol and an outside source, be mindful that the script is attempting to pull the source from the input symbol. Data can appear wonky when not considering the interactions of these inputs.
There are many small interesting details that can't be seen through the description. For example, the mid-line is made from a box. This is because a line object would not appear on top of the box used for the screen. For those keen-eyed coders, feel free to poke around in the source code to make the game truly custom.
Just remember:
The market may never be fair, but now at least it can play Pong!
Enjoy!
Options Max Pain Calculator [BackQuant]
Options Max Pain Calculator
A visualization tool that models option expiry dynamics by calculating "max pain" levels, displaying synthetic open interest curves, gamma exposure profiles, and pin-risk zones to help identify where market makers have the least payout exposure.
What is Max Pain?
Max Pain is the theoretical expiration price where the total dollar value of outstanding options would be minimized. At this price level, option holders collectively experience maximum losses while option writers (typically market makers) have minimal payout obligations. This creates a natural gravitational pull as expiration approaches.
Core Features
Visual Analysis Components:
Max Pain Line: Horizontal line showing the calculated minimum pain level
Strike Level Grid: Major support and resistance levels at key option strikes
Pin Zone: Highlighted area around max pain where price may gravitate
Pain Heatmap: Color-coded visualization showing pain distribution across prices
Gamma Exposure Profile: Bar chart displaying net gamma at each strike level
Real-time Dashboard: Summary statistics and risk metrics
Synthetic Market Modeling
Since Pine Script cannot access live options data, the indicator creates realistic synthetic open interest distributions based on configurable market parameters including volume patterns, put/call ratios, and market maker positioning.
How It Works
Strike Generation:
The tool creates a grid of option strikes centered around the current price. You can control the range, density, and whether strikes snap to realistic market increments.
Open Interest Modeling:
Using your inputs for average volume, put/call ratios, and market maker behavior, the indicator generates synthetic open interest that mirrors real market dynamics:
Higher volume at-the-money with decay as strikes move further out
Adjustable put/call bias to reflect current market sentiment
Market maker inventory effects and typical short-gamma positioning
Weekly options boost for near-term expirations
Pain Calculation:
For each potential expiry price, the tool calculates total option payouts:
Call options contribute pain when finishing in-the-money
Put options contribute pain when finishing in-the-money
The strike with minimum total pain becomes the Max Pain level
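In code form, the pain at one candidate expiry price is simply the summed intrinsic value of every strike's open interest. Here is a minimal sketch with assumed array names and toy numbers, not the indicator's internals:
//@version=5
indicator("Max pain sketch")
// Total option payout ("pain") if price expired at `expiry`.
painAt(float expiry, array<float> strikes, array<float> callOI, array<float> putOI) =>
    float pain = 0.0
    for i = 0 to array.size(strikes) - 1
        float k = array.get(strikes, i)
        pain += array.get(callOI, i) * math.max(expiry - k, 0.0)   // ITM calls contribute pain
        pain += array.get(putOI, i)  * math.max(k - expiry, 0.0)   // ITM puts contribute pain
    pain
// Toy grid: max pain is the candidate price whose painAt(...) is lowest.
strikes = array.from(95.0, 100.0, 105.0)
callOI  = array.from(50.0, 200.0, 120.0)
putOI   = array.from(80.0, 150.0, 40.0)
plot(painAt(100.0, strikes, callOI, putOI), "Pain at 100")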
Gamma Analysis:
Net gamma exposure is calculated at each strike using standard option pricing models, showing where hedging flows may be most intense. Positive gamma creates price support while negative gamma can amplify moves.
Key Settings
Basic Configuration:
Number of Strikes: Controls grid density (recommended: 15-25)
Days to Expiration: Time until option expiry
Strike Range: Price range around current level (recommended: 8-15%)
Strike Increment: Spacing between strikes
Market Parameters:
Average Daily Volume: Baseline for synthetic open interest
Put/Call Volume Ratio: Market sentiment bias (>1.0 = bearish, <1.0 = bullish). It does not work if set to exactly 1.0
Implied Volatility: Current option volatility estimate
Market Maker Factors: Dealer positioning and hedging intensity
Display Options:
Model Complexity: Simple (line only), Standard (+ zones), Advanced (+ heatmap/gamma)
Visual Elements: Toggle individual components on/off
Theme: Dark/Light mode
Update Frequency: Real-time or daily calculation
Reading the Display
Dashboard Table (Top Right):
Current Price vs Max Pain Level
Distance to Pain: Percentage gap (smaller = higher pin risk)
Pin Risk Assessment: HIGH/MEDIUM/LOW based on proximity and time
Days to Expiry and Strike Count
Model complexity level
Visual Elements:
Red Line: Max Pain level where payout is minimized
Colored Zone: Pin risk area around max pain
Dotted Lines: Major strike levels (green = support, orange = resistance)
Color Bar: Pain heatmap (blue = high pain, red = low pain/max pain zones)
Horizontal Bars: Gamma exposure (green = positive, red = negative)
Yellow Dotted Line: Gamma flip level where hedging behavior changes
Trading Applications
Expiration Pinning:
When price is near max pain with limited time remaining, there's increased probability of gravitating toward that level as market makers hedge their positions.
Support and Resistance:
High open interest strikes often act as magnets, with max pain representing the strongest gravitational pull.
Volatility Expectations:
Above gamma flip: Expect dampened volatility (long gamma environment)
Below gamma flip: Expect amplified moves (short gamma environment)
Risk Assessment:
The pin risk indicator helps gauge likelihood of price manipulation near expiry, with HIGH risk suggesting potential range-bound action.
Best Practices
Setup Recommendations:
Start with Model Complexity set to "Standard"
Use realistic strike ranges (8-12% for most assets)
Set put/call ratio based on current market sentiment
Adjust implied volatility to match current levels
Interpretation Guidelines:
Small distance to pain + short time = high pin probability
Large gamma bars indicate key hedging levels to monitor
Heatmap intensity shows strength of pain concentration
Multiple nearby strikes can create wider pin zones
Update Strategy:
Use "Daily" updates for cleaner visuals during trading hours
Switch to "Every Bar" for real-time analysis near expiration
Monitor changes in max pain level as new options activity emerges
Important Disclaimers
This is a modeling tool using synthetic data, not live market information. While the calculations are mathematically sound and the modeling realistic, actual market dynamics involve numerous factors not captured in any single indicator.
Max pain represents theoretical minimum payout levels and suggests where natural market forces may create gravitational pull, but it does not guarantee price movement or predict exact expiration levels. Market gaps, news events, and changing volatility can override these dynamics.
Use this tool as additional context for your analysis, not as a standalone trading signal. The synthetic nature of the data makes it most valuable for understanding market structure and potential zones of interest rather than precise price prediction.
Technical Notes
The indicator uses established option pricing principles with simplified implementations optimized for Pine Script performance. Gamma calculations use standard financial models while pain calculations follow the industry-standard definition of minimized option payouts.
All visual elements use fixed positioning to prevent movement when scrolling charts, and the tool includes performance optimizations to handle real-time calculation without timeout errors.
Volume Profile 3D (Zeiierman)
█ Overview
Volume Profile 3D (Zeiierman) is a next-generation volume profile that renders market participation as a 3D-style profile directly on your chart. Instead of flat histograms, you get a depth-aware profile with parallax, gradient transparency, and bull/bear separation, so you can see where liquidity stacked up and how it shifted during the move.
Highlights:
3D visual effect with perspective and depth shading for clarity.
Bull/Bear separation to see whether up bars or down bars created the volume.
Flexible colors and gradients that highlight where the most significant trading activity took place.
This is a state-of-the-art volume profile — visually powerful, highly flexible, and unlike anything else available.
█ How It Works
⚪ Profile Construction
The price range (from highest to lowest) is divided into a number of levels (buckets). Each bar’s volume is added to the correct level, based on its average price. This builds a map of where trading volume was concentrated.
You can choose to:
Aggregate all volume at each level, or
Split bullish vs. bearish volume, slightly offset for clarity.
This creates a clear view of which price zones matter most to the market.
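A minimal sketch of that bucketing step, assuming made-up names and a fixed 200-bar window (the published indicator's actual code differs):
//@version=5
indicator("Volume bucketing sketch")
int nBuckets = 25                       // number of price levels (assumed)
int lookback = 200                      // profile window in bars (assumed)
float hi = ta.highest(high, lookback)
float lo = ta.lowest(low, lookback)
var array<float> buckets = array.new_float(nBuckets, 0.0)
if barstate.islast and bar_index >= lookback
    array.fill(buckets, 0.0)
    float step = (hi - lo) / nBuckets
    for i = 0 to lookback - 1
        int b = int(math.min(nBuckets - 1, math.floor((hl2[i] - lo) / step)))  // bar's average price -> level
        array.set(buckets, b, array.get(buckets, b) + volume[i])               // accumulate the bar's volume
        // a bull/bear split would instead add to separate arrays based on close[i] >= open[i]
plot(array.max(buckets), "Largest bucket volume")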
⚪ 3D Effect Creation
The unique part of this indicator is how the 3D projection is built. Each volume block’s width is scaled to its relative size, then tilted with a slope factor to create a depth effect.
maxVol = bins.bu.max() + bins.be.max()
width = math.max(1, math.floor(bucketVol / maxVol * ((bar_index - start) * mult)))
slope = -(step * dev) / ((bar_index - start) * (mult/2))
factor = math.pow(math.min(1.0, math.abs(slope) / step), .5)
width → determines how far the volume extends, based on relative strength.
slope → creates the angled projection for the 3D look.
factor → adjusts perspective to make deeper areas shrink naturally.
The result is a 3D-style volume profile where large areas pop forward and smaller areas fade back, giving you immediate visual context.
█ How to Use
⚪ Support & Resistance Zones (HVNs and Value Area)
Regions where a lot of volume traded tend to act like walls:
If price approaches a high-volume area from above, it may act as support.
From below, it may act as resistance.
Traders often enter or exit near these zones because they represent strong agreement among market participants.
⚪ POC Rejections & Mean Reversions
The Point of Control (POC) is the single price level with the highest volume in the profile.
When price returns to the POC and rejects it, that’s often a signal for reversal trades.
In ranging markets, price may bounce between edges of the Value Area and revert to POC.
⚪ Breakouts via Low-Volume Zones (LVNs)
Low volume areas (gaps in the profile) offer the path of least resistance:
Price often moves quickly through these thin zones when momentum builds.
Use them to spot breakouts or continuation trades.
⚪ Directional Insight
Use the bull/bear separation to see whether buyers or sellers dominated at key levels.
█ Settings
Use Active Chart – Profile updates with visible candles.
Custom Period – Fixed number of bars.
Up/Down – Adjust tilt for the 3D angle.
Left/Right – Scale width of the profile.
Aggregated – Merge bull/bear volume.
Bull/Bear Shift – Separate bullish and bearish volume.
Buckets – Number of price levels.
Choose from templates or set custom colors.
POC Gradient option makes high volume bolder, low volume lighter.
-----------------
Disclaimer
The content provided in my scripts, indicators, ideas, algorithms, and systems is for educational and informational purposes only. It does not constitute financial advice, investment recommendations, or a solicitation to buy or sell any financial instruments. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
RiskMetrics
█ OVERVIEW
This library is a tool for Pine programmers that provides functions for calculating risk-adjusted performance metrics on periodic price returns. The calculations used by this library's functions closely mirror those the Broker Emulator uses to calculate strategy performance metrics (e.g., Sharpe and Sortino ratios) without depending on strategy-specific functionality.
█ CONCEPTS
Returns, risk, and volatility
The return on an investment is the relative gain or loss over a period, often expressed as a percentage. Investment returns can originate from several sources, including capital gains, dividends, and interest income. Many investors seek the highest returns possible in the quest for profit. However, prudent investing and trading entails evaluating such returns against the associated risks (i.e., the uncertainty of returns and the potential for financial losses) for a clearer perspective on overall performance and sustainability.
One way investors and analysts assess the risk of an investment is by analyzing its volatility, i.e., the statistical dispersion of historical returns. Investors often use volatility in risk estimation because it provides a quantifiable way to gauge the expected extent of fluctuation in returns. Elevated volatility implies heightened uncertainty in the market, which suggests higher expected risk. Conversely, low volatility implies relatively stable returns with relatively minimal fluctuations, thus suggesting lower expected risk. Several risk-adjusted performance metrics utilize volatility in their calculations for this reason.
Risk-free rate
The risk-free rate represents the rate of return on a hypothetical investment carrying no risk of financial loss. This theoretical rate provides a benchmark for comparing the returns on a risky investment and evaluating whether its excess returns justify the risks. If an investment's returns are at or below the theoretical risk-free rate or the risk premium is below a desired amount, it may suggest that the returns do not compensate for the extra risk, which might be a call to reassess the investment.
Since the risk-free rate is a theoretical concept, investors often utilize proxies for the rate in practice, such as Treasury bills and other government bonds. Conventionally, analysts consider such instruments "risk-free" for a domestic holder, as they are a form of government obligation with a low perceived likelihood of default.
The average yield on short-term Treasury bills, influenced by economic conditions, monetary policies, and inflation expectations, has historically hovered around 2-3% over the long term. This range also aligns with central banks' inflation targets. As such, one may interpret a value within this range as a minimum proxy for the risk-free rate, as it may correspond to the minimum rate required to maintain purchasing power over time.
The built-in Sharpe and Sortino ratios that strategies calculate and display in the Performance Summary tab use a default risk-free rate of 2%, and the metrics in this library's example code use the same default rate. Users can adjust this value to fit their analysis needs.
Risk-adjusted performance
Risk-adjusted performance metrics gauge the effectiveness of an investment by considering its returns relative to the perceived risk. They aim to provide a more well-rounded picture of performance by factoring in the level of risk taken to achieve returns. Investors can utilize such metrics to help determine whether the returns from an investment justify the risks and make informed decisions.
The two most commonly used risk-adjusted performance metrics are the Sharpe ratio and the Sortino ratio.
1. Sharpe ratio
The Sharpe ratio , developed by Nobel laureate William F. Sharpe, measures the performance of an investment compared to a theoretically risk-free asset, adjusted for the investment risk. The ratio uses the following formula:
Sharpe Ratio = (𝑅𝑎 − 𝑅𝑓) / 𝜎𝑎
Where:
• 𝑅𝑎 = Average return of the investment
• 𝑅𝑓 = Theoretical risk-free rate of return
• 𝜎𝑎 = Standard deviation of the investment's returns (volatility)
A higher Sharpe ratio indicates a more favorable risk-adjusted return, as it signifies that the investment produced higher excess returns per unit of increase in total perceived risk.
2. Sortino ratio
The Sortino ratio is a modified form of the Sharpe ratio that only considers downside volatility, i.e., the volatility of returns below the theoretical risk-free benchmark. Although it shares close similarities with the Sharpe ratio, it can produce very different values, especially when the returns do not have a symmetrical distribution, since it does not penalize upside and downside volatility equally. The ratio uses the following formula:
Sortino Ratio = (𝑅𝑎 − 𝑅𝑓) / 𝜎𝑑
Where:
• 𝑅𝑎 = Average return of the investment
• 𝑅𝑓 = Theoretical risk-free rate of return
• 𝜎𝑑 = Downside deviation (standard deviation of negative excess returns, or downside volatility)
The Sortino ratio offers an alternative perspective on an investment's return-generating efficiency since it does not consider upside volatility in its calculation. A higher Sortino ratio signifies that the investment produced higher excess returns per unit of increase in perceived downside risk.
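For reference, both ratios reduce to a few lines of code. The following is a minimal sketch of the textbook formulas over an array of monthly returns; it is not this library's implementation, and the sample numbers are made up:
//@version=5
indicator("Sharpe and Sortino sketch")
// rets holds periodic (here: monthly) return percentages; rfAnnual is the annual benchmark.
ratios(array<float> rets, float rfAnnual) =>
    float rf   = rfAnnual / 12.0                      // annual benchmark -> monthly
    float mean = array.avg(rets)
    float sd   = array.stdev(rets)                    // total volatility (Sharpe denominator)
    float dsum = 0.0
    int   n    = array.size(rets)
    for i = 0 to n - 1
        float ex = array.get(rets, i) - rf
        dsum += ex < 0 ? ex * ex : 0.0                // only downside deviations (Sortino)
    float dsd = math.sqrt(dsum / n)
    [(mean - rf) / sd, (mean - rf) / dsd]
[sharpe, sortino] = ratios(array.from(1.2, -0.5, 2.1, 0.3, -1.0, 1.7), 2.0)
plot(sharpe, "Sharpe")
plot(sortino, "Sortino")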
█ CALCULATIONS
Return period detection
Calculating risk-adjusted performance metrics requires collecting returns across several periods of a given size. Analysts may use different period sizes based on the context and their preferences. However, two widely used standards are monthly or daily periods, depending on the available data and the investment's duration. The built-in ratios displayed in the Strategy Tester utilize returns from either monthly or daily periods in their calculations based on the following logic:
• Use monthly returns if the history of closed trades spans at least two months.
• Use daily returns if the trades span at least two days but less than two months.
• Do not calculate the ratios if the trade data spans fewer than two days.
This library's `detectPeriod()` function applies related logic to available chart data rather than trade data to determine which period is appropriate:
• It returns true if the chart's data spans at least two months, indicating that it's sufficient to use monthly periods.
• It returns false if the chart's data spans at least two days but not two months, suggesting the use of daily periods.
• It returns na if the length of the chart's data covers less than two days, signifying that the data is insufficient for meaningful ratio calculations.
It's important to note that programmers should only call `detectPeriod()` from a script's global scope or within the outermost scope of a function called from the global scope, as it requires the time value from the first bar to accurately measure the amount of time covered by the chart's data.
Collecting periodic returns
This library's `getPeriodicReturns()` function tracks price return data within monthly or daily periods and stores the periodic values in an array. It uses a `detectPeriod()` call as the condition to determine whether each element in the array represents the return over a monthly or daily period.
The `getPeriodicReturns()` function has two overloads. The first overload requires two arguments and outputs an array of monthly or daily returns for use in the `sharpeRatio()` and `sortinoRatio()` methods. To calculate these returns:
1. The `percentChange` argument should be a series that represents percentage gains or losses. The values can be bar-to-bar return percentages on the chart timeframe or percentages requested from a higher timeframe.
2. The function compounds all non-na `percentChange` values within each monthly or daily period to calculate the period's total return percentage. When the `percentChange` represents returns from a higher timeframe, ensure the requested data includes gaps to avoid compounding redundant values.
3. After a period ends, the function queues the compounded return into the array, removing the oldest element from the array when its size exceeds the `maxPeriods` argument.
The resulting array represents the sequence of closed returns over up to `maxPeriods` months or days, depending on the available data.
The second overload of the function includes an additional `benchmark` parameter. Unlike the first overload, this version tracks and collects differences between the `percentChange` and the specified `benchmark` values. The resulting array represents the sequence of excess returns over up to `maxPeriods` months or days. Passing this array to the `sharpeRatio()` and `sortinoRatio()` methods calculates generalized Information ratios, which represent the risk-adjusted performance of a sequence of returns compared to a risky benchmark instead of a risk-free rate. For consistency, ensure the non-na times of the `benchmark` values align with the times of the `percentChange` values.
Ratio methods
This library's `sharpeRatio()` and `sortinoRatio()` methods respectively calculate the Sharpe and Sortino ratios based on an array of returns compared to a specified annual benchmark. Both methods adjust the annual benchmark based on the number of periods per year to suit the frequency of the returns:
• If the method call does not include a `periodsPerYear` argument, it uses `detectPeriod()` to determine whether the returns represent monthly or daily values based on the chart's history. If monthly, the method divides the `annualBenchmark` value by 12. If daily, it divides the value by 365.
• If the method call does specify a `periodsPerYear` argument, the argument's value supersedes the automatic calculation, facilitating custom benchmark adjustments, such as dividing by 252 when analyzing collected daily stock returns.
When the array passed to these methods represents a sequence of excess returns, such as the result from the second overload of `getPeriodicReturns()`, use an `annualBenchmark` value of 0 to avoid comparing those excess returns to a separate rate.
By default, these methods only calculate the ratios on the last available bar to minimize their resource usage. Users can override this behavior with the `forceCalc` parameter. When the value is true , the method calculates the ratio on each call if sufficient data is available, regardless of the bar index.
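Putting the pieces together, a typical import-and-use flow looks like the sketch below. The import path and version number here are assumptions; copy the exact statement from the library's publication page:
//@version=5
indicator("RiskMetrics usage sketch")
import TradingView/RiskMetrics/1 as rm                   // assumed publisher/version
percentChange = 100 * (close - close[1]) / close[1]      // bar-to-bar return percentage
returns = rm.getPeriodicReturns(percentChange, 60)       // up to 60 monthly or daily returns
sharpe  = rm.sharpeRatio(returns, 2.0)                   // 2% annual risk-free benchmark
sortino = rm.sortinoRatio(returns, 2.0)
plot(sharpe, "Sharpe")
plot(sortino, "Sortino")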
Look first. Then leap.
█ FUNCTIONS & METHODS
This library contains the following functions:
detectPeriod()
Determines whether the chart data has sufficient coverage to use monthly or daily returns
for risk metric calculations.
Returns: (bool) `true` if the period spans more than two months, `false` if it otherwise spans more
than two days, and `na` if the data is insufficient.
getPeriodicReturns(percentChange, maxPeriods)
(Overload 1 of 2) Tracks periodic return percentages and queues them into an array for ratio
calculations. The span of the chart's historical data determines whether the function uses
daily or monthly periods in its calculations. If the chart spans more than two months,
it uses "1M" periods. Otherwise, if the chart spans more than two days, it uses "1D"
periods. If the chart covers less than two days, it does not store changes.
Parameters:
percentChange (float) : (series float) The change percentage. The function compounds non-na values from each
chart bar within monthly or daily periods to calculate the periodic changes.
maxPeriods (simple int) : (simple int) The maximum number of periodic returns to store in the returned array.
Returns: (array) An array containing the overall percentage changes for each period, limited
to the maximum specified by `maxPeriods`.
getPeriodicReturns(percentChange, benchmark, maxPeriods)
(Overload 2 of 2) Tracks periodic excess return percentages and queues the values into an
array. The span of the chart's historical data determines whether the function uses
daily or monthly periods in its calculations. If the chart spans more than two months,
it uses "1M" periods. Otherwise, if the chart spans more than two days, it uses "1D"
periods. If the chart covers less than two days, it does not store changes.
Parameters:
percentChange (float) : (series float) The change percentage. The function compounds non-na values from each
chart bar within monthly or daily periods to calculate the periodic changes.
benchmark (float) : (series float) The benchmark percentage to compare against `percentChange` values.
The function compounds non-na values from each bar within monthly or
daily periods and subtracts the results from the compounded `percentChange` values to
calculate the excess returns. For consistency, ensure this series has a similar history
length to the `percentChange` with aligned non-na value times.
maxPeriods (simple int) : (simple int) The maximum number of periodic excess returns to store in the returned array.
Returns: (array) An array containing monthly or daily excess returns, limited
to the maximum specified by `maxPeriods`.
method sharpeRatio(returnsArray, annualBenchmark, forceCalc, periodsPerYear)
Calculates the Sharpe ratio for an array of periodic returns.
Callable as a method or a function.
Namespace types: array
Parameters:
returnsArray (array) : (array) An array of periodic return percentages, e.g., returns over monthly or
daily periods.
annualBenchmark (float) : (series float) The annual rate of return to compare against `returnsArray` values. When
`periodsPerYear` is `na`, the function divides this value by 12 to calculate a
monthly benchmark if the chart's data spans at least two months or 365 for a daily
benchmark if the data otherwise spans at least two days. If `periodsPerYear`
has a specified value, the function divides the rate by that value instead.
forceCalc (bool) : (series bool) If `true`, calculates the ratio on every call. Otherwise, ratio calculation
only occurs on the last available bar. Optional. The default is `false`.
periodsPerYear (simple int) : (simple int) If specified, divides the annual rate by this value instead of the value
determined by the time span of the chart's data.
Returns: (float) The Sharpe ratio, which estimates the excess return per unit of total volatility.
method sortinoRatio(returnsArray, annualBenchmark, forceCalc, periodsPerYear)
Calculates the Sortino ratio for an array of periodic returns.
Callable as a method or a function.
Namespace types: array
Parameters:
returnsArray (array) : (array) An array of periodic return percentages, e.g., returns over monthly or
daily periods.
annualBenchmark (float) : (series float) The annual rate of return to compare against `returnsArray` values. When
`periodsPerYear` is `na`, the function divides this value by 12 to calculate a
monthly benchmark if the chart's data spans at least two months or 365 for a daily
benchmark if the data otherwise spans at least two days. If `periodsPerYear`
has a specified value, the function divides the rate by that value instead.
forceCalc (bool) : (series bool) If `true`, calculates the ratio on every call. Otherwise, ratio calculation
only occurs on the last available bar. Optional. The default is `false`.
periodsPerYear (simple int) : (simple int) If specified, divides the annual rate by this value instead of the value
determined by the time span of the chart's data.
Returns: (float) The Sortino ratio, which estimates the excess return per unit of downside
volatility.
Simple Decision Matrix Classification Algorithm [SS]
Hello everyone,
It has been a while since I posted an indicator, so thought I would share this project I did for fun.
This indicator is an attempt to develop a pseudo Random Forest classification decision matrix model for Pinescript.
This is not a full, robust Random Forest model by any stretch of the imagination, but it is a good way to showcase how decision matrices can be applied to trading and within Pinescript.
As to not market this as something it is not, I am simply calling it the "Simple Decision Matrix Classification Algorithm". However, I have stolen most of the aspects of this machine learning algo from concepts of Random Forest modelling.
How it works:
With models like Support Vector Machines (SVM), Random Forest (RF) and Gradient Boosted Machine Learning (GBM), which are commonly used in Machine Learning Classification Tasks (MLCTs), this model operates similarly to the basic concepts shared amongst those modelling types. While it is not very similar to SVM, it is very similar to RF and GBM, in that it uses a "voting" system.
What do I mean by voting system?
How most classification MLAs work is by feeding an input dataset to an algorithm. The algorithm sorts this data, categorizes it, then introduces something called a confusion matrix (essentially sorting the data in no apparent order so as to prevent over-fitting and introduce "confusion" to the algorithm to ensure that it is not just following a trend).
From there, the data is called upon based on current data inputs (so say we are using RSI and Z-Score, the current RSI and Z-Score are compared against other RSIs and Z-Scores that the model has saved). The model will process this information and each "tree" or "node" will vote. Then a cumulative overall vote is cast.
How does this MLA work?
This model accepts 2 independent variables. In order to keep things simple, this model was kept as a three-node model. This means that there are 3 separate votes that go into producing the result. A vote is cast for each of the two independent variables, and then a cumulative vote is cast for the overall verdict (the result of the model's prediction).
The model actually displays this system diagrammatically and it will likely be easier to understand if we look at the diagram to ground the example:
In the diagram, at the very top we have the classification variable that we are trying to predict. In this case, we are trying to predict whether there will be a breakout/breakdown outside of the normal ATR range (this is a yes-or-no question, hence a classification task).
So the question forms the basis of the input. The model will track at which points the ATR range is exceeded to the upside or downside, as well as the other variables that we wish to use to predict these exceedances. The ATR range forms the basis of all the data flowing into the model.
Then, at the second level, you will see we are using Z-Score and RSI to predict these breaks. The circle will change colour according to "feature importance". Feature importance basically just means that the indicator has a strong impact on the outcome. The stronger the importance, the more green it will be, the weaker, the more red it will be.
We can see both RSI and Z-Score are green and thus we can say they are strong options for predicting a breakout/breakdown.
So then we move down to the actual voting mechanisms. You will see the 2 pink boxes. These are the first lines of voting. What is happening here is the model is identifying the instances that are most similar and whether the classification task we have assigned (remember our ATR exceedance classifier) was either true or false based on RSI and Z-Score.
These are our 2 nodes. They both cast an individual vote. You will see in this case, both cast a vote of 1. The options are either 1 or 0. A vote of 1 means "Yes" or "Breakout likely".
However, this is not the only voting the model does. The model does one final vote based on the 2 votes. This is shown in the purple box. We can see the final vote and result at the end with the orange circle. It is 1, which means a range exceedance is anticipated and the most likely outcome.
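A hypothetical sketch of a single node's vote follows: it scans for similar past readings of one variable and votes on whether the outcome usually followed. The names, similarity tolerance, and breakout stand-in are assumptions, not the script's internals:
//@version=5
indicator("Node vote sketch", max_bars_back = 500)
// Stand-ins for one independent variable and the classification target (assumed definitions).
rsiVal   = ta.rsi(close, 14)
breakout = math.abs(close - close[1]) > ta.atr(14)[1]    // did the bar exceed the prior ATR range?
// One node's vote: among similar past readings of `x`, did the outcome usually follow?
nodeVote(float x, bool outcome, int lookback, float tol) =>
    int yes = 0
    int total = 0
    for i = 1 to lookback
        if not na(x[i]) and not na(outcome[i]) and math.abs(x[i] - x) <= tol  // "similar" past case
            total += 1
            yes += outcome[i] ? 1 : 0
    total > 0 and yes * 2 > total ? 1 : 0                // majority of similar cases -> vote 1
plot(nodeVote(rsiVal, breakout, 500, 2.0), "RSI node vote")
The final verdict then combines each node's vote, weighted by its share of representation in the voting scheme.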
The Data Table Component
The model has many moving parts. I have tried to represent the pivotal functions diagrammatically, but some other important aspects and background information must be obtained from the companion data table.
If we bring back our diagram from above:
We can see the data table to the left.
The data table contains 2 sections, one for each independent variable. In this case, our independent variables are RSI and Z-Score.
The data table will provide you with specifics about the independent variables, as well as about the model accuracy and outcome.
If we take a look at the first row, it simply indicates which independent variable it is looking at. If we go down to the next row where it reads "Weighted Impact", we can see a corresponding percent. The "weighted impact" is the amount of representation each independent variable has within the voting scheme. So in this case, we can see it's pretty equal, 45% and 55%. This tells us that there is a slightly higher representation of Z-Score than RSI, but nothing to worry about.
If there were a major over-representation of greater than 30 or 40%, then the model would risk being skewed and voting too heavily in favour of 1 variable over the other.
If we move down from there we will see the next row reads "independent accuracy". The voting of each independent variable's accuracy is considered separately. This is one way we can determine feature importance, by seeing how well one feature augments the accuracy. In this case, we can see that RSI has the greatest importance, with an accuracy of around 87% at predicting breakouts. That makes sense as RSI is a momentum based oscillator.
Then if we move down one more, we will see what each independent feature (node) has voted for. In this case, both RSI and Z-Score voted for 1 (Breakout in our case).
You can weigh these in collaboration, but it's always important to look at the final verdict of the model. If we move down, we can see the "Model prediction", which is "Bullish".
If you are using the ATR breakout, the model cannot distinguish between "Bullish" and "Bearish", only that a "Breakout" is likely, either bearish or bullish. However, for the other classification tasks this model can do, the results are either Bullish or Bearish.
Using the Function:
Okay so now that all that technical stuff is out of the way, let's get into using the function. First of all this function innately provides you with 3 possible classification tasks. These include:
1. Predicting Red or Green Candle
2. Predicting Bullish / Bearish ATR
3. Predicting a Breakout from the ATR range
The possible independent variables include:
1. Stochastics,
2. MFI,
3. RSI,
4. Z-Score,
5. EMAs,
6. SMAs,
7. Volume
The model can only accept 2 independent variables, to operate within the computation time limits for pine execution.
Let's quickly go over what the numbers in the diagram mean:
The numbers being pointed at with the yellow arrows represent the cases the model is sorting and voting on. These are the most identical cases and are serving as the voting foundation for the model.
The numbers being pointed at with the pink candle is the voting results.
Extrapolating the functions (For Pine Developers):
So this is more of a feature application, so feel free to customize it to your liking and add additional inputs. But here are some key important considerations if you wish to apply this within your own code:
1. This is a BINARY classification task. The prediction must either be 0 or 1.
2. The function consists of 3 separate functions: the first 2 serve to build the confusion matrix, and the final "random_forest" function performs the computations. You will need all 3 functions for implementation.
3. The model can only accept 2 independent variables.
I believe that covers the function. Hopefully this wasn't too confusing; it is very statsy, but it's a fun function for me! I use Random Forest excessively in R and always like to try to convert R things to Pinescript.
Hope you enjoy!
Safe trades everyone!
Tick CVD [Kioseff Trading]
Hello!
This script "Tick CVD" employs live tick data to calculate CVD and volume delta! No tick chart required.
Features
Live price ticks are recorded
CVD calculated using live ticks
Delta calculated using live ticks
Tick-based HMA, WMA, EMA, or SMA for CVD and price
Key tick levels (S/R CVD & price) are recorded and displayed
Price/CVD displayable as candles or lines
Polylines are used - data visuals are not limited to 500 points.
Efficiency mode - remove all the bells and whistles to capitalize on efficiently calculated/displayed tick CVD and price
How it works
While historical tick-data isn't available to non-professional subscribers, live tick data is programmatically accessible. Consequently, this indicator records live tick data to calculate CVD, delta, and other metrics for the user!
Generally, Pine Scripts use the following rules to calculate volume/price-related metrics:
Bullish Volume: When the close price is greater than the open price.
Bearish Volume: When the close price is less than the open price.
This script, however, improves on that logic by utilizing live ticks. Instead of relying on time-series charts, it records up ticks as buying volume and down ticks as selling volume. This allows the script to create a more accurate CVD, delta, or price tick chart by tracking real-time buying and selling activity.
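The core of that approach fits in a few lines. Below is a simplified sketch using `varip` variables, which persist across live ticks; the names are assumptions, and the actual script handles tick aggregation and bar boundaries far more carefully:
//@version=5
indicator("Tick delta sketch")
varip float prevPrice = na
varip float prevVol   = 0.0
varip float cvd       = 0.0
if barstate.isrealtime
    float tickVol = volume >= prevVol ? volume - prevVol : volume   // intrabar volume since the last tick
    if not na(prevPrice)
        cvd += close > prevPrice ? tickVol : close < prevPrice ? -tickVol : 0.0  // uptick buys, downtick sells
    prevPrice := close
    prevVol   := barstate.isconfirmed ? 0.0 : volume                // bar volume resets on a new bar
plot(cvd, "Tick CVD")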
Price can tick fast; therefore, tick aggregation can occur. While tick aggregation isn't necessarily "incorrect", if you prefer speed and efficiency it's advised to enable "efficiency mode" in a fast market.
The image above highlights the tick CVD and price tick graph!
Green price tick graph = price is greater than its origin point (first script load)
Red price tick graph = price is less than its origin point
Blue tick CVD graph = CVD, over the calculation period, is greater than 0.
Red tick CVD graph = CVD is less than 0 over the calculation period.
The image above explains the right-oriented scales. The upper scale is for the price graph and the lower scale for the CVD graph.
The image above explains the circles superimposed on the scale lines for the price graph and the CVD graph.
The image above explains the "wavy" lines shown by the indicator. The wavy lines correspond to tick delta - whether the recorded tick was an uptick or down tick and whether buy volume or sell volume transpired.
The image above explains the blue/red boxes displayed by the indicator. The boxes offer an alternative visualization of tick delta, including the magnitude of buying/selling volume for the recorded tick.
Blue boxes = buying volume
Red boxes = selling volume
Bright blue = high buying volume (relative)
Bright red = high selling volume (relative)
Dim blue = low buying volume (relative)
Dim red = low selling volume (relative)
The numbers displayed in the box show the numbered tick and the volume delta recorded for the tick.
The image above further explains visuals for the CVD graph.
Dotted red lines indicate key CVD peaks, while dotted blue lines indicate key CVD bottoms.
The white dotted line reflects the CVD average of your choice: HMA, WMA, EMA, SMA.
The image above offers a similar explanation of visuals for the price graph.
The image above offers an alternative view for the indicator!
The image above shows the indicator when efficiency mode is enabled. When trading a fast market, enabling efficiency mode is advised - the script will perform quicker.
Of course, thank you to @RicardoSantos for his awesome library I use in almost every script :D
Thank you for checking this out!
analytics_tables
Library "analytics_tables"
📝 Description
This library provides the implementation of several performance-related statistics and metrics, presented in the form of tables.
The metrics shown in the aforementioned tables were developed during my past years of in-depth analysis of various strategies in an attempt to reason about the performance of each strategy.
The visualization and some statistics were inspired by the existing implementations of the "Seasonality" script, and the performance matrix implementations of @QuantNomad and @ZenAndTheArtOfTrading scripts.
While this library is meant to be used by my strategy framework "Template Trailing Strategy (Backtester)" script, I wrapped it in a library hoping it can be useful for other community strategy scripts that will be released in the future.
🤔 How to Guide
To use the functionality this library provides in your script you have to import it first!
Copy the import statement of the latest release by pressing the copy button below and then paste it into your script. Give a short name to this library so you can refer to it later on. The import statement should look like this:
import jason5480/analytics_tables/1 as ant
There are three types of tables provided by this library in the initial release: the stats table, the metrics table, and the seasonality table.
Each one shows different kinds of performance statistics.
The table UDT shall be initialized once using the `init()` method.
They can be updated using the `update()` method where the updated data UDT object shall be passed.
The data UDT can also be initialized and updated on demand, depending on the use case.
A code example for the StatsTable is the following:
var ant.StatsData statsData = ant.StatsData.new()
statsData.update(ant.SideStats.new(), ant.SideStats.new(), 0)
if (barstate.islastconfirmedhistory or (barstate.isrealtime and barstate.isconfirmed))
    var statsTable = ant.StatsTable.new().init(ant.getTablePos('TOP', 'RIGHT'))
    statsTable.update(statsData)
A code example for the MetricsTable is the following:
var ant.StatsData statsData = ant.StatsData.new()
statsData.update(ant.SideStats.new(), ant.SideStats.new(), 0)
if (barstate.islastconfirmedhistory or (barstate.isrealtime and barstate.isconfirmed))
    var metricsTable = ant.MetricsTable.new().init(ant.getTablePos('BOTTOM', 'RIGHT'))
    metricsTable.update(statsData, 10)
A code example for the SeasonalityTable is the following:
var ant.SeasonalData seasonalData = ant.SeasonalData.new().init(Seasonality.monthOfYear)
seasonalData.update()
if (barstate.islastconfirmedhistory or (barstate.isrealtime and barstate.isconfirmed))
    var seasonalTable = ant.SeasonalTable.new().init(seasonalData, ant.getTablePos('BOTTOM', 'LEFT'))
    seasonalTable.update(seasonalData)
🏋️♂️ Please refer to the "EXAMPLE" regions of the script for more advanced and up to date code examples!
Special thanks to @Mrcrbw for the proposal to develop this library and @DCNeu for the constructive feedback 🏆.
getTablePos(ypos, xpos)
Get table position compatible string
Parameters:
ypos (simple string) : The position on the y axis
xpos (simple string) : The position on the x axis
Returns: The position to be passed to the table
method init(this, pos, height, width, positiveTxtColor, negativeTxtColor, neutralTxtColor, positiveBgColor, negativeBgColor, neutralBgColor)
Initialize the stats table object with the given colors in the given position
Namespace types: StatsTable
Parameters:
this (StatsTable) : The stats table object
pos (simple string) : The table position string
height (simple float) : The height of the table as a percentage of the chart's height. By default, 0 auto-adjusts the height based on the text inside the cells
width (simple float) : The width of the table as a percentage of the chart's width. By default, 0 auto-adjusts the width based on the text inside the cells
positiveTxtColor (simple color) : The text color when positive
negativeTxtColor (simple color) : The text color when negative
neutralTxtColor (simple color) : The text color when neutral
positiveBgColor (simple color) : The background color with transparency when positive
negativeBgColor (simple color) : The background color with transparency when negative
neutralBgColor (simple color) : The background color with transparency when neutral
method init(this, pos, height, width, neutralBgColor)
Initialize the metrics table object with the given colors in the given position
Namespace types: MetricsTable
Parameters:
this (MetricsTable) : The metrics table object
pos (simple string) : The table position string
height (simple float) : The height of the table as a percentage of the chart's height. By default, 0 auto-adjusts the height based on the text inside the cells
width (simple float) : The width of the table as a percentage of the chart's width. By default, 0 auto-adjusts the width based on the text inside the cells
neutralBgColor (simple color) : The background color with transparency when neutral
method init(this, seas)
Initialize the seasonal data
Namespace types: SeasonalData
Parameters:
this (SeasonalData) : The seasonal data object
seas (simple Seasonality) : The seasonality of the matrix data
method init(this, data, pos, maxNumOfYears, height, width, extended, neutralTxtColor, neutralBgColor)
Initialize the seasonal table object with the given colors in the given position
Namespace types: SeasonalTable
Parameters:
this (SeasonalTable) : The seasonal table object
data (SeasonalData) : The seasonality data of the table
pos (simple string) : The table position string
maxNumOfYears (simple int) : The maximum number of years that fit into the table
height (simple float) : The height of the table as a percentage of the chart's height. By default, 0 auto-adjusts the height based on the text inside the cells
width (simple float) : The width of the table as a percentage of the chart's width. By default, 0 auto-adjusts the width based on the text inside the cells
extended (simple bool) : The seasonal table with extended columns for performance
neutralTxtColor (simple color) : The text color when neutral
neutralBgColor (simple color) : The background color with transparency when neutral
method update(this, wins, losses, numOfInconclusiveExits)
Update the strategy info data of the strategy
Namespace types: StatsData
Parameters:
this (StatsData) : The strategy statistics object
wins (SideStats)
losses (SideStats)
numOfInconclusiveExits (int) : The number of inconclusive trades
method update(this, stats, positiveTxtColor, negativeTxtColor, negativeBgColor, neutralBgColor)
Update the stats table object with the given data
Namespace types: StatsTable
Parameters:
this (StatsTable) : The stats table object
stats (StatsData) : The stats data to update the table
positiveTxtColor (simple color) : The text color when positive
negativeTxtColor (simple color) : The text color when negative
negativeBgColor (simple color) : The background color with transparency when negative
neutralBgColor (simple color) : The background color with transparency when neutral
method update(this, stats, buyAndHoldPerc, positiveTxtColor, negativeTxtColor, positiveBgColor, negativeBgColor)
Update the metrics table object with the given data
Namespace types: MetricsTable
Parameters:
this (MetricsTable) : The metrics table object
stats (StatsData) : The stats data to update the table
buyAndHoldPerc (float) : The buy and hold percentage
positiveTxtColor (simple color) : The text color when positive
negativeTxtColor (simple color) : The text color when negative
positiveBgColor (simple color) : The background color with transparency when positive
negativeBgColor (simple color) : The background color with transparency when negative
method update(this)
Update the seasonal data based on the season and eon timeframe
Namespace types: SeasonalData
Parameters:
this (SeasonalData) : The seasonal data object
method update(this, data, positiveTxtColor, negativeTxtColor, neutralTxtColor, positiveBgColor, negativeBgColor, neutralBgColor, timeBgColor)
Update the seasonal table object with the given data
Namespace types: SeasonalTable
Parameters:
this (SeasonalTable) : The seasonal table object
data (SeasonalData) : The seasonal cell data to update the table
positiveTxtColor (simple color) : The text color when positive
negativeTxtColor (simple color) : The text color when negative
neutralTxtColor (simple color) : The text color when neutral
positiveBgColor (simple color) : The background color with transparency when positive
negativeBgColor (simple color) : The background color with transparency when negative
neutralBgColor (simple color) : The background color with transparency when neutral
timeBgColor (simple color) : The background color of the time gradient
SideStats
Object that represents the strategy statistics data of one side (win or lose)
Fields:
numOf (series int)
sumFreeProfit (series float)
freeProfitStDev (series float)
sumProfit (series float)
profitStDev (series float)
sumGain (series float)
gainStDev (series float)
avgQuantityPerc (series float)
avgCapitalRiskPerc (series float)
avgTPExecutedCount (series float)
avgRiskRewardRatio (series float)
maxStreak (series int)
StatsTable
Object that represents the stats table
Fields:
table (series table) : The actual table
rows (series int) : The number of rows of the table
columns (series int) : The number of columns of the table
StatsData
Object that represents the statistics data of the strategy
Fields:
wins (SideStats)
losses (SideStats)
numOfInconclusiveExits (series int)
avgFreeProfitStr (series string)
freeProfitStDevStr (series string)
lossFreeProfitStDevStr (series string)
avgProfitStr (series string)
profitStDevStr (series string)
lossProfitStDevStr (series string)
avgQuantityStr (series string)
MetricsTable
Object that represents the metrics table
Fields:
table (series table) : The actual table
rows (series int) : The number of rows of the table
columns (series int) : The number of columns of the table
SeasonalData
Object that represents the seasonal table dynamic data
Fields:
seasonality (series Seasonality)
eonToMatrixRow (map)
numOfEons (series int)
mostRecentMatrixRow (series int)
balances (matrix)
returnPercs (matrix)
maxDDs (matrix)
eonReturnPercs (array)
eonCAGRs (array)
eonMaxDDs (array)
SeasonalTable
Object that represents the seasonal table
Fields:
table (series table) : The actual table
headRows (series int) : The number of head rows of the table
headColumns (series int) : The number of head columns of the table
eonRows (series int) : The number of eon rows of the table
seasonColumns (series int) : The number of season columns of the table
statsRows (series int)
statsColumns (series int) : The number of stats columns of the table
rows (series int) : The number of rows of the table
columns (series int) : The number of columns of the table
extended (series bool) : Whether the table has additional performance statistics
Adaptive Trend Classification: Moving Averages [InvestorUnknown]
Adaptive Trend Classification: Moving Averages
Overview
The Adaptive Trend Classification (ATC) Moving Averages indicator is a robust and adaptable investing tool designed to provide dynamic signals based on various types of moving averages and their lengths. This indicator incorporates multiple layers of adaptability to enhance its effectiveness in various market conditions.
Key Features
Adaptability of Moving Average Types and Lengths: The indicator utilizes different types of moving averages (EMA, HMA, WMA, DEMA, LSMA, KAMA) with customizable lengths to adjust to market conditions.
Dynamic Weighting Based on Performance: Weights are assigned to each moving average based on the equity they generate, with a cutout period and decay rate to reduce the influence of past performance.
Exponential Growth Adjustment: The influence of recent performance is enhanced through an adjustable exponential growth factor, ensuring that more recent data has a greater impact on the signal.
Calibration Mode: Allows users to fine-tune the indicator settings for specific signal periods and backtesting, ensuring optimized performance.
Visualization Options: Multiple customization options for plotting moving averages, color bars, and signal arrows, enhancing the clarity of the visual output.
Alerts: Configurable alert settings to notify users based on specific moving average crossovers or the average signal.
User Inputs
Adaptability Settings
λ (Lambda): Specifies the growth rate for exponential growth calculations.
Decay (%): Determines the rate of depreciation applied to the equity over time.
CutOut Period: Sets the period after which equity calculations start, allowing for a focus on specific time ranges.
Robustness Lengths: Defines the range of robustness for equity calculation with options for Narrow, Medium, or Wide adjustments.
Long/Short Threshold: Sets thresholds for long and short signals.
Calculation Source: The data source used for calculations (e.g., close price).
Moving Averages Settings
Lengths and Weights: Allows customization of lengths and initial weights for each moving average type (EMA, HMA, WMA, DEMA, LSMA, KAMA).
Calibration Mode
Calibration Mode: Enables calibration for fine-tuning inputs.
Calibrate: Specifies which moving average type to calibrate.
Strategy View: Shifts entries and exits by one bar for non-repainting backtesting.
Calculation Logic
Rate of Change (R): Calculates the rate of change in the price.
Set of Moving Averages: Generates multiple moving averages with different lengths for each type.
diflen(length) =>
    int L1 = na, int L_1 = na
    int L2 = na, int L_2 = na
    int L3 = na, int L_3 = na
    int L4 = na, int L_4 = na
    if robustness == "Narrow"
        L1 := length + 1, L_1 := length - 1
        L2 := length + 2, L_2 := length - 2
        L3 := length + 3, L_3 := length - 3
        L4 := length + 4, L_4 := length - 4
    else if robustness == "Medium"
        L1 := length + 1, L_1 := length - 1
        L2 := length + 2, L_2 := length - 2
        L3 := length + 4, L_3 := length - 4
        L4 := length + 6, L_4 := length - 6
    else
        L1 := length + 1, L_1 := length - 1
        L2 := length + 3, L_2 := length - 3
        L3 := length + 5, L_3 := length - 5
        L4 := length + 7, L_4 := length - 7
    // return the offset lengths as a tuple (restored; the publication page strips square brackets)
    [L1, L2, L3, L4, L_1, L_2, L_3, L_4]
// Function to calculate different types of moving averages
ma_calculation(source, length, ma_type) =>
    if ma_type == "EMA"
        ta.ema(source, length)
    else if ma_type == "HMA"
        ta.hma(source, length) // corrected: the HMA branch should use ta.hma, not ta.sma
    else if ma_type == "WMA"
        ta.wma(source, length)
    else if ma_type == "DEMA"
        ta.dema(source, length)
    else if ma_type == "LSMA"
        lsma(source, length) // custom function defined elsewhere in the script
    else if ma_type == "KAMA"
        kama(source, length) // custom function defined elsewhere in the script
    else
        na
// Function to create a set of moving averages with different lengths
SetOfMovingAverages(length, source, ma_type) =>
    [L1, L2, L3, L4, L_1, L_2, L_3, L_4] = diflen(length)
    MA = ma_calculation(source, length, ma_type)
    MA1 = ma_calculation(source, L1, ma_type)
    MA2 = ma_calculation(source, L2, ma_type)
    MA3 = ma_calculation(source, L3, ma_type)
    MA4 = ma_calculation(source, L4, ma_type)
    MA_1 = ma_calculation(source, L_1, ma_type)
    MA_2 = ma_calculation(source, L_2, ma_type)
    MA_3 = ma_calculation(source, L_3, ma_type)
    MA_4 = ma_calculation(source, L_4, ma_type)
Exponential Growth Factor: Computes an exponential growth factor based on the current bar index and growth rate.
// The function `e(L)` calculates an exponential growth factor based on the current bar index and a given growth rate `L`.
// Note: the `[cutout]` history reference below is restored; the publication page strips square brackets.
e(L) =>
    // Calculate the number of bars elapsed.
    // If the `bar_index` is 0 (i.e., the very first bar), set `bars` to 1 to avoid division by zero.
    bars = bar_index == 0 ? 1 : bar_index
    // Define the cutoff time using the `cutout` parameter, which specifies how many bars are cut off the start of the time series.
    cuttime = time[cutout]
    // Initialize the exponential growth factor `x` to 1.0.
    x = 1.0
    // Check if `cuttime` is not `na` and the current time is greater than or equal to `cuttime`.
    if not na(cuttime) and time >= cuttime
        // Use the mathematical constant `e` raised to the power of `L * (bar_index - cutout)`.
        // This represents exponential growth over the number of bars since the `cutout`.
        x := math.pow(math.e, L * (bar_index - cutout))
    x
Equity Calculation: Calculates the equity based on starting equity, signals, and the rate of change, incorporating a natural decay rate.
// This function calculates the equity based on the starting equity, signals, and rate of change (R).
// Note: the `[1]` and `[cutout]` history references below are restored; the publication page strips square brackets.
eq(starting_equity, sig, R) =>
    cuttime = time[cutout]
    if not na(cuttime) and time >= cuttime
        // Calculate the rate of return `r` by multiplying the rate of change `R` with the exponential growth factor `e(La)`.
        r = R * e(La)
        // Calculate the depreciation factor `d` as 1 minus the depreciation rate `De`.
        d = 1 - De
        var float a = 0.0
        // If the previous signal `sig[1]` is positive, set `a` to `r`.
        if (sig[1] > 0)
            a := r
        // If the previous signal `sig[1]` is negative, set `a` to `-r`.
        else if (sig[1] < 0)
            a := -r
        // Declare the variable `e` to store equity and initialize it to `na`.
        var float e = na
        // If `e[1]` (the previous equity value) is not available (first calculation):
        if na(e[1])
            e := starting_equity
        else
            // Update `e` based on the previous equity value, depreciation factor `d`, and adjustment factor `a`.
            e := (e[1] * d) * (1 + a)
        // Ensure `e` does not drop below 0.25.
        if (e < 0.25)
            e := 0.25
        e
    else
        na
Signal Generation: Generates signals based on crossovers and computes a weighted signal from multiple moving averages.
Main Calculations
The indicator calculates different moving averages (EMA, HMA, WMA, DEMA, LSMA, KAMA) and their respective signals, applies exponential growth and decay factors to compute equities, and then derives a final signal by averaging weighted signals from all moving averages.
Visualization and Alerts
The final signal, along with additional visual aids like color bars and arrows, is plotted on the chart. Users can also set up alerts based on specific conditions to receive notifications for potential trading opportunities.
Repainting
The indicator supports intra-bar changes of the signal but will not repaint once the bar has closed. If you want alerts only for signals after bar close, turn on "Strategy View" while setting up the alert.
Conclusion
The Adaptive Trend Classification: Moving Averages Indicator is a sophisticated tool for investors, offering extensive customization and adaptability to changing market conditions. By integrating multiple moving averages and leveraging dynamic weighting based on performance, it aims to provide reliable and timely investing signals.
Statistics • Chi Square • P-value • Significance
The Statistics • Chi Square • P-value • Significance publication aims to provide a tool for combining different conditions and checking whether the outcome is significant using the Chi-Square Test and P-value.
🔶 USAGE
The basic principle is to compare two or more groups and check the results of a survey question, such as asking men and women whether they want to see a romantic or non-romantic movie.
–––––––––––––––––––––––––––––––––––––––––––––
| | ROMANTIC | NON-ROMANTIC | ⬅︎ MOVIE |
–––––––––––––––––––––––––––––––––––––––––––––
| MEN | 2 | 8 | 10 |
–––––––––––––––––––––––––––––––––––––––––––––
| WOMEN | 7 | 3 | 10 |
–––––––––––––––––––––––––––––––––––––––––––––
|⬆︎ SEX | 9 | 11 | 20 |
–––––––––––––––––––––––––––––––––––––––––––––
We calculate the Chi-Square Formula, which is:
Χ² = Σ ( (Observed Value − Expected Value)² / Expected Value )
In this publication, this is:
chiSquare = 0.
for i = 0 to rows - 1
    for j = 0 to columns - 1
        observedValue = aBin.get(i).aFloat.get(j)
        expectedValue = math.max(1e-12, aBin.get(i).aFloat.get(columns) * aBin.get(rows).aFloat.get(j) / sumT) // division-by-0 protection
        chiSquare += math.pow(observedValue - expectedValue, 2) / expectedValue
Together with the 'Degree of Freedom', which is (rows − 1) × (columns − 1), the P-value can be calculated.
In this case, the P-value is 0.02462.
A P-value lower than 0.05 is considered to be significant. Statistically, women tend to choose a romantic movie more, while men prefer a non-romantic one.
Users can choose how the P-value is calculated: from a standard table or through a math.ucla.edu JavaScript-based function (see references below).
Note that the population (10 men + 10 women = 20) is small, something to consider.
Either way, this principle is applied in the script, where conditions can be chosen like rsi, close, high, ...
🔹 CONDITION
Conditions are added to the left column ('CONDITION')
For example, previous rsi values (rsi[1]) between 0 and 100, divided into separate groups
🔹 CLOSE
Then, the movement of the last close is evaluated:
UP when close is higher than the previous close (close[1])
DOWN when close is lower than the previous close
EQUAL when close is equal to the previous close
It is also possible to use only 2 columns by adding EQUAL to UP or DOWN
UP
DOWN/EQUAL
or
UP/EQUAL
DOWN
In other words, when the previous rsi value was between 80 and 90, this resulted in:
19 times a current close higher than the previous close
14 times a current close lower than the previous close
0 times a current close equal to the previous close
However, the P-value tells us it is not statistically significant.
NOTE: Always keep in mind that past behaviour gives no certainty about future behaviour.
A vertical line is drawn at the beginning of the chosen population (max 4990)
Here, the results seem significant.
🔹 GROUPS
It is important to ensure that the groups are formed correctly. All possibilities should be present, and conditions should only be part of 1 group.
In the example above, the two top situations are acceptable; close against close[1] can only be higher, lower, or equal.
The two examples at the bottom, however, are very poorly constructed.
Several conditions can be placed in more than 1 group, and some conditions are not integrated into a group. Even if the results are significant, they are useless because of the group formation.
A population count is added as an aid to spot errors in group formation.
In this example, there is a discrepancy between the population and total count due to the absence of a condition.
The results when rsi was between 5-25 are not included, resulting in unreliable results.
🔹 PRACTICAL EXAMPLES
In this example, we have specific groups where the condition only applies to that group.
For example, the condition rsi > 55 and rsi <= 65 isn't true in another group.
Also, every possible rsi value (0 - 100) is present in 1 of the groups.
rsi > 15 and rsi <= 25: 28 times UP, 19 times DOWN, and 2 times EQUAL. P-value: 0.01171
When looking in detail and examining the area 15-25 RSI, we see this:
The population is now not representative (only checking for RSI between 15-25; all other RSI values are not included), so we can ignore the P-value in this case. It is merely to check in detail. In this case, the RSI values 23 and 24 seem promising.
NOTE: We should check what the close price did without any condition.
If, for example, the close price had risen 100 times out of 100, this would make things very relative.
In this case (at least two conditions need to be present), we set 1 condition at 'always true' and another at 'always false' so we'll get only the close values without any condition:
Changing the population or the conditions will change the P-value.
In the following example, the outcome is evaluated when:
close value from 1 bar back is higher than the close value from 2 bars back
close value from 1 bar back is lower than or equal to the close value from 2 bars back
Or:
close value from 1 bar back is higher than the close value from 2 bars back
close value from 1 bar back is equal to the close value from 2 bars back
close value from 1 bar back is lower than the close value from 2 bars back
In both examples, all possibilities of close[1] against close[2] are included in the calculations. close[1] can only be higher than, equal to, or lower than close[2].
Both examples include the results without any condition (5 = 5 and 5 < 5), so one can compare against the direction of the current close.
🔶 NOTES
• Always keep in mind that:
Past behaviour gives no certainty about future behaviour.
Everything depends on time, cycles, events, fundamentals, technicals, ...
• This test only works for categorical data (data in categories), such as Gender {Men, Women} or color {Red, Yellow, Green, Blue} etc., but not numerical data such as height or weight. One might argue that such tests shouldn't use rsi, close, ... values.
• Consider what you're measuring
For example, rsi of the current bar will always lead to a close higher than the previous close, since this is inherent to the rsi calculations.
• Be careful; often there are na-values at the beginning of the series, which are not included in the calculations!
• Always consider what the close price did without any condition
• The numbers must be large enough. Each entry must be five or more. In other words, it is vital to make the 'population' large enough.
• The code can be developed further, for example, by splitting UP, DOWN in close UP 1-2%, close UP 2-3%, close UP 3-4%, ...
• rsi can be supplemented with stochRSI, MFI, sma, ema, ...
🔶 SETTINGS
🔹 Population
• Choose the population size; in other words, how many bars you want to go back to. If fewer bars are available than set, this will be automatically adjusted.
🔹 Inputs
At least two conditions need to be chosen.
• Users can add up to 11 conditions, where each condition can contain two different conditions.
🔹 RSI
• Length
🔹 Levels
• Set the used levels as desired.
🔹 P-value
• P-value: P-value retrieved using a standard table method or a function.
• Used function, derived from Chi-Square Distribution Function; JavaScript
LogGamma(Z) =>
S = 1
+ 76.18009173 / Z
- 86.50532033 / (Z+1)
+ 24.01409822 / (Z+2)
- 1.231739516 / (Z+3)
+ 0.00120858003 / (Z+4)
- 0.00000536382 / (Z+5)
(Z-.5) * math.log(Z+4.5) - (Z+4.5) + math.log(S * 2.50662827465)
Gcf(float X, A) => // Good for X > A +1
A0=0., B0=1., A1=1., B1=X, AOLD=0., N=0
while (math.abs((A1-AOLD)/A1) > .00001)
AOLD := A1
N += 1
A0 := A1+(N-A)*A0
B0 := B1+(N-A)*B0
A1 := X*A0+N*A1
B1 := X*B0+N*B1
A0 := A0/B1
B0 := B0/B1
A1 := A1/B1
B1 := 1
Prob = math.exp(A * math.log(X) - X - LogGamma(A)) * A1
1 - Prob
Gser(X, A) => // Good for X < A +1
T9 = 1. / A
G = T9
I = 1
while (T9 > G* 0.00001)
T9 := T9 * X / (A + I)
G := G + T9
I += 1
G *= math.exp(A * math.log(X) - X - LogGamma(A))
Gammacdf(x, a) =>
    GI = 0.
    if (x <= 0)
        GI := 0
    else if (x < a + 1)
        GI := Gser(x, a) // series expansion, good for x < a + 1 (reconstructed; the line was truncated)
    else
        GI := Gcf(x, a) // continued fraction, good for x > a + 1
    GI
Chisqcdf = Gammacdf(Z/2, DF/2)
Chisqcdf := math.round(Chisqcdf * 100000) / 100000
pValue = 1 - Chisqcdf
🔶 REFERENCES
mathsisfun.com, Chi-Square Test
Chi-Square Distribution Function
FiniteStateMachine
🟩 OVERVIEW
A flexible framework for creating, testing and implementing a Finite State Machine (FSM) in your script. FSMs use rules to control how states change in response to events.
This is the first Finite State Machine library on TradingView and it's quite a different way to think about your script's logic. Advantages of using this vs hardcoding all your logic include:
• Explicit logic : You can see all rules easily side-by-side.
• Validation : Tables show your rules and validation results right on the chart.
• Dual approach : Simple matrix for straightforward transitions; map implementation for concurrent scenarios. You can combine them for complex needs.
• Type safety : Shows how to use enums for robustness while maintaining string compatibility.
• Real-world examples : Includes both conceptual (traffic lights) and practical (trading strategy) demonstrations.
• Priority control : Explicit control over which rules take precedence when multiple conditions are met.
• Wildcard system : Flexible pattern matching for states and events.
The library seems complex, but it's not really. Your conditions, events, and their potential interactions are complex. The FSM makes them all explicit, which is some work. However, like all "good" pain in life, this is front-loaded, and *saves* pain later, in the form of unintended interactions and bugs that are very hard to find and fix.
🟩 SIMPLE FSM (MATRIX-BASED)
The simple FSM uses a matrix to define transition rules with the structure: state > event > state. We look up the current state, check if the event in that row matches, and if it does, output the resulting state.
Each row in the matrix defines one rule, and the first matching row, counting from the top down, is applied.
A limitation of this method is that you can supply only ONE event.
You can design layered rules using wildcards. Use an empty string "" or the special string "ANY" as a wildcard for any state or event.
The matrix FSM is for use where you have clear, sequential state transitions triggered by single events. Think traffic lights, or any logic where only one thing can happen at a time.
The demo for this FSM is of traffic lights.
🟩 CONCURRENT FSM (MAP-BASED)
The map FSM uses a more complex structure where each state is a key in the map, and its value is an array of event rules. Each rule maps a named condition to an output (event or next state).
This FSM can handle multiple conditions simultaneously. Rules added first have higher priority.
Adding more rules to existing states combines the entries in the map (if you use the supplied helper function) rather than overwriting them.
This FSM is for more complex scenarios where multiple conditions can be true simultaneously, and you need to control which takes precedence. Like trading strategies, or any system with concurrent conditions.
The demo for this FSM is a trading strategy.
🟩 HOW TO USE
Pine Script libraries contain reusable code for importing into indicators. You do not need to copy any code out of here. Just import the library and call the function you want.
For example, for version 1 of this library, import it like this:
import SimpleCryptoLife/FiniteStateMachine/1
See the EXAMPLE USAGE sections within the library for examples of calling the functions.
For more information on libraries and incorporating them into your scripts, see the Libraries section of the Pine Script User Manual.
🟩 TECHNICAL IMPLEMENTATION
Both FSM implementations support wildcards using blank strings "" or the special string "ANY". Wildcards match in this priority order:
• Exact state + exact event match
• Exact state + empty event (event wildcard)
• Empty state + exact event (state wildcard)
• Empty state + empty event (full wildcard)
When multiple rules match the same state + event combination, the FIRST rule encountered takes priority. In the matrix FSM, this means row order determines priority. In the map FSM, it's the order you add rules to each state.
The library uses user-defined types for the map FSM:
• o_eventRule : Maps a condition name to an output
• o_eventRuleWrapper : Wraps an array of rules (since maps can't contain arrays directly)
Everything uses strings for maximum library compatibility, though the examples show how to use enums for type safety by converting them to strings.
Unlike normal maps where adding a duplicate key overwrites the value, this library's `m_addRuleToEventMap()` method *combines* rules, making it intuitive to build rule sets without breaking them.
🟩 VALIDATION & ERROR HANDLING
The library includes comprehensive validation functions that catch common FSM design errors:
Error detection:
• Empty next states
• Invalid states not in the states array
• Duplicate rules
• Conflicting transitions
• Unreachable states (no entry/exit rules)
Warning detection:
• Redundant wildcards
• Empty states/events (potential unintended wildcards)
• Duplicate conditions within states
You can display validation results in tables on the chart, with tooltips providing detailed explanations. The helper functions to display the tables are exported so you can call them from your own script.
🟩 PRACTICAL EXAMPLES
The library includes four comprehensive demos:
Traffic Light Demo (Simple FSM) : Uses the matrix FSM to cycle through traffic light states (red → red+amber → green → amber → red) with timer events. Includes pseudo-random "break" events and repair logic to demonstrate wildcards and priority handling.
Trading Strategy Demo (Concurrent FSM) : Implements a realistic long-only trading strategy using BOTH FSM types:
• Map FSM converts multiple technical conditions (EMA crosses, gaps, fractals, RSI) into prioritised events
• Matrix FSM handles state transitions (idle → setup → entry → position → exit → re-entry)
• Includes position management, stop losses, and re-entry logic
Error Demonstrations : Both FSM types include error demos with intentionally malformed rules to showcase the validation system's capabilities.
🟩 BRING ON THE FUNCTIONS
f_printFSMMatrix(_mat_rules, _a_states, _tablePosition)
Prints a table of states and rules to the specified position on the chart. Works only with the matrix-based FSM.
Parameters:
_mat_rules (matrix)
_a_states (array)
_tablePosition (simple string)
Returns: The table of states and rules.
method m_loadMatrixRulesFromText(_mat_rules, _rulesText)
Loads rules into a rules matrix from a multiline string where each line is of the form "current state | event | next state" (ignores empty lines and trims whitespace).
This is the most human-readable way to define rules because it's a visually aligned, table-like format.
Namespace types: matrix
Parameters:
_mat_rules (matrix)
_rulesText (string)
Returns: No explicit return. The matrix is modified as a side-effect.
method m_addRuleToMatrix(_mat_rules, _currentState, _event, _nextState)
Adds a single rule to the rules matrix. This can also be quite readable if you use short variable names and careful spacing.
Namespace types: matrix
Parameters:
_mat_rules (matrix)
_currentState (string)
_event (string)
_nextState (string)
Returns: No explicit return. The matrix is modified as a side-effect.
method m_validateRulesMatrix(_mat_rules, _a_states, _showTable, _tablePosition)
Validates a rules matrix and a states array to check that they are well formed. Works only with the matrix-based FSM.
Checks: matrix has exactly 3 columns; no empty next states; all states defined in array; no duplicate states; no duplicate rules; all states have entry/exit rules; no conflicting transitions; no redundant wildcards. To avoid slowing down the script unnecessarily, call this method once (perhaps using `barstate.isfirst`), when the rules and states are ready.
Namespace types: matrix
Parameters:
_mat_rules (matrix)
_a_states (array)
_showTable (bool)
_tablePosition (simple string)
Returns: `true` if the rules and states are valid; `false` if errors or warnings exist.
method m_getStateFromMatrix(_mat_rules, _currentState, _event, _strictInput, _strictTransitions)
Returns the next state based on the current state and event, or `na` if no matching transition is found. Empty (not na) entries are treated as wildcards if `strictInput` is false.
Priority: exact match > event wildcard > state wildcard > full wildcard.
Namespace types: matrix
Parameters:
_mat_rules (matrix)
_currentState (string)
_event (string)
_strictInput (bool)
_strictTransitions (bool)
Returns: The next state or `na`.
method m_addRuleToEventMap(_map_eventRules, _state, _condName, _output)
Adds a single event rule to the event rules map. If the state key already exists, appends the new rule to the existing array (if different). If the state key doesn't exist, creates a new entry.
Namespace types: map
Parameters:
_map_eventRules (map)
_state (string)
_condName (string)
_output (string)
Returns: No explicit return. The map is modified as a side-effect.
method m_addEventRulesToMapFromText(_map_eventRules, _configText)
Loads event rules from a multiline text string into a map structure.
Format: "state | condName > output | condName > output | ..." . Pairs are ordered by priority. You can have multiple rules on the same line for one state.
Supports wildcards: Use an empty string ("") or the special string "ANY" for state or condName to create wildcard rules.
Examples: " | condName > output" (state wildcard), "state | > output" (condition wildcard), " | > output" (full wildcard).
Splits lines by newline, extracts the state as the key, and creates/appends to the array with a new o_eventRule(condName, output).
Call once, e.g., on barstate.isfirst for best performance.
Namespace types: map
Parameters:
_map_eventRules (map)
_configText (string)
Returns: No explicit return. The map is modified as a side-effect.
f_printFSMMap(_map_eventRules, _a_states, _tablePosition)
Prints a table of map-based event rules to the specified position on the chart.
Parameters:
_map_eventRules (map)
_a_states (array)
_tablePosition (simple string)
Returns: The table of map-based event rules.
method m_validateEventRulesMap(_map_eventRules, _a_states, _a_validEvents, _showTable, _tablePosition)
Validates an event rules map to check that it's well formed.
Checks: map is not empty; wrappers contain non-empty arrays; no duplicate condition names per state; no empty fields in o_eventRule objects; optionally validates outputs against matrix events.
NOTE: Both "" and "ANY" are treated identically as wildcards for both states and conditions.
To avoid slowing down the script unnecessarily, call this method once (perhaps using `barstate.isfirst`), when the map is ready.
Namespace types: map
Parameters:
_map_eventRules (map)
_a_states (array)
_a_validEvents (array)
_showTable (bool)
_tablePosition (simple string)
Returns: `true` if the event rules map is valid; `false` if errors or warnings exist.
method m_getEventFromConditionsMap(_currentState, _a_activeConditions, _map_eventRules)
Returns a single event or state string based on the current state and active conditions.
Uses a map of event rules where rules are pre-sorted by implicit priority via load order.
Supports wildcards using empty string ("") or "ANY" for flexible rule matching.
Priority: exact match > condition wildcard > state wildcard > full wildcard.
Namespace types: series string, simple string, input string, const string
Parameters:
_currentState (string)
_a_activeConditions (array)
_map_eventRules (map)
Returns: The output string (event or state) for the first matching condition, or na if no match found.
o_eventRule
o_eventRule defines a condition-to-output mapping for the concurrent FSM.
Fields:
condName (series string) : The name of the condition to check.
output (series string) : The output (event or state) when the condition is true.
o_eventRuleWrapper
o_eventRuleWrapper wraps an array of o_eventRule for use as map values (maps cannot contain collections directly).
Fields:
a_rules (array) : Array of o_eventRule objects for a specific state.
Trading Activity Index (Zeiierman)
█ Overview
Trading Activity Index (Zeiierman) is a volume-based market activity meter that transforms dollar-volume into a smooth, normalized “activity index.”
It highlights when market participation is unusually low or high with a dynamic color gradient:
Light Blue → Low Activity (thin participation, low liquidity conditions)
Red/Orange → High Activity (active markets, large trades flowing in)
Additional percentile bands (20/40/60/80%) give context, helping you see whether the current activity level is in the bottom quintile, mid-range, or near historical extremes.
█ How It Works
⚪ Dollar Volume Transformation
Each bar, dollar volume is computed:
float dlrVol = close * volume
float dlrVolAvg = ta.sma(dlrVol, len_form)
Dollar volume = price × volume, smoothed by a configurable SMA window.
The result is log-transformed, compressing large outliers for a more stable signal.
⚪ Rolling Percentiles & Ranking
The log-dollar-volume series is compared to its rolling history (len_hist bars):
float p20 = ta.percentile_linear_interpolation(vscale, len_hist, 20)
float p40 = ta.percentile_linear_interpolation(vscale, len_hist, 40)
float p60 = ta.percentile_linear_interpolation(vscale, len_hist, 60)
float p80 = ta.percentile_linear_interpolation(vscale, len_hist, 80)
A normalized rank (0–1) is produced to color the main Trading Activity line.
█ How to Use
⚪ Detect High-Impact Sessions
Quickly see if today’s session is active or quiet relative to its own history — great for filtering setups that need activity.
⚪ Spot Breakouts & Traps
Combine with price action:
High activity near breakouts = strong follow-through likely.
Low activity breakouts = vulnerable to fake-outs.
⚪ Market Regime Context
Percentile bands help you assess whether participation is building up, in the middle of the range, or drying out — valuable for timing mean-reversion trades.
Above 80th percentile (red/orange) → Market is highly active, breakout trades and trend strategies are favored.
Below 20th percentile (light blue) → Market is quiet; fade moves or wait for expansion.
Watch transitions from blue → orange as a signal of growing institutional participation.
█ Settings
Formation Window (bars) – Number of bars used to average dollar volume before log transform.
History Window (bars) – Lookback period for percentile calculations and rank normalization.
-----------------
Disclaimer
The content provided in my scripts, indicators, ideas, algorithms, and systems is for educational and informational purposes only. It does not constitute financial advice, investment recommendations, or a solicitation to buy or sell any financial instruments. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
Expected Value Monte Carlo
I created this indicator after noticing that there was no Expected Value indicator here on TradingView.
The EVMC provides a statistical Expected Value for what might happen in the future with the asset you are analyzing.
It uses 2 quantitative methods:
Historical Backtest to ground your analysis in long-term, factual data.
Monte Carlo Simulation to project a cone of probable future outcomes based on recent market behavior.
This gives you a data-driven edge to quantify risk, and make more informed trading decisions.
The indicator includes:
Dual analysis: Combines historical probability with forward-looking simulation.
Quantified projections: Provides the Expected Value ($ and %), Win Rate, and Sharpe Ratio for both methods.
Asset-aware: Automatically adjusts its calculations for Stocks (252 trading days) and Crypto (365 days) for mathematical accuracy.
The projection cone shows the mean expected path and the +/- 1 standard deviation range of outcomes.
No repainting
Calculation:
1. Historical Expected Value:
This is a systematic backtest over thousands of bars. It calculates the return Rᵢ for N past trades (buy-and-hold). The Historical EV is the simple average of these returns, giving a baseline performance measure.
Historical EV % = (Σ Rᵢ) / N
2. Monte Carlo Projection:
This projection uses the Geometric Brownian Motion (GBM) model to simulate thousands of future price paths based on the market's recent behavior.
It first measures the drift (μ), or recent trend, and volatility (σ), or recent risk, from the Projection Lookback period. It then projects a final return for each simulation using the core GBM formula:
Projected Return = exp( (μ - σ²/2)T + σ√T * Z ) - 1
(Where T is the time horizon and Z is a random variable for the simulation.)
The purple line on the chart is the average of all simulated outcomes (the Monte Carlo EV). The cone represents one standard deviation of those outcomes.
The dashed lines represent one standard deviation (+/- 1σ) from the average, forming a cone of probable outcomes. Roughly 68% of the simulated paths ended within this cone.
This projection answers the question: "If the recent trend and volatility continue, where is the price most likely to go?"
Here's how to read the indicator
Expected Value ($/%): Is my average trade profitable?
Win Rate: How often can I expect to be right?
Sharpe Ratio: Am I being adequately compensated for the risk I'm taking?
User Guide
Max trade duration (bars): This is your analysis timeframe. Are you interested in the probable outcome over the next month (21 bars), quarter (63 bars), or year (252 bars)?
Position size ($): Set this to your typical trade size to see the Expected Value in real dollar terms.
Projection lookback (bars): This is the most important input for the Monte Carlo model. A short lookback (e.g., 50) makes the projection highly sensitive to recent momentum. Use this to identify potential recency bias. A long lookback (e.g., 252) provides a more stable, long-term projection of trend and volatility.
Historical Lookback (bars): For the historical backtest, more data is always better. Use the maximum that your TradingView plan allows for the most statistically significant results.
Use TP/SL for Historical EV: Check this box to see how the historical performance would have changed if you had used a simple Take Profit and Stop Loss, rather than just holding for the full duration.
I hope you find this indicator useful and please let me know if you have any suggestions. 😊
Bar Index & Time
Library to convert a bar index to a timestamp and vice versa.
Utilizes runtime memory to store the 𝚝𝚒𝚖𝚎 and 𝚝𝚒𝚖𝚎_𝚌𝚕𝚘𝚜𝚎 values of every bar on the chart (and optional future bars), with the ability of storing additional custom values for every chart bar.
█ PREFACE
This library aims to tackle some problems that pine coders (from beginners to advanced) often come across, such as:
I'm trying to draw an object with a 𝚋𝚊𝚛_𝚒𝚗𝚍𝚎𝚡 that is more than 10,000 bars into the past, but this causes my script to fail. How can I convert the 𝚋𝚊𝚛_𝚒𝚗𝚍𝚎𝚡 to a UNIX time so that I can draw visuals using xloc.bar_time ?
I have a diagonal line drawing and I want to get the "y" value at a specific time, but line.get_price() only accepts a bar index value. How can I convert the timestamp into a bar index value so that I can still use this function?
I want to get a previous 𝚘𝚙𝚎𝚗 value that occurred at a specific timestamp. How can I convert the timestamp into a historical offset so that I can use 𝚘𝚙𝚎𝚗 ?
I want to reference a very old value for a variable. How can I access a previous value that is older than the maximum historical buffer size of 𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎 ?
This library can solve the above problems (and many more) with the addition of a few lines of code, rather than requiring the coder to refactor their script to accommodate the limitations.
█ OVERVIEW
The core functionality provided is conversion between xloc.bar_index and xloc.bar_time values.
The main component of the library is the 𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊 object, created via the 𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊() function which basically stores the 𝚝𝚒𝚖𝚎 and 𝚝𝚒𝚖𝚎_𝚌𝚕𝚘𝚜𝚎 of every bar on the chart, and there are 3 more overloads to this function that allow collecting and storing additional data. Once a 𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊 object is created, use any of the exported methods:
Methods to convert a UNIX timestamp into a bar index or bar offset:
𝚝𝚒𝚖𝚎𝚜𝚝𝚊𝚖𝚙𝚃𝚘𝙱𝚊𝚛𝙸𝚗𝚍𝚎𝚡(), 𝚐𝚎𝚝𝙽𝚞𝚖𝚋𝚎𝚛𝙾𝚏𝙱𝚊𝚛𝚜𝙱𝚊𝚌𝚔()
Methods to retrieve the stored data for a bar index:
𝚝𝚒𝚖𝚎𝙰𝚝𝙱𝚊𝚛𝙸𝚗𝚍𝚎𝚡(), 𝚝𝚒𝚖𝚎𝙲𝚕𝚘𝚜𝚎𝙰𝚝𝙱𝚊𝚛𝙸𝚗𝚍𝚎𝚡(), 𝚟𝚊𝚕𝚞𝚎𝙰𝚝𝙱𝚊𝚛𝙸𝚗𝚍𝚎𝚡(), 𝚐𝚎𝚝𝙰𝚕𝚕𝚅𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜𝙰𝚝𝙱𝚊𝚛𝙸𝚗𝚍𝚎𝚡()
Methods to retrieve the stored data at a number of bars back (i.e., historical offset):
𝚝𝚒𝚖𝚎(), 𝚝𝚒𝚖𝚎𝙲𝚕𝚘𝚜𝚎(), 𝚟𝚊𝚕𝚞𝚎()
Methods to retrieve all the data points from the earliest bar (or latest bar) stored in memory, which can be useful for debugging purposes:
𝚐𝚎𝚝𝙴𝚊𝚛𝚕𝚒𝚎𝚜𝚝𝚂𝚝𝚘𝚛𝚎𝚍𝙳𝚊𝚝𝚊(), 𝚐𝚎𝚝𝙻𝚊𝚝𝚎𝚜𝚝𝚂𝚝𝚘𝚛𝚎𝚍𝙳𝚊𝚝𝚊()
Note: the library's strong suit is referencing data from very old bars in the past, which is especially useful for scripts that perform its necessary calculations only on the last bar.
█ USAGE
Step 1
Import the library. Replace the version number with the latest available version for this library.
//@version=6
indicator("Usage")
import n00btraders/ChartData/
Step 2
Create a 𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊 object to collect data on every bar. Do not declare as `var` or `varip`.
chartData = ChartData.collectChartData() // call on every bar to accumulate the necessary data
Step 3
Call any method(s) on the 𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊 object. Do not modify its fields directly.
if barstate.islast
int firstBarTime = chartData.timeAtBarIndex(0)
int lastBarTime = chartData.time(0)
log.info("First `time`: " + str.format_time(firstBarTime) + ", Last `time`: " + str.format_time(lastBarTime))
█ EXAMPLES
• Collect Future Times
The overloaded 𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊() functions that accept a 𝚋𝚊𝚛𝚜𝙵𝚘𝚛𝚠𝚊𝚛𝚍 argument can additionally store time values for up to 500 bars into the future.
//@version=6
indicator("Example `collectChartData(barsForward)`")
import n00btraders/ChartData/1
chartData = ChartData.collectChartData(barsForward = 500)
var rectangle = box.new(na, na, na, na, xloc = xloc.bar_time, force_overlay = true)
if barstate.islast
int futureTime = chartData.timeAtBarIndex(bar_index + 100)
int lastBarTime = time
box.set_lefttop(rectangle, lastBarTime, open)
box.set_rightbottom(rectangle, futureTime, close)
box.set_text(rectangle, "Extending box 100 bars to the right. Time: " + str.format_time(futureTime))
• Collect Custom Data
The overloaded 𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊() functions that accept a 𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜 argument can additionally store custom user-specified values for every bar on the chart.
//@version=6
indicator("Example `collectChartData(variables)`")
import n00btraders/ChartData/1
var map<string, float> variables = map.new<string, float>()
variables.put("open", open)
variables.put("close", close)
variables.put("open-close midpoint", (open + close) / 2)
variables.put("boolean", open > close ? 1 : 0)
chartData = ChartData.collectChartData(variables = variables)
var fgColor = chart.fg_color
var table1 = table.new(position.top_right, 2, 9, color(na), fgColor, 1, fgColor, 1, true)
var table2 = table.new(position.bottom_right, 2, 9, color(na), fgColor, 1, fgColor, 1, true)
if barstate.isfirst
table.cell(table1, 0, 0, "ChartData.value()", text_color = fgColor)
table.cell(table2, 0, 0, "open[offset]", text_color = fgColor) // header label reconstructed; square brackets are stripped by the publication page
table.merge_cells(table1, 0, 0, 1, 0)
table.merge_cells(table2, 0, 0, 1, 0)
for i = 1 to 8
table.cell(table1, 0, i, text_color = fgColor, text_halign = text.align_left, text_font_family = font.family_monospace)
table.cell(table2, 0, i, text_color = fgColor, text_halign = text.align_left, text_font_family = font.family_monospace)
table.cell(table1, 1, i, text_color = fgColor)
table.cell(table2, 1, i, text_color = fgColor)
if barstate.islast
for i = 1 to 8
float open1 = chartData.value("open", 5000 * i)
float open2 = i < 3 ? open[5000 * i] : -1 // `[5000 * i]` reconstructed: the direct history reference fails beyond the buffer, hence the guard
table.cell_set_text(table1, 0, i, "chartData.value(\"open\", " + str.tostring(5000 * i) + "): ")
table.cell_set_text(table2, 0, i, "open[" + str.tostring(5000 * i) + "]: ")
table.cell_set_text(table1, 1, i, str.tostring(open1))
table.cell_set_text(table2, 1, i, open2 >= 0 ? str.tostring(open2) : "Error")
• xloc.bar_index → xloc.bar_time
The 𝚝𝚒𝚖𝚎 value (or 𝚝𝚒𝚖𝚎_𝚌𝚕𝚘𝚜𝚎 value) can be retrieved for any bar index that is stored in memory by the 𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊 object.
//@version=6
indicator("Example `timeAtBarIndex()`")
import n00btraders/ChartData/1
chartData = ChartData.collectChartData()
if barstate.islast
int start = bar_index - 15000
int end = bar_index - 100
// line.new(start, close, end, close) // !ERROR - `start` value is too far from current bar index
start := chartData.timeAtBarIndex(start)
end := chartData.timeAtBarIndex(end)
line.new(start, close, end, close, xloc.bar_time, width = 10)
• xloc.bar_time → xloc.bar_index
Use 𝚝𝚒𝚖𝚎𝚜𝚝𝚊𝚖𝚙𝚃𝚘𝙱𝚊𝚛𝙸𝚗𝚍𝚎𝚡() to find the bar that a timestamp belongs to.
If the timestamp falls in between the close of one bar and the open of the next bar,
the 𝚜𝚗𝚊𝚙 parameter can be used to determine which bar to choose:
𝚂𝚗𝚊𝚙.𝙻𝙴𝙵𝚃 - prefer to choose the leftmost bar (typically used for closing times)
𝚂𝚗𝚊𝚙.𝚁𝙸𝙶𝙷𝚃 - prefer to choose the rightmost bar (typically used for opening times)
𝚂𝚗𝚊𝚙.𝙳𝙴𝙵𝙰𝚄𝙻𝚃 (or 𝚗𝚊) - copies the same behavior as xloc.bar_time uses for drawing objects
//@version=6
indicator("Example `timestampToBarIndex()`")
import n00btraders/ChartData/1
startTimeInput = input.time(timestamp("01 Aug 2025 08:30 -0500"), "Session Start Time")
endTimeInput = input.time(timestamp("01 Aug 2025 15:15 -0500"), "Session End Time")
chartData = ChartData.collectChartData()
if barstate.islastconfirmedhistory
int startBarIndex = chartData.timestampToBarIndex(startTimeInput, ChartData.Snap.RIGHT)
int endBarIndex = chartData.timestampToBarIndex(endTimeInput, ChartData.Snap.LEFT)
line1 = line.new(startBarIndex, 0, startBarIndex, 1, extend = extend.both, color = color.new(color.green, 60), force_overlay = true)
line2 = line.new(endBarIndex, 0, endBarIndex, 1, extend = extend.both, color = color.new(color.green, 60), force_overlay = true)
linefill.new(line1, line2, color.new(color.green, 90))
// using Snap.DEFAULT to show that it is equivalent to drawing lines using `xloc.bar_time` (i.e., it aligns to the same bars)
startBarIndex := chartData.timestampToBarIndex(startTimeInput)
endBarIndex := chartData.timestampToBarIndex(endTimeInput)
line.new(startBarIndex, 0, startBarIndex, 1, extend = extend.both, color = color.yellow, width = 3)
line.new(endBarIndex, 0, endBarIndex, 1, extend = extend.both, color = color.yellow, width = 3)
line.new(startTimeInput, 0, startTimeInput, 1, xloc.bar_time, extend.both, color.new(color.blue, 85), width = 11)
line.new(endTimeInput, 0, endTimeInput, 1, xloc.bar_time, extend.both, color.new(color.blue, 85), width = 11)
• Get Price of Line at Timestamp
The pine script built-in function line.get_price() requires working with bar index values. To get the price of a line in terms of a timestamp, convert the timestamp into a bar index or offset.
//@version=6
indicator("Example `line.get_price()` at timestamp")
import n00btraders/ChartData/1
lineStartInput = input.time(timestamp("01 Aug 2025 08:30 -0500"), "Line Start")
chartData = ChartData.collectChartData()
var diagonal = line.new(na, na, na, na, force_overlay = true)
if time <= lineStartInput
line.set_xy1(diagonal, bar_index, open)
if barstate.islastconfirmedhistory
line.set_xy2(diagonal, bar_index, close)
if barstate.islast
int timeOneWeekAgo = timenow - (7 * timeframe.in_seconds("1D") * 1000)
// Note: could also use `timetampToBarIndex(timeOneWeekAgo, Snap.DEFAULT)` and pass the value directly to `line.get_price()`
int barsOneWeekAgo = chartData.getNumberOfBarsBack(timeOneWeekAgo)
float price = line.get_price(diagonal, bar_index - barsOneWeekAgo)
string formatString = "Time 1 week ago: {0,number,#} - Equivalent to {1} bars ago 𝚕𝚒𝚗𝚎.𝚐𝚎𝚝_𝚙𝚛𝚒𝚌𝚎(): {2,number,#.##}"
string labelText = str.format(formatString, timeOneWeekAgo, barsOneWeekAgo, price)
label.new(timeOneWeekAgo, price, labelText, xloc.bar_time, style = label.style_label_lower_right, size = 16, textalign = text.align_left, force_overlay = true)
█ RUNTIME ERROR MESSAGES
This library's functions will generate a custom runtime error message in the following cases:
𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊() is not called consecutively, or is called more than once on a single bar
Invalid 𝚋𝚊𝚛𝚜𝙵𝚘𝚛𝚠𝚊𝚛𝚍 argument in the 𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊() function
Invalid 𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜 argument in the 𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊() function
Invalid 𝚕𝚎𝚗𝚐𝚝𝚑 argument in any of the functions that accept a number of bars back
Note: there is no runtime error generated for an invalid 𝚝𝚒𝚖𝚎𝚜𝚝𝚊𝚖𝚙 or 𝚋𝚊𝚛𝙸𝚗𝚍𝚎𝚡 argument in any of the functions. Instead, the functions will assign 𝚗𝚊 to the returned values.
Any other runtime errors are due to incorrect usage of the library.
█ NOTES
• Function Descriptions
The library source code uses Markdown for the exported functions. Hover over a function/method call in the Pine Editor to display formatted, detailed information about the function/method.
//@version=6
indicator("Demo Function Tooltip")
import n00btraders/ChartData/1
chartData = ChartData.collectChartData()
int barIndex = chartData.timestampToBarIndex(timenow)
log.info(str.tostring(barIndex))
• Historical vs. Realtime Behavior
Under the hood, the data collector for this library is declared as `var`. Because of this, the 𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊 object will always reflect the latest available data on realtime updates. Any data that is recorded for historical bars will remain unchanged throughout the execution of a script.
//@version=6
indicator("Demo Realtime Behavior")
import n00btraders/ChartData/1
var map<string, float> variables = map.new<string, float>()
variables.put("open", open)
variables.put("close", close)
chartData = ChartData.collectChartData(variables)
if barstate.isrealtime
varip float initialOpen = open
varip float initialClose = close
varip int updateCount = 0
updateCount += 1
float latestOpen = open
float latestClose = close
float recordedOpen = chartData.valueAtBarIndex("open", bar_index)
float recordedClose = chartData.valueAtBarIndex("close", bar_index)
string formatString = "# of updates: {0} 𝚘𝚙𝚎𝚗 at update #1: {1,number,#.##} 𝚌𝚕𝚘𝚜𝚎 at update #1: {2,number,#.##} "
+ "𝚘𝚙𝚎𝚗 at update #{0}: {3,number,#.##} 𝚌𝚕𝚘𝚜𝚎 at update #{0}: {4,number,#.##} "
+ "𝚘𝚙𝚎𝚗 stored in memory: {5,number,#.##} 𝚌𝚕𝚘𝚜𝚎 stored in memory: {6,number,#.##}"
string labelText = str.format(formatString, updateCount, initialOpen, initialClose, latestOpen, latestClose, recordedOpen, recordedClose)
label.new(bar_index, close, labelText, style = label.style_label_left, force_overlay = true)
• Collecting Chart Data for Other Contexts
If your use case requires collecting chart data from another context, avoid directly retrieving the 𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊 object, as this may exceed memory limits.
//@version=6
indicator("Demo Return Calculated Results")
import n00btraders/ChartData/1
timeInput = input.time(timestamp("01 Sep 2025 08:30 -0500"), "Time")
var int oneMinuteBarsAgo = na
// !ERROR - Memory Limits Exceeded
// chartDataArray = request.security_lower_tf(syminfo.tickerid, "1", ChartData.collectChartData())
// oneMinuteBarsAgo := chartDataArray.last().getNumberOfBarsBack(timeInput)
// function that returns calculated results (a single integer value instead of an entire `ChartData` object)
getNumberOfBarsBack() =>
chartData = ChartData.collectChartData()
chartData.getNumberOfBarsBack(timeInput)
calculatedResultsArray = request.security_lower_tf(syminfo.tickerid, "1", getNumberOfBarsBack())
oneMinuteBarsAgo := calculatedResultsArray.size() > 0 ? calculatedResultsArray.last() : na
if barstate.islast
string labelText = str.format("The selected timestamp occurs {0} 1-minute bars ago", oneMinuteBarsAgo)
label.new(bar_index, hl2, labelText, style = label.style_label_left, size = 16, force_overlay = true)
• Memory Usage
The library's convenience and ease of use comes at the cost of increased usage of computational resources. For simple scripts, using this library will likely not cause any issues with exceeding memory limits. But for large and complex scripts, you can reduce memory issues by specifying a lower 𝚌𝚊𝚕𝚌_𝚋𝚊𝚛𝚜_𝚌𝚘𝚞𝚗𝚝 amount in the indicator() or strategy() declaration statement.
//@version=6
// !ERROR - Memory Limits Exceeded using the default number of bars available (~20,000 bars for Premium plans)
//indicator("Demo `calc_bars_count` parameter")
// Reduce number of bars using `calc_bars_count` parameter
indicator("Demo `calc_bars_count` parameter", calc_bars_count = 15000)
import n00btraders/ChartData/1
map<string, float> variables = map.new<string, float>()
variables.put("open", open)
variables.put("close", close)
variables.put("weekofyear", weekofyear)
variables.put("dayofmonth", dayofmonth)
variables.put("hour", hour)
variables.put("minute", minute)
variables.put("second", second)
// simulate large memory usage
chartData0 = ChartData.collectChartData(variables)
chartData1 = ChartData.collectChartData(variables)
chartData2 = ChartData.collectChartData(variables)
chartData3 = ChartData.collectChartData(variables)
chartData4 = ChartData.collectChartData(variables)
chartData5 = ChartData.collectChartData(variables)
chartData6 = ChartData.collectChartData(variables)
chartData7 = ChartData.collectChartData(variables)
chartData8 = ChartData.collectChartData(variables)
chartData9 = ChartData.collectChartData(variables)
log.info(str.tostring(chartData0.time(0)))
log.info(str.tostring(chartData1.time(0)))
log.info(str.tostring(chartData2.time(0)))
log.info(str.tostring(chartData3.time(0)))
log.info(str.tostring(chartData4.time(0)))
log.info(str.tostring(chartData5.time(0)))
log.info(str.tostring(chartData6.time(0)))
log.info(str.tostring(chartData7.time(0)))
log.info(str.tostring(chartData8.time(0)))
log.info(str.tostring(chartData9.time(0)))
if barstate.islast
result = table.new(position.middle_right, 1, 1, force_overlay = true)
table.cell(result, 0, 0, "Script Execution Successful ✅", text_size = 40)
█ EXPORTED ENUMS
Snap
Behavior for determining the bar that a timestamp belongs to.
Fields:
LEFT : Snap to the leftmost bar.
RIGHT : Snap to the rightmost bar.
DEFAULT : Default `xloc.bar_time` behavior.
Note: this enum is used for the 𝚜𝚗𝚊𝚙 parameter of 𝚝𝚒𝚖𝚎𝚜𝚝𝚊𝚖𝚙𝚃𝚘𝙱𝚊𝚛𝙸𝚗𝚍𝚎𝚡().
█ EXPORTED TYPES
Note: users of the library do not need to worry about directly accessing the fields of these types; all computations are done through method calls on an object of the 𝙲𝚑𝚊𝚛𝚝𝙳𝚊𝚝𝚊 type.
Variable
Represents a user-specified variable that can be tracked on every chart bar.
Fields:
name (series string) : Unique identifier for the variable.
values (array) : The array of stored values (one value per chart bar).
ChartData
Represents data for all bars on a chart.
Fields:
bars (series int) : Current number of bars on the chart.
timeValues (array) : The `time` values of all chart (and future) bars.
timeCloseValues (array) : The `time_close` values of all chart (and future) bars.
variables (array) : Additional custom values to track on all chart bars.
█ EXPORTED FUNCTIONS
collectChartData()
Collects and tracks the `time` and `time_close` value of every bar on the chart.
Returns: `ChartData` object to convert between `xloc.bar_index` and `xloc.bar_time`.
collectChartData(barsForward)
Collects and tracks the `time` and `time_close` value of every bar on the chart as well as a specified number of future bars.
Parameters:
barsForward (simple int) : Number of future bars to collect data for.
Returns: `ChartData` object to convert between `xloc.bar_index` and `xloc.bar_time`.
collectChartData(variables)
Collects and tracks the `time` and `time_close` value of every bar on the chart. Additionally, tracks a custom set of variables for every chart bar.
Parameters:
variables (simple map) : Custom values to collect on every chart bar.
Returns: `ChartData` object to convert between `xloc.bar_index` and `xloc.bar_time`.
collectChartData(barsForward, variables)
Collects and tracks the `time` and `time_close` value of every bar on the chart as well as a specified number of future bars. Additionally, tracks a custom set of variables for every chart bar.
Parameters:
barsForward (simple int) : Number of future bars to collect data for.
variables (simple map) : Custom values to collect on every chart bar.
Returns: `ChartData` object to convert between `xloc.bar_index` and `xloc.bar_time`.
█ EXPORTED METHODS
method timestampToBarIndex(chartData, timestamp, snap)
Converts a UNIX timestamp to a bar index.
Namespace types: ChartData
Parameters:
chartData (series ChartData) : The `ChartData` object.
timestamp (series int) : A UNIX time.
snap (series Snap) : A `Snap` enum value.
Returns: A bar index, or `na` if unable to find the appropriate bar index.
method getNumberOfBarsBack(chartData, timestamp)
Converts a UNIX timestamp to a history-referencing length (i.e., number of bars back).
Namespace types: ChartData
Parameters:
chartData (series ChartData) : The `ChartData` object.
timestamp (series int) : A UNIX time.
Returns: A bar offset, or `na` if unable to find a valid number of bars back.
method timeAtBarIndex(chartData, barIndex)
Retrieves the `time` value for the specified bar index.
Namespace types: ChartData
Parameters:
chartData (series ChartData) : The `ChartData` object.
barIndex (int) : The bar index.
Returns: The `time` value, or `na` if there is no `time` stored for the bar index.
method time(chartData, length)
Retrieves the `time` value of the bar that is `length` bars back relative to the latest bar.
Namespace types: ChartData
Parameters:
chartData (series ChartData) : The `ChartData` object.
length (series int) : Number of bars back.
Returns: The `time` value `length` bars ago, or `na` if there is no `time` stored for that bar.
method timeCloseAtBarIndex(chartData, barIndex)
Retrieves the `time_close` value for the specified bar index.
Namespace types: ChartData
Parameters:
chartData (series ChartData) : The `ChartData` object.
barIndex (series int) : The bar index.
Returns: The `time_close` value, or `na` if there is no `time_close` stored for the bar index.
method timeClose(chartData, length)
Retrieves the `time_close` value of the bar that is `length` bars back from the latest bar.
Namespace types: ChartData
Parameters:
chartData (series ChartData) : The `ChartData` object.
length (series int) : Number of bars back.
Returns: The `time_close` value `length` bars ago, or `na` if there is none stored.
method valueAtBarIndex(chartData, name, barIndex)
Retrieves the value of a custom variable for the specified bar index.
Namespace types: ChartData
Parameters:
chartData (series ChartData) : The `ChartData` object.
name (series string) : The variable name.
barIndex (series int) : The bar index.
Returns: The value of the variable, or `na` if that variable is not stored for the bar index.
method value(chartData, name, length)
Retrieves a variable value of the bar that is `length` bars back relative to the latest bar.
Namespace types: ChartData
Parameters:
chartData (series ChartData) : The `ChartData` object.
name (series string) : The variable name.
length (series int) : Number of bars back.
Returns: The value `length` bars ago, or `na` if that variable is not stored for the bar index.
method getAllVariablesAtBarIndex(chartData, barIndex)
Retrieves all custom variables for the specified bar index.
Namespace types: ChartData
Parameters:
chartData (series ChartData) : The `ChartData` object.
barIndex (series int) : The bar index.
Returns: Map of all custom variables that are stored for the specified bar index.
method getEarliestStoredData(chartData)
Gets all values from the earliest bar data that is currently stored in memory.
Namespace types: ChartData
Parameters:
chartData (series ChartData) : The `ChartData` object.
Returns: A tuple:
method getLatestStoredData(chartData, futureData)
Gets all values from the latest bar data that is currently stored in memory.
Namespace types: ChartData
Parameters:
chartData (series ChartData) : The `ChartData` object.
futureData (series bool) : Whether to include the future data that is stored in memory.
Returns: A tuple:
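For orientation, here is a minimal usage sketch. The import path below is hypothetical (substitute the library's actual publisher and version), and it assumes `collectChartData()` is called once per bar at global scope:
//@version=6
indicator("ChartData usage sketch", overlay = true)
import publisher/ChartData/1 as cd  // hypothetical import path
// Track every chart bar plus 10 projected future bars.
chartData = cd.collectChartData(10)
if barstate.islastconfirmedhistory
    // Read the stored `time` of the bar 20 bars back.
    int t = chartData.time(20)
    if not na(t)
        label.new(bar_index[20], high[20], str.format_time(t))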
Liquidity Void Detector (Zeiierman)
█ Overview
Liquidity Void Detector (Zeiierman) is an oscillator highlighting inefficient price displacements under low participation. It measures the most recent price move (standardized return) and amplifies it only when volume is below its own trend.
Positive readings ⇒ strong up-move on low volume → potential Buy-Side Imbalance (void below) that often refills.
Negative readings ⇒ strong down-move on low volume → potential Sell-Side Imbalance (void above) that often refills.
This tool provides a quantitative “void” proxy: when price travels far with unusually thin volume, the move is flagged as likely inefficient and prone to mean-reversion/mitigation.
█ How It Works
⚪ Volume Shock (Participation Filter)
Each bar, volume is compared to a rolling baseline. This is then z-scored.
// Volume Shock calculation
volTrend = ta.sma(volume, L)
vs = (volume > 0 and volTrend > 0) ? math.log(volume) - math.log(volTrend) : na
vsZ = zScore(vs, vzLen) // z-scored volume shock
lowVS = (vsZ <= vzThr) // low-volume condition
Bars with VolShock Z ≤ threshold are treated as low-volume (thin).
⚪ Prior Return Extremeness
The 1-bar log return is computed and z-scored.
// Prior return extremeness
r1 = math.log(close / close[1])
retZ = zScore(r1, rLen) // z-scored prior return
This shows whether the latest move is unusually large relative to recent history.
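The `zScore()` helper called in both snippets above is not shown in the description; a minimal sketch of a rolling standard score consistent with how it is used might be:
// Assumed helper (not part of the published source): rolling z-score.
zScore(float src, int len) =>
    mean = ta.sma(src, len)
    dev  = ta.stdev(src, len)
    dev > 0 ? (src - mean) / dev : na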
⚪ Void Oscillator
The oscillator is:
// Oscillator construction
weight = lowVS ? 1.0 : fadeNoLow
osc = retZ * weight
where Weight = 1 when volume is low, otherwise fades toward a user-set factor (0–1).
Osc > 0: up-move emphasized under low volume ⇒ Buy-Side Imbalance.
Osc < 0: down-move emphasized under low volume ⇒ Sell-Side Imbalance.
█ Why Use It
⚪ Targets Inefficient Moves
By filtering for low participation, the oscillator focuses on moves most likely driven by thin books/noise trading, which are statistically more likely to retrace.
⚪ Simple, Robust Logic
No need for tick data or order-book depth. It derives a practical void proxy from OHLCV, making it portable across assets and timeframes.
⚪ Complements Price-Action Tools
Use alongside FVG/imbalance zones, key levels, and volume profile to prioritize voids that carry the highest reversal probability.
█ How to Use
Sell-Side Imbalance = aggressive sell move (price goes down on low volume) → expect price to move up to fill it.
Buy-Side Imbalance = aggressive buy move (price goes up on low volume) → expect price to move down to fill it.
█ Settings
Volume Baseline Length — Bars for the volume trend used in VolShock. Larger = smoother baseline, fewer low-volume flags.
Vol Shock Z-Score Lookback — Bars to standardize VolShock; larger = smoother, fewer extremes.
Low-Volume Threshold (VolShock Z ≤) — Defines “thin participation.” Typical: −0.5 to −1.0.
Return Z-Score Lookback — Bars to standardize the 1-bar log return; larger = smoother “extremeness” measure.
Fade When Volume Not Low (0–1) — Weight applied when volume is not low. 0.00 = ignore non-low-volume bars entirely. 1.00 = treat volume condition as irrelevant (pure return extremeness).
Upper Threshold (Osc ≥) — Trigger for Buy-Side Imbalance (void below).
Lower Threshold (Osc ≤) — Trigger for Sell-Side Imbalance (void above).
-----------------
Disclaimer
The content provided in my scripts, indicators, ideas, algorithms, and systems is for educational and informational purposes only. It does not constitute financial advice, investment recommendations, or a solicitation to buy or sell any financial instruments. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
DeltaFlow Volume Profile [BigBeluga]
🔵 OVERVIEW
The DeltaFlow Volume Profile builds a compact volume profile next to price and enriches every bin with flow context: bullish vs. bearish participation (%), a per-bin Delta %, an optional Delta Heat Map, and a PoC band with the bin's absolute volume. This lets you see not just where volume clustered, but who (buyers or sellers) dominated inside each price slice.
🔵 CONCEPTS
Binned Volume Profile: The price range over a user-defined LookBack is split into Bins; each bin aggregates traded volume.
Bull/Bear Split: Within every bin, volume is separated by candle direction into Bull Volume and Bear Volume, then normalized to % of the bin's displayed size.
Delta %: The difference between Bull % and Bear % for the bin. Positive = buyer dominance; negative = seller dominance.
Delta Heat Map: Bin background shading that scales with both total volume strength and delta bias.
PoC (Point of Control): The most significant bin gets a PoC band and a label with its absolute volume.
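As a rough sketch of the Bull/Bear Split and Delta % described above (an illustration, not the published source; the per-bin accumulation over the LookBack window is abbreviated):
// Classify one bar's volume by candle direction.
bullVol = close >= open ? volume : 0.0
bearVol = close <  open ? volume : 0.0
// Per bin, accumulate bullVol/bearVol over the bars in its price slice, then:
deltaPct(float binBull, float binBear) =>
    total = binBull + binBear
    total > 0 ? 100.0 * (binBull - binBear) / total : na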
🔵 FEATURES
Profile with Flow: A clean horizontal volume bar per bin plus stacked Bull % and Bear %.
Per-Bin Delta Label: A readable "Δ xx%" tag at the start of each bin shows dominance at a glance.
Delta Heat Map: Optional gradient that intensifies with higher volume and stronger delta.
PoC Highlight: Optional PoC band colored separately, labeled with absolute volume (e.g., "1.23M").
Configurable Inputs: LookBack, number of Bins (10–100), toggles for Delta, Heat Map, Volume Bars, and PoC color.
Readable Colors: Separate inputs for bullish (volume +) and bearish (volume –) hues.
🔵 HOW TO USE
Set the window: Choose LookBack and Bins to balance detail vs. performance (more bins = finer resolution).
Enable “Volume Bars” to display the bull/bear split as two stacked percent bars inside each bin.
High Bull % near support → constructive demand.
High Bear % near resistance → active supply.
Use Δ labels (toggle “Delta”) to quickly spot bins with clear buyer/seller control; combine with price position for confluence.
Turn on Delta Heat Map to prioritize areas with both large volume and strong imbalance.
Watch the PoC : The PoC band marks the most traded (and often magnet) level; its label shows absolute size for context.
Trade ideas:
Breakout continuation when Δ stays positive across consecutive upper bins.
Reversion risk when price enters a large bearish-Δ cluster below.
Manage risk around the PoC; reactions there can be sharp.
🔵 CONCLUSION
DeltaFlow Volume Profile upgrades a classic profile with flow intelligence. The bull/bear split, explicit Δ %, heat-weighted backdrop, and PoC volume label make dominant participation and key price shelves obvious. Use it to filter levels, time entries with imbalance, and validate breakouts or fades with objective volume-flow evidence.
Volume by Time [LuxAlgo]
The Volume by Time indicator collects volume data for every point in time over the day and displays the average volume of the specific dataset collected at each respective bar.
The indicator overlays the current volume and the historical average to allow for better comparisons.
🔶 USAGE
Throughout the day, the volume of every bar is stored in groups organized by the time when each bar occurred.
Over time, the datasets accumulate, and from that, we can simply determine the average value at each specific time of the day.
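A minimal sketch of this grouping idea (an illustration, not LuxAlgo's published source) could store volumes keyed by the bar's minute of the day:
//@version=6
indicator("Avg volume by time of day (sketch)")
// Pine maps cannot hold arrays directly, so wrap the array in a type.
type VolSamples
    array<float> samples
var map<int, VolSamples> volByTime = map.new<int, VolSamples>()
// Key each bar by its minute of the day.
key = hour(time) * 60 + minute(time)
if not volByTime.contains(key)
    volByTime.put(key, VolSamples.new(array.new<float>()))
volByTime.get(key).samples.push(volume)
// The historical average at this time of day (the hollow-bar analogue).
plot(volByTime.get(key).samples.avg(), style = plot.style_columns)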
The display is a histogram style, which consists of hollow bars and solid filled columns.
- Hollow bars represent the average volume at that time of the day.
- Solid columns display the current volume from the current bar.
By default, the entire history of data is used, but if desired, the number of days under analysis can be specified to provide a more relevant point of view.
A readout of the number of days being analyzed can be seen in the status bar at any time.
Note: Due to partial sessions, it is typical to see this value change throughout the day; this is simply because not every trading session follows the exact same schedule.
The analysis type can also be specified as either Average (default) or Median.
Additionally, a Bi-directional mode can be toggled to clearly distinguish upward volume from downward volume.
🔶 SETTINGS
Analysis Type: Choose between Average or Median analysis modes.
Length (Days): Set the number of days to use for analysis. Set to 0 for full data (Default 0).
Bi-Directional Toggle: Toggle between one-sided or two-sided display.
FlowScope [Hapharmonic]
FlowScope: Uncover the Market's True Intent 🔬
Ever wished you could look inside the candles and see where the real action is happening? FlowScope is your microscope for the market's flow, designed to give you a powerful edge by revealing the volume distribution that price action alone can't show you.
Instead of just looking at the open, high, low, and close, FlowScope lets you dive deeper into the market's auction process. It groups candles together and builds a detailed Volume Profile for that period, showing you exactly where the trading happened and revealing the story behind the price action.
Let's explore how you can use it to gain a powerful new edge.
🧐 Core Concept: How It Works
At its heart, FlowScope does three key things:
It Groups Candles: You decide how many candles to group together. For example, setting "Group Candles" to 4 on a 5-minute chart effectively gives you a detailed 20-minute candle and profile. This helps you see the bigger picture and filter out market noise.
It Builds a Volume Profile: For each group, FlowScope analyzes the volume at every single price level. It then displays this as a horizontal histogram (we call this a "footprint" or profile). Longer bars mean more volume was traded at that price, indicating a "fair" price or an area of acceptance. Shorter bars mean price moved through quickly, indicating rejection.
It Creates a Custom "Grouped Candle": To summarize the group's overall price action, FlowScope draws a single, custom candle representing the entire group's:
Open: The open of the first candle in the group.
High: The absolute highest price reached within the group.
Low: The absolute lowest price reached within the group.
Close: The close of the last candle in the group.
This gives you a crystal-clear view of the group's net result, free from the back-and-forth noise of the individual candles inside it.
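A minimal sketch of that grouped-candle aggregation (assumed logic for illustration, not the published source):
// Aggregate OHLC across groups of `groupSize` bars.
groupSize = input.int(4, "Group Candles")
var float gOpen = na
var float gHigh = na
var float gLow  = na
bool newGroup = bar_index % groupSize == 0
gOpen := newGroup ? open : gOpen
gHigh := newGroup ? high : math.max(gHigh, high)
gLow  := newGroup ? low  : math.min(gLow,  low)
// The group completes on its final bar; `close` is then the group's close.
bool groupDone = (bar_index + 1) % groupSize == 0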
Below are some of the stunning preset color palettes you can choose from to customize your view:
🚀 How to Use: Practical Applications
FlowScope isn't just for looking pretty; it's a powerful analysis tool. Here are a few ways to integrate it into your trading:
Identify High-Volume Nodes (HVNs): Look for the longest bars in the profile. These are price levels where the market spent the most time and traded the most volume. HVNs often act as powerful "magnets" for price, becoming key areas of support and resistance.
Spot Low-Volume Nodes (LVNs): These are areas with very short bars or gaps in the profile. They represent price levels that the market moved through quickly and inefficiently. If price returns to an LVN, it's likely to move through it quickly again.
Analyze the Summary Box: This is where the real magic happens! ✨
Total Volume (Σ): The total volume for the entire group.
Buy (B) vs. Sell (S) Volume: FlowScope analyzes the lower timeframe action to estimate the buying and selling pressure that made up the total volume. Is a big red candle mostly aggressive selling, or was it just a lack of buyers? The B/S data gives you clues. A high-volume candle with nearly 50/50 buy/sell pressure might indicate absorption or a potential reversal.
Use the Grouped Candle for Clarity: Is the market in a clear uptrend, or is it just choppy? The grouped candle can give you a much clearer signal. A series of strong, green grouped candles shows much more conviction than a mix of small green and red candles.
⚙️ Settings & Customization
This is where you can truly make FlowScope your own. Let's walk through each setting.
Profile Settings
Group Candles: The number of standard chart candles you want to combine into a single FlowScope profile. A setting of 1 will analyze every single bar. A higher number gives you a broader market view. When Group Candles is set to 5, the data from the 5 individual candles are combined, and the volume is calculated accordingly.
Max Profile Boxes: This setting is more than just a number; it's a smart limit that ensures your profiles are always readable and relevant to the current market conditions.
Adaptive Sizing (The Ideal Goal): FlowScope first tries to create the perfect profile by making each volume box's height proportional to the current market volatility. It calculates an "ideal" box height based on the Average True Range (ATR / 10). This is powerful because it automatically adapts: you get smaller, more detailed boxes in quiet, low-volatility markets, and larger, clearer boxes in volatile, fast-moving markets.
The Safety Cap (Your Setting): However, what if you group several candles during a massive price move? The price range could be huge! If we only used the small, ATR-based box height, you might end up with hundreds of tiny, unreadable boxes. This is where your Max Profile Boxes setting (defaulting to 50) comes in. It acts as a maximum detail cap. If the adaptive, volatility-based calculation determines that it would need more boxes than your setting (e.g., more than 50), the indicator will override it. It will then simply divide the entire price range of the group into exactly the number of boxes you specified (e.g., 50).
In short: You are setting the maximum allowable detail. FlowScope intelligently adapts the profile's granularity below that limit based on market volatility, ensuring you always get a clear and meaningful picture.
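A condensed sketch of that sizing logic (the ATR length of 14 and the 4-bar group are assumptions for illustration):
maxBoxes = input.int(50, "Max Profile Boxes")
idealH   = ta.atr(14) / 10                          // volatility-based "ideal" box height
rangeH   = ta.highest(high, 4) - ta.lowest(low, 4)  // price range of the current group
needed   = idealH > 0 ? math.ceil(rangeH / idealH) : 1
numBoxes = math.max(1, math.min(needed, maxBoxes))  // the safety cap
boxH     = rangeH / numBoxes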
Style
Show Profile BG: A simple toggle to show or hide the faint background color behind the volume bars. Turning it off can create a cleaner look.
Color Mode: This dropdown controls how the volume profile text is colored.
Custom Gradient: This mode uses the three custom colors you select in the "Profile Colors" section to create a beautiful gradient across the profile.
Candle Color: This mode colors the profile based on whether the grouped candle was bullish (green) or bearish (red). The color will be a gradient, with the most intense color applied to the box with the highest volume; the colors of the other boxes will fade out from that point. It's a great way to see the profile's "mood" at a glance.
Profile Colors 🎨
Use Preset Palette: This is the master switch!
If checked: You can choose from 10 stunning, pre-designed color palettes from the Palette dropdown. The custom color pickers below will be disabled.
If unchecked (Default): The Palette dropdown will be disabled, and you can now choose your own three colors for the gradient.
Palette: (Only active when "Use Preset Palette" is checked). Choose from 10 luxurious, eye-catching color schemes like "Solar Flare" or "Deep Space" to instantly change the look and feel of your chart.
Low Price / Mid Price / High Price: (Only active when "Use Preset Palette" is unchecked). These three color pickers allow you to design your own unique gradient for the Custom Gradient color mode.
Candle Display
These settings control the custom "Grouped Candle" that summarizes the profile. When using the "Show Custom Candle" feature, you should change the chart's candlestick display to Bars for a cleaner view.
Show Custom Candle: This is the main toggle. When you check this box, the original chart candles will be hidden, and your custom FlowScope candle will be displayed instead. This custom candle is intentionally small to ensure it does not visually overlap with the volume profile boxes.
Show Body: (Only active when "Show Custom Candle" is checked). Toggles the visibility of the candle's body.
Wick Width & Body Width: (Only active when "Show Custom Candle" is checked). These sliders let you control the thickness of the wick and body lines to match your personal style.
Up Color / Down Color: (Only active when "Show Custom Candle" is checked). Choose the colors for your bullish and bearish custom candles.
Experiment with the settings, find a style that works for you, and start seeing the market in a whole new light.
Happy trading! 📈😊
VWAP Price Channel
VWAP Price Channel cuts the crust off of a traditional price channel (Donchian Channel) by anchoring VWAPs at the highs and lows. By doing this, the flat levels, characteristic of traditional Donchian Channels, are no more!
Author's Note: This indicator is formed with no inherent use, and serves solely as a thought experiment.
> Concept
I would hesitate to call this a "predictive" indicator; however, its behavior suggests it could be considered at least partially predictive.
Essentially, the anchored VWAPs create something from otherwise nothing.
While the DC upper or lower values are staying flat, the VWAPs improvise based on price and volume to project a level that may be a better representation of where future highs or lows may settle.
Visually, this looks like we have cut off the corners of the Donchian Channel.
Note: Notice how we are calculating values before the corners are realized.
> Implementation
While this is only a concept indicator, the specific application I've gone with is a sort of supertrend-ish display (a trend-flipping trailing stop loss).
The script uses basic logic to create a trend direction, and then displays the Anchored VWAPs as a form of trailing stop loss.
While "In Trend", the script fills in the area between the VWAP and Price in the direction of trend.
When new highs or lows are made while in trend, the opposite VWAP will start to generate at the new highs or lows. These occur on every new high or low, so they do not indicate a trend shift, but they can be read as breakout levels that the current trend must clear for continuation.
Note: All values are drawn live, but when using higher timeframes, there is a natural calculation discrepancy when using live data vs. historical.
> Technicals
In this script, I'm simply detecting new highs or lows from the DC and using those as the anchor frequency on the built-in VWAP function.
So each time a new high or low is made based on DC, the VWAP function re-anchors to the high or low of the candle.
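A bare-bones sketch of that re-anchoring (parameters assumed for illustration; this omits the "Start + Change" adjustment described below):
len = input.int(20, "DC Length")
dcUpper = ta.highest(high, len)
dcLower = ta.lowest(low, len)
bool newHigh = high == dcUpper      // fresh channel high on this bar
bool newLow  = low  == dcLower      // fresh channel low
upperVwap = ta.vwap(high, newHigh)  // re-anchors at each new high
lowerVwap = ta.vwap(low,  newLow)   // re-anchors at each new low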
Past that, I have implemented some logic in order to account for a common occurrence I faced during development.
Frequently, the price would outpace the anchored VWAP, so we would end up with the VWAP being further from price than the actual DC upper or lower.
Due to this, I ended up with a third value which, rather than switching between raw VWAP values and DC values, adjusts based on the change in the VWAP value.
This can be simply thought of as a "Start + Change" type of setup.
By doing this, I can use the change values from the actual anchored VWAP, and under normal conditions, this will also be the true VWAP value.
However, situationally, I am able to update the start value which we're applying the VWAP change to.
In other words, when these situations happen, the VWAP change is added to the new (closer to price) DC value.
The specific trend logic being used is nothing fancy at all, we are simply checking if a new high or low is created and setting the trend in that direction.
This is in line with some traditional DC Strategies.
To those who made it here,
Just remember:
The chart may be ugly, but it's the fastest analysis of the data you can get.
Nicer displays often come at the hidden cost of latency.
You have to shoot your shot to make it.
Choose 2: Fast, Clean, Useful
Enjoy!
Fibonacci Sequence Circles [BigBeluga]
🔵 Overview
The Fibonacci Sequence Circles is a unique and visually intuitive indicator designed for the TradingView platform. It combines the principles of the Fibonacci sequence with geometric circles to help traders identify potential support and resistance levels, as well as price expansion zones. The indicator dynamically anchors to key price points, such as pivot highs, pivot lows, or timeframe changes (daily, weekly, monthly), and generates Fibonacci-based circles around these anchor points.
⚠️ For proper visualization of the indicator, use a regular (non-logarithmic) chart scale.
🔵 Key Features
Customizable Anchor Points: The indicator can be anchored to Pivot Highs, Pivot Lows, or timeframe changes (Daily, Weekly, Monthly), making it adaptable to various trading strategies.
Fibonacci Sequence Logic: The circles are generated using the Fibonacci sequence, where the diameter of each circle is the sum of the diameters of the two preceding circles.
first = start_val
secon = start_val + int(start_val/2)
three = first + secon
four = secon + three
five = three + four
six = four + five
seven = five + six
eight = six + seven
nine = seven + eight
ten = eight + nine
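For example, with a start value of 10, the sequence of diameters above runs 10, 15, 25, 40, 65, 105, 170, 275, 445, and 720 bars.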
Adjustable Start Value: Traders can modify the starting value of the sequence to scale the circles larger or smaller, ensuring they fit the current price action.
Color Customization: Each circle can be individually enabled or disabled, and its color can be customized for better visual clarity.
Visual Labels: The diameter of each circle (in bars) is displayed next to the circle, providing additional context for analysis.
🔵 Usage
Step 1: Set the Anchor Point - Choose the anchor type (Pivot High, Pivot Low, Daily, Weekly, Monthly) to define the center of the Fibonacci circles.
Step 2: Adjust the Start Value - Modify the starting value of the Fibonacci sequence to scale the circles according to the price action.
Step 3: Customize Circle Colors - Enable or disable specific circles and adjust their colors for better visualization.
Step 4: Analyze Price Action - Use the circles to identify potential support/resistance levels, price expansion zones, or trend continuation areas.
Step 5: Combine with Other Tools - Enhance your analysis by combining the indicator with other technical tools like trendlines, moving averages, or volume indicators.
The Fibonacci Sequence Circles is a powerful and flexible tool for traders who rely on Fibonacci principles and geometric patterns. Its ability to anchor to key price points and dynamically scale based on market conditions makes it suitable for various trading styles and timeframes. Whether you're a day trader or a long-term investor, this indicator can help you visualize and anticipate price movements with greater precision.
ATAI Volume Pressure Analyzer V 1.0 — Pure Up/Down
Overview
Volume is a foundational tool for understanding the supply–demand balance. Classic charts show only total volume and don’t tell us what portion came from buying (Up) versus selling (Down). The ATAI Volume Pressure Analyzer fills that gap. Built on Pine Script v6, it scans a lower timeframe to estimate Up/Down volume for each host‑timeframe candle, and presents “volume pressure” in a compact HUD table that’s comparable across symbols and timeframes.
1) Architecture & Global Settings
Global Period (P, bars)
A single global input P defines the computation window. All measures—host‑TF volume moving averages and the half‑window segment sums—use this length. Default: 55.
Timeframe Handling
The core of the indicator is estimating Up/Down volume using lower‑timeframe data. You can set a custom lower timeframe, or rely on auto‑selection:
◉ Second charts → 1S
◉ Intraday → 1 minute
◉ Daily → 5 minutes
◉ Otherwise → 60 minutes
Lower TFs give more precise estimates but shorter history; higher TFs approximate buy/sell splits but provide longer history. As a rule of thumb, scan thin symbols at 5–15m, and liquid symbols at 1m.
2) Up/Down Volume & Derived Series
The script uses TradingView’s library function tvta.requestUpAndDownVolume(lowerTf) to obtain three values:
◉ Up volume (buyers)
◉ Down volume (sellers)
◉ Delta (Up − Down)
From these we define:
◉ TF_buy = |Up volume|
◉ TF_sell = |Down volume|
◉ TF_tot = TF_buy + TF_sell
◉ TF_delta = TF_buy − TF_sell
A positive TF_delta indicates buyer dominance; a negative value indicates selling pressure. To smooth noise, simple moving averages of TF_buy and TF_sell are computed over P and used as baselines.
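A minimal sketch of this step (the `TradingView/ta` library version below may differ from the one the script actually imports):
//@version=6
indicator("Up/Down volume sketch")
import TradingView/ta/9 as tvta
lowerTf = input.timeframe("1", "Lower timeframe")
P       = input.int(55, "Global period")
[upVol, dnVol, delta] = tvta.requestUpAndDownVolume(lowerTf)
TF_buy   = math.abs(upVol)
TF_sell  = math.abs(dnVol)
TF_tot   = TF_buy + TF_sell
TF_delta = TF_buy - TF_sell
// Baselines used to smooth noise.
buyMA  = ta.sma(TF_buy, P)
sellMA = ta.sma(TF_sell, P)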
3) Key Performance Indicators (KPIs)
Half‑window segmentation
To track momentum shifts, the P‑bar window is split in half:
◉ C→B: the older half
◉ B→A: the newer half (toward the current bar)
For each half, the script sums buy, sell, and delta. Comparing the two halves reveals strengthening/weakening pressure. Example: if AtoB_delta < CtoB_delta, recent buying pressure has faded.
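Continuing the sketch above, the half-window sums could be computed along these lines:
// Split the P-bar window into halves and sum the delta over each.
half = P / 2  // integer division
AtoB_delta = math.sum(TF_delta, half)        // newer half (B→A)
CtoB_delta = math.sum(TF_delta, half)[half]  // older half (C→B)
bool buyingFaded = AtoB_delta < CtoB_delta   // the example condition above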
4) HUD (Table) Display
Colors & Appearance
Two main color inputs define the theme: a primary color and a negative color (used when Δ is negative). The panel background uses a translucent version of the primary color; borders use the solid primary color. Text defaults to the primary color and flips to the negative color when a block’s Δ is negative.
Layout
The HUD is a 4×5 table updated on the last bar of each candle:
◉ Row 1 (Meta): indicator name, P length, lower TF, host TF
◉ Row 2 (Host TF): current ↑Buy, ↓Sell, ΔDelta; plus Σ total and SMA(↑/↓)
◉ Row 3 (Segments): C→B and B→A blocks with ↑/↓/Δ
◉ Rows 4–5: reserved for advanced modules (Wings, α/β, OB/OS, Top3)
5) Advanced Modules
5.1 Wings
“Wings” visualize volume‑driven movement over C→B (left wing) and B→A (right wing) with top/bottom lines and a filled band. Slopes are ATR‑per‑bar normalized for cross‑symbol/TF comparability and converted to angles (degrees). Coloring mirrors HUD sign logic with a near‑zero threshold (default ~3°):
◉ Both lines rising → blue (bullish)
◉ Both falling → red (bearish)
◉ Mixed/near‑zero → gray
Left wing reflects the origin of the recent move; right wing reflects the current state.
5.2 α / β at Point B
We compute the oriented angle between the two wings at the midpoint B:
β is the bottom‑arc angle; α = 360° − β is the top‑arc angle.
◉ Large α (>180°) or small β (<180°) flags meaningful imbalance.
◉ Intuition: large α suggests potential selling pressure; small β implies fragile support. HUD cells highlight these conditions.
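For example, a bottom-arc angle of β = 150° implies α = 360° − 150° = 210°; since α exceeds 180°, the HUD would flag the imbalance.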
5.3 OB/OS Spike
OverBought/OverSold (OB/OS) labels appear when directional volume spikes align with a 7‑oscillator vote (RSI, Stoch, %R, CCI, MFI, DeMarker, StochRSI).
◉ OB label (red): unusually high sell volume + enough OB votes
◉ OS label (teal): unusually high buy volume + enough OS votes
Minimum votes and sync window are user‑configurable; dotted connectors can link labels to the candle wick.
5.4 Top3 Volume Peaks
Within the P window the script ranks the top three BUY peaks (B1–B3) and top three SELL peaks (S1–S3).
◉ B1 and S1 are drawn as horizontal resistance (at B1 High) and support (at S1 Low) zones with adjustable thickness (ticks/percent/ATR).
◉ The HUD dedicates six cells to show ↑/↓/Δ for each rank, and prints the exact High (B1) and Low (S1) inline in their cells.
6) Reading the HUD — A Quick Checklist
◉ Meta: Confirm P and both timeframes (host & lower).
◉ Host TF block: Compare current ↑/↓/Δ against their SMAs.
◉ Segments: Contrast C→B vs B→A deltas to gauge momentum change.
◉ Wings: Right‑wing color/angle = now; left wing = recent origin.
◉ α / β: Look for α > 180° or β < 180° as imbalance cues.
◉ OB/OS: Note labels, color (red/teal), and the vote count.
◉ Top3: Keep B1 (resistance) and S1 (support) on your radar.
Use these together to sketch scenarios and invalidation levels; never rely on a single signal in isolation.
7) Example Highlights (What the table conveys)
◉ Row 1 shows the indicator name, the analysis length P (default 55), and both TFs used for computation and display.
◉ B1 / S1 blocks summarize each side’s peak within the window, with Δ indicating buyer/seller dominance at that peak and inline price (B1 High / S1 Low) for actionable levels.
◉ Angle cells for each wing report the top/bottom line angles vs. the horizontal, reflecting the directional posture.
◉ Ranks B2/B3 and S2/S3 extend context beyond the top peak on each side.
◉ α / β cells quantify the orientation gap at B; changes reflect shifting buyer/seller influence on trend strength.
Together these visuals often reveal whether the "wings" resemble a strong, upward-tilted arm supported by buyer volume; always corroborate with your broader toolkit.
8) Practical Tips & Tuning
◉ Choose P by market structure. For daily charts, 34–89 bars often works well.
◉ Lower TF choice: Thin symbols → 5–15m; liquid symbols → 1m.
◉ Near‑zero angle: In noisy markets, consider 5–7° instead of 3°.
◉ OB/OS votes: Daily charts often work with 3–4 votes; lower TFs may prefer 4–5.
◉ Zone thickness: Tie B1/S1 zone thickness to ATR so it scales with volatility.
◉ Colors: Feel free to theme the primary/negative colors; keep Δ<0 mapped to the negative color for readability.
Combine with price action: Use this indicator alongside structure, trendlines, and other tools for stronger decisions.
Technical Notes
Pine Script v6.
◉ Up/Down split via TradingView/ta library call requestUpAndDownVolume(lowerTf).
◉ HUD‑first design; drawings for Wings/αβ/OBOS/Top3 align with the same sign/threshold logic used in the table.
Disclaimer: This indicator is provided solely for educational and analytical purposes. It does not constitute financial advice, nor is it a recommendation to buy or sell any security. Always conduct your own research and use multiple tools before making trading decisions.
Market Cap Landscape 3D
Hello, traders and creators! 👋
Market Cap Landscape 3D. This project is more than just a typical technical analysis tool; it's an exploration into what's possible when code meets artistry on the financial charts. It's a demonstration of how we can transcend flat, two-dimensional lines and step into a vibrant, three-dimensional world of data.
This project continues a journey that began with a previous 3D experiment, the T-Virus Sentiment, which you can explore here:
The Market Cap Landscape 3D builds on that foundation, visualizing market data, particularly crypto market caps, as a dynamic 3D mountain range. The entire landscape is procedurally generated and rendered in real-time using the powerful drawing capabilities of polyline.new() and line.new(), pushed to their creative limits.
This work is intended as a guide and a design example for all developers, born from the spirit of learning and a deep love for understanding the Pine Script™ language.
---
🧐 Core Concept: How It Works
The indicator synthesizes multiple layers of information into a single, cohesive 3D scene:
The Surface: The mountain range itself is a procedurally generated 3D mesh. Its peaks and valleys create a rich, textured landscape that serves as the canvas for our data.
Crypto Data Integration: The core feature is its ability to fetch market cap data for a list of cryptocurrencies you provide. It then sorts them in descending order and strategically places them onto the 3D surface.
The Summit: The highest point on the mountain is reserved for the asset with the #1 market cap in your list, visually represented by a flag and a custom emblem.
The Mountain Labels: The other assets are distributed across the mountainside, with their rank determining their general elevation. This creates an intuitive visual hierarchy.
The Leaderboard Pole: For clarity, a dedicated pole in the back-right corner provides a clean, ranked list of the symbols and their market caps, ensuring the data is always easy to read.
---
🧐 Example of adjusting the view
To evoke the feeling of flying over mountains
To evoke the feeling of looking at a mountain peak on a low plain
🧐 Example of predefined colors
---
🚀 How to Use
Getting started with the Market Cap Landscape 3D:
Add to Chart: Apply the "Market Cap Landscape 3D" indicator to your active chart.
Open Settings: Double-click anywhere on the 3D landscape or click the "Settings" icon next to the indicator's name.
Customize Your Crypto List: The most important setting is in the Crypto Data tab. In the "Symbols" text area, enter a comma-separated list of the crypto tickers you want to visualize (e.g., BTC,ETH,SOL,XRP). The indicator supports up to 40 unique symbols.
> Important Note: This indicator exclusively uses TradingView's `CRYPTOCAP` data source. To find valid symbols, use the main symbol search bar on your chart. Type `CRYPTOCAP:` (including the colon) and you will see a list of available options. For example, typing `CRYPTOCAP:BTC` will confirm that `BTC` is a valid ticker for the indicator's settings. Using symbols that do not exist in the `CRYPTOCAP` index will result in a script error.
Adjust Your View: Use the settings in the Camera & Projection tab to rotate ( Yaw ), tilt ( Pitch ), and scale the landscape until you find a view you love.
Explore & Customize: Play with the color palettes, flag design, and other settings to make the landscape truly your own!
---
⚙️ Settings & Customization
This indicator is highly customizable. Here’s a breakdown of what each setting does:
#### 🪙 Crypto Data
Symbols: Enter the crypto tickers you want to track, separated by commas. The script automatically handles duplicates and case-insensitivity.
Show Market Cap on Mountain: When checked, it displays the full market cap value next to the symbol on the mountain. When unchecked, it shows a cleaner look with just the symbol and a colored circle background.
#### 📷 Camera & Projection
Yaw (°): Rotates the camera view horizontally (side to side).
Pitch (°): Tilts the camera view vertically (up and down).
Scale X, Y, Z: Stretches or compresses the landscape in width, depth, and height, respectively. Fine-tune these to get the perfect perspective.
#### 🏞️ Grid / Surface
Grid X/Y resolution: Controls the detail level of the 3D mesh. Higher values create a smoother surface but may use more resources.
Fill surface strips: Toggles the beautiful color gradient on the surface.
Show wireframe lines: Toggles the visibility of the grid lines.
Show nodes (markers): Toggles the small dots at each grid intersection point.
#### 🏔️ Peaks / Mountains
Fill peaks volume: Draws vertical lines on high peaks, giving them a sense of volume.
Fill peaks surface: Draws a cross-hatch pattern on the surface of high peaks.
Peak height threshold: Defines the minimum height for a peak to receive the fill effect.
Peak fill color/density: Customizes the appearance of the fill lines.
#### 🚩 Flags (3D)
Show Flag on Summit: A master switch to show or hide the flag and emblem entirely.
Flag height, width, etc.: Provides full control over the dimensions and orientation of the flag on the highest peak.
#### 🎨 Color Palette
Base Gradient Palette: Choose from 13 stunning, pre-designed color themes for the landscape, from the classic SUNSET_WAVE to vibrant themes like NEON_DREAM and OCEANIC.
#### 🛡️ Emblem / Badge Controls
This section gives you granular control over every element of the custom emblem on the flag. Tweak rotation, offsets, and scale to design your unique logo.
---
👨💻 Developer's Corner: Modifying the Core Logic
If you're a developer and wish to customize the indicator's core data source, this section is for you. The script is designed to be modular, making it easy to change what data is being ranked and visualized.
The heart of the data retrieval and ranking logic is within the f_getSortedCryptoData() function. Here’s how you can modify it:
1. Changing the Data Source (from Market Cap to something else):
The current logic uses request.security("CRYPTOCAP:" + syms.get(i), ...) to fetch market capitalization data. To change this, you need to modify this line.
Example: Ranking by RSI (14) on the Daily timeframe.
First, you'll need a function to calculate RSI. Add this function to the script:
f_getRSI(symbol, timeframe, length) =>
request.security(symbol, timeframe, ta.rsi(close, length))
Then, inside f_getSortedCryptoData() , find the `for` loop that populates the `caps` array and replace the `request.security` call:
// OLD LINE:
// caps.set(i, request.security("CRYPTOCAP:" + syms.get(i), timeframe.period, close))
// NEW LINE for RSI:
// Note: You'll need to decide how to format the symbol name (e.g., "BINANCE:" + syms.get(i) + "USDT")
caps.set(i, f_getRSI("BINANCE:" + syms.get(i) + "USDT", "D", 14))
2. Changing the Data Formatting:
The ranking values are formatted for display using the f_fmtCap() function, which currently formats large numbers into "M" (millions), "B" (billions), etc.
If you change the data source to something like RSI, you'll want to change the formatting. You can modify f_fmtCap() or create a new formatting function.
Example: Formatting for RSI.
// Modify f_fmtCap or create f_fmtRSI
f_fmtRSI(float v) =>
str.tostring(v, "#.##") // Simply format to two decimal places
Remember to update the calls to this function in the main drawing loop where the labels are created (e.g., str.format("{0}: {1}", crypto.symbol, f_fmtCap(crypto.cap)) ).
By modifying these key functions ( f_getSortedCryptoData and f_fmtCap ), you can adapt the Market Cap Landscape 3D to visualize and rank almost any dataset you can imagine, from technical indicators to fundamental data.
---
We hope you enjoy using the Market Cap Landscape 3D as much as we enjoyed creating it. Happy charting! ✨