SCOTTGO - Day Trade Stock Quote V4
This Pine Script indicator, titled "SCOTTGO - Day Trade Stock Quote V4," is a comprehensive, customizable dashboard designed for active traders. It acts as a single, centralized reference point, displaying essential financial and technical data directly on your chart in a compact table overlay.
📊 Key Information Provided
The indicator is split into sections, aggregating various critical data points to provide a holistic picture of the stock's current state and momentum:
1. Ownership & Short Flow
This section provides fundamental context and short-interest data:
Market Cap, Shares Float, and Shares Outstanding: Key figures on the company's size and publicly tradable shares.
Short Volume %: Indicates the percentage of trading activity driven by short sellers.
Daily Change %: Shows the day's price movement relative to the previous close.
2. Price & Volatility
This tracks historical and immediate price levels:
Previous Close, Day High/Low: Key daily reference prices.
52-Week High/Low: Important long-term boundaries.
Earnings Date: A crucial fundamental date (currently displayed as a placeholder).
3. Momentum & Volume
These metrics are essential for understanding intraday buying and selling pressure:
Volume & Average Volume: The current trade volume compared to its historical average.
Relative Volume (RVOL): Measures how much volume is currently trading compared to the average rate for that time period (shown for both Daily and 5-Minute rates).
Volume Buzz (%): A percentage representation of how much current volume exceeds or falls below the average.
ADR % & ATR %: Measures of volatility.
RSI, U/D Ratio, and P/E Ratio: Momentum and valuation indicators.
4. Context
This provides background information on the security:
Includes the Symbol, Exchange, Industry, and Sector (note: some fields use placeholder data as this information is not always available via Pine Script).
⚙️ Customization
The dashboard is highly customizable via the indicator settings:
You can control the visibility of every single metric using the Section toggles.
You can change the position (Top Left, Top Right, etc.), size, and colors of the entire table.
In summary, this script is a powerful tool for day traders who need to monitor a large number of fundamental, technical, and volatility metrics simultaneously without cluttering the main chart area.
NYSE CME Market Session Clock
This indicator only works on short-term timeframes, since the countdown to the session open and the session close only updates when a new candle appears.
FRAN CRASH PLAY RULES
Purpose
It creates a fixed information panel in the top right corner of your chart that shows the "FRAN CRASH PLAY RULES" - a checklist of criteria for identifying potential crash play setups.
Key Features
Display Panel:
Shows 5 trading rules as bullet points
Permanently visible in the top right corner
Stays fixed while you scroll or zoom the chart
Current Rules Displayed:
DYNAMIC 3 TO 5 LEG RUN
NEAR VERTICAL ACCELERATION
FINAL BAR OF THE RUN UP MUST BE THE BIGGEST
3 FINGER SPREAD / DUAL SPACE
ATLEAST 2 OF 5 CRITERIA NEEDS TO HIT
Customization Options:
Editable Text - Change any of the 5 rules through the settings
Text Color - Adjust the color of the text
Text Size - Choose from tiny, small, normal, large, or huge
Background Color - Customize the panel background and transparency
Frame Color - Change the border color
Show/Hide Frame - Toggle the border on or off
Use Case
This indicator serves as a constant visual reminder of your trading strategy criteria, helping you stay disciplined and only take trades that meet your specific crash play requirements. It's essentially a "cheat sheet" that lives on your chart so you don't have to memorize or look elsewhere for your trading rules.
VB Finviz-style MTF Screener
📊 VB Multi-Timeframe Stock Screener (Daily + 4H + 1H)
A structured, high-signal stock screener that blends Daily fundamentals, 4H trend confirmation, and 1H entry timing to surface strong trading opportunities with institutional discipline.
🟦 1. Daily Screener — Core Stock Selection
All fundamental and structural filters run strictly on Daily data for maximum stability and signal quality.
Daily filters include:
📈 Average Volume & Relative Volume
💲 Minimum Price Threshold
📊 Beta vs SPY
🏢 Market Cap (Billions)
🔥 ATR Liquidity Filter
🧱 Float Requirements
📘 Price Above Daily SMA50
🚀 Minimum Gap-Up Condition
This layer acts like a Finviz-style engine, identifying stocks worth trading before momentum or timing is considered.
🟩 2. 4H Trend Confirmation — Momentum Check
Once a stock passes the Daily screen, the 4-hour timeframe validates trend strength:
🔼 Price above 4H MA
📈 MA pointing upward
This removes structurally good stocks that are not in a healthy trend.
🟧 3. 1H Entry Alignment — Timing Layer
The Hourly timeframe refines near-term timing:
🔼 Price above 1H MA
📉 Short-term upward movement detected
This step ensures the stock isn’t just good on paper—it’s moving now.
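For readers who think in code, here is a minimal Python sketch of how the three layers described above might combine into a single pass/fail decision. The field names and thresholds are illustrative assumptions only, not the script's actual inputs or data feeds.
```
# Illustrative three-layer screener logic. Field names and thresholds are
# assumptions for demonstration; the real script reads these values from
# TradingView security requests and user inputs.

def daily_pass(d, min_avg_vol=1_000_000, min_price=5.0, min_mcap_b=2.0):
    """Daily layer: structural and fundamental filters."""
    return (d["avg_volume"] >= min_avg_vol
            and d["close"] >= min_price
            and d["market_cap_b"] >= min_mcap_b
            and d["close"] > d["sma50"])

def h4_pass(h4):
    """4H layer: price above a rising moving average."""
    return h4["close"] > h4["ma"] and h4["ma"] > h4["ma_prev"]

def h1_pass(h1):
    """1H layer: near-term upward alignment."""
    return h1["close"] > h1["ma"] and h1["close"] > h1["close_prev"]

def screener_pass(daily, h4, h1):
    # A ticker qualifies only when all three layers agree.
    return daily_pass(daily) and h4_pass(h4) and h1_pass(h1)
```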
🧪 MTF Debug Table (Your Transparency Engine)
A live diagnostic table shows:
All Daily values
All 4H checks
All 1H checks
Exact PASS/FAIL per condition
Perfect for tuning thresholds or understanding why a ticker qualifies or fails.
🎯 Who This Screener Is For
Swing traders
Momentum/trend traders
Systematic and rules-based traders
Traders who want clean, multi-timeframe alignment
By combining Daily fundamentals, 4H trend structure, and 1H momentum, this screener filters the market down to the stocks that are strong, aligned, and ready.
Weighted KDE Mode
🙏🏻 The ‘ultimate’ typical value estimator, for the highest computational cost @ time complexity O(n^2). I am not afraid to say: this is the last resort BFG9000 you can ‘ever’ get to make dem market demons kneel before y’all
Quickguide
pls read it, you won’t find it anywhere else in open access
When to use:
If current market activity is so crazy || things on your charts are really so bad (contaminated data && (data has very heavy tails || very pronounced peak)), the only option left is to use the peak (mode) of Kernel Density Estimate , instead of median not even mentioning mean. So when WMA won’t help, when WPNR won’t help, you need this thing.
Setting it up:
Interval: choose what u need, you can use usual moving windows, but I also added yearly and session anchors alike in old VWAP (always prefer 24h instead of Session if your plan allows). Other options like cumulative window are also there.
Parameters: this script ain't no joke, it needs time to make calculations, so I added a setting to calculate only for the last N bars (when “starting at bar N” is put on 0). If it’s not zero it acts as a starting point after which the calculations happen (useful for backtesting). Other parameters keep em as they are, keep student5 kernel , turn off appropriate weights if u apply it to other than chart data, on other studies etc.
But instead of listening to me just experiment with parameters and see what they change, would take 5 mins max
Been always saying that VWAP is ish, not time-aware etc, volume info is incorporated in a lil bit wrong way… So I decided not just to fix VWAP (you can do it yourself in 5 mins), but instead to drop there the Ultimate xD typical value estimator that is ever possible to do. Time aware, volume / inferred volume aware, resistant to all kinds of BS. This is your shieldwall.
How it works:
You can easily do a weighted kernel density estimation, in our case including temporal and intensity information while accumulating densities. Here are some details worth mentioning about the thing:
Kernels are raw (not unit variance), that’s easier to work with later.
h_constants for each kernel were calculated ^^ given that ^^ with python mpmath module with high decimal precision.
In bandwidth calculation instead of using empirical standard deviation as a scaler, I use... ta.range(src, len) / math.sqrt(12)
...that takes data range and converts it to standard deviation, assuming data is uniformly distributed. That’s exactly what we need: a scaler that is coherent with the KDE, that has nothing to do with stdevs, as the kernels except for gaussian ones (that we don’t even need to use). More importantly, if u take multiple windows and see over time which distro they approach on the long term, that would be the uniform one (not the normal one as many think). Sometimes windows are multimodal, sometimes Laplace like etc, so in general all together they are uniform ish.
The one and only kernel you really need is Student t with v = 5 , for the use case I highlighted in the first part of the post for TV users. It’s as far as u can get until ish becomes crazy like undefined variance etc. It has the highest kurtosis = 9 of all distros, perfect for the real use case I mentioned. Otherwise, you don’t even need KDE 4 real, but still I included other senseful kernels for comparison or in case I am trippin there.
Btw, don’t believe all that hype about the Epanechnikov kernel, which in essence is made from a beta distribution with alpha = beta = 2; idk why folk call it by that weird name, it’s a beta2 kernel. Yes, on paper it really minimises AMISE (that’s how I calculated h constants for all dem kernels in the script), but for really crazy data (our proper use case), it doesn’t come even ‘close’ to the student5 kernel. Not much else to add.
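If you want to see the mechanics in plain code, here is a rough batch Python sketch of a weighted KDE mode estimate: a Student-t(v=5) kernel, a range/sqrt(12) bandwidth scaler, and time/volume weights. The weighting ramp and the bandwidth constant are simplified assumptions; the script's exact AMISE-derived h constants and anchoring logic are not reproduced here.
```
import numpy as np
from scipy.stats import t as student_t

def weighted_kde_mode(prices, volumes, nu=5, h_const=1.0, grid_size=400):
    """Peak (mode) of a time- and volume-weighted kernel density estimate."""
    prices = np.asarray(prices, float)
    volumes = np.asarray(volumes, float)
    n = len(prices)

    # Illustrative weights: newer bars and higher-volume bars count more.
    w = np.linspace(0.5, 1.0, n) * (volumes / volumes.sum())
    w /= w.sum()

    # Bandwidth scaler: data range converted to a uniform-distribution sigma.
    sigma_uniform = (prices.max() - prices.min()) / np.sqrt(12.0)
    h = h_const * sigma_uniform * n ** (-1.0 / 5.0)   # classic n^(-1/5) rate

    # Accumulate weighted Student-t(5) kernel densities on a price grid.
    grid = np.linspace(prices.min(), prices.max(), grid_size)
    z = (grid[None, :] - prices[:, None]) / h
    density = (w[:, None] * student_t.pdf(z, df=nu)).sum(axis=0) / h
    return grid[np.argmax(density)]   # the mode: where the density peaks
```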
Shout out to @RicardoSantos for inspiration, I saw your KDE script a long time ago brotha, finna got my hands on it.
∞
Hurst Exponent - Detrended Fluctuation Analysis
In stochastic processes, chaos theory and time series analysis, detrended fluctuation analysis (DFA) is a method for determining the statistical self-affinity of a signal. It is useful for analyzing time series that appear to be long-memory processes and noise.
█ OVERVIEW
We introduced the concept of the Hurst Exponent in our previous open indicator, Hurst Exponent (Simple), which measures market state from autocorrelation. Here, however, we apply a more advanced and accurate way to calculate the Hurst Exponent rather than a simple approximation, so we recommend using this version over our previous publication going forward. The method used here is called detrended fluctuation analysis. (For folks who are not interested in the math behind the calculation, feel free to skip to the "features" and "how to use" sections. However, it is recommended that you read it all to gain a better understanding of the mathematical reasoning.)
█ Detrend Fluctuation Analysis
Detrended Fluctuation Analysis was first introduced by Peng, C.K. (Original Paper) in order to measure the long-range power-law correlations in DNA sequences. DFA measures the scaling behavior of the second-moment fluctuations; the scaling exponent is a generalization of the Hurst exponent.
The traditional way of measuring Hurst exponent is the rescaled range method. However DFA provides the following benefits over the traditional rescaled range method (RS) method:
• Can be applied to non-stationary time series. While asset returns are generally stationary, DFA can measure Hurst more accurately in the instances where they are non-stationary.
• According to the asymptotic distributions of DFA and RS, the latter usually overestimates the Hurst exponent (even after the Anis-Lloyd correction), resulting in the expected value of RS Hurst being close to 0.54 instead of the 0.5 it should be. That makes it harder to judge autocorrelation against the expected value. With the DFA method, the expected value of the Hurst Exponent (HE) is significantly closer to 0.5, making that threshold much more useful.
• Lastly, DFA requires a lower sample size than the RS method. While the RS method generally requires thousands of observations to reduce the variance of HE, DFA only needs a sample size greater than a hundred to accomplish the same.
█ Calculation
DFA is a modified root-mean-squares (RMS) analysis of a random walk. In short, DFA computes the RMS error of linear fits over progressively larger bins (non-overlapped “boxes” of similar size) of an integrated time series.
Our signal time series is the log returns. First we subtract the mean from the log return to calculate the demeaned returns. Then, we calculate the cumulative sum of demeaned returns resulting in the cumulative sum being mean centered and we can use the DFA method on this. The subtraction of the mean eliminates the “global trend” of the signal. The advantage of applying scaling analysis to the signal profile instead of the signal, allows the original signal to be non-stationary when needed. (For example, this process converts an i.i.d. white noise process into a random walk.)
We slice the cumulative sum into windows of equal space and run linear regression on each window to measure the linear trend. After we conduct each linear regression. We detrend the series by deducting the linear regression line from the cumulative sum in each windows. The fluctuation is the difference between cumulative sum and regression.
We use different window sizes on the same cumulative sum series. The window size scales are log spaced, e.g. powers of 2: 2, 4, 8, 16... This is where the scale-free measurement comes in: how we measure the fractal nature and self-similarity of the time series, and how well the smaller scales represent the larger ones.
As the window size decreases, we use more regression lines to measure the trend. Therefore, the fit of the regression should be better, with smaller fluctuation. It allows one to zoom into the “picture” to see the details. The linear regressions are like rulers: if you use more rulers to measure the smaller-scale details, you get a more precise measurement.
The exponent we are measuring determines the relationship between the window size and the fit of the regression (the rate of change). The more complex the time series is, the more the fit will depend on decreasing window sizes (using more linear regression lines to measure). The less complex, or the more trending, the time series is, the less it will depend on them. The fit is calculated as the average of the root mean square errors (RMS) of the regressions from each window.
The root mean square error is calculated as the square root of the mean of the squared differences between the cumulative sum and the regression line. The following chart displays the average RMS for different window sizes. As the chart shows, values for smaller window sizes show more detail due to the finer-grained measurement.
The last step is to measure the power-law exponent: we measure the slope on a log-log plot. The x-axis is the log of the window size, the y-axis is the log of the average RMS. We run a linear regression through the plotted points, and the slope of that regression is the exponent. It's easy to see the relationship between RMS and window size on the chart: a larger RMS means a poorer regression fit. We know the RMS will increase (fit will decrease) as the window size increases (fewer regressions used to measure), so we focus on how fast the RMS increases as the window size increases.
If the slope is < 0.5, it means the rate of increase in RMS is small as the window size increases. The fit is much better when measured by a large number of linear regression lines, so the series is more complex (mean reversion, negative autocorrelation).
If the slope is > 0.5, it means the rate of increase in RMS is larger as the window size increases. Even when the window size is large, the larger trend can be measured well by a small number of regression lines, so the series has a trend with positive autocorrelation.
If the slope = 0.5, the series follows a random walk.
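The whole procedure fits in a few lines. Below is a plain Python sketch of the steps just described (demean, integrate, per-window linear detrend, RMS, log-log slope); the window scales and sample handling are simplified relative to the indicator's online implementation.
```
import numpy as np

def dfa_hurst(log_returns, scales=(4, 8, 16, 32, 64)):
    """DFA estimate of the Hurst exponent from a series of log returns."""
    # 1) Demean the returns and integrate them into a profile.
    profile = np.cumsum(np.asarray(log_returns, float) - np.mean(log_returns))
    n = len(profile)

    rms_per_scale = []
    for s in scales:
        n_windows = n // s
        if n_windows < 2:
            continue
        x = np.arange(s)
        rms = []
        for i in range(n_windows):
            seg = profile[i * s:(i + 1) * s]
            # 2) Linear fit per window, then detrend.
            coef = np.polyfit(x, seg, 1)
            resid = seg - np.polyval(coef, x)
            rms.append(np.sqrt(np.mean(resid ** 2)))
        rms_per_scale.append((s, np.mean(rms)))

    # 3) Slope of log(average RMS) vs log(window size) is the DFA exponent.
    sizes = np.log([s for s, _ in rms_per_scale])
    fluct = np.log([f for _, f in rms_per_scale])
    hurst, _ = np.polyfit(sizes, fluct, 1)
    return hurst
```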
█ FEATURES
• Sample Size is the lookback period for the calculation. Even though DFA requires a lower sample size than RS, a sample size greater than 50 is recommended for accurate measurement.
• When a larger sample size is used (for example a 1000-bar lookback), loading may be slower due to the longer calculation. Date Range is used to limit the number of historical bars calculated. When loading is too slow, change the date range from "all" to a number of weeks/days/hours to reduce loading time. (Credit to allanster)
• “show filter” option applies a smoothing moving average to smooth the exponent.
• Log scale is my work around for dynamic log space scaling. Traditionally the smallest log space for bars is power of 2. It requires at least 10 points for an accurate regression, resulting in the minimum lookback to be 1024. I made some changes to round the fractional log space into integer bars requiring the said log space to be less than 2.
• For a more accurate calculation, a larger "Base Scale" and "Max Scale" should be selected. However, when the sample size is small, a larger value can cause issues. The general rule is: the larger the sample size, the larger the "Base Scale" and "Max Scale" you can select. It is recommended to choose a larger scale as long as increasing the value doesn't cause issues.
The following chart shows the change in value using various scales. As shown, sometimes increasing the value makes the value itself messy and overshoot.
When using the lowest scale (4,2), the value seems stable. When we increase the scale to (8,2), the value is still alright. However, when we increase it to (8,4), it begins to look messy. And when we increase it to (16,4), it starts overshooting. Therefore, (8,2) seems to be optimal for our use.
█ How to Use
Similar to Hurst Exponent (Simple), 0.5 is the level for determining long-term memory.
• Under the efficient market hypothesis, the market follows a random walk and the Hurst exponent should be 0.5. When the Hurst Exponent is significantly different from 0.5, the market is inefficient.
• When Hurst Exponent is > 0.5. Positive Autocorrelation. Market is Trending. Positive returns tend to be followed by positive returns and vice versa.
• Hurst Exponent is < 0.5. Negative Autocorrelation. Market is Mean reverting. Positive returns tend to be followed by negative returns and vice versa.
However, we can't really tell if the Hurst exponent value is generated by random chance by only looking at the 0.5 level. Even if we measure a pure random walk, the Hurst Exponent will never be exactly 0.5, it will be close like 0.506 but not equal to 0.5. That's why we need a level to tell us if Hurst Exponent is significant.
So we also computed the 95% confidence interval according to Monte Carlo simulation. The confidence level adjusts itself by sample size. When Hurst Exponent is above the top or below the bottom confidence level, the value of Hurst exponent has statistical significance. The efficient market hypothesis is rejected and market has significant inefficiency.
The state of the market is painted in different colors, as the following chart shows. Users can also tell the state from the table displayed on the right.
An important point is that the Hurst value only represents the market state according to past measurements, which means it only tells you the market state now and in the past. If the Hurst Exponent on a sample size of 100 shows a significant trend, it means that according to the past 100 bars the market has been trending significantly. It doesn't mean the market will continue to trend; it's not forecasting the market state in the future.
However, this is also another way to use it. The market is not always random and it is not always inefficient, the state switches around from time to time. But there's one pattern, when the market stays inefficient for too long, the market participants see this and will try to take advantage of it. Therefore, the inefficiency will be traded away. That's why Hurst exponent won't stay in significant trend or mean reversion too long. When it's significant the market participants see that as well and the market adjusts itself back to normal.
The Hurst Exponent can be used as a mean reverting oscillator itself. In a liquid market, the value tends to return back inside the confidence interval after significant moves(In smaller markets, it could stay inefficient for a long time). So when Hurst Exponent shows significant values, the market has just entered significant trend or mean reversion state. However, when it stays outside of confidence interval for too long, it would suggest the market might be closer to the end of trend or mean reversion instead.
A larger sample size makes the Hurst Exponent statistics more reliable. Therefore, if the user wants to know whether long-term memory exists in general on the selected ticker, they can use a large sample size and maximize the log scale, e.g. a 1024 sample size with scale (16,4).
The following chart is Bitcoin on the Daily timeframe with a 1024 lookback. It suggests the Bitcoin market tends to have long-term memory in general. It generally shows a significant trend and was more inefficient in its early stage.
Fast Autocorrelation Estimator
█ Overview:
The Fast ACF and PACF Estimation indicator efficiently calculates the autocorrelation function (ACF) and partial autocorrelation function (PACF) using an online implementation. It helps traders identify patterns and relationships in financial time series data, enabling them to optimize their trading strategies and make better-informed decisions in the markets.
█ Concepts:
Autocorrelation, also known as serial correlation, is the correlation of a signal with a delayed copy of itself as a function of delay.
This indicator displays autocorrelation by lag number. The autocorrelation is not plotted over time on the x-axis; it is indexed by lag number, which ranges from 1 to 30. The calculations can be done with "Log Returns", "Absolute Log Returns" or "Original Source" (the price of the asset displayed on the chart).
When calculating autocorrelation, the resulting value will range from +1 to -1, in line with the traditional correlation statistic. An autocorrelation of +1 represents a perfect correlation (an increase seen in one time series leads to a proportionate increase in the other time series). An autocorrelation of -1, on the other hand, represents a perfect inverse correlation (an increase seen in one time series results in a proportionate decrease in the other time series). Lag number indicates which historical data point is autocorrelated. For example, if lag 3 shows significant autocorrelation, it means current data is influenced by the data three bars ago.
The Fast Online Estimation of ACF and PACF Indicator is a powerful tool for analyzing the linear relationship between a time series and its lagged values in TradingView. The indicator implements an online estimation of the Autocorrelation Function (ACF) and the Partial Autocorrelation Function (PACF) up to 30 lags, providing a real-time assessment of the underlying dependencies in your time series data. The Autocorrelation Function (ACF) measures the linear relationship between a time series and its lagged values, capturing both direct and indirect dependencies. The Partial Autocorrelation Function (PACF) isolates the direct dependency between the time series and a specific lag while removing the effect of any indirect dependencies.
This distinction is crucial in understanding the underlying relationships in time series data and making more informed decisions based on those relationships. For example, let's consider a time series with three variables: A, B, and C. Suppose that A has a direct relationship with B, B has a direct relationship with C, but A and C do not have a direct relationship. The ACF between A and C will capture the indirect relationship between them through B, while the PACF will show no significant relationship between A and C, as it accounts for the indirect dependency through B. This means that when the ACF is significant at lag 5, the detected dependency could be caused by an observation that came in between, and the PACF accounts for that. This indicator leverages the Fast Moments algorithm to efficiently calculate autocorrelations, making it ideal for analyzing large datasets or real-time data streams. By using the Fast Moments algorithm, the indicator can quickly update ACF and PACF values as new data points arrive, reducing the computational load and ensuring timely analysis. The PACF is derived from the ACF using the Durbin-Levinson algorithm, which helps in isolating the direct dependency between a time series and its lagged values, excluding the influence of other intermediate lags.
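A batch (offline) sketch of the two functions may help make the distinction concrete. The Python code below computes the sample ACF and then the PACF via the Durbin-Levinson recursion; it does not reproduce the Fast Moments online update the indicator uses.
```
import numpy as np

def acf(x, max_lag=30):
    """Sample autocorrelation for lags 0..max_lag.
    For white noise, a rough 95% significance band is +/- 1.96 / sqrt(len(x))."""
    x = np.asarray(x, float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom if k else 1.0
                     for k in range(max_lag + 1)])

def pacf_durbin_levinson(rho):
    """PACF phi_kk for k = 1..K from ACF values rho[0..K]."""
    K = len(rho) - 1
    phi = np.zeros((K + 1, K + 1))
    pacf_vals = np.zeros(K + 1)
    phi[1, 1] = pacf_vals[1] = rho[1]
    for k in range(2, K + 1):
        num = rho[k] - np.dot(phi[k - 1, 1:k], rho[1:k][::-1])
        den = 1.0 - np.dot(phi[k - 1, 1:k], rho[1:k])
        phi[k, k] = num / den
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
        pacf_vals[k] = phi[k, k]
    return pacf_vals[1:]
```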
█ How to Use the Indicator:
Interpreting autocorrelation values can provide valuable insights into the market behavior and potential trading strategies.
When applying autocorrelation to log returns, and a specific lag shows a high positive autocorrelation, it suggests that the time series tends to move in the same direction over that lag period. In this case, a trader might consider using a momentum-based strategy to capitalize on the continuation of the current trend. On the other hand, if a specific lag shows a high negative autocorrelation, it indicates that the time series tends to reverse its direction over that lag period. In this situation, a trader might consider using a mean-reversion strategy to take advantage of the expected reversal in the market.
ACF of log returns:
Absolute returns are often used as a measure of volatility. There is usually significant positive autocorrelation in absolute returns, and we will often see an exponential decay of autocorrelation in volatility. This means that current volatility depends on historical volatility and the effect slowly dies off as the lag increases. This effect shows the property of "volatility clustering": large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes.
ACF of absolute log returns:
Autocorrelation in price is always significantly positive and has an exponential decay. This predictably positive and relatively large value makes the autocorrelation of price (not returns) generally less useful.
ACF of price:
█ Significance:
The significance of a correlation metric tells us whether we should pay attention to it. In this script, we use 95% confidence interval bands that adjust to the size of the sample. If the observed correlation at a specific lag falls within the confidence interval, we consider it not significant and the data to be random or IID (identically and independently distributed). This means that we can't confidently say that the correlation reflects a real relationship, rather than just random chance. However, if the correlation is outside of the confidence interval, we can state with 95% confidence that there is an association between the lagged values. In other words, the correlation is likely to reflect a meaningful relationship between the variables, rather than a coincidence. A significant difference in either ACF or PACF can provide insights into the underlying structure of the time series data and suggest potential strategies for traders. By understanding these complex patterns, traders can better tailor their strategies to capitalize on the observed dependencies in the data, which can lead to improved decision-making in the financial markets.
Significant ACF but not significant PACF: This might indicate the presence of a moving average (MA) component in the time series. A moving average component is a pattern where the current value of the time series is influenced by a weighted average of past values. In this case, the ACF would show significant correlations over several lags, while the PACF would show significance only at the first few lags and then quickly decay.
Significant PACF but not significant ACF: This might indicate the presence of an autoregressive (AR) component in the time series. An autoregressive component is a pattern where the current value of the time series is influenced by a linear combination of past values at specific lags.
Often we find both significant ACF and PACF, in that scenario simply and AR or MA model might not be sufficient and a more complex model such as ARMA or ARIMA can be used.
█ Features:
Source selection: User can choose either 'Log Returns', 'Absolute Returns' or 'Original Source' for the input data.
Autocorrelation Selection: User can choose either 'ACF' or 'PACF' for the plot selection.
Plot Selection: User can choose either 'Autocorrelarrogram' or 'Historical Autocorrelation' for plotting the historical autocorrelation at a specified lag.
Max Lag: User can select the maximum number of lags to plot.
Precision: User can set the number of decimal points to display in the plot.
Expected Move Bands
Expected move is the amount that an asset is predicted to increase or decrease from its current price, based on the current levels of volatility.
In this model, we assume asset price follows a log-normal distribution and the log return follows a normal distribution.
Note: Normal distribution is just an assumption, it's not the real distribution of return
Settings:
"Estimation Period Selection" is for selecting the period we want to construct the prediction interval.
For "Current Bar", the interval is calculated based on the data of the previous bar close. Therefore changes in the current price will have little effect on the range. What current bar means is that the estimated range is for when this bar close. E.g., If the Timeframe on 4 hours and 1 hour has passed, the interval is for how much time this bar has left, in this case, 3 hours.
For "Future Bars", the interval is calculated based on the current close. Therefore the range will be very much affected by the change in the current price. If the current price moves up, the range will also move up, vice versa. Future Bars is estimating the range for the period at least one bar ahead.
There are also other source selections based on high low.
The Time setting is used when "Future Bars" is chosen for the period. Its value means how many bars ahead of the current bar the range is estimated for. When time = 1, the interval is constructed for 1 bar ahead. E.g., if the timeframe is 4 hours, then it's estimating the next 4-hour range no matter how much time has passed in the current bar.
Note: It's probably better to use "probability cone" for visual presentation when time > 1
Volatility Models :
Sample SD: traditional sample standard deviation, most commonly used, use (n-1) period to adjust the bias
Parkinson: Uses High/ Low to estimate volatility, assumes continuous no gap, zero mean no drift, 5 times more efficient than Close to Close
Garman Klass: Uses OHLC volatility, zero drift, no jumps, about 7 times more efficient
Yangzhang Garman Klass Extension: Added jump calculation in Garman Klass, has the same value as Garman Klass on markets with no gaps.
about 8 x efficient
Rogers: Uses OHLC, assumes non-zero mean volatility, handles drift, does not handle jumps; about 8x more efficient
EWMA: Exponentially Weighted Volatility. Weights recent volatility more; the more reactive volatility is better at accounting for volatility autocorrelation and clustering.
YangZhang: Uses OHLC, combines Rogers and Garmand Klass, handles both drift and jump, 14 times efficient, alpha is the constant to weight rogers volatility to minimize variance.
Median absolute deviation: It's a more direct way of measuring volatility. It measures volatility without using Standard deviation. The MAD used here is adjusted to be an unbiased estimator.
Volatility Period is the sample size for variance estimation. A longer period makes the estimated range more stable and less reactive to recent price, and the distribution is more statistically significant with a larger sample size. A shorter period makes the range more responsive to recent price, which might be better during high-volatility clusters.
Standard deviations:
Standard Deviation One shows the estimated range where the closing price will be about 68% of the time.
Standard Deviation two shows the estimated range where the closing price will be about 95% of the time.
Standard Deviation three shows the estimated range where the closing price will be about 99.7% of the time.
Note: All these probabilities are based on the normal distribution assumption for returns. It's the estimated probability, not the actual probability.
Manually Entered Standard Deviation shows the range of any entered standard deviation. The probability of that range will be presented on the panel.
People usually assume the mean of returns to be zero. To be more accurate, we can account for the drift in price by calculating the geometric mean of returns. Drift matters in the long run, so short lookback periods are not recommended. Assuming a zero mean is recommended when time is not greater than 1.
When we are estimating the future range for time > 1, we typically assume constant volatility and the returns to be independent and identically distributed. We scale the volatility in term of time to get future range. However, when there's autocorrelation in returns( when returns are not independent), the assumption fails to take account of this effect. Volatility scaled with autocorrelation is required when returns are not iid. We use an AR(1) model to scale the first-order autocorrelation to adjust the effect. Returns typically don't have significant autocorrelation. Adjustment for autocorrelation is not usually needed. A long length is recommended in Autocorrelation calculation.
Note: The significance of autocorrelation can be checked on an ACF indicator.
ACF
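To make the band construction concrete, here is a small Python sketch of log-normal expected-move bands with sqrt(time) volatility scaling, an optional drift term, and the standard AR(1) variance adjustment for autocorrelated returns. Parameter names are illustrative, and the per-bar volatility input is assumed to come from one of the models listed above.
```
import numpy as np

def expected_move_bands(close, sigma_per_bar, t=1, n_sd=1.0,
                        drift_per_bar=0.0, rho=0.0):
    """Log-normal bands n_sd standard deviations wide, t bars ahead.
    rho is the first-order autocorrelation of returns; rho = 0 recovers
    the usual iid sqrt(t) scaling."""
    lags = np.arange(1, t)
    var_t = sigma_per_bar ** 2 * (t + 2 * np.sum((t - lags) * rho ** lags))
    sigma_t = np.sqrt(var_t)

    upper = close * np.exp(drift_per_bar * t + n_sd * sigma_t)
    lower = close * np.exp(drift_per_bar * t - n_sd * sigma_t)
    return lower, upper

# Example: 1-bar, 1-SD band with 2% per-bar volatility around a close of 100.
print(expected_move_bands(100.0, 0.02, t=1, n_sd=1.0))
```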
The multi-timeframe option lets people use a higher-timeframe expected move on a lower timeframe. Only timeframes higher than the current one should be used for the input; an error warning will appear when the input TF is lower. The input format is multiplier * time unit, e.g.: 1D
Unit: M for months, W for Weeks, D for Days, integers with no unit for minutes (E.g. 240 = 240 minutes). S for Seconds.
The smoothing option applies a filter to smooth out the range. The filter used here is John Ehlers' supersmoother, an advanced smoothing technique that removes aliasing noise. Its effect is similar to a simple moving average with half the lookback length, but smoother and with less lag.
Note: after smoothing, the range no longer represents the stated probability.
Panel positions can be adjusted in the settings.
X position adjusts the horizontal position of the panel. Higher X moves panel to the right and lower X moves panel to the left.
Y position adjusts the vertical position of the panel. Higher Y moves panel up and lower Y moves panel down.
Step line display changes the style of the bands from line to step line. Step line is recommended because it gets rid of the directional bias of slope of expected move when displaying the bands.
Warnings:
People should not blindly trust the probability. They should be aware of the risk involved in using the normal distribution assumption. Real returns have skewness and high kurtosis. While the skewness is not very significant, the high kurtosis should be noted. Real returns have much fatter tails than the normal distribution, which also makes the peak higher. This property makes the tail ranges (such as beyond 2 SD) highly underestimate the actual range and the body ranges (such as 1 SD) slightly overestimate it. Ranges beyond 2 SD should not be trusted; beware of extreme events in the tails.
Different volatility models provide different properties. If people are interested in the accuracy and fit of the expected move, they can try the Expected Move Occurrence indicator. (The results also demonstrate the previous point about the drawback of using the normal distribution assumption.)
Expected move Occurrence Test
The prediction interval is only for the closing price, not wicks. It only estimates the probability of the price closing at this level, not in between. E.g., if the 1 SD range is 100 - 200, the price can go to 80 or 230 intrabar, but if the bar closes within 100 - 200 in the end, it's still considered a 68% one-standard-deviation move.
Omega Ratio
The Omega Ratio is a risk-return performance measure of an investment asset, portfolio, or strategy. It is defined as the probability-weighted ratio of gains versus losses for some threshold return target. The ratio is an alternative to the widely used Sharpe ratio and is based on information the Sharpe ratio discards.
█ OVERVIEW
As we have mentioned many times, stock market returns are usually not normally distributed. Therefore the models that assume a normal distribution of returns may provide us with misleading information. The Omega Ratio improves upon the common normality assumption among other risk-return ratios by taking into account the distribution as a whole.
█ CONCEPTS
Two distributions with the same mean and variance would, according to the most commonly used Sharpe Ratio, suggest that the underlying assets offer the same risk-return ratio. But as we have mentioned in our Moments indicator, variance and standard deviation are not a sufficient measure of risk in the stock market, since other shape features of a distribution like skewness and excess kurtosis come into play. The Omega Ratio tackles this problem by employing all four moments of the distribution, thereby taking into account the differences in the shape features of the distributions. Another important feature of the Omega Ratio is that it does not require any estimation but is calculated directly from the observed data. This gives it an advantage over standard statistical estimators that require the estimation of parameters and therefore carry sampling uncertainty in their calculations.
█ WAYS TO USE THIS INDICATOR
Omega calculates a probability-adjusted ratio of gains to losses, relative to the Minimum Acceptable Return (MAR). This means that at a given MAR, using the simple rule of preferring more to less, an asset with a higher value of Omega is preferable to one with a lower value. The indicator displays the values of Omega at increasing levels of MAR, creating the so-called Omega Curve. Knowing this, one can compare Omega Curves of different assets and decide which is preferable given the MAR of your strategy. The indicator plots two Omega Curves: one for the on-chart symbol and another for an off-chart symbol that you can use for comparison.
When comparing curves of different assets make sure their trading days are the same in order to ensure the same period for the Omega calculations. Value interpretation: Omega<1 will indicate that the risk outweighs the reward and therefore there are more excess negative returns than positive. Omega>1 will indicate that the reward outweighs the risk and that there are more excess positive returns than negative. Omega=1 will indicate that the minimum acceptable return equals the mean return of an asset. And that the probability of gain is equal to the probability of loss.
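For reference, the empirical Omega ratio is simply the mean excess gain above the MAR divided by the mean excess loss below it, computed directly from the observed returns. A minimal Python sketch, with an illustrative MAR grid for building the Omega curve:
```
import numpy as np

def omega_ratio(returns, mar=0.0):
    """Empirical Omega: E[(R - MAR)+] / E[(MAR - R)+]."""
    returns = np.asarray(returns, float)
    gains = np.clip(returns - mar, 0, None).mean()    # expected excess gain
    losses = np.clip(mar - returns, 0, None).mean()   # expected excess loss
    return np.inf if losses == 0 else gains / losses

def omega_curve(returns, mar_grid=np.linspace(-0.01, 0.01, 21)):
    """Omega evaluated at increasing MAR levels (the Omega curve)."""
    return [(mar, omega_ratio(returns, mar)) for mar in mar_grid]
```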
█ FEATURES
• "Low-Risk security" lets you select the security that you want to use as a benchmark for Omega calculations.
• "Omega Period" is the size of the sample that is used for the calculations.
• “Increments” is the number of Minimal Acceptable Return levels the calculation is carried out on.
• “Other Symbol” lets you select the source of the second curve.
• “Color Settings” you can set the color for each curve.
Linear Moments
█ OVERVIEW
The Linear Moments indicator, also known as L-moments, is a statistical tool used to estimate the properties of a probability distribution. It is an alternative to conventional moments and is more robust to outliers and extreme values.
█ CONCEPTS
█ Four moments of a distribution
We have mentioned the concept of the Moments of a distribution in one of our previous posts. The method of Linear Moments allows us to calculate more robust measures that describe the shape features of a distribution and are analogous to those of conventional moments. L-moments therefore provide estimates of the location, scale, skewness, and kurtosis of a probability distribution.
The first L-moment, λ₁, is equivalent to the sample mean and represents the location of the distribution. The second L-moment, λ₂, is a measure of the dispersion of the distribution, similar to the sample standard deviation. The third and fourth L-moments, λ₃ and λ₄, respectively, are the measures of skewness and kurtosis of the distribution. Higher order L-moments can also be calculated to provide more detailed information about the shape of the distribution.
One advantage of using L-moments over conventional moments is that they are less affected by outliers and extreme values. This is because L-moments are based on order statistics, which are more resistant to the influence of outliers. By contrast, conventional moments are based on the deviations of each data point from the sample mean, and outliers can have a disproportionate effect on these deviations, leading to skewed or biased estimates of the distribution parameters.
█ Order Statistics
L-moments are statistical measures that are based on linear combinations of order statistics, which are the sorted values in a dataset. This approach makes L-moments more resistant to the influence of outliers and extreme values. However, the computation of L-moments requires sorting the order statistics, which can lead to a higher computational complexity.
To address this issue, we have implemented an Online Sorting Algorithm that efficiently obtains the sorted dataset of order statistics, reducing the time complexity of the indicator. The Online Sorting Algorithm is an efficient method for sorting large datasets that can be updated incrementally, making it well-suited for use in trading applications where data is often streamed in real-time. By using this algorithm to compute L-moments, we can obtain robust estimates of distribution parameters while minimizing the computational resources required.
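As a concrete reference, here is a batch Python sketch of the first four sample L-moments computed from order statistics via the standard probability-weighted-moment estimators. The indicator maintains the sorted window incrementally with its online sorting algorithm; this sketch simply sorts the whole sample.
```
import numpy as np

def l_moments(x):
    """First four sample L-moments: location, L-scale, L-skewness, L-kurtosis."""
    x = np.sort(np.asarray(x, float))   # order statistics
    n = len(x)
    i = np.arange(1, n + 1)

    # Unbiased probability weighted moments b0..b3.
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))

    l1 = b0                               # lambda_1: location (mean)
    l2 = 2 * b1 - b0                      # lambda_2: L-scale
    l3 = 6 * b2 - 6 * b1 + b0             # lambda_3
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0 # lambda_4
    return l1, l2, l3 / l2, l4 / l2       # ratios tau3, tau4 are reported
```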
█ Bias and efficiency of an estimator
One of the key advantages of L-moments over conventional moments is that they approach their asymptotic normal closer than conventional moments. This means that as the sample size increases, the L-moments provide more accurate estimates of the distribution parameters.
Asymptotic normality is a statistical property that describes the behavior of an estimator as the sample size increases. As the sample size gets larger, the distribution of the estimator approaches a normal distribution, which is a bell-shaped curve. The mean and variance of the estimator are also related to the true mean and variance of the population, and these relationships become more accurate as the sample size increases.
The concept of asymptotic normality is important because it allows us to make inferences about the population based on the properties of the sample. If an estimator is asymptotically normal, we can use the properties of the normal distribution to calculate the probability of observing a particular value of the estimator, given the sample size and other relevant parameters.
In the case of L-moments, the fact that they approach their asymptotic normal more closely than conventional moments means that they provide more accurate estimates of the distribution parameters as the sample size increases. This is especially useful in situations where the sample size is small, such as when working with financial data. By using L-moments to estimate the properties of a distribution, traders can make more informed decisions about their investments and manage their risk more effectively.
Below we can see the empirical distributions of the Variance and L-scale estimators. We ran 10000 simulations with a sample size of 100. Here we can clearly see how the L-moment estimator approaches the normal distribution more closely and how such an estimator can be more representative of the underlying population.
█ WAYS TO USE THIS INDICATOR
The Linear Moments indicator can be used to estimate the L-moments of a dataset and provide insights into the underlying probability distribution. By analyzing the L-moments, traders can make inferences about the shape of the distribution, such as whether it is symmetric or skewed, and the degree of its spread and peakedness. This information can be useful in predicting future market movements and developing trading strategies.
One can also compare the L-moments of the dataset at hand with the L-moments of certain commonly used probability distributions. Finance is especially known for the use of certain fat tailed distributions such as Laplace or Student-t. We have built in the theoretical values of L-kurtosis for certain common distributions. In this way a person can compare our observed L-kurtosis with the one of the selected theoretical distribution.
█ FEATURES
Source Settings
Source - Select the source you wish the indicator to calculate on
Source Selection - Select whether you wish to calculate on the source value or its log return
Moments Settings
Moments Selection - Select the L-moment you wish to be displayed
Lookback - Determine the sample size you wish the L-moments to be calculated with
Theoretical Distribution - This setting is only for investigating the kurtosis of our dataset. One can compare our observed kurtosis with the kurtosis of a selected theoretical distribution.
Historical Volatility Estimators
Historical volatility is a statistical measure of the dispersion of returns for a given security or market index over a given period. This indicator provides different historical volatility model estimators with percentile gradient coloring and a volatility stats panel.
█ OVERVIEW
There are multiple ways to estimate historical volatility other than the traditional close-to-close estimator. This indicator provides different range-based volatility estimators that take the high, low, and open into account for the volatility calculation, as well as volatility estimators that use other statistical measurements instead of standard deviation. The gradient coloring and stats panel provide an overview of how high or low the current volatility is compared to its historical values.
█ CONCEPTS
We have mentioned the concepts of historical volatility in our previous indicators: Historical Volatility, Historical Volatility Rank, and Historical Volatility Percentile. You can check the definitions in those scripts. The basic calculation is just the sample standard deviation of log returns scaled by the square root of time. The main focus of this script is the difference between volatility models.
Close-to-Close HV Estimator: Close-to-Close is the traditional historical volatility calculation. It uses the sample standard deviation. Note: the TradingView built-in historical volatility value is a bit off because it uses the population standard deviation instead of the sample standard deviation. N - 1 should be used here to get rid of the sampling bias.
Pros:
• Close-to-Close HV estimators are the most commonly used estimators in finance. The calculation is straightforward and easy to understand. When people reference historical volatility, most of the time they are talking about the close to close estimator.
Cons:
• The Close-to-Close estimator only calculates volatility based on the closing price. It does not take into account intraday volatility such as the high and low, and it also does not take into account the jump when open and close prices are not the same.
• Close-to-Close weights past volatility equally during the lookback period, while there are other ways to weight the historical data.
• Close-to-Close is calculated based on standard deviation so it is vulnerable to returns that are not normally distributed and have fat tails. Mean and Median absolute deviation makes the historical volatility more stable with extreme values.
Parkinson Hv Estimator:
• Parkinson was one of the first to come up with improvements to the historical volatility calculation.
• Parkinson suggests that using the High and Low of each bar can represent volatility better, as it takes into account intraday volatility. Parkinson HV is therefore also known as Parkinson High-Low HV.
• It is about 5.2 times more efficient than the Close-to-Close estimator, but it does not take jumps and drift into account. Therefore, it underestimates volatility. Note: By dividing the Parkinson volatility by the Close-to-Close volatility you can get a result similar to the Variance Ratio Test. It is called the Parkinson number and can be used to test whether the market follows a random walk. (It is mentioned in Nassim Taleb's Dynamic Hedging book, but it seems like he made a mistake and wrote the ratio wrongly.)
Garman-Klass Estimator:
• Garman-Klass expanded on Parkinson's estimator. Where Parkinson's estimator uses only the high and low, Garman-Klass's method uses open, close, high, and low in a minimum-variance combination.
• The estimator is about 7.4 times more efficient than the traditional estimator. But like Parkinson HV, it ignores jumps and drift. Therefore, it underestimates volatility.
Rogers-Satchell Estimator:
• Rogers and Satchell found some drawbacks in Garman-Klass’s estimator. The Garman-Klass assumes price as Brownian motion with zero drift.
• The Rogers Satchell Estimator calculates based on open, close, high, and low. And it can also handle drift in the financial series.
• Rogers-Satchell HV is more efficient than Garman-Klass HV when there’s drift in the data. However, it is a little bit less efficient when drift is zero. The estimator doesn’t handle jumps, therefore it still underestimates volatility.
Garman-Klass Yang-Zhang extension:
• Yang Zhang expanded Garman Klass HV so that it can handle jumps. However, unlike the Rogers-Satchell estimator, this estimator cannot handle drift. It is about 8 times more efficient than the traditional estimator.
• The Garman-Klass Yang-Zhang extension HV has the same value as Garman-Klass when there’s no gap in the data such as in cryptocurrencies.
Yang-Zhang Estimator:
• The Yang Zhang Estimator combines Garman-Klass and Rogers-Satchell Estimator so that it is based on Open, close, high, and low and it can also handle non-zero drift. It also expands the calculation so that the estimator can also handle overnight jumps in the data.
• This estimator is the most powerful estimator among the range-based estimators. It has the minimum variance error among them, and it is 14 times more efficient than the close-to-close estimator. When the overnight and daily volatility are correlated, it might underestimate volatility a little.
• 1.34 is the optimal value for alpha according to their paper. The alpha constant in the calculation can be adjusted in the settings. Note: There are already some volatility estimators coded on TradingView. Some of them are right, some of them are wrong. But for Yang Zhang Estimator I have not seen a correct version on TV.
EWMA Estimator:
• EWMA stands for Exponentially Weighted Moving Average. The Close-to-Close and all other estimators here are all equally weighted.
• EWMA weighs more recent volatility more and older volatility less. The benefit of this is that volatility is usually autocorrelated. The autocorrelation has close to exponential decay as you can see using an Autocorrelation Function indicator on absolute or squared returns. The autocorrelation causes volatility clustering which values the recent volatility more. Therefore, exponentially weighted volatility can suit the property of volatility well.
• RiskMetrics uses 0.94 for lambda which equals 30 lookback period. In this indicator Lambda is coded to adjust with the lookback. It's also easy for EWMA to forecast one period volatility ahead.
• However, EWMA volatility is not often used because there are better options to weight volatility such as ARCH and GARCH.
Adjusted Mean Absolute Deviation Estimator:
• This estimator does not use standard deviation to calculate volatility. It uses the distance log return is from its moving average as volatility.
• It’s a simple way to calculate volatility and it’s effective. The difference is the estimator does not have to square the log returns to get the volatility. The paper suggests this estimator has more predictive power.
• The mean absolute deviation here is adjusted to get rid of the bias. It scales the value so that it can be comparable to the other historical volatility estimators.
• In Nassim Taleb’s paper, he mentions people sometimes confuse MAD with standard deviation for volatility measurements. And he suggests people use mean absolute deviation instead of standard deviation when we talk about volatility.
Adjusted Median Absolute Deviation Estimator:
• This is another estimator that does not use standard deviation to measure volatility.
• Using the median gives a more robust estimator when there are extreme values in the returns. It works better in fat-tailed distribution.
• The median absolute deviation is adjusted by maximum likelihood estimation so that its value is scaled to be comparable to other volatility estimators.
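For readers who want the formulas in one place, here is a compact Python sketch of several of the estimators above, annualized with sqrt(252). The formulas follow the standard definitions from the papers referenced; the input arrays and trading-day count are assumptions, and the EWMA and MAD variants are omitted for brevity.
```
import numpy as np

def close_to_close(c, annual=252):
    r = np.diff(np.log(c))
    return np.std(r, ddof=1) * np.sqrt(annual)       # sample (n - 1) std

def parkinson(h, l, annual=252):
    hl = np.log(h / l)
    return np.sqrt(np.mean(hl ** 2) / (4 * np.log(2))) * np.sqrt(annual)

def garman_klass(o, h, l, c, annual=252):
    hl = np.log(h / l)
    co = np.log(c / o)
    return np.sqrt(np.mean(0.5 * hl ** 2 - (2 * np.log(2) - 1) * co ** 2)) * np.sqrt(annual)

def rogers_satchell(o, h, l, c, annual=252):
    rs = np.log(h / c) * np.log(h / o) + np.log(l / c) * np.log(l / o)
    return np.sqrt(np.mean(rs)) * np.sqrt(annual)

def yang_zhang(o, h, l, c, annual=252, alpha=1.34):
    n = len(c) - 1
    overnight = np.log(o[1:] / c[:-1])                # close-to-open jump
    open_close = np.log(c[1:] / o[1:])                # open-to-close move
    sigma_o2 = np.var(overnight, ddof=1)
    sigma_c2 = np.var(open_close, ddof=1)
    rs = (np.log(h[1:] / c[1:]) * np.log(h[1:] / o[1:])
          + np.log(l[1:] / c[1:]) * np.log(l[1:] / o[1:]))
    sigma_rs2 = np.mean(rs)
    k = (alpha - 1) / (alpha + (n + 1) / (n - 1))     # alpha = 1.34 per the paper
    return np.sqrt(sigma_o2 + k * sigma_c2 + (1 - k) * sigma_rs2) * np.sqrt(annual)
```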
█ FEATURES
• You can select the volatility estimator models in the Volatility Model input
• Historical Volatility is annualized. You can type in the number of trading days in a year in the Annual input based on the asset you are trading.
• Alpha is used to adjust the Yang Zhang volatility estimator value.
• Percentile Length is used to Adjust Percentile coloring lookbacks.
• The gradient coloring will be based on the percentile value (0- 100). The higher the percentile value, the warmer the color will be, which indicates high volatility. The lower the percentile value, the colder the color will be, which indicates low volatility.
• When percentile coloring is off, it won’t show the gradient color.
• You can also use invert color to give high volatility a cold color and low volatility a warm color. Volatility has some mean-reversion properties. Therefore, when volatility is very low and the color is close to aqua, you would expect it to expand soon. When volatility is very high and close to red, you would expect it to contract and cool down.
• When the background signal is on, it gives a signal when HVP is very low, warning that there might be a volatility expansion soon.
• You can choose the plot style, such as lines, columns, areas in the plotstyle input.
• When the show information panel is on, a small panel will display on the right.
• The information panel displays the historical volatility model name, the 50th percentile of HV, and the HV percentile. The 50th percentile of HV is the median of HV; you can compare it with the current HV value to see how far above or below it is, to get an idea of how high or low HV is. The HV Percentile value ranges from 0 to 100. It tells us the percentage of periods over the entire lookback during which historical volatility traded below the current level. The higher the HVP, the higher HV is compared to its historical data. The gradient color is also based on this value.
█ HOW TO USE
If you haven’t used the HVP indicator, we suggest you use the HVP indicator first. This indicator is more like historical volatility with HVP coloring. So it displays HVP values in the color and panel, but it’s not range bound like the HVP and it displays HV values. The user can have a quick understanding of how high or low the current volatility is compared to its historical value based on the gradient color. They can also time the market better based on volatility mean reversion. High volatility means volatility contracts soon (Move about to End, Market will cool down), low volatility means volatility expansion soon (Market About to Move).
█ FINAL THOUGHTS
HV vs ATR
The volatility estimator concepts above trace the history of historical volatility estimation research in quantitative finance: a timeline of range-based estimators from Parkinson volatility to Yang-Zhang volatility. We hope these descriptions show that even though ATR is the most popular volatility indicator in technical analysis, it's not the best estimator. Almost no one in quant finance uses ATR to measure volatility (otherwise these papers would be about how to improve ATR measurements instead of HV). As you can see, there are much more advanced volatility estimators that also take open, close, high, and low into account. HV values are based on log returns with some calculation adjustments, and HV can also be scaled in terms of price just like ATR. For profit-taking ranges, ATR is not based on probabilities, whereas historical volatility can be used in a probability distribution function to calculate the probability of ranges, as in the Expected Move indicator.
Other Estimators
There are also other, more advanced historical volatility estimators. High-frequency sampled HV uses intraday data to calculate volatility; we will publish a high-frequency volatility estimator in the future. There are also ARCH and GARCH models that take volatility clustering into account. GARCH models require maximum likelihood estimation, which needs a solver to find the best weights for each component; this is currently not possible on TV due to the computational power required. All the other indicators claiming to be GARCH are wrong.
SYMBOL NOTES - UNCORRELATED TRADING GROUPS
Write symbol-specific notes that only appear on that chart. Organized into 6 uncorrelated groups for safe multi-pair trading.
📝 SYMBOL NOTES - UNCORRELATED TRADING GROUPS
This indicator solves two problems every serious trader faces:
1. Keeping Track of Your Analysis
Write notes for each trading pair and they'll only appear when you view that specific chart. No more forgetting your key levels, trade ideas, or analysis!
2. Avoiding Correlated Risk
The symbols are organized into 6 groups where ALL pairs within each group are completely UNCORRELATED. Trade any combination from the same group without worrying about double exposure.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 THE PROBLEM THIS SOLVES
Have you ever:
- Opened XAUUSD and EURUSD at the same time, then Fed news hit and BOTH positions went against you?
- Traded GBPUSD and GBPJPY together, then BOE announcement stopped out both trades?
- Forgotten what levels you were watching on a pair?
This indicator helps you avoid these costly mistakes!
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📁 THE 6 UNCORRELATED GROUPS
Each group contains pairs that share NO common currency:
```
GRUP 1: XAUUSD • EURGBP • NZDJPY • AUDCHF • NATGAS
GRUP 2: EURUSD • GBPJPY • AUDNZD • CADCHF
GRUP 3: GBPUSD • EURJPY • AUDCAD • NZDCHF
GRUP 4: USDJPY • EURCHF • GBPAUD • NZDCAD
GRUP 5: USDCAD • EURAUD • GBPCHF
GRUP 6: NAS100 • DAX40 • UK100 • JPN225
```
**Example - GRUP 1:**
- XAUUSD → Uses USD + Gold
- EURGBP → Uses EUR + GBP
- NZDJPY → Uses NZD + JPY
- AUDCHF → Uses AUD + CHF
- NATGAS → Commodity (independent)
= 7 different currencies, ZERO overlap!
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**✅ HOW TO USE**
1. Add indicator to any chart
2. Open Settings (gear icon ⚙️)
3. Find your symbol's group and input field
4. Write your note (support levels, trade ideas, etc.)
5. Switch charts - your note appears only on that symbol!
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚙️ SETTINGS
- Note Position: Choose where the note box appears (6 positions)
- Text Size: Tiny, Small, Normal, or Large
- Show Group Name: Display which correlation group
- Show Symbol Name: Display current symbol
- Colors: Customize background, text, group label, and border colors
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💡 TRADING STRATEGY TIPS
Safe Multi-Pair Trading:
1. Pick ONE group for the day
2. Look for setups on ANY symbol in that group
3. Open positions freely - they won't correlate!
4. Even if major news hits, only ONE position is affected
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔧 COMPATIBLE WITH
- All major forex brokers
- Prop firms (FTMO, Alpha Capital, etc.)
- Works on any timeframe
- Futures symbols supported (MGC, M6E, etc.)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Visible Range
Overview
This is a precision tool designed for quantitative traders and engineers who need exact control over their chart's visual scope. Unlike standard time calculations, which fail in markets with trading breaks (such as A-shares, futures, or stocks), this indicator uses a loop-back mechanism to count the actual number of visible bars, ensuring your indicators (e.g., MA60, MA200) have sufficient sample data.
Why use this? If you use multi-timeframe layouts (e.g., Daily/Hourly/15s), it is critical to know exactly how much data is visible.
The Problem: In markets like the Chinese A-Share market (T+1, 4-hour trading day), calculating Time Range / Timeframe results in massive errors because it includes closed market hours (lunch breaks, nights, weekends).
The Solution: This script iterates through the visible range to count the true bar_index, providing 100% accurate data density metrics.
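The loop-back idea can be sketched roughly as shown below. The sketch counts backwards from the latest bar until it leaves the visible range, so it assumes the chart is scrolled to the most recent bar; the 5000-bar safety limit mirrors the setting described under Settings, but this is a simplified illustration rather than the published script.
```
//@version=5
indicator("Visible Bar Counter (sketch)", overlay = true)
max_bars_back(time, 5000)                       // safety for deep history lookups
limit = input.int(5000, "Max Bar Count")

var label info = na
if barstate.islast
    int visibleBars = 0
    // walk back from the latest bar; stop when the bar opened before the left edge of the view
    for i = 0 to limit - 1
        if na(time[i]) or time[i] < chart.left_visible_bar_time
            break
        visibleBars += 1
    if not na(info)
        label.delete(info)
    info := label.new(bar_index, high, "Visible bars: " + str.tostring(visibleBars), style = label.style_label_down)
```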
Key Features
True Bar Counting: Uses a for loop to count actual candles, ignoring market breaks. Perfect for non-24/7 markets.
Integer Precision: Displays time ranges (Days, Hours, Mins, Secs) in clean integers. No messy decimals.
Compact UI: Displays information in a single line (e.g., View: 30 Days (120 Bars)), defaulting to the Top Right corner to save screen space.
Fully Customizable: Adjustable position, text size, and colors to fit any dark/light theme.
Performance Optimized: Includes max_bars_back limits to prevent browser lag on deep history lookups.
Settings
Position: Default Top Right (can be moved to any corner).
Max Bar Count: Default 5000 (Safety limit for loop calculation).
ES-VIX Expected Daily Move
This indicator calculates the expected daily price movement for ES futures based on current volatility levels as measured by the VIX (CBOE Volatility Index).
Formula:
Expected Daily Move = (ES Price × VIX Price) / √252 / 100
The calculation converts the annualized VIX volatility into an expected daily move by dividing by the square root of 252 (the approximate number of trading days per year).
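As a quick sanity check of the formula: with ES at 5000 and VIX at 16, the expected daily move is 5000 × 0.16 / √252 ≈ 50 points, or about 1%. A minimal Pine sketch of the calculation is below; the CME_MINI:ES1! and CBOE:VIX ticker IDs are assumed examples, not necessarily the symbols the published script requests.
```
//@version=5
indicator("ES-VIX Expected Daily Move (sketch)")
esSym  = input.symbol("CME_MINI:ES1!", "ES symbol")    // assumed ticker ID
vixSym = input.symbol("CBOE:VIX", "VIX symbol")        // assumed ticker ID
es  = request.security(esSym,  timeframe.period, close)
vix = request.security(vixSym, timeframe.period, close)
// annualized implied volatility converted to a one-day expected move in points
expMovePts = es * vix / 100 / math.sqrt(252)
expMovePct = expMovePts / es * 100
plot(expMovePts, "Expected daily move (pts)", style = plot.style_columns)
```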
Features:
Real-time calculation using current ES futures price and VIX level
Histogram visualization in a separate pane for easy trend analysis
Information table displaying:
Current ES futures price
Current VIX level
Expected daily move in points
Expected daily move as a percentage
Bitcoin vs. S&P 500 Performance Comparison
**Full Description:**
**Overview**
This indicator provides an intuitive visual comparison of Bitcoin's performance versus the S&P 500 by shading the chart background based on relative strength over a rolling lookback period.
**How It Works**
- Calculates percentage returns for both Bitcoin and the S&P 500 (ES1! futures) over a specified lookback period (default: 75 bars)
- Compares the returns and shades the background accordingly:
- **Green/Teal Background**: Bitcoin is outperforming the S&P 500
- **Red/Maroon Background**: S&P 500 is outperforming Bitcoin
- Displays a real-time performance difference label showing the exact percentage spread
**Key Features**
✓ Rolling performance comparison using customizable lookback period (default 75 bars)
✓ Clean visual representation with adjustable transparency
✓ Works on any timeframe (optimized for daily charts)
✓ Real-time performance differential display
✓ Uses ES1! (E-mini S&P 500 continuous futures) for accurate comparison
✓ Fine-tuning adjustment factor for precise calibration
**Use Cases**
- Identify market regimes where Bitcoin outperforms or underperforms traditional equities
- Spot trend changes in relative performance
- Assess risk-on vs risk-off periods
- Compare Bitcoin's momentum against broader market conditions
- Time entries/exits based on relative strength shifts
**Settings**
- **S&P 500 Symbol**: Default ES1! (can be changed to SPX or other indices)
- **Lookback Period**: Number of bars for performance calculation (default: 75)
- **Adjustment Factor**: Fine-tune calibration to match specific data feeds
- **Transparency Controls**: Customize background shading intensity
- **Show/Hide Label**: Toggle performance difference display
**Best Practices**
- Use on daily timeframe for swing trading and position analysis
- Combine with other momentum indicators for confirmation
- Watch for color transitions as potential regime change signals
- Consider using multiple timeframes for comprehensive analysis
**Technical Details**
The indicator calculates rolling percentage returns using the formula ((Current Price / Price N bars ago) - 1) × 100, where N is the lookback period, then compares Bitcoin's return to the S&P 500's return over the same period. The background color dynamically updates based on which asset is showing stronger performance.
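A minimal sketch of that comparison logic is shown below; the 75-bar lookback and CME_MINI:ES1! ticker follow the defaults described above, the chart symbol is assumed to be a Bitcoin pair, and the adjustment factor and label are omitted for brevity.
```
//@version=5
indicator("BTC vs S&P 500 relative strength (sketch)", overlay = true)
spSym    = input.symbol("CME_MINI:ES1!", "S&P 500 Symbol")
lookback = input.int(75, "Lookback Period", minval = 1)
sp = request.security(spSym, timeframe.period, close)
// rolling % return of the chart symbol (assumed to be Bitcoin) and of the S&P 500 proxy
btcRet = (close / close[lookback] - 1) * 100
spRet  = (sp / sp[lookback] - 1) * 100
outperf = btcRet - spRet
// teal when Bitcoin leads, maroon when the S&P 500 leads
bgcolor(outperf > 0 ? color.new(color.teal, 85) : color.new(color.maroon, 85))
```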
SIDD Table Volume multiframe (Modified)
🚀 SIDD Volume Table – The Most Powerful Multi-Timeframe Volume Dashboard
Designed by Siddhartha Mukherjee (SIDD)
Free for the community.
Get an unfair edge with the cleanest, fastest, and most accurate multi-timeframe volume analyzer available on TradingView. This tool reveals where buyers and sellers are truly active across multiple timeframes—helping you confirm trends, avoid traps, and enter with confidence.
🔥 Why Traders Love This Indicator
✅ 1. Multi-Timeframe Volume Domination
Instantly view Buy% / Sell% / Total Volume for:
1m • 5m • 15m • 1H • 4H • 1D • 1W
Choose any combination you want!
✅ 2. Advanced Buy/Sell Volume Logic
Not simple volume…
This tool breaks it into:
Buy Volume% (green dominance)
Sell Volume% (red dominance)
Using candle structure (H-L-C), giving far more accurate pressure detection (see the sketch below).
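The exact split formula isn't published in this description, but a common H-L-C approximation looks like the sketch below: the closer the close sits to the high of the bar, the larger the share of volume treated as buying. Treat it as one possible formulation, not the script's own.
```
//@version=5
indicator("Buy/Sell volume split (sketch)")
// one common close-location approximation; the actual script's formula may differ
buyPct  = high == low ? 50.0 : (close - low) / (high - low) * 100
sellPct = 100.0 - buyPct
// multiplying these percentages by `volume` gives the buy/sell volume split
plot(buyPct,  "Buy %",  color = color.green)
plot(sellPct, "Sell %", color = color.red)
```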
✅ 3. Realtime Candle Countdown
Never guess when a candle will close again.
Get:
Seconds (1m)
MM:SS (5m/15m/1H)
DD:HH:MM:SS (4H, 1D, 1W)
Perfect for scalpers, swing traders, and index traders.
✅ 4. Beautiful & Customizable Dashboard
Choose position anywhere on screen
Auto size or choose Tiny → Huge
Color-coded Bias (Green Buyers, Red Sellers)
Clean layout built for modern charts
Your chart stays clean while your data stays powerful.
💡 What This Helps You Identify
Where buyers are gaining strength
Where sellers are dominating
Multi-timeframe alignment (the key to big moves)
Real reversal pressure
Volume divergence across timeframes
Trend confirmation before breakouts
Perfect for:
NIFTY / BANKNIFTY / Stocks / Crypto / FX / Commodities
🧠 Who Should Use This?
Intraday traders
Swing traders
Options traders
Futures traders
Crypto scalpers
Professional volume analysts
If volume matters to you → this indicator becomes a must-have.
🛠 Built with Precision
Non-repainting
Multi-TF aligned
Fast + lightweight arrays
Uses BTC/ETH feed to stabilize ticks
Zero chart clutter
❤️ Free for Everyone
This tool is released 100% free to help the community trade with clarity and confidence.
Leave a like ⭐, comment 💬, or follow if you want more such institutional-grade tools.
⚠️ Disclaimer
This is for educational/analytical use only.
Not financial advice. Trade at your own risk.
ATR Risk Manager v5.2 [Auto-Extrapolate]
If you have ever had trouble knowing how many contracts to use on a particular timeframe to keep your risk within acceptable levels, this indicator should help. You simply define your accepted risk based on ATR and on a percentage of your drawdown, and the indicator tells you how many contracts to use. If the risk is too high, it will also tell you not to trade. This is only for the futures NQ, MNQ, ES, MES, GC, MGC, CL, MCL, MYM, and M2K.
Omega Correlation [OmegaTools]
Omega Correlation (Ω CRR) is a cross-asset analytics tool designed to quantify both the strength of the relationship between two instruments and the tendency of one to move ahead of the other. It is intended for traders who work with indices, futures, FX, commodities, equities and ETFs, and who require something more robust than a simple linear correlation line.
The indicator operates in two distinct modes, selected via the “Show” parameter: Correlation and Anticipation. In Correlation mode, the script focuses on how tightly the current chart and the chosen second asset move together. In Anticipation mode, it shifts to a lead–lag perspective and estimates whether the second asset tends to behave as a leader or a follower relative to the symbol on the chart.
In both modes, the core inputs are the chart symbol and a user-selected second symbol. Internally, both assets are transformed into normalized log-returns: the script computes logarithmic returns, removes short-term mean and scales by realized volatility, then clips extreme values. This normalisation allows the tool to compare behaviour across assets with different price levels and volatility profiles.
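A minimal sketch of that normalisation step is shown below, with an assumed 20-bar window and a ±3 sigma clip; the script's own lengths and clip level are not disclosed here.
```
//@version=5
indicator("Normalized log-returns (sketch)")
len = input.int(20, "Normalization window")    // assumed window
r  = math.log(close / close[1])                // logarithmic return
sd = ta.stdev(r, len)                          // realized volatility of returns
z  = sd > 0 ? (r - ta.sma(r, len)) / sd : 0.0  // remove short-term mean, scale by vol
zClipped = math.max(-3.0, math.min(3.0, z))    // clip extreme values (assumed ±3 sigma)
plot(zClipped, "Normalized return")
```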
In Correlation mode, the indicator computes a composite correlation score that typically ranges between –1 and +1. Values near +1 indicate strong and persistent positive co-movement, values near zero indicate an unstable or weak link, and values near –1 indicate a stable anti-correlation regime. The composite score is constructed from three components.
The first component is a normalized return co-movement measure. After transforming both instruments into normalized returns, the script evaluates how similar those returns are bar by bar. When the two assets consistently deliver returns of similar sign and magnitude, this component is high and positive. When they frequently diverge or move in opposite directions, it becomes negative. This captures short-term co-movement in a volatility-adjusted way.
The second component focuses on high–low swing alignment. Rather than looking only at closes, it examines the direction of changes in highs and lows for each bar. If both instruments are printing higher highs and higher lows together, or lower highs and lower lows together, the swing structure is considered aligned. Persistent alignment contributes positively to the correlation score, while repeated mismatches between the swing directions reduce it. This helps differentiate between superficial price noise and structural similarity in trend behaviour.
The third component is a classical Pearson correlation on closing prices, computed over a longer lookback. This serves as a stabilising backbone that summarises general co-movement over a broader window. By combining normalized return co-movement, swing alignment and standard price correlation with calibrated weights, the Correlation mode provides a richer view than a single linear measure, capturing both short-term dynamic interaction and longer-term structural linkage.
In Anticipation mode, Omega Correlation estimates whether the second asset tends to lead or lag the current chart. The output is again a continuous score over roughly the same range. Positive values suggest that the second asset is acting more as a leader, with its past moves carrying informative value for subsequent moves of the chart symbol. Negative values indicate that the second asset behaves more like a laggard or follower. Values near zero suggest that no stable lead–lag structure can be identified.
The anticipation score is built from four elements inspired by quantitative lead–lag and price discovery analysis. The first element is a residual lead correlation, conceptually similar to Granger-style logic. The script first measures how much of the chart symbol’s normalized returns can be explained by its own lagged values. It then removes that component and studies the correlation between the residuals and lagged returns of the second asset. If the second asset’s past returns consistently explain what the chart symbol does beyond its own autoregressive behaviour, this residual correlation becomes significantly positive.
The second element is an asymmetric lead–lag structure measure. It compares the strength of relationships in both directions across multiple lags: the correlation of the current symbol with lagged versions of the second asset (candidate leader) versus the correlation of lagged values of the current symbol with the present values of the second asset. If the forward direction (second asset leading the first) is systematically stronger than the backward direction, the structure is skewed toward genuine leadership of the second asset.
The third element is a relative price discovery score, constructed by building a dynamic hedge ratio between the two prices and defining a spread. The indicator looks at how changes in each asset contribute to correcting deviations in this spread over time. When the chart symbol tends to do most of the adjustment while the second asset remains relatively stable, it suggests that the second asset is taking a greater role in determining the equilibrium price and the chart symbol is adjusting to it. The difference in adjustment intensity between the two instruments is summarised into a single score.
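The hedge-ratio-and-spread construction can be sketched as a rolling OLS beta, as below. The second symbol, the 100-bar window, and the beta estimator are illustrative assumptions; the script's internal estimator and adjustment-intensity measure are not shown here.
```
//@version=5
indicator("Dynamic hedge ratio and spread (sketch)")
sym2 = input.symbol("CME_MINI:NQ1!", "Second asset")    // assumed example symbol
len  = input.int(100, "Estimation window")              // assumed window
b = request.security(sym2, timeframe.period, close)
// rolling regression beta of the chart symbol on the second asset
beta   = ta.correlation(close, b, len) * ta.stdev(close, len) / ta.stdev(b, len)
spread = close - beta * b                               // deviation from the implied equilibrium
plot(spread, "Spread")
```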
The fourth element is a breakout follow-through causality component. The script scans for breakout events on the second asset, where its price breaks out of a recent high or low range while the chart symbol has not yet done so. It then evaluates whether the chart symbol subsequently confirms the breakout direction, remains neutral, or moves against it. Events where the second asset breaks and the first asset later follows in the same direction add positive contribution, while failed or contrarian follow-through reduce this component. The contribution is also lightly modulated by the strength of the breakout, via the underlying normalized return.
The four elements of the Anticipation mode are combined into a single leading correlation score, providing a compact and interpretable measure of whether the second asset currently behaves as an effective early signal for the symbol you trade.
To aid interpretation, Omega Correlation builds dynamic bands around the active series (correlation or anticipation). It estimates a long-term central tendency and a typical deviation around it, plotting upper and lower bands that highlight unusually high or low values relative to recent history. These bands can be used to distinguish routine fluctuations from genuinely extreme regimes.
The script also computes percentile-based levels for the correlation series and uses them to track two special price levels on the main chart: lost correlation levels and gained correlation levels. When the correlation drops below an upper percentile threshold, the current price is stored as a lost correlation level and plotted as a horizontal line. When the correlation rises above a lower percentile threshold, the current price is stored as a gained correlation level. These levels mark zones where a historically strong relationship between the two markets broke down or re-emerged, and can be used to frame divergence, convergence and spread opportunities.
An information panel summarises, in real time, whether the second asset is behaving more as a leading, lagging or independent instrument according to the anticipation score, and suggests whether the current environment is more conducive to de-alignment, re-alignment or classic spread behaviour based on the correlation regime. This makes the tool directly interpretable even for users who are not familiar with all the underlying statistical details.
Typical applications for Omega Correlation include intermarket analysis (for example, index vs index, commodity vs related equity sector, FX vs bonds), dynamic hedge sizing, regime detection for algorithmic strategies, and the identification of lead–lag structures where a macro driver or benchmark can be monitored as an early signal for the instrument actually traded. The indicator can be applied across intraday and higher timeframes, with the understanding that the strength and nature of relationships will differ across horizons.
Omega Correlation is designed as an advanced analytical framework, not as a standalone trading system. Correlation and lead–lag relationships are statistical in nature and can change abruptly, especially around macro events, regime shifts or liquidity shocks. A positive anticipation reading does not guarantee that the second asset will always move first, and a high correlation regime can break without warning. All outputs of this tool should be combined with independent analysis, sound risk management and, when appropriate, backtesting or forward testing on the user’s specific instruments and timeframes.
The intention behind Omega Correlation is to bring techniques inspired by quantitative research, such as normalized return analysis, residual correlation, asymmetric lead–lag structure, price discovery logic and breakout event studies, into an accessible TradingView indicator. It is intended for traders who want a structured, professional way to understand how markets interact and to incorporate that information into their discretionary or systematic decision-making processes.
Roshan Dash Ultimate Trading Dashboard
Shows the key simple moving averages (10, 20, 50, 200) on the daily and higher timeframes, and the EMAs (10, 20, 50, 200) plus VWAP on lower timeframes. Displays key information such as market cap, sector, LoD %, ATR, ATR %, and the distance from the 50 SMA measured in ATR, all of which help determine whether or not to take a trade.
Multi-Ticker Anchored Candles
Multi-Ticker Anchored Candles (MTAC) is a simple tool for overlaying up to 3 tickers onto the same chart. This is achieved by interpreting each symbol's OHLC data as percentages, then plotting their candle points relative to the main chart's open. This allows for a simple comparison of tickers to track performance or locate relationships between them.
> Background
The concept of multi-ticker analysis is not new; this type of analysis can be extremely helpful for gauging the overall market and its sentiment. By analyzing more than one ticker at a time, relationships can often be observed between tickers as time progresses.
While seeing multiple charts on top of each other sounds like a good idea... each ticker has its own price scale, with some trading at only cents while others trade at thousands of dollars.
Directly overlaying these charts is not possible without modification to their sources.
By using a fixed point in time (Period Open) and percentage performance relative to that point for each ticker, we are able to directly overlay symbols regardless of their price scale differences.
The entire process used to make this indicator can be summed up into 2 keywords, "Scaling & Anchoring".
> Scaling
First, we start by determining a frame of reference for our analysis. The indicator uses timeframe inputs to determine sessions which are used, by default this is set to 1 day.
With this in place, we then determine our point of reference for scaling. While this could be any point in time, the most sensible for our application is the daily (or session) open.
Each symbol shares time, therefore, we can take a price point from a specified time (Opening Price) and use it to sync our analysis over each period.
Over the day, we track the percentage performance of each ticker's OHLC values relative to its daily open (% change from open).
Since each ticker's data is now tracked based on its opening price, all data is now using the same scale.
The scale is simply "% change from open".
> Anchoring
Now that we have our scaled data, we need to put it onto the chart.
Since each data point is relative to its daily open (the anchor point), relatively speaking, all daily opens are now equal to each other.
By adding the scaled ticker data to the main chart's daily open, each resulting series is properly scaled to the main chart's data based on percentages.
Congratulations, we have now accurately scaled multiple tickers onto one chart.
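A minimal sketch of the scaling-and-anchoring step for a single overlay symbol follows; the ticker is an assumed example, and the full candle drawing, labels, and session lines of the published script are simplified down to one plotted line.
```
//@version=5
indicator("Anchored overlay (sketch)", overlay = true)
sym = input.symbol("NASDAQ:MSFT", "Overlay symbol")    // assumed example symbol
// percentage change of the overlay symbol from its own daily (session) open
symClose = request.security(sym, timeframe.period, close)
symOpen  = request.security(sym, "D", open, lookahead = barmerge.lookahead_on)
pct = symClose / symOpen - 1
// anchoring: project that percentage move onto the main chart's daily open
chartOpen = request.security(syminfo.tickerid, "D", open, lookahead = barmerge.lookahead_on)
plot(chartOpen * (1 + pct), "Overlay close (anchored)", color = color.orange, linewidth = 2)
```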
> Display
The indicator shows each requested ticker as different colored candlesticks plotted on top of the main chart.
Each ticker has an associated label in front of the current bar, each component of this label can be toggled on or off to allow only the desired information to be displayed.
To retain relevance, at the start of each session, a "Session Break" line is drawn, as well as the opening price for the session. These can also be toggled.
Note: The opening price is the opening price for ALL tickers; when a ticker crosses the open on the main chart, it is crossing its own opening price as well.
> Examples
In the chart below, we can see NYSE:MCD, NASDAQ:WEN, and NASDAQ:JACK overlaid on a NASDAQ:SBUX chart.
From this, we can see NASDAQ:JACK was the top gainer on the day, although it also fell roughly 4% from its peak near lunchtime. Unlike the top gainer, the other 3 tickers ended their day near their daily highs.
In the explanations above, the daily timeframe is used since it is the default; however, the analysis is not constrained to only days. The anchoring period can be set to any timeframe period.
In the chart below, you can observe the Daily, Weekly, and Monthly anchored charts side-by-side.
This can be used on all tickers, timeframes, and markets. While a typical application may be comparing relevant assets... the script is not limited.
Below we have a chart tracking COMEX:GCV2026, FX:EURUSD, and COINBASE:DOGEUSD on the AMEX:SPY chart.
While these tickers are not typically compared side-by-side, here it is simply a display of the capabilities of the script.
Enjoy!
Advanced Trading System - Volume Profile + BB + RSI + FVG + Fib
Advanced Multi-Indicator Trading System with Volume Profile, Bollinger Bands, RSI, FVG & Fibonacci
Overview
This comprehensive trading indicator combines five powerful technical analysis tools into one unified system, designed to identify high-probability trading opportunities with precision entry and exit signals. The indicator integrates Volume Profile analysis, Bollinger Bands, RSI momentum, Fair Value Gaps (FVG), and Fibonacci retracement levels to provide traders with a complete market analysis framework.
Key Features
1. Volume Profile & Point of Control (POC)
Automatically calculates the Point of Control - the price level with the highest trading volume
Identifies Value Area High (VAH) and Value Area Low (VAL)
Updates dynamically based on customizable lookback periods
Helps identify key support and resistance zones where institutional traders are active (a naive calculation sketch follows below)
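A naive sketch of a POC calculation: bin the last N bars into price rows, credit each bar's full volume to every row its range touches, and take the heaviest row. The published script's allocation method and its value-area logic may well differ, so treat this purely as an illustration of the concept.
```
//@version=5
indicator("Naive POC (sketch)", overlay = true)
len  = input.int(100, "Profile length (bars)")
rows = input.int(24, "Rows")
hi = ta.highest(high, len)
lo = ta.lowest(low, len)
float poc = na
if barstate.islast and hi > lo
    step = (hi - lo) / rows
    float bestVol = 0.0
    for r = 0 to rows - 1
        binLo = lo + step * r
        binHi = binLo + step
        float v = 0.0
        // a bar contributes its full volume to every bin its range overlaps (naive allocation)
        for i = 0 to len - 1
            if high[i] > binLo and low[i] < binHi
                v += volume[i]
        if v > bestVol
            bestVol := v
            poc := binLo + step / 2
plot(poc, "POC", style = plot.style_circles, color = color.orange, linewidth = 3)
```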
2. Bollinger Bands Integration
Standard 20-period Bollinger Bands with customizable multiplier
Identifies overbought and oversold conditions
Measures market volatility through band width
Signals generated when price approaches extreme levels
3. RSI Momentum Analysis
14-period Relative Strength Index with visual background coloring
Overbought (70) and oversold (30) threshold alerts
Integrated into buy/sell signal logic for confirmation
Real-time momentum tracking in info dashboard
4. Fair Value Gap (FVG) Detection
Automatically identifies bullish and bearish fair value gaps
Visual representation with colored boxes
Highlights imbalance zones where price may return
Used for high-probability entry confirmation
5. Fibonacci Retracement Levels
Auto-calculated based on recent swing high/low
Key levels: 23.6%, 38.2%, 50%, 61.8%, 78.6%
Perfect for identifying profit-taking zones
Dynamic lines that update with market movement (a small calculation sketch follows below)
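The auto-calculated levels follow the standard construction sketched here; the 50-bar swing window matches the stated default, and only three of the listed levels are drawn for brevity.
```
//@version=5
indicator("Auto Fibonacci levels (sketch)", overlay = true)
lookback = input.int(50, "Swing lookback")
hi  = ta.highest(high, lookback)
lo  = ta.lowest(low, lookback)
rng = hi - lo
// retracement levels measured down from the swing high
plot(hi - rng * 0.382, "38.2%", color = color.orange)
plot(hi - rng * 0.500, "50%",   color = color.gray)
plot(hi - rng * 0.618, "61.8%", color = color.teal)
```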
6. Smart Signal Generation
The indicator generates BUY and SELL signals based on multi-condition confluence (a Pine sketch of this logic follows the two lists below):
BUY Signal Requirements:
Price near lower Bollinger Band
RSI in oversold territory (< 30)
High volume confirmation (optional)
Bullish FVG or POC alignment
SELL Signal Requirements:
Price near upper Bollinger Band
RSI in overbought territory (> 70)
High volume confirmation (optional)
Bearish FVG or POC alignment
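A simplified Pine sketch of that confluence logic is below. The FVG/POC alignment filter is left out, and the thresholds simply follow the defaults listed in this description, so treat it as an outline rather than the actual signal engine.
```
//@version=5
indicator("Confluence signal (sketch)", overlay = true)
bbLen  = input.int(20, "BB Length")
bbMult = input.float(2.0, "BB Multiplier")
rsiLen = input.int(14, "RSI Length")
volLen = input.int(20, "Volume MA Length")
useVol = input.bool(true, "Volume Filter")

basis = ta.sma(close, bbLen)
dev   = bbMult * ta.stdev(close, bbLen)
rsi   = ta.rsi(close, rsiLen)
volOk = not useVol or volume > ta.sma(volume, volLen)
// FVG / POC alignment is omitted here; the full script adds it as an extra condition
buySig  = close <= basis - dev and rsi < 30 and volOk
sellSig = close >= basis + dev and rsi > 70 and volOk
plotshape(buySig,  title = "Buy",  style = shape.triangleup,   location = location.belowbar, color = color.green)
plotshape(sellSig, title = "Sell", style = shape.triangledown, location = location.abovebar, color = color.red)
```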
7. Automated Take Profit Levels
Three dynamic profit targets: 1%, 2%, and 3%
Automatically calculated from entry price
Visual markers on chart
Individual alerts for each level
8. Comprehensive Alert System
The indicator includes 10+ alert types:
Buy signal alerts
Sell signal alerts
Take profit level alerts (TP1, TP2, TP3)
Fibonacci level cross alerts
RSI overbought/oversold alerts
Bullish/Bearish FVG detection alerts
9. Real-Time Info Dashboard
Live display of all key metrics
Color-coded for quick visual analysis
Shows RSI, BB Width, Volume ratio, POC, Fib levels
Current signal status (BUY/SELL/WAIT)
How to Use
Setup
Add the indicator to your chart
Adjust parameters based on your trading style and timeframe
Set up alerts by clicking "Create Alert" and selecting desired conditions
Recommended Timeframes
Scalping: 5m - 15m
Day Trading: 15m - 1H
Swing Trading: 4H - Daily
Parameter Customization
Volume Profile Settings:
Length: 100 (adjust for more/less historical data)
Rows: 24 (granularity of volume distribution)
Bollinger Bands:
Length: 20 (standard period)
Multiplier: 2.0 (adjust for tighter/wider bands)
RSI Settings:
Length: 14 (standard momentum period)
Overbought: 70
Oversold: 30
Fibonacci:
Lookback: 50 (swing high/low detection period)
Signal Settings:
Volume Filter: Enable/disable volume confirmation
Volume MA Length: 20 (for volume comparison)
Trading Strategy Examples
Strategy 1: Trend Reversal
Wait for BUY signal at lower Bollinger Band
Confirm with bullish FVG or POC support
Enter position
Take partial profits at Fib 38.2% and 50%
Exit remaining position at TP3 or SELL signal
Strategy 2: Breakout Confirmation
Monitor price approaching POC level
Wait for volume spike
Enter on signal confirmation with FVG alignment
Use Fibonacci levels for scaling out
Strategy 3: Range Trading
Identify POC as range midpoint
Buy at lower BB with oversold RSI
Sell at upper BB with overbought RSI
Use FVG zones for additional confirmation
Best Practices
✅ Do:
Use multiple timeframe analysis
Combine with price action analysis
Set stop losses below/above recent swing points
Scale out at Fibonacci levels
Wait for volume confirmation on signals
❌ Don't:
Trade every signal blindly
Ignore overall market context
Use on extremely low timeframes without testing
Neglect risk management
Trade during low liquidity periods
Risk Management
Always use stop losses
Risk no more than 1-2% per trade
Consider market conditions and volatility
Scale position sizes based on signal strength
Use the volume filter for additional confirmation
Technical Specifications
Pine Script Version: 6
Overlay: Yes (displays on main chart)
Max Boxes: 500 (for FVG visualization)
Max Lines: 500 (for Fibonacci levels)
Alerts: 10+ customizable conditions
Performance Notes
This indicator works best in:
Trending markets with clear momentum
High-volume trading sessions
Assets with good liquidity
When multiple signals align
Less effective in:
Extremely choppy/sideways markets
Low-volume periods
During major news events (high volatility)
Updates & Support
This indicator is actively maintained and updated. Future enhancements may include:
Additional volume profile features
More sophisticated FVG tracking
Enhanced alert customization
Backtesting integration
Disclaimer
This indicator is for educational and informational purposes only. It does not constitute financial advice. Past performance does not guarantee future results. Always conduct your own research and consider consulting with a financial advisor before making trading decisions. Trading involves substantial risk of loss.
Multivariate Kalman Filter
🙏🏻 I see no1 ever posted an open source Multivariate Kalman filter on TV, so here it is, for you. Tested and mathematically correct implementation, with numerical safeties in place that do not affect the final results at all. That’s the main purpose of this drop, just to make the tool available here. Linear algebra everywhere, Neo would approve 4 sure.
...
Personally I haven't found any real use case of it for myself, aside from a very specific one I will explain later, but others usually do…
Almost every1 in the quant industry who uses Kalman is in fact misusing it, because by its real definition, it should be applied to Not the exact known values (e.g as real ticks provided by transparent audited regulated exchange), but “measurements” of those ‘with errors’.
If your measurements don’t have errors or you have really precise data, by its internal formulas Kalman will output the exact inputs. So most who use it come up with some imaginary errors of sorts, like deviations from some kind of imaginary fair price etc. The important, easy-to-miss point: the errors of your measurements have to be at least symmetric around their mean; if the errors are biased, Kalman won’t deliver.
For most tasks there are better tools, including other state space models, but still the Multivariate Kalman is a very powerful instrument; you can make it do all kinds of stuff. As a state space model it can also provide confidence & prediction intervals without explicitly calculating them.
...
In this script I included 2 example use cases: the first one is the classic tho perfectly working misuse, the second one is what I do with it:
One
Naive, estimates a “hidden” adaptive moving regression endpoint. The result you can see on the chart above. You can imagine that your real datapoints are in fact imperfect measures of some hidden state, and by defining measurement noise and process noise, and by constructing the input matrices in certain ways, you can express what that state should be.
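For intuition, here is a stripped-down univariate sketch of the Kalman predict/update cycle with a random-walk state model; the published tool is the full multivariate version with matrix state and covariance, and the Q/R defaults below are arbitrary assumptions.
```
//@version=5
indicator("Scalar Kalman (sketch)", overlay = true)
q = input.float(0.01, "Process noise Q")        // assumed default
r = input.float(1.0,  "Measurement noise R")    // assumed default
var float xhat = na   // state estimate
var float p    = 1.0  // estimate variance
if na(xhat)
    xhat := close
// predict: random-walk state model, so the predicted state is the previous estimate
pPred = p + q
// update: blend the prediction with the new measurement (the close)
k = pPred / (pPred + r)
xhat := xhat + k * (close - xhat)
p := (1.0 - k) * pPred
plot(xhat, "Kalman estimate", color = color.yellow, linewidth = 2)
```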
Two
Upscaling the tick lattice, aka modelling prices as if the native tick size had been lower. A very specific task, mostly needed in HFT or just for analytical purposes. Some instruments like ZN have huge tick sizes: they are traded a lot but barely do more than a 20-tick range in a session. The idea is to model the raw data as an AR(2) process, learn phi1 and phi2, make one-step forecasts based on them, and use the variance of the errors from these forecasts as the process noise. The measurement noise here is legit: it’s quantization noise based on the tick size, no need for an olympic gold in mental gymnastics xd
^^ artificially upscaling ZN futures tick lattice
...
I really made it available here so you guys can take it and do some crazy ish with it, just let state space models abduct your consciousness and never look back
∞