LTI_Filters
Linear Time-Invariant (LTI) filters are fundamental tools in signal processing: they behave consistently over time and respond linearly to input signals. They are crucial for analyzing and manipulating signals in a wide range of applications, ensuring the output's integrity is preserved regardless of when an input is applied or how large it is.
The Windowed Sinc filter is a specific type of LTI filter designed for digital signal processing. It uses a sinc function, which is ideal for low-pass filtering, truncated and shaped within a finite window so it can be implemented in practice. This involves multiplying the sinc function by a window function that tapers off toward the ends, making the filter finite and suitable for digital applications. Windowed Sinc filters are particularly effective for tasks like data smoothing and removing unwanted frequency components, balancing sharp cutoff characteristics against minimal distortion.
The efficiency of Windowed Sinc filters in digital signal processing lies in their use of linear algebra, particularly in the convolution step, which combines input data with the filter coefficients to produce the desired output. This mathematical foundation allows precise control over the filtering process and a good balance between filtering performance and computational cost. By leveraging techniques such as matrix multiplication and Toeplitz matrices, these filters can efficiently handle large datasets and complex filtering tasks, making them invaluable in applications requiring high precision and speed, such as audio processing, financial signal analysis, and image restoration.
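To make the construction above concrete, the sketch below builds a set of windowed-sinc low-pass coefficients (with a Hann taper) and convolves them with the source. It is an illustrative reconstruction of the general technique in Pine Script, not the library's internal code; the Hann window choice and the period-style cutoff convention are assumptions.
//@version=5
indicator("Windowed-Sinc low-pass (illustrative sketch)", overlay=true)
// Build normalized windowed-sinc low-pass coefficients.
// length: number of taps; fc: cutoff expressed as a period (assumption, matching the fc parameter below).
sinc_coeffs(simple int length, simple float fc) =>
    float[] w = array.new_float()
    center = (length - 1) / 2.0
    for i = 0 to length - 1
        x = 2.0 * (i - center) / fc                                        // sinc kernel argument
        s = x == 0.0 ? 1.0 : math.sin(math.pi * x) / (math.pi * x)         // sinc value
        taper = 0.5 - 0.5 * math.cos(2.0 * math.pi * i / (length - 1))     // Hann window taper
        array.push(w, s * taper)
    total = array.sum(w)
    for i = 0 to length - 1
        array.set(w, i, array.get(w, i) / total)                           // normalize to unity gain
    w
// Convolve the coefficients with the source series.
sinc_filter(float src, simple int length, simple float fc) =>
    w = sinc_coeffs(length, fc)
    float acc = 0.0
    for i = 0 to length - 1
        acc += array.get(w, i) * src[i]
    acc
plot(sinc_filter(close, 21, 14.0), "Windowed-Sinc LP", color.teal)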
Library "LTI_Filters"
offset(length, enable)
Calculates the time offset required for aligning the output of a filter with its input, based on the filter's length. This is useful for centered filters where the output is naturally shifted due to the filter's operation.
Parameters:
length (simple int) : The length of the filter.
enable (simple bool) : A boolean flag to enable or disable the offset calculation.
Returns: The calculated offset if enabled; otherwise, returns 0.
lti_filter(filter_type, source, length, prefilter, centered, fc, window_type)
General-purpose Linear Time-Invariant (LTI) filter function that applies the selected filter type to a data series. It can be used to apply a variety of LTI filters with different characteristics to financial or other time-series data.
Parameters:
filter_type (simple string) : Specifies the type of filter. ("Sinc", "SMA", "WMA")
source (float) : The input data series to filter.
length (simple int) : The length of the filter.
prefilter (simple bool) : Boolean indicating whether to prefilter the input data.
centered (simple bool) : Determines whether the filter coefficients are centered.
fc (simple float) : Filter cutoff, expressed as a period (like a length).
window_type (simple string) : Type of window function to apply. ("Hann", "Hamming", "Blackman", "Triangular", "Lanczos", "None")
Returns: The filtered data series.
lti_sma(source, length, prefilter)
Applies a Simple Moving Average (SMA) filter to the data series. Useful for smoothing data series to identify trends or for use as a component in more complex indicators.
Parameters:
source (float) : The input data series to filter.
length (simple int) : The length of the SMA filter.
prefilter (simple bool) : Boolean indicating whether to prefilter the input data.
Returns: The SMA-filtered data series.
lti_wma(source, length, prefilter, centered)
Applies a Weighted Moving Average (WMA) filter to a data series. Ideal for smoothing data with emphasis on more recent values, allowing for dynamic adjustments to the weighting scheme.
Parameters:
source (float) : The input data series to filter.
length (simple int) : The length of the WMA filter.
prefilter (simple bool) : Boolean indicating whether to prefilter the input data.
centered (simple bool) : Determines whether the filter coefficients are centered.
Returns: The WMA-filtered data series.
lti_sinc(source, length, prefilter, centered, fc, window_type)
Applies a Sinc filter to a data series, optionally using a window function. Particularly useful for signal processing tasks within financial analysis, such as smoothing or trend identification, with the ability to fine-tune filter characteristics.
Parameters:
source (float) : The input data series to filter.
length (simple int) : The length of the Sinc filter.
prefilter (simple bool) : Boolean indicating whether to prefilter the input data.
centered (simple bool) : Determines whether the filter coefficients are centered.
fc (simple float) : Filter cutoff, expressed as a period (like a length).
window_type (simple string) : Type of window function to apply. ("Hann", "Hamming", "Blackman", "Triangular", "Lanczos", "None")
Returns: The Sinc-filtered data series.
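A minimal usage sketch is shown below. The import path uses a placeholder publisher name, so substitute the library's actual publisher and version; the argument order follows the parameter lists above.
//@version=5
indicator("LTI_Filters usage (sketch)", overlay=true)
// Hypothetical import path - replace PublisherName and the version number with the real ones.
import PublisherName/LTI_Filters/1 as lti
// Centered windowed-sinc low-pass of close with a Blackman window.
smoothed = lti.lti_sinc(close, 50, false, true, 20.0, "Blackman")
// Shift the plot back by the filter offset so the centered output lines up with price.
off = lti.offset(50, true)
plot(smoothed, "Sinc filter", color.orange, offset = -off)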
support_array_methods
Library "support_array_methods"
Contains helper methods for working with arrays:
1. unic
description: deletes duplicate elements from an array
param _array: the array to de-duplicate. Supported types: int, float, string, bool
return: the duplicate-free array
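For illustration, de-duplicating an array in Pine Script can be done along the lines of the sketch below; this is a generic approach, not the library's source.
//@version=5
indicator("Array de-duplication (sketch)")
// Return a duplicate-free copy of a float array, keeping the first occurrence of each value.
dedup(float[] src) =>
    float[] out = array.new_float()
    for v in src
        if not array.includes(out, v)
            array.push(out, v)
    out
var float[] sample = array.from(1.0, 2.0, 2.0, 3.0, 1.0)
plot(array.size(dedup(sample)), "Unique count")   // plots 3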
ZigZag Library
This is yet another ZigZag library.
🔵 Key Features
1. Lightning-Fast Performance : Optimized code ensures minimal lag and swift chart updates.
2. Real-Time Swing Detection : No more waiting for swings to finalize! This library continuously identifies the latest swing formation.
3. Amplitude-Aware : Discover significant swings earlier, even if they haven't reached the standard bar length.
4. Customizable Visualization : Draw ZigZag on-demand using polylines for a tailored analysis experience.
Stay tuned for more features as this library is being continuously enhanced. For the latest updates, please refer to the release information.
🔵 API
// Import this library. Remember to check the latest version of this library and replace the version number below.
import algotraderdev/zigzag/1 as zz
// Initialize the ZigZag instance.
var zz.ZigZag zig = zz.ZigZag.new().init(
  zz.Settings.new(
     swingLen = 5,
     lineColor = color.blue,
     lineStyle = line.style_solid,
     lineWidth = 1))
// Analyze the ZigZag using the latest bar's data.
zig.tick()
// Draw the ZigZag.
if barstate.islast
    zig.draw()
Chess_Data_5
This library supplies a randomized list of 1-Move Chess Puzzles; this is 5/5 in my collection of puzzles on TradingView.
This library contains 730 chess puzzles, enough for one unique chess puzzle per day for 2 years (730 / 365 = 2).
The Puzzles are sourced from Lichess's open-source database found here -> | database.lichess.org
This data has been reduced to include only 1-Move chess puzzles with a popularity rating of > 70, and condensed for easier formatting and fewer characters.
The reduced format of the data in this library reads:
"Puzzle Code, Modified FEN, Moves, Puzzle Rating, Popularity Rating"
Puzzle Code: Lichess Codes Identifying each puzzle, this allows them to be retrieved from their website based on this Code.
Modified FEN: Forsyth-Edwards Notation is the standard notation to describe positions of a chess game. This includes the active move tacked onto the end after the last '/', this simplifies the process to retrieve the active move in PineScript.
Moves: This holds the first move seen by the player in the puzzle (made by the opposite color), followed by the correct next move, which is the puzzle solution the player is trying to determine.
Puzzle Rating: Difficulty rating of the puzzle. Generally speaking: Under 1500 = Beginner | 1500 to 1800 = Casual | 1800 to 2100 = Intermediate | 2100+ = Advanced
Popularity Ranking: This is the popularity ranking calculated by lichess based on their own data of user feedback.
Note: After reducing the data down to only 1-Move puzzles with a popularity rating of > 70%, there are still around 340k puzzles (enough for over 900 years!).
Functions
get()
Returns the list of chess puzzle data.
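For reference, one entry in the comma-separated layout described above can be unpacked with str.split, as in the sketch below. The sample string is hypothetical and only mirrors the documented field order.
//@version=5
indicator("Chess puzzle entry parsing (sketch)")
// Hypothetical entry in the "Puzzle Code, Modified FEN, Moves, Puzzle Rating, Popularity Rating" layout.
sample = "00ABC,8/8/8/8/8/8/8/K6k w,e2e4 e7e5,1450,92"
parts      = str.split(sample, ",")
code       = array.get(parts, 0)
fen        = array.get(parts, 1)
moves      = array.get(parts, 2)
rating     = str.tonumber(array.get(parts, 3))
popularity = str.tonumber(array.get(parts, 4))
plot(rating, "Puzzle rating")
plot(popularity, "Popularity rating")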
Chess_Data_4
This library supplies a randomized list of 1-Move Chess Puzzles; this is 4/5 in my collection of puzzles on TradingView.
This library contains 730 chess puzzles, enough for one unique chess puzzle per day for 2 years (730 / 365 = 2).
The Puzzles are sourced from Lichess's open-source database found here -> | database.lichess.org
This data has been reduced to include only 1-Move chess puzzles with a popularity rating of > 70, and condensed for easier formatting and fewer characters.
The reduced format of the data in this library reads:
"Puzzle Code, Modified FEN, Moves, Puzzle Rating, Popularity Rating"
Puzzle Code: Lichess Codes Identifying each puzzle, this allows them to be retrieved from their website based on this Code.
Modified FEN: Forsyth-Edwards Notation is the standard notation to describe positions of a chess game. This includes the active move tacked onto the end after the last '/', this simplifies the process to retrieve the active move in PineScript.
Moves: This holds the first move seen by the player in the puzzle (made by the opposite color), followed by the correct next move, which is the puzzle solution the player is trying to determine.
Puzzle Rating: Difficulty rating of the puzzle. Generally speaking: Under 1500 = Beginner | 1500 to 1800 = Casual | 1800 to 2100 = Intermediate | 2100+ = Advanced
Popularity Ranking: This is the popularity ranking calculated by lichess based on their own data of user feedback.
Note: After reducing the data down to only 1-Move puzzles with a popularity rating of > 70%, there are still around 340k puzzles (enough for over 900 years!).
Functions
get()
Returns the list of chess puzzle data.
Chess_Data_3
This library supplies a randomized list of 1-Move Chess Puzzles; this is 3/5 in my collection of puzzles on TradingView.
This library contains 730 chess puzzles, enough for one unique chess puzzle per day for 2 years (730 / 365 = 2).
The Puzzles are sourced from Lichess's open-source database found here -> | database.lichess.org
This data has been reduced to include only 1-Move chess puzzles with a popularity rating of > 70, and condensed for easier formatting and fewer characters.
The reduced format of the data in this library reads:
"Puzzle Code, Modified FEN, Moves, Puzzle Rating, Popularity Rating"
Puzzle Code: Lichess Codes Identifying each puzzle, this allows them to be retrieved from their website based on this Code.
Modified FEN: Forsyth-Edwards Notation is the standard notation to describe positions of a chess game. This includes the active move tacked onto the end after the last '/', this simplifies the process to retrieve the active move in PineScript.
Moves: This holds the first move seen by the player in the puzzle (made by the opposite color), followed by the correct next move, which is the puzzle solution the player is trying to determine.
Puzzle Rating: Difficulty rating of the puzzle. Generally speaking: Under 1500 = Beginner | 1500 to 1800 = Casual | 1800 to 2100 = Intermediate | 2100+ = Advanced
Popularity Ranking: This is the popularity ranking calculated by lichess based on their own data of user feedback.
Note: After reducing the data down to only 1-Move puzzles with a popularity rating of > 70%, there are still around 340k puzzles (enough for over 900 years!).
Functions
get()
Returns the list of chess puzzle data.
Chess_Data_2
This library supplies a randomized list of 1-Move Chess Puzzles; this is 2/5 in my collection of puzzles on TradingView.
This library contains 730 chess puzzles, enough for one unique chess puzzle per day for 2 years (730 / 365 = 2).
The Puzzles are sourced from Lichess's open-source database found here -> | database.lichess.org
This data has been reduced to include only 1-Move chess puzzles with a popularity rating of > 70, and condensed for easier formatting and fewer characters.
The reduced format of the data in this library reads:
"Puzzle Code, Modified FEN, Moves, Puzzle Rating, Popularity Rating"
Puzzle Code: Lichess Codes Identifying each puzzle, this allows them to be retrieved from their website based on this Code.
Modified FEN: Forsyth-Edwards Notation is the standard notation to describe positions of a chess game. This includes the active move tacked onto the end after the last '/', this simplifies the process to retrieve the active move in PineScript.
Moves: This holds the first move seen by the player in the puzzle (made by the opposite color), followed by the correct next move, which is the puzzle solution the player is trying to determine.
Puzzle Rating: Difficulty rating of the puzzle. Generally speaking: Under 1500 = Beginner | 1500 to 1800 = Casual | 1800 to 2100 = Intermediate | 2100+ = Advanced
Popularity Ranking: This is the popularity ranking calculated by lichess based on their own data of user feedback.
Note: After reducing the data down to only 1-Move puzzles with a popularity rating of > 70%, there are still around 340k puzzles (enough for over 900 years!).
Functions
get()
Returns the list of chess puzzle data.
Chess_Data_1
This library supplies a randomized list of 1-Move Chess Puzzles; this is 1/5 in my collection of puzzles on TradingView.
This library contains 730 chess puzzles, enough for one unique chess puzzle per day for 2 years (730 / 365 = 2).
The Puzzles are sourced from Lichess's open-source database found here -> | database.lichess.org
This data has been reduced to include only 1-Move chess puzzles with a popularity rating of > 70, and condensed for easier formatting and fewer characters.
The reduced format of the data in this library reads:
"Puzzle Code, Modified FEN, Moves, Puzzle Rating, Popularity Rating"
Puzzle Code: Lichess Codes Identifying each puzzle, this allows them to be retrieved from their website based on this Code.
Modified FEN: Forsyth-Edwards Notation is the standard notation to describe positions of a chess game. This includes the active move tacked onto the end after the last '/', this simplifies the process to retrieve the active move in PineScript.
Moves: This holds the first move seen by the player in the puzzle (made by the opposite color), followed by the correct next move, which is the puzzle solution the player is trying to determine.
Puzzle Rating: Difficulty rating of the puzzle. Generally speaking: Under 1500 = Beginner | 1500 to 1800 = Casual | 1800 to 2100 = Intermediate | 2100+ = Advanced
Popularity Ranking: This is the popularity ranking calculated by lichess based on their own data of user feedback.
Note: After reducing the data down to only 1-Move puzzles with a popularity rating of > 70%, there are still around 340k puzzles (enough for over 900 years!).
Functions
get()
Returns the list of chess puzzle data.
HT: Functions Lib
Library "Functions"
is_date_equal(date1, date2, time_zone)
Parameters:
date1 (int)
date2 (int)
time_zone (string)
is_date_equal(date1, date2_str, time_zone)
Parameters:
date1 (int)
date2_str (string)
time_zone (string)
is_date_between(date_, start_year, start_month, end_year, end_month, time_zone_)
Parameters:
date_ (int)
start_year (int)
start_month (int)
end_year (int)
end_month (int)
time_zone_ (string)
is_time_equal(time1, time2_str, time_zone)
Parameters:
time1 (int)
time2_str (string)
time_zone (string)
is_time_equal(time1, time2, time_zone)
Parameters:
time1 (int)
time2 (int)
time_zone (string)
is_time_between(time_, start_hour, start_minute, end_hour, end_minute, time_zone_)
Parameters:
time_ (int)
start_hour (int)
start_minute (int)
end_hour (int)
end_minute (int)
time_zone_ (string)
is_time_between(time_, start_time, end_time, time_zone_)
Parameters:
time_ (int)
start_time (string)
end_time (string)
time_zone_ (string)
is_close(value, level, ticks)
Parameters:
value (float)
level (float)
ticks (int)
is_inrange(value, lb, hb)
Parameters:
value (float)
lb (float)
hb (float)
is_above(value, level, ticks)
Parameters:
value (float)
level (float)
ticks (int)
is_below(value, level, ticks)
Parameters:
value (float)
level (float)
ticks (int)
HT: Levels Lib
Library "Levels"
method initialize(id)
Namespace types: levels_collection
Parameters:
id (levels_collection)
method create_level(id, name, value, level_start_bar, level_color, show)
Namespace types: levels_collection
Parameters:
id (levels_collection)
name (string)
value (float)
level_start_bar (int)
level_color (color)
show (bool)
method set_level(id, name, value, level_start_bar, show)
Namespace types: levels_collection
Parameters:
id (levels_collection)
name (string)
value (float)
level_start_bar (int)
show (bool)
method find_resistance(id)
Namespace types: levels_collection
Parameters:
id (levels_collection)
method find_support(id)
Namespace types: levels_collection
Parameters:
id (levels_collection)
method draw_level(id)
Namespace types: level_info
Parameters:
id (level_info)
method draw_all_levels(id)
Namespace types: levels_collection
Parameters:
id (levels_collection)
level_info
Fields:
name (series string)
value (series float)
bar_num (series int)
level_line (series line)
line_start_bar (series int)
level_color (series color)
show (series bool)
ss (series bool)
sr (series bool)
levels_collection
Fields:
levels (array&lt;level_info&gt;)
Word_Puzzle_Data_R2Z
Library "Word_Puzzle_Data_R2Z"
This Library consists of functions for returning arrays of words starting with R through Z.
By splitting the data through multiple libraries, I can import more tokens into my final compiled script, so having this data separately is extremely helpful.
This library is container 1/3 for my database of 5-letter words used in my "Word Puzzle" game.
The list was obtained from this master list | gist.github.com
The list was also filtered for profanity.
If there were more than 999 words under one first letter, the array for that letter is split into two, 'letter1' & 'letter2'; this applies to the letters "P", "B", and "S".
All words are lowercase
r_ary()
- Returns an array of words starting with "R"
s1_ary()
- Returns an array of words starting with "S"
s2_ary()
- Returns an array of words starting with "S"
t_ary()
- Returns an array of words starting with "T"
u_ary()
- Returns an array of words starting with "U"
v_ary()
- Returns an array of words starting with "V"
w_ary()
- Returns an array of words starting with "W"
x_ary()
- Returns an array of words starting with "X"
y_ary()
- Returns an array of words starting with "Y"
z_ary()
- Returns an array of words starting with "Z"
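A small usage sketch follows: it pulls one of the exported arrays and picks a pseudo-random word once. The import path uses a placeholder publisher name, so substitute the actual publisher and version.
//@version=5
indicator("Word list usage (sketch)", overlay=true)
// Hypothetical import path - replace PublisherName and the version number with the real ones.
import PublisherName/Word_Puzzle_Data_R2Z/1 as words
// Pick a pseudo-random word starting with "T" once, on the first bar.
var string word = na
if barstate.isfirst
    t = words.t_ary()
    word := array.get(t, int(math.random(0, array.size(t) - 1)))
if barstate.islast
    label.new(bar_index, high, word)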
Word_Puzzle_Data_I2Q
Library "Word_Puzzle_Data_I2Q"
This Library consists of functions for returning arrays of words starting with I through Q.
By splitting the data through multiple libraries, I can import more tokens into my final compiled script, so having this data separately is extremely helpful.
This library is container 1/3 for my database of 5-letter words used in my "Word Puzzle" game.
The list was obtained from this master list | gist.github.com
The list was also filtered for profanity.
If there were more than 999 words under one first letter, the array for that letter is split into two, 'letter1' & 'letter2'; this applies to the letters "P", "B", and "S".
All words are lowercase
i_ary()
- Returns an array of words starting with "I"
j_ary()
- Returns an array of words starting with "J"
k_ary()
- Returns an array of words starting with "K"
l_ary()
- Returns an array of words starting with "L"
m_ary()
- Returns an array of words starting with "M"
n_ary()
- Returns an array of words starting with "N"
o_ary()
- Returns an array of words starting with "O"
p1_ary()
- Returns an array of words starting with "P"
p2_ary()
- Returns an array of words starting with "P"
q_ary()
- Returns an array of words starting with "Q"
Word_Puzzle_Data_A2H
Library "Word_Puzzle_Data_A2H"
This Library consists of functions for returning arrays of words starting with A through H.
By splitting the data through multiple libraries, I can import more tokens into my final compiled script, so having this data separately is extremely helpful.
This library is container 1/3 for my database of 5-letter words used in my "Word Puzzle" game.
The list was obtained from this master list | gist.github.com
The list was also filtered for profanity.
If there were more than 999 words under one first letter, the array for that letter is split into two, 'letter1' & 'letter2'; this applies to the letters "P", "B", and "S".
All words are lowercase
a_ary()
- Returns an array of words starting with "A"
b1_ary()
- Returns an array of words starting with "B"
b2_ary()
- Returns an array of words starting with "B"
c_ary()
- Returns an array of words starting with "C"
d_ary()
- Returns an array of words starting with "D"
e_ary()
- Returns an array of words starting with "E"
f_ary()
- Returns an array of words starting with "F"
g_ary()
- Returns an array of words starting with "G"
h_ary()
- Returns an array of words starting with "H"
NormalDistributionFunctions
Library "NormalDistributionFunctions"
The NormalDistributionFunctions library encompasses a comprehensive suite of statistical tools for financial market analysis. It provides functions to calculate essential statistical measures such as mean, standard deviation, skewness, and kurtosis, alongside advanced functionalities for computing the probability density function (PDF), cumulative distribution function (CDF), Z-score, and confidence intervals. This library is designed to assist in the assessment of market volatility, distribution characteristics of asset returns, and risk management calculations, making it an invaluable resource for traders and financial analysts.
meanAndStdDev(source, length)
Calculates and returns the mean and standard deviation for a given data series over a specified period.
Parameters:
source (float) : float: The data series to analyze.
length (int) : int: The lookback period for the calculation.
Returns: Returns an array where the first element is the mean and the second element is the standard deviation of the data series for the given period.
skewness(source, mean, stdDev, length)
Calculates and returns skewness for a given data series over a specified period.
Parameters:
source (float) : float: The data series to analyze.
mean (float) : float: The mean of the distribution.
stdDev (float) : float: The standard deviation of the distribution.
length (int) : int: The lookback period for the calculation.
Returns: Returns skewness value
kurtosis(source, mean, stdDev, length)
Calculates and returns kurtosis for a given data series over a specified period.
Parameters:
source (float) : float: The data series to analyze.
mean (float) : float: The mean of the distribution.
stdDev (float) : float: The standard deviation of the distribution.
length (int) : int: The lookback period for the calculation.
Returns: Returns kurtosis value
pdf(x, mean, stdDev)
pdf: Calculates the probability density function for a given value within a normal distribution.
Parameters:
x (float) : float: The value to evaluate the PDF at.
mean (float) : float: The mean of the distribution.
stdDev (float) : float: The standard deviation of the distribution.
Returns: Returns the probability density function value for x.
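For reference, the textbook Gaussian density that a pdf function of this kind evaluates is sketched below; it is the standard formula, not necessarily the library's exact implementation.
//@version=5
indicator("Normal PDF (reference sketch)")
// Standard normal density: exp(-z^2 / 2) / (stdDev * sqrt(2 * pi)), with z = (x - mean) / stdDev.
normal_pdf(float x, float mean, float stdDev) =>
    z = (x - mean) / stdDev
    math.exp(-0.5 * z * z) / (stdDev * math.sqrt(2.0 * math.pi))
plot(normal_pdf(close, ta.sma(close, 20), ta.stdev(close, 20)), "PDF of close")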
cdf(x, mean, stdDev)
cdf: Calculates the cumulative distribution function for a given value within a normal distribution.
Parameters:
x (float) : float: The value to evaluate the CDF at.
mean (float) : float: The mean of the distribution.
stdDev (float) : float: The standard deviation of the distribution.
Returns: Returns the cumulative distribution function value for x.
confidenceInterval(mean, stdDev, size, confidenceLevel)
Calculates the confidence interval for a data series mean.
Parameters:
mean (float) : float: The mean of the data series.
stdDev (float) : float: The standard deviation of the data series.
size (int) : int: The sample size.
confidenceLevel (float) : float: The confidence level (e.g., 0.95 for 95% confidence).
Returns: Returns the lower and upper bounds of the confidence interval.
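A usage sketch for the library is shown below. The import path uses a placeholder publisher name, and the tuple destructuring of confidenceInterval is an assumption based on its description of returning lower and upper bounds.
//@version=5
indicator("NormalDistributionFunctions usage (sketch)")
// Hypothetical import path - replace PublisherName and the version number with the real ones.
import PublisherName/NormalDistributionFunctions/1 as nd
stats  = nd.meanAndStdDev(close, 50)          // [mean, stdDev] per the documentation above
mean   = array.get(stats, 0)
stdDev = array.get(stats, 1)
// 95% confidence interval around the 50-bar mean (tuple return assumed).
[lower, upper] = nd.confidenceInterval(mean, stdDev, 50, 0.95)
plot(mean, "Mean", color.blue)
plot(lower, "Lower CI", color.red)
plot(upper, "Upper CI", color.green)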
ApproximateGaussianSmoothing
Library "ApproximateGaussianSmoothing"
This library provides a novel smoothing function for time-series data, serving as an alternative to SMA and EMA. Additionally, it provides some statistical processing, using moving averages as expected values in statistics.
'Approximate Gaussian Smoothing' (AGS) is designed to apply weights to time-series data that closely resemble Gaussian smoothing weights. It is easier to calculate than the similar ALMA.
When AGS is used as a moving average, I call it 'Approximate Gaussian Weighted Moving Average' (AGWMA).
The formula is:
AGWMA = (EMA + EMA(EMA) + EMA(EMA(EMA)) + EMA(EMA(EMA(EMA)))) / 4
The EMA parameter alpha is 5 / (N + 4), using time period N (or length).
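The stated formula translates directly into Pine Script. The sketch below reimplements it from the description above (an EMA with alpha = 5 / (N + 4), applied four times and averaged); initialization details may differ from the library's own code.
//@version=5
indicator("AGWMA (sketch of the stated formula)", overlay=true)
// EMA with the library's stated alpha of 5 / (N + 4).
ags_ema(float src, simple int length) =>
    alpha = 5.0 / (length + 4.0)
    var float e = na
    e := na(e) ? src : alpha * src + (1.0 - alpha) * e
    e
// AGWMA = (EMA + EMA(EMA) + EMA(EMA(EMA)) + EMA(EMA(EMA(EMA)))) / 4
agwma(float src, simple int length) =>
    e1 = ags_ema(src, length)
    e2 = ags_ema(e1, length)
    e3 = ags_ema(e2, length)
    e4 = ags_ema(e3, length)
    (e1 + e2 + e3 + e4) / 4.0
plot(agwma(close, 20), "AGWMA", color.purple)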
ma(src, length)
Calculate moving average using AGS (AGWMA).
Parameters:
src (float) : Series of values to process.
length (simple int) : Number of bars (length).
Returns: Moving average.
analyse(src, length)
Calculate mean and variance using AGS.
Parameters:
src (float) : Series of values to process.
length (simple int) : Number of bars (length).
Returns: Mean and variance.
analyse(dimensions, sources, length)
Calculate mean and variance covariance matrix using AGS.
Parameters:
dimensions (simple int) : Dimensions of sources to process.
sources (array) : Series of values to process.
length (simple int) : Number of bars (length).
Returns: Mean and variance covariance matrix.
trend(src, length)
Calculate intercept (LSMA) and slope using AGS.
Parameters:
src (float) : Series of values to process.
length (simple int) : Number of bars (length).
Returns: Intercept and slope.
aprox
Library "aprox"
A library of approximations of a price or series float. It uses the Fourier transform and Euler's theorem for homogeneous white-noise operations. Calling functions without a source value automatically takes close as the default source.
Copy this indicator to see how the approximations interact with each other.
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Close Price with Aproximations", shorttitle="Close and Aproximations", overlay=false)
// Sample input data (replace this with your own data)
inputData = close
// Plot Close Price
plot(inputData, color=color.blue, title="Close Price")
dtf32_result = aprox.DTF32()
plot(dtf32_result, color=color.green, title="DTF32 Approximation")
fft_result = aprox.FFT()
plot(fft_result, color=color.red, title="FFT Approximation")
wavelet_result = aprox.Wavelet()
plot(wavelet_result, color=color.orange, title="Wavelet Approximation")
wavelet_std_result = aprox.Wavelet_std()
plot(wavelet_std_result, color=color.yellow, title="Wavelet_std Approximation")
DFT3(xval, _dir)
Parameters:
xval (float)
_dir (int)
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - DFT3", shorttitle="DFT3 Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply DFT3
result = aprox.DFT3(inputData, 2)
// Plot the result
plot(result, color=color.blue, title="DFT3 Result")
DFT2(xval, _dir)
Parameters:
xval (float)
_dir (int)
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - DFT2", shorttitle="DFT2 Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply DFT2
result = aprox.DFT2(inputData, inputData, 1)
// Plot the result
plot(result, color=color.green, title="DFT2 Result")
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - DFT2", shorttitle="DFT2 Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply DFT2
result = aprox.DFT2(inputData, 1)
// Plot the result
plot(result, color=color.green, title="DFT2 Result")
FFT(xval)
FFT: Fast Fourier Transform
Parameters:
xval (float)
Returns: Approximated source value
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - FFT", shorttitle="FFT Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply FFT
result = aprox.FFT(inputData)
// Plot the result
plot(result, color=color.red, title="FFT Result")
DTF32(xval)
DTF32: Combined Discrete Fourier Transforms
Parameters:
xval (float)
Returns: Approximated source value
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - DTF32", shorttitle="DTF32 Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply DTF32
result = aprox.DTF32(inputData)
// Plot the result
plot(result, color=color.purple, title="DTF32 Result")
whitenoise(indic_, _devided, minEmaLength, maxEmaLength, src)
whitenoise: Ehlers' Universal Oscillator with White Noise, without an extra approximated src
Parameters:
indic_ (float)
_devided (int)
minEmaLength (int)
maxEmaLength (int)
src (float)
Returns: Smoothed indicator value
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - whitenoise", shorttitle="whitenoise Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply whitenoise
result = aprox.whitenoise(aprox.FFT(inputData))
// Plot the result
plot(result, color=color.orange, title="whitenoise Result")
whitenoise(indic_, dft1, _devided, minEmaLength, maxEmaLength, src)
whitenoise: Ehlers' Universal Oscillator with White Noise and DFT1
Parameters:
indic_ (float)
dft1 (float)
_devided (int)
minEmaLength (int)
maxEmaLength (int)
src (float)
Returns: Smoothed indicator value
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - whitenoise with DFT1", shorttitle="whitenoise-DFT1 Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply whitenoise with DFT1
result = aprox.whitenoise(inputData, aprox.DFT1(inputData))
// Plot the result
plot(result, color=color.yellow, title="whitenoise-DFT1 Result")
smooth(dft1, indic__, _devided, minEmaLength, maxEmaLength, src)
smooth: Smooths the source value with the help of an indicator series and an approximated source value
Parameters:
dft1 (float)
indic__ (float)
_devided (int)
minEmaLength (int)
maxEmaLength (int)
src (float)
Returns: Smoothed source series
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - smooth", shorttitle="smooth Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply smooth
result = aprox.smooth(inputData, aprox.FFT(inputData))
// Plot the result
plot(result, color=color.gray, title="smooth Result")
smooth(indic__, _devided, minEmaLength, maxEmaLength, src)
smooth: Smooths the source value with the help of an indicator series
Parameters:
indic__ (float)
_devided (int)
minEmaLength (int)
maxEmaLength (int)
src (float)
Returns: Smoothed source series
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - smooth without DFT1", shorttitle="smooth-NoDFT1 Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply smooth without DFT1
result = aprox.smooth(aprox.FFT(inputData))
// Plot the result
plot(result, color=color.teal, title="smooth-NoDFT1 Result")
vzo_ema(src, len)
vzo_ema: Volume Zone Oscillator with EMA smoothing
Parameters:
src (float)
len (simple int)
Returns: VZO value
vzo_sma(src, len)
vzo_sma: Volume Zone Oscillator with SMA smoothing
Parameters:
src (float)
len (int)
Returns: VZO value
vzo_wma(src, len)
vzo_wma: Volume Zone Oscillator with WMA smoothing
Parameters:
src (float)
len (int)
Returns: VZO value
alma2(series, windowsize, offset, sigma)
alma2: Arnaud Legoux Moving Average 2; accepts sigma as a series float
Parameters:
series (float)
windowsize (int)
offset (float)
sigma (float)
Returns: ALMA value
Wavelet(src, len, offset, sigma)
Wavelet: Wavelet Transform
Parameters:
src (float)
len (int)
offset (simple float)
sigma (simple float)
Returns: Wavelet-transformed series
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - Wavelet", shorttitle="Wavelet Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply Wavelet
result = aprox.Wavelet(inputData)
// Plot the result
plot(result, color=color.blue, title="Wavelet Result")
Wavelet_std(src, len, offset, mag)
Wavelet_std: Wavelet Transform with Standard Deviation
Parameters:
src (float)
len (int)
offset (float)
mag (int)
Returns: Wavelet-transformed series
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - Wavelet_std", shorttitle="Wavelet_std Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply Wavelet_std
result = aprox.Wavelet_std(inputData)
// Plot the result
plot(result, color=color.green, title="Wavelet_std Result")
FVG Detector Library
Library "FVG Detector Library"
🔵 Introduction
To save time and improve accuracy in your scripts for identifying Fair Value Gaps (FVGs), you can utilize this library. Apart from detecting and plotting FVGs, one of the most significant advantages of this script is the ability to filter FVGs, which you'll learn more about below. Additionally, the plotting of each FVG continues until either a new FVG occurs or the current FVG is mitigated.
🔵 Definition
Fair Value Gap (FVG) refers to a situation where three consecutive candlesticks do not overlap. Based on this definition, the minimum conditions for detecting a fair gap in the ascending scenario are that the minimum price of the last candlestick should be greater than the maximum price of the third candlestick, and in the descending scenario, the maximum price of the last candlestick should be smaller than the minimum price of the third candlestick.
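In Pine Script terms, the minimum conditions above reduce to a comparison across three consecutive candles; a minimal sketch without any of the filters described below:
//@version=5
indicator("Minimum FVG condition (sketch)", overlay=true)
// Index 2 is the first candle of the pattern, index 0 is the last.
bullishFVG = low > high[2]   // last candle's low above the first candle's high
bearishFVG = high < low[2]   // last candle's high below the first candle's low
bgcolor(bullishFVG ? color.new(color.green, 85) : bearishFVG ? color.new(color.red, 85) : na)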
If the filter is turned off, all FVGs that meet at least the minimum conditions are identified. This mode is simplistic and results in a high number of identified FVGs.
If the filter is turned on, you have four options to filter FVGs :
1. Very Aggressive : In addition to the initial condition, another condition is added. For ascending FVGs, the maximum price of the last candlestick should be greater than the maximum price of the middle candlestick. Similarly, for descending FVGs, the minimum price of the last candlestick should be smaller than the minimum price of the middle candlestick. In this mode, a very small number of FVGs are eliminated.
2. Aggressive : In addition to the conditions of the Very Aggressive mode, in this mode, the size of the middle candlestick should not be small. This mode eliminates more FVGs compared to the Very Aggressive mode.
3. Defensive : In addition to the conditions of the Very Aggressive mode, in this mode, the size of the middle candlestick should be relatively large, and most of it should consist of the body. Also, for identifying ascending FVGs, the second and third candlesticks must be positive, and for identifying descending FVGs, the second and third candlesticks must be negative. In this mode, a significant number of FVGs are eliminated, and the remaining FVGs have a decent quality.
4. Very Defensive : In addition to the conditions of the Defensive mode, the first and third candlesticks should not resemble very small-bodied doji candlesticks. In this mode, the majority of FVGs are filtered out, and the remaining ones are of higher quality.
By default, we recommend using the Defensive mode.
🔵 How to Use
🟣 Parameters
To utilize this library, you need to provide four input parameters to the function.
"FVGFilter" determines whether you wish to apply a filter on FVGs or not. The possible inputs for this parameter are "On" and "Off", provided as strings.
"FVGFilterType" determines the type of filter to be applied to the found FVGs. These filters include four modes: "Very Defensive", "Defensive", "Aggressive", and "Very Aggressive", respectively exhibiting decreasing sensitivity and indicating a higher number of Fair Value Gaps (FVG).
The parameter "ShowDeFVG" is a Boolean value defined as either "true" or "false". If this value is "true", FVGs are shown during the Bullish Trend; however, if it is "false", they are not displayed.
The parameter "ShowSuFVG" is a Boolean value defined as either "true" or "false". If this value is "true", FVGs are displayed during the Bearish Trend; however, if it is "false", they are not displayed.
FVGDetector(FVGFilter, FVGFilterType, ShowDeFVG, ShowSuFVG)
Parameters:
FVGFilter (string)
FVGFilterType (string)
ShowDeFVG (bool)
ShowSuFVG (bool)
🟣 Import Library
You can use the "FVG Detector" library in your script using the following expression:
import TFlab/FVGDetectorLibrary/1 as FVG
🟣 Input Parameters
The descriptions related to the input parameters were provided in the "Parameter" section. In this section, for your convenience, the code related to the inputs is also included, and you can copy and paste it into your script.
PFVGFilter = input.string('On', 'FVG Filter', options = ['On', 'Off'])
PFVGFilterType = input.string('Defensive', 'FVG Filter Type', options = ['Very Defensive', 'Defensive', 'Aggressive', 'Very Aggressive'])
PShowDeFVG = input.bool(true, ' Show Demand FVG')
PShowSuFVG = input.bool(true, ' Show Supply FVG')
🟣 Call Function
You can copy the following code into your script to call the FVG function. This code is based on the naming conventions provided in the "Input Parameter" section, so if you want to use exactly this code, you should have similar parameter names or have copied the "Input Parameter" values.
FVG.FVGDetector(PFVGFilter, PFVGFilterType, PShowDeFVG, PShowSuFVG)
Material Design Colors
This library provides a standard set of colors defined in Material Design 2.0.
🔵 API
Step 1: Import this library.
import algotraderdev/material/1
// remember to check the latest version of this library and replace the 1 above.
Step 2: Get the color you like. Check the source code or the screenshot above to see all the supported colors.
material.red()
Each color function (except for `black()` and `white()`) accepts an optional `variant` parameter. You can choose any of 50, 100, 200, 300, 400, 500, 600, 700, 800, and 900. By default, 500 is chosen if this parameter is not provided.
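For example, the variants can be used to give related plots lighter and darker shades of the same hue; the positional variant argument below follows the description above.
//@version=5
indicator("Material colors usage (sketch)", overlay=true)
import algotraderdev/material/1
// Default variant (500) for the fast average, a lighter 200 variant for the slow one.
plot(ta.sma(close, 20), "Fast MA", material.red())
plot(ta.sma(close, 50), "Slow MA", material.red(200))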
FibonacciAveragesOscillatorLibrary
Library "FibonacciAveragesOscillatorLibrary"
The FibonacciAveragesOscillator library provides a streamlined way to analyze market trends using Fibonacci intervals and smoothed averages.
fibAvgOscillator(maxFibNumber, smoothLevel)
Parameters:
maxFibNumber (string) : string: The maximum Fibonacci number to use, affecting analysis depth.
smoothLevel (simple int) : simple int: Smoothing level for the oscillator, higher values produce smoother results.
@return series float: The Fibonacci averages trend oscillator value, smoothed over the specified level.
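A usage sketch is below. The import path uses a placeholder publisher name, and the "55" maximum Fibonacci number is only an example value.
//@version=5
indicator("Fibonacci averages oscillator (sketch)")
// Hypothetical import path - replace PublisherName and the version number with the real ones.
import PublisherName/FibonacciAveragesOscillatorLibrary/1 as fib
osc = fib.fibAvgOscillator("55", 5)
plot(osc, "Fib averages oscillator", color.blue)
hline(0)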
MLMomentumIndex
Library "MLMomentumIndex"
Enables market momentum analysis with k-NN predictions on pivot points, offering customizable parameters for dynamic trading strategies.
momentumIndexPivots(source, pivotBars, momentumWindow, maxData, numNeighbors, predictionSmoothing)
Parameters:
source (float)
pivotBars (int)
momentumWindow (int)
maxData (int)
numNeighbors (int)
predictionSmoothing (int)
MLPivotsBreakouts
Library "MLPivotsBreakouts"
Utilizes k-NN machine learning to predict breakout zones from pivot points, aiding traders in identifying potential bullish and bearish market movements. Ideal for trend-following and breakout strategies.
breakouts(source, pivotBars, numNeighbors, maxData, predictionSmoothing)
Parameters:
source (float) : series float: Price data for analysis.
pivotBars (int) : int: Number of bars for pivot point detection.
numNeighbors (int) : int: Neighbors count for k-NN prediction.
maxData (int) : int: Maximum pivot data points for analysis.
predictionSmoothing (int) : int: Smoothing period for predictions.
@return : Lower and higher prediction bands plus the pivot signal: 1 for a pivot high (ph) and -1 for a pivot low (pl).
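A usage sketch is shown below. The import path uses a placeholder publisher name, and the tuple destructuring of the lower band, higher band, and pivot signal is an assumption based on the return description above.
//@version=5
indicator("MLPivotsBreakouts usage (sketch)", overlay=true)
// Hypothetical import path - replace PublisherName and the version number with the real ones.
import PublisherName/MLPivotsBreakouts/1 as mlb
// Assumed tuple return: lower prediction band, higher prediction band, pivot signal (1 = ph, -1 = pl).
[lowerBand, higherBand, signal] = mlb.breakouts(close, 10, 5, 200, 3)
plot(lowerBand, "Lower prediction band", color.red)
plot(higherBand, "Higher prediction band", color.green)
plotshape(signal == 1, "Pivot high", shape.triangledown, location.abovebar, color.red)
plotshape(signal == -1, "Pivot low", shape.triangleup, location.belowbar, color.green)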
DynamicMAs
Library "DynamicMAs"
Custom MAs that allow a dynamic calculation beginning from the first bar, irrespective of the lookback period.
SMA(src, length)
Dynamic SMA
Parameters:
src (float)
length (int)
EMA(src, length)
Dynamic EMA
Parameters:
src (float)
length (int)
DEMA(src, length)
Dynamic DEMA
Parameters:
src (float)
length (int)
TEMA(src, length)
Dynamic TEMA
Parameters:
src (float)
length (int)
WMA(src, length)
Dynamic WMA
Parameters:
src (float)
length (int)
HMA(src, length)
Dynamic HMA
Parameters:
src (float)
length (int)
VWMA(src, length)
Dynamic VWMA
Parameters:
src (float)
length (int)
SMMA(src, length)
Dynamic SMMA
Parameters:
src (float)
length (int)
LSMA(src, length)
Dynamic LSMA
Parameters:
src (float)
length (int)
ALMA(src, length, offset_sigma, sigma)
Dynamic ALMA
Parameters:
src (float)
length (int)
offset_sigma (float)
sigma (float)
HyperMA(src, length)
Dynamic HyperbolicMA
Parameters:
src (float)
length (int)
lib_risk_management
Library "lib_risk_management"
A library to help with dynamic position sizing.
position_size(risk, account_balance, entry_price, sl_price)
Calculates the position size required so that, if the stop loss is triggered, the loss equals the given percentage of the account balance.
Parameters:
risk (float) : percentage of account balance to risk (1-100)
account_balance (float) : account balance in instrument currency
entry_price (float) : entry price
sl_price (float) : stop loss price
Returns: the position size in instrument currency that will lose the given risk percentage of the account balance when the stop loss is triggered
account_balance(to_currency, live)
converts the (current(default)/initial) account balance to the given currency at the daily rate
Parameters:
to_currency (simple string) : The currency to which the account balance is converted. Possible values: a three-letter string with the currency code in the ISO 4217 format (e.g. "USD"), or one of the built-in variables that return currency codes, like syminfo.currency or currency.USD.
live (bool) : Converts the current account balance (strategy.equity) if true (default), otherwise the initial capital (strategy.initial_capital).
Returns: the (current/initial) account balance converted to the given currency at the current daily rate
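To tie the two functions together, the sketch below sizes a position so that hitting the stop loss would cost 1% of the current account balance. The import path uses a placeholder publisher name; the rest follows the parameter descriptions above.
//@version=5
strategy("Risk-based position sizing (sketch)", overlay=true)
// Hypothetical import path - replace PublisherName and the version number with the real ones.
import PublisherName/lib_risk_management/1 as rm
entryPrice = close
slPrice    = close * 0.98                          // stop loss 2% below the entry
// Current account balance converted to USD at the daily rate.
balance = rm.account_balance("USD", true)
// Position size (in instrument currency) that loses 1% of the balance if the stop is hit.
size = rm.position_size(1.0, balance, entryPrice, slPrice)
plot(size, "Position size (instrument currency)")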