PINE LIBRARY
MLMatrixLib

Overview
MLMatrixLib is a comprehensive Pine Script v6 library implementing machine learning algorithms using native matrix operations. This library provides traders and developers with a toolkit of statistical and ML methods for building quantitative trading systems, performing data analysis, and creating adaptive indicators.
How It Works
The library leverages Pine Script's native matrix type to perform efficient linear algebra operations. Each algorithm is implemented from first principles, using matrix decomposition, iterative optimization, and statistical estimation techniques. All functions are designed for numerical stability with careful handling of edge cases.
---
Library Contents (34 Sections)
Section 1: Utility Functions & Matrix Operations
Core building blocks including:
• identity(n) - Creates n×n identity matrix
• diagonal(values) - Creates diagonal matrix from array
• ones(rows, cols) / zeros(rows, cols) - Matrix constructors
• frobeniusNorm(m) / l1Norm(m) - Matrix norm calculations
• hadamard(m1, m2) - Element-wise multiplication
• columnMeans(m) / rowMeans(m) - Statistical aggregations
• standardize(m) - Z-score normalization (zero mean, unit variance)
• minMaxNormalize(m) - Scale values to [0,1] range
• fitStandardScaler(m) / fitMinMaxScaler(m) - Reusable scaler parameters
• addBiasColumn(m) - Prepend column of ones for regression
• arrayMedian(arr) / arrayPercentile(arr, p) - Array statistics
Section 2: Activation Functions
Numerically stable implementations (see the sigmoid sketch after this list):
• sigmoid(x) / sigmoidMatrix(m) - Logistic function with overflow protection
• tanhActivation(x) / tanhMatrix(m) - Hyperbolic tangent
• relu(x) / reluMatrix(m) - Rectified Linear Unit
• leakyRelu(x, alpha) - Leaky ReLU with configurable slope
• elu(x, alpha) - Exponential Linear Unit
• Derivatives for backpropagation: sigmoidDerivative, tanhDerivative, reluDerivative
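The overflow protection mentioned above typically amounts to clamping the argument before exponentiating, so math.exp never receives an extreme value. A minimal sketch of that idea, not the library's exact code (the ±50 clamp bound is an assumption):
```
//@version=6
indicator("Stable sigmoid sketch")
stableSigmoid(float x) =>
    // Clamp the argument so math.exp(-z) cannot overflow for large |x|
    float z = math.max(-50.0, math.min(50.0, x))
    1.0 / (1.0 + math.exp(-z))
// Example: squash a centered RSI into (0, 1)
plot(stableSigmoid(ta.rsi(close, 14) - 50.0))
```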
Section 3: Linear Regression (OLS)
Ordinary Least Squares implementation using the normal equation β = (X'X)⁻¹X'y (see the sketch after this list):
• fitLinearRegression(X, y) - Fits model, returns coefficients, R², standard error
• fitSimpleLinearRegression(x, y) - Single-variable regression
• predictLinear(model, X) - Generate predictions
• predictionInterval(model, X, confidence) - Prediction intervals using the t-distribution
• Model type stores: coefficients, R-squared, residuals, standard error
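The normal-equation solve maps directly onto Pine's matrix built-ins. A minimal standalone sketch under assumed shapes (y held as an n×1 matrix; the library's fitLinearRegression additionally computes R², residuals, and standard error):
```
//@version=6
indicator("OLS sketch")
int n = 20
float slope = na
if bar_index >= n
    // X: n×2 design matrix (bias column + bar offset), y: n×1 closes
    matrix<float> X = matrix.new<float>(n, 2, 1.0)
    matrix<float> y = matrix.new<float>(n, 1, 0.0)
    for i = 0 to n - 1
        matrix.set(X, i, 1, i)
        matrix.set(y, i, 0, close[n - 1 - i])
    // Normal equation: beta = (X'X)^-1 X'y
    matrix<float> Xt = matrix.transpose(X)
    matrix<float> beta = matrix.mult(matrix.inv(matrix.mult(Xt, X)), matrix.mult(Xt, y))
    slope := matrix.get(beta, 1, 0)
plot(slope, "OLS slope")
```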
Section 4: Weighted Linear Regression
Generalized least squares with observation weights:
• fitWeightedLinearRegression(X, y, weights) - Solves (X'WX)⁻¹X'Wy
• Useful for downweighting outliers or emphasizing recent data
Section 5: Polynomial Regression
Fits polynomials of arbitrary degree:
• fitPolynomialRegression(x, y, degree) - Constructs Vandermonde matrix
• predictPolynomial(model, x) - Evaluate polynomial at points
Section 6: Ridge Regression (L2 Regularization)
Adds the penalty term λ||β||² to shrink coefficients and prevent overfitting (see the sketch after this list):
• fitRidgeRegression(X, y, lambda) - Solves (X'X + λI)⁻¹X'y
• Lambda parameter controls regularization strength
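Relative to OLS, the only change is λ added to the diagonal of X'X before inversion, which also stabilizes the solve when features are collinear. A hedged sketch of that diagonal loading, with the same assumed shapes as the OLS sketch above:
```
//@version=6
indicator("Ridge sketch")
ridgeSolve(matrix<float> X, matrix<float> y, float lambda) =>
    matrix<float> Xt = matrix.transpose(X)
    matrix<float> XtX = matrix.mult(Xt, X)
    // Diagonal loading: X'X + lambda*I, done in place
    for i = 0 to matrix.rows(XtX) - 1
        matrix.set(XtX, i, i, matrix.get(XtX, i, i) + lambda)
    matrix.mult(matrix.inv(XtX), matrix.mult(Xt, y))
int n = 20
float slope = na
if bar_index >= n
    matrix<float> X = matrix.new<float>(n, 2, 1.0)
    matrix<float> y = matrix.new<float>(n, 1, 0.0)
    for i = 0 to n - 1
        matrix.set(X, i, 1, i)
        matrix.set(y, i, 0, close[n - 1 - i])
    slope := matrix.get(ridgeSolve(X, y, 1.0), 1, 0)
plot(slope, "ridge slope")
```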
Section 7: LASSO Regression (L1 Regularization)
Coordinate descent algorithm for sparse solutions (see the sketch after this list):
• fitLassoRegression(X, y, lambda, maxIter, tolerance) - Iterative soft-thresholding
• Produces sparse coefficients by driving some to exactly zero
• softThreshold(x, lambda) - Core shrinkage operator
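The soft-thresholding operator has the standard closed form S(x, λ) = sign(x) · max(|x| − λ, 0); a one-line sketch, assuming the library's softThreshold follows this convention:
```
//@version=6
indicator("Soft threshold sketch")
softThreshold(float x, float lambda) =>
    // Shrink toward zero by lambda; values inside [-lambda, lambda] become exactly 0
    math.sign(x) * math.max(math.abs(x) - lambda, 0.0)
plot(softThreshold(ta.change(close), syminfo.mintick * 5))
```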
Section 8: Elastic Net (L1 + L2 Regularization)
Combines LASSO and Ridge penalties:
• fitElasticNet(X, y, lambda, alpha, maxIter, tolerance)
• Alpha balances L1 vs L2: alpha=1 is LASSO, alpha=0 is Ridge
Section 9: Huber Robust Regression
Iteratively Reweighted Least Squares (IRLS) for outlier resistance (see the weight-function sketch after this list):
• fitHuberRegression(X, y, delta, maxIter, tolerance)
• Delta parameter defines transition between L1 and L2 loss
• Downweights observations with large residuals
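The reweighting step behind IRLS is simple: residuals within delta keep full weight, and larger residuals are downweighted in proportion to their size. A sketch of that weight function (an assumption about the exact internal form):
```
//@version=6
indicator("Huber weight sketch")
huberWeight(float residual, float delta) =>
    // Quadratic loss (weight 1) inside |r| <= delta, linear loss (shrinking weight) outside
    math.abs(residual) <= delta ? 1.0 : delta / math.abs(residual)
// Example: weight of the current deviation from a 20-bar mean, scaled by ATR
plot(huberWeight(close - ta.sma(close, 20), ta.atr(14)))
```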
Section 10: Quantile Regression
Estimates conditional quantiles using linear programming approximation:
• fitQuantileRegression(X, y, tau, maxIter, tolerance)
• Tau specifies quantile (0.5 = median, 0.25 = lower quartile, etc.)
Section 11: Logistic Regression (Binary Classification)
Gradient descent optimization of cross-entropy loss:
• fitLogisticRegression(X, y, learningRate, maxIter, tolerance)
• predictProbability(model, X) - Returns probabilities
• predictClass(model, X, threshold) - Returns binary predictions
Section 12: Linear SVM (Support Vector Machine)
Sub-gradient descent with hinge loss:
• fitLinearSVM(X, y, C, learningRate, maxIter)
• C parameter controls regularization (higher = harder margin)
• predictSVM(model, X) - Returns class predictions
Section 13: Recursive Least Squares (RLS)
Online learning with exponential forgetting (see the scalar sketch after this list):
• createRLSState(nFeatures, lambda, delta) - Initialize state
• updateRLS(state, x, y) - Update with new observation
• Lambda is the forgetting factor (0.95-0.99 typical)
• Useful for adaptive indicators that update incrementally
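For intuition, the single-feature case reduces to three scalar recursions: gain, coefficient update, covariance update. A self-contained sketch regressing close on its previous value (the prior scale of 100 stands in for the delta parameter; the library's updateRLS generalizes this with matrices):
```
//@version=6
indicator("RLS sketch (1 feature)")
float lambda = 0.98       // forgetting factor (0.95-0.99 typical)
var float theta = 0.0     // coefficient estimate
var float P = 100.0       // inverse-covariance proxy (prior scale, assumed)
float x = close[1]
float y = close
if not na(x)
    float k = P * x / (lambda + x * P * x)   // gain
    theta := theta + k * (y - theta * x)     // coefficient update
    P := (P - k * x * P) / lambda            // covariance update
plot(theta, "RLS coefficient")
```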
Section 14: Covariance and Correlation
Matrix statistics:
• covarianceMatrix(m) - Sample covariance
• correlationMatrix(m) - Pearson correlations
• pearsonCorrelation(x, y) - Single correlation coefficient
• spearmanCorrelation(x, y) - Rank-based correlation
Section 15: Principal Component Analysis (PCA)
Dimensionality reduction via eigendecomposition:
• fitPCA(X, nComponents) - Power iteration method
• transformPCA(X, model) - Project data onto principal components
• Returns components, explained variance, and mean
Section 16: K-Means Clustering
Lloyd's algorithm with k-means++ initialization:
• fitKMeans(X, k, maxIter, tolerance) - Cluster data points
• predictCluster(model, X) - Assign new points to clusters
• withinClusterVariance(model) - Measure cluster compactness
Section 17: Gaussian Mixture Model (GMM)
Expectation-Maximization algorithm:
• fitGMM(X, k, maxIter, tolerance) - Soft clustering with probabilities
• predictProbaGMM(model, X) - Returns membership probabilities
• Models data as mixture of Gaussian distributions
Section 18: Kalman Filter
Linear state estimation (see the 1D sketch after this list):
• createKalman1D(processNoise, measurementNoise, ...) - 1D filter
• createKalman2D(processNoise, measurementNoise) - Position + velocity tracking
• kalmanStep(state, measurement) - Predict-update cycle
• Optimal filtering for noisy measurements
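The 1D predict-update cycle is compact enough to show in full. A sketch of a constant-level filter with illustrative noise values (not necessarily the library's internal state layout):
```
//@version=6
indicator("Kalman 1D sketch", overlay=true)
float q = 0.01        // process noise (illustrative)
float r = 1.0         // measurement noise (illustrative)
var float xEst = na   // state estimate
var float pEst = 1.0  // estimate variance
if na(xEst)
    xEst := close     // seed with the first measurement
else
    // Predict: level model keeps the state, variance grows by q
    float pPred = pEst + q
    // Update: the Kalman gain blends prediction and new measurement
    float k = pPred / (pPred + r)
    xEst := xEst + k * (close - xEst)
    pEst := (1 - k) * pPred
plot(xEst, "filtered close")
```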
Section 19: K-Nearest Neighbors (KNN)
Instance-based learning:
• fitKNN(X, y) - Store training data
• predictKNN(model, X, k) - Classify by majority vote
• predictKNNRegression(model, X, k) - Average of k neighbors
• predictKNNWeighted(model, X, k) - Distance-weighted voting
Section 20: Neural Network (Feedforward)
Multi-layer perceptron:
• createNeuralNetwork(architecture) - Define layer sizes
• trainNeuralNetwork(nn, X, y, learningRate, epochs) - Backpropagation
• predictNN(nn, X) - Forward pass
• Supports configurable hidden layers
Section 21: Naive Bayes Classifier
Gaussian Naive Bayes:
• fitNaiveBayes(X, y) - Estimate class-conditional distributions
• predictNaiveBayes(model, X) - Maximum a posteriori classification
• Assumes feature independence given class
Section 22: Anomaly Detection
Statistical outlier detection:
• fitAnomalyDetector(X, contamination) - Mahalanobis distance-based
• detectAnomalies(model, X) - Returns anomaly scores
• isAnomaly(model, X, threshold) - Binary classification
Section 23: Dynamic Time Warping (DTW)
Time series similarity:
• dtw(series1, series2) - Compute DTW distance
• Handles sequences of different lengths
• Useful for pattern matching
Section 24: Markov Chain / Regime Detection
Discrete state transitions (see the sketch after this list):
• fitMarkovChain(states, nStates) - Estimate transition matrix
• predictNextState(transitionMatrix, currentState) - Most likely next state
• stationaryDistribution(transitionMatrix) - Long-run probabilities
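Transition probabilities are just row-normalized transition counts. A self-contained sketch with two hand-rolled states, down bar and up bar (the library's fitMarkovChain builds the full matrix from a supplied state array):
```
//@version=6
indicator("Markov sketch")
// Two states: 0 = down bar, 1 = up bar
var matrix<float> counts = matrix.new<float>(2, 2, 0.0)
int state = close > open ? 1 : 0
int prev = close[1] > open[1] ? 1 : 0
if bar_index > 0
    // Count the observed prev -> state transition
    matrix.set(counts, prev, state, matrix.get(counts, prev, state) + 1)
// Row-normalize on the fly to get P(up next | current state)
float rowSum = matrix.get(counts, state, 0) + matrix.get(counts, state, 1)
float pUpNext = rowSum > 0 ? matrix.get(counts, state, 1) / rowSum : na
plot(pUpNext, "P(up next | current state)")
```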
Section 25: Hidden Markov Model (Simple)
Baum-Welch algorithm:
• fitHMM(observations, nStates, maxIter) - EM training
• viterbi(model, observations) - Most likely state sequence
• Useful for regime detection
Section 26: Exponential Smoothing & Holt-Winters
Time series smoothing:
• exponentialSmooth(data, alpha) - Simple exponential smoothing
• holtWinters(data, alpha, beta, gamma, seasonLength) - Triple smoothing
• Captures trend and seasonality
Section 27: Entropy and Information Theory
Information measures (see the entropy sketch after this list):
• entropy(probabilities) - Shannon entropy in bits
• conditionalEntropy(jointProbs, marginalProbs) - H(X|Y)
• mutualInformation(probsX, probsY, jointProbs) - I(X;Y)
• kldivergence(p, q) - Kullback-Leibler divergence
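Shannon entropy follows directly from H = −Σ pᵢ log₂ pᵢ, skipping zero-probability terms. A sketch, assuming entropy takes an array of probabilities summing to 1:
```
//@version=6
indicator("Entropy sketch")
entropyBits(array<float> probs) =>
    float h = 0.0
    for i = 0 to array.size(probs) - 1
        float p = array.get(probs, i)
        if p > 0
            h -= p * math.log(p) / math.log(2.0)   // natural log converted to base 2
    h
// Example: entropy of up/down bar frequencies over the last 100 bars
float pUp = math.sum(close > open ? 1.0 : 0.0, 100) / 100.0
plot(entropyBits(array.from(pUp, 1.0 - pUp)), "entropy (bits)")
```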
Section 28: Hurst Exponent
Long-range dependence measure:
• hurstExponent(data) - R/S analysis
• H < 0.5: mean-reverting, H = 0.5: random walk, H > 0.5: trending
Section 29: Change Detection (CUSUM)
Cumulative sum control chart (see the streaming sketch after this list):
• cusumChangeDetection(data, threshold, drift) - Detect regime changes
• cusumOnline(value, prevCusumPos, prevCusumNeg, target, drift) - Streaming version
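The streaming form keeps two one-sided cumulative sums that decay toward zero and flag a change when either breaches the threshold. A sketch on log returns (threshold, drift, zero target, and the reset-on-detection behavior are all illustrative assumptions):
```
//@version=6
indicator("CUSUM sketch")
float r = nz(ta.change(math.log(close)))   // monitored value: log return, target 0 assumed
float drift = 0.0005                       // slack per step (illustrative)
float threshold = 0.02                     // detection level (illustrative)
var float cusumPos = 0.0
var float cusumNeg = 0.0
cusumPos := math.max(0.0, cusumPos + r - drift)
cusumNeg := math.min(0.0, cusumNeg + r + drift)
bool changeDetected = cusumPos > threshold or cusumNeg < -threshold
if changeDetected
    // Reset after a detection so the next change can be caught
    cusumPos := 0.0
    cusumNeg := 0.0
plot(cusumPos, "CUSUM+", color.green)
plot(cusumNeg, "CUSUM-", color.red)
```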
Section 30: Autocorrelation
Serial dependence analysis:
• autocorrelation(data, maxLag) - ACF for all lags
• partialAutocorrelation(data, maxLag) - PACF via Durbin-Levinson
• Useful for time series model identification
Section 31: Ensemble Methods
Model combination:
• baggingPredict(models, X) - Average predictions
• votingClassify(models, X) - Majority vote
• Improves robustness through aggregation
Section 32: Model Evaluation Metrics
Performance assessment:
• mse(actual, predicted) / rmse / mae / mape - Regression metrics
• accuracy(actual, predicted) - Classification accuracy
• precision / recall / f1Score - Binary classification metrics
• confusionMatrix(actual, predicted, nClasses) - Multi-class evaluation
• rSquared(actual, predicted) / adjustedRSquared - Goodness of fit
Section 33: Cross-Validation
Model validation:
• trainTestSplit(X, y, trainRatio) - Random split
• Foundation for walk-forward validation
Section 34: Trading Convenience Functions
Trading-specific utilities:
• priceMatrix(length) - OHLC data as matrix
• logReturns(length) - Log return series
• rollingSlope(src, length) - Linear trend strength
• kalmanFilter(src, processNoise, measurementNoise) - Filtered price
• kalmanFilter2D(src, ...) - Price with velocity estimate
• adaptiveMA(src, sensitivity) - Kalman-based adaptive moving average
• volAdjMomentum(src, length) - Volatility-normalized momentum
• detectSRLevels(length, nLevels) - K-means based S/R detection
• buildFeatures(src, lengths) - Multi-timeframe feature construction
• technicalFeatures(length) - Standard indicator feature set (RSI, MACD, BB, ATR, etc.)
• lagFeatures(src, lags) - Time-lagged features
• sharpeRatio(returns) - Risk-adjusted return measure
• sortinoRatio(returns) - Downside risk-adjusted return
• maxDrawdown(equity) - Maximum peak-to-trough decline
• calmarRatio(returns, equity) - Return/drawdown ratio
• kellyCriterion(winRate, avgWin, avgLoss) - Optimal position sizing
• fractionalKelly(...) - Conservative Kelly sizing
• rollingBeta(assetReturns, benchmarkReturns) - Market exposure
• fractalDimension(data) - Market complexity measure
---
Usage Example
```
//@version=6
indicator("MLMatrixLib example", overlay=true)
import YourUsername/MLMatrixLib/1 as ml

// Create and standardize a feature matrix
matrix<float> X = ml.priceMatrix(50)
X := ml.standardize(X)

// Fit linear regression (y, X_new, features, and newFeature are
// placeholders the user supplies)
ml.LinearRegressionModel model = ml.fitLinearRegression(X, y)
float prediction = ml.predictLinear(model, X_new)

// Kalman filter for smoothing
float smoothedPrice = ml.kalmanFilter(close, 0.01, 1.0)

// Detect support/resistance levels
array<float> levels = ml.detectSRLevels(100, 3)

// K-means clustering for regime detection
ml.KMeansModel km = ml.fitKMeans(features, 3)
int cluster = ml.predictCluster(km, newFeature)

plot(smoothedPrice, "Kalman-smoothed price")
```
---
Technical Notes
• All matrix operations use Pine Script's native matrix<float> type
• Numerical stability ensured through:
- Clamping exponential arguments to prevent overflow
- Division by zero protection with epsilon thresholds
- Iterative algorithms with convergence tolerance
• Designed for bar-by-bar execution in Pine Script's event-driven model
• Compatible with Pine Script v6
---
Disclaimer
This library provides mathematical tools for quantitative analysis. It does not constitute financial advice. Past performance of any algorithm does not guarantee future results. Users are responsible for validating models on their specific use cases and understanding the limitations of each method.