A technique for evaluating model performance on unseen data. In K-fold cross-validation, the dataset is split into K parts; the model is trained on K–1 parts and tested on the held-out part, and this is repeated K times so that each part serves once as the test set. Averaging the K scores gives a robust estimate of generalization. In forex ML, plain K-fold can leak future information into training folds, so time-series cross-validation (e.g. walk-forward splits) is used instead to avoid lookahead bias. Scikit-learn's cross_val_score(model, X, y, cv=K) is a common way to run it. Cross-validation guards against overfitting and makes it more likely that a model which performs well on historical data will also work live.
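A minimal sketch of the walk-forward idea using scikit-learn's TimeSeriesSplit, which keeps every training fold strictly earlier in time than its test fold. The feature matrix, labels, and model here are synthetic stand-ins, not a real forex dataset or strategy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

# Synthetic stand-in for engineered forex features (X) and up/down labels (y).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Standard K-fold would mix past and future observations in the same fold.
# TimeSeriesSplit instead produces expanding walk-forward splits: each test
# fold lies entirely after its training fold, avoiding lookahead bias.
tscv = TimeSeriesSplit(n_splits=5)
scores = cross_val_score(LogisticRegression(), X, y, cv=tscv)

print(scores)        # one accuracy score per walk-forward fold
print(scores.mean()) # average out-of-sample accuracy
```

Passing cv=5 instead of tscv would fall back to shuffled-free but non-temporal K-fold, which is exactly what walk-forward evaluation is meant to avoid on ordered market data.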