Core Module Reference
This section documents the core modules of Cerebrum Forex.
MT5 Connector
Cerebrum Forex - MT5 Connector. Handles connection to MetaTrader 5 and OHLC data extraction.
- class core.mt5_connector.MT5Connector(symbol='EURUSD')[source]
Bases: object
MetaTrader 5 connector for OHLC data extraction.
- Parameters:
symbol (str)
- close_all_positions()[source]
PANIC BUTTON: close all open positions immediately. Returns: (closed_count, error_count).
- preload_history(timeframe=None)[source]
Force MT5 to download the maximum available history for the timeframes. This triggers the broker server to send all cached historical data.
- get_current_server_time()[source]
Get the ABSOLUTE LATEST server time from the last tick/quote. This represents "NOW" on the server.
- get_time_offset()[source]
Calculate the offset in seconds between local time and server time. Positive = local is ahead of the server; negative = local is behind.
- get_market_status()[source]
Check real-time market status for the current symbol via MT5. Returns a dict with "status", "description", and an "is_open" boolean.
- extract_ohlc(timeframe, from_date=None, to_date=None, is_update=False)[source]
Extract OHLC data for a timeframe.
- update_all_ohlc(timeframes=None, callback=None)[source]
Bulk update OHLC for all (or specified) timeframes.
This is a DEDICATED extraction phase that should be called BEFORE training, so that training threads never need to wait for MT5 IPC.
- Parameters:
timeframes (list) – List of TF names to update. Defaults to all standard TFs.
callback (callable) – Optional callback(tf, status, msg) for progress updates.
- Returns:
Dict with {timeframe: candle_count or error}
- get_buffer(timeframe, n=2000)[source]
Get the latest N candles directly for in-memory prediction (FAST BUFFER).
NOTE: this is NOT for training. It fetches a small buffer (e.g. 2000 candles) to allow rapid indicator calculation for real-time signals.
CRITICAL OPTIMIZATION: save_to_disk=False. We do NOT update the massive CSV on every tick; TrainingManager handles historical updates ("Smart Update"). Prediction only needs the in-memory data.
- get_latest_candles(timeframe, n=1000, save_to_disk=True)[source]
Get the latest N candles from MT5 (lightweight).
- load_ohlc_buffer(timeframe, n=5000)[source]
Fast disk load: reads only the last N rows of the CSV. Crucial for large 1m files (200 MB+) to prevent real-time stalls.
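The tail-read idea can be sketched with a bounded deque, which keeps memory proportional to N rather than to the file size (a minimal illustration only; the actual implementation of load_ohlc_buffer may differ):

```python
import csv
from collections import deque

def load_tail_rows(path: str, n: int = 5000) -> list[dict]:
    """Read only the last n data rows of a CSV.

    The deque's maxlen discards older rows as it streams, so a 200 MB
    file never has to fit in memory at once.
    """
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        return list(deque(reader, maxlen=n))
```

Note that the whole file is still scanned once; for truly huge files a seek-from-end strategy would avoid even that, at the cost of more complex parsing.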
Prediction Engine
Cerebrum Forex - Prediction Engine V2. Generates signals using the Congress Engine weighted ensemble.
Key features:
- Weighted aggregation via Congress Engine
- Regime-adaptive weights
- Full audit trail
- Standardized model outputs
- class core.prediction_engine.PredictionEngine(training_manager, mt5_connector)[source]
Bases: object
Generate predictions using the Congress Engine weighted ensemble.
V2 features:
- Congress Engine for weighted aggregation
- Regime-aware weight adjustment
- Full audit trail for explainability
- Drift detection monitoring
- Parameters:
training_manager (TrainingManager)
mt5_connector (MT5Connector)
- add_callback(callback)[source]
Add a callback for prediction status updates.
- Parameters:
callback (Callable)
- predict_timeframe(timeframe, df=None)[source]
Generate a prediction for a single timeframe using the Congress Engine.
- reset_engine()[source]
[PR-400] Reset the prediction engine state. Flushes all caches and buffers.
Training Manager
Cerebrum Forex - Training Manager V2. Orchestrates model training with KPI-based labeling, walk-forward validation, and Congress Engine integration.
Key features:
- KPI-based labeling (Noble Safe Range)
- Quantile feature normalization
- Walk-forward cross-validation
- Weighted loss (NEUTRAL penalized)
- Drift detection for conditional retraining
- class core.training_manager.TrainingLogBridge(tm_callback)[source]
Bases: Handler
Bridge for standard logging -> Training Manager UI callbacks.
- exception core.training_manager.RecoveryRequired[source]
Bases: Exception
Internal signal to restart timeframe training in Global mode.
- class core.training_manager.TrainingManager(mt5_connector)[source]
Bases: object
Manages training of all models across all timeframes.
V2 features:
- KPI-based labeling using Noble Safe Range
- Walk-forward validation (no shuffle)
- Quantile normalization for features
- Weighted loss penalizing NEUTRAL
- Drift detection for conditional retraining
- Parameters:
mt5_connector (MT5Connector)
- add_callback(callback)[source]
Add a callback for training status updates.
- Parameters:
callback (Callable)
- get_normalizer(timeframe)[source]
Get the feature normalizer for a timeframe. Auto-loads from disk if not already in memory/fitted.
- Return type:
FeatureNormalizer
- Parameters:
timeframe (str)
- should_train_incremental(timeframe)[source]
Determine whether training should be incremental.
Checks if models have been trained recently (within 6 hours). If so, returns (True, last_training_date) to train only on new data; otherwise returns (False, None) for full training.
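The recency check described above can be sketched as follows. The 6-hour window comes from the docstring; passing the last training date in directly is an illustrative simplification (the real method presumably looks it up from stored metadata):

```python
from datetime import datetime, timedelta

INCREMENTAL_WINDOW = timedelta(hours=6)  # window stated in the docstring

def should_train_incremental(last_training_date, now=None):
    """Return (True, last_training_date) when the last run is fresh enough
    to train only on new data, else (False, None) to force a full retrain."""
    now = now or datetime.now()
    if last_training_date is not None and now - last_training_date <= INCREMENTAL_WINDOW:
        return True, last_training_date
    return False, None
```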
- prepare_training_data(timeframe, from_date=None, use_kpi_labels=True)[source]
Prepare features and labels for training.
- train_timeframe(timeframe, force_global=False, precomputed_data=None, _recovery_depth=0, use_walk_forward=False)[source]
Train all models for a single timeframe (automatic recovery with a depth limit). Accepts optional precomputed_data to support the Hybrid Parallel flow.
- train_all(force_global=False)[source]
HyperSafe Training (V4): sequential, memory-safe batch training.
Refactored to avoid OOM (out-of-memory) crashes by processing timeframes one by one and releasing memory immediately after each cycle.
- Parameters:
force_global (bool)
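The one-by-one pattern behind train_all can be sketched like this. train_one is a hypothetical per-timeframe trainer standing in for the heavy data-prep and model-fit step; the real method also handles recovery and progress callbacks:

```python
import gc

def train_all_sequential(timeframes, train_one):
    """Train each timeframe in turn, discarding heavy intermediates before
    moving on, so peak memory stays bounded to a single timeframe."""
    statuses = {}
    for tf in timeframes:
        data = train_one(tf)                     # heavy step: data prep + fits
        statuses[tf] = "ok" if data is not None else "error"
        del data                                 # drop the reference immediately
        gc.collect()                             # reclaim before the next cycle
    return statuses
```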
Feature Engine
Cerebrum Forex - Feature Engine. Calculates technical indicators from OHLC data.
- class core.feature_engine.FeatureEngine[source]
Bases: object
Calculate technical indicators and features for ML models.
- calculate_all_features(df, timeframe='1h', progress_callback=None, is_training=False)[source]
Calculate all technical indicators with Turbo I/O (Parquet caching).
- Return type:
DataFrame
- Returns:
DataFrame with 1000+ calculated features
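The caching idea is to key the feature file on the input data so recomputation is skipped when nothing changed. A minimal sketch (the key scheme and helper name are illustrative assumptions; pandas pickle I/O is used here for a dependency-free example, while the module itself caches to Parquet):

```python
import hashlib
from pathlib import Path

import pandas as pd

def cached_features(df: pd.DataFrame, timeframe: str, compute, cache_dir: Path) -> pd.DataFrame:
    """Reuse previously computed features when the input data is unchanged."""
    # Key the cache on the raw data content plus the timeframe
    key = hashlib.sha256(
        pd.util.hash_pandas_object(df).values.tobytes() + timeframe.encode()
    ).hexdigest()[:16]
    path = cache_dir / f"features_{timeframe}_{key}.pkl"
    if path.exists():
        return pd.read_pickle(path)   # fast path: skip the indicator pass
    features = compute(df)            # slow path: full feature computation
    features.to_pickle(path)
    return features
```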
App Controller
Cerebrum Forex - Application Controller. Central controller connecting all components.
- class core.app_controller.AppController[source]
Bases: object
Application controller connecting all components.
- property mt5
- property training_manager
- property prediction_engine
- property scheduler
- extract_all(is_update=True)[source]
Extract OHLC data for all timeframes.
- Parameters:
is_update (bool)
- predict_manual(timeframe)[source]
Execute a MANUAL prediction for a specific timeframe. No scheduling, no background loop; direct execution.
- save_settings(settings)[source]
Save settings to the database and sync them to memory.
- Parameters:
settings (dict)
- save_congress_config(config)[source]
Save the Congress AI configuration to JSON.
- Parameters:
config (dict)
Noble KPI
Noble Safe Range KPI - Core Module
Scientifically validated price containment model for EUR/USD.
Version: 3.0. Date: December 2025. Validation: 99.9% containment on 6.5M+ candles (2000-2025).
References
Wilder, J.W. (1978). New Concepts in Technical Trading Systems.
Bollerslev, T. (1986). GARCH models.
Embrechts, P. (1997). Extreme Value Theory.
- class core.noble_kpi.MarketRegime(value)[source]
Bases: Enum
Market regime classification.
- NORMAL = 'normal'
- LOW_VOL = 'low_vol'
- HIGH_VOL = 'high_vol'
- CRISIS = 'crisis'
- core.noble_kpi.calculate_atr_series(high, low, close, period=14)[source]
Vectorized ATR calculation for Series.
- Return type:
Series
- Parameters:
high (Series)
low (Series)
close (Series)
period (int)
- core.noble_kpi.detect_regime_series(df, timeframe=None)[source]
Detect the market regime ('trend' or 'range') for a whole series. Compatible with CongressEngine expectations.
- Return type:
Series
- Parameters:
df (DataFrame)
timeframe (str)
- core.noble_kpi.calculate_atr(df, period=14)[source]
Calculate Average True Range (Wilder, 1978).
- Parameters:
df (DataFrame) – DataFrame with 'high', 'low', 'close' columns
period (int) – ATR period (default 14)
- Return type:
Series
- Returns:
Series of ATR values
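Wilder's ATR can be sketched in pandas as follows. This is a minimal, plausible version of what calculate_atr computes, not the module's exact code; the key detail is that Wilder's smoothing is an exponential average with alpha = 1/period:

```python
import pandas as pd

def calculate_atr(df: pd.DataFrame, period: int = 14) -> pd.Series:
    """Average True Range per Wilder (1978)."""
    prev_close = df["close"].shift(1)
    # True Range: the widest of the three candidate ranges per candle
    tr = pd.concat([
        df["high"] - df["low"],
        (df["high"] - prev_close).abs(),
        (df["low"] - prev_close).abs(),
    ], axis=1).max(axis=1)
    # Wilder smoothing == EMA with alpha = 1/period (recursive, not adjusted)
    return tr.ewm(alpha=1 / period, adjust=False, min_periods=period).mean()
```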
- core.noble_kpi.detect_regime(df, lookback=100)[source]
Detect the current market regime based on volatility.
- Returns:
MarketRegime enum value
- Parameters:
df (DataFrame)
lookback (int)
- core.noble_kpi.noble_safe_range(open_price, atr, timeframe, regime=MarketRegime.NORMAL)[source]
Calculate Noble Safe Range bounds.
The Noble Equation guarantees 99.9% price containment based on 25 years of empirical validation (2000-2025).
- Parameters:
open_price (float) – Opening price of the current candle
atr (float) – 14-period Average True Range
timeframe (str) – One of '1m', '5m', '15m', '30m', '1h', '4h', '1d', '1w', 'MN'
regime (MarketRegime) – Current market regime (optional)
- Returns:
(safe_low, safe_high) tuple
Example
>>> safe_low, safe_high = noble_safe_range(1.0520, 0.0015, '4h')
>>> print(f"Safe Range: {safe_low:.5f} - {safe_high:.5f}")
Safe Range: 1.04570 - 1.05830
- core.noble_kpi.calculate_position(current_price, safe_low, safe_high)[source]
Calculate the price position within the Safe Range.
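A plausible reading of calculate_position is a normalized coordinate inside the Safe Range: 0.0 at safe_low, 1.0 at safe_high. The clipping and the exact scale are assumptions, not confirmed behavior:

```python
def calculate_position(current_price: float, safe_low: float, safe_high: float) -> float:
    """Normalize the current price to [0, 1] within the Safe Range."""
    span = safe_high - safe_low
    if span <= 0:
        return 0.5  # degenerate range: treat as mid-point
    position = (current_price - safe_low) / span
    return min(1.0, max(0.0, position))  # clip prices outside the range
```

With the noble_safe_range example values above (1.04570 - 1.05830), the opening price 1.0520 sits exactly at the midpoint.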
- core.noble_kpi.get_recommendation(position)[source]
Get a trading recommendation based on the position.
- core.noble_kpi.analyze_timeframe(df, timeframe)[source]
Complete Safe Range analysis for a timeframe.
Congress Engine
Cerebrum Forex - Congress Engine. Weighted ensemble aggregation with regime detection.
The Congress Engine combines predictions from multiple specialized models using adaptive weights based on market regime and timeframe.
Key features:
- Weighted aggregation with adaptive threshold
- Regime detection (trend vs range)
- Full audit trail for explainability
- Dynamic weight adjustment based on model performance
- class core.congress_engine.AladdinPersona(value)[source]
Bases: Enum
Aladdin "Council of Experts" Personas.
- QUANT = 'quant'
- ARCHIVIST = 'archivist'
- FUTURIST = 'futurist'
- GUARDIAN = 'guardian'
- LEADER = 'leader'
- SENTINEL = 'sentinel'
- class core.congress_engine.ModelPrediction(model_name, model_role, score, confidence, regime, timeframe, timestamp=<factory>)[source]
Bases: object
Standardized model output.
- model_role: AladdinPersona
- class core.congress_engine.CongressDecision(final_signal, final_score, final_confidence, detected_regime, certainty_boost=0.0, flux_boost=0.0, trend_certainty=0.0, duration_tf5=0, threshold=0.0, model_predictions=<factory>, flux_details=<factory>, weights_applied=<factory>, timestamp=<factory>, override_type=None)[source]
Bases: object
Final decision with a full audit trail.
- model_predictions: List[ModelPrediction]
- class core.congress_engine.CongressEngine(custom_weights=None)[source]
Bases: object
Weighted ensemble aggregation with the Aladdin "Council of Experts".
The Congress Engine combines model predictions using:
FinalScore = Σ(w_i(regime, TF) × score_i × confidence_i) + CertaintyBoost
Weights are adaptive based on:
- Market regime (trend vs range)
- Aladdin Persona (expert specialization)
- Parameters:
custom_weights (Dict | None)
- CONGRESS_WEIGHTS = {'range': {'archivist': 0.2, 'futurist': 0.25, 'leader': 0.35, 'quant': 0.2}, 'trend': {'archivist': 0.2, 'futurist': 0.15, 'leader': 0.45, 'quant': 0.2}}
- DEFAULT_WEIGHTS = {'range': {'archivist': 0.2, 'futurist': 0.25, 'leader': 0.35, 'quant': 0.2}, 'trend': {'archivist': 0.2, 'futurist': 0.15, 'leader': 0.45, 'quant': 0.2}}
- BASE_THRESHOLD = 0.07
- LAMBDA = 1.8
- KPI_GAMMA = 0.5
- CERTAINTY_BONUS = 0.15
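The FinalScore formula above can be evaluated directly. A sketch, assuming scores in [-1, 1] and confidences in [0, 1], with the trend-regime weights taken from CONGRESS_WEIGHTS (the tuple-based prediction shape is a simplification of ModelPrediction):

```python
from typing import Dict, List, Tuple

# Trend-regime weights from CONGRESS_WEIGHTS above
TREND_WEIGHTS = {"archivist": 0.2, "futurist": 0.15, "leader": 0.45, "quant": 0.2}

def final_score(predictions: List[Tuple[str, float, float]],
                weights: Dict[str, float],
                certainty_boost: float = 0.0) -> float:
    """FinalScore = sum(w_i * score_i * confidence_i) + CertaintyBoost.

    Each prediction is (persona, score, confidence); unknown personas
    contribute nothing (weight 0).
    """
    return sum(weights.get(role, 0.0) * score * confidence
               for role, score, confidence in predictions) + certainty_boost
```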
- detect_market_regime(df, adx_threshold=20.0)[source]
Detect the market regime using ADX and volatility.
- calculate_adaptive_threshold(df, timeframe)[source]
Calculate an adaptive decision threshold based on volatility.
Higher volatility = higher threshold (only more confident signals pass).
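One plausible shape for such a threshold, using the BASE_THRESHOLD and LAMBDA constants documented on CongressEngine. The volatility-ratio formula itself is an assumption for illustration, not the module's actual equation:

```python
BASE_THRESHOLD = 0.07   # from CongressEngine.BASE_THRESHOLD
LAMBDA = 1.8            # from CongressEngine.LAMBDA

def adaptive_threshold(current_vol: float, baseline_vol: float) -> float:
    """Scale the decision threshold up when volatility exceeds its baseline,
    so only more confident signals clear it in turbulent markets."""
    if baseline_vol <= 0:
        return BASE_THRESHOLD
    excess = max(0.0, current_vol / baseline_vol - 1.0)
    return BASE_THRESHOLD * (1.0 + LAMBDA * excess)
```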
- aggregate(predictions, df, timeframe, smc_signals=None, flux_boost=0.0, flux_details=None)[source]
Aggregate model predictions into the final signal, with the new Flux Boost (physics) integration.
- simple_aggregate(predictions, timeframe='1h')[source]
Simple weighted aggregation using config.json weights.
This is a simpler interface for the prediction engine that uses model names (xgboost, lightgbm, randomforest, stacking) instead of ModelRole.
- make_decision_batch(predictions_df, regime_series, atr_series, timeframe, noble_bias_series=None)[source]
Vectorized version of make_decision for backtesting.
- Returns:
(signals, scores, confidences, thresholds) as numpy arrays
- aggregate_with_regime(predictions, df, timeframe='1h')[source]
Weighted aggregation with regime classification.
Combines simple_aggregate with RegimeClassifier output.
- full_aggregate(predictions, df, timeframe='1h', current_price=None, safe_low=None, safe_high=None)[source]
Complete prediction pipeline with all features.
Combines:
1. Weighted aggregation (configurable weights)
2. Regime classification
3. Risk filter validation (KPI bounds, volatility, momentum)
Model Scorerο
- class core.model_scorer.PredictionOutcome(model_name, timeframe, predicted_signal, actual_outcome, confidence, timestamp=<factory>)[source]ο
Bases:
objectSingle prediction outcome for tracking
- Parameters:
- class core.model_scorer.ModelScorer(data_dir=None)[source]ο
Bases:
objectDynamic model performance scorer.
Tracks prediction accuracy and adjusts weights based on recent performance. Uses exponential moving average for smooth weight transitions.
- Parameters:
data_dir (Path)
- HISTORY_SIZE = 100ο
- MIN_SAMPLES = 10ο
- DYNAMIC_WEIGHT = 0.3ο
- EMA_ALPHA = 0.1ο
- record_outcome(model_name, timeframe, predicted_signal, actual_outcome, confidence=0.5)[source]
Record a prediction outcome for a model.
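The EMA smoothing mentioned above, with EMA_ALPHA = 0.1 from the class constants, can be sketched as a running accuracy update. Scoring an outcome as a simple predicted-equals-actual hit is an illustrative assumption:

```python
EMA_ALPHA = 0.1  # from ModelScorer.EMA_ALPHA

def update_accuracy(current_accuracy: float, predicted_signal: str,
                    actual_outcome: str) -> float:
    """Blend the newest outcome into the running accuracy estimate:
    new = alpha * outcome + (1 - alpha) * old.

    A small alpha keeps weight transitions smooth instead of whipsawing
    on a single good or bad prediction.
    """
    outcome = 1.0 if predicted_signal == actual_outcome else 0.0
    return EMA_ALPHA * outcome + (1.0 - EMA_ALPHA) * current_accuracy
```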