Core Module Reference

This section documents the core modules of Cerebrum Forex.

MT5 Connector

Cerebrum Forex - MT5 Connector. Handles connection to MetaTrader 5 and OHLC data extraction.

class core.mt5_connector.MT5Connector(symbol='EURUSD')[source]

Bases: object

MetaTrader 5 connector for OHLC data extraction

Parameters:

symbol (str)

add_callback(callback)[source]

Add callback for status updates

Parameters:

callback (Callable)

close_all_positions()[source]

PANIC BUTTON: Close all open positions immediately. Returns: (closed_count, error_count)

Return type:

Tuple[int, int]

connect(timeout=5.0)[source]

Connect to MT5 terminal with timeout protection

Return type:

bool

Parameters:

timeout (float)

check_health()[source]

Detailed connection diagnostic

Return type:

tuple[bool, str]

disconnect()[source]

Disconnect from MT5

test_connection()[source]

Test MT5 connection

Return type:

bool

preload_history(timeframe=None)[source]

Force MT5 to download maximum available history for timeframes. This triggers the broker server to send all cached historical data.

Parameters:

timeframe (str) – Specific timeframe to preload, or None for all

Return type:

dict

Returns:

dict with preload results per timeframe

get_terminal_data_path()[source]

Get MT5 terminal data path (AppData directory)

Return type:

Optional[str]

get_current_server_time()[source]

Get the ABSOLUTE LATEST server time from the last tick/quote. This represents 'NOW' on the server.

Return type:

datetime

get_server_time()[source]

Get current server time (Legacy/Optional Wrapper)

Return type:

Optional[datetime]

get_time_offset()[source]

Calculate seconds offset between Local Time and Server Time. Positive = Local is ahead of Server. Negative = Local is behind Server.

Return type:

float
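
Example (illustrative sketch of the offset convention above; assumes a reachable MT5 terminal):

from datetime import datetime, timedelta
from core.mt5_connector import get_mt5_connector

connector = get_mt5_connector()
connector.connect(timeout=5.0)
offset = connector.get_time_offset()            # seconds; positive = local clock ahead of server
server_estimate = datetime.now() - timedelta(seconds=offset)
print(f"Approximate server time: {server_estimate}")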

get_account_info()[source]

Get MT5 account information (Non-blocking for UI)

Return type:

Optional[dict]

get_market_status()[source]

Check real-time market status for the current symbol via MT5. Returns a dict with 'status', 'description', and 'is_open' boolean.

Return type:

dict

get_ohlc_path(timeframe)[source]

Get path for OHLC CSV file

Return type:

Path

Parameters:

timeframe (str)

extract_ohlc(timeframe, from_date=None, to_date=None, is_update=False)[source]

Extract OHLC data for a timeframe.

Return type:

Optional[DataFrame]

Parameters:
  • timeframe (str)

  • from_date (datetime | None)

  • to_date (datetime | None)

  • is_update (bool)

update_all_ohlc(timeframes=None, callback=None)[source]

Bulk update OHLC for all (or specified) timeframes.

This is a DEDICATED extraction phase that should be called BEFORE training, so that training threads never need to wait for MT5 IPC.

Parameters:
  • timeframes (list) – List of TF names to update. Defaults to all standard TFs.

  • callback (callable) – Optional callback(tf, status, msg) for progress updates.

Returns:

Dict with {timeframe: candle_count or error}

Return type:

dict
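
Example (a minimal sketch of the dedicated extraction phase; the callback follows the documented callback(tf, status, msg) shape and the timeframe names are illustrative):

from core.mt5_connector import get_mt5_connector

def on_progress(tf, status, msg):
    # Simple console progress reporter.
    print(f"[{tf}] {status}: {msg}")

connector = get_mt5_connector()
if connector.connect(timeout=5.0):
    results = connector.update_all_ohlc(timeframes=['1h', '4h'], callback=on_progress)
    print(results)   # per-timeframe candle_count or error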

get_buffer(timeframe, n=2000)[source]

Get latest N candles directly for In-Memory prediction (FAST BUFFER).

NOTE: This is NOT for training. It fetches a small buffer (e.g. 2000 candles) to allow rapid indicator calculation for real-time signals.

CRITICAL OPTIMIZATION: save_to_disk=False. We do NOT update the massive CSV on every tick. TrainingManager handles historical updates ("Smart Update"). Prediction only needs the in-memory data.

Return type:

Optional[DataFrame]

Parameters:
  • timeframe (str)

  • n (int)

get_latest_candles(timeframe, n=1000, save_to_disk=True)[source]

Get latest N candles from MT5 (lightweight)

Return type:

Optional[DataFrame]

Parameters:
  • timeframe (str)

  • n (int)

  • save_to_disk (bool)

load_ohlc(timeframe)[source]

Load OHLC data from CSV

Return type:

Optional[DataFrame]

Parameters:

timeframe (str)

load_ohlc_buffer(timeframe, n=5000)[source]

Fast disk load: reads only the last N rows of the CSV. Crucial for large 1m files (200MB+) to prevent real-time stalls.

Return type:

Optional[DataFrame]

Parameters:
  • timeframe (str)

  • n (int)

get_ohlc_status(timeframe)[source]

Get status of OHLC file for a timeframe

Return type:

dict

Parameters:

timeframe (str)

extract_all(is_update=True, from_year=None)[source]

Extract OHLC for all timeframes

Parameters:
  • is_update (bool)

  • from_year (int | None)

core.mt5_connector.get_mt5_connector()[source]

Get or create the global MT5Connector instance.

Return type:

MT5Connector
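
Example (an end-to-end sketch using only the methods documented above; assumes a running MT5 terminal):

from core.mt5_connector import get_mt5_connector

connector = get_mt5_connector()               # module-level singleton
if not connector.connect(timeout=5.0):
    raise RuntimeError("MT5 terminal not reachable")

healthy, detail = connector.check_health()
print(f"Healthy: {healthy} ({detail})")

df = connector.get_buffer('1h', n=2000)       # fast in-memory buffer for real-time signals
if df is not None:
    print(df.tail())

connector.disconnect()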

Prediction Engine

Cerebrum Forex - Prediction Engine V2. Generate signals using Congress Engine weighted ensemble.

Key features:

  • Weighted aggregation via Congress Engine

  • Regime-adaptive weights

  • Full audit trail

  • Standardized model outputs

class core.prediction_engine.PredictionEngine(training_manager, mt5_connector)[source]

Bases: object

Generate predictions using Congress Engine weighted ensemble.

V2 features:

  • Congress Engine for weighted aggregation

  • Regime-aware weight adjustment

  • Full audit trail for explainability

  • Drift detection monitoring

Parameters:
  • training_manager (TrainingManager)

  • mt5_connector (MT5Connector)

add_callback(callback)[source]

Add callback for prediction status updates

Parameters:

callback (Callable)

remove_callback(callback)[source]

Remove callback

Parameters:

callback (Callable)

get_signal_path(timeframe)[source]

Get path to signal CSV file

Return type:

Path

Parameters:

timeframe (str)

get_enabled_models()[source]

Get list of enabled model types from settings

Return type:

List[str]

predict_timeframe(timeframe, df=None)[source]

Generate prediction for a single timeframe using Congress Engine.

Return type:

Optional[dict]

Parameters:
  • timeframe (str)

  • df (DataFrame | None)

load_signal(timeframe)[source]

Load current signal from CSV file

Return type:

Optional[dict]

Parameters:

timeframe (str)

reset_engine()[source]

[PR-400] Reset the prediction engine state. Flushes all caches and buffers.

predict_all()[source]

Generate predictions for all timeframes in parallel (Turbo Prediction)

Return type:

Dict[str, dict]

get_prediction_status(timeframe)[source]

Get prediction status for a timeframe

Return type:

dict

Parameters:

timeframe (str)

get_combined_signal()[source]

Get overall signal combining all timeframes using Congress Engine.

This aggregates signals across timeframes with timeframe-aware weighting.

Return type:

dict

get_congress_history(limit=50)[source]

Get recent Congress decision history

Return type:

List[dict]

Parameters:

limit (int)
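
Example (a sketch of how the pieces above fit together; assumes trained models are already saved on disk):

from core.mt5_connector import get_mt5_connector
from core.training_manager import TrainingManager
from core.prediction_engine import PredictionEngine

mt5 = get_mt5_connector()
mt5.connect(timeout=5.0)

tm = TrainingManager(mt5)
tm.load_all_models()

engine = PredictionEngine(tm, mt5)
signals = engine.predict_all()                # {timeframe: prediction dict}
combined = engine.get_combined_signal()       # cross-timeframe aggregation
print(combined)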

Training Manager

Cerebrum Forex - Training Manager V2. Orchestrates model training with KPI-based labeling, walk-forward validation, and Congress Engine integration.

Key features:

  • KPI-based labeling (Noble Safe Range)

  • Quantile feature normalization

  • Walk-forward cross-validation

  • Weighted loss (NEUTRAL penalized)

  • Drift detection for conditional retraining

class core.training_manager.TrainingLogBridge(tm_callback)[source]

Bases: Handler

Bridge for standard logging -> Training Manager UI callbacks

emit(record)[source]

Emit a logging record by forwarding it to the Training Manager UI callback (overrides logging.Handler.emit).

exception core.training_manager.RecoveryRequired[source]

Bases: Exception

Internal signal to restart timeframe training in Global mode

class core.training_manager.TrainingManager(mt5_connector)[source]

Bases: object

Manages training of all models across all timeframes.

V2 features:

  • KPI-based labeling using Noble Safe Range

  • Walk-forward validation (no shuffle)

  • Quantile normalization for features

  • Weighted loss penalizing NEUTRAL

  • Drift detection for conditional retraining

Parameters:

mt5_connector (MT5Connector)

add_callback(callback)[source]

Add callback for training status updates

Parameters:

callback (Callable)

get_model(timeframe, model_type)[source]

Get a specific model with LRU loading/unloading logic.

Parameters:
  • timeframe (str)

  • model_type (str)

get_normalizer(timeframe)[source]

Get feature normalizer for a timeframe. Auto-loads from disk if not already in memory/fitted.

Return type:

FeatureNormalizer

Parameters:

timeframe (str)

get_training_status(timeframe)[source]

Get training status for a timeframe

Return type:

dict

Parameters:

timeframe (str)

should_train_incremental(timeframe)[source]

Determine if training should be incremental.

Checks if models have been trained recently (within 6 hours). If so, returns (True, last_training_date) to train only on new data. Otherwise returns (False, None) for full training.

Return type:

Tuple[bool, Optional[datetime]]

Returns:

Tuple of (is_incremental, train_from_date)

Parameters:

timeframe (str)
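
Example (a small sketch of the incremental check; `tm` is assumed to be a TrainingManager instance):

is_incremental, train_from = tm.should_train_incremental('1h')
if is_incremental:
    # Train only on data newer than the last run (within the 6-hour window).
    X, y, n, feature_cols, raw_df = tm.prepare_training_data('1h', from_date=train_from)
else:
    X, y, n, feature_cols, raw_df = tm.prepare_training_data('1h')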

prepare_training_data(timeframe, from_date=None, use_kpi_labels=True)[source]

Prepare features and labels for training.

Return type:

Tuple[Optional[ndarray], Optional[ndarray], int, List[str], Optional[DataFrame]]

Parameters:
  • timeframe (str)

  • from_date (datetime | None)

  • use_kpi_labels (bool)

train_timeframe(timeframe, force_global=False, precomputed_data=None, _recovery_depth=0, use_walk_forward=False)[source]

Train all models for a single timeframe, with automatic recovery (depth-limited). Accepts optional precomputed_data to support the Hybrid Parallel flow.

Parameters:
  • use_walk_forward (bool) – If True, run Walk-Forward CV before normal training to log fold metrics.

  • timeframe (str)

  • force_global (bool)

  • precomputed_data (tuple)

  • _recovery_depth (int)

Return type:

dict

train_all(force_global=False)[source]

HyperSafe Training (V4): Sequential, Memory-Safe Batch Training.

Refactored to avoid OOM (Out of Memory) crashes by processing timeframes one-by-one and releasing memory immediately after each cycle.

Parameters:

force_global (bool)

train_with_drift_check(timeframe)[source]

Train only if drift is detected.

Parameters:

timeframe (str) – Timeframe to check and potentially train

Return type:

dict

Returns:

Dict with drift status and training results

start_scheduled_training(interval_hours=12)[source]

Start scheduled training in background thread

Parameters:

interval_hours (int)

stop_scheduled_training()[source]

Stop scheduled training

load_all_models()[source]

Load all saved models from disk
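
Example (a workflow sketch combining the methods above; not a prescribed sequence):

from core.mt5_connector import get_mt5_connector
from core.training_manager import TrainingManager

mt5 = get_mt5_connector()
mt5.connect(timeout=5.0)
mt5.update_all_ohlc()                          # dedicated extraction phase before training

tm = TrainingManager(mt5)
tm.add_callback(lambda *args: print(args))     # surface status updates
tm.train_all(force_global=False)               # sequential, memory-safe batch training
drift_result = tm.train_with_drift_check('1h') # retrain a timeframe only if drift is detected
tm.start_scheduled_training(interval_hours=12)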

Feature Engine

Cerebrum Forex - Feature Engine. Calculate technical indicators from OHLC data.

class core.feature_engine.FeatureEngine[source]

Bases: object

Calculate technical indicators and features for ML models

calculate_all_features(df, timeframe='1h', progress_callback=None, is_training=False)[source]

Calculate all technical indicators with Turbo I/O (Parquet Caching).

Parameters:
  • df (DataFrame) – Input OHLC DataFrame

  • timeframe (str) – Current timeframe (for cache naming)

  • progress_callback – Optional UI status function

  • is_training (bool) – If True, bypasses 5000 row limit and uses caching

Return type:

DataFrame

Returns:

DataFrame with 1000+ calculated features

get_feature_columns()[source]

Get list of feature column names (excluding OHLC and time)

Return type:

list

select_best_features(df, target, n_features=30)[source]

Select top N features using XGBoost importance. OPTIMIZED: Uses larger sample (50k) and outlier treatment for robust selection.

Return type:

list

Parameters:
  • df (DataFrame)

  • target

  • n_features (int)
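
Example (a sketch of the feature pipeline; `df` is assumed to be an OHLC DataFrame and `target` the label array):

from core.feature_engine import FeatureEngine

fe = FeatureEngine()
features = fe.calculate_all_features(df, timeframe='1h', is_training=True)
columns = fe.get_feature_columns()             # feature names, excluding OHLC and time
best = fe.select_best_features(features, target, n_features=30)
X = features[best].values                      # training matrix restricted to the selected features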

App Controller

Cerebrum Forex - Application Controller. Central controller connecting all components.

class core.app_controller.AppController[source]

Bases: object

Application controller connecting all components

property mt5
property training_manager
property prediction_engine
property scheduler
start()[source]

Start the application - minimal startup, just connect MT5

stop()[source]

Stop the application (deactivate)

extract_all(is_update=True)[source]

Extract OHLC data for all timeframes

Parameters:

is_update (bool)

predict_manual(timeframe)[source]

Execute a MANUAL prediction for a specific timeframe. No scheduling, no background loop. Direct execution.

Return type:

dict

Parameters:

timeframe (str)

train_all(force_global=False, source='MANUAL')[source]

Train all models for all timeframes

Parameters:
  • force_global (bool)

  • source (str)

predict_all()[source]

Generate predictions for all timeframes

get_signal(timeframe)[source]

Get current signal for a timeframe (from cache)

Return type:

Optional[dict]

Parameters:

timeframe (str)

get_combined_signal()[source]

Get combined signal from all timeframes

Return type:

dict

get_ohlc_status(timeframe)[source]

Get OHLC file status for a timeframe

Return type:

dict

Parameters:

timeframe (str)

get_training_status(timeframe)[source]

Get training status for a timeframe

Return type:

dict

Parameters:

timeframe (str)

get_model(timeframe, model_type)[source]

Get a specific model

Parameters:
  • timeframe (str)

  • model_type (str)

get_settings()[source]

Get current settings

Return type:

dict

get_cached_account_info()[source]

Get account info from cache (Instant)

Return type:

Optional[dict]

save_settings(settings)[source]

Save settings to database and sync to memory

Parameters:

settings (dict)

get_congress_config()[source]

Get Congress AI configuration

Return type:

dict

save_congress_config(config)[source]

Save Congress AI configuration to JSON

Parameters:

config (dict)

update_scheduler_config(enabled, cycle)[source]

Update scheduler configuration and reset timer

Parameters:
  • enabled (bool)

  • cycle

sync_ea_config()[source]

Headless synchronization of EA settings to MT5 folder

log_training_event(source, status, details)[source]

Log a training event to history file with source [AUTO|MANUAL|SYSTEM]

Parameters:
  • source (str)

  • status (str)

  • details

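Example (a minimal controller lifecycle sketch, using only the methods documented above):

from core.app_controller import AppController

app = AppController()
app.start()                                    # minimal startup: connect MT5 only
app.extract_all(is_update=True)                # refresh OHLC for all timeframes
app.train_all(force_global=False, source='MANUAL')
print(app.predict_manual('1h'))                # direct, unscheduled prediction
print(app.get_combined_signal())
app.stop()
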
Noble KPI

Noble Safe Range KPI - Core Module

Scientifically validated price containment model for EUR/USD.

Version: 3.0
Date: December 2025
Validation: 99.9% containment on 6.5M+ candles (2000-2025)

References

  • Wilder, J.W. (1978). New Concepts in Technical Trading Systems

  • Bollerslev, T. (1986). GARCH models

  • Embrechts, P. (1997). Extreme Value Theory

class core.noble_kpi.MarketRegime(value)[source]

Bases: Enum

Market regime classification

NORMAL = 'normal'
LOW_VOL = 'low_vol'
HIGH_VOL = 'high_vol'
CRISIS = 'crisis'
core.noble_kpi.calculate_atr_series(high, low, close, period=14)[source]

Vectorized ATR calculation for Series.

Return type:

Series

Parameters:
  • high (Series)

  • low (Series)

  • close (Series)

  • period (int)

core.noble_kpi.detect_regime_series(df, timeframe=None)[source]

Detect market regime ('trend' or 'range') for a whole series. Compatible with CongressEngine expectations.

Return type:

Series

Parameters:
  • df (DataFrame)

  • timeframe (str)

core.noble_kpi.calculate_atr(df, period=14)[source]

Calculate Average True Range (Wilder, 1978).

Parameters:
  • df (DataFrame) – DataFrame with 'high', 'low', 'close' columns

  • period (int) – ATR period (default 14)

Return type:

Series

Returns:

Series of ATR values

core.noble_kpi.detect_regime(df, lookback=100)[source]

Detect current market regime based on volatility.

Return type:

MarketRegime

Returns:

MarketRegime enum value

Parameters:
  • df (DataFrame)

  • lookback (int)

core.noble_kpi.noble_safe_range(open_price, atr, timeframe, regime=MarketRegime.NORMAL)[source]

Calculate Noble Safe Range bounds.

The Noble Equation guarantees 99.9% price containment based on 25 years of empirical validation (2000-2025).

Parameters:
  • open_price (float) – Opening price of current candle

  • atr (float) – 14-period Average True Range

  • timeframe (str) – One of '1m', '5m', '15m', '30m', '1h', '4h', '1d', '1w', 'MN'

  • regime (MarketRegime) – Current market regime (optional)

Return type:

Tuple[float, float]

Returns:

(safe_low, safe_high) tuple

Example

>>> safe_low, safe_high = noble_safe_range(1.0520, 0.0015, '4h')
>>> print(f"Safe Range: {safe_low:.5f} - {safe_high:.5f}")
Safe Range: 1.04570 - 1.05830
core.noble_kpi.calculate_position(current_price, safe_low, safe_high)[source]

Calculate price position within Safe Range.

Return type:

float

Returns:

Position as a percentage (0-100): 0% = at Safe Low, 100% = at Safe High

Parameters:
  • current_price (float)

  • safe_low (float)

  • safe_high (float)

core.noble_kpi.get_recommendation(position)[source]

Get trading recommendation based on position.

Parameters:

position (float) – Price position in range (0-100)

Return type:

Tuple[str, str]

Returns:

(signal, description) tuple
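
Example (a sketch chaining the functions above; `df` is assumed to be an OHLC DataFrame with 'open', 'high', 'low', 'close' columns):

from core.noble_kpi import (calculate_atr, noble_safe_range,
                            calculate_position, get_recommendation)

atr = calculate_atr(df, period=14).iloc[-1]
safe_low, safe_high = noble_safe_range(df['open'].iloc[-1], atr, '1h')
position = calculate_position(df['close'].iloc[-1], safe_low, safe_high)
signal, description = get_recommendation(position)
print(f"{signal}: {description} (position {position:.1f}%)")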

core.noble_kpi.analyze_timeframe(df, timeframe)[source]

Complete Safe Range analysis for a timeframe.

Parameters:
  • df (DataFrame) – DataFrame with OHLC data

  • timeframe (str) – Timeframe string

Return type:

Dict

Returns:

Dictionary with all analysis results

core.noble_kpi.validate_kpi(df, timeframe)[source]

Validate KPI performance on historical data.

Returns breach rate and success metrics.

Return type:

Dict

Parameters:
  • df (DataFrame)

  • timeframe (str)

core.noble_kpi.calculate_noble_bias(row, timeframe)[source]

Calculate Noble Bias (Guardian Force) for a single row. Returns a float added to the prediction score (e.g. +0.05).

Return type:

float

Parameters:
  • row (Series)

  • timeframe (str)

Congress Engine

Cerebrum Forex - Congress Engine. Weighted ensemble aggregation with regime detection.

The Congress Engine combines predictions from multiple specialized models using adaptive weights based on market regime and timeframe.

Key features:

  • Weighted aggregation with adaptive threshold

  • Regime detection (trend vs range)

  • Full audit trail for explainability

  • Dynamic weight adjustment based on model performance

class core.congress_engine.AladdinPersona(value)[source]

Bases: Enum

Aladdin 'Council of Experts' Personas

QUANT = 'quant'
ARCHIVIST = 'archivist'
FUTURIST = 'futurist'
GUARDIAN = 'guardian'
LEADER = 'leader'
SENTINEL = 'sentinel'
class core.congress_engine.ModelPrediction(model_name, model_role, score, confidence, regime, timeframe, timestamp=<factory>)[source]

Bases: object

Standardized model output

model_name: str
model_role: AladdinPersona
score: float
confidence: float
regime: str
timeframe: str
timestamp: datetime
class core.congress_engine.CongressDecision(final_signal, final_score, final_confidence, detected_regime, certainty_boost=0.0, flux_boost=0.0, trend_certainty=0.0, duration_tf5=0, threshold=0.0, model_predictions=<factory>, flux_details=<factory>, weights_applied=<factory>, timestamp=<factory>, override_type=None)[source]

Bases: object

Final decision with full audit trail

final_signal: str
final_score: float
final_confidence: float
detected_regime: str
certainty_boost: float = 0.0
flux_boost: float = 0.0
trend_certainty: float = 0.0
duration_tf5: int = 0
threshold: float = 0.0
model_predictions: List[ModelPrediction]
flux_details: Dict
weights_applied: Dict[str, float]
timestamp: datetime
override_type: Optional[str] = None
to_dict()[source]

Convert to dictionary for storage

Return type:

dict

class core.congress_engine.CongressEngine(custom_weights=None)[source]

Bases: object

Weighted ensemble aggregation with Aladdin 'Council of Experts'.

The Congress Engine combines model predictions using: FinalScore = Σ(w_i(regime, TF) × score_i × confidence_i) + CertaintyBoost

Weights are adaptive based on:

  • Market regime (trend vs range)

  • Aladdin Persona (Expert Specialization)

Parameters:

custom_weights (Dict | None)

CONGRESS_WEIGHTS = {'range': {'archivist': 0.2, 'futurist': 0.25, 'leader': 0.35, 'quant': 0.2}, 'trend': {'archivist': 0.2, 'futurist': 0.15, 'leader': 0.45, 'quant': 0.2}}
DEFAULT_WEIGHTS = {'range': {'archivist': 0.2, 'futurist': 0.25, 'leader': 0.35, 'quant': 0.2}, 'trend': {'archivist': 0.2, 'futurist': 0.15, 'leader': 0.45, 'quant': 0.2}}
BASE_THRESHOLD = 0.07
LAMBDA = 1.8
KPI_GAMMA = 0.5
CERTAINTY_BONUS = 0.15
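
Example (a purely numerical illustration of the FinalScore formula using the 'trend' weight profile above; scores and confidences are made up):

weights     = {'leader': 0.45, 'quant': 0.20, 'archivist': 0.20, 'futurist': 0.15}  # 'trend' profile
scores      = {'leader': +0.8, 'quant': +0.5, 'archivist': -0.2, 'futurist': +0.4}  # illustrative
confidences = {'leader':  0.9, 'quant':  0.7, 'archivist':  0.6, 'futurist':  0.5}  # illustrative

final_score = sum(weights[p] * scores[p] * confidences[p] for p in weights)
# 0.324 + 0.070 - 0.024 + 0.030 = 0.400 (before any CertaintyBoost)
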
__init__(custom_weights=None)[source]

Initialize Congress Engine.

Parameters:

custom_weights (Optional[Dict]) – Optional custom weight configuration

detect_market_regime(df, adx_threshold=20.0)[source]

Detect market regime using ADX and volatility.

Parameters:
  • df (DataFrame) – DataFrame with OHLC data

  • adx_threshold (float) – ADX threshold for trend detection

Return type:

str

Returns:

'trend' or 'range'

calculate_adaptive_threshold(df, timeframe)[source]

Calculate adaptive decision threshold based on volatility.

Higher volatility = higher threshold (more confident signals only)

Parameters:
  • df (DataFrame) – DataFrame with OHLC data

  • timeframe (str) – Current timeframe

Return type:

float

Returns:

Threshold θ in [0.2, 0.5]

get_weights(regime, timeframe)[source]

Get weights for current regime and timeframe.

Return type:

Dict[str, float]

Parameters:
  • regime (str)

  • timeframe (str)

aggregate(predictions, df, timeframe, smc_signals=None, flux_boost=0.0, flux_details=None)[source]

Aggregate model predictions into a final signal, with Flux Boost (Physics) integration.

Return type:

CongressDecision

Parameters:
get_decision_history(limit=100)[source]

Get recent decision history as dicts

Return type:

List[dict]

Parameters:

limit (int)

simple_aggregate(predictions, timeframe='1h')[source]

Simple weighted aggregation using config.json weights.

This is a simpler interface for the prediction engine that uses model names (xgboost, lightgbm, randomforest, stacking) instead of ModelRole.

Parameters:
  • predictions (List[Dict]) – List of {model, signal, confidence}

  • timeframe (str) – For timeframe-specific weight profiles

Return type:

Dict

Returns:

{signal, confidence, score, details}
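
Example (a usage sketch with the documented {model, signal, confidence} input shape; values are illustrative):

from core.congress_engine import get_congress_engine

engine = get_congress_engine()
predictions = [
    {'model': 'xgboost',      'signal': 'BUY',     'confidence': 0.72},
    {'model': 'lightgbm',     'signal': 'BUY',     'confidence': 0.64},
    {'model': 'randomforest', 'signal': 'NEUTRAL', 'confidence': 0.51},
    {'model': 'stacking',     'signal': 'BUY',     'confidence': 0.58},
]
result = engine.simple_aggregate(predictions, timeframe='1h')
print(result['signal'], result['confidence'])    # keys per the Returns description above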

make_decision_batch(predictions_df, regime_series, atr_series, timeframe, noble_bias_series=None)[source]

Vectorized version of make_decision for Backtesting.

Parameters:
  • predictions_df (DataFrame) – DataFrame where cols are model names, rows are samples

  • regime_series (Series) – Series of 'trend' or 'range' strings

  • atr_series (Series) – Series of ATR values for adaptive threshold

  • timeframe (str) – Timeframe string

  • noble_bias_series (ndarray | None)

Return type:

Tuple[ndarray, ndarray, ndarray, ndarray]

Returns:

(signals, scores, confidences, thresholds) as numpy arrays

aggregate_with_regime(predictions, df, timeframe='1h')[source]

Weighted aggregation with regime classification.

Combines simple_aggregate with RegimeClassifier output.

Parameters:
  • predictions (List[Dict]) – List of {model, signal, confidence}

  • df (DataFrame) – OHLC DataFrame for regime detection

  • timeframe (str) – Current timeframe

Return type:

Dict

Returns:

Dict with signal, regime, and full audit trail

full_aggregate(predictions, df, timeframe='1h', current_price=None, safe_low=None, safe_high=None)[source]

Complete prediction pipeline with all features.

Combines:

  1. Weighted aggregation (configurable weights)

  2. Regime classification

  3. Risk filter validation (KPI bounds, volatility, momentum)

Parameters:
  • predictions (List[Dict]) – List of {model, signal, confidence}

  • df (DataFrame) – OHLC DataFrame

  • timeframe (str) – Current timeframe

  • current_price (Optional[float]) – Current market price (for KPI check)

  • safe_low (Optional[float]) – KPI Safe Low bound

  • safe_high (Optional[float]) – KPI Safe High bound

Return type:

Dict

Returns:

Complete prediction dict with all audit info

save_prediction_json(result, symbol='EURUSD', output_dir=None)[source]

Save prediction result to JSON file.

Parameters:
  • result (Dict) – Prediction result from full_aggregate

  • symbol (str) – Trading symbol

  • output_dir (Optional[str]) – Output directory (default: data/signals)

Return type:

str

Returns:

Path to saved file
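
Example (a sketch of the complete pipeline, reusing `engine` and `predictions` from the simple_aggregate sketch above; `df` is assumed to be an OHLC DataFrame and the KPI bounds are illustrative):

result = engine.full_aggregate(
    predictions, df, timeframe='1h',
    current_price=1.0520, safe_low=1.0457, safe_high=1.0583,
)
path = engine.save_prediction_json(result, symbol='EURUSD')
print(f"Signal saved to {path}")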

core.congress_engine.get_congress_engine()[source]

Get or create Congress Engine singleton

Return type:

CongressEngine

Model Scorer

class core.model_scorer.PredictionOutcome(model_name, timeframe, predicted_signal, actual_outcome, confidence, timestamp=<factory>)[source]

Bases: object

Single prediction outcome for tracking

model_name: str
timeframe: str
predicted_signal: str
actual_outcome: str
confidence: float
timestamp: datetime
class core.model_scorer.ModelScorer(data_dir=None)[source]

Bases: object

Dynamic model performance scorer.

Tracks prediction accuracy and adjusts weights based on recent performance. Uses exponential moving average for smooth weight transitions.

Parameters:

data_dir (Path)

HISTORY_SIZE = 100
MIN_SAMPLES = 10
DYNAMIC_WEIGHT = 0.3
EMA_ALPHA = 0.1
record_outcome(model_name, timeframe, predicted_signal, actual_outcome, confidence=0.5)[source]

Record a prediction outcome for a model.

Parameters:
  • model_name (str) – xgboost, lightgbm, randomforest, stacking

  • timeframe (str) – 1m, 5m, 15m, 30m, 1h, 4h

  • predicted_signal (str) – BUY, SELL, NEUTRAL

  • actual_outcome (str) – PROFIT, LOSS, BREAKEVEN

  • confidence (float) – Model's confidence [0, 1]

get_dynamic_weights(base_weights, model_role_map=None)[source]

Get dynamically adjusted weights based on recent performance.

Blends static weights (70%) with dynamic performance (30%).

Parameters:
  • base_weights (Dict[str, float]) – Static weights from config {role: weight}

  • model_role_map (Dict[str, str]) – Map model name to role {xgboost: quant, …}

Returns:

Adjusted weights {role: weight}

Return type:

Dict[str, float]

get_model_stats()[source]

Get performance statistics for all models

Return type:

Dict[str, Dict]

core.model_scorer.get_model_scorer()[source]

Get singleton ModelScorer instance

Return type:

ModelScorer
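
Example (a usage sketch; the weight values and role map are illustrative):

from core.model_scorer import get_model_scorer

scorer = get_model_scorer()
scorer.record_outcome('xgboost', '1h', predicted_signal='BUY',
                      actual_outcome='PROFIT', confidence=0.7)

base_weights   = {'quant': 0.20, 'archivist': 0.20, 'futurist': 0.15, 'leader': 0.45}
model_role_map = {'xgboost': 'quant', 'lightgbm': 'archivist',
                  'randomforest': 'futurist', 'stacking': 'leader'}
adjusted = scorer.get_dynamic_weights(base_weights, model_role_map)
print(adjusted)                                  # {role: weight}, blended 70% static / 30% dynamic
print(scorer.get_model_stats())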