The Zhong Institute

Research Philosophy

Explainability First

Why it matters: Policymakers must understand why an indicator signals risk, not just that it does. Trust requires transparency.

How we implement:
  • SHAP (SHapley Additive exPlanations) for feature attribution
  • Partial dependence plots for relationship visualization
  • Decision path analysis for individual predictions
  • Plain-language summaries with technical outputs
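
The attribution idea can be illustrated with the one case where SHAP values have a closed form: for a linear model (assuming independent features), feature i contributes w_i · (x_i − E[x_i]), and the contributions sum exactly to the prediction's deviation from the baseline. A minimal sketch with illustrative weights and data, not an Institute model:

```python
# Exact SHAP values for a linear model: phi_i = w_i * (x_i - E[x_i]).
# Weights and observations below are illustrative only.

def linear_shap(weights, x, background_mean):
    """Per-feature contributions that sum to f(x) - f(E[x])."""
    return [w * (xi - mi) for w, xi, mi in zip(weights, x, background_mean)]

weights = [0.8, -1.2, 0.5]     # model coefficients
x = [2.0, 1.0, 3.0]            # observation to explain
background = [1.0, 1.0, 1.0]   # feature means over the training set

phi = linear_shap(weights, x, background)

# Additivity check: contributions sum to f(x) - f(E[x])
f = lambda v: sum(w * vi for w, vi in zip(weights, v))
assert abs(sum(phi) - (f(x) - f(background))) < 1e-9
```

For tree ensembles this closed form no longer holds, which is why TreeSHAP-style algorithms are needed; the additivity property demonstrated here carries over unchanged.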

Production Quality

Why it matters: Academic prototypes are insufficient for policy applications. Models must perform under stress.

How we implement:
  • Continuous integration/continuous deployment (CI/CD)
  • Comprehensive test suites (unit, integration, backtesting)
  • Version-controlled code and data
  • Secure deployment for sensitive applications

Open Science

Why it matters: Science advances through scrutiny and replication. Transparency enables error correction.

How we implement:
  • Full methodology documentation for all public tools
  • Reproducible code in version-controlled repositories
  • Model cards documenting assumptions and limitations
  • External methodology audits

Policy Relevance

Why it matters: Technical sophistication means nothing if outputs don't inform decisions. Our goal is impact, not publications.

How we implement:
  • Indicators designed around policy-relevant questions
  • Visualization and communication as core deliverables
  • Regular engagement with policy practitioners
  • Feedback loops from users to methodology development

Methodological Framework

Data Infrastructure

Sources

  • Macroeconomic: IMF, World Bank, national statistical offices, central banks
  • Financial Markets: Bloomberg, Refinitiv, central bank publications
  • Banking Sector: Regulatory filings, central bank reports, BIS statistics
  • High-Frequency: Daily/weekly market data, real-time spreads

Data Quality

  • Automated validation checks for outliers and anomalies
  • Revision tracking and impact monitoring
  • Documented imputation strategies
  • Full lineage tracking from source to indicator
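
One common shape for an automated validation check is a z-score rule over each series; the sketch below uses an illustrative threshold, whereas production checks would be tuned per series:

```python
import statistics

def flag_outliers(series, z_threshold=3.0):
    """Return indices whose z-score exceeds the threshold.
    The threshold of 3 is illustrative; real checks are series-specific."""
    mu = statistics.mean(series)
    sd = statistics.stdev(series)
    return [i for i, v in enumerate(series) if abs(v - mu) > z_threshold * sd]

# A spike of 50 in otherwise stable data is flagged for review
data = [1.0, 1.1, 0.9, 1.0, 50.0, 1.2, 0.8]
print(flag_outliers(data, z_threshold=2.0))  # → [4]
```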

Early-Warning Methodology

Crisis Dating

  • Systematic review of historical episodes
  • Multiple indicator thresholds
  • Harmonized dating across countries
  • Sensitivity analysis to dating choices
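
The multiple-threshold idea can be sketched as a voting rule across indicators. Indicator names, series, and thresholds below are hypothetical; in practice such rules complement, rather than replace, systematic review of episodes:

```python
def date_crises(indicators, thresholds, min_breaches=2):
    """Mark period t as a crisis when at least `min_breaches` indicators
    breach their thresholds in that period."""
    n_periods = len(next(iter(indicators.values())))
    crisis = []
    for t in range(n_periods):
        breaches = sum(
            1 for name, series in indicators.items()
            if series[t] > thresholds[name]
        )
        crisis.append(breaches >= min_breaches)
    return crisis

# Hypothetical quarterly series: FX depreciation (%) and sovereign spread (bp)
indicators = {
    "fx_depreciation": [2, 3, 18, 25, 4],
    "sovereign_spread": [150, 200, 650, 700, 180],
}
thresholds = {"fx_depreciation": 15, "sovereign_spread": 500}
print(date_crises(indicators, thresholds))  # → [False, False, True, True, False]
```

Sensitivity analysis then amounts to re-running the rule under perturbed thresholds and comparing the resulting crisis dates.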

Feature Engineering

  • Temporal: Levels, changes, acceleration, trends
  • Cross-sectional: Peer comparisons, global factors
  • Interaction: Cross-domain linkages
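
The temporal features above can be sketched for a single series: the change is the first difference and the acceleration the second difference (values are illustrative):

```python
def temporal_features(series):
    """Levels, first differences (change), and second differences
    (acceleration) for one indicator series."""
    change = [b - a for a, b in zip(series, series[1:])]
    acceleration = [b - a for a, b in zip(change, change[1:])]
    return {"level": series, "change": change, "acceleration": acceleration}

credit_to_gdp = [80.0, 82.0, 86.0, 95.0]   # illustrative quarterly values
feats = temporal_features(credit_to_gdp)
print(feats["change"])        # → [2.0, 4.0, 9.0]
print(feats["acceleration"])  # → [2.0, 5.0]
```

Rising acceleration, as in this toy series, is exactly the kind of pattern a boom-detection feature is built to expose.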

Model Selection

  • Gradient boosting (XGBoost, LightGBM)
  • Regularized regression (Elastic Net)
  • Model averaging for robustness

Gap-Aware Cross-Validation

Standard cross-validation fails for financial time series: random splits let information from the future leak into training. Our approach:

Challenges Addressed

  • Temporal dependence in data
  • Crisis clustering effects
  • Missing data handling
  • Real-time vs. revised data differences

Our Solution

  • Strictly temporal train/test splits
  • Embargo periods between training and testing
  • Gap-aware imputation with uncertainty
  • Real-time data reconstruction
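
The first two points can be sketched as a walk-forward split generator with an embargo gap: the observations immediately before each test window are dropped from training so that serially correlated labels cannot leak across the boundary. Fold sizes and embargo length here are illustrative, not production settings:

```python
def temporal_splits(n_obs, n_folds=3, embargo=2):
    """Yield (train_indices, test_indices) pairs for walk-forward
    validation with an embargo gap before each test window."""
    fold_size = n_obs // (n_folds + 1)
    for k in range(1, n_folds + 1):
        test_start = k * fold_size
        test_end = min(test_start + fold_size, n_obs)
        train_end = max(0, test_start - embargo)   # drop embargoed tail
        yield list(range(train_end)), list(range(test_start, test_end))

for train_idx, test_idx in temporal_splits(12, n_folds=3, embargo=2):
    print(len(train_idx), test_idx)
```

Training data always precedes the test window, and the embargo leaves a gap of `embargo` observations between the two.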

Interpretability Stack

SHAP-Based Explanations

Feature contributions to individual predictions, global importance rankings, interaction effects, and consistency with game-theoretic fairness principles.

Model Cards

Standardized documentation covering model details, intended use, limitations, performance metrics, training data, ethical considerations, and usage recommendations.

Datasheets

Comprehensive data documentation covering motivation, composition, collection methods, preprocessing, distribution policies, and maintenance schedules.

Model Governance

Version Control

  • Semantic versioning (major.minor.patch)
  • Change logs documenting all modifications
  • Ability to reproduce any historical version
  • Clear deprecation policies

Monitoring

  • Performance tracking over time
  • Drift detection for input distributions
  • Automated alert systems
  • Clear retraining triggers
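
One widely used drift statistic is the Population Stability Index (PSI) between a reference sample and current inputs. The sketch below assumes simple equal-width binning and the commonly cited (but still illustrative) 0.2 alert threshold:

```python
import math

def psi(expected, actual, n_bins=4):
    """Population Stability Index between a reference sample and a
    current sample, using equal-width bins over the reference range."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / n_bins for i in range(1, n_bins)]

    def shares(sample):
        counts = [0] * n_bins
        for v in sample:
            counts[sum(v > e for e in edges)] += 1   # bin index
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)

    p, q = shares(expected), shares(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

reference = [float(i) for i in range(100)]
shifted = [v + 50 for v in reference]
print(psi(reference, reference))  # stable input: PSI is 0
print(psi(reference, shifted) > 0.2)  # drifted input: exceeds alert level
```

A monitoring job would compute this per feature on a schedule and raise an alert, or trigger retraining, when the index crosses the configured threshold.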

Audit Trail

  • Training data snapshots
  • Hyperparameter configurations
  • Validation results
  • Post-deployment performance

Research Outputs

Coming Soon

Working Papers

Methodological innovations, empirical findings, validation studies, and policy applications.

Coming Soon

Methodology White Papers

Detailed technical documentation of our EWS, liquidity indicators, and interpretability approaches.

Coming Soon

Data Documentation

Comprehensive documentation of sources, construction, quality controls, and update schedules.

External Engagement

Methodology Audits

We commission periodic external reviews: independent assessment, replication of results, recommendations for improvement, and published audit summaries.

Academic Collaboration

We engage with the academic community to incorporate advances, subject our work to peer scrutiny, contribute to open-source tools, and participate in conferences.

Standards Development

We contribute to emerging standards for AI/ML documentation, financial stability indicators, open data practices, and responsible AI in policy.