This document details the end-to-end process of how a chess game is analyzed in Chess Analyzer Pro, from the moment a game is loaded to the final display of Move Classifications and Accuracy.
1. Game Ingestion (Loading & Parsing)
The process begins when a user loads a game via PGN File, API (Chess.com/Lichess), URL, or pasted text.
The PGN Parser
The PGNParser (src/backend/pgn_parser.py) converts raw game data into our internal format.
- Reading: Uses `python-chess` to read the game structure.
- Conversion: Converts PGN nodes into `GameAnalysis` and `MoveAnalysis` objects.
- Initialization: Captures FEN, UCI, and basic metadata (players, result, date, opening).
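As an illustration, the reading step might look like the following minimal sketch using `python-chess` (the real `PGNParser` also extracts metadata and builds the internal objects; the `parse_moves` helper here is hypothetical):

```python
import io

import chess.pgn

def parse_moves(pgn_text):
    """Read a PGN string and capture per-move data before each move is played."""
    game = chess.pgn.read_game(io.StringIO(pgn_text))
    board = game.board()
    moves = []
    for ply, move in enumerate(game.mainline_moves(), start=1):
        moves.append({
            "ply": ply,
            "move_number": (ply + 1) // 2,
            "san": board.san(move),      # SAN must be computed before pushing
            "uci": move.uci(),
            "fen_before": board.fen(),   # position before the move
        })
        board.push(move)
    return moves
```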
Data Snapshot: Parsed Game
Right after parsing, the data has the following structure:
```python
GameAnalysis(
    game_id="uuid-1234...",
    metadata=GameMetadata(
        white="Magnus Carlsen",
        black="Hikaru Nakamura",
        white_elo="2850",
        black_elo="2820",
        result="1-0",
        opening="Sicilian Defense",
        termination="resignation",
        source="chesscom"
    ),
    moves=[
        MoveAnalysis(
            move_number=1,
            ply=1,
            san="e4",
            uci="e2e4",
            fen_before="rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
            classification="Book",  # Not yet analyzed
            eval_before_cp=None
        ),
        # ... more moves
    ]
)
```
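For reference, the containers in this snapshot could be modeled with dataclasses along these lines (a sketch inferred from the field names above, not the actual definitions in `src/backend/`):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MoveAnalysis:
    move_number: int
    ply: int
    san: str
    uci: str
    fen_before: str
    classification: str = "Book"        # default until analysis runs
    eval_before_cp: Optional[int] = None

@dataclass
class GameMetadata:
    white: str = "?"
    black: str = "?"
    result: str = "*"
    opening: str = ""
    source: str = ""

@dataclass
class GameAnalysis:
    game_id: str
    metadata: GameMetadata
    moves: List[MoveAnalysis] = field(default_factory=list)
```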
2. The Analysis Worker Pipeline
The AnalysisWorker (src/gui/analysis_worker.py) runs the analysis in a background thread to keep the UI responsive. It creates an Analyzer instance and calls analyze_game().
Analysis Flow
```
MainWindow.start_analysis()
│
├─▶ AnalysisWorker (QThread)
│     │
│     └─▶ Analyzer.analyze_game()
│           │
│           ├─▶ _analyze_positions()              # Engine work
│           ├─▶ _classify_and_calculate_stats()   # Classification
│           ├─▶ _calculate_final_accuracy()       # Accuracy
│           └─▶ GameHistoryManager.save_game()    # Persistence
│
└─▶ on_analysis_finished()                        # UI Update
```
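The real `AnalysisWorker` subclasses `QThread` and reports back via Qt signals; as a hedged, framework-free sketch of the same hand-off pattern, using the standard library (`run_analysis_in_background` is an illustrative helper, not the actual API):

```python
import queue
import threading

def run_analysis_in_background(analyze_game, game, results: "queue.Queue"):
    """Run analyze_game(game) off the main thread; the queue stands in
    for the 'finished' signal that the Qt worker would emit."""
    def worker():
        results.put(analyze_game(game))
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t  # caller can join() or keep polling the queue
```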
3. Stockfish Engine Interaction
For each move, Analyzer (src/backend/analyzer.py) asks Stockfish for the evaluation of the position.
Engine Query
```python
info = engine.analyse(board, chess.engine.Limit(depth=18), multipv=3)
```
- Depth 18: Deep enough for accurate evaluation, fast enough for user experience.
- MultiPV 3: Get the top 3 moves to enable "Great" move detection.
Cache Layer
Before querying the engine, we check the AnalysisCache for previously analyzed positions:
```python
# Check cache first
cached_result = self.cache.get_analysis(fen, config)
if cached_result:
    return cached_result  # Skip engine query
```
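A minimal in-memory sketch of the cache idea: results keyed by position (FEN) plus the engine settings that produced them. The real `AnalysisCache` presumably adds persistence and eviction; the key shape and the `put_analysis` helper here are assumptions.

```python
class AnalysisCache:
    """Toy cache: same position + same engine settings -> same analysis."""

    def __init__(self):
        self._store = {}

    def _key(self, fen, depth, multipv):
        return (fen, depth, multipv)

    def get_analysis(self, fen, depth, multipv):
        # Returns None on a cache miss, mirroring the check above.
        return self._store.get(self._key(fen, depth, multipv))

    def put_analysis(self, fen, depth, multipv, result):
        self._store[self._key(fen, depth, multipv)] = result
```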
Data Snapshot: Engine Output
The engine returns a list of principal variations (PV):
```python
[
    # PV 1 (The Best Move)
    {
        "pv": ["f3e5", "d6e5", "d2d4"],
        "cp": 35,     # +0.35 for White
        "mate": None
    },
    # PV 2 (Second Best)
    {
        "pv": ["d2d4", "c5d4"],
        "cp": 10,
        "mate": None
    },
    # PV 3 (Third Best)
    {
        "pv": ["b1c3"],
        "cp": -15,    # Slight disadvantage
        "mate": None
    }
]
```
4. Evaluation to Win Probability
We convert raw centipawn (CP) scores into a win probability (0.0 - 1.0) using a logistic (sigmoid) function. This normalizes the evaluation to a human-understandable percentage.
The Formula
```python
from math import exp

def get_win_probability(cp, mate):
    if mate is not None:
        return 1.0 if mate > 0 else 0.0
    multiplier = -0.00368208 * cp
    win_percent = 50 + 50 * (2 / (1 + exp(multiplier)) - 1)
    return win_percent / 100.0
```
Examples
| Input | Calculation | Win Probability |
|---|---|---|
| +150 CP | Logistic curve | 63.5% for White |
| +350 CP | Logistic curve | 78.4% for White |
| -200 CP | Logistic curve | 32.4% for White |
| Mate in 3 | Forced win | 100% |
| Mate in -5 | Opponent has mate | 0% |
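The table values can be reproduced directly by running the formula (restated here so the snippet is self-contained):

```python
from math import exp

def get_win_probability(cp, mate=None):
    # Same logistic conversion as above (scores from White's perspective).
    if mate is not None:
        return 1.0 if mate > 0 else 0.0
    return (50 + 50 * (2 / (1 + exp(-0.00368208 * cp)) - 1)) / 100.0

print(round(get_win_probability(150) * 100, 1))   # 63.5
print(round(get_win_probability(-200) * 100, 1))  # 32.4
print(get_win_probability(0, mate=3))             # 1.0
```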
5. Move Classification
We classify each move based on Win Probability Loss (WPL) — the change in win probability from the player's perspective.
Classification Priority
Moves are classified in this order (first match wins):
1. Delivering Checkmate → `Best` (always optimal)
2. Matches Engine's Top Move → `Best` or `Great`
3. Missed Forced Mate → `Miss`
4. Missed Winning Position → `Miss`
5. WPL Thresholds → Based on how much winning chance was lost
WPL Thresholds
| WPL Range | Classification | Meaning |
|---|---|---|
| ≥ 20% | Blunder | Critical error, likely loses game |
| 8% - 20% | Mistake | Significant error, loses evaluation |
| 3% - 8% | Inaccuracy | Suboptimal but not serious |
| 1% - 3% | Good | Solid, minor inaccuracy |
| < 1% | Excellent | Near-optimal play |
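The threshold fallback (step 5 in the priority list) can be sketched as follows; `classify_by_wpl` is an illustrative helper, and the higher-priority rules (checkmate, top-move match, missed mate/win) would run before it:

```python
def classify_by_wpl(wpl):
    """Map win-probability loss (a fraction, e.g. 0.40 for 40%) to a label."""
    if wpl >= 0.20:
        return "Blunder"
    if wpl >= 0.08:
        return "Mistake"
    if wpl >= 0.03:
        return "Inaccuracy"
    if wpl >= 0.01:
        return "Good"
    return "Excellent"
```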
Special Classifications
| Classification | Condition |
|---|---|
| Best | Played the engine's #1 recommended move |
| Great | Played the only good move (>15% better than alternatives) |
| Miss | Had mate/winning position (>80%) but dropped significantly |
| Book | Recognized opening theory move |
Data Snapshot: Classification Example
```python
MoveAnalysis(
    san="Bb4?",
    uci="c5b4",
    # 1. We had a winning position
    eval_before_cp=450,
    win_chance_before=0.90,    # 90%
    # 2. After our move, it's equal
    eval_after_cp=0,
    win_chance_after=0.50,     # 50%
    # 3. The loss (from the player's perspective):
    #    WPL = 0.90 - 0.50 = 0.40 (40% loss)
    # 4. Resulting classification
    classification="Blunder",  # WPL >= 20%
    explanation="Lost 40.0% winning chances."
)
```
6. Move Accuracy Calculation
Each move receives an accuracy score (0-100) based on how much win probability was preserved.
The Formula
```python
from math import exp

def _calculate_move_accuracy(win_prob_before, win_prob_after):
    wp_before = win_prob_before * 100.0
    wp_after = win_prob_after * 100.0
    diff = wp_before - wp_after
    if diff <= 0:
        return 100.0  # Improvement = perfect accuracy
    # Exponential decay formula
    raw = 103.1668 * exp(-0.05 * diff) - 3.1669
    return max(0.0, min(100.0, raw))
```
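A few worked values, using a self-contained restatement of the decay curve:

```python
from math import exp

def move_accuracy(win_prob_before, win_prob_after):
    # Same decay curve as above, expressed on 0-100 win percentages.
    diff = (win_prob_before - win_prob_after) * 100.0
    if diff <= 0:
        return 100.0
    return max(0.0, min(100.0, 103.1668 * exp(-0.05 * diff) - 3.1669))

print(round(move_accuracy(0.90, 0.88), 1))  # small 2% slip stays high (~90.2)
print(round(move_accuracy(0.90, 0.50), 1))  # the 40% blunder from section 5 (~10.8)
```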
Accuracy Overrides
Certain moves receive special accuracy treatment:
| Condition | Accuracy Override |
|---|---|
| Book Move | 100% (opening theory) |
| Delivers Checkmate | 100% (optimal) |
| Leads to Forced Mate | 100% (winning position) |
| Best/Great Move | Minimum 80% |
| All Moves | Minimum 10% floor |
> [!NOTE]
> The 10% minimum floor prevents a single terrible move from dominating (or dividing by zero in) the harmonic mean calculation.
7. Game Accuracy Calculation (Lichess Algorithm)
We calculate the overall game accuracy using the Lichess algorithm, which combines two methods for a balanced result.
Algorithm Overview
Final Accuracy = (Volatility-Weighted Mean + Harmonic Mean) / 2
This approach combines:
- Volatility-Weighted Mean: gives more weight to critical positions (high tension)
- Harmonic Mean: penalizes inconsistency more than an arithmetic mean
Step 1: Calculate Volatility Weights
We use a sliding window to calculate position volatility (how much the evaluation swings):
```python
from statistics import pstdev

def _calculate_volatility_weights(win_percents, window_size=8):
    weights = []
    for i in range(len(win_percents)):
        # Trailing window of recent win percentages (simplified windowing)
        window = win_percents[max(0, i - window_size + 1):i + 1]
        std_dev = pstdev(window) if len(window) > 1 else 0.0
        # Clamp between 0.5 and 12.0 (per Lichess source)
        weights.append(min(max(std_dev, 0.5), 12.0))
    return weights
```
Higher volatility = more important position = higher weight.
Step 2: Volatility-Weighted Mean
weighted_mean = sum(accuracy[i] * weight[i]) / sum(weights)
Step 3: Harmonic Mean
harmonic_mean = n / sum(1/accuracy[i] for each move)
The harmonic mean penalizes bad moves more heavily than an arithmetic mean.
Step 4: Final Accuracy
final_accuracy = (weighted_mean + harmonic_mean) / 2
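Steps 1-4 can be combined into one runnable sketch. The `game_accuracy` helper and its simplified trailing-window volatility are illustrative, not the exact Lichess implementation:

```python
from statistics import pstdev

def game_accuracy(accuracies, win_percents, window_size=8):
    # Step 1: volatility weight per move (clamped std-dev over a window)
    weights = []
    for i in range(len(accuracies)):
        window = win_percents[max(0, i - window_size + 1):i + 1]
        std_dev = pstdev(window) if len(window) > 1 else 0.0
        weights.append(min(max(std_dev, 0.5), 12.0))
    # Step 2: volatility-weighted mean
    weighted = sum(a * w for a, w in zip(accuracies, weights)) / sum(weights)
    # Step 3: harmonic mean (the 10% accuracy floor keeps 1/a finite)
    harmonic = len(accuracies) / sum(1 / a for a in accuracies)
    # Step 4: average the two methods
    return (weighted + harmonic) / 2
```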
Data Snapshot: Final Summary
After all moves are processed, `game_analysis.summary` contains:

```python
{
    "white": {
        "acpl": 15.4,       # Average CP loss per move
        "accuracy": 92.5,   # Game accuracy percentage
        "move_count": 42,
        "Brilliant": 1,
        "Great": 2,
        "Best": 25,
        "Excellent": 5,
        "Good": 3,
        "Inaccuracy": 2,
        "Mistake": 1,
        "Blunder": 0,
        "Miss": 0,
        "Book": 5
    },
    "black": {
        "acpl": 45.2,
        "accuracy": 78.1,
        "move_count": 41,
        # ... classification counts ...
    }
}
```
8. Persistence & UI
Finally, the fully populated GameAnalysis object is saved and displayed.
Database Storage
GameHistoryManager.save_game(game_analysis, pgn_content)
This stores:
- All metadata (players, ratings, opening, result)
- Complete move analysis with classifications
- Summary statistics
- Original PGN text
UI Components
| Component | Data Displayed |
|---|---|
| Move List | Move SAN with classification icons (🔵 Brilliant, 🔴 Blunder, etc.) |
| Evaluation Graph | eval_after_cp plotted for each move index |
| Stats Panel | Accuracy %, ACPL, classification breakdown |
| Board | Position with arrows showing best move |
Algorithm Comparison
| Platform | Primary Method | Key Difference |
|---|---|---|
| Chess.com | Proprietary | Uses adaptive thresholds based on rating |
| Lichess | Volatility + Harmonic | Open source, position-weighted |
| Chess Analyzer Pro | Lichess-style | Tuned decay constant, similar thresholds |
Our implementation closely follows the Lichess AccuracyPercent.scala source, with minor adjustments to the decay constant to better match Chess.com's familiar scoring range.