
Perf / reduce IndexOptimize overhead when there's nothing to do#999

Open
rducom wants to merge 4 commits into olahallengren:main from LuccaSA:perf/index-optimize-edition-and-bulk-loading

Conversation


rducom commented Apr 10, 2026

Context

On Standard Edition with ~1000 indexes, IndexOptimize took 30+ seconds even when no action was needed. The overhead was entirely in the stored procedure's control flow, not in actual index operations.

Bulk-load DMV data instead of per-index queries

The biggest win. Previously, dm_db_index_physical_stats and dm_db_stats_properties were queried per index via sp_executesql inside the WHILE loop (~1000 round-trips at 15-50 ms each). Now all fragmentation and statistics data are bulk-loaded in two queries per database before the loop, with a fallback to the original per-item queries if the bulk approach fails.
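A minimal sketch of the bulk-load pattern (the staging-table and column names here are illustrative, not necessarily the procedure's actual ones): passing NULL for object_id, index_id, and partition_number fetches the whole database in one DMV call instead of one call per index.

```sql
-- Sketch only, assuming a @tmpFragmentation staging table with these columns.
-- One LIMITED-mode scan for the entire database replaces ~1000 per-index calls.
INSERT INTO @tmpFragmentation (ObjectID, IndexID, PartitionNumber, AvgFragmentationInPercent, PageCount)
SELECT [object_id], index_id, partition_number, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(@CurrentDatabaseName), NULL, NULL, NULL, 'LIMITED');
```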

Pre-filter rows that need no action

Before entering the main WHILE loop, a single DELETE removes rows from @tmpIndexesStatistics where the bulk-loaded data already shows no action is needed (page count below threshold, fragmentation below threshold, modification counter below threshold). This avoids thousands of loop iterations that would just skip anyway.
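The shape of that pre-filter, as a hedged sketch (names and the exact guard conditions are illustrative; the real DELETE also preserves the rows listed under the guards in the commit message):

```sql
-- Sketch: drop rows the bulk-loaded data already proves need no work.
DELETE tmp
FROM @tmpIndexesStatistics AS tmp
INNER JOIN @tmpFragmentation AS frag
    ON frag.ObjectID = tmp.ObjectID
   AND frag.IndexID = tmp.IndexID
WHERE (frag.PageCount < @MinNumberOfPages OR frag.PageCount > @MaxNumberOfPages)
  -- plus: fragmentation below the Low/Medium thresholds with no action configured,
  -- AND no statistics update needed for the row (guards omitted in this sketch).
;
```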

Cache SERVERPROPERTY('EngineEdition')

Replaced ~13 SERVERPROPERTY('EngineEdition') calls with a single @EngineEdition variable at the top.
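The caching pattern itself is one line at the top of the procedure:

```sql
-- Evaluate SERVERPROPERTY once; all later edition checks read the variable.
DECLARE @EngineEdition int = CAST(SERVERPROPERTY('EngineEdition') AS int);
```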

Skip Enterprise-only subqueries on Standard Edition

8 expensive correlated subqueries in the index metadata collection query (IsImageText, IsNewLOB, IsFileStream, HasClusteredColumnstore, HasNonClusteredColumnstore, IsComputed, IsClusteredIndexComputed, IsTimestamp) are now wrapped in CASE WHEN @EngineEdition IN (3, 5, 8) — they return '0' on Standard Edition where those features aren't available anyway.
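A sketch of the gating pattern for one of those columns (the subquery body is illustrative, not the procedure's exact text; EngineEdition 3 = Enterprise/Developer, 5 = Azure SQL Database, 8 = Managed Instance):

```sql
-- Sketch: the correlated subquery only runs on editions where online rebuild exists.
SELECT CASE WHEN @EngineEdition IN (3, 5, 8) THEN
         CASE WHEN EXISTS (
           SELECT 1 FROM sys.columns c
           WHERE c.[object_id] = t.[object_id]
             AND c.system_type_id IN (34, 35, 99) -- image, text, ntext
         ) THEN '1' ELSE '0' END
       ELSE '0'  -- Standard Edition: feature unavailable, skip the probe entirely
       END AS IsImageText
FROM sys.tables t;
```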

Per-database pre-checks for rare conditions

  • Read-only filegroups: a quick EXISTS on sys.filegroups WHERE is_read_only = 1 per database, and the per-index OnReadOnlyFileGroup subquery is skipped entirely if none exist.
  • Resumable index operations: same pattern with sys.index_resumable_operations WHERE state_desc = 'PAUSED'.
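The pre-check is a single cheap probe per database (sketch; in practice it runs in the target database's context via sp_executesql):

```sql
-- Sketch: one EXISTS per database decides whether the per-index
-- OnReadOnlyFileGroup subquery runs at all.
DECLARE @DatabaseHasReadOnlyFileGroups bit =
  CASE WHEN EXISTS (SELECT 1 FROM sys.filegroups WHERE is_read_only = 1)
       THEN 1 ELSE 0 END;
```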

Optimize the WHILE loop internals

  • Pre-computed flags @HasActionsPreferred / @HasDistinctActions replace 6× EXISTS + 2× GROUP BY HAVING checks that ran on every iteration.
  • ALTER INDEX / UPDATE STATISTICS WITH clause: replaced nested WHILE loops + table variables with direct ISNULL(@var + ', ', '') + 'ARGUMENT' string concatenation.
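The concatenation idiom looks like this (variable names are illustrative): on the first argument the variable is NULL, so ISNULL yields an empty string and no leading comma appears; every later argument prepends ", ".

```sql
-- Sketch: build "ARG1, ARG2, ..." with no table variable and no inner WHILE loop.
DECLARE @WithClause nvarchar(max);

IF @SortInTempdb = 'Y'
  SET @WithClause = ISNULL(@WithClause + ', ', '') + 'SORT_IN_TEMPDB = ON';
IF @MaxDOP IS NOT NULL
  SET @WithClause = ISNULL(@WithClause + ', ', '') + 'MAXDOP = ' + CAST(@MaxDOP AS nvarchar(4));
```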

PartitionCount via window function

Replaced a GROUP BY subquery + LEFT JOIN with COUNT(*) OVER (PARTITION BY object_id, index_id) directly in the main metadata query.
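Sketched on sys.partitions, the window form computes the count inline on every row, so the separate aggregate-and-join step disappears:

```sql
-- Sketch: per-index partition count without GROUP BY + LEFT JOIN.
SELECT p.[object_id], p.index_id, p.partition_number,
       COUNT(*) OVER (PARTITION BY p.[object_id], p.index_id) AS PartitionCount
FROM sys.partitions p;
```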

Regression fix

The bulk fragmentation lookup initially broke @PartitionLevel = 'N' mode: dm_db_index_physical_stats always returns partition_number = 1 (never NULL), but @tmpIndexesStatistics stores NULL when @PartitionLevel = 'N'. Fixed by matching all partitions when @CurrentPartitionNumber IS NULL and aggregating with MAX()/SUM().
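A sketch of the corrected lookup (column names match the illustrative @tmpFragmentation staging table above, and are assumptions): when @CurrentPartitionNumber IS NULL, the predicate matches every partition of the index and the aggregates collapse them to one row.

```sql
-- Sketch: NULL partition number means "all partitions"; aggregate across them.
SELECT MAX(f.AvgFragmentationInPercent) AS AvgFragmentationInPercent,
       SUM(f.PageCount) AS PageCount
FROM @tmpFragmentation f
WHERE f.ObjectID = @CurrentObjectID
  AND f.IndexID = @CurrentIndexID
  AND (@CurrentPartitionNumber IS NULL OR f.PartitionNumber = @CurrentPartitionNumber);
```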

Results

31s -> 1s

rducom added 4 commits April 10, 2026 16:52
… and bulk-load DMV data

- Cache EngineEdition in @EngineEdition variable (1 call instead of 13)
- Skip 8 expensive correlated subqueries (IsImageText, IsNewLOB, IsFileStream,
  HasClusteredColumnstore, HasNonClusteredColumnstore, IsComputed,
  IsClusteredIndexComputed, IsTimestamp) on Standard Edition since online
  rebuild is not available anyway
- Pre-check for read-only filegroups per database; skip 3 EXISTS subqueries
  when none exist
- Pre-check for paused resumable index operations per database; skip EXISTS
  subquery when none exist
- Replace PartitionCount GROUP BY subquery with COUNT(*) OVER() window function
- Bulk-load dm_db_index_physical_stats for entire database in one call instead
  of per-index sp_executesql (biggest perf win)
- Bulk-load dm_db_stats_properties for entire database in one call instead of
  per-statistic sp_executesql
- Fallback to original per-item queries if bulk loading fails
… and repeated table scans

- Pre-compute @HasActionsPreferred and @HasDistinctActions flags before the
  WHILE loop (was: 6x EXISTS + 2x GROUP BY HAVING per iteration)
- Replace ALTER INDEX WITH clause construction: direct string concatenation
  via ISNULL pattern instead of INSERT into table variable + WHILE loop with
  SELECT TOP 1 / UPDATE per argument
- Replace UPDATE STATISTICS WITH clause construction: same approach
- Remove @CurrentAlterIndexWithClauseArguments and
  @CurrentUpdateStatisticsWithClauseArguments table variables entirely
- Remove 2 DELETE FROM table variable calls per iteration
…loaded data

- DELETE index rows from @tmpIndexesStatistics before the loop when:
  - Page count < @MinNumberOfPages (default 1000) or > @MaxNumberOfPages
  - Low fragmentation with no action configured for Low group (default)
  - Medium fragmentation with no action configured for Medium group
  AND no statistics update is needed for the row
- DELETE statistics-only rows where modification_counter = 0 and
  @OnlyModifiedStatistics = 'Y' or @StatisticsModificationLevel is set
- Add PRIMARY KEY to @tmpFragmentation for efficient OUTER APPLY lookups
- Guards: preserves read-only filegroup rows, resumable ops, memory-optimized
  table stats on old versions, and rows needing unconditional stats update

With defaults (@FragmentationLow=NULL, @MinNumberOfPages=1000), this
eliminates 50-80% of loop iterations on typical databases.
…l = 'N'

dm_db_index_physical_stats always returns non-NULL partition_number (1 for
non-partitioned indexes), but @tmpIndexesStatistics has NULL PartitionNumber
when @PartitionLevel = 'N'. The join condition required both to be NULL,
which never matched. Changed to match any partition when @CurrentPartitionNumber
IS NULL, and aggregate with MAX/SUM to handle multiple partitions.

rducom commented Apr 13, 2026

Before/after with 2500 databases: 17h -> 6h

[screenshot: before/after job duration comparison]

