Perf / reduce IndexOptimize overhead when there's nothing to do #999
Open
rducom wants to merge 4 commits into olahallengren:main from
Conversation
… and bulk-load DMV data

- Cache EngineEdition in an `@EngineEdition` variable (1 call instead of 13)
- Skip 8 expensive correlated subqueries (IsImageText, IsNewLOB, IsFileStream, HasClusteredColumnstore, HasNonClusteredColumnstore, IsComputed, IsClusteredIndexComputed, IsTimestamp) on Standard Edition, since online rebuild is not available there anyway
- Pre-check for read-only filegroups per database; skip 3 EXISTS subqueries when none exist
- Pre-check for paused resumable index operations per database; skip the EXISTS subquery when none exist
- Replace the PartitionCount GROUP BY subquery with a COUNT(*) OVER() window function
- Bulk-load dm_db_index_physical_stats for the entire database in one call instead of per-index sp_executesql (biggest perf win)
- Bulk-load dm_db_stats_properties for the entire database in one call instead of per-statistic sp_executesql
- Fall back to the original per-item queries if bulk loading fails
… and repeated table scans

- Pre-compute the `@HasActionsPreferred` and `@HasDistinctActions` flags before the WHILE loop (was: 6x EXISTS + 2x GROUP BY HAVING per iteration)
- Replace the ALTER INDEX WITH clause construction: direct string concatenation via the ISNULL pattern instead of INSERT into a table variable plus a WHILE loop with SELECT TOP 1 / UPDATE per argument
- Replace the UPDATE STATISTICS WITH clause construction the same way
- Remove the `@CurrentAlterIndexWithClauseArguments` and `@CurrentUpdateStatisticsWithClauseArguments` table variables entirely
- Remove 2 DELETE FROM table variable calls per iteration
…loaded data

- DELETE index rows from `@tmpIndexesStatistics` before the loop when:
  - page count < `@MinNumberOfPages` (default 1000) or > `@MaxNumberOfPages`
  - low fragmentation with no action configured for the Low group (the default)
  - medium fragmentation with no action configured for the Medium group AND no statistics update needed for the row
- DELETE statistics-only rows where modification_counter = 0 and `@OnlyModifiedStatistics = 'Y'` or `@StatisticsModificationLevel` is set
- Add a PRIMARY KEY to `@tmpFragmentation` for efficient OUTER APPLY lookups
- Guards: preserve read-only filegroup rows, resumable operations, memory-optimized table stats on old versions, and rows needing an unconditional stats update

With the defaults (`@FragmentationLow = NULL`, `@MinNumberOfPages = 1000`), this eliminates 50-80% of loop iterations on typical databases.
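A minimal sketch of what such a pre-filter DELETE could look like (hypothetical column and variable names; the guards for read-only filegroups, resumable operations, and unconditional stats updates described above are omitted for brevity):

```sql
-- Sketch: drop rows the WHILE loop would only skip anyway.
-- @FragmentationLevel1 marks the boundary of the Low fragmentation group.
DELETE tis
FROM @tmpIndexesStatistics AS tis
OUTER APPLY (SELECT f.AvgFragmentationInPercent, f.PageCount
             FROM @tmpFragmentation AS f
             WHERE f.ObjectID = tis.ObjectID
               AND f.IndexID  = tis.IndexID) AS frag
WHERE tis.IndexID IS NOT NULL
  AND (frag.PageCount < @MinNumberOfPages                         -- below default 1000 pages
       OR frag.PageCount > @MaxNumberOfPages
       OR frag.AvgFragmentationInPercent < @FragmentationLevel1); -- Low group, no action configured
```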
…l = 'N'

`dm_db_index_physical_stats` always returns a non-NULL partition_number (1 for non-partitioned indexes), but `@tmpIndexesStatistics` has a NULL PartitionNumber when `@PartitionLevel = 'N'`. The join condition required both to be NULL, which never matched. Changed to match any partition when `@CurrentPartitionNumber IS NULL`, and to aggregate with MAX/SUM to handle multiple partitions.

Context
On Standard Edition with ~1000 indexes, `IndexOptimize` took 30+ seconds even when no action was needed. The overhead was entirely in the stored procedure's control flow, not in actual index operations.
Bulk-load DMV data instead of per-index queries

The biggest win. Previously, `dm_db_index_physical_stats` and `dm_db_stats_properties` were queried per index via `sp_executesql` inside the WHILE loop (~1000 round-trips × 15-50 ms each). Now all fragmentation and stats data is bulk-loaded in 2 queries per database before the loop, with a fallback to per-item queries if the bulk approach fails.
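The bulk load can be sketched roughly like this (a simplified illustration, not the PR's exact code; the table variable, its column names, and `@CurrentDatabaseName` are assumptions, and the real procedure runs this per database via dynamic SQL):

```sql
-- Sketch: one LIMITED-mode scan of the whole database replaces
-- ~1000 per-index sp_executesql round-trips.
DECLARE @tmpFragmentation TABLE
(
    ObjectID                  int    NOT NULL,
    IndexID                   int    NOT NULL,
    PartitionNumber           int    NOT NULL,
    AvgFragmentationInPercent float  NULL,
    [PageCount]               bigint NULL,
    PRIMARY KEY (ObjectID, IndexID, PartitionNumber)  -- efficient lookups later
);

INSERT INTO @tmpFragmentation
SELECT object_id, index_id, partition_number,
       avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(@CurrentDatabaseName), NULL, NULL, NULL, 'LIMITED')
WHERE index_id > 0; -- skip heaps (index_id = 0) in this sketch
```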
Pre-filter rows that need no action

Before entering the main WHILE loop, a single DELETE removes rows from `@tmpIndexesStatistics` where the bulk-loaded data already shows that no action is needed (page count below threshold, fragmentation below threshold, modification counter below threshold). This avoids thousands of loop iterations that would just skip anyway.

Cache SERVERPROPERTY('EngineEdition')

Replaced ~13 `SERVERPROPERTY('EngineEdition')` calls with a single `@EngineEdition` variable declared at the top.
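The caching itself is a one-liner; a sketch of the pattern (the `IF` body is a hypothetical stand-in for the edition-gated code paths):

```sql
-- One call at the top of the procedure...
DECLARE @EngineEdition int = CAST(SERVERPROPERTY('EngineEdition') AS int);

-- ...then every later edition check reads the variable instead of
-- calling SERVERPROPERTY again:
IF @EngineEdition IN (3, 5, 8) -- 3 = Enterprise/Developer, 5 = Azure SQL DB, 8 = Managed Instance
BEGIN
    PRINT 'Online rebuild available'; -- placeholder for the real code path
END;
```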
Skip Enterprise-only subqueries on Standard Edition

8 expensive correlated subqueries in the index metadata collection query (`IsImageText`, `IsNewLOB`, `IsFileStream`, `HasClusteredColumnstore`, `HasNonClusteredColumnstore`, `IsComputed`, `IsClusteredIndexComputed`, `IsTimestamp`) are now wrapped in `CASE WHEN @EngineEdition IN (3, 5, 8)`; they simply return `'0'` on Standard Edition, where those features aren't available anyway.
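A sketch of the wrapping for one such column (the subquery body is a hypothetical approximation of an `IsImageText` check, not the procedure's actual predicate):

```sql
-- Sketch: on editions without online rebuild, the CASE short-circuits
-- and the correlated subquery is never evaluated.
SELECT i.object_id, i.index_id,
       CASE WHEN @EngineEdition IN (3, 5, 8)
            THEN CASE WHEN EXISTS (SELECT 1 FROM sys.columns AS c
                                   WHERE c.object_id = i.object_id
                                     AND c.system_type_id IN (34, 35, 99)) -- image, text, ntext
                      THEN 1 ELSE 0 END
            ELSE 0
       END AS IsImageText
FROM sys.indexes AS i;
```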
Per-database pre-checks for rare conditions

A single EXISTS check on `sys.filegroups WHERE is_read_only = 1` runs once per database, and the per-index `OnReadOnlyFileGroup` subquery is skipped entirely when no read-only filegroups exist. The same pattern is applied to `sys.index_resumable_operations WHERE state_desc = 'PAUSED'`.
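A sketch of the two flags (hypothetical variable names; note that `sys.index_resumable_operations` only exists on SQL Server 2017 and later, which the real procedure has to account for):

```sql
-- Sketch: cheap per-database flags computed once, before the loop.
DECLARE @HasReadOnlyFileGroups bit =
    CASE WHEN EXISTS (SELECT 1 FROM sys.filegroups WHERE is_read_only = 1)
         THEN 1 ELSE 0 END;

DECLARE @HasPausedResumableOps bit =
    CASE WHEN EXISTS (SELECT 1 FROM sys.index_resumable_operations
                      WHERE state_desc = 'PAUSED')
         THEN 1 ELSE 0 END;

-- The expensive per-index subqueries then run only when the flag is 1.
```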
Optimize the WHILE loop internals

`@HasActionsPreferred` / `@HasDistinctActions` replace 6× EXISTS + 2× GROUP BY HAVING checks that previously ran on every iteration, and the WITH clauses are now built with `ISNULL(@var + ', ', '') + 'ARGUMENT'` string concatenation instead of a table variable and an inner loop.
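The ISNULL pattern relies on `NULL + ', '` being NULL, so the separator only appears once the clause is non-empty. A sketch with two illustrative arguments:

```sql
-- Sketch: each argument appends itself, adding ', ' only when the
-- clause already has content. Starts as NULL.
DECLARE @CurrentAlterIndexWithClause nvarchar(max);

SET @CurrentAlterIndexWithClause =
    ISNULL(@CurrentAlterIndexWithClause + ', ', '') + 'SORT_IN_TEMPDB = ON';
SET @CurrentAlterIndexWithClause =
    ISNULL(@CurrentAlterIndexWithClause + ', ', '') + 'ONLINE = ON';

SELECT @CurrentAlterIndexWithClause;
-- 'SORT_IN_TEMPDB = ON, ONLINE = ON'
```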
PartitionCount via window function

Replaced a GROUP BY subquery + LEFT JOIN with `COUNT(*) OVER (PARTITION BY object_id, index_id)` directly in the main metadata query.
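A minimal sketch of the window-function form, shown standalone rather than embedded in the full metadata query:

```sql
-- Sketch: partition count per index computed inline; every row of an
-- index carries the same count, so no separate GROUP BY + LEFT JOIN.
SELECT p.object_id,
       p.index_id,
       p.partition_number,
       COUNT(*) OVER (PARTITION BY p.object_id, p.index_id) AS PartitionCount
FROM sys.partitions AS p;
```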
Regression fix

The bulk fragmentation lookup initially broke `@PartitionLevel = 'N'` mode: `dm_db_index_physical_stats` always returns `partition_number = 1` (never NULL), but `@tmpIndexesStatistics` stores a NULL partition number when `@PartitionLevel = 'N'`. The join condition required both to be NULL, which never matched. Fixed by matching all partitions when `@CurrentPartitionNumber IS NULL` and aggregating with `MAX()`/`SUM()`.
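The fixed lookup can be sketched as follows (hypothetical table and variable names, matching the earlier sketches rather than the PR's actual code):

```sql
-- Sketch: when @CurrentPartitionNumber IS NULL (@PartitionLevel = 'N'),
-- match every partition of the index and aggregate across them.
SELECT MAX(f.AvgFragmentationInPercent) AS AvgFragmentationInPercent,
       SUM(f.[PageCount])               AS [PageCount]
FROM @tmpFragmentation AS f
WHERE f.ObjectID = @CurrentObjectID
  AND f.IndexID  = @CurrentIndexID
  AND (@CurrentPartitionNumber IS NULL
       OR f.PartitionNumber = @CurrentPartitionNumber);
```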
Results

31s -> 1s