Franck Pachot
MongoDB Multi-Planner Optimizer and Plan Cache

Database systems utilize query planners to optimize data retrieval, primarily through two methods: Rule-Based Optimizers (RBO) and Cost-Based Optimizers (CBO).

  • Rule-Based Optimizer (RBO): This method employs predefined rules to select execution plans, resulting in stable but simplistic choices. It often struggles with complex queries involving multiple joins, but with a document model the join ordering is largely solved upfront.

  • Cost-Based Optimizer (CBO): A CBO analyzes data distribution statistics to evaluate candidate plans and choose the most cost-effective one. Its reliance on possibly outdated statistics can lead to suboptimal decisions, so gathering statistics is crucial. Planning complex queries can take time, so it either relies on a shared plan cache or falls back to simpler heuristics, such as genetic algorithms, when there are many joins.

MongoDB utilizes a document model that minimizes joins, so planning is primarily about selecting the appropriate index. Rather than relying on an RBO or CBO to pick one plan upfront, it defers the decision until runtime: through a shared plan cache and a multi-planner mechanism, MongoDB dynamically evaluates and adjusts query plans during execution.

Let's explain with a simple example, using the most challenging situation for a query planner: column skew, where a field has some unique values and a few popular ones. A business case I encountered was at a coffee capsule vendor. Online customers use their customer ID and order capsules every three months on average. Shops have a special customer ID and record thousands of orders every day. For a query on customer ID plus some other criteria, the best index differs between the two kinds of customers, but the application obeys the 'parse once, execute many' best practice, so whoever runs first determines the execution plan for everyone else. Tom Kyte used to tell the story of a database that was slow when it rained, because the plan depended on whether the first user to parse the statement came by bike or by car.

Initialize a collection

I'll build a simple example, a collection of ten million documents with two fields: a and b:

  • a is very selective for a few documents (values below 50, one document each), but the millions of remaining documents all share the popular value 50.
  • b is uniform, with values from 0 to 999,999, and good selectivity: each value matches exactly ten documents.
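These two distributions can be sanity-checked in plain JavaScript (Node.js, no MongoDB needed), replaying the same formulas the load script uses:

```javascript
// Reproduce the two distributions without MongoDB:
// a = Math.min(i, 50)  -> values 0..49 appear once each; 50 is the popular value
// b = i % 1e6          -> each value 0..999999 appears exactly 10 times over 1e7 rows
const num = 1e7;
let a42 = 0, a50 = 0, b42 = 0;
for (let i = 0; i < num; i++) {
  if (Math.min(i, 50) === 42) a42++;
  if (Math.min(i, 50) === 50) a50++;
  if (i % 1e6 === 42) b42++;
}
console.log({ a42, a50, b42 }); // { a42: 1, a50: 9999950, b42: 10 }
```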

Here is how I generated this, with a loop on i: a is Math.min(i, 50) and b is i % 1e6:

// Drop the collection if it already exists
db.franck.drop();
// Insert documents with different data distributions
const bulk = db.franck.initializeUnorderedBulkOp();
const num=1e7;
for (let i = 0; i < num; i++) {
    bulk.insert({ a: Math.min(i,50), b: i%1e6 });
}
const r = bulk.execute();

console.log(`Bulk Operation Summary: Inserted: ${r.insertedCount}, Matched: ${r.matchedCount}, Modified: ${r.modifiedCount}, Deleted: ${r.deletedCount}, Upserted: ${r.upsertedCount}`);

This inserted ten million documents:

test> console.log(`Bulk Operation Summary: Inserted: ${r.insertedCount}, Matched: ${r.matchedCount}, Modified: ${r.modifiedCount}, Deleted: ${r.deletedCount}, Upserted: ${r.upsertedCount}`);             
Bulk Operation Summary: Inserted: 10000000, Matched: 0, Modified: 0, Deleted: 0, Upserted: 0

Create Indexes

As the goal is to show two possible execution plans, I create two indexes, one on each field:

test> db.franck.createIndex({ a: 1 });
a_1
test> db.franck.createIndex({ b: 1 });
b_1

Verify Data

test> // "b" has a good selectivity

test> db.franck.countDocuments({ b: 42 });
10

test> // "a" has a better selectivity when a<50

test> db.franck.countDocuments({ a: 42 });
1

test> // "a" has a very bad selectivity when a=50 (popular value)

test> db.franck.countDocuments({ a: 50 });
9999950


My query will have a predicate on each field. I'll test { a: 42, b: 42 }, which returns one document, and { a: 50, b: 42 }, which examines the ten documents with b: 42 and returns nine of them (the document with b: 42 and a: 42 is excluded).
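As a sanity check, the expected result counts can be computed in plain JavaScript, replaying the generation formulas over the same ten million values:

```javascript
// Count what each test predicate should return over the generated data
// (a = Math.min(i, 50), b = i % 1e6), without touching MongoDB.
const matches = (pred) => {
  let n = 0;
  for (let i = 0; i < 1e7; i++) {
    const doc = { a: Math.min(i, 50), b: i % 1e6 };
    if (doc.a === pred.a && doc.b === pred.b) n++;
  }
  return n;
};
console.log(matches({ a: 42, b: 42 })); // 1
console.log(matches({ a: 50, b: 42 })); // 9: i = 42 has b = 42 but a = 42, not 50
```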

Execution profile and plan cache

I'll look at execution statistics with profiling and at the query plan cache, so I reset both in my lab:

db.franck.getPlanCache().clear();
db.franck.getPlanCache().list();
db.setProfilingLevel(0);
db.system.profile.drop();

First execution and plan cache

I set profiling, and I run a first execution that returns one document:

test> db.setProfilingLevel(2);
{ was: 0, slowms: 0, sampleRate: 1, ok: 1 }
test> db.franck.find({ a: 42, b: 42 });
[ { _id: ObjectId('67d34e6128ca7c6f95a00acb'), a: 42, b: 42 } ]
test> db.setProfilingLevel(0);
{ was: 2, slowms: 0, sampleRate: 1, ok: 1 }

On the first execution, the MongoDB query planner generated all candidate plans and stored the result in the cache: one plan executes an IXSCAN on the index on a, another performs an IXSCAN on the index on b, and a third combines both indexes with an AND_SORTED operation:

test> db.franck.getPlanCache().list();

[
  {
    version: '1',
    queryHash: '20F294D4',
    planCacheKey: '2AAF6E88',
    isActive: false,
    works: Long('2'),
    timeOfCreation: ISODate('2025-03-13T21:43:31.189Z'),
    createdFromQuery: { query: { a: 42, b: 42 }, sort: {}, projection: {} },
    cachedPlan: {
      stage: 'FETCH',
      filter: { b: { '$eq': 42 } },
      inputStage: {
        stage: 'IXSCAN',
        keyPattern: { a: 1 },
        indexName: 'a_1',
        isMultiKey: false,
        multiKeyPaths: { a: [] },
        isUnique: false,
        isSparse: false,
        isPartial: false,
        indexVersion: 2,
        direction: 'forward',
        indexBounds: { a: [ '[42, 42]' ] }
      }
    },
    creationExecStats: [
      {
        nReturned: 1,
        executionTimeMillisEstimate: 0,
        totalKeysExamined: 1,
        totalDocsExamined: 1,
        executionStages: {
          stage: 'FETCH',
          filter: { b: { '$eq': 42 } },
          nReturned: 1,
          executionTimeMillisEstimate: 0,
          works: 2,
          advanced: 1,
          needTime: 0,
          needYield: 0,
          saveState: 0,
          restoreState: 0,
          isEOF: 1,
          docsExamined: 1,
          alreadyHasObj: 0,
          inputStage: {
            stage: 'IXSCAN',
            nReturned: 1,
            executionTimeMillisEstimate: 0,
            works: 2,
            advanced: 1,
            needTime: 0,
            needYield: 0,
            saveState: 0,
            restoreState: 0,
            isEOF: 1,
            keyPattern: { a: 1 },
            indexName: 'a_1',
            isMultiKey: false,
            multiKeyPaths: { a: [] },
            isUnique: false,
            isSparse: false,
            isPartial: false,
            indexVersion: 2,
            direction: 'forward',
            indexBounds: { a: [Array] },
            keysExamined: 1,
            seeks: 1,
            dupsTested: 0,
            dupsDropped: 0
          }
        }
      },
      {
        nReturned: 1,
        executionTimeMillisEstimate: 0,
        totalKeysExamined: 2,
        totalDocsExamined: 2,
        executionStages: {
          stage: 'FETCH',
          filter: { a: { '$eq': 42 } },
          nReturned: 1,
          executionTimeMillisEstimate: 0,
          works: 2,
          advanced: 1,
          needTime: 1,
          needYield: 0,
          saveState: 0,
          restoreState: 0,
          isEOF: 0,
          docsExamined: 2,
          alreadyHasObj: 0,
          inputStage: {
            stage: 'IXSCAN',
            nReturned: 2,
            executionTimeMillisEstimate: 0,
            works: 2,
            advanced: 2,
            needTime: 0,
            needYield: 0,
            saveState: 0,
            restoreState: 0,
            isEOF: 0,
            keyPattern: { b: 1 },
            indexName: 'b_1',
            isMultiKey: false,
            multiKeyPaths: { b: [] },
            isUnique: false,
            isSparse: false,
            isPartial: false,
            indexVersion: 2,
            direction: 'forward',
            indexBounds: { b: [Array] },
            keysExamined: 2,
            seeks: 1,
            dupsTested: 0,
            dupsDropped: 0
          }
        }
      },
      {
        nReturned: 1,
        executionTimeMillisEstimate: 0,
        totalKeysExamined: 2,
        totalDocsExamined: 1,
        executionStages: {
          stage: 'FETCH',
          filter: { '$and': [ [Object], [Object] ] },
          nReturned: 1,
          executionTimeMillisEstimate: 0,
          works: 2,
          advanced: 1,
          needTime: 1,
          needYield: 0,
          saveState: 0,
          restoreState: 0,
          isEOF: 0,
          docsExamined: 1,
          alreadyHasObj: 0,
          inputStage: {
            stage: 'AND_SORTED',
            nReturned: 1,
            executionTimeMillisEstimate: 0,
            works: 2,
            advanced: 1,
            needTime: 1,
            needYield: 0,
            saveState: 0,
            restoreState: 0,
            isEOF: 0,
            failedAnd_0: 0,
            failedAnd_1: 0,
            inputStages: [ [Object], [Object] ]
          }
        }
      }
    ],
    candidatePlanScores: [ 2.5002, 1.5002, 1.5001 ],
    indexFilterSet: false,
    estimatedSizeBytes: Long('4693'),
    host: '4b49f6ca6400:27017'
  }
]

Those plans were evaluated (creationExecStats) with { a: { '$eq': 42 } } and { b: { '$eq': 42 } } and scored. The best score goes to the most selective plan: the index on a, which examines one key and fetches one document.
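The candidatePlanScores can be roughly reconstructed from the creationExecStats above. The classic plan ranker's score is approximately a base of 1, plus the plan's productivity (advanced/works), plus a bonus for reaching end-of-data (isEOF), plus tiny tie-breakers; the exact formula is a version-dependent internal detail, so treat this as a sketch:

```javascript
// Hedged sketch of the classic plan ranker's score, reconstructed from
// the creationExecStats above: base 1 + productivity + end-of-data bonus.
// (The tiny fractional parts like 0.0002 are tie-breakers, omitted here.)
const score = ({ advanced, works, isEOF }) =>
  1 + advanced / works + (isEOF ? 1 : 0);

console.log(score({ advanced: 1, works: 2, isEOF: 1 })); // 2.5, close to 2.5002 (IXSCAN a_1, reached EOF)
console.log(score({ advanced: 1, works: 2, isEOF: 0 })); // 1.5, close to 1.5002 (IXSCAN b_1)
```

Only the plan on a reached EOF during the trial, which is why it wins by a full point rather than just by a tie-breaker.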

Next executions with the same values

I run the same query nine more times:

db.setProfilingLevel(2);
db.franck.find({ a: 42, b: 42 });
db.franck.find({ a: 42, b: 42 });
db.franck.find({ a: 42, b: 42 });
db.franck.find({ a: 42, b: 42 });
db.franck.find({ a: 42, b: 42 });
db.franck.find({ a: 42, b: 42 });
db.franck.find({ a: 42, b: 42 });
db.franck.find({ a: 42, b: 42 });
db.franck.find({ a: 42, b: 42 });
db.setProfilingLevel(0);

I extract some interesting information from the profiling of those ten executions:

test> db.system.profile.find({ ns: "test.franck" }).sort({ ts: 1 }).limit(100).forEach((doc) => { if (doc.execStats) console.log(`${doc.ts.toISOString().slice(11, 23)} works: ${doc.execStats.works.toString().padStart(5)} keys: ${doc.keysExamined.toString().padStart(5)} docs: ${doc.docsExamined.toString().padStart(5)} ret: ${doc.nreturned.toString().padStart(5)} ${doc.execStats.stage.padStart(12)} ${doc.planSummary.padStart(12)} exec(plan): ${doc.millis.toString().padStart(5)}ms (${doc.planningTimeMicros.toString().padStart(8)}us) query/plan ${doc.queryHash}/${doc.planCacheKey} ${doc.queryFramework} ${doc.fromPlanCache ? 'fromPlanCache ' : ''}${doc.fromMultiPlanner ? 'fromMultiPlanner ' : ''}${doc.replanned ? doc.replanReason : ''}`); });

21:43:31.191 works:     3 keys:     1 docs:     1 ret:     1        FETCH IXSCAN { a: 1 } exec(plan):     7ms (    7063us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner                                                             
21:56:49.938 works:     3 keys:     1 docs:     1 ret:     1        FETCH IXSCAN { a: 1 } exec(plan):     9ms (    8899us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner                                                             
21:56:49.982 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     1ms (    1264us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache                                                                
21:56:50.014 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     233us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache                                                                
21:56:50.053 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     240us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache                                                                
21:56:50.085 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     235us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache                                                                
21:56:50.108 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     254us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache                                                                
21:56:50.133 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     230us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache                                                                
21:56:50.157 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     246us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache                                                                
21:56:50.188 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     246us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache     

In the first two executions, all candidate plans were evaluated by the Multi-Planner. From the third execution onward, only the winning plan, taken from the plan cache, was executed, which reduced the total time, including planning time. The winning index, fully executed each time, is the one on a.
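The two multi-planned executions before the cache kicks in are expected: a new cache entry starts inactive (isActive: false in the plan cache listing above) and is only activated once a later multi-planning round confirms a similar works value. A minimal sketch of that state machine, a hypothetical simplification of the documented inactive/active mechanism:

```javascript
// Simplified sketch of plan-cache entry activation (hypothetical helper,
// not a MongoDB API): the first multi-planner win creates an inactive
// placeholder; a second win with works at or below the stored value
// activates it; only active entries skip multi-planning.
function onMultiPlannerWin(cache, key, works) {
  const entry = cache.get(key);
  if (!entry) {
    cache.set(key, { works, isActive: false }); // 1st win: inactive placeholder
  } else if (!entry.isActive && works <= entry.works) {
    cache.set(key, { works, isActive: true }); // confirmed: activate
  } else if (!entry.isActive) {
    cache.set(key, { works: entry.works * 2, isActive: false }); // raise budget
  }
}

const cache = new Map();
onMultiPlannerWin(cache, "20F294D4", 2); // 1st execution: entry created
console.log(cache.get("20F294D4").isActive); // false -> 2nd execution multi-plans again
onMultiPlannerWin(cache, "20F294D4", 2); // 2nd execution: entry activated
console.log(cache.get("20F294D4").isActive); // true -> 3rd execution uses CACHED_PLAN
```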

More executions with different values

I execute the same query shape but with a different value, for which I know that the index on a is not good:

db.setProfilingLevel(2);
db.franck.find({ a: 50, b: 42 });
db.franck.find({ a: 50, b: 42 });
db.franck.find({ a: 50, b: 42 });
db.franck.find({ a: 50, b: 42 });
db.franck.find({ a: 50, b: 42 });
db.franck.find({ a: 50, b: 42 });
db.franck.find({ a: 50, b: 42 });
db.franck.find({ a: 50, b: 42 });
db.franck.find({ a: 50, b: 42 });
db.franck.find({ a: 50, b: 42 });
db.setProfilingLevel(0);

I gather the profile of all executions:

test> db.system.profile.find({ ns: "test.franck" }).sort({ ts: 1 }).limit(100).forEach((doc) => { if (doc.execStats) console.log(`${doc.ts.toISOString().slice(11, 23)} works: ${doc.execStats.works.toString().padStart(5)} keys: ${doc.keysExamined.toString().padStart(5)} docs: ${doc.docsExamined.toString().padStart(5)} ret: ${doc.nreturned.toString().padStart(5)} ${doc.execStats.stage.padStart(12)} ${doc.planSummary.padStart(12)} exec(plan): ${doc.millis.toString().padStart(5)}ms (${doc.planningTimeMicros.toString().padStart(8)}us) query/plan ${doc.queryHash}/${doc.planCacheKey} ${doc.queryFramework} ${doc.fromPlanCache ? 'fromPlanCache ' : ''}${doc.fromMultiPlanner ? 'fromMultiPlanner ' : ''}${doc.replanned ? doc.replanReason : ''}`); });

21:43:31.191 works:     3 keys:     1 docs:     1 ret:     1        FETCH IXSCAN { a: 1 } exec(plan):     7ms (    7063us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner 
21:56:49.938 works:     3 keys:     1 docs:     1 ret:     1        FETCH IXSCAN { a: 1 } exec(plan):     9ms (    8899us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner 
21:56:49.982 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     1ms (    1264us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.014 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     233us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.053 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     240us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.085 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     235us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.108 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     254us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.133 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     230us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.157 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     246us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.188 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     246us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:12:02.435 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     2ms (    2851us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner cached plan was less efficient than expected: expected trial execution to take 2 works but it took at least 20 works
22:12:02.474 works:    12 keys:    10 docs:    10 ret:     9        FETCH IXSCAN { b: 1 } exec(plan):     0ms (     437us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner 
22:12:02.505 works:    12 keys:    10 docs:    10 ret:     9        FETCH IXSCAN { b: 1 } exec(plan):     0ms (     421us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner 
22:12:02.541 works:    12 keys:    10 docs:    10 ret:     9        FETCH IXSCAN { b: 1 } exec(plan):     0ms (     473us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner 
22:12:02.566 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     291us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:12:02.591 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     339us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:12:02.613 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     287us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:12:02.643 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     315us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:12:02.659 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     320us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:12:02.685 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     472us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 

Unlike many other databases that require a failure before re-optimizing, MongoDB never fully executed the wrong index. It started with the cached plan, but in trial mode: it detected a works value reaching ten times the expected amount and fell back to the Multi-Planner, where all candidate plans are executed. The index on b won, which is the right choice here. The next three executions did the same before the new plan, using this index, was activated in the cache. Note that you can re-connect during this exercise, as the plan cache is shared across connections.
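The replanning decision seen in the first line of that profile can be sketched as a simple threshold check, assuming the default 10x ratio (the server parameter internalQueryCacheEvictionRatio; the exact comparison is an internal detail):

```javascript
// Sketch of the cached-plan trial logic: run the cached plan, but if it
// needs evictionRatio times the works recorded at caching time or more,
// abort and fall back to the multi-planner (replan).
const shouldReplan = (cachedWorks, trialWorks, evictionRatio = 10) =>
  trialWorks >= cachedWorks * evictionRatio;

console.log(shouldReplan(2, 20)); // true: "expected 2 works but took at least 20"
console.log(shouldReplan(2, 19)); // false: cached plan is kept
```

This is why the profile message reads "expected trial execution to take 2 works but it took at least 20 works": the trial was aborted as soon as the threshold was hit, before the bad plan could do real damage.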

More executions like the first ones

When you run a selective query such as find({ a: 42, b: 42 }) again, you'll notice that the index used remains the one on b. This occurs because, despite a potentially being a better option, the cached plan is still efficient enough: its trial execution stays well below the replanning threshold, so the difference isn't significant enough to warrant a change.

test> db.system.profile.find({ ns: "test.franck" }).sort({ ts: 1 }).limit(100).forEach((doc) => { if (doc.execStats) console.log(`${doc.ts.toISOString().slice(11, 23)} works: ${doc.execStats.works.toString().padStart(5)} keys: ${doc.keysExamined.toString().padStart(5)} docs: ${doc.docsExamined.toString().padStart(5)} ret: ${doc.nreturned.toString().padStart(5)} ${doc.execStats.stage.padStart(12)} ${doc.planSummary.padStart(12)} exec(plan): ${doc.millis.toString().padStart(5)}ms (${doc.planningTimeMicros.toString().padStart(8)}us) query/plan ${doc.queryHash}/${doc.planCacheKey} ${doc.queryFramework} ${doc.fromPlanCache ? 'fromPlanCache ' : ''}${doc.fromMultiPlanner ? 'fromMultiPlanner ' : ''}${doc.replanned ? doc.replanReason : ''}`); });

21:43:31.191 works:     3 keys:     1 docs:     1 ret:     1        FETCH IXSCAN { a: 1 } exec(plan):     7ms (    7063us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner 
21:56:49.938 works:     3 keys:     1 docs:     1 ret:     1        FETCH IXSCAN { a: 1 } exec(plan):     9ms (    8899us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner 
21:56:49.982 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     1ms (    1264us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.014 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     233us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.053 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     240us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.085 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     235us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.108 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     254us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.133 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     230us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.157 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     246us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
21:56:50.188 works:     2 keys:     1 docs:     1 ret:     1  CACHED_PLAN IXSCAN { a: 1 } exec(plan):     0ms (     246us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:12:02.435 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     2ms (    2851us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner cached plan was less efficient than expected: expected trial execution to take 2 works but it took at least 20 works
22:12:02.474 works:    12 keys:    10 docs:    10 ret:     9        FETCH IXSCAN { b: 1 } exec(plan):     0ms (     437us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner 
22:12:02.505 works:    12 keys:    10 docs:    10 ret:     9        FETCH IXSCAN { b: 1 } exec(plan):     0ms (     421us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner 
22:12:02.541 works:    12 keys:    10 docs:    10 ret:     9        FETCH IXSCAN { b: 1 } exec(plan):     0ms (     473us) query/plan 20F294D4/2AAF6E88 classic fromMultiPlanner 
22:12:02.566 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     291us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:12:02.591 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     339us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:12:02.613 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     287us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:12:02.643 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     315us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:12:02.659 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     320us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:12:02.685 works:    10 keys:    10 docs:    10 ret:     9  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     472us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:20:26.110 works:     2 keys:    10 docs:    10 ret:     1  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     312us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:20:26.145 works:     2 keys:    10 docs:    10 ret:     1  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     311us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:20:26.174 works:     2 keys:    10 docs:    10 ret:     1  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     281us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:20:26.201 works:     2 keys:    10 docs:    10 ret:     1  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     303us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 
22:20:36.089 works:     2 keys:    10 docs:    10 ret:     1  CACHED_PLAN IXSCAN { b: 1 } exec(plan):     0ms (     294us) query/plan 20F294D4/2AAF6E88 classic fromPlanCache 

This re-planning algorithm is simple but effective: it avoids fully executing bad plans, and it reuses plans that are good enough without always chasing the absolute best.

I've often been concerned about Cost-Based Optimizers that always try to find the best execution plan, at the risk of huge performance regressions when their estimations are wrong. Most users prefer an execution time that is good enough and predictable. By not relying on statistics, MongoDB avoids the problems of stale, out-of-bounds, or volatile statistics that we encounter in other databases.

MongoDB also offers valuable insight through candidatePlanScores in the plan cache. It not only identifies the best plan but also shows by how much it beats the others. This information helps evaluate the risk of plan instability and can inform decisions about removing unneeded indexes.
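For example, the relative margin between the best and second-best candidatePlanScores quantifies how comfortably the winner won; a small margin suggests the plan choice could flip with minor data changes. A hypothetical helper (not a MongoDB API), applied to the scores listed earlier:

```javascript
// Relative margin between the two best candidate plans: a large margin
// means a stable choice, a small one means the plans are nearly tied.
const planMargin = (scores) => {
  const [best, second] = [...scores].sort((x, y) => y - x);
  return (best - second) / best;
};

// Scores from the plan cache entry above: a_1 wins by a comfortable ~40%.
console.log(planMargin([2.5002, 1.5002, 1.5001])); // ~0.40
```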
