
[Snyk] Upgrade mongodb from 6.3.0 to 6.10.0 #106

Open · wants to merge 1 commit into base: main
Conversation

issam-seghir
Owner

Snyk has created this PR to upgrade mongodb from 6.3.0 to 6.10.0.

ℹ️ Keep your dependencies up-to-date. This makes it easier to fix existing vulnerabilities and to more quickly identify and fix newly disclosed vulnerabilities when they affect your project.


  • The recommended version is 122 versions ahead of your current version.

  • The recommended version was released a month ago.

Release notes
Package name: mongodb
  • 6.10.0 - 2024-10-21

    6.10.0 (2024-10-21)

    The MongoDB Node.js team is pleased to announce version 6.10.0 of the mongodb package!

    Release Notes

    Warning

    Server versions 3.6 and lower will get a compatibility error on connection, and support for MONGODB-CR authentication has also been removed.

    Support for new client bulkWrite API (8.0+)

    A new bulk write API on the MongoClient is now supported for users on server versions 8.0 and higher.
    This API is meant to replace the existing bulk write API on the Collection as it supports a bulk
    write across multiple databases and collections in a single call.

    Usage

    Users of this API call MongoClient#bulkWrite and provide a list of bulk write models and options.
    The models have a structure as follows:

    Insert One

    Note that when no _id field is provided in the document, the driver will generate a BSON ObjectId
    automatically.

    {
      namespace: '<db>.<collection>',
      name: 'insertOne',
      document: Document
    }

    Update One

    {
      namespace: '<db>.<collection>',
      name: 'updateOne',
      filter: Document,
      update: Document | Document[],
      arrayFilters?: Document[],
      hint?: Document | string,
      collation?: Document,
      upsert: boolean
    }

    Update Many

    Note that write errors occurring with an update many model present are not retryable.

    {
      namespace: '<db>.<collection>',
      name: 'updateMany',
      filter: Document,
      update: Document | Document[],
      arrayFilters?: Document[],
      hint?: Document | string,
      collation?: Document,
      upsert: boolean
    }

    Replace One

    {
      namespace: '<db>.<collection>',
      name: 'replaceOne',
      filter: Document,
      replacement: Document,
      hint?: Document | string,
      collation?: Document
    }

    Delete One

    {
      namespace: '<db>.<collection>',
      name: 'deleteOne',
      filter: Document,
      hint?: Document | string,
      collation?: Document
    }

    Delete Many

    Note that write errors occurring with a delete many model present are not retryable.

    {
      namespace: '<db>.<collection>',
      name: 'deleteMany',
      filter: Document,
      hint?: Document | string,
      collation?: Document
    }

    Example

    Below is a mixed model example of using the new API:

    const client = new MongoClient(process.env.MONGODB_URI);
    const models = [
      {
        name: 'insertOne',
        namespace: 'db.authors',
        document: { name: 'King' }
      },
      {
        name: 'insertOne',
        namespace: 'db.books',
        document: { name: 'It' }
      },
      {
        name: 'updateOne',
        namespace: 'db.books',
        filter: { name: 'it' },
        update: { $set: { year: 1986 } }
      }
    ];
    const result = await client.bulkWrite(models);

    The bulk write specific options that can be provided to the API are as follows:

    • ordered: Optional boolean that indicates whether the bulk write is ordered. Defaults to true.
    • verboseResults: Optional boolean indicating whether verbose results should be provided. Defaults to false.
    • bypassDocumentValidation: Optional boolean to bypass document validation rules. Defaults to false.
    • let: Optional document of parameter names and values that can be accessed using $$var. No default.

    The object returned by the bulk write API is:

    interface ClientBulkWriteResult {
      // Whether the bulk write was acknowledged.
      readonly acknowledged: boolean;
      // The total number of documents inserted across all insert operations.
      readonly insertedCount: number;
      // The total number of documents upserted across all update operations.
      readonly upsertedCount: number;
      // The total number of documents matched across all update operations.
      readonly matchedCount: number;
      // The total number of documents modified across all update operations.
      readonly modifiedCount: number;
      // The total number of documents deleted across all delete operations.
      readonly deletedCount: number;
      // The results of each individual insert operation that was successfully performed.
      // Note the keys in the map are the associated index in the models array.
      readonly insertResults?: ReadonlyMap<number, ClientInsertOneResult>;
      // The results of each individual update operation that was successfully performed.
      // Note the keys in the map are the associated index in the models array.
      readonly updateResults?: ReadonlyMap<number, ClientUpdateResult>;
      // The results of each individual delete operation that was successfully performed.
      // Note the keys in the map are the associated index in the models array.
      readonly deleteResults?: ReadonlyMap<number, ClientDeleteResult>;
    }
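Because the verbose result maps are keyed by each model's index in the input array, consuming them amounts to iterating those maps. Below is a minimal, hedged sketch of reading an object shaped like ClientBulkWriteResult; the summarizeBulkResult helper and the mock object are invented for illustration, and the insertedId field on the per-insert results is an assumption rather than something defined above:

```javascript
// Illustrative helper: summarize an object shaped like ClientBulkWriteResult.
// The verbose result maps are keyed by the index of the corresponding model
// in the input models array.
function summarizeBulkResult(result) {
  const lines = [
    `acknowledged=${result.acknowledged}`,
    `inserted=${result.insertedCount}`,
  ];
  for (const [index, insert] of result.insertResults ?? new Map()) {
    lines.push(`model #${index} inserted _id=${insert.insertedId}`);
  }
  return lines;
}

// Mock result for illustration only; a real result comes from client.bulkWrite.
const mockResult = {
  acknowledged: true,
  insertedCount: 2,
  insertResults: new Map([
    [0, { insertedId: 'id-0' }],
    [1, { insertedId: 'id-1' }],
  ]),
};

console.log(summarizeBulkResult(mockResult));
```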

    Error Handling

    Server side errors encountered during a bulk write will throw a MongoClientBulkWriteError. This error
    has the following properties:

    • writeConcernErrors: An array of documents, one for each write concern error that occurred.
    • writeErrors: A map of index pointing at the models provided and the individual write error.
    • partialResult: The client bulk write result at the point where the error was thrown.
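As a hedged illustration of inspecting this error shape, the following sketch reads the three documented properties off a plain object shaped like a MongoClientBulkWriteError; the helper name and the mock object are invented for this example:

```javascript
// Illustrative: read the documented properties off an object shaped like
// MongoClientBulkWriteError. The helper and the mock are invented for this sketch.
function describeBulkWriteFailure(error) {
  return {
    // indexes into the models array whose writes failed
    failedIndexes: [...(error.writeErrors?.keys() ?? [])],
    writeConcernErrorCount: error.writeConcernErrors.length,
    // how many inserts succeeded before the error was thrown
    insertedBeforeError: error.partialResult?.insertedCount ?? 0,
  };
}

const mockError = {
  writeConcernErrors: [],
  writeErrors: new Map([[2, { message: 'duplicate key' }]]),
  partialResult: { insertedCount: 2 },
};

console.log(describeBulkWriteFailure(mockError));
```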

    Schema assertion support

    interface Book {
      name: string;
      authorName: string;
    }

    interface Author {
      name: string;
    }

    type MongoDBSchemas = {
      'db.books': Book;
      'db.authors': Author;
    }

    const model: ClientBulkWriteModel<MongoDBSchemas> = {
      namespace: 'db.books',
      name: 'insertOne',
      document: { name: 'Practical MongoDB Aggregations', authorName: 3 }
      // error: authorName cannot be a number
    };

    Notice how authorName is type checked against the Book type because namespace is set to "db.books".

    Allow SRV hostnames with fewer than three . separated parts

    In an effort to make internal networking solutions, such as Kubernetes deployments, easier to use, the client now accepts SRV hostname strings with one or two . separated parts.

    await new MongoClient('mongodb+srv://mongodb.local').connect();

    For security reasons, the returned addresses of SRV strings with fewer than three parts must end with the entire SRV hostname and contain at least one additional domain level; this added validation ensures that the returned address(es) come from a known host. In future releases, we plan to extend this validation to SRV strings with three or more parts as well.

    // Example 1: Validation fails since the returned address doesn't end with the entire SRV hostname
    'mongodb+srv://mySite.com' => 'myEvilSite.com'

    // Example 2: Validation fails since the returned address is identical to the SRV hostname
    'mongodb+srv://mySite.com' => 'mySite.com'

    // Example 3: Validation passes since the returned address ends with the entire SRV hostname and contains an additional domain level
    'mongodb+srv://mySite.com' => 'cluster_1.mySite.com'
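The rule illustrated by the three examples above can be sketched as a small predicate. This is an illustration of the stated validation rule, not the driver's internal implementation:

```javascript
// Illustrative predicate: for SRV hostnames with fewer than three parts, a
// returned address must end with '.' + the entire SRV hostname (i.e. contain
// at least one additional domain level) and must not be identical to it.
function isAllowedSrvAddress(srvHost, returnedAddress) {
  return returnedAddress !== srvHost && returnedAddress.endsWith('.' + srvHost);
}

console.log(isAllowedSrvAddress('mySite.com', 'myEvilSite.com'));       // false
console.log(isAllowedSrvAddress('mySite.com', 'mySite.com'));           // false
console.log(isAllowedSrvAddress('mySite.com', 'cluster_1.mySite.com')); // true
```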

    Explain now supports maxTimeMS

    Driver CRUD commands can be explained by providing the explain option:

    collection.find({}).explain('queryPlanner'); // using the fluent cursor API
    collection.deleteMany({}, { explain: 'queryPlanner' }); // as an option

    However, if maxTimeMS was provided, the maxTimeMS value was applied to the command to explain, and consequently the server could take more than maxTimeMS to respond.

    Now, maxTimeMS can be specified as a new option for explain commands:

    collection.find({}).explain({ verbosity: 'queryPlanner', maxTimeMS: 2000 }); // using the fluent cursor API
    collection.deleteMany({}, {
      explain: {
        verbosity: 'queryPlanner',
        maxTimeMS: 2000
      }
    }); // as an option

    If a top-level maxTimeMS option is provided in addition to the explain maxTimeMS, the explain-specific maxTimeMS is applied to the explain command, and the top-level maxTimeMS is applied to the explained command:

    collection.deleteMany({}, {
      maxTimeMS: 1000,
      explain: {
        verbosity: 'queryPlanner',
        maxTimeMS: 2000
      }
    });

    // the actual command that gets sent to the server looks like:
    {
      explain: { delete: <collection name>, ..., maxTimeMS: 1000 },
      verbosity: 'queryPlanner',
      maxTimeMS: 2000
    }

    Find and Aggregate Explain in Options is Deprecated

    Note

    Specifying explain for cursors in the operation's options is deprecated in favor of the .explain() methods on cursors:

    collection.find({}, { explain: true })
    // -> collection.find({}).explain()

    collection.aggregate([], { explain: true })
    // -> collection.aggregate([]).explain()

    db.find([], { explain: true })
    // -> db.find([]).explain()

    Fixed writeConcern.w set to 0 unacknowledged write protocol trigger

    The driver now correctly handles w=0 writes as 'fire-and-forget' messages, where the server does not reply and the driver does not wait for a response. This change eliminates the possibility of encountering certain rare protocol format, BSON type, or network errors that could previously arise during server replies. As a result, w=0 operations now involve less I/O, specifically no socket read.

    In addition, when command monitoring is enabled, the reply field of a CommandSucceededEvent of an unacknowledged write will always be { ok: 1 }.

    Fixed indefinite hang bug for high write load scenarios

    When performing large or numerous write operations, the driver is likely to encounter buffering at the socket layer. The logic that waited until buffered writes were complete would mistakenly drop 'data' events (reads from the socket), causing the driver to hang indefinitely or until a socket timeout. Using the pause and resume mechanisms exposed by Node.js streams, we have eliminated the possibility of data events going unhandled.

    Shout out to @hunkydoryrepair for debugging and finding this issue!

    Fixed change stream infinite resume

    Before this fix, when a change stream failed to establish a cursor on the server, the driver would attempt to resume the change stream indefinitely. Now, when the aggregate that establishes the change stream fails, the driver will throw an error and close the change stream.

    ClientSession.commitTransaction() no longer unconditionally overrides write concern

    Prior to this change, ClientSession.commitTransaction() would always override any previously configured writeConcern on the initial attempt. This overriding behaviour now only applies to internal and user-initiated retries of ClientSession.commitTransaction() for a given transaction.

    We invite you to try the mongodb library immediately, and report any issues to the NODE project.

  • 6.10.0-dev.20241120.sha.adb15feb - 2024-11-20
  • 6.10.0-dev.20241116.sha.aa986f8e - 2024-11-16
  • 6.10.0-dev.20241115.sha.1320ad87 - 2024-11-15
  • 6.10.0-dev.20241113.sha.20564f7a - 2024-11-13
  • 6.10.0-dev.20241112.sha.48ed47ec - 2024-11-12
  • 6.10.0-dev.20241109.sha.ed25d561 - 2024-11-09
  • 6.10.0-dev.20241108.sha.fd7acde6 - 2024-11-08
  • 6.10.0-dev.20241107.sha.e5582ed7 - 2024-11-07
  • 6.10.0-dev.20241106.sha.dc3fe957 - 2024-11-06
  • 6.10.0-dev.20241102.sha.2f3fb466 - 2024-11-02
  • 6.10.0-dev.20241101.sha.5e6638a2 - 2024-11-01
  • 6.10.0-dev.20241031.sha.f62c45d2 - 2024-10-31
  • 6.10.0-dev.20241024.sha.5c4355ad - 2024-10-24
  • 6.10.0-dev.20241022.sha.678e9322 - 2024-10-22
  • 6.9.0 - 2024-09-12

    6.9.0 (2024-09-06)

    The MongoDB Node.js team is pleased to announce version 6.9.0 of the mongodb package!

    Release Notes

    Driver support of upcoming MongoDB server release

    Increased the driver's max supported Wire Protocol version and server version in preparation for the upcoming release of MongoDB 8.0.

    MongoDB 3.6 server support deprecated

    Warning

    Support for 3.6 servers is deprecated and will be removed in a future version.

    Support for explicit resource management

    The driver now natively supports explicit resource management for MongoClient, ClientSession, ChangeStreams and cursors. Additionally, on compatible Node.js versions, explicit resource management can be used with cursor.stream() and the GridFSDownloadStream, since these classes inherit resource management from Node.js' readable streams.

    This feature is experimental and subject to change at any time. It will remain experimental until the proposal has reached stage 4 and Node.js declares its implementation of async disposable resources stable.

    To use explicit resource management with the Node driver, you must:

    • Use TypeScript 5.2 or greater (or a bundler/transpiler that supports resource management)
    • Enable tslib polyfills for your application
    • Either use a compatible Node.js version or polyfill Symbol.asyncDispose (see the TS 5.2 release announcement for more information).

    Explicit resource management is a feature that ensures that resources' disposal methods are always called when the resources' scope is exited. For driver resources, explicit resource management guarantees that the resources' corresponding close method is called when the resource goes out of scope.

    // before:
    {
      const client = await MongoClient.connect('<uri>');
      try {
        const session = client.startSession();
        try {
          const cursor = client.db('my-db').collection('my-collection').find({}, { session });
          try {
            const doc = await cursor.next();
          } finally {
            await cursor.close();
          }
        } finally {
          await session.endSession();
        }
      } finally {
        await client.close();
      }
    }

    // with explicit resource management:
    {
      await using client = await MongoClient.connect('<uri>');

      await using session = client.startSession();
      await using cursor = client.db('my-db').collection('my-collection').find({}, { session });

      const doc = await cursor.next();
    }
    // outside of scope, the cursor, session and mongo client will be cleaned up automatically.

    The full explicit resource management proposal can be found here.

    Driver now supports auto selecting between IPv4 and IPv6 connections

    On Node.js versions that support the autoSelectFamily and autoSelectFamilyAttemptTimeout options (Node 18.13+), these options can now be provided to the MongoClient and will be passed through to socket creation. autoSelectFamily defaults to true; autoSelectFamilyAttemptTimeout is not defined by default. Example:

    const client = new MongoClient(process.env.MONGODB_URI, { autoSelectFamilyAttemptTimeout: 100 });

    Allow passing through allowPartialTrustChain Node.js TLS option

    This option is now exposed through the MongoClient constructor's options parameter and controls the X509_V_FLAG_PARTIAL_CHAIN OpenSSL flag.

    Fixed enableUtf8Validation option

    Starting in v6.8.0 we inadvertently removed the ability to disable UTF-8 validation when deserializing BSON. Validation is normally a good thing, but it was always meant to be configurable and the recent Node.js runtime issues (v22.7.0) make this option indispensable for avoiding errors from mistakenly generated invalid UTF-8 bytes.

    Add duration indicating time elapsed between connection creation and when the connection is ready

    ConnectionReadyEvent now has a durationMS property that represents the time between the connection creation event and when the connection ready event is fired.

    Add duration indicating time elapsed between the beginning and end of a connection checkout operation

    ConnectionCheckedOutEvent/ConnectionCheckFailedEvent now have a durationMS property that represents the time between checkout start and success/failure.

    Create native cryptoCallbacks 🔐

    Node.js bundles OpenSSL, which means we can access the crypto APIs from C++ directly, avoiding the need to define them in JavaScript and call back into the JS engine to perform encryption. Now, when running the bindings in a version of Node.js that bundles OpenSSL 3 (should correspond to Node.js 18+), the cryptoCallbacks option will be ignored and C++ defined callbacks will be used instead. This improves the performance of encryption dramatically, as much as 5x faster. 🚀

    This improvement was made to the latest release of mongodb-client-encryption, which is available now!

    Only permit mongocryptd spawn path and arguments to be own properties

    We have added some defensive programming to the options that specify spawn path and spawn arguments for mongocryptd due to the sensitivity of the system resource they control, namely, launching a process. Now, mongocryptdSpawnPath and mongocryptdSpawnArgs must be own properties of autoEncryption.extraOptions. This makes it more difficult for a global prototype pollution bug related to these options to occur.
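A small sketch of the own-property check described above; this is an illustration of the defensive pattern, not the driver's actual internal logic:

```javascript
// Illustrative: honor mongocryptdSpawnPath only when it is an own property of
// extraOptions, so values inherited via a (possibly polluted) prototype are
// ignored.
function readSpawnPath(extraOptions) {
  return Object.hasOwn(extraOptions, 'mongocryptdSpawnPath')
    ? extraOptions.mongocryptdSpawnPath
    : undefined;
}

// Inherited (prototype) value is ignored; an own value is honored.
const inherited = Object.create({ mongocryptdSpawnPath: '/evil/mongocryptd' });
const own = { mongocryptdSpawnPath: '/usr/local/bin/mongocryptd' };

console.log(readSpawnPath(inherited)); // undefined
console.log(readSpawnPath(own));       // '/usr/local/bin/mongocryptd'
```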

    Support for range v2: Queryable Encryption supports range queries

    Queryable encryption range queries are now officially supported. To use this feature, you must:

    • use a version of mongodb-client-encryption > 6.1.0
    • use a Node driver version > 6.9.0
    • use an 8.0+ MongoDB enterprise server

    Important

    Collections and documents encrypted with range queryable fields with a 7.0 server are not compatible with range queries on 8.0 servers.

    Documentation for queryable encryption can be found in the MongoDB server manual.

    insertMany and bulkWrite accept ReadonlyArray inputs

    This improves the TypeScript developer experience: developers tend to use ReadonlyArray because it helps show where mutations are made, and, when noUncheckedIndexedAccess is enabled, it leads to a better type-narrowing experience.

    Please note that the array is read-only but the documents are not: the driver adds _id fields to your documents unless you request that the server generate the _id with forceServerObjectId.

    Fix retryability criteria for write concern errors on pre-4.4 sharded clusters

    Previously, the driver would erroneously retry writes on pre-4.4 sharded clusters based on a nested code in the server response (error.result.writeConcernError.code). Per the common drivers specification, retryability should be based on the top-level code (error.code). With this fix, the driver avoids unnecessary retries.

    The LocalKMSProviderConfiguration's key property accepts Binary for auto encryption

    In #4160 we fixed a type issue where a local KMS provider at runtime accepted a BSON Binary instance but the TypeScript types inaccurately permitted only Buffer and string. The same change has now been applied to AutoEncryptionOptions.

    BulkOperationBase (superclass of UnorderedBulkOperation and OrderedBulkOperation) now reports length property in Typescript

    The length getter for these classes was defined manually using Object.defineProperty, which hid it from TypeScript. Thanks to @sis0k0, we now have the getter defined on the class, which is functionally the same but a greatly improved DX when working with types. 🎉

    MongoWriteConcernError.code is overwritten by nested code within MongoWriteConcernError.result.writeConcernError.code

    MongoWriteConcernError is now correctly formed such that the original top-level code is preserved:

    • If no top-level code exists, MongoWriteConcernError.code should be set to MongoWriteConcernError.result.writeConcernError.code
    • If a top-level code is passed into the constructor, it shouldn't be changed or overwritten by the nested writeConcernError.code
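The precedence rule in these two bullets can be sketched as follows; this is an illustration, not the driver's actual constructor logic:

```javascript
// Illustrative: a top-level code, when present, wins; otherwise fall back to
// the nested writeConcernError code.
function resolveWriteConcernErrorCode(topLevelCode, result) {
  return topLevelCode ?? result?.writeConcernError?.code;
}

console.log(resolveWriteConcernErrorCode(91, { writeConcernError: { code: 64 } }));        // 91
console.log(resolveWriteConcernErrorCode(undefined, { writeConcernError: { code: 64 } })); // 64
```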

    Optimized cursor.toArray()

    Prior to this change, toArray() simply used the cursor's async iterator API, which parses BSON documents lazily (see more here). toArray(), however, eagerly fetches the entire set of results, pushing each document into the returned array. As such, toArray does not have the same benefits from lazy parsing as other parts of the cursor API.

    With this change, when toArray() accumulates documents, it empties the current batch of documents into the array before calling the async iterator again, which means each iteration will fetch the next batch rather than wrap each document in a promise. This allows the cursor.toArray() to avoid the required delays associated with async/await execution, and allows for a performance improvement of up to 5% on average! 🎉

    Note: This performance optimization does not apply if a transform has been provided to cursor.map() before toArray is called.

    Fixed mixed use of cursor.next() and cursor[Symbol.asyncIterator]

    In 6.8.0, we inadvertently prevented the use of cursor.next() along with using for await syntax to iterate cursors. If your code made use of the following pattern and the call to cursor.next retrieved all your documents in the first batch, then the for-await loop would never be entered. This issue is now fixed.

    const firstDoc = await cursor.next();

    for await (const doc of cursor) {
      // process doc
      // ...
    }


See this package in npm:
mongodb

See this project in Snyk:
https://app.snyk.io/org/issam-seghir/project/cd962cf4-e38a-48d0-992f-eced5ee97fd3?utm_source=github&utm_medium=referral&page=upgrade-pr