
Releases: milvus-io/milvus

milvus-2.2.7

28 Apr 14:27
cbcdebd

v2.2.7

Release date: 28 April, 2023

Milvus version Python SDK version Java SDK version Go SDK version Node.js SDK version
2.2.7 2.2.8 2.2.5 2.2.2 2.2.7

In this update, we have focused on resolving various issues reported by our users, enhancing the software's overall stability and functionality. Additionally, we have implemented several optimizations, such as load balancing, search grouping, and memory usage improvements.

Bugfix

  • Fixed a panic caused by not removing metadata of a dropped segment from the DataNode. (#23492)
  • Fixed a bug that caused forever blocking due to the release of a non-loaded partition. (#23612)
  • To prevent the query service from becoming unavailable, automatic balancing at the channel level has been disabled as a workaround. (#23632) (#23724)
  • Cancel failed tasks in the scheduling queue promptly to prevent an increase in QueryCoord scheduling latency. (#23649)
  • Fixed a compatibility bug and recalculated segment rows to prevent query services from becoming unavailable. (#23696)
  • Fixed a bug in the superuser password validation logic. (#23729)
  • Fixed the issue of shard detector rewatch failure, which was caused by returning a closed channel. (#23734)
  • Fixed a loading failure caused by unhandled interrupts in the AWS SDK. (#23736)
  • Fixed the "HasCollection" check in DataCoord. (#23709)
  • Fixed the bug that assigned all available nodes to a single replica incorrectly. (#23626)

Enhancement

  • Optimized the display of RootCoord histogram metrics. (#23567)
  • Reduced peak memory consumption during collection loading. (#23138)
  • Removed unnecessary handoff event-related metadata. (#23565)
  • Added a plugin logic to QueryNode to support the dynamic loading of shared library files. (#23599)
  • Supports load balancing with replica granularity. (#23629)
  • Released a load-balancing strategy based on scores. (#23805)
  • Added a coroutine pool to limit the concurrency of cgo calls triggered by "delete". (#23680)
  • Improved the compaction algorithm to make the distribution of segment sizes tend towards the ideal value. (#23692)
  • Changed the default shard number to 1. (#23593)
  • Improved search grouping algorithm to enhance throughput. (#23721)
  • Code refactoring: Separated the read, build, and load DiskANN parameters. (#23722)
  • Updated etcd and Minio versions. (#23765)
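The coroutine pool added in #23680 caps how many delete-triggered cgo calls run at once. The idea can be sketched with a semaphore; this is an illustrative Python sketch of the bounded-concurrency pattern, not Milvus's actual Go implementation:

```python
import asyncio

async def guarded_call(sem, i, stats):
    # Acquire a pool slot before entering the expensive (cgo-like) call.
    async with sem:
        stats["active"] += 1
        stats["peak"] = max(stats["peak"], stats["active"])
        await asyncio.sleep(0.001)  # stand-in for the real native call
        stats["active"] -= 1
        return i

async def run(limit, n):
    sem = asyncio.Semaphore(limit)  # the "pool" of size `limit`
    stats = {"active": 0, "peak": 0}
    results = await asyncio.gather(*(guarded_call(sem, i, stats) for i in range(n)))
    return results, stats["peak"]

results, peak = asyncio.run(run(4, 32))  # 32 calls, at most 4 concurrent
```

However many calls are issued, the semaphore guarantees the number in flight never exceeds the pool size.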

milvus-2.2.6

18 Apr 06:40
41d9ab3

v2.2.6

Release date: 18 April, 2023

Milvus version Python SDK version Java SDK version Go SDK version Node.js SDK version
2.2.6 2.2.7 2.2.5 2.2.1 2.2.4

Upgrade to Milvus 2.2.6 as soon as possible!

You are advised to refrain from using version 2.2.5 due to several critical issues that require immediate attention. Version 2.2.6 addresses these issues, one of which is the inability to recycle dirty binlog data. We highly recommend using version 2.2.6 instead of version 2.2.5 to avoid any potential complications.

If you hit the issue where data on object storage cannot be recycled, upgrade your Milvus to v2.2.6 to fix these issues.

Bugfix

  • Fixed the problem of DataCoord GC failure (#23298)
  • Fixed the problem that index parameters passed when creating a collection will override those passed in subsequent create_index operations (#23242)
  • Fixed the problem that a message backlog in RootCoord increases the latency of the whole system (#23267)
  • Fixed the accuracy of metric RootCoordInsertChannelTimeTick (#23284)
  • Fixed the issue that the timestamp reported by the proxy may stop in some cases (#23291)
  • Fixed the problem that the coordinator role may self-destruct by mistake during the restart process (#23344)
  • Fixed the problem that the checkpoint is left behind due to the abnormal exit of the garbage collection goroutine caused by the etcd restart (#23401)

Enhancement

  • Added slow-request logging for query/search when the latency is 5 seconds or more (#23274)

milvus-2.2.5

29 Mar 13:36
47e28fb

2.2.5

Release date: 29 March, 2023

Milvus version Python SDK version Java SDK version Go SDK version Node.js SDK version
2.2.5 2.2.4 2.2.4 2.2.1 2.2.4

Security

Fixed MinIO CVE-2023-28432 by upgrading MinIO to RELEASE.2023-03-20T20-16-18Z.

New Features

  • First/Random replica selection policy

    This policy allows for a random replica selection if the first replica chosen under the round-robin selection policy fails. This improves the throughput of database operations.
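The fallback logic of this policy can be sketched as follows (illustrative Python; the function and variable names are ours, not Milvus internals):

```python
import random

def pick_replica(replicas, healthy, rr_counter):
    """First try the replica chosen by round-robin; if it is unavailable,
    fall back to a random healthy replica instead of failing the request."""
    first = replicas[rr_counter % len(replicas)]
    if first in healthy:
        return first
    candidates = [r for r in replicas if r in healthy]
    return random.choice(candidates)

replicas = ["r0", "r1", "r2"]
choice = pick_replica(replicas, healthy={"r1", "r2"}, rr_counter=0)
# "r0" is down, so a random healthy replica is picked instead
```

Falling back to a random healthy replica spreads retried traffic instead of funneling it all to one node.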

Bug fixes

  • Fixed index data loss during the upgrade from Milvus 2.2.0 to 2.2.3.

    • Fixed an issue to prevent DataCoord from calculating segment rows from a stale number of log entries (#23069)
    • Fixed DataCoord's meta that may be broken with DataNode of the prior version (#23031)
  • Fixed DataCoord Out-of-Memory (OOM) under high flush pressure.

    • Fixed an issue to make DataNode's tt interval configurable (#22990)
    • Fixed endless appending SIDs (#22989)
  • Fixed a concurrency issue in the LRU cache that was caused by concurrent queries with specified output fields.

    • Used single-flight to limit concurrent readWithCache operations (#23037)
    • Fixed LRU cache concurrency (#23041)
    • Fixed query performance issue with a large number of segments (#23028)
    • Fixed shard leader cache
    • Fixed GetShardLeader returns old leader (#22887) (#22903)
    • Fixed an issue to deprecate the shard cache immediately if a query failed (#22848)
    • Fixed an issue to enable batch delete files on GCP of MinIO (#23052) (#23083)
    • Fixed flush delta buffer if SegmentID equals 0 (#23064)
    • Fixed nodes being unassigned from their resource group (#22800)
    • Fixed load partition timeout logic still using createdAt (#23022)
    • Fixed unsub channel always removes QueryShard (#22961)

Enhancements

  • Added memory protection by using the buffer size in the memory synchronization policy (#22797)
  • Added dimension checks upon inserted records (#22819) (#22826)
  • Added configuration item to disable BF load (#22998)
  • Aligned the maximum dimensions of the DiskANN index with those of a collection (#23027)
  • Added checks that all columns are aligned with the same num_rows (#22968) (#22981)
  • Upgraded Knowhere to 1.3.11 (#22975)
  • Added the user RPC counter (#22870)

milvus-2.3.0 beta

20 Mar 14:30
f547c1f
Pre-release

2.3.0 beta

Release date: 20 March, 2023

Milvus version Python SDK version Java SDK version Go SDK version Node.js SDK version
2.3.0 beta 2.2.3b1 N/A N/A N/A

The latest release of Milvus introduced a new feature that will please many users: Nvidia GPU support. This new feature brings the ability to support heterogeneous computing, which can significantly accelerate specialized workloads. With GPU support, users can expect faster and more efficient vector data searches, ultimately improving productivity and performance.

Features

GPU support

Milvus now supports two GPU-based IVF indexes: RAFT and FAISS. According to a benchmark on RAFT's GPU-based IVF-series indexes, GPU indexing achieves a 10x increase in search performance on large NQ cases.

  • Benchmark

    We have compared RAFT-IVF-Flat with IVF-Flat and HNSW at a recall rate of 95%, and obtained the following results.

    Datasets             SIFT      GIST    GLOVE    Deep
    HNSW (VPS)           14,537    791     1,516    5,761
    IVF-Flat (VPS)       3,097     142     791      723
    RAFT-IVF-Flat (VPS)  121,568   5,737   20,163   16,557

    We also benchmarked RAFT-IVF-PQ against Knowhere's fastest index, HNSW, at a recall rate of 80%.

    Datasets             SIFT      GIST    GLOVE    Deep
    HNSW (VPS)           20,809    2,593   8,005    13,291
    RAFT-IVF-PQ (VPS)    271,885   7,448   38,989   80,363

    These benchmarks were run against Knowhere on a host with an 8-core CPU, 32 GB of RAM, and an Nvidia A100 GPU, with an NQ of 100.

    For details on these benchmarks, refer to the release notes of Knowhere v2.1.0.

    Special thanks go to @wphicks and @cjnolet from Nvidia for their contributions to the RAFT code.

Memory-mapped (mmap) file I/O

In scenarios where memory is insufficient for large datasets and query performance is not critical, Milvus uses mmap to let the operating system treat parts of a file as if they were in memory. This reduces memory usage while maintaining good performance as long as the data stays in the system page cache.
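The underlying OS facility can be illustrated with Python's standard-library mmap module; this is a sketch of the mechanism, not Milvus code:

```python
import mmap
import os
import tempfile

# Create a data file standing in for a large segment file.
path = os.path.join(tempfile.mkdtemp(), "segment.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)  # map the whole file into the address space
    head = mm[:4]                  # reads are served via the page cache, not
                                   # from memory the process allocated itself
    mm.close()
```

The process can index into the file like a bytes object, while the kernel decides which pages actually occupy RAM.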

Range search

The range search method returns all vectors within a certain radius around the query point, as opposed to the k-nearest ones. Range search is a valuable tool for querying vectors within a specific distance, for use cases such as anomaly detection and object distinction.
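Unlike top-k search, the result size of a range search depends on the data rather than on a fixed k. A minimal sketch of the semantics (plain Python over L2 distance, not the Milvus API):

```python
import math

def range_search(query, vectors, radius):
    """Return (index, L2 distance) for every vector within `radius` of
    `query` -- the result count is unbounded, unlike top-k search."""
    hits = [(i, math.dist(query, v)) for i, v in enumerate(vectors)]
    return sorted((h for h in hits if h[1] <= radius), key=lambda h: h[1])

vectors = [(0.0, 0.0), (1.0, 0.0), (3.0, 4.0)]
hits = range_search((0.0, 0.0), vectors, radius=1.5)
# vectors 0 and 1 match; vector 2 (distance 5.0) is outside the radius
```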

Upsert

Milvus now supports record upsert, similar to that in a relational database. This operation atomically deletes the original entity with the primary key (PK) and inserts a new entity. Note that upserts can only be applied to a given primary key.
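The delete-then-insert semantics can be sketched like this (illustrative Python over an in-memory list, not the SDK call):

```python
def upsert(entities, pk, new_fields):
    """Remove any entity with primary key `pk`, then insert the new one --
    modelling Milvus's atomic delete + insert on a primary key."""
    remaining = [e for e in entities if e["pk"] != pk]
    remaining.append({"pk": pk, **new_fields})
    return remaining

rows = upsert([], 1, {"vec": [0.1, 0.2]})
rows = upsert(rows, 1, {"vec": [0.9, 0.8]})  # same PK: replaces, no duplicate
```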

Change Data Capture (CDC)

Change Data Capture is a process that identifies and tracks changes to data in a database. Milvus CDC provides real-time subscriptions to data and database events as they occur.

In addition to the aforementioned features, later 2.3 releases of Milvus will also introduce new features such as accurate count support, Feder visualization support, and growing segment indexing.

Milvus will later offer Dynamic Partitioning, which allows users to conveniently create and load a partition without releasing the collection. In addition, Milvus 2.3.0 will improve memory management, performance, and manageability in multi-partition cases.

Now, you can download Milvus and get started.

milvus-2.2.4

17 Mar 10:47
bbc21fe

2.2.4

Release date: 17 March, 2023

Milvus version Python SDK version Java SDK version Go SDK version Node.js SDK version
2.2.4 2.2.3 2.2.3 2.2.1 2.2.4

Milvus 2.2.4 is a minor update to Milvus 2.2.0. It introduces new features, such as namespace-based resource grouping, collection-level physical isolation, and collection renaming.

In addition to these features, Milvus 2.2.4 also addresses several issues related to rolling upgrades, failure recovery, and load balancing. These bug fixes contribute to a more stable and reliable system.

We have also made several enhancements to make your Milvus cluster faster and consume less memory with reduced convergence time for failure recovery.

New Features

  • Resource grouping

    Milvus has implemented resource grouping for QueryNodes. A resource group is a collection of QueryNodes. Milvus supports grouping QueryNodes in the cluster into different resource groups, where access to physical resources in different resource groups is completely isolated. See Manage Resource Group for more information.

  • Collection renaming

    The Collection-renaming API provides a way for users to change the name of a collection. Currently, PyMilvus supports this API, and SDKs for other programming languages are on the way. See Rename a Collection for details.

  • Google Cloud Storage support

    Milvus now supports Google Cloud Storage as the object storage.

  • New option to the search and query APIs

    If you are more concerned with performance than with data freshness, enabling this option skips searching all growing segments and offers better search performance in scenarios where searches and insertions are interleaved. See search() and query() for details.
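The freshness-for-speed trade-off behind this option can be sketched as follows (illustrative Python; the flag name here is ours, not necessarily the SDK parameter):

```python
def search_segments(segments, skip_growing=False):
    """Growing segments hold freshly inserted, not-yet-indexed data and are
    slower to scan; skipping them trades data freshness for latency."""
    out = []
    for state, data in segments:
        if skip_growing and state == "growing":
            continue
        out.extend(data)
    return out

segments = [("sealed", ["a", "b"]), ("growing", ["c"])]
fast = search_segments(segments, skip_growing=True)   # misses the new insert
fresh = search_segments(segments)                     # sees everything
```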

Bugfix

  • Fixed segment not found when forwarding delete to empty segment #22528 #22551
  • Fixed possible broken channel checkpoint in v2.2.2 #22205 #22227
  • Fixed entity number mismatch with some entities inserted #22306
  • Fixed DiskANN recovery failure after QueryNode reboots #22488 #22514
  • Fixed search/release on same segment #22414
  • Fixed file system crash during bulk-loading files prefixed with a '.' #22215
  • Added tickle for DataCoord watch event #21193 #22209
  • Fixed deadlock when releasing segments and removing nodes concurrently #22584
  • Added channel balancer on DataCoord #22324 #22377
  • Fixed balance generated reduce task #22236 #22326
  • Fixed QueryCoord panic caused by balancing #22486
  • Added scripts for rolling update Milvus's component installed with helm #22124
  • Added NotFoundTSafer and NoReplicaAvailable to retriable error code #22505
  • Fixed no retries upon gRPC error #22529
  • Fixed an issue for automatic component state update to healthy after start #22084
  • Added graceful-stop for sessions #22386
  • Added retry op for all servers #22274
  • Fixed metrics info panic when network error happens #22802
  • Fixed disordered minimum timestamp in proxy's pchan statistics #22756
  • Fixed an issue to ensure segment ID recovery upon failures to send time-tick #22771
  • Added segment info retrieval without the binlog path #22741
  • Added distribution.Peek for GetDataDistribution in case of blocked by release #22752
  • Fixed the segment not found error #22739
  • Reset delta position to vchannel in packSegmentLoadReq #22721
  • Added vector float data verification for bulkinsert and insert #22729
  • Upgraded Knowhere to 1.3.10 to fix bugs #22746
  • Fixed RootCoord double updates TSO #22715 #22723
  • Fixed confused time-tick logs #22733 #22734
  • Fixed session nil point #22696
  • Upgraded Knowhere to 1.3.10 #22614
  • Fixed incorrect sequence of timetick statistics on proxy #21855 #22560
  • Enabled DataCoord to handle GetIndexedSegment error from IndexCoord #22673
  • Fixed an issue for Milvus writes flushed segment key only after the segment is flushed #22667
  • Marked cache deprecated instead of removing it #22675
  • Updated shard leader cache #22632
  • Fixed an issue for the replica observer to assign node #22635
  • Fixed the not found issue when retrieving collection creation timestamp #22629 #22634
  • Fixed time-tick running backwards during DDLs #22617 #22618
  • Fixed max collection name case #22601
  • Fixed DataNode tickle not running by default #22622
  • Fixed DataCoord panic while reading timestamp of an empty segment #22598
  • Added scripts to get etcd info #22589
  • Fixed concurrent loading timeout during DiskANN indexing #22548
  • Fixed an issue to ensure index file not finish early because of compaction #22509
  • Added MultiQueryNodes tag for resource group #22527 #22544

Enhancement

  • Performance

    • Improved query performance by avoiding counting all bits #21909 #22285
    • Fixed dual copy of varchar fields while loading #22114 #22291
    • Fixed a DataCoord compaction panic after DataNode updates the plan, to ensure consistency #22143 #22329
    • Improved search performance by avoiding allocating a zero-byte vector during searches #22219 #22357
    • Upgraded Knowhere to 1.3.9 to accelerate IVF/BF #22368
    • Improved search task merge policy #22006 #22287
    • Refined the Read method of MinioChunkManager to reduce IO #22257
  • Memory Usage

    • Saved index files in 16 MB batches to reduce memory usage while indexing #22369
    • Added a sync policy that triggers when memory usage grows too large #22241
  • Others

    • Removed constraints that compaction happens only on indexed segment #22145
    • Changed RocksMQ page size to 256M to reduce RocksMQ disk usage #22433
    • Changed the etcd session timeout to 20s to improve recovery speed #22400
    • Added the RBAC for the GetLoadingProgress and GetLoadState API #22313

milvus-2.2.3

10 Feb 11:29
6313a45

2.2.3

Release date: 10 Feb, 2023

Milvus version Python SDK version Java SDK version Go SDK version Node.js SDK version
2.2.3 2.2.2 2.2.3 coming soon 2.2.1

Milvus 2.2.3 introduces the rolling upgrade capability to Milvus clusters and brings high availability settings to RootCoords. The former greatly reduces the impacts brought by the upgrade and restart of the Milvus cluster in production to the minimum, while the latter enables coordinators to work in active-standby mode and ensures a short failure recovery time of no more than 30 seconds.

In this release, Milvus also ships with a lot of improvements and enhancements in performance, including a fast bulk-insert experience with reduced memory usage and less loading time.

Breaking changes

In 2.2.3, the maximum number of fields in a collection is reduced from 256 to 64. (#22030)

Features

  • Rolling upgrade using Helm

    The rolling upgrade feature allows Milvus to respond to incoming requests during the upgrade, which is not possible in previous releases. In such releases, upgrading a Milvus instance requires it to be stopped first and then restarted after the upgrade is complete, leaving all incoming requests unanswered.

    Currently, this feature applies only to Milvus instances installed using Milvus Helm charts.

    Related issues:

    • Graceful stop of index nodes implemented (#21556)
    • Graceful stop of query nodes implemented (#21528)
    • Auto-sync of segments on closing implemented (#21576)
    • Graceful stop APIs and error messages improved (#21580)
    • Issues identified and fixed in the code of QueryNode and QueryCoord (#21565)
  • Coordinator HA

    Coordinator HA allows Milvus coordinators to work in active-standby mode to avoid single-point of failures.

    Related issues:

    • HA-related issues identified and fixed in QueryCoordV2 (#21501)
    • Auto-registration on the startup of Milvus was implemented to prevent both coordinators from working as the active coordinators. (#21641)
    • HA-related issues identified and fixed in RootCoords (#21700)
    • Issues identified and fixed in active-standby switchover (#21747)

Enhancements

  • Bulk-insert performance enhanced

    • Bulk-insert enhancement implemented (#20986 #21532)
    • JSON parser optimized for data import (#21332)
    • Stream-reading NumPy data implemented (#21540)
    • Bulk-insert progress report implemented (#21612)
    • Issues identified and fixed so that Milvus checks indexes before flushing segments while a bulk-insert is in progress (#21604)
    • Issues related to bulk-insert progress identified and fixed (#21668)
    • Issues related to bulk-insert report identified and fixed (#21758)
    • Issues identified and fixed so that Milvus does not seal failed segments while performing bulk-insert operations. (#21779)
    • Issues identified and fixed so that bulk-insert operations do not cause a slow flush (#21918)
    • Issues identified and fixed so that bulk-insert operations do not crash the DataNodes (#22040)
    • Refresh option added to LoadCollection and LoadPartition APIs (#21811)
    • Segment ID update on data import implemented (#21583)
  • Memory usage reduced

    • Issues identified and fixed so that loading failures do not return insufficient memory (#21592)
    • Arrow usage removed from FieldData (#21523)
    • Memory usage reduced in indexing scalar fields (#21970) (#21978)
  • Monitoring metrics optimized

    • Issues related to unregistered metrics identified and fixed (#22098)
    • A new segment metric that counts the number of binlog files added (#22085)
    • Many new metrics added (#21975)
    • Minor fix on segment metric (#21977)
  • Meta storage performance improved

    • Improved ListSegments performance for the DataCoord catalog. (#21600)
    • Improved LoadWithPrefix performance for SuffixSnapshot. (#21601)
    • Removed redundant LoadPrefix requests for Catalog ListCollections. (#21551) (#21594)
    • Added a WalkWithPrefix API to the MetaKv interface. (#21585)
    • Added GC for snapshot KV based on time-travel. (#21417) (#21763)
  • Performance improved

    • Upgraded Knowhere to 1.3.7. (#21735)
    • Upgraded Knowhere to 1.3.8. (#22024)
    • Skipped search GRPC call for standalone. (#21630)
    • Optimized some low-efficient code. (#20529) (#21683)
    • Fixed filling the string field twice when a string index exists. (#21852) (#21865)
    • Used all() API for bitset check. (#20462) (#21682)
  • Others

    • Implemented the GetLoadState API. (#21533)
    • Added a task to unsubscribe dmchannel. (#21513) (#21794)
    • Explicitly list the triggering reasons when Milvus denies reading/writing. (#21553)
    • Verified and adjusted the number of rows in a segment before saving and passing SegmentInfo. (#21200)
    • Added a segment seal policy by the number of binlog files. (#21941)
    • Upgraded etcd to 3.5.5. (#22007)

Bug Fixes

  • QueryCoord segment replacement fixed

    • Fixed the mismatch of sealed segments IDs after enabling load-balancing in 2.2. (#21322)
    • Fixed the sync logic of the leader observer. (#20478) (#21315)
    • Fixed the issues that observers may update the current target to an unfinished next target. (#21107) (#21280)
    • Fixed the load timeout after the next target updates. (#21759) (#21770)
    • Fixed the issue that the current target may be updated to an invalid target. (#21742) (#21762)
    • Fixed the issue that a failed node may update the current target to an unavailable target. (#21743)
  • Improperly invalidated proxy cache fixed

    • Fixed the issue that the proxy does not update the shard leaders cache for some types of error (#21185) (#21303)
    • Fixed the issue that Milvus invalidates the proxy cache first when the shard leader list contains an error (#21451) (#21464)
  • CheckPoint and GC Related issues fixed

    • Fixed the issue that the checkpoint will not update after data delete and compact (#21495)
    • Fixed issues related to channel checkpoint and GC (#22027)
    • Added restraints on segment GC of DML position before channel copy (#21773)
    • Removed collection meta after GC is complete (#21595) (#21671)
  • Issues...


milvus-2.2.2

22 Dec 11:00
0fdc1a0

2.2.2

Release date: 22 December, 2022

Milvus version Python SDK version Java SDK version Go SDK version Node.js SDK version
2.2.2 2.2.1 2.2.1 2.2.0 2.2.1

Milvus 2.2.2 is a minor fix of Milvus 2.2.1. It fixes a few loading failures that appeared after the upgrade to 2.2.1, as well as the issue that the proxy cache is not cleaned upon some types of errors.

Bug Fixes

  • Fixed the issue that the proxy doesn't update the cache of shard leaders due to some types of errors. (#21320)
  • Fixed the issue that the loaded info is not cleaned for released collections/partitions. (#21321)
  • Fixed the issue that the load count is not cleared on time. (#21314)

milvus-2.2.1

15 Dec 14:40
ae5259c

v2.2.1

Release date: 15 December, 2022

Milvus version Python SDK version Java SDK version Go SDK version Node.js SDK version
2.2.1 2.2.0 2.2.1 2.2.0 2.2.0

Milvus 2.2.1 is a minor fix release of Milvus 2.2.0. It supports authentication and TLS on all dependencies, dramatically optimizes search performance, and fixes some critical issues. With tremendous contributions from the community, this release resolved over 280 issues, so please try the new release and give us feedback on stability, performance, and ease of use.

New Features

  • Supports Pulsar tenant and authentication. (#20762)
  • Supports TLS in etcd config source. (#20910)

Performance

After upgrading the Knowhere vector engine and changing the parallelism strategy, Milvus 2.2.1 improves search performance by over 30%.

Optimized the scheduler and increased the probability of merging tasks. (#20931)

Bug Fixes

  • Fixed term filtering failures on indexed scalar fields. (#20840)
  • Fixed the issue that only partial data is returned upon QueryNode restarts. (#21139) (#20976)
  • Fixed IndexNode panic upon failures to create an index. (#20826)
  • Fixed endless BinaryVector compaction and generation of data on Minio. (#21119) (#20971)
  • Fixed the issue that meta_cache of proxy partially updates. (#21232)
  • Fixed slow segment loading due to staled checkpoints. (#21150)
  • Fixed concurrent write operations caused by concurrently loading the Casbin model. (#21132) (#21145) (#21073)
  • Forbade garbage-collecting index meta when creating an index. (#21024)
  • Fixed a bug that index data cannot be garbage-collected because ListWithPrefix on MinIO was called with recursive set to false. (#21040)
  • Fixed an issue that an error code is returned when a query expression does not match any results. (#21066)
  • Fixed search failures on disk index when search_list equals limit. (#21114)
  • Filled collection schema after DataCoord restarts. (#21164)
  • Fixed an issue that the compaction handler may double release and hang. (#21019)
  • [restapi] Fixed precision loss for Int64 fields upon insert requests. (#20827)
  • Increased MaxWatchDuration and made it configurable to prevent shards with large data loads from timing out. (#21010)
  • Fixed the issue that the compaction target segment rowNum is always 0. (#20941)
  • Fixed the issue that IndexCoord deletes segment index by mistake because IndexMeta is not stored in time. (#21058)
  • Fixed the issue that DataCoord crashes if auto-compaction is disabled. (#21079)
  • Fixed the issue that Milvus searches growing segments even though they are already indexed. (#21215)

Improvements

  • Refined logs and set the default log level to INFO.
  • Fixed incorrect metrics and refined the metric dashboard.
  • Made TopK limit configurable (#21155)

Breaking changes

Milvus now limits each RPC to 64 MB to avoid OOM errors caused by large message packs.

milvus-2.2.0

18 Nov 16:23
e7429f8

V2.2.0

Release date: 18 November, 2022

Compatibility

Milvus version Python SDK version Java SDK version Go SDK version Node.js SDK version
2.2.0 2.2.0 2.2.0 coming soon 2.2.0

Milvus 2.2.0 introduces many new features, including support for a disk-based approximate nearest neighbor (ANN) algorithm, bulk insertion of entities from files, and role-based access control (RBAC) for improved security. In addition, this major release also ushers in a new era for vector search with enhanced stability, faster search speed, and more flexible scalability.

Since metadata storage has been refined and API usage normalized, Milvus 2.2 is not fully compatible with earlier releases. Read these guides to learn how to safely upgrade from Milvus 2.1.x to 2.2.0.

Features

  • Support for bulk insertion of entities from files
    Milvus now offers a new set of bulk-insertion APIs to make data insertion more efficient. You can now upload entities in a JSON file directly to Milvus. See Insert Entities from Files for details.

  • Query Result Pagination
    To avoid massive search and query results returned in a single RPC, Milvus now supports configuring offset and filtering results with keywords in searches and queries. See Search and Query for details.

  • Role-based access control (RBAC)
    Like other traditional databases, Milvus now supports RBAC so that you can manage users, roles, and privileges. See Enable RBAC for details.

  • Quota limitation
    Quota is a new mechanism that protects the system from OOM errors and crashes under bursts of traffic. By imposing quota limitations, you can limit the ingestion rate, search rate, etc. See Quota and Limitation Configurations for details.

  • Time to live (TTL) at a collection level
    In prior releases, Milvus only supported configuring TTL at the cluster level. Milvus 2.2.0 now supports configuring a collection's TTL when you create or modify it. After setting TTL for a collection, the entities in that collection automatically expire after the specified period of time. See Create a collection or Modify a collection for details.

  • Support for Disk-based approximate nearest neighbor search (ANNS) indexes (Beta)
    Traditionally, you need to load the entire index into memory before search. Now with DiskANN, an SSD-resident and Vamana graph-based ANNS algorithm, you can directly search on large-scale datasets and save up to 10 times the memory.

  • Data backup (Beta)
    Thanks to the contribution from Zilliz, Milvus 2.2.0 now provides milvus-backup to back up and restore data. The tool can be used either in a command line or an API server for data security.
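Query Result Pagination above boils down to filter-then-slice. A minimal sketch of the semantics (illustrative Python mirroring the offset/limit parameters, not the SDK itself):

```python
def paged_query(rows, predicate, offset=0, limit=10):
    """Apply the filter expression, then return one page of the matches."""
    matched = [r for r in rows if predicate(r)]
    return matched[offset:offset + limit]

rows = [{"id": i} for i in range(100)]
page = paged_query(rows, lambda r: r["id"] % 2 == 0, offset=10, limit=5)
# even ids 0..98 match; the page starts at the 11th match: ids 20..28
```

Paging on the server keeps each RPC response small instead of returning the whole result set at once.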

Bug fixes and stability

  • Implements query coord V2, which handles all channel/segment allocation in a fully event-driven and asynchronous mode. Query coord V2 addresses all issues of stuck searches and accelerates failure recovery.
  • Root coord and index coord are refactored for more elegant handling of errors and better task scheduling.
  • Fixes the issue of invalid RocksMQ retention mechanism when Milvus Standalone restarts.
  • Meta storage format in etcd is refactored. With the new compression mechanism, etcd kv size is reduced by 10 times and the issues of etcd memory and space are solved.
  • Fixes a couple of memory issues when entities are continuously inserted or deleted.

Improvements

  • Performance
    • Fixes a performance bottleneck so that Milvus can fully utilize all cores on CPUs with more than 8 cores.
    • Dramatically improves search throughput and reduces latency.
    • Speeds up loading by processing load operations in parallel.
  • Observability
    • Changes all log levels to info by default.
    • Added collection-level latency metrics for search, query, insertion, and deletion.
  • Debug tool
    • BirdWatcher, the debug tool for Milvus, is further optimized; it can now connect to the Milvus meta storage and inspect part of the internal status of the Milvus system.

Others

  • Index and load
    • A collection can only be loaded with an index created on it.
    • Indexes cannot be created after a collection is loaded.
    • A loaded collection must be released before dropping the index created on this collection.
  • Flush
    • Flush API, which forces a seal on a growing segment and syncs the segment to object storage, is now exposed to users. Calling flush() frequently may affect search performance as too many small segments are created.
    • No auto-flush is triggered by any SDK APIs such as num_entities(), create_index(), etc.
  • Time Travel
    • In Milvus 2.2, Time Travel is disabled by default to save disk usage. To enable Time Travel, configure the parameter common.retentionDuration manually.
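The flush caveat above — frequent flushes create many small segments — can be sketched with a toy model (illustrative Python, not Milvus internals):

```python
class Collection:
    """Toy model: inserts go to a growing buffer; flush() seals it."""
    def __init__(self):
        self.growing = []
        self.sealed = []  # each entry models a segment synced to storage

    def insert(self, row):
        self.growing.append(row)

    def flush(self):
        # Seal the growing segment and "sync" it to object storage.
        if self.growing:
            self.sealed.append(list(self.growing))
            self.growing = []

c = Collection()
for i in range(3):
    c.insert(i)
    c.flush()  # flushing after every insert...
# ...produces 3 tiny one-row segments instead of 1 segment of 3 rows
```

Many tiny sealed segments mean more files to open and merge at search time, which is why frequent flush() calls hurt search performance.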

milvus-2.1.4

29 Sep 11:28
9122e34

v2.1.4

Release date: 29 September 2022

Milvus version Python SDK version Java SDK version Go SDK version Node.js SDK version
2.1.4 2.1.3 2.1.0 2.1.2 2.1.3

Milvus 2.1.4 is a minor bug-fix version of Milvus 2.1.0. The highlight of this version is that we have remarkably reduced memory usage for scalar data. It also fixes a few issues with data loading, a query coord deadlock on restart, garbage collection on the wrong path, and a search crash.

Bug Fixes

  • 19326, 19309 Fixes failure to load collection with MARISA string index.

  • 19353 Fixes garbage collection on the wrong path.

  • 19402 Fixes query coord init deadlock when restarting.

  • 19312 Adds SyncSegments to sync meta between DN and DC.

  • 19486 Fixes DML stream leakage in proxy.

  • 19148, 19487, 19465 Fixes the failure of CGO to lock OS thread.

  • 19524 Fixes offset in search being equal to insert barrier.

Improvements

  • 19436 Ignores cases when comparing metric type in Segcore.

  • 19197, 19245, 19421 Optimizes large memory usage of InsertRecord.

v2.1.2

Release date: 16 September 2022

  • #18383, #18432 Fixed garbage collector panics when parsing segment IDs with bad input.

  • #18418 Fixes a metatable-related error when an etcd compaction error occurs.

  • #18568 Closes Node/Segment detector when closing ShardCluster.