diff --git a/NEWS.md b/NEWS.md
index c77f3df34..05b70d433 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -8,6 +8,7 @@
### Features
* Move Instance sub-entities population from database trigger to code ([MSEARCH-887](https://folio-org.atlassian.net/browse/MSEARCH-887))
+* Call Numbers Browse: Implement Database Structure and Logic for Managing Call Numbers ([MSEARCH-862](https://folio-org.atlassian.net/browse/MSEARCH-862))
### Bug fixes
* Remove shelving order calculation for local call-number types
diff --git a/README.md b/README.md
index 6422f2eba..db56e86d8 100644
--- a/README.md
+++ b/README.md
@@ -1011,4 +1011,4 @@ and the [Docker image](https://hub.docker.com/r/folioorg/mod-search/)
### Development tips
-The development tips are described on the following page: [Development tips](doc/development.md)
+The development tips are described on the following page: [Development tips](development.md)
diff --git a/doc/development.md b/development.md
similarity index 82%
rename from doc/development.md
rename to development.md
index 4e8c07521..9d36a5572 100644
--- a/doc/development.md
+++ b/development.md
@@ -1,3 +1,115 @@
+## Local Development Setup Using Docker Compose for mod-search Module
+
+This guide walks you through setting up a local development environment for the `mod-search` module using Docker Compose.
+It covers the supporting services, such as an API mock server, OpenSearch, Kafka, and PostgreSQL, along with their web UIs.
+
+### Prerequisites
+
+Before you begin, ensure you have the following installed:
+- [Docker](https://docs.docker.com/get-docker/)
+- [Docker Compose](https://docs.docker.com/compose/install/)
+
+Make sure your [.env file](docker/.env) includes the necessary variables: `DB_USERNAME`, `DB_PASSWORD`, `DB_DATABASE`, `PGADMIN_DEFAULT_EMAIL`, and `PGADMIN_DEFAULT_PASSWORD`.
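+
+For reference, a minimal `.env` with the defaults from this repository's [docker/.env](docker/.env) looks like:
+
+```shell
+COMPOSE_PROJECT_NAME=folio-mod-search
+
+# Postgres variables
+DB_DATABASE=okapi_modules
+DB_USERNAME=folio_admin
+DB_PASSWORD=folio_admin
+
+# PgAdmin variables
+PGADMIN_DEFAULT_EMAIL=user@domain.com
+PGADMIN_DEFAULT_PASSWORD=admin
+```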
+
+### Setup Environment
+
+1. **Start Services**
+
+   From the project root, execute:
+ ```shell
+ docker compose -f docker/docker-compose.yml up -d
+ ```
+
+2. **Start the mod-search Application**
+
+   First, package the application:
+ ```shell
+ mvn clean package
+ ```
+
+ To run the `mod-search` application, you have two options:
+ - **Build and Run the Docker Image:**
+ ```shell
+ docker build -t dev.folio/mod-search .
+    docker run -p 8081:8081 --network folio-mod-search_mod-search-local -e "DB_HOST=postgres" -e "KAFKA_HOST=kafka" -e "ELASTICSEARCH_URL=http://opensearch:9200" dev.folio/mod-search
+ ```
+ - **Run the Application Directly:** You can also run the application directly if your development environment is set up with the necessary Java runtime. Execute the following command from the root of your project:
+ ```shell
+ java -jar target/mod-search-fat.jar
+ ```
+
+3. **Initialize Environment**
+
+   After starting the services and the mod-search application, run the following curl command to post a tenant; this brings up the Kafka listeners and creates the search indices:
+ ```shell
+ curl --location --request POST 'http://localhost:8081/_/tenant' \
+ --header 'Content-Type: application/json' \
+ --header 'x-okapi-tenant: test_tenant' \
+ --header 'x-okapi-url: http://localhost:9130' \
+ --data-raw '{
+ "module_to": "mod-search"
+ }'
+ ```
+   You can check which tenants are enabled by WireMock in `src/test/resources/mappings/user-tenants.json`.
+
+4. **Consortium Support for Local Environment Testing**
+
+   The consortium feature is detected automatically at runtime via the `/user-tenants` endpoint; on module enable it is controlled by the `centralTenantId` tenant parameter, as shown in the following curl requests:
+   - **Enable the central tenant:**
+ ```shell
+ curl --location --request POST 'http://localhost:8081/_/tenant' \
+ --header 'Content-Type: application/json' \
+ --header 'x-okapi-tenant: consortium' \
+ --header 'x-okapi-url: http://localhost:9130' \
+ --data-raw '{
+ "module_to": "mod-search",
+ "parameters": [
+ {
+ "key": "centralTenantId",
+ "value": "consortium"
+ }
+ ]
+ }'
+ ```
+
+   - **Enable a member tenant:**
+ ```shell
+ curl --location --request POST 'http://localhost:8081/_/tenant' \
+ --header 'Content-Type: application/json' \
+ --header 'x-okapi-tenant: member_tenant' \
+ --header 'x-okapi-url: http://localhost:9130' \
+ --data-raw '{
+ "module_to": "mod-search",
+ "parameters": [
+ {
+ "key": "centralTenantId",
+ "value": "consortium"
+ }
+ ]
+ }'
+ ```
+
+### Access Services
+
+- **API Mock Server**: http://localhost:9130
+- **OpenSearch Dashboard**: http://localhost:5601
+- **Kafka UI**: http://localhost:8080
+- **PgAdmin**: http://localhost:5050
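+
+Note that applications running on the host reach Kafka at `localhost:9092` (the `HOST` advertised listener in [docker-compose.yml](docker/docker-compose.yml)), while containers on the compose network use `kafka:9093`. For example, a Spring Boot client on the host could be configured with the following illustrative snippet:
+
+```yaml
+spring:
+  kafka:
+    bootstrap-servers: localhost:9092
+```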
+
+### Monitoring and Logs
+
+To monitor the logs for any of the services:
+```shell
+docker compose -f docker/docker-compose.yml logs [service_name]
+```
+
+### Stopping Services
+
+To stop and remove all containers associated with the compose file:
+```shell
+docker compose -f docker/docker-compose.yml down
+```
+
## Overview
`mod-search` is based on metadata-driven approach. It means that resource description is specified using JSON file and
@@ -44,7 +156,7 @@ the [full-text queries](https://www.elastic.co/guide/en/elasticsearch/reference/
|:--------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| searchTypes | List of search types that are supported for the current field. Allowed values: `facet`, `filter`, `sort` |
| searchAliases | List of aliases that can be used as a field name in the CQL search query. It can be used to combine several fields together during the search. For example, a query `keyword all title` combines for instance record following fields - `title`, `alternativeTitles.alternativeTitle`, `indexTitle`, `identifiers.value`, `contributors.name`
Other way of using it - is to rename field keeping the backward compatibility without required reindex. |
-| index | Reference to the Elasticsearch mappings that are specified in [index-field-types](../src/main/resources/elasticsearch/index-field-types.json) |
+| index | Reference to the Elasticsearch mappings that are specified in [index-field-types](src/main/resources/elasticsearch/index-field-types.json) |
| showInResponse | Marks field to be returned during the search operation. `mod-search` adds to the Elasticsearch query all marked field paths. See also: [Source filtering](https://www.elastic.co/guide/en/elasticsearch/reference/master/search-fields.html#source-filtering) |
| searchTermProcessor | Search term processor, which pre-processes incoming value from CQL query for the search request. |
| mappings | Elasticsearch fields mappings. It can contain new field mapping or can enrich referenced mappings, that comes from `index-field-types` |
@@ -251,77 +363,4 @@ assertThatThrownBy(() -> service.doExceptionalOperation())
### Integration testing
The module uses [Testcontainers](https://www.testcontainers.org/) to run Elasticsearch, Apache Kafka and PostgreSQL
-in embedded mode. It is required to have Docker installed and available on the host where the tests are executed.
-
-### Local environment testing
-Navigate to the docker folder in the project and run `docker-compose up`.
-This will build local mod-search image and bring it up along with all necessary infrastructure:
- - elasticsearch along with dashboards (kibana analogue from opensearch)
- - kafka along with zookeeper
- - postgres
- - wiremock server for mocking external api calls (for example authorization)
-
-Then, you should invoke
-```shell
-curl --location --request POST 'http://localhost:8081/_/tenant' \
---header 'Content-Type: application/json' \
---header 'x-okapi-tenant: test_tenant' \
---header 'x-okapi-url: http://api-mock:8080' \
---data-raw '{
- "module_to": "mod-search-$version$",
- "purge": "false"
-}
-```
-to post some tenant in order to bring up kafka listeners and get indices created.
-You can check which tenants enabled by wiremock in the `src/test/resources/mappings/user-tenants.json`
-
-To rebuild mod-search image you should:
- - bring down existing containers by running `docker-compose down`
- - run `docker-compose build mod-search` to build new mod-search image
- - run `docker-compose up` to bring up infrastructure
-
-Hosts/ports of containers to access functionality:
- - `http://localhost:5601/` - dashboards UI for elastic monitoring, data modification through dev console
- - `localhost` - host, `5010` - port for remote JVM debug
- - `http://localhost:8081` - for calling mod-search REST api. Note that header `x-okapi-url: http://api-mock:8080` should be added to request for apis that take okapi url from headers
- - `localhost:29092` - for kafka interaction. If you are sending messages to kafka from java application with `spring-kafka` then this host shoulb be added to `spring.kafka.bootstrap-servers` property of `application.yml`
-
-### Consortium support for Local environment testing
-Consortium feature is defined automatically at runtime by calling /user-tenants endpoint.
-Consortium feature on module enable is defined by 'centralTenantId' tenant parameter.
-
-Invoke the following
-```shell
-curl --location --request POST 'http://localhost:8081/_/tenant' \
---header 'Content-Type: application/json' \
---header 'x-okapi-tenant: consortium' \
---header 'x-okapi-url: http://api-mock:8080' \
---data-raw '{
- "module_to": "mod-search-$version$",
- "parameters": [
- {
- "key": "centralTenantId",
- "value": "consortium"
- }
- ]
-}
-```
-
-Then execute the following to enable `member tenant`
-```shell
-curl --location --request POST 'http://localhost:8081/_/tenant' \
---header 'Content-Type: application/json' \
---header 'x-okapi-tenant: member_tenant' \
---header 'x-okapi-url: http://api-mock:8080' \
---data-raw '{
- "module_to": "mod-search-$version$",
- "parameters": [
- {
- "key": "centralTenantId",
- "value": "consortium"
- }
- ]
-}
-```
-Consider that `tenantParameters` like `loadReference` and `loadSample` won't work because `loadReferenceData`
-method is not implemented in the `SearchTenantService` yet.
+in embedded mode. It is required to have Docker installed and available on the host where the tests are executed.
\ No newline at end of file
diff --git a/docker/.env b/docker/.env
index 1308ac0c6..92f093752 100644
--- a/docker/.env
+++ b/docker/.env
@@ -1,18 +1,10 @@
COMPOSE_PROJECT_NAME=folio-mod-search
-DB_HOST=postgres
-DB_PORT=5432
+
+# Postgres variables
DB_DATABASE=okapi_modules
DB_USERNAME=folio_admin
DB_PASSWORD=folio_admin
+
+# PgAdmin variables
PGADMIN_DEFAULT_EMAIL=user@domain.com
PGADMIN_DEFAULT_PASSWORD=admin
-PGADMIN_PORT=5050
-KAFKA_HOST=kafka
-KAFKA_PORT=9092
-REPLICATION_FACTOR=1
-ENV=folio
-DEBUG_PORT=5005
-OKAPI_URL=http://api-mock:8080
-PGADMIN_DEFAULT_EMAIL=user@domain.com
-PGADMIN_DEFAULT_PASSWORD=admin
-PGADMIN_PORT=5050
\ No newline at end of file
diff --git a/docker/dashboards/Dockerfile b/docker/dashboards/Dockerfile
deleted file mode 100644
index 661288336..000000000
--- a/docker/dashboards/Dockerfile
+++ /dev/null
@@ -1,5 +0,0 @@
-FROM opensearchproject/opensearch-dashboards:1.3.2
-
-RUN /usr/share/opensearch-dashboards/bin/opensearch-dashboards-plugin remove securityDashboards
-
-COPY --chown=opensearch-dashboards:opensearch-dashboards opensearch_dashboards.yml /usr/share/opensearch-dashboards/config/
diff --git a/docker/dashboards/opensearch_dashboards.yml b/docker/dashboards/opensearch_dashboards.yml
deleted file mode 100644
index c6f9ae50d..000000000
--- a/docker/dashboards/opensearch_dashboards.yml
+++ /dev/null
@@ -1,3 +0,0 @@
-server.name: opensearch-dashboards
-server.host: "0"
-opensearch.hosts: http://elasticsearch:9200
\ No newline at end of file
diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
index bfb11c250..0cd0a3c17 100644
--- a/docker/docker-compose.yml
+++ b/docker/docker-compose.yml
@@ -1,35 +1,4 @@
-version: "3.8"
-
services:
- mod-search:
- container_name: mod-search
- image: dev.folio/mod-search
- build:
- context: ../
- dockerfile: Dockerfile
- networks:
- - mod-search-local
- ports:
- - "8081:8081"
- - "${DEBUG_PORT}:${DEBUG_PORT}"
- depends_on:
- - api-mock
- - opensearch
- - kafka
- - postgres
- environment:
- ELASTICSEARCH_URL: http://opensearch:9200
- ENV: ${ENV}
- KAFKA_HOST: ${KAFKA_HOST}
- KAFKA_PORT: ${KAFKA_PORT}
- REPLICATION_FACTOR: ${REPLICATION_FACTOR}
- DB_USERNAME: ${DB_USERNAME}
- DB_PORT: ${DB_PORT}
- DB_HOST: ${DB_HOST}
- DB_DATABASE: ${DB_DATABASE}
- DB_PASSWORD: ${DB_PASSWORD}
- OKAPI_URL: http://api-mock:8080
- JAVA_OPTIONS: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:${DEBUG_PORT}"
api-mock:
container_name: api-mock_mod-search
@@ -45,86 +14,79 @@ services:
opensearch:
container_name: opensearch_mod-search
- image: dev.folio/opensearch:1.3.2
+ image: dev.folio/opensearch:latest
build:
context: opensearch
dockerfile: Dockerfile
networks:
+ - opensearch-net
- mod-search-local
ports:
- "9200:9200"
- - "9300:9300"
volumes:
- es-data:/usr/share/elasticsearch/data
environment:
- discovery.type=single-node
- - discovery.zen.minimum_master_nodes=1
- - "DISABLE_SECURITY_PLUGIN=true"
- - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
+ - "DISABLE_INSTALL_DEMO_CONFIG=true"
+ - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
opensearch-dashboards:
container_name: opensearch-dashboards_mod-search
- image: dev.folio/opensearch-dashboards:1.3.2
- build:
- context: dashboards
- dockerfile: Dockerfile
+ image: opensearchproject/opensearch-dashboards:2
ports:
- "5601:5601"
+ expose:
+ - "5601"
environment:
OPENSEARCH_HOSTS: '["http://opensearch:9200"]'
+      DISABLE_SECURITY_DASHBOARDS_PLUGIN: "true"
networks:
+ - opensearch-net
- mod-search-local
depends_on:
- opensearch
- zookeeper:
- container_name: zookeeper_mod-search
- image: wurstmeister/zookeeper:3.4.6
- networks:
- - mod-search-local
- ports:
- - "2181:2181"
-
kafka:
container_name: kafka_mod-search
- image: wurstmeister/kafka:2.13-2.8.1
+ image: apache/kafka-native
networks:
- mod-search-local
- depends_on:
- - zookeeper
ports:
- "9092:9092"
- - "29092:29092"
+ - "9093:9093"
environment:
- KAFKA_BROKER_ID: 1
- KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:29092
- KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://localhost:29092
- KAFKA_ADVERTISED_HOST_NAME: kafka
- KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
- KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
- KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
- KAFKA_MESSAGE_MAX_BYTES: 1000000
- KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
+ # Configure listeners for both docker and host communication
+ KAFKA_LISTENERS: CONTROLLER://localhost:9091,HOST://0.0.0.0:9092,DOCKER://0.0.0.0:9093
+ KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,DOCKER://kafka:9093
+ KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,DOCKER:PLAINTEXT,HOST:PLAINTEXT
+ # Settings required for KRaft mode
+ KAFKA_NODE_ID: 1
+ KAFKA_PROCESS_ROLES: broker,controller
+ KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
+ KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9091
+ # Listener to use for broker-to-broker communication
+ KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER
+ # Required for a single node cluster
+ KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
+
kafka-ui:
container_name: kafka-ui_mod-search
- image: provectuslabs/kafka-ui:latest
+ image: ghcr.io/kafbat/kafka-ui:latest
+ networks:
+ - mod-search-local
ports:
- "8080:8080"
- depends_on:
- - zookeeper
- - kafka
environment:
+ DYNAMIC_CONFIG_ENABLED: 'true'
KAFKA_CLUSTERS_0_NAME: local
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
- KAFKA_CLUSTERS_0_ZOOKEEPER: zookeeper:2181
- KAFKA_CLUSTERS_0_JMXPORT: 9997
- networks:
- - mod-search-local
+ KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9093
+ depends_on:
+ - kafka
postgres:
container_name: postgres_mod-search
- image: postgres:12-alpine
+ image: postgres:16-alpine
networks:
- mod-search-local
ports:
@@ -136,11 +98,11 @@ services:
pgadmin:
container_name: pgadmin_mod-search
- image: dpage/pgadmin4:6.7
+ image: dpage/pgadmin4:8.13
networks:
- mod-search-local
ports:
- - ${PGADMIN_PORT}:80
+ - "5050:80"
volumes:
- "pgadmin-data:/var/lib/pgadmin"
environment:
@@ -151,6 +113,7 @@ services:
networks:
mod-search-local:
driver: bridge
+ opensearch-net:
volumes:
es-data: { }
diff --git a/docker/opensearch/Dockerfile b/docker/opensearch/Dockerfile
index 1290863f9..2374849f9 100644
--- a/docker/opensearch/Dockerfile
+++ b/docker/opensearch/Dockerfile
@@ -13,3 +13,5 @@ RUN opensearch-plugin install --batch \
analysis-smartcn \
analysis-nori \
analysis-phonetic
+
+RUN opensearch-plugin remove opensearch-security
\ No newline at end of file
diff --git a/pom.xml b/pom.xml
index ff3bf8efe..e569507a8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -290,8 +290,9 @@
-
+    <finalName>mod-search-fat</finalName>
+
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-clean-plugin</artifactId>
diff --git a/src/main/java/org/folio/search/integration/message/interceptor/PopulateInstanceBatchInterceptor.java b/src/main/java/org/folio/search/integration/message/interceptor/PopulateInstanceBatchInterceptor.java
index e369da8bb..a0454f427 100644
--- a/src/main/java/org/folio/search/integration/message/interceptor/PopulateInstanceBatchInterceptor.java
+++ b/src/main/java/org/folio/search/integration/message/interceptor/PopulateInstanceBatchInterceptor.java
@@ -119,10 +119,9 @@ private void process(String tenant, List<ResourceEvent> batch) {
repository.deleteEntities(idsToDrop);
}
- if (ResourceType.INSTANCE.getName().equals(recordCollection.getKey())) {
- var noShadowCopiesInstanceEvents = recordByOperation.values().stream().flatMap(Collection::stream).toList();
- instanceChildrenResourceService.persistChildren(tenant, noShadowCopiesInstanceEvents);
- }
+ var noShadowCopiesInstanceEvents = recordByOperation.values().stream().flatMap(Collection::stream).toList();
+ instanceChildrenResourceService.persistChildren(tenant, ResourceType.byName(recordCollection.getKey()),
+ noShadowCopiesInstanceEvents);
}
}
diff --git a/src/main/java/org/folio/search/model/entity/CallNumberEntity.java b/src/main/java/org/folio/search/model/entity/CallNumberEntity.java
new file mode 100644
index 000000000..6ca195070
--- /dev/null
+++ b/src/main/java/org/folio/search/model/entity/CallNumberEntity.java
@@ -0,0 +1,135 @@
+package org.folio.search.model.entity;
+
+import static org.apache.commons.lang3.StringUtils.truncate;
+
+import java.util.Objects;
+import lombok.Getter;
+import org.folio.search.utils.ShaUtils;
+import org.jetbrains.annotations.NotNull;
+
+@Getter
+public class CallNumberEntity implements Comparable<CallNumberEntity> {
+
+ private static final int CALL_NUMBER_MAX_LENGTH = 50;
+ private static final int CALL_NUMBER_PREFIX_MAX_LENGTH = 20;
+ private static final int CALL_NUMBER_SUFFIX_MAX_LENGTH = 25;
+ private static final int CALL_NUMBER_TYPE_MAX_LENGTH = 40;
+ private static final int VOLUME_MAX_LENGTH = 50;
+ private static final int ENUMERATION_MAX_LENGTH = 50;
+ private static final int CHRONOLOGY_MAX_LENGTH = 50;
+ private static final int COPY_NUMBER_MAX_LENGTH = 10;
+
+ private String id;
+ private String callNumber;
+ private String callNumberPrefix;
+ private String callNumberSuffix;
+ private String callNumberTypeId;
+ private String volume;
+ private String enumeration;
+ private String chronology;
+ private String copyNumber;
+
+ CallNumberEntity(String id, String callNumber, String callNumberPrefix, String callNumberSuffix,
+ String callNumberTypeId, String volume, String enumeration, String chronology, String copyNumber) {
+ this.id = id;
+ this.callNumber = callNumber;
+ this.callNumberPrefix = callNumberPrefix;
+ this.callNumberSuffix = callNumberSuffix;
+ this.callNumberTypeId = callNumberTypeId;
+ this.volume = volume;
+ this.enumeration = enumeration;
+ this.chronology = chronology;
+ this.copyNumber = copyNumber;
+ }
+
+ public static CallNumberEntityBuilder builder() {
+ return new CallNumberEntityBuilder();
+ }
+
+ @Override
+ public int hashCode() {
+ return Objects.hashCode(id);
+ }
+
+ @Override
+ public boolean equals(Object o) {
+ if (!(o instanceof CallNumberEntity that)) {
+ return false;
+ }
+ return Objects.equals(id, that.id);
+ }
+
+ @Override
+ public int compareTo(@NotNull CallNumberEntity o) {
+ return id.compareTo(o.id);
+ }
+
+ public static class CallNumberEntityBuilder {
+ private String id;
+ private String callNumber;
+ private String callNumberPrefix;
+ private String callNumberSuffix;
+ private String callNumberTypeId;
+ private String volume;
+ private String enumeration;
+ private String chronology;
+ private String copyNumber;
+
+ CallNumberEntityBuilder() { }
+
+ public CallNumberEntityBuilder id(String id) {
+ this.id = id;
+ return this;
+ }
+
+ public CallNumberEntityBuilder callNumber(String callNumber) {
+ this.callNumber = truncate(callNumber, CALL_NUMBER_MAX_LENGTH);
+ return this;
+ }
+
+ public CallNumberEntityBuilder callNumberPrefix(String callNumberPrefix) {
+ this.callNumberPrefix = truncate(callNumberPrefix, CALL_NUMBER_PREFIX_MAX_LENGTH);
+ return this;
+ }
+
+ public CallNumberEntityBuilder callNumberSuffix(String callNumberSuffix) {
+ this.callNumberSuffix = truncate(callNumberSuffix, CALL_NUMBER_SUFFIX_MAX_LENGTH);
+ return this;
+ }
+
+ public CallNumberEntityBuilder callNumberTypeId(String callNumberTypeId) {
+ this.callNumberTypeId = truncate(callNumberTypeId, CALL_NUMBER_TYPE_MAX_LENGTH);
+ return this;
+ }
+
+ public CallNumberEntityBuilder volume(String volume) {
+ this.volume = truncate(volume, VOLUME_MAX_LENGTH);
+ return this;
+ }
+
+ public CallNumberEntityBuilder enumeration(String enumeration) {
+ this.enumeration = truncate(enumeration, ENUMERATION_MAX_LENGTH);
+ return this;
+ }
+
+ public CallNumberEntityBuilder chronology(String chronology) {
+ this.chronology = truncate(chronology, CHRONOLOGY_MAX_LENGTH);
+ return this;
+ }
+
+ public CallNumberEntityBuilder copyNumber(String copyNumber) {
+ this.copyNumber = truncate(copyNumber, COPY_NUMBER_MAX_LENGTH);
+ return this;
+ }
+
+ public CallNumberEntity build() {
+ if (id == null) {
+ this.id = ShaUtils.sha(callNumber, callNumberPrefix, callNumberSuffix, callNumberTypeId,
+ volume, enumeration, chronology, copyNumber);
+ }
+ return new CallNumberEntity(this.id, this.callNumber, this.callNumberPrefix, this.callNumberSuffix,
+ this.callNumberTypeId, this.volume, this.enumeration, this.chronology, this.copyNumber);
+ }
+
+ }
+}
diff --git a/src/main/java/org/folio/search/model/entity/ChildResourceEntityBatch.java b/src/main/java/org/folio/search/model/entity/ChildResourceEntityBatch.java
new file mode 100644
index 000000000..9acb283db
--- /dev/null
+++ b/src/main/java/org/folio/search/model/entity/ChildResourceEntityBatch.java
@@ -0,0 +1,9 @@
+package org.folio.search.model.entity;
+
+import java.util.Collection;
+import java.util.Map;
+
+public record ChildResourceEntityBatch(Collection