
MODINVSTOR-1262 Merge ecs-tlr-feature into master #1113

Merged: 26 commits merged into master on Nov 12, 2024

Commits (26)
b7f3a16
MODINVSTOR-1179 Add ecsRequestRouting field to service point schema (…
MagzhanArtykov Apr 12, 2024
0c567d2
MODINVSTOR-1219: Do not return routing service points by default (#1022)
OleksandrVidinieiev May 23, 2024
6101067
Merge remote-tracking branch 'origin/master' into ecs-tlr-feature
OleksandrVidinieiev Sep 10, 2024
f7f9680
MODINVSTOR-1245: Implement synchronization operation for service poin…
Maksat-Galymzhan Sep 30, 2024
8a16e4b
Merge branch 'master' of github.com:folio-org/mod-inventory-storage i…
roman-barannyk Oct 18, 2024
08dc519
Merge with master
alexanderkurash Oct 23, 2024
da7e00d
Fix mockConsortiumTenants after merge
alexanderkurash Oct 23, 2024
0312ba5
Disable canRequestOaiPmhViewWhenEmptyDb test
alexanderkurash Oct 23, 2024
7baaf00
Merge branch 'master' into ecs-tlr-feature
alexanderkurash Nov 5, 2024
19751a6
MODINVSTOR-1262 Re-enable disabled tests
alexanderkurash Nov 5, 2024
1a48117
MODINVSTOR-1262 Update NEWS
alexanderkurash Nov 7, 2024
8deee0d
MODINVSTOR-1262 Remove unneeded dependency
alexanderkurash Nov 8, 2024
676d236
Merge branch 'master' into ecs-tlr-feature
roman-barannyk Nov 8, 2024
3c47801
Merge branch 'ecs-tlr-feature' of github.com:folio-org/mod-inventory-…
roman-barannyk Nov 8, 2024
89c4bfe
MODINVSTOR-1262 improve test coverage, update readme
roman-barannyk Nov 9, 2024
d88aab9
MODINVSTOR-1262 improve test coverage
roman-barannyk Nov 9, 2024
87b9b1c
MODINVSTOR-1262 improve test coverage
roman-barannyk Nov 11, 2024
dd0f4ef
MODINVSTOR-1262 improve test coverage
roman-barannyk Nov 11, 2024
3cd215a
MODINVSTOR-1262 add missing permission
roman-barannyk Nov 11, 2024
1f8da1b
MODINVSTOR-1262 add test coverage
roman-barannyk Nov 11, 2024
e8822af
MODINVSTOR-1262 add test coverage
roman-barannyk Nov 11, 2024
9024851
MODINVSTOR-1262 fix code smell
roman-barannyk Nov 11, 2024
ad2a2e1
MODINVSTOR-1262 fix code smell
roman-barannyk Nov 11, 2024
68327ed
Merge branch 'master' into ecs-tlr-feature
alexanderkurash Nov 12, 2024
c7ed656
MODINVSTOR-1262 Move ECS TLR news to In progress
alexanderkurash Nov 12, 2024
6ada10a
Merge branch 'ecs-tlr-feature' of https://github.com/folio-org/mod-in…
alexanderkurash Nov 12, 2024
5 changes: 5 additions & 0 deletions NEWS.md
@@ -9,6 +9,10 @@
### Features
* Modify endpoint for bulk instances upsert with publish events flag ([MODINVSTOR-1283](https://folio-org.atlassian.net/browse/MODINVSTOR-1283))
* Change Kafka event publishing keys for holdings and items ([MODINVSTOR-1281](https://folio-org.atlassian.net/browse/MODINVSTOR-1281))
* Merge custom ECS TLR feature branch into master ([MODINVSTOR-1262](https://folio-org.atlassian.net/browse/MODINVSTOR-1262))
* Service points synchronization: implement processors ([MODINVSTOR-1246](https://folio-org.atlassian.net/browse/MODINVSTOR-1246))
* Service points synchronization: create a verticle ([MODINVSTOR-1245](https://folio-org.atlassian.net/browse/MODINVSTOR-1245))
* Do not return routing service points by default ([MODINVSTOR-1219](https://folio-org.atlassian.net/browse/MODINVSTOR-1219))

### Bug fixes
* Description ([ISSUE](https://folio-org.atlassian.net/browse/ISSUE))
@@ -52,6 +56,7 @@
### Features
* Add floating collection flag in location schema ([MODINVSTOR-1250](https://issues.folio.org/browse/MODINVSTOR-1250))
* Implement domain event production for location create/update/delete ([MODINVSTOR-1181](https://issues.folio.org/browse/MODINVSTOR-1181))
* Add a new boolean field ecsRequestRouting to the service point schema ([MODINVSTOR-1179](https://issues.folio.org/browse/MODINVSTOR-1179))
* Implement domain event production for library create/update/delete ([MODINVSTOR-1216](https://issues.folio.org/browse/MODINVSTOR-1216))
* Implement domain event production for campus create/update/delete ([MODINVSTOR-1217](https://issues.folio.org/browse/MODINVSTOR-1217))
* Implement domain event production for institution create/update/delete ([MODINVSTOR-1218](https://issues.folio.org/browse/MODINVSTOR-1218))
1 change: 1 addition & 0 deletions README.MD
@@ -186,6 +186,7 @@ These environment variables configure the module interaction with S3-compatible
* `S3_ACCESS_KEY_ID`
* `S3_SECRET_ACCESS_KEY`
* `S3_IS_AWS` (default value - `false`)
* `ECS_TLR_FEATURE_ENABLED` (default value - `false`)

# Local Deployment using Docker

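The new `ECS_TLR_FEATURE_ENABLED` variable defaults to `false`. A minimal stdlib-only sketch of how such a flag can be parsed — the helper name and the injectable accessor are illustrative, not the module's actual code:

```java
import java.util.function.UnaryOperator;

public class EcsTlrFlagDemo {
    // Hypothetical helper: looks up the flag through an injectable env accessor,
    // so tests can stub the environment without touching the real process env.
    static boolean isEnabled(UnaryOperator<String> env) {
        // Boolean.parseBoolean(null) is false, matching the documented default.
        return Boolean.parseBoolean(env.apply("ECS_TLR_FEATURE_ENABLED"));
    }

    public static void main(String[] args) {
        // Reads the real environment; false unless the variable is set to "true".
        System.out.println(isEnabled(System::getenv));
    }
}
```

Passing the accessor as a parameter is one way to keep the flag testable; the PR instead pulls in `system-stubs-junit4` (see the pom.xml change below) to stub the environment directly in tests.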
20 changes: 13 additions & 7 deletions descriptors/ModuleDescriptor-template.json
@@ -1120,28 +1120,33 @@
},
{
"id": "service-points",
"version": "3.3",
"version": "3.4",
"handlers": [
{
"methods": ["GET"],
"pathPattern": "/service-points",
"permissionsRequired": ["inventory-storage.service-points.collection.get"]
"permissionsRequired": ["inventory-storage.service-points.collection.get"],
"modulePermissions": ["user-tenants.collection.get"]
}, {
"methods": ["GET"],
"pathPattern": "/service-points/{id}",
"permissionsRequired": ["inventory-storage.service-points.item.get"]
"permissionsRequired": ["inventory-storage.service-points.item.get"],
"modulePermissions": ["user-tenants.collection.get"]
}, {
"methods": ["POST"],
"pathPattern": "/service-points",
"permissionsRequired": ["inventory-storage.service-points.item.post"]
"permissionsRequired": ["inventory-storage.service-points.item.post"],
"modulePermissions": ["user-tenants.collection.get"]
}, {
"methods": ["PUT"],
"pathPattern": "/service-points/{id}",
"permissionsRequired": ["inventory-storage.service-points.item.put"]
"permissionsRequired": ["inventory-storage.service-points.item.put"],
"modulePermissions": ["user-tenants.collection.get"]
}, {
"methods": ["DELETE"],
"pathPattern": "/service-points/{id}",
"permissionsRequired": ["inventory-storage.service-points.item.delete"]
"permissionsRequired": ["inventory-storage.service-points.item.delete"],
"modulePermissions": ["user-tenants.collection.get"]
}
]
},
@@ -2900,7 +2905,8 @@
{ "name": "S3_BUCKET", "value": "marc-migrations" },
{ "name": "S3_ACCESS_KEY_ID", "value": "" },
{ "name": "S3_SECRET_ACCESS_KEY", "value": "" },
{ "name": "S3_IS_AWS", "value": "true" }
{ "name": "S3_IS_AWS", "value": "true" },
{ "name": "ECS_TLR_FEATURE_ENABLED", "value": "false"}
Contributor: Please document it in README.
Contributor (author): updated
]
}
}
7 changes: 7 additions & 0 deletions pom.xml
@@ -37,6 +37,7 @@
<rest-assured.version>5.5.0</rest-assured.version>
<awaitility.version>4.2.2</awaitility.version>
<assertj.version>3.26.3</assertj.version>
<system-stubs-junit4.version>2.1.7</system-stubs-junit4.version>

<maven-compiler-plugin.version>3.13.0</maven-compiler-plugin.version>
<build-helper-maven-plugin.version>3.6.0</build-helper-maven-plugin.version>
@@ -273,6 +274,12 @@
<artifactId>log4j-slf4j2-impl</artifactId>
<scope>test</scope>
</dependency>
<dependency> <!-- for mocking env variable -->
<groupId>uk.org.webcompere</groupId>
<artifactId>system-stubs-junit4</artifactId>
<version>${system-stubs-junit4.version}</version>
<scope>test</scope>
</dependency>
</dependencies>

<build>
8 changes: 7 additions & 1 deletion ramls/service-point.raml
@@ -1,6 +1,6 @@
#%RAML 1.0
title: Service Points API
version: v3.3
version: v3.4
protocols: [ HTTP, HTTPS ]
baseUri: http://localhost

@@ -34,6 +34,12 @@ resourceTypes:
searchable: { description: "with valid searchable fields", example: "name=aaa"},
pageable
]
queryParameters:
includeRoutingServicePoints:
description: "Should ECS request routing service points be included in the response"
default: false
required: false
type: boolean
description: Return a list of service points
post:
description: Create a new service point
5 changes: 5 additions & 0 deletions ramls/servicepoint.json
@@ -72,6 +72,11 @@
]
}
},
"ecsRequestRouting": {
"type": "boolean",
"description": "Indicates a service point used for the ECS functionality",
"default" : false
},
"metadata": {
"type": "object",
"$ref": "raml-util/schemas/metadata.schema",
19 changes: 19 additions & 0 deletions src/main/java/org/folio/rest/impl/InitApiImpl.java
@@ -6,11 +6,13 @@
import io.vertx.core.Future;
import io.vertx.core.Handler;
import io.vertx.core.Promise;
import io.vertx.core.ThreadingModel;
import io.vertx.core.Vertx;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.folio.rest.resource.interfaces.InitAPI;
import org.folio.services.caches.ConsortiumDataCache;
import org.folio.services.consortium.ServicePointSynchronizationVerticle;
import org.folio.services.consortium.ShadowInstanceSynchronizationVerticle;
import org.folio.services.consortium.SynchronizationVerticle;
import org.folio.services.migration.async.AsyncMigrationConsumerVerticle;
@@ -25,6 +27,7 @@ public void init(Vertx vertx, Context context, Handler<AsyncResult<Boolean>> han
initAsyncMigrationVerticle(vertx)
.compose(v -> initShadowInstanceSynchronizationVerticle(vertx, getConsortiumDataCache(context)))
.compose(v -> initSynchronizationVerticle(vertx, getConsortiumDataCache(context)))
.compose(v -> initServicePointSynchronizationVerticle(vertx, getConsortiumDataCache(context)))
.map(true)
.onComplete(handler);
}
@@ -76,6 +79,22 @@ private Future<Object> initSynchronizationVerticle(Vertx vertx, ConsortiumDataCa
.mapEmpty();
}

private Future<Object> initServicePointSynchronizationVerticle(Vertx vertx,
ConsortiumDataCache consortiumDataCache) {

DeploymentOptions options = new DeploymentOptions()
.setThreadingModel(ThreadingModel.WORKER)
.setInstances(1);

return vertx.deployVerticle(() -> new ServicePointSynchronizationVerticle(consortiumDataCache),
options)
.onSuccess(v -> log.info("initServicePointSynchronizationVerticle:: "
+ "ServicePointSynchronizationVerticle verticle was successfully started"))
.onFailure(e -> log.error("initServicePointSynchronizationVerticle:: "
+ "ServicePointSynchronizationVerticle verticle was not successfully started", e))
.mapEmpty();
}

private void initConsortiumDataCache(Vertx vertx, Context context) {
ConsortiumDataCache consortiumDataCache = new ConsortiumDataCache(vertx, vertx.createHttpClient());
context.put(ConsortiumDataCache.class.getName(), consortiumDataCache);
43 changes: 27 additions & 16 deletions src/main/java/org/folio/rest/impl/ServicePointApi.java
@@ -2,6 +2,7 @@

import static io.vertx.core.Future.succeededFuture;
import static java.lang.Boolean.TRUE;
import static org.apache.commons.lang3.StringUtils.isNotBlank;
import static org.folio.rest.support.EndpointFailureHandler.handleFailure;

import io.vertx.core.AsyncResult;
@@ -31,15 +32,18 @@ public class ServicePointApi implements org.folio.rest.jaxrs.resource.ServicePoi
"Hold shelf expiry period must be specified when service point can be used for pickup.";
public static final String SERVICE_POINT_CREATE_ERR_MSG_WITHOUT_BEING_PICKUP_LOC =
"Hold shelf expiry period cannot be specified when service point cannot be used for pickup";
private static final String ECS_ROUTING_QUERY_FILTER = "cql.allRecords=1 NOT ecsRequestRouting=true";
private static final Logger logger = LogManager.getLogger();

@Validate
@Override
public void getServicePoints(String query, String totalRecords, int offset, int limit,
public void getServicePoints(boolean includeRoutingServicePoints, String query,
String totalRecords, int offset, int limit,
Map<String, String> okapiHeaders,
Handler<AsyncResult<Response>> asyncResultHandler,
Context vertxContext) {

query = updateGetServicePointsQuery(query, includeRoutingServicePoints);
PgUtil.get(SERVICE_POINT_TABLE, Servicepoint.class, Servicepoints.class,
query, offset, limit, okapiHeaders, vertxContext, GetServicePointsResponse.class, asyncResultHandler);
}
@@ -67,11 +71,11 @@ public void postServicePoints(Servicepoint entity,
id = UUID.randomUUID().toString();
entity.setId(id);
}
String tenantId = getTenant(okapiHeaders);
PostgresClient pgClient = getPgClient(vertxContext, tenantId);
pgClient.save(SERVICE_POINT_TABLE, id, entity, saveReply -> {
if (saveReply.failed()) {
String message = logAndSaveError(saveReply.cause());
new ServicePointService(vertxContext, okapiHeaders)
.createServicePoint(id, entity)
.onSuccess(response -> asyncResultHandler.handle(succeededFuture(response)))
.onFailure(throwable -> {
String message = logAndSaveError(throwable);
if (isDuplicate(message)) {
asyncResultHandler.handle(Future.succeededFuture(
PostServicePointsResponse.respond422WithApplicationJson(
@@ -82,15 +86,7 @@
PostServicePointsResponse.respond500WithTextPlain(
getErrorResponse(message))));
}
} else {
String ret = saveReply.result();
entity.setId(ret);
asyncResultHandler.handle(Future.succeededFuture(
PostServicePointsResponse
.respond201WithApplicationJson(entity,
PostServicePointsResponse.headersFor201().withLocation(LOCATION_PREFIX + ret))));
}
});
});
} catch (Exception e) {
String message = logAndSaveError(e);
asyncResultHandler.handle(Future.succeededFuture(
@@ -244,7 +240,7 @@ private boolean isDuplicate(String errorMessage) {
"duplicate key value violates unique constraint");
}

private String validateServicePoint(Servicepoint svcpt) {
public static String validateServicePoint(Servicepoint svcpt) {

HoldShelfExpiryPeriod holdShelfExpiryPeriod = svcpt.getHoldShelfExpiryPeriod();
boolean pickupLocation = svcpt.getPickupLocation() == null ? Boolean.FALSE : svcpt.getPickupLocation();
@@ -261,4 +257,19 @@ private Future<Boolean> checkServicePointInUse() {
return Future.succeededFuture(false);
}

private static String updateGetServicePointsQuery(String query, boolean includeRoutingServicePoints) {
if (includeRoutingServicePoints) {
return query;
}

logger.debug("updateGetServicePointsQuery:: original query: {}", query);
String newQuery = ECS_ROUTING_QUERY_FILTER;
if (isNotBlank(query)) {
newQuery += " and " + query;
}
logger.debug("updateGetServicePointsQuery:: updated query: {}", newQuery);

return newQuery;
}

}
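The net effect of the new routing filter can be exercised with a small stdlib-only sketch that mirrors the private `updateGetServicePointsQuery` helper above (the class and method names here are illustrative):

```java
public class RoutingFilterDemo {
    static final String ECS_ROUTING_QUERY_FILTER =
        "cql.allRecords=1 NOT ecsRequestRouting=true";

    // Mirrors updateGetServicePointsQuery: unless the caller explicitly opts in,
    // ECS request-routing service points are excluded from the result set.
    static String updateQuery(String query, boolean includeRoutingServicePoints) {
        if (includeRoutingServicePoints) {
            return query; // caller asked for routing service points too
        }
        String newQuery = ECS_ROUTING_QUERY_FILTER;
        if (query != null && !query.isBlank()) {
            newQuery += " and " + query; // AND the caller's CQL onto the exclusion filter
        }
        return newQuery;
    }

    public static void main(String[] args) {
        System.out.println(updateQuery(null, false));
        System.out.println(updateQuery("name=circ*", false));
        System.out.println(updateQuery("name=circ*", true));
    }
}
```

So a plain `GET /service-points` is rewritten to exclude routing service points, a user-supplied CQL query is ANDed onto the exclusion filter, and `includeRoutingServicePoints=true` leaves the query untouched.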
src/main/java/org/folio/services/consortium/ServicePointSynchronizationVerticle.java (new file)
@@ -0,0 +1,111 @@
package org.folio.services.consortium;

import static org.folio.rest.tools.utils.ModuleName.getModuleName;
import static org.folio.rest.tools.utils.ModuleName.getModuleVersion;
import static org.folio.services.domainevent.ServicePointEventType.SERVICE_POINT_CREATED;
import static org.folio.services.domainevent.ServicePointEventType.SERVICE_POINT_DELETED;
import static org.folio.services.domainevent.ServicePointEventType.SERVICE_POINT_UPDATED;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;
import io.vertx.core.Promise;
import io.vertx.core.http.HttpClient;
import java.util.ArrayList;
import java.util.List;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.folio.kafka.AsyncRecordHandler;
import org.folio.kafka.GlobalLoadSensor;
import org.folio.kafka.KafkaConfig;
import org.folio.kafka.KafkaConsumerWrapper;
import org.folio.kafka.SubscriptionDefinition;
import org.folio.kafka.services.KafkaEnvironmentProperties;
import org.folio.kafka.services.KafkaTopic;
import org.folio.services.caches.ConsortiumDataCache;
import org.folio.services.consortium.handler.ServicePointSynchronizationCreateHandler;
import org.folio.services.consortium.handler.ServicePointSynchronizationDeleteHandler;
import org.folio.services.consortium.handler.ServicePointSynchronizationUpdateHandler;
import org.folio.services.domainevent.ServicePointEventType;

public class ServicePointSynchronizationVerticle extends AbstractVerticle {
Contributor: There is already an existing SynchronizationVerticle that it would be nice to utilize instead of creating an additional one.
Contributor (author): Created this tech-debt ticket for the refactoring: https://folio-org.atlassian.net/browse/MODINVSTOR-1285
private static final Logger log = LogManager.getLogger(ServicePointSynchronizationVerticle.class);
private static final String TENANT_PATTERN = "\\w{1,}";
private static final String MODULE_ID = getModuleId();
private static final int DEFAULT_LOAD_LIMIT = 5;
private final ConsortiumDataCache consortiumDataCache;

private final List<KafkaConsumerWrapper<String, String>> consumers = new ArrayList<>();

public ServicePointSynchronizationVerticle(final ConsortiumDataCache consortiumDataCache) {
this.consortiumDataCache = consortiumDataCache;
}

@Override
public void start(Promise<Void> startPromise) throws Exception {
var httpClient = vertx.createHttpClient();

createConsumers(httpClient)
.onSuccess(v -> log.info("start:: verticle started"))
.onFailure(t -> log.error("start:: verticle start failed", t))
.onComplete(startPromise);
}

private Future<Void> createConsumers(HttpClient httpClient) {
final var config = getKafkaConfig();

return createEventConsumer(SERVICE_POINT_CREATED, config,
new ServicePointSynchronizationCreateHandler(consortiumDataCache, httpClient, vertx))
.compose(r -> createEventConsumer(SERVICE_POINT_UPDATED, config,
new ServicePointSynchronizationUpdateHandler(consortiumDataCache, httpClient, vertx)))
.compose(r -> createEventConsumer(SERVICE_POINT_DELETED, config,
new ServicePointSynchronizationDeleteHandler(consortiumDataCache, httpClient, vertx)))
.mapEmpty();
}

private Future<KafkaConsumerWrapper<String, String>> createEventConsumer(
ServicePointEventType eventType, KafkaConfig kafkaConfig,
AsyncRecordHandler<String, String> handler) {

var subscriptionDefinition = SubscriptionDefinition.builder()
.eventType(eventType.name())
.subscriptionPattern(buildSubscriptionPattern(eventType.getKafkaTopic(), kafkaConfig))
.build();

return createConsumer(kafkaConfig, subscriptionDefinition, handler);
}

private Future<KafkaConsumerWrapper<String, String>> createConsumer(KafkaConfig kafkaConfig,
SubscriptionDefinition subscriptionDefinition,
AsyncRecordHandler<String, String> recordHandler) {

var consumer = KafkaConsumerWrapper.<String, String>builder()
.context(context)
.vertx(vertx)
.kafkaConfig(kafkaConfig)
.loadLimit(DEFAULT_LOAD_LIMIT)
.globalLoadSensor(new GlobalLoadSensor())
.subscriptionDefinition(subscriptionDefinition)
.build();

return consumer.start(recordHandler, MODULE_ID)
.onSuccess(v -> consumers.add(consumer))
.map(consumer);
}

private static String buildSubscriptionPattern(KafkaTopic kafkaTopic, KafkaConfig kafkaConfig) {
return kafkaTopic.fullTopicName(kafkaConfig, TENANT_PATTERN);
}

private static String getModuleId() {
return getModuleName().replace("_", "-") + "-" + getModuleVersion();
}

private KafkaConfig getKafkaConfig() {
return KafkaConfig.builder()
.envId(KafkaEnvironmentProperties.environment())
.kafkaHost(KafkaEnvironmentProperties.host())
.kafkaPort(KafkaEnvironmentProperties.port())
.build();
}
}
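The verticle subscribes one consumer per event type across all tenants by turning the tenant segment of the topic name into the `\w{1,}` regex. A stdlib-only sketch of that matching — the `env.tenant.topic` name shape is an assumption here; the real format comes from folio-kafka-wrapper's `KafkaTopic.fullTopicName` and may differ:

```java
public class SubscriptionPatternDemo {
    static final String TENANT_PATTERN = "\\w{1,}";

    // Assumed topic-name shape (env.tenant.topic) for illustration only.
    static String fullTopicName(String env, String tenant, String topic) {
        return String.join(".", env, tenant, topic);
    }

    public static void main(String[] args) {
        // Substituting the regex for the tenant yields a subscription pattern
        // that matches the same topic in every tenant's namespace.
        String pattern = fullTopicName("folio", TENANT_PATTERN, "inventory.service-point");
        System.out.println("folio.diku.inventory.service-point".matches(pattern));  // a tenant topic matches
        System.out.println("folio..inventory.service-point".matches(pattern));      // an empty tenant does not
    }
}
```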