Allows using Collection and/or UDT fields for ttl & writetime calculations #319

Merged · 3 commits · Oct 21, 2024
6 changes: 3 additions & 3 deletions README.md
@@ -146,9 +146,9 @@ spark-submit --properties-file cdm.properties \
# Things to know
- Each run (Migration or Validation) can be tracked (when enabled). You can find a summary and details of each run in the tables `cdm_run_info` and `cdm_run_details` in the target keyspace.
- CDM does not migrate `ttl` & `writetime` at the field-level (for optimization reasons). It instead finds the field with the highest `ttl` & the field with the highest `writetime` within an `origin` row and uses those values on the entire `target` row.
- CDM ignores `ttl` & `writetime` on collection and UDT fields while computing the highest value
- If a table has only collection and/or UDT non-key columns and no table-level `ttl` configuration, the target will have no `ttl`, which can lead to inconsistencies between `origin` and `target` as rows expire on `origin` due to `ttl` expiry.
- If a table has only collection and/or UDT non-key columns, the `writetime` used on target will be the time the job was run. Alternatively, if needed, the param `spark.cdm.transform.custom.writetime` can be used to set a static custom value for `writetime`.
- CDM ignores collection and UDT fields for `ttl` & `writetime` calculations by default for performance reasons. If you want to include such fields, set the `spark.cdm.schema.ttlwritetime.calc.useCollections` param to `true` (see the example snippet after this list).
- If a table has only collection and/or UDT non-key columns and no table-level `ttl` configuration, the target will have no `ttl`, which can lead to inconsistencies between `origin` and `target` as rows expire on `origin` due to `ttl` expiry. To avoid this, we recommend setting the `spark.cdm.schema.ttlwritetime.calc.useCollections` param to `true` in such scenarios.
- If a table has only collection and/or UDT non-key columns, the `writetime` used on target will be the time the job was run. To avoid this, we recommend setting the `spark.cdm.schema.ttlwritetime.calc.useCollections` param to `true` in such scenarios.
- When CDM migration (or validation with autocorrect) is run multiple times on the same table (for whatever reason), it could lead to duplicate entries in `list` type columns. Note this is [due to a Cassandra/DSE bug](https://issues.apache.org/jira/browse/CASSANDRA-11368) and not a CDM issue. It can be addressed by enabling and setting a positive value for the `spark.cdm.transform.custom.writetime.incrementBy` param, which was specifically added to address this issue.
- When you rerun a job to resume from a previous run, the run metrics (read, write, skipped, etc.) captured in table `cdm_run_info` will be only for the current run. If the previous run was killed for some reason, its run metrics may not have been saved. If the previous run did complete (not killed) but with errors, then you will have all run metrics from the previous run as well.
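For illustration only (this snippet is editorial, not part of the PR diff): a minimal `cdm.properties` sketch combining the parameters mentioned above. The values shown are placeholders, not recommendations.

```properties
# Include collection/UDT fields when computing row-level ttl & writetime (default: false)
spark.cdm.schema.ttlwritetime.calc.useCollections=true

# Optional: static custom writetime applied to target rows
# (Cassandra writetime values are microseconds since epoch; value below is illustrative)
#spark.cdm.transform.custom.writetime=1729500000000000

# Optional: positive increment added to the computed writetime; helps work around
# duplicate list entries on reruns (CASSANDRA-11368)
#spark.cdm.transform.custom.writetime.incrementBy=1
```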

3 changes: 3 additions & 0 deletions RELEASE.md
@@ -1,4 +1,7 @@
# Release Notes
## [4.6.0] - 2024-10-18
- Allow using Collections and/or UDTs for `ttl` & `writetime` calculations. This is specifically helpful in scenarios where the only non-key columns are Collections and/or UDTs.

## [4.5.1] - 2024-10-11
- Made the CDM-generated SCB unique & much shorter-lived when using the TLS option to connect to Astra more securely.

43 changes: 18 additions & 25 deletions src/main/java/com/datastax/cdm/cql/EnhancedSession.java
@@ -15,24 +15,27 @@
*/
package com.datastax.cdm.cql;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.datastax.cdm.cql.codec.CodecFactory;
import com.datastax.cdm.cql.codec.Codecset;
import com.datastax.cdm.cql.statement.*;
import com.datastax.cdm.cql.statement.OriginSelectByPKStatement;
import com.datastax.cdm.cql.statement.OriginSelectByPartitionRangeStatement;
import com.datastax.cdm.cql.statement.TargetInsertStatement;
import com.datastax.cdm.cql.statement.TargetSelectByPKStatement;
import com.datastax.cdm.cql.statement.TargetUpdateStatement;
import com.datastax.cdm.cql.statement.TargetUpsertStatement;
import com.datastax.cdm.data.PKFactory;
import com.datastax.cdm.properties.KnownProperties;
import com.datastax.cdm.properties.PropertyHelper;
import com.datastax.cdm.schema.CqlTable;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.type.DataType;
import com.datastax.oss.driver.api.core.type.codec.CodecNotFoundException;
import com.datastax.oss.driver.api.core.type.codec.TypeCodec;
import com.datastax.oss.driver.api.core.type.codec.registry.MutableCodecRegistry;
import com.datastax.oss.driver.api.core.type.reflect.GenericType;

public class EnhancedSession {
public Logger logger = LoggerFactory.getLogger(this.getClass().getName());
@@ -96,26 +99,16 @@ public TargetUpsertStatement getTargetUpsertStatement() {
}

private CqlSession initSession(PropertyHelper propertyHelper, CqlSession session) {
List<String> codecList = propertyHelper.getStringList(KnownProperties.TRANSFORM_CODECS);
if (null != codecList && !codecList.isEmpty()) {
MutableCodecRegistry registry = (MutableCodecRegistry) session.getContext().getCodecRegistry();

for (String codecString : codecList) {
Codecset codecEnum = Codecset.valueOf(codecString);
for (TypeCodec<?> codec : CodecFactory.getCodecPair(propertyHelper, codecEnum)) {
DataType dataType = codec.getCqlType();
GenericType<?> javaType = codec.getJavaType();
if (logDebug)
logger.debug("Registering Codec {} for CQL type {} and Java type {}",
codec.getClass().getSimpleName(), dataType, javaType);
try {
registry.codecFor(dataType, javaType);
} catch (CodecNotFoundException e) {
registry.register(codec);
}
}
}
}
// BIGINT_BIGINTEGER codec is always needed to compare C* writetimes in collection columns
List<String> codecList = new ArrayList<>(Arrays.asList("BIGINT_BIGINTEGER"));

if (null != propertyHelper.getStringList(KnownProperties.TRANSFORM_CODECS))
codecList.addAll(propertyHelper.getStringList(KnownProperties.TRANSFORM_CODECS));
MutableCodecRegistry registry = (MutableCodecRegistry) session.getContext().getCodecRegistry();

codecList.stream().map(Codecset::valueOf).map(codec -> CodecFactory.getCodecPair(propertyHelper, codec))
.flatMap(List::stream).forEach(registry::register);

return session;
}

70 changes: 70 additions & 0 deletions src/main/java/com/datastax/cdm/cql/codec/BIGINT_BigIntegerCodec.java
@@ -0,0 +1,70 @@
/*
* Copyright DataStax, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.datastax.cdm.cql.codec;

import java.math.BigInteger;
import java.nio.ByteBuffer;

import org.jetbrains.annotations.NotNull;

import com.datastax.cdm.properties.PropertyHelper;
import com.datastax.oss.driver.api.core.ProtocolVersion;
import com.datastax.oss.driver.api.core.type.DataType;
import com.datastax.oss.driver.api.core.type.DataTypes;
import com.datastax.oss.driver.api.core.type.codec.TypeCodecs;
import com.datastax.oss.driver.api.core.type.reflect.GenericType;

public class BIGINT_BigIntegerCodec extends AbstractBaseCodec<BigInteger> {

public BIGINT_BigIntegerCodec(PropertyHelper propertyHelper) {
super(propertyHelper);
}

@Override
public @NotNull GenericType<BigInteger> getJavaType() {
return GenericType.BIG_INTEGER;
}

@Override
public @NotNull DataType getCqlType() {
return DataTypes.BIGINT;
}

@Override
public ByteBuffer encode(BigInteger value, @NotNull ProtocolVersion protocolVersion) {
if (value == null) {
return null;
} else {
return TypeCodecs.BIGINT.encode(value.longValue(), protocolVersion);
}
}

@Override
public BigInteger decode(ByteBuffer bytes, @NotNull ProtocolVersion protocolVersion) {
return BigInteger.valueOf(TypeCodecs.BIGINT.decode(bytes, protocolVersion));
}

@Override
public @NotNull String format(BigInteger value) {
return TypeCodecs.BIGINT.format(value.longValue());
}

@Override
public BigInteger parse(String value) {
return BigInteger.valueOf(TypeCodecs.BIGINT.parse(value));
}

}
69 changes: 69 additions & 0 deletions src/main/java/com/datastax/cdm/cql/codec/BigInteger_BIGINTCodec.java
@@ -0,0 +1,69 @@
/*
* Copyright DataStax, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.datastax.cdm.cql.codec;

import java.nio.ByteBuffer;

import org.jetbrains.annotations.NotNull;

import com.datastax.cdm.properties.PropertyHelper;
import com.datastax.oss.driver.api.core.ProtocolVersion;
import com.datastax.oss.driver.api.core.type.DataType;
import com.datastax.oss.driver.api.core.type.DataTypes;
import com.datastax.oss.driver.api.core.type.codec.TypeCodecs;
import com.datastax.oss.driver.api.core.type.reflect.GenericType;

public class BigInteger_BIGINTCodec extends AbstractBaseCodec<Integer> {

public BigInteger_BIGINTCodec(PropertyHelper propertyHelper) {
super(propertyHelper);
}

@Override
public @NotNull GenericType<Integer> getJavaType() {
return GenericType.INTEGER;
}

@Override
public @NotNull DataType getCqlType() {
return DataTypes.INT;
}

@Override
public ByteBuffer encode(Integer value, @NotNull ProtocolVersion protocolVersion) {
if (value == null) {
return null;
} else {
return TypeCodecs.INT.encode(value, protocolVersion);
}
}

@Override
public Integer decode(ByteBuffer bytes, @NotNull ProtocolVersion protocolVersion) {
return TypeCodecs.INT.decode(bytes, protocolVersion);
}

@Override
public @NotNull String format(Integer value) {
return TypeCodecs.INT.format(value);
}

@Override
public Integer parse(String value) {
return TypeCodecs.INT.parse(value);
}

}
3 changes: 3 additions & 0 deletions src/main/java/com/datastax/cdm/cql/codec/CodecFactory.java
@@ -34,6 +34,9 @@ public static List<TypeCodec<?>> getCodecPair(PropertyHelper propertyHelper, Cod
return Arrays.asList(new DOUBLE_StringCodec(propertyHelper), new TEXT_DoubleCodec(propertyHelper));
case BIGINT_STRING:
return Arrays.asList(new BIGINT_StringCodec(propertyHelper), new TEXT_LongCodec(propertyHelper));
case BIGINT_BIGINTEGER:
return Arrays.asList(new BIGINT_BigIntegerCodec(propertyHelper),
new BigInteger_BIGINTCodec(propertyHelper));
case STRING_BLOB:
return Arrays.asList(new TEXT_BLOBCodec(propertyHelper), new BLOB_TEXTCodec(propertyHelper));
case ASCII_BLOB:
4 changes: 2 additions & 2 deletions src/main/java/com/datastax/cdm/cql/codec/Codecset.java
@@ -16,6 +16,6 @@
package com.datastax.cdm.cql.codec;

public enum Codecset {
INT_STRING, DOUBLE_STRING, BIGINT_STRING, DECIMAL_STRING, TIMESTAMP_STRING_MILLIS, TIMESTAMP_STRING_FORMAT,
POINT_TYPE, POLYGON_TYPE, DATE_RANGE, LINE_STRING, STRING_BLOB, ASCII_BLOB
INT_STRING, DOUBLE_STRING, BIGINT_STRING, BIGINT_BIGINTEGER, DECIMAL_STRING, TIMESTAMP_STRING_MILLIS,
TIMESTAMP_STRING_FORMAT, POINT_TYPE, POLYGON_TYPE, DATE_RANGE, LINE_STRING, STRING_BLOB, ASCII_BLOB
}
41 changes: 36 additions & 5 deletions src/main/java/com/datastax/cdm/feature/WritetimeTTL.java
@@ -15,6 +15,7 @@
*/
package com.datastax.cdm.feature;

import java.math.BigInteger;
import java.time.Instant;
import java.util.*;
import java.util.stream.Collectors;
@@ -45,6 +46,7 @@ public class WritetimeTTL extends AbstractFeature {
private Long filterMax;
private boolean hasWriteTimestampFilter;
private Long writetimeIncrement;
private boolean allowCollectionsForWritetimeTTL;

@Override
public boolean loadProperties(IPropertyHelper propertyHelper) {
@@ -61,7 +63,7 @@ public boolean loadProperties(IPropertyHelper propertyHelper) {
logger.info("PARAM -- WriteTimestampCols: {}", writetimeNames);
this.autoWritetimeNames = false;
}

allowCollectionsForWritetimeTTL = propertyHelper.getBoolean(KnownProperties.ALLOW_COLL_FOR_WRITETIME_TTL_COLS);
this.customWritetime = getCustomWritetime(propertyHelper);
if (this.customWritetime > 0) {
logger.info("PARAM -- {}: {} datetime is {} ", KnownProperties.TRANSFORM_CUSTOM_WRITETIME, customWritetime,
@@ -233,20 +235,49 @@ public Long getLargestWriteTimeStamp(Row row) {
return this.customWritetime;
if (null == this.writetimeSelectColumnIndexes || this.writetimeSelectColumnIndexes.isEmpty())
return null;
OptionalLong max = this.writetimeSelectColumnIndexes.stream().mapToLong(row::getLong).filter(Objects::nonNull)
.max();

OptionalLong max = (allowCollectionsForWritetimeTTL) ? getMaxWriteTimeStampForCollections(row)
: getMaxWriteTimeStamp(row);

return max.isPresent() ? max.getAsLong() + this.writetimeIncrement : null;
}

private OptionalLong getMaxWriteTimeStampForCollections(Row row) {
return this.writetimeSelectColumnIndexes.stream().map(col -> {
if (row.getType(col).equals(DataTypes.BIGINT))
return Arrays.asList(row.getLong(col));
return row.getList(col, BigInteger.class).stream().filter(Objects::nonNull).map(BigInteger::longValue)
.collect(Collectors.toList());
}).flatMap(List::stream).filter(Objects::nonNull).mapToLong(Long::longValue).max();
}

private OptionalLong getMaxWriteTimeStamp(Row row) {
return this.writetimeSelectColumnIndexes.stream().filter(Objects::nonNull).mapToLong(row::getLong).max();
}

public Integer getLargestTTL(Row row) {
if (logDebug)
logger.debug("getLargestTTL: customTTL={}, ttlSelectColumnIndexes={}", customTTL, ttlSelectColumnIndexes);
if (this.customTTL > 0)
return this.customTTL.intValue();
if (null == this.ttlSelectColumnIndexes || this.ttlSelectColumnIndexes.isEmpty())
return null;
OptionalInt max = this.ttlSelectColumnIndexes.stream().mapToInt(row::getInt).filter(Objects::nonNull).max();
return max.isPresent() ? max.getAsInt() : null;

OptionalInt max = (allowCollectionsForWritetimeTTL) ? getMaxTTLForCollections(row) : getMaxTTL(row);

return max.isPresent() ? max.getAsInt() : 0;
}

private OptionalInt getMaxTTLForCollections(Row row) {
return this.ttlSelectColumnIndexes.stream().map(col -> {
if (row.getType(col).equals(DataTypes.INT))
return Arrays.asList(row.getInt(col));
return row.getList(col, Integer.class).stream().filter(Objects::nonNull).collect(Collectors.toList());
}).flatMap(List::stream).filter(Objects::nonNull).mapToInt(Integer::intValue).max();
}

private OptionalInt getMaxTTL(Row row) {
return this.ttlSelectColumnIndexes.stream().filter(Objects::nonNull).mapToInt(row::getInt).max();
}

private void validateTTLColumns(CqlTable originTable) {
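Editorial note (not part of the diff): when `spark.cdm.schema.ttlwritetime.calc.useCollections` is enabled, `WRITETIME()`/`TTL()` on a non-frozen collection column comes back from the origin read as a list of per-element values rather than a single scalar. That is why the new helpers above read such columns as `List<BigInteger>` / `List<Integer>`, and why `EnhancedSession` now registers the `BIGINT_BIGINTEGER` codec unconditionally. A simplified sketch of the max-writetime logic follows; the method and variable names are illustrative, not the PR's exact code.

```java
// Illustrative sketch: `row` is a driver Row from the origin select and `cols`
// holds the indexes of the WRITETIME(...) columns; names are assumptions.
private OptionalLong maxWritetime(Row row, List<Integer> cols) {
    return cols.stream()
            .map(col -> row.getType(col).equals(DataTypes.BIGINT)
                    ? Collections.singletonList(row.getLong(col))     // plain scalar writetime
                    : row.getList(col, BigInteger.class).stream()     // per-element writetimes
                            .filter(Objects::nonNull)
                            .map(BigInteger::longValue)
                            .collect(Collectors.toList()))
            .flatMap(List::stream)
            .filter(Objects::nonNull)
            .mapToLong(Long::longValue)
            .max();
}
```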
src/main/java/com/datastax/cdm/properties/KnownProperties.java
@@ -78,6 +78,7 @@ public enum PropertyType {
public static final String ORIGIN_TTL_NAMES = "spark.cdm.schema.origin.column.ttl.names";
public static final String ORIGIN_WRITETIME_AUTO = "spark.cdm.schema.origin.column.writetime.automatic";
public static final String ORIGIN_WRITETIME_NAMES = "spark.cdm.schema.origin.column.writetime.names";
public static final String ALLOW_COLL_FOR_WRITETIME_TTL_COLS = "spark.cdm.schema.origin.column.ttlwritetime.allow.collections";

public static final String ORIGIN_COLUMN_NAMES_TO_TARGET = "spark.cdm.schema.origin.column.names.to.target";

@@ -90,6 +91,8 @@ public enum PropertyType {
types.put(ORIGIN_WRITETIME_NAMES, PropertyType.STRING_LIST);
types.put(ORIGIN_WRITETIME_AUTO, PropertyType.BOOLEAN);
defaults.put(ORIGIN_WRITETIME_AUTO, "true");
types.put(ALLOW_COLL_FOR_WRITETIME_TTL_COLS, PropertyType.BOOLEAN);
defaults.put(ALLOW_COLL_FOR_WRITETIME_TTL_COLS, "false");
types.put(ORIGIN_COLUMN_NAMES_TO_TARGET, PropertyType.STRING_LIST);
}

10 changes: 8 additions & 2 deletions src/main/java/com/datastax/cdm/schema/CqlTable.java
@@ -470,15 +470,19 @@ private void setCqlMetadata(CqlSession cqlSession) {
.filter(md -> !extractJsonExclusive || md.getName().asCql(true).endsWith(columnName))
.collect(Collectors.toCollection(() -> this.cqlAllColumns));

boolean allowCollectionsForWritetimeTTL = propertyHelper
.getBoolean(KnownProperties.ALLOW_COLL_FOR_WRITETIME_TTL_COLS);
this.writetimeTTLColumns = tableMetadata.getColumns().values().stream()
.filter(columnMetadata -> canColumnHaveTTLorWritetime(tableMetadata, columnMetadata))
.filter(columnMetadata -> canColumnHaveTTLorWritetime(tableMetadata, columnMetadata,
allowCollectionsForWritetimeTTL))
.map(ColumnMetadata::getName).map(CqlIdentifier::asInternal).collect(Collectors.toList());

this.columnNameToCqlTypeMap = this.cqlAllColumns.stream().collect(
Collectors.toMap(columnMetadata -> columnMetadata.getName().asInternal(), ColumnMetadata::getType));
}

private boolean canColumnHaveTTLorWritetime(TableMetadata tableMetadata, ColumnMetadata columnMetadata) {
private boolean canColumnHaveTTLorWritetime(TableMetadata tableMetadata, ColumnMetadata columnMetadata,
boolean allowCollectionsForWritetimeTTL) {
DataType dataType = columnMetadata.getType();
boolean isKeyColumn = tableMetadata.getPartitionKey().contains(columnMetadata)
|| tableMetadata.getClusteringColumns().containsKey(columnMetadata);
@@ -492,6 +496,8 @@ private boolean canColumnHaveTTLorWritetime(TableMetadata tableMetadata, ColumnM
// supported here?
if (CqlData.isFrozen(dataType))
return true;
if (allowCollectionsForWritetimeTTL && CqlData.isCollection(dataType))
return true;
return false;
}
