
Commit

update link
lixi-zhou committed Sep 2, 2023
1 parent 3061e07 commit a016a45
Showing 2 changed files with 10 additions and 7 deletions.
9 changes: 6 additions & 3 deletions DeepMapping/DeepMapping/benchmark_utils.py
@@ -1,4 +1,4 @@
-from DeepMapping import dgpe_compression, byte_dictionary_compression, delta_compression, lzo_compression, zstd_compression, uncompress, rle_compression, deepmapping
+from DeepMapping import dgpe_compression, byte_dictionary_compression, delta_compression, lzo_compression, zstd_compression, uncompress, rle_compression, deepmapping, hashtable, hashtable_with_compression
 
 
 def benchmark_handler(benchmark, bench_type='single'):
@@ -19,10 +19,13 @@ def benchmark_handler(benchmark, bench_type='single'):
             return zstd_compression.measure_latency
         elif benchmark == "rle":
             return rle_compression.measure_latency
+        elif benchmark == 'hashtable':
+            return hashtable.measure_latency
+        elif benchmark == 'hashtable_with_compression':
+            return hashtable_with_compression.measure_latency
         elif benchmark == "deepmapping":
             return deepmapping.measure_latency_any
         else:
             raise ValueError("NON-EXIST benchmark")
-    else:
-        raise ValueError("Non supported bench_type")
 
+    raise ValueError("Non supported bench_type")
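
The updated dispatcher can be exercised in isolation. The sketch below reproduces the same dispatch pattern with stub `measure_latency` functions standing in for the DeepMapping compression modules; the stub names, return values, and the dictionary-based lookup are illustrative assumptions, not the repository's actual implementation:

```python
def _stub_latency(name):
    # Each real module (zstd_compression, hashtable, ...) exposes a
    # measure_latency function; here we return a labeled stub so the
    # dispatcher can be exercised without the DeepMapping package.
    def measure_latency(*args, **kwargs):
        return f"measured {name}"
    return measure_latency

_BENCHMARKS = {
    'zstd': _stub_latency('zstd'),
    'rle': _stub_latency('rle'),
    'hashtable': _stub_latency('hashtable'),
    'hashtable_with_compression': _stub_latency('hashtable_with_compression'),
    'deepmapping': _stub_latency('deepmapping'),
}

def benchmark_handler(benchmark, bench_type='single'):
    # Mirrors the if/elif chain in the diff above: unknown benchmark
    # names and unsupported bench_type values raise the same errors.
    if bench_type == 'single':
        if benchmark not in _BENCHMARKS:
            raise ValueError("NON-EXIST benchmark")
        return _BENCHMARKS[benchmark]
    raise ValueError("Non supported bench_type")
```

Callers get back a callable, e.g. `benchmark_handler('hashtable')(...)`, which is why the diff returns the module attribute rather than invoking it.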
8 changes: 4 additions & 4 deletions README.md
@@ -27,9 +27,9 @@ Resources for SIGMOD 2024 Submission
## Dataset
-Our experiments covered synthetic datasets, low/high-correlation datasets at different scales (100MB, 1GB, 10GB), and the TPC-H and TPC-DS benchmark datasets at scale factors 1 and 10. We removed all string/continuous columns and uploaded our pre-generated datasets to [**HERE**](#FIXME):
+Our experiments covered synthetic datasets, low/high-correlation datasets at different scales (100MB, 1GB, 10GB), and the TPC-H and TPC-DS benchmark datasets at scale factors 1 and 10. We removed all string/continuous columns and uploaded our pre-generated datasets to [**HERE**](https://mega.nz/file/nNggnQzA#9Ma2v3GIrfR-3ndGNzGXsF5ZOcWtGwZKeRekUiqOnzA):
-[**DATA LINK: Uploading...**](#FIXME)
+[**DATA LINK: Here**](https://mega.nz/file/nNggnQzA#9Ma2v3GIrfR-3ndGNzGXsF5ZOcWtGwZKeRekUiqOnzA)
 After downloading it, please unzip it into the **root** folder of this GitHub repository. You will then see a **dataset** folder there.
@@ -48,9 +48,9 @@ List of datasets:
 ## Benchmark
-We provide example models for the following 2 tasks. Please go [**HERE**](#FIXME) to download them:
+We provide example models for the following 2 tasks. Please go [**HERE**](https://mega.nz/file/icxG1JaL#cuC5C4_PxQ1FsgSUmswfaXyzCaatOwx9n_b9F_-IDnU) to download them:
-[**MODEL LINK: Uploading...**](#FIXME)
+[**MODEL LINK: Here**](https://mega.nz/file/icxG1JaL#cuC5C4_PxQ1FsgSUmswfaXyzCaatOwx9n_b9F_-IDnU)
 After downloading it, please unzip it into the **root** folder of this GitHub repository. You will then see a **models** folder there.
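
The unzip step described for both the dataset and models archives can be scripted. This is a minimal sketch assuming the downloads are standard ZIP archives; the function name and the archive path argument are placeholders, not names from the repository:

```python
import zipfile

def extract_to_repo_root(archive_path, repo_root="."):
    # Unpack a downloaded archive (e.g. the dataset or models ZIP)
    # into the repository root, so that a dataset/ or models/
    # folder appears alongside the DeepMapping sources.
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(repo_root)
```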
