# Update self serve replication SQL to accept daily granularity as interval (#234)

## Summary

This PR adds support for daily granularity as a valid input to the `interval` parameter of the SQL API, as part of self serve replication. The following SQL is now valid and no longer throws an exception:

```
ALTER TABLE db.testTable SET POLICY (REPLICATION=({destination:'a', interval:1D}))
```

The `interval` parameter now accepts both daily and hourly inputs. Validation of the 'D' and 'H' values continues to be performed server-side, which accepts 12H and 1D/2D/3D inputs; the PR for that can be found [here](#227). An illustrative sketch of such a check appears at the end of this description.

## Changes

- [x] Client-facing API Changes
- [ ] Internal API Changes
- [ ] Bug Fixes
- [x] New Features
- [ ] Performance Improvements
- [ ] Code Style
- [ ] Refactoring
- [ ] Documentation
- [ ] Tests

For all the boxes checked, please include additional details of the changes made in this pull request.

## Testing Done

- [x] Manually tested on local docker setup. Please include commands run, and their output.
- [ ] Added new tests for the changes made.
- [x] Updated existing tests to reflect the changes made.
- [ ] No tests added or updated. Please explain why. If unsure, please feel free to ask for help.
- [ ] Some other form of testing like staging or soak time in production. Please explain.

For all the boxes checked, include a detailed description of the testing done for the changes made in this pull request.

Updated unit tests for the SQL statements and tested in a local docker setup:

```
scala> spark.sql("ALTER TABLE u_tableowner.test SET POLICY (REPLICATION=({destination:'a', interval:1D}))")
res6: org.apache.spark.sql.DataFrame = []
```

```
scala> spark.sql("ALTER TABLE u_tableowner.test SET POLICY (REPLICATION=({destination:'a', interval:12H}))")
res8: org.apache.spark.sql.DataFrame = []
```

Using anything other than `h/H` or `d/D` throws an exception:

```
scala> spark.sql("ALTER TABLE u_tableowner.test SET POLICY (REPLICATION=({destination:'a', interval:1}))")
com.linkedin.openhouse.spark.sql.catalyst.parser.extensions.OpenhouseParseException: no viable alternative at input 'interval:1'; line 1 pos 82
```

```
scala> spark.sql("ALTER TABLE u_tableowner.test SET POLICY (REPLICATION=({destination:'a', interval:1Y}))")
com.linkedin.openhouse.spark.sql.catalyst.parser.extensions.OpenhouseParseException: no viable alternative at input 'interval:1Y'; line 1 pos 82
  at com.linkedin.openhouse.spark.sql.catalyst.parser.extensions.OpenhouseParseErrorListener$.syntaxError(OpenhouseSparkSqlExtensionsParser.scala:123)
  at org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:41)
```

## Additional Information

- [ ] Breaking Changes
- [ ] Deprecations
- [ ] Large PR broken into smaller PRs, and PR plan linked in the description.

For all the boxes checked, include additional details of the changes made in this pull request.
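As noted in the summary, granularity validation happens server-side. For illustration only, here is a minimal Scala sketch of that kind of check; the object name, method name, and accepted value sets are assumptions made for this example, not the actual OpenHouse implementation:

```scala
// Hypothetical sketch of the server-side interval validation described above.
// Names and accepted value sets are illustrative assumptions, not the real code.
object ReplicationIntervalValidator {
  // Accepted granularities per this PR's description: 12H and 1D/2D/3D.
  private val allowedHours = Set(12)
  private val allowedDays = Set(1, 2, 3)

  // Matches a count followed by an hourly or daily unit, case-insensitively.
  private val intervalPattern = "(?i)(\\d+)([HD])".r

  def isValid(interval: String): Boolean = interval match {
    case intervalPattern(count, unit) =>
      unit.toUpperCase match {
        case "H" => allowedHours.contains(count.toInt)
        case "D" => allowedDays.contains(count.toInt)
      }
    case _ => false // no unit, or an unsupported one such as "1Y"
  }
}
```

Under these assumptions, `ReplicationIntervalValidator.isValid("12H")` and `isValid("2D")` would return true, while `isValid("4D")` and `isValid("1Y")` would return false; in the actual flow, inputs like `1` or `1Y` are already rejected by the parser, as the tests above show.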