From 9032c9ddffd6b260265a733ae53d65c0c0157bc4 Mon Sep 17 00:00:00 2001
From: Serge Smertin <259697+nfx@users.noreply.github.com>
Date: Wed, 8 May 2024 11:19:24 +0200
Subject: [PATCH] Release v0.4.3 (#101)

* Bump actions/checkout from 4.1.2 to 4.1.3 ([#97](https://github.com/databrickslabs/lsql/issues/97)). The `actions/checkout` dependency has been updated from version 4.1.2 to 4.1.3 in the `update-main-version.yml` workflow. The new version checks the git version before attempting to disable `sparse-checkout` and adds an SSH user parameter, improving functionality and compatibility. The upstream release notes and CHANGELOG.md describe the changes in full.
* Maintain PySpark compatibility for databricks.labs.lsql.core.Row ([#99](https://github.com/databrickslabs/lsql/issues/99)). The `Row` class in the `databricks.labs.lsql.core` module gains a new `asDict` method for compatibility with PySpark (a minimal sketch follows the patch below). It returns a dictionary representation of the `Row`, mapping column names to their values, and simply delegates to the existing `as_dict` method, so the two behave identically. Its optional `recursive` argument, `False` by default, is intended to enable recursive conversion of nested `Row` objects to nested dictionaries, but that behavior is not currently implemented. Additionally, the `fetch` function in `backends.py` now returns `pyspark.sql` `Row` objects when using `self._spark.sql(sql).collect()`; this change is temporary, marked with a `TODO` comment, and accompanied by new error handling to ensure the function operates as expected.

Dependency updates:

 * Bump actions/checkout from 4.1.2 to 4.1.3 ([#97](https://github.com/databrickslabs/lsql/pull/97)).
---
 CHANGELOG.md                          | 9 +++++++++
 src/databricks/labs/lsql/__about__.py | 2 +-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8cdbe48b..68f119b9 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,14 @@
 # Version changelog
 
+## 0.4.3
+
+* Bump actions/checkout from 4.1.2 to 4.1.3 ([#97](https://github.com/databrickslabs/lsql/issues/97)). The `actions/checkout` dependency has been updated from version 4.1.2 to 4.1.3 in the `update-main-version.yml` workflow. The new version checks the git version before attempting to disable `sparse-checkout` and adds an SSH user parameter, improving functionality and compatibility. The upstream release notes and CHANGELOG.md describe the changes in full.
+* Maintain PySpark compatibility for databricks.labs.lsql.core.Row ([#99](https://github.com/databrickslabs/lsql/issues/99)). The `Row` class in the `databricks.labs.lsql.core` module gains a new `asDict` method for compatibility with PySpark. It returns a dictionary representation of the `Row`, mapping column names to their values, and simply delegates to the existing `as_dict` method, so the two behave identically. Its optional `recursive` argument, `False` by default, is intended to enable recursive conversion of nested `Row` objects to nested dictionaries, but that behavior is not currently implemented. Additionally, the `fetch` function in `backends.py` now returns `pyspark.sql` `Row` objects when using `self._spark.sql(sql).collect()`; this change is temporary, marked with a `TODO` comment, and accompanied by new error handling to ensure the function operates as expected.
+
+Dependency updates:
+
+ * Bump actions/checkout from 4.1.2 to 4.1.3 ([#97](https://github.com/databrickslabs/lsql/pull/97)).
+
 ## 0.4.2
 
 * Added more `NotFound` error type ([#94](https://github.com/databrickslabs/lsql/issues/94)). In the latest update, the `core.py` file in the `databricks/labs/lsql` package has undergone enhancements to the error handling functionality. The `_raise_if_needed` function has been modified to raise a `NotFound` error when the error message includes the phrase "does not exist". This update enables the system to categorize specific SQL query errors as `NotFound` error messages, thereby improving the overall error handling and reporting capabilities. This change was a collaborative effort, as indicated by the co-authored-by statement in the commit.
diff --git a/src/databricks/labs/lsql/__about__.py b/src/databricks/labs/lsql/__about__.py
index df124332..f6b7e267 100644
--- a/src/databricks/labs/lsql/__about__.py
+++ b/src/databricks/labs/lsql/__about__.py
@@ -1 +1 @@
-__version__ = "0.4.2"
+__version__ = "0.4.3"
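
For readers unfamiliar with the compatibility shim described in [#99](https://github.com/databrickslabs/lsql/issues/99), the idea is to expose the snake_case accessor under PySpark's camelCase name. The following is a minimal sketch only, not the library's actual implementation; the simplified `Row` below is hypothetical and stands in for the richer `databricks.labs.lsql.core.Row`:

```python
from typing import Any


class Row(tuple):
    """Simplified, illustrative stand-in for databricks.labs.lsql.core.Row."""

    def __new__(cls, **kwargs: Any) -> "Row":
        row = super().__new__(cls, tuple(kwargs.values()))
        row.__columns__ = list(kwargs.keys())  # remember column names
        return row

    def as_dict(self) -> dict[str, Any]:
        # Map each column name to its value, as described in the notes above.
        return dict(zip(self.__columns__, self))

    def asDict(self, recursive: bool = False) -> dict[str, Any]:
        # PySpark-compatible alias: delegates to as_dict(). The `recursive`
        # flag exists for signature parity with pyspark.sql.Row.asDict();
        # per the release notes, recursive conversion is not implemented,
        # so this sketch simply ignores the flag.
        _ = recursive
        return self.as_dict()


# Code written against pyspark.sql.Row keeps working unchanged:
row = Row(catalog="main", database="default")
assert row.asDict() == {"catalog": "main", "database": "default"}
assert row.asDict() == row.as_dict()
```

Whether `asDict` should reject `recursive=True` or silently ignore it is a design choice the release notes leave open; since they only state that recursive conversion is unimplemented, the sketch above ignores the flag rather than raising.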