Added catalog and schema parameters to execute and fetch. #90

Merged
merged 1 commit into main from catalog_schema_overrides on Apr 11, 2024

Conversation

FastLee
Contributor

@FastLee commented Apr 11, 2024

Added catalog and schema parameters to the `fetch` and `execute` methods.
Created unit and integration tests.
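
As a rough illustration of the new call shape, here is a minimal sketch. `StatementExecutionBackend` and its constructor arguments come from `databricks.labs.lsql.backends` (the module seen in the CI log below); the warehouse id, catalog, and schema names are placeholders:

```python
from databricks.sdk import WorkspaceClient
from databricks.labs.lsql.backends import StatementExecutionBackend

ws = WorkspaceClient()  # picks up auth from the environment
backend = StatementExecutionBackend(ws, "my-warehouse-id")  # placeholder warehouse id

# Execute a statement against an explicit catalog and schema
# instead of the warehouse defaults.
backend.execute(
    "CREATE TABLE IF NOT EXISTS foo (first STRING NOT NULL)",
    catalog="my_catalog",
    schema="my_schema",
)

# fetch accepts the same overrides and yields rows.
for row in backend.fetch("SELECT * FROM foo", catalog="my_catalog", schema="my_schema"):
    print(row)
```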


❌ 19/20 passed, 1 failed, 2 skipped, 15m19s total

❌ test_overwrite: databricks.sdk.errors.platform.BadRequest: [INSUFFICIENT_PERMISSIONS] Insufficient privileges: (671ms)
databricks.sdk.errors.platform.BadRequest: [INSUFFICIENT_PERMISSIONS] Insufficient privileges:
User does not have permission CREATE,USAGE on database `TEST_SCHEMA`.
17:18 DEBUG [databricks.sdk] Loaded from environment
17:18 DEBUG [databricks.sdk] Ignoring pat auth, because metadata-service is preferred
17:18 DEBUG [databricks.sdk] Ignoring basic auth, because metadata-service is preferred
17:18 DEBUG [databricks.sdk] Attempting to configure auth: metadata-service
17:18 INFO [databricks.sdk] Using Databricks Metadata Service authentication
17:18 DEBUG [databricks.labs.lsql.backends] [api][execute] CREATE TABLE IF NOT EXISTS hive_metastore.TEST_SCHEMA.foo (first STRING NOT NULL, second BOOLEAN NOT... (18 more bytes)
17:18 DEBUG [databricks.labs.lsql.core] Executing SQL statement: CREATE TABLE IF NOT EXISTS hive_metastore.TEST_SCHEMA.foo (first STRING NOT NULL, second BOOLEAN NOT NULL) USING DELTA
17:18 DEBUG [databricks.sdk] POST /api/2.0/sql/statements/
> {
>   "format": "JSON_ARRAY",
>   "statement": "CREATE TABLE IF NOT EXISTS hive_metastore.TEST_SCHEMA.foo (first STRING NOT NULL, second BOOLEAN NOT... (18 more bytes)",
>   "warehouse_id": "TEST_DEFAULT_WAREHOUSE_ID"
> }
< 200 OK
< {
<   "statement_id": "01eef827-92ef-141a-acea-cb4e27413b70",
<   "status": {
<     "error": {
<       "error_code": "BAD_REQUEST",
<       "message": "[INSUFFICIENT_PERMISSIONS] Insufficient privileges:\nUser does not have permission CREATE,USAGE o... (21 more bytes)"
<     },
<     "state": "FAILED"
<   }
< }
[gw6] linux -- Python 3.10.14 /home/runner/work/lsql/lsql/.venv/bin/python
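
The failure above is an environment issue rather than a defect in the change: the CI identity lacks the privileges named in the error on the `TEST_SCHEMA` database. A hedged sketch of the missing grants, reusing the `backend` from the sketch above (legacy Hive metastore GRANT syntax; the principal name is a placeholder):

```python
# Grant the privileges named in the error to the CI principal.
# `ci-service-principal` is a placeholder; substitute the identity
# that the integration tests actually run as.
for privilege in ("USAGE", "CREATE"):
    backend.execute(
        f"GRANT {privilege} ON DATABASE TEST_SCHEMA TO `ci-service-principal`"
    )
```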

Running from acceptance #62
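
The `POST /api/2.0/sql/statements/` calls in the log above go to the Databricks SQL Statement Execution API, which accepts optional `catalog` and `schema` fields; that is presumably where the new parameters flow through to. A minimal sketch of the same call made directly via the Python SDK (the warehouse id is the placeholder from the log):

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Same request as in the log, with explicit catalog/schema overrides.
resp = w.statement_execution.execute_statement(
    statement="SELECT * FROM foo",
    warehouse_id="TEST_DEFAULT_WAREHOUSE_ID",  # placeholder from the log
    catalog="hive_metastore",
    schema="TEST_SCHEMA",
)
print(resp.status.state)  # e.g. SUCCEEDED, or FAILED as in the log above
```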

Collaborator

@nfx left a comment

lgtm

@nfx merged commit 91978c0 into main Apr 11, 2024
9 of 10 checks passed
@nfx deleted the catalog_schema_overrides branch April 11, 2024 17:23
nfx added a commit that referenced this pull request Apr 11, 2024
* Added catalog and schema parameters to execute and fetch ([#90](#90)). The `execute` and `fetch` methods on the `SqlBackend` abstract base class now accept optional `catalog` and `schema` parameters, so SQL statements can run against a specific catalog and schema without fully qualifying every table name. The new signatures are implemented in the `SparkSqlBackend` and `DatabricksSqlBackend` classes, where the parameters control the catalog and schema used by the underlying `SparkSession` and `SqlClient` instances respectively; this is what makes the backends practical in multi-catalog and multi-schema environments. The change ships with unit and integration tests. For example, with a `SparkSqlBackend` instance `spark_backend`, you can execute a SQL statement in a specific catalog and schema with `spark_backend.execute("SELECT * FROM my_table", catalog="my_catalog", schema="my_schema")`; `fetch` accepts the same parameters, as sketched below.
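
A minimal companion sketch for the `fetch` side of the changelog's example, using the same assumed `spark_backend` instance:

```python
# fetch returns an iterator of rows; catalog/schema scope the lookup
# of unqualified table names like `my_table`.
for row in spark_backend.fetch(
    "SELECT * FROM my_table", catalog="my_catalog", schema="my_schema"
):
    print(row)
```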
@nfx mentioned this pull request Apr 11, 2024
nfx added a commit that referenced this pull request Apr 11, 2024