Added nightly tests run at 4:45am UTC #318

Merged: 3 commits, Nov 15, 2024

Conversation

@nfx (Collaborator) commented Nov 14, 2024

No description provided.

github-actions bot commented Nov 14, 2024

❌ 33/36 passed, 3 failed, 4 skipped, 9m49s total

❌ test_runtime_backend_errors_handled[\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors import NotFound\nbackend = RuntimeBackend()\ntry:\n backend.execute("SELECT * FROM TEST_SCHEMA.__RANDOM__")\n return "FAILED"\nexcept NotFound as e:\n return "PASSED"\n]: assert '{"ts": "2024...]}}\n"PASSED"' == 'PASSED' (1m22.71s)
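
For readability, the payload embedded in this test id is the snippet below, which the test ships to the cluster for remote execution. A top-level `return` is allowed there because the executor wraps the snippet in a function and JSON-encodes the returned value; it is wrapped explicitly here so the sketch is valid stand-alone Python (TEST_SCHEMA is the log anonymizer's placeholder for the real schema name):

    # Reconstructed from the parametrized test id above.
    from databricks.labs.lsql.backends import RuntimeBackend
    from databricks.sdk.errors import NotFound

    def snippet() -> str:
        backend = RuntimeBackend()
        try:
            # Selecting from a table that never exists must not succeed.
            backend.execute("SELECT * FROM TEST_SCHEMA.__RANDOM__")
            return "FAILED"
        except NotFound:
            # The missing table is expected to surface as NotFound.
            return "PASSED"
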
... (skipped 75100 bytes)
TER_ID
< 200 OK
< {
<   "autotermination_minutes": 60,
<   "CLOUD_ENV_attributes": {
<     "availability": "SPOT_WITH_FALLBACK_AZURE",
<     "first_on_demand": 2147483647,
<     "spot_bid_max_price": -1.0
<   },
<   "cluster_id": "DATABRICKS_CLUSTER_ID",
<   "cluster_name": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<   "cluster_source": "UI",
<   "creator_user_name": "[email protected]",
<   "custom_tags": {
<     "ResourceClass": "SingleNode"
<   },
<   "data_security_mode": "SINGLE_USER",
<   "TEST_SCHEMA_tags": {
<     "Budget": "opex.sales.labs",
<     "ClusterId": "DATABRICKS_CLUSTER_ID",
<     "ClusterName": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<     "Creator": "[email protected]",
<     "DatabricksInstanceGroupId": "-8854613105865987054",
<     "DatabricksInstancePoolCreatorId": "4183391249163402",
<     "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID",
<     "Owner": "[email protected]",
<     "Vendor": "Databricks"
<   },
<   "disk_spec": {},
<   "driver_healthy": true,
<   "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID",
<   "driver_instance_source": {
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID"
<   },
<   "driver_node_type_id": "Standard_D8as_v4",
<   "effective_spark_version": "16.0.x-scala2.12",
<   "enable_elastic_disk": true,
<   "enable_local_disk_encryption": false,
<   "init_scripts_safe_mode": false,
<   "instance_pool_id": "TEST_INSTANCE_POOL_ID",
<   "instance_source": {
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID"
<   },
<   "last_activity_time": 1731666293600,
<   "last_restarted_time": 1731678276793,
<   "last_state_loss_time": 1731666236300,
<   "node_type_id": "Standard_D8as_v4",
<   "num_workers": 0,
<   "pinned_by_user_name": "4183391249163402",
<   "single_user_name": "4106dc97-a963-48f0-a079-a578238959a6",
<   "spark_conf": {
<     "spark.databricks.cluster.profile": "singleNode",
<     "spark.master": "local[*]"
<   },
<   "spark_context_id": 5394234943045964788,
<   "spark_version": "16.0.x-scala2.12",
<   "spec": {
<     "autotermination_minutes": 60,
<     "cluster_name": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<     "custom_tags": {
<       "ResourceClass": "SingleNode"
<     },
<     "data_security_mode": "SINGLE_USER",
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID",
<     "num_workers": 0,
<     "single_user_name": "4106dc97-a963-48f0-a079-a578238959a6",
<     "spark_conf": {
<       "spark.databricks.cluster.profile": "singleNode",
<       "spark.master": "local[*]"
<     },
<     "spark_version": "16.0.x-scala2.12"
<   },
<   "start_time": 1731598210709,
<   "state": "PENDING",
<   "state_message": "Starting Spark"
< }
13:44 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID: (State.PENDING) Starting Spark (sleeping ~6s)
13:45 DEBUG [databricks.sdk] GET /api/2.1/clusters/get?cluster_id=DATABRICKS_CLUSTER_ID
< 200 OK
< ... (identical PENDING response body omitted)
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID: (State.PENDING) Starting Spark (sleeping ~7s)
13:45 DEBUG [databricks.sdk] GET /api/2.1/clusters/get?cluster_id=DATABRICKS_CLUSTER_ID
< 200 OK
< ... (identical PENDING response body omitted)
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID: (State.PENDING) Starting Spark (sleeping ~8s)
13:45 DEBUG [databricks.sdk] GET /api/2.1/clusters/get?cluster_id=DATABRICKS_CLUSTER_ID
< 200 OK
< ... (identical PENDING response body omitted)
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID: (State.PENDING) Starting Spark (sleeping ~9s)
13:45 DEBUG [databricks.sdk] GET /api/2.1/clusters/get?cluster_id=DATABRICKS_CLUSTER_ID
< 200 OK
< {
<   "autotermination_minutes": 60,
<   "CLOUD_ENV_attributes": {
<     "availability": "SPOT_WITH_FALLBACK_AZURE",
<     "first_on_demand": 2147483647,
<     "spot_bid_max_price": -1.0
<   },
<   "cluster_cores": 8.0,
<   "cluster_id": "DATABRICKS_CLUSTER_ID",
<   "cluster_memory_mb": 32768,
<   "cluster_name": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<   "cluster_source": "UI",
<   "creator_user_name": "[email protected]",
<   "custom_tags": {
<     "ResourceClass": "SingleNode"
<   },
<   "data_security_mode": "SINGLE_USER",
<   "TEST_SCHEMA_tags": {
<     "Budget": "opex.sales.labs",
<     "ClusterId": "DATABRICKS_CLUSTER_ID",
<     "ClusterName": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<     "Creator": "[email protected]",
<     "DatabricksInstanceGroupId": "-8854613105865987054",
<     "DatabricksInstancePoolCreatorId": "4183391249163402",
<     "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID",
<     "Owner": "[email protected]",
<     "Vendor": "Databricks"
<   },
<   "disk_spec": {},
<   "driver": {
<     "host_private_ip": "10.179.8.14",
<     "instance_id": "f335b24df03e466b8efb19a708cf7d9c",
<     "node_attributes": {
<       "is_spot": false
<     },
<     "node_id": "b993377df44e4a408921281be8db0393",
<     "private_ip": "10.179.10.14",
<     "public_dns": "",
<     "start_timestamp": 1731678282507
<   },
<   "driver_healthy": true,
<   "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID",
<   "driver_instance_source": {
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID"
<   },
<   "driver_node_type_id": "Standard_D8as_v4",
<   "effective_spark_version": "16.0.x-scala2.12",
<   "enable_elastic_disk": true,
<   "enable_local_disk_encryption": false,
<   "init_scripts_safe_mode": false,
<   "instance_pool_id": "TEST_INSTANCE_POOL_ID",
<   "instance_source": {
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID"
<   },
<   "jdbc_port": 10000,
<   "last_activity_time": 1731678301803,
<   "last_restarted_time": 1731678323374,
<   "last_state_loss_time": 1731678323349,
<   "node_type_id": "Standard_D8as_v4",
<   "num_workers": 0,
<   "pinned_by_user_name": "4183391249163402",
<   "single_user_name": "4106dc97-a963-48f0-a079-a578238959a6",
<   "spark_conf": {
<     "spark.databricks.cluster.profile": "singleNode",
<     "spark.master": "local[*]"
<   },
<   "spark_context_id": 7133597207159756379,
<   "spark_version": "16.0.x-scala2.12",
<   "spec": {
<     "autotermination_minutes": 60,
<     "cluster_name": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<     "custom_tags": {
<       "ResourceClass": "SingleNode"
<     },
<     "data_security_mode": "SINGLE_USER",
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID",
<     "num_workers": 0,
<     "single_user_name": "4106dc97-a963-48f0-a079-a578238959a6",
<     "spark_conf": {
<       "spark.databricks.cluster.profile": "singleNode",
<       "spark.master": "local[*]"
<     },
<     "spark_version": "16.0.x-scala2.12"
<   },
<   "start_time": 1731598210709,
<   "state": "RUNNING",
<   "state_message": ""
< }
13:45 DEBUG [databricks.sdk] POST /api/1.2/contexts/create
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "1022564843999260532"
< }
13:45 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=1022564843999260532
< 200 OK
< {
<   "id": "1022564843999260532",
<   "status": "Pending"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=1022564843999260532: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~1s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=1022564843999260532
< 200 OK
< {
<   "id": "1022564843999260532",
<   "status": "Pending"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=1022564843999260532: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~2s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=1022564843999260532
< 200 OK
< {
<   "id": "1022564843999260532",
<   "status": "Pending"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=1022564843999260532: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~3s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=1022564843999260532
< 200 OK
< {
<   "id": "1022564843999260532",
<   "status": "Running"
< }
13:45 DEBUG [databricks.sdk] POST /api/1.2/commands/execute
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "command": "get_ipython().run_line_magic('pip', 'install /Workspace/Users/4106dc97-a963-48f0-a079-a578238959... (110 more bytes)",
>   "contextId": "1022564843999260532",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "19305b61ec5749cea7e5ed3deea1c143"
< }
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=19305b61ec5749cea7e5ed3deea1c143&contextId=1022564843999260532
< 200 OK
< {
<   "id": "19305b61ec5749cea7e5ed3deea1c143",
<   "results": null,
<   "status": "Running"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=19305b61ec5749cea7e5ed3deea1c143, context_id=1022564843999260532: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~1s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=19305b61ec5749cea7e5ed3deea1c143&contextId=1022564843999260532
< 200 OK
< {
<   "id": "19305b61ec5749cea7e5ed3deea1c143",
<   "results": null,
<   "status": "Running"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=19305b61ec5749cea7e5ed3deea1c143, context_id=1022564843999260532: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~2s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=19305b61ec5749cea7e5ed3deea1c143&contextId=1022564843999260532
< 200 OK
< {
<   "id": "19305b61ec5749cea7e5ed3deea1c143",
<   "results": null,
<   "status": "Running"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=19305b61ec5749cea7e5ed3deea1c143, context_id=1022564843999260532: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~3s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=19305b61ec5749cea7e5ed3deea1c143&contextId=1022564843999260532
< 200 OK
< {
<   "id": "19305b61ec5749cea7e5ed3deea1c143",
<   "results": {
<     "data": "Processing /Workspace/Users/4106dc97-a963-48f0-a079-a578238959a6/.x8g1/wheels/databricks_labs_ls... (4246 more bytes)",
<     "resultType": "text"
<   },
<   "status": "Finished"
< }
13:45 DEBUG [databricks.sdk] POST /api/1.2/commands/execute
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "command": "import json\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors ... (189 more bytes)",
>   "contextId": "1022564843999260532",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "6203d28ad9134bd0970e2272fbb7c949"
< }
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=6203d28ad9134bd0970e2272fbb7c949&contextId=1022564843999260532
< 200 OK
< {
<   "id": "6203d28ad9134bd0970e2272fbb7c949",
<   "results": null,
<   "status": "Running"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=6203d28ad9134bd0970e2272fbb7c949, context_id=1022564843999260532: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~1s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=6203d28ad9134bd0970e2272fbb7c949&contextId=1022564843999260532
< 200 OK
< {
<   "id": "6203d28ad9134bd0970e2272fbb7c949",
<   "results": null,
<   "status": "Running"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=6203d28ad9134bd0970e2272fbb7c949, context_id=1022564843999260532: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~2s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=6203d28ad9134bd0970e2272fbb7c949&contextId=1022564843999260532
< 200 OK
< {
<   "id": "6203d28ad9134bd0970e2272fbb7c949",
<   "results": {
<     "data": "{\"ts\": \"2024-11-15 13:45:54,826\", \"level\": \"ERROR\", \"logger\": \"SQLQueryContextLogger\", \"msg\": \"[... (13306 more bytes)",
<     "resultType": "text"
<   },
<   "status": "Finished"
< }
13:45 WARNING [databricks.sdk] cannot parse converted return statement. Just returning text
Traceback (most recent call last):
  File "/home/runner/work/lsql/lsql/.venv/lib/python3.10/site-packages/databricks/labs/blueprint/commands.py", line 123, in run
    return json.loads(results.data)
  File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/json/__init__.py", line 346, in loads
    return _TEST_SCHEMA_decoder.decode(s)
  File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 13394)
[gw0] linux -- Python 3.10.15 /home/runner/work/lsql/lsql/.venv/bin/python
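
All three failures share the same shape: the remote snippet does return the expected "PASSED", but an ERROR record from SQLQueryContextLogger is written to the same output stream first, so `results.data` is a JSON-formatted log line followed by the JSON-encoded return value. `json.loads` in blueprint's commands.py parses the log line as a complete JSON document and raises "Extra data" at line 2 column 1, the start of the return value; the warning above shows the decode error is swallowed and the raw text returned, which is why the assertion compares the whole log-plus-"PASSED" string against 'PASSED' and fails. (`_TEST_SCHEMA_decoder` in the traceback is the log anonymizer's rewrite of CPython's `_default_decoder`.) A minimal sketch of a tolerant parse, for illustration only and not the blueprint library's actual fix:

    import json

    def parse_command_result(data: str) -> object:
        # Hypothetical helper: command output may carry JSON-formatted log
        # records ahead of the JSON-encoded return value, which makes a
        # plain json.loads(data) raise "Extra data". The return value is
        # the last non-empty line, so decode just that.
        last_line = data.strip().splitlines()[-1]
        return json.loads(last_line)

    # parse_command_result('{"ts": "...", "level": "ERROR"}\n"PASSED"')
    # returns 'PASSED' instead of raising JSONDecodeError.
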
❌ test_runtime_backend_errors_handled[\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors import NotFound\nbackend = RuntimeBackend()\ntry:\n query_response = backend.fetch("SELECT * FROM TEST_SCHEMA.__RANDOM__")\n return "FAILED"\nexcept NotFound as e:\n return "PASSED"\n]: assert '{"ts": "2024...]}}\n"PASSED"' == 'PASSED' (1m22.738s)
... (skipped 75100 bytes)
TER_ID
< 200 OK
< ... (identical to the PENDING response body shown in the first failure's log; omitted)
13:44 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID: (State.PENDING) Starting Spark (sleeping ~6s)
13:45 DEBUG [databricks.sdk] GET /api/2.1/clusters/get?cluster_id=DATABRICKS_CLUSTER_ID
< 200 OK
< ... (identical PENDING response body omitted)
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID: (State.PENDING) Starting Spark (sleeping ~7s)
13:45 DEBUG [databricks.sdk] GET /api/2.1/clusters/get?cluster_id=DATABRICKS_CLUSTER_ID
< 200 OK
< ... (identical PENDING response body omitted)
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID: (State.PENDING) Starting Spark (sleeping ~8s)
13:45 DEBUG [databricks.sdk] GET /api/2.1/clusters/get?cluster_id=DATABRICKS_CLUSTER_ID
< 200 OK
< ... (identical PENDING response body omitted)
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID: (State.PENDING) Starting Spark (sleeping ~9s)
13:45 DEBUG [databricks.sdk] GET /api/2.1/clusters/get?cluster_id=DATABRICKS_CLUSTER_ID
< 200 OK
< ... (identical to the RUNNING response body shown in the first failure's log; omitted)
13:45 DEBUG [databricks.sdk] POST /api/1.2/contexts/create
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "7312760718130488415"
< }
13:45 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=7312760718130488415
< 200 OK
< {
<   "id": "7312760718130488415",
<   "status": "Pending"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=7312760718130488415: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~1s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=7312760718130488415
< 200 OK
< {
<   "id": "7312760718130488415",
<   "status": "Pending"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=7312760718130488415: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~2s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=7312760718130488415
< 200 OK
< {
<   "id": "7312760718130488415",
<   "status": "Pending"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=7312760718130488415: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~3s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=7312760718130488415
< 200 OK
< {
<   "id": "7312760718130488415",
<   "status": "Running"
< }
13:45 DEBUG [databricks.sdk] POST /api/1.2/commands/execute
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "command": "get_ipython().run_line_magic('pip', 'install /Workspace/Users/4106dc97-a963-48f0-a079-a578238959... (110 more bytes)",
>   "contextId": "7312760718130488415",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "01d2b497814f469a862034327cdcad25"
< }
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=01d2b497814f469a862034327cdcad25&contextId=7312760718130488415
< 200 OK
< {
<   "id": "01d2b497814f469a862034327cdcad25",
<   "results": null,
<   "status": "Running"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=01d2b497814f469a862034327cdcad25, context_id=7312760718130488415: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~1s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=01d2b497814f469a862034327cdcad25&contextId=7312760718130488415
< 200 OK
< {
<   "id": "01d2b497814f469a862034327cdcad25",
<   "results": null,
<   "status": "Running"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=01d2b497814f469a862034327cdcad25, context_id=7312760718130488415: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~2s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=01d2b497814f469a862034327cdcad25&contextId=7312760718130488415
< 200 OK
< {
<   "id": "01d2b497814f469a862034327cdcad25",
<   "results": null,
<   "status": "Running"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=01d2b497814f469a862034327cdcad25, context_id=7312760718130488415: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~3s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=01d2b497814f469a862034327cdcad25&contextId=7312760718130488415
< 200 OK
< {
<   "id": "01d2b497814f469a862034327cdcad25",
<   "results": {
<     "data": "Processing /Workspace/Users/4106dc97-a963-48f0-a079-a578238959a6/.9NhW/wheels/databricks_labs_ls... (4247 more bytes)",
<     "resultType": "text"
<   },
<   "status": "Finished"
< }
13:45 DEBUG [databricks.sdk] POST /api/1.2/commands/execute
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "command": "import json\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors ... (204 more bytes)",
>   "contextId": "7312760718130488415",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "d3bc65cc46034be09bd13d3892cae915"
< }
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=d3bc65cc46034be09bd13d3892cae915&contextId=7312760718130488415
< 200 OK
< {
<   "id": "d3bc65cc46034be09bd13d3892cae915",
<   "results": null,
<   "status": "Running"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=d3bc65cc46034be09bd13d3892cae915, context_id=7312760718130488415: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~1s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=d3bc65cc46034be09bd13d3892cae915&contextId=7312760718130488415
< 200 OK
< {
<   "id": "d3bc65cc46034be09bd13d3892cae915",
<   "results": null,
<   "status": "Running"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=d3bc65cc46034be09bd13d3892cae915, context_id=7312760718130488415: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~2s)
13:45 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=d3bc65cc46034be09bd13d3892cae915&contextId=7312760718130488415
< 200 OK
< {
<   "id": "d3bc65cc46034be09bd13d3892cae915",
<   "results": {
<     "data": "{\"ts\": \"2024-11-15 13:45:54,829\", \"level\": \"ERROR\", \"logger\": \"SQLQueryContextLogger\", \"msg\": \"[... (13306 more bytes)",
<     "resultType": "text"
<   },
<   "status": "Finished"
< }
13:45 WARNING [databricks.sdk] cannot parse converted return statement. Just returning text
Traceback (most recent call last):
  File "/home/runner/work/lsql/lsql/.venv/lib/python3.10/site-packages/databricks/labs/blueprint/commands.py", line 123, in run
    return json.loads(results.data)
  File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/json/__init__.py", line 346, in loads
    return _TEST_SCHEMA_decoder.decode(s)
  File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 13394)
[gw2] linux -- Python 3.10.15 /home/runner/work/lsql/lsql/.venv/bin/python
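
Ids and timestamps aside, every failure's log traces the same Execution Context (1.2) flow: wait for the cluster to reach RUNNING, create an execution context and poll it out of Pending, run a %pip install of the freshly built wheel, then execute the snippet and poll the command status until Finished. A minimal sketch of that flow with plain HTTP calls against the endpoints seen above (the host, token, and run_python helper are illustrative placeholders, not the SDK's API):

    import time
    import requests

    HOST = "https://<workspace-url>"               # placeholder
    HEADERS = {"Authorization": "Bearer <token>"}  # placeholder

    def run_python(cluster_id: str, command: str) -> dict:
        # 1. Create an execution context and wait for it to leave Pending.
        ctx = requests.post(f"{HOST}/api/1.2/contexts/create", headers=HEADERS,
                            json={"clusterId": cluster_id, "language": "python"}).json()["id"]
        while requests.get(f"{HOST}/api/1.2/contexts/status", headers=HEADERS,
                           params={"clusterId": cluster_id, "contextId": ctx},
                           ).json()["status"] == "Pending":
            time.sleep(1)
        # 2. Submit the command for execution in that context.
        cmd = requests.post(f"{HOST}/api/1.2/commands/execute", headers=HEADERS,
                            json={"clusterId": cluster_id, "contextId": ctx,
                                  "command": command, "language": "python"}).json()["id"]
        # 3. Poll until Finished; the results payload carries
        #    {"data": ..., "resultType": ...} as in the log above.
        while True:
            status = requests.get(f"{HOST}/api/1.2/commands/status", headers=HEADERS,
                                  params={"clusterId": cluster_id, "commandId": cmd,
                                          "contextId": ctx}).json()
            if status["status"] == "Finished":
                return status["results"]
            time.sleep(1)
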
❌ test_runtime_backend_errors_handled[\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors import NotFound\nbackend = RuntimeBackend()\ntry:\n query_response = backend.fetch("DESCRIBE __RANDOM__")\n return "FAILED"\nexcept NotFound as e:\n return "PASSED"\n]: assert '{"ts": "2024...]}}\n"PASSED"' == 'PASSED' (22.715s)
... (skipped 15090 bytes)
runtime-identity"
<   },
<   "schemas": [
<     "urn:ietf:params:scim:schemas:core:2.0:User",
<     "... (1 additional elements)"
<   ],
<   "userName": "4106dc97-a963-48f0-a079-a578238959a6"
< }
13:45 DEBUG [databricks.labs.blueprint.wheels] Building wheel for /tmp/tmpcpqtz8lc/working-copy in /tmp/tmpcpqtz8lc
13:45 DEBUG [databricks.labs.blueprint.installation] Uploading: /Users/4106dc97-a963-48f0-a079-a578238959a6/.qDqm/wheels/databricks_labs_lsql-0.13.1+620241115134555-py3-none-any.whl
13:45 DEBUG [databricks.sdk] POST /api/2.0/workspace/import
> [raw stream]
< 404 Not Found
< {
<   "error_code": "RESOURCE_DOES_NOT_EXIST",
<   "message": "The parent folder (/Users/4106dc97-a963-48f0-a079-a578238959a6/.qDqm/wheels) does not exist."
< }
13:45 DEBUG [databricks.labs.blueprint.installation] Creating missing folders: /Users/4106dc97-a963-48f0-a079-a578238959a6/.qDqm/wheels
13:45 DEBUG [databricks.sdk] POST /api/2.0/workspace/mkdirs
> {
>   "path": "/Users/4106dc97-a963-48f0-a079-a578238959a6/.qDqm/wheels"
> }
< 200 OK
< {}
13:45 DEBUG [databricks.sdk] POST /api/2.0/workspace/import
> [raw stream]
< 200 OK
< {
<   "object_id": 804190547935302
< }
13:45 DEBUG [databricks.labs.blueprint.installation] Converting Version into JSON format
13:45 DEBUG [databricks.labs.blueprint.installation] Uploading: /Users/4106dc97-a963-48f0-a079-a578238959a6/.qDqm/version.json
13:45 DEBUG [databricks.sdk] POST /api/2.0/workspace/import
> [raw stream]
< 200 OK
< {
<   "object_id": 804190547935304
< }
13:45 DEBUG [databricks.sdk] GET /api/2.1/clusters/get?cluster_id=DATABRICKS_CLUSTER_ID
< 200 OK
< {
<   "autotermination_minutes": 60,
<   "CLOUD_ENV_attributes": {
<     "availability": "SPOT_WITH_FALLBACK_AZURE",
<     "first_on_demand": 2147483647,
<     "spot_bid_max_price": -1.0
<   },
<   "cluster_cores": 8.0,
<   "cluster_id": "DATABRICKS_CLUSTER_ID",
<   "cluster_memory_mb": 32768,
<   "cluster_name": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<   "cluster_source": "UI",
<   "creator_user_name": "[email protected]",
<   "custom_tags": {
<     "ResourceClass": "SingleNode"
<   },
<   "data_security_mode": "SINGLE_USER",
<   "TEST_SCHEMA_tags": {
<     "Budget": "opex.sales.labs",
<     "ClusterId": "DATABRICKS_CLUSTER_ID",
<     "ClusterName": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<     "Creator": "[email protected]",
<     "DatabricksInstanceGroupId": "-8854613105865987054",
<     "DatabricksInstancePoolCreatorId": "4183391249163402",
<     "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID",
<     "Owner": "[email protected]",
<     "Vendor": "Databricks"
<   },
<   "disk_spec": {},
<   "driver": {
<     "host_private_ip": "10.179.8.14",
<     "instance_id": "f335b24df03e466b8efb19a708cf7d9c",
<     "node_attributes": {
<       "is_spot": false
<     },
<     "node_id": "b993377df44e4a408921281be8db0393",
<     "private_ip": "10.179.10.14",
<     "public_dns": "",
<     "start_timestamp": 1731678282507
<   },
<   "driver_healthy": true,
<   "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID",
<   "driver_instance_source": {
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID"
<   },
<   "driver_node_type_id": "Standard_D8as_v4",
<   "effective_spark_version": "16.0.x-scala2.12",
<   "enable_elastic_disk": true,
<   "enable_local_disk_encryption": false,
<   "init_scripts_safe_mode": false,
<   "instance_pool_id": "TEST_INSTANCE_POOL_ID",
<   "instance_source": {
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID"
<   },
<   "jdbc_port": 10000,
<   "last_activity_time": 1731678347945,
<   "last_restarted_time": 1731678323374,
<   "last_state_loss_time": 1731678323349,
<   "node_type_id": "Standard_D8as_v4",
<   "num_workers": 0,
<   "pinned_by_user_name": "4183391249163402",
<   "single_user_name": "4106dc97-a963-48f0-a079-a578238959a6",
<   "spark_conf": {
<     "spark.databricks.cluster.profile": "singleNode",
<     "spark.master": "local[*]"
<   },
<   "spark_context_id": 7133597207159756379,
<   "spark_version": "16.0.x-scala2.12",
<   "spec": {
<     "autotermination_minutes": 60,
<     "cluster_name": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<     "custom_tags": {
<       "ResourceClass": "SingleNode"
<     },
<     "data_security_mode": "SINGLE_USER",
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID",
<     "num_workers": 0,
<     "single_user_name": "4106dc97-a963-48f0-a079-a578238959a6",
<     "spark_conf": {
<       "spark.databricks.cluster.profile": "singleNode",
<       "spark.master": "local[*]"
<     },
<     "spark_version": "16.0.x-scala2.12"
<   },
<   "start_time": 1731598210709,
<   "state": "RUNNING",
<   "state_message": ""
< }
13:45 DEBUG [databricks.sdk] POST /api/1.2/contexts/create
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "8336745086801389978"
< }
13:45 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=8336745086801389978
< 200 OK
< {
<   "id": "8336745086801389978",
<   "status": "Pending"
< }
13:45 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=8336745086801389978: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~1s)
13:46 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=8336745086801389978
< 200 OK
< {
<   "id": "8336745086801389978",
<   "status": "Pending"
< }
13:46 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=8336745086801389978: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~2s)
13:46 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=8336745086801389978
< 200 OK
< {
<   "id": "8336745086801389978",
<   "status": "Pending"
< }
13:46 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=8336745086801389978: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~3s)
13:46 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=8336745086801389978
< 200 OK
< {
<   "id": "8336745086801389978",
<   "status": "Running"
< }
13:46 DEBUG [databricks.sdk] POST /api/1.2/commands/execute
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "command": "get_ipython().run_line_magic('pip', 'install /Workspace/Users/4106dc97-a963-48f0-a079-a578238959... (110 more bytes)",
>   "contextId": "8336745086801389978",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "e4160f6106cc4124b8d49895ea667f98"
< }
13:46 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=e4160f6106cc4124b8d49895ea667f98&contextId=8336745086801389978
< 200 OK
< {
<   "id": "e4160f6106cc4124b8d49895ea667f98",
<   "results": null,
<   "status": "Running"
< }
13:46 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=e4160f6106cc4124b8d49895ea667f98, context_id=8336745086801389978: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~1s)
13:46 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=e4160f6106cc4124b8d49895ea667f98&contextId=8336745086801389978
< 200 OK
< {
<   "id": "e4160f6106cc4124b8d49895ea667f98",
<   "results": null,
<   "status": "Running"
< }
13:46 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=e4160f6106cc4124b8d49895ea667f98, context_id=8336745086801389978: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~2s)
13:46 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=e4160f6106cc4124b8d49895ea667f98&contextId=8336745086801389978
< 200 OK
< {
<   "id": "e4160f6106cc4124b8d49895ea667f98",
<   "results": null,
<   "status": "Running"
< }
13:46 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=e4160f6106cc4124b8d49895ea667f98, context_id=8336745086801389978: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~3s)
13:46 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=e4160f6106cc4124b8d49895ea667f98&contextId=8336745086801389978
< 200 OK
< {
<   "id": "e4160f6106cc4124b8d49895ea667f98",
<   "results": {
<     "data": "Processing /Workspace/Users/4106dc97-a963-48f0-a079-a578238959a6/.qDqm/wheels/databricks_labs_ls... (3270 more bytes)",
<     "resultType": "text"
<   },
<   "status": "Finished"
< }
13:46 DEBUG [databricks.sdk] POST /api/1.2/commands/execute
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "command": "import json\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors ... (191 more bytes)",
>   "contextId": "8336745086801389978",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "865a285eeafb4b4e974989861f23b28d"
< }
13:46 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=865a285eeafb4b4e974989861f23b28d&contextId=8336745086801389978
< 200 OK
< {
<   "id": "865a285eeafb4b4e974989861f23b28d",
<   "results": {
<     "data": "{\"ts\": \"2024-11-15 13:46:18,026\", \"level\": \"ERROR\", \"logger\": \"SQLQueryContextLogger\", \"msg\": \"[... (13394 more bytes)",
<     "resultType": "text"
<   },
<   "status": "Finished"
< }
13:46 WARNING [databricks.sdk] cannot parse converted return statement. Just returning text
Traceback (most recent call last):
  File "/home/runner/work/lsql/lsql/.venv/lib/python3.10/site-packages/databricks/labs/blueprint/commands.py", line 123, in run
    return json.loads(results.data)
  File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/json/__init__.py", line 346, in loads
    return _TEST_SCHEMA_decoder.decode(s)
  File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 13482)
13:45 DEBUG [databricks.sdk] Loaded from environment
13:45 DEBUG [databricks.sdk] Ignoring pat auth, because metadata-service is preferred
13:45 DEBUG [databricks.sdk] Ignoring basic auth, because metadata-service is preferred
13:45 DEBUG [databricks.sdk] Attempting to configure auth: metadata-service
13:45 INFO [databricks.sdk] Using Databricks Metadata Service authentication
13:45 DEBUG [databricks.sdk] GET /api/2.0/preview/scim/v2/Me
< 200 OK
< {
<   "active": true,
<   "displayName": "labs-runtime-identity",
<   "emails": [
<     {
<       "primary": true,
<       "type": "work",
<       "value": "**REDACTED**"
<     }
<   ],
<   "entitlements": [
<     {
<       "value": "**REDACTED**"
<     },
<     "... (1 additional elements)"
<   ],
<   "externalId": "d0f9bd2c-5651-45fd-b648-12a3fc6375c4",
<   "groups": [
<     {
<       "$ref": "Groups/300667344111082",
<       "display": "labs.scope.runtime",
<       "type": "direct",
<       "value": "**REDACTED**"
<     }
<   ],
<   "id": "4643477475987733",
<   "name": {
<     "givenName": "labs-runtime-identity"
<   },
<   "schemas": [
<     "urn:ietf:params:scim:schemas:core:2.0:User",
<     "... (1 additional elements)"
<   ],
<   "userName": "4106dc97-a963-48f0-a079-a578238959a6"
< }
(the remainder of the captured log repeats, verbatim, the sequence already shown above: wheel build and upload, the 404 and mkdirs retry on /api/2.0/workspace/import, version.json upload, the RUNNING clusters/get response, and creation and polling of execution context 8336745086801389978)
13:46 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=8336745086801389978
< 200 OK
< {
<   "id": "8336745086801389978",
<   "status": "Running"
< }
13:46 DEBUG [databricks.sdk] POST /api/1.2/commands/execute
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "command": "get_ipython().run_line_magic('pip', 'install /Workspace/Users/4106dc97-a963-48f0-a079-a578238959... (110 more bytes)",
>   "contextId": "8336745086801389978",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "e4160f6106cc4124b8d49895ea667f98"
< }
13:46 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=e4160f6106cc4124b8d49895ea667f98&contextId=8336745086801389978
< 200 OK
< {
<   "id": "e4160f6106cc4124b8d49895ea667f98",
<   "results": null,
<   "status": "Running"
< }
13:46 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=e4160f6106cc4124b8d49895ea667f98, context_id=8336745086801389978: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~1s)
13:46 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=e4160f6106cc4124b8d49895ea667f98&contextId=8336745086801389978
< 200 OK
< {
<   "id": "e4160f6106cc4124b8d49895ea667f98",
<   "results": null,
<   "status": "Running"
< }
13:46 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=e4160f6106cc4124b8d49895ea667f98, context_id=8336745086801389978: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~2s)
13:46 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=e4160f6106cc4124b8d49895ea667f98&contextId=8336745086801389978
< 200 OK
< {
<   "id": "e4160f6106cc4124b8d49895ea667f98",
<   "results": null,
<   "status": "Running"
< }
13:46 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=e4160f6106cc4124b8d49895ea667f98, context_id=8336745086801389978: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~3s)
13:46 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=e4160f6106cc4124b8d49895ea667f98&contextId=8336745086801389978
< 200 OK
< {
<   "id": "e4160f6106cc4124b8d49895ea667f98",
<   "results": {
<     "data": "Processing /Workspace/Users/4106dc97-a963-48f0-a079-a578238959a6/.qDqm/wheels/databricks_labs_ls... (3270 more bytes)",
<     "resultType": "text"
<   },
<   "status": "Finished"
< }
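
For reference, the create → poll → execute → poll traffic above is the v1.2 Command Execution API, which the Python SDK wraps with waiters so callers never poll by hand. A hedged sketch — the cluster id is a placeholder:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import Language

w = WorkspaceClient()
cluster_id = "0123-456789-abcdefgh"  # placeholder

# POST /api/1.2/contexts/create, then poll /contexts/status until Running
ctx = w.command_execution.create(cluster_id=cluster_id, language=Language.PYTHON).result()

# POST /api/1.2/commands/execute, then poll /commands/status until Finished
cmd = w.command_execution.execute(
    cluster_id=cluster_id,
    context_id=ctx.id,
    language=Language.PYTHON,
    command="print('hello')",
).result()
print(cmd.results.data)
```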
13:46 DEBUG [databricks.sdk] POST /api/1.2/commands/execute
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "command": "import json\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors ... (191 more bytes)",
>   "contextId": "8336745086801389978",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "865a285eeafb4b4e974989861f23b28d"
< }
13:46 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=865a285eeafb4b4e974989861f23b28d&contextId=8336745086801389978
< 200 OK
< {
<   "id": "865a285eeafb4b4e974989861f23b28d",
<   "results": {
<     "data": "{\"ts\": \"2024-11-15 13:46:18,026\", \"level\": \"ERROR\", \"logger\": \"SQLQueryContextLogger\", \"msg\": \"[... (13394 more bytes)",
<     "resultType": "text"
<   },
<   "status": "Finished"
< }
13:46 WARNING [databricks.sdk] cannot parse converted return statement. Just returning text
Traceback (most recent call last):
  File "/home/runner/work/lsql/lsql/.venv/lib/python3.10/site-packages/databricks/labs/blueprint/commands.py", line 123, in run
    return json.loads(results.data)
  File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 13482)
[gw0] linux -- Python 3.10.15 /home/runner/work/lsql/lsql/.venv/bin/python

Running from acceptance #452
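
The traceback above is the whole failure in miniature: the remote command printed a structured log record and then the `"PASSED"` literal, so the output is two JSON documents back to back, and `json.loads` rejects anything after the first one. A minimal reproduction, with `raw_decode` shown as one way to read just the first value — the variable names are illustrative:

```python
import json

# A log record on line 1, then the test's return value on line 2.
payload = '{"ts": "2024-11-15 13:46:18", "level": "ERROR", "msg": "..."}\n"PASSED"'

try:
    json.loads(payload)
except json.JSONDecodeError as err:
    print(err)  # JSONDecodeError: Extra data: line 2 column 1

# raw_decode() parses the first document and reports where it stopped, so
# the trailing value can be recovered instead of failing the whole parse.
obj, end = json.JSONDecoder().raw_decode(payload)
print(obj["level"])               # ERROR
print(json.loads(payload[end:]))  # PASSED
```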

@nfx nfx merged commit 4f5ef74 into main Nov 15, 2024
7 of 8 checks passed
@nfx nfx deleted the feat/nightly branch November 15, 2024 13:53
nfx added a commit that referenced this pull request Nov 15, 2024
* Added nightly tests run at 4:45am UTC ([#318](#318)). A new nightly workflow automates a series of jobs every day at 4:45am UTC on the `larger` runners. It requests permissions to write id-tokens, write issues, and read contents and pull requests; checks out the code at full fetch depth; installs Python 3.10 and hatch 1.9.4; and then runs the nightly tests through the databrickslabs/sandbox/acceptance action, which files issues when tests fail. The workflow consumes the VAULT_URI, GITHUB_TOKEN, ARM_CLIENT_ID, and ARM_TENANT_ID secrets, sets the TEST_NIGHTLY environment variable to true, and joins the "single-acceptance-job-per-repo" concurrency group so that only one acceptance job runs at a time per repository (a hedged workflow sketch follows this changelog).
* Bump codecov/codecov-action from 4 to 5 ([#319](#319)). Version 5 of the Codecov GitHub Action wraps the CLI in the Codecov Wrapper, which enables faster updates, and adds an opt-out for tokens in public repositories so contributors and other members can upload coverage reports without access to the Codecov token. Two inputs were renamed: `file` is deprecated in favor of `files`, and `plugin` in favor of `plugins`. New inputs include `binary`, `gcov_args`, `gcov_executable`, `gcov_ignore`, `gcov_include`, `report_type`, `skip_validation`, and `swift_project`; the release notes and changelog document the changes in full (an input sketch follows this changelog).
* Fixed `RuntimeBackend` exception handling ([#328](#328)). This release improves exception handling in the `RuntimeBackend` component, addressing issues [#328](#328), [#327](#327), [#326](#326), and [#325](#325). The `execute` and `fetch` methods now handle errors more gracefully and catch `BaseException` instead of `Exception`, so failures outside the `Exception` hierarchy are also converted into the SDK's typed errors (a sketch of the pattern follows this changelog). The `pyproject.toml` file now uses a newer `databricks-labs-pytester` (0.5.0, up from 0.2.1), which may have contributed to resolving these issues. Finally, `test_backends.py` was updated to make the output of the tests that expect `NotFound`, `BadRequest`, or `Unknown` easier to read, and `test_runtime_backend_use_statements` now prints `PASSED` or `FAILED` instead of returning those values. Together these changes make the `RuntimeBackend` error handling more robust and bring the unit tests up to date.

Dependency updates:

 * Bump codecov/codecov-action from 4 to 5 ([#319](#319)).
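
A minimal sketch of the nightly workflow described in the first bullet, reconstructed from the changelog text; the file name, action ref, and exact input names are assumptions rather than copies of the repository's workflow:

```yaml
# .github/workflows/nightly.yml -- illustrative name; the action ref and
# acceptance inputs below are assumptions reconstructed from the changelog.
name: nightly

on:
  schedule:
    - cron: '45 4 * * *'  # every day at 4:45am UTC

permissions:
  id-token: write
  issues: write
  contents: read
  pull-requests: read

concurrency: single-acceptance-job-per-repo

jobs:
  nightly:
    runs-on: larger  # runner label taken from the changelog entry
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history
      - uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - run: pip install hatch==1.9.4
      - uses: databrickslabs/sandbox/acceptance@main  # placeholder ref
        with:
          vault_uri: ${{ secrets.VAULT_URI }}
          create_issues: true  # file an issue when the nightly run fails
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
          ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
          TEST_NIGHTLY: true
```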
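For the codecov bullet, the practical change when upgrading is the renamed inputs; a hedged before/after sketch with placeholder paths:

```yaml
# v4 (deprecated input name)
- uses: codecov/codecov-action@v4
  with:
    file: coverage.xml        # `file` is deprecated in v5

# v5 (renamed input; token optional for public repos)
- uses: codecov/codecov-action@v5
  with:
    files: coverage.xml       # plural `files` replaces `file`
    token: ${{ secrets.CODECOV_TOKEN }}  # can be omitted on public repos
```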
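And for the `RuntimeBackend` bullet, a minimal sketch of the `BaseException` pattern the fix describes — not the actual `RuntimeBackend` source; the error-class markers are illustrative:

```python
# Hedged sketch: map Spark error text onto the SDK's typed exceptions so
# callers can catch NotFound/BadRequest precisely instead of bare Exception.
from databricks.sdk.errors import BadRequest, NotFound, Unknown


def _to_sdk_error(err: BaseException) -> Exception:
    message = str(err)
    if "TABLE_OR_VIEW_NOT_FOUND" in message or "SCHEMA_NOT_FOUND" in message:
        return NotFound(message)
    if "PARSE_SYNTAX_ERROR" in message:
        return BadRequest(message)
    return Unknown(message)


class RuntimeBackendSketch:
    def __init__(self, spark):
        self._spark = spark

    def execute(self, sql: str) -> None:
        try:
            self._spark.sql(sql)
        except BaseException as err:  # broadened from Exception, per the changelog
            raise _to_sdk_error(err) from err

    def fetch(self, sql: str):
        try:
            return iter(self._spark.sql(sql).collect())
        except BaseException as err:
            raise _to_sdk_error(err) from err
```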