This repository has been archived by the owner on Jun 25, 2024. It is now read-only.
As part of this method,
https://github.com/DataThirstLtd/azure.databricks.cicd.tools/blob/d7c765f3cec50a019d9b3e64822cc854f547cf71/Public/Import-DatabricksFolder.ps1,
there should be a flag to run a complete deployment, rather than just an incremental or clean one.

By "complete" I mean: run an incremental deploy, then delete any workspace notebooks that do not exist in the local import. By contrast, a clean deploy deletes all workspace notebooks and re-imports everything from local, which is a costly procedure when importing many notebooks during a CI/CD process.

This may not be as simple as passing a flag to the Databricks API. Instead, one could first write a list-all module that lists all the notebooks in a given workspace using
https://docs.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/workspace#--list
The output of that call could be diffed against the local notebooks to determine which workspace notebooks should be deleted. The final step of a "complete" deployment would then be the import API call already used in Import-DatabricksFolder.ps1.
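The list/diff/delete flow could be sketched roughly as below. This is only an illustration, not a tested implementation: the parameter names (`$BearerToken`, `$Region`, `$LocalPath`, `$WorkspacePath`) are assumptions, and it calls the documented `workspace/list` and `workspace/delete` REST endpoints directly via `Invoke-RestMethod` rather than any cmdlet from this module.

```powershell
param($BearerToken, $Region, $LocalPath, $WorkspacePath)

$Headers = @{ Authorization = "Bearer $BearerToken" }
$ApiRoot = "https://$Region.azuredatabricks.net/api/2.0"

# 1. List the notebooks currently under the target workspace folder.
#    Note: workspace/list returns one level only, so a real implementation
#    would need to recurse into DIRECTORY entries.
$Response = Invoke-RestMethod -Method Get -Headers $Headers `
    -Uri "$ApiRoot/workspace/list?path=$WorkspacePath"
$WorkspaceNotebooks = $Response.objects |
    Where-Object { $_.object_type -eq "NOTEBOOK" } |
    ForEach-Object { $_.path }

# 2. Map local notebook files to their expected workspace paths
#    (strip the source extension, normalise path separators).
$LocalNotebooks = Get-ChildItem -Path $LocalPath -Recurse -File |
    ForEach-Object {
        $Relative = $_.FullName.Substring($LocalPath.Length) -replace '\\', '/'
        $WorkspacePath + ($Relative -replace '\.(py|scala|sql|r)$', '')
    }

# 3. Delete workspace notebooks that have no local counterpart.
$Orphans = $WorkspaceNotebooks | Where-Object { $LocalNotebooks -notcontains $_ }
foreach ($Path in $Orphans) {
    Invoke-RestMethod -Method Post -Headers $Headers `
        -Uri "$ApiRoot/workspace/delete" `
        -Body (@{ path = $Path } | ConvertTo-Json) `
        -ContentType "application/json"
}

# 4. Finish with the existing incremental import (Import-DatabricksFolder).
```

Running the delete pass before the import keeps the workspace consistent even if the import later fails, and the whole operation touches only the orphaned notebooks rather than re-uploading everything as a clean deploy does.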