feat(wrangler): identify draft and inherit bindings and provision draft ones #7427
Conversation
🦋 Changeset detected. Latest commit: ee0a46d. The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.
Force-pushed from 6dff288 to a1a17b7.
A wrangler prerelease is available for testing. You can install this latest build in your project with:
npm install --save-dev https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12295632265/npm-package-wrangler-7427
You can reference the automatically updated head of this PR with:
npm install --save-dev https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/prs/7427/npm-package-wrangler-7427
Or you can use:
npx https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12295632265/npm-package-wrangler-7427 dev path/to/script.js
Additional artifacts:
npx https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12295632265/npm-package-create-cloudflare-7427 --no-auto-update
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12295632265/npm-package-cloudflare-kv-asset-handler-7427
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12295632265/npm-package-miniflare-7427
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12295632265/npm-package-cloudflare-pages-shared-7427
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12295632265/npm-package-cloudflare-vitest-pool-workers-7427
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12295632265/npm-package-cloudflare-workers-editor-shared-7427
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12295632265/npm-package-cloudflare-workers-shared-7427
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12295632265/npm-package-cloudflare-workflows-shared-7427
Note that these links will no longer work once the GitHub Actions artifact expires.
Please ensure constraints are pinned, and […]
Force-pushed from a1a17b7 to 04b5b5b.
Force-pushed from b558b43 to 87fd88b.
This is awesome work! The only suggestion I have is to have a clearer separation of concerns in the provisioner, so that it doesn't get too unwieldy later on, and so we can move more confidently towards modularisation of wrangler modules.
	logger.debug("No settings found");
}

for (const kv of bindings.kv_namespaces ?? []) {
We could consider providing a simpler path towards modularisation by injecting the various handlers (kv, r2, d1, etc.) so that provisionBindings and runProvisioningFlow stay agnostic?
I was thinking something like this:
interface ResourceHandler {
	type: string; // The resource type (e.g. "kv_namespace", "r2_bucket", etc.)
	provision(
		accountId: string,
		bindings: CfWorkerInit["bindings"],
		settings: Settings | undefined
	): Promise<PendingResource[]>;
	listExisting(accountId: string): Promise<NormalisedResourceInfo[]>;
}

interface PendingResource {
	binding: string;
	create: (name: string) => Promise<string>;
	updateId: (id: string) => void;
	type: string;
}
class Provisioner {
	#pendingResources: { [type: string]: PendingResource[] } = {};

	constructor(private handlers: ResourceHandler[]) {}

	async provision(
		accountId: string,
		bindings: CfWorkerInit["bindings"],
		scriptName: string
	): Promise<void> {
		let settings: Settings | undefined;
		try {
			settings = await getSettings(accountId, scriptName);
		} catch (error) {
			logger.debug("No settings found");
		}

		for (const handler of this.handlers) {
			this.#pendingResources[handler.type] = await handler.provision(
				accountId,
				bindings,
				settings
			);
		}

		for (const type of Object.keys(this.#pendingResources)) {
			await this.#runProvisioningFlow(accountId, type);
		}
	}

	async #runProvisioningFlow(accountId: string, type: string) {
		// ...
		const handler = this.handlers.find((h) => h.type === type);
		const pending = this.#pendingResources[type] ?? [];
		if (handler && pending.length > 0) {
			const preExistingResources = await handler.listExisting(accountId);
			// ...
			for (const item of pending) {
				// ...
			}
		}
	}
}
You could then define the kvHandler in the kv folder, e.g. /src/kv/provision.ts:
const kvHandler: ResourceHandler = {
	type: "kv_namespace",
	async provision(
		accountId: string,
		bindings: CfWorkerInit["bindings"],
		settings: Settings | undefined
	): Promise<PendingResource[]> {
		const pendingResources: PendingResource[] = [];
		for (const binding of bindings.kv_namespaces || []) {
			if (
				!settings?.bindings.find(
					(b) => b.name === binding.binding && b.type === "kv_namespace"
				)
			) {
				pendingResources.push({
					binding: binding.binding,
					create: async (name: string) => {
						const id = await createKVNamespace(accountId, name);
						return id;
					},
					updateId: (id: string) => {
						binding.id = id;
					},
					type: "kv_namespace",
				});
			}
		}
		return pendingResources;
	},
	async listExisting(accountId: string): Promise<NormalisedResourceInfo[]> {
		const namespaces = await listKVNamespaces(accountId);
		return namespaces.map((ns) => ({ name: ns.title, id: ns.id }));
	},
};
And finally, create the provisioner with all the required handlers:
const provisioner = new Provisioner([kvHandler, r2BucketHandler, d1DatabaseHandler]);
// await provisioner.provision(accountId, bindings, scriptName);
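To show how another resource would slot into the same shape, here is a rough sketch of an R2 handler. It assumes helpers along the lines of createR2Bucket and listR2Buckets exist next to the KV ones, and that R2 bindings are keyed by bucket_name rather than an id; treat the exact names, fields, and signatures as placeholders rather than the real wrangler API:

// Rough sketch only; helper names and binding fields are assumptions.
const r2BucketHandler: ResourceHandler = {
	type: "r2_bucket",
	async provision(
		accountId: string,
		bindings: CfWorkerInit["bindings"],
		settings: Settings | undefined
	): Promise<PendingResource[]> {
		const pendingResources: PendingResource[] = [];
		for (const binding of bindings.r2_buckets ?? []) {
			const alreadyBound = settings?.bindings.find(
				(b) => b.name === binding.binding && b.type === "r2_bucket"
			);
			if (!alreadyBound) {
				pendingResources.push({
					binding: binding.binding,
					type: "r2_bucket",
					// R2 buckets are addressed by name, so the "id" here is the bucket name.
					create: async (name: string) => {
						await createR2Bucket(accountId, name);
						return name;
					},
					updateId: (id: string) => {
						binding.bucket_name = id;
					},
				});
			}
		}
		return pendingResources;
	},
	async listExisting(accountId: string): Promise<NormalisedResourceInfo[]> {
		const buckets = await listR2Buckets(accountId);
		return buckets.map((bucket) => ({ name: bucket.name, id: bucket.name }));
	},
};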
happy to add to the momentum for modularising wrangler! thanks for the very thorough comment too :)
...although in the interest of time, I'll probably revisit this later (around new year's when I'm on call)?
Force-pushed from 705532b to 507b26f.
// we need to add d1 back into the config because otherwise wrangler will
// call the api for all 5000 or so db's the e2e test account has
// :(
await helper.seed({
Also removing the id-less bindings from the wrangler.toml, because otherwise there will be a config error (at least until the PR to add experimental flags to defineCommand lands).
 	...output
 		.map((bindingGroup) => {
 			return [
 				`- ${bindingGroup.name}:`,
-				bindingGroup.entries.map(({ key, value }) => ` - ${key}: ${value}`),
+				bindingGroup.entries.map(
+					({ key, value }) => ` - ${key}${value ? ":" : ""} ${value}`
This just removes the colon if there is no value (i.e. for inherited bindings). I do think we should maybe print something for inherited bindings (id? name?), and that's totally possible since we've got the worker settings, but I just wanted to discuss first and not block this PR on that.
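As a small self-contained illustration of the new mapping (the binding names and id below are made up, not from the PR):

// With a value the line keeps the colon; for an inherited binding with no
// value, the colon is dropped (leaving a trailing space from the template).
const entries = [
	{ key: "MY_KV", value: "0123456789abcdef" }, // provisioned: has an id
	{ key: "INHERITED_KV", value: "" }, // inherited: nothing to show
];
const lines = entries.map(
	({ key, value }) => ` - ${key}${value ? ":" : ""} ${value}`
);
// => [" - MY_KV: 0123456789abcdef", " - INHERITED_KV "]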
Force-pushed from ea48312 to 906b16e.
});
let output = await helper.run(`wrangler r2 bucket delete ${workerName}-r2`);
expect(output.stdout).toContain(`Deleted bucket`);
output = await helper.run(`wrangler d1 delete ${workerName}-d1 -y`, {
Can we just delete the D1 database by its ID directly?
Apparently that's not a thing you can do with wrangler: the name or binding is the only thing you can pass in.
The various resource delete commands are very inconsistent.
Ah cool, okay. cc @rozenmd: out of scope for this PR, but is that something D1 wants to support?
More of a question for @vy-ton, but it seems doable to parse whether the incoming value for wrangler d1 delete is a UUID and just send the request to the API.
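For what that might look like, here is a minimal sketch assuming the command would simply branch on whether its positional argument parses as a UUID; the helper names and wiring in the usage comment are hypothetical, not wrangler's actual code:

// Minimal sketch, not wrangler's implementation: branch on whether the
// argument to `wrangler d1 delete` looks like a UUID rather than a name.
const UUID_REGEX =
	/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isDatabaseUuid(nameOrUuid: string): boolean {
	return UUID_REGEX.test(nameOrUuid);
}

// Hypothetical usage inside the delete handler:
// const databaseId = isDatabaseUuid(args.name)
// 	? args.name
// 	: (await getDatabaseByName(accountId, args.name)).uuid;
// await deleteDatabase(accountId, databaseId);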
Force-pushed from 333002a to e0f7b92.
Fixes DEVX-1421 and DEVX-1420.
This PR adds the ability to identify draft and inherited bindings by looking up the Worker's current binding settings, and to provision the draft ones.
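As a rough sketch of the classification idea (not the actual implementation; the Settings shape follows the reviewer's example earlier in the thread): a config binding without a resource ID counts as inherited if the deployed Worker's settings already contain a binding with the same name and type, and as a draft that still needs provisioning otherwise.

// Rough sketch only, reusing the Settings shape from the review discussion.
type BindingStatus = "provisioned" | "inherited" | "draft";

function classifyKvBinding(
	binding: { binding: string; id?: string },
	settings: Settings | undefined
): BindingStatus {
	// An explicit id in the config means the resource already exists.
	if (binding.id !== undefined) {
		return "provisioned";
	}
	// No id: inherited if the deployed Worker already has this binding,
	// otherwise it's a draft that needs a resource provisioned for it.
	const existing = settings?.bindings.find(
		(b) => b.name === binding.binding && b.type === "kv_namespace"
	);
	return existing ? "inherited" : "draft";
}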