diff --git a/docs/archetypes/default.md b/docs/archetypes/default.md deleted file mode 100644 index 26f317f303e7..000000000000 --- a/docs/archetypes/default.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -title: "{{ replace .Name "-" " " | title }}" -date: {{ .Date }} -draft: true ---- diff --git a/docs/content/community/roadmap.md b/docs/content/community/roadmap.md deleted file mode 100644 index a9ccf1c1d295..000000000000 --- a/docs/content/community/roadmap.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -title: Roadmap -menuPosition: 1 -draft: true ---- diff --git a/docs/content/docs/concepts/images/architecture.png b/docs/content/docs/concepts/images/architecture.png deleted file mode 100644 index 8c8248f5dddc..000000000000 Binary files a/docs/content/docs/concepts/images/architecture.png and /dev/null differ diff --git a/docs/content/docs/concepts/images/load-flow.png b/docs/content/docs/concepts/images/load-flow.png deleted file mode 100644 index adb5d0d7741b..000000000000 Binary files a/docs/content/docs/concepts/images/load-flow.png and /dev/null differ diff --git a/docs/content/docs/data-structures/cell.md b/docs/content/docs/data-structures/cell.md deleted file mode 100644 index 37874dc513f8..000000000000 --- a/docs/content/docs/data-structures/cell.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: SharedCell -status: unwritten -draft: true ---- - -Example of wrapping an object in a SharedCell and listening to changes on that object. Synced settings could be a good -scenario to demonstrate. diff --git a/docs/content/docs/data-structures/directory.md b/docs/content/docs/data-structures/directory.md deleted file mode 100644 index aeb166245840..000000000000 --- a/docs/content/docs/data-structures/directory.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: SharedDirectory -status: unwritten -draft: true ---- - -Directory usage guide. - -How do I store hierarchical data correctly in Directory? - -Examples of using Directory to listen to only some changes in the underlying map. 
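One way the requested example could be sketched, using a plain-TypeScript stand-in for the DDS (the `DirectoryLike` class and the `valueChanged` event shape below are illustrative assumptions, not the real SharedDirectory API):

```typescript
// Sketch: filter "valueChanged" events so a listener reacts only to changes
// under one subdirectory. The event shape (key + path) mirrors what a
// SharedDirectory-like DDS might emit; the names here are assumptions.
type ValueChanged = { key: string; path: string };
type Listener = (change: ValueChanged) => void;

class DirectoryLike {
  private listeners: Listener[] = [];
  on(_event: "valueChanged", fn: Listener): void {
    this.listeners.push(fn);
  }
  set(path: string, key: string, _value: unknown): void {
    for (const fn of this.listeners) fn({ key, path });
  }
}

// Subscribe only to changes at or under a given subdirectory path.
function onSubdirectoryChanged(dir: DirectoryLike, subdir: string, fn: Listener): void {
  dir.on("valueChanged", (change) => {
    if (change.path === subdir || change.path.startsWith(`${subdir}/`)) {
      fn(change);
    }
  });
}

const dir = new DirectoryLike();
const seen: string[] = [];
onSubdirectoryChanged(dir, "/settings", (c) => seen.push(c.key));
dir.set("/settings", "theme", "dark"); // observed
dir.set("/layout", "width", 800);      // ignored by the filtered listener
```

Storing hierarchical data then becomes a matter of choosing subdirectory paths (`/settings`, `/layout`, ...) and filtering events by path prefix.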
diff --git a/docs/content/docs/data-structures/task-manager.md b/docs/content/docs/data-structures/task-manager.md deleted file mode 100644 index ba4716f1dec3..000000000000 --- a/docs/content/docs/data-structures/task-manager.md +++ /dev/null @@ -1,153 +0,0 @@ ---- -title: TaskManager -draft: true -menuPosition: 9 ---- - -## Introduction - -FluidFramework is designed to facilitate real-time collaboration in modern web applications by distributing data throughout its clients with the help of its many distributed data structures (DDSes). However, TaskManager uniquely distributes tasks rather than a dataset. Furthermore, TaskManager is designed to distribute tasks that should be exclusively executed by a single client to avoid errors and mitigate redundancy. - -{{% callout note "What exactly is a \"task\"?" %}} -A task is simply code that should only be executed by **one** client at a time. This could be as small as a single line of code, or an entire system. However, we recommend that large processes frequently synchronize their progress in case of an unexpected disconnection, so that another client can resume the process with minimal data loss. -{{% /callout %}} - -### Task Queue - -TaskManager's main role is to maintain a queue of clients for each unique task. The client at the top of the queue is assigned the task, and is given permission to exclusively execute the task. All other clients will remain in queue until they leave, disconnect (unexpectedly), or the task is completed by the assigned client. It's important to note that TaskManager maintains the consensus state of the task queue. This means that locally submitted operations will not affect the queue until the operation is accepted by all other clients. To learn more about consensus-based data structures, click [here]({{< relref "./overview.md#consensus-data-structures" >}}). - -### Consensus Based DDS - -An important note about TaskManager is that it is a consensus-based DDS.
This essentially means that operations are not accepted until every client acknowledges and accepts the operation. This differs from an "optimistic" DDS (e.g. [SharedMap]({{< relref "./map.md" >}})), which immediately accepts ops and then relays them to other clients. For more information regarding different types of DDSes, click [here]({{< relref "./overview.md" >}}). - -## Usage - -### APIs - -The `TaskManager` object provides a number of methods to manage the execution of tasks. Please note: each API takes a `taskId` parameter of type `string`. - - -- `volunteerForTask(taskId)` -- Adds the client to the task queue **once**. It returns a promise that resolves `true` if the client is assigned the task and `false` if the task was completed by another client. It will throw an error if the client disconnects while in queue. -- `subscribeToTask(taskId)` -- Will continuously add the client to the task queue. Does not return a value, and will therefore require listening to [events](#events) to determine if the task is assigned, lost, or completed. -- `subscribed(taskId)` -- Returns a boolean to indicate if the client is subscribed to the task. -- `complete(taskId)` -- Will release all clients from the task queue, including the currently assigned client. -- `abandon(taskId)` -- Exits the queue and releases the task if currently assigned. Will also unsubscribe from the task (if subscribed). -- `queued(taskId)` -- Returns a boolean to indicate if the client is in the task queue (being assigned a task is still considered queued). -- `assigned(taskId)` -- Returns a boolean to indicate if the client is assigned the task. - - -### `volunteerForTask()` vs `subscribeToTask()` - -Although both APIs are ultimately used to join the task queue, they have two key differences that determine which should be used in a given scenario. The first key difference is that `volunteerForTask()` returns a `Promise`, while `subscribeToTask()` is synchronous and will rely on events.
Second, `volunteerForTask()` will only enter the client into the task queue **once**, while `subscribeToTask()` will re-enter the client into the task queue if the client disconnects and later reconnects. - -Due to these differences, `volunteerForTask()` is better suited for one-time tasks such as data imports or migrations. For an example, see [the schema upgrade demo](#external-examples). On the other hand, `subscribeToTask()` is preferred for ongoing tasks that have no definitive end. For an example, see [the task selection demo](#external-examples). - -### Events - -`TaskManager` is an `EventEmitter`, and will emit events when a task is assigned to the client or released. Each of the following events passes a `taskId` argument to its listener, identifying the task for which the event was fired. - -- `assigned` -- Fires when the client reaches the top of the task queue and is assigned the task. -- `lost` -- Fires when the client disconnects after having been assigned the task. -- `completed` -- Fires on all connected clients when the assigned client calls `complete()`. - -### Creation - -To create a `TaskManager`, call the static `create()` method as shown below. Note: - -- `this.runtime` is an `IFluidDataStoreRuntime` object that represents the data store that the new task queue belongs to. -- `"my-task-manager"` is the name for the new task queue (this is an optional argument). - -```typescript -const taskManager = TaskManager.create(this.runtime, "my-task-manager"); -``` - -## Examples - -### Basic Example -- `volunteerForTask()` - -The following is a basic example for `volunteerForTask()`. Note that we check the `boolean` return value from the promise to ensure that the task was not completed by another client.
- -```typescript -const myTaskId = "myTaskId"; - -taskManager.volunteerForTask(myTaskId) - .then((isAssigned: boolean) => { - if (isAssigned) { - console.log("Assigned task."); - - // We set up a listener in case we lose the task assignment while executing the code. - const onLost = (taskId: string) => { - if (taskId === myTaskId) { - // The task assignment has been lost, therefore we should halt execution. - stopExecutingTask(); - } - }; - taskManager.on("lost", onLost); - - // Now that we are assigned the task we can begin executing the code. - executeTask() - .then(() => { - // We should remember to turn off the listener once we are done with it. - taskManager.off("lost", onLost); - - // We should call complete() if we didn't already do that at the end of executeTask(). - taskManager.complete(myTaskId); - }); - } else { - console.log("Task completed by another client."); - } - }) - .catch((error) => { - console.error("Removed from queue:", error); - }); -``` - -### Basic Example -- `subscribeToTask()` - -The following is an example using `subscribeToTask()`. Since `subscribeToTask()` does not have a return value, we must rely on event listeners. We can set up the following listeners below. Please note how we compare the `taskId` with `myTaskId` to ensure we are responding to the appropriate task event. - - -```typescript -const myTaskId = "myTaskId"; - -const onAssigned = (taskId: string) => { - console.log(`Client was assigned task: ${taskId}`); - if (taskId === myTaskId) { - // Now that we are assigned the task we can begin executing the code. - // We assume that complete() is called at the end of executeTask(). - executeTask(); - } -} - -const onLost = (taskId: string) => { - console.log(`Client released task: ${taskId}`); - if (taskId === myTaskId) { - // This client is no longer assigned the task, therefore we should halt execution.
- stopExecutingTask(); - } -} - -const onCompleted = (taskId: string) => { - console.log(`Task ${taskId} completed by another client`); - if (taskId === myTaskId) { - // Make sure we turn off the event listeners now that we are done with them. - taskManager.off("assigned", onAssigned); - taskManager.off("lost", onLost); - taskManager.off("completed", onCompleted); - } - -} - -taskManager.on("assigned", onAssigned); -taskManager.on("lost", onLost); -taskManager.on("completed", onCompleted); - -// Once the listeners are set up we can finally subscribe to the task. -taskManager.subscribeToTask(myTaskId); -``` - -### External Examples - -- [Schema Upgrade](https://github.com/microsoft/FluidFramework/tree/main/examples/hosts/app-integration/schema-upgrade) -- Experimental application to outline an approach for migrating data from an existing Fluid container into a new Fluid container which may have a different schema or code running on it. TaskManager is used to ensure only a single client performs the migration. -- [Task Selection](https://github.com/microsoft/FluidFramework/tree/main/examples/data-objects/task-selection) -- Simple application to demonstrate TaskManager with a rolling die. TaskManager is used to have only a single client "rolling" the die while other clients observe. - diff --git a/docs/content/docs/deep/_index.md b/docs/content/docs/deep/_index.md deleted file mode 100644 index 675a91ce5be7..000000000000 --- a/docs/content/docs/deep/_index.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: "The depths" -draft: false -area: deep -cascade: - area: deep - draft: false ---- diff --git a/docs/content/docs/deep/blobs.md b/docs/content/docs/deep/blobs.md deleted file mode 100644 index 5b0b8e62662e..000000000000 --- a/docs/content/docs/deep/blobs.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: "Blobs in the Fluid Framework" -menuPosition: 7 -status: unwritten -draft: true ---- - -Section on attachment blobs vs. snapshot blobs from issue #6374.
diff --git a/docs/content/docs/deep/breaking-changes.md b/docs/content/docs/deep/breaking-changes.md deleted file mode 100644 index b0125e7cbd4c..000000000000 --- a/docs/content/docs/deep/breaking-changes.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Breaking changes -draft: true -aliases: - - "/docs/advanced/breaking-changes/" ---- - -See . diff --git a/docs/content/docs/deep/compatibility.md b/docs/content/docs/deep/compatibility.md deleted file mode 100644 index 559d49895bb7..000000000000 --- a/docs/content/docs/deep/compatibility.md +++ /dev/null @@ -1,141 +0,0 @@ ---- -title: Version compatibility -draft: true -status: outdated -aliases: - - "/docs/concepts/compatibility/" ---- - -Because the Fluid Framework is a platform, maintaining predictable backwards/forwards compatibility is an important part -of development and documentation. Any breaking changes should be placed in the [BREAKING.md](./breaking-changes.md) -file in the root of the repository. Understanding the different parts of the Fluid Framework can help with making sure -contributions are acceptably compatible and the code is reasonably clean. - -## Breakdown - -The following overview shows the various levels and their corresponding contracts: - -- Common: @fluidframework/common-definitions - - Common utils/definitions that might be shared at all levels of the stack -- Protocol: @fluidframework/protocol-definitions - - Definition of protocol between the server and the client (ops and summary structure, etc.) 
-- Driver: @fluidframework/driver-definitions - - API of driver for access to storage and web socket connections -- Loader: @fluidframework/container-definitions - - The core framework responsible for loading runtime code into a container -- Runtime: @fluidframework/runtime-definitions - - A base set of runtime code that supports the Fluid model, summarization, and other core Fluid features -- Framework: @fluidframework/framework-definitions - - A set of base implementations and helper utilities to support developers building on Fluid - -This document will focus on a few specific layer boundaries. - -### Protocol - -Changes to the protocol definitions should be thoroughly vetted, and ideally should always be backwards compatible. These changes require synchronization between servers and clients, and are meant to be minimal and well-designed. - -### Driver and Loader - -The driver and loader versions will come from the hosting applications. Driver implementations depend on the corresponding server version. Changes to driver definitions must be applied to all driver implementations, and so they should be infrequent. The loader implementations are meant to be very slim, only providing enough functionality to load the runtime code and connect to the server. - -The driver contract is consumed by both the loader and the runtime layers. Since the driver and loader come from the same source, it is not necessary to maintain compatibility across the driver-to-loader boundary for now. As the number of drivers increases and they become external, this may change in the future. - -The loader contract (also called container definitions) is consumed by the runtime layer. Consumers of the Fluid Framework may release their host (with driver and loader) on a different cadence than their runtime code, so compatibility across this boundary is important.
Currently Fluid maintains that the driver/loader will be backwards *and* forwards compatible with the runtime by at least 1 version. For a given driver or loader version `2.x`, it should be compatible with runtime versions `1.x`, `2.x`, and `3.x`. This is illustrated by the table below: - -| Driver/Loader | | 1.x | 2.x | 3.x | -|--------------:|-|:---:|:---:|:---:| -| Runtime | | | | | -| 1.x | | C | BC | X | -| 2.x | | FC | C | BC | -| 3.x | | X | FC | C | - -- C - Fully compatible -- BC - Driver/loader backwards compatible with runtime -- FC - Driver/loader forwards compatible with runtime (runtime backwards compatible with driver and loader) -- X - May not be compatible - -### Runtime - -Within the Fluid Framework, the runtime consists of a few parts: - -1. The container-level runtime code: this corresponds to a single data source/document, and *contains* the data stores. The container-level runtime code is dictated by the "code" value in the quorum. Typically developers building on Fluid will create an instance of the Fluid `ContainerRuntime` by passing it a registry, which instructs how to instantiate data stores; this may be dynamic, or all data store code could be bundled with the container runtime. -2. The data-store-level runtime code: this corresponds to each loaded data store within a container. The data-store-level runtime code is dictated by the package information in its attach op. The data-store-level runtime code and container-level runtime code depend on each other through the APIs defined in runtime-definitions. For reference, this boundary occurs between the `IFluidDataStoreContext` (container-level) and the `IFluidDataStoreRuntime` (data-store-level). Fluid tries to keep the container runtime backwards compatible with the data store runtime by at least 1 version. -3. The distributed data structures code: typically developers can build data stores consisting of the set of distributed data structures provided by the Fluid Framework.
There is a registry of DDS factories within each data store that instructs how to load the DDS code, but this code is meant to be statically loaded with the data store. Developers can build their own distributed data structures, but doing so is more complicated, because DDSes sit at a lower level, closer to the ops and summaries. - -When making changes to the Fluid Framework repository, it is important to note when breaking changes are made to runtime-definitions that affect compatibility between different versions of data stores. We should ensure that our own container-level runtime code can load our own data-store-level runtime code at least 1 version back. - -Specific interfaces to monitor: - -- `IContainerRuntime` - interfaces container runtime to loaded data store runtime -- `IFluidDataStoreContext` - interfaces data store context to loaded data store runtime -- `IFluidDataStoreRuntime` - interfaces loaded data store runtime to its context - -## Guidelines for compatible contributions - -There are many approaches to writing backwards/forwards compatible code. For large changes or changes that are difficult to feature detect, it might be valuable to leverage versioned interfaces. For smaller changes, it might be as simple as adding comments indicating what is deprecated and making the code backwards compatible. - -It is required to make the changes backwards/forwards compatible at least 1 version where indicated above. This means splitting the logic in some way to handle the old API as well as comfortably handling the new API. 2+ versions later, the code can be revisited and the specialized back-compat code can be removed. - -### Isolate back-compat code - -Typically, it is best to isolate the back-compat code as much as possible, rather than inline it. This will help make it clear to readers of the code that they should not rely on that code, and it will simplify removing it in the future.
- -One strategy is to first write the code as it should be, without backwards compatibility, and then add extra code to handle the old API. - -### Comment appropriately - -Add comments to indicate important changes in APIs; for example, add a comment to indicate if an API is deprecated. Use the TSDoc `@deprecated` tag where appropriate. - -In addition to isolating back-compat code, adding comments can also help identify all places to change when revisiting in the future for cleanup. Using a consistent comment format can make it easier to identify these places. - -```typescript -// back-compat: 0.11 clientType -``` - -The above format contains the breaking version and a brief tag, making it easy to find all references in the code later. Liberally adding these near back-compat code can help with the later cleanup step significantly, as well as concisely give readers of the code insight into why the forked code is there. - -### Track the follow-up work - -It is necessary to track the follow-up work to remove this back-compat code to keep the code pruned. The code complexity will creep up as more back-compat code comes in. The strategy is to create a GitHub issue and include information that provides context for the change and makes it easy for someone to clean up in the future. - -### Update the docs - -During the initial change, it is important to make sure the API changes are indicated somewhere in the docs. After making the follow-up change to remove the backwards compatible code, it should be documented in the [BREAKING.md](./breaking-changes.md) file so that it is clear that it will break.
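As a concrete sketch of these guidelines, the forked logic can live in one clearly tagged helper. The API shapes below are hypothetical illustrations, not real Fluid interfaces:

```typescript
// Hypothetical example: a newer API renamed `clientType` to `details.type`.
// The back-compat shim is isolated in one function and tagged with the
// consistent comment format so it is easy to find and remove later.
interface ConnectionDetailsNew { details: { type: string } }
interface ConnectionDetailsOld { clientType: string }
type ConnectionDetails = ConnectionDetailsNew | ConnectionDetailsOld;

// back-compat: 0.11 clientType
function getClientTypeBackCompat(conn: ConnectionDetails): string {
  if ("clientType" in conn) {
    return conn.clientType; // old (pre-0.11) shape
  }
  return conn.details.type; // current shape
}
```

Callers use only `getClientTypeBackCompat()`, so when the old shape is no longer supported the shim and its tag can be deleted in one place.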
diff --git a/docs/content/docs/deep/container-and-component-loading.md b/docs/content/docs/deep/container-and-component-loading.md deleted file mode 100644 index 3e3040814578..000000000000 --- a/docs/content/docs/deep/container-and-component-loading.md +++ /dev/null @@ -1,249 +0,0 @@ ---- -title: Container and component loading deep dive -draft: true -status: outdated ---- - -This doc provides an in-depth outline of how Container and Component loading works. It also provides an overview of how Fluid packages are partitioned. While the system is not overly complex, looking at it as a whole can be overwhelming and difficult to reason about. As we go through the doc we will build a clear picture of the entirety of the system. - -If you want to look at the entire system in one picture, see [Appendix 1](#appendix-1) at the bottom of the doc. - -The complete loading flow in Fluid can follow multiple paths, and this can create complexities when attempting to explain the flow in a single document. For simplicity, this document follows the *Create from Existing* flow with minor notes about how the *Create New* flow differs. - -It should also be noted that this doc contains intentional simplifications. So, while this document attempts to provide a detailed representation of the loading flow, there may be areas where it does not 100% reflect the truth. Hopefully, these simplifications are negligible and help provide clarity. But if you find any of the simplifications particularly misleading, please point them out. - -If you see a bolded number, e.g. **(2)**, it represents a line in the diagram. This number will be in the next diagram as well as the finished diagram in [Appendix 1](#appendix-1). - -Finally, as you read through this doc you will find yourself having lots of questions. This is good, and intentional! Keep reading as it's likely explained later. - -## Loading flow - -The Hosting Application is a webpage that loads a Fluid container.
This has also been referred to as a "Fluid Enlightened Canvas" and currently includes the Fluid preview app, Teams, Outlook, and a handful of others. To load any Fluid container, the Hosting Application needs the Fluid Loader Package. This is a small package whose only responsibility is to load Fluid containers. The Fluid Loader has no knowledge of the `ContainerRuntime` or `Component`-specific code. - -The `Loader` object has a method `resolve(...)` **(1)** that can load a `Container` when provided the following: - -- `url` to Operation Stream (op stream) -- `Driver` **(1.1)** - used for talking to the Fluid Server -- `CodeLoader` **(1.2)** - used for resolving the `ContainerRuntime` code - -![Image 1](/images/container-and-component-loading-1.jpg) - -In the case of resolving a `Container` that has not been loaded locally, the `Loader` will create a new `Container` object **(2)**. - -![Image 2](/images/container-and-component-loading-2.jpg) - -The `Container` will use the provided `url` and `Driver` to connect to, and start processing, the op stream **(3)**. - -::: tip - -The Operation Stream (op stream) is how Fluid stores state. State, including connected clients, the code to load, as well as distributed data structure modifications, are stored as a series of operations that, when played in order, produce the current state. I don't go into further details about it here. - -::: - -Connecting and processing the op stream includes: - -- Getting the Summary -- Establishing the Websocket connection -- Retrieving any missing ops from the REST endpoint - -The `Driver` is responsible for taking the requests above **(3)** and transforming them into requests that the Fluid Server understands **(3.1)**. - -The Fluid Core (`Loader` + `Runtime`) is agnostic to how the Fluid Server is implemented. It instead uses a `Driver` model to allow for different servers to optimize for their own infrastructure.
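The driver boundary described above can be sketched as a thin translation layer. The interface and URL layout below are simplified assumptions for illustration, not the real driver contracts:

```typescript
// Sketch: a Driver translates generic op-stream requests from the Fluid
// Core into server-specific URLs. Real drivers also manage websockets,
// tokens, and snapshot reads; this mock only shows the translation idea.
interface OpRangeRequest { from: number; to: number }

interface DriverLike {
  // Build the server-specific REST url for fetching a range of ops.
  opsUrl(req: OpRangeRequest): string;
}

// A driver for a hypothetical REST server layout.
class DemoRestDriver implements DriverLike {
  constructor(private readonly baseUrl: string, private readonly docId: string) {}
  opsUrl(req: OpRangeRequest): string {
    return `${this.baseUrl}/documents/${this.docId}/deltas?from=${req.from}&to=${req.to}`;
  }
}

const driver = new DemoRestDriver("https://fluid.example", "doc-123");
const url = driver.opsUrl({ from: 10, to: 20 });
```

A different server could ship its own `DriverLike` with a completely different URL scheme, and the Fluid Core above it would not change.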
- -![Image 3](/images/container-and-component-loading-3.jpg) - -The `Container` object itself does not actually do much. Once it has established a connection via the `Driver`, its other responsibility is to listen specifically for one event emitted from the `Quorum`. This is the `"code"` proposal. - -::: tip - -The `Quorum` is a special key/value distributed data structure that requires all current members to agree on the value before it is accepted. I don't go into further details about it here. - -::: - -There are a few different ways that the `Container` will get this `"code"` value: - -1. In the *Create New* flow this `"code"` value needs to be proposed by the Hosting Application. Once the value is accepted by everyone connected (only you, the current client, in this case) the `Container` will get the event and have the value. -2. In the *Create from Existing* flow there are two scenarios. - 1. In the *load from Summary flow* the `"code"` value is written into the Summary itself. - 2. In the *load from op stream* flow (no Summary) the `"code"` value will be played as an op. - -In any case, once the `Container` has the `"code"` value, it asks the `CodeLoader` to resolve the proposed code **(4)**. Since the Loader Package does not know anything about the `ContainerRuntime`, or Components, it needs someone to tell it where that code lives. This is the responsibility of the `CodeLoader`. The `CodeLoader` can dynamically pull this code from some source (CDN) or in some cases the code already exists on the page. Either way, the `CodeLoader` needs to include the code on the page and return a pointer to that code to the `Container`. In the Browser, this pointer is an entry point to a webpacked bundle that is usually on the `window` object. In Node.js, it's a pointer to a package.
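A minimal sketch of this code-loading step, with a registry-backed loader standing in for a CDN fetch (the `FluidModule`/`CodeLoaderLike` shapes here are simplified assumptions, not the real interfaces):

```typescript
// Sketch: a CodeLoader that resolves a proposed "code" value to an
// already-bundled module. A real implementation might instead inject a
// <script> tag for a CDN bundle; here the modules are preloaded in a map.
interface FluidModule { fluidExport: unknown }
interface CodeLoaderLike { load(pkg: string): Promise<FluidModule> }

class StaticCodeLoader implements CodeLoaderLike {
  constructor(private readonly registry: Map<string, FluidModule>) {}
  async load(pkg: string): Promise<FluidModule> {
    const mod = this.registry.get(pkg);
    if (mod === undefined) {
      throw new Error(`Unknown code package: ${pkg}`);
    }
    return mod;
  }
}

// The Container would call load() with the accepted "code" proposal,
// then look for the exported `fluidExport` to find the factory.
const loader = new StaticCodeLoader(
  new Map([["@demo/container", { fluidExport: { /* factory */ } }]]),
);
```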
- -![Image 4](/images/container-and-component-loading-4.jpg) - -At this point the `Container` has a pointer to the `ContainerRuntime` code and it uses that code to create a new `ContainerContext` **(5)** and executes the `instantiateRuntime` **(6.1)** on the webpack bundle with the `ContainerContext`. - -The important thing to note here is that up until this point the Hosting Application and Fluid know nothing of the Fluid `ContainerRuntime` or the `Component` code. That code is provided after the `Container` is established and stored in the op stream. **This is powerful because it allows the Hosting Applications to load Containers and Components without knowing the underlying code.** This is how Teams and Outlook can easily load the Fluid preview app `Container` and `Components`. - -![Image 5](/images/container-and-component-loading-5.jpg) - -The implementer of `instantiateRuntime` is what we refer to as a "Container Developer". As you can see, the term is slightly overloaded since they are not actually writing the `Container` object, but simply implementing a function. This function lives on the `IContainerFactory` interface and the `Container` specifically looks for an exported `fluidExport` variable within the webpack bundle to find the Container Factory. - -The `instantiateRuntime` function can perform any number of functions but has become primarily responsible for **(6.2)**: - -1. Creating the `ContainerRuntime` object -2. Setting the `request` handler on the `ContainerRuntime` - - The `request` handlers are used to route requests through the `Container` (more on this later) - - The primary use is to get Components -3. Providing a `ComponentRegistry` of Component Factories to the `ContainerRuntime` - - The `ComponentRegistry` is a `Map<string, Promise<IComponentFactory>>` - - Defines what Components can be created in the `Container` -4.
Creating the default `Component` - -![Image 6](/images/container-and-component-loading-6.jpg) - -Containers can exist without Components but they are not very functional. The paradigm we've created is for the -Container Developer (`instantiateRuntime` implementer) to create a default `Component`. The default `Component` is simply -the first `Component` in the `Container`. Having a default `Component` allows the Hosting Application to make a `request` -against the `Container` asking for the default `Component` without knowing what the default `Component` is (more on -this later). - -The default `Component` is created the same as every other `Component`. The only difference is that it is the first -`Component` and created in the `instantiateRuntime` call as opposed to being created by another `Component` (also more on -this later). - -A `Component` is created by calling `createComponent("packageName")` on the `ContainerRuntime` **(6.2)**. The -`ContainerRuntime` uses its `ComponentRegistry` to look for the entry of `"packageName"`. When it's found it creates a -`ComponentContext` **(7)** and executes the corresponding `instantiateComponent` with the `ComponentContext` **(8.1)**. - -![Image 7](/images/container-and-component-loading-7.jpg) - -You might notice a lot of similarities between the `ContainerRuntime` creation flow and the `ComponentRuntime` -creation flow. - -In the `instantiateComponent` call **(8.1)** the following is performed: - -1. `ComponentRuntime` object is created **(8.2)** -2. Sets the `request` handler on the `ComponentRuntime` **(8.2)** - - Requests that are sent to the `ComponentRuntime` are proxied to the `Component` object (more on this later) -3. Provides a registry of Distributed Data Structures (DDS) / Sub-Component factories to the `ComponentRuntime` **(8.2)** - - This can be used to create new DDSs - - This can be used to create new Components that are not defined in the `ContainerRegistry` -4. 
Create the `Component` object **(8.3)** - -![Image 8](/images/container-and-component-loading-8.jpg) - -The `Component`, and the `instantiateComponent`, are what a "Component Developer" writes and contain all the business-specific logic. In most cases the `instantiateComponent` call will provide the `Component` with references to the `ComponentContext` **(8.3.1)**, and the `ComponentRuntime` **(8.3.2)** it created. - -The `Component` should use the `ComponentContext` to talk upwards to the `ContainerRuntime` **(8.3.1)**, and should use the `ComponentRuntime` to manage its own Fluid state, mainly by creating DDSs **(8.3.2)**. - -![Image 9](/images/container-and-component-loading-9.jpg) - -The `Component` will use the DDS objects directly **(9.1)** and will persist/`attach` them using the `ComponentRuntime` **(9.2)**. When storing a DDS `handle` on another already attached DDS, the `ComponentRuntime` will automatically `attach` the new DDS. - -::: tip - -`attach` sends an op on the op stream that persists the DDS and notifies all users it is live for editing. More on this in [Anatomy of a Distributed Data Structure](./dds-anatomy) - -::: - -![Image 10](/images/container-and-component-loading-10.jpg) - -At this point you might have noticed that the `ComponentRuntime` does not actually know anything about the `Component` object itself. In the `ContainerRuntime` all `ComponentRuntimes` are treated equally without hierarchy. But then how do Components interact with each other? - -Components can create and hold references to other Components in the `ContainerRuntime`. The same way `instantiateRuntime` created the default `Component`, the default `Component` can use `createComponent` on its `ComponentContext` **(8.3.1)** to create a second `Component`.
- -Calling `createComponent` causes the `ContainerRuntime` to look at the `ComponentRegistry`, create a second `ComponentContext` **(7)**, which will call a new `instantiateComponent` **(8.1)**, which will create a second `ComponentRuntime` **(8.2)**, and a second `Component` object **(8.3)**. - -![Appendix 1](/images/container-and-component-loading-11.jpg) - -Great! Now we've loaded our entire `Container` plus our two Components. But we don't actually have anything rendered on the page. All these objects just exist in memory. - -## Requesting and Routing - -### Loading the Default Component - -In the most basic case of rendering the default `Component`, the Hosting Application will make a `request` against the `Container` object. This `request` will look something like `container.request({ url: "/" });` where the `"/"` denotes the default component. - -We've talked briefly about setting `request` handlers in the `instantiateRuntime` and `instantiateComponent` sections above. Now we have a request on the `Container` object. But the `Container` doesn't know how to handle this `request`, so it forwards the `request` to the `ContainerContext` **(5)**. The `ContainerContext` doesn't know how to handle it either, so it forwards the `request` to the `ContainerRuntime` **(6)**. - -In our `instantiateRuntime` we set a specific `request` handler on the `ContainerRuntime` that says if someone asks for `"/"` we will return the default `Component` we've created. So the `ContainerRuntime` finds the `ComponentContext` relating to the default `Component` and forwards the `request` there **(7)**. The `ComponentContext` doesn't know how to handle the request, so it forwards the request to the `ComponentRuntime` **(8)**. - -Now in our `instantiateComponent` for the default `Component` we set a specific `request` handler that says any request sent to this `ComponentRuntime` should be forwarded to the `Component` object itself.
So the `ComponentRuntime` forwards the
request to the `Component` **(8.3.2)**. Finally, in the `Component` we've set a `request` handler that returns the
`Component` itself for any `request` it receives.

So by requesting `"/"`, the Hosting Application has retrieved the default `Component` object.

The Hosting Application can now use Fluid's [feature detection
mechanism](./components.md#feature-detection-and-delegation) to check whether the `Component` it got supports a view, by
checking `component.IComponentHTMLView` and calling `render(...)` if `IComponentHTMLView` returns an object.

That was a lot to unpack, and don't worry if it feels overwhelming. The overall principle of the
request pattern is that requests are delegated through the system to the place where they are meant to go.

### Loading a Component from a Component

This flow works the same as for the default `Component` above, except that the loading `Component` has to be explicit
about the id of the loaded `Component`.

In the scenario below we have Component1 attempting to get Component2.

Instead of calling the `Container`, Component1 calls `context.request({ url: "/component-2-unique-id" });` on its
`ComponentContext` **(8.3.1)**. You can see we are not just requesting `"/"` but the
direct id of Component2, `"/component-2-unique-id"`. The `ContainerRuntime` handler that we set will use the id to look
up the `ComponentContext` of Component2 and forward the request there **(7)**. The
`ComponentContext` will forward to the `ComponentRuntime` **(8)** of Component2, which will forward to the `Component`
object of Component2 **(8.3.2)**. The `Component` object will return itself, and now Component1 has a reference to
Component2.
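The delegation chain just described can be sketched in plain TypeScript. Everything below is a hypothetical
simplification, not the Fluid API; the point is only that each layer either answers a request itself or forwards it to
the next layer down.

```typescript
// Minimal shapes standing in for the real Fluid request/response types.
interface IRequest { url: string; }
interface IResponse { status: number; value?: unknown; }
type RequestHandler = (request: IRequest) => IResponse | undefined;

// Each layer tries its own handler first, then forwards to the next layer.
function makeLayer(handler: RequestHandler, next?: (r: IRequest) => IResponse) {
    return (request: IRequest): IResponse => {
        const response = handler(request);
        if (response !== undefined) { return response; }
        if (next !== undefined) { return next(request); }
        return { status: 404 };
    };
}

// The innermost layer: the Component returns itself for any request.
const component = { name: "defaultComponent" };
const componentLayer = makeLayer(() => ({ status: 200, value: component }));

// The ContainerRuntime only knows how to route "/" toward the default component.
const runtimeLayer = makeLayer(
    (r) => (r.url === "/" ? undefined : { status: 404 }),
    componentLayer,
);

// The Container and ContainerContext know nothing and just forward.
const containerLayer = makeLayer(() => undefined, runtimeLayer);

const result = containerLayer({ url: "/" });
// result.status === 200 and result.value is the default component object.
```

Requesting an unknown URL falls through every handler and comes back as a 404, which mirrors how an unroutable request
behaves in the flow described above.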
- - -## Appendix 1 - -![Appendix 1](/images/container-and-component-loading-11.jpg) diff --git a/docs/content/docs/deep/containers-runtime.md b/docs/content/docs/deep/containers-runtime.md deleted file mode 100644 index 0c7a6394f6ed..000000000000 --- a/docs/content/docs/deep/containers-runtime.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -title: "Containers and the container runtime" -menuPosition: 5 -aliases: - - "/docs/concepts/containers-runtime" -draft: true ---- - -A **Fluid container** is a foundational concept for creating anything with the Fluid Framework. All of the sample Fluid -applications use a Fluid container to manage the user experience, app logic, and app state. - -However, a Fluid container is *not* a standalone application. A Fluid container is a *code-plus-data package*. A -container must be loaded by the Fluid loader and connected to a Fluid service. - -Because containers are such a core concept, we'll look at them from a few different angles. - -## Container vs Runtime - -A Fluid container is the instantiated container JavaScript object, but it's also the definition of the container. We -interchangeably use "container" to refer to the class, which can create new objects, and the instantiated object itself. - -The `ContainerRuntime` refers to the inner mechanics of the Fluid container. As a developer you will interact with the -runtime through the runtime methods that expose useful properties of the instantiated container object. - -## What is a Fluid container? - -A Fluid container is a code-plus-data package. A container includes at least one shared object for app logic, but -often multiple shared objects are composed together to create the overall experience. - -From the Fluid service perspective, the container is the atomic unit of Fluid. The service does not know about anything -inside of a Fluid container. - -That being said, app logic is handled by Data Objects and state is handled by the distributed data structures within -the Data Objects. 

## What does the Fluid container do?

The Fluid container [processes and distributes operations](./hosts), manages the [lifecycle of Fluid
objects](./dataobject-aqueduct), and provides a request API for accessing shared objects.

### Process and distribute operations

When the Fluid loader resolves the Fluid container, it passes the container a group of service drivers. These drivers
are the **DeltaConnection**, **DeltaStorageService**, and **DocumentStorageService**.

The Fluid container includes code to process the operations from the DeltaConnection, catch up on missed operations
using the DeltaStorageService, and create or fetch summaries from the DocumentStorageService. Each of these is
important, but the most critical is the op processing.

The Fluid container is responsible for passing operations to the relevant distributed data structures and Data Objects.

### Manage shared object lifecycle

The container provides a `createDataStore` method to create new data stores. The container is responsible for
instantiating the shared objects and creating the operations that let other connected clients know about the new Fluid
object.

### Using a Fluid container: the Request API

You interact with a Fluid container through the request paradigm. While Aqueduct creates a default request handler
that returns the default Data Object, the request paradigm is a powerful pattern that lets developers create custom
logic.

To retrieve the default data store, you can perform a request on the container. Similar to the [loaders API](./hosts.md),
this will return a status code and the default data store.
- -```ts -container.request({url: "/"}) -``` diff --git a/docs/content/docs/deep/custom-dds.md b/docs/content/docs/deep/custom-dds.md deleted file mode 100644 index 25b341a2a977..000000000000 --- a/docs/content/docs/deep/custom-dds.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: Build a custom distributed data structure -menuPosition: 9 -status: unwritten -draft: true ---- diff --git a/docs/content/docs/deep/dataobject-aqueduct.md b/docs/content/docs/deep/dataobject-aqueduct.md deleted file mode 100644 index e5a59cd30fbc..000000000000 --- a/docs/content/docs/deep/dataobject-aqueduct.md +++ /dev/null @@ -1,258 +0,0 @@ ---- -title: Encapsulating data with DataObject -menuPosition: 7 -aliases: - - "/docs/concepts/dataobject-aqueduct/" -draft: true ---- - - - -In the previous section we introduced distributed data structures and demonstrated how to use them. We'll now discuss -how to combine those distributed data structures with custom code (business logic) to create modular, reusable pieces. - - - - - - - - -![Aqueduct](https://publicdomainvectors.org/photos/johnny-automatic-Roman-aqueducts.png) - -The Aqueduct is a library for building Fluid objects and Fluid containers within the Fluid Framework. Its goal is to -provide a thin base layer over the existing Fluid Framework interfaces that allows developers to get started quickly. - -## Fluid object development - -Fluid object development consists of developing the data object and the corresponding data object factory. The data -object defines the logic of your Fluid object, whereas the data object factory defines how to initialize your object. - -## Data object development - -`DataObject` and `PureDataObject` are the two base classes provided by the library. - -### DataObject - -The [DataObject][] class extends [PureDataObject](#puredataobject) and provides the following additional functionality: - -- A `root` SharedDirectory that makes creating and storing distributed data structures and objects easy. 
- Blob storage implementation that makes it easier to store and retrieve blobs.

**Note:** Most developers will want to use the `DataObject` as their base class to extend.

### PureDataObject

[PureDataObject][] provides the following functionality:

- Basic set of interface implementations to be loadable in a Fluid container.
- Functions for managing the Fluid object lifecycle.
  - `initializingFirstTime(props: S)` - called only the first time a Fluid object is initialized and only on the first
    client on which it loads.
  - `initializingFromExisting()` - called every time except the first time a Fluid object is initialized; that is, every
    time an instance is loaded from a previously created instance.
  - `hasInitialized()` - called every time after `initializingFirstTime` or `initializingFromExisting` executes.
- Helper functions for creating and getting other data objects in the same container.

**Note:** You probably don't want to inherit from this data object directly unless you are creating another base data
object class. If you have a data object that doesn't use distributed data structures, you should use Container Services
to manage your object.

### DataObject example

In the example below we have a simple data object, _Clicker_, that renders a value alongside a button on the page.
Every time the button is pressed the value increments. Because this data object renders to the DOM it also implements
`IFluidHTMLView`.

```jsx
export class Clicker extends DataObject implements IFluidHTMLView {
    public static get Name() { return "clicker"; }

    public get IFluidHTMLView() { return this; }

    private _counter: SharedCounter | undefined;

    protected async initializingFirstTime() {
        const counter = SharedCounter.create(this.runtime);
        this.root.set("clicks", counter.handle);
    }

    protected async hasInitialized() {
        const counterHandle = this.root.get<IFluidHandle<SharedCounter>>("clicks");
        this._counter = await counterHandle.get();
    }

    public render(div: HTMLElement) {
        // CounterReactView is a React component (defined elsewhere in the sample)
        // that renders the counter value and an increment button.
        ReactDOM.render(
            <CounterReactView counter={this.counter} />,
            div,
        );
        return div;
    }

    private get counter() {
        if (this._counter === undefined) {
            throw new Error("SharedCounter not initialized");
        }
        return this._counter;
    }
}
```

## DataObjectFactory development

The `DataObjectFactory` is used to create a Fluid object and to initialize a data object within the context of a
Container. The factory can live alongside a data object or within a different package. The `DataObjectFactory` defines
the distributed data structures used within the data object as well as any Fluid objects it depends on.

The Aqueduct offers a factory for each of the data objects provided.

### More details

- [DataObjectFactory][]
- [PureDataObjectFactory][]

### DataObjectFactory example

In the example below we build a `DataObjectFactory` for the [Clicker](#dataobject-example) example above. To build a
`DataObjectFactory`, we need to provide factories for the distributed data structures we are using inside of our
`DataObject`. In the above example we store a handle to a `SharedCounter` in `this.root` to track our `"clicks"`. The
`DataObject` comes with the `SharedDirectory` (`this.root`) already initialized, so we just need to add the factory for
`SharedCounter`.

```typescript
export const ClickerInstantiationFactory = new DataObjectFactory(
    Clicker.Name,
    Clicker,
    [SharedCounter.getFactory()], // distributed data structures
    {}, // Provider symbols (see below)
);
```

This factory can then create Clickers when provided a creating instance context.

```typescript
const myClicker = ClickerInstantiationFactory.createInstance(this.context) as Clicker;
```

### Providers in data objects

The `this.providers` object on `PureDataObject` is initialized in the constructor and is generated based on Providers
provided by the Container. To access a specific provider you need to:

1. Define the type in the generic on `PureDataObject`/`DataObject`
2. Add the symbol to your factory (see the [DataObjectFactory example](#dataobjectfactory-example) above)

In the example below we have an `IFluidUserInfo` interface that looks like this:

```typescript
interface IFluidUserInfo {
    readonly userCount: number;
}
```

In our example we want to declare that we want the `IFluidUserInfo` Provider, and to get the `userCount` if the
Container provides the `IFluidUserInfo` provider.

```typescript
export class MyExample extends DataObject {
    protected async initializingFirstTime() {
        const userInfo = await this.providers.IFluidUserInfo;
        if (userInfo) {
            console.log(userInfo.userCount);
        }
    }
}

// Note: we have to provide the symbol for the IFluidUserInfo provider we declared
// above. This is compile-time checked.
export const ClickerInstantiationFactory = new DataObjectFactory(
    Clicker.Name,
    Clicker,
    [], // distributed data structures
    {IFluidUserInfo}, // Provider symbols
);
```

## Container development

A Container is a collection of data objects and functionality that together produce an experience. Containers hold the
instances of data objects and define which data objects can be created within the Container.
Because of this, data objects can only be consumed from within a Container.

The Aqueduct library provides the [ContainerRuntimeFactoryWithDefaultDataStore][] that enables you as a container
developer to:

- Define the registry of data objects that can be created
- Declare the default data object
- Use provider entries
- Declare Container level [Request Handlers](#container-level-request-handlers)

## Container object example

In the example below we write a Container that exposes the above [Clicker](#dataobject-example) using the
[Clicker Factory](#dataobjectfactory-example). You will notice below that the Container developer defines the
registry name (data object type) of the Fluid object. We also pass in the type of data object we want to be the default.
The default data object is created the first time the Container is created.

```typescript
export const fluidExport = new ContainerRuntimeFactoryWithDefaultDataStore(
    ClickerInstantiationFactory.type, // Default data object type
    ClickerInstantiationFactory.registryEntry, // Fluid object registry
    [], // Provider Entries
    [], // Request Handler Routes
);
```

## Container-level request handlers

You can provide custom request handlers to the container. These request handlers are injected after system handlers but
before the `DataObject` get function. Request handlers allow you to intercept requests made to the container and return
custom responses.

Consider a scenario where you want to create a random color generator. You could create a request handler that
intercepts requests to the Container for `{url:"color"}` and returns a custom `IResponse` such as
`{ status:200, type:"text/plain", value:"blue"}`.

We use custom handlers to build the Container Services pattern.
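The color-generator scenario can be sketched with self-contained types. The shapes below are hypothetical and do not
match the real Aqueduct handler signatures; they only illustrate the "first handler to respond wins" behavior.

```typescript
// Hypothetical request/response shapes for illustration only.
interface IRequest { url: string; }
interface IResponse { status: number; type: string; value: unknown; }
type RequestHandler = (request: IRequest) => IResponse | undefined;

// A handler that intercepts requests for "color" and returns a custom response;
// any other request falls through to the next handler.
const colorHandler: RequestHandler = (request) =>
    request.url === "color"
        ? { status: 200, type: "text/plain", value: "blue" }
        : undefined;

// The container tries each registered handler in order; the first response wins.
function handleRequest(handlers: RequestHandler[], request: IRequest): IResponse {
    for (const handler of handlers) {
        const response = handler(request);
        if (response !== undefined) { return response; }
    }
    return { status: 404, type: "text/plain", value: `${request.url} not found` };
}

const handlers = [colorHandler];
```

With this in place, `handleRequest(handlers, { url: "color" })` produces the custom response, while any other URL falls
through to the 404 default, which is the interception behavior described above.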
- - - - - - - - - - - - - - -[Fluid container]: {{< relref "containers.md" >}} -[Signals]: {{< relref "/docs/concepts/signals.md" >}} - - - -[SharedCounter]: {{< relref "/docs/data-structures/counter.md" >}} -[SharedMap]: {{< relref "/docs/data-structures/map.md" >}} -[SharedString]: {{< relref "/docs/data-structures/string.md" >}} -[Sequences]: {{< relref "/docs/data-structures/sequences.md" >}} -[SharedTree]: {{< relref "/docs/data-structures/tree.md" >}} - - - -[fluid-framework]: {{< packageref "fluid-framework" "v2" >}} -[@fluidframework/azure-client]: {{< packageref "azure-client" "v2" >}} -[@fluidframework/tinylicious-client]: {{< packageref "tinylicious-client" "v1" >}} -[@fluidframework/odsp-client]: {{< packageref "odsp-client" "v2" >}} - -[AzureClient]: {{< apiref "azure-client" "AzureClient" "class" "v2" >}} -[TinyliciousClient]: {{< apiref "tinylicious-client" "TinyliciousClient" "class" "v1" >}} - -[FluidContainer]: {{< apiref "fluid-static" "IFluidContainer" "interface" "v2" >}} -[IFluidContainer]: {{< apiref "fluid-static" "IFluidContainer" "interface" "v2" >}} - - - - diff --git a/docs/content/docs/deep/dds-anatomy.md b/docs/content/docs/deep/dds-anatomy.md deleted file mode 100644 index 14f036b74ab1..000000000000 --- a/docs/content/docs/deep/dds-anatomy.md +++ /dev/null @@ -1,117 +0,0 @@ ---- -title: Anatomy of a distributed data structure -menuPosition: 8 -draft: true ---- - -Although each distributed data structure (DDS) has its own unique functionality, they all share some broad traits. -Understanding these traits is the first step to understanding how DDSes work. They are: - -1. Local representation -1. Op vocabulary -1. Data serialization format (op) -1. Data serialization format (summary operations) -1. Reaction to remote changes -1. 
Conflict resolution strategies - -## Local representation - -Just like any non-distributed data structure such as JavaScript's Map object, all DDSes must also be accessible on the -client with an in-memory representation via a public API surface. A developer using the DDS operates on and reads from -this in-memory structure similarly to any other non-distributed data structure. The particular format of the data and -functionality of the API will vary between data structures. For example, a SharedMap holds key:value data and provides -interfaces like get and set for reading and updating values in the map. This is very similar to the native -(non-distributed) Map object in JS. - -## Op vocabulary - -As the in-memory representation is modified on one client, we need to notify other clients of the updates. Most DDSes -will have multiple operations that can be performed, so we'll need to differentiate the types of notifications (ops) -we're sending. For example, a SharedMap might be modified through "set", "delete", or "clear". - -These ops will probably correspond loosely with specific APIs on the DDS that cause data modification with the -expectation that there is a 1:1:1 correspondence between that API call on client A, the op that is sent, and the -corresponding update being applied on client B. However, this correspondence is not mandatory. - -## Data serialization format (op) - -Frequently, ops will need to carry a data payload. For example, when performing a "set" on a SharedMap, the new -key:value pair needs to be communicated to other clients. As a result, DDSes will have some serialization format for op -data payloads that can be reconstituted on the receiving end. This is why SharedMap requires its keys to be strings and -values to be serializable - non-serializable keys or values can't be transmitted to other clients. 
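The serializability requirement can be illustrated with a self-contained sketch. The op shape below is hypothetical,
not the real SharedMap wire format; it only shows a "set" op round-tripping through JSON the way an op must survive the
trip to other clients.

```typescript
// Hypothetical op shape for a map-like DDS "set" operation. Keys must be
// strings and values JSON-serializable so the op can cross the wire.
interface ISetOp {
    type: "set";
    key: string;
    value: unknown;
}

const op: ISetOp = { type: "set", key: "title", value: "Hello" };

// Sending client: serialize the op into a wire payload...
const wirePayload = JSON.stringify(op);

// ...receiving client: reconstitute the op and apply it locally.
const received = JSON.parse(wirePayload) as ISetOp;

// A function value, by contrast, is silently dropped by JSON serialization,
// which is why non-serializable values can't be distributed.
const dropped = JSON.parse(JSON.stringify({ callback: () => 1 }));
```

The round-tripped op is structurally identical to the original, while the object holding a function comes back empty.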
- -## Data serialization format (summary operations) - -Although the state of a DDS can be reconstructed by playing back every op that has ever been applied to it, this becomes -inefficient as the number of ops grows. Instead, DDSes should be able to serialize their entire contents into -a format that clients can use to reconstruct the DDS without processing the entire op history. There may be some overlap -with the serialization format used in ops, but it isn't strictly necessary. For instance, the SharedMap uses the same -serialization format for key/value pairs in its summary as it does in its set ops, but the Ink DDS serializes individual -coordinate updates in its ops while serializing entire ink strokes in its summary. - -## Reaction to remote changes - -As compared to their non-distributed counterparts, DDSes can change state without the developer's awareness as remote -ops are received. A standard JS Map will never change values without the local client calling a method on it, but a -SharedMap will, as remote clients modify data. To make the local client aware of the update, DDSes must expose a means -for the local client to observe and respond to these changes. This is typically done through eventing, like the -"valueChanged" event on SharedMap. - -## Conflict resolution strategies - -Data structures must be aware that multiple clients can act on the structure remotely, and the propagation of those -changes take time. It's possible then for a client to make a change to a data structure while unaware of its most-recent -state. The data structure must incorporate strategies for handling these scenarios such that any two clients which have -received the same set of ops will agree on the state. This property is referred to as "eventual consistency" or -"[convergence](https://en.wikipedia.org/wiki/Operational_transformation#The_CC_model)". These strategies may be varied -depending on the specific operation even within a single DDS. 
Some (non-exhaustive) examples of valid strategies:

### Conflict avoidance

Some data structures may not need to worry about conflict because their nature makes it impossible. For instance, the
Counter DDS increment operations can be applied in any order, since the end result of the addition will be the same.
Characteristics of data structures that can take this approach:

1. The data structure somehow ensures no data can be acted upon simultaneously by multiple users (purely additive,
   designated owner, etc.)
1. The order in which actions are taken is either guaranteed (single actor, locking, etc.) or is irrelevant to the
   scenario (incrementing a counter, etc.)

### Last wins

If it's possible to cause conflicts in the data, then a last-wins strategy may be appropriate. This strategy is used by
SharedMap, for example, in the case that multiple clients attempt to set the same key. In this case, clients need to be
aware that their locally applied operations may actually be chronologically before or after unprocessed remote
operations. As remote updates come in, each client needs to update the value to reflect the last (chronologically) set
operation.

### Operational Transform and Intention Preservation

More-advanced DDSes require a more sophisticated conflict resolution strategy to meet user expectations. The general
principle is referred to as [Intention
Preservation](https://en.wikipedia.org/wiki/Operational_transformation#The_CCI_model). For example, the text I insert at
position 23 of a SharedString while a friend deletes at position 12 needs to be transformed to insert at the location
that matches my intention (that is, it remains in the same location relative to the surrounding text, not the numerical
index).

### Consensus and quorum

Some resolution strategies may not be satisfied with eventual consistency, and instead require stronger guarantees
about the global state of the data.
The consensus data structures achieve this by accepting a delay of a roundtrip
to the server before applying any changes locally (thus allowing them to confirm their operation was applied on a
known data state). The quorum offers an even stronger guarantee (with a correspondingly greater delay): the
changes will not be applied until all connected clients have accepted the modification. These delays generally aren't
acceptable for real-time interactivity, but can be useful for scenarios with more lenient performance demands.

## Additional thoughts

1. Strictly speaking, summarization isn't a mandatory requirement of a DDS. If the ops are retained, the DDS can
   be reconstructed from those. However, in practice it is not practical to load from ops alone, as this will
   degrade load time over the lifetime of the DDS.
1. The requirement of "eventual consistency" has some flexibility to it. Discrepancies between clients are allowed as
   long as they don't result in disagreements between clients on the observable state of the data. For example:
   - SharedString can be represented differently across clients in its internal in-memory representation depending on
     op order, but this discrepancy is invisible to the user of the SharedString DDS.
   - SharedMap will raise a different number of valueChanged events across clients when simultaneous sets occur. The
     client that set last will get a single valueChanged event, while earlier setters will get an additional event for
     each set after their own.
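The last-wins convergence described in this article can be sketched as a small simulation. This is plain TypeScript,
not Fluid code: a hypothetical server assigns each "set" op a sequence number, and every client applies ops in that
order, so all clients end up agreeing on the chronologically last set.

```typescript
// Simulation of last-wins conflict resolution: the server's sequence number,
// not local submission order, decides which set wins.
interface SetOp { seq: number; key: string; value: string; }

function applyOps(ops: SetOp[]): Map<string, string> {
    const state = new Map<string, string>();
    // Apply in server (sequence) order regardless of the order ops arrived in.
    for (const op of [...ops].sort((a, b) => a.seq - b.seq)) {
        state.set(op.key, op.value);
    }
    return state;
}

// Client A and client B both set "color" concurrently; the server orders
// A's op (seq 1) before B's op (seq 2).
const ops: SetOp[] = [
    { seq: 2, key: "color", value: "blue" }, // client B
    { seq: 1, key: "color", value: "red" },  // client A
];

const clientA = applyOps(ops);
const clientB = applyOps([...ops].reverse()); // same ops, received in a different order
// Both clients converge on "blue": the last (seq 2) set wins.
```

Because both clients sort by sequence number before applying, the two states agree no matter how the ops were locally
interleaved, which is exactly the eventual-consistency property described above.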
diff --git a/docs/content/docs/deep/feature-detection-iprovide.md b/docs/content/docs/deep/feature-detection-iprovide.md deleted file mode 100644 index f224ab4db0b5..000000000000 --- a/docs/content/docs/deep/feature-detection-iprovide.md +++ /dev/null @@ -1,129 +0,0 @@
---
title: Feature detection via FluidObject
draft: true
status: outdated
aliases:
  - "/docs/advanced/feature-detection-iprovide/"
---

In an earlier section we introduced the Data Object, a convenient way to combine distributed data structures and our own
code (business logic) into a modular, reusable piece. This in turn enables us to modularize pieces of our application --
data included.

Fluid can be a very dynamic system. There are scenarios in which your code will call certain members of an object *if,
and only if*, the object has certain capabilities; that is, it implements certain interfaces. So, your code needs a way
of detecting whether the object implements specific interfaces. To make this easier, Fluid has a feature detection
mechanism, which centers around a special type called `FluidObject`. Feature detection is a technique by which one
Data Object can dynamically determine the capabilities of another Data Object.

In order to detect features supported by an unknown object, you cast it to a `FluidObject` and then query the object
for a specific interface that it may support. The interfaces available via `FluidObject` include many core Fluid
interfaces, such as `IFluidHandle` or `IFluidLoadable`. This
discovery system (see example below) enables any Data Object to record what interfaces it implements and make it
possible for other Data Objects to discover them. The specifics of how these interfaces are declared are not relevant
until you want to define your own interfaces, which we'll cover in a later section.
- -The following is an example of feature detection using `FluidObject`: - -```typescript -const anUnknownObject = anyObject as FluidObject; - -// Query the object to see if it supports IFluidLoadable -const loadable = anUnknownObject.IFluidLoadable; // loadable: IFluidLoadable | undefined - -if (loadable) { // or if (loadable !== undefined) - // It does! Now we know definitively that loadable's type is IFluidLoadable and we can safely call a method - await loadable.method(); -} -``` - -Note the `anUnknownObject.IFluidLoadable` expression and the types of the objects. If the object supports IFluidLoadable, -then an IFluidLoadable will be returned; otherwise, `undefined` will be returned. - - -## Delegation and the *IProvide* pattern - -In the example above, `fluidObject.IFluidLoadable` is a *property* that is of type IFluidLoadable. `fluidObject` itself -need not implement IFluidLoadable. Rather, it must *provide* an implementation of IFluidLoadable. We call this -*delegation* -- `fluidObject.IFluidLoadable` may return `fluidObject` itself in its implementation, or it may delegate by -returning another object that implements IFluidLoadable. - -If you search through the Fluid Framework code, you'll notice that many interfaces come in pairs, such as -`IFluidLoadable` and `IProvideFluidLoadable`. `IProvideFluidLoadable` is defined as follows: - -```typescript -export interface IProvideFluidLoadable { - readonly IFluidLoadable: IFluidLoadable; -} -``` - -We call this the *IProvide pattern*. This interface definition means that if we have an `IProvideFluidLoadable`, we may -call `.IFluidLoadable` on it and get an `IFluidLoadable` back -- which is what we did in the code sample above. - -As mentioned earlier, an object that implements IFluidLoadable may choose to return itself. This is quite common in -practice and is facilitated by the following convention: `IFluidFoo extends IProvideFluidFoo`. 
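The convention can be shown with a fully self-contained sketch. `IFluidFoo` here is the hypothetical interface pair
from the convention above, not a real Fluid interface; because `IFluidFoo extends IProvideFluidFoo`, an object that
implements the interface can satisfy the provider property by returning itself.

```typescript
// Hypothetical IFluidFoo/IProvideFluidFoo pair illustrating the IProvide pattern.
interface IProvideFluidFoo {
    readonly IFluidFoo: IFluidFoo;
}

// Because IFluidFoo extends IProvideFluidFoo, any IFluidFoo can itself be
// queried for .IFluidFoo.
interface IFluidFoo extends IProvideFluidFoo {
    foo(): string;
}

class Foo implements IFluidFoo {
    // Implements IProvideFluidFoo by returning itself.
    public get IFluidFoo(): IFluidFoo { return this; }
    public foo(): string { return "foo"; }
}

// Feature detection: treat the object as a partial provider and check the property.
const maybeFoo: Partial<IProvideFluidFoo> = new Foo();
const foo = maybeFoo.IFluidFoo; // IFluidFoo | undefined
```

If the object did not provide `IFluidFoo`, the property access would simply yield `undefined`, which is the same check
the `IFluidLoadable` example below performs.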
- -Returning to our `IFluidLoadable` example: - -```typescript -export interface IFluidLoadable extends IProvideFluidLoadable { - ... -} -``` - -The following example shows how a class may implement the IProvide* interfaces two different ways: - -```typescript -export abstract class PureDataObject<...> - extends ... - implements IFluidLoadable, IFluidRouter, IProvideFluidHandle -{ - ... - private readonly innerHandle: IFluidHandle; - ... - public get IFluidLoadable() { return this; } - public get IFluidHandle() { return this.innerHandle; } -``` - -`PureDataObject` implements `IProvideFluidLoadable` via `IFluidLoadable`, and thus simply returns `this` in that case. -But for `IProvideFluidHandle`, it delegates to a private member. The caller does not need to know how the property is -implemented -- it simply asks for `fluidObject.IFluidLoadable` or `fluidObject.IFluidHandle` and either gets back an -object of the correct type or `undefined`. - - - - - - - - - - - -[Fluid container]: {{< relref "containers.md" >}} -[Signals]: {{< relref "/docs/concepts/signals.md" >}} - - - -[SharedCounter]: {{< relref "/docs/data-structures/counter.md" >}} -[SharedMap]: {{< relref "/docs/data-structures/map.md" >}} -[SharedString]: {{< relref "/docs/data-structures/string.md" >}} -[Sequences]: {{< relref "/docs/data-structures/sequences.md" >}} -[SharedTree]: {{< relref "/docs/data-structures/tree.md" >}} - - - -[fluid-framework]: {{< packageref "fluid-framework" "v2" >}} -[@fluidframework/azure-client]: {{< packageref "azure-client" "v2" >}} -[@fluidframework/tinylicious-client]: {{< packageref "tinylicious-client" "v1" >}} -[@fluidframework/odsp-client]: {{< packageref "odsp-client" "v2" >}} - -[AzureClient]: {{< apiref "azure-client" "AzureClient" "class" "v2" >}} -[TinyliciousClient]: {{< apiref "tinylicious-client" "TinyliciousClient" "class" "v1" >}} - -[FluidContainer]: {{< apiref "fluid-static" "IFluidContainer" "interface" "v2" >}} -[IFluidContainer]: {{< apiref 
"fluid-static" "IFluidContainer" "interface" "v2" >}} - - - - diff --git a/docs/content/docs/deep/grouped-ops.md b/docs/content/docs/deep/grouped-ops.md deleted file mode 100644 index 64b044d34960..000000000000 --- a/docs/content/docs/deep/grouped-ops.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Grouped Ops -menuPosition: 6 -status: unwritten -discussion: 5468 -aliases: - - "/docs/advanced/grouped-ops/" -draft: true ---- - -Grouped ops provide a guarantee that all ops within a group will be ordered as a whole group. - -This is not the same as atomicity, and we need to explain that. diff --git a/docs/content/docs/deep/hosts.md b/docs/content/docs/deep/hosts.md deleted file mode 100644 index 5053330d44a5..000000000000 --- a/docs/content/docs/deep/hosts.md +++ /dev/null @@ -1,155 +0,0 @@ ---- -title: Hosts and the loader -menuPosition: 4 -aliases: - - "/docs/concepts/hosts" -draft: true ---- - -The **Fluid loader** is one of the key parts of the Fluid Framework. Developers use the Fluid loader within their -applications to load Fluid containers and to initiate communication with the Fluid service. - -A **Fluid host** is any application that uses the Fluid loader to load a Fluid container. - -The Fluid loader uses a plugin model. - - -## Who needs a Fluid loader? - -If your app or website will load a Fluid container, then you are creating a Fluid host and you will need to use the -Fluid loader! - -If you are building a Fluid container and you will not build a standalone application with Fluid, you may still be -interested in learning about the Fluid loader. The Fluid loader includes capabilities, such as host scopes, that are used -by containers. - -You may also want to host your Fluid container on a standalone website. - - -## Summary - -The Fluid loader loads Fluid containers by connecting to the Fluid service and fetching Fluid container code. From a -system architecture perspective, the Fluid loader sits in between the Fluid service and a Fluid container. 
- -The Fluid architecture consists of a client and service. The
-client contains the Fluid loader and the Fluid container. The Fluid loader contains a document service factory, code
-loader, scopes, and a URL resolver. The Fluid runtime is encapsulated within a container, which is built using Fluid
objects and distributed data structures.

The Fluid loader is intended to be extremely generic. To maintain this generality, the loader uses a plugin model. With
the right plugins (drivers, handlers, resolvers), the Fluid loader will work for any wire protocol and any service
implementation.

The loader mimics existing web protocols. Similar to how the browser requests state and app logic (a website) from a
web server, a Fluid host uses the loader to request a [Fluid container][] from the Fluid service.

## Fluid host responsibilities

A Fluid host creates a Fluid loader with a URL resolver, Fluid service driver, and code loader. The host then requests a
Fluid container from the loader. Finally, the host *does something* with the Fluid containers. A host can request
multiple containers from the loader.

The Fluid loader connects to a URL using a URL resolver, a
service driver, and a container code loader. It then returns a Fluid container or shared object.

We'll talk about each of these parts, starting with the request and the loader's dependencies, over the next sections.

## Loading a container: class by class

Let's address the role of each part of the Fluid loader and dive into some details.

### Request

The request includes a Fluid container URL and optional header information. This URL contains a protocol and other
information that will be parsed by the URL resolver to identify where the container is located.

The request is not part of instantiating the loader; it kicks off the process of loading a container.

### URL resolver

The URL resolver parses a request and returns an `IFluidResolvedUrl`. This object includes all the endpoints and tokens
needed by the Fluid service driver to access the container.

An example `IFluidResolvedUrl` includes the information below.

```typescript
const resolvedUrl: IFluidResolvedUrl = {
  endpoints: {
    deltaStorageUrl: "www.ContosoFluidService.com/deltaStorage",
    ordererUrl: "www.ContosoFluidService.com/orderer",
    storageUrl: "www.ContosoFluidService.com/storage",
  },
  tokens: { jwt: "token" },
  type: "fluid",
  url: "https://www.ContosoFluidService.com/ContosoTenant/documentIdentifier",
}
```

You may notice we are mimicking the DNS and protocol lookup a browser performs when loading a webpage. That's because a
loader may access containers stored on multiple Fluid services. Furthermore, each Fluid service could be operating with
a different API and protocol.

### Fluid service driver factory (DocumentServiceFactory)

The loader uses a Fluid service driver to connect to a Fluid service.

While many developers will only load one container at a time, it's interesting to consider how the loader handles
loading two containers that are stored on different Fluid services. To keep track of the services, the loader uses the
protocol from the resolved URL to identify the correct Fluid service driver for the Fluid service.

### Code loader

The loader uses the code loader to fetch container code. Because a Fluid container is app logic and distributed state,
all of the connected clients need to agree on the same container code.

### Scopes

Scopes allow the container access to resources from the host. For example, the host may have access to an authorization
context that the container code is not trusted to access. The host can provide a scope to the container that federates
access to the secure resource.

## Handling the response

The Fluid loader will return a response object from the request. Continuing our web protocol metaphor: you'll receive
an object with a mimeType (e.g. "fluid/object"), a response status (e.g. 200), and a value (e.g. the Fluid object).

The host is responsible for checking that this response is valid. Did the loader return a 200? Is the mimeType correct?
As the Fluid Framework expands, we intend to make further use of these responses.
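The validity check described above can be sketched as follows; the `FluidResponse` interface and `isValidFluidResponse` helper are hypothetical names mirroring the described contract, not part of the framework's API:

```typescript
// Hypothetical response shape, mirroring the web-protocol metaphor above:
// a status code, a mimeType, and the returned value.
interface FluidResponse {
  status: number;
  mimeType: string;
  value: unknown;
}

// The host-side check: did the loader return a 200, and is the mimeType
// the one we expect for a Fluid object?
function isValidFluidResponse(response: FluidResponse): boolean {
  return response.status === 200 && response.mimeType === "fluid/object";
}

const loaded: FluidResponse = { status: 200, mimeType: "fluid/object", value: {} };
const missing: FluidResponse = { status: 404, mimeType: "text/plain", value: undefined };
console.log(isValidFluidResponse(loaded));  // true
console.log(isValidFluidResponse(missing)); // false
```

A real host would branch on the failure cases (retry, surface an error UI) rather than simply rejecting the response.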
- - - - - - - - - - -[Fluid container]: {{< relref "containers.md" >}} -[Signals]: {{< relref "/docs/concepts/signals.md" >}} - - - -[SharedCounter]: {{< relref "/docs/data-structures/counter.md" >}} -[SharedMap]: {{< relref "/docs/data-structures/map.md" >}} -[SharedString]: {{< relref "/docs/data-structures/string.md" >}} -[Sequences]: {{< relref "/docs/data-structures/sequences.md" >}} -[SharedTree]: {{< relref "/docs/data-structures/tree.md" >}} - - - -[fluid-framework]: {{< packageref "fluid-framework" "v2" >}} -[@fluidframework/azure-client]: {{< packageref "azure-client" "v2" >}} -[@fluidframework/tinylicious-client]: {{< packageref "tinylicious-client" "v1" >}} -[@fluidframework/odsp-client]: {{< packageref "odsp-client" "v2" >}} - -[AzureClient]: {{< apiref "azure-client" "AzureClient" "class" "v2" >}} -[TinyliciousClient]: {{< apiref "tinylicious-client" "TinyliciousClient" "class" "v1" >}} - -[FluidContainer]: {{< apiref "fluid-static" "IFluidContainer" "interface" "v2" >}} -[IFluidContainer]: {{< apiref "fluid-static" "IFluidContainer" "interface" "v2" >}} - - - - diff --git a/docs/content/docs/deep/service.md b/docs/content/docs/deep/service.md deleted file mode 100644 index 64dbbdaa461e..000000000000 --- a/docs/content/docs/deep/service.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -title: The Fluid service -menuPosition: 3 -aliases: - - "/docs/concepts/service" -draft: true ---- - -The Fluid Framework contains a service component. A reference implementation of a Fluid service called *Routerlicious* is -included in the FluidFramework repo. Note that Routerlicious is one of many Fluid services that could be implemented. -The Fluid Framework uses a loose-coupling architecture for integrating with services, so Fluid is not limited to a single -implementation. - - -## Responsibilities - -Fluid services like Routerlicious have three responsibilities: - -1. **Ordering:** They assign monotonically increasing sequence numbers to incoming operations. -1. 
**Broadcast:** They then broadcast the operations to all connected clients, including their sequence numbers. -1. **Storage:** They're also responsible for storing Fluid data in the form of summary operations. - - -## Ordering and drivers - -The Fluid service ensures that all operations are ordered and also broadcasts the operations to other connected clients. -We sometimes refer to this as "op routing;" this is the source of the name *Routerlicious*. - - -## Summaries - -Summaries are a serialized form of a Fluid document, created by consolidating all operations and serializing the data -model. Summaries are used to improve load performance. When a Fluid document is loaded, the service may send a summary -to the client so that the client does not need to replay all ops locally to get to the current state. - -One of the connected clients is chosen to generate the summary. Once the summary is created it is sent to the service -like any other operation. To learn more about summaries and how they are created, see the [advanced Summarizer -topic]({{< relref "summarizer.md" >}}). - - -## Drivers - -The Fluid Framework uses a loose-coupling architecture for integrating with Fluid services. Drivers are used to abstract -the service-specific behavior. This enables an implementer to use any ordering and storage architecture or technology to -implement the Fluid service. - - -## More information - -You can learn more about Routerlicious, including how to run it using Docker, at -. diff --git a/docs/content/docs/deep/summaryTelemetry.md b/docs/content/docs/deep/summaryTelemetry.md deleted file mode 100644 index 91415f7bf4be..000000000000 --- a/docs/content/docs/deep/summaryTelemetry.md +++ /dev/null @@ -1,388 +0,0 @@ ---- -title: Summary telemetry -menuPosition: 9 -draft: true ---- - - -## Summary Collection - -The core data structure that tracks summary attempts and corresponding results by monitoring the op log. 
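The tracking described above can be sketched with simplified op shapes; the `summarize`/`summaryAck` objects and the `SummaryTracker` class are illustrative assumptions, not the framework's actual op or class definitions:

```typescript
// Simplified op shapes; the real protocol ops carry more fields.
interface SummarizeOp { type: "summarize"; sequenceNumber: number; }
interface SummaryAckOp { type: "summaryAck"; sequenceNumber: number; summarySequenceNumber: number; }
type Op = SummarizeOp | SummaryAckOp;

// Track summary ops observed in the op log and match acks back to them.
class SummaryTracker {
  private readonly summaryOps = new Map<number, SummarizeOp>();

  constructor(private readonly initialSequenceNumber: number) {}

  /** Returns the matched summary op for an ack, or undefined when no
   * corresponding op was observed. */
  public process(op: Op): SummarizeOp | undefined {
    if (op.type === "summarize") {
      this.summaryOps.set(op.sequenceNumber, op);
      return op;
    }
    const match = this.summaryOps.get(op.summarySequenceNumber);
    if (match === undefined && op.summarySequenceNumber >= this.initialSequenceNumber) {
      // Mirrors the SummaryAckWithoutOp error case: an ack whose summary op
      // should have been observed (it is at or after the load point) but wasn't.
      console.warn("summary ack without a corresponding summary op", op);
    }
    return match;
  }
}
```

A real implementation also has to handle nacks and loads from a summary; this sketch only shows the op-to-ack matching.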
- -### SummaryAckWithoutOp - -> Error - -It means that a summary ack was observed without a corresponding summary op. We only raise this event if the missing summary op's sequence number >= the initial sequence number which we loaded from. - -Potential causes are that a summary op was nacked then acked, double-acked, or the `summarySequenceNumber` is invalid. All cases should be recoverable, but still indicate bad behavior. - -- `sequenceNumber` - sequence number of the observed summary ack op. -- `summarySequenceNumber` - sequence number of the missing summary op, as indicated by the summary ack op. -- `initialSequenceNumber` - sequence number we initially loaded from. This is relevant since it is compared with the missing summary op sequence number to determine if we are in an error case or not. - -## Summary Manager - -> Event Prefix: `SummaryManager:` - -### CreatingSummarizer - -Logs right before attempting to spawn summarizer client. - -- `throttlerDelay` - throttle delay in ms (does not include initial delay) -- `initialDelay` - initial delay in ms -- `opsSinceLastAck` - count of ops since last summary ack, reported by SummaryCollection. This can be relevant for the initial delay bypass logic. -- `opsToBypassInitialDelay` - count of ops since last summary ack that allow us to bypass the initial delay - -### RunningSummarizer - -> Performance - -The parent client elected as responsible for summaries tracks the life cycle of its spawned summarizer client. - -This event starts when calling `run()` on the spawned summarizer client's `ISummarizer`. - -This event ends when that `run()` call's resulting promise is fulfilled. This happens when the client closes. - -- `attempt` - number of attempts within the last time window, used for calculating the throttle delay. - -### SummarizerException - -> Error - -Exception raised during summarization. 
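The initial-delay bypass that `CreatingSummarizer` reports on (via `opsSinceLastAck` and `opsToBypassInitialDelay`) can be sketched as below; the function name, the `>=` comparison, and the additive combination of the two delays are assumptions for illustration:

```typescript
// Sketch of the initial-delay bypass: if enough ops have accumulated since
// the last summary ack, skip the initial delay and apply only the throttle
// delay.
function summarizerStartDelayMs(
  opsSinceLastAck: number,
  opsToBypassInitialDelay: number,
  initialDelayMs: number,
  throttlerDelayMs: number,
): number {
  const bypassInitialDelay = opsSinceLastAck >= opsToBypassInitialDelay;
  return throttlerDelayMs + (bypassInitialDelay ? 0 : initialDelayMs);
}

console.log(summarizerStartDelayMs(100, 4000, 5000, 0));  // 5000: too few ops, full initial delay
console.log(summarizerStartDelayMs(4500, 4000, 5000, 0)); // 0: enough ops to bypass the initial delay
```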
- -- `category` - string that categorizes the exception ("generic" or "error") - -### EndingSummarizer - -Logs after summarizer has stopped running, i.e., after the client has disconnected or stop has been requested - -- `reason` - the reason for stopping, returned by Summarizer.run - -## Summarizer Client Election - -> Event Prefix: `OrderedClientElection:` - -### ElectedClientNotSummarizing - -> Error - -When a client is elected the summarizer, this indicates that too many ops have passed since they were elected or since their latest successful summary ack if they have one. - -- `electedClientId` - the client ID of the elected parent client responsible for summaries which is not summarizing. -- `lastSummaryAckSeqForClient` - the sequence number of the last summary ack received during this client's election. -- `electionSequenceNumber` - the sequence number at which this failing client was elected. -- `nextElectedClientId` - the client ID of the next oldest client in the Quorum which is eligible to be elected as responsible for summaries. It may be undefined if the currently elected client is the youngest (or only) client in the Quorum. -- `electionEnabled` - election of a new client on logging this error is enabled - -### UnexpectedElectionSequenceNumber - -> Unexpected Error - -Verifies the state transitioned as expected, based on assumptions about how `OrderedClientElection` works. - -- `lastSummaryAckSeqForClient` - expected to be undefined! -- `electionSequenceNumber` - expected to be same as op sequence number! - -## Ordered Client Election - -> Event Prefix: `OrderedClientElection:` - -### InitialElectedClientNotFound - -> Error - -Failed to find the initially elected client determined by the state in the summary. This is unexpected, and likely indicates a discrepancy between the `Quorum` members and the `SummarizerClientElection` state at the time the summary was generated. - -When this error happens, no client will be elected at the start. 
The code in `SummarizerClientElection` should still recover from this scenario. - -- `electionSequenceNumber` - sequence number which the initially elected client was supposedly elected as of. This is coming from the initial state loaded from the summary. -- `expectedClientId` - client ID of the initially elected client which was not found in the underlying `OrderedClientCollection`. This is coming from the base summary. -- `electedClientId` - the client which will now be elected; always undefined. -- `clientCount` - the number of clients in the underlying `OrderedClientCollection`, which should be the same as the number of clients in the `Quorum` at the time of load. - -### InitialElectedClientIneligible - -> Error - -The initially elected client determined by the summary fails the eligibility check. Presumably they must have passed it at the time the summary was generated and they were originally elected. So this indicates a discrepancy/change between the eligibility or a bug in the code. - -When this error happens, the first eligible client that is younger than this client will be elected. - -- `electionSequenceNumber` - sequence number which the initially elected client was elected as of. This is coming from the initial state loaded from the summary. -- `expectedClientId` - client ID of the initially elected client which is failing the eligibility check. This is coming from the base summary. -- `electedClientId` - client ID of the newly elected client or undefined if no younger clients are eligible. - -## Ordered Client Collection - -> Event Prefix: `OrderedClientCollection:` - -## ClientNotFound - -> Error - -A member of the `Quorum` was removed, but it was not found in the `OrderedClientCollection`. This should not be possible, since the tracked clients in the `OrderedClientCollection` should match 1-1 to the clients in the `Quorum`. - -- `clientId` - client ID of the member removed from the `Quorum`. 
-- `sequenceNumber` - sequence number at the time when the member was removed from the `Quorum`. This should be equivalent to the sequence number of their leave op, since that is what triggers them exiting the `Quorum`. - -## Summarizer - -> Event Prefix: `Summarizer:` - -### StoppingSummarizer - -This event fires when the Summarizer is stopped. - -- `reason` - reason code provided for stopping. -- `onBehalfOf` - the last known client ID of the parent client which spawned this summarizer client. - -### RunningSummarizer - -Summarizer has started running. This happens when the summarizer client becomes connected with write permissions, and `run()` has been called on it. At this point in time it will create a `RunningSummarizer` and start updating its state in response to summary ack ops. - -- `onBehalfOf` - the last known client ID of the parent client which spawned this summarizer client. -- `initSummarySeqNumber` - initial sequence number that the summarizer client loaded from - -### HandleSummaryAckError - -> Error - -An error was encountered while watching for or handling an inbound summary ack op. - -- `referenceSequenceNumber` - reference sequence number of the summary ack we are handling if the error occurs during `refreshLatestSummaryAck` (most likely). It could be the reference sequence number of the previously handled one + 1 (defaulting to initial sequence number if this is the first) if the error occurs while waiting for the summary ack (indicating a bug in `SummaryCollection`), but that should be significantly less likely. - -### HandleSummaryAckFatalError - -> Unexpected Error - -This should not even be possible, but it means that an unhandled error was raised while listening for summary ack ops in a loop. This is particularly unexpected, because if any handling of a summary ack fails, then we catch that error already and keep going, logging a different error. 
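The watch loop implied by these two events (handle each ack, then wait for the next one starting at the previous reference sequence number plus one) might look like the sketch below; `AckWaiter`, `watchSummaryAcks`, and the error shapes are illustrative stand-ins, not the actual implementation:

```typescript
interface SummaryAck {
  referenceSequenceNumber: number;
}

// Illustrative stand-in for watching the op stream: resolves with the next
// summary ack at or after `minRefSeq`, or undefined when the stream closes.
type AckWaiter = (minRefSeq: number) => Promise<SummaryAck | undefined>;

// Handle summary acks in a loop. A failure while handling one ack is caught
// and logged (the HandleSummaryAckError case) and the loop keeps going; only
// an error escaping this try/catch would correspond to the "fatal" case.
async function watchSummaryAcks(
  initialSequenceNumber: number,
  waitForAck: AckWaiter,
  handleAck: (ack: SummaryAck) => Promise<void>,
): Promise<void> {
  let refSeq = initialSequenceNumber;
  for (;;) {
    const ack = await waitForAck(refSeq);
    if (ack === undefined) {
      return; // stream closed; nothing more to handle
    }
    try {
      await handleAck(ack);
    } catch (error) {
      console.error("HandleSummaryAckError", {
        referenceSequenceNumber: ack.referenceSequenceNumber,
        error,
      });
    }
    refSeq = ack.referenceSequenceNumber + 1;
  }
}
```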
- -## Running Summarizer - -> Event Prefix: `Summarizer:Running:` - -- `summarizeCount` - the number of summarize attempts this client has made. This can be used to correlate events for individual summary attempts. -- `summarizerSuccessfulAttempts` - the number of successful summaries this summarizer instance has performed. This property subtracted from the `summarizeCount` property equals the number of attempts that failed to produce a summary. - -### SummaryAckWaitTimeout - -> Error - -When a summary op is sent, the summarizer waits `summaryAckWaitTimeout` for a summary ack/nack op in response from the server. If a corresponding response is not seen within that time, this event is raised, and the client retries. - -- `maxAckWaitTime` - cap on the maximum amount of time client will wait for a summarize op ack -- `referenceSequenceNumber` - last attempt summary op reference sequence number. -- `summarySequenceNumber` - last attempt summary op sequence number. -- `timePending` - time spent waiting for a summary ack/nack as computed by client. - -### MissingSummaryAckFoundByOps - -During first load, the wait for a summary ack/nack op in response to a summary op, can be bypassed by comparing the op timestamps. Normally a timer is used while running, but if the server-stamped op time difference exceeds the `maxAckWaitTimeout`, then raise this event, clear the timer and stop waiting to start. - -- `referenceSequenceNumber` - last attempt summary op reference sequence number. -- `summarySequenceNumber` - last attempt summary op sequence number. - -### SummarizeAttemptDelay - -Logs the presence of a delay before attempting summary. Note that the event is logged before waiting for the delay. - -- `duration` - duration delay in seconds. This is the `retryAfter` value found in the summary nack response op, if present. -Otherwise, it's the delay from regular summarize attempt retry. 
-- `reason` - "nack with retryAfter" if the `duration` value came from a summary nack response op. Undefined otherwise. - -### FailToSummarize - -> Error - -All consecutive retry attempts to summarize by heuristics have failed. The summarizer client should stop itself with "failToSummarize" reason code, closing the container. - -- `summarizeReason` - reason for attempting to summarize -- `message` - message returned with the last summarize result - -### UnexpectedSummarizeError - -> Unexpected Error - -This should not be possible, but it indicates an error was thrown in the code that runs immediately after a summarize attempt. This is just lock release and checking if it should summarize again. - -## Summary Generator - -> Event Prefix: `Summarizer:Running:` - -- `summarizeCount` - the number of summarize attempts this client has made. This can be used to correlate events for individual summary attempts. -- `summarizerSuccessfulAttempts` - the number of successful summaries this summarizer instance has performed - -### UnexpectedSummarizeError - -> Unexpected Error - -This definitely should not happen, since the code that can trigger this is trivial. - -### Summarize - -> Performance - -This event is used to track an individual summarize attempt from end to end. - -The event starts when the summarize attempt is first started. - -The event ends after a summary ack op is received in response to this attempt's summary op. - -The event cancels in response to a summary nack op for this attempt, an error along the way, or if the client disconnects while summarizing. - -- `reason` - reason code for attempting to summarize. -- `fullTree` - flag indicating whether the attempt should generate a full summary tree without any handles for unchanged subtrees. -- `timeSinceLastAttempt` - time in ms since the last summary attempt (whether it failed or succeeded) for this client. -- `timeSinceLastSummary` - time in ms since the last successful summary attempt for this client. 
- `message` - message indicating result of summarize attempt; possible values:

  - `disconnect` - the summary op was submitted but broadcast was cancelled.
  - `submitSummaryFailure` - the attempt failed to submit the summary op.
  - `summaryOpWaitTimeout` - timeout while waiting to receive the submitted summary op broadcasted.
  - `summaryAckWaitTimeout` - timeout while waiting to receive a summary ack/nack op in response to this attempt's summary op.
  - `summaryNack` - attempt was rejected by the server via a summary nack op.
  - `summaryAck` - attempt was successful, and the summary ack op was received.

- `ackWaitDuration` (ack/nack received only) - time in ms spent waiting for the summary ack/nack op after submitting the summary op.
- `ackNackSequenceNumber` (ack/nack received only) - sequence number of the summary ack/nack op in response to this attempt's summary op.
- `summarySequenceNumber` (ack/nack received only) - sequence number of this attempt's summary op.
- `handle` (ack only) - summary handle found on this attempt's summary ack op.

### Summarize_generate

This event fires during a summary attempt, as soon as the ContainerRuntime has finished its summarize work, which consists of generating the tree, uploading it to storage, and submitting the op. It fires even if something goes wrong during those steps.

- `fullTree` - flag indicating whether the attempt should generate a full summary tree without any handles for unchanged subtrees.
- `timeSinceLastAttempt` - time in ms since the last summary attempt (whether it failed or succeeded) for this client.
- `timeSinceLastSummary` - time in ms since the last successful summary attempt for this client.
- `referenceSequenceNumber` - reference sequence number at the time of this summary attempt.
- `opsSinceLastAttempt` - number of ops that have elapsed since the last summarize attempt for this client.
-- `opsSinceLastSummary` - number of ops that have elapsed since the last successful summarize attempt for this client. -- several properties with summary stats (count of nodes in the tree, etc.) -- `generateDuration` (only if tree generated) - time in ms it took to generate the summary tree. -- `handle` (only if uploaded to storage) - proposed summary handle as returned by storage for this summary attempt. -- `uploadDuration` (only if uploaded to storage) - time in ms it took to upload the summary tree to storage and receive back a handle. -- `clientSequenceNumber` (only if summary op submitted) - client sequence number of summary op submitted for this attempt. This can be used to correlate the submit attempt with the received summary op after it is broadcasted. - -### IncrementalSummaryViolation - -> Error - -Fires if an incremental summary (i.e., not full tree) summarizes more data stores than the expected maximum number - -- `summarizedDataStoreCount` - number of data stores actually summarized -- `gcStateUpdatedDataStoreCount` - number of data stores with an updated GC state since the last summary -- `opsSinceLastSummary` - number of ops since the last summary - -### Summarize_Op - -This event fires during a summary attempt, as soon as the client observes its own summary op. This means that the summary op it submitted was sequenced and broadcasted by the server. - -- `duration` - time in ms spent waiting for the summary op to be broadcast after submitting it. This should be low; should represent the round-trip time for an op. -- `referenceSequenceNumber` - reference sequence number of the summary op. This should match the reference sequence number of the Summarize event for this attempt as well. -- `summarySequenceNumber` - server-stamped sequence number of the summary op for this attempt. -- `handle` - proposed summary tree handle on the summary op for this attempt, which was originally returned from storage. 
### SummaryNack

> Error

Fires if the summary receives a nack response.

- `fullTree` - flag indicating whether the attempt should generate a full summary tree without any handles for unchanged subtrees.
- `timeSinceLastAttempt` - time in ms since the last summary attempt (whether it failed or succeeded) for this client.
- `timeSinceLastSummary` - time in ms since the last successful summary attempt for this client.
- `referenceSequenceNumber` - reference sequence number at the time of this summary attempt.
- `opsSinceLastAttempt` - number of ops that have elapsed since the last summarize attempt for this client.
- `opsSinceLastSummary` - number of ops that have elapsed since the last successful summarize attempt for this client.
- several properties with summary stats (count of nodes in the tree, etc.)
- `generateDuration` (only if tree generated) - time in ms it took to generate the summary tree.
- `handle` (only if uploaded to storage) - proposed summary handle as returned by storage for this summary attempt.
- `uploadDuration` (only if uploaded to storage) - time in ms it took to upload the summary tree to storage and receive back a handle.
- `clientSequenceNumber` (only if summary op submitted) - client sequence number of the summary op submitted for this attempt. This can be used to correlate the submit attempt with the received summary op after it is broadcasted.
- `retryAfterSeconds` - time in seconds to wait before retrying, as read from the nack message.

### SummarizeTimeout

> Performance

This event can fire multiple times (up to a cap) per summarize attempt. It indicates that a lot of time has passed during the summarize attempt.

For example, after 20 seconds of summarizing this event might fire. Then, after another 40 seconds pass, it fires again; after another 80 seconds, it fires a third time. By that third event, a total of 140 seconds has passed.
- `timeoutTime` - time in ms for this timeout to occur; it counts from the previous timeout event for this summarize attempt, so it is not cumulative.
- `timeoutCount` - number of times this event has fired for this attempt.

## SummarizerNode

Should use the in-progress summarize attempt correlated logger.

### DecodeSummaryMaxDepth

Differential summaries are disabled, so we aren't expecting to see this often, but it is possible since it happens while loading a snapshot.

Indicates >100 consecutive failed summaries for a single datastore. It means there are 100+ nested `_baseSummary` trees encountered while loading.

- `maxDecodeDepth` - 100

### DuplicateOutstandingOps

Differential summaries are disabled, so we aren't expecting to see this often, but it is possible since it happens while loading a snapshot.

When organizing the outstanding ops from the `_outstandingOps` blobs of nested differential summaries, it found an overlap in sequence number ranges. This indicates something went wrong.

- `message` - "newEarliestSeq <= latestSeq in decodeSummary: {newEarliestSeq} <= {latestSeq}"

## Container Runtime

Should use the in-progress summarize attempt correlated logger.

### SequenceNumberMismatch

> Error

Fires during ContainerRuntime load from snapshot if the sequence number read from the snapshot does not match DeltaManager.initialSequenceNumber.

### SummariesDisabled

Fires during ContainerRuntime load if automatic summaries are disabled for the given Container.

### SummaryStatus:Behind

> Error

Fires if too many ops (7000 by default) have been processed since the last summary.

### SummaryStatus:CaughtUp

Fires if, after a previous `SummaryStatus:Behind` event, a summary ack is received.

### LastSequenceMismatch

> Error

Fires on summary submit if the summary sequence number does not match the sequence number of the last message processed by the Delta Manager.
- `error` - error message containing the mismatched sequence numbers

### GarbageCollection

> Performance

This event tracks the performance around the garbage collection process.

- `deletedNodes`
- `totalNodes`
- `deletedDataStores`
- `totalDataStores`

### MissingGCNode

> Disabled: too noisy

While running garbage collection, a referenced node was detected as missing.

- `missingNodeId`

diff --git a/docs/content/docs/testing/debugging.md b/docs/content/docs/testing/debugging.md
deleted file mode 100644
index 45472b50c8df..000000000000
--- a/docs/content/docs/testing/debugging.md
+++ /dev/null
@@ -1,23 +0,0 @@
---
title: Debugging
menuPosition: 5
status: unwritten
draft: true
---

## How to test your application

### Enable Fluid logs in the browser

### Understanding Fluid error logs

Errors raised by the Fluid Framework, or handled and "normalized" by the framework, will have a few key fields to consider:

* `errorType` -- e.g. `throttlingError` -- A code-searchable term that directs you to the "class" of error. This may indicate some other domain-specific data that would be logged, such as `retryAfterSeconds`. This is the only field in the error contract used programmatically by partners.
* `error` or `message` (optional) -- The free-form error message. May contain additional details, but if not, remember to check for other properties in the log line. In cases where an external error is wrapped, you may find there's a prefix that gives Fluid's summary of the error, with the original error message following after a colon.

Note that for a time, `fluidErrorCode` was used in addition to `message` to describe the specific error case, but has since been deprecated.
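As a concrete illustration of reading these fields, the sketch below splits a wrapped message on the first colon to recover Fluid's prefix and the original error text; the log-line shape and helper name are illustrative, not an actual framework API:

```typescript
// Minimal shape of a Fluid error log line, per the fields described above.
interface FluidErrorLine {
  errorType: string;
  message?: string;
  error?: string;
  retryAfterSeconds?: number;
}

// Split a wrapped message of the form "Fluid's summary: original message"
// into the prefix and the original error text.
function splitWrappedMessage(line: FluidErrorLine): { prefix?: string; original: string } {
  const text = line.message ?? line.error ?? "";
  const colon = text.indexOf(": ");
  return colon === -1
    ? { original: text }
    : { prefix: text.slice(0, colon), original: text.slice(colon + 2) };
}

splitWrappedMessage({
  errorType: "throttlingError",
  message: "Summarizer was disposed: socket disconnected",
  retryAfterSeconds: 30,
});
// → { prefix: "Summarizer was disposed", original: "socket disconnected" }
```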
- -## Debugging with Fluid diff --git a/docs/content/posts/_index.md b/docs/content/posts/_index.md deleted file mode 100644 index 0c416154b97b..000000000000 --- a/docs/content/posts/_index.md +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: "Blog & Updates" -draft: true ---- diff --git a/docs/static/images/container-and-component-loading-1.jpg b/docs/static/images/container-and-component-loading-1.jpg deleted file mode 100644 index 600987a90d6c..000000000000 Binary files a/docs/static/images/container-and-component-loading-1.jpg and /dev/null differ diff --git a/docs/static/images/container-and-component-loading-10.jpg b/docs/static/images/container-and-component-loading-10.jpg deleted file mode 100644 index 64fceed43071..000000000000 Binary files a/docs/static/images/container-and-component-loading-10.jpg and /dev/null differ diff --git a/docs/static/images/container-and-component-loading-11.jpg b/docs/static/images/container-and-component-loading-11.jpg deleted file mode 100644 index 85dcaf916c82..000000000000 Binary files a/docs/static/images/container-and-component-loading-11.jpg and /dev/null differ diff --git a/docs/static/images/container-and-component-loading-2.jpg b/docs/static/images/container-and-component-loading-2.jpg deleted file mode 100644 index 99ee7eee7ad7..000000000000 Binary files a/docs/static/images/container-and-component-loading-2.jpg and /dev/null differ diff --git a/docs/static/images/container-and-component-loading-3.jpg b/docs/static/images/container-and-component-loading-3.jpg deleted file mode 100644 index 4c7cf4628778..000000000000 Binary files a/docs/static/images/container-and-component-loading-3.jpg and /dev/null differ diff --git a/docs/static/images/container-and-component-loading-4.jpg b/docs/static/images/container-and-component-loading-4.jpg deleted file mode 100644 index e6a6667518ca..000000000000 Binary files a/docs/static/images/container-and-component-loading-4.jpg and /dev/null differ diff --git 
a/docs/static/images/container-and-component-loading-5.jpg b/docs/static/images/container-and-component-loading-5.jpg deleted file mode 100644 index c106797ff998..000000000000 Binary files a/docs/static/images/container-and-component-loading-5.jpg and /dev/null differ diff --git a/docs/static/images/container-and-component-loading-6.jpg b/docs/static/images/container-and-component-loading-6.jpg deleted file mode 100644 index b9cd2cd79eb6..000000000000 Binary files a/docs/static/images/container-and-component-loading-6.jpg and /dev/null differ diff --git a/docs/static/images/container-and-component-loading-7.jpg b/docs/static/images/container-and-component-loading-7.jpg deleted file mode 100644 index 361eb630fcd8..000000000000 Binary files a/docs/static/images/container-and-component-loading-7.jpg and /dev/null differ diff --git a/docs/static/images/container-and-component-loading-8.jpg b/docs/static/images/container-and-component-loading-8.jpg deleted file mode 100644 index 42cab0de00a1..000000000000 Binary files a/docs/static/images/container-and-component-loading-8.jpg and /dev/null differ diff --git a/docs/static/images/container-and-component-loading-9.jpg b/docs/static/images/container-and-component-loading-9.jpg deleted file mode 100644 index 2dcd3a438318..000000000000 Binary files a/docs/static/images/container-and-component-loading-9.jpg and /dev/null differ