
feat: introduce vmpool to make each vm run on single thread #272

Merged
merged 4 commits into main from feat/vm-pool on Sep 26, 2024

Conversation

@beer-1 beer-1 (Contributor) commented on Sep 26, 2024

Description

Closes: #XXXX

We cannot share the VM across threads until loader v2 is installed, so this PR introduces a VM pool. Details:

            // TODO: If the VM is not shared across threads, this error means that there is a
            //       recursive type. But in case it is shared, the current implementation is not
            //       correct because some other thread can cache depth formula before we reach
            //       this line, and result in an invariant violation. We need to ensure correct
            //       behavior, e.g., make the cache available per thread.
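As a rough illustration only (not the code in this PR): VMEngine, vmPool, and the method names below are local placeholders, while the real code uses types.VMEngine and Keeper methods acquireVM/releaseVM. The idea boils down to a semaphore-bounded slice of VMs guarded by a mutex:

package keeper

import (
	"context"
	"sync"

	"golang.org/x/sync/semaphore"
)

// VMEngine stands in for the real VM interface (types.VMEngine in x/move/types).
type VMEngine interface{}

// vmPool hands out dedicated VMs so that no single VM is shared across threads.
type vmPool struct {
	mtx *sync.Mutex
	sem *semaphore.Weighted
	vms []VMEngine
}

func newVMPool(vms []VMEngine) *vmPool {
	return &vmPool{
		mtx: &sync.Mutex{},
		sem: semaphore.NewWeighted(int64(len(vms))),
		vms: vms,
	}
}

// acquire blocks until a VM is free, then removes it from the pool.
func (p *vmPool) acquire(ctx context.Context) (VMEngine, error) {
	if err := p.sem.Acquire(ctx, 1); err != nil {
		return nil, err
	}
	p.mtx.Lock()
	vm := p.vms[0]
	p.vms = p.vms[1:]
	p.mtx.Unlock()
	return vm, nil
}

// release puts the VM back and frees one semaphore slot for waiting callers.
func (p *vmPool) release(vm VMEngine) {
	p.mtx.Lock()
	p.vms = append(p.vms, vm)
	p.mtx.Unlock()
	p.sem.Release(1)
}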

Author Checklist

All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow-up issues.

I have...

  • included the correct type prefix in the PR title
  • confirmed ! is included in the type prefix for API- or client-breaking changes
  • targeted the correct branch
  • provided a link to the relevant issue or specification
  • reviewed "Files changed" and left comments if necessary
  • included the necessary unit and integration tests
  • updated the relevant documentation or specification, including comments for documenting Go code
  • confirmed all CI checks have passed

Reviewers Checklist

All items are required. Please add a note if the item is not applicable and please add
your handle next to the items reviewed if you only reviewed selected items.

I have...

  • confirmed the correct type prefix in the PR title
  • confirmed all author checklist items have been addressed
  • reviewed state machine logic, API design and naming, documentation accuracy, and tests and test coverage

@beer-1 beer-1 requested a review from a team as a code owner September 26, 2024 08:03

coderabbitai bot commented Sep 26, 2024

📝 Walkthrough

The changes involve significant modifications to the virtual machine (VM) management within the x/move/keeper package. The Keeper struct now supports multiple VM instances, enhancing concurrency control with mutexes and semaphores. Key methods for executing functions and scripts have been updated to acquire and release VM instances appropriately. A new file, vmpool.go, introduces a VM pool management system, providing methods for acquiring and releasing VMs. Overall, these changes aim to improve the handling and performance of VM operations in the project.

Changes

  • x/move/keeper/genesis.go: Modified the Initialize function to acquire and release a VM instance, changing from k.moveVM.Initialize to vm.Initialize.
  • x/move/keeper/handler.go: Updated executeEntryFunction, executeScript, and executeViewFunction to acquire and release VM instances, replacing k.moveVM with the new vm variable.
  • x/move/keeper/keeper.go: Replaced the single moveVM instance with a slice of VM pointers (moveVMs), added a mutex and semaphore for synchronization, and modified NewKeeper to create multiple VMs.
  • x/move/keeper/vmpool.go: Introduced a new Keeper struct managing a pool of VMs with acquireVM and releaseVM methods, ensuring thread-safe access.
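
As a purely illustrative sketch of the keeper.go wiring (hypothetical signature; the real fields are moveVMs, moveVMMutx, and moveVMSemaphore, and the real NewKeeper takes many more arguments), building on the vmPool sketch in the PR description above:

// Hypothetical wiring only; newVM is a stand-in for the real movevm constructor.
type Keeper struct {
	pool *vmPool
}

// NewKeeper pre-creates numVMs engines so each execution can use its own VM.
func NewKeeper(numVMs int, newVM func() VMEngine) Keeper {
	vms := make([]VMEngine, 0, numVMs)
	for i := 0; i < numVMs; i++ {
		vms = append(vms, newVM())
	}
	return Keeper{pool: newVMPool(vms)}
}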

Possibly related PRs

  • feat: bump movevm and point #248: The changes in x/move/keeper/handler.go in this PR involve modifications to methods that manage virtual machine (VM) execution, which directly relates to the VM acquisition and release mechanisms introduced in the main PR's changes to the same file.

Poem

🐇 In the land of code where VMs play,
A keeper's magic leads the way.
With pools of VMs, swift and bright,
They dance and twirl, a wondrous sight.
So hop along, let changes flow,
In this rabbit's world, we watch them grow! 🌟


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 73f31d2 and 62309d3.

📒 Files selected for processing (2)
  • x/move/keeper/keeper.go (4 hunks)
  • x/move/keeper/vmpool.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • x/move/keeper/keeper.go
  • x/move/keeper/vmpool.go




github-actions bot commented Sep 26, 2024

Dependency Review

✅ No vulnerabilities or license issues or OpenSSF Scorecard issues found.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Outside diff range and nitpick comments (8)
x/move/keeper/vmpool.go (4)

9-17: LGTM with suggestion: Consider adding error handling for empty VM pool.

The acquireVM method implements robust concurrency control using both a semaphore and a mutex. This ensures thread-safe access to the VM pool while limiting the number of VMs that can be acquired simultaneously.

However, there's a potential issue if the VM pool becomes empty. Consider adding error handling to deal with this scenario.

Here's a suggested modification to handle the empty pool case:

 func (k Keeper) acquireVM(ctx context.Context) (vm types.VMEngine) {
 	k.moveVMSemaphore.Acquire(ctx, 1)
 
 	k.moveVMMutx.Lock()
+	if len(*k.moveVMs) == 0 {
+		k.moveVMMutx.Unlock()
+		k.moveVMSemaphore.Release(1)
+		return nil // or return an error
+	}
 	vm, *k.moveVMs = (*k.moveVMs)[0], (*k.moveVMs)[1:]
 	k.moveVMMutx.Unlock()
 
 	return
 }

13-13: Consider using pop for clarity and efficiency.

While the current implementation is correct, using a dedicated pop function might improve readability and potentially efficiency.

Consider refactoring the VM removal logic:

-vm, *k.moveVMs = (*k.moveVMs)[0], (*k.moveVMs)[1:]
+vm = k.popVM()

// Add this helper method to the Keeper struct
+func (k *Keeper) popVM() types.VMEngine {
+	vm := (*k.moveVMs)[0]
+	*k.moveVMs = (*k.moveVMs)[1:]
+	return vm
+}

19-25: LGTM with suggestions: Consider adding safety checks.

The releaseVM method correctly uses a mutex to ensure thread-safe access when returning a VM to the pool. The semaphore is appropriately released after the VM is added back to the pool.

To enhance robustness, consider adding the following checks:

  1. Ensure the VM being released is not nil.
  2. Optionally, implement a mechanism to prevent duplicate VMs in the pool.

Here's a suggested modification to include these safety checks:

-func (k Keeper) releaseVM(vm types.VMEngine) {
+func (k Keeper) releaseVM(vm types.VMEngine) error {
+	if vm == nil {
+		return errors.New("cannot release a nil VM")
+	}
+
 	k.moveVMMutx.Lock()
+	defer k.moveVMMutx.Unlock()
+
+	// Optional: Check for duplicates (this might be expensive for large pools)
+	for _, existingVM := range *k.moveVMs {
+		if existingVM == vm {
+			return errors.New("VM already exists in the pool")
+		}
+	}
+
 	*k.moveVMs = append(*k.moveVMs, vm)
-	k.moveVMMutx.Unlock()
 
 	k.moveVMSemaphore.Release(1)
+	return nil
 }

Note: The duplicate check might be expensive for large pools. Consider if this check is necessary for your use case.


1-25: Overall implementation is solid with room for improvement.

The vmpool.go file introduces a well-structured VM pool management system with proper concurrency control using mutexes and semaphores. The acquireVM and releaseVM methods provide a good foundation for managing VM resources.

To further enhance the implementation, consider the following suggestions:

  1. Add error handling for the case when the VM pool becomes empty in acquireVM.
  2. Implement safety checks in releaseVM to handle nil VMs and potentially prevent duplicate VMs.
  3. Consider using a pop function for clarity and potential efficiency in acquireVM.

These improvements will make the VM pool more robust and easier to maintain.

x/move/keeper/handler.go (3)

167-170: LGTM. Consider tracking the TODO for future removal.

The changes appropriately implement VM acquisition and release, aligning with the PR objective of introducing a VM pool. This ensures thread-safety for VM operations.

Would you like me to create a GitHub issue to track the TODO for removing this code after loader v2 is installed?
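
To make the acquire/release pattern concrete, here is a hedged sketch continuing the hypothetical vmPool/Keeper sketches above; the real acquireVM/releaseVM in vmpool.go return types.VMEngine directly, and the real executeEntryFunction has a larger signature:

// Thin wrappers mirroring the acquireVM/releaseVM methods added in vmpool.go.
func (k Keeper) acquireVM(ctx context.Context) VMEngine {
	vm, _ := k.pool.acquire(ctx) // error handling elided in this sketch
	return vm
}

func (k Keeper) releaseVM(vm VMEngine) { k.pool.release(vm) }

// Per-call pattern the review refers to: take a VM from the pool, run the
// existing logic against it, and always hand it back afterwards.
func (k Keeper) executeEntryFunction(ctx context.Context) error {
	vm := k.acquireVM(ctx)
	defer k.releaseVM(vm)

	// ... existing execution logic, using vm instead of k.moveVM ...
	_ = vm
	return nil
}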


568-571: LGTM. Consider refactoring for DRY principle.

The changes consistently implement VM acquisition and release across all execution methods, which is excellent for maintainability. However, this repetition suggests an opportunity for refactoring.

Consider extracting the VM acquisition and release logic into a separate helper function or using a decorator pattern to apply this behavior to all execution methods. This would adhere to the DRY (Don't Repeat Yourself) principle and make future maintenance easier.

Example refactoring (pseudo-code):

func withVM(k *Keeper, ctx context.Context, f func(*VM) error) error {
    vm := k.acquireVM(ctx)
    defer k.releaseVM(vm)
    return f(vm)
}

// Then use it like:
func (k Keeper) executeEntryFunction(...) error {
    return withVM(k, ctx, func(vm *VM) error {
        // existing function body, using `vm` instead of `k.moveVM`
    })
}

This refactoring would eliminate the need for the TODO comments in each function and centralize the VM management logic.


Line range hint 1-610: Overall assessment: Changes align with PR objectives, but consider refactoring.

The modifications consistently implement VM acquisition and release across the executeEntryFunction, executeScript, and executeViewFunction methods. This aligns well with the PR objective of introducing a VM pool to address issues with VM sharing across threads.

Key points:

  1. The changes enhance thread-safety for VM operations.
  2. The implementation is consistent across all modified functions.
  3. The TODO comments indicate that this is a temporary solution until loader v2 is installed.

Recommendations:

  1. Consider the suggested refactoring to reduce code duplication and improve maintainability.
  2. Ensure that a task is created to remove these changes once loader v2 is installed.
  3. Verify that the performance impact of acquiring and releasing VMs for each operation is acceptable.
x/move/keeper/keeper.go (1)

49-49: Correct the field name 'moveVMMutx' to 'moveVMMutex'

There's a typo in the field name moveVMMutx. It should be moveVMMutex to reflect the correct spelling of "Mutex", enhancing code readability.

Apply this diff to fix the field name:

-	moveVMMutx      *sync.Mutex
+	moveVMMutex     *sync.Mutex
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between d577a4a and e809999.

📒 Files selected for processing (4)
  • x/move/keeper/genesis.go (1 hunks)
  • x/move/keeper/handler.go (6 hunks)
  • x/move/keeper/keeper.go (4 hunks)
  • x/move/keeper/vmpool.go (1 hunks)
🔇 Additional comments (3)
x/move/keeper/vmpool.go (1)

3-7: LGTM: Import statements are appropriate.

The import statements are concise and relevant to the functionality implemented in this file.

x/move/keeper/genesis.go (1)

49-55: LGTM! Consider adding explanatory comments and creating a follow-up issue.

The changes align well with the PR objectives of introducing a VM pool to address VM sharing issues. Here are some suggestions to improve the code further:

  1. Add a comment explaining why VM acquisition is necessary here. This will help future developers understand the context of these changes.

  2. Create a follow-up issue for the TODO comment on line 49. This will ensure that the temporary solution is properly tracked and addressed in the future.

Consider adding the following comment above line 50:

// Acquire a VM instance from the pool to ensure thread-safety during initialization

Let's verify the impact of these changes on the rest of the codebase:

This script will help us identify any other parts of the codebase that might need to be updated to align with these changes.

x/move/keeper/handler.go (1)

287-290: LGTM. Consistent implementation across functions.

The changes here mirror those in executeEntryFunction, maintaining consistency in the VM acquisition and release process across different execution methods.

Four review threads on x/move/keeper/keeper.go were resolved (two marked outdated).

codecov bot commented Sep 26, 2024

Codecov Report

Attention: Patch coverage is 90.90909% with 4 lines in your changes missing coverage. Please review.

Project coverage is 40.67%. Comparing base (d577a4a) to head (62309d3).
Report is 1 commit behind head on main.

Files with missing lines:

  • x/move/keeper/keeper.go: 85.71% patch coverage (1 missing, 1 partial) ⚠️
  • x/move/keeper/vmpool.go: 80.00% patch coverage (1 missing, 1 partial) ⚠️
Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##             main     #272      +/-   ##
==========================================
+ Coverage   40.60%   40.67%   +0.06%     
==========================================
  Files         264      265       +1     
  Lines       25225    25258      +33     
==========================================
+ Hits        10243    10274      +31     
- Misses      13395    13396       +1     
- Partials     1587     1588       +1     
Files with missing lines (coverage Δ):

  • x/move/keeper/genesis.go: 17.39% <100.00%> (+2.46%) ⬆️
  • x/move/keeper/handler.go: 79.10% <100.00%> (+0.64%) ⬆️
  • x/move/keeper/keeper.go: 56.32% <85.71%> (+0.94%) ⬆️
  • x/move/keeper/vmpool.go: 80.00% <80.00%> (ø)

@sh-cha sh-cha (Contributor) left a comment


LGTM

@beer-1 beer-1 merged commit a0b130a into main Sep 26, 2024
9 checks passed
@beer-1 beer-1 deleted the feat/vm-pool branch September 26, 2024 09:29
This was referenced Oct 10, 2024