
[dpapp] Initial dpapp implementation being a vpp plugin #609

Open · wants to merge 11 commits into base: main from dp_app_add_vpp_plugin

Conversation

jimmyzhai
Collaborator

@jimmyzhai jimmyzhai commented Jul 23, 2024

Following the dpapp HLD (#606), this is the third part of the implementation. It implements the basic logic of inline flow creation, deletion, and age-out.

  • Build with command make dpapp
  • Run with command make run-dpapp

@KrisNey-MSFT
Collaborator

hi @jimmyzhai - is this ready to merge? Thank you!

@KrisNey-MSFT
Collaborator

hi @jimmyzhai - it looks like we have conflicts that need to be resolved, please and thank you.

@@ -0,0 +1,17 @@
#!/bin/bash
Collaborator


also rename this file.

Collaborator Author


done

@jimmyzhai jimmyzhai force-pushed the dp_app_add_vpp_plugin branch 3 times, most recently from e2c1a02 to 1ac27da Compare September 2, 2024 12:48
@@ -0,0 +1,11 @@
cmake_minimum_required(VERSION 3.5)
Collaborator


do you mind adding a README here to help people get started on it?

Collaborator Author


done

@KrisNey-MSFT
Collaborator

checking in @jimmyzhai - is this a lower priority in general? TY!

@KrisNey-MSFT
Collaborator

just checking in :)

@jimmyzhai
Collaborator Author

> just checking in :)

I'm actively updating this PR now after #608 is merged.

@KrisNey-MSFT
Collaborator

KrisNey-MSFT commented Oct 30, 2024 via email

@jimmyzhai jimmyzhai force-pushed the dp_app_add_vpp_plugin branch 4 times, most recently from fb6bab2 to 765c873 Compare November 5, 2024 17:25
@KrisNey-MSFT
Collaborator

hi @jimmyzhai - do you need help from @chrispsommers on this one?

@jimmyzhai jimmyzhai force-pushed the dp_app_add_vpp_plugin branch 9 times, most recently from 276617b to 0b51e53 Compare November 6, 2024 07:45
@jimmyzhai jimmyzhai force-pushed the dp_app_add_vpp_plugin branch 2 times, most recently from 8f735d4 to 8031721 Compare November 12, 2024 09:25
@jimmyzhai
Collaborator Author

> hi @jimmyzhai - do you need help from @chrispsommers on this one?

Hi @chrispsommers, I've repeatedly hit failures in the DASH-BMV2-CI workflow across many tries. The error is:

**The runner has received a shutdown signal. This can happen when the runner service is stopped, or a manually started runner is canceled.

Run PTF Tests The operation was canceled.**

The error sometimes happens before the "Run PTF Tests" step. run-saithrift-ptftests succeeds on my local host. It looks like the job is canceled by the GitHub Actions runner (running out of resources?). The runner host has 4 vCPUs and 16 GB of memory (seen in the "Check dpapp status" step). With dpapp added, does the extra load push the runner past its limit? Any idea? Could you move the GitHub Actions runner to a higher-capacity machine (8 vCPUs) and re-run this workflow?

Found the cause: the runner is stopped due to sysctl vm.max_map_count = 1548, which limits the maximum number of memory mappings for one process.
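For readers hitting the same symptom: the limit identified above can be inspected, and raised on a host where you have root, with sysctl. The value 65530 below is the usual Linux default, shown only as an illustration, not the value used on this runner:

```shell
# Read the per-process memory-mapping limit (this PR reports 1548 on the CI runner).
cat /proc/sys/vm/max_map_count

# Raise it for the running kernel (requires root); 65530 is the common Linux default.
# sudo sysctl -w vm.max_map_count=65530

# To persist the setting across reboots:
# echo 'vm.max_map_count=65530' | sudo tee -a /etc/sysctl.conf
```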

@jimmyzhai jimmyzhai closed this Nov 12, 2024
@jimmyzhai jimmyzhai reopened this Nov 12, 2024
@jimmyzhai jimmyzhai force-pushed the dp_app_add_vpp_plugin branch 3 times, most recently from 21306ad to 27c84a6 Compare November 14, 2024 15:28
@jimmyzhai jimmyzhai requested a review from r12f November 14, 2024 15:57
Review threads (resolved):
  • dash-pipeline/dpapp/dash/dash.c (4 threads)
  • dash-pipeline/dpapp/dash/node.c (7 threads)
  • test/test-cases/functional/ptf/p4_dash_utils.py (3 threads)
1. Do not enable dpdk_plugin.so, so hugepage allocation is not needed
2. Update NODE_FN dash_node to be simpler
Collaborator

@r12f r12f left a comment


thanks Junhua for fixing all the offline comments! I think there are a few more left that we talked about but haven't been changed yet. I have left comments there.

{
u32 n_left_to_next;

vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);
Collaborator


thanks a lot for fixing this giant loop!

return DASH_ERROR_OK;
}

VLIB_NODE_FN (dash_node) (vlib_main_t * vm, vlib_node_runtime_t * node,
Collaborator


do you mind renaming this file to dash_node.c?

Collaborator Author


done

VLIB_NODE_FN (dash_node) (vlib_main_t * vm, vlib_node_runtime_t * node,
vlib_frame_t * frame)
{
u32 n_left_from, *from, *to_next;
Collaborator


as we discussed last time, it would be better to give these variables more descriptive names: from/to/bi/b0 is far too vague for anyone to understand the code.

Collaborator Author


Most vpp plugins adhere to these names in node functions to express packet vector movement between current node and its next nodes. I prefer to be consistent with them. If we look at them and the uses in vlib_get_next_frame, vlib_put_next_frame, vlib_validate_buffer_enqueue_x1, etc as a whole, the naming is concise and clear.

@r12f r12f self-requested a review November 25, 2024 04:02
Collaborator

@r12f r12f left a comment


thanks for fixing the variable naming! this change looks good to me now!

3 participants