neonvm-controller: introduce unit tests #935
Conversation
Not a full review, but brief thoughts: at first glance, it seems like there are a couple of other changes that might be good to separate out into their own PRs (so they stay independent and can be merged faster), in particular the golangci-lint bump and the file renaming. Also, not sure if it's still the case, but IIRC there are some requirements around exact matching between the golangci-lint version and the Go toolchain version. IIRC #885 was blocking upgrading the Go version, so maybe we can do that now?
Signed-off-by: Oleg Vasilev <[email protected]>
Force-pushed from d2e9726 to 53db1cf
Some comments, didn't go super deep.
Signed-off-by: Oleg Vasilev <[email protected]>
Replied to one comment and resolved the others. Also asked via DM about why the files were moved to the functest directory.
Signed-off-by: Oleg Vasilev <[email protected]>
looks good, mostly just nits
var reconcilerMetrics = MakeReconcilerMetrics()

func newTestParams(t *testing.T) *testParams {
	os.Setenv("VM_RUNNER_IMAGE", "vm-runner-img")
This isn't thread-safe, right? Can we get away with not setting this for now? (If not: let's extract it into ReconcilerConfig?)
It is thread-safe; the implementation contains a mutex. I'd leave it as is for now and refactor it into a normal parameter later. CC #921
Huh ok, fair enough. Yeah, +1 to refactoring in a follow-up
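A minimal sketch of what that follow-up refactor could look like (field and constructor names here are assumptions, not the actual API): instead of reading VM_RUNNER_IMAGE from the process-wide environment, the runner image becomes a plain field on the reconciler config that each test sets explicitly.

```go
package main

import "fmt"

// Hypothetical config shape: RunnerImage replaces the VM_RUNNER_IMAGE
// env var, so tests no longer need os.Setenv at all.
type ReconcilerConfig struct {
	IsK3s                   bool
	UseContainerMgr         bool
	MaxConcurrentReconciles int
	RunnerImage             string // previously read from VM_RUNNER_IMAGE
}

// newTestConfig builds a config for tests without touching global state.
func newTestConfig() *ReconcilerConfig {
	return &ReconcilerConfig{
		MaxConcurrentReconciles: 10,
		RunnerImage:             "vm-runner-img",
	}
}

func main() {
	cfg := newTestConfig()
	fmt.Println(cfg.RunnerImage)
}
```

This keeps each test's configuration local to the test, which sidesteps the thread-safety question entirely.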
@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */

-package controllers
+package functests
from DMs: if the reason to move this into a separate package is just so that it can be individually tested, that can also be accomplished with go test <file> -- so I'd prefer leaving this as-is unless there are other reasons as well
There is more than one test file in controllers/, and this way it's possible to run them all while leaving the functests aside.
I still don't quite get what you mean, but if you get value from having them separate, then fair enough, let's go with it :)
I meant there are vm_controller_test.go and vm_qmp_test.go, so with go test ./neonvm/controllers we can run them both.
Also, IMO it looks "aesthetically" better that the lightweight unit tests stay in the same folder while the heavier functional tests live in a separate folder.
Signed-off-by: Oleg Vasilev <[email protected]>
lgtm, nice! one nit regarding package controllers vs package controllers_test
@@ -0,0 +1,252 @@
+package controllers
nit: let's make this controllers_test?
AFAICT the following diff would be required to do that (mostly just a couple of renames):
Diff
diff --git a/neonvm/controllers/vm_controller.go b/neonvm/controllers/vm_controller.go
index e9ebfcf2..5f6b2254 100644
--- a/neonvm/controllers/vm_controller.go
+++ b/neonvm/controllers/vm_controller.go
@@ -60,15 +60,15 @@ import (
)
const (
- virtualmachineFinalizer = "vm.neon.tech/finalizer"
+ VirtualmachineFinalizer = "vm.neon.tech/finalizer"
)
// Definitions to manage status conditions
const (
- // typeAvailableVirtualMachine represents the status of the Deployment reconciliation
- typeAvailableVirtualMachine = "Available"
- // typeDegradedVirtualMachine represents the status used when the custom resource is deleted and the finalizer operations are must to occur.
- typeDegradedVirtualMachine = "Degraded"
+ // TypeAvailableVirtualMachine represents the status of the Deployment reconciliation
+ TypeAvailableVirtualMachine = "Available"
+ // TypeDegradedVirtualMachine represents the status used when the custom resource is deleted and the finalizer operations are must to occur.
+ TypeDegradedVirtualMachine = "Degraded"
)
const (
@@ -133,9 +133,9 @@ func (r *VMReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Re
// The object is not being deleted, so if it does not have our finalizer,
// then lets add the finalizer and update the object. This is equivalent
// registering our finalizer.
- if !controllerutil.ContainsFinalizer(&vm, virtualmachineFinalizer) {
+ if !controllerutil.ContainsFinalizer(&vm, VirtualmachineFinalizer) {
log.Info("Adding Finalizer for VirtualMachine")
- if ok := controllerutil.AddFinalizer(&vm, virtualmachineFinalizer); !ok {
+ if ok := controllerutil.AddFinalizer(&vm, VirtualmachineFinalizer); !ok {
log.Info("Failed to add finalizer from VirtualMachine")
return ctrl.Result{Requeue: true}, nil
}
@@ -147,14 +147,14 @@ func (r *VMReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Re
}
} else {
// The object is being deleted
- if controllerutil.ContainsFinalizer(&vm, virtualmachineFinalizer) {
+ if controllerutil.ContainsFinalizer(&vm, VirtualmachineFinalizer) {
// our finalizer is present, so lets handle any external dependency
log.Info("Performing Finalizer Operations for VirtualMachine before delete it")
r.doFinalizerOperationsForVirtualMachine(ctx, &vm)
// remove our finalizer from the list and update it.
log.Info("Removing Finalizer for VirtualMachine after successfully perform the operations")
- if ok := controllerutil.RemoveFinalizer(&vm, virtualmachineFinalizer); !ok {
+ if ok := controllerutil.RemoveFinalizer(&vm, VirtualmachineFinalizer); !ok {
log.Info("Failed to remove finalizer from VirtualMachine")
return ctrl.Result{Requeue: true}, nil
}
@@ -312,7 +312,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
// Let's check and just set the condition status as Unknown when no status are available
if vm.Status.Conditions == nil || len(vm.Status.Conditions) == 0 {
// set Unknown condition status for AvailableVirtualMachine
- meta.SetStatusCondition(&vm.Status.Conditions, metav1.Condition{Type: typeAvailableVirtualMachine, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
+ meta.SetStatusCondition(&vm.Status.Conditions, metav1.Condition{Type: TypeAvailableVirtualMachine, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
}
// NB: .Spec.EnableSSH guaranteed non-nil because the k8s API server sets the default for us.
@@ -434,7 +434,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
vm.Status.PodIP = vmRunner.Status.PodIP
vm.Status.Phase = vmv1.VmRunning
meta.SetStatusCondition(&vm.Status.Conditions,
- metav1.Condition{Type: typeAvailableVirtualMachine,
+ metav1.Condition{Type: TypeAvailableVirtualMachine,
Status: metav1.ConditionTrue,
Reason: "Reconciling",
Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) created successfully", vm.Status.PodName, vm.Name)})
@@ -452,21 +452,21 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
case runnerSucceeded:
vm.Status.Phase = vmv1.VmSucceeded
meta.SetStatusCondition(&vm.Status.Conditions,
- metav1.Condition{Type: typeAvailableVirtualMachine,
+ metav1.Condition{Type: TypeAvailableVirtualMachine,
Status: metav1.ConditionFalse,
Reason: "Reconciling",
Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) succeeded", vm.Status.PodName, vm.Name)})
case runnerFailed:
vm.Status.Phase = vmv1.VmFailed
meta.SetStatusCondition(&vm.Status.Conditions,
- metav1.Condition{Type: typeDegradedVirtualMachine,
+ metav1.Condition{Type: TypeDegradedVirtualMachine,
Status: metav1.ConditionTrue,
Reason: "Reconciling",
Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) failed", vm.Status.PodName, vm.Name)})
case runnerUnknown:
vm.Status.Phase = vmv1.VmPending
meta.SetStatusCondition(&vm.Status.Conditions,
- metav1.Condition{Type: typeAvailableVirtualMachine,
+ metav1.Condition{Type: TypeAvailableVirtualMachine,
Status: metav1.ConditionUnknown,
Reason: "Reconciling",
Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) in Unknown phase", vm.Status.PodName, vm.Name)})
@@ -484,7 +484,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
vm.Status.PodName))
vm.Status.Phase = vmv1.VmFailed
meta.SetStatusCondition(&vm.Status.Conditions,
- metav1.Condition{Type: typeDegradedVirtualMachine,
+ metav1.Condition{Type: TypeDegradedVirtualMachine,
Status: metav1.ConditionTrue,
Reason: "Reconciling",
Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) not found", vm.Status.PodName, vm.Name)})
@@ -572,21 +572,21 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
case runnerSucceeded:
vm.Status.Phase = vmv1.VmSucceeded
meta.SetStatusCondition(&vm.Status.Conditions,
- metav1.Condition{Type: typeAvailableVirtualMachine,
+ metav1.Condition{Type: TypeAvailableVirtualMachine,
Status: metav1.ConditionFalse,
Reason: "Reconciling",
Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) succeeded", vm.Status.PodName, vm.Name)})
case runnerFailed:
vm.Status.Phase = vmv1.VmFailed
meta.SetStatusCondition(&vm.Status.Conditions,
- metav1.Condition{Type: typeDegradedVirtualMachine,
+ metav1.Condition{Type: TypeDegradedVirtualMachine,
Status: metav1.ConditionTrue,
Reason: "Reconciling",
Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) failed", vm.Status.PodName, vm.Name)})
case runnerUnknown:
vm.Status.Phase = vmv1.VmPending
meta.SetStatusCondition(&vm.Status.Conditions,
- metav1.Condition{Type: typeAvailableVirtualMachine,
+ metav1.Condition{Type: TypeAvailableVirtualMachine,
Status: metav1.ConditionUnknown,
Reason: "Reconciling",
Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) in Unknown phase", vm.Status.PodName, vm.Name)})
@@ -605,7 +605,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
vm.Status.PodName))
vm.Status.Phase = vmv1.VmFailed
meta.SetStatusCondition(&vm.Status.Conditions,
- metav1.Condition{Type: typeDegradedVirtualMachine,
+ metav1.Condition{Type: TypeDegradedVirtualMachine,
Status: metav1.ConditionTrue,
Reason: "Reconciling",
Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) not found", vm.Status.PodName, vm.Name)})
@@ -625,7 +625,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
case runnerSucceeded:
vm.Status.Phase = vmv1.VmSucceeded
meta.SetStatusCondition(&vm.Status.Conditions,
- metav1.Condition{Type: typeAvailableVirtualMachine,
+ metav1.Condition{Type: TypeAvailableVirtualMachine,
Status: metav1.ConditionFalse,
Reason: "Reconciling",
Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) succeeded", vm.Status.PodName, vm.Name)})
@@ -633,7 +633,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
case runnerFailed:
vm.Status.Phase = vmv1.VmFailed
meta.SetStatusCondition(&vm.Status.Conditions,
- metav1.Condition{Type: typeDegradedVirtualMachine,
+ metav1.Condition{Type: TypeDegradedVirtualMachine,
Status: metav1.ConditionTrue,
Reason: "Reconciling",
Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) failed", vm.Status.PodName, vm.Name)})
@@ -641,7 +641,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
case runnerUnknown:
vm.Status.Phase = vmv1.VmPending
meta.SetStatusCondition(&vm.Status.Conditions,
- metav1.Condition{Type: typeAvailableVirtualMachine,
+ metav1.Condition{Type: TypeAvailableVirtualMachine,
Status: metav1.ConditionUnknown,
Reason: "Reconciling",
Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) in Unknown phase", vm.Status.PodName, vm.Name)})
diff --git a/neonvm/controllers/vm_controller_unit_test.go b/neonvm/controllers/vm_controller_unit_test.go
index fc7f4429..5db7c824 100644
--- a/neonvm/controllers/vm_controller_unit_test.go
+++ b/neonvm/controllers/vm_controller_unit_test.go
@@ -1,4 +1,4 @@
-package controllers
+package controllers_test
import (
"context"
@@ -24,6 +24,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
vmv1 "github.com/neondatabase/autoscaling/neonvm/apis/neonvm/v1"
+ "github.com/neondatabase/autoscaling/neonvm/controllers"
)
type mockRecorder struct {
@@ -78,13 +79,13 @@ func defaultVm() *vmv1.VirtualMachine {
type testParams struct {
t *testing.T
ctx context.Context
- r *VMReconciler
+ r *controllers.VMReconciler
client client.Client
origVM *vmv1.VirtualMachine
mockRecorder *mockRecorder
}
-var reconcilerMetrics = MakeReconcilerMetrics()
+var reconcilerMetrics = controllers.MakeReconcilerMetrics()
func newTestParams(t *testing.T) *testParams {
os.Setenv("VM_RUNNER_IMAGE", "vm-runner-img")
@@ -107,11 +108,11 @@ func newTestParams(t *testing.T) *testParams {
origVM: nil,
}
- params.r = &VMReconciler{
+ params.r = &controllers.VMReconciler{
Client: params.client,
Recorder: params.mockRecorder,
Scheme: scheme,
- Config: &ReconcilerConfig{
+ Config: &controllers.ReconcilerConfig{
IsK3s: false,
UseContainerMgr: false,
MaxConcurrentReconciles: 10,
@@ -159,7 +160,7 @@ func TestReconcile(t *testing.T) {
assert.Equal(t, reconcile.Result{
Requeue: true,
}, res)
- assert.Contains(t, params.getVM().Finalizers, virtualmachineFinalizer)
+ assert.Contains(t, params.getVM().Finalizers, controllers.VirtualmachineFinalizer)
// Round 2
res, err = params.r.Reconcile(params.ctx, req)
@@ -199,7 +200,7 @@ func prettyPrint(t *testing.T, obj any) {
func TestRunningPod(t *testing.T) {
params := newTestParams(t)
origVM := defaultVm()
- origVM.Finalizers = append(origVM.Finalizers, virtualmachineFinalizer)
+ origVM.Finalizers = append(origVM.Finalizers, controllers.VirtualmachineFinalizer)
origVM.Status.Phase = vmv1.VmPending
origVM = params.initVM(origVM)
@@ -248,5 +249,5 @@ func TestRunningPod(t *testing.T) {
// VM is now running
assert.Equal(t, vmv1.VmRunning, vm.Status.Phase)
assert.Len(t, vm.Status.Conditions, 1)
- assert.Equal(t, vm.Status.Conditions[0].Type, typeAvailableVirtualMachine)
+ assert.Equal(t, vm.Status.Conditions[0].Type, controllers.TypeAvailableVirtualMachine)
}
Alternatively, I'm ok with leaving that as a follow-up if you like.
I want to keep the ability to test private functions as well; these are unit tests, after all.
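For reference, the trade-off here is between white-box and black-box test packages. A layout sketch (the two variants cannot coexist in one file; contents are illustrative):

```
// Option A (white-box), as in this PR: the test file shares the package,
// so unexported identifiers like virtualmachineFinalizer are directly
// accessible.
//
//   vm_controller_unit_test.go:
//     package controllers

// Option B (black-box), as suggested in the diff above: the _test suffix
// puts the tests in a separate package, so only exported identifiers are
// visible, and they must be qualified, e.g.
// controllers.VirtualmachineFinalizer.
//
//   vm_controller_unit_test.go:
//     package controllers_test
```

Option B enforces testing through the public API only, which is why keeping private functions testable argues for Option A.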
Signed-off-by: Oleg Vasilev <[email protected]>
First PR introducing unit tests. The old functional tests are moved to a functest directory. Part of #763