
neonvm-controller: introduce unit tests #935

Merged · 17 commits · Jun 21, 2024
4 changes: 3 additions & 1 deletion Makefile
@@ -99,6 +99,7 @@ vet: ## Run go vet against code.
 	CGO_ENABLED=0 go vet ./...
 
 
+TESTARGS ?= ./...
 .PHONY: test
 test: fmt vet envtest ## Run tests.
 	# chmodding KUBEBUILDER_ASSETS dir to make it deletable by owner,
@@ -107,7 +108,8 @@ test: fmt vet envtest ## Run tests.
 	export KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) --bin-dir $(LOCALBIN) -p path)"; \
 	find $(KUBEBUILDER_ASSETS) -type d -exec chmod 0755 {} \; ; \
 	CGO_ENABLED=0 \
-	go test ./... -coverprofile cover.out
+	go test $(TESTARGS) -coverprofile cover.out
+	go tool cover -html=cover.out -o cover.html
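The default is unchanged (TESTARGS ?= ./...), but a subset of packages can now be selected when invoking the target. For example, to run only the controller package's tests (package path shown for illustration):

make test TESTARGS=./neonvm/controllers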

##@ Build

1 change: 1 addition & 0 deletions go.mod
@@ -172,6 +172,7 @@ require (
 	github.com/sirupsen/logrus v1.9.0 // indirect
 	github.com/spf13/cobra v1.6.1 // indirect
 	github.com/spf13/pflag v1.0.5 // indirect
+	github.com/stretchr/objx v0.5.0 // indirect
 	github.com/vishvananda/netns v0.0.0-20211101163701-50045581ed74 // indirect
 	go.etcd.io/etcd/api/v3 v3.5.6 // indirect
 	go.etcd.io/etcd/client/pkg/v3 v3.5.6 // indirect
1 change: 1 addition & 0 deletions go.sum
@@ -502,6 +502,7 @@ github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
+github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
 github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
 github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
 github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
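(github.com/stretchr/objx is an indirect dependency of testify's mock package, which this PR starts using.)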
@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */
 
-package controllers
+package functests
 
 import (
 	"path/filepath"
@@ -51,7 +51,7 @@ var _ = BeforeSuite(func() {
 
 	By("bootstrapping test environment")
 	testEnv = &envtest.Environment{
-		CRDDirectoryPaths:     []string{filepath.Join("..", "config", "crd", "bases")},
+		CRDDirectoryPaths:     []string{filepath.Join("..", "..", "config", "crd", "bases")},
 		ErrorIfCRDPathMissing: true,
 	}
@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */
 
-package controllers
+package functests
Member: from DMs: if the reason to move this into a separate package is just so that it can be individually tested, that can also be accomplished with go test <file> -- so I'd prefer leaving this as-is unless there are other reasons as well

Contributor Author: There is more than one test file in controllers/, and it would be possible to run them all while leaving functests aside.

Member: I still don't quite get what you mean, but if you get value from having them separate, then fair enough, let's go with it :)

Contributor Author: I meant there are vm_controller_test.go and vm_qmp_test.go, so with go test ./neonvm/controllers we can run them both.

Also, it "aesthetically" looks better IMO that lightweight unit tests are in the same folder, while heavier functional tests are in a different folder.
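Concretely, the split allows invocations like the following; the functests package path here is inferred from the CRDDirectoryPaths change above, so treat it as an assumption:

go test ./neonvm/controllers            # lightweight unit tests only
go test ./neonvm/controllers/functests  # heavier envtest-based functional tests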


 import (
 	"context"
@@ -31,6 +31,7 @@ import (
 	"k8s.io/apimachinery/pkg/types"
 
 	vmv1 "github.com/neondatabase/autoscaling/neonvm/apis/neonvm/v1"
+	"github.com/neondatabase/autoscaling/neonvm/controllers"
 )
 var _ = Describe("VirtualMachine controller", func() {
@@ -93,11 +94,11 @@ var _ = Describe("VirtualMachine controller", func() {
 		}, time.Minute, time.Second).Should(Succeed())
 
 		By("Reconciling the custom resource created")
-		virtualmachineReconciler := &VMReconciler{
+		virtualmachineReconciler := &controllers.VMReconciler{
 			Client:   k8sClient,
 			Scheme:   k8sClient.Scheme(),
 			Recorder: nil,
-			Config: &ReconcilerConfig{
+			Config: &controllers.ReconcilerConfig{
 				IsK3s:                   false,
 				UseContainerMgr:         true,
 				MaxConcurrentReconciles: 1,
262 changes: 262 additions & 0 deletions neonvm/controllers/vm_controller_unit_test.go
@@ -0,0 +1,262 @@
package controllers
Member: nit: let's make this controllers_test?

AFAICT the following diff would be required to do that (mostly just doing a couple renames)

Diff
diff --git a/neonvm/controllers/vm_controller.go b/neonvm/controllers/vm_controller.go
index e9ebfcf2..5f6b2254 100644
--- a/neonvm/controllers/vm_controller.go
+++ b/neonvm/controllers/vm_controller.go
@@ -60,15 +60,15 @@ import (
 )
 
 const (
-	virtualmachineFinalizer = "vm.neon.tech/finalizer"
+	VirtualmachineFinalizer = "vm.neon.tech/finalizer"
 )
 
 // Definitions to manage status conditions
 const (
-	// typeAvailableVirtualMachine represents the status of the Deployment reconciliation
-	typeAvailableVirtualMachine = "Available"
-	// typeDegradedVirtualMachine represents the status used when the custom resource is deleted and the finalizer operations are must to occur.
-	typeDegradedVirtualMachine = "Degraded"
+	// TypeAvailableVirtualMachine represents the status of the Deployment reconciliation
+	TypeAvailableVirtualMachine = "Available"
+	// TypeDegradedVirtualMachine represents the status used when the custom resource is deleted and the finalizer operations are must to occur.
+	TypeDegradedVirtualMachine = "Degraded"
 )
 
 const (
@@ -133,9 +133,9 @@ func (r *VMReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Re
 		// The object is not being deleted, so if it does not have our finalizer,
 		// then lets add the finalizer and update the object. This is equivalent
 		// registering our finalizer.
-		if !controllerutil.ContainsFinalizer(&vm, virtualmachineFinalizer) {
+		if !controllerutil.ContainsFinalizer(&vm, VirtualmachineFinalizer) {
 			log.Info("Adding Finalizer for VirtualMachine")
-			if ok := controllerutil.AddFinalizer(&vm, virtualmachineFinalizer); !ok {
+			if ok := controllerutil.AddFinalizer(&vm, VirtualmachineFinalizer); !ok {
 				log.Info("Failed to add finalizer from VirtualMachine")
 				return ctrl.Result{Requeue: true}, nil
 			}
@@ -147,14 +147,14 @@ func (r *VMReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Re
 		}
 	} else {
 		// The object is being deleted
-		if controllerutil.ContainsFinalizer(&vm, virtualmachineFinalizer) {
+		if controllerutil.ContainsFinalizer(&vm, VirtualmachineFinalizer) {
 			// our finalizer is present, so lets handle any external dependency
 			log.Info("Performing Finalizer Operations for VirtualMachine before delete it")
 			r.doFinalizerOperationsForVirtualMachine(ctx, &vm)
 
 			// remove our finalizer from the list and update it.
 			log.Info("Removing Finalizer for VirtualMachine after successfully perform the operations")
-			if ok := controllerutil.RemoveFinalizer(&vm, virtualmachineFinalizer); !ok {
+			if ok := controllerutil.RemoveFinalizer(&vm, VirtualmachineFinalizer); !ok {
 				log.Info("Failed to remove finalizer from VirtualMachine")
 				return ctrl.Result{Requeue: true}, nil
 			}
@@ -312,7 +312,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
 	// Let's check and just set the condition status as Unknown when no status are available
 	if vm.Status.Conditions == nil || len(vm.Status.Conditions) == 0 {
 		// set Unknown condition status for AvailableVirtualMachine
-		meta.SetStatusCondition(&vm.Status.Conditions, metav1.Condition{Type: typeAvailableVirtualMachine, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
+		meta.SetStatusCondition(&vm.Status.Conditions, metav1.Condition{Type: TypeAvailableVirtualMachine, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
 	}
 
 	// NB: .Spec.EnableSSH guaranteed non-nil because the k8s API server sets the default for us.
@@ -434,7 +434,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
 			vm.Status.PodIP = vmRunner.Status.PodIP
 			vm.Status.Phase = vmv1.VmRunning
 			meta.SetStatusCondition(&vm.Status.Conditions,
-				metav1.Condition{Type: typeAvailableVirtualMachine,
+				metav1.Condition{Type: TypeAvailableVirtualMachine,
 					Status:  metav1.ConditionTrue,
 					Reason:  "Reconciling",
 					Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) created successfully", vm.Status.PodName, vm.Name)})
@@ -452,21 +452,21 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
 		case runnerSucceeded:
 			vm.Status.Phase = vmv1.VmSucceeded
 			meta.SetStatusCondition(&vm.Status.Conditions,
-				metav1.Condition{Type: typeAvailableVirtualMachine,
+				metav1.Condition{Type: TypeAvailableVirtualMachine,
 					Status:  metav1.ConditionFalse,
 					Reason:  "Reconciling",
 					Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) succeeded", vm.Status.PodName, vm.Name)})
 		case runnerFailed:
 			vm.Status.Phase = vmv1.VmFailed
 			meta.SetStatusCondition(&vm.Status.Conditions,
-				metav1.Condition{Type: typeDegradedVirtualMachine,
+				metav1.Condition{Type: TypeDegradedVirtualMachine,
 					Status:  metav1.ConditionTrue,
 					Reason:  "Reconciling",
 					Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) failed", vm.Status.PodName, vm.Name)})
 		case runnerUnknown:
 			vm.Status.Phase = vmv1.VmPending
 			meta.SetStatusCondition(&vm.Status.Conditions,
-				metav1.Condition{Type: typeAvailableVirtualMachine,
+				metav1.Condition{Type: TypeAvailableVirtualMachine,
 					Status:  metav1.ConditionUnknown,
 					Reason:  "Reconciling",
 					Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) in Unknown phase", vm.Status.PodName, vm.Name)})
@@ -484,7 +484,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
 					vm.Status.PodName))
 			vm.Status.Phase = vmv1.VmFailed
 			meta.SetStatusCondition(&vm.Status.Conditions,
-				metav1.Condition{Type: typeDegradedVirtualMachine,
+				metav1.Condition{Type: TypeDegradedVirtualMachine,
 					Status:  metav1.ConditionTrue,
 					Reason:  "Reconciling",
 					Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) not found", vm.Status.PodName, vm.Name)})
@@ -572,21 +572,21 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
 		case runnerSucceeded:
 			vm.Status.Phase = vmv1.VmSucceeded
 			meta.SetStatusCondition(&vm.Status.Conditions,
-				metav1.Condition{Type: typeAvailableVirtualMachine,
+				metav1.Condition{Type: TypeAvailableVirtualMachine,
 					Status:  metav1.ConditionFalse,
 					Reason:  "Reconciling",
 					Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) succeeded", vm.Status.PodName, vm.Name)})
 		case runnerFailed:
 			vm.Status.Phase = vmv1.VmFailed
 			meta.SetStatusCondition(&vm.Status.Conditions,
-				metav1.Condition{Type: typeDegradedVirtualMachine,
+				metav1.Condition{Type: TypeDegradedVirtualMachine,
 					Status:  metav1.ConditionTrue,
 					Reason:  "Reconciling",
 					Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) failed", vm.Status.PodName, vm.Name)})
 		case runnerUnknown:
 			vm.Status.Phase = vmv1.VmPending
 			meta.SetStatusCondition(&vm.Status.Conditions,
-				metav1.Condition{Type: typeAvailableVirtualMachine,
+				metav1.Condition{Type: TypeAvailableVirtualMachine,
 					Status:  metav1.ConditionUnknown,
 					Reason:  "Reconciling",
 					Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) in Unknown phase", vm.Status.PodName, vm.Name)})
@@ -605,7 +605,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
 					vm.Status.PodName))
 			vm.Status.Phase = vmv1.VmFailed
 			meta.SetStatusCondition(&vm.Status.Conditions,
-				metav1.Condition{Type: typeDegradedVirtualMachine,
+				metav1.Condition{Type: TypeDegradedVirtualMachine,
 					Status:  metav1.ConditionTrue,
 					Reason:  "Reconciling",
 					Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) not found", vm.Status.PodName, vm.Name)})
@@ -625,7 +625,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
 		case runnerSucceeded:
 			vm.Status.Phase = vmv1.VmSucceeded
 			meta.SetStatusCondition(&vm.Status.Conditions,
-				metav1.Condition{Type: typeAvailableVirtualMachine,
+				metav1.Condition{Type: TypeAvailableVirtualMachine,
 					Status:  metav1.ConditionFalse,
 					Reason:  "Reconciling",
 					Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) succeeded", vm.Status.PodName, vm.Name)})
@@ -633,7 +633,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
 		case runnerFailed:
 			vm.Status.Phase = vmv1.VmFailed
 			meta.SetStatusCondition(&vm.Status.Conditions,
-				metav1.Condition{Type: typeDegradedVirtualMachine,
+				metav1.Condition{Type: TypeDegradedVirtualMachine,
 					Status:  metav1.ConditionTrue,
 					Reason:  "Reconciling",
 					Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) failed", vm.Status.PodName, vm.Name)})
@@ -641,7 +641,7 @@ func (r *VMReconciler) doReconcile(ctx context.Context, vm *vmv1.VirtualMachine)
 		case runnerUnknown:
 			vm.Status.Phase = vmv1.VmPending
 			meta.SetStatusCondition(&vm.Status.Conditions,
-				metav1.Condition{Type: typeAvailableVirtualMachine,
+				metav1.Condition{Type: TypeAvailableVirtualMachine,
 					Status:  metav1.ConditionUnknown,
 					Reason:  "Reconciling",
 					Message: fmt.Sprintf("Pod (%s) for VirtualMachine (%s) in Unknown phase", vm.Status.PodName, vm.Name)})
diff --git a/neonvm/controllers/vm_controller_unit_test.go b/neonvm/controllers/vm_controller_unit_test.go
index fc7f4429..5db7c824 100644
--- a/neonvm/controllers/vm_controller_unit_test.go
+++ b/neonvm/controllers/vm_controller_unit_test.go
@@ -1,4 +1,4 @@
-package controllers
+package controllers_test
 
 import (
 	"context"
@@ -24,6 +24,7 @@ import (
 	"k8s.io/apimachinery/pkg/runtime"
 
 	vmv1 "github.com/neondatabase/autoscaling/neonvm/apis/neonvm/v1"
+	"github.com/neondatabase/autoscaling/neonvm/controllers"
 )
 
 type mockRecorder struct {
@@ -78,13 +79,13 @@ func defaultVm() *vmv1.VirtualMachine {
 type testParams struct {
 	t            *testing.T
 	ctx          context.Context
-	r            *VMReconciler
+	r            *controllers.VMReconciler
 	client       client.Client
 	origVM       *vmv1.VirtualMachine
 	mockRecorder *mockRecorder
 }
 
-var reconcilerMetrics = MakeReconcilerMetrics()
+var reconcilerMetrics = controllers.MakeReconcilerMetrics()
 
 func newTestParams(t *testing.T) *testParams {
 	os.Setenv("VM_RUNNER_IMAGE", "vm-runner-img")
@@ -107,11 +108,11 @@ func newTestParams(t *testing.T) *testParams {
 		origVM:       nil,
 	}
 
-	params.r = &VMReconciler{
+	params.r = &controllers.VMReconciler{
 		Client:   params.client,
 		Recorder: params.mockRecorder,
 		Scheme:   scheme,
-		Config: &ReconcilerConfig{
+		Config: &controllers.ReconcilerConfig{
 			IsK3s:                   false,
 			UseContainerMgr:         false,
 			MaxConcurrentReconciles: 10,
@@ -159,7 +160,7 @@ func TestReconcile(t *testing.T) {
 	assert.Equal(t, reconcile.Result{
 		Requeue: true,
 	}, res)
-	assert.Contains(t, params.getVM().Finalizers, virtualmachineFinalizer)
+	assert.Contains(t, params.getVM().Finalizers, controllers.VirtualmachineFinalizer)
 
 	// Round 2
 	res, err = params.r.Reconcile(params.ctx, req)
@@ -199,7 +200,7 @@ func prettyPrint(t *testing.T, obj any) {
 func TestRunningPod(t *testing.T) {
 	params := newTestParams(t)
 	origVM := defaultVm()
-	origVM.Finalizers = append(origVM.Finalizers, virtualmachineFinalizer)
+	origVM.Finalizers = append(origVM.Finalizers, controllers.VirtualmachineFinalizer)
 	origVM.Status.Phase = vmv1.VmPending
 
 	origVM = params.initVM(origVM)
@@ -248,5 +249,5 @@ func TestRunningPod(t *testing.T) {
 	// VM is now running
 	assert.Equal(t, vmv1.VmRunning, vm.Status.Phase)
 	assert.Len(t, vm.Status.Conditions, 1)
-	assert.Equal(t, vm.Status.Conditions[0].Type, typeAvailableVirtualMachine)
+	assert.Equal(t, vm.Status.Conditions[0].Type, controllers.TypeAvailableVirtualMachine)
 }

Alternately, I'm ok to leave that as a follow-up if you like.

Contributor Author (@Omrigan, Jun 21, 2024): I want to keep the opportunity to test private functions too; these are unit tests, after all.
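For reference, Go also offers a middle ground between these positions: an export_test.go shim inside the package can re-export unexported identifiers to an external _test package, so tests can stay black-box without exporting anything from production code. A minimal sketch (hypothetical, not part of this PR):

// neonvm/controllers/export_test.go (hypothetical)
package controllers

// Compiled only into the test binary; because this file is in package
// controllers, it can see unexported identifiers and expose them to
// files in package controllers_test.
var (
	VirtualmachineFinalizer     = virtualmachineFinalizer
	TypeAvailableVirtualMachine = typeAvailableVirtualMachine
)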


import (
	"context"
	"encoding/json"
	"os"
	"testing"
	"time"

	"github.com/samber/lo"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"
	"go.uber.org/zap/zapcore"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
	"sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"

	vmv1 "github.com/neondatabase/autoscaling/neonvm/apis/neonvm/v1"
)

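// mockRecorder is a testify-based mock of the event recorder handed to
// VMReconciler below; tests register expectations with On(...) for the
// events the reconciler is expected to emit.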
type mockRecorder struct {
	mock.Mock
}

func (m *mockRecorder) Event(object runtime.Object, eventtype, reason, message string) {
	m.Called(object, eventtype, reason, message)
}

func (m *mockRecorder) Eventf(object runtime.Object, eventtype, reason, messageFmt string, args ...interface{}) {
	m.Called(object, eventtype, reason, messageFmt, args)
}

func (m *mockRecorder) AnnotatedEventf(object runtime.Object, annotations map[string]string, eventtype, reason, messageFmt string, args ...interface{}) {
	m.Called(object, annotations, eventtype, reason, messageFmt, args)
}

// defaultVm returns a VM which is similar to what we can reasonably
// expect from the control plane.
func defaultVm() *vmv1.VirtualMachine {
	return &vmv1.VirtualMachine{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test-vm",
			Namespace: "default",
		},
		Spec: vmv1.VirtualMachineSpec{
			EnableSSH:          lo.ToPtr(false),
			EnableAcceleration: lo.ToPtr(true),
			//nolint:exhaustruct // This is a test
			Guest: vmv1.Guest{
				KernelImage:         lo.ToPtr("kernel-img"),
				AppendKernelCmdline: nil,
				CPUs: vmv1.CPUs{
					Min: lo.ToPtr(vmv1.MilliCPU(1000)),
					Max: lo.ToPtr(vmv1.MilliCPU(2000)),
					Use: lo.ToPtr(vmv1.MilliCPU(1500)),
				},
				MemorySlots: vmv1.MemorySlots{
					Min: lo.ToPtr(int32(512)),
					Max: lo.ToPtr(int32(2048)),
					Use: lo.ToPtr(int32(1024)),
				},
				MemorySlotSize: *resource.NewQuantity(1, resource.DecimalSI),
			},
		},
		Status: vmv1.VirtualMachineStatus{
			Phase:         "",
			Conditions:    []metav1.Condition{},
			RestartCount:  0,
			PodName:       "",
			PodIP:         "",
			ExtraNetIP:    "",
			ExtraNetMask:  "",
			Node:          "",
			CPUs:          lo.ToPtr(vmv1.MilliCPU(100)),
			MemorySize:    resource.NewQuantity(123, resource.DecimalSI),
			SSHSecretName: "",
		},
	}
}

type testParams struct {
	t            *testing.T
	ctx          context.Context
	r            *VMReconciler
	client       client.Client
	origVM       *vmv1.VirtualMachine
	mockRecorder *mockRecorder
}

var reconcilerMetrics = MakeReconcilerMetrics()

func newTestParams(t *testing.T) *testParams {
	os.Setenv("VM_RUNNER_IMAGE", "vm-runner-img")
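	// VM_RUNNER_IMAGE is the image used for the runner pod the reconciler
	// creates; TestRunningPod asserts the pod's container uses it.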
Member: This isn't thread-safe, right? Can we get away with not setting this for now? (If not: let's extract it into ReconcilerConfig?)

Contributor Author: It is thread-safe; the implementation contains a mutex. I'd leave it for now and refactor it later to be a normal parameter. CC #921

Member: Huh, ok, fair enough. Yeah, +1 to refactoring in a follow-up


	logger := zap.New(zap.UseDevMode(true), zap.WriteTo(os.Stdout),
		zap.Level(zapcore.DebugLevel))
	ctx := log.IntoContext(context.Background(), logger)

	scheme := runtime.NewScheme()
	scheme.AddKnownTypes(vmv1.SchemeGroupVersion, &vmv1.VirtualMachine{})
	scheme.AddKnownTypes(corev1.SchemeGroupVersion, &corev1.Pod{})

	params := &testParams{
		t:      t,
		ctx:    ctx,
		client: fake.NewClientBuilder().WithScheme(scheme).Build(),
		//nolint:exhaustruct // This is a mock
		mockRecorder: &mockRecorder{},
		r:            nil,
		origVM:       nil,
	}

	params.r = &VMReconciler{
		Client:   params.client,
		Recorder: params.mockRecorder,
		Scheme:   scheme,
		Config: &ReconcilerConfig{
			IsK3s:                   false,
			UseContainerMgr:         false,
			MaxConcurrentReconciles: 10,
			QEMUDiskCacheSettings:   "",
			FailurePendingPeriod:    time.Minute,
			FailingRefreshInterval:  time.Minute,
		},
		Metrics: reconcilerMetrics,
	}

	return params
}

func (p *testParams) initVM(vm *vmv1.VirtualMachine) *vmv1.VirtualMachine {
	err := p.client.Create(p.ctx, vm)
	require.NoError(p.t, err)
	p.origVM = vm

	// Do serialize/deserialize, to normalize resource.Quantity
	return p.getVM()
}

func (p *testParams) getVM() *vmv1.VirtualMachine {
	var obj vmv1.VirtualMachine
	err := p.client.Get(p.ctx, client.ObjectKeyFromObject(p.origVM), &obj)
	require.NoError(p.t, err)

	return &obj
}

func TestReconcile(t *testing.T) {
	params := newTestParams(t)
	origVM := params.initVM(defaultVm())

	req := reconcile.Request{
		NamespacedName: client.ObjectKeyFromObject(origVM),
	}

	// Round 1
	res, err := params.r.Reconcile(params.ctx, req)
	assert.NoError(t, err)

	// Added finalizer
	assert.Equal(t, reconcile.Result{
		Requeue: true,
	}, res)
	assert.Contains(t, params.getVM().Finalizers, virtualmachineFinalizer)

	// Round 2
	res, err = params.r.Reconcile(params.ctx, req)
	assert.NoError(t, err)
	assert.Equal(t, false, res.Requeue)

	// VM is pending
	assert.Equal(t, vmv1.VmPending, params.getVM().Status.Phase)

	// Round 3
	params.mockRecorder.On("Event", mock.Anything, "Normal", "Created",
		mock.Anything)
	res, err = params.r.Reconcile(params.ctx, req)
	assert.NoError(t, err)
	assert.Equal(t, false, res.Requeue)

	// We now have a pod
	vm := params.getVM()
	assert.NotEmpty(t, vm.Status.PodName)
	// Spec is unchanged
	assert.Equal(t, vm.Spec, origVM.Spec)

	// Round 4
	res, err = params.r.Reconcile(params.ctx, req)
	assert.NoError(t, err)
	assert.Equal(t, false, res.Requeue)

	// Pod is pending, so nothing changes
	assert.Equal(t, vm, params.getVM())
}

func PrettyPrint(t *testing.T, obj any) {
	s, _ := json.MarshalIndent(obj, "", "\t")
	t.Logf("%s\n", s)
}

func TestRunningPod(t *testing.T) {
	params := newTestParams(t)
	origVM := defaultVm()
	origVM.Finalizers = append(origVM.Finalizers, virtualmachineFinalizer)
	origVM.Status.Phase = vmv1.VmPending

	origVM = params.initVM(origVM)

	req := reconcile.Request{
		NamespacedName: client.ObjectKeyFromObject(origVM),
	}

	// Round 1
	params.mockRecorder.On("Event", mock.Anything, "Normal", "Created",
		mock.Anything)
	res, err := params.r.Reconcile(params.ctx, req)
	require.NoError(t, err)
	assert.Equal(t, false, res.Requeue)

	// We now have a pod
	podName := params.getVM().Status.PodName
	podKey := client.ObjectKey{
		Namespace: origVM.Namespace,
		Name:      podName,
	}
	var pod corev1.Pod
	err = params.client.Get(params.ctx, podKey, &pod)
	require.NoError(t, err)

	assert.Len(t, pod.Spec.Containers, 1)
	assert.Equal(t, "neonvm-runner", pod.Spec.Containers[0].Name)
	assert.Equal(t, "vm-runner-img", pod.Spec.Containers[0].Image)
	assert.Len(t, pod.Spec.InitContainers, 2)
	assert.Equal(t, "init", pod.Spec.InitContainers[0].Name)
	assert.Equal(t, "init-kernel", pod.Spec.InitContainers[1].Name)

	PrettyPrint(t, pod)

	pod.Status.Phase = corev1.PodRunning
	err = params.client.Update(params.ctx, &pod)
	require.NoError(t, err)

	// Round 2
	res, err = params.r.Reconcile(params.ctx, req)
	require.NoError(t, err)
	assert.Equal(t, false, res.Requeue)

	vm := params.getVM()

	// VM is now running
	assert.Equal(t, vmv1.VmRunning, vm.Status.Phase)
	assert.Len(t, vm.Status.Conditions, 1)
	assert.Equal(t, vm.Status.Conditions[0].Type, typeAvailableVirtualMachine)
}