Kubernetes CRD Development Guide

The two most commonly used and most important things to master when extending Kubernetes are custom resources (CRDs) and admission webhooks. This article teaches you CRD development in ten minutes.

Kubernetes lets users define their own resource objects, just like the built-in Deployment and StatefulSet, and custom resources are widely used. For example, the Prometheus Operator defines a custom Prometheus object; when its custom controller sees a kubectl create of that object, it creates Pods to form a Prometheus cluster. Rook and others work the same way.

I need to schedule virtual machines with Kubernetes, so in this article I define a custom virtual machine type.

kubebuilder

kubebuilder can save us a lot of work and makes developing CRDs and admission webhooks very simple.

Install

To install from source:

git clone https://github.com/kubernetes-sigs/kubebuilder
cd kubebuilder
make build
cp bin/kubebuilder $GOPATH/bin

Or download binary:

os=$(go env GOOS)
arch=$(go env GOARCH)

# download kubebuilder and extract it to tmp
curl -sL https://go.kubebuilder.io/dl/2.0.0-beta.0/${os}/${arch} | tar -xz -C /tmp/

# move to a long-term location and put it on your path
# (you'll need to set the KUBEBUILDER_ASSETS env var if you put it somewhere else)
sudo mv /tmp/kubebuilder_2.0.0-beta.0_${os}_${arch} /usr/local/kubebuilder
export PATH=$PATH:/usr/local/kubebuilder/bin

You also need to install kustomize, a YAML rendering tool powerful enough to make Helm tremble:

go install sigs.k8s.io/kustomize/v3/cmd/kustomize

Use

Note that you need a Kubernetes cluster first; a step-by-step installation tutorial can walk you through that.

Create CRD

kubebuilder init --domain sealyun.com --license apache2 --owner "fanux"
kubebuilder create api --group infra --version v1 --kind VirtulMachine

Install CRD and start controller

make install # Installation of CRD
make run # Start controller

Then we can see the created CRD.

# kubectl get crd
NAME                                           AGE
virtulmachines.infra.sealyun.com                  52m

To create a virtual machine:

# kubectl apply -f config/samples/
# kubectl get virtulmachines.infra.sealyun.com 
NAME                   AGE
virtulmachine-sample   49m

Take a look at the yaml file:

# cat config/samples/infra_v1_virtulmachine.yaml 
apiVersion: infra.sealyun.com/v1
kind: VirtulMachine
metadata:
  name: virtulmachine-sample
spec:
  # Add fields here
  foo: bar

At this point the YAML is only stored in etcd; our controller does nothing when it receives the creation event.

Deploy the controller into the cluster

make docker-build docker-push IMG=fanux/infra-controller
make deploy

My Kubernetes cluster is at the remote end of the company network, and make docker-build failed for me: the tests could not find the etcd binary, so I disabled the tests first.

Modify Makefile:

# docker-build: test
docker-build: 

The Dockerfile uses gcr.io/distroless/static:latest as the base image, which you may not be able to pull; feel free to change it. I changed it to golang:1.12.7.

Some dependencies may also fail to download during the build. Use go mod vendor to package the dependencies:

go mod vendor
If some modules cannot be downloaded locally, use a proxy:

export GOPROXY=https://goproxy.io

Then modify the Dockerfile and comment out the download step:

After modification:

# Build the manager binary
FROM golang:1.12.7 as builder

WORKDIR /go/src/github.com/fanux/sealvm
# Copy the Go Modules manifests
COPY . . 

# Build
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o manager main.go

# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
# FROM gcr.io/distroless/static:latest
FROM golang:1.12.7
WORKDIR /
COPY --from=builder /go/src/github.com/fanux/sealvm/manager .
ENTRYPOINT ["/manager"]

make deploy reports an error: Error: json: cannot unmarshal string into Go struct field Kustomization.patches of type types.Patch

Change patches in config/default/kustomization.yaml to patchesStrategicMerge:

kustomize build config/default renders the controller's YAML manifests; run it to see what will be deployed.

You can see the controller is already running:

kubectl get deploy -n sealvm-system
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
sealvm-controller-manager   1         1         1            0           3m
kubectl get svc -n sealvm-system
NAME                                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
sealvm-controller-manager-metrics-service   ClusterIP   10.98.71.199   <none>        8443/TCP   4m

Development

Adding spec fields to the object

Look at the yaml file under config/samples:

apiVersion: infra.sealyun.com/v1
kind: VirtulMachine
metadata:
  name: virtulmachine-sample
spec:
  # Add fields here
  foo: bar

The spec only contains foo: bar, so let's add virtual machine CPU and memory fields:

Edit api/v1/virtulmachine_types.go directly:

// VirtulMachineSpec defines the desired state of VirtulMachine
// Add information here
type VirtulMachineSpec struct {
    // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
    // Important: Run "make" to regenerate code after modifying this file
    CPU    string `json:"cpu"`   // That's what I added.
    Memory string `json:"memory"`
}

// VirtulMachineStatus defines the observed state of VirtulMachine
// Add status information here, such as virtual machine is boot state, stop state, etc.
type VirtulMachineStatus struct {
    // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
    // Important: Run "make" to regenerate code after modifying this file
}

Then build and run:

make && make install && make run

By rendering the controller's yaml, you can see that the CRD already has CPU and memory information on it:

kustomize build config/default

properties:
  cpu:
    description: 'INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
      Important: Run "make" to regenerate code after modifying this file'
    type: string
  memory:
    type: string

Modify yaml:

apiVersion: infra.sealyun.com/v1
kind: VirtulMachine
metadata:
  name: virtulmachine-sample
spec:
  cpu: "1"
  memory: "2G"
# kubectl apply -f config/samples 
virtulmachine.infra.sealyun.com "virtulmachine-sample" configured
# kubectl get virtulmachines.infra.sealyun.com virtulmachine-sample -o yaml 
apiVersion: infra.sealyun.com/v1
kind: VirtulMachine
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"infra.sealyun.com/v1","kind":"VirtulMachine","metadata":{"annotations":{},"name":"virtulmachine-sample","namespace":"default"},"spec":{"cpu":"1","memory":"2G"}}
  creationTimestamp: 2019-07-26T08:47:34Z
  generation: 2
  name: virtulmachine-sample
  namespace: default
  resourceVersion: "14811698"
  selfLink: /apis/infra.sealyun.com/v1/namespaces/default/virtulmachines/virtulmachine-sample
  uid: 030e2b9a-af82-11e9-b63e-5254bc16e436
spec:      # The new CRD has come into force
  cpu: "1"
  memory: 2G 

Status works the same way; I won't go into details. For example, I add a Create state to the status to indicate that the controller is about to create a virtual machine (mainly control-plane logic), and change the state to Running once the machine is created.
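The Create-to-Running flow described above can be sketched as a small state machine. This is an illustrative pure-Go sketch, not controller-runtime code; the state names beyond Create and Running, and the nextState helper, are my own assumptions:

```go
package main

import "fmt"

// Illustrative VM lifecycle states; the article only names Create and Running.
const (
	StateEmpty   = ""        // status not yet set by the controller
	StateCreate  = "Create"  // controller is about to create the VM
	StateRunning = "Running" // the VM has been created
)

// nextState sketches the control-loop transition: an object with no status
// moves to Create, and a Create object moves to Running once the VM exists.
func nextState(current string) string {
	switch current {
	case StateEmpty:
		return StateCreate
	case StateCreate:
		return StateRunning
	default:
		return current // Running (or anything else) stays as-is
	}
}

func main() {
	s := StateEmpty
	for i := 0; i < 3; i++ {
		s = nextState(s)
		fmt.Println(s)
	}
}
```

In the real controller, each transition would be written back with r.Status().Update as shown below.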

Reconcile is the only interface you need to implement

The controller encapsulates both polling and event watching behind this interface; you don't need to care about how events are watched.

Getting Virtual Machine Information

func (r *VirtulMachineReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
    ctx := context.Background()
    log := r.Log.WithValues("virtulmachine", req.NamespacedName)

    vm := &v1.VirtulMachine{}
    if err := r.Get(ctx, req.NamespacedName, vm); err != nil { // fetch the VM object
        log.Error(err, "unable to fetch vm")
    } else {
        fmt.Println(vm.Spec.CPU, vm.Spec.Memory) // print CPU and memory
    }

    return ctrl.Result{}, nil
}

Run make && make install && make run, then create a virtual machine with kubectl apply -f config/samples; the log will print the CPU and memory. The List interface works the same way, so I won't dwell on it:

r.List(ctx, &vms, client.InNamespace(req.Namespace), client.MatchingField(vmkey, req.Name))

Update status

Add a status field to the status structure:

type VirtulMachineStatus struct {
    Status string `json:"status"`
}

To update the status in the controller:

vm.Status.Status = "Running"
if err := r.Status().Update(ctx, vm); err != nil {
    log.Error(err, "unable to update vm status")
}

If the server reports "could not find the requested resource", add the // +kubebuilder:subresource:status marker to the CRD struct:

// +kubebuilder:subresource:status
// +kubebuilder:object:root=true

type VirtulMachine struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   VirtulMachineSpec   `json:"spec,omitempty"`
    Status VirtulMachineStatus `json:"status,omitempty"`
}

That's all it takes.

After rebuilding and starting the controller, apply again and you'll find the state has changed to Running:

# kubectl get virtulmachines.infra.sealyun.com virtulmachine-sample -o yaml
...
status:
  status: Running 

Delete

time.Sleep(time.Second * 10)
if err := r.Delete(ctx, vm); err != nil {
    log.Error(err, "unable to delete vm ", "vm", vm)
}

After 10 seconds, a GET no longer finds the object.

Deletion and finalizers

Without finalizers, kubectl delete removes the etcd data directly, and when the controller tries to fetch the CRD object again it fails:

ERRO[0029] VirtulMachine.infra.sealyun.com "virtulmachine-sample" not foundunable to fetch vm  source="virtulmachine_controller.go:48"

So we add a finalizer to the CRD object when it is created:

vm.ObjectMeta.Finalizers = append(vm.ObjectMeta.Finalizers, "virtulmachine.infra.sealyun.com")

Then, on deletion, only a deletion timestamp is added to the object so we can do our cleanup; once the cleanup is done, we remove the finalizer:

If the DeletionTimestamp is not set:
    If our finalizer is not present:
        Add the finalizer and update the CRD object
Otherwise, the object is being deleted:
    If our finalizer is present, do the cleanup, then remove the finalizer and update the CRD object

See a complete code example:

if cronJob.ObjectMeta.DeletionTimestamp.IsZero() {
    if !containsString(cronJob.ObjectMeta.Finalizers, myFinalizerName) {
        cronJob.ObjectMeta.Finalizers = append(cronJob.ObjectMeta.Finalizers, myFinalizerName)
        if err := r.Update(context.Background(), cronJob); err != nil {
            return ctrl.Result{}, err
        }
    }
} else {
    if containsString(cronJob.ObjectMeta.Finalizers, myFinalizerName) {
        if err := r.deleteExternalResources(cronJob); err != nil {
            return ctrl.Result{}, err
        }

        cronJob.ObjectMeta.Finalizers = removeString(cronJob.ObjectMeta.Finalizers, myFinalizerName)
        if err := r.Update(context.Background(), cronJob); err != nil {
            return ctrl.Result{}, err
        }
    }
}
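The containsString and removeString helpers above are not library functions; the kubebuilder book defines them as small slice utilities. A minimal version looks like this:

```go
package main

import "fmt"

// containsString reports whether slice contains s.
func containsString(slice []string, s string) bool {
	for _, item := range slice {
		if item == s {
			return true
		}
	}
	return false
}

// removeString returns a copy of slice with every occurrence of s removed.
func removeString(slice []string, s string) []string {
	result := make([]string, 0, len(slice))
	for _, item := range slice {
		if item != s {
			result = append(result, item)
		}
	}
	return result
}

func main() {
	finalizers := []string{"virtulmachine.infra.sealyun.com"}
	fmt.Println(containsString(finalizers, "virtulmachine.infra.sealyun.com"))
	fmt.Println(removeString(finalizers, "virtulmachine.infra.sealyun.com"))
}
```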

Retrying on failure

Suppose CRD A depends on B, and B has not been created yet. When reconciling A we can simply return an error; the request will be requeued and retried shortly.
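The pattern can be sketched as follows. This is an illustrative stand-alone sketch: the Result type here is a stand-in for ctrl.Result from controller-runtime, and reconcileA and its bExists parameter are hypothetical names:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Result is a stand-in for ctrl.Result, which carries requeue hints
// back to the controller runtime.
type Result struct {
	Requeue      bool
	RequeueAfter time.Duration
}

var errDependencyMissing = errors.New("dependency B not created yet")

// reconcileA sketches the retry pattern: if B does not exist yet, return an
// error and let the runtime requeue and retry the request.
func reconcileA(bExists bool) (Result, error) {
	if !bExists {
		// Returning a non-nil error causes the request to be retried soon.
		return Result{}, errDependencyMissing
		// Alternatively, poll again later without flagging an error:
		// return Result{RequeueAfter: 30 * time.Second}, nil
	}
	return Result{}, nil
}

func main() {
	if _, err := reconcileA(false); err != nil {
		fmt.Println("requeued:", err)
	}
	if _, err := reconcileA(true); err == nil {
		fmt.Println("reconciled")
	}
}
```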

webhook

Kubernetes has three kinds of webhooks: admission webhooks, authorization webhooks, and CRD conversion webhooks.

For example, we may want to set default values on the CRD, or forbid creation when the user omits required parameters, and so on.

Using a webhook is also very simple: just implement the Defaulter and Validator interfaces on your type.
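As a sketch of what those two interfaces look like on our type: the Default and ValidateCreate methods below mirror the shape of controller-runtime's webhook Defaulter and Validator interfaces, but this is a stand-alone illustration, not wired into a webhook server, and the default CPU value "1" is my own assumption:

```go
package main

import (
	"errors"
	"fmt"
)

// VirtulMachineSpec mirrors the spec from the article.
type VirtulMachineSpec struct {
	CPU    string
	Memory string
}

type VirtulMachine struct {
	Spec VirtulMachineSpec
}

// Default mirrors the Defaulter interface: fill in default values for
// fields the user left empty.
func (vm *VirtulMachine) Default() {
	if vm.Spec.CPU == "" {
		vm.Spec.CPU = "1" // assumed default, for illustration only
	}
}

// ValidateCreate mirrors the Validator interface: reject creation when
// required parameters are missing.
func (vm *VirtulMachine) ValidateCreate() error {
	if vm.Spec.Memory == "" {
		return errors.New("spec.memory is required")
	}
	return nil
}

func main() {
	vm := &VirtulMachine{}
	vm.Default()
	fmt.Println(vm.Spec.CPU) // defaulted to "1"
	if err := vm.ValidateCreate(); err != nil {
		fmt.Println("rejected:", err)
	}
}
```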

Other interfaces

The reconciler struct embeds the Client interface, so all client methods can be called directly; most of them operate on CRD objects.

type Client interface {
    Reader
    Writer
    StatusClient
}
// Reader knows how to read and list Kubernetes objects.
type Reader interface {
    // Get retrieves an obj for the given object key from the Kubernetes Cluster.
    // obj must be a struct pointer so that obj can be updated with the response
    // returned by the Server.
    Get(ctx context.Context, key ObjectKey, obj runtime.Object) error

    // List retrieves list of objects for a given namespace and list options. On a
    // successful call, Items field in the list will be populated with the
    // result returned from the server.
    List(ctx context.Context, list runtime.Object, opts ...ListOptionFunc) error
}

// Writer knows how to create, delete, and update Kubernetes objects.
type Writer interface {
    // Create saves the object obj in the Kubernetes cluster.
    Create(ctx context.Context, obj runtime.Object, opts ...CreateOptionFunc) error

    // Delete deletes the given obj from Kubernetes cluster.
    Delete(ctx context.Context, obj runtime.Object, opts ...DeleteOptionFunc) error

    // Update updates the given obj in the Kubernetes cluster. obj must be a
    // struct pointer so that obj can be updated with the content returned by the Server.
    Update(ctx context.Context, obj runtime.Object, opts ...UpdateOptionFunc) error

    // Patch patches the given obj in the Kubernetes cluster. obj must be a
    // struct pointer so that obj can be updated with the content returned by the Server.
    Patch(ctx context.Context, obj runtime.Object, patch Patch, opts ...PatchOptionFunc) error
}

// StatusClient knows how to create a client which can update status subresource
// for kubernetes objects.
type StatusClient interface {
    Status() StatusWriter
}


Posted on Wed, 07 Aug 2019 00:54:09 -0700 by jstinehelfer