jenkins+docker+gitlab+harbor+pipeline Rapid Deployment Release Process

Introduction
As the business grows, requirements keep increasing, and each requirement differs in size, development cycle, and release date. With a microservice-based system architecture, features overlap and the number of services keeps growing, while scope and functionality iterate rapidly, all of which demands faster and smarter deployment. Traditional manual deployment can no longer keep up.
For microservice development, continuous integration, continuous delivery, and continuous deployment are indispensable to the team's overall efficiency. Used well, CI/CD greatly improves productivity as well as the quality of what is delivered.
Process overview:

1. The developer tags the release version on GitLab (a tagging sketch follows this list).
2. Jenkins picks up the tagged version.
3. Write the pipeline:
3.1 Pull the project code (git clone)
3.2 Build the jar package with Maven
3.3 Write the Dockerfile
3.4 Build the image and push it to the local Harbor registry
3.5 Remove the old container, then pull the image and start a new container
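
A minimal sketch of step 1, assuming the date-hour-minute tag naming mentioned later in the post; the tag name below is only an example:

# on the developer's machine: create an annotated release tag and push it to GitLab
git tag -a 201908282100 -m "release 201908282100"
git push origin 201908282100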

Jenkins preparation (related plugins):
1.1 Install plugins (a CLI alternative is sketched after item 1.4)

1.2 Use the Role-Based Strategy plugin

1.3 Add global and project roles

1.4 Grant permissions to users
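
The post installs the plugins through the web UI; as an alternative sketch, the Jenkins CLI can do the same (the Jenkins URL and the admin:token credentials below are placeholders; role-strategy is the update-center ID of the Role-Based Strategy plugin):

# install the Role-Based Strategy plugin, then restart Jenkins to load it
java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:token install-plugin role-strategy
java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:token safe-restart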

2. Master/slave (agent) introduction
2.1 Addresses the scenario where a single Jenkins node has too many build jobs, high load, and insufficient performance.

2.2 Choose the build node when building the project; the pipeline steps then run on the 192.168.17.7 server, as in the sketch below.
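
A minimal scripted-pipeline sketch of pinning the build to that agent; the label build-17.7 is an assumption, use whatever label the 192.168.17.7 node was registered with in Jenkins:

// run all stages on the agent labelled build-17.7 (the 192.168.17.7 slave) instead of the master
node('build-17.7') {
    stage('Git clone') {
        // ... the stages from the reference pipeline below go here
    }
}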

Jenkins graphical interface settings:
1. Limit the build history that is kept (by number of builds and by days).

2. Parameterized build: select the corresponding tag to deploy. (Tag versions are usually named by date-hour-minute for easy distinction.) A scripted-pipeline equivalent of these two settings is sketched below.
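
Both GUI settings can also be declared from the pipeline script itself. A sketch for a scripted pipeline: service_name matches the variable used in the reference pipeline below, the tag parameter corresponds to setting 2, and the default values are placeholders:

// keep a bounded build history and define the build parameters in code
properties([
    buildDiscarder(logRotator(numToKeepStr: '10', daysToKeepStr: '7')),
    parameters([
        string(name: 'service_name', defaultValue: 'oms', description: 'microservice to deploy, e.g. oms / ums'),
        string(name: 'tag', defaultValue: '', description: 'release tag, named by date-hour-minute')
    ])
])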

3. Pipeline writing

Reference pipeline script:

node() {
    stage('Git clone') {
        // pull the project code of the selected service; service_name is the build parameter
        git branch: 'test', credentialsId: 'dc404ee2-350e-4934-a668-da4aee1ba535', url: "git@xx.xx.xx.xx:meiyeyun/backend/xxxx/yx_${service_name}.git"
    }

    stage('maven build') {
        // put Maven and the JDK on PATH, then build the jar (tests skipped)
        def mvnhome = '/usr/local/maven'
        def jdkhome = '/usr/local/jdk1.8.0_161'
        env.PATH = "${mvnhome}/bin:${jdkhome}/bin:${env.PATH}"
        sh 'mvn clean package -Dmaven.test.skip=true'
    }

    stage('Create Dockerfile') {
        // generate the Dockerfile; the unquoted heredoc lets the shell expand ${service_name}
        sh '''cat << EOF > Dockerfile
FROM java
ADD target/${service_name}-0.0.1-SNAPSHOT.jar app.jar
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
ENTRYPOINT java -Djava.library.path=/usr/local/lib -server -XX:ReservedCodeCacheSize=64m -XX:TLABWasteTargetPercent=10 -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -XX:+CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UnlockDiagnosticVMOptions -XX:ParallelGCThreads=8 -XX:ConcGCThreads=4 -Xss256k -Xms2g -Xmx2g -XX:MaxDirectMemorySize=256m -XX:MaxTenuringThreshold=3 -XX:NewRatio=1 -XX:SurvivorRatio=8 -XX:ParGCCardsPerStrideChunk=32768 -XX:+AlwaysPreTouch -jar /app.jar --spring.profiles.active=test
EOF
'''
        sh 'cat Dockerfile'
    }

    stage("build images and push to harbor") {
        // tag the image with the Harbor project prefix and push it to the local registry
        sh '''
containerPool=172.10.2.10/myy_test
docker build -t ${service_name}:v1 .
docker tag ${service_name}:v1 ${containerPool}/${service_name}:v1
docker login -u admin -p 124356 172.10.2.10
docker push ${containerPool}/${service_name}:v1
'''
    }

    stage("pull images and deploy") {
        // redeploy on every backend node in the myy_test ansible group
        sh '''
containerPool=172.10.2.10/myy_test
# remove the old container first (this command fails if the container does not exist yet)
ansible myy_test -a "docker rm -f ${service_name}"
ansible myy_test -a "docker login -u admin -p 124356 172.10.2.10"
ansible myy_test -a "docker pull ${containerPool}/${service_name}:v1"
# note: with --network=host the -p 8991:8991 mapping is ignored; the service listens on the host port directly
ansible myy_test -a "docker run -d -p 8991:8991 --name ${service_name} -v /data/${service_name}/:/data/ --network=host ${containerPool}/${service_name}:v1"
ansible myy_test -m shell -a "docker ps |grep ${service_name}"
'''
    }
}
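
The push and pull stages assume every Docker daemon already trusts the Harbor registry at 172.10.2.10. If Harbor is served over plain HTTP (the post does not say), each Docker host would need something like the following before docker login works; this is a hedged sketch, not part of the original pipeline:

# mark the Harbor address as an insecure registry (merge into daemon.json if it already exists), then restart Docker
cat > /etc/docker/daemon.json << 'EOF'
{ "insecure-registries": ["172.10.2.10"] }
EOF
systemctl restart docker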

Note:

1. 172.10.2.10 is the Harbor registry address.
2. Many microservices are defined in GitLab, such as yx_oms.git and yx_ums.git; Jenkins can deploy whichever one is selected.

3. For high availability and load balancing, the backend is generally deployed on two machines, so Ansible is used to manage the backend nodes: removing containers, pulling images, and starting containers (a sample inventory is sketched below).
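
A hedged sketch of the Ansible inventory this implies: the myy_test group used by the ad-hoc commands in the pipeline would list the two backend hosts (the IPs below are placeholders):

# /etc/ansible/hosts -- the myy_test group targeted by the ansible commands above
[myy_test]
192.168.17.7
192.168.17.8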

Tags: Linux Docker git ansible jenkins

Posted on Wed, 28 Aug 2019 21:13:45 -0700 by pdub56