This section installs Jenkins, the Continuous Integration / Continuous Deployment server that runs the pipeline built to overcome Integration Hell; SonarQube, the tool that inspects source code; and Nexus, which handles builds and acts as the external registry. It also writes the Jenkinsfile that defines the Build Pipeline.
- Prepare the images
- Create get-image.sh
- jenkins-slave-skopeo-centos7 is pulled so skopeo can be used; it can be taken from the local git repo
#!/bin/bash

echo "set variables"
export RH_ID='REDHAT_ID'
export RH_PW='REDHAT_PASSWORD'

echo "login to external registry"
podman login -u ${RH_ID} -p ${RH_PW} registry.redhat.io
sleep 2

echo "pull images"
podman pull registry.redhat.io/openshift4/ose-jenkins:v4.4
podman pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:v4.2
podman pull registry.redhat.io/openshift4/jenkins-slave-skopeo-centos7:latest
podman pull registry.redhat.io/jboss-eap-7/eap72-openshift:1.2
podman pull registry.redhat.io/rhscl/postgresql-96-rhel7:latest
podman pull docker.io/openshiftdemos/gogs:0.11.34
podman pull docker.io/siamaksade/sonarqube:latest

echo "save images"
podman save -o cicd.tar \
  registry.redhat.io/openshift4/ose-jenkins:v4.4 \
  registry.redhat.io/openshift4/ose-jenkins-agent-maven:v4.2 \
  registry.redhat.io/openshift4/jenkins-slave-skopeo-centos7:latest \
  registry.redhat.io/jboss-eap-7/eap72-openshift:1.2 \
  registry.redhat.io/rhscl/postgresql-96-rhel7:latest \
  docker.io/openshiftdemos/gogs:0.11.34 \
  docker.io/siamaksade/sonarqube:latest
- Run get-image.sh

sh get-image.sh
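The pull and save lists in get-image.sh repeat the same image names, which is how the missing line-continuation slipped in. Keeping the list in one array avoids that class of mistake. A sketch that echoes the commands instead of running podman (drop the echo in the real script):

```shell
#!/bin/bash
# Keep the image list in one place so pull and save cannot drift apart.
IMAGES=(
  registry.redhat.io/openshift4/ose-jenkins:v4.4
  registry.redhat.io/openshift4/ose-jenkins-agent-maven:v4.2
  registry.redhat.io/openshift4/jenkins-slave-skopeo-centos7:latest
  registry.redhat.io/jboss-eap-7/eap72-openshift:1.2
  registry.redhat.io/rhscl/postgresql-96-rhel7:latest
  docker.io/openshiftdemos/gogs:0.11.34
  docker.io/siamaksade/sonarqube:latest
)

for img in "${IMAGES[@]}"; do
  echo podman pull "$img"          # drop echo to actually pull
done
echo podman save -o cicd.tar "${IMAGES[@]}"
```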
- Create push-image.sh

cat <<'EOF' > push-image.sh
#!/bin/bash

# set variables (point PRV_REG at your mirror registry,
# e.g. bastion.redhat2.cccr.local:5000)
export PRV_ID='admin'
export PRV_PW='admin'
export PRV_REG='registry.demo.ocp4.com:5000'

echo "=== SET VARIABLE ============================================"
echo PRIVATE_REG=${PRV_REG}
echo "============================================================="

# load images
podman load -i cicd.tar

# tag images (the skopeo agent image is in cicd.tar as well; mirror it too)
podman tag registry.redhat.io/openshift4/ose-jenkins:v4.4 ${PRV_REG}/openshift4/ose-jenkins:v4.4
podman tag registry.redhat.io/openshift4/ose-jenkins-agent-maven:v4.2 ${PRV_REG}/openshift4/ose-jenkins-agent-maven:v4.2
podman tag registry.redhat.io/openshift4/jenkins-slave-skopeo-centos7:latest ${PRV_REG}/openshift4/jenkins-slave-skopeo-centos7:latest
podman tag registry.redhat.io/rhscl/postgresql-96-rhel7:latest ${PRV_REG}/rhscl/postgresql-96-rhel7:latest
podman tag docker.io/openshiftdemos/gogs:0.11.34 ${PRV_REG}/openshiftdemos/gogs:0.11.34
podman tag registry.redhat.io/jboss-eap-7/eap72-openshift:1.2 ${PRV_REG}/jboss-eap-7/eap72-openshift:1.2
podman tag docker.io/siamaksade/sonarqube:latest ${PRV_REG}/siamaksade/sonarqube:latest

# login to the private registry
podman login -u ${PRV_ID} -p ${PRV_PW} ${PRV_REG}

# push images
podman push ${PRV_REG}/openshift4/ose-jenkins:v4.4
podman push ${PRV_REG}/openshift4/ose-jenkins-agent-maven:v4.2
podman push ${PRV_REG}/openshift4/jenkins-slave-skopeo-centos7:latest
podman push ${PRV_REG}/rhscl/postgresql-96-rhel7:latest
podman push ${PRV_REG}/openshiftdemos/gogs:0.11.34
podman push ${PRV_REG}/jboss-eap-7/eap72-openshift:1.2
podman push ${PRV_REG}/siamaksade/sonarqube:latest
EOF
- Run push-image.sh

sh push-image.sh
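Every tag/push pair in push-image.sh maps a source image to the same path under the mirror registry, i.e. only the registry host changes. That mapping can be derived with shell parameter expansion instead of being typed by hand (a sketch; PRV_REG as in the script):

```shell
#!/bin/bash
PRV_REG='registry.demo.ocp4.com:5000'

# Strip the source registry host (everything up to the first '/')
# and prefix the mirror registry instead.
mirror_tag() {
  local img=$1
  echo "${PRV_REG}/${img#*/}"
}

mirror_tag registry.redhat.io/openshift4/ose-jenkins:v4.4
# -> registry.demo.ocp4.com:5000/openshift4/ose-jenkins:v4.4
```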
cluster-admin privileges are required for this task.
- Create the ConfigMap file that registers the mirror registry's trust CA

cat <<EOF > mirror-registry-ca.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mirror-registry-ca
  namespace: openshift-config
data:
  registry.demo.ocp4.com..5000: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
EOF

- Paste the mirror registry's certificate into the block under the registry key (elided above).
- The default location of the mirror registry's certificate is /opt/registry/certs
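In the additionalTrustedCA ConfigMap, a registry served on a non-default port is keyed as `hostname..port` — the `:` of `registry.demo.ocp4.com:5000` becomes `..`. The key can be derived mechanically from the registry address (a small sketch):

```shell
#!/bin/bash
# Convert host:port into the host..port key form used by
# additionalTrustedCA ConfigMaps for the cluster image config.
ca_key() {
  echo "$1" | sed 's/:/../'
}

ca_key registry.demo.ocp4.com:5000   # -> registry.demo.ocp4.com..5000
```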
- Log in to OCP with cluster-admin privileges and create it in the openshift-config project

oc login -u <cluster-admin id> -p <cluster-admin pw>
oc create -f mirror-registry-ca.yaml -n openshift-config
- Change the cluster image config with oc edit

oc edit image.config.openshift.io cluster

(...)
spec:
  additionalTrustedCA:
    name: mirror-registry-ca
(...)
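Interactive oc edit is hard to script; the same change can be applied non-interactively with a merge patch. A sketch that prints the command (drop the echo to apply it; the payload only references the mirror-registry-ca ConfigMap created above):

```shell
#!/bin/bash
# Merge-patch the cluster image config to trust the mirror registry CA.
CM_NAME=mirror-registry-ca
PATCH="{\"spec\":{\"additionalTrustedCA\":{\"name\":\"${CM_NAME}\"}}}"

echo oc patch image.config.openshift.io/cluster --type=merge -p "${PATCH}"
# drop echo to apply for real (requires cluster-admin)
```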
- Create the projects
- Three projects are prepared: dev-demo (development), stage-demo (staging), and cicd-demo (CI/CD tools).
- The nodes are labeled so that the applications deployed to dev and stage, as well as the CI/CD tools, can be scheduled onto the service nodes, distributing the load across them.

oc login -u <ocp id> -p <ocp pw>
oc label node service-1.redhat2.cccr.local cicd=true
oc label node service-1.redhat2.cccr.local service=true
oc label node service-2.redhat2.cccr.local cicd=true
oc label node service-2.redhat2.cccr.local service=true
oc new-project dev-demo --display-name="test - Dev"
oc new-project stage-demo --display-name="test - Stage"
oc new-project cicd-demo --display-name="CI/CD"
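The cicd=true and service=true labels do nothing on their own; a workload opts in through a nodeSelector. A hypothetical fragment (not from this guide) showing how a deployment would pin its pods to the labeled nodes:

```yaml
# Fragment only: goes under the pod template of a
# DeploymentConfig/Deployment; names here are illustrative.
spec:
  template:
    spec:
      nodeSelector:
        cicd: "true"
```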
- Grant the cicd project permission to build and deploy into dev and stage

oc policy add-role-to-group edit system:serviceaccounts:cicd-demo -n dev-demo
oc policy add-role-to-group edit system:serviceaccounts:cicd-demo -n stage-demo
- Secrets for accessing the mirror registry and Quay must be created and linked.
oc -n cicd-demo create secret docker-registry mirror-reg-pull \
--docker-server=bastion.redhat2.cccr.local:5000 \
--docker-username=devops \
--docker-password=dkagh1. \
--docker-email='[email protected]'
oc -n dev-demo create secret docker-registry mirror-reg-pull \
--docker-server=bastion.redhat2.cccr.local:5000 \
--docker-username=devops \
--docker-password=dkagh1. \
--docker-email='[email protected]'
oc -n stage-demo create secret docker-registry mirror-reg-pull \
--docker-server=bastion.redhat2.cccr.local:5000 \
--docker-username=devops \
--docker-password=dkagh1. \
--docker-email='[email protected]'
oc -n cicd-demo create secret docker-registry quay-cicd-secret \
--docker-server=quay.redhat2.cccr.local \
--docker-username=admin \
--docker-password=password \
--docker-email='[email protected]'
oc -n dev-demo create secret docker-registry quay-cicd-secret \
--docker-server=quay.redhat2.cccr.local \
--docker-username=admin \
--docker-password=password \
--docker-email='[email protected]'
oc -n stage-demo create secret docker-registry quay-cicd-secret \
--docker-server=quay.redhat2.cccr.local \
--docker-username=admin \
--docker-password=password \
--docker-email='[email protected]'
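The six create-secret commands differ only in the target namespace and registry, so they can be generated from one loop. A sketch that prints the commands instead of running them (credentials as in the original; drop the echo to create the secrets for real):

```shell
#!/bin/bash
NAMESPACES="cicd-demo dev-demo stage-demo"

for ns in ${NAMESPACES}; do
  echo oc -n "$ns" create secret docker-registry mirror-reg-pull \
    --docker-server=bastion.redhat2.cccr.local:5000 \
    --docker-username=devops --docker-password=dkagh1. \
    --docker-email='[email protected]'
  echo oc -n "$ns" create secret docker-registry quay-cicd-secret \
    --docker-server=quay.redhat2.cccr.local \
    --docker-username=admin --docker-password=password \
    --docker-email='[email protected]'
done
```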
oc -n cicd-demo secrets link builder mirror-reg-pull
oc -n dev-demo secrets link builder mirror-reg-pull
oc -n stage-demo secrets link builder mirror-reg-pull
oc -n cicd-demo secrets link default --for=pull quay-cicd-secret
oc -n dev-demo secrets link default --for=pull quay-cicd-secret
oc -n stage-demo secrets link default --for=pull quay-cicd-secret
cluster-admin privileges are required for this task.
- The skopeo image pulled above must also be imported.
- The maven, podman, and skopeo tools must be registered as agents in Jenkins.
- Then, when the pipeline runs, agent '<agent name>' selects the desired agent by label at the build stage where it is needed.
oc import-image jenkins:2 --from=bastion.redhat2.cccr.local:5000/openshift4/ose-jenkins:v4.4 --confirm -n openshift
oc import-image postgresql:9.6 --from=bastion.redhat2.cccr.local:5000/rhscl/postgresql-96-rhel7:latest --confirm -n openshift
oc import-image eap72-openshift:1.2 --from=bastion.redhat2.cccr.local:5000/jboss-eap-7/eap72-openshift:1.2 --confirm -n openshift
oc import-image sonarqube:latest --from=bastion.redhat2.cccr.local:5000/siamaksade/sonarqube:latest --confirm -n cicd-demo
oc import-image gogs:0.11.34 --from=bastion.redhat2.cccr.local:5000/openshiftdemos/gogs:0.11.34 --confirm -n cicd-demo
cluster-admin privileges are required for this task.
- Create the Jenkins template file
cat <<EOF > ocp-jenkins-template.json { "apiVersion": "v1", "kind": "Template", "labels": { "app": "jenkins-ephemeral", "template": "jenkins-ephemeral-template" }, "message": "A Jenkins service has been created in your project. Log into Jenkins with your OpenShift account. The tutorial at https://github.com/openshift/origin/blob/master/examples/jenkins/README.md contains more information about using this template.", "metadata": { "annotations": { "description": "Jenkins service, without persistent storage.\n\nWARNING: Any data stored will be lost upon pod destruction. Only use this template for testing.", "iconClass": "icon-jenkins", "openshift.io/display-name": "Jenkins (Ephemeral)", "openshift.io/documentation-url": "https://docs.okd.io/latest/using_images/other_images/jenkins.html", "openshift.io/long-description": "This template deploys a Jenkins server capable of managing OpenShift Pipeline builds and supporting OpenShift-based oauth login. The Jenkins configuration is stored in non-persistent storage, so this configuration should be used for experimental purposes only.", "openshift.io/provider-display-name": "Red Hat, Inc.", "openshift.io/support-url": "https://access.redhat.com", "tags": "instant-app,jenkins" }, "name": "jenkins-ephemeral" }, "objects": [ { "apiVersion": "v1", "kind": "Route", "metadata": { "annotations": { "haproxy.router.openshift.io/timeout": "4m", "template.openshift.io/expose-uri": "http://{.spec.host}{.spec.path}" }, "name": "${JENKINS_SERVICE_NAME}" }, "spec": { "tls": { "insecureEdgeTerminationPolicy": "Redirect", "termination": "edge" }, "to": { "kind": "Service", "name": "${JENKINS_SERVICE_NAME}" } } }, { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "annotations": { "template.alpha.openshift.io/wait-for-ready": "true" }, "name": "${JENKINS_SERVICE_NAME}" }, "spec": { "replicas": 1, "selector": { "name": "${JENKINS_SERVICE_NAME}" }, "strategy": { "type": "Recreate" }, "template": { "metadata": { "labels": { 
"name": "${JENKINS_SERVICE_NAME}" } }, "spec": { "containers": [ { "env": [ { "name": "OPENSHIFT_ENABLE_OAUTH", "value": "${ENABLE_OAUTH}" }, { "name": "OPENSHIFT_ENABLE_REDIRECT_PROMPT", "value": "true" }, { "name": "DISABLE_ADMINISTRATIVE_MONITORS", "value": "${DISABLE_ADMINISTRATIVE_MONITORS}" }, { "name": "KUBERNETES_MASTER", "value": "https://kubernetes.default:443" }, { "name": "KUBERNETES_TRUST_CERTIFICATES", "value": "true" }, { "name": "JENKINS_SERVICE_NAME", "value": "${JENKINS_SERVICE_NAME}" }, { "name": "JNLP_SERVICE_NAME", "value": "${JNLP_SERVICE_NAME}" } ], "image": " ", "imagePullPolicy": "IfNotPresent", "livenessProbe": { "failureThreshold": 2, "httpGet": { "path": "/login", "port": 8080 }, "initialDelaySeconds": 420, "periodSeconds": 360, "timeoutSeconds": 240 }, "name": "jenkins", "readinessProbe": { "httpGet": { "path": "/login", "port": 8080 }, "initialDelaySeconds": 3, "timeoutSeconds": 240 }, "resources": { "limits": { "memory": "${MEMORY_LIMIT}" } }, "securityContext": { "capabilities": {}, "privileged": false }, "terminationMessagePath": "/dev/termination-log", "volumeMounts": [ { "mountPath": "/var/lib/jenkins", "name": "${JENKINS_SERVICE_NAME}-data" } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "serviceAccountName": "${JENKINS_SERVICE_NAMEJENKINS_SERVICE_NAME}", "volumes": [ { "emptyDir": { "medium": "" }, "name": "${JENKINS_SERVICE_NAME}-data" } ] } }, "triggers": [ { "imageChangeParams": { 23 "automatic": true, "containerNames": [ "jenkins" ], "from": { "kind": "ImageStreamTag", "name": "${JENKINS_IMAGE_STREAM_TAG}", "namespace": "${NAMESPACE}" }, "lastTriggeredImage": "" }, "type": "ImageChange" }, { "type": "ConfigChange" } ] } }, { "apiVersion": "v1", "kind": "ServiceAccount", "metadata": { "annotations": { "serviceaccounts.openshift.io/oauth-redirectreference.jenkins": "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"${JENKINS_SERVICE_NAME}\"}}" }, "name": 
"${JENKINS_SERVICE_NAME}" } }, { "apiVersion": "v1", "groupNames": null, "kind": "RoleBinding", "metadata": { "name": "${JENKINS_SERVICE_NAME}_edit" }, "roleRef": { "name": "edit" }, "subjects": [ { "kind": "ServiceAccount", "name": "${JENKINS_SERVICE_NAME}" } ] }, { "apiVersion": "v1", "kind": "Service", "metadata": { "name": "${JNLP_SERVICE_NAME}" }, "spec": { "ports": [ { "name": "agent", "nodePort": 0, "port": 50000, "protocol": "TCP", "targetPort": 50000 } ], "selector": { "name": "${JENKINS_SERVICE_NAME}" }, "sessionAffinity": "None", "type": "ClusterIP" } }, { "apiVersion": "v1", "kind": "Service", "metadata": { "annotations": { "service.alpha.openshift.io/dependencies": "[{\"name\": \"${JNLP_SERVICE_NAME}\", \"namespace\": \"\", \"kind\": \"Service\"}]", "service.openshift.io/infrastructure": "true" }, "name": "${JENKINS_SERVICE_NAME}" }, "spec": { "ports": [ { "name": "web", "nodePort": 0, "port": 80, "protocol": "TCP", "targetPort": 8080 } ], "selector": { "name": "${JENKINS_SERVICE_NAME}" }, "sessionAffinity": "None", "type": "ClusterIP" } } ], "parameters": [ { "description": "The name of the OpenShift Service exposed for the Jenkins container.", "displayName": "Jenkins Service Name", "name": "JENKINS_SERVICE_NAME", "value": "jenkins" }, { "description": "The name of the service used for master/slave communication.", "displayName": "Jenkins JNLP Service Name", "name": "JNLP_SERVICE_NAME", "value": "jenkins-jnlp" }, { "description": "Whether to enable OAuth OpenShift integration. 
If false, the static account 'admin' will be initialized with the password 'password'.", "displayName": "Enable OAuth in Jenkins", "name": "ENABLE_OAUTH", "value": "true" }, { "description": "Maximum amount of memory the container can use.", "displayName": "Memory Limit", "name": "MEMORY_LIMIT", "value": "512Mi" }, { "description": "The OpenShift Namespace where the Jenkins ImageStream resides.", "displayName": "Jenkins ImageStream Namespace", "name": "NAMESPACE", "value": "openshift" }, { "description": "Whether to perform memory intensive, possibly slow, synchronization with the Jenkins Update Center on start. If true, the Jenkins core update monitor and site warnings monitor are disabled.", "displayName": "Disable memory intensive administrative monitors", "name": "DISABLE_ADMINISTRATIVE_MONITORS", "value": "false" }, { "description": "Name of the ImageStreamTag to be used for the Jenkins image.", "displayName": "Jenkins ImageStreamTag", "name": "JENKINS_IMAGE_STREAM_TAG", "value": "jenkins:2" } ] } EOF
- Register the Jenkins template in the openshift project
oc create -f ocp-jenkins-template.json -n openshift
- Create the Gogs template file
cat <<EOF > gogs-template-ephemeral.yaml kind: Template apiVersion: v1 metadata: annotations: description: The Gogs git server (<https://gogs.io/>) tags: instant-app,gogs,go,golang name: gogs objects: - kind: ServiceAccount apiVersion: v1 metadata: labels: app: ${APPLICATION_NAME} name: ${APPLICATION_NAME} - kind: Service apiVersion: v1 metadata: annotations: description: Exposes the database server name: ${APPLICATION_NAME}-postgresql spec: ports: - name: postgresql port: 5432 targetPort: 5432 selector: name: ${APPLICATION_NAME}-postgresql - kind: DeploymentConfig apiVersion: v1 metadata: annotations: description: Defines how to deploy the database name: ${APPLICATION_NAME}-postgresql labels: app: ${APPLICATION_NAME} spec: replicas: 1 selector: name: ${APPLICATION_NAME}-postgresql strategy: type: Recreate template: metadata: labels: name: ${APPLICATION_NAME}-postgresql name: ${APPLICATION_NAME}-postgresql spec: serviceAccountName: ${APPLICATION_NAME} containers: - env: - name: POSTGRESQL_USER value: ${DATABASE_USER} - name: POSTGRESQL_PASSWORD value: ${DATABASE_PASSWORD} - name: POSTGRESQL_DATABASE value: ${DATABASE_NAME} - name: POSTGRESQL_MAX_CONNECTIONS value: ${DATABASE_MAX_CONNECTIONS} - name: POSTGRESQL_SHARED_BUFFERS value: ${DATABASE_SHARED_BUFFERS} - name: POSTGRESQL_ADMIN_PASSWORD value: ${DATABASE_ADMIN_PASSWORD} image: ' ' livenessProbe: initialDelaySeconds: 30 tcpSocket: port: 5432 timeoutSeconds: 1 name: postgresql ports: - containerPort: 5432 readinessProbe: exec: command: - /bin/sh - -i - -c - psql -h 127.0.0.1 -U ${POSTGRESQL_USER} -q -d ${POSTGRESQL_DATABASE} -c 'SELECT 1' initialDelaySeconds: 5 timeoutSeconds: 1 resources: limits: memory: 512Mi volumeMounts: - mountPath: /var/lib/pgsql/data name: gogs-postgres-data volumes: - name: gogs-postgres-data emptyDir: {} triggers: - imageChangeParams: automatic: true containerNames: - postgresql from: kind: ImageStreamTag name: postgresql:${DATABASE_VERSION} namespace: openshift type: ImageChange - 
type: ConfigChange - kind: Service apiVersion: v1 metadata: annotations: description: The Gogs server's http port service.alpha.openshift.io/dependencies: '[{"name":"${APPLICATION_NAME}-postgresql","namespace":"","kind":"Service"}]' labels: app: ${APPLICATION_NAME} name: ${APPLICATION_NAME} spec: ports: - name: 3000-tcp port: 3000 protocol: TCP targetPort: 3000 - name: 10022-tcp port: 10022 protocol: TCP targetPort: 10022 selector: app: ${APPLICATION_NAME} deploymentconfig: ${APPLICATION_NAME} sessionAffinity: None type: ClusterIP - kind: Route apiVersion: v1 id: ${APPLICATION_NAME}-http metadata: annotations: description: Route for application's http service. labels: app: ${APPLICATION_NAME} name: ${APPLICATION_NAME} spec: host: ${HOSTNAME} port: targetPort: 3000-tcp to: name: ${APPLICATION_NAME} - kind: Route apiVersion: v1 id: ${APPLICATION_NAME}-ssh metadata: annotations: description: Route for application's ssh service. labels: app: ${APPLICATION_NAME} name: ${APPLICATION_NAME}-ssh spec: host: secure${HOSTNAME} port: targetPort: 10022-tcp to: name: ${APPLICATION_NAME} - kind: DeploymentConfig apiVersion: v1 metadata: labels: app: ${APPLICATION_NAME} name: ${APPLICATION_NAME} spec: replicas: 1 selector: app: ${APPLICATION_NAME} deploymentconfig: ${APPLICATION_NAME} strategy: resources: {} rollingParams: intervalSeconds: 1 maxSurge: 25% maxUnavailable: 25% timeoutSeconds: 600 updatePeriodSeconds: 1 type: Rolling template: metadata: creationTimestamp: null labels: app: ${APPLICATION_NAME} deploymentconfig: ${APPLICATION_NAME} spec: serviceAccountName: ${APPLICATION_NAME} containers: - image: " " imagePullPolicy: Always name: ${APPLICATION_NAME} ports: - containerPort: 3000 protocol: TCP - containerPort: 10022 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log volumeMounts: - name: gogs-data mountPath: /opt/gogs/data - name: gogs-config mountPath: /etc/gogs/conf readinessProbe: httpGet: path: / port: 3000 scheme: HTTP initialDelaySeconds: 3 
timeoutSeconds: 1 periodSeconds: 20 successThreshold: 1 failureThreshold: 3 livenessProbe: httpGet: path: / port: 3000 scheme: HTTP initialDelaySeconds: 3 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 dnsPolicy: ClusterFirst restartPolicy: Always securityContext: {} terminationGracePeriodSeconds: 30 volumes: - name: gogs-data emptyDir: {} - name: gogs-config configMap: name: gogs-config items: - key: app.ini path: app.ini test: false triggers: - type: ConfigChange - imageChangeParams: automatic: true containerNames: - ${APPLICATION_NAME} from: kind: ImageStreamTag name: ${APPLICATION_NAME}:${GOGS_VERSION} type: ImageChange - kind: ImageStream apiVersion: v1 metadata: labels: app: ${APPLICATION_NAME} name: ${APPLICATION_NAME} spec: tags: - name: "${GOGS_VERSION}" from: kind: DockerImage name: docker.io/openshiftdemos/gogs:${GOGS_VERSION} importPolicy: {} annotations: description: The Gogs git server docker image tags: gogs,go,golang version: "${GOGS_VERSION}" - kind: ConfigMap apiVersion: v1 metadata: name: gogs-config labels: app: ${APPLICATION_NAME} data: app.ini: | RUN_MODE = prod RUN_USER = gogs [database] DB_TYPE = postgres HOST = ${APPLICATION_NAME}-postgresql:5432 NAME = ${DATABASE_NAME} USER = ${DATABASE_USER} PASSWD = ${DATABASE_PASSWORD} [repository] ROOT = /opt/gogs/data/repositories [server] ROOT_URL=http://${HOSTNAME} SSH_DOMAIN=secure${HOSTNAME} START_SSH_SERVER=true SSH_LISTEN_PORT=10022 [security] INSTALL_LOCK = ${INSTALL_LOCK} [service] ENABLE_CAPTCHA = false [webhook] SKIP_TLS_VERIFY = ${SKIP_TLS_VERIFY} parameters: - description: The name for the application. name: APPLICATION_NAME required: true value: gogs - description: 'Custom hostname for http service route. 
Leave blank for default hostname, e.g.: <application-name>-<project>.<default-domain-suffix>' name: HOSTNAME required: true - displayName: Database Username from: gogs value: gogs name: DATABASE_USER - displayName: Database Password from: '[a-zA-Z0-9]{8}' value: gogs name: DATABASE_PASSWORD - displayName: Database Name name: DATABASE_NAME value: gogs - displayName: Database Admin Password from: '[a-zA-Z0-9]{8}' generate: expression name: DATABASE_ADMIN_PASSWORD - displayName: Maximum Database Connections name: DATABASE_MAX_CONNECTIONS value: "100" - displayName: Shared Buffer Amount name: DATABASE_SHARED_BUFFERS value: 12MB - displayName: Database version (PostgreSQL) name: DATABASE_VERSION value: "9.5" - name: GOGS_VERSION displayName: Gogs Version description: 'Version of the Gogs container image to be used (check the available version <https://hub.docker.com/r/openshiftdemos/gogs/tags>)' value: "0.11.34" required: true - name: INSTALL_LOCK displayName: Installation lock description: 'If set to true, installation (/install) page will be disabled. Set to false if you want to run the installation wizard via web' value: "true" - name: SKIP_TLS_VERIFY displayName: Skip TLS verification on webhooks description: Skip TLS verification on webhooks. Enable with caution! EOF
- Update the image reference in the gogs template

PRV_REG=registry.demo.ocp4.com:5000   # mirror registry URL
sed -i "s|docker.io/openshiftdemos/gogs|${PRV_REG}/openshiftdemos/gogs|g" gogs-template-ephemeral.yaml
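Note the quoting: inside single quotes the shell does not expand ${PRV_REG}, so the substitution must run inside double quotes, and using | as the sed delimiter avoids escaping every slash in the image paths. A quick self-contained check of the rewrite on a temp file:

```shell
#!/bin/bash
PRV_REG=registry.demo.ocp4.com:5000

tmp=$(mktemp)
echo "name: docker.io/openshiftdemos/gogs:0.11.34" > "$tmp"

# Double quotes let ${PRV_REG} expand; '|' keeps the slashes unescaped.
sed -i "s|docker.io/openshiftdemos/gogs|${PRV_REG}/openshiftdemos/gogs|g" "$tmp"

result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```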
- Create the sonarqube-template file
cat <<EOF > sonarqube-template.yml apiVersion: v1 kind: Template metadata: name: "sonarqube" objects: - apiVersion: v1 kind: ImageStream metadata: labels: app: sonarqube name: sonarqube spec: tags: - annotations: description: The SonarQube Docker image tags: sonarqube from: kind: DockerImage name: docker.io/siamaksade/sonarqube:latest importPolicy: {} name: latest - apiVersion: v1 kind: Secret stringData: database-name: ${POSTGRES_DATABASE_NAME} database-password: ${POSTGRES_PASSWORD} database-user: ${POSTGRES_USERNAME} metadata: labels: app: sonarqube template: postgresql-template name: sonardb type: Opaque - apiVersion: v1 stringData: password: ${SONAR_LDAP_BIND_PASSWORD} username: ${SONAR_LDAP_BIND_DN} kind: Secret metadata: name: sonar-ldap-bind-dn type: kubernetes.io/basic-auth - apiVersion: v1 kind: DeploymentConfig metadata: generation: 1 labels: app: sonarqube template: postgresql-template name: sonardb spec: replicas: 1 selector: name: sonardb strategy: activeDeadlineSeconds: 21600 recreateParams: timeoutSeconds: 600 resources: {} type: Recreate template: metadata: labels: name: sonardb spec: containers: - env: - name: POSTGRESQL_USER valueFrom: secretKeyRef: key: database-user name: sonardb - name: POSTGRESQL_PASSWORD valueFrom: secretKeyRef: key: database-password name: sonardb - name: POSTGRESQL_DATABASE valueFrom: secretKeyRef: key: database-name name: sonardb imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 5432 timeoutSeconds: 1 name: postgresql ports: - containerPort: 5432 protocol: TCP readinessProbe: exec: command: - /bin/sh - -i - -c - psql -h 127.0.0.1 -U $POSTGRESQL_USER -q -d $POSTGRESQL_DATABASE -c 'SELECT 1' failureThreshold: 3 initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: memory: ${POSTGRES_CONTAINER_MEMORY_SIZE_LIMIT} cpu: ${POSTGRES_CONTAINER_CPU_LIMIT} requests: memory: 1Gi securityContext: 
capabilities: {} privileged: false terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/lib/pgsql/data name: sonardb-data dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 volumes: - name: sonardb-data emptyDir: {} test: false triggers: - imageChangeParams: automatic: true containerNames: - postgresql from: kind: ImageStreamTag name: 'postgresql:9.6' namespace: openshift type: ImageChange - type: ConfigChange - apiVersion: v1 kind: DeploymentConfig metadata: generation: 1 labels: app: sonarqube name: sonarqube spec: replicas: 1 selector: app: sonarqube deploymentconfig: sonarqube strategy: activeDeadlineSeconds: 21600 recreateParams: timeoutSeconds: 600 type: Recreate template: metadata: labels: app: sonarqube deploymentconfig: sonarqube spec: containers: - env: - name: JDBC_URL value: jdbc:postgresql://sonardb:5432/sonar - name: JDBC_USERNAME valueFrom: secretKeyRef: key: database-user name: sonardb - name: JDBC_PASSWORD valueFrom: secretKeyRef: key: database-password name: sonardb - name: FORCE_AUTHENTICATION value: ${FORCE_AUTHENTICATION} - name: PROXY_HOST value: ${PROXY_HOST} - name: PROXY_PORT value: ${PROXY_PORT} - name: PROXY_USER value: ${PROXY_USER} - name: PROXY_PASSWORD value: ${PROXY_PASSWORD} - name: LDAP_URL value: ${SONAR_LDAP_URL} - name: LDAP_REALM value: ${SONAR_AUTH_REALM} - name: LDAP_AUTHENTICATION value: ${SONAR_LDAP_BIND_METHOD} - name: LDAP_USER_BASEDN value: ${SONAR_BASE_DN} - name: LDAP_USER_REAL_NAME_ATTR value: ${SONAR_LDAP_USER_REAL_NAME_ATTR} - name: LDAP_USER_EMAIL_ATTR value: ${SONAR_LDAP_USER_EMAIL_ATTR} - name: LDAP_USER_REQUEST value: ${SONAR_LDAP_USER_REQUEST} - name: LDAP_GROUP_BASEDN value: ${SONAR_LDAP_GROUP_BASEDN} - name: LDAP_GROUP_REQUEST value: ${SONAR_LDAP_GROUP_REQUEST} - name: LDAP_GROUP_ID_ATTR value: ${SONAR_LDAP_GROUP_ID_ATTR} - name: LDAP_CONTEXTFACTORY value: 
${SONAR_LDAP_CONTEXTFACTORY} - name: SONAR_AUTOCREATE_USERS value: ${SONAR_AUTOCREATE_USERS} - name: LDAP_BINDDN valueFrom: secretKeyRef: key: username name: sonar-ldap-bind-dn - name: LDAP_BINDPASSWD valueFrom: secretKeyRef: key: password name: sonar-ldap-bind-dn imagePullPolicy: Always livenessProbe: failureThreshold: 3 httpGet: path: / port: 9000 scheme: HTTP initialDelaySeconds: 45 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: sonarqube ports: - containerPort: 9000 protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: / port: 9000 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 200m memory: 1Gi limits: cpu: ${SONARQUBE_CPU_LIMIT} memory: ${SONARQUBE_MEMORY_LIMIT} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /opt/sonarqube/data name: sonar-data dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 volumes: - name: sonar-data emptyDir: {} test: false triggers: - imageChangeParams: automatic: true containerNames: - sonarqube from: kind: ImageStreamTag name: sonarqube:latest type: ImageChange - type: ConfigChange - apiVersion: v1 kind: Route metadata: labels: app: sonarqube name: sonarqube spec: port: targetPort: 9000-tcp tls: termination: edge to: kind: Service name: sonarqube weight: 100 wildcardPolicy: None - apiVersion: v1 kind: Service metadata: annotations: template.openshift.io/expose-uri: postgres://{.spec.clusterIP}:{.spec.ports[?(.name=="postgresql")].port} labels: app: sonarqube template: postgresql-template name: sonardb spec: ports: - name: postgresql port: 5432 protocol: TCP targetPort: 5432 selector: name: sonardb sessionAffinity: None type: ClusterIP status: loadBalancer: {} - apiVersion: v1 kind: Service metadata: labels: app: sonarqube name: sonarqube spec: ports: - name: 9000-tcp port: 9000 protocol: TCP targetPort: 
9000 selector: deploymentconfig: sonarqube sessionAffinity: None type: ClusterIP status: loadBalancer: {} parameters: - description: Password for the Posgres Database to be used by Sonarqube displayName: Postgres password name: POSTGRES_PASSWORD generate: expression from: '[a-zA-Z0-9]{16}' required: true - description: Username for the Posgres Database to be used by Sonarqube displayName: Postgres username name: POSTGRES_USERNAME generate: expression from: 'user[a-z0-9]{8}' required: true - description: Database name for the Posgres Database to be used by Sonarqube displayName: Postgres database name name: POSTGRES_DATABASE_NAME value: sonar required: true - description: Postgres Container Memory size limit displayName: Postgres Container Memory size limit name: POSTGRES_CONTAINER_MEMORY_SIZE_LIMIT value: 1Gi - description: Postgres Container CPU limit displayName: Postgres Container CPU limit name: POSTGRES_CONTAINER_CPU_LIMIT value: "1" - name: SONARQUBE_MEMORY_LIMIT description: SonarQube memory displayName: SonarQube memory value: 2Gi - name: SONARQUBE_CPU_LIMIT description: SonarQube Container CPU limit displayName: SonarQube Container CPU limit value: "2" - name: FORCE_AUTHENTICATION displayName: Force authentication value: "false" - name: SONAR_AUTH_REALM value: '' description: The type of authentication that SonarQube should be using (None or LDAP) (Ref - <https://docs.sonarqube.org/display/PLUG/LDAP+Plugin>) displayName: SonarQube Authentication Realm - name: SONAR_AUTOCREATE_USERS value: 'false' description: When using an external authentication system, should SonarQube automatically create accounts for users? displayName: Enable auto-creation of users from external authentication systems? 
required: true - name: PROXY_HOST description: Hostname of proxy server the SonarQube application should use to access the Internet displayName: Proxy server hostname/IP - name: PROXY_PORT description: TCP port of proxy server the SonarQube application should use to access the Internet displayName: Proxy server port - name: PROXY_USER description: Username credential when the Proxy Server requires authentication displayName: Proxy server username - name: PROXY_PASSWORD description: Password credential when the Proxy Server requires authentication displayName: Proxy server password - name: SONAR_LDAP_BIND_DN description: When using LDAP authentication, this is the Distinguished Name used for binding to the LDAP server displayName: LDAP Bind DN - name: SONAR_LDAP_BIND_PASSWORD description: When using LDAP for authentication, this is the password with which to bind to the LDAP server displayName: LDAP Bind Password - name: SONAR_LDAP_URL description: When using LDAP for authentication, this is the URL of the LDAP server in the form of ldap(s)://<hostname>:<port> displayName: LDAP Server URL - name: SONAR_LDAP_REALM description: When using LDAP, this allows for specifying a Realm within the directory server (Usually not used) displayName: LDAP Realm - name: SONAR_LDAP_AUTHENTICATION description: When using LDAP, this is the bind method (simple, GSSAPI, kerberos, CRAM-MD5, DIGEST-MD5) displayName: LDAP Bind Mode - name: SONAR_LDAP_USER_BASEDN description: The Base DN under which SonarQube should search for user accounts in the LDAP directory displayName: LDAP User Base DN - name: SONAR_LDAP_USER_REAL_NAME_ATTR description: The LDAP attribute which should be referenced to get a user's full name displayName: LDAP Real Name Attribute - name: SONAR_LDAP_USER_EMAIL_ATTR description: The LDAP attribute which should be referenced to get a user's e-mail address displayName: LDAP User E-Mail Attribute - name: SONAR_LDAP_USER_REQUEST description: An LDAP filter to be used to 
search for user objects in the LDAP directory displayName: LDAP User Request Filter - name: SONAR_LDAP_GROUP_BASEDN description: The Base DN under which SonarQube should search for groups in the LDAP directory displayName: LDAP Group Base DN - name: SONAR_LDAP_GROUP_REQUEST description: An LDAP filter to be used to search for group objects in the LDAP directory displayName: LDAP Group Request Filter - name: SONAR_LDAP_GROUP_ID_ATTR description: The LDAP attribute which should be referenced to get a group's ID displayName: LDAP Group Name Attribute - name: SONAR_LDAP_CONTEXTFACTORY description: The ContextFactory implementation to be used when communicating with the LDAP server displayName: LDAP Context Factory value: com.sun.jndi.ldap.LdapCtxFactory EOF
- Update the image reference in the sonarqube template

PRV_REG=registry.demo.ocp4.com:5000   # mirror registry URL
sed -i "s|docker.io/siamaksade/sonarqube|${PRV_REG}/siamaksade/sonarqube|g" sonarqube-template.yml
- Install Jenkins and configure its resources
oc new-app jenkins-ephemeral -n cicd-demo
oc set resources dc/jenkins --limits=cpu=2,memory=2Gi --requests=cpu=100m,memory=512Mi
- Commands for deleting everything related to a given app, for when it needs to be removed
oc delete all,configmap,pvc,serviceaccount,rolebinding --selector app=jenkins-ephemeral -n cicd-demo
oc delete all,configmap,pvc,serviceaccount,rolebinding --selector app=gogs -n cicd-demo
oc delete all,configmap,pvc,serviceaccount,rolebinding --selector app=sonarqube -n cicd-demo
HOSTNAME=$(oc get route jenkins -o template --template='{{.spec.host}}' | sed "s/jenkins-//g")
GOGS_HOSTNAME="gogs-$HOSTNAME"
oc new-app -f gogs-template-ephemeral.yaml \
--param=GOGS_VERSION=0.11.34 \
--param=DATABASE_VERSION=9.6 \
--param=HOSTNAME=$GOGS_HOSTNAME \
--param=SKIP_TLS_VERIFY=true
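The hostname derivation above can be checked locally with a sample route value (the hostname below is a stand-in for the actual `oc get route jenkins` output):

```shell
# sample value standing in for the jenkins route host
ROUTE_HOST="jenkins-cicd-demo.apps.demo.ocp4.com"
# strip the "jenkins-" prefix, then prepend "gogs-" as in the commands above
HOSTNAME=$(echo "$ROUTE_HOST" | sed "s/jenkins-//g")
GOGS_HOSTNAME="gogs-$HOSTNAME"
echo "$GOGS_HOSTNAME"   # gogs-cicd-demo.apps.demo.ocp4.com
```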
oc new-app -f sonarqube-template.yml --param=SONARQUBE_MEMORY_LIMIT=2Gi
oc set resources dc/sonardb --limits=cpu=500m,memory=1Gi --requests=cpu=50m,memory=128Mi
oc set resources dc/sonarqube --limits=cpu=1,memory=2Gi --requests=cpu=50m,memory=128Mi
- The build must be given a name (--name) so it can be referenced later.
oc new-build --name=demo --image-stream=eap72-openshift:1.2 --binary=true -n dev-demo
oc new-app demo:latest --allow-missing-images -n dev-demo
oc set triggers dc -l app=demo --containers=demo --from-image=demo:latest --manual -n dev-demo
oc expose dc/demo --port=8080 -n dev-demo
oc expose svc/demo -n dev-demo
oc set probe -n dev-demo dc/demo --readiness -- /bin/bash -c /opt/eap/bin/readinessProbe.sh
oc set probe -n dev-demo dc/demo --liveness --initial-delay-seconds=60 -- /bin/bash -c /opt/eap/bin/livenessProbe.sh
oc rollout cancel dc/demo -n dev-demo
oc new-app demo:stage --allow-missing-images -n stage-demo
oc set triggers dc -l app=demo --containers=demo --from-image=demo:stage --manual -n stage-demo
oc expose dc/demo --port=8080 -n stage-demo
oc expose svc/demo -n stage-demo
oc set probe dc/demo -n stage-demo --readiness -- /bin/bash -c /opt/eap/bin/readinessProbe.sh
oc set probe dc/demo -n stage-demo --liveness --initial-delay-seconds=60 -- /bin/bash -c /opt/eap/bin/livenessProbe.sh
oc rollout cancel dc/demo -n stage-demo
- Requirement: the Gogs user must already have been created through the Gogs web UI
- Upload the application, cloned in an environment with internet access, to the bastion server
git clone https://github.com/OpenShiftDemos/openshift-tasks.git
tar cvf openshift-tasks.tar openshift-tasks
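The archive-and-transfer step can be rehearsed locally. The round trip below uses scratch paths (all names hypothetical) to show that the tarball unpacks back to the same tree on the destination side:

```shell
# build a scratch tree, archive it, and unpack it elsewhere (stand-in for the bastion copy)
mkdir -p /tmp/tar-demo/src/openshift-tasks /tmp/tar-demo/dst
echo "sample" > /tmp/tar-demo/src/openshift-tasks/README.md
tar -C /tmp/tar-demo/src -cf /tmp/tar-demo/openshift-tasks.tar openshift-tasks
tar -C /tmp/tar-demo/dst -xf /tmp/tar-demo/openshift-tasks.tar
cat /tmp/tar-demo/dst/openshift-tasks/README.md   # sample
```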
The following is done on the bastion server (after extracting openshift-tasks.tar)
GOGS_HOSTNAME=$(oc get route gogs -o template --template='{{.spec.host}}')
cd openshift-tasks
git remote set-url origin http://${GOGS_HOSTNAME}/gogs/openshift-tasks.git
git push -u origin master
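Rewiring origin can be tried on a scratch repository first (the host and path below are samples, not the real Gogs route):

```shell
# scratch repo: add a remote, then point it at a Gogs-style URL
rm -rf /tmp/remote-demo
mkdir -p /tmp/remote-demo && cd /tmp/remote-demo
git init -q
git remote add origin http://example.com/old/openshift-tasks.git
git remote set-url origin http://gogs.example.com/gogs/openshift-tasks.git
git remote get-url origin   # http://gogs.example.com/gogs/openshift-tasks.git
```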
- pom.xml
...
<distributionManagement>
<repository>
<id>nexus</id>
<!-- change releases repository url -->
<url>http://<***nexus ip***>/repository/maven-releases/</url>
</repository>
<snapshotRepository>
<id>nexus</id>
<!-- change snapshot repository url -->
<url>http://<***nexus ip***>/repository/maven-snapshots/</url>
</snapshotRepository>
</distributionManagement>
...
- configuration/cicd-settings-nexus3.xml
- Note that the repository names may differ depending on the Maven version
<settings>
<servers>
<server>
<id>nexus</id>
<!-- change nexus id/pw -->
<username>admin</username>
<password>admin123</password>
</server>
</servers>
<mirrors>
<mirror>
<id>nexus</id>
<mirrorOf>*</mirrorOf>
<!-- change nexus url -->
<url>http://<***nexus ip***>:8081/repository/maven-public/</url>
</mirror>
</mirrors>
...
Nexus is run as a container on a server that has internet access and is reachable from the OCP nodes.
- pull nexus image
podman pull docker.io/sonatype/nexus3:latest
- run nexus image
Check SELinux first (disable it or set the appropriate context)
mkdir -p /opt/sonatype/nexus-data && chown -R 200 /opt/sonatype/nexus-data
podman run -d -p 8081:8081 --name nexus -v /opt/sonatype/nexus-data:/nexus-data sonatype/nexus3
- add redhat jboss repository to nexus
- In the Nexus UI, create the repository and add it to the public group
The sample application uses the JBoss Maven repository, so it must be added
This step is not needed when working with a different application
General availability repository: <https://maven.repository.redhat.com/ga/>
Early-access repository: <https://maven.repository.redhat.com/earlyaccess/all/>
Using Nexus 3 as Your Repository - Part 1: Maven Artifacts
- Write the Jenkins pipeline build template
apiVersion: v1
kind: Template
labels:
  template: cicd
  group: cicd
metadata:
  annotations:
    iconClass: icon-jenkins
    tags: instant-app,jenkins,gogs,nexus,cicd
  name: cicd
message: "Use the following credentials for login:\nJenkins: use your OpenShift credentials\nNexus: admin/admin123\nSonarQube: admin/admin\nGogs Git Server: gogs/gogs"
parameters:
- displayName: DEV project name
  value: dev
  name: DEV_PROJECT
  required: true
- displayName: STAGE project name
  value: stage
  name: STAGE_PROJECT
  required: true
- displayName: Ephemeral
  description: Use no persistent storage for Gogs and Nexus
  value: "true"
  name: EPHEMERAL
  required: true
- description: Webhook secret
  from: '[a-zA-Z0-9]{8}'
  generate: expression
  name: WEBHOOK_SECRET
  required: true
- displayName: Integrate Quay.io
  description: Integrate image build and deployment with Quay.io
  value: "true"
  name: ENABLE_QUAY
  required: true
- displayName: Quay.io Username
  description: Quay.io username to push the images to tasks-sample-app repository on your Quay.io account
  name: QUAY_USERNAME
  value: admin
- displayName: Quay.io Password
  description: Quay.io password to push the images to tasks-sample-app repository on your Quay.io account
  name: QUAY_PASSWORD
  value: password
- displayName: Quay.io Image Repository
  description: Quay.io repository for pushing Tasks container images
  name: QUAY_REPOSITORY
  required: true
  value: ubi7
objects:
- apiVersion: v1
  kind: BuildConfig
  metadata:
    annotations:
      pipeline.alpha.openshift.io/uses: '[{"name": "jenkins", "namespace": "", "kind": "DeploymentConfig"}]'
    labels:
      app: cicd-pipeline
      name: cicd-pipeline
    name: tasks-pipeline
  spec:
    triggers:
    - type: GitHub
      github:
        secret: ${WEBHOOK_SECRET}
    - type: Generic
      generic:
        secret: ${WEBHOOK_SECRET}
    runPolicy: Serial
    source:
      type: None
    strategy:
      jenkinsPipelineStrategy:
        env:
        - name: DEV_PROJECT
          value: ${DEV_PROJECT}
        - name: STAGE_PROJECT
          value: ${STAGE_PROJECT}
        - name: ENABLE_QUAY
          value: ${ENABLE_QUAY}
        jenkinsfile: |-
          def mvnCmd = "mvn -s configuration/cicd-settings-nexus3.xml"
          pipeline {
            agent {
              label 'maven'
            }
            stages {
              stage('Build App') {
                steps {
                  git branch: 'master', url: 'http://root:[email protected]:18080/root/openshift-tasks.git'
                  sh "${mvnCmd} install -DskipTests=true"
                }
              }
              stage('Test') {
                steps {
                  sh "${mvnCmd} test"
                  step([$class: 'JUnitResultArchiver', testResults: '**/target/surefire-reports/TEST-*.xml'])
                }
              }
              stage('Code Analysis') {
                steps {
                  script {
                    sh "${mvnCmd} sonar:sonar -Dsonar.host.url=http://sonarqube:9000 -DskipTests=true"
                  }
                }
              }
              stage('Archive App') {
                steps {
                  sh "${mvnCmd} deploy -DskipTests=true -P nexus"
                }
              }
              stage('Build Image') {
                steps {
                  sh "cp target/openshift-tasks.war target/ROOT.war"
                  script {
                    openshift.withCluster() {
                      openshift.withProject(env.DEV_PROJECT) {
                        openshift.selector("bc", "demo").startBuild("--from-file=target/ROOT.war", "--wait=true")
                      }
                    }
                  }
                }
              }
              stage('Send to QUAY') {
                agent {
                  label 'skopeo'
                }
                steps {
                  script {
                    sh '''
                      oc login https://api.redhat2.cccr.local:6443 -u kubeadmin -p qMZXS-eoF9S-W46LX-USXoj --insecure-skip-tls-verify=true
                      oc project dev-demo
                      skopeo copy --src-tls-verify=false --dest-tls-verify=false --src-creds kubeadmin:`oc whoami -t` --dest-creds 'admin:password' docker://image-registry.openshift-image-registry.svc:5000/dev-demo/demo:latest docker://quay.redhat2.cccr.local/admin/demo:stage
                    '''
                  }
                }
              }
              stage('Deploy DEV') {
                steps {
                  script {
                    openshift.withCluster() {
                      openshift.withProject(env.DEV_PROJECT) {
                        openshift.selector("dc", "demo").rollout().latest();
                      }
                    }
                  }
                }
              }
              stage('Deploy STAGE? Promote or Abort') {
                steps {
                  timeout(time: 15, unit: 'MINUTES') {
                    input message: "Promote to STAGE?", ok: "Promote"
                  }
                  script {
                    openshift.withCluster() {
                      openshift.tag("${env.DEV_PROJECT}/demo:latest", "${env.STAGE_PROJECT}/demo:stage")
                      openshift.withProject(env.STAGE_PROJECT) {
                        openshift.selector("dc", "demo").rollout().latest();
                      }
                    }
                  }
                }
              }
            }
          }
        type: JenkinsPipeline
- Create the Jenkins pipeline build in the cicd project from the template above
oc new-app -f cicd-template.yaml -p DEV_PROJECT=dev-demo -p STAGE_PROJECT=stage-demo -p ENABLE_QUAY=false -n cicd-demo
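The WEBHOOK_SECRET parameter in the template is produced by `generate: expression` from the pattern `[a-zA-Z0-9]{8}`. A rough local equivalent (not the generator OpenShift actually uses, just an illustration of the pattern) is:

```shell
# draw 8 alphanumeric characters from /dev/urandom
SECRET=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 8)
echo "$SECRET"
```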
- Instead of docker, podman + skopeo (for image copying etc.) are more commonly used these days
- podman alone falls short in some areas
- skopeo: copies and mirrors images while preserving the digest, for image security ⇒ mainly used when the digest specified in an installation manual must be kept
- GitLab install in Jenkins
- Uploading to Quay with podman
stages {
  stage('Push image to QUAY') {
    steps {
      sh "oc login https://api.redhat2.cccr.local:6443 -u kubeadmin -p qMZXS-eoF9S-W46LX-USXoj --insecure-skip-tls-verify=true"
      sh "podman pull --creds kubeadmin:`oc whoami -t` image-registry.openshift-image-registry.svc:5000/dev-demo/demo:latest"
      sh "podman tag image-registry.openshift-image-registry.svc:5000/dev-demo/demo:latest quay.redhat2.cccr.local/admin/demo:stage"
      sh "podman push --creds admin:password quay.redhat2.cccr.local/admin/demo:stage"
    }
  }
}
- Copying the image to Quay with skopeo
- Instead of the maven agent, a skopeo agent is used to copy the image into Quay
def mvnCmd = "mvn -s configuration/cicd-settings-nexus3.xml"
pipeline {
  agent {
    label 'maven'
  }
  stages {
    stage('Promote to STAGE?') {
      agent {
        label 'skopeo'
      }
      steps {
        sh '''
          oc login https://api.redhat2.cccr.local:6443 -u kubeadmin -p qMZXS-eoF9S-W46LX-USXoj --insecure-skip-tls-verify=true
          oc project dev-demo
          skopeo copy --src-tls-verify=false --dest-tls-verify=false --src-creds kubeadmin:`oc whoami -t` --dest-creds 'admin:password' docker://image-registry.openshift-image-registry.svc:5000/dev-demo/demo:latest docker://quay.redhat2.cccr.local/admin/demo:stage
        '''
      }
    }
  }
}
- Create the Jenkins pipeline build in the cicd project from the generated template ⇒ when using Quay
oc new-app -f cicd-template.yaml -p DEV_PROJECT=dev-demo -p STAGE_PROJECT=stage-demo -n cicd-demo
- When fetching from GitLab first, the code must be pushed to the GitLab repo.
- Create a repo in GitLab, point the remote at it with git remote set-url origin (the gitlab.redhat2.cccr.local address), then push with git push -u origin master.
- The package versions in the openshift-tasks code are outdated, so errors can occur when the code is built and run
- Write the dependency versions by consulting the official Maven repository site
- pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- JBoss, Home of Professional Open Source Copyright 2014, Red Hat, Inc.
  and/or its affiliates, and individual contributors by the @authors tag. See
  the copyright.txt in the distribution for a full listing of individual contributors.
  Licensed under the Apache License, Version 2.0 (the "License"); you may not
  use this file except in compliance with the License. You may obtain a copy
  of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required
  by applicable law or agreed to in writing, software distributed under the
  License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
  OF ANY KIND, either express or implied. See the License for the specific
  language governing permissions and limitations under the License. -->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.jboss.quickstarts.eap</groupId>
  <artifactId>jboss-tasks-rs</artifactId>
  <version>6.4.0-SNAPSHOT</version>
  <packaging>war</packaging>
  <name>JBoss EAP - Tasks JAX-RS App</name>
  <licenses>
    <license>
      <name>Apache License, Version 2.0</name>
      <distribution>repo</distribution>
      <url>http://www.apache.org/licenses/LICENSE-2.0.html</url>
    </license>
  </licenses>
  <properties>
    <!-- Explicitly declaring the source encoding eliminates the following message: -->
    <!-- [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent! -->
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <!-- JBoss dependency versions -->
    <version.jboss.maven.plugin>7.4.Final</version.jboss.maven.plugin>
    <!-- Define the version of the JBoss BOMs we want to import to specify tested stacks. -->
    <version.jboss.bom.eap>6.4.0.GA</version.jboss.bom.eap>
    <!-- other plugin versions -->
    <version.surefire.plugin>2.19.1</version.surefire.plugin>
    <version.war.plugin>2.1.1</version.war.plugin>
    <!-- maven-compiler-plugin -->
    <maven.compiler.target>1.8</maven.compiler.target>
    <maven.compiler.source>1.8</maven.compiler.source>
  </properties>
  <repositories>
    <repository>
      <id>openshift-repository</id>
      <url>https://mirror.openshift.com/nexus/content/groups/public</url>
    </repository>
  </repositories>
  <distributionManagement>
    <repository>
      <id>nexus</id>
      <url>http://10.10.10.17:8081/repository/maven-releases</url>
    </repository>
    <snapshotRepository>
      <id>nexus</id>
      <url>http://10.10.10.17:8081/repository/maven-snapshots</url>
    </snapshotRepository>
  </distributionManagement>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.jboss.bom.eap</groupId>
        <artifactId>jboss-javaee-6.0-with-tools</artifactId>
        <version>${version.jboss.bom.eap}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>javax.xml.bind</groupId>
      <artifactId>jaxb-api</artifactId>
      <version>2.3.0</version>
    </dependency>
    <!-- Import the CDI API, we use provided scope as the API is included in JBoss EAP 6 -->
    <dependency>
      <groupId>javax.enterprise</groupId>
      <artifactId>cdi-api</artifactId>
      <scope>provided</scope>
    </dependency>
    <!-- Import the JPA API, we use provided scope as the API is included in JBoss EAP 6 -->
    <dependency>
      <groupId>org.hibernate.javax.persistence</groupId>
      <artifactId>hibernate-jpa-2.0-api</artifactId>
      <scope>provided</scope>
    </dependency>
    <!-- Import the JAX-RS API, we use provided scope as the API is included in JBoss EAP 6 -->
    <dependency>
      <groupId>org.jboss.spec.javax.ws.rs</groupId>
      <artifactId>jboss-jaxrs-api_1.1_spec</artifactId>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-jackson-provider</artifactId>
      <version>2.3.1.GA</version>
      <scope>provided</scope>
    </dependency>
    <!-- Import the EJB API, we use provided scope as the API is included in JBoss EAP 6 -->
    <dependency>
      <groupId>org.jboss.spec.javax.ejb</groupId>
      <artifactId>jboss-ejb-api_3.1_spec</artifactId>
      <scope>provided</scope>
    </dependency>
    <!-- Test dependencies -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.jboss.arquillian.junit</groupId>
      <artifactId>arquillian-junit-container</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.jboss.arquillian.protocol</groupId>
      <artifactId>arquillian-protocol-servlet</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-core</artifactId>
      <version>3.5.15</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-client</artifactId>
      <version>1.12</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <!-- Maven will append the version to the finalName (which is the name given to the generated war, and hence the context root) -->
    <finalName>openshift-tasks</finalName>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.0</version>
        <configuration>
          <source>${maven.compiler.source}</source>
          <target>${maven.compiler.target}</target>
          <encoding>UTF-8</encoding>
        </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-war-plugin</artifactId>
        <version>${version.war.plugin}</version>
        <configuration>
          <!-- Java EE 6 doesn't require web.xml, Maven needs to catch up! -->
          <failOnMissingWebXml>false</failOnMissingWebXml>
        </configuration>
      </plugin>
      <!-- Surefire plugin is responsible for running tests as part of project build -->
      <plugin>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>${version.surefire.plugin}</version>
      </plugin>
      <!-- The JBoss AS plugin deploys your war to a local JBoss EAP container -->
      <!-- To use, run: mvn package jboss-as:deploy -->
      <plugin>
        <groupId>org.jboss.as.plugins</groupId>
        <artifactId>jboss-as-maven-plugin</artifactId>
        <version>${version.jboss.maven.plugin}</version>
      </plugin>
      <plugin>
        <groupId>org.sonarsource.scanner.maven</groupId>
        <artifactId>sonar-maven-plugin</artifactId>
        <version>3.5.0.1254</version>
      </plugin>
      <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <version>0.8.6</version>
        <executions>
          <execution>
            <id>default-prepare-agent</id>
            <goals>
              <goal>prepare-agent</goal>
            </goals>
          </execution>
          <execution>
            <id>default-report</id>
            <phase>prepare-package</phase>
            <goals>
              <goal>report</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.ec4j.maven</groupId>
        <artifactId>editorconfig-maven-plugin</artifactId>
        <version>0.0.5</version>
        <executions>
          <execution>
            <id>check</id>
            <phase>verify</phase>
            <goals>
              <goal>check</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <excludes>
            <exclude>deployments/**</exclude>
            <exclude>src/main/webapp/css/*</exclude>
            <exclude>src/main/webapp/fonts/*</exclude>
            <exclude>src/main/webapp/img/*</exclude>
            <exclude>src/main/webapp/js/*</exclude>
          </excludes>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-javadoc-plugin</artifactId>
        <version>2.9</version>
        <reportSets>
          <reportSet>
            <reports>
              <report>javadoc</report>
            </reports>
          </reportSet>
        </reportSets>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-pmd-plugin</artifactId>
        <version>3.8</version>
      </plugin>
      <plugin>
        <groupId>org.owasp</groupId>
        <artifactId>dependency-check-maven</artifactId>
        <version>2.1.1</version>
        <configuration>
          <skipProvidedScope>true</skipProvidedScope>
          <skipRuntimeScope>true</skipRuntimeScope>
        </configuration>
        <reportSets>
          <reportSet>
            <reports>
              <report>aggregate</report>
            </reports>
          </reportSet>
        </reportSets>
      </plugin>
      <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <version>0.7.9</version>
        <reportSets>
          <reportSet>
            <reports>
              <report>report</report>
            </reports>
          </reportSet>
        </reportSets>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-checkstyle-plugin</artifactId>
        <version>2.17</version>
        <reportSets>
          <reportSet>
            <reports>
              <report>checkstyle</report>
            </reports>
          </reportSet>
        </reportSets>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>findbugs-maven-plugin</artifactId>
        <version>3.0.5</version>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-report-plugin</artifactId>
        <version>2.20.1</version>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-project-info-reports-plugin</artifactId>
        <version>2.6</version>
        <reportSets>
          <reportSet>
            <reports>
              <report>summary</report>
              <report>dependency-management</report>
            </reports>
          </reportSet>
        </reportSets>
      </plugin>
    </plugins>
  </reporting>
  <profiles>
    <profile>
      <!-- The default profile skips all tests, though you can tune it to run just unit tests based on a custom pattern -->
      <!-- Separate profiles are provided for running all tests, including Arquillian tests that execute in the specified container -->
      <id>default</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <build>
        <plugins>
          <plugin>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
              <groups>org.jboss.as.quickstarts.tasksrs.category.UnitTest</groups>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>
    <profile>
      <id>int-tests</id>
      <build>
        <plugins>
          <plugin>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
              <groups>org.jboss.as.quickstarts.tasksrs.category.IntegrationTest</groups>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>
    <profile>
      <!-- An optional Arquillian testing profile that executes tests in your JBoss EAP instance -->
      <!-- This profile will start a new JBoss EAP instance, and execute the test, shutting it down when done -->
      <!-- Run with: mvn clean test -Parq-jbossas-managed -->
      <id>arq-jbossas-managed</id>
      <dependencies>
        <dependency>
          <groupId>org.jboss.as</groupId>
          <artifactId>jboss-as-arquillian-container-managed</artifactId>
          <scope>test</scope>
        </dependency>
      </dependencies>
    </profile>
    <profile>
      <!-- An optional Arquillian testing profile that executes tests in a remote JBoss EAP instance -->
      <!-- Run with: mvn clean test -Parq-jbossas-remote -->
      <id>arq-jbossas-remote</id>
      <dependencies>
        <dependency>
          <groupId>org.jboss.as</groupId>
          <artifactId>jboss-as-arquillian-container-remote</artifactId>
          <scope>test</scope>
        </dependency>
      </dependencies>
    </profile>
    <profile>
      <!-- When built in OpenShift the 'openshift' profile will be used when invoking mvn. -->
      <!-- Use this profile for any OpenShift specific customization your app will need. -->
      <!-- By default that is to put the resulting archive into the 'deployments' folder. -->
      <!-- http://maven.apache.org/guides/mini/guide-building-for-different-environments.html -->
      <id>openshift</id>
      <build>
        <plugins>
          <plugin>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
              <skip>true</skip>
            </configuration>
          </plugin>
          <plugin>
            <artifactId>maven-war-plugin</artifactId>
            <version>2.4</version>
            <configuration>
              <failOnMissingWebXml>false</failOnMissingWebXml>
              <outputDirectory>deployments</outputDirectory>
              <warName>ROOT</warName>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>
</project>
- Nexus settings file (.xml) in the configuration folder
<settings>
  <servers>
    <server>
      <id>nexus</id>
      <username>admin</username>
      <password>dkagh1.</password>
    </server>
  </servers>
  <mirrors>
    <mirror>
      <!--This sends everything else to /public -->
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://10.10.10.17:8081/repository/maven-public/</url>
    </mirror>
  </mirrors>
  <profiles>
    <profile>
      <id>nexus</id>
      <!--Enable snapshots for the built in central repo to direct -->
      <!--all requests to nexus via the mirror -->
      <repositories>
        <repository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <!--make the profile active all the time -->
    <activeProfile>nexus</activeProfile>
  </activeProfiles>
</settings>
- Whenever new code is written, push it to GitLab and verify that the code is fetched correctly and that access/authentication works