
Unable to update agentPolicies within Kibana CRD #7290

Closed
landorg opened this issue Nov 6, 2023 · 4 comments · Fixed by #8125
Labels
>bug Something isn't working

Comments


landorg commented Nov 6, 2023

Bug Report

What did you do?
Set up Kibana as in the resource definition below.

Then change something in one of the agent policies (for example, adjust an unenroll_timeout or add a package policy).

What did you expect to see?
The agent policy to be updated according to the settings in xpack.fleet.agentPolicies.

What did you see instead? Under which circumstances?
The agent policy was not updated.

Environment

  • ECK version: 2.9.0

  • Kubernetes information: RKE v1.3.3 on bare-metal servers, Kubernetes 1.23.8

$ kubectl version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.23.8
  • Resource definition:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  annotations:
  name: main-kibana
  namespace: elasticsearch
spec:
  config:
    server:
      publicBaseUrl: https://kibana1.example.com
    xpack.fleet.agentPolicies:
    - id: eck-fleet-server
      is_default_fleet_server: true
      monitoring_enabled:
      - logs
      - metrics
      name: Fleet Server on ECK policy
      namespace: default
      package_policies:
      - id: fleet_server-1
        name: fleet_server-1
        package:
          name: fleet_server
      unenroll_timeout: 900
    - id: eck-agent
      is_default: true
      monitoring_enabled:
      - logs
      - metrics
      name: Elastic Agent on ECK policy
      namespace: default
      package_policies:
      - id: system-1
        name: system-1
        package:
          name: system
      - id: system-2
        name: system-2
        package:
          name: auditd
      - id: system-3
        name: system-3
        package:
          name: auditd_manager
      - id: system-4
        name: system-4
        package:
          name: network_traffic
      - id: system-5
        name: system-5
        package:
          name: kubernetes
      unenroll_timeout: 900
    xpack.fleet.agents.fleet_server.hosts:
    - https://fleet-server.example.com
    xpack.fleet.outputs:
    - config:
        logging.level: debug
      hosts:
      - https://elasticsearch1.example.com
      id: main-elasticsearch
      is_default: true
      is_default_monitoring: true
      name: main-elasticsearch
      type: elasticsearch
    xpack.fleet.packages:
    - name: system
      version: latest
    - name: auditd
      version: latest
    - name: auditd_manager
      version: latest
    - name: elastic_agent
      version: latest
    - name: fleet_server
      version: latest
    - name: network_traffic
      version: latest
    - name: kubernetes
      version: latest
  count: 1
  elasticsearchRef:
    name: main-elasticsearch
  enterpriseSearchRef: {}
  http:
    service:
      metadata:
        annotations:
          konghq.com/protocol: https
      spec: {}
    tls:
      certificate: {}
  monitoring:
    logs: {}
    metrics: {}
  podTemplate:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - env:
        - name: NODE_OPTIONS
          value: --max-old-space-size=2048
        name: kibana
        resources:
          limits:
            memory: 3Gi
          requests:
            memory: 2Gi
      nodeSelector:
        deploy/elasticsearch: "true"
  version: 8.10.4
barkbay (Contributor) commented Oct 17, 2024

I can confirm this behaviour. I created a policy with only one integration, then added a second one, with no effect.

I had to rename the policy to get the two expected integrations displayed in the Kibana UI:

[Screenshot: the renamed agent policy showing both integrations in the Kibana Fleet UI]

Note that:

  • The previous policy is not deleted even though it no longer exists in the Kibana spec (may be acceptable?)
  • Despite the change in the policy ID, the Pods are still using the previous policy. So it seems we actually have two bugs here...

barkbay (Contributor) commented Oct 17, 2024

> Despite the change in the policy ID, the Pods are still using the previous policy. So it seems we actually have two bugs here...

This is because, while a new enrollment token is created and injected into the *-envvars Secret, the Agent Pods are not restarted to load that new token. I'll open an issue for this one.
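Until that is fixed, a manual workaround is to restart the Agent Pods yourself so they re-read the regenerated Secret. This is only a sketch: the resource kind and name (a DaemonSet called elastic-agent-agent in the elasticsearch namespace) are assumptions and will differ per deployment.

```shell
# Hypothetical resource names -- adjust to your own Agent deployment.
# Trigger a rolling restart so the Pods pick up the new enrollment
# token from the regenerated *-envvars Secret.
kubectl -n elasticsearch rollout restart daemonset/elastic-agent-agent

# Wait for the restart to complete.
kubectl -n elasticsearch rollout status daemonset/elastic-agent-agent
```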

barkbay (Contributor) commented Oct 18, 2024

Related issue in Kibana: elastic/kibana#111401

barkbay (Contributor) commented Oct 18, 2024

As mentioned in #8109 (comment), using is_managed: true solved the problem during my tests on 8.15.2:

    xpack.fleet.agentPolicies:
      - name: Fleet Server on ECK policy
        id: eck-fleet-server
        namespace: default
        is_managed: true ## <---- here
        monitoring_enabled:
          - logs
          - metrics
        unenroll_timeout: 900
        package_policies:
          - name: fleet_server-1
            id: fleet_server-1
            package:
              name: fleet_server
      - name: Elastic Agent on ECK policy
        id: eck-agent1
        namespace: default
        is_managed: true ## <---- here
        monitoring_enabled:
          - logs
          - metrics
        unenroll_timeout: 900
        package_policies:
          - package:
              name: kubernetes
            name: kubernetes-1
            id: kubernetes-1
          - name: system-1
            id: system-1
            package:
              name: system

We should update our examples to include this setting.
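To verify whether the policies Kibana actually holds match what is declared in the CRD, you can query the Fleet API directly. A sketch, assuming the Kibana URL and the elastic superuser password from the examples above (the kbn-xsrf header is required by Kibana HTTP APIs, and jq is optional):

```shell
# Hypothetical host and credentials, taken from the example spec above.
curl -s -u "elastic:$ELASTIC_PASSWORD" \
  -H "kbn-xsrf: true" \
  "https://kibana1.example.com/api/fleet/agent_policies" \
  | jq '.items[] | {id, name, updated_at, is_managed}'
```

Comparing updated_at before and after editing the spec makes it easy to see whether a change in xpack.fleet.agentPolicies was actually applied.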
