
MON-4029: Add KubeStateMetricsConfig to ClusterMonitoring API #2778

Open

danielmellado wants to merge 1 commit into openshift:master from danielmellado:mon_4029_kube_state_metrics

Conversation

@danielmellado
Contributor

Adds a new KubeStateMetricsConfig struct to the ClusterMonitoring CRD,
allowing configuration of the kube-state-metrics agent (node selectors,
resources, tolerations, topology spread constraints). Includes
comprehensive integration tests for validation.

Signed-off-by: Daniel Mellado dmellado@redhat.com

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: LGTM mode

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Mar 24, 2026
@openshift-ci-robot

openshift-ci-robot commented Mar 24, 2026

@danielmellado: This pull request references MON-4029 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Adds a new KubeStateMetricsConfig struct to the ClusterMonitoring CRD,
allowing configuration of the kube-state-metrics agent (node selectors,
resources, tolerations, topology spread constraints). Includes
comprehensive integration tests for validation.

Signed-off-by: Daniel Mellado dmellado@redhat.com

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci
Contributor

openshift-ci bot commented Mar 24, 2026

Hello @danielmellado! Some important instructions when contributing to openshift/api:
API design plays an important part in the user experience of OpenShift and as such API PRs are subject to a high level of scrutiny to ensure they follow our best practices. If you haven't already done so, please review the OpenShift API Conventions and ensure that your proposed changes are compliant. Following these conventions will help expedite the api review process for your PR.

@openshift-ci openshift-ci bot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Mar 24, 2026
@openshift-ci
Contributor

openshift-ci bot commented Mar 24, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign deads2k for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@danielmellado
Contributor Author

@everettraven this PR continues #2461, rebased and with your comments addressed. Thanks!

@coderabbitai

coderabbitai bot commented Mar 24, 2026

📝 Walkthrough

Added an optional kubeStateMetricsConfig field to ClusterMonitoringSpec and introduced the KubeStateMetricsConfig and KubeStateMetricsResourceLabels types plus related enums/constants for resource and label names. Updated generated deepcopy and Swagger documentation. Extended the ClusterMonitoring CRD schema to validate the new fields. Added substantial YAML tests covering many accept and reject cases for spec.kubeStateMetricsConfig.

🚥 Pre-merge checks: 10 passed

- Title check: Passed. The title clearly and specifically summarizes the main change: adding KubeStateMetricsConfig to the ClusterMonitoring API, matching the changeset's primary objective.
- Description check: Passed. The description is directly related to the changeset, explaining the new KubeStateMetricsConfig struct addition and mentioning comprehensive integration tests, which aligns with the file changes.
- Docstring Coverage: Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.
- Stable And Deterministic Test Names: Passed. Test names in the PR are descriptive and static, without dynamic identifiers, timestamps, or generated values.
- Test Structure And Quality: Passed. YAML-based test cases follow Ginkgo test quality requirements through the established test generator framework, with proper setup, teardown, and consistent test patterns.
- Microshift Test Compatibility: Passed. The tests being added are local integration tests using envtest, not e2e tests running on actual clusters; they validate the ClusterMonitoring CRD schema locally through generated Ginkgo test cases.
- Single Node Openshift (SNO) Test Compatibility: Passed. The PR adds only YAML-based CRD validation tests, not Ginkgo e2e tests; no Ginkgo patterns or imports were found in the modified Go files.
- Topology-Aware Scheduling Compatibility: Passed. The PR only adds type definitions and schemas for KubeStateMetricsConfig; there are no hardcoded scheduling constraints or deployment manifests that would restrict topology compatibility.
- OTE Binary Stdout Contract: Passed. The pull request modifies only declarative files, with no process-level executable code that could write to stdout.
- IPv6 And Disconnected Network Test Compatibility: Passed. This pull request contains no Ginkgo e2e tests; only Go type definitions, autogenerated code, CRD schema updates, and CUE-based validation test cases in YAML format.



Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.11.4)

Error: build linters: unable to load custom analyzer "kubeapilinter": tools/_output/bin/kube-api-linter.so, plugin: not implemented


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`:
- Around line 1117-1234: The CRD currently documents invariants but doesn't
enforce them; update the schema for the PodTopologySpread-like object to:
restrict whenUnsatisfiable to an enum of allowed values (e.g.,
"DoNotSchedule","ScheduleAnyway") on the whenUnsatisfiable property, add
minimum: 1 (and disallow 0) on maxSkew (format int32) and minimum: 1 on
minDomains (or use minimum: 1 when present), and add a validation rule that
requires labelSelector when matchLabelKeys is set (make matchLabelKeys mutually
exclusive with labelSelector and/or add a x-kubernetes-requirements or OpenAPI
dependency that enforces labelSelector presence when matchLabelKeys exists).
Target the properties named whenUnsatisfiable, maxSkew, minDomains,
matchLabelKeys and labelSelector in the CRD fragment to implement these
constraints.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: a91c322b-92fd-43a0-a96f-b2b4da86aa91

📥 Commits

Reviewing files that changed from the base of the PR and between 324a1bc and 02919b5.

⛔ Files ignored due to path filters (3)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**
📒 Files selected for processing (5)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • config/v1alpha1/zz_generated.deepcopy.go
  • config/v1alpha1/zz_generated.swagger_doc_generated.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml

Comment on lines +1117 to +1234
```yaml
  matchLabelKeys:
    description: |-
      MatchLabelKeys is a set of pod label keys to select the pods over which
      spreading will be calculated. The keys are used to lookup values from the
      incoming pod labels, those key-value labels are ANDed with labelSelector
      to select the group of existing pods over which spreading will be calculated
      for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector.
      MatchLabelKeys cannot be set when LabelSelector isn't set.
      Keys that don't exist in the incoming pod labels will
      be ignored. A null or empty list means only match against labelSelector.

      This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default).
    items:
      type: string
    type: array
    x-kubernetes-list-type: atomic
  maxSkew:
    description: |-
      MaxSkew describes the degree to which pods may be unevenly distributed.
      When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference
      between the number of matching pods in the target topology and the global minimum.
      The global minimum is the minimum number of matching pods in an eligible domain
      or zero if the number of eligible domains is less than MinDomains.
      For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
      labelSelector spread as 2/2/1:
      In this case, the global minimum is 1.
      | zone1 | zone2 | zone3 |
      |  P P  |  P P  |   P   |
      - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2;
      scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2)
      violate MaxSkew(1).
      - if MaxSkew is 2, incoming pod can be scheduled onto any zone.
      When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence
      to topologies that satisfy it.
      It's a required field. Default value is 1 and 0 is not allowed.
    format: int32
    type: integer
  minDomains:
    description: |-
      MinDomains indicates a minimum number of eligible domains.
      When the number of eligible domains with matching topology keys is less than minDomains,
      Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed.
      And when the number of eligible domains with matching topology keys equals or greater than minDomains,
      this value has no effect on scheduling.
      As a result, when the number of eligible domains is less than minDomains,
      scheduler won't schedule more than maxSkew Pods to those domains.
      If value is nil, the constraint behaves as if MinDomains is equal to 1.
      Valid values are integers greater than 0.
      When value is not nil, WhenUnsatisfiable must be DoNotSchedule.

      For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same
      labelSelector spread as 2/2/2:
      | zone1 | zone2 | zone3 |
      |  P P  |  P P  |  P P  |
      The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0.
      In this situation, new pod with the same labelSelector cannot be scheduled,
      because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones,
      it will violate MaxSkew.
    format: int32
    type: integer
  nodeAffinityPolicy:
    description: |-
      NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector
      when calculating pod topology spread skew. Options are:
      - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
      - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

      If this value is nil, the behavior is equivalent to the Honor policy.
    type: string
  nodeTaintsPolicy:
    description: |-
      NodeTaintsPolicy indicates how we will treat node taints when calculating
      pod topology spread skew. Options are:
      - Honor: nodes without taints, along with tainted nodes for which the incoming pod
      has a toleration, are included.
      - Ignore: node taints are ignored. All nodes are included.

      If this value is nil, the behavior is equivalent to the Ignore policy.
    type: string
  topologyKey:
    description: |-
      TopologyKey is the key of node labels. Nodes that have a label with this key
      and identical values are considered to be in the same topology.
      We consider each <key, value> as a "bucket", and try to put balanced number
      of pods into each bucket.
      We define a domain as a particular instance of a topology.
      Also, we define an eligible domain as a domain whose nodes meet the requirements of
      nodeAffinityPolicy and nodeTaintsPolicy.
      e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology.
      And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology.
      It's a required field.
    type: string
  whenUnsatisfiable:
    description: |-
      WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy
      the spread constraint.
      - DoNotSchedule (default) tells the scheduler not to schedule it.
      - ScheduleAnyway tells the scheduler to schedule the pod in any location,
      but giving higher precedence to topologies that would help reduce the
      skew.
      A constraint is considered "Unsatisfiable" for an incoming pod
      if and only if every possible node assignment for that pod would violate
      "MaxSkew" on some topology.
      For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
      labelSelector spread as 3/1/1:
      | zone1 | zone2 | zone3 |
      | P P P |   P   |   P   |
      If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled
      to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies
      MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler
      won't make it *more* imbalanced.
      It's a required field.
    type: string
required:
- maxSkew
- topologyKey
- whenUnsatisfiable
type: object
```


⚠️ Potential issue | 🟠 Major

Enforce documented topology constraint invariants in the schema

The descriptions document hard constraints, but the schema does not enforce several of them (whenUnsatisfiable allowed values, positive maxSkew/minDomains, and matchLabelKeys requiring labelSelector). Invalid CRs can pass admission and then fail later during reconciliation/pod creation.

🔧 Proposed schema validation additions
                         maxSkew:
                           description: |-
                             MaxSkew describes the degree to which pods may be unevenly distributed.
@@
                           format: int32
+                          minimum: 1
                           type: integer
                         minDomains:
                           description: |-
                             MinDomains indicates a minimum number of eligible domains.
@@
                           format: int32
+                          minimum: 1
                           type: integer
@@
                         whenUnsatisfiable:
                           description: |-
                             WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy
@@
+                          enum:
+                          - DoNotSchedule
+                          - ScheduleAnyway
                           type: string
+                      x-kubernetes-validations:
+                      - message: minDomains can only be set when whenUnsatisfiable is DoNotSchedule
+                        rule: '!has(self.minDomains) || self.whenUnsatisfiable == ''DoNotSchedule'''
+                      - message: matchLabelKeys cannot be set when labelSelector is not set
+                        rule: '!has(self.matchLabelKeys) || has(self.labelSelector)'
                       required:
                       - maxSkew
                       - topologyKey
                       - whenUnsatisfiable

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."


@openshift-ci openshift-ci bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 28, 2026
@danielmellado danielmellado force-pushed the mon_4029_kube_state_metrics branch from 02919b5 to 361dc49 Compare April 7, 2026 13:37
@openshift-ci openshift-ci bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 7, 2026
@danielmellado
Contributor Author

/hold until openshift/cluster-monitoring-operator#2553 merges

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 7, 2026
@danielmellado
Contributor Author

/cc @rexagod

@openshift-ci openshift-ci bot requested a review from rexagod April 7, 2026 13:38
Adds a new KubeStateMetricsConfig struct to the ClusterMonitoring CRD,
allowing configuration of the kube-state-metrics agent (node selectors,
resources, tolerations, topology spread constraints). Includes
comprehensive integration tests for validation.

Signed-off-by: Daniel Mellado <dmellado@redhat.com>
Signed-off-by: Daniel Mellado <dmellado@fedoraproject.org>
@danielmellado danielmellado force-pushed the mon_4029_kube_state_metrics branch from 361dc49 to d134762 Compare April 17, 2026 12:56
@danielmellado
Contributor Author

/unhold

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 17, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml (1)

1175-1292: ⚠️ Potential issue | 🟠 Major

Enforce documented topology spread invariants in schema (still missing).

This block documents constraints, but schema-level enforcement is still incomplete (whenUnsatisfiable values, positive maxSkew/minDomains, and matchLabelKeys requiring labelSelector). Invalid CRs can pass admission and fail later.

🔧 Proposed schema validation patch
                         maxSkew:
@@
                           format: int32
+                          minimum: 1
                           type: integer
                         minDomains:
@@
                           format: int32
+                          minimum: 1
                           type: integer
@@
                         whenUnsatisfiable:
@@
+                          enum:
+                          - DoNotSchedule
+                          - ScheduleAnyway
                           type: string
                       required:
                       - maxSkew
                       - topologyKey
                       - whenUnsatisfiable
                       type: object
+                      x-kubernetes-validations:
+                      - message: minDomains can only be set when whenUnsatisfiable is DoNotSchedule
+                        rule: '!has(self.minDomains) || self.whenUnsatisfiable == ''DoNotSchedule'''
+                      - message: matchLabelKeys cannot be set when labelSelector is not set
+                        rule: '!has(self.matchLabelKeys) || has(self.labelSelector)'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml`
around lines 1175 - 1292, The CRD currently documents invariants but doesn't
enforce them; update the CRD's openAPIV3Schema to (1) restrict whenUnsatisfiable
to the allowed enum values (e.g., "DoNotSchedule","ScheduleAnyway"), (2) add
minimum: 1 for maxSkew and minDomains (and type integer) so maxSkew cannot be
0/negative and minDomains > 0, (3) add a conditional validation that when
minDomains is set then whenUnsatisfiable must equal "DoNotSchedule", and (4) add
validations requiring that matchLabelKeys cannot be present unless labelSelector
is set and that keys in matchLabelKeys do not overlap with labelSelector keys;
implement the conditional/complex checks using CRD CEL
(x-kubernetes-validations) or equivalent CRD validation mechanisms referencing
the fields matchLabelKeys, labelSelector, maxSkew, minDomains, and
whenUnsatisfiable so invalid CRs are rejected at admission time.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`:
- Around line 2145-2158: The test "Should reject KubeStateMetricsConfig with
duplicate resource in additionalResourceLabels" uses an invalid resource value
"jobs" which triggers enum validation before duplicate detection; update the two
entries under spec.kubeStateMetricsConfig.additionalResourceLabels to use a
valid resource (for example "pods" or another accepted enum value) so the YAML
passes resource validation and the duplicate-resource check is exercised by
ClusterMonitoring validation.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: 5847c428-e700-46b3-84f5-f167574373e9

📥 Commits

Reviewing files that changed from the base of the PR and between 361dc49 and d134762.

⛔ Files ignored due to path filters (6)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
  • openapi/openapi.json is excluded by !openapi/**
📒 Files selected for processing (3)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
  • config/v1alpha1/types_cluster_monitoring.go

Comment on lines +2145 to +2158
```yaml
- name: Should reject KubeStateMetricsConfig with duplicate resource in additionalResourceLabels
  initial: |
    apiVersion: config.openshift.io/v1alpha1
    kind: ClusterMonitoring
    spec:
      kubeStateMetricsConfig:
        additionalResourceLabels:
          - resource: jobs
            labels:
              - foo
          - resource: jobs
            labels:
              - bar
  expectedError: "Duplicate value"
```


⚠️ Potential issue | 🟡 Minor

Duplicate resource test uses invalid resource values.

This test intends to validate duplicate detection for additionalResourceLabels, but uses jobs which is not a valid resource value. The test may fail for the wrong reason (invalid enum) before reaching the duplicate validation. Use valid resource values to properly test duplicate detection.

🔧 Proposed fix
     - name: Should reject KubeStateMetricsConfig with duplicate resource in additionalResourceLabels
       initial: |
         apiVersion: config.openshift.io/v1alpha1
         kind: ClusterMonitoring
         spec:
           kubeStateMetricsConfig:
             additionalResourceLabels:
-              - resource: jobs
+              - resource: nodes
                 labels:
                   - foo
-              - resource: jobs
+              - resource: nodes
                 labels:
                   - bar
       expectedError: "Duplicate value"

@openshift-ci
Contributor

openshift-ci bot commented Apr 17, 2026

@danielmellado: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants