Documentation
- 1: Installation
- 1.1: kubectl apply
- 1.2: Helm
- 1.3: Configuration
- 1.3.1: Configure Storage Layer
- 2: Concepts
- 2.1: PediaCluster
- 2.2: Public Configuration of Cluster Sync Resources (ClusterSyncResources)
- 2.3: Collection Resource
- 2.4: Cluster Auto Import Policy
- 3: Usage
- 3.1: Import Clusters
- 3.2: Interfacing to Multi-Cloud Platforms
- 3.3: Synchronize Cluster Resources
- 3.4: Access the Clusterpedia
- 3.5: Search
- 3.5.1: Multiple Clusters
- 3.5.2: Specify a Cluster
- 3.5.3: Collection Resource
- 4: Advanced Features
- 5: Features
- 5.1: Return RemainingItemCount
- 5.2: Raw SQL Query
- 5.3: Resource Field Pruning
- 5.4: Standalone TCP for Health Checker
- 5.5: Sync All Custom Resources
- 5.6: Sync All Resources
- 6: Release notes
1 - Installation
1.1 - kubectl apply
Install
The installation of Clusterpedia is divided into several parts: pulling the project, installing the storage component, and installing the Clusterpedia components.
If you use an existing storage component (MySQL or PostgreSQL), you can skip the step of installing the storage component.
Pull the clusterpedia project:
git clone https://github.com/clusterpedia-io/clusterpedia.git
cd clusterpedia
git checkout v0.7.0
Install storage component
Clusterpedia installation provides two storage components (MySQL 8.0 and PostgreSQL 12) to choose from.
If you use an existing storage component (MySQL or PostgreSQL), skip this step.
Go to the installation directory of the selected storage component:
# PostgreSQL
cd ./deploy/internalstorage/postgres
# MySQL
cd ./deploy/internalstorage/mysql
The storage component uses the Local PV method to store data, so you must specify the node where the Local PV is located during deployment.
You can choose to provide your own PV
export STORAGE_NODE_NAME=<nodename>
sed "s|__NODE_NAME__|$STORAGE_NODE_NAME|g" `grep __NODE_NAME__ -rl ./templates` > clusterpedia_internalstorage_pv.yaml
Deploy storage component
kubectl apply -f .
# Go back to Clusterpedia root directory
cd ../../../
Install Clusterpedia
Once the storage component is successfully deployed, you can install Clusterpedia.
If you use an existing storage component, refer to Configure Storage Layer to set it as the Default Storage Layer.
Run the following command in the clusterpedia root directory:
# Deploy Clusterpedia CRD and components
kubectl apply -f ./deploy
Final check
Check if the component Pods are running properly
kubectl -n clusterpedia-system get pods
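The exact Pod names and counts vary with the version and the chosen storage component; output along these lines (illustrative, not verbatim) indicates a healthy installation:
# Output:
NAME                                                   READY   STATUS    RESTARTS   AGE
clusterpedia-apiserver-xxxxxxxxxx-xxxxx                1/1     Running   0          1m
clusterpedia-clustersynchro-manager-xxxxxxxxxx-xxxxx   1/1     Running   0          1m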
Create Cluster Auto Import Policy —— ClusterImportPolicy
After 0.4.0, Clusterpedia provides a more friendly way to interface to multi-cloud platforms.
Users can create a ClusterImportPolicy to automatically discover managed clusters in the multi-cloud platform and automatically synchronize them as PediaCluster resources, so you don't need to maintain PediaCluster manually based on the managed clusters.
We maintain a ClusterImportPolicy for each multi-cloud platform in the Clusterpedia repository. People also submit ClusterImportPolicy definitions to Clusterpedia for interfacing to other multi-cloud platforms.
After installing Clusterpedia, you can create the appropriate ClusterImportPolicy, or create a new ClusterImportPolicy according to your needs (multi-cloud platform).
For details, please refer to Interfacing to Multi-Cloud Platforms.
kubectl get clusterimportpolicy
Uninstall
Clean up ClusterImportPolicy
If you have deployed ClusterImportPolicy resources, you need to clean them up first.
kubectl get clusterimportpolicy
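If any are listed and no longer needed, they can all be removed at once; a minimal sketch:
kubectl delete clusterimportpolicy --all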
Clean up PediaCluster
Before uninstalling Clusterpedia, you need to check if PediaCluster resources still exist in your environment, and clean up those resources.
kubectl get pediacluster
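Likewise, remaining PediaCluster resources can be cleaned up in one pass; a minimal sketch (verify the list above first):
kubectl delete pediacluster --all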
Uninstall Clusterpedia
After the PediaCluster resource cleanup is complete, uninstall the Clusterpedia components.
kubectl delete -f ./deploy/clusterpedia_apiserver_apiservice.yaml
kubectl delete -f ./deploy/clusterpedia_apiserver_deployment.yaml
kubectl delete -f ./deploy/clusterpedia_clustersynchro_manager_deployment.yaml
kubectl delete -f ./deploy/clusterpedia_apiserver_rbac.yaml
kubectl delete -f ./deploy/cluster.clusterpedia.io_pediaclusters.yaml
Uninstall Storage Component
Remove related resources depending on the type of storage component selected.
kubectl delete -f ./deploy/internalstorage/<storage type>
Remove Local PV and clean up data
After the storage component is uninstalled, the Local PV and the corresponding data are still left on the node, and we need to clean them up manually.
View the mounted nodes via Local PV resource details.
kubectl get pv clusterpedia-internalstorage-<storage type>
Once you know the node where the data is stored, you can delete the Local PV.
kubectl delete pv clusterpedia-internalstorage-<storage type>
Log in to the node where the data is located and clean up the data.
# In the node where the legacy data is located
rm -rf /var/local/clusterpedia/internalstorage/<storage type>
1.2 - Helm
1.3 - Configuration
1.3.1 - Configure Storage Layer
The Default Storage Layer of Clusterpedia supports two storage components: MySQL and PostgreSQL.
When installing Clusterpedia, you can use an existing storage component by creating the Default Storage Layer (ConfigMap) and the Secret of the storage component.
Configure the Default Storage Layer
You need to create the clusterpedia-internalstorage ConfigMap in the clusterpedia-system namespace.
# internalstorage configmap example
apiVersion: v1
kind: ConfigMap
metadata:
  name: clusterpedia-internalstorage
  namespace: clusterpedia-system
data:
  internalstorage-config.yaml: |
    type: "mysql"
    host: "clusterpedia-internalstorage-mysql"
    port: 3306
    user: root
    database: "clusterpedia"
    connPool:
      maxIdleConns: 10
      maxOpenConns: 100
      connMaxLifetime: 1h
    log:
      slowThreshold: "100ms"
      logger:
        filename: /var/log/clusterpedia/internalstorage.log
        maxbackups: 3
The Default Storage Layer config supports the following fields:
field | description |
---|---|
type | type of the storage component, such as “postgres” or “mysql” |
host | host of the storage component, such as an IP address or Service name |
port | port of the storage component |
user | user for the storage component |
password | password for the storage component |
database | the database used by Clusterpedia |
Storing the access password in a Secret is a good choice. For details, see Configure Secret of storage component.
Connection Pool
field | description | default value |
---|---|---|
connPool.maxIdleConns | the maximum number of connections in the idle connection pool | 10 |
connPool.maxOpenConns | the maximum number of open connections to the database | 100 |
connPool.connMaxLifetime | the maximum amount of time a connection may be reused | 1h |
Tune the database connection pool according to your environment.
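For example, an environment with heavier query traffic might raise the limits; the values below are purely illustrative:
connPool:
  maxIdleConns: 20
  maxOpenConns: 200
  connMaxLifetime: 30m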
Configure log
Clusterpedia supports configuring logs for the storage layer; the log field enables recording slow SQL queries and errors.
field | description |
---|---|
log.stdout | output logs to the standard device |
log.colorful | enable color printing or not |
log.slowThreshold | threshold for slow SQL queries, such as “100ms” |
log.level | severity level, such as Silent, Error, Warn, Info |
log.logger | configure the rolling logger |
After enabling log, if log.stdout is not set to true, the log will be output to /var/log/clusterpedia/internalstorage.log.
Rolling logger
Write storage layer logs to a file and configure log file rotation via the log.logger field:
field | description |
---|---|
log.logger.filename | the file to write logs to; backup log files are retained in the same directory, default is /var/log/clusterpedia/internalstorage.log |
log.logger.maxsize | the maximum size in megabytes of the log file before it gets rotated, default is 100 MB |
log.logger.maxage | the maximum number of days to retain old log files based on the timestamp encoded in their filename |
log.logger.maxbackups | the maximum number of old log files to retain |
log.logger.localtime | whether to use local time, default is UTC |
log.logger.compress | whether rotated log files should be compressed using gzip |
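A sketch that combines the fields above into one log section (all values are illustrative):
log:
  stdout: false
  colorful: false
  slowThreshold: "200ms"
  level: Warn
  logger:
    filename: /var/log/clusterpedia/internalstorage.log
    maxsize: 100
    maxage: 7
    maxbackups: 5
    compress: true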
Disable log
If the log field is not filled in the internalstorage config, logging is disabled, for example:
type: "mysql"
host: "clusterpedia-internalstorage-mysql"
port: 3306
user: root
database: "clusterpedia"
More configuration
The default storage layer also provides more configuration options for MySQL and PostgreSQL. Refer to internalstorage/config.go.
Configure Secret
The yaml file used to install Clusterpedia gets the password from the internalstorage-password Secret.
Configure the storage component password in the Secret:
kubectl -n clusterpedia-system create secret generic \
internalstorage-password --from-literal=password=<password to access storage components>
2 - Concepts
2.1 - PediaCluster
Clusterpedia uses the PediaCluster resource to represent a cluster whose resources need to be synchronized and retrieved.
Clusterpedia needs to be very friendly to other multi-cloud platforms, which may use a Cluster resource to represent managed clusters; to avoid conflicts, Clusterpedia uses PediaCluster.
$ kubectl get pediacluster
NAME APISERVER VERSION STATUS
demo1 https://10.6.101.100:6443 v1.22.3-aliyun.1 Healthy
demo2 https://10.6.101.100:6443 v1.21.0 Healthy
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: demo1
spec:
  apiserver: https://10.6.101.100:6443
  kubeconfig:
  caData:
  tokenData:
  certData:
  keyData:
  syncResources: []
  syncAllCustomResources: false
  syncResourcesRefName: ""
PediaCluster has two uses:
- Configure authentication information for the cluster
- Configure resources for synchronization
For configuring cluster authentication information, see Import Clusters.
There are three fields to configure the resources to be synchronized:
- spec.syncResources configures the resources that need to be synchronized for this cluster
- spec.syncAllCustomResources synchronizes all custom resources
- spec.syncResourcesRefName references a Public Configuration of Cluster Sync Resources
For details on configuring synchronization resources, see Synchronize Cluster Resources
2.2 - Public Configuration of Cluster Sync Resources (ClusterSyncResources)
Clusterpedia provides the public configuration of cluster sync resources —— ClusterSyncResources
kubectl get clustersyncresources
The spec.syncResources field of ClusterSyncResources is configured in the same way as PediaCluster's spec.syncResources, see Synchronize Cluster Resources.
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: ClusterSyncResources
metadata:
  name: global-base
spec:
  syncResources:
  - group: ""
    resources:
    - pods
  - group: "apps"
    resources:
    - "*"
The spec.syncAllCustomResources field will be supported in the future to allow synchronizing all custom resources.
Any PediaCluster can reference the same ClusterSyncResources via the spec.syncResourcesRefName field.
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: demo1
spec:
  syncResourcesRefName: "global-base"
When we modify a ClusterSyncResources, all resource types synchronized within the PediaClusters that reference it will be modified accordingly.
If a PediaCluster has both spec.syncResourcesRefName and spec.syncResources set, the union of the two will be used.
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: demo1
spec:
  syncResourcesRefName: "global-base"
  syncResources:
  - group: ""
    resources:
    - pods
    - configmaps
In the above example, clusterpedia synchronizes the pods and configmaps resources, and all resources under the apps group in the demo1 cluster.
2.3 - Collection Resource
In order to query multiple types of resources at once, Clusterpedia provides a new resource: Collection Resource.
A Collection Resource is composed of different types of resources, and these resources can be retrieved and paged in a uniform way through the Collection Resource.
The Collection Resources supported by Clusterpedia depend on the Storage Layer. For example, the Default Storage Layer currently supports any, workloads and kuberesources.
kubectl get collectionresources
# Output:
NAME RESOURCES
any *
workloads deployments.apps,daemonsets.apps,statefulsets.apps
kuberesources .*,*.admission.k8s.io,*.admissionregistration.k8s.io,*.apiextensions.k8s.io,*.apps,*.authentication.k8s.io,*.authorization.k8s.io,*.autoscaling,*.batch,*.certificates.k8s.io,*.coordination.k8s.io,*.discovery.k8s.io,*.events.k8s.io,*.extensions,*.flowcontrol.apiserver.k8s.io,*.imagepolicy.k8s.io,*.internal.apiserver.k8s.io,*.networking.k8s.io,*.node.k8s.io,*.policy,*.rbac.authorization.k8s.io,*.scheduling.k8s.io,*.storage.k8s.io
any means any resources; the user needs to pass the groups or resources to combine when using it, for details see Use Any CollectionResource.
kuberesources contains all of kube's built-in resources, and we can use kuberesources to filter and search all of them through a uniform API.
View the supported Collection Resources in a YAML file:
kubectl get collectionresources -o yaml
# Output:
apiVersion: v1
items:
- apiVersion: clusterpedia.io/v1beta1
  kind: CollectionResource
  metadata:
    creationTimestamp: null
    name: any
  resourceTypes: []
- apiVersion: clusterpedia.io/v1beta1
  kind: CollectionResource
  metadata:
    creationTimestamp: null
    name: workloads
  resourceTypes:
  - group: apps
    resource: deployments
    version: v1
  - group: apps
    resource: daemonsets
    version: v1
  - group: apps
    resource: statefulsets
    version: v1
- apiVersion: clusterpedia.io/v1beta1
  kind: CollectionResource
  metadata:
    creationTimestamp: null
    name: kuberesources
  resourceTypes:
  - group: ""
  - group: admission.k8s.io
  - group: admissionregistration.k8s.io
  - group: apiextensions.k8s.io
  - group: apps
  - group: authentication.k8s.io
  - group: authorization.k8s.io
  - group: autoscaling
  - group: batch
  - group: certificates.k8s.io
  - group: coordination.k8s.io
  - group: discovery.k8s.io
  - group: events.k8s.io
  - group: extensions
  - group: flowcontrol.apiserver.k8s.io
  - group: imagepolicy.k8s.io
  - group: internal.apiserver.k8s.io
  - group: networking.k8s.io
  - group: node.k8s.io
  - group: policy
  - group: rbac.authorization.k8s.io
  - group: scheduling.k8s.io
  - group: storage.k8s.io
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
It is shown that workloads includes three resources: deployments, daemonsets and statefulsets, and kuberesources contains all of kube's built-in resources.
For details about Collection Resource, see Search for Collection Resource.
Custom Collection Resource
Clusterpedia plans to provide two ways to let users combine the types of resources they want to query at will:
- Any CollectionResource —— use the any collection resource
- CustomCollectionResource —— define a custom collection resource
After 0.4, Clusterpedia provides the any collectionresource to allow users to combine different types of resources by passing the groups and resources parameters.
However, it should be noted that any collectionresource cannot be retrieved using kubectl, see Using Any CollectionResource.
$ kubectl get collectionresources any
Error from server (BadRequest): url query - `groups` or `resources` is required
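The groups can instead be passed as URL query parameters through the raw API path; a minimal sketch (the parameter names follow the error message above):
kubectl get --raw="/apis/clusterpedia.io/v1beta1/collectionresources/any?groups=apps" | jq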
Custom Collection Resource allows users to create or update a Collection Resource via kubectl apply collectionresource <collectionresource name>, and users can configure the resource types of the Collection Resource at will.
apiVersion: clusterpedia.io/v1beta1
kind: CollectionResource
metadata:
  name: workloads
resourceTypes:
- group: apps
  resource: deployments
- group: apps
  resource: daemonsets
- group: apps
  resource: statefulsets
- group: batch
  resource: cronjobs
Custom Collection Resources are not currently supported.
2.4 - Cluster Auto Import Policy
The custom resource ClusterImportPolicy defines how a certain type of resource should be converted into a PediaCluster, so that Clusterpedia can automatically create, update and delete PediaCluster resources based on that resource.
First we need to define in the ClusterImportPolicy resource the type of resource to watch (the resource you want to convert); we call the watched resource the Source resource.
When a Source resource is created or deleted, the ClusterImportPolicy Controller creates the corresponding PediaClusterLifecycle resource.
The custom resource PediaClusterLifecycle creates, updates and deletes PediaCluster resources based on the specific Source resource.
When creating and updating a PediaCluster, the Source resource may store the cluster's authentication information (e.g. CA, Token) in other resources, which are collectively referred to as Reference resources.
The reconciliation of ClusterImportPolicy and PediaClusterLifecycle is done by the ClusterImportPolicy Controller and PediaClusterLifecycle Controller within the Clusterpedia Controller Manager.
The ClusterImportPolicy and PediaClusterLifecycle resource structures may be updated frequently, so to avoid unnecessary impact on the cluster.clusterpedia.io group, they are placed in the policy.clusterpedia.io group. They may be migrated to cluster.clusterpedia.io in a future 1.0 release.
ClusterImportPolicy
An example of a complete ClusterImportPolicy resource is as follows:
apiVersion: policy.clusterpedia.io/v1alpha1
kind: ClusterImportPolicy
metadata:
  name: mcp
spec:
  source:
    group: "cluster.example.io"
    resource: clusters
    selectorTemplate: ""
  references:
  - group: ""
    resource: secrets
    namespaceTemplate: "{{ .source.spec.authSecretRef.namespace }}"
    nameTemplate: "{{ .source.spec.authSecretRef.name }}"
    key: authSecret
  nameTemplate: "mcp-{{ .source.metadata.name }}"
  template: |
    spec:
      apiserver: "{{ .source.spec.apiEndpoint }}"
      caData: "{{ .references.authSecret.data.ca }}"
      tokenData: "{{ .references.authSecret.data.token }}"
      syncResources:
      - group: ""
        resources:
        - "pods"
      - group: "apps"
        resources:
        - "*"
  creationCondition: |
    {{ if ne .source.spec.apiEndpoint "" }}
      {{ range .source.status.conditions }}
        {{ if eq .type "Ready" }}
          {{ if eq .status "True" }} true {{ end }}
        {{ end }}
      {{ end }}
    {{ end }}
The resource contains these sections:
- spec.source and spec.references define the Source resource type and the Reference resources
- spec.nameTemplate defines the names of the generated PediaClusterLifecycle and PediaCluster, which can be rendered from the Source resource
- spec.template and spec.creationCondition define the resource conversion policy
Templates within references, template and creationCondition all support the 70+ template functions provided by sprig, see the Sprig Function Documentation.
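For instance, sprig string functions can be composed into any of these templates; a small sketch (illustrative, not taken from the repository policies):
nameTemplate: "mcp-{{ .source.metadata.name | lower }}"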
Source and References Resource
The first thing we need to define in the ClusterImportPolicy resource is the Source resource type and the Reference resources.
apiVersion: policy.clusterpedia.io/v1alpha1
kind: ClusterImportPolicy
metadata:
  name: multi-cluster-platform
spec:
  source:
    group: "example.io"
    resource: clusters
    versions: []
    selectorTemplate: |
      {{ if eq .source.metadata.namespace "default" }} true {{ end }}
  references:
  - group: ""
    resource: secrets
    versions: []
    namespaceTemplate: "{{ .source.spec.secretRef.namespace }}"
    nameTemplate: "{{ .source.spec.secretRef.name }}"
    key: secret
The Source resource specifies the resource group and resource name via spec.source.group and spec.source.resource. We can also use spec.source.versions to restrict the Source resource versions; by default there is no restriction.
A Source resource can only have one ClusterImportPolicy resource responsible for converting it.
You can also filter Source resources by the spec.source.selectorTemplate field.
apiVersion: policy.clusterpedia.io/v1alpha1
kind: ClusterImportPolicy
metadata:
  name: kubevela
spec:
  source:
    group: ""
    resource: secrets
    selectorTemplate: |
      {{ if eq .source.metadata.namespace "vela-system" }}
        {{ if .source.metadata.labels }}
          {{ eq (index .source.metadata.labels "cluster.core.oam.dev/cluster-credential-type") "X509Certificate" }}
        {{ end }}
      {{ end }}
Source resources can be used in other template fields via {{ .source.<field> }}.
The resources involved in the conversion process are defined in spec.references, where we specify the resource type and the specific reference resource via namespace and name templates. We can also restrict the versions of Reference resources in the same way as for the Source resource.
In addition, we need to set a key for each reference resource, which can be used by subsequent template fields as {{ .references.<key> }}.
Later items in spec.references can also refer to earlier ones via .references.<key>:
spec:
  references:
  - group: a.example.io
    resource: aresource
    namespaceTemplate: "{{ .source.spec.aNamespace }}"
    nameTemplate: "{{ .source.spec.aName }}"
    key: refA
  - group: b.example.io
    resource: bresource
    namespaceTemplate: "{{ .references.refA.spec.bNamespace }}"
    nameTemplate: "{{ .references.refA.spec.bName }}"
    key: refB
PediaClusterLifecycle
When a Source is created, the ClusterImportPolicy Controller creates the corresponding PediaClusterLifecycle resource based on the ClusterImportPolicy resource.
The name of the PediaClusterLifecycle resource is set via the spec.nameTemplate field of the ClusterImportPolicy.
apiVersion: policy.clusterpedia.io/v1alpha1
kind: ClusterImportPolicy
metadata:
  name: multi-cluster-platform
spec:
  nameTemplate: "mcp-{{ .source.metadata.namespace }}-{{ .source.metadata.name }}"
The nameTemplate can only render templates based on the Source resource, and the field is usually set according to whether the Source is a cluster scoped resource:
# namespace scoped Source resource
nameTemplate: "<prefix>-{{ .source.metadata.namespace }}-{{ .source.metadata.name }}"
# cluster scoped Source resource
nameTemplate: "<prefix>-{{ .source.metadata.name }}"
nameTemplate is usually prefixed with a multi-cloud platform or other meaningful name, but it can of course be set without a prefix.
The PediaClusterLifecycle sets the specific Source resource, the Reference resources and the conversion policy:
apiVersion: policy.clusterpedia.io/v1alpha1
kind: PediaClusterLifecycle
metadata:
  name: <prefix>-example
spec:
  source:
    group: example.io
    version: v1beta1
    resource: clusters
    namespace: ""
    name: example
  references:
  - group: ""
    resource: secrets
    version: v1
    namespaceTemplate: "{{ .source.spec.secretRef.namespace }}"
    nameTemplate: "{{ .source.spec.secretRef.name }}"
    key: secret
The spec.source of the PediaClusterLifecycle sets a specific Source resource, including the specific version, namespace and name of the resource.
Compared to the ClusterImportPolicy, spec.references contains the specific resource version; the other fields are the same as the References definition within the ClusterImportPolicy.
The namespace and name of the reference resources are resolved when converting the resource.
PediaClusterLifecycle and PediaCluster
The name of a PediaClusterLifecycle corresponds to a PediaCluster; a PediaClusterLifecycle creates and updates the PediaCluster with the same name according to the conversion policy.
PediaCluster Conversion Policy
We focus on the following aspects when defining the conversion policy:
- the template used to create or update the PediaCluster
- when to trigger the creation of a PediaCluster
In a ClusterImportPolicy, we use spec.template and spec.creationCondition to define them:
apiVersion: policy.clusterpedia.io/v1alpha1
kind: ClusterImportPolicy
metadata:
  name: mcp
spec:
  ... other fields
  template: |
    spec:
      apiserver: "{{ .source.spec.apiEndpoint }}"
      caData: "{{ .references.authSecret.data.ca }}"
      tokenData: "{{ .references.authSecret.data.token }}"
      syncResources:
      - group: ""
        resources:
        - "pods"
      - group: "apps"
        resources:
        - "*"
  creationCondition: |
    {{ if ne .source.spec.apiEndpoint "" }}
      {{ range .source.status.conditions }}
        {{ if eq .type "Ready" }}
          {{ if eq .status "True" }} true {{ end }}
        {{ end }}
      {{ end }}
    {{ end }}
Both of these fields are template fields that are rendered based on the Source resource and the References resources.
When the Source resource is created, the ClusterImportPolicy Controller creates the corresponding PediaClusterLifecycle based on the ClusterImportPolicy, and the PediaClusterLifecycle also contains the conversion policy.
Of course, if the policy in a ClusterImportPolicy is modified, the change is synchronized to all the PediaClusterLifecycle resources that belong to it.
apiVersion: policy.clusterpedia.io/v1alpha1
kind: ClusterImportPolicy
metadata:
  name: mcp-example
spec:
  ... other fields
  template: |
    spec:
      apiserver: "{{ .source.spec.apiEndpoint }}"
      caData: "{{ .references.authSecret.data.ca }}"
      tokenData: "{{ .references.authSecret.data.token }}"
      syncResources:
      - group: ""
        resources:
        - "pods"
      - group: "apps"
        resources:
        - "*"
  creationCondition: |
    {{ if ne .source.spec.apiEndpoint "" }}
      {{ range .source.status.conditions }}
        {{ if eq .type "Ready" }}
          {{ if eq .status "True" }} true {{ end }}
        {{ end }}
      {{ end }}
    {{ end }}
The PediaClusterLifecycle is responsible for creating and updating specific PediaClusters based on spec.creationCondition and spec.template.
Creation Condition
Sometimes we don't want to create a PediaCluster immediately after a Source resource is created; instead, we wait until some fields or some state of the Source resource is ready before creating the PediaCluster.
spec.creationCondition uses the template syntax to determine whether the creation condition is met; when the rendered template value is true (case insensitive), the PediaCluster is created according to spec.template.
If the PediaCluster already exists, spec.creationCondition will not affect updates to the PediaCluster.
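For illustration, a minimal condition (a sketch reusing the apiEndpoint field from the examples above) that creates the PediaCluster as soon as the Source has an API endpoint:
creationCondition: |
  {{ if ne .source.spec.apiEndpoint "" }} true {{ end }}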
PediaCluster Template
spec.template defines a PediaCluster resource template; specific resources are rendered based on the Source resource and the References resources when creating or updating a PediaCluster.
The PediaCluster template can be separated into three parts:
- Metadata: labels and annotations
- Cluster endpoint and authentication fields: spec.apiserver, spec.caData, spec.tokenData, spec.certData, spec.keyData and spec.kubeconfig
- Resource sync fields: spec.syncResources, spec.syncAllCustomResources and spec.syncResourcesRefName
The metadata and resource sync fields of the PediaCluster resource are only used when the PediaCluster is created; only the cluster endpoint and authentication fields are updated when the PediaCluster resource is updated.
PediaCluster Deletion Condition
If a PediaCluster is created by a PediaClusterLifecycle, the PediaClusterLifecycle will be set as the owner of that PediaCluster resource:
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: mcp-example
  ownerReferences:
  - apiVersion: policy.clusterpedia.io/v1alpha1
    kind: PediaClusterLifecycle
    name: mcp-example
    uid: f932483a-b1c5-4894-a524-00f78ea34a9f
When a Source resource is deleted, the PediaClusterLifecycle is deleted at the same time, and the PediaCluster is deleted automatically.
If the PediaCluster already existed before the PediaClusterLifecycle, the PediaCluster will not be deleted automatically when the Source is deleted.
A DeletionCondition will be added in the future to allow users to force or preempt the deletion of a PediaCluster.
3 - Usage
3.1 - Import Clusters
Clusterpedia uses the custom resource PediaCluster to represent the imported cluster.
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: cluster-example
spec:
  apiserver: "https://10.30.43.43:6443"
  kubeconfig:
  caData:
  tokenData:
  certData:
  keyData:
  syncResources: []
There are two ways for users to configure the imported clusters:
- Configure a base64-encoded kubeconfig directly in the spec.kubeconfig field for cluster connectivity and authentication.
- Configure the address of the imported cluster and the authentication information.
When using the apiserver field to set the address of the imported cluster, there are several options for configuring the authentication fields:
- caData + tokenData
- caData + certData + keyData
caData can be left blank if the cluster APIServer allows insecure connections.
All these authentication fields need to be base64 encoded. If the field values are obtained directly from a ConfigMap or Secret, they are already base64 encoded.
Use the Kube Config to import a cluster
One of the easiest ways to connect and authenticate to a cluster is to use the kubeconfig.
First you need to base64 encode the kubeconfig of the imported cluster.
# macOS
cat ./kubeconfig | base64
# linux
cat ./kubeconfig | base64 -w 0
# Output:
YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIGNlcnRpZmljYXRlLWF1dGhvcml0eS1kYXRhOiBMUzB0TFMxQ1JVZEpUaUJEUlZKVVNVWkpRMEZVUlMwdExTMHRDazFKU1VNdmFrTkRRV1ZoWjBGM1NVSkJaMGxDUVVSQlRrSm5hM0ZvYTJsSE9YY3dRa0ZSYzBaQlJFRldUVkpOZDBWUldVUldVVkZFUlhkd2NtUlhTbXdLWTIwMWJHUkhWbnBOUWpSWVJGUkplRTFFYTNsT1JFVjNUVlJOZVU1R2IxaEVWRTE0VFVScmVVMXFSWGROVkUxNVRrWnZkMFpVUlZSTlFrVkhRVEZWUlFwQmVFMUxZVE5XYVZwWVNuVmFXRkpzWTNwRFEwRlRTWGRFVVZsS1MyOWFTV2gyWTA1QlVVVkNRbEZCUkdkblJWQkJSRU5EUVZGdlEyZG5SVUpCVHk5VENuWnRhMVU1YmsxdVVsUklUM2x2SzNoamRGUkpZMGxQWW5NemMwRjVjVEkyZGpSUVlrVnRiM1pXTTJ4UE9WUXdNVEYyY0U5NlMwcHlPVUZ4ZVZaTVJuWUtWWEZCUkhCVGFrTTNXWGQzTW5ad1NsZDNiREV5U2xCdlVtMXhaMUZCU0ZOa1lsSnBVM0JEVERSdWRqbHZSMjVWT1dJMmRsbFdTeTlpUml0a1VWRkNTQXBuUTFoNk5uWm9UR1k0V21kMk4ydFVRMkpCZGtGUGFFOU9TbFUzTWxsWVRFOHpUMGxaUWpKdmExTkNSR0ZWVWpOdk5ucHdaR1ZXVGt0NVYwRXlOVkEzQ2tSb2JrOHlUazAxUXpscFJFUnFUVFJMWTJGVGEzSlBTa0p2YlVsc1NIRlpSalJ3VlhkVFRsRnZjR1ZHUlZSeVozWnpjVGt3U2tzMllVSlZTMHQ1YWpZS0syTkdkakkzUzBrNEsxWk1VRXRhU1RFMmMyNU1ibmcyUlhSVGF6WnRaakpYVEhkSlpsaHlRbGd3UkVzdllYQkVRMDE1UjJwRWIyZENhR3BKU1Zob1ZBcDJialZRWm5kRldVTnNkR1pGVEVoS1NrZFZRMEYzUlVGQllVNWFUVVpqZDBSbldVUldVakJRUVZGSUwwSkJVVVJCWjB0clRVRTRSMEV4VldSRmQwVkNDaTkzVVVaTlFVMUNRV1k0ZDBoUldVUldVakJQUWtKWlJVWkpWRGhMUkhkQ2JVVnZNSGxhZFVGRVpraGtLelExTDNaRll6ZE5RbFZIUVRGVlpFVlJVVThLVFVGNVEwTnRkREZaYlZaNVltMVdNRnBZVFhkRVVWbEtTMjlhU1doMlkwNUJVVVZNUWxGQlJHZG5SVUpCVDBGNVZIUTRTM1pGTjBkdlJFaFFUMDlwZGdveVIySTJXV1ZzVVU1S2NVTXphMWRJT1hjMU5URk5hR1p2UzNaaU0yMVZhVVY2WlZNd09VTndaVVFyVEZoNVpubHFRemhaWWtKeFFqWlhTRmhOWldNckNucFBkRE5QYXpSWVYwRm1aVlZaVFhoT1ExRkpibGM0Y2pJNGNtWm5ibEVyYzFOQ2RIUXllRVJRTjFSWlkwOW9OVlpHWmtJMkszSnRUbUZUYmxaMU5qZ0tTRkZ4ZGxGTU5FRlhiVmhrUjA5alJXTkJSVGhZZGtkaU9XaHdTalZOY2tSSGR6UTBVVFl5T0c5WWF6WjBOMDFhV1RGT01VTlFkVzlIWjFWbVMxTjNiZ28xTVVGV1JURk9WVmROVjB0RVFYaGFhMkk0YkVodlIzVldhREZ6V21kM1NuSlJRalI1Y2xoMWNteEdOMFkyYlZSbFltNHJjRFZLTTB0b1QwVjRLemxzQ2pGWGRrd3diV2t4TDFKMmJWSktObTExWW10aldVd3pOMUZKV2pJMVlYZHlhRVpNTjBaMWVqTlJTVEZxVFRkWU1IWkVUMlZVTTJWdVZVRkNaVzVTTVM4S1VubG5QUW90TFMwdExVVk9SQ0JEUlZKVVNVWkpRMEZVUlMwdExTMHRDZz09CiAgICBzZXJ2ZXI6IGh0dHBzOi8vMTAuNi4xMDAuMTA6NjQ0MwogIG5hbWU6IGt1YmVybmV0ZXMKY29udGV4dHM6Ci0gY29udGV4dDoKICAgIGNsdXN0ZXI6IGt1YmVybmV0ZXMKICAgIHVzZXI6IGt1YmVybmV0ZXMtYWRtaW4KICBuYW1lOiBrdWJlcm5ldGVzLWFkbWluQGt1YmVybmV0ZXMKY3VycmVudC1jb250ZXh0OiBrdWJlcm5ldGVzLWFkbWluQGt1YmVybmV0ZXMKa2luZDogQ29uZmlnCnByZWZlcmVuY2VzOiB7fQp1c2VyczoKLSBuYW1lOiBrdWJlcm5ldGVzLWFkbWluCiAgdXNlcjoKICAgIGNsaWVudC1jZXJ0aWZpY2F0ZS1kYXRhOiBMUzB0TFMxQ1JVZEpUaUJEUlZKVVNVWkpRMEZVUlMwdExTMHRDazFKU1VSSlZFTkRRV2R0WjBGM1NVSkJaMGxKV2s0eVNscE5TbnAwU21kM1JGRlpTa3R2V2tsb2RtTk9RVkZGVEVKUlFYZEdWRVZVVFVKRlIwRXhWVVVLUVhoTlMyRXpWbWxhV0VwMVdsaFNiR042UVdWR2R6QjVUVlJCTlUxcVVYaE5SRVY2VFdwU1lVWjNNSGxOYWtFMVRXcFJlRTFFUlhwTmFtaGhUVVJSZUFwR2VrRldRbWRPVmtKQmIxUkViazQxWXpOU2JHSlVjSFJaV0U0d1dsaEtlazFTYTNkR2QxbEVWbEZSUkVWNFFuSmtWMHBzWTIwMWJHUkhWbnBNVjBackNtSlhiSFZOU1VsQ1NXcEJUa0puYTNGb2EybEhPWGN3UWtGUlJVWkJRVTlEUVZFNFFVMUpTVUpEWjB0RFFWRkZRVFZEUkdSYVdIcEliMXAxVVRKeFJEZ0tTakpRZGtWdWEyTk1UV05RVG14RE1DOVRTR1YzV25kME5FRjRLM2RDWTFSSVJ6aGpWakJhZUZSYVQwdDNPSFJ4UWxrMk1tcGtOM1p4VkdoeFRWbHdad3AyYzNwSFVXeHlXbGRyZHpSUmFrUldORnBLY1dSbFRITkRVV3BqZUZsa05Ea3JSalEyYkVsS1VUSjVjRXhTUjBkb2NGTlpZMlYzWkdOTVkweHNTamRIQ21wRlJFTnlVRGxrWTFsSWRWUTFlSE5YVG5aQlFXcG5RM051UTNsU1ZXbExOVzAyTDFaR1JEQllTVFp6TlZFclFuZDBPVXNyUzFkblJrSlBVQ3M0TlRBS1Vra3ZZblJSYTJsdmNIZFphMGR1WmtkVE9FeEJiM2t2TTBwUWFsTXlWbXAwUVN0aVR6SnhUa1pFTmpWcWEwRXhWa05XVGxFeFIxVmphV1pYUTFaQ2RRcHpOM2hQUWpnME9WZzVjMUZ6TVhaTlpWSTNTbTh6VjBSRFJEWm9lVTFXZDNOb1FqbEdhR2QxYm5acFNFRlRibkJ5UTJWME9EUjJaMnBSYVdWT1RITmhDbWRFZEh
aRlVVbEVRVkZCUW04eFdYZFdSRUZQUW1kT1ZraFJPRUpCWmpoRlFrRk5RMEpoUVhkRmQxbEVWbEl3YkVKQmQzZERaMWxKUzNkWlFrSlJWVWdLUVhkSmQwUkJXVVJXVWpCVVFWRklMMEpCU1hkQlJFRm1RbWRPVmtoVFRVVkhSRUZYWjBKVFJTOURaemhCV21oTFRrMXRZbWRCTTNnelpuVlBaamQ0U0FwUGVrRk9RbWRyY1docmFVYzVkekJDUVZGelJrRkJUME5CVVVWQk5XNWlRME5LYTBwTk5qQkRjVlZsZFdWVVUwbzBaRXBWWkc5S1NHVkhVblJGTWtKRkNrOVNXWEJIVUVVMllqUk5VVlJYY3pSbFZrOTFiRlUzYnpabU9WZFFVV1pDWm5JMmVGSlBXRFo1YUVoM2NIcDNkRVpVVW1od1lqaE5TVWxKV2pscWRqWUtaVVZ3TXpoWmFtUnBPVkV3SzBSaFkzRkxka0pVTURsMVEzWmtNR2x3UnpkTFNuVlNibkZMVVd4VWNtVnRkWFJsVGpOMk9HOUNTVGxXWjJsc2JXUllaZ3BwWkdGS1lqUlJaelpZVkdvemNFMUdkbFpqWTNOSGFWZG9UMHh5T1ZaSVZDdFFWazVaTjB4WlVHeG1Xa2RETkRCSk1URmlTVFZuUlZadVUydHZNa1JqQ21Od1NXOHJNbmRWZFRGU1IybExZMUp3V0RSb1FtUnBORWxYYlM4ek5sTXhaM2gzTW1KMFdFOWxNV3Q2T1c5SFlVNVplazVXU1VObkwzZDNiRzVEYVVNS2FtWjRiVFJJZWtOR1NXcHZRMGRxVFdWWVJFMVhieTlGT0d0U2RuaDFhMnQzYlc1MWN6aHpVV05FTVcxUkswZFFlbWM5UFFvdExTMHRMVVZPUkNCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2c9PQogICAgY2xpZW50LWtleS1kYXRhOiBMUzB0TFMxQ1JVZEpUaUJTVTBFZ1VGSkpWa0ZVUlNCTFJWa3RMUzB0TFFwTlNVbEZiM2RKUWtGQlMwTkJVVVZCTlVORVpGcFlla2h2V25WUk1uRkVPRW95VUhaRmJtdGpURTFqVUU1c1F6QXZVMGhsZDFwM2REUkJlQ3QzUW1OVUNraEhPR05XTUZwNFZGcFBTM2M0ZEhGQ1dUWXlhbVEzZG5GVWFIRk5XWEJuZG5ONlIxRnNjbHBYYTNjMFVXcEVWalJhU25Ga1pVeHpRMUZxWTNoWlpEUUtPU3RHTkRac1NVcFJNbmx3VEZKSFIyaHdVMWxqWlhka1kweGpUR3hLTjBkcVJVUkRjbEE1WkdOWlNIVlVOWGh6VjA1MlFVRnFaME56YmtONVVsVnBTd28xYlRZdlZrWkVNRmhKTm5NMVVTdENkM1E1U3l0TFYyZEdRazlRS3pnMU1GSkpMMkowVVd0cGIzQjNXV3RIYm1aSFV6aE1RVzk1THpOS1VHcFRNbFpxQ25SQksySlBNbkZPUmtRMk5XcHJRVEZXUTFaT1VURkhWV05wWmxkRFZrSjFjemQ0VDBJNE5EbFlPWE5SY3pGMlRXVlNOMHB2TTFkRVEwUTJhSGxOVm5jS2MyaENPVVpvWjNWdWRtbElRVk51Y0hKRFpYUTROSFpuYWxGcFpVNU1jMkZuUkhSMlJWRkpSRUZSUVVKQmIwbENRVUU0YTFZd01uSk5Tbm8zWkVkMmRRcHFORFJXZUdkTFZqUXhZbVJvTldJeFYwYzBUVEV6Y0VkWldUQnFhSGswT0RKa2JtcFVhVUpGTTNKU2JHWkxjSFZWUVZVMllXTmxWVFp3WkhreFUyMW5DbTgzWkVkYVJYQXpUMVZKVkVkU1JHSnhVR0ZzTHpCaUx6TjFZbWx1WWxSSGRucE1SVEZ1TDBoSWFrcEtabWhyZEhSd05ITk5jMjl6THlzNVFsWjRWbmNLVkVsR01uTjJWa1Z3WmtWdmVrdGhaMGhXYW5kcVVtZFpiVFpWTkZWYWVIVjJaRmcwVVhGdVVIRm5hVmgyZUd3eU5HeFhibkV6V25wYVQwSjJXa0p6Y2dwM1NWbERlRlJJWWprek5YbGplV3RMS3pKaEwxTlllRGRaUm5GTkwwRXdXbXMyWmxoMVRHeHVVME5wUkdSdlVsUjFWbTFtYWpjMU9VVkRVMjV1YzFCeENreE1hVnBxY1dwc2J6SlNaRlpSVlVOeVRrSk1MMHBGUjJ4aE5IZ3pkRUpxU21NdmFTdDJLekF2Tms1aVdtVm5aMk5tYlcxQk5USk5TRm8xVVVaVVZrb0tkRTkxT0RnMFJVTm5XVVZCTlZKd1FuSmFZazFXVW1sd1dVOVNPVkl4WVdjclZVeHhORlEzUW1sMU5XWkZXQzloZWtoemVsQmlUR0ZvYURaWVFuQjNTQW95YUZKa01XbDJObUZRVkZOSFRYbDFOR2M0WmtSM1owdzJOVE51VVZCVloxUlRURmxVV2xwb2NqUkNPRTUxYmxFMU9XOUZjREUzVW5VNWFIWkhOV1Z5Q204emNIZ3hNRXhRVUdaaUsyUnpNazU2UWxab2IwVlVNSGx5ZFcxbGNXbzFUemxNY1djeFVqRk1NbE5zWlc4M1ZTOXBVRVJMTUd0RFoxbEZRUzkxYkZVS1RHRnBObWRoWVN0VVV5dDRNVEowTlhWblZYaE5kRGROWkhSc2NsRkpjRkl4V2xsV04xQk1kVXBxVDJsSE1HaHdkM2RGWjNkcVdEVXZRMncxU1d0MVNRbzJWaXRKVjFWdFpGcERZbkZoTURsME9XcDZTVXB5TDFjMU5IcFJabmRUWWxsdE9YRjBPVVpZU0cxNWNFMXpUblZKSzBKb1IweENSRGh4UmpKQ1FVaFFDbXhXTkdwSFYxTkJSSG94Y2t0WVNGVkRkRTh4U21KUmMxSTRieXRqWkdGM05XTm1VMGhaYTBObldVVkJNMHc1YlRVd1lqQjZWVEk0TjI5S1lXWXJSbXdLY1RaamFIZEVWVU56WTFseGVtbGhLMmRQV2pSdlozUjVZVmRoVTB0TGEzaEhOVEJwUzFadmIzQjJZVEprV0ZSTVZWVkJNbk5tYjFReWEzaEZXbG9yVEFwS2VXWmhLMU01WTBsdmJWQndhVEl5ZGpVclptVnNSblpxZVhKc1N6bFpRbUZNZUhwamJrZExWa2M1YjBWeWNrOHhVVlZLUlhrNEswZDRWMmxRU0VGU0NqZGxWekZXZVU5TE5HdGFPRGs1UlcxUk1WaGpSMUpyUTJkWlFqa3pUMng1WW1ab1FUUm1jbVZyTm10ck5qSXdPRWQ1YjNseGRUSmlNVlJvZFhOd01EY0tZalZMT1RONWExWmtZazFhT0RSUk9VRTRZVkF2YVZSRWFrSnlRMmQ2WkVSMU5tSlJTakZtZFZKdlZFTnVXVW95TjFsWlMwVXZhbWhrV21KUk1FazJSUXBoVDNwNFprRjVaU3RvYjBVNVdtWm9XVkF5ZDA5blFXbDNabEpMWjBSYWJEQjNhRlJzYkhKbmNUTjJTa0lyYjJoMWJYbGpRa1
F4UlRaVFozZ3ZNRnA1Q2k5c2JsSjFVVXRDWjBaTWNGWTVVQzg1Y21GbWJtUjBXRFZZZVZWMFQwMHZRbmt6UlZsbmJGbHZSMDlrUlhGTmFIaFlSeXQzUTFaQ1ZFSlZUMlJ6Y0V3S1RreGlVVkI0YW1KT1ZFMVFTakI2U0ZwcVppdDFhMHBvU1U5MGR6UmlUbk51YzNCa1NsTnpWMmRtTlhVeGRqZFBaMkUyVnpKMGFFRkNSelE1VEZGbFJ3cHRNWEZHUTJkTFpEZFpVRzlMUldKbGIwMXpTRXBSZUhCR2VYWnZSSE40VEZVNU0wOUVVblE1YVVGSFpFMUpZMll5Y25CTkNpMHRMUzB0UlU1RUlGSlRRU0JRVWtsV1FWUkZJRXRGV1MwdExTMHRDZz09Cg==
If the output contains newlines, you can use base64 -w 0 ./kubeconfig.
Set the base64-encoded content in the PediaCluster spec.kubeconfig field; spec.apiserver and the other authentication fields don't need to be set.
However, since the cluster address is configured inside the kubeconfig, the APISERVER URL column is empty when you use kubectl get pediacluster.
kubectl get pediacluster
# Output:
NAME APISERVER URL VERSION STATUS
cluster-1 v1.22.2 Healthy
Mutating admission webhooks will be added in the future to automatically set spec.apiserver. Currently, if you want the cluster apiserver address to be shown by kubectl get pediacluster, you need to manually configure the spec.apiserver field.
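One way to do this on an existing resource is a merge patch; a sketch (the cluster name and address are illustrative):
kubectl patch pediacluster cluster-1 --type=merge -p '{"spec":{"apiserver":"https://10.6.100.10:6443"}}'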
Use ServiceAccount to import a cluster
You can also choose to create a ServiceAccount in the Imported Cluster and configure the proper RBAC to import the cluster.
# Connect the current kubectl to the imported cluster
kubectl apply -f https://raw.githubusercontent.com/clusterpedia-io/clusterpedia/main/examples/clusterpedia_synchro_rbac.yaml
# Get CA and Token for Service Account
SYNCHRO_CA=$(kubectl -n default get secret $(kubectl -n default get serviceaccount clusterpedia-synchro -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.ca\.crt}')
SYNCHRO_TOKEN=$(kubectl -n default get secret $(kubectl -n default get serviceaccount clusterpedia-synchro -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}')
Fill $SYNCHRO_CA and $SYNCHRO_TOKEN into the spec.caData and spec.tokenData fields of the PediaCluster resource.
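For example, the two variables can be substituted into a manifest and applied in one step; a minimal sketch (the cluster name and apiserver address are illustrative):
cat <<EOF | kubectl apply -f -
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: cluster-example
spec:
  apiserver: https://10.30.43.43:6443
  caData: $SYNCHRO_CA
  tokenData: $SYNCHRO_TOKEN
  syncResources: []
EOF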
Create PediaCluster
After completing the cluster authentication fields, you get a complete PediaCluster resource, and you can directly use kubectl apply -f to create it. For example:
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
name: cluster-example
spec:
apiserver: https://10.6.100.10:6443
caData: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1Ea3lOREV3TVRNeU5Gb1hEVE14TURreU1qRXdNVE15TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTy9TCnZta1U5bk1uUlRIT3lvK3hjdFRJY0lPYnMzc0F5cTI2djRQYkVtb3ZWM2xPOVQwMTF2cE96S0pyOUFxeVZMRnYKVXFBRHBTakM3WXd3MnZwSld3bDEySlBvUm1xZ1FBSFNkYlJpU3BDTDRudjlvR25VOWI2dllWSy9iRitkUVFCSApnQ1h6NnZoTGY4Wmd2N2tUQ2JBdkFPaE9OSlU3MllYTE8zT0lZQjJva1NCRGFVUjNvNnpwZGVWTkt5V0EyNVA3CkRobk8yTk01QzlpRERqTTRLY2FTa3JPSkJvbUlsSHFZRjRwVXdTTlFvcGVGRVRyZ3ZzcTkwSks2YUJVS0t5ajYKK2NGdjI3S0k4K1ZMUEtaSTE2c25Mbng2RXRTazZtZjJXTHdJZlhyQlgwREsvYXBEQ015R2pEb2dCaGpJSVhoVAp2bjVQZndFWUNsdGZFTEhKSkdVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZJVDhLRHdCbUVvMHladUFEZkhkKzQ1L3ZFYzdNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBT0F5VHQ4S3ZFN0dvREhQT09pdgoyR2I2WWVsUU5KcUMza1dIOXc1NTFNaGZvS3ZiM21VaUV6ZVMwOUNwZUQrTFh5ZnlqQzhZYkJxQjZXSFhNZWMrCnpPdDNPazRYV0FmZVVZTXhOQ1FJblc4cjI4cmZnblErc1NCdHQyeERQN1RZY09oNVZGZkI2K3JtTmFTblZ1NjgKSFFxdlFMNEFXbVhkR09jRWNBRThYdkdiOWhwSjVNckRHdzQ0UTYyOG9YazZ0N01aWTFOMUNQdW9HZ1VmS1N3bgo1MUFWRTFOVVdNV0tEQXhaa2I4bEhvR3VWaDFzWmd3SnJRQjR5clh1cmxGN0Y2bVRlYm4rcDVKM0toT0V4KzlsCjFXdkwwbWkxL1J2bVJKNm11YmtjWUwzN1FJWjI1YXdyaEZMN0Z1ejNRSTFqTTdYMHZET2VUM2VuVUFCZW5SMS8KUnlnPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
tokenData: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklrMHRSalJtZGpSdVgxcFljMGxsU1ZneFlXMHpPSFZOY0Zwbk1UTkhiVFpsVFZwQ2JIWk9SbU5XYW5NaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUprWldaaGRXeDBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpXTnlaWFF1Ym1GdFpTSTZJbU5zZFhOMFpYSndaV1JwWVMxemVXNWphSEp2TFhSdmEyVnVMVGsxYTJSNElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVibUZ0WlNJNkltTnNkWE4wWlhKd1pXUnBZUzF6ZVc1amFISnZJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpYSjJhV05sTFdGalkyOTFiblF1ZFdsa0lqb2lNREl5WXpNMk5USXRPR1k0WkMwME5qSmtMV0l6TnpFdFpUVXhPREF3TnpFeE9HUTBJaXdpYzNWaUlqb2ljM2x6ZEdWdE9uTmxjblpwWTJWaFkyTnZkVzUwT21SbFptRjFiSFE2WTJ4MWMzUmxjbkJsWkdsaExYTjVibU5vY204aWZRLkF4ZjhmbG5oR0lDYjJaMDdkT0FKUW11aHVIX0ZzRzZRSVY5Sm5sSmtPUnF5aGpWSDMyMkVqWDk1bVhoZ2RVQ2RfZXphRFJ1RFFpLTBOWDFseGc5OXpYRks1MC10ZzNfYlh5NFA1QnRFOUpRNnNraUt4dDFBZVJHVUF4bG5fVFU3SHozLTU5Vnl5Q3NwckFZczlsQWQwRFB6bTRqb1dyS1lKUXpPaGl5VjkzOWpaX2ZkS1BVUmNaMVVKVGpXUTlvNEFFY0hMdDlyTEJNMTk2eDRkbzA4ZHFaUnVtTzJZRXFkQTB3ZnRxZ2NGQzdtTGlSVVhkWElkYW9CY1BuWXBwM01MU3B5QjJQMV9vSlRFNS1nd3k4N2Jwb3U1RXo2TElSSExIeW5NWXAtWVRLR2hBbDJwMXdJb0tDZUNnQng4RlRfdzM4Rnh1TnE0UDRoQW5RUUh6bU9Ndw==
syncResources: []
View Cluster
After a cluster is successfully imported, you can use kubectl get pediacluster to view the imported clusters and check their status:
kubectl get pediacluster
# Output:
NAME APISERVER URL VERSION STATUS
cluster-1 https://10.6.100.10:6443 v1.22.2 Healthy
cluster-2 https://10.50.10.11:16443 v1.10.11 Healthy
Next
3.2 - Interfacing to Multi-Cloud Platforms
After 0.4.0, Clusterpedia provides a more friendly way to interface to multi-cloud platforms.
Users can create a ClusterImportPolicy to automatically discover managed clusters in the multi-cloud platform and automatically synchronize them as PediaCluster resources, so you don't need to maintain PediaCluster manually based on the managed clusters.
We maintain a ClusterImportPolicy for each multi-cloud platform in the Clusterpedia repository. People also submit ClusterImportPolicy definitions to Clusterpedia for interfacing to other multi-cloud platforms.
After installing Clusterpedia, you can create the appropriate ClusterImportPolicy, or you can create a new ClusterImportPolicy according to your needs (multi-cloud platform).
ClusterAPI ClusterImportPolicy
Users can refer to Cluster API Quick Start to install the Cluster API, or refer to Quickly deploy Cluster API + Clusterpedia to deploy a sample environment.
Create the ClusterImportPolicy for interfacing to the Cluster API platform:
$ kubectl apply -f https://raw.githubusercontent.com/clusterpedia-io/clusterpedia/main/deploy/clusterimportpolicy/cluster_api.yaml
$ kubectl get clusterimportpolicy
NAME AGE
cluster-api 4d19h
If clusters created by the Cluster API already exist in the management cluster, you can view the Cluster and PediaCluster resources:
$ kubectl get cluster
NAME PHASE AGE VERSION
capi-quickstart Provisioned 3d23h v1.24.2
capi-quickstart-2 Provisioned 3d23h v1.24.2
$ kubectl get pediaclusterlifecycle
NAME AGE
default-capi-quickstart 3d23h
default-capi-quickstart-2 3d23h
$ kubectl get pediacluster
NAME READY VERSION APISERVER
default-capi-quickstart True v1.24.2
default-capi-quickstart-2 True v1.24.2
A PediaCluster is automatically created based on the Cluster, and the kubeconfig of the PediaCluster is automatically updated when the kubeconfig of the Cluster changes.
When creating a new Cluster, Clusterpedia automatically creates the PediaCluster once ControlPlaneInitialized is True, according to the Cluster API ClusterImportPolicy; you can check the initialization status of the cluster by using kubectl get kubeadmcontrolplane:
NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
capi-quickstart-2xcsz capi-quickstart true 1 1 1 86s v1.24.2
Once the Cluster has been initialized, you can use kubectl to retrieve multi-cluster resources directly.
Before using kubectl, you need to generate the cluster shortcut configuration for multi-cluster resource retrieval.
$ # Since CNI is not installed, the nodes are not ready.
$ kubectl --cluster clusterpedia get no
CLUSTER NAME STATUS ROLES AGE VERSION
default-capi-quickstart-2 capi-quickstart-2-ctm9k-g2m87 NotReady control-plane 12m v1.24.2
default-capi-quickstart-2 capi-quickstart-2-md-0-s8hbx-7bd44554b5-kzcb6 NotReady <none> 11m v1.24.2
default-capi-quickstart capi-quickstart-2xcsz-fxrrk NotReady control-plane 21m v1.24.2
default-capi-quickstart capi-quickstart-md-0-9tw2g-b8b4f46cf-gggvq NotReady <none> 20m v1.24.2
Karmada ClusterImportPolicy
For the Karmada platform, you need to first deploy Clusterpedia in the Karmada APIServer; the deployment steps can be found at https://github.com/Iceber/deploy-clusterpedia-to-karmada
Create the ClusterImportPolicy for interfacing to the Karmada platform:
$ kubectl create -f https://raw.githubusercontent.com/clusterpedia-io/clusterpedia/main/deploy/clusterimportpolicy/karmada.yaml
$ kubectl get clusterimportpolicy
NAME AGE
karmada 7d5h
View the Karmada Cluster and PediaClusterLifecycle resources:
$ kubectl get cluster
NAME VERSION MODE READY AGE
argocd Push False 8d
member1 v1.23.4 Push True 22d
member3 v1.23.4 Pull True 22d
$ kubectl get pediaclusterlifecycle
NAME AGE
karmada-argocd 7d5h
karmada-member1 7d5h
karmada-member3 7d5h
Clusterpedia creates a corresponding PediaClusterLifecycle for each Karmada Cluster, and you can use kubectl describe pediaclusterlifecycle <name> to see the status of the transition between the Karmada Cluster and PediaCluster resources.
The status will be detailed in kubectl get pediaclusterlifecycle in the future.
View the successfully created PediaCluster:
$ kubectl get pediacluster
NAME APISERVER VERSION STATUS
karmada-member1 https://172.18.0.4:6443 v1.23.4 Healthy
The Karmada ClusterImportPolicy requires the Karmada Cluster to be in Push mode and in the Ready state, so the karmada-member1 PediaCluster resource is created only for the member1 cluster.
VCluster ClusterImportPolicy
Create the ClusterImportPolicy for auto-discovery of VCluster clusters:
$ kubectl create -f https://raw.githubusercontent.com/clusterpedia-io/clusterpedia/main/deploy/clusterimportpolicy/vcluster.yaml
$ kubectl get clusterimportpolicy
NAME AGE
vcluster   5h
Note that the VCluster cluster needs to be created in such a way that the Server address of the generated kubeconfig can be accessed by other Pods in the host cluster.
This can be set to a VCluster Service domain name, a Node IP or an Ingress address.
syncer:
  extraArgs:
  - --out-kube-config-server=https://<vcluster name>.<namespace>.svc
  - --tls-san=<vcluster name>.<namespace>.svc,127.0.0.1
Create two VClusters in the default namespace
create the virtual cluster vcluster-1
# vcluster-1.yaml
syncer:
  extraArgs:
  - --out-kube-config-server=https://vcluster-1.default.svc
  - --tls-san=vcluster-1.default.svc,127.0.0.1
$ vcluster create -n default -f vcluster-1.yaml vcluster-1
create the virtual cluster vcluster-2
# vcluster-2.yaml
syncer:
  extraArgs:
  - --out-kube-config-server=https://vcluster-2.default.svc
  - --tls-san=vcluster-2.default.svc,127.0.0.1
$ vcluster create -n default -f vcluster-2.yaml vcluster-2
List all VCluster clusters
$ vcluster list
NAME NAMESPACE STATUS CONNECTED CREATED AGE
caiwei-vcluster caiwei-vcluster Running 2022-08-26 16:10:52 +0800 CST 484h49m6s
vcluster-1 default Running 2022-09-15 20:57:59 +0800 CST 1m59s
vcluster-2 default Running 2022-09-15 20:59:34 +0800 CST 24s
We can use kubectl + Clusterpedia directly to retrieve the resources in any VCluster.
Before using kubectl, you need to generate the cluster shortcut configuration for multi-cluster resource retrieval.
$ kubectl --cluster clusterpedia get po -A
NAMESPACE CLUSTER NAME READY STATUS RESTARTS AGE
default vc-caiwei-vcluster-caiwei-vcluster backend-77f8f45fc8-5ssww 1/1 Running 0 20d
default vc-caiwei-vcluster-caiwei-vcluster backend-77f8f45fc8-j5m4c 1/1 Running 0 20d
default vc-caiwei-vcluster-caiwei-vcluster backend-77f8f45fc8-vjzf6 1/1 Running 0 20d
kube-system vc-default-vcluster-1 coredns-669fb9997d-cxktv 1/1 Running 0 3m40s
kube-system vc-default-vcluster-2 coredns-669fb9997d-g7w8l 1/1 Running 0 2m6s
kube-system vc-caiwei-vcluster-caiwei-vcluster coredns-669fb9997d-x6vc2 1/1 Running 0 20d
$ kubectl --cluster clusterpedia get ns
CLUSTER NAME STATUS AGE
vc-default-vcluster-2 default Active 2m49s
vc-default-vcluster-1 default Active 4m24s
vc-caiwei-vcluster-caiwei-vcluster default Active 20d
vc-default-vcluster-2 kube-node-lease Active 2m49s
vc-default-vcluster-1 kube-node-lease Active 4m24s
vc-caiwei-vcluster-caiwei-vcluster kube-node-lease Active 20d
vc-default-vcluster-2 kube-public Active 2m49s
vc-default-vcluster-1 kube-public Active 4m24s
vc-caiwei-vcluster-caiwei-vcluster kube-public Active 20d
vc-default-vcluster-2 kube-system Active 2m49s
vc-default-vcluster-1 kube-system Active 4m24s
vc-caiwei-vcluster-caiwei-vcluster kube-system Active 20d
Clusterpedia automatically discovers the virtual clusters (VClusters) within the host cluster and creates the corresponding PediaCluster according to the VCluster ClusterImportPolicy, and users can access Clusterpedia directly to retrieve resources.
$ kubectl get pediaclusterlifecycle
NAME AGE
vc-caiwei-vcluster-caiwei-vcluster 20d
vc-default-vcluster-1 5m57s
vc-default-vcluster-2 4m24s
$ kubectl get pediacluster
NAME READY VERSION APISERVER
vc-caiwei-vcluster-caiwei-vcluster True v1.23.5+k3s1 https://caiwei-vcluster.caiwei-vcluster.svc
vc-default-vcluster-1 True v1.23.5+k3s1 https://vcluster-1.default.svc
vc-default-vcluster-2 True v1.23.5+k3s1 https://vcluster-2.default.svc
New ClusterImportPolicy
If the Clusterpedia repository does not maintain a ClusterImportPolicy for a platform, we can create a new ClusterImportPolicy.
A detailed description of the ClusterImportPolicy principles and fields can be found in Cluster Auto Import Policy.
Now assume that there is a multi-cloud platform MCP that uses a custom resource, Cluster, to represent its managed clusters, and stores the cluster authentication information in a Secret with the same name as the cluster:
apiVersion: cluster.mcp.io
kind: Cluster
metadata:
  name: cluster-1
spec:
  apiEndpoint: "https://172.10.10.10:6443"
  authSecretRef:
    namespace: "default"
    name: "cluster-1"
status:
  conditions:
  - type: Ready
    status: True
---
apiVersion: v1
kind: Secret
metadata:
  name: cluster-1
data:
  ca: **cluster ca bundle**
  token: **cluster token**
We define a ClusterImportPolicy resource for the MCP platform and synchronize the pods resource and all resources under the apps group by default:
apiVersion: policy.clusterpedia.io/v1alpha1
kind: ClusterImportPolicy
metadata:
  name: mcp
spec:
  source:
    group: "cluster.mcp.io"
    resource: clusters
    versions: []
  references:
  - group: ""
    resource: secrets
    versions: []
    namespaceTemplate: "{{ .source.spec.authSecretRef.namespace }}"
    nameTemplate: "{{ .source.spec.authSecretRef.name }}"
    key: authSecret
  nameTemplate: "mcp-{{ .source.metadata.name }}"
  template: |
    spec:
      apiserver: "{{ .source.spec.apiEndpoint }}"
      caData: "{{ .references.authSecret.data.ca }}"
      tokenData: "{{ .references.authSecret.data.token }}"
      syncResources:
      - group: ""
        resources:
        - "pods"
      - group: "apps"
        resources:
        - "*"
      syncResourcesRefName: ""
  creationCondition: |
    {{ if ne .source.spec.apiEndpoint "" }}
      {{ range .source.status.conditions }}
        {{ if eq .type "Ready" }}
          {{ if eq .status "True" }} true {{ end }}
        {{ end }}
      {{ end }}
    {{ end }}
- spec.source defines the Cluster resource that needs to be watched
- spec.references defines the resources involved in converting an MCP Cluster to a PediaCluster; currently only secrets resources are used
- spec.nameTemplate renders the name of the PediaCluster resource based on the MCP Cluster resource
- spec.template renders the PediaCluster resource from the MCP Cluster and the resources defined in spec.references, see PediaCluster Template for the rules
- spec.creationCondition determines when a PediaCluster can be created based on the MCP Cluster and the resources defined in spec.references; here it requires the MCP Cluster to be Ready before the PediaCluster is created, see Creation Condition for details
3.3 - Synchronize Cluster Resources
The main function of Clusterpedia is to provide complex search for resources in multiple clusters.
Clusterpedia uses the PediaCluster resource to specify which resources in a cluster need to support complex search, and synchronizes these resources into the Storage Component via the Storage Layer in real time.
# example
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: cluster-example
spec:
  apiserver: "https://10.30.43.43:6443"
  syncResources:
  - group: apps
    resources:
    - deployments
  - group: ""
    resources:
    - pods
    - configmaps
  - group: cert-manager.io
    versions:
    - v1
    resources:
    - certificates
Synchronize built-in resources
In order to manage and view the synchronized resources through PediaCluster, you need to configure the resources in groups:
syncResources:
- group: apps
  versions: []
  resources:
  - deployments
  - daemonsets
For built-in resources, versions is not required.
Clusterpedia automatically selects the appropriate version to synchronize based on the resource versions supported in the cluster.
Also, you do not need to worry about version conversion, because Clusterpedia serves all versions of the built-in resources:
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps" | jq
{
  "kind": "APIGroup",
  "apiVersion": "v1",
  "name": "apps",
  "versions": [
    {
      "groupVersion": "apps/v1",
      "version": "v1"
    },
    {
      "groupVersion": "apps/v1beta2",
      "version": "v1beta2"
    },
    {
      "groupVersion": "apps/v1beta1",
      "version": "v1beta1"
    }
  ],
  "preferredVersion": {
    "groupVersion": "apps/v1",
    "version": "v1"
  }
}
Clusterpedia supports three versions of Deployment: v1, v1beta2, and v1beta1.
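Since Clusterpedia serves every built-in version, a specific version can be requested through the multi-cluster resource path; a sketch (assuming the /apis/clusterpedia.io/v1beta1/resources prefix described in Access the Clusterpedia):
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1beta1/deployments" | jq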
Synchronize custom resources
Compared with built-in resources, custom resources have slightly different configuration on resource versions.
syncResources:
- group: cert-manager.io
  versions: []
  resources:
  - certificates
You can also leave the versions field empty, in which case Clusterpedia synchronizes up to the first three versions in the group's version list.
Take cert-manager.io as an example and get the versions supported by cert-manager.io in an imported cluster:
# Run the command in an imported cluster
kubectl get --raw="/apis/cert-manager.io" | jq
{
  "kind": "APIGroup",
  "apiVersion": "v1",
  "name": "cert-manager.io",
  "versions": [
    {
      "groupVersion": "cert-manager.io/v1",
      "version": "v1"
    },
    {
      "groupVersion": "cert-manager.io/v1beta1",
      "version": "v1beta1"
    },
    {
      "groupVersion": "cert-manager.io/v1alpha3",
      "version": "v1alpha3"
    },
    {
      "groupVersion": "cert-manager.io/v1alpha2",
      "version": "v1alpha2"
    }
  ],
  "preferredVersion": {
    "groupVersion": "cert-manager.io/v1",
    "version": "v1"
  }
}
The imported cluster supports four versions of cert-manager.io: v1, v1beta1, v1alpha3 and v1alpha2.
When syncResources.[group].versions is left blank, Clusterpedia synchronizes the first three versions v1, v1beta1 and v1alpha3 in the order of the APIGroup versions list; v1alpha2 is not synchronized.
Specify a sync version for custom resources
If versions is specified, the resource is synchronized strictly according to the listed versions:
syncResources:
- group: cert-manager.io
  versions:
  - v1beta1
  resources:
  - certificates
The above snippet only synchronizes the v1beta1 version.
Usage notes
Custom resource synchronization does not currently support version conversion; the versions are fixed after synchronization.
If cluster-1 only synchronizes v1beta1 resources, a multi-cluster search request for version v1 will not return the v1beta1 resources of cluster-1.
You are required to understand and handle the different versions across clusters for custom resources.
Sync all custom resources
Custom resource types and versions change with the CRDs. When a CRD is created and we don't want to modify spec.syncResources at the same time, we can set spec.syncAllCustomResources to sync all custom resources:
spec:
  syncAllCustomResources: true
However, note that to use this feature, you need to enable the corresponding Feature Gate in the clustersynchro-manager, as described in Sync All Custom Resources.
Using wildcards to sync resources
Group Wildcard
spec:
  syncResources:
  - group: "apps"
    resources:
    - "*"
Use the Group Wildcard to sync all types of resources under the specified group. In the above example, all resources under the apps group will be synced.
All-resources Wildcard
spec:
  syncResources:
  - group: "*"
    resources:
    - "*"
The All-resources Wildcard allows us to sync the built-in resources, custom resources and aggregated API resources in the imported cluster.
This feature creates a large number of long connections, so use it with caution and enable the corresponding Feature Gate in the clustersynchro-manager, as described in Sync All Resources.
Reference ClusterSyncResources
ClusterSyncResources is used to define cluster resource synchronization configurations that are commonly referenced by multiple PediaClusters; see Public Configuration of Cluster Sync Resources for more information about ClusterSyncResources.
A PediaCluster sets the referenced ClusterSyncResources via spec.syncResourcesRefName:
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: ClusterSyncResources
metadata:
  name: global-base
spec:
  syncResources:
  - group: ""
    resources:
    - pods
  - group: "apps"
    resources:
    - "*"
---
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: demo1
spec:
  syncResourcesRefName: "global-base"
  syncResources:
  - group: ""
    resources:
    - pods
    - configmaps
If a PediaCluster has both spec.syncResourcesRefName and spec.syncResources set, the union of the two is used.
In the above example, Clusterpedia synchronizes the pods and configmaps resources, and all resources under the apps group, in the demo1 cluster.
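The effective configuration for demo1 is therefore equivalent to the following merged syncResources (a sketch of the result, not an object you create yourself):
syncResources:
- group: ""
  resources:
  - pods
  - configmaps
- group: "apps"
  resources:
  - "*"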
View synchronized resources
You can view resources, sync versions, and storage versions by using Status
of the PediaCluster
resource.
In Status, a resource may have both a Sync Version and a Storage Version:
- Sync Version is the resource version Clusterpedia synchronizes from the imported cluster
- Storage Version is the version Clusterpedia stores in the storage layer
status:
syncResources:
- group: apps
resources:
- name: deployments
kind: Deployment
namespaced: true
syncConditions:
- lastTransitionTime: "2022-01-13T04:34:08Z"
status: Syncing
storageVersion: v1
version: v1
In general, the Sync Version of a cluster resource is the same as its Storage Version.
However, if an imported cluster only provides the v1beta1 version of the Deployment resource, the Sync Version is v1beta1 while the Storage Version is v1.
For example, when synchronizing a Deployment of Kubernetes 1.10, the synchronization status is as follows:
status:
syncResources:
- group: apps
resources:
- name: deployments
kind: Deployment
namespaced: true
syncConditions:
- lastTransitionTime: "2022-01-13T04:34:04Z"
status: Syncing
storageVersion: v1
version: v1beta1
For a custom resource, the Sync Version is always the same as the Storage Version, since version conversion is not supported.
Next
After resource synchronization, you can Access the Clusterpedia to Search for Resources
3.4 - Access the Clusterpedia
Clusterpedia has two main components:
- ClusterSynchro Manager: manages the PediaCluster resources in the master cluster, connects to the specified clusters using the PediaCluster authentication information, and synchronizes the corresponding resources in real time.
- APIServer: also watches the PediaCluster resources in the master cluster and provides complex search for the synchronized resources in a Kubernetes OpenAPI-compatible manner.
The Clusterpedia APIServer is also registered with the master cluster's APIServer via the Aggregation API, so we can access Clusterpedia through the same entry point as the master cluster.
Resources and Collection Resource
Clusterpedia APIServer provides two different resources for search under the group clusterpedia.io:
kubectl api-resources | grep clusterpedia.io
# Output:
NAME SHORTNAMES APIVERSION NAMESPACED KIND
collectionresources clusterpedia.io/v1beta1 false CollectionResource
resources clusterpedia.io/v1beta1 false Resources
- Resources: used to search for a specified resource type, compatible with Kubernetes OpenAPI
- CollectionResource: used to search a new kind of resource that aggregates different resource types, so multiple resource types can be found at one time
For concepts and usage of Collection Resource, refer to What is Collection Resource and Search for Collection Resource.
Access the Clusterpedia resources
When searching for a resource of a specific type, you can request it according to the Get
/List
specification of Kubernetes OpenAPI. In this way we can not only use the URL to access Clusterpedia resources, but also directly use kubectl
or client-go
to search for the resources.
Clusterpedia uses URL Path to distinguish whether the request is a multi-cluster resource or a specific cluster:
For multi-cluster resources, prefix the native path with the Resources path directly:
/apis/clusterpedia.io/v1beta1/resources
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/version"
For a specific cluster's resources, append the cluster name to the Resources path:
/apis/clusterpedia.io/v1beta1/resources/clusters/
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/clusters/cluster-1/version"
Whether the path targets multiple clusters or a specific cluster, it can be spliced with a standard Kubernetes Get/List path.
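For example, splicing the multi-cluster prefix onto the native List path for Deployments (the specific-cluster prefix works the same way):
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments"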
Configure the cluster shortcut for kubectl
Although we can use URLs to access Clusterpedia resources, if we want to use kubectl to query more conveniently, we need to configure the kubeconfig cluster.
Clusterpedia provides a simple script to generate cluster config
in the kubeconfig.
curl -sfL https://raw.githubusercontent.com/clusterpedia-io/clusterpedia/v0.7.0/hack/gen-clusterconfigs.sh | sh -
# Output:
Current Context: kubernetes-admin@kubernetes
Current Cluster: kubernetes
Server: https://10.6.100.10:6443
TLS Server Name:
Insecure Skip TLS Verify:
Certificate Authority:
Certificate Authority Data: ***
Cluster "clusterpedia" set.
Cluster "cluster-1" set.
Cluster "cluster-2" set.
Check the script from hack/gen-clusterconfigs.sh
The script prints the current cluster information and configures the PediaCluster
into kubeconfig.
cat ~/.kube/config
# .kube/config
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1Ea3lOREV3TVRNeU5Gb1hEVE14TURreU1qRXdNVE15TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTy9TCnZta1U5bk1uUlRIT3lvK3hjdFRJY0lPYnMzc0F5cTI2djRQYkVtb3ZWM2xPOVQwMTF2cE96S0pyOUFxeVZMRnYKVXFBRHBTakM3WXd3MnZwSld3bDEySlBvUm1xZ1FBSFNkYlJpU3BDTDRudjlvR25VOWI2dllWSy9iRitkUVFCSApnQ1h6NnZoTGY4Wmd2N2tUQ2JBdkFPaE9OSlU3MllYTE8zT0lZQjJva1NCRGFVUjNvNnpwZGVWTkt5V0EyNVA3CkRobk8yTk01QzlpRERqTTRLY2FTa3JPSkJvbUlsSHFZRjRwVXdTTlFvcGVGRVRyZ3ZzcTkwSks2YUJVS0t5ajYKK2NGdjI3S0k4K1ZMUEtaSTE2c25Mbng2RXRTazZtZjJXTHdJZlhyQlgwREsvYXBEQ015R2pEb2dCaGpJSVhoVAp2bjVQZndFWUNsdGZFTEhKSkdVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZJVDhLRHdCbUVvMHladUFEZkhkKzQ1L3ZFYzdNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBT0F5VHQ4S3ZFN0dvREhQT09pdgoyR2I2WWVsUU5KcUMza1dIOXc1NTFNaGZvS3ZiM21VaUV6ZVMwOUNwZUQrTFh5ZnlqQzhZYkJxQjZXSFhNZWMrCnpPdDNPazRYV0FmZVVZTXhOQ1FJblc4cjI4cmZnblErc1NCdHQyeERQN1RZY09oNVZGZkI2K3JtTmFTblZ1NjgKSFFxdlFMNEFXbVhkR09jRWNBRThYdkdiOWhwSjVNckRHdzQ0UTYyOG9YazZ0N01aWTFOMUNQdW9HZ1VmS1N3bgo1MUFWRTFOVVdNV0tEQXhaa2I4bEhvR3VWaDFzWmd3SnJRQjR5clh1cmxGN0Y2bVRlYm4rcDVKM0toT0V4KzlsCjFXdkwwbWkxL1J2bVJKNm11YmtjWUwzN1FJWjI1YXdyaEZMN0Z1ejNRSTFqTTdYMHZET2VUM2VuVUFCZW5SMS8KUnlnPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.6.100.10:6443/apis/clusterpedia.io/v1beta1/resources/clusters/cluster-1
name: cluster-1
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1Ea3lOREV3TVRNeU5Gb1hEVE14TURreU1qRXdNVE15TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTy9TCnZta1U5bk1uUlRIT3lvK3hjdFRJY0lPYnMzc0F5cTI2djRQYkVtb3ZWM2xPOVQwMTF2cE96S0pyOUFxeVZMRnYKVXFBRHBTakM3WXd3MnZwSld3bDEySlBvUm1xZ1FBSFNkYlJpU3BDTDRudjlvR25VOWI2dllWSy9iRitkUVFCSApnQ1h6NnZoTGY4Wmd2N2tUQ2JBdkFPaE9OSlU3MllYTE8zT0lZQjJva1NCRGFVUjNvNnpwZGVWTkt5V0EyNVA3CkRobk8yTk01QzlpRERqTTRLY2FTa3JPSkJvbUlsSHFZRjRwVXdTTlFvcGVGRVRyZ3ZzcTkwSks2YUJVS0t5ajYKK2NGdjI3S0k4K1ZMUEtaSTE2c25Mbng2RXRTazZtZjJXTHdJZlhyQlgwREsvYXBEQ015R2pEb2dCaGpJSVhoVAp2bjVQZndFWUNsdGZFTEhKSkdVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZJVDhLRHdCbUVvMHladUFEZkhkKzQ1L3ZFYzdNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBT0F5VHQ4S3ZFN0dvREhQT09pdgoyR2I2WWVsUU5KcUMza1dIOXc1NTFNaGZvS3ZiM21VaUV6ZVMwOUNwZUQrTFh5ZnlqQzhZYkJxQjZXSFhNZWMrCnpPdDNPazRYV0FmZVVZTXhOQ1FJblc4cjI4cmZnblErc1NCdHQyeERQN1RZY09oNVZGZkI2K3JtTmFTblZ1NjgKSFFxdlFMNEFXbVhkR09jRWNBRThYdkdiOWhwSjVNckRHdzQ0UTYyOG9YazZ0N01aWTFOMUNQdW9HZ1VmS1N3bgo1MUFWRTFOVVdNV0tEQXhaa2I4bEhvR3VWaDFzWmd3SnJRQjR5clh1cmxGN0Y2bVRlYm4rcDVKM0toT0V4KzlsCjFXdkwwbWkxL1J2bVJKNm11YmtjWUwzN1FJWjI1YXdyaEZMN0Z1ejNRSTFqTTdYMHZET2VUM2VuVUFCZW5SMS8KUnlnPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.6.100.10:6443/apis/clusterpedia.io/v1beta1/resources/clusters/cluster-2
name: cluster-2
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1Ea3lOREV3TVRNeU5Gb1hEVE14TURreU1qRXdNVE15TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTy9TCnZta1U5bk1uUlRIT3lvK3hjdFRJY0lPYnMzc0F5cTI2djRQYkVtb3ZWM2xPOVQwMTF2cE96S0pyOUFxeVZMRnYKVXFBRHBTakM3WXd3MnZwSld3bDEySlBvUm1xZ1FBSFNkYlJpU3BDTDRudjlvR25VOWI2dllWSy9iRitkUVFCSApnQ1h6NnZoTGY4Wmd2N2tUQ2JBdkFPaE9OSlU3MllYTE8zT0lZQjJva1NCRGFVUjNvNnpwZGVWTkt5V0EyNVA3CkRobk8yTk01QzlpRERqTTRLY2FTa3JPSkJvbUlsSHFZRjRwVXdTTlFvcGVGRVRyZ3ZzcTkwSks2YUJVS0t5ajYKK2NGdjI3S0k4K1ZMUEtaSTE2c25Mbng2RXRTazZtZjJXTHdJZlhyQlgwREsvYXBEQ015R2pEb2dCaGpJSVhoVAp2bjVQZndFWUNsdGZFTEhKSkdVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZJVDhLRHdCbUVvMHladUFEZkhkKzQ1L3ZFYzdNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBT0F5VHQ4S3ZFN0dvREhQT09pdgoyR2I2WWVsUU5KcUMza1dIOXc1NTFNaGZvS3ZiM21VaUV6ZVMwOUNwZUQrTFh5ZnlqQzhZYkJxQjZXSFhNZWMrCnpPdDNPazRYV0FmZVVZTXhOQ1FJblc4cjI4cmZnblErc1NCdHQyeERQN1RZY09oNVZGZkI2K3JtTmFTblZ1NjgKSFFxdlFMNEFXbVhkR09jRWNBRThYdkdiOWhwSjVNckRHdzQ0UTYyOG9YazZ0N01aWTFOMUNQdW9HZ1VmS1N3bgo1MUFWRTFOVVdNV0tEQXhaa2I4bEhvR3VWaDFzWmd3SnJRQjR5clh1cmxGN0Y2bVRlYm4rcDVKM0toT0V4KzlsCjFXdkwwbWkxL1J2bVJKNm11YmtjWUwzN1FJWjI1YXdyaEZMN0Z1ejNRSTFqTTdYMHZET2VUM2VuVUFCZW5SMS8KUnlnPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.6.100.10:6443/apis/clusterpedia.io/v1beta1/resources
name: clusterpedia
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1Ea3lOREV3TVRNeU5Gb1hEVE14TURreU1qRXdNVE15TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTy9TCnZta1U5bk1uUlRIT3lvK3hjdFRJY0lPYnMzc0F5cTI2djRQYkVtb3ZWM2xPOVQwMTF2cE96S0pyOUFxeVZMRnYKVXFBRHBTakM3WXd3MnZwSld3bDEySlBvUm1xZ1FBSFNkYlJpU3BDTDRudjlvR25VOWI2dllWSy9iRitkUVFCSApnQ1h6NnZoTGY4Wmd2N2tUQ2JBdkFPaE9OSlU3MllYTE8zT0lZQjJva1NCRGFVUjNvNnpwZGVWTkt5V0EyNVA3CkRobk8yTk01QzlpRERqTTRLY2FTa3JPSkJvbUlsSHFZRjRwVXdTTlFvcGVGRVRyZ3ZzcTkwSks2YUJVS0t5ajYKK2NGdjI3S0k4K1ZMUEtaSTE2c25Mbng2RXRTazZtZjJXTHdJZlhyQlgwREsvYXBEQ015R2pEb2dCaGpJSVhoVAp2bjVQZndFWUNsdGZFTEhKSkdVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZJVDhLRHdCbUVvMHladUFEZkhkKzQ1L3ZFYzdNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBT0F5VHQ4S3ZFN0dvREhQT09pdgoyR2I2WWVsUU5KcUMza1dIOXc1NTFNaGZvS3ZiM21VaUV6ZVMwOUNwZUQrTFh5ZnlqQzhZYkJxQjZXSFhNZWMrCnpPdDNPazRYV0FmZVVZTXhOQ1FJblc4cjI4cmZnblErc1NCdHQyeERQN1RZY09oNVZGZkI2K3JtTmFTblZ1NjgKSFFxdlFMNEFXbVhkR09jRWNBRThYdkdiOWhwSjVNckRHdzQ0UTYyOG9YazZ0N01aWTFOMUNQdW9HZ1VmS1N3bgo1MUFWRTFOVVdNV0tEQXhaa2I4bEhvR3VWaDFzWmd3SnJRQjR5clh1cmxGN0Y2bVRlYm4rcDVKM0toT0V4KzlsCjFXdkwwbWkxL1J2bVJKNm11YmtjWUwzN1FJWjI1YXdyaEZMN0Z1ejNRSTFqTTdYMHZET2VUM2VuVUFCZW5SMS8KUnlnPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.6.100.10:6443
name: kubernetes
The script generates a clusterpedia cluster entry for multi-cluster access, plus one cluster entry named after each PediaCluster, reusing the master cluster's address and authentication information to access Clusterpedia.
Compared with the master cluster entry, each only appends the Clusterpedia Resources path.
After multi-cluster kubeconfig is generated, you can use kubectl --cluster
to specify the cluster access
# Supported resources for multi-cluster search
kubectl --cluster clusterpedia api-resources
# Supported resources for cluster-1 search
kubectl --cluster cluster-1 api-resources
What resources are supported for search
We can get the global and specific resource information according to the URL path.
Global resource information is the union of resource types that are synchronized across all clusters
The Discovery API exposed by Clusterpedia is similarly compatible with Kubernetes OpenAPI. You can access it with kubectl, client-go/discovery, client-go/restmapper, or controller-runtime/dynamic-restmapper.
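As a sketch of programmatic access (assuming the current kubeconfig context points at the master cluster, as configured above), we can point client-go's discovery client at the Clusterpedia resources path:
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (path assumed to be ~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Point the client at the Clusterpedia multi-cluster resources path.
	config.Host += "/apis/clusterpedia.io/v1beta1/resources"

	client, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	// List the API groups that are searchable across all imported clusters.
	groups, err := client.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, group := range groups.Groups {
		fmt.Println(group.Name, group.PreferredVersion.GroupVersion)
	}
}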
Use URL to get APIGroupList and APIGroup information
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis" | jq
{
"kind": "APIGroupList",
"apiVersion": "v1",
"groups": [
{
"name": "apps",
"versions": [
{
"groupVersion": "apps/v1",
"version": "v1"
},
{
"groupVersion": "apps/v1beta2",
"version": "v1beta2"
},
{
"groupVersion": "apps/v1beta1",
"version": "v1beta1"
}
],
"preferredVersion": {
"groupVersion": "apps/v1",
"version": "v1"
}
},
{
"name": "cert-manager.io",
"versions": [
{
"groupVersion": "cert-manager.io/v1",
"version": "v1"
}
],
"preferredVersion": {
"groupVersion": "cert-manager.io/v1",
"version": "v1"
}
}
]
}
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps" | jq
{
"kind": "APIGroup",
"apiVersion": "v1",
"name": "apps",
"versions": [
{
"groupVersion": "apps/v1",
"version": "v1"
},
{
"groupVersion": "apps/v1beta2",
"version": "v1beta2"
},
{
"groupVersion": "apps/v1beta1",
"version": "v1beta1"
}
],
"preferredVersion": {
"groupVersion": "apps/v1",
"version": "v1"
}
}
Use kubectl to get api-resources
kubectl --cluster clusterpedia api-resources
# Output:
NAME SHORTNAMES APIVERSION NAMESPACED KIND
configmaps cm v1 true ConfigMap
namespaces ns v1 false Namespace
nodes no v1 false Node
pods po v1 true Pod
secrets v1 true Secret
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
issuers cert-manager.io/v1 true Issuer
3.5 - Search
Clusterpedia supports complex search for multi-cluster resources, specified cluster resources, and Collection Resources.
And these complex search conditions can be passed to Clusterpedia APIServer
in two ways:
- URL Query: pass search conditions directly as query parameters
- Search Labels: to stay compatible with Kubernetes OpenAPI, search conditions can also be set via Label Selector
Both Search Labels and URL Query support the same operators as Label Selector:
- exist, not exist
- =, ==, !=
- in, notin
In addition to conditional search, Clusterpedia also enhances the Field Selector to meet filtering requirements on fields such as metadata.annotations or status.*.
Search by metadata
Supported Operators:
==
,=
,in
.
Role | search label key | url query
---|---|---
Filter cluster names | search.clusterpedia.io/clusters | clusters
Filter namespaces | search.clusterpedia.io/namespaces | namespaces
Filter resource names | search.clusterpedia.io/names | names
Currently, the != and notin operators are not supported; if you have such needs or scenarios, you can discuss them in an issue.
Fuzzy Search
Supported Operators: ==, =, in.
This feature is experimental and only the search label is available for now
Role | search label key | url query
---|---|---
Fuzzy search for resource name | internalstorage.clusterpedia.io/fuzzy-name | -
Search by creation time interval
Supported Operators: ==, =.
The search is based on the creation time interval of the resource, using a left-closed, right-open interval.
Role | search label key | url query
---|---|---
Since | search.clusterpedia.io/since | since
Before | search.clusterpedia.io/before | before
There are four supported formats for creation time:
- Unix Timestamp: for ease of use, the unit (s or ms) is determined by the length of the timestamp: a 10-digit timestamp is in seconds, a 13-digit one in milliseconds
- RFC3339: 2006-01-02T15:04:05Z or 2006-01-02T15:04:05+08:00
- UTC Date: 2006-01-02
- UTC Datetime: 2006-01-02 15:04:05
Because of the limitations of the kube label selector, the search label only supports Unix Timestamp and UTC Date.
All formats are available using the URL query method.
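For example, the formats can be mixed in a single URL query (a sketch; 1649548800 is the 10-digit Unix timestamp for 2022-04-10 UTC):
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?since=2022-03-24&before=1649548800"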
Search by Owner
Supported Operators: ==, =.
Role | search label key | url query
---|---|---
Specified Owner UID | search.clusterpedia.io/owner-uid | ownerUID
Specified Owner Name | search.clusterpedia.io/owner-name | ownerName
Specified Owner Group Resource | search.clusterpedia.io/owner-gr | ownerGR
Specified Owner Seniority | internalstorage.clusterpedia.io/owner-seniority | ownerSeniority
Note that when Owner UID is specified, Owner Name and Owner Group Resource are ignored.
The format of Owner Group Resource is resource.group, for example deployments.apps or nodes.
OrderBy
Supported Operators: =, ==, in.
Role | search label key | url query
---|---|---
Order by fields | search.clusterpedia.io/orderby | orderby
Paging
Supported Operators: =, ==.
Role | search label key | url query
---|---|---
Set page size | search.clusterpedia.io/size | limit
Set page offset | search.clusterpedia.io/offset | continue
Response required with Continue | search.clusterpedia.io/with-continue | withContinue
Response required with remaining count | search.clusterpedia.io/with-remaining-count | withRemainingCount
When you perform operations with kubectl, the page size can only be set via kubectl --chunk-size, because kubectl sets the default limit to 500.
Label Selector
Regardless of kubectl or URL, all Label Selectors that do not contain clusterpedia.io in the Key will be used as Label Selectors to filter resources.
All behaviors are consistent with those provided by Kubernetes.
Role | kubectl | url query
---|---|---
Filter by labels | kubectl -l or kubectl --label-selector | labelSelector
Field Selector
Field Selector is consistent with Label Selector in terms of operators, and Clusterpedia additionally supports: exist, not exist, ==, =, !=, in, notin.
The command parameters for URL and kubectl are the same as for the native Field Selector.
Role | kubectl | url query
---|---|---
Filter by fields | kubectl --field-selector | fieldSelector
For details, refer to Field Selector.
Advanced Search(Custom Conditional Search)
Custom search is a feature provided by the default storage layer to meet users' more flexible and variable search needs.
Feature | search label key | url query
---|---|---
Custom SQL used for filtering | - | whereSQL
Custom search is not supported via search labels; only URL query can be used to pass the custom search SQL.
In addition, this feature is still in the alpha stage, and you need to enable the corresponding Feature Gate in the clusterpedia apiserver; for details, please refer to Raw SQL Query
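A sketch of a custom SQL filter passed via URL query (assuming the Raw SQL Query feature gate is enabled; the SQL operates on the default storage layer's columns, such as cluster):
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?whereSQL=(cluster='cluster-1')"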
CollectionResource URL Query
The following URL Query belongs exclusively to Collection Resource.
Role | url query | example
---|---|---
Get only the metadata of resources | onlyMetadata | onlyMetadata=true
Specify the groups of any collectionresource | groups | groups=apps,cert-manager.io/v1
Specify the resources of any collectionresource | resources | resources=apps/deployments,batch/v1/cronjobs
3.5.1 - Multiple Clusters
Multi-cluster resource search allows us to filter resources in multiple clusters at once based on query criteria, and provides the ability to paginate and sort these resources.
When using kubectl
, we can see what resources are currently available for search
kubectl --cluster clusterpedia api-resources
# Output:
NAME SHORTNAMES APIVERSION NAMESPACED KIND
configmaps cm v1 true ConfigMap
namespaces ns v1 false Namespace
nodes no v1 false Node
pods po v1 true Pod
secrets v1 true Secret
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
issuers cert-manager.io/v1 true Issuer
Clusterpedia provides multi-cluster resource search based on all cluster-synchronized resources, and we can view Sync Cluster Resources to update the resources that need to be synchronized.
Basic Features
Specify Clusters
When searching multiple clusters, all clusters will be retrieved by default, we can also specify a single cluster or a group of clusters
Use Search Label search.clusterpedia.io/clusters
to specify a group of clusters.
kubectl --cluster clusterpedia get deployments -l "search.clusterpedia.io/clusters in (cluster-1,cluster-2)"
# Output:
NAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
kube-system cluster-1 coredns 2/2 2 2 68d
kube-system cluster-2 coredns 2/2 2 2 64d
For specifying a single cluster search, we can also use Search Label to set it up, or see Search in Specified Cluster to specify a cluster using URL Path.
# specifying a single cluster
kubectl --cluster clusterpedia get deployments -l "search.clusterpedia.io/clusters=cluster-1"
# specifying a cluster can also be done with --cluster <cluster name>
kubectl --cluster cluster-1 get deployments
When using URL, use clusters
as URL Query to pass.
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?clusters=cluster-1"
If we specify a single cluster, we can also put the cluster name in the URL Path.
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/clusters/cluster-1/apis/apps/v1/deployments"
Learn more in Specify Cluster Search
Specify Namespaces
We can specify a single namespace or all namespaces as if we were viewing a native Kubernetes resource.
Use -n <namespace> to specify the namespace; it defaults to the default namespace
kubectl --cluster clusterpedia get deployments -n kube-system
# Output:
CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
cluster-1 coredns 2/2 2 2 68d
cluster-2 calico-kube-controllers 1/1 1 1 64d
cluster-2 coredns 2/2 2 2 64d
Use -A
or --all-namespaces
to see the resources under all namespaces for all clusters
kubectl --cluster clusterpedia get deployments -A
# Output:
NAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
kube-system cluster-1 coredns 2/2 2 2 68d
kube-system cluster-2 calico-kube-controllers 1/1 1 1 64d
kube-system cluster-2 coredns 2/2 2 2 64d
default cluster-2 dd-airflow-scheduler 0/1 1 0 54d
default cluster-2 dd-airflow-web 0/1 1 0 54d
The URL Path to get the resources is the same as the native Kubernetes /apis/apps/v1/deployments.
We just need to prefix the path to Clusterpedia Resources with /apis/clusterpedia.io/v1beta1/resources to indicate that it is currently a Clusterpedia request.
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments"
# Specify namespace
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/namespaces/kube-system/deployments"
In addition to specifying a single namespace, we can also specify to search the resources under a group of namespaces.
Use Search Label search.clusterpedia.io/namespaces
to specify a group of namespaces.
Be sure to specify the
-A
flag to avoid kubectl setting default namespace in the path.
kubectl --cluster clusterpedia get deployments -A -l "search.clusterpedia.io/namespaces in (kube-system, default)"
# Output:
NAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
kube-system cluster-1 coredns 2/2 2 2 68d
kube-system cluster-2 calico-kube-controllers 1/1 1 1 64d
kube-system cluster-2 coredns 2/2 2 2 64d
default cluster-2 dd-airflow-scheduler 0/1 1 0 54d
default cluster-2 dd-airflow-web 0/1 1 0 54d
When using URL, we don’t need to use Label Selector to pass parameters, just use URL Query - namespaces
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?namespaces=kube-system,default"
Specify Resource Names
Users can filter resources by a group of resource names
Use Search Label search.clusterpedia.io/names
to specify a group of resource names.
Note: To search for resources under all namespaces, specify the
-A
flag, or use-n
to specify the namespace.
kubectl --cluster clusterpedia get deployments -A -l "search.clusterpedia.io/names=coredns"
# Output:
NAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
kube-system cluster-1 coredns 2/2 2 2 68d
kube-system cluster-2 coredns 2/2 2 2 64d
When using URL, use names
to pass as URL Query, and if you need to specify namespaces, then add namespace to the path.
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?names=kube-coredns,dd-airflow-web"
# search resources with specified names under default namespace
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/namespaces/default/deployments?names=kube-coredns,dd-airflow-web"
When searching from multiple clusters, the data returned is actually encapsulated in a structure similar to DeploymentList
.
If we want to get a single Deployment
then we need to specify the cluster name in the URL path, refer to Get Single Resource
Creation Time Interval
The creation time interval used for the search is left closed and right open, since <= creation time < before.
For more details on the time interval parameters, see Search by Creation Time Interval
Use Search Label - search.clusterpedia.io/since
and search.clusterpedia.io/before
to specify the time interval respectively.
kubectl --cluster clusterpedia get deployments -A -l "search.clusterpedia.io/since=2022-03-24, \
search.clusterpedia.io/before=2022-04-10"
When using URLs, you can use Query - since
and before
to specify the time interval respectively.
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?since=2022-03-24&before=2022-04-10"
Fuzzy Search
Currently supports fuzzy search based on resource names.
Since fuzzy search needs to be discussed further, it is temporarily provided as an experimental feature.
Only the Search Label method is supported, URL Query isn’t supported.
kubectl --cluster clusterpedia get deployments -A -l "internalstorage.clusterpedia.io/fuzzy-name=test"
This filters for deployments whose names contain the string test.
You can use the in
operator to pass multiple fuzzy arguments, so that you can filter out resources that have all strings in their names.
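For example, a sketch with two fuzzy arguments (the strings test and demo are hypothetical):
kubectl --cluster clusterpedia get deployments -A -l "internalstorage.clusterpedia.io/fuzzy-name in (test,demo)"
This returns only the deployments whose names contain both test and demo.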
Field Selector
Native Kubernetes currently only supports field filtering on metadata.name
and metadata.namespace
, and the operators only support =,
!=,
==`, which is very limited.
Clusterpedia provides more powerful features based on the compatibility with existing Field Selector features, and supports the same operators as Label Selector
.
The Field Selector key currently supports three formats:
- Use . to separate fields
kubectl --cluster clusterpedia get pods --field-selector="status.phase=Running"
# we can also add the first character `.`
kubectl --cluster clusterpedia get pods --field-selector=".status.phase notin (Running,Succeeded)"
- Field names wrapped in '' or "" can be used for fields whose names contain illegal characters such as .
kubectl --cluster clusterpedia get deploy \
--field-selector="metadata.annotations['test.io'] in (value1,value2),spec.replica=3"
- Use [] to separate fields; the string inside [] must be wrapped in '' or ""
kubectl --cluster clusterpedia get pods --field-selector="status['phase']!=Running"
Support List Fields
The design of field filtering takes filtering on fields within list elements into account, but more discussion is needed on whether the usage scenarios actually make sense:
issue: support list field filtering
Examples:
kubectl get po --field-selector="spec.containers[].name!=container1"
kubectl get po --field-selector="spec.containers[].name == container1"
kubectl get po --field-selector="spec.containers[1].name in (container1,container2)"
Search by Parent or Ancestor Owner
Searching by Owner is a very useful search function, and Clusterpedia also supports advancing the Owner's seniority to search by grandparents and even higher ancestors.
By searching by Owner, we can query all Pods
under Deployment
at once, without having to query ReplicaSet
in between.
When using the Owner query, we must specify a single cluster, either via Search Label or URL Query, or by specifying the cluster name in the URL Path.
For details on how to search by Owner, refer to Search by Parent or Ancestor Owner within a specified cluster
Paging and Sorting
Paging and sorting are essential features for resource retrieval.
Sorting by multiple fields
Multiple fields can be specified for sorting, and which sorting fields are supported is determined by the storage layer.
The current default storage layer supports sorting by cluster, namespace, name, created_at, and resource_version, in both ascending and descending order, and the fields can be combined in any order.
Sorting using multiple fields
kubectl --cluster clusterpedia get pods -l \
"search.clusterpedia.io/orderby in (cluster, name)"
Because of Label Selector's validation of values, ordering in descending order requires appending _desc to the field.
kubectl --cluster clusterpedia get pods -l \
"search.clusterpedia.io/orderby in (namespace_desc, cluster, name)"
Use URL Query to specify sorting fields
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?orderby=namespace,cluster"
When specifying a field to sort by in descending order, add desc to the end of the field, separated by a space
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?orderby=namespace desc,cluster"
Paging
Native Kubernetes actually supports paging, and fields for paging queries already exist in ListOptions.
Clusterpedia reuses the ListOptions.Limit
and ListOptions.Continue
fields as the size
and offset
for paging.
kubectl --chunk-size is actually used for paged pulls by setting ListOptions.Limit.
The native Kubernetes APIServer carries the continue token for the next list in the returned response, and kubectl keeps listing based on --chunk-size and continue until the continue field in the response is empty.
To keep paged search working in kubectl, Clusterpedia does not return the continue field in the response by default, which prevents kubectl from pulling the full data set in chunks.
kubectl --cluster cluster-1 get pods --chunk-size 10
Note that kubectl sets the limit to the default value of 500 when --chunk-size is not set, which means that search.clusterpedia.io/size does not actually take effect and is only used to pair with search.clusterpedia.io/offset.
URL Query has a higher priority than Search Label
kubectl has no flag for setting continue, so you have to pass it via a Search Label.
kubectl --cluster clusterpedia get pods --chunk-size 10 -l \
"search.clusterpedia.io/offset=10"
To paginate resources, just set the limit
and continue
in the URL.
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?limit=10&continue=5"
Response With Continue
ListMeta.Continue can be used in ListOptions.Continue as the offset for the next request.
As mentioned in the paging feature, Clusterpedia does not include continue in the response by default, to prevent kubectl from pulling the full data set in chunks.
However, if needed, you can request that the response include continue.
When accessing Clusterpedia via URL, the continue in the response can be used as the offset for the next request.
Use with paging
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?withContinue=true&limit=1" | jq
{
"kind": "DeploymentList",
"apiVersion": "apps/v1",
"metadata": {
"continue": "1"
},
"items": [
...
]
}
Setting search.clusterpedia.io/with-continue in kubectl will cause kubectl to pull the full set of resources as paged pulls.
kubectl --cluster clusterpedia get deploy -l \
"search.clusterpedia.io/with-continue=true"
Response With Remaining Count
In some UI cases, it is often necessary to get the total number of resources in the current search condition.
The RemainingItemCount
field exists in the ListMeta of the Kubernetes List response.
By reusing this field, the total number of resources can be returned in a Kubernetes OpenAPI-compatible manner:
offset + len(list.items) + list.metadata.remainingItemCount
When offset is too large,
remainingItemCount
may be negative, ensuring that the total number of resources can always be calculated.
Set withRemainingCount
in the URL Query to request that the response include the number of remaining resources.
Use with paging
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?withRemainingCount&limit=1" | jq
{
"kind": "DeploymentList",
"apiVersion": "apps/v1",
"metadata": {
"remainingItemCount": 23
},
"items": [
...
]
}
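In the example above, no offset was set and limit=1, so the total is 0 + 1 + 23 = 24 Deployments.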
This feature is only available via URL query.
3.5.2 - Specified a Cluster
In addition to searching in multiple clusters, Clusterpedia can also search for resources in a specified cluster.
- Using Search Label or URL Query to specify a single cluster is no different in performance from specifying the cluster in the URL Path
- This topic focuses on specifying a cluster in the URL Path
Before using kubectl in the way of specifying a cluster, you need to configure the cluster shortcut for kubectl
kubectl --cluster cluster-1 get deployments -n kube-system
# Output:
NAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
kube-system cluster-1 coredns 2/2 2 2 68d
Specify a cluster by using the cluster name in the URL path
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/clusters/cluster-1/apis/apps/v1/deployments"
You can also specify a single cluster by URL Query
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?clusters=cluster-1"
The functionality supported when searching in a specified cluster is basically the same as for multi-cluster search.
Searching by Owner is more convenient in a specified cluster, and when getting a single resource you can only specify the cluster in the URL Path.
Search by Parent or Ancestor Owner
To query by Owner, you shall specify a single cluster. You can use Search Label or URL Query to specify, or specify the cluster name in the URL Path.
Searching for resources based on ancestor owners can be done with Owner UID
or Owner Name
, and with Owner Seniority
for Owner seniority advancement.
For the specific query parameters, you can refer to Search by Owner
In this way, you can directly search for the Pods
corresponding to a Deployment without having to query which ReplicaSet
belong to that Deployment
.
Use the Owner UID
Owner Name
and Owner Group Resource
will be ignored after Owner UID
is specified.
Firstly use kubectl to get Deployment
UID
kubectl --cluster cluster-1 get deploy fake-deploy -o jsonpath="{.metadata.uid}"
#Output:
151ae265-28fe-4734-850e-b641266cd5da
Getting the UID with kubectl may be tricky, but in UI scenarios it is usually easy to check metadata.uid
Use owner-uid to specify the Owner UID, and use owner-seniority to promote the Owner's seniority.
owner-seniority is 0 by default, which means the Owner is the parent; setting it to 1 promotes the Owner to grandparent
kubectl --cluster cluster-1 get pods -l \
"search.clusterpedia.io/owner-uid=151ae265-28fe-4734-850e-b641266cd5da,\
search.clusterpedia.io/owner-seniority=1"
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/clusters/cluster-1/api/v1/namespaces/default/pods?ownerUID=151ae265-28fe-4734-850e-b641266cd5da&ownerSeniority=1"
Use the Owner Name
If the Owner UID is not known in advance, using Owner UID is cumbersome.
We can instead specify the Owner by its name, and we can also specify Owner Group Resource to restrict the Owner's Group Resource.
Again, let’s take the example of getting the corresponding Pods under Deployment.
kubectl --cluster cluster-1 get pods -l \
"search.clusterpedia.io/owner-name=deploy-1,\
search.clusterpedia.io/owner-seniority=1"
In addition, to avoid multiple types of owner resources in some cases, we can use the Owner Group Resource
to restrict the type of owner.
kubectl --cluster cluster-1 get pods -l \
"search.clusterpedia.io/owner-name=deploy-1,\
search.clusterpedia.io/owner-gr=deployments.apps,\
search.clusterpedia.io/owner-seniority=1"
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/clusters/cluster-1/api/v1/namespaces/default/pods?ownerName=deploy-1&ownerSeniority=1"
Get a single resource
When we want to get a single resource by name, we must pass the cluster name in the URL Path, just like a namespace.
If a resource name is passed in multi-cluster mode, an error is returned
kubectl --cluster cluster-1 get deploy fake-deploy
# Output:
CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
cluster-1 fake-deploy 1/1 1 1 35d
Certainly, you can use a Search Label to specify a resource name with kubectl.
However, if you inspect the returned data with -o yaml or similar, it differs from what kubectl --cluster <cluster name> returns.
# The actual server returns the DeploymentList resource, which is replaced with a list by kubectl
kubectl --cluster clusterpedia get deploy -l
"search.clusterpedia.io/clusters=cluster-1,\
search.clusterpedia.io/names=fake-deploy" -o yaml
# Output:
apiVersion: v1
items:
- ...
kind: List
metadata:
resourceVersion: ""
selfLink: ""
The actual returned resource is still a KindList, while kubectl --cluster <cluster name> returns the specific Kind.
kubectl --cluster cluster-1 get deploy fake-deploy -o yaml
# Output:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
shadow.clusterpedia.io/cluster-name: cluster-1
creationTimestamp: "2021-12-16T02:26:29Z"
generation: 2
name: fake-deploy
namespace: default
resourceVersion: "38085769"
uid: 151ae265-28fe-4734-850e-b641266cd5da
spec:
...
status:
...
The URL to get a specified resource can be divided into three parts:
- Prefix to search for resource: /apis/clusterpedia.io/v1beta1/resources
- Specified cluster name: /clusters/< cluster name >
- Resource name for Kubernetes API: Path /apis/apps/v1/namespaces/< namespace >/deployments/< resource name >
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/clusters/cluster-1/apis/apps/v1/namespaces/default/deployments/fake-deploy"
3.5.3 - Collection Resource
For collection resource, refer to What is Collection Resource
Due to kubectl limitations, we cannot pass search conditions via Label Selector or other methods, so it is recommended to search for Collection Resource by URL.
When requesting a Collection Resource, you should use paging because the number of resources may be very large.
kubectl get --raw="/apis/clusterpedia.io/v1beta1/collectionresources/workloads?limit=1" | jq
# Output
{
"kind": "CollectionResource",
"apiVersion": "clusterpedia.io/v1beta1",
"metadata": {
"name": "workloads",
"creationTimestamp": null
},
"resourceTypes": [
{
"group": "apps",
"version": "v1",
"kind": "Deployment",
"resource": "deployments"
},
{
"group": "apps",
"version": "v1",
"resource": "daemonsets"
},
{
"group": "apps",
"version": "v1",
"resource": "statefulsets"
}
],
"items": [
{
"apiVersion": "apps/v1",
"kind": "Deployment",
...
}
]
}
The complex search for Collection Resource is basically the same as multi-cluster resource search; only some operations are not supported:
- Search by Owner is not supported. If you need to search a specific resource type by Owner, refer to multi-cluster resource search and specified cluster search
- Getting a specific single resource within a Collection Resource is not supported, because a specific resource requires a cluster and type to be specified. In this case, use Get a single resource.
It is not easy to search for Collection Resource with kubectl, but you can give it a try.
kubectl cannot pass paging and other search conditions, and may pull all Collection Resources at one time. It is not recommended to view Collection Resource with kubectl if a large number of clusters are imported or a cluster has many deployments, daemonsets, and statefulsets resources.
kubectl get collectionresources workloads
# Output
CLUSTER GROUP VERSION KIND NAMESPACE NAME AGE
cluster-1 apps v1 DaemonSet kube-system vsphere-cloud-controller-manager 63d
cluster-2 apps v1 Deployment kube-system calico-kube-controllers 109d
cluster-2 apps v1 Deployment kube-system coredns-coredns 109d
...
Search for Collection Resource
by using URL
Only Metadata
When we retrieve a CollectionResource, the full resource content is returned by default, but sometimes we only need the resource metadata.
We can use the URL query onlyMetadata to retrieve only the resource metadata.
$ kubectl get --raw "/apis/clusterpedia.io/v1beta1/collectionresources/workloads?onlyMetadata=true&limit=1" | jq
{
"kind": "CollectionResource",
"apiVersion": "clusterpedia.io/v1beta1",
"metadata": {
"name": "workloads",
"creationTimestamp": null
},
"resourceTypes": [
{
"group": "apps",
"version": "v1",
"kind": "Deployment",
"resource": "deployments"
}
],
"items": [
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"deployment.kubernetes.io/revision": "1",
"shadow.clusterpedia.io/cluster-name": "cluster-example"
},
"creationTimestamp": "2021-09-24T10:19:19Z",
"generation": 1,
"labels": {
"k8s-app": "tigera-operator"
},
"name": "tigera-operator",
"namespace": "tigera-operator",
"resourceVersion": "125073610",
"uid": "992f9d53-37cb-4184-a004-15b278b11f79"
}
}
]
}
Any CollectionResource
any collectionresource is a way for users to freely combine resource types into a custom collection resource.
clusterpedia supports a special CollectionResource named any.
$ kubectl get collectionresources
NAME RESOURCES
any *
When retrieving any collectionresource
, we must specify a set of resource types by url query, so we can only retrieve any collectionresource
via clusterpedia-io/client-go or URL.
$ kubectl get collectionresources any
Error from server (BadRequest): url query - `groups` or `resources` is required
any collectionresource supports two URL queries: groups and resources.
groups and resources can be specified together; currently they are combined without de-duplication, so the caller is responsible for de-duplicating. There are some future optimizations planned for this behavior.
$ kubectl get --raw "/apis/clusterpedia.io/v1beta1/collectionresources/any?onlyMetadata=true&groups=apps&resources=batch/jobs,batch/cronjobs" | jq
groups
groups can specify the group and version of a set of resources, with multiple group versions separated by commas.
The group version format is <group>/<version>; the version can be omitted as <group>, and for resources under /api you can just use the empty string.
Example: groups=apps/v1,,batch specifies three groups: apps/v1, core, and batch.
resources
resources can specify concrete resource types, with multiple resource types separated by commas.
The resource type format is <group>/<version>/<resource>; the version can also be omitted as <group>/<resource>.
Example: resources=apps/v1/deployments,apps/daemonsets,/pods specifies three resources: deployments, daemonsets, and pods.
4 - Advanced Features
4.1 - Multi-Cluster kube-state-metrics
Clusterpedia provides kube-state-metrics features for multi-cluster resources at a fraction of the cost, providing the same metrics information as kube-state-metrics, but with the addition of a cluster name label.
kube_deployment_created{cluster="test-14",namespace="clusterpedia-system",deployment="clusterpedia-apiserver"} 1.676557618e+09
Since this feature is experimental, you will install Clusterpedia the standard way first.
Once Clusterpedia is installed, we need to update the Helm release to enable the multi-cluster kube-state-metrics feature.
The kube-state-metrics feature has been merged into the main branch and will be included in v0.8.0 in the future. For now, the feature is available in the ghcr.io/iceber/clusterpedia/clustersynchro-manager:v0.8.0-ksm.1 image
Enable Multi-Cluster kube-state-metrics
Ensure Clusterpedia Chart Version >= v1.8.0
$ helm repo update clusterpedia
$ helm search repo clusterpedia
NAME CHART VERSION APP VERSION DESCRIPTION
clusterpedia/clusterpedia 1.8.0 v0.7.0 A Helm chart for Kubernetes
Get the current chart values
$ helm -n clusterpedia-system get values clusterpedia > values.yaml
Create patch values
$ echo "clustersynchroManager:
image:
repository: iceber/clusterpedia/clustersynchro-manager
tag: v0.8.0-ksm.1
kubeStateMetrics:
enabled: true
" > patch.yaml
Update Clusterpedia to enable multi-cluster kube-state-metrics.
$ helm -n clusterpedia-system upgrade -f values.yaml -f patch.yaml clusterpedia clusterpedia/clusterpedia
Get clusterpedia kube-state-metrics services
$ kubectl -n clusterpedia-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
clusterpedia-apiserver ClusterIP 10.97.129.238 <none> 443/TCP 150d
clusterpedia-clustersynchro-manager-metrics ClusterIP 10.108.129.32 <none> 8081/TCP 51m
clusterpedia-kube-state-metrics ClusterIP 10.108.130.62 <none> 8080/TCP 43m
clusterpedia-mysql ClusterIP 10.102.38.225 <none> 3306/TCP 150d
clusterpedia-mysql-headless ClusterIP None <none> 3306/TCP 150d
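To take a quick look at the exposed metrics, you can port-forward the kube-state-metrics service and scrape it (a sketch, assuming the standard /metrics endpoint):
$ kubectl -n clusterpedia-system port-forward svc/clusterpedia-kube-state-metrics 8080:8080
$ curl -s http://localhost:8080/metrics | grep kube_deployment_created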
For more information on importing clusters and using clusterpedia: Import Clusters
Future
Multi-cluster kube-state-metrics is a very interesting feature that removes the need to install a single-cluster version of kube-state-metrics in each cluster, and it handles the issue of differing resource versions very well.
There is a lot of discussion about this feature here, feel free to comment!
- The resource state metrics provide different metrics paths depending on the cluster
- Support remote write to send resource metrics data
- Support for filtering exposed resource state metrics based on namespace
- Support for filtering exposed resource state metrics by cluster labels/annotations
Also welcome to create a new issue
4.2 - Custom Storage Layer Plugin
Clusterpedia can use different storage components such as MySQL/PostgreSQL, Memory, Elasticsearch through the storage layer.
Currently, Clusterpedia has two built-in storage layers:
- The internalstorage storage layer for accessing relational databases.
- The memory storage layer based on Memory
Although Clusterpedia already supports relational databases and memory by default, user requirements are often variable and complex, and a fixed storage layer may not match the requirements of different users for storage components and performance, so Clusterpedia supports access to user-implemented storage layers by means of plugins, which we call custom storage layer plugins
, or storage plugins
for short.
With the storage plugin
, users can do the following things:
- Use any storage component, such as Elasticsearch, RedisGraph, Etcd, or even a MessageQueue
- Optimize the storage format and query performance of resources for their business
- Implement more advanced retrieval features on top of the storage components
Clusterpedia also maintains a number of storage plugins
that users can choose from, depending on your needs:
- Sample Storage: Example of a storage plugin that can connect to relational databases
- Elasticsearch Storage: Storage plugin for connecting to Elasticsearch
Storage plugins
are loaded by Clusterpedia components via Go Plugin, which provides very flexible plug-in access without any performance loss compared to RPC or other methods.
The performance impact of the Go Plugin can be found at https://github.com/uberswe/goplugins
As we all know, Go Plugin is troublesome to develop and use, but Clusterpedia cleverly optimizes the use and development of storage plugins through some mechanisms, and provides clusterpedia-io/sample-storage plugin as a reference.
Here we take clusterpedia-io/sample-storage as an example to introduce.
Use the custom storage layer plugin
The use of the storage plugin can be broadly divided into three ways:
- Run the Clusterpedia component binaries and load the storage plugins locally
- Use the base Chart, clusterpedia-core, to set up the storage plugin image and configure the storage layer
- Use a Clusterpedia Advanced Chart, which hides the storage plugin settings entirely
By running the component binaries locally, we can get a better understanding of how the Clusterpedia components load and run the storage plugins.
In practice, users can use the storage plugin images that are already built, or deploy a Clusterpedia Advanced Chart directly
Local Run
Building Plugins
A storage plugin is actually a dynamic link library with a .so suffix.
Clusterpedia components can load storage plugins
at startup and use specific storage plugins
depending on the specified storage layer name.
Let’s take clusterpedia-io/sample-storage as an example and build a storage plugin binary
$ git clone --recursive https://github.com/clusterpedia-io/sample-storage.git && cd sample-storage
$ make build-plugin
Use the file command to view storage plugin information
$ file ./plugins/sample-storage-layer.so
./plugins/sample-storage-layer.so: Mach-O 64-bit dynamically linked shared library x86_64
Clusterpedia's ClusterSynchro Manager and APIServer components can load and use storage plugins via an environment variable and command flags:
- STORAGE_PLUGINS=<plugins dir>: environment variable that sets the directory containing the plugins; Clusterpedia loads all plugins in that directory into the component
- --storage-name=<storage name>: command flag that sets the storage layer name
- --storage-config=<storage config path>: command flag that sets the storage layer configuration
Building components
To ensure consistent dependencies when running locally, clusterpedia components need to be built locally with the make build-components
command
For more information on building storage plugins and Clusterpedia components see Developing custom storage layer plugins
$ # cd sample-storage
$ make build-components
$ ls -al ./bin
-rwxr-xr-x 1 icebergu staff 90707488 11 7 11:15 apiserver
-rwxr-xr-x 1 icebergu staff 91896016 11 7 11:16 binding-apiserver
-rwxr-xr-x 1 icebergu staff 82769728 11 7 11:16 clustersynchro-manager
-rwxr-xr-x 1 icebergu staff 45682000 11 7 11:17 controller-manager
Storage plugin runtime configuration file
Before running Clusterpedia, you also need to prepare the runtime configuration file for the storage plugin; sample-storage provides an example configuration, example-config.yaml.
When running the Clusterpedia components, specify the runtime configuration file via --storage-config=./config.yaml
# example-config.yaml
type: mysql
host: 127.0.0.1
port: "3306"
user: root
password: dangerous0
database: clusterpedia
log:
stdout: true
colorful: true
slowThreshold: 100ms
The user needs to configure the runtime configuration according to the selected storage layer
Run clusterpedia clustersynchro manager
$ STORAGE_PLUGINS=./plugins ./bin/clustersynchro-manager --kubeconfig ~/.kube/config \
--storage-name=sample-storage-layer \
--storage-config ./config.yaml
Run clusterpedia apiserver
You can choose not to generate your own certificates; in that case, run the apiserver without the --client-ca-file ca.crt flag.
$ openssl req -nodes -new -x509 -keyout ca.key -out ca.crt
$ openssl req -out client.csr -new -newkey rsa:4096 -nodes -keyout client.key -subj "/CN=development/O=system:masters"
$ openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -sha256 -out client.crt
run apiserver
$ STORAGE_PLUGINS=./plugins ./bin/apiserver --client-ca-file ca.crt --secure-port 8443 \
--kubeconfig ~/.kube/config \
--authentication-kubeconfig ~/.kube/config \
--authorization-kubeconfig ~/.kube/config \
--storage-name=sample-storage-layer \
--storage-config ./config.yaml
Storage Plugin Image + Helm Charts
Clusterpeida already provides several Charts:
- charts/clusterpedia is a Chart using the internalstorage storage layer, which can be optionally deployed with MySQL or PostgreSQL, but does not support setting up storage plugins
- charts/clusterpedia-core supports configuration of any storage layer Chart, usually used as a child Chart
- charts/clusterpedia-mysql is an advanced Chart using MySQL as the storage component, based on clusterpedia-core implementation
- charts/clusterpedia-postgresql is an advanced Chart using PostgreSQL as the storage component, based on the clusterpedia-core implementation
- charts/clusterpedia-elasticsearch uses Elasticsearch as the advanced Chart for the storage component, based on the clusterpedia-core implementation
If you don’t need a storage plugin
and the internalstorage storage layer and relational database are sufficient, you can use charts/clusterpedia directly, out of the box.
clusterpedia-mysql, clusterpedia-postgresql, and clusterpedia-elasticsearch are advanced Charts based on charts/clusterpedia-core; by pre-configuring clusterpedia-core's storage plugin image and storage layer settings, they shield users from the complex concept of storage plugins and work out of the box.
Although we usually use Advanced Charts directly in our usage, knowing how to use charts/clusterpedia-core to set up storage plugin images
gives us a better understanding of how plugin images work.
clusterpedia-core
Let’s take clusterpedia-io/sample-storage as an example and deploy Clusterpedia using the ghcr.io/clusterpedia-io/clusterpedia/sample-storage-layer plugin image.
The clusterpedia-core does not involve the deployment and installation of any storage components, so users need to configure the storage layer according to the deployed storage components
# myvalues.yaml
storage:
name: "sample-storage-layer"
image:
registry: ghcr.io
repository: clusterpedia-io/clusterpedia/sample-storage-layer
tag: v0.0.0-v0.6.0
config:
type: "mysql"
host: "10.111.94.196"
port: 3306
user: root
password: dangerous0
database: clusterpedia
storage.name sets the storage layer name provided by the storage plugin image.
clusterpedia-core copies the storage plugins from the plugin image defined by storage.image into the component's plugin directory.
# helm template clusterpedia -n clusterpedia-system -f myvalues.yaml ./clusterpedia-core
...
initContainers:
- name: copy-storage-plugin
image: ghcr.io/clusterpedia-io/clusterpedia/sample-storage-layer:v0.0.0-v0.6.0
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -ec
- cp /plugins/* /var/lib/clusterpedia/plugins/
volumeMounts:
- name: storage-plugins
mountPath: /var/lib/clusterpedia/plugins
containers:
- name: clusterpedia-clusterpedia-core-apiserver
image: ghcr.io/clusterpedia-io/clusterpedia/apiserver:v0.6.0
imagePullPolicy: IfNotPresent
command:
- /usr/local/bin/apiserver
- --secure-port=443
- --storage-name=sample-storage-layer
- --storage-config=/etc/clusterpedia/storage/config.yaml
env:
- name: STORAGE_PLUGINS
value: /var/lib/clusterpedia/plugins
volumeMounts:
- name: storage-config
mountPath: /etc/clusterpedia/storage
readOnly: true
- name: storage-plugins
mountPath: /var/lib/clusterpedia/plugins
readOnly: true
volumes:
- name: storage-config
configMap:
name: clusterpedia-clusterpedia-core-sample-storage-layer-config
- name: storage-plugins
emptyDir: {}
...
In addition to using storage.config to define the storage layer's runtime configuration config.yaml, you can also use an existing ConfigMap and Secret.
# myvalues.yaml
storage:
name: "sample-storage-layer"
image:
registry: ghcr.io
repository: clusterpedia-io/clusterpedia/sample-storage-layer
tag: v0.0.0-v0.6.0
configMap: "sample-storage-config"
componentEnv:
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: "sample-storage-password"
key: password
clusterpedia-core keeps its storage layer configuration very flexible so that it can be referenced as a child Chart by other advanced Charts, but in practice users do not necessarily need to use clusterpedia-core directly; just use the advanced Chart for the specific storage component, such as clusterpedia-mysql or clusterpedia-postgresql.
In the next section we will also describe how to implement Advanced Charts for specific storage components based on clusterpedia-core.
Developing custom storage layer plugins
clusterpedia-io/sample-storage is not only a storage plugin example, but also a template repository where the project structure and most of the build tools can be used in other storage plugin projects
We first clone sample-storage, or generate a new storage plugin repository based on sample-storage
$ git clone --recursive https://github.com/clusterpedia-io/sample-storage.git && cd sample-storage
Note that when pulling the repository, you need to specify --recursive
to pull the sub-repository
$ ls -al
...
-rw-r--r-- 1 icebergu staff 260 12 13 15:14 Dockerfile
-rw-r--r-- 1 icebergu staff 1836 12 13 16:03 Makefile
-rw-r--r-- 1 icebergu staff 2219 11 23 10:25 README.md
drwxr-xr-x 32 icebergu staff 1024 11 23 10:30 clusterpedia
-rw-r--r-- 1 icebergu staff 156 11 23 10:25 example-config.yaml
-rw-r--r-- 1 icebergu staff 2376 12 13 15:33 go.mod
-rw-r--r-- 1 icebergu staff 46109 12 13 15:33 go.sum
-rw-r--r-- 1 icebergu staff 139 11 23 10:25 main.go
drwxr-xr-x 16 icebergu staff 512 12 13 15:33 storage
drwxr-xr-x 9 icebergu staff 288 12 13 15:33 vendor
The project structure falls into three categories:
- main.go and the storage package: the core logic of the custom storage plugin
- the clusterpedia local repository: used for local development and testing
- the Dockerfile and Makefile: used for project build and image packaging, and applicable to any storage plugin project
core logic
main.go is the main storage plugin file, mainly used to call the registration function in the storage package – RegisterStorageLayer
.
package main

import (
	plugin "github.com/clusterpedia-io/sample-storage-layer/storage"
)

func init() {
	plugin.RegisterStorageLayer()
}
The storage package contains the core logic of the storage plugin:
- Implementing the clusterpedia storage layer interface storage.StorageFactory
import (
	"gorm.io/gorm"

	"github.com/clusterpedia-io/clusterpedia/pkg/storage"
)

type StorageFactory struct {
	db *gorm.DB
}

var _ storage.StorageFactory = &StorageFactory{}
- The NewStorageFactory function returns an instance of storage.StorageFactory
func NewStorageFactory(configPath string) (storage.StorageFactory, error)
- The RegisterStorageLayer function registers the NewStorageFactory with clusterpedia
const StorageName = "sample-storage-layer"

func RegisterStorageLayer() {
	storage.RegisterStorageFactoryFunc(StorageName, NewStorageFactory)
}
The registered NewStorageFactory is automatically called to create an instance of storage.StorageFactory when the user specifies the storage layer with --storage-name.
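The NewStorageFactory registered above typically parses the config file passed via --storage-config and constructs the factory. Below is a minimal sketch, assuming a gorm-backed plugin and a hypothetical dsn field in the configuration; the actual sample-storage code may differ.
package storage

import (
	"os"

	"gopkg.in/yaml.v2"
	"gorm.io/driver/mysql"
	"gorm.io/gorm"

	"github.com/clusterpedia-io/clusterpedia/pkg/storage"
)

// Config is a hypothetical runtime configuration; the real schema is
// whatever your plugin decides to read from --storage-config.
type Config struct {
	DSN string `yaml:"dsn"`
}

func NewStorageFactory(configPath string) (storage.StorageFactory, error) {
	data, err := os.ReadFile(configPath)
	if err != nil {
		return nil, err
	}

	var cfg Config
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}

	// open the database connection used by the StorageFactory type shown above
	db, err := gorm.Open(mysql.Open(cfg.DSN), &gorm.Config{})
	if err != nil {
		return nil, err
	}
	return &StorageFactory{db: db}, nil
}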
Local development run
To facilitate development and testing, we have added the clusterpedia repository as a git submodule of the storage plugin repository
$ git submodule status
+4608c8d13101d82960525dfe39f51e4f64ed49b3 clusterpedia (v0.6.0)
and replaced the clusterpedia repository in go.mod with the local submodule
# go.mod
replace (
	github.com/clusterpedia-io/api => ./clusterpedia/staging/src/github.com/clusterpedia-io/api
	github.com/clusterpedia-io/clusterpedia => ./clusterpedia
)
The local clusterpedia submodule will not be used when building the storage plugin image
Build storage plugin
The build of the storage plugin is divided into two parts: building the components in the clusterpedia repository and building the storage plugin itself.
$ make build-components
OUTPUT_DIR=/Users/icebergu/workspace/clusterpedia/sample-storage-layer ON_PLUGINS=true \
/Library/Developer/CommandLineTools/usr/bin/make -C clusterpedia all
hack/builder.sh apiserver
hack/builder.sh binding-apiserver
hack/builder.sh clustersynchro-manager
hack/builder-nocgo.sh controller-manager
$ ls -al ./bin
-rwxr-xr-x 1 icebergu staff 90724968 12 15 09:51 apiserver
-rwxr-xr-x 1 icebergu staff 91936472 12 15 09:52 binding-apiserver
-rwxr-xr-x 1 icebergu staff 82826584 12 15 09:52 clustersynchro-manager
-rwxr-xr-x 1 icebergu staff 45677904 12 15 09:52 controller-manager
The make build-components
command will call make all
from the clusterpedia repository and output the result to the ./bin directory of the storage plugin project.
If the clusterpedia submodule has not changed, you only need to build the components once
Build the storage plugin
$ make build-plugin
CLUSTERPEDIA_REPO=/Users/icebergu/workspace/clusterpedia/sample-storage/clusterpedia \
clusterpedia/hack/builder.sh plugins sample-storage-layer.so
$ ls -al ./plugins
-rw-r--r-- 1 icebergu staff 53354352 12 15 09:47 sample-storage-layer.so
Building the storage plugin locally also requires using the builder.sh script of the clusterpedia repository to build the plugin binary.
For running storage plugins, see Running storage plugins locally
Storage plugin image
As mentioned above, storage plugins are shared with the clusterpedia components via images in a real deployment.
The Makefile provides make image-plugin
to build images and make push-images
to publish them.
Building images
To build a plugin image, we need to use the clusterpedia/builder image as the base image to build the plugin, and the builder image needs to be the same version as the clusterpedia component that uses the plugin
$ BUILDER_IMAGE=ghcr.io/clusterpedia-io/clusterpedia/builder:v0.6.0 make image-plugin
Clusterpedia maintains builder images for published versions, and users can also use their own locally built builder image
Build the builder image locally
$ cd clusterpedia
$ make image-builder
docker buildx build \
-t "ghcr.io/clusterpedia-io/clusterpedia"/builder-amd64:4608c8d13101d82960525dfe39f51e4f64ed49b3 \
--platform=linux/amd64 \
--load \
-f builder.dockerfile . ; \
The tag format for storage plugin images is <storage-version>-<clusterpedia-version/commit>, for example: ghcr.io/clusterpedia-io/clusterpedia/sample-storage-layer:v0.0.0-v0.6.0
The storage plugin image can be used with the <clusterpedia-version/commit> version of Clusterpedia
Push images
make image-plugin builds the storage plugin image based on the manually set builder image, while make push-images automatically builds and pushes images for all compatible versions and architectures
# Makefile
CLUSTERPEDIA_VERSIONS = v0.6.0-beta.1 v0.6.0
RELEASE_ARCHS ?= amd64 arm64
Once the image is built, the storage plugin image can be used via clusterpedia-core
Advanced Chart based on clusterpedia-core
After implementing our own storage plugin, we still need to provide an Advanced Chart based on the clusterpedia-core Chart to make it easier to use.
Advanced Chart needs to provide the following capabilities:
- Set the default storage plugin image
- Set the storage layer name
- Support dynamic setting of the runtime configuration of the storage layer
- Provide configuration and installation of storage components
Create a new Chart using the sample-storage storage plugin – clusterpedia-sample-mysql, which will use mysql as the storage component.
# Chart.yaml
dependencies:
- name: mysql
  repository: https://charts.bitnami.com/bitnami
  version: 9.x.x
- name: common
  repository: https://charts.bitnami.com/bitnami
  version: 1.x.x
- name: clusterpedia-core
  repository: https://clusterpedia-io.github.io/clusterpedia-helm/
  version: 0.1.x
We need to override the storage layer related settings in clusterpedia-core, which provides both values.yaml and dynamic naming templates to set up the storage plugin and storage layer information.
We override the static settings of the storage layer in values.yaml, such as plugin image and storage layer name
# values.yaml
clusterpedia-core:
  storage:
    name: "sample-storage-layer"
    image:
      registry: "ghcr.io"
      repository: "clusterpedia-io/clusterpedia/sample-storage-layer"
      tag: "v0.0.0-v0.6.0"
The config.yaml and some environment variables of the custom storage layer generally need to reference a ConfigMap and a Secret, and the names of these resources change dynamically with the Chart release name, so we need to use dynamic naming templates to set them.
clusterpedia-core provides three overriding naming templates
# clusterpedia-core/templates/_storage_override.yaml
{{- define "clusterpedia.storage.override.initContainers" -}}
{{- end -}}
{{- define "clusterpedia.storage.override.configmap.name" -}}
{{- end -}}
{{- define "clusterpedia.storage.override.componentEnv" -}}
{{- end -}}
Each of them can be used to set the following:
- Init containers to run before the apiserver and clustersynchro manager start
- The name of the ConfigMap that stores the config.yaml configuration the plugin needs to read
- Environment variables to be used by the storage plugin
Let’s take clusterpedia-mysql as an example and see how it is set
# _storage_override.yaml
{{- define "clusterpedia.storage.override.initContainers" -}}
- name: ensure-database
image: docker.io/bitnami/mysql:8.0.28-debian-10-r23
command:
- /bin/sh
- -ec
- |
if [ ${CREARE_DATABASE} = "ture" ]; then
until mysql -u${STORAGE_USER} -p${DB_PASSWORD} --host=${STORAGE_HOST} --port=${STORAGE_PORT} -e 'CREATE DATABASE IF NOT EXISTS ${STORAGE_DATABASE}'; do
echo waiting for database check && sleep 1;
done;
echo 'DataBase OK ✓'
else
until mysqladmin status -u${STORAGE_USER} -p${DB_PASSWORD} --host=${STORAGE_HOST} --port=${STORAGE_PORT}; do sleep 1; done
fi
env:
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "clusterpedia.mysql.storage.fullname" . }}
key: password
envFrom:
- configMapRef:
name: {{ include "clusterpedia.mysql.storage.initContainer.env.name" . }}
{{- end -}}
clusterpedia-mysql defines the environment variables needed by the init containers in storage-initcontainer-env-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "clusterpedia.mysql.storage.initContainer.env.name" . }}
  namespace: {{ .Release.Namespace }}
  labels: {{ include "common.labels.standard" . | nindent 4 }}
data:
  STORAGE_HOST: {{ include "clusterpedia.mysql.storage.host" . | quote }}
  STORAGE_PORT: {{ include "clusterpedia.mysql.storage.port" . | quote }}
  STORAGE_USER: {{ include "clusterpedia.mysql.storage.user" . | quote }}
  STORAGE_DATABASE: {{ include "clusterpedia.mysql.storage.database" . | quote }}
  CREATE_DATABASE: {{ .Values.externalStorage.createDatabase | quote }}
The init container dynamically set via the clusterpedia.storage.override.initContainers naming template will be rendered into the Deployment
# helm template clusterpedia -n clusterpedia-system --set persistenceMatchNode=None .
...
spec:
  initContainers:
  - name: ensure-database
    image: docker.io/bitnami/mysql:8.0.28-debian-10-r23
    command:
    - /bin/sh
    - -ec
    - |
      if [ ${CREATE_DATABASE} = "true" ]; then
        until mysql -u${STORAGE_USER} -p${DB_PASSWORD} --host=${STORAGE_HOST} --port=${STORAGE_PORT} -e "CREATE DATABASE IF NOT EXISTS ${STORAGE_DATABASE}"; do
          echo waiting for database check && sleep 1;
        done;
        echo 'DataBase OK ✓'
      else
        until mysqladmin status -u${STORAGE_USER} -p${DB_PASSWORD} --host=${STORAGE_HOST} --port=${STORAGE_PORT}; do sleep 1; done
      fi
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: clusterpedia-mysql-storage
          key: password
    envFrom:
    - configMapRef:
        name: clusterpedia-mysql-storage-initcontainer-env
The ConfigMap and environment variables of the storage plugin runtime configuration config.yaml are also dynamically configured in clusterpedia-mysql
# _storage_override.yaml
{{- define "clusterpedia.storage.override.configmap.name" -}}
{{- printf "%s-mysql-storage-config" .Release.Name -}}
{{- end -}}
{{- define "clusterpedia.storage.override.componentEnv" -}}
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ include "clusterpedia.mysql.storage.fullname" . }}
      key: password
{{- end -}}
The storage layer configuration is applied to the APIServer and ClusterSynchro Manager Deployments through static values overrides and dynamic naming templates.
An Advanced Chart like clusterpedia-mysql hides the underlying storage plugin from users and works out of the box.
5 - Features
When using feature functionality, users need to enable the corresponding feature gates.
For example, enable the AllowSyncAllResources feature gate of the clustersynchro manager to allow the use of the All-resources Wildcard
# ignore other flags
./bin/clustersynchro-manager --feature-gates=AllowSyncAllResources=true
Clusterpedia APIServer and Clusterpedia ClusterSynchro Manager have different feature gates.
APIServer
| desc | feature gate | default |
| --- | --- | --- |
| Set whether to return the number of remaining resources by default | RemainingItemCount | false |
| Raw SQL Query | AllowRawSQLQuery | false |
ClusterSynchro Manager
| desc | feature gate | default |
| --- | --- | --- |
| Prune metadata.managedFields | PruneManagedFields | true |
| Prune metadata.annotations['lastAppliedConfiguration'] | PruneLastAppliedConfiguration | true |
| Allow synchronization of all types of custom resources | AllowSyncCustomResources | false |
| Allow synchronization of all types of resources | AllowSyncAllResources | false |
| Use standalone TCP for health checker | HealthCheckerWithStandaloneTCP | false |
5.1 - Return RemainingItemCount
When querying, we can require the number of remaining resources to be included in the response via a search label or URL query.
| search label | url query |
| --- | --- |
| search.clusterpedia.io/with-remaining-count | withRemainingCount |
For detailed usage, refer to Response With Remaining Count
You can make the number of remaining resources be returned by default via the Feature Gate – RemainingItemCount, so that users do not need to explicitly request it with a search label or URL query on every request.
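For example, to enable it on the apiserver (other flags omitted):
./bin/apiserver --feature-gates=RemainingItemCount=true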
When the remaining item count is returned by default, you can still request that the remaining item count not be returned via search label or url query.
kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?withRemainingCount=false&limit=1" | jq
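When the count is returned (whether by default or on request), it appears in the list metadata. A trimmed, illustrative response might look like this (the values are made up):
{
  "kind": "DeploymentList",
  "apiVersion": "apps/v1",
  "metadata": {
    "continue": "...",
    "remainingItemCount": 23
  },
  ...
}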
This Feature Gate is dedicated to the clusterpedia apiserver
| desc | feature gate | default |
| --- | --- | --- |
| Set whether to return the number of remaining resources by default | RemainingItemCount | false |
This feature is turned off by default because it may have an impact on the behavior or performance of the storage layer.
For the default storage layer, returning the number of remaining resources results in an additional COUNT query
5.2 - Raw SQL Query
Different users may have different needs, and although clusterpedia provides many easy search options, such as specifying a set of namespaces or clusters, or specifying an owner for a query, users may still have more complex queries.
In this case, you can use the Raw SQL Query
provided by the default storage layer
to pass more complex search conditions.
URL="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments"
kubectl get --raw="$URL?whereSQL=(cluster='global') OR (namespace IN ('kube-system','default'))"
In the example, we pass a SQL statement for a WHERE query: (cluster='global') OR (namespace IN ('kube-system','default')).
This statement will retrieve deployments under all namespaces in the global cluster and under the kube-system and default namespaces in other clusters.
The SQL statement needs to conform to the SQL syntax of the specific storage component (MySQL, PostgreSQL).
This feature gate is exclusive to the clusterpedia apiserver
| desc | feature gate | default |
| --- | --- | --- |
| Allow search conditions to be set using raw SQL | AllowRawSQLQuery | false |
Raw SQL queries are currently in alpha and are not well protected against SQL injection, so you need to enable this feature via Feature Gate.
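To enable it on the apiserver (other flags omitted):
./bin/apiserver --feature-gates=AllowRawSQLQuery=true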
5.3 - Resource Field Pruning
There are some fields in a resource's metadata that are usually not very useful in actual searches, so we prune these fields by default when syncing.
We use feature gates to separately control whether these fields are pruned during resource synchronization; these feature gates are exclusive to the clustersynchro manager component
| field | feature gate | default |
| --- | --- | --- |
| metadata.managedFields | PruneManagedFields | true |
| metadata.annotations['lastAppliedConfiguration'] | PruneLastAppliedConfiguration | true |
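If you want to keep these fields in the synchronized resources, disable the corresponding gates on the clustersynchro manager, for example (other flags omitted):
./bin/clustersynchro-manager --feature-gates=PruneManagedFields=false,PruneLastAppliedConfiguration=false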
5.4 - Standalone TCP for Health Checker
When client-go creates any number of Clients with the same configuration, such as certificates, it reuses the same TCP connection. https://github.com/kubernetes/kubernetes/blob/3f823c0daa002158b12bfb2d53bcfe433516659d/staging/src/k8s.io/client-go/transport/transport.go#L54
This results in the cluster health check interface using the same TCP connection as the resource synchronized informer, which may cause TCP blocking and increased health check latency if a large number of informers are started for the first time.
We added a feature gate, HealthCheckerWithStandaloneTCP, to allow users to use a standalone TCP connection for health checks
./clustersynchro-manager --feature-gates=HealthCheckerWithStandaloneTCP=true
| desc | feature gate | default |
| --- | --- | --- |
| Use standalone TCP for health checker | HealthCheckerWithStandaloneTCP | false |
Note: When this feature is turned on, the TCP long connections to member clusters will change from 1 to 2. If 1000 clusters are imported, then ClusterSynchro Manager will keep 2000 TCP connections.
5.5 - Sync All Custom Resources
Custom resources differ from kube's built-in resources: built-in resource types do not usually change (there are still two cases where native resource types can change), while custom resource types can be created and deleted dynamically.
If you want to automatically adjust the synchronized resource types based on changes to the imported cluster's CRDs, you can specify in PediaCluster that all custom resources are synchronized.
spec:
  syncAllCustomResources: true
This feature may create a lot of long connections, so you need to enable the Feature Gate in the clustersynchro manager.
| desc | feature gate | default |
| --- | --- | --- |
| Allow synchronization of all custom resources | AllowSyncCustomResources | false |
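For example (other flags omitted):
./bin/clustersynchro-manager --feature-gates=AllowSyncCustomResources=true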
5.6 - Sync All Resources
You can synchronize all types of resources with the All-resources Wildcard, and any resource type change in the imported cluster (e.g. a kube version upgrade, a disabled group/version, a CRD or APIService change) will cause the set of synchronized resource types to be adjusted.
spec:
  syncResources:
  - group: "*"
    resources:
    - "*"
Please use this feature with caution: it will create a lot of long connections. In the future, Clusterpedia will add an Agent feature to avoid creating these long connections.
It is recommended to specify concrete resource types. If you need to dynamically synchronize custom resources, you can use Sync All Custom Resources.
To use the All-resources Wildcard, you need to enable the Feature Gate in the clustersynchro manager.
| desc | feature gate | default |
| --- | --- | --- |
| Allow synchronization of all resources | AllowSyncAllResources | false |
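As with the other gates, it is passed on the clustersynchro manager command line (other flags omitted):
./bin/clustersynchro-manager --feature-gates=AllowSyncAllResources=true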