Blog
- Cluster API Searching Has Never Been Easier
- Clusterpedia is Listed in the CNCF Cloud Native Landscape
- Clusterpedia v0.2.0 Release
- Quickly Deploy Clusterpedia with Helm
- Clusterpedia Awarded | One of IT Technology Influence Stars Selected by CSDN
- Demo Video | Clusterpedia - Complex Retrieval of Resources in a Multi-Cloud Environment
- Clusterpedia v0.1.0 Release — four important functions
- Upgrade to Clusterpedia 0.1.0
- Clusterpedia with kubectl support to retrieve multicluster resources
Cluster API Searching Has Never Been Easier
Since 0.4.0, Clusterpedia provides a more friendly way to interface with multi-cloud platforms. Users simply create or join clusters in the multi-cloud platform and then use Clusterpedia to retrieve the resources within those clusters directly.
We maintain a ClusterImportPolicy for each multi-cloud platform in the Clusterpedia repository. You are very welcome to submit a ClusterImportPolicy to Clusterpedia for interfacing with other multi-cloud platforms. After installing Clusterpedia, you can create the appropriate ClusterImportPolicy, or create a new ClusterImportPolicy according to the needs of your multi-cloud platform.
The ClusterImportPolicy for the Cluster API has been submitted in clusterpedia#288. After creating clusters in the Cluster API, you can use Clusterpedia directly to do complex searches of resources within these clusters.
$ kubectl get cluster
NAME PHASE AGE VERSION
capi-quickstart Provisioned 10m v1.24.2
capi-quickstart-2 Provisioned 118s v1.24.2
$ kubectl get kubeadmcontrolplane
NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
capi-quickstart-2-ctm9k capi-quickstart-2 true 1 1 1 10m v1.24.2
capi-quickstart-2xcsz capi-quickstart true 1 1 1 19m v1.24.2
$ # PediaCluster resources are automatically created, updated, or deleted based on the Cluster resources
$ kubectl get pediacluster -o wide
NAME READY VERSION APISERVER VALIDATED SYNCHRORUNNING CLUSTERHEALTHY
default-capi-quickstart True v1.24.2 Validated Running Healthy
default-capi-quickstart-2 True v1.24.2 Validated Running Healthy
$ kubectl --cluster clusterpedia get no
CLUSTER NAME STATUS ROLES AGE VERSION
default-capi-quickstart-2 capi-quickstart-2-ctm9k-g2m87 NotReady control-plane 12m v1.24.2
default-capi-quickstart-2 capi-quickstart-2-md-0-s8hbx-7bd44554b5-kzcb6 NotReady <none> 11m v1.24.2
default-capi-quickstart capi-quickstart-2xcsz-fxrrk NotReady control-plane 21m v1.24.2
default-capi-quickstart capi-quickstart-md-0-9tw2g-b8b4f46cf-gggvq NotReady <none> 20m v1.24.2
Quickly deploy a sample environment for Cluster API and Clusterpedia
Prerequisites
- Install and setup kubectl in your local environment
- Install Kind and Docker
- Install clusterctl
Minimum supported kind version: v0.14.0
Create a management cluster and deploy the Cluster API
Details on deploying the Cluster API can also be found at https://cluster-api.sigs.k8s.io/user/quick-start.html
$ cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF
$ kind create cluster --name capi-sample --config kind-cluster-with-extramounts.yaml
$ export CLUSTER_TOPOLOGY=true
$ clusterctl init --infrastructure docker
Deploy Clusterpedia
$ git clone https://github.com/clusterpedia-io/clusterpedia.git && cd clusterpedia/charts
$ helm install clusterpedia . \
--namespace clusterpedia-system \
--create-namespace \
--set installCRDs=true \
--set persistenceMatchNode=capi-sample-control-plane  # for your own environment, set persistenceMatchNode to the local PV node ({{ LOCAL_PV_NODE }})
The Clusterpedia chart creates a local PV for the storage component, and you need to specify the node for it using the persistenceMatchNode option, e.g. --set persistenceMatchNode=master-1.
If you don’t need to create a local PV, add the --set persistenceMatchNode=None flag. Learn More
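For reference, a variant of the install command above that skips the local PV entirely (the flags are the same ones shown above, only persistenceMatchNode differs):
$ helm install clusterpedia . \
--namespace clusterpedia-system \
--create-namespace \
--set installCRDs=true \
--set persistenceMatchNode=None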
Create the ClusterImportPolicy for interfacing with the Cluster API
$ kubectl apply -f https://raw.githubusercontent.com/Iceber/clusterpedia/add_cluster_api_clusterimportpolicy/deploy/clusterimportpolicy/cluster_api.yaml
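Once applied, you can check that the policy was created; a simple hedged check, where the resource name just follows the ClusterImportPolicy kind:
$ kubectl get clusterimportpolicy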
Clusterpedia can be integrated into any multi-cloud management platform. Learn More
Generate the cluster shortcut for kubectl. If you access Clusterpedia with client-go or the OpenAPI directly, you can skip this step.
$ curl -sfL https://raw.githubusercontent.com/clusterpedia-io/clusterpedia/main/hack/gen-clusterconfigs.sh | sh -
$ # Use kubectl to retrieve multi-cluster resources. The Cluster API has not created any clusters yet, so nothing is returned
$ kubectl --cluster clusterpedia api-resources
Create a cluster using the Cluster API
When using the sample environment's Docker provider to create a cluster, you need to add the --flavor development flag.
$ clusterctl generate cluster capi-quickstart --flavor development \
--kubernetes-version v1.24.2 \
--control-plane-machine-count=1 \
--worker-machine-count=1 \
> capi-quickstart.yaml
$ kubectl apply -f ./capi-quickstart.yaml
View cluster creation status
$ kubectl get cluster
NAME PHASE AGE VERSION
capi-quickstart Provisioned 8s v1.24.2
$ kubectl get kubeadmcontrolplane -w
NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
capi-quickstart-2xcsz capi-quickstart true 1 1 1 86s v1.24.2
When the kubeadmcontrolplane's Initialized is true, Clusterpedia automatically synchronizes the resources in the cluster, and you can use kubectl --cluster clusterpedia get to search the resources.
$ kubectl get pediacluster
NAME READY VERSION APISERVER
default-capi-quickstart True v1.24.2
$ kubectl --cluster clusterpedia get pod -A
NAMESPACE CLUSTER NAME READY STATUS RESTARTS AGE
kube-system default-capi-quickstart kube-apiserver-capi-quickstart-2xcsz-fxrrk 1/1 Running 0 2m32s
kube-system default-capi-quickstart kube-scheduler-capi-quickstart-2xcsz-fxrrk 1/1 Running 0 2m31s
kube-system default-capi-quickstart coredns-6d4b75cb6d-lrwj4 0/1 Pending 0 2m20s
kube-system default-capi-quickstart kube-proxy-p8v9m 1/1 Running 0 2m20s
kube-system default-capi-quickstart kube-controller-manager-capi-quickstart-2xcsz-fxrrk 1/1 Running 0 2m32s
kube-system default-capi-quickstart etcd-capi-quickstart-2xcsz-fxrrk 1/1 Running 0 2m32s
kube-system default-capi-quickstart kube-proxy-2ln2w 1/1 Running 0 105s
kube-system default-capi-quickstart coredns-6d4b75cb6d-2hcmz 0/1 Pending 0 2m20s
The cluster-api ClusterImportPolicy sets the resources to be synchronized in the cluster by default.
Users can also manually modify the synchronization configuration in the PediaCluster resource (see the sketch below); see Synchronize Cluster Resources.
When a cluster is deleted in the Cluster API, Clusterpedia deletes the corresponding PediaCluster at the same time.
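A minimal sketch of such a manual change, assuming the PediaCluster name default-capi-quickstart from the output above and that you additionally want to synchronize events (the patch path relies on spec.syncResources already being a list):
$ kubectl patch pediacluster default-capi-quickstart --type=json \
-p='[{"op": "add", "path": "/spec/syncResources/-", "value": {"group": "", "resources": ["events"]}}]'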
Resources retrieval for multiple clusters
Use the above steps to create multiple clusters.
$ kubectl get cluster
NAME PHASE AGE VERSION
capi-quickstart Provisioned 10m v1.24.2
capi-quickstart-2 Provisioned 118s v1.24.2
$ kubectl get kubeadmcontrolplane
NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
capi-quickstart-2-ctm9k capi-quickstart-2 true 1 1 1 10m v1.24.2
capi-quickstart-2xcsz capi-quickstart true 1 1 1 19m v1.24.2
$ # PediaCluster resources are automatically created, updated, or deleted based on the Cluster resources
$ kubectl get pediacluster -o wide
NAME READY VERSION APISERVER VALIDATED SYNCHRORUNNING CLUSTERHEALTHY
default-capi-quickstart True v1.24.2 Validated Running Healthy
default-capi-quickstart-2 True v1.24.2 Validated Running Healthy
$ kubectl --cluster clusterpedia get no
CLUSTER NAME STATUS ROLES AGE VERSION
default-capi-quickstart-2 capi-quickstart-2-ctm9k-g2m87 NotReady control-plane 12m v1.24.2
default-capi-quickstart-2 capi-quickstart-2-md-0-s8hbx-7bd44554b5-kzcb6 NotReady <none> 11m v1.24.2
default-capi-quickstart capi-quickstart-2xcsz-fxrrk NotReady control-plane 21m v1.24.2
default-capi-quickstart capi-quickstart-md-0-9tw2g-b8b4f46cf-gggvq NotReady <none> 20m v1.24.2
Clusterpedia supports two types of resource search:
$ kubectl api-resources | grep clusterpedia.io
collectionresources clusterpedia.io/v1beta1 false CollectionResource
resources clusterpedia.io/v1beta1 false Resources
$ kubectl --cluster clusterpedia get cm -A
NAMESPACE CLUSTER NAME DATA AGE
kube-system default-capi-quickstart extension-apiserver-authentication 6 19m
kube-system default-capi-quickstart kubeadm-config 1 19m
kube-public default-capi-quickstart cluster-info 2 19m
kube-system default-capi-quickstart kube-proxy 2 19m
kube-node-lease default-capi-quickstart kube-root-ca.crt 1 19m
kube-system default-capi-quickstart-2 extension-apiserver-authentication 6 10m
kube-system default-capi-quickstart kubelet-config 1 19m
kube-system default-capi-quickstart coredns 1 19m
kube-system default-capi-quickstart kube-root-ca.crt 1 19m
kube-public default-capi-quickstart kube-root-ca.crt 1 19m
kube-system default-capi-quickstart-2 coredns 1 10m
default default-capi-quickstart kube-root-ca.crt 1 19m
kube-system default-capi-quickstart-2 kube-proxy 2 10m
kube-system default-capi-quickstart-2 kubeadm-config 1 10m
kube-system default-capi-quickstart-2 kubelet-config 1 10m
kube-system default-capi-quickstart-2 kube-root-ca.crt 1 10m
kube-node-lease default-capi-quickstart-2 kube-root-ca.crt 1 10m
kube-public default-capi-quickstart-2 cluster-info 3 10m
kube-public default-capi-quickstart-2 kube-root-ca.crt 1 10m
default default-capi-quickstart-2 kube-root-ca.crt 1 10m
$ # gen cluster shortcuts
$ curl -sfL https://raw.githubusercontent.com/clusterpedia-io/clusterpedia/main/hack/gen-clusterconfigs.sh | sh -
$ kubectl --cluster default-capi-quickstart get cm -n kube-system
Clusterpedia can also perform more advanced aggregation of resources. For example, you can use Collection Resource to get a set of different resources at once.
$ kubectl get collectionresources
NAME RESOURCES
any *
workloads apps.deployments,apps.daemonsets,apps.statefulsets
kuberesources .*,admission.k8s.io.*,admissionregistration.k8s.io.*,apiextensions.k8s.io.*,apps.*,authentication.k8s.io.*,authorization.k8s.io.*,autoscaling.*,batch.*,certificates.k8s.io.*,coordination.k8s.io.*,discovery.k8s.io.*,events.k8s.io.*,extensions.*,flowcontrol.apiserver.k8s.io.*,imagepolicy.k8s.io.*,internal.apiserver.k8s.io.*,networking.k8s.io.*,node.k8s.io.*,policy.*,rbac.authorization.k8s.io.*,scheduling.k8s.io.*,storage.k8s.io.*
$ kubectl get collectionresources workloads
Search
$ kubectl --cluster clusterpedia get cm -A -l \
"search.clusterpedia.io/clusters in (default-capi-quickstart,default-capi-quickstart-2),\
search.clusterpedia.io/namespaces in (kube-system,default)"
NAMESPACE CLUSTER NAME DATA AGE
kube-system default-capi-quickstart extension-apiserver-authentication 6 23m
kube-system default-capi-quickstart kubeadm-config 1 23m
kube-system default-capi-quickstart kube-proxy 2 23m
kube-system default-capi-quickstart-2 extension-apiserver-authentication 6 14m
kube-system default-capi-quickstart kubelet-config 1 23m
kube-system default-capi-quickstart coredns 1 23m
kube-system default-capi-quickstart kube-root-ca.crt 1 23m
kube-system default-capi-quickstart-2 coredns 1 14m
default default-capi-quickstart kube-root-ca.crt 1 23m
kube-system default-capi-quickstart-2 kube-proxy 2 14m
kube-system default-capi-quickstart-2 kubeadm-config 1 14m
kube-system default-capi-quickstart-2 kubelet-config 1 14m
kube-system default-capi-quickstart-2 kube-root-ca.crt 1 14m
default default-capi-quickstart-2 kube-root-ca.crt 1 14m
Clusterpedia is Listed in the CNCF Cloud Native Landscape
In the updated CNCF Cloud Native Landscape, Clusterpedia was listed into the Scheduling & Orchestration quadrant of the Orchestration & Management layer, becoming a cloud-native multi-cluster complex retrieval tool recommended by CNCF.
The Cloud Native Computing Foundation (CNCF) is part of the Linux Foundation and was established in December 2015. It is a non-profit organization dedicated to fostering and maintaining a vendor-neutral open source ecosystem to promote cloud native technologies and make cloud native universal and sustainable.
The Cloud Native Landscape has been maintained by CNCF since December 2016. It is intended as a map to list popular projects with best practices in the community, and categorizes them in the cloud native space to provide reference for enterprises to build a cloud native ecosystem. It has extensive influence on the development, operation, and maintenance of cloud native technologies.
Clusterpedia v0.2.0 Release
Use helm to install
Users can already use Helm to install Clusterpedia:
$ helm install clusterpedia . \
--namespace clusterpedia-system \
--create-namespace \
--set persistenceMatchNode={{ LOCAL_PV_NODE }} \
# --set installCRDs=true
Use the Kube Config to import a cluster
In v0.1.0, users need to configure the address of the imported cluster and the authentication information.
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: cluster-example
spec:
  apiserver: "https://10.30.43.43:6443"
  caData:
  tokenData:
  certData:
  keyData:
  syncResources: []
In v0.2.0, PediaCluster adds the spec.kubeconfig field so that users can use a kube config to import the cluster directly.
First you need to base64 encode the kube config for the imported cluster.
$ base64 ./kubeconfig.yaml
Set the base64-encoded content in the PediaCluster spec.kubeconfig field; spec.apiserver and the other authentication fields do not need to be set (a combined sketch follows the example below).
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: cluster-example
spec:
  kubeconfig: **base64 kubeconfig**
  syncResources: []
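A combined sketch of the two steps above, assuming the kubeconfig of the cluster to import is saved as ./kubeconfig.yaml (base64 -w 0 is the GNU form; on macOS use base64 -i ./kubeconfig.yaml):
$ kubectl create -f - <<EOF
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: cluster-example
spec:
  kubeconfig: $(base64 -w 0 ./kubeconfig.yaml)
  syncResources: []
EOF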
However, since the cluster address is configured in the kube config, the APISERVER column is empty when you run kubectl get pediacluster.
$ kubectl get pediacluster
NAME APISERVER VERSION STATUS
cluster-example v1.22.2 Healthy
Mutating admission webhooks will be added in the future to automatically set spec.apiserver. Currently, if you want the cluster apiserver address to be shown by kubectl get pediacluster, you need to manually configure the spec.apiserver field in addition.
New Search Feature
Search by creation time interval
Description | Search Label Key | URL Query |
---|---|---|
Since | search.clusterpedia.io/since | since |
Before | search.clusterpedia.io/before | before |
The creation time interval used for the search is left closed and right open, since <= creation time < before.
There are four formats for creation time:
- Unix Timestamp: for ease of use, the unit (s or ms) is determined by the length of the timestamp: a 10-digit timestamp is in seconds, a 13-digit timestamp is in milliseconds.
- RFC3339: 2006-01-02T15:04:05Z or 2006-01-02T15:04:05+08:00
- UTC Date: 2006-01-02
- UTC Datetime: 2006-01-02 15:04:05
Because of the limitations of the kube label selector, the search label only supports Unix Timestamp and UTC Date. All formats are available via the URL query method.
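For example, a hypothetical URL query combining since and before against the resources endpoint (the /api/v1/pods suffix for core resources is an assumption based on the deployment example elsewhere in this document):
$ kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/api/v1/pods?since=2022-03-01T00:00:00Z&before=2022-03-20"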
Look at what resources are under the default namespace
$ kubectl --cluster clusterpedia get pods
CLUSTER NAME READY STATUS RESTARTS AGE
cluster-example quickstart-ingress-nginx-admission-create--1-kxlnn 0/1 Completed 0 171d
cluster-example fake-pod-698dfbbd5b-wvtvw 1/1 Running 0 8d
cluster-example fake-pod-698dfbbd5b-74cjx 1/1 Running 0 21d
cluster-example fake-pod-698dfbbd5b-tmcw7 1/1 Running 0 8d
We use the creation time to filter the resources.
$ kubectl --cluster clusterpedia get pods -l "search.clusterpedia.io/since=2022-03-20"
CLUSTER NAME READY STATUS RESTARTS AGE
cluster-example fake-pod-698dfbbd5b-wvtvw 1/1 Running 0 8d
cluster-example fake-pod-698dfbbd5b-tmcw7 1/1 Running 0 8d
$ kubectl --cluster clusterpedia get pods -l "search.clusterpedia.io/before=2022-03-20"
CLUSTER NAME READY STATUS RESTARTS AGE
cluster-example quickstart-ingress-nginx-admission-create--1-kxlnn 0/1 Completed 0 171d
cluster-example fake-pod-698dfbbd5b-74cjx 1/1 Running 0 21d
Search by Owner Name
As of v0.1.0, we can specify the ancestor or parent Owner UID to query resources, but the Owner UID is not convenient to use; after all, you still need to know the UID of the Owner resource in advance.
In v0.2.0, we support querying directly with the Owner Name, and the Owner query has been moved from experimental to released functionality: the Search Label prefix has been upgraded from internalstorage.clusterpedia.io to search.clusterpedia.io, and URL Query is provided.
Role | search label key | url query |
---|---|---|
Specified Owner UID | search.clusterpedia.io/owner-uid | ownerUID |
Specified Owner Name | search.clusterpedia.io/owner-name | ownerName |
Specified Owner Group Resource | search.clusterpedia.io/owner-gr | ownerGR |
Specified Owner Seniority | search.clusterpedia.io/owner-seniority | ownerSeniority |
Note that when Owner UID is specified, Owner Name and Owner Group Resource will be ignored.
$ kubectl --cluster cluster-example get pods -l \
"search.clusterpedia.io/owner-name=fake-pod, \
search.clusterpedia.io/owner-seniority=1"
CLUSTER NAME READY STATUS RESTARTS AGE
cluster-example fake-pod-698dfbbd5b-wvtvw 1/1 Running 0 8d
cluster-example fake-pod-698dfbbd5b-74cjx 1/1 Running 0 21d
cluster-example fake-pod-698dfbbd5b-tmcw7 1/1 Running 0 8d
In addition, to avoid matching multiple types of owner resources in some cases, we can use the Owner Group Resource to restrict the type of the owner.
$ kubectl --cluster cluster-example get pods -l \
"search.clusterpedia.io/owner-name=fake-pod,\
search.clusterpedia.io/owner-gr=deployments.apps,\
search.clusterpedia.io/owner-seniority=1"
... some output
Fuzzy Search based on resource names
Since fuzzy search needs to be discussed further, it is temporarily provided as an experimental feature.
Only the Search Label method is supported; URL Query isn’t supported.
Role | search label key | url query |
---|---|---|
Fuzzy Search for resource name | internalstorage.clusterpedia.io/fuzzy-name | - |
$ kubectl --cluster clusterpedia get deployments -l "internalstorage.clusterpedia.io/fuzzy-name=fake"
CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
cluster-example fake-pod 3/3 3 3 113d
You can use the in operator to pass multiple fuzzy arguments, filtering for resources whose names contain all of the given strings.
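A hypothetical example, assuming the fake-pod deployment above and that both strings appear in its name:
$ kubectl --cluster clusterpedia get deployments -l \
"internalstorage.clusterpedia.io/fuzzy-name in (fake,pod)"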
Other Features
In v0.1.0, searching for resources allows the number of remaining resources to be returned so that the user can calculate the total number of resources.
This feature has been enhanced in v0.2.0: when the offset is too large, remainingItemCount may be negative, ensuring that the total number of resources can always be calculated.
Release Notes
- Support for using Helm Charts for installation (#53, #125, @calvin0327, @wzshiming)
- PediaCluster supports importing a cluster using the kubeconfig (#115, @wzshiming)
APIServer
- Support for filtering resources by a period of creation (#113, @cleverhu)
- Support for searching for resources by an Owner name. Now, the Search by Owner feature is officially released. (#91, @Iceber)
Default Storage Layer
- Support for fuzzy search by a resource name (#117, @cleverhu)
- RemainingItemCount can be a negative number. We can still use offset + len(items) + remainingItemCount to calculate the total amount of resources if the offset is too large. (#123, @cleverhu)
Bug Fixes
Deprecation
- Search by Owner has been released as an official feature. internalstorage.clusterpedia.io/owner-name and internalstorage.clusterpedia.io/owner-seniority will be removed in the next release. (#91, @Iceber)
Other
Quickly Deploy Clusterpedia with Helm
Clusterpedia now supports rapid deployment with Helm.
First of all, you need to check that Helm v3 is installed in your current environment.
Preparation
Pull the Clusterpedia repository.
Currently, the chart has not been uploaded to the public charts repository.
git clone https://github.com/clusterpedia-io/clusterpedia.git
cd clusterpedia/charts
Since Clusterpedia uses bitnami/postgresql and bitnami/mysql as subcharts for the storage components, it is necessary to add the bitnami repository and update the dependencies of the clusterpedia chart.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm dependency build
Choose storage components
The Clusterpedia chart provides two storage components, bitnami/postgresql and bitnami/mysql, to choose from as sub-charts.
postgresql is the default storage component. If you want to use MySQL, you can add --set postgresql.enabled=false --set mysql.enabled=true to the subsequent installation command.
For specific configuration about storage components, see bitnami/postgresql and bitnami/mysql.
You can also choose not to install any storage component and use external components instead. For related settings, see charts/values.yaml.
Choose an installation or management mode for CRDs
Clusterpedia requires proper CRD resources to be created in the retrieval environment. You can choose to manually deploy CRDs by using YAML, or you can manage it with Helm.
Manage manually
kubectl apply -f ./_crds
Manage with Helm
Add --set installCRDs=true to the subsequent installation command.
Check if you need to create a local PV
The Clusterpedia chart can create a local PV for the storage component to use.
You need to specify the node where the local PV is located via --set persistenceMatchNode=<selected node name> during installation.
If you do not need to create the local PV, you can use --set persistenceMatchNode=None to declare it explicitly.
Install Clusterpedia
After the above procedure is completed, you can run the following command to install Clusterpedia:
helm install clusterpedia . \
--namespace clusterpedia-system \
--create-namespace \
--set persistenceMatchNode={{ LOCAL_PV_NODE }} \
--set installCRDs=true
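After the installation completes, a quick sanity check (a generic kubectl step, not part of the chart itself) is to confirm the components are running:
kubectl -n clusterpedia-system get pods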
Uninstall Clusterpedia
Before uninstallation, you shall manually clear all PediaCluster resources.
kubectl get pediacluster
You can run the following command to uninstall it after the PediaCluster resources are cleared.
helm -n clusterpedia-system uninstall clusterpedia
If you use any CRD resource that is manually created, you also need to manually clear the CRDs.
kubectl delete -f ./_crds
Note that PVC and PV will not be deleted. You need to manually delete them.
If you created a local PV, you need to log in to the node and remove all remaining data for the local PV.
# Log in to the node with Local PV
rm -rf /var/local/clusterpedia/internalstorage/<storage type>
Clusterpedia Awarded | One of IT Technology Influence Stars Selected by CSDN
On March 30, CSDN officially announced the selection list of IT technology influence stars, and Clusterpedia was selected as a “Cloud Native Technology Product in 2021”.
In the multi-cloud era, resource management and retrieval in a multi-cluster environment is becoming increasingly complex and challenging.
In a single cluster, we usually use kubectl to view resources, directly access Kubernetes OpenAPI, or use client-go to retrieve resources in the code.
Now, in a multi-cluster environment, Clusterpedia provides compatibility with Kubernetes OpenAPI, so you can still perform complex retrieval or search for multi-cluster resources like a single cluster without pulling data from each cluster to the local for filtering.
Demo Video | Clusterpedia - Complex Retrieval of Resources in a Multi-Cloud Environment
Iceber, the initiator of Clusterpedia and a senior cloud native engineer at DaoCloud, introduced the resource retrieval functions provided by Clusterpedia in detail. This video demonstrates step by step what issues can be solved by using Clusterpedia.
Clusterpedia is a powerful tool for multi-cluster resource retrieval
With the increase of services you provide and the continuous expansion of the cluster scale, a single Kubernetes cluster may no longer meet the needs of many enterprises. As the cloud-native technologies develop, a multi-cloud era is coming. It is more complex and difficult to manage and retrieve resources in multiple clusters.
As a result, many excellent open source projects have emerged in the community, such as cluster api for cluster lifecycle management, karmada and clusternet for multi-cloud application management. Clusterpedia is built on these cloud management platforms to provide you with complex search for multi-cluster resources.
In a single cluster, we often use kubectl to view resources, directly access Kubernetes OpenAPI, or use client-go to retrieve resources in the code.
Now, in a multi-cluster environment, Clusterpedia provides compatibility with Kubernetes OpenAPI, so you can still perform complex retrieval or search for multi-cluster resources like a single cluster without pulling data from each cluster to the local for filtering.
In addition, the capabilities of Clusterpedia are not limited to searching and viewing. It will also support simple control of resources in the future, just like a wiki that also supports editing entries. Clusterpedia provides the following features now:
- Support for search with complex conditions, filters, sorting, and paging
- Support for requesting attached resources when querying resources
- Use a unified retrieval portal for master cluster and multi-cluster resources
- Compatible with kubernetes OpenAPI, through which you can directly use kubectl for multi-cluster retrieval and need not any third-party plugins or tools
- Compatible with collecting different versions of cluster resources and not constrained by the version of master cluster
- High performance and low memory consumption in the process of resource collection
- Automatically start and stop resource collection based on the health status of clusters
- Support for the pluggable storage layer that indicates you can use other storage components to customize the storage layer
- High availability
What’s Next
In addition to supporting complex retrieval of multiple clusters, Clusterpedia can provide more benefits, such as a unified portal to the master cluster and multi-cluster resources through an aggregated API, low memory usage and weak network optimization when synchronizing sub-cluster resources in real time. It can also provide a pluggable storage layer to decouple the dependencies of storage components.
In the next topic, we will introduce the specific design and implementation principles, and explain more benefits offered by Clusterpedia, so stay tuned.
Clusterpedia v0.1.0 Release — four important functions
This is the first release of Clusterpedia 🥳🥳🥳, and it also means that it is officially in the iteration phase.
Compared to the initial v0.0.8 and v0.0.9-alpha, v0.1.0 adds a lot of features and makes some incompatible updates.
If upgrading from v0.0.9-alpha or v0.0.8, you can refer to Upgrade to Clusterpedia 0.1.0
Features Preview
Role | Search Label Key | URL Query |
---|---|---|
Filter cluster names | search.clusterpedia.io/clusters | clusters |
Filter namespaces | search.clusterpedia.io/namespaces | namespaces |
Filter resource names | search.clusterpedia.io/names | names |
Specified Owner UID | internalstorage.clusterpedia.io/owner-uid | - |
Specified Owner Seniority | internalstorage.clusterpedia.io/owner-seniority | ownerSeniority |
Order by fields | search.clusterpedia.io/orderby | orderby |
Set page size | search.clusterpedia.io/size | limit |
Set page offset | search.clusterpedia.io/offset | continue |
Response include Continue | search.clusterpedia.io/with-continue | withContinue |
Response include remaining count | search.clusterpedia.io/with-remaining-count | withRemainingCount |
The native Label Selector and an enhanced Field Selector are supported in addition to the search labels.
Important Features
Let’s start with the more important features that have been added in 0.1.0
- The number of remaining items carried in response data
- Added warning when searching for resources in a Not Ready cluster
- Enhancements to the native FieldSelector
- Search by Parent or Ancestor Owner
Warning alert on resource search
When a cluster is not ready for some reason, resources are often not synchronized properly either.
Warning alerts are used to notify users of cluster exceptions when searching for resources within that cluster, as the returned resources may not be accurate in real time.
$ kubectl get pediacluster
NAME APISERVER VERSION STATUS
cluster-1 https://10.6.100.10:6443 v1.22.2 ClusterSynchroStop
$ kubectl --cluster cluster-1 get pods
Warning: cluster-1 is not ready and the resources obtained may be inaccurate, reason: ClusterSynchroStop
CLUSTER NAME READY STATUS RESTARTS AGE
cluster-1 fake-pod-698dfbbd5b-64fsx 1/1 Running 0 68d
cluster-1 fake-pod-698dfbbd5b-9ftzh 1/1 Running 0 39d
cluster-1 fake-pod-698dfbbd5b-rk74p 1/1 Running 0 39d
cluster-1 quickstart-ingress-nginx-admission-create--1-kxlnn 0/1 Completed 0 126d
Field Selector
Native Kubernetes currently only supports field filtering on metadata.name and metadata.namespace, and the operators only support =, !=, ==, which is very limited.
Although some specific resources support a few special fields, the use is still rather limited:
# kubernetes/pkg
$ grep AddFieldLabelConversionFunc . -r
./apis/core/v1/conversion.go: err := scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("Pod"),
./apis/core/v1/conversion.go: err = scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("Node"),
./apis/core/v1/conversion.go: err = scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("ReplicationController"),
./apis/core/v1/conversion.go: return scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("Event"),
./apis/core/v1/conversion.go: return scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("Namespace"),
./apis/core/v1/conversion.go: return scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("Secret"),
./apis/certificates/v1/conversion.go: return scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("CertificateSigningRequest"),
./apis/certificates/v1beta1/conversion.go: return scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("CertificateSigningRequest"),
./apis/batch/v1/conversion.go: return scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("Job"),
./apis/batch/v1beta1/conversion.go: err = scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind(kind),
./apis/events/v1/conversion.go: return scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("Event"),
./apis/events/v1beta1/conversion.go: return scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("Event"),
./apis/apps/v1beta2/conversion.go: if err := scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("StatefulSet"),
./apis/apps/v1beta1/conversion.go: if err := scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("StatefulSet"),
Clusterpedia provides more powerful features on the basis of compatibility with the existing Field Selector, and supports the same operators as the Label Selector: !, =, !=, ==, in, notin.
For example, we can filter by annotations, just like a label selector:
kubectl get deploy --field-selector="metadata.annotations['test.io'] in (value1, value2)"
Search by Parent or Ancestor Owner
There will usually be an Owner relationship between Kubernetes resources.
apiVersion: v1
kind: Pod
metadata:
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: fake-pod-698dfbbd5b
    uid: d5bf2bdd-47d2-4932-84fb-98bde486d244
Searching by Owner is a very useful search function, and Clusterpedia also supports seniority advancement of the Owner to search by grandparents and even higher seniority.
By searching by Owner, we can query all Pods under a Deployment at once, without having to query the ReplicaSet in between.
Currently, only queries by Owner UID are supported. The feature of using the Owner Name for queries is still under discussion; you can join the discussion in the issue: Support for searching resources by owner
$ DEPLOY_UID=$(kubectl --cluster cluster-1 get deploy fake-deploy -o jsonpath="{.metadata.uid}")
$ kubectl --cluster cluster-1 get pods -l \
"internalstorage.clusterpedia.io/owner-uid=$DEPLOY_UID,\
internalstorage.clusterpedia.io/owner-seniority=1"
The number of remaining items carried in response data
In some UI cases, it is often necessary to get the total number of resources in the current search condition.
The RemainingItemCount field exists in the ListMeta of the Kubernetes List response.
type ListMeta struct {
...
// remainingItemCount is the number of subsequent items in the list which are not included in this
// list response. If the list request contained label or field selectors, then the number of
// remaining items is unknown and the field will be left unset and omitted during serialization.
// If the list is complete (either because it is not chunking or because this is the last chunk),
// then there are no more remaining items and this field will be left unset and omitted during
// serialization.
// Servers older than v1.15 do not set this field.
// The intended use of the remainingItemCount is *estimating* the size of a collection. Clients
// should not rely on the remainingItemCount to be set or to be exact.
// +optional
RemainingItemCount *int64 `json:"remainingItemCount,omitempty" protobuf:"bytes,4,opt,name=remainingItemCount"`
}
By reusing this field, the total number of resources can be returned in a Kubernetes OpenAPI-compatible manner:
offset + len(list.items) + list.metadata.remainingItemCount
Use with Paging
$ kubectl get --raw="/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments?withRemainingCount&limit=1" | jq
{
"kind": "DeploymentList",
"apiVersion": "apps/v1",
"metadata": {
"remainingItemCount": 23
},
"items": [
...
]
}
Release v0.1.0
Upgrade to Clusterpedia 0.1.0
With the release of Clusterpedia 0.1.0, we can now update the earlier 0.0.9-alpha or 0.0.8 to 0.1.0
Clean Resources
Since the URL path to search resources has been modified (#73), we need to use clean-clusterconfigs.sh from 0.0.9-alpha to clean up the cluster shortcuts in the .kube/config
curl -sfL https://raw.githubusercontent.com/clusterpedia-io/clusterpedia/v0.0.9-alpha/hack/clean-clusterconfigs.sh | sh -
Back up and delete the PediaCluster resources.
kubectl get pediacluster -o yaml > clusters.yaml.bak
kubectl delete pediacluster --all
After all PediaCluster resources have been deleted, remove the PediaCluster CRD:
kubectl delete crd pediaclusters.clusters.clusterpedia.io
Remove the APIService used to register the Aggregated API:
kubectl delete apiservices v1alpha1.pedia.clusterpedia.io
Upgrade Clusterpedia
Create the PediaCluster CRD, and upgrade the Clusterpedia APIServer and ClusterSynchro Manager.
DEPLOY_YAML_PATH=https://raw.githubusercontent.com/clusterpedia-io/clusterpedia/v0.1.0/deploy
CRD_YAML_PATH=$DEPLOY_YAML_PATH/crds
kubectl apply -f \
$CRD_YAML_PATH/cluster.clusterpedia.io_pediaclusters.yaml,\
$DEPLOY_YAML_PATH/clusterpedia_clustersynchro_manager_deployment.yaml,\
$DEPLOY_YAML_PATH/clusterpedia_apiserver_deployment.yaml,\
$DEPLOY_YAML_PATH/clusterpedia_apiserver_apiservice.yaml
We can also download the YAML files locally, or clone the clusterpedia repository, go to the ./deploy directory, and run kubectl apply.
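After applying the manifests, a simple way to confirm that the upgraded components have rolled out (a generic check, not part of the upgrade guide):
kubectl -n clusterpedia-system get deployments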
Re-import the clusters
Since the APIVersion and schema of PediaCluster have undergone incompatible changes,
it is necessary to recreate the PediaCluster resources based on the backed-up clusters.yaml.bak.
The current example of a PediaCluster:
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: cluster-example
spec:
  apiserver: "https://10.30.43.43:6443"
  caData:
  tokenData:
  certData:
  keyData:
  syncResources:
  - group: apps
    resources:
    - deployments
  - group: ""
    resources:
    - pods
There are three main changes compared to 0.0.9-alpha:
- apiVersion: clusters.clusterpedia.io/v1alpha1 -> cluster.clusterpedia.io/v1alpha2
- spec.apiserverURL -> spec.apiserver
- spec.resources -> spec.syncResources
Create new PediaCluster resources based on the old ones in clusters.yaml.bak:
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: cluster-1
spec: {}
---
apiVersion: cluster.clusterpedia.io/v1alpha2
kind: PediaCluster
metadata:
  name: cluster-2
spec: {}
View clusters status
kubectl get pediacluster
Configure the cluster shortcut for kubectl
curl -sfL https://raw.githubusercontent.com/clusterpedia-io/clusterpedia/v0.1.0/hack/gen-clusterconfigs.sh | sh -
Clusterpedia with kubectl support to retrieve multicluster resources
The name Clusterpedia is inspired by Wikipedia. It is an encyclopedia of multiple clusters that synchronizes, searches for, and simply controls multi-cluster resources. Clusterpedia can synchronize resources across multiple clusters and provide more powerful search features on the basis of compatibility with the Kubernetes OpenAPI, to help you effectively get any multi-cluster resource that you are looking for in a quick and easy way.
The capability of Clusterpedia is not only to search for and view resources; it will also simply control resources in the future, just like Wikipedia, which supports editing entries.
Architecture
The architecture diagram of Clusterpedia is as follows:
The architecture consists of four parts:
- Clusterpedia APIServer: Registers to the Kube APIServer by means of the Aggregated API and provides services through a unified entrance
- ClusterSynchro Manager: Manage the Cluster Synchro that is used to synchronize cluster resources
- Storage Layer: Connect with a specific storage component and then register to Clusterpedia APIServer and ClusterSynchro Manager via a storage interface
- Storage component: A specific storage facility such as MySQL, postgres, redis or other graph databases
In addition, Clusterpedia will use the custom resource PediaCluster to implement cluster authentication and synchronize the resource configuration.
Clusterpedia also provides a default storage layer that can connect with MySQL and postgres.
Clusterpedia does not care about the specific storage settings used; you can choose or implement the storage layer according to your own needs and then register it in Clusterpedia as a plug-in.
Features
- Support for complex search, filters, sorting, paging, and more
- Support for requesting relevant resources when you query resources
- Unify the search entry for master clusters and multi-cluster resources
- Compatible with kubernetes OpenAPI, where you can directly use kubectl for multi-cluster search without any third-party plug-ins or tools
- Compatible with synchronizing different versions of cluster resources, not restricted by the version of master cluster
- High performance and low memory consumption for resource synchronization
- Automatically start/stop resource synchronization according to the current health status of the cluster
- Support for a plug-in storage layer. You can use other storage components to customize the storage layer according to your needs.
- High availability
The above unimplemented features are already in the Roadmap
Deployment
For details on the deployment process, see Installing Clusterpedia; this article highlights how to use Clusterpedia.
Synchronize cluster resources
After deploying the Clusterpedia CRDs, you can use kubectl to operate PediaCluster resources.
$ kubectl get pediaclusters
In the examples directory, you can check examples of PediaCluster:
apiVersion: clusters.clusterpedia.io/v1alpha1
kind: PediaCluster
metadata:
  name: cluster-example
spec:
  apiserverURL: "https://10.30.43.43:6443"
  caData: ""
  tokenData: ""
  certData: ""
  keyData: ""
  resources:
  - group: apps
    resources:
    - deployments
  - group: ""
    resources:
    - pods
The configuration of PediaCluster can be divided into two parts:
- Cluster authentication
- Specify the resources to synchronize: .spec.resources
Cluster authentication
The fields caData, tokenData, certData, and keyData can be used for cluster verification.
Currently it does not support for getting the relevant verification information from ConfigMap or Secret. However, the information is already in the Roadmap.
When setting the verification field, you shall use the strings encoded by base64.
The ./examples directory provides the RBAC yaml clusterpedia_synchro_rbac.yaml, which can be used to easily obtain the permission token for a sub-cluster.
Deploy this yaml in the sub-cluster and get the proper token and CA certificate.
$ # Switch to the sub-cluster to create rbac related resources
$ kubectl apply -f examples/clusterpedia_synchro_rbac.yaml
$ SYNCHRO_TOKEN=$(kubectl get secret $(kubectl get serviceaccount clusterpedia-synchro -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}')
$ SYNCHRO_CA=$(kubectl get secret $(kubectl get serviceaccount clusterpedia-synchro -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.ca\.crt}')
Copy ./examples/pediacluster.yaml, modify the .spec.apiserverURL and .metadata.name fields, and fill $SYNCHRO_TOKEN and $SYNCHRO_CA into tokenData and caData.
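A minimal sketch of the resulting file, assuming the sub-cluster apiserver address shown earlier in this document and the PediaCluster layout from the example above (adjust name and apiserverURL to your environment):
$ cat > cluster-1.yaml <<EOF
apiVersion: clusters.clusterpedia.io/v1alpha1
kind: PediaCluster
metadata:
  name: cluster-1
spec:
  apiserverURL: "https://10.6.100.10:6443"
  caData: "$SYNCHRO_CA"
  tokenData: "$SYNCHRO_TOKEN"
  certData: ""
  keyData: ""
  resources:
  - group: apps
    resources:
    - deployments
  - group: ""
    resources:
    - pods
EOF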
$ kubectl apply -f cluster-1.yaml
pediacluster.clusters.clusterpedia.io/cluster-1 created
Synchronize resources
You can specify the synchronized resources by setting the group in the spec.resources field and the resources section under each group.
You can also view the resource synchronization status in the status section:
status:
  conditions:
  - lastTransitionTime: "2021-12-02T04:00:45Z"
    message: ""
    reason: Healthy
    status: "True"
    type: Ready
  resources:
  - group: ""
    resources:
    - kind: Pod
      namespaced: true
      resource: pods
      syncConditions:
      - lastTransitionTime: "2021-12-02T04:00:45Z"
        status: Syncing
      storageVersion: v1
      version: v1
  - group: apps
    resources:
    - kind: Deployment
      namespaced: true
      resource: deployments
      syncConditions:
      - lastTransitionTime: "2021-12-02T04:00:45Z"
        status: Syncing
      storageVersion: v1
      version: v1
  version: v1.22.2
Search for resources
After configuring the resources to be synchronized, you can search for the cluster resources. Clusterpedia supports two types of resource search:
- Search for resources that are compatible with Kubernetes OpenAPI
- Search for Collection Resource
$ kubectl api-resources | grep pedia.clusterpedia.io
collectionresources pedia.clusterpedia.io/v1alpha1 false CollectionResource
resources pedia.clusterpedia.io/v1alpha1 false Resources
To facilitate the use of kubectl for searching, you’d better create a ‘shortcut’ for searching the sub-clusters through make gen-clusterconfigs:
$ make gen-clusterconfigs
./hack/gen-clusterconfigs.sh
Current Context: kubernetes-admin@kubernetes
Current Cluster: kubernetes
Server: https://10.6.11.11:6443
TLS Server Name:
Insecure Skip TLS Verify:
Certificate Authority:
Certificate Authority Data: ***
Cluster "clusterpedia" set.
Cluster "cluster-1" set.
Use the kubectl config get-clusters command to view the currently supported clusters.
In this case, clusterpedia is a special cluster used to search across multiple clusters by using kubectl --cluster clusterpedia.
Multi-cluster resource search
First check which resources are synchronized. You cannot find a resource until it is properly synchronized:
$ kubectl --cluster clusterpedia api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
pods po v1 true Pod
deployments deploy apps/v1 true Deployment
You can check the currently synchronized resources including pods and deployments.apps.
Get deployments in the kube-system namespace of all clusters:
$ kubectl --cluster clusterpedia get deployments -n kube-system
CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
cluster-1 coredns 2/2 2 2 68d
cluster-2 calico-kube-controllers 1/1 1 1 64d
cluster-2 coredns 2/2 2 2 64d
Get deployments in the two namespaces kube-system and default of all clusters:
$ kubectl --cluster clusterpedia get deployments -A -l "search.clusterpedia.io/namespaces in (kube-system, default)"
NAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
kube-system cluster-1 coredns 2/2 2 2 68d
kube-system cluster-2 calico-kube-controllers 1/1 1 1 64d
kube-system cluster-2 coredns 2/2 2 2 64d
default cluster-2 dd-airflow-scheduler 0/1 1 0 54d
default cluster-2 dd-airflow-web 0/1 1 0 54d
default cluster-2 hello-world-server 1/1 1 1 27d
default cluster-2 keycloak 1/1 1 1 52d
default cluster-2 keycloak-02 1/1 1 1 41d
default cluster-2 my-nginx 1/1 1 1 40d
default cluster-2 nginx-dev 1/1 1 1 15d
default cluster-2 openldap 1/1 1 1 41d
default cluster-2 phpldapadmin 1/1 1 1 41d
Get deployments in the kube-system and default namespaces in cluster-1 and cluster-2:
$ kubectl --cluster clusterpedia get deployments -A -l "search.clusterpedia.io/clusters in (cluster-1, cluster-2),\
search.clusterpedia.io/namespaces in (kube-system,default)"
NAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
kube-system cluster-1 coredns 2/2 2 2 68d
kube-system cluster-2 calico-kube-controllers 1/1 1 1 64d
kube-system cluster-2 coredns 2/2 2 2 64d
default cluster-2 dd-airflow-scheduler 0/1 1 0 54d
default cluster-2 dd-airflow-web 0/1 1 0 54d
default cluster-2 hello-world-server 1/1 1 1 27d
default cluster-2 keycloak 1/1 1 1 52d
default cluster-2 keycloak-02 1/1 1 1 41d
default cluster-2 my-nginx 1/1 1 1 40d
default cluster-2 nginx-dev 1/1 1 1 15d
default cluster-2 openldap 1/1 1 1 41d
default cluster-2 phpldapadmin 1/1 1 1 41d
Get deployments in the kube-system and default namespaces in cluster-1 and cluster-2, sorted by resource name:
$ kubectl --cluster clusterpedia get deployments -A -l "search.clusterpedia.io/clusters in (cluster-1, cluster-2),\
search.clusterpedia.io/namespaces in (kube-system,default),\
search.clusterpedia.io/orderby=name"
NAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
kube-system cluster-2 calico-kube-controllers 1/1 1 1 64d
kube-system cluster-1 coredns 2/2 2 2 68d
kube-system cluster-2 coredns 2/2 2 2 64d
default cluster-2 dao-2048-2048 1/1 1 1 21d
default cluster-2 dd-airflow-scheduler 0/1 1 0 54d
default cluster-2 dd-airflow-web 0/1 1 0 54d
default cluster-2 hello-world-server 1/1 1 1 27d
default cluster-2 keycloak 1/1 1 1 52d
default cluster-2 keycloak-02 1/1 1 1 41d
default cluster-2 my-nginx 1/1 1 1 40d
default cluster-2 nginx-dev 1/1 1 1 15d
default cluster-2 openldap 1/1 1 1 41d
default cluster-2 phpldapadmin 1/1 1 1 41d
Search a specific cluster
If you want to search a specific cluster for any resource therein, you can add --cluster to specify the cluster name:
$ kubectl --cluster cluster-1 get deployments -A
NAMESPACE CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE
calico-apiserver cluster-1 calico-apiserver 1/1 1 1 68d
calico-system cluster-1 calico-kube-controllers 1/1 1 1 68d
calico-system cluster-1 calico-typha 1/1 1 1 68d
capi-system cluster-1 capi-controller-manager 1/1 1 1 42d
capi-kubeadm-bootstrap-system cluster-1 capi-kubeadm-bootstrap-controller-manager 1/1 1 1 42d
capi-kubeadm-control-plane-system cluster-1 capi-kubeadm-control-plane-controller-manager 1/1 1 1 42d
capv-system cluster-1 capv-controller-manager 1/1 1 1 42d
cert-manager cluster-1 cert-manager 1/1 1 1 42d
cert-manager cluster-1 cert-manager-cainjector 1/1 1 1 42d
cert-manager cluster-1 cert-manager-webhook 1/1 1 1 42d
clusterpedia-system cluster-1 clusterpedia-apiserver 1/1 1 1 27m
clusterpedia-system cluster-1 clusterpedia-clustersynchro-manager 1/1 1 1 27m
clusterpedia-system cluster-1 clusterpedia-internalstorage-mysql 1/1 1 1 29m
kube-system cluster-1 coredns 2/2 2 2 68d
tigera-operator cluster-1 tigera-operator 1/1 1 1 68d
Except for search.clusterpedia.io/clusters, the support for other complex queries is the same as for multi-cluster search.
If you want to learn about the details of a resource, you need to specify which cluster it is in:
$ kubectl --cluster cluster-1 -n kube-system get deployments coredns -o wide
CLUSTER NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
cluster-1 coredns 2/2 2 2 68d coredns registry.aliyuncs.com/google_containers/coredns:v1.8.4 k8s-app=kube-dns
Complex search
Clusterpedia supports the following complex searches:
- Specify one or more cluster names
- Specify one or more namespaces
- Specify one or more resource names
- Specify how to sort by multiple fields
- Paging, by which you can specify the size and offset
- Filter by labels
The actual effect of field sorting depends on the storage layer. By default, the storage layer supports sorting by cluster, name, namespace, created_at, and resource_version in normal or reverse order.
How search conditions are applied
The examples above demonstrate how you can use kubectl to search for resources, where complex search conditions are applied via labels. Clusterpedia also supports using these search conditions directly through the URL query.
role | label key | url query | example |
---|---|---|---|
Specified resource name | search.clusterpedia.io/names | names | ?names=pod-1,pod-2 |
Specified namespace | search.clusterpedia.io/namespaces | namespaces | ?namespaces=kube-system,default |
Specified cluster name | search.clusterpedia.io/clusters | clusters | ?clusters=cluster-1,cluster-2 |
Sort by specified fields | search.clusterpedia.io/orderby | orderby | ?orderby=name desc,namespace |
Specified size | search.clusterpedia.io/size | size | ?size=100 |
Specified offset | search.clusterpedia.io/offset | offset | ?offset=10 |
The operators for the label key include ==, =, !=, in, not in. For the size condition, kubectl can specify the size via --chunk-size instead of the label key.
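A hypothetical URL query combining several of these conditions; the path layout under the pedia.clusterpedia.io/v1alpha1 group shown above is an assumption:
$ kubectl get --raw="/apis/pedia.clusterpedia.io/v1alpha1/resources/apis/apps/v1/deployments?clusters=cluster-1,cluster-2&namespaces=kube-system&orderby=name&size=10"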
Collection Resource
Clusterpedia can also perform more advanced aggregation of resources. For example, you can use Collection Resource to get a set of different resources at once.
Let’s first check which Collection Resources Clusterpedia currently supports:
$ kubectl get collectionresources
NAME RESOURCES
workloads deployments.apps,daemonsets.apps,statefulsets.apps
By getting workloads, you can get a set of resources aggregated from deployments, daemonsets, and statefulsets, and Collection Resource also supports all complex queries.
kubectl get collectionresources workloads will get the corresponding resources of all namespaces in all clusters by default:
$ kubectl get collectionresources workloads
CLUSTER GROUP VERSION KIND NAMESPACE NAME AGE
cluster-1 apps v1 DaemonSet kube-system vsphere-cloud-controller-manager 63d
cluster-2 apps v1 Deployment kube-system calico-kube-controllers 109d
cluster-2 apps v1 Deployment kube-system coredns-coredns 109d
The DaemonSet added in cluster-1 is included in the collection; some of the above output is cut out.
Due to the limitations of kubectl, complex queries for Collection Resource cannot be used in kubectl; they can only be made via the URL query.
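A hypothetical sketch of such a URL query for the workloads Collection Resource (the path layout and parameter pass-through are assumptions):
$ kubectl get --raw="/apis/pedia.clusterpedia.io/v1alpha1/collectionresources/workloads?clusters=cluster-1&size=10"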
Proposal
Perform more complex control over resources
In addition to resource search, similar to Wikipedia, Clusterpedia should also have simple capability of resource control, such as watch, create, delete, update, and more.
In fact, a write action is implemented by double write + response.
We will discuss this feature and decide whether we should implement it according to the community needs
Automatic discovery and resource synchronization
The resource used to represent the cluster in Clusterpedia is called PediaCluster, not a simple Cluster.
**This is because Clusterpedia was originally designed to build on existing multi-cluster management platforms.**
In order to keep the original intention, the first issue is that Clusterpedia should not conflict with the resources in the existing multi-cluster platform. Cluster is a very common resource name that represents a cluster.
In addition, in order to better connect with the existing multi-cluster platform and enable the connected clusters automatically complete resource synchronization, we need a new mechanism to discover clusters. This discovery mechanism needs to solve the following issues:
- Get the authentication info to access the cluster
- Configure conditions that trigger the lifecycle of PediaCluster
- Set the default policy and prefix name for resource synchronization
This feature will be discussed and implemented in detail in Q1 or Q2 2022.
Roadmap
Currently, it is only a tentative roadmap and the specific schedule depends on the community needs.
About some features not added to Roadmap, you can discuss in issues.
Q4 2021
- Support for cropping field
- Synchronize custom resources
Q1 2022
- Support for the plug-in storage layer
- Implement automatic discovery and resource synchronization
Q2 2022
- Support for more control over cluster resources, such as watch/create/update/delete operations
- The storage layer supports custom Collection Resource by default
- Support for requests with relevant resources
Remarks
Multi-cluster network connectivity
Clusterpedia does not actually solve the problem of network connectivity in a multi-cluster environment. You can use tools such as tower to connect and access sub-clusters, or use submariner or skupper to solve cross-cluster network problems.