Installation
- 1: kubectl apply
- 2: Helm
- 3: Configuration
1 - kubectl apply
Install
The installation of Clusterpedia is divided into two parts: installing the storage component and installing the Clusterpedia components.
If you use an existing storage component (MySQL or PostgreSQL), you can skip the step of installing the storage component.
Pull clusterpedia project:
git clone https://github.com/clusterpedia-io/clusterpedia.git
cd clusterpedia
git checkout v0.7.0
Install storage component
Clusterpedia provides two storage components to choose from: MySQL 8.0 and PostgreSQL 12.
If you use an existing storage component (MySQL or PostgreSQL), skip this step.
Go to the installation directory of the selected storage component:
# PostgreSQL
cd ./deploy/internalstorage/postgres
# MySQL
cd ./deploy/internalstorage/mysql
The storage component uses the Local PV method to store data, so you must specify the node where the Local PV is located during deployment.
You can choose to provide your own PV
export STORAGE_NODE_NAME=<nodename>
sed "s|__NODE_NAME__|$STORAGE_NODE_NAME|g" `grep __NODE_NAME__ -rl ./templates` > clusterpedia_internalstorage_pv.yaml
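The templating step above can be tried locally without a cluster. The sketch below uses a throwaway directory and a minimal PV manifest (the paths and the `worker-1` node name are illustrative, not the real chart files) to show how `grep -rl` finds every template containing `__NODE_NAME__` and `sed` substitutes the node name into the rendered output:

```shell
# Create a toy template directory containing the __NODE_NAME__ placeholder
# (illustrative stand-in for ./templates in the real repository).
mkdir -p /tmp/pv-templates
cat > /tmp/pv-templates/pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: clusterpedia-internalstorage-postgres
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - "__NODE_NAME__"
EOF

# Same pattern as the install step: grep lists files containing the
# placeholder, sed rewrites it to the chosen node name.
export STORAGE_NODE_NAME=worker-1
sed "s|__NODE_NAME__|$STORAGE_NODE_NAME|g" \
  $(grep __NODE_NAME__ -rl /tmp/pv-templates) > /tmp/clusterpedia_internalstorage_pv.yaml

# The rendered manifest now pins the PV to worker-1
grep "worker-1" /tmp/clusterpedia_internalstorage_pv.yaml
```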
Deploy storage component
kubectl apply -f .
# Go back to Clusterpedia root directory
cd ../../../
Install Clusterpedia
Once the storage component is successfully deployed, you can install Clusterpedia.
If you use an existing storage component, refer to Configure Storage Layer to set it as the Default Storage Layer.
Run the following cmd in the clusterpedia root directory
# Deploy Clusterpedia CRD and components
kubectl apply -f ./deploy
Final check
Check if the component Pods are running properly
kubectl -n clusterpedia-system get pods
Create Cluster Auto Import Policy - ClusterImportPolicy
Since 0.4.0, Clusterpedia provides a more friendly way to interface with multi-cloud platforms. Users can create a ClusterImportPolicy to automatically discover the clusters managed by a multi-cloud platform and synchronize them as PediaCluster resources, so you don't need to maintain PediaCluster resources manually.
We maintain a ClusterImportPolicy for each multi-cloud platform in the Clusterpedia repository, and people can also submit ClusterImportPolicy resources to Clusterpedia for interfacing with other multi-cloud platforms.
After installing Clusterpedia, you can create the appropriate ClusterImportPolicy, or create a new one according to your needs (multi-cloud platform). For details, please refer to Interfacing to Multi-Cloud Platforms.
kubectl get clusterimportpolicy
Uninstall
Clean up ClusterImportPolicy
If you have deployed ClusterImportPolicy resources, you need to clean them up first.
kubectl get clusterimportpolicy
Clean up PediaCluster
Before uninstalling Clusterpedia, you need to check if PediaCluster resources still exist in your environment, and clean up those resources.
kubectl get pediacluster
Uninstall Clusterpedia
After the PediaCluster resource cleanup is complete, uninstall the Clusterpedia components.
kubectl delete -f ./deploy/clusterpedia_apiserver_apiservice.yaml
kubectl delete -f ./deploy/clusterpedia_apiserver_deployment.yaml
kubectl delete -f ./deploy/clusterpedia_clustersynchro_manager_deployment.yaml
kubectl delete -f ./deploy/clusterpedia_apiserver_rbac.yaml
kubectl delete -f ./deploy/cluster.clusterpedia.io_pediaclusters.yaml
Uninstall Storage Component
Remove related resources depending on the type of storage component selected.
kubectl delete -f ./deploy/internalstorage/<storage type>
Remove Local PV and clean up data
After the storage component is uninstalled, the Local PV and the corresponding data will remain on the node, and you need to clean them up manually.
View the mounted nodes via Local PV resource details.
kubectl get pv clusterpedia-internalstorage-<storage type>
Once you know the node where the data is stored, you can delete the Local PV.
kubectl delete pv clusterpedia-internalstorage-<storage type>
Log in to the node where the data is located and clean up the data.
# In the node where the legacy data is located
rm -rf /var/local/clusterpedia/internalstorage/<storage type>
2 - Helm
3 - Configuration
3.1 - Configure Storage Layer
The Default Storage Layer of Clusterpedia supports two storage components: MySQL and PostgreSQL.
When installing Clusterpedia, you can use an existing storage component by creating the Default Storage Layer (ConfigMap) and the Secret of the storage component.
Configure the Default Storage Layer
Create the clusterpedia-internalstorage ConfigMap in the clusterpedia-system namespace.
# internalstorage configmap example
apiVersion: v1
kind: ConfigMap
metadata:
  name: clusterpedia-internalstorage
  namespace: clusterpedia-system
data:
  internalstorage-config.yaml: |
    type: "mysql"
    host: "clusterpedia-internalstorage-mysql"
    port: 3306
    user: root
    database: "clusterpedia"
    connPool:
      maxIdleConns: 10
      maxOpenConns: 100
      connMaxLifetime: 1h
    log:
      slowThreshold: "100ms"
      logger:
        filename: /var/log/clusterpedia/internalstorage.log
        maxbackups: 3
The Default Storage Layer config supports the following fields:
field | description
---|---
type | type of storage component, such as “postgres” or “mysql”
host | host for the storage component, such as an IP address or Service name
port | port for the storage component
user | user for the storage component
password | password for the storage component
database | the database used by Clusterpedia
It is a good choice to store the access password in a Secret. For details, see Configure Secret of storage component.
Connection Pool
field | description | default value
---|---|---
connPool.maxIdleConns | the maximum number of connections in the idle connection pool | 10
connPool.maxOpenConns | the maximum number of open connections to the database | 100
connPool.connMaxLifetime | the maximum amount of time a connection may be reused | 1h
Set up the database connection pool according to your environment.
Configure log
Clusterpedia supports configuring logs for the storage layer; the log field enables recording of slow SQL queries and errors.
field | description
---|---
log.stdout | output logs to the standard device
log.colorful | enable color printing or not
log.slowThreshold | threshold for slow SQL queries, such as “100ms”
log.level | severity level, such as Silent, Error, Warn, Info
log.logger | configure the rolling logger
After enabling the log, if log.stdout is not set to true, logs will be written to /var/log/clusterpedia/internalstorage.log.
Rolling logger
Write storage layer logs to a file and configure log file rotation.
field | description
---|---
log.logger.filename | the file to write logs to; backup log files are retained in the same directory. default is /var/log/clusterpedia/internalstorage.log
log.logger.maxsize | the maximum size in megabytes of the log file before it gets rotated. default is 100 MB
log.logger.maxage | the maximum number of days to retain old log files, based on the timestamp encoded in their filename
log.logger.maxbackups | the maximum number of old log files to retain
log.logger.localtime | whether to use local time; default is UTC
log.logger.compress | whether rotated log files should be compressed using gzip
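Putting the fields above together, a log section that records slow queries and errors to a rotated file might look like this (the threshold, level, and rotation values are illustrative, not defaults beyond those stated in the tables):

```yaml
log:
  slowThreshold: "100ms"   # queries slower than this are logged
  level: "Warn"            # Silent, Error, Warn, or Info
  logger:
    filename: /var/log/clusterpedia/internalstorage.log
    maxsize: 100           # rotate after 100 MB
    maxbackups: 3          # keep at most 3 old files
    maxage: 7              # or at most 7 days
    compress: true         # gzip rotated files
```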
Disable log
If the log field is not set in the internalstorage config, logging is disabled, for example:
type: "mysql"
host: "clusterpedia-internalstorage-mysql"
port: 3306
user: root
database: "clusterpedia"
More configuration
The default storage layer also provides more configurations about MySQL and PostgreSQL. Refer to internalstorage/config.go.
Configure Secret
The YAML file used to install Clusterpedia gets the password from the internalstorage-password Secret.
Configure the storage component password to Secret
kubectl -n clusterpedia-system create secret generic \
internalstorage-password --from-literal=password=<password to access storage components>
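A component Pod can then consume this Secret through a standard Kubernetes secretKeyRef. The fragment below is an illustrative sketch of that wiring (the DB_PASSWORD variable name is an assumption here; check the container spec in ./deploy for the name the component actually reads):

```yaml
# Illustrative container env entry: pull the password from the Secret
# created above rather than hard-coding it in the config.
env:
- name: DB_PASSWORD        # assumed variable name, see ./deploy manifests
  valueFrom:
    secretKeyRef:
      name: internalstorage-password
      key: password
```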