A Hands-On Case Study of Snapshot Layering in Ceph Reef (18.2.X)

Summary: This article is a hands-on case study of snapshot layering (cloning) in Ceph Reef (18.2.X), walking through the whole workflow from preparing a test environment to deleting the base image's snapshot.

                                              Author: Yin Zhengjie
Copyright notice: original work; reproduction without permission is prohibited and will be pursued legally.

I. Prepare the test environment

1. Create a storage pool

[root@ceph141 ~]# ceph osd pool create yinzhengjie 2 2
pool 'yinzhengjie' created
[root@ceph141 ~]#

2. Enable the rbd application on the pool

[root@ceph141 ~]# ceph osd pool application get yinzhengjie
{}
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool application enable yinzhengjie rbd
enabled application 'rbd' on pool 'yinzhengjie'
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool application get yinzhengjie
{
    "rbd": {}
}
[root@ceph141 ~]#

3. Initialize the pool for RBD

[root@ceph141 ~]# rbd pool init yinzhengjie
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd pool stats yinzhengjie
Total Images: 0
Total Snapshots: 0
Provisioned Size: 0 B
[root@ceph141 ~]#

4. Create a block device image

[root@ceph141 ~]# rbd create wordpress -s 4G  -p yinzhengjie

5. View the block device's details

[root@ceph141 ~]# rbd ls -p yinzhengjie
wordpress
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd info yinzhengjie/wordpress
rbd image 'wordpress':
        size 4 GiB in 1024 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: ad4945cbcd9
        block_name_prefix: rbd_data.ad4945cbcd9
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features: 
        flags: 
        create_timestamp: Wed Aug 28 00:55:28 2024
        access_timestamp: Wed Aug 28 00:55:28 2024
        modify_timestamp: Wed Aug 28 00:55:28 2024
[root@ceph141 ~]#

6. Map the block device

[root@ceph141 ~]# rbd map yinzhengjie/wordpress
/dev/rbd0
[root@ceph141 ~]#

7. Create a filesystem on the device

[root@ceph141 ~]# mkfs.xfs /dev/rbd0 
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=1048576, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.
[root@ceph141 ~]#

8. Mount the device and copy in some data

[root@ceph141 ~]# mount /dev/rbd0 /mnt/
[root@ceph141 ~]# 
[root@ceph141 ~]# cp /etc/os-release /mnt/
[root@ceph141 ~]# 
[root@ceph141 ~]# cp /etc/hosts /mnt/
[root@ceph141 ~]# 
[root@ceph141 ~]# ll /mnt/
total 12
drwxr-xr-x  2 root root   37 Aug 28 20:31 ./
drwxr-xr-x 21 root root 4096 Aug 21 20:54 ../
-rw-r--r--  1 root root  283 Aug 28 20:31 hosts
-rw-r--r--  1 root root  386 Aug 28 20:31 os-release
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd showmapped
id  pool         namespace  image      snap  device   
0   yinzhengjie             wordpress  -     /dev/rbd0
[root@ceph141 ~]#

9. Unmount and unmap the block device

[root@ceph141 ~]# umount /mnt 
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd unmap /dev/rbd0
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd showmapped
[root@ceph141 ~]#
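The preparation steps above (create pool, enable rbd, initialize, create image) can be condensed into one small helper. This is only a sketch: it assumes a healthy cluster with the `ceph` and `rbd` CLIs on PATH and that the pool/image names passed in are free to use; the PG counts mirror the example above.

```shell
#!/bin/sh
# Sketch of the preparation workflow. Names and sizes are examples.
prepare_base_image() {
    pool="$1"; image="$2"; size="$3"
    ceph osd pool create "$pool" 2 2          # pg_num / pgp_num as above
    ceph osd pool application enable "$pool" rbd
    rbd pool init "$pool"
    rbd create "$image" -s "$size" -p "$pool"
    rbd info "$pool/$image"
}

# Example call (commented out; requires a live cluster):
# prepare_base_image yinzhengjie wordpress 4G
```

Wrapping the steps in a function keeps the pool and image names in one place, which makes it easy to repeat the experiment in a scratch pool.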

II. Create the base snapshot on the server and protect it

1. Create the base snapshot

[root@ceph141 ~]# rbd snap ls yinzhengjie/wordpress
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd snap create yinzhengjie/wordpress@clonewp01
Creating snap: 100% complete...done.
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd snap ls yinzhengjie/wordpress
SNAPID  NAME       SIZE   PROTECTED  TIMESTAMP               
     4  clonewp01  4 GiB             Wed Aug 28 20:40:06 2024
[root@ceph141 ~]#

2. Put the snapshot into protected mode

[root@ceph141 ~]# rbd snap protect yinzhengjie/wordpress@clonewp01
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd snap ls yinzhengjie/wordpress  # as shown, the image's clonewp01 snapshot is now protected
SNAPID  NAME       SIZE   PROTECTED  TIMESTAMP               
     4  clonewp01  4 GiB  yes        Wed Aug 28 20:40:06 2024
[root@ceph141 ~]#

3. A protected snapshot cannot be deleted

[root@ceph141 ~]# rbd snap rm yinzhengjie/wordpress@clonewp01
Removing snap: 0% complete...failed.
2024-08-28T22:13:23.313+0800 7fcb7f7fe640 -1 librbd::Operations: snapshot is protected
rbd: snapshot 'clonewp01' is protected from removal.
[root@ceph141 ~]#
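A script can check for protection before attempting a removal. This is a minimal sketch under the assumption that `rbd snap ls --format json` reports a `protected` field and lists `name` before it in each entry (as it does on the releases I have seen); the image and snapshot names are the ones from this example.

```shell
# Sketch: return success iff the named snapshot is marked protected.
# Assumption: the JSON emitted by `rbd snap ls` puts "name" before
# "protected" inside each snapshot object.
snap_is_protected() {
    image_spec="$1"; snap_name="$2"
    rbd snap ls "$image_spec" --format json \
        | grep -q "\"name\":\"$snap_name\"[^}]*\"protected\":\"true\""
}

# Example (commented out; requires a live cluster):
# snap_is_protected yinzhengjie/wordpress clonewp01 && echo "still protected"
```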

III. Clone images from the snapshot

1. Clone new images from the base snapshot template

[root@ceph141 ~]# ceph osd pool ls
.mgr
yinzhengjie-rbd
yinzhengjie
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd ls -p yinzhengjie
wordpress
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd ls -p yinzhengjie-rbd
k8s
[root@ceph141 ~]#  
[root@ceph141 ~]# rbd clone  yinzhengjie/wordpress@clonewp01 yinzhengjie/wp01
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd clone  yinzhengjie/wordpress@clonewp01 yinzhengjie-rbd/wp02  # a clone may live in a different pool than its parent, though that can cause issues later
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd clone  yinzhengjie/wordpress@clonewp01 yinzhengjie/wp03
[root@ceph141 ~]#

2. View the clones

[root@ceph141 ~]# rbd ls -p yinzhengjie-rbd -l
NAME  SIZE   PARENT                           FMT  PROT  LOCK
k8s   5 GiB                                     2            
wp02  4 GiB  yinzhengjie/wordpress@clonewp01    2            
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd ls -p yinzhengjie -l
NAME                 SIZE   PARENT                           FMT  PROT  LOCK
wordpress            4 GiB                                     2            
wordpress@clonewp01  4 GiB                                     2  yes       
wp01                 4 GiB  yinzhengjie/wordpress@clonewp01    2    
wp03                 4 GiB  yinzhengjie/wordpress@clonewp01    2                    
[root@ceph141 ~]#

3. Check whether the base snapshot has child images

[root@ceph141 ~]# rbd children yinzhengjie/wordpress@clonewp01 
yinzhengjie/wp01
yinzhengjie/wp03
yinzhengjie-rbd/wp02
[root@ceph141 ~]#
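The snapshot → protect → clone sequence above can be wrapped in one helper. A sketch, assuming the source image exists and the destination image names are free; the specs in the commented call are the ones used in this walkthrough.

```shell
# Sketch: snapshot a source image, protect the snapshot, and clone it
# one or more times. Arguments: source image spec, snapshot name, then
# one destination spec per clone.
clone_image() {
    src="$1"; snap="$2"; shift 2
    rbd snap create "$src@$snap"
    rbd snap protect "$src@$snap"        # clones require a protected parent
    for dst in "$@"; do
        rbd clone "$src@$snap" "$dst"
    done
    rbd children "$src@$snap"            # show the resulting child images
}

# Example (commented out; requires a live cluster):
# clone_image yinzhengjie/wordpress clonewp01 yinzhengjie/wp01 yinzhengjie-rbd/wp02 yinzhengjie/wp03
```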

IV. Mount the new images on clients for read/write testing

Tip:
    In my testing, mounting more than one of these block devices on the same node fails with: "wrong fs type, bad option, bad superblock on /dev/rbd1, missing codepage or helper program, or other error."
    The workaround is to mount the three block devices on three different nodes, which is exactly what this case study does. The version tested is Reef 18.2.4.
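For what it's worth, the error above is typical of XFS refusing to mount a second filesystem whose UUID duplicates one that is already mounted, and clones of a single image all inherit the parent filesystem's UUID. A possible single-node workaround (a sketch, not something verified in this walkthrough) is to mount with the `nouuid` option, or to give the clone its own UUID once with `xfs_admin`; device paths and mountpoints below are examples:

```shell
# Sketch: mount an XFS-formatted RBD clone whose filesystem UUID
# duplicates an already-mounted sibling clone.
mount_xfs_clone() {
    dev="$1"; mnt="$2"
    mount -t xfs -o nouuid "$dev" "$mnt"    # skip the duplicate-UUID check
}

# Alternatively, regenerate the clone's UUID once, while it is unmounted:
#   xfs_admin -U generate /dev/rbd1
# mount_xfs_clone /dev/rbd1 /yinzhengjie/data/wp02
```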

1. Map and test the block device on node ceph141; no formatting needed

[root@ceph141 ~]# rbd showmapped 
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd map yinzhengjie/wp01
/dev/rbd0
[root@ceph141 ~]#  
[root@ceph141 ~]# mkdir -pv /yinzhengjie/data/wp01
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd showmapped 
id  pool         namespace  image  snap  device   
0   yinzhengjie             wp01   -     /dev/rbd0
[root@ceph141 ~]# 
[root@ceph141 ~]# mount /dev/rbd0 /yinzhengjie/data/wp01/
[root@ceph141 ~]# 
[root@ceph141 ~]# ll /yinzhengjie/data/wp01/
total 12
drwxr-xr-x 2 root root   37 Aug 28 20:31 ./
drwxr-xr-x 6 root root 4096 Aug 28 21:44 ../
-rw-r--r-- 1 root root  283 Aug 28 20:31 hosts
-rw-r--r-- 1 root root  386 Aug 28 20:31 os-release
[root@ceph141 ~]# 
[root@ceph141 ~]# cp /etc/fstab /etc/hostname /yinzhengjie/data/wp01/
[root@ceph141 ~]# 
[root@ceph141 ~]# ll /yinzhengjie/data/wp01/
total 20
drwxr-xr-x 2 root root   66 Aug 28 21:45 ./
drwxr-xr-x 6 root root 4096 Aug 28 21:44 ../
-rw-r--r-- 1 root root  657 Aug 28 21:45 fstab
-rw-r--r-- 1 root root    8 Aug 28 21:45 hostname
-rw-r--r-- 1 root root  283 Aug 28 20:31 hosts
-rw-r--r-- 1 root root  386 Aug 28 20:31 os-release
[root@ceph141 ~]#

2. Map and test the block device on node ceph142; no formatting needed

[root@ceph142 ~]# rbd showmapped 
[root@ceph142 ~]# 
[root@ceph142 ~]# rbd map yinzhengjie-rbd/wp02
/dev/rbd0
[root@ceph142 ~]# 
[root@ceph142 ~]# rbd showmapped 
id  pool             namespace  image  snap  device   
0   yinzhengjie-rbd             wp02   -     /dev/rbd0
[root@ceph142 ~]#  
[root@ceph142 ~]# mkdir -pv /yinzhengjie/data/wp02
[root@ceph142 ~]# 
[root@ceph142 ~]# mount /dev/rbd0 /yinzhengjie/data/wp02/
[root@ceph142 ~]#  
[root@ceph142 ~]# ll /yinzhengjie/data/wp02/
total 12
drwxr-xr-x 2 root root   37 Aug 28 21:48 ./
drwxr-xr-x 3 root root 4096 Aug 28 21:48 ../
-rw-r--r-- 1 root root  283 Aug 28 20:31 hosts
-rw-r--r-- 1 root root  386 Aug 28 20:31 os-release
[root@ceph142 ~]# 
[root@ceph142 ~]# rm -f /yinzhengjie/data/wp02/os-release 
[root@ceph142 ~]# 
[root@ceph142 ~]# ll /yinzhengjie/data/wp02/
total 8
drwxr-xr-x 2 root root   19 Aug 28 21:48 ./
drwxr-xr-x 3 root root 4096 Aug 28 21:48 ../
-rw-r--r-- 1 root root  283 Aug 28 20:31 hosts
[root@ceph142 ~]#

3. Map and test the block device on node ceph143; no formatting needed

[root@ceph143 ~]# rbd showmapped 
[root@ceph143 ~]# 
[root@ceph143 ~]# rbd map yinzhengjie/wp03
/dev/rbd0
[root@ceph143 ~]# 
[root@ceph143 ~]# rbd showmapped 
id  pool         namespace  image  snap  device   
0   yinzhengjie             wp03   -     /dev/rbd0
[root@ceph143 ~]#  
[root@ceph143 ~]# mkdir -pv /yinzhengjie/data/wp03
[root@ceph143 ~]# 
[root@ceph143 ~]# mount /dev/rbd0 /yinzhengjie/data/wp03
[root@ceph143 ~]# 
[root@ceph143 ~]# ll /yinzhengjie/data/wp03
total 12
drwxr-xr-x 2 root root   37 Aug 28 20:31 ./
drwxr-xr-x 6 root root 4096 Aug 28 21:50 ../
-rw-r--r-- 1 root root  283 Aug 28 20:31 hosts
-rw-r--r-- 1 root root  386 Aug 28 20:31 os-release
[root@ceph143 ~]# 
[root@ceph143 ~]# cp /etc/netplan/00-installer-config.yaml /yinzhengjie/data/wp03
[root@ceph143 ~]# 
[root@ceph143 ~]# ll /yinzhengjie/data/wp03
total 16
drwxr-xr-x 2 root root   69 Aug 28 21:51 ./
drwxr-xr-x 6 root root 4096 Aug 28 21:50 ../
-rw------- 1 root root  367 Aug 28 21:51 00-installer-config.yaml
-rw-r--r-- 1 root root  283 Aug 28 20:31 hosts
-rw-r--r-- 1 root root  386 Aug 28 20:31 os-release
[root@ceph143 ~]#

V. Unmount the images on the clients

1. On node ceph141

[root@ceph141 ~]# rbd showmapped 
id  pool         namespace  image  snap  device   
0   yinzhengjie             wp01   -     /dev/rbd0
[root@ceph141 ~]# 
[root@ceph141 ~]# umount /yinzhengjie/data/wp01 
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd unmap /dev/rbd0
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd showmapped 
[root@ceph141 ~]#

2. On node ceph142

[root@ceph142 ~]# rbd showmapped 
id  pool             namespace  image  snap  device   
0   yinzhengjie-rbd             wp02   -     /dev/rbd0
[root@ceph142 ~]# 
[root@ceph142 ~]# umount /yinzhengjie/data/wp02 
[root@ceph142 ~]# 
[root@ceph142 ~]# rbd unmap /dev/rbd0
[root@ceph142 ~]# 
[root@ceph142 ~]# rbd showmapped 
[root@ceph142 ~]#

3. On node ceph143

[root@ceph143 ~]# rbd showmapped 
id  pool         namespace  image  snap  device   
0   yinzhengjie             wp03   -     /dev/rbd0
[root@ceph143 ~]# 
[root@ceph143 ~]# umount /yinzhengjie/data/wp03 
[root@ceph143 ~]# 
[root@ceph143 ~]# rbd unmap /dev/rbd0
[root@ceph143 ~]# 
[root@ceph143 ~]# rbd showmapped 
[root@ceph143 ~]#

VI. Delete the base image's snapshot

1. The child images must be flattened first

Tip:
      Under the hood, flattening copies the base snapshot's data into the child image, so how long it takes depends on the image size.

[root@ceph141 ~]# rbd children yinzhengjie/wordpress@clonewp01  # note: I deliberately leave "yinzhengjie/wp03" unflattened here
yinzhengjie/wp01
yinzhengjie/wp03
yinzhengjie-rbd/wp02
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd flatten yinzhengjie/wp01
Image flatten: 100% complete...done.
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd flatten yinzhengjie-rbd/wp02
Image flatten: 100% complete...done.
[root@ceph141 ~]# 
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd children yinzhengjie/wordpress@clonewp01  # as shown, the base snapshot now has only the one child image I did not flatten
yinzhengjie/wp03
[root@ceph141 ~]#

2. Unprotect the base image's snapshot

[root@ceph141 ~]# rbd snap unprotect yinzhengjie/wordpress@clonewp01  # as shown, unprotect is refused while an unflattened child remains
2024-08-28T22:09:04.182+0800 7f880530a640 -1 librbd::SnapshotUnprotectRequest: cannot unprotect: at least 1 child(ren) [d4495e828556] in pool 'yinzhengjie'
2024-08-28T22:09:04.182+0800 7f8805b0b640 -1 librbd::SnapshotUnprotectRequest: encountered error: (16) Device or resource busy
2024-08-28T22:09:04.182+0800 7f8805b0b640 -1 librbd::SnapshotUnprotectRequest: 0x55e0d195abd0 should_complete_error: ret_val=-16
rbd: unprotecting snap failed: 2024-08-28T22:09:04.190+0800 7f880530a640 -1 librbd::SnapshotUnprotectRequest: 0x55e0d195abd0 should_complete_error: ret_val=-16
(16) Device or resource busy
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd flatten yinzhengjie/wp03
Image flatten: 100% complete...done.
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd children yinzhengjie/wordpress@clonewp01  # no child images remain now
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd snap unprotect yinzhengjie/wordpress@clonewp01  # and now the unprotect succeeds
[root@ceph141 ~]#
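The flatten → unprotect → remove sequence can be scripted end to end. A sketch assuming the `rbd` CLI is available and that every child of the snapshot should be flattened (child specs printed by `rbd children` contain no whitespace, so the unquoted loop is safe):

```shell
# Sketch: flatten all children of a base snapshot, then unprotect and
# delete the snapshot itself. The snapshot spec below is an example.
retire_base_snapshot() {
    snap_spec="$1"
    for child in $(rbd children "$snap_spec"); do
        rbd flatten "$child"              # copy parent data into the child
    done
    rbd snap unprotect "$snap_spec"       # allowed once no children remain
    rbd snap rm "$snap_spec"
}

# Example (commented out; requires a live cluster):
# retire_base_snapshot yinzhengjie/wordpress@clonewp01
```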

3. Delete the base image's snapshot

[root@ceph141 ~]# rbd snap ls yinzhengjie/wordpress
SNAPID  NAME       SIZE   PROTECTED  TIMESTAMP               
     4  clonewp01  4 GiB             Wed Aug 28 20:40:06 2024
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd snap rm yinzhengjie/wordpress@clonewp01
Removing snap: 100% complete...done.
[root@ceph141 ~]#  
[root@ceph141 ~]# rbd snap ls yinzhengjie/wordpress  # the snapshot has been deleted
[root@ceph141 ~]#

VII. Map the images again on the clients

Tip:
      Testing confirms that deleting the base image's snapshot does not affect the use of the (flattened) child images.

1. Node ceph141

[root@ceph141 ~]# rbd showmapped 
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd map yinzhengjie/wp01
/dev/rbd0
[root@ceph141 ~]# 
[root@ceph141 ~]# mount /dev/rbd0  /yinzhengjie/data/wp01/
[root@ceph141 ~]# 
[root@ceph141 ~]# ll /yinzhengjie/data/wp01/
total 20
drwxr-xr-x 2 root root   66 Aug 28 21:45 ./
drwxr-xr-x 6 root root 4096 Aug 28 21:44 ../
-rw-r--r-- 1 root root  657 Aug 28 21:45 fstab
-rw-r--r-- 1 root root    8 Aug 28 21:45 hostname
-rw-r--r-- 1 root root  283 Aug 28 20:31 hosts
-rw-r--r-- 1 root root  386 Aug 28 20:31 os-release
[root@ceph141 ~]#  
[root@ceph141 ~]# rm -f /yinzhengjie/data/wp01/h*
[root@ceph141 ~]# 
[root@ceph141 ~]# ll /yinzhengjie/data/wp01/
total 12
drwxr-xr-x 2 root root   37 Aug 28 22:22 ./
drwxr-xr-x 6 root root 4096 Aug 28 21:44 ../
-rw-r--r-- 1 root root  657 Aug 28 21:45 fstab
-rw-r--r-- 1 root root  386 Aug 28 20:31 os-release
[root@ceph141 ~]# 
[root@ceph141 ~]# umount /yinzhengjie/data/wp01 
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd showmapped 
id  pool         namespace  image  snap  device   
0   yinzhengjie             wp01   -     /dev/rbd0
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd unmap /dev/rbd0
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd showmapped 
[root@ceph141 ~]#

2. Node ceph142

[root@ceph142 ~]# rbd showmapped 
[root@ceph142 ~]# 
[root@ceph142 ~]# rbd map yinzhengjie-rbd/wp02
/dev/rbd0
[root@ceph142 ~]# 
[root@ceph142 ~]# rbd showmapped 
id  pool             namespace  image  snap  device   
0   yinzhengjie-rbd             wp02   -     /dev/rbd0
[root@ceph142 ~]# 
[root@ceph142 ~]# mount /dev/rbd0 /yinzhengjie/data/wp02/
[root@ceph142 ~]# 
[root@ceph142 ~]# ll /yinzhengjie/data/wp02/
total 8
drwxr-xr-x 2 root root   19 Aug 28 21:48 ./
drwxr-xr-x 3 root root 4096 Aug 28 21:48 ../
-rw-r--r-- 1 root root  283 Aug 28 20:31 hosts
[root@ceph142 ~]# 
[root@ceph142 ~]# cp /etc/hostname /yinzhengjie/data/wp02/
[root@ceph142 ~]# 
[root@ceph142 ~]# ll /yinzhengjie/data/wp02/
total 12
drwxr-xr-x 2 root root   35 Aug 28 22:24 ./
drwxr-xr-x 3 root root 4096 Aug 28 21:48 ../
-rw-r--r-- 1 root root    8 Aug 28 22:24 hostname
-rw-r--r-- 1 root root  283 Aug 28 20:31 hosts
[root@ceph142 ~]# 
[root@ceph142 ~]# umount /yinzhengjie/data/wp02 
[root@ceph142 ~]# 
[root@ceph142 ~]# rbd unmap /dev/rbd0
[root@ceph142 ~]# 
[root@ceph142 ~]# rbd showmapped 
[root@ceph142 ~]#

3. Node ceph143

[root@ceph143 ~]# rbd showmapped 
[root@ceph143 ~]# 
[root@ceph143 ~]# rbd map yinzhengjie/wp03
/dev/rbd0
[root@ceph143 ~]# 
[root@ceph143 ~]# rbd showmapped 
id  pool         namespace  image  snap  device   
0   yinzhengjie             wp03   -     /dev/rbd0
[root@ceph143 ~]# 
[root@ceph143 ~]# mount /dev/rbd0 /yinzhengjie/data/wp03/
[root@ceph143 ~]# 
[root@ceph143 ~]# ll /yinzhengjie/data/wp03/
total 16
drwxr-xr-x 2 root root   69 Aug 28 21:51 ./
drwxr-xr-x 6 root root 4096 Aug 28 21:50 ../
-rw------- 1 root root  367 Aug 28 21:51 00-installer-config.yaml
-rw-r--r-- 1 root root  283 Aug 28 20:31 hosts
-rw-r--r-- 1 root root  386 Aug 28 20:31 os-release
[root@ceph143 ~]# 
[root@ceph143 ~]# rm -f /yinzhengjie/data/wp03/*os*
[root@ceph143 ~]# 
[root@ceph143 ~]# ll /yinzhengjie/data/wp03/
total 8
drwxr-xr-x 2 root root   38 Aug 28 22:26 ./
drwxr-xr-x 6 root root 4096 Aug 28 21:50 ../
-rw------- 1 root root  367 Aug 28 21:51 00-installer-config.yaml
[root@ceph143 ~]# 
[root@ceph143 ~]# umount /yinzhengjie/data/wp03 
[root@ceph143 ~]# 
[root@ceph143 ~]# rbd unmap /dev/rbd0
[root@ceph143 ~]# 
[root@ceph143 ~]# rbd showmapped 
[root@ceph143 ~]#