1 vote

ceph-osd: No block devices detected using current configuration

After deploying OpenStack with Juju, the ceph-osd units are stuck in a blocked state.

$: juju status 
ceph-osd/0                blocked   idle       1        10.20.253.197                      No block devices detected using current configuration
ceph-osd/1*               blocked   idle       2        10.20.253.199                      No block devices detected using current configuration
ceph-osd/2                blocked   idle       0        10.20.253.200                      No block devices detected using current configuration

I asked juju to connect me to the first machine, the one running ceph-osd/0:

$: juju ssh ceph-osd/0

and ran the following commands:

$: sudo fdisk -l
Disk /dev/vda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xaa276e23

Device     Boot Start        End    Sectors  Size Id Type
/dev/vda1        2048 1048575966 1048573919  500G 83 Linux

Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CAA6111D-5ECF-48EB-B4BF-9EC58E38AD64

Device     Start        End    Sectors  Size Type
/dev/vdb1   2048       4095       2048    1M BIOS boot
/dev/vdb2   4096 1048563711 1048559616  500G Linux filesystem

$: df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.9G     0  7.9G   0% /dev
tmpfs           1.6G  856K  1.6G   1% /run
/dev/vda1       492G   12G  455G   3% /
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs           100K     0  100K   0% /var/lib/lxd/shmounts
tmpfs           100K     0  100K   0% /var/lib/lxd/devlxd
tmpfs           1.6G     0  1.6G   0% /run/user/1000  

$: lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    252:0    0  500G  0 disk 
└─vda1 252:1    0  500G  0 part /
vdb    252:16   0  500G  0 disk 
├─vdb1 252:17   0    1M  0 part 
└─vdb2 252:18   0  500G  0 part

0 votes

Riccardo Magrini Points 1197

If your environment is already deployed, I solved this problem with the following two tasks:

Task 1

$: juju ssh ceph-osd/0 
$: sudo fdisk /dev/vdb

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): d
Partition number (1,2, default 2): 1

Partition 1 has been deleted.

Command (m for help): d
Selected partition 2
Partition 2 has been deleted.

Command (m for help): w

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
select "d" to delete all partitions and then "w" to write the new change. 

then

$: sudo fdisk -l
Disk /dev/vda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2fa2c9a8

Device     Boot Start        End    Sectors  Size Id Type
/dev/vda1        2048 1048575966 1048573919  500G 83 Linux

Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 146912CF-FC27-4FDC-A202-24F05DC00E69

then

$: sudo fdisk /dev/vdb

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition number (1-128, default 1): 
First sector (34-1048575966, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-1048575966, default 1048575966): 

Created a new partition 1 of type 'Linux filesystem' and of size 500 GiB.

Command (m for help): p
Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 146912CF-FC27-4FDC-A202-24F05DC00E69

Device     Start        End    Sectors  Size Type
/dev/vdb1   2048 1048575966 1048573919  500G Linux filesystem

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
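The partition can also be recreated without the interactive prompts; a sketch using parted, under the same assumption about /dev/vdb:

$: sudo parted --script /dev/vdb mklabel gpt mkpart primary 1MiB 100%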

then

$: sudo fdisk -l
Disk /dev/vda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2fa2c9a8

Device     Boot Start        End    Sectors  Size Id Type
/dev/vda1        2048 1048575966 1048573919  500G 83 Linux

Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 146912CF-FC27-4FDC-A202-24F05DC00E69

Device     Start        End    Sectors  Size Type
/dev/vdb1   2048 1048575966 1048573919  500G Linux filesystem

I repeated this task for the other machines, ceph-osd/1 and ceph-osd/2.
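Rather than logging into each machine in turn, the wipe and repartition could be driven from the Juju client; a sketch, assuming every unit's spare disk is /dev/vdb:

$: for u in ceph-osd/0 ceph-osd/1 ceph-osd/2; do juju ssh "$u" 'sudo wipefs --all /dev/vdb && sudo parted --script /dev/vdb mklabel gpt mkpart primary 1MiB 100%'; done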

Task 2

In the Juju GUI I changed the osd-devices string from /dev/sdb to /dev/vdb1 on the three ceph-osd units, then saved and committed.

(screenshot: the Juju GUI configuration page for ceph-osd showing the osd-devices setting)
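The same change can be made from the command line instead of the GUI:

$: juju config ceph-osd osd-devices='/dev/vdb1'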

Now the units report an active/idle status:

$: juju status
Model      Controller             Cloud/Region  Version  SLA          Timestamp
openstack  maas-cloud-controller  maas-cloud    2.4.2    unsupported  13:54:02+02:00

App                    Version        Status  Scale  Charm                  Store       Rev  OS      Notes
ceph-mon               13.2.1+dfsg1   active      3  ceph-mon               jujucharms   26  ubuntu  
ceph-osd               13.2.1+dfsg1   active      3  ceph-osd               jujucharms  269  ubuntu  
ceph-radosgw           13.2.1+dfsg1   active      1  ceph-radosgw           jujucharms  259  ubuntu  
cinder                 13.0.0         active      1  cinder                 jujucharms  273  ubuntu  
cinder-ceph            13.0.0         active      1  cinder-ceph            jujucharms  234  ubuntu  
glance                 17.0.0         active      1  glance                 jujucharms  268  ubuntu  
keystone               14.0.0         active      1  keystone               jujucharms  283  ubuntu  
mysql                  5.7.20-29.24   active      1  percona-cluster        jujucharms  269  ubuntu  
neutron-api            13.0.0         active      1  neutron-api            jujucharms  262  ubuntu  
neutron-gateway        13.0.0         active      1  neutron-gateway        jujucharms  253  ubuntu  
neutron-openvswitch    13.0.0         active      3  neutron-openvswitch    jujucharms  251  ubuntu  
nova-cloud-controller  18.0.0         active      1  nova-cloud-controller  jujucharms  311  ubuntu  
nova-compute           18.0.0         active      3  nova-compute           jujucharms  287  ubuntu  
ntp                    4.2.8p10+dfsg  active      4  ntp                    jujucharms   27  ubuntu  
openstack-dashboard    14.0.0         active      1  openstack-dashboard    jujucharms  266  ubuntu  
rabbitmq-server        3.6.10         active      1  rabbitmq-server        jujucharms   78  ubuntu  

Unit                      Workload  Agent  Machine  Public address  Ports              Message
ceph-mon/0                active    idle   2/lxd/1  10.20.253.216                      Unit is ready and clustered
ceph-mon/1                active    idle   0/lxd/0  10.20.253.95                       Unit is ready and clustered
ceph-mon/2*               active    idle   1/lxd/0  10.20.253.83                       Unit is ready and clustered
ceph-osd/0                active    idle   1        10.20.253.197                      Unit is ready (1 OSD)
ceph-osd/1*               active    idle   2        10.20.253.199                      Unit is ready (1 OSD)
ceph-osd/2                active    idle   0        10.20.253.200                      Unit is ready (1 OSD)
ceph-radosgw/0*           active    idle   3/lxd/0  10.20.253.87    80/tcp             Unit is ready
cinder/0*                 active    idle   0/lxd/1  10.20.253.188   8776/tcp           Unit is ready
  cinder-ceph/0*          active    idle            10.20.253.188                      Unit is ready
glance/0*                 active    idle   2/lxd/0  10.20.253.217   9292/tcp           Unit is ready
keystone/0*               active    idle   1/lxd/1  10.20.253.134   5000/tcp           Unit is ready
mysql/0*                  active    idle   3/lxd/1  10.20.253.96    3306/tcp           Unit is ready
neutron-api/0*            active    idle   0/lxd/2  10.20.253.189   9696/tcp           Unit is ready
neutron-gateway/0*        active    idle   3        10.20.253.198                      Unit is ready
  ntp/3                   active    idle            10.20.253.198   123/udp            Ready
nova-cloud-controller/0*  active    idle   2/lxd/2  10.20.253.218   8774/tcp,8778/tcp  Unit is ready
nova-compute/0            active    idle   1        10.20.253.197                      Unit is ready
  neutron-openvswitch/0*  active    idle            10.20.253.197                      Unit is ready
  ntp/0*                  active    idle            10.20.253.197   123/udp            Ready
nova-compute/1*           active    idle   0        10.20.253.200                      Unit is ready
  neutron-openvswitch/1   active    idle            10.20.253.200                      Unit is ready
  ntp/1                   active    idle            10.20.253.200   123/udp            Ready
nova-compute/2            active    idle   2        10.20.253.199                      Unit is ready
  neutron-openvswitch/2   active    idle            10.20.253.199                      Unit is ready
  ntp/2                   active    idle            10.20.253.199   123/udp            Ready
openstack-dashboard/0*    active    idle   1/lxd/2  10.20.253.13    80/tcp,443/tcp     Unit is ready
rabbitmq-server/0*        active    idle   3/lxd/2  10.20.253.86    5672/tcp           Unit is ready

Machine  State    DNS            Inst id              Series  AZ         Message
0        started  10.20.253.200  fxbapd               bionic  Openstack  Deployed
0/lxd/0  started  10.20.253.95   juju-53dcb3-0-lxd-0  bionic  Openstack  Container started
0/lxd/1  started  10.20.253.188  juju-53dcb3-0-lxd-1  bionic  Openstack  Container started
0/lxd/2  started  10.20.253.189  juju-53dcb3-0-lxd-2  bionic  Openstack  Container started
1        started  10.20.253.197  mqdnxt               bionic  Openstack  Deployed
1/lxd/0  started  10.20.253.83   juju-53dcb3-1-lxd-0  bionic  Openstack  Container started
1/lxd/1  started  10.20.253.134  juju-53dcb3-1-lxd-1  bionic  Openstack  Container started
1/lxd/2  started  10.20.253.13   juju-53dcb3-1-lxd-2  bionic  Openstack  Container started
2        started  10.20.253.199  ysg683               bionic  Openstack  Deployed
2/lxd/0  started  10.20.253.217  juju-53dcb3-2-lxd-0  bionic  Openstack  Container started
2/lxd/1  started  10.20.253.216  juju-53dcb3-2-lxd-1  bionic  Openstack  Container started
2/lxd/2  started  10.20.253.218  juju-53dcb3-2-lxd-2  bionic  Openstack  Container started
3        started  10.20.253.198  scycac               bionic  Openstack  Deployed
3/lxd/0  started  10.20.253.87   juju-53dcb3-3-lxd-0  bionic  Openstack  Container started
3/lxd/1  started  10.20.253.96   juju-53dcb3-3-lxd-1  bionic  Openstack  Container started
3/lxd/2  started  10.20.253.86   juju-53dcb3-3-lxd-2  bionic  Openstack  Container started

If, on the other hand, the OpenStack deployment has not been run yet, change the osd-devices string from /dev/sdb to /dev/vdb on the three ceph-osd applications in the Juju GUI before deploying, then commit.
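If you drive the deployment from the command line rather than the GUI, the same option can be passed at deploy time; a sketch, assuming a plain charm-store deploy of three units:

$: juju deploy -n 3 ceph-osd --config osd-devices='/dev/vdb'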

-1 votes

StephanP Points 1

The default disk path in the ceph-osd charm is currently set to '/dev/sdb'. You need to set it to the path of the disk that will hold the ceph-osd data ('/dev/vdb'):

$ juju config ceph-osd osd-devices
/dev/sdb
$ juju config ceph-osd osd-devices='/dev/vdb'

The disk must not contain any partitions when you configure it this way. After that, the ceph-osd units should become active.
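If the disk still carries an old partition table, recent revisions of the ceph-osd charm ship a zap-disk action that clears it; the parameters below are an assumption, so check juju actions ceph-osd on your charm revision first:

$: juju run-action ceph-osd/0 zap-disk devices=/dev/vdb i-really-mean-it=true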
