OSD Internals | Secure Ceph Cluster Configuration
Secure Ceph consists of niiceph-ovsctrl,
niiceph-gateway, and Open vSwitch (OVS).
These components operate across all Ceph nodes in a distributed manner and lock down RADOS connections by partitioning the Layer-3 public network among tenants using VLANs. Each tenant's clients are confined to a dedicated VLAN, preventing tenants from reaching each other.
Install Open vSwitch (version 2.17 or later) and start it as a service.
After the OVS service starts, create an OVS bridge with the following configuration.
- Network interface name connected to the client and Admin: ethxxxx (specify the interface name on the server where you are installing)
- OVS bridge name: br-ceph
- LOCAL interface name: br-ceph-local
Configure an IP address for the LOCAL interface.
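The bridge setup described above can be sketched with `ovs-vsctl` as follows. This is a minimal example: `ethxxxx` is the placeholder interface name from above, the `openvswitch` service name assumes an RHEL-family system, and the IP address is an illustrative value that must match your environment.

```shell
# Start OVS as a service (service name assumed to be "openvswitch").
sudo systemctl enable --now openvswitch

# Create the OVS bridge and attach the client/Admin interface.
sudo ovs-vsctl add-br br-ceph
sudo ovs-vsctl add-port br-ceph ethxxxx

# Create the LOCAL interface as an internal port on the bridge.
sudo ovs-vsctl add-port br-ceph br-ceph-local \
    -- set interface br-ceph-local type=internal

# Assign an IP address to the LOCAL interface (example address only).
sudo ip addr add 10.23.249.100/16 dev br-ceph-local
sudo ip link set br-ceph-local up
```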
niiceph-ovsctrl uses the MON key-value store (KVS) to store multi-factor authentication settings and Admin network configuration.
The Python rados module required to access the MON KVS is included in the Ceph package, so install the Ceph package as follows:
$ sudo dnf install ceph
In addition to the rados module, the following Python modules are used and must be installed:
- flask
- requests
- pbr
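One way to install these modules is with pip, as sketched below; depending on your environment you may instead prefer the distribution packages (e.g. `python3-flask` via `dnf`).

```shell
# Install the Python modules required by niiceph-ovsctrl (assumes pip3 is available).
sudo pip3 install flask requests pbr
```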
Create the following directories as the installation location for niiceph-ovsctrl:
/opt/ovs_ctrl/bin
/opt/ovs_ctrl/config
/opt/ovs_ctrl/info
/opt/ovs_ctrl/package
/opt/ovs_ctrl/log
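The directory layout above can be created in one step with brace expansion:

```shell
# Create the full niiceph-ovsctrl directory tree under /opt/ovs_ctrl.
sudo mkdir -p /opt/ovs_ctrl/{bin,config,info,package,log}
```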
Place the Python files, executable files, and definition files in their respective directories:
- /opt/ovs_ctrl/bin
- ovs_ctrl.sh (from bin)
- bridge_init.sh (from bin)
- bond0_init.sh (from bin)
- /opt/ovs_ctrl/config
- ovs_ctrl.ini (from config)
- logging.ini (from config)
- /opt/ovs_ctrl/info
- node_info.json (from info)
- /opt/ovs_ctrl/package
- ceph (from package)
- ovs (from package)
- client.py (from package)
- ovs_ctrl.py (from package)
- path_ctrl.py (from package)
To start niiceph-ovsctrl as a systemd service, use the ovs_ctrl.service file located in the setup directory.
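Installing and starting the unit might look like the following sketch; the location of the setup directory is an assumption, so adjust the source path to wherever ovs_ctrl.service was delivered.

```shell
# Install the unit file (source path is an example) and start the service.
sudo cp ovs_ctrl.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now ovs_ctrl.service
```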
The following describes the settings in the ovs_ctrl.ini configuration file.
[server]
host = 0.0.0.0
port = 7301
[network_info]
ovs_bridge = br-ceph
public_network = 10.23.0.0/16
managed_netowrk = 10.22.0.0/16
client_if_name = ethxxxx
gw_outside_if_name = veth_outside_br
gw_inside_if_name = veth_inside_br
public_if_name = ethxxxx
public_vlan_id = 1001
local_if_name = LOCAL
bridgr_init_path = /opt/ovs_ctrl/bin/bridge_init.sh
ceph_gw_pass = false
mtu = 9000
mon_port_no = 3300,6789
[node_info]
node_info_filename = /opt/ovs_ctrl/info/node_info.json
[ceph_info]
ceph_conf_path = /etc/ceph/ceph.conf
| Item | Value | Description |
|---|---|---|
| ovs_bridge | br-ceph | OVS bridge name |
| public_network | 10.23.0.0/16 | Subnet of the public network connected to clients |
| managed_netowrk | 10.22.0.0/16 | Subnet of the management network connected to the Admin node |
| client_if_name | ethxxxx | Network interface name connected to the client |
| public_if_name | ethxxxx | Network interface name connected to the Admin network |
| public_vlan_id | 1001 | VLAN ID for the Admin network |
Call the ovs_ctrl REST API to register the IP addresses of the Admin node and the nodes comprising the Ceph cluster with ovs_ctrl.
Registering this information enables communication between the Admin node and the nodes comprising the Ceph cluster.
Create a JSON file containing the following information for the nodes to be registered.
/tmp/node_infos.json
[
    {
        "CephIp" : "10.23.249.100",
        "ManageIp" : "10.22.254.100",
        "HostName" : "ceph-admin1"
    },
    {
        "CephIp" : "10.23.12.101",
        "ManageIp" : "10.22.12.101",
        "HostName" : "ceph-node-001"
    },
    {
        "CephIp" : "10.23.12.102",
        "ManageIp" : "10.22.12.102",
        "HostName" : "ceph-node-002"
    }
]
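A malformed file will cause the registration call to fail, so it can help to validate the JSON before posting it, for example:

```shell
# Validate the node list; python3 -m json.tool exits non-zero on invalid JSON.
python3 -m json.tool /tmp/node_infos.json > /dev/null && echo "node_infos.json is valid"
```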
| Item | Description |
|---|---|
| CephIp | Public Network IP Address |
| ManageIp | Operational Network IP Address |
| HostName | Node name |
ManageIp is auxiliary information and is not used for OVS network control.
Use the curl command to call the ovs_ctrl REST API and register the IP addresses of the Admin node and nodes comprising the Ceph cluster with ovs_ctrl.
curl -X POST -k http://[ovs_ctrl node IP]:7301/ceph/nodes/update -H 'accept: application/json' -H 'Content-Type: application/json' -d @/tmp/node_infos.json
This IP address registration must be performed for all ovs_ctrl nodes within the Ceph cluster.
The following is the procedure for registering the Ceph user information subject to multi-factor authentication.
For the Ceph user name used in multi-factor authentication, follow the naming convention below and prefix the user name with the VLAN ID.
ceph_gateway extracts the VLAN ID from the Ceph user name included in the MON authentication request and uses it for multi-factor authentication verification.
Naming Convention:
[VLAN ID]_[name]
Ex: 1002_user1
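As an illustration of the naming convention, the VLAN ID and base name can be split at the first underscore; this is only a sketch of the parsing, and ceph_gateway's actual implementation may differ.

```shell
# Split a [VLAN ID]_[name] user name at the first underscore.
user="1002_user1"
vlan_id="${user%%_*}"   # text before the first underscore
name="${user#*_}"       # text after the first underscore
echo "VLAN ID: ${vlan_id}, name: ${name}"
```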
Create a JSON file containing the Ceph user information to be registered as follows.
/tmp/user_info.json
{
"UserInfo": [
{"Id": "1002_user1", "Pool": ["1002_user1_rbd_pool"]}
]
}
| Item | Description |
|---|---|
| Id | Ceph user name |
| Pool | List of Pool names referenced by the Ceph user |
Use the curl command to call the ovs_ctrl REST API and register the multi-factor authentication user.
curl -X PUT -k http://[ovs_ctrl node IP]:7301/clientinfo/[VLAN ID]/[Ceph Client IP] -H 'accept: application/json' -H 'Content-Type: application/json' -d @/tmp/user_info.json
Set the VLAN ID in the REST API to the VLAN ID of the tenant to which the Ceph client node belongs, and set the Ceph Client IP to the IP address of the Ceph client node.
This REST API call to register Ceph users is performed on only one ovs_ctrl instance.
Afterwards, call the following REST API on all ovs_ctrl instances running on nodes within the Ceph cluster.
Calling this REST API enables the registered Ceph users to be authenticated as targets for multi-factor authentication.
curl -s -X POST -k http://[ovs_ctrl node IP]:7301/clientinfos/sync -H 'accept: application/json' -H 'Content-Type: application/json'
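Since the sync call must reach every ovs_ctrl instance, a simple loop over the node addresses can be used. The IP addresses below are the example ManageIp values from the node registration step; substitute whichever addresses reach each node's ovs_ctrl in your environment.

```shell
# Call the sync API on every ovs_ctrl node (addresses are examples only).
for ip in 10.22.254.100 10.22.12.101 10.22.12.102; do
  curl -s -X POST -k "http://${ip}:7301/clientinfos/sync" \
       -H 'accept: application/json' -H 'Content-Type: application/json'
done
```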

