---

copyright:
  years: 2014, 2017
lastupdated: "2017-10-24"

---

{:new_window: target="_blank"} {:shortdesc: .shortdesc} {:screen: .screen} {:pre: .pre} {:codeblock: .codeblock} {:table: .aria-labeledby="caption"} {:tip: .tip} {:download: .download}

# Setting up clusters

{: #cs_cluster}

Design your cluster setup for maximum availability and capacity. {:shortdesc}

Before you begin, review the options for highly available cluster configurations.

*Figure: Stages of high availability for a cluster*


## Creating clusters with the GUI

{: #cs_cluster_ui}

A Kubernetes cluster is a set of worker nodes that are organized into a network. The purpose of the cluster is to define a set of resources, nodes, networks, and storage devices that keep applications highly available. Before you can deploy an app, you must create a cluster and set the definitions for the worker nodes in that cluster. {:shortdesc}

For {{site.data.keyword.Bluemix_notm}} Dedicated users, see Creating Kubernetes clusters from the GUI in {{site.data.keyword.Bluemix_notm}} Dedicated (Closed Beta) instead.

To create a cluster:

  1. In the catalog, select Kubernetes Cluster.
  2. Select a type of cluster plan. You can choose either Lite or Pay-As-You-Go. With the Pay-As-You-Go plan, you can provision a standard cluster with features like multiple worker nodes for a highly available environment.
  3. Configure your cluster details.
    1. Give your cluster a name, choose a version of Kubernetes, and select a location in which to deploy. For the best performance, select the location that is physically closest to you. If you select a location outside your country, keep in mind that you might need legal authorization before data can be physically stored in a foreign country.
    2. Select a type of machine and specify the number of worker nodes that you need. The machine type defines the amount of virtual CPU and memory that is set up in each worker node and made available to the containers.
      • The micro machine type is the smallest option.
      • A balanced machine type has an equal amount of memory assigned to each CPU, which optimizes performance.
    3. Select a Public and Private VLAN from your IBM Bluemix Infrastructure (SoftLayer) account. Both VLANs allow communication between worker nodes, but the public VLAN also allows communication with the IBM-managed Kubernetes master. You can use the same VLAN for multiple clusters. Note: If you choose not to select a public VLAN, you must configure an alternative networking solution.
    4. Select a type of hardware. Shared is sufficient for most situations.
      • Dedicated: Your physical resources are completely isolated from other IBM customers.
      • Shared: Your physical resources are hosted on the same hardware as other IBM customers' resources.
  4. Click Create cluster. You can see the progress of the worker node deployment in the Worker nodes tab. When the deployment is done, you can see that your cluster is ready in the Overview tab. Note: Every worker node is assigned a unique worker node ID and domain name that must not be changed manually after the cluster is created. Changing the ID or domain name prevents the Kubernetes master from managing your cluster.

### What's next?

When the cluster is up and running, you can check out the following tasks:

## Creating clusters with the GUI in {{site.data.keyword.Bluemix_notm}} Dedicated (Closed Beta)

{: #creating_ui_dedicated}

  1. Log in to the [{{site.data.keyword.Bluemix_notm}} Public console](https://console.bluemix.net) with your IBMid.
  2. From the account menu, select your {{site.data.keyword.Bluemix_notm}} Dedicated account. The console is updated with the services and information for your {{site.data.keyword.Bluemix_notm}} Dedicated instance.
  3. From the catalog, select Containers and click Kubernetes cluster.
  4. Enter a Cluster Name.
  5. Select a Machine type. The machine type defines the amount of virtual CPU and memory that is set up in each worker node and that is available for all the containers that you deploy in your nodes.
    • The micro machine type is the smallest option.
    • A balanced machine type has an equal amount of memory assigned to each CPU, which optimizes performance.
  6. Choose the Number of worker nodes that you need. Select 3 to ensure high availability of your cluster.
  7. Click Create Cluster. The details for the cluster open, but the worker nodes in the cluster take a few minutes to provision. In the Worker nodes tab, you can see the progress of the worker node deployment. When the worker nodes are ready, the state changes to Ready.

### What's next?

When the cluster is up and running, you can check out the following tasks:


## Creating clusters with the CLI

{: #cs_cluster_cli}

A cluster is a set of worker nodes that are organized into a network. The purpose of the cluster is to define a set of resources, nodes, networks, and storage devices that keep applications highly available. Before you can deploy an app, you must create a cluster and set the definitions for the worker nodes in that cluster. {:shortdesc}

For {{site.data.keyword.Bluemix_notm}} Dedicated users, see Creating Kubernetes clusters from the CLI in {{site.data.keyword.Bluemix_notm}} Dedicated (Closed Beta) instead.

To create a cluster:

  1. Install the {{site.data.keyword.Bluemix_notm}} CLI and the {{site.data.keyword.containershort_notm}} plug-in.

  2. Log in to the {{site.data.keyword.Bluemix_notm}} CLI. Enter your {{site.data.keyword.Bluemix_notm}} credentials when prompted. To specify a {{site.data.keyword.Bluemix_notm}} region, include the API endpoint.

    bx login
    

    {: pre}

    Note: If you have a federated ID, use bx login --sso to log in to the {{site.data.keyword.Bluemix_notm}} CLI. Enter your user name and use the provided URL in your CLI output to retrieve your one-time passcode. You know that you have a federated ID when the login fails without the --sso option and succeeds with it.
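    As a sketch, a region-specific login might look like the following. The `api.ng.bluemix.net` endpoint is assumed here as the US-South example of the `api.<region>.bluemix.net` pattern that appears later in this topic; verify the endpoint for your region.

    ```shell
    # Log in to a specific region by passing its API endpoint
    bx login -a api.ng.bluemix.net

    # Federated IDs must use single sign-on instead
    bx login -a api.ng.bluemix.net --sso
    ```
    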

  3. If you have multiple {{site.data.keyword.Bluemix_notm}} accounts, select the account where you want to create your Kubernetes cluster.

  4. Specify the {{site.data.keyword.Bluemix_notm}} organization and space where you want to create your Kubernetes cluster.

    bx target --cf
    

    {: pre}

    Note: Clusters are specific to an account and an organization, but are independent of a {{site.data.keyword.Bluemix_notm}} space. For example, if you create a cluster in your organization while targeting the test space, you can still work with that cluster if you later target the dev space.

  5. If you want to create or access Kubernetes clusters in a region other than the {{site.data.keyword.Bluemix_notm}} region that you selected earlier, specify the {{site.data.keyword.containershort_notm}} region API endpoint.

    Note: If you want to create a cluster in US East, you must specify the US East container region API endpoint using the bx cs init --host https://us-east.containers.bluemix.net command.

  6. Create a cluster.

    1. Review the locations that are available. The locations that are shown depend on the {{site.data.keyword.containershort_notm}} region that you are logged in to.

      bx cs locations
      

      {: pre}

      The CLI output lists the locations that are available in the container region.

    2. Choose a location and review the machine types available in that location. The machine type specifies the virtual compute resources that are available to each worker node.

      bx cs machine-types <location>
      

      {: pre}

      Getting machine types list...
      OK
      Machine Types
      Name         Cores   Memory   Network Speed   OS             Storage   Server Type
      u1c.2x4      2       4GB      1000Mbps        UBUNTU_16_64   100GB     virtual
      b1c.4x16     4       16GB     1000Mbps        UBUNTU_16_64   100GB     virtual
      b1c.16x64    16      64GB     1000Mbps        UBUNTU_16_64   100GB     virtual
      b1c.32x128   32      128GB    1000Mbps        UBUNTU_16_64   100GB     virtual
      b1c.56x242   56      242GB    1000Mbps        UBUNTU_16_64   100GB     virtual
      

      {: screen}

    3. Check whether a public and a private VLAN already exist in your IBM Bluemix Infrastructure (SoftLayer) account for this location.

      bx cs vlans <location>
      

      {: pre}

      ID        Name   Number   Type      Router
      1519999   vlan   1355     private   bcr02a.dal10
      1519898   vlan   1357     private   bcr02a.dal10
      1518787   vlan   1252     public    fcr02a.dal10
      1518888   vlan   1254     public    fcr02a.dal10
      

      {: screen}

      If a public and private VLAN already exist, note the matching routers. Private VLAN routers always begin with bcr (back-end router) and public VLAN routers always begin with fcr (front-end router). The number and letter combination after those prefixes must match to use those VLANs when creating a cluster. In the example output, any of the private VLANs can be used with any of the public VLANs because the routers all include 02a.dal10.
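      Using the example listing above, a matching pair could be passed to the cluster-create command in the next step like this. The VLAN IDs are the illustrative values from the sample output, not real VLANs:

      ```shell
      # Both routers end in 02a.dal10, so these VLANs can be paired:
      #   private 1519999 -> bcr02a.dal10
      #   public  1518787 -> fcr02a.dal10
      bx cs cluster-create --location dal10 \
        --public-vlan 1518787 --private-vlan 1519999 \
        --machine-type u1c.2x4 --workers 3 --name my_cluster
      ```
      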

    4. Run the cluster-create command. You can choose between a lite cluster, which includes one worker node set up with 2 vCPU and 4 GB memory, or a standard cluster, which can include as many worker nodes as you choose in your IBM Bluemix Infrastructure (SoftLayer) account. When you create a standard cluster, by default the worker node hardware is shared with other IBM customers and billed by hours of usage.
      Example for a standard cluster:

      bx cs cluster-create --location dal10 --public-vlan <public_vlan_id> --private-vlan <private_vlan_id> --machine-type u1c.2x4 --workers 3 --name <cluster_name>
      

      {: pre}

      Example for a lite cluster:

      bx cs cluster-create --name my_cluster
      

      {: pre}

      Table 1. Understanding this command's components

      • cluster-create: The command to create a cluster in your {{site.data.keyword.Bluemix_notm}} organization.
      • --location <location>: Replace <location> with the {{site.data.keyword.Bluemix_notm}} location ID where you want to create your cluster. [Available locations](cs_regions.html#locations) depend on the {{site.data.keyword.containershort_notm}} region that you are logged in to.
      • --machine-type <machine_type>: If you are creating a standard cluster, choose a machine type. The machine type specifies the virtual compute resources that are available to each worker node. For more information, review [Comparison of lite and standard clusters for {{site.data.keyword.containershort_notm}}](cs_planning.html#cs_planning_cluster_type). For lite clusters, you do not have to define a machine type.
      • --public-vlan <public_vlan_id>: For lite clusters, you do not have to define a public VLAN; your lite cluster is automatically connected to a public VLAN that is owned by IBM. For a standard cluster, if you already have a public VLAN set up in your IBM Bluemix Infrastructure (SoftLayer) account for that location, enter the ID of the public VLAN. If you do not have both a public and a private VLAN in your account, do not specify this option; {{site.data.keyword.containershort_notm}} automatically creates a public VLAN for you.
      • --private-vlan <private_vlan_id>: For lite clusters, you do not have to define a private VLAN; your lite cluster is automatically connected to a private VLAN that is owned by IBM. For a standard cluster, if you already have a private VLAN set up in your IBM Bluemix Infrastructure (SoftLayer) account for that location, enter the ID of the private VLAN. If you do not have both a public and a private VLAN in your account, do not specify this option; {{site.data.keyword.containershort_notm}} automatically creates a private VLAN for you.
      • --name <name>: Replace <name> with a name for your cluster.
      • --workers <number>: The number of worker nodes to include in the cluster. If the --workers option is not specified, 1 worker node is created.

      Note: Private VLAN routers always begin with bcr (back-end router) and public VLAN routers always begin with fcr (front-end router). The number and letter combination after those prefixes must match to use those VLANs when creating a cluster.
  7. Verify that the creation of the cluster was requested.

    bx cs clusters
    

    {: pre}

    Note: It can take up to 15 minutes for the worker node machines to be ordered, and for the cluster to be set up and provisioned in your account.

    When the provisioning of your cluster is completed, the state of your cluster changes to deployed.

    Name         ID                                   State      Created          Workers
    my_cluster   paf97e8843e29941b49c598f516de72101   deployed   20170201162433   1
    

    {: screen}

  8. Check the status of the worker nodes.

    bx cs workers <cluster>
    

    {: pre}

    When the worker nodes are ready, the state changes to normal and the status is Ready. When the node status is Ready, you can then access the cluster.

    Note: Every worker node is assigned a unique worker node ID and domain name that must not be changed manually after the cluster is created. Changing the ID or domain name prevents the Kubernetes master from managing your cluster.

    ID                                                  Public IP        Private IP     Machine Type   State      Status
    prod-dal10-pa8dfcc5223804439c87489886dbbc9c07-w1   169.47.223.113   10.171.42.93   free           normal    Ready
    

    {: screen}

  9. Set the cluster you created as the context for this session. Complete these configuration steps every time that you work with your cluster.

    1. Get the command to set the environment variable and download the Kubernetes configuration files.

      bx cs cluster-config <cluster_name_or_id>
      

      {: pre}

      When the download of the configuration files is finished, a command is displayed that you can use to set the path to the local Kubernetes configuration file as an environment variable.

      Example for OS X:

      export KUBECONFIG=/Users/<user_name>/.bluemix/plugins/container-service/clusters/<cluster_name>/kube-config-prod-dal10-<cluster_name>.yml
      

      {: screen}

    2. Copy and paste the command that is displayed in your terminal to set the KUBECONFIG environment variable.

    3. Verify that the KUBECONFIG environment variable is set properly.

      Example for OS X:

      echo $KUBECONFIG
      

      {: pre}

      Output:

      /Users/<user_name>/.bluemix/plugins/container-service/clusters/<cluster_name>/kube-config-prod-dal10-<cluster_name>.yml
      
      

      {: screen}

  10. Launch your Kubernetes dashboard with the default port 8001.

    1. Set the proxy with the default port number.

      kubectl proxy
      

      {: pre}

      Starting to serve on 127.0.0.1:8001
      

      {: screen}

    2. Open the following URL in a web browser to see the Kubernetes dashboard.

      http://localhost:8001/ui
      

      {: codeblock}

### What's next?

## Creating clusters with the CLI in {{site.data.keyword.Bluemix_notm}} Dedicated (Closed Beta)

{: #creating_cli_dedicated}

  1. Install the {{site.data.keyword.Bluemix_notm}} CLI and the {{site.data.keyword.containershort_notm}} plug-in.

  2. Log in to the public endpoint for {{site.data.keyword.containershort_notm}}. Enter your {{site.data.keyword.Bluemix_notm}} credentials and select the {{site.data.keyword.Bluemix_notm}} Dedicated account when prompted.

    bx login -a api.<region>.bluemix.net
    

    {: pre}

    Note: If you have a federated ID, use bx login --sso to log in to the {{site.data.keyword.Bluemix_notm}} CLI. Enter your user name and use the provided URL in your CLI output to retrieve your one-time passcode. You know that you have a federated ID when the login fails without the --sso option and succeeds with it.

  3. Create a cluster with the cluster-create command. When you create a standard cluster, the hardware of the worker node is billed by hours of usage.

    Example:

    bx cs cluster-create --location <location> --machine-type <machine-type> --name <cluster_name> --workers <number>
    

    {: pre}

    Table 2. Understanding this command's components

    • cluster-create: The command to create a cluster in your {{site.data.keyword.Bluemix_notm}} organization.
    • --location <location>: Replace <location> with the {{site.data.keyword.Bluemix_notm}} location ID where you want to create your cluster. [Available locations](cs_regions.html#locations) depend on the {{site.data.keyword.containershort_notm}} region that you are logged in to.
    • --machine-type <machine_type>: If you are creating a standard cluster, choose a machine type. The machine type specifies the virtual compute resources that are available to each worker node. For more information, review [Comparison of lite and standard clusters for {{site.data.keyword.containershort_notm}}](cs_planning.html#cs_planning_cluster_type). For lite clusters, you do not have to define a machine type.
    • --name <name>: Replace <name> with a name for your cluster.
    • --workers <number>: The number of worker nodes to include in the cluster. If the --workers option is not specified, 1 worker node is created.
  4. Verify that the creation of the cluster was requested.

    bx cs clusters
    

    {: pre}

    Note: It can take up to 15 minutes for the worker node machines to be ordered, and for the cluster to be set up and provisioned in your account.

    When the provisioning of your cluster is completed, the state of your cluster changes to deployed.

    Name         ID                                   State      Created          Workers
    my_cluster   paf97e8843e29941b49c598f516de72101   deployed   20170201162433   1
    

    {: screen}

  5. Check the status of the worker nodes.

    bx cs workers <cluster>
    

    {: pre}

    When the worker nodes are ready, the state changes to normal and the status is Ready. When the node status is Ready, you can then access the cluster.

    ID                                                  Public IP        Private IP     Machine Type   State      Status
    prod-dal10-pa8dfcc5223804439c87489886dbbc9c07-w1   169.47.223.113   10.171.42.93   free           normal    Ready
    

    {: screen}

  6. Set the cluster you created as the context for this session. Complete these configuration steps every time that you work with your cluster.

    1. Get the command to set the environment variable and download the Kubernetes configuration files.

      bx cs cluster-config <cluster_name_or_id>
      

      {: pre}

      When the download of the configuration files is finished, a command is displayed that you can use to set the path to the local Kubernetes configuration file as an environment variable.

      Example for OS X:

      export KUBECONFIG=/Users/<user_name>/.bluemix/plugins/container-service/clusters/<cluster_name>/kube-config-prod-dal10-<cluster_name>.yml
      

      {: screen}

    2. Copy and paste the command that is displayed in your terminal to set the KUBECONFIG environment variable.

    3. Verify that the KUBECONFIG environment variable is set properly.

      Example for OS X:

      echo $KUBECONFIG
      

      {: pre}

      Output:

      /Users/<user_name>/.bluemix/plugins/container-service/clusters/<cluster_name>/kube-config-prod-dal10-<cluster_name>.yml
      
      

      {: screen}

  7. Access your Kubernetes dashboard with the default port 8001.

    1. Set the proxy with the default port number.

      kubectl proxy
      

      {: pre}

      Starting to serve on 127.0.0.1:8001
      

      {: screen}

    2. Open the following URL in a web browser to see the Kubernetes dashboard.

      http://localhost:8001/ui
      

      {: codeblock}

### What's next?


## Using private and public image registries

{: #cs_apps_images}

A Docker image is the basis for every container that you create. An image is created from a Dockerfile, which is a file that contains instructions to build the image. A Dockerfile might reference build artifacts in its instructions that are stored separately, such as an app, the app's configuration, and its dependencies. Images are typically stored in a registry that is either publicly accessible (public registry) or set up with limited access for a small group of users (private registry). {:shortdesc}

Review the following options to find information about how to set up an image registry and how to use an image from the registry.

### Accessing a namespace in {{site.data.keyword.registryshort_notm}} to work with IBM-provided images and your own private Docker images

{: #bx_registry_default}

You can deploy containers to your cluster from an IBM-provided public image or a private image that is stored in your namespace in {{site.data.keyword.registryshort_notm}}.

Before you begin:

  1. Set up a namespace in {{site.data.keyword.registryshort_notm}} on {{site.data.keyword.Bluemix_notm}} Public or {{site.data.keyword.Bluemix_notm}} Dedicated and push images to this namespace.
  2. Create a cluster.
  3. Target your CLI to your cluster.

When you create a cluster, a non-expiring registry token is automatically created for the cluster. This token authorizes read-only access to any of the namespaces that you set up in {{site.data.keyword.registryshort_notm}}, so that you can work with IBM-provided public images and your own private Docker images. Tokens must be stored in a Kubernetes imagePullSecret so that they are accessible to a Kubernetes cluster when you deploy a containerized app. When your cluster is created, {{site.data.keyword.containershort_notm}} automatically stores this token in a Kubernetes imagePullSecret. The imagePullSecret is added to the default Kubernetes namespace, the default list of secrets in the ServiceAccount for that namespace, and the kube-system namespace.

Note: By using this initial setup, you can deploy containers from any image that is available in a namespace in your {{site.data.keyword.Bluemix_notm}} account into the default namespace of your cluster. If you want to deploy a container into other namespaces of your cluster, or if you want to use an image that is stored in another {{site.data.keyword.Bluemix_notm}} region or in another {{site.data.keyword.Bluemix_notm}} account, you must create your own imagePullSecret for your cluster.
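To confirm that the automatically created imagePullSecret exists in your cluster, you can list the secrets in the default namespace after targeting your cluster (the exact secret name varies by cluster):

```shell
# List secrets in the default namespace; the automatically created
# registry imagePullSecret appears here after cluster creation.
kubectl get secrets --namespace default
```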

To deploy a container into the default namespace of your cluster, create a configuration file.

  1. Create a deployment configuration file that is named mydeployment.yaml.

  2. Define the deployment and the image that you want to use from your namespace in {{site.data.keyword.registryshort_notm}}.

    To use a private image from a namespace in {{site.data.keyword.registryshort_notm}}:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: ibmliberty-deployment
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: ibmliberty
        spec:
          containers:
          - name: ibmliberty
            image: registry.<region>.bluemix.net/<namespace>/<my_image>:<tag>
    

    {: codeblock}

    Tip: To retrieve your namespace information, run bx cr namespace-list.

  3. Create the deployment in your cluster.

    kubectl apply -f mydeployment.yaml
    

    {: pre}

    Tip: You can also deploy an existing configuration file, such as one for the IBM-provided public images. This example uses the ibmliberty image in the US-South region.

    kubectl apply -f https://raw.githubusercontent.com/IBM-{{site.data.keyword.Bluemix_notm}}/kube-samples/master/deploy-apps-clusters/deploy-ibmliberty.yaml
    

    {: pre}

### Deploying images to other Kubernetes namespaces or accessing images in other {{site.data.keyword.Bluemix_notm}} regions and accounts

{: #bx_registry_other}

You can deploy containers to other Kubernetes namespaces, use images that are stored in other {{site.data.keyword.Bluemix_notm}} regions or accounts, or use images that are stored in {{site.data.keyword.Bluemix_notm}} Dedicated by creating your own imagePullSecret.

Before you begin:

  1. Set up a namespace in {{site.data.keyword.registryshort_notm}} on {{site.data.keyword.Bluemix_notm}} Public or {{site.data.keyword.Bluemix_notm}} Dedicated and push images to this namespace.
  2. Create a cluster.
  3. Target your CLI to your cluster.

To create your own imagePullSecret:

Note: ImagePullSecrets are valid only for the Kubernetes namespace that they were created for. Repeat these steps for every namespace where you want to deploy containers. Images from Docker Hub do not require an imagePullSecret.

  1. If you do not have a token, create a token for the registry that you want to access.

  2. List tokens in your {{site.data.keyword.Bluemix_notm}} account.

    bx cr token-list
    

    {: pre}

  3. Note the token ID that you want to use.

  4. Retrieve the value for your token. Replace <token_id> with the ID of the token that you retrieved in the previous step.

    bx cr token-get <token_id>
    

    {: pre}

    Your token value is displayed in the Token field of your CLI output.

  5. Create the Kubernetes secret to store your token information.

    kubectl --namespace <kubernetes_namespace> create secret docker-registry <secret_name>  --docker-server=<registry_url> --docker-username=token --docker-password=<token_value> --docker-email=<docker_email>
    

    {: pre}

    Table 3. Understanding this command's components

    • --namespace <kubernetes_namespace>: Required. The Kubernetes namespace of your cluster where you want to use the secret and deploy containers to. Run kubectl get namespaces to list all namespaces in your cluster.
    • <secret_name>: Required. The name that you want to use for your imagePullSecret.
    • --docker-server <registry_url>: Required. The URL to the image registry where your namespace is set up.
      • For namespaces that are set up in US-South and US-East: registry.ng.bluemix.net
      • For namespaces that are set up in UK-South: registry.eu-gb.bluemix.net
      • For namespaces that are set up in EU-Central (Frankfurt): registry.eu-de.bluemix.net
      • For namespaces that are set up in Australia (Sydney): registry.au-syd.bluemix.net
      • For namespaces that are set up in {{site.data.keyword.Bluemix_notm}} Dedicated: registry.<dedicated_domain>
    • --docker-username <docker_username>: Required. The user name to log in to your private registry. For {{site.data.keyword.registryshort_notm}}, the user name is set to token.
    • --docker-password <token_value>: Required. The value of the registry token that you retrieved earlier.
    • --docker-email <docker_email>: Required. If you have one, enter your Docker email address. If you do not, enter a fictional email address, such as [email protected]. An email address is mandatory to create a Kubernetes secret, but is not used after creation.
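    For example, a filled-in version of the command for a namespace in the US-South registry might look like the following. The secret name and email are illustrative placeholders; substitute your real token value for <token_value>:

    ```shell
    # Hypothetical example: store a US-South registry token as an
    # imagePullSecret named my-registry-secret in the default namespace.
    kubectl --namespace default create secret docker-registry my-registry-secret \
      --docker-server=registry.ng.bluemix.net \
      --docker-username=token \
      --docker-password=<token_value> \
      --docker-email=nobody@example.com
    ```
    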
  6. Verify that the secret was created successfully. Replace <kubernetes_namespace> with the name of the namespace where you created the imagePullSecret.

    kubectl get secrets --namespace <kubernetes_namespace>
    

    {: pre}

  7. Create a pod that references the imagePullSecret.

    1. Create a pod configuration file that is named mypod.yaml.

    2. Define the pod and the imagePullSecret that you want to use to access the private {{site.data.keyword.Bluemix_notm}} registry.

      A private image from a namespace:

      apiVersion: v1
      kind: Pod
      metadata:
        name: <pod_name>
      spec:
        containers:
          - name: <container_name>
            image: registry.<region>.bluemix.net/<my_namespace>/<my_image>:<tag>
        imagePullSecrets:
          - name: <secret_name>
      

      {: codeblock}

      A {{site.data.keyword.Bluemix_notm}} public image:

      apiVersion: v1
      kind: Pod
      metadata:
        name: <pod_name>
      spec:
        containers:
          - name: <container_name>
            image: registry.<region>.bluemix.net/
        imagePullSecrets:
          - name: <secret_name>
      

      {: codeblock}

      Table 4. Understanding the YAML file components

      • <pod_name>: The name of the pod that you want to create.
      • <container_name>: The name of the container that you want to deploy to your cluster.
      • <my_namespace>: The namespace where your image is stored. To list available namespaces, run `bx cr namespace-list`.
      • <my_image>: The name of the image that you want to use. To list available images in a {{site.data.keyword.Bluemix_notm}} account, run `bx cr image-list`.
      • <tag>: The version of the image that you want to use. If no tag is specified, the image that is tagged latest is used by default.
      • <secret_name>: The name of the imagePullSecret that you created earlier.
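      Putting the pieces together, a filled-in version of the private-image pod definition might look like the following sketch. The namespace, secret name, and tag are illustrative assumptions; only the ibmliberty image name and the US-South registry URL come from the examples in this topic:

      ```yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: my-liberty-pod            # <pod_name>
      spec:
        containers:
          - name: ibmliberty            # <container_name>
            # mynamespace and the latest tag are hypothetical example values
            image: registry.ng.bluemix.net/mynamespace/ibmliberty:latest
        imagePullSecrets:
          - name: my-registry-secret    # <secret_name>
      ```
      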
  8. Save your changes.

  9. Create the deployment in your cluster.

    kubectl apply -f mypod.yaml
    

    {: pre}

### Accessing public images from Docker Hub

{: #dockerhub}

You can use any public image that is stored in Docker Hub to deploy a container to your cluster without any additional configuration.

Before you begin:

  1. Create a cluster.
  2. Target your CLI to your cluster.

Create a deployment configuration file.

  1. Create a configuration file that is named mydeployment.yaml.

  2. Define the deployment and the public image from Docker Hub that you want to use. The following configuration file uses the public NGINX image that is available on Docker Hub.

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
    

    {: codeblock}

  3. Create the deployment in your cluster.

    kubectl apply -f mydeployment.yaml
    

    {: pre}

    Tip: Alternatively, deploy an existing configuration file. The following example uses the same public NGINX image, but applies it directly to your cluster.

    kubectl apply -f https://raw.githubusercontent.com/IBM-{{site.data.keyword.Bluemix_notm}}/kube-samples/master/deploy-apps-clusters/deploy-nginx.yaml
    

    {: pre}

### Accessing private images that are stored in other private registries

{: #private_registry}

If you already have a private registry that you want to use, you must store the registry credentials in a Kubernetes imagePullSecret and reference this secret in your configuration file.

Before you begin:

  1. Create a cluster.
  2. Target your CLI to your cluster.

To create an imagePullSecret:

Note: ImagePullSecrets are valid only for the Kubernetes namespace that they were created for. Repeat these steps for every namespace where you want to deploy containers from an image in your private registry.

  1. Create the Kubernetes secret to store your private registry credentials.

    kubectl --namespace <kubernetes_namespace> create secret docker-registry <secret_name>  --docker-server=<registry_url> --docker-username=<docker_username> --docker-password=<docker_password> --docker-email=<docker_email>
    

    {: pre}

    Table 5. Understanding this command's components

    • --namespace <kubernetes_namespace>: Required. The Kubernetes namespace of your cluster where you want to use the secret and deploy containers to. Run kubectl get namespaces to list all namespaces in your cluster.
    • <secret_name>: Required. The name that you want to use for your imagePullSecret.
    • --docker-server <registry_url>: Required. The URL to the registry where your private images are stored.
    • --docker-username <docker_username>: Required. The user name to log in to your private registry.
    • --docker-password <docker_password>: Required. The password to log in to your private registry.
    • --docker-email <docker_email>: Required. If you have one, enter your Docker email address. If you do not, enter a fictional email address, such as [email protected]. An email address is mandatory to create a Kubernetes secret, but is not used after creation.
  2. Verify that the secret was created successfully. Replace <kubernetes_namespace> with the name of the namespace where you created the imagePullSecret.

    kubectl get secrets --namespace <kubernetes_namespace>
    

    {: pre}

  3. Create a pod that references the imagePullSecret.

    1. Create a pod configuration file that is named mypod.yaml.

    2. Define the pod and the imagePullSecret that you want to use to access images in your private registry:

      apiVersion: v1
      kind: Pod
      metadata:
        name: <pod_name>
      spec:
        containers:
          - name: <container_name>
            image: <my_image>:<tag>
        imagePullSecrets:
          - name: <secret_name>
      

      {: codeblock}

      | Component | Description |
      |-----------|-------------|
      | `<pod_name>` | The name of the pod that you want to create. |
      | `<container_name>` | The name of the container that you want to deploy to your cluster. |
      | `<my_image>` | The full path to the image in your private registry that you want to use. |
      | `<tag>` | The version of the image that you want to use. If no tag is specified, the image that is tagged `latest` is used by default. |
      | `<secret_name>` | The name of the imagePullSecret that you created earlier. |
      {: caption="Table 6. Understanding the YAML file components" caption-side="top"}
  4. Save your changes.

  5. Create the pod in your cluster.

    kubectl apply -f mypod.yaml
    

    {: pre}


Adding {{site.data.keyword.Bluemix_notm}} services to clusters

{: #cs_cluster_service}

Add an existing {{site.data.keyword.Bluemix_notm}} service instance to your cluster to enable your cluster users to access and use the {{site.data.keyword.Bluemix_notm}} service when they deploy an app to the cluster. {:shortdesc}

Before you begin:

  1. Target your CLI to your cluster.
  2. Request an instance of the {{site.data.keyword.Bluemix_notm}} service in your space. Note: To create an instance of a service in the Washington DC location, you must use the CLI.
  3. For {{site.data.keyword.Bluemix_notm}} Dedicated users, see Adding {{site.data.keyword.Bluemix_notm}} services to clusters in {{site.data.keyword.Bluemix_notm}} Dedicated (Closed Beta) instead.

Note:

    • You can only add {{site.data.keyword.Bluemix_notm}} services that support service keys. If the service does not support service keys, see [Enabling external apps to use {{site.data.keyword.Bluemix_notm}} services](/docs/services/reqnsi.html#req_instance).
    • The cluster and the worker nodes must be deployed fully before you can add a service.

To add a service:

  1. List all existing services in your {{site.data.keyword.Bluemix_notm}} space.

    bx service list
    

    {: pre}

    Example CLI output:

    name                      service           plan    bound apps   last operation
    <service_instance_name>   <service_name>    spark                create succeeded
    

    {: screen}

  2. Note the name of the service instance that you want to add to your cluster.

  3. Identify the cluster namespace that you want to use to add your service. Choose between the following options.

    • List existing namespaces and choose a namespace that you want to use.

      kubectl get namespaces
      

      {: pre}

    • Create a new namespace in your cluster.

      kubectl create namespace <namespace_name>
      

      {: pre}

  4. Add the service to your cluster.

    bx cs cluster-service-bind <cluster_name_or_id> <namespace> <service_instance_name>
    

    {: pre}

    When the service is successfully added to your cluster, a cluster secret is created that holds the credentials of your service instance. Example CLI output:

    bx cs cluster-service-bind mycluster mynamespace cleardb
    Binding service instance to namespace...
    OK
    Namespace:       mynamespace
    Secret name:     binding-<service_instance_name>
    

    {: screen}

  5. Verify that the secret was created in your cluster namespace.

    kubectl get secrets --namespace=<namespace>
    

    {: pre}

To use the service in a pod that is deployed in the cluster, cluster users can access the service credentials of the {{site.data.keyword.Bluemix_notm}} service by mounting the Kubernetes secret as a secret volume to a pod.
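As a sketch of that pattern, the following pod mounts the binding secret as a volume so that the app can read the service credentials from a file. The pod name, image, and mount path are illustrative; the secret name follows the binding-<service_instance_name> pattern shown in the earlier output.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp                                # illustrative pod name
spec:
  containers:
  - name: myapp
    image: <my_image>:<tag>                  # your app image
    volumeMounts:
    - name: service-credentials
      mountPath: /etc/service-credentials    # illustrative path where the app reads credentials
      readOnly: true
  volumes:
  - name: service-credentials
    secret:
      secretName: binding-<service_instance_name>   # secret created by cluster-service-bind
```
{: codeblock}

Each key in the secret appears as a file under the mount path; typically the binding secret stores the service credentials as JSON that your app can parse at startup.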

Adding {{site.data.keyword.Bluemix_notm}} services to clusters in {{site.data.keyword.Bluemix_notm}} Dedicated (Closed Beta)

{: #binding_dedicated}

Note: The cluster and the worker nodes must be deployed fully before you can add a service.

  1. Set the path to your local {{site.data.keyword.Bluemix_notm}} Dedicated configuration file as the DEDICATED_BLUEMIX_CONFIG environment variable.

    export DEDICATED_BLUEMIX_CONFIG=<path_to_config_directory>
    

    {: pre}

  2. Set the same path defined above as the BLUEMIX_HOME environment variable.

    export BLUEMIX_HOME=$DEDICATED_BLUEMIX_CONFIG
    

    {: pre}

  3. Log in to the {{site.data.keyword.Bluemix_notm}} Dedicated environment where you want to create the service instance.

    bx login -a api.<dedicated_domain> -u <user> -p <password> -o <org> -s <space>
    

    {: pre}

  4. List the available services in the {{site.data.keyword.Bluemix_notm}} catalog.

    bx service offerings
    

    {: pre}

  5. Create an instance of the service you want to bind to the cluster.

    bx service create <service_name> <service_plan> <service_instance_name>
    

    {: pre}

  6. Verify that you created your service instance by listing all existing services in your {{site.data.keyword.Bluemix_notm}} space.

    bx service list
    

    {: pre}

    Example CLI output:

    name                      service           plan    bound apps   last operation
    <service_instance_name>   <service_name>    spark                create succeeded
    

    {: screen}

  7. Unset the BLUEMIX_HOME environment variable to return to using {{site.data.keyword.Bluemix_notm}} Public.

    unset BLUEMIX_HOME
    

    {: pre}

  8. Log in to the public endpoint for {{site.data.keyword.containershort_notm}} and target your CLI to the cluster in your {{site.data.keyword.Bluemix_notm}} Dedicated environment.

    1. Log in to the account by using the public endpoint for {{site.data.keyword.containershort_notm}}. Enter your {{site.data.keyword.Bluemix_notm}} credentials and select the {{site.data.keyword.Bluemix_notm}} Dedicated account when prompted.

      bx login -a api.ng.bluemix.net
      

      {: pre}

      Note: If you have a federated ID, use bx login --sso to log in to the {{site.data.keyword.Bluemix_notm}} CLI. Enter your user name and use the provided URL in your CLI output to retrieve your one-time passcode. You know you have a federated ID when the login fails without the --sso and succeeds with the --sso option.

    2. Get a list of available clusters and identify the name of the cluster to target in your CLI.

      bx cs clusters
      

      {: pre}

    3. Get the command to set the environment variable and download the Kubernetes configuration files.

      bx cs cluster-config <cluster_name_or_id>
      

      {: pre}

      When the download of the configuration files is finished, a command is displayed that you can use to set the path to the local Kubernetes configuration file as an environment variable.

      Example for OS X:

      export KUBECONFIG=/Users/<user_name>/.bluemix/plugins/container-service/clusters/<cluster_name>/kube-config-prod-dal10-<cluster_name>.yml
      

      {: screen}

    4. Copy and paste the command that is displayed in your terminal to set the KUBECONFIG environment variable.

  9. Identify the cluster namespace that you want to use to add your service. Choose between the following options.

    • List existing namespaces and choose a namespace that you want to use.

      kubectl get namespaces
      

      {: pre}

    • Create a new namespace in your cluster.

      kubectl create namespace <namespace_name>
      

      {: pre}

  10. Bind the service instance to your cluster.

    bx cs cluster-service-bind <cluster_name_or_id> <namespace> <service_instance_name>
    

    {: pre}


Managing cluster access

{: #cs_cluster_user}

You can grant access to your cluster to other users, so that they can access the cluster, manage the cluster, and deploy apps to the cluster. {:shortdesc}

Every user that works with {{site.data.keyword.containershort_notm}} must be assigned a service-specific user role in Identity and Access Management that determines what actions this user can perform. Identity and Access Management differentiates between the following access permissions.

  • {{site.data.keyword.containershort_notm}} access policies

    Access policies determine the cluster management actions that you can perform on a cluster, such as creating or removing clusters, and adding or removing extra worker nodes.

  • Cloud Foundry roles

    Every user must be assigned a Cloud Foundry user role. This role determines the actions that the user can perform on the {{site.data.keyword.Bluemix_notm}} account, such as inviting other users, or viewing the quota usage. To review the permissions of each role, see Cloud Foundry roles.

  • RBAC roles

    Every user who is assigned an {{site.data.keyword.containershort_notm}} access policy is automatically assigned an RBAC role. RBAC roles determine the actions that you can perform on Kubernetes resources inside the cluster. RBAC roles are set up for the default namespace only. The cluster administrator can add RBAC roles for other namespaces in the cluster. See Using RBAC Authorization in the Kubernetes documentation for more information.

Choose between the following actions to proceed:

Overview of required {{site.data.keyword.containershort_notm}} access policies and permissions

{: #access_ov}

Review the access policies and permissions that you can grant to users in your {{site.data.keyword.Bluemix_notm}} account.

| Access policy | Cluster management permissions | Kubernetes resource permissions |
|---------------|--------------------------------|---------------------------------|
| Role: Administrator<br>Service instances: all current service instances | Create a lite or standard cluster.<br>Set credentials for a {{site.data.keyword.Bluemix_notm}} account to access the IBM Bluemix Infrastructure (SoftLayer) portfolio.<br>Remove a cluster.<br>Assign and change {{site.data.keyword.containershort_notm}} access policies for other existing users in this account.<br>This role inherits permissions from the Editor, Operator, and Viewer roles for all clusters in this account. | RBAC role: cluster-admin<br>Read/write access to resources in every namespace.<br>Create roles within a namespace.<br>Access the Kubernetes dashboard.<br>Create an Ingress resource that makes apps publicly available. |
| Role: Administrator<br>Service instances: a specific cluster ID | Remove a specific cluster.<br>This role inherits permissions from the Editor, Operator, and Viewer roles for the selected cluster. | RBAC role: cluster-admin<br>Read/write access to resources in every namespace.<br>Create roles within a namespace.<br>Access the Kubernetes dashboard.<br>Create an Ingress resource that makes apps publicly available. |
| Role: Operator<br>Service instances: all current service instances or a specific cluster ID | Add additional worker nodes to a cluster.<br>Remove worker nodes from a cluster.<br>Reboot a worker node.<br>Reload a worker node.<br>Add a subnet to a cluster. | RBAC role: admin<br>Read/write access to resources inside the default namespace, but not to the namespace itself.<br>Create roles within a namespace. |
| Role: Editor<br>Service instances: all current service instances or a specific cluster ID | Bind a {{site.data.keyword.Bluemix_notm}} service to a cluster.<br>Unbind a {{site.data.keyword.Bluemix_notm}} service from a cluster.<br>Create a webhook.<br>Use this role for your app developers. | RBAC role: edit<br>Read/write access to resources inside the default namespace. |
| Role: Viewer<br>Service instances: all current service instances or a specific cluster ID | List a cluster.<br>View details for a cluster. | RBAC role: view<br>Read access to resources inside the default namespace.<br>No read access to Kubernetes secrets. |
| Cloud Foundry organization role: Manager | Add additional users to a {{site.data.keyword.Bluemix_notm}} account. | |
| Cloud Foundry space role: Developer | Create {{site.data.keyword.Bluemix_notm}} service instances.<br>Bind {{site.data.keyword.Bluemix_notm}} service instances to clusters. | |
{: caption="Table 7. Overview of required {{site.data.keyword.containershort_notm}} access policies and permissions" caption-side="top"}
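The correspondence between Identity and Access roles and the RBAC roles that they grant in the default namespace can be summarized in a small lookup. This sketch only encodes the table above for quick reference; the function name is illustrative.

```shell
# Map an Identity and Access policy role to the RBAC role it grants
# in the default namespace, per the table above.
policy_to_rbac() {
  case "$1" in
    Administrator) echo "cluster-admin" ;;
    Operator)      echo "admin" ;;
    Editor)        echo "edit" ;;
    Viewer)        echo "view" ;;
    *)             echo "unknown"; return 1 ;;
  esac
}

policy_to_rbac Editor   # prints "edit"
```
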

Verifying your {{site.data.keyword.containershort_notm}} access policy

{: #view_access}

You can review and verify your assigned access policy for {{site.data.keyword.containershort_notm}}. The access policy determines the cluster management actions that you can perform.

  1. Select the {{site.data.keyword.Bluemix_notm}} account where you want to verify your {{site.data.keyword.containershort_notm}} access policy.

  2. From the menu bar, click Manage > Security > Identity and Access. The Users window displays a list of users with their email addresses and current status for the selected account.

  3. Select the user for whom you want to check the access policy.

  4. In the Service Policies section, review the access policy for the user. To find detailed information about the actions that you can perform with this role, see Overview of required {{site.data.keyword.containershort_notm}} access policies and permissions.

  5. Optional: Change your current access policy.

    Note: Only users with an assigned Administrator service policy for all resources in {{site.data.keyword.containershort_notm}} can change the access policy for an existing user. To add further users to a {{site.data.keyword.Bluemix_notm}} account, you must have the Manager Cloud Foundry role for the account. To find the ID of the {{site.data.keyword.Bluemix_notm}} account owner, run bx iam accounts and look for the Owner User ID.

Changing the {{site.data.keyword.containershort_notm}} access policy for an existing user

{: #change_access}

You can change the access policy for an existing user to grant cluster management permissions for a cluster in your {{site.data.keyword.Bluemix_notm}} account.

Before you begin, verify that you have been assigned the Administrator access policy for all resources in {{site.data.keyword.containershort_notm}}.

  1. Select the {{site.data.keyword.Bluemix_notm}} account where you want to change the {{site.data.keyword.containershort_notm}} access policy for an existing user.
  2. From the menu bar, click Manage > Security > Identity and Access. The Users window displays a list of users with their email addresses and current status for the selected account.
  3. Find the user for whom you want to change the access policy. If you do not find the user you are looking for, invite this user to the {{site.data.keyword.Bluemix_notm}} account.
  4. From the Actions tab, click Assign policy.
  5. From the Service drop-down list, select {{site.data.keyword.containershort_notm}}.
  6. From the Roles drop-down list, select the access policy that you want to assign. Selecting a role without any limitations on a specific region or cluster automatically applies this access policy to all clusters that were created in this account. If you want to limit the access to a certain cluster or region, select them from the Service instance and Region drop-down list. To find a list of supported actions per access policy, see Overview of required {{site.data.keyword.containershort_notm}} access policies and permissions. To find the ID of a specific cluster, run bx cs clusters.
  7. Click Assign Policy to save your changes.

Adding users to a {{site.data.keyword.Bluemix_notm}} account

{: #add_users}

You can add additional users to a {{site.data.keyword.Bluemix_notm}} account to grant access to your clusters.

Before you begin, verify that you have been assigned the Manager Cloud Foundry role for a {{site.data.keyword.Bluemix_notm}} account.

  1. Select the {{site.data.keyword.Bluemix_notm}} account where you want to add users.
  2. From the menu bar, click Manage > Security > Identity and Access. The Users window displays a list of users with their email addresses and current status for the selected account.
  3. Click Invite users.
  4. In Email address or existing IBMid, enter the email address of the user that you want to add to the {{site.data.keyword.Bluemix_notm}} account.
  5. In the Access section, expand Identity and Access enabled services.
  6. From the Services drop-down list, select {{site.data.keyword.containershort_notm}}.
  7. From the Region drop-down list, select a region. If the region you want is not listed and is supported for {{site.data.keyword.containershort_notm}}, select All regions.
  8. From the Roles drop-down list, select the access policy that you want to assign. Selecting a role without any limitations on a specific region or cluster automatically applies this access policy to all clusters that were created in this account. To limit the access to a certain cluster or region, select a value from the Service instance and Region drop-down lists. To find a list of supported actions per access policy, see Overview of required {{site.data.keyword.containershort_notm}} access policies and permissions. To find the ID of a specific cluster, run bx cs clusters.
  9. Expand the Cloud Foundry access section and select the {{site.data.keyword.Bluemix_notm}} organization from the Organization drop-down list to which you want to add the user.
  10. From the Space Roles drop-down list, select any role. Kubernetes clusters are independent from {{site.data.keyword.Bluemix_notm}} spaces.
  11. Click Invite users.
  12. Optional: To allow this user to add additional users to a {{site.data.keyword.Bluemix_notm}} account, assign the user a Cloud Foundry org role.
    1. From the Users overview table, in the Actions column, select Manage User.
    2. In the Cloud Foundry roles section, find the Cloud Foundry organization role that was granted to the user that you added in the previous steps.
    3. From the Actions tab, select Edit Organization Role.
    4. From the Organization Roles drop-down list, select Manager.
    5. Click Save Role.

Updating the Kubernetes master

{: #cs_cluster_update}

Updating a cluster is a two-step process. First, you must update the Kubernetes master, and then you can update each of the worker nodes.

Attention: Updates might cause outages and interruptions for your apps unless you plan accordingly.

Kubernetes provides these update types:

| Update type | Version label | Updated by | Impact |
|-------------|---------------|------------|--------|
| Major | example: 1.x.x | User | Might involve changes to the operation of a cluster and might require changes to scripts or deployments. |
| Minor | example: x.5.x | User | Might involve changes to the operation of a cluster and might require changes to scripts or deployments. |
| Patch | example: x.x.3 | IBM/User | A small fix that is non-disruptive. A patch does not require changes to scripts or deployments. IBM updates masters automatically, but the user must update worker nodes to apply patches. |
{: caption="Types of Kubernetes updates" caption-side="top"}
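Given the version labels in the table, the type of an update between two cluster versions follows from which dot-separated segment changes. A minimal sketch of that comparison; the function name is illustrative:

```shell
# Classify a Kubernetes update as major, minor, or patch by comparing
# the dot-separated segments of the two version strings.
update_type() {
  old_major=${1%%.*}; r1=${1#*.}; old_minor=${r1%%.*}
  new_major=${2%%.*}; r2=${2#*.}; new_minor=${r2%%.*}
  if [ "$old_major" != "$new_major" ]; then echo major
  elif [ "$old_minor" != "$new_minor" ]; then echo minor
  else echo patch
  fi
}

update_type 1.5.6 1.7.4   # prints "minor"
update_type 1.7.3 1.7.4   # prints "patch"
```
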

When making a major or minor update, complete the following steps. Before updating a production environment, use a test cluster. You cannot roll back a cluster to a previous version.

  1. Review the Kubernetes changes and make any updates marked Update before master.
  2. Update your Kubernetes master by using the GUI or running the CLI command. When you update the Kubernetes master, the master is down for about 5 - 10 minutes. During the update, you cannot access or change the cluster. However, worker nodes, apps, and resources that cluster users have deployed are not modified and continue to run.
  3. Confirm that the update is complete. Review the Kubernetes version on the {{site.data.keyword.Bluemix_notm}} Dashboard or run bx cs clusters.

When the Kubernetes master update is complete, you can update your worker nodes to the latest version.


Updating worker nodes

{: #cs_cluster_worker_update}

Worker nodes can be updated to the Kubernetes version of the Kubernetes master. While IBM automatically applies patches to the Kubernetes master, worker nodes require a user command for both updates and patches.

Attention: Updating the worker node version can cause downtime for your apps and services. Data is deleted if not stored outside the pod. Use replicas in your deployments to allow pods to reschedule to available nodes.

Updating production-level clusters:

  • Use a test cluster to validate that your workloads and the delivery process are not impacted by the update. You cannot roll back worker nodes to a previous version.
  • Production-level clusters should have capacity to survive a worker node failure. If your cluster does not, add a worker node before updating the cluster.
  • The update process does not drain nodes prior to the update. Consider using kubectl drain and kubectl uncordon to help avoid downtime for your apps.

Before you begin, install the version of the kubectl CLI that matches the Kubernetes version of the Kubernetes master.

  1. Review the Kubernetes changes and make any changes marked Update after master to your deployment scripts, if needed.

  2. Update your worker nodes. To update from the {{site.data.keyword.Bluemix_notm}} Dashboard, navigate to the Worker Nodes section of your cluster, and click Update Worker. To get worker node IDs, run bx cs workers <cluster_name_or_id>. If you select multiple worker nodes, the worker nodes are updated one at a time.

    bx cs worker-update <cluster_name_or_id> <worker_node_id1> <worker_node_id2>
    

    {: pre}

  3. Verify that your worker nodes updated. Review the Kubernetes version on the {{site.data.keyword.Bluemix_notm}} Dashboard or run bx cs workers <cluster_name_or_id>. In addition, confirm that the Kubernetes version listed by kubectl updated. In some cases, older clusters might list duplicate worker nodes with a NotReady status after an update. To remove duplicates, see troubleshooting.

    kubectl get nodes
    

    {: pre}

  4. Check the Kubernetes dashboard. If utilization graphs are not displaying in the Kubernetes dashboard, delete the kube-dashboard pod to force a restart. The pod will be re-created with RBAC policies to access heapster for utilization information.

    kubectl delete pod -n kube-system $(kubectl get pod -n kube-system --selector=k8s-app=kubernetes-dashboard -o jsonpath='{.items..metadata.name}')
    

    {: pre}

When you complete the update, repeat the update process with other clusters. In addition, inform developers who work in the cluster to update their kubectl CLI to the version of the Kubernetes master.


Adding subnets to clusters

{: #cs_cluster_subnet}

Change the pool of available portable public IP addresses by adding subnets to your cluster. {:shortdesc}

In {{site.data.keyword.containershort_notm}}, you can add stable, portable IPs for Kubernetes services by adding network subnets to the cluster. When you create a standard cluster, {{site.data.keyword.containershort_notm}} automatically provisions a portable public subnet and 5 IP addresses. Portable public IP addresses are static and do not change when a worker node, or even the cluster, is removed.

One of the portable public IP addresses is used for the Ingress controller that you can use to expose multiple apps in your cluster by using a public route. The remaining 4 portable public IP addresses can be used to expose single apps to the public by creating a load balancer service.
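For reference, a load balancer service that exposes a single app on one of those portable public IP addresses looks roughly like the following sketch. The service name, app label, and port are placeholders; loadBalancerIP is optional and, if set, must be an available portable public IP address from your cluster's subnet.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb              # illustrative service name
spec:
  type: LoadBalancer
  selector:
    app: my-app                # must match your deployment's pod labels
  ports:
  - protocol: TCP
    port: 8080
  loadBalancerIP: <portable_public_IP>   # optional: pin a specific portable public IP
```
{: codeblock}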

Note: Portable public IP addresses are charged on a monthly basis. If you choose to remove portable public IP addresses after your cluster is provisioned, you still have to pay the monthly charge, even if you used them only for a short amount of time.
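When you budget IP addresses for additional subnets, keep in mind that three addresses in each requested portable public subnet are reserved for cluster-internal networking, as described in the steps that follow. A quick sketch of that arithmetic; the function name is illustrative:

```shell
# Portable public IPs usable for your own services: the requested
# count minus the 3 addresses reserved for cluster-internal networking.
usable_ips() {
  echo $(( $1 - 3 ))
}

usable_ips 8   # prints 5: request eight addresses, keep five for your apps
```
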

Requesting additional subnets for your cluster

{: #add_subnet}

You can add stable, portable public IPs to the cluster by assigning subnets to the cluster.

For {{site.data.keyword.Bluemix_notm}} Dedicated users, instead of using this task, you must open a support ticket to create the subnet, and then use the bx cs cluster-subnet-add command to add the subnet to the cluster.

Before you begin, make sure that you can access the IBM Bluemix Infrastructure (SoftLayer) portfolio through the {{site.data.keyword.Bluemix_notm}} GUI. To access the portfolio, you must set up or use an existing {{site.data.keyword.Bluemix_notm}} Pay-As-You-Go account.

  1. From the catalog, in the Infrastructure section, select Network.

  2. Select Subnet/IPs and click Create.

  3. From the Select the type of subnet to add to this account drop-down menu, select Portable Public.

  4. Select the number of IP addresses that you want to add from your portable subnet.

    Note: When you add portable public IP addresses for your subnet, three IP addresses are used to establish cluster-internal networking, so that you cannot use them for your Ingress controller or to create a load balancer service. For example, if you request eight portable public IP addresses, you can use five of them to expose your apps to the public.

  5. Select the public VLAN to which you want to route the portable public IP addresses. You must select the public VLAN that an existing worker node is connected to. Review the public VLAN for a worker node.

    bx cs worker-get <worker_id>
    

    {: pre}

  6. Complete the questionnaire and click Place order.

    Note: Portable public IP addresses are charged on a monthly basis. If you choose to remove portable public IP addresses after you created them, you still must pay the monthly charge, even if you used them only part of the month.

  7. After the subnet is provisioned, make the subnet available to your Kubernetes cluster.

    1. From the Infrastructure dashboard, select the subnet that you created and note the ID of the subnet.

    2. Log in to the {{site.data.keyword.Bluemix_notm}} CLI. To specify a {{site.data.keyword.Bluemix_notm}} region, include the API endpoint.

      bx login
      

      {: pre}

      Note: If you have a federated ID, use bx login --sso to log in to the {{site.data.keyword.Bluemix_notm}} CLI. Enter your user name and use the provided URL in your CLI output to retrieve your one-time passcode. You know you have a federated ID when the login fails without the --sso and succeeds with the --sso option.

    3. List all clusters in your account and note the ID of the cluster where you want to make your subnet available.

      bx cs clusters
      

      {: pre}

    4. Add the subnet to your cluster. When you make a subnet available to a cluster, a Kubernetes config map is created for you that includes all available portable public IP addresses that you can use. If no Ingress controller exists for your cluster, one portable public IP address is automatically used to create the Ingress controller. All other portable public IP addresses can be used to create load balancer services for your apps.

      bx cs cluster-subnet-add <cluster name or id> <subnet id>
      

      {: pre}

  8. Verify that the subnet was successfully added to your cluster. The subnet CIDR is listed in the VLANs section.

    bx cs cluster-get --showResources <cluster name or id>
    

    {: pre}

Adding custom and existing subnets to Kubernetes clusters

{: #custom_subnet}

You can add existing portable public subnets to your Kubernetes cluster.

Before you begin, target your CLI to your cluster.

If you have an existing subnet in your IBM Bluemix Infrastructure (SoftLayer) portfolio with custom firewall rules or available IP addresses that you want to use, create a cluster with no subnet and make your existing subnet available to the cluster when the cluster provisions.

  1. Identify the subnet to use. Note the ID of the subnet and the VLAN ID. In this example, the subnet ID is 807861 and the VLAN ID is 1901230.

    bx cs subnets
    

    {: pre}

    Getting subnet list...
    OK
    ID        Network                                      Gateway                                   VLAN ID   Type      Bound Cluster
    553242    203.0.113.0/24                               10.87.15.00                               1565280   private
    807861    192.0.2.0/24                                 10.121.167.180                            1901230   public
    
    

    {: screen}

  2. Confirm the location where the VLAN resides. In this example, the location is dal10.

    bx cs vlans dal10
    

    {: pre}

    Getting VLAN list...
    OK
    ID        Name                  Number   Type      Router
    1900403   vlan                    1391     private   bcr01a.dal10
    1901230   vlan                    1180     public   fcr02a.dal10
    

    {: screen}

  3. Create a cluster by using the location and VLAN ID that you identified. Include the --no-subnet flag to prevent a new portable public IP subnet from being created automatically.

    bx cs cluster-create --location dal10 --machine-type u1c.2x4 --no-subnet --public-vlan 1901230 --private-vlan 1900403 --workers 3 --name my_cluster
    

    {: pre}

  4. Verify that the creation of the cluster was requested.

    bx cs clusters
    

    {: pre}

    Note: It can take up to 15 minutes for the worker node machines to be ordered, and for the cluster to be set up and provisioned in your account.

    When the provisioning of your cluster is completed, the status of your cluster changes to deployed.

    Name         ID                                   State      Created          Workers
    my_cluster   paf97e8843e29941b49c598f516de72101   deployed   20170201162433   3
    

    {: screen}

  5. Check the status of the worker nodes.

    bx cs workers <cluster>
    

    {: pre}

    When the worker nodes are ready, the state changes to normal and the status is Ready. When the node status is Ready, you can then access the cluster.

    ID                                                  Public IP        Private IP     Machine Type   State      Status
    prod-dal10-pa8dfcc5223804439c87489886dbbc9c07-w1   169.47.223.113   10.171.42.93   free           normal    Ready
    

    {: screen}

  6. Add the subnet to your cluster by specifying the subnet ID. When you make a subnet available to a cluster, a Kubernetes config map is created for you that includes all available portable public IP addresses that you can use. If no Ingress controller already exists for your cluster, one portable public IP address is automatically used to create the Ingress controller. All other portable public IP addresses can be used to create load balancer services for your apps.

    bx cs cluster-subnet-add mycluster 807861
    

    {: pre}

Adding user-managed subnets and IP addresses to Kubernetes clusters

{: #user_subnet}

Provide your own subnet from an on-premises network that you want {{site.data.keyword.containershort_notm}} to access. Then, you can add private IP addresses from that subnet to load balancer services in your Kubernetes cluster.

Requirements:

  • User-managed subnets can be added to private VLANs only.
  • The subnet prefix length limit is /24 to /30. For example, 203.0.113.0/24 specifies 253 usable private IP addresses, while 203.0.113.0/30 specifies 1 usable private IP address.
  • The first IP address in the subnet must be used as the gateway for the subnet.
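The arithmetic behind those usable-address counts can be sketched in a few lines of shell. This is a hedged illustration: it assumes exactly three reserved addresses per subnet, for the network, broadcast, and gateway addresses described above.

```sh
#!/bin/sh
# Usable IP addresses in a subnet of the given prefix length,
# assuming the network, broadcast, and gateway addresses are reserved.
usable_ips() {
  prefix=$1
  total=$(( 1 << (32 - prefix) ))   # total addresses in the subnet
  echo $(( total - 3 ))             # minus network, broadcast, gateway
}

usable_ips 24   # /24: 253 usable addresses
usable_ips 30   # /30: 1 usable address
```
{: codeblock}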

Before you begin: Configure the routing of network traffic into and out of the external subnet. In addition, confirm that you have VPN connectivity between the on-premises data center gateway device and the private network Vyatta in your IBM Bluemix Infrastructure (SoftLayer) portfolio. For more information, see this blog post.

  1. View the ID of your cluster's private VLAN. In the output, locate the VLANs section and identify the VLAN ID whose Is Public? field is false.

    bx cs cluster-get --showResources <cluster_name>
    

    {: pre}

    VLANs
    VLAN ID   Subnet CIDR         Is Public?   Is BYOIP?
    1555503   192.0.2.0/24        true         false
    1555505   198.51.100.0/24     false        false
    

    {: screen}

  2. Add the external subnet to your private VLAN. The portable private IP addresses are added to the cluster's config map.

    bx cs cluster-user-subnet-add <subnet_CIDR> <VLAN_ID>
    

    {: pre}

    Example:

    bx cs cluster-user-subnet-add 203.0.113.0/24 1555505
    

    {: pre}

  3. Verify that the user-provided subnet is added. For that subnet, the Is BYOIP? field is true.

    bx cs cluster-get --showResources <cluster_name>
    

    {: pre}

    VLANs
    VLAN ID   Subnet CIDR         Is Public?   Is BYOIP?
    1555503   192.0.2.0/24        true         false
    1555505   198.51.100.0/24     false        false
    1555505   203.0.113.0/24      false        true
    

    {: screen}

  4. Add a private load balancer to access your app over the private network. If you want to use a private IP address from the subnet that you added, you must specify an IP address when you create a private load balancer. Otherwise, an IP address is chosen at random from the IBM Bluemix Infrastructure (SoftLayer) subnets or user-provided subnets on the private VLAN. For more information, see Configuring access to an app.

    Example configuration file for a private load balancer service with a specified IP address:

    apiVersion: v1
    kind: Service
    metadata:
      name: <myservice>
      annotations:
        service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: private
    spec:
      type: LoadBalancer
      selector:
        <selectorkey>: <selectorvalue>
      ports:
       - protocol: TCP
         port: 8080
      loadBalancerIP: <private_ip_address>
    

    {: codeblock}


Using existing NFS file shares in clusters

{: #cs_cluster_volume_create}

If you already have existing NFS file shares in your IBM Bluemix Infrastructure (SoftLayer) account that you want to use with Kubernetes, you can do so by creating persistent volumes on your existing NFS file share. A persistent volume is a Kubernetes cluster resource that represents an actual piece of storage hardware and that can be consumed by cluster users. {:shortdesc}

Before you begin, make sure that you have an existing NFS file share that you can use to create your persistent volume.

Create persistent volumes and persistent volume claims

Kubernetes differentiates between persistent volumes, which represent the actual hardware, and persistent volume claims, which are requests for storage that are usually initiated by the cluster user. To use existing NFS file shares with Kubernetes, you must create a persistent volume with a certain size and access mode, and then create a persistent volume claim that matches the persistent volume specification. If the persistent volume and persistent volume claim match, they are bound to each other. Only bound persistent volume claims can be used by the cluster user to mount the volume to a pod. This process is referred to as static provisioning of persistent storage.

Note: Static provisioning of persistent storage only applies to existing NFS file shares. If you do not have existing NFS file shares, cluster users can use the dynamic provisioning process to add persistent volumes.

To create a persistent volume and matching persistent volume claim, follow these steps.

  1. In your IBM Bluemix Infrastructure (SoftLayer) account, look up the ID and path of the NFS file share where you want to create your persistent volume object.

    1. Log in to your IBM Bluemix Infrastructure (SoftLayer) account.
    2. Click Storage.
    3. Click File Storage and note the ID and path of the NFS file share that you want to use.
  2. Open your preferred editor.

  3. Create a storage configuration file for your persistent volume.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
     name: mypv
    spec:
     capacity:
       storage: "20Gi"
     accessModes:
       - ReadWriteMany
     nfs:
       server: "nfslon0410b-fz.service.softlayer.com"
       path: "/IBM01SEV8491247_0908"
    

    {: codeblock}

    Table 8. Understanding the YAML file components

    | Component | Description |
    |-----------|-------------|
    | `name` | The name of the persistent volume object that you want to create. |
    | `storage` | The storage size of the existing NFS file share. The storage size must be written in gigabytes, for example, 20Gi (20 GB) or 1000Gi (1 TB), and the size must match the size of the existing file share. |
    | `accessModes` | Access modes define the way that the persistent volume claim can be mounted to a worker node. <ul><li>ReadWriteOnce (RWO): The persistent volume can be mounted to pods on a single worker node only. Pods that mount this persistent volume can read from and write to the volume.</li><li>ReadOnlyMany (ROX): The persistent volume can be mounted to pods that are hosted on multiple worker nodes. Pods that mount this persistent volume can only read from the volume.</li><li>ReadWriteMany (RWX): The persistent volume can be mounted to pods that are hosted on multiple worker nodes. Pods that mount this persistent volume can read from and write to the volume.</li></ul> |
    | `server` | The NFS file share server ID. |
    | `path` | The path to the NFS file share where you want to create the persistent volume object. |
    {: table}
  4. Create the persistent volume object in your cluster.

    kubectl apply -f <yaml_path>
    

    {: pre}

    Example

    kubectl apply -f deploy/kube-config/pv.yaml
    

    {: pre}

  5. Verify that the persistent volume is created.

    kubectl get pv
    

    {: pre}

  6. Create another configuration file for your persistent volume claim. For the persistent volume claim to match the persistent volume object that you created earlier, you must choose the same values for storage and accessModes, and the storage class must be empty. If any of these fields do not match the persistent volume, a new persistent volume is created automatically instead.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
     name: mypvc
     annotations:
       volume.beta.kubernetes.io/storage-class: ""
    spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: "20Gi"
    

    {: codeblock}

  7. Create your persistent volume claim.

    kubectl apply -f deploy/kube-config/mypvc.yaml
    

    {: pre}

  8. Verify that your persistent volume claim is created and bound to the persistent volume object. This process can take a few minutes.

    kubectl describe pvc mypvc
    

    {: pre}

    Your output looks similar to the following.

    Name: mypvc
    Namespace: default
    StorageClass:	""
    Status: Bound
    Volume: pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
    Labels: <none>
    Capacity: 20Gi
    Access Modes: RWX
    Events:
      FirstSeen LastSeen Count From        SubObjectPath Type Reason Message
      --------- -------- ----- ----        ------------- -------- ------ -------
      3m 3m 1 {ibm.io/ibmc-file 31898035-3011-11e7-a6a4-7a08779efd33 } Normal Provisioning External provisioner is provisioning volume for claim "default/my-persistent-volume-claim"
      3m 1m	 10 {persistentvolume-controller } Normal ExternalProvisioning cannot find provisioner "ibm.io/ibmc-file", expecting that a volume for the claim is provisioned either manually or via external software
      1m 1m 1 {ibm.io/ibmc-file 31898035-3011-11e7-a6a4-7a08779efd33 } Normal ProvisioningSucceeded	Successfully provisioned volume pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
    

    {: screen}

You successfully created a persistent volume object and bound it to a persistent volume claim. Cluster users can now mount the persistent volume claim to their pod and start reading from and writing to the persistent volume object.
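As a minimal sketch of that last step, a pod spec along these lines mounts the claim. The pod name, container image, and mount path here are illustrative assumptions; claimName matches the mypvc example above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: nginx                  # illustrative image
    volumeMounts:
    - name: nfs-vol
      mountPath: /mnt/nfs         # where the NFS share appears inside the container
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: mypvc            # the bound claim from the previous steps
```
{: codeblock}

After you apply this file with kubectl apply -f, the NFS share is mounted into the container at the specified path.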


Configuring cluster logging

{: #cs_logging}

Logs help you troubleshoot issues with your clusters and apps. You can enable log forwarding for various cluster log sources and choose where your logs are forwarded. {:shortdesc}

Viewing logs

{: #cs_view_logs}

To view logs for clusters and containers, you can use the standard Kubernetes and Docker logging features. {:shortdesc}

{{site.data.keyword.loganalysislong_notm}}

{: #cs_view_logs_k8s}

For standard clusters, logs are located in the {{site.data.keyword.Bluemix_notm}} space you were logged in to when you created the Kubernetes cluster. Container logs are monitored and forwarded outside of the container. You can access logs for a container by using the Kibana dashboard. For more information about logging, see Logging for the {{site.data.keyword.containershort_notm}}.

To access the Kibana dashboard, go to one of the following URLs and select the {{site.data.keyword.Bluemix_notm}} organization and space where you created the cluster.

Docker logs

{: #cs_view_logs_docker}

You can leverage the built-in Docker logging capabilities to review activities on the standard STDOUT and STDERR output streams. For more information, see Viewing container logs for a container that runs in a Kubernetes cluster.

Configuring log forwarding for a Docker container namespace

{: #cs_configure_namespace_logs}

By default, the {{site.data.keyword.containershort_notm}} forwards Docker container namespace logs to {{site.data.keyword.loganalysislong_notm}}. You can also forward container namespace logs to an external syslog server by creating a new log forwarding configuration. {:shortdesc}

Note: To view logs in the Sydney location, you must forward logs to an external syslog server.

Enabling log forwarding to syslog

{: #cs_namespace_enable}

Before you begin:

  1. Set up a server that can accept a syslog protocol. You can run a syslog server in two ways:
  • Set up and manage your own server, or have a provider manage it for you. If a provider manages the server for you, get the logging endpoint from the logging provider.
  • Run syslog from a container. For example, you can use this deployment .yaml file to fetch a Docker public image that runs a container in a Kubernetes cluster. The image publishes port 30514 on the public cluster IP address, and this public cluster IP address is used to configure the syslog host.
  2. Target your CLI to the cluster where the namespace is located.
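For the run-in-a-container option above, a deployment plus NodePort service along these lines would publish a syslog container on port 30514. This is a sketch, not the referenced sample file; the apiVersion, image, and labels are assumptions for a cluster of this document's era.

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: syslog
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: syslog
    spec:
      containers:
      - name: syslog
        image: balabit/syslog-ng    # illustrative public syslog image
        ports:
        - containerPort: 514
---
apiVersion: v1
kind: Service
metadata:
  name: syslog
spec:
  type: NodePort
  selector:
    app: syslog
  ports:
  - protocol: TCP
    port: 514
    nodePort: 30514               # published on the cluster's public IP address
```
{: codeblock}

You would then use the cluster's public IP address and port 30514 as the hostname and port values in the logging configuration.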

To forward your namespace logs to a syslog server:

  1. Create the logging configuration.

    bx cs logging-config-create <my_cluster> --namespace <my_namespace> --hostname <log_server_hostname> --port <log_server_port> --type syslog
    

    {: pre}

    Table 1. Understanding this command's components

    | Component | Description |
    |-----------|-------------|
    | `logging-config-create` | The command to create the log forwarding configuration for your namespace. |
    | `<my_cluster>` | Replace `<my_cluster>` with the name or ID of the cluster. |
    | `--namespace <my_namespace>` | Replace `<my_namespace>` with the name of the namespace. Log forwarding is not supported for the ibm-system and kube-system Kubernetes namespaces. If you do not specify a namespace, all namespaces in the cluster use this configuration. |
    | `--hostname <log_server_hostname>` | Replace `<log_server_hostname>` with the hostname or IP address of the log collector server. |
    | `--port <log_server_port>` | Replace `<log_server_port>` with the port of the log collector server. If you do not specify a port, the standard syslog port 514 is used. |
    | `--type syslog` | The log type for syslog. |
    {: table}
  2. Verify that the log forwarding configuration was created.

  • To list all of the logging configurations in the cluster:

    bx cs logging-config-get <my_cluster>
    

    {: pre}

    Example output:

    Logging Configurations
    ---------------------------------------------
    Id                                    Source      Protocol Host       Port
    f4bc77c0-ee7d-422d-aabf-a4e6b977264e  kubernetes  syslog   localhost  5514
    5bd9c609-13c8-4c48-9d6e-3a6664c825a9  ingress     ibm      -          -
    
    Container Log Namespace configurations
    ---------------------------------------------
    Namespace         Protocol    Host        Port
    default           syslog      localhost   5514
    my-namespace      syslog      localhost   5514   
    

    {: screen}

  • To list only namespace logging configurations:

    bx cs logging-config-get <my_cluster> --logsource namespaces
    

    {: pre}

    Example output:

    Namespace         Protocol    Host        Port
    default           syslog      localhost   5514
    myapp-namespace   syslog      localhost   5514
    

    {: screen}

Updating the syslog server configuration

{: #cs_namespace_update}

If you want to update details for the current syslog server configuration or change to a different syslog server, you can update the logging forwarding configuration. {:shortdesc}

Before you begin:

  1. Target your CLI to the cluster where the namespace is located.

To change the details of the syslog forwarding configuration:

  1. Update the log forwarding configuration.

    bx cs logging-config-update <my_cluster> --namespace <my_namespace> --hostname <log_server_hostname> --port <log_server_port> --type syslog
    

    {: pre}

    Table 2. Understanding this command's components

    | Component | Description |
    |-----------|-------------|
    | `logging-config-update` | The command to update the log forwarding configuration for your namespace. |
    | `<my_cluster>` | Replace `<my_cluster>` with the name or ID of the cluster. |
    | `--namespace <my_namespace>` | Replace `<my_namespace>` with the name of the namespace that has the logging configuration. |
    | `--hostname <log_server_hostname>` | Replace `<log_server_hostname>` with the hostname or IP address of the log collector server. |
    | `--port <log_server_port>` | Replace `<log_server_port>` with the port of the log collector server. If you do not specify a port, the standard port 514 is used. |
    | `--type syslog` | The logging type for syslog. |
    {: table}
  2. Verify that the log forwarding configuration was updated.

    bx cs logging-config-get <my_cluster> --logsource namespaces
    

    {: pre}

    Example output:

    Namespace         Protocol    Host        Port
    default           syslog      localhost   5514
    myapp-namespace   syslog      localhost   5514
    

    {: screen}

Stopping log forwarding to syslog

{: #cs_namespace_delete}

You can stop forwarding logs from a namespace by deleting the logging configuration.

Note: This action deletes only the configuration for forwarding logs to a syslog server. Logs for the namespace continue to be forwarded to {{site.data.keyword.loganalysislong_notm}}.

Before you begin:

  1. Target your CLI to the cluster where the namespace is located.

To stop forwarding logs to syslog:

  1. Delete the logging configuration.

    bx cs logging-config-rm <my_cluster> --namespace <my_namespace>
    

    {: pre}

    Replace <my_cluster> with the name of the cluster that the logging configuration is in and <my_namespace> with the name of the namespace.


Visualizing Kubernetes cluster resources

{: #cs_weavescope}

Weave Scope provides a visual diagram of your resources within a Kubernetes cluster, including services, pods, containers, processes, nodes, and more. Weave Scope provides interactive metrics for CPU and memory and also provides tools to tail and exec into a container. {:shortdesc}

Before you begin:

  • Remember not to expose your cluster information on the public internet. Complete these steps to deploy Weave Scope securely and access it from a web browser locally.
  • If you do not have one already, create a standard cluster. Weave Scope can be CPU-intensive, especially the Weave Scope app. Run Weave Scope with larger standard clusters, not lite clusters.
  • Target your CLI to your cluster to run kubectl commands.

To use Weave Scope with a cluster:

  1. Deploy one of the provided RBAC permissions configuration files in the cluster.

To enable read/write permissions:

```
kubectl apply -f "https://raw.githubusercontent.com/IBM-{{site.data.keyword.Bluemix_notm}}/kube-samples/master/weave-scope/weave-scope-rbac.yaml"
```
{: pre}

To enable read-only permissions:

```
kubectl apply -f "https://raw.githubusercontent.com/IBM-{{site.data.keyword.Bluemix_notm}}/kube-samples/master/weave-scope/weave-scope-rbac-readonly.yaml"
```
{: pre}

Output:

```
clusterrole "weave-scope-mgr" created
clusterrolebinding "weave-scope-mgr-role-binding" created
```
{: screen}
  2. Deploy the Weave Scope service, which is privately accessible by the cluster IP address.

    kubectl apply --namespace kube-system -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')"
    

    {: pre}
    Output:

    serviceaccount "weave-scope" created
    deployment "weave-scope-app" created
    service "weave-scope-app" created
    daemonset "weave-scope-agent" created
    

    {: screen}

  3. Run a port forwarding command to bring up the service on your computer. Now that Weave Scope is configured in the cluster, you can access it again later by running only this port forwarding command, without completing the previous configuration steps again.

    kubectl port-forward -n kube-system "$(kubectl get -n kube-system pod --selector=weave-scope-component=app -o jsonpath='{.items..metadata.name}')" 4040
    

    {: pre}

    Output:

    Forwarding from 127.0.0.1:4040 -> 4040
    Forwarding from [::1]:4040 -> 4040
    Handling connection for 4040
    

    {: screen}

  4. Open your web browser to http://localhost:4040. Choose to view topology diagrams or tables of the Kubernetes resources in the cluster.

    Example topology from Weave Scope

Learn more about the Weave Scope features.


Removing clusters

{: #cs_cluster_remove}

When you are finished with a cluster, you can remove it so that the cluster is no longer consuming resources. {:shortdesc}

Lite and standard clusters that are created with a standard or {{site.data.keyword.Bluemix_notm}} Pay-As-You-Go account must be removed manually when they are no longer needed. Lite clusters that are created with a free trial account are removed automatically after the free trial period ends.

When you delete a cluster, you also delete the resources on the cluster, including containers, pods, bound services, and secrets. If your storage is not deleted along with the cluster, you can delete it through the IBM Bluemix Infrastructure (SoftLayer) dashboard in the {{site.data.keyword.Bluemix_notm}} GUI. Due to the monthly billing cycle, a persistent volume claim cannot be deleted on the last day of a month; if you delete it then, the deletion remains pending until the beginning of the next month.

Warning: No backups are created of your cluster or your data in your persistent storage. Deleting a cluster is permanent and cannot be undone.

  • From the {{site.data.keyword.Bluemix_notm}} GUI
    1. Select your cluster and click Delete from the More actions... menu.
  • From the {{site.data.keyword.Bluemix_notm}} CLI
    1. List the available clusters.

      bx cs clusters
      

      {: pre}

    2. Delete the cluster.

      bx cs cluster-rm my_cluster
      

      {: pre}

    3. Follow the prompts and choose whether to delete cluster resources.

When you remove a cluster, the portable public and private subnets are not removed automatically. Subnets are used to assign portable public IP addresses to load balancer services or your Ingress controller. You can choose to manually delete subnets or reuse them in a new cluster.