Configure storage for OpenShift Container Platform services

Configuring Image Registry to use OpenShift Data Foundation

1. Create a Persistent Volume Claim for the Image Registry to use.

1. In the OpenShift Web Console, click Storage → Persistent Volume Claims.
2. Set the Project to openshift-image-registry.
3. Click Create Persistent Volume Claim.
4. From the list of available storage classes, select the storage class with the provisioner openshift-storage.cephfs.csi.ceph.com.
5. Specify the Persistent Volume Claim Name, for example, ocs4registry.
6. Specify an Access Mode of Shared Access (RWX).
7. Specify a Size of at least 100 GB.
8. Click Create.

Wait until the status of the new Persistent Volume Claim is listed as Bound.
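The same claim can also be created from the CLI instead of the console. A minimal manifest sketch matching the steps above; the storage class name ocs-storagecluster-cephfs is an assumption (use whichever class on your cluster has the openshift-storage.cephfs.csi.ceph.com provisioner):

```yaml
# Sketch of a PVC equivalent to the console steps above.
# Verify the storage class name with: oc get storageclass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ocs4registry
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteMany        # Shared Access (RWX)
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-cephfs   # assumed class name
```

Apply it with oc apply -f pvc.yaml and wait for the claim to report Bound.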

2. Configure the cluster’s Image Registry to use the new Persistent Volume Claim.

1. Click Administration → Custom Resource Definitions.
2. Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group.
3. Click the Instances tab.
4. Beside the cluster instance, click the Action Menu (⋮) → Edit Config.
5. Add the new Persistent Volume Claim as persistent storage for the Image Registry.
   Add the following under spec:, replacing the existing storage: section if necessary:

  storage:
    pvc:
      claim: <new-pvc-name>

   For example:

  storage:
    pvc:
      claim: ocs4registry

6. Click Save.
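Alternatively, the same change can be made without the console editor; a hedged CLI sketch, assuming the claim name ocs4registry from the example above:

```shell
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge \
  --patch '{"spec":{"storage":{"pvc":{"claim":"ocs4registry"}}}}'
```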

3. Verify that the new configuration is being used.

1. Click Workloads → Pods.
2. Set the Project to openshift-image-registry.
3. Verify that the new image-registry-* pod appears with a status of Running, and that the previous image-registry-* pod terminates.
4. Click the new image-registry-* pod to view pod details.
5. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry.
https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.15/html-single/managing_and_allocating_storage_resources/index#configuring-image-registry-to-use-openshift-data-foundation_rhodf

OpenShift Customizing the console route

Edit the cluster Ingress configuration:

oc edit ingress.config.openshift.io cluster

Set the custom hostname and optionally the serving certificate and key:

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  componentRoutes:
    - name: console
      namespace: openshift-console
      hostname: <custom_hostname> 
      servingCertKeyPairSecret:
        name: <secret_name> 

Or apply a new Ingress configuration:

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  componentRoutes:
    - name: console
      namespace: openshift-console
      hostname: openshift.meaty.cloud
      servingCertKeyPairSecret:
        name: custom-console-certificate
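The secret named by servingCertKeyPairSecret must already exist in the openshift-config namespace before the Ingress config is applied. A sketch of creating it; the file paths are placeholders for your own certificate and key:

```shell
oc create secret tls custom-console-certificate \
  --cert=tls.crt --key=tls.key \
  -n openshift-config
```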

Example YAML file


apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster

spec:
  componentRoutes:
  - hostname: console.apps.oscp.abc.local
    name: console
    namespace: openshift-console
  domain: apps.oscp.abc.local
  loadBalancer:
    platform:
      type: ""
status:
  componentRoutes:
  - conditions:
    - lastTransitionTime: "2024-03-15T11:24:42Z"
      message: All is well
      reason: AsExpected
      status: "False"
      type: Degraded

https://meatybytes.io/posts/openshift/ocp-features/security/tls/customizing-console

https://docs.openshift.com/container-platform/4.15/web_console/customizing-the-web-console.html#customizing-the-web-console-url_customizing-web-console

OpenShift Assisted Installer networking prerequisites

A DHCP server, unless you are using static IP addressing.
A base domain name. 

The OpenShift Container Platform cluster’s network must also meet the following requirements:
Connectivity between all cluster nodes
Connectivity for each node to the internet
Access to an NTP server for time synchronization between the cluster nodes

Example DNS zone database

$TTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
	IN	MX 10	smtp.example.com.
;
;
ns1.example.com.		IN	A	192.168.1.1
smtp.example.com.		IN	A	192.168.1.5
;
helper.example.com.		IN	A	192.168.1.5
;
api.ocp4.example.com.		IN	A	192.168.1.5	; 1
api-int.ocp4.example.com.	IN	A	192.168.1.5	; 2
;
*.apps.ocp4.example.com.	IN	A	192.168.1.5	; 3
;
control-plane0.ocp4.example.com.	IN	A	192.168.1.97	; 4
control-plane1.ocp4.example.com.	IN	A	192.168.1.98
control-plane2.ocp4.example.com.	IN	A	192.168.1.99
;
worker0.ocp4.example.com.	IN	A	192.168.1.11	; 5
worker1.ocp4.example.com.	IN	A	192.168.1.7
;
;EOF
  1. Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer.
  2. Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications.
  3. Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the worker machines by default.
  4. Provides name resolution for the control plane machines.
  5. Provides name resolution for the worker machines.

Example DNS zone database for reverse records

$TTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
;
5.1.168.192.in-addr.arpa.	IN	PTR	api.ocp4.example.com. 
5.1.168.192.in-addr.arpa.	IN	PTR	api-int.ocp4.example.com. 
;
97.1.168.192.in-addr.arpa.	IN	PTR	control-plane0.ocp4.example.com. 
98.1.168.192.in-addr.arpa.	IN	PTR	control-plane1.ocp4.example.com.
99.1.168.192.in-addr.arpa.	IN	PTR	control-plane2.ocp4.example.com.
;
11.1.168.192.in-addr.arpa.	IN	PTR	worker0.ocp4.example.com. 
7.1.168.192.in-addr.arpa.	IN	PTR	worker1.ocp4.example.com.
;
;EOF
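Both zones can be sanity-checked with dig from a node that uses this DNS server; a sketch, assuming the server at 192.168.1.5 is authoritative for these zones:

```shell
# Forward lookups: API, internal API, and an arbitrary name under the wildcard
dig +noall +answer api.ocp4.example.com @192.168.1.5
dig +noall +answer api-int.ocp4.example.com @192.168.1.5
dig +noall +answer test.apps.ocp4.example.com @192.168.1.5

# Reverse lookups for the load balancer and a control-plane node
dig +noall +answer -x 192.168.1.5 @192.168.1.5
dig +noall +answer -x 192.168.1.97 @192.168.1.5
```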

Adding worker nodes to OpenShift clusters

Confirm that the cluster recognizes the machines:

oc get nodes

Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

oc get csr

To approve them individually, run the following command for each valid CSR:

oc adm certificate approve <csr_name>

Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

oc get csr

If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

oc adm certificate approve <csr_name>

# To approve all pending CSRs, run the following command:
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

oc get nodes
https://docs.openshift.com/container-platform/4.15/nodes/nodes/nodes-sno-worker-nodes.html

OpenShift Cluster OAuth – AD LDAP

Create user and group in AD.

ocp-user: Users with OpenShift access
  - Any users who should be able to log in to OpenShift must be members of this group
  - All of the users mentioned below are in this group
ocp-normal-dev: Normal OpenShift users
  - Regular users of OpenShift without special permissions
  - Contains: normaluser1, teamuser1, teamuser2
ocp-fancy-dev: Fancy OpenShift users
  - Users of OpenShift that are granted some special privileges
  - Contains: fancyuser1, fancyuser2
ocp-teamed-app: Teamed app users
  - A group of users that will have access to the same OpenShift Project
  - Contains: teamuser1, teamuser2
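These AD groups are not visible to OpenShift until they are synced; oc adm groups sync can mirror them into cluster Group objects. A sketch of a sync config, reusing the bind DN, base DN, and attributes from the identity provider below (the file names and attribute choices are assumptions to adapt to your AD layout):

```yaml
# ldap-sync.yaml -- sketch; adjust queries and attributes to your AD layout
kind: LDAPSyncConfig
apiVersion: v1
url: ldap://ad1.dcloud.demo.com:389
bindDN: cn=ldapuser,cn=Users,dc=dcloud,dc=demo,dc=com
bindPassword:
  file: /path/to/bind-password
augmentedActiveDirectory:
  groupsQuery:
    derefAliases: never
    pageSize: 0
  groupUIDAttribute: dn
  groupNameAttributes: [ cn ]
  usersQuery:
    baseDN: cn=Users,dc=dcloud,dc=demo,dc=com
    scope: sub
    derefAliases: never
    filter: (objectclass=person)
    pageSize: 0
  userNameAttributes: [ sAMAccountName ]
  groupMembershipAttributes: [ memberOf ]
```

Run the sync with oc adm groups sync --sync-config=ldap-sync.yaml --confirm; the group DNs can be passed as arguments to limit the sync to the four groups above.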

Create a Secret with the bind password.

oc create secret generic ldapuser-secret --from-literal=bindPassword=yourPassword -n openshift-config

Update the cluster OAuth object with the LDAP identity provider (for example, with oc edit oauth cluster):

spec:
  identityProviders:
  - name: ldap
    challenge: false
    login: true
    mappingMethod: claim
    type: LDAP
    ldap:
      attributes:
        id:
        - distinguishedName
        email:
        - userPrincipalName
        name:
        - givenName
        preferredUsername:
        - sAMAccountName
      bindDN: "cn=ldapuser,cn=Users,dc=dcloud,dc=demo,dc=com"
      bindPassword:
        name: ldapuser-secret
      insecure: true
      url: "ldap://ad1.dcloud.demo.com:389/cn=Users,dc=dcloud,dc=demo,dc=com?sAMAccountName?sub?(memberOf=cn=ocp-user,cn=Users,dc=dcloud,dc=demo,dc=com)"
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
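Once the authentication operator has rolled out the change (the oauth-openshift pods in openshift-authentication restart), a quick CLI check confirms the provider works; normaluser1 is one of the AD accounts listed above, and the API URL is a placeholder:

```shell
oc login https://api.<cluster_domain>:6443 -u normaluser1
oc whoami
oc get identity   # as cluster-admin: should list an ldap-prefixed identity
```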
https://rhthsa.github.io/openshift-demo/infrastructure-authentication-providers.html

Start and Stop VMs by Azure Automation

1. Add the Azure Automation Account

1. Sign in to the Azure portal. Search for Automation Accounts. In the search results, select Automation Accounts.

2. On the next screen, click Add.

3. In the Add Automation Account pane, enter a name for your new Automation account in the Name field. You can’t change this name after it’s chosen.

If you have more than one subscription, use the Subscription field to specify the subscription to use for the new account.

For Resource group, enter or select a new or existing resource group.

For Location, select an Azure datacenter location.

For the Create Azure Run As account option, ensure that Yes is selected, and then click Create.

2. Import AzureRM.Profile

1. From your Automation account, under Shared Resources, select Modules. In the search bar, enter the module name (for example, AzureRM.Profile).

2. On the PowerShell Module page, select Import to import the module into your Automation account.

3. Select I agree to update all of the Azure modules.

4. Wait for the installation to complete.

3. Create Runbook

1. Click Create a runbook.

2. Enter a name for the runbook and select its type. The runbook name must start with a letter and can contain letters, numbers, underscores, and dashes.

3. Copy the code to the code editor.

Workflow Stop-Start-AzureVM 
{ 
    Param 
    (    
        [Parameter(Mandatory=$true)][ValidateNotNullOrEmpty()] 
        [String] 
        $AzureSubscriptionId, 
        [Parameter(Mandatory=$true)][ValidateNotNullOrEmpty()] 
        [String] 
        $AzureVMList="All", 
        [Parameter(Mandatory=$true)][ValidateSet("Start","Stop")] 
        [String] 
        $Action 
    ) 
     
    $connectionName = "AzureRunAsConnection"
    try
    {
        # Get the connection "AzureRunAsConnection"
        $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName

        "Logging in to Azure..."
        Add-AzureRmAccount `
            -ServicePrincipal `
            -TenantId $servicePrincipalConnection.TenantId `
            -ApplicationId $servicePrincipalConnection.ApplicationId `
            -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
    }
    catch
    {
        if (!$servicePrincipalConnection)
        {
            $ErrorMessage = "Connection $connectionName not found."
            throw $ErrorMessage
        }
        else
        {
            Write-Error -Message $_.Exception
            throw $_.Exception
        }
    }
 
    if($AzureVMList -ne "All") 
    { 
        $AzureVMs = $AzureVMList.Split(",") 
        [System.Collections.ArrayList]$AzureVMsToHandle = $AzureVMs 
    } 
    else 
    { 
        $AzureVMs = (Get-AzureRmVM).Name 
        [System.Collections.ArrayList]$AzureVMsToHandle = $AzureVMs 
 
    } 
 
    foreach($AzureVM in $AzureVMsToHandle) 
    { 
        if(!(Get-AzureRmVM | ? {$_.Name -eq $AzureVM})) 
        { 
            throw " AzureVM : [$AzureVM] - Does not exist! - Check your inputs " 
        } 
    } 
 
    if($Action -eq "Stop") 
    { 
        Write-Output "Stopping VMs"; 
        foreach -parallel ($AzureVM in $AzureVMsToHandle) 
        { 
            Get-AzureRmVM | ? {$_.Name -eq $AzureVM} | Stop-AzureRmVM -Force 
        } 
    } 
    else 
    { 
        Write-Output "Starting VMs"; 
        foreach -parallel ($AzureVM in $AzureVMsToHandle) 
        { 
            Get-AzureRmVM | ? {$_.Name -eq $AzureVM} | Start-AzureRmVM 
        } 
    } 
}

4. Click Save, and then Publish (test the runbook before publishing).

4. Test the Runbook

Before you publish the runbook to make it available in production, you should test it to make sure that it works properly. Testing a runbook runs its Draft version and allows you to view its output interactively.

1. In the Azure portal, open your Automation account. Select Runbooks under Process Automation to open the list of runbooks. Click your runbook.

2. Click Edit.

3. Click Test pane to open the Test pane.

4. Enter values for AZURESUBSCRIPTIONID, AZUREVMLIST, and ACTION. Click Start to start the test.

5. When the runbook job completes, close the Test pane to return to the canvas.

5. Create a Schedule in the Azure portal

1. From your Automation account, on the left-hand pane select Schedules under Shared Resources. On the Schedules page, select Add a schedule.

2. Choose Schedule.

3. Create a new schedule.

4. Select whether the schedule runs once or on a recurring schedule by selecting Once or Recurring. If you select Once, specify a start time and then select Create. If you select Recurring, specify a start time. For Recur every, select how often you want the runbook to repeat: by hour, day, week, or month.

  • If you select Week, the days of the week are presented for you to choose from. Select as many days as you want. The first run of your schedule will happen on the first day selected after the start time. For example, to choose a weekend schedule, select Saturday and Sunday.
  • If you select Month, you’re given different options. For the Monthly occurrences option, select either Month days or Week days. If you select Month days, a calendar appears so that you can choose as many days as you want. If you choose a date such as the 31st that doesn’t occur in the current month, the schedule won’t run. If you want the schedule to run on the last day, select Yes under Run on last day of month. If you select Week days, the Recur every option appears. Choose First, Second, Third, Fourth, or Last. Finally, choose a day to repeat on.

5. When you’re finished, select Create.

6. Choose Parameters and run settings.

7. Enter values for AZURESUBSCRIPTIONID, AZUREVMLIST, and ACTION.

8. Click OK.

9. Finished. Follow the steps above to create a corresponding stop schedule.