# Installation on OpenShift
This tutorial shows how to manually install Entando into OpenShift 3.11 or 4.x. If you're working with OpenShift 4.6+, it is highly recommended that you install via the Red Hat-certified Entando Operator, which should be available in your OperatorHub through the Red Hat Marketplace. See the dedicated tutorial for instructions specific to the Operator-based install.
# Prerequisites
- An OpenShift installation (3.11 or 4.x)
- The `oc` command line tool
- A Helm 3 client
# Local Installation
If you want to run OpenShift in your local development environment, you can use Minishift (OpenShift 3.11) or CodeReady Containers (OpenShift 4). Use the local development version that matches the cluster where you intend to deploy your application.
Once you've completed the installation above, capture the local IP address of your development instance using `minishift ip` or `crc ip`. You'll need it when configuring your Entando application.
Log in to your OpenShift environment from the command line with `oc login`, using the URL and credentials for your cluster.
# Install the Entando Custom Resource Definitions (CRDs)
Once per cluster, you need to deploy the Entando Custom Resources. This is the only step in this guide that requires cluster-level access. If you are running on Minishift or CRC, make sure you are connected using the administrator login provided when you started your local instance.
- Download the Custom Resource Definitions (CRDs) and deploy them:

```shell
oc apply -n entando -f https://raw.githubusercontent.com/entando/entando-releases/v6.3.2/dist/ge-1-1-6/namespace-scoped-deployment/cluster-resources.yaml
```
- Install the namespace-scoped resources:

```shell
oc apply -n entando -f https://raw.githubusercontent.com/entando/entando-releases/v6.3.2/dist/ge-1-1-6/namespace-scoped-deployment/orig/namespace-resources.yaml
```
# Get your Cluster Default Ingress
If you're deploying on a managed cluster, get the default hostname from your cluster administrator. Entando uses wildcard addressing to connect the different parts of your Entando application, so the default route for applications exposed on your cluster is needed. You'll set this value in the Setup and Deploy steps below.
# Setup and Deploy
- Download and unpack the entando-helm-quickstart release you want to use from here: https://github.com/entando-k8s/entando-helm-quickstart/releases
- See the included README file for more information on the following steps.

```shell
curl -sfL https://github.com/entando-k8s/entando-helm-quickstart/archive/v6.3.2.tar.gz | tar xvz
```
- Change into the new directory
- If you're deploying to a managed cluster:
  - Set `entando.default.routing.suffix` to the default URL of applications deployed in your OpenShift cluster. If you're unsure of this value, check with your cluster administrator.
  - Entando will create applications using that default URL and relies on wildcard DNS resolution.
- If you're using Minishift or CRC:
  - Set `entando.default.routing.suffix` to the IP address you captured earlier plus `.nip.io`. For example, `192.168.64.10.nip.io`.
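As a quick illustration, the routing suffix is just the instance IP with `.nip.io` appended. The IP below is a stand-in for whatever `crc ip` or `minishift ip` returned on your machine:

```shell
# Sketch: derive the entando.default.routing.suffix value.
# 192.168.64.10 stands in for the output of `crc ip` / `minishift ip`.
CRC_IP=192.168.64.10
SUFFIX="${CRC_IP}.nip.io"
echo "$SUFFIX"    # prints 192.168.64.10.nip.io
```

nip.io resolves any name of the form `<anything>.<ip>.nip.io` back to `<ip>`, which is what gives Entando its wildcard DNS on a local cluster.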
- Create the Entando namespace:

```shell
oc new-project entando
```
- Update the Helm dependencies:

```shell
helm dependency update
```
- Run Helm to generate the template file:

```shell
helm template my-app --namespace=entando ./ > my-app.yaml
```

  - If you're using Helm 2 instead of Helm 3, replace `helm template my-app` with `helm template --name=my-app`.
- Deploy Entando:

```shell
oc create -f my-app.yaml
```
- If you see the error `no matches for kind "Deployment" in version "extensions/v1beta1"`, edit my-app.yaml and set `apiVersion: apps/v1` for the Deployment.
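If you'd rather script that fix, a one-line sed does it. The sketch below uses a stand-in file (`my-app-sample.yaml`, created by the printf) so it is self-contained; the sed line is what you'd run against your real my-app.yaml:

```shell
# Stand-in for the offending section of the generated manifest:
printf 'apiVersion: extensions/v1beta1\nkind: Deployment\n' > my-app-sample.yaml
# The actual fix: rewrite the deprecated apiVersion in place.
sed -i 's|apiVersion: extensions/v1beta1|apiVersion: apps/v1|' my-app-sample.yaml
head -n 1 my-app-sample.yaml    # prints: apiVersion: apps/v1
```

Note that `sed -i` as written assumes GNU sed; on macOS, use `sed -i ''` instead.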
- Watch Entando start up:

```shell
oc get pods -n entando --watch
```
- This step is complete when the `quickstart-composite-app-deployer` pod shows a status of `Completed`. For example:

```
quickstart-composite-app-deployer-0547   0/1   Completed   0   7m44s
```

  - The full pod name will differ, but by default it will start with `quickstart-composite-app-deployer`.
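If you'd rather check for completion without watching, you can filter the pod listing. In the self-contained sketch below, the printf stands in for real `oc get pods -n entando` output; the grep chain is what you'd apply to the real listing:

```shell
# Count deployer pods that have completed (pod name prefix from this guide).
printf 'quickstart-composite-app-deployer-0547   0/1   Completed   0   7m44s\n' \
  | grep 'quickstart-composite-app-deployer' \
  | grep -c 'Completed'    # prints 1
```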
- Check for the Entando ingresses using `oc describe ingress -n entando`. This is a snippet:

```
Name:             quickstart-ingress
Namespace:        entando
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                                     Path               Backends
  ----                                     ----               --------
  quickstart-entando.192.168.64.10.nip.io
                                           /entando-de-app    quickstart-server-service:8080 (<none>)
                                           /digital-exchange  quickstart-server-service:8083 (<none>)
                                           /app-builder/      quickstart-server-service:8081 (<none>)
```
The host in the configuration above plus `/app-builder/` (the trailing slash is important) will allow you to log into your environment. For example, http://quickstart-entando.192.168.64.10.nip.io/app-builder/
# Appendix A - Troubleshooting and Common Errors
# Permission Errors
If you get OpenShift permission errors deploying your Entando application into your OpenShift namespace, make sure your user has the `bind` verb on Roles in the namespace you're deploying to. Ultimately, you need `oc auth can-i escalate role` to return `yes`. That access is only required in the namespace where you are deploying your Entando application; no cluster-level access is required.
Check with your cluster administrator if you need help assigning these roles. Generally this requires creating a role with those permissions, preferably a ClusterRole, and then, depending on how your administrators manage security, giving your Entando installer that role in the target namespace. For example, assuming the ClusterRole is named `entando-installer` and the user's name is `john`, on OpenShift the rolebinding would be created with:

```shell
oc policy add-role-to-user entando-installer john -n <your-namespace>
```
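For reference, such a ClusterRole could look like the sketch below. The exact resource and verb lists are an assumption based on the `bind`/`escalate` requirement described above; adjust them to your cluster's security policy:

```yaml
# Hypothetical ClusterRole for the Entando installer user.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: entando-installer
rules:
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["roles", "rolebindings"]
    # "escalate" and "bind" are the verbs checked by `oc auth can-i escalate role`
    verbs: ["get", "list", "create", "update", "patch", "escalate", "bind"]
```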
Before installing, we suggest running `oc auth can-i escalate role` as the given user in the target namespace. If it returns `yes`, you should be able to install.
# Forbidden Error installing Entando Custom Resource Definitions in Minishift or CRC
If you get an error like the one below while installing the CRDs in your local instance, you need to log in using the administrator role.
```
/opt/ocInstallLocal$ oc create -f dist/crd/
Error from server (Forbidden): error when creating "dist/crd/EntandoAppCRD.yaml": customresourcedefinitions.apiextensions.k8s.io is forbidden: User "developer" cannot create resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
```
The administrator credentials are printed when you start your local cluster, in a message like this one:

```
To access the cluster, first set up your environment by following 'crc oc-env' instructions
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, username is 'kubeadmin' and password is xxxx-xxxx-xxxx-xxxx
```
# Application is not available when accessing app builder
If you get the message "Application is not available" when accessing the app-builder make sure to include a trailing slash in the URL. For example, http://quickstart-entando.192.168.64.10.nip.io/app-builder/
# Network Issues
If you see errors when images are being retrieved (such as ErrImagePull or ImagePullBackOff), you may want to start CRC using `crc start -n "8.8.8.8"` or configure the nameserver using `crc config set nameserver 8.8.8.8` before running `crc start`. This allows the cluster to perform DNS lookups via Google's public DNS server.
If you're on Windows, also be aware that Minishift and CRC rely on Windows Hyper-V by default. This can result in network issues when the host computer is restarted.