Introduction to Container Services
February 13th, 2019

Containers provide a standard way to package your application’s code, configurations, and dependencies into a single object. Containers share an operating system installed on the server and run as resource-isolated processes, ensuring quick, reliable, and consistent deployments, regardless of environment.
How they work
Instead of virtualizing the hardware stack, as with the virtual machine approach, containers virtualize at the operating system level, with multiple containers running directly atop the OS kernel. This makes containers far more lightweight: they share the OS kernel, start much faster, and use a fraction of the memory needed to boot an entire OS.
Why Containers
Consistent Environment
Containers give developers the ability to create predictable environments that are isolated from other applications. Containers can also include software dependencies needed by the application, such as specific versions of programming language runtimes and other software libraries.
Run Anywhere
Containers are able to run virtually anywhere, greatly easing development and deployment: on Linux, Windows, and Mac operating systems; on virtual machines or bare metal; on a developer’s machine or in data centers on-premises; and of course, in the public cloud.
Isolation
Containers virtualize CPU, memory, storage, and network resources at the OS-level, providing developers with a sandboxed view of the OS logically isolated from other applications.
Build & Deploy Containers with Docker
Docker provides container software that is ideal for developers and teams looking to get started with and experiment on container-based applications. Docker helps you create and deploy software within containers. It’s an open-source collection of tools that help you “Build, Ship, and Run any App, Anywhere”.
With Docker, you create a special file called a Dockerfile. A Dockerfile defines a build process which, when fed to the ‘docker build’ command, produces an immutable Docker image. When you want to start it up, use the ‘docker run’ command to run it anywhere the Docker daemon is supported and running. Docker also provides a cloud-based repository called Docker Hub; you can think of it as GitHub for Docker images. You can use Docker Hub to store and distribute the container images you build.
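As a minimal sketch of that workflow (the app.py script, the image tag myapp:1.0, and the Docker Hub account myuser are all illustrative):

```shell
# Write a minimal Dockerfile for a hypothetical Python script
cat > Dockerfile <<'EOF'
FROM python:3.7-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
EOF

# 'docker build' turns the Dockerfile into an immutable image
docker build -t myapp:1.0 .

# 'docker run' starts it anywhere the Docker daemon is running
docker run --rm myapp:1.0

# Tag and push the image to Docker Hub for distribution
docker tag myapp:1.0 myuser/myapp:1.0
docker push myuser/myapp:1.0
```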
Manage Containers with Kubernetes
Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. Google open-sourced the Kubernetes project in 2014.
Kubernetes has a number of features. It can be thought of as:
- a container platform
- a microservices platform
- a portable cloud platform and a lot more.
Kubernetes provides a container-centric management environment. It orchestrates computing, networking, and storage infrastructure on behalf of user workloads. This provides much of the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), and enables portability across infrastructure providers.
Kubernetes is an open source container orchestration platform, allowing large numbers of containers to work together in harmony, reducing operational burden. It helps with things like:
- Running containers across many different machines
- Scaling up or down by adding or removing containers when demand changes
- Keeping storage consistent with multiple instances of an application
- Distributing load between the containers
- Launching new containers on different machines if something fails
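The points above map onto a single Deployment manifest; a hedged sketch (the name web and the nginx image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # Kubernetes keeps three copies running across machines,
                            # relaunching them elsewhere if a node fails
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.15   # any container image works here
        ports:
        - containerPort: 80
```

Applying it with ‘kubectl apply -f deployment.yaml’ starts the replicas; ‘kubectl scale deployment web --replicas=5’ adjusts the count when demand changes.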
Container Management with Docker Swarm
Kubernetes isn’t the only container management tool around. Docker also has its own native container management tool called Docker Swarm. It lets you deploy containers as Swarms that you can interact with as a single unit, with all the container management taken care of. To be clear, Kubernetes does not interact with Docker Swarm in any fashion, only the Docker engine itself.
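A minimal sketch of interacting with a swarm as a single unit (the service name web and the nginx image are illustrative):

```shell
# Turn the current Docker engine into a swarm manager
docker swarm init

# Deploy a replicated service across the swarm
docker service create --name web --replicas 3 -p 80:80 nginx

# Scale the service when demand changes
docker service scale web=5

# List the services the swarm is managing
docker service ls
```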
Kubernetes versus Docker Swarm
Though both open-source orchestration platforms provide much of the same functionality, there are some fundamental differences in how they operate. This section compares notable features of Docker Swarm and Kubernetes and the strengths and weaknesses of choosing one platform over the other.
Application definition
Kubernetes: An application can be deployed in Kubernetes utilizing a combination of services (or microservices), deployments, and pods.
Docker Swarm: Applications can be deployed as microservices or services in a swarm cluster. YAML (YAML Ain’t Markup Language) files can be used to define multi-container applications, and Docker Compose can install the application.
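A hedged sketch of such a YAML definition for a hypothetical two-service application (both image names are illustrative):

```yaml
version: "3"
services:
  web:
    image: nginx:1.15        # illustrative front end
    ports:
      - "80:80"
    depends_on:
      - api
  api:
    image: myuser/api:1.0    # hypothetical application image
```

With Docker Compose this runs locally via ‘docker-compose up’; the same file can be deployed to a swarm with ‘docker stack deploy -c docker-compose.yml myapp’.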
Networking
Kubernetes: The networking model is a flat network that allows all pods to interact with one another. Network policies specify how pods may communicate with each other. The flat network is typically implemented as an overlay. The model requires two CIDRs: one for Services and one from which pods acquire their IP addresses.
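For illustration, kubeadm makes the two address ranges explicit at cluster bootstrap (the CIDR values below are common defaults, not requirements):

```shell
# One CIDR for Service virtual IPs, one from which pods acquire addresses;
# 10.244.0.0/16 matches the default of the flannel overlay network
kubeadm init \
  --service-cidr 10.96.0.0/12 \
  --pod-network-cidr 10.244.0.0/16
```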
Docker Swarm: A node joining a swarm cluster generates an overlay network for services that spans every host in the swarm, plus a host-only Docker bridge network for containers. Users can choose to encrypt container data traffic when creating an overlay network of their own.
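A sketch of opting into encrypted traffic when creating an overlay network yourself (the network and service names are illustrative):

```shell
# Create a user-defined overlay network with encrypted data traffic
# (run this on a swarm manager node)
docker network create --driver overlay --opt encrypted secure-net

# Attach a service to the encrypted network
docker service create --name web --network secure-net nginx
```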
Scalability
Kubernetes: For distributed systems, Kubernetes is more of an all-in-one framework. It is a complex system because it provides strong guarantees about the cluster state and a unified set of APIs. This slows down container scaling and deployment.
Docker Swarm: Compared to Kubernetes, Docker Swarm can deploy containers much faster, which allows faster reaction times when scaling on demand.
High Availability
Kubernetes: Pods are distributed among nodes, offering high availability by tolerating application failures. Load-balancing services detect unhealthy pods and remove them, which further supports high availability.
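Unhealthy-pod detection is driven by probes in the pod spec; a hedged snippet (the endpoints and port are hypothetical):

```yaml
# Fragment of a container spec: the kubelet restarts the container when the
# liveness probe fails, and a failing readiness probe removes the pod from
# Service load balancing until it recovers
livenessProbe:
  httpGet:
    path: /healthz        # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready          # hypothetical readiness endpoint
    port: 8080
```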
Docker Swarm: Docker Swarm also offers high availability, as services can be replicated across Swarm nodes. The Swarm manager nodes are responsible for the entire cluster and manage the worker nodes’ resources.
Container Setup
Kubernetes: Kubernetes uses its own YAML, API, and client definitions, each of which differs from the standard Docker equivalents. That is to say, you can use neither Docker Compose nor the Docker CLI to define containers. When switching platforms, YAML definitions and commands need to be rewritten.
Docker Swarm: The Docker Swarm API doesn’t encompass all of Docker’s commands, but it offers much of the familiar functionality from Docker and supports most of the tools that work with Docker. Nevertheless, if the Docker API lacks a particular operation, there is no easy way around it using Swarm.
Load Balancing
Kubernetes: Pods are exposed via a Service, which can be used as a load balancer within the cluster. For incoming traffic, an Ingress is generally used for load balancing.
Docker Swarm: Swarm mode includes a DNS element that distributes incoming requests among the containers backing a service name. Services can be assigned ports automatically or run on ports specified by the user.
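A sketch of the Kubernetes side, pairing a Service with an Ingress (the names and hostname are illustrative; the Ingress API group shown is the one current in early-2019 Kubernetes releases):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web              # load-balances across pods carrying this label
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: web.example.com # illustrative hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
```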
Docker and Kubernetes work at different levels. Under the hood, Kubernetes can integrate with the Docker engine to coordinate the scheduling and execution of containers on nodes (via the kubelet). The Docker engine itself is responsible for running the actual container image built with ‘docker build’. Higher-level concepts such as service discovery, load balancing, and network policies are handled by Kubernetes.
When used together, both Docker and Kubernetes are great tools for developing a modern cloud architecture, but they are fundamentally different at their core.
Openshift Container Platform
Red Hat OpenShift is an open source container application platform based on the Kubernetes container orchestrator for enterprise application development and deployment.
FOR APPLICATION DEVELOPMENT TEAMS
OpenShift Container Platform provides developers with a self-service platform for provisioning, building, and deploying applications and their components. With automated workflows like our source-to-image (S2I) process, it is easy to get source code from version control systems into ready-to-run, Docker-formatted container images. OpenShift Container Platform integrates with continuous integration and continuous delivery (CI/CD) tools, making it an ideal solution for any organization.
FOR I.T. OPERATIONS
OpenShift Container Platform gives IT operations secure, enterprise-grade Kubernetes for policy-based controls and automation for application management. Cluster services, scheduling, and orchestration provide load-balancing and auto-scaling capabilities. Security features prevent tenants from compromising other applications or the underlying host. And because OpenShift Container Platform can attach persistent storage directly to Linux containers, IT organizations can run both stateful and stateless applications on one platform.
Platform as a Service (PaaS) – Kubernetes vs OpenShift
- OpenShift product vs. Kubernetes project
Kubernetes is an open-source project (or even a framework), while OpenShift is a product that comes in many variants. There is an open-source version of OpenShift called OKD. OpenShift Container Platform is a product you can install on your own infrastructure; paid support is included with a subscription, which you must renew for your cluster, and you pay more as your cluster grows. Kubernetes has many distributions, but it is a project, and if something goes wrong you can count mostly on the community or external experts. OKD is free to use and includes most of the features of the commercial product, but you cannot buy support, nor can you use the official Red Hat-based images.
- OpenShift limited installation vs. install Kubernetes (almost) anywhere
If you decide to install OpenShift, you need to use Red Hat Enterprise Linux (or Red Hat Atomic) for OpenShift Container Platform, or additionally CentOS for OKD. You cannot install it on other Linux distributions. Kubernetes, on the other hand, can be installed on almost any Linux distribution, such as Debian and Ubuntu (the most popular ones) and many others.
- OpenShift has more strict security policies than default Kubernetes
This is probably because of the target group for the OpenShift product, but default policies are indeed stricter there than on Kubernetes. For example, most container images available on Docker Hub won’t run on OpenShift, as it forbids running containers as root, and even many official images don’t meet this requirement. That’s why people are sometimes confused and frustrated when they cannot run simple apps the way they used to on Kubernetes.
- Routers on OpenShift vs. Ingress on Kubernetes
Red Hat needed an automated reverse-proxy solution for containers running on OpenShift long before Kubernetes came up with Ingress. So in OpenShift we have Route objects, which do almost the same job as Ingress in Kubernetes. The main difference is that Routes are implemented by the good old HAProxy, which can be replaced by a commercial solution based on F5 BIG-IP. On Kubernetes, however, you have much more choice, as Ingress is an interface implemented by multiple servers, from the most popular NGINX through Traefik, AWS ELB/ALB, GCE, and Kong, to HAProxy as well.
- A different approach to deployments
As with Ingress, OpenShift chose a different way of managing deployments. Kubernetes has Deployment objects (you can also use them in OpenShift, along with all other Kubernetes objects), which are responsible for updating pods in a rolling-update fashion and are implemented internally by controllers. OpenShift has a similar object called DeploymentConfig, implemented not by controllers but by logic based on dedicated pods that control the whole process. It has some drawbacks, but also one significant advantage over the Kubernetes Deployment: you can use hooks to prepare your environment for an update, e.g. by migrating a database schema.
- OpenShift is easier for beginners
Finally, I want to emphasize the difference in user experience. Containers are still new, and a complex, sophisticated management interface makes them even harder to learn and adopt. That’s why I find the OpenShift versions of both the command-line and web interfaces superior to the Kubernetes ones.
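The stricter security policy described above usually surfaces as the root-user restriction, which an image can satisfy by declaring an unprivileged user; a hedged sketch (app.py and the UID are illustrative):

```dockerfile
FROM python:3.7-slim
WORKDIR /app
COPY app.py .
# Create and switch to an unprivileged user; many Docker Hub images skip
# this step and therefore fail under OpenShift's default security policy
RUN useradd --uid 1001 --no-create-home appuser
USER 1001
CMD ["python", "app.py"]
```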