Many of the early posts about cloud computing introduced a flood of new terms and concepts. These were difficult to grasp all at once, whether you were an executive, a network professional, a virtualization administrator, or someone who had only worked in traditional data centers. Because DevOps teams either succeed or fail together, it is important that all members share the same grasp of the new terms and concepts.
In today’s world of computing, everybody is a decision-maker (or at least they should be). We will first look at cloud computing from the perspectives of various IT Professionals and then we will pass through the DevOps pipeline to explore different cloud considerations along the way.
At the start of 2020, we can look back on the last decade with the advantage of hindsight and identify the aspects of cloud computing with which all IT professionals should be familiar. The purpose of this blog post is to highlight the cloud technologies an organization needs in order to succeed today.
Our first consideration is the people and how cloud computing has changed their responsibilities.
Development environments have changed to accommodate the migration from monolithic applications deployed on a server to microservices provisioned onto a distributed cloud environment in an automated way. Historically we created sandbox environments for testing and made extreme efforts to create identical conditions in all stages of deployment (development, testing, staging, and production), down to the hardware. In most cases, developers can now test on any platform that supports containers (which could be as simple as Docker Desktop on Mac or Windows). No more tossing code over the wall: developers can see it through to the end and work with a dedicated team to ensure success.
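Container-based testing makes "identical conditions" a matter of pinning one image rather than cloning hardware. As a minimal sketch (the image name, repository path, and test command below are illustrative placeholders, not anything prescribed by a particular tool), a developer-side helper might assemble a `docker run` invocation that every team member can execute the same way on any host OS:

```python
import shlex

def container_test_command(image: str, repo_dir: str, test_cmd: str) -> list[str]:
    """Build a `docker run` invocation that mounts the source tree
    read-only and runs the test suite inside the container, so every
    developer tests against the same image regardless of host OS."""
    return [
        "docker", "run", "--rm",
        "-v", f"{repo_dir}:/src:ro",   # mount the checkout read-only
        "-w", "/src",                  # run from the project root
        image,
    ] + shlex.split(test_cmd)

cmd = container_test_command("python:3.12-slim", "/home/dev/myapp", "pytest -q")
print(" ".join(cmd))
```

Because the image tag pins the runtime, the same command reproduces the same test environment on a laptop or a build agent.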
We have also witnessed operations professionals augment administration and engineering skills to include scripting and programming to keep current with the DevOps movement. Automation is needed to make the environments we deploy code in as predictable as possible. Gone are the days of administrators and engineers who perform manual one-off tasks to stand up an environment. Orchestration has become the norm as we complete the pipeline from version control to production release.
Once the heavy lifting has been done, a complex environment can be duplicated very easily through CI/CD (Continuous Integration and Delivery) tools (discussed later) or a Kubernetes Helm chart (a package of templated Kubernetes manifests that describes everything needed to deploy an application). All kinds of new requirements have landed on the plate of operations. These include cgroup limitations, namespace definitions, version control, more automation scripting, extensive package management for multiple distributions, increased networking understanding, and compiling software.
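The duplication trick is parameterization: one templated definition, many environments. Helm does this with Go templating over full Kubernetes manifests; the sketch below imitates the idea with Python's `string.Template` (the manifest fields and values are illustrative, not a real chart):

```python
from string import Template

# A trimmed-down stand-in for a Helm template: one parameterized
# deployment definition that can be rendered for any environment.
DEPLOYMENT_TEMPLATE = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
spec:
  replicas: $replicas
  template:
    spec:
      containers:
      - name: $name
        image: $image
""")

def render(values: dict) -> str:
    """Substitute environment-specific values into the shared template."""
    return DEPLOYMENT_TEMPLATE.substitute(values)

staging = render({"name": "web", "replicas": 2, "image": "web:1.4.0"})
production = render({"name": "web", "replicas": 10, "image": "web:1.4.0"})
```

Only the values change between environments; the structure of the deployment is written once.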
With the rise of cloud instances came the need to connect those systems together. By separating network traffic into a control plane (routing/networking information) and a data plane (payload/application traffic), we are now able to define our networking through software. Even though these software switches ultimately run on physical hardware, an element of programming has been introduced into the daily life of many network professionals.
OpenFlow compatibility on the devices of most major hardware manufacturers has drawn many network engineers into orchestration efforts. Almost universally, networking is now expected to include a basic understanding of Python (the de facto language of network automation, with an interpreter shipped on many network operating systems) and Ansible (which is written in Python and does not require an agent on the target system/device).
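The core pattern behind agentless tools like Ansible is declarative and idempotent: describe the desired configuration, compute the difference from what is running, and apply only that. A minimal sketch of the diff step (the configuration lines are made-up examples, not real device syntax requirements):

```python
def config_diff(running: set[str], desired: set[str]) -> dict:
    """Compare a device's running configuration lines against the
    desired state and return only the changes needed -- the idempotent
    pattern network automation tools apply on each run."""
    return {
        "add":    sorted(desired - running),
        "remove": sorted(running - desired),
    }

running = {"vlan 10", "vlan 20", "interface Gi0/1 shutdown"}
desired = {"vlan 10", "vlan 30"}
diff = config_diff(running, desired)
```

Running the same desired state against an already-converged device produces an empty diff, which is why these playbooks are safe to re-run.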
Most organizations that develop software now have SCRUM Masters, instead of Project Managers (PM), who serve on a SCRUM Team and represent their team and their efforts to stakeholders. Accountability has become a daily event, and team members celebrate failure and success equally, as they version-up their software with every feature for all to see and test. In the past, the PM would likely blame somebody for not getting some task done on time for a project milestone, but now the culture has completely changed. SCRUM Masters use tools to manage SCRUM related activities, rather than Gantt charts (used for project scheduling) for larger milestones with dependent tasks.
The availability of services and the lack of downtime are the biggest advantages of choosing to use applications built on cloud systems. For many consumers, these may not be important considerations, but for organizations, it is imperative to ensure the availability of services. The SCRUM Team works very hard to anticipate what a user’s experience will be and bases its efforts on this projection, known as a “user story.”
Our second consideration is the essential tools that need to be mastered to work successfully in the newer more modern frameworks.
When version control is employed, not only is code made available throughout an organization with proper access controls, but every change is recorded and accessible. Prior to cloud-hosted version control, thousands of hours of work and code were lost (either buried in a project as commented lines or deleted entirely). It was not uncommon for a programmer to have all of their work only on their laptop; if they ever left, so did their code. Any programmer contracting their services today must be able to use the popular version control systems. As a general rule, you should not accept an archive (zip, tar, etc.) file or code attached to an email as a deliverable; deliverables should be shared through version control.
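The reason "every change is recorded" holds in a system like Git is that file contents are content-addressed: the identifier of each stored version is a hash derived from the bytes themselves, so any edit yields a new, traceable object. A sketch of how Git computes a blob's ID:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute the ID Git assigns to a file's contents: SHA-1 over the
    header 'blob <size>\\0' followed by the raw bytes. Identical content
    always hashes to the same ID; any change produces a new one."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

print(git_blob_id(b"hello\n"))
```

This matches what `git hash-object` reports for the same bytes, which is why a repository can detect and deduplicate identical files for free.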
Continuous Integration (CI)
The first step down the pipeline is the build server, which takes every version of the code committed to the version-controlled source repository and runs preliminary tests against it. This ensures that new code does not negatively affect code submitted by other developers. Regression testing adds tests to be run with each subsequent build: the more features, the more tests. Previously, such quality assurance (QA) measures delayed delivery because they were performed after the software was handed off, only for it to be sent back if a problem was detected. With CI, code progresses only once it passes all of the tests.
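The gate itself is simple: run every registered test and block the build on any failure. A toy sketch of that gate (the test names and checks are placeholders; real build servers run full suites under frameworks like pytest or JUnit):

```python
def run_pipeline(tests: dict) -> bool:
    """Run every registered test; the build progresses only if all pass.
    As features accumulate, so does this collection of tests -- that
    growth is the regression suite."""
    results = {name: test() for name, test in tests.items()}
    failures = [name for name, ok in results.items() if not ok]
    if failures:
        print("build blocked, failing tests:", failures)
        return False
    return True

tests = {
    "adds":    lambda: 1 + 1 == 2,
    "concats": lambda: "a" + "b" == "ab",
}
assert run_pipeline(tests)
```

One failing check is enough to stop the version from progressing, which is what keeps broken code from reaching later stages.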
Continuous Delivery (CD)
After the software is built and all of the automated tests pass, the code is made available for user acceptance testing (UAT). At the heart of CD is automation. The main features provided by a CD solution are:
- Visibility: Everyone on the team can see the entire system and collaborate.
- Feedback: All team members are notified immediately of any issues.
- Continual Deployment: Any version of the software can be deployed to any environment.
CD uses orchestration tools (such as Chef, Puppet, SaltStack, and Ansible) to deploy an environment where the new software will run.
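What those orchestration tools share is convergence toward a declared desired state: each run compares the environment to the declaration and performs only the actions needed, so re-running a deployment is harmless. A toy sketch of that loop (the state keys and values are invented for illustration):

```python
def converge(current: dict, desired: dict) -> list[str]:
    """Bring an environment from its current state to the declared
    desired state, returning the actions taken. An already-converged
    environment yields no actions (idempotence)."""
    actions = []
    for key, want in desired.items():
        if current.get(key) != want:
            actions.append(f"set {key}={want}")
            current[key] = want
    return actions

env = {"nginx": "absent", "app_version": "1.3"}
first = converge(env, {"nginx": "installed", "app_version": "1.4"})
second = converge(env, {"nginx": "installed", "app_version": "1.4"})  # no drift left
```

The second run returning nothing to do is the property that lets CD deploy "any version to any environment" without fear of compounding side effects.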
CI and CD are usually provided by the same software, such as Travis CI or Jenkins. Some version control systems have built-in CI/CD tooling.
Types of Cloud
A software solution will be deployed onto one of five types of cloud platforms:
- On-Premise: These are the organization’s hosted systems.
- Co-location: Nothing is managed for the organization. The organization puts its own hardware in someone else’s controlled environment. The host provides physical security, power, and temperature control.
- Infrastructure As A Service (IAAS): The host manages the hardware (servers, storage, network devices) or their virtual equivalents in a controlled environment; the organization manages everything from the operating system up.
- Platform As A Service (PAAS): Hardware (servers, storage, network devices) are managed by someone else in a controlled environment. An operating system and environment software (such as middleware and runtime) are also provided by the host.
- Software As A Service (SAAS): This is for hosted services. Everything is provided by the software vendor.

The cloud solution selected will depend upon how much control is needed to successfully provide a service to an end-user.
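The five models above differ only in where the line of responsibility sits in the stack. The mapping below is an informal summary of that split (layer names and the exact boundaries are a simplification, not a vendor contract):

```python
# Rough responsibility split per service model: who manages each layer.
STACK = ["facility", "hardware", "virtualization", "os", "runtime", "application"]

PROVIDER_MANAGES = {
    "on-premise":  set(),                                        # you manage it all
    "co-location": {"facility"},                                 # power, cooling, security
    "iaas":        {"facility", "hardware", "virtualization"},
    "paas":        {"facility", "hardware", "virtualization", "os", "runtime"},
    "saas":        set(STACK),                                   # vendor manages it all
}

def managed_by(model: str, layer: str) -> str:
    """Return which party manages a given layer under a given model."""
    return "provider" if layer in PROVIDER_MANAGES[model] else "customer"
```

Picking a model then amounts to deciding which layers you need to control yourself.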
The greatest benefits of cloud computing would not be possible without first breaking down an application into smaller services that work together to make up one big service. If such a conversion is needed (from an existing monolithic application to one built on microservices), then development and operations need to come together once again to create containers (or pods) that provide those microservices.
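At its simplest, the decomposition means one deployable unit becomes several small ones that compose to the same result. The sketch below simulates this with plain functions (in practice each service would run in its own container behind an API; the checkout example is entirely made up):

```python
# Monolith: one deployable unit does everything.
def monolith_checkout(cart: list[float]) -> str:
    total = sum(cart)
    return f"charged ${total:.2f}"

# Microservices: the same capability split into independently
# deployable pieces, each with a single responsibility.
def pricing_service(cart: list[float]) -> float:
    return sum(cart)

def payment_service(amount: float) -> str:
    return f"charged ${amount:.2f}"

def checkout(cart: list[float]) -> str:
    return payment_service(pricing_service(cart))
```

The externally visible behavior is unchanged, but pricing and payment can now be scaled, deployed, and updated independently.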
Lowering organizational overhead is a recurring theme when IT objectives are discussed. With a well-planned cloud strategy, limiting costs to the resources actually used is not just possible but expected. For this reason, a fundamental understanding of cloud technologies is necessary for an organization to succeed today. All stakeholders and IT professionals should be involved in cloud-related decisions, as they are all decision-makers at several points along the software delivery pipeline, if not the entire thing. In ten years everything may change, but for now, embracing cloud is long overdue if it has not been done already.