
Perspectives In Cloud Computing

January 8, 2020 (updated March 26, 2021) | Announcements

Many of the early posts about cloud computing introduced a host of new terms and concepts. When cloud technologies first arose, these were difficult to grasp all at once, whether for executives or for those who had only worked in traditional data centers as network professionals or virtualization administrators. Because DevOps teams succeed or fail together, it is important that all members share the same grasp of these new terms and concepts.

In today’s world of computing, everybody is a decision-maker (or at least should be). We will first look at cloud computing from the perspectives of various IT professionals, and then we will pass through the DevOps pipeline to explore different cloud considerations along the way.

At the start of 2020, we have the advantage of hindsight: looking back on the last decade, we can identify the aspects of cloud computing with which all IT professionals should be familiar. The purpose of this blog post is to highlight the cloud technologies needed to succeed today.

Our first consideration is the people and how cloud computing has changed their responsibilities. 

Developers

Development environments have changed to accommodate the migration from monolithic applications deployed on a single server to microservices provisioned automatically onto distributed cloud environments. Historically, we created sandbox environments for testing and went to great lengths to reproduce identical conditions in every stage of deployment (development, testing, staging, and production), down to the hardware. In most cases, developers can now test on any platform that supports containers (which could be as simple as Docker Desktop on Mac or Windows). No more tossing code over the wall: developers see it through to the end and work with a dedicated team to ensure success.
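For instance, a container image definition as small as the following gives every developer the same test environment; the base image and application file here are illustrative assumptions:

```dockerfile
# Minimal sketch of a container image for local testing
# (the base image tag and app.py are hypothetical)
FROM python:3.12-slim

WORKDIR /app
COPY app.py .

CMD ["python", "app.py"]
```

Built once (e.g. with `docker build -t myapp .`), the same image behaves identically on a developer laptop, a CI runner, or a production cluster.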

Operations Professionals

We have also witnessed operations professionals augment their administration and engineering skills with scripting and programming to keep pace with the DevOps movement. Automation is needed to make the environments we deploy code into as predictable as possible. Gone are the days of administrators and engineers performing manual one-off tasks to stand up an environment. Orchestration has become the norm as we complete the pipeline from version control to production release.

Once the heavy lifting has been done, a complex environment can be duplicated very easily through CI/CD (Continuous Integration and Delivery) tools (discussed later) or a Kubernetes Helm chart (a package of templates that describes everything an application needs to run on a cluster). All kinds of new requirements have landed on the plate of operations: cgroup limitations, namespace definitions, version control, more automation scripting, extensive package management across multiple distributions, deeper networking knowledge, and compiling software.
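As a sketch of that duplication, a Helm chart can be installed into any environment with only a small values file overriding its defaults; the chart, registry, and value names below are hypothetical:

```yaml
# values-staging.yaml: per-environment overrides for a hypothetical chart,
# applied with: helm install myapp ./myapp-chart -f values-staging.yaml
replicaCount: 2
image:
  repository: registry.example.com/myapp   # hypothetical image location
  tag: "1.4.2"
service:
  type: ClusterIP
  port: 8080
```

Pointing the same chart at a different values file reproduces the whole environment elsewhere, which is exactly the kind of repeatability operations teams are now expected to deliver.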

Network Engineers

With the rise of cloud instances came the need to connect those systems. By separating network traffic into a control plane (routing/networking information) and a data plane (payload/application traffic), we are now able to define our networking through software networking devices. Even though these software-defined switches ultimately run on hardware, an element of programming has been introduced into the daily life of many network professionals.

OpenFlow compatibility on the devices of most major hardware manufacturers has drawn many network engineers into orchestration efforts. It is now almost universally expected that networking work includes a basic understanding of Python (since it is available on many network devices) and Ansible (because it is built on Python and does not require an agent on the target system or device).
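A minimal sketch of that agentless approach: an Ansible play that ensures a VLAN exists on every device in an inventory group. The group name, module choice, and VLAN details are assumptions for illustration:

```yaml
# playbook.yml: push a VLAN to all devices in the "switches" group
# (the inventory group and VLAN id/name are hypothetical)
- name: Push VLAN configuration
  hosts: switches
  gather_facts: no
  connection: network_cli        # agentless: Ansible connects over SSH
  tasks:
    - name: Ensure VLAN 42 exists
      cisco.ios.ios_vlans:
        config:
          - vlan_id: 42
            name: app-tier
        state: merged            # add/update without wiping other VLANs
```

Run with `ansible-playbook -i inventory playbook.yml`, the same play applies the change consistently across every device in the group.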

Project Managers

Most organizations that develop software now have Scrum Masters rather than Project Managers (PMs); the Scrum Master serves on a Scrum team and represents the team and its efforts to stakeholders. Accountability has become a daily event, and team members celebrate failure and success equally as they version up their software with every feature for all to see and test. In the past, the PM would likely blame somebody for not finishing a task in time for a project milestone; now the culture has completely changed. Scrum Masters use tools to manage Scrum-related activities, rather than Gantt charts (used for project scheduling) with larger milestones and dependent tasks.

End Users 

The availability of services and the lack of downtime are the biggest advantages of choosing applications built on cloud systems. For many consumers these may not be important considerations, but for organizations it is imperative to ensure the availability of services. The Scrum team works very hard to anticipate what a user’s experience will be and bases all of its efforts on this projection, known as a “user story.”

Our second consideration is the essential tools that must be mastered to work successfully in these modern frameworks.

Version Control

When version control is employed, not only is code made available throughout an organization with proper access controls, but every change is recorded and accessible. Before cloud-hosted version control, thousands of hours of work were lost (either buried in a project as commented-out lines or deleted entirely). It was not uncommon for a programmer’s work to exist only on their laptop; if they ever left, so did their code. Any programmer contracting their services today must be able to use the popular version control systems. As a general rule, you should not accept an archive (zip, tar, etc.) file, or any code sent by email, as a deliverable; it should be shared through version control.
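A minimal sketch of that workflow using Git, the most popular of those systems (the file name and commit message are illustrative):

```shell
# Every change is recorded and the full history stays recoverable
git init demo && cd demo
git config user.email "dev@example.com"   # identity recorded with each commit
git config user.name  "Dev"
echo 'print("hello")' > app.py
git add app.py
git commit -m "Add initial app"           # the change is now part of history
git log --oneline                         # history is accessible to everyone
```

Pushed to a shared remote, this history (not an emailed zip file) becomes the deliverable.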

Continuous Integration (CI)

The first step down the pipeline is the “build server,” which takes every version of the code in the version-controlled source repository and runs preliminary tests. This ensures that the code does not negatively affect code submitted by other developers. Regression testing adds tests to each subsequent build: the more features, the more tests. Previously, such quality assurance (QA) measures would delay delivery because they were performed after the software was handed off, only for it to be sent back if a problem was detected. With CI, code progresses only once it passes all of the tests.
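As an illustration, a build server is typically driven by a small configuration file kept in the repository itself. The sketch below uses a Travis-style YAML file; the dependency file and test command are hypothetical:

```yaml
# .travis.yml: run the test suite on every push (a minimal sketch)
language: python
python: "3.9"
install:
  - pip install -r requirements.txt   # project dependencies (hypothetical file)
script:
  - pytest                            # code only progresses if all tests pass
```

Because the configuration is versioned alongside the code, every commit is built and tested the same way, automatically.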

Continuous Delivery (CD)

After the software is built and all of the automated tests pass, the code is made available for user acceptance testing (UAT). At the heart of CD is automation. The main features provided by a CD solution are:

  • Visibility: Everyone on the team can see the entire system and collaborate. 
  • Feedback: All team members are notified immediately of any issues. 
  • Continual Deployment: Any version of the software can be deployed to any environment. 

CD uses orchestration tools (such as Chef, Puppet, SaltStack, and Ansible) to deploy an environment where the new software will run. 
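For example, a short play in one of those orchestration tools can prepare the target environment before each release. This Ansible sketch assumes a hypothetical host group, package, and service name:

```yaml
# deploy.yml: stand up the runtime environment before releasing new software
# (the "app_servers" group and "myapp" service are hypothetical)
- name: Prepare application servers
  hosts: app_servers
  become: yes
  tasks:
    - name: Ensure the runtime is installed
      ansible.builtin.package:
        name: python3
        state: present
    - name: Restart the application service with the new release
      ansible.builtin.service:
        name: myapp
        state: restarted
        enabled: yes
```

Because the play is idempotent, the same definition can deploy any version of the software to any environment, which is what makes the “Continual Deployment” feature above possible.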

CI and CD are usually provided by the same software, such as Travis or Jenkins. Some version control systems have built-in CI/CD tools. 

Types of Cloud

A software solution will be deployed onto one of five types of cloud platforms: 

  • On-Premise: Systems hosted by the organization itself. 
  • Co-location: Nothing is managed for the organization. The organization puts its own hardware in someone else’s controlled environment. The host provides physical security, power, and temperature control.
  • Infrastructure as a Service (IaaS): This includes the hardware (servers, storage, network devices) or their virtual equivalents. Nothing above the infrastructure is managed for the organization, but it is located in a controlled environment that is managed by someone else.
  • Platform as a Service (PaaS): Hardware (servers, storage, network devices) is managed by someone else in a controlled environment. An operating system and environment software (such as middleware and runtime) are also provided by the host.
  • Software as a Service (SaaS): This is for hosted services; everything is provided by the software vendor.

The cloud solution selected will depend upon how much control is needed to successfully provide a service to an end user. 

Microservices

The greatest benefits of cloud computing would not be possible without first breaking an application down into smaller services that work together to make up one larger service. If such a conversion is needed (from an existing monolithic application to a microservice-based one), development and operations must come together once again to create the containers (or pods) that provide those microservices.
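A minimal sketch of that decomposition, using a Compose file with hypothetical image names: three containers (web, api, db) cooperate to deliver what used to be one monolith:

```yaml
# docker-compose.yml: one application split into cooperating services
# (the image names and ports are hypothetical)
services:
  web:                                      # user-facing front end
    image: registry.example.com/shop-web:1.0
    ports:
      - "8080:8080"
    depends_on:
      - api
  api:                                      # business logic, scaled independently
    image: registry.example.com/shop-api:1.0
    environment:
      DB_HOST: db
  db:                                       # persistent state
    image: postgres:16
```

Each service can now be versioned, tested, scaled, and replaced on its own, which is what unlocks the cloud benefits described above.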

Conclusion

Lowering organizational overhead is a recurring theme when IT objectives are discussed. With a well-planned cloud strategy, limiting costs to the actual resources used is not just possible but expected. For this reason, a fundamental understanding of cloud technologies is necessary for an organization to succeed today. All stakeholders and IT professionals should be involved in cloud-related decisions, as each of them is a decision-maker at several points along the software delivery pipeline, if not throughout it. In ten years everything may change, but for now, embracing the cloud is long overdue for anyone who has not already done so.

Karl Clinger, LF Instructor
Karl is currently the Department of Defense Cybersecurity Coordinator at Oklahoma State University Institute of Technology and CEO of Enterprise Linux Professionals. After becoming a Red Hat Certified Architect (RHCA) with an additional certificate of expertise in SELinux, and consulting at many Fortune 500 companies (and other agencies), he went on to become a trainer for The Linux Foundation. His prominent projects span energy, languages, health, cybersecurity, and travel. He has lived internationally, and he enjoys providing new experiences for his family most of all.
Karl teaches the Linux Foundation cloud-related courses Kubernetes Administration (LFS458), Software Defined Networking Essentials (LFS465), and Open Source Virtualization (LFS462). Over the last 15 years, Karl has personally assisted thousands of students on their journey to learn Linux-related technologies. “No matter which direction I run, I always end up back in education. I forage for new technology and then bring it back to share with others. This must be my calling in life.” – Karl Clinger
