In this three-part guide series, you will learn the basics of DevOps, the use cases of Platform as a Service (PaaS), and technologies built around it such as Docker and Kubernetes. You will also learn about managing configurations and secret keys on the cloud.
The Software Development Life Cycle (SDLC) is a software development workflow that enterprises have used since the early days of the industry. It is divided into consecutive stages (requirement gathering, technical specification, code authoring, and so on), and a software artifact passes through each stage in series. The software can move to the next stage only after the previous stage is completed and approved, which is why this is also known as the waterfall development model. In those days there was a completely separate team whose role was to build, test, and deploy the software after development was complete and it was ready to ship. Now, imagine following the same process when you have a minimal budget to validate an exciting, potentially game-changing idea. Spending a massive amount of time and money upfront on something whose market size you do not even know is not the most efficient way in my books, and I am sure it is not in yours either.
As you know, the world moved on from traditional development methodologies long ago; those systems do not fit today's rapidly iterating software products and varying customer demands. The best approach is to spend the least money and time possible to quickly collect feedback on your offering and estimate demand. Shipping as fast as possible is necessary, but the software must also be resilient and ready to scale if demand surges: a spike in usage should remain an exciting memory, not a nightmare that brings the business down. Preparedness is equally important.
DevOps has grown out of the traditional SDLC and similar workflows to fit new development models that require a "fail fast and learn fast" approach for delivering highly iterable software products in a short span of time. That will help your business acquire more customers by solving their problems and responding to them quickly. DevOps has also blurred the line between the development and operations teams, unifying both into a single process with a faster iterative workflow that lets teams course-correct through a feedback loop. Your team is the main driver of your business and its growth, so regular communication within the team is essential for building a better culture and happier employees.
It also embraces failures at various stages of the software lifecycle, increasing the overall reliability of the system by making it fault tolerant and reducing downtime. Nobody wants their application to throw errors or become unavailable while customers are on the platform. Unpredictable downtime makes your users unhappy, even angry; a disgruntled customer can leave negative feedback, lowering your search result rank and harming your business. Modern-day DevOps can only be achieved through proper tooling, and it gives special attention to monitoring and reporting on the deployed services (or apps), the underlying network infrastructure, and system performance. Regular health checks for your application and its supporting architecture are important for better reliability.
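To make the health-check idea concrete, here is a minimal sketch in Python. The `/health` endpoint name, the 500 ms latency threshold, and the status labels are illustrative assumptions of mine, not a standard:

```python
import urllib.request
import urllib.error

def check_health(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def evaluate(status_ok: bool, latency_ms: float,
             max_latency_ms: float = 500.0) -> str:
    """Classify a probe result: 'healthy', 'degraded' (slow), or 'down'."""
    if not status_ok:
        return "down"
    return "healthy" if latency_ms <= max_latency_ms else "degraded"
```

A scheduler or cron job could call `check_health("https://example.com/health")` for each service and feed `evaluate`'s labels into whatever alerting tool your team uses.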
Thus DevOps can be characterized by:
- Reduced organizational silos – By closing the gap between different teams and their dedicated roles and forming more heterogeneous teams, DevOps removes the hard boundaries between teams and makes them a collaborative workforce.
- Accept failure as normal – System crashes, hardware faults, and dropped network connections are inevitable. DevOps makes the software more resilient by giving the system the ability to recover from its failures, or by replacing a failed component with a new one through automation.
- Implement gradual change – It is impractical to pull the complete system down to add new features or fix bugs. With DevOps-oriented workflows, software can be updated through canary or rolling releases, reducing the negative impact of new releases and regular maintenance.
- Leverage tooling and automation – Through plug-and-play experimentation, teams can try new software that increases productivity by automating repetitive manual tasks such as provisioning new servers.
- Measure everything – Measuring the important metrics presents the overall state of the application and its supporting architecture, enabling better forecasting of application load and more accurate failure estimation.
Most development teams practicing DevOps can do multiple releases per day and can more easily adopt modern architectures such as microservices, which help them scale fast, cut unnecessary costs, and deliver uninterrupted service to their clients. To remain competitive, your infrastructure must be flexible enough to scale up when demand surges and scale down when traffic cools. Elite software teams practice DevOps, which makes them agile in responding to changes in their systems; some teams go from final commit to release in an hour and are thus able to do multiple releases within a day. As the Accelerate: State of DevOps 2018 report notes in its section on software delivery performance, shown in the table below, delivery performance rises with higher deployment frequency, shorter lead time for changes, and faster time to restore affected services.
DevOps is very useful for teams working on volatile business problems that are susceptible to frequent change. It sharpens the organization's reflexes for progressive alterations inside and outside the system. Most businesses fail to comprehend the massive number of iterations required to turn a system into a useful product or service. You can gain an edge by maintaining a DevOps policy within your startup, making it more agile and more likely to succeed. The most effective DevOps is achieved by setting up a frugal pipeline whose stages a software artifact can pass through multiple times, following a feedback loop.
A serial pipeline moves the software artifact from one phase to another, reporting along the way on the state of the newly introduced changes (bug fixes and feature enhancements) and on their security. This pipeline is most efficient when fully automated, although it need not be. It must be agile enough to accommodate the complete software development process across many projects, so it needs to evolve over time. You must allocate time and resources to researching this pipeline, which may mean introducing new workflows and tools that improve team productivity and efficiency.
Technology is getting more complex every year, making manual labor energy-draining; it causes burnout and directly affects your business through reduced delivery speed. Software delivery and operational performance (SDO) is a concept that has been gaining much attention lately, along with DevOps and Site Reliability Engineering (SRE). Software delivery performance is now a competitive edge that can boost the business and increase the company's valuation through future sustainability.
A frugal pipeline that decreases the time it takes to move from code commit to actual user feedback, steering the organization toward further growth, is the most sought-after and advantageous thing to have.
A typical pipeline has the following stages:
- Code – Teams write the software code, which can live in a single repository or span multiple repositories, the latter being common with microservices. Ideally, if the code is distributed, each repository should have its own pipeline, but that is not mandatory. Code must be written and organized so that it integrates easily with the existing DevOps pipeline. Keep your configuration separate from the code so the same code can run in every possible environment. Secrets must not be accessible to everyone, nor to external entities such as users or agents; only a few people within your team or organization should be able to update or discard them. While coding is an isolated activity, teams can often integrate later pipeline stages early and make deployment even faster. Using test-driven development (TDD), your team can write good, reliable code that will be exhaustively exercised in the testing stage. Unit tests and performance tests can be written to run under a simple script runner, making the testing stage easier and faster to set up.
- Build – Software in the form of written code is of little direct use to the business, since your customers cannot interact with it or benefit from the services it provides. That code must be converted into something tangible, interactive, and immediately usable. Java code is compiled to JAR files, Windows apps are built into EXEs, Linux software is formed into packages; even the most basic build output can be a compressed archive. A build can also produce a virtual machine snapshot, as at Netflix, where builds generate a new Amazon Machine Image (AMI) on AWS. At its simplest, a build can compress existing files and concatenate bundled resources, as with static site builds. A build archive must not contain any development residue (temp directories, for example); it must be a clean, concise output. For some large-scale applications build times can balloon, so keep the time to generate a build output in check to shorten the build stage and speed up the pipeline overall. Some programming environments require developers to bundle the project dependencies with the build artifact and lock those dependencies, including their versions, for easier deployment.
- Test – Although tests are written during development, there must be a specific stage in the DevOps pipeline that exhaustively tests the code for errors and performance. These can be unit tests, system integration tests, or any other kind the programming environment supports. A good strategy is also to check for possible memory leaks at this stage. This stage is the last chance to validate your team's code before it is merged toward the production environment and put in front of your users, so check the newly committed code for potential system failures and degraded performance here. If drastic issues are found, such as failing tests, the pipeline can be stopped manually so your team can review the code, then restarted once the issues are fixed. Sophisticated CI/CD systems automatically pause the DevOps pipeline when failures are found in the testing stage, and a new commit is required to restart it. This ensures that no bug can leak into the production application and cause issues at a later point in time.
- Deploy – This stage replaces a component in the architecture with the tested artifact, shipping bug fixes or new features. With microservices, you can spin up a new VM and deploy the service with the production environment's configuration and secret keys. Your team can do this manually, but most high-performing DevOps teams automate this step, which lets them focus on fixing more bugs and writing more code. There is no point of return after this stage, so it is highly recommended that your software be tested beforehand for bugs and performance issues such as memory leaks. Releases can often be done along with deployment, but it is best to keep the release separate, for the reasons discussed in the next section.
- Release – Software is considered released when it is launched or updated with feature enhancements or bug fixes. Releasing can be part of deployment, but large-scale applications roll out a release partially, to a select group of users first, then slowly extend it to everyone. This strategy ensures that not all of your users are affected at once by major issues that were not caught before the deployment stage. With quick user feedback, the release can be halted before propagating to all users, which can also save your organization's reputation in front of the thousands or millions of users you have.
- Monitor – Many low-performing teams consider the job done once the software is successfully released to the masses and there is no immediate feedback from users. But some issues crop up long after a release. For example, a running service or process may leak memory: it is not evident at first while system resources are abundant, but after a few weeks memory consumption may suddenly claim a large share of resources, leaving your team clueless about the cause. Monitoring and reporting are as essential to software development as testing; would you release untested software that could have numerous issues? Various monitoring tools can watch your system resources as well as the logs generated by the services within your application.
So far you have learned the theory behind DevOps and its various stages, but you must also understand that the DevOps pipeline is not a one-time process. For the highest efficiency, your team must be able to do multiple releases within a day, and each new release triggers the DevOps pipeline. Allocate some budget and system resources within your infrastructure (cloud or on-premises) for running DevOps processes. It is also wise to invest in acquiring new software or subscribing to a Platform as a Service.
In this section, I will describe a more practical approach to DevOps using some well-known software and platform setups. I will dive deeper into DevOps tools in the next blog in this three-part series, but here I will introduce you to some of them.
As you can see in the above figure, DevOps, like any other agile development model, starts with a plan. Planning is crucial to development activities. Launching a new feature to an application running in production, accessed constantly by a large number of users, is no small task; if not planned well, your release may become a Friday-night horror that carries over into the weekend instead of a cherished evening. There are many tools for planning software development, the best known being the Kanban board. You can find open-source implementations of it, but the good news is that many integrated platforms have a Kanban board built in alongside other features.
After you are through with an effective plan for your next release, it's time to write some code. Writing good code calls for a well-organized directory structure and properly named files. Every programming language has its favored editors and development environments, but in every environment there must be an integrated version control system such as SVN or Git. If you choose Git, there are many popular platforms for hosting your repositories. GitHub is the most popular, holding 67 million Git repositories, but it does not allow you to self-host it, which is a deal-breaker for startups and enterprises that are not comfortable hosting their source code with a third party due to compliance restrictions. GitLab is another well-known Git repository platform that is completely open source and can be self-hosted. Whichever you choose, the basic requirement is that all of your code be versioned somewhere with a correct timestamp and author details.
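The Code stage described earlier also stressed keeping configuration separate from the code so the same build runs in every environment. A minimal sketch of environment-based configuration follows; the variable names (`APP_DB_URL`, `APP_DEBUG`) and defaults are illustrative assumptions, not a convention your stack necessarily uses:

```python
import os

# Illustrative settings with safe development defaults.
DEFAULTS = {"APP_DB_URL": "sqlite:///dev.db", "APP_DEBUG": "false"}

def load_config(environ=os.environ) -> dict:
    """Read configuration from environment variables so the same
    code runs unchanged in development, staging, and production."""
    cfg = {key: environ.get(key, default) for key, default in DEFAULTS.items()}
    cfg["APP_DEBUG"] = cfg["APP_DEBUG"].lower() == "true"
    return cfg
```

In production, the deployment tooling would export real values (and inject secrets from a dedicated secret store rather than from plain environment files).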
Written code must be packaged into a form executable in the production environment. The build output is then run as a process, or kept in a directory that a running process can refer to for the required files and resources. Every programming environment has its own build artifact, so this step is not uniform across environments, but the underlying principles are shared: keep the size of the build to a minimum, and lock all required dependencies in a configuration that can be installed separately at deployment time.
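As a sketch of the "clean, minimal build output" principle, here is a small Python build step that zips a project tree while skipping development residue. The excluded directory names are an illustrative set, not an exhaustive one:

```python
import zipfile
from pathlib import Path

# Directory names treated as development residue; adjust per project.
EXCLUDE_DIRS = {"tmp", "__pycache__", ".git", "node_modules"}

def build_archive(src: str, out: str) -> list:
    """Zip the project tree into a clean artifact, skipping
    residue directories. Returns the list of archived paths."""
    src_path, archived = Path(src), []
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src_path.rglob("*")):
            rel = path.relative_to(src_path)
            if path.is_file() and EXCLUDE_DIRS.isdisjoint(rel.parts):
                zf.write(path, str(rel))
                archived.append(str(rel))
    return archived
```

A real build would also emit a dependency lock file alongside the archive so the deploy stage installs exactly the tested versions.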
Every programming language comes with its own test suite for writing test cases, and the tests can be triggered with a script. To run and monitor them in a DevOps pipeline, we can use plugins or wrappers that encapsulate the testing process, generate a test completion report, and pause the pipeline if tests fail. Jenkins is the most popular platform providing test wrappers for many programming languages, such as Java and ASP. It can execute a build job after code is committed to a version control system like GitHub, via webhooks, then trigger the next job in the pipeline as soon as the build completes, taking the build artifact and running the configured test script on it. GitLab, on the other hand, provides its own job scheduler that performs various DevOps operations as a configured pipeline of jobs.
Once it is assured that all tests pass and the application is performing well, we can start the deployment process. Depending on the software, there may be many deployment targets: virtual machines on-premises or in the cloud (AWS, GCP, Azure, etc.), application containers such as LXC or Docker, or a cluster of VMs and containers managed by software like Docker Swarm or Kubernetes. I will discuss each of these platforms in my next blog post, and we will see how these technologies, along with well-set-up cloud infrastructure, can reduce our cost and development effort and, in turn, make our software delivery faster.
Generally, a release can be as simple as redirecting all traffic from the old instances to the new ones, or it may require updating DNS with the new instance's IP address, which can also be that of a load balancer. If you are deploying a canary release, you may redirect only the traffic going to instances in one particular region, then slowly, following your organization's policy, do the same for other regions one by one until you eventually reach a 100% rollout.
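One common way to split traffic for a canary (alongside the region-by-region approach above) is to bucket users deterministically by hashing their id, so each user consistently sees the same release. A minimal sketch, with hypothetical release labels:

```python
import hashlib

def in_canary(user_id: str, percent: int) -> bool:
    """Deterministically map a user into a 0-99 bucket; users in
    buckets below `percent` are routed to the canary release."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def route(user_id: str, percent: int) -> str:
    """Pick the release a given user should be served."""
    return "new-release" if in_canary(user_id, percent) else "stable"
```

Raising `percent` gradually (5, 25, 50, 100) widens the rollout, and because the hash is stable, no user flips back and forth between versions mid-rollout.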
Every system can be tracked for its resource utilization, and every running application process generates (or should generate) some kind of logs. Instead of being ignored, these logs can be stored on a file system or in dedicated log storage. You can also use something like logrotate to keep log file sizes to a minimum, retaining only the most recent entries. Kibana is a very popular log visualization service that presents a neat dashboard for keeping a holistic view of your complete infrastructure and running processes. Logs are also very good raw material for data analytics and machine learning, yielding intriguing insights into your system and user behavior.
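To illustrate what logrotate does under the hood, here is a simplified size-based rotation in Python. Real logrotate adds compression, time-based schedules, and post-rotate hooks; this sketch only shows the shift-and-truncate idea:

```python
from pathlib import Path

def rotate(log_path: str, max_bytes: int, keep: int = 3) -> bool:
    """Rotate when the log exceeds max_bytes: app.log -> app.log.1,
    shifting older copies up and dropping anything past `keep`.
    Returns True when a rotation happened."""
    log = Path(log_path)
    if not log.exists() or log.stat().st_size <= max_bytes:
        return False
    oldest = Path(f"{log_path}.{keep}")
    if oldest.exists():
        oldest.unlink()                    # drop the oldest copy
    for i in range(keep - 1, 0, -1):       # shift app.log.1 -> .2, etc.
        prev = Path(f"{log_path}.{i}")
        if prev.exists():
            prev.rename(f"{log_path}.{i + 1}")
    log.rename(f"{log_path}.1")            # current log becomes .1
    return True
```

The application (or a cron job) would call `rotate` periodically; after rotation the process reopens a fresh log file and disk usage stays bounded at roughly `keep + 1` files.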
As you have come to know, DevOps is not just about tools and techniques; it also embodies organizational practices and reflects the development culture within a team. It embraces collaboration over hard boundaries between teams and individuals, empowering your organization with the speed and agility to stand strong and remain highly competitive in the ever-growing, complex technology space.
Please comment below with your thoughts and suggestions about DevOps, or ask any questions you may have on the subject.