It is rock-and-roll folklore that happens to be true. In the 1980s, Van Halen's contracts required that brown M&Ms be removed from their dressing room at every tour venue, or the show promoter would forfeit their money. The 53-page typewritten rider stipulated that, along with a wide selection of beverages and food, M&Ms must be provided, but absolutely no brown ones. Years later, David Lee Roth charmingly explained the truth behind this clause in a video: it was not a silly rockstar excess, but an intelligent safety check. Simply put, if the band found brown M&Ms in the dressing room, they would assume the promoter had not taken care of all the electrical and mechanical safety conditions in the rider. The band would then go through everything with a fine-tooth comb to ensure a safe and flawless show.
In other words, it rests on a simple assumption: if someone has taken care of the small stuff, they can be trusted to take care of the big things. Just like Van Halen, check whether your outsourcing partner has done the small things right. If they have, you can rest assured that they will take care of the big things too.
Access to everyone on the team
Did the outsourcing company set up a meeting early on to introduce everyone on the team? Such meetings are far more effective when done over video. You should have the details to reach everyone on the team: their email addresses, phone numbers, Skype handles, and so on. Easy access increases communication between the teams. Highly collaborative companies set up Slack channels so team members can communicate instantly. Do you have easy access to the provider's senior management? The provider's leadership should check in with you periodically, and when needed, you should be able to get their senior management's attention.
Transparency in daily activities
You should know what your outsourced team does every day. Though they may be thousands of miles away and separated by time zones, you should get brief but crisp updates each day, on Slack or via email. Your daily stand-up can include them so they deliver those updates in person. The remote team should be checking code into your repository every day. Weekly timesheets with a judicious amount of detail will give you better insight into the time spent on various activities throughout the week.
Empowered Client Partner/Project Manager
Your project manager must earn your trust to make decisions on their end, and to demand changes on your side, to ensure mutual success. While you have access to the whole team, who are hyper-focused on coding, testing, and so on, you need a client partner who shares your perspective to make everyday tactical decisions, someone who does not lose sight of the forest for the trees. The project manager should communicate specifically, concisely, and realistically about what each side needs and expects from the other. Do they take the liberty of suggesting process changes? To put it crudely, while you may have many backs to pat, you need one throat to choke.
The flexibility of the engagement
Good partners keep the engagement flexible for both sides. Does your outsourcer lock you down with long-term commitments and penalties? An outsourcing provider should be agile in terms of process, contracts, and other demands. How easy is it for you to scale your team up or down on relatively short notice, say, weeks rather than months?
How well do they treat their employees
“Customers will never love a company until the employees love it first.” — Simon Sinek
Companies that treat their employees well will certainly treat their clients well and value them. When employees are given trust, respect, and dignity, they perform at their best, and high-performing teams produce results that matter to you. See whether your outsourcing vendor provides its employees a good work/life balance, continued career training, rewards, and recognition.
In summary, little things make big things happen. See if your outsourcer takes care of some of these small things. If they do, you can trust them to take care of the more complex and critical things too.
Have you missed the first part of this blog series on the five DevOps tools you simply cannot ignore? Click here to read it. In this second part of the two-part series, I will focus on SaltStack and Kubernetes.
DevOps specializes in continuous change and requires automation to manage the complexity and scale of both the infrastructure and the code generated in a typical enterprise software development scenario. On top of this, speed has become paramount to success, which is why many organizations view infrastructure as a means to an end, the 'end' being quick development, testing, and deployment of applications. However, while web-scale organizations and enterprise IT want to push development faster, infrastructure at scale is complex, to say the least. As a result, the DevOps approach becomes difficult, and it helps to have automation built for it.
SaltStack is open-source software for modern IT automation. Over the last five years, it has been used by thousands of DevOps and enterprise IT organizations to automate the management of data center infrastructure and application environments. Analysts vouch for Salt as among the most intelligent, powerful, and flexible open-source tools, for reasons such as:
Provisioning, control, and configuration of any cloud system;
Event-driven, intelligent orchestration for infrastructure security through audit, remediation, and policy compliance;
Container management and introspection.
Salt is equally suited to applying a patch to ten thousand systems, quickly delivering code to production, and auditing and securing code in a Docker container. In a way, Salt is the answer of choice for efficient automation of IT operations at scale.
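As a minimal sketch of how this works in practice, a Salt state file declares a desired outcome once and can be applied to any number of minions at the same time. The file path, package, and service names below are illustrative, not from the original text:

```yaml
# /srv/salt/patch.sls -- hypothetical state: keep a package patched
# and restart the dependent service whenever it changes
openssl-patch:
  pkg.latest:
    - name: openssl

nginx-service:
  service.running:
    - name: nginx
    - watch:
      - pkg: openssl-patch
```

Running `salt '*' state.apply patch` from the master would then apply the same patch state to every connected minion, whether that is ten systems or ten thousand.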
To summarize, SaltStack Enterprise software helps DevOps organizations by orchestrating the efficient movement of code into production and by keeping complex infrastructures fine-tuned for optimal business service and application delivery. SaltStack orchestrates the DevOps value chain and helps to deploy and configure dynamic applications, and the general-purpose infrastructure they run on, faster and easier than ever.
Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers. Kubernetes helps to deploy applications rapidly, scale as required and roll out new features.
To understand what Kubernetes offers the DevOps world, it is important to rewind a bit. In the past, applications were installed on a host using an operating system. This entangled the applications' executables, configurations, libraries, and life cycles with each other and with the operating system. In the Kubernetes world, containers are deployed using operating-system-level virtualization rather than hardware virtualization. Since containers are isolated from the host, they have their own file systems and cannot view each other's processes. And since they are decoupled from the underlying infrastructure, they can easily be ported across clouds and operating systems. This works powerfully well because application container images are created at build/release time rather than deployment time, decoupling applications from infrastructure. The result is rapid application development and deployment, along with continuous development, integration, and deployment. Kubernetes also provides consistency across development, testing, and production.
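To make this concrete, a minimal Deployment manifest shows how Kubernetes describes an application declaratively, decoupled from any particular host. The names and image below are illustrative:

```yaml
# deployment.yaml -- hypothetical three-replica web deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # container image built at release time, not deploy time
        ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f deployment.yaml`, and later scaling with `kubectl scale deployment/web --replicas=5`, illustrates the rapid deployment and scaling described above.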
To summarize, the focus has shifted to distributing and scaling applications that can be deployed, run, and monitored effectively. This paves the way for Kubernetes, which runs a cluster of containers, manages deployments of applications across those containers, and monitors them. Kubernetes helps DevOps teams respond quickly and efficiently to customer demand.
Defining DevOps has been difficult even for industry experts, but most agree that it offers a new operating model to accelerate software delivery, which in turn leads to competitive advantage. For example, the '2016 State of DevOps Report' says that high-performing organizations spend 50 percent less time on unplanned work and rework, and as a result are able to spend 66 percent more time on new work, such as new features or code. They are able to do this because they build quality into each stage of the development process through continuous delivery practices, instead of retrofitting it at the end of a development cycle.
Since DevOps requires automation, we come to the subject of the tools that make automation happen in a DevOps environment. In this two-part blog series, I will briefly discuss five DevOps tools that you simply cannot ignore: Docker, Chef, Puppet, SaltStack, and Kubernetes. In this first part, I will discuss Docker, Chef, and Puppet.
Today, nearly 44 percent of enterprises are looking to adopt a DevOps initiative within their organizations. This is steadily bringing down the traditional barriers between developer teams and IT operations. The Docker platform enables DevOps by providing key tools that improve the application development process.
Developers can use Docker to create a Docker environment on their laptops to develop and test applications locally in containers. A container can start in an average of around 500 milliseconds, and multiple Docker containers converged into a single service stack, like a LAMP stack, take only seconds to instantiate. Convergence here refers to the interconnection information that must be configured between the instances, for example database connection details or load-balancer IPs. A comparable non-containerized environment can take anywhere from 2 to 10 minutes to spin up and converge, depending on complexity. The end result with Docker is that developers spend less time context-switching between testing and retesting, which significantly increases velocity.
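As an illustration of such a converged multi-container stack, a Docker Compose file can wire a web container to a database container, including the connection info that constitutes convergence. The service names, images, and credentials here are made up for the example:

```yaml
# docker-compose.yml -- hypothetical two-container web + database stack
services:
  web:
    image: php:8.2-apache
    ports:
      - "8080:80"
    environment:
      DB_HOST: db                      # convergence: the web tier learns the database address
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example     # illustrative only; never hard-code real secrets
```

A single `docker compose up` then instantiates and converges the whole stack in seconds, versus the minutes a non-containerized environment typically needs.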
While the DevOps movement is a mind-set first, its major principles all point to business outcomes: faster innovation, higher quality, and a feedback loop of continual learning. The Docker platform uniquely allows organizations to apply tools to their application environment to accelerate the rate of change, reduce friction, and improve efficiency.
“The tools we use reinforce the behavior; the behavior reinforces the tool. Thus, if you want to change your behavior, change your tools,” says Adam Jacob, CTO of Chef. He means that merely labeling a tool a DevOps solution does not make it one. It has to address today's IT challenges, that is, building agile organizations while facilitating improvement through collaboration. Tools are agents of change and instrumental in making this change happen.
Chef is a tool for automation, provisioning, and configuration management. It has several components, including Chef Server, Chef Client, Chef Workstation, Chef Analytics, and Chef Supermarket.
Chef offers a full-stack continuous deployment pipeline, automated testing for compliance and security, and visibility into everything. Whether one has five servers or several thousand, Chef helps manage them by turning infrastructure into code. As a result, rigid infrastructure becomes flexible, versionable, readable, and testable. Keeping this code in the cloud, on-premises, or in a hybrid environment helps an enterprise quickly adapt to changing business requirements. With more and more organizations migrating their infrastructure to the cloud, Chef simplifies the path, enabling the migration to happen faster and more consistently.
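As a small, hypothetical example of infrastructure as code, a Chef recipe declares resources (a package, a template, a service) and the relationships between them; the package, file, and service names below are illustrative:

```ruby
# recipes/default.rb -- hypothetical recipe: install nginx, manage its
# config from a template, and reload the service when the config changes
package 'nginx'

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  notifies :reload, 'service[nginx]', :delayed
end

service 'nginx' do
  action [:enable, :start]
end
```

Because the recipe is just code, it can be versioned, reviewed, and tested like any other application source, which is exactly what makes the infrastructure flexible and testable.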
Chef Automate helps to test for compliance and security, assess and remediate continuously. Chef’s full-stack pipeline enables enterprises to deploy infrastructure and apps the DevOps way, by testing before deploying, automating workflow and eliminating silos. It automates for effective cross-team testing and release coordination by removing complex dependencies all the way from development to production.
Puppet is an operations- and sysadmin-oriented solution. DevOps practitioners, developers and operations staff alike, strive to achieve optimal conditions for continuous integration and delivery, so tooling is increasingly evaluated on its ability to achieve these ends effectively and efficiently in the context of an enterprise's unique needs. Puppet has enjoyed significant first-mover advantages over the years; dating from the early days of IT automation, it boasts a longer commercial track record and a larger install base.
Puppet, in several ways similar to Chef, manages 'infrastructure as code' and provides the foundation for DevOps practices such as versioning, automated testing, and continuous delivery. Enterprises can deploy changes with confidence and recover more quickly from failures, freeing teams to be more agile and responsive to business needs.
Puppet Enterprise lets enterprises deliver technology changes faster, release better software, and do so more frequently with confidence. It helps increase reliability even as cycle times decrease. Puppet Enterprise ensures consistency across dev, test, and production environments, so when changes are promoted, one can be confident they are consistent and the systems stable.
Currently on version 4.3, Puppet is commonly deployed in a client/server configuration with managed nodes periodically synchronizing their configurations with the server. Reporting (e.g., results from automation runs, errors/exceptions) and other information is sent by the clients back to the server for aggregate analysis and processing.
Puppet automation works by enforcing the desired state of an environment as defined in Puppet Manifests—files containing pre-defined information (i.e., resources) describing the state of a system.
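A minimal manifest gives a flavor of how that desired state is declared; the package, file, and service names below are illustrative:

```puppet
# site.pp -- hypothetical manifest: ensure ntp is installed, configured,
# and running; Puppet converges the node to this state on every run
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/ntp/ntp.conf',
  require => Package['ntp'],
  notify  => Service['ntp'],
}

service { 'ntp':
  ensure => running,
  enable => true,
}
```

On each run, the agent compares the node's actual state against this declared state and changes only what differs, which is how the desired state is continuously enforced.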
SDN is a new paradigm for networking that decouples network control and forwarding from physical infrastructure, enabling agile management of network resources in rapidly changing environments. Just as cloud computing enables IT to quickly spin up compute and storage instances on-demand, SDN replaces rigid (and sometimes manual) network operations with dynamically provisioned network services and resources.
This new model for networking is right in line with Puppet's advocacy of "infrastructure as code." As such, the company has made significant strategic initiatives and partnerships in support of SDN. For example, Puppet Labs recently announced a partnership with Arista Networks, a leading developer of SDN switches, to provide automation support for the vendor's SDN equipment line. This and other similar partnerships (e.g., Cumulus Networks, Dell, Cisco) will position Puppet favorably over competing vendors once SDN technologies gain widespread adoption.
Don’t miss the next blog in this series, where I will discuss SaltStack and Kubernetes.