Archive for October, 2011

October 23, 2011

DevOps in the Cloud Explained


This post was written by @martinjlogan and is based on a presentation given by George Reese (@GeorgeReese) at Camp DevOps 2011.

Time to throw some more buzzwords at you. Nothing makes people's eyes roll back more quickly than saying DevOps and Cloud in the same sentence. This is going to be all about Cloud and DevOps.

The theory of DevOps is predicated on the idea that all elements of a technology infrastructure can be controlled through code. Without cloud that can't be entirely true. Someone has to rack the servers in the data center and so on. In pure cloud operations we get to a sort of nirvana where everything relating to technology is controllable purely through code.

A key thing about DevOps plus Cloud, following from the statements above, is that everything becomes repeatable. At some level that is the point of DevOps: you take the repeatable elements of Dev and apply them to Ops. Starting a server becomes a repeatable, testable process.

Scalability is another thing you get with DevOps + Cloud. DevOps plus Cloud allows you to increase the server-to-admin ratio. No longer is provisioning a server kept as a long set of steps stored in a black binder somewhere; it is kept as actual software. This reduces errors tremendously.

DevOps + Cloud is self healing. What I mean by that is that when you get to a scenario where your entire infrastructure is governed by code, that code can become aware of anything that is going wrong within its domain. You can detect VM failures and automagically bring up a replacement VM, for example. You know that the replacement is going to work the way you designed it. No more 3:00 AM pager alerts for many of your VM failures because this is all handled automatically. Some organizations even go so far as to have "chaos monkeys": people paid to wreak havoc just to ensure and prove that the system is self healing.
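To make the self healing idea concrete, here is a minimal sketch of a reconciliation loop. The CloudProvider class is a hypothetical stand-in for whatever provider SDK you actually use, and the VM names and image are made up; the point is just that the desired topology lives in code and the code converges reality toward it.

```python
import time

class CloudProvider:
    """Hypothetical provider API; swap these two methods for real SDK calls."""

    def list_running_vms(self):
        # Pretend web-2 has just died.
        return ["web-1", "worker-1"]

    def launch_vm(self, name, image):
        print("launching %s from image %s" % (name, image))

DESIRED = {"web-1", "web-2", "worker-1"}   # the topology we designed, in code
IMAGE = "baseline-2011-10"                 # a known-good, version-controlled image

def reconcile(cloud):
    running = set(cloud.list_running_vms())
    for missing in sorted(DESIRED - running):
        # The replacement is built from the same image and config as the
        # original, so we know it will behave the way we designed it.
        cloud.launch_vm(missing, IMAGE)

if __name__ == "__main__":
    cloud = CloudProvider()
    while True:
        reconcile(cloud)
        time.sleep(60)   # poll instead of paging a human at 3:00 AM
```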

Continuous integration and deployment. This means that your environments are never static. There is no fixed production environment with change management processes that govern how you get code into that environment. The walls between dev and ops fall down. When you fully go DevOps and Cloud, you are running CI and delivering code continuously, not just at the application level but at the infrastructure level.

CloudDevOpsNoSQLCoolness

DevOps needs these buzzwords. The reason it does is that DevOps only succeeds insofar as you are able to manage things through code. Anywhere you drop back to human behavior you are getting away from the value proposition represented by DevOps. What cloud and NoSQL bring into the mix is the ability, or ease, of treating all elements of your infrastructure as programmable components that can be managed through DevOps processes.

I will start off with NoSQL. I am not saying that you can't automate RDBMS systems; it is just quite a bit easier to automate NoSQL systems because of their properties. It will make more sense in a bit. Let's talk CAP theorem. The CAP theorem is an overarching theorem about the trade-offs between consistency, availability, and partition tolerance.

Consistency – how consistent is your data for all readers in a system? Can one reader see a different value than another at times, and for how long?

Availability – how tolerant is the entire system of the failure of a single node?

Partition tolerance – the system continues to operate despite message loss between partitions.

Now all that was an oversimplification, but let's just go with it for now. The CAP theorem says that you can't have more than two of the aforementioned properties at once. Relational DBs are essentially focused on consistency. It is very hard [impossible] to have a relational system with nodes in Chicago and London that are perfectly available and consistent. These distributed setups present some of the largest failure scenarios we see in traditional systems. These failures are the types of things we would want to handle automatically with a self healing system. Due to the properties of RDBMS systems, as we will see, this can be very difficult.

The deployment of a relational DB is fairly easy; this can be automated well enough. What gets hard is scaling your reads based on autoscaling metrics. Let's say I have my reads spread across my slaves and I want the number of slaves to go up and down based on demand. The problem is that each slave brought up needs to pull a ton of data in order to meet the consistency requirements of the system. This is not very efficient.

NoSQL systems go for something called eventual consistency. Most data that you are dealing with on a day to day basis can be eventually consistent. If, for example, I update my Facebook status, delete it, and then add another, it is ok if some people saw the first update and others only saw the second. Lots of data is this way. If you rely on eventual consistency things become a lot easier.
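Here is a toy illustration of eventual consistency, not any real NoSQL client: a write lands on one replica, a read from another replica can briefly return nothing (or an older value), and a background anti-entropy pass brings the replicas back into agreement.

```python
class Replica:
    """One node holding key -> (value, timestamp)."""

    def __init__(self):
        self.data = {}

    def write(self, key, value, ts):
        self.data[key] = (value, ts)

    def read(self, key):
        return self.data.get(key, (None, 0))[0]

def anti_entropy(replicas):
    """Background sync: for every key, the newest write wins everywhere."""
    keys = set()
    for r in replicas:
        keys.update(r.data)
    for key in keys:
        newest = max((r.data[key] for r in replicas if key in r.data),
                     key=lambda pair: pair[1])
        for r in replicas:
            r.data[key] = newest

a, b = Replica(), Replica()
a.write("status", "first update", ts=1)   # the write hits replica a only
print(b.read("status"))                   # None: replica b has not caught up yet
anti_entropy([a, b])                      # eventually the replicas converge
print(b.read("status"))                   # "first update"
```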

NoSQL systems by design can deal with node failures. On the SQL side, if the master fails your application can't do any writes until you solve the problem by bringing up a new master or promoting a slave. Many NoSQL systems have a peer to peer relationship between their nodes. This lack of differentiation makes the system much more resistant to failure and much easier to reason about. Automating the recovery of these NoSQL systems is far easier than for an RDBMS, as you can imagine. At the end of the day, tools with these properties, easy to reason about and simple to automate, are exactly the types of tools that we should prefer in our CloudDevOps type environments.

Cloud

Cloud, for the purposes of this discussion, is sort of the pinnacle of SOA in that it makes everything controllable through an API. If it has no API it is not Cloud. If you buy into this then you agree that everything that is Cloud is ultimately programmable.

Virtualization is the foundation of Cloud but virtualization is not Cloud by itself. It certainly enables many of the things we talk about when we talk Cloud, but it is not necessarily sufficient to be a cloud. Google App Engine is a cloud that does not incorporate virtualization. One of the reasons that virtualization is great is because you can automate the procurement of new boxes.

The Cloud platform has two key components that turn virtualization into Cloud. One of them is locational transparency. When you go into a vSphere console you are very aware of where the VM sits. With a cloud platform you essentially stop caring about that stuff. You don't have to care anymore about where things lie, which means that topology changes become much easier to handle in scaling or failure cases. Programming languages like Erlang have made heavy use of this property for years and have proven that it is highly effective. Let's talk now about configuration management.

Configuration management is one of the most fundamental elements allowing DevOps in the cloud. It lets you have VMs with just enough OS that they can be provisioned automatically through virtualization and then, through configuration management, be assigned a distinct purpose within the cloud. The CM system handles turning the lightly provisioned VM into the type of server it is intended to be. Orchestration is also a key part of management.
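Before moving on to orchestration, here is a bare-bones sketch of what "turning a lightly provisioned VM into the type of server it is intended to be" means in code. Real CM tools (Chef, Puppet, cfengine) are far richer; the role map and package names below are purely illustrative.

```python
import subprocess

ROLES = {
    # role -> the packages that turn a blank VM into that kind of server
    "webserver": ["apache2"],
    "cache":     ["memcached"],
}

def installed(pkg):
    """True if the package is already on the box (Debian-style check)."""
    return subprocess.call(["dpkg", "-s", pkg],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

def converge(role):
    """Idempotent: only act on the drift between desired and actual state."""
    for pkg in ROLES[role]:
        if not installed(pkg):
            subprocess.check_call(["apt-get", "install", "-y", pkg])

if __name__ == "__main__":
    converge("webserver")
```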

Orchestration is the understanding of the application level view of the infrastructure. It is knowing which apps need to communicate and interact in what ways. The orchestration system armed with this knowledge can then provision and manage nodes in a way consistent with that knowledge. This part of the system also does policy enforcement to please all the governance folks. Moving to an even higher level of abstraction and opinionatedness we will talk about Platform Clouds (PaaS) in the next section.

Platform Clouds (PaaS)

Platform clouds are a bit different. They are not VM focused but instead focus on providing resources and containers for automated, scalable applications. Some examples of the types of resources provided by a PaaS system are database platforms, including both RDBMS and NoSQL resource types. The platform allows you to spin up instances of these sorts of resources through an API on demand. Amazon SimpleDB is an example of this. Messaging components, things that manage communication between components, are yet another example of the type of service these sorts of systems provide. The management of these platforms and the resources provisioned within them is handled by the cloud orchestration and management systems. The systems are also highly effective at providing containers for user-created resources.

The real power here is that you can package up your application, say a WAR file or some Ruby on Rails app, and then just hand it off to the cloud, which has specific containers for encapsulating your code and managing things like security, fault tolerance, and scalability. You don't have to think about it.

One caveat to be aware of though is that you can run into vendor lock-in. Moving off one of these platforms, should you rely heavily on its services and resources, can be very difficult and require a lot of refactoring.

Cloud and DevOps

Cloud with your DevOps offers some fantastic properties: the ability to leverage, for your infrastructure, all the advancements made in software development around repeatability and testability; the ability to scale up as needed in real time (autoscaling); and, among other things, the power of self healing systems. DevOps is better with Cloud.

Info

Twitter: @GeorgeReese
Email: george.reese at enstratus dot com

October 22, 2011

Overcoming Organizational Hurdles

By Seth Thomson and Chris Read @cread given at Camp DevOps 2011

This post was live blogged by @martinjlogan so expect errors.

This talk is about how to overcome organizational hurdles and get DevOps humming in your org. This illustrates how we did it at DRW Trading.

DRW needed to adjust. The problem was that we were not exposing people to problems upfront. Everyone was only exposed to their local problems and only optimized locally. We looked, and continue to look, at DevOps as our tool to change this.

Cultural lessons

[Seth is talking a bit about the lessons that were learned at DRW that can really be applied at all levels in the org.]

The first thing you need to do if you are introducing DevOps to your org is define what DevOps is to you. Gartner has an interesting definition; I am not sure it reflects our opinions, but at least they are trying to figure it out. At DRW we use the words "agile operations" and DevOps interchangeably. We are integrating IT operations with agile and lean principles: fast iterative work, embedding people on teams, and moving people as close to the value they are delivering as possible. DevOps is not a job, it is a way of working. You can have people in embedded positions using these practices as easily as you can for folks on shared teams.

The next thing you need to do is focus on the problem that you are trying to solve. This is obvious but not all that simple. Here is an example. We had a complaint from our high frequency trading folks last year saying that servers were not available fast enough. It took on average 35 days for us to get a server purchased and ready to run. Dan North and I were reading the book "The Goal" – a book I highly recommend; it is a really good read. In it the author talks about the theory of constraints and applying lean principles to repeatable processes. We applied a technique called value stream mapping to our server delivery process. People complained that I [Seth] was a bottleneck because I had to approve all server purchases. It turned out that step only takes me about two hours. The real problem lay elsewhere. The value stream mapping allowed us to see where our bottlenecks were so that we could focus in on the real ones and not waste cycles on less productive areas. We zeroed in accurately and reduced the time from 35 to 12 days.

The third cultural lesson, and an important one, is to keep your specialists. One of the worst things that can happen is that you introduce a lot of general operators and then the network team, for example, says wow, you totally devalued me, and they quit. That way you lose a lot of expertise that turns out to be quite useful. Keep your specialists in the center. You want to highlight the tough problems to the specialists and leverage them for solving those problems. Introducing DevOps can actually open the floodgates for more work for the people in the center. We endeavored to distribute Unix system management to reduce the amount of work for the Unix team itself. This got people all across the org a bit closer to what was going on in this domain. What actually happened is that the Unix team was hit harder than ever. As we got people closer to the problem, the demand that we had not seen or been able to notice previously increased quite a bit. This is a good problem to have because you start to understand more of what you are trying to do and you get more opportunities to innovate around it.

If you look at a traditional org, oftentimes these specialist teams are spending time justifying their own existence. They invent their own projects and they do things no one needs. These days at DRW we find that we have long shopping lists of deep Unix things that we actually need. The Unix specialists are now constantly working on key, useful features. We are always looking for more expert Unix admins.

The last lesson learned, a painful one, is that people have to buy in. The CIO can't just walk in and say you have to start doing DevOps. You can't force it. We made a mistake recently, learned from it, and turned it into a success. A few months ago we were looking at source control usage. Among other things, the infrastructure teams were not leveraging this stuff enough for my taste. I said we need to get these guys pairing with a software engineer. I forced it. It went along these lines: the person doing the pairing was not teaching the person they were pairing with; they were instead just focused on solving the problem of the moment. The person being paired with was not bought in to even doing the pairing in the first place. People resented the whole arrangement.

We took a hard retrospective look at this, and in the end we practiced iterative agile management and changed course. I worked with Dan North, who came from a software engineering background and who also had a lot of DevOps practice. A key thing about Dan is that he loves to teach and coach other people. The fact that he loved coaching was a huge help. Dan sat with folks on the networking team and got buy-in from them. He got them invested in the changes we wanted to make. The head of the networking team is now learning Python and using version control. Now the network team is standing up self service applications that are adding huge value for the rest of the organization and making us much more efficient.

Some lessons learned from the technology

Ok, so Seth has covered a lot of the cultural bits and pieces. Now I [Chris Read] will talk about the technical lessons, or at least lessons stemming from technical issues. What follows are a few examples that have reinforced some of the cultural things we have done. The first one is the story of the lost packet. This happened within the first month or two of my joining. We had an exchange sending out market data, through a few hops, to a server that every now and again lost market data. We knew this because we could see gaps in the sequence numbers.

The first thing we did was check the exchange to see if it was actually mis-sequencing the data. Nope, that was not the problem. So then the dev team went down to check the server itself. The Unix team looked at the machine, the IP stack, the interfaces, etc… and declared the machine fine. Next the network guys jumped in and saw that everything was fine there. The server however was still missing data. So we jumped in and looked at the routers. Guess what, everything looked fine. This is where I [Chris Read] got involved. This problem is what you call the call center conundrum: people focus on small parts of the infrastructure, and with the knowledge that they have, things look fine. Luckily, in previous lives I have been a network admin and a unix admin. I dug in and could see that the whole network up to the machine was built with high availability pairs. I dug into these pairs. The first ones looked good. I looked into more and finally got down to one little pair at the bottom, and there was a different config on one of the machines. A single line problem. Solving this fixed it. It was only through having a holistic view of the system, and having the trust of the org to get onto all of these machines, that I was able to find the problem.
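The kind of check that catches this is simple once you look at the pair as a whole rather than one box at a time. Here is a small sketch that diffs the configs of the two members of an HA pair; fetch_config is a placeholder, since in practice you would pull the running config over SSH or the device's own interface, and the device names and config text are invented.

```python
import difflib

def fetch_config(host):
    """Placeholder: in reality, pull the running config over SSH or an API."""
    configs = {
        "edge-a": "interface eth0\n mtu 9000\nmulticast on\n",
        "edge-b": "interface eth0\n mtu 1500\nmulticast on\n",
    }
    return configs[host]

def diff_pair(primary, secondary):
    a = fetch_config(primary).splitlines()
    b = fetch_config(secondary).splitlines()
    return list(difflib.unified_diff(a, b, fromfile=primary, tofile=secondary,
                                     lineterm=""))

for line in diff_pair("edge-a", "edge-b"):
    print(line)   # a non-empty diff means the pair has drifted apart
```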

The next story is called "monitoring giants". This also happened quite early in my dealings at DRW. This one taught me a very interesting lesson. I had been in London for 6 weeks and lots of folks were talking about monitoring. We needed more monitoring. I set up a basic Zenoss install and other such things. I came to Chicago and my goal was to show the folks here how monitoring was done, meaning to inspire the Chicago folks. I went to show them things about monitoring and was met with a fairly negative response. The guys perceived my work as a challenge on their domain. My whole point in putting this together was lost. I learned the lesson of starting to work with folks early on and being careful about how you present things. It was also a lesson on change. It is only in the last couple of months that I have learned how difficult change can be for a lot of people. You have to take this into account when pushing change. Another bit of this lesson is that you need to make your intentions obvious – over-communicate.

We actually think it is ok to recreate the wheel if you are going to innovate. What is not ok is to recreate it without telling the folks that currently own it. – Seth Thompson.

The next lesson is about DNS. This one was quite surprising to me. It is all about unintended consequences. Our DNS services used to handle a very low number of requests. As we started introducing DevOps there was a major ramp-up in DNS requests per second. We were not actually monitoring it though. All of a sudden people started noticing latency. People started to say "hey, why is the Internet slow?". Network people looked at all kinds of things and then the problem seemed to solve itself. We let it go. Then a few weeks later, outage! The head of our Windows team noticed that one host was doing 112k lookups per second. Some developers had written a monitoring script that did a DNS lookup in a tight loop. We have now added all this to our monitoring suite. Because the Windows team had been taught about network monitoring and log file analysis, because they had been exposed, they were able to catch and fix this problem themselves.
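For illustration, here is one way a fix for that kind of tight-loop script can look: resolve once per TTL and cache the answer instead of hammering DNS on every pass. The hostname, TTL, and loop interval are made up; this is a sketch of the idea, not the actual script involved.

```python
import socket
import time

_cache = {}   # hostname -> (address, expiry time)

def resolve(hostname, ttl=60):
    """Hit DNS only when the cached entry is older than the TTL."""
    addr, expiry = _cache.get(hostname, (None, 0.0))
    now = time.time()
    if now >= expiry:
        addr = socket.gethostbyname(hostname)
        _cache[hostname] = (addr, now + ttl)
    return addr

while True:
    addr = resolve("example.com")   # the monitoring check's lookup
    # ... do the actual health check against addr here ...
    time.sleep(1)                   # and never poll in a zero-delay loop
```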

Quick summary of the lessons

Communication is key. You must spend time with the people you are asking to change the way they work.

Get buy-in, don't push. As soon as you push something onto someone, they are going to push back. Something will break, someone will get hurt. You need to develop a pull: they must pull change from you, they must want it.

Keep iterating. Keep getting better and make room for failure. If people are afraid of mistakes they won't iterate.

Finally, change is hard, but it is the only constant. As you are developing you will constantly change. Make sure that your organization and your people are geared toward healthy attitudes about change.

Question: Can you talk a little bit more about buy-in?
Answer: One of the most important things about getting buy-in is to prove your changes out for them. Try things on a smaller scale, prototypes of process or technology, get a success, and hold it up as an example of why it should be scaled out further.

October 22, 2011

Groupon: Clean and Simple DevOps with Roller

By Zack Steinkamp from Groupon @thenobot given at Camp DevOps 2011

This was live blogged by @martinjlogan so please forgive any errors and typos.

The way we do things in production is not always the right way to do things. Coming here to a conference like Camp DevOps and listening to folks like Jez Humble is kind of like coming to Church and reupping your faith in what’s right!

Handcrafted: great for a lot of things. Furniture, clothes, and shoes. The imperfections give a thing character. Handcrafted, however, has no place in the datacenter. Services are like appliances. Imagine that you run a laundromat. Would you rather have a dozen different machines that all need to be repaired in different ways by different people, or one industrial strength uniform design for each unit?

In Groupon's infancy, in order to get started quickly, we outsourced all operations. We have gotten to the scale though where the expertise of those we outsourced to is not sufficient for our current needs. As a result we have now brought it in house. Given this we needed a way to manage our infrastructure efficiently and with minimal errors under constant change.

In Sept 2010 we had about 100 servers in one datacenter. Many of them were handcrafted. That was ok though, because someone else worked on them. Today we have over 1000 servers in 6 locations. As the service has grown we have felt the pain of a shaky foundation under our platform. That is the driver behind developing this project – Roller. Roller really embodies the DevOps mindset.

The DevOps mindset is typified by folks that love developing software and are also interested in Linux kernels and such – and vice versa. I am one such person. At Groupon I do work for many different areas. I started my career at Yahoo in 1999. I also co-founded a company called Dippidy. I left there and worked for Symantec. Each time I have worn a different hat. Enough about me and my stuff though – let's dig into Roller.

I won't be giving a philosophical talk but instead will get you into the nuts and bolts of Roller [I will summarize this in this live blog – see the slides for more details]. If you have any preconceived notions about how host config and management should be done, please try to forget them, as this project is quite different. This project is on track to be open sourced by Groupon sometime in the first half of next year.

So, what does Roller do? It installs software. Really, what is a server? It is a computer that has some software on it. Roller installs this software. It facilitates versioning your servers in a super clean way. It allows for perfect consistency across your data center. It handles everything from basic system utils like strace to application deployment. They are all the same, just files on a disk.

You are probably asking yourself why this guy is up here reinventing the wheel. Why do this? We already have Chef and Puppet, so why bother? Well, we wanted this to be very lightweight. Some existing solutions require message queues, relational DBs, and strange languages that are not already on the system. We also needed to deal with platform specific differences; we have 4 different varieties of Linux. The big thing though is we wanted a system that was dead simple and audit-able. A lot of the systems now give you tons of power. Inheritance hierarchies like webserver -> apache webserver -> some config of that server, etc… That looks great from a programmer brain perspective, but in production this complexity can cause unwanted side effects and problems. We wanted to build a system where every change is gated on a source code repo commit. We wanted any change in the system to go through git or some other VCS system.

There are four parts to Roller:

1. The configuration repository
2. The config server
3. Packages
4. The "roll" program

The configuration repository is a collection of YAML files. The config server sits in each data center. It is a Ruby on Rails web server with no database; it provides views of data stored in the config repository. Packages are precompiled chunks of software; for instance we have an Apache package, or some other application. A package is just a tarball. The packages are stored on and distributed from the config server. Config servers in different datacenters use S3 to distribute packages. We put a package on one config server and then it is world wide in about a minute. Finally we have "roll". This is what we execute on a host, a blank machine perhaps, to turn it into a specific appliance.

Configuration Repository

This contains simple files that hold information about datacenters. It also contains host classes – basically configurations of particular host types. These host classes are just like defining a class in Ruby or some other language that supports classes. The config repo is basically a tree of a fixed depth of 2.5 levels and no deeper. The leaf nodes are the host files, contained in the hosts directory within the config repo; these define configuration at the host level. A host file names a hostclass and a version for that particular host; the hostclass itself does not contain a version.

Config Server

We have spoken a lot about these YAML files. They are the world for Roller. Now, to make use of them we need the config server. The config server is a Rails app that gives us views of the config repo data. We get to see the YAML config, we can see which hosts are using a particular bit of config, and we can diff configs to see what changed. A nice thing about this is that you can just run curl commands to investigate the system.

I can also use curl to investigate host classes. The config server just pulls these things out of a git repo and sends them back. This creates a nice HTTP bridge into our running system, which has a lot of value. We will see this with roll.
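As a sketch of what that HTTP bridge makes possible from a script rather than from curl, something like the following would fetch and parse a host's YAML. The config server address, URL layout, and field names here are assumptions for illustration, not Roller's actual interface; parsing uses PyYAML.

```python
import urllib.request

import yaml  # PyYAML

CONFIG_SERVER = "http://configserver.example.com"   # assumed address

def fetch_yaml(path):
    with urllib.request.urlopen(CONFIG_SERVER + path) as resp:
        return yaml.safe_load(resp.read())

# Assumed path layout and field names, purely for illustration.
host = fetch_yaml("/hosts/web042.yaml")
hostclass = fetch_yaml("/hostclasses/%s.yaml" % host["hostclass"])
print(host.get("version"), hostclass.get("packages"))
```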

Roll

Roller's roll program executes on a host. It runs the HTTP fetches, just like you would do with curl, fetching the host and hostclass YAML files from the config server. It then downloads any packages that it does not have, prepares a new /usr/local candidate, generates configs, stops services, moves the new /usr/local into place, and starts the services. Basically each time it nukes the host, starting from a new base state. Roller essentially owns /usr/local. This is kind of a nuclear solution. We are not quite re-imaging the whole host but it is still fairly brutal.
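Condensed into code, the cycle just described looks roughly like the following. Every function body here is a placeholder for the real work (the HTTP fetches, tarball unpacking, config generation, and service control), and the paths and names are illustrative rather than Roller's actual implementation.

```python
import shutil
import subprocess

def fetch_host_config(hostname):
    """Placeholder for the HTTP GET of the host and hostclass YAML."""
    return {"packages": ["apache-2.2.21.tar.gz"], "services": ["apache"]}

def ensure_packages(packages, cache_dir="/var/roller/packages"):
    pass   # placeholder: download any tarballs we do not already have

def build_candidate(config, candidate):
    pass   # placeholder: unpack packages and generate configs into candidate

def roll(hostname, prefix="/usr/local"):
    config = fetch_host_config(hostname)
    ensure_packages(config["packages"])
    build_candidate(config, prefix + ".new")
    for svc in config["services"]:
        subprocess.call(["service", svc, "stop"])
    # The brutal-but-simple step: replace the prefix wholesale so the host
    # always starts again from a known base state.
    shutil.rmtree(prefix + ".old", ignore_errors=True)
    shutil.move(prefix, prefix + ".old")
    shutil.move(prefix + ".new", prefix)
    for svc in config["services"]:
        subprocess.call(["service", svc, "start"])

if __name__ == "__main__":
    roll("web042")
```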

This whole cycle typically takes 10 to 30 seconds, and the actual services are normally down for only a few seconds of that.

Foreman

This is a Roller package that is in every hostclass. It adds a cron entry when installed on a host. Every x minutes it trues up the host's users against the config repository via a config server. This is how we do basic user management. You can also use it to manage your own profiles and user directories, to get your .profile or emacs config or whatever you want onto all the hosts you have access to.

Wrapping it up

zsteinkamp@groupon.com
on twitter at @thenobot
steinkamp.us/campdevops_notes.pdf is where you can get notes on the presentation.