Understanding Capacity in vSAN

Overview

Recently a customer posed a question about what they were seeing in vSAN after moving a virtual machine to a vSAN datastore.  Essentially, vCenter was reporting the VMDK as consuming more space than they had expected.  I began to compose a response, but then realized there might be gaps in my own understanding, so I took the opportunity to leverage our Hands-on Labs to do a little experimenting.

By the way, if you don’t already use the HOL, it’s a great resource for learning, experimenting, and understanding behavior… there are a ton of products you can play with, and it is all available through your web browser: https://hol.vmware.com

So I wanted to answer the following questions for my customer:

  • How does the reporting of free / used space differ between vCenter and the guest operating system of a virtual machine?
  • What difference, if any, is there in capacity utilization between a normal datastore and a vSAN datastore?
  • What should we expect to see when the protection scheme used by vSAN changes?

I used the following methodology to answer these questions – screen shots and descriptions follow in subsequent sections:

  1. Create a new virtual machine on an NFS datastore, using thin provisioning.
    1. Compare reported disk utilization between guest operating system and vCenter as a baseline.
  2. Create a large file in the guest.
    1. Compare reported disk utilization between guest operating system and vCenter.
  3. Delete the large file in the guest.
    1. Compare reported disk utilization between guest operating system and vCenter.
  4. Migrate the guest from NFS to vSAN.
    1. Compare vSAN space utilization with previous NFS.
    2. This test leveraged the default vSAN Storage Policy of FTT=1 / RAID 1.
  5. Change the policy to FTT=1 / RAID 5.
    1. Compare vSAN space utilization against RAID 1.
  6. Change the policy to FTT=2 / RAID 6.
    1. Compare vSAN space utilization against RAID 1.

New VM on NFS

I began by downloading the vSphere OVA for Photon OS (found here:  https://vmware.github.io/photon/). I deployed it to a datastore mounted over NFS to my cluster in the HOL, ensuring the VMDK was thin provisioned.  I took screen shots of the datastore listing while the VM was powered off, and then again while it was powered on.

You can see from the above images that the provisioned VMDK is 15.625 GB, but it is only consuming about 556 MB after being powered on.

Impact of Large File in Guest

Once the virtual machine was booted and running, I logged into the guest to run a quick df -h and compare what vCenter was reporting with what the virtual machine thinks is going on.

[Screenshot: df -h output inside the Photon OS guest]

You can see from the Used column that the virtual machine thinks there is about 406-407 MB in use… so right off the bat there is a discrepancy of roughly 150 MB between vCenter and the virtual machine itself.  Some of this might be chalked up to disk geometry, as well as VMDK file headers and so on (note – I have a request in to VMware to see if I can get better detail on what makes up that difference).

Then I went ahead and created a 1 GB binary file inside the guest; I wanted to see if that 150 MB discrepancy was simply overhead in the VMDK itself, or if it was compounded once the operating system began to chew up space.  I compared the reported space in the guest with the space reported by vCenter, and as you can see in the screen shots below (a sketch of the commands I used follows the list):

  • the guest shows a 1 GB file
  • vCenter shows the VMDK is now 1,565,720 KB (about 1.5 GB), having grown by about 992,832 KB (or just shy of 1 GB).
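
If you want to reproduce this yourself, something like the following inside the guest will do it (the file name and location are just my choices for this test):

    # create a 1 GB file of random data
    dd if=/dev/urandom of=/root/bigfile.bin bs=1M count=1024

    # check in-guest usage before and after
    df -h /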

I then deleted the 1 GB file, and though the VM shows used space shrinking, the VMDK does not shrink.

A df -h command within the guest shows used space returning to ~407 MB, yet vCenter continues to show ~1.5 GB.

What’s going on here?

Essentially, vSphere can only tell whether a block has been touched, not whether it is still in use.  In the example above, the VMDK grows by 1 GB, as one would expect.  vSphere recognizes that those blocks have been claimed by the guest and allocates them accordingly.  Even though we then deleted the file – effectively releasing 1 GB of space – all vSphere knows is that the guest has touched those blocks… not that they have actually been released.  Unless you run some tools to both release the file space within the virtual machine and then reclaim the space on the datastore, vSphere only knows that 1 GB of space was touched – even if the file has been deleted within the guest.  vSAN will automatically reclaim space when a virtual machine is deleted (unlike VMFS datastores on FC), but if you are looking to reclaim space that has been released within a guest, some extra steps are necessary.
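
As a rough sketch of those extra steps (the file names, datastore names, and paths here are illustrative, and the exact procedure varies by vSphere version and disk type):

    # inside the guest: fill the free space with zeros, then remove the file
    # (dd stops on its own once the filesystem fills up)
    dd if=/dev/zero of=/zerofill bs=1M
    rm /zerofill
    sync

    # on the ESXi host, with the VM powered off: punch out the zeroed blocks
    # in the thin-provisioned VMDK
    vmkfstools -K /vmfs/volumes/datastore1/photon/photon.vmdk

    # on a VMFS datastore, dead space can then be handed back to the array
    esxcli storage vmfs unmap -l datastore1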

Migrate VM to vSAN

I then powered off the virtual machine (to ensure the guest operating system wasn’t doing anything I needed to account for) and migrated the VM to vSAN.  In the screen captures below, the “Virtual SAN Default Storage Policy” is in effect, which protects virtual machines by placing a mirror (replica) of the VMDK on two ESXi hosts.

Once the virtual machine was migrated, I checked its properties:

[Screenshot: virtual machine settings after migration to vSAN]

You can see from the properties that the VM is now listed as consuming 3.22 GB, but is allocated 31.25 GB.  This is further confirmed by examining the vSAN datastore:

You can see the placement of the replica copies on hosts 1 and 6, and the datastore now lists the size of the VMDK as 3,375,104 KB, or 3.2 GB.  This is only about 200 MB larger than 2x the individual VMDK when it was homed on the NFS datastore.  The difference is most likely due to the witness component, which contains a small amount of metadata to identify the hosts participating in the RAID 1 mirror for the VMDK.
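
If you want to inspect that layout yourself, the Ruby vSphere Console (RVC) that ships with vCenter can display the object tree (the inventory path below is hypothetical – substitute your own datacenter and VM names):

    # from an RVC session connected to vCenter
    vsan.vm_object_info /localhost/MyDatacenter/vms/photon-vm

The output shows each object’s RAID tree, including the two replica components and the witness.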

Change vSAN Policy to RAID 5

vSAN 6.2 introduced erasure coding, which essentially allows us to treat the hosts in a cluster as members of a disk pool and perform something like RAID 5 or RAID 6 protection on the virtual machines.  These protection schemes are exposed as policies that can be applied to individual virtual machines or groups of them.  Since we never want to risk data loss, if you change the policy applied to a virtual machine, vSAN will first apply the new protection scheme to the VM before eliminating the previous protection structures.  This means that during the re-protection period, the affected virtual machines will effectively consume the capacity required for both the original protection scheme AND the new one.  For that reason, VMware suggests customers leave approximately 30% free space on the vSAN datastore.

In the screens below, you can see that I created a new storage policy leveraging FTT=1 / RAID 5, which I then applied to the virtual machine.  vSAN created the new protection structures while still protecting the virtual machine with RAID 1.

You can see in the settings of the virtual machine that it predicts capacity utilization will drop by 10.4 GB of allocated space and 1.06 GB of used space.  Sure enough, in the final 2 shots above, you can see that the virtual machine has now been protected across 4 hosts, and the capacity utilization reported by vCenter has dropped to 2,355,200 KB, or 2.25 GB.

  • the original VMDK size was 1.5 GB
  • it grew to 3.2 GB on vSAN, protected by RAID 1
  • it is now 2.25 GB on vSAN, protected by RAID 5 (the expected multipliers for each scheme are worked out below).
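
The numbers line up reasonably well with the expected capacity multipliers for each scheme.  As a back-of-the-envelope check (ignoring the witness and other metadata overhead), vSAN places RAID 5 as 3 data segments plus 1 parity segment across 4 hosts, and RAID 6 as 4 data plus 2 parity across 6 hosts:

    RAID 1 (FTT=1): 2 full copies     -> 2.00x -> 3.00 GB expected (3.2 GB observed)
    RAID 5 (3+1):   3 data + 1 parity -> 1.33x -> 2.00 GB expected (2.25 GB observed)
    RAID 6 (4+2):   4 data + 2 parity -> 1.50x -> 2.25 GB expected (2.54 GB observed)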

Change vSAN Policy to RAID 6

Following the same methodology as above, I created a new storage policy for FTT=2 / RAID6.  I then applied the policy to the virtual machine to observe changes in capacity utilization.

As you can see, once again the virtual machine settings warn of a change in utilization – this time an increase, as we are now demanding the solution be capable of accommodating 2 failures (FTT=2).  Once again, you can see that vSAN maintains the RAID 5 protection while it is creating the RAID 6 protection.  Finally, once the re-protection is complete, you can see the storage utilization is 2,666,496 KB, or 2.54 GB.


  • The original VMDK size was 1.5 GB
  • It grew to 3.2 GB on vSAN, protected by RAID 1 – slightly more than 2x the original.
    • The extra ~200 MB is most likely the ‘witness’ component, used to track the replicas.
  • It shrank to 2.25 GB on vSAN, protected by RAID 5 – about 1.5x the original.
    • The extra overhead is most likely the parity information used for protection.
  • It grew to 2.54 GB on vSAN, protected by RAID 6 – about 1.7x the original.
    • Again, the extra overhead is most likely parity information.

Conclusion

While I didn’t perform an in-depth analysis of the storage policies available to virtual machines in vCenter and vSAN, hopefully you can see the power of using policies to alter the protection and behavior of your virtual machines.  Furthermore, I hope this post helps explain how vSAN consumes capacity under different protection schemes, and how to make sense of what you are seeing in vCenter.



The New Reality for VMware and VMware Customers

I have had the privilege of working with the Cloud Native Apps team at VMware over the past couple of months, and I am becoming more and more convinced that this is a trend that will only become more impactful as time passes.

There is much to be said (written) about the rise of Continuous Integration and Continuous Deployment (CI/CD), how DevOps is changing the way our customers go about the business of IT, and how applications are being written in new ways with new tools (think ‘microservices’).  However, without getting bogged down by the noise created by everything that’s going on in the market, there are some fundamental truths we have to come to grips with…

1 – Containers (not virtual machines) are the new deployment targets.

Developers prefer containers:

  • As a development environment, because Docker has made it dead simple to wrap up dependencies in a configuration file, allowing one to focus on coding, rather than worrying about infrastructure and waiting for operations teams to provide an environment.
  • As a deployment target, because they can spin them up on their laptop(s) without much fuss… there is no operating system to install, as a container is essentially a construct of the operating system.
  • For portability, for the two reasons above… containers can be spun up on any Linux distribution, the proper dependencies injected into the container, and application code downloaded from GitHub (or someplace else) far faster than waiting for IT to spin up an approved VM (see the one-line sketch after this list).
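
As a tiny illustration of that lack of friction (the image and command here are just an example), a developer can go from nothing to a running, dependency-complete environment in one line:

    # pull an Ubuntu image and drop into an interactive shell inside a container
    docker run -it --rm ubuntu:14.04 /bin/bash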

Unfortunately, the ‘usual suspects’ (infrastructure teams) too often don’t see this or aren’t aware of it, and if they are, they may be struggling to understand what, if anything, they need to do about it.

VMware needs to help them understand why containers are important, and how we can continue to support them, offering containers as first class citizens of the datacenter both now and in the future (think vSphere Integrated Containers).

2 – Clusters are the new application architecture.

These are not your Oracle / Microsoft clusters.  Think of Hadoop as an example – a ‘master’ node, with multiple ‘minions.’  The master handles scheduling and assigning tasks to its minions… of which there could be hundreds, or thousands.  When we talk about building ‘cloud native applications’ – whether brand new applications, or a legacy application decomposed into smaller pieces – frequently these new applications are built as microservices scheduled and managed by a cluster technology.  You might have heard of Google Kubernetes, Docker Swarm, and Apache Mesos – of the three, Mesos is the most mature and widely used.  Twitter, Netflix, Apple, and many other companies that we equate with ‘cloud native’ all leverage Mesos for instantiating, managing, and restarting their workloads.  As a matter of fact, Microsoft recently announced plans to launch a Mesos service in Azure for managing Linux container clusters.

Unfortunately (again), while both containers and clusters such as Mesos run readily on vSphere (or in vCloud Air), our infrastructure customers are frequently ill-equipped to offer such services, as they have built their datacenters and operational practices around management of virtual machines and the applications we tend to be more familiar with (Microsoft, Oracle, SAP, etc.).

We need to help our customers understand why cluster architectures are important, and how we can continue to support them, offering cluster services both now and in the future (think Photon Platform).

3 – We Need to Meet our Developer Customers Where They Are…

During a cloud-native panel discussion at Tech Summit (an internal VMware event) last week, one of the audience members asked (and I am somewhat paraphrasing), “Why are we doing all this with containers and Photon platform?  There are already Powershell scripts for Instant Clone in vSphere 6…”

It was a good question, and it illustrates a significant point that we all need to grok (Robert A. Heinlein, Stranger in a Strange Land) – developers aren’t using PowerShell for development (on vSphere or anywhere else).  PowerShell isn’t even targeted at developers; it is a scripting language targeted at system administrators for automation of infrastructure (initially for Microsoft sysadmins, though VMware has leveraged it to great effect for vSphere administration).  For that matter, not only are developers not using PowerShell – they aren’t using any VMware APIs for development of their applications.  If they were, VMware wouldn’t have to worry about doing anything different!

Developers are building with an entire ecosystem of tools that VMware doesn’t currently play with, or plays with only a little.  Yes, there are some nice stories around Vagrant, Puppet, and Chef, and yes, we would all love everyone to be using vRealize Automation for automated deployment of infrastructure.  But that is not the case.

In many organizations, control of budgets is shifting away from infrastructure teams to lines of business and developers.  To quote one of my customers from earlier this year (and this is verbatim) – “Anything that enables developer productivity gets funded.”  Developers want an effortless experience spinning up an environment in which they can build and test their code, and minimal friction getting that code to production (see point #1 above).  We know this story… we have all heard it as part of the SDDC value proposition.  But in many cases, developers are bypassing IT to spin up containers on their laptops and then in production, because it represents an easier way to deploy than waiting for IT to build and ‘certify’ a new VM.

Therefore, I personally believe VMware needs to provide customers a great development experience:

  • During development on the laptop (think AppCatalyst)
  • Through test and dev (think vRealize Code Stream)
  • Into Production (think vSphere Integrated Containers and Photon Platform).

Finally, we need to be embracing the open source movement to further enable the community at large, which is why VMware has started to contribute back to the community with Photon OS, Lightwave, AppCatalyst, and many other projects.

If VMware is to successfully navigate this next wave of innovation, it needs to get out in front and yell from the rooftops.  That is what the Cloud Native Apps group is all about at VMware, and I am proud to be a part of it!


Building and Installing Apache Mesos

Found this great resource for anyone looking to get Mesos running in their lab / environment! Thanks Marco @massenz

Code Trips & Tips

I have just published a gist that shows how to build and install Apache Mesos on a Ubuntu 14.04 VM or physical box.

The Getting Started instructions are a good start (well…) but are somewhat incomplete and currently look a bit outdated (I plan to fix them soon); the outcome has been that I have struggled more than felt necessary in building and running Mesos on a dev VM (Ubuntu 14.04 running under VirtualBox).

Some of the issues seem to arise from the unfortunate combination of the Mesos Master trying to guess its own IP address, the VM being (obviously) non-DNS-resolvable and, eventually, the Slave and the Framework failing to properly communicate with the Master.

In the process of solving this, I ended up automating all of the dependency installation, building, and running of the framework; I have then broken it down into the following modules to make it easier…



VMware Photon OS, Kubernetes, & Mesos

VMware recently released Photon OS – Tech Preview 2.  You can find it here, along with some of VMware’s other open-source initiatives.  This post intends to (hopefully) introduce folks to Photon and demonstrate how easy it is to use as part of a larger solution.

Photon OS is an RPM-based Linux distribution built expressly to support containers and container-cluster frameworks like Kubernetes and Mesos, and optimized to run on vSphere.  Support for Docker, Rocket, and Pivotal Garden is built right into Photon OS, though only the binaries for Docker are included by default.  What’s more, with the release of Tech Preview 2, the binaries for Kubernetes and Mesos are built in as well, making working with your favorite cluster resource scheduler easier than ever.  You can read more about Photon OS here.
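
Because the Docker daemon ships with the OS, getting a container running on a fresh Photon install is about as short as it gets (a quick sketch; hello-world is just a convenient smoke-test image):

    # start the bundled Docker daemon, then run a disposable test container
    systemctl start docker
    docker run --rm hello-world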

One thing to note, however, is that when you first install Photon OS, you have a choice of installation types – Micro, Minimal, Full, and something else named OS Tree, which I won’t go into now.  The differences between Micro, Minimal, and Full are – as one would expect – in the packages and binaries that are included with each installation.  For example, if I install a “Minimal” installation of Photon, take a look at the output of a couple of “list” commands in the console:

[Screenshot: Photon OS Minimal installation – list command output]

You can see that for a Minimal installation, Docker is included… Now compare this to a “Full” installation:

You can see above that not only is Docker included, but the binaries for both Kubernetes and Mesos are as well!  (It should be noted that I am using an updated build of Photon TP2 that includes support for Mesos 0.23 – many thanks to Chris Mutchler for the elbow grease in building the ISO – you can read more about it here.)
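
If you want to check what landed on your own installation, something like this gives a rough equivalent of what the screenshots showed (package names vary by build, so treat the pattern as a starting point):

    # list installed packages related to the container ecosystem
    rpm -qa | grep -iE 'docker|kubernetes|mesos'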

Let’s see how easy it is… after I install Photon OS in a VM in VMware Fusion, I simply create a couple of additional virtual machines in Fusion through linked clones, and issue the appropriate commands to spin up the master and a couple of slave nodes.

After enabling SSH in the master VM (you just have to edit /etc/ssh/sshd_config) and establishing the IP address of each VM, it’s simple.
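
On Photon that edit is a quick one (fine for a lab, though not something to leave enabled in production; I am assuming the service is named sshd, as it was on my build):

    # permit root login over SSH, then restart the SSH daemon
    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
    systemctl restart sshd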


The commands are relatively straightforward (a concrete example follows the list):

  • Master node: /bin/mesos master --ip=LOCALIP --work_dir=/var/lib/mesos --cluster=CLUSTERNAME
  • Slave node: /var/bin/mesos slave --ip=LOCALIP --master=MASTERIP:5050
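
For example, with a master at 192.168.10.10 and a slave at 192.168.10.11 (the addresses and cluster name are hypothetical; the binary paths are as they appear on this Photon build):

    # on the master VM
    /bin/mesos master --ip=192.168.10.10 --work_dir=/var/lib/mesos --cluster=photon-demo

    # on each slave VM
    /var/bin/mesos slave --ip=192.168.10.11 --master=192.168.10.10:5050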

Once that is done, you should be able to open a browser to the Mesos web UI, located at the master IP and port specified in your command above (port 5050 by default):

[Screenshot: the Mesos web UI]

I should mention that my esteemed colleague, Chris Mutchler, has already done a great deal of work putting this into production with a robust implementation leveraging vSphere and Big Data Extensions, which you can read about here.

That’s all for now!  In a future post, I will be working on integrating Photon as part of a larger solution.


Presenting at VMworld!

I am thrilled to have been able to present at VMworld this year!  Nothing like a last-minute request to goad you into action…  I received a call about 10 days before the event to cover for another speaker who couldn’t make it, so there was much scurrying and preparation!

I co-presented with Andrew Nelson for session CNA4725 – Scalable Cloud Native Apps with Docker and Mesos.

You can check out the recording here:  http://vmware.mediasite.com/mediasite/Play/bc636c07aa1e4fc68cedb19ab50141e41d?catalog=1c95c1d4-0353-4ae1-b3ed-a5067afb57aa.

I would love to hear any feedback!


Cloud Native Applications at VMworld 2015!

Cloud-Native Applications is an exciting new effort that involves everyone at VMware.  Every business unit is, or will be, involved in building and extending products that can support microservices and container-based applications, and Cloud-Native Applications will have a large footprint at VMworld 2015.  For those attending VMworld, here are a few things to check out, as well as a field guide that you may reference (attached).

  • Kit Colbert’s Spotlight session CNA6649-S:  Build and run Cloud-Native Apps in your Software-Defined Data Center
  • 10 Breakout Sessions – highlighting some popular sessions below:
    • Session ID:  CNA5379-Panel: Enterprise architecture for Cloud-Native Applications
    • Session ID:  CNA5479-Running Cloud-Native Apps on your Existing Infrastructure
    • Session ID:  CNA5698-Building your Next Infrastructure Specifically for Cloud Native Apps CNA
  • Office of the CTO Booth located in Hang Space this year.
  • VMware Booth located in the Solutions Exchange
  • Hands-on Labs (Moscone South) HOL-SDC-1630:  Cloud-Native Apps: Bringing Microservices and Containers to the Software-Defined Data Center
    • 2 opportunities to take the workshop from the experts who wrote the lab:
    • 08/31 @3:30
    • 09/02 @10:30
  • CNA DevOps Workshop on 08/31 @ 4:30-5:30 (Moscone North, near the general session area, Hangspace):  CNA will host a workshop in the DevOps area discussing the use of AppCatalyst and a Photon VM running Docker.
  • Customer Meetings (VBC & CNA)
  • VMware Videogame Container System (Location Hangspace/Moscone W. Level 2)  featuring Bonneville and Prince of Persia

With VMware’s company-wide effort and large footprint at VMworld, Cloud-Native Apps is definitely something everyone needs to check out this year.  We hope to see you there!



DevOps, Cloud, and the SDDC…

If you live or work in technology, as I do, then you already know that change in IT is constant.  We are all faced with the reality that the technology you are using will eventually be usurped by something better/faster/smaller, and you will need to re-educate yourself in the latest, newest, coolest thing.  Taking a page from The Innovator’s Dilemma, it is far better to do this proactively and ‘disrupt’ yourself, rather than wait for market forces to push you into it.  In fact, I have spent my career trying to, as Wayne Gretzky allegedly once told a reporter, “…skate to where the puck is going,” rather than where it is now.

The past several years, those of us working in infrastructure have been somewhat preoccupied with virtualization, automation, cloud business models, and sassy-sounding acronyms like ITaaS, IaaS, PaaS, and SaaS.   I use the term preoccupied because those of us hawking the concepts of ‘cloud’ and ‘IaaS’ are about to be disrupted again, which is a euphemism for what happens when you are so focused on doing your own thing that you don’t see the large unstoppable force careening toward you from an unanticipated direction…

Ten to fifteen years ago, if a company needed to roll out a new service or application, the IT department was empowered to make its own decisions on how best to serve the needs of the business and deploy the appropriate technology for that solution.  Over time, with the ever-increasing ubiquity of the internet, cloud, and mobile applications, that power has started to shift away from IT.  I have seen many IT departments placed under serious pressure to build an infrastructure that supports a consumer-driven mindset leveraging self-service, chargeback, immediate response, ubiquitous access, and always-on availability.  All this while under ever-increasing budget constraints.  Virtualization and cloud are supposed to fix, or at least help, this… right?

While IT has been trying to figure all this out, the Development Community (I am using capitals to represent the collective effect of millions of independent developers) has been working to solve these problems as well: How to deliver software to the customer or business with ever-increasing frequency and quality?  How to deploy the workloads necessary to support an application or service as quickly as possible?

As a result of these efforts on the part of the Development Community, we have seen the rise of “Continuous Integration” (CI) as a practice to speed the build and test of software.  The Community is now trying to conceptually extend CI all the way through to production, leveraging the same principles of automated deployment, testing, and lifecycle management of the entire application stack through “Continuous Delivery” (CD).  Together this is referred to as CI/CD.

The Development Community is also making extensive use of APIs as a technique to maximize its ability to deploy applications and manage workloads once in production… this has significant repercussions for many infrastructures, as it appears OpenStack is gaining the mind-share in this space.  More on that in another post…

Simultaneously, a separate movement has arisen focused on aligning the efforts of developers with the efforts of those in operations.  Rather than a shift in technology, this is more of a shift in the mindset and approach to IT operations, and it illustrates that while ‘cloud computing’ may be a part of the puzzle, it is a prime target for disruption, even though the market is only a few years old.

What is DevOps?

DevOps is something of an IT-cultural movement encompassing people and process, in which joint collaboration occurs between all parties (development teams and operational teams), working toward a common goal: increasing the business’ responsiveness and value to its customers.  In today’s world, that means:

  • Developers own their code from inception to production
  • Developers and operations share the responsibility of deploying applications and running the environment, for greater appreciation of each other’s roles
  • More communication between teams, earlier in the process of releasing new applications / features
  • More responsibility on development teams to ensure operational readiness
  • More responsibility on operational teams to be service oriented
  • Agreement that failure will occur, and everyone must be willing to take responsibility for it

What is interesting to me is that this movement has arisen independently of the ‘cloud’ and CI/CD movements, both of which are heavily weighted toward technology.  Cloud computing tries to address the infrastructure underpinning services and applications, while CI/CD tries to address the delivery of software onto infrastructure.  Neither sufficiently addresses the organizational changes to people and process necessary to realize the benefits of either.  The success of the DevOps movement in promoting organizational change is a severe indictment of how clearly ‘cloud’ has failed to deliver on its promises.

While DevOps may be acknowledged as a trend toward a loosely defined operating model, it carries with it specific and identifiable – though not formalized – goals, as mentioned above.  These are all focused on increasing developer productivity, improving operational readiness, and increasing environmental resilience.  Often, the implementation of a DevOps-enabled system focuses on the application release pipeline – from the development of the code, through testing and Q/A, and finally promotion to a production environment.  We may infer, then, that there are certain characteristics of a DevOps-enabled datacenter (“DEDC”) that we might find advantageous.

First and foremost, we can identify as our primary requirement for our DEDC a high degree of automation for the application release pipeline itself.  Often referred to as “continuous integration and continuous delivery” (or CI/CD), a highly automated release pipeline has the potential to have the greatest impact on developer productivity and therefore the greatest impact on responsiveness to the business.  Of course, CI/CD carries with it its own requirements, including:

  1. A hyper-standardized infrastructure, potentially yielding:
    1. Greater parity of environments across Dev, Test, QA, and production
    2. Shorter mean time to resolution (MTTR) for troubleshooting
    3. Highly optimized and efficient procedures for deployment and replacement of equipment
  2. A high degree of automation in the infrastructure for deployment of workloads
  3. Proper instrumentation for operational transparency, monitoring, alerting, reporting, etc.

Note that the characteristics described above are also found in highly virtualized environments, as well as cloud-based infrastructures.

What is the Difference Between DevOps and Cloud?

Similar to DevOps, the cloud model also encompasses people, process, and technology, but it is more tightly defined, with an emphasis on infrastructure rather than applications.  I personally view ‘cloud’ as a business model that addresses the budgeting, procurement, implementation, consumption, and maintenance of IT assets.  Virtualization and consolidation by themselves are not cloud, as virtualization really only addresses the implementation and consumption of those assets.  Cloud must encompass the entire IT deployment lifecycle, including the budgeting and procurement (and ultimately chargeback) of those assets as well as the implementation and consumption of the infrastructure.  The entire business of IT changes to support a cloud-enabled business.

The National Institute of Standards and Technology (NIST) actually has a definition of cloud computing (which you can read here), but I prefer a simpler, shorter one (from VMware’s definition, circa 2010):

Cloud Computing is an approach to computing that leverages the efficient pooling of on-demand, self-managed virtual infrastructure, consumed as a service.

Unfortunately, companies that have failed to incorporate cloud business models into their operational procedures are finding themselves falling behind their competition at an increasing rate.  As cloud computing addresses only the infrastructure portion of the technology value chain, recognition is sinking into the industry that while cloud computing is a step in the right direction, it ultimately falls short of truly meeting the needs of today’s businesses.

So How Do We Enable DevOps in the Datacenter?

Traditional infrastructures with heavy reliance on physical systems or those lacking programmable interfaces are inherently brittle and cumbersome.  The greater the effort necessary to make configuration changes to layers of the stack, the less flexible and responsive the infrastructure (and by extension, the operations team) will be.  As we discussed in the last section, cloud computing attempts to address these issues through a highly virtualized environment coupled with a service-oriented mentality.  We now know, however, that hyper-standardization (which is almost a prerequisite for virtualization, and certainly a best practice) and virtualization together are not enough.  Ideally, we need automated deployment of physical or virtual servers.  We need deployment of those workloads on demand to the environment (dev/test/prod) of our choosing.  We need a finely tuned continuous integration and continuous delivery application release pipeline.  In short, the entire environment is designed with streamlined delivery of new applications / code / features in mind.  We need a software-defined enterprise.

A software-defined enterprise allows for changes to the infrastructure and application(s) on the fly, as ideally the entire application and its requisite SLA(s) are encapsulated in software, defined as code, governed by policy, and therefore inherently flexible.  It only makes sense, then, that a software-defined enterprise is better equipped to embrace DevOps.

In a future post, I will discuss how VMware and the vRealize Suite may be utilized to enable DevOps in the Software-Defined-Datacenter.
