When should you NOT virtualize that system?

Virtualization is an amazing and valuable tool for so many reasons. I’ve been a proponent of the concept since GSX Server and the early Workstation products. In the early days, I used it to keep a fresh image loaded on a workstation for a development crew who required a pristine testing environment. When their code inevitably blew up the image, all we did was recreate the test machine’s OS from the image already stored on that machine.

vMotion was a huge plus for us. We created a proof of concept for a large soft drink company in which two hosts vMotioned a file server back and forth every 30 minutes for two weeks. When we returned to discuss the viability of vMotion, they’d experienced no headaches and weren’t even aware that we’d done anything. When we showed them the logs of vMotion running 48 times a day for two weeks, they were completely convinced.

Things have only gotten better, with increases in disk size, processor power, and the amount of RAM that can be assigned to an individual machine. Fault tolerance, DRS, and many other technologies have removed any doubt that almost any x86 application is a viable target for the virtual world.

But are there circumstances wherein a device should NOT be virtualized? Perish the thought!

I can envision only a few cases.

For example, one might say that a machine requiring all the resources a host has to offer shouldn’t be virtualized. I still say that even in this case a VM is preferable to a physical machine: backup and recovery are easier, and uptime can be far better, in that DRS allows the virtual machine to be moved off the host and onto another one for hardware maintenance, etc. However, licensing may make this unacceptable. If you have an ELA in place and can virtualize as much as you want, though, this actually does become a great solution.

Maybe, in another case, the application hosted on that server is not approved by the vendor for virtualization. Well, it’s my experience that the OS is the key: the app may not have the creator’s approval, but testing often makes that a non-issue. However, there are circumstances in which the app is tied to a physical hardware entity, and virtualizing it truly breaks it. I would call this poor application development, but these things are often hard to get around. A similar case is when, as with many older apps, the server requires a hardware dongle or a serialized device connected to a physical port on the server. These create challenges, which can often be overcome with the assistance of the company that created the app.

I would also posit that, in some cases, time-sync-sensitive systems may pose an issue. An example is a RADIUS or RSA SecurID server, in which the connecting device must stay in sync with the authentication server as part of remote access. Even assuming you’ve configured all hosts to use an authoritative NTP source, and that the connection to it is both consistent and redundant, some small possibility of time drift remains. Most importantly, one must be aware of this issue and ensure all efforts to resolve it have been made before virtualizing that server.
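
If you do virtualize such a system, it’s worth spot-checking the guest’s clock offset against your authoritative time source before and after the move. Here’s a minimal sketch in Python, using only the standard library to send a raw SNTP query; the server name and the one-second threshold are placeholder assumptions of mine, not vendor figures, and the offset math deliberately ignores asymmetric network delay:

```python
import socket
import struct
import time

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_EPOCH_DELTA = 2_208_988_800

def clock_offset(server: str = "pool.ntp.org", port: int = 123, timeout: float = 5.0) -> float:
    """Approximate offset in seconds between this host's clock and an NTP server."""
    # Minimal SNTP request: LI=0, version=3, mode=3 (client); remaining bytes zero.
    request = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        t_sent = time.time()
        sock.sendto(request, (server, port))
        reply, _ = sock.recvfrom(512)
        t_received = time.time()
    # The server's transmit timestamp occupies bytes 40-47 of the reply.
    seconds, fraction = struct.unpack("!II", reply[40:48])
    server_time = seconds - NTP_EPOCH_DELTA + fraction / 2**32
    # Compare against the midpoint of our send/receive window; this ignores
    # asymmetric network delay, so treat the result as a rough estimate.
    return server_time - (t_sent + t_received) / 2

if __name__ == "__main__":
    offset = clock_offset()
    print(f"Clock offset: {offset:+.3f} s")
    # The 1-second threshold is an arbitrary example, not an RSA/RADIUS spec.
    if abs(offset) > 1.0:
        print("Warning: drift this large may break time-synced authentication.")
```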

And, finally, operating system compatibility. I recently, and remember this is the latter half of 2015, had a customer ask me to virtualize their OpenVMS server. OpenVMS is generally not an x86 operating system. By contrast, OpenSolaris, for example, is a port of the original RISC-based OS that can run on x86, and so it may be virtualized. OpenVMS, however, is still a proprietary, hardware-reliant OS, and thus it, along with a few others, cannot be virtualized onto x86 architectures. I am aware that there is a way to virtualize it, but the hypervisor technology involved is not at all standard.

Generally, any x86-based operating system or application today is fair game. We’re unlikely to achieve 100% virtualization nirvana anytime soon, but, believe me, there are benefits beyond the obvious to ensuring that an application resides within a virtual environment.

A Treatise on Geekdom: What does it mean to be a Geek?

According to Wikipedia, the definition of Geek is:

The word geek is a slang term originally used to describe eccentric or non-mainstream people; in current use, the word typically connotes an expert or enthusiast or a person obsessed with a hobby or intellectual pursuit, with a general pejorative meaning of a “peculiar or otherwise dislikable person, especially one who is perceived to be overly intellectual.”

I’m not entirely sure that perception is correct, nor would I agree with the “dislikable” part, but I do think that “obsessed enthusiast” is most accurate.

In my profession, many of us gladly wear the mantle of Geek. Our IT friends tend toward a focus on the tech we’ve chosen for our profession. A storage geek may argue about IOPS, the benefits of file-based versus block-based storage, or whether object file systems will eventually win out. A virtualization geek might want to discuss the value of the orchestration elements of VMware and their associated costs, versus the lack of support in a KVM deployment. We have so very many degrees of conversation regarding the nuances of our industries that we happily jargon on ad infinitum.

One thing that I’ve noticed in my time in this rarified world of IT geekdom is that our members tend to geek out on many other things as well. The Princess Bride, Monty Python and the Holy Grail, The Big Lebowski, Shaun of the Dead, and other comedies can be quoted verbatim by an inordinately large percentage of us. Many of my peers are into high-performance cars, and know the variables of nitrous-powered vehicles and torque ratios.

In my case, the geekiness extends most deeply into music. I tend toward obsession about certain bands, and while I find many who share these tastes, the bands aren’t the most popular and can often be a bit obscure.

Few things satisfy me as much as sharing those musical gems with my friends, turning them on to the music that enriches my life, and hopefully finding a kindred joy in their appreciation of the same.

Do you think that Genesis was better when Peter Gabriel was the lead vocalist, and did Steve Hackett’s departure send the band into a downward spiral, as I do? I will argue to the point of annoyance that Phil Collins, while an amazing drummer, was not the front man who helped define the greatness of this ensemble.

Who was your favorite Grateful Dead keyboard player? Was it Brent? He certainly was mine, but there is merit to Tom Constanten, Keith Godchaux, Pigpen, and even Vince Welnick.

One of my all-time favorite musicians, songwriters, and even producers is Todd Rundgren. His many incarnations, bands, and styles all hold my interest consistently. Do any of my peers dig deeply into his catalogue? This core group of Todd fans is small but incredibly loyal. I’ve seen many of the same faces throughout the years at his shows. Not a whole lot of people share this particular taste.

Another thing that really moves me is the guitar. Not only players of the instrument, but the instrument itself. I find myself looking at pictures and learning about the unique elements of various pickups, strings, tuning pegs, bridges, nuts, etc., even to the point where I recently began building them myself. I’ve put together two guitars from parts. I suppose that since I’m a truly mediocre player, it’s the instrument itself that gives me so much joy. Yeah, I have too many of them. Is that a crime?

So, what’s your particular geek obsession? I’d love to hear about it and learn from it. These distinctions are among the most interesting differentiators within our group. Intelligence is not necessarily the key detail; it’s really the desire to know everything we can about our unique foci.

So tell me, what’s your geek focus?

How to explain Virtualization to newbies

In the world of enterprise architecture, many parts of our daily conversation are taken for granted. We talk about virtualization, vMotion, Storage vMotion, replication, IOPS, and so many other jargon-filled topics. We know these things, understand the concepts, and feel that conversations among industry veterans can use them without explanation. Even when we talk at a higher level, we lean on these concepts with so many presumptions that going back and describing the most basic of them is an exercise we almost never have to perform.

Sometimes, in my role as a pre-sales evangelist, I find myself in the unenviable position many of us do: explaining the concept of virtualization to people who have no basis on which to conceptualize it. Often this conversation arises on dates, with family, or with the parents of friends, and it’s often a lesson in futility. I’ve been in this space since ESX version 2, and I’ve struggled many times to explain it in a way this audience could grasp.

Simply using the oft-quoted phrase “turning hardware into software” really doesn’t cut it.

I would love to get some of your analogies.

As I so often do, I turn to music, and to the evolution of how music has been consumed through the years.

Forgive me for some of the obvious gaps in this story, like DAT, MiniDisc, etc., but here goes: originally, we bought albums, 8-tracks, or cassettes. We could duplicate these media onto cassettes, but we’d experience degradation. Along came CDs. CDs (apologies to audiophiles who always believed, and still believe, that the sound quality of this digital medium never compared to its analogue counterpart, the LP) gave us perfect copies with perfect consistency.

Hardware developed, and we could buy CD players that loaded many discs at once. I likened this multi-disc changer to the ESX host, and the CD to the virtual machine. The analogy worked because the disc itself was like a software version of the physical server’s hardware: physical to virtual (P2V). Of course, the difference was that these CD decks couldn’t play all their discs simultaneously. I point to this as a difference, but find the conversational leap not all that difficult to bridge.

vMotion seems an easy concept to broach at this point, by positing that several of these multi-disc changers are bound together in a “cluster,” and a disc can move between them.

And from that point, the idea of migrating these to the cloud makes for, again, an easy jump in conversation.

As time passed, I found that using the analogy of our MP3 players made the idea of hardware becoming software even easier to convey, because novices could more readily visualize that the music itself was just files.

This posting may seem rudimentary, but it addresses an honest, real-world conversation that many of us find ourselves unable to handle adequately. I hope, again, that it will prompt some discussion, and that I can read about some of your approaches.

VDI – Problems and current considerations

In my last posting, I wrote about my history with VDI and some of the complications I dealt with as these systems moved forward. The problems we encountered were amplified dramatically as the systems scaled up. The difficulties, as I alluded to previously, had mostly to do with storage.

Why is that? In many cases, particularly when implementing non-persistent desktops (those destroyed upon logoff and regenerated upon fresh login), we would see tremendous load placed on the storage environment. When many desktops were being launched at once, we’d encounter what became known as a boot storm. To be fair, most storage IO capacity at the time came from adding more disks to the disk group, or LUN. Mathematically, for example, a single 15,000 RPM disk produces at most around 120 IOPS, so if you aggregate 8 disks into one LUN, you receive a maximum throughput from the disk side of 960 IOPS. Compared to even the slowest of today’s solid state disks, that’s a mere pittance. I’ve seen SSDs operate at as many as 80,000 IOPS, or over 550 MB/s. These disks were once cost-prohibitive for the majority of the population, but pricing has dropped to the point where even the most casual end user can buy them for standard workstations.
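
As a sanity check on that arithmetic, here’s a back-of-the-napkin sketch in Python; the 120 IOPS-per-spindle and 80,000 IOPS SSD numbers are just the rough figures cited above, not vendor specs:

```python
def lun_iops(spindles: int, iops_per_spindle: int = 120) -> int:
    """Best-case aggregate IOPS for a stripe of identical spinning disks."""
    return spindles * iops_per_spindle

eight_disk_lun = lun_iops(8)   # 960 IOPS, the 8-disk LUN from the example above
ssd = 80_000                   # rough IOPS figure for a fast SSD, as cited above

print(f"8-spindle LUN: {eight_disk_lun} IOPS")
print(f"One SSD covers roughly {ssd // eight_disk_lun}x that LUN")
print(f"Spindles needed to match one SSD: {ssd // 120}")
```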

Understand, please, that just throwing SSD at an issue like this isn’t necessarily a panacea for the problems we’ve discussed. But ample read cache goes a long way toward resolving boot storms, and ample write cache goes a long way toward resolving other issues.

Storage environments, even monolithic storage arrays, are essentially servers attached to disks. Mitigating many of these issues also requires adequate connectivity from server to disk within the storage environment. Ample RAM and processor in those servers, or a greater number of servers (heads, controllers, and nodes are other names for these devices), brings additional improvement for the IO issues faced. However, since I focus here on solutions rather than products, each environment must establish the best method of solving these for itself. Should you care to discuss discrete differences between architectures, please feel free to ask. Note: I will not recommend particular vendors’ products in this forum.

There have also been many developments in server-side cache that help with these kinds of boot storms. These typically involve placing either PCIe-based solid state devices or true solid state disks in the VDI host servers, onto which the VDI guest images are loaded and from which they are deployed. This alleviates the load on the storage environment itself.

The key from a mitigation perspective is not just hardware, but more often than not the management software that allows an administrator to allocate these resources most appropriately.

Remember, when architecting this kind of environment, the rule of the day is “assess or guess”: unless you measure the IO your desktops actually generate, you are only guessing at what you will require. Optimizing the VMs is key. Think about this: a good-sized environment of 10,000 desktops running, for example, Windows 7 at 50 IOPS per desktop, versus the same desktops optimized down to 30 IOPS, shows a difference of 200,000 IOPS at running state. Meanwhile, one must architect the storage for peak utilization, not just steady state. Numbers like these are really difficult to achieve on spinning disks alone.
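
To make the “assess or guess” arithmetic concrete, here’s a minimal sizing sketch; the desktop counts and per-desktop IOPS match the example above, while the boot-storm multiplier is purely an illustrative assumption:

```python
def steady_state_iops(desktops: int, iops_per_desktop: int) -> int:
    """Aggregate IOPS generated by a desktop pool at steady state."""
    return desktops * iops_per_desktop

unoptimized = steady_state_iops(10_000, 50)   # 500,000 IOPS
optimized = steady_state_iops(10_000, 30)     # 300,000 IOPS
print(f"Saved by optimizing images: {unoptimized - optimized:,} IOPS")  # 200,000

# Storage must be sized for peak (a morning boot storm), not steady state.
# The multiplier is an illustrative placeholder, not a measured figure.
BOOT_STORM_MULTIPLIER = 3
print(f"Peak design target: {optimized * BOOT_STORM_MULTIPLIER:,} IOPS")
```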

I can go deeper into these issues if I find the audience is receptive.

VDI: The Promise versus the Reality

Back in 2005, when I managed the VMware infrastructure at a large insurance company, we had many contractors located offshore, mostly in India, primarily programmers working on internal systems. We had a number of issues with inconsistent latencies and inconsistent desktop configurations when they VPNed into our network. We decided to deploy a cluster of VMware hosts, and onto these deploy static Windows XP desktops, with the goal of making the environment more stable, consistent, and manageable. While this was not what we consider VDI today, I’ll call it Virtual Desktop 1.0. It worked. We were able to deploy new machines from images, dedicate VLANs to specific security zones, have desktops sit in the DMZ when appropriate, and so on. Plus, we mitigated the latency between interface and application back end, since the desktops resided inside the data center. We no longer had issues with malware or viruses, and when a machine became compromised in any way, we could respond swiftly to the end user’s needs by simply redeploying a machine. Users’ data resided on a network volume, so in the event of a redeploy they retained their settings and personal drives via the redirected home directory. It was a very viable solution for us.

Citrix had accomplished this type of concept years earlier, but as a primarily VMware shop we wanted to leverage our ELA for this. We did, however, have a few applications deployed via MetaFrame, which was still a functional solution.

Time moved on, and VMware View was released. It added the ability to deploy applications and desktops from thin images, easing the demands on storage. In addition, desktop images could now be either persistent or non-persistent, meaning we could present users a fresh desktop at every login. In that case, our biggest benefit was that a desktop took up space on the storage only while in use; if the user was not in the system, they had no footprint whatsoever.

There were some issues, though. The biggest concern was that the non-persistent desktops, upon login, would demand so much IO that we’d experience significant “boot storms,” causing our users to feel significant drag on the system. At the time, with a series of LUNs dedicated to this environment, all spinning disk, the IO issues forced us to sit in a traditional, fully persistent state.

In my next post, I’m going to talk about how the issues of VDI became one of the industry’s main drivers to add IO capacity to storage, and to improve the ease with which we push applications to these machines.

The promise of VDI rests on some very compelling rationale. I’ve outlined only a few points above; beyond those, concepts like pushing apps to mobile devices and “bring your own device” are very appealing. I’ll talk next about how VDI has grown, how solutions have become more elegant, and how hardware has fixed many of our issues.

Make your Data Center sing with Converged Architectures

What does this CI concept mean? A converged architecture is, to put it simply, a purpose-built system combining server, storage, and network, designed to ease the build-out of a centralized data center environment. By this I mean servers and storage arrive integrated with networking, so that with very minor configuration after plugging the equipment in, one can manage the environment as a whole.

In the early days of this concept, prior to the creation of VCE, I worked on a team at EMC called the vSpecialists. Our task was to seek out sales opportunities where an architecture like this would be viable, and to qualify prospects for what was called the Vblock. These included Cisco switches (the just-released Nexus line), Cisco servers (the also freshly released UCS blades), and EMC storage. Vblocks were very prescriptive in their sizing and very dedicated to housing virtualized environments. VMware was critical to the entire infrastructure, and for these systems to be validated by the federation, all workloads on them had to be virtualized. The key, and the reason this was more significant than what customers may already have had in their environments, was the management layer: a piece of software called Ionix that pulled together UCS Manager, IOS for the switch layer, and storage management. This was where the magic occurred.

Then came a number of competitors. NetApp released the FlexPod in response, and the FlexPod was just that: more flexible. Workloads were not required to be exclusively virtual; storage in this case was NetApp; and importantly, customers could configure these systems less rigidly around their sizing requirements and build them up further as needed.

There were other companies, most notably Hewlett-Packard and IBM, that built alternative solutions, but the Vblock and FlexPod were really the main players.

After a bit of time, a new category was created: hyperconvergence. The early players in this field were Nutanix and SimpliVity, both of which built much smaller architectures, hence “hyperconverged.” They were originally seen as entry points for organizations virtualizing from scratch, or as point solutions for discrete new projects like VDI. They’ve grown in both technology and function to the point where companies today base their entire virtual environments on them. While Nutanix is leveraging new OS models, building management layers onto KVM, and offering replication strategies for DR, SimpliVity has other compelling pieces, such as a storage deduplication model and replication, that make a compelling case of their own.

There are also many new players in the hyperconverged marketplace, making it the fastest-growing segment of the market today. Hybrid cloud models are making these approaches very appealing to IT managers setting direction for the future of their data centers. Be sure to look for newer players like Pivot3, Scale Computing, and IdealStor, as well as bigger companies like Hewlett-Packard, and the EVO approach from VMware, with EVO:RAIL and EVO:RACK getting their official launch this week at VMworld.

Orchestration Considerations for Disaster Recovery as a Service

In previous posts, I’ve explored different management-level concerns around orchestration: application deployment, particularly in hybrid environments, and what to seek and what to avoid in orchestration and monitoring for cloud environments. I believe one of the most critical pieces, and one of the most elegant tasks to address in the category of orchestration, is disaster recovery.


While backup and restore are not really the key goals of a disaster recovery environment, in many cases considerations of backup and data integrity are part and parcel of a solid DR environment.

When incorporating cloud architectures into what began as a simple backup/recovery environment, we find that the geographic dispersal of locations is both a blessing and a curse.

As a blessing, the ability to accommodate more than one data center with full replication means that, with the proper planning, an organization can have a completely replicated environment able to support anything from a segment of its users up to all of its applications and users in the event of a Katrina- or Sandy-like disaster. When an organization has this in place, we’re not merely discussing restoring files; we’re discussing surviving a true disaster recovery event, including its uptime and functionality concerns.

As a curse, technical challenges in replication, cache coherency, bandwidth, and everything across compute, storage, and network require consideration. While some issues can be resolved by sharing infrastructure in hosted environments, some significant level of investment must be made and weighed against the potential loss of business functionality should that business face catastrophic data and functionality loss.

For the purposes of this discussion, let’s assume dual data centers are in place, with equivalent hardware to support a fully mirrored environment. The orchestration level, the replication software, lifecycle management of those replicas, ease of use at the management level, insight into physical and virtual architectures: these things are mission-critical. Where do you go as an administrator to ensure these pieces are all in place? Can you incorporate an appropriate backup methodology into your disaster recovery implementation? How about a tape-out function for long-term archive?
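
To make one of those questions concrete, here’s a trivial sketch of the kind of RPO check an orchestration layer has to answer continuously; the workload names, timestamps, frozen “now,” and one-hour objective are all invented for illustration. In practice the timestamps would come from your replication engine’s API or logs:

```python
from datetime import datetime, timedelta, timezone

# Placeholder data: last successful replica per workload.
last_replica = {
    "file-server-01": datetime(2015, 8, 19, 9, 30, tzinfo=timezone.utc),
    "erp-db-01": datetime(2015, 8, 19, 6, 0, tzinfo=timezone.utc),
}

RPO = timedelta(hours=1)  # recovery point objective; an example target

# Frozen "now" so the example is reproducible; use datetime.now(timezone.utc) live.
now = datetime(2015, 8, 19, 10, 0, tzinfo=timezone.utc)

for workload, stamp in last_replica.items():
    lag = now - stamp
    status = "OK" if lag <= RPO else "RPO VIOLATION"
    print(f"{workload}: last replica {lag} ago -> {status}")
```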

In my experience, most organizations are attempting to retrofit their new-world solutions to their old-world strategies, and with the exception of very few older solutions, these functions cannot be incorporated into the newer paradigms.

If I were seeking a DR solution based on existing infrastructure, including but not limited to an existing backup scenario, I’d want the easiest, most global solution that allows my backup product to be incorporated. Ideally, I’d also like to include technologies such as globally centralized dedupe, lifecycle management, a single management interface, virtual and physical server backup, and potentially long-term archival storage (perhaps tape-based) in the full-scope solution. Do these exist? I’ve seen a couple of solutions that feel as if they’d meet my goals. What are your experiences?

I feel that my next post should have something to do with Converged Architectures. What are your thoughts?

Orchestration, Specifically Application Deployment in the Hybrid Cloud

On occasion, consideration will be given to migrating applications to the cloud. There are many reasons why an organization might choose to do this.

Full application upgrades, and the desire to simplify that process going forward, are one. The desire to centralize licensing and the rollout of future upgrades can be another sure cause of moving application sets to the cloud. A big motivator is the end of life of older operating system versions, which forces upgrades. Corporate acquisitions, with the goal of bringing a large workforce, particularly a geographically diverse one, into the organization’s structures, can be a large motivator as well.

Above all, the desire to provide stability through hardware redundancy can be a motivator.

When a company desires to augment its internal datacenter with additional functionality, redundancy, hardware, and uptime, moving applications to a hybrid cloud model can be the ideal way to reach that application nirvana.

Moving a legacy application to a hybrid model is a multi-layered process, and sometimes it cannot be done. With legacy, home-grown applications, we often face a decision between complete rewrite and retrofit. In many cases, particularly with arcane databases where the expertise in their design, upgrade, or installation has long since left the organization, a new front end, possibly web-based, may make the most sense. Often these databases can remain intact, or require just a small amount of massaging to function properly in their new state.

Other applications, those more “information worker” in focus, such as Microsoft Office, already have online equivalents, so migrating them to the cloud is likely to be less of a problem than a complete reinstallation or migration. Components like authentication have been smoothed out, such that these functions are far more trivial than they were at inception.

However, as stated above, there are a number of high-visibility, highly reasonable purposes for moving apps to the cloud. The question has to be: which approach is ideal? As I’ve stated in many of my previous posts, the global approach must be paramount in the discussion. For example, should a future application upgrade be the motivation, the goal may be to create a new hosted environment with a full-stack install of the new version’s code, a cloning and synchronization of the data, and then whatever upgrades the data set requires. At that point you have a new, fully functional platform awaiting testing and deployment. This method preserves a complete point-in-time backup and the ability to revert should it become necessary. Another consideration that must be entertained by then is how to deploy the front end. How is the application delivered to end users? Is there a web-based front end, or must fat applications be pushed to the endpoints? A centralized, cloud-based application rollout such as VDI is one consideration at that point.

As you can see, there are many planning considerations involved in this type of scenario. Good project management, with careful attention to the timeline, will ensure a project such as this proceeds smoothly.

My next installment, within the next couple of weeks, will take on the management of Disaster Recovery as a Service.

The tarnishing of the Brass Ring and fragmentation in the Monitoring space

There are many areas in which existing monitoring and orchestration tools fall short. The discussion in response to my previous posting raised some interesting and appropriate concerns.

The added complexity that hybrid cloud brings to monitoring, across hypervisor, hardware, and connectivity, makes reliance on a specific platform a quandary requiring serious consideration before committing to such a purchase. If a tool accommodates some of these things but not all of them, you’ll find yourself with a tool that simply doesn’t do what you need. Let’s say you’ve chosen a tool that relies on a specific hardware platform, but then choose a third-party provider that uses different switching gear or different host servers. What would your solution be in this case? The answer is that your solution won’t solve anything; it becomes shelfware. What becomes necessary, then, is a full evaluation of your potential solution, with an eye toward the future growth of your environment.

Can your management layer provide insight into the hybrid data center? If so, does it rely on standard MIBs via SNMP? Will it give accurate, predictive monitoring of server, switch, and storage through that channel? How does it handle the migration of workloads, if at all? What if you’re trying to monitor multiple hypervisor stacks? Will you be able to interact from your internal hypervisor with an external OpenStack environment? What about the storage on the other end? Will you gain insight into your own storage, and into the provider’s?
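
Those questions all point toward standards. As a concrete illustration, here’s a minimal sketch of polling a standard MIB over SNMP with the third-party pysnmp library; the hostname and community string are placeholders, and I’m assuming the classic v4 synchronous API (newer releases moved to asyncio and will differ):

```python
# pip install pysnmp  (classic v4 high-level API assumed)
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

def poll_sysdescr(host: str, community: str = "public") -> str:
    """Fetch sysDescr.0 from SNMPv2-MIB, a standard MIB any compliant device exposes."""
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),   # SNMP v2c
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
        )
    )
    if error_indication:
        raise RuntimeError(str(error_indication))
    if error_status:
        raise RuntimeError(error_status.prettyPrint())
    return str(var_binds[0][1])

# Placeholder hostname; any SNMP-enabled switch, array head, or server works:
# print(poll_sysdescr("switch01.example.com"))
```

The point is that the same few lines work against any vendor’s gear that speaks standard SNMP, which is exactly the escape from the lock-in discussed next.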

As I’ve stated, any monitoring solution that relies on a specific hardware platform rather than APIs or SNMP, a system that relies on literally anything proprietary, will only work if your systems comply with its requirements. You’ll need to be willing to accept lock-in for these systems to work for you. This is a key consideration in any purchasing decision. Of course, today’s architecture may fit a given proprietary rule set, but what about the future? Does management want to invest in such a massive undertaking (not only the investment in software, but in systems and manpower) with short-sightedness? Remember, this is a capex as well as an opex investment, and often the operational expenses far outstrip the capital ones.

In my opinion, the future of the cloud world is still uncertain. OpenStack has changed the game, yet its own future is uncertain too. The fragmentation in this nascent technology leaves me wondering where it will land, and I’m still not sold on the cost model. This particular business needs to solidify and adopt deeper standards, while allowing full-stack OpenStack models to be created with a support model that doesn’t leave the customer in the lurch.

I wonder what this means for the future of these products. Should an organization invest in a single stack? I wonder…

Business man’s hand reaching for the brass ring

Monitoring and Management of our Architectures and the Brass Ring

What do we mean today when we talk about managing our environments in the cloud? In the old physical-server days, we had separate systems to manage the network, the storage, and the server infrastructure. As time moved on, these began to merge into products like Spectrum and OpenView. Many players emerged in a space that quite often involved vendor-specific tools: your server manufacturer would often tie you to a specific management tool.

Again, as time moved on, we began to see third-party tools built to specifications, using SNMP traps and APIs that were no longer unique to particular vendors, which furthered our ability to monitor hardware for faults and to alert on high utilization or failures. This helped extensively. But were these adequate for the needs of a virtual environment? Well, in the enterprise we had virtualization management tools that covered that infrastructure well too. However, we still had to dig into our storage and our networks to find hot-spots, so this was not going to let us expand into hybrid and secondary environments.

This whole world changed drastically as we moved to the cloud. Suddenly we needed to manage workloads that weren’t necessarily housed on our own infrastructure; we needed to move them dynamically; we needed connectivity and storage at remote sites, as well as our own, monitored within the same interface. Too many “panes of glass” were simply too demanding for our already overtaxed personnel. Moreover, we were still in monitor-but-not-remediate mode. We needed tools that could not only alert us to problems, but also help us diagnose and repair the issues that inevitably arose, quickly and accurately. It was no longer enough to monitor our assets. We needed more.

Today, with workloads sitting in public, managed, and private spaces, yet all ours to manage, we find ourselves in a quandary. How do we move them? How do we manage our storage? What about new platforms like OpenStack, or a variety of different hypervisors? These tools are getting better every day, moving toward a model in which your organization can use whatever platform, whatever storage, and whatever networking you require to manage your workloads, your data, and your backups, and move them about freely. No single tool is there yet, but many are close.

In my opinion, the brass ring will be when we can live-migrate workloads regardless of location, virtualization platform, and so on. To be sure, there are tools that allow clones and cutovers, but moving workloads live, with no data loss and no impact on our user base, to AWS, to our preferred provider, or in and out of our own data centers as we desire is truly the way of the future.