VDI: The Promise versus the Reality

Back in 2005, when I managed the VMware infrastructure at a large insurance company, we had many offshore contractors, located mostly in India, who were primarily programmers working on internal systems. We had a number of issues with them around inconsistent latency and inconsistent desktop configurations when they connected to our network over VPN. We decided to deploy a cluster of VMware hosts and, onto these, deploy static Windows XP desktops, with the goal of making the environment more stable, consistent, and manageable. While this was not what we consider today to be VDI, I’ll call it Virtual Desktop 1.0. It worked. We were able to deploy new machines from images, dedicate VLANs to specific security zones, have machines sit in the DMZ when appropriate, and so on. Plus, we were able to mitigate the latency between the interface and the application back end, as the desktops resided inside the data center. We no longer had issues with malware or viruses, and when a machine did become compromised in any way, we were able to respond swiftly to the end user’s needs by simply redeploying a machine. Their data resided on a network volume, so in the event of a redeploy, they retained their settings and personal drives from the redirected home directory. It was a very viable solution for us.

Citrix had accomplished this kind of thing years earlier, but as a primarily VMware shop, we wanted to leverage our ELA for this. We did, however, have a few applications deployed via MetaFrame, which was still a functional solution.

Time moved on, and VMware View was released. It added the ability to deploy desktops and applications from thin images, easing the special requirements on storage. In addition, desktop images could now be either persistent or non-persistent, meaning we could hand the user a fresh desktop at every login. In that case, our biggest benefit was that a desktop would only take up space on the storage while in use; if the user was not in the system, they’d have no footprint whatsoever.
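
To make that storage benefit concrete, here is a rough back-of-the-envelope comparison. The user counts and image sizes below are purely illustrative assumptions, not figures from that environment.

```python
# Rough, illustrative sizing comparison: persistent vs. non-persistent desktops.
# All numbers are hypothetical assumptions for the sake of the example.

total_users = 500            # provisioned users
concurrent_users = 150       # users actually logged in at peak
full_desktop_gb = 40         # full persistent desktop image
delta_gb = 5                 # per-session delta for a non-persistent desktop
replica_gb = 40              # shared base/replica image for the pool

persistent_tb = total_users * full_desktop_gb / 1024
non_persistent_tb = (replica_gb + concurrent_users * delta_gb) / 1024

print(f"Persistent pool:     ~{persistent_tb:.1f} TB on disk at all times")
print(f"Non-persistent pool: ~{non_persistent_tb:.1f} TB, and only while users are logged in")
```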

There were some issues with this, though. The biggest concern was that the non-persistent desktops, upon login, would place such simultaneous demand on the infrastructure that we’d experience significant “boot storms,” which caused our users to feel serious drag on the system. At the time, with a series of LUNs dedicated to this environment, all on spinning disk, the I/O problems forced us to sit in a traditional, fully persistent state.
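
For a sense of why spinning disk struggled with those login surges, here is a similarly hedged bit of boot-storm arithmetic. The per-desktop and per-spindle IOPS figures are rule-of-thumb assumptions, not measurements from our cluster.

```python
# Illustrative boot-storm math. All figures are rule-of-thumb assumptions.

desktops_booting = 100       # desktops hitting the pool at shift start
boot_iops_per_desktop = 50   # commonly cited rough figure for a booting Windows desktop
iops_per_15k_spindle = 180   # rough ceiling for a single 15K RPM disk

required_iops = desktops_booting * boot_iops_per_desktop
spindles_needed = -(-required_iops // iops_per_15k_spindle)  # ceiling division

print(f"Boot storm demand: ~{required_iops} IOPS")
print(f"Spindles needed just to absorb it: ~{spindles_needed}")
```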

In my next post, I’m going to talk about how the issues of VDI became one of the industry’s main drivers to add I/O capacity to storage and to improve the ease with which we could push applications to these machines.

The promise of VDI rests on some very compelling rationale. I’ve only outlined a few points above, but in addition, concepts such as pushing apps to mobile devices and “Bring Your Own Device” are very appealing. I will talk next about how VDI has grown, how solutions have become more elegant, and how hardware has fixed many of our issues.

Make your Data Center sing with Converged Architectures

What does this converged infrastructure (CI) concept mean? A converged architecture is, to put it simply, a purpose-built system that includes server, storage, and network, designed to ease the build-out of a centralized environment for the data center. By this I mean servers and storage are combined with networking so that, with very minor configuration after plugging the equipment in, one can manage the environment as a whole.

In the early days of this concept, prior to the creation of VCE, I worked on a team at EMC called the vSpecialists. Our task was to seek out sales opportunities where an architecture like this would be viable and qualify prospects for what was called the vBlock. These included Cisco switches (the just-released Nexus line), Cisco servers (the also freshly released UCS blades), and EMC storage. The vBlocks were quite prescriptive in their sizing and very much dedicated to housing virtualized environments; VMware was critical to the entire infrastructure, and in order for these systems to be validated by the federation, all workloads on them needed to be virtualized. The key, and the reason this was more significant than what customers may already have had in their environments, was the management layer: a piece of software called Ionix that pulled together UCS Manager, the switch layer’s management, and storage management. This was where the magic occurred.

Then came a number of competitors. NetApp released the FlexPod in response, and the FlexPod was just that: more flexible. Workloads were not required to be exclusively virtual, storage in this case was NetApp, and, importantly, customers could configure these less rigidly around their sizing requirements and build them up further as needed.

There were other companies, most notably Hewlett Packard and IBM, that built alternative solutions, but the vBlock and FlexPod were really the main players.

After a bit of time, a new category was created: hyperconvergence. The early players in this field were Nutanix and SimpliVity, both of which built much smaller architectures. They were originally seen as entry points for organizations looking to virtualize from a zero point, or as point solutions for new circumstantial projects like VDI. They’ve since grown in both technology and function to the point where companies today are basing their entire virtual environments on them. While Nutanix is leveraging new OS models, building management layers onto KVM, and adding replication strategies for DR, SimpliVity has other compelling pieces, such as a storage deduplication model and replication, making for a compelling rationale to pursue it.

There are also many new players in the hyperconverged marketplace, making it the fastest-growing segment of the market. Hybrid cloud models are making these types of approaches very appealing to IT managers setting direction for the future of their data centers. Be sure to look for new players in the game like Pivot3, Scale Computing, and IdealStor, as well as bigger companies like Hewlett Packard, and the EVO approach from VMware, with EVO:RAIL and EVO RACK getting their official launch this week at VMworld.

Orchestration Considerations for Disaster Recovery as a Service

In previous posts, I’ve explored various concerns around orchestration at the management level for application deployment, particularly in hybrid environments, and what to seek and what to avoid in orchestration and monitoring for cloud environments. I believe one of the most critical, and most elegant, tasks to be addressed in the category of orchestration is disaster recovery.


While backup and restore are not really the key goals of a disaster recovery environment, in many cases the considerations of backup and data integrity are part and parcel of a solid DR environment.

When incorporating cloud architectures into what began as a simple backup/recovery environment, we find that the geographic dispersal of locations is both a blessing and a curse.

As a blessing, the ability to accommodate more than one data center and full replication means that, with the proper planning, an organization can have a completely replicated environment able to support anywhere from a segment of its users to all of its applications and users in the event of a Katrina- or Sandy-like disaster. When an organization has this in place, we’re no longer discussing restoring files; we’re discussing a true disaster recovery event, including uptime and functionality concerns.

As a curse, technical challenges around replication, cache coherency, bandwidth, and all of the compute, storage, and network require consideration. While some issues can be resolved by sharing infrastructure in hosted environments, some significant level of investment must be made and weighed against the potential loss in business functionality should that business face catastrophic data and functionality loss.

For the purposes of this discussion, let’s go under the assumption that dual data centers are in place, with equivalent hardware to support a fully mirrored environment. The orchestration level, the replication software, lifecycle management of those replications, ease of use at the management level, insight into physical and virtual architectures: these things are mission critical. Where do you go as an administrator to ensure that these pieces are all in place? Can you incorporate an appropriate backup methodology into your disaster recovery implementation? How about a tape-out function for long-term archive?
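
To illustrate the “lifecycle management of those replications” point, here is a minimal sketch, in Python, of the kind of RPO compliance check I’d expect an orchestration layer to perform continuously. The datastore names, the 15-minute RPO, and the hard-coded sync times are all hypothetical; a real tool would pull this inventory from the replication software’s API.

```python
"""Minimal sketch of an RPO compliance check across replicated datastores.
The inventory below is hypothetical; a real orchestration layer would pull it
from the replication software's API rather than a hard-coded dictionary."""
from datetime import datetime, timedelta, timezone

RPO = timedelta(minutes=15)  # assumed recovery point objective

# Hypothetical example data: datastore name -> last successful replication sync
last_sync = {
    "prod-sql-lun01": datetime.now(timezone.utc) - timedelta(minutes=4),
    "prod-files-lun02": datetime.now(timezone.utc) - timedelta(minutes=42),
}

def check_rpo(last_sync_times, rpo):
    """Return the datastores whose replication lag exceeds the RPO."""
    now = datetime.now(timezone.utc)
    return {name: now - ts for name, ts in last_sync_times.items() if now - ts > rpo}

violations = check_rpo(last_sync, RPO)
for name, lag in violations.items():
    print(f"ALERT: {name} is {lag} behind, outside the {RPO} RPO")
if not violations:
    print("All replicated datastores are within RPO.")
```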

In my experience, most organizations are attempting to retrofit their new-world solution to their old-world strategies, and with the exception of a very few older solutions, these functions cannot be incorporated into newer paradigms.

If I were seeking a DR solution based on existing infrastructure, including but not limited to an existing backup scenario, I’d want the easiest, most global solution that allows my backup solution to be incorporated. Ideally, I’d also like to include technologies such as global, centralized dedupe, lifecycle management, a single management interface, virtual and physical server backup, and long-term archival storage (potentially tape-based) in my full-scope solution. Do these exist? I’ve seen a couple of solutions that feel as if they’d meet my goals. What are your experiences?

I feel that my next post should have something to do with Converged Architectures. What are your thoughts?

Orchestration, Specifically Application Deployment in the Hybrid Cloud

On occasion, consideration is given to migrating applications to the cloud. There are many reasons why an organization might choose to do this.

Full application upgrades, and the desire to simplify that process going forward, is one. The desire to centralize licensing and the rollout of future upgrades can also drive moving application sets to the cloud. The end of life of older operating system versions, which forces upgrades, is a big motivator. So are corporate acquisitions and the goal of bringing a large workforce, particularly a geographically diverse one, into the organization’s structures.

Above all, the desire for stability through hardware redundancy can be a motivator.

When a company wants to augment its internal data center with additional functionality, redundancy, hardware, or uptime, moving applications to a hybrid cloud model can be the ideal way to reach that application nirvana.

Moving a legacy application to a hybrid model is a multi-layered process, and sometimes it cannot be done. In the case of legacy, home-grown applications, we often face a decision between a complete rewrite and a retrofit. In many cases, particularly with arcane databases where the expertise regarding design, upgrade, or installation has long since left the organization, a new front end, possibly web-based, may make the most sense. Often these databases can remain intact, or require only a small amount of massaging to function properly in their new state.

Other applications, those more “information worker” in focus, such as Microsoft Office, already have online equivalents, so migrating those apps to the cloud is unlikely to be as much of a problem as a complete reinstallation or migration. Components like authentication have been smoothed out, so these pieces have become far simpler than they were at inception.

However, as stated above, there are a number of highly visible and quite reasonable purposes for moving apps to the cloud. The question has to be: which approach is most ideal? As I’ve stated in many of my previous posts, a global approach must be paramount in the discussion. For example, if an application upgrade is the motivation, the goal may be to create a new hosted environment with a full-stack install of the new version’s code, a cloning/synchronization of the data, and then whatever upgrades the data set requires. At that point you have a new, fully functional platform awaiting testing and deployment. This method preserves a complete point-in-time backup and the ability to revert should it become necessary. Another consideration that must be entertained by then is how to deploy the front end. How is the application delivered to end users? Is there a web-based front end, or must fat applications be pushed to the endpoints? One option at that point would be a centralized, cloud-based application rollout such as VDI.
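
To make that sequencing explicit, here is a minimal, hypothetical outline of the stand-up, synchronize, test, and cut-over flow described above. Every step is a stub standing in for real provisioning, replication, and testing tooling; the point is only the ordering and the fact that the untouched original environment remains the revert path.

```python
# Hypothetical outline of the "stand up a new environment, then cut over" approach
# described above. Every step is a stub standing in for real provisioning,
# replication, and testing tooling; the point is the ordering and the revert path.

def step(description):
    """Stand-in for a real automation step; pretend it succeeded."""
    print(f"[ok] {description}")
    return True

def migrate_application():
    preparation = [
        "Provision hosted environment with the new version of the full stack",
        "Clone/synchronize the data set from production",
        "Apply whatever upgrades the copied data set requires",
        "Run acceptance tests against the new platform",
    ]
    for description in preparation:
        if not step(description):
            # Production was never touched, so 'revert' is simply staying put.
            print(f"[revert] {description} failed; remaining on the original environment")
            return False
    return step("Cut users and the front end over to the new platform")

if __name__ == "__main__":
    migrate_application()
```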

As you can see, there are many planning considerations involved in this type of scenario. Good project management, with careful attention to timelines, will ensure a project such as this proceeds smoothly.

My next installment, within the next couple of weeks, will take on the management of Disaster Recovery as a Service.

The tarnishing of the Brass Ring and fragmentation in the Monitoring space

There are many areas in which existing monitoring and orchestration fall short. In the discussions that followed my previous post, some interesting and appropriate concerns were raised.

The added complexity of bringing the hybrid cloud into the monitoring equation means that hypervisor, hardware, and connectivity all make reliance on a specific platform a quandary requiring careful consideration before committing to such a purchase. If you accommodate one of these things but not all of them, you’ll find yourself with a tool that simply doesn’t do what you need it to. Let’s say you’ve chosen a tool that relies on a specific hardware platform, but then choose a third-party provider that uses different switching gear or different host servers. What would your solution be in this case? The answer is that your solution won’t solve anything, and it becomes shelfware. So what becomes necessary is a full evaluation of your potential solution, along with an eye toward the future growth of your environment.

Can your management layer provide insight into the hybrid data center? If so, does your solution rely on standard MIBs via SNMP? Will it give accurate, predictive monitoring of server, switch, and storage through this channel? How does it handle the migration of workloads, or does it at all? What if you’re trying to monitor multiple hypervisor stacks? Will you be able to interact from your internal hypervisor with an external OpenStack environment? What about the storage on the other end? Will you gain insight into your own storage? How about the storage on the other end?

As I’ve stated, any monitoring solution that relies on a specific hardware platform rather than APIs or SNMP, a system that relies on literally anything proprietary, will only work if your systems match that vendor’s stack. You’ll need to accept lock-in for these systems to work for you, and that is a key consideration in any purchasing decision. Of course, today’s architecture may fit into a proprietary rule set, but what about the future? Does management want to invest in such a massive undertaking (not only the investment in software, but in systems and manpower) with short-sightedness? Remember, this is a capex as well as an opex investment, and often the operational expenses can far outstrip the capital investment.

In my opinion, the future of the cloud world is still uncertain. OpenStack has changed the game, yet its own future is uncertain too; the fragmentation in this nascent technology leaves me wondering where it may end up, and I’m still not sold on the cost model. This particular business needs to solidify and adopt deeper standards, while allowing full-stack OpenStack models to be built with a support model that doesn’t leave the customer in the lurch.

I wonder what this means for the future of these products. Should an organization invest in a single stack? I wonder…


Monitoring and Management of our Architectures and the Brass Ring

What do we mean today when we talk about managing our environments in the cloud? In the old physical server days, we had separate systems to manage the network, the storage, and the server infrastructure. As time moved on, these began to merge into products like Spectrum and OpenView. Many players emerged in a space that quite often revolved around vendor-specific tools; your server manufacturer would often tie you to a specific management tool.

Again, as time moved on, we began to see third-party tools built to specifications, using SNMP traps and APIs that were no longer unique to particular vendors, which furthered the ability to monitor hardware for faults and alert on high utilization or failures. This helped our abilities extensively. But were these adequate for the needs of a virtual environment? In enterprises, we had our virtual management tools to give us good management of that infrastructure as well. However, we still had to dig into our storage and our networks to find hot spots, so this was not going to let us expand our reach to hybrid and secondary environments.
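
As a toy illustration of what “built on standard MIBs rather than vendor-specific hooks” looks like, here is a small sketch that shells out to net-snmp’s snmpget to read the standard sysUpTime OID from a list of devices. The hostnames and community string are placeholders, and a real monitoring tool would use a proper SNMP library, SNMPv3 credentials, and trap receivers rather than one-off polls.

```python
"""Toy example of vendor-neutral polling against a standard MIB.
Hostnames and the community string below are placeholders; a real monitoring
tool would use a proper SNMP library, SNMPv3, and trap receivers instead."""
import subprocess

SYS_UPTIME_OID = "1.3.6.1.2.1.1.3.0"   # sysUpTime from the standard SNMPv2-MIB
DEVICES = ["switch-a.example.com", "array-ctrl-b.example.com"]  # placeholder hosts
COMMUNITY = "public"                    # placeholder read community

def poll_uptime(host):
    """Shell out to net-snmp's snmpget; return the raw output or None on failure."""
    try:
        result = subprocess.run(
            ["snmpget", "-v2c", "-c", COMMUNITY, host, SYS_UPTIME_OID],
            capture_output=True, text=True, timeout=5, check=True,
        )
        return result.stdout.strip()
    except (subprocess.SubprocessError, FileNotFoundError):
        return None

for device in DEVICES:
    uptime = poll_uptime(device)
    print(f"{device}: {uptime if uptime else 'UNREACHABLE, raise an alert'}")
```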

This whole world changed drastically as we moved things to the cloud. Suddenly we needed to manage workloads that weren’t necessarily housed on our own infrastructure, we needed to be able to move them dynamically, and we needed connectivity and storage in these remote sites, as well as our own, to be monitored within the same interface. Too many “panes of glass” were simply too demanding for our already overtaxed personnel. In addition, we were still in a monitor-but-not-remediate mode. We needed tools that could not only alert us to problems but also help us diagnose and repair the issues that arose, as they inevitably did, quickly and accurately. It was no longer enough to monitor our assets. We needed more.

Today, with workloads sitting in public, managed, and private spaces, yet all ours to manage, we find ourselves in a quandary. How do we move them? How do we manage our storage? What about using new platforms like OpenStack or a variety of different hypervisors? These tools are getting better every day; they’re moving toward a model wherein your organization will be able to use whatever platform, storage, and networking you require to manage your workloads, your data, and your backups, and to move them about freely. No single tool is there yet, but many are close.

In my opinion, the brass ring will be when we can live-migrate workloads regardless of location, virtualization platform, and so on. To be sure, there are tools that will let us do clones and cutovers, but moving workloads live, with no data loss and no impact to our user base, to AWS, to our preferred provider, or in and out of our own data centers as we desire is truly the way of the future.

Go Go Gadget

Most of you who know me know as well that I’m a gadget guy. I love technology, and I love how it’s made my life richer. Over the course of the past few months, I’ve picked up a couple of additional toys to help me on that ongoing quest. A review and some perspective on these devices follows.


Amazon Fire Stick

I already have Apple TV devices all over my home and can feed to them from multiple devices, stream from various internet content providers, and so on. In my office, I also love how I can add my television as an additional monitor using these great, well-thought-out products. I’d had only one complaint: I couldn’t stream Amazon Prime, and as a Prime member, I wanted that. I did have a work-around, streaming the Prime content to my laptop and then beaming it to my TV, but that caused some lag. So, for very little money, I purchased this cute little device. It fills that one gap and performs that discrete task just fine. Sometimes, when the network is saturated, I still get some lag, as it’s WiFi-only, but since there’s no hard-wired option available, I’ll deal.

Amazon Echo

I like the device, the implementation, and the sound from this remarkable little speaker. As to functionality, I see it as enabling, or emerging. I’ve yet to play with some of the IFTTT (If This Then That) programming tools being released for it, but these will add functionality, and I know that software updates will make it far more capable moving forward. At the moment, asking “Alexa” for the weather, to tell me a joke, and to play some music of my choosing is quite phenomenal. The voice recognition surpasses anything I’ve had on my iPhone to date.


Apple Watch

As a wearer of the Pebble for the previous year, I was excited about the prospect of a new smartwatch. Let me start by saying that I find it does everything the Pebble did, and better. The display is fantastic. I chose the 42mm Sport version, and I rarely find myself missing the icons I’m aiming for. The implementation of this cute little tool is awesome, and Siri voice recognition is truly better than anything I’ve ever gotten on my phone. Combining my fitness tracking and notifications in one device is great. Being able to respond to text messages from the watch, even if it’s just to say that I’m driving and will respond later, is a vast improvement over my Pebble. If the person I’m responding to has an Apple device, I can even send the message as voice. Again, future software updates promise to improve the functionality even beyond where it stands today.

All in all, I am very happy with these new toys, and truly look forward to the future.

Sometimes, you get caught in the switches…

I’ve been so happy working for such an amazing organization as VMware. My role has been challenging, exciting, and a very interesting ride. I’ve had the opportunity to meet some wonderful people and to play with really cool tech that I hadn’t previously had the chance or time to explore. The Software Defined Data Center is in many ways a huge step toward the future of infrastructure. VVols, vSAN, and NSX are real game changers, and functionally amazing. In the hybrid cloud, we’ll probably find that these technologies, particularly NSX, will make the migration of workloads a much more viable option.

Unfortunately, my team, Velocity, which was in many ways a skunkworks project, was recently disbanded. That has left me with some decisions to make. What to do next? How to leverage the things I’ve done thus far and make a career choice that gives me the opportunity to shine, as well as gives my employer the full breadth of my experience?

I decided to join a great firm on the partner side called OnX Enterprise Solutions. http://www.onx.com/

OnX has a huge portfolio spanning cloud, applications, professional services, and traditional consulting, with so many potential options for providing end-to-end answers to the real issues today’s enterprises face, that I feel like a kid in a candy store.

We have practically every option available to us as a potential solution. I’m incredibly excited to be joining such a great group of people and to be able to deliver to our customers so many of the things that OnX can do. And if we haven’t done it yet, we can add new tech to the repertoire as new and exciting things become available.

It is just this kind of “outside the box” approach that truly makes the difference.

So, sometimes you do get caught in the switches, but sometimes, too, you get an opportunity you would never have had otherwise.

Please look for me at my home on Twitter @MBLeib, on LinkedIn at http://www.linkedin.com/in/matthewleib, and here at my blog for great doings from our team.

Looking back on a tough year…

It’s been an interesting year.

In this year of deaths: friends, family members, pets, and marriages; and new beginnings: jobs, new pets, friends with children born… I’ll include bizarre weather, the loss of a number of jets from the sky, Ebola, strange fluctuations in the world economy, Bill Cosby, and Derek Jeter. The sad truth is that this has been a horrible year for most of us. It’s only natural to look back at my own life and say, “Phew, I’ve survived.”

I saw my daughter off to college for her first year, and have seen her grow into the remarkable, beautiful, smart, funny and savvy young woman she’s become. I miss her often, but am thrilled that she’s off and taking on life on her own terms.

I lost my dog Reggie to cancer, and while I still grieve, I made the decision to adopt a new puppy, Stella. She’s brought all the joy I once had with Reggie, as well as all the puppy goofiness that I needed. I’ll always miss Reggie, but never be too sad because I know that people rarely adopt a senior dog and my daughter and I gave him a great last couple years.

I began working with VMware back in August, and have been really ecstatic with the trajectory that my career has taken. True, I’ve had a number of setbacks in my job history, some of my own creation and others completely unavoidable, but through the kindness, support and belief of others in my abilities, I’ve become a member of an amazing team doing amazing things at an amazing company. Is it clear how happy this job has made me?

While I have countless people to thank for the good that has happened in my past, I’d like to call out a few who’ve been instrumental in my career. Chad Sakac (@Sakacc), Stephen Spellicy (@spellicy), Eric Ledyard (@eledyard), John Troyer (@JTroyer), Kat Troyer (@dailykat), Phil Bradham (@PBradz), Michael Letschin (@MLetschin), Caroline McCrory (@CloudOfCaroline), Chris Birdwell (@vDirtyBird), Jeramiah Dooley (@jdooley), Chuck Hollis (@ChuckHollis), Peter White (@PAWhite), and Mark Thiele (@MThiele), (this leaves off all my family and non-socially connected friends) and so many others have assisted me in recommendations, guidance, support, and just plain listening to me complain, that I simply cannot thank them appropriately. But, I will say thank you all, and my gratitude is immeasurable.

Matt, 12-31-2014

One Month In and SDDC

This past month has been another one of those “Firehose” months. This is such an exciting time for me in my career. VMware has been treating me quite well, and I’ve been having so much fun.

My team, the Accelerate Velocity team, is dedicated to promoting the full stack of the Software Defined Data Center. The SDDC is the most recent iteration in an evolution of converged architectures. I remember my first experiences with converged deployments at EMC and being so impressed by the vBlock: so well thought out and robust. Since then, a number of newer approaches have been taken toward this concept. Companies such as SimpliVity, Nutanix, and of course NetApp have evolved the concept of merging compute, storage, and network into one highly detailed package. These approaches have grown in both dedication to unique use cases and scalability.

As you may know, I previously worked with Nexenta, one of the leaders in Software Defined Storage. The concept behind “software defined,” beyond its marketecture or buzzword status, is that commodity off-the-shelf (COTS) hardware, paired with software focused on the task at hand, can be leveraged to extract the highest level of performance from that hardware. The key is that the software makes the difference: the better the software, the better the solution.

This concept has evolved over the past couple of years to include the network, with spectacular growth in companies like Nicira, which was acquired by VMware, and Cumulus. Again, these software applications take advantage of commodity equipment rather than the more expensive mainstays in the space, while providing a software layer that’s easily as robust as, and often more so than, the embedded software that comes with the native components. They can also leverage APIs and third-party tools like Puppet, Salt, and Chef to automate deployment to multiple devices, or even enterprise-level rollouts.
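
In the same spirit, here is a hedged sketch of what API-driven rollout to many devices can look like: one declarative description pushed to a list of switches over a REST endpoint. The URL path, payload shape, and token are invented for illustration and do not correspond to any vendor’s real API; in practice you would use the vendor’s SDK or a tool like Salt, Puppet, Chef, or Ansible.

```python
"""Illustrative only: push one declarative config to many switches over REST.
The endpoint path, payload shape, and token are hypothetical, not any vendor's
real API; in practice you'd use the vendor's SDK or a tool like Salt or Ansible."""
import json
import urllib.request

SWITCHES = ["leaf01.example.com", "leaf02.example.com"]            # placeholder inventory
DESIRED_CONFIG = {"vlans": [{"id": 110, "name": "vdi-desktops"}]}  # desired state
API_TOKEN = "REPLACE_ME"                                           # placeholder credential

def push_config(host, config):
    """POST the desired state to a hypothetical /api/v1/config endpoint."""
    request = urllib.request.Request(
        f"https://{host}/api/v1/config",
        data=json.dumps(config).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status

for switch in SWITCHES:
    try:
        print(f"{switch}: HTTP {push_config(switch, DESIRED_CONFIG)}")
    except OSError as err:   # urlopen failures surface as URLError, a subclass of OSError
        print(f"{switch}: failed to apply config ({err})")
```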

One can certainly recognize that vSphere has done much the same thing with the virtualization of operating systems on standardized x86 server-class machines.

Along comes VMware with the most impressive attempt to tie all these pieces together: the SDDC approach.

As we move together in this blog, I’ll discuss some of the solutions we present, deployment methodologies, recipes and various components necessary to the creation of the SDDC.