Orchestration, Specifically Application Deployment in the Hybrid Cloud

Organizations will on occasion consider migrating applications to the cloud. There are many reasons why an organization might choose to do this.

Full application upgrades, and the desire to simplify that process moving forward, are one driver. The desire for licensing centralization and easier rollout of future upgrades can be a sure cause of functionally moving application sets to the cloud. A big motivator is the end of life of older versions of operating systems, which forces upgrades. Corporate acquisitions, and the goal of bringing a large workforce, particularly a geographically diverse one, into the organization’s structures, can be another large motivator.

Above all, the desire to provide stability through hardware redundancy can be a motivator.

When a company has a desire to augment the internal datacenter with additional function, redundancy, hardware, uptime, and so forth, the movement of applications to a hybrid cloud model can be the ideal solution to get it to that application nirvana.

Moving a legacy application to a hybrid model is a multi-layered process, and sometimes cannot be done at all. In the case of legacy, home-grown applications, we often find ourselves facing a decision between a complete rewrite and a retrofit. In many cases, particularly arcane database cases in which the expertise regarding design, upgrade, or installation has long since left the organization, a new front end, possibly web-based, may make the most sense. Often, these databases can remain intact, or require just a small amount of massaging to make them function properly in their new state.

Other applications, those more “Information Worker” in focus, such as Microsoft Office, already have online equivalents, so migrating these apps to the cloud is unlikely to be as much of a problem as a complete reinstallation or migration. Components like authentication have been smoothed out, such that these functions have become far more trivial than they were at inception.

However, as stated above, there are a number of high-visibility and highly reasonable purposes for moving apps to the cloud. The question has to be: which approach is ideal? As I’ve stated in many of my previous posts, the global approach must be paramount in the discussion process. For example, should there be a motivation of application upgrade in the future, the goal may be to create a new hosted environment, with a full-stack install of the new version code, a cloning/synchronization of data, and then whatever upgrades to the data set are requisite. At that point, you have a new platform, fully functional, awaiting testing and deployment. This method allows a complete backup of a moment in time to be maintained, and the ability to revert should it become necessary. Another consideration that must be entertained is how to deploy the front end. How is that application delivered to the end users? Is there a web-based front end? Or must fat applications be pushed to the endpoints? A consideration at that point would be a centralized, cloud-based application rollout such as VDI.
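The upgrade path just described (stand up a new hosted stack, clone and synchronize the data, apply the requisite upgrades, test, then cut over, keeping the moment-in-time copy as a fallback) can be sketched as a small orchestration routine. This is purely illustrative; the step names are hypothetical placeholders, not any vendor’s API:

```python
# Illustrative sketch of the clone / upgrade / test / cut-over workflow.
# All step functions are hypothetical placeholders for real provisioning calls.

def run_migration(steps, rollback):
    """Run migration steps in order; on any failure, revert to the snapshot."""
    completed = []
    for name, step in steps:
        try:
            step()
        except Exception:
            rollback()  # fall back to the moment-in-time copy
            return ("reverted", completed)
        completed.append(name)
    return ("cut-over", completed)

# A happy-path run: every step succeeds, so we cut over.
steps = [
    ("provision new stack", lambda: None),
    ("clone and sync data", lambda: None),
    ("upgrade data set",    lambda: None),
    ("smoke test",          lambda: None),
]
state, done = run_migration(steps, rollback=lambda: None)
print(state, len(done))  # cut-over 4
```

The point of the sketch is the shape of the plan, not the plumbing: every step is reversible until the final cut-over, because the original data set is never touched.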

As you can see, there are many planning considerations involved in this type of scenario. Good project management, with careful timeline consideration, will ensure a project such as this proceeds smoothly.

My next installment, within the next couple of weeks, will take on the management of Disaster Recovery as a Service.

The tarnishing of the Brass Ring and fragmentation in the Monitoring space

There are many areas in which existing monitoring and orchestration tools fall short. In the discussions that followed my previous posting, some interesting and appropriate concerns were raised.

Adding the hybrid cloud to the monitoring equation means that hypervisor, hardware, and connectivity all make reliance on a specific platform a quandary requiring massive consideration before undertaking such a purchase. If your tool accommodates one of these things, but not all of them, you’ll find yourself with a tool that simply doesn’t do what you need it to. Let’s say that you’ve chosen a tool that relies on a specific hardware platform, but then choose a third-party provider that uses different switching gear, or different host servers. What would your solution be in this case? The answer is that your solution won’t solve anything; it becomes shelfware. So, what becomes necessary in this case is a full evaluation of your potential solution, along with an eye toward the future growth of your environment.

Can your management layer provide insight into the hybrid data center? If so, does your solution rely on standard MIBs, via SNMP? Will it give accurate predictive monitoring of server/switch/storage through this tunnel? How does it handle the migration of workloads, or does it at all? What if you’re trying to monitor multiple hypervisor stacks? Will you be able to interact from your internal hypervisor with an external OpenStack environment? What about the storage on the other end? Will you gain insight into your own storage? How about that on the other end?

As I’ve stated, any monitoring solution that relies on a specific hardware platform rather than APIs or SNMP, a system that relies on literally anything proprietary, will only work if your systems comply with that vendor’s stack. You’ll need to be willing to accept lock-in in order for these systems to work for you. This is a key consideration of any purchasing decision. Of course, today’s architecture may fit into any proprietary rule set, but what about the future? Does management want to invest in such a massive undertaking (not only the investment in software, but systems and manpower) with short-sightedness? Remember, this is a capex as well as an opex investment. Often, the operational expenses can far exceed those of the capital investment.
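One way to hedge against that lock-in is to keep your alerting logic behind a thin, vendor-neutral interface, so a proprietary collector can later be swapped for a standards-based one (SNMP with standard MIBs, or an open API) without rewriting the rules. A minimal sketch; both adapter classes and their metric values are invented stand-ins, not real integrations:

```python
# Sketch: alert rules written against an interface, not a vendor.
# Both adapters are hypothetical stand-ins with hard-coded sample data.

class SnmpAdapter:
    """Collector speaking standard MIBs over SNMP (simulated here)."""
    def utilization(self, host):
        return {"web-1": 42.0, "db-1": 91.5}[host]

class VendorXAdapter:
    """Proprietary collector exposing the same interface, so rules don't change."""
    def utilization(self, host):
        return {"web-1": 41.8, "db-1": 92.0}[host]

def over_threshold(adapter, hosts, limit=85.0):
    """The alerting rule itself never mentions a vendor."""
    return [h for h in hosts if adapter.utilization(h) > limit]

print(over_threshold(SnmpAdapter(), ["web-1", "db-1"]))     # ['db-1']
print(over_threshold(VendorXAdapter(), ["web-1", "db-1"]))  # ['db-1']
```

If the tool you buy forces the rules and the collector into one proprietary lump, this swap is impossible, and that is exactly when it becomes shelfware.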

In my opinion, the future of the cloud world is still uncertain. OpenStack has changed the game, yet its future, too, is uncertain. The fragmentation in this nascent technology leaves me wondering where its future may lie. And again, I’m still not sold on the cost model. This particular business needs to solidify and adopt deeper standards, meanwhile allowing full-stack OpenStack models to be created with the inclusion of a support model that doesn’t leave the customer in the lurch.

I wonder what this means for the future of these products. Should an organization invest in a single stack? I wonder…

Business man’s hand reaching for the brass ring

Monitoring and Management of our Architectures and the Brass Ring

What do we mean today when we talk about managing our environments in the cloud? In the old physical-server days, we had diverse systems to manage the network, the storage, and the server infrastructure. As time moved on, these systems began to merge into products like Spectrum and OpenView. There came to be many players in a space that quite often involved a vendor-specific tool; your server manufacturer would often tie you to a specific management tool.

Again, as time moved on, we began to see third-party tools built to specifications, using SNMP traps and APIs that were no longer unique to particular vendors, which furthered the ability to monitor hardware for faults and to alert on high utilization or failures. This helped our abilities extensively. But were these adequate to handle the needs of a virtual environment? Well, in enterprises, we had our virtual management tools to give us good management for that infrastructure as well. However, we still had to dig into our storage and our networks to find hot spots, so this was not going to allow us to expand our infrastructure to hybrid and secondary environments.

This whole world changed drastically as we moved things to the cloud. Suddenly, we needed to manage workloads that weren’t necessarily housed on our own infrastructure; we needed to be able to move them dynamically; we needed to make sure that connectivity and storage in these remote sites, as well as our own, could be monitored within the same interface. Too many “panes of glass” were simply too demanding for our already overtaxed personnel. In addition, we were still in monitor-but-not-remediate mode. We needed tools that could not only alert us to problems, but also help us diagnose and repair the issues that arose, as they inevitably did, quickly and accurately. It was no longer enough to monitor our assets. We needed more.
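The shift from monitor-only to monitor-and-remediate boils down to pairing each alert condition with an action. A toy sketch of that loop; the hosts, the threshold, and the remediation hook are all invented for illustration:

```python
# Toy monitor-and-remediate loop: each hot spot triggers a remediation.
# Host names, threshold, and the remediation callback are illustrative only.

def check_and_heal(metrics, threshold, remediate):
    """Alert on hosts above the threshold and invoke a remediation for each."""
    healed = []
    for host, value in metrics.items():
        if value > threshold:
            remediate(host)  # e.g. migrate the workload or restart a service
            healed.append(host)
    return healed

# Simulated utilization readings across a mixed environment.
metrics = {"esx-01": 55, "esx-02": 97, "array-03": 91}
actions = []
hot = check_and_heal(metrics, threshold=90, remediate=actions.append)
print(hot)  # ['esx-02', 'array-03']
```

In a real tool the remediation step is where the hard work lives (workload migration, service restarts, failover), but the structure is the same: detection and repair in one pane of glass.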

Today, with workloads sitting in public, managed, and private spaces, yet all ours to manage, we find ourselves in a quandary. How do we move them? How do we manage our storage? What about using new platforms like OpenStack, or a variety of different hypervisors? These tools are getting better every day; they’re moving toward a model wherein your organization will be able to use whatever platform, storage, and networking you require to manage your workloads, your data, and your backups, and to move them about freely. No one tool is there yet, but many are close.

In my opinion, the brass ring will be when we can live-migrate workloads regardless of location, virtualization platform, and so on. To be sure, there are tools that will allow us to do clones and cutovers, but to move workloads live, with no data loss and no impact to our user base, to AWS, to our preferred provider, or in and out of our own data centers as we desire, is truly the way of the future.

Go Go Gadget

Most of you who know me know as well that I’m a gadget guy. I love technology, and love how it’s made my life richer. Over the course of the past few months, I’ve gotten a couple of additional toys to help me on that ongoing quest. A review and perspective on these devices follows.


Amazon Fire Stick

I already have AppleTV devices all over my home, and can feed to them from multiple devices, stream from various internet content providers, etc. In my office, I also love how I can add my television as an additional monitor using these great, well-thought-out products. I’d had only one complaint: I couldn’t stream Amazon Prime, and as a Prime member, I wanted that. I did have a work-around, streaming the Prime content to my laptop and then beaming it to my TV, but that caused some lag. So, for very little money, I purchased this cute little device. For that one gap, it performs its discrete task just fine. Sometimes, when the network is saturated, I still get some lag, as it’s WiFi-only, but since there’s no hard-wired option available, I’ll deal.

Amazon Echo

I like the design, the implementation, and the sound from this remarkable little device. As to functionality, I see it as enabling, or emerging. I’ve yet to play with some of the IFTTT (If This Then That) programming tools being released for it, but these will add functionality. I know that there will be software updates that will make things far more functional moving forward. At the moment, asking “Alexa” for the weather, to tell me a joke, and to play some music of my choosing is quite phenomenal. The voice recognition surpasses anything I’ve had on my iPhone to date.

AppleWatch

As a wearer of the Pebble for the previous year, I was excited about the prospect of a new “Smart Watch.” Let me start by saying that I find it able to do everything the Pebble did, and better. The display is fantastic; I chose the 42mm Sport version, and I don’t find that I miss the icons I choose very often. Implementation on this cute little tool is awesome, and Siri voice recognition is truly better than anything I’ve ever gotten on my phone. Combining my fitness tracking and notifications into one device is great. The ability to respond to text messages from the watch, even if just to say that I’m driving and will respond later, is a vast improvement over my Pebble. If the person to whom I’m responding has an Apple device, I can even send the message as voice. Again, future software updates promise to improve the functionality even beyond where it stands today.

All in all, I am very happy with these new toys, and truly look forward to the future.

Sometimes, you get caught in the switches…

I’ve been so happy working for such an amazing organization as VMware. My role has been challenging, exciting, and a very interesting ride. I’ve had the opportunity to meet some wonderful people, and to play with really cool tech with which I hadn’t previously had the chance or time. The Software Defined Data Center is in many ways a huge step toward the future of infrastructure. VVols, vSAN, and NSX are real game changers, and functionally amazing. In the hybrid cloud, we’ll probably find that these technologies, particularly NSX, will make the migration of workloads a much more viable option.

Unfortunately, my team, Velocity, which was in many ways a skunkworks project, recently got disbanded. That has left me with some decisions to make. What to do next? How to leverage the things I’ve done thus far, and make a career choice that will give me the opportunity to shine, as well as give my employer the full breadth of my experiences?

I decided to join a great firm on the partner side called OnX Enterprise Solutions. http://www.onx.com/

OnX has a huge set of solutions across so many areas, including Cloud, Apps, Professional Services, and traditional consulting, that the opportunity to provide end-to-end answers to real issues faced by today’s enterprises, with so many potential options in those solutions, makes me feel like a kid in a candy store.

We have practically any option available to us as potential solutions. I’m incredibly excited to be joining such a great group of people, and to be able to deliver to our customers so many of the things that OnX can do. And if we haven’t done it yet, we are able to add new tech to the repertoire as new and exciting things become available.

It is just this kind of “outside the box” approach that truly makes the difference.

So, sometimes you do get caught in the switches, but sometimes, too, you get an opportunity you would never have had otherwise.

Please look for me at my home on Twitter @MBLeib, on LinkedIn at http://www.linkedin.com/in/matthewleib, and here at my blog for great doings from our team.

Looking back on a tough year…

It’s been an interesting year.

In this year of deaths (friends, family members, pets, and marriages) and new beginnings (jobs, new pets, friends with children born), I’ll include bizarre weather, the loss of a number of jets from the sky, Ebola, strange fluctuations in the economic status of the world, Bill Cosby, and Derek Jeter. The sad truth is that this has been a horrible year for most of us. It’s only natural to look back on my own life and say, “Phew, I’ve survived.”

I saw my daughter off to college for her first year, and have seen her grow into the remarkable, beautiful, smart, funny and savvy young woman she’s become. I miss her often, but am thrilled that she’s off and taking on life on her own terms.

I lost my dog Reggie to cancer, and while I still grieve, I made the decision to adopt a new puppy, Stella. She’s brought all the joy I once had with Reggie, as well as all the puppy goofiness that I needed. I’ll always miss Reggie, but never be too sad because I know that people rarely adopt a senior dog and my daughter and I gave him a great last couple years.

I began working with VMware back in August, and have been really ecstatic with the trajectory that my career has taken. True, I’ve had a number of setbacks in my job history, some of my own creation and others completely unavoidable, but through the kindness, support and belief of others in my abilities, I’ve become a member of an amazing team doing amazing things at an amazing company. Is it clear how happy this job has made me?

While I have countless people to thank for the good that has happened in my past, I’d like to call out a few who’ve been instrumental in my career. Chad Sakac (@Sakacc), Stephen Spellicy (@spellicy), Eric Ledyard (@eledyard), John Troyer (@JTroyer), Kat Troyer (@dailykat), Phil Bradham (@PBradz), Michael Letschin (@MLetschin), Caroline McCrory (@CloudOfCaroline), Chris Birdwell (@vDirtyBird), Jeramiah Dooley (@jdooley), Chuck Hollis (@ChuckHollis), Peter White (@PAWhite), and Mark Thiele (@MThiele), (this leaves off all my family and non-socially connected friends) and so many others have assisted me in recommendations, guidance, support, and just plain listening to me complain, that I simply cannot thank them appropriately. But, I will say thank you all, and my gratitude is immeasurable.

Matt, 12-31-2014

One Month In and SDDC

This past month has been another one of those “Firehose” months. This is such an exciting time for me in my career. VMware has been treating me quite well, and I’ve been having so much fun.

My team, the Accelerate Velocity team, is dedicated to promoting the full stack of the Software Defined Data Center. The SDDC is the most recent iteration in an evolution of converged architectures. I remember my first experiences with converged deployments at EMC, and being so impressed by the vBlock: so well thought out and robust. Since then, a number of newer approaches have been taken toward this concept. Companies such as SimpliVity, Nutanix, and of course NetApp have evolved the concept of merging compute/storage/network into one highly detailed package. These approaches have grown in both dedication to unique use cases and scalability.

As you may know, I previously worked at Nexenta, one of the leaders in Software Defined Storage. The concept behind “Software Defined,” beyond the marketecture or buzzword status, is that commodity off-the-shelf (COTS) hardware, paired with software focused on the task at hand, can be leveraged to extract the highest level of performance from that hardware. The key is that software makes the difference. The better the software, the better the solution.

This concept has evolved over the past couple of years to include the network, with spectacular growth in companies like Nicira (acquired by VMware) and Cumulus. Again, these software applications take advantage of commodity equipment, rather than necessarily the more expensive mainstays in the space, while providing a software layer that’s easily as robust as, and often more so than, the embedded software that comes with the native components. They can also leverage APIs and third-party tools like Puppet, Salt, and Chef to automate deployment to multiple devices, or even enterprise-level rollouts.
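The declarative model behind tools like Puppet, Salt, and Chef boils down to one idea: describe the desired state, then apply only the difference on each device, so repeated runs change nothing. A language-agnostic sketch of that convergence idea; the device records and setting names are hypothetical:

```python
# Sketch of declarative convergence, the idea behind Puppet/Salt/Chef:
# compute the difference between desired and actual state, apply only that.
# Device records and setting keys are hypothetical.

def converge(device, desired):
    """Apply the settings that differ from desired state; return what changed."""
    changes = {k: v for k, v in desired.items() if device.get(k) != v}
    device.update(changes)
    return changes

switch = {"mtu": 1500, "vlan": 10}
desired = {"mtu": 9000, "vlan": 10, "lldp": True}

first = converge(switch, desired)   # only mtu and lldp need changing
second = converge(switch, desired)  # already converged: nothing to do
print(first, second)
```

Run the same desired state against a hundred commodity switches and each one converges; run it again and nothing happens. That idempotency is what makes enterprise-level rollouts safe to automate.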

One can certainly recognize that vSphere has done much the same thing with the virtualization of operating systems within standardized x86 server class machines.

Along comes VMware with the most impressive attempt to tie all these pieces together. This is the SDDC approach.

As we move together in this blog, I’ll discuss some of the solutions we present, deployment methodologies, recipes and various components necessary to the creation of the SDDC.

Another new beginning

I’ve been fortunate in my career for a number of reasons. I’ve had the chance to work for many fantastic companies, had the opportunity to work on some amazing technology, and have retained positive relationships with all my employers after leaving. I’ve been able to increase my network, increase my knowledge of so many different areas of great tech within the industry, and promote these to some wonderful customers. I feel truly blessed.

Sometimes, a role comes to you that is so amazing that you can do nothing but change your job. Such a circumstance has arisen for me. While I’ve not been at EarthLink long, and bear only positives for their excellent organization, I’m taking a role at VMware as a Pre-Sales Architect, Accelerate Velocity Strategist.

This group is tasked with focusing on new technologies being released by VMware, and getting these technologies rolled out as functional implementations by some of the highest-profile organizations in the world.

I could not be more excited to be rolling into this group, with its leader, Eric Ledyard (@EricLedyard), and some of the finest engineers I’ve had the chance to meet in my career.

 

The opportunity gives me the potential for learning and growing my skills, and to work with a team that will truly challenge me toward the excellence I’m always seeking for myself. There is no greater compliment for me than to be welcomed into such a group, to be given the opportunity to be on the bleeding edge of an organization I’ve admired, and worked with for over a decade, and to evangelize these things to a phenomenal customer base.

 

I want to say thank you, and tell the world how honored and humbled I am for this opportunity.

GeekWhisperers and I

While at Cisco Live this past week, I was fortunate to be invited to be a guest on the Geek Whisperers. This was a real highlight of my trip. While it may just have been convenient, I was very pleased to be included. I have long listened to the podcast created by Amy Lewis, John Troyer, and Matthew Brender: respectively @CommsNinja, @JTroyer, and @MJBrender on Twitter. They are distinct presences in social media who, with knowledge, intelligence, and humor, have debated, cajoled, and enlightened on the whole concept of social media and community. I told Amy, actually, that listening to the podcast feels like sitting around with friends, listening to an intriguing conversation to which I wish I could add my admittedly limited value. I guess this was that opportunity.

http://geek-whisperers.com/2014/06/the-collective-noun-of-influencers-episode-50/

Episode 50 seems a bit of a landmark as well.

The production setup for this show was not at my friends’ usual levels, but surprisingly, the show’s sound quality was really impressive.

What makes their conversations so compelling for me is that I get to hear the perspectives of marketing, and other approaches to what social media can bring to you and your company, in ways about which I honestly never thought. They’ve opened my eyes to a lot of thought in this vein that I’d really never considered. And they do it consistently, in a compelling manner that never lectures but always intrigues. They talk of Twitter, team building, branding, communities, metrics, and so many other things. Their guests are generally, with the exclusion of myself, experts in the field.

So, with an amazingly professional rig, we sat and discussed many of the issues in what can be a complexly textured conversation.

I have been a member of the social media world for a long time. When I was an employee at EMC, my boss, Chad Sakac (@Sakacc on Twitter), said simply: if you’re not already on it, get on Twitter. So I did. I hadn’t really thought about a personal/professional blog yet, but I’ve now been on Twitter for over four years. I’d thought about what I wanted to say and how I wanted to be perceived on Twitter, and made a number of rules for myself. These guiderails were mine; I could preach them to others, but that’s irrelevant. What matters is what your rules are, what you choose to show about yourself, and what interests you share, either inside or outside your professional career. I think we all recognize that what we do actually becomes part of the digital archive forever. But how important is that aspect to you? What you choose to publish, versus what you don’t, should, in my mind, be guided by that basic rule. Those who do follow me know that I’m just as likely to tweet about music or the Blackhawks as I am about virtualization and storage.

Ultimately, many of the conversations I’ve heard about these things have, in my mind, rolled up to common sense. Some of the things I think about are my rules on curse words, my thoughts on politics, and my thoughts on religion. I rarely have these conversations in a public sense, or with my coworkers, so why would I use a different voice on public social media? Sometimes I think about the book “All I Really Need to Know I Learned in Kindergarten.” I believe in honesty, kindness, and tolerance, and so I behave on social media as well.

I recommend The Geek Whisperers highly, not just because I like these folks, but because I respect their opinions, and intelligent approach to a potential quagmire of a subject.

 

Second Time Around

Once again, the fantastic team at VMware, led by John Mark Troyer (@JTroyer) and Corey Romero, has compiled a comprehensive list of vExperts. I am thrilled to be included in this elite corps. The vExpert title is not a certification, but more an industry recognition of efforts. These people are bloggers, tweeters, and practitioners with years of deep experience in this technology we all love.

The full listing and blog post by Corey is located here:
http://blogs.vmware.com/vmtn/2014/04/vexpert-2014-announcement.html#comment-9611

My first acceptance into that group was gratifying to say the least, but to be recognized for a second year in a row is truly humbling.

It comes with great appreciation and true sincerity. My commitment to you, the community of readers and technologists who use this great tech, is that I will continue to push myself forward to deliver content of meaning, and to assist my customers in implementing best-of-breed solutions.

Quite simply, I want to thank the team, and let my co-inductees know how appreciative I am to be included.

Thank you-

Matt