Diablo Technologies: A unique approach to In-Memory Databases

At the Tech Field Day vendor presentations, I was privileged to see a presentation from the group at Diablo.

At first glance, Memory1 appeared to me to be yet another DIMM (Dual In-line Memory Module) based memory technology. To be fair, there’s a ton of stuff going on in that area. We’ve seen SanDisk’s ULLtraDIMM extend the memory-socket DIMM into what is effectively a memory-socket-based solid state disc.

The idea here is that with incredibly fast storage, we’ve realized that much of the latency that remains comes from the connectivity between that storage and the processor. Traditional SSD is typically connected via SATA or SAS to the motherboard through some level of PCIe bus adaptor or motherboard-based controller. We’ve been thrilled with the speed and durability of these discs for the most part.

The goal of moving the storage closer to the processor, and thereby eliminating the gap between the processor and the storage, brought us to the need for new storage paradigms. A number of years back, FusionIO developed the PCIe-based flash memory card, which eliminated the cable connection from the PCIe bus to the storage and improved the latency numbers. A number of the companies who manufacture solid state storage have come to market with solutions similar to the original FusionIO product. Intel, Samsung, and SanDisk have all achieved this tech, which, while improving and coming down in cost, removes the typical SATA latency; SATA has historically been limited to roughly 600MB/s of usable throughput on its 6Gb/s interface, the Serial ATA theoretical limit. These vendors’ products are by no means the only ones in this space, but a representative sample of the most obvious choices.
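
For a rough sense of where that SATA ceiling comes from, here is a back-of-the-envelope sketch. The 8b/10b encoding overhead figure is my own assumption about the interface, not something from any vendor presentation:

```python
# Rough sketch: why SATA III's 6 Gb/s line rate works out to roughly 600 MB/s.
# Assumes 8b/10b encoding (10 bits on the wire per usable byte), which is my
# assumption about the interface, not a vendor-supplied figure.

SATA3_LINE_RATE_GBPS = 6.0          # gigabits per second on the wire
BITS_PER_USABLE_BYTE = 10           # 8b/10b encoding overhead

usable_mb_per_sec = SATA3_LINE_RATE_GBPS * 1_000 / BITS_PER_USABLE_BYTE
print(f"Usable SATA III throughput: ~{usable_mb_per_sec:.0f} MB/s")  # ~600 MB/s
```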

So the ability to create storage tech sitting directly on the motherboard, close to the processor bank, brought us these newer technologies. In all fairness, the costs of these truly “Memory Bank” storage platforms can be prohibitive.

Enter Diablo Technologies. What they’ve done here is bridge the gap between volatile DRAM and storage. While the product from Diablo (Memory1) appears to simply be slower volatile memory, it’s actually quite a bit more elegant than that. This is not a storage product. It is truly positioned as memory, but with a very different approach than standard DRAM: less expensive than DRAM, and with hugely greater capacities.

Memory1 sits in the DIMM socket on the motherboard, as close to the processor as standard memory, but through the magic of Diablo’s tech, the modules are substantially larger in capacity than standard memory. By a factor of many multiples, actually.

Currently, you can put 128GB in a traditional memory socket with Memory1, which could extend a server’s onboard RAM to 4TB in a 2-socket server. If you’re running SAP or Spark databases as your application, the ability to load the entire database into system memory, without requiring read/write activity against the storage, will massively improve the performance of your database.
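
To make that capacity math concrete, here’s a quick sketch. The 16-DIMM-slots-per-socket figure is an assumed (but common) two-socket board layout, not a number from Diablo:

```python
# Quick sketch of the capacity claim: 128 GB Memory1 modules filling a
# two-socket server. The 16 DIMM slots per socket is an assumed (but typical)
# high-end board layout, not a figure from Diablo.

MODULE_CAPACITY_GB = 128
SOCKETS = 2
DIMM_SLOTS_PER_SOCKET = 16   # assumption: typical 2-socket board

total_gb = MODULE_CAPACITY_GB * SOCKETS * DIMM_SLOTS_PER_SOCKET
print(f"Total: {total_gb} GB (~{total_gb / 1024:.0f} TB)")  # 4096 GB, ~4 TB
```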

In addition, this ability to load the entire database into memory can easily reduce the sheer number of devices required to feed that database to the rest of the system. In so doing, one can easily imagine reducing the number of servers existing within that architecture. Diablo claims up to 90% fewer servers in this environment. Disclaimer: I have not seen the reality here, but I do see it as a reasonable possibility.

They have also claimed that, due to the limitations of data paths on the motherboard, a 2-socket server outperforms a 4-socket server in many cases. This is because memory feeds through substantially fewer paths between processors than in a quad-socket architecture, reducing latencies. Again, the numbers here need to be validated, but I can definitely see the likelihood.

I’ve tried to envision the benefits of this technology in environments that require lots of memory, for example virtualization hosts, but honestly, I haven’t seen a lot of evidence that this is as powerful a use-case. I imagine that we’d see distinct improvements wherever and whenever the server experiences memory pressure.

Please note that the use cases for this memory are specific, but if you think about it, there’s a compelling reason for it. I think this approach is unique and quite an interesting evolution in server-based memory/storage.

How has Social Media assisted me in my Professional and Personal life?

Back when I joined EMC, which in all was an amazing experience, the first thing that Chad Sakac (@Sakacc) said to our team was “If you don’t have a twitter account or a blog, do that.” Sage words indeed. Through the use of social media and communities, I’ve made some of the best friends and contacts of my life. The times I’ve successfully reached out to my peers across the world for solutions, jokes, and IRL meetings are countless. I owe Chad for this advice.

A couple years back, I was fortunate and honored to be invited to be interviewed on the Geek Whisperers podcast. For that, I once again thank Amy Lewis (@Commsninja), John Troyer (@JTroyer), and Matt Brender (@MJBrender). I had the opportunity to explore a bit about how I got started in social media, and how I approach it. For example, I choose not to talk about politics or sex, or even swear, on these platforms. This was a choice I made. I judge nobody who chooses to engage in those conversations, but for me it just didn’t seem right. My interests (music, sports, technology, etc.) are the things I tweet about, and sometimes blog about.

 

So, what has social media done for me? Since leaving the customer side, with literally no social media presence, my circle of friends has grown exponentially. I know that I can grab a meal in practically any town in the US, and many places in Europe. I can receive technical assistance practically at a moment’s notice. I’m able to reach out to people I haven’t seen in years, and delve into conversations that teach me consistently and allow me to voice opinions that may or may not be controversial. And this is just the beginning.

 

With people who follow my posts, either via blog or Twitter, I’m granted a level of understanding and credibility off the bat, not just due to time on the job, but because they’ve already received valuable information from me via my online presence.

 

By no means am I the most prolific blogger, nor am I always entirely on-target with my technical acumen. However, I’ve proven over time that when I don’t know something, I will work to get the answer. And, through my connections, I will know where to go. Integrity has always been a watchword for me.

 

There’s only one piece of advice I can offer, though, and that’s to dive in. We all have something to say, to be honest. The difficulty at the outset is to develop your voice. When I began blogging, I had quite a bit of grammatical skill, as well as a solid command of the language, but what I didn’t have was an innate skill in communicating my voice. I used (and often still do) words that were self-conscious and pretentious (you see???). I had heard of the KISS principle before (Keep It Simple, Stupid) but really didn’t utilize it well. I felt that in order to add some level of solidity to my writing, I needed to impress with my language. The truth is that I really needed to communicate as simply as possible. The only way to truly develop a voice is to practice. Don’t be afraid to publish what’s not perfect. Just try.

 

I’d say the same thing about Twitter. Try to build a voice and a perspective. It can be a little daunting because your potential reach can be huge. But honesty and integrity are key. If you’ve some technical voice, personal tastes, and perspective, don’t be afraid to use them. Follow key people who are integral in the communities in which you exist, and spread that out a bit. Engage. Over time, you’ll improve your following and develop a persona with which you feel comfortable.

 

In terms of career benefits of social media, I’d have to say that the single biggest benefit has been the relationships I’ve built. By having this reach, making and paying attention to these contacts, keeping track of their career moves, and following the progress of the people with whom I am honored to be associated, I have gained insight and inroads into many different arenas of tech. I also know where to go when I need some advice.

 

In the times I’ve searched for work, my Twitter profile, LinkedIn connections, and blog have helped to open doors, lent credibility, and helped me to land the amazing jobs I’ve had. Many people skip over LinkedIn; I believe the platform to be hugely beneficial.

 

Now, recently, as I’ve joined the channel side rather than the vendor side, I’ve been honored to join the Tech Field Day crew (@TechFieldDay). What a huge compliment to me and the efforts I’ve put in to gain industry knowledge outside the realm of the products I represented, and to complement that with the technical acumen to add some level of hands-on knowledge on top of the breadth of products I sell. Being a member of this crew, alongside some real luminaries, as well as being part of the amazing vExpert (@vExpert) and EMC Elect (@EMCElect) groups, has all been due to the efforts I’ve made in blogging, Twitter, and LinkedIn.

 

It’s certainly true that I’ve been blessed with a number of huge compliments. This is not due to outstanding skill, but to that little bit of extra effort to build out the branding I have with the tools provided.

 

A musical dwelling on the loss of some mythicals

Those of you who know me know that as a music fan, I’ve a passionate nature about the musicians who’ve created such brilliance and have enhanced my life so profoundly over the years.

So many people have written about the passing of David Bowie. I feel that there’s likely nothing I can say that would ever compare to their writings. Still, with the additional losses of Lemmy from Motorhead and Hawkwind, Glenn Frey most notably of the Eagles, and the great Dallas Taylor, the drummer for CSN&Y, I find myself compelled to express my sadness as well as my appreciation for all these guys have meant to me, and all they’ve given to the world.

Glenn Frey, both with the Eagles and in his solo career, created some indelible melodies and wonderful lyrics that have resonated over decades. He helped shape the California sound at its outset, and along with luminaries like Jackson Browne, he changed the entire world. These singer/songwriter artists made the world listen to a new sensitivity in rock and roll. This was a sea change in the type of music we were hearing in the early seventies. RIP, Glenn.

Lemmy, or Ian Kilmister, was a huge influence on heavy metal. He started his career as a roadie and guitar tech for Jimi Hendrix and soon joined a band called Hawkwind, which really heralded progressive space rock. Later, he founded Motorhead, whose name came from a Hawkwind song. Aside from his prodigious skill on the bass, he was a known collector of German WWII paraphernalia, though he certainly didn’t follow Nazi philosophy. His bass playing derived most directly from his guitar background, notable for double stops and chording structures that set him apart. The world will miss you, Lemmy.

Dallas Taylor was another key component of the California sound. He appeared on two of the most significant albums of that era: the debut from Crosby, Stills & Nash, and the follow-up with Neil Young, Déjà Vu. Later he went on to play with Stephen Stills, Van Morrison, and Paul Butterfield’s blues band. An influential drummer, he was known as a truly consistent timekeeper and a really nice guy. Thank you, and rest in peace, Dallas.

But David Bowie’s is the death that will affect me for years to come. “The Thin White Duke” created melodies and sounds that not only defined the times, but also created entire genres of music. He took his influences from everywhere he went; places like New York and Berlin shaped his output. Motown, Pink Floyd, and others held appeal for him, and he was quite happy to incorporate some of those sounds, as well as create new ones. His pursuit of excellence was well known, and his refusal to stop until a project was finished was a cornerstone of his long career; his reinvention of himself through characters and his gender-bending personas were trademarks. He was willing to perform with practically anyone, and collaborated with many of the key creative influences in the business. I’m particularly fond of the output he made with Brian Eno and Robert Fripp. He was noted for both promoting and utilizing some of the greatest lead guitarists of the day as well. I first saw Adrian Belew perform as Bowie’s lead guitarist in the early 90’s. I also had the rare opportunity in high school to see him perform in the play The Elephant Man in Chicago, and was completely moved by his work. He was an astounding and self-effacing artist who never rested on his laurels as a musician or an actor. To me, he defined avant garde. David Bowie, rest in peace.
Oddly, the one thing that made me feel better as I lamented the loss of David Bowie was listening to another artist who left us too early. Lou Reed, a genius in his own right, whose loss also affected me profoundly, made me feel somewhat better as I listened to the New York album last week.

It seems that I’m living in a time where my heroes (no pun intended) are moving on. I am saddened that these greats have gone, and yet I revel in the output they’ve created. I hope to revel in the joy of their music rather than dwell too much on their losses.

Honored to be a TFD Delegate

Back at VMworld 2015, I was invited to sit in on a day’s presentations from a variety of cool vendors as a blogger with perspective on OpenStack, storage, virtualization, and orchestration. It was a great experience hanging with Steve Foskett (@SFoskett), Tom Hollingsworth (@NetworkingNerd), and a slew of great industry bloggers to get a perspective on these vendors’ newer products and updates. Also, I was blessed to get fed by Enrico Signoretti (@Esignoretti) with an authentic Italian pasta meal, complete with Parmesan brought from his home.

 

Recently, I was asked to participate in a Tech Field Day (@TechFieldDay) event in Austin in the beginning of February. The list of presenters to the group is outstanding, and I’m really looking forward to hearing what they have to say. I’ll be blogging on my findings and thoughts as I hear what they’ve got to say. This is an honor and a genuine kindness. Plus, I do certainly love Austin.

 

Enrico is holding his Tech UnPlugged conference that Monday, but unfortunately, due to a work obligation in Oklahoma City, I will not be able to make it.

 

Austin is a fantastic tech hub, certainly, helmed by Dell and AMD (at whose location I worked a number of years back on a consulting gig). Highlights will include the opportunity to meet up again with Chris Wahl (@ChrisWahl), who’s a bright star at Rubrik.

 

Check out the following link for details on this and other TFD events.
http://techfieldday.com/event/tfd10/

 

How does your VMW environment look today? A rationale for the VOA.

VMware as a technology has a huge potential to sprawl. Have you created an environment for testing purposes? Have projects required a number of guest VMs to be built that are no longer in use? How about your storage? Is there some chance that your storage could be helpfully augmented by vSAN? What about the distribution of your VMs: are there some that are not appropriately balanced or performing well, or that could be better served by sitting on a less utilized host?


VMware has a tool called the VMware Optimization Assessment, which digs in and allows for much better insight into the utilization, life-cycle, and optimization of your environment, using the back-end technologies of vRealize Operations and vRealize Business as components of the data collection.

Along with providing you the peace of mind of having a healthy and well-configured virtual environment, it also gives you some of the best ways to manage your infrastructure for future growth, and to remain healthy. Even the best virtual environment administrators want these kinds of health assessments so as to provide reporting up through their management structure and to aid in planning for growth.

This can help guide business-related decisions for companies that are in heavy growth mode, going through corporate splits and acquisitions, or actively seeking to shrink their physical footprints into a more virtual data center. Active cloud projects can directly benefit from this too, as the best-laid plans for orchestrating applications and workloads into the cloud will benefit immensely from the knowledge that the systems being migrated are effectively and appropriately utilized.

A big use-case that I can see for this is determining whether or not your organization would be better served by the Enterprise License Agreement (ELA) process, which gives a larger environment a deeper discount and removes limitations on growth within day-to-day operations.

Another valid piece is the hardware refresh process. Whether you’re going to replace your servers or storage, or build out a new ROBO (Remote Office/Branch Office) site, knowing how to effectively size these new pieces of equipment will likely end up saving you substantially.

An administrator may be in a highly volatile and growing environment, but would rather not go back to purchasing on a regular basis for additional licensing to support their infrastructure and remain within constraining license models. Remember, each socket on each server in your environment must have valid licensing. This not only includes your production environment, but Dev, Test, and even Sandbox environments. A VMware Optimization Assessment coupled with an Enterprise License Agreement will allow an environment like this to grow unencumbered by the license limitations inherent in constant returns to the purchasing group, and will definitely serve the goal of saving money on licensing.
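
As a back-of-the-envelope illustration of why per-socket licensing adds up across production, dev, test, and sandbox, here’s a sketch with entirely hypothetical host and socket counts:

```python
# Hypothetical illustration of per-socket licensing exposure across
# environments. Host counts and sockets-per-host are made up for this example.

environments = {
    "production": {"hosts": 20, "sockets_per_host": 2},
    "dev":        {"hosts": 6,  "sockets_per_host": 2},
    "test":       {"hosts": 4,  "sockets_per_host": 2},
    "sandbox":    {"hosts": 2,  "sockets_per_host": 2},
}

total_sockets = sum(e["hosts"] * e["sockets_per_host"] for e in environments.values())
print(f"Socket licenses required today: {total_sockets}")

# Growth is what pushes you toward an ELA: every added host means another
# trip back to purchasing under a per-socket model.
growth_hosts = 8  # hypothetical expansion over the agreement term
print(f"After growth: {total_sockets + growth_hosts * 2}")
```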

So, let’s talk about your ultimate ELA. What components do you want? Does your environment have a “Cloud” component? Maybe vRealize orchestration or automation would be beneficial? How about life-cycle issues? Maybe vRealize Business is appropriate? Maybe your VMW environment can benefit from the micro-segmentation or enhanced security model of NSX? If so, I strongly recommend enhancing the environment with NSX. Maybe you have a desire to eliminate the traditional SAN, and want to leverage the power of vSAN and the benefits of utilizing the internal storage and drive bays available to your hosts to grow your own? I do love the Software Defined Data Center and the benefits therein. The VMware Optimization Assessment tools will help you determine what you need, and how best to craft the ELA.

In my next posting, I’ll be happy to expound on any of these parts, pieces, and software benefits. Just ask for what moves you.

When should you NOT virtualize that system?

Virtualization is an amazing and valuable tool for so many reasons. I’ve been a proponent of the concept since GSX Server and the early Workstation products. In the early days, I used it to provide a fresh image to load on a workstation for a development crew who required a pristine testing environment for a specific project. When their code inevitably blew up the image, all we did was recreate that test machine’s OS from the existing image already located on that machine.

vMotion was a huge plus for us. We created a POC for a large soft drink company wherein we had two hosts vMotioning a file-server every 30 minutes for 2 weeks. When we arrived back in to discuss the viability of vMotion, they’d experienced no headache, and were not even yet aware that we’d done anything. When we showed the logs of the vMotion taking place 48 times a day for two weeks, they were completely convinced.

Things have only gotten better, with increases in disc size, processor capacity, and the amount of RAM that can be added to an individual machine. Fault Tolerance, DRS, and many other technologies have taken away any doubt that most any x86 platform app is a viable target for the virtual world.

But, are there circumstances wherein a device should NOT be virtualized? Perish the thought!

I can envision only a few cases.

For example, one might say that a machine that requires all the resources a host has to offer shouldn’t be virtualized. I still say, though, that in this case a VM is preferable to a physical machine. Backup and recovery are easier, and uptime can be far better, in that DRS allows the virtual machine to be moved off the host and onto another one for hardware maintenance, etc. However, the licensing cost may make this unattractive. When you’ve an ELA in place and can virtualize as much as you want, this actually does become a great solution.

Maybe, in another case, the application hosted on that server is not approved by the vendor? Well, it’s my experience that the OS is the key, and while the app may not have approval from its creator, testing often makes that a non-issue. However, there may be circumstances wherein the app is tied to a physical hardware entity, and the process of virtualizing it makes it truly not function. I would call this poor application development, but these things are often hard to get around. Another similar case is when, as with many older apps, the server requires a hardware dongle or serialized device connected to a physical port on the server. These create challenges, which often can be overcome with the assistance of the company who created the app.

I would posit that in some cases, when a VM relies on time sync, specific concerns may pose an issue. An example of this kind of machine is a RADIUS or RSA server, in which the connecting device must sync with the connection device as part of its remote access. Assuming that you’ve configured all hosts to connect to an authoritative NTP source, and the connection to it is both consistent and redundant, there still exists some small possibility of time drift. Most importantly, one must be aware of this issue and ensure all efforts to resolve it have been made before virtualizing that server.
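
As one hedged example of the kind of pre-flight check I’d want before (and after) virtualizing a time-sensitive server, the sketch below uses the third-party ntplib package to measure the local clock’s offset against an authoritative NTP source. The pool.ntp.org server and the half-second threshold are placeholder choices of mine, not recommendations:

```python
# Minimal sketch: measure local clock offset against an NTP source before
# (and after) virtualizing a time-sensitive server such as a RADIUS/RSA box.
# Requires the third-party "ntplib" package; server name and threshold are
# placeholders, not recommendations.

import ntplib

NTP_SERVER = "pool.ntp.org"      # placeholder authoritative source
MAX_ACCEPTABLE_OFFSET_S = 0.5    # placeholder tolerance for this workload

client = ntplib.NTPClient()
response = client.request(NTP_SERVER, version=3)

print(f"Clock offset vs {NTP_SERVER}: {response.offset:+.3f} seconds")
if abs(response.offset) > MAX_ACCEPTABLE_OFFSET_S:
    print("Drift exceeds tolerance -- investigate host NTP config before migrating.")
else:
    print("Drift within tolerance.")
```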

And, finally, operating system interoperability. I recently (and remember, this is the latter half of 2015) had a customer ask me to virtualize their OpenVMS server. OpenVMS is generally not an x86 operating system; by contrast, OpenSolaris, for example, is a port of an originally RISC-based OS that runs on x86 and may be virtualized. OpenVMS remains a proprietary-hardware-reliant OS, and thus, along with a few others, cannot be virtualized onto x86 architectures. I am aware that there is a way to virtualize it, but the tech on the hypervisor side is not at all standard.

Generally, any x86-based operating system or application today is fair game. While we’re unlikely to achieve 100% virtualization nirvana anytime soon, believe me, there are benefits beyond the obvious to ensuring that an application resides within a virtual environment.

A Treatise on Geekdom: What does it mean to be a Geek?

According to Wikipedia, the definition of Geek is:

The word geek is a slang term originally used to describe eccentric or non-mainstream people; in current use, the word typically connotes an expert or enthusiast or a person obsessed with a hobby or intellectual pursuit, with a general pejorative meaning of a “peculiar or otherwise dislikable person, especially one who is perceived to be overly intellectual.”

I’m not entirely sure that perception is correct, nor would I agree with the dislikable part, but I do think that obsessed enthusiast is most accurate.

In my profession, many of us gladly wear the mantle of Geek. Our IT friends tend toward a focus on the tech we’ve chosen for our profession. A storage geek may argue about IOPS, the benefits of file-based versus block-based storage, or whether object file systems will eventually win out. A virtualization geek would want to discuss the value of the orchestration elements of VMware and its associated costs, versus the lack of support in a KVM deployment. We have so very many degrees of conversation regarding the nuance of our industries that we happily jargon-on ad infinitum.

One thing that I’ve noticed in my time in this rarified world of IT geekdom is that our members tend to geek out on many other things as well. The Princess Bride, Monty Python and the Holy Grail, The Big Lebowski, Shaun of the Dead, and other comedies can be quoted verbatim by an inordinately large percentage of us. Many of my peers are into high-performance cars, and know the variables regarding nitrous-powered vehicles and torque ratios.

In my case, the geekiness extends most deeply into music. While I tend toward obsession about certain bands, and I find many who share these tastes, they aren’t the most popular, and can often be a bit more obscure.

Few things satisfy me as much as sharing those musical gems with my friends, turning them on to the music that enriches my life, and hopefully finding a kindred joy in my friend’s appreciation of the same.

Do you think that Genesis was better when Peter Gabriel was the lead vocalist, and do you agree with me that Steve Hackett’s departure sent the band on a downward spiral? I will argue to the point of annoyance that Phil Collins, while an amazing drummer, was not the front man who helped to define the greatness of this ensemble.

In the Grateful Dead, who was your favorite keyboard player? Was it Brent? He certainly was mine, but there is merit to Tom Constanten, Keith Godchaux, Pigpen, and even Vince Welnick.

One of my all-time favorite musicians, songwriters, and even producers is Todd Rundgren. His many incarnations, bands, and styles all hold my interest consistently. Do any of my peers dig deeply into his catalogue? This core group of Todd fans is small but incredibly loyal. I’ve seen many of the same faces throughout the years at his shows. Not a whole lot of people share this particular taste.

Another thing that really moves me is the guitar. Not only the players of the instrument, but the instrument itself. I find myself looking at pictures, learning about the unique elements of various pickups, strings, tuning pegs, bridges, nuts, etc., even to the point where recently I began building them myself. I’ve put together two guitars from parts. I suppose that since I’m a truly mediocre player, the instrument itself gives me that much joy. Yeah, I have too many of them. Is that a crime?

So, what’s your particular geek obsession? I’d love to hear, and learn about it. These great distinctions between us are one of the most interesting differentiations within our group. The intelligence is not necessarily the key detail, but really the desire to know everything we can about our unique foci.

So tell me, what’s your geek focus?

How to explain Virtualization to newbies

In the world of enterprise architecture, many parts of our daily conversation are taken for granted. We talk about virtualization, vMotion, Storage vMotion, replication, IOPS, and so many other jargon-filled terms. We know these things, understand the concepts, and feel that our conversations amongst industry veterans can use these concepts without explanation. Even when we talk about things at a higher level, we leverage many of these concepts with so many presumptions that going back and describing the most basic of them is an exercise we almost never have to perform.

Sometimes, in my role as a pre-sales evangelist, I find myself in the unenviable position many of us do: explaining the concept of virtualization to people who have no basis on which to conceptualize it. Often this conversation arises on dates, with family, or with the parents of friends. And, often, it’s a lesson in futility. I’ve been in this space since ESX version 2, and have struggled many times with explaining this in a way that this audience could grasp.

Simply using the phrase oft-quoted “Turning hardware into software” really doesn’t cut it.

I would love to get some of your analogies.

As I so often do, I usually use the connection to music, and the evolution of how music has been consumed through the years.

Forgive me for some of the obvious gaps in this conversation, like DAT, MiniDisc, etc., but here goes: originally, we bought albums, 8-tracks, or cassettes. We were able to duplicate these media onto cassettes, but we’d experience degradation. Along came CDs. CDs (apologies to audiophiles who always believed, and still believe, that the sound quality of this digital medium never compared to its analogue counterpart, the LP) gave us perfect files with perfect consistency.

Hardware developed, and we were able to buy CD players that could load up many discs at once. I likened this concept to the ESX host, while the CD was analogized to the virtual machine. This worked because the disc itself was like a software version of the hardware, the same way a virtual machine is a software version of a physical server. The analogy was Physical to Virtual (P2V). Of course, the difference was that these CD decks were unable to play all the discs simultaneously. I point to this as a differentiation, but find the conversational leap not all that difficult to bridge.

vMotion seems an easy concept to broach at this point, pointing out the assumption that many of the multi-disc changers are bound together in a “Cluster.”

And from that point, the idea of migrating these to the cloud makes for, again, an easy jump in conversation.

As time passed in my conversations, I found that using the analogy of our MP3 players made the idea of hardware becoming software even easier to convey, because the novices were more able to visualize that the music itself was just files.

This posting may seem rudimentary, but I find it to be an honest, real-world conversation that many of us find ourselves unable to perform adequately. I hope, again, that the posting will promote some conversation, and that I can read some of your approaches to this.

VDI – Problems and current considerations

In my last posting, I wrote about my history with VDI, and some of the complications I dealt with as these things moved forward. In these cases, the problems we encountered were largely amplified when scaling the systems up. The difficulties, as I alluded to previously, had mostly to do with storage.

Why is that? In many cases, particularly when implementing non-persistent desktops (those that would be destroyed upon log-off and regenerated upon fresh login), we would see a great deal of load placed on the storage environment. Often when many of these were being launched, we’d encounter what became known as a boot storm. To be fair, most of the storage IO capacity at the time was gained by placing more discs into the disc group, or LUN. Mathematically, for example, a single 15,000 RPM disc produces at most around 120 IOPS, so if you aggregate 8 discs into one LUN, you receive a maximum throughput from the disc side of roughly 960 IOPS. Compared to even the slowest of solid state discs today, that’s a mere pittance; I’ve seen SSDs operate at as many as 80,000 IOPS, or over 550MB/s. Those discs were once cost-prohibitive for the majority of the population, but pricing has dropped to the point where today even the most casual end user can purchase them for use in standard workstations.
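
To put those numbers side by side, here’s a small sketch using the ballpark figures quoted above (they are rough numbers from this post, not measured benchmarks):

```python
# Rough comparison of aggregate spindle IOPS vs a single SSD, using the
# ballpark figures quoted above rather than measured benchmarks.

IOPS_PER_15K_SPINDLE = 120
SPINDLES_IN_LUN = 8
SSD_IOPS = 80_000

lun_iops = IOPS_PER_15K_SPINDLE * SPINDLES_IN_LUN
print(f"8-spindle LUN:  ~{lun_iops:,} IOPS")           # ~960 IOPS
print(f"Single SSD:     ~{SSD_IOPS:,} IOPS")
print(f"SSD advantage:  ~{SSD_IOPS / lun_iops:.0f}x")  # ~83x
```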

Understand, please, that just throwing SSD at an issue like this isn’t necessarily a panacea for the issues we’ve discussed, but you can go a long way toward resolving things like boot storms with ample read cache. You can go a long way toward resolving other issues by having ample write cache.

Storage environments, even monolithic storage providers, are essentially servers attached to discs. Mitigating many of these issues also requires adequate connection within the storage environment from server to disc. Also, ample RAM and processor in those servers or number of servers (Heads, Controllers, and Nodes are other names for these devices) are additional functional improvements for the IO issues faced. However, in some cases, and as I don’t focus on product, but solution here, one must establish the best method of solving these. Should you care to discuss discrete differentiations between architectures, please feel free to ask. Note: I will not recommend particular vendors products in this forum.

There have also been many developments in terms of server-side cache that help with these kinds of boot storms. These typically involve placing either PCIe-based solid state devices or true solid state discs in the VDI host servers, onto which the VDI guest images are loaded, and from which they are deployed. This alleviates the load on the storage environment itself.

The key in this from a mitigation perspective is not just hardware, but more often than not, the management software that allows a manager to allocate these resources most appropriately.

Remember, when architecting this kind of environment, the rule of the day is “Assess or Guess.” This means that unless you have a good idea of what kind of IO will be required, you couldn’t possibly know what you’ll need. Optimizing the VMs is key. Think about this: a good-sized environment with 10,000 desktops running, for example, Windows 7 at 50 IOPS per desktop, as opposed to an optimized environment in which those same desktops are tuned down to 30 IOPS, shows a difference of 200,000 IOPS at running state. Meanwhile, one must architect the storage for peak utilization. It’s really difficult to achieve these kinds of numbers on spinning discs alone.
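
Here’s that “assess or guess” arithmetic as a sketch. The 50 versus 30 IOPS per-desktop figures come from the example above; the boot-storm peak multiplier is an illustrative assumption I’ve added:

```python
# "Assess or guess" sizing sketch for a 10,000-desktop VDI environment.
# Steady-state per-desktop IOPS figures come from the example above; the
# boot-storm peak multiplier is an illustrative assumption of my own.

DESKTOPS = 10_000
UNOPTIMIZED_IOPS = 50
OPTIMIZED_IOPS = 30
PEAK_MULTIPLIER = 3   # assumption: boot storms multiply steady-state IO

steady_unoptimized = DESKTOPS * UNOPTIMIZED_IOPS   # 500,000 IOPS
steady_optimized = DESKTOPS * OPTIMIZED_IOPS       # 300,000 IOPS

print(f"Unoptimized steady state: {steady_unoptimized:,} IOPS")
print(f"Optimized steady state:   {steady_optimized:,} IOPS")
print(f"Savings from optimization: {steady_unoptimized - steady_optimized:,} IOPS")
print(f"Peak to architect for (optimized): {steady_optimized * PEAK_MULTIPLIER:,} IOPS")
```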

I can go deeper into these issues if I find the audience is receptive.

VDI: The Promise versus the Reality

Back in 2005, when I managed the VMware infrastructure at a large insurance company, we had many contractors located off-shore. These contractors were located mostly in India and were primarily programmers working on internal systems. We had a number of issues with them having to do with inconsistent latencies and inconsistent desktop designs when we had them VPN into our network. We decided to deploy a cluster of VMware hosts, and onto these deploy static Windows XP desktops, with the goal of making the environment more stable, consistent, and manageable. While this was not what we consider today to be VDI, I’ll call it Virtual Desktop 1.0. It worked. We were able to deploy new machines from images, dedicate VLANs to specific security zones, have them sit in the DMZ when appropriate, etc. Plus, we were able to mitigate the latency issues between the interface and the application back end, as the desktops resided inside the data center. We no longer had any issues with malware or viruses, and when a machine did become compromised in any way, we were able to respond swiftly to the end user’s needs by simply redeploying a machine for them. Their data resided on a network volume, so in the event of a redeploy, they were able to retain their settings and personal drives from the redirected home directory. It was a very viable solution for us.

Citrix had accomplished this type of concept years earlier, but as a primarily VMware shop, we wanted to leverage our ELA for this. However, we did have a few applications deployed by MetaFrame, which was still a functional solution.

Time moved on, and VMware View was released. This added the ability to deploy applications and desktops from thin images, easing the space requirements on the storage. In addition, the desktop images now could be either persistent or non-persistent, meaning our goal could be to present a fresh desktop to the user upon login. In this case, our biggest benefit was that the desktop would only take up space on the storage when in use, and if the user was not in the system, they’d have no footprint whatsoever.

There were some issues, though. The biggest concern was that the non-persistent desktops, upon login, would demand such processing power that we’d experience significant “boot storms,” which caused our users to experience significant drag on the system. At the time, with a series of LUNs dedicated to this environment, all spinning disc, we had IO issues forcing us to stay in a traditional, fully persistent state.

In my next post, I’m going to talk about how the issues of VDI became one of the industry’s main drivers to add IO to the storage, and to expand the ease at which we were able to push applications to these machines.

The promise of VDI rests on some very compelling rationale. I’ve only outlined a few reasons above, but in addition, the concepts of pushing apps to mobile devices, “Bring Your Own Device,” and other functions are so very appealing. I will talk next about how VDI has grown, how solutions have become more elegant, and how hardware has fixed many of our issues.