What do s’mores and VDI have in common? Perhaps more than you would expect. A new article, featured in Virtual Strategy Magazine, examines the trend toward the shrinking datacenter through the eyes of a former pastry chef (our CEO). The story is about having your cake and eating it too by deconstructing the VDI environment. Have a read!

—–

You may not know this about me, but I used to be a pastry chef. Bear with me, I swear this becomes relevant.

Years ago, after studying rocket science at MIT but before my rise to CEO at Leostream, I toiled away in various kitchens around Boston. From my modest beginnings as a lowly (overworked and underpaid) pastry cook, I rose to a position as the head (overworked and underpaid) Pastry Chef at a very nice seafood restaurant here in town. (No, not Legal Sea Foods.)

Through all those years, I never caught on to the trend of deconstructing desserts. Part of the reason is simply that I lack the artistic acumen. Part of me wondered what the point was. If I want a s’more, it can look like a s’more.

Well, all these years later, I’m finally embracing the “deconstruction trend”, just not in desserts. Now I’m applying it to VDI. No, really, here we go!

The Original Full-Stack S’more of VDI

For years, the key players in the VDI market sold full-stack solutions that included hypervisors to host virtual desktops, connection brokers to handle assignments, display protocols to connect users to their desktops, security gateways to tunnel users into the network, and a host of other components geared toward making VDI a roaring success.

The problem? Those full-stack solutions carry a high cost that limits your ROI. They lock you into certain workflows, which may or may not match your business use cases. And they generally don’t future-proof your datacenter, instead making it more difficult for you to try new technologies that come to market. Those three factors all benefit the virtualization vendor (they make more up-front money, develop fewer features, and keep you paying for support), at the expense of your budget and IT department.

VDI Deconstructed, Part 1 – Your Resources Can be Anywhere

I realize IT doesn’t want to reconstruct a deconstructed solution from a long list of vendors, which is one reason full-stack solutions seem attractive. But, deconstructing VDI isn’t about separating each and every component. It’s about artistically, realistically, and technically separating the components that make sense. So, what makes sense?

IT is always looking for ways to improve business processes, lower costs, and work more efficiently. Virtualization technology provided those things in spades for the server world: it improved the utilization of datacenters and servers, and turned tasks that took days into something that could be done in hours.

Now software-defined datacenters, clouds, and hyperconverged hardware are simplifying deployments one step further, bundling compute, storage, and networking resources into easy-to-deploy (or already-deployed-in-someone-else’s-datacenter) solutions that can host any of your virtual workloads.

And that is deconstructed VDI. It takes the resource layer out of the VDI stack, allowing you to mix and match hosting environments to best meet your needs. Go ahead and place some virtual desktops in AWS, run some RDS sessions in Azure, build a private OpenStack cloud, and wrap it all together with the vSphere servers already in your datacenter. The key to deconstructed VDI is that you bring all those pieces together to form a single, coherent system. How do you do that? Well, with a smaller VDI stack.
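If it helps to see that in something more concrete than prose, here is a minimal sketch of what a mixed resource layer can look like when each hosting platform is treated as just another entry in a list. The pool names, fields, and capacities below are all made up for illustration; they don’t come from any particular product.

```python
# Hypothetical description of a mixed resource layer as plain data. Every
# hosting platform, cloud or on-premises, becomes just another pool a broker
# can draw from. Names, regions, and capacities are invented.
resource_pools = [
    {"name": "aws-virtual-desktops", "platform": "AWS",       "region": "us-east-1", "capacity": 200},
    {"name": "azure-rds-sessions",   "platform": "Azure",     "region": "eastus",    "capacity": 150},
    {"name": "private-openstack",    "platform": "OpenStack", "region": "on-prem",   "capacity": 100},
    {"name": "vsphere-cluster-01",   "platform": "vSphere",   "region": "on-prem",   "capacity": 80},
]

def total_capacity(pools):
    """One coherent view of capacity, regardless of where each pool lives."""
    return sum(pool["capacity"] for pool in pools)

print(total_capacity(resource_pools))  # 530
```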

VDI Deconstructed, Part 2 – Your Stack Just Got Smaller

A VDI deployment is not made up of the resource layer alone. At its simplest, you also need to consider two, maybe three, additional components.

First, the connection broker. The connection broker is the brains of the system, ideally managing the capacity in your resource layer (automatically provisioning and terminating virtual machines, as required by your business) and managing user assignments and connections to those resources. Your connection broker should provide the flexibility to use any system to host your resources, including any hypervisor, hyperconverged system, cloud, you name it. And, it should support any display protocol you need. Which brings us to component number two.
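To make the broker’s role a little more tangible, here is a toy sketch in Python. It is not Leostream’s (or anyone’s) actual broker, and the pool and desktop names are invented. The point is simply the pattern: hand a user a desktop from whichever pool has spare capacity, provision one on demand, and remember the assignment so a reconnect lands on the same machine.

```python
# A toy broker, not any vendor's actual product. Pool and desktop names are
# invented for illustration only.
import itertools

class Pool:
    """A hosting platform: a vSphere cluster, an AWS region, an OpenStack project..."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.desktops = []             # desktops provisioned so far
        self._ids = itertools.count(1)

    def provision(self):
        if len(self.desktops) >= self.capacity:
            return None                # pool is full
        desktop = f"{self.name}-desktop-{next(self._ids)}"
        self.desktops.append(desktop)  # in reality: call the platform's API
        return desktop

class ConnectionBroker:
    def __init__(self, pools):
        self.pools = pools
        self.assignments = {}          # user -> desktop id

    def assign(self, user):
        if user in self.assignments:   # reconnects go back to the same desktop
            return self.assignments[user]
        for pool in self.pools:
            desktop = pool.provision()
            if desktop:
                self.assignments[user] = desktop
                return desktop
        raise RuntimeError("no capacity left in any pool")

broker = ConnectionBroker([Pool("vsphere", 2), Pool("aws", 10)])
print(broker.assign("alice"))   # vsphere-desktop-1
print(broker.assign("bob"))     # vsphere-desktop-2
print(broker.assign("carol"))   # aws-desktop-1
```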

You need a display protocol to connect users’ chosen client devices to their remote desktops. Display protocols come in many shapes and sizes, from built-in Microsoft RDP to high-performance HP Remote Graphics Software (RGS). Which protocol you use depends on the types of tasks your users perform. If you have task workers running applications with low graphics loads, then RDP is likely fine. If you have remote workers who need to use a CAD application, you probably need to investigate a protocol with better performance.
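If you want to capture that decision as policy rather than tribal knowledge, it can be as simple as a lookup table. The workload categories and protocol choices below are purely illustrative, not a recommendation from any particular product.

```python
# Hypothetical policy mapping the kind of work a user does to a display
# protocol. Categories and choices are illustrative only.
PROTOCOL_BY_WORKLOAD = {
    "task_worker": "RDP",        # light graphics: built-in Microsoft RDP is fine
    "knowledge_worker": "RDP",
    "cad_engineer": "HP RGS",    # heavy 3D work calls for a higher-performance protocol
}

def pick_protocol(workload):
    # Fall back to RDP for anything not yet classified.
    return PROTOCOL_BY_WORKLOAD.get(workload, "RDP")

print(pick_protocol("cad_engineer"))  # HP RGS
```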

That remote workforce leads us to the third component, a gateway. Remote users need a way to tunnel into the network that hosts their desktops. Additionally, if you plan to use a public cloud like AWS or create desktops in OpenStack, the desktops may be on a private network that even users on your LAN must tunnel into.
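At its core, a gateway is just a relay: it accepts a connection from the outside and forwards the traffic to a desktop on a private network. The toy sketch below shows only that bare mechanic; the addresses and port are placeholders, and a real VDI gateway layers authentication, encryption, and protocol awareness on top.

```python
# Toy illustration of what a gateway does at its core: accept a client
# connection and relay bytes to a desktop on a private network. Addresses
# and ports are placeholders; real gateways add auth, TLS, and more.
import socket
import threading

LISTEN_PORT = 8443                   # port exposed to remote users (hypothetical)
DESKTOP_ADDR = ("10.0.0.25", 3389)   # desktop on the private network (hypothetical)

def pipe(src, dst):
    """Copy bytes from one socket to the other until either side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        src.close()
        dst.close()

def serve():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", LISTEN_PORT))
    listener.listen(5)
    while True:
        client, _ = listener.accept()
        desktop = socket.create_connection(DESKTOP_ADDR)
        # Relay traffic in both directions.
        threading.Thread(target=pipe, args=(client, desktop), daemon=True).start()
        threading.Thread(target=pipe, args=(desktop, client), daemon=True).start()

if __name__ == "__main__":
    serve()
```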

I had to leave the pastry business to finally find a deconstruction I can support, and it’s deconstructed VDI. It allows you to keep your options open when it comes to your resource layer, and even change where you host your resources over time. It narrows your VDI stack down to a connection broker, a display protocol, and a gateway. Ultimately, it makes IT more flexible, your datacenter more future-proof, and your end users more productive. How’s that for having your dessert and eating it too?

This article originally appeared in Virtual Strategy Magazine.