Oct 19

There are some physical things in life that I consider part of my identity. My Jeep Wrangler is one of them. My laptop is not. How I live and work is not defined by a physical compute device, but rather by my online identity. Email, Twitter, accessing documents and meeting notes on SharePoint, tracking client engagements in Salesforce, evaluating and testing virtualization solutions in my lab – those are all part of my day. My PC? It’s just what connects me to my online work space and personal space. Sorry, Latitude D820, but you mean nothing to me.

From talks with colleagues and clients, I know I’m not alone. My device doesn’t define me, and it certainly doesn’t hold all the data and applications I need to do my job. So what does all this have to do with Microsoft licensing? Well, in my opinion, they’re getting it.

I’m not going to rehash much of the excellent commentary already out there on yesterday’s Microsoft announcement. See these posts for more background:

As Simon noted in his blog, VECD licensing is going away and the right to run a desktop OS as a server-hosted virtual desktop is now included with Microsoft’s Software Assurance (SA). For devices not covered by SA, organizations can purchase Virtual Desktop Access (VDA) licenses at a cost of $100 per device per year. Furthermore, Microsoft’s license transfer restrictions still apply. So if you want to license a virtual desktop for external contractors, you can assign a VDA license to each contractor system. If a contractor completes a project and leaves, you can re-assign the license to another contractor’s device (you just can’t reassign the license more than once per 90 days).
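To make the reassignment rule concrete, here is a minimal sketch (in Python, with hypothetical names; only the 90-day restriction comes from the licensing terms described above) of how an organization might track whether a VDA license is eligible to move to a new contractor device:

    from datetime import date, timedelta

    REASSIGNMENT_INTERVAL = timedelta(days=90)  # a VDA license can move to another device only once per 90 days

    class VdaLicense:
        """Hypothetical tracker for a single per-device VDA license."""

        def __init__(self, device_id: str, assigned_on: date):
            self.device_id = device_id
            self.assigned_on = assigned_on

        def can_reassign(self, today: date) -> bool:
            # Eligible only if at least 90 days have passed since the last assignment.
            return today - self.assigned_on >= REASSIGNMENT_INTERVAL

        def reassign(self, new_device_id: str, today: date) -> None:
            if not self.can_reassign(today):
                raise ValueError("license was already reassigned within the last 90 days")
            self.device_id = new_device_id
            self.assigned_on = today

    # A contractor leaves after 30 days: the license can't move yet.
    lic = VdaLicense("contractor-laptop-1", date(2010, 4, 1))
    print(lic.can_reassign(date(2010, 5, 1)))   # False, only 30 days have elapsed
    print(lic.can_reassign(date(2010, 7, 1)))   # True, 91 days have elapsed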

Simon also mentioned that “extended roaming rights” is the big deal in the announcement, and it is. While not perfect (Simon describes the issues), it’s a step toward licensing a desktop for a user and not a device (sure, technically we’re still talking about device licensing, but the user can access his desktop from a myriad of personal devices). So let’s call it an alpha release of per-user licensing. Does a per-user model solve all of our problems? No. But organizations want it offered as a choice, and it’s good to see that Microsoft is listening to its customers.

Looking past the good news that came out of yesterday’s announcement, considerable work remains. Microsoft has still not addressed the service provider market. Considerable clarity is still needed for licensing virtual desktops on shared infrastructure. For example, if a user needs a Windows desktop for a week, he essentially has to pay for 90 days’ worth of licensing. Why? Even with VDA, the service provider technically has to associate the VDA license with the subscriber’s physical device and can’t transfer it for another 90 days. The result is that desktop-as-a-service (DaaS) is far more costly than it should be. This problem will grow once companies like HP, IBM, and Dell offer client hypervisors and look to offer services where user desktop VMs are automatically replicated from their personal systems to the cloud. Again, this takes us back to a physical device not defining the user. The IHVs get the opportunity to sell additional services to make up for the low margins they see on hardware sales. Sooner or later Microsoft will have to address this issue, and let’s hope it’s sooner.
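To put a rough number on how far DaaS pricing drifts from actual use, here is a back-of-the-envelope calculation. The $100-per-device-per-year VDA price and the 90-day transfer restriction come from the announcement; the pro-rating is purely my illustration, not anything Microsoft offers:

    # Rough cost comparison for a desktop needed for only one week.
    VDA_ANNUAL_COST = 100.0       # USD per device per year
    TRANSFER_LOCK_DAYS = 90       # the license can't move to another device for 90 days

    days_needed = 7

    pro_rated_cost = VDA_ANNUAL_COST * days_needed / 365         # what a week "should" cost: ~$1.92
    effective_cost = VDA_ANNUAL_COST * TRANSFER_LOCK_DAYS / 365  # what the provider effectively ties up: ~$24.66

    print(f"pro-rated: ${pro_rated_cost:.2f}, effective minimum: ${effective_cost:.2f}")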

On the support side, Microsoft’s internal application teams need to step up and offer clear support statements for the leading client and application virtualization platforms. Officially supporting App-V would be a nice first step. The push for Microsoft client applications to fully support the major client virtualization solutions must come from the top. I’m hopeful that Microsoft’s key executives will make that push.

Finally, let’s not forget that even with SA, Windows Server OS licenses cannot be virtualized without running into mobility restrictions. Instead, most large enterprises have to upgrade to Datacenter edition licenses (for practical purposes) for the sake of virtualizing. I talked about this issue extensively in this post, so I won’t repeat the details here. If lifting licensing constraints for client virtualization is good, I’d argue that doing the same for servers would be even better, especially if you look at the number of servers already virtualized today.

Microsoft customers – your voice is being heard. Now’s a great time to pat Microsoft on the back. However, it’s not time to back off. Keep communicating your licensing needs to Microsoft. It’s clear that they are listening and taking steps to make your life easier.

Full Article

Oct 19

For many years, one of the common factors in x86 servers has been a graphics subsystem characterized by the cheapest graphics chip the vendor could find to put on the motherboard. The logic (if you’ll forgive the pun) behind this was simple: you don’t buy servers to run graphics-intensive applications; that’s what workstations are for! However, 2010 is shaping up to be different, with two applications for graphics cards helping to make the case for graphics in the server.

The first of these, using graphics cards in high-performance compute clusters to accelerate math-heavy applications, is not entirely new. NVIDIA has been pushing its CUDA programming language and graphics cards for at least 18 months, with ATI/AMD expected to join the fray this year. However, the second application, accelerating graphics for virtual desktops, is very new, with Microsoft’s announcements around its RemoteFX protocol setting the stage for the use of high-performance graphics in servers (see here for more on RemoteFX). One of RemoteFX’s capabilities is the ability to use host-based graphics hardware acceleration to offload the graphics processing needed to support hundreds of Windows 7 virtual desktops (RemoteFX may support Vista as well, but I’d be surprised if they back-port the capability to Windows XP).

If Microsoft is successful with RemoteFX, then the real question will be how graphics cards get integrated into today’s server platforms, and it’s not as simple as you might think. Today’s high-end graphics cards have a number of requirements that largely rule them out for use in servers:

  • Power requirements: Typical graphics cards require 150 watts or more of power, and that’s not going to be easy to satisfy in typical server designs, where the power supplies tend to be small and limited by the server form factor. It’s also quite common for the graphics card to require its own dedicated connection to the power supply, a feature not found on server power supplies.
  • PCIe slot requirements: Graphics cards typically require a PCIe x16 slot on the motherboard, and these are rare on server motherboards where a pair of PCIe x8 slots is a more common configuration.
  • Physical size: Graphics cards are big beasts; NVIDIA’s Tesla C2050 is a full-length, double-width PCIe card, and that’s a lot of space to find in most servers.

So off-the-shelf graphics cards aren’t a good fit for servers, and they include components that aren’t needed at all, such as the ability to send graphics out over VGA or DVI connections. If things are difficult for conventional rack-mount servers, they are much worse for blades, where the restrictions on power and physical space are even greater.

My bet is that we’ll need something a bit different if this type of hardware acceleration is going to take off. Here’s what I think we’ll see:

  • A PCIe card designed for servers, with lower power requirements, a more reasonable form factor, and a PCIe x8 (or even x4) interface.
  • External, dedicated graphics engines connected via a PCIe ribbon cable to the server.  NVIDIA has already gone down this path for HPC applications with products like the Tesla S2050. This approach may work for blades as well as rack mount servers.
  • Graphics cards in a blade form factor, i.e. a graphics card that takes up a blade slot and connects via PCIe over the blade chassis backplane.

The external graphics engine may also be a good place to make use of multi-root I/O virtualization, which can share the graphics engine between multiple servers. Anyway, this is certainly going to be an interesting space to watch as desktop virtualization becomes a mainstay of enterprise desktop strategies.

Posted by: Nik Simpson

Full Article

Oct 19

Apparently Intel and AMD don’t care about poor analysts trying to get their content completed by the end of the quarter, as evidenced by their decision to launch major server processor refreshes (AMD, Intel) on consecutive days this week. Needless to say, there has been a vast amount of coverage from web pundits already, so I’m not going to rehash it in detail.

Instead I’m going to focus on things that should matter to anybody buying servers this year. First, let’s look at Intel’s Xeon 75xx and 65xx processors with respect to:

  • Virtualization performance
  • Memory subsystem
  • System reliability
  • Scale up

I’ll look at the AMD Opteron 6000 in a later blog.

Virtualization Performance

Virtualization is one of the few workloads that really make sense on these new processors, and judging by the numbers, they really do deliver. Using VMware’s VMmark benchmark, Intel claims a top score of 71.85 @ 49 tiles using 32 cores (4 x 8-core processors) on an IBM System x3850 X5. To put that in perspective, the previous best 32-core result topped out at 31.56 @ 21 tiles on an HP ProLiant DL785 using 8 x 4-core AMD processors. The result also trounced a 64-core result (48.23 @ 32 tiles) using 16 of the previous-generation Intel Xeon processors. That’s a hugely impressive result for a 4-socket system!
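For anyone who wants to normalize those results, a quick sketch using only the numbers quoted above (score @ tiles, plus core counts) shows how large the per-core gap is:

    # Per-core comparison of the VMmark results quoted above.
    results = {
        "Xeon 75xx, 32 cores (IBM x3850 X5)":   (71.85, 49, 32),
        "Prev-gen AMD, 32 cores (HP DL785)":    (31.56, 21, 32),
        "Prev-gen Xeon, 64 cores (16 sockets)": (48.23, 32, 64),
    }

    baseline_score, _, _ = results["Prev-gen AMD, 32 cores (HP DL785)"]
    for name, (score, tiles, cores) in results.items():
        print(f"{name}: {score / cores:.2f} score/core, "
              f"{tiles / cores:.2f} tiles/core, "
              f"{score / baseline_score:.1f}x the previous best 32-core score")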

Memory Subsystem

This release drives the final nail into Intel’s aging “Front Side Bus” (FSB) memory architecture, which dates back to the mid-1990s. The FSB architecture had all the processors connected to a common memory controller (AKA the “Northbridge”), so each processor competed with its peers for access to a common pool of memory, making the shared controller a performance bottleneck. The new processors have integrated memory controllers supporting up to 16 memory DIMMs per socket, for a total of 64 DIMMs on a 4-socket server (512 GB with 8 GB DIMMs) or 32 DIMMs on a 2-socket server (256 GB with 8 GB DIMMs).
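The DIMM counts and capacities quoted above follow directly from the 16-DIMMs-per-socket figure; here is a quick check of the arithmetic:

    # Verify the memory configurations quoted above.
    DIMMS_PER_SOCKET, DIMM_SIZE_GB = 16, 8
    for sockets in (2, 4):
        dimms = sockets * DIMMS_PER_SOCKET
        print(f"{sockets}-socket: {dimms} DIMMs, {dimms * DIMM_SIZE_GB} GB with {DIMM_SIZE_GB} GB DIMMs")
    # 2-socket: 32 DIMMs, 256 GB; 4-socket: 64 DIMMs, 512 GB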

System Reliability

In the past, x86-based servers have used an unsophisticated approach to error handling: tell the operating system that something horrible happened and keel over in a heap on the floor. The new Xeons take a very different approach that allows the hardware and operating system to react in a much more flexible way to errors. For example, if an unrecoverable memory error occurs, the operating system can look at the error and simply map that memory location out of use, or kill the process (or virtual machine) that is using the affected memory location. In effect, Intel has blurred the line between its own high-end Itanium architecture and Xeon, which is good for Xeon, but not so good for Itanium.

Scale Up

The new Xeon architecture can be used to build 8-socket servers with off-the-shelf components (sometimes referred to as a “glueless” design). But it doesn’t stop at eight sockets: the Xeon 7500 family, in combination with third-party-developed memory hubs, is designed to support configurations as large as 256 sockets (4096 threads) with 16 TB of memory! The combination of a highly scalable architecture and high system reliability has the potential to put commodity server architectures into direct competition with high-end RISC architectures. For a glimpse of where the future of scale-up architectures may lie, see SGI’s Altix UV announcement.
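Those maximum figures also tell you what each socket contributes; dividing the quoted limits out (nothing here beyond the numbers above, plus the fact that these parts run two threads per core with Hyper-Threading):

    # Per-socket resources implied by the 256-socket / 4096-thread maximum.
    MAX_SOCKETS, MAX_THREADS = 256, 4096

    threads_per_socket = MAX_THREADS // MAX_SOCKETS   # 16
    cores_per_socket = threads_per_socket // 2        # 8 cores, each running two Hyper-Threading threads

    print(f"{threads_per_socket} threads per socket = {cores_per_socket} cores with Hyper-Threading")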


Posted by: Nik Simpson

Full Article

Oct 19

When all you have is a hammer, everything looks like a nail. It’s a truism that ASHRAE (The American Society of Heating, Refrigerating and Air-Conditioning Engineers) has just demonstrated in its advice to data center operators on how to achieve greater cooling efficiency in the data center (see ASHRAE Standard 90.1). In ASHRAE’s opinion, the way to achieve data center cooling efficiency is through the use of various economizer techniques (air-side, water-side …) to reduce the amount of energy used by the cooling plant. Unfortunately, this is a rather narrow view of the problem, focused on the stuff that ASHRAE members do best, i.e. air-conditioning and chilled-water plants. The ASHRAE standard has now drawn a universal thumbs down from some of the largest data center operators (Google, Microsoft, Amazon, Digital Realty Trust, DuPont Fabros Technology and Nokia), who issued a joint statement urging ASHRAE to rethink its position.

The truth is that economizers are just one approach to achieving more efficient energy usage, and the best approach will vary based on a host of factors. For example, one company might favor widespread adoption of server virtualization as a way to reduce energy consumption, while another might be able to scavenge energy from the waste heat produced by the data center and use it to heat buildings. Both approaches lead to more efficient use of energy without requiring the use of economizers. The problem with ASHRAE’s approach is that there is a danger it will get built into building construction codes and potentially restrict innovation in the data center. James Hamilton’s blog entry on the subject (see here) does an excellent job of describing the problems of such a restrictive definition.

Posted by: Nik Simpson

Full Article

Oct 19

With the postponement of Catalyst Europe, I had the opportunity to virtually attend the Microsoft MMS conference keynotes on Tuesday and Wednesday of this week. MMS has long been one of Microsoft’s best conferences, and this year didn’t disappoint. I’m not going to rehash the major announcements, but you can read the full details in the following Microsoft System Center team blog posts:

As I have done at previous conferences, I commented throughout the keynotes via Twitter. Here is a summary of my take on Bob Muglia’s Tuesday keynote and Brad Anderson’s Wednesday keynote.

Tuesday Keynote – Bob Muglia

  • Bob opened by stating that Microsoft has been building dynamic IT management for the last seven years as part of its Dynamic Systems Initiative. Microsoft is essentially underlining the fact that it is not a newcomer to dynamic IT and cloud and is playing on its strength in systems management.
  • Bob highlighted the need for standard service models, and I agree. I started discussing this topic with vendors in 2008 and blogged about standard models in the security context early last year. I recently discussed this issue in my posts on metadata standards and the infrastructure authority. Still, vendors need to move beyond talk about standards for service delivery, metadata, and application interfaces, and deliver them. Mobility to and among cloud infrastructure-as-a-service providers requires these standard models. It’s time for vendors to show their hands, even if they’re holding proprietary service delivery models, metadata sets, and interfaces today. There are far too many competing interests to expect vendors to agree on an industry standard any time soon. Still, progress is being made on the standards front. SNIA’s Cloud Data Management Interface (CDMI), the DMTF’s Open Cloud Standards Incubator, and the Cloud Security Alliance’s work on standard cloud security models are three good examples.
  • It would be nice if Microsoft would offer a complete set of documentation on how to recreate their on-stage demos. The orchestration practices that were demonstrated are of high value, and Microsoft should share the configuration information with their clients.
  • Microsoft’s demos were Microsoft-centric, as expected. I would like to see Microsoft demonstrate integration with third-party management products, which would strengthen their position on interoperability. Most Gartner clients are not homogeneous Microsoft shops; demonstrating orchestration capabilities across multi-vendor management stacks would speak to the needs of the typical enterprise organization. If Microsoft doesn’t want to do this at a conference, then why not offer this information online?
  • I thought Microsoft made a great move in acquiring Opalis, and liked seeing the Opalis integration and System Center Service Manager 2010 shown on-stage.
  • Microsoft demonstrated long distance VM live migration, and in the process Muglia took a swipe at VMware, noting that moving VMs to new sites requires deep integration and validation across all management services. In the demo, Microsoft was able to show processes such as validating that a recent backup was completed before allowing the inter-site live migration to continue. While the demo was impressive, I would have been even more impressed if Microsoft validated the recent backup by integrating with a third party tool such as NetBackup.
  • Microsoft is talking cloud using the terms “shared cloud” and “dedicated cloud.” There are so many disparate terms out there for cloud that pretty soon Rosetta Stone will release a CD on speaking cloud. The Gartner/Burton teams have been working closely on defining a core set of cloud terminology, and it’s important for vendors in the space to adopt common definitions.
  • Edwin Yuen demonstrated System Center Virtual Machine Manager (SCVMM) vNext, which will include drag-and-drop capabilities for deploying multi-tier applications. The demo was powerful, but my existing concerns about SCVMM went unanswered. Today the product is not extensible, and it does not support the Open Virtualization Format (OVF) industry standard; I’m hoping those two features make it into SCVMM vNext.
  • Microsoft’s demo of cloud service management looked solid from the administrator’s point of view, but nothing was shown from the consumer’s point of view. IT service delivery requires the presentation of services to consumers using intuitive interfaces that the customer understands. Microsoft has yet to show a consumer-centric view of how customers will interact with its cloud service management.

Wednesday Keynote – Brad Anderson

  • Brad opened by talking about how the Windows 7 release was the most significant event in the desktop space in a very long time. I would counter that equally significant was Microsoft’s announcement that it will end-of-life (EOL) Windows XP in April 2014. The XP EOL announcement put IT organizations “on the clock” to replace their existing client endpoint OS and, in many cases, re-architect all major aspects of user experience and application delivery.
  • There was a good discussion about power management, but one interesting area of research that was not mentioned was Microsoft’s work on in-guest VM power management. Take a look at Joulemeter for more information.
  • I liked hearing Brad talk about the future desktop representing a convergence of services. This is a concept I recently discussed in the post “The Next Gen Desktop’s Cloudy Future.”
  • There was a bit of irony in seeing Microsoft discuss Hyper-V R2 SP1’s dynamic memory feature on stage. A year ago Microsoft was solidly against VMware’s memory overcommit feature, which allows VMs to share a physical pool of memory on a server. Jeff Woolsey did a nice job describing Hyper-V’s dynamic memory capabilities in the following posts:
  • Microsoft demonstrated the RemoteFX technology that was acquired from Calista. It will be interesting to see how quickly Microsoft’s IHV partners offer a shipping solution. Several have stated their intent to support the technology.
  • Microsoft demonstrated their new Windows InTune product – a cloud service for managing PCs. While I like where Microsoft is taking PC management, I’m still disappointed that they have yet to address desktop OS licensing for cloud-based desktop-as-a-service (DaaS) deployments. Device-based desktop OS licensing is incompatible with the on-demand and device-agnostic nature of cloud service delivery, and Microsoft needs to address this issue sooner rather than later.
  • I was disappointed by the System Center Service Manager demonstration on compliance validation. The demo included no mention of virtualization or virtual infrastructure, which is the default x86 application platform of many of our clients. If the product is not providing controls and validation capabilities for multi-tenant VMware vSphere, Microsoft Hyper-V, and Citrix XenServer environments, then it is not ready for prime time.

Overall I was very impressed with the conference keynotes. System Center Service Manager and Microsoft’s increasing integration of the Opalis software are two areas to watch. Muglia’s talk about standard service delivery models also leads me to believe that Microsoft is poised to aggressively go after the cloud provider space. The release of Microsoft’s Dynamic Infrastructure Toolkit and the growing number of partners in Microsoft’s Dynamic Data Center Alliance (DDA) are proof of that. What did you think of MMS 2010? I’d love to hear your thoughts.

Full Article

Oct 19

Just to let everybody know, this will be my last post on the DCS blog.

Ok, don’t panic, I’m not disappearing from the blogosphere, just moving my blog onto the Gartner Blogging Network effective immediately. So if you want to continue to see my thoughts, musings, and general ramblings on topics such as server design, blades, benchmarks, I/O virtualization, operating systems, cloud computing, etc., just point your feed reader to:

https://blogs.gartner.com/nik-simpson.

Posted by: Nik Simpson

Full Article

Oct 19

Despite being the market leader, we recognized the need to transform and reinvent our business at Dynatrace, before someone else disrupted the market. Over the course of three years, we changed everything - our technology, our culture and our brand image. In this session we’ll discuss how we navigated through our own innovator’s dilemma, and share takeaways from our experience that you can apply to your own organization.

read more

Oct 18

The 45th release of the OpenBSD project is out, bringing more hardware support (Radeon driver updates, Intel microcode integration, and more), a virtualization tool that supports the qcow2 disk format, and a network interface feature that lets you quickly join and switch between different Wi-Fi networks. Root.cz also notes that audio recording is now disabled by default; if you need to record audio, it can be enabled with a new sysctl variable. An anonymous Slashdot reader first shared the announcement. You can download it from any of the mirrors here.

Read more of this story at Slashdot.

full article

Oct 18

Incorta, the industry’s first hyperconverged analytics software company, today announced $15 million in funding by M12, Microsoft’s venture fund… Read more at VMblog.com.

Oct 18

Dell Technologies announces the Dell EMC VxRail hyper-converged infrastructure appliances, powered by VMware vSphere and VMware vSAN software, are… Read more at VMblog.com.
