CIO's Choice: How to Benefit from Server Virtualization?

Mid-market data centers may face some of the most demanding architectural requirements. CIOs must plan for growth, but they cannot exceed their budgets. To square the two, many choose to virtualize their server resources. They soon discover, however, that on a traditional server design it is nearly impossible to scale up data center performance intelligently and cost-effectively.
"Most mid-sized enterprises quickly realize that they need an entirely new server to handle the memory resources required for virtualization," said Benard Golden, CEO of HyperStratus Consulting and author of the book "Virtualization for Dummies."
The old server model is built around the idea of one application per physical server and does not lend itself to virtualization. Because these servers were never designed with virtualization scenarios in mind, applications can run more slowly and service interruptions can ripple across the data center. It is also difficult to move workloads to other machines for flexible load balancing, which makes maintenance and disaster recovery nearly impossible.
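To make the load-balancing point concrete, here is a minimal sketch of moving a running workload between two hosts, assuming a KVM environment managed through the libvirt Python bindings. The host names "host-a" and "host-b" and the guest name "erp-app" are hypothetical placeholders, and the example assumes shared storage and compatible hosts.

```python
# A minimal sketch of live-migrating a running VM between two KVM hosts
# using the libvirt Python bindings (pip install libvirt-python).
# "host-a", "host-b" and the domain name "erp-app" are hypothetical.
import libvirt

# Connect to the source and destination hypervisors over SSH.
src = libvirt.open("qemu+ssh://host-a/system")
dst = libvirt.open("qemu+ssh://host-b/system")

# Look up the running guest on the source host.
dom = src.lookupByName("erp-app")

# Live-migrate it: the guest keeps running while its memory is copied,
# so maintenance on host-a does not interrupt the application.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```

On a traditional one-application-per-box design there is no equivalent move: taking the machine down for maintenance means taking the application down with it.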
Energy consumption is another issue: outdated servers waste power, which frustrates frugal CIOs. Their processors cannot flexibly manage active and idle cores, so companies pay unnecessarily for electricity and cooling equipment.
The limitations of old servers also force companies to maintain dedicated data center teams just to keep them running. The complexity of deploying virtualization on such servers drags down staff efficiency and slows the launch of new projects.
CIOs do not have to put up with this. Instead, they can opt for a more flexible and efficient server platform: standardized, virtualization-ready, energy-efficient, and cost-effective. Only then can mid-sized enterprises get a data center that is easy to manage and matched to their actual needs.
The Vision of Virtualization
In the past few years, CIOs have been consumed by the day-to-day maintenance of the data center. CIO magazine's "State of the CIO" survey tracks their key decision-making priorities, and this year it found that CIOs focus mainly on improving IT operations and system performance (53%) and on business goal-oriented IT leadership (58%).
However, the survey also shows that they hope to take IT operations to the next level soon. Over the next three to five years, CIOs want to spend their time on business innovation (54%) and leading change efforts (42%).
They see virtualization as the key to getting there. By virtualizing the data center, they hope to automate routine operations and free staff for more innovative work. But CIOs have also watched the pioneers of virtualization struggle to build virtualized environments on traditional servers.
After Hurricane Ike ravaged Texas in 2008, the IT team at Woodforest National Bank, based in The Woodlands, decided it urgently needed to build a virtualized disaster recovery center. Operating under a "leave or be destroyed" reality, the bank, which has 723 branches in 17 states, clearly needed a second data center to cope with future hurricane seasons. Because Woodforest must provide service 24/7, keeping it running is challenging, and it requires an intelligent, virtualized system to support its current business needs.
"Before Hurricane Ike, we had never tried moving all our systems to a second data center," said Richard Ferrara, Chief Technology Officer and Senior Vice President of the bank. "So when Hurricane Ike hit, we thought it was the best time to perform the migration."
Rather than rebuild on its existing servers, IT chose a full server refresh. "New technology drives bank innovation and provides a competitive edge for the business," said Ferrara.
A virtualized environment must be flexible, allowing for upgrades or downgrades in performance as needed. Choosing the right processor platform is essential to achieving this flexibility.
Charles King, president and principal analyst of Pund-IT, said that combining virtualization technology with the right processor can support the varied needs of mid-sized enterprises well. "You don't have to issue an RFP for more servers; you can simply spin up a new virtual machine," he said. However, he also warned against virtualizing on unsuitable platforms, because processors without virtualization intelligence can lead to serious faults such as memory leaks. "This can severely impact application performance, thereby affecting the entire business operation," he said.
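The flexibility King describes, adding or trimming capacity without a procurement cycle, can be sketched in a few lines, assuming a KVM/libvirt host and the libvirt Python bindings. The connection URI and the domain name "web-tier" are placeholders, and live changes depend on the guest's configured maximums allowing them.

```python
# A minimal sketch of scaling a guest up or down in place instead of
# buying a new physical server, using the libvirt Python bindings.
# The URI and the domain name "web-tier" are hypothetical examples.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web-tier")

# Scale up: give the running guest 4 vCPUs and 8 GiB of RAM
# (libvirt takes memory in KiB).
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)
dom.setMemoryFlags(8 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# Scale back down once the peak has passed.
dom.setVcpusFlags(2, libvirt.VIR_DOMAIN_AFFECT_LIVE)
dom.setMemoryFlags(4 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn.close()
```

The same adjustment on a dedicated physical server would mean ordering hardware, racking it, and migrating the application by hand.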
Golden said that mid-sized enterprises have long realized that different processors affect virtualization differently. "They now understand that virtualization requires large amounts of memory and processor performance. The more of both a server has, the more work it can handle. Significantly increasing the capacity of the data center, by comparison, can cost $50 million to $100 million."
What CIOs care about most is the overall plan for the data center. They understand that traditional servers fall far short of what they hope to gain from virtualization. For example, a single physical server can host only a limited number of virtual machines, which caps the growth of the data center. These servers also consume more energy while doing less work, driving up demand for power and cooling systems.
They need servers that adjust their power consumption automatically. Traditional servers cannot activate processors according to workload, nor can they put processors to sleep during quiet periods to save energy and cost. They also cannot offload work handled by virtual machines and operating systems, which consumes CPU cycles and degrades the user experience.
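The kind of workload-aware power management described here can be illustrated with a small sketch. Assuming a Linux host that exposes the cpufreq interface through sysfs (and root privileges to write to it), each core's frequency governor can be inspected and switched between "performance" and "powersave"; the script below is illustrative, not a complete power-management policy.

```python
# A minimal sketch of per-core power management on Linux: read and switch
# each core's cpufreq governor through sysfs. Assumes the cpufreq driver
# is loaded; writing a governor requires root.
import glob

GOVERNOR_PATHS = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"

def current_governors() -> dict:
    # Report which frequency policy each core is currently using.
    return {
        path.split("/")[5]: open(path).read().strip()
        for path in glob.glob(GOVERNOR_PATHS)
    }

def set_governor(governor: str) -> None:
    # Apply the same policy to every core the kernel exposes.
    for path in glob.glob(GOVERNOR_PATHS):
        with open(path, "w") as f:
            f.write(governor)

if __name__ == "__main__":
    print(current_governors())    # e.g. {'cpu0': 'performance', ...}
    set_governor("powersave")     # let lightly loaded cores drop to low-power states
```

Older platforms without this kind of per-core control run every core at full tilt regardless of load, which is exactly the waste the CIOs above are trying to eliminate.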
Finally, traditional servers cannot collaborate with each other, making scaling difficult. "The data center should be an open platform, allowing free addition of capacity when needed," said Mike Wolfe, senior vice president and CIO of AMD.
Restrictive platforms hurt mid-market data centers that need flexibility and economy. Not only do they limit options, they also require additional skills to maintain; as the data center grows, they demand more manpower and can mean more downtime. Such systems rarely offer a price advantage either. "If you lock into a specific supplier, your bargaining room becomes even more limited," said Wolfe.
For growing mid-sized enterprises, these barriers are extremely harmful.