Ready to Virtualize? Here are some important considerations…

When a business makes the decision to virtualize, a whole new world opens up. Virtualization improves performance, disaster recovery, uptime, server management, infrastructure management; the list goes on.

However, you need to make sure you virtualize properly!

Businesses often make ill-informed decisions based either on budget or on misinformation from IT professionals. Before you make the jump to a virtualized environment, I want to talk briefly about a few key points you should keep in mind when designing your solution.

1)      Performance

In virtualized environments, there are two resources that usually bog down the infrastructure: storage and memory (RAM).

Storage:

I’m sure you’re all familiar with the “thinking” light on computers. It’s actually an LED that shows hard drive activity (both reads and writes). You know how much it flashes on a single computer while you work; now imagine 10-20 virtualized servers using the same storage system, and that LED would be on solid. Enterprise (and server-grade) storage is designed for higher throughput and speed, but keep in mind that even enterprise storage normally assumes a single system or server is accessing it and providing resources to the network. This is why you really need to plan out and design your storage system.

Any storage system for virtualization should be designed with virtualization in mind. With numerous virtual machines accessing the storage system (or SAN), the SAN not only has to provide high speed and throughput, it also has to handle a high rate of IOPS (input/output operations per second). The storage has to respond to these I/O requests very quickly, all while providing high data throughput to each of the virtual machines. Right now, for small and medium-sized businesses, we recommend the HP MSA2024 SAN, which was designed for virtualization, delivers extremely high IOPS and throughput, and is extremely reliable thanks to its dual controllers. The dual controllers allow multiple links from the SAN to the physical servers, providing higher throughput as well as redundancy in case a single link goes down.
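To put some rough numbers on this, here is a minimal back-of-the-envelope sizing sketch in Python. The VM profiles, per-VM IOPS figures, and the RAID 5 write penalty are illustrative assumptions, not measurements from a real environment; profile your actual workloads before sizing a SAN.

# Rough SAN sizing sketch. All numbers are hypothetical examples;
# measure your own workloads before buying hardware.

vm_profiles = [
    # (description, VM count, avg IOPS per VM, write ratio)
    ("utility / domain controllers", 4,  30, 0.30),
    ("file and print servers",       3,  75, 0.40),
    ("database / app servers",       3, 200, 0.60),
]

RAID_WRITE_PENALTY = 4  # RAID 5; use 6 for RAID 6, 2 for RAID 10

front_end = 0   # IOPS the VMs demand from the SAN
back_end = 0.0  # IOPS the disks must deliver once the RAID penalty applies
for name, count, iops, write_ratio in vm_profiles:
    total = count * iops
    writes = total * write_ratio
    reads = total - writes
    front_end += total
    back_end += reads + writes * RAID_WRITE_PENALTY

print(f"Front-end IOPS demand: {front_end}")
print(f"Back-end disk IOPS after RAID penalty: {back_end:.0f}")

Even these modest made-up numbers work out to thousands of back-end IOPS, which is exactly the kind of sustained load a consumer-grade storage device was never designed for.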

There are a lot of inexpensive NAS/SAN devices out there advertised as VMware ready, and while they are technically compatible, you will experience HORRIBLE issues in a production business environment with numerous VMs. I always say these devices should only be used in lab, testing, or hobby environments. You should expect your SAN (including drives) to cost more than one or two servers.

Memory:

I can’t stress enough how important it is to load your physical virtualization host servers up with as much memory as possible. Thankfully, in the last few years, RAM for the latest generation of HP servers has become EXTREMELY affordable.

Don’t plan your solution with ONLY the amount of RAM you’ll need for the virtual machines running on a single host. In a 2-3 physical host environment, add up all the RAM all your virtual machines will use (across all hosts), multiply that total by 1.5, and the result is the amount of RAM each physical host should have. Keep in mind that if two of your physical hosts die, you’ll want to move all the VMs from the failed hosts onto the healthy host that is still running.
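Here is that rule as a quick worked example in Python. The VM names and memory sizes are made up for illustration:

# Per-host RAM sizing using the 1.5x rule described above.
vm_ram_gb = {
    "dc01": 8, "dc02": 8,       # domain controllers
    "sql01": 32,                # database server
    "file01": 16, "app01": 24,  # file and application servers
}

total_vm_ram = sum(vm_ram_gb.values())  # RAM across ALL hosts
per_host_ram = total_vm_ram * 1.5       # headroom for failover

print(f"Total VM RAM: {total_vm_ram} GB")               # 88 GB
print(f"RAM per physical host: {per_host_ram:.0f} GB")  # 132 GB

In practice you would round that 132 GB up to the next standard DIMM configuration the server supports.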

Further comments:

For most small business virtualization environments, I recommend loading each physical host server with two 8 or 10 core Xeon processors. While this isn’t a hard rule, I like to make sure each virtual CPU corresponds to a physical core, and I try not to have them shared (although it’s fine if you do).
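A quick way to sanity-check that mapping is to compare the vCPUs you’ve assigned against the physical cores in each host. The host and VM figures below are hypothetical:

# Hypothetical host: 2 sockets x 10 cores = 20 physical cores.
physical_cores = 2 * 10
vcpus_assigned = sum([2, 2, 4, 4, 8])  # vCPUs of the VMs on this host

ratio = vcpus_assigned / physical_cores
print(f"vCPU:pCore ratio = {ratio:.2f}")
if ratio <= 1.0:
    print("Every vCPU can map to its own physical core.")
else:
    print("vCPUs are oversubscribed; usually fine, but watch CPU ready times.")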

 

2)      Disaster Recovery

This is one of the most overlooked topics in virtualization. A lot of professionals actually believe that snapshots are backups; they are NOT. Snapshots are used for testing, for rolling back after applying patches, and they play a role in some backup processes, but they are not real backups. Snapshots are great, but they aren’t what you’re looking for. I actually prefer not to leave virtual machines running in a snapshot state, for performance reasons.

It’s critical to build a disaster recovery solution that actually gets your data (or backup media) off-site. This can be achieved with a backup system that pushes backups over fiber to a remote location, or with software like Symantec Backup Exec, which lets you back up to removable disk, or to tape storage for larger environments. You want multiple point-in-time backups so that you can restore a system or file from, say, three months ago. You don’t want to be stuck with a single backup.

A good approach is to use Backup Exec to back the virtual machines up to disk, then to tape. This quickly backs your virtual machines up, then moves/replicates the backup to tapes, which in turn can be taken off-site. I know tape backup technology has been around for a while, but it’s anything but old. The latest tape technology can store several times more data than removable disks can, offers superior read/write speeds versus hard drives, and the tapes are easy to transport.
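One common way to get those multiple point-in-time restore points is a grandfather-father-son (GFS) rotation. The sketch below is a simplified illustration with made-up retention windows (daily for two weeks, weekly for about two months, monthly for about a year); it is not a Backup Exec configuration:

# Simplified GFS retention sketch; the windows are illustrative only.
from datetime import date, timedelta

def restore_points(today, daily=14, weekly=8, monthly=12):
    """All the dates you could restore from under this rotation."""
    points = set()
    for d in range(daily):       # daily backups, kept two weeks
        points.add(today - timedelta(days=d))
    for w in range(weekly):      # weekly fulls, kept ~two months
        points.add(today - timedelta(weeks=w))
    for m in range(monthly):     # monthly fulls, kept ~one year
        points.add(today - timedelta(days=30 * m))
    return sorted(points)

pts = restore_points(date.today())
print(f"{len(pts)} restore points, oldest: {pts[0]}")

The monthly fulls are what let you reach back three months for a file, without having to keep a daily copy of everything.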

 

In conclusion, keeping these key points in mind will help you implement the best virtualized solution, get the best bang for your buck, and avoid some of the problems that many first-time virtualizing companies run into today.

 

Call us for more information and to find out how we can help you get started with virtualization.


Occupation: President of Digitally Accurate Inc.
Bio: 13 Year IT Service and Solution Provider, Managed Services Provider, Tech Blogger, Entrepreneur

Personal Technology Blog: https://www.stephenwagner.com

