Category Archives: Storage

PowerShell Scripting Options for Dell Storage

Back in 2008, Compellent released their first iteration of the PowerShell Command Set.  They were the first storage vendor on the scene to provide PowerShell automation capabilities with the Series 40 array.

Fast forward a bunch of years. The PowerShell Command Set has grown from 50 cmdlets to over 100, and added the capability to work with more advanced features like replication and Live Volume.

Seattle Tech Field Day

I’m still getting caught up on events, so I thought I’d share with you a little about our participation in the 2010 Tech Field Day held in Seattle, WA. 

Back in the middle of July, Compellent had an opportunity to participate in Gestalt IT Tech Field Day.  As it says on their website, “This unique event brings together innovative IT product vendors and independent thought leaders, allowing them to get to know one another. It is a forum for engagement, education, hands-on experience, and feedback.”

Compellent was thrilled to be part of the experience as one of five sponsors for this event.  Others included F5, NEC, Veeam, and Nimble Storage, who used Tech Field Day as their official launch.

The event centered on these vendors, each of whom had the opportunity to present their technologies to an esteemed panel of delegates. The delegates, a mix of technologists and bloggers, came from around the world.

The evening of July 15th included a reception and dinner at the Boeing Museum of Flight.  This was about the coolest thing I’ve seen.  I have a love for aviation, but to see where some of the first aircraft were built was simply amazing. 

First Flying Machine

The welcome reception was held in the “Red Barn,” the original Boeing airplane factory. The smell of the wooden interior makes you feel like you’ve stepped back to that era, and seeing the woodshop tools that were used to create the different components of the flying machine was pretty cool.

Red Barn - The Original Boeing Airplane Factory

This was an opportunity for us to meet the other vendors in attendance, but more importantly to meet all of the delegates and learn more about them and what they do. Liem Nguyen, the director of Corporate Communications for Compellent, helped coordinate Compellent’s sponsorship and involvement, and is pictured below with Kirby Wadsworth, a marketing exec with F5 Networks. You can’t tell from this picture, but Kirby was rockin’ some pretty sweet yellow slacks that night.

Liem Nguyen (Compellent) and Kirby Wadsworth (F5 Networks)

Most of the delegates were involved in IT in one form or another, but this Tech Field Day focused specifically on virtualization. So our discussion centered on our virtualized storage solution, along with its integration points with Hyper-V and VMware.

Bob Fine, Director of Product Marketing, Scott DesBles, Director of Technical Solutions, and I tag-teamed to present the Compellent solution. Bob and Scott provided the Compellent overview and a roadmap discussion that seemed to keep the panel engaged. We also discussed Live Volume while demonstrating the Compellent Storage Center and its ease of use, along with Enterprise Manager, the “single pane of glass” for managing multiple Storage Centers in your environment and the interface behind the world-famous “6 clicks to replicate a volume”.

Check out Liem’s blog post about Tech Field Day with some exclusive interview footage of the delegates and shots from the Museum of Flight.

We had a blast meeting with the delegates and other vendors in Seattle.  We’d love the opportunity to do this again and continue to share the Compellent story.

Cargo plane on approach, Mount Rainier in background

Did I mention the view in Seattle? For this last picture, I was amazed at how close the parking lot was to the runway at Boeing Field. We were able to get some great photos and videos of the experience. Here’s a nice shot of a cargo aircraft on approach with Mount Rainier in the distance.

Compellent Disk Management with PowerShell: Windows Server 2008 Disks

I was provisioning some Compellent storage today for a series of tests I am working on that required 62 volumes per server across two different servers. These volumes are multi-pathed, and although the Compellent Storage Center GUI is easy and straightforward to use, completing this process by hand would take a long time and seemed like a perfect fit for automation with the Compellent Storage Center Command Set for Windows PowerShell.

I wrote a script a while back that handles my provisioning for me; in this case, a couple of mount point root volumes followed by data volumes that would be accessed by mount point instead of drive letter. The script is flexible enough to handle different volume counts and whether or not drive letters are used, but the catch was that I had only used it with Windows Server 2003.

I tried to run the script this morning and found a flaw pretty quickly. The volume was created on the Storage Center and mapped properly across the available paths, but when the script tried to initialize the volume in Windows, it came back as “failed to initialize” with VDS error code 80070013, which indicates that the “media is write-protected”. How could that be on a brand-new volume?

Windows Server 2008 changed the way disk management is handled, especially around delivery of the disk to the server. Windows Server 2008 introduced a new policy for SAN disks: this "SAN policy" determines whether a newly discovered disk is brought online or remains offline, and whether it is made read/write or remains read-only. By default, the “Offline All” policy is set, which means all newly discovered disks remain offline and read-only, so a disk mapped to a Windows Server 2008 server via VDS arrives both offline and write-protected. You can change this default by running SAN POLICY=<POLICY NAME> from a DISKPART command prompt.

[Screenshot: changing the default SAN policy in DISKPART]
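
If you would rather script the change than type it interactively, here is a minimal sketch, assuming an elevated PowerShell prompt and that OnlineAll is the policy you want; adjust the policy name to suit your environment.

# Sketch: write a small DISKPART script that sets the default SAN policy, then run it.
# Assumes an elevated prompt; pick the policy name that matches your requirements.
$commands = @"
san policy=OnlineAll
exit
"@
$commands | Out-File -FilePath "$env:TEMP\sanpolicy.txt" -Encoding ASCII
diskpart /s "$env:TEMP\sanpolicy.txt"   # /s tells DISKPART to run the commands in the file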

You can read more here, but in the meantime, the fix from a scripting perspective is quite simple. The disk failed to initialize because the SAN policy delivered it read-only (and offline). We can clear the read-only attribute on the disk and then bring it online so it is usable. Here is a sample of how to use the Command Set to change the read-only attribute and the state of the drive:
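
The sketch below assumes the disk-side work is done with a Set-DiskDevice cmdlet that takes SerialNumber, ReadOnly, and Online parameters; verify the exact cmdlet and parameter names against the Command Set help for your version.

# $scvolume is the object returned by New-SCVolume (see the note below).
# Clear the read-only attribute first, then bring the disk online in a separate call.
Set-DiskDevice -SerialNumber $scvolume.SerialNumber -ReadOnly:$false
Set-DiskDevice -SerialNumber $scvolume.SerialNumber -Online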

$scvolume is a variable that refers to the volume object created when we use New-SCVolume to create a new volume. The serial number is used to identify the disk mapped to the Windows server. It is also important to note that although the “Online” and “ReadOnly” switches come from the same cmdlet, they must be executed separately, as they are in the sample. (Thanks for that important tidbit, Sean!)

Windows Server 2008 Hyper-V Resource Kit Now Available

On June 10, Microsoft Press published the new “Windows Server 2008 Hyper-V Resource Kit” by Robert Larson and Janique Carbone.

For the past year, Shane Burton, a fellow Microsoft Product Specialist here at Compellent, and I have been working with Robert and Janique on this project, providing content, particularly “Notes from the Field” entries for the book, while our Compellent Marketing Alliance partner, John Porterfield, kept us in line.

Compellent is a project sponsor at the Microsoft Partner Solution Center and provided Robert and Janique access to a Compellent Storage Center for testing storage-related scenarios that are included in the book. Compellent users will recognize a lot of the screenshots which were taken directly from the Storage Center Manager.

Shane and I are proud to be contributing authors on this project. We hope the Windows Server 2008 Hyper-V Resource Kit will prove to be an invaluable reference for administrators and IT pros who are responsible for the architecture, design, implementation, and ongoing maintenance of a Hyper-V environment.

The book is now available at Amazon and Barnes & Noble.


How Does Exchange 2010 Impact Storage?

Last week I had the opportunity to set up Exchange 2010, which is currently in beta. Microsoft had a great story about the improvements in Exchange 2007, particularly around storage and IO. Although I was not a big fan of role-based implementations, this type of setup has allowed for great scalability and has also made some components of Exchange viable candidates for virtualization.

Exchange 2010 uses some cool new technologies like PowerShell v2 and Windows Remote Management v2, both of which are still in CTP (Community Technology Preview).

Microsoft has improved the performance of Exchange again in Exchange 2010. When Exchange 2007 was released, Microsoft boasted a 70% decrease in IO. For example, an Exchange 2003 heavy mailbox profile used 1 IOPS per mailbox, while an Exchange 2007 heavy mailbox profile uses only 0.32 IOPS per mailbox.

In Exchange 2010, you can expect up to a 50% reduction in disk IO from Exchange 2007 levels.  This means that more disks meet the minimum performance required to run Exchange, driving down storage costs.  In addition, IO patterns have been optimized and are not “bursty” like they have been in previous versions.
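
To put rough numbers on that, a hypothetical 5,000-seat deployment of heavy-profile mailboxes would need on the order of 5,000 IOPS on Exchange 2003 (1 IOPS per mailbox), about 1,600 IOPS on Exchange 2007 (0.32 IOPS per mailbox), and roughly 800 IOPS on Exchange 2010 if the additional 50% reduction holds.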

Exchange 2010 can maintain up to 16 replicated copies of each mailbox database, and automatic page patching takes advantage of those copies by using them as the source for repairs in the event of page corruption or other minor database glitches. Sounds pretty cool!

The schema has been revamped and message/header content is now stored in a single table.  In addition, Single Instance Storage is out, but automatic attachment compression is in. 

One of the bigger changes in Exchange 2007 was the increase in the database page size from 4K to 8K. In Exchange 2010, the page size changes again to 32K, allowing for larger block IO. This change is particularly helpful in keeping chunks of data like attachments together instead of having them scattered all about.

Exchange 2010 is expected to be released in late 2009, but the beta is available for download at http://www.microsoft.com/exchange/2010/en/us/try-it.aspx.

Storage Center Command Set Makes Automation a Snap!

This week we announced the availability of the Storage Center Command Set for Windows PowerShell as a free download for Compellent customers.

PowerShell is the powerful new scripting interface from Microsoft, with support for Windows Server 2008 (including Hyper-V), Windows Server 2003, Windows XP, and Windows Vista. We’ve integrated PowerShell automation with our Command Set scripting shell. From Command Set, IT pros can automate administration tasks on the Windows platforms for their Compellent Storage Center.

We’ve exposed over 60 cmdlets that you can use to manage different objects within Storage Center, including controllers, servers, volumes, and alerts, in addition to features like Copy-Mirror-Migrate (CMM). The Command Set installation also includes a few sample scripts to get you started in your scripting endeavors.
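
As a quick taste, here is a minimal sketch that loads the snap-in, connects to a Storage Center, and lists its volumes; the snap-in name, host name, and credentials below are placeholders, so verify them (and the cmdlet names) with Get-Command after installation.

# Minimal sketch: load the Command Set snap-in, connect, and list volumes.
# The snap-in name and connection details are placeholders for your environment.
Add-PSSnapin Compellent.StorageCenter.PSSnapin
$conn = Get-SCConnection -HostName "storagecenter.example.com" -User "Admin" -Password "password"
Get-SCVolume -Connection $conn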

What types of scenarios is the Storage Center Command Set useful for?

Exchange Volume Provisioning

With Exchange Server 2007, you can have up to 50 storage groups. In a scenario where you need to deploy 50 storage groups with one database each, you’ll probably deploy an individual volume for each database and another for each storage group’s transaction logs, totaling 100 volumes.

Although the Storage Center web management tool is very easy to use, it can be time consuming to complete the process of creating the volume, mapping it to the server, and then assigning a drive letter or mountpoint server-side for the volume. With the Command Set, you can automate this process from start to finish and complete your provisioning process in minutes instead of hours or days.

To augment this process, you can leverage the Exchange 2007 cmdlets to create your storage groups and databases all in the same pass after the volumes are created.
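
A stripped-down sketch of that combined pass follows; it assumes a $conn connection object from Get-SCConnection, simplifies the New-SCVolume parameters, uses placeholder server names and paths, and omits the server-side mapping, formatting, and mount-point steps.

# Sketch: create database and log volumes on the Storage Center, then the matching
# Exchange 2007 storage groups and databases in the same pass.
1..50 | ForEach-Object {
    $name = "SG{0:D2}" -f $_
    New-SCVolume -Name "$name-DB" -Size 200g -Connection $conn    # database volume (size format per Command Set help)
    New-SCVolume -Name "$name-Logs" -Size 50g -Connection $conn   # transaction log volume
    New-StorageGroup -Server "EXMBX01" -Name $name -LogFolderPath "L:\$name"
    New-MailboxDatabase -StorageGroup "EXMBX01\$name" -Name "$name-DB" -EdbFilePath "D:\$name\$name-DB.edb"
    Mount-Database -Identity "EXMBX01\$name\$name-DB"
}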

Windows Failover Cluster Deployment

Taking the same principles from above, you can map the same Storage Center volume to multiple servers at the same time, including multiple paths for scenarios that require Multipath I/O (MPIO). You can also leverage CLUSTER.EXE (part of MSCS) to automate the creation of your cluster and the individual cluster resources, including details like dependencies.
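
Here is a hedged sketch of the CLUSTER.EXE piece; the group and resource names are placeholders, associating the physical disk resource with the actual disk is omitted, and the exact switch spellings are worth confirming with cluster /? on your build.

# Sketch: create a cluster group, add a physical disk resource to it, bring it online,
# and wire up a dependency once the dependent resource exists. Names are placeholders.
cluster group "Exchange SG01" /create
cluster resource "SG01 Disk" /create /group:"Exchange SG01" /type:"Physical Disk"
cluster resource "SG01 Disk" /online
cluster resource "SG01 Database" /adddep:"SG01 Disk"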

Virtual Machine Deployment

Using a combination of the Storage Center Command Set and the cmdlets available for Hyper-V, you can automate the process of provisioning storage as well as deploying virtual machines in literally minutes!
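
As an illustration only, here is a sketch that pairs a Command Set volume with the New-VM and Start-VM cmdlets from the Hyper-V PowerShell module that shipped after this post was written; at the time, you would drive Hyper-V through its WMI provider or the PowerShell Management Library for Hyper-V instead. It assumes a $conn connection and that the new volume has already been mapped, formatted, and mounted at V:\.

# Illustrative sketch only: provision a volume, then create and start a VM on it.
# Assumes the volume ends up mounted at V:\ and that the Hyper-V module is available.
New-SCVolume -Name "VMStore01" -Size 500g -Connection $conn
New-VM -Name "TestVM01" -MemoryStartupBytes 2GB -NewVHDPath "V:\TestVM01\TestVM01.vhdx" -NewVHDSizeBytes 60GB
Start-VM -Name "TestVM01"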

Use Replays for Backups

Instead of backing up each individual volume, how about using the replays for each volume as the source for a backup? Included with the Storage Center Command Set is a sample script called “Push2Tape”. This script takes the name of a volume on the Storage Center, retrieves the list of available replays for that volume, creates a view of the latest replay, and maps it to a tape or media server as either a drive letter or mountpoint. With the replays mounted on the tape or media server, that server becomes the single point for all your backups. This moves the overhead typically associated with running backups off the production server and onto the tape or media server.
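
The flow inside that kind of script looks roughly like the sketch below; the replay-related cmdlet and property names here are illustrative stand-ins, so check the Push2Tape sample and the Command Set help for the real ones.

# Illustrative sketch of the replay-to-backup flow; Get-SCReplay and New-SCReplayView
# are stand-in names, so lift the real cmdlets from the Push2Tape sample script.
$vol    = Get-SCVolume -Name "FileData01" -Connection $conn
$latest = Get-SCReplay -SCVolume $vol -Connection $conn | Sort-Object FreezeTime | Select-Object -Last 1
$view   = New-SCReplayView -SCReplay $latest -Name "FileData01-backup" -Connection $conn
# Map $view to the tape or media server and mount it as a drive letter or mount point,
# then point the backup job at that mount instead of the production volume.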

The possibilities are endless. Log in to Knowledge Center today and download your copy of the Storage Center Command Set for Windows PowerShell. You can also register to join our secure online group and share best practices with other Compellent PowerShell users.