10-30-2019 04:03 PM
We have a RS140 that has been experiencing issues with the 2 mirrored (RAID 1) drives connected to the onboard RAID 100 controller. We originally thought the drives were the problem and we have replaced both multiple times. We now think the RAID 100 onboard controller is the problem. We have purchased a RAID 500 controller that we wish to install as a replacement. Can I just install the new controller and simply move the drives over to it or is it more involved? What is the proper procedure for handling this transition? Right now the machine is booting off one of the drives and the second drive is not being used at all. The controller sees both drives but I can't create a mirror array (I get an error). Screenshots are attached.
10-31-2019 11:03 AM
You cannot just move the drives from the C100 to the RAID 500; you will need to recreate the RAID volume and reinstall all the software (including the OS, if that volume was hosting the OS).
Your post does not detail what Windows version is running on the system or what drivers are loaded.
If this is Windows Server, I would suggest upgrading the RSTe driver first: v5.5.
Among the changes in the new driver is improved handling of drive errors: errors are now corrected instead of the drive being marked bad. This could improve the current behavior.
If the current driver is version 10.xxx or higher, that is the RST driver (without the "e" for enterprise), which has fewer features than RSTe. Upgrade to RSTe.
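If it helps, the currently loaded storage driver version can be checked from an elevated Windows command prompt. This is a generic sketch; the "iastor" filter assumes the usual Intel RST/RSTe driver file naming, which may differ on your system:

```shell
# List signed drivers in verbose list form and filter for the Intel
# storage (iaStor*) driver family. Adjust the filter if your driver
# file is named differently.
driverquery /v /fo list | findstr /i "iastor"
```

The version reported should tell you whether you are on the 5.x RSTe branch or a 10.xxx RST branch.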
10-31-2019 11:58 AM
11-01-2019 06:03 AM
These are RAID solutions from different vendors; they are not compatible, and you cannot migrate the RAID volume from one controller to the other.
The preferred (and official) recommendation for this operation is to back up the data, move the drives, recreate and initialize the RAID volume on the new RAID controller, reinstall the OS and software, and restore the data.
Take a backup; this is critical before any operation that could put data at risk. If this were my system and I were forced to move to the RAID 500 (which would otherwise mean reinstalling everything), knowing I had a good backup of the data I need, I would try the following:
The first step should be to install the RAID 500 card while the system is still booting from RSTe; this will allow you to install the driver for the RAID 500 card.
Once the RAID 500 driver is properly installed, install a utility to manage the RAID 500: MSM, LSA, or StorCLI (this last one is command line only).
Connect one drive to the RAID 500 and set the other one aside, disconnected from the server (as an additional backup).
Boot to the BIOS RAID configuration utility and create a RAID 0 volume with just that one drive, and DO NOT initialize it (if you initialize, all data is lost). When creating the RAID volume, there will be a question about initializing the drive; pick NO (very important).
Once the RAID 0 is created, try to boot to the OS . . . as the RAID 500 uses the back of the drive for its RAID metadata (the Windows boot partitions at the front should be OK) and the RAID 500 driver is already installed, the system may boot . . . may (this is not a supported, tested process).
As the RAID 500 uses part of the drive capacity for the RAID configuration information, the actual size left for data may differ from what is defined in Windows, and this could be a problem if Windows were expecting a higher capacity . . . RSTe also uses some space, but I am not familiar with how or how much.
If the system boots and Windows does not run into problems (do some testing before moving on), the other drive can now be added: from one of the management utilities, execute a RAID-level migration from a single-drive RAID 0 to a two-drive RAID 1. At that point you should be done. This last part can be done from the OS; no need to reboot again.
If booting with one drive on RAID 0 did not work, or there were OS stability problems, you can reconnect the drive you set aside to the RSTe controller, boot with only that drive, then add the other one and rebuild the RAID 1 on RSTe.
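In case it is useful, the single-drive RAID 0 creation and the later RAID 1 migration above can be sketched with StorCLI. The controller ID (/c0) and enclosure:slot IDs (252:0, 252:1) are placeholders; check your actual IDs with the "show" command first. Same caveat as above: this is an unsupported sketch, not a tested procedure.

```shell
# Identify the controller, enclosure, and slot IDs (IDs below are examples)
storcli /c0 show

# Create a single-drive RAID 0 on the drive in enclosure 252, slot 0.
# Do NOT start an initialization afterwards; initializing destroys the data.
storcli /c0 add vd type=raid0 drives=252:0

# After confirming the OS boots and is stable, migrate the volume to a
# two-drive RAID 1 by adding the drive in slot 1 (can be done from the OS).
storcli /c0/v0 start migrate type=raid1 option=add drives=252:1

# Monitor the migration progress
storcli /c0/v0 show migrate
```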
I hope this helps, please let us know . . .
11-05-2019 01:04 PM
Understood. Just to clarify, we have 2 drives currently installed and the controller sees both drives but we aren't able to create a RAID volume. So the server is booting off the 1 drive. Would it technically still be a RAID disk if the volume hasn't been created? Basically I only have 1 drive with the OS & data that we need to keep the server up and running.
11-06-2019 08:17 AM
If there is no RAID, moving to the RAID 500 should be simpler.
Install the RAID 500 controller (without changing the drive) and add the RAID driver from the OS (required to be able to boot later; this may need an additional reboot to verify the driver is properly installed).
Boot to the UEFI/BIOS configuration utility and set the drive to JBOD mode.
Reboot to the OS.
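As a rough StorCLI equivalent of the JBOD step (the BIOS/UEFI utility route above is the straightforward way; the controller and enclosure:slot IDs here are placeholders for your own):

```shell
# Check whether JBOD support is enabled on the controller
storcli /c0 show jbod

# Enable JBOD at the controller level, then flag the individual drive
storcli /c0 set jbod=on
storcli /c0/e252/s0 set jbod
```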
11-06-2019 08:21 AM
When in the process you just mentioned would I move the functioning drive and the drive to be used for the mirror to the 500 card? After all those steps or prior to rebooting to the OS?
11-08-2019 07:21 AM
There is no proven safe path to convert from JBOD to a RAID volume; it may work, but there is a system stability risk.
A drive in JBOD mode is presented ‘raw’ to the OS, which has access to the full capacity and creates its storage layout accordingly. When a drive is in a RAID volume, part of the back of the drive is dedicated to the RAID metadata, so when the drive is converted from JBOD to a single-drive RAID 0, data is written to it and the usable capacity shrinks. This will most likely impact the stability of the OS, because the storage capacity defined in the OS is larger than the capacity now presented by the RAID volume. The beginning of the drive is not altered, so the OS could still boot; usability depends on how the OS handles that mismatch in storage capacity . . . Since this metadata is written to the end of the partition without the OS's knowledge, it could also make going back to JBOD a problem.