SUSE 10 software RAID
This means there are no more objects to create segments or volumes on. Select the entire disk object in the tree and click Next. A configuration screen appears where you can set a few options; in this case we will create a ReiserFS file system. We have now created a file system on the volume object, so we should be able to mount the volume. Once the completion screen appears, the volume is mounted. The only thing left is to make sure that EVMS is started during server startup.
Verify that the checkbox for starting EVMS at boot is selected. When you restart the server, make sure the file or directory you created on the volume is still there; then and only then do you know for sure the configuration is correct. If possible, also check what happens when you remove one of the three disks on which the RAID was created.
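A minimal way to test this is to leave a marker file on the volume before rebooting; the mount point /data used here is an assumption, not a value from the original setup:

    echo "raid test" > /data/raidtest.txt    # create a marker file on the mounted volume
    reboot
    # after the server comes back up:
    mount | grep /data                       # the volume should be mounted again automatically
    cat /data/raidtest.txt                   # the marker file should still be there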
In a complex RAID 10 array, multiple copies of all data blocks are arranged on multiple drives following a striping discipline. Component devices should be the same size.
With the far layout, sequential read throughput scales with the number of drives, rather than with the number of RAID 1 pairs. You can configure a spare for each underlying mirrored array, or configure a spare to serve a spare group that serves all mirrors. When configuring a complex RAID 10 array, you must specify the number of replicas that are required for each data block.
The default number of replicas is two, but the value can be anywhere from two up to the number of devices in the array. You must use at least as many component devices as the number of replicas you specify. However, the number of component devices in a RAID 10 array does not need to be a multiple of the number of replicas of each data block.
The effective storage size is the total size of the component devices divided by the number of replicas. For example, if you specify two replicas for an array created with five component devices, a copy of each block is stored on two different devices. The complex RAID 10 setup supports three different layouts which define how the data blocks are arranged on the disks.
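As a concrete example of that arithmetic (the 1 TB drive size is only an assumption to make the numbers tangible): five 1 TB component devices with two replicas give roughly (5 x 1 TB) / 2 = 2.5 TB of usable space, because every data block occupies space on two different drives.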
The available layouts are near (the default), far, and offset. They have different performance characteristics, so it is important to choose the right layout for your workload. With the near layout, copies of a block of data are striped near each other on different component devices.
That is, multiple copies of one data block are at similar offsets in different devices. Near is the default layout for RAID 10. For example, if you use an odd number of component devices and two copies of data, some copies are one chunk further into the device.
The far layout stripes data over the early part of all drives, then stripes a second copy of the data over the later part of all drives, making sure that all copies of a block are on different drives. The second set of values starts halfway through the component drives.
With a far layout, the read performance of the complex RAID 10 is similar to a RAID 0 over the full number of drives, but write performance is substantially slower than a RAID 0 because there is more seeking of the drive heads.
The far layout is therefore best used for read-intensive operations such as read-only file servers, and is not well suited for write-intensive applications. The offset layout duplicates stripes so that the multiple copies of a given chunk are laid out on consecutive drives and at consecutive offsets. Effectively, each stripe is duplicated and the copies are offset by one device. This should give similar read characteristics to a far layout if a suitably large chunk size is used, but without as much seeking for writes.
The number of replicas and the layout are specified as the Parity Algorithm in YaST, or with the --layout parameter for mdadm. The following values are accepted:

Specify nN for the near layout and replace N with the number of replicas.
Specify fN for the far layout and replace N with the number of replicas.
Specify oN for the offset layout and replace N with the number of replicas.
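For example, a complex RAID 10 with the far layout and two replicas could be created roughly like this; the device names /dev/md0 and /dev/sd[abcd]1 are assumptions and must be adapted to your system:

    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # --layout=n2 would give the near layout, --layout=o2 the offset layout
    cat /proc/mdstat    # watch the initial sync of the array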
Resizing the file system is done on the block device of your RAID array. To extend the file system size to the maximum available size of the device, run the resize command for your file system without specifying a size; when no size is specified, the file system is increased to the full size of the underlying device. Otherwise, replace size with the desired size in bytes; you can also specify units on the value, such as K (kilobytes), M (megabytes), or G (gigabytes).
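A sketch of that step, assuming the ReiserFS file system created earlier on the array device /dev/md0, mounted at /data (the device name and mount point are assumptions):

    umount /data                # resize_reiserfs is safest on an unmounted file system
    resize_reiserfs /dev/md0    # no size given, so the file system grows to fill the device
    mount /dev/md0 /data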
When decreasing the size of the file system on a RAID device, ensure that the new size is still large enough to hold all of the existing data; otherwise, data is lost. Use the appropriate procedure below for decreasing the size of your file system, and ensure that you modify the commands to use the name of your own device. Replace size with an integer value in kilobytes for the desired size (a kilobyte is 1024 bytes); alternatively, you can specify a decrease relative to the current size by prefixing the value with a minus (-) sign. If the file system is not mounted, mount it now.
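A hedged example of the shrink step, again assuming ReiserFS on /dev/md0 mounted at /data and a target size of 40 GB (all of these values are assumptions):

    umount /data                              # shrinking must be done on an unmounted file system
    resize_reiserfs -s 41943040K /dev/md0     # 40 GB expressed as an integer number of kilobytes
    # alternatively, shrink relative to the current size:
    # resize_reiserfs -s -2G /dev/md0
    mount /dev/md0 /data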
After you have resized the file system, the RAID array configuration continues to use the original array size until you force it to reduce the available space. Use the mdadm --grow mode to force the RAID to use a smaller segment size. To do this, you must use the -z option to specify the amount of space in kilobytes to use from each device in the RAID. This size must be a multiple of the chunk size, and it must leave enough space for the RAID superblock to be written to the device.
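As an illustration, assuming the array /dev/md0 and 40 GB of space to use from each component device (both values are assumptions):

    mdadm --grow /dev/md0 -z 41943040    # use only 41943040 KB (40 GB) from each component device
    cat /proc/mdstat                     # verify the new array size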
You can leave the partitions at their current size to allow for the RAID to grow at a future time, or you can reclaim the now unused space. To reclaim the space, you decrease the component partitions one at a time. To allow for metadata, you should specify a slightly larger size than the segment size you set for the RAID in the previous step. Even for RAIDs that can tolerate multiple concurrent disk failures, never remove more than one component partition at a time.
Ensure that you modify the commands to use the names of your own devices. Decrease the size of the partition that you removed in Step 3 to a size that is slightly larger than the segment size you set. Use a disk partitioner such as fdisk, cfdisk, or parted to decrease the size of the partition. If you get a message that the kernel could not re-read the partition table for the RAID, you must reboot the computer after resizing all of its component partitions.
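A rough sketch of one such cycle for a single component, assuming the array /dev/md0 and the component partition /dev/sda1 (both names are assumptions):

    mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1   # take one component out of the array
    # shrink /dev/sda1 with fdisk, cfdisk, or parted to slightly more than the RAID segment size
    mdadm /dev/md0 --add /dev/sda1                       # put the component back into the array
    cat /proc/mdstat                                      # wait for the resync to finish before touching the next disk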
(Optional) Expand the size of the RAID and the file system to use the maximum amount of space in the now smaller component partitions. First expand the size of the RAID to use the maximum amount of space that is now available in the reduced-size component partitions, then expand the size of the file system to use all of the available space in the newly resized RAID.
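A possible sketch of those two steps, using the same assumed names as above and assuming an mdadm version that accepts the max keyword:

    mdadm --grow /dev/md0 -z max    # let the RAID use all of the space in the shrunken partitions
    umount /data
    resize_reiserfs /dev/md0        # grow the file system to fill the resized RAID
    mount /dev/md0 /data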
Warning: Before starting any of the tasks described in this section, ensure that you have a valid backup of all of the data.