
NAME

na_vol - commands for managing volumes, displaying volume status, and copying volumes

SYNOPSIS

vol command argument ...

DESCRIPTION

The vol family of commands manages volumes. A volume is a logical unit of storage, containing a file system image and associated administrative options such as snapshot schedules. The disk space that a volume occupies (as well as the characteristics of the RAID protection it receives) is provided by an aggregate (see na_aggr(1)).

Prior to Data ONTAP 7.0, volumes and aggregates were fused into a single administrative unit, where each aggregate (RAID-level collection of disks) contained exactly one volume (logical, user-visible file system). The vol family of commands managed both the lower-level disk storage aspects and the higher-level file system aspects of these tightly-bound volume/aggregate pairs. Such traditional volumes still exist for backwards compatibility.

Administrators can now decouple the management of logical file systems (volumes) from their underlying physical storage (aggregates). In particular, this new class of flexible volumes provides much greater freedom. Aggregates can be created, destroyed, and managed independently (via the aggr command family). When an aggregate is created, it is a completely clean slate, free of any independent logical file systems (flexible volumes). Aggregates can contain multiple, completely independent flexible volumes. A filer's flexible volumes may all be placed in a single aggregate if desired, or they may be spread out across any of the filer's aggregates. Flexible volumes may be snapshotted, snap-restored, copied, and SnapMirrored independently from all other flexible volumes contained in the same aggregate.

The maximum number of volumes (traditional or flexible, in any combination) on a filer is generally determined by the amount of main memory. All filers with up to 1 GB of main memory can support a maximum of 200 volumes. The FAS2040 is also limited to a maximum of 200 volumes. All other filers with more than 1 GB of main memory can support up to 500 volumes.

Aggregates that contain one or more flexible volumes cannot be restricted or offlined. In order to restrict or offline an aggregate, it is necessary to first destroy all of its contained flexible volumes. This guarantees that flexible volumes cannot disappear in unexpected and unclean ways, without having their system state properly and completely cleaned up. This also makes sure that any and all protocols that are being used to access the data in the flexible volumes can perform clean shutdowns. Aggregates that are embedded in traditional volumes can never contain flexible volumes, so they do not operate under this limitation.

Since flexible volumes are independent entities from their containing aggregates, their size may be both increased and decreased. Flexible volumes may be as small as 20 MB. The maximum size for a flexible volume depends on the filer model and configuration, but is never over 16 TB.

Clone volumes can be quickly and efficiently created. A clone volume is in effect a writable snapshot of a flexible volume. Initially, the clone and its parent share the same storage. More storage space is consumed only as one volume or the other changes. Clones may be split from their parents, promoting them to fully-independent flexible volumes that no longer share any blocks. A clone is always created in the same aggregate as its parent. Clones of clones may be created.

FlexCache volumes can be quickly created using the vol command.
FlexCache volumes are housed on the local filer, referred to as the caching filer, and are cached copies of separate volumes that reside on a different filer, referred to as the origin filer. Clients access the FlexCache volume as they would access any other volume exported over NFS. FlexCache must be licensed on the caching filer but is not required on the origin filer. On the origin filer, the option flexcache.enable must be set to "on" and the option flexcache.access must be appropriately set. The current version of FlexCache only supports client access via NFSv2 and NFSv3.

The vol command family is compatible in usage with earlier releases and can manage both traditional and flexible volumes. Some new vol commands in this release apply only to flexible volumes. The new aggr command family provides control over RAID-level storage. The underlying aggregates of flexible volumes can only be managed through that command family. The vol commands can create new volumes, destroy existing ones, change volume status, increase the size of a volume (or decrease the size if it is a flexible volume), apply options to a volume, copy one volume to another, display status, and create and manage clones of flexible volumes.

Each volume has a name, which can contain letters, numbers, and the underscore character (_); the first character must be a letter or underscore.

A volume may be online, restricted, iron_restricted, or offline. When a volume is restricted, certain operations are allowed (such as vol copy and parity reconstruction), but data access is not allowed. When a volume is iron_restricted, wafliron is running in optional commit mode on the volume and data access is not allowed.

Volumes can be in combinations of the following states:

active_redirect
The flexible volume is in an aggregate on which aggregate reallocation or file reallocation with the -p option has started but has not completed. Read performance may be degraded until reallocation is successfully completed.

copying
The volume is currently the target of active vol copy or snapmirror operations.

degraded
The volume's containing aggregate contains at least one degraded RAID group that is not being reconstructed.

flex
The volume is a flexible volume contained by an aggregate and may be grown or shrunk in 4K increments.

foreign
The disks that the volume's containing aggregate contains were moved to the current filer from another filer.

growing
Disks are in the process of being added to the volume's containing aggregate.

initializing
The volume or its containing aggregate is in the process of being initialized.

invalid
The volume does not contain a valid file system. This typically happens only after an aborted vol copy operation.

ironing
A WAFL consistency check is being performed on the volume's containing aggregate.

mirror degraded
The volume's containing aggregate is a mirrored aggregate, and one of its plexes is offline or resyncing.

mirrored
The volume's containing aggregate is mirrored and all of its RAID groups are functional.

needs check
A WAFL consistency check needs to be performed on the volume's containing aggregate.

out-of-date
The volume's containing aggregate is mirrored and needs to be resynchronized.

partial
At least one disk was found for the volume's containing aggregate, but two or more disks are missing.

raid0
The volume's containing aggregate consists of RAID-0 (no parity) RAID groups (V-Series and NetCache only).

raid4
The volume's containing aggregate consists of RAID-4 RAID groups.

raid_dp
The volume's containing aggregate consists of RAID-DP (Double Parity) RAID groups.

reconstruct
At least one RAID group in the volume's containing aggregate is being reconstructed.

redirect
The flexible volume is in an aggregate on which aggregate reallocation or file reallocation with the -p option has been started.

resyncing
One of the plexes of the volume's containing mirrored aggregate is being resynchronized.

snapmirrored
The volume is a snapmirrored replica of another volume.

sv-restoring
Restore-on-Demand is currently in progress on this volume. The volume is accessible, even though all of the blocks in the volume may not have been restored yet. Use the snapvault status command to view the restore progress.

trad
The volume is what is referred to as a traditional volume. It is fused to an aggregate, and no other volumes may be contained by this volume's containing aggregate. This type is exactly equivalent to the volumes that existed before Data ONTAP 7.0.

unrecoverable
The volume is a flexible volume that has been marked unrecoverable. Please contact Customer Support if a volume appears in this state.

verifying
A RAID mirror verification operation is currently being run on the volume's containing aggregate.

wafl inconsistent
The volume or its containing aggregate has been marked corrupted. Please contact Customer Support if a volume appears in this state.

flexcache
The volume is a FlexCache volume.

connecting
The volume is a FlexCache volume, and the network connection between this volume and the origin volume is not yet established.

USAGE

The following commands are available in the vol suite:
  add          create         offline     scrub
  autosize     destroy        online      size
  clone        lang           options     split
  container    media_scrub    rename      status
  copy         mirror         restrict    verify
vol add volname
[ -f ]
[ -n ]
[ -g raidgroup ]
{ ndisks[@size]
|
-d disk1 [ disk2 ... ] [ -d diskn [ diskn+1 ... ] ] }
Adds the specified set of disks to the aggregate portion of the traditional volume named volname, and grows the user-visible file system portion of the traditional volume by that same amount of storage. See the na_aggr(1) man page for a description of the various arguments. The vol add command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations on their containing aggregates be handled via the new aggr command suite. In this specific case, aggr add should be used.

vol autosize volname [ -m size [k|m|g|t] ] [ -i size [k|m|g|t] ] [ on | off | reset ]
Volume autosize allows a flexible volume to automatically grow in size within an aggregate. This is useful when a volume is about to run out of available space but there is space available in the containing aggregate for the volume to grow. This feature works together with snap autodelete to automatically reclaim space when a volume is about to get full. The volume option try_first controls the order in which these two reclaim policies are used. By default, autosize is disabled. The on sub-command enables autosize on a volume. The reset sub-command resets the volume autosize settings to their defaults. The off sub-command disables autosize. The -m switch specifies the maximum size to which a flexible volume will be allowed to grow. The size of the volume will be increased by the increment size specified with the -i switch. A volume will not automatically grow if the current size of the volume is greater than or equal to the maximum size specified with the -m option.
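
As an illustration, the following sketch enables autosize on a hypothetical flexible volume named flexvol0, allowing it to grow in 1 GB increments up to 200 GB, then disables it, and finally restores the default settings (the volume name and sizes are examples only, not defaults):

    vol autosize flexvol0 -m 200g -i 1g on
    vol autosize flexvol0 off
    vol autosize flexvol0 reset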
vol clone create clone_vol
[ -s none | file | volume ]
-b parent_vol [ parent_snap ]
The vol clone create command creates a flexible volume named clone_vol on the local filer that is a clone of a "backing" flexible volume named parent_vol. A clone is a volume that is a writable snapshot of another volume. Initially, the clone and its parent share the same storage; more storage space is consumed only as one volume or the other changes. If a specific parent_snap within parent_vol is provided, it is chosen as the backing snapshot. Otherwise, the filer will create a new snapshot named clone_parent_<UUID> (using a freshly-generated UUID) in parent_vol for that purpose. The parent_snap is locked in the parent volume, preventing its deletion until the clone is either destroyed or split from the parent using the vol clone split start command. The backing flexible volume parent_vol may itself be a clone, so "clones of clones" are possible. A clone is always created in the same aggregate as its parent_vol. The vol clone create command fails if the chosen parent_vol is currently involved in a vol clone split operation. The vol clone create command also fails if the chosen parent_vol is a traditional volume. Cloning is a new capability that applies exclusively to flexible volumes. By default, the clone volume is given the same storage guarantee as the parent volume; the default may be overridden with the -s switch. See the vol create command for more information on the storage guarantee. A clone volume may not currently be used as a target for vol copy or volume snapmirror. A clone volume can be used as the target for qtree snapmirror.

vol clone split start volname
This command begins separating clone volume volname from its underlying parent. New storage is allocated for the clone volume that is distinct from the parent. This process may take some time and proceeds in the background. Use the vol clone split status command to view the command's progress. Both clone and parent volumes remain available during this process of splitting them apart. Upon completion, the snapshot on which the clone was based will be unlocked in the parent volume. Any snapshots in the clone are removed at the end of processing. Use the vol clone split stop command to stop this process. The vol clone split start command fails if the chosen volname is a traditional volume. Cloning is a new capability that applies exclusively to flexible volumes.

vol clone split status [ volname ]
This command displays the progress in separating clone volumes from their underlying parent volumes. If volname is specified, then the splitting status is provided for that volume. If no volume name appears on the command line, then status for all clone splitting operations that are currently active is provided. The vol clone split status command fails if the chosen volname is a traditional volume. Cloning is a new capability that applies exclusively to flexible volumes.

vol clone split estimate [ volname ]
This command displays an estimate of the free disk space required in the aggregate to split the indicated clone volume from its underlying parent volume. The value reported may differ from the space actually required to perform the split, especially if the clone volume is changing while the split is being performed.

vol clone split stop volname
This command stops the process of separating a clone from its parent volume. All of the blocks that were formerly shared between volname and its backing volume and that have already been split apart by vol clone split start will remain split apart. The vol clone split stop command fails if the chosen volname is a traditional volume. Cloning is a new capability that applies exclusively to flexible volumes.
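
For example, assuming a hypothetical flexible volume flexvol0 and a snapshot in it named snap_nightly, the following sketch creates a clone with no space guarantee, estimates the space needed to split it, starts and monitors the split, and shows how the split could be stopped:

    vol clone create flexvol0_clone -s none -b flexvol0 snap_nightly
    vol clone split estimate flexvol0_clone
    vol clone split start flexvol0_clone
    vol clone split status flexvol0_clone
    vol clone split stop flexvol0_clone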
vol container volname
This command displays the name of the aggregate that contains flexible volume volname. The vol container command fails if asked to operate on a traditional volume, as its tightly-bound aggregate portion cannot be addressed independently.

vol copy abort operation_number | all
This command terminates volume copy operations. The operation_number parameter in the vol copy abort command specifies which operation to terminate. If all is specified, all volume copy operations are terminated.

vol copy start [ -p { inet | inet6 } ] [ -S | -s snapshot ] source destination
Copies all data, including snapshots, from one volume to another. If the -S flag is used, the command copies all snapshots in the source volume to the destination volume. To specify a particular snapshot to copy, use the -s flag followed by the name of the snapshot. If neither the -S nor the -s flag is used in the command, the filer automatically creates a distinctively-named snapshot at the time the vol copy start command is executed and copies only that snapshot to the destination volume.

The -p option is used for selecting the IP connection mode. The value for this argument can be inet or inet6. When the value is inet6, the connection will be established using IPv6 addresses only. If there is no IPv6 address configured for the destination, then the connection will fail. When the value is inet, the connection will be established using IPv4 addresses only. If there is no IPv4 address configured on the destination, then the connection will fail. When this argument is not specified, the connection will be tried using both IPv6 and IPv4 addresses; inet6 mode has higher precedence than inet mode, so if a connection request using inet6 mode fails, the connection will be retried using inet mode. This option is not meaningful when an IP address is specified instead of a hostname. If the IP address format and the connection mode do not match, the operation prints an error message and aborts.

The source and destination volumes must either both be traditional volumes or both be flexible volumes. The vol copy command will abort if an attempt is made to copy between different volume types. The source and destination volumes can be on the same filer or on different filers. If the source or destination volume is on a filer other than the one on which the vol copy start command was entered, specify the volume name in the filer_name:volume_name format.

The filers involved in a volume copy must meet the following requirements for the vol copy start command to be completed successfully:

The source volume must be online and the destination volume must be offline.

If data is copied between two filers, each filer must be defined as a trusted host of the other filer. That is, the filer's name must be in the /etc/hosts.equiv file of the other filer. If one filer is not in the /etc/hosts.equiv file of the other filer, a "Permission denied" error message is displayed to the user.

If data is copied on the same filer, localhost must be included in the filer's /etc/hosts.equiv file. Also, the loopback address must be in the filer's /etc/hosts file. Otherwise, the filer cannot send packets to itself through the loopback address when trying to copy data.
The usable disk space of the destination volume must be greater than or equal to the usable disk space of the source volume. Use the df pathname command to see the amount of usable disk space of a particular volume.

Each vol copy start command generates two volume copy operations: one for reading data from the source volume and one for writing data to the destination volume. Each filer supports up to four simultaneous volume copy operations.

vol copy status [ operation_number ]
Displays the progress of one or all active volume copy operations, if any. The operations are numbered from 0 through 3. If no operation_number is specified, then status for all active vol copy operations is provided.

vol copy throttle [ operation_number ] value
This command controls the performance of the volume copy operation. The value ranges from 10 (full speed) to 1 (one-tenth of full speed). The default value is maintained in the filer's vol.copy.throttle option and is set to 10 (full speed) at the factory. The performance value can be applied to an operation specified by the operation_number parameter. If an operation number is not specified, the command applies to all active volume copy operations.

Use this command to limit the speed of volume copy operations if they are suspected to be causing performance problems on a filer. In particular, the throttle is designed to help limit the volume copy's CPU usage. It cannot be used to fine-tune network bandwidth consumption patterns. The vol copy throttle command only enables the speed of a volume copy operation that is already in progress to be set. To set the default volume copy speed to be used by future volume copy operations, use the options command to set the vol.copy.throttle option.
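
As a sketch of typical usage (the volume and filer names are hypothetical, and the destination volume is assumed to be offline), the following copies a local volume to a volume on the filer toaster2, checks progress, slows operation 0 (assuming that is the number reported by vol copy status) to half speed, and shows how it could be aborted:

    vol copy start vol_src toaster2:vol_dst
    vol copy status
    vol copy throttle 0 5
    vol copy abort 0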
vol create flex_volname
[ -l language_code ]
[ -s none | file | volume ]
aggrname size

vol create trad_volname
[ -l language_code ]
[-f] [-n] [-m]
[-L [compliance | enterprise]]
[-t raidtype ] [-r raidsize ]
{ ndisks[@size]
|
-d disk1 [ disk2 ... ] [ -d diskn [ diskn+1 ... ] ] }
vol create flexcache_volname
[ -l language_code ]
aggrname
[ size [k|m|g|t] ]
[ -S remotehost:remotevolume ]
Creates a flexible, traditional, or FlexCache volume.

If the first format is used, a flexible volume named flex_volname is created in the storage provided by aggregate aggrname. The size argument specifies the size of the flexible volume being created. It is a number, optionally followed by k, m, g, or t, denoting kilobytes, megabytes, gigabytes, or terabytes respectively. If none of the above letters is used, the unit defaults to bytes (and the size is rounded up to the nearest 4 KB). Flexible volumes may be as small as 20 MB. The maximum size for a flexible volume depends on the filer model and configuration, but is never over 16 TB.

The optional -s switch controls whether the volume is guaranteed some amount of disk space. The default value is volume, which means that the entire size of the volume will be preallocated. The file value means that space will be preallocated for all the space-reserved files and LUNs within the volume. Storage is not preallocated for files and LUNs that are not space-reserved; writes to these can fail if the underlying aggregate has no space available to store the written data. The none value means that no space will be preallocated, even if the volume contains space-reserved files or LUNs; if the aggregate becomes full, space will not be available even for space-reserved files and LUNs within the volume. Note that both the none and file settings allow for overbooking the containing aggregate aggrname. As such, it will be possible to run out of space in the new flexible volume even though it has not yet consumed its stated size. Use these settings carefully, and take care to regularly monitor space utilization in overbooking situations. To create a clone of a flexible volume, use the vol clone create command.

If the underlying aggregate aggrname upon which the flexible volume is being created is a SnapLock aggregate, the flexible volume will be a SnapLock volume and automatically inherit the SnapLock type, either Compliance or Enterprise, from the aggregate.

If the second format is used, a traditional volume named trad_volname is created using the specified set of disks. See the na_aggr(1) man page for a description of the various arguments to this traditional form of volume creation.

If the third format is used, a FlexCache volume named flexcache_volname is created in the aggregate aggrname. The FlexCache volume is created for the volume remotevolume located on the filer remotehost. This option is only valid if FlexCache functionality is licensed. If the size is not specified, the FlexCache volume will be created with autogrow enabled. The original size of the volume will be the smallest possible size of a flexible volume, but the size will automatically grow as more space is needed in the FlexCache volume, to improve performance by avoiding evictions. Although the size is left as an optional parameter, the recommended way of using FlexCache volumes is with autogrow enabled.

If the -l language_code argument is used, the filer creates the volume with the language specified by the language code. The default is the language used by the filer's root volume. Language codes are:
          C            (POSIX)
          ar           (Arabic)
          cs           (Czech)
          da           (Danish)
          de           (German)
          en           (English)
          en_US        (English (US))
          es           (Spanish)
          fi           (Finnish)
          fr           (French)
          he           (Hebrew)
          hr           (Croatian)
          hu           (Hungarian)
          it           (Italian)
          ja           (Japanese euc-j)
          ja_JP.PCK    (Japanese PCK (sjis))
          ko           (Korean)
          no           (Norwegian)
          nl           (Dutch)
          pl           (Polish)
          pt           (Portuguese)
          ro           (Romanian)
          ru           (Russian)
          sk           (Slovak)
          sl           (Slovenian)
          sv           (Swedish)
          tr           (Turkish)
          zh           (Simplified Chinese)
          zh.GBK       (Simplified Chinese (GBK))
          zh_TW        (Traditional Chinese euc-tw)
          zh_TW.BIG5   (Traditional Chinese Big 5)
To use UTF-8 as the NFS character set, append ".UTF-8" to the above language codes.

vol create will create a default entry in the /etc/exports file unless the option nfs.export.auto-update is disabled.

To create a SnapLock volume, specify the -L flag with the vol create command. This flag is only supported if either SnapLock Compliance or SnapLock Enterprise is licensed. The type of the SnapLock volume created, either Compliance or Enterprise, is determined by the type of installed SnapLock license. If both SnapLock Compliance and SnapLock Enterprise are licensed, use -L compliance or -L enterprise to specify the desired volume type.

vol destroy { volname | plexname } [ -f ]
Destroys the (traditional or flexible) volume named volname, or the plex named plexname within a traditional mirrored volume. Before destroying the volume or plex, the user is prompted to confirm the operation. The -f flag can be used to destroy a volume or plex without prompting. It is acceptable to destroy flexible volume volname even if it is the last one in its containing aggregate. In that case, the aggregate simply becomes devoid of user-visible file systems, but fully retains all its disks, RAID groups, and plexes. If a plex within a traditional mirrored volume is destroyed in this way, the traditional volume is left with just one plex, and thus becomes unmirrored. All of the disks in the plex or traditional volume destroyed by this operation become spare disks. Only offline volumes and plexes can be destroyed. vol destroy will delete all entries belonging to the volume in the /etc/exports file unless the option nfs.export.auto-update is disabled.

vol lang [ volname [ language_code ] ]
Displays or changes the character mapping on volname. If no arguments are given, vol lang displays the list of supported languages and their language codes. If only volname is given, it displays the language of the specified volume. If both volname and language_code are given, it sets the language of the specified volume to the given language. This will require a reboot to fully take effect.
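
By way of example (the aggregate, volume, filer, and language values below are hypothetical), the following sketch creates a 20 GB flexible volume with no space guarantee, creates a FlexCache volume backed by an origin volume on another filer, displays and then changes a volume's language, and destroys an offline volume without prompting:

    vol create flexvol0 -l en_US -s none aggr0 20g
    vol create cachevol0 -l en_US aggr0 -S toaster2:flexvol0
    vol lang flexvol0
    vol lang flexvol0 ja_JP.PCK
    vol destroy flexvol0 -f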
vol media_scrub status [ volname | plexname | groupname | -s disk-name ] [ -v ]
This command prints the status of the media scrub on the named traditional volume, plex, RAID group, or spare drive. If no name is given, then status is given on all RAID groups and spare drives currently running a media scrub. The status includes a percent-complete and the suspended status (if any). The -v flag displays the date and time at which the last full media scrub completed, the date and time at which the current instances of media scrub started, and the current status of the named traditional volume, plex, RAID group, or spare drive. This is provided for all RAID groups if no name is given. The vol media_scrub status command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should use the aggr media_scrub status command.

vol mirror volname
[ -n ]
[ -v victim_volname ]
[ -f ]
[ -d disk1 [ disk2 ... ] ]
Mirrors the currently-unmirrored traditional volume volname, either with the specified set of disks or with the contents of another unmirrored traditional volume victim_volname, which will be destroyed in the process. The vol mirror command fails if either the chosen volname or victim_volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. For more information about the arguments used for this command, see the information for the aggr mirror command on the na_aggr(1) man page.

vol offline { volname | plexname }
[ -t cifsdelaytime ]
Takes the volume named volname (or the plex named plexname within a traditional volume) offline. The command takes effect before returning. If the volume is already in the restricted or iron_restricted state, then it is already unavailable for data access, and much of the following description does not apply.

The current root volume may not be taken offline. Neither may a volume marked to become root (by using vol options volname root) be taken offline.

If a volume contains CIFS shares, users should be warned before taking the volume offline. Use the -t option to do this. The cifsdelaytime argument specifies the number of minutes to delay before taking the volume offline, during which time CIFS users are warned of the pending loss of service. A time of 0 means that the volume should be taken offline immediately and without warning. CIFS users can lose data if they are not given a chance to terminate applications gracefully.

If a plexname is specified, the plex must be part of a mirrored traditional volume, and both plexes must be online. Prior to offlining a plex, the system will flush all internally-buffered data associated with the plex and create a snapshot that is written out to both plexes. The snapshot allows for efficient resynchronization when the plex is subsequently brought back online.

A number of operations being performed on the volume in question can prevent vol offline from succeeding for various lengths of time. If such operations are found, there will be a one-second wait for such operations to finish. If they do not, the command is aborted. A check is also made for files on the volume opened by internal ONTAP processes. The command is aborted if any are found.

The vol offline command fails if plexname resides not in a traditional mirrored volume, but in an independent aggregate. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should consult the na_aggr(1) man page for a more detailed description of the aggr offline command.

vol online { volname [ -f ] | plexname }
This command brings the volume named volname (or the plex named plexname within a traditional volume) online. It takes effect immediately. If there are CIFS shares associated with the volume, they are enabled.

If a volname is specified, it must be currently offline, restricted, or in a foreign aggregate. If volname belongs to a foreign aggregate, the aggregate will be made native before being brought online. A foreign aggregate is an aggregate that consists of disks moved from another filer and that has never been brought online on the current filer. Aggregates that are not foreign are considered native.

If the volume is inconsistent but has not lost data, the user will be cautioned and prompted before bringing it online. The -f flag can be used to override this behavior. It is advisable to run WAFL_check (or do a snapmirror initialize in the case of a replica volume) prior to bringing an inconsistent volume online. Bringing an inconsistent volume online increases the risk of further file system corruption. If the volume is inconsistent and has experienced possible loss of data, it cannot be brought online unless WAFL_check (or snapmirror initialize) has been run on the volume.

If the volume is a flexible volume and the containing aggregate cannot honor the space guarantees required by this volume, the volume online operation will fail. The -f flag can be used to override this behavior. It is not advisable to use volumes with their space guarantees disabled. Lack of free space can lead to failure of writes, which in turn can appear as data loss to some applications.

If a plexname is specified, the plex must be part of an online, mirrored traditional volume. The system will initiate resynchronization of the plex as part of online processing.

The vol online command fails if plexname resides not in a traditional volume, but in an independent aggregate. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should consult the na_aggr(1) man page for a more detailed description of the aggr online command.
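
For instance (the volume name and delay are illustrative only), a volume with CIFS shares can be taken offline after a 5-minute warning to CIFS users and later brought back online:

    vol offline flexvol0 -t 5
    vol online flexvol0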
vol options volname [ optname optval ]
This command displays the options that have been set for volume volname, or sets the option named optname of the volume named volname to the value optval. The command remains effective after the filer is rebooted, so there is no need to add vol options commands to the /etc/rc file. Some options have values that are numbers. Other options have values that may be on (which can also be expressed as yes, true, or 1) or off (which can also be expressed as no, false, or 0). A mixture of uppercase and lowercase characters can be used when typing the value of an option. The vol status command displays the options that are set per volume.

The root option is special in that it does not have a value. To set the root option, use this syntax:

vol options volname root

There are four categories of options handled by this command. The first category is the set of options that are defined for all volumes, both flexible and traditional, since they have to do with the volume's user-visible file system aspects. The second category is the set of aggregate-level (i.e., disk and RAID) options that apply only to traditional volumes and not to flexible volumes. The third category is the set of options that are applicable only to flexible volumes and not to traditional volumes. The fourth category is the set of options that are applicable only to FlexCache volumes.
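
As a brief sketch of the general syntax (vol1 is a hypothetical volume name), the following displays all options set on a volume, sets a simple on/off option, and marks a volume to become the root volume on the next reboot:

    vol options vol1
    vol options vol1 nosnap on
    vol options vol1 root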
This section documents all four categories of options. It begins by describing, in alphabetical order, the options common to all volumes (both flexible and traditional) and their possible values:

convert_ucode on | off
Setting this option to on forces conversion of all directories to UNICODE format when accessed from both NFS and CIFS. By default, it is set to off, in which case access from CIFS causes conversion of pre-4.0 and 4.0 format directories, and access from NFS causes conversion of 4.0 format directories. The default setting is off.

create_ucode on | off
Setting this option to on forces UNICODE format directories to be created by default, both from NFS and CIFS. By default, it is set to off, in which case all directories are created in pre-4.0 format, and the first CIFS access will convert them to UNICODE format. The default setting is off.

extent on | space_optimized | off
Setting this option to on or space_optimized enables extents in the volume. This causes application writes to be written in the volume as a write of a larger group of related data blocks called an extent. Using extents may help workloads that perform many small random writes followed by large sequential reads. However, using extents may increase the amount of disk operations performed on the filer, so this option should only be used where this trade-off is desired. If the option is set to space_optimized, the reallocation update will not duplicate snapshot blocks into the active file system, which results in conservative space utilization. Using space_optimized may be useful when the volume has snapshots or is a SnapMirror source, as it can reduce the storage used in the flexible volume and the amount of data that SnapMirror needs to move on the next update. The space_optimized value may result in degraded snapshot read performance and may only be used for flexible volumes. The default value is off, in which case extents are not used.

fractional_reserve <pct>
This option decreases the amount of space reserved for overwrites of reserved objects (LUNs, files) in a volume. The option is set to 100 by default and indicates that 100% of the required reserved space will actually be reserved, so the objects are fully protected for overwrites. The value can vary from 0 to 100. Using a value of less than 100 indicates what percentage of the required reserved space should actually be reserved. This returns the extra space to the available space for the volume, decreasing the total amount of space used. However, this does leave the protected objects in the volume vulnerable to out-of-space errors, since less than 100% of the required reserved space is actually reserved. If reserved space becomes exhausted, this will cause disruptions on the hosts using the objects. If the percentage is decreased below 100%, it is highly recommended that the administrator actively monitor the space usage on the volume and take corrective action if the reserved space nears exhaustion.

fs_size_fixed on | off
This option causes the file system to remain the same size and not grow or shrink when a SnapMirrored volume relationship is broken, or when a vol add is performed on it. This option is automatically set to be on when a volume becomes a SnapMirrored volume. It will remain on after the snapmirror break command is issued for the volume. This allows a volume to be SnapMirrored back to the source without needing to add disks to the source volume.
If the volume is a traditional volume and the size is larger than the file system size, turning off this option will force the file system to grow to the size of the volume. If the volume is a flexible volume and the volume size is larger than the file system size, turning off this option will force the volume size to become equal to the file system size. The default setting is off.

guarantee file | volume | none
This option controls whether the volume is guaranteed some amount of disk space. The default value is volume, which means that the entire size of the volume will be preallocated. The file value means that space will be preallocated for all the space-reserved files and LUNs within the volume. Storage is not preallocated for files and LUNs that are not space-reserved; writes to these can fail if the underlying aggregate has no space available to store the written data. The none value means that no space will be preallocated, even if the volume contains space-reserved files or LUNs; if the aggregate becomes full, space will not be available even for space-reserved files and LUNs within the volume. Note that both the none and file settings allow for overbooking the containing aggregate. As such, it will be possible to run out of space in the flexible volume even though it has not yet consumed its stated size. Use these settings carefully, and take care to regularly monitor space utilization in overbooking situations.

For flexible root volumes, to ensure that system files, log files, and cores can be saved, the guarantee must be volume. This is to ensure support of the appliance by customer support, if a problem occurs.

Disk space is preallocated when the volume is brought online and, if not used, returned to the aggregate when the volume is brought offline. It is possible to bring a volume online even when the aggregate has insufficient free space to preallocate to the volume. In this case, no space will be preallocated, just as if the none option had been selected. The vol options and vol status commands will display the actual value of the guarantee option, but will indicate that it is disabled.

maxdirsize number
Sets the maximum size (in KB) to which a directory can grow. This is set to 1% of the total system memory by default. Most users should not need to change this setting. If this setting is changed to be above the default size, a notice message will be printed to the console explaining that this may impact performance. This option is useful for environments in which system users may grow a directory to a size that starts impacting system performance. When a user tries to create a file in a directory that is at the limit, the system returns an ENOSPC error and fails the create.

minra on | off
If this option is on, the filer performs minimal file read-ahead on the volume. By default, this option is off, causing the filer to perform speculative file read-ahead when needed. Speculative read-ahead improves performance with most workloads, so this option should be enabled with caution.

no_atime_update on | off
If this option is on, it prevents the update of the access time on an inode when a file is read. This option is useful for volumes with extremely high read traffic, since it prevents writes to the inode file for the volume from contending with reads from other files. It should be used carefully; that is, use this option when it is known in advance that the correct access time for inodes will not be needed for files on that volume.
The default setting is off.

no_i2p on | off
If this option is on, it disables inode-to-parent pathname translations on the volume. The default setting is off.

nosnap on | off
If this option is on, it disables automatic snapshots on the volume. The default setting is off.

nosnapdir on | off
If this option is on, it disables the visible .snapshot directory that is normally present at client mount points, and turns off access to all other .snapshot directories in the volume. The default setting is off.

nvfail on | off
If this option is on, the filer performs additional status checking at boot time to verify that the NVRAM is in a valid state. This option is useful when storing database files. If the filer finds any problems, database instances hang or shut down, and the filer sends error messages to the console to alert administrators to check the state of the database. The default setting is off.

read_realloc on | space_optimized | off
Setting this option to on or space_optimized enables read reallocation in the volume. This results in the optimization of file layout by writing some blocks to a new location on disk. The layout is updated only after the blocks have been read because of a user read operation, and only when updating their layout will provide better read performance in the future. Using read reallocation may help workloads that perform a mixture of random writes and large sequential reads. If the option is set to space_optimized, the reallocation update will not duplicate snapshot blocks into the active file system, which results in conservative space utilization. Using space_optimized may be useful when the volume has snapshots or is a snapmirror source, as it can reduce the storage used in the flexible volume and the amount of data that snapmirror needs to move on the next update. The space_optimized value may result in degraded snapshot read performance and may only be used for flexible volumes. The default value is off, in which case read reallocation is not used.

root [ -f ]
The volume named volname will become the root volume for the filer on the next reboot. This option can be used on only one volume at any given time. The existing root volume will become a non-root volume after the reboot. Until the system is rebooted, the original volume will continue to show root as one of its options, and the new root volume will show diskroot as an option. In general, the volume that has the diskroot option is the one that will be the root volume following the next reboot. The only way to remove the root status of a volume is to set the root option on another volume.

The act of setting the root status on a flexible volume will also move the HA mailbox disk information to disks on that volume. A flexible volume must meet the minimum size requirement for the appliance model, and must also have a space guarantee of volume, before it can be designated to become the root volume on the next reboot. This is to ensure support of the appliance by customer support, because the root volume contains system files, log files, and, in the event of reboot panics, core files.

Since setting a volume to be a root volume is an important operation, the user is prompted to confirm that they want to continue. If system files are not detected on the target volume, the set root operation will fail. You can override this with the -f flag, but upon reboot, the appliance will need to be reconfigured via setup. Note that it is not possible to set the root status on a SnapLock volume.
schedsnapname create_time | ordinal
If this option is ordinal, the filer formats scheduled snapshot names using the type of the snapshot and its ordinal (such as hourly.0). If the option is create_time, the filer formats scheduled snapshot names based on the type of the snapshot and the time at which it was created, such as hourly.2005-04-21_1100. The default setting is ordinal.

snaplock_compliance
This read-only option indicates that the volume is a SnapLock Compliance volume. Volumes can only be designated SnapLock Compliance volumes at creation time.

snaplock_default_period min | max | infinite | <count>d|m|y
This option is only visible for SnapLock volumes and specifies the default retention period that will be applied to files committed to WORM state without an associated retention period. If this option value is min, then snaplock_minimum_period is used as the default retention period. If this option value is max, then snaplock_maximum_period is used as the default retention period. If this option value is infinite, then a retention period that never expires will be used as the default retention period. The retention period can also be explicitly specified as a number followed by a suffix. The valid suffixes are d for days, m for months, and y for years. For example, a value of 6m represents a retention period of 6 months. The maximum valid retention period is 70 years.

snaplock_enterprise
This read-only option indicates that the volume is a SnapLock Enterprise volume. Volumes can only be designated SnapLock Enterprise volumes at creation time.

snaplock_maximum_period infinite | <count>d|m|y
This option is only visible for SnapLock volumes and specifies the maximum allowed retention period for files committed to WORM state on the volume. Any files committed with a retention period longer than this maximum will be assigned this maximum value. If this option value is infinite, then files that have retention periods that never expire may be committed to the volume. Otherwise, the retention period is specified as a number followed by a suffix. The valid suffixes are d for days, m for months, and y for years. For example, a value of 6m represents a retention period of 6 months. The maximum allowed retention period is 70 years.

snaplock_minimum_period infinite | <count>d|m|y
This option is only visible for SnapLock volumes and specifies the minimum allowed retention period for files committed to WORM state on the volume. Any files committed with a retention period shorter than this minimum will be assigned this minimum value. If this option value is infinite, then every file committed to the volume will have a retention period that never expires. Otherwise, the retention period is specified as a number followed by a suffix. The valid suffixes are d for days, m for months, and y for years. For example, a value of 6m represents a retention period of 6 months. The maximum allowed retention period is 70 years.

snapmirrored off
If SnapMirror is enabled, the filer automatically sets this option to on. Set this option to off if SnapMirror is no longer to be used to update the mirror. After setting this option to off, the mirror becomes a regular writable volume. This option can only be set to off; only the filer can change the value of this option from off to on.

snapshot_clone_dependency on | off
Setting this option to on will unlock all initial and intermediate backing snapshots for all inactive LUN clones. For active LUN clones, only the backing snapshot will be locked.
If the option is off, the backing snapshot will remain locked until all intermediate backing snapshots are deleted.

try_first volume_grow | snap_delete
A flexible volume can be configured to automatically reclaim space when the volume is about to run out of space, by either increasing the size of the volume or deleting snapshots in the volume. If this option is set to volume_grow, ONTAP will first try to increase the size of the volume before deleting snapshots to reclaim space. If the option is set to snap_delete, ONTAP will first automatically delete snapshots and, if it fails to reclaim space, will then try to grow the volume.

svo_allow_rman on | off
If this option is on, the filer performs SnapValidator for Oracle data integrity checks that are compatible with volumes that contain Oracle RMAN backup data. If the filer finds any problems, the write will be rejected if the svo_reject_errors option is set to on. The default setting is off.

svo_checksum on | off
If this option is on, the filer performs additional SnapValidator for Oracle data integrity checksum calculations of all writes on the volume. If the filer finds any problems, the write will be rejected if the svo_reject_errors option is set to on. The default setting is off.

svo_enable on | off
If this option is on, the filer performs additional SnapValidator for Oracle data integrity checking of all operations on the volume. If the filer finds any problems, the operation will be rejected if the svo_reject_errors option is set to on. The default setting is off.

svo_reject_errors on | off
If this option is on, the filer will return an error to the host and log the error if any of the SnapValidator for Oracle checks fail. If the option is off, the error will be logged only. The default setting is off.

The second category of options managed by the vol options command comprises the options that are closely related to aggregate-level (i.e., disk and RAID) qualities, and are thus only accessible via the vol options command when dealing with traditional volumes. Note that these aggregate-level options are also accessible via the aggr family of commands. The list of these aggregate-level options is provided below in alphabetical order:

ignore_inconsistent on | off
If this option is set to on, then aggregate-level inconsistencies that would normally be considered serious enough to keep the associated volume offline are ignored during booting. The default setting is off.

raidsize number
The -r raidsize argument specifies the maximum number of disks in each RAID group in the traditional volume. The maximum and default values of raidsize are platform-dependent, based on performance and reliability considerations.

raidtype raid4 | raid_dp | raid0
The -t raidtype argument specifies the type of RAID group(s) to be used to create the traditional volume. The possible RAID group types are raid4 for RAID-4, raid_dp for RAID-DP (Double Parity), and raid0 for simple striping without parity protection. Setting the raidtype on V-Series systems is not permitted; the default of raid0 is always used.

resyncsnaptime number
This option is used to set the mirror resynchronization snapshot frequency (in minutes). The default value is 60 minutes.

For new volumes, the options convert_ucode, create_ucode, and maxdirsize get their values from the root volume. If the root volume doesn't exist, they get the default values.
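
As an example of the aggregate-level (second-category) options above (tradvol0 is a hypothetical traditional volume, and the values shown are arbitrary rather than recommendations), these options can be set through vol options only on traditional volumes:

    vol options tradvol0 raidtype raid_dp
    vol options tradvol0 raidsize 14
    vol options tradvol0 resyncsnaptime 60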
The following options apply only to flexible volumes:

nbu_archival_snap on | off [ -f ]
Setting this option to on for a volume enables archival snapshot copies for SnapVault for NetBackup. If this option is set to off, no archival snapshot copy is taken after a backup. Drag-and-drop restores are only available for those backups that are captured in archival snapshot copies. Enabling or re-enabling archival snapshot copies will only be permitted on a volume if no SnapVault for NetBackup backups exist on that volume. If the nbu_archival_snap option is not configured at the time the first SnapVault for NetBackup backup starts for that volume, the option is then set according to the value of the snapvault.nbu.archival_snap_default option. The -f option disables the prompt that asks for confirmation.
There is also a set of options managed by the vol options command that are tied to FlexCache volumes. These options are as follows:

acregmax <timeout>[m|h|d|w]
Attribute cache regular file timeout: the amount of time for which the cache considers regular files on the given volume to be valid before consulting the origin. The timeout value is a number, optionally followed by m, h, d, or w, denoting minutes, hours, days, or weeks respectively. If none of the above letters is used, the unit defaults to seconds. The default value is 30 seconds. A value of zero means the cache will perform an attribute verify for every client request.

acdirmax <timeout>[m|h|d|w]
Similar to acregmax, but for directories.

acsymmax <timeout>[m|h|d|w]
Similar to acregmax, but for symbolic links.

actimeo <timeout>[m|h|d|w]
Attribute cache default timeout. Similar to acregmax, but applied to all file types that have no explicit timeout assigned by one of the above attribute cache options.

acdisconnected <timeout>[m|h|d|w]
Attribute cache timeout value used when the disconnected mode feature is enabled on this volume. If this option is set to 0 (the default value), access will be allowed indefinitely.

disconnected_mode off | hard | soft
This option is used to configure the behavior of the cache volume when it is disconnected from the origin and the normal TTL (e.g., acregmax) on the object has expired. When disabled (off), all access attempts will hang. When set to hard or soft, read-only access attempts will be allowed up to the value of the acdisconnected option. After the acdisconnected timeout is exceeded, attempts will either hang (hard) or have an error returned (soft). All attempts to modify the file system contents or to access data that is not currently in the cache volume will hang.

flexcache_autogrow on | off
Setting this option to on enables autogrow on the FlexCache volume. This causes the FlexCache volume to automatically grow, if there is room in the aggregate, in order to avoid evictions. Setting this option to off will cause the FlexCache volume to no longer grow automatically; the size will not be reverted back to its original size. This option is only valid on FlexCache volumes. Autogrow will be enabled by default on new FlexCache volumes that are created without a size parameter.

flexcache_min_reserve size
Alters the space reserved in the aggregate for the given FlexCache volume, such that the volume is guaranteed to be able to cache up to size data. The size parameter is given as in the vol create command.
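
To illustrate the FlexCache options (cachevol0 is a hypothetical FlexCache volume, and the timeout values are examples only), attribute-cache and disconnected-mode behavior might be tuned as follows:

    vol options cachevol0 acregmax 5m
    vol options cachevol0 acdisconnected 1h
    vol options cachevol0 disconnected_mode soft
    vol options cachevol0 flexcache_autogrow on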
vol rename volname newname
Renames the volume named volname to the name newname. vol rename will rewrite all entries belonging to the volume in the /etc/exports file unless the option nfs.export.auto-update is disabled.

vol restrict volname
[ -t cifsdelaytime ]
Puts the volume volname in the restricted state, starting from either the online or offline state. If the volume is online, then it will be made unavailable for data access as described above under vol offline.

If a volume contains CIFS shares, users should be warned before taking the volume offline. Use the -t option for this. The cifsdelaytime argument specifies the number of minutes to delay before taking the volume offline, during which time CIFS users are warned of the pending loss of service. A time of 0 means take the volume offline immediately with no warnings given. CIFS users can lose data if they are not given a chance to terminate applications gracefully.

vol scrub resume [ volname | plexname | groupname ]
Resumes parity scrubbing on the named traditional volume, plex, or RAID group. If no name is given, then all suspended parity scrubs are resumed. The vol scrub resume command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should use the aggr scrub resume command.

vol scrub start [ volname | plexname | groupname ]
Starts parity scrubbing on the named traditional volume, plex, or RAID group. Parity scrubbing compares the data disks to the parity disk in a RAID group, correcting the parity disk's contents as necessary. If no name is given, then parity scrubs are started on all online RAID groups on the filer. If a traditional volume is given, scrubbing is started on all RAID groups contained in the traditional volume. Similarly, if a plex name is given, scrubbing is started on all RAID groups in the plex. The vol scrub start command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should use the aggr scrub start command.

vol scrub status [ volname | plexname | groupname ] [ -v ]
Prints the status of parity scrubbing on the named traditional volume, plex, or RAID group. If no name is provided, the status is given on all RAID groups currently undergoing parity scrubbing. The status includes a percent-complete as well as the scrub's suspended status (if any). The -v flag displays the date and time at which the last full scrub completed, along with the current status on the named traditional volume, plex, or RAID group. If no name is provided, full status is provided for all RAID groups on the filer. The vol scrub status command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should use the aggr scrub status command.

vol scrub stop [ volname | plexname | groupname ]
Stops parity scrubbing for the named traditional volume, plex, or RAID group. If no name is given, then parity scrubbing is stopped on any RAID group on which one is active. The vol scrub stop command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should use the aggr scrub stop command.
vol scrub suspend [ volname | plexname | groupname ]
Suspends parity scrubbing on the named traditional volume, plex, or RAID group. If no name is given, all active parity scrubs are suspended. The vol scrub suspend command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite. In this specific case, the administrator should use the aggr scrub suspend command.

vol size volname [[+|-]size]
Sets or displays the given flexible volume's size, using space from the volume's containing aggregate. It can make the flexible volume either larger or smaller. The size argument has the same form and obeys the same rules as when it is used in the vol create command to create a flexible volume. Note that the sum of the sizes of all flexible volumes in an aggregate can exceed the size of the aggregate; be careful when overcommitting an aggregate in this way. If [+|-]size is used, the flexible volume's size is changed (grown or shrunk) by that amount. Otherwise, the volume size is set to size (rounded up to the nearest 4 KB). When displaying the flexible volume's size, the units used have the same form as when creating the volume or setting the volume size. The specific unit chosen for a given size is based on matching the volume size to an exact number of a specific unit; k is used if no larger unit matches. The file system size of a readonly replica flexible volume, such as a snapmirror destination, is determined from the replica source. In such cases, the value set by vol size is interpreted as an upper limit on the size. A flexible volume with the fs_size_fixed option set may have its size displayed, but not changed. A flexible root volume cannot be shrunk below a minimum size determined by the appliance model; this is to ensure that there is sufficient space in the root volume to store system files, log files, and core files for use by NetApp technical support if a problem with the system occurs. The amount of space available for the active file system in a volume is limited by the snapshot reservation set for that volume. The snapshot reservation should be taken into account when sizing a volume. See na_snap(1) for details on how to set a volume's snapshot reservation.

vol split volname/plexname new_volname
Removes plexname from a mirrored traditional volume and creates a new, unmirrored traditional volume named new_volname that contains the plex. The original mirrored traditional volume becomes unmirrored. The plex to be split from the original traditional volume must be functional (not partial), but it can be inactive, resyncing, or out-of-date. vol split can therefore be used to gain access to a plex that is not up to date with respect to its partner plex, if that partner plex is currently failed. If the plex is offline at the time of the split, the resulting traditional volume is offline. Otherwise, the resulting traditional volume is in the same online/offline/restricted state as the original traditional volume. A split mirror can be joined back together via the -v option to vol mirror. The aggr split command is the preferred way to split off plexes; it is the only way to split off plexes from mirrored aggregates that contain flexible volumes.

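For example, assuming a mirrored traditional volume named tradvol1 with plexes plex0 and plex1 (all names are illustrative), the second plex could be split off into a new unmirrored volume as follows:

      vol split tradvol1/plex1 tradvol1_split

After the split, tradvol1 is unmirrored and tradvol1_split contains the former plex1.
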
vol status [ volname ] [ -r | -v | -d | -l | -c | -b | -s | -f | -m | -w ]
Displays the status of one or all volumes on the filer. If volname is used, the status of the specified volume is printed; otherwise, the status of all volumes on the filer is printed. By default, the command prints a one-line synopsis of each volume, which includes the volume name, its type (traditional or flexible), whether it is online or offline, other states (for example, partial, degraded, wafl inconsistent, and so on), and per-volume options. Per-volume options are displayed only if the options have been turned on using the vol options command. If the wafl inconsistent state is displayed, please contact Customer Support. When run in a vfiler context, only the -v, -l, -b, and -? flags can be passed to vol status.

The -v flag shows the on/off state of all per-volume options and displays information about each plex and RAID group within the traditional volume or the aggregate containing the flexible volume. aggr status -v is the preferred manner of obtaining the per-aggregate options and the RAID information associated with flexible volumes.

The -r flag displays the RAID information for the traditional volume or the aggregate containing the flexible volume. If no volname is specified, it prints RAID information about all traditional volumes and aggregates, as well as information about file system disks, spare disks, and failed disks. For more information about failed disks, see the -f option description below.

The -d flag displays information about the disks in the traditional volume or the aggregate containing the flexible volume. The types of disk information are the same as those from the sysconfig -d command. aggr status -d is the preferred manner of obtaining this low-level information for aggregates that contain flexible volumes.

The -l flag displays, for each volume on the filer, the name of the volume, the language code, and the language being used by the volume.

The -c flag displays the upgrade status of the Block Checksums data integrity protection feature for the traditional volume or the aggregate containing the flexible volume. aggr status -c is the preferred manner of obtaining this information for a flexible volume's containing aggregate.

The -b flag is used to get the size of source and destination traditional volumes for use with SnapMirror. The output contains the size of the traditional volume and the size of the file system in the volume. SnapMirror and aggr copy use these numbers to determine whether the source and destination volume sizes are compatible. The file system size of the source must be equal to or smaller than the volume size of the destination. These numbers can differ when SnapMirror is used between volumes of dissimilar geometry.

The -s flag displays a list of the spare disks on the system. aggr status -s is the preferred manner of obtaining this information.

The -m flag displays a list of the disks in the system that are sanitizing, in recovery mode, or in maintenance testing.

The -f flag displays a list of the failed disks on the system. The command output includes the disk failure reason, which can be any of the following:
      unknown           Failure reason unknown.
      failed            Data ONTAP failed disk, due to a
                        fatal disk error.
      admin failed      User issued a 'disk fail' command
                        for this disk.
      labeled broken    Disk was failed under Data ONTAP
                        6.1.X or an earlier version.
      init failed       Disk initialization sequence failed.
      admin removed     User issued a 'disk remove' command
                        for this disk.
      not responding    Disk not responding to requests.
      pulled            Disk was physically pulled or no
                        data path exists on which to access
                        the disk.
      bypassed          Disk was bypassed by ESH.
aggr status -f is the preferred manner of obtaining this information.

The -w flag displays the expiry date of the volume, which is the maximum retention time of WORM files and WORM snapshots on that volume. A value of "infinite" indicates that the volume has an infinite expiry date. A value of "Unknown...volume offline" indicates that the expiry date is not displayed because the volume is offline. A value of "Unknown...scan in progress" indicates that the expiry date is not displayed because a WORM scan on the volume is in progress. A value of "none" indicates that the volume has no expiry date; a volume has no expiry date when it does not hold any WORM files or WORM snapshots. A value of "-" is displayed for regular volumes.

vol verify resume [ volname ]
Resumes RAID mirror verification on the given traditional volume. If no volume name is given, all suspended RAID mirror verification operations are resumed. The vol verify resume command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled by the new aggr command suite. In this case, the administrator should always use the aggr verify resume command.

vol verify start [ volname ] [ -f plexnumber ]
Starts RAID mirror verification on the named online, mirrored traditional volume. If no name is given, RAID mirror verification is started on all traditional volumes and aggregates on the filer. RAID mirror verification compares the data in both plexes of a mirrored traditional volume or aggregate. In the default case, all blocks that differ are logged, but no changes are made. If the -f flag is given, the specified plex is fixed to match the other plex when mismatches are found. A volume name must be specified with the -f plexnumber option. The vol verify start command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled by the new aggr command suite. In this case, the administrator should always use the aggr verify start command.

vol verify status [ volname ]
Prints the status of RAID mirror verification on the given traditional volume. If no volume name is given, status is provided for all active RAID mirror verification operations. The status includes a percent-complete figure and the verification's suspended status (if any). The vol verify status command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled by the new aggr command suite. In this case, the administrator should always use the aggr verify status command.

vol verify stop [ volname ]
Stops RAID mirror verification on the named traditional volume. If no volume name is given, all active RAID mirror verification operations on traditional volumes and aggregates are stopped. The vol verify stop command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled by the new aggr command suite. In this case, the administrator should always use the aggr verify stop command.

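As a sketch (the traditional volume name tradvol1 and plex number 1 are illustrative), verification of a mirrored traditional volume could be run and monitored like this:

      vol verify start tradvol1
      vol verify status tradvol1
      vol verify stop tradvol1

Adding -f 1 to the start command (vol verify start tradvol1 -f 1) would instead fix plex 1 to match its partner plex whenever mismatches are found.
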
vol verify suspend [ volname ]
Suspends RAID mirror verification on the named traditional volume. If no volume name is given, all active RAID mirror verification operations on traditional volumes and aggregates are suspended. The vol verify suspend command fails if the chosen volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled by the new aggr command suite. In this case, the administrator should always use the aggr verify suspend command.

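For example, assuming the same illustrative traditional volume tradvol1, a running verification could be suspended and later resumed:

      vol verify suspend tradvol1
      vol verify resume tradvol1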

CLUSTER CONSIDERATIONS

Volumes on different filers in a cluster can have the same name. For example, both filers in a cluster can have a volume named vol0. However, having unique volume names in a cluster makes it easier to migrate volumes between the filers in the cluster.

VFILER CONSIDERATIONS

A subset of the vol subcommands is available in vfiler contexts; these subcommands are used for vfiler SnapMirror operations. The available subcommands are online, offline, and restrict. These volume operations are only allowed if the vfiler owns the specified volumes. See na_vfiler(1) and na_snapmirror(1) for details on vfiler and SnapMirror operations.

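For example, assuming a vfiler named vfiler1 that owns a volume named vol5 (both names are hypothetical), and using the vfiler run command described in na_vfiler(1) to execute a command in that vfiler's context, the volume could be restricted along these lines:

      vfiler run vfiler1 vol restrict vol5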

EXAMPLES

vol create vol1 aggr0 50g
Creates a flexible volume named vol1 using storage from aggregate aggr0. This new flexible volume's size is set to 50 gigabytes.

vol create vol1 -r 10 20
Creates a traditional volume named vol1 with 20 disks. The RAID groups in this traditional volume can contain up to 10 disks, so this traditional volume has two RAID groups. The filer adds the current spare disks to the new traditional volume, starting with the smallest disk.

vol create vol1 20@9
Creates a traditional volume named vol1 with 20 9-GB disks. Because no RAID group size is specified, the default size (8 disks) is used. The newly created traditional volume contains two RAID groups with 8 disks each and a third RAID group with 4 disks.

vol create vol1 -d 8a.1 8a.2 8a.3
Creates a traditional volume named vol1 with the specified disks.

vol create vol1 aggr1 20m -S kett:vol2
Creates a flexible volume named vol1 on aggr1 of size 20 megabytes, which caches the source volume vol2 residing on the origin filer kett.

vol create vol1 10
vol options vol1 raidsize 5
The first command creates a traditional volume named vol1 with 10 disks that belong to one RAID group. The second command specifies that if any disks are subsequently added to this traditional volume, they will not cause any current RAID group to have more than five disks. The existing RAID group will continue to have 10 disks, and no more disks will be added to that RAID group. When new RAID groups are created, they will have a maximum size of five disks.

vol size vol1 250g
Changes the size of flexible volume vol1 to 250 gigabytes.

vol size vol1 +20g
Adds 20 gigabytes to the size of flexible volume vol1.

vol clone create vol2 -b vol1 snap2
The filer creates a writable clone volume vol2 that is backed by the storage of flexible volume vol1, snapshot snap2. vol clone create also creates a default entry in the /etc/exports file unless the option nfs.export.auto-update is disabled.

vol clone split start vol2
The filer starts an operation on clone volume vol2 to separate it from its parent volume. The backing snapshot for vol2 is unlocked once the separation is complete.

vol options vol1 root
The volume named vol1 becomes the root volume after the next filer reboot.

vol options vol1 nosnapdir on
In the volume named vol1, the snapshot directory is made invisible at the client mount point or at the root of a share. Also, for UNIX clients, the .snapshot directories that are normally accessible in all directories become inaccessible.

vol status vol1 -r
Displays the RAID information about the volume named vol1:
  Volume vol1 (online, raid4) (zoned checksums)
    Plex /vol1/plex0 (online, normal, active)
      RAID group /vol1/plex0/rg0 (normal)

        RAID Disk Device  HA    SHELF BAY CHAN  Used (MB/blks)    Phys (MB/blks)
        --------- ------  --    ----- --- ----  --------------    --------------
        parity    3a.0    3a    0     0   FC:A  34500/70656000    35239/72170880
        data      3a.1    3a    0     1   FC:A  34500/70656000    35239/72170880
vol copy start -s nightly.1 vol0 toaster1:vol0
Copies the nightly snapshot named nightly.1 on volume vol0 on the local filer to the volume vol0 on a remote filer named toaster1.

vol copy status
Displays the status of all active volume copy operations.

vol copy abort 1
Terminates volume copy operation 1.

vol copy throttle 1 5
Changes volume copy operation 1 to half (50%) of its full speed.

SEE ALSO

na_aggr(1), na_partner(1), na_snapmirror(1), na_sysconfig(1), na_license(1).