NAS – Network Attached Storage (Protocols – NFS, CIFS)
Network File System (NFS) Protocol
NFS is a distributed
file system protocol allowing a user on a client computer to access files over
a network in a manner similar to how local storage is accessed. The NFS
protocol has evolved from being simple and stateless in NFSv2 to being stateful
and secure in NFSv4.
NFSv4.1 adds
significant functionality to address weaknesses within NFSv4 and adds parallel NFS
(pNFS) to take advantage of clustered server deployments with the ability to
provide scalable parallel access to files distributed among multiple servers.
NFSv4.1 builds a session layer on top of the transport layer to improve the
reliability of the NFSv4 protocol.
Overview of NFS Architecture and General Operation
NFS follows the
classical TCP/IP client/server model of operation. A hard disk or a directory
on a storage device of a particular computer can be set up by an administrator
as a shared resource. This resource can then be accessed by client computers,
which mount the shared drive or directory, causing it to appear like a local
directory on the client machine. Some computers may act as only servers or only
clients, while others may be both: sharing some of their own resources and
accessing resources provided by others.
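As a concrete illustration (the server name, export path, and mount point here are hypothetical), a UNIX client would typically mount a shared directory with a command such as:
mount -t nfs filer1:/vol/vol0/home /mnt/home
After the mount, files under /mnt/home are read and written with ordinary file operations, and NFS carries those operations to filer1 behind the scenes.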
NFS uses an
architecture that includes three main components that define its operation. The
External Data Representation (XDR) standard defines how data is represented in
exchanges between clients and servers. The Remote Procedure Call (RPC) protocol
is used as a method of calling procedures on remote machines. Then, a set of
NFS procedures and operations works using RPC to carry out various requests.
The separate Mount protocol is used to mount resources as mentioned above.
One of the most
important design goals of NFS was performance. Obviously, even if you set up a
file on a distant machine as if it were local, the actual read and write
operations have to travel across a network. Usually this takes more time than
simply sending data within a computer, so the protocol itself needed to be as
“lean and mean” as possible. This goal led to some interesting design choices,
such as the use of the unreliable User Datagram Protocol (UDP) for transport in
TCP/IP, instead of the reliable TCP that most file transfer protocols use. This
in turn has interesting implications on how the protocol works as a whole.
Another key design
goal for NFS was simplicity (which of course is related to performance). NFS
servers are said to be stateless, which means that the protocol is designed so
that servers do not need to keep track of which files have been opened by which
clients. This allows requests to be made independently of each other, and
allows a server to gracefully deal with events such as crashes without the need
for complex recovery procedures. The protocol is also designed so that if
requests are lost or duplicated, file corruption will not occur.
NFS Versions and Standards
Since it was
initially designed and marketed by Sun, NFS began as a de facto standard. The
first widespread version of NFS was version 2, and this is still the most
common version of the protocol. NFS version 2 was eventually codified as an
official TCP/IP standard when RFC 1094, NFS: Network File System Protocol Specification,
was published in 1989.
NFS Version 3 was
subsequently developed and published in 1995 as RFC 1813, NFS Version 3
Protocol Specification. It is similar to version 2 but makes a few changes and
adds some new capabilities. These include support for larger read and write transfers,
better support for setting file attributes, and several new file access and
manipulation procedures. NFS version 3 also provides support for larger files
than version 2 did.
NFS Version 4 was
published in 2000 as RFC 3010, NFS version 4 Protocol. Where version 3 of NFS
contained only relatively small changes to version 2, NFSv4 is virtually a
rewrite of NFS. It includes numerous changes, most notably the following:
·
Reflecting
the needs of modern internetworking, NFSv4 puts greater emphasis on security.
·
NFSv4
introduces the concept of a Compound procedure, which allows several simpler
procedures to be sent from a client to server as a group.
·
NFSv4
almost doubles the number of individual procedures that a client can use in
accessing a file on an NFS server.
·
Version
4 also makes a significant change in messaging, with the specification of TCP
as the transport protocol for NFS.
·
Finally,
NFSv4 integrates the functions of the Mount protocol into the basic NFS protocol,
eliminating it as a separate protocol as it is in versions 2 and 3.
·
The
version 4 standard also has a lot more details about implementation and
optional features than the earlier standards—it's 275 pages long. So much for
simplicity! RFC 3010 was later updated by RFC 3530, Network File System (NFS)
version 4 Protocol, in April 2003. This standard makes several further
revisions and clarifications to the operation of NFS version 4.
Considered from the
perspective of the TCP/IP protocol suite as a whole, the Network File System
(NFS) is a single protocol that resides at the application layer of the TCP/IP
(DOD) model. This TCP/IP layer encompasses the session, presentation and
application layers of the OSI Reference Model. As I have said before in this
Guide, I don't see much value in trying to differentiate between layers 5
through 7 most of the time. In some cases, however, these layers can be helpful
in understanding the architecture of a protocol, and that's the case with NFS.
NFS Architecture and Main Components
The operation of NFS
is defined in the form of three main components that can be viewed as logically
residing at each of the three OSI model layers corresponding to the TCP/IP
application layer (see Figure 253). These components are:
·
Remote
Procedure Call (RPC): RPC is a generic session layer service used to implement
client/server internetworking functionality. It extends the notion of a program
calling a local procedure on a particular host computer, to the calling of a
procedure on a remote device across a network.
·
External
Data Representation (XDR): XDR is a descriptive language that allows data types
to be defined in a consistent manner. XDR conceptually resides at the
presentation layer; its universal representations allow data to be exchanged
using NFS between computers that may use very different internal methods of
storing data.
·
NFS
Procedures and Operations: The actual functionality of NFS is implemented in
the form of procedures and operations that conceptually function at layer seven
of the OSI model. These procedures specify particular tasks to be carried out
on files over the network, using XDR to represent data and RPC to carry the
commands across an internetwork.
Other Important NFS Functions
Aside from these
three components, the NFS protocol as a whole involves a number of other
functions, some of which I think are worth specific mention:
·
Mount Protocol: A specific decision was made by the creators
of NFS to not have NFS deal with the particulars of file opening and closing.
Instead, a separate protocol called the Mount protocol is used for this
purpose. Accessing a file or other resource over the network involves first
mounting it using this protocol. The Mount Protocol is architecturally
distinct, but obviously closely related to NFS, and is even defined in an
appendix of the NFS standard. I describe it in the last topic of this section.
(Note that in NFSv4 the functions of the Mount Protocol have been incorporated
into NFS “proper”.)
·
NFS File System Model: NFS uses a particular model to implement the
directory and file structure of the systems that use it. This model is closely
based on the file system model of UNIX but is not specific to only that
operating system. It is discussed in conjunction with the explanation of the
Mount Protocol.
·
Security: Versions 2 and 3 of NFS include only limited security provisions. They
use UNIX style authentication to check permissions for various operations. NFS
version 4 greatly increases the security options available for NFS
implementations. This includes both the option of multiple authentication and
encryption algorithms, and many changes made to the protocol as a whole to make
it more “security minded”.
Like other TCP/IP
protocols, NFS is implemented in the form of client and server software that
implements the functions above. The NFS standards, especially for versions 3
and 4, discuss numerous issues related to proper NFS client/server
implementation, including interaction between servers and clients, file
locking, permission issues, caching, retransmission policies, international
support and more. Many of these issues require extensive discussion that is
beyond the scope of this Guide. You will want to refer to the standards for
NFS, especially versions 3 and 4, for full details.
NFS Data Storage and Data Types, and the
External Data Representation (XDR) Standard
The overall idea
behind NFS is to allow someone on one computer to read from or write to a file
on another computer as readily as they do on a local machine. Of course, the
files on your local machine are all stored in the same file system, using the
same file structure and the same means of representing different types of data.
You can't be sure that this will be the case when accessing a remote device,
and this creates a bit of a “Tower of Babel” problem that NFS has to deal with.
Creating a Method of Universal Data Exchange:
XDR
One approach to
representation consistency would be to simply restrict access only to remote
files on machines that use the same operating system. However, this would remove
much of the effectiveness of NFS. It would also be highly impractical to
require every computer to understand the internal representation of every other
one. A more general method was needed to allow even very dissimilar machines to
share data. To this end, the creators of NFS defined it so that it deals with
data using a universal data description language. This language is called the
External Data Representation (XDR) standard, and was originally described in
RFC 1014; it was updated in RFC 1832, XDR: External Data Representation
Standard, in 1995.
The idea behind XDR
is simple, and can be easily understood in the form of an analogy. If you had
delegates speaking 50 different languages at a convention, they would have a
hard time communicating. You could hire translators to facilitate, but you'd
never find translators to handle all the different possible combinations of
languages. A more practical solution is to declare one language, such as
English, to be a common language. You then only need 49 translators: one to
translate from English to each of the non-English languages and back again. To
translate from Swedish to Portuguese, you translate from Swedish to English and
then from English to Portuguese. The common language could be French, or Spanish,
or something else, as long as a translator could be found from all the other
languages to that common language.
XDR works in the
same manner. When information about how to access a file is to be transferred
from device A to device B, device A first converts it from A's internal
representation to the XDR representation of those data types. The information
is transmitted across the network using XDR encoding. Then, device B translates
from XDR back to its own internal representation, so it can be presented to the
user as if it were on the local file system. Each device needs to know only how
to convert from its own “language” to XDR and back again; device A doesn't need
to know device B's internal details and vice-versa. This sort of translation is
of course a classic job of the presentation layer, which is where XDR resides
in the OSI Reference Model. XDR is itself closely related to an ISO standard
called Abstract Syntax Notation One (ASN.1).
Incidentally, the
idea described here is also used in other protocols to allow the exchange of
data independent of the nature of the underlying systems. For example, a
similar idea is behind the way management information is exchanged using the
Simple Network Management Protocol (SNMP). The same basic idea underlies the
important Network Virtual Terminal (NVT) paradigm used in the Telnet protocol.
NFS Client/Server Operation Using Remote
Procedure Calls (RPCs)
Almost all
applications deal with files and other resources. When a software program on a
particular computer wants to read a file, write a file or perform related
tasks, it needs to use the correct software instructions for this purpose. It
would be inefficient to require each software program to contain a copy of
these instructions, so instead, they are encoded as standardized software
modules, sometimes called procedures. To perform an action, a piece of software
calls the procedure; the procedure temporarily takes over for the main program
and performs a task such as reading or writing data. The procedure then returns
control of the program back to the software that called it, and optionally,
returns data as well.
Since the key
concept of NFS was to make remote file access look like local file access, it
was designed around the use of a network-based version of the procedure calling
method just described. A software application that wants to do something with a
file still makes a procedure call, but it makes the call to a procedure on a
different computer instead of the local one. A special set of routines is used
to handle the transmission of the call across the network, in a way largely
invisible to software performing the call.
This functionality
could have been implemented directly in NFS, but instead Sun created a separate
session-layer protocol component called the Remote Procedure Call (RPC)
specification, which defines how this works. RPC was originally created as a
subcomponent of NFS, but is generic enough and useful enough that it has been
used for other client/server applications in TCP/IP. For this reason, it is really
considered in many respects a distinct protocol.
Because RPC is the
actual process of communicating in NFS, NFS itself is different from many other
TCP/IP protocols. Its operation can't be described in terms of specific message
exchanges and state diagrams the way a protocol like HTTP or DHCP or even TCP
can, because RPC does all of that. NFS is in fact defined in terms of a set of
RPC server procedures and operations that an NFS server makes available to NFS
clients. These procedures and operations each allow a particular type of action
to be taken on a file, such as reading from it, writing to it or deleting it.
RPC Operation and Transport Protocol Usage
When a client wants
to perform some type of action on a file on a particular machine, it uses RPC
to make a call to the NFS server on that machine. The server accepts the
request and performs the action required, then returns a result code and
possibly data back to the client, depending on the request. The result code
indicates if the action was successful. If it was, the client can assume that
whatever it asked to be done was completed. For example, in the case of writing
data, the client can assume the data has been successfully written to long-term
storage.
NFS can operate over
any transport mechanism that has a valid RPC implementation at the session
layer. Of course in TCP/IP we have two transport protocols, UDP and TCP. It's
interesting to see that NFS has seen an evolution of sorts in its use of
transport protocol. The NFSv2 standard says that it operates “normally” using
UDP, and this is still a common way that NFS information is carried. NFSv3 says
that either UDP or TCP may be used, but NFSv4 specifies TCP to carry data. The
nominal registered port number for use by NFS is 2049, but in fact other port
numbers are sometimes used for NFS, through the use of RPC's “port mapper”
capability.
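To see this in practice, the standard rpcinfo utility can query a server's port mapper and list its registered RPC services (the host name is hypothetical and output varies by server):
rpcinfo -p filer1
In the resulting listing, the nfs program normally appears on port 2049, while related services such as the mount daemon appear on their own registered ports.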
Client and Server Responsibilities in NFS
Since UDP is
unreliable, the use of that protocol to transport important information may
seem strange. For example, we obviously don't want data that we are trying to
write to a file to be lost in transit. Remember, however, that UDP doesn't
preclude the use of measures to ensure reliable communications, it simply
doesn't provide those capabilities itself. UDP can be used by NFS because the
protocol itself is designed to tolerate loss of transmitted data and to recover
from it.
Consistent with this
concept, the general design of NFS puts most of the responsibility for
implementing the protocol on the client, not the server. As the NFSv3 standard
says, “NFS servers are dumb and NFS clients are smart.” What this means is that
the servers focus only on responding to requests, while clients must take care
of most of the “nitty-gritty” details of the protocol, including recovery from
failed communications. This is in fact a common requirement when UDP is used,
because if a client request is lost in transit, the server has no way of
knowing that it was ever sent.
As mentioned in the
NFS overview, NFS servers are designed to be “stateless”. In simplified terms,
this means that the NFS server does not keep track of the state of the clients
using it from one request to another. Each request is independent of the
previous one, and the server in essence has “no memory” of what it did before
when it gets a new command from a client. This again requires more “smarts” to
be put into the clients, but has the important advantage of simplifying
recovery in the case that the server crashes. Since there is nothing that the
server was keeping track of for the client, there's nothing that can be lost.
This is an important part of ensuring that files are not damaged as a result of
network problems or congestion.
Client and Server Caching
Both NFS clients and
servers can make use of caching to improve performance. Servers may use caching
to store recently-requested information in case it is needed again. They may
also use predictive caching, sometimes called prefetching. In this technique, a
server that receives a request to read a block of data from a file may load
into memory the next block after it, on the theory that it will likely be
requested next. Client-side caching is used to satisfy repeat NFS requests from
applications while avoiding additional RPC calls. Like almost everything else about
NFS, caching is implemented much more thoroughly in NFS version 4 than in the
previous versions.
NFS Server Procedures and Operations
The actual exchange
of information between an NFS client and server is performed by the underlying
Remote Procedure Call (RPC) protocol. NFS functionality is therefore described
not in terms of specific protocol operations, but by delineating the different
actions that a client may take on files residing on a server. In the original
version of NFS, NFSv2, these are called NFS server procedures.
Each procedure
represents a particular action that a client may perform, such as reading from
a file, writing to a file, or creating or removing a directory. The operations
performed on the file require that the file be referenced using a data
structure called a file handle. As the name suggests, the file handle, like the
handle of a real object, lets the client and server “grasp” onto the file. The
Mount protocol is used to mount a file system, to enable a file handle to be
accessed for use by NFS procedures.
NFS version 3 uses
the same basic model for server procedures, but makes certain changes. Two of
the NFSv2 procedures were removed, and several new ones added to support new
functionality. The numbers assigned to identify each procedure were also
changed.
NFS Version 4 Server Procedures and
Operations
It is common that a
client may want to perform multiple actions on a file: several consecutive
reads, for example. One of the problems with the server procedure system in
NFSv2 and NFSv3 is that each client action required a separate procedure call.
This was somewhat inefficient, especially when NFS was used over a high-latency
link.
To improve the
efficiency of server procedures, NFS version 4 makes a significant change to
the way that server procedures are implemented. Instead of each client action
being a separate procedure, a single procedure called compound is defined.
Within this “compound” procedure, a large number of server operations are
encapsulated. These are all sent as a single unit and the server interprets and
follows the instructions in each operation in sequence.
NFS File System Model and the Mount Protocol
Since NFS is used by
a client to simulate access to remote directories of files as if they were
local, the protocol must “present” the files from the remote system to the
local user. Just as files on a local storage device are arranged using a
particular file system, NFS too uses a file system model to represent how files
are shown to a user.
The NFS File System Model
The file system
model used by NFS is the same one that most of us are familiar with: a
hierarchical arrangement of directories that contain files and subdirectories.
The top of the hierarchy is the root, which contains any number of files and
first level directories. Each directory may contain more files or other
directories, allowing an arbitrary tree structure to be created.
A file can be
uniquely specified by using its file name and a path name that shows the
sequence of directories one must traverse from the root to find the file. Since
NFS is associated with UNIX, files in NFS discussions are usually shown in UNIX
notation; for example, “/etc/hosts”. The same basic tree idea can also be
expressed using the method followed by Microsoft operating systems:
“C:\WINDOWS\HOSTS”.
The Mount Protocol
Before NFS can be
used to allow a client to access a file on a remote server, the client must be
given a way of accessing the file. This means that a portion of the remote file
system must be made available to the client, and the file opened for access. A
specific decision was made when NFS was created to not put file access, opening
and closing functions into NFS proper. Instead, a separate protocol was created
to work with NFS, so that if in the future the method of providing file access
needed to be changed, it wouldn't require changes to NFS itself. This separate
mechanism is called the Mount Protocol, and is described in Appendix A of RFC
1094 (NFSv2). Note that while functionally distinct, Mount is considered part
of the overall NFS package.
When NFS was revised
to version 3, the Mount Protocol was similarly modified. The NFSv3 version of
the Mount Protocol is defined in Appendix I of RFC 1813 (NFSv3). It contains
some changes to how the protocol works, but the overall operation of the two
versions of Mount is pretty much the same.
The term “mount” is
actually an analog to a hardware term that refers to making a physical storage
volume available. In the “olden days”, storage devices were usually removable
disk packs, and to use one you mounted it onto a drive unit. In a similar
manner, NFS resources are logically mounted using the Mount protocol, which
makes the shared file system available to the client. A file can then be opened
and a file handle returned to the NFS client so it can reference the file for
operations such as reading and writing.
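On most UNIX clients the Mount protocol can be exercised directly with the showmount utility, which asks a server for its list of exports (the server name is hypothetical):
showmount -e filer1
The client's mount command then uses the Mount protocol to obtain a file handle for the chosen export before the NFS read and write procedures take over.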
NETAPP Filer Support for NFS and CIFS
·
Data ONTAP controls access to files according to
the authentication-based and file-based restrictions that you apply.
·
File access over NFS is controlled by exporting and
unexporting file system paths, which can be done either by editing the /etc/exports
file or by running the exportfs command.
·
If the nfs.export.auto-update
option is on, which it is by default, Data ONTAP automatically updates the
/etc/exports file when you create, rename, or delete volumes.
·
To export a file system path and add a
corresponding export entry to the /etc/exports file, you can use the exportfs
-p command, whose syntax is exportfs -p
[options] path.
·
To export all file system paths specified in the
/etc/exports file, you can enter the exportfs
-a command.
·
To unexport one file system path without
removing its corresponding entry from the /etc/exports file, you can use the exportfs -u path command. To unexport
one file system path and remove its corresponding entry from the /etc/exports
file, you can use the exportfs -z path command.
·
To unexport all file system paths without
removing their corresponding entries from the /etc/exports file, use the exportfs -ua command.
·
To export all file system paths specified in the
/etc/exports file and unexport all file system paths not specified in the
/etc/exports file, you can enter the exportfs
-r command.
·
To display the actual file system path for an
exported file system path, you can use the exportfs
-s command.
·
To display the export options for a file system
path, which can help you in debugging an export problem, you can use the exportfs -q path command.
·
To revert the /etc/exports file to an old
format, you can use the exportfs -d ver command.
·
Data ONTAP uses an access cache to reduce the
likelihood it will have to perform a reverse DNS lookup or parse netgroups when
granting or denying an NFS client access to a file system path.
·
Whenever an NFS client attempts to access a file
system path, Data ONTAP must determine whether to grant or deny access. Except
in the simplest cases (for example, when file system paths are exported
with just the ro or rw option), Data ONTAP grants or denies access according to
a value in the access cache that corresponds to the following things:
• The file system path
• The NFS client's IP address, access type, and security type
·
To remove entries from the access cache, you can
use the exportfs -f command.
·
To view access cache statistics, you can enter
the nfsstat -d command.
·
To enable Kerberos v5 security services, you can
use the nfs setup command.
·
Data ONTAP provides secure NFS access using the
Kerberos v5 authentication protocol to ensure the security of data and the
identity of users within a controlled domain.
·
To display NFS statistics for all NFS versions,
you can use the nfsstat command. (A short worked sequence using several of
these commands follows below.)
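To tie the commands above together, here is a sketch of a typical sequence on the filer console; the volume path and IP addresses are hypothetical, and actual output will vary by configuration:
exportfs -p rw=192.168.10.0/24,root=192.168.10.5 /vol/vol1/projects
exportfs -q /vol/vol1/projects
exportfs -c 192.168.10.21 /vol/vol1/projects rw sys
exportfs -f /vol/vol1/projects
exportfs -u /vol/vol1/projects
The first command exports the path and records it in /etc/exports, -q displays the resulting export options, -c checks whether a specific client has read-write access with sys security, -f flushes the path's access cache entries, and -u unexports the path while leaving its /etc/exports entry in place.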
Exportfs
exportfs
– exports or unexports a file system path, making it available or unavailable,
respectively, for mounting by NFS clients.
exportfs
exportfs [ -v ] [ -io options ] path
exportfs -a [ -v ]
exportfs -b [ -v ] enable | disable save | nosave allhosts | clientid[:clientid...] allpaths | path[:path...]
exportfs -c [ -v ] clientaddr path [ [ ro | rw | root ] [ sys | none | krb5 | krb5i | krb5p ] ]
exportfs -d [ -v ] [ 6.4 | 6.5 ]
exportfs -f [ -v ] [path]
exportfs -h | -r [ -v ]
exportfs -p [ -v ] [options] path
exportfs -q | -s | -w | -z [ -v ] path
exportfs -u [ -v ] path | -a
Use
the exportfs command to perform any of the following tasks:
*
Export or unexport a file system path.
*
Add an export entry to or remove an export entry from the /etc/exports
file.
*
Export or unexport all file system paths specified in the /etc/exports
file.
*
Enable or disable fencing of specific NFS clients from specific file system
paths.
*
Check whether an NFS client has a specific type of access to a file system
path.
*
Flush entries from the access cache.
*
Revert the /etc/exports file to a format compatible with a previous Data
ONTAP release.
*
Display exported file system paths and export options.
*
Display the actual file system path corresponding to an exported file system
path.
*
Save exported file system paths and their export options into a file.
(none)
Displays
all exported file system paths.
path
Exports
a file system path without adding a corresponding export entry to the /etc/exports
file. To override any export options specified for the file system path in the /etc/exports
file, specify the -io options followed by a comma-delimited list of
export options. For more information about export options, see exports .
Note: To export a file system path and add a corresponding entry to the /etc/exports
file, use the -p option instead.
-a
Exports
all file system paths specified in the /etc/exports file. To export all
file system paths specified in the /etc/exports file and unexport all
file system paths not specified in the /etc/exports file, use the -r
option instead. Note: Data ONTAP reexports a file system path only if its
persistent export options (those specified in the /etc/exports file) are
different from its current export options, thus ensuring that it does not
expose NFS clients unnecessarily to a brief moment during a reexport in which a
file system path is not available.
-b
Enables
or disables fencing of specific NFS clients from specific file system paths,
giving the NFS clients read-only or read-write access, respectively. To enable
fencing, specify the enable option; to disable fencing, specify the disable
option. To update the /etc/exports file, specify the save option;
otherwise, specify the nosave option. To affect all NFS clients, specify
the allhosts option; otherwise, specify a colon-delimited list of NFS
client identifiers. To affect all exported file system paths, specify the allpaths
option; otherwise, specify a colon-delimited list of file system paths. Data
ONTAP drains all of the NFS requests in its queue before it enables or disables
fencing, thereby ensuring that all file writes are atomic. Note: When you
enable or disable fencing, Data ONTAP moves the NFS client to the front of its
new access list (rw= or ro=). This reordering can change your
original export rules.
-c
Checks
whether an NFS client has a specific type of access to a file system path. You
must specify the IP address of the NFS client (clientaddr) and the exported
(not actual) file system path (path). To check whether the NFS client
has read-only, read-write, or root access to the file system path, specify the ro,
rw, or root option, respectively. If you do not specify an access
type, Data ONTAP simply checks whether the NFS client can mount the file system
path. If you specify an access type, you can also specify the NFS client’s
security type: sys, none, krb5, krb5i, or krb5p.
If you do not specify a security type, Data ONTAP assumes the NFS client’s
security type is sys. Note: If Data ONTAP does not find an entry in the
access cache corresponding to (1) the file system path and (2) the NFS client’s IP
address, access type, and security type, Data ONTAP (1) determines the NFS client’s
host name from its IP address (for example, it performs a reverse DNS lookup),
(2) checks the NFS client’s host name, access type, and security type against
the file system path’s export options, and (3) adds the result to the access
cache as a new entry.
-d
Reverts
the /etc/exports file to a format compatible with a previous Data ONTAP
release. Specify the 6.4 option or 6.5 option to revert the /etc/exports
file to a format compatible with the Data ONTAP 6.4 release or Data ONTAP 6.5
release, respectively. Before reverting the /etc/exports file, Data
ONTAP backs it up under /etc/exports.pre.revert. Note: Always check the
reverted /etc/exports file before accepting it. Reverting an /etc/exports
file that uses features not supported in a previous Data ONTAP release can lead
to unexpected results. For more information about reverting the /etc/exports
file, see exports .
-f
Flushes
entries from the access cache. To flush access cache entries corresponding to a
specific file system path, specify the file system path; otherwise, to flush
all access cache entries, do not specify a file system path. Note: To control
when access cache entries expire automatically, set the nfs.export.harvest.timeout,
nfs.export.neg.timeout, and nfs.export.pos.timeout options. For
more information about these options, see options .
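As a sketch of how these timeout options might be inspected or changed on the filer console (the value shown is arbitrary, not a recommendation):
options nfs.export.pos.timeout
options nfs.export.pos.timeout 3600
The first form displays the current value; the second assigns a new one.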
-h
Displays
help for all exportfs options.
-i
Ignores
the options specified for a file system path in the /etc/exports file.
If you do not specify the -i option with the -o option, Data
ONTAP uses the options specified for the file system path in the /etc/exports
file instead of the options you specify on the command line.
-o
Specifies
one or more export options for a file system path as a comma-delimited list.
For more information about export options, see exports . Note: To
override the options specified for the file system path in the /etc/exports
file, you must specify the -i and -o options together.
-p
Exports
a file system path and adds a corresponding export entry to the /etc/exports
file. If you do not specify any export options, Data ONTAP automatically
exports the file system path with the rw and -sec=sys export
options. Use the -p option to add a file system path to the /etc/exports
file without manually editing the /etc/exports file. Note: Data ONTAP
exports the file system paths specified in the /etc/exports file every
time NFS starts up (for example, when the filer reboots). For more information,
see exports .
-q
Displays
the export options for a file system path. Use the -q option to quickly
view the export options for a single file system path without manually
searching through the /etc/exports file. In addition to displaying the
options, it also displays the ruleid for each "rule" in the export.
This ruleid is used to display the in-memory and on-disk access cache for each
"rule”. Rule is a set of host access permissions defined for a security
flavor in an export and a ruleid uniquely identifies a rule for the duration
when a filer is up. e.g.
exportfs -q /vol/vol0 /vol/vol0
-sec=krb5, (ruleid=2), rw
This
means that the filesystem /vol/vol0 is exported via the rule "rw" and
this rule has a ruleid of 2.
exportfs -q /vol/vol1
/vol/vol1 -sec=sys, (ruleid=2), rw,
sec=krb5, (ruleid=10), ro=172.16.27.0/24, rw=172.16.36.0/24
This
means that the filesystem /vol/vol1 is exported via the rule "rw"
(ruleid 2) to everyone who is coming with AUTH_SYS security and is also
exported via the rule "ro=172.16.27.0/24, rw=172.16.36.0/24" (ruleid
10) to everyone coming in with Kerberos.
-r
Exports
all file system paths specified in the /etc/exports file and unexports
all file system paths not specified in the /etc/exports file. To export
all file system paths specified in the /etc/exports file without
unexporting any file system paths, use the -a option instead. Note: Data
ONTAP reexports a file system path only if its persistent export options (those
specified in the /etc/exports file) are different from its current
export options, thus ensuring that it does not expose NFS clients unnecessarily
to a brief moment during a reexport in which a file system path is not
available.
-s
Displays
the actual file system path corresponding to an exported file system path.
Note: Unless a file system path is exported with the -actual option, its
actual file system path is the same as its exported file system path.
-u
Unexports
a file system path. To unexport a single file system path, specify the path;
otherwise, to unexport all file system paths specified in the /etc/exports
file, specify the -a option. Note: The -u option does not remove
export entries from the /etc/exports file. To unexport a file system
path and remove its export entry from the /etc/exports file, use the -z
option instead.
-v
Specifies
that Data ONTAP should be verbose. Use the -v option with any other
option. For example, specify the -v option with the -a option to
specify that Data ONTAP should display all file system paths that it exports.
-w
Saves
exported file system paths and their export options into a file.
-z
Unexports
a file system path and removes its export entry from the /etc/exports
file. Use the -z option to remove a file system path from the /etc/exports
file without manually editing the /etc/exports file. Note: By default
entries are actually commented out and not removed from the /etc/exports
file. To change the behaviour so that entries are actually removed, switch off
the nfs.export.exportfs_comment_on_delete option. For more information, see
options.
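For example, to have -z physically remove entries rather than comment them out, the option named above would be switched off on the filer console (a sketch; confirm the option exists on your release):
options nfs.export.exportfs_comment_on_delete off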
clientaddr
An
NFS client’s IP address.
clientid
One
of the following NFS client identifiers: host name, IP address, netgroup,
subnet, or domain name. For more information, see exports .
options
A
comma-delimited list of export options. For more information, see exports .
path
A
file system path: for example, a path to a volume, directory, or file.
When
you export a file system path, specify the -p option to add a
corresponding entry to the /etc/exports file; otherwise, specify the -i
and -o options to override any export options specified for the file
system path in the /etc/exports file with the export options you specify
on the command line.
When
you specify the -b option (or the rw=, ro=, or root=
export option), you must specify one or more NFS client identifiers as a
colon-delimited list. An NFS client identifier is a host name, IP address,
netgroup, subnet, or domain name. For more information about client
identifiers, see exports .
Unlike
UNIX systems, Data ONTAP lets you export a file system path even if one of its
ancestors has been exported already. For example, you can export /vol/vol0/home
even if /vol/vol0 has been exported already. However, you must never
export an ancestor with fewer access controls than its children. Otherwise, NFS
clients can mount the ancestor to circumvent the children’s access controls.
For example, suppose you export /vol/vol0 to all NFS clients for
read-write access (with the rw export option) and /vol/vol0/home
to all NFS clients for read-only access (with the ro export option). If
an NFS client mounts /vol/vol0/home, it has read-only access to /vol/vol0/home.
But if an NFS client mounts /vol/vol0, it has read-write access to /vol/vol0
and /vol/vol0/home. Thus, by mounting /vol/vol0, an NFS client
can circumvent the security restrictions on /vol/vol0/home.
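Expressed as /etc/exports entries, the risky configuration described above would look roughly like this (a sketch of the scenario, not a recommendation):
/vol/vol0 -rw
/vol/vol0/home -ro
Because the parent is exported more permissively than its child, a client that mounts /vol/vol0 gains read-write access underneath /vol/vol0/home as well.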
When
an NFS client mounts a subpath of an exported file system path, Data ONTAP
applies the export options of the exported file system path with the longest
matching prefix. For example, suppose the only exported file system paths are /vol/vol0
and /vol/vol0/home. If an NFS client mounts /vol/vol0/home/user1,
Data ONTAP applies the export options for /vol/vol0/home, not /vol/vol0,
because /vol/vol0/home has the longest matching prefix.
Managing
the access cache
Whenever an NFS client attempts to access an exported file system path, Data ONTAP checks the access cache for an entry corresponding to (1) the file system path and (2) the NFS client’s IP address, access type, and security type. If an entry exists, Data ONTAP grants or denies access according to the value of the entry. If an entry does not exist, Data ONTAP grants or denies access according to the result of a comparison between (1) the file system path’s export options and (2) the NFS client’s host name, access type, and security type. In this case, Data ONTAP looks up the client’s host name (for example, Data ONTAP performs a reverse DNS lookup) and adds a new entry to the access cache. To manually add access cache entries, use the -c option.
Note:
The access cache associates an NFS client’s access rights with its IP address.
Therefore, changes to an NFS client’s host name will not change its access
rights until the access cache is flushed. Data ONTAP automatically flushes an
access cache entry when (1) its corresponding file system path is exported or
unexported or (2) it expires. To control the expiration of access cache
entries, set the nfs.export.harvest.timeout, nfs.export.neg.timeout,
and nfs.export.pos.timeout options. For more information about
these options, see options . To manually flush access cache entries, use
the -f option.
Running
exportfs on a vFiler unit
To run exportfs on a vFiler (TM) unit, use the vfiler run command. All paths you specify must belong to the vFiler unit. In addition, all IP addresses you specify must be in the vFiler unit’s ipspace. For more information, see vfiler .
Debugging
mount and access problems
To debug mount and access problems, temporarily (1) set the nfs.mountd.trace option to on and (2) monitor related messages that Data ONTAP displays and logs in the /etc/messages file (a brief example follows the list below). Some common access problems include:
*
Data ONTAP cannot determine an NFS client’s host name because it does not have
a reverse DNS entry for it. Add the NFS client’s host name to the DNS or the /etc/hosts
file.
*
The root volume is exported with a file system path consisting of a single forward
slash (/), which misleads some automounters. Export the file system path
using a different file system path name.
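A minimal debugging sketch using the option named above; the rdfile command for reading the log is assumed to be available on the filer:
options nfs.mountd.trace on
rdfile /etc/messages
options nfs.mountd.trace off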
Exporting
Origin Filer for FlexCache
Exporting a volume using the /etc/exports file does not affect whether the volume is available to a FlexCache volume. To enable a volume to be a FlexCache origin volume, use the flexcache.access option.
Exporting
file system paths
Each of the following commands exports /vol/vol0 to all hosts for read-write access:
exportfs -p /vol/vol0
exportfs -io rw /vol/vol0
Each
of the following commands exports /vol/vol0 to all hosts for read-only
access:
exportfs -p ro /vol/vol0
exportfs -io ro /vol/vol0
Each
of the following commands exports /vol/vol0 to all hosts on the
10.45.67.0 subnet with the 255.255.255.0 netmask for read-write access:
exportfs -io rw=10.45.67.0/24 /vol/vol0
exportfs -io rw="network 10.45.67.0 netmask 255.255.255.0" /vol/vol0
exportfs -io rw="10.45.67.0 255.255.255.0" /vol/vol0
The
following command exports /vol/vol0 to the hosts in the trusted
netgroup for root access, the hosts in the friendly netgroup for
read-write access, and all other hosts for read-only access:
exportfs -io ro,root=@trusted,rw=@friendly /vol/vol0
The
following command exports all file system paths specified in the /etc/exports
file:
exportfs -a
The
following command exports all file system paths specified in the /etc/exports
file and unexports all file system paths not specified in the /etc/exports
file:
exportfs -r
Unexporting
file system paths
The following command unexports /vol/vol0:
exportfs -u /vol/vol0
The
following command unexports /vol/vol0 and removes its export entry from
the /etc/exports file:
exportfs -z /vol/vol0
The
following command unexports all file system paths:
exportfs -ua
Displaying
exported file system paths
The following command displays all exported file system paths and their corresponding export options:
exportfs
The
following command displays the export options for /vol/vol0:
exportfs -q /vol/vol0
Enabling
and disabling fencing
Suppose /vol/vol0 is exported with the following export options:
-rw=pig:horse:cat:dog, ro=duck, anon=0
The
following command enables fencing of cat from /vol/vol0:
exportfs -b enable save cat /vol/vol0
Note:
cat moves to the front of the ro= list for /vol/vol0:
-rw=pig:horse:dog, ro=cat:duck, anon=0
The
following command disables fencing of cat from /vol/vol0:
exportfs -b disable save cat /vol/vol0
Note:
cat moves to the front of the rw= list for /vol/vol0:
-rw=cat:pig:horse:dog, ro=duck, anon=0
Checking
an NFS client’s access rights
The following command checks whether an NFS client with an IP address of 192.168.208.51 and a security type of sys can mount /vol/vol0:
exportfs -c 192.168.208.51 /vol/vol0
The
following command checks whether an NFS client with an IP address of
192.168.208.51 and a security type of none has read-only access to /vol/vol0:
exportfs -c 192.168.208.51 /vol/vol0 ro none
Flushing
entries from the access cache
The following command flushes all entries from the access cache:
exportfs -f
The
following command flushes all entries for /vol/vol0 from the access
cache:
exportfs -f /vol/vol0
Reverting
the /etc/exports file
The following command reverts the /etc/exports file to a format compatible with the Data ONTAP 6.5 release:
exportfs -d 6.5
Note:
Before reverting the /etc/exports file, Data ONTAP backs it up under /etc/exports.pre.revert.
Displaying
an actual file system path
The following example displays the actual file system path corresponding to /vol/vol0:
exportfs -s /vol/vol0
Note:
The actual file system path will be the same as the exported file system path
unless the file system path was exported with the -actual option.
Saving
file system paths
The following example saves the file system paths and export options for all currently and recently exported file paths into /etc/exports.recent:
exportfs -w /etc/exports.recent
SEE ALSO
ipspace, options, vfiler, exports, hosts, netgroup, passwd
Exports
NAME
exports
– directories and files exported to NFS clients
SYNOPSIS
/etc/exports
DESCRIPTION
The
/etc/exports file contains a list of export entries for all file system
paths that Data ONTAP exports automatically when NFS starts up. The /etc/exports
file can contain up to 10,240 export entries. Each export entry can contain up
to 4,096 characters, including the end-of-line character. To specify that an
export entry continues onto the next line, you must use the line continuation
character "\".
Each export entry is a line in the following format:
pathname -option[,option]...
The
following list describes the fields in an export entry:
pathname
The path name of a file or directory to be exported.
option
An export option specifying how a file or directory is exported.
You
can specify an option in one of the following formats:
actual=path
Specifies
the actual path to use when an NFS client attempts to mount the original path.
This option is useful for moving mount points without reconfiguring the clients
right away. Note that while the exported pathname need not exist, the pathname
given as a parameter to actual must exist.
anon=uid|name
If a request comes from a user ID of 0 (the root user ID on the client), use uid
as the effective user ID unless the client host is included in the root
option. The default value of uid is 65534. To disable root access, set uid
to 65535. To grant root access to all clients, set uid to 0. The user ID
can also be specified by a name string corresponding to an entry in /etc/passwd.
nosuid
If
a request tries to set either the setuid or setgid bit, or tries to create a
special file (i.e., a block or character device via mknod), then nosuid disallows
that request. The default mode is to allow such requests.
ro
| ro=hostname[:hostname]…
A pathname can be either exported ro to all hosts or to a set of specified hosts.
rw
| rw=hostname[:hostname]…
A pathname can be either exported rw to all hosts or to a set of specified hosts. If no access modifiers are provided, then the default is rw.
root=hostname[:hostname]…
Give root access only to the specified hosts. Note that there is no -root option, i.e., this option always takes at least one hostname as a parameter.
sec=secflavor[:secflavor]…
Allow access to the mounted directory only using the listed security flavors. If no sec directive is provided, then the default of sys is applied to the export. The sec directive may appear multiple times in a rule, with each appearance setting the security context for the directives that follow it: anon, nosuid, ro, root, and rw. The contexts apply in order. If only one security context is provided in an export, then it applies regardless of where it appears in the export. Note that any given secflavor can only appear once in an export rule. (An illustrative entry follows the list of flavors below.)
The
supported security flavors are:
sys
for
Unix(tm) style security based on uids and gids
krb5
for
Kerberos(tm) Version 5 authentication.
krb5i
for
Kerberos(tm) Version 5 integrity service
krb5p
for
Kerberos(tm) Version 5 privacy service
The
Kerberos(tm) authentication service verifies the identity of the users
accessing the filer on all accesses, and also verifies to the client that the
responses are from the filer. The integrity service provides a strong assurance
that the messages have not been tampered with. The privacy service ensures that
messages intercepted on the wire cannot be read by any other party. The
integrity and privacy services both include authentication. The default
security flavor is sys.
The
security flavor of none can also be applied to an export. If the client
uses this flavor, then all requests get the effective UID of the anonymous
user. Also, if a request arrives with a security context which is not present
in the export, and none is allowed, then that request is treated as if
it arrived with the flavor of none.
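As an illustration of how multiple sec contexts combine in one entry, the export behind the exportfs -q output shown earlier for /vol/vol1 would look roughly like this in /etc/exports (the subnets are the ones used in that example):
/vol/vol1 -sec=sys,rw,sec=krb5,ro=172.16.27.0/24,rw=172.16.36.0/24
Here the rw before the second sec directive applies to AUTH_SYS clients, while the ro= and rw= lists that follow apply only to clients using Kerberos v5.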
A
host is allowed to mount an export if it has either ro or rw
access permissions.
A
hostname is described as:
[-][machine
name|netgroup|machine IP|subnet|DNS domain]
Where,
`-’ indicates that the host is to be denied access.
A
machine name is an alphanumeric string.
A
netgroup is also an alphanumeric string and describes a group of machine names.
If NIS is not enabled, then each netgroup must be defined in the /etc/netgroup
file. If NIS is enabled, then each netgroup may either be in a NIS mapping or
defined in the /etc/netgroup file.
If
a netgroup occurs in both NIS and /etc/netgroup, then the ordering given
in /etc/nsswitch.conf determines which definition is used.
A
netgroup can be differentiated from a hostname by prepending an `@’ to the
name. When an entry begins with an `@’, ONTAP treats it as a netgroup and not a
hostname. When an entry does not begin with `@’, the handling depends on the
setting of the option nfs.netgroup.strict.
If
nfs.netgroup.strict is set, then the `@’ determines whether an entry is
either a netgroup or a hostname. In this case, when an entry appears without a
prepended `@’, it is assumed to be a hostname, i.e., it cannot be a netgroup.
If
nfs.netgroup.strict is not set, then an entry with `@’ will still only
denote a netgroup, but the absence of the `@’ does not determine that an entry
is a host.
The
use of the nfs.netgroup.strict option eliminates spurious netgroup
lookups (which can be helpful to performance). If it is not used, backwards
compatibility with export specifications in which netgroups are not specified
with an `@’ is retained.
A
machine IP is in dotted decimal format: AAA.BBB.CCC.DDD
A
subnet can be specified in one of the following forms:
dotted_IP/num_bits
The dotted_IP field is a subnet number. The num_bits field specifies the size of the subnet by the number of leading bits of the netmask.
"[network]
subnet [netmask] netmask” The subnet field is the subnet
number. The netmask field is the netmask. Note that the keywords network
and netmask are optional.
A
DNS domain starts with a `.’ and is alphanumeric.
If
there is a machine name and a netgroup with the same name, then the hostname is
assumed to be the name of a machine.
In
UNIX, it is illegal to export a directory that has an exported ancestor in the
same file system. Data ONTAP does not have this restriction. For example, you
can export both the /vol/vol0 directory and the /vol/vol0/home
directory. In determining permissions, the filer uses the longest matching
prefix.
Neither
the same path nor the same file handle can be advertised twice for exports. The
path name restriction keeps mounts unique, and the file handle restriction keeps
per-NFS-request checking unique as well.
As
the /etc/exports file is parsed, if the same path is found to be exported more
than once, then the last instance of the export rule is stored in
memory. Note that different path names may evaluate to the same advertised
path:
/home
/vol/vol0/home
/vol/vol0/home/ontap/..
The
addition of actual complicates the rules for determining what gets
exported. If an export uses -actual, then neither the advertised path
nor the actual storage path may be duplicated in memory.
There
is no set ordering of options, but as the ro and rw options
interact, there is a strict interpretation of these options:
1) -rw is the default if -ro, -ro=, -rw, and -rw= are all not present.
2) If only -rw= is present, ro is not the default for all other hosts. This rule is a departure from pre-6.5 semantics.
3) Specifying both -ro and -ro=, or both -rw and -rw=, is an error.
4) -ro=A, rw=A is an error.
5) -ro=A, rw=-A is an error.
6) -ro=-A, rw=A is an error.
7) The position of -rw, -rw=, -ro, and -ro= in the options does not have any significance.
8) -ro trumps -rw.
9) -ro= trumps -rw.
10) -rw= trumps -ro.
11) A specific host name in either -ro= or -rw= overrides a grouping in the other access specifier.
12) -ro= trumps -rw=.
13) Left to right precedence, which determines `-’ handling and the order we go across the wire.
Note,
"A trumps B" means that option A overrules option B.
Given
the following netgroups:
farm
pets (alligator, , ) livestock workers
pets
(dog, , ) (cat, , ) (skunk, , ) (pig, , ) (crow, , )
livestock
(cow, , ) (pig, , ) (chicken, , ) (ostrich, , )
workers
(dog, , ) (horse, , ) (ox, , ) (mule, , )
predators
(coyote, , ) (puma, , ) (fox, , ) (crow, , )
We
can illustrate the access rules thusly:
/vol/vol0
-anon=0
All
hosts have rw access, and root at that.
/vol/vol0
-root=horse, rw
All
hosts have rw access, but only horse has root access.
/vol/vol0
-anon=0, rw=horse
Only
horse has access and it is rw. Note the departure from the prior rule
format, in which all other hosts would by default have ro access.
/vol/vol0
-anon=0, ro, rw=horse
All
hosts have ro access, except horse, which has rw access.
/vol/vol1 -ro=@workers, rw=@farm:canary
/vol/vol1 -rw=@farm:canary, ro=@workers
All
hosts in the netgroup farm have rw access, except dog, horse, ox, and
mule, all of which have ro access. In addition, canary has rw
access to the export. Note that both lines are identical with respect to
determining access rights.
/vol/vol2
-ro=@pets, rw
All
hosts have rw access, except for dog, cat, skunk, pig, and crow, all of
which have ro access.
/vol/vol2
-ro=-@pets, rw
All
hosts have rw access, except for dog, cat, skunk, pig, and crow, all of
which have no access at all.
By
rule #9, all members of the netgroup pets are denied rw access. By
negation, all members of the netgroup pets are denied ro access.
/vol/vol2
-ro, rw=@pets:canary
All
hosts have ro access, except for canary, dog, cat, skunk, pig, and crow,
all of which have rw access.
/vol/vol2
-ro, rw=-@pets:canary
All
hosts have ro access, except for canary which has rw access.
/vol/vol2
-ro, rw=@pets:@farm:canary
All
hosts have ro access, except for canary and all hosts in the netgroups
pets and farm, which all have rw access.
/vol/vol2
-ro, rw=-@pets:@farm:canary
All
hosts have ro access, except for all hosts in the netgroup farm,
excluding all hosts in the netgroup pets, which have rw access. The host
canary also has rw access.
If
the host cat wants to write to /vol/vol2, by rule #10, we first check the -rw=
access list. By rule #13, we check for access in order of -@pets, @farm, and
finally canary. We match cat in the netgroup pets and therefore cat is denied rw
access. It will however be granted ro access.
/vol/vol2
-ro, rw=@farm:-@pets:canary
Effectively,
all hosts have ro access, except for canary and all hosts in the
netgroup farm, which all have rw access.
If
the host cat wants to write to /vol/vol2, by rule #10, we first check the -rw=
access list. By rule #13, we check for access in order of @farm, -@pets, and
finally canary. We match cat in the netgroup farm, by expansion, and therefore
cat is granted rw access.
/vol/vol2a
-rw=@pets:-@workers, ro=@livestock
By
rule #12, cow, pig, chicken, and ostrich all have ro access.
By
rule #13, dog, cat, and skunk all have rw access.
By
negation, horse, ox, and mule have no rw access and by rule #2, they
have no access at all.
/vol/vol2a
-rw=-@workers:pets, ro=@livestock
By
rule #12, cow, pig, chicken, and ostrich all have ro access.
By
rule #13, negation, and rule #2, dog, horse, ox, and mule have no access.
cat
and skunk have rw access.
/vol/vol3
-ro=@pets, rw=@farm:lion
All
hosts in the netgroup farm have rw access, except for all hosts in the
netgroup pets, which all have ro access. In addition, the host lion has rw
access.
If
the host cat wants to write to /vol/vol3, by rule #12, we first check the -ro=
access list. We match cat in the netgroup pets and therefore we deny rw
access.
/vol/vol4
-ro=10.56.17/24, rw=10.56/16
All
hosts in the subnet 10.56/16 have rw access, except those in the subnet
10.56.17/24, which have ro access.
/vol/vol17
-ro=10.56.17/24, rw=10.56.17.5:10.56.17.6:farm
All
hosts in the subnet 10.56.17/24 have ro access, except, by rule #11, for
10.56.17.5 and 10.56.17.6, which have rw access. If the hosts in the
netgroup farm are on the 10.56.17/24 subnet, they have ro access, else
they have rw access. Rule #11 allows for specific hosts to be excluded
from a range provided by a group. Since it makes no sense to compare netgroups
to subnets, we do not allow exceptions by groups.
/vol/vol19
-ro=10.56.17.9:.frogs.fauna.mycompany.com, rw=.fauna.mycompany.com
All
hosts in the subdomain .fauna.mycompany.com get rw access, except those
in the subdomain .frogs.fauna.mycompany.com, which get ro access. Note that we
determine this result from rule #12 and not rule #11; we do not evaluate whether
one grouping construct is a subset of another. If 10.56.17.9 is in the subdomain
.fauna.mycompany.com, then by rule #11, it gets ro access.
/vol/vol21
-ro=10.56.17.9, rw=-pets:farm:skunk
Rule
#11 interacts with rules #5 and #6 in an interesting way: if a host is
mentioned in an export by either name or IP, it appears that it will
always be granted the access given by whether it is in -ro= or -rw=.
However, rule #13 still applies. Thus, 10.56.17.9 always gets ro access,
but in this case, by rule #13, skunk is denied access to the mount. Since skunk
is a member of the netgroup pets, and pets is denied rw access by
negation, skunk is denied access.
/vol/vol5
-ro=.farm.mycompany.com, sec=krb5, rw, anon=0
If
the secflavor is sys, then all hosts in the DNS subdomain of .farm.mycompany.com
are granted ro access. If the secflavor is krb5, then all
hosts are granted rw access.
/vol/vol6
-sec=sys:none, rw, sec=krb5:krb5i:krb5p, rw, anon=0
If
the secflavor is sys or none, then all hosts are granted rw
access, but effectively all root access is blocked. If the secflavor
is one of the secure flavors krb5, krb5i, or krb5p, then rw
and effectively root access are both granted.
Exports
defined prior to ONTAP 6.5 contain a different option, -access, which
defined which hosts were permitted to mount an export. With the newer finer
grained options, and by allowing more flexibility such as netgroups in the
options, -access has been removed as an option.
Another
significant change is that -ro is no longer the default if -rw=
is present as an option.
During
the upgrade process, the /etc/exports file is converted to the newer
format.
The
rules for upgrading to the new format are:
1)
-root= options stay the same
2)
No access list => -rw
3)
-access=X => -rw=X
4)
-ro => -ro
5)
-access=X, ro => -ro=X
6)
-rw=X => -rw=X
This
translation is more secure than converting to -rw=X, ro.
Remember
from Access Rule #2, -ro is never a default.
If
the less restrictive form is desired, then the option needs to be manually
changed. Note that if an exports file has a mix of old-style and new-style
options, the more secure new-style option -rw=X cannot be differentiated from
the less secure old-style -rw=X (with its implicit ro
modifier). To solve this problem, we always interpret -rw=X in the most
secure way.
7)
-access=Y, rw=X => -rw=X, ro=(Y-X)
There is a potential to remove write access here, but we keep the most secure translation.
In
all cases, we preserve ordering inside an option.
/vol/vol0
-anon=0
By rule #2, this becomes:
/vol/vol0
-rw, anon=0
/vol/vol3
-ro
By rule #4, this becomes:
/vol/vol3
-ro
/vol/vol0/home
-rw=dog:cat:skunk:pig:mule
By rule #6, this becomes:
/vol/vol0/home
-rw=dog:cat:skunk:pig:mule
Note
that by the access rules given above, all other hosts are denied ro
access.
Since
the upgrade code does not know about netgroups (netgroups were previously not
allowed inside the -rw host list), this could be rewritten as:
/vol/vol0/home
-rw=@pets
Also,
if the older pre-6.5 semantics (all other hosts read-only) are desired, this
could be further rewritten as:
/vol/vol0/home
-ro, rw=@pets
/vol/vol1
-access=pets:workers:alligator:mule, rw=dog:cat:skunk:pig:horse:ox:mule
By
rule #7, this becomes:
/vol/vol1
-ro=pets:workers:alligator, rw=dog:cat:skunk:pig:horse:ox:mule
This
can be rewritten as:
/vol/vol1
-ro=pets:workers:alligator, rw=pets:workers
And
should be:
/vol/vol1
-ro=alligator, rw=@pets:@workers
The
/etc/exports file is changed by ONTAP for any of the following
conditions:
vol create
A default entry is added for the new volume. If an admin host was defined during the setup process, access is restricted to that host; otherwise all hosts have access to the new volume.
vol rename
All entries whose pathname or -actual pathname matches the old volume name are changed to use the new volume name.
vol destroy
All entries whose pathname or -actual pathname matches the old volume name are removed from the file.
upgrade
During every invocation of exportfs -a, the exports file is checked for old-style formatting. If this style is found, the exports file is upgraded to the current format.
Please
note that when we upgrade exports which contain subnets, we always rewrite the
subnets in the compact format of dotted_IP/num_bits.
If
the option nfs.export.auto-update is disabled, then the automatic
updates for the vol commands will not take place. Instead the need for
manual updates is syslogged.
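Assuming the standard 7-Mode options command syntax, the automatic updates could be disabled with something like:
options nfs.export.auto-update off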
A
new feature in ONTAP 6.5 is the access cache, which allows netgroups to appear
in -ro=, -rw=, and -root= options. Each time a request
arrives from a host, it refers to an exported path. To avoid lengthy delays, we
first check for that host and path in the cache to determine if we will accept
or reject the request. If there is a cache miss, we reject the request and do
name resolution in another thread. On the next request, we should get a cache
hit (whether we do depends on network traffic).
The
time that an entry lives in the cache is determined by two options:
nfs.export.neg.timeout
dictates how long an entry which has been denied access lives
nfs.export.pos.timeout
dictates how long an entry which has been granted access lives
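Both are ordinary filer options, so their current values can be inspected from the command line, for example:
options nfs.export.neg.timeout
options nfs.export.pos.timeout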
There
are several ways that the cache can be flushed:
exportfs -f
Flushes the entire access cache.
exportfs -f pathname
Flushes the cache for the longest leading prefix match for the path.
Also,
any command which alters an export entry will result in the access cache for
that export being flushed. E.g., exportfs -au, exportfs -a,
exportfs -io -rw /vol/vol1, etc.
As
the access cache is designed to eliminate name service lookups, entries inside
it can become stale when the name services are modified, for example, if a
netgroup is changed or a DNS server is found to have corrupt maps. If the
access cache is found to have stale data, then either parts of it or all of it
must be flushed. If the stale data applies to only a few exports, then each may
be flushed with the exportfs -f pathname command. The
entire cache may be cleared with the exportfs -f command.
Note
that the same effect may be had by using commands to reload the exports table.
In prior versions of ONTAP, either the exportfs -au; exportfs
-a command sequence or a simple exportfs -a command was
commonly used to clear away exports issues. While these can be used to clear
the access cache, they can also result in extra work and lead to brief
windows during which an export is unavailable.
All
mount requests, and NFS requests, come across the wire with an IP address and
not the hostname. In order for an address to be converted to a name, a reverse
lookup must be performed. Depending on the contents and ordering in /etc/nsswitch.conf,
DNS, NIS, and/or /etc/hosts may be examined to determine the mapping.
A
common problem with reverse DNS lookups is the existence of a mapping from name
to IP, but not IP to name.
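One quick way to check this from a UNIX host, using a placeholder name and address, is to compare the forward and reverse lookups:
nslookup dog.farm.mycompany.com   (forward: name to IP)
nslookup 10.56.17.5               (reverse: IP back to name)
If the forward lookup succeeds but the reverse lookup returns no PTR record, name-based export rules will not match that client.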
The
option nfs.mountd.trace can be turned on to help debug access requests.
Note that as this option can be very verbose and it writes to the syslog, care
should be taken to only enable it while trying to resolve an access problem.
Another
useful tool is to use exportfs -c to check for access
permissions.
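For example, a sketch of checking whether a particular client would get rw access to an export (the exact argument order may vary by release, so consult the exportfs man page):
exportfs -c 10.56.17.5 /vol/vol1 rw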
All
exported pathnames that do not begin with a leading "/vol/" or
"/etc/" prefix are deprecated.
Exporting
the root volume as / can be misleading to some automounters.
/etc/hosts
host name database
/etc/nsswitch.conf
determines name resolution search order
How to mount a
volume (NetApp shared volume) with NFSv4 support on the client?
1) Enable
NFSv4 support on the NetApp box -
2) Mount
the volume on the NFS client: # mount -t nfs4
NASBox:/et-data /et/nfs/data
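For step 1, the relevant 7-Mode option is believed to be nfs.v4.enable (verify the exact option name on your release):
options nfs.v4.enable on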
How can I prevent
the use of NFS Version 2 and use Version 3?
Mount the volume on the NFS client: # mount -t nfs -o
vers=3 NASBox:/vol/vol0/share /mnt/nfsshare
How to see the mounted volume detail in the NFS client?
cat /proc/mounts
How to clear NetApp NFS filer locks?
Execute the following from the
NetApp filer command line:
# lock status -f
# priv set advanced
# sm_mon -l
QNS: How to retrieve
a list of clients connected to the NFS server ?
showmount -a
QNS: Name of
Configuration file for NFS Server ?
/etc/exports
QNS: What is the meaning
of the "no_root_squash" option?
Treat remote root user as local root. Do not map requests
from root to the anonymous user and group ID.
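For example, a Linux /etc/exports entry using this option (path, address, and other options are illustrative) might be:
/data 192.168.1.51(rw,sync,no_root_squash)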
QNS: What is the
difference between NFS Version 2 & 3?
NFS Version 2 clients can access only the lowest
2GB of a file (signed 32 bit offset). Version 3 clients support larger files
(up to 64 bit offsets). Maximum file size depends on the NFS server's local
file systems.
NFS Version
2 limits the maximum size of an on-the-wire NFS read or write operation to 8KB
(8192 bytes). NFS Version 3 over UDP theoretically supports up to 56KB (the
maximum size of a UDP datagram is 64KB, so with room for the NFS, RPC, and UDP
headers, the largest on-the-wire NFS read or write size for NFS over UDP is
around 60KB). For NFS Version 3 over TCP, the limit depends on the
implementation. Most implementations don't support more than 32KB.
NFS Version
3 introduces the concept of Weak Cache Consistency. Weak Cache Consistency
helps NFS Version 3 clients more quickly detect changes to files that are
modified by other clients. This is done by returning extra attribute information
in a server's reply to a read or write operation. A client can use this
information to decide whether its data and attribute caches are stale.
NFS Version 2 clients interpret a file's mode bits
themselves to determine whether a user has access to a file. Version 3 clients
can use a new operation (called ACCESS) to ask the server to decide access
rights. This allows a client that doesn't support Access Control Lists to
interact correctly with a server that does.
NFS Version 2 requires that a server must save all
the data in a write operation to disk before it replies to a client that the
write operation has completed. This can be expensive because it breaks write
requests into small chunks (8KB or less) that must each be written to disk
before the next chunk can be written. Disks work best when they can write large
amounts of data all at once.
NFS Version
3 introduces the concept of "safe asynchronous writes." A Version 3
client can specify that the server is allowed to reply before it has saved the
requested data to disk, permitting the server to gather small NFS write
operations into a single efficient disk write operation. A Version 3 client can
also specify that the data must be written to disk before the server replies,
just like a Version 2 write. The client specifies the type of write by setting
the stable_how field in the arguments of each write operation to UNSTABLE to
request a safe asynchronous write, and FILE_SYNC for an NFS Version 2 style
write.
Servers
indicate whether the requested data is permanently stored by setting a
corresponding field in the response to each NFS write operation. A server can
respond to an UNSTABLE write request with an UNSTABLE reply or a FILE_SYNC
reply, depending on whether or not the requested data resides on permanent
storage yet. An NFS protocol-compliant server must respond to a FILE_SYNC
request only with a FILE_SYNC reply.
Clients ensure that data that was written using a
safe asynchronous write has been written onto permanent storage using a new
operation available in Version 3 called a COMMIT. Servers do not send a
response to a COMMIT operation until all data specified in the request has been
written to permanent storage. NFS Version 3 clients must protect buffered data
that has been written using a safe asynchronous write but not yet committed. If
a server reboots before a client has sent an appropriate COMMIT, the server can
reply to the eventual COMMIT request in a way that forces the client to resend
the original write operation. Version 3 clients use COMMIT operations when
flushing safe asynchronous writes to the server during a close (2) or fsync (2)
system call, or when encountering memory pressure.
QNS: What
are the main new features in version 4 of the NFS protocol?
NFS Versions 2 and 3 are stateless protocols, but
NFS Version 4 introduces state. An NFS Version 4 client uses state to notify an
NFS Version 4 server of its intentions on a file: locking, reading, writing,
and so on. An NFS Version 4 server can return information to a client about
what other clients have intentions on a file to allow a client to cache file
data more aggressively via delegation. To help keep state consistent, more sophisticated
client and server reboot recovery mechanisms are built in to the NFS Version 4
protocol.
NFS
Version 4 introduces support for byte-range locking and share reservation.
Locking in NFS Version 4 is lease-based, so an NFS Version 4 client must
maintain contact with an NFS Version 4 server to continue extending its open
and lock leases.
NFS Version 4 introduces file
delegation. An NFS Version 4 server can allow an NFS Version 4 client to access
and modify a file in its own cache without sending any network requests to the
server, until the server indicates via a callback that another client wishes to
access a file. This reduces the amount of traffic between NFS Version 4 client
and server considerably in cases where no other clients wish to access a set of
files concurrently.
NFS
Version 4 uses compound RPCs. An NFS Version 4 client can combine several
traditional NFS operations (LOOKUP, OPEN, and READ, for example) into a single
RPC request to carry out a complex operation in one network round trip.
NFS
Version 4 specifies a number of sophisticated security mechanisms, and mandates
their implementation by all conforming clients. These mechanisms include
Kerberos 5 and SPKM3, in addition to traditional AUTH_SYS security. A new API
is provided to allow easy addition of new security mechanisms in the future.
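On a Linux client, a Kerberos-protected NFSv4 mount might be requested as follows (the server name and paths are placeholders, and the client must already be configured for Kerberos):
# mount -t nfs4 -o sec=krb5 NASBox:/vol/vol1 /mnt/secure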
NFS
Version 4 standardizes the use and interpretation of ACLs across POSIX and
Windows environments. It also supports named attributes. User and group
information is stored in the form of strings, not as numeric values. ACLs, user
names, group names, and named attributes are stored with UTF-8 encoding.
NFS
Version 4 combines the disparate NFS protocols (stat, NLM, mount, ACL, and NFS)
into a single protocol specification to allow better compatibility with network
firewalls.
NFS
Version 4 introduces protocol support for file migration and replication.
NFS
Version 4 requires support of RPC over streaming network transport protocols
such as TCP. Although many NFS Version 4 clients continue to support RPC via
datagrams, this support may be phased out over time in favor of more reliable
stream transport protocols.
QNS:
Difference between NFS v3 and NFS v4?
NFSv3
A collection of protocols (file access, mount,
lock, status)
Stateless
UNIX-centric, but seen in Windows too
Deployed with weak authentication
32 bit numeric uids/gids
Ad-hoc caching
UNIX permissions
Works over UDP, TCP
Needs a-priori agreement on character sets
NFSv4
One protocol to a single port (2049)
Lease-based state
Supports UNIX and Windows file semantics
Mandates strong authentication
String-based identities
Real caching handshake
Windows-like access
Bans UDP
Uses a universal character set for file names
QNS: Can we grant
access by Username and password for nfs share?
No. Access is granted only by hostname or IP address.
QNS: What is the
role of "all_squash" option?
Treat all client users as anonymous users. Map all user and
group IDs to the anonymous user and group ID.
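A typical Linux /etc/exports entry using this option (path, subnet, and IDs are illustrative) is:
/public 192.168.1.0/24(ro,all_squash,anonuid=65534,anongid=65534)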
QNS: What is the
role of "root_squash" option?
All requests from the user root are translated or mapped as
if they came from the user anonymous (default).
QNS: Explain command
"/usr/sbin/exportfs -f"?
It will flush everything out of the kernels export table.
Any clients that are active will get new entries added by mountd when they make
their next request.
QNS: Which option is
used with the exportfs command to display the current export list along with
the export options?
exportfs -v
QNS: Which option is used with exportfs command to
re-export all directories?
exportfs -r
QNS: How will you
export the directory /data to host 192.168.1.51, allowing asynchronous writes,
without adding an entry to the /etc/exports file?
exportfs -o async 192.168.1.51:/data
QNS: Explain
"nfsstat" command?
The nfsstat command displays the statistics about NFS client
and NFS server activity.
QNS: What do you
understand by "nfsstat -o all -234" command?
It will show all information about all versions (2, 3, and 4) of NFS.
QNS: Explain
"Soft Mounting" option at NFS Client?
If a file request fails, the NFS client retries the request a limited number of
times; if it still cannot be satisfied (for example, the server is down), the
client gives up and reports an error to the process on the client machine that
requested the file access. This is called soft mounting.
The soft option enables the mount to time out if the server
goes down.
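For example, a soft mount on a Linux client might be requested like this (the timeout values are examples only; timeo is in tenths of a second):
# mount -t nfs -o soft,timeo=30,retrans=3 NASBox:/vol/vol0/share /mnt/nfsshare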
QNS: Explain
"Hard Mounting" option at NFS Client?
If a file request fails, the NFS client keeps retrying the request; it does not
give up until the request is satisfied, so no error is reported to the process
on the client machine that requested the file access. This is called hard
mounting.
The hard option keeps the request alive even if the server
goes down, and has the advantage that whenever the server comes back up, the
file activity continues where it left off.
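For example, a hard mount on a Linux client, with intr so that processes can still be interrupted while the server is unreachable (newer Linux kernels ignore intr, but it is harmless):
# mount -t nfs -o hard,intr NASBox:/vol/vol0/share /mnt/nfsshare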
QNS: What is "portmap"?
The portmapper keeps a list of what services are running on
what ports. This list is used by a connecting machine to discover which port it
needs to talk to in order to access a particular service.
QNS: How will you
check whether the "portmap" service is running?
rpcinfo -p
QNS: I am unable to
mount an NFS share. How will you trace out the reason?
First, check whether you have permission to mount the NFS share; check the
/etc/exports file on the server.
Second, you may get "RPC error: Program Not Registered" (or
another "RPC" error). In that case, check whether the NFS server and portmap
services are running, using "rpcinfo -p".
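For example, from the client (replace the server name with your own):
# rpcinfo -p NASBox      (lists the RPC services registered on the server)
# showmount -e NASBox    (lists the exports the server is advertising)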
QNS: Can I modify
export permissions without needing to remount clients in order to have them
take effect?
Yes. The safest thing to do is edit /etc/exports and run
"exportfs -r".
QNS: What is the
role of "sync" option for NFS server?
If sync is specified, the server waits until the request is
written to disk before responding to the client. The sync option is recommended
because it follows the NFS protocol.
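For example, a Linux /etc/exports entry using sync (path and subnet are illustrative) is:
/data 192.168.1.0/24(rw,sync)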
QNS: Explain the
working of NFS mount daemon "rpc.mountd"?
The rpc.mountd program implements the NFS mount protocol.
When receiving a MOUNT request from an NFS client, it checks the request
against the list of currently exported file systems. If the client is permitted
to mount the file system, rpc.mountd obtains a file handle for the requested
directory and returns it to the client.