The Network File System (NFS) allows a server to share directory hierarchies (file systems) with remote systems over a network. NFS servers export directories and NFS clients mount the exported directories, which then appear to client systems as if they were local directories. NFS reduces storage needs and improves data consistency and reliability, because users access files that are stored on a centralized server.

 

Red Hat Enterprise Linux 7 does not support NFS version 2 (NFSv2). The following two versions are supported:

  • NFS version 3 (NFSv3)
  • NFS version 4 (NFSv4)

 

NFS relies on Remote Procedure Calls (RPC) between clients and servers. RPC services are controlled by the rpcbind service. The rpcbind service replaces portmap, which was used in previous versions of Linux to map RPC program numbers to IP address and port number combinations. rpcbind responds to requests for RPC services and sets up connections to the requested RPC service. rpcbind is not used with NFSv4, because the server listens on the well-known TCP port 2049. The mounting and locking protocols have also been incorporated into the NFSv4 protocol, so NFSv4 does not interact with the lockd and rpc.statd daemons either.
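
You can verify which RPC services are currently registered with rpcbind by running the rpcinfo -p command. The output varies by system; the entries shown here are illustrative only:

# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs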

 

NFS Server and RPC Processes

 

Starting the nfs-server service starts the NFS server and other RPC processes needed to service requests for shared NFS file systems. You can use the short name “nfs” rather than “nfs-server” when starting the service. Example:

# systemctl start nfs

 

nfsd

This is the NFS server process that implements the user-level part of the NFS service. The main functionality is handled by the nfsd kernel module. The user-space program merely specifies what sort of sockets the kernel server listens on, what NFS versions it supports, and how many nfsd kernel threads it uses. Use the ps command to display the running nfsd threads:

# ps -ef | grep nfs
root      9093     2  0 11:21 ?        00:00:00 [nfsd4_callbacks]
root      9099     2  0 11:21 ?        00:00:00 [nfsd]
root      9100     2  0 11:21 ?        00:00:00 [nfsd]
root      9101     2  0 11:21 ?        00:00:00 [nfsd]
root      9102     2  0 11:21 ?        00:00:00 [nfsd]
root      9103     2  0 11:21 ?        00:00:00 [nfsd]
root      9104     2  0 11:21 ?        00:00:00 [nfsd]
root      9105     2  0 11:21 ?        00:00:00 [nfsd]
root      9106     2  0 11:21 ?        00:00:00 [nfsd]

 

The number of nfsd threads to run is defined in the /proc/fs/nfsd/threads file. In this example, 8 nfsd threads are specified:

# cat /proc/fs/nfsd/threads
8
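
To change the number of threads persistently, you would typically set the RPCNFSDCOUNT directive in /etc/sysconfig/nfs and restart the nfs service. This is a minimal sketch; the value shown is only an example:

# vi /etc/sysconfig/nfs
RPCNFSDCOUNT=16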

 

Starting the nfs-server service also starts the RPC processes. You can use the ps -e command to display the names of the RPC processes.

# ps -e | grep -i rpc
  177 ?        00:00:00 rpciod
 9080 ?        00:00:00 rpc.statd
 9081 ?        00:00:00 rpc.idmapd
 9082 ?        00:00:00 rpcbind
 9083 ?        00:00:00 rpc.mountd
 9084 ?        00:00:00 rpc.rquotad

 

rpc.statd

This process implements the Network Status Monitor (NSM) RPC protocol, which notifies NFS clients when an NFS server is restarted after having gone down without being gracefully shut down. It is not used with NFSv4.

 

rpc.mountd

This is the NFS mount daemon that implements the server side of the mount requests from NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server, and that the client is allowed to access it. For NFSv4, the rpc.mountd daemon is required only on the NFS server to set up the exports.

 

rpc.idmapd

This provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4 names (which are strings in the form of user@domain) and local UIDs and GIDs. For idmapd to function with NFSv4, /etc/idmapd.conf must be configured. This service is required for use with NFSv4, although not when all hosts share the same DNS domain name.
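
For example, a minimal /etc/idmapd.conf might set only the NFSv4 domain name (the domain shown here is a placeholder):

# vi /etc/idmapd.conf
[General]
Domain = example.com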

 

rpc.rquotad

This process provides user quota information for remote users. It is started automatically by the nfs service and does not require user configuration. The results are used by the quota command to display user quotas for remote file systems and by the edquota command to set quotas on remote file systems.

 

lockd

This is a kernel thread that runs on both clients and servers. It implements the Network Lock Manager (NLM) protocol, which allows NFSv3 clients to lock files on the server. It is started automatically whenever the NFS server is run and whenever an NFS file system is mounted.

 

nfslock

Starting this service starts the RPC processes that allow NFS clients to lock files on the server.

 

NFS Server Configuration

 

To begin configuring a system as an NFS server, install the nfs-utils package:

# yum install nfs-utils

 

The main configuration file for the NFS server is /etc/exports. This file stores a list of exported directory hierarchies that remote systems can mount. The format for entries is:

export-point client1(options) [client2(options) ... ]

 

The export-point is the absolute path name of the directory hierarchy to be exported. One or more client systems, each with specific options, can mount export-point. There is no space between the client specification and the opening parenthesis. When no client options are specified, the following default settings apply:

  • ro: Read-only. Client hosts cannot change the data shared on the file system. To allow client hosts to make changes to the file system, specify the rw (read/write) option.
  • sync: The NFS server replies to requests only after changes made by previous requests are written to disk. async specifies that the server does not have to wait.
  • wdelay: The NFS server delays committing write requests when it suspects another write request is imminent. To disable the delay, use the no_wdelay option. no_wdelay is available only if the default sync option is also specified.
  • root_squash: Prevents root users connected remotely from having root privileges, effectively “squashing” the power of the remote root user. Requests appear to come from the user nfsnobody, an unprivileged user on the local system, or as specified by anonuid. To disable root squashing, specify the no_root_squash option.
  • no_all_squash: Does not change the mapping of remote users. To squash every remote user (including root), use the all_squash option.
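
For example, the following /etc/exports entry (the path and client name are illustrative) overrides several of these defaults at once, allowing writes, asynchronous commits, and full root access for the named client:

/export/data client01(rw,async,no_root_squash)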

 

To specify the user ID (UID) and group ID (GID) that the NFS server assigns to remote users, use the anonuid and anongid options as follows:

export-point client(anonuid=uid,anongid=gid)

 

The anonuid and anongid options allow you to create a special user and group account for remote NFS users to share. By default, access control lists (ACLs) are supported by NFS. To disable this feature, specify the no_acl option when exporting the file system.
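
For example, the following entry maps every remote user, including root, to a single local account (the path, network, UID, and GID values are illustrative):

/export/shared 192.0.2.0/24(rw,all_squash,anonuid=5001,anongid=5001)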

 

You can use wildcard characters, such as the asterisk (*) and the question mark (?), in client names. You can also export directories to all hosts on an IP network by specifying an IP address and netmask pair as address/netmask. Either of the following forms is valid:

192.168.1.0/24
192.168.1.0/255.255.255.0

 

/etc/exports Examples

In the following example, a client system with the IP address 192.0.2.102 can mount /export/directory with read/write permissions, and all writes to the disk are asynchronous:

/export/directory 192.0.2.102(rw,async)

 

The following example exports the /exports/apps directory to all clients, converts all connecting users to the local anonymous nfsnobody user, and makes the directory read-only:

/exports/apps *(all_squash,ro)

 

The following example exports the /spreadsheets/proj1 directory with read-only permissions to all clients on the 192.168.1.0 subnet, and read/write permissions to the client system named mgmtpc:

/spreadsheets/proj1 192.168.1.0/24(ro) mgmtpc(rw)

 

Starting the NFS Service

 

The rpcbind service must be started before starting nfs. The following command checks if the rpcbind service is enabled and running.

# systemctl status rpcbind

 

If the rpcbind service is running, the nfs service can be started. After making any configuration changes to /etc/exports, restart the nfs service or run the exportfs -a command.

# systemctl start nfs
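
For example, after editing /etc/exports you can apply the changes with either of the following commands:

# systemctl restart nfs
# exportfs -a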

 

Check if the nfslock service is enabled and running. Starting this service starts the RPC processes that allow NFS clients to lock files on the server.

# systemctl status nfslock

 

Use the systemctl enable command to automatically start the services at boot time. Use the full name of nfs-server when enabling the NFS service.

# systemctl enable nfs-server

 

Specify configuration options and arguments by placing them in /etc/sysconfig/nfs. This file contains several comments to assist you in specifying options and arguments. Use the showmount -e command to display exported file systems:

# showmount -e
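
The output resembles the following; the host name, exported directory, and client shown here are illustrative:

Export list for host01:
/export/directory 192.0.2.102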

 

exportfs Utility

 

You can also configure an NFS server from the command line by using exportfs. This command allows the root user to selectively export or unexport directories without changing /etc/exports and without restarting the NFS service. The syntax for the command is:

# exportfs [options] [client:dir ...]

 

The client argument is the name of the client system that dir is exported to. The dir argument is the absolute path name of the directory being exported. The following is a list of some of the options:

  • -r: Re-export the entries in /etc/exports and synchronize /var/lib/nfs/etab with /etc/exports. The /var/lib/nfs/etab file is the master export table. rpc.mountd reads this file when a client sends an NFS mount command.
  • -a: Export the entries in /etc/exports but do not synchronize /var/lib/nfs/etab. Run exportfs -a after making any configuration changes.
  • -i: Ignore the entries in /etc/exports and use only command-line arguments.
  • -u: Unexport one or more directories.
  • -o: Specify client options as specified in /etc/exports.
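
For example, the following commands export a directory to a single client with specific options and then unexport it (the client name and path are placeholders):

# exportfs -i -o rw,no_root_squash client01:/export/directory
# exportfs -u client01:/export/directory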

 

NFS Client Configuration

 

To begin configuring a system as an NFS client, install the nfs-utils package:

# yum install nfs-utils

 

Use the mount command to mount exported file systems (NFS shares) on the client side. Syntax for the command is:

# mount -t nfs -o options host:/remote/export /local/directory

 

The following are descriptions of the arguments:

  • -t nfs: Indicates that the file system type is nfs. With this option, mount uses NFSv4 if the server supports it; otherwise, it uses NFSv3.
  • -o options: A comma-delimited list of mount options
  • host:/remote/export: The host name exporting the file system, followed by a colon, followed by the absolute path name of the NFS share
  • /local/directory: The mount point on the client system

 

For example, to mount the /home directory exported from host abc with read-only permissions (ro option) on local mount point /abc_home, and prevent remote users from gaining higher privileges by running a setuid program (nosuid option):

# mount -t nfs -o ro,nosuid abc:/home /abc_home

 

To mount NFS shares at boot time, add entries to the file system mount table, /etc/fstab. Entries are in the following format:

# vi /etc/fstab
server:/exported-filesystem    local_mount_point   nfs   options   0 0

 

For example, the /etc/fstab entry that replicates the preceding mount command is:

# vi /etc/fstab
abc:/home    /abc_home    nfs    ro,nosuid    0 0
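
After adding the entry, you can mount the share without rebooting by specifying only the mount point; mount reads the remaining details from /etc/fstab:

# mount /abc_home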

 

The df command displays mounted file systems, including NFS-mounted file systems. For NFS mounts, the "Filesystem" column displays the server:/exported-filesystem information. Use the -T option to include a "Type" column:

# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
host03:/Dev    nfs4      976M  2.5M 907M  1%   /remote_dev

 
