Configuring an NFS Server

I am working with two Linux systems in VMs: one will be the NFS server and the other will be the client.

πŸ‘‰ I have added a 1GB HDD to the NFS server and created four 250MB partitions. They will be mounted at /NFS_Server1, /NFS_Server2, /NFS_Server3 and /NFS_Server4 respectively

Using the fdisk /dev/sdb command, I created four 250MB partitions

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      514047      256000   83  Linux
/dev/sdb2          514048     1026047      256000   83  Linux
/dev/sdb3         1026048     1538047      256000   83  Linux
/dev/sdb4         1538048     2050047      256000   83  Linux
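If you prefer a scriptable alternative to the interactive fdisk session, parted can create an equivalent layout (a sketch, assuming /dev/sdb is a fresh, empty disk):

parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary xfs 1MiB 251MiB
parted -s /dev/sdb mkpart primary xfs 251MiB 501MiB
parted -s /dev/sdb mkpart primary xfs 501MiB 751MiB
parted -s /dev/sdb mkpart primary xfs 751MiB 1001MiB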

Formatting these partitions using the mkfs.xfs command

mkfs.xfs /dev/sdb1
mkfs.xfs /dev/sdb2
mkfs.xfs /dev/sdb3
mkfs.xfs /dev/sdb4

After that, we mount the partitions on the /NFS_Server directories, which we need to create beforehand
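
The mount points can be created in one go:

mkdir /NFS_Server1 /NFS_Server2 /NFS_Server3 /NFS_Server4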

mount /dev/sdb1 /NFS_Server1
mount /dev/sdb2 /NFS_Server2
mount /dev/sdb3 /NFS_Server3
mount /dev/sdb4 /NFS_Server4

Checking mount status

df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.6M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G  1.6G   16G   9% /
/dev/sda1               1014M  168M  847M  17% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/sdb1                247M   13M  234M   6% /NFS_Server1
/dev/sdb2                247M   13M  234M   6% /NFS_Server2
/dev/sdb3                247M   13M  234M   6% /NFS_Server3
/dev/sdb4                247M   13M  234M   6% /NFS_Server4

Setting up auto-mount for these partitions

vi /etc/fstab 

# /etc/fstab
# Created by anaconda on Tue Jan 10 11:36:10 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=23c31983-af1e-48ed-8d0a-ce25c13dd641 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0

/dev/sdb1       /NFS_Server1                    xfs     defaults        0 0
/dev/sdb2       /NFS_Server2                    xfs     defaults        0 0
/dev/sdb3       /NFS_Server3                    xfs     defaults        0 0
/dev/sdb4       /NFS_Server4                    xfs     defaults        0 0

We can reboot and check if the auto-mount is working properly
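
Alternatively, we can test the new fstab entries without rebooting (a quick sketch):

# Unmount the partitions, then remount everything listed in /etc/fstab
umount /NFS_Server1 /NFS_Server2 /NFS_Server3 /NFS_Server4
mount -a
df -h | grep NFS_Server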

Configuring the NFS Server

NFS (Network File System) basically means storing files on a network: the storage is attached to the network, and client PCs can access it as if it were present locally on their own machines.

To configure NFS, we will need the rpcbind package on both Linux systems

yum -y install rpcbind

Enabling and starting the service on both systems

systemctl enable rpcbind
systemctl start rpcbind
systemctl status rpcbind

Adding the service to the firewall

firewall-cmd --permanent --add-service=rpc-bind
success
firewall-cmd --reload
success

Looking at rpcbind info

rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper

πŸ‘‰ RPC works as the bridge between the network storage and the client: NFS relies on Remote Procedure Calls (RPC) to route requests between clients and servers

We now have to install the NFS packages on the Linux system that we want to act as the storage server

yum -y install nfs*

Now we just need to share the NFS directories we created so the client can access them

To do that, we edit the exports configuration file

vi /etc/exports

Inside this configuration file,

# [ Share Dir ] [ Allow Host/Network ][ NFS_Option ]
/NFS_Server1    <Client IP Address>(rw,no_root_squash,sync)
/NFS_Server2    <Your network ID>/<subnet>(rw,root_squash,async,no_wdelay)
/NFS_Server3    *(rw,all_squash,sync)
/NFS_Server4    <Client IP Address>(rw,all_squash,anonuid=1005,anongid=1005,sync)
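For example, with a client at 192.168.1.128 on the 192.168.1.0/24 network (the values that show up in the exportfs -v output later), the file would look like this:

/NFS_Server1    192.168.1.128(rw,no_root_squash,sync)
/NFS_Server2    192.168.1.0/24(rw,root_squash,async,no_wdelay)
/NFS_Server3    *(rw,all_squash,sync)
/NFS_Server4    192.168.1.128(rw,all_squash,anonuid=1005,anongid=1005,sync)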

πŸ’‘ "no_root_squash" + '*' these both are very dangerous and almost never used 🚫 We are using them for testing purpose!

Note that we are actually sharing the mount points, i.e. the directories, not the partitions themselves (similar to sharing a folder in Windows)

The next column is the allowed host/network. The best practice is to allow the client IP address directly, but if that is hard or impossible, we can use the network ID instead.

The final column, NFS options, declares the permissions for that host or network: 'rw' means read and write, 'ro' means read only. We also use the 'sync' and 'async' options, which declare when changes are committed to disk.

A common rule of thumb for choosing the sync setting:

1:1 (single client) -> sync
N:1 (many clients) -> async

πŸ’‘ We use 'no_wdelay' together with async to remove the delay for write jobs on an async share

root_squash = maps the UID/GID to 'nfsnobody' when the client connects as the root user

no_root_squash = maps the client's root user to the server's root user when the client connects as root

all_squash = maps the UID/GID to 'nfsnobody' for every connecting user

no_all_squash = keeps the client's UID/GID unchanged on the server

anonuid = explicitly sets the UID of the anonymous account
anongid = explicitly sets the GID of the anonymous account

πŸ’‘ root_squash and no_all_squash are the defaults
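
To see squashing in action once the client has mounted the shares (we do that later in this post), a quick sketch:

# On the client, as root, create a file on the all_squash share
touch /NFS_Client3/squash_test

# On the server, the file belongs to the anonymous account
ls -l /NFS_Server3/squash_test
-rw-r--r--. 1 nfsnobody nfsnobody 0 ... squash_test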

Now, we need to enable and start the nfs-server

systemctl start nfs-server
systemctl enable nfs-server

Also, adding the nfs service to the firewall

firewall-cmd --permanent --add-service=nfs
success
firewall-cmd --reload
success
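πŸ’‘ Depending on your setup, NFSv3 clients may also need the mountd service opened on the firewall (firewalld ships a predefined service for it):

firewall-cmd --permanent --add-service=mountd
firewall-cmd --reload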

Now if we use the rpcinfo -p command again

rpcinfo -p

   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  45729  status
    100024    1   tcp  45185  status
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  52300  nlockmgr
    100021    3   udp  52300  nlockmgr
    100021    4   udp  52300  nlockmgr
    100021    1   tcp  45856  nlockmgr
    100021    3   tcp  45856  nlockmgr
    100021    4   tcp  45856  nlockmgr

We can see many new entries that confirm NFS is running

We can also use

exportfs -v

/NFS_Server1    192.168.1.128(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/NFS_Server4    192.168.1.128(sync,wdelay,hide,no_subtree_check,anonuid=1005,anongid=1005,sec=sys,rw,secure,root_squash,all_squash)
/NFS_Server2    192.168.1.0/24(async,no_wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
/NFS_Server3    <world>(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,all_squash)

πŸ’‘ Another command to remember is exportfs -ra, which re-exports all directories listed in the /etc/exports file.
This is useful when you have changed /etc/exports and want to apply the changes immediately without restarting the NFS server
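
So the typical edit-and-apply cycle looks like this:

vi /etc/exports     # change the shares
exportfs -ra        # re-export without restarting nfs-server
exportfs -v         # verify the active exports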

We will create a group with GID 1005, matching the anonuid/anongid values we set for /NFS_Server4

groupadd -g 1005 nfs_group

After that, we will be adding a user to this group

useradd -g nfs_group -u 1005 -s /sbin/nologin nfs_user

# Confirming the user details
tail -3 /etc/passwd
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
nfs_user:x:1005:1005::/home/nfs_user:/sbin/nologin
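We can also double-check the UID/GID mapping with id:

id nfs_user
uid=1005(nfs_user) gid=1005(nfs_group) groups=1005(nfs_group)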

πŸ’‘ nfsnobody already exists, but we created a dedicated nfs_user to be 'secure' against attackers who try to cause mischief through the well-known nfsnobody account. Also, since this user doesn't need to log in, we used /sbin/nologin

The NFS server part is done, so we can move to the client Linux system and connect to this server.


Connecting as a client

πŸ’‘ We can use the autofs service to connect to the NFS shares (a sketch follows), but the simplest approach is to just use fstab, which is what we'll do
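
For reference, a minimal autofs sketch (assuming the autofs package is installed; the names /NFS_Auto and /etc/auto.nfs are just example values):

# /etc/auto.master
/NFS_Auto    /etc/auto.nfs

# /etc/auto.nfs -- the key becomes /NFS_Auto/server1
server1    -fstype=nfs,rw    192.168.1.129:/NFS_Server1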

First, we will create directories that will be the mount points for the NFS

mkdir /NFS_Client1
mkdir /NFS_Client2
mkdir /NFS_Client3
mkdir /NFS_Client4

Next, we have to install the nfs-utils package

yum -y install nfs-utils

Using the following commands, we mount the shares:

mount -t nfs <NFS_SERVER_IP>:/NFS_Server1 /NFS_Client1
mount -t nfs <NFS_SERVER_IP>:/NFS_Server2 /NFS_Client2
mount -t nfs <NFS_SERVER_IP>:/NFS_Server3 /NFS_Client3
mount -t nfs <NFS_SERVER_IP>:/NFS_Server4 /NFS_Client4

Confirming the mount,

df -h
Filesystem                        Size  Used Avail Use% Mounted on
.
.
.
/dev/sda1                        1014M  199M  816M  20% /boot
tmpfs                              98M     0   98M   0% /run/user/0
192.168.1.129:/NFS_Server1        247M   13M  234M   6% /NFS_Client1
192.168.1.129:/NFS_Server2        247M   13M  234M   6% /NFS_Client2
192.168.1.129:/NFS_Server3        247M   13M  234M   6% /NFS_Client3
192.168.1.129:/NFS_Server4        247M   13M  234M   6% /NFS_Client4

A small test we can perform is

# From Linux Client
cd /NFS_Client1

touch A

ls
A

# From NFS Server
cd /NFS_Server1

ls
A

So NFS was configured without any issues. However, if we reboot the client right now, the mounts will be lost.

This is why we have to use the /etc/fstab config file

vi /etc/fstab

# Add these entries
192.168.1.129:/NFS_Server1      /NFS_Client1    nfs     defaults,_netdev        0 0
192.168.1.129:/NFS_Server2      /NFS_Client2    nfs     defaults,_netdev        0 0
192.168.1.129:/NFS_Server3      /NFS_Client3    nfs     defaults,_netdev        0 0
192.168.1.129:/NFS_Server4      /NFS_Client4    nfs     defaults,_netdev        0 0

πŸ’‘ The _netdev option in /etc/fstab is used to specify that a network file system (NFS) or other network-based filesystem is to be mounted after the network has been initialized. When this option is used, the mount operation is delayed until the network is up and running
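
As on the server, we can verify the client entries without a reboot (a quick sketch):

umount /NFS_Client1 /NFS_Client2 /NFS_Client3 /NFS_Client4
mount -a
df -h | grep NFS_Client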

βœ” Perfect! The NFS server is set up and running, and the client will reconnect automatically even after reboots since we have set up auto-mount
