November 14th, 2007 @ 09:30:40
There are many options available when exporting and mounting NFS partitions, as well as when custom-configuring the firewall with iptables. Please read through the man pages to see what fits best. The instructions below are what I used to set up NFS within my internal network, and they may very well work for you. However, please note that you use these instructions at your own risk, and this website, sgowtham.net, is not responsible for any damage caused to your property, intellectual or otherwise.
It is not uncommon to find people (or organizations) with multiple computers at their disposal, and more often than not, they find themselves in the following situation:
One of these machines, often quite powerful, contains data that needs to be accessed from one or more of the other machines.
As is the case with most problems, there exists more than one way to solve this one. This article discusses, in step-by-step fashion, one such approach – the Network File System (abbreviated as NFS) – as applicable to Red Hat Enterprise Linux distributions. Unlike the previous write-up along similar lines, this one makes use of the autofs service on client machines, so that NFS shares are mounted on demand and automatically unmounted when not in use. A few more suggestions from my buddy Chong have also been incorporated to improve performance.
What is NFS?
According to Wikipedia, it is a network file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a network as easily as if the network devices were attached to its local disks. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. The Network File System protocol is specified in RFC 1094, RFC 1813, and RFC 3530.
The Server Part
- Let us assume that the IP address of the server is 192.168.1.2
- Login as root
- Decide on two things:
- What file systems should be made available to clients? Let us assume that /usr/local (as read only) and /home (as read/write) partitions need to be exported.
- Which machines/clients (IP range or specific hostnames) should be allowed to access the exported file systems? Let us assume that all machines in the internal network – identified by IP addresses 192.168.1.xxx/255.255.255.0 – should have access to the exported partitions.
- Once the above is determined, this information needs to be put in a file that NFS will look up and do the needful. Add the following to /etc/exports:
# /etc/exports
/usr/local  192.168.1.0/24(ro,async)
/home       192.168.1.0/24(rw,async)
- Save and close the file, then run the following command:
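The command itself does not appear in the text above; assuming a stock Red Hat setup, the usual way to sync the kernel's export table with /etc/exports is exportfs:

```shell
# Re-export everything listed in /etc/exports:
# -r re-exports all directories, -a applies to all exports
exportfs -ra
```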
- Assuming that a full installation of the Linux distribution was done, start the NFS service:
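The start command is missing here; on RHEL-era systems of this vintage it would typically be the following (portmap is assumed to be the RPC port mapper on this release – newer releases use rpcbind instead):

```shell
# Start the RPC port mapper (required by NFS), then NFS itself
/sbin/service portmap start
/sbin/service nfs start
```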
- If you plan on keeping this service active over reboots, then:
chkconfig --level 345 nfs on
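At this point it may be worth verifying, from any machine on the 192.168.1.x network, that the server is exporting what we expect (showmount ships with the nfs-utils package):

```shell
# Query the server's export list
showmount -e 192.168.1.2
```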
The Client Part – With AutoFS
- Login as root
- Autofs uses the automount daemon to manage your mount points by only mounting them dynamically when they are accessed. Autofs consults the master map configuration file /etc/auto.master to determine which mount points are defined. It then starts an automount process with the appropriate parameters for each mount point. Each line in the master map defines a mount point and a separate map file that defines the filesystems to be mounted under this mount point. For example, the /etc/auto.misc file might define mount points in the /misc directory; this relationship would be defined in the /etc/auto.master file.
Each entry in auto.master has three fields. The first field is the mount point. The second field is the location of the map file, and the third field is optional (can contain information such as a timeout value).
- To mount the exported partitions under the mount point /misc/, add the following line to auto.master:
/misc /etc/auto.misc --timeout 60
The directory /misc must exist on the local filesystem (it does exist on most Red Hat Linux distributions). There should be no subdirectories in /misc on the local filesystem.
- Add the following line to /etc/auto.misc:
192_168_1_2_usr_local  -ro,soft,intr,rsize=8192,wsize=8192  192.168.1.2:/usr/local
192_168_1_2_home       -rw,soft,intr,rsize=8192,wsize=8192  192.168.1.2:/home
The first field in /etc/auto.misc is the name of the /misc subdirectory. This directory is created dynamically by automount. It should not actually exist on the client machine. The second field contains mount options. The third field is the location of the NFS export including the hostname and directory.
- Once the above is done, start the AutoFS service:
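The start command does not survive in the text; on Red Hat it would presumably be:

```shell
# Start the automount daemon
/sbin/service autofs start
```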
- If you plan on keeping this set up active over reboots, then run:
chkconfig --level 345 autofs on
- If you modify the /etc/auto.master configuration file while autofs is running, the automount daemon(s) must be notified to reload by typing the following command at a shell prompt:
/sbin/service autofs reload
- To access the mounted NFS partition, simply change into (or list) the corresponding subdirectory of /misc – the 192_168_1_2_home folder, as well as its contents, will automagically show up.
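For instance, assuming the /etc/auto.misc entries above:

```shell
# Accessing the directory triggers the on-demand mount
cd /misc/192_168_1_2_home
ls
```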
In spite of rigorously following the above steps (some of which were adapted from the Red Hat documentation), I could not proceed beyond this stage – it would just fail, and repeatedly so. Fortunately, the troubleshooting I did for the previous entry solved this one as well. For completeness' sake, here it is: the first thing I had to check was that I was using the proper syntax and appropriate arguments/options for each command. Since the error was very easily reproducible, I thought the SELinux (Security-Enhanced Linux) feature might be obstructing the proper functioning of NFS, so I disabled it. As root, I edited /etc/sysconfig/selinux (on the server as well as the client) and made it look like:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing  - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled   - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#   targeted - Only targeted network daemons are protected.
#   strict   - Full SELinux protection.
# SELINUXTYPE=targeted
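A quick way to confirm which mode SELinux is actually running in (before and after the change takes effect) is getenforce:

```shell
# Prints the current SELinux mode: Enforcing, Permissive or Disabled
getenforce
```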
After rebooting (both server and client) and re-attempting the NFS setup, I still got the same error message – meaning something else was obstructing the process. A little bit of digging around and Googling led me to believe that the default firewall rules on the server were the culprit. The following steps resolved the issue:
- Login as root on the server (192.168.1.2)
cd /etc/sysconfig/
cp iptables iptables.default
cd
- Based on the firewall rules implemented on a Beowulf Linux cluster, I created a file called custom_firewall.sh with the following contents:
#!/bin/bash
#
# Define a local variable, IPTABLES
export IPTABLES=/sbin/iptables

# Flush out all existing rules
$IPTABLES -F INPUT

# Set default Policy for Input, Output and Forward chains
# If nothing else matches, these are followed
$IPTABLES -P INPUT ACCEPT
$IPTABLES -P OUTPUT ACCEPT
$IPTABLES -P FORWARD DROP

# Allow self-access by loopback interface
$IPTABLES -A INPUT -i lo -p all -j ACCEPT
$IPTABLES -A OUTPUT -o lo -p all -j ACCEPT

# Accept established connections
$IPTABLES -A INPUT -i eth0 -p tcp -m state --state ESTABLISHED -j ACCEPT
$IPTABLES -A INPUT -i eth0 -p udp -m state --state ESTABLISHED -j ACCEPT
$IPTABLES -A INPUT -i eth0 -p icmp -m state --state ESTABLISHED -j ACCEPT

# Ping requests
$IPTABLES -A INPUT -p icmp -j ACCEPT

# FTP requests - not secure enough
$IPTABLES -A INPUT -p tcp --dport 20 -j DROP
$IPTABLES -A INPUT -p tcp --dport 21 -j DROP

# TelNet requests - not secure enough
$IPTABLES -A INPUT -p tcp --dport 23 -j DROP

# HTTP requests
$IPTABLES -A INPUT -p tcp --dport 80 -j ACCEPT
$IPTABLES -A INPUT -p tcp --dport 443 -j ACCEPT

# SSH requests - allows ssh, scp and sftp requests
$IPTABLES -A INPUT -p tcp --dport 22 -s 192.168.1.0/255.255.255.0 -j ACCEPT

# If more than 5 packets are dropped in 3 seconds they will be ignored
# Helps to prevent a DOS attack crashing the computer
$IPTABLES -A INPUT -m limit --limit 3/second --limit-burst 5 -i ! lo -j LOG

# NFS
$IPTABLES -A INPUT -p tcp --dport nfs -s 192.168.1.0/255.255.255.0 -j ACCEPT
$IPTABLES -A INPUT -p udp --dport nfs -s 192.168.1.0/255.255.255.0 -j ACCEPT
$IPTABLES -A INPUT -p tcp --dport 111 -s 192.168.1.0/255.255.255.0 -j ACCEPT
$IPTABLES -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Keep track of log in attempts - /var/log/messages
$IPTABLES -A INPUT -j LOG --log-prefix "INPUT_DROP: "
$IPTABLES -A OUTPUT -j LOG --log-prefix "OUTPUT_DROP: "
chmod 700 custom_firewall.sh
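The invocation of the script is not shown in the text; presumably it is then run as root to load the rules into the kernel:

```shell
# Run the script (as root) so the new rules take effect
./custom_firewall.sh
```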
- To keep these rules intact over reboots,
/sbin/service iptables save
After these steps, my attempt to set up NFS (both the server and the client with AutoFS) worked just fine. I understand that neither my NFS options nor my firewall rules are comprehensive, and as such, I (as well as others) would very much appreciate any thoughts on improving them.