To log in to any Linux machine, you need a username/password pair that is valid on that machine via the /etc/passwd and shadow files. This becomes a problem once you have more than one Linux machine on your network, as you have to maintain a separate user account on each system, which isn't very good for network transparency.
I’ve already talked about NFS, the Network File System, and how you can use auto-mount to transparently share directories across a network. NIS, or Network Information Service, is built on top of NFS to transparently provide files and advanced services that do not fit into a specialised service such as DNS, the Domain Name System. If you think about it, this makes sense, as everything under Linux is a file or a directory. Here, we’re going to set up our NIS server to hold one set of user credentials and have a client Linux system use those credentials to log in. We’ll then export that user’s home directory using NFS so that their home environment and account credentials can be used on any Linux client on your network.
Firstly you need to verify that you have the packages required to implement NIS. I’m using Fedora 15 for this exercise, but any RPM/Yum-based distribution will work. Debian/Ubuntu users will have to use apt-get to get the required packages.
yum install ypserv ypbind yp-tools
Before getting started with the configuration, you need to decide what your server’s NIS domain is. Each NIS server will only serve clients from the domain it is part of. This allows you to “cut up” your network into virtual sections and have an NIS server on each. Clustering, if you will.
One thing to make very clear here is that the NIS domain name is not the same thing as the DNS domain name. In fact, your DNS domain and NIS domain should be different for security reasons.
As the NIS server only listens to clients within the same NIS domain, you need some method to add clients to an NIS domain. On each machine that will authenticate with the NIS server (including the NIS server itself), you’ll need to install ypbind and rpcbind: -
yum install ypbind rpcbind
Once these are installed, on each client run: -

domainname test.nisdomain.net

where test.nisdomain.net is the NIS domain I’m going to use. You also need this setting applied at system boot. Fortunately, you can do that by simply adding the following line to the ‘/etc/sysconfig/network’ file: -

NISDOMAIN=test.nisdomain.net
NIS Server Configuration
Now that we’ve got all the machines on the same NIS domain, we need to start configuring the NIS server. We’ll assume this server has an IP address of 192.168.1.2. On the server as root, open the /etc/ypserv.conf file with: -

vi /etc/ypserv.conf

and look for the lines below. These will be commented out by default, so press ‘i’ and uncomment them so they read: -
* : passwd.byname : port : yes
* : passwd.byuid : port : yes
This restricts queries for the password maps to clients connecting from privileged ports; otherwise any user on any machine on the network could issue the ‘ypcat passwd’ command and read the entire passwd database exported by the NIS server, regardless of domain.
Next, you need to set which services to provide via NIS. Open the Makefile with: -

vi /var/yp/Makefile

and look for the lines below.
# If you don't want some of these maps built, feel free to comment
# them out from this list.
all: passwd group
# all: passwd group hosts rpc services netid protocols mail
# netgrp shadow publickey networks ethers bootparams printcap \
# amd.home auto.master auto.home auto.local passwd.adjunct \
# timezone locale netmasks
This defines all the files to be made available in the NIS domain. As NIS can be used for more than the user authentication we’re doing here, you can see that you could even make available, say, the /etc/hosts file, where each machine looks for hosts it knows about before handing off to the domain name servers further up the Internet hierarchy. So you could use NIS to rig up a little home DNS system, for example :-) This might even be useful, as for small networks setting up BIND for DNS may not be worth the hassle. We’re just going for centralised user authentication here, so make sure the passwd and group entries are un-commented; the default is fine. Just make sure /etc/passwd and /etc/group are able to be exported.
Save this file and, verifying you’re in the same directory as the Makefile, run: -

make
This will create the NIS database maps. You will need to run this every time you change any source file listed in the /var/yp/Makefile as above. For example, when you add/remove a user from the /etc/passwd file on the NIS server. Changes to passwords do not require a make as they are held in the /etc/shadow file which should NOT EVER be exported on an NIS server :-)
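The “run make again after editing a source file” rule works because make simply compares timestamps: a map is rebuilt when its source file is newer than the generated map. The same check in plain shell, using two scratch files standing in for /etc/passwd and its generated map: -

```shell
# Scratch files standing in for /etc/passwd and the generated NIS map.
touch -t 202001010000 passwd.map   # pretend the map was built some time ago
touch passwd.src                   # the "source" file, modified just now

# make's rebuild decision boils down to this timestamp comparison:
if [ passwd.src -nt passwd.map ]; then
    echo "source newer - run make in /var/yp"
fi
```
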
If the make fails, make a note of which files gave the error and then edit /var/yp/Makefile, search for the “all” entry and comment out the files that gave the error. Run the make again.
The next file you need to edit is ‘/var/yp/securenets’. This file defines the NIS clients that are allowed to access your NIS server. It only takes IP addresses, not hostnames, and they are specified as netmask/network address pairs. Localhost will need to be in this file; the entry should already be there, but if not then enter the following: -

255.0.0.0 127.0.0.0
255.255.255.0 192.168.1.0
This essentially lets all machines on the local private network (192.168.1.*) and the localhost access the NIS server. You can also allow specific hosts access to the NIS server by giving the keyword “host” followed by an IP address.
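To see what those netmask/network pairs actually test, here is a small sketch (the helper function is my own illustration, not part of ypserv) that applies the same “client address AND mask equals network” check: -

```shell
#!/bin/sh
# in_securenet MASK NETWORK CLIENT_IP
# Returns success if CLIENT_IP falls inside the MASK/NETWORK pair -
# the same test ypserv applies to entries in /var/yp/securenets.
in_securenet() {
    mask=$1; net=$2; ip=$3
    # Convert a dotted quad to a single integer.
    to_int() {
        IFS=. read -r a b c d <<EOF
$1
EOF
        echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
    }
    m=$(to_int "$mask"); n=$(to_int "$net"); i=$(to_int "$ip")
    # The client is allowed when (ip AND mask) == network.
    [ $(( i & m )) -eq "$n" ]
}

in_securenet 255.255.255.0 192.168.1.0 192.168.1.42 && echo allowed
in_securenet 255.255.255.0 192.168.1.0 10.0.0.5 || echo denied
```
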
Now you need to start the server. The portmap daemon must be started before the ypserv daemon, so run the following in the order shown: -

service portmap start
service ypserv start
Verify the server started correctly with: -

rpcinfo -p localhost | grep ypserv

It should output something similar to: -
100004 2 udp 1003 ypserv
100004 1 udp 1003 ypserv
100004 2 tcp 1006 ypserv
100004 1 tcp 1006 ypserv
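If you want to script this check, the ypserv registrations can be counted from rpcinfo’s usual five-column output (program, version, protocol, port, service); the helper name below is my own: -

```shell
# check_ypserv: reads `rpcinfo -p` output on stdin and succeeds only if
# at least one ypserv registration is present.
check_ypserv() {
    awk '$5 == "ypserv" { n++ } END { exit (n > 0 ? 0 : 1) }'
}

# In real use: rpcinfo -p localhost | check_ypserv
# Here we feed it a sample of the output shown above:
printf '100004 2 udp 1003 ypserv\n100004 2 tcp 1006 ypserv\n' | check_ypserv \
    && echo "ypserv registered"
```
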
If you want your users to be able to change their account passwords over the network, you now need to start the yppasswdd daemon with: -

service yppasswdd start
This needs to be started after the other daemons mentioned above. Skip it if you don’t want to give your users this functionality.
Users who want to change their password will have to use the ‘yppasswd’ command rather than the usual ‘passwd’ command. I guess you could add an alias to your ~/.bashrc file: -

alias passwd='yppasswd'

…assuming that is all ‘passwd’ is used for :-) That should work; otherwise, simply remind your users to use ‘yppasswd’.
NIS Client Configuration
On each client, edit the /etc/yp.conf file: -

vi /etc/yp.conf

The default file will have all entries commented out but will look a little like: -
# /etc/yp.conf - ypbind configuration file
# Valid entries are
# domain NISDOMAIN server HOSTNAME
# Use server HOSTNAME for the domain NISDOMAIN.
# domain NISDOMAIN broadcast
# Use broadcast on the local net for domain NISDOMAIN
# domain NISDOMAIN slp
# Query local SLP server for ypserver supporting NISDOMAIN
# ypserver HOSTNAME
# Use server HOSTNAME for the local domain. The
# IP-address of server must be listed in /etc/hosts.
# If no server for the default domain is specified or
# none of them is reachable, try a broadcast call to
# find a server.
This file will contain the list of NIS servers for this NIS domain. You can specify as many NIS servers here as you like, but for a small network you only need the one we configured above, so add: -

domain test.nisdomain.net server 192.168.1.2

While NIS doesn’t use DNS, the server list can contain hostnames as well as IP addresses, as long as the hostname is listed in that machine’s /etc/hosts file. Here, though, we’ll use just IP addresses as my home network is small.
The next thing to do is to remove the users that will be authenticated against NIS from the local client system, if those users exist there. This is because when authenticating a login, the client system will first check for the user in its local ‘/etc/passwd’ and ‘/etc/shadow’ files and only then check NIS if the user doesn’t exist locally. You can see this in the ‘/etc/nsswitch.conf’ file, which lists, for each database, the sources the system should check for users, passwords and groups: -
passwd: files nisplus nis
shadow: files nisplus nis
group: files nisplus nis
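The left-to-right ordering can be demonstrated with two mock password files (standing in for /etc/passwd and the NIS passwd map); the lookup helper below is just my illustration of the “first source that answers wins” rule: -

```shell
# lookup_user USER: check each "source" in order, exactly as nsswitch.conf's
# "files nis" ordering does - the first file that knows the user wins.
lookup_user() {
    user=$1
    for src in local_passwd nis_passwd; do
        entry=$(grep "^$user:" "$src" 2>/dev/null) && { echo "$src: $entry"; return 0; }
    done
    return 1
}

# Mock data: root exists locally, user1 only in the "NIS map".
printf 'root:x:0:0:root:/root:/bin/bash\n' > local_passwd
printf 'user1:x:1001:1001::/home/user1:/bin/bash\n' > nis_passwd

lookup_user root    # found in the local file, so NIS is never consulted
lookup_user user1   # not local, falls through to the NIS "map"
```
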
You can see above that the local files are checked before NIS is. I would recommend making a backup of these files before you start deleting entries :-)
cp /etc/passwd /etc/passwd.BACKUP
cp /etc/shadow /etc/shadow.BACKUP
cp /etc/group /etc/group.BACKUP
Then use vi to delete each user entry that will be used by NIS from the /etc/passwd, /etc/shadow and /etc/group files.
The above system for the /etc/nsswitch.conf file is how Fedora does it. It’s simple and effective, and if you like, you can skip straight to starting the client daemon.
However, if you want more fine-grained control, or you’re using a different distribution, you can change each nsswitch.conf entry to use compat mode: -

passwd: compat
shadow: compat
group: compat
Save this, and at the bottom of the /etc/passwd file add the following line: -

+::::::

For the /etc/group file, add the following line to the bottom: -

+:::
The above ‘+’ entries mean that anyone in the NIS password database will be able to log in to this machine. If, however, you want to limit access to certain users or groups, edit your ‘/etc/passwd’ file and replace the ‘+::::::’ with something like: -

+user1::::::
+user2::::::/bin/false
+@nisgrp::::::

In this example, user1 has access via the NIS server; so does user2, but with /bin/false as their shell they cannot actually log in to this machine; and the netgroup nisgrp is also allowed. To disallow access to particular users/netgroups, prefix the name with a ‘-’ instead. Also make sure that the catch-all ‘+::::::’ entry is at the bottom of the file and your specific entries appear above it.
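As a toy illustration of how those ‘+’/‘-’ entries are evaluated (this is my own sketch of plain user rules only, not glibc’s full compat parser, and it ignores netgroups): -

```shell
# compat_allows USER RULE... : scan the +/- rules in order and succeed if
# USER would be admitted. '-name' denies, '+name' allows, bare '+' is the
# catch-all allow. First match wins; no match means no access.
compat_allows() {
    user=$1; shift
    for rule in "$@"; do
        case $rule in
            -"$user") return 1 ;;   # explicit deny
            +"$user"|+) return 0 ;; # explicit allow, or catch-all '+'
        esac
    done
    return 1
}

compat_allows user1 -baduser +user1 + && echo "user1 admitted"
compat_allows baduser -baduser + || echo "baduser denied"
```
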
Lastly, you need to start up the ypbind daemon: -

service ypbind start
You can test your NIS configuration now by attempting to log in on the client machine as a user that you know exists only in the NIS server’s /etc/passwd file. If the login is successful, then your NIS setup is working.
Server NFS Configuration
Now that the login procedure has been set up, you need to ensure that the users have access to their home directories. If users currently have home directories on a client machine, these will need to be copied to the NIS server with something like the ftp, scp or sftp commands, as its /home directory will store all the home directories of the users using NIS.
You can copy the user home directories on each client to the NIS server’s /home with something like: -

scp -r /home/user1 root@192.168.1.2:/home/

This will copy the contents of /home/user1 on the client to /home/user1 on the NIS server.
As in my NFS article, the aim of NFS is to transparently share directories present on a server to a client so it looks like part of the local file system. Which is exactly what we want because we want the client to think it’s NIS-served /home directory for user1 is actually on the local client file system. Perfect!
The NFS server takes its settings from the file ‘/etc/exports’. The format of the file is quite simple, so the entry to export everything under the home directory would be: -

/home 192.168.1.0/255.255.255.0(rw)
Unlike NIS’s securenets file, the exports file accepts hostnames as well as IP addresses; here we’ll stick to addresses. To export the same directory to several hosts, use the address/netmask pairing as given above. In brackets, options specific to the export may be given. Here I have assigned ‘rw’, which means that the directory is exported with read-write permissions, as by default NFS exports are read-only.
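For reference, an exports entry splits into three parts (path, client spec, options), which a hypothetical parser could pull apart like this: -

```shell
# parse_export "LINE": split an /etc/exports line of the form
#   /path client(opts)
# into its three parts using shell parameter expansion.
parse_export() {
    line=$1
    path=${line%% *}        # everything before the first space
    rest=${line#* }         # everything after it
    client=${rest%%(*}      # client spec: up to the opening bracket
    opts=${rest#*(}         # options: inside the brackets
    opts=${opts%)}
    echo "path=$path client=$client opts=$opts"
}

parse_export "/home 192.168.1.0/255.255.255.0(rw)"
```
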
NFS Client Configuration
On the client machine you can then mount the home directory on the server on every client by modifying the /etc/fstab file on each client like below: -
192.168.1.2:/home /home nfs defaults 0 0
So at boot, the system will mount the entire home directory off the NFS server and all the user home directories will be available. Make sure that there are no sub-directories under the local /home, because after the NFS mount they won’t be visible. You should either delete the contents of the local /home on each client, or perhaps back them up to something like /home_local :-)
Anyway, reboot the client and try to login again. You will now be authenticated off the NIS server and you
will find yourself in your home directory via NFS. Done!
A word of warning: -
NIS is great for internal small networks, but please be aware that an NIS/NFS solution for network user authentication is not terribly secure. This will probably not be a problem in homes or small offices, but is a major drawback in the enterprise or really large organisations. For secure enterprise information serving, you should look into using LDAP.
If you’re sharing files and directories to Windows-based clients on your network, it’s probably best to use Samba. If however, you’ve got other Linux machines you’d like to share directories on your server with, you can use NFS or Network File System.
First you have to tell Linux which directories you want to share by editing the “/etc/exports” file. We’ll share a directory called “/usr/local/inst”: -

/usr/local/inst 192.168.1.0/255.255.255.0(rw)

If you want this directory to be read-only, change the “rw” to “ro”. Once you’ve added this line to the exports file, you need to export the file system to make it available to the network: -

exportfs -a
Lastly, you need to start the NFS service daemons if they’re not already started.
service portmap start
service nfslock start
service nfs start
Next you’ll have to configure each NFS client machine to automatically mount the server’s networked file system in a local directory each time it is accessed. So, create a directory to mount the NFS share locally: -

mkdir /mnt/inst
Before configuring automounting, check that you can access the server’s network share manually – assuming the server is on IP address 192.168.1.2: -
mount 192.168.1.2:/usr/local/inst /mnt/inst/
If this doesn’t work, you may need to open various ports that NFS needs in your server’s firewall. Many supposed NFS problems are really problems with the firewall. In order for your NFS server to successfully serve NFS shares, its firewall must enable the following:
ICMP Type 3 packets.
Port 111, the Portmap daemon.
Port 2049, NFS.
The port(s) assigned to the mountd daemon.
You can assign static ports to the various NFS daemons in the “/etc/sysconfig/nfs” file, where you can add: -

LOCKD_TCPPORT=48620
LOCKD_UDPPORT=48620
MOUNTD_PORT=48621
STATD_PORT=48622
STATD_OUTGOING_PORT=48623
RQUOTAD_PORT=48624
RQUOTAD="no"

This assigns port 48620 (TCP/UDP) to lockd, 48621 (TCP/UDP) to mountd (used by the “mount” command), 48622 (TCP/UDP) to statd and 48624 (TCP/UDP) to the rquotad daemon, although here I’ve turned remote quotas off. The next step is to open these ports in your iptables firewall.
-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 48620 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 48621 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 48622 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 48623 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 48624 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 48620 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 48621 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 48622 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 48623 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 48624 -j ACCEPT
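Rather than typing fourteen near-identical rules, the block above could be generated from the port list: -

```shell
# gen_nfs_rules: print an iptables ACCEPT rule for each NFS-related port,
# first for TCP and then for UDP, matching the rule block above.
gen_nfs_rules() {
    for proto in tcp udp; do
        for port in 111 2049 48620 48621 48622 48623 48624; do
            echo "-A INPUT -m state --state NEW -m $proto -p $proto --dport $port -j ACCEPT"
        done
    done
}
gen_nfs_rules
```
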
If you now try to mount the server’s shares on your Linux client, it should work. Obviously, the network share needs to be readable/writable by the user attempting to connect. Check your file permissions if you get errors.
If you can now mount network shares manually, you can also automount these shares. This way, the server will serve the network share only when the client attempts to enter that directory.
Edit your “/etc/auto.master” file and add this line: -

/mnt /etc/auto.inst --timeout=300

This sets up an automount point at /mnt on the client, whose contents are described by the map file /etc/auto.inst; the timeout for connection inactivity is 300 seconds. The /etc/auto.master file can hold many of these mappings, and the map file can have any name, although we’ve named it after the directory to be mounted for clarity. The /etc/auto.inst file looks like this: -

inst -fstype=nfs,soft,intr,nosuid,tcp 192.168.1.2:/usr/local/inst

The first column is the key: the sub-directory under the automount point, so the share appears at /mnt/inst. The “fstype” is the file system type we’re attempting to mount. “soft” means that if the network resource is unavailable, return an error once the timeout expires rather than retrying forever (“hard” would keep attempting to connect). “intr” allows NFS operations to be interrupted, “nosuid” ignores set-uid bits on the share, and “tcp” is the transport protocol. Finally, you list the IP address of the server and the absolute path of the network share on it.
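An autofs map entry is just three whitespace-separated fields (key, mount options, location), which you can pull apart in shell; the example line below assumes a map key of ‘inst’: -

```shell
# Read the three fields of a sample autofs map entry: the key, the
# option string (leading '-' stripped), and the server:path location.
read -r key opts location <<'EOF'
inst -fstype=nfs,soft,intr,nosuid,tcp 192.168.1.2:/usr/local/inst
EOF

echo "key=$key"
echo "options=${opts#-}"
echo "server=${location%%:*} path=${location#*:}"
```
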
Last thing is to start the automount daemon.
service autofs restart
Now when you cd to that directory, the network share should be automatically mounted – and you don’t even have to be root to do it :-)