Posts Tagged ‘fedora’
Okay, this guide will take you through ripping the contents of a regular DVD movie to an xvid-encoded avi file step by step using Linux command line tools only. I’ll concentrate on Ubuntu and Fedora but it should be easy on other distros too. Before we start, this article hinges on the use of libdvdcss. Depending on your country and Linux distribution, this may be easy or difficult. Google your distro name and “libdvdcss” if you can’t get hold of it.
For Fedora, you’ll need to get the libdvdcss package from a different repo than the standard ones. Add it with: -
rpm -Uvh http://rpm.livna.org/livna-release.rpm
yum install libdvdcss
If that doesn’t work, you can download the RPM file directly here. You can install the RPM manually with: -
rpm -ivh libdvdcss2-1.2.9-1.i386.rpm
For Ubuntu, you can install the libdvdcss package with: -
sudo apt-get install libdvdread4 && sudo /usr/share/doc/libdvdread4/install-css.sh
You’ll need some other tools too. Under Ubuntu, install with: -
sudo apt-get install vobcopy ffmpeg libxvidcore4 lame
Under Fedora, this is slightly different with: -
yum install vobcopy ffmpeg xvidcore lame
Next, create a mount point directory for your DVD files. I’m using /mnt/dvdrom but you can choose anything, anywhere, although if you’re not root (as in Ubuntu), you might need to prefix “sudo” to this: -
mkdir /mnt/dvdrom
After you’ve done this, insert the DVD movie you want to burn and mount the DVD-ROM drive. For this example, I’m using the DVD of David Carradine’s 1976 classic “Carquake” also known as “Cannonball”. Man, I love me some of those old muscle cars. Anyway, I digress – but choose a DVD you want to backup :-)
Depending on your distribution, this may be one of the following devices under /dev. On modern distributions this is usually /dev/sr0, so the following will work.
mount /dev/sr0 /mnt/dvdrom
Again, under Ubuntu you might need to use “sudo” for this. Some older Linux distros have /dev/cdrom or a symlink from /dev/cdrom to /dev/sr0. It may even be auto-mounted under /media.
Anyway, one way or another you have your DVD mounted under /mnt/dvdrom. Now you need to use “vobcopy” to copy the VOB chapter files. Create a directory to hold these files and make sure it’s on a partition with at least 10GB free, as this process can be heavy on file space. I’m using “/home/myuser/tmp” for this.
Once you’re inside your chosen work directory, run the following command assuming your DVD movie is mounted on /mnt/dvdrom.
vobcopy -m -i /mnt/dvdrom -F 5 -v
This will mirror (-m) the contents of the DVD to your current directory from your chosen mount point. This process can take a long time. So go do something else for a while and come back. Also if you’re worried about your session dying or you want to push this process to the background, you can use: -
nohup vobcopy -m -i /mnt/dvdrom -F 5 -v > output.txt &
…which will do the same thing in the background but write its output to output.txt. Check the date/time on the last modified column with an “ls -lah” on that directory, or do a “ps -elf | grep vobcopy” to see if it’s still running. You can follow the progress with “tail -f output.txt” to see the percentage done updated in real time.
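As a side note, that background-and-monitor pattern is worth knowing in general. Here’s a minimal, runnable sketch of it using a stand-in command instead of the long-running vobcopy (the echo/sleep job is just an illustration):

```shell
# Run a stand-in job in the background with nohup, exactly as with vobcopy,
# then check on it and read its output. The job itself is only an example.
nohup sh -c 'echo "10% done"; sleep 1; echo "100% done"' > output.txt 2>&1 &
JOB=$!                          # remember the PID so we can check it later
ps -p "$JOB" > /dev/null && echo "still running"
wait "$JOB"                     # block until the background job finishes
tail -n 1 output.txt            # last line of the log: "100% done"
```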
Once this is done, change to the new subdirectory VIDEO_TS – this is the directory structure of the DVD. If you do an “ls -lah” on that directory, you’ll see the VOB files listed. Mine for the rip of “Carquake” looks like this: -
drwxr-xr-x 2 root root 4.0K Mar 25 02:11 .
drwxr-xr-x 3 root root 4.0K Mar 23 20:03 ..
-rw-r--r-- 1 root root 12K Mar 23 20:03 VIDEO_TS.BUP
-rw-r--r-- 1 root root 12K Mar 23 20:03 VIDEO_TS.IFO
-rw-r--r-- 1 root root 14K Mar 23 20:03 VIDEO_TS.VOB
-rw-r--r-- 1 root root 50K Mar 23 20:03 VTS_01_0.BUP
-rw-r--r-- 1 root root 50K Mar 23 20:03 VTS_01_0.IFO
-rw-r--r-- 1 root root 4.0K Mar 23 20:03 VTS_01_0.VOB
-rw-r--r-- 1 root root 283M Mar 24 03:29 VTS_01_1.VOB
-rw-r--r-- 1 root root 314M Mar 24 10:24 VTS_01_2.VOB
-rw-r--r-- 1 root root 252M Mar 24 17:45 VTS_01_3.VOB
-rw-r--r-- 1 root root 287M Mar 25 00:52 VTS_01_4.VOB
-rw-r--r-- 1 root root 64M Mar 25 02:11 VTS_01_5.VOB
Looking at this, we can surmise that the film itself is probably on the files VTS_01_1.VOB – VTS_01_5.VOB. So the next thing you’ll want to do is to convert these multiple VOB files into one. This is easily done with: -
ffmpeg -i concat:"VTS_01_1.VOB|VTS_01_2.VOB|VTS_01_3.VOB|VTS_01_4.VOB|VTS_01_5.VOB" -c copy concat.vob
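If your movie is split across a different number of VOBs, you can build that concat: string rather than typing it by hand. A small sketch (the file list here is hard-coded to match the example above):

```shell
# Join the VOB file names with '|' to form ffmpeg's concat: input string.
files="VTS_01_1.VOB VTS_01_2.VOB VTS_01_3.VOB VTS_01_4.VOB VTS_01_5.VOB"
input="concat:$(echo $files | tr ' ' '|')"
echo "$input"
# ffmpeg -i "$input" -c copy concat.vob   # then rip as above
```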
This will give you one file called concat.vob. Now we need to convert the VOB file to our Xvid AVI file. Do this with: -
ffmpeg -i concat.vob -f avi -vcodec libxvid -acodec libmp3lame -qscale 3 output.avi
This should give you a file called “output.avi” which has xvid video encoding and mp3 audio encoding. Consider your DVD ripped :-)
Update 28/03/2013: If you have a DVD with dual audio, for example English and Japanese, you can choose which of the audio tracks to extract into your xvid avi. Check out the -map switch to ffmpeg here. Using the -map option, you can pick one specific audio track to embed in the xvid avi file.
With the sad (and annoying) news that Google Reader is to be shut down on July 1st 2013, I had the dismal job of finding a replacement. Then I thought, why not just host my own RSS aggregator service? At least that way, I’m not at the whims of some corporation shutting down the services I use. So here, I’m going to show you how to set up tt-rss using Fedora Linux, Apache, MySQL and PHP. This will allow you to import your Google Reader data into tt-rss’s MySQL database and display it as a web application on your web server. There’s even an Android app to go with your installation to replace the Google Reader app. Hooray!
Okay, so I’m going to be using Apache, MySQL and PHP under Fedora Linux for this. If you already have Apache, MySQL and PHP installed, skip to the next section. If not, keep reading.
I’ll assume you’re on your Fedora Linux box as the root user for this. To install a basic Apache web server under Fedora with PHP and the libraries tt-rss will need, run the following: -
yum install httpd mysql mysql-server php php-mysql php-xml
Once all that is installed, start up your Apache installation with: -
service httpd start
In order to see web pages, you’ll also need to make sure your firewall is open on port 80. Fedora uses iptables as its firewall, so let’s open a port for the web: -
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
If you’re planning on using an SSL certificate for HTTPS secure connections, you’ll need both commands. If not, you’ll just need the first one that opens port 80. For this (or any) web server to be useful you’ll also need to open the port(s) on your router’s firewall too. Check that your server is listening on the right port with: -
netstat -l | grep http
…which should give you back one or both ports listening as below: -
tcp6 0 0 [::]:http [::]:* LISTEN
tcp6 0 0 [::]:https [::]:* LISTEN
Check that you can reach your server’s IP address (ifconfig will tell you this) by opening a browser and pointing it to “http://your_ip_address”. If all is well, you now have Apache installed and running. To make sure that PHP is installed and working with Apache, create a test PHP page with: -
vi /var/www/html/test.php
and put in the following: -
<?php phpinfo(); ?>
Browse to “http://your_ip_address/test.php” and check that you get the PHP diagnostics page up. If so, you’re ready to install tt-rss. But first, you need to export your Google Reader subscriptions while you still can! If you just want to set up tt-rss without importing from Google Reader, skip this step.
Log in to your Google Reader account here and go to Settings > Reader Settings > Import/Export. Click on “Download your data through Takeout” and then click on “Create Archive” which will allow you to download the zip file. Unzip it to a directory on your local hard drive.
Change to /var/www/html, as this is the default location for your web pages, and download the tt-rss 1.7.2 release tarball from the tt-rss site with wget.
If you don’t have wget installed under Fedora, install it with: -
yum install wget
Unpack the archive with: -
tar -xf 1.7.2.tar
You should now have a subdirectory called tt-rss or similar. Change to that directory now.
Previously, you installed MySQL, and now we need to set up a database under MySQL for tt-rss to store its information. If you’ve already got MySQL running with a root password set, skip ahead to the user setup.
Start MySQL with: -
service mysqld start
If you’ve not used MySQL before, it’ll prompt you to change the root user’s password. Do that now with: -
mysqladmin -u root password NEWPASSWORD
…where “NEWPASSWORD” is the password you want to use. Login to MySQL with: -
mysql -uroot -p
and enter your password. Next we need to set up a user for tt-rss for security’s sake. Run the following SQL: -
GRANT ALL PRIVILEGES ON ttrss.* TO 'rss'@'127.0.0.1' IDENTIFIED BY 'somepassword';
This will add a new MySQL user called “rss” which has complete access to all tables in the “ttrss” database. Their password is “somepassword”. Next, flush the privileges with: -
FLUSH PRIVILEGES;
Next, we need to create the ttrss database. Do this with: -
CREATE DATABASE ttrss;
and then “exit” out of MySQL.
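For reference, here are the same MySQL statements gathered into one block, reordered so the database exists before the grant (MySQL accepts either order, but this reads more naturally):

```sql
-- Run these from the mysql prompt as root.
CREATE DATABASE ttrss;
GRANT ALL PRIVILEGES ON ttrss.* TO 'rss'@'127.0.0.1' IDENTIFIED BY 'somepassword';
FLUSH PRIVILEGES;
```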
Try logging into your MySQL account with: -
mysql -urss -psomepassword
…noting the lack of spaces between the -u and -p for username and password respectively. If you can log in and see your ttrss database by running: -
SHOW DATABASES;
…you are ready to go. Exit out of MySQL again and change to the schema directory under Apache where you installed tt-rss, usually something like: -
cd /var/www/html/tt-rss/schema
Next, we need to import the database tables for the tt-rss web application using the provided MySQL schema SQL file.
mysql -urss -psomepassword ttrss < ttrss_schema_mysql.sql
This should import the tables to your previously created database using the MySQL user "rss" we just set up.
Once this is done, back up a directory level with: -
cd ..
Next we need to copy the default configuration file to something tt-rss can use, so run the following: -
cp config.php-dist config.php
Next, edit the file with vi and change the following directives to your settings: -
define('DB_TYPE', "mysql");
define('DB_HOST', "127.0.0.1");
define('DB_USER', "rss");
define('DB_NAME', "ttrss");
define('DB_PASS', "somepassword");
define('SELF_URL_PATH', "http://your_ip_address/tt-rss/");
Save this file and then try to browse to the URL defined in SELF_URL_PATH. You should be prompted to either unlock the permissions on certain directories or to login. The default username and password is “admin” with “password”. Remember to change them!
Before logging in, we need to choose a method of updating our feeds that we’ll import from the Google Takeout zip file we downloaded. There are two basic ways, via a daemon or via a cron job. The daemon way is recommended and can be done with: -
nohup php /var/www/html/tt-rss/update.php -daemon > /var/log/tt-rss.log &
Here, I’m running it in the background as the daemon doesn’t detach from the terminal. It’s recommended not to run this as the root user for security. Otherwise you can add a cron task for the current user with: -
crontab -e
and add the following entry: -
*/30 * * * * cd /var/www/html/tt-rss && /usr/bin/php /var/www/html/tt-rss/update.php -feeds >/dev/null 2>&1
Either way, your feeds will update every 30 minutes.
Okay, so now we need to import the Google Reader data. Login with the default username and password as outlined above. Once you’re logged in, click on Actions in the top right of the screen and then Preferences. Click on the Feeds tab and click the OPML button on the lower part of the screen. Choose the “subscriptions.xml” file you unzipped from the Google Takeout zip file – it’s under the “Reader” sub-directory.
Once you’ve imported your feeds from Google Reader you can now log in to your own server and read your RSS news like normal! Personally, I read a lot of Google Reader stuff on my Android phone, so let’s install the Android app next.
Download and install the Android app for tt-rss from the Play store. It’s a 7-day trial but the full version is only like £1.72. There are free versions available which you can find if you search the Play store for “ttrss” but I found them buggy. This works a charm.
Once installed, go to the Tiny Tiny RSS app and go to Settings and enter the following information: -
Login: your_tt-rss login - defaults to "admin"
Password: defaults to "password"
Tiny Tiny RSS URL: http://your_ip_address/tt-rss/
If you’re using HTTPS/SSL with a self-signed SSL certificate, you’ll also need to check the following.
Accept Any Certificate: Check
No Hostname Verification: Check
The app also has the ability to authenticate via Apache Basic Authentication, which, if you’re feeling paranoid, can be set up here. Back out of this menu, refresh, and your Google Reader feeds should start displaying.
Congrats, you now have the exact same Google Reader functionality from your own server :-) Take that Google!
I wonder what they’re going to replace Google Reader with? Something within Google Plus would be my bet.
Update 16/05/2013: You may get a 500 Internal Error under Apache when you first start. The two most common reasons for this are that your MySQL database is corrupted (so delete and reimport your feeds) or that you don’t have the mbstring PHP module. If you’re not using a version of PHP that you compiled yourself, you need to install the php-mbstring package. Under CentOS/Fedora, that is “yum install php-mbstring”.
After a couple of hours’ downtime, The Node is back running the ridiculously named “Beefy Miracle” release 17 of Fedora Linux. This wasn’t a smooth upgrade, even though I’ve been upgrading via “preupgrade-cli” since Fedora 7.
Anyway, this looks like a simple scripting bug, and the solution is easy if you’re physically sitting in front of your server. The whole reason to use “preupgrade-cli” rather than the graphical “preupgrade” tends to imply no X session, just an SSH terminal – which has always worked in the past.
This is basically a foot-note to this preupgrade article which details how to upgrade Fedora Linux remotely using only preupgrade’s command line. Except, before you reboot to complete the upgrade you’ll see this:-
sh: /sbin/grub: No such file or directory
/bin/echo: write error: Broken pipe
All finished. The upgrade will begin when you reboot.
Usually after this point (minus the error) you type “reboot” and then lose your connection, wait a couple of hours and then start attempting to login again. When you can, you’ll find you’re back on Fedora 16 again. Huh.
People on forums were trying all sorts of things like reinstalling the preupgrade package and GRUB2 and such…none of which is necessary. Red herring!
After the normal preupgrade has done its thing and before you reboot as requested, open up your grub configuration file:-
vi /boot/grub2/grub.cfg
You’ll see the list of “menuentry” stanzas for each kernel, the top one of which is the upgrade, which is clearly labelled. This is what needs to be booted to without a user necessarily being at the console.
Above this chunk you’ll see this line:-
set default="1"
This is the bit that tells Fedora which kernel to load from “/boot” if no physical keyboard changes happen. They’re loaded in order starting from the top down, one stanza at a time. You’ll notice that the upgrade stanza is top of the list.
In GRUB, the stanzas are counted from 0 not 1. So the default kernel to load will always be the next one down, which is usually your last Fedora 16 kernel.
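To see the numbering in action, here’s a small sketch that numbers menuentry stanzas the way GRUB does, starting from 0 (the sample grub.cfg below is made up; on the real system you’d grep your actual grub config):

```shell
# Write a sample grub.cfg and number its menuentry lines from 0, as GRUB does.
cat > grub.cfg.sample <<'EOF'
menuentry 'Upgrade to Fedora 17 (preupgrade)' { linux /boot/upgrade/vmlinuz }
menuentry 'Fedora (3.3.4-5.fc16.x86_64)' { linux /vmlinuz-3.3.4 }
EOF
grep '^menuentry' grub.cfg.sample | nl -v 0 -w 1 -s ': '
# entry 0 is the upgrade stanza, entry 1 the old Fedora 16 kernel
```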
Told ya it was a simple scripting error :-)
Change the set default line to read: -
set default="0"
Save the file and now reboot. Fixed, and your new Fedora release should begin installing. Give it a couple of hours and then retry an SSH session to that machine. Once in, check your release with: -
cat /etc/fedora-release
Phew! I was worried with all the technical chatter on Bugzilla regarding this release. I was also worried about the reported merge of the mess of directories – /bin, /sbin, /lib and so on – into /usr, thinking this upgrade rather than a fresh install might knacker my system. But I worried needlessly: the upgrade script creates sym-links for backwards compatibility, so everything works again. Sorry for the downtime.
Back in the good old days of Linux/UNIX, there was a file (usually) located at “/etc/rc.local” that would be run after all run-level specific processes had been started, executing whatever commands it contained. With Fedora Linux moving away from the more traditional “sysvinit” service manager, this rc.local file no longer exists for running your own bash scripts/commands at the end of the boot process.
Luckily, you can re-enable this functionality :-) Obviously, you’ll need to be root for this so if you’re not already, elevate yourself to the root user:-
su - root
Then you need to re-create the rc.local file and make it executable with:-
touch /etc/rc.local
chmod +x /etc/rc.local
Next, open the file for editing:-
vi /etc/rc.local
As this is basically a bash script itself, you need to include the bash interpreter as the first line of the file:-
#!/bin/bash
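Putting it together, a minimal rc.local might look like this (the marker file is only an illustration, not something Fedora needs):

```shell
#!/bin/bash
# Example /etc/rc.local - everything below runs once, at the end of boot.
# Writing a marker file is just a demonstration; put your own commands here.
echo "rc.local ran at $(date)" > /tmp/rc-local-ran
```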
You can now add whatever scripts or commands you like here – they will be run after everything else at your specific run-level has been started. In order for systemd to recognise and use this file, the systemd rc-local.service must be enabled.
systemctl enable rc-local.service
You can check the status of this service with:-
systemctl status rc-local.service
That’s all that’s required. Anything contained in this rc.local file will now be executed last-thing on reboot. Enjoy!
Linux is great at logging almost any event that happens in the operating system and pretty much all of this stuff is stored under /var/log/messages. This is fine until a machine is compromised. If a hacker somehow manages to sneak into your server, pretty much the first thing they’ll do is erase the logs to cover their tracks. So while local logging is fine for spotting failed intrusion attempts, there is always the possibility that your server is breached and the logs won’t tell you anything because the intruder has access to those logs by definition.
The solution to this is to use a remote centralised Linux server to log the system logs from other systems. This way, when a system is breached, the hacker has no way of hiding their access as the logs are actually stored in real-time on another, uncompromised system. Some home routers from the likes of NetGear also have the option to send system logs to a remote syslog server. This can be useful for keeping access events on your home network for analysis, as routers tend not to have much onboard storage for log files and their logs almost certainly don’t survive between reboots.
Either way, I hope I’ve made the case for setting up a syslog server on your network. I’ll assume you have a spare Linux machine lying around with the minimum of SSH and iptables working. Pretty much any distribution will have all this working by default :-)
Configuring the syslog server
All your system logs are stored under /var/log and the daemon responsible for this is rsyslogd. You can see if it’s running on your system with:-
ps -elf | grep rsyslog
You’ll probably get something back like:-
4 S root 24457 1 0 80 0 - 7742 poll_s 12:23 ? 00:00:00 /sbin/rsyslogd -n -c 5
On newer Fedora releases that use systemd rather than the older traditional sysvinit, you can also check that rsyslogd is running with:-
service rsyslog status
to which you’ll get back the following detailed information about the running process from systemd.
Redirecting to /bin/systemctl status rsyslog.service
rsyslog.service - System Logging Service
Loaded: loaded (/lib/systemd/system/rsyslog.service; enabled)
Active: active (running) since Tue, 20 Mar 2012 12:23:35 +0000; 6min ago
Process: 24454 ExecStartPre=/bin/systemctl stop systemd-kmsg-syslogd.service (code=exited, status=0/SUCCESS)
Main PID: 24457 (rsyslogd)
└ 24457 /sbin/rsyslogd -n -c 5
The interesting part (aside from the fact that it’s running!) is the last line, this one:-
/sbin/rsyslogd -n -c 5
This shows you what parameters are being passed to rsyslogd when it starts with the system. From this, we can tell by using “man rsyslogd” that -n means that the rsyslogd daemon will avoid auto-backgrounding, which makes sense as the process is managed by init or, in this case, systemd. The -c parameter sets the backwards-compatibility level, in this case version 5. Anyway, nothing very interesting there. You might notice from the man pages that there is a -r parameter which allows logging from remote sources. This is what we want, so we need to know how to set rsyslogd’s parameters upon start up. This is done via the config file, so edit it with:-
vi /etc/sysconfig/rsyslog
…under RedHat/Fedora and:-
vi /etc/default/rsyslog
…under Debian/Ubuntu.
The contents of the file are pretty sparse, consisting of just the line:-
SYSLOGD_OPTIONS="-c 5"
Here is where we set our remote switch, so change it so it reads:-
SYSLOGD_OPTIONS="-r -c 5"
Depending on your distribution, your SYSLOGD_OPTIONS parameters might look a little different – this doesn’t matter, the important part is that you’ve added the “-r” switch to the options. Save this file. Next you need to configure the daemon to listen on UDP port 514 for external syslog messages. So open the following file:-
vi /etc/rsyslog.conf
Look for the section near the top that looks like this:-
# Provides UDP syslog reception
#$ModLoad imudp
#$UDPServerRun 514
Uncomment these two lines by removing the hash character at the beginning. This simply says to listen on UDP port 514 for connections.
Save this file and restart rsyslogd with:-
service rsyslog restart
…under RedHat/Fedora and:-
/etc/init.d/rsyslog restart
…under Debian/Ubuntu.
Remember you’ll need to also open a port for incoming syslog information from remote clients. rsyslogd uses UDP port 514 for this, so make sure you’ve added the port to the iptables firewall with something like:-
iptables -A INPUT -m state --state NEW -m udp -p udp --dport 514 -j ACCEPT
If you want to lock down the firewall access a little more than that, you could use something like:-
iptables -A INPUT -p udp -i eth0 -s 192.168.1.2 -d 192.168.1.1 --dport 514 -j ACCEPT
This rule will ensure that the syslog server on IP address 192.168.1.1 will receive UDP packets containing the system log events from the remote client on IP address 192.168.1.2. Obviously replace these with the correct IP addresses for your network.
Once everything is set up, you can check that your syslog server is listening on the intended port with:-
netstat -an | grep 514
…which should give you this:-
udp 0 0 0.0.0.0:514 0.0.0.0:*
udp 0 0 :::514 :::*
If you’re using a NetGear router and want to log its information to your server, you’re now set up to point your router logs to your server. If you want to log system events from another Linux client on your network to the syslog server, these also need to be configured to log their stuff remotely rather than to /var/log/messages.
Configuring the syslog client
On each system that will log to the syslog server you’ve just set up, you need to configure it to log there rather than to its own /var/log directory. Create the file “/etc/syslog.conf” if it doesn’t already exist and add the following line at the top of the file:-
*.* @192.168.1.1
…where the 192.168.1.1 IP address is the IP address of your syslog server. Change as appropriate. Finally, restart the syslog daemon on the client with:-
service syslog restart
Remember again that you’ll need to add an outgoing rule to your firewall to allow the 514 port-destined syslog traffic to your syslog server. Here is the iptables rule: -
iptables -A OUTPUT -p udp -o eth0 -s 192.168.1.2 -d 192.168.1.1 --dport 514 -j ACCEPT
If the client is an Ubuntu box, you won’t be using iptables, but ufw. The article on how to use Ubuntu’s firewall can be found here.
The client’s logs (or your router’s, if you have that functionality) will now be written to the syslog server’s /var/log/messages system log.
If you have many Fedora (or any Red Hat based) systems, updating them all via yum separately means that you’re going to be downloading an awful lot of duplicated updates. Also, in some organisations, they will have a local yum repository for officially sanctioned updates so that it doesn’t break bespoke software that relies on specific versions of software packages. Here I’ll show you how to create your own local yum repository on your network via the Apache web server which you can then use for all the machines on your network – the point being that the updates are downloaded from a master server only once, thus saving you time and bandwidth.
First off, you’ll need to make sure you have the Apache web server installed and running on the server you want to use as a yum repository. If it is, you can skip to the next part. If not, or you’re not sure, install it with: -
yum install httpd
Set the httpd daemon to run with:-
service httpd start
This assumes that you now have your HTML document root at “/var/www/html”, which is the default. Also, if you have a firewall like iptables running you’ll need to open port 80 to allow HTTP traffic through.
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
You can verify the port is open with:-
iptables -L | grep http
which should show you something like:-
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http
The next thing to do is to create the directory structure required by yum to hold the packages. Create the following directories with:-
mkdir -p /var/www/html/yum/base /var/www/html/yum/updates
You’ll also need your Fedora installation CD for this next part. We’re going to copy all the installation packages over. You can either use an installation CD/DVD or you can simply mount the ISO file you no doubt downloaded from the Fedora website. If you want to mount the ISO file, you can use:-
mount -o loop -t iso9660 /path/to/your/iso/file /mnt/iso
This will mount your ISO image at the /mnt/iso mount point directory – although the iso sub-directory must exist first, obviously. If not, create it.
The packages you want will be under “Packages” on the CD/DVD or mount point, so copy them to the “/var/www/html/yum/base” sub-directory you created earlier. Assuming you’re using a mount point of “/mnt/iso”, you’d use:-
cp /mnt/iso/Packages/* /var/www/html/yum/base
Next you need to create the base repository headers. For this, you’ll need a small utility called “createrepo”. It’s usually installed by default under Fedora, but if you’re using another Red Hat based distribution or it isn’t installed, install it with:-
yum install createrepo
After that, run:-
cd /var/www/html/yum/base
createrepo .
This may take some time, depending on your system hardware specifications, but eventually you should end up with a sub-directory from there called “repodata”. If you do an “ls” on that directory, you’ll see the generated metadata files.
These are your repository header files. Okay, now you need to select an external mirror site for all the Fedora update packages. We’re going to use rsync (which I’ve covered before to sync backups across servers) to sync our local repository to an externally maintained Fedora repository. You can find a list of public mirror sites for your country and Fedora version here. Make sure the mirror site you choose supports rsync and not just HTTP or FTP else the following won’t work and you’ll have to use something like the “wget” command to copy over all the updated packages. Clearly “rsync” is better in this situation. For the sake of this post, I’m going to assume you’re in the United Kingdom and are using Fedora 15 on 64-bit systems. To sync your repository with the example one based on the above requirements, I’d use the command:-
rsync -av rsync://mirror.bytemark.co.uk/fedora/linux/updates/15/x86_64/ --exclude=debug/ /var/www/html/yum/updates
This example is using rsync to sync our local yum update directory with a Fedora mirror site maintained by Bytemark in the UK for 64-bit Fedora systems. Now would be a good time to create a daily cron job to keep your local repository in sync with the external one, so enter your crontab with:-
crontab -e
and add the following line (or whatever you want, assuming you’re familiar with using cron jobs):-
0 0 * * * rsync -av --delete rsync://mirror.bytemark.co.uk/fedora/linux/updates/15/x86_64/ --exclude=debug/ /var/www/html/yum/updates
Save this by typing “:wq”. This will update your repository every night at midnight. I’ve also added the --delete flag in order to save space so that source and destination match. Use “man rsync” if you want more information on this.
Lastly you’ll need to configure yum on the servers that you want to update via your new yum repository. While you can add this to “/etc/yum.conf”, it’s recommended that you add a separate .repo file under “/etc/yum.repos.d”. So create a new .repo file (the name doesn’t matter, as long as it ends in .repo) with:-
vi /etc/yum.repos.d/local.repo
and add the following information:-
name=Fedora Released Updates
name=Fedora - Base
If you want to use environment variables to get the Fedora version and architecture, see here. Replace the “192.168.1.3” IP address with the IP address of your repository server. That’s it – add this file to each Fedora server you want to update locally and you’re good to go :-)
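Pulling those fragments together, the complete .repo file might look something like the following. The [updates] and [base] section names, the baseurl paths and the enabled/gpgcheck settings are my assumptions for a typical local repository; only the name= lines and the 192.168.1.3 address come from above:

```ini
# /etc/yum.repos.d/local.repo - sketch of a local repository definition
[updates]
name=Fedora Released Updates
baseurl=http://192.168.1.3/yum/updates/
enabled=1
gpgcheck=0

[base]
name=Fedora - Base
baseurl=http://192.168.1.3/yum/base/
enabled=1
gpgcheck=0
```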
To login to any Linux machine, you need a username/password pair that is valid on that machine via the /etc/passwd and /etc/shadow files. This can become a problem if you have more than one Linux machine on your network, as you have to maintain a separate user account on each system, which isn’t very good for network transparency.
I’ve already talked about NFS, the Network File System, and how you can use auto-mount to transparently share directories across a network. NIS, or Network Information Service, is built on top of NFS to transparently provide files and advanced services that do not fit into a specialised service such as DNS, the Domain Name System. If you think about it, this makes sense as everything under Linux is a file or a directory. Here, we’re going to set up our NIS server to hold one set of user credentials and use a client Linux system to use those credentials to log in. We’ll then export that user’s home directory using NFS so that their home environment and account credentials can be used on any Linux client on your network.
Firstly you need to verify that you have the packages required to implement NIS. I’m using Fedora 15 for this exercise, but any RPM/Yum-based distribution will work. Debian/Ubuntu users will have to use apt-get to get the required packages.
yum install ypserv ypbind yp-tools
Before getting started with the configuration, you need to decide what your server’s NIS domain is. Each NIS server will only serve clients from the domain it is part of. This allows you to “cut up” your network into virtual sections and have an NIS server on each. Clustering, if you will.
One thing to make very clear here is that the NIS domain name is not the same thing as the DNS domain name. In fact, your DNS domain and NIS domain should be different for security reasons.
As the NIS server only listens to clients within the same NIS domain, you need some method to add clients to an NIS domain. On each machine that will authenticate with the NIS server (including the NIS server itself), you’ll need to install ypbind and rpcbind: -
yum install ypbind rpcbind
Once these are installed, on each client set the NIS domain with: -
nisdomainname test.nisdomain.net
You also need a method to have this command run at system boot. Fortunately, you can simply add the following line to the ‘/etc/sysconfig/network’ file: -
NISDOMAIN=test.nisdomain.net
…where test.nisdomain.net is the NIS domain I’m going to use.
NIS Server Configuration
Now that we’ve got all the machines on the same NIS domain, we need to start configuring the NIS server. We’ll assume this server has an IP address of 192.168.1.2. On the server as root, open the /etc/ypserv.conf file with: -
vi /etc/ypserv.conf
and look for the lines below. These will be commented out by default, so press ‘i’ and uncomment them so they read: -
* : passwd.byname : port : yes
* : passwd.byuid : port : yes
This makes sure that authentication on the server is required, else any machine on the network could issue the ‘ypcat passwd’ command and read the entire passwd database exported by the NIS server regardless of domain.
Next, you need to set which services to provide via NIS. Open the Makefile with: -
vi /var/yp/Makefile
and look for the lines below.
# If you don't want some of these maps built, feel free to comment
# them out from this list.
all: passwd group
# all: passwd group hosts rpc services netid protocols mail
# netgrp shadow publickey networks ethers bootparams printcap \
# amd.home auto.master auto.home auto.local passwd.adjunct \
# timezone locale netmasks
This defines all the files to be made available in the NIS domain. As NIS can be used for more than the user authentication we’re doing here, you can see that you could even make available, say, the /etc/hosts file, where each machine looks for hosts it knows about before handing off to the domain name servers further up the Internet hierarchy. So you could use NIS to rig up a little home DNS system, for example :-) This might even be useful, as for a small network, setting up BIND for DNS may not be worth the hassle. We’re just going for centralised user authentication here, so make sure that the passwd and group file names are uncommented – the default shown above is fine. Just make sure /etc/passwd and /etc/group are able to be exported.
Save this file and, after verifying you’re in the same directory as the Makefile, run: -
make
This will create the NIS database maps. You will need to run this every time you change any source file listed in the /var/yp/Makefile as above. For example, when you add/remove a user from the /etc/passwd file on the NIS server. Changes to passwords do not require a make as they are held in the /etc/shadow file which should NOT EVER be exported on an NIS server :-)
If the make fails, make a note of which files gave the error and then edit /var/yp/Makefile, search for the “all” entry and comment out the files that gave the error. Run the make again.
The next file you need to edit is ‘/var/yp/securenets’. This file defines the NIS clients that are allowed to access your NIS server. It only takes IP addresses, not hostnames, specified as netmask/network address pairs. ‘localhost’ will need to be in this file. The entry should already be there, but if not, enter the following: -
255.0.0.0 127.0.0.0
255.255.255.0 192.168.1.0
This essentially lets all machines on the local private network (192.168.1.*) and the localhost access the NIS server. You can also allow specific hosts access to the NIS server by giving the keyword “host” followed by an IP address.
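For the curious, the netmask/network matching that ypserv applies to each securenets entry can be sketched in a few lines of shell. The function names and the addresses below are just for illustration: -

```shell
#!/bin/sh
# Turn a dotted-quad IP address into a single integer
ip_to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Succeed if the client IP ($1) lies in network $3 under netmask $2 --
# the same test ypserv applies to each /var/yp/securenets entry.
in_securenet() {
    client=$(ip_to_int "$1")
    mask=$(ip_to_int "$2")
    net=$(ip_to_int "$3")
    [ $(( client & mask )) -eq $(( net & mask )) ]
}

in_securenet 192.168.1.50 255.255.255.0 192.168.1.0 && echo allowed  # on the LAN
in_securenet 10.0.0.5     255.255.255.0 192.168.1.0 || echo denied   # elsewhere
```

So with the two entries above, any 192.168.1.* machine and localhost get in, and everything else is refused.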
Now you need to start the server. The rpcbind daemon (which provides the portmap service) must be started before the ypserv daemon, so run the following in the order shown: -
service rpcbind start
service ypserv start
Verify the server started correctly with: -
rpcinfo -p localhost | grep ypserv
It should output similar to: -
100004 2 udp 1003 ypserv
100004 1 udp 1003 ypserv
100004 2 tcp 1006 ypserv
100004 1 tcp 1006 ypserv
If you want your users to be able to change their account passwords over the network, you now need to start the yppasswdd daemon with: -
service yppasswdd start
This needs to be started after the other daemons mentioned above; skip it if you don’t want to allow your users this functionality.
Users who want to change their password will have to use the ‘yppasswd’ command rather than the usual ‘passwd’ command. I guess you could add an alias to your ~/.bashrc file: -
alias passwd='yppasswd'
…assuming that is all passwd is used for :-) But that should work – or simply remind your users to use ‘yppasswd’.
NIS Client Configuration
On each client, edit the /etc/yp.conf file: -
vi /etc/yp.conf
The default file will have all entries commented out but will look a little like: -
# /etc/yp.conf - ypbind configuration file
# Valid entries are
# domain NISDOMAIN server HOSTNAME
# Use server HOSTNAME for the domain NISDOMAIN.
# domain NISDOMAIN broadcast
# Use broadcast on the local net for domain NISDOMAIN
# domain NISDOMAIN slp
# Query local SLP server for ypserver supporting NISDOMAIN
# ypserver HOSTNAME
# Use server HOSTNAME for the local domain. The
# IP-address of server must be listed in /etc/hosts.
# If no server for the default domain is specified or
# none of them is reachable, try a broadcast call to
# find a server.
Below those comments, add an entry pointing at your NIS server: -
domain test.nisdomain.net server 192.168.1.2
The file contains the list of NIS servers for this NIS domain. You can specify as many NIS servers here as you like, but for a small network, you only need the one we configured above.
While NIS doesn’t use DNS, the server list can contain host names as well as IP addresses as long as the host name is listed in that machine’s /etc/hosts file. However here we’ll use just IP addresses as my home network is small.
The next thing to do is to remove the users that will be authenticated against NIS from the local client system, if those users exist there. This is because, when authenticating a login, the client system will first check for the user in its local ‘/etc/passwd’ and ‘/etc/shadow’ files and only check NIS if the user doesn’t exist locally. You can see this in the ‘/etc/nsswitch.conf’ file, which has a line for each database listing the sources the system should consult, in order, for users, passwords and groups: -
passwd: files nisplus nis
shadow: files nisplus nis
group: files nisplus nis
You can see above that the local files are checked before NIS is. I would recommend making a backup of these files before you start deleting entries :-)
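The lookup order those nsswitch.conf lines describe can be modelled with a toy shell sketch. The file names and users here are made up, and /tmp/nis.passwd just stands in for what ‘ypcat passwd’ would return: -

```shell
#!/bin/sh
# Toy model of "passwd: files nis": consult each source left to
# right and stop at the first one that knows the user.
lookup() {
    name=$1
    for source in files nis; do
        case $source in
            files) db=/tmp/local.passwd ;;   # stands in for /etc/passwd
            nis)   db=/tmp/nis.passwd ;;     # stands in for 'ypcat passwd'
        esac
        if grep -q "^${name}:" "$db"; then
            echo "$name found via $source"
            return 0
        fi
    done
    echo "$name not found"
    return 1
}

printf 'root:x:0:0::/root:/bin/bash\n'              > /tmp/local.passwd
printf 'user1:x:1000:1000::/home/user1:/bin/bash\n' > /tmp/nis.passwd

lookup root    # the local files win
lookup user1   # falls through to NIS
```

This is exactly why a user left in the client’s local /etc/passwd shadows their NIS entry – ‘files’ wins before ‘nis’ is ever consulted.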
cp /etc/passwd /etc/passwd.BACKUP
cp /etc/shadow /etc/shadow.BACKUP
cp /etc/group /etc/group.BACKUP
Then use vi to delete each user entry that will be used by NIS from the /etc/passwd, /etc/shadow and /etc/group files.
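As a sketch of that pruning step, here’s the same edit done non-interactively on a sample passwd-format file. On a real client you’d run it against your backed-up /etc/passwd, /etc/shadow and /etc/group; the sample user below is made up: -

```shell
#!/bin/sh
# Build a small passwd-format sample file to work on
cat > /tmp/passwd.sample <<'EOF'
root:x:0:0:root:/root:/bin/bash
user1:x:1000:1000:NIS user:/home/user1:/bin/bash
EOF

# Keep every line whose first field is NOT 'user1' --
# the same effect as deleting the line by hand in vi
grep -v '^user1:' /tmp/passwd.sample > /tmp/passwd.pruned
cat /tmp/passwd.pruned
```

Only the root line survives; user1 will now be looked up via NIS instead.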
The /etc/nsswitch.conf arrangement above is how Fedora does it. It’s simple and effective, and if you like, you can skip ahead to starting the client daemon.
However, if you want more fine-grained control, or you’re using a different distribution, you can change the nsswitch.conf entries to use ‘compat’ mode, which enables the ‘+’ and ‘-’ syntax: -
passwd: compat
shadow: compat
group: compat
Save this and, at the bottom of the /etc/passwd file, add the following line: -
+::::::
For the /etc/group file, add the following line to the bottom: -
+:::
The ‘+::::::’ entry means that anyone in the NIS password database will be able to log in to this machine. If, however, you want to limit access to certain users or groups, edit your ‘/etc/passwd’ file and replace the ‘+::::::’ line with something like: -
+user1::::::
+user2::::::/sbin/nologin
+@nisgrp::::::
In this example, user1 has access via the NIS server; so does user2, but their shell is overridden so they cannot actually log in to this machine; and the netgroup nisgrp is also allowed. To disallow particular users or netgroups, prefix the name with a ‘-’ instead. If you combine ‘-’ exclusions with a catch-all ‘+::::::’, make sure the catch-all is at the bottom of the file and your specific entries appear above it.
Lastly, you need to start up the ypbind daemon: -
service ypbind start
You can now test your NIS configuration by attempting to log in on the client machine as a user that you know is only in the NIS server’s /etc/passwd file. If the login is successful, then your NIS setup is working.
NFS Server Configuration
Now that the login procedure has been set up, you need to ensure that the users have access to their home directories. If users currently have home directories on a client machine, these will need to be copied, with something like the ftp, scp or sftp commands, to the NIS server, as its /home directory will store all the home directories of the users using NIS.
You can copy a user’s home directory from a client to the NIS server’s /home with something like: -
scp -r /home/user1 root@192.168.1.2:/home/
This will copy the contents of /home/user1 on the client to /home/user1 on the NIS server.
As in my NFS article, the aim of NFS is to transparently share directories on a server with a client so they look like part of the local file system. Which is exactly what we want, because we want the client to think that user1’s NFS-served /home directory is actually on the local client file system. Perfect!
The NFS server takes its settings from the file ‘/etc/exports’. The format of the file is quite simple, so the entry to export everything under the home directory would be: -
/home 192.168.1.0/255.255.255.0(rw)
Like NIS, NFS’s exports file accepts only IP addresses. To export the same directory to several hosts, use an address/netmask pairing as shown. In brackets, other options specific to the mount may be given. Here I have assigned ‘rw’, which means that the directory is exported with read-write permissions, as by default NFS exports are read-only.
NFS Client Configuration
You can then mount the server’s home directory on every client by adding a line like the one below to each client’s /etc/fstab file: -
192.168.1.2:/home /home nfs defaults 0 0
So at boot, the system will mount the entire home directory off the NFS server and all the user home directories will be available. Make sure there is nothing you still need under the local /home, because after the NFS mount it won’t be visible. You should either delete the contents of /home on each client or back them up to something like /home_local :-)
Anyway, reboot the client and try to log in again. You will now be authenticated off the NIS server and will find yourself in your home directory via NFS. Done!
A word of warning: -
NIS is great for internal small networks, but please be aware that an NIS/NFS solution for network user authentication is not terribly secure. This will probably not be a problem in homes or small offices, but is a major drawback in the enterprise or really large organisations. For secure enterprise information serving, you should look into using LDAP.