Linux Server Administration Tips

Put a process in the background with the nohup command in Linux

For example, a command like this could run for days if there are many images:

tar zcf xxxx.images.tgz *

So it’s best to put it in the background from the start, with nohup:

nohup tar zcf xxxx.images.tgz * > nohup.log 2>&1 &
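For instance, here is a runnable sketch (all file and directory names are made up for the demo): start the tar under nohup, note its PID from $!, and check the log or the archive later:

```shell
# Create a throwaway directory to stand in for the real images.
mkdir -p /tmp/demo-images
echo hello > /tmp/demo-images/a.txt

# Run tar in the background, immune to hangups, with output going to a log.
nohup tar zcf /tmp/demo.images.tgz -C /tmp demo-images > /tmp/nohup.log 2>&1 &
echo "tar is running in the background as PID $!"
wait $!                              # demo only; in real use you would just log out
tar tzf /tmp/demo.images.tgz         # verify the archive was written
```

Because of nohup, the job keeps running even after you close the SSH session; check nohup.log later for any errors.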


Whitelist server IPs for SSH connection to fix the error: ssh: connect to host port: Connection refused

If you have multiple servers, you probably need rsync to transfer files among them via SSH. An error like this, however, will occur when CSF protects the servers against what it considers malicious SSH connection attempts:

ssh: connect to host port 9999: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(226) [Receiver=x.x.x]

The solution is very simple. Just whitelist each server’s IP on the other server:

csf -a x.x.x.x

Wherein x.x.x.x is the other server’s IP address. Then perform the same command with the first server’s IP on the other server.

That’s it. Now you can freely SSH between the two servers.
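Under the hood, csf -a adds the address to CSF’s allow file, /etc/csf/csf.allow, and applies the firewall rule immediately. The entry is essentially the bare IP (CSF may append a comment noting when and why it was added), so you can confirm the whitelist with a quick grep:

```
# /etc/csf/csf.allow -- one entry per line; x.x.x.x is the other server's IP
x.x.x.x

# confirm it is there:
#   grep x.x.x.x /etc/csf/csf.allow
```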


Clone any static site with a simple Linux command: WGET

Just use the command below (replace the example.com domains and URL with the target site’s) and WGET will start crawling the target site from the starting URL, downloading pages a certain number of levels deep, including all their assets such as images and CSS files.

wget -k -K -E -r -l 1 -p -N -F --convert-links -H -Dexample.com,static.example.com --restrict-file-names=windows http://example.com/

The -D option specifies the hosts that WGET is allowed to download resources from into local files. Links to resources on hosts not listed in the option are kept as-is.

The issue for now is that I don’t know how to make it download dynamic images referenced in data-src attributes, such as images that only load when scrolled into view.

Other than that, it’s a perfect command.


500 Internal Server Error for Incorrect Permissions after Installing suPHP and Uploading a PHP Script

Many’s the time that you upload some PHP script to your server, point the web browser to its address, and it gives a 500 Internal Server Error. If you have suPHP installed, this is very likely because the uploaded PHP script (files and directories) has the wrong permissions set.

With regard to Linux permissions, suPHP requires all directories to be 755 and all files 644 (no group- or world-writable bits) for any PHP script to run. If the directory or the PHP script has the wrong permissions set, suPHP gives out a 500 Internal Server Error until you have corrected them. In addition, the directory and the PHP script must be owned by the correct user and group or they won’t run either.

Fixing this is very easy; just run the following commands after you have uploaded the PHP script:

chown -R youruser /home/youruser/public_html/*
chgrp -R youruser /home/youruser/public_html/*
find /home/youruser/public_html/* -type f -exec chmod 644 {} \;
find /home/youruser/public_html/* -type d -exec chmod 755 {} \;

The 1st line sets everything (files and directories) under /home/youruser/public_html/ to be owned by user youruser.

The 2nd line sets everything (files and directories) under /home/youruser/public_html/ to be owned by group youruser.

The 3rd line sets all files under /home/youruser/public_html/ to be 644 in permissions.

The 4th line sets all directories under /home/youruser/public_html/ to be 755 in permissions.
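The four commands can be bundled into a small helper; the function name fixperms and its argument layout are my own invention, and chown’s “user:” form folds the chgrp step in by setting the owner’s login group:

```shell
# Apply suPHP-friendly ownership and permissions to a directory tree.
fixperms() {
    owner="$1"; dir="$2"
    chown -R "$owner": "$dir"                    # "user:" sets owner plus that user's login group
    find "$dir" -type f -exec chmod 644 {} \;    # files: rw-r--r--
    find "$dir" -type d -exec chmod 755 {} \;    # directories (and $dir itself): rwxr-xr-x
}

# e.g., as root: fixperms youruser /home/youruser/public_html
```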


Cannot use ctrl-c. How to stop tail -f, etc?

When I ssh into my Debian Squeeze server and start up tail -f to watch a log file or anything else which uses ctrl-c to exit/stop, ctrl-c does not work. It prints the ^C character on screen and just keeps right on going.

Is there a setting somewhere that can be tweaked, or maybe a different key combo that needs to be pressed?

Search results suggest that it’s a pretty common problem on Debian and that tinkering with the getty settings in /etc/inittab can fix it, but I’m hesitant to mess around in there too much. I may just take a snapshot and then see what happens.

Definitely an inittab thing, in case anyone else runs into this. This line used to be at the top of the getty stanza:


8:2345:respawn:/sbin/getty 38400 hvc0

I just moved it to the bottom and now ctrl-c works when connected via ssh:


 1:2345:respawn:/sbin/getty 38400 tty1
 2:23:respawn:/sbin/getty 38400 tty2
 3:23:respawn:/sbin/getty 38400 tty3
 4:23:respawn:/sbin/getty 38400 tty4
 5:23:respawn:/sbin/getty 38400 tty5
 6:23:respawn:/sbin/getty 38400 tty6
 8:2345:respawn:/sbin/getty 38400 hvc0

Linux: How to delete / remove hidden files with ‘rm’ command?

To delete all content in any directory, including all sub-directories and files, I’ve been using this:

rm -rf somedir/*

If it is to delete all content of the current directory:

rm -rf *

However, it turns out ‘rm -rf somedir/*’ doesn’t remove hidden files such as .htaccess (files whose names start with a dot are hidden in Linux), because the shell glob * doesn’t match them. To delete all the hidden files as well, I have to run a 2nd command:

rm -rf .??*
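Note that .??* itself misses dot files with single-character names such as .a, since the pattern requires at least two characters after the dot. This runnable sketch in a throwaway directory shows the glob behavior, plus a single find command that empties a directory completely, hidden files included:

```shell
# Demo directory with one visible and two hidden files.
mkdir -p /tmp/demo-dir
touch /tmp/demo-dir/visible.txt /tmp/demo-dir/.htaccess /tmp/demo-dir/.a

rm -rf /tmp/demo-dir/*                   # removes visible.txt only
ls -A /tmp/demo-dir                      # .htaccess and .a survive the glob

find /tmp/demo-dir -mindepth 1 -delete   # empties it completely, dotfiles included
ls -A /tmp/demo-dir                      # now empty
```

The -mindepth 1 keeps find from deleting the directory itself.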

Linux: How to ‘find’ and search ONLY text files?

The ‘find’ command in Linux searches through a directory and returns files that satisfy certain criteria. For instance, to find the files that contain the string ‘needle text’ in the ‘mydocs’ directory:

find mydocs -type f -exec grep -l "needle text" {} \;

The problem with this approach is that it searches through ALL files in the directory, including binary ones such as images, executables and zip packages. Sensibly, we would only want to search through text files for a specific string. If there are far too many binary files present, it’s a significant waste of CPU usage and time to get what you want, because it’s totally unnecessary to go through the binary files.

To achieve this, use this version of the above command:

find mydocs -type f -exec grep -l "needle text" {} \; -exec file {} \; | grep text | cut -d ':' -f1

I asked the question on Stack Overflow and peoro came up with this solution. It works great.

Basically, the appended part (-exec file {} \; | grep text | cut -d ':' -f1) checks each file’s MIME type information and keeps only the files that have ‘text’ in their MIME type description. According to the Linux ‘file’ command manual, we can be fairly sure that files with ‘text’ in their MIME type string are text files AND that all text files have ‘text’ in their MIME type description string.

Thus far the best way to do this:

find -type f -exec grep -Il . {} \;

Or for a particular needle text:

find -type f -exec grep -Il "needle text" {} \;

The -I option tells grep to ignore binary files, and the . pattern along with -l makes it match any file containing at least one character, so it goes very fast.
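For the record, modern GNU grep can also recurse on its own, so the find wrapper is optional. A self-contained sketch with made-up file names:

```shell
# Setup: one real text file and one file with NUL bytes (treated as binary).
mkdir -p /tmp/mydocs
printf 'some needle text here\n' > /tmp/mydocs/notes.txt
printf 'needle text\000\001\002' > /tmp/mydocs/blob.bin

# -r descends into the directory, -I skips binary files, -l lists file names.
grep -rIl "needle text" /tmp/mydocs    # lists notes.txt but not blob.bin
```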


Use stat command to display file system meta information of any file or directory under Linux

PHP has a stat() function that returns an array containing the meta information of a file, such as owner, size, time of last access, last modification or last change. It is essentially the PHP counterpart of the stat command under Linux, which returns and shows the file system meta information of any file or directory:

stat myfile.txt

Which returns:

  File: `myfile.txt'
  Size: 1707            Blocks: 8          IO Block: 4096   regular file
Device: 811h/2065d      Inode: 96909802    Links: 1
Access: (0644/-rw-r--r--)  Uid: (1354144/    voir)   Gid: (255747/pg940032)
Access: 2010-02-16 08:00:00.000000000 -0800
Modify: 2010-02-18 04:16:51.000000000 -0800
Change: 2010-02-18 04:16:51.000000000 -0800

To get the meta information of the current working directory:

stat .

Which returns:

  File: `.'
  Size: 4096            Blocks: 8          IO Block: 4096   directory
Device: 811h/2065d      Inode: 96904945    Links: 4
Access: (0755/drwxr-xr-x)  Uid: (1354144/    voir)   Gid: (255747/pg940032)
Access: 2009-08-31 17:07:16.000000000 -0700
Modify: 2009-12-20 05:18:57.000000000 -0800
Change: 2009-12-20 05:18:57.000000000 -0800
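When you only need a couple of these fields rather than the full report, GNU stat can print them directly with the -c format option. A small sketch with a made-up file:

```shell
# Create a known file so the output is predictable.
printf 'hello' > /tmp/statdemo.txt
chmod 640 /tmp/statdemo.txt

# %n file name, %a octal permissions, %U owner name, %s size in bytes
stat -c 'name=%n perms=%a owner=%U size=%s' /tmp/statdemo.txt
```

This is handy in scripts, where parsing the full stat report would be fragile.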

Linux: How to open and extract a RAR file and unrar the archive?

Funny I should use “zipped” for a RAR compressed package. Anyway, you can easily zip or unzip a zip file or tar up a package, but how does one do it with a RAR file? WinRAR is well distributed across Windows systems, but on Linux you first have to install the rar command package.

However, if your host has been around for quite some time, such as DreamHost, you won’t need to install it yourself as it comes with the system. Just fire up this command to unrar any RAR archive:

rar x myfiles.rar

Which will then extract all the data from myfiles.rar into the current working directory.

There are other commands you can rely on to achieve the same task though, depending on your host and the server distribution. For example, you may have unrar instead of rar. Other than these, you may also find RAR-related installation packages on Debian and Ubuntu by:

aptitude search unrar

It will search and show you related available packages:

p   unrar-free                           - Unarchiver for .rar files

Which is another utility to unrar RAR files on Linux. Just install it by aptitude install unrar-free and use it to unpack the compressed RAR.


scp, rsync: Transfer Files between Remote Servers via SSH

Chances are you have a bunch of different hosts housing your website files, for the sake of data safety (never put all eggs in a single basket) and possibly some SEO advantage. If that is the case, you will occasionally need to move some files from one host server to another. How does one do that?

Well, the straight answer is downloading the files from the source host and then uploading them to the destination one via FTP. It’s not much of a time-waster with a small number of files, especially ones small in size. However, if it’s an impressively large chunk of data, say 4GB, or thousands of files, this would be quite a daunting job that may very well take the better part of your day or even a few days.

The shortcut is to transfer those files directly from the original host to the other via SSH. That is, of course, if both hosts have SSH enabled.

scp Command

Log into the destination host via SSH and try the following command:

scp -r remoteuser@remote.host:/home/remoteuser/dir-to-be-transferred/. /home/localuser/backup

Wherein remote.host is the address of the source host and remoteuser is the SSH user (shell user) account that can read the remote directory to be transferred, namely /home/remoteuser/dir-to-be-transferred. The last argument is the local path that receives the incoming files / directory.

The dot at the end of dir-to-be-transferred makes sure that all hidden files such as .htaccess are copied as well. Without the current directory sign (dot), hidden files are NOT copied by default.

You can also transfer a specific file:

scp remoteuser@remote.host:/home/remoteuser/mybackup.tar.gz /home/localuser/backup

As a matter of fact, scp works exactly the same way as an ordinary cp command except that it can copy files to and from remote hosts. The “s” of “scp” stands for secure, because all the data transferred is encrypted over SSH.

It’s a great way to back up your valuable website data across multiple different hosts that are physically far away from each other. With the help of crontab jobs that do the regular backups automatically, this is even better than some of the commercial backup services.
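For instance, a crontab entry (crontab -e) along these lines would pull a remote directory every night at 3:30 am. The host name and paths are placeholders, and it assumes key-based (passwordless) SSH authentication is already set up, since cron cannot type a password:

```
# m  h  dom mon dow  command
30 3  *   *   *      scp -r remoteuser@remote.host:/home/remoteuser/dir-to-be-transferred/. /home/localuser/backup >> /home/localuser/backup.log 2>&1
```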

rsync Command

The rsync command is a preferable option to scp for synchronizing stuff across different hosts because it compares differences and works incrementally, thus saving bandwidth, especially with large backups. For example:

rsync -av --progress remoteuser@remote.host:/home/remoteuser/dir-to-be-transferred /home/localuser/backup

This would copy and transfer the directory dir-to-be-transferred with all its content into backup so that dir-to-be-transferred is a sub-directory of backup.

rsync -av --progress remoteuser@remote.host:/home/remoteuser/dir-to-be-transferred/. /home/localuser/backup

With an extra /. at the end of the source directory, only the content of the directory dir-to-be-transferred is copied into backup. Thus the files and sub-directories of dir-to-be-transferred are now immediate children of backup.

To make the transfer of a very large file resumable, use the -P switch, which is equivalent to --partial --progress:

rsync -avP remoteuser@remote.host:/home/remoteuser/large-file.ext /home/localuser/backup

So when the transfer is interrupted, run the same command again and rsync will automatically continue from the break point.

To specify the SSH port, such as 8023, just add:

 --rsh='ssh -p8023'

rsync automatically takes care of all hidden files, so there’s no need to add a dot at the end of the source directory.

To exclude a specific directory from being synchronized:

 --exclude 'not/being/transferred'
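Here is a local, runnable sketch of the flags working together on invented paths; over SSH you would simply add --rsh='ssh -p8023' and prefix the source with remoteuser@remote.host: as usual:

```shell
# Source tree with one file to keep and one directory to exclude.
mkdir -p /tmp/src/cache /tmp/dst
echo keep > /tmp/src/keep.txt
echo skip > /tmp/src/cache/skip.txt

# Archive mode, resumable with progress, cache/ excluded; the trailing /.
# copies the directory's contents rather than the directory itself.
rsync -avP --exclude 'cache/' /tmp/src/. /tmp/dst
ls /tmp/dst                     # keep.txt was copied; cache/ was skipped
```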
Put a long-running rsync command in the background

When you press ctrl + z, the running process is suspended and put aside:

[1]+ Stopped rsync -ar --partial /home/webup/ /mnt/backup/

Now type bg and the process you just stopped will resume running in the background:

[1]+ rsync -ar --partial /home/webup/ /mnt/backup/ &

Type jobs to see that the process is running:

[1]+ Running rsync -ar --partial /home/webup/ /mnt/backup/ &

If you want it back in the foreground, type fg 1, where 1 is the job number.
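For the record, ctrl + z just sends the SIGTSTP signal, and bg resumes the stopped job with SIGCONT. You can verify both with kill and ps; in this sketch sleep stands in for the long-running rsync, and the signals are sent with kill so it also works inside a script, where ctrl + z is not available:

```shell
sleep 30 &                 # stand-in for the long-running rsync
pid=$!
kill -TSTP "$pid"          # exactly what pressing ctrl + z sends (SIGTSTP)
sleep 1
ps -o stat= -p "$pid"      # prints a state starting with T: stopped
kill -CONT "$pid"          # what bg sends to resume the job (SIGCONT)
sleep 1
ps -o stat= -p "$pid"      # state starts with S again: back to running
kill "$pid"                # clean up the demo
```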