Clone any static site with a single Linux command: wget

Run the command below and wget will crawl the target site, downloading pages up to the given depth from the starting URL, along with all their assets such as images and CSS files.

wget -k -K -E -r -l 1 -p -N -H -D <host1>,<host2> --restrict-file-names=windows <start-url>

The -D option (used together with -H, which allows wget to span hosts) lists the hosts wget is allowed to download resources from into local files. Links to resources on hosts not in the list are left as-is, still pointing at their original URLs.

The one remaining issue is that I don’t know how to make it download lazy-loaded images referenced in data-src attributes, i.e. images that are only swapped in by JavaScript when they scroll into view, so wget never sees their real URLs.
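One possible workaround is a second pass: scan the mirrored HTML for data-src attributes, collect the URLs, and hand the list back to wget. The sketch below assumes the mirror lives in a site/ directory and uses a sample page as a stand-in; the directory name and markup are placeholders, not part of the original command.

```shell
# Stand-in mirror directory and sample lazy-loaded markup (placeholders).
mkdir -p site
cat > site/index.html <<'HTML'
<img src="placeholder.gif" data-src="https://example.com/img/photo-1.jpg">
<img src="placeholder.gif" data-src="https://example.com/img/photo-2.jpg">
HTML

# Extract every data-src URL from the mirrored pages, dedupe, save the list.
grep -rhoE 'data-src="[^"]+"' site/ \
  | sed -E 's/^data-src="([^"]*)"$/\1/' \
  | sort -u > lazy-urls.txt

cat lazy-urls.txt
# A second wget pass could then fetch the lazy assets, e.g.:
#   wget -N -P site/ -i lazy-urls.txt
```

This only catches URLs that are literally present in the HTML; images whose URLs are computed entirely in JavaScript would still be missed.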

Other than that, it’s a perfect command.
