The wget command is one of the most reliable tools for downloading files directly from the terminal in Ubuntu/Linux.
Whether you’re grabbing a single file, automating multiple downloads, or mirroring entire websites, wget can handle it all.
In this guide, we’ll walk through practical wget examples, from basic to advanced, plus a free wget cheat sheet you can print, so you can master downloading files like a pro.
Download Your Wget Command Cheat Sheet
I’ve created a handy wget cheat sheet with all the commands and examples for you to keep at your fingertips. Download it in your preferred format below:
Using wget command to download single files
The most basic use of wget is downloading a single file. Open your terminal and run:
wget https://example.com/file.zip
This downloads file.zip to your current working directory. It’s as simple as that: wget fetches the file and saves it locally without needing a web browser.
Using wget command to download multiple files
If you have a list of URLs you want to download, you can save them in a text file (e.g., urls.txt) and let wget handle the batch download:
wget -i urls.txt
Each URL in urls.txt will be processed and downloaded one by one. This is super helpful when downloading datasets, software packages, or media files in bulk.
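For reference, urls.txt is just a plain text file with one URL per line. A hypothetical example:
https://example.com/file1.zip
https://example.com/file2.zip
https://example.com/file3.zip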
Using wget command to get files under different names
You can instruct wget to save the downloaded file under a specific name using the -O option. For example:
wget -O ubuntu-guide.pdf https://example.com/guide.pdf
Even though the original file is named guide.pdf, it will be saved locally as ubuntu-guide.pdf.
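A related trick: passing -O - (often combined with -q for quiet mode) sends the download to standard output instead of a file, which is handy for piping. For example, with a hypothetical URL:
wget -qO- https://example.com/data.json | less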
Using wget command to save files in a specified directory
Want to organize downloads into a specific folder? Use the -P option to define the destination directory:
wget -P ~/Downloads/tutorials https://example.com/lesson1.zip
This saves lesson1.zip inside the tutorials folder within your Downloads directory.
Using wget command to limit download speed
To prevent wget from consuming all your bandwidth, you can throttle its speed using --limit-rate:
wget --limit-rate=500k https://example.com/video.mp4
This restricts the download speed to 500KB per second, allowing you to continue browsing without lag.
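The --limit-rate option also accepts an m suffix for megabytes, so for example:
wget --limit-rate=2m https://example.com/video.mp4
caps the download at 2 MB per second.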
Using wget command to set retry attempts
Network instability? No problem. You can tell wget to retry downloads a specified number of times if they fail:
wget --tries=10 https://example.com/bigfile.iso
This makes wget attempt the download up to 10 times before giving up.
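If the failures come from a flaky connection rather than the server, two related options from the wget manual can help: --waitretry sets the maximum pause (in seconds) between retries, and --retry-connrefused treats "connection refused" as a transient error worth retrying:
wget --tries=10 --waitretry=5 --retry-connrefused https://example.com/bigfile.iso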
Using wget command to download in the background
If you’re downloading large files, you might not want to keep the terminal occupied. Use the -b option to send wget to the background:
wget -b https://example.com/largefile.zip
You’ll see a message that the download is running in the background. You can check the progress in the wget-log file generated in the same directory.
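To watch the progress live, one common approach is to follow that log file with tail:
tail -f wget-log
If you start several background downloads in the same directory, wget typically numbers the logs (wget-log.1, wget-log.2, and so on).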
Using wget command to download via FTP
Need to download files from an FTP server that requires login credentials? Here’s how:
wget --ftp-user=username --ftp-password=password ftp://example.com/file.zip
Replace username and password with your actual login information.
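Alternatively, you can embed the credentials directly in the FTP URL, though the same caution applies: the password still ends up in your shell history:
wget ftp://username:password@example.com/file.zip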
Using wget command to continue interrupted downloads
If a download gets interrupted, you don’t have to start over. Resume the download with:
wget -c https://example.com/largefile.zip
This command checks the partially downloaded file and continues where it left off.
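On an unstable connection it can help to combine -c with retrying; per the wget manual, --tries=0 means retry indefinitely:
wget -c --tries=0 https://example.com/largefile.zip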
Using wget command to retrieve whole websites
You can mirror entire websites for offline access using wget’s recursive options:
wget --mirror --convert-links --page-requisites --no-parent https://example.com
- --mirror: Enables mirroring features (recursive download, timestamping).
- --convert-links: Adjusts links to work offline.
- --page-requisites: Downloads all assets (images, CSS, JS).
- --no-parent: Avoids going up to parent directories.
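For reference, the wget manual describes --mirror as shorthand for -r -N -l inf --no-remove-listing, so the command above is roughly equivalent to:
wget -r -N -l inf --no-remove-listing --convert-links --page-requisites --no-parent https://example.com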
Using wget command to locate broken links
You can use wget to scan a website and report broken links without actually downloading content:
wget --spider --recursive --no-verbose https://example.com
The --spider option tells wget to crawl and check links, acting like a web spider.
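To actually collect the broken links, one approach (the log format varies between wget versions, so treat this as a sketch) is to write the crawl output to a log file with -o and then search it for errors such as 404s:
wget --spider --recursive --no-verbose -o spider.log https://example.com
grep -i 'error 404' spider.log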
Using wget command to download numbered files
If you’re downloading files in a numbered sequence (e.g., image1.jpg to image100.jpg), wget can automate it:
wget https://example.com/images/image{1..100}.jpg
Your shell expands the braces into 100 separate URLs, which wget then downloads in sequence, so you don’t have to list each one individually.
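One caveat: the {1..100} sequence is expanded by your shell (bash or zsh), not by wget itself. If the filenames are zero-padded, bash can generate those too:
wget https://example.com/images/image{001..100}.jpg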
Using wget command with HTTP authentication
To download files from a password-protected webpage using HTTP authentication:
wget --user=username --password=password https://example.com/securefile.zip
Be cautious: this method exposes your password in your shell history and, via the process list, to other users on the system.
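A safer alternative is wget’s --ask-password option, which prompts for the password interactively so it never appears on the command line:
wget --user=username --ask-password https://example.com/securefile.zip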
Using wget command to change user-agent string
Some websites block automated downloaders like wget. You can spoof a browser user-agent string like this:
wget --user-agent="Mozilla/5.0" https://example.com/file.zip
This makes the server think the request is coming from a regular web browser rather than from wget.
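Some servers check for a full browser signature rather than just "Mozilla/5.0". If the generic string doesn’t work, you can pass a complete user-agent string copied from a real browser, for example (an illustrative Firefox string):
wget --user-agent="Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0" https://example.com/file.zip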
Using wget command with proxy settings
If your internet access is through a proxy, you can configure wget to use it:
export http_proxy=http://proxyserver:port
wget https://example.com/file.zip
This will route the download through the specified proxy.
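For HTTPS URLs, wget reads the https_proxy variable instead, so you may want to set both:
export https_proxy=http://proxyserver:port
wget https://example.com/file.zip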
Using wget command to download files with timestamping
Want to avoid downloading the same file multiple times if it hasn’t changed? Use:
wget -N https://example.com/file.zip
The -N option checks if the file on the server is newer before downloading.
Using wget command to mirror websites with depth limit
To prevent wget from diving too deep into a site’s structure, set a recursion depth:
wget --recursive --level=2 https://example.com
This restricts wget to only 2 levels deep from the starting URL.
Keep exploring Linux commands with these guides:
- Grep Command Guide with Cheat Sheet
- Tar Command Guide with Cheat Sheet
- Sudo Command Guide with Cheat Sheet
- Top Linux Networking Commands
- How to Use Sudo Command in Ubuntu
- Important Linux Commands You Should Know
- Advanced Linux Commands You Should Know
If you need any help, contact us or leave a comment below!