I found a perfect solution at dougie.io's Using Wget, Grep, and Sed to Download Public Domain Wallpapers From a Web Page. I'm trying to boil it down to the main steps (to keep it available here as well):
Download the HTML page using wget:
wget https://en.wikipedia.org/wiki/Thirty-six_Views_of_Mount_Fuji \
-O page.html
Extract the image URLs from the page using grep and sed, and write them to a new file
urls.txt:
grep -Eo "(https?:)?//[^/[:space:]]+/[^[:space:]]+\.(jpg|png|gif|svg)" page.html |
sed -E "s|^(https?:)?//|https://|" > urls.txt
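To see what this pipeline does, here is a sketch run against a minimal HTML snippet (the URLs are illustrative, not taken from the actual article):

```shell
# Tiny HTML sample with one protocol-relative and one absolute image URL
cat > sample.html <<'EOF'
<img src="//upload.example.org/a/b/Photo.jpg">
<a href="http://upload.example.org/c/d/Chart.png">chart</a>
EOF

# Same pipeline as above: extract image URLs, then normalize the scheme to https
grep -Eo "(https?:)?//[^/[:space:]]+/[^[:space:]]+\.(jpg|png|gif|svg)" sample.html |
sed -E "s|^(https?:)?//|https://|"
# → https://upload.example.org/a/b/Photo.jpg
# → https://upload.example.org/c/d/Chart.png
```

The sed step matters because Wikipedia pages mostly use protocol-relative URLs (starting with `//`), which wget cannot fetch without a scheme.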
Download the images using wget:
Thumb images
If you just need the thumbnail images, you can download them directly:
wget -i urls.txt -P downloads/
Full size images
To get the full size images, filter the URLs file (urls.txt) to a new file (urls-new.txt):
sed -E "s/\/thumb//g; s/\/[0-9]+px-.+\.(jpg|png)$//g" urls.txt |
sort -u > urls-new.txt
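As a sanity check, the filter rewrites a Wikipedia-style thumbnail URL into the URL of the original file by dropping the `/thumb` path segment and the trailing `NNNpx-` file name (the example URL below is illustrative, not from the article):

```shell
# A Wikipedia-style thumbnail URL: /thumb/ + a sized copy of the original file
echo "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0a/Example.jpg/220px-Example.jpg" |
sed -E "s/\/thumb//g; s/\/[0-9]+px-.+\.(jpg|png)$//g"
# → https://upload.wikimedia.org/wikipedia/commons/0/0a/Example.jpg
```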
Then restart the download:
wget -i urls-new.txt -P downloads_full_size/
Full credit goes to the linked article.