Using wget alone won't work because the links on the page are handled by JavaScript rather than being plain URLs that wget can follow. You will have to parse the page with xmllint and then rewrite the extracted URLs into a form wget can fetch.
Start by extracting the JavaScript-handled URLs, cleaning them up, and saving the result to urls.txt:
wget -O - 'https://bcs.wiley.com/he-bcs/Books?action=resource&bcsId=10685&itemId=1119299160&resourceId=42647' | \
xmllint --html --xpath "//li[@class='resourceColumn']//a/@href" - 2>/dev/null | \
sed -e 's# href.*Books#https://bcs.wiley.com/he-bcs/Books#' -e 's/amp;//g' -e 's/&newwindow.*$//' > urls.txt
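Step by step, the sed pass does three things:

s# href.*Books#https://bcs.wiley.com/he-bcs/Books#   # drop the JavaScript wrapper, graft the real base URL onto the part starting at "Books"
s/amp;//g                                            # turn the &amp; entities back into plain &
s/&newwindow.*$//                                    # strip the trailing pop-up window parameters from the query string

You can sanity-check the result with head urls.txt; each line should now be a plain https://bcs.wiley.com/he-bcs/Books?action=... URL.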
Now fetch each URL in urls.txt and download the PDF files those pages link to:
wget -O - -i urls.txt | grep -o 'https.*pdf' | wget -i -
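If one bad page breaks the combined stream, or you want to see which page each PDF came from, the same logic works one URL at a time. A minimal sketch, equivalent to the one-liner above (-q just silences wget's progress output):

# Fetch each landing page separately and pull its PDF link out of it.
while read -r url; do
    wget -q -O - "$url" | grep -o 'https.*pdf' | wget -i -
done < urls.txt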
Alternatively, with curl:
curl -s 'https://bcs.wiley.com/he-bcs/Books?action=resource&bcsId=10685&itemId=1119299160&resourceId=42647' | \
xmllint --html --xpath "//li[@class='resourceColumn']//a/@href" - 2>/dev/null | \
sed -e 's# href.*Books#https://bcs.wiley.com/he-bcs/Books#' -e 's/amp;//g' -e 's/&newwindow.*$//' > urls.txt
curl -s $(cat urls.txt) | grep -o 'https.*pdf' | xargs -n 1 curl -O
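Two caveats on that last line: $(cat urls.txt) relies on shell word splitting to hand every URL to a single curl call (fine for a short list), and a plain curl -O will happily save an HTML error page under a .pdf name. A slightly more defensive sketch, same logic otherwise (-f makes curl fail on HTTP errors instead of saving the error body, --retry 3 retries transient network failures):

curl -sf $(cat urls.txt) | grep -o 'https.*pdf' | xargs -n 1 curl -f -O --retry 3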