As you can imagine, this is slow as it downloads the files one by one.
Define "one by one": That's one connection, and there's nothing to re-establish after each file. (by the way, although the program's called scp, the protocol used is almost certainly not SCP, but SFTP – which is more modern.)
Note that SFTP already uses request queuing in all implementations I'm aware of, so there's no "dead time" between finishing one file's data/name/attribute transfer and starting the next.
Are there quicker alternatives that can download the files asynchronously
what would "asynchronous" mean in this context? Because you're waiting for a thing to finish in the background is no faster by any means than to wait for it blockingly,
or apply concurrency to the downloads to increase download speed?
Concurrency does not in itself increase download speed at all. On the contrary, it increases overhead and potentially file system fragmentation on the receiver side, and seek times / cache invalidation on the transmitter side.
Where it helps is when e.g. web servers limit the per-connection speed. Then you're circumventing an artificial limit. I don't think you're limited artificially per-connection here.
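(If you ever are in that artificially-limited situation, the usual shell pattern is xargs -P. The sketch below uses cp as a local stand-in for scp, with made-up paths and file names, since the shape of the pipeline is identical:)

```shell
#!/bin/sh
# Toy source directory, standing in for the remote host.
mkdir -p /tmp/par_src /tmp/par_dst
for f in a b c d; do echo "payload-$f" > "/tmp/par_src/$f"; done

# Fetch up to 4 files concurrently; for the real case, the cp would be
# something like:  scp 'user@host:folder/{}' /tmp/par_dst/
printf '%s\n' a b c d | xargs -P 4 -I{} cp "/tmp/par_src/{}" /tmp/par_dst/

ls /tmp/par_dst
```

Again: this only pays off when a per-connection cap is the bottleneck; otherwise it just multiplies connection overhead.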
Note that the assumption here is that these files are all small, a couple of network buffers in size. If that's not the case, SSH's internal buffer architecture, as offered to higher layers (SFTP), limits your throughput; in that case, just use rsync instead, or something like ssh user@host tar -cf - --zstd folder/to/be/sent | tar -xf - --zstd. (Both options are still sequential, which is fine, because sequentiality is not your problem.)
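(That tar pipeline is easy to try locally before involving ssh. The sketch below substitutes gzip's -z for --zstd so it runs on any stock tar, and uses made-up paths:)

```shell
#!/bin/sh
# Build a small source tree, standing in for folder/to/be/sent.
mkdir -p /tmp/tarpipe_src /tmp/tarpipe_dst
echo hello > /tmp/tarpipe_src/a.txt

# Same shape as the ssh pipeline: one tar streams an archive to stdout,
# the other unpacks it from stdin. Remotely it would be:
#   ssh user@host tar -czf - tarpipe_src | tar -xzf -
tar -C /tmp -czf - tarpipe_src | tar -C /tmp/tarpipe_dst -xzf -

cat /tmp/tarpipe_dst/tarpipe_src/a.txt
```

The pipe is interpreted by the local shell, so the first tar runs on the remote side and the second locally, exactly as in the one-liner above.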