/dev/sda <-- SCSI device
/dev/nvme0n1 <-- NVMe device
The NVMe driver has used blk-mq since kernel version 3.19. It does not allow turning blk-mq off, either with a module parameter or with a kernel boot option in GRUB:
$ modinfo -p nvme
use_threaded_interrupts: (int)
use_cmb_sqes:use controller's memory buffer for I/O SQes (bool)
max_host_mem_size_mb:Maximum Host Memory Buffer (HMB) size per controller (in MiB) (uint)
sgl_threshold:Use SGLs when average request segment size is larger or equal to this size. Use 0 to disable SGLs. (uint)
io_queue_depth:set io queue depth, should >= 2
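The current values of these parameters can be read back through sysfs. A minimal sketch, assuming the nvme module is loaded (or built in) and using io_queue_depth as the example parameter:

```shell
# Read the value of an nvme module parameter via sysfs
# (io_queue_depth chosen as an example; any parameter from
# "modinfo -p nvme" appears under the same directory).
p=/sys/module/nvme/parameters/io_queue_depth
if [ -r "$p" ]; then
    cat "$p"
else
    echo "nvme module not loaded (or parameter name differs)"
fi
```

To change one of these, you would pass e.g. `nvme.io_queue_depth=1024` on the kernel command line (the driver is usually built into the kernel) or `io_queue_depth=1024` to modprobe when it is built as a module.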
If you want to disable blk-mq, you could download the old NVMe driver from http://git.infradead.org/users/willy/linux-nvme.git and recompile the kernel module. However, even that would not let you use e.g. CFQ: the commit that converted the driver in 3.19 explains that the NVMe driver previously "[implemented] queue logic within itself", i.e. it did not use the single-queue block layer either. There are other block devices like this, for example Linux mdraid devices.
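On a current kernel you can see which blk-mq I/O schedulers are available for a device, and which one is active, through sysfs. A minimal sketch (the device name nvme0n1 is an assumption; substitute your own):

```shell
# List the I/O schedulers available for a block device.
# The active scheduler is shown in brackets, e.g. "[none] mq-deadline kyber bfq".
dev=nvme0n1
sched="/sys/block/$dev/queue/scheduler"
if [ -r "$sched" ]; then
    cat "$sched"
else
    echo "no such device: $dev"
fi
```

Writing one of the listed names back into the same file (as root) switches the scheduler for that device.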
The following sources were correct at the time they were written, and they include some useful notes and links. They are, however, out of date: they predate the introduction of blk-mq I/O schedulers, including BFQ (which was merged in Linux 4.12).
https://www.thomas-krenn.com/en/wiki/Linux_Multi-Queue_Block_IO_Queueing_Mechanism_(blk-mq)
Linux Storage Diagram from https://www.thomas-krenn.com.
https://www.thomas-krenn.com/de/wikiDE/images/d/da/Linux-io-stack-diagram_v1.0.png