It's not completely clear to me what you are trying to do, but I'll make a few assumptions and attempt an answer. I'll take your comment mentioning "debugging code and/or running interactively" as the basis for what you are trying to do (you might want to add that to your question).
If you are willing to wait in the queue for the initial allocation of your job, but then be able to debug interactively once the job is started, then there are SLURM commands that will allow you to do so.
For example, if you need 3 nodes to debug your code, you could use the slurm command salloc -N 3, which (depending on your configuration) will allocate 3 nodes to you and possibly (again depending on the slurm config) give you a prompt on one of those nodes; you can then use srun to run your parallel code. You can keep running srun commands until you're done debugging (or until your time runs out).
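A session might look roughly like the following (the executable name and srun options are placeholders; exact behavior depends on your cluster's slurm configuration):

```shell
# Request an interactive allocation of 3 nodes; this blocks in the
# queue until the allocation is granted
salloc -N 3

# Inside the allocation, launch your parallel program across the nodes
srun ./my_parallel_app

# You can keep issuing srun commands against the same allocation,
# e.g. one task per node to confirm which nodes you have
srun --ntasks-per-node=1 hostname

# Release the allocation when you're done debugging
exit
```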
Now let's say you want three specific nodes. You can use the same salloc command, but add --nodelist=casade01,casade02,casade03 to it.
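That is (using the node names from your setup):

```shell
# Request exactly these three nodes instead of letting slurm choose
salloc -N 3 --nodelist=casade01,casade02,casade03
```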
If, however, you were already logged into those three nodes (e.g. via ssh, not through slurm), and you wanted to use those specific login sessions to run your commands, then be aware that you may interfere with other jobs that slurm is scheduling. Often, slurm configurations are set up so that you cannot log in directly to compute nodes without going through slurm, but in your setup that does not seem to be the case. The slurm srun command is likely (depending on your setup) using some type of MPI to run your parallel code, so you could use MPI commands directly to run it. If you are not familiar with MPI commands for executing code (e.g. mpiexec), then I would not take this route, especially if the salloc method works.
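For completeness, a direct MPI launch outside of slurm might look something like this sketch. The program name is a placeholder, and the exact flag spelling varies by MPI implementation (e.g. Open MPI's mpirun uses --host/--hostfile, while MPICH-style mpiexec uses -hosts/-f), so check your MPI's documentation:

```shell
# MPICH-style: run 3 ranks, one on each of the named hosts
mpiexec -n 3 -hosts casade01,casade02,casade03 ./my_parallel_app

# Open MPI equivalent
mpirun -np 3 --host casade01,casade02,casade03 ./my_parallel_app
```

Again, this bypasses slurm's accounting entirely, which is why I'd prefer the salloc approach if it works for you.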