It might be one for ServerFault.
But you're quite correct: Fibre Channel doesn't have the same protection mechanisms as TCP. It's more like UDP in that regard (although that's a weak analogy), and for many of the same reasons: for some applications, TCP is a bad fit precisely because of those reliability mechanisms. Your stream can 'stall' waiting for a retransmit, and that hurts a near-real-time application more than a dropped packet does. Storage IO starts to 'hurt' once latency exceeds about 20ms, and that's not enough time for TCP to do its thing.
What happens with FCP is that the SCSI driver on the endpoint handles the reliability, and as part of that it can also do load balancing. Commonly you won't single-attach a fibre; instead you'll have dual HBAs with dual independent paths to the storage.
So your driver routes packets however it likes (some are smarter than others: most do multipathing these days, and some do quite clever adaptive multipathing) and keeps track of which IOs have been acknowledged or not. The OS can queue IO where appropriate, or hold it back if it thinks that's a bad idea. Practically speaking it does this anyway as part of routine filesystem caching.
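On Linux, that multipathing is typically configured through dm-multipath. As a minimal sketch (the option values here are illustrative choices, not universal defaults), an /etc/multipath.conf might look like:

```
# /etc/multipath.conf -- minimal sketch; values are example choices
defaults {
    path_grouping_policy   multibus        # spread IO across all live paths
    path_selector          "round-robin 0" # alternate requests between paths
    failback               immediate       # move back once a failed path recovers
    no_path_retry          queue           # queue IO rather than fail it when all paths drop
}
```

Vendor storage arrays often ship recommended `device { }` stanzas that override these, so treat this as a starting point rather than a tuning guide.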
This is why, for example, open() has the O_DIRECT option:
O_DIRECT (since Linux 2.4.10)
Try to minimize cache effects of the I/O to and from this
file. In general this will degrade performance, but it is
useful in special situations, such as when applications do
their own caching. File I/O is done directly to/from user-
space buffers. The O_DIRECT flag on its own makes an effort
to transfer data synchronously, but does not give the
guarantees of the O_SYNC flag that data and necessary metadata
are transferred. To guarantee synchronous I/O, O_SYNC must be
used in addition to O_DIRECT. See NOTES below for further
discussion.
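To make the alignment caveat concrete, here's a small sketch of a direct write (the function name and 4096-byte block size are my choices; the real requirement is the device's logical block size):

```c
#define _GNU_SOURCE            /* exposes O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Write one block-aligned buffer with O_DIRECT | O_SYNC.
   Returns bytes written, or -1 on failure -- including when the
   filesystem rejects O_DIRECT (tmpfs, for example, gives EINVAL). */
static ssize_t direct_write_block(const char *path, size_t blk)
{
    void *buf;
    /* O_DIRECT requires the user buffer, offset and length to be
       aligned to the logical block size; plain malloc() isn't enough. */
    if (posix_memalign(&buf, blk, blk) != 0)
        return -1;
    memset(buf, 'x', blk);

    /* O_SYNC on top of O_DIRECT, as the man page advises, so data and
       the metadata needed to retrieve it actually reach stable storage. */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT | O_SYNC,
                  0644);
    if (fd == -1) {
        free(buf);
        return -1;
    }
    ssize_t n = write(fd, buf, blk);   /* aligned buffer, aligned length */
    close(fd);
    unlink(path);
    free(buf);
    return n;
}
```

Note that a database doing its own caching would combine this with its own acknowledgement tracking, much like the SCSI driver does for the fabric underneath it.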