Per the open(2) man page:
    O_DIRECT (since Linux 2.4.10)
        Try to minimize cache effects of the I/O to and from this
        file. In general this will degrade performance, but it is
        useful in special situations, such as when applications do
        their own caching. File I/O is done directly to/from user-
        space buffers. The O_DIRECT flag on its own makes an effort
        to transfer data synchronously, but does not give the
        guarantees of the O_SYNC flag that data and necessary metadata
        are transferred. To guarantee synchronous I/O, O_SYNC must be
        used in addition to O_DIRECT. See NOTES below for further
        discussion.
From the NOTES section:
    O_DIRECT support was added under Linux in kernel version 2.4.10.
    Older Linux kernels simply ignore this flag. Some filesystems may
    not implement the flag and open() will fail with EINVAL if it is
    used.
So O_DIRECT used to be simply ignored. And from the LKML, just a couple of months ago:
    Who cares how a filesystem implements O_DIRECT as long as it does
    not corrupt data? ext3 fell back to buffered IO in many situations,
    yet the only complaints about that were performance. IOWs, it's long been
    true that if the user cares about O_DIRECT performance then they
    have to be careful about their choice of filesystem.

    But if it's only 5 lines of code per filesystem to support O_DIRECT
    correctly via buffered IO, then exactly why should userspace have
    to jump through hoops to explicitly handle open(O_DIRECT) failure?
    Especially when you consider that all they can do is fall back to
    buffered IO themselves....
I had written counterpoints for all of this, but I thought better of
it. Old versions of the kernel simply ignore O_DIRECT, so clearly
there's precedent.
Given that, it seems that you're safe to simply ignore it. The key phrase seems to be "not corrupt data".
For now.
Note also that your linked question has answers that say O_DIRECT isn't useful for performance reasons. That is simply incorrect. Passing data through the page cache is slower than not passing it through the page cache, and the difference can be significant on hardware capable of transferring gigabytes per second. And if you only touch each piece of data once, the caching is useless, yet it still needlessly consumes memory and impacts the entire system.
It's been a few years since I wrote a Linux filesystem module, and unfortunately I don't recall the details of how the VFS layer handles caching.