
I'm trying to learn how TCP works by creating a TCP/IP stack. As the title says, I'm programmatically creating a TAP device in Rust as follows:

use std::ffi::CStr;
use libc::{c_char, c_short, IFNAMSIZ};

// TUNSETIFF = _IOW('T', 202, int); recent versions of the libc crate
// also export this as libc::TUNSETIFF.
const TUNSETIFF: libc::c_ulong = 0x4004_54CA;

#[derive(Debug)]
pub enum TapError {
    OpenFD(std::io::Error),
    IOCTL(std::io::Error),
}

pub struct Tap {
    fd: i32,
}

// Minimal stand-in for struct ifreq: TUNSETIFF only reads the name
// and flags fields.
#[repr(C)]
struct IFreq {
    name: [c_char; IFNAMSIZ],
    flags: c_short,
}

impl Tap {
    pub fn new(name: &CStr) -> Result<Tap, TapError> {
        let fd = unsafe {
            let fd = libc::open(b"/dev/net/tun\0".as_ptr() as *const _, libc::O_RDWR);
            if fd < 0 {
                Err(TapError::OpenFD(std::io::Error::last_os_error()))
            } else {
                Ok(fd)
            }
        }?;

        let mut ifr = IFreq {
            name: [0; IFNAMSIZ],
            flags: (libc::IFF_TAP | libc::IFF_NO_PI) as c_short,
        };

        // Copy the interface name, leaving the final byte as NUL.
        for (dst, src) in ifr.name[0..IFNAMSIZ - 1]
            .iter_mut()
            .zip(name.to_bytes().iter())
        {
            *dst = *src as c_char; // cast to c_char, not i8: it is unsigned on some targets
        }

        unsafe {
            let err = libc::ioctl(fd, TUNSETIFF, &mut ifr as *mut _);
            if err == -1 {
                return Err(TapError::IOCTL(std::io::Error::last_os_error()));
            }
        }

        Ok(Tap { fd })
    }
}
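For reference, this is roughly how I then read frames off the device (`read_frame` is just a sketch I wrote for this post, not part of the stack above; since IFF_NO_PI is set, each read() returns exactly one Ethernet frame):

```rust
use std::fs::File;
use std::io::Read;
use std::mem::ManuallyDrop;
use std::os::fd::FromRawFd;

/// Read one frame from a raw fd without taking ownership of it.
/// With IFF_NO_PI, every read() on the tap fd yields one whole
/// Ethernet frame. The helper works on any fd, which also makes
/// it easy to test without a real TAP device.
fn read_frame(fd: i32, buf: &mut [u8]) -> std::io::Result<usize> {
    // SAFETY: the fd stays open for the duration of the call; ManuallyDrop
    // prevents File's destructor from closing the caller's fd on return.
    let mut file = ManuallyDrop::new(unsafe { File::from_raw_fd(fd) });
    file.read(buf)
}
```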

However, when I bring it up using ip link set dev <tap name> up and start reading Ethernet frames, I seem to only be getting IPv6 frames (the ethertype is 0x86DD). Is this normal? How do I stop this from happening? I'm currently only working on implementing IPv4 and don't want to deal with IPv6 traffic.

zee
  • This site is not about programming questions so we really don't need your source code here. What we would have needed, given `ip link` "creates the problem", is the output of `ip address show $tap_name`. But let me guess: There is a (link-local) IPv6 address on that interface but no IPv4 address. – Hauke Laging May 22 '23 at 20:45
  • @HaukeLaging apologies then, I really wasn't sure if this was a programming issue or a Unix issue so I didn't know whether to post it here or on SO. I did run that command on my TAP device and it did only return an IPv6 address, as opposed to all the other interfaces on my computer which return both an IPv4 and IPv6 address. I'm not sure what the issue here is though, or how I should go about resolving it. – zee May 23 '23 at 00:18

1 Answer


However, when I bring it up using ip link set dev <tap name> up and start reading Ethernet frames, I seem to only be getting IPv6 frames (the ethertype is 0x86DD). Is this normal? How do I stop this from happening? I'm currently only working on implementing IPv4 and don't want to deal with IPv6 traffic.

On an IPv6-capable system, every interface automatically gets a link-local IPv6 address from the fe80::/64 prefix. This is mostly used for infrastructure purposes, e.g. it allows SLAAC and DHCPv6 packets (ICMPv6 and UDP respectively) to be sent in the regular way instead of resorting to raw sockets like DHCPv4 clients need to do.

The initial packets, I suspect, are the "Duplicate Address Detection" packets that result from the OS assigning the link-local address, and possibly a "Router Solicitation" to initiate SLAAC autoconfiguration.

In your case specifically, I wouldn't recommend disabling this, as it's a normal situation that a network stack needs to handle even if it is not IPv6-capable. If you were implementing TCP/IP for a real device that connects to a real Ethernet, you wouldn't get to choose what other devices are sending you – you would receive all sorts of weird unwanted frames, not just IPv6 (e.g. this ethernet here has LLDP, STP, RoMON, CTP, just to give a realistic sample).

In other words, although preventing the OS from talking IPv6 on your tap interface is possible, it makes the exercise unrealistic and generally is the wrong approach anyway.
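(If you decide to silence it anyway, Linux has a per-interface sysctl for exactly this; `tap0` below is an example name, substitute your own:)

```shell
# Disable IPv6 on the tap interface only (requires root).
# The interface keeps working for IPv4; no link-local address is assigned.
sysctl -w net.ipv6.conf.tap0.disable_ipv6=1
```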

Instead, your network stack should just quietly ignore all Ethernet frames with any ethertype that it does not recognize, whether it's IPv6 or DECnet or IPX. (That is, there is no need to special-case 0x86DD here, it's safe to drop anything that's not 0x0800/0x0806.)
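That dispatch can be sketched in a few lines; the constant and function names below are mine, and the layout assumed is a plain Ethernet II header as delivered with IFF_NO_PI:

```rust
const ETHERTYPE_IPV4: u16 = 0x0800;
const ETHERTYPE_ARP: u16 = 0x0806;

/// Extract the ethertype from an Ethernet II frame:
/// dst MAC (6) + src MAC (6) + ethertype (2, big-endian).
fn ethertype(frame: &[u8]) -> Option<u16> {
    if frame.len() < 14 {
        return None; // runt frame, nothing to parse
    }
    Some(u16::from_be_bytes([frame[12], frame[13]]))
}

/// Returns true if the frame should be handed to the IPv4 stack;
/// everything else (IPv6, LLDP, IPX, ...) is silently dropped.
fn accept(frame: &[u8]) -> bool {
    matches!(ethertype(frame), Some(ETHERTYPE_IPV4 | ETHERTYPE_ARP))
}
```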

u1686_grawity