The (partial) solution

Traffic shaping under Linux happens in the kernel, but the rules that classify and prioritize packets are configured from user space. There is much talk of a suite called iproute2, but its docs are meager and its links are dead, and on Debian, all that can be installed is the package iproute.

The documentation for traffic shaping is the LARTC HOWTO. It has a short section on limiting a single host or netmask, and this could be used for capping multicast, which can be caught with a netmask: all multicast addresses fall within 224.0.0.0/4, i.e. 224.0.0.0 through 239.255.255.255. However, this example uses the cbq queueing discipline, and from the rest of the docs we gather that an htb discipline might be simpler and behave more predictably. (It is well documented in a separate HTB User Guide by its author.)
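
To give a taste of the filter side, here is a minimal sketch (assuming an htb tree like the one built in the script below, with root handle 1: and a capped class 1:10) in which a single u32 rule catches the whole multicast range:

  tc filter add dev eth0 protocol ip parent 1: prio 1 u32 \
      match ip dst 224.0.0.0/4 flowid 1:10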

There is more useful info in the tc manpage. The Packet-Shaping-HOWTO calls traffic shaping "packet shaping", but it has some nice comments on tc's error messages.

Before trying our hand at multicast, we first try to limit the bandwidth of a single IP address, as follows:

  1. Install iproute: apt-get install iproute

  2. tc is the traffic controller: a program that manipulates the rules used by the routing and traffic shaping modules. We write a small script that installs a shaping policy and save it in ~/dev/bash/multicap.sh:

    #!/bin/bash
    #
    # First attempt to limit bandwidth of multicast traffic
    #
    EXT_IF='eth0'
    EXT_BANDWIDTH='100mbit'
    EXT_BURST='120kb' # burst is EXT_BANDWIDTH*timer resolution (tr is 10ms on i386)
    MCAST_CAP='1mbit'
    TC='/sbin/tc'
    
    
    # Remove any old tree on ${EXT_IF} (errors harmlessly if none exists)
    ${TC} qdisc del dev ${EXT_IF} root
    
    
    # Attach an HTB (Hierarchical Token Bucket) queueing discipline to ${EXT_IF}
    ${TC} qdisc add dev ${EXT_IF} root handle 1: htb default 1
    
    # This is the parent and default class inside the HTB hierarchy.
    ${TC} class add dev ${EXT_IF} parent 1: classid 1:1 htb rate ${EXT_BANDWIDTH} burst ${EXT_BURST}
    
    # This is the capped mcast queue
    ${TC} class add dev ${EXT_IF} parent 1:1 classid 1:10 htb rate ${MCAST_CAP}
    
    # These filters grab multicast by IP range and put it in the capped class
    #${TC} filter add dev ${EXT_IF} protocol ip parent 1: prio 1 u32 match ip dst 224.0.0.0/24 flowid 1:10
    #${TC} filter add dev ${EXT_IF} protocol ip parent 1: prio 1 u32 match ip dst 239.0.0.0/8 flowid 1:10
    
    # This filter is just for testing purposes
    ${TC} filter add dev ${EXT_IF} protocol ip parent 1: prio 1 u32 match ip dst 192.168.5.23 flowid 1:10
    
    # Show what we've got on ${EXT_IF}
    ${TC} qdisc show dev ${EXT_IF}
    ${TC} class show dev ${EXT_IF}
    ${TC} filter show dev ${EXT_IF}
    
    # ToDo (see the sketch after this script):
    # - make default queue a child of parent and put SFQ inside both child queues (HTB author recommends it)
    # - grab multicast in the 224.0.1.0 - 238.255.255.255 range (internet-wide) as well
    # - calculate appropriate burst rates and add them to the rules

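  A sketch of the first ToDo item, as lines one might add to the script (our own guess, untested: class 1:20, the sfq handles, and the perturb value are arbitrary choices). The default traffic gets its own child class under 1:1, and each leaf gets an SFQ qdisc so that a single flow cannot monopolize its class:

    ${TC} class add dev ${EXT_IF} parent 1:1 classid 1:20 htb rate ${EXT_BANDWIDTH} burst ${EXT_BURST}
    ${TC} qdisc add dev ${EXT_IF} parent 1:10 handle 10: sfq perturb 10
    ${TC} qdisc add dev ${EXT_IF} parent 1:20 handle 20: sfq perturb 10
    # ...and change "default 1" on the root qdisc to "default 20"

  As for the burst values: burst should be at least rate times timer resolution, so for 100mbit at a 10ms timer that is 100 Mbit/s * 0.01 s = 1 Mbit = 125 kilobytes, which is presumably where the 120kb above comes from; the same calculation for the 1mbit class yields 1.25 kilobytes.
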
  3. Test the script. We start a large scp to the host-to-be-capped (in this case 192.168.5.23), and while it is running, we execute the tc script:

    1. Start the transfer: scp ~/KNOPPIX.iso user@192.168.5.23:/dev/null

      Watch the transfer and see that its rate is around interface capacity:

      5% 1989MB   9.9MB/s   01:04 ETA

    2. Run the script (in another console) while leaving the transfer running:

      chmod a+x ~/dev/bash/multicap.sh
      sudo ~/dev/bash/multicap.sh

    3. Switch back to the transfer console, and see the transfer rate go down:

      79% 2101MB 112.4KB/s 1:20:38 ETA

      Note that the actual transfer rate has already gone down and the traffic really is capped, but scp displays a sliding-window average, so it takes a while before we actually see the change.

      Also note that this still works if you use tftp (UDP) instead of scp (TCP); since multicast traffic is UDP, that gives some hope for the multicast case.
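
      To see the cap on UDP directly, one can offer a steady UDP stream (a sketch; iperf is our tool choice, not part of the original test, and is assumed to be installed on both hosts):

      # on the capped host (192.168.5.23): run a UDP server
      iperf -s -u

      # on the sending host: offer 10 Mbit/s of UDP, well above MCAST_CAP
      iperf -c 192.168.5.23 -u -b 10M

      The receiving side's report should then show the arrival rate pinned near the 1mbit cap.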