When transmitting video over a packet network there are two priorities: low delay and robustness to packet loss. These priorities conflict: we could achieve robustness by retransmitting lost data, but only at the expense of unacceptable delay. The only remaining option is to packetise the data so that the receiver can continue to decode as correctly as possible in spite of packet loss.
If there were only one set of framing surrounding the H.261 data (or if the frames were nested), it would have been possible to use a fixed packet size and packetise the frames directly, at the cost of some extra data loss whenever a packet was lost. However, as there are two independent sets of framing, losing a packet would inevitably break synchronisation of at least one of them, and after losing either synchronisation the codec's attempt to re-synchronise takes up to 15 seconds, which is clearly unacceptable. The only sensible option is therefore to transmit only the raw H.261 data over the packet network, packetising it around GOBs (Groups of Blocks), which are H.261's minimum synchronisation units, and to generate the H.221 and CRC framing locally at the receiver for feeding into the codec.
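The packetisation step can be illustrated with a short sketch. The C code below scans a buffer of raw H.261 data for GOB start codes and cuts packets at those boundaries. It is a minimal sketch under simplifying assumptions: start codes are taken to be byte-aligned (in real H.261 output they need not be, so a real packetiser must search at bit granularity and guard against emulated start codes), and send_packet() is a hypothetical transmit routine standing in for whatever the controller actually uses, not part of the system described here.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical transmit routine -- hands one packet to the network. */
    extern void send_packet(const uint8_t *data, size_t len);

    /*
     * Split a raw H.261 bitstream into packets on GOB boundaries.
     * A GOB (or picture) start code begins with the 16-bit pattern
     * 0000 0000 0000 0001, assumed byte-aligned here for simplicity.
     */
    static void packetise_gobs(const uint8_t *buf, size_t len)
    {
        size_t start = 0;

        for (size_t i = 0; i + 1 < len; i++) {
            if (buf[i] == 0x00 && buf[i + 1] == 0x01) {
                /* Start code found: flush everything before it as one
                 * packet, so a lost packet costs at most the GOBs it
                 * carried and the decoder re-syncs at the next GOB. */
                if (i > start)
                    send_packet(buf + start, i - start);
                start = i;
                i++;  /* skip the second start-code byte */
            }
        }
        if (start < len)
            send_packet(buf + start, len - start);
    }

Cutting packets only at GOB start codes means each packet begins with a synchronisation point, so the decoder can resume correctly from the first GOB of the packet following a loss.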
Figure 2 shows the resultant pipeline: