Which queues can be used to manage bandwidth on a router's transmission line?
You are familiar with queues as lines of people waiting to buy tickets or to talk to a service representative on the phone. A queue is also a collection of objects waiting to be processed, one at a time. In a network, packets are queued in the memory buffers of devices such as routers and switches. Packets in a queue are usually handled in first-in, first-out order, but various techniques may be used to prioritize packets or to ensure that all packets are handled fairly, rather than allowing one source to grab more than its share of resources. Note that a buffer is a physical block of memory, while a queue is the collection of packets waiting in buffers for processing. Queuing algorithms determine how queues are processed. A different type of queuing is message queuing; see "Middleware and Messaging."

Packets may arrive at queues in bursts from multiple devices, and a device may temporarily receive more packets than it can process. Buffers hold packets until the device can catch up. If it cannot, the buffers fill and new incoming packets are dropped. This is called "tail drop." Queue management schemes alleviate congestion. One technique drops packets when necessary or appropriate. So-called "stale" packets (voice or video packets that will arrive too late to be used) may also be dropped to free queue space, on the assumption that the receiver would discard them anyway. Scheduling algorithms determine which packet to send next, managing and prioritizing the allocation of bandwidth among flows. Queue management is part of packet classification and QoS schemes, in which flows are identified, classified, and then placed in queues that provide appropriate service levels.
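The tail-drop behavior described above can be sketched as a bounded FIFO buffer that discards new arrivals once it is full. This is a minimal illustration; the class name and capacity are assumptions, not taken from any particular router implementation.

```python
from collections import deque

class TailDropQueue:
    """Sketch of a FIFO packet queue with tail drop (names are illustrative)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()
        self.dropped = 0          # count of tail-dropped packets

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            self.dropped += 1     # queue full: the new arrival is tail-dropped
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        # Packets leave in first-in, first-out order
        return self.buffer.popleft() if self.buffer else None
```

In a real device the capacity would correspond to the buffer memory allotted to the interface, and active queue management (such as RED) would begin dropping before the queue is completely full.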
RFC 1633 (Integrated Services in the Internet Architecture: An Overview, June 1994) describes the router components that provide traffic control. The most important for this discussion is the packet scheduler, which uses queues to forward different packet streams. Another component is the classifier, which identifies packets and maps them into queues. Packets of the same class are placed in the same queue, where they receive the same treatment from the packet scheduler.

Multiple-access networks such as intranets and the Internet are referred to as "networks of queues." On a point-to-point link, the receiver monitors its own queues and signals the sender when it is transmitting too fast. In a packet-switched network with many senders transmitting at any time, traffic levels cannot be predicted, and some parts of the network may become more congested than others. The solution is to place queues throughout the network to absorb bursts from one or more senders.

Queuing Methods

This section explains basic queuing techniques.
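The classifier/scheduler split described by RFC 1633 can be sketched as a function that maps each packet into one of several per-class FIFO queues. The class names and the DSCP-to-class mapping below are assumptions chosen for illustration, not part of the RFC.

```python
from collections import deque

# One FIFO queue per service class (class names are illustrative)
queues = {"voice": deque(), "video": deque(), "best_effort": deque()}

def classify(packet):
    """Map a packet to a class, e.g. by DSCP marking (mapping is hypothetical)."""
    dscp = packet.get("dscp", 0)
    if dscp == 46:                 # EF: commonly used for voice
        return "voice"
    if dscp in (34, 36, 38):       # AF4x: often used for video
        return "video"
    return "best_effort"

def enqueue_packet(packet):
    """Packets of the same class go into the same queue,
    where the scheduler treats them identically."""
    queues[classify(packet)].append(packet)
```

The packet scheduler would then decide, on each transmission opportunity, which of these queues to serve next.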
There are several queuing methods, including FIFO (first-in, first-out), priority queuing, and fair queuing. FIFO is common to all queuing schemes, as it describes the basic way packets flow through a queue. You can picture queues as supermarket checkout lanes. Each lane uses FIFO queuing, but cashiers may change the order to reduce congestion (e.g., pull shoppers from busy lanes into less busy lanes). Shoppers with membership cards, cash, or a small number of items may be pulled into an express lane (prioritization).
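The checkout-lane analogy can be made concrete with two simple scheduling disciplines over multiple FIFO queues: strict priority (always serve the express lane first) and round-robin, the simplest form of fair queuing. Queue names here are illustrative.

```python
from collections import deque
from itertools import cycle

def priority_dequeue(queues):
    """Strict priority: always serve the highest-priority nonempty queue."""
    for name in ("express", "normal"):   # listed highest priority first
        if queues[name]:
            return queues[name].popleft()
    return None

def round_robin_dequeue(queues, order):
    """Simplest fair queuing: visit the queues in turn, one packet each."""
    for _ in range(len(queues)):
        name = next(order)               # `order` is an itertools.cycle
        if queues[name]:
            return queues[name].popleft()
    return None
```

Strict priority can starve the lower queues if the express queue is always busy; fair schemes trade some priority for the guarantee that every queue eventually gets served.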
Queue Behavior

A good document on queue management is RFC 2309 (Recommendations on Queue Management and Congestion Avoidance in the Internet, April 1998). It describes methods for improving and preserving Internet performance, including queue management. Important points are outlined here:
Traffic-shaping algorithms control queues in a way that smooths the flow of packets into the network from hosts or routers. For example, a host may use what is called a leaky bucket at the network interface. The bucket is basically a regulated queue. An application may produce an uneven flow of packets that burst or trickle into the bucket, but the bucket has an imaginary hole in the bottom through which only a certain number of packets can exit in a steady stream. A token bucket takes this concept further by allowing a queue to save up permission to send bursts of packets later. A typical algorithm accumulates one token per unit of time, up to some maximum; sending a packet consumes one token. If no packets are being sent, tokens accumulate and can be spent later to burst packets.

UDP may be used by applications that do not need TCP's guaranteed services. UDP does not acknowledge receipt of packets to the sender, so it does not support congestion controls based on dropping packets: when queues fill, UDP senders keep sending. Rate controllers can help, as discussed under "Congestion Control Mechanisms." As mentioned, TCP congestion control mechanisms and active queue management schemes such as RED are also discussed under "Congestion Control Mechanisms." Going beyond congestion control, there are QoS techniques such as Int-Serv (Integrated Services) and RSVP (Resource Reservation Protocol). See "QoS (Quality of Service)" for an overview and references to related sections.

What are the two places that queuing can occur in a router?
Queuing can occur at both the input ports and the output ports of a router. Queuing occurs at an output port when the rate of packets arriving for the outgoing link exceeds the link's capacity.
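The token bucket described above can be sketched as follows. Tokens accrue at a fixed rate up to a burst limit, and sending a packet destroys one token; the rate and burst parameters are illustrative, not taken from any standard.

```python
import time

class TokenBucket:
    """Sketch of a token bucket shaper (parameter values are illustrative)."""

    def __init__(self, rate, burst):
        self.rate = rate                  # tokens accrued per second
        self.burst = burst                # maximum tokens that can be saved up
        self.tokens = burst               # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        """Return True if a packet may be sent now, consuming one token."""
        now = time.monotonic()
        # Accrue tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1              # one token destroyed per packet sent
            return True
        return False                      # out of tokens: delay or drop
```

A leaky bucket is the same idea without the saved-up burst: output is limited to the steady rate regardless of how long the sender has been idle.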
What types of queuing systems are used in computer networks? Depending on the total number of customers, queuing networks can be classified into three categories: open, closed, and mixed. Depending on the number of customer classes, there are single-class networks and multiclass networks.
What is a QoS priority queue? The queue's priority determines the order in which its packets leave the device. For example, packets in a high-priority queue leave the switch before packets in other queues. To summarize: QoS assigns a priority to each packet based on data contained in its header and the policy that matches that data.
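The exit order a priority queue produces can be sketched with a heap keyed on (priority, arrival sequence), so that higher-priority packets leave first and equal-priority packets still leave in FIFO order. The priority values and packet names are illustrative.

```python
import heapq
from itertools import count

egress = []        # heap of (priority, sequence, packet)
_seq = count()     # arrival counter: preserves FIFO order within a priority

def send(priority, packet):
    """Queue a packet for egress; lower number = higher priority (assumption)."""
    heapq.heappush(egress, (priority, next(_seq), packet))

def next_packet():
    """Pop the packet that should leave the device next."""
    return heapq.heappop(egress)[2] if egress else None
```

With this ordering, a voice packet queued after two bulk packets still exits first, which is exactly the behavior the answer above describes.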
What is a queue on a router? A queue stores traffic until it can be processed or serialized. Both switch and router interfaces have ingress (inbound) queues and egress (outbound) queues. An ingress queue stores packets until the switch or router CPU can forward the data to the appropriate interface.