Summary
This paper proposes COPE, a new architecture for wireless mesh networks. It uses packet coding in order to reduce the number of required transmissions. Like ExOR, it takes advantage of the broadcast nature of the wireless channel.
The packet coding algorithm never delays a packet just to create a coding opportunity. It prefers to XOR packets of similar length, never XORs two packets headed to the same nexthop, and tries to limit reordering of packets within a flow. Finally, it ensures that each neighbor that receives a coded packet has a high probability of being able to decode it.
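The decodability rule can be sketched as a greedy scan of the output queue. This is a hypothetical sketch of the idea, not the authors' implementation; the function name and data structures are mine:

```python
def choose_coding_set(queue, overheard):
    """Greedily pick packets from the head of the output queue to XOR
    together: never two packets for the same nexthop, and only add a
    packet if every intended receiver can still decode, i.e. each
    receiver has overheard all of the *other* packets in the set.
    `queue` holds (packet_id, nexthop) pairs; `overheard[node]` is the
    set of packet_ids that node is believed to have overheard."""
    coded, nexthops = [], set()
    for pkt, nh in queue:
        if nh in nexthops:
            continue  # rule: never XOR packets headed to the same nexthop
        candidate = coded + [(pkt, nh)]
        # each receiver must already know every packet except its own
        decodable = all(
            all(p in overheard[rcv] for p, _ in candidate if p != own)
            for own, rcv in candidate
        )
        if decodable:
            coded, nexthops = candidate, nexthops | {nh}
    return coded
```

For the classic two-flow relay (Alice's packet to Bob and Bob's to Alice, each having overheard the other's transmission), this XORs both packets into a single broadcast.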
They use "coding gain" as the performance metric for COPE. Coding gain is the ratio of the number of transmissions required by the current non-coding approach to the minimum number of transmissions used by COPE to deliver the same set of packets.
COPE was implemented on a 20-node wireless testbed over 3 topologies and achieved a coding gain of about 1.33. Throughput on the cross topology over TCP was a little lower than expected, which they attributed to header overhead, imperfect overhearing, and an asymmetry in the throughputs of the 4 flows.
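The ~1.33 figure is exactly the coding gain of the simplest relay scenario, the Alice-and-Bob example: without coding the relay forwards each packet separately (4 transmissions total), while with coding it broadcasts a single XOR (3 transmissions). A trivial sketch:

```python
def coding_gain(tx_without_coding, tx_with_coding):
    """Ratio of transmissions needed without coding to the transmissions
    COPE needs to deliver the same set of packets."""
    return tx_without_coding / tx_with_coding

# Alice <-> relay <-> Bob, one packet each way:
#   no coding: Alice->R, Bob->R, R->Bob, R->Alice   = 4 transmissions
#   COPE:      Alice->R, Bob->R, R->both (XORed)    = 3 transmissions
print(coding_gain(4, 3))  # 1.333..., matching the ~1.33 measured gain
```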
Finally, they mention fairness and claim that when packets are coded, increasing fairness will increase the overall throughput of the network.
Criticism & Questions
I thought this paper was interesting: definitely another clever way to make use of the broadcast nature of wireless. I'm not sure whether COPE could be used together with ExOR, perhaps if ExOR were modified to accommodate multiple flows at once. If not, I would really like to see a side-by-side comparison of the two to determine which performs better.
Thursday, October 15, 2009
ExOR: Opportunistic Multi-Hop Routing for Wireless Networks
Summary
This paper proposes ExOR, an integrated routing and MAC technique that uses cooperative diversity to increase throughput of large unicast transfers in multi-hop wireless networks.
ExOR takes advantage of broadcast transmission to send data to multiple nodes at once. The node that heard the broadcast and is furthest from the source (i.e., closest to the destination) then broadcasts it again, and so on until the packet reaches the destination. This yields higher throughput because ExOR makes use of transmissions that reach unexpectedly far or fall short. It also increases network capacity, since it needs fewer retransmissions than traditional routing.
Data is sent in batches, and each intermediate node buffers the packets it receives until the batch is sent, after which the nodes determine which receiver of each packet is closest to the destination. ExOR schedules when each node sends its fragments so that only one node transmits at a time. Once 90% of a batch's packets have been delivered, the remaining 10% are sent using traditional routing.
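The per-packet forwarding decision can be sketched as follows. This is a simplified sketch of the idea (the real protocol uses batch maps and an ETX-based priority ordering computed by the source; the names here are mine):

```python
def pick_forwarder(priority_list, receivers):
    """Given the source's forwarder list, ordered best-first by
    closeness to the destination, the highest-priority node that
    actually received the packet rebroadcasts it; every lower-priority
    receiver suppresses its copy."""
    for node in priority_list:
        if node in receivers:
            return node       # this node forwards; the rest stay quiet
    return None               # nobody heard it; the sender retries

# A broadcast heard only by B and A still makes progress via B:
# pick_forwarder(["C", "B", "A"], {"A", "B"}) -> "B"
```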
ExOR was implemented on Roofnet. It was found to outperform traditional routing by a factor of 2. It was also found to have the most performance gains on the longest routes.
Criticism & Questions
I think cooperative diversity routing is a very interesting idea. I think it makes sense intuitively why it would have good performance. I am however concerned about its incompatibility with regular TCP, a hindrance to its use in most practical settings. In addition, when doing the experiment, the authors spend about 10 minutes initially just setting up the nodes and then every 20 minutes, stop the experiment to update their link loss measurements. I'm not sure how practical that is and how it would affect overall throughput.
Thursday, October 8, 2009
A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols
Summary
This paper presents the results of a performance comparison between 4 multi-hop wireless ad hoc network routing protocols. This paper also proposed and implemented changes to ns so that it could be used to study multi-hop ad hoc networks.
The 4 protocols that were tested were:
- Destination-Sequenced Distance Vector (DSDV)
- Temporally-Ordered Routing Algorithm (TORA)
- Dynamic Source Routing (DSR)
- Ad Hoc On-Demand Distance Vector (AODV)
The changes made to ns included models of the physical and data link layers; the link layer implements the MAC protocol. They also added an implementation of ARP to resolve IP addresses to link-layer addresses. Finally, each protocol was given a packet buffer with a maximum size of 50 packets.
The authors ran a 50-node simulation in ns to see how each protocol reacts to changes in the network topology. The simulation was run with 3 different CBR source configurations in order to approximate 3 different data sending rates.
They chose the following 3 metrics to evaluate the simulation results:
- packet delivery ratio
- routing overhead
- path optimality
When evaluating packet delivery ratio, DSR and AODV-LL were found to perform best at all pause times. TORA performs a little worse, and while DSDV-SQ performs better than TORA at pause times above 300 seconds, it fails to converge at pause times below 300 seconds.
When evaluating routing overhead, DSR was found to have the lowest number of overhead packets while AODV-LL was found to have a slightly higher number of overhead packets. However, when looking at the number of overhead bytes, AODV-LL actually has the smallest number of total overhead bytes.
When evaluating path optimality, DSR and DSDV-SQ were found to use routes close to the optimal, while AODV-LL and TORA used routes that were sometimes 4 hops longer than optimal.
Criticism & Questions
I think this paper was interesting to read. The simulation results do suggest many differences between the 4 protocols and would be informative to those choosing between them. However, since these are purely simulation results, I wonder how accurate they are. One of the features of ad-hoc networks is their unpredictability, and without doing a real deployment, I'm not sure how accurately you can test the various protocols.
The authors didn't include a future work section, but I would really like to see the results of a real deployment as the next step.
A High-Throughput Path Metric for Multi-Hop Wireless Routing
Summary
This paper proposes a new metric to use when finding the highest-throughput path in multi-hop wireless routing. The new metric, called Expected Transmission Count (ETX), uses link loss ratios, the asymmetry in loss ratios, and interference between successive links in a path to find the path that requires the lowest number of transmissions, including retransmissions, to successfully deliver a packet to its destination.
The typical metric currently used in multi-hop wireless routing is minimum hop-count. The authors first determined the performance of minimum-hop-count routing using a 29-node wireless testbed. They used both the DSDV and DSR protocols. They found that minimum-hop-count works well when the shortest route is also the fastest one. However, when it has a choice among a number of multi-hop routes, it usually picks a much slower than optimal path.
Next, the authors explain, implement and evaluate ETX. The ETX metric measures the number of transmissions required to send a packet to its destination.
ETX = 1 / (df * dr)
where df is the forward delivery ratio and dr is the reverse delivery ratio.
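A route's metric is the sum of its links' ETX values, which is what lets ETX prefer a clean multi-hop path over a lossy single hop. A small sketch (function names are mine):

```python
def link_etx(df, dr):
    """Expected transmissions for one link: the data packet must survive
    the forward direction (df) and its ACK the reverse direction (dr)."""
    return 1.0 / (df * dr)

def route_etx(links):
    """Route metric: sum of per-link ETX over (df, dr) pairs."""
    return sum(link_etx(df, dr) for df, dr in links)

# One 50%-lossy hop costs 4 expected transmissions; two perfect hops
# cost only 2, so ETX picks the longer route where hop count would not.
print(route_etx([(0.5, 0.5)]))              # 4.0
print(route_etx([(1.0, 1.0), (1.0, 1.0)]))  # 2.0
```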
When ETX was tested with DSDV, it was found to perform better than hop-count DSDV when packets are sent over a high-loss-ratio link and the best path is multi-hop. Part of ETX's improvement comes from avoiding extremely asymmetric links, which it does even better than a handshaking scheme designed with the same goal.
ETX was tested with DSR with its link-layer feedback turned on and off. When it was turned off, ETX showed a significant improvement in initial route selection. However, when feedback was turned on, ETX showed only a slight improvement to some pairs of nodes.
One drawback of ETX is that it uses a fixed packet size in its calculations, even though packet size was shown to have a significant effect on delivery ratio. Because of this, ETX tends to underestimate the delivery ratio of small ACK packets, resulting in an overestimation of the number of required transmissions.
Criticism & Questions
I enjoyed reading this paper. I think they had very sound methodology, especially in testing minimum-hop-count so they could have a fair comparison. Since there are at least 2 more ad-hoc multi-hop routing protocols (which we read about in the next paper), I would like to see the effect that ETX would have on those protocols as well. One complaint about the paper is that the graphs were really hard to read and really hard to differentiate between the various lines.
Tuesday, October 6, 2009
Architecture and Evaluation of an Unplanned 802.11b Mesh Network
Summary
This paper evaluated the architecture and performance of Roofnet, an unplanned 802.11b Mesh Network. It explored the effect of node density on connectivity and throughput, the links that the routing protocol uses, performance of a highly connected mesh and comparison to a single-hop network using the same nodes as Roofnet.
Roofnet is primarily characterized by the following design decisions:
- unconstrained node placement
- omni-directional antennas
- multi-hop routing
- optimization of routing for throughput in a slowly-changing network
Then, the authors use 4 sets of measurements on Roofnet to evaluate many characteristics of Roofnet.
The authors conclude that Roofnet works. Throughput and latency are comparable to those of a DSL link; the average throughput was measured at 627 kbits/second. Roofnet is shown to be robust, as it does not depend on a small number of nodes. It is also shown that multi-hop forwarding performs better than single-hop forwarding.
Criticism & Questions
This paper was very well organized and easy to follow. Their evaluation considered many characteristics and they made sound arguments when analyzing their data. The charts and graphs were very helpful in understanding their points.
Feedback
I would vote to keep this paper in the syllabus.
Modeling Wireless Links for Transport Protocols
Summary
This paper puts forth models of wireless links for evaluating transport protocol performance. The authors argue that such models are especially useful when trying to decide whether link-layer protocols should have knowledge of the transport layer and use it when deciding what to do, or whether transport protocols should instead be redesigned for better performance over current wireless links.
This paper considers 3 main classes of wireless links: wireless LANs, wide-area cellular links and satellite links. The authors go into detail on the essential aspects of the model: types of wireless links in each class, topologies most common in each class and the traffic patterns in each class.
The performance metrics used for this experiment are throughput, delay, fairness, dynamics and goodput.
Then, the authors explain that current models are inadequate because they are either unrealistic, realistic but explore only a small part of the parameter space, overly realistic or lacking reproducibility.
The authors then choose several link characteristics and for each, explain what the current state is and how to model it:
- error losses and corruption: not a big concern because of FEC and link-layer retransmissions. Can be modeled by dropping packets with a per-packet, per-bit, or time-based loss probability.
- delay variation: delay spikes can cause spurious timeouts, causing retransmissions. Can be modeled by suspending data transmission on the link.
- packet reordering: reordering is not widely enabled in practice. Can be modeled by swapping packets or delaying one packet for a given time.
- on-demand resource allocation
- bandwidth variation
- asymmetry in bandwidth and latency
- queue management
- effects of mobility
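The first few characteristics lend themselves to a simple simulation. A hypothetical sketch in the spirit of the modeling advice above (a per-packet loss probability plus a delay-spike window; all names are mine, not the paper's):

```python
import random

def simulate_link(n_packets, loss_prob, base_delay_ms,
                  spike_window=(), spike_ms=0, seed=0):
    """Drop each packet independently with probability `loss_prob`
    (error losses), and add `spike_ms` of extra delay to packets whose
    index falls in `spike_window` (a delay spike, as when the link is
    briefly suspended).  Returns (delivered_count, per-packet delays)."""
    rng = random.Random(seed)   # seeded for reproducibility
    delivered, delays = 0, []
    for i in range(n_packets):
        if rng.random() < loss_prob:
            continue            # packet lost to link errors
        delay = base_delay_ms + (spike_ms if i in spike_window else 0)
        delivered += 1
        delays.append(delay)
    return delivered, delays

# A 5-packet transfer with no losses and a spike over packets 2-3:
# simulate_link(5, 0.0, 10, spike_window=range(2, 4), spike_ms=100)
# -> (5, [10, 10, 110, 110, 10])
```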
Criticism & Questions
I enjoyed this paper. It was well organized and easy to follow. I especially liked how the various link characteristics were organized, with the authors explaining, for each, both its current state and how to model it.
I am very curious to know whether this model caught on and is being used by networking researchers today.
Thursday, October 1, 2009
A Comparison of Mechanisms for Improving TCP Performance over Wireless Links
Summary
This paper presents an evaluation of several schemes to improve the performance of TCP over lossy links, such as wireless. The authors classify the schemes into the following 3 categories:
1. end-to-end protocols: have the sender detect and handle losses using techniques such as SACKs and ELN
2. link-layer protocols: hide link-layer losses from the transport layer and handle them in the link-layer instead using techniques such as local retransmissions and forward error correction.
3. split-connection protocols: terminate the TCP connection at the base station so that the sender is shielded from the wireless link, and use a separate protocol that handles losses between the base station and the receiver.
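The trade-off between categories 1 and 2 can be made concrete with a back-of-envelope model (my own sketch, not from the paper, assuming independent loss probability p on each of n hops): with link-layer retransmission, every hop costs 1/(1-p) transmissions on average, while a sender-only scheme pays for all the hops a packet crosses before being dropped, repeated until one attempt survives every hop.

```python
def link_layer_cost(n, p):
    """Expected transmissions with local retransmission on every hop:
    each of the n hops costs 1/(1-p) attempts on average."""
    return n / (1 - p)

def end_to_end_cost(n, p):
    """Expected transmissions when only the sender retransmits: each
    end-to-end attempt pays for every hop the packet crosses before
    being dropped, and succeeds with probability (1-p)**n."""
    hops_per_attempt = sum((1 - p) ** (k - 1) for k in range(1, n + 1))
    expected_attempts = 1 / (1 - p) ** n
    return hops_per_attempt * expected_attempts

# Over 4 hops with 20% loss per hop, local recovery is much cheaper:
# link_layer_cost(4, 0.2)  -> 5.0
# end_to_end_cost(4, 0.2)  -> ~7.2
```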
After evaluating the various schemes, they obtained the following results: the enhanced link-layer scheme, which has knowledge of the TCP protocol and uses SACKs, works much better than a simple link-layer retransmission scheme. Among the end-to-end protocols, selective acknowledgements were found to be better than partial acknowledgements or ELN, but not better than the enhanced link-layer scheme. The split-connection protocol performs worse than both of the above, showing that a split connection is not necessary for optimal performance.
Overall, the link-layer scheme that was TCP aware and uses SACKs was found to be the best scheme.
Criticism & Questions
I would be very interested in finding out which schemes, if any, are currently used in lossy networks. They touch on this a little, but I would like to learn more about the practicality of each of the schemes and have that be one of the criteria considered when choosing the optimal scheme.
Feedback
I enjoyed reading this paper. It was great to learn about the various schemes available to improve performance in lossy links. I think the authors did a great job at explaining each of the schemes and why one was better than the other. I vote to keep this paper in the syllabus.