Thursday, October 29, 2009

Active Network Vision and Reality: Lessons from a Capsule-Based System

Summary

The authors of this paper aim to understand more about Active Networks. They do so by designing, implementing and evaluating the ANTS active network toolkit, which provides a flexible network layer and addresses some of the performance and security concerns raised by code injected into the network. They also point to 3 areas that they have not yet resolved: the capsule model of programming, the accessibility of that model to all users, and the applications that can be constructed using that model.

Criticism & Questions

I think this is an interesting contrast to RON. Both try to solve the general problem of application-aware routing, but in different ways: RON is implemented at the application layer, while active networks sit at the IP layer. Although I think active networks are a great idea in theory, they would be a pain to implement correctly. And, since they allow users to inject code into the network, without proper isolation they could lead to a security nightmare.

Resilient Overlay Networks

Summary

A Resilient Overlay Network (RON) is an application-layer architecture that allows distributed Internet applications to detect outages and recover quickly (on the order of seconds rather than the minutes BGP can take). The RON nodes sit on top of the underlying network, constantly monitoring it. This network information is used when sending packets to decide whether to send over the normal path through the Internet or to forward through another RON node.

RON was evaluated using 2 deployments. It was able to detect all the outages that took place and reroute packets around them in less than 20 seconds on average. RON was also able to substantially improve throughput and reduce the loss probability by 0.05 in roughly 5% of the samples.

RON nodes aggressively probe and monitor the paths connecting them. They exchange information about the quality of each path using a routing protocol and build forwarding tables that carry per-path information, including latency, loss rate and throughput. The authors argue that a RON should be limited in size, between 2 and 50 nodes, in order to keep the bandwidth overhead of the aggressive probing manageable.

RON has 3 major goals:
  1. allow nodes to communicate with each other, even in the face of outages.
  2. integrate routing with distributed applications more closely than before, allowing for application-specific metrics to be used.
  3. provide a framework for the implementation of expressive routing policies, which allow paths to be chosen based on criteria such as forwarding rate controls or notions of acceptable use.
The RON architecture consists of many RON clients distributed over the network. A client interacts with RON using the conduit API in order to send and receive data. The first RON node that sees the packet is called the entry node; it classifies the packet according to the type of path it should use, determines a path for it, adds a RON header and forwards it. Intermediate RON nodes simply forward the packet to the next RON node. The final RON node, called the exit node, delivers the packet to the application.
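To make the data path concrete, here is a hypothetical sketch of what the entry node's job amounts to; the class names and table interface are my own illustration, not the paper's conduit API:

    # Hypothetical sketch of a RON entry node's forwarding decision.
    # Names and interfaces are illustrative, not the paper's actual API.
    from dataclasses import dataclass

    @dataclass
    class RonHeader:
        flow_id: int    # lets downstream RON nodes keep the flow on one path
        dest: str       # the RON exit node for this packet
        next_hop: str   # next RON node chosen by the entry node

    class EntryNode:
        def __init__(self, forwarding_table):
            self.table = forwarding_table   # maps (dest, metric) -> next RON hop

        def send(self, payload, src, dest, metric="latency"):
            # classify: which metric does this application's traffic care about?
            # then choose a path; the direct Internet route is just one candidate
            next_hop = self.table.lookup(dest, metric)
            # tag with a flow ID so later RON nodes keep the flow on one path
            header = RonHeader(flow_id=hash((src, dest)) & 0xFFFF,
                               dest=dest, next_hop=next_hop)
            return header, payload          # handed to the conduit to transmit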

The entry node has a lot of control over the flow of the packet. The packet is tagged with a flow ID, and subsequent nodes attempt to keep the packet on the same path as the rest of the flow.

When building up its routing tables, RON keeps track of (i) latency, (ii) packet loss and (iii) throughput, and it distributes this information using a link-state routing protocol. The latency-minimizing forwarding table uses an exponentially weighted moving average of round-trip latency samples.
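As a rough illustration, the latency metric could be maintained with an exponentially weighted moving average along these lines (the smoothing factor is a placeholder of mine, not necessarily the value RON uses):

    # Illustrative EWMA latency estimator for one RON virtual link.
    # alpha is an assumed smoothing factor, not taken from the paper.
    class LinkLatencyEstimator:
        def __init__(self, alpha=0.1):
            self.alpha = alpha
            self.avg_rtt = None

        def update(self, sample_rtt_ms):
            """Fold a new round-trip-time probe sample into the moving average."""
            if self.avg_rtt is None:
                self.avg_rtt = sample_rtt_ms
            else:
                self.avg_rtt = (1 - self.alpha) * self.avg_rtt + self.alpha * sample_rtt_ms
            return self.avg_rtt

    # est = LinkLatencyEstimator()
    # for rtt in (40.0, 42.5, 39.8):   # ms, from active probes of the link
    #     est.update(rtt)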

RON was implemented and evaluated, and was found to detect and recover from failures in an average of about 18 seconds. It also improves throughput, loss rates and latency.

Criticisms & Questions

I think this was an interesting paper. It was well organized and presented the protocol clearly. Their evaluation also seemed sound given that they ran 2 deployments and RON was found to help in all cases.

I think RON is a good way to allow applications to give information to the transport protocol without breaking the end-to-end argument. It solves many of the problems with application-specific needs.

Tuesday, October 27, 2009

On Inferring Autonomous System Relationships in the Internet

Summary

This paper proposes an annotated graph representation of the AS relationships in the Internet at the time it was written. The authors characterize the relationships as customer-provider, peering and sibling relationships. They come up with heuristic algorithms that look at BGP routing tables and infer the augmented AS graph.

They test the graph generated by their algorithm against AT&T's internal information and find that over 99% of it is accurate. They also check the inferred sibling-to-sibling relationships using the WHOIS lookup service and find that over half of them are confirmed.

Their annotated AS graph consists of nodes connected by one of 3 kinds of edges: provider-to-customer, peer-to-peer and sibling-to-sibling.

They explain the basic 3-phase heuristic algorithm they use. The first phase parses the routing tables and calculates the degree of each AS. The second phase goes through every routing table entry, identifies the top provider (the highest-degree AS on the path) and records, for the consecutive AS pairs before and after it, which AS appears to provide transit for which. Finally, the third phase uses those transit annotations to assign a relationship to each AS pair.
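Here is a hedged sketch of how I read that basic degree-based heuristic; it is a simplified reconstruction, not the authors' code:

    # Simplified reconstruction of the basic 3-phase heuristic (my reading of
    # the paper, not the authors' implementation). as_paths is a list of AS
    # paths, each a list of AS numbers taken from BGP routing table entries.
    from collections import defaultdict

    def infer_relationships(as_paths):
        # Phase 1: approximate each AS's degree from adjacencies in the paths
        neighbors = defaultdict(set)
        for path in as_paths:
            for a, b in zip(path, path[1:]):
                neighbors[a].add(b)
                neighbors[b].add(a)
        degree = {asn: len(n) for asn, n in neighbors.items()}

        # Phase 2: treat the highest-degree AS on each path as the top provider
        # and record which AS appears to provide transit for which
        transit = defaultdict(set)   # transit[a] holds b if a provides transit to b
        for path in as_paths:
            if len(path) < 2:
                continue
            top = max(range(len(path)), key=lambda i: degree[path[i]])
            for i in range(top):                   # uphill part of the path
                transit[path[i + 1]].add(path[i])
            for i in range(top, len(path) - 1):    # downhill part of the path
                transit[path[i]].add(path[i + 1])

        # Phase 3: label each AS pair using the transit annotations
        labels = {}
        for a in neighbors:
            for b in neighbors[a]:
                if b in transit[a] and a in transit[b]:
                    labels[(a, b)] = "sibling-to-sibling"
                elif b in transit[a]:
                    labels[(a, b)] = "provider-to-customer"   # a is b's provider
        return labels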

The basic algorithm assumes that there are no misconfiguration errors. They also propose a refined algorithm that accounts for any misconfiguration errors. Furthermore, they propose another algorithm that infers peering relationships in addition to the provider-customer and sibling relationships that the previous 2 infer.

Finally, they implement all 3 algorithms and using AT&T's internal data and the WHOIS service, they determine that their algorithms are fairly accurate.

Criticisms & Questions

I think this was an interesting paper. I think they did a decent job of explaining the algorithms and their experimental results. I would, however, have liked them to expand more on the motivation for their work. They spend a lot of time talking about how BGP works and what each of the different relationships are. However, they only briefly mention several use cases for the augmented graph representation they present. And, with those few examples, I find it hard to understand why any ISP would need to know the entire representation if they wanted to find out information about a specific ISP. If it's just one ISP, can't they just ask them?

On Power-Law Relationships of the Internet Topology

Summary

This paper characterizes the structure of the Internet between Nov. 1997 and Dec. 1998. The authors find that power-laws fit the topology remarkably well. The power-laws they derive can be used to estimate further characteristics of the network, such as average neighborhood size, and can be used to help design protocols.

During the period they studied, the Internet grew by 45%. Unless this period was a transition phase for the Internet, the authors believe the power-laws they came up with should continue to hold in the future.

Some characteristics of the Internet during the time of their study are:
  • Nov. 1997: 3015 nodes, 5156 edges, 3.42 avg. outdegree
  • April 1998: 3530 nodes, 6432 edges, 3.65 avg outdegree
  • Dec. 1998: 4389 nodes, 8256 edges, 3.76 avg outdegree
They propose 3 power-laws:
  • Power-Law 1 (rank exponent): The outdegree, d_v, of a node v, is proportional to the rank of the node, r_v, to the power of a constant, R (see the fitting sketch after this list).
  • Power-Law 2 (outdegree exponent): The frequency, f_d, of an outdegree, d, is proportional to the outdegree to the power of a constant, O.
  • Power-Law 3 (eigen exponent): The eigenvalues, lambda_i, of a graph are proportional to the order, i, to the power of a constant.
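As a rough illustration of how such an exponent can be estimated (not necessarily the authors' exact methodology), one can fit a least-squares line to the log-log plot of outdegree versus rank:

    # Generic sketch: estimate the rank exponent R as the slope of a
    # least-squares line fit to log(outdegree) vs. log(rank).
    import math

    def rank_exponent(outdegrees):
        d = [x for x in sorted(outdegrees, reverse=True) if x > 0]  # rank 1 = largest
        xs = [math.log(rank) for rank in range(1, len(d) + 1)]
        ys = [math.log(deg) for deg in d]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope   # negative for a decaying power-law; this estimates R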
Finally, they suggest several practical applications of these power-laws, including assessing the realism of synthetic graphs, analyzing the worst-case behavior of protocols, and answering design questions in different scenarios.

Criticisms & Questions

This was an interesting paper, especially since it was one of the first to characterize the topology of the Internet. Seeing as this was a decade ago, I am very curious to find out whether these power-laws still hold. Since the way the Internet is used has changed in the last 10 years, I would be really surprised if they were still true.

The authors mention several practical applications for the power-laws. I wonder if anyone has used these power-laws to inform any future research.

Tuesday, October 20, 2009

Interactive WiFi Connectivity for Moving Vehicles

Summary

This paper presents an evaluation of 6 different handoff protocols for WiFi in moving vehicles. It also proposes ViFi, a protocol that exploits basestation diversity to minimize disruptions and support interactive applications for mobile clients.

For the evaluation, they test these protocols in 2-month-long experiments on 2 platforms. One is VanLAN, which consists of 11 basestations and 2 vehicles used for shuttle service around the Microsoft campus in Redmond. The other consists of vehicles in Amherst, one of which logs all the beacons heard from the surrounding BSes.

They test the following 2 applications: VoIP and short TCP transfers.

When evaluating the different handoff protocols, they use 2 metrics:
1. aggregate performance: considers total number of packets delivered and the total time or distance over which the vehicle is connected.
2. periods of uninterrupted connectivity: measures contiguous time intervals when the performance of the application is above a certain threshold.

The 6 handoff policies evaluated are:
1. RSSI
2. BRR
3. Sticky
4. History
5. BestBS
6. AllBSes

When testing performance as a function of basestation density, they found that every protocol's performance increases with density but that the relative performance of the protocols stays the same throughout. AllBSes is found to have the best performance, while History, RSSI and BRR perform similarly in most cases.

When looking at uninterrupted connectivity results, they found that AllBSes had the fewest regions of inadequate connectivity (because it uses multiple BSes), while BestBS had a few more and BRR had the most.

Then, the authors explain the design and implementation of ViFi. ViFi, motivated partly by AllBSes, leverages basestation diversity to reduce disruptions. ViFi designates one of the BSes as the anchor, which is responsible for connectivity to the Internet; the other BSes in range are designated as auxiliaries.
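My loose reading of that relaying idea, as an illustrative sketch (the probability rule below is a simplification of the paper's computation, and the names are mine):

    # Simplified sketch of ViFi-style opportunistic relaying by an auxiliary
    # basestation. The probability rule is a rough approximation of the idea
    # (make roughly one relay likely across auxiliaries), not the exact formula.
    import random

    def should_relay(overheard, ack_heard, my_delivery_prob, peer_delivery_probs):
        """Decide whether this auxiliary relays an overheard, unacknowledged packet.

        my_delivery_prob:    estimated probability this auxiliary can reach the destination
        peer_delivery_probs: the same estimates for the other auxiliaries that
                             likely overheard the packet
        """
        if not overheard or ack_heard:
            return False
        total = my_delivery_prob + sum(peer_delivery_probs)
        if total <= 0:
            return False
        # split the relaying responsibility so roughly one relay is expected
        p = min(1.0, my_delivery_prob / total)
        return random.random() < p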

The authors evaluate ViFi and find that the link-layer performance of ViFi is close to ideal. ViFi has a two-fold application performance improvement over current handoff methods.

Criticism & Questions

I liked reading this paper. It did a great job of evaluating the current protocols and then building off one of those protocols to come up with ViFi. They also had a solid implementation and evaluation of ViFi.

What I am most curious about is the practical application of WiFi connectivity in moving vehicles. They motivate their work by mentioning the growing need and opportunity. I would have liked them to expand more on that and possible current use cases of this technology.

In addition, I would like to know more about how well cities are actually covered by WiFi. When I think about the possible uses for interactive WiFi, I would imagine it would be most useful on longer trips, which generally means crossing multiple cities. If WiFi is unavailable in many of the cities along the path, that would limit the practical application of this technology.

White Space Networking with Wi-Fi like Connectivity

Summary

This paper proposes the first network prototype demonstrating the feasibility of WiFi-like networking over UHF white spaces.

The authors contribute a new spectrum assignment algorithm that handles variable channel widths during communication, a new AP discovery mechanism built on a technique called SIFT, and a novel method for seamlessly handling unexpected disconnections.

There are several properties of the UHF white space spectrum that make networking difficult. First is spatial spectrum variation, which means the AP must gather information from its clients before deciding which channel to operate on. Second is spectrum fragmentation, which creates several problems in choosing a channel. Finally, there is temporal variation, largely due to interference from wireless microphones.

The authors built the KNOWS hardware platform to address some of the problems with white space networking. The 2 main features it adds are variable channel widths and Signal Inspection before Fourier Transform (SIFT). Variable channel widths allow the channel bandwidth to be greater than 5 MHz. SIFT addresses the problems posed by intermittent data transmissions and by transmissions sent over different channel widths.

They describe the details of the spectrum assignment algorithm, which takes an adaptive approach: it chooses a channel and periodically re-evaluates that choice based on feedback from the clients.
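A hedged sketch of what such an adaptive assignment loop could look like; the scoring function and the candidate widths below are placeholders, not the paper's actual metric:

    # Placeholder sketch of adaptive spectrum assignment: combine the clients'
    # reports of free UHF channels and pick a (start channel, width) block.
    # The scoring rule and width choices are my own stand-ins, not the paper's.
    def choose_channel(client_free_maps, widths=(1, 2, 4)):
        """client_free_maps: one set of free UHF channel numbers per client."""
        if not client_free_maps:
            return None
        # a block is usable only if every client reports all of its channels free
        commonly_free = set.intersection(*client_free_maps)
        best, best_score = None, -1.0
        for start in sorted(commonly_free):
            for width in widths:
                block = set(range(start, start + width))
                if not block <= commonly_free:
                    continue
                score = width          # placeholder: prefer wider usable blocks
                if score > best_score:
                    best, best_score = (start, width), score
        return best   # re-run periodically as fresh client feedback arrives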

Finally, they perform an evaluation of WhiteFi, using both simulations and experiments on the prototype. They found SIFT to be quite accurate, with a worst-case loss of 2%. When testing SIFT at low signal strengths, they found that it performs well until it hits a threshold of 96 dB. They evaluated L-SIFT and J-SIFT for discovering APs and found that L-SIFT outperforms J-SIFT for narrow white spaces, while J-SIFT is much more efficient for wider white spaces, and both outperform the baseline algorithm. They also tried this in more realistic settings and found that J-SIFT still outperformed the baseline by 34%. They found that it took about 4 seconds to reconnect following a disconnection. Simulations of the spectrum assignment algorithm indicate that it yields reasonably accurate predictions.

Criticism & Questions

I think this was an interesting paper. I was not too familiar with white space networking, so this was a good introduction, especially with all the background information they provided about the problems involved.

They explained the setup and results of their evaluations very clearly for the most part. However, they didn't spend much time on handling disconnections, only briefly mentioning that there was a lag of at most 4 seconds before reconnection. Without any baseline value to compare that to, I'm unsure how much of a performance gain that is.

Thursday, October 15, 2009

XORs in The Air: Practical Wireless Network Coding

Summary

This paper proposes COPE, a new architecture for wireless mesh networks. It uses packet coding in order to reduce the number of required transmissions. Like ExOR, it takes advantage of the broadcast nature of the wireless channel.

The packet coding algorithm never delays a packet in order to code it. It gives preference to XORing packets of similar length, never XORs packets headed to the same nexthop, and tries to limit reordering of packets from the same flow. Finally, it ensures that each neighbor that receives a coded packet has a high probability of being able to decode it.
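A rough sketch of how such a coding decision could be made, starting from the head-of-queue packet and greedily adding packets that every next hop can decode (the threshold and data structures are my own illustration, not the paper's implementation):

    # Sketch of a COPE-style coding decision. has_prob(neighbor, pkt) is an
    # assumed estimate (from reception reports / delivery probabilities) of the
    # chance that `neighbor` already overheard `pkt`. The 0.8 threshold is a
    # placeholder of mine.
    DECODE_PROB_THRESHOLD = 0.8

    def pick_packets_to_code(queue, has_prob):
        """queue: output queue of packets, each with a .next_hop attribute."""
        if not queue:
            return []
        coded = [queue[0]]                  # never delay: always send the head packet
        for candidate in queue[1:]:
            trial = coded + [candidate]
            if len({p.next_hop for p in trial}) != len(trial):
                continue                    # never XOR packets headed to the same next hop
            # every next hop must be able to decode: with high probability it
            # already has every packet in the XOR except its own
            decodable = all(
                all(has_prob(p.next_hop, q) >= DECODE_PROB_THRESHOLD
                    for q in trial if q is not p)
                for p in trial
            )
            if decodable:
                coded = trial
        return coded                        # XOR the payloads and broadcast once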

They use "coding gain" as the performance metric for COPE. Coding game is the ratio of the number of transmissions required by the current non-coding approach, to the minimum number of transmissions used b COPE to deliver the same set of packets.

COPE was implemented on a 20-node wireless testbed over 3 topologies and was found to achieve a coding gain of about 1.33. The cross topology over TCP was found to be a little lower than expected, which they attributed to header overhead, imperfect overhearing and an asymmetry in the throughputs of the 4 flows.

Finally, they mention fairness and claim that when packets are coded, increasing fairness will increase the overall throughput of the network.

Criticism & Questions

I thought this paper was interesting; it is definitely another clever way to make use of the broadcast nature of wireless. I'm not sure if COPE could be used together with ExOR, maybe if ExOR were modified to accommodate multiple flows at once. But if not, I would really like to see a side-by-side comparison of the two to determine which has better performance.