Summary
This paper explores an alternative fair queueing algorithm that requires much less complexity in the routers. The authors are attempting to get the fairness that results from FQ and DRR without storing any per-flow state in the core routers. They propose a distributed algorithm in which the edge routers label packets with an estimate of each flow's arrival rate, and the core routers simply use FIFO queueing along with those labels to decide which packets to drop. This means that only the edge routers have to do the expensive per-flow work. The authors describe in detail the algorithms used to estimate the flow arrival rate, the link fair rate, and the label values, as well as a weighted variant to support flows with different weights. They derive performance bounds showing that ill-behaved users can only exploit the system for short amounts of time.
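To make the core-router behavior concrete, here is a minimal sketch of the drop-and-relabel step, assuming the edge router has already stamped the packet with its flow's estimated arrival rate and that the link's fair-share rate has been estimated elsewhere (the rate-estimation machinery from the paper is omitted, and the names are mine):

```python
import random

def csfq_core_forward(packet_label, fair_rate_alpha):
    """Core-router decision for one packet (sketch of CSFQ's probabilistic drop).

    packet_label    -- flow arrival-rate estimate stamped by the edge router
    fair_rate_alpha -- the link's current fair-share rate estimate

    Returns the (possibly relabeled) label if the packet is forwarded,
    or None if it is dropped.
    """
    # Drop with probability max(0, 1 - alpha / label): flows sending at or
    # below the fair rate are never dropped; faster flows are dropped just
    # often enough to bring their forwarded rate down to roughly alpha.
    drop_prob = max(0.0, 1.0 - fair_rate_alpha / packet_label)
    if random.random() < drop_prob:
        return None
    # Forwarded packets are relabeled so downstream routers see the
    # post-drop rate rather than the original arrival rate.
    return min(packet_label, fair_rate_alpha)
```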
They run several simulations comparing CSFQ to four other algorithms. Two of them, FIFO and RED, serve as baselines that make no attempt at fairness. The other two, FRED and DRR, represent two approaches to fairness, with DRR serving as the benchmark for optimal fairness. The results show that CSFQ achieves a reasonable level of fairness: it is much better than FIFO and RED, roughly equivalent to FRED in many cases while requiring less complexity, and generally somewhat worse than DRR.
The authors finish with an explanation of the unfriendly flow problem, two approaches to solving it, and ways to punish ill-behaved hosts.
Criticism & Questions
A large part of the CSFQ algorithm involves the edge routers labeling packets; however, the authors don't explain where in the packet header they plan to put the label.
When looking at weighted CSFQ, they say that the algorithm can't accommodate flows whose weights differ from island to island. It would be good to know how often this occurs and whether it's a practical assumption (especially since they suggest an entire ISP could be one island).
When explaining the unfriendly flow problem, they mention that the percentage of noncooperative hosts on the network is unknown. Many congestion control and flow control mechanisms are motivated by the assumption that there are many noncooperative hosts in the network, so it would be very beneficial to be able to quantify this.
Wednesday, September 9, 2009
Analysis and Simulation of a Fair Queueing Algorithm
Summary
This paper describes a fair queueing (FQ) algorithm based on Nagle's fair queueing proposal. The authors set forth 3 quantities they want to measure and control: bandwidth, promptness and buffer space. The main requirement they want to meet is fair allocation of bandwidth and buffer space, and they use the max-min criterion to define fairness. They also discuss what a "user" could be and decide that a user will be a source-destination pair.
They describe the algorithm in detail; it emulates a bit-by-bit round-robin discipline, and they claim it asymptotically approaches the same fair bandwidth allocation. They also describe an extension that gives extra promptness to users consuming less than their share of the bandwidth. In addition, when the queue becomes full, the last packet of the user occupying the most buffer space is dropped, which essentially penalizes ill-behaved hosts.
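As a rough illustration of how the emulation works, here is a sketch of the finish-number bookkeeping; it assumes the round number R(t) of the bit-by-bit emulation is maintained elsewhere and ignores the promptness and buffer-management extensions, so the names and structure are mine rather than the paper's:

```python
import heapq

class FQScheduler:
    """Sketch of fair-queueing packet selection via per-flow finish numbers."""

    def __init__(self):
        self.last_finish = {}   # flow id -> finish number of its last packet
        self.queue = []         # min-heap of (finish number, flow id, size)

    def enqueue(self, flow, packet_bits, round_number):
        # F_i = max(F_{i-1}, R(t)) + packet length: a flow that has been idle
        # starts from the current round, an active flow continues where it
        # left off, so no flow can bank credit by staying silent.
        start = max(self.last_finish.get(flow, 0.0), round_number)
        finish = start + packet_bits
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, flow, packet_bits))

    def dequeue(self):
        # Transmit packets in increasing order of finish number, which
        # approximates sending them in the order bit-by-bit round robin
        # would have completed them.
        return heapq.heappop(self.queue) if self.queue else None
```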
Finally, the authors run several simulations to compare FQ to FCFS under various flow control algorithms. They look at 4 performance criteria: total throughput, average packet RTT, number of packet retransmissions and number of dropped packets. The results show that FQ works best when paired with the JK (Jacobson-Karels) flow control algorithm, giving end hosts an incentive to implement more intelligent flow control.
Criticism & Questions
When choosing their definition of user, they mention the drawback that an ill-behaved host could open multiple connections to different destinations in order to congest the network. However, they didn't try this scenario in the simulations; that would have been interesting to see.
They also put forth several alternative definitions of user and mention that the definition can be easily changed in their algorithm. The algorithm would be more compelling if they ran their simulations with each type of user, to show whether it works equally well for all of them or better or worse for some definitions.
Labels: fair queueing, fcfs, first come first served, max-min fairness, queueing
Tuesday, September 8, 2009
Congestion Avoidance and Control
Summary
This paper provides some simple algorithms to deal with congestion avoidance and control, and gives examples of what happens with and without them. Specifically, the authors put forth 7 new algorithms, including RTT variance estimation, exponential retransmit timer backoff, slow-start, a more aggressive receiver ack policy, and dynamic window sizing on congestion. They determine that the main cause of congestion collapse is a violation of the conservation-of-packets principle, which can happen when (i) the connection fails to reach equilibrium, (ii) a sender injects a new packet before an old one has exited, or (iii) equilibrium can't be reached because of resource limits along the path.
To solve the first problem, they propose the slow-start algorithm, which increases the congestion window by one packet each time an ack for new data arrives. The second problem is usually caused by an incorrect implementation of the retransmit timer; the authors say that failing to estimate the variation in RTT is the usual culprit, and they provide suggestions on how to improve the estimator. They also state that exponential backoff is the best scheme for backing off after a retransmit. Finally, for the third problem, they suggest treating a retransmit timeout as a signal that the network is congested. Once the sender knows this, it should use additive increase/multiplicative decrease to adjust its congestion window size.
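As an illustration of the RTT-variance idea, here is a sketch of a mean/deviation estimator in the spirit of the paper; the gain constants and the factor of 4 on the deviation are the commonly cited choices, not necessarily the exact ones in every implementation:

```python
def update_rto(srtt, rttvar, measured_rtt, alpha=1/8, beta=1/4):
    """One step of a smoothed-RTT / mean-deviation estimator (sketch).

    srtt, rttvar -- current smoothed RTT and smoothed mean deviation
    measured_rtt -- the new RTT sample
    alpha, beta  -- EWMA gains; 1/8 and 1/4 are the commonly used values
                    (chosen so the filters reduce to shifts in integer code)

    Returns (new_srtt, new_rttvar, rto).
    """
    err = measured_rtt - srtt
    srtt = srtt + alpha * err                      # low-pass filter on the mean
    rttvar = rttvar + beta * (abs(err) - rttvar)   # filter on the deviation
    # Using the measured deviation instead of a fixed multiple of the mean
    # keeps the timer from firing spuriously when the RTT varies under load.
    rto = srtt + 4 * rttvar
    return srtt, rttvar, rto
```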
The authors close by describing their future work, which involves using the gateways to ensure fair sharing of network capacity. They argue that the gateway should 'self-protect' against misbehaving hosts. They finish by mentioning work on detecting congestion early, since the earlier it is detected the faster it can be fixed.
Criticisms & Questions
I liked that they talked about how to deal with uncooperative hosts and suggested gateways as a possible place to automatically police them. I would like to know more about this mechanism and, if it's been implemented, whether it works in the real world.
I also liked that they point out that congestion avoidance and slow-start are 2 different algorithms with 2 different objectives. I was a little confused about that myself and was glad to have that cleared up.
Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks
Summary
The authors of this paper aim to find the "best" congestion avoidance algorithm. The criteria they use to evaluate an algorithm are efficiency, fairness, convergence time, and the size of the oscillations at convergence. Congestion avoidance algorithms aim to keep the load oscillating around the "knee" of the throughput curve. The authors study a family of "increase/decrease" algorithms driven by a binary feedback scheme: the sender only learns whether the network is overloaded or not. If the feedback says the network is overloaded, the sender should decrease its load, and if it is underloaded, the sender should increase it. The 4 main combinations they analyze are 1) additive increase, additive decrease, 2) additive increase, multiplicative decrease, 3) multiplicative increase, additive decrease, and 4) multiplicative increase, multiplicative decrease. The authors then lay out a framework to determine whether and when a control converges to efficiency and fairness, and they work out which of the 4 policies minimizes convergence time and the size of the oscillations. They conclude that additive increase with multiplicative decrease is the optimal policy when considering efficiency and fairness.
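To see why this combination converges, here is a toy two-user simulation of additive increase/multiplicative decrease under synchronous binary feedback; the parameter values are arbitrary and chosen only for illustration:

```python
def simulate_aimd(x1, x2, capacity, steps=200, a=1.0, b=0.5):
    """Toy simulation of additive-increase/multiplicative-decrease with
    synchronous binary feedback (an illustration, not the paper's model).

    x1, x2   -- initial rates of two users sharing one bottleneck
    capacity -- the efficiency target (the "knee")
    a, b     -- additive increment and multiplicative decrease factor
    """
    history = []
    for _ in range(steps):
        if x1 + x2 > capacity:
            # Overload bit set: both users back off multiplicatively, which
            # shrinks the *difference* between them and drives fairness.
            x1, x2 = b * x1, b * x2
        else:
            # Underload: both users add the same constant, which raises
            # total utilisation without changing the difference.
            x1, x2 = x1 + a, x2 + a
        history.append((x1, x2))
    return history

# Starting far apart, the rates oscillate around the knee while converging
# toward the fair share of roughly capacity/2 each.
print(simulate_aimd(1.0, 40.0, capacity=50.0)[-1])
```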
Criticisms & Questions
One assumption the authors make is that "all users receive the same feedback and react to it". I would like to know if this assumption holds in the real world. I'm curious whether senders are actually required to follow congestion control feedback or whether there are ways to bypass it. And if it is possible to bypass it, are there mechanisms in the congestion control protocol that can rein in these uncooperative senders?
Feedback
I really liked this paper. It was easy to read and very clear in laying out its plan and its logic.
Thursday, September 3, 2009
Understanding BGP Misconfiguration
Summary
This paper presents a quantitative study and analysis of BGP misconfiguration errors. The authors spent 3 weeks analyzing routing table advertisements to track potential misconfigurations. After the 3 weeks, they used the final routing tables to poll a random host within each AS to see if it was reachable; if it wasn't, they emailed the network operators to get more information about the misconfiguration.
Their study focuses on 2 types of misconfigurations: origin and export. An origin misconfiguration occurs when an AS accidentally injects a route into the global BGP tables. To detect these, they looked at short-lived routes, on the assumption that an operator would quickly correct a configuration error. An export misconfiguration occurs when an AS accidentally exports a route to a neighbor in violation of the AS's policy. To detect these, they used Gao's algorithm to infer the relationships between ASes and then looked for short-lived paths that violated the "valley free" property.
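As an illustration of the export check, here is a sketch of a valley-free test over an AS path, assuming the business relationships have already been inferred (e.g., by Gao's algorithm); the relationship labels are my own, not the paper's:

```python
def is_valley_free(path, rel):
    """Check the "valley-free" property for an AS path (a sketch).

    path -- list of ASes along the path, e.g. ["A", "B", "C"]
    rel  -- rel[(x, y)] gives the relationship on the link x -> y:
            "c2p" (customer to provider), "p2p" (peering), or
            "p2c" (provider to customer)

    A valid path is zero or more c2p links, at most one p2p link, then
    zero or more p2c links; anything else implies some AS is giving
    free transit and is flagged as a potential export misconfiguration.
    """
    phase = "up"                       # climbing toward the top of the path
    for x, y in zip(path, path[1:]):
        r = rel[(x, y)]
        if r == "c2p":
            if phase != "up":
                return False           # climbing again after the peak: a valley
        elif r == "p2p":
            if phase != "up":
                return False           # at most one peering link, at the top
            phase = "down"
        else:                          # "p2c"
            phase = "down"
    return True
```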
Once they had the potential misconfigurations, they needed to confirm them. They emailed the network operators they could reach to ask whether each incident was in fact a misconfiguration, and they further confirmed it by trying to reach random hosts in each AS to see whether the AS was still reachable.
In their results, they found that at least 72% of the new routes seen in a day resulted from an origin misconfiguration, but only about 4% of those resulted in a loss of connectivity. In terms of duration, over half lasted less than 10 minutes and 80% were corrected within an hour.
The authors then explain the various causes of origin misconfiguration (initialization bugs, reliance on old configuration, hijacks, forgotten filters, etc.) and export misconfiguration (prefix-based configuration, bad ACLs).
Finally, the authors describe several ways to mitigate misconfiguration. The first is improving the human interface design using principles such as safe defaults and a large edit distance between wrong and correct configurations. The second is to give operators a high-level language for configuring routers instead of the error-prone low-level one. The third is to perform more consistency checks on the data directly in the routers. The last is to extend the protocol itself, as S-BGP does.
Criticisms & Questions
One of their statistics is that about 75% of all new prefix advertisements were the result of misconfiguration, yet only 1 in 25 affected connectivity. I'm curious what happens to the remaining advertisements that don't affect connectivity. Are they simply ignored by routers and never inserted into the BGP tables, or are they accepted and just happen not to disrupt reachability? I was fairly surprised that such a high fraction of new advertisements results from misconfiguration.
When discussing the causes of the misconfigurations, the authors focus on slips and mistakes, neither of which is malicious. However, I would like to know whether, and how many of, the observed incidents were actually deliberate attacks. It seems like an attacker or a shady operator could intentionally cause problems that look like misconfigurations.
Feedback
I enjoyed reading the paper, and the authors did a good job of answering many of the concerns I had as I read.
Interdomain Internet Routing
Summary
This paper focuses on BGP (Border Gateway Protocol) - both the technical details of the protocol as well as the policy that controls the flow of traffic.
BGP is the interdomain routing protocol used by routers at the boundaries of ISPs to share routing information. What actually controls the flow of traffic are the policies put in place by ASes. An AS is typically owned by a commercial entity and determines its policies largely for financial reasons as well as its customers' needs. Within its own network, each AS runs its own Interior Gateway Protocol (IGP).
There are 2 types of inter-AS relationships: transit and peering. A transit relationship usually involves a financial settlement from one AS (the customer) to the other (the provider). A peering relationship, on the other hand, involves no financial settlement; peering agreements are usually struck between business competitors and give each AS reciprocal access to a subset of the other's routing tables.
An AS must decide which routes it will export to its neighboring ASes. Exporting a route means the AS agrees to carry all traffic destined for that prefix. Because an ISP agrees to provide its customers with access to the entire Internet, it exports routes to its customers first and most liberally. In addition, it exports the paths to its own customers to the other ASes, and it will also export some peering routes.
When deciding which routes to import, an AS will always prefer routes learned from a customer over those from a peer, and those over routes from a provider. This import ranking is typically implemented by setting the LOCAL PREF attribute on routes as they are learned.
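As a rough illustration of how this preference is usually realised, here is a sketch that assigns LOCAL PREF values by business relationship and then picks the best route; the specific numbers are made up, only their ordering matters:

```python
# Hypothetical LOCAL PREF values; real operators pick their own numbers,
# only the ordering customer > peer > provider matters.
LOCAL_PREF_BY_RELATIONSHIP = {
    "customer": 200,   # routes learned from customers: they pay us
    "peer":     100,   # peering routes: free, but only for mutual traffic
    "provider":  50,   # provider routes: we pay for this transit
}

def assign_local_pref(route):
    """Tag an imported route so the BGP decision process (highest LOCAL PREF
    wins before AS-path length is considered) realises the
    customer > peer > provider policy."""
    route["local_pref"] = LOCAL_PREF_BY_RELATIONSHIP[route["learned_from"]]
    return route

def best_route(candidates):
    # Sketch of the first two steps of route selection: prefer higher
    # LOCAL PREF, then the shorter AS path.
    return min(candidates, key=lambda r: (-r["local_pref"], len(r["as_path"])))
```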
BGP itself is a simple protocol, but it is designed so that policy can be layered on top of it. It was designed to meet 3 goals: scalability, policy expression, and cooperation under competitive circumstances. When a BGP session comes up, a router initially shares the subset of routes it is willing to advertise to its neighbor; after that, it simply sends incremental "update" and "withdraw" messages.
There are 2 types of BGP sessions: iBGP and eBGP. eBGP sessions run between BGP routers in different ASes, while iBGP sessions run between routers within the same AS. iBGP is used to distribute externally learned routes within the AS.
The paper then details the various attributes in a BGP route announcement. It also goes into some of the security problems with BGP, giving a real-world example involving Pakistan Telecom and YouTube.
Criticisms & Questions
One thing that remained unclear to me throughout the paper was the relationship between an AS and an ISP. I couldn't figure out whether an ISP is made up of several ASes, whether an AS can span several ISPs, whether an ISP simply is an AS, etc. I would have liked the paper to make that clear when it introduced ASes.
The paper went into detail about the "full-mesh" implementation of iBGP and offered route reflectors and confederations of BGP routers as 2 alternatives. I would like to know which of these implementations is most commonly used in practice. It also wasn't clear to me whether the iBGP implementation is something each AS chooses on its own, and is therefore independent of any other AS's iBGP.
The lack of origin authentication really surprised me. It seems that "hijacking" routes is not very hard to do. If that is the case, why aren't more attackers taking advantage of it (possibly to launch a DoS attack)? And are there any other measures being taken to protect against this vulnerability aside from the poorly maintained registries?
Feedback
I would say definitely keep this paper in the syllabus. It helped me to understand a lot more about the reasoning behind the inter-AS policies. It's also nice to see the clear contrast between what's technically possible and ideal and what actually happens in practice.
Tuesday, September 1, 2009
The Design Philosophy of the DARPA Internet Protocols
Summary
This paper describes the reasoning that went into creating the major internet protocols. Many people know the "what" of the protocols; this paper explains the "why". The 2 main protocols explored in this paper are IP and TCP.
The author explains that the fundamental goal of the internet was to connect the many smaller networks that existed at the time. Using several assumptions and their knowledge of existing networks, the designers came up with the following design: a network made up of several interconnected packet-switched networks, joined together by store-and-forward gateways.
The author lists the 7 second-level goals of the internet in order of importance and stresses that this ordering strongly reflects the internet's intended use at the time, which was a military context. This explains why survivability is so high on the list and accountability so low.
The author goes on to explain each goal in depth. Survivability made it necessary for state to be stored somewhere in the network; the designers took the "fate-sharing" approach and stored that state in the hosts. IP and TCP, which were originally designed as one protocol, were split in order to support many different types of services, and this led to the datagram becoming the basic building block. The internet is able to support many different network technologies because it makes a very minimal set of assumptions about what the underlying networks will provide. The paper then goes over the remaining goals and explains how some of the main problems we have with the internet today stem from those goals being low on the priority list and therefore not well supported.
The paper emphasizes that a datagram is not a service in itself, but rather was designed to be a building block. It also includes a discussion of TCP and some of the issues caused by counting bytes rather than packets. The paper finishes with an argument for a new building block for the "next generation of architecture", called flows, which would better address the goals of resource management and accountability.
Criticisms & Questions
I thought it was interesting how the author mentions "the next generation of architecture". The way he talks about it, it almost seems like there was a plan to redesign and rebuild the internet from scratch. It would be really interesting to know if at that time people did believe it was possible to switch over to a new, redesigned internet.
It would also be really interesting to know what metrics were used to evaluate the various goals, particularly the second-level ones. Most of these goals are very hard to measure, so I imagine it would be difficult to determine whether, or to what extent, a goal was achieved.
Feedback
I really enjoyed reading this paper. It was great to get more insight into the designers' considerations when designing the internet. It definitely teaches one to look at things from their perspective before criticizing the design of the Internet. I would agree with keeping this paper in the syllabus.