
 

I need the following after reviewing the paper:

Problem Statement – the issues discussed by the authors

Approach & Design – how the authors approach the issue and what ideas they propose

Strengths and Weaknesses – the strengths and weaknesses of the proposed approach and design, and of the paper itself: what are the key strengths of the authors' proposed system, and what are its weaknesses?

Evaluation (Performance) – how the authors evaluated the proposed system and what parameters they used to test its performance

Conclusion (from the reader's perspective)

Along with these, I need to have a detailed explanation of the paper section-wise:

The sections are:

Abstract

Introduction

DNS: OPERATION AND PROBLEMS 

COOPERATIVE DOMAIN NAME SYSTEM 

EVALUATION 

Comparison with related work

Summary

Conclusion

The Design and Implementation of a Next Generation Name Service for the Internet

Venugopalan Ramasubramanian and Emin Gün Sirer

Dept. of Computer Science, Cornell University, Ithaca, NY 14853

{ramasv,egs}@cs.cornell.edu

ABSTRACT

Name services are critical for mapping logical resource names to physical resources in large-scale distributed systems. The Domain Name System (DNS) used on the Internet, however, is slow, vulnerable to denial of service attacks, and does not support fast updates. These problems stem fundamentally from the structure of the legacy DNS.

This paper describes the design and implementation of the Cooperative Domain Name System (CoDoNS), a novel name service, which provides high lookup performance through proactive caching, resilience to denial of service attacks through automatic load-balancing, and fast propagation of updates. CoDoNS derives its scalability, decentralization, self-organization, and failure resilience from peer-to-peer overlays, while it achieves high performance using the Beehive replication framework. Cryptographic delegation, instead of host-based physical delegation, limits potential malfeasance by namespace operators and creates a competitive market for namespace management. Backwards compatibility with existing protocols and wire formats enables CoDoNS to serve as a backup for legacy DNS, as well as a complete replacement. Performance measurements from a real-life deployment of the system on PlanetLab show that CoDoNS provides fast lookups, automatically reconfigures around faults without manual involvement, and thwarts distributed denial of service attacks by promptly redistributing load across nodes.

Categories and Subject Descriptors

C.2.4 [Computer-Communication Networks]: Distributed Systems—Domain Name System

1. INTRODUCTION

Translation of names to network addresses is an essential predecessor to communication in networked systems. The Domain Name System (DNS) performs this translation on the Internet and constitutes a critical component of the Internet infrastructure.


While the DNS has sustained the growth of the Internet through static, hierarchical partitioning of the namespace and widespread caching, recent increases in malicious behavior, an explosion in the client population, and the need for fast reconfiguration pose difficult problems. The existing DNS architecture is fundamentally unsuitable for addressing these issues.

The foremost problem with DNS is that it is susceptible to denial of service (DoS) attacks. This vulnerability stems from limited redundancy in nameservers, which provide name-address mappings and whose overload, failure, or compromise can lead to low performance, failed lookups, and misdirected clients. Approximately 80% of domain names are served by just two nameservers, and a surprising 0.8% by only one. At the network level, all servers for 32% of domain names are connected to the Internet through a single gateway, and can thus be compromised by a single failure. The top levels of the hierarchy are served by a relatively small number of servers, which serve as easy targets for denial of service attacks [5]. A recent DoS attack [29] on the DNS crippled nine of the thirteen root servers at that time, while another recent DoS attack on Microsoft's DNS servers severely affected the availability of Microsoft's web services for several hours [39]. DNS nameservers are easy targets for malicious agents, partly because approximately 20% of nameserver implementations contain security flaws that can be exploited to take over the nameservers.

Second, name-address translation in the DNS incurs long delays. Recent studies [41, 17, 19] have shown that DNS lookup time contributes more than one second to up to 30% of web object retrievals. The explosive growth of the namespace has decreased the effectiveness of DNS caching. The skewed distribution of names under popular domains, such as .com, has flattened the name hierarchy and increased load imbalance. The use of short timeouts for popular mappings, as commonly employed by content distribution networks, further reduces DNS cache hit rates. Further, manual configuration errors, such as lame delegations [30, 28], can introduce latent performance problems.

Finally, widespread caching of mappings in the DNS prohibits fast propagation of unanticipated changes. Since the DNS does not keep track of cached copies of mappings, it cannot guarantee cache coherency and instead relies on timeout-based invalidation of stale mappings. The lack of cache coherency in DNS implies that updates may not be visible to clients for extended periods of time, effectively preventing quick service relocation in response to attacks or emergencies.

Redesigning the legacy DNS from scratch provides an opportunity to address these shortcomings. A replacement for the DNS should exhibit the following properties.

• High Performance: Decouple the performance of DNS from the number of nameservers. Achieve lower latencies than legacy DNS and improve lookup performance in the presence of high loads and unexpected changes in popularity ("the slashdot effect").

• Resilience to Attacks: Remove vulnerabilities in the system and provide resistance against denial of service attacks through decentralization and dynamic load balancing. Self-organize automatically in response to host and network failures.

• Fast Update Propagation: Enable changes in name-address mappings to propagate to clients quickly. Support secure delegation to preserve the integrity of DNS records, and prohibit rogue nodes from corrupting the system.

This paper describes the Cooperative Domain Name System (CoDoNS), a backwards-compatible replacement for the legacy DNS that achieves these properties. CoDoNS combines two recent advances, namely structured peer-to-peer overlays and analytically informed proactive caching. Structured peer-to-peer overlays, which create and maintain a mesh of cooperating nodes, have previously been used to implement wide-area distributed hash tables (DHTs). While their self-organization, scalability, and failure resilience provide a strong foundation for robust large-scale distributed services, their high lookup costs render them inadequate for demanding, latency-sensitive applications such as DNS [19]. CoDoNS achieves high lookup performance on a structured overlay through an analytically driven proactive caching layer. This layer, called Beehive [33], automatically replicates DNS mappings throughout the network to match anticipated demand and provides a strong performance guarantee. Specifically, Beehive achieves a targeted average lookup latency with a minimum number of replicas. Overall, the combination of Beehive and structured overlays provides the requisite properties for a large-scale name service, suitable for deployment over the Internet.

Our vision is that globally distributed CoDoNS servers self-organize to form a flat peer-to-peer network, essentially behaving like a large, cooperative, shared cache. Clients contact CoDoNS through a local participant in the CoDoNS network, akin to a legacy DNS resolver. Since a complete takeover from DNS is an unrealistic undertaking, we have designed CoDoNS for an incremental deployment path. At the wire protocol level, CoDoNS provides full compatibility with existing DNS clients. No changes to client-side resolver libraries, besides changing the identities of the nameservers in the system configuration (e.g. modifying /etc/resolv.conf or updating DHCP servers), are required to switch over to CoDoNS. At the back end, CoDoNS transparently builds on the existing DNS namespace. Domain names can be explicitly added to CoDoNS and securely managed by their owners. For names that have not been explicitly added, CoDoNS uses legacy DNS to acquire the mappings. CoDoNS subsequently maintains the consistency of these mappings by proactively checking with legacy DNS for updates. CoDoNS can thus grow as a layer on top of legacy DNS and act as a safety net in case of failures in the legacy DNS.
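Concretely, the only client-side change the paper describes is repointing the resolver address. A minimal sketch of such a change (the addresses below are illustrative placeholders, not real CoDoNS nodes):

    # /etc/resolv.conf -- before: pointing at a legacy DNS resolver
    nameserver 192.0.2.53

    # /etc/resolv.conf -- after: pointing at a nearby CoDoNS participant
    nameserver 192.0.2.99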

Measurements from a deployment of the system on PlanetLab [3] using real DNS workloads show that CoDoNS can substantially decrease lookup latency, handle large flash crowds, and quickly disseminate updates. CoDoNS can be deployed either as a complete replacement for DNS, where each node operates in a separate administrative and trust domain, or as an infrastructure service within an ISP, where all nodes are in the same administrative and trust domain.

The peer-to-peer architecture of CoDoNS securely decouples namespace management from a server's location in the network and enables a qualitatively different kind of name service. Legacy DNS relies fundamentally on physical delegations, that is, query handoffs from host to host until the query reaches a set of designated servers considered authoritative for a certain portion of the namespace owned by a namespace operator. Since all queries that involve that portion of the namespace are routed to these designated servers, the namespace operator is in a unique position of power. An unscrupulous namespace operator may abuse this monopoly by modifying records on the fly, providing differentiated services, or even creating synthetic responses that redirect clients to their own servers. Nameowners bound to that namespace have no recourse. In contrast, name records in CoDoNS are tamper-proof and self-validating, and delegations are cryptographic. Any peer with a valid response can authoritatively answer any matching query. This decoupling of namespace management from the physical location and ownership of nameservers enables CoDoNS to delegate the same portion of the namespace, say .com, to multiple competing namespace operators. These operators, each provided with signing authority over the same space, assign names from a shared, coordinated pool and issue self-validating name bindings into the system. Since CoDoNS eliminates physical delegations and designated nameservers, it breaks the monopoly of namespace operators and creates an even playing field where namespace operators must compete with each other on service.
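To make the idea of a self-validating record concrete, the sketch below signs and verifies a name binding with an Ed25519 key, using the third-party Python cryptography package. The record layout and key handling here are invented for illustration; the paper does not specify this exact format.

    # A namespace operator signs a binding; any peer holding the operator's
    # public key can verify it without trusting the server that delivered it.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    operator_key = Ed25519PrivateKey.generate()       # operator's signing key
    record = b"www.cs.cornell.edu A 128.84.154.137"   # illustrative name binding
    signature = operator_key.sign(record)

    # Verification raises InvalidSignature if the record was tampered with:
    operator_key.public_key().verify(signature, record)

Tampering with either the record or the signature makes verification fail, which is what allows any peer, not just a designated server, to serve the record authoritatively.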

The rest of this paper is organized as follows. In the next section, we describe the basic operation of the legacy DNS and highlight its drawbacks. Section 3 describes the design and implementation of CoDoNS in detail. In Section 4, we present performance results from the PlanetLab deployment of CoDoNS. We summarize related work in Section 5, and conclude in Section 6.

2. DNS: OPERATION AND PROBLEMS

The Domain Name System (DNS) is a general-purpose database for managing host information on the Internet. It allows any kind of data, including network address, ownership, and service configuration, to be associated with hierarchically structured names. It is primarily used to translate human-readable names of Internet resources to their corresponding IP addresses. In this section, we provide a brief overview of the structure and operation of the legacy DNS, identify its major drawbacks, and motivate a new design.

2.1 Overview of Legacy DNS

The legacy DNS [26, 27] is organized as a static, distributed tree. The namespace is hierarchically partitioned into non-overlapping regions called domains. For example, cs.cornell.edu is a sub-domain of the domain cornell.edu, which in turn is a sub-domain of the top-level domain edu. Top-level domains are sub-domains of a global root domain.

[Figure 1: Name Resolution in Legacy DNS. Resolvers translate names to addresses by following a chain of delegations iteratively (steps 2-5) or recursively (steps 6-9). The figure shows a client's query for www.cs.cornell.edu passing from the client's resolver through a local intermediate nameserver to the root nameserver, the .edu gTLD nameserver, ns.cornell.edu, and finally the authoritative nameserver ns.cs.cornell.edu.]

Domain names, such as www.cs.cornell.edu, belong to nameowners.

Extensible data structures, called resource records, are used to associate values of different types with domain names. These values may include the corresponding IP address, mail host, owner name, and the like. The DNS query interface allows these records to be retrieved by a query containing a domain name and a type.

The legacy DNS delegates the responsibility for each domain to a set of replicated nameservers called authoritative nameservers. The authoritative nameservers of a domain manage all information for names in that domain, keep track of the authoritative nameservers of the sub-domains rooted at their domain, and are administered by namespace operators. At the top of the legacy DNS hierarchy are root nameservers, which keep track of the authoritative nameservers for the top-level domains (TLDs). The top-level domain namespace consists of generic TLDs (gTLDs), such as .com, .edu, and .net, and country-code TLDs (ccTLDs), such as .uk, .tr, and .in. Nameservers are statically configured with thirteen IP addresses for the root servers. BGP-level anycast is used in parts of the Internet to reroute queries destined for these thirteen IP addresses to a local root server.

Resolvers in the legacy DNS operate on behalf of clients to map queries to matching resource records. Clients typically issue DNS queries to local resolvers within their own administrative domain. Resolvers follow a chain of authoritative nameservers in order to resolve a query. The local resolver contacts a root nameserver to find the top-level domain nameserver. It then issues the query to the TLD nameserver and obtains the authoritative nameserver of the next sub-domain. The authoritative nameserver of the sub-domain replies with the response for the query. This process continues recursively or iteratively until the authoritative nameserver of the queried domain is reached. Figure 1 illustrates the different stages in the resolution of an example domain name, www.cs.cornell.edu. While this figure provides a simple overview of the communication involved in name resolution, in practice each query may trigger additional lookups to resolve intermediate nameservers [26, 27].
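The iterative walk described above can be summarized in a few lines of Python. This is a sketch, not a working resolver: query_nameserver is a hypothetical helper standing in for one UDP exchange with a nameserver, returning either a final answer or a referral to the nameservers one level down.

    ROOT_SERVERS = ["198.41.0.4"]  # a.root-servers.net, one of the thirteen roots

    def resolve_iteratively(name, qtype="A"):
        servers = list(ROOT_SERVERS)            # start the chain at the root
        while servers:
            answer, referral = query_nameserver(servers[0], name, qtype)
            if answer is not None:
                return answer                   # an authoritative server replied
            servers = referral or []            # follow the delegation downward
        raise LookupError("no authoritative answer for " + name)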

Bottlenecks    All Domains    Top 500
1              0.82 %         0.80 %
2              78.44 %        62.80 %
3              9.96 %         13.20 %
4              4.64 %         13.00 %
5              1.43 %         6.40 %
13             4.12 %         0 %

Table 1: Delegation Bottlenecks in Name Resolution. A significant number of names are served by two or fewer nameservers, even for the most popular 500 sites.

[Figure 2: Physical Bottlenecks in Name Resolution. A significant number of domains, including top-level domains, depend on a small number of gateways for their resolution. The plot shows the percentage of domains (y-axis, 0-40%) against the number of bottleneck gateways (x-axis, 1-10), for all domains, the top 500 domains, and ccTLDs.]

Pursuing a chain of delegations to resolve a query naturally incurs significant delay. The legacy DNS incorporates aggressive caching in order to reduce the latency of query resolution. Resolvers cache responses to the queries they issue, and use the cached responses to answer future queries. Since records may change dynamically, legacy DNS provides a weak form of cache coherency through a time-to-live (TTL) field. Each record carries a TTL assigned by the authoritative nameserver, and may be cached until the TTL expires.
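The resolver-side caching just described amounts to remembering each record until its TTL expires. A minimal sketch of such a cache (the record representation is arbitrary; real resolvers also cache negative answers and intermediate NS records):

    import time

    class TTLCache:
        """Weak coherency: entries silently expire; they are never invalidated."""

        def __init__(self):
            self._entries = {}  # (name, qtype) -> (record, absolute expiry time)

        def get(self, name, qtype):
            entry = self._entries.get((name, qtype))
            if entry and entry[1] > time.time():
                return entry[0]      # still fresh: answer from cache
            return None              # miss or expired: caller must re-resolve

        def put(self, name, qtype, record, ttl):
            self._entries[(name, qtype)] = (record, time.time() + ttl)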

2.2 Problems with Legacy DNS

The current use and scale of the Internet has exposed several shortcomings in the functioning of the legacy DNS. We performed a large-scale survey to analyze and quantify its vulnerabilities. Our survey explored the delegation chains of 593,160 unique domain names collected by crawling the Yahoo! and DMOZ.ORG web directories. These domain names belong to 535,088 unique domains and are served by 164,089 different nameservers. We also separately examined the 500 most popular domains as determined by the Alexa ranking service. In this section, we describe the findings of our survey, which highlight the problems in failure resilience, performance, and update propagation in the legacy DNS.

Failure Resilience – Bottlenecks

The legacy DNS is highly vulnerable to network failures, compromise by malicious agents, and denial of service attacks, because domains are typically served by a very small number of nameservers. We first examine delegation bottlenecks in DNS; a delegation bottleneck is the minimum number of nameservers in the delegation chain of a domain that need to be compromised in order to control that domain. Table 1 shows the percentage of domains that are bottlenecked on different numbers of nameservers. 78.63% of domains are restricted by two nameservers, the minimum recommended by the standard [26]. Surprisingly, 0.82% of domains are served by only one nameserver. Even highly popular domains are not exempt from severe bottlenecks in their delegation chains. Some domains (0.43%) spoof the minimum requirement by having two nameservers map to the same IP address. Overall, over 90% of domain names are served by three or fewer nameservers and can be disabled by relatively small-scale DoS attacks.
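The delegation-bottleneck metric itself is simple to compute once the delegation chain is known: it is the weakest link, i.e. the smallest set of distinct nameserver addresses at any level of the chain. A sketch over a made-up chain (the data structure and most addresses are illustrative, not the survey's):

    def delegation_bottleneck(chain):
        # chain: one set of distinct nameserver IPs per delegation level
        return min(len(ips) for ips in chain)

    chain = [
        {"198.41.0.4", "199.9.14.201", "192.33.4.12"},  # root (3 of 13 shown)
        {"192.0.2.10", "192.0.2.11"},                   # TLD servers (illustrative)
        {"192.0.2.20"},                                 # the domain's lone server
    ]
    print(delegation_bottleneck(chain))  # -> 1: one compromised host suffices

Counting distinct addresses rather than nameserver names is what exposes the 0.43% of domains whose two nameservers share a single IP.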

Failure and attack resilience of the legacy DNS is even more limited at the network level. We examined physical bottlenecks, that is, the minimum number of network gateways or routers between clients and nameservers that need to be compromised in order to control a domain. We measured the physical bottlenecks by performing traceroutes to 10,000 different nameservers, which serve about 5,000 randomly chosen domain names, from fifty globally distributed sites on PlanetLab [3]. Figure 2 plots the percentage of domains that have different numbers of bottlenecks at the network level, and shows that about 33% of domains are bottlenecked at a single gateway or router. While this number is not surprising – domains are typically served by a few nameservers, all located in the same sub-network – it highlights that a large number of domains are vulnerable to network outages. These problems are significant and affect many top-level domains and popular web sites. Recently, Microsoft suffered a DoS attack on its nameservers that rendered its services unavailable. The primary reason for the success of this attack was that all of Microsoft's DNS servers were in the same part of the network [39]. Overall, a large portion of the namespace can be compromised by infiltrating a small number of gateways or routers.

Failure Resilience – Implementation Errors

The previous section showed that the legacy DNS suffers from limited redundancy and various bottlenecks. In this section, we examine the feasibility of attacks that target these bottlenecks through known vulnerabilities in commonly deployed nameservers. Early studies [11, 23, 28] identified several implementation errors in legacy DNS servers that can lead to compromise. While many of these have been fixed, a significant percentage of nameservers continue to use buggy implementations. We surveyed 150,000 nameservers to determine whether they contain any known vulnerabilities, based on the Berkeley Internet Name Daemon (BIND) exploit list maintained by the Internet Systems Consortium (ISC) [18]. Table 2 summarizes the results of this survey. Approximately 18% of servers do not respond to version queries, and about 14% do not report valid BIND versions. About 2% of nameservers have the tsig bug, which permits a buffer overflow that can enable malicious agents to gain access to the system. 19% of nameservers have the negcache problem, which can be exploited to launch a DoS attack by providing negative responses with large TTL values from a malicious nameserver. Overall, exploiting the bottlenecks identified in the previous section is practical.

problem         severity    affected nameservers
                            all domains    top 500
tsig            critical    2.08 %         0.59 %
nxt             critical    0.09 %         0.15 %
negcache        serious     19.03 %        2.57 %
sigrec          serious     13.56 %        1.32 %
DoS multi       serious     11.11 %        1.32 %
DoS findtype    serious     2.58 %         0.59 %
srv             serious     1.89 %         0.59 %
zxfr            serious     1.81 %         0.44 %
libresolv       serious     1.48 %         0 %
complain        serious     1.33 %         0 %
so-linger       serious     1.15 %         0.15 %
fdmax           serious     1.15 %         0.15 %
sig             serious     0.70 %         0.15 %
infoleak        moderate    4.58 %         0.59 %
sigdiv0         moderate    1.86 %         0.59 %
openssl         medium      1.71 %         0.37 %
naptr           minor       2.58 %         0.15 %
maxdname        minor       2.58 %         0.15 %

Table 2: Vulnerabilities in BIND. A significant percentage of nameservers use BIND versions with known security problems [18].

Performance – Latency

Name resolution latency is a significant component of the time required to access web services. Wills and Shang [41] found, based on NLANR proxy logs, that DNS lookup time contributes more than one second to 20% of web object retrievals; Huitema et al. [17] report that 29% of queries take longer than two seconds; and Jung et al. [19] show that more than 10% of queries take longer than two seconds. The low performance is mainly due to low cache hit rates, stemming from the heavy-tailed, Zipf-like query distribution in DNS. It is well known from studies on Web caching [7] that heavy-tailed query distributions severely limit cache hit rates.
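The link between a heavy-tailed query distribution and poor hit rates is easy to reproduce in a toy simulation. The sketch below draws queries by Zipf-like rank against an idealized cache that permanently holds the most popular 1% of names; all parameters are illustrative, not taken from the paper.

    import random

    def simulated_hit_rate(n_names=10_000, cache_size=100,
                           n_queries=50_000, alpha=0.9):
        # Zipf-like popularity: rank r is queried with probability ~ 1 / r^alpha
        weights = [1.0 / (rank + 1) ** alpha for rank in range(n_names)]
        stream = random.choices(range(n_names), weights=weights, k=n_queries)
        cached = set(range(cache_size))   # ideal cache: the top-ranked names
        return sum(1 for q in stream if q in cached) / n_queries

Even this best-case cache answers only around 40% of queries for these parameters: the long tail of rarely queried names dominates, which is the effect the Web-caching studies above report.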

The widespread deployment of content distribution networks, which perform dynamic server selection, has further strained the performance of the legacy DNS. These services, such as Akamai and Digital Island, use the DNS to direct clients to closer servers of Web content. They typically use very short TTLs (on the order of thirty seconds) in order to perform fine-grained load balancing and respond rapidly to changes in server or network load. But this mechanism virtually eliminates the effectiveness of caching and imposes enormous overhead on DNS. A study on the impact of short TTLs on caching [20] shows that cache hit rates decrease significantly for TTLs lower than fifteen minutes. Another study on the adverse effects of server selection [36] reports that name resolution latency can increase by two orders of magnitude.

Performance – Misconfigurations

DNS performance is further affected by the presence of a large number of broken (lame) or inconsistent delegations. In our survey, address resolution failed for about 1.1% of nameservers due to timeouts or non-existent records, mostly stemming from spelling errors. For 14% of domains, authoritative nameservers returned inconsistent responses; a few authoritative nameservers reported that the domain does not exist, while others provided valid records. Failures stemming from lame delegations and timeouts can translate into significant delays for the end user. Since these failures and inconsistencies largely stem from human error [28], it is clear that manual configuration and administration of such a large-scale system is expensive and leads to a fragile structure.

Performance – Load Imbalance

DNS measurements at root and TLD nameservers show that they handle a large load and are frequently subjected to denial of service attacks [5, 6]. A massive distributed DoS attack [29] in November 2002 rendered nine of the thirteen root servers unresponsive. Partly as a result of this attack, the root is now served by more than sixty nameservers through special-case support for BGP-level anycast. While this approach fixes the superficial problem at the topmost level, the static DNS hierarchy fundamentally implies greater load at the higher levels than at the leaves. The special-case handling does not provide automatic replication of hot spots, and sustained growth in the client population will require continued future expansion. In addition to creating exploitable vulnerabilities, load imbalance poses performance problems, especially for lookups higher in the name hierarchy.

Update Propagation

Large-scale caching in DNS poses problems for maintaining the consistency of cached records in the presence of dynamic changes. Selecting a suitable value for the TTL is an administrative dilemma: short TTLs adversely affect lookup performance and increase network load [19, 20], while long TTLs interfere with service relocation. For instance, a popular online brokerage firm uses a TTL of thirty minutes. Its users do not incur DNS latencies when accessing the brokerage for thirty minutes at a time, but they may experience outages of up to half an hour if the brokerage firm needs to relocate its services in response to an emergency. Nearly 40% of domain names use TTLs of one day or higher, which prohibits fast dissemination of unanticipated changes to records.

3. COOPERATIVE DOMAIN NAME SYSTEM

The use and scale of today's Internet is drastically different from the time of the design of the legacy DNS. Even though the legacy DNS anticipated explosive growth and handled it by partitioning the namespace, delegating queries, and widely caching responses, this architecture contains inherent limitations. In this section, we present an overview of CoDoNS, describe its implementation, and highlight how it addresses the problems of the legacy DNS.

3.1 Overview of Beehive

CoDoNS derives its performance characteristics from a proactive caching layer called Beehive [33]. Beehive is a proactive replication framework that enables prefix-matching DHTs to achieve O(1) lookup performance. Pastry [35] and Tapestry [42] are examples of structured DHTs that use prefix-matching [32, 21] to look up objects. In these DHTs, both objects and nodes have randomly assigned identifiers drawn from the same circular space.

[Figure 3: Proactive Caching in Beehive. Caching an object at all nodes with i matching prefix-digits ensures that it can be located in i hops; Beehive achieves O(1) average lookup time with minimal replication of objects. The figure depicts a lookup for object 2101 in a base-4 identifier space with nodes 0210, 2201, 2110, and 2100 (the home node), and replication levels L1 and L2.]

Each object is stored at the nearest node in the identifier space, called the home node. Each node routes a request for an object, say 2101, by successively matching prefixes; that is, by routing the request to a node that matches one more digit with the object, until the home node, say 2100, is reached. Overlay routing by matching prefixes in this manner incurs O(log N) hops in the worst case to reach the home node. Figure 3 illustrates the prefix-matching routing algorithm in Pastry. A routing table of O(log N) size provides the overlay node with pointers to nodes with matching prefixes. In a large system, log N translates to several hops across the Internet and is not sufficient to meet the performance requirements of latency-critical applications such as DNS.
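The prefix-matching routing just described can be modeled in a few lines. The sketch below uses base-4 digit strings as identifiers, matching the figure's example; it takes a global view of the node set, whereas a real Pastry node only consults its own O(log N) routing table.

    def shared_prefix_len(a, b):
        # number of leading digits two identifiers have in common
        n = 0
        while n < len(a) and n < len(b) and a[n] == b[n]:
            n += 1
        return n

    def route(nodes, start, key):
        # each hop moves to a node matching at least one more digit of the key
        path, current = [start], start
        while True:
            matched = shared_prefix_len(current, key)
            closer = [n for n in nodes if shared_prefix_len(n, key) > matched]
            if not closer:
                return path      # no closer node exists: this is the home node
            current = min(closer, key=lambda n: shared_prefix_len(n, key))
            path.append(current)

With the figure's identifiers, route(["0210", "2201", "2110", "2100"], "0210", "2101") reaches the home node 2100 in three hops.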

Beehive proposes a novel technique based on controlled proactive caching to reduce the average lookup latency of structured DHTs. Figure 3 illustrates how Beehive applies proactive caching to decrease lookup latency in Pastry. In the example mentioned above, where a query is issued for the object 2101, Pastry incurs three hops to find a copy of the object. By placing copies of the object at all nodes one hop prior to the home node in the request path, the lookup latency can be reduced by one hop. In this example, the lookup latency can be reduced from three hops to two hops by replicating the object at all nodes that start with 21. Similarly, the lookup latency can be reduced to one hop by replicating the object at all nodes that start with 2. Thus, we can vary the lookup latency of the object between 0 and log N hops by systematically replicating the object more extensively. In Beehive, an object replicated at all nodes with i matching prefixes incurs i hops for a lookup, and is said to be replicated at level i.
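In the toy model above, replication at level i simply means placing a copy on every node whose identifier shares at least i digits with the object's; a lookup then stops as soon as it reaches any such node. A one-line sketch under the same assumptions:

    def replicas(nodes, obj_id, level):
        # nodes holding a copy when obj_id is replicated at the given level
        return {n for n in nodes if shared_prefix_len(n, obj_id) >= level}

Replicating object 2101 at level 1 places copies on every node starting with 2, so the three-hop route above would find a copy after a single hop; level 0 puts a copy on every node and yields zero-hop lookups.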

The central insight behind Beehive is that by judiciously choosing different levels of replication for different objects, the average lookup performance of the system can be tuned to any desired constant. Naturally, replicating every object at every node would achieve O(1) lookups, but it would incur excessive space overhead, consume significant bandwidth, and lead to large update latencies. Beehive minimizes bandwidth and space consumption by posing the following optimization problem: minimize the total number of replicas subject to the constraint that the aggregate lookup latency is less than a desired constant C. For power-law (or Zipf-like) query distributions, Beehive analytically derives the optimal closed-form solution to this problem. The derivation of the analytical solution is provided in [33]; the final expression for the closed-form solution that minimizes the total number of replicas for Zipf-like query distributions with parameter α < 1 is the following:

x_i = \left[ \frac{d^i (\log N - C)}{1 + d + \cdots + d^{\log N - 1}} \right]^{\frac{1}{1 - \alpha}}, \quad \text{where} \quad d = b^{\frac{1 - \alpha}{\alpha}}

In this expression, b is the base of the underlying DHT and x_i is the fraction of most popular objects that get replicated at level i. This solution is immediately applicable to DNS, since DNS queries follow a Zipf-like distribution [19].
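The closed form is straightforward to evaluate numerically. The sketch below computes the per-level replication fractions for illustrative parameters (the values of α, base b, log N, and target C are made up for the example, not taken from the paper):

    def beehive_fractions(alpha, b, log_n, target_c):
        # x_i: fraction of the most popular objects replicated at level i
        d = b ** ((1 - alpha) / alpha)
        denom = sum(d ** j for j in range(log_n))  # 1 + d + ... + d^(logN - 1)
        return [min(1.0, ((d ** i) * (log_n - target_c) / denom)
                    ** (1 / (1 - alpha)))          # fractions are capped at 1
                for i in range(log_n)]

    # e.g. alpha = 0.9, base b = 16, logN = 4, average-latency target C = 0.5
    print(beehive_fractions(0.9, 16, 4, 0.5))

The fractions grow with the level: only a tiny sliver of the most popular objects is replicated everywhere (level 0), progressively larger fractions are replicated at the higher-numbered, less aggressive levels, and any remaining objects live only at their home node.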

The analytical result provides several properties …
