Saturday, August 30, 2008

Akamai patents

An update on the patents granted to Akamai in 2008:

United States Patent 7,418,518
Grove, et al. August 26, 2008

Method for high-performance delivery of web content

Abstract

The present invention provides a method and apparatus for increasing the performance of world-wide-web traffic over the Internet. A distributed network of specialized nodes of two types is dispersed around the Internet. A web client's requests are directed to a node of the first type chosen to be close to the client, and the client communicates with this node using a standard protocol such as HTTP. This first node receives the request, and communicates the request to a node of the second type chosen to be close to the request's ultimate destination (e.g., a web server capable of generating a response to the request.) The first node communicates the request to the second node using a different, specialized, protocol that has been designed for improved performance and specifically to reduce traffic volume and to reduce latency. The second node receives communication from the first node using this specialized protocol, converts it back to a standard protocol such as HTTP, and forwards the request to the destination computer or server. Responses from the destination to the client take the corresponding reverse route, and also are carried over a specialized protocol between the two nodes. In addition, these nodes can employ other techniques such as web caches that avoid or improve some communication steps. Thus, specialized, proprietary, or complex protocols and techniques can be quickly deployed to enhance web performance without requiring significant changes to the clients or servers.


Inventors: Grove; Adam J. (Menlo Park, CA), Kharitonov; Michael (New York, NY), Tumarkin; Alexei (Goleta, CA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
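
The abstract (shared by the two related Grove patents, 7,392,325 and 7,359,985, listed further down) describes a concrete mechanism: a first node near the client accepts an ordinary HTTP request, re-encodes it into a leaner representation for the long-haul hop, and a second node near the origin converts it back to HTTP. A minimal illustrative sketch in Python follows; the frame format and names are assumptions made for the example, not Akamai's protocol.

```python
# Illustrative only: a hypothetical compact frame between the two node types.
import json
import zlib

def encode_internode_frame(method: str, url: str, headers: dict, body: bytes = b"") -> bytes:
    """First node: pack an HTTP request into a compressed frame to cut bytes
    on the hop toward the node near the origin."""
    envelope = json.dumps({"m": method, "u": url, "h": headers}).encode()
    return len(envelope).to_bytes(4, "big") + zlib.compress(envelope + body)

def decode_internode_frame(frame: bytes) -> tuple:
    """Second node: unpack the frame and rebuild a standard HTTP request that
    can be forwarded to the destination web server."""
    env_len = int.from_bytes(frame[:4], "big")
    payload = zlib.decompress(frame[4:])
    envelope = json.loads(payload[:env_len])
    return envelope["m"], envelope["u"], envelope["h"], payload[env_len:]

if __name__ == "__main__":
    frame = encode_internode_frame("GET", "http://origin.example/page",
                                   {"Host": "origin.example"})
    print(decode_internode_frame(frame))
```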

-----------

United States Patent 7,409,456
Sitaraman August 5, 2008

Method and system for enhancing live stream delivery quality using prebursting

Abstract

A method to "accelerate" the delivery of a portion of a data stream across nodes of a stream transport network. According to the invention, a portion of a live stream is forwarded from a first node to a second node in a transport network at a high bitrate as compared to the stream's encoded bitrate, and thereafter, the stream continues to be forwarded from the first node to the second node at or near the encoded bitrate. The disclosed technique of forwarding a portion of a stream at a high bitrate as compared to the encoded bitrate of the stream is sometimes referred to as "prebursting" the stream. This technique provides significant advantages in that it reduces stream startup time, reduces unrecoverable stream packet loss, and reduces stream rebuffers as the stream is viewed by a requesting end user that has been mapped to a media server in a distributed computer network such as a content delivery network.


Inventors: Sitaraman; Ramesh K. (Cambridge, MA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
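
The mechanism is simple enough to sketch. A minimal illustration in Python, assuming a chunked stream and hypothetical parameter names: the first few seconds of media are forwarded at a multiple of the encoded bitrate, after which pacing falls back to the encoded rate.

```python
# Illustrative sketch of prebursting, not the patented implementation.
import time

def forward_stream(chunks, encoded_bps: int, burst_seconds: float = 5.0,
                   burst_factor: float = 3.0, send=lambda chunk: None):
    """Pace `chunks` (bytes) toward a downstream node. The first
    `burst_seconds` of media go out at `burst_factor` times the encoded
    bitrate so the receiver's buffer fills quickly; the rest is paced at
    the encoded bitrate."""
    sent_media_seconds = 0.0
    for chunk in chunks:
        media_seconds = len(chunk) * 8 / encoded_bps   # playback time in this chunk
        rate = encoded_bps * (burst_factor if sent_media_seconds < burst_seconds else 1.0)
        send(chunk)
        time.sleep(len(chunk) * 8 / rate)              # pacing delay at the chosen rate
        sent_media_seconds += media_seconds

if __name__ == "__main__":
    demo = [b"x" * 12_500] * 20                        # ~0.1 s of media per chunk at 1 Mbps
    forward_stream(demo, encoded_bps=1_000_000, burst_seconds=0.5)
```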

------------
United States Patent 7,406,627
Bailey, et al. July 29, 2008

Method and apparatus for testing request-response service using live connection traffic

Abstract

The present invention provides for a method and apparatus for comparison of network systems using live traffic in real-time. The inventive technique presents real-world workload in real-time with no external impact (i.e., no impact on the system under test), and it enables comparison against a production system for correctness verification. A preferred embodiment of the invention is a testing tool for the pseudo-live testing of CDN content staging servers. According to the invention, traffic between clients and the live production CDN servers is monitored by a simulator device, which then replicates this workload onto a system under test (SUT). The simulator detects divergences between the outputs from the SUT and live production servers, allowing detection of erroneous behavior. To the extent possible, the SUT is completely isolated from the outside world so that errors or crashes by this system do not affect either the CDN customers or the end users. Thus, the SUT does not interact with end users (i.e., their web browsers). Consequently, the simulator serves as a proxy for the clients. By basing its behavior off the packet stream sent between client and the live production system, the simulator can simulate most of the oddities of real-world client behavior, including malformed packets, timeouts, dropped traffic and reset connections, among others.


Inventors: Bailey; Shannon T. (San Francisco, CA), Cohen; Ross (San Rafael, CA), Stodolsky; Daniel (Somerville, MA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
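
A rough sketch of the simulator's comparison step, assuming a request path and the corresponding production response body have already been captured from live traffic; the SUT hostname and function name are hypothetical.

```python
# Illustrative only: replay one observed request against the SUT and flag divergence.
import urllib.request

def replay_and_compare(path: str, prod_response_body: bytes,
                       sut_base: str = "http://sut.internal.example") -> bool:
    """Send the recorded request to the system under test and compare its body
    with the body already returned by the live production server."""
    with urllib.request.urlopen(sut_base + path, timeout=5) as resp:
        sut_body = resp.read()
    if sut_body != prod_response_body:
        print(f"divergence on {path}: prod={len(prod_response_body)}B sut={len(sut_body)}B")
        return False
    return True
```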

-----------

United States Patent 7,406,512
Swildens, et al. July 29, 2008

Automatic migration of data via a distributed computer network

Abstract

A method and apparatus for the automatic migration of data via a distributed computer network allows a customer to select content files that are to be transferred to a group of edge servers. Origin sites store all of a customer's available content files. An edge server maintains a dynamic number of popular files in its memory for the customer. The files are ranked from most popular to least popular and when a file has been requested from an edge server a sufficient number of times to become more popular than the lowest popular stored file, the file is obtained from an origin site. The edge servers are grouped into two service levels: regional and global. The customer is charged a higher fee to store its popular files on the global edge servers compared to a regional set of edge servers because of greater coverage.


Inventors: Swildens; Eric Sven-Johan (Mountain View, CA), Cinquini; Maurice (San Jose, CA), Chavarkar; Amol (San Mateo, CA), Agarwal; Anshu (San Jose, CA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
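
The ranking rule in the abstract maps directly to a small data structure. A sketch, with hypothetical names: the edge server counts requests per file, keeps a bounded set of the most popular ones, and pulls a file from an origin site only once it out-ranks the least popular file currently stored.

```python
# Illustrative sketch of popularity-driven migration to an edge server.
from collections import Counter

class PopularityCache:
    def __init__(self, capacity: int, fetch_from_origin):
        self.capacity = capacity
        self.hits = Counter()            # request count per file name
        self.stored = {}                 # file name -> content held on this edge server
        self.fetch_from_origin = fetch_from_origin

    def request(self, name: str) -> bytes:
        self.hits[name] += 1
        if name in self.stored:
            return self.stored[name]
        if len(self.stored) < self.capacity:
            self.stored[name] = self.fetch_from_origin(name)
            return self.stored[name]
        least = min(self.stored, key=lambda f: self.hits[f])
        if self.hits[name] > self.hits[least]:
            del self.stored[least]       # evict the least popular stored file
            self.stored[name] = self.fetch_from_origin(name)
            return self.stored[name]
        return self.fetch_from_origin(name)   # not popular enough to store yet
```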

------------

United States Patent 7,395,355
Afergan, et al. July 1, 2008

Method for caching and delivery of compressed content in a content delivery network

Abstract

A content delivery network (CDN) edge server is provisioned to provide last mile acceleration of content to requesting end users. The CDN edge server fetches, compresses and caches content obtained from a content provider origin server, and serves that content in compressed form in response to receipt of an end user request for that content. It also provides "on-the-fly" compression of otherwise uncompressed content as such content is retrieved from cache and is delivered in response to receipt of an end user request for such content. A preferred compression routine is gzip, as most end user browsers support the capability to decompress files that are received in this format. The compression functionality preferably is enabled on the edge server using customer-specific metadata tags.


Inventors: Afergan; Michael M. (Cambridge, MA), Schlossberg; Charisma (Watertown, MA), Hong; Duke P. (Oceanside, CA), Rao; Satish Balusu (Berkeley, CA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
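
A simplified sketch of the edge-side behavior, assuming hypothetical names: content fetched from the origin is cached gzip-compressed and served compressed to clients that advertise gzip support, or inflated on the fly for those that do not.

```python
# Illustrative only: gzip caching and delivery at an edge server.
import gzip

class CompressingEdgeCache:
    def __init__(self, fetch_origin):
        self.fetch_origin = fetch_origin
        self.cache = {}                                   # url -> gzip-compressed body

    def serve(self, url: str, accept_encoding: str = ""):
        if url not in self.cache:
            self.cache[url] = gzip.compress(self.fetch_origin(url))
        body = self.cache[url]
        if "gzip" in accept_encoding:
            return body, {"Content-Encoding": "gzip"}     # serve compressed
        return gzip.decompress(body), {}                  # client lacks gzip support

if __name__ == "__main__":
    edge = CompressingEdgeCache(lambda url: b"<html>hello</html>")
    print(edge.serve("/index.html", accept_encoding="gzip, deflate")[1])
```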

-------------

United States Patent 7,392,325
Grove, et al. June 24, 2008

Method for high-performance delivery of web content

Abstract

The present invention provides a method and apparatus for increasing the performance of world-wide-web traffic over the Internet. A distributed network of specialized nodes of two types is dispersed around the Internet. A web client's requests are directed to a node of the first type chosen to be close to the client, and the client communicates with this node using a standard protocol such as HTTP. This first node receives the request, and communicates the request to a node of the second type chosen to be close to the request's ultimate destination (e.g., a web server capable of generating a response to the request.) The first node communicates the request to the second node using a different, specialized, protocol that has been designed for improved performance and specifically to reduce traffic volume and to reduce latency. The second node receives communication from the first node using this specialized protocol, converts it back to a standard protocol such as HTTP, and forwards the request to the destination computer or server. Responses from the destination to the client take the corresponding reverse route, and also are carried over a specialized protocol between the two nodes. In addition, these nodes can employ other techniques such as web caches that avoid or improve some communication steps. Thus, specialized, proprietary, or complex protocols and techniques can be quickly deployed to enhance web performance without requiring significant changes to the clients or servers.


Inventors: Grove; Adam J. (Menlo Park, CA), Kharitonov; Michael (New York, NY), Tumarkin; Alexei (Goleta, CA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)

------------

United States Patent 7,376,736
Sundaram, et al. May 20, 2008

Method and system for providing on-demand content delivery for an origin server

Abstract

An infrastructure "insurance" mechanism enables a Web site to fail over to a content delivery network (CDN) upon a given occurrence at the site. Upon such occurrence, at least some portion of the site's content is served preferentially from the CDN so that end users that desire the content can still get it, even if the content is not then available from the origin site. In operation, content requests are serviced from the site in the usual manner, e.g., by resolving DNS queries to the site's IP address, until detection of the given occurrence. Thereafter, DNS queries are managed by a CDN dynamic DNS-based request routing mechanism so that such queries are resolved to optimal CDN edge servers. After the event that caused the occurrence has passed, control of the site's DNS may be returned from the CDN back to the origin server's DNS mechanism.


Inventors: Sundaram; Ravi (Cambridge, MA), Rahul; Hariharan S. (Cambridge, MA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
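
At its core this is a DNS-level switch. A bare-bones sketch, with example addresses and a hypothetical health signal: while the origin is healthy the resolver answers with the origin's address, and after the triggering occurrence it hands resolution to the CDN.

```python
# Illustrative only: answer DNS queries from the origin or fail over to the CDN.
def resolve(hostname: str, origin_healthy: bool) -> str:
    ORIGIN_IP = "203.0.113.10"                        # example address (RFC 5737 range)
    CDN_ALIAS = "customer.example-cdn.net"            # hypothetical CDN CNAME target
    if origin_healthy:
        return ORIGIN_IP                              # normal operation: serve from the site
    return CDN_ALIAS                                  # failover: CDN's dynamic DNS takes over

if __name__ == "__main__":
    print(resolve("www.example.com", origin_healthy=True))
    print(resolve("www.example.com", origin_healthy=False))
```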

----------

United States Patent 7,376,727
Weller, et al. May 20, 2008

Content delivery network service provider (CDNSP)-managed content delivery network (CDN) for network service provider (NSP)

Abstract

A CDN service provider shares its CDN infrastructure with a network to enable a network service provider (NSP) to offer a private-labeled network content delivery network (NCDN or "private CDN") to participating content providers. The CDNSP preferably provides the hardware, software and services required to build, deploy, operate and manage the CDN for the NCDN customer. Thus, the NCDN customer has access to and can make available to participating content providers one or more of the content delivery services (e.g., HTTP delivery, streaming media delivery, application delivery, and the like) available from the global CDN without having to provide the large capital investment, R&D expense and labor necessary to successfully deploy and operate the network itself. Rather, the global CDN service provider simply operates the private CDN for the network as a managed service.


Inventors: Weller; Timothy N. (Cambridge, MA), Leiserson; Charles E. (Cambridge, MA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)

-------------

United States Patent 7,376,716
Dilley, et al. May 20, 2008

Method and system for tiered distribution in a content delivery network

Abstract

A tiered distribution service is provided in a content delivery network (CDN) having a set of surrogate origin (namely, "edge") servers organized into regions and that provide content delivery on behalf of participating content providers, wherein a given content provider operates an origin server. According to the invention, a cache hierarchy is established in the CDN comprising a given edge server region and either (a) a single parent region, or (b) a subset of the edge server regions. In response to a determination that a given object request cannot be serviced in the given edge region, instead of contacting the origin server, the request is provided to either the single parent region or to a given one of the subset of edge server regions for handling, preferably as a function of metadata associated with the given object request. The given object request is then serviced, if possible, by a given CDN server in either the single parent region or the given subset region. The original request is only forwarded on to the origin server if the request cannot be serviced by an intermediate node.


Inventors: Dilley; John A. (Los Altos, CA), Berkheimer; Andrew D. (Somerville, MA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
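
The lookup order is the interesting part. A sketch under assumed data structures: an edge-region miss is forwarded to a parent region chosen from the object's metadata, and the origin server is contacted only if no intermediate tier can serve the object.

```python
# Illustrative only: edge region -> parent region -> origin, in that order.
def serve(obj: str, edge_cache: dict, parent_caches: dict, metadata: dict, origin):
    if obj in edge_cache:
        return edge_cache[obj]                       # served within the edge region
    # Assumption: metadata maps object -> name of its parent region.
    parent = parent_caches.get(metadata.get(obj, "default"), {})
    if obj in parent:
        edge_cache[obj] = parent[obj]                # fill the edge on the way back
        return parent[obj]
    body = origin(obj)                               # last resort: the origin server
    edge_cache[obj] = body
    return body
```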

-------------

United States Patent 7,373,416
Kagan, et al. May 13, 2008

Method and system for constraining server usage in a distributed network

Abstract

A "velvet rope" mechanism that enables customers of a shared distributed network (such as a content delivery network) needing to control their costs to control the amount of traffic that is served via the shared network. A given server in the distributed network identifies when a customer is about to exceed a bandwidth quota as a rate (bursting) or for a given billing period (e.g., total megabytes (MB) served for a given period) and provides a means for taking a given action based on this information. Typically, the action taken would result in a reduction in traffic served so that the customer can constrain its usage of the shared network to a given budget value.


Inventors: Kagan; Marty (North Hollywood, CA), Lauzac; Sylvain (Seattle, WA), Lipkovitz; Eisar (San Francisco, CA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
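
A tiny sketch of the quota check, with hypothetical units and a single action: before serving a response the server checks the customer's byte budget for the billing period and refuses (it could equally redirect or degrade) once the quota would be exceeded.

```python
# Illustrative only: constrain a customer's usage of the shared network to a budget.
def enforce_quota(customer_usage_mb: float, quota_mb: float, response_mb: float):
    """Return the action to take and the updated usage figure."""
    if customer_usage_mb + response_mb > quota_mb:
        return "deny", customer_usage_mb              # traffic-reducing action
    return "serve", customer_usage_mb + response_mb

if __name__ == "__main__":
    print(enforce_quota(customer_usage_mb=999.5, quota_mb=1000.0, response_mb=2.0))
```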

-------------

United States Patent 7,366,793
Kenner, et al. April 29, 2008

System and method for server-side optimization of data delivery on a distributed computer network

Abstract

A system and method for the optimized storage and retrieval of video data at distributed sites calls for the deployment of "Smart Mirror" sites throughout a network, each of which maintains a copy of certain data managed by the system. User addresses are assigned to specific delivery sites based on an analysis of network performance with respect to each of the available delivery sites. Generalized network performance data is collected and stored to facilitate the selection of additional delivery sites and to ensure the preservation of improved performance in comparison to traditional networks.


Inventors: Kenner; Brian (Encinitas, CA), Colby; Kenneth W. (San Diego, CA), Mudry; Robert N. (Carlsbad, CA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
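
The assignment step can be sketched as a simple lookup over collected performance data (hypothetical structure): each user address block is mapped to the delivery site with the best measured performance for that block.

```python
# Illustrative only: choose the delivery site with the lowest measured latency.
def pick_delivery_site(client_prefix: str, measurements: dict) -> str:
    """`measurements` maps (client_prefix, site) -> observed latency in ms."""
    candidates = {site: ms for (prefix, site), ms in measurements.items()
                  if prefix == client_prefix}
    return min(candidates, key=candidates.get)

if __name__ == "__main__":
    perf = {("198.51.100.0/24", "mirror-west"): 18.0,
            ("198.51.100.0/24", "mirror-east"): 74.0}
    print(pick_delivery_site("198.51.100.0/24", perf))   # -> mirror-west
```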

------------

United States Patent 7,363,361
Tewari, et al. April 22, 2008

Secure content delivery system

Abstract

A secure streaming content delivery system provides a plurality of content servers connected to a network that host customer content that can be cached and/or stored, e.g., images, video, text, and/or software. The content servers respond to requests for customer content from users. The invention load balances user requests for cached customer content to the appropriate content server. A user makes a request to a customer's server/authorization server for delivery of the customer's content. The authorization server checks if the user is authorized to view the requested content. If the user is authorized, then the authorization server generates a hash value and embeds it into the URL which is passed to the user. A content server receives a URL request from the user for customer content cached on the content server. The request is verified by the content server.


Inventors: Tewari; Anoop Kailasnath (San Jose, CA), Garg; Vikas (San Jose, CA), Swildens; Eric Sven-Johan (Mountain View, CA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
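
The hash-in-URL scheme is straightforward to illustrate. A sketch with a hypothetical token format (the abstract does not specify one): the authorization server signs the path and an expiry with a shared key, and the content server recomputes the signature before serving.

```python
# Illustrative only: keyed-hash URL authorization between two servers.
import hashlib
import hmac
import time

SHARED_SECRET = b"example-secret"          # assumption: key known to both servers

def sign_url(path: str, expires: int) -> str:
    token = hmac.new(SHARED_SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?exp={expires}&token={token}"

def verify_url(path: str, expires: int, token: str) -> bool:
    expected = hmac.new(SHARED_SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(token, expected) and time.time() < expires

if __name__ == "__main__":
    print(sign_url("/videos/launch.mp4", expires=int(time.time()) + 300))
```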

-----------

United States Patent 7,359,985
Grove, et al. April 15, 2008

Method and system for high-performance delivery of web content using high-performance communications protocols to optimize a measure of communications performance between a source and a destination

Abstract

The present invention provides a method and apparatus for increasing the performance of world-wide-web traffic over the Internet. A distributed network of specialized nodes of two types is dispersed around the Internet. A web client's requests are directed to a node of the first type chosen to be close to the client, and the client communicates with this node using a standard protocol such as HTTP. This first node receives the request, and communicates the request to a node of the second type chosen to be close to the request's ultimate destination (e.g., a web server capable of generating a response to the request.) The first node communicates the request to the second node using a different, specialized, protocol that has been designed for improved performance and specifically to reduce traffic volume and to reduce latency. The second node receives communication from the first node using this specialized protocol, converts it back to a standard protocol such as HTTP, and forwards the request to the destination computer or server. Responses from the destination to the client take the corresponding reverse route, and also are carried over a specialized protocol between the two nodes. In addition, these nodes can employ other techniques such as web caches that avoid or improve some communication steps. Thus, specialized, proprietary, or complex protocols and techniques can be quickly deployed to enhance web performance without requiring significant changes to the clients or servers.


Inventors: Grove; Adam J. (Menlo Park, CA), Kharitonov; Michael (New York, NY), Tumarkin; Alexei (Goleta, CA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)

-----------

United States Patent 7,353,509
Sheehy April 1, 2008

Method and system for managing software installs in a distributed computer network

Abstract

A method of and system for managing installs to a set of one or more field machines in a distributed network environment. In an illustrative embodiment, the system includes at least one change coordinator server that includes a database with data identifying a current state of each field machine, and a change controller routine for initiating a given control action to initiate an update to the current state on a given field machine. In particular, the change controller routine may include a scheduling algorithm that evaluates data from the database and identifies a set of field machines against which the given control action may be safely executed at a given time. At least one install server is responsive to the change controller routine initiating the given control action for invoking the update to the current state on the given field machine.


Inventors: Sheehy; Justin J. (Ithaca, NY)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
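
The scheduling idea can be sketched with an assumed safety rule (the abstract leaves the rule to the scheduling algorithm): update only healthy machines that are not yet on the target version, and take down at most one machine per region at a time.

```python
# Illustrative only: pick the field machines that can safely be updated now.
def schedule_install(machines: list, target_version: str,
                     max_down_per_region: int = 1) -> list:
    """Each machine is a dict with 'name', 'region', 'version', 'status'."""
    updating = {}                              # region -> machines selected so far
    safe = []
    for m in machines:
        if m["status"] != "healthy" or m["version"] == target_version:
            continue
        if updating.get(m["region"], 0) < max_down_per_region:
            safe.append(m["name"])
            updating[m["region"]] = updating.get(m["region"], 0) + 1
    return safe

if __name__ == "__main__":
    fleet = [{"name": "a1", "region": "us", "version": "1.0", "status": "healthy"},
             {"name": "a2", "region": "us", "version": "1.0", "status": "healthy"},
             {"name": "b1", "region": "eu", "version": "1.1", "status": "healthy"}]
    print(schedule_install(fleet, target_version="1.1"))   # -> ['a1']
```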

--------------

United States Patent 7,346,676
Swildens, et al. March 18, 2008

Load balancing service

Abstract

A load balancing service for a plurality of customers performs load balancing among a plurality of customer Web servers. Requests for Web content are load balanced across the customer Web servers. The load balancing service provider charges a fee to the customers for the load balancing service. A caching service is also provided that comprises a plurality of caching servers connected to a network. The caching servers host customer content that can be cached and stored, e.g., images, video, text, and/or software. The caching servers respond to requests for Web content from clients. The load balancing service provider charges a fee to the customers for the Web caching service.


Inventors: Swildens; Eric Sven-Johan (Mountain View, CA), Day; Richard David (Mountain View, CA), Gupta; Ajit K. (Fremont, CA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
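
The technical core, stripped of the billing arrangement, is ordinary request distribution across the customer's own Web servers. A minimal round-robin sketch with hypothetical hostnames:

```python
# Illustrative only: rotate requests across a customer's Web servers.
import itertools

def make_balancer(customer_servers: list):
    ring = itertools.cycle(customer_servers)
    return lambda: next(ring)          # each call yields the next server in turn

if __name__ == "__main__":
    pick = make_balancer(["web1.customer.example", "web2.customer.example"])
    print([pick() for _ in range(4)])
```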

------------

United States Patent 7,340,532
Swildens March 4, 2008

Load balancing array packet routing system

Abstract

A decrypting load balancing array system uses a Pentaflow approach to network traffic management that extends across an array of Decrypting Load Balancing Array (DLBA) servers sitting in front of back end Web servers. One of the DLBA servers acts as a scheduler for the array through which all incoming requests are routed. The scheduler routes and load balances the traffic to the other DLBA servers (including itself) in the array. Each DLBA server routes and load balances the incoming request packets to the appropriate back end Web servers. Responses to the requests from the back end Web servers are sent back to the DLBA server which forwards the response directly to the requesting client. SSL packets are decrypted in the DLBA server before being routed to a back end Web server, allowing the DLBA server to schedule SSL sessions to back end Web servers based on a cookie or session ID. Response packets are encrypted by the DLBA server before being forwarded to the client. The invention also uses cookie injection to map a client to a specific back end Web server. In addition, any DLBA server in the array is capable of taking over the scheduler functionality in case of scheduler failure. URL based scheduling and hash scheduling of request packets with keepalive connections is easily performed due to the invention's architecture.


Inventors: Swildens; Eric Sven-Johan (Mountain View, CA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
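
One detail worth illustrating is the cookie-injection stickiness. A greatly simplified sketch, assuming SSL decryption has already happened and using hypothetical cookie and server names: a new client is mapped to the least-loaded back end, and a cookie naming that back end is injected so later requests return to it.

```python
# Illustrative only: map a client to a back-end Web server via an injected cookie.
def route_request(headers: dict, backends: list, assignments: dict):
    cookie = headers.get("Cookie", "")
    for backend in backends:
        if f"dlba_backend={backend}" in cookie:
            return backend, {}                        # sticky: reuse the mapped back end
    backend = min(backends, key=lambda b: assignments.get(b, 0))
    assignments[backend] = assignments.get(backend, 0) + 1
    return backend, {"Set-Cookie": f"dlba_backend={backend}"}

if __name__ == "__main__":
    loads, servers = {}, ["web-a", "web-b"]
    print(route_request({}, servers, loads))                                # new client
    print(route_request({"Cookie": "dlba_backend=web-b"}, servers, loads))  # returning client
```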

------------------

United States Patent 7,340,505
Lisiecki, et al. March 4, 2008

Content storage and replication in a managed internet content storage environment

Abstract

A method for content storage on behalf of participating content providers begins by having a given content provider identify content for storage. The content provider then uploads the content to a given storage site selected from a set of storage sites. Following upload, the content is replicated from the given storage site to at least one other storage site in the set. Upon request from a given entity, a given storage site from which the given entity may retrieve the content is then identified. The content is then downloaded from the identified given storage site to the given entity. In an illustrative embodiment, the given entity is an edge server of a content delivery network (CDN).


Inventors: Lisiecki; Philip A. (Quincy, MA), Nicolaou; Cosmos (Palo Alto, CA), Rose; Kyle R. (Cambridge, MA)
Assignee: Akamai Technologies, Inc. (Cambridge, MA)
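
The upload/replicate/locate flow can be sketched in a few lines, with in-memory stand-ins for the storage sites (hypothetical names): content goes to one site, is copied to the others, and a requesting entity is then pointed at a site that holds it.

```python
# Illustrative only: store, replicate, and locate content across storage sites.
def store_content(name: str, data: bytes, sites: dict, upload_site: str) -> None:
    sites[upload_site][name] = data                  # initial upload
    for site, store in sites.items():
        if site != upload_site:
            store[name] = data                       # replicate to the other sites

def locate_for_download(name: str, sites: dict, preferred_order: list) -> str:
    for site in preferred_order:
        if name in sites[site]:
            return site                              # first preferred site holding it
    raise KeyError(name)

if __name__ == "__main__":
    storage = {"us-east": {}, "eu-west": {}}
    store_content("logo.png", b"\x89PNG...", storage, upload_site="us-east")
    print(locate_for_download("logo.png", storage, ["eu-west", "us-east"]))
```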

------------
