Network and method for servicing a computation request

Inventors

Yeh, Edmund Meng; Kamran, Khashayar

Assignees

Northeastern University (Boston)

Publication Number

US-12289205-B2

Publication Date

2025-04-29

Expiration Date

2040-07-02


Abstract

A framework for joint computation, caching, and request forwarding in data-centric computing-based networks comprises a virtual control plane, which operates on request counters for computations and data, and an actual plane, which handles computation requests, data requests, data objects and computation results in the physical network. A throughput optimal policy, implemented in the virtual plane, provides a basis for adaptive and distributed computation, caching, and request forwarding in the actual plane. The framework provides superior performance in terms of request satisfaction delay as compared with several baseline policies over multiple network topologies.

Core Innovation

The invention provides a framework for joint computation, caching, and request forwarding in data-centric computing-based networks using a two-plane architecture: a virtual control plane and an actual plane. The virtual plane operates on computation request counters (CRCs) and data request counters (DRCs) to capture measured demand for computations and data objects, respectively. Based on these counters, a throughput optimal policy is implemented in the virtual plane, which informs adaptive and distributed decision-making regarding where to perform computations, how to forward computation and data requests, and when to cache data within the physical network (actual plane).
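
As an illustration of the state the virtual plane operates on, the sketch below keeps per-node CRCs and DRCs that are incremented as demand is observed. It is a minimal Python sketch; the class and field names are assumptions for illustration, not taken from the patent.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class VirtualPlaneState:
    """Hypothetical per-node virtual-plane state: CRCs keyed by
    (computation, input data object), DRCs keyed by data object."""
    crc: defaultdict = field(default_factory=lambda: defaultdict(float))
    drc: defaultdict = field(default_factory=lambda: defaultdict(float))

    def on_computation_request(self, computation, data_object):
        # Measured demand for a computation on a specific data object.
        self.crc[(computation, data_object)] += 1.0

    def on_data_request(self, data_object):
        # Measured demand for the data object itself.
        self.drc[data_object] += 1.0
```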

The core problem addressed is the optimal utilization of processing, storage, and bandwidth resources in distributed networks, which is particularly relevant for data-centric, delay-sensitive applications such as the Internet of Things (IoT), fog computing, mobile edge computing, and similar dispersed computing paradigms. As centralized clouds struggle to meet ultra-low latency requirements and to handle the demands of media-rich, data-driven services, there is a need for distributed, adaptive, and scalable methods that do not require prior knowledge of computation request rates. The invention addresses the challenge of jointly and optimally deciding, in a distributed manner, how to execute computations, forward requests, and cache relevant data objects based on real-time demand captured by CRCs and DRCs.

Individual nodes in the network use the local joint policy, dynamically updated based on CRCs and DRCs exchanged with neighboring nodes, to govern decisions about servicing computation requests, forwarding computation or data packets, and managing the cache. The method includes mechanisms for self-determination at each node, allowing each node to adapt its local policy based on the difference between computation and data request counters. The result is improved request satisfaction delay and superior adaptability compared with existing baseline policies across various network topologies.

Claims Coverage

The patent includes several independent claims that collectively define seven main inventive features covering virtualization, local joint policy calculation, distributed operations, and specific ways in which nodes determine computation, forwarding, and caching decisions.

Node exchanging and updating joint policies based on computation and data request counters

A node in a data distribution network is configured to exchange computation request counters (representing demand for computations) and data request counters (representing demand for data objects) with neighboring nodes. The node updates its local joint policy, which defines the rules for the calculations used to make caching decisions at the node, based on the exchanged counters.
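
A minimal sketch of that exchange, assuming the in-memory VirtualPlaneState above and a direct-read messaging layer (both are assumptions for illustration):

```python
def exchange_counters(neighbor_states):
    """Collect neighbors' CRC/DRC snapshots so the local joint policy can be
    recomputed from counter differences. `neighbor_states` maps a neighbor id
    to its VirtualPlaneState; in a real network each snapshot would arrive as
    a control-plane message rather than a direct read."""
    snapshots = {}
    for nbr_id, nbr_state in neighbor_states.items():
        snapshots[nbr_id] = (dict(nbr_state.crc), dict(nbr_state.drc))
    return snapshots
```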

Virtual plane and actual plane for joint computation, forwarding, and caching

The node performs functions of a virtual plane and an actual plane. In the virtual plane, the node updates the local joint policy and those of neighboring nodes. In the actual plane, the node performs computations, forwards computation or data requests, and caches data objects—all according to the local and neighboring joint policies.

Computation request servicing based on maximum difference

The node (and its neighboring nodes) service computation requests by identifying and servicing the computation request having the maximum difference, where the difference is typically calculated between computation request counters and data request counters.
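
A sketch of that selection rule, assuming the relevant difference is the local CRC of a computation minus the DRC of its input data object (the exact pairing is an assumption here):

```python
def select_request_to_service(crc, drc):
    """Pick the computation request with the largest CRC - DRC difference.
    `crc` maps (computation, data_object) -> counter; `drc` maps
    data_object -> counter. Returns None if there is no pending request."""
    best_task, best_diff = None, float("-inf")
    for (computation, data_object), count in crc.items():
        diff = count - drc.get(data_object, 0.0)
        if diff > best_diff:
            best_task, best_diff = (computation, data_object), diff
    return best_task
```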

Determining maximum normalized backlogs for forwarding decisions

The node updates its local joint policy by calculating normalized maximum computation request backlogs and normalized maximum data request backlogs for computation requests across its queues. These backlogs are determined by computing differences between the node’s counters and those of neighboring nodes. The node forwards the request (or data) with the maximum normalized backlog to neighboring nodes.
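
The sketch below illustrates a backpressure-style forwarding choice for one neighbor: the backlog is the difference between the local and neighbor counters, normalized by an illustrative per-request cost. The same rule would apply to DRCs for data requests; the specific normalization is an assumption.

```python
def best_forwarding_choice(local_crc, neighbor_crc, costs):
    """For one neighbor, find the computation request with the maximum
    normalized backlog, i.e. (local CRC - neighbor CRC) / cost, where
    `costs[task]` is an illustrative normalization such as request size.
    Only requests with a positive backlog (local demand exceeding the
    neighbor's) are candidates for forwarding."""
    best_task, best_backlog = None, 0.0
    for task, local_count in local_crc.items():
        backlog = (local_count - neighbor_crc.get(task, 0.0)) / costs.get(task, 1.0)
        if backlog > best_backlog:
            best_task, best_backlog = task, backlog
    return best_task, best_backlog
```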

Cache score maximization for caching decisions

The node calculates the maximum possible total cache score for data objects present in its cache and other available data objects, then caches data objects to achieve the calculated maximum possible total cache score.
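
Assuming unit-sized data objects and a fixed cache capacity (neither is specified here), maximizing the total cache score reduces to keeping the highest-scoring objects, as in the sketch below.

```python
def choose_cache_contents(cache_scores, capacity):
    """Pick the set of data objects maximizing the total cache score, assuming
    unit-sized objects: keep the `capacity` highest-scoring objects among
    those currently cached or otherwise available."""
    ranked = sorted(cache_scores.items(), key=lambda kv: kv[1], reverse=True)
    return {obj for obj, _score in ranked[:capacity]}
```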

Cache score based on time-average usage

Each cache score for a data object is determined as a time-average of how many times the data object is received at the node or used in performing computations to service requests. Neighboring nodes use a similar approach for their own cache scores.
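
A time-average can be maintained incrementally with a running mean; the per-slot usage count and the slotting below are assumptions for illustration.

```python
class CacheScore:
    """Time-average of how often a data object is received at this node or
    used in a computation to service a request, updated once per time slot."""
    def __init__(self):
        self.slots = 0
        self.average = 0.0

    def update(self, uses_this_slot):
        # Running mean: avg_t = avg_{t-1} + (x_t - avg_{t-1}) / t
        self.slots += 1
        self.average += (uses_this_slot - self.average) / self.slots
        return self.average
```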

Distributed servicing of computation requests and results forwarding

The node (and its neighboring nodes) are equipped with processors and non-transitory computer-readable media, and execute instructions causing them to service computation requests in a distributed manner according to respective joint local policies. Computation results are forwarded toward the originating entity.
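
Putting the pieces together, a single decision step at a node might look like the self-contained sketch below: pick the pending computation with the largest counter difference, execute it locally if the input data object is cached, and otherwise forward it toward the neighbor with the largest counter backlog. This is a hedged composition of the features above, not the patented policy itself.

```python
def node_decision(crc, drc, neighbor_crcs, cache):
    """One illustrative decision step. `crc` and `drc` are the local counters,
    `neighbor_crcs` maps neighbor id -> that neighbor's CRC dict, and `cache`
    is the set of locally cached data objects."""
    best, best_diff = None, float("-inf")
    for (comp, obj), count in crc.items():
        diff = count - drc.get(obj, 0.0)
        if diff > best_diff:
            best, best_diff = (comp, obj), diff
    if best is None:
        return ("idle",)
    comp, obj = best
    if obj in cache:
        # Execute locally; the result then travels back toward the requester.
        return ("execute", comp, obj)
    if not neighbor_crcs:
        return ("hold", best)
    # Forward toward the neighbor with the greatest counter backlog for this task.
    nbr = max(neighbor_crcs, key=lambda n: crc[best] - neighbor_crcs[n].get(best, 0.0))
    return ("forward", best, nbr)
```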

The inventive features together establish a distributed, adaptive framework in which nodes dynamically exchange demand metrics, collaboratively determine local joint policies, and optimize processing, forwarding, and caching through a dual-plane architecture to enhance the performance of data-centric distributed networks.

Stated Advantages

The framework provides superior performance in terms of request satisfaction delay compared with several baseline policies over multiple network topologies.

The system enables adaptive and distributed computation, caching, and request forwarding without requiring prior knowledge of computation request rates.

The method allows for efficient joint optimization of computation, caching, and communication in data-centric computing-based networks with arbitrary topology and resource availability at each computing node.

Documented Applications

Medical data analytics

Data processing for wearable devices

Intelligent driving and transportation systems

In-network image and video processing

Processing 3D maps of the environment in augmented reality/virtual reality (AR/VR) applications

High energy physics data-intensive computing networks such as the Large Hadron Collider (LHC)
