Achieving Greater Efficiency for Fast Data Center Operations

Today’s data centers consume, and often waste, a good amount of energy responding to user requests as fast as possible, with only a few microseconds of delay. A new system by MIT researchers improves the efficiency of high-speed operations by better assigning time-sensitive data processing across central processing unit (CPU) cores and ensuring hardware runs productively.

Data centers operate as distributed networks, with numerous web and mobile applications implemented on a single server. When users send requests to an app, bits of stored data are pulled from hundreds or thousands of services across as many servers. Before sending a response, the app must wait for the slowest service to process the data. This lag time is known as tail latency.
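The effect described above can be sketched in a few lines: in a fan-out request, the response time is the maximum of the individual service latencies, so one straggler dominates even when the typical service is fast. The latency figures below are invented for illustration, not from the article.

```python
import random

def fanout_response_time(service_latencies_us):
    # The app must wait for every service it queried, so the
    # response time is set by the slowest one -- the "tail".
    return max(service_latencies_us)

# Hypothetical latencies (microseconds) for 1,000 services: most are
# fast and clustered, but a single slow straggler sets the response time.
random.seed(0)
latencies = [random.gauss(50, 5) for _ in range(1000)]
latencies[123] = 400.0  # one straggler

median = sorted(latencies)[len(latencies) // 2]
tail = fanout_response_time(latencies)
print(f"median service latency: {median:.0f} us, response time: {tail:.0f} us")
```

Even though the median service here answers in about 50 microseconds, the user-visible response takes 400, which is why reducing tail latency matters more than improving the average case.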

Current methods to reduce tail latencies leave tons of CPU cores in a server open to quickly handle incoming requests. But this means that cores sit idly for much of the time, while servers continue using energy just to stay powered on. Data centers can contain hundreds of thousands of servers, so even small improvements in each server’s efficiency can save millions of dollars.
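The scale argument above is easy to check with back-of-envelope arithmetic. All figures in this sketch (fleet size, idle power draw, electricity price) are assumptions for illustration, not numbers from the article.

```python
# Illustrative arithmetic: power wasted by cores that sit idle but
# powered on, summed across a large fleet. All inputs are assumed.
servers = 300_000                 # assumed fleet size
idle_power_w = 50                 # assumed watts wasted per server on idle cores
electricity_usd_per_kwh = 0.10    # assumed electricity price
hours_per_year = 24 * 365

annual_waste_usd = (servers * idle_power_w / 1000    # fleet idle draw in kW
                    * hours_per_year                 # kWh per year
                    * electricity_usd_per_kwh)       # dollars per year
print(f"annual cost of idle power across the fleet: ${annual_waste_usd:,.0f}")
```

Under these assumed numbers the idle draw alone costs over ten million dollars a year, which is why even small per-server efficiency gains are worth pursuing at data center scale.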

Read more at Massachusetts Institute of Technology (MIT)

Image: A new system by MIT researchers improves the efficiency of high-speed operations in data centers by better assigning time-sensitive data processing across CPU cores and ensuring hardware runs productively. CREDIT: MIT