Category Archives: Recent News

Infiniswap in USENIX ;login: and Elsewhere

Since our first open-source release of Infiniswap over the summer, we have seen growing interest, with many follow-ups both within our group and outside it. Here is a quick summary of selected write-ups on Infiniswap:

Received Two Alibaba Innovation Research Grants

More resources for following up on our recent memory disaggregation and erasure coding work! One of the awards is a collaboration with Harsha Madhyastha. Looking forward to working with Alibaba.
In 2017, AIR (Alibaba Innovation Research) received proposals from 99 universities and institutes (54 domestic; 45 overseas) in 13 countries … Continue Reading ››

Infiniswap Released on GitHub

Today we are glad to announce the first open-source release of Infiniswap, the first practical, large-scale memory disaggregation system for cloud and HPC clusters.
Infiniswap is an efficient memory disaggregation system designed specifically for clusters with fast RDMA networks. It opportunistically harvests and transparently exposes unused cluster memory to unmodified applications by dividing the … Continue Reading ››
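
For a rough sense of how such harvesting can work, here is a minimal sketch in Python (not Infiniswap's actual code): it assumes each machine's swap space is split into fixed-size slabs and that each slab is placed on a remote machine chosen by the power-of-two-choices rule, i.e., probe two random candidates and pick the one with more free memory. The host names, sizes, and thresholds below are illustrative assumptions.

    import random

    # Hypothetical view of free memory across the cluster (MB); values are made up.
    free_memory = {"host-a": 4096, "host-b": 1024, "host-c": 8192, "host-d": 2048}

    SLAB_MB = 1024  # assumed fixed slab size

    def place_slab(free_memory, slab_mb=SLAB_MB):
        """Choose a remote host for one slab with power-of-two-choices:
        sample two candidates and take the one with more free memory."""
        a, b = random.sample(list(free_memory), 2)
        chosen = a if free_memory[a] >= free_memory[b] else b
        if free_memory[chosen] < slab_mb:
            return None  # no room; a real system would fall back to local disk
        free_memory[chosen] -= slab_mb
        return chosen

    # Map an application's swap space (here, 4 slabs) onto remote memory.
    placement = {slab_id: place_slab(free_memory) for slab_id in range(4)}
    print(placement)

Unmodified applications never see this placement; they simply page to a local block device whose reads and writes are redirected over RDMA to the remote slabs chosen this way.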

Hermes Accepted to Appear at SIGCOMM’2017

Datacenter load balancing, especially in Clos topologies, remains a hot topic even after almost a decade. The pace of progress has picked up over the last few years, with multiple solutions exploring different extremes of the solution space, ranging from edge-based to in-network approaches and using different granularities of load balancing: packets, flowcells, flowlets, or … Continue Reading ››
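
To make the granularity terms concrete, here is a minimal Python sketch of flowlet detection (a standard technique, not Hermes-specific): packets of the same flow separated by an idle gap larger than a threshold start a new flowlet, which can then be sent on a different path without reordering packets within a burst. The 500-microsecond gap and the timestamps are illustrative assumptions.

    # Packet arrival timestamps (seconds) for one flow; values are illustrative.
    FLOWLET_GAP = 0.0005  # assume a 500 us idle gap starts a new flowlet

    def split_into_flowlets(timestamps, gap=FLOWLET_GAP):
        """Group a flow's packet timestamps into flowlets: a new flowlet starts
        whenever the idle time since the previous packet exceeds `gap`."""
        flowlets = []
        for t in timestamps:
            if flowlets and t - flowlets[-1][-1] <= gap:
                flowlets[-1].append(t)   # small gap: same burst, extend flowlet
            else:
                flowlets.append([t])     # large gap (or first packet): new flowlet
        return flowlets

    packets = [0.0000, 0.0001, 0.0002, 0.0010, 0.0011, 0.0030]
    print(split_into_flowlets(packets))
    # [[0.0, 0.0001, 0.0002], [0.001, 0.0011], [0.003]]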

FaiRDMA Accepted to Appear at KBNets’2017

As cloud providers deploy RDMA in their datacenters and developers rewrite/update their applications to use RDMA primitives, a key question remains open: what will happen when multiple RDMA-enabled applications must share the network? Surprisingly, this simple question does not yet have a conclusive answer. This is because existing work focuses primarily on improving individual application's … Continue Reading ››

“No! Not Another Deep Learning Framework” to Appear at HotOS’2017

Our position paper calling for a respite in the deep learning framework-building arms race has been accepted to appear at this year's HotOS workshop. We make a simple observation: too many frameworks are being proposed with little interoperability among them, even though many target the same or similar workloads; this inevitably leads to repetitions … Continue Reading ››

Infiniswap Accepted to Appear at NSDI’2017

Update: The camera-ready version is available here. Infiniswap code is now on GitHub!

As networks become faster, the difference between remote and local resources is blurring every day. How can we take advantage of these blurred lines? This is the key observation behind resource disaggregation and, to some extent, rack-scale computing. In this paper, we take our … Continue Reading ››

Two NSF Proposals Awarded as the Lead PI. Thanks, NSF!

The first is on rack-scale computing using RDMA-enabled networks with Barzan Mozafari at the University of Michigan, and the second is on the theoretical and systems implications of long-term fairness in cluster computing with Zhenhua Liu (Stony Brook University). Thanks, NSF! Combined with the recent awards on geo-distributed analytics from NSF and … Continue Reading ››