Looking forward to more research in the context of federated learning and edge AI/ML, going further beyond our recent work on federated analytics and learning.
All posts by Mosharaf
Oort Accepted to Appear at OSDI’2021
Oort's working title was Kuiper.
With the wide deployment of AI/ML in our daily lives, the need for data privacy has received growing attention in recent years. Federated Learning (FL) is an emerging sub-field of machine learning that focuses on in-situ processing of data wherever it is generated. … Continue Reading ››
Fluid Accepted to Appear at MLSys’2021
While training and inference of deep learning models have received significant attention in recent years (e.g., Tiresias, AlloX, and Salus from our group), hyperparameter tuning is often overlooked or lumped into the same bucket of optimizations as training. Existing hyperparameter tuning solutions, primarily … Continue Reading ››
Honored to be Named Morris Wellman Professor!
This is such great and humbling news! Many, many thanks to my students, collaborators, and those who nominated and supported me.
Kayak Accepted to Appear at NSDI’2021
As memory disaggregation, and resource disaggregation in general, becomes popular, one must decide whether to keep moving data from remote memory or to sometimes ship compute to the remote data instead. This problem is not new in the context of disaggregated datacenters either. The notion of data locality and associated … Continue Reading ››
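The data-vs-compute call above can be made with a simple cost model: move the data and compute locally, or run remotely and move only the result. The sketch below is purely illustrative (it is not Kayak's actual algorithm or API; all names and the cost model are my own simplification for this post):

```python
def should_ship_compute(result_size, data_size, bandwidth,
                        remote_cpu_time, local_cpu_time):
    """Pick the cheaper option under a toy cost model (sizes in bytes,
    bandwidth in bytes/sec, CPU times in seconds).

    Moving data:      transfer data_size, then compute locally.
    Shipping compute: compute remotely, then transfer only the result.
    """
    move_data_cost = data_size / bandwidth + local_cpu_time
    ship_compute_cost = remote_cpu_time + result_size / bandwidth
    return ship_compute_cost < move_data_cost

# A 1 GB scan that yields a 1 KB aggregate favors shipping compute,
# even if the remote CPU is a bit slower.
print(should_ship_compute(1e3, 1e9, 1e9, 0.02, 0.01))  # True
```

In practice the interesting regime is where neither side always wins, which is why systems in this space adapt the decision dynamically rather than fixing it ahead of time.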
Presented Keynote Talk at CloudNet’2020
Earlier this week, I presented a keynote talk on the state of network-informed data systems design at the CloudNet'2020 conference, with a specific focus on our recent works on memory disaggregation (Infiniswap, Leap, and NetLock), and discussed the many open challenges toward making memory … Continue Reading ››
Presented Keynote Speech at HotEdgeVideo’2020
Earlier this week, I presented a keynote speech on the state of resource management for deep learning at the HotEdgeVideo'2020 workshop, covering our recent works on systems support for AI (Tiresias, AlloX, and Salus) and discussing open challenges in this space.
Thanks Google for Supporting Our Research
We have been doing a lot of work on systems for AI in recent years and historically have done some work on application-aware networking. This support will help us combine the two directions to provide datacenter network support for AI workloads, both at the edge and inside the network.
Many thanks … Continue Reading ››
Leap Wins the Best Paper Award at ATC’2020. Congrats Hasan!
Leap, the fastest memory disaggregation system to date, has won a best paper award at this year's USENIX ATC conference!
This is a happy outcome for Hasan's persistence on this project for more than two years. From coming up with the core idea to executing it … Continue Reading ››
NetLock Accepted to Appear at SIGCOMM’2020
High-throughput, low-latency lock managers are useful for building a variety of distributed applications. Traditionally, a key tradeoff in this context has been expressed in terms of the amount of knowledge available to the lock manager. On the one hand, a decentralized lock manager can increase throughput by parallelization, but it can starve certain … Continue Reading ››
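To make the knowledge tradeoff concrete, here is a minimal sketch (hypothetical, not NetLock's design; the class and method names are mine) of the centralized end of the spectrum: because one manager sees every waiter, it can grant locks in strict FIFO order and rule out starvation, at the cost of funneling all requests through one point:

```python
import threading
from collections import deque, defaultdict

class CentralLockManager:
    """Toy centralized lock manager: global knowledge of all waiters
    lets it hand off each lock to the oldest waiter (no starvation)."""

    def __init__(self):
        self._mu = threading.Lock()
        self._held = set()                   # lock ids currently granted
        self._waiters = defaultdict(deque)   # lock id -> FIFO of clients

    def acquire(self, lock_id, client):
        with self._mu:
            if lock_id not in self._held and not self._waiters[lock_id]:
                self._held.add(lock_id)
                return True                  # granted immediately
            self._waiters[lock_id].append(client)
            return False                     # queued; granted on release

    def release(self, lock_id):
        with self._mu:
            if self._waiters[lock_id]:
                return self._waiters[lock_id].popleft()  # hand off FIFO
            self._held.discard(lock_id)
            return None                      # lock is now free
```

A decentralized scheme avoids this single funnel and parallelizes well, but without the global waiter queue it has no cheap way to guarantee that an unlucky client is ever served first.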