Upcoming Event! Workshop on Split Learning for Distributed Machine Learning (SLDML’21)

MIT Media Lab's Split Learning: Distributed and collaborative learning

Distributed deep learning and inference without sharing raw data

MIT Alliance for Distributed and Private Machine Learning

Abstract: Friction in data sharing is a major challenge for large-scale machine learning. Techniques such as Federated Learning, Differential Privacy, and Split Learning have recently emerged to address siloed and unstructured data, privacy and regulatory constraints on data sharing, and incentive models for data-transparent ecosystems. Split learning is a new technique developed at the MIT Media Lab’s Camera Culture group that allows participating entities to train machine learning models without sharing any raw data.

SafePaths

SafePaths, a global community-led movement, develops free, open-source, privacy-by-design tools for residents, public health officials, and larger communities to flatten the curve of COVID-19, reduce fear, and prevent a surveillance-state response to the pandemic.

Split Learning and Inference

Split learning removes barriers to collaboration across a wide range of sectors, including healthcare, finance, security, logistics, governance, operations, and manufacturing.

Events

Check out some of our recent talks and events.

Videos: Privacy Aware AI, Split Learning at World Economic Forum and Niti Aayog

Key technical idea: In the simplest configuration of split learning, each client (for example, a radiology center) trains a partial deep network up to a specific layer known as the cut layer. The outputs at the cut layer are sent to another entity (a server or another client), which completes the rest of the training without ever seeing the raw data held by any client. This completes a round of forward propagation without sharing raw data. Gradients are then back-propagated from the last layer of the server's portion down to the cut layer, and the gradients at the cut layer (and only these gradients) are sent back to the radiology client centers. The rest of back propagation is completed at the radiology client centers. This process continues until the distributed split learning network is trained, without any party looking at the others' raw data.
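
To make the round described above concrete, here is a minimal, hypothetical PyTorch sketch of one training step in the simplest configuration. The model split, the layer sizes, and the way the "send" is simulated with a detached tensor are illustrative assumptions rather than the reference implementation; in a real deployment the cut-layer activations and gradients travel over a network link between client and server.

```python
import torch
import torch.nn as nn

# Hypothetical split of a small network at the cut layer.
client_model = nn.Sequential(nn.Linear(256, 128), nn.ReLU())                    # held by the client
server_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))   # held by the server

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def training_step(x, y):
    # Client: forward pass on raw data up to the cut layer.
    client_opt.zero_grad()
    cut_activations = client_model(x)
    # Only the cut-layer activations cross the boundary; detach() stands in for the network hop.
    sent = cut_activations.detach().requires_grad_()

    # Server: completes the forward pass and backpropagates down to the cut layer.
    server_opt.zero_grad()
    loss = loss_fn(server_model(sent), y)
    loss.backward()                 # fills gradients for server weights and for `sent`
    server_opt.step()

    # Client: receives only the gradient at the cut layer and finishes backpropagation.
    cut_activations.backward(sent.grad)
    client_opt.step()
    return loss.item()

# Example usage with random tensors standing in for one client's private batch.
x, y = torch.randn(32, 256), torch.randint(0, 2, (32,))
print(training_step(x, y))
```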

Frequently Asked Questions

How does split learning work and what is new in our approach?

Split learning attains high resource efficiency for distributed deep learning, in comparison to existing methods, by splitting the model architecture across distributed entities. It communicates only the activations and gradients at the split layer, unlike other popular methods that share weights or gradients from all layers. Split learning requires no sharing of raw data, whether labels or features.
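
As a rough, hypothetical back-of-the-envelope illustration (the batch size, layer widths, and parameter counts below are made-up numbers, not benchmarks), compare what crosses the network under the two communication styles:

```python
# Split learning: one activation tensor (forward) plus one gradient tensor (backward) at the cut layer.
batch_size, cut_width = 32, 128
split_learning_floats = 2 * batch_size * cut_width

# Weight-sharing methods (e.g. federated averaging): the full parameter vector each round.
# Hypothetical three-layer model, biases omitted for simplicity.
weight_sharing_floats = 256 * 128 + 128 * 64 + 64 * 2

print(f"split learning, per step:  {split_learning_floats:,} floats")
print(f"weight sharing, per round: {weight_sharing_floats:,} floats")
```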

How is raw data protected and who can get positively impacted?

Split learning requires absolutely no raw data sharing. Sectors such as healthcare, finance, security, and surveillance, where data sharing is prohibited, will benefit from our approach to training distributed deep learning models. Another modality of split learning, called NoPeek SplitNN, also drastically reduces leakage from the communicated activations by reducing their distance correlation with the raw data, while maintaining model performance through the categorical cross-entropy loss.
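
For intuition, here is a minimal sketch of the kind of training objective a NoPeek-style SplitNN optimizes, assuming a standard sample distance-correlation estimate and a hypothetical weighting factor alpha; the exact formulation is given in the NoPeek paper listed under References.

```python
import torch
import torch.nn.functional as F

def distance_correlation(x, z, eps=1e-9):
    """Sample distance correlation between raw inputs x and cut-layer activations z."""
    def centered(a):
        d = torch.cdist(a, a)  # pairwise Euclidean distances within the batch
        return d - d.mean(dim=0, keepdim=True) - d.mean(dim=1, keepdim=True) + d.mean()
    A, B = centered(x.flatten(1)), centered(z.flatten(1))
    dcov = (A * B).mean().clamp_min(0.0).sqrt()
    dvar_x = (A * A).mean().clamp_min(0.0).sqrt()
    dvar_z = (B * B).mean().clamp_min(0.0).sqrt()
    return dcov / ((dvar_x * dvar_z).sqrt() + eps)

def nopeek_loss(x, cut_activations, logits, labels, alpha=0.1):
    # The distance-correlation term penalizes statistical dependence between raw data and the
    # communicated activations; the cross-entropy term preserves task performance.
    return alpha * distance_correlation(x, cut_activations) + F.cross_entropy(logits, labels)
```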

How long will it take to transition from a laboratory setting to actual deployments between cooperating entities?

The approach is easily deployable for inter- and intra-organizational collaboration and is highly versatile in terms of possible network topologies. Due to its high resource efficiency in computation, memory, and communication bandwidth, it is also naturally suited to distributed learning where the clients are pervasive and ubiquitous edge devices, such as mobile phones or IoT devices, as well as larger devices and organizations.

Team

Main Collaborators:
Ramesh Raskar, Associate Professor, MIT Media Lab; Principal Investigator
Praneeth Vepakomma, MIT
Abhishek Singh, MIT
Ayush Chopra, MIT
Vivek Sharma, MIT and Harvard Medical School
Otkrist Gupta, MIT Affiliate
Vitor Pamplona, MIT Affiliate
Kevin Pho, MIT

OpenMined Collaborators:
Andrew Trask, Adam J. Hall, Théo Ryffel

Website Team:
Saurish Srivastava
Sheshank Shankar
Rohan Iyer

News articles:

1. A new AI method can train on medical records without revealing patient data
2. A little-known AI method can train on your health data without threatening your privacy
3. The Algorithm Newsletter: The privacy-preserving AI technique that will transform healthcare
4. Les Echos: Medical secrecy, artificial intelligence and GDPR: irreconcilable? Not so sure…

References

Splintering Papers:
1. Splintering with distributions: A stochastic decoy scheme for private computation, Praneeth Vepakomma, Julia Balla, Ramesh Raskar, (2020) (PDF)

Split Learning Papers:
1. Distributed learning of deep neural network over multiple agents, Otkrist Gupta and Ramesh Raskar, In: Journal of Network and Computer Applications 116, (2018) (PDF)
2. DISCO: Dynamic and Invariant Sensitive Channel Obfuscation, Abhishek Singh, Ayush Chopra, Vivek Sharma, Ethan Z. Garza, Emily Zhang, Praneeth Vepakomma, Ramesh Raskar, Accepted to CVPR 2021. (2021) (PDF)
3. FedML: A Research Library and Benchmark for Federated Machine Learning (Baidu Best Paper Award at NeurIPS-SpicyFL 2020) (PDF)
4. NoPeek: Information leakage reduction to share activations in distributed deep learning, Praneeth Vepakomma, Otkrist Gupta, Abhimanyu Dubey, Ramesh Raskar, (2020) (PDF)
5. Split learning for health: Distributed deep learning without sharing raw patient data, Praneeth Vepakomma, Otkrist Gupta, Tristan Swedish, Ramesh Raskar, Accepted to ICLR 2019 Workshop on AI for social good. (2018) (PDF)
6. Detailed comparison of communication efficiency of split learning and federated learning, Abhishek Singh, Praneeth Vepakomma, Otkrist Gupta, Ramesh Raskar, (2019) (PDF)
7. ExpertMatcher: Automating ML Model Selection for Users in Resource Constrained Countries, Vivek Sharma, Praneeth Vepakomma, Tristan Swedish, Ken Chang, Jayashree Kalpathy-Cramer, and Ramesh Raskar (2019) (PDF)
8. Split Learning for collaborative deep learning in healthcare, Maarten G. Poirot, Praneeth Vepakomma, Ken Chang, Jayashree Kalpathy-Cramer, Rajiv Gupta, Ramesh Raskar (2019)

Survey Papers:
1. Advances and open problems in federated learning (with 58 authors from 25 institutions!) (2019) (PDF)
2. No Peek: A Survey of private distributed deep learning, Praneeth Vepakomma, Tristan Swedish, Ramesh Raskar, Otkrist Gupta, Abhimanyu Dubey, (2018) (PDF)
3. A Review of Homomorphic Encryption Libraries for Secure Computation, Sai Sri Sathya, Praneeth Vepakomma, Ramesh Raskar, Ranjan Ramachandra, Santanu Bhattacharya, (2018) (PDF)

AutoML Papers:
1. Accelerating neural architecture search using performance prediction, Bowen Baker, Otkrist Gupta, Ramesh Raskar, Nikhil Naik, In: conference paper at ICLR, (2018) (PDF)
2. Designing neural network architecture using reinforcement learning, Bowen Baker, Otkrist Gupta, Nikhil Naik & Ramesh Raskar, In: conference paper at ICLR, (2017) (PDF)

Differential Privacy Papers:
1. Differentially Private Supervised Manifold Learning with Applications like Private Image Retrieval, Praneeth Vepakomma, Julia Balla, Ramesh Raskar, (2021) (PDF)
2. DAMS: Meta-estimation of private sketch data structures for differentially private COVID-19 contact tracing, Praneeth Vepakomma, Subha Nawer Pushpita, Ramesh Raskar, PPML-NeurIPS 2020, (2020) (PDF)

Contact

Potential partner or want to connect with us? Please fill out this simple form to reach out!