Computer Science: Theory and Application
- [R] ‘Lambda Networks’ Achieve SOTA Accuracy, Save Massive Memory
- Software Preservation in Networked Art and Reflections on Philip Agre
[R] ‘Lambda Networks’ Achieve SOTA Accuracy, Save Massive Memory
Posted: 21 Oct 2020 02:31 PM PDT

The paper LambdaNetworks: Modeling Long-Range Interactions Without Attention proposes "lambda layers," a class of layers that provides a general framework for capturing long-range interactions between an input and a structured set of context elements. The paper also introduces "LambdaResNets," a family of architectures built on lambda layers that reaches SOTA accuracy on ImageNet and runs approximately 4.5x faster than the popular EfficientNets on modern machine learning accelerators.

Here is a quick read: ICLR 2021 Submission | 'Lambda Networks' Achieve SOTA Accuracy, Save Massive Memory

The paper LambdaNetworks: Modeling Long-Range Interactions Without Attention is currently under double-blind review for ICLR 2021 and is available on OpenReview. The PyTorch code can be found on the project GitHub.
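For readers curious about the mechanism, here is a minimal PyTorch sketch of a content-only lambda layer. It is not the paper's implementation: the class name and dimension arguments are illustrative, and the paper's positional lambdas and multi-query heads are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LambdaLayer(nn.Module):
    """Minimal content-only lambda layer (illustrative sketch).

    Each query attends to the context through a single linear map
    ("lambda") summarizing the context, instead of a per-query
    attention map over all context positions.
    """
    def __init__(self, dim, dim_k=16, dim_v=None):
        super().__init__()
        dim_v = dim_v or dim
        self.to_q = nn.Linear(dim, dim_k, bias=False)
        self.to_k = nn.Linear(dim, dim_k, bias=False)
        self.to_v = nn.Linear(dim, dim_v, bias=False)

    def forward(self, x, context=None):
        # x: (batch, n, dim); the context defaults to the input itself
        context = x if context is None else context
        q = self.to_q(x)                           # (b, n, k)
        k = F.softmax(self.to_k(context), dim=1)   # normalize keys over context positions
        v = self.to_v(context)                     # (b, m, v)
        # Content lambda: summarize the context into one (k, v) linear map
        lam = torch.einsum('bmk,bmv->bkv', k, v)
        # Apply the shared lambda to every query
        return torch.einsum('bnk,bkv->bnv', q, lam)

# Quick check on random input
layer = LambdaLayer(dim=128)
out = layer(torch.randn(2, 196, 128))  # -> (2, 196, 128)
```

Because the lambda is a small (k x v) matrix shared by all queries, the layer never materializes an n x m attention map, which is where the memory savings over self-attention come from.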
Software Preservation in Networked Art and Reflections on Philip Agre
Posted: 21 Oct 2020 09:14 AM PDT