Simplifying BERT-based models to increase efficiency, capacity

New method would enable BERT-based natural-language-processing models to handle longer text strings, run in resource-constrained settings — or sometimes both.

In recent years, many of the best-performing models in the field of natural-language processing (NLP) have been built on top of BERT language models. Pretrained on large corpora of (unlabeled) public texts, BERT models encode the probabilities of sequences of words. Because a BERT model begins with extensive knowledge of a language as a whole, it can be fine-tuned on a more targeted task — like question answering or machine translation — with relatively little labeled data.

BERT models, however, are very large, and BERT-based NLP models can be slow — even prohibitively slow, for users with limited computational resources. Their complexity also limits the length of the inputs they can take, as their memory footprint scales with the square of the input length.

A simplified illustration of the Pyramid-BERT architecture.

At this year’s meeting of the Association for Computational Linguistics (ACL), my colleagues and I presented a new method, called Pyramid-BERT, that reduces the training time, inference time, and memory footprint of BERT-based models, without sacrificing much accuracy. The reduced memory footprint also enables BERT models to operate on longer text sequences.

BERT-based models take sequences of sentences as inputs and output vector representations — embeddings — of both each sentence as a whole and its constituent words individually. Downstream applications such as text classification and ranking, however, use only the complete-sentence embeddings. To make BERT-based models more efficient, we progressively eliminate redundant individual-word embeddings in intermediate layers of the network, while trying to minimize the effect on the complete-sentence embeddings.
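At a high level, the forward pass looks like a standard encoder stack with a pruning step spliced in after each layer. The sketch below is illustrative only — the function names, the identity "encoders," and the keep-the-first-k selector are stand-ins, not the paper's implementation; the one property it demonstrates is that the CLS embedding (row 0) survives every pruning step:

```python
import numpy as np

def pyramid_forward(token_embeddings, layer_fns, core_sizes, select):
    """Sketch of the Pyramid-BERT idea (names and signatures are
    assumptions): after each encoder layer, shrink the token set to a
    core set while always keeping the CLS embedding in row 0."""
    X = token_embeddings
    for layer, size in zip(layer_fns, core_sizes):
        X = layer(X)
        keep = select(X, size)   # indices into X; must include 0 (CLS)
        X = X[keep]
    return X[0]                  # the CLS embedding, for downstream tasks

# Toy demo: identity "encoders" and a keep-the-first-k selector.
layers = [lambda X: X] * 3
selector = lambda X, k: list(range(k))
emb = np.arange(24, dtype=float).reshape(6, 4)   # 6 tokens, dim 4
cls_vec = pyramid_forward(emb, layers, [5, 4, 3], selector)
print(cls_vec)  # row 0 survives every pruning step: [0. 1. 2. 3.]
```

Because later layers see fewer tokens, both the per-layer compute and the peak memory shrink as the sequence moves up the stack.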

We compare Pyramid-BERT to several state-of-the-art techniques for making BERT models more efficient and show that we can speed inference up 3- to 3.5-fold while suffering an accuracy drop of only 1.5%, whereas, at the same speeds, the best existing method loses 2.5% of its accuracy.


Moreover, when we apply our method to Performers — variations on BERT models that are specifically designed for long texts — we can reduce the models’ memory footprint by 70%, while actually increasing accuracy. At that compression rate, the best existing approach suffers an accuracy dropoff of 4%.

A token’s progress

Each sentence input to a BERT model is broken into units called tokens. Most tokens are words, but some are multiword phrases, some are subword parts, some are individual letters of acronyms, and so on. The start of each sentence is demarcated by a special token called — for reasons that will soon be clear — CLS, for classification.
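As a toy illustration of that input format — real BERT tokenizers use learned subword vocabularies (WordPiece), so this word-level splitter is only a stand-in — the CLS token is simply prepended to the token sequence:

```python
def tokenize(sentence):
    """Toy word-level tokenizer. Real BERT tokenizers split into
    subword units from a learned vocabulary; this only illustrates
    the [CLS] token prepended to every input sequence."""
    return ["[CLS]"] + sentence.lower().split()

print(tokenize("Bob told his brother"))
# → ['[CLS]', 'bob', 'told', 'his', 'brother']
```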

Each token passes through a series of encoders — usually somewhere between four and 12 — each of which produces a new embedding for each input token. Each encoder has an attention mechanism, which decides how much each token’s embedding should reflect information carried by other tokens.

For instance, given the sentence “Bob told his brother that he was starting to get on his nerves,” the attention mechanism should attend more to the word “Bob” when encoding the word “his” but more to “brother” when encoding the word “he.” It’s because the attention mechanism must compare every token in an input sequence to every other that a BERT model’s memory footprint scales with the square of the input length.
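That quadratic scaling is visible in a minimal single-head self-attention sketch: the score matrix has one entry per pair of tokens, so an n-token input produces an n × n array. (This uses identity query/key/value projections for brevity; a real encoder learns those projections.)

```python
import numpy as np

def self_attention(X):
    """Single-head self-attention over token embeddings.

    X: (n_tokens, d) array. The score matrix S is (n_tokens, n_tokens),
    which is why memory scales with the square of the input length.
    Identity Q/K/V projections are used purely for illustration.
    """
    d = X.shape[1]
    Q, K, V = X, X, X
    S = Q @ K.T / np.sqrt(d)                   # (n, n) pairwise scores
    S = np.exp(S - S.max(axis=1, keepdims=True))
    A = S / S.sum(axis=1, keepdims=True)       # row-wise softmax
    return A @ V                               # one new embedding per token

X = np.random.default_rng(0).normal(size=(6, 8))  # 6 tokens, dim 8
out = self_attention(X)
print(out.shape)  # (6, 8) — but S was (6, 6); doubling n quadruples S
```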


As tokens pass through the series of encoders, their embeddings factor in more and more information about other tokens in the sequence, since they’re attending to other tokens that are also factoring in more and more information. By the time the tokens pass through the final encoder, the embedding of the CLS token ends up representing the sentence as a whole (hence the CLS token’s name). But its embedding is also very similar to those of all the other tokens in the sentence. That’s the redundancy we’re trying to remove.

The basic idea is that, in each of the network’s encoders, we preserve the embedding of the CLS token but select a representative subset — a core set — of the other tokens’ embeddings.

Embeddings are vectors, so they can be interpreted as points in a multidimensional space. To construct core sets we would, ideally, sort embeddings into clusters of equal diameter and select the center point — the centroid — of each cluster.

Ideally, for each encoder in the network, we would construct a representative subset of token embeddings (green dots) by selecting the centroids (red dots) of token clusters (circles). The centroids would then pass to the next layer of the network.

Unfortunately, the problem of constructing a core set that spans a layer of a neural network is NP-hard, meaning that it’s impractically time consuming.

As an alternative, our paper proposes a greedy algorithm that selects n members of the core set at a time. At each layer, we take the embedding of the CLS token, and then we find the n embeddings farthest from it in the representational space. We add those, along with the CLS embedding, to our core set. Then we find the n embeddings whose minimum distance from any of the points already in our core set is greatest, and we add those to the core set.


We repeat this process until our core set reaches the desired size. We show that this greedy procedure provably yields an adequate approximation of the optimal core set.
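The greedy selection described above can be sketched as a farthest-point routine. This is a simplified reading of the procedure, not the paper's code: distances, the step size, and the unification of the first step (farthest from CLS) with later steps (farthest from the whole core set) are assumptions:

```python
import numpy as np

def greedy_core_set(embeddings, cls_index, target_size, n_per_step=1):
    """Greedy farthest-point selection of a token core set.

    Start from the CLS embedding; repeatedly add the token(s) whose
    minimum distance to the current core set is largest. On the first
    pass the core set is just CLS, so this picks the tokens farthest
    from CLS, matching the description in the text.
    """
    selected = [cls_index]
    # Distance of every token to its nearest selected point so far.
    dists = np.linalg.norm(embeddings - embeddings[cls_index], axis=1)
    while len(selected) < target_size:
        for _ in range(n_per_step):
            if len(selected) >= target_size:
                break
            far = int(np.argmax(dists))        # most isolated token
            selected.append(far)
            new_d = np.linalg.norm(embeddings - embeddings[far], axis=1)
            dists = np.minimum(dists, new_d)   # update nearest distances
    return sorted(selected)

rng = np.random.default_rng(1)
emb = rng.normal(size=(10, 4))                 # 10 tokens, dim 4
core = greedy_core_set(emb, cls_index=0, target_size=4)
print(core)  # 4 token indices, always including index 0 (CLS)
```

Because each selected point's own distance drops to zero, it is never picked twice, and each step costs only O(n · d), keeping selection cheap relative to the encoder itself.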

Finally, in our paper, we consider the question of how large the core set of each layer should be. We use an exponential-decay function to determine the degree of attenuation from one layer to the next, and we investigate the trade-offs between accuracy and speedups or memory reduction that result from selecting different rates of decay.
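One simple way to realize such a schedule — the exact formula and parameter values here are hypothetical, chosen only to illustrate the shape of the trade-off — is to keep roughly a constant fraction of tokens at each successive layer:

```python
def core_set_sizes(seq_len, num_layers, decay_rate):
    """Per-layer core-set sizes under an exponential-decay schedule.

    Layer k keeps roughly seq_len * decay_rate**k tokens. A rate near
    1.0 favors accuracy; a smaller rate favors speed and memory.
    (Illustrative formula; the paper tunes the decay rate empirically.)
    """
    return [max(1, round(seq_len * decay_rate ** k))
            for k in range(num_layers)]

print(core_set_sizes(seq_len=128, num_layers=6, decay_rate=0.7))
# → [128, 90, 63, 44, 31, 22]
```

Summing the schedule gives a quick estimate of total attention cost: the sum of squared layer sizes replaces num_layers × seq_len², which is where the speedups and memory savings come from.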

Acknowledgements: Ashish Khetan, Rene Bidart, Zohar Karnin
