arxiv:2504.12285

BitNet b1.58 2B4T Technical Report

Published on Apr 16
· Submitted by hongyuw on Apr 17
#1 Paper of the day

Abstract

AI-generated summary: BitNet b1.58 2B4T, a 1-bit Large Language Model with 2 billion parameters, matches the performance of full-precision models while improving computational efficiency.

We introduce BitNet b1.58 2B4T, the first open-source, native 1-bit Large Language Model (LLM) at the 2-billion parameter scale. Trained on a corpus of 4 trillion tokens, the model has been rigorously evaluated across benchmarks covering language understanding, mathematical reasoning, coding proficiency, and conversational ability. Our results demonstrate that BitNet b1.58 2B4T achieves performance on par with leading open-weight, full-precision LLMs of similar size, while offering significant advantages in computational efficiency, including substantially reduced memory footprint, energy consumption, and decoding latency. To facilitate further research and adoption, the model weights are released via Hugging Face along with open-source inference implementations for both GPU and CPU architectures.
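A back-of-envelope check helps make the memory claim concrete. The sketch below is illustrative only: it assumes the "b1.58" name refers to ternary weights ({-1, 0, +1}, roughly log2(3) ≈ 1.58 bits of information per weight) and compares raw weight storage at the 2-billion-parameter scale against a bf16 baseline, ignoring embeddings, activations, and packing overhead.

```python
# Illustrative estimate only: assumes ternary (~1.58-bit) weight storage vs. a
# bf16 baseline for ~2B parameters; real footprints depend on packing,
# embeddings, and runtime buffers.
import math

params = 2e9                  # ~2 billion weights
bits_ternary = math.log2(3)   # ternary {-1, 0, +1} -> ~1.58 bits per weight
bits_bf16 = 16                # full-precision bfloat16 baseline


def weight_gb(bits_per_weight: float) -> float:
    """Gigabytes needed to store `params` weights at the given bit width."""
    return params * bits_per_weight / 8 / 1e9


print(f"ternary weights: ~{weight_gb(bits_ternary):.2f} GB")  # ~0.40 GB
print(f"bf16 weights:    ~{weight_gb(bits_bf16):.2f} GB")     # ~4.00 GB
```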

Community

Paper author · Paper submitter

🔥 Inference weights: https://huggingface.co/microsoft/bitnet-b1.58-2B-4T
🔥 Training weights (bf16): https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-bf16
🧰 Inference code: https://github.com/microsoft/bitnet
📄 Technical report will be available soon.

Try it out via https://bitnet-demo.azurewebsites.net; we are cooking larger models 🥳🥳
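For anyone who wants to poke at the released checkpoint directly, here is a minimal loading sketch using the generic Hugging Face transformers text-generation API. It is a hedged sketch, not the official recipe: it presumes the microsoft/bitnet-b1.58-2B-4T repository loads through AutoModelForCausalLM on a sufficiently recent transformers build, and it will not by itself deliver the low-bit speed and memory benefits, which need the dedicated kernels in the inference repo linked above.

```python
# Minimal sketch (assumptions noted above): loads the released checkpoint
# through the generic transformers API. Check the model card for the
# transformers version it expects; the efficiency gains require the
# dedicated BitNet inference kernels.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/bitnet-b1.58-2B-4T"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "Explain 1.58-bit weights in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=64)

print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```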


Models citing this paper: 9
Datasets citing this paper: 0
Spaces citing this paper: 25
Collections including this paper: 12
