
Quiet-STaR teaches language models to think before they speak.

Quiet-STaR, a training technique for language models, has been developed by researchers from Stanford University and Notbad AI. It allows an artificial intelligence (AI) model to reason internally before producing a response, mimicking the thought process humans go through before speaking.

Described in a research paper, the technique involves training a language model (Mistral 7B in the paper) to apply this kind of reasoning to text in general. It builds on an earlier technique called Self-Taught Reasoner (STaR), which trained a model to answer questions given a few examples with explanations (rationales) for the answers. The model generated its own rationales and refined them based on whether its answers turned out to be correct.
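The STaR loop can be pictured roughly as follows. This is a minimal sketch of the idea, not the authors' code; the helper functions generate_rationale_and_answer and fine_tune are hypothetical stand-ins for a real language-model API.

```python
def generate_rationale_and_answer(model, question, few_shot_examples):
    """Prompt the model with a few worked examples, then the question.
    Returns (rationale, answer). Stubbed here for illustration."""
    ...

def fine_tune(model, training_examples):
    """Fine-tune the model on (question, rationale, answer) triples. Stubbed."""
    ...

def star_iteration(model, dataset, few_shot_examples):
    kept = []
    for question, gold_answer in dataset:
        rationale, answer = generate_rationale_and_answer(
            model, question, few_shot_examples)
        # Keep only rationales that led to the correct answer.
        if answer == gold_answer:
            kept.append((question, rationale, answer))
    # Fine-tune on the model's own successful rationales,
    # then repeat the loop with the improved model.
    return fine_tune(model, kept)
```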

Whilst effective, STaR's reasoning was confined to the question-answering contexts it saw during training. Quiet-STaR aims to give a language model the more general ability to generate rationales across a much wider variety of text.

Quiet-STaR works by generating rationales in parallel at every token position in the text it is processing, but it never outputs those rationales, hence the name "Quiet". Each rationale is scored by how much it improves the prediction of the next token compared with the prediction the base model makes without it. A reinforcement learning algorithm then teaches the model which rationales help its predictions and which hinder them.
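The training signal might look something like the sketch below. It is an illustration under assumptions, not the authors' implementation: the actual method generates thoughts in parallel across positions with a learned mixing head, whereas this sketch loops sequentially for clarity, and the callables predict_logp and sample_thought are hypothetical stand-ins supplied by the caller (the first returns the log-probability of a target token given a context, the second samples a hidden rationale and returns it with its log-probability).

```python
import torch

def quiet_star_loss(predict_logp, sample_thought, tokens):
    losses = []
    for t in range(len(tokens) - 1):
        context, target = tokens[: t + 1], tokens[t + 1]

        # Base prediction: log-probability of the true next token
        # without any hidden rationale in the context.
        base_logp = predict_logp(context, target)

        # Sample a hidden "thought" after position t, then predict the
        # same next token with the thought appended to the context.
        thought, thought_logp = sample_thought(context)
        logp_with_thought = predict_logp(context + thought, target)

        # REINFORCE-style reward: how much the thought improved the
        # prediction of the true next token relative to the base model.
        reward = (logp_with_thought - base_logp).detach()
        losses.append(-reward * thought_logp)

    # Thoughts that helped are reinforced; thoughts that hurt are discouraged.
    return torch.stack(losses).mean()
```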

When tested on two reasoning benchmarks, Mistral 7B trained with Quiet-STaR showed improved results, including better perplexity and stronger zero-shot direct reasoning. However, the higher-quality responses come with a computational overhead, because the model generates many extra tokens during its internal thought process. Future hardware advances and further optimisation of Quiet-STaR may mitigate this cost.

The use of Quiet-STaR also raises ethical questions. The researchers note that there is no way to know whether the generated rationales faithfully represent the model's internal processing, and there are no safeguards against harmful or biased reasoning patterns, so the answers the AI provides may be neither transparent nor acceptable to human users.
