PaLM
Researchers also trained smaller versions of PaLM (with 8 and 62 billion parameters) to test the effects of model scale.[2][3][4][5] When combined with chain-of-thought prompting, PaLM achieved significantly better performance on datasets requiring multi-step reasoning, such as word problems and logic-based questions.[1][2] The model was first announced in April 2022 and remained private until March 2023, when Google launched an API for PaLM and several other technologies.[10] Google also extended PaLM using a vision transformer to create PaLM-E, a state-of-the-art vision-language model that can be applied to robotic manipulation.[16] PaLM is pre-trained on a high-quality corpus of 780 billion tokens comprising a wide variety of natural language tasks and use cases.
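Chain-of-thought prompting works by prepending worked examples whose answers spell out intermediate reasoning steps, so the model imitates that step-by-step style on a new question. A minimal sketch of how such a prompt is assembled (the exemplar wording and the `build_cot_prompt` helper are illustrative assumptions, not PaLM's actual prompt format or API):

```python
# Illustrative chain-of-thought prompt construction. The exemplar text below
# is a made-up worked example, not taken from PaLM's training or evaluation data.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar so the model continues with step-by-step reasoning."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

prompt = build_cot_prompt(
    "A cafeteria had 23 apples. It used 20 and bought 6 more. How many are left?"
)
print(prompt)
```

The completion the model produces after the trailing `A:` would then contain the intermediate arithmetic before the final answer, which is what drives the accuracy gains reported on multi-step reasoning benchmarks.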