"This is going to change the landscape of the LLM market," tweeted Meta Chief AI Scientist Yann LeCun. "Llama-v2 is available on Microsoft Azure and will be available on AWS, Hugging Face, and other providers."

According to Meta, its Llama 2 "pretrained" models (the bare-bones models) are trained on 2 trillion tokens and have a context window of 4,096 tokens (fragments of words). The context window determines the length of the content the model can process at once. Meta also says that the Llama 2 fine-tuned models, developed for chat applications similar to ChatGPT, have been trained on "over 1 million human annotations."

While it can't match OpenAI's GPT-4 in performance, Llama 2 apparently fares well for a source-available model. According to Jim Fan, senior AI scientist at Nvidia, "70B is close to GPT-3.5 on reasoning tasks, but there is a significant gap on coding benchmarks. It's on par or better than PaLM-540B on most benchmarks, but still far behind GPT-4 and PaLM-2-L." More details on Llama 2's performance, benchmarks, and construction can be found in a research paper released by Meta on Tuesday.

While open AI models with weights available have proven popular with hobbyists and people seeking uncensored chatbots, they have also proven controversial. Critics say that open source AI models carry potential risks, such as misuse in synthetic biology or in generating spam or disinformation. Llama 2 brings this activity more fully out into the open with its allowance for commercial use, although potential licensees with "greater than 700 million monthly active users in the preceding calendar month" must request special permission from Meta to use it, potentially precluding its free use by giants the size of Amazon or Google.

Meta is notable for standing alone among the tech giants in supporting major openly-licensed and weights-available foundation models, while those in the closed-source corner include OpenAI, Microsoft, and Google.
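To make the context-window figure concrete, here is a minimal sketch (not Meta's code) of checking whether a prompt fits inside Llama 2's 4,096-token window. Exact counts require the model's own tokenizer; this stand-in uses the common rough heuristic of about four characters of English text per token, so treat the numbers as estimates only.

```python
CONTEXT_WINDOW = 4096   # Llama 2 pretrained models, per Meta
CHARS_PER_TOKEN = 4     # rough heuristic, not the real tokenizer


def estimated_tokens(text: str) -> int:
    """Estimate a prompt's token count from its character length."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def fits_in_context(prompt: str, reserved_for_output: int = 512) -> bool:
    """True if the prompt, plus tokens reserved for the model's reply,
    fits within the 4,096-token context window."""
    return estimated_tokens(prompt) + reserved_for_output <= CONTEXT_WINDOW


print(fits_in_context("Summarize this article."))  # short prompt fits
print(fits_in_context("word " * 10_000))           # ~50k chars does not
```

The `reserved_for_output` budget reflects that the window is shared between the prompt and the model's generated reply, which is why a 4,096-token window cannot simply hold 4,096 tokens of input.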
Further Reading: Meta unveils a new large language model that can run on a single GPU