From the course: LLaMa for Developers


Accessing LLaMA in an enterprise environment


- [Instructor] In this video, we're going to learn how to run LLaMa in an enterprise environment. Most enterprises have special rules for running GenAI, and this includes LLaMa. The easiest way to comply is to run a managed version of LLaMa and call it through an API. Let's go through some examples. The first example is running LLaMa on Azure's AI Studio. I'm here on ai.azure.com, looking at Llama-2-70b-chat. You can see here, I can deploy the model under the deploy option, either pay-as-you-go, which runs it continuously, or as a real-time endpoint. Now, you can do the same thing with a number of different vendors. I can do the same thing on Amazon Bedrock, or on Google's Vertex AI Model Garden. Many companies also use Databricks to host and run their large language models, as well as the Hugging Face enterprise option. Now, your company may already have LLaMa or other GenAI models running on their…
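The managed-endpoint pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the course: the endpoint URL, API key, and the OpenAI-style request shape are placeholder assumptions, so copy the real values and request schema from your provider's deployment page (for example, the Azure AI Studio deployment screen shown in the video).

```python
# Hypothetical sketch of calling a managed LLaMa chat endpoint over REST.
# The URL, key, and request shape below are placeholders, NOT the course's
# actual values -- take the real ones from your provider's deployment page.
import json
import urllib.request


def build_chat_payload(user_message, temperature=0.7, max_tokens=256):
    """Build a chat-completion request body in the OpenAI-style format
    that several managed endpoints accept (assumed schema)."""
    return {
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }


def call_endpoint(endpoint_url, api_key, payload):
    """POST the payload to the managed endpoint and return the parsed
    JSON response. Bearer auth is an assumption; some providers use a
    different header (e.g. 'api-key')."""
    req = urllib.request.Request(
        endpoint_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # placeholder auth scheme
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    payload = build_chat_payload("Summarize our GenAI usage policy in one sentence.")
    # Uncomment and fill in your deployment's URL and key to run for real:
    # result = call_endpoint(
    #     "https://<your-deployment>.example.com/v1/chat/completions",
    #     "<YOUR_API_KEY>",
    #     payload,
    # )
```

Whichever vendor you pick (Azure, Bedrock, Vertex AI, Databricks, Hugging Face), the shape is the same: your application holds a credential and posts prompts to an endpoint the provider manages, so no model weights or GPUs live inside your own environment.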
