
LLM

·2 mins

N Acharya
Cloud, Data Engineering, Data Science & AI

Scope
#

Given how much information on Generative AI is already available on the Internet, let’s keep this brief and to the point on the relevant topics.

Let’s start by asking ChatGPT
#

ChatGPT : What is a Large Language Model?

A large language model, often referred to simply as an “LLM,” is a type of artificial intelligence (AI) model designed to understand and generate human language…

Here are the main points that ChatGPT came up with:

  • Text Generation: LLMs can generate coherent and contextually relevant text. This ability has applications in content generation, chatbots, and more.

  • Text Completion: They can predict and complete text or sentences based on given input, making them useful for autocomplete features and text suggestion systems.

  • Text Classification: LLMs can classify text into different categories or labels, which is valuable for tasks like sentiment analysis, spam detection, and topic classification.

  • Language Translation: LLMs can be used for machine translation, converting text from one language to another.

  • Question Answering: They can answer questions posed in natural language, making them useful for virtual assistants and search engines.

  • Text Summarization: LLMs can generate concise summaries of longer text documents, which is beneficial for information retrieval and content summarization.

  • Language Understanding: These models can understand the meaning and context of text, including ambiguous or complex language.

  • Conversational Agents: LLMs can be used to build chatbots and virtual assistants capable of holding text-based conversations with users.
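One thing worth noticing about the list above: most of these tasks use the same interface, and only the prompt changes. A minimal sketch of that idea (the `build_prompt` helper and the templates are illustrative, not from any library):

```python
# Sketch: completion, classification, summarization and Q&A all go through
# the same text-in/text-out interface; switching tasks is just switching prompts.
TASK_TEMPLATES = {
    "classification": "Classify the sentiment of this text as positive or negative:\n{text}",
    "summarization": "Summarize the following text in one sentence:\n{text}",
    "question_answering": "Answer the question using the context.\nContext: {text}\nQuestion: {question}",
}

def build_prompt(task: str, **fields: str) -> str:
    """Fill the template for the given task with the caller's fields."""
    return TASK_TEMPLATES[task].format(**fields)

prompt = build_prompt(
    "summarization",
    text="LLMs are neural networks trained on large text corpora.",
)
print(prompt)
```

The same prompt string can then be sent to any chat or completion endpoint.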

Core Areas
#

If you are building applications, you will need access to:

OpenAI API Reference : The APIs needed to make queries against the various OpenAI models, e.g. gpt-3.5-turbo-0301, gpt-4-0314, gpt-4-32k-0314.
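A minimal sketch of querying one of those models through the official `openai` Python SDK (v1.x chat-completions interface). The system prompt is illustrative, and the call assumes `OPENAI_API_KEY` is set in your environment:

```python
def build_messages(question: str) -> list[dict]:
    """Build the chat payload: a system instruction plus the user's question."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": question},
    ]

def ask(question: str, model: str = "gpt-3.5-turbo-0301") -> str:
    """Send the question to the chosen OpenAI model and return its reply text."""
    from openai import OpenAI  # official SDK, imported here to keep the sketch self-contained

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(question),
    )
    return response.choices[0].message.content

# Example (requires a valid API key):
# answer = ask("What is a Large Language Model?")
```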

Langchain Documentation : Makes it easier to develop LLM applications by connecting various API endpoints as input-output pairs. This lets many services communicate with each other to accomplish what the application is attempting to do.
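The input-output chaining idea can be sketched in plain Python; this illustrates the pattern, not LangChain’s actual API, and `fake_llm` is a stand-in for a real model call:

```python
def chain(*steps):
    """Compose steps so each step's output becomes the next step's input."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Hypothetical stages standing in for real services: prompt builder, LLM, parser.
build_prompt = lambda q: f"Answer concisely: {q}"
fake_llm = lambda prompt: f"[model output for: {prompt}]"
parse = lambda text: text.strip("[]")

pipeline = chain(build_prompt, fake_llm, parse)
print(pipeline("What is an LLM?"))
```

In LangChain, the stages would be real prompt templates, model wrappers, and output parsers, but the wiring is the same.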

LlamaIndex : A data framework for connecting LLMs to external data sources, handling indexing and retrieval.

Autogen Framework : A multi-agent abstraction layer that allows building LLM workflows with much more automation.
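The core pattern AutoGen automates is agents exchanging messages until one signals completion. A plain-Python illustration of that loop (the `Agent` class and reply functions here are hypothetical stand-ins, not AutoGen’s actual API):

```python
class Agent:
    """A named participant whose reply logic is supplied as a function."""
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn

    def reply(self, message: str) -> str:
        return self.reply_fn(message)

def run_chat(initiator, responder, opening: str, max_turns: int = 4) -> list[str]:
    """Alternate messages between two agents, stopping when one says TERMINATE."""
    transcript = [opening]
    sender, receiver = initiator, responder
    message = opening
    for _ in range(max_turns):
        message = receiver.reply(message)
        transcript.append(message)
        if "TERMINATE" in message:
            break
        sender, receiver = receiver, sender
    return transcript

writer = Agent("writer", lambda m: "Draft: " + m)
critic = Agent("critic", lambda m: "Looks good. TERMINATE")
print(run_chat(critic, writer, "Write a tagline for an LLM blog."))
```

In AutoGen, the reply functions would be backed by LLM calls or tool executions, and the framework manages the turn-taking for you.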

Links
#

Here are a few links that are really worth looking into: