Getting Started with LangChain for Beginners

Explore the power of LangChain with a few simple examples.

Tirendaz AI
Level Up Coding


Whether you’re a tech enthusiast or a developer, you’ve probably used AI tools like ChatGPT. It’s no secret that these tools are built with large language models (LLMs). Would you like to go a step further and create your own LLM applications? If so, LangChain is for you.

In this blog, you’ll learn how to use LangChain through a few simple examples. By the end of the article, you’ll see the power of this framework. Here are the topics we’ll cover:

  • What is LangChain?
  • Installation
  • Environment setup
  • Building a Language Model Application
  • Prompt Templates & Chains
  • Creating Chains

If you don’t feel like reading, you can watch my video below.

Let’s dive in!

What is LangChain?

ChatGPT has opened a Pandora’s box of large language models (LLMs). Many AI tools have been built with these models, and LLMs have great potential for powering modern applications.

To work with LLMs, you need to connect them to other data sources and let these models interact with their environment. This is where LangChain comes in.

LangChain is a new framework for building applications powered by large language models. You can use it for autonomous agents, personal assistants, question-answering, chatbots, code understanding, and much more. As you can see in the chart below, LangChain’s growth is pretty fast and honestly impressive.

LangChain GitHub Star History

Chains are the core idea of this framework: they let you combine different components into a single pipeline. Let’s go through these components.

The models module provides a standardized interface to models, making it easy to switch between them.

The prompts module allows you to manage and optimize prompts.

The memory module persists state between the calls of a chain or agent.

The agents module decides which actions to take and then carries them out.

There are several more modules in LangChain. You can check these modules here.
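To make the “standardized interface” idea concrete, here is a plain-Python sketch (an illustration, not LangChain code): any callable that maps a prompt string to a completion string satisfies the same interface, so the rest of the application never has to change when you swap backends.

```python
# Two stub "models" standing in for different providers (illustration only).
def fake_openai(prompt: str) -> str:
    return f"[openai-style answer to: {prompt}]"

def fake_huggingface(prompt: str) -> str:
    return f"[hf-style answer to: {prompt}]"

def answer(llm, question: str) -> str:
    # Application code depends only on the shared call signature,
    # so switching providers is a one-line change at the call site.
    return llm(question)

print(answer(fake_openai, "What is LangChain?"))
print(answer(fake_huggingface, "What is LangChain?"))
```

This is the benefit the models module gives you for real providers: the same call shape regardless of which LLM sits behind it.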

Cool, we’ve seen briefly what LangChain is. Let’s move on to getting our hands dirty with LangChain step by step.

Installation

To use LangChain, let’s first install it with the pip command.

!pip install -q langchain

To work with LangChain, you need integrations with one or more model providers, such as OpenAI or Hugging Face. For this example, we’ll be leveraging OpenAI’s APIs, so we need to install its Python package first.

!pip install -q openai

Awesome, we installed the necessary library. Let’s go ahead and set our environment variable.

Environment Setup

After installing the OpenAI package, you need to set your API key as an environment variable. To do this, let’s use the environ mapping from the os module.

import os
os.environ["OPENAI_API_KEY"] = "Your-API-Key"

To get your OPENAI_API_KEY, go to the OpenAI website and create a new API key.
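Hardcoding the key works for a quick demo, but a slightly safer pattern (a sketch using only the standard library) is to read it from the environment and fail loudly if it is missing:

```python
import os

def get_api_key(name: str = "OPENAI_API_KEY") -> str:
    # Read the key from the environment instead of pasting it into code.
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"Set the {name} environment variable first.")
    return key

# Example: pretend the variable was exported in the shell beforehand.
os.environ["OPENAI_API_KEY"] = "Your-API-Key"
print(get_api_key())
```

This way the key never ends up committed to a notebook or repository.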

Nice, we set the environment variable. Let’s move on to building a language model application.

Building a Language Model Application


In this section, we’ll see how to get predictions from a language model. LangChain has many modules that can be used to create language model applications, and large language models are a key component. Note that LangChain is not an LLM provider; instead, it lets you work with various LLMs through one interface. Now, let’s load an LLM from OpenAI and make a prediction with it.

# Importing OpenAI 
from langchain.llms import OpenAI
# Initializing an OpenAI model
llm = OpenAI(temperature=0.7)
# Creating a prompt text
text = "What are the 5 most expensive capital cities?"
# Getting a prediction from the language model
print(llm(text))

# Output:
# 1. Tokyo, Japan
# 2. Beijing, China
# 3. Singapore
# 4. Zurich, Switzerland
# 5. Hong Kong, China

As you can see, our model made a prediction and the 5 most expensive capitals were printed.

If you want, you can use a specific model from OpenAI. Let me show you.

llm_1 = OpenAI(model_name="gpt-3.5-turbo")
print(llm_1("Tell me a joke"))

# Output:
# Why don't scientists trust atoms? Because they make up everything!

Here you go. Note that you’ll see a different joke each time you run this command. Pretty good, right?

Prompt Templates & Chains

In the previous example, we sent user input directly to the LLM. However, when using an LLM in an application, you usually take user input and construct a prompt. This is a piece of cake to perform with LangChain! Let’s first define the prompt template.

from langchain.prompts import PromptTemplate
# Creating a prompt template
prompt = PromptTemplate(
    input_variables=["input"],
    template="Which are the 5 most {input} capital cities?",
)

OK, we created our prompt template. To see the formatted prompt, let’s now call the format method and pass it an input.

print(prompt.format(input="popular"))

# Output:
# Which are the 5 most popular capital cities?

Here, you can see the prompt format. It is time to combine LLMs and prompts.
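Before that, it helps to see what the template is doing for us. Under the hood, this kind of substitution behaves much like Python’s built-in str.format; here is a plain-Python sketch (an illustration, not LangChain internals):

```python
# A minimal stand-in for PromptTemplate (illustration only).
template = "Which are the 5 most {input} capital cities?"

def format_prompt(template: str, **kwargs) -> str:
    # str.format fills each {placeholder} with the matching keyword argument.
    return template.format(**kwargs)

print(format_prompt(template, input="popular"))
# Which are the 5 most popular capital cities?
```

PromptTemplate adds input validation and a declared list of variables on top of this basic mechanic.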

Creating Chains

So far, we’ve seen how to initialize an LLM and how to get a prediction from it. Now, let’s take a step forward and chain these steps together using the LLMChain class.

To show this, let’s initialize a language model and create a prompt.

from langchain.chains import LLMChain
# Initializing an LLM
llm = OpenAI(temperature=0.7)
# Creating a prompt template
prompt = PromptTemplate(
    input_variables=["country"],
    template="Where is the capital of {country}?",
)
# Chaining the LLM and the prompt
chain = LLMChain(llm=llm, prompt=prompt)
# Getting a prediction
print(chain.run("USA"))

# Output:
# Washington, D.C. is the capital of the United States of America.

Here, you can see the prediction of the model. Note that this is one of the simplest kinds of chains. Let’s move on to creating more complex chains.
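Before we do, note that conceptually an LLMChain does just two things: it formats the prompt from your input, then sends the result to the model. A minimal plain-Python sketch with a stub model (an illustration, not LangChain’s actual internals):

```python
template = "Where is the capital of {country}?"

def stub_llm(prompt: str) -> str:
    # Stands in for a real LLM call; a real chain would hit the API here.
    return f"(model answer to: {prompt})"

def run_chain(llm, template: str, value: str) -> str:
    prompt = template.format(country=value)  # step 1: build the prompt
    return llm(prompt)                       # step 2: call the model

print(run_chain(stub_llm, template, "USA"))
# (model answer to: Where is the capital of USA?)
```

Everything LangChain layers on top (memory, callbacks, multiple inputs) extends this same format-then-call loop.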

Combining Chains

Until now, we’ve learned how to create simple chains in LangChain. To build more complex pipelines, you can use agents. At this point, you may be wondering what an agent is.

An agent is essentially a chain that decides on an action, performs it, and observes the outcome. To do this, it uses an LLM, given a high-level directive and a set of tools such as Google Search, a math calculator, weather APIs, and so on. Choosing the right agent is crucial for making good predictions.
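The loop an agent runs (think, act, observe, repeat) can be sketched in plain Python with stub tools. Everything below is a simplified illustration, not LangChain’s actual agent code; in the real framework the LLM chooses each tool and input based on the previous observation.

```python
# Stub tools standing in for real Search / Calculator tools (illustration only).
tools = {
    "Search": lambda q: "64 years" if "age" in q else "Frank Darabont",
    # eval is fine here because the inputs are our own hardcoded expressions.
    "Calculator": lambda expr: str(round(eval(expr.replace("^", "**")), 4)),
}

def run_agent(steps):
    # Each step is (tool_name, tool_input); a real agent would let the LLM
    # pick these dynamically after reading each observation.
    observations = []
    for tool_name, tool_input in steps:
        observations.append(tools[tool_name](tool_input))
    return observations

print(run_agent([
    ("Search", "director of the walking dead series"),
    ("Search", "Frank Darabont age"),
    ("Calculator", "64^0.35"),
]))
# → ['Frank Darabont', '64 years', '4.2871']
```

The real agent’s verbose trace below follows exactly this shape: each Action/Observation pair is one pass through the loop.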

Working on an example is the best way to learn. So let’s practice.

For this example, we’re going to use the Google Search API (SerpAPI). You can get an API key for free here.

!pip install -q google-search-results
os.environ["SERPAPI_API_KEY"]="Your-API-Key"

Next, let’s initialize an agent.

# Importing agent utilities
from langchain.agents import load_tools, initialize_agent, AgentType

# Creating an LLM
llm = OpenAI(temperature=0)
# Loading the tools to use
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Initializing an agent
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Testing
agent.run("Who is the director of the walking dead series? What is his current age raised to the 0.35 power?")

# Output:
"""
I need to find out who the director is and then calculate his age raised to the 0.35 power
Action: Search
Action Input: "director of the walking dead series"
Observation: Darabont Ferenc, better known as Frank Darabont, is an Academy Award winning Hungarian-American director, screenwriter and producer.
Thought: I need to find out his age
Action: Search
Action Input: "Frank Darabont age"
Observation: 64 years
Thought: I need to calculate his age raised to the 0.35 power
Action: Calculator
Action Input: 64^0.35
Observation: Answer: 4.2870938501451725
Thought: I now know the final answer
Final Answer: Frank Darabont is 64 years old and his age raised to the 0.35 power is 4.2870938501451725.

> Finished chain.
'Frank Darabont is 64 years old and his age raised to the 0.35 power is 4.2870938501451725.'
"""

As you can see, our agent dynamically carried out a chain of steps based on our input. It found the director of The Walking Dead and then calculated his age raised to the 0.35 power.

Pretty cool, right?

Wrap-Up

LangChain is a powerful tool that helps you build applications on top of large language models. You can use this framework to implement projects such as autonomous agents, personal assistants, question answering, chatbots, code understanding, and much more. In this blog, we’ve seen the power of LangChain and covered how to use it with a few simple examples.

You can find the link to the notebook I used in this blog here.

That’s it. Thanks for reading. Let’s connect YouTube | Medium | Twitter | LinkedIn.
