# Introduction to LangChain

LangChain is a powerful framework designed to simplify building applications with Large Language Models (LLMs). In this lesson, we'll explore what makes LangChain essential for LLM development and how to get started.
## What is LangChain?
LangChain is an open-source framework that provides abstractions and tools for developing applications powered by language models. It addresses common challenges in LLM application development:
**LangChain Definition:** An open-source framework that standardizes and simplifies building applications with Large Language Models by providing reusable components, integrations, and orchestration tools.
**Key Benefits of LangChain:**
- Standardized interfaces for different LLM providers
- Composable components for building complex workflows
- Built-in memory management for conversational applications
- Integration with external data sources and tools
- Production-ready utilities for monitoring and debugging
## Why Use LangChain?
Without LangChain, building LLM applications requires:
- Writing custom code for each LLM provider
- Manually managing conversation history
- Implementing complex prompt templates from scratch
- Building custom integrations with external tools
LangChain provides these capabilities out of the box, letting you focus on your application logic.
## Installation and Setup

### Python Installation
```bash
# Install core LangChain
pip install langchain

# Install OpenAI integration
pip install langchain-openai

# Install additional dependencies
pip install python-dotenv
```
### JavaScript/TypeScript Installation
```bash
# Install core LangChain.js
npm install langchain

# Install OpenAI integration
npm install @langchain/openai

# Install additional dependencies
npm install dotenv
```
### Environment Setup

Create a `.env` file in your project root and add your API keys:

```bash
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
```

**Never commit API keys to version control.** Always use environment variables or a secure key management system in production.
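It also pays to fail fast when a key is missing, rather than getting an opaque authentication error deep inside a chain. A minimal sketch using only the standard library (after `load_dotenv()` runs, python-dotenv populates `os.environ` the same way; the placeholder key below is purely illustrative):

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Fail fast with a clear message if a required API key is not set."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return key

# Simulate a loaded .env by setting the variable directly
os.environ["OPENAI_API_KEY"] = "sk-test-placeholder"
print(require_api_key())  # → sk-test-placeholder
```

Calling this once at startup turns a missing `.env` entry into an immediate, readable error instead of a failed API call later on.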
## Core Concepts

### 1. Models

Models are the foundation of LangChain. They represent the LLMs you'll interact with.

**Python Example:**
```python
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()

# Initialize a chat model
llm = ChatOpenAI(
    model="gpt-4",
    temperature=0.7,
    max_tokens=500
)

# Simple invocation
response = llm.invoke("What is LangChain?")
print(response.content)
```
**JavaScript Example:**
```javascript
import { ChatOpenAI } from "@langchain/openai";
import dotenv from "dotenv";

dotenv.config();

// Initialize a chat model
const llm = new ChatOpenAI({
  modelName: "gpt-4",
  temperature: 0.7,
  maxTokens: 500
});

// Simple invocation
const response = await llm.invoke("What is LangChain?");
console.log(response.content);
```
### 2. Prompts

Prompts are templates that help you structure inputs to your models consistently.

**Prompt Template Definition:** A reusable pattern for structuring LLM inputs with variables that can be dynamically filled, ensuring consistent formatting and reducing repetitive prompt engineering.

**Python Example:**
```python
from langchain_core.prompts import ChatPromptTemplate

# Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant specializing in {domain}."),
    ("human", "{question}")
])

# Format the prompt
formatted = prompt.format_messages(
    domain="machine learning",
    question="Explain gradient descent in simple terms."
)
print(formatted)
```
**JavaScript Example:**
```javascript
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Create a prompt template
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful AI assistant specializing in {domain}."],
  ["human", "{question}"]
]);

// Format the prompt
const formatted = await prompt.formatMessages({
  domain: "machine learning",
  question: "Explain gradient descent in simple terms."
});
console.log(formatted);
```
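Conceptually, a chat prompt template is just a list of (role, template string) pairs whose placeholders get filled at format time. This pure-Python sketch illustrates the idea; it is not LangChain's actual implementation:

```python
def format_messages(template, **variables):
    """Fill each (role, text) pair's {placeholders} with the given variables."""
    return [(role, text.format(**variables)) for role, text in template]

template = [
    ("system", "You are a helpful AI assistant specializing in {domain}."),
    ("human", "{question}"),
]

messages = format_messages(
    template,
    domain="machine learning",
    question="Explain gradient descent in simple terms.",
)
print(messages[0])
# → ('system', 'You are a helpful AI assistant specializing in machine learning.')
```

LangChain's real templates add validation, partial formatting, and typed message objects on top of this basic substitution step.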
Prompt templates support multiple message types: `system`, `human`, `ai`, and `function`.

### 3. Chains
Chains combine models and prompts into reusable components. They represent a sequence of operations.

**Chain Definition:** A sequence of operations that combines prompts, models, and other components into a reusable pipeline, allowing complex workflows to be executed in a single call.

**Python Example:**
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Initialize components
llm = ChatOpenAI(model="gpt-3.5-turbo")
prompt = ChatPromptTemplate.from_template(
    "Write a short poem about {topic} in the style of {style}."
)
output_parser = StrOutputParser()

# Create a chain using LCEL (LangChain Expression Language)
chain = prompt | llm | output_parser

# Execute the chain
result = chain.invoke({
    "topic": "artificial intelligence",
    "style": "haiku"
})
print(result)
```
**JavaScript Example:**
```javascript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Initialize components
const llm = new ChatOpenAI({ modelName: "gpt-3.5-turbo" });
const prompt = ChatPromptTemplate.fromTemplate(
  "Write a short poem about {topic} in the style of {style}."
);
const outputParser = new StringOutputParser();

// Create a chain using LCEL
const chain = prompt.pipe(llm).pipe(outputParser);

// Execute the chain
const result = await chain.invoke({
  topic: "artificial intelligence",
  style: "haiku"
});
console.log(result);
```
**LCEL (LangChain Expression Language) Definition:** A declarative syntax for composing LangChain components using the pipe operator (`|`), providing better type safety, streaming support, and more readable chain construction.
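To see why the pipe syntax works, here is a toy re-implementation of the core idea: every component is a "runnable" with an `invoke` method, and `|` composes two runnables so the output of one becomes the input of the next. This is a conceptual sketch only; the stand-in prompt, model, and parser below are hypothetical, not LangChain's actual classes:

```python
class Runnable:
    """Minimal runnable: wraps a function and supports | composition."""
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # (a | b).invoke(x) is equivalent to b.invoke(a.invoke(x))
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Stand-ins for a prompt template, model, and output parser
prompt = Runnable(lambda d: f"Write a poem about {d['topic']}.")
fake_llm = Runnable(lambda p: {"content": p.upper()})  # pretend model call
parser = Runnable(lambda msg: msg["content"])

chain = prompt | fake_llm | parser
print(chain.invoke({"topic": "AI"}))
# → WRITE A POEM ABOUT AI.
```

Real LCEL runnables add batching, streaming, and async support on top of this same composition pattern.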
## First Complete LangChain Example

Let's build a simple translation application that demonstrates all the core concepts.

**Python Complete Example:**
```python
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Load environment variables
load_dotenv()

# Initialize the model
llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    temperature=0.3  # Lower temperature for more consistent translations
)

# Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a professional translator. Translate the following text from {source_lang} to {target_lang}. Preserve the tone and meaning."),
    ("human", "{text}")
])

# Create output parser
parser = StrOutputParser()

# Build the chain
translation_chain = prompt | llm | parser

# Function to translate text
def translate(text, source_lang="English", target_lang="Spanish"):
    return translation_chain.invoke({
        "text": text,
        "source_lang": source_lang,
        "target_lang": target_lang
    })

# Example usage
if __name__ == "__main__":
    original = "Good morning! How are you today?"
    translated = translate(original, "English", "French")
    print(f"Original: {original}")
    print(f"Translated: {translated}")

    # Translate into multiple languages
    languages = ["Spanish", "German", "Italian"]
    for lang in languages:
        result = translate(original, "English", lang)
        print(f"{lang}: {result}")
```
**JavaScript Complete Example:**
```javascript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import dotenv from "dotenv";

dotenv.config();

// Initialize the model
const llm = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0.3 // Lower temperature for more consistent translations
});

// Create a prompt template
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a professional translator. Translate the following text from {source_lang} to {target_lang}. Preserve the tone and meaning."],
  ["human", "{text}"]
]);

// Create output parser
const parser = new StringOutputParser();

// Build the chain
const translationChain = prompt.pipe(llm).pipe(parser);

// Function to translate text
async function translate(text, sourceLang = "English", targetLang = "Spanish") {
  return await translationChain.invoke({
    text: text,
    source_lang: sourceLang,
    target_lang: targetLang
  });
}

// Example usage
async function main() {
  const original = "Good morning! How are you today?";
  const translated = await translate(original, "English", "French");
  console.log(`Original: ${original}`);
  console.log(`Translated: ${translated}`);

  // Translate into multiple languages
  const languages = ["Spanish", "German", "Italian"];
  for (const lang of languages) {
    const result = await translate(original, "English", lang);
    console.log(`${lang}: ${result}`);
  }
}

main().catch(console.error);
```
## Key Takeaways

**What You've Learned:**
- LangChain provides standardized abstractions for building LLM applications
- Core components include Models, Prompts, and Chains
- LCEL (LangChain Expression Language) enables composable, readable chains
- The same patterns work across Python and JavaScript implementations
- Environment variables keep API keys secure
### Best Practices
- **Use LCEL for chains:** modern syntax with better type safety and streaming support
- **Keep prompts modular:** separate prompt templates from business logic
- **Handle errors gracefully:** LLM calls can fail; implement retry logic
- **Monitor token usage:** track costs and optimize prompt length
- **Version your prompts:** track changes to prompts like you track code
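For the error-handling point above, a common pattern is retry with exponential backoff around the model call. Here is a minimal standard-library sketch (LangChain runnables also ship a built-in `with_retry()` helper); `flaky_llm_call` is a hypothetical stand-in for a real API call that fails twice before succeeding:

```python
import time

def invoke_with_retry(call, max_attempts=3, base_delay=0.1):
    """Retry a callable with exponential backoff on any exception."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts; surface the original error
            time.sleep(base_delay * 2 ** (attempt - 1))

# Hypothetical stand-in for a flaky LLM call: fails twice, then succeeds
attempts = {"count": 0}
def flaky_llm_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient API error")
    return "translated text"

print(invoke_with_retry(flaky_llm_call))  # → translated text
```

In production you would typically narrow the `except` clause to transient errors (timeouts, rate limits) so that genuine bugs still fail immediately.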
## Next Steps
In the next lesson, we'll explore:
- Advanced chain types (SequentialChain, RouterChain)
- Building agents that can use tools
- The ReAct pattern for reasoning and acting
- Integrating external tools and APIs
## Quiz
Test your understanding of LangChain basics: