beginner

Your First API Call: Hands-On Setup

Stop using chat interfaces - learn to call AI APIs directly and build custom applications

35 min read · API · Hands-on · JavaScript · Python

In this hands-on tutorial, you'll break free from chat interfaces and learn to call AI APIs directly. This is the critical skill that separates casual users from builders who can create custom AI applications.

Understanding API Architecture

REST API Definition: Representational State Transfer (REST) is an architectural style for designing networked applications. REST APIs use standard HTTP methods (GET, POST, PUT, DELETE) to perform operations, making them simple, scalable, and widely compatible across different programming languages and platforms.
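For an AI API, "standard HTTP methods" is concrete: every SDK call in this tutorial boils down to a POST request with a JSON body. A minimal sketch of the raw ingredients (the endpoint and header names follow OpenAI's public chat completions API; the key is a placeholder and nothing is sent over the network here):

```python
import json

# The same kind of call the SDKs make under the hood, as raw HTTP pieces
method = "POST"
url = "https://api.openai.com/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # placeholder; never hardcode a real key
    "Content-Type": "application/json",
}
body = json.dumps({
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}],
})

print(method, url)
```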

Chat Interface vs Direct API Access

When you use ChatGPT or Claude through their web interfaces, you're using a simplified version of what's actually available. Here's what happens under the hood:

Chat Interface (What you see):

  • Click send button
  • Wait for response
  • Limited customization
  • No integration capabilities

Direct API Access (What developers use):

  • Full control over parameters (temperature, max tokens, system prompts)
  • Integration into your own applications
  • Batch processing capabilities
  • Custom workflows and automation
  • Cost optimization

Think of it this way: using a chat interface is like ordering from a fixed menu, while using the API is like having access to the entire kitchen. You can create custom dishes (applications) tailored to your exact needs.

How API Calls Work

Your Application → HTTP Request → AI Provider's API → AI Model Processing → HTTP Response → Your Application

The communication happens through REST APIs using HTTP requests:

  1. Authentication: Your API key proves you're authorized
  2. Request: You send a JSON payload with your prompt and parameters
  3. Processing: The AI model generates a response
  4. Response: You receive JSON data with the AI's output
  5. Handling: Your code processes and displays the result
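Steps 2 and 4 above are just JSON in and JSON out. The sketch below builds a request payload and parses a hand-written sample response (the field names follow OpenAI's chat completions format; the response content is invented for illustration, not real model output):

```python
import json

# Step 2: the JSON payload your application sends
request_payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Explain what an API is in one sentence."}
    ],
    "temperature": 0.7,
    "max_tokens": 100,
}

# Step 4: a hand-written sample of the JSON the API returns
response_json = json.loads('''{
  "choices": [
    {"message": {"role": "assistant",
                 "content": "An API lets two programs talk to each other."},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 12, "completion_tokens": 10, "total_tokens": 22}
}''')

# Step 5: your code pulls out what it needs
answer = response_json["choices"][0]["message"]["content"]
total_tokens = response_json["usage"]["total_tokens"]
print(answer)
print(total_tokens)
```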

Environment Setup

We'll set up both JavaScript (Node.js) and Python environments. Choose one to start, but I recommend trying both to understand the differences.

JavaScript Setup (Node.js)

Prerequisites:

  • Node.js 16+ installed (nodejs.org)
  • A code editor (VS Code recommended)
  • Terminal/Command Prompt access

Step 1: Create Project Directory

bash
mkdir ai-api-tutorial
cd ai-api-tutorial
npm init -y

Step 2: Install Required Packages

bash
npm install openai dotenv

Step 3: Project Structure

ai-api-tutorial/
├── .env                 # API keys (never commit this!)
├── .gitignore          # Prevent committing secrets
├── package.json        # Project dependencies
├── first-call.js       # Your first API call
└── chatbot.js          # Simple chatbot with history

Python Setup

Prerequisites:

  • Python 3.8+ installed (python.org)
  • pip (comes with Python)
  • A code editor

Step 1: Create Project Directory

bash
mkdir ai-api-tutorial
cd ai-api-tutorial
python -m venv venv

Step 2: Activate Virtual Environment

bash
# Windows
venv\Scripts\activate

# macOS/Linux
source venv/bin/activate

Step 3: Install Required Packages

bash
pip install openai python-dotenv

Step 4: Project Structure

ai-api-tutorial/
├── .env                 # API keys (never commit this!)
├── .gitignore          # Prevent committing secrets
├── requirements.txt    # Python dependencies
├── first_call.py       # Your first API call
└── chatbot.py          # Simple chatbot with history

API Keys and Environment Variables

API Key Definition: A unique identifier that authenticates your application when making requests to an API. Think of it as a password that proves you're authorized to use the service. API keys are tied to your account for billing and usage tracking.

Getting Your API Key

For OpenAI:

  1. Go to platform.openai.com
  2. Sign up or log in
  3. Navigate to API Keys section
  4. Click "Create new secret key"
  5. Copy the key immediately (you won't see it again!)

For Anthropic (Claude):

  1. Go to console.anthropic.com
  2. Sign up or log in
  3. Navigate to API Keys
  4. Generate a new key
  5. Copy and save it securely

Critical Security Rule: NEVER hardcode API keys in your code. NEVER commit them to Git. Always use environment variables.

Environment Variables Definition: Configuration values stored outside your code, typically in a .env file, that can change between different environments (development, production). They keep sensitive information like API keys separate from your codebase, preventing accidental exposure in version control.

Setting Up .env File

Create a .env file in your project root:

bash
# .env file
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Creating .gitignore

Create a .gitignore file to prevent committing secrets:

# .gitignore
.env
node_modules/
venv/
__pycache__/
*.pyc
.DS_Store

If you accidentally commit an API key to Git, it's compromised. You must immediately revoke it from the provider's dashboard and generate a new one.

Your First API Call

Let's make your first actual API call to OpenAI's GPT model.

JavaScript Version

Create first-call.js:

javascript
// first-call.js
import OpenAI from 'openai';
import dotenv from 'dotenv';

// Load environment variables from .env file
dotenv.config();

// Initialize OpenAI client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

async function makeFirstCall() {
  try {
    console.log('Sending request to OpenAI...\n');

    // Make the API call
    const response = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',  // The AI model to use
      messages: [
        {
          role: 'system',
          content: 'You are a helpful assistant that explains concepts clearly.'
        },
        {
          role: 'user',
          content: 'Explain what an API is in one sentence.'
        }
      ],
      temperature: 0.7,        // Creativity level (0-2)
      max_tokens: 100,         // Maximum response length
    });

    // Extract and display the response
    const aiMessage = response.choices[0].message.content;
    console.log('AI Response:', aiMessage);

    // Display usage statistics
    console.log('\n--- Usage Stats ---');
    console.log('Prompt tokens:', response.usage.prompt_tokens);
    console.log('Completion tokens:', response.usage.completion_tokens);
    console.log('Total tokens:', response.usage.total_tokens);

    // Estimate cost (GPT-3.5-turbo pricing: ~$0.0015/1K input, $0.002/1K output)
    const estimatedCost = (
      (response.usage.prompt_tokens / 1000 * 0.0015) +
      (response.usage.completion_tokens / 1000 * 0.002)
    ).toFixed(6);
    console.log('Estimated cost: $' + estimatedCost);

  } catch (error) {
    console.error('Error making API call:', error.message);

    if (error.status === 401) {
      console.error('Authentication failed. Check your API key.');
    } else if (error.status === 429) {
      console.error('Rate limit exceeded. Wait a moment and try again.');
    } else if (error.status === 500) {
      console.error('OpenAI server error. Try again later.');
    }
  }
}

// Run the function
makeFirstCall();

To run it:

bash
# Add "type": "module" to package.json first
node first-call.js

Python Version

Create first_call.py:

python
# first_call.py
import os
from openai import OpenAI
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Initialize OpenAI client
client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

def make_first_call():
    try:
        print('Sending request to OpenAI...\n')

        # Make the API call
        response = client.chat.completions.create(
            model='gpt-3.5-turbo',  # The AI model to use
            messages=[
                {
                    'role': 'system',
                    'content': 'You are a helpful assistant that explains concepts clearly.'
                },
                {
                    'role': 'user',
                    'content': 'Explain what an API is in one sentence.'
                }
            ],
            temperature=0.7,        # Creativity level (0-2)
            max_tokens=100,         # Maximum response length
        )

        # Extract and display the response
        ai_message = response.choices[0].message.content
        print(f'AI Response: {ai_message}')

        # Display usage statistics
        print('\n--- Usage Stats ---')
        print(f'Prompt tokens: {response.usage.prompt_tokens}')
        print(f'Completion tokens: {response.usage.completion_tokens}')
        print(f'Total tokens: {response.usage.total_tokens}')

        # Estimate cost (GPT-3.5-turbo pricing: ~$0.0015/1K input, $0.002/1K output)
        estimated_cost = (
            (response.usage.prompt_tokens / 1000 * 0.0015) +
            (response.usage.completion_tokens / 1000 * 0.002)
        )
        print(f'Estimated cost: ${estimated_cost:.6f}')

    except Exception as error:
        print(f'Error making API call: {str(error)}')

        # Handle common errors
        if hasattr(error, 'status_code'):
            if error.status_code == 401:
                print('Authentication failed. Check your API key.')
            elif error.status_code == 429:
                print('Rate limit exceeded. Wait a moment and try again.')
            elif error.status_code == 500:
                print('OpenAI server error. Try again later.')

if __name__ == '__main__':
    make_first_call()

To run it:

bash
python first_call.py

Understanding the Code

Let's break down the key components:

1. Messages Array

javascript
messages: [
  { role: 'system', content: 'You are a helpful assistant...' },
  { role: 'user', content: 'Explain what an API is...' }
]

  • system: Sets the AI's behavior and personality
  • user: Your actual question or prompt
  • assistant: The AI's responses (used in conversation history)

2. Temperature Parameter

javascript
temperature: 0.7

Controls randomness/creativity:

  • 0.0: Deterministic, focused, consistent
  • 0.7: Balanced (recommended for most tasks)
  • 1.0-2.0: Very creative, unpredictable

3. Max Tokens

javascript
max_tokens: 100

Limits response length. 1 token ≈ 4 characters or ≈ 0.75 words.

Start with lower max_tokens during development to save costs. You can always increase it later.
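The 4-characters-per-token rule of thumb is enough for rough budgeting during development. A tiny estimator based on that heuristic (for exact counts you'd use a real tokenizer such as tiktoken; estimate_tokens is a name invented here):

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token in English."""
    return max(1, len(text) // 4)

# Quick budget check before sending a prompt
prompt = "Explain what an API is in one sentence."
print(estimate_tokens(prompt))
```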

Building a Simple Chatbot

Now let's build a chatbot that maintains conversation history - this is how you create interactive applications.

JavaScript Chatbot with History

Create chatbot.js:

javascript
// chatbot.js
import OpenAI from 'openai';
import dotenv from 'dotenv';
import readline from 'readline';

dotenv.config();

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

// Store conversation history
const conversationHistory = [
  {
    role: 'system',
    content: 'You are a friendly and helpful AI assistant. Keep responses concise but informative.'
  }
];

// Set up readline interface for user input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

async function chat(userMessage) {
  // Add user message to history
  conversationHistory.push({
    role: 'user',
    content: userMessage
  });

  try {
    // Make API call with full conversation history
    const response = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: conversationHistory,
      temperature: 0.8,
      max_tokens: 500,
    });

    // Get AI response
    const aiMessage = response.choices[0].message.content;

    // Add AI response to history
    conversationHistory.push({
      role: 'assistant',
      content: aiMessage
    });

    // Display response
    console.log('\nAI:', aiMessage);

    // Show token usage
    const totalTokens = response.usage.total_tokens;
    console.log(`\n[Tokens used: ${totalTokens}]`);

    return aiMessage;

  } catch (error) {
    console.error('\nError:', error.message);

    // Remove the failed user message from history
    conversationHistory.pop();

    return null;
  }
}

function promptUser() {
  rl.question('\nYou: ', async (input) => {
    const userMessage = input.trim();

    // Exit commands
    if (userMessage.toLowerCase() === 'exit' ||
        userMessage.toLowerCase() === 'quit') {
      console.log('\nGoodbye!');
      rl.close();
      return;
    }

    // Show conversation history command
    if (userMessage.toLowerCase() === 'history') {
      console.log('\n--- Conversation History ---');
      console.log(JSON.stringify(conversationHistory, null, 2));
      promptUser();
      return;
    }

    // Clear history command
    if (userMessage.toLowerCase() === 'clear') {
      conversationHistory.length = 1; // Keep only system message
      console.log('\nConversation history cleared.');
      promptUser();
      return;
    }

    // Empty message check
    if (!userMessage) {
      promptUser();
      return;
    }

    // Process the message
    await chat(userMessage);

    // Continue conversation
    promptUser();
  });
}

// Start the chatbot
console.log('=== AI Chatbot ===');
console.log('Commands: "exit" to quit, "history" to view, "clear" to reset\n');
promptUser();

To run it:

bash
node chatbot.js

Python Chatbot with History

Create chatbot.py:

python
# chatbot.py
import os
from openai import OpenAI
from dotenv import load_dotenv
import json

load_dotenv()

client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

# Store conversation history
conversation_history = [
    {
        'role': 'system',
        'content': 'You are a friendly and helpful AI assistant. Keep responses concise but informative.'
    }
]

def chat(user_message):
    """Send a message and get a response, maintaining conversation history."""

    # Add user message to history
    conversation_history.append({
        'role': 'user',
        'content': user_message
    })

    try:
        # Make API call with full conversation history
        response = client.chat.completions.create(
            model='gpt-3.5-turbo',
            messages=conversation_history,
            temperature=0.8,
            max_tokens=500,
        )

        # Get AI response
        ai_message = response.choices[0].message.content

        # Add AI response to history
        conversation_history.append({
            'role': 'assistant',
            'content': ai_message
        })

        # Display response
        print(f'\nAI: {ai_message}')

        # Show token usage
        total_tokens = response.usage.total_tokens
        print(f'\n[Tokens used: {total_tokens}]')

        return ai_message

    except Exception as error:
        print(f'\nError: {str(error)}')

        # Remove the failed user message from history
        conversation_history.pop()

        return None

def main():
    """Main chatbot loop."""
    print('=== AI Chatbot ===')
    print('Commands: "exit" to quit, "history" to view, "clear" to reset\n')

    while True:
        # Get user input
        user_message = input('\nYou: ').strip()

        # Exit commands
        if user_message.lower() in ['exit', 'quit']:
            print('\nGoodbye!')
            break

        # Show conversation history command
        if user_message.lower() == 'history':
            print('\n--- Conversation History ---')
            print(json.dumps(conversation_history, indent=2))
            continue

        # Clear history command
        if user_message.lower() == 'clear':
            # Keep only system message
            conversation_history[:] = conversation_history[:1]
            print('\nConversation history cleared.')
            continue

        # Empty message check
        if not user_message:
            continue

        # Process the message
        chat(user_message)

if __name__ == '__main__':
    main()

To run it:

bash
python chatbot.py

Token Definition: The basic unit of text that AI models process. One token is roughly 4 characters or 0.75 words in English. Tokens are how API usage and costs are measured—you pay for both input tokens (your prompt) and output tokens (the AI's response).

How Conversation History Works

The key to building a chatbot is maintaining the conversation history:

javascript
conversationHistory = [
  { role: 'system', content: 'You are helpful...' },
  { role: 'user', content: 'What is Python?' },
  { role: 'assistant', content: 'Python is a programming language...' },
  { role: 'user', content: 'What can I build with it?' },  // References "it" = Python
  { role: 'assistant', content: 'You can build web apps, AI...' }
]

Each API call includes the ENTIRE history, allowing the AI to:

  • Remember context
  • Answer follow-up questions
  • Maintain coherent conversations

Token Cost Warning: Conversation history grows with each exchange. Longer conversations consume more tokens and cost more. Consider implementing history limits for production applications.
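One simple way to implement such a limit: keep the system message plus only the most recent messages before each call. A sketch (trim_history is a name invented here; it assumes the system message sits at index 0, as in the chatbots above):

```python
def trim_history(history, max_messages=8):
    """Return the system message plus the last `max_messages` other messages."""
    system = history[:1]                  # system prompt is assumed to be first
    recent = history[1:][-max_messages:]  # keep only the newest exchanges
    return system + recent

# Simulate a long conversation of 5 question/answer exchanges
history = [{'role': 'system', 'content': 'You are helpful.'}]
for i in range(5):
    history.append({'role': 'user', 'content': f'question {i}'})
    history.append({'role': 'assistant', 'content': f'answer {i}'})

trimmed = trim_history(history, max_messages=2)
print(len(trimmed))  # 3: the system message plus the last user/assistant pair
```

The trade-off is that the model forgets anything outside the retained window, so pick a limit that fits your application's need for context.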

Exponential Backoff Definition: A retry strategy where the wait time between attempts increases exponentially (1s, 2s, 4s, 8s, etc.). This prevents overwhelming a rate-limited server while giving the system time to recover, making it the standard approach for handling temporary API failures.
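The 1s, 2s, 4s, 8s schedule can be computed directly. A small helper matching that definition (the cap is an assumption; production code often also adds random jitter so many clients don't retry in lockstep):

```python
def backoff_delay(attempt, base=1.0, cap=30.0):
    """Seconds to wait before retry `attempt` (1-indexed), doubling each time."""
    return min(cap, base * 2 ** (attempt - 1))

print([backoff_delay(n) for n in range(1, 6)])  # [1.0, 2.0, 4.0, 8.0, 16.0]
```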

Error Handling Best Practices

Robust error handling is crucial for production applications. Here's a comprehensive error handler:

JavaScript Error Handler

javascript
// error-handler.js
async function robustAPICall(openai, messages, options = {}) {
  const maxRetries = 3;
  let attempt = 0;

  while (attempt < maxRetries) {
    try {
      const response = await openai.chat.completions.create({
        model: options.model || 'gpt-3.5-turbo',
        messages: messages,
        temperature: options.temperature || 0.7,
        max_tokens: options.maxTokens || 500,
      });

      return response;

    } catch (error) {
      attempt++;

      // Authentication errors - don't retry
      if (error.status === 401) {
        throw new Error('Invalid API key. Please check your OPENAI_API_KEY environment variable.');
      }

      // Rate limiting - retry with exponential backoff
      if (error.status === 429) {
        if (attempt < maxRetries) {
          const waitTime = Math.pow(2, attempt) * 1000; // Exponential backoff
          console.log(`Rate limited. Waiting ${waitTime/1000}s before retry ${attempt}/${maxRetries}...`);
          await new Promise(resolve => setTimeout(resolve, waitTime));
          continue;
        }
        throw new Error('Rate limit exceeded. Please try again later.');
      }

      // Server errors - retry
      if (error.status >= 500) {
        if (attempt < maxRetries) {
          console.log(`Server error. Retrying ${attempt}/${maxRetries}...`);
          await new Promise(resolve => setTimeout(resolve, 1000 * attempt));
          continue;
        }
        throw new Error('OpenAI server error. Please try again later.');
      }

      // Invalid request - don't retry
      if (error.status === 400) {
        throw new Error(`Invalid request: ${error.message}`);
      }

      // Other errors
      throw error;
    }
  }
}

export default robustAPICall;

Python Error Handler

python
# error_handler.py
import time
from openai import OpenAI

def robust_api_call(client, messages, options=None):
    """Make an API call with retry logic and error handling."""

    if options is None:
        options = {}

    max_retries = 3
    attempt = 0

    while attempt < max_retries:
        try:
            response = client.chat.completions.create(
                model=options.get('model', 'gpt-3.5-turbo'),
                messages=messages,
                temperature=options.get('temperature', 0.7),
                max_tokens=options.get('max_tokens', 500),
            )

            return response

        except Exception as error:
            attempt += 1

            # Check for specific error types
            status_code = getattr(error, 'status_code', None)

            # Authentication errors - don't retry
            if status_code == 401:
                raise Exception('Invalid API key. Please check your OPENAI_API_KEY environment variable.')

            # Rate limiting - retry with exponential backoff
            if status_code == 429:
                if attempt < max_retries:
                    wait_time = (2 ** attempt)  # Exponential backoff
                    print(f'Rate limited. Waiting {wait_time}s before retry {attempt}/{max_retries}...')
                    time.sleep(wait_time)
                    continue
                raise Exception('Rate limit exceeded. Please try again later.')

            # Server errors - retry
            if status_code and status_code >= 500:
                if attempt < max_retries:
                    print(f'Server error. Retrying {attempt}/{max_retries}...')
                    time.sleep(attempt)
                    continue
                raise Exception('OpenAI server error. Please try again later.')

            # Invalid request - don't retry
            if status_code == 400:
                raise Exception(f'Invalid request: {str(error)}')

            # Other errors
            raise error

Cost Tracking and Optimization

Understanding and tracking costs is essential for building sustainable AI applications.

Token Usage Tracker

javascript
// cost-tracker.js
class CostTracker {
  constructor() {
    this.totalPromptTokens = 0;
    this.totalCompletionTokens = 0;
    this.apiCalls = 0;

    // Pricing per 1K tokens (as of 2024)
    this.pricing = {
      'gpt-3.5-turbo': { input: 0.0015, output: 0.002 },
      'gpt-4': { input: 0.03, output: 0.06 },
      'gpt-4-turbo': { input: 0.01, output: 0.03 },
    };
  }

  trackUsage(response, model = 'gpt-3.5-turbo') {
    this.totalPromptTokens += response.usage.prompt_tokens;
    this.totalCompletionTokens += response.usage.completion_tokens;
    this.apiCalls++;

    const callCost = this.calculateCost(
      response.usage.prompt_tokens,
      response.usage.completion_tokens,
      model
    );

    return callCost;
  }

  calculateCost(promptTokens, completionTokens, model) {
    const pricing = this.pricing[model] || this.pricing['gpt-3.5-turbo'];

    const cost = (
      (promptTokens / 1000 * pricing.input) +
      (completionTokens / 1000 * pricing.output)
    );

    return cost;
  }

  getTotalCost(model = 'gpt-3.5-turbo') {
    return this.calculateCost(
      this.totalPromptTokens,
      this.totalCompletionTokens,
      model
    );
  }

  getReport() {
    return {
      apiCalls: this.apiCalls,
      promptTokens: this.totalPromptTokens,
      completionTokens: this.totalCompletionTokens,
      totalTokens: this.totalPromptTokens + this.totalCompletionTokens,
      estimatedCost: this.getTotalCost()
    };
  }

  reset() {
    this.totalPromptTokens = 0;
    this.totalCompletionTokens = 0;
    this.apiCalls = 0;
  }
}

export default CostTracker;

Usage Example

javascript
import CostTracker from './cost-tracker.js';

const tracker = new CostTracker();

// After each API call
const response = await openai.chat.completions.create({...});
const callCost = tracker.trackUsage(response, 'gpt-3.5-turbo');
console.log(`This call cost: $${callCost.toFixed(6)}`);

// Get session report
const report = tracker.getReport();
console.log('Session Report:', report);
console.log(`Total cost: $${report.estimatedCost.toFixed(4)}`);

Cost Optimization Tips

Cost Optimization Strategies:

  1. Use GPT-3.5-turbo for simple tasks - It's 10-20x cheaper than GPT-4
  2. Limit max_tokens - Set appropriate limits for your use case
  3. Implement caching - Cache responses for identical queries
  4. Truncate conversation history - Keep only recent messages
  5. Use lower temperatures - Reduces retry needs for consistent outputs
  6. Batch similar requests - Process multiple items in one call
  7. Set up billing alerts - Monitor spending in the OpenAI dashboard
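Tip 3 (caching) is often the cheapest win. A minimal in-memory sketch: key the cache on model plus prompt so an identical query never pays twice (call_model is a stand-in invented here; it just counts invocations instead of hitting the API):

```python
cache = {}
call_count = 0

def call_model(model, prompt):
    """Stand-in for a real API call; counts how often it is invoked."""
    global call_count
    call_count += 1
    return f"response to: {prompt}"

def cached_completion(model, prompt):
    key = (model, prompt)  # identical model + prompt -> reuse the old answer
    if key not in cache:
        cache[key] = call_model(model, prompt)
    return cache[key]

first = cached_completion('gpt-3.5-turbo', 'What is an API?')
second = cached_completion('gpt-3.5-turbo', 'What is an API?')  # served from cache
print(call_count)  # 1: the second call never reached the model
```

Caching only pays off when identical answers are acceptable, e.g. with low temperature; at high temperature each call would otherwise produce a different response.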

Troubleshooting Guide

Common Issues and Solutions

1. "Invalid API Key" Error

Error: Invalid API key

Solutions:

  • Verify the .env file exists and contains the correct key
  • Check for spaces or quotes around the key in .env
  • Ensure dotenv.config() is called before using the key
  • Verify the key hasn't been revoked in the OpenAI dashboard

2. "Module not found" Error

Error: Cannot find module 'openai'

Solutions:

bash
# JavaScript
npm install openai dotenv

# Python
pip install openai python-dotenv

3. "Rate Limit Exceeded" Error

Error: Rate limit exceeded

Cause: You're making too many requests too quickly

Solutions:

  • Wait a few seconds and try again
  • Implement exponential backoff (see error handler above)
  • Upgrade to a paid plan for higher limits

4. "Insufficient Quota" Error

Error: You exceeded your current quota

Solutions:

  • Add payment method to your OpenAI account
  • Check billing settings at platform.openai.com
  • Verify you have available credits

5. Response is Cut Off Mid-Sentence

Cause: Hit max_tokens limit

Solutions:

javascript
// Increase max_tokens
max_tokens: 1000  // Instead of 100

// Or check finish_reason
if (response.choices[0].finish_reason === 'length') {
  console.log('Response was truncated. Increase max_tokens.');
}

6. Inconsistent or Random Responses

Cause: Temperature too high

Solutions:

javascript
// Lower temperature for more consistent results
temperature: 0.3  // Instead of 1.5

7. AI Doesn't Remember Context

Cause: Not including conversation history

Solutions:

javascript
// Include full conversation history
messages: conversationHistory  // Not just the latest message

8. Import/Export Errors in Node.js

SyntaxError: Cannot use import statement outside a module

Solutions: Add to package.json:

json
{
  "type": "module"
}

Or use CommonJS:

javascript
const OpenAI = require('openai');

Testing Your Setup

Here's a comprehensive test script to verify everything works:

JavaScript Test Script

javascript
// test-setup.js
import OpenAI from 'openai';
import dotenv from 'dotenv';

dotenv.config();

async function testSetup() {
  console.log('=== Testing API Setup ===\n');

  // Test 1: Environment variables
  console.log('Test 1: Checking environment variables...');
  if (!process.env.OPENAI_API_KEY) {
    console.error('❌ OPENAI_API_KEY not found in environment');
    return;
  }
  console.log('✓ API key found\n');

  // Test 2: API authentication
  console.log('Test 2: Testing API authentication...');
  const openai = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY
  });

  try {
    const response = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: 'Say "Hello"' }],
      max_tokens: 10
    });
    console.log('✓ Authentication successful\n');

    // Test 3: Response parsing
    console.log('Test 3: Testing response parsing...');
    const message = response.choices[0].message.content;
    console.log('Response:', message);
    console.log('✓ Response parsing successful\n');

    // Test 4: Token usage
    console.log('Test 4: Checking token usage...');
    console.log('Tokens used:', response.usage.total_tokens);
    console.log('✓ Token tracking working\n');

    console.log('=== All tests passed! ✓ ===');

  } catch (error) {
    console.error('❌ Test failed:', error.message);
  }
}

testSetup();

Python Test Script

python
# test_setup.py
import os
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

def test_setup():
    print('=== Testing API Setup ===\n')

    # Test 1: Environment variables
    print('Test 1: Checking environment variables...')
    if not os.getenv('OPENAI_API_KEY'):
        print('❌ OPENAI_API_KEY not found in environment')
        return
    print('✓ API key found\n')

    # Test 2: API authentication
    print('Test 2: Testing API authentication...')
    client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

    try:
        response = client.chat.completions.create(
            model='gpt-3.5-turbo',
            messages=[{'role': 'user', 'content': 'Say "Hello"'}],
            max_tokens=10
        )
        print('✓ Authentication successful\n')

        # Test 3: Response parsing
        print('Test 3: Testing response parsing...')
        message = response.choices[0].message.content
        print(f'Response: {message}')
        print('✓ Response parsing successful\n')

        # Test 4: Token usage
        print('Test 4: Checking token usage...')
        print(f'Tokens used: {response.usage.total_tokens}')
        print('✓ Token tracking working\n')

        print('=== All tests passed! ✓ ===')

    except Exception as error:
        print(f'❌ Test failed: {str(error)}')

if __name__ == '__main__':
    test_setup()

Next Steps

Congratulations! You've successfully made your first API calls and built a working chatbot. Here's what to explore next:

  1. Different AI Models: Try GPT-4, Claude, or open-source models
  2. Advanced Parameters: Experiment with streaming, function calling, and JSON mode
  3. Production Patterns: Learn about rate limiting, caching, and monitoring
  4. Build Real Applications: Create custom tools for your specific needs

The code you wrote today is the foundation for every AI application. Whether you build chatbots, content generators, data analyzers, or automation tools, it all starts with these API calls.

Quiz

Test your understanding of API basics and implementation.

Summary

In this tutorial, you learned:

  • API Architecture: How direct API access differs from chat interfaces
  • Environment Setup: Configuring Node.js and Python for AI development
  • Security: Managing API keys safely with environment variables
  • First API Call: Making authenticated requests to OpenAI
  • Conversation History: Building chatbots that maintain context
  • Error Handling: Robust strategies for production applications
  • Cost Tracking: Monitoring and optimizing token usage
  • Troubleshooting: Solving common API integration issues

You now have the fundamental skills to build custom AI applications. The chat interface was training wheels - you're ready to build real tools now.