28 Python AI Integration
28.1 Chapter Outline
- Understanding the AI integration landscape for Python
- Working with AI APIs and services
- Using popular Python libraries for AI integration
- Building text and image processing capabilities
- Implementing conversational AI in Python applications
- Handling API authentication and usage limits
- Testing and troubleshooting AI integrations
- Enhancing your chatbot with external AI capabilities
28.2 Learning Objectives
By the end of this chapter, you will be able to:
- Understand the ecosystem of AI services available for Python integration
- Connect to AI APIs using Python’s requests library
- Implement basic text analysis using natural language processing (NLP) services
- Use Python to access image recognition and generation capabilities
- Add conversational AI features to Python applications
- Handle authentication and API key management securely
- Test and troubleshoot common AI integration issues
- Enhance your chatbot with external AI capabilities
28.3 1. Introduction: Connecting Python to AI Capabilities
While Python itself is a powerful programming language, it becomes even more powerful when connected to external AI services. Modern AI capabilities like natural language understanding, image recognition, and text generation are now accessible through simple API calls, allowing Python developers to create increasingly intelligent applications without needing to train their own AI models.
This chapter explores how to integrate these AI capabilities into your Python applications. We’ll focus on practical integration patterns rather than the theory behind AI models, giving you the tools to enhance your projects with AI features regardless of your background in machine learning.
The skills you’ll learn in this chapter represent a significant trend in modern programming: the ability to compose applications by connecting to specialized AI services rather than building everything from scratch. This approach lets you leverage cutting-edge AI research in your projects without requiring expertise in data science or machine learning.
AI Tip: When integrating AI services into your Python applications, always start with a minimal proof-of-concept to test the API connection and response format before building more complex features. This helps identify integration issues early and ensures you understand the service’s capabilities and limitations.
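As a minimal sketch of such a proof-of-concept (the endpoint, path, and environment variable here are placeholders, not a real service; check your provider’s documentation for the actual URL and auth scheme):
import os
import requests

# Hypothetical endpoint: swap in the real base URL and auth header
# format from your provider's documentation.
resp = requests.get(
    "https://api.example-ai-service.com/v1/models",  # placeholder URL
    headers={"Authorization": f"Bearer {os.environ['EXAMPLE_API_KEY']}"},
    timeout=10,
)
print(resp.status_code)  # confirm authentication works
print(resp.json())       # inspect the response shape before writing parsers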
28.4 2. The AI Integration Landscape
The AI services landscape offers a variety of options for Python developers, ranging from general-purpose AI platforms to specialized services for specific tasks.
28.4.1 Types of AI Services
- Large Language Models (LLMs)
  - Services like OpenAI’s GPT models, Anthropic’s Claude, or Google’s Gemini
  - Capabilities include text generation, summarization, translation, and question answering
  - Access through REST APIs with API key authentication
- Vision AI Services
  - Image classification, object detection, OCR (Optical Character Recognition)
  - Available through Google Cloud Vision, Azure Computer Vision, AWS Rekognition
  - Typically requires sending image data and receiving structured analysis
- Speech and Audio Services
  - Speech-to-text, text-to-speech, audio analysis
  - Google Cloud Speech, Amazon Transcribe, Azure Speech Services
  - Often involves streaming audio or uploading audio files
- Specialized NLP Services
  - Sentiment analysis, named entity recognition, language detection
  - Available through various providers or as standalone APIs
  - Usually takes text input and returns structured analysis
- Multi-modal Services
  - Combining text, image, audio in a single API
  - Examples include OpenAI’s GPT-4 Vision, Google’s Gemini, Anthropic’s Claude Opus
  - Allows for more complex interactions like “describe this image” or “create an image based on this text”
28.4.2 Integration Models
When integrating AI services, you’ll generally use one of these patterns:
- REST API Integration
  - Making HTTP requests to AI service endpoints
  - Sending data in JSON format and processing JSON responses
  - Managing authentication through API keys or OAuth tokens
- SDK-based Integration
  - Using official Python libraries provided by AI service vendors
  - Typically wraps the REST API with Python-friendly interfaces
  - Often handles authentication and request formatting
- Library-based Integration
  - Using third-party Python libraries that abstract away the API details
  - May provide higher-level functionality or combine multiple services
  - Examples include LangChain, LlamaIndex, and similar frameworks
- Self-hosted Models
  - Running open-source AI models directly in your application
  - Libraries like Hugging Face’s Transformers make this possible (see the sketch after this list)
  - Requires more computational resources but reduces API costs and latency
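As an illustration of the last pattern, here is a minimal self-hosted sketch using Hugging Face’s transformers library (the first call downloads a default model, so it needs disk space and a one-time network connection):
from transformers import pipeline  # pip install transformers

# Build a sentiment-analysis pipeline backed by a locally run model
classifier = pipeline("sentiment-analysis")
print(classifier("Python makes AI integration straightforward."))
# e.g. [{'label': 'POSITIVE', 'score': 0.999...}]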
28.5 3. Getting Started with AI API Integration
Let’s start with the most common integration pattern: connecting to an AI service via REST API.
28.5.1 Basic API Integration Pattern
Most AI API integrations follow a similar pattern:
- Sign up for the service and obtain API credentials
- Install the necessary Python packages (requests at minimum)
- Prepare your data for the API request
- Make the API call
- Process and use the response
28.5.2 Example: Text Generation with OpenAI
Here’s a basic example of integrating with OpenAI’s API:
import os
import requests
import json
def generate_text(prompt, api_key=None):
    """
    Generate text using OpenAI's API.

    Args:
        prompt (str): The text prompt to send to the API
        api_key (str, optional): OpenAI API key. If None, uses environment variable.

    Returns:
        str: The generated text response, or None if the request failed
    """
    # Get API key from environment variable if not provided
    if api_key is None:
        api_key = os.environ.get("OPENAI_API_KEY")
        if api_key is None:
            raise ValueError("API key must be provided or set as OPENAI_API_KEY environment variable")

    # API endpoint
    url = "https://api.openai.com/v1/chat/completions"

    # Prepare the request payload
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        "max_tokens": 150
    }

    # Prepare headers with authentication
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }

    # Make the API request
    response = None  # defined up front so the except block can inspect it safely
    try:
        response = requests.post(url, headers=headers, data=json.dumps(payload))
        response.raise_for_status()  # Raise exception for HTTP errors

        # Parse the response and extract the generated text
        response_data = response.json()
        generated_text = response_data["choices"][0]["message"]["content"]
        return generated_text.strip()
    except requests.exceptions.RequestException as e:
        print(f"Error calling OpenAI API: {e}")
        if response is not None and response.text:
            print(f"Response: {response.text}")
        return None
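A quick smoke test of the function, assuming OPENAI_API_KEY is set in your environment:
# The function returns None on failure, so guard before printing
reply = generate_text("Explain Python decorators in one sentence.")
if reply:
    print(reply)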
28.5.3 Example: Image Analysis with Azure Computer Vision
Here’s how you might integrate with Microsoft Azure’s Computer Vision service:
import os
import requests
import json
def analyze_image(image_url, api_key=None, endpoint=None):
    """
    Analyze an image using Azure Computer Vision API.

    Args:
        image_url (str): URL of the image to analyze
        api_key (str, optional): Azure API key. If None, uses environment variable.
        endpoint (str, optional): Azure endpoint. If None, uses environment variable.

    Returns:
        dict: The analysis results, or None if the request failed
    """
    # Get credentials from environment variables if not provided
    if api_key is None:
        api_key = os.environ.get("AZURE_VISION_API_KEY")
        if api_key is None:
            raise ValueError("API key must be provided or set as AZURE_VISION_API_KEY environment variable")
    if endpoint is None:
        endpoint = os.environ.get("AZURE_VISION_ENDPOINT")
        if endpoint is None:
            raise ValueError("Endpoint must be provided or set as AZURE_VISION_ENDPOINT environment variable")

    # API URL
    vision_url = f"{endpoint}/vision/v3.2/analyze"

    # Parameters
    params = {
        "visualFeatures": "Categories,Description,Objects",
        "language": "en"
    }

    # Headers
    headers = {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": api_key
    }

    # Request body
    body = {
        "url": image_url
    }

    # Make the request
    response = None  # defined up front so the except block can inspect it safely
    try:
        response = requests.post(vision_url, headers=headers, params=params, json=body)
        response.raise_for_status()

        # Return the analysis results
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error calling Azure Vision API: {e}")
        if response is not None and response.text:
            print(f"Response: {response.text}")
        return None
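A minimal usage sketch (the image URL is a placeholder; the description and captions fields appear in the analysis response because the Description feature is requested above):
analysis = analyze_image("https://example.com/sample-photo.jpg")  # placeholder URL
if analysis:
    captions = analysis.get("description", {}).get("captions", [])
    if captions:
        print(captions[0]["text"])  # a natural-language caption for the image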
28.6 7. Secure Authentication and API Key Management
When integrating with AI services, proper handling of API keys and authentication is crucial.
28.6.1 Using Environment Variables
The simplest approach is to use environment variables:
import os
from dotenv import load_dotenv
# Load environment variables from a .env file
load_dotenv()
# Access API keys
openai_api_key = os.environ.get("OPENAI_API_KEY")
azure_api_key = os.environ.get("AZURE_API_KEY")

You would create a .env file with your keys:
OPENAI_API_KEY=sk-your-key-here
AZURE_API_KEY=your-azure-key-here
Important: Add .env to your .gitignore file to avoid accidentally committing API keys.
28.6.2 Secure Key Storage Options
For more robust applications, consider:
- Secret management services: AWS Secrets Manager, Azure Key Vault, HashiCorp Vault
- Database storage: Encrypted database fields for API keys
- Configuration management: Tools like Ansible or Chef with encryption
Example using AWS Secrets Manager:
import boto3
import json
def get_secret(secret_name, region_name="us-west-2"):
    """Retrieve a secret from AWS Secrets Manager."""
    # Create a Secrets Manager client
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )

    try:
        # Get the secret value
        response = client.get_secret_value(SecretId=secret_name)

        # Parse the secret JSON
        if 'SecretString' in response:
            secret = json.loads(response['SecretString'])
            return secret
        else:
            # Binary secrets need to be decoded
            return response['SecretBinary']
    except Exception as e:
        print(f"Error retrieving secret: {e}")
        return None

# Usage
secrets = get_secret("ai-service-keys")
openai_api_key = secrets.get("OPENAI_API_KEY")

28.7 8. Testing and Troubleshooting AI Integrations
Testing AI integrations can be challenging due to rate limits, costs, and the sometimes non-deterministic nature of AI responses.
28.7.1 Creating Test Fixtures for AI APIs
import unittest
import json
import os
from unittest.mock import patch, MagicMock
class MockResponse:
    """A mock Response object for testing."""

    def __init__(self, status_code, json_data):
        self.status_code = status_code
        self.json_data = json_data
        self.text = json.dumps(json_data)

    def json(self):
        return self.json_data

    def raise_for_status(self):
        if self.status_code >= 400:
            raise Exception(f"HTTP Error: {self.status_code}")

class TestOpenAIIntegration(unittest.TestCase):
    def setUp(self):
        # Load test fixtures
        fixture_path = os.path.join(os.path.dirname(__file__), "fixtures", "openai_response.json")
        with open(fixture_path, 'r') as f:
            self.mock_openai_response = json.load(f)

    @patch('requests.post')
    def test_generate_text(self, mock_post):
        # Configure the mock
        mock_post.return_value = MockResponse(200, self.mock_openai_response)

        # Import the function to test
        from my_ai_module import generate_text

        # Call the function
        result = generate_text("Test prompt", api_key="fake_key")

        # Assert the result
        self.assertIsNotNone(result)
        self.assertIn("This is a test response", result)

        # Verify the API was called correctly
        mock_post.assert_called_once()
        args, kwargs = mock_post.call_args
        self.assertEqual(kwargs['headers']['Authorization'], "Bearer fake_key")
        payload = json.loads(kwargs['data'])
        self.assertEqual(payload['messages'][0]['content'], "Test prompt")
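The test above assumes a fixture file exists on disk. A minimal sketch that creates one with the expected shape (the fields mirror the choices/message structure of the chat completions response parsed earlier in this chapter):
import json
import os

# A minimal fixture saved as fixtures/openai_response.json
fixture = {
    "choices": [
        {"message": {"role": "assistant", "content": "This is a test response."}}
    ]
}
os.makedirs("fixtures", exist_ok=True)
with open(os.path.join("fixtures", "openai_response.json"), "w") as f:
    json.dump(fixture, f, indent=2)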
28.7.2 Creating a Test Mode for AI-Dependent Features
import os
import openai

class AIService:
    """A service that integrates with AI APIs."""

    def __init__(self, api_key=None, test_mode=False):
        """
        Initialize the AI service.

        Args:
            api_key (str, optional): API key for the service
            test_mode (bool): If True, use mock responses instead of real API calls
        """
        self.api_key = api_key or os.environ.get("OPENAI_API_KEY")
        self.test_mode = test_mode

        # Load test responses if in test mode
        if test_mode:
            self._load_test_responses()

    def _load_test_responses(self):
        """Load mock responses for test mode."""
        self.test_responses = {
            "greeting": "Hello! This is a test response.",
            "question": "This is a mock answer to your question.",
            "default": "This is a default test response."
        }

    def get_completion(self, prompt):
        """
        Get a completion from the AI.

        Args:
            prompt (str): The prompt to send

        Returns:
            str: The AI's response
        """
        # If in test mode, return a mock response
        if self.test_mode:
            if "hello" in prompt.lower() or "hi" in prompt.lower():
                return self.test_responses["greeting"]
            elif "?" in prompt:
                return self.test_responses["question"]
            else:
                return self.test_responses["default"]

        # Otherwise, call the real API
        try:
            openai.api_key = self.api_key
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
                max_tokens=150
            )
            return response.choices[0].message.content.strip()
        except Exception as e:
            print(f"Error calling AI API: {e}")
            return "Error: Could not get response from AI service."
28.8 9. Project Corner: Enhancing Your Chatbot with AI
Now let’s apply these AI integration techniques to enhance our chatbot project with more advanced capabilities.
28.8.1 Adding AI-Powered Understanding to the Chatbot
First, let’s create a class that uses AI to better understand user messages:
import openai
import os
import json
import re
from dotenv import load_dotenv

class MessageUnderstanding:
    """A class that uses AI to understand user messages."""

    def __init__(self, api_key=None):
        # Load environment variables if needed
        load_dotenv()

        # Set API key
        self.api_key = api_key or os.environ.get("OPENAI_API_KEY")
        if not self.api_key:
            print("Warning: No OpenAI API key provided. AI features will not work.")

    def analyze_intent(self, message):
        """
        Analyze the intent of a user message.

        Args:
            message (str): The user's message

        Returns:
            dict: Intent analysis with confidence scores
        """
        if not self.api_key:
            return {"intent": "unknown", "confidence": 0.0}

        try:
            # Create the prompt for intent analysis
            prompt = f"""
            Analyze the following message and determine the user's intent.
            Return ONLY a JSON object with "intent" and "confidence" fields.
            Possible intents: greeting, farewell, question, request, complaint, smalltalk, other

            Message: "{message}"

            JSON:
            """

            # Call the API
            openai.api_key = self.api_key
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
                max_tokens=100,
                temperature=0.3
            )

            # Parse the response as JSON
            result_text = response.choices[0].message.content.strip()
            return self._parse_json_response(result_text)
        except Exception as e:
            print(f"Error analyzing intent: {e}")
            return {"intent": "unknown", "confidence": 0.0}

    def extract_entities(self, message):
        """
        Extract entities from a user message.

        Args:
            message (str): The user's message

        Returns:
            dict: Extracted entities by type
        """
        if not self.api_key:
            return {}

        try:
            # Create the prompt for entity extraction
            prompt = f"""
            Extract entities from the following message.
            Return ONLY a JSON object where keys are entity types and values are the extracted entities.
            Possible entity types: person, location, organization, date_time, product, quantity, other

            Message: "{message}"

            JSON:
            """

            # Call the API
            openai.api_key = self.api_key
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
                max_tokens=150,
                temperature=0.3
            )

            # Parse the response as JSON
            result_text = response.choices[0].message.content.strip()
            return self._parse_json_response(result_text)
        except Exception as e:
            print(f"Error extracting entities: {e}")
            return {}

    def _parse_json_response(self, result_text):
        """Parse a JSON object from the model's reply, tolerating extra text around it."""
        try:
            # Try to parse the raw response
            return json.loads(result_text)
        except json.JSONDecodeError:
            # If that fails, try to extract JSON embedded in the text
            json_match = re.search(r'\{.*\}', result_text, re.DOTALL)
            if json_match:
                return json.loads(json_match.group(0))
            raise ValueError("Could not parse JSON from response")
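A short usage sketch, assuming a valid OPENAI_API_KEY is available (outputs will vary because the model’s responses are not deterministic):
understanding = MessageUnderstanding()
print(understanding.analyze_intent("Hi, can you help me reset my password?"))
print(understanding.extract_entities("Book a table in Paris for Friday evening."))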
28.8.2 AI-Enhanced Chatbot Implementation
Now, let’s integrate this into our chatbot:
class AIEnhancedChatbot:
    """A chatbot enhanced with AI capabilities."""

    def __init__(self, name="AI Chatbot", api_key=None):
        """
        Initialize the AI-enhanced chatbot.

        Args:
            name (str): The chatbot's name
            api_key (str, optional): API key for AI services
        """
        self.name = name
        self.user_name = None
        self.conversation_history = []

        # Initialize AI components (ConversationalAI is the conversation-memory
        # class built earlier in this chapter)
        self.message_understanding = MessageUnderstanding(api_key)
        self.conversational_ai = ConversationalAI(
            api_key=api_key,
            system_prompt=f"You are {name}, a helpful assistant. Keep responses concise and friendly."
        )

        # Flag to control AI usage
        self.use_ai = True

    def greet(self):
        """Greet the user and get their name."""
        print(f"Hello! I'm {self.name}, an AI-enhanced chatbot. Type 'bye' to exit.")
        self.user_name = input("What's your name? ")
        print(f"Nice to meet you, {self.user_name}!")
        self.add_to_history("SYSTEM", f"Conversation started with {self.user_name}")

    def add_to_history(self, speaker, text):
        """Add a message to the conversation history."""
        from datetime import datetime
        timestamp = datetime.now().strftime("%H:%M:%S")
        entry = f"[{timestamp}] {speaker}: {text}"
        self.conversation_history.append(entry)

    def get_response(self, user_input):
        """Generate a response to the user input using AI capabilities."""
        # Handle special commands
        if user_input.lower() == "help":
            return self.get_help()
        elif user_input.lower() == "history":
            return self.show_history()
        elif user_input.lower() == "toggle ai":
            self.use_ai = not self.use_ai
            return f"AI features turned {'on' if self.use_ai else 'off'}"

        if not self.use_ai:
            # Fall back to rule-based response if AI is disabled
            return self.get_rule_based_response(user_input)

        try:
            # Use AI to understand the message
            intent_analysis = self.message_understanding.analyze_intent(user_input)
            entities = self.message_understanding.extract_entities(user_input)

            # Log the understanding (in a real system, you might not show this to the user)
            understanding_log = f"Intent: {intent_analysis.get('intent', 'unknown')} ({intent_analysis.get('confidence', 0):.2f})"
            if entities:
                understanding_log += f", Entities: {entities}"
            self.add_to_history("SYSTEM", understanding_log)

            # Get a response from the conversational AI
            ai_response = self.conversational_ai.get_response(user_input)
            return ai_response
        except Exception as e:
            print(f"Error in AI processing: {e}")
            # Fall back to rule-based response if AI fails
            return self.get_rule_based_response(user_input)

    def get_rule_based_response(self, user_input):
        """Generate a response using simple rule-based patterns."""
        user_input = user_input.lower()

        if "hello" in user_input or "hi" in user_input:
            return f"Hello, {self.user_name}! How can I help you today?"
        elif "how are you" in user_input:
            return "I'm doing well, thank you for asking!"
        elif "your name" in user_input:
            return f"My name is {self.name}. I'm an AI-enhanced chatbot."
        elif "bye" in user_input or "goodbye" in user_input:
            return f"Goodbye, {self.user_name}! It was nice chatting with you."
        else:
            return "I'm not sure how to respond to that. Can you try asking something else?"

    def show_history(self):
        """Show the conversation history."""
        if not self.conversation_history:
            return "No conversation history yet."

        history = "\n----- Conversation History -----\n"
        for entry in self.conversation_history:
            history += f"{entry}\n"
        history += "-------------------------------"
        return history

    def get_help(self):
        """Get help information."""
        help_text = f"""
        Available Commands:
        - 'help': Display this help message
        - 'history': Show conversation history
        - 'toggle ai': Turn AI features on/off
        - 'bye': End the conversation

        You can also just chat with me normally, {self.user_name}!
        """
        return help_text

    def run(self):
        """Run the main chatbot loop."""
        self.greet()

        while True:
            user_input = input(f"{self.user_name}> ")
            self.add_to_history(self.user_name, user_input)

            if user_input.lower() == "bye":
                response = f"Goodbye, {self.user_name}! I hope to chat again soon."
                print(f"{self.name}> {response}")
                self.add_to_history(self.name, response)
                break

            response = self.get_response(user_input)
            print(f"{self.name}> {response}")
            self.add_to_history(self.name, response)

28.8.3 Running the AI-Enhanced Chatbot
Here’s how you might run the AI-enhanced chatbot:
if __name__ == "__main__":
    # Import necessary modules
    import os
    from dotenv import load_dotenv

    # Load environment variables
    load_dotenv()

    # Get API key from environment
    api_key = os.environ.get("OPENAI_API_KEY")

    # Check if API key is available
    if not api_key:
        print("Warning: No OpenAI API key found in environment variables.")
        print("The chatbot will run with limited AI capabilities.")

        # Ask if user wants to proceed
        proceed = input("Do you want to continue without AI features? (y/n): ")
        if proceed.lower() != "y":
            print("Exiting. Please set up an API key in a .env file to use AI features.")
            exit()

    # Create and run the chatbot
    chatbot = AIEnhancedChatbot(name="AIBot", api_key=api_key)
    chatbot.run()

28.9 10. Handling API Costs and Rate Limits
When using AI services in your chatbot, it’s important to be mindful of costs and rate limits:
- Track usage: Implement a counter for API calls and log costs
- Set limits: Define maximum daily/monthly usage thresholds
- Cache responses: Store responses for common queries (see the sketch after this list)
- Use fallbacks: Implement rule-based responses as fallbacks
- Tier your approach: Use cheaper AI services for routine tasks, reserve expensive ones for complex queries
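Of these, caching is often the quickest win. Here is a minimal in-memory sketch; the fetch callable is a placeholder for any of the API functions in this chapter, and it assumes prompts repeat verbatim and that responses are deterministic enough to reuse (e.g., temperature set to 0):
# Minimal in-memory response cache: reuse answers for repeated prompts
_response_cache = {}

def cached_completion(prompt, fetch):
    """Return a cached response for prompt, calling fetch(prompt) on a miss."""
    if prompt not in _response_cache:
        _response_cache[prompt] = fetch(prompt)
    return _response_cache[prompt]

# Example: cached_completion("What is Python?", generate_text)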
Here’s an example of a cost-tracking wrapper:
import openai
from datetime import date

class CostTrackingAI:
    """A wrapper for AI services that tracks usage and costs."""

    def __init__(self, api_key, cost_per_1k_tokens=0.002, daily_budget=1.0):
        """
        Initialize the cost tracking AI wrapper.

        Args:
            api_key (str): API key for the service
            cost_per_1k_tokens (float): Cost per 1000 tokens in USD
            daily_budget (float): Maximum daily budget in USD
        """
        self.api_key = api_key
        self.cost_per_1k_tokens = cost_per_1k_tokens
        self.daily_budget = daily_budget

        # Initialize usage tracking
        self.tokens_used_today = 0
        self.requests_made_today = 0
        self.estimated_cost_today = 0.0

        # Initialize the OpenAI client
        openai.api_key = api_key

        # Set the date for tracking
        self.current_date = date.today()

    def _update_date(self):
        """Update the current date and reset counters if needed."""
        today = date.today()
        if today > self.current_date:
            # It's a new day, reset counters
            self.current_date = today
            self.tokens_used_today = 0
            self.requests_made_today = 0
            self.estimated_cost_today = 0.0

    def _update_usage(self, tokens_used):
        """
        Update usage statistics.

        Args:
            tokens_used (int): Number of tokens used in the request
        """
        self._update_date()
        self.tokens_used_today += tokens_used
        self.requests_made_today += 1

        # Calculate cost
        cost = (tokens_used / 1000) * self.cost_per_1k_tokens
        self.estimated_cost_today += cost

    def can_make_request(self, estimated_tokens=500):
        """
        Check if a request can be made within budget constraints.

        Args:
            estimated_tokens (int): Estimated tokens for the request

        Returns:
            bool: True if the request can be made, False otherwise
        """
        self._update_date()

        # Estimate cost of this request
        estimated_cost = (estimated_tokens / 1000) * self.cost_per_1k_tokens

        # Check if it would exceed the budget
        return (self.estimated_cost_today + estimated_cost) <= self.daily_budget

    def get_completion(self, prompt, max_tokens=150):
        """
        Get a completion from the AI while tracking costs.

        Args:
            prompt (str): The prompt to send
            max_tokens (int): Maximum tokens in the response

        Returns:
            str: The AI's response
        """
        # Check if we can make the request (rough token estimate from word count)
        if not self.can_make_request(estimated_tokens=len(prompt.split()) + max_tokens):
            return "I'm sorry, I've reached my usage limit for today. Please try again tomorrow."

        try:
            # Make the API call
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
                max_tokens=max_tokens
            )

            # Update usage tracking from the reported token counts
            total_tokens = response.usage.total_tokens
            self._update_usage(total_tokens)

            # Log usage
            print(f"Request used {total_tokens} tokens (${(total_tokens / 1000) * self.cost_per_1k_tokens:.6f})")
            print(f"Daily usage: {self.tokens_used_today} tokens, Est. cost: ${self.estimated_cost_today:.4f}")

            return response.choices[0].message.content.strip()
        except Exception as e:
            print(f"Error in AI request: {e}")
            return "Sorry, I encountered an error processing your request."
28.10 Cross-References
- Previous Chapter: AI Programming Assistants
- Next Chapter: AI Assistance Tips
- Related Topics: Object-Oriented Programming (Chapter 20), Modules and Packages (Chapter 19)
AI Tip: When integrating AI APIs with Python, follow the “graceful degradation” pattern: design your system to work in a reduced capacity when AI services are unavailable or when rate limits are reached. This ensures your application remains functional even when AI services are temporarily inaccessible.
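As a minimal sketch of the pattern (the ai_fn and rule_fn callables stand in for any AI-backed and rule-based responders, such as those built in the Project Corner above):
def respond(user_input, ai_fn, rule_fn):
    """Try the AI path first; degrade to rule-based logic on any failure."""
    try:
        reply = ai_fn(user_input)
        if reply:
            return reply
    except Exception:
        pass  # rate limit, outage, or bad key: degrade instead of crashing
    return rule_fn(user_input)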
28.11 Summary
In this chapter, we’ve explored how to integrate AI capabilities into Python applications:
We began by understanding the landscape of AI services available for Python integration, from LLMs to vision and speech processing.
We learned how to make direct API calls to AI services using Python’s requests library, and how to use specialized SDKs for more convenient integration.
We implemented various text processing capabilities, including sentiment analysis, named entity recognition, and text summarization.
We built conversational AI features, both stateless and with conversation memory, to enable more natural interactions.
We covered important practical considerations, including API authentication, testing strategies, and cost management.
Finally, we applied these techniques to enhance our chatbot project with AI capabilities, making it more intelligent and helpful.
As AI services continue to evolve and become more accessible, the ability to integrate them into Python applications will be an increasingly valuable skill. By understanding the patterns and practices covered in this chapter, you’re well-equipped to build applications that leverage the power of AI, even without deep expertise in machine learning or data science.
Remember that integrating external AI services involves trade-offs in terms of cost, privacy, and dependence on third-party systems. Always consider these factors when deciding whether and how to incorporate AI into your Python projects.