Python Design Patterns: Factory, Singleton, Observer, and SOLID Principles
Master essential Python design patterns and SOLID principles for scalable ML applications. Learn the Factory, Singleton, Observer, Strategy, Builder, Adapter, Facade, and Command patterns with practical examples.

When My ML Project Became Unmaintainable
My ML project had 3000 lines in one file—duplicate code everywhere, tightly coupled classes, and adding a new model broke everything.
Then I discovered design patterns. They're proven solutions to common programming problems. My code transformed from spaghetti to professional architecture.
In this comprehensive guide, I'll share the design patterns that changed how I write Python code for ML projects.
Why Design Patterns Matter for ML
- Code Reusability - Build reusable ML components
- Maintainability - Easy to modify and extend pipelines
- Scalability - Handle growing datasets and models
- Team Collaboration - Common vocabulary for ML engineers
- Professional Code - Industry-standard architecture
Creational Patterns: Object Creation
1. Singleton Pattern
Ensures only ONE instance of a class exists (perfect for config managers, database connections).
class ConfigManager:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.config = {}
        return cls._instance

    def set(self, key, value):
        self.config[key] = value

    def get(self, key):
        return self.config.get(key)

# Usage
config1 = ConfigManager()
config1.set('learning_rate', 0.001)
config2 = ConfigManager()
print(config2.get('learning_rate'))  # 0.001 (same instance!)
print(config1 is config2)  # True
When to use: Configuration managers, loggers, database connections.
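One caveat: the `__new__` check above is not thread-safe, so two threads could race to create the instance. If your training code is multi-threaded, a lock is the usual safeguard. Here is a minimal sketch of that variation (the `ThreadSafeConfigManager` name and the locking details are my own additions, not part of the original example):

import threading

class ThreadSafeConfigManager:
    _instance = None
    _lock = threading.Lock()  # Guards instance creation across threads

    def __new__(cls):
        if cls._instance is None:  # Fast path: instance already exists
            with cls._lock:
                if cls._instance is None:  # Double-checked inside the lock
                    cls._instance = super().__new__(cls)
                    cls._instance.config = {}
        return cls._instance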
2. Factory Pattern
Creates objects without specifying their exact class.
from abc import ABC, abstractmethod

class Model(ABC):
    @abstractmethod
    def train(self, data):
        pass

    @abstractmethod
    def predict(self, data):
        pass

class LinearRegression(Model):
    def train(self, data):
        return f"Training Linear Regression on {len(data)} samples"

    def predict(self, data):
        return f"Predictions for {len(data)} samples"

class RandomForest(Model):
    def train(self, data):
        return f"Training Random Forest on {len(data)} samples"

    def predict(self, data):
        return f"Forest predictions for {len(data)} samples"

class ModelFactory:
    @staticmethod
    def create_model(model_type):
        models = {
            'linear': LinearRegression,
            'forest': RandomForest
        }
        model_class = models.get(model_type)
        if model_class:
            return model_class()
        raise ValueError(f"Unknown model: {model_type}")

# Usage
factory = ModelFactory()
model = factory.create_model('linear')
print(model.train([1, 2, 3, 4, 5]))
Benefits: Easy to add new models, centralized creation, loose coupling.
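If you want to add models without ever touching the factory's dictionary, a registry-based variant is a common extension. This is a sketch under my own naming (`ModelRegistry`), reusing the model classes defined above; it is not part of the original example:

class ModelRegistry:
    _models = {}

    @classmethod
    def register(cls, name, model_class):
        cls._models[name] = model_class

    @classmethod
    def create(cls, name):
        if name not in cls._models:
            raise ValueError(f"Unknown model: {name}")
        return cls._models[name]()

# New models plug in from the outside - the registry code never changes
ModelRegistry.register('linear', LinearRegression)
ModelRegistry.register('forest', RandomForest)
model = ModelRegistry.create('forest')
print(model.train([1, 2, 3]))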
3. Builder Pattern
Constructs complex objects step by step.
class NeuralNetwork:
    def __init__(self):
        self.layers = []
        self.optimizer = None
        self.loss = None

    def __str__(self):
        return f"Network: {len(self.layers)} layers, {self.optimizer} optimizer"

class NeuralNetworkBuilder:
    def __init__(self):
        self.network = NeuralNetwork()

    def add_layer(self, layer_type, units):
        self.network.layers.append(f"{layer_type}({units})")
        return self  # Return self for chaining

    def set_optimizer(self, optimizer):
        self.network.optimizer = optimizer
        return self

    def set_loss(self, loss):
        self.network.loss = loss
        return self

    def build(self):
        return self.network

# Usage - Method chaining!
network = (NeuralNetworkBuilder()
           .add_layer("Dense", 128)
           .add_layer("Dense", 64)
           .add_layer("Dense", 10)
           .set_optimizer("Adam")
           .set_loss("categorical_crossentropy")
           .build())
print(network)  # Network: 3 layers, Adam optimizer
When to use: Complex object construction with many optional parameters.
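A builder is also a natural place to check that the pieces fit together before handing the object out. Here is a small sketch of how `build()` could be extended with validation (the `ValidatedNetworkBuilder` name and the specific checks are my own assumptions):

class ValidatedNetworkBuilder(NeuralNetworkBuilder):
    def build(self):
        # Fail fast if the network is not fully specified
        if not self.network.layers:
            raise ValueError("Network needs at least one layer")
        if self.network.optimizer is None:
            raise ValueError("Optimizer must be set before build()")
        return self.network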
Structural Patterns: Object Composition
4. Adapter Pattern
Makes incompatible interfaces work together.
class OldDataProcessor:
    def process_csv(self, filename):
        return f"Processing CSV: {filename}"

class NewDataProcessor:
    def process_data(self, source, format_type):
        return f"Processing {format_type} from {source}"

class DataProcessorAdapter:
    def __init__(self, old_processor):
        self.old_processor = old_processor
        self.new_processor = NewDataProcessor()

    def process_data(self, source, format_type):
        if format_type == "csv":
            return self.old_processor.process_csv(source)
        return self.new_processor.process_data(source, format_type)

# Usage
adapter = DataProcessorAdapter(OldDataProcessor())
print(adapter.process_data("data.csv", "csv"))
print(adapter.process_data("data.json", "json"))
When to use: Integrating legacy code with new systems.
5. Decorator Pattern
Adds functionality without modifying original code.
import functools
import time

def timing_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print(f"⏱️ {func.__name__} took {time.time() - start:.2f}s")
        return result
    return wrapper

def validation_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if args and len(args[0]) == 0:
            raise ValueError("Data cannot be empty")
        return func(*args, **kwargs)
    return wrapper

@timing_decorator
@validation_decorator
def train_model(data):
    time.sleep(0.1)  # Simulate training
    return f"Trained on {len(data)} samples"

# Usage
result = train_model([1, 2, 3, 4, 5])
When to use: Adding logging, timing, validation, caching to functions.
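Caching is another classic decorator use case, and Python ships one for it: `functools.lru_cache` memoizes results for repeated calls with the same hashable arguments. A small sketch (the `load_embeddings` helper is hypothetical):

import functools

@functools.lru_cache(maxsize=128)
def load_embeddings(path):
    print(f"Loading embeddings from {path}")  # Runs only on a cache miss
    return f"embeddings::{path}"

load_embeddings("glove.txt")  # Loads and caches
load_embeddings("glove.txt")  # Served from the cache, no reload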
6. Facade Pattern
Provides simplified interface to complex subsystem.
class DataLoader:
    def load(self, source):
        return f"Loading from {source}"

class Preprocessor:
    def preprocess(self, data):
        return f"Preprocessing {len(data)} samples"

class ModelTrainer:
    def train(self, data):
        return f"Training on {len(data)} samples"

class Evaluator:
    def evaluate(self, model, test_data):
        return f"Evaluating on {len(test_data)} samples"

class MLPipelineFacade:
    def __init__(self):
        self.loader = DataLoader()
        self.preprocessor = Preprocessor()
        self.trainer = ModelTrainer()
        self.evaluator = Evaluator()

    def run_pipeline(self, data_source, test_data):
        print("=== ML Pipeline ===")
        print(f"1. {self.loader.load(data_source)}")
        print(f"2. {self.preprocessor.preprocess([1, 2, 3, 4, 5])}")
        print(f"3. {self.trainer.train([1, 2, 3, 4, 5])}")
        print(f"4. {self.evaluator.evaluate('model', test_data)}")
        return "Pipeline complete"

# Usage - Simple interface!
pipeline = MLPipelineFacade()
result = pipeline.run_pipeline("database", [1, 2, 3])
When to use: Simplifying complex systems, creating APIs.
Behavioral Patterns: Object Interaction
7. Observer Pattern
Notifies multiple objects when state changes.
from abc import ABC, abstractmethod

class Observer(ABC):
    @abstractmethod
    def update(self, message):
        pass

class EmailNotifier(Observer):
    def update(self, message):
        print(f"📧 Email: {message}")

class SlackNotifier(Observer):
    def update(self, message):
        print(f"💬 Slack: {message}")

class TrainingSubject:
    def __init__(self):
        self.observers = []
        self.epoch = 0
        self.loss = 10.0

    def attach(self, observer):
        self.observers.append(observer)

    def notify(self, message):
        for observer in self.observers:
            observer.update(message)

    def train_epoch(self):
        self.epoch += 1
        self.loss -= 0.5
        self.notify(f"Epoch {self.epoch}: Loss = {self.loss:.2f}")

# Usage
trainer = TrainingSubject()
trainer.attach(EmailNotifier())
trainer.attach(SlackNotifier())
for _ in range(3):
    trainer.train_epoch()
When to use: Event-driven systems, real-time monitoring.
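In practice you usually also want to unsubscribe observers, for example to silence email alerts once debugging is done. A minimal sketch that extends the `TrainingSubject` above (the `detach` method is my own addition):

class ManagedTrainingSubject(TrainingSubject):
    def detach(self, observer):
        # Stop notifying this observer on future epochs
        if observer in self.observers:
            self.observers.remove(observer)

trainer = ManagedTrainingSubject()
email = EmailNotifier()
trainer.attach(email)
trainer.attach(SlackNotifier())
trainer.train_epoch()  # Both notifiers fire
trainer.detach(email)
trainer.train_epoch()  # Only Slack fires now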
8. Strategy Pattern
Swaps algorithms at runtime.
from abc import ABC, abstractmethod

class PreprocessingStrategy(ABC):
    @abstractmethod
    def preprocess(self, data):
        pass

class NormalizationStrategy(PreprocessingStrategy):
    def preprocess(self, data):
        max_val = max(data)
        return [x / max_val for x in data]

class StandardizationStrategy(PreprocessingStrategy):
    def preprocess(self, data):
        mean = sum(data) / len(data)
        std = (sum((x - mean)**2 for x in data) / len(data)) ** 0.5
        return [(x - mean) / std for x in data]

class DataPreprocessor:
    def __init__(self, strategy):
        self.strategy = strategy

    def set_strategy(self, strategy):
        self.strategy = strategy

    def preprocess(self, data):
        return self.strategy.preprocess(data)

# Usage
data = [1, 5, 10, 15, 20]
preprocessor = DataPreprocessor(NormalizationStrategy())
print(f"Normalized: {preprocessor.preprocess(data)}")
preprocessor.set_strategy(StandardizationStrategy())
print(f"Standardized: {preprocessor.preprocess(data)}")
When to use: Multiple interchangeable algorithms, runtime algorithm selection.
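Because Python functions are first-class objects, a lighter-weight variant of the same idea is to pass plain functions as strategies and skip the class hierarchy when the strategies carry no state. A sketch of that alternative:

def normalize(data):
    max_val = max(data)
    return [x / max_val for x in data]

def standardize(data):
    mean = sum(data) / len(data)
    std = (sum((x - mean)**2 for x in data) / len(data)) ** 0.5
    return [(x - mean) / std for x in data]

def preprocess(data, strategy):
    # Any callable that takes a list and returns a list works here
    return strategy(data)

print(preprocess([1, 5, 10, 15, 20], normalize))
print(preprocess([1, 5, 10, 15, 20], standardize))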
9. Command Pattern
Encapsulates requests as objects (undo/redo support).
from abc import ABC, abstractmethod

class Command(ABC):
    @abstractmethod
    def execute(self):
        pass

    @abstractmethod
    def undo(self):
        pass

class TrainModelCommand(Command):
    def __init__(self, model, data):
        self.model = model
        self.data = data
        self.previous_state = None

    def execute(self):
        self.previous_state = self.model
        self.model = f"Trained on {len(self.data)} samples"
        return f"Executed: {self.model}"

    def undo(self):
        self.model = self.previous_state
        return f"Undone: Reverted to {self.model}"

class CommandInvoker:
    def __init__(self):
        self.history = []

    def execute(self, command):
        result = command.execute()
        self.history.append(command)
        return result

    def undo_last(self):
        if self.history:
            command = self.history.pop()
            return command.undo()
        return "No commands to undo"

# Usage
invoker = CommandInvoker()
cmd = TrainModelCommand("Untrained model", [1, 2, 3, 4, 5])
print(invoker.execute(cmd))
print(invoker.undo_last())
When to use: Undo/redo functionality, transaction systems.
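The invoker above only handles undo; redo falls out naturally by keeping a second stack of undone commands. A minimal sketch building on the classes above (the `RedoableInvoker` name is mine):

class RedoableInvoker(CommandInvoker):
    def __init__(self):
        super().__init__()
        self.undone = []  # Commands available for redo

    def undo_last(self):
        if not self.history:
            return "No commands to undo"
        command = self.history.pop()
        self.undone.append(command)
        return command.undo()

    def redo_last(self):
        if not self.undone:
            return "No commands to redo"
        command = self.undone.pop()
        self.history.append(command)
        return command.execute()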
SOLID Principles
S - Single Responsibility Principle
One class = One responsibility
# ✅ GOOD: Separate responsibilities
class DataLoader:
    def load(self, file):
        return f"Loading {file}"

class DataPreprocessor:
    def preprocess(self, data):
        return f"Preprocessing {len(data)} samples"

class ModelTrainer:
    def train(self, data):
        return f"Training on {len(data)} samples"
O - Open/Closed Principle
Open for extension, closed for modification
from abc import ABC, abstractmethod

class ModelTrainer(ABC):
    @abstractmethod
    def train(self, data):
        pass

# Add new trainers WITHOUT modifying existing code
class LinearTrainer(ModelTrainer):
    def train(self, data):
        return "Training Linear Model"

class NeuralTrainer(ModelTrainer):
    def train(self, data):
        return "Training Neural Network"
L - Liskov Substitution Principle
Subclasses should be substitutable for base classes
class Model(ABC):
    @abstractmethod
    def predict(self, data):
        pass

class LinearModel(Model):
    def predict(self, data):
        return [0.8, 0.6, 0.9]  # Always returns predictions

class NeuralModel(Model):
    def predict(self, data):
        return [0.7, 0.8, 0.75]  # Always returns predictions

# Both work the same way
def evaluate(model, data):
    predictions = model.predict(data)
    return f"Got {len(predictions)} predictions"
I - Interface Segregation Principle
Many small interfaces > One large interface
from abc import ABC, abstractmethod

# ✅ GOOD: Specific interfaces
class Trainable(ABC):
    @abstractmethod
    def train(self, data):
        pass

class Savable(ABC):
    @abstractmethod
    def save(self, path):
        pass

class APICompatible(ABC):
    @abstractmethod
    def send_to_api(self, endpoint):
        pass

# Classes implement only what they need
class SimpleModel(Trainable):
    def train(self, data):
        return "Training..."

class PersistentModel(Trainable, Savable):
    def train(self, data):
        return "Training..."

    def save(self, path):
        return f"Saving to {path}"
D - Dependency Inversion Principle
Depend on abstractions, not concretions
from abc import ABC, abstractmethod

# ✅ GOOD: Depend on abstraction
class DataStorage(ABC):
    @abstractmethod
    def save(self, data):
        pass

class CSVStorage(DataStorage):
    def save(self, data):
        return f"Saving to CSV: {data}"

class JSONStorage(DataStorage):
    def save(self, data):
        return f"Saving to JSON: {data}"

class MLPipeline:
    def __init__(self, storage: DataStorage):
        self.storage = storage  # Depends on abstraction!

    def save_results(self, data):
        return self.storage.save(data)

# Easy to swap implementations
pipeline = MLPipeline(CSVStorage())
pipeline = MLPipeline(JSONStorage())
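A nice side effect of depending on the abstraction is testability: you can hand the pipeline a fake storage in unit tests and assert on what it received, with no files involved. A sketch (the `InMemoryStorage` test double is my own helper, not part of the original example):

class InMemoryStorage(DataStorage):
    def __init__(self):
        self.saved = []

    def save(self, data):
        self.saved.append(data)
        return f"Saved in memory: {data}"

def test_pipeline_saves_results():
    storage = InMemoryStorage()
    pipeline = MLPipeline(storage)
    pipeline.save_results({'accuracy': 0.93})
    assert storage.saved == [{'accuracy': 0.93}]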
Context Managers: Python-Specific Pattern
from contextlib import contextmanager
import time

class ModelTrainingContext:
    def __init__(self, model_name):
        self.model_name = model_name
        self.start_time = None

    def __enter__(self):
        print(f"🚀 Starting: {self.model_name}")
        self.start_time = time.time()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        duration = time.time() - self.start_time
        print(f"✅ Completed in {duration:.2f}s")
        return False

# Usage
with ModelTrainingContext("Neural Network"):
    time.sleep(0.1)
    print("Training...")
Common Mistakes to Avoid
1. Overusing Patterns
# Bad - pattern for the sake of pattern
class SimpleCalculatorFactorySingletonStrategy:
    # Way too complex for add(2, 3)!
    pass

# Good - simple solution for simple problem
def add(a, b):
    return a + b
2. God Objects (Violating SRP)
# Bad - does everything!
class MLPipeline:
    def load_data(self): pass
    def clean_data(self): pass
    def train_model(self): pass
    def evaluate(self): pass
    def deploy(self): pass
    def monitor(self): pass
    # This class has too many responsibilities!

# Good - separate concerns
class DataLoader: pass
class DataCleaner: pass
class ModelTrainer: pass
class ModelEvaluator: pass
3. Premature Optimization
# Don't implement patterns you don't need yet!
# Start simple, refactor when complexity grows

# Start here:
def train_model(data):
    # Simple implementation
    pass

# Refactor to pattern when you need it:
class ModelFactory:
    # Add pattern when managing multiple models
    pass
4. Ignoring Python Idioms
# Bad - Java-style in Python
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # Complex singleton implementation
        pass

# Better - use Python's simplicity
def get_config():
    if not hasattr(get_config, 'config'):
        get_config.config = Config()
    return get_config.config
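Another idiomatic option is `functools.lru_cache` on a zero-argument function: the first call builds the object and every later call returns the same cached instance. A sketch of that alternative, reusing the illustrative `Config` class from the snippet above:

import functools

@functools.lru_cache(maxsize=None)
def get_config():
    # Built once on the first call, then reused - effectively a singleton
    return Config()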
When to Use Which Pattern
| Pattern | Use When | ML Example |
|---|---|---|
| Singleton | Single instance needed | Config, Logger, DB Connection |
| Factory | Multiple similar objects | Creating different models |
| Strategy | Swappable algorithms | Different preprocessing methods |
| Observer | Event notification | Training progress tracking |
| Builder | Complex object construction | ML pipeline with many steps |
| Adapter | Interface compatibility | Legacy model integration |
| Facade | Simplify complex system | High-level ML API |
| Command | Action encapsulation | ML experiment tracking |
Best Practices
1. Start Simple, Refactor When Needed
# Start with simplest solution
def train_model(data):
    model = RandomForest()
    model.fit(data)
    return model

# Refactor to pattern when you have 5+ models
class ModelFactory:
    # Add pattern when complexity justifies it
    pass
2. Combine Patterns
# Factory + Strategy = Powerful combination
class ModelFactory:
    @staticmethod
    def create_model(model_type, strategy):
        model = ModelFactory.get_model(model_type)
        model.set_strategy(strategy)
        return model
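The snippet above is just the shape of the idea. Here is one way it could look concretely, reusing the `ModelFactory`, `DataPreprocessor`, and `NormalizationStrategy` defined earlier in this guide (the `build_pipeline` helper is my own naming):

def build_pipeline(model_type, strategy):
    # Factory picks the model, Strategy picks the preprocessing
    model = ModelFactory.create_model(model_type)
    preprocessor = DataPreprocessor(strategy)
    return model, preprocessor

model, preprocessor = build_pipeline('forest', NormalizationStrategy())
clean_data = preprocessor.preprocess([1, 5, 10, 15, 20])
print(model.train(clean_data))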
3. Follow SOLID Principles
- Single Responsibility
- Open/Closed
- Liskov Substitution
- Interface Segregation
- Dependency Inversion
4. Use Python's Native Features
# Leverage decorators, context managers, @property
# Don't fight Python's dynamic nature
5. Test Your Patterns
def test_singleton_returns_same_instance():
    config1 = ConfigManager()
    config2 = ConfigManager()
    assert config1 is config2  # Verify singleton behavior
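The same goes for factories: test the happy path and the error path. A sketch using pytest, assuming the `ModelFactory` and `LinearRegression` from earlier:

import pytest

def test_factory_rejects_unknown_model():
    with pytest.raises(ValueError):
        ModelFactory.create_model('quantum')  # Not registered

def test_factory_creates_linear_model():
    model = ModelFactory.create_model('linear')
    assert isinstance(model, LinearRegression)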
Conclusion
Design patterns transformed my ML code from unmaintainable mess to professional architecture. Master these patterns and SOLID principles—your future self will thank you!
Start with:
- Singleton - Config managers
- Factory - Model creation
- Strategy - Swappable algorithms
- Observer - Training monitoring
- SRP - Single responsibility
Within weeks, you'll write production-ready ML systems that scale!
If you found this guide helpful, I'd love to hear about your experience! Connect with me on Twitter or LinkedIn.
Support My Work
If this guide helped you understand Python design patterns, implement Factory and Singleton patterns, or write cleaner object-oriented code, I'd really appreciate your support! Creating comprehensive, practical content like this takes significant time and effort. Your support helps me continue sharing knowledge and creating more helpful resources for aspiring programmers.
☕ Buy me a coffee - Every contribution, big or small, means the world to me and keeps me motivated to create more content!
Cover image by Clark Van Der Beken on Unsplash