Basic Usage¶
PyLogShield extends Python's standard logging with features like sensitive data masking, log rotation, asynchronous logging, rate limiting, and dynamic configuration.
Getting Started¶
from pylogshield import get_logger, PyLogShield
# Recommended: Use get_logger for reusable named loggers
logger = get_logger(name="my_app", log_level="DEBUG")
# Alternative: Create directly with PyLogShield
logger = PyLogShield(name="my_app", log_level="DEBUG")
# Standard logging methods
logger.debug("Debug message")
logger.info("Informational message")
logger.warning("Warning message")
logger.error("Error message")
logger.critical("Critical message")
See the PyLogShield API Reference for all available parameters.
Sensitive Data Masking¶
Automatically mask sensitive fields like passwords, tokens, and API keys.
flowchart LR
IN(["logger.info(data, mask=True)"])
M{{"_mask(data)"}}
subgraph TYPES ["Input type routing"]
direction TB
S["str → regex replace\nsensitive= patterns"]
D["dict → recurse\neach value"]
L["list / tuple → recurse\neach element"]
E["Exception → mask\nstring .args"]
end
OUT(["Masked Output → LogRecord → Handler"])
IN --> M
M --> S --> OUT
M --> D --> OUT
M --> L --> OUT
M --> E --> OUT
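The type routing above can be sketched with a small stdlib-only function. This is an illustration of the idea, not PyLogShield's actual implementation; the field set and the `"***"` placeholder mirror the examples in these docs:

```python
import re

# Assumed starter set of sensitive keys (the library ships its own registry)
SENSITIVE = {"password", "token", "api_key", "secret"}

# Matches "key: value" or "key=value" for any sensitive key
PATTERN = re.compile(
    r"\b(" + "|".join(SENSITIVE) + r")\b(\s*[:=]\s*)(\S+)", re.IGNORECASE
)

def mask(data):
    """Recursively replace sensitive values with '***'."""
    if isinstance(data, str):
        return PATTERN.sub(r"\1\2***", data)          # str: regex replace
    if isinstance(data, dict):
        return {                                       # dict: recurse each value
            k: "***" if str(k).lower() in SENSITIVE else mask(v)
            for k, v in data.items()
        }
    if isinstance(data, (list, tuple)):
        return type(data)(mask(item) for item in data)  # list/tuple: recurse
    if isinstance(data, Exception):
        return type(data)(*(mask(a) for a in data.args))  # Exception: mask .args
    return data
```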
from pylogshield import get_logger
logger = get_logger("my_app")
# Enable masking with mask=True
logger.info({"user": "john_doe", "password": "secret123"}, mask=True)
# Output: {"user": "john_doe", "password": "***"}
# Works with nested structures
logger.info({
    "user": "john",
    "credentials": {
        "api_key": "abc123",
        "token": "xyz789"
    }
}, mask=True)
# Output: {"user": "john", "credentials": {"api_key": "***", "token": "***"}}
# Also masks in plain text
logger.info("User logged in with password: secret123", mask=True)
# Output: User logged in with password: ***
Exception tracebacks
mask=True masks the exception's .args string values. Traceback locals and frame variables are formatted by the log handler and are not redacted. Avoid passing sensitive data as local variables in functions that may log exceptions.
Managing Sensitive Fields¶
from pylogshield import (
    get_logger,
    add_sensitive_fields,
    remove_sensitive_fields,
    get_sensitive_fields,
)
# Inspect the current set (28 fields by default)
print(get_sensitive_fields())
# frozenset({'password', 'token', 'api_key', 'secret', 'jwt', ...})
# Add domain-specific sensitive fields
add_sensitive_fields(["national_id", "dob", "account_number", "sort_code"])
# Remove a field that isn't sensitive in your context
remove_sensitive_fields(["auth"])
# Or add via logger instance (equivalent to the module-level call)
logger = get_logger("my_app")
logger.add_sensitive_fields(["nhs_number", "tax_id"])
Fields are registered globally for the process lifetime. Any logger that logs with mask=True will redact all registered fields.
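Conceptually, the registry behaves like a module-level set shared by every logger. The sketch below is stdlib-only and illustrative; it reuses the documented function names but is not the library's internals:

```python
# A single process-wide registry shared by all loggers (illustrative)
_sensitive_fields: set = {"password", "token", "api_key", "secret"}

def add_sensitive_fields(fields):
    """Register extra field names process-wide."""
    _sensitive_fields.update(f.lower() for f in fields)

def remove_sensitive_fields(fields):
    _sensitive_fields.difference_update(f.lower() for f in fields)

def get_sensitive_fields():
    return frozenset(_sensitive_fields)  # read-only snapshot
```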
What gets masked¶
String values under a sensitive key are replaced with "***":
logger.info({"user": "alice", "password": "secret"}, mask=True)
# → {"user": "alice", "password": "***"}
Non-string values (int, float, None, bool) are also fully replaced:
add_sensitive_fields(["account_number", "balance"])
logger.info({
    "account_number": 12345678,  # int → "***"
    "balance": 9999.99,          # float → "***"
    "holder": "Bob",             # not sensitive → unchanged
}, mask=True)
# → {"account_number": "***", "balance": "***", "holder": "Bob"}
Inline strings matching key: value or key=value patterns are redacted:
add_sensitive_fields(["sort_code"])
logger.info("Payment: sort_code: 12-34-56 amount: 100", mask=True)
# → "Payment: sort_code: *** amount: 100"
Nested dicts and lists of dicts are recursively scanned:
add_sensitive_fields(["secret_pin"])
logger.info({
    "user": "carol",
    "payment": {"secret_pin": 9876, "card": "Visa"},
}, mask=True)
# → {"user": "carol", "payment": {"secret_pin": "***", "card": "Visa"}}
Rate Limiting¶
Prevent log flooding by suppressing duplicate messages within a time interval.
from pylogshield import get_logger
import time
# Initialize logger with rate limiting (2 seconds between identical messages)
logger = get_logger("my_app", rate_limit_seconds=2.0)
# First call logs immediately
logger.info("Connection attempt") # Logged
# Same message within 2 seconds is suppressed
logger.info("Connection attempt") # Suppressed
logger.info("Connection attempt") # Suppressed
time.sleep(2.1)
# After interval passes, message is logged again
logger.info("Connection attempt") # Logged
Checking Rate Limiter Statistics¶
logger = get_logger("my_app", rate_limit_seconds=1.0)
# After some logging...
if logger.limiter:
    print(f"Suppressed messages: {logger.limiter.suppressed_count}")
    print(f"Tracked messages: {logger.limiter.tracked_messages}")
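A duplicate-suppressing limiter of this kind fits in a few lines of stdlib Python. This is a sketch of the mechanism, not the library's RateLimiter class; only the `suppressed_count` attribute name is taken from the docs above:

```python
import time

class SimpleRateLimiter:
    def __init__(self, interval: float):
        self.interval = interval
        self.suppressed_count = 0
        self._last_seen: dict[str, float] = {}  # message -> last emit time

    def allow(self, message: str) -> bool:
        """Return True if the message should be logged now."""
        now = time.monotonic()
        last = self._last_seen.get(message)
        if last is not None and now - last < self.interval:
            self.suppressed_count += 1  # identical message within the window
            return False
        self._last_seen[message] = now
        return True
```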
Log Filtering¶
Filter logs by keywords to include or exclude specific messages.
from pylogshield import get_logger, KeywordFilter
# Include only logs containing specific keywords
logger = get_logger("my_app", log_filter=["error", "critical", "failed"])
logger.info("Application started") # Not logged (no matching keyword)
logger.info("Connection failed") # Logged (contains "failed")
logger.error("An error occurred!") # Logged (contains "error")
# Or create a filter manually for more control
exclude_filter = KeywordFilter(
    keywords=["debug", "trace"],
    include=False,  # Exclude mode
    case_insensitive=True
)
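Keyword filtering of this kind is typically implemented as a standard logging.Filter. The sketch below is a stdlib-only equivalent with the same parameter names, not PyLogShield's actual class:

```python
import logging

class SimpleKeywordFilter(logging.Filter):
    def __init__(self, keywords, include=True, case_insensitive=True):
        super().__init__()
        self.include = include
        self.case_insensitive = case_insensitive
        self.keywords = [k.lower() for k in keywords] if case_insensitive else list(keywords)

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        if self.case_insensitive:
            msg = msg.lower()
        matched = any(k in msg for k in self.keywords)
        # include mode keeps matches; exclude mode drops them
        return matched if self.include else not matched
```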
Custom Log Levels¶
Register custom log levels at runtime. Each level gets its own method on the logger class, and the method signature is identical to the built-in ones — including mask=True support.
Level value convention¶
| Built-in level | Value | Good slots for custom levels |
|---|---|---|
| CRITICAL | 50 | — |
| ERROR | 40 | — |
| — | 35 | SECURITY |
| WARNING | 30 | — |
| — | 26 | AUDIT |
| — | 25 | NOTICE |
| INFO | 20 | — |
| — | 15 | VERBOSE |
| DEBUG | 10 | — |
| — | 1–9 | TRACE |
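These slots map directly onto the standard logging module's numeric scale, which you can verify with the stdlib alone:

```python
import logging

# Register names for custom numeric levels on the stdlib scale
logging.addLevelName(35, "SECURITY")  # between WARNING (30) and ERROR (40)
logging.addLevelName(26, "AUDIT")     # between INFO (20) and WARNING (30)
```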
Basic usage¶
from pylogshield import get_logger, add_log_level, PyLogShield
# Register once at application startup — before any loggers are created
add_log_level("SECURITY", 35, logger_cls=PyLogShield)
add_log_level("AUDIT", 26, logger_cls=PyLogShield)
logger = get_logger("secure_app", log_level="DEBUG")
logger.audit("User alice logged in session_id=abc123")
# 2026-05-09 00:00:01.001 AUDIT secure_app ... User alice logged in ...
logger.security("Privilege escalation blocked user=mallory")
# 2026-05-09 00:00:01.002 SECURITY secure_app ... Privilege escalation blocked ...
With mask=True¶
Custom levels fully support mask=True — both for string patterns and dict payloads:
add_log_level("SECAUDIT", 35, logger_cls=PyLogShield)
logger = get_logger("app")
# Mask a sensitive string
logger.secaudit("api_key: topsecret action=login", mask=True)
# → "api_key: *** action=login"
# Mask a dict payload
logger.secaudit(
    {"user": "alice", "token": "abc123", "action": "login"},
    mask=True,
)
# → {"user": "alice", "token": "***", "action": "login"}
Minimum level filtering¶
Custom levels obey the logger's minimum level setting exactly like built-in levels:
add_log_level("TRACE", 5, logger_cls=PyLogShield)
logger = get_logger("app", log_level="INFO") # INFO = 20
logger.trace("Detailed trace — suppressed") # 5 < 20, not emitted
logger.set_log_level("DEBUG") # lower to DEBUG = 10
logger.trace("Still suppressed") # 5 < 10, still not emitted
logger.set_log_level(1)
logger.trace("Now emitted") # 5 ≥ 1, emitted
Using with from_config¶
from pylogshield import PyLogShield, add_log_level
add_log_level("NOTICE", 25, logger_cls=PyLogShield)
logger = PyLogShield.from_config("app", {
    "level": "DEBUG",
    "enable_json": True,
})
logger.notice("Application config loaded")
Register before creating loggers
Call add_log_level before creating any PyLogShield instance that needs the custom method. The method is added to the class, so existing instances gain it immediately — but loggers created before registration with a level above the custom value will suppress those messages until set_log_level is called.
Dynamic Log Level Adjustment¶
Change log levels at runtime without restarting the application.
from pylogshield import get_logger
logger = get_logger("my_app", log_level="INFO")
logger.debug("This won't be logged") # Below INFO level
logger.info("This will be logged")
# Change level at runtime
logger.set_log_level("DEBUG")
logger.debug("Now this will be logged") # DEBUG is now enabled
Performance Metrics¶
Track logging activity with built-in metrics.
from pylogshield import get_logger
logger = get_logger("my_app", enable_metrics=True)
# Log some messages
logger.info("Processing started")
logger.info("Processing item 1")
logger.error("Item 2 failed")
logger.info("Processing complete")
# Get metrics
metrics = logger.get_metrics()
print(metrics)
# {
# 'INFO': 1.5, # logs per second
# 'ERROR': 0.5,
# 'count': 4, # total count
# 'elapsed': 2.0 # seconds since start
# }
# Get counts only
if logger.metrics_handler:
print(logger.metrics_handler.counts())
# {'INFO': 3, 'ERROR': 1} ← plain dict, not Counter
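A counting handler like this is simple to sketch with the stdlib. The code below illustrates the idea behind the metrics handler (counts per level name plus rates over elapsed time); it is not PyLogShield's implementation:

```python
import logging
import time

class CountingHandler(logging.Handler):
    def __init__(self):
        super().__init__()
        self.start = time.monotonic()
        self._counts: dict[str, int] = {}

    def emit(self, record: logging.LogRecord) -> None:
        # Tally by level name; never writes the record anywhere
        self._counts[record.levelname] = self._counts.get(record.levelname, 0) + 1

    def counts(self) -> dict:
        return dict(self._counts)  # plain dict snapshot

    def rates(self) -> dict:
        elapsed = max(time.monotonic() - self.start, 1e-9)
        return {level: n / elapsed for level, n in self._counts.items()}

log = logging.getLogger("metrics_demo")
log.setLevel(logging.DEBUG)
log.propagate = False
handler = CountingHandler()
log.addHandler(handler)
log.info("Processing started")
log.info("Processing complete")
log.error("Item 2 failed")
```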
JSON Log Formatting¶
Output structured JSON logs for integration with log aggregation tools (ELK, Splunk, etc.).
from pylogshield import get_logger
logger = get_logger("my_app", enable_json=True)
logger.info("User logged in")
Output:
{
  "timestamp": "2024-01-15T10:30:00.123+00:00",
  "host": "server-01",
  "logger": "my_app",
  "level": "INFO",
  "message": "User logged in"
}
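A formatter producing output shaped like the example above can be sketched with the stdlib. The field names follow the sample output; the library's own formatter may include more fields:

```python
import json
import logging
import socket
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": datetime.fromtimestamp(record.created, tz=timezone.utc).isoformat(),
            "host": socket.gethostname(),
            "logger": record.name,
            "level": record.levelname,
            "message": record.getMessage(),
        })
```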
Log Rotation¶
Automatically rotate log files when they reach a certain size.
from pylogshield import get_logger
logger = get_logger(
    "my_app",
    rotate_file=True,
    rotate_max_bytes=5_000_000,  # 5 MB
    rotate_backup_count=3        # Keep 3 backup files
)
logger.info("This log will rotate when the file exceeds 5 MB")
This creates files like:
- my_app.log (current)
- my_app.log.1 (previous)
- my_app.log.2
- my_app.log.3
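This is the naming scheme of the standard library's RotatingFileHandler, and the same behaviour can be reproduced with the stdlib directly (tiny maxBytes here just to force a rollover):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "my_app.log")

# Rotate at 200 bytes, keep up to 3 backups: my_app.log.1 .. my_app.log.3
handler = RotatingFileHandler(log_path, maxBytes=200, backupCount=3)
log = logging.getLogger("rotation_demo")
log.setLevel(logging.INFO)
log.propagate = False
log.addHandler(handler)

for i in range(20):
    log.info("message number %d with some padding", i)

handler.close()
```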
Asynchronous Logging¶
Offload logging to a background thread for improved performance in high-throughput applications.
flowchart LR
A(["logger.info('msg')\nnon-blocking"]) --> B["QueueHandler"]
B --> C[/"Bounded Queue\nqueue_maxsize=N"/]
C --> D["QueueListener\nbackground thread"]
D --> E["File Handler"]
D --> F["Console Handler"]
D --> G["JSON Formatter"]
D --> H["Metrics Handler"]
from pylogshield import get_logger
logger = get_logger("my_app", use_queue=True)
# Logs are queued and written in background
for i in range(10000):
    logger.info(f"Processing item {i}")
# Important: Shutdown cleanly to flush remaining logs
logger.shutdown()
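The diagram above matches the standard library's QueueHandler/QueueListener pattern, which you can reproduce directly (a collector handler stands in for the file/console handlers):

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

log_queue: queue.Queue = queue.Queue()

# Producer side: the logger only enqueues, so logging calls return quickly
log = logging.getLogger("async_demo")
log.setLevel(logging.INFO)
log.propagate = False
log.addHandler(QueueHandler(log_queue))

# Consumer side: a background thread drains the queue into real handlers
records = []
class Collector(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

listener = QueueListener(log_queue, Collector())
listener.start()
log.info("Processing item 1")
log.info("Processing item 2")
listener.stop()  # flushes remaining records, analogous to logger.shutdown()
```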
Bounded Queue¶
By default the async queue is unbounded. In high-throughput scenarios you can cap it to avoid unbounded memory growth:
logger = get_logger("app", use_queue=True, queue_maxsize=10_000)
# Messages are dropped (not blocked) when the queue is full
logger.shutdown() # Always flush on exit
Context Scrubbing¶
Automatically remove cloud provider credentials from log records.
from pylogshield import get_logger, ContextScrubber
# Enabled by default: removes attributes prefixed with AWS_, AZURE_, GCP_, GOOGLE_, or TOKEN
logger = get_logger("my_app", enable_context_scrubber=True)
# Or disable it
logger = get_logger("my_app", enable_context_scrubber=False)
# Custom prefixes
from pylogshield import ContextScrubber
scrubber = ContextScrubber(forbidden_prefixes=("SECRET_", "PRIVATE_", "INTERNAL_"))
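Scrubbing of this kind amounts to a record filter that strips matching attributes before the record reaches a handler. A stdlib-only sketch (prefixes taken from the docs above; not the library's actual class):

```python
import logging

class SimpleScrubber(logging.Filter):
    def __init__(self, forbidden_prefixes=("AWS_", "AZURE_", "GCP_", "GOOGLE_", "TOKEN")):
        super().__init__()
        self.forbidden_prefixes = forbidden_prefixes

    def filter(self, record: logging.LogRecord) -> bool:
        for attr in list(vars(record)):
            if attr.upper().startswith(self.forbidden_prefixes):
                delattr(record, attr)  # drop the credential attribute
        return True  # never drops the record itself, only its credentials
```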
Context Propagation¶
Inject structured fields into every log record within a block using log_context (sync) or async_log_context (async). Requires enable_context=True on the logger.
from pylogshield import get_logger
from pylogshield.context import log_context, async_log_context
logger = get_logger("app", enable_context=True, enable_json=True)
# Sync context
with log_context(request_id="abc-123", user_id=42):
    logger.info("Processing order")
    # JSON includes request_id and user_id automatically

# Nested contexts — inner fields merge on top of outer
with log_context(service="payments"):
    with log_context(transaction_id="tx-7"):
        logger.info("Charge")  # has both service and transaction_id

# Async context (asyncio-safe, no cross-task bleed)
async def handle(req_id: str):
    async with async_log_context(request_id=req_id):
        logger.info("Handling request")
See the Context Propagation reference for full details.
Decorators¶
Wrap any function with automatic exception logging using log_exceptions, or enable full entry/exit tracing with trace.
from pylogshield import get_logger, log_exceptions, trace
logger = get_logger("my_app", log_level="DEBUG")
# Log exceptions only — re-raises after logging
@log_exceptions(logger)
def fetch_data(url: str) -> dict:
    response = requests.get(url)
    response.raise_for_status()
    return response.json()

# Full tracing: logs call args, return value, and exceptions
@trace(logger)
async def process_item(item_id: int) -> dict:
    return await db.get(item_id)

# DEBUG Calling process_item(args=(42,), kwargs={}) from app.py:15
# DEBUG process_item returned: {'id': 42, 'status': 'ok'}

# Mask sensitive arguments and return values
@log_exceptions(logger, log_calls=True, log_returns=True, mask=True)
def authenticate(username: str, password: str) -> str:
    return auth_service.get_token(username, password)

# DEBUG Calling authenticate(args=('alice',), kwargs={'password': '***'}) from ...
# DEBUG authenticate returned: token: ***

# Suppress exceptions (useful for non-critical side effects)
@log_exceptions(logger, raise_exception=False)
def notify_webhook(url: str, payload: dict) -> bool:
    requests.post(url, json=payload).raise_for_status()
    return True

# Returns None instead of raising if the request fails; the exception is still logged at ERROR
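The core of an exception-logging decorator is a try/except wrapper. A stdlib sketch of the pattern (a hypothetical `simple_log_exceptions`; the real log_exceptions has more options):

```python
import functools
import logging

def simple_log_exceptions(logger: logging.Logger, raise_exception: bool = True):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                # logger.exception logs at ERROR and attaches the traceback
                logger.exception("Exception in %s", func.__name__)
                if raise_exception:
                    raise       # re-raise mode (the default)
                return None     # suppress mode
        return wrapper
    return decorator
```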
See the Decorators reference for all parameters and examples.
FastAPI / Starlette Middleware¶
Automatically inject request context into all logs for a FastAPI app. Requires pip install "pylogshield[fastapi]".
from fastapi import FastAPI
from pylogshield import get_logger
from pylogshield.middleware import PyLogShieldMiddleware
app = FastAPI()
logger = get_logger("api", enable_context=True, enable_json=True)
app.add_middleware(PyLogShieldMiddleware, logger=logger)
@app.get("/items")
async def list_items():
    logger.info("Listing items")
    # Logs automatically include request_id, http_method, http_path, client_ip
    return []
See the Middleware reference for full details.
Rich Console Output¶
Enable colorized terminal output for better readability during development.
from pylogshield import get_logger
logger = get_logger("my_app", use_rich=True)
logger.debug("Debug message") # Dim
logger.info("Info message") # Normal
logger.warning("Warning message") # Yellow
logger.error("Error message") # Red
logger.critical("Critical!") # Bold Red
Interactive Log Viewer (CLI / Rich)¶
The LogViewer class renders a colour-coded Rich table. Useful for scripted inspection or integration into admin scripts.
from pylogshield import LogViewer
from pathlib import Path
viewer = LogViewer(Path("~/.logs/my_app.log").expanduser())
# Display last 100 logs
viewer.display_logs(limit=100)
# Filter by level (shows WARNING, ERROR, CRITICAL)
viewer.display_logs(limit=50, level="WARNING")
# Combine level and keyword filters
viewer.display_logs(limit=50, level="ERROR", keyword="timeout")
# Live follow (like tail -f) — press Ctrl+C to stop
viewer.follow_logs(level="INFO", interval=0.5)
Interactive TUI Viewer (full-screen)¶
The TUI viewer is a full-screen terminal application built on Textual; it requires the optional Textual dependency to be installed.
Launch from Python¶
from pathlib import Path
from pylogshield.tui.app import LogViewerApp
# Open log file in the TUI
LogViewerApp(log_path=Path("~/.logs/myapp.log").expanduser()).run()
# Start with a pre-applied level filter and live-follow enabled
LogViewerApp(
    log_path=Path("app.log"),
    initial_level="ERROR",
    start_following=True,
).run()
Keyboard shortcuts¶
| Key | Action |
|---|---|
| / | Focus the search bar |
| Ctrl+R | Toggle regex search mode |
| Ctrl+F | Open filter panel (level, time range, logger name) |
| F | Toggle live-follow (tail -f) mode |
| E | Open export modal (CSV / JSON / text / HTML) |
| Enter | Expand detail view for the selected row |
| End | Jump to bottom and resume live-follow |
| ? | Show keyboard reference |
| Q / Ctrl+C | Quit |
Programmatic log reading¶
LogReader and Exporter work without the TUI app — useful in scripts:
from pathlib import Path
from pylogshield.tui.reader import LogReader
from pylogshield.tui.exporter import Exporter
reader = LogReader(Path("app.log"))
# Tail last 500 lines and filter in Python
rows = reader.tail(limit=500)
errors = [r for r in rows if r.level in {"ERROR", "CRITICAL"}]
print(f"Found {len(errors)} errors in last 500 lines")
for r in errors[:5]:
    print(f"  [{r.timestamp}] {r.message}")
# Export the errors to CSV and HTML
Exporter(errors, Path("errors.csv")).to_csv()
Exporter(errors, Path("errors.html")).to_html()
Configuration from Dictionary¶
Create loggers from configuration dictionaries (useful for loading from YAML/JSON files).
from pylogshield import PyLogShield
config = {
    "level": "DEBUG",
    "enable_json": True,
    "rotate_file": True,
    "rotate_max_bytes": 10_000_000,
    "rate_limit_seconds": 1.0,
    "log_filter": ["error", "warning"],
    "enable_metrics": True,
}
logger = PyLogShield.from_config("my_app", config)
Complete Example¶
from pylogshield import get_logger, add_sensitive_fields
from pylogshield.context import log_context
# Configure sensitive fields
add_sensitive_fields(["ssn", "credit_card"])
# Create a production-ready logger
logger = get_logger(
    "production_app",
    log_level="INFO",
    enable_json=True,
    rotate_file=True,
    rotate_max_bytes=10_000_000,
    rotate_backup_count=5,
    rate_limit_seconds=0.5,
    use_queue=True,
    queue_maxsize=50_000,
    enable_metrics=True,
    enable_context_scrubber=True,
    enable_context=True,
)
logger.info("Application starting")
with log_context(session_id="s-abc123"):
    logger.info({"action": "login", "user": "john", "token": "abc123"}, mask=True)

try:
    process_session()  # application work goes here (hypothetical placeholder)
except Exception:
    # Note: mask=True masks exception .args, but traceback locals are not masked
    logger.exception("Unexpected error occurred")
metrics = logger.get_metrics()
logger.info(f"Session stats: {metrics['count']} logs in {metrics['elapsed']:.1f}s")
logger.shutdown()
For more end-to-end examples — FastAPI service, data pipelines, asyncio workers, custom log levels, and testing patterns — see the Recipes page.