
[Release] openclaw/openclaw v2026.5.16-beta.1: openclaw 2026.5.16-beta.1

By github-actions[bot]

OpenClaw v2026.5.16-beta.1 is a maintenance and stability release featuring improved skill caching, multi-language onboarding support, and numerous bug fixes across agents, providers, and integrations. Key improvements include corrected token counting for OpenAI APIs, more reliable Telegram delivery, stricter config validation, and fixes for critical issues affecting cron jobs, Discord, LINE, and local agent execution. The release strengthens platform robustness through better error handling, lifecycle management, and provider compatibility.

Key Points

  • Implement skill caching via `resolvedSkills` keyed by redacted effective config to reduce redundant snapshot rebuilds across warm gateway turns (see the caching sketch after this list)
  • Add multi-language support (English, Simplified Chinese, Traditional Chinese) to CLI setup wizard and channel setup flows
  • Fix OpenAI token counting by clamping `input_tokens - cached_tokens` at zero and reconstructing `totalTokens` from component parts for consistent usage reporting (sketch after this list)
  • Route Crabbox skill defaults through repo-brokered AWS config; make Blacksmith Testbox an explicit opt-in instead of default
  • Enhance Telegram reliability: drain queued deliveries after polling reconnect, mark unhealthy when inbound backlog stalls, and retain longer partial-stream previews
  • Scope user MCP servers to specific OpenClaw agent IDs via optional `mcp.servers.<name>.codex.agents` list with configurable approval modes (config example after this list)
  • Add opt-in `messages.groupChat.ambientTurns: "room_event"` for ambient chatter as quiet room context with visible message-tool output (config example after this list)
  • Validate and reject malformed plugin metadata, auth profiles, cron jobs, and provider responses during install/discovery to prevent silent failures
  • Increase gateway lifecycle hook wait budgets to 5 seconds (shutdown) and 10 seconds (pre-restart) for proper handler completion (bounded-wait sketch after this list)
  • Fix per-model `max_completion_tokens`/`max_tokens` parameter handling for OpenAI-compatible providers to preserve completion caps on high-token routes (sketch after this list)
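
The skill cache in the first bullet is, at its core, a memo table keyed by a hash of the secrets-redacted effective config. Here is a minimal TypeScript sketch of that idea; `loadSkills`, `redactSecrets`, and the type shapes are invented stand-ins, since the notes only state that `resolvedSkills` is keyed by the redacted effective config:

```typescript
import { createHash } from "node:crypto";

// Hypothetical shapes; the real OpenClaw types are not part of the notes.
type EffectiveConfig = Record<string, unknown>;
type Skill = { name: string };

const resolvedSkills = new Map<string, Skill[]>();

// Key the cache by a hash of the config with secrets stripped, so two warm
// gateway turns with the same effective config reuse one snapshot instead of
// rebuilding it. (A real implementation would canonicalize key order before
// hashing; JSON.stringify is used here for brevity.)
function cacheKey(config: EffectiveConfig): string {
  const redacted = redactSecrets(config);
  return createHash("sha256").update(JSON.stringify(redacted)).digest("hex");
}

function getSkills(config: EffectiveConfig): Skill[] {
  const key = cacheKey(config);
  let skills = resolvedSkills.get(key);
  if (!skills) {
    skills = loadSkills(config); // expensive snapshot rebuild, done once per key
    resolvedSkills.set(key, skills);
  }
  return skills;
}

// Illustrative stand-ins only.
function redactSecrets(config: EffectiveConfig): EffectiveConfig {
  const redacted: EffectiveConfig = { ...config };
  delete redacted.apiKey; // drop secret-bearing fields before hashing
  return redacted;
}

function loadSkills(_config: EffectiveConfig): Skill[] {
  return [{ name: "example-skill" }];
}
```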
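
The token-counting fix comes down to two guards: never let the uncached input count go negative, and rebuild the total from its components rather than trusting a reported total. A sketch, with the field names taken from the bullet and the `Usage` shape assumed:

```typescript
interface Usage {
  input_tokens: number;
  cached_tokens: number;
  output_tokens: number;
}

// Some responses report more cached tokens than input tokens, which would make
// input_tokens - cached_tokens negative; clamp at zero, then derive totalTokens
// from the parts so usage reporting stays internally consistent.
function normalizeUsage(u: Usage): { uncachedInputTokens: number; totalTokens: number } {
  const uncachedInputTokens = Math.max(0, u.input_tokens - u.cached_tokens);
  const totalTokens = uncachedInputTokens + u.cached_tokens + u.output_tokens;
  return { uncachedInputTokens, totalTokens };
}
```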
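
Two of the items above are plain config switches. A hypothetical `openclaw.json` fragment showing both; the server name, agent IDs, and the `approvalMode` key are invented for illustration, while the key paths `mcp.servers.<name>.codex.agents` and `messages.groupChat.ambientTurns` come from the notes:

```json5
{
  mcp: {
    servers: {
      "my-search": {                    // hypothetical MCP server name
        codex: {
          agents: ["main", "research"], // scope this server to these agent IDs only
          approvalMode: "ask",          // assumed key; notes only say modes are configurable
        },
      },
    },
  },
  messages: {
    groupChat: {
      // Opt-in: ambient chatter arrives as quiet room context ("room_event")
      // while message-tool output remains visible.
      ambientTurns: "room_event",
    },
  },
}
```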
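
The lifecycle change is a larger bound on a familiar pattern: wait for hooks to finish, but never past a budget. A sketch of that pattern, where the hook runners are hypothetical and only the 5-second and 10-second budgets come from the notes:

```typescript
// Wait for a hook to complete, but never longer than its budget, so shutdown
// and pre-restart cannot hang on a stuck handler.
async function runWithBudget(hook: () => Promise<void>, budgetMs: number): Promise<void> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const budget = new Promise<void>((resolve) => {
    timer = setTimeout(resolve, budgetMs);
  });
  try {
    await Promise.race([hook(), budget]);
  } finally {
    clearTimeout(timer);
  }
}

// Illustrative stand-ins for the gateway's hook runners.
const runShutdownHooks = async () => { /* flush state, close connections */ };
const runPreRestartHooks = async () => { /* persist sessions */ };

async function shutdownGateway(): Promise<void> {
  await runWithBudget(runShutdownHooks, 5_000); // shutdown budget from the notes
}

async function preRestartGateway(): Promise<void> {
  await runWithBudget(runPreRestartHooks, 10_000); // pre-restart budget from the notes
}
```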
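
For the completion-cap fix, the underlying issue is that OpenAI-compatible providers disagree on the parameter name: newer OpenAI models accept only `max_completion_tokens`, while many compatible providers still expect `max_tokens`. A sketch of per-model selection; the model-matching heuristic below is an assumption, not OpenClaw's actual rule:

```typescript
interface CompletionParams {
  model: string;
  max_tokens?: number;
  max_completion_tokens?: number;
}

// Apply the per-model completion cap under whichever parameter name the model
// supports, so high-token routes keep their cap instead of silently dropping it.
function applyCompletionCap(params: CompletionParams, cap: number): CompletionParams {
  const usesNewName = /^(o\d|gpt-5)/.test(params.model); // assumed heuristic
  return usesNewName
    ? { ...params, max_completion_tokens: cap }
    : { ...params, max_tokens: cap };
}
```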
