What’s Happening

A snapshot of the autonomous trading stack we are building: multi-exchange execution, LLM-based decision making, and infrastructure ready for the next generation of models.

Live Trading Orchestrator

At the core sits a deterministic orchestration layer: market data ingestion, feature engineering, LLM payload routing, confirmation logic, and exchange execution. Everything is optimized for hourly bars but supports shorter or longer intervals when required.
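
As a rough illustration, one such cycle can be thought of as a single deterministic pass; the component names below (fetch_ohlcv, build, dispatch, confirm, place_orders) are hypothetical placeholders, not the production code.

    # Illustrative sketch of one orchestration cycle over hourly bars.
    # The injected objects stand in for hypothetical components of the stack.
    def run_cycle(exchange, feature_pipeline, router, executor, interval: str = "1h") -> None:
        bars = exchange.fetch_ohlcv(interval=interval)   # market data ingestion
        payload = feature_pipeline.build(bars)           # feature engineering
        decision = router.dispatch(payload)              # LLM payload routing
        if executor.confirm(decision, payload):          # confirmation logic
            executor.place_orders(decision)              # exchange execution

Because the pass is deterministic, identical bars and payloads always take the same routing and confirmation path, which keeps live behaviour reproducible.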

LLM Line-Up & Auto Switch

The router dynamically balances cost and quality by switching between “Lite” and “Pro” models. Each strategy can override the defaults, but the global line-up keeps operational costs low while retaining the ability to escalate when the expected edge is high.
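
To make the switch concrete, here is a minimal sketch of such a cost-aware selector; the escalation score, threshold, and function names are illustrative assumptions, not the actual router.

    # Hypothetical Lite/Pro selector: stay on the cheap tier by default and
    # escalate only when the payload needs wider context or the edge looks strong.
    from dataclasses import dataclass

    @dataclass
    class ModelPair:
        lite: str   # cost-efficient default model
        pro: str    # premium escalation model

    def select_model(pair: ModelPair, edge_score: float, needs_wide_context: bool,
                     escalation_threshold: float = 0.7) -> str:
        if needs_wide_context or edge_score >= escalation_threshold:
            return pair.pro
        return pair.lite

    # Example: a strategy override simply passes its own, stricter threshold.
    gemini = ModelPair(lite="gemini-2.5-flash", pro="gemini-2.5-pro")
    choice = select_model(gemini, edge_score=0.82, needs_wide_context=False)  # -> Pro tier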

Grok

Fast model for CR payloads, upgraded to Grok-4 Latest when confirmations demand wider context.

Gemini

Gemini 2.5 Flash for day-to-day trading, auto-escalating to Gemini 2.5 Pro for complex candles.

ChatGPT (OpenAI)

o3-mini for primary calls, with seamless switches to o3 when we require deeper reasoning.

Claude

Sonnet as the workhorse, Opus for escalations; excellent for structured payload confirmation.

DeepSeek

Cost-efficient reasoning, integrated for diversified views and redundancy in volatile phases.

Auto Switch policies ensure the system stays cost-aware without losing access to premium reasoning capacity.
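
For illustration, the line-up above could be captured in one declarative table; the identifiers below are placeholders echoing the tiers named in this section, not the exact API model strings or the production configuration.

    # Hypothetical global line-up: each provider maps to a Lite default and a Pro
    # escalation tier; strategies may override individual entries.
    LINE_UP = {
        "grok":     {"lite": "grok-fast",        "pro": "grok-4-latest"},
        "gemini":   {"lite": "gemini-2.5-flash", "pro": "gemini-2.5-pro"},
        "openai":   {"lite": "o3-mini",          "pro": "o3"},
        "claude":   {"lite": "claude-sonnet",    "pro": "claude-opus"},
        "deepseek": {"lite": "deepseek-chat",    "pro": "deepseek-reasoner"},
    }

    def resolve(provider: str, overrides: dict | None = None) -> dict:
        """Merge a strategy-level override onto the global default for one provider."""
        models = dict(LINE_UP[provider])
        models.update(overrides or {})
        return models

    # A strategy that always wants premium reasoning for its confirmations:
    confirmation_models = resolve("openai", {"lite": "o3"})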

Launchpad for Future LLMs

The entire architecture is designed as a plug-and-play launchpad for upcoming frontier models such as Gemini 3.0 or even AGI-level systems. Every improvement in model quality should translate directly into better trade selection without rewriting the stack.
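
One way to keep that promise is to hide every provider behind a small adapter contract; the interface below is a hypothetical sketch of the idea, not the actual integration layer.

    # Hypothetical adapter contract: a new frontier model becomes routable as soon
    # as it satisfies this interface, with no changes to the orchestration code.
    from typing import Protocol

    class LLMAdapter(Protocol):
        name: str

        def complete(self, payload: str, max_tokens: int = 1024) -> str:
            """Send a trading payload and return the model's raw decision text."""
            ...

    def register(registry: dict[str, LLMAdapter], adapter: LLMAdapter) -> None:
        """Registering an adapter is a one-line change; routing picks it up from here."""
        registry[adapter.name] = adapter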