Posts


xbid.ai Lab: How We Build Better Inference

You have probably noticed that even carefully crafted prompts rarely elicit deep responses from AI systems right away. Opening exchanges tend to be shallow.

It is usually later in the interaction, though not always, that something shifts: at some point the system clicks and starts reasoning inside our frame.

For xbid.ai, that was a problem: trading decisions need inference that is grounded from the first call, not guessing or pseudo-insight.

The Mechanism

The key insight is that the point of convergence is not prompted; it is surfaced through dialogue.

January 6, 2026

AI Agents Interacting with Onchain Game Markets

AI agents can now trade in-game items onchain. Last week we released the cyberbrawl.io auction house—a fully decentralized marketplace where players bid, offer, and execute orders for tokenized cards, heroes, and badges directly on Stellar.

The system uses path payments to resolve prices across XLM, USDC, CREDIT, and KALE🥬 markets. Orderbooks are native Stellar (demo).
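To give a feel for what path payments do here, a minimal sketch of price resolution across a multi-hop asset path. The rates and path below are hypothetical; on Stellar the actual conversion is performed by the network's path payment operation against live orderbooks, not by client-side multiplication:

```python
# Illustrative price resolution along a payment path (e.g. CREDIT -> XLM -> USDC).
# RATES is a toy table of hypothetical conversion rates, not live market data.
RATES = {("CREDIT", "XLM"): 0.5, ("XLM", "USDC"): 0.1}

def resolve_price(amount: float, path: list[str],
                  rates: dict[tuple[str, str], float]) -> float:
    """Chain conversion rates hop by hop along the asset path."""
    for src, dst in zip(path, path[1:]):
        amount *= rates[(src, dst)]
    return amount

# 100 CREDIT -> 50 XLM -> 5 USDC under the toy rates above
print(resolve_price(100.0, ["CREDIT", "XLM", "USDC"], RATES))
```

On Stellar itself, the equivalent operation quotes and settles all hops atomically, which is what lets the auction house accept bids in any of the supported assets.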

October 17, 2025

xbid.ai Wins 1st Place at Stellar Hacks + MCP Server Released

xbid.ai won 1st place 🏆 at the Stellar Hacks: KALE x Reflector Hackathon on DoraHacks! 🚀 Thanks to everyone backing this early.

To celebrate, I am releasing the MCP server for xbid.ai—clients like Claude, VS Code, and Cursor can now connect to xbid.ai and use post-distillation pipeline data for inference.

MCP Server Release

The Model Context Protocol (MCP), often referred to as the USB-C port for AI, is infrastructure that standardizes how AI applications connect to external tools and data sources. By adopting this standard, xbid.ai can expose its pipeline and strategy engine to clients like Claude.
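Under MCP, a client discovers a server's capabilities over JSON-RPC. As an illustration only (the tool name and schema below are hypothetical, not the actual xbid.ai server API), a `tools/list` response has roughly this shape:

```python
import json

# Hypothetical MCP "tools/list" JSON-RPC response; the tool name and
# schema are illustrative, not the real xbid.ai server surface.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_pipeline_signals",  # hypothetical tool name
            "description": "Post-distillation pipeline data for inference",
            "inputSchema": {"type": "object", "properties": {}},
        }]
    },
}
print(json.dumps(response, indent=2))
```

A client such as Claude lists these tools at connection time and can then invoke them via `tools/call`, which is how pipeline data reaches the model's context.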

September 15, 2025

Walkthrough Series: Data, Strategies, and the AI Signal Layer

xbid.ai is open source. To help you navigate the stack, I am starting a walkthrough video series, each video covering a specific part of the stack, such as the data pipeline or the strategy layer.

These videos are primarily aimed at developers. Extending strategies and running your own agent requires some technical background, and the best place to start is by forking the repo at github.com/xbid-ai/xbid-ai. If you hit specific technical questions, feel free to reach out.

September 10, 2025

Fast, Native C++ BPE Token Counter for OpenAI + SentencePiece

This open-source C++ library is part of the xbid.ai stack. I needed a low-overhead, fast Byte Pair Encoding (BPE) counter accurate enough for billing estimates and strategy comparisons. By skipping OpenAI template overhead we trade exact parity for speed, with only ~1.5% deviation. The tool also supports Google's SentencePiece binary models via a thin wrapper (100% parity).

  • C++ BPE counter compatible with .tiktoken (OpenAI) encodings
  • Quasi-parity (no templates, <1.5% error)
  • 60% faster than OpenAI’s official tiktoken (JS/WASM)
  • No dependencies (standard C++20 toolchain)

Our initial code used a naive byte-length heuristic: very fast, but too inaccurate. For xbid.ai I wanted something more reliable given the nature of our inference inputs—trading signals are unbounded, and strategy outputs are compared on cost before routing across the multi-LLM model layer.
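For intuition, here is the core BPE loop such a counter implements, as a minimal Python sketch: repeatedly apply the highest-priority (lowest-rank) merge until no mergeable pair remains. The merge table is a toy example, not a real .tiktoken vocabulary, and this is not the C++ implementation itself:

```python
# Toy merge table: pair -> rank (lower rank = higher merge priority).
TOY_MERGES = {("l", "o"): 0, ("lo", "w"): 1, ("e", "r"): 2}

def bpe_count(word: str, merges: dict[tuple[str, str], int]) -> int:
    """Count BPE tokens by greedily applying the best-ranked merge."""
    tokens = list(word)  # start from single characters (bytes, in practice)
    while len(tokens) > 1:
        # find the adjacent pair with the lowest merge rank
        candidates = [(merges[p], i)
                      for i, p in enumerate(zip(tokens, tokens[1:]))
                      if p in merges]
        if not candidates:
            break  # no merge applies: tokenization is final
        _, i = min(candidates)
        tokens[i:i + 2] = [tokens[i] + tokens[i + 1]]  # merge in place
    return len(tokens)

# "lower" -> ["lo","w","e","r"] -> ["low","e","r"] -> ["low","er"] => 2 tokens
print(bpe_count("lower", TOY_MERGES))
```

The real counter does the same thing over byte-level vocabularies with tens of thousands of ranked merges, which is why a tight native loop pays off.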

September 8, 2025

xbid.ai — intelligence. staked. onchain.

Meet xbid.ai — a multi-LLM AI agent built around a simple thesis:

Inference does not come from clever prompts alone. It comes from contexts encoding a posteriori knowledge and implicit constraints—what we call episteme.

Rather than relying on isolated prompt engineering, xbid.ai conditions inference on structures that embed reasoning constraints before the model generates output. This enables inference under explicit constraints rather than probabilistic guesswork.

Read the full methodology: How We Build Better Inference