Posts


Walkthrough Series: Data, Strategies, and the AI Signal Layer

xbid.ai is open source. To help you navigate the stack, I am starting a series of walkthrough videos, each covering a specific topic such as the data pipeline and strategies.

These videos are primarily aimed at developers. Extending strategies and running your own agent requires some technical background, and the best place to start is by forking the repo at github.com/xbid-ai/xbid-ai. If you run into specific technical questions, feel free to reach out.

September 10, 2025

Fast, Native C++ BPE Token Counter for OpenAI + SentencePiece

This C++ library is open source and part of the xbid.ai stack. I needed a low-overhead, fast Byte Pair Encoding (BPE) token counter accurate enough for billing estimates and strategy comparisons. By skipping OpenAI's chat template overhead, it trades exact parity for speed, with only ~1.5% deviation. The tool also supports Google's SentencePiece binary models through a thin wrapper (100% parity).

  • C++ BPE counter compatible with .tiktoken (OpenAI) encodings
  • Quasi-parity (no templates, <1.5% error)
  • 60% faster than OpenAI's official tiktoken (JS/WASM)
  • No dependencies (standard C++20 toolchain)
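To give a feel for what a tiktoken-style counter does, here is a minimal sketch of the core BPE loop: start from single bytes and repeatedly merge the adjacent pair with the lowest (highest-priority) rank; the token count is the number of segments left. This is illustrative only, not the library's actual implementation, and the rank table below is a hand-made toy stand-in for a real .tiktoken file.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Pair-rank table; in practice this is loaded from a .tiktoken file.
using Ranks = std::map<std::pair<std::string, std::string>, int>;

size_t count_tokens(const std::string& text, const Ranks& ranks) {
    // Start from single bytes.
    std::vector<std::string> parts;
    for (char c : text) parts.emplace_back(1, c);

    // Greedily merge the adjacent pair with the lowest rank.
    while (parts.size() > 1) {
        int best_rank = -1;
        size_t best_i = 0;
        for (size_t i = 0; i + 1 < parts.size(); ++i) {
            auto it = ranks.find({parts[i], parts[i + 1]});
            if (it != ranks.end() && (best_rank < 0 || it->second < best_rank)) {
                best_rank = it->second;
                best_i = i;
            }
        }
        if (best_rank < 0) break;  // no mergeable pair left
        parts[best_i] += parts[best_i + 1];
        parts.erase(parts.begin() + best_i + 1);
    }
    return parts.size();  // number of BPE tokens
}
```

A production counter avoids the quadratic rescan with a priority queue and operates on byte spans rather than strings, but the merge-by-rank logic is the same.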

Our initial code used a naive byte-length heuristic: very fast but too inaccurate. For xbid-ai, I wanted something more reliable given the nature of our prompts: trading signals are unbounded, and strategy outputs are compared for cost before routing across the multi-LLM model layer.
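The byte-length heuristic looked roughly like the sketch below (illustrative, assuming the common rule of thumb of ~4 bytes per token for English text); it is a single division, which is why it was fast, and also why it drifts on code, non-Latin scripts, and unusual prompts.

```cpp
#include <cassert>
#include <string>

// Rough token estimate: assume ~4 bytes per token on average.
// Fast, but inaccurate on code, non-Latin scripts, and unusual prompts.
size_t estimate_tokens(const std::string& text) {
    constexpr size_t kBytesPerToken = 4;
    return (text.size() + kBytesPerToken - 1) / kBytesPerToken;  // round up
}
```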

September 8, 2025

xbid.ai — intelligence. staked. onchain.

Meet xbid.ai — a multi-LLM AI agent born on Stellar, built to evolve and roam anywhere.

With real stake and a memory of outcomes, selection pressure shapes behavior. xbid.ai is an open experiment built on that premise. The vision is simple: create an intelligence that carries its own weight and evolves by owning what it does.

Whether trading for carry, running NFT auctions, gaming competitively, or participating in metaverse and web3 activities, the same loop applies: reinforcement routes receipts back into behavior, holding the agent to outcomes.