Deterministic replay. Serverless multiplayer. Forkable worlds. Built on Automerge, with Gym/PettingZoo interfaces so any game doubles as a training environment.
Play Blackjack → · Play Cuttle →

| Feature | Description |
| --- | --- |
| Serverless multiplayer | CRDTs sync state across peers without a server |
| Perfect replay | Every action recorded with actor and timestamp |
| Forkable worlds | Snapshot state, explore alternatives, compare outcomes |
| AI training environments | Gym/PettingZoo interfaces out of the box |
| Offline-first | Peers diverge safely, converge mathematically |
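Deterministic replay and forkable worlds follow from one idea: state is a pure fold over an ordered action log, where every entry records its actor and timestamp. A minimal sketch of that pattern in plain JavaScript (illustrative only, not the hypertoken API):

```javascript
// Each recorded action carries its actor and timestamp, so any peer
// can rebuild the exact same state by replaying the log in order.
const log = [
  { actor: "alice", ts: 1, type: "draw", card: "AS" },
  { actor: "bob",   ts: 2, type: "draw", card: "9D" },
  { actor: "alice", ts: 3, type: "stand" },
];

// A pure reducer: same log in, same state out, every time.
function replay(log) {
  return log.reduce((state, action) => {
    switch (action.type) {
      case "draw":
        return { ...state, hands: { ...state.hands,
          [action.actor]: [...(state.hands[action.actor] ?? []), action.card] } };
      case "stand":
        return { ...state, standing: [...state.standing, action.actor] };
      default:
        return state;
    }
  }, { hands: {}, standing: [] });
}

const full = replay(log);
// Forking a world is just replaying a prefix of the log,
// then appending alternative actions to explore a different timeline.
const fork = replay(log.slice(0, 2)); // the world before alice stood
```

Because the reducer is pure, replaying the same log always yields the same state, and two forks can be compared simply by diffing their replayed results.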
- Blackjack, Poker, Cuttle, custom TCGs; tokens compose with provenance tracking.
- Game theory simulations, tournaments, agent competitions.
- P2P sync with no servers: games that outlive their creators.
- Any game is automatically a Gym environment, with multi-agent support via PettingZoo.
```sh
git clone https://github.com/flammafex/hypertoken
cd hypertoken
npm install && npm run build

# Single-player
npm run blackjack

# Multiplayer
npm run blackjack:server   # Terminal 1: Host
npm run blackjack:client   # Terminal 2: Join

# AI training bridge
npx hypertoken bridge --env blackjack --port 9999
```
```js
const enchantedSword = engine.dispatch("token:merge", {
  tokens: [sword, fireEnchantment]
});
// enchantedSword._mergedFrom = [sword.id, fireEnchantment.id]
```
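Under the hood, provenance-tracked composition can be pictured as a merge over plain token objects that remembers its ancestors. A hypothetical sketch of the pattern (`mergeTokens` and the token shapes are illustrative, not hypertoken's implementation):

```javascript
// Hypothetical: compose tokens while recording provenance.
function mergeTokens(...tokens) {
  const ids = tokens.map(t => t.id);
  return {
    id: ids.join("+"), // derived id for the composite (illustrative)
    // Later tokens override earlier properties on conflict.
    ...Object.assign({}, ...tokens.map(({ id, ...props }) => props)),
    // Provenance: the merged token remembers every ancestor.
    _mergedFrom: ids,
  };
}

const sword = { id: "sword-1", name: "Sword", damage: 5 };
const fire  = { id: "ench-1", element: "fire", damage: 7 };
const enchanted = mergeTokens(sword, fire);
// enchanted._mergedFrom is ["sword-1", "ench-1"]; damage is 7
```

Keeping `_mergedFrom` on every composite means any token's full crafting history can be walked back to its primitive parts.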
```js
const host = new Engine();
host.connect("ws://relay.local:8080");

const client = new Engine();
client.connect("ws://relay.local:8080");

// Both make changes → CRDTs merge → identical final state
```
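"Converge mathematically" means CRDT merges are commutative, associative, and idempotent, so peers can apply updates in any order and still agree. A grow-only counter (G-Counter), one of the simplest CRDTs, shows why; this is an illustrative sketch, not hypertoken's sync code:

```javascript
// G-Counter CRDT: each peer increments only its own slot;
// merging takes the per-peer maximum, so merge order is irrelevant.
const increment = (state, peer) => ({ ...state, [peer]: (state[peer] ?? 0) + 1 });

const merge = (a, b) => {
  const out = { ...a };
  for (const [peer, n] of Object.entries(b)) out[peer] = Math.max(out[peer] ?? 0, n);
  return out;
};

const value = (state) => Object.values(state).reduce((sum, n) => sum + n, 0);

// Two peers diverge while offline...
const alice = increment(increment({}, "alice"), "alice"); // { alice: 2 }
const bob   = increment({}, "bob");                       // { bob: 1 }

// ...and converge to the same state regardless of merge direction.
value(merge(alice, bob)); // 3
value(merge(bob, alice)); // 3
```

Automerge applies the same algebraic guarantees to arbitrary JSON-like documents, which is what lets hypertoken sync full game state without a coordinating server.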
```python
from hypertoken import HyperTokenAECEnv

env = HyperTokenAECEnv("ws://localhost:9999")
env.reset(seed=42)

for agent in env.agent_iter():
    obs, reward, term, trunc, info = env.last()
    action = policy(obs) if not (term or trunc) else None
    env.step(action)
```
- GitHub → Source, issues, PRs
- Documentation → Guides and API reference
- Cuttle → Play the demo