Dogukan Tuna
continuaLM Lab — a tiny open-source research lab focused on generalization and continual learning; early stage, research-heavy
Multi-agent system infrastructure at Manuel AI — HVAC, energy, marine

Mem-RLM — Memory-Augmented Inference for Recursive Language Models
An open-source memory layer for Recursive Language Models that records execution trajectories, extracts reusable strategies, and injects them into future runs. Models no longer start cold; they learn which approaches work for which problem types — a 26% accuracy improvement on weaker models, with fully stateful inference.
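The record/extract/inject loop can be sketched in a few lines. This is a minimal illustration, not Mem-RLM's actual API: the class and method names here are hypothetical, and "strategy extraction" is reduced to per-problem-type success counts.

```python
from dataclasses import dataclass, field

@dataclass
class StrategyMemory:
    """Toy trajectory memory: record outcomes per problem type,
    then surface the best-scoring strategies for future runs.
    (Illustrative only — not Mem-RLM's real interface.)"""
    store: dict = field(default_factory=dict)

    def record(self, problem_type: str, strategy: str, success: bool) -> None:
        # stats = [successes, attempts] for this (problem_type, strategy) pair
        stats = self.store.setdefault(problem_type, {}).setdefault(strategy, [0, 0])
        stats[0] += int(success)
        stats[1] += 1

    def inject(self, problem_type: str, top_k: int = 2) -> str:
        # Rank strategies by empirical success rate and render a prompt snippet
        # that a future run can prepend to its context.
        ranked = sorted(self.store.get(problem_type, {}).items(),
                        key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
        hints = [f"- {s} ({w}/{n} past successes)" for s, (w, n) in ranked[:top_k]]
        return "Strategies that worked before:\n" + "\n".join(hints) if hints else ""
```

A fresh run would call `inject(problem_type)` before inference and `record(...)` after, which is where the statefulness comes from.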
A self-improving skill catalog for AI agents
An open-source skill catalog that agents use, extend, and improve themselves. 19 skills covering the full LLM lifecycle, autonomous research, GPU/TPU/QPU programming, and scientific computing — built by agents, for agents.
Claude Code-Time Skill Acquisition with Agent Teams
A team of agents researched, synthesized, and integrated a production-grade React Native skill into a shared knowledge base in under 15 minutes — purely through coordination at Claude Code-time.
On Compression, Computation and the Space Between
Kolmogorov complexity, neural networks as program search, and Wolfram's ruliology seem to be looking at the same thing from different rooms.
Defeating Nondeterminism in LLM Inference: Reproducing Batch-Invariant Ops (RMSNorm & Tiled Matrix Multiplication) in JAX
This learning log opens a series exploring kernel-related topics. As a starting point, I reproduce the implementation of batch-invariant NN operations in JAX, drawing from Thinking Machines Lab's seminal collaborative work, "Defeating Nondeterminism in LLM Inference."
Streaming deepagents and task delegation with real-time output
This post demonstrates how to implement streaming on top of the DeepAgents package in a multi-agent setup, with practical code examples and architectural patterns you can apply to your own projects.
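The delegation-with-streaming pattern can be shown framework-free with plain async generators: a supervisor forwards each sub-agent's tokens to the caller as they arrive instead of waiting for the full delegated result. This is a generic asyncio sketch, not DeepAgents' actual API; in the real setup the sub-agent would be a LangGraph-backed agent stream.

```python
import asyncio

async def sub_agent(task: str):
    """Stand-in for a delegated sub-agent that yields output incrementally.
    (In DeepAgents this would be a streamed agent run, not a toy generator.)"""
    for token in f"result of {task}".split():
        await asyncio.sleep(0)  # simulate incremental generation
        yield token

async def supervisor(tasks):
    """Delegate each task and re-yield tokens in real time, tagged by task."""
    for task in tasks:
        async for token in sub_agent(task):
            yield task, token

async def main():
    out = []
    async for task, token in supervisor(["summarize", "translate"]):
        out.append((task, token))  # caller sees tokens as they stream in
    return out
```

The key design point is that `supervisor` is itself an async generator, so streaming composes through the delegation layer instead of terminating at it.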
Energetics of Allosteric Communication in Ubiquitin Revealed by Hybrid MCTS-Langevin Simulations
Exploring protein conformational landscapes and identifying potential allosteric communication pathways remain significant challenges in computational biophysics. This study presents a hybrid computational approach combining Monte Carlo Tree Search (MCTS) with Langevin Dynamics (LD) simulations using the OpenMM toolkit to enhance conformational sampling.
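The hybrid idea — MCTS guiding where to launch short Langevin rollouts — can be illustrated on a toy landscape. This is a conceptual sketch only: a 1D double-well stands in for a protein's conformational energy surface, the Langevin step uses a finite-difference gradient, and none of this reflects the study's OpenMM setup.

```python
import math, random

def energy(x):
    """Toy double-well landscape standing in for a conformational energy surface."""
    return (x**2 - 1)**2

def langevin_step(x, step=0.05, temp=0.1):
    """One overdamped Langevin move: gradient descent plus thermal noise."""
    grad = (energy(x + 1e-5) - energy(x - 1e-5)) / 2e-5
    return x - step * grad + math.sqrt(2 * step * temp) * random.gauss(0, 1)

class Node:
    def __init__(self, x, parent=None):
        self.x, self.parent = x, parent
        self.children, self.visits, self.value = [], 0, 0.0

def mcts(root_x, iters=200, c=1.4):
    """MCTS over conformations: UCB1 selection, Langevin rollouts as expansion."""
    root = Node(root_x)
    best = (energy(root_x), root_x)
    for _ in range(iters):
        node = root
        while node.children:  # selection: UCB1 down the tree
            node = max(node.children,
                       key=lambda n: n.value / (n.visits + 1e-9)
                       + c * math.sqrt(math.log(node.visits + 1) / (n.visits + 1e-9)))
        x = node.x
        for _ in range(10):   # expansion: short Langevin rollout
            x = langevin_step(x)
        child = Node(x, node)
        node.children.append(child)
        reward = -energy(x)   # lower energy -> higher reward
        best = min(best, (energy(x), x))
        while child:          # backpropagation
            child.visits += 1
            child.value += reward
            child = child.parent
    return best
```

The tree search decides which sampled conformation deserves further Langevin exploration, which is the division of labor the hybrid approach relies on.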