profile

Dogukan Tuna

Hi, I'm Dogukan; thanks for reading! I'm fascinated by AI research, neuroscience, connectomics, and biomedical research, and by the intersections among them, and I'm doing some early research in these fields. Currently I'm focused on next-generation AI architectures for fast connectomics and brain emulation, AI-powered methods for accelerating future brain-map reconstructions, high-compute RL, grokking megakernels, and recursive self-improvement machinery that anticipates where the rest of the field is headed.

The brain is the only object in the universe that tries to map itself. A hundred billion neurons, a quadrillion synapses, and still we have no scalable blueprint for wiring and mapping it. For every other system of comparable complexity we have an atlas; for the thing that writes the atlases, almost nothing. To me this is the most beautiful unsolved problem of the century. And I think AI is finally an instrument equal to it: a compressor of petabyte-scale imaging, a microscope for dynamics, a translator between matter and mind.

I'm extremely bullish on developing methods for the brain emulation problem so that we can flourish in a world with MAS 100T+. So I have turned toward it: learning the field in the open, sharing study notes, experiments, and trials as I go. Toward connectomics, simulated brain emulation, and flourishing with intelligence.

April 2026.

Some things I'm working on:

* @ConnectomeX Labs
* connectomics for simulated emulation, AI compute that can accelerate 3D mapping of neural connections (incredibly bullish)
* curated singularity track (curiosity)

1. Introducing Verified Replay Distillation (VRD): A Recipe for Continual Learning in Verifiable Domains

A short tour of VRD, an on-policy continual learning recipe that uses nothing more than a verifier, a replay buffer, and a failure-driven curriculum to teach a language model new task families without forgetting old ones.

Apr 12, 2026·7 min read
2. autoresearch-mamba: Karpathy-Style Autoresearch for Mamba-2, Mamba-3, and Hybrid Mamba-Transformer MoE

Karpathy-style autoresearch for Mamba-2, Mamba-3, and Nemotron-H style hybrid Mamba-Transformer MoE language models on MLX and GPU.

Mar 25, 2026·8 min read
3. Mem-RLM: Memory-Augmented Inference for Recursive Language Models

An open-source memory layer for Recursive Language Models that records execution trajectories, extracts reusable strategies, and injects them into future runs. Models no longer start cold; they learn which approaches work for which problem types, yielding a 26% accuracy improvement on weaker models with fully stateful inference.

Feb 23, 2026·6 min read
4. A self-improving skill catalog for AI agents

An open-source skill catalog that agents use, extend, and improve themselves. 19 skills covering the full LLM lifecycle, autonomous research, GPU/TPU/QPU programming, and scientific computing — built by agents, for agents.

Mar 11, 2026·6 min read
5. Claude Code-Time Skill Acquisition with Agent Teams

A team of agents researched, synthesized, and integrated a production-grade React Native skill into a shared knowledge base in under 15 minutes — just through coordination at Claude Code-time.

Feb 7, 2026·24 min read
6. On Compression, Computation and the Space Between

Kolmogorov complexity, neural networks as program search and Wolfram's ruliology seem to be looking at the same thing from different rooms.

Feb 1, 2026·12 min read
7. Defeating Nondeterminism in LLM Inference: Reproducing Batch-Invariant Ops (RMSNorm & Tiled Matrix Multiplication) in JAX

A learning log reproducing the implementation of batch-invariant NN operations in JAX, drawing from Thinking Machines Lab's seminal collaborative work, "Defeating Nondeterminism in LLM Inference."

Nov 25, 2025·25 min read
8. Streaming deepagents and task delegation with real-time output

This post demonstrates how to implement streaming on top of the DeepAgents package in a multi-agent setup, with practical code examples and architectural patterns you can apply to your own projects.

Oct 20, 2025·9 min read
9. Energetics of Allosteric Communication in Ubiquitin Revealed by Hybrid MCTS-Langevin Simulations

Exploring protein conformational landscapes and identifying potential allosteric communication pathways remain significant challenges in computational biophysics. This study presents a hybrid computational approach combining Monte Carlo Tree Search (MCTS) with Langevin Dynamics (LD) simulations using the OpenMM toolkit to enhance conformational sampling.

May 6, 2025·14 min read

Photo Shoots & Sketches
