Approximate synapse counts:

C. elegans: 7 x 10³
D. larva: 5 x 10⁵
FlyWire: 5 x 10⁷
MICrONS: 5 x 10⁸
Mouse brain: ~1 x 10¹²
Human brain: ~1 x 10¹⁵

Profile

Dogukan Tuna

Studying newer connectomics methods and simulated brain emulation.

* As of April 2026, focused on learning connectomics, simulated brain emulation and uploading, AI compute that can accelerate 3D reconstruction of neural connections, and scanning hardware. Sharing study notes, experiments, and trials as I learn the field.

* Also working on architectures for AI for accelerated scientific discovery.

The brain is the only object in the universe that tries to map itself. A hundred billion neurons, a quadrillion synapses, and still no working blueprint. For every other system of comparable complexity we have an atlas; for the thing that writes the atlases, almost nothing. To me this is the most beautiful unsolved problem of the century, and I think AI is finally the instrument equal to it: a compressor of petabyte-scale imaging, a microscope for dynamics, a translator between matter and mind. I am extremely bullish on developing methods for the brain emulation problem so that we can flourish in a world with 100T+. So I have turned toward it, learning the field in the open and sharing study notes, experiments, and trials as I go, toward connectomics and simulated brain emulation. April 2026.
1. Verified Replay Distillation: a small recipe for continual learning in verifiable domains

A short tour of VRD, an on-policy continual learning recipe that uses nothing more than a verifier, a replay buffer, and a failure-driven curriculum to teach a language model new task families without forgetting old ones.

Apr 12, 2026·7 min read
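The recipe's three ingredients (a verifier, a replay buffer, a failure-driven curriculum) can be sketched as a single loop. Every name below is my own illustration under those assumptions, not the post's code:

```python
import random

def vrd_step(model_answer, verify, replay_buffer, failure_queue, prompt):
    """One step of a verified-replay-style loop (illustrative sketch).

    verify(prompt, answer) -> bool is assumed to be a cheap programmatic
    checker for the task family.
    """
    answer = model_answer(prompt)
    if verify(prompt, answer):
        # Verified successes become replay data for distillation,
        # protecting older task families against forgetting.
        replay_buffer.append((prompt, answer))
    else:
        # Failures feed the curriculum: they get retried sooner.
        failure_queue.append(prompt)
    # Next mini-batch: fresh failures mixed with replayed successes.
    k = min(2, len(replay_buffer))
    return failure_queue[:1] + [p for p, _ in random.sample(replay_buffer, k)]
```

A real recipe would distill on the verified pairs; the point here is only the data flow between the three components.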
2. autoresearch-mamba: Karpathy-Style Autoresearch for Mamba-2, Mamba-3, and Hybrid Mamba-Transformer MoE

Karpathy-style autoresearch for Mamba-2, Mamba-3, and Nemotron-H style hybrid Mamba-Transformer MoE language models on MLX and GPU.

Mar 25, 2026·8 min read
3. Mem-RLM — Memory-Augmented Inference for Recursive Language Models

An open-source memory layer for Recursive Language Models that records execution trajectories, extracts reusable strategies, and injects them into future runs. Models stop starting cold and actually learn which approaches work for which problem types — 26% accuracy improvement on weaker models, fully stateful inference.

Feb 23, 2026·6 min read
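The record-trajectories, extract-strategies, inject-later loop can be made concrete with a toy memory layer. Class and method names here are my own, not the project's API:

```python
from collections import defaultdict

class StrategyMemory:
    """Toy memory layer: tally which strategy worked per problem type
    and surface the best one as a hint for future runs."""

    def __init__(self):
        self._stats = defaultdict(lambda: defaultdict(int))

    def record(self, problem_type, strategy, success):
        # Successful runs vote a strategy up, failures vote it down.
        self._stats[problem_type][strategy] += 1 if success else -1

    def hint(self, problem_type):
        strategies = self._stats.get(problem_type)
        if not strategies:
            return None  # cold start: no prior runs for this type
        best = max(strategies, key=strategies.get)
        return f"Previously effective strategy: {best}"
```

Stateful inference then just means prepending hint(...) to the next run's context instead of starting cold.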
4. A self-improving skill catalog for AI agents

An open-source skill catalog that agents use, extend, and improve themselves. 19 skills covering the full LLM lifecycle, autonomous research, GPU/TPU/QPU programming, and scientific computing — built by agents, for agents.

Mar 11, 2026·6 min read
5. Claude Code-Time Skill Acquisition with Agent Teams

A team of agents researched, synthesized, and integrated a production-grade React Native skill into a shared knowledge base in under 15 minutes — just through coordination at Claude Code-time.

Feb 7, 2026·24 min read
6. On Compression, Computation and the Space Between

Kolmogorov complexity, neural networks as program search and Wolfram's ruliology seem to be looking at the same thing from different rooms.

Feb 1, 2026·12 min read
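The Kolmogorov-complexity corner of that triangle has a standard computable stand-in: compressed length. A minimal sketch (my own toy, not from the post):

```python
import zlib

def compressed_len(s: str) -> int:
    """Length of the zlib-compressed bytes: a crude, computable
    upper-bound proxy for the (uncomputable) Kolmogorov complexity."""
    return len(zlib.compress(s.encode("utf-8")))
```

A highly regular string like "ab" * 500 compresses to a few dozen bytes, hinting at the short program that generates it; the gap between raw length and compressed length is the intuition the essay leans on.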
7. Defeating Nondeterminism in LLM Inference: Reproducing Batch-Invariant Ops (RMSNorm & Tiled Matrix Multiplication) in JAX

A learning log reproducing the implementation of batch-invariant NN operations in JAX, drawing from Thinking Machines Lab's seminal collaborative work, "Defeating Nondeterminism in LLM Inference."

Nov 25, 2025·25 min read
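The post's actual implementation lives in JAX kernels; as a language-agnostic toy in plain Python (names mine), batch invariance just means each row is reduced in a fixed order that cannot depend on what else is in the batch:

```python
import math

def rmsnorm_row(row, weight, eps=1e-6):
    """RMSNorm for a single row, accumulated strictly left to right."""
    ms = 0.0
    for v in row:  # fixed reduction order, independent of batch size
        ms += v * v
    ms /= len(row)
    inv = 1.0 / math.sqrt(ms + eps)
    return [v * w * inv for v, w in zip(row, weight)]

def rmsnorm(batch, weight):
    """Batch-invariant by construction: each row is reduced on its own,
    so its output is bit-identical whether the row is normalized alone
    or inside a larger batch."""
    return [rmsnorm_row(row, weight) for row in batch]
```

On GPUs the hard part is keeping this property under split or tiled reductions without giving up throughput, which is what the reproduced kernels address.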
8. Streaming deepagents and task delegation with real-time output

This post demonstrates how to implement streaming on top of the deepagents package in a multi-agent setup, with practical code examples and architectural patterns you can apply to your own projects.

Oct 20, 2025·9 min read
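This is not the deepagents API; as a generic asyncio sketch of the underlying pattern (hypothetical names throughout), streaming a team means fanning each agent's output chunks into one shared queue:

```python
import asyncio

async def stream_agent(name, chunks, queue):
    """Hypothetical sub-agent that emits output chunks as they arrive."""
    for chunk in chunks:
        await queue.put((name, chunk))
        await asyncio.sleep(0)  # yield control so agents interleave
    await queue.put((name, None))  # sentinel: this agent is finished

async def run_team(agents):
    """Fan-in: merge every agent's stream into one real-time feed."""
    queue = asyncio.Queue()
    tasks = [asyncio.create_task(stream_agent(n, c, queue)) for n, c in agents]
    finished, transcript = 0, []
    while finished < len(agents):
        name, chunk = await queue.get()
        if chunk is None:
            finished += 1
        else:
            transcript.append((name, chunk))  # real app: print/forward now
    await asyncio.gather(*tasks)
    return transcript
```

The sentinel-per-agent convention is what lets the consumer loop end cleanly without knowing how many chunks each agent will produce.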
9. Energetics of Allosteric Communication in Ubiquitin Revealed by Hybrid MCTS-Langevin Simulations

Exploring protein conformational landscapes and identifying potential allosteric communication pathways remain significant challenges in computational biophysics. This study presents a hybrid computational approach combining Monte Carlo Tree Search (MCTS) with Langevin Dynamics (LD) simulations using the OpenMM toolkit to enhance conformational sampling.

May 6, 2025·14 min read
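Setting the OpenMM and Langevin machinery aside, the MCTS half of such a hybrid hinges on a selection rule like UCT; a minimal sketch with illustrative names (conformational basins as children):

```python
import math

def uct_select(children, c=1.4):
    """Pick the child maximizing the UCT score:
    mean value + c * sqrt(ln(total visits) / child visits).
    Unvisited children score infinity, so they are explored first."""
    total = sum(child["visits"] for child in children)

    def score(child):
        if child["visits"] == 0:
            return float("inf")
        exploit = child["value"] / child["visits"]
        explore = c * math.sqrt(math.log(total) / child["visits"])
        return exploit + explore

    return max(children, key=score)
```

In a conformational-sampling setting, a child's value would come from short Langevin rollouts, and the tree steers further simulation toward promising regions of the landscape.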

Photo Shoots & Sketches
