Training mRNA Language Models Across 25 Species for $165
116 points - last Wednesday at 8:38 PM
We built an end-to-end protein AI pipeline covering structure prediction, sequence design, and codon optimization. After comparing multiple transformer architectures for codon-level language modeling, CodonRoBERTa-large-v2 emerged as the clear winner with a perplexity of 4.10 and a Spearman CAI correlation of 0.40, significantly outperforming ModernBERT. We then scaled to 25 species, trained 4 production models in 55 GPU-hours, and built a species-conditioned system that no other open-source project offers. Complete results, architectural decisions, and runnable code below.
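To make the "species-conditioned" idea concrete: a common way to condition a codon-level language model on organism is to prepend a per-species control token to the codon sequence. Below is a minimal sketch of that scheme; the <species=...> token format, the species names, and the to_codon_tokens helper are assumptions for illustration, not necessarily how this project implements it.

    # Minimal sketch of species-conditioned codon tokenization.
    # NOTE: the <species=...> control token, the species list, and this
    # helper are illustrative assumptions, not the project's actual code.
    from typing import List

    SPECIES = {"homo_sapiens", "e_coli", "s_cerevisiae"}  # hypothetical subset of the 25

    def to_codon_tokens(cds: str, species: str) -> List[str]:
        """Split a coding sequence into codon tokens, prefixed with a
        species control token so one model can condition on organism."""
        if len(cds) % 3 != 0:
            raise ValueError("CDS length must be a multiple of 3")
        if species not in SPECIES:
            raise ValueError(f"unknown species: {species}")
        codons = [cds[i:i + 3].upper() for i in range(0, len(cds), 3)]
        return [f"<species={species}>"] + codons

    print(to_codon_tokens("ATGGCTAAGTAA", "e_coli"))
    # ['<species=e_coli>', 'ATG', 'GCT', 'AAG', 'TAA']

One control token per organism keeps a single shared vocabulary across all 25 species while letting the model learn species-specific codon usage.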
Comments
Yes, these models can predict surprisingly accurate structures and sequences. Do we know if these outputs are biologically useful? Not quite.
On top of that, we don't have a clear understanding of how certain positions (conformations) of a structure affect underlying biological mechanisms.
This technology is amazing, don't get me wrong, but the average person might see this and wonder why we can't go full futurism and solve every pathology with models like these.
We've come a long way, but there's still a very, very long way to go.
I am a structural biologist working in pharmaceutical design, and this type of thing could be wildly useful (if it works).
JEPA is going to break the whole industry :D