PyTorch invented coarrays:
(but seriously now, looks like a very interesting and potentially powerful approach for distributing work in Python)
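For context, coarrays are Fortran's built-in SPMD parallelism: every image (process) runs the same program, works on its own slice of the data, and exchanges results through collectives. A minimal sketch of that model (my own illustration, not anything from the PyTorch side):

```fortran
! Each image sums its own strided slice of 1..1000, then the
! partial sums are reduced across all images with co_sum (F2018).
program coarray_sum
  implicit none
  integer :: me, n, i
  real :: partial
  me = this_image()   ! my image index (1..n)
  n  = num_images()   ! total number of images
  partial = 0.0
  do i = me, 1000, n
    partial = partial + real(i)
  end do
  call co_sum(partial)                    ! collective reduction
  if (me == 1) print *, 'sum =', partial  ! 500500.0
end program coarray_sum
```

Build with e.g. `gfortran -fcoarray=single` for a single-image test, or with OpenCoarrays for actual multi-image runs.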
What is Fortran missing for it to be usable for LLM training and inference?
I’m not familiar enough with neural-fortran by @milancurcic or fiats by @rouson, but I’d say easy-to-use automatic differentiation and a package implementing the standard optimizers used for training such networks. In a nutshell, the ecosystem is still missing some building blocks; it is not an actual limitation of the language.
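To illustrate the kind of building block I mean, here is a minimal sketch of an SGD-with-momentum update step in plain Fortran. All names are hypothetical and do not correspond to any existing package:

```fortran
module sgd_optimizer
  implicit none
  integer, parameter :: wp = kind(1.0d0)
contains
  ! One SGD-with-momentum step:
  !   v <- momentum * v - lr * g
  !   p <- p + v
  subroutine sgd_step(params, grads, velocity, lr, momentum)
    real(wp), intent(inout) :: params(:), velocity(:)
    real(wp), intent(in)    :: grads(:)
    real(wp), intent(in)    :: lr, momentum
    velocity = momentum * velocity - lr * grads
    params   = params + velocity
  end subroutine sgd_step
end module sgd_optimizer

program demo
  use sgd_optimizer
  implicit none
  real(wp) :: p(2) = [1.0_wp, -2.0_wp]   ! parameters
  real(wp) :: g(2) = [0.5_wp,  0.1_wp]   ! gradients (from some backward pass)
  real(wp) :: v(2) = 0.0_wp              ! momentum buffer
  call sgd_step(p, g, v, lr=0.1_wp, momentum=0.9_wp)
  print *, p
end program demo
```

The update rule itself is trivial; the missing piece is packaging this (and Adam, AdamW, schedulers, etc.) behind a common interface, together with the autodiff that produces the gradients.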
A good playground may be to port as much of nanochat as possible to Fortran.
I agree with @loiseaujc that the language is not the limiting factor: a first implementation of transformers was added to neural-fortran, but I never tested it. The athena library seems quite complete too (though I haven't tested it either).
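As a point of reference, the core operation such an implementation has to provide, scaled dot-product attention, is short in plain Fortran. This is a hedged sketch of my own, not the neural-fortran or athena API:

```fortran
module attention_mod
  implicit none
  integer, parameter :: wp = kind(1.0d0)
contains
  ! Scaled dot-product attention: softmax(q k^T / sqrt(d_k)) v
  ! q, k: (seq_len, d_k); v: (seq_len, d_v); result: (seq_len, d_v)
  function attention(q, k, v) result(out)
    real(wp), intent(in) :: q(:,:), k(:,:), v(:,:)
    real(wp) :: out(size(q,1), size(v,2))
    real(wp) :: scores(size(q,1), size(k,1))
    integer :: i
    scores = matmul(q, transpose(k)) / sqrt(real(size(k,2), wp))
    do i = 1, size(scores, 1)           ! numerically stable row-wise softmax
      scores(i,:) = exp(scores(i,:) - maxval(scores(i,:)))
      scores(i,:) = scores(i,:) / sum(scores(i,:))
    end do
    out = matmul(scores, v)
  end function attention
end module attention_mod
```

Everything maps onto `matmul` and array operations, which is exactly where Fortran is comfortable; the hard part is the surrounding training machinery, not the math.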