A long time ago I remember reading a paper (I forget where) in which someone had taken the fairly mechanical process of laying out data on disk for efficient access and added a trained ML model (not an LLM) to guide the process, with a great leap in efficiency.
There are a lot of problems where it is much easier to check a proposed solution than to come up with the proposal. I suspect that using an LLM to rewrite code, paired with some sort of verification process, could be among those. The LLM could even propose a proof of equivalence that a proof checker then verifies.
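To make the propose-then-check asymmetry concrete, here is a minimal sketch in Python. Everything in it is hypothetical: candidate stands in for an LLM-proposed rewrite, and the verifier is plain differential testing on random inputs, which is much weaker than the proof-of-equivalence idea but illustrates why rejecting a bad proposal is so much cheaper than generating a good one.

    import random

    def reference(xs):
        """Original (trusted but slow) implementation."""
        total = 0
        for x in xs:
            total += x * x
        return total

    def candidate(xs):
        """Stand-in for an LLM-proposed rewrite of reference()."""
        return sum(x * x for x in xs)

    def agrees_on_random_inputs(f, g, trials=1000):
        """Cheap check: agreement on many random inputs.
        Not a proof of equivalence, just a fast filter that
        rejects most incorrect proposals."""
        for _ in range(trials):
            xs = [random.randint(-1000, 1000)
                  for _ in range(random.randint(0, 50))]
            if f(xs) != g(xs):
                return False
        return True

    if __name__ == "__main__":
        print(agrees_on_random_inputs(reference, candidate))  # True

A real pipeline would replace the random testing with something sound, like translation validation or a machine-checked equivalence proof, but the shape of the loop stays the same: cheap generation, cheap rejection, expensive acceptance only at the end.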
Will LLM translation be a required optimization step in future compilers? Here is some code in Lisp, please rewrite it in efficient JavaScript :)
I don't know. Compilation seems more deterministic than what LLMs are good for now.