If we observe how we humans communicate, we see a lot of guessing. Human languages are all ambiguous, and their great efficiency depends on intuitive -- and inexpensive -- guesses. We guess as we listen or read, but we do not commit to our guesses; rather, we revise them as we accumulate further information.
Now imagine forbidding ourselves from revising our guesses, and you will see how understanding natural language becomes a forbidding task ...
Compilation is essentially a guess at meaning with no room for revision -- neither self-revision nor post-hoc revision. So it is doomed to either inefficiency or intractable difficulty.
I believe the right way forward for programming is multi-stage compilation -- a meta-compilation stage followed by static compilation. Today, "compilation" refers only to the latter. The meta-compilation produces a human-readable intermediate result -- not much different from our current code in C or Java -- and it will be as easy to get feedback from as C or Java is today. The meta-compilation takes many guesses, often adventurous ones, but the coder can easily tell whether it is on track or has guessed wrong.

When the coder detects that the meta-compilation guessed wrong, he simply modifies his source code -- slightly changes the style, or writes the code in a different way (not much different from human communication, when we say "what I meant was ..."). Or, in a few specific or difficult cases, the coder may opt to modify the intermediate code directly (in C or Java or any of today's popular languages) -- not much different from natural language, when we drop into a more precise register to write laws and contracts.
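To make the staging concrete, here is a toy sketch of the idea in Python. The directive syntax (`$(name:args)`) and the `sum_loop` template are hypothetical, invented for illustration -- this is not MyDef's actual notation. The point is only the shape of the pipeline: stage one expands a terse meta-source into plain, human-readable C that the coder can inspect, correct, or hand-edit; stage two would be an ordinary C compiler.

```python
# A hypothetical two-stage compilation sketch (illustrative only;
# NOT MyDef's actual syntax). Stage 1 expands $(name:args) directives
# into readable C text; stage 2 would be a normal C compiler.

# Templates the toy meta-compiler knows how to expand.
MACROS = {
    "sum_loop": "int {var} = 0;\nfor (int i = 0; i < {n}; i++) {{ {var} += i; }}",
}

def meta_compile(source: str) -> str:
    """Expand each $(name:arg1,arg2) line into plain C text."""
    out = []
    for line in source.splitlines():
        stripped = line.strip()
        if stripped.startswith("$(") and stripped.endswith(")"):
            name, _, args = stripped[2:-1].partition(":")
            var, n = (a.strip() for a in args.split(","))
            out.append(MACROS[name].format(var=var, n=n))
        else:
            out.append(line)
    return "\n".join(out)

meta_source = "\n".join([
    "int main(void) {",
    "$(sum_loop:total,10)",
    "    return total;",
    "}",
])

c_code = meta_compile(meta_source)
print(c_code)  # readable C that a coder can inspect or hand-edit
```

Because the intermediate output is ordinary C rather than opaque machine code, a wrong guess by stage one is visible at a glance, and the coder can respond either upstream (rewriting the meta-source) or downstream (patching the generated C).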
I have been exploring this idea with MyDef -- a meta-programming system -- and have become more convinced that it is the more sensible way forward (than trying to make breakthroughs in the mature field of programming languages).