Lately I have been doing some reading on term-rewriting systems, and designing a simple string->sexp rewriter, which could form the basis of a compiler or partial evaluator: a compiler is nothing more than parsing a string into an AST and rewriting that AST all the way down to another string (of machine-code bytes). A partial evaluator for C would be much easier to write if you first parsed the C into an AST, which would look a lot like S-expressions from Lisp.
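For concreteness, here's a minimal sketch of the kind of thing I mean, in Python (the rule format and helper names are my own illustration, not a real library): a string -> sexp parser plus a rewriter that applies rules bottom-up until nothing changes.

```python
def parse(src):
    """Parse a string like '(+ 1 (* 2 3))' into nested lists and atoms."""
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()
    def read(pos):
        tok = tokens[pos]
        if tok == "(":
            out, pos = [], pos + 1
            while tokens[pos] != ")":
                node, pos = read(pos)
                out.append(node)
            return out, pos + 1
        return (int(tok) if tok.lstrip("-").isdigit() else tok), pos + 1
    node, _ = read(0)
    return node

def rewrite(node, rules):
    """Rewrite children first, then retry rules here until a fixed point.
    (Naive: a rule that builds a new subtree won't be re-descended into.)"""
    if isinstance(node, list):
        node = [rewrite(child, rules) for child in node]
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new = rule(node)
            if new is not None:
                node, changed = new, True
    return node

def fold_add(node):
    """Example rule: constant-fold (+ a b) when both args are numbers."""
    if (isinstance(node, list) and len(node) == 3 and node[0] == "+"
            and all(isinstance(a, int) for a in node[1:])):
        return node[1] + node[2]
    return None

print(rewrite(parse("(+ 1 (+ 2 3))"), [fold_add]))  # -> 6
```

The nice property is that "compile" and "partially evaluate" are then just different rule sets fed to the same engine.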
Anyway, I smile and cringe at the idea of embedding LLVM into the kernel.
I think the question becomes: how do you allow the creation of safe kernel code as that code grows more capable? Historically, BPF just performed packet filtering (close to reception, hence efficient). eBPF initially added metrics and aggregation, but it has since been extended to modify, drop, and redirect packets, and to hook into all sorts of parts of the kernel. All of this rests on the premise of compiling to safe code (which mainly means proving it does only what you intend).
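To make the "just filtering" baseline concrete, here's a sketch of the classic-BPF ancestor of all this: a four-instruction filter that accepts only ARP frames, attached to a raw socket from Python. This assumes Linux and root/CAP_NET_RAW; the opcodes are the standard classic-BPF encodings, and I define SO_ATTACH_FILTER manually in case the socket module doesn't expose it.

```python
# Sketch: attach a classic BPF filter (accept only ARP, ethertype 0x0806)
# to a raw packet socket. Linux-only; needs root or CAP_NET_RAW.
import ctypes
import socket
import struct

SO_ATTACH_FILTER = getattr(socket, "SO_ATTACH_FILTER", 26)
ETH_P_ALL = 0x0003

def ins(code, jt, jf, k):
    # struct sock_filter { __u16 code; __u8 jt; __u8 jf; __u32 k; }
    return struct.pack("HBBI", code, jt, jf, k)

prog = b"".join([
    ins(0x28, 0, 0, 12),      # ldh [12]      ; load the ethertype field
    ins(0x15, 0, 1, 0x0806),  # jeq #0x0806   ; ARP? fall through : skip 1
    ins(0x06, 0, 0, 0xFFFF),  # ret #65535    ; accept the packet
    ins(0x06, 0, 0, 0),       # ret #0        ; drop the packet
])
buf = ctypes.create_string_buffer(prog, len(prog))
# struct sock_fprog { unsigned short len; struct sock_filter *filter; }
fprog = struct.pack("HL", len(prog) // 8, ctypes.addressof(buf))

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
s.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, fprog)
pkt = s.recv(65535)  # only ARP frames make it past the filter
```

Everything beyond this accept/drop core (maps, helper calls, hooking arbitrary kernel events) is the eBPF superstructure, and the verifier is what keeps the "proves it does only what you intend" promise as that surface grows.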
So the Synthesis-style question is really: suppose a kernel ships with nothing but eBPF, and you synthesize every other interface you need on top of it?
I think this leads to some metaphysical navel-gazing: what is a kernel+OS, really? Practically, it's about providing stable interfaces (both upwards to user space and downwards to hardware) so you can run things. Philosophically, it's really just resource management and isolation.