I was reading this paper about the funarg problem, which is really the problem of maintaining the environments of lexical closures. It's an old paper and I'm not sure if the a
Lexical scope is obviously possible with "hard real-time" -- after all, you say that you're using C for real-time applications, and C is a lexically scoped language. My guess is that you're actually concerned about first-class functions, not lexical scope. Assuming that, there's a whole bunch of known compilation techniques for dealing with first-class functions efficiently.
First of all, what you'll see in textbooks etc. is almost always the usual tree-shaped environment, but in practice that's not needed at all if functions are not used as values. Practically every decent functional-language compiler will identify such code and use the stack instead, so there's zero allocation overhead. (Note that this already means that if you limit yourself to the kind of things you'd write in C, then no allocation is needed.) Then there's a bunch of additional ways to reduce allocations -- for example, consider something like (lambda (x) (* x 9)) -- this function doesn't close over any free identifiers, and therefore compilers will lift it to the top level so there's a single copy of the function and, again, no allocation. (Related note: with this optimization you already get more than what C gives you, still with no allocation.)
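To make that concrete, here's a rough sketch in C of the kind of code a compiler could emit for that lambda; the function names are invented for illustration and don't correspond to any particular compiler's output.

    /* (lambda (x) (* x 9)) has no free variables, so a compiler can lift it
       to a single top-level function; calling it allocates nothing. */
    static int times9(int x) {
        return x * 9;
    }

    /* A call site such as ((lambda (x) (* x 9)) 4) then becomes an ordinary
       direct call, with x living in a register or on the stack frame. */
    static int example(void) {
        return times9(4);
    }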
There's a whole bunch of additional optimizations that avoid allocation, but of course there are cases where you really do need to allocate a new closure. These places are statically identifiable, though, and if you really care about them, it shouldn't be difficult to hack up a compiler that warns you about such allocations. There have been a number of such languages; see, for example, GOAL or the very low-level PreScheme. But IME most people quickly get the hang of things and it becomes obvious where allocations happen -- so you get the benefit of a high-level language, and when you get to some code that you want to optimize, it's easy to see where allocation happens and easy to avoid it in the usual ways. (And of course, all of this is unrelated to optimizing the allocations themselves.)
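As a hedged sketch of which cases do and don't allocate, assuming the usual closure-conversion representation (a code pointer paired with the captured variables) -- the names here are again made up:

    #include <stdlib.h>

    /* Representation a closure-converting compiler might use: a code
       pointer plus the captured variable n. */
    struct closure {
        int (*code)(struct closure *self, int x);
        int n;
    };

    static int times_n_code(struct closure *self, int x) {
        return x * self->n;
    }

    /* Non-escaping use: the closure is only called inside this function,
       so its environment can live in the current stack frame -- no allocation. */
    int sum_scaled(const int *xs, int len, int n) {
        struct closure c = { times_n_code, n };
        int total = 0;
        for (int i = 0; i < len; i++)
            total += c.code(&c, xs[i]);
        return total;
    }

    /* Escaping use: the closure outlives this frame, so this is the
       statically identifiable spot where a heap allocation really happens. */
    struct closure *make_multiplier(int n) {
        struct closure *c = malloc(sizeof *c);
        c->code = times_n_code;
        c->n = n;
        return c;
    }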
I found some real-time allocators, so I'd say lexical scope in real-time is possible:
http://rtportal.upv.es/rtmalloc/
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.441&rep=rep1&type=pdf
Adding to your digression: before writing my own micro-Lisp, I would try to find or port Lua for the embedded system in question. It is very small and offers much of what Lisp does: first-class functions and closures; no continuations, but it does have coroutines.
Lua is small
Adding Lua to an application does not bloat it. The tarball for Lua 5.1.4, which contains source code, documentation, and examples, takes 212K compressed and 860K uncompressed. The source contains around 17000 lines of C. Under Linux, the Lua interpreter built with all standard Lua libraries takes 153K and the Lua library takes 203K.
Lua is free
Lua is free open-source software, distributed under a very liberal license (the well-known MIT license). It may be used for any purpose, including commercial purposes, at absolutely no cost. Just download it and use it.
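For what it's worth, embedding Lua in a C host is only a few calls. Here's a minimal sketch against the Lua 5.1 C API; the embedded chunk exercises a closure and a coroutine, the features mentioned above.

    /* Minimal sketch of embedding Lua 5.1 in a C host program.  The chunk it
       runs uses a closure and a coroutine; error handling is bare-bones. */
    #include <stdio.h>
    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    int main(void) {
        lua_State *L = luaL_newstate();   /* fresh interpreter state */
        luaL_openlibs(L);                 /* load the standard libraries */

        const char *chunk =
            "local function counter()\n"
            "  local n = 0\n"
            "  return function() n = n + 1; return n end  -- closure over n\n"
            "end\n"
            "local tick = counter()\n"
            "local co = coroutine.create(function()\n"
            "  coroutine.yield(tick())\n"
            "  coroutine.yield(tick())\n"
            "end)\n"
            "print(select(2, coroutine.resume(co)))  -- prints 1\n"
            "print(select(2, coroutine.resume(co)))  -- prints 2\n";

        if (luaL_dostring(L, chunk) != 0)
            fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));

        lua_close(L);
        return 0;
    }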