And yeah, I am definitely coming for Lisp.
Viz. the array reference a[0] in algols is a function application in lisps (vector-ref a 0).
The same is true for everything else. Semantic white space in such a language is a terrible idea as everyone eventually finds out.
Note that RX is not like normal semantic white space, but simpler. It is hard coded into the text without taking its content into consideration. RX is basically nested records of text, making it very robust, but encoded as plain text instead of JSON or something like that.
Everyone thinks there's something better and the very motivated write an interpreter for their ideal language in lisp.
The ideas inevitably have so much jank when used in anger that you always come back to sexp.
Now if you discover a way to linearize dags or arbitrary graphs without having to keep a table of symbols I'd love to hear it.
S-expressions are one way to linearize a tree.
Now, "simple" can mean different things depending on what you are trying to achieve. RX is simpler than s-expressions if you prefer indentation over brackets, and like the robustness that it brings. Abstraction algebra terms are simpler than s-expressions if you want to actually reason about and with them.
In short: I get all the pain of semantic white space with all the pain of lisp s-exp's with the benefits of neither.
It's been done before; see Scheme SRFI-119, a.k.a. wisp (a close cousin of SRFI-110's sweet-expressions / t-expressions):
What kind of content you put into the blocks is up to you. How you parse one block is independent of how you parse another block, which means embedding DSLs and so on is painless. You could view the content of a block as RX, but you can also just see it as plain text that you can parse however you choose.
This also means if you make a syntax error in one block, that does not affect any other sibling block.
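That error-isolation property follows directly from splitting on indentation alone. RX's actual grammar isn't given in this thread, so the following is an illustrative sketch under the simplest reading of the description above: a block is a header line plus every line indented more deeply beneath it, and a block's body stays raw text, to be parsed later however that block wants. The `split_blocks` helper and the sample text are assumptions, not RX itself.

```python
# Illustrative sketch only: RX's real grammar isn't given here.  Assumption:
# a block is a header line plus every line indented more deeply beneath it,
# and a block's body stays raw text, parsed later however the block wants.

def split_blocks(lines, indent=0):
    """Split lines into (header, raw_body_lines) records by indentation."""
    blocks, i = [], 0
    while i < len(lines):
        line = lines[i]
        if not line.strip():          # skip blank lines between blocks
            i += 1
            continue
        body = []
        i += 1
        # Everything more indented (or blank) belongs to this block's body.
        while i < len(lines) and (not lines[i].strip() or
                                  len(lines[i]) - len(lines[i].lstrip()) > indent):
            body.append(lines[i])
            i += 1
        blocks.append((line.strip(), body))
    return blocks

text = """task
  name build
  steps
    compile
    link
note
  free-form text -- a syntax error here can't break the 'task' block"""
for header, body in split_blocks(text.splitlines()):
    print(header, len(body))
```

Because the body of each block is captured as raw text before anything is parsed, a malformed body can only fail when its own block is parsed; siblings never see it.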
The benefit of RX, especially at the outer level, is that all those ugly brackets go away, and all you are left with is clear and pleasing structure. This is especially nice for beginners, but I have been programming for over 30 years now, and I also like it much better.
If you don't see that as a benefit, good for you!
This is not at all obvious.
Obviously, I (usually) would not want to write things like
  +
    *
      a
      b
    *
      c
      d

but rather

  +
    a * b
    c * d

or, even better of course, (a * b) + (c * d)
I think of blocks more as representing high-level structure, while brackets are there for low-level and fine-grained structure. As the border between these can be fluid, where you choose the cutoff will depend on the situation and also be fluid.

I have more important things to think about in my code than when I switch between two dialects of the language.
Especially since I get no extra expressive power by doing so.
>Obviously, I (usually) would not want to write things like
Or just use (+ (* a b) (* c d)), which is simpler than any of the examples above. Then I can choose between any of:
  ( +
    (* a b)
    (* c d))

  ( +
    ( *
      a
      b
    )
    ( *
      c
      d
    )
  )
Or whatever else you want to do.

> As the border between these can be fluid, where you choose the cutoff will depend on the situation and also be fluid.
It's only fluid because you've had XX years of infix notation caused brain damage to make you think that.
Granted. The example I gave was just to demonstrate that switching between the styles is not a problem and can be fluid, if you need it to be.
> It's only fluid because you've had XX years of infix notation caused brain damage to make you think that.
No, infix is just easier to read and understand, it matches up better with the hardware in our brains for language. If that is different for you, well ... you mentioned brain damage first.
Of course you'll think that the two are weird when you've never had a chance to use them before you're an adult.
Much like how adults who are native english speakers see nothing wrong with the spelling, but children and everyone else does.
That's the exact same flexibility but in a different order. It's not simpler.
For example, this shows a bit of the mix available:
  for-each
    λ : task
      if : and (equal? task "help") (not (null? (delete "help" tasks)))
        map
          λ : x
            help #f #f x
          delete "help" tasks
        run-task task
    . tasks
[0] https://srfi.schemers.org/srfi-119/srfi-119.html

Which is irrelevant, because you can visualize code however you want via editor extensions.
> And yeah, I am definitely coming for Lisp.
An endeavor which is simultaneously hopeless and pointless.
Semantically, of course this does not matter. A block is a block, no matter if delineated by indentation or brackets. But RX looks better as plain text, and there is much less of a disconnect between RX as plain text, and RX as presented in a special editor (extension) for RX, than there would be for Lisp.
> An endeavor which is simultaneously hopeless and pointless.
Challenge accepted.
What's even funnier is how much they attack anyone who points this out.
I think most people's amazement with LSP relates to the practical benefits of such a project _not_ being thrown away, but instead being taken that last 10% (which is 90% of the work) to make it suitable for so many use cases, and to people being sold on the idea of doing so.
Only having exposure to the algol family of languages does for your mental capabilities what a sugar only diet does for your physical capabilities. It used to be the case that all programmers had exposure to assembly/machine code which broke them out of the worst habits algols instill. No longer.
Pointing out that the majority of programmers today have the mental equivalent of scurvy is somehow condescending but the corp selling false teeth along with their sugar buckets is somehow commendable.
And editor actions can be useful for any language which either allows you to edit things, or has more than one way to do the same thing (among a bunch of other things), which includes basically everything. Of course editor functionality isn't a thing that'd be 100% beneficial 100% of the time, but it's plenty above 0% if you don't purposefully ignore it.
You can (and many people do!) say the exact same thing in a different tone and with different word choice and have people nod along in agreement. If you're finding that people consistently react negatively to you when you say it, please consider that it might be because of the way in which you say it.
I'm one of those who would normally nod along in agreement and writing in support, but your comments here make me want to disagree on principle because you come off as unbearably smug.
So much the worse for you.
Some concrete examples for us lesser mortals please.
The general advice is that you shouldn't mix them, but the general advice today is that you shouldn't use ASM anyway.
It's against the HN guidelines (https://news.ycombinator.com/newsguidelines.html), boring, unenlightening, not intellectually gratifying, degrades the quality of the site, and takes far less intelligence than the "mental equivalent of scurvy" that you name. Don't do it.
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
I'm curious about this unnamed ongoing work (that is unaware of incremental parsing).
Anyone know what he is referring to?
Heck, incremental lexing is even easy to explain. For each token, track where the lexer actually looked in the input stream to make its decisions. Any time that part of the input stream changes, every token that actually looked at the changed portion of the input stream is re-lexed, and if the result changes, keep re-lexing until the before/after token streams sync up again or you run out of input. That's it.
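The scheme above can be sketched in a few dozen lines of Python. Everything here is illustrative, not any particular editor's implementation: the token pattern, the `relex` helper, and the one-character lookahead over-approximation are all assumptions. Each token records the span it covers plus the furthest input position the lexer peeked at to finalize it.

```python
import re

# Illustrative sketch; token pattern and helper names are assumptions.
# Each token is (start, end, examined_end, text): the span it covers plus
# the furthest input position the lexer looked at to finalize it.
TOKEN = re.compile(r"\s*(\d+|[A-Za-z_]\w*|.)")

def lex(src, pos=0):
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            break
        start, end = m.start(1), m.end(1)
        # Over-approximate lookahead as one char past the token; that is
        # safe (it only causes a little extra re-lexing), never wrong.
        yield (start, end, min(end + 1, len(src)), m.group(1))
        pos = end

def relex(old_tokens, new_src, edit_start, edit_end, delta):
    """Re-lex after old[edit_start:edit_end] was replaced (size change
    delta), reusing every token that never examined the edited region."""
    prefix = [t for t in old_tokens if t[2] <= edit_start]
    resume = prefix[-1][1] if prefix else 0
    # Old tokens past the edit, shifted into new-text coordinates.
    shifted = [(s + delta, e + delta, x + delta, w)
               for (s, e, x, w) in old_tokens if s >= edit_end]
    out = list(prefix)
    for tok in lex(new_src, resume):
        if shifted and tok == shifted[0]:   # streams synced up again: done
            out.extend(shifted)
            return out
        while shifted and shifted[0][0] < tok[1]:
            shifted.pop(0)                  # stale token overrun by re-lex
        out.append(tok)
    return out

old, new = "a + b + c", "a + bb + c"       # edit: "b" -> "bb"
tokens = relex(list(lex(old)), new, 4, 5, 1)
print([t[3] for t in tokens])
```

The re-lex stops at the first fresh token that exactly matches a (shifted) old token, which is the "sync up again" condition from the description above.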
You can also make a dumber version that statically calculates the maximum lookahead (lookbehind if you support that too) of the entire grammar, or the maximum possible lookahead per token, and uses that instead of tracking the actual lookahead used. In practice, this is often harder than just tracking the actual lookahead used.
In an LL system like ANTLR, incremental parsing is very similar - since it generates top-down parsers, it's the same basic theory - track what token ranges were looked at as you parse. During incremental update, only descend into portions of the parse tree where the token ranges looked at contain modified tokens.
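The same span-tracking idea can be sketched one level deep: a flat "program" of statements, each parse node recording the source span it consumed. A real top-down parser applies this recursively at every rule, and real systems also handle edits that cross node boundaries; this toy assumes the edit stays inside one statement. The `name = number;` grammar and the helper names are assumptions for illustration.

```python
import re

# Illustrative sketch of span-tracked incremental parsing, one level deep.
# Each node records the source span it consumed; after an edit, nodes whose
# span avoided the change are reused (shifted), and only the gap between
# the reusable prefix and suffix is reparsed.  Toy grammar: `name = num;`.
STMT = re.compile(r"\s*(\w+)\s*=\s*(\d+)\s*;")

def parse(src):
    """Full parse: list of (start, end, name, value) statement nodes."""
    nodes, pos = [], 0
    while (m := STMT.match(src, pos)):
        nodes.append((m.start(), m.end(), m.group(1), int(m.group(2))))
        pos = m.end()
    return nodes

def incremental_parse(nodes, new_src, edit_start, edit_end, delta):
    """Reuse nodes whose span avoided the edit; reparse only the gap."""
    prefix = [n for n in nodes if n[1] <= edit_start]
    suffix = [(s + delta, e + delta, name, v)
              for (s, e, name, v) in nodes if s >= edit_end]
    pos = prefix[-1][1] if prefix else 0
    stop = suffix[0][0] if suffix else len(new_src)
    mid = []
    while pos < stop and (m := STMT.match(new_src, pos)):
        mid.append((m.start(), m.end(), m.group(1), int(m.group(2))))
        pos = m.end()
    return prefix + mid + suffix

old = "a = 1; b = 2; c = 3;"
new = "a = 1; b = 42; c = 3;"          # edit: "2" -> "42" at offset 11
print(incremental_parse(parse(old), new, 11, 12, 1) == parse(new))
```

The result matches a from-scratch parse, but only the edited statement was actually reparsed; in a recursive parser the same prefix/reparse/suffix split happens inside every node whose token range touches the edit.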
Bottom up is trickier. Error recovery is the meaningfully tricky part in all of this.
Before tree-sitter, I was constantly explaining this stuff to people (I followed the projects that these algorithms came out of: ENSEMBLE, HARMONIA, etc.). Now more people get that there are ways of doing this, but you still run into people who are re-creating things we solved in pretty great ways many years ago.
I do miss the "wrap" command when using other editors, but it could be implemented reasonably easily without a parse tree. I found that a lot of the structural edits correspond to indentation levels anyway, but the parse tree definitely helps.
Seems like "annoying" refers to a user interface annoyance.
I'm guessing the following since I couldn't tell what structured editing is like from the article:
Keyboard entry is immediate, but prone to breaking the program structure. Structured editing through specific commands is an abstraction on top of key entry (or mouse), both of which add a layer of resistance. Another layer might come from having to recall the commands, or if recognizing them, having to peruse a list of them, at least while learning it.
What does the developer's experience with incremental parsing feel like?
It's essentially the experience most of us already have when using Visual Studio, IntelliJ, or any modern IDE on a daily basis.
The term "incremental parsing" might be a bit misleading. A more accurate (though wordier) term would be a "stateful parser capable of reparsing the text in parts". The core idea is that you can write text seamlessly while the editor dynamically updates local fragments of its internal representation (usually a syntax tree) in real time around the characters you're typing.
An incremental parser is one of the key components that enable modern code editors to stay responsive. It allows the editor to keep its internal syntax tree synchronized with the user's edits without needing to reparse the entire project on every keystroke. This stateful approach contrasts with stateless compilers that reparse the entire project from scratch.
This continuous (or incremental) patching of the syntax tree is what enables modern IDEs to provide features like real-time code completion, semantic highlighting, and error detection. Essentially, while you focus on writing code, the editor is constantly maintaining and updating a structural representation of your program behind the scenes.
The article's author suggests an alternative idea: instead of reparsing the syntax tree incrementally, the programmer would directly edit the syntax tree itself. In other words, you would be working with the program's structure rather than its raw textual representation.
This approach could simplify the development of code editors. The editor would primarily need to offer a GUI for tree structure editing, which might still appear as flat text for usability but would fundamentally involve structural interactions.
Whether this approach improves the end-user experience is hard to say. It feels akin to graphical programming languages, which already have a niche (e.g., visual scripting in game engines). However, the challenge lies in the interface.
The input device (the keyboard) is designed for natural text input and has limitations when it comes to efficiently interacting with structural data. In theory, these hurdles could be overcome with time, but for now the bottleneck is mostly a question of UI/UX design. And as of today, we lack a clear, efficient approach to tackle this problem.
Google just recently figured this out (that documents need to be hierarchical):
https://lifehacker.com/tech/how-to-use-google-docs-tabs
Also interestingly both Documents and Code will some day be combined. Imagine a big tree structure that contains not only computer code but associated documentation. Again probably Jupyter Notebooks is the closest thing to this we have today, because it does incorporate code and text, but afaik it's not fully "Hierarchical" which is the key.
I've had a tree-based block-editor CMS (as my side project) for well over a decade and when Jupyter came out they copied most of my design, except for the tree part, because trees are just hard. That was good, because now when people ask me what my app "is" or "does" I can just say it's mostly like Jupyter, which is easier than starting from scratch with "Imagine if a paragraph was an actual standalone thing...yadda yadda."
  #define FOO }

  int main() {
      FOO
"No one writes code like that!" Actually, they do, and mature C code-bases are full of such preprocessing magic.