and games like Dreams have proven that you can ship world-class experiences using CPU rasterization. If it's easier and it performs well enough, there's nothing wrong with it.
The custom CPU rasteriser (Star Machine) that pushes 4K 120Hz is mentioned in the intro, but the spline-based terrain implementation covered by the article is just a prototype developed in Blender, which is used for rapid iteration on the algorithm.
While the Blender version is at least partially GPU accelerated, the final implementation in Star Machine will be entirely on the CPU. It's currently unknown if the CPU implementation will trace against the cached height map or against a sparse point cloud (also cached).
[1] https://github.com/Aeva/star-machine/blob/excelsior/star_mac...
- A ray tracer runs on the CPUs, and generates surfels (aka splats)
- The surfels are uploaded to the GPU
- Then the GPU rasterizes the surfels into a framebuffer (and draws the UI, probably other things too)
So it's the ray tracing that's running on the CPU, not the rasterizer. Compared to a traditional CPU ray tracer, the GPU is not idle and still doing what it does best (memory intensive rasterization), and the CPU can do the branchy parts of ray tracing (which GPU ray tracing implementations struggle with).
The number of surfels can be adjusted dynamically per frame, to keep a consistent framerate, and there will be far fewer surfels than pixels, reducing the PCIe bandwidth requirements. The surfels can also be dynamically allocated within a frame, focusing on the parts with high-frequency detail.
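For the curious, the trace-then-budget idea described above can be sketched in a few lines of Python. Everything here is my own guess at the shape of such a loop — the toy one-sphere "scene", the surfel dictionary, and the function names are mine, not Star Machine's:

```python
import math

def trace_surfel(origin, direction):
    """Intersect a ray with a unit sphere at the origin (a toy stand-in for
    the real CPU ray tracer) and return a surfel: position plus normal."""
    # Solve |o + t*d|^2 = 1 for the nearest positive t (d assumed unit length).
    b = 2.0 * sum(o * d for o, d in zip(origin, direction))
    c = sum(o * o for o in origin) - 1.0
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray missed the scene
    t = (-b - math.sqrt(disc)) / 2.0
    if t < 0.0:
        return None  # intersection behind the ray origin
    pos = tuple(o + t * d for o, d in zip(origin, direction))
    # For a unit sphere at the origin, the normal equals the hit position.
    return {"position": pos, "normal": pos}

def adjust_budget(budget, frame_ms, target_ms=1000.0 / 120.0):
    """Scale the per-frame surfel budget toward a 120 Hz frame target:
    frames that ran long get fewer surfels next frame, fast frames get more."""
    return max(1024, int(budget * target_ms / frame_ms))
```

The controller is deliberately naive (a straight proportional rescale); a real one would presumably smooth over several frames, but it shows why the surfel count — not the pixel count — is the knob that keeps the frame time fixed.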
It's an interesting design, and I'm curious what else is possible. Can you do temporal reprojection of surfels? Can you move materials partially onto the GPU?
Next thing I knew, the environment artists had started using stripes to sculpt the terrain :-P
Node-based environments can be powerful for exploring solutions like this, but they don’t often make it to Hacker News.
And unlike a text-based language, where there is (usually) a single flow path going in a single direction, with graphs like the one in the article you have several "flows", so your eyes need to move all over the place to connect the "bits" and figure out what is going on.
Though that last part could probably be solved by using DRAKON graphs instead (which can also use less screen space) since those have a more strict flow.
IMO graph-based visual languages are nice for very high level flows but the moment you need to use more than a couple "dot product" or "add values" nodes, it is time to switch to a text-based one.
I disagree about the single-direction “1D” nature of textual programs being unequivocally a benefit. When there are many independent paths that eventually combine, it’s easier to see the complete flow in a 2D graph. Linear text creates the illusion of dependency and order, even if your computation doesn’t actually have those properties.
Conserving screen space is a double-edged sword. If that were the most important thing in programming, we’d all be writing APL-style dense one-liners for production. But code legibility and reusability are about a lot more than doing the most with the fewest symbols.
This is where having separate functions helps - as a bonus, you can focus on that independent path without the rest being a distraction.
If there are multiple independent paths connecting at so many separate points that you couldn't easily isolate them into functions, the graph is already a mess.
> Conserving screen space is a double-edged sword. If that were the most important thing in programming, we’d all be writing APL-style dense one-liners for production.
That is a bit of an extreme case - it isn't about conserving all the screen real estate, just not being overly wasteful. After all, the most common argument you'll see about -say- Pascal-style syntax is that it is too verbose compared to C-like syntax, despite the difference not being that great. You'll notice that while APL-like syntax isn't very popular, COBOL-like syntax isn't popular either.
You don't have to go to extremes - notice that in my comment i never wrote that all uses of graphs should be avoided.
Think of it like using a shell script vs something like Python: you can do a ton of things in shell scripts and write very complex shell scripts, but chances are if a shell script is more than a few lines that glue other programs together, it'd be better to use Python (or something similar) despite both being "scripting" languages.
https://github.com/derkork/openscad-graph-editor
though I usually use:
Geometry nodes, on the other hand, I think are amazing. They really do provide a very useful abstraction over what would be a lot of tedious and error-prone boilerplate code.
With just a few nodes I can instance and properly rotate some mesh on every face of some other mesh. Boom, done in two minutes. And you can do so, so much more, of course.
The obvious downside is that in complex scenarios the graph can be more difficult to manage, and because of how it's evaluated there are times you need to think about "capturing" attributes (basically a variable that holds an older value of something that gets changed later). But nothing is perfect.
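For anyone wondering what that "rotate the instance to the face" step is actually doing under the hood: it boils down to building a rotation that sends the instance's up axis to the face normal. Here's a plain-Python sketch of that math via Rodrigues' rotation formula — my own illustration, not Blender's actual implementation:

```python
import math

def align_z_to(normal):
    """Rotation matrix sending +Z to the given unit-length `normal` — the
    kind of math an align-rotation-to-vector node does per face.
    Built with Rodrigues' formula around the axis Z x normal."""
    nx, ny, nz = normal
    ax, ay = -ny, nx            # cross((0,0,1), n) = (-ny, nx, 0)
    s = math.hypot(ax, ay)      # |Z x n| = sin(angle)
    c = nz                      # Z . n  = cos(angle)
    if s < 1e-9:                # normal is (anti)parallel to Z
        return ([[1, 0, 0], [0, 1, 0], [0, 0, 1]] if c > 0
                else [[1, 0, 0], [0, -1, 0], [0, 0, -1]])  # 180° about X
    ax, ay = ax / s, ay / s     # unit rotation axis (z component is 0)
    k = 1.0 - c
    return [
        [c + ax * ax * k, ax * ay * k,     ay * s],
        [ax * ay * k,     c + ay * ay * k, -ax * s],
        [-ay * s,         ax * s,          c],
    ]
```

Seeing it spelled out is a decent argument for the node: one graph node replaces all of that, including the annoying degenerate case when the face normal already points straight up or down.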
Looks like Blender has something reminiscent? Maybe? I haven't used it. https://docs.blender.org/manual/en/latest/render/shader_node...
[1] https://66.media.tumblr.com/142beeb156e568a2e1329775ad053fde...
> I cannot stress this enough the thing we want here is the thing you're probably used to calling "the surface normal", but—for reasons I am not responsible for—the thing we want is instead called the "binormal" here and the thing that is called the "normal" is instead a different thing. Why did the mathematicians do this to us?!
In case the author is reading, this paragraph has some misconceptions in it. Mathematicians did not cause your curve binormal to line up with your surface normal; that’s something you have control over. Curve normals will not usually be tangent to your surface, and binormals will not usually line up with a surface normal. There are multiple valid ways to define a curve normal & binormal. The most famous curve normal is the “Frenet” normal, which by definition points in the direction of curvature. If you use the Frenet frame, then the curve normal will be close to (but not exactly the same as) what you want for the reconstructed surface normal.
You can for sure set up your authoring system so that a curve normal you define is the surface normal you want, and the curve binormal becomes one of the surface tangents. (Curve tangent will be the other surface tangent.)
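To make the Frenet frame concrete, here's a minimal discrete version computed over a sampled polyline — my own sketch, not tied to Blender or any particular tool. The normal really does point toward the centre of curvature, which you can check on a circle:

```python
import math

def frenet_frame(curve, i):
    """Discrete Frenet frame at interior sample i of polyline `curve`.
    T = unit tangent, N = direction the tangent is turning (centre of
    curvature), B = T x N. Undefined on straight segments (zero curvature)."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def norm(v):
        l = math.sqrt(sum(x * x for x in v))
        return tuple(x / l for x in v)
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    t = norm(sub(curve[i + 1], curve[i - 1]))          # central-difference tangent
    t_prev = norm(sub(curve[i], curve[i - 1]))
    t_next = norm(sub(curve[i + 1], curve[i]))
    n = norm(sub(t_next, t_prev))                       # change in tangent direction
    return t, n, cross(t, n)

# A radius-2 circle in the XY plane: at every sample the Frenet normal points
# back at the origin, and the binormal is the constant +Z axis.
circle = [(2 * math.cos(a), 2 * math.sin(a), 0.0)
          for a in [k * 2 * math.pi / 64 for k in range(65)]]
t, n, b = frenet_frame(circle, 16)  # sample near angle pi/2, i.e. (0, 2, 0)
```

At that sample the tangent is roughly (-1, 0, 0), the normal roughly (0, -1, 0) (inward, toward the centre of curvature), and the binormal roughly (0, 0, 1) — which also shows why the Frenet frame misbehaves on straight runs and flips at inflection points, and why curve tools usually offer other framing choices.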
One thing that might be really useful to recognize is that even when curves have an intrinsic normal and binormal provided by your curve authoring tools, you can assign your own normal and binormal to the curve, and encode them as offsets from the intrinsic frame. (The “frame” is the triple [tangent, normal, binormal].) If Blender is letting you control the curve normals without moving control points, then it’s already doing this for you: it already has two different frames, an internal one you don’t see and a visible one you can manipulate.
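The offset-from-intrinsic-frame idea can be as simple as storing one twist angle per control point and rotating the intrinsic normal about the tangent to recover the authored normal. Again just an illustrative sketch (the names and the one-scalar encoding are mine):

```python
import math

def rotate_about(v, axis, angle):
    """Rodrigues rotation of vector v about a unit-length axis (here, the
    curve tangent) by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    dot = sum(a * b for a, b in zip(axis, v))
    cr = cross(axis, v)
    return tuple(v[i] * c + cr[i] * s + axis[i] * dot * (1 - c)
                 for i in range(3))

# Authored frame stored as a single scalar twist relative to the intrinsic
# frame: custom_normal = intrinsic normal rotated about the tangent.
tangent = (1.0, 0.0, 0.0)
intrinsic_normal = (0.0, 1.0, 0.0)
twist = math.pi / 2
custom_normal = rotate_about(intrinsic_normal, tangent, twist)
```

Since the custom normal stays perpendicular to the tangent by construction, one angle per control point (interpolated along the curve) is enough to store the whole authored frame.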