edit: turns out there is more... thanks
I remember at the time there was a lot of PR around this being the first game to introduce that effect and how the developers basically invented it.
I can’t comment on whether that was actually true or just PR BS, but it was definitely the first time I experienced it as a gamer.
So, since Jet Set Radio was released in June 2000, you can look for related papers from a couple of years before to see if new techniques were appearing. And, in fact, they were!
Disney paper (1998) on texture mapping for cel shading (the colour of a cartoon):
https://media.disneyanimation.com/uploads/production/publica...
NYU paper (1998) on applying outlines to 3D objects (the black outline of a cartoon):
https://mrl.cs.nyu.edu/publications/npr-course1999/hertzmann...
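For a feel of the texture-mapping idea, here's a minimal toon-ramp fragment shader in the same spirit (a sketch, not the paper's actual code; uRamp, uLightDir and vNormal are assumed names):

    #version 330 core
    uniform sampler2D uRamp;   // e.g. a 4x1 nearest-filtered texture: shadow/mid/lit bands
    uniform vec3 uLightDir;    // normalised, in the same space as the normal
    in vec3 vNormal;
    out vec4 fragColor;

    void main() {
        float ndl = max(dot(normalize(vNormal), uLightDir), 0.0);
        // the ramp texture replaces the smooth Lambert falloff with flat cartoon bands
        vec3 shade = texture(uRamp, vec2(ndl, 0.5)).rgb;
        fragColor = vec4(shade, 1.0);
    }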
It only looked like crap in its era because carts were expensive compared to CDs, which is less of an issue now.
Also, the hardware antialiasing and overuse of fog didn't help its case. Thankfully the former can be fixed via either hardware mods or emulation.
I’d still be interested to see whether those demos you saw were full games or not. I’ve seen a lot of cool effects in games get abandoned because they didn’t scale well to a fully fleshed-out game.
I developed the cel shading effect for the Dreamcast game 'Looney Tunes: Space Race' (developed by Infogrames Melbourne House) literally during the first week we had access to a Dreamcast development kit. Infogrames Sheffield (devs of Wacky Races) were shown an early version of our implementation and added a similar effect to their game. It looked great, but it went into their game pretty late in production, so the game hadn't really been optimised for it the way that ours was.
And the folks behind Jet Grind Radio came up with the effect on their own as well, and beat both of us to market. They were using exactly the same algorithm, but in a very different way: they fully embraced and leaned into the uneven, wide and jagged outlines, whereas Sheffield and we were fighting against them, trying to match a more uniform and traditional art style.
And then only about a year later, somebody seemed to have figured out how to make the edge-detection cel shading approach work in real-time on Xbox, for the game "Dragon's Lair 3D". I had done a test implementation of that approach on the Dreamcast, but it wasn't nearly performant enough for us to run on multiple characters at once while also running a game! Not sure whether it was down to the Xbox being more powerful or to them having a smarter algorithm than mine, but you can't argue with their results! If you're making a game that you want to look like an actual hand-drawn cartoon, that is still absolutely the best-quality way to do it, IMHO.
Someday I'll find an excuse to try my hand at implementing one of those again. Performance shouldn't be a problem at all any more, I imagine!
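For anyone curious what the edge-detection family looks like in practice, here's a minimal depth-based sketch (a Roberts cross on a linearised depth buffer; uDepth, uTexelSize and uThreshold are assumed names, and production versions typically add a normal-buffer test too):

    #version 330 core
    uniform sampler2D uDepth;    // linearised depth from an earlier pass
    uniform vec2 uTexelSize;     // 1.0 / screen resolution
    uniform float uThreshold;    // how big a depth jump counts as an edge
    in vec2 vUv;
    out vec4 fragColor;

    void main() {
        // Roberts cross: compare the two diagonals of a 2x2 neighbourhood
        float d00 = texture(uDepth, vUv).r;
        float d11 = texture(uDepth, vUv + uTexelSize).r;
        float d10 = texture(uDepth, vUv + vec2(uTexelSize.x, 0.0)).r;
        float d01 = texture(uDepth, vUv + vec2(0.0, uTexelSize.y)).r;
        float edge = abs(d00 - d11) + abs(d10 - d01);
        // draw an ink line where depth is discontinuous, else leave the scene untouched
        fragColor = edge > uThreshold ? vec4(0.0, 0.0, 0.0, 1.0) : vec4(0.0);
    }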
When looking into the edge detection approach recently, I came across this great method from the developer of Mars First Logistics:
https://www.reddit.com/r/Unity3D/comments/taq2ou/improving_e...
It looks like a frame from the Dutch comic book Franka!
Some open questions:
- How do you reduce the detail of a toon-rendered 3D model as the camera zooms out? How do you seamlessly transition between its more-stylised and less-stylised appearance?
- Hand-drawn 2D animations often have watercolour backgrounds. Can we convincingly render 3D scenery as a watercolour painting? How can we smoothly animate things like brush-strokes and paper texture in screen space?
- How should a stylised 3D game portray smoke, flames, trees, grass, mud, rainfall, fur, water...?
- Hand-drawn 2D animations (and some recent 3D animations) can be physically incorrect: the artist may subtly reshape the "model" to make it look better from the current camera angle. In a game with a freely-moving camera, could we automate that?
- When dealing with a stylised 3D renderer, what would the ideal "mesh editor" and "scenery editor" programs look like? Do those assets need to have a physically-correct 3D surface and 3D armature, or could they be defined in a more vague, abstract way?
- Would it be possible to render retro pixel art from a simple 3D model? If so, could we use this to make a procedurally-generated 2D game?
- Could we use stylisation to make a 3D game world feel more physically correct? For example, when two meshes accidentally intersect, could we make that intersection less obvious to the viewer?
There are probably enough questions there to fill ten careers, but I suppose that's a good thing!
There are various techniques to do this. The most prominent one IMO is from the folks at Blender [0] using geometry nodes. A Kuwahara filter is also "good enough" for most people.
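For reference, a basic (box) Kuwahara pass is only a few dozen lines of fragment shader. A minimal sketch, with uTexture/uTexelSize as assumed names (the anisotropic variants people usually ship are a fair bit more involved):

    #version 330 core
    uniform sampler2D uTexture;  // the frame to stylise
    uniform vec2 uTexelSize;     // 1.0 / resolution
    in vec2 vUv;
    out vec4 fragColor;

    const int RADIUS = 3;

    // mean and (summed per-channel) variance of one square sub-window
    void sampleRegion(ivec2 lo, ivec2 hi, out vec3 mean, out float variance) {
        vec3 sum = vec3(0.0);
        vec3 sumSq = vec3(0.0);
        float n = 0.0;
        for (int x = lo.x; x <= hi.x; x++) {
            for (int y = lo.y; y <= hi.y; y++) {
                vec3 c = texture(uTexture, vUv + vec2(x, y) * uTexelSize).rgb;
                sum += c;
                sumSq += c * c;
                n += 1.0;
            }
        }
        mean = sum / n;
        vec3 v = sumSq / n - mean * mean;
        variance = v.r + v.g + v.b;
    }

    void main() {
        // the four overlapping quadrants around the current pixel
        vec3 means[4];
        float vars[4];
        sampleRegion(ivec2(-RADIUS, -RADIUS), ivec2(0, 0), means[0], vars[0]);
        sampleRegion(ivec2(0, -RADIUS), ivec2(RADIUS, 0), means[1], vars[1]);
        sampleRegion(ivec2(-RADIUS, 0), ivec2(0, RADIUS), means[2], vars[2]);
        sampleRegion(ivec2(0, 0), ivec2(RADIUS, RADIUS), means[3], vars[3]);

        // keep the mean of the most uniform quadrant: this flattens detail
        // while preserving edges, which is what gives the painterly look
        vec3 best = means[0];
        float bestVar = vars[0];
        for (int i = 1; i < 4; i++) {
            if (vars[i] < bestVar) { bestVar = vars[i]; best = means[i]; }
        }
        fragColor = vec4(best, 1.0);
    }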
> When dealing with a stylised 3D renderer, what would the ideal "mesh editor" and "scenery editor" programs look like? Do those assets need to have a physically-correct 3D surface and 3D armature, or could they be defined in a more vague, abstract way?
Haven't used anything else but Blender + Rigify + shape keys + some driver magic is more than sufficient for my needs. Texturing in Blender is annoying but tolerable as a hobbyist. For more NPR control, maybe DillonGoo Studio's fork would be better [1]
> Would it be possible to render retro pixel art from a simple 3D model? If so, could we use this to make a procedurally-generated 2D game?
I've done it before by rendering my animations/models at a low resolution and calling it a day. The results are decent, but it takes some trial and error. IIRC, some folks have put in more legwork with fancy post-processing to eliminate things like pixel flickering, but I can't find any links right now.
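If anyone wants to try the post-process route instead of rendering at low resolution directly, a minimal pixelation pass is just UV snapping (uScene and uPixels are assumed names; palette quantisation and flicker fixes are extra work on top):

    #version 330 core
    uniform sampler2D uScene;  // the full-resolution rendered frame
    uniform vec2 uPixels;      // target "retro" resolution, e.g. vec2(320.0, 180.0)
    in vec2 vUv;
    out vec4 fragColor;

    void main() {
        // snap every fragment to the centre of its coarse virtual pixel
        vec2 uv = (floor(vUv * uPixels) + 0.5) / uPixels;
        fragColor = texture(uScene, uv);
    }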
So the future here may be a 3D-mesh-based game engine on a system fast enough to do realtime Stable Diffusion-style conversion of the frame buffer into "AI"-generated pixel art that strictly adheres to the underlying pose, for consistency.
[1] https://civitai.com/search/models?sortBy=models_v9&query=Pix...
It's kind of ridiculous that this occurs just as the dream of raytracing hardware approaches viability.
It hasn't 'occurred' at all. People extrapolated what they saw in the 50s to cars the size of houses, personal flying cars and robot helpers too.
Apparently you are not alone in that.
Sadly for you AI griefer bots are a thing, so that side of your reason to exist is also under threat, but you can deny the existence of those too if it will make you feel better.
You said "It's occurring", which is the present tense.
> Sadly for you AI griefer bots are a thing, so that side of your reason to exist is also under threat, but you can deny the existence of those too if it will make you feel better.
What is this supposed to mean? You think pointing out that what you said is happening right now isn't actually happening is 'griefing' you? You aren't being persecuted by someone replying to you. You can always avoid saying things that aren't true, or give evidence that they are.
If you show me some sort of realtime hallucination that takes rough renders and outputs temporally coherent images in 16ms or less I'll say that you are right that this is happening.
Hallucination.
I don't think you're hallucinating though, I think you just got mixed up with thinking a wild extrapolation was automatically coming true right now.
LOL! Convincing griefing requires a slightly larger context window!
The question is what for? The original claims:
> The future of 3D graphics is going to be feeding generative NNs with very simple template scenes, and using the NN to apply almost all the lighting and styling.
> It's kind of ridiculous that this occurs just as the dream of raytracing hardware approaches viability.
Or what you extrapolated that to in your imagination:
> If you show me some sort of realtime hallucination that takes rough renders and outputs temporally coherent images in 16ms or less I'll say that you are right that this is happening.
These are not the same. If you think so you have serious comprehension problems.
You're a simple troll arguing against things you've imagined. Get back under your bridge.
You go by the name CyberDildonics. You claim to think "is occurring" and "occurs" mean the same thing, so your ability to understand is clearly limited. The world does not owe you an explanation just because you want one, and insulting those who point this out is classic trolling, so the label is deserved.
Low res: https://x.com/Navy_Green/status/1525564342975995904
Stabilization: https://x.com/Navy_Green/status/1693820282245431540
Not exactly retro pixel art (or maybe it is, since it's been 25 years, omfg), but in Commandos 2+ we had 3D models for the characters, vehicles, etc., which we rendered at runtime to a 2D sprite that we then mixed with the rest of the pre-rendered 2D sprites and backgrounds.
https://bgolus.medium.com/the-quest-for-very-wide-outlines-b...
So fascinating! Thanks for indirectly leading me to this! I love thinking about all the various approaches available at the pixel/texel/etc level!
It's also another case where a very clever way of generating a type of SDF (signed distance field) is doing a lot of the heavy lifting. Such a killer result here as well! Any-width-outline-you-like in linear time?!! Amazing when compared to the cost of the brute-force ones at huge widths!
I wholeheartedly endorse SDFs, whether they are 'vector' ones (function-based, like Inigo Quilez's amazing work) or 'raster' ones (texel- or voxel-based, like in the article). Houdini supports raster SDFs very well, I think; it has a solid, mature set of SDF tools worth checking out (there's a free version if you don't have a license)!
And of course there are all the many other places SDFs are used!! So useful! Definitely worth raising awareness of, I reckon!
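That cheapness at runtime is easy to see: once you have the SDF, an outline of any width is a single texture read plus a smoothstep. A minimal sketch (uSdf and uWidth are assumed names, with distance stored in pixels):

    #version 330 core
    uniform sampler2D uSdf;   // distance to the object silhouette, in pixels
    uniform float uWidth;     // desired outline width, in pixels
    in vec2 vUv;
    out vec4 fragColor;

    void main() {
        float d = texture(uSdf, vUv).r;
        // a one-texel smoothstep keeps the edge antialiased at any width
        float outline = 1.0 - smoothstep(uWidth - 1.0, uWidth, d);
        fragColor = vec4(0.0, 0.0, 0.0, outline);  // black ink, alpha-blended over the scene
    }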
https://news.ycombinator.com/item?id=36809404
Unfortunately, it requires random access writes (compute shaders) if you want to run it on the GPU. But if CPU is fine, here are a few implementations:
JavaScript: https://parmanoir.com/distance/
C++: https://github.com/opencv/opencv/blob/4.x/modules/imgproc/sr...
Python: https://github.com/pymatting/pymatting/blob/afd2dec073cb08b8...
This 3D line painting tool (https://x.com/alexanderameye/status/1663523972485357569) also uses SDFs, which I then write to a tiny texture and sample at runtime.
SDFs are very powerful!
Articles like this one make me miss the field - working with 3D graphics, collisions, shaders, etc. had a magical feeling that is hard to find in other areas. You're practically building worlds and recreating physics (plus, math comes up far more practically and frequently than in most other programming fields).
I went the other way, from webdev to working in games, and in my experience it really is as fun/interesting as it sounds; the satisfaction of the work is so much higher, and the ceiling/depth of the topic is very high.
Been doing it for 4 years so far, and I've never hit a wall of boredom like I did in webdev.
Nothing beats coming in to work on a Monday, opening up the engine editor, seeing the mini world you're working on being rendered, and thinking about what cool feature you'll add next.
There are interesting challenges in web dev, but it's mostly related to scale, architecture and organizational complexity - realistically, no one is going to have their mind blown reading your loginController.
Game programming does have a lot more space for wizardry, you can code a shader or a mesh splitting algorithm that feels like black magic to others, and it's just you with a code editor.
There are still many reasons for me not to regret my move, mostly related to the realities of the market: lower salaries, crunch, the seasonal/project-based employment, limited choice of OS/dev tools, etc.
But credit where credit is due, that field is super fun.
Explaining a difficult concept in terms anyone can understand. Great diagrams and examples. And top marks on readability UX for spacing and typography.
OP, what inspired you to create your current theme? Have you ever considered creating an engineer-focused publishing platform?
It's similar to the one described as "Blurred Buffer", except that instead of doing a blur pass, I'm exploiting the borders created by the antialiasing (I think via multisampling, or maybe texture filtering).
I draw the object in plain opaque white on a transparent black background, and in the fragment shader I filter what does not have a fully opaque or transparent alpha channel (according to some hardcoded threshold). It gives decent enough results, it's cheap performance-wise and is very simple to implement.
const float MIN_ALPHA_THRESHOLD = 0.3;
const float MAX_ALPHA_THRESHOLD = 0.7;
[...]
if (fragmentColor.a < MIN_ALPHA_THRESHOLD) {
    // Outer edge of the border: ramp alpha linearly from 0 up to 1
    float outputAlpha = fragmentColor.a / MIN_ALPHA_THRESHOLD;
    outputColor = vColor * outputAlpha;
} else if (fragmentColor.a > MAX_ALPHA_THRESHOLD) {
    // Inner edge of the border: ramp alpha linearly from 1 back down to 0
    float outputAlpha = 1.0 - ((fragmentColor.a - MAX_ALPHA_THRESHOLD) / (1.0 - MAX_ALPHA_THRESHOLD));
    outputColor = vColor * outputAlpha;
} else {
    // "Inside" of the border: fully opaque outline colour
    outputColor = vColor;
}
So if the border goes transparent->opaque, I divide it into segments using the thresholds (transparent -> min_threshold -> max_threshold -> opaque) and remap the alpha values:
- smooth out the transparent->min_threshold segment, so that it goes linearly from a=0 to a=1
- make the min_threshold->max_threshold segment fully opaque (a=1)
- reverse and smooth out the max_threshold->opaque segment, so that it goes linearly from a=1 to a=0
Another similar trick: we didn't have full-res antialiasing for some reason (performance?), and most of the canvas was just a bunch of 2D rectangles (representing video frames). However, they could be rotated, and aliasing was visible along the edges. Instead of enabling full-screen antialiasing, I just extruded all quads a little bit while proportionally shrinking the UV coordinates, so that the visible "edge" was inside the actual 3D quad, and texture filtering, again, did all the work for free :)
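As I read it, one way to realise that trick in a vertex shader looks roughly like this (a sketch under assumptions: aCorner, uHalfSize and uExtrude are hypothetical names, and it relies on clamp-to-border sampling with a transparent border colour so filtering fades out the edge):

    #version 330 core
    uniform mat4 uMvp;
    uniform vec2 uHalfSize;  // original half extents of the quad
    uniform float uExtrude;  // extra margin pushed out past the image edge
    in vec2 aCorner;         // (-1,-1), (1,-1), (-1,1) or (1,1)
    out vec2 vUv;

    void main() {
        // grow the geometry a little past the image's intended footprint...
        vec2 pos = aCorner * (uHalfSize + vec2(uExtrude));
        // ...and rescale the UVs so the [0,1] texture still lands on the
        // original footprint; the margin samples outside [0,1], where a
        // transparent border colour + bilinear filtering fade the edge
        vec2 overshoot = (uHalfSize + vec2(uExtrude)) / uHalfSize;
        vUv = aCorner * overshoot * 0.5 + 0.5;
        gl_Position = uMvp * vec4(pos, 0.0, 1.0);
    }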
https://en.wikipedia.org/wiki/Pok%C3%A9mon_Sun_and_Moon
and also used to make other illustration-like styles such as
This repo is a great example of post-processing in Godot: https://github.com/sphynx-owner/JFA_driven_motion_blur_demo
https://store.steampowered.com/app/1294420/Rollerdrome/
I wonder if they used something like that.
Great article by the way, OP.
1) Build an array storing all unique edges of the faces (each edge being composed of a vertex pair V0, V1), as well as the normals of the two faces joined by that edge (N0 and N1).
2) For each edge, after transformation into view space: draw the edge if sign(dot(V0, N0)) != sign(dot(V0, N1)). In view space the camera sits at the origin, so V0 doubles as the view vector; the sign flip means one adjacent face points towards the camera and the other away, i.e. the edge lies on the silhouette. (A GPU flavour of the same test is sketched below.)
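For anyone wanting to try it on the GPU, here's a minimal sketch of the same test as a geometry shader over triangles with adjacency (my own translation, not the poster's code; it assumes the vertex shader has already transformed positions into view space):

    #version 330 core
    layout(triangles_adjacency) in;
    layout(line_strip, max_vertices = 6) out;

    // gl_in[0], [2], [4] hold the triangle; [1], [3], [5] the adjacent vertices
    uniform mat4 uProj;

    bool frontFacing(vec3 a, vec3 b, vec3 c) {
        // with the eye at the origin, 'a' doubles as the view vector
        return dot(a, cross(b - a, c - a)) < 0.0;
    }

    void emitEdge(vec3 p0, vec3 p1) {
        gl_Position = uProj * vec4(p0, 1.0); EmitVertex();
        gl_Position = uProj * vec4(p1, 1.0); EmitVertex();
        EndPrimitive();
    }

    void main() {
        vec3 v0 = gl_in[0].gl_Position.xyz;
        vec3 v1 = gl_in[1].gl_Position.xyz;
        vec3 v2 = gl_in[2].gl_Position.xyz;
        vec3 v3 = gl_in[3].gl_Position.xyz;
        vec3 v4 = gl_in[4].gl_Position.xyz;
        vec3 v5 = gl_in[5].gl_Position.xyz;

        // a silhouette edge separates a front-facing face from a back-facing one
        if (!frontFacing(v0, v2, v4)) return;
        if (!frontFacing(v0, v1, v2)) emitEdge(v0, v2);
        if (!frontFacing(v2, v3, v4)) emitEdge(v2, v4);
        if (!frontFacing(v4, v5, v0)) emitEdge(v4, v0);
    }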
Surprised this isn’t obvious.
Articles like this are awesome, I wish I could actually write a shader.
Mostly due to laziness, as a cel-shaded look requires less retexturing for my game than creating proper PBR materials.
The inverted hull method + cel-shaded look I initially used, however, really does have quite a performance hit.
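For comparison, the inverted hull pass itself is tiny; the cost comes from drawing the whole mesh a second time. A minimal vertex-shader sketch (names assumed), drawn in constant black with front-face culling enabled:

    #version 330 core
    uniform mat4 uMvp;
    uniform float uOutlineWidth;   // extrusion distance, in model units
    in vec3 aPosition;
    in vec3 aNormal;

    void main() {
        // inflate the mesh slightly along its normals; with front faces
        // culled, only the inside of this shell peeks out around the
        // original mesh, forming the outline
        vec3 inflated = aPosition + normalize(aNormal) * uOutlineWidth;
        gl_Position = uMvp * vec4(inflated, 1.0);
    }

Draw the shaded mesh first, then this pass with front-face culling and a constant-colour fragment shader; that second full draw of the geometry is exactly where the performance hit comes from.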