I'd love it if they had a Mastodon account.
Fixed the RSS feed (and the sitemap as well, since both were not being updated). Those scripts had been silently skipped since January.
Apologies for the inconvenience, it should be updated now!
I asked him if anyone had suggested experimenting with the Symmetric Nearest Neighbor filter[0], which is similar to Kuwahara but works slightly differently. With a little bit of tweaking it can actually embed an unsharp-mask effect into itself, which may eliminate the need for a separate Sobel pass.
I did some JavaScript-based experiments with SNN here:
https://observablehq.com/@jobleonard/symmetric-nearest-neigh...
... and I have no shader experience myself, so I have no idea how it compares to Kuwahara in terms of performance cost (probably worse, though).
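For anyone curious, the gist of SNN is easy to sketch: for each symmetric pair of neighbors around the center pixel, keep whichever member is closer in value to the center, then average the kept values. Here's a minimal greyscale version in TypeScript (the radius parameter and the clamped-edge handling are my own choices, not anything from the notebook):

```typescript
// Minimal sketch of a Symmetric Nearest Neighbour filter on a greyscale
// image stored as a row-major Float32Array of values in 0..1.
function snnFilter(src: Float32Array, w: number, h: number, radius: number): Float32Array {
  const dst = new Float32Array(src.length);
  // Sample with coordinates clamped to the image bounds (an assumption;
  // other border policies would work too).
  const at = (x: number, y: number) =>
    src[Math.min(h - 1, Math.max(0, y)) * w + Math.min(w - 1, Math.max(0, x))];
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const c = at(x, y);
      let sum = 0, n = 0;
      // Walk half the neighbourhood; each offset (dx,dy) is paired with its
      // point reflection (-dx,-dy) through the centre pixel.
      for (let dy = -radius; dy <= 0; dy++) {
        for (let dx = -radius; dx <= radius; dx++) {
          if (dy === 0 && dx >= 0) break; // centre reached: every pair visited
          const a = at(x + dx, y + dy);
          const b = at(x - dx, y - dy);
          // Keep whichever member of the pair is closer to the centre value.
          sum += Math.abs(a - c) <= Math.abs(b - c) ? a : b;
          n++;
        }
      }
      dst[y * w + x] = n > 0 ? sum / n : c;
    }
  }
  return dst;
}
```

The pair selection is what gives SNN its edge-preserving character: near an edge, the filter consistently picks the neighbor on the same side as the center pixel.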
Looking at the results, it seems that the author defines a base color (green for the leaves) and moves up and down in brightness using what amounts to a multiplication for the shadows and an add for the lights. The result feels a bit harsh and unnatural.
I would suggest he also play a bit with the hue values:
1. Darken the lights slightly, but move the green towards yellow.
2. Lighten the darks slightly, but move the green towards blue.
In a full rendering pipeline (not the simplified one used just for this effect) this would already be implemented, but it's a good thing to remember.
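To illustrate the kind of hue play I mean, here's a minimal sketch in TypeScript (the base hue, the yellow/blue targets, and the 0.3 blend factor are all made-up numbers, not anything from the article):

```typescript
// Hypothetical shading tweak: instead of pure multiply/add on brightness,
// pull the hue of highlights towards yellow and shadows towards blue.
// Hues are in degrees; green is roughly 120.
function shadeHue(lightness: number, baseHue = 120): number {
  const YELLOW = 60, BLUE = 240;
  if (lightness > 0.5) {
    // Highlights: blend the hue towards yellow as lightness rises.
    const t = (lightness - 0.5) * 2;
    return baseHue + (YELLOW - baseHue) * 0.3 * t;
  } else {
    // Shadows: blend the hue towards blue as lightness falls.
    const t = (0.5 - lightness) * 2;
    return baseHue + (BLUE - baseHue) * 0.3 * t;
  }
}
```

Feeding the result into an hsl() color (or the shader equivalent) trades some of the harsh brightness contrast for hue contrast, which tends to read as more natural.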
I guess this also applies to diffusion models. They look just as if a human had made them! But as that gets more common, there will be added value in doing whatever they can't do, because only that will look truly human.
I think the core issue is the difference between the digital and the analog as mediums. My understanding is that the computer is not a medium in the real sense. Rather (for the most part) it is a meta-medium: a medium that impersonates other mediums. It impersonates the photographer's darkroom, a typewriter, a movie editing station, etc.
TFA describes a more effective way to impersonate. It is no more a replacement of a painting than a photograph of my wife is a replacement of my wife.
> But as that gets more common, there will be added value in doing whatever they can't do, because only that will look truly human.
Here I would agree. Standing in front of a real painting is an entirely different experience to looking at a digital emulation of a painting on a screen. My new problem as an art teacher is that I find myself talking to students who claim to have seen the work of (for example) Michelangelo, yet I know they have never visited Italy.
I don't know that there has to be a point to it in that sense. In general, the goal isn't to generate static images that can be passed off as paintings. It's a real-time 3D rendering technique, for one thing -- you can't paint this.
The term "painterly" refers to non-photorealistic computer graphics styles that mimic some aspects of traditional painting. The point of doing this is to (try to) make a thing (a 3D scene, generally) look the way you want it to.
[0] https://sketchfab.com/3d-models/watercolor-bird-b2c20729dd4a...
I created a Flash app where you could upload a photo and then paint an impressionist-style painting from it. You could either auto-generate or use your mouse to paint sections of the photo.
I only have one screenshot of it now: https://imgur.com/a/5g40UEr
If I recall correctly, I'd take the position of the mouse, plus a rough direction it was traveling in, and then apply a few random brush strokes. Each stroke had a gradient applied from its start to its end, sampled from the underlying photo, and its length was limited if any edges were detected in the photo near the starting point.
In the end it was only a couple of hundred lines of ActionScript, but it all came together to achieve quite a neat effect.
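The stroke logic might have looked roughly like this; a speculative TypeScript reconstruction, not the real code (the original was ActionScript, and the Photo interface, edgeAt, and all the constants here are my inventions):

```typescript
// Rough sketch of the edge-limited, gradient-filled brush strokes described
// above, drawn onto a canvas 2D context.
interface Photo {
  colorAt(x: number, y: number): string;  // sampled source colour as a CSS string
  edgeAt(x: number, y: number): boolean;  // stand-in for whatever edge detector was used
}

function paintStrokes(ctx: CanvasRenderingContext2D, photo: Photo,
                      mouseX: number, mouseY: number, angle: number): void {
  const MAX_LEN = 40;
  // A few randomly jittered strokes per mouse position.
  for (let i = 0; i < 3; i++) {
    const a = angle + (Math.random() - 0.5) * 0.6;
    const x0 = mouseX + (Math.random() - 0.5) * 10;
    const y0 = mouseY + (Math.random() - 0.5) * 10;
    // Grow the stroke until it hits an edge in the photo or the max length.
    let len = MAX_LEN;
    for (let d = 1; d < MAX_LEN; d++) {
      if (photo.edgeAt(x0 + Math.cos(a) * d, y0 + Math.sin(a) * d)) { len = d; break; }
    }
    const x1 = x0 + Math.cos(a) * len;
    const y1 = y0 + Math.sin(a) * len;
    // Gradient from the colour under the start point to the colour under the end.
    const grad = ctx.createLinearGradient(x0, y0, x1, y1);
    grad.addColorStop(0, photo.colorAt(x0, y0));
    grad.addColorStop(1, photo.colorAt(x1, y1));
    ctx.strokeStyle = grad;
    ctx.lineWidth = 4 + Math.random() * 4;
    ctx.beginPath();
    ctx.moveTo(x0, y0);
    ctx.lineTo(x1, y1);
    ctx.stroke();
  }
}
```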