3 people illuminated by 2 lamps will project 6 shadows. Wherever shadows from both lamps overlap, that area will be "black" (or lit only by ambient light). Elsewhere, where fewer shadows overlap, you get a gradient of illumination.
"There’s just one light source, and relatively far away, so the shadow is simply an absence of light."
An ant outside any shadow can see both light sources.
An ant in a non-overlapping shadow cast from one object will have one light source blocked out but the other light source will be visible to the ant.
An ant in overlapping shadows from two objects will have both light sources blocked out. (Geometrically, this requires that the two overlapping shadows be cast by distinct light sources at separate positions.)
When one light source is visible to the ant, that area must be lighter than when no sources are visible. This is the scenario presented by the OP.
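To put numbers on it, here is a minimal sketch of the ant's situation; the light intensities and the ambient term are values I made up for illustration, not anything from the article.

    # Illumination at a point = ambient + the sum of every light that is not blocked.
    def illumination(light_intensities, blocked, ambient=0.05):
        return ambient + sum(i for i, b in zip(light_intensities, blocked) if not b)

    lights = [1.0, 1.0]                          # two lamps of equal intensity (assumed)
    print(illumination(lights, [False, False]))  # ant in the open: 2.05
    print(illumination(lights, [True, False]))   # one shadow: 1.05
    print(illumination(lights, [True, True]))    # overlapping shadows: 0.05 (darkest)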
edit: Since people seem not to believe this, you can find a representation in part E of this diagram
https://i.imgur.com/r6x6QPQ.jpeg
and a photograph of this nicely done with colored lights here
otikik pointed out a case of multiple light sources where overlapping shadows will be darker (otikik is correct)
fregus says "that's not correct" and argues a case that is true (shadows from a single light source will not be darker where they overlap), but it's a bad argument because it overgeneralizes. fregus's case involves the same light source, so it cannot be used to argue that otikik is incorrect, because otikik's argument explicitly requires multiple light sources.
I point out, responding to fregus, how otikik is correct and why you need to consider multiple light sources, with examples and physical evidence.
You question my reading comprehension for some reason
@fregus explicitly described a scene with two light sources. Thus the question about your reading comprehension.
They actually reference two light sources TWICE in their comment ("any two shadows, one from each light"; "from light A and in shadow from light B"). Hence the question of reading comprehension.
I think "overlapping shadows get darker" is just not a very intuitive way to think about it because it disregards the stuff that actually matters, which is the light sources and how the scene may block their light paths.
For early games that could not afford advanced methods of global illumination, making overlapping shadows get darker seems like a reasonable, though not necessarily correct, way of faking it.
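As a rough sketch of what that faking might look like (the darkening factor below is invented, and I'm not quoting any real engine): each overlapping shadow just multiplies the surface brightness by a fixed factor, with no knowledge of which light cast it.

    # Cheap trick: stacked shadows do get darker, but the result is not
    # derived from the actual light sources in the scene.
    def faked_brightness(base, overlapping_shadows, factor=0.6):
        return base * factor ** overlapping_shadows

    for n in range(4):
        print(n, round(faked_brightness(1.0, n), 3))  # 1.0, 0.6, 0.36, 0.216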
And the other thing ignored in these comments is perception. For the case of the two businessmen looking at their shadows under a bridge, it is easy to show with a diagram that some areas of overlapping shadows are, in fact, darker. But in such a poorly lit scene, people are likely to conclude that there is no difference, which isn't to say that there isn't one but, rather, that they don't perceive one.
edit: really nice and nostalgic read, I played almost all of the games mentioned.
https://old.reddit.com/r/gaming/comments/4jc38z/til_in_uncha...
In offline rendering the sky is the limit when it comes to SSS quality, if you have enough compute to throw at it. It's essential for getting skin to look right.
A GPU that can handle ray tracing, however, can do a lot of the techniques mentioned in the article (and others) more efficiently without doing what you’d consider full scene ray tracing, because the fundamental path tracing algorithms are very versatile.
Raytracing simulates real lighting. Shadows are simply where no light is.
IIRC in dark environments they also rig the shadow to be brighter than the ground to make sure it remains visible.
F-29 RETAL aka F29 Retaliator aka F29
That shadow was another small tidbit that gave this game its enormous feel of speed.
: :
: :____ ground
: /|
: / | wall
ground __:/__|
I expect area lights and soft shadows to become the norm as ray-traced techniques are adopted. If you have the hardware, it's worth checking out Quake 2 RTX to see what the future might look like.
Lastly, I've added your blog to my growing list of graphics resources: https://github.com/aaron9000/c-game-resources
The shadow overlap in MGS is not completely incorrect as there's ambient light, scattering and other similar global illumination phenomena.
>Mirror’s Edge (2008, PC) is basically Lightmaps: The Game.
Lol, true. Impressive game at the time, and even nowadays.
It was criminally underrated at the time, perhaps because it was shorter than some of its competitors.
I am toying with lighting a little voxel grid scene these days, targeting the RP2040 and a measly 160x120 px screen, and it's crazy how computationally and memory expensive this stuff is.
It's not an optical illusion or artistic vibe or anything. The sky is blue, shadows on a clear day are illuminated by bounced light from the sky, therefore shadows are blue.
If you look underneath cars you can see it: a sharp blue shadow where the sky is visible, fading to true black where the car's body occludes light from the sky.
If you combine this sharp blue sun shadow with a soft and black "AO" sky shadow you can get very pretty shadows for cheap.
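A minimal sketch of that combination, with made-up colors and no particular engine in mind: the sun term is switched off sharply by the sun shadow, while a blue sky term is scaled by an AO-style sky-visibility factor.

    SUN_COLOR = (1.00, 0.96, 0.90)   # warm white sunlight (assumed values)
    SKY_COLOR = (0.35, 0.50, 0.80)   # blue sky dome (assumed values)

    def shade(in_sun_shadow, sky_visibility):
        # sky_visibility: 1.0 = open sky, 0.0 = fully occluded (e.g. under the car body)
        sun = 0.0 if in_sun_shadow else 1.0
        return tuple(round(sun * s + sky_visibility * k, 2)
                     for s, k in zip(SUN_COLOR, SKY_COLOR))

    print(shade(False, 1.0))  # sunlit ground
    print(shade(True, 1.0))   # open sun shadow: lit only by the sky, reads blue
    print(shade(True, 0.1))   # under the car: nearly black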
But a good graphics rendering engine will do it. Shadows should carry a slight tint from the color of the sky.
Which is why some old screenshots of No Man's Sky bothered me. Pretty sure I saw scenes where shadows were purple despite a green sky.
Correlation/Causation: lots of things in graphics rendering work because of observed phenomena. A “good graphics engine” is only as good as the eyes that implemented it. Today’s engines still fall short of what is in front of us, and not because of a technical limitation.
> Pretty sure I saw scenes where shadows were purple despite a green sky.
If it’s a stylistic effect then it’s a stylistic effect. But otherwise, purple shadows are literally everywhere. Shadows can have an immense amount of chroma and vibrancy to them, or they can be incredibly cool and muted. It all depends on the context.
As we move more and more towards physics-based rendering, engines are shifting from imitating what's observed to properly defining the world and its interactions, getting realism as a byproduct.
The traditional (cheap) way of lighting stuff is to model ambient lighting as some constant, regardless of where light actually comes from, and to render shadows as dark patches. That is not at all how physics works, so you need an artist with a good sense of observation to make it look right. The artist may suggest some blue tint for the shadows because it looks right.
The physical approach is to start with just the sun, no ambient light, and simulate Rayleigh scattering, which will naturally give you a blue sky and bluish shadows. An artist can still be involved, but their job will be more about stylistic choices and evaluating corner-cutting techniques.
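Roughly what that traditional model looks like, as a sketch with invented constants (this isn't any specific engine's code): a flat, artist-chosen ambient color plus a direct sun term that a shadow simply switches off.

    AMBIENT = (0.18, 0.20, 0.28)   # hand-picked by the artist, slightly blue (assumed)
    SUN     = (1.00, 0.95, 0.85)   # direct sunlight color (assumed)

    def traditional_shade(in_shadow):
        # The shadow only removes the direct term; nothing here knows about the
        # sky or Rayleigh scattering, so any blue tint has to be authored by hand.
        direct = (0.0, 0.0, 0.0) if in_shadow else SUN
        return tuple(round(a + d, 2) for a, d in zip(AMBIENT, direct))

    print(traditional_shade(False))  # lit: ambient + sun
    print(traditional_shade(True))   # in shadow: just the flat ambient "dark patch"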
Sigh. Anyone who’s done physically based rendering will tell you it’s all bullshit. Even Maxwell’s equations are a special case of a much broader set of interactions. But a few decades ago you’d think that’s all there is. You don’t get realism as a byproduct, you get “looks real, ship it” as a byproduct.
It’s approximations. And that’s perfectly fine.
They should indeed get darker when there are multiple significant light sources, as in the Metal Gear Solid screenshot. This is because the addition of another obstruction (i.e. Solid Snake) causes more sources of light to be blocked.
The shadows of buildings were a pretty light color, and walking through them didn't change the temperature noticeably. But between the trees almost all of the sky was blocked, so the diffused light wasn't getting there, and the shadow was much darker and significantly colder than every other part of the city.
So - shadows can get darker or lighter, even if there's just one light source and it's very far :).
https://technology.riotgames.com/news/valorant-shaders-and-g...
The moment that’s still stuck with me happened while stealing a car in a back alley at night. Right as my player character entered the car, a policeman came around the corner. He "saw" me stealing the car and pulled his gun right when the headlights of the car turned on and cast a huge shadow of the policeman, in motion, onto a nearby wall.
Revolte, for the PowerVR PCX1, had stencil shadows in 1996.
https://www.youtube.com/watch?v=7BvtML5dIuI
The PowerVR PCX1 had hardware support for shadow volumes, which were implemented more efficiently than standard stencil shadows. Rather than drawing the scene multiple times, it basically did a depth-only pre-pass (in hardware, to an on-chip depth/stencil buffer) to determine visible pixels and tested the shadow volumes to determine which pixels are in shadow, then performed texture sampling and shading afterwards, with lighting brightness adjusted by the shadow volume results. It only shaded visible pixels, so overdraw would not waste bandwidth on unnecessary texture fetches.
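A conceptual restatement of that flow in pseudocode form (this is not the actual hardware behavior or any real API, and the shadow-volume test is heavily simplified): visibility and shadow tests come first, and only the one visible fragment ever gets textured.

    def shade_pixel(fragments, shadow_volumes):
        # 1. Depth-only pass: keep only the nearest fragment (the visible surface).
        visible = min(fragments, key=lambda f: f["depth"])
        # 2. Test shadow volumes against that depth (simplified to an interval check).
        in_shadow = any(v["near"] <= visible["depth"] <= v["far"] for v in shadow_volumes)
        # 3. Only now sample the texture and shade the single visible fragment,
        #    so hidden fragments never cost a texture fetch.
        light = 0.3 if in_shadow else 1.0
        return tuple(round(light * c, 2) for c in visible["texture_color"])

    frags = [{"depth": 5.0, "texture_color": (0.8, 0.6, 0.4)},   # visible
             {"depth": 9.0, "texture_color": (0.2, 0.2, 0.2)}]   # hidden, never shaded
    print(shade_pixel(frags, [{"near": 4.0, "far": 6.0}]))       # in shadow -> dimmed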
The Dreamcast, based on the successor to the PCX1, also had many games with shadow volumes. The Dreamcast's implementation was more flexible, and its volumes could adjust more than lighting, such as what texture is used, UV mapping, or even what blending equation is used for transparent polygons.
I've managed to get soft shadows on the DC (https://imgur.com/a/DyaqzZD at the end), although it's pretty fill rate heavy, since it falls back to a more standard stencil method and redraws the shadow multiple times.
[1] https://www.slideshare.net/slideshow/henrikgdc09-compat/3128...