Light field camera tech produces dynamic reflections because it captures and interpolates reality itself. I wonder how well this can be simulated with Cesium photogrammetry and the Unreal VR pipeline, so you don't need an expensive light field camera to produce similarly realistic static VR scenes.
That's impressive
"An MSI consists of a series of concentric spherical shells, each with an associated RGBA texture map. Like the MPI, the multi-sphere image is a volumetric scene representation. MSI shells exist in three dimensional space, so their content appears at the appropriate positions relative to the viewer, and motion parallax when rendering novel viewpoints works as expected. As with and MPI, the MSI layers should be more closely spaced near the viewer to avoid depth-related aliasing. The familiar inverse depth spacing used for MPI’s yields almost the correct depth sampling for MSI’s, assuming depth is measured radially from the rig center. The spacing we use is determined by the desired size of the interpolation volume and angular sampling density as described in Appendix A.
3.2.1 MSI Rendering. For efficient rendering, we represent each sphere in the MSI as a texture-mapped triangle mesh and we form the output image by projecting these meshes to the novel viewpoint, and then compositing them in back-to-front order. Specifically, given a ray r corresponding to a pixel in the output view, we first find all ray-mesh intersections along the ray. We denote C_r = {c_1, . . . , c_k} and A_r = {α_1, . . . , α_k} as the color and alpha components at each intersection, sorted by decreasing depth. We then compute the output color c_r by repeatedly over-compositing these colors. [..] We parameterize the MSI texture maps using equi-angular sampling, although other parameterizations could be used if it were necessary to dedicate more samples to important parts of the scene."
https://storage.googleapis.com/immersive-lf-video-siggraph20...
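For concreteness, here's a minimal sketch of the two mechanics the quote describes: inverse-depth placement of the shells, and rendering a ray by sorting its shell intersections back-to-front and repeatedly applying the over operator. This is not the paper's code; the names and fixed-size RGB arrays are my own.

    #include <algorithm>
    #include <array>
    #include <vector>

    // One ray-shell intersection: depth along the ray plus the color
    // and alpha sampled from that shell's RGBA texture.
    struct Intersection {
        float depth;
        std::array<float, 3> rgb;
        float alpha;
    };

    // Composite the intersections C_r / A_r for one output pixel:
    // sort by decreasing depth, then repeatedly "over"-composite.
    std::array<float, 3> CompositeRay(std::vector<Intersection> hits) {
        std::sort(hits.begin(), hits.end(),
                  [](const Intersection& a, const Intersection& b) {
                      return a.depth > b.depth;  // farthest shell first
                  });
        std::array<float, 3> out{0.f, 0.f, 0.f};
        for (const Intersection& h : hits)
            for (int i = 0; i < 3; ++i)
                out[i] = h.alpha * h.rgb[i] + (1.f - h.alpha) * out[i];
        return out;
    }

    // Inverse-depth spacing: uniform steps in 1/r between the far and
    // near shells, with r measured radially from the rig center.
    // Assumes numShells >= 2.
    std::vector<float> ShellRadii(float rNear, float rFar, int numShells) {
        std::vector<float> radii(numShells);
        for (int k = 0; k < numShells; ++k) {
            float t = float(k) / float(numShells - 1);
            radii[k] = 1.f / (1.f / rFar + t * (1.f / rNear - 1.f / rFar));
        }
        return radii;  // runs from rFar inward toward rNear
    }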
You are better off using the Google Maps API, if I'm being honest.
We had a lot of Cesium 3D Tiles of construction sites, captured by drones. It was quite easy to place them into Unreal Engine. It was fun to place random things in the map and mess around with it.
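If anyone wants to try this, it's roughly this much code with the Cesium for Unreal plugin. The actor and setter names below match recent plugin versions as far as I know (so treat them as assumptions), and the URL is a placeholder:

    // Assumes the Cesium for Unreal plugin is enabled; API names may
    // differ slightly between plugin versions.
    #include "Cesium3DTileset.h"
    #include "Engine/World.h"

    void SpawnDroneCaptureTileset(UWorld* World)
    {
        // Spawn a tileset actor and point it at the tileset.json that
        // the drone photogrammetry pipeline produced (placeholder URL).
        ACesium3DTileset* Tileset = World->SpawnActor<ACesium3DTileset>();
        Tileset->SetTilesetSource(ETilesetSource::FromUrl);
        Tileset->SetUrl(TEXT("https://example.com/construction-site/tileset.json"));
    }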
There will always be a need for software like this as long as the world has these restrictions.
A prairie is called a steppe if it's in Asia. A steppe is called a prairie if it's in America.
A hurricane is a typhoon, except it forms over the Atlantic or the eastern Pacific instead of the western Pacific.
None of these make any difference to what the object is like. Why do we care? We don't call mountains something different when they're in Asia. We don't even call them something different when they're underwater, which makes a huge difference.
Also, prairie and steppe are subtly different, though if it weren't for historic reasons they might be named the same. A prairie is more moist and has more vegetation as a result, and can support more trees and general flora/fauna.
This is not true of American English, where "cyclone" unambiguously refers to a tornado.
Outside the heartland, the NHC categorizes many storms below the level of hurricane (64 knot sustained winds) as various kinds of cyclone (tropical cyclone, extratropical cyclone, potential tropical cyclone, post-tropical cyclone, alongside depressions).
In fact, in meteorological terms, a tornado is definitively different from a cyclone (a column of rotating air vs. an area of air rotating around a low-pressure system). Hurricanes and typhoons are both kinds of cyclone.
No. Some people just like realism in their video games, and prefer the immersion or just think it looks cool. Not everything is a manifestation of existential dread.
If there were a general trend towards creating realistic environments as a means of psychological escape, one would expect those environments to tend to be more pristine and idyllic than reality. But often that realism is used to depict worlds no better, and sometimes far worse, than our own.
I'm sure some people are using realism as a means of escaping the destruction of our real world, walking simulators and the like exist, but I don't think that's a relevant influence overall. Just like with realistic CGI, people mostly do it because they can.
If I play a milsim like Arma, I don't actually want to be a soldier. Me playing Cooking Mama isn't a veiled psychological thing: I like both cooking and the gamified, simulated cooking.
That's one use.
There are many others, such as simulation of human behaviour in a digital twin (you can make sure experiment variables remain the same, difficult to do in the real world), collaboration and ease of access to archeological sites around the world, aiding architectural work, etc...
It’s used for architectural visualization, film backgrounds, and industrial visualization, among other things.
Surely you can imagine why having realistic settings is beneficial in those scenarios.
I know people who will practice flying at unfamiliar sites on it first - so that they’re better prepared when they get to the real thing.
For example, the INTENTION of antidepressants may have been at one time to treat severe depression, but as society (hypothetically) gets worse and more depressing, they may be used to help people cope with systemic problems and therefore ameliorate people's reaction to them.
Or phones: they initially were created to help people communicate but they are subverted (co-opted) by the system to make people more dependent on technology for the sake of technological growth.
Any number of products or ideas can start as one thing and become co-opted to serve a more insidious function beyond their initial purpose.
The point of having virtual environments is to enable you to see things in them you'd rather not see in real life.
On the other hand, there has been some recent backlash suggesting this direction and the defaults in Unreal are making games look worse. [1] [2] And Nanite may not be the silver bullet we were looking for.
[1] https://www.vg247.com/unreal-engine-5-has-been-a-disappointm...
Nanite was never supposed to be a silver bullet, but a way to enable high-density mesh streaming and, when used with Lumen, high-fidelity shadows and lighting. Both techniques work incredibly well.
The only valid criticisms are that games are getting too expensive to make and that frame generation can work against gameplay. But both are really up to game designers to use appropriately, not engine providers to dictate.
I'm not really sure what the alternative here is supposed to be anyway. Is it Unity? Godot isn't a realistic competitor yet for this level of fidelity, though it'll probably get there eventually.
On PC hardware it seems unlikely; there's more chance of Epic fixing their engine.