jojon
Evangelisation warning... I've been guzzling down the realtime raytracing Kool-Aid since the day, long ago, when I popped a 50MHz MC68060 into my Amiga, and a fullscreen preview render of a simple scene in Real3D v2 suddenly took only a few seconds to render. (That preview = single light source pinned to the camera, no shadows, everything the same opaque diffuse material.) Even after these long years of waiting, I remain impatient for things to get to where rasterisation (including the hybrid approach currently used in games (EDIT3: ...with RTX or DX equivalent)) can be consigned to a museum, and I do think the difference is "game changing"; a lot of the nice lighting in current games comes from static, prerendered lightmaps (sometimes multiple sets, to simulate a day-night cycle), and even UE5's Lumen is but a poor substitute. I'd argue that even when you do not consciously take note of the exchange of reflected light between one's shoes and the floor one stands on, or the lack thereof, "your brain does", and the latter inevitably produces this "cut-and-paste clipart" appearance. :7

There are a few bits of reasoning to my madness... On one hand, raytracing is *a lot* more processing-heavy than rasterisation, but on the other hand: given that it pretty much works per-pixel, it has the potential to allow for some optimisation options that are much harder to do full-buffer.

For starters: if the implementer thinks ahead just a little bit, computing should be a fair bit more distributable. It is not going to happen of course, but I'd like to see multi-GPU return, in a form where you could supplement a main graphics card with an arbitrary number of GPU-and-cache-only, RT-cores-only, power/performance-optimised "render farm" cards, where the bus/network/protocol that connects these would be an industry standard that allows you to freely mix cards from different manufacturers and generations, with the managing software doling out work allotments between them in accordance with their respective capabilities. Home and pro users could run the exact same application builds, and differ only in the size of their hardware stacks.

Next: given this fragmentation of the work, one could conceivably cast rays in passes, per working unit, adding fidelity with each one, and stop dead at any arbitrary point in time (typically in anticipation of imminent screen refresh), to collate what one has so far and construct the finished frame from it, inherently dynamically scaling rendering quality to performance and scene complexity.

Then there is the matter of the viewplane. Working per-pixel, this does not necessarily have to be a flat rectangle - it could be a sphere or cone section, or anything, which could save a lot of unnecessary work and buffer memory -- in particular with wide fields of view, since rendering to a single rectangle becomes more and more inefficient (by the tangent) the farther out from the centre you go, until you reach infinity at 180°.

Continuing along that line of thought, we have the distortion (usually "pincushion" type) caused by the lenses in a VR headset, which blows up the imagery in the centre of the lens and compresses it at the periphery (EDIT2: Ehm... It does the opposite, of course: compresses the centre and stretches the edges -- the compensating software *counter* distortion does what I wrote). This too could be accounted for (e.g. by shaping the viewplane), saving work by weighting the distribution of it towards the parts of the image where it does the most good, and doesn't go to waste. Here one could from the get-go cast one's rays with direction deviations given by modelling the lens; again optimising one's efforts by working smarter-not-harder, and eliminating the need to distort the rendered image in post in order to compensate for the lens distortions.

...and then the matter of foveated rendering... Given the very heavy bias in the density of cone-type photoreceptors on the human retina towards a tiny spot aligned with one's gaze, there is a fair bit of work to save by putting the lion's share of one's rays in the narrow view cone of that spot, where one's vision is the sharpest -- many upcoming HMDs will purportedly have eye tracking that is fast and accurate enough to support following the direction the user is watching, and updating which part of the frame receives this preferential treatment on the fly. (EDIT: ...again something that is easier to do when one can do it per-pixel. All the techniques I have mentioned are going to be pretty much prerequisite to making decent use of high resolution and high field of view VR headsets, to my mind.)

So there are my assessments and opinions... I fully expect the industry to disappoint. :7
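To make the "cast rays in passes, stop at the deadline, collate" idea a bit more concrete, here is a minimal Python sketch of just the scheduling and nothing more -- trace_ray() is a made-up stub, the 16 ms budget is merely an example, and no real renderer would be written like this:

```python
import random
import time

def trace_ray(x, y, pass_index):
    """Stand-in for an actual ray cast; returns an RGB triple.

    Invented stub so the sketch runs on its own -- a real tracer would
    intersect the scene and shade the hit point here."""
    v = random.random()
    return (v, v, v)

def render_progressive(width, height, frame_budget_s=0.016):
    """Accumulate per-pixel samples in passes until the frame deadline.

    Each completed pass adds one sample to every pixel; when the budget
    runs out we collate whatever has been gathered, so quality scales
    itself to scene cost and hardware speed."""
    deadline = time.perf_counter() + frame_budget_s
    accum = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
    passes = 0

    # Checking the clock only between passes keeps the per-pixel sample
    # count uniform; a real implementation would checkpoint per tile.
    while time.perf_counter() < deadline:
        for y in range(height):
            for x in range(width):
                r, g, b = trace_ray(x, y, passes)
                ar, ag, ab = accum[y][x]
                accum[y][x] = (ar + r, ag + g, ab + b)
        passes += 1

    # Collate: average however many passes were finished into the frame.
    frame = [[(r / passes, g / passes, b / passes) for (r, g, b) in row]
             for row in accum]
    return frame, passes

frame, passes_done = render_progressive(64, 48)
print(f"collated {passes_done} pass(es) before the deadline")
```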
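Similarly, a rough sketch of "spending the rays where they do the most good": each pixel gets a ray direction run through a crude radial lens model, and a sample budget that tapers off with distance from the tracked gaze point. The field of view, the k1/k2 coefficients and the radii below are invented placeholders, not any headset's real calibration data:

```python
import math

def lens_ray_direction(px, py, width, height, fov_deg=110.0, k1=0.22, k2=0.05):
    """Pixel -> view-space ray direction through a crude radial lens model.

    k1/k2 are invented coefficients standing in for a real per-headset
    calibration (sign and magnitude would come from measuring the optics);
    the point is only that the 'viewplane' need not be a flat rectangle."""
    aspect = width / height
    # Normalised coordinates, -1..1, origin at the lens centre.
    nx = (2.0 * px / width) - 1.0
    ny = 1.0 - (2.0 * py / height)
    # Radial term: displace the sample the way the lens/counter-distortion
    # would, instead of warping a flat render in post.
    r2 = nx * nx + ny * ny
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    nx, ny = nx * scale, ny * scale
    # Turn the (distorted) plane coordinate into a direction vector.
    half = math.tan(math.radians(fov_deg) / 2.0)
    dx, dy, dz = nx * half * aspect, ny * half, -1.0
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)

def foveated_sample_count(px, py, gaze_px, gaze_py, max_samples=16,
                          min_samples=1, inner_radius=80.0, falloff=300.0):
    """Per-pixel sample budget: full rate near the gaze point, tapering outside.

    The radii are in pixels and entirely arbitrary here; a real system would
    derive them from eye-tracker accuracy and the headset's degrees per pixel."""
    dist = math.hypot(px - gaze_px, py - gaze_py)
    if dist <= inner_radius:
        return max_samples
    t = min((dist - inner_radius) / falloff, 1.0)
    return max(min_samples, round(max_samples * (1.0 - t)))

# A pixel near the gaze gets the full budget, a peripheral one almost nothing.
print(foveated_sample_count(960, 540, 950, 545))  # near the fovea -> 16
print(foveated_sample_count(60, 40, 950, 545))    # far periphery  -> 1
print(lens_ray_direction(960, 540, 1920, 1080))   # centre pixel -> straight ahead
```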
-
Ah, yes; the limited anti-aliasing options that come with deferred rendering -- I love the increase in the number of dynamic light sources it enables, but the aliasing is indeed an eyesore, and the blurry AA options that remain are IMHO almost universally *worse* than the aliasing itself.
-
Had a Logitech... I think it was an MX1100: wireless, a huge, heavy chunk, but as if moulded after my hand, and it ran for months off of two AA batteries. Lost it when I accidentally dropped it into a cup of tea (yes, really). Replaced it with an MX Master, since the 1100 was out of production, and that thing is too tiny and badly angled in every way possible for my hand, and its built-in rechargeable battery barely lasted a day when it was new; at this point in its life, the thing has become a de facto wired device.
-
Neuschwanstein castle is among the "featured" bookmarks in Google Earth VR, by the way... :7
-
Although "Move as soon as possible, unless the landlord completely blows out and renovates the entire building", seems like the inevitable thing to do, I thought I'd mention that there are combination washer/dryer units -- got one myself, after once too many having forgotten (for several days) to empty the washing machine. Takes longer, tends to cost more than two separate machines, and have limited capacity, but should suffice for a household of two, and the one-step procedure is rather convenient. EDIT: Quite terrified about seeing the washer that close to the bathtub, by the way.
-
Not the animation side of things as such, but a good watch none the less (EDIT: pertinent to the rendering end): 8ecfZF-IuSI
-
The best I can offer is to keep trying to get to demo both at some length. :7

Demoing things like Valve's "The Lab" and the "Oculus Home" environment may not be entirely representative, because their respective art directions are both balanced around certain overall scene lighting levels that happen to be optimal for minimising apparent artefacts caused by the fresnel lenses: a pleasant, slightly-above-medium greyish look - low on saturation and contrast; think watercolour, but a tiny bit darker. :7 Go, instead, into a high contrast environment - a dark place dotted with bright accent lights, such as many cockpits in Elite Dangerous whilst out in dark space - and the "god rays" may jump out at you and smear themselves intrusively all over your vision, millimetres from your eyes.

Overall, the Rift (Consumer Version 1) chooses more pixels per degree of your field of view (EDIT3: ...for a significantly sharper image), at the natural cost of a reduction in the width of the latter. This is mitigated a bit by a secondary sacrifice in binocular overlap; you'll still see almost as far to the sides as in a Vive, but the last 5-10 degrees out to either side can only be seen by the eye on that side, as if you had the largest nose in the world. By all accounts, most people never seem to even notice this, but to me, personally, it is very annoying.

The Rift is also a more convenient experience; it is lighter, and has adjustable headphones mounted on its rigid frame, which makes putting it on much the same as putting on a baseball cap, without any fiddling with straps and cables and separate headphones whatsoever, and the Home environment starts right up, automatically, just from donning it. Its lenses arguably offer a clearer and more consistent image, where text out to the sides is still pretty damn legible, whereas with the Vive, sharpness falls off and you can get a bit of "double vision" between the outermost fresnel lens rings. A few of us, however, experience a bit of distortion in the Rift when looking around, which can cause some "brain strain".

The Rift will get its own motion controllers, like the Vive's wands, sometime in the autumn. They are more designed to be "part of your hand", so to speak, taking shape from a hand in resting position, so that you can almost forget you are holding anything at all. They also have analogue buttons with capacitive touch sensing for groups of fingers, which allows them to infer different hand gestures; e.g. squeeze the buttons under all four fingers, and take your thumb off all of the spots where it may rest, and that is interpreted as a thumbs up, or down, depending on where you are pointing the device (a rough sketch of that logic follows at the end of this post).

The Vive is brighter, but possibly may not conform to the sRGB standard - this appears to result in significantly higher overall contrast ratios in most stuff, compared to the Rift. On one hand, this kind of seems to add a bit of a "realistic" feel to the imagery - on the other, it causes a lot of loss in visual detail, with things in shadow crushing toward black, and bright things blowing out. If we had a good HDR and extensive colour gamut mastering standard, maybe an adaptation of BT.2020, together with colour profiling for each HMD, things could be reined in and those extra lumens exploited (a second sketch at the end of this post illustrates the idea)... (EDIT: Such a standard cannot come soon enough, as far as I am concerned; I want today's content to look as right as possible on tomorrow's hardware, regardless of manufacturer, or API, or capability.)
(EDIT2: ...and I also have some strong, disagreeable opinions on the act of tying drivers and hardware/metaverse framework APIs to vendor frontends.)

Despite its numerous shortcomings (...and, by most accounts untypically, I experience more fresnel glare in it than in the Rift), so far I find myself constantly turning to my Vive first - for standing and sitting experiences both - but that's me; people have wildly differing experiences with the devices. How well they fit the shape of each individual's face, alone, and how this affects not only comfort but, more importantly, the optical path, tends to be rather important. :7

"Why is this the first guy to think of this? It's basic human anatomy..."

Not the first to think of it, I am pretty sure, but certainly one of the few to try it, in the face of rather condescending, self-proclaimed know-it-all detractors. :7 The method has a few shortcomings; it can most notably not overcome the lack of vestibular feedback from velocity changes, but it does give you a helpful amount of sensory input from the physical treading, and is the method I favour at this time (EDIT4: ...as a complement to room-scale free movement, mind you - not a replacement); it just needs tracking of the user's feet, and perfect walk cycle phase matching. :7
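As an aside, the gesture inference on those controllers boils down to something about as simple as the sketch below -- the state fields and the extra "closed hand"/"open hand" cases are my own invention for illustration, not Oculus' actual SDK:

```python
from dataclasses import dataclass

@dataclass
class ControllerState:
    """Invented stand-in for one motion controller's relevant readings."""
    grip_squeezed: bool    # the buttons under all four fingers are pressed
    thumb_touching: bool   # capacitive sensing on ANY of the thumb rest spots
    pointing_up: bool      # device orientation, taken from tracking

def infer_gesture(state: ControllerState) -> str:
    """Squeeze with all four fingers and lift the thumb off every rest spot,
    and the pose reads as a thumbs up or down depending on orientation."""
    if state.grip_squeezed and not state.thumb_touching:
        return "thumbs up" if state.pointing_up else "thumbs down"
    if state.grip_squeezed:
        return "closed hand"
    return "open hand"

print(infer_gesture(ControllerState(grip_squeezed=True,
                                    thumb_touching=False,
                                    pointing_up=True)))   # -> thumbs up
```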
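And, to illustrate what per-HMD colour profiling could mean in practice: master content against known primaries (BT.2020 below), then remap through CIE XYZ into whatever primaries a measured profile says the panel actually has. The "measured" matrix here is a stand-in (it happens to be DCI-P3 D65), not data from any real headset:

```python
import numpy as np

# Linear BT.2020 RGB -> CIE XYZ (D65 white); the standard published matrix.
BT2020_TO_XYZ = np.array([
    [0.636958, 0.144617, 0.168881],
    [0.262700, 0.677998, 0.059302],
    [0.000000, 0.028073, 1.060985],
])

# A pretend "measured" profile for some HMD panel: its native RGB -> XYZ.
# These values happen to be DCI-P3 (D65) primaries, used purely as a plausible
# stand-in; a real profile would come from colorimeter measurements.
HMD_TO_XYZ = np.array([
    [0.486571, 0.265668, 0.198217],
    [0.228975, 0.691739, 0.079287],
    [0.000000, 0.045113, 1.043944],
])

def master_to_hmd(rgb_bt2020_linear):
    """Remap linear, BT.2020-mastered colour into the panel's native primaries."""
    xyz = BT2020_TO_XYZ @ np.asarray(rgb_bt2020_linear, dtype=float)
    native = np.linalg.inv(HMD_TO_XYZ) @ xyz
    # Colours outside the panel's gamut land outside 0..1; a real pipeline
    # would gamut-map gracefully, here we simply clip.
    return np.clip(native, 0.0, 1.0)

print(master_to_hmd([0.25, 0.50, 0.75]))
```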
-
VorpX does offer the 3D-projection-in-cinema mode, for the games it can deal with. Unfortunately, at some version it stopped supporting older revisions of the Oculus runtime, and it comes with a fantastically clever auto-update-online-at-startup system, which rules out rolling back to an older version that can be used with a DK1.