I want a Mixed Reality thing that works the opposite of how I am seeing it done right now...
Right now it's augmenting reality with fake shit. Like you see the real world and can put non-real things in it. What I'd like to be able to do is be in a fake world and bring real world objects into that.
Like, say I'm in VRChat just sitting somewhere hanging out, it would be cool if I could bring my drink, my food, my vape, etc. into the game so I don't have to fumble around looking for them with my hands. I haven't found anything that does this though... :/
I think you will have the same problem as with 3D printers and accuracy. Like, people don't realize their 3D printers are precision machines that are not accurate. There is considerable variation in the actual zero-point location of the nozzle tip on the bed. It simply doesn't matter, because every location is relative to that zero point.
This is why 3D printers are cheap as a hobby tool. When you start trying to design a printer with two extruder heads that move independently, you need to know the absolute position of each print head, and that position must be accurate at all times. This is a MUCH harder problem to solve.
In your VR headset, you are floating, like how a cheap printer works: your absolute position is irrelevant. If you want to incorporate external objects, you need to know absolute position. Computationally that is a larger problem, but it also requires a much more expensive sensor system, especially if you want to triangulate location based on sensors mounted to the headset, where the separation distance is small. I think you would really struggle to incorporate that and keep a half-decent battery-life-to-weight balance.
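To put rough numbers on the small-baseline problem: in the standard stereo error model, depth uncertainty grows with the square of distance and shrinks with camera separation. A minimal sketch, with all numbers as illustrative assumptions rather than specs of any real headset:

```python
# Illustrative stereo-depth error model; focal length, disparity error,
# and baselines below are assumptions, not specs of any real headset.

def depth_error_m(z_m, baseline_m, focal_px=500.0, disparity_err_px=0.5):
    """Standard stereo error model: dZ ~ (Z^2 / (f * B)) * d_disparity."""
    return (z_m ** 2) / (focal_px * baseline_m) * disparity_err_px

# ~10 cm headset-like camera separation vs. a 50 cm external rig
for baseline in (0.10, 0.50):
    err_cm = depth_error_m(2.0, baseline) * 100
    print(f"baseline {baseline * 100:.0f} cm -> depth error at 2 m: ~{err_cm:.1f} cm")
```

With those assumed numbers you get roughly 4 cm of error at 2 m on the headset-like baseline versus under 1 cm on the wide rig, which is the whole "sensors mounted to the headset" problem in miniature.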
I feel like that would be significantly more difficult to implement. You'd have to be constantly tracking the objects, cutting out a portion of your gameplay to render them, and also making sure they don't overlap with anything in the game world that would block your view of them. I think it'd be better for them to add a slider that changes the amount of passthrough, so you can see both the game world and the real world. (Facebook showed a demo of this in one of their Quest ads, but has yet to deliver.)
They already do this with your hands, and those change shape. It shouldn't be harder to track something mostly rigid using the cameras and software it already uses for hand tracking. It would just have to be trained on those objects, since I assume it uses AI to handle some of this.
Oh I thought you meant passing the object itself through the playspace. I guess something like that could work, but again I don’t know exactly how everything works.
For now you can just bounce into passthrough mode for a half second. On Oculus you can set it so a double tap on the side of the device toggles passthrough mode on and off.
I do. Sometimes... I like to turn the boundary off when just sitting somewhere, because even if I'm nowhere near the edge, the boundary lines often pop up and mess with my vision. It sucks that passthrough mode only works when the boundary is on. It doesn't even make sense: if you're in passthrough, you don't need a boundary 🤷🏻‍♂️
But these clearly have pre-made 3D models onboard for rendering and appear to use IR tracking markers like the controllers (clues re: the technical hurdles of the feature). Trying to mask/clip-in the headset camera feed for tracked IRL objects would be at the mercy of variables like local lighting quality, object feature count, etc. Basically it could be hard to do well.
That said, an interim feature that might allow real-time interaction with IRL objects without switching to passthrough — and might be simple enough for an individual user to implement — would be to overlay an edge detection shader from the camera feed. In fact I bet a few headsets have something like this baked-in already, perhaps under accessibility settings.
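For what it's worth, that overlay is trivial to prototype on a desktop before anyone wires it into a headset. A minimal sketch using OpenCV's Canny detector on a webcam feed (camera index and thresholds are placeholder assumptions):

```python
# Minimal desktop sketch of the edge-overlay idea using OpenCV's Canny
# detector on a webcam feed. Camera index and thresholds are assumptions;
# a headset would do this as a shader on the passthrough feed instead.
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)    # edge map of the "real world"
    cv2.imshow("edge overlay", edges)   # in VR this would be blended over the scene
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The nice part is that an edge map carries almost no lighting or texture detail, so it's cheap to composite over the game world without breaking immersion the way full passthrough does.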
I can technically accomplish what I want with a more complicated setup using my PC, a camera, and some printed-out QR tags, since there are a couple of FOSS tools on GitHub specifically for adding tracking to anything. But I keep coming back to the fact that the HMD itself (in my case, a Quest 3) has hand tracking that works well enough that it should be able to do what I'm asking here without too many additional resources.
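For anyone curious what the PC-side half of that setup looks like, here's a hedged sketch using OpenCV's ArUco module (opencv-contrib-python, 4.7+ detector API) to spot printed tags in a webcam feed. The tag dictionary and camera index are placeholders, and the actual FOSS tools add pose estimation and streaming into the headset on top of this:

```python
# Sketch of the PC + camera + printed-tag approach using OpenCV's ArUco
# module. Dictionary choice and camera index are placeholder assumptions.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # webcam pointed at the desk
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _rejected = detector.detectMarkers(frame)
    if ids is not None:
        # each detected tag ID = one tracked object (drink, vape, etc.)
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("tag tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```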
That would honestly be dope. It kinda reminds me of this one YouTube video I saw where this dude recreated his room in VR so he wouldn't bump into things and could easily walk to his bed or chair without taking the headset off.