Some sort of local shared space #82
An image marker printed on the floor would be the simplest approach, but perhaps we can do something smarter?
For phones, I've tried to do something like this: show an image on one phone, capture the marker (image) position with the other one, and sync spaces with WebRTC. I can share my WIP on Glitch if anyone is interested.
After discussion we think it would be really useful to build an example of how to do this, to make it easier for non-experts. Maybe it could become an API if usage takes off: 3 tracked anchors from a shared image to maintain a shared reference space.
Hello! Following along here. I am currently experimenting with building something that sounds very much like what is described here. We are trying to make it easier for people to build AR experiences that use an image as an anchor to place 3D objects around. I have it working using WebXR with the

We are also trying to build a higher-level API to define content and anchors. We call the combination/package of all of these an augmented world. The API would essentially allow each component of the position and rotation to be anchored relative to a plane or image. This could be an image on the floor, with an object placed around it on the floor. Or an image hanging on a wall, with an object placed on the floor in front of it. The latter is also using the

The experiment is not in a very shareable form at the moment, but I could clean it up and share it here if anyone is interested. Could someone explain a little more about what the goal is here? We would love to contribute an example if that would be helpful, and we would also appreciate any feedback!
It would be very good to see your demo. I think the main demonstration we would love to show is a mobile device and a headset device working in a shared space, and unfortunately image tracking doesn't work well on headsets; on mobile it is also a heavy performance overhead. Hit Test and Anchors are lowest-common-denominator features shared by both, and would be a robust and performant starting point. Keeping the synthesized space updated using the continuously updated anchor positions would be very powerful, although mathematically a little more difficult.

Pretend there is a piece of paper with "A", "B" and "C" printed on it. I think I would want the space to sit on B, pointing out of the page, with X aligned along AB and Z along BC.
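The A/B/C construction described above can be sketched with plain vector math. This is an illustrative sketch, not code from this thread; the axis sign conventions (which way X points along AB, and the handedness that puts Y out of the page) are assumptions.

```javascript
// Illustrative sketch: build a shared-space basis from three points A, B, C
// that each user selects in the same order. Assumes AB is orthogonal to BC;
// the sign conventions here are a choice, not part of any spec.

function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }

function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}

function normalize(v) {
  const l = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / l, v[1] / l, v[2] / l];
}

// Origin at B, X along B->A, Z along B->C, and Y = Z x X,
// which points out of the page for a right-handed layout.
function sharedBasis(A, B, C) {
  const x = normalize(sub(A, B));
  const z = normalize(sub(C, B));
  const y = cross(z, x);
  return { origin: B, x, y, z };
}
```

In practice A, B and C would come from hit-test results promoted to anchors, so the basis can be recomputed whenever the anchors' poses are refined.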
I will make sure to share a demo whenever it's ready. I may be misunderstanding the goal here: how would a shared space be bootstrapped using only hit testing and anchors? Would each user select the same point in physical space and have an anchor placed there? Also, is the goal to have some dynamic state that is synchronized (like moving objects), or would static objects work?
Yes. Using only hit testing and anchors, the users would each select three points which lie in the same plane, on lines AB and BC which are orthogonal to each other, in the same order (A -> B -> C) in the real world. This is enough information to generate an offset reference space which is the same for both users.
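The remaining math step can be sketched as follows, assuming an orthonormal right-handed basis (x, y, z as column vectors) has already been built from the three selected points: convert it to the quaternion form that `XRRigidTransform` takes, so each user can derive the same offset space. The function name is illustrative; only `XRRigidTransform` and `getOffsetReferenceSpace` are real WebXR API.

```javascript
// Illustrative sketch: rotation-matrix-to-quaternion conversion for a basis
// whose columns are the shared space's x, y, z axes (orthonormal,
// right-handed). Standard Shepperd-style case split on the trace.
function quatFromBasis(x, y, z) {
  const m00 = x[0], m01 = y[0], m02 = z[0];
  const m10 = x[1], m11 = y[1], m12 = z[1];
  const m20 = x[2], m21 = y[2], m22 = z[2];
  const trace = m00 + m11 + m22;
  if (trace > 0) {
    const s = Math.sqrt(trace + 1) * 2;
    return { x: (m21 - m12) / s, y: (m02 - m20) / s, z: (m10 - m01) / s, w: s / 4 };
  } else if (m00 > m11 && m00 > m22) {
    const s = Math.sqrt(1 + m00 - m11 - m22) * 2;
    return { x: s / 4, y: (m01 + m10) / s, z: (m02 + m20) / s, w: (m21 - m12) / s };
  } else if (m11 > m22) {
    const s = Math.sqrt(1 + m11 - m00 - m22) * 2;
    return { x: (m01 + m10) / s, y: s / 4, z: (m12 + m21) / s, w: (m02 - m20) / s };
  } else {
    const s = Math.sqrt(1 + m22 - m00 - m11) * 2;
    return { x: (m02 + m20) / s, y: (m12 + m21) / s, z: s / 4, w: (m10 - m01) / s };
  }
}
```

With an origin at B and `q = quatFromBasis(x, y, z)`, each client would then call something like `baseSpace.getOffsetReferenceSpace(new XRRigidTransform({x: B[0], y: B[1], z: B[2], w: 1}, q))`, so both end up rendering in the same space.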
Ok, I think I understand. Thanks! I will try to experiment with this in the next few weeks.
Thank you so much! It is really appreciated!
I have done some work in this area and can create a demo video if helpful. The method is roughly as follows:
While it's not elegant, as it requires some manual steps on the part of the user, it does work, and it is repeatable across a variety of device vendors (Android / iOS / Quest / AVP).
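One building block common to marker-based approaches like this: once a device knows the marker's pose in its own local space, inverting that pose lets it re-express local poses relative to the marker, which makes them shareable across devices. A minimal sketch with plain quaternion math follows; the function names are illustrative, not taken from the demo.

```javascript
// Illustrative sketch: invert a rigid pose {position, orientation} so local
// poses can be re-expressed in the marker's frame. Quaternions are {x,y,z,w}.
const cross3 = (a, b) => [a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0]];

function quatConjugate(q) { return { x: -q.x, y: -q.y, z: -q.z, w: q.w }; }

// Rotate vector v by unit quaternion q: t = 2 (qv x v); v' = v + w t + qv x t.
function quatRotate(q, v) {
  const qv = [q.x, q.y, q.z];
  const t = cross3(qv, v).map(c => 2 * c);
  const u = cross3(qv, t);
  return [v[0] + q.w * t[0] + u[0],
          v[1] + q.w * t[1] + u[1],
          v[2] + q.w * t[2] + u[2]];
}

// Inverse of a rigid pose: un-rotate, then un-translate.
function invertPose(pose) {
  const qInv = quatConjugate(pose.orientation);
  const p = pose.position;
  return { position: quatRotate(qInv, [-p[0], -p[1], -p[2]]), orientation: qInv };
}

// Apply a pose to a point: rotate, then translate.
function transformPoint(pose, pt) {
  const r = quatRotate(pose.orientation, pt);
  return [r[0] + pose.position[0], r[1] + pose.position[1], r[2] + pose.position[2]];
}
```

Given the marker's pose `M` in a device's local space, `transformPoint(invertPose(M), p)` expresses a local point `p` in the marker's frame; applying the receiving device's own marker pose to that value maps it back into its local space.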
My understanding, based on the Image Tracking discussions, is that some devices don't track QR code markers well (Android, IIRC, though this needs verification). Otherwise, yes, this seems like a good idea.
The image tracking functionality as currently implemented in Chrome (https://github.com/immersive-web/marker-tracking/blob/main/explainer.md) requires naturalistic images; it can't track QR codes or similar synthetic markers. I had made an experiment tracking ArUco markers using Raw Camera Access on top of Chrome's WebXR AR mode, and that did work: https://storage.googleapis.com/chromium-webxr-test/r1255390/proposals/camera-access-marker.html
Correction: it's possible to make hybrid QR codes that incorporate enough texture to also work as naturalistic images for use with the ARCore-based image tracking. See for example https://antfu.me/posts/ai-qrcode as an extreme case, though this may degrade how well they work for traditional QR code detection.
Hi all, here is v1 progress on a proof of concept following the steps outlined above:
Videos:
- Part 1 - QR Code Marker Generator: https://github.com/immersive-web/proposals/assets/470477/7aba5c66-64f3-4adf-a34d-6e2fecc39c33
- Part 2 - Mobile App (WebXR AR mode) using QR for localization: https://github.com/immersive-web/proposals/assets/470477/f202d49c-e84d-464d-a271-242621293ee8
- Part 3 - Desktop mode mapper after AR app: https://github.com/immersive-web/proposals/assets/470477/9772a866-78e0-49bc-8534-e88c93648971
- Part 4 - Send to VR (WebXR VR mode): https://github.com/immersive-web/proposals/assets/470477/f5c8ad33-515a-4717-859a-afe760a5e1e5

Discussion:
Potential improvements:
Shared Anchors are not happening any time soon; what is another method we could use?
/facetoface to start some effort in this direction