
Some sort of local shared space #82

Open
AdaRoseCannon opened this issue Mar 21, 2023 · 14 comments

@AdaRoseCannon

Shared Anchors are not happening any time soon; what other method could we use?

/facetoface to start some effort in this direction

@AdaRoseCannon

An image marker printed on the floor would be the simplest approach, but perhaps we can do something smarter?

@HyroVitalyProtago commented Mar 21, 2023

For phones, I've tried something like this: show an image on one phone, capture the marker (image) position with the other, and sync the spaces over WebRTC. I can share my WIP on Glitch if anyone is interested.
But that's not compatible with VR/AR headsets...
So a marker inside the space is probably the best solution for now!

@AdaRoseCannon

After discussion, we think it would be really useful to build an example of how to do this, to make it easier for non-experts. Maybe it could become an API if usage takes off.

3 tracked anchors from a shared image to maintain a shared reference space.

@codynhat

Hello! Following along here. I am currently experimenting with building something that sounds very much like what is described here. We are trying to make it easier for people to build AR experiences that use an image as an anchor to place 3D objects around. I have it working using WebXR with the image-tracking feature flag and experimental flag enabled on Chrome for Android.
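
For context, the flow with that experimental feature looks roughly like the sketch below, based on the marker-tracking explainer. The marker element, the 0.2 m physical width, and refSpace are placeholder assumptions:

```js
// Sketch of Chrome's experimental image-tracking API (behind flags).
// The <img id="marker"> element, the physical width, and refSpace are
// placeholders for whatever the app actually uses.
const marker = await createImageBitmap(document.getElementById('marker'));

const session = await navigator.xr.requestSession('immersive-ar', {
  requiredFeatures: ['image-tracking'],
  trackedImages: [{ image: marker, widthInMeters: 0.2 }],
});

function onXRFrame(time, frame) {
  session.requestAnimationFrame(onXRFrame);
  for (const result of frame.getImageTrackingResults()) {
    if (result.trackingState === 'tracked') {
      // Pose of the physical image in the app's chosen reference space.
      const pose = frame.getPose(result.imageSpace, refSpace);
      // ...place content relative to pose.transform here.
    }
  }
}
session.requestAnimationFrame(onXRFrame);
```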

We are also trying to build a higher-level API to define content and anchors. We call the combined package of all of these an augmented world. The API would essentially allow each component of the position and rotation to be anchored relative to a plane or image. This could be an image on the floor, with an object placed around it on the floor, or an image hanging on a wall, with an object placed on the floor in front of it. The latter also uses the plane-detection feature.

The experiment is not in a very shareable form at the moment, but I could clean it up and share it here if anyone is interested. Could someone explain a little more about what the goal is here? We would love to contribute an example if that would be helpful, and we would also appreciate any feedback!

@AdaRoseCannon

It would be very good to see your demo.

I think the main demonstration we would love to show is a mobile device and a headset device working in a shared space, and unfortunately image-tracking doesn't work well on headsets. In addition, on mobile it carries a heavy performance overhead.

Hit Test and Anchors are lowest-common-denominator features which are shared by both, and would be a robust and performant starting point. Keeping the synthesized space updated using the continuously updated anchor positions would be very powerful, although mathematically a little more difficult.

Pretend there is a piece of paper with "A", "B" and "C" printed on it, like below. I think I would want the space to sit on B, pointing out of the page, aligned with X along AB and Z along BC.
(I may have the right-hand rule mixed up here.)

A B
  C
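
Something like the following sketch could construct that space, assuming three.js for the vector math. Given my right-hand-rule caveat, the handedness and the direction of getOffsetReferenceSpace's transform are worth double-checking:

```js
import { Vector3, Matrix4, Quaternion } from 'three';

// A, B, C are Vector3 positions of the three anchors in localSpace.
function sharedSpaceFrom(localSpace, A, B, C) {
  const x = new Vector3().subVectors(B, A).normalize();   // X along AB
  const z = new Vector3().subVectors(C, B).normalize();   // Z along BC
  const y = new Vector3().crossVectors(z, x).normalize(); // out of the page
  z.crossVectors(x, y).normalize(); // re-orthogonalize: taps won't be exact

  const q = new Quaternion().setFromRotationMatrix(
    new Matrix4().makeBasis(x, y, z)
  );
  // Pose of the shared frame: origin at B, rotated into the ABC basis.
  // Depending on the convention you want, the .inverse of this transform
  // may be the one to pass.
  const offset = new XRRigidTransform(
    { x: B.x, y: B.y, z: B.z },
    { x: q.x, y: q.y, z: q.z, w: q.w }
  );
  return localSpace.getOffsetReferenceSpace(offset);
}
```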

@codynhat

I will make sure to share a demo whenever it's ready.

I may be misunderstanding the goal here. How would a shared space be bootstrapped using only hit testing and anchors? Would each user select the same point in physical space and have an anchor placed there? Also, is the goal to have some dynamic state that is synchronized (like moving objects), or would static objects work?

@AdaRoseCannon

Yes, using only hit testing and anchors: the users would select three points which lie in the same plane, on lines AB and BC which are orthogonal to each other, in the same order (A → B → C) in the real world. This is enough information to generate an offset reference space which is the same for both users.
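
A sketch of the collection step, assuming a session requested with the 'hit-test' and 'anchors' features and a viewer-space hit test source created elsewhere (the names here are illustrative):

```js
// Collect three tapped points (A, B, C, in that order) as anchors.
const anchors = [];

session.addEventListener('select', async (event) => {
  if (anchors.length >= 3) return;
  const results = event.frame.getHitTestResults(hitTestSource);
  if (results.length === 0) return;
  // Anchor at the hit point so the platform keeps refining its position.
  anchors.push(await results[0].createAnchor());
});

// Each frame, read the (continuously updated) anchor positions and feed
// them into a basis construction like the sketch above.
function anchorPositions(frame, localSpace) {
  return anchors.map((a) => {
    const pose = frame.getPose(a.anchorSpace, localSpace);
    return pose && pose.transform.position; // null while tracking is lost
  });
}
```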

@codynhat

Ok I think I understand. Thanks! I will try to experiment with this in the next few weeks.

@AdaRoseCannon

Thank you so much!! It is really appreciated!

@kfarr commented Apr 16, 2024

I have done some work in this area and can create a demo video if helpful.

The method is roughly as follows:

  • Create a QR code that embeds longitude / latitude / elevation as a querystring, along with the target hostname for the application (see the URL sketch after this list)
  • Use a mobile device to scan the QR code and localize the WebXR scene based on it
  • Fetch the appropriate content given the long/lat/el; the user may now use the application on the mobile device
  • Optional: if the user wants an HMD, use the mobile device to initiate a session on the headset while maintaining the long/lat/el values
  • Use the same placed QR code to manually localize the mixed reality session: a semitransparent QR overlay helps the user localize accurately
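
The payload in the first step can be as simple as a querystring on the app's URL; a sketch (the field names and hostname are made up):

```js
// Generating the QR payload: app URL plus long/lat/elevation fields.
// 'lon', 'lat', 'el' and the hostname are illustrative, not a standard.
const params = new URLSearchParams({ lon: '-122.4194', lat: '37.7749', el: '12.3' });
const qrPayload = `https://example-ar.app/?${params}`;

// In the app, read the values back out of the scanned/opened URL.
const q = new URL(location.href).searchParams;
const origin = { lon: +q.get('lon'), lat: +q.get('lat'), el: +q.get('el') };
```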

While it's not elegant, as it requires some manual steps on the part of the user, it does work, and it is repeatable across a variety of device vendors (Android / iOS / Quest / AVP).

@AdaRoseCannon

My understanding, based on the Image Tracking discussions, is that some devices don't track QR code markers well (Android, IIRC, though this needs verification). Otherwise, yes, this seems like a good idea.

@klausw commented Apr 16, 2024

The image tracking functionality as currently implemented in Chrome for https://github.com/immersive-web/marker-tracking/blob/main/explainer.md requires naturalistic images; it can't track QR codes or similar synthetic markers.

I made an experiment tracking ArUco markers using Raw Camera Access on top of Chrome's WebXR AR mode, and that did work: https://storage.googleapis.com/chromium-webxr-test/r1255390/proposals/camera-access-marker.html
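
The Raw Camera Access part looks roughly like this sketch; it requires Chrome's 'camera-access' feature, session / gl / refSpace are assumed to exist, and the actual ArUco detection is left as a placeholder:

```js
// Grab the camera image each frame via Chrome's Raw Camera Access
// incubation, then hand it to a marker detector (placeholder below).
const binding = new XRWebGLBinding(session, gl);

function onXRFrame(time, frame) {
  session.requestAnimationFrame(onXRFrame);
  const viewerPose = frame.getViewerPose(refSpace);
  if (!viewerPose) return;
  for (const view of viewerPose.views) {
    if (view.camera) {
      // WebGL texture containing the current camera frame.
      const cameraTexture = binding.getCameraImage(view.camera);
      // ...read back pixels and run an ArUco detector here, then solve
      // the marker pose against view.transform.
    }
  }
}
session.requestAnimationFrame(onXRFrame);
```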

@klausw commented Apr 16, 2024

Correction: it's possible to make hybrid QR codes that incorporate enough texture to also work as naturalistic images with the ARCore-based image tracking. See for example https://antfu.me/posts/ai-qrcode as an extreme case, though this may degrade how well they work for traditional QR code detection.

@kfarr commented Apr 30, 2024

Hi all, here is v1 progress on a proof of concept following the steps outlined above:

  1. Create a QR code that embeds longitude / latitude / elevation as a querystring, along with the target hostname for the application: https://glitch.com/edit/#!/bollard-buddy-qr-maker
  2. Use a mobile device to scan the QR code and localize the WebXR scene based on it
  3. Fetch the appropriate content given the long/lat/el; the user may now use the application in mobile WebXR AR mode: https://glitch.com/edit/#!/bollard-buddy-ar
  4. Send the creation to desktop / VR: https://glitch.com/edit/#!/bollard-buddy-mapper

Videos:

Part 1 - QR Code Marker Generator: https://github.com/immersive-web/proposals/assets/470477/7aba5c66-64f3-4adf-a34d-6e2fecc39c33

Part 2 - Mobile App (WebXR AR mode) using QR for localization: https://github.com/immersive-web/proposals/assets/470477/f202d49c-e84d-464d-a271-242621293ee8

Part 3 - Desktop mode mapper after AR app: https://github.com/immersive-web/proposals/assets/470477/9772a866-78e0-49bc-8534-e88c93648971

Part 4 - Send to VR (WebXR VR mode): https://github.com/immersive-web/proposals/assets/470477/f5c8ad33-515a-4717-859a-afe760a5e1e5

Discussion:

  • This is far from complete.
  • This method does not actually use the QR target or an image target for localization per se; it requires manual user instruction to initiate the session facing the correct direction.
  • This method does not yet leverage compass orientation for the mobile AR experience, although it is possible and is displayed in the UI of the mobile app.
  • The WebXR AR experience is optimized for mobile devices, not headsets.
  • This method uses a polyfill for iOS but should work without flags or modification in Chrome for Android.

Potential improvements:

  • Use camera access to actually read the QR code during the WebXR session, to initialize at the start of the session (or reset mid-session)
  • Use compass orientation from the AR device
  • Test and bugfix headset use of the WebXR AR mode experience
  • Consider a standards implementation of WebXR AR mode on iOS so that no polyfill is required
  • Consider creating a simpler application that is just shared-space anchor creation + a viewing experience (2 total steps) instead of the current 3-step process
  • Consider experimenting with the image target API, with a "companion" image tracked immediately adjacent to the QR code, to localize via the image instead of the QR code from within the WebXR session
