Mneme Map

A mixed reality experience centered around an unfolding map that guides users back to a cherished location filled with personal memories.

Mixed Reality

Hand Tracking

Passthrough

Timeline

Sep 2023 - Dec 2023

Project Type

Personal Project

Tools

Unity / Luma AI / Photoshop / Figma

Overview

We often tie memories to places. Usually, we save these moments with photos and videos, but we can't step back into them. With 3D scanning and mixed reality, we can capture an entire setting and the people in it, letting us relive memories anytime, anywhere. My goal is to let everyone revisit cherished times beyond the limits of when and where they happened.

Prototypes

I developed the prototype with Unity and Meta Quest 3, incorporating hand tracking and full-color passthrough. In the demo, unfolding the map transported me back to my memories in New York City. I navigated the familiar streets and soaked in the sounds of each place. Working with passthrough for the first time was tricky, especially keeping the virtual objects in sync with the movements of the physical input objects. I resolved these challenges through multiple rounds of iteration and debugging.
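
As a rough illustration of that syncing challenge, here is a minimal Unity sketch of one way to keep a virtual object glued to a hand-tracked physical prop in passthrough. It uses the Meta SDK's OVRHand; the MapFollower class and the smoothing value are hypothetical, not the project's actual code.

```csharp
using UnityEngine;

// Minimal sketch: make a virtual map overlay follow a hand-tracked
// physical prop while passthrough is active. OVRHand comes from the
// Meta (Oculus) SDK; "smoothing" is a placeholder tuning parameter.
public class MapFollower : MonoBehaviour
{
    [SerializeField] private OVRHand trackedHand;   // hand holding the physical map
    [SerializeField] private Transform virtualMap;  // the virtual map overlay
    [SerializeField] private float smoothing = 12f; // higher = snappier follow

    private void Update()
    {
        // Only move the overlay while tracking is reliable; otherwise the
        // virtual map visibly drifts away from the physical prop.
        if (!trackedHand.IsTracked ||
            trackedHand.HandConfidence != OVRHand.TrackingConfidence.High)
            return;

        // Low-pass filter the pose so small tracking jitter does not make
        // the virtual map shake against the passthrough video.
        float t = 1f - Mathf.Exp(-smoothing * Time.deltaTime);
        virtualMap.position = Vector3.Lerp(virtualMap.position,
                                           trackedHand.transform.position, t);
        virtualMap.rotation = Quaternion.Slerp(virtualMap.rotation,
                                               trackedHand.transform.rotation, t);
    }
}
```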

Technology-Centered Research

Hand Tracking

I experimented with hand tracking by creating a prototype using a transparent bucket and the Oculus SDK. Depending on the holding gesture, the bucket turns into different kinds of lamps in virtual reality.
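
The gesture check behind this kind of prototype can be quite small. Below is a minimal sketch using the Oculus SDK's pinch detection on OVRHand; the two lamp objects and the gesture-to-lamp mapping are placeholders for illustration.

```csharp
using UnityEngine;

// Minimal sketch: map two holding gestures to two lamp variants.
// The lamp GameObjects and the specific gesture mapping are hypothetical.
public class GestureLampSwitcher : MonoBehaviour
{
    [SerializeField] private OVRHand hand;
    [SerializeField] private GameObject tableLamp; // shown on index pinch
    [SerializeField] private GameObject lantern;   // shown on middle pinch

    private void Update()
    {
        if (!hand.IsTracked) return;

        bool indexPinch  = hand.GetFingerIsPinching(OVRHand.HandFinger.Index);
        bool middlePinch = hand.GetFingerIsPinching(OVRHand.HandFinger.Middle);

        // Each holding gesture activates a different virtual lamp.
        tableLamp.SetActive(indexPinch);
        lantern.SetActive(!indexPinch && middlePinch);
    }
}
```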

Key Findings

  1. Hand tracking accuracy improves as the headset cameras detect more fingers.

  2. To pick up an object, fingers must be fully or nearly closed.

  3. Transparent objects enhance the precision of hand tracking, likely because they occlude the fingers less from the headset cameras.

  4. The way an object is held can alter its affordance.

Scene Understanding

I prototyped with the Oculus SDK to explore the scene understanding capabilities of the Meta Quest 3. I scanned the room and labeled the furniture. Using the controller, I toggled each piece between virtual mode and passthrough mode, and I shot small balls around the room to test the mesh's physics.
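
For reference, the per-object toggle in this experiment can be sketched roughly as follows. With an underlay passthrough layer active, hiding a scene object's virtual renderers simply reveals the real furniture behind them. OVRSceneAnchor, OVRSemanticClassification, and the "COUCH" label come from the Meta SDK's scene system; the button binding is a hypothetical choice.

```csharp
using UnityEngine;

// Minimal sketch: flip labeled scene objects between their virtual look
// and passthrough. Assumes an OVRPassthroughLayer underlay is active.
public class SceneObjectToggle : MonoBehaviour
{
    private bool virtualMode = true;

    private void Update()
    {
        // Hypothetical binding: the A button toggles every couch.
        if (!OVRInput.GetDown(OVRInput.Button.One)) return;

        virtualMode = !virtualMode;
        foreach (var anchor in FindObjectsOfType<OVRSceneAnchor>())
        {
            var label = anchor.GetComponent<OVRSemanticClassification>();
            if (label == null || !label.Contains("COUCH")) continue;

            // Only the renderers are toggled; colliders stay on, so the
            // physics test balls bounce off the mesh in either mode.
            foreach (var r in anchor.GetComponentsInChildren<Renderer>())
                r.enabled = virtualMode;
        }
    }
}
```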

Key Findings

  1. Once scene objects are labeled, virtual prefabs appear at the labeled locations, scaled to align with the scene mesh.

  2. The scene is configured entirely through manual labeling by the user; there is no automatic object recognition, and only a limited set of model categories is available.

  3. Scene understanding only works when the user is in the same room they previously scanned and labeled.

  4. Virtual objects can interact physically with the scene mesh, and switching between virtual reality and passthrough mode does not affect the physics.

Spatial Anchor

I developed a basic prototype using the spatial anchor feature of the Oculus SDK. I created a cube and linked it to a spatial anchor. Each time I played the game, the cube consistently reappeared at the saved spatial anchor point.
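
The save step can be sketched as below. The exact calls vary across Meta SDK versions; this follows the older callback-style OVRSpatialAnchor API, and the PlayerPrefs key is a placeholder for wherever the anchor UUID would actually be stored between sessions.

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch: attach a spatial anchor to the cube and persist its UUID
// so the next session can reload the anchor and respawn the cube there.
public class PersistentCube : MonoBehaviour
{
    public void SaveAnchor() => StartCoroutine(CreateAndSave());

    private IEnumerator CreateAndSave()
    {
        var anchor = gameObject.AddComponent<OVRSpatialAnchor>();

        // Anchor creation is asynchronous; wait until the runtime has
        // actually created it before trying to save.
        yield return new WaitUntil(() => anchor.Created);

        anchor.Save((savedAnchor, success) =>
        {
            if (!success) return;
            // The UUID identifies a real-world location, not a device
            // or an object; store it for the next session to load.
            PlayerPrefs.SetString("cube_anchor_uuid",
                                  savedAnchor.Uuid.ToString());
        });
    }
}
```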

Key Findings

  1. Once the spatial anchor is saved, it can be reloaded each time the user plays the game. The spatial anchor is linked to a specific real-world location, rather than to a device or object.

  2. All users can view the same shared spatial anchor within a 3-meter range. Beyond this distance, the anchor may begin to drift.

  3. To observe the same shared spatial anchor, users must be in the same physical space.

NeRF Scanning

I experimented with NeRF 3D scanning through Luma AI, scanning various objects with diverse materials and textures. After scanning, I imported the models into Unity. Additionally, I explored scene scanning using different approaches, including taking photos and uploading videos.

Key Findings

  1. Creating a smooth and clean object mesh from scans is challenging, making it difficult to develop a high-fidelity prototype in Unity using the scanned models.

  2. Scene scanning requires significantly more input images compared to object scanning, and obtaining clean edges for the scene is hard.

Design Iteration

From the initial concept to the final design, I experimented with various objects as physical inputs. My aim was to introduce new interactions within the XR environment that illustrate the connection between the user and their past memories. Guided by the physical inputs, I continually refined the virtual output and the overall experience.

Version 1.0

Version 2.0

Version 2.1

Version 2.2

Version 2.3

Visual Design

I aimed to create an experience that was deeply emotional and nostalgic, with a consistent mood across both physical and virtual objects. I started by identifying three main visual attributes that captured the essence of my concept, and then used these as the basis to develop the final design.

Design Elements

Map Visual Design

To capture a nostalgic vibe and mimic the appearance of a physical paper map, I went through several iterations to give the map a less digital look. In the final version, I added a grid and a texture that resembles cardboard to the virtual map.

Model Visual Design

For the New York City model, I aimed to make the city resemble a cardboard world, aligning with the overall experience style. I modified the model's texture in Photoshop to make it look less realistic. After experimenting with several versions, I selected the one that most resembled a raised relief on cardboard.