The snap-to concept reminds me of cats and wolves in Minecraft.

Sorry to disappoint, but shortly after meeting Dale, May explains that she is not a holographic projection, since that would be too expensive. She is part of the augmented reality displayed on his glasses, so only the person wearing the glasses can see and hear her.
I assume that May's understanding of the world works much like a human's. From a limited field of vision, our minds infer what they can't directly perceive. May can only see what Dale sees, although she probably comes with Google Maps installed. If Dale is walking down the street and an out-of-control car is bearing down on him from outside his view, May cannot warn him, since it is in neither of their fields of vision. However, she could be nice and stay subscribed to traffic and news reports if she wished.
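Just for fun, here is roughly how I picture that feed-watching habit, in Python. Everything in this sketch is invented by me: the feed URL, the function names, all of it.

```python
import time

# Imaginary traffic-alert feed; a real one would be some HTTP endpoint.
TRAFFIC_FEED = "https://example.com/traffic-alerts"

def fetch_alerts(url: str) -> list[str]:
    """Stand-in for a real feed request (e.g. an HTTP call plus parsing)."""
    return []  # pretend nothing is happening in town right now

def monitor_feeds(notify) -> None:
    """Poll the feed and relay anything new to the wearer."""
    seen: set[str] = set()
    while True:
        for alert in fetch_alerts(TRAFFIC_FEED):
            if alert not in seen:
                seen.add(alert)
                notify(f"Heads up, Dale: {alert}")
        time.sleep(30)  # check every half minute
```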
I wonder what would happen with sound, though. Let's say Dale is listening to some heavy tunes while he walks to the local supermarket. Although he hears nothing around him, May could presumably use the glasses' microphones to listen for dangerous sounds, interrupt his jolly carols for a moment, and inform him as necessary.
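A toy version of that interruption loop might look like this. The classifier and the audio objects are imaginary stand-ins, not any real library:

```python
# Sounds I'd want May to treat as interrupt-worthy.
DANGEROUS = {"car horn", "siren", "screeching tires"}

def classify(clip) -> str:
    """Pretend sound classifier; a real one would be a trained model."""
    return "birdsong"  # dummy label so the sketch runs

def audio_watchdog(mic_clips, player, speak) -> None:
    """Scan a stream of short audio clips and break in when needed."""
    for clip in mic_clips:
        label = classify(clip)
        if label in DANGEROUS:
            player.pause()  # cut the jolly carols
            speak(f"Careful, {label} nearby.")
            player.resume()
```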
This opens up some interesting possibilities, including echolocation: an AI like May could sense ultrasonic echoes, process them into a visual representation, and overlay it onto Dale's field of vision. This could potentially let him see farther and even through stuff!
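The math behind that one is at least real: an echo's round-trip time gives you distance. Here's a minimal sketch, assuming the glasses could somehow report the bearing and delay of each echo:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def echo_to_distance(round_trip_s: float) -> float:
    """Sound covers the gap twice, so distance is half the trip."""
    return SPEED_OF_SOUND * round_trip_s / 2

def build_overlay(echoes: list[tuple[float, float]]) -> list[dict]:
    """Turn (bearing, round-trip time) pairs into drawable markers."""
    return [
        {"bearing_deg": bearing, "distance_m": echo_to_distance(t)}
        for bearing, t in echoes
    ]

# A wall 5 m dead ahead echoes back in roughly 29 ms:
print(build_overlay([(0.0, 0.029)]))  # distance_m is about 4.97
```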
So, yeah. Cool.