Ray-Ban Meta smart glasses — now fully loaded for all!
It’s been a week and a half since Meta pushed the long-awaited “Look and Ask” functionality out to the rest of us owners who had been so anxiously waiting to experiment with it and test the boundaries of this emerging intersection of AI, voice prompting, and vision (https://www.ray-ban.com/usa). There is still so, so much more to test and experiment with, but even the seemingly basic prompts have been successful:
“Hey Meta - look and tell me how many fingers I’m holding up” (holding up four fingers in front of me, I got back “you are holding up four fingers”)
“Hey Meta - look and tell me the best move I should make” (looking at a staged chess-board position, I got a solid recommendation of knight to f6)
“Hey Meta - look at that restaurant and tell me how late it’s open until” (looking at a restaurant while out walking my dog, I got a correct response naming that restaurant and its 10 p.m. closing time)
“Hey Meta - look at that (same) restaurant and tell me if they serve margaritas” (the Meta AI voice assistant not only confirmed that the restaurant had margaritas on its menu, but noted that it offered several specialty margaritas as well)
And can I tell you how surprised I was to discover another provocative piece of tech that rolled out in the same big April 23rd Meta release that finally pushed “Look and Ask” to the remaining Ray-Ban Meta owners (those of us who hadn’t gotten access earlier, or even onto the previously staged “early access waiting list”): the ability to share the first-person video feed from your Ray-Bans with someone you’re talking to on Messenger or WhatsApp! Imagine the use cases and scenarios this opens up, from job guidance and training, to remote guitar lessons, to sharing a monumental event or a mountain-climb view…on and on…
So now what? As impressed as I am with the results from a limited number of experiments, I’m already hoping Meta is working to (very) soon enable the following:
Expand “Look and Ask” to work from a captured video clip (vs. the current single picture snap) so more variables can factor into Meta’s response. A video feed incorporated into the ask could potentially give the AI insight into things like wind speed and surrounding audio/voice input, or let it “simply” scan a room full of people and “read the room” from facial expressions, voice tones, etc.
Allow for conversational (back-and-forth) “Look and Ask” and/or “regular” Hey Meta call and response. Right now you can only ask one question and get one response, and any follow-up about that response has to be reintroduced as if you were raising the topic or question for the first time (see the sketch after this list for what a multi-turn exchange might look like).
Combine the video capability with iterative, conversational call and response to build on a problem or project in progress, much as illustrated in this recent Facebook post showcasing Google’s Gemini AI’s next-level capabilities: https://www.facebook.com/share/r/q2gL3vh2zLTjYRQT/?mibextid=xCPwDs
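To make the single-shot vs. conversational distinction concrete, here is a minimal sketch of what a multi-turn “look and ask” exchange could look like in code. It does not use any Meta or Ray-Ban API (none is publicly exposed for the glasses); it assumes the OpenAI Python SDK and the gpt-4o model purely as a stand-in for any vision-capable chat assistant, with a local photo playing the role of the glasses’ camera snap.

```python
# Hypothetical sketch: a multi-turn "look and ask" exchange against a generic
# vision-capable chat model (OpenAI SDK used only as a stand-in; this is NOT
# how the Ray-Ban Meta glasses actually work under the hood).
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def encode_image(path: str) -> str:
    """Read a local photo (standing in for the glasses' camera snap) as base64."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


snap = encode_image("chessboard.jpg")  # hypothetical file name

# Turn 1: the "look and ask" question, with the photo attached.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Look and tell me the best move I should make."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{snap}"}},
        ],
    }
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Turn 2: a follow-up that refers back to the previous answer. Because the full
# history (image and answer included) is resent, there is no need to restate the
# original question -- which is exactly what the glasses' one-shot flow can't do today.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Why is that better than castling here?"})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```

The only real difference between the two turns is that the whole conversation is carried forward, which is what would let the glasses handle a “wait, why that move?” without starting over.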
But again, all I can say is WOW with respect to this fully loaded, discreet, and multi-purpose piece of tech that the world will need to adjust to, whether it’s ready or not.