Within days of the launch of OpenAI’s GPT-4 language model, Be My Eyes, a free mobile app that connects blind and low-vision individuals with sighted volunteers, integrated the technology into its existing app to develop its ‘Virtual Volunteer’ digital assistant.
“This tool will push us further toward achieving our goal to improve accessibility, usability, and access to information globally, and aligns us with OpenAI’s stated principles on developing safe and responsible AI,” Be My Eyes said in a statement.
OpenAI, too, issued a release introducing Virtual Volunteer.
What is Virtual Volunteer?
(1.) According to Be My Eyes, the GPT-4-powered Virtual Volunteer uses the language model’s dynamic image-to-text generation capability. When a user sends an image, the assistant can answer questions about that picture and provide instant virtual assistance for a range of tasks.
(2.) Giving an example of the digital assistant’s ability, the statement claimed that if a user sends a picture of the inside of a refrigerator, the assistant will not only correctly identify the ingredients inside but also suggest what can be prepared with them.
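To illustrate the kind of request behind such a feature, here is a minimal sketch of how an image plus a question might be packaged for a GPT-4-style vision endpoint. This is not Be My Eyes’ actual code; the model name, helper function, and example URL are assumptions, modeled on the general shape of OpenAI’s public chat API.

```python
# Hypothetical sketch of an image-question request payload. The model name,
# helper name, and image URL are illustrative assumptions, not Be My Eyes'
# real implementation.

def build_image_question(image_url: str, question: str, model: str = "gpt-4o") -> dict:
    """Assemble a chat-style request pairing a user question with an image."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_image_question(
    "https://example.com/fridge.jpg",
    "What ingredients are in this refrigerator, and what could I cook with them?",
)
print(request["messages"][0]["content"][0]["text"])
```

A real integration would then send this payload to the model provider and read the generated answer back to the user, falling back to a human volunteer when no useful answer comes back.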
(3.) The sighted volunteer experience, however, is not going away. If the tool, which will be free, is unable to answer a question, it will automatically offer users the option to be connected to a sighted volunteer.
(4.) On the availability front, Be My Eyes is currently beta-testing the service with a small subset of users. This group of testers will be expanded over the next few weeks, followed by a broad release of Virtual Volunteer in the coming months.
(5.) The company further said it welcomes feedback to fine-tune the new tool, and that user safety is the priority.