Byron Spice | Thursday, December 5, 2024
"Cook until golden brown" is a recipe instruction virtually every home cook encounters. It's simple enough for sighted people to follow but for people with limited vision, it's just one of many obstacles that can make following a recipe frustrating. The list goes on: How do I know all pieces of meat have been flipped in the sauté pan? Do I put the onions in before or after the meat? Did I add all the ingredients? Did I skip a step?
To better understand how people with low or no vision find and follow recipes, School of Computer Science researchers interviewed 20 home cooks who are either totally or legally blind and four cooking instructors at a vision rehabilitation center. By gathering information about how blind people find and follow recipes and what problems remain to be solved, the researchers hope to identify new research opportunities and prioritize research that will have the most impact.
"People with vision impairments access recipes the same way sighted people do — sometimes in cookbooks but increasingly online," said Franklin Mingzhe Li, a Ph.D. student in the Human-Computer Interaction Institute (HCII).
But finding a recipe is just the beginning, Li said. People with vision impairments typically copy and paste an internet recipe into a word processor or text editor, go through it line by line to remove ads and extraneous information, and then use a text-to-speech device or screen reader to access it.
Before they start making a recipe, users must also be sure they have the right equipment. A variety of kitchen tools have been adapted for use by people with low or no vision, including audio-based smart tools, such as "talking" kitchen scales, and tools with tactile labels, such as Braille measuring cups and spoons. But sometimes people lose pieces of their kitchen sets or never acquired certain tools in the first place, creating potential mismatches between what they have and what a recipe requires.
Digital recipes — the ones cooks find on the internet — could automatically change the tools required, said Patrick Carrington, an assistant professor in the HCII.
"There's no reason why these digital recipes can't be augmented to fit your tools," he said.
Digital recipes could also automatically change their instructions to swap visual cues, such as "sauté until golden brown," for guidance based on time, sounds, smell or texture. A first step, Li said, should be to modify recipes to make them more accessible. Another important step would be to create a nonvisual database of common cooking references that could be automatically substituted for visual descriptors.
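To make the idea concrete, here is a minimal sketch of that kind of substitution: a small lookup table mapping visual descriptors to non-visual cues based on time, sound, smell or texture. The table entries and the rewrite_step() helper are illustrative assumptions, not part of the researchers' system.

```python
# Hypothetical mapping from visual descriptors to non-visual cues.
NONVISUAL_CUES = {
    "until golden brown": "for about 3 to 4 minutes, until it smells toasty and the sizzling quiets",
    "until translucent": "for about 5 minutes, until the onion feels soft and bends easily",
    "until bubbling": "until you hear a steady, rapid bubbling",
}

def rewrite_step(step: str) -> str:
    """Replace any known visual descriptor in a recipe step with a non-visual cue."""
    lowered = step.lower()
    for visual, nonvisual in NONVISUAL_CUES.items():
        if visual in lowered:
            return lowered.replace(visual, nonvisual)
    return step

print(rewrite_step("Sauté the onions until golden brown."))
# -> "sauté the onions for about 3 to 4 minutes, until it smells toasty and the sizzling quiets."
```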
Some technology used to access recipes doesn't work well in the kitchen. Cooks sometimes use smart speakers, for instance, but exhaust fans and other cooking noises can make it difficult to communicate. Smartwatches could provide a better way to interact with a smart speaker or other audio reader.
"The smartwatch offers a unique advantage in using physical gestures for interacting with recipes," Li said. "For example, users can perform actions like moving forward or backward in a recipe, repeating steps, or pausing the process by pressing buttons, tapping the smartwatch, or using other hand gestures."
Cooks don't just read a recipe once, though. They often refer back to it as they prepare the dish. Letting a cook jump straight to a particular ingredient or the next step, rather than replaying the recipe in its entirety, would make these interfaces more useful.
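A minimal sketch of that step-by-step navigation, of the kind a smartwatch tap or voice command might drive, could look like the following. The RecipeNavigator class and its command names are assumptions for illustration, not the researchers' interface.

```python
# Hypothetical recipe navigator: simple commands move through steps so the
# cook never has to replay the whole recipe to hear the next instruction.
class RecipeNavigator:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    def current(self) -> str:
        return f"Step {self.index + 1}: {self.steps[self.index]}"

    def handle(self, command: str) -> str:
        # e.g., a single tap could mean "next", a double tap "repeat".
        if command == "next" and self.index < len(self.steps) - 1:
            self.index += 1
        elif command == "back" and self.index > 0:
            self.index -= 1
        # "repeat" (or any unrecognized command) rereads the current step.
        return self.current()

nav = RecipeNavigator([
    "Dice one onion.",
    "Sauté the onion for about 5 minutes, until soft.",
    "Add the chicken and cook for 8 minutes, flipping halfway.",
])
print(nav.handle("next"))    # Step 2
print(nav.handle("repeat"))  # Step 2 again, without starting over
```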
Kitchens can be messy, and touching screens and keyboards with greasy or sticky fingers presents its own problems. Technology such as smart speakers and smartwatches could help. Alternatively, some study participants noted that they had found their own nontechnological solutions to this challenge.
"When I was using my laptop, I would get Saran wrap or something and kind of wrap it over the keyboard. So when I was pressing the buttons, I knew it was kind of covered," one cook told researchers.
Advancements in technology like computer vision, large language models and artificial intelligence could also assist people with low vision in the kitchen. Computer vision systems could help a cook check their progress, navigate the kitchen, or find a tool they'd set down. A computer vision system could also tell a cook if the knife they are reaching for was previously used for cutting vegetables or slicing raw chicken.
Computer vision might also help cooks read warning labels or expiration notices or detect mold or discoloration. Large language models and other types of artificial intelligence could help with retrieving ingredients or other information from within a recipe.
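One way such retrieval could work is sketched below: the cook asks a targeted question ("How much garlic do I need?") and a language model answers from the recipe text alone. The call_llm() function is a placeholder for whatever model API is actually used, and the prompt format is an assumption.

```python
# Hypothetical sketch: answer a targeted question from recipe text with an LLM.
def ask_recipe_question(recipe_text: str, question: str) -> str:
    prompt = (
        "Answer the question using only the recipe below. "
        "Reply in one short sentence suitable for text-to-speech.\n\n"
        f"Recipe:\n{recipe_text}\n\nQuestion: {question}"
    )
    return call_llm(prompt)  # placeholder: send the prompt to a language model

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; swap in an actual API here.
    raise NotImplementedError("Connect this to a language model of your choice.")
```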
The growing popularity of cooking videos, by both professional and amateur chefs, presents a unique set of challenges for people with low or no vision, Li said.
"The video might show someone saying, 'Pour this into that,'" Li said, using two glasses to mimic a typical video demonstration. A person with low vision wouldn't have a clue what 'this' or 'that' references.
SCS researchers have begun collaborations on how to extract recipe information from these videos using large language models and AI.
Li and Ashley Wang, a master's graduate in the Computer Science Department who is now a software engineer with Meta, are lead authors of "A Recipe for Success? Exploring Strategies for Improving Non-Visual Access to Cooking Instructions." It was presented at the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2024). Carrington and Shaun Kane, a research scientist at Google, are co-authors.
"Solving the recipe problem will not solve all of the issues in nonvisual cooking," Carrington said. Interacting with tools, foods and other physical objects in the kitchen is also important and is the focus of complementary research underway in the HCII, he added.
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu