When We See Food, a Particular Area of the Brain Lights Up

“We consume with our eyes first.”

This saying is attributed to the first-century Roman gourmet Apicius. Roughly 2,000 years later, scientists may be able to confirm his assertion.

Researchers from the Massachusetts Institute of Technology have identified a previously unknown region of the brain that activates when we view food. This region, dubbed the “ventral food component,” sits in the brain’s visual cortex, an area associated with recognising faces, objects, and words.

The study, published in the journal Current Biology, involved building a computer model of this brain region using artificial intelligence (AI). Similar models are emerging across scientific disciplines as a way to replicate and investigate complex bodily systems; a computer model of the digestive system, for instance, was recently used to determine the ideal body position for swallowing pills.

The research is still cutting-edge, according to study author Meenakshi Khosla, PhD. There is still much to learn about how experience with or exposure to certain cuisines shapes this region, and whether it works the same way in different people.

According to Khosla, identifying such differences may offer insight into how people make food choices, or perhaps even into the causes of eating disorders.

This study is notable in part for the researchers’ “hypothesis neutral” methodology. Rather than setting out to support or refute a specific hypothesis, they simply began examining the data to see what they could uncover. The intention, according to the study, is to move beyond “the eccentric hypotheses scientists have already thought to test.” So they started combing through the Natural Scenes Dataset, a collection of brain scans from eight individuals who had viewed 56,720 photos.

Indeed, the model lit up in response to images of food. The colour of the images didn’t matter; even black-and-white photos triggered the reaction, albeit less strongly. The model could also distinguish actual food from things that merely resemble it, telling a banana from a crescent moon and a blueberry muffin from a puppy with a muffin-like face.

From the human data, the researchers found that certain people responded slightly differently to processed foods like pizza than to unprocessed foods like apples. They want to investigate how other factors, such as liking or disliking a food, may affect how someone reacts to it.

This technology could open up other fields of study as well. Khosla intends to use it to investigate how the brain responds to social cues such as facial expressions and body language.

Khosla has already begun scanning the brains of a fresh batch of volunteers to validate the computer model in actual individuals. “We recently obtained pilot data in a few people, and we were successful in localising this component,” the researcher says.
