By Daniel Oldis © 2017
Lucid dreamers have been communicating with the outside world since the 1970s. They have used their eyes and hands in the dream to send a “Hello” to all of us in the “real” world. Some have even attempted Morse code as a way to send a more detailed message from a lucid dream. Yet mankind’s earliest communication tool, spoken language, has never carried a message out of a dream, mostly because we cannot speak while dreaming: the speech muscles are largely paralyzed during REM sleep, at least as far as can be observed.
That may be about to change. In the near future, lucid dreamers may be able to provide a first-hand description of their dreams while in the dream.
Though we cannot observe someone speaking in a dream, technology can still detect the nerve impulses sent to the speech muscles and convert those impulses into sounds, words and sentences, at least in theory. It is documented that dreamed speech elicits corresponding phasic muscle potentials in the facial, laryngeal and chin muscles (MacNeilage, 1971; Shimizu, 1986). Measuring such electrical activity in muscle is the domain of electromyography (EMG). One researcher notes: “Approximately 4.5% of sleep time or approx. 20 minutes per night were accompanied by activity of speech muscles.”
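To make the measurement concrete, here is a minimal sketch, in Python, of how such phasic bursts might be flagged in a digitized chin-EMG trace. The sampling rate, filter band, smoothing window and threshold are illustrative assumptions, not parameters taken from the studies cited above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_phasic_bursts(emg, fs=1000, lo=20.0, hi=450.0, k=3.0):
    """Flag phasic speech-muscle activity in a raw EMG trace.

    emg    -- 1-D array of raw chin/laryngeal EMG samples
    fs     -- sampling rate in Hz (1 kHz assumed here)
    lo, hi -- band-pass corners typical for surface EMG
    k      -- burst threshold as a multiple of the baseline level
    """
    # Band-pass to the surface-EMG frequency range, removing drift and hum.
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, emg)

    # Rectify and smooth to obtain an amplitude envelope (~50 ms window).
    envelope = np.convolve(np.abs(filtered), np.ones(50) / 50, mode="same")

    # Samples whose envelope exceeds k times the median baseline are
    # candidate moments of dreamed-speech muscle activity.
    return envelope > k * np.median(envelope)
```

A real decoder would then hand those flagged stretches of signal to a speech recognizer, which is where the projects described below come in.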
Yet is dream speech functionally equivalent to waking speech? Kilroe (2016) has shown that dream speech is largely (though not always) coherent and purposeful. Earlier research I performed using EMG on the chin and arm during REM supports this idea: the data reveal gesturing accompanying speech behavior, suggesting the dreamer was gesturing for emphasis, much as speakers do while awake.
Though it has never been attempted on a sleeping subject, EMG-based recognition of silent or subvocal speech in waking subjects has been the focus of many research projects worldwide, involving academic, military and commercial institutions. The physical methods vary in the number and selection of speech muscles monitored and in the sampling frequency of the sensors. The analysis and programming methods vary in the granularity of the speech unit to be decoded and transcribed: phoneme, sub-word, word or phrase. The granularity, in turn, determines the units used in the software’s training cycle and the size of the training domain: English has roughly 44 phonemes, a large set of sub-words and words, and an immense set of phrases.
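As a rough illustration of that training cycle at the phoneme level, the sketch below fits a frame-by-frame classifier over a 44-class label set using scikit-learn. The feature count, the random stand-in arrays and the random-forest model are hypothetical placeholders; published subvocal-speech systems typically pair hand-crafted EMG features with hidden Markov models or neural networks, but the train-and-score loop looks much the same.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in training data: one row of EMG features per signal frame, each
# labeled with one of ~44 English phonemes. A real project would extract
# features (windowed RMS, zero-crossing counts, spectral bins) from each
# monitored speech muscle instead of drawing random numbers.
rng = np.random.default_rng(0)
n_frames, n_features, n_phonemes = 2000, 24, 44
X = rng.normal(size=(n_frames, n_features))     # placeholder EMG features
y = rng.integers(0, n_phonemes, size=n_frames)  # placeholder phoneme labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Fit a simple frame-level phoneme classifier and report its accuracy on
# held-out frames; with random data this hovers near chance (about 1/44).
clf = RandomForestClassifier(n_estimators=200)
clf.fit(X_train, y_train)
print(f"Frame-level phoneme accuracy: {clf.score(X_test, y_test):.2f}")
```

Swapping real EMG features into the placeholder arrays would turn the printed score into the kind of per-unit recognition accuracy these projects report.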
Results have also varied, but recognition accuracy generally falls in the 80 to 90+ percent range for the targeted speech unit and the training exemplars used. Phoneme recognition performs well because the training domain is small, but turning phoneme sounds into recognizable syllables, words and sentences is complex, though Amazon’s Alexa™ and Apple’s Siri™ have shown us that it is quite possible. These companies even provide free tools for speech recognition and speech synthesis, tools that could serve applications like dream speech recognition. So, with the help of lucid dreamers and electromyography, it may not be long before we can say: “Alexa, replay my dream, please.”
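As a small example of those off-the-shelf tools, the snippet below asks Amazon’s Polly service (via the boto3 library) to speak a sentence aloud. The sentence is a hypothetical output of a dream speech decoder, and running the code assumes an AWS account with configured credentials.

```python
import boto3

# Synthesize a (hypothetical) decoded dream sentence into audio using
# Amazon Polly, one of the cloud speech tools mentioned above.
polly = boto3.client("polly", region_name="us-east-1")
response = polly.synthesize_speech(
    Text="I am flying over the city.",  # hypothetical decoder output
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# Polly returns a stream of MP3 bytes; save it for playback.
with open("dream_speech.mp3", "wb") as out:
    out.write(response["AudioStream"].read())
```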
Note: Lucid dreamers, speech experts and programmers who would like to be involved in dream speech decoding research are invited to contact the author at [email protected].
Citations
MacNeilage, P., & MacNeilage, L. (1971). Central processes controlling speech production during sleep and waking. In The Psychophysiology of Thinking: Studies of Covert Processes.
Shimizu, A., & Inoue, T. (1986). Dreamed speech and speech muscle activity. Psychophysiology.
Kilroe, P. (2016). Reflections on the study of dream speech. Dreaming.