Comparisons of harmony and rhythm of Japanese and English through signal processing

Author(s)
Nakano, Aiko
Download
Full printable version (5.194 MB)
Other Contributors
Massachusetts Institute of Technology. Dept. of Mechanical Engineering.
Advisor
Barbara Hughey.
Terms of use
M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Japanese and English speech structures differ in terms of harmony, rhythm, and frequency of sound. Voice samples of 5 native speakers of English and Japanese were collected and analyzed through fast Fourier transform, autocorrelation, and statistical analysis. The harmony of a language refers to the spatial frequency content of speech and is analyzed through two different measures: the Harmonics-to-Noise Ratio (HNR) developed by Boersma (1993) and a new parameter, "harmonicity," which evaluates the consistency of the frequency content of a speech sample. Higher HNR values and lower harmonicity values mean that the speech is more harmonious. The HNR values are 9.6±0.6 Hz and 8.9±0.4 Hz and the harmonicities are 27±13 Hz and 41±26 Hz for Japanese and English, respectively; therefore, both parameters show that Japanese speech is more harmonious than English. A further conclusion can be drawn from the harmonicity analysis: Japanese is a pitch-type language in which the exact pitch or tone of the voice is a critical parameter of speech, whereas in English the exact pitch is less important. The rhythm of a language is measured by "rhythmicity," which relates to the periodic structure of speech in time and identifies the overall periodicity in continuous speech. Lower rhythmicity values indicate that the speech of one language is more rhythmic than another. The rhythmicities are 0.84±0.02 and 1.35±0.02 for Japanese and English, respectively, indicating that Japanese is more rhythmic than English. An additional parameter, the 80th percentile frequency, was also determined from the data to be 1407±242 Hz and 2021±642 Hz for the two languages; these values are comparable to known values from previous research.
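
The record does not include the analysis code, so the following is only a minimal sketch, assuming Python with NumPy, of the two kinds of measures the abstract describes: an autocorrelation-based harmonics-to-noise ratio in the spirit of Boersma (1993), and the frequency below which a given fraction (here 80%) of the spectral energy lies, taken from an FFT power spectrum. The function names (hnr_boersma, percentile_frequency), the pitch-search range, and the synthetic test signal are illustrative assumptions, not the thesis's actual implementation; note also that Boersma's HNR comes out in dB, whereas the abstract quotes its values in Hz. The thesis's own "harmonicity" and "rhythmicity" parameters are defined by the author and are not reconstructed here.

import numpy as np

def hnr_boersma(frame, fs, fmin=75.0, fmax=500.0):
    """Estimate the harmonics-to-noise ratio of a voiced frame via the
    normalized autocorrelation, following the general approach of
    Boersma (1993). Returns a value in dB."""
    frame = frame - np.mean(frame)
    # Full autocorrelation; keep non-negative lags and normalize by lag 0.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / ac[0]
    # Search for the strongest peak within the plausible pitch-period range.
    lag_min = int(fs / fmax)
    lag_max = min(int(fs / fmin), len(ac) - 1)
    r_max = np.max(ac[lag_min:lag_max])
    r_max = np.clip(r_max, 1e-6, 1.0 - 1e-6)
    # Ratio of periodic to aperiodic energy, expressed in dB.
    return 10.0 * np.log10(r_max / (1.0 - r_max))

def percentile_frequency(signal, fs, fraction=0.8):
    """Frequency below which the given fraction of spectral energy lies."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cumulative = np.cumsum(spectrum) / np.sum(spectrum)
    return freqs[np.searchsorted(cumulative, fraction)]

# Example on a synthetic 150 Hz vowel-like tone with added noise
# (an assumed stand-in for a short voiced speech frame).
fs = 16000
t = np.arange(0, 0.04, 1.0 / fs)
frame = np.sin(2 * np.pi * 150 * t) + 0.1 * np.random.randn(len(t))
print("HNR (dB):", hnr_boersma(frame, fs))
print("80th percentile frequency (Hz):", percentile_frequency(frame, fs))

In this sketch the HNR comes from the height of the normalized autocorrelation peak at the pitch period, and the 80th percentile frequency comes from the cumulative power spectrum; in practice both would be computed frame by frame over each voice sample and then averaged, which is consistent with, but not confirmed by, the summary statistics quoted in the abstract.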
Description
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2009.
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (p. 24-25).
 
Date issued
2009
URI
http://hdl.handle.net/1721.1/54519
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Publisher
Massachusetts Institute of Technology
Keywords
Mechanical Engineering.

Collections
  • Undergraduate Theses
