
1) Three plots (or more) of your data in Matlab. This can be a plot of the original data, a plot of filtered data, a plot of the FFT of your data, a plot with interpolated data, a plot of your data convolved with some impulse response, etc.

 

Plot 1: Time domain plot of a sequence of clarinet notes (original audio file)

 

 

Plot 2: Power spectrum (frequency domain) of clarinet note with fundamental frequency of 659.79 Hz

 

Plot 3: Time domain waveform for clarinet note with fundamental frequency of 659.79 Hz

 

2) Describe what you have done so far. Describe any difficulties in loading your data into Matlab or filtering the data, etc.

 

 

The first major script we wrote reads .wav files containing instrument samples and exports the fundamental frequency and the magnitudes of the harmonics of each individual note. Importing the data into Matlab was relatively easy; the difficulty lies in analyzing it. The recordings are formatted so that each pure note is separated from the next by one to two seconds, and separating these notes, particularly determining when a note ended, was difficult. Part of the difficulty lies in something we don't cover in this course: the transient response of the note. There is transient behavior at both the beginning and the end of each note, and we had to take care not to include these portions in our measurements, because they significantly alter the measured spectrum.
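As a rough illustration of this analysis step, the sketch below loads a recording, locates notes with a simple energy threshold, trims the transient portions, and estimates each note's fundamental and harmonic magnitudes. The file name, threshold, trimming fractions, and number of harmonics are placeholders, not our actual parameters.

% Rough sketch of the analysis step (placeholder file name and parameters).
[x, fs] = audioread('clarinet_scale.wav');      % load the recording
x = x(:,1);                                     % keep one channel

% Locate notes by thresholding a smoothed energy envelope
win = round(0.05*fs);                           % ~50 ms smoothing window
env = conv(abs(x), ones(win,1)/win, 'same');
active = env > 0.05*max(env);                   % true where a note is sounding
edges  = diff([0; active; 0]);
starts = find(edges == 1);
stops  = find(edges == -1) - 1;

for k = 1:length(starts)
    seg = x(starts(k):stops(k));
    n   = length(seg);
    seg = seg(round(0.2*n):round(0.8*n));       % discard attack and decay transients

    N = length(seg);                            % magnitude spectrum of the steady state
    X = abs(fft(seg));
    f = (0:N-1)*fs/N;
    [~, idx] = max(X(2:floor(N/2)));            % strongest bin taken as the fundamental
    f0 = f(idx + 1);

    harmMags = zeros(1, 8);                     % magnitudes of the first 8 harmonics
    for h = 1:8
        [~, bin] = min(abs(f(1:floor(N/2)) - h*f0));
        harmMags(h) = X(bin);
    end
    fprintf('Note %d: f0 = %.2f Hz\n', k, f0);
end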

 

The next part of our project is a script that maps a user-specified frequency to the harmonic series of an instrument at that frequency. It takes the data for a specific instrument and an arbitrary frequency as input, and it outputs a vector of the magnitudes of the harmonic series that corresponds to that instrument at that frequency. A difficulty in designing this script is that an instrument's timbre changes across its registers. We made the function account for this by finding the magnitude data from the previous step that most closely matches the specified frequency, using either a rectangular approach (return the measured magnitude vector of the note nearest to the requested frequency) or a linear approach (interpolate between the two nearest measured vectors).
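A sketch of what this mapping function might look like is below. The function name, variable names, and data layout (a sorted vector of measured fundamentals and a matrix whose rows are the corresponding harmonic-magnitude vectors) are hypothetical; our actual script differs in the details.

function mags = harmonicsAt(noteFreqs, noteMags, f, method)
% noteFreqs: measured fundamentals (1 x M), sorted ascending
% noteMags : M x H matrix; row m holds the harmonic magnitudes of note m
% f        : requested frequency; method: 'rectangular' or 'linear'
switch method
    case 'rectangular'                          % nearest measured note
        [~, m] = min(abs(noteFreqs - f));
        mags = noteMags(m, :);
    case 'linear'                               % interpolate between the two neighbors
        f  = min(max(f, noteFreqs(1)), noteFreqs(end));   % clamp to the measured range
        lo = find(noteFreqs <= f, 1, 'last');
        hi = min(lo + 1, length(noteFreqs));
        if hi == lo
            mags = noteMags(lo, :);
        else
            t = (f - noteFreqs(lo)) / (noteFreqs(hi) - noteFreqs(lo));
            mags = (1 - t)*noteMags(lo, :) + t*noteMags(hi, :);
        end
end
end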

 

The last step we took was to create a GUI for interacting with the other two scripts. The GUI allows an instrument to be selected and a frequency to be specified; it then plays that frequency with the timbre (or an approximation thereof) of the selected instrument and displays a frequency-domain representation of the note. In addition to several instruments (clarinet, violin, flute), we also included a button that selects a pure sine wave for comparison. Using Matlab's built-in GUI tools, we had no trouble constructing a functional tool for testing the other components.
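The overall structure of the GUI is roughly as sketched below. The layout values are arbitrary, the actual GUI has more controls, and synthNote is a hypothetical stand-in for our synthesis code (described in the next section).

% Rough structural sketch of the GUI (placeholder layout; synthNote is hypothetical).
fig  = figure('Name', 'Instrument Synthesizer', 'MenuBar', 'none');
inst = uicontrol(fig, 'Style', 'popupmenu', ...
                 'String', {'Clarinet', 'Violin', 'Flute', 'Sine'}, ...
                 'Position', [20 80 120 25]);
freq = uicontrol(fig, 'Style', 'edit', 'String', '440', ...
                 'Position', [160 80 80 25]);
uicontrol(fig, 'Style', 'pushbutton', 'String', 'Play', ...
          'Position', [260 80 60 25], ...
          'Callback', @(~,~) playNote(inst, freq));

function playNote(inst, freq)
fs = 44100;
names      = get(inst, 'String');
instrument = names{get(inst, 'Value')};         % selected instrument
f0 = str2double(get(freq, 'String'));           % requested frequency
y  = synthNote(instrument, f0, fs);             % hypothetical synthesis routine
sound(y, fs);
end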

 

 

 

3) Describe one thing you have learned about in working on your data so far. It may be a new DSP tool you are trying to use, or it may just be a new Matlab command, etc. Tell me why it's relevant for your project (or why you thought it was relevant, but why it's actually not, if that's the case).

 

 

The first tool we have become much more familiar with is the FFT command. Analyzing a musical note in the time domain tells us very little: the waveform is periodic, but because it is the sum of many sinusoids, there is not much to glean from it besides, perhaps, the period of the signal. With the FFT command we get a much clearer picture of what is going on: we can see exactly which frequencies combine to form the signal we register as a musical note. Conceptually this is something we had already learned in class, but analyzing these musical notes has gone a long way toward connecting the concepts covered in class to real-world signals that can be analyzed with them.
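A single-sided power spectrum of a note (like Plot 2 above) can be computed along these lines, where seg and fs are assumed to be the steady-state note segment and sample rate from the analysis sketch earlier:

% One-sided power spectrum sketch (seg and fs assumed from the analysis sketch).
N = length(seg);
X = fft(seg);
f = (0:floor(N/2)-1) * fs / N;                  % frequency axis up to fs/2
P = abs(X(1:floor(N/2))).^2;                    % one-sided power spectrum (unscaled)
plot(f, P), xlabel('Frequency (Hz)'), ylabel('Power'), grid on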

 

The analysis of these musical notes leads to the next DSP tool we have utilized in our project: additive synthesis. We examined several different synthesis methods before settling on additive synthesis, which builds the desired signal by summing sinusoids. This is really just the Fourier transform applied in reverse, with one caveat: the values returned by the FFT command are complex. Our initial analysis kept only the magnitude of the FFT from 0 to N/2 (since the magnitude spectrum of a real-valued signal is even) and discarded the phase entirely. Our revised additive synthesis summed sinusoids with the magnitude, phase, and frequency of each harmonic of the note. However, we discovered that the synthesis produced the same sound with or without the phase information. From our own research, we learned that the human ear is, in general, insensitive to the relative phase of the harmonics. This was a major discovery for some members of the group, and it forced us to remember that it is the human ear, not a computer, that we are catering to.
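A minimal additive synthesis sketch is below. The magnitude vector, fundamental, and duration are placeholders; in the project they come from the analysis and mapping scripts, and phase is omitted for the reason just described.

% Minimal additive synthesis sketch (placeholder magnitudes, f0, and duration).
fs   = 44100;                                   % sample rate (Hz)
dur  = 1.0;                                     % note length (s)
f0   = 659.79;                                  % fundamental frequency (Hz)
mags = [1 0.1 0.5 0.05 0.3 0.02 0.15 0.01];     % placeholder harmonic magnitudes

t = 0:1/fs:dur - 1/fs;
y = zeros(size(t));
for h = 1:length(mags)
    y = y + mags(h) * cos(2*pi*h*f0*t);         % add the h-th harmonic (phase omitted)
end
y = y / max(abs(y));                            % normalize to avoid clipping
sound(y, fs);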

We are now able to synthesize musical notes fairly well. However, there is one problem with our current method of synthesis: the transient response. Our synthesis captures only the steady-state spectrum of a note; it does nothing to characterize the transient behavior at the attack and decay. The next part of our project will involve using the Fourier transform to characterize this transient response. The Laplace transform is the continuous-time counterpart of the z-transform, and it is evaluated on a complex variable with real and imaginary parts, s = α + jω. The Fourier transform corresponds to the imaginary axis alone (s = jω); it can be thought of as varying ω while α is held at zero. Following this line of thought, it stands to reason that by taking many Fourier transforms over short, successive windows of time, we can track how each frequency component grows and decays, and thereby capture behavior off the jω axis (s = α + jω). Then, by examining the magnitude of each Fourier transform, we can watch for frequencies where the magnitude goes to zero and where it becomes very large, to estimate the zeros and poles of the transfer function. From there, we should be able to accurately synthesize even the transient response of the musical notes.
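A sketch of the first step of this planned approach is below: FFTs of short, overlapping frames are used to track how each harmonic's magnitude evolves through the attack and decay. The frame length, hop size, and number of harmonics are placeholders, and x, fs, and f0 are assumed to come from the earlier analysis sketch.

% Sketch of the planned short-time analysis (placeholder frame length and hop).
frameLen = round(0.03 * fs);                    % ~30 ms frames
hop      = round(frameLen / 2);                 % 50% overlap
nFrames  = floor((length(x) - frameLen) / hop) + 1;
nHarm    = 8;
win      = 0.5 - 0.5*cos(2*pi*(0:frameLen-1)'/(frameLen-1));   % Hann window
env      = zeros(nHarm, nFrames);               % harmonic envelopes over time

for k = 1:nFrames
    idx   = (k-1)*hop + (1:frameLen).';         % sample indices of this frame
    F     = abs(fft(x(idx) .* win));
    fAxis = (0:frameLen-1) * fs / frameLen;
    for h = 1:nHarm
        [~, bin]  = min(abs(fAxis(1:floor(frameLen/2)) - h*f0));
        env(h, k) = F(bin);                     % magnitude of harmonic h in frame k
    end
end
tFrames = ((0:nFrames-1)*hop + frameLen/2) / fs;
plot(tFrames, env.'), xlabel('Time (s)'), ylabel('Harmonic magnitude')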
