Matlab Programming
Project 9:
Curve fitting and other operations
1) For a normal curve with a mean of 10 and a standard deviation of 4, what is:
a) the area under the curve integrated from minus infinity to -2?
b) the area under the curve from 4 to infinity?
c) the area under the curve from 10 to infinity?
d) the height of the distribution (i.e., the probability density) at a value of 3? Is this probability density height the probability of occurrence of the number 3?
a) the area under the curve integrated from minus infinity to -2?
To get this area, you need to use the cumulative distribution function for the normal distribution.
area=normcdf(-2,10,4)
area = 0.0013
b) the area under the curve from 4 to infinity?
Since normcdf gives you the area from minus infinity to the point of
interest and since the area under the normal curve from minus infinity
to infinity is equal to one, then the area from 4 to infinity is:
area=1-normcdf(4,10,4) = 0.9332
c) the area under the curve from 10 to infinity?
This is the simplest of the cases: since 10 is the distribution's mean, you should immediately recognize that the area from 10 to infinity is 0.5 without any further calculation.
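If you want to confirm this numerically anyway, the same complement trick from part (b) gives the answer (this check is optional and not part of the problem):
area = 1 - normcdf(10,10,4)   % area from the mean to infinity; returns 0.5000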
d) the height of the distribution (i.e., the probability density) at a value of 3? What does this density height mean?
height= normpdf(3,10,4) = 0.0216
Note that we are using normpdf, not normcdf.
The height of the distribution at x is NOT the probability of
occurrence of x. The probability of occurrence of an event
between x1 and x2 equals the area under the curve between x1 and
x2. In reality, the probability of exact occurrence of x is zero.
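As an illustration of this point (the interval endpoints x1 = 2 and x2 = 4 below are just example values I picked, not part of the problem), the probability of an observation falling between two values is the difference of two normcdf calls:
x1 = 2;  x2 = 4;                              % example interval
prob = normcdf(x2,10,4) - normcdf(x1,10,4)    % area between x1 and x2, about 0.044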
2) Find the best fitting polynomial to the following data:
x=-3:0.5:3;
y=[0.2 -1.3 -1.6 -2.1 -1.2 -0.3 0.6 1.7 3.0 3.3 2.9 2.3 1.0];
First, plot the data to try to figure out what order of polynomial might give the best fit.
Step 1:
First plot the data:
clear;
x=-3:0.5:3;
y=[0.2 -1.3 -1.6 -2.1 -1.2 -0.3 0.6 1.7 3.0 3.3 2.9 2.3 1.0];
plot(x,y,'*');
Step 2:
It's not linear. A second-order polynomial would probably not work either, since second-order polynomials are either U-shaped or inverted U-shaped. But let's try it anyway:
clear;
x=-3:0.5:3;
y=[0.2 -1.3 -1.6 -2.1 -1.2 -0.3 0.6 1.7 3.0 3.3 2.9 2.3 1.0];
p=polyfit(x,y,2);
yfit=polyval(p,x);
plot(x,y,'*',x,yfit);
Step 3:
Let's try a 3rd-order polynomial:
clear;
x=-3:0.5:3;
y=[0.2 -1.3 -1.6 -2.1 -1.2 -0.3 0.6 1.7 3.0 3.3 2.9 2.3 1.0];
p=polyfit(x,y,3);
yfit=polyval(p,x);
plot(x,y,'*',x,yfit);
This turns out to be a pretty good fit. You may also want to try a 4th- or 5th-order polynomial to see how much improvement you get. If you do, you'll notice that there isn't much improvement over the 3rd order, and it costs us an additional parameter or two. So a 3rd-order fit with the coefficients in p should be our choice. (A quick way to compare the residual error across orders is sketched below.)
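As an optional check (not part of the assignment), one way to compare orders is to fit each candidate order and look at the sum of squared residuals; the variable names n, pn, and sse here are just my own choices, and x and y are assumed to still be in the workspace from Step 3:
for n = 2:5
    pn = polyfit(x,y,n);                  % fit an nth-order polynomial
    sse = sum((y - polyval(pn,x)).^2);    % sum of squared residuals for this order
    fprintf('order %d: SSE = %.4f\n', n, sse);
end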
Type p in the command window and hit return.
You'll see 4 numbers (if you fit a 3rd-order polynomial):
-0.2303 -0.0212 2.1801 0.7280
These are the coefficients of the polynomial. Therefore, your fitted equation is:
y = p(1)*x.^3 + p(2)*x.^2 + p(3)*x +p(4)
or
y=-0.2303*x.^3 - 0.0212*x.^2 + 2.1801*x +0.7280
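If you'd rather not type the equation out, polyval evaluates the same polynomial directly from p. The test point xnew = 1.5 below is just an example value I picked:
xnew = 1.5;                                            % example point
y1 = polyval(p,xnew)                                   % evaluate the fitted polynomial with polyval
y2 = p(1)*xnew.^3 + p(2)*xnew.^2 + p(3)*xnew + p(4)    % same thing written out; y1 and y2 should match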
3) Assume that you have come up with a model that predicts a biological function which is itself the linear sum of three processes, best described as: 1) a tangent, 2) an exponential, and 3) a quadratic operation:
y = a*tan(x) + b*exp(x^2) + c*x^2
where x is your independent variable (e.g., stimulus value), and a, b, and c are parameters to be determined. You believe that this model should be a good predictor of your data:
x=-1:0.2:1;
y=[9.2 6.6 5.3 4.7 4.6 4.9 5.5 6.6 8.4 11.3 16.4];
Using the model equation above, determine parameters a, b, and c to optimize (in a least-squares sense) the model's fit to your data.
Step 1: Plot the data to get a sense of what things look like (helps in initial parameter selection)
clear;
x=-1:0.2:1;
y=[9.2 6.6 5.3 4.7 4.6 4.9 5.5 6.6 8.4 11.3 16.4];
plot(x,y,'*');
Step 2:
Start a new m-file. This is where you will write your least-squares fitting function. Note that you will have to write two pieces of code: first, a script (a regular m-file, which you could save as week10), and second, a function (which you could call mymodel). In Step 1 you started the week10 program; now start the function. The function's first line is:
function ssd=mymodel(p)
where ssd stands for "sum of the squared deviations". I just made this name up; you could select a different variable name. The point is that what we are trying to minimize is the output of this function, which is the sum of the squared deviations between the actual data and the model. p is the function's input argument; since we have 3 parameters (a, b, c) whose optimized values we want to find, p is a vector with three entries. These entries are our initial guesses at values for a, b, and c. We'll get back to this later.
Step 3:
Continue with building the function. Since we want our data (x and y), which were declared in the program week10, to be available to this function, we will declare them as global (also remember to declare them as global in the week10 program).
Also note that you should NOT have a comma between x and y when declaring them as global.
function ssd=mymodel(p)
global x y
Step 4:
Continue with building the function. Write the equation for predicted y (our model output). We'll call this yhat (the hat stands for predicted value). Note that the parameters a,b, and c are in the vector p:
function ssd=mymodel(p)
global x y
yhat=p(1)*tan(x) +p(2)*exp(x.^2)+p(3)*x.^2;
Step 5:
Continue with building the function: determine the sum of the squared deviations (ssd) between your data (y) and the predicted y from the model. Note that ssd is the function's output and the value that we will try to minimize:
function ssd=mymodel(p)
global x y
yhat=p(1)*tan(x) +p(2)*exp(x.^2)+p(3)*x.^2; %model prediction
ssd=sum((y-yhat).^2); %sum of the squared deviations between data and model
Step 6:
We are done with the function. It should have been saved under the file name mymodel.m.
Continue with building the main program, week10. We arbitrarily select 3 initial values for a, b, and c. These are of course just our guesses, not optimal values. Looking at the figure, we probably should not pick really large values (>100) or really small values (<0.1). We'll pick 2, 3, and 4 (try other numbers if you like). Note also that we are now declaring x and y as global.
clear;
global x y;
x=-1:0.2:1;
y=[9.2 6.6 5.3 4.7 4.6 4.9 5.5 6.6 8.4 11.3 16.4];
plot(x,y,'*');
p=[2 3 4];
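Before running the search, it can help to confirm that the function works; this check is optional and assumes mymodel.m from Step 6 has already been saved. Calling mymodel with the initial guesses should return a single number (the sum of the squared deviations at p = [2 3 4]), and you can also plot the unfitted model prediction against the data:
ssd0 = mymodel(p)                                   % should return one scalar
yhat0 = p(1)*tan(x) + p(2)*exp(x.^2) + p(3)*x.^2;   % model prediction at the initial guess
plot(x,y,'*',x,yhat0);                              % compare the unfitted model to the data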
Step 7:
Use fminsearch, the function you created (mymodel) and the initial guesses to search for optimum parameter values. Save these optimum values in a variable called ep (for estimated parameters).
clear;
global x y;
x=-1:0.2:1;
y=[9.2 6.6 5.3 4.7 4.6 4.9 5.5 6.6 8.4 11.3 16.4];
plot(x,y,'*');
p=[2 3 4];
ep = fminsearch('mymodel',p);
Note that ep is a vector containing 3 values corresponding to a, b, and c in our model.
Step 8:
This step isn't necessary, but to make the equation more readable, define variables a, b, and c, and have each be assigned the appropriate value from ep.
clear;
global x y;
x=-1:0.2:1;
y=[9.2 6.6 5.3 4.7 4.6 4.9 5.5 6.6 8.4 11.3 16.4];
plot(x,y,'*');
p=[2 3 4];
ep = fminsearch('mymodel',p);
a=ep(1);
b=ep(2);
c=ep(3);
Step 9:
Finally, calculate the model equation using these three estimated parameters and plot the data and model fit.
clear;
global x y;
x=-1:0.2:1;
y=[9.2 6.6 5.3 4.7 4.6 4.9 5.5 6.6 8.4 11.3 16.4];
plot(x,y,'*');
p=[2 3 4];
ep = fminsearch('mymodel',p);
a=ep(1);
b=ep(2);
c=ep(3);
yhat=a*tan(x) +b*exp(x.^2)+c*x.^2;
plot(x,y,'*',x,yhat,'r-');
Note: After you run the program, type ep and hit return to see the optimized values of a, b and c (compare these to your initial guess at these parameter values).
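As an optional variation (not required for the project), fminsearch also accepts a function handle instead of a quoted function name, and calling mymodel at the optimized parameters shows the final error of the fit:
ep = fminsearch(@mymodel,p);    % same search, using a function handle instead of a string
final_ssd = mymodel(ep)         % sum of the squared deviations at the optimized parameters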