Numerical Methods for Engineers, by Steven C. Chapra (Berger Chair in Computing and Engineering, Tufts University) and Raymond P. Canale (professor emeritus). Numerical Methods in Engineering with MATLAB, by Jaan Kiusalaas. Numerical Methods for Engineers and Scientists, Second Edition, Revised and Expanded, by Joe D. Hoffman (Department of Mechanical Engineering).
This problem was solved by interpolation in Prob. This problem was solved in Prob. The table shows the variation of the relative thermal conductivity k of sodium with temperature T. Singer, C. Knowing that radioactivity decays exponentially with time: If x is an array, y is computed for all elements of x.
If x is a matrix, s is computed for each column of x. If x is a matrix, xbar is computed for each column of x. Before proceeding further, it might be helpful to review the concept of a function. In numerical computing the rule is invariably a computer algorithm. The roots of equations may be real or complex. Complex zeroes of polynomials are treated near the end of this chapter. There is no universal recipe for estimating the value of a root.
If the equation is associated with a physical problem, then the context of the problem (physical insight) might suggest the approximate location of the root. Otherwise, the function must be plotted, or a systematic numerical search for the roots can be carried out. One such search method is described in the next article.
Prior bracketing is, in fact, mandatory in the methods described in this chapter. Another useful tool for detecting and bracketing roots is the incremental search method. The basic idea behind the incremental search method is simple: if f(x1) and f(x2) have opposite signs, then at least one root lies in the interval (x1, x2). If the interval is small enough, it is likely to contain a single root.
There are several potential problems with the incremental search method: closely spaced roots may be missed, and the search also flags points where the function changes sign by jumping across a singularity, as tan x does at odd multiples of π/2. However, these locations are not true zeroes, since the function does not cross the x-axis. The search starts at a and proceeds in steps dx toward b. Once a zero is detected, rootsearch returns its bounds (x1, x2) to the calling program. This can be repeated as long as rootsearch detects a root. This technique is also known as the interval halving method. Bisection is not the fastest method available for computing roots, but it is the most reliable.
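The incremental search just described is simple to implement. The book's rootsearch is a MATLAB function; as an illustration only, here is a minimal Python sketch of the same idea (the test polynomial and step size are ours):

```python
def rootsearch(f, a, b, dx):
    """Incremental search: step from a toward b in increments dx and
    return the bounds (x1, x2) of the first interval on which f
    changes sign, or None if no sign change is detected."""
    x1, f1 = a, f(a)
    x2 = a + dx
    while x2 <= b:
        f2 = f(x2)
        if f1 * f2 <= 0.0:          # sign change: a root lies in [x1, x2]
            return x1, x2
        x1, f1 = x2, f2
        x2 = x1 + dx
    return None

# Example: bracket the smallest positive zero of x**3 - 10*x**2 + 5
bracket = rootsearch(lambda x: x**3 - 10.0 * x**2 + 5.0, 0.0, 1.0, 0.2)
```

Note that a step dx that is too large may skip a pair of closely spaced roots, which is exactly the failure mode mentioned above.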
Once a root has been bracketed, bisection will always close in on it. The method of bisection uses the same principle as incremental search: if there is a root in the interval (x1, x2), then f(x1) and f(x2) have opposite signs. To halve the interval, we compute f(x3) at the midpoint x3 = (x1 + x2)/2. If f(x2) and f(x3) have opposite signs, the root lies in (x3, x2), and we replace x1 by x3. Otherwise, the root lies in (x1, x3), in which case x2 is replaced by x3. In either case, the new interval (x1, x2) is half the size of the original interval. The number of bisections n required to reduce the interval to tol is computed from Eq.
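A minimal Python sketch of bisection follows (the book's bisect is MATLAB; the halving count n = ceil(log2(|x2 - x1|/tol)) matches the formula referred to above, and the test polynomial is ours):

```python
import math

def bisect(f, x1, x2, tol=1.0e-9):
    """Close in on a root bracketed in (x1, x2) by repeated halving.
    The half kept is chosen by the sign of f, not its magnitude."""
    f1, f2 = f(x1), f(x2)
    if f1 == 0.0:
        return x1
    if f2 == 0.0:
        return x2
    if f1 * f2 > 0.0:
        raise ValueError("root is not bracketed")
    # number of halvings needed to shrink |x2 - x1| below tol
    n = int(math.ceil(math.log(abs(x2 - x1) / tol) / math.log(2.0)))
    for _ in range(n):
        x3 = 0.5 * (x1 + x2)
        f3 = f(x3)
        if f3 == 0.0:
            return x3
        if f2 * f3 < 0.0:
            x1, f1 = x3, f3     # root lies in (x3, x2)
        else:
            x2, f2 = x3, f3     # root lies in (x1, x3)
    return 0.5 * (x1 + x2)

root = bisect(lambda x: x**3 - 10.0 * x**2 + 5.0, 0.6, 0.8)
```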
Solution The best way to implement the method is to use the table shown below. Note that the interval to be bisected is determined by the sign of f(x), not its magnitude. Utilize the functions rootsearch and bisect. Thus the input argument fex4_3 in rootsearch is a handle for the function fex4_3 listed below.
In most problems the method is much faster than bisection alone, but it can become sluggish if the function is not smooth. Inverse quadratic iteration. These points allow us to compute the next estimate of the root by inverse quadratic interpolation, viewing x as a quadratic function of f. If the result x of the interpolation falls inside the latest bracket (as is the case in Figs. ), it is accepted as the new estimate. Otherwise, another round of bisection is applied.
Relabeling points after an iteration. We have now recovered the original sequencing of points in Figs. First interpolation cycle: substituting the above values of x and f into the numerator of the quotient in Eq. Second interpolation cycle: applying the interpolation in Eq. Solution 2. The sensible approach is to avoid the potentially troublesome regions of the function by bracketing the root as tightly as possible from a visual inspection of the plot.
The Newton-Raphson formula can be derived from the Taylor series expansion of f(x) about x. Graphical interpretation of the Newton-Raphson formula: the formula approximates f(x) by the straight line that is tangent to the curve at xi.
The algorithm for the Newton-Raphson method is simple: only the latest value of x has to be stored. Although the Newton-Raphson method converges fast near the root, its global convergence characteristics are poor. The reason is that the tangent line is not always an acceptable approximation of the function, as illustrated by the two examples of divergence in Fig.
The midpoint of the bracket is used as the initial guess of the root. The brackets are updated after each iteration. Since newtonRaphson uses the function f(x) as well as its derivative, function routines for both (denoted by func and dfunc in the listing) must be provided by the user.
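The bare iteration can be sketched in a few lines of Python. Note that this sketch omits the safeguard described above: the book's newtonRaphson also maintains a bracket and falls back on bisection when the tangent step leaves it. The example function is ours:

```python
def newton_raphson(f, df, x, tol=1.0e-9, max_iter=30):
    """Bare Newton-Raphson iteration: replace x by the x-intercept of
    the tangent line at x.  Only the latest value of x is stored."""
    for _ in range(max_iter):
        dx = -f(x) / df(x)
        x += dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("too many iterations (poor starting value?)")

# Example: the positive root of f(x) = x**2 - 2, starting from x = 1
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```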
Compute this root with the Newton—Raphson method. The same argument applies to the function newtonRaphson. We used the following program, which prints the number of iterations in addition to the root: After making the change in the above program, we obtained the result in 5 iterations. The trouble is the lack of a reliable method for bracketing the solution vector x.
Therefore, we cannot provide the solution algorithm with a guaranteed good starting value of x, unless such a value is suggested by the physics of the problem. The simplest and the most effective means of computing x is the Newton— Raphson method. It works well with simultaneous equations, provided that it is sup- plied with a good starting point.
There are other methods that have better global convergence characteristics, but all of them are variants of the Newton-Raphson method.
Newton-Raphson Method. In order to derive the Newton-Raphson method for a system of equations, we start with the Taylor series expansion of fi(x) about the point x. The iteration proceeds as follows: estimate the solution vector x; evaluate f(x); compute the Jacobian matrix J(x) from Eq.; set up the simultaneous equations in Eq. and solve for the correction of x; then update x and repeat until convergence.
As in the one-dimensional case, success of the Newton-Raphson procedure depends entirely on the initial estimate of x. If a good starting point is used, convergence to the solution is very rapid. Otherwise, the results are unpredictable. This formula can be obtained from Eq. The simultaneous equations in Eq. The function subroutine func that returns the array f(x) must be supplied by the user.
It is often possible to save computer time by neglecting the changes in the Jacobian matrix between iterations, thus computing J(x) only once.
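For two equations in two unknowns, each Newton-Raphson step solves a 2x2 linear system, which can be done by hand with Cramer's rule. The following Python sketch and the example system are ours, not the text's:

```python
def newton_system2(f, jac, x, tol=1.0e-9, max_iter=30):
    """Newton-Raphson for two equations in two unknowns.  Each step
    solves J(x) dx = -f(x); for a 2x2 Jacobian, Cramer's rule will do."""
    for _ in range(max_iter):
        f1, f2 = f(x)
        (a, b), (c, d) = jac(x)
        det = a * d - b * c
        dx0 = (-f1 * d + f2 * b) / det   # Cramer's rule for J dx = -f
        dx1 = (-a * f2 + c * f1) / det
        x = [x[0] + dx0, x[1] + dx1]
        if abs(dx0) + abs(dx1) < tol:
            return x
    raise RuntimeError("no convergence")

# Example system: x**2 + y**2 = 3 and x*y = 1, started near a solution
f = lambda x: (x[0]**2 + x[1]**2 - 3.0, x[0] * x[1] - 1.0)
jac = lambda x: ((2.0 * x[0], 2.0 * x[1]), (x[1], x[0]))
sol = newton_system2(f, jac, [1.5, 0.5])
```

For larger systems the 2x2 solve would be replaced by a general linear solver, and (as noted above) the Jacobian may be held fixed between iterations to save time.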
From the plot we also get a rough estimate of the coordinates of an intersection point. Then we would be left with a single equation, which can be solved by the methods described in Arts. In this problem, we obtain from Eq. Start with the point (1, 1, 1). Find this root with three-decimal-place accuracy by the method of bisection.
Use the Newton-Raphson method. Determine this root with the Newton-Raphson method to within four decimal places.
Utilize the functions rootsearch and brent. You may use the program in Example 4. The maximum compressive stress in the column is given by the so-called secant formula: Start by estimating the locations of the points from a sketch of the circles, and then use the Newton—Raphson method to compute the coordinates. If the coordinates of three points on the circle are x 8.
Note that there are two solutions. But if complex roots are to be computed, it is best to use a method that specializes in polynomials. Here we present a method due to Laguerre, which is reliable and simple to implement. Evaluation of Polynomials. It is tempting to evaluate the polynomial in Eq.
But computational economy is not the prime reason why this algorithm should be used. Because the result of each multiplication is rounded off, the procedure with the least number of multiplications invariably accumulates the smallest roundoff error.
From Eq. Moreover, by deflating out the roots that have already been found, we avoid computing the same root more than once.
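Both ideas, evaluation with the fewest multiplications and deflation of a found root, fit in a few lines. This Python sketch (coefficients ordered from the leading term, as is conventional; the example polynomial is ours) illustrates them:

```python
def eval_poly(a, x):
    """Horner's rule for p(x) = a[0]*x**n + a[1]*x**(n-1) + ... + a[n]:
    only n multiplications, which also keeps the accumulated roundoff
    error small."""
    p = 0.0
    for c in a:
        p = p * x + c
    return p

def deflate(a, r):
    """Synthetic division of p(x) by (x - r): returns the coefficients
    of the quotient polynomial, used to remove a root already found."""
    b = [a[0]]
    for c in a[1:-1]:
        b.append(c + r * b[-1])
    return b

# p(x) = x**3 - 6x**2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
value = eval_poly([1.0, -6.0, 11.0, -6.0], 4.0)   # 64 - 96 + 44 - 6 = 6
quot = deflate([1.0, -6.0, 11.0, -6.0], 1.0)      # x**2 - 5x + 6
```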
It turns out that the result, which is exact for the special case considered here, works well as an iterative formula with any polynomial.
Differentiating Eq. Compute G(x) and H(x) from Eqs. Determine the improved root r from Eq. This process is repeated until all n roots have been found. If a computed root has a very small imaginary part, it is very likely that it represents roundoff error.
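A self-contained Python sketch of one Laguerre iteration follows (the book's polyRoots is MATLAB and adds deflation; the iteration below is the standard Laguerre update with G = p'/p and H = G**2 - p''/p, and the example polynomial is ours). Because the arithmetic is complex throughout, a real root is typically returned with a tiny spurious imaginary part:

```python
import cmath

def laguerre(a, x, tol=1.0e-9, max_iter=50):
    """One zero of p(x) = a[0]*x**n + ... + a[n] by Laguerre's method.
    Complex arithmetic is used so that complex zeroes can be found."""
    n = len(a) - 1
    for _ in range(max_iter):
        p = dp = ddp = 0.0 + 0.0j
        for c in a:                       # Horner scheme for p, p', p''
            ddp = ddp * x + 2.0 * dp
            dp = dp * x + p
            p = p * x + c
        if abs(p) < tol:
            return x
        g = dp / p
        h = g * g - ddp / p
        s = cmath.sqrt((n - 1) * (n * h - g * g))
        # pick the sign that maximizes the denominator's magnitude
        denom = g + s if abs(g + s) > abs(g - s) else g - s
        dx = n / denom
        x = x - dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("no convergence")

# Example: a zero of x**3 - 1, starting from 2
z = laguerre([1.0, 0.0, 0.0, -1.0], 2.0 + 0.0j)
```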
Therefore, polyRoots replaces a tiny imaginary part by zero. Hence the results should be viewed with caution when dealing with polynomials of high degree. Solution Use the given estimate of the root as the starting value. Determine all the other zeroes of Pn(x) by using a calculator. Problems 10-16: Find all the zeroes of the given Pn(x). Thus the eigenvalues of A are the zeroes of Pn(x). An equally effective tool is the Taylor series expansion of f(x) about the point xk.
The latter has the advantage of providing us with information about the error involved in the approximation. Numerical differentiation is not a particularly accurate process. For this reason, a derivative of a function can never be computed with the same precision as the function itself.
We also record the sums and differences of the series: Equations (a)-(h) can be viewed as simultaneous equations that can be solved for various derivatives of f(x). The number of equations involved and the number of terms kept in each equation depend on the order of the derivative and the desired degree of accuracy.
The term O(h^2) reminds us that the truncation error behaves as h^2. Table 5. For example, consider the situation where the function is given at the n discrete points x1, x2, ..., xn.
Since central differences use values of the function on each side of x, we would be unable to compute the derivatives at x1 and xn. Solving Eq. We can derive the approximations for higher derivatives in the same manner. For example, Eqs. The results are shown in Tables 5. The common practice is to use expressions of O(h^2).
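The first and second central difference approximations of O(h^2) are one-liners in any language. As a Python illustration (the demonstration function and step sizes are ours):

```python
import math

def first_deriv(f, x, h):
    """Central difference approximation of f'(x), O(h**2):
    (f(x + h) - f(x - h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def second_deriv(f, x, h):
    """Central difference approximation of f''(x), O(h**2):
    (f(x + h) - 2 f(x) + f(x - h)) / h**2."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

# Derivatives of sin(x) at x = 0.5; exact values are cos(0.5), -sin(0.5)
d1 = first_deriv(math.sin, 0.5, 1.0e-5)
d2 = second_deriv(math.sin, 0.5, 1.0e-3)
```

Note the different step sizes: the second derivative divides by h**2, so roundoff grows faster as h shrinks, which is exactly the trade-off discussed below.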
To obtain noncentral difference formulas of this order, we have to retain more terms in the Taylor series. We start with Eqs.
As you can see, the computations for high-order derivatives can become rather tedious. The effect on the roundoff error can be profound.
On the other hand, we cannot make h too large, because then the truncation error would become excessive. This unfortunate situation has no remedy, but we can obtain some relief by taking the following precautions: We carry out the calculations with six- and eight-digit precision, using different values of h. The results, shown in Table 5. Above optimal h, the dominant error is due to truncation; below it, the roundoff error becomes pronounced. Because the extra precision decreases the roundoff error, the optimal h is smaller about 0.
Suppose that we have an approximate means of computing some quantity G. Moreover, assume that the result depends on a parameter h. We work with six-digit precision and utilize the results in Table 5. The result is 22 g 0. Note that it is as accurate as the best result obtained with eight-digit computations in Table 5. Solution From the forward difference formulas in Table 5. Referring to the formulas of O(h^2) in Table 5. Forward and backward differences of O(h^2) are used at the endpoints, central differences elsewhere.
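Richardson extrapolation as described above can be sketched in Python. The combination (4 G(h/2) - G(h))/3 assumes the leading error term of G(h) behaves as h^2, as it does for central differences; the demonstration function is ours:

```python
import math

def richardson(g, h):
    """Richardson extrapolation for g(h) with leading error c*h**2:
    combining g(h) and g(h/2) cancels that term, leaving O(h**4)."""
    return (4.0 * g(h / 2.0) - g(h)) / 3.0

# g(h): O(h**2) central difference of d/dx sin(x) at x = 0.5
g = lambda h: (math.sin(0.5 + h) - math.sin(0.5 - h)) / (2.0 * h)
crude = g(0.1)               # error ~ h**2
better = richardson(g, 0.1)  # error ~ h**4
```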
The idea is to approximate the derivative of f x by the derivative of the interpolant. As pointed out in Art. In view of the above limitation, the interpolation should usually be a local one, involving no more than a few nearest-neighbor data points.
Several methods of polynomial interpolation were introduced in Art. Unfortunately, none of them is suited for the computation of derivatives. Cubic Spline Interpolant. Due to its stiffness, a cubic spline is a good global interpolant; moreover, it is easy to differentiate.
This can be done with the function splineCurv, as explained in Art. The normal equations, Eqs. This is not unexpected, considering the general rule: it is impossible to tell which of the two results is better without knowing the expression for f(x). What is the order of the truncation error? Given the data x 0.
In each case, use the h that gives the most accurate result (this requires experimentation). Use cubic spline interpolation. All rules of quadrature are derived from polynomial interpolation of the integrand. Therefore, they work best if f(x) can be approximated by a polynomial.
Methods of numerical integration can be divided into two groups: Newton-Cotes formulas and Gaussian quadrature. Newton-Cotes formulas use evenly spaced abscissas; they are most useful if f(x) has already been computed at equal intervals, or can be computed at low cost.
In Gaussian quadrature the locations of the abscissas are chosen to yield the best possible accuracy. Polynomial approximation of f(x).
Therefore, an approximation to the integral in Eq. The most important of these is the trapezoidal rule. Trapezoidal rule. It represents the area of the trapezoid in Fig. It can be obtained by integrating the interpolation error in Eq. Composite trapezoidal rule. Figure 6. The function f(x) to be integrated is approximated by a straight line in each panel.
The truncation error in the area of a panel is, from Eq. Hence the truncation error in Eq. Note that if k is increased by one, the number of panels is doubled. Observe that the summation contains only the new nodes that were created when the number of panels was doubled. Therefore, the computation of the sequence I1, I2, I3, ... A form of Eq. The area under the parabola, which represents an approximation of the integral of f(x) from a to b, is (see derivation in Example 6. ). Applying Eq.
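Returning to the recursive trapezoidal rule: since the summation contains only the new nodes, each panel-doubling reuses all previously computed function values. A Python sketch following the same calling pattern (Iold, k) as the book's MATLAB trapezoid, with an integrand of our choosing:

```python
import math

def trapezoid(f, a, b, Iold, k):
    """Recursive trapezoidal rule.  With k = 1 the estimate uses one
    panel; each later call doubles the panels, evaluating f only at
    the new midpoint nodes and reusing Iold."""
    if k == 1:
        return (f(a) + f(b)) * (b - a) / 2.0
    n = 2 ** (k - 2)            # number of new nodes
    h = (b - a) / n             # spacing of the new nodes
    x = a + h / 2.0
    s = 0.0
    for _ in range(n):
        s += f(x)
        x += h
    return (Iold + h * s) / 2.0

# Halve the panels repeatedly for the integral of sin(x) on [0, pi]
I = 0.0
for k in range(1, 15):
    I = trapezoid(math.sin, 0.0, math.pi, I, k)
```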
The error in Eq. Solution of Part 2 The new nodes created by the doubling of panels are located at midpoints of the old panels. How many function evaluations are required to achieve this result? Solution The program listed below utilizes the function trapezoid. Apart from the value of the integral, it displays the number of function evaluations used in the computation.
Consequently, the error does not behave as shown in Eq. The leading error term c1 h^2 is then eliminated by Richardson extrapolation. Note that the most accurate estimate of the integral is always the last diagonal term of the array. In this manner, r1 always contains the best current result. It returns the value of the integral and the required number of function evaluations. Denoting the abscissas of the nodes by x1, x2, ... Work with four decimal places.
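The Romberg scheme described above, trapezoidal estimates refined by repeated Richardson extrapolation, can be sketched in Python (the book's romberg is MATLAB and stores the diagonal in reverse; this version keeps whole rows for clarity, and the test integrand is ours):

```python
def romberg(f, a, b, tol=1.0e-8, max_k=16):
    """Romberg integration: recursive trapezoidal estimates (panel
    doubling reuses all previous function values) refined by repeated
    Richardson extrapolation.  The last entry of the newest row is
    the best current estimate."""
    R = [[(f(a) + f(b)) * (b - a) / 2.0]]   # one-panel trapezoid
    n, h = 1, b - a
    for k in range(1, max_k):
        h /= 2.0
        n *= 2
        # new nodes sit at odd multiples of h
        s = sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
        row = [R[-1][0] / 2.0 + h * s]
        for m in range(1, k + 1):           # Richardson sweep
            p = 4.0 ** m
            row.append((p * row[m - 1] - R[-1][m - 1]) / (p - 1.0))
        if abs(row[-1] - R[-1][-1]) < tol:
            return row[-1]
        R.append(row)
    raise RuntimeError("no convergence")

import math
I = romberg(math.sin, 0.0, math.pi)         # exact value: 2
```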
Solution From the recursive trapezoidal rule in Eq. It required function evaluations as compared to evaluations with the composite trapezoidal rule in Example 6.
Explain the results. The table shows the power P supplied to the driving wheels of a car as a function of the speed v. The table below gives the pull F of the bow as a function of the draw x.
If the bow is drawn 0. This is also true for Gaussian quadrature. However, Gaussian formulas are also good at estimating integrals of the form ∫ w(x) f(x) dx over (a, b). Gaussian integration formulas have the same form as Newton-Cotes rules: the difference lies in the way that the weights Ai and nodal abscissas xi are determined.
In Newton-Cotes integration the nodes were evenly spaced in (a, b). In Gaussian quadrature the nodes and weights are chosen so that Eq. These formulas can be used without knowing the theory behind them, since all one needs for Gaussian integration are the values of xi and Ai.
If you do not intend to venture outside the classical formulas, you can skip the next two topics. They have been studied thoroughly and many of their properties are known. What follows is a very small compendium of a large topic. That is, each set of orthogonal polynomials is associated with a certain w(x), a and b. Some of the classical orthogonal polynomials, named after well-known mathematicians, are listed in Table 6.
The last column in the table shows the standardization used. Table 6. According to Eq. These tables should be adequate for hand computation, but in programming you may need more precision or a larger number of nodes. In that case you should consult other references, or use a subroutine to compute the abscissas and weights within the integration program. The expression for K(n) depends on the particular quadrature being used. If the derivatives of f(x) can be evaluated, the error formulas are useful in estimating the error bounds.
M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, Dover Publications; A. H. Stroud and D. Secrest, Gaussian Quadrature Formulas; W. H. Press et al., Numerical Recipes.
The truncation error in Eq. (In the tables of nodes and weights, multiply the numbers by 10^k, where k is given in parentheses.) Gauss-Hermite quadrature: note that gaussNodes calls the subfunction legendre, which returns pn(t) and its derivative.
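Since all one needs for Gaussian integration are the values of xi and Ai, a fixed-order rule is tiny to code. Here is a Python sketch of three-node Gauss-Legendre quadrature using the classical tabulated nodes and weights (the book's gaussNodes/gaussQuad are MATLAB functions; this standalone version and its test integrand are ours):

```python
import math

def gauss_legendre3(f, a, b):
    """Three-node Gauss-Legendre quadrature over [a, b].  The nodes
    and weights are the classical values for the standard interval
    [-1, 1]; a linear change of variable maps them to [a, b].  The
    rule is exact for polynomials up to degree 5 (2n - 1, n = 3)."""
    t = math.sqrt(0.6)
    nodes = [-t, 0.0, t]
    weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]
    c1, c2 = (b - a) / 2.0, (b + a) / 2.0
    return c1 * sum(w * f(c1 * x + c2) for x, w in zip(nodes, weights))

# Exactness check: integral of x**5 - x**3 + 1 over [0, 1] is 11/12
I = gauss_legendre3(lambda x: x**5 - x**3 + 1.0, 0.0, 1.0)
```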
The nodal abscissas and the weights are obtained by calling gaussNodes. However, the exact integral can be obtained with the Gauss-Chebyshev formula.
The abscissas of the nodes are obtained from Eq. Solution We split the integral into two parts: The sum is evaluated in the following table: The exact integral, rounded to six places, is 1.
Solution The integrand is a smooth function; hence it is suited for Gauss-Legendre integration. We used the following program, which computes the quadrature with 2, 3, ... nodes.
This can be done using the functions newtonPoly or neville listed in Art. Use (a) two nodes and (b) four nodes. The integral 0 sin x dx is evaluated with Gauss-Legendre quadrature using four nodes.
What are the bounds on the truncation error resulting from the quadrature? Compute 0 sin x ln x dx to four decimal places. Calculate the bounds on the truncation error if 0 x sin x dx is evaluated with Gauss—Legendre quadrature using three nodes. What is the actual error? Test the program by verifying that erf 1. Use the program to compute C 0. The other two parts can be evaluated with Gauss—Legendre quadrature. Use this method to evaluate I to six decimal places.
The computations are straightforward if the region of integration has a simple geometric shape, such as a triangle or a quadrilateral. However, an irregular region A can always be approximated as an assembly of triangular or quadrilateral subregions A1, A2, ...
Figure 6. Finite element model of an irregular region. Figure 6. Mapping a quadrilateral into the standard rectangle. In order to apply quadrature to the quadrilateral element in Fig. Consequently, straight lines remain straight upon mapping. Substituting from Eqs.
The determinant of the Jacobian matrix is obtained by calling detJ; mapping is performed by map. Referring to Eq. Solution From the quadrature formula in Eq. It follows that the points labeled a contribute equal amounts to I; the same is true for the points labeled b. Solution The required integration order is determined by the integrand in Eq.
Quadrilateral with two coincident corners. Therefore, the integration formulas over a quadrilateral region can also be used for a triangular element.
However, it is computationally advantageous to use integration formulas specially developed for triangles, which we present without derivation. Triangular element.
Consider the triangular element in Fig. Drawing straight lines from the point P in the triangle to each of the corners divides the triangle into three parts with areas A1, A2 and A3. See, for example, O. C. Zienkiewicz and R. L. Taylor, The Finite Element Method. The locations of the integration points (for triangular elements) are shown in Fig. The quadrature in Eq. See, for example, S. Timoshenko and J. N. Goodier, Theory of Elasticity, 3rd ed. The command used for the computations is similar to the one in Example 6.
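The quadratic-order triangle formula, with its integration points at the side midpoints, is particularly easy to code. A Python sketch (the helper name and example triangle are ours):

```python
def triangle_quad(f, verts):
    """Quadratic-order quadrature over a triangle: the area times the
    average of f at the three side midpoints.  Exact for integrands
    that are polynomials up to degree two."""
    (x1, y1), (x2, y2), (x3, y3) = verts
    # area from the cross product of two edge vectors
    area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0
    mids = [((x1 + x2) / 2.0, (y1 + y2) / 2.0),
            ((x2 + x3) / 2.0, (y2 + y3) / 2.0),
            ((x3 + x1) / 2.0, (y3 + y1) / 2.0)]
    return area / 3.0 * sum(f(x, y) for x, y in mids)

# Example: integrate f = x*y over the right triangle (0,0), (1,0), (0,1);
# the exact value is 1/24
I = triangle_quad(lambda x, y: x * y, [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
```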
Note that only four function evaluations were required when using the triangle formulas. In contrast, the function had to be evaluated at nine points in Part 1. Note that the integration points lie in the middle of each side; their coordinates are (6, 10), (8, 5) and (14, ). The area of the triangle is obtained from Eq. Evaluate the following integral with Gauss-Legendre quadrature: Use integration order (a) two and (b) three.
The true value of the integral is 2. Evaluate ∫A x dx dy over the triangle shown in Prob. Evaluate ∫A Use the cubic integration formula for a triangle. Consider the triangle as a degenerate quadrilateral.
To speed up execution, vectorize the computation of func by using array operators. It is recommended if very high accuracy is desired and the integrand is smooth. There are no functions for Gaussian quadrature. The solution of this equation contains an arbitrary constant (the constant of integration). In this chapter we consider only initial value problems.
Its basis is the truncated Taylor series for y about x: Because Eq. The last term kept in the series determines the order of integration. For the series in Eq. The amount of data is controlled by the printout frequency freq. Also compute the estimated error from Eq. The main drawback of the Taylor series method is that it requires repeated differentiation of the dependent variables.
These expressions may become very long and thus error-prone and tedious to compute. Moreover, there is the extra work of coding each of the derivatives. Due to excessive truncation error, this method is rarely used in practice. The area between the rectangle and the plot represents the truncation error.
Some of the popular choices and the names associated with the resulting formulas are: Most programmers prefer integration formulas of order four, which achieve a given accuracy with less computational effort.
The second equation then approximates the area of the panel by the area K 2 of the cross-hatched rectangle. Fourth-Order Runge—Kutta Method The fourth-order Runge—Kutta method is obtained from the Taylor series along the same lines as the second-order method. Since the derivation is rather long and not very instructive, we skip it. The most popular version, which is known simply as the Runge—Kutta method, entails the following sequence of operations: Therefore, we must guess the integration step size h, or determine it by trial and error.
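The sequence of operations just described, four stage evaluations K1 through K4 combined with weights 1, 2, 2, 1, can be sketched in Python (the book's runKut4 is MATLAB and handles systems of equations; this scalar version and the test problem are ours):

```python
import math

def rk4_step(F, x, y, h):
    """One step of the classical fourth-order Runge-Kutta method
    for y' = F(x, y)."""
    K1 = h * F(x, y)
    K2 = h * F(x + h / 2.0, y + K1 / 2.0)
    K3 = h * F(x + h / 2.0, y + K2 / 2.0)
    K4 = h * F(x + h, y + K3)
    return y + (K1 + 2.0 * K2 + 2.0 * K3 + K4) / 6.0

# Integrate y' = -y, y(0) = 1 from x = 0 to x = 1 with h = 0.1;
# the analytical solution is y = exp(-x)
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda x, y: -y, x, y, h)
    x += h
err = abs(y - math.exp(-1.0))
```

Because the global truncation error behaves as h**4, even this coarse step size yields several correct digits; the guessing of h mentioned above is what the adaptive methods of the next article automate.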
In contrast, the so-called adaptive methods can evaluate the truncation error in each integration step and adjust the value of h accordingly but at a higher cost of computation.
One such adaptive method is introduced in the next article. Keep four decimal places in the computations. A summary of the computations is shown in the table below. Therefore, up to this point the numerical solution is accurate to four decimal places.
However, it is unlikely that this precision would be maintained if we were to continue the integration. This problem was solved by the Taylor series method in Example 7. Here are the results of integration: This was expected, since both methods are of the same order. According to the analytical solution, y should decrease to zero with increasing x, but the output shows the opposite trend. The explanation is found by taking a closer look at the analytical solution. The cause of trouble in the numerical solution is the dormant term Ce^(3x).
Since errors inherent in the numerical solution have the same effect as small changes in initial conditions, we conclude that our numerical solution is the victim of numerical instability due to sensitivity of the solution to initial conditions.
The lesson here is: Note that the independent variable t is denoted by x. A more accurate value of t can be obtained by polynomial interpolation. If no great precision is needed, linear interpolation will do. Compare the result with Example 7. Verify your conclusions by integrating with any numerical method. In the following sets of coupled differential equations, t is the independent variable.
Stability has nothing to do with accuracy; in fact, an inaccurate method can be very stable. Unfortunately, it is not easy to determine stability beforehand, unless the differential equation is linear. Develop mass-balance equations for the reactors and solve the three simultaneous linear algebraic equations for their concentrations.
In such systems, a stream containing a weight fraction Yin of a chemical enters from the left at a mass flow rate of F1. Simultaneously, a solvent carrying a weight fraction Xin of the same chemical enters from the right at a flow rate of F2. If the equations are overdetermined (A has more rows than columns), the least-squares solution is computed.
This is called underrelaxation. With appropriate changes in the functions dEqs(x,y), inCond(u) and residual(u), the program can solve boundary value problems of any order greater than two. The computations of the residuals and standard deviation are as follows: Credit is also due to the authors of Numerical Recipes (Cambridge University Press), whose presentation of numerical methods was inspirational in writing this book.