An Article from the December 2003 JOM-e: A Web-Only Supplement to JOM
J.M. Rickman is an associate professor, and R.P. Vinci is an assistant professor in the Materials Science and Engineering Department at Lehigh University.
Several years ago, the materials science and engineering faculty at Lehigh University decided to introduce into the undergraduate curriculum a course in computational methods. This additional degree requirement was instituted to address the concern that, despite completing the traditional requirements of a course of study in engineering, many students still lacked some of the elementary analytical skills needed to perform simple calculations and estimates and to analyze and interpret experimental data. Furthermore, it was felt that the basic programming and numerical methods abilities acquired in a college-wide, freshman engineering course should be reinforced and supplemented with illustrative examples of relevance in materials science.
While there was widespread agreement on the potential benefits of such a course, there was considerable debate as to its proper place in the course sequence. Some faculty members believed, for example, that it should immediately follow the introductory materials science course in the sophomore year, while others argued for inclusion in the senior year to permit discussion of more advanced topics. In the end, it was decided that the students would benefit most if they took the course relatively early in their academic careers and if the material was subsequently reinforced in the junior and senior years, and so Computational Methods in Materials Science (MAT 20) was launched for Lehigh sophomores in the spring of 1998. For reference, the specific objectives of this course are outlined in the sidebar. This article highlights the evolution and current contents of MAT 20 and documents the successes and difficulties encountered in its development.
It is worth noting at the outset that the course faced two overarching issues from its inception. First, with regard to computer platforms, an old cluster of Power Macintosh computers was available and later upgraded to iMac computers in a more spacious classroom. In recent years, after some consideration, the machines were replaced with Dell laptops having wireless local area network connections. The current mode of operation is more to the students’ liking, given the personal-computer environment on campus. Second, course organizers had to decide to what extent, if any, the students should write their own computer code. Although they had nominally been programming in either Fortran or C++ since their freshman year, most students were still uncomfortable writing and debugging code, especially in unfamiliar Unix environments. Thus, to lower the barrier to acquiring new computational skills, over the last two years classroom exercises were tailored to spreadsheet software, as the students were already relatively comfortable with programs such as Microsoft Excel. The authors found that Excel, in particular, is sufficiently powerful and adaptable to meet the course requirements, and students become rather adept at the necessary formulaic and graphical manipulations that drive spreadsheet programs. It also has the advantages of being highly portable and widely used in the engineering and business communities. Beyond this use of existing software, the students were expected to demonstrate minimal programming proficiency through homework sets that were coordinated with weekly class assignments.
The course was run in a highly participatory and interactive fashion, with a large fraction of each class period devoted to discussion and problem solving under the guidance of the instructor. In the classroom discussed in this article, the instructor’s work surface consists of a pair of SmartBoard large-screen rear projection television sets that enable the display of a variety of media, including paper documents and computer output. Each student has his/her own computer for use throughout the class period. The virtue of this approach is that new concepts are introduced and tested in the classroom environment, permitting immediate feedback and individual assistance. Its success, however, is contingent on a small class size, and so multiple two-hour sessions were offered, each with a limited enrollment of about a dozen students.
The course content was wide ranging and meant to build on the foundation laid in the pre- and co-requisites, namely the sophomore-level introductory lecture course, the associated materials laboratory, a freshman programming methods class, and the basic mathematics sequence. The overarching aim of the course was to illustrate the uses of the computer in modern materials science and engineering, including modeling and simulation, data collection, and analysis. While the coverage varied somewhat over time, certain core topics were emphasized each year. The summary that follows highlights many of these central themes and the exercises used to develop the desired skill base. For reference, the course outline for the spring 2003 semester can be found in the sidebar.
Probability and Statistics
The aim of this unit was to help the students gain an intuitive understanding of the concept of probability as well as an associated calculational facility. Toward this end, computer simulation was employed in the spreadsheet format to model several stochastic processes, followed by a compilation and subsequent analysis of the generated experimental data. For further study, students were assigned readings and selected problems in either An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements by J.R. Taylor1 or Essential Statistics by D.G. Rees.2 Both textbooks were found to provide excellent supplementary material for this course.
This unit began with the introduction of the notion of a pseudo-random number generator in the context of a simple, one-dimensional Monte Carlo integration scheme (see Reference 3, for example) wherein the area under a bounded curve is determined from the frequency of randomly distributed points that lie under the curve, as shown schematically in Figure 1. Random number generation was then used in the simulation of two prototypical processes, repetitive coin flipping and the time record of phone calls into a switchboard, characterized by the binomial and Poisson distributions, respectively. For this purpose, students were encouraged to compile histograms summarizing their simulation results, make comparisons with theoretical probability distributions, and calculate lower-order moments of the distributions (i.e., mean, variance, and skewness) as well as the corresponding cumulative distributions.4 A typical example of the required histogram analysis is shown in Figure 2. A connection was then established between the coin-flip experiment and a (possibly biased) random-walk model of solid-state diffusion and between the switchboard experiment and problems in quantitative stereology.
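The classroom exercises were carried out in a spreadsheet, but the same hit-or-miss Monte Carlo integration idea is easy to sketch in a few lines of Python; the test function f(x) = x² on [0, 1], the bounding box, and the sample size below are arbitrary illustrative choices rather than the values used in class.

```python
import random

def monte_carlo_area(f, x_min, x_max, y_max, n_points=100_000):
    """Hit-or-miss Monte Carlo estimate of the area under f(x).

    Random points are scattered uniformly in the bounding box
    [x_min, x_max] x [0, y_max]; the fraction that falls below the
    curve, times the box area, estimates the integral of f.
    """
    hits = 0
    for _ in range(n_points):
        x = random.uniform(x_min, x_max)
        y = random.uniform(0.0, y_max)
        if y <= f(x):
            hits += 1
    box_area = (x_max - x_min) * y_max
    return box_area * hits / n_points

if __name__ == "__main__":
    # Example: integral of x^2 on [0, 1]; the exact value is 1/3.
    estimate = monte_carlo_area(lambda x: x * x, 0.0, 1.0, 1.0)
    print(f"Monte Carlo estimate: {estimate:.4f} (exact: 0.3333)")
```

The coin-flip and switchboard simulations follow the same pattern: draw pseudo-random numbers, tally the outcomes into a histogram, and compare against the binomial or Poisson distribution.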
Figure 1. A schematic of the Monte Carlo integration scheme. The function of interest, f(x), is enclosed in a square, and a large number of points inside the square are randomly selected. The area under the curve is then determined from the frequency of points lying below f(x).

Figure 2. The normalized probability distribution for the number of heads obtained in ten flips of a fair coin. This experiment was repeated approximately 3,000 times. Also shown is the corresponding, calculated binomial distribution. The agreement between the simulation results and the binomial distribution is excellent.
This survey of probability and statistics continued with an analysis of continuous random variables that focused mainly on the uniform and normal densities, the latter introduced as a limiting form of the binomial distribution. In particular, the salient features of the normal density, such as the relationship between its full width at half-maximum and the standard deviation as well as the probabilities associated with events one or two standard deviations from the mean (obtained via numerical integration), were considered in some detail.
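As a rough illustration of the kind of numerical check the students performed (sketched here in Python rather than Excel, with an arbitrary step size), the standard normal density can be integrated numerically to recover the familiar probabilities of roughly 68% and 95% for events within one and two standard deviations of the mean; the full width at half-maximum works out to 2(2 ln 2)^(1/2) σ, or about 2.355 σ.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Normal probability density with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def trapezoid(f, a, b, n=10_000):
    """Simple trapezoidal-rule estimate of the integral of f on [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

if __name__ == "__main__":
    # Probability of falling within one and two standard deviations of the mean.
    p1 = trapezoid(normal_pdf, -1.0, 1.0)
    p2 = trapezoid(normal_pdf, -2.0, 2.0)
    fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0))   # in units of sigma
    print(f"P(|x| < 1 sigma) = {p1:.4f}")          # ~0.6827
    print(f"P(|x| < 2 sigma) = {p2:.4f}")          # ~0.9545
    print(f"FWHM = {fwhm:.4f} sigma")              # ~2.3548
```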
Data Analysis
Another important component of the course centered on methods for analyzing and interpreting experimental data. In this unit, the students were asked to use graphical methods to extract model parameters from a data set and to assess the suitability of the fit to a given model. These exercises forced them to determine which variables to plot and how to obtain the desired information from the plot. The examples selected for analysis are those encountered in other core materials-science courses and include the description of thermodynamic data (e.g., applying Boyle’s law to pressure-volume information for a gas); the extraction of activation energies from Arrhenius relationships for diffusive transport and electrical conduction;5 the investigation of power laws in grain growth; and the quantification of transformation kinetics with the Avrami relation.6
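As one concrete illustration (sketched in Python with invented diffusivity values; the course exercises used spreadsheets), an Arrhenius relationship D = D0 exp(−Q/RT) is linearized by plotting ln D against 1/T, so that the slope, −Q/R, yields the activation energy. A crude graphical slope estimate from two well-separated points is shown here; a proper least-squares fit is taken up below.

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Hypothetical diffusivity data: temperature (K) and diffusion coefficient (m^2/s).
T = [800.0, 900.0, 1000.0, 1100.0, 1200.0]
D = [1.1e-17, 5.3e-16, 1.2e-14, 1.6e-13, 1.4e-12]

# Choice of variables: plotting ln D against 1/T turns D = D0*exp(-Q/(R*T))
# into a straight line with slope -Q/R and intercept ln D0.
inv_T = [1.0 / t for t in T]
ln_D = [math.log(d) for d in D]

# Graphical-style estimate of the slope from the two end points of the line.
slope = (ln_D[-1] - ln_D[0]) / (inv_T[-1] - inv_T[0])
Q = -slope * R  # activation energy, J/mol
print(f"Estimated activation energy Q = {Q / 1000.0:.0f} kJ/mol")
```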
The extraction of model parameter information led naturally to the problem of estimating the slope and intercept in a linear data fit and, therefore, to a discussion of linear regression. To illustrate the important points, the students were asked to write their own least-squares fitting program, deduce algebraically the best values for the slope and intercept, apply their results to different data sets, and then compare them with those obtained using standard packages (e.g., Excel). Having calculated the fitting parameters for a given data set, the students were then encouraged to push their analysis further to quantify the goodness of fit.
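A minimal Python sketch of such a program (the sample data are arbitrary) implements the standard closed-form results, slope = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)² and intercept = ȳ − slope·x̄, together with the coefficient of determination R² as one simple goodness-of-fit measure; the output can be checked directly against Excel's built-in trendline fit.

```python
def least_squares(x, y):
    """Return (slope, intercept, r_squared) for the best straight line y = a*x + b."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    s_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    s_xx = sum((xi - x_bar) ** 2 for xi in x)
    slope = s_xy / s_xx
    intercept = y_bar - slope * x_bar
    # Coefficient of determination: fraction of the variance explained by the fit.
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    r_squared = 1.0 - ss_res / ss_tot
    return slope, intercept, r_squared

if __name__ == "__main__":
    # Arbitrary, slightly noisy data lying near y = 2x + 1.
    x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
    y = [1.1, 2.9, 5.2, 6.8, 9.1, 10.9]
    a, b, r2 = least_squares(x, y)
    print(f"slope = {a:.3f}, intercept = {b:.3f}, R^2 = {r2:.4f}")
```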
This unit concluded with a discussion of the errors inherent in measurements of physical quantities and standard methods of error analysis. Here, the traditional approach was followed, describing the nature of various sources of error (i.e., random or systematic) and outlining procedures for obtaining from a series of measurements a best estimate for a quantity and the associated uncertainty.1 The problem of error propagation in measurements and the statistical assumptions underlying error analysis were also considered. Finally, faculty and students explored the effects of computer data acquisition on data quality, focusing in particular on the analog-to-digital conversion process. The inclusion of this material allowed discussion of topics related to the functioning of a computer (i.e., digital logic and binary numbers) that, for some reason, receive little coverage in other courses.
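The sketch below (Python, with made-up length measurements) illustrates the standard recipe described in Reference 1: the mean as the best estimate, the standard deviation of the mean as its uncertainty, and simple quadrature propagation of independent errors through, in this case, an area calculation.

```python
import math

def best_estimate(samples):
    """Return (mean, standard deviation, standard error of the mean)."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((s - mean) ** 2 for s in samples) / (n - 1)  # sample variance
    std_dev = math.sqrt(variance)
    return mean, std_dev, std_dev / math.sqrt(n)

if __name__ == "__main__":
    # Made-up repeated measurements of the two sides of a plate (in mm).
    length = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03]
    width = [4.99, 5.03, 4.97, 5.02, 5.00, 4.98]

    L, _, dL = best_estimate(length)
    W, _, dW = best_estimate(width)

    # Area and its uncertainty from quadrature propagation of independent errors:
    # (dA/A)^2 = (dL/L)^2 + (dW/W)^2.
    A = L * W
    dA = A * math.sqrt((dL / L) ** 2 + (dW / W) ** 2)
    print(f"L = {L:.3f} +/- {dL:.3f} mm")
    print(f"W = {W:.3f} +/- {dW:.3f} mm")
    print(f"A = {A:.2f} +/- {dA:.2f} mm^2")
```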
Methods and Approximations
The first part of this unit consisted of a survey of standard methods for accomplishing such tasks as finding the roots of a function, solving transcendental equations graphically, differentiating, and integrating. Again using a spreadsheet application, the students were asked to investigate, for example, different root-finding strategies for a given trial function, such as the bisection method and Newton’s method,7 compare their relative convergence rates, and identify any potential pitfalls. The techniques of numerical integration, such as the trapezoidal rule and Simpson’s rule,7 were also explored in the spreadsheet format, and their relative accuracies were assessed for calculations of the error integrals introduced in the section on probability and statistics.
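A Python sketch of the root-finding comparison is given below; the trial function f(x) = x³ − 2x − 5, the starting bracket, and the tolerances are arbitrary illustrative choices. It contrasts the steady interval halving of the bisection method with the much faster convergence of Newton's method when a good starting guess and the derivative are available.

```python
def bisection(f, a, b, tol=1e-10):
    """Bisection: halve the bracketing interval [a, b] until it is shorter than tol."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    iterations = 0
    while (b - a) > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
        iterations += 1
    return 0.5 * (a + b), iterations

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton's method: iterate x -> x - f(x)/f'(x) until the step is below tol."""
    x = x0
    for i in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, i
    raise RuntimeError("Newton's method failed to converge")

if __name__ == "__main__":
    f = lambda x: x**3 - 2.0 * x - 5.0
    df = lambda x: 3.0 * x**2 - 2.0
    root_b, n_b = bisection(f, 2.0, 3.0)
    root_n, n_n = newton(f, df, 2.0)
    print(f"bisection: root = {root_b:.10f} in {n_b} iterations")
    print(f"Newton:    root = {root_n:.10f} in {n_n} iterations")
```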
The study of the methods of numerical differentiation was motivated by physical applications familiar to the students. This section began by constructing finite-differencing schemes, assessing their relative accuracy, and, as a first example, considering the problem of radioactive decay, as described by a first-order rate equation. The problems of heat and mass transport in a solid were used to introduce partial differential equations and, for the purposes of illustration, the students employed an iterative technique with a discrete version of the Laplacian operator to solve a steady-state heat flow problem in a metal piece subject to different boundary conditions. The graphical solution for a particular set of boundary conditions, as obtained by the students in the class, is shown in Figure 3.
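The steady-state heat-flow exercise can be sketched as follows (Python; the grid size, boundary temperatures, and convergence criterion are illustrative choices, not those used in class, and the Jacobi-style sweep shown here is just one common iterative scheme): each interior temperature is repeatedly replaced by the average of its four neighbors, which is the discrete Laplacian set to zero, until the grid stops changing.

```python
def solve_laplace(nx=21, ny=21, top=100.0, bottom=0.0, left=0.0, right=0.0,
                  tol=1e-4, max_sweeps=10_000):
    """Jacobi iteration for steady-state heat flow on a rectangular plate.

    Each interior node is repeatedly set to the mean of its four nearest
    neighbors until the largest change in one sweep falls below tol.
    """
    # Initialize the grid with the fixed boundary temperatures.
    T = [[0.0] * nx for _ in range(ny)]
    for j in range(nx):
        T[0][j] = top
        T[ny - 1][j] = bottom
    for i in range(ny):
        T[i][0] = left
        T[i][nx - 1] = right

    for _ in range(max_sweeps):
        new_T = [row[:] for row in T]
        max_change = 0.0
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                new_T[i][j] = 0.25 * (T[i - 1][j] + T[i + 1][j] +
                                      T[i][j - 1] + T[i][j + 1])
                max_change = max(max_change, abs(new_T[i][j] - T[i][j]))
        T = new_T
        if max_change < tol:
            break
    return T

if __name__ == "__main__":
    grid = solve_laplace()
    # Temperature at the center node; close to 25 by symmetry for these boundaries.
    print(f"center temperature = {grid[10][10]:.1f}")
```

The radioactive-decay example is simpler still: the forward-difference form of dN/dt = −λN, namely N(t+Δt) = N(t)(1 − λΔt), can be iterated in a single spreadsheet column and compared with the exact exponential solution.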
The second thrust of this unit was the development of skills needed to make calculation-based approximations, as typified by the determination of random packing fractions and the use of Taylor series expansions to investigate small parametric changes and to obtain finite-difference formulae. One concept that received heavy emphasis here was the estimation of upper and lower bounds, and this material was linked with the discussion of error analysis.8 In addition, the validity of approximations was assessed by direct comparison with simulation data for several examples, including the aforementioned packing and heat-flow problems.
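One way to sketch the packing-fraction estimate in Python is random sequential addition of non-overlapping disks (the disk radius, box size, and number of attempts below are arbitrary, and the course treatment may have differed); the resulting area fraction can then be compared with the ordered close-packed value π/(2√3) ≈ 0.907, which serves as an upper bound.

```python
import math
import random

def random_sequential_packing(box_size=20.0, radius=1.0, attempts=200_000):
    """Estimate a 2-D random packing fraction by random sequential addition.

    Disk centers are proposed at random inside the box; a disk is kept only
    if it does not overlap any previously placed disk (edge effects are
    ignored by letting disks overhang the box boundary).
    """
    centers = []
    min_dist_sq = (2.0 * radius) ** 2
    for _ in range(attempts):
        x = random.uniform(0.0, box_size)
        y = random.uniform(0.0, box_size)
        if all((x - cx) ** 2 + (y - cy) ** 2 >= min_dist_sq for cx, cy in centers):
            centers.append((x, y))
    disk_area = math.pi * radius ** 2
    return len(centers) * disk_area / box_size ** 2

if __name__ == "__main__":
    phi = random_sequential_packing()
    print(f"random packing fraction ~ {phi:.2f}")
    print(f"ordered close-packed upper bound = {math.pi / (2.0 * math.sqrt(3.0)):.3f}")
```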
Group Presentations
Finally, to sharpen the students’ presentation skills and to give them an opportunity to explore subjects of contemporary interest in materials science and engineering, the semester concluded with a series of group presentations. The aims here were to foster teamwork, allow the students to gain experience with web design and presentation software tools, and permit them to interact with an audience. The topics have varied from semester to semester; most recently, the students chose among three options: advances in nanotechnology, optoelectronics, and biomaterials. Students are encouraged to seek out faculty members with expertise in a chosen topic so that ongoing campus research efforts are also reflected in the course. These semester projects are usually one of the highlights of the course, and most students feel that the considerable effort expended in preparing their talks is well worth the investment.
1. J.R. Taylor, An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements (Sausalito, CA: University Science Books, 1997).
2. D.G. Rees, Essential Statistics (New York: Chapman and Hall, 1995).
3. M.H. Kalos and P.A. Whitlock, Monte Carlo Methods Volume I: Basics (New York: John Wiley and Sons, 1986).
4. J.E. Freund and R.E. Walpole, Mathematical Statistics (Englewood Cliffs, NJ: Prentice-Hall, 1980).
5. W.D. Callister, Jr., Materials Science and Engineering: An Introduction (New York: John Wiley and Sons, 2003).
6. D.A. Porter and K.E. Easterling, Phase Transformations in Metals and Alloys (New York: Chapman and Hall, 1992).
7. W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery, Numerical Recipes in FORTRAN: The Art of Scientific Computing (Cambridge, U.K.: Cambridge University Press, 1992).
8. A.M. Starfield, K.A. Smith, and A.L. Bleloch, How to Model It: Problem Solving for the Computer Age (Edina, MN: Burgess Publishing, 1994).
For more information, contact J.M. Rickman, Department of Materials Science and Engineering, Whitaker Laboratory, Lehigh University, 5 E. Packer Avenue, Bethlehem, PA 18015; e-mail jmr6@lehigh.edu.