Though open source engineering
analysis tools have not been widely
deployed, several of them have recently
reached a point of maturity and usability
in industry. This article focuses
on the use of open source tools for
modeling of materials and materials
processes in particular. After defining
open source software, it presents two
case studies, surveys open source tools
aimed at modeling of materials behavior
and processes at multiple length
and time scales, and discusses future
prospects and application areas for
open source tools.
INTRODUCTION
…describe the overall significance
of this paper?
This paper describes a variety
of new high-quality, robust and
user-friendly open source tools
which are becoming available
for use in materials processing
applications. For new modeling
paradigms in particular, such as
integrated computational materials
engineering, open source tools are
playing an integral role, and their
use is likely to increase over time.
…describe this work to a materials
science and engineering professional
with no experience in your
technical specialty?
Modeling and simulation have
become an intrinsic part of
development of materials and
processes for making them. In
the past, proprietary tools have
dominated this application space,
as the licensing revenue stream has
funded development of new features
and usability. But new open source
software is becoming available for
these purposes, and the open source
advantages of customization, long term
maintainability, and freedom
from single-vendor lock-in are
potentially even more compelling
than the absence of licensing fees.
…describe this work to a layperson?
Open source is a relatively new
paradigm for creating inexpensive,
robust software which users/
developers may customize at
will. The Firefox web browser is
perhaps the best-known example
of this paradigm; because its
source code is open for anyone
to add to, numerous companies
have contributed new features and
fixed its bugs. This paper describes
new open source codes for well established
simulation techniques,
and other trends in simulation
method development, which are
likely to increase the prominence of
open source code moving forward.
Open source software has become
prevalent in many applications, from
server operating systems to scientific
computing and high-end graphics. This
class of software is distributed with
source code, and with no restrictions
on making and redistributing modified versions. There are therefore no licensing
fees involved, and source code access
is of great benefit to educators, researchers,
and those who need custom
modifications. On the other hand, as it
is usually written for its authors’ needs,
the user interface of open source software
is often not as polished as that of
proprietary software, and the lack of licensing
revenue means that its feature
set often lags behind.
In spite of these general drawbacks,
several open source software suites
have recently grown in sophistication and polish to the point where they are
broadly useful for robust engineering
simulations. User interfaces have also
improved to the point where a small but
increasing number of these tools are usable
by non-experts. And the price
point, ability to customize, freedom
from single-vendor lock-in, and long-term
maintainability of these tools
make them potentially very attractive.
In fact, the open source phenomenon
has gained sufficient interest in the materials
science and engineering community
that the symposium “Open
Source Tools for Materials Research
and Education” is planned for the TMS
2009 Annual Meeting.
This article presents an overview of
open source tools for process modeling.
Case studies are considered in
which the ability to modify an open
source code played a key role in the
project’s success: boundary element
modeling of a new magnesium process,
and python scripting of pre-processing,
control and post-processing of ab initio
calculations of thermo-mechanical
properties of crystal structures. Also
provided are business models that generate
and sustain open source codes,
and consideration of future prospects
moving forward.
OPEN SOURCE SOFTWARE
Though installed on only a tiny minority of
desktop computers, open source software dominates
several fields of use. For example,
Linux runs major data centers such as
those of Google, Yahoo, eBay, Amazon,
the New York Stock Exchange,
and many of the world’s largest banks.
In addition, 85% of the top 500 supercomputers
in the world run Linux,1 and
Linux is used exclusively by all of the
animation studios producing major motion
pictures, on both render farms and
animator desktops.2 The Apache web
server powers over half of websites,3
and the market share of the open source
Firefox web browser in Europe is nearly
30%, and is approaching 50% in several
countries.4
Broadly speaking, open source software
is software for which source code
is available, and for which modification
and distribution of modified versions
are not prohibited by its author. This includes
software with source code in the
public domain (i.e., not copyrighted),
and copyrighted software with licenses
that permit such modification and distribution.
Indeed, much open source
software uses licenses that require that
modified versions be distributed under
the same terms, with source code available
and no restrictions on redistribution
of derived works; such licenses are
often called “copyleft” licenses. The
commonly accepted definition5 of open
source software, and a list of licenses
which conform with that definition, is
maintained by the Open Source Initiative,
a nonprofit corporation dedicated
to advocating for the benefits of open
source products.6
The term “free software” is similar
in definition, but those who use it generally
have different goals from those
who use “open source.” Users of the
former term, such as the Free Software
Foundation, tend to focus on freedom
as a moral imperative. The Free Software
Foundation was founded in 1985
to advocate for free, rather than proprietary
software. Free software proponents
claim that restricting distribution
of source code is unkind to one’s fellow
programmers. Open source advocates
focus on its merits as a methodology
for producing robust and inexpensive
software.
In addition, copyright restrictions
must be distinguished from trademark
policies. For example, one can legally
distribute derivatives of the complete
RedHat Enterprise Linux product, but
RedHat does not permit the use of its
trademarked name in such derivatives.
For the purpose of this article, the authors
consider permission to modify
and re-distribute as the operating attributes
of open source software, while
recognizing that the particular characteristics
of the licensing scheme need to
be addressed on a per-case basis.
CASE STUDIES
Solid-Oxide Membrane
Electrolysis of Magnesium
Solid-oxide membrane (SOM) electrolysis,
shown in Figures 1 and 2, is a
promising method for producing pure
magnesium vapor from dolomite ores
or other magnesium oxide or hydroxide
sources using just 10 kWh of energy
per kg of product.7 The magnesium
(hydr)oxide dissolves in a molten salt
electrolyte (typically CaF2-MgF2 eutectic).
At the cathode, Mg2+ ions are reduced
to magnesium metal vapor, and
at the anode, the SOM (typically
yttria-stabilized zirconia) allows only
oxide ions to pass through even at high
potential, so the by-product is oxygen.
The magnesium vapor in turn can either
condense to produce liquid or solid
product, or else can react with other
species. For example, reaction with
tantalum oxide produces tantalum metal,
with titanium oxide produces Ti3O
which reduces more readily to metal,
and with hydrogen gas produces MgH2
for hydrogen storage. A laboratory scale
reactor with a single SOM tube
has run at close to 1 A/cm2 for four days
without noticeable SOM degradation,
so the process appears to be robust.
Process scale-up is proceeding with
a three-tube reactor running at very
high current. But before building that
reactor, it was necessary to first model
the process to assess the uniformity of
heating of the SOM tubes, as non-uniform
heating would lead to breakage.
For this purpose, the open source Julian
boundary element code8 formed the basis
of a three-dimensional (3-D) model
of current density in the reactor. Results
of that model can be seen in Figure 3.9
Open source was helpful here in two
ways. First, Julian did not have sufficient geometric flexibility to represent
some features of the SOM tubes, so it
was necessary to extend its capabilities,
which only the software owner could
have done with a proprietary product.
(That the author was also an investigator
in this project somewhat diminishes
this advantage of open source.) Second,
as a component of Rachel DeLucas’s
graduate research, the ability to see and
understand the Julian source code added
significantly to the educational value
of the modeling task.
Interfaces to Open Source
Computational Materials
Science Tools
One of the criticisms commonly levied
against open source code is that in
many cases (although not always), extensive
expertise on the part of the user
is necessary in order to make maximum
use of the tool. This high barrier to entry
often hinders the widespread use of
these computational tools. Although
many of the user-unfriendly “features”
of freely available open
source materials simulation tools have
recently been alleviated, there is still
significant room for improvement.
A particularly effective approach to
improve the interfacing between nonexperts
and sophisticated codes is the
development of wrapping interfaces
that control the pre-processing,
execution, and post-processing of expert-oriented codes through the use of
high-level “glue” programming languages.
Python10 has become very
attractive due to its ability to interface
not only with operating systems but
also to native codes written in Fortran,
C, and C++, as well as their variants.
The power of Python goes beyond mere
gluing or wrapping and in fact, many
of the software tools described in this
paper (FiPy,11 OOF12) are written in
Python, using compiled Fortran and
C codes for the numerically intensive
components of the programs. To give
an example of the power of Python as
an interfacing layer, the problem of the
calculation of thermodynamic properties
of crystals at finite temperatures
through ab initio methods is considered
here. (Details of the codes mentioned
here are provided later.)
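As a minimal illustration of Python's ability to call native compiled code directly (the same mechanism, via `ctypes` or tools such as f2py, underlies scientific wrappers around Fortran and C codes), the sketch below loads the system C math library and calls its compiled `cos` routine; library name resolution is platform-dependent, and this assumes a typical Linux or macOS system.

```python
import ctypes
import ctypes.util

# Locate and load the platform's C math library (e.g., libm.so.6 on Linux).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature of cos(): double cos(double).
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # calls the compiled C routine, not Python's math.cos
```

The same declare-and-call pattern scales to whole numerical libraries, which is why Python works well as a thin control layer over compiled scientific kernels.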
To a first approximation, the finite-temperature thermodynamic properties
of crystals in which configurational
degrees of freedom (DOF) can be
ignored, are dominated by thermal excitations
of vibrational modes. In order
to determine the vibrational properties
of crystal systems, one can make use
of the harmonic lattice dynamics approximation,13 in which the atoms are
assumed to oscillate within a harmonic
approximation to the real crystal potential.
The mass-spring system then oscillates
with characteristic frequencies
corresponding to the vibrational modes
available, which can in turn be determined
through conventional lattice dynamic
approaches.13 The open source ab
initio thermodynamics Alloy Theoretic
Automated Toolkit (ATAT) code14,15 implements
harmonic lattice dynamic calculations
in which the spring constants
are calculated through the perturbation
of the ground state crystal, and the calculation
of the resulting inter-atomic
forces. Although the ATAT code does
an excellent job at automating many of
the procedures necessary to determine
the vibrational behavior of crystals, it is
necessary to point out that many different
steps are necessary to perform such
calculations.
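The mass-spring picture above can be made concrete with a toy calculation. The sketch below (plain NumPy, not ATAT) builds the dynamical matrix of a one-dimensional monatomic chain with nearest-neighbor springs and recovers the vibrational mode frequencies from its eigenvalues, checking them against the textbook dispersion relation:

```python
import numpy as np

# Harmonic lattice dynamics for a 1-D monatomic chain: N atoms, periodic
# boundary conditions, nearest-neighbor spring constant K, atomic mass m.
N, K, m = 8, 1.0, 1.0

# Dynamical matrix D_ij = (1/m) d^2V/du_i du_j of the harmonic potential.
D = np.zeros((N, N))
for i in range(N):
    D[i, i] = 2 * K / m
    D[i, (i + 1) % N] = -K / m
    D[i, (i - 1) % N] = -K / m

# Mode frequencies are the square roots of the eigenvalues of D.
omega = np.sqrt(np.abs(np.linalg.eigvalsh(D)))

# Check against the dispersion omega(k) = 2 sqrt(K/m) |sin(ka/2)|, with a = 1.
k = 2 * np.pi * np.arange(N) / N
analytic = 2 * np.sqrt(K / m) * np.abs(np.sin(k / 2))
print(np.allclose(np.sort(omega), np.sort(analytic), atol=1e-6))  # True
```

In a real crystal the same diagonalization is done for a 3N-dimensional dynamical matrix at many wavevectors, with the spring constants fitted to DFT forces as ATAT does.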
For example, once a crystal structure
has been optimized with respect
to external/internal degrees of freedom
through the use of a density functional
theory (DFT) code, it is then necessary
to perturb the crystal structure (using
ATAT) in order to calculate the reactive
forces to atomic displacements. If one
is to consider the effects of thermal expansion,
the quasi-harmonic approximation
requires the same procedure to
be performed for many different volumes
to capture the non-harmonicity
of the real crystal potential. In order to
extract the thermodynamic properties
of the crystal in question one then has
to post-process the information generated
by the ATAT code, which consists
of the vibrational phonon density
of states, along with tabulated vibrational
free energies. Figure 4 illustrates
these steps required for such a calculation.
Although the process is relatively
straightforward for experts, it is more involved
than warranted for a user merely
seeking this information
to adjust parameters of CALPHAD16
models.
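The post-processing step itself is simple once the frequencies are known. Assuming a hypothetical set of phonon frequencies as a stand-in for the density of states tabulated by ATAT, the harmonic vibrational free energy follows from the standard expression F_vib = Σ [ħω/2 + k_B T ln(1 − exp(−ħω/k_B T))]:

```python
import numpy as np

kB = 8.617333e-5     # Boltzmann constant, eV/K
hbar = 6.582120e-16  # reduced Planck constant, eV*s

# Hypothetical phonon frequencies (rad/s) standing in for the discretized
# phonon density of states that a lattice-dynamics code would tabulate.
omega = np.array([2.0e13, 3.5e13, 5.0e13])

def f_vib(T):
    """Harmonic vibrational free energy per cell (eV) at temperature T (K)."""
    e = hbar * omega  # phonon energies in eV
    return float(np.sum(e / 2 + kB * T * np.log(1 - np.exp(-e / (kB * T)))))

# Free energy falls with temperature as vibrational entropy grows.
for T in (300.0, 600.0, 900.0):
    print(T, f_vib(T))
```

Repeating this at several volumes and minimizing over volume at each temperature gives the quasi-harmonic treatment of thermal expansion described above.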
Thanks to Python scripting, the whole
process can become more streamlined
for the casual user. The schematic in
Figure 5, for example, represents a
very user-friendly Python-based interface
developed for the calculation of ab
initio finite temperature thermodynamics
integrating ATAT with a DFT code.
The Python script takes user input such
as symmetry and composition of the
crystal structure, and then generates the
corresponding input files for the ATAT
and the DFT codes. The Python script
not only controls the interactions between
the different computer software,
but is also capable of controlling/steering
the calculations themselves, interacting
with the job scheduling service
used in the particular cluster computer
used. The post-processing of the calculations
can then be performed within
the Python environment, with almost
no interaction with the end user.
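A skeleton of such a wrapper script might look as follows. The executable names, file names, and formats here are placeholders, not the actual interfaces of ATAT or any particular DFT code; the point is the division of labor into input generation, execution control, and post-processing.

```python
import pathlib
import subprocess

def write_input(workdir, symmetry, composition):
    """Generate a structure input file; file name and format are placeholders."""
    workdir = pathlib.Path(workdir)
    workdir.mkdir(parents=True, exist_ok=True)
    path = workdir / "str.in"
    path.write_text(f"symmetry {symmetry}\ncomposition {composition}\n")
    return path

def run_step(command, workdir):
    """Launch one stage of the toolchain, failing loudly if it errors."""
    subprocess.run(command, cwd=workdir, check=True)

def parse_energy(path):
    """Post-process: pull a total energy out of a placeholder output file."""
    for line in pathlib.Path(path).read_text().splitlines():
        if line.startswith("energy"):
            return float(line.split()[1])
    raise ValueError(f"no energy found in {path}")

# Pipeline sketch; "dft_relax" and "fit_springs" are stand-ins for the real
# DFT and ATAT executables, whose names and flags are installation-specific:
#   write_input("run01", "fcc", "A3B")
#   run_step(["dft_relax", "str.in"], "run01")
#   run_step(["fit_springs", "str.in"], "run01")
#   energy = parse_energy("run01/energy.out")
```

In practice `run_step` would instead submit to the cluster's job scheduler and poll for completion, which is exactly the steering role described above.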
SURVEY OF OPEN SOURCE
MATERIALS AND PROCESS
MODELING TOOLS
For process modeling within the emerging
paradigm of integrated computational
materials engineering (ICME), there exists a very rich and diverse set
of open source tools for materials simulation.
However, the authors do not
make any explicit endorsement of any
particular computational tool.
Integrated Computational
Materials Engineering
The relatively new field of ICME17 constitutes
a new paradigm for the efficient
design of materials and entails a comprehensive
integration of information
for all relevant material phenomena,
from the atomic through the meso- to
the macro-scale. This approach enables
“the concurrent analysis of manufacturing,
design and materials within a
holistic system.”17 Such an integrated
paradigm can potentially accelerate the
development of novel materials, mainly
through reducing the time and effort
involved in the design→synthesis→evaluation cycle, which is achieved by integrating
materials process models and
property simulations into the design
process, as shown in Figure 6.
At the most fundamental level, ICME
rests on the processing-structure-properties
(PSP) relationships that have
been the cornerstone of materials engineering
for decades. This paradigm has
been refined in the past decade, beginning
with the pioneering article by G.B.
Olson.18 In that work, Olson proposed a
systems approach that integrated PSP
relations in the conceptual design of
materials and in which a multi-level hierarchy
of necessary computational
tools was established based on the information
necessary to make design decisions
at all relevant length and time
scales. Unfortunately, the ICME paradigm
has been limited by the lack of an
integrated computational materials
toolkit capable of addressing the complex,
multi-scale phenomena relevant
to materials engineering. J. Allison et
al. claim that, apart from the inherent
complexity of materials phenomena,
the main roadblock to an effective
ICME results from the focus on understanding
isolated phenomena without
paying much attention to linkages between
the diverse knowledge base.17
Liu et al.19 have recently proposed the
linking of multiple length and time
scales in materials phenomena through
materials informatics.
Recent proof-of-concept studies20,21
illustrate the usefulness of ICME to reduce
the number of iterations in the materials
development process, reducing
in turn the cost and time associated
with finding optimal engineering solutions.
Both approaches accomplish the
integration of knowledge originating in
computational materials tools aimed at
multiple scales, ranging from atomistic
to the continuum. Widespread implementation
of ICME to many more materials
design problems has become
possible thanks to the recent emergence
of a collection of powerful open source
computational materials software.
Density Functional Theory and
Alloy Theory
In order to understand/manipulate
the macroscopic materials properties, it
is first necessary to rely on an accurate
description of the relationships between
such properties and the materials’ electronic
structure.22 Determination of the
latter requires, in principle, the solution
of the many-body Schroedinger equation,
an insurmountable task which has
been made tractable thanks to the development
of approximate theories.
Among them, DFT23 has become the
most widely used within the materials
science community.22,24 In just a few
years, a very powerful, open-source
DFT code, ABINIT,25 has become one
of the dominant computational tools to
investigate the electronic structure of
both molecules and periodic crystalline
solids. Thanks to first-principles DFT
calculations, a wide-range of physical
properties can be calculated.20,22
Through DFT, one can investigate
the properties of solids at the ground
state. However, one needs to investigate
the effects of thermal excitations
on the degrees of freedom available to
a given physical system if one is to understand
the phase stability of materials
at finite temperatures. Thermally excited
effects, such as phonons, electron
excitations in systems with a finite electronic density of states
at the Fermi level, and configurational
DOF in crystals with compositional
disorder can have significant effects on
the thermodynamics of complex materials
systems. Calculations of excitations
of these DOF are complicated for
everyone other than experts. Very recently,
this process has become much
simpler thanks to the ATAT code.15 The
ATAT code incorporates lattice dynamics13
to take into account thermally excited
phonon DOF. The effects of configurational DOF are taken into account
through the cluster expansion formalism.26 Lattice Monte-Carlo simulations
can in turn be used to investigate finite
temperature phase stability. Through
this tool, it is possible to calculate,
within accuracy limits set by the precision
of DFT itself, the phase diagrams
of binary systems of technical importance
and, by extension, their thermodynamic
properties.19,27
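As a sketch of what a lattice Monte-Carlo step looks like, consider the bare-bones 2-D Ising Metropolis loop below. This is far simpler than the cluster-expansion Hamiltonians ATAT actually samples, but the accept/reject logic and the emergence of configurational order below a critical temperature are the same in spirit.

```python
import numpy as np

# Minimal 2-D Ising Metropolis sampling (illustration only, not ATAT):
# nearest-neighbor coupling J = 1, units with k_B = 1.
rng = np.random.default_rng(0)
L, T, flips = 16, 1.5, 20000
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(flips):
    i, j = rng.integers(L, size=2)
    # Energy change of flipping spin (i, j), with periodic neighbors.
    nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2 * spins[i, j] * nn
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] *= -1

m = abs(spins.mean())  # order parameter; grows below T_c ~ 2.27
```

Replacing the Ising energy with a cluster-expansion energy fitted to DFT data turns this toy into the finite-temperature phase stability calculation described above.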
Thermodynamics
Despite their promise, first-principles
methods are limited by the inherent
inaccuracy of the approximations
necessary within DFT. At the industrial
level, for example, it is necessary
to establish the phase stability of
systems within a few degrees, which
would require unrealistic accuracy levels
in electronic structure calculations.
Moreover, first-principles methods are
incapable of treating multi-component
systems relevant to practical applications.
To bridge the gap28 between first-principles
and practical applications,
the so-called CALPHAD16 approach
can be used. The CALPHAD approach
consists of the description of the Gibbs
energy of phases in a system through
simple phenomenological models. The
parameters of such models can then
be determined through experimental
data as well as through first-principles
calculations. Although at the moment
there are no open source computational
thermodynamics tools, complete models
for technologically important systems
are available to the general public.
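To illustrate what such phenomenological models look like, the sketch below evaluates the molar Gibbs energy of mixing of a binary solution phase as an ideal configurational-entropy term plus a Redlich–Kister excess polynomial; the interaction parameters are invented for illustration, not fitted values for any real system.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def gibbs_mix(x, T, L=(-10000.0, 2000.0)):
    """Molar Gibbs energy of mixing (J/mol) of a binary solution phase:
    ideal entropy of mixing plus a Redlich-Kister excess polynomial.
    The interaction parameters L are illustrative, not assessed values."""
    x = np.asarray(x, dtype=float)
    ideal = R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))
    excess = x * (1 - x) * sum(Lk * (1 - 2 * x) ** k for k, Lk in enumerate(L))
    return ideal + excess

# Evaluate across the composition range at 1000 K.
x = np.linspace(0.01, 0.99, 99)
g = gibbs_mix(x, 1000.0)
```

In a CALPHAD assessment, parameters such as these L coefficients are what get fitted to experimental data and, increasingly, to first-principles results.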
Microstructure Evolution
In order to make sensible predictions
about the evolution of a material system
as it approaches equilibrium, it is
first necessary to have an accurate description
of its phase stability. Once
this is available, one can use microstructural
modeling approaches to link
local contributions to the free energy
to microstructure-dependent contributions.
A very successful method for
modeling microstructural evolution is
the phase-field approach.29 Despite its
success, the phase-field method has not been
incorporated into large-scale ICME efforts,
in part due to the lack of easy-to-use, open source phase-field modeling
approaches. Very recently, this
has been alleviated through the FiPy
code,11 a Finite–Volume solver for partial
differential equations, written in
Python10 and based on an object-oriented
programming model. Thanks to
this impressive tool, it has now become
possible to focus on the development
of sophisticated phase field models to
investigate complex materials phenomena
without focusing too much on the
actual implementation of the numerical
simulations. This tool can thus be used
to develop higher-level simulations, in
the spirit of ICME.
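To show the kind of model FiPy targets without depending on FiPy itself, the following dependency-free NumPy sketch integrates a one-dimensional Allen–Cahn phase-field equation with an explicit scheme (FiPy would express the same equation declaratively and solve it implicitly on a finite-volume mesh):

```python
import numpy as np

# Explicit 1-D Allen-Cahn sketch: dphi/dt = M * (kappa * phi_xx - df/dphi),
# with a double-well bulk free energy f = W * phi^2 * (1 - phi)^2.
N, dx, dt = 200, 1.0, 0.1
M, kappa, W = 1.0, 2.0, 1.0

x = np.arange(N) * dx
phi = 0.5 * (1 - np.tanh((x - N * dx / 2) / 2))  # diffuse interface mid-domain

for _ in range(500):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2  # periodic
    dfdphi = 2 * W * phi * (1 - phi) * (1 - 2 * phi)
    phi = phi + dt * M * (kappa * lap - dfdphi)

# phi relaxes toward equilibrium tanh profiles separating phi = 1 and phi = 0.
```

Tools like FiPy matter precisely because hand-rolled explicit loops like this become impractical for coupled, multi-dimensional phase-field models.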
Macroscopic Property Prediction
After microstructures have been simulated
either through the open source
FiPy or other open- or closed-source computational
tools, one would like to examine the response
of such microstructures to external
stimuli. A relatively new approach
to model the response of microstructure
is the OOF code.12 This code combines
graphical microstructure data (real or
virtual) with material properties (scalar
or tensorial) databases for each of
the constituent phases of the microstructure
to model the behavior of the
material under external boundary conditions.
Although the code is currently
limited to two-dimensional geometries
and simple material responses, it is to
be expected that further development
will allow materials researchers to
make use of OOF and other codes similar
to it as computational microstructural
engineering tools.
Crystal plasticity has also emerged
as a means of calculating macroscopic
mechanical properties by simulating
the motion of individual dislocations
through a polycrystalline domain with
many precipitates. Two proprietary
codes (MSC.Marc200x and ABAQUS/
Standard) implement this algorithm,
and are very successful at simulating
single crystal deformation, but have
problems with even bicrystals depending
on the orientation of the grains.30
The authors do not know of open source
software in this domain.
Recently, Liu and others19 have proposed
such an integrated approach toward the multi-scale modeling and design
of materials. Figure 7 presents a
schematic illustrating the implementation.
The schematic mainly focuses on
the integrated computational modeling
of multicomponent, heterogeneous materials
through the integration of several
existing or in-house computational
tools. As mentioned, many of the tools
in principle necessary to perform such
an integrated computational materials
design are already available as open
source code. Omitted from the discussion
above is the fact that open source-based
ICME depends strongly on the
availability of open material property
databases, or, in absence of this, a way
to naturally interface open source code
with commercially available databases.
A discussion of open vs. proprietary
databases would be very worthwhile,
but is beyond the scope of this article.
Macroscopic Phenomena
There are many open source tools
for solving the partial differential equations
involved in modeling macroscopic
phenomena such as mechanical deformation
and transport phenomena.
Numerous available codes for fluid dynamics,
heat transfer, and mass transfer
grew out of university and government
research projects. Of those codes, a few
have emerged as leaders. With few exceptions,
the tools listed below come
with the CAELinux live DVD Linux
distribution.31
In mechanics and heat transfer,
Code_Aster, CalculiX, and Impact are
leading open source tools. Code_Aster33
is a large finite element code which
Electricité de France (EDF) has written
to solve complex problems in nuclear
power. It comes with tools for adaptive
remeshing, and is ISO 9001 certified. CalculiX32 is an implicit code for
quickly calculating mechanical or thermal
steady state solutions, or performing
buckling calculations, but without
adaptive remeshing. And Impact34 is an
explicit mechanics code written in Java
whose eventual goal is to simulate automobile
collision dynamics. Each has
a different input file format, and Code_
Aster and Impact have user friendly
pre- and post-processing graphical interfaces.
Code_Aster in particular can
be controlled by the Salomé graphical
simulation environment, which interfaces
with numerous meshing and visualization
libraries, and will likely be
able to control more finite element software
in the future.
For computational fluid dynamics
(CFD) and coupled heat and mass
transfer, leaders include OpenFOAM,
libMesh, and Code_Saturne. OpenFOAM35 comes from the OpenCFD
Ltd. consulting company, and includes
an extensive set of solvers, utilities for
pre-and post-processing, and physical
model toolbox libraries. It runs in parallel
with an efficient iterative implicit
time-stepping scheme, and includes
several turbulence models. LibMesh36
is a parallel finite element library with
implicit timestepping, adaptive remeshing,
and dynamic repartitioning across
a cluster. Like FiPy, it can solve fourth-order
biharmonic equations needed for
Cahn–Hilliard phase field simulations.
Unfortunately, it has no user interface
at all; to generate a new simulation, one
must write a short C++ program which
calls the library’s functions. Code_
Saturne37 is the EDF fluid dynamics
counterpart to Code_Aster, and features
magneto-hydrodynamics, incompressible
or compressible flows, multi-phase
flows (arbitrary Lagrange–Euler mesh
deformation), and turbulence models,
along with advanced heat transfer capabilities
such as radiation and combustion.
There are far too many PDE solver
codes for solving continuum problems
to list here. Those listed above have
comprehensive features, well-developed
front ends (except for libMesh),
and most importantly, commitment to
long-term maintenance.
BUSINESS MODELS AND
FUTURE TRENDS
At this point, open source software
might sound too good to be true, one of
those passing fads which should have
been swept away by the dot-com bust.
Why would one distribute source code,
the precious “DNA” of a computer program
which Microsoft and others work
very hard to protect?
One reason for forgoing licensing revenue
is that such profits are sometimes
legally unavailable; another is that other
motivations outweigh financial reward. Software
produced by U.S. government agencies
which is not classified cannot be copyrighted, and is thus in the public
domain. For academics, the goal of impact
or recognition can be at least as
important as financial reward. This has
always been true of publishing scientific results, and more recently, having
one’s operating system research project
accepted into the Linux kernel has conferred
significant credibility to its author.
This type of motivation resulted in
the authoring of the Berkeley System
Distribution implementation of Unix,
the X Window System at Massachusetts
Institute of Technology, and numerous
other open source projects.
Others find a project on the Internet,
and are motivated to contribute to it in
order to improve its suitability to their
needs. Apache began as the University
of Illinois National Center for Supercomputing
Applications web server
research project, and system administrators
contributed patches extending
its functionality (hence “a patchy web
server”). Somewhat later, IBM “discovered”
Apache and decided to make
it the foundation of all of its web-services-related software, while putting
significant resources into its development.
This motivation is known as
“scratching an itch:” a developer has
a need for which there is no code, or
a feature not present in a given code.
Living with this problem “itches” until
the developer “scratches” it by writing
a new program or adding the feature to
an existing one.
But today, most open source software
is not created by academics or
hobbyists, but by for-profit companies.
What follows is a brief synopsis of other
motivations and business models for
authoring and extending open source
software as given by Eric Raymond’s
essay “The Magic Cauldron.”38
- Loss-leader/market positioner:
use open source software to create
or maintain a market position for
proprietary software (e.g., an open
source client creates a market for a
proprietary server).
- Widget frosting: publish open
source drivers for proprietary hardware,
both for peer review benefits
and also to allow operating system
vendors/maintainers to adapt the
driver to future changes in system
interfaces.
- Consulting, also known as “give
away the recipe, open a restaurant:”
use expertise in an open
source product to drive revenue
for packaging and/or consulting
services (e.g., OpenFOAM mentioned
previously).
- Accessorizing: sell books or other
accessories to open source products
(e.g., O’Reilly publishers).
- Free the future, sell the present:
sell a proprietary product with a license
that guarantees open source
release after a certain time, in order
to guarantee future maintainability
to prospective customers (e.g., Alladdin
GhostScript).
- Free the software, sell the brand:
charge for the branded, trademarked,
tested, and certified version
of an open source product
(e.g., RedHat).
Raymond’s essay also pointed out
that in most cases, open source software
does not have as much revenue
available to fund its development as
is provided by license fees of proprietary
software. For this reason, proprietary
software often leads the development
of end-user software, while open
source provides low-end users with an
inexpensive alternative, and also provides
an open and flexible platform for
those who need custom modifications.
Proprietary products must therefore
keep innovating to push the frontier
forward, as their open source competitors
catch up behind them. In some areas,
open source leads proprietary products.
These have tended to either be standardized
commodity infrastructure tasks
where multiple stakeholders drive feature
addition and architecture updates
(e.g., Linux, Apache, Firefox), or new
application fields where open source
establishes an early lead (e.g., ABINIT,
ATAT). These trends are shown schematically
in Figure 8.
In the past, research codes have
formed the basis for proprietary products.
For example, John Hallquist wrote
the DYNA3D finite-element analysis
code for simulating deformation of
shell structures while working at Lawrence
Livermore National Laboratory
(LLNL) and released it into the public
domain in 1978. In 1989, Hallquist left
LLNL to form the Livermore Software
Technology Corporation (LSTC),
which has released and supported new
versions of LS-DYNA since then.
Moving forward, this is less likely
to happen because newer codes such
as ABINIT mentioned above tend to
use copyleft licenses which prohibit
proprietary derivatives. In fact, in the
commercial world, many potential
contributors refuse to submit patches to
non-copyleft open source software, because
competitors can incorporate their
contributions into proprietary products.
For this reason, open source software
which opens new fields, such as the
codes briefly introduced in this article,
are likely to remain at the forefront of
technology, as re-implementing them
would present a substantial task to a
prospective proprietary competitor.
CONCLUSION
Open source software plays a large
and growing role in research and engineering
for materials processing. Many new tools have recently reached a level
of feature completeness and usability
that makes them suitable for broader
industrial use. Moving forward, the
role of open source is likely to continue
to expand, as yesterday’s proprietary
features enter tomorrow’s open codes,
and as new fields open up with open
source software in a leading position.
On the other hand, the substantial lead
of proprietary software in many fields
will likely give it an edge in advanced
features for some time, particularly in
thermodynamics, crystal plasticity, and
macroscopic simulations.
ACKNOWLEDGEMENTS
Raymundo Arroyave would like to
thank Michael E. Williams for creating
some of the figures used in this article.
Adam Powell would like to thank Rachel
DeLucas and Uday Pal for their
work on simulating SOM electrolysis
of magnesium, and Francesco Poli
for pointing out some of the codes described
in this article.
REFERENCES
1. “Operating System Family Share for 11/2007 |
TOP500 Supercomputing Sites,” www.top500.org/stats/list/30/osfam.
2. M. Macedonia, “Linux in Hollywood: A Star is Born,”
Computer, 35 (2002), pp. 112–114.
3. Netcraft, http://news.netcraft.com.
4. “News.com,” www.news.com/8301-10784_3-9862803-7.html.
5. “The Open Source Definition | Open Source
Initiative,” http://opensource.org/docs/osd.
6. “Open Source Initiative,” www.opensource.org.
7. A. Krishnan, U.B. Pal, and X.G. Lu, “Solid Oxide
Membrane Process for Magnesium Production Directly
from Magnesium Oxide,” Metallurgical and Materials
Transactions B, 36 (2005), pp. 463–473.
8. A.C. Powell and Y. Lok, “Julian Boundary Element
Code,” http://matforge.org/powell/wiki/Julian.
9. R.A. DeLucas, A.C. Powell, and U.B. Pal, “Boundary
Element Modeling of Solid Oxide Membrane Process,”
TMS 2008 Annual Meeting Supplemental Proceedings
Volume 2: Materials Characterization, Computation
and Modeling (Warrendale, PA: TMS, 2008), pp.
301–306.
10. “Python Programming Language—Official Website,” www.python.org.
11. “FiPy,” www.ctcms.nist.gov/fipy.
12. S. Langer, E. Fuller, and W. Carter, “OOF: An
Image-based Finite-Element Analysis of Material
Microstructures,” Computing in Science & Engineering,
3 (2001), pp. 15–23.
13. A. van de Walle and G. Ceder, “The Effect of Lattice
Vibrations on Substitutional Alloy Thermodynamics,”
Reviews of Modern Physics, 74 (January 2002), p. 11.
14. Axel van de Walle, Gautam Ghosh, and Mark Asta,
“Ab initio Modeling of Alloy Phase Equilibria,” Applied
Computational Materials Modeling (2007), pp. 1–34;
http://dx.doi.org/10.1007/978-0-387-34565-9_1.
15. A. van de Walle, “Alloy Theoretic Automated Toolkit
(ATAT),” www.its.caltech.edu/~avdw/atat.
16. L. Kaufman, “Computational Thermodynamics
and Materials Design,” CALPHAD, 25 (2001), pp. 141–161.
17. John Allison, Dan Backman, and Leo Christodoulou,
“Integrated Computational Materials Engineering: A
New Paradigm for the Global Materials Profession,”
JOM, 58 (11) (2006), pp. 25–27.
18. G.B. Olson, “Computational Design of Hierarchically
Structured Materials,” Science, 277 (August 1997), pp.
1237–1242.
19. Zi-Kui Liu, Long-Qing Chen, and Krishna Rajan,
“Linking Length Scales via Materials Informatics,”
JOM, 58 (11) (2006), pp. 42–50.
20. Daniel G. Backman et al., “ICME at GE: Accelerating
the Insertion of New Materials and Processes,” in Ref.
17, pp. 36–41.
21. J. Allison et al., “Virtual Aluminum Castings: An
Industrial Application of ICME,” in Ref. 17, pp. 28–35.
22. J. Hafner, “Atomic-Scale Computational Materials
Science,” Acta Materialia, 48 (January 2000), pp.
71–92.
23. W. Kohn and L.J. Sham, “Quantum Density
Oscillations in an Inhomogeneous Electron Gas,”
Physical Review, 137 (March 1965), p. A1697.
24. J. Hafner, “Materials Simulations Using VASP—A
Quantum Perspective to Materials Science,” Computer
Physics Communications, 177 (July 2007), pp. 6–13.
25. X. Gonze et al., “First-Principles Computation of
Material Properties: The ABINIT Software Project,”
Computational Materials Science, 25 (November
2002), pp. 478–492.
26. J.M. Sanchez, “Cluster Expansions and the
Configurational Energy of Alloys,” Physical Review B,
48 (November 1993), p. 14013.
27. Zi-Kui Liu and Long-Qing Chen, “Integration of
First-Principles Calculations, Calphad Modeling, and
Phase-Field Simulations,” Applied Computational
Materials Modeling (2007), pp. 171–213; http://dx.doi.org/10.1007/978-0-387-34565-9_6.
28. P.E.A. Turchi et al., “Interface between Quantum-
Mechanical-Based Approaches, Experiments, and
CALPHAD Methodology,” CALPHAD, 31 (March
2007), pp. 4–27.
29. J.Z. Zhu et al., “Linking Phase-Field Model to
CALPHAD: Application to Precipitate Shape Evolution
in Ni-Base Alloys,” Scripta Materialia, 46 (March 2002),
pp. 401–406.
30. F. Roters, “The Texture Component Crystal
Plasticity Finite Element Method,” Continuum Scale
Simulation of Engineering Materials (New York:
Wiley, 2004), www3.interscience.wiley.com/cgi-bin/summary/110544716/SUMMARY.
31. “CAELinux,” www.caelinux.org/.
32. Electricite de France, “Code_Aster,” www.code-aster.org/.
33. G. Dhondt and K. Wittig, “CALCULIX: A Three-
Dimensional Structural Finite Element Program,”
www.calculix.de.
34. J. Forssell and Y. Mikhaylovski, “Impact Finite
Element Program,” http://impact.sourceforge.net/.
35. “OpenFOAM: The Open Source Computational
Fluid Dynamics (CFD) Toolbox,” www.opencfd.co.uk/openfoam/.
36. “libMesh—C++ Finite Element Library,” http://libmesh.sourceforge.net/.
37. Electricite de France, “Code_Saturne,” http://rd.edf.com/code_saturne.
38. E. Raymond, “The Magic Cauldron,” The Cathedral
and the Bazaar (Sebastopol, CA: O’Reilly, 1999),
http://catb.org/~esr/writings/magic-cauldron/.
Adam C. Powell IV is with Opennovation, 1170
Chestnut St., Newton, MA 02464-1309; and Raymundo
Arroyave is with Texas A&M University, 119
Engineering Physics Building, College Station, TX
77843-3123. Dr. Powell can be reached at apowell@opennovation.com.