Active maintenance of these web resources ceased in 2012 and they will not be updated. They are posted for archival purposes only.

Topic Title: Review of ICME talks given on 2/28/07 in TMS Annual Meeting, Orlando FL
Topic Summary: This post contains reviews of and comments on the talks given in the ICME session at the 2007 TMS Annual Meeting
Created On: 3/15/2007 5:40 PM

 3/15/2007 5:40 PM


Halim Meco

Posts: 1
Joined: 2/19/2007

Review of ICME talks given on 2/28/07 in TMS Annual Meeting, Orlando FL


Integrated Computational Materials Engineering - A New Paradigm for the Global Materials Profession: John Allison (Ford Motor Company)

Materials Science and Engineering as a research discipline has reached a level of maturity at which one can design new materials and improve the properties of existing ones in a fully virtual setting, without resorting to costly and time-consuming trial-and-error experimental investigations. This new capability enables engineers to fine-tune properties as they please, constrained only by the computational power and modeling tools at their disposal. Dr. Allison’s talk addressed the gaps in our modeling capabilities and how they can be overcome by sound, physics-based mappings onto the virtual design space. Accordingly, the ICME paradigm offers couplings between fields that were previously considered disparate: one can establish computation-based linkages between materials, manufacturing processes and component design in a robust and optimized manner through the use of CAE tools. Consequently, the cost of alloy development drops and valuable R&D time is freed for more important tasks. This requires an optimization process and consideration of all the processing and manufacturing steps necessary for making the part in question. A robust optimization process should take into account all the interrelations between individual materials processes and their impact on the final properties. This calls for an iterative design scheme in which the design software adjusts chemical compositions and process parameters to reach an optimum solution that satisfies an objective function under imposed constraints. Certainly, this kind of brute-force approach requires the development of thermodynamic, kinetic and property databases. However, integrating our total knowledge into modeling, and employing data-driven Materials Informatics techniques where gaps in our modeling knowledge exist, has the potential for greater insight. Another issue addressed in Dr. Allison’s talk is the necessity of bridging the gaps between the different length and time scales associated with specific materials-related phenomena. Specifically, Ford’s Virtual Aluminum Castings program was given as an example, in which the entire engine block of a V-6 gasoline engine was first designed and manufactured virtually and then produced in the lab as a prototype, saving Ford valuable time and manpower during the design stage. Here, commercial software such as ProCast and MagmaSoft was used to compute microstructure evolution during the casting process, and subsequent mechanical property prediction was performed with the Abaqus finite element analysis software. The computational infrastructure captures all of this knowledge for the materials scientist, and as a result the selection of materials and process parameters has been optimized for the desired mechanical response of the part. As an example, ICME routines helped optimize the heat treatment process, improving the yield stress from 210 to 220 MPa at a critical location in the engine block. Another example concerned process selection for casting of the engine block, where conventional wisdom suggests low-pressure casting over gravity casting; however, simulation results revealed that gravity casting gives superior yield strength, a counter-intuitive result at first sight.
These two examples, and their experimental verification, alone demonstrate that using ICME tools and implementing them as an integral part of the alloy development process improves the timing, quality and performance of parts and reduces costs. We have to keep in mind that the enablers of ICME are a well-developed knowledge base, increased computing power, commercial software, and the proven track record of various ICME tools. On the other hand, there is still room for improvement that could give us further insight and greater ease in designing new alloys. The improvements require advances in the following areas: (i) global infrastructure, (ii) bridging the gaps in theory and mathematical models, (iii) database creation and management, (iv) computational efficiency, (v) experimental validation, and (vi) a comprehensive strategy to create synergy between various research groups. The following talks address some of these issues.
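To make the iterative design scheme above concrete, here is a minimal sketch (my own illustration, not the Ford/VAC toolchain) of a constrained optimization loop in which composition and process variables are adjusted to satisfy a property constraint at minimum cost. The variable choices, the property and cost models, and every number are hypothetical stand-ins for the calibrated process/property models an ICME system would supply.

import numpy as np
from scipy.optimize import minimize

# x = [wt% Si, wt% Cu, aging time (h)] -- hypothetical design variables
def predicted_yield_stress(x):
    # stand-in for a calibrated casting/heat-treatment property model;
    # NOT a real A356 model
    si, cu, t_age = x
    return 180.0 + 8.0 * si + 25.0 * cu + 15.0 * np.log1p(t_age)

def process_cost(x):
    # stand-in cost model: Cu additions and long aging times are expensive
    si, cu, t_age = x
    return 1.0 * si + 6.0 * cu + 0.5 * t_age

# objective: minimize cost, subject to a minimum yield stress of 220 MPa
cons = [{"type": "ineq", "fun": lambda x: predicted_yield_stress(x) - 220.0}]
bounds = [(6.5, 7.5), (0.0, 0.5), (1.0, 12.0)]   # composition/process windows

result = minimize(process_cost, x0=[7.0, 0.2, 4.0], bounds=bounds, constraints=cons)
print("optimal [Si, Cu, aging h]:", result.x)
print("predicted yield stress (MPa):", predicted_yield_stress(result.x))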


ICME at GE: Accelerating the Insertion of New Materials and Processes: Deborah Whitis, Daniel Wei, Matthew Buczek (General Electric Company); Peter Finnigan, Dongming Gao (General Electric Global Research Center); Daniel Backman (Worcester Polytechnic Institute)

Increasing jet fuel prices and the competitive nature of the airline business have put enormous pressure on aircraft manufacturers to produce more fuel-efficient jetliners. This translates into designs with reduced weight and high-temperature materials, as the operating temperature of a jet engine keeps rising in the effort to burn fuel more efficiently while generating more thrust. The required increases in fuel efficiency, speed and mission complexity have made the development of new materials with improved capabilities necessary. Here the bottleneck is materials design rather than part design, which typically takes the shorter time, on the order of hours to weeks. Materials design, on the other hand, takes much longer, and small changes in alloy chemistry and process parameters are very costly and add more time to the design process. However, the materials design process can be organized so that an optimum solution can be found for the structure-property balance. The iterations are done on the computer, and if the model rests on sound foundations, the final alloy design can be validated experimentally. This paradigm shift results in an accelerated development process, which in turn helps reduce costs in all areas of research. Collaborative efforts between universities, manufacturers, software developers, government agencies and GE Global Research are essential for the ICME approach to work toward its full potential. Some of the software packages used for this purpose are JMatPro, DEFORM, Pandat, ThermoCalc, Pattern Master, ISIGHT and FIPER. The development of two parts was given as an example of the AIM (Accelerated Insertion of Materials) approach. The first is microstructure and property optimization of Ni-superalloy disks, where the desired microstructure is predicted from process models such as flow stress response during forging, quench paths, and cooling and residual stress maps. A linkage between structure models and property models helps us understand the tensile, fatigue and creep properties of a predicted microstructure. The second example is Ni-superalloy blades, where process models of solidification, casting and heat treatment help predict the microstructure, namely the gamma-prime distribution. Finally, based on the predicted microstructure, the fracture, creep, fatigue and environmental resistance properties of the part are predicted. The four main steps that need to be followed for this approach to be effective are (i) developing composition libraries, (ii) characterizing property libraries, (iii) generating physics-based predictive models, and (iv) validating the approach experimentally. Another important method employed to map inputs (processing and microstructure) to final properties is neural network modeling. It gives insight into how everything connects within the Designer Knowledge Base (DKB), provides a platform-independent solution, and enables robust and seamless integration. The DKB also provides a unified tool for generating minimum mechanical property curves based on property test data and physics-based models. These tools help the designer quickly evaluate the impact of material changes on the design and life of components.
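As a toy illustration of the neural-network mapping from processing/microstructure inputs to a property (not GE's DKB or its data; the features, synthetic data and model size below are assumptions of mine):

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# synthetic "library": inputs = [cooling rate, gamma-prime fraction, grain size]
X = rng.uniform([1.0, 0.30, 5.0], [100.0, 0.55, 50.0], size=(500, 3))
# made-up property response with noise, standing in for measured test data
y = 1e3 * X[:, 1] / np.sqrt(X[:, 2]) + 2.0 * np.log(X[:, 0]) + rng.normal(0, 5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))

The point is simply that once such a surrogate is trained on composition/process/property libraries, a designer can query it in seconds instead of re-running (or re-testing) the full chain.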


Overview of the Center for Computational Materials Design (CCMD): Zi-Kui Liu (Pennsylvania State University); David McDowell (Georgia Institute of Technology)

The main aim of all ICME approaches is to accelerate the integration of computational prediction models into materials design, validating the computational techniques through experiments where applicable. Dr. Liu’s presentation gave a roadmap for such an effort in MatCASE (Materials Computation and Simulation Environment), developed at Penn State, and RCEM (Robust Concept Exploration Method), developed at Georgia Tech. These two methods can be considered online laboratories used in tandem for materials design. The CCMD provides the framework for a prototype grid-enabled software package for materials design that predicts the relationships among the chemical, microstructural and mechanical properties of multicomponent materials, using technologically important aluminum-based alloys as model materials. With this software package it is possible to develop more efficient routes for creating thermodynamic and kinetic databases that allow information exchange between the various materials design stages using new algorithms and parallel computing schemes. The end result is improved predictive power in multicomponent materials design, which enables materials scientists to develop new materials as well as tailor existing materials for better performance. MatCASE consists of four main computational steps scaling from the atomistic to the macroscopic level. These are, in increasing order of length scale: (1) first-principles calculations, (2) the Calphad approach for phase diagram calculations (bulk thermodynamic properties), (3) phase-field simulations to generate microstructures, and (4) finite element analysis to obtain the mechanical response using OOF, an object-oriented finite-element code. The goal of MatCASE is to couple these four steps into a fully automated on-line laboratory.
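Schematically, the coupling of the four steps looks like the chain below; every function is a placeholder of mine for the real tool (a DFT code, a Calphad engine, a phase-field solver, OOF), not the actual MatCASE interfaces.

# Schematic four-stage chain; all returned values are hypothetical.
def first_principles(composition):
    # e.g., 0 K formation energies and lattice parameters from DFT
    return {"H_f_eV": -0.12, "a0_angstrom": 4.05}

def calphad(composition, fp_data):
    # e.g., stable phases and solvus temperature from thermodynamic databases
    return {"phases": ["fcc", "theta"], "T_solvus_K": 790.0}

def phase_field(thermo, anneal_time_s):
    # e.g., simulated precipitate microstructure
    return {"mean_radius_nm": 25.0, "volume_fraction": 0.02}

def fem_response(microstructure):
    # e.g., homogenized yield stress from an OOF-style mesh of the microstructure
    return 230.0 + 1.0e3 * microstructure["volume_fraction"]

def matcase_like_pipeline(composition, anneal_time_s):
    fp = first_principles(composition)
    thermo = calphad(composition, fp)
    micro = phase_field(thermo, anneal_time_s)
    return fem_response(micro)

print(matcase_like_pipeline({"Al": 0.96, "Cu": 0.04}, anneal_time_s=3600.0))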


CyberDesign Optimization of Structural Automotive Components Employing Multiscale Modeling: Mark Horstemeyer (Center for Advanced Vehicular Systems)

Cradle-to-grave modeling of a material’s history through its manufacturing process and in-service life was discussed in the context of multiscale modeling. To predict final ductile fracture in a structural finite element analysis, an analyst needs to consider the microstructure-property relationship and capture material history effects in the constitutive relations used in the simulations. An effective way to capture the microstructure-property relationship is through internal state variable evolution equations, which are formulated at the macroscale. The internal state variables reflect microstructural rearrangements at lower spatial size scales, so that history effects can be modeled. To form the appropriate internal state variables for analyzing strain-rate- and temperature-dependent plasticity and damage progression in ductile metals, a multiscale hierarchy of numerical simulations coupled with experiments, presented in ascending spatial size scale, can be used to determine the functional forms of the macroscale plasticity and damage progression equations. An example of the multiscale analysis of structural components of an automobile was presented. One main material is the cast A356 aluminum alloy used for chassis components. Cast A356 aluminum has microstructural features at different spatial size scales that cause damage, and the synergistic cooperation of these length scales leads to final component failure. A methodology was presented in which a cascade of finite element analyses at different spatial size scales, coupled with experiments at each scale, is performed to determine the final failure of a structural component. Another feature of the multiscale modeling is that the stochastic nature of the failure mechanisms is also embedded in the models. The multiscale methodology was used to optimize a control arm, reducing its weight by 25%, increasing its strength by 50%, and reducing its cost by $2 per part. The multiscale methodology also applies to experimental validation, where experiments are designed to observe phenomena occurring at different length scales. Employing a multiscale approach to materials design problems translates into reduced design time as well as an optimized product for a given application.
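A minimal illustration of the internal-state-variable idea (not the actual MSU/CAVS model; the evolution law and every constant below are made up) is an explicit strain-driven loop in which a damage-like variable evolves and degrades the flow stress:

# Illustrative macroscale loop: impose plastic strain, evolve a generic
# damage-like internal state variable phi, and degrade the flow stress.
sigma_y0, H = 200.0, 500.0                # MPa: initial yield stress and hardening modulus
c_damage, phi = 5.0, 1.0e-4               # damage growth constant, initial damage

d_eps, n_steps = 1.0e-4, 2000
eps_p = 0.0
for _ in range(n_steps):
    eps_p += d_eps                                  # plastic strain increment
    phi += c_damage * phi * d_eps                   # ISV evolution: dphi/deps = c*phi
    sigma = (1.0 - phi) * (sigma_y0 + H * eps_p)    # damage-degraded flow stress
print(f"eps_p = {eps_p:.3f}, damage = {phi:.5f}, flow stress = {sigma:.1f} MPa")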


Cyberinfrastructure for Multiscale Simulations and Design Optimizations: Tomasz Haupt (Mississippi State University)

Multiscale simulations for optimized materials design require not only sound physics-based models that couple the various length scales but also a cyberinfrastructure on which the simulation codes run. The interdisciplinary areas pertinent to the various models can be integrated through a cyberinfrastructure, greatly reducing the manual effort involved in the computations. One such approach combines a Service Oriented Architecture (SOA) with grid computing, environments that enable engineers and scientists to design and optimize a material with the help of automated job distribution among processors. In a truly distributed computational system, all kinds of applications can be integrated to link the various length scales in the models, and the engineer does not necessarily need to understand the details of the computational infrastructure. A cyberinfrastructure is a valuable tool in our tool-box that helps us carve our way through the design process: macro, meso, micro and quantum length scales can be linked by employing the relevant models within its framework. Mundane tasks such as resource management, where a GUI helps schedule job submission from user space by reserving a block of processors, are workflow issues handled by an autonomic computing system: a cyberinfrastructure capable of self-configuration, self-optimization, self-healing and self-protection, supporting adaptive parallel applications and distributed simulation environments. Here, the distributed, service-based SOA delegates computation to remote high-performance clusters; a resource manager handles scheduling of the adaptive applications, and a business manager controls the workflow engine in which, for instance, an Abaqus run checks a yield criterion and, depending on the result, triggers a mesoscale failure analysis. The whole process proceeds iteratively, controlled by the resource and business manager systems embedded in the cyberinfrastructure, and the final result is a converged, optimized design.
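The "check a criterion, then launch the next-scale job" pattern can be pictured with a toy workflow loop like the one below; the submit_* functions are placeholders of mine, not real SOA/grid service calls.

# Toy workflow in the spirit of the talk: a "business manager" decides what to
# run next; in a real system a resource manager would submit each job to a
# remote cluster instead of calling a local function.
def submit_macroscale_fea(design):
    # pretend the remote Abaqus-like job returns a peak stress (MPa)
    return 260.0 - 40.0 * design["rib_thickness_mm"]

def submit_mesoscale_failure(design):
    # pretend the mesoscale analysis returns a damage index
    return 1.2 - 0.5 * design["rib_thickness_mm"]

design, yield_stress = {"rib_thickness_mm": 0.5}, 220.0
for iteration in range(10):
    peak_stress = submit_macroscale_fea(design)
    if peak_stress < yield_stress:                 # yield criterion satisfied
        print("converged design:", design)
        break
    damage = submit_mesoscale_failure(design)      # run only when yielding is predicted
    design["rib_thickness_mm"] += 0.25             # business rule: stiffen and retry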


Reliability-Based Design Optimization of Automotive Structures for Improved Crash Performance: Masoud Rais-Rohani (Mississippi State University)

Ever increasing gas prices, together with increasing safety and crash performance requirements, impose stringent design criteria for weight reduction and optimized materials in automobiles. Since uncertainty enters every aspect of materials design, from the geometry of the part to the material properties, loading and other operating conditions associated with the use of the component, one must include probabilistic design concepts when modeling crash simulations in order to reliably analyze and optimize the geometry of the component in question. This can be achieved by introducing into the materials design process an objective function that contains both deterministic and probabilistic attributes. The objective function is then subjected to an appropriate constraint formulation, which is itself probabilistic. In this study, the geometry of a car’s frontal I-beams is optimized, constrained by the design requirements that the weight of the component be reduced while its strength and crash properties are increased, through the application of Reliability-Based Design Optimization (RBDO). Here, the simulation program minimizes the chosen objective function subject to the appropriate constraints. The iterative improvements to the component geometry correspond to the simultaneous optimization of the weight and shape (i.e., I-beam wall thickness) of the component in question. The optimization is carried out by computing the displacement and acceleration of the finite-element nodes for the first 100 microseconds of the crash, which allows calculation of the time-dependent stress distribution and prediction of the resulting deformation. Such an effort means that more than 100 points in the design parameter space must be considered. Furthermore, the choice of objective function proves to be important, in that different objective functions yield different results. The conclusion reached at the end of the study is that there is no single solution that optimizes both size and shape for acceleration.
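In miniature, the reliability-based sizing idea looks like this: pick the lightest design whose Monte-Carlo-estimated probability of failure stays below a target. The load surrogate, the strength scatter and the 1e-3 target below are all invented for illustration and are not the numbers or method details from the talk.

import numpy as np

rng = np.random.default_rng(1)

def peak_stress(t_mm):
    return 900.0 / t_mm                               # made-up crash-load surrogate (MPa)

def prob_of_failure(t_mm, n=50000):
    strength = rng.lognormal(np.log(350.0), 0.08, n)  # uncertain yield strength (MPa)
    return np.mean(peak_stress(t_mm) > strength)

def mass_proxy(t_mm):
    return 7.8e-3 * t_mm * 1.0e3                      # proportional to wall thickness

# lightest I-beam wall thickness with P(failure) <= 1e-3
candidates = np.linspace(1.5, 6.0, 46)
feasible = [t for t in candidates if prob_of_failure(t) <= 1e-3]
best = min(feasible, key=mass_proxy)
print(f"chosen wall thickness: {best:.2f} mm, relative mass: {mass_proxy(best):.2f}")

A full RBDO code would of course couple this kind of reliability check to gradient-based shape and size optimization of the crash finite-element model rather than a one-dimensional sweep.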


Computer Aided Heat Treatment Planning System for Quenching and Tempering: Lei Zhang, Yiming Rong (Worcester Polytechnic Institute)

Heat treatment in the US is a $15 billion/year industry, and there is huge demand for lowering the associated energy costs. To reduce costs, one must resort to computational tools to simulate the process and arrive at an optimized heat-treatment scheme for the work-piece in question. Dr. Zhang’s talk addressed the optimization of the heat-treatment process from the perspective of the ICME methodology. The mechanical properties of a heat-treated part depend on three main factors: (1) processes that occurred outside the heat-treatment furnace, i.e. the process history of the work-piece, (2) microstructure evolution during the heat-treatment process, and (3) distortion and stress development within the part as a result of quenching. A robust process optimization for heat treatment should address all three issues. First, the design of the heat-treatment furnace should be optimized to yield reduced heat losses, improved heat storage and a more uniform temperature distribution within the furnace. This requires heat transfer modeling that accounts for all three modes of heat exchange, namely conduction, convection and radiation, between the various components of the furnace such as the heating elements, insulation and work-pieces. Temperature profiles within the work-piece can then be computed, and appropriate heating, quenching and tempering models applied to optimize the heat-treatment process iteratively. The computed temperature profiles at each step of the heat treatment, i.e. annealing, tempering and quenching, can then be coupled with microstructure prediction models. Accordingly, TTT curves for specific materials can be generated, provided that a suitable database is available comprising the relevant heat transfer coefficients for the work-piece, quenchant and furnace, as well as the thermodynamic and kinetic parameters pertaining to the phase transformations that can take place within the material in question. In this study, with the help of such a database and a GUI, the design engineer can predict the cooling conditions prevailing within a work-piece and carry out the whole heat-treatment process in an optimized fashion, specifically tailored to the part in question.
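As a small illustration of the temperature-profile step (the heart of the coupling to TTT/microstructure models), here is a 1-D explicit finite-difference quench calculation; the material properties, heat transfer coefficient and geometry are illustrative values of mine, not entries from the talk’s database.

import numpy as np

# 1-D transient conduction in a slab quenched from one face; symmetry at mid-plane.
k, rho, cp = 40.0, 7800.0, 500.0            # W/m-K, kg/m^3, J/kg-K (illustrative steel)
alpha = k / (rho * cp)
h, T_quench, T0 = 2000.0, 60.0, 850.0       # W/m^2-K, quenchant temp, initial temp (C)

L, nx = 0.02, 51                            # 20 mm half-thickness
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                    # stable explicit time step
T = np.full(nx, T0)

for _ in range(int(10.0 / dt)):             # simulate 10 s of quench
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    T[0] = Tn[0] + 2 * alpha * dt / dx**2 * (
        (Tn[1] - Tn[0]) + h * dx / k * (T_quench - Tn[0]))   # convective surface node
    T[-1] = T[-2]                                            # insulated mid-plane
print(f"after 10 s: surface {T[0]:.0f} C, centre {T[-1]:.0f} C")

Cooling curves such as T[0](t) and T[-1](t) are exactly what gets overlaid on the material’s TTT/CCT curves to predict the resulting microstructure.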


Report On: NSF Workshop for “CyberInfrastructure to CyberDiscovery for Materials Science”: Krishna Rajan (Iowa State University)

Cyberdiscovery and cyberinfrastructure (CI) for condensed matter and materials science have become one of the main themes of research, enabling scientists to share and disseminate information in real time and resulting in great leaps in the discovery and development of new materials. Such an accelerated pace in materials discovery requires access to huge databases with previously defined and agreed-upon tags and fields. Materials informatics can then be implemented as a methodology to link various fields, resulting in the discovery of cross-correlations between them. Dr. Rajan’s presentation addressed various aspects of cyberinfrastructure as applied to materials research. One approach is to emulate, and further develop, the cyberinfrastructure methodologies prevalent in the life sciences communities, where the multi-length-scale approach to biological problems significantly resembles the cascade of problems in materials research. It is therefore imperative to identify the CI analogies in the life sciences rather than reinventing the wheel, and to apply those methodologies in materials research. This will not only help the discovery of new knowledge through CI, but will also give rise to improvements in CI methodologies outside materials research. Because materials theory remains at the cutting edge of CI exploitation, significant leaps in materials discovery can be achieved by easing the generation and transfer of massive amounts of data within the framework of a CI technology. Such an undertaking also makes it easier to find meaningful trends in huge amounts of data. Another advantage of CI is that it builds communities for science through web portals and shared software for materials research problems, where scientists can access, query and mine data remotely over the internet. This brings scientific communities together and significantly enhances the speed of theoretical and computational developments. Following the NSF Workshop for “CyberInfrastructure to CyberDiscovery for Materials Science”, four main materials research problems were identified where CI can yield immediate benefits: (1) materials by design, (2) nano-structured materials, (3) materials out of equilibrium, and (4) building research and learning communities. However, there are myriad requirements for CI to generate the expected research outcomes. Some of these needs are: software development for education and outreach, tools for remote collaboration, implementation of materials informatics methodologies, increased use of shared facilities, better algorithms, integration and interoperability of software and communities, bringing together experiment and theory, effective scientific data management and curation, scalable real-time data analysis, and access to advanced computing resources at different scales. One example of a CI framework in which most of these issues are addressed is the virtual lab created through collaboration between Iowa State University (USA), the University of Tokyo (Japan), and Uppsala University (Sweden), in which the parties share a common thermodynamics and crystallography database together with first-principles calculation software and data-mining tools that greatly enhance the speed of materials discovery. The future looks bright for materials research as state-of-the-art IT technologies enable communities to work more closely together and share knowledge.
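A trivial example of the informatics step, screening a combined composition/processing/property table for cross-correlations (the table, its columns and the hidden relationship are all synthetic, invented only to show the mechanics):

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 200

# hypothetical joined dataset of the kind a shared CI database could expose
df = pd.DataFrame({
    "cu_wt_pct":    rng.uniform(0.0, 4.0, n),
    "aging_temp_C": rng.uniform(150.0, 220.0, n),
    "cooling_rate": rng.uniform(1.0, 50.0, n),
})
# made-up property depending on two of the fields, plus noise
df["hardness_HV"] = (60 + 15 * df["cu_wt_pct"]
                     - 0.1 * df["aging_temp_C"] + rng.normal(0, 3, n))

# rank the fields by correlation with the target property
corr = df.corr()["hardness_HV"].drop("hardness_HV")
print(corr.sort_values(key=abs, ascending=False))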


Computational Fluid Dynamics in Automotive Applications: Hrvoje Jasak (Wikki Ltd)

Over the last 20 years, Computational Fluid Dynamics (CFD) has become a major tool in automotive design owing to significant advances in computational power, physical models and numerical techniques. CFD, in essence, models fluid flow, heat transfer and associated phenomena such as chemical reactions within the framework of a computational grid built from the geometry of the part in question. Areas of application of CFD include numerical stress analysis, modeling of electromagnetics in computer microchips, weather prediction and global ocean/atmosphere modeling, large-scale systems such as galactic dynamics and star formation, complex heat and mass transfer systems, and fluid-structure interaction and similar coupled systems, to name a few. CFD involves solving the fundamental equations governing mass, momentum and energy conservation with initial and boundary conditions imposed by the geometry and operating conditions of the system to be solved. In automotive applications, design issues such as the aerodynamics of the car and the design of the internal combustion chamber are handled under the assumption of incompressible, turbulent flow of a Newtonian fluid. Furthermore, due to the complex nature of automotive design, where there are many interdependencies among the design components, a multi-objective optimization scheme should be employed that enables collective optimization of the car design. One example is the organization of flow in the engine compartment, where the engine must be cooled sufficiently while the catalytic converter must be kept at temperatures high enough to work effectively. Another example is passenger compartment design, where the temperature gradient between the passenger’s head and feet should not exceed 1 degree Celsius and the velocity of the airflow into the cabin should not exceed 2 m/s. However, for certain applications CFD is not enough, because the required accuracy is sometimes beyond the current state of physical modeling, especially for turbulent flow. Moreover, simulation costs can be prohibitive, and the flow physics can be so complex that the modeling is incomplete or the detailed physical process is insufficiently understood. As the part geometry becomes very complex, mesh generation also becomes problematic. In such difficult cases one may resort to simplified geometries of reduced dimension (2-D or even 1-D) that still capture the physics of the problem at hand without additional assumptions. One of the major advantages of CFD is that it can complement experimental techniques in cases where measuring flow patterns in real time is nearly impossible; cylinder head and combustion chamber design is a case in point. In summary, the design of automotive components requires an optimization approach in which the current state of the art in computational resources and physical modeling is integrated into computer-aided design.
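For readers who have not seen a conservation-law solver, a bare-bones 1-D example of "discretize the governing equation, impose initial and boundary conditions, march in time" is given below; a production automotive CFD code solves the 3-D turbulent Navier-Stokes equations on unstructured meshes, so this is only a caricature of the procedure.

import numpy as np

# 1-D advection-diffusion of a passive scalar: du/dt + U du/dx = nu d2u/dx2
nx, length = 200, 1.0
dx = length / (nx - 1)
U, nu = 1.0, 0.01
dt = 0.4 * min(dx / U, dx**2 / (2 * nu))   # respect CFL and diffusion limits

u = np.zeros(nx)                           # initial condition: scalar is zero everywhere
u[0] = 1.0                                 # inlet boundary condition

for _ in range(2000):
    un = u.copy()
    adv = -U * (un[1:-1] - un[:-2]) / dx                    # first-order upwind
    dif = nu * (un[2:] - 2 * un[1:-1] + un[:-2]) / dx**2
    u[1:-1] = un[1:-1] + dt * (adv + dif)
    u[0] = 1.0                                              # inlet held fixed
    u[-1] = u[-2]                                           # zero-gradient outlet
print("scalar at mid-duct:", round(float(u[nx // 2]), 3))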


Multiscale Modeling: Weather vs. Materials: Andreas Muschinski, Robert Hyers (University of Massachusetts)

Weather prediction and materials design have common ground in that the processes governing the fundamental physics in both fields occur at multiple length scales. Accordingly, modeling efforts in these areas should link these length scales to yield a better picture of the behavior of the systems at the macro level. The thrust of Dr. Muschinski’s talk was to provide examples of the multiscale approach in weather prediction. Typical length scales in materials-related phenomena lie between 1 nm and 1 m, whereas weather-related phenomena range from about 1 mm to 1000 km. The challenge is to capture all of these together and solve them in a meaningful way; this philosophy can be summarized as the ICME paradigm. Dr. Muschinski’s talk aimed at drawing analogies between the weather and materials sciences from the vantage point of multiple length scales. Important phenomena and research topics in weather science include modeling of planetary waves, mesoscale weather features, atmospheric turbulence, frictional dissipation, and weather systems, all happening at different length and time scales. Research in the multiscale dynamics of the atmosphere includes low-pressure system dynamics, cold-air outbreaks, the motion of cloud bands, convection, and von Karman vortex streets, to name a few. Since changes in the weather occur at various elevations in the atmosphere and periodically during the day, modeling such changes requires models that work at different length and time scales. In theory, such changes can be predicted by applying the physical models of fluid dynamics and solving the Navier-Stokes equations; however, such modeling does not take into account the stochastic nature of weather patterns. Moreover, parameterization remains elusive: the parameters that affect weather patterns are enormous in number and difficult to pinpoint, so only a handful are used in deterministic weather prediction. These are the three wind vector components, pressure, temperature, air density, and water content, and seven equations, expressing mass, momentum, energy and water-mass conservation together with an equation of state, are solved simultaneously. Two of the most prominent figures in weather modeling at the turn of the 20th century were Richardson and Bjerknes, who believed that the weather could be predicted. In the pre-computer age, Richardson even envisioned a weather factory, a parallel computer of sorts with the CPUs replaced by human brains, to solve the aforementioned nonlinear differential equations. Various modeling approaches coupled with computational advances emerged over the course of the previous century and helped improve forecast skill significantly. (Forecast skill is simply a measure of the correlation between the actual and the predicted weather.) Richardson’s weather-factory dream has become a reality thanks to advances on three fronts: (1) parameterization (i.e. theory), (2) observational technologies such as satellites, and (3) computational power. Since weather-related phenomena span roughly 10 orders of magnitude in length scale (1 mm to 10,000 km), resembling the multiple length scales of materials-related phenomena, predictive weather modeling approaches can be emulated in materials science research. This will hopefully reveal new insights and pathways to predictive materials modeling as well as to the design and discovery of novel materials.
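For reference, the seven-equation set alluded to above is usually written schematically as follows (the three momentum components collected into one vector equation); this is the standard textbook form, not reproduced from the talk’s slides:

\[
\begin{aligned}
&\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0
  && \text{(mass)}\\
&\rho\,\frac{D\mathbf{u}}{Dt} = -\nabla p - \rho g\,\hat{\mathbf{z}}
  - 2\rho\,\boldsymbol{\Omega}\times\mathbf{u} + \mathbf{F}
  && \text{(momentum, 3 components)}\\
&\rho c_p\,\frac{DT}{Dt} = \frac{Dp}{Dt} + Q
  && \text{(energy)}\\
&\rho\,\frac{Dq}{Dt} = S_q
  && \text{(water mass)}\\
&p = \rho R T
  && \text{(equation of state)}
\end{aligned}
\]

Here \mathbf{u} is the wind vector, p pressure, T temperature, \rho air density and q water content (the seven prognostic variables), while \boldsymbol{\Omega}, \mathbf{F}, Q and S_q stand for Earth's rotation, friction, diabatic heating and moisture sources/sinks, the terms that parameterization must supply.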


Moving Modeling from Theory to Practice: A Short History of the Adoption of Computational Fluid Dynamics in Aerospace Design: Deborah Whitis, D. Holmes (General Electric Company)

Owing to immense increases in computational power and the abundance of physical modeling tools at our disposal, CFD has become an indispensable part of the materials design process. Dr. Holmes’s talk drew on experience in the aerospace industry with CFD as a design tool in the manufacture of aeroengines and power turbines. The most common use of CFD design tools at GE is in turbine blades, where CFD improved the fuel efficiency of the aeroengine on the Boeing 747-8. The evolution of CFD into an indispensable design tool began with its inception in academia, as algorithm development and computational power reached a critical point 30 years ago. This enabled designers to overcome challenging design issues and to analyze failures through root cause analysis complemented by CFD tools, revealing insights into how and why a component or material failed. The use of CFD tools begins with the generation of a simple computational grid, followed by solution of the governing equations of mass, momentum, and energy conservation with imposed initial and boundary conditions. The results are then examined and the design modified if there is still room for improvement; the typical cycle time in this design process is overnight. The rewards of CFD are evident in unducted fan design, in solving the cavitation problem in jet engines, and in the improved fuel efficiency of counter-rotating propellers. In a way, CFD guides the design even if the results are not precise: for instance, one can visualize the flow patterns around the blades, which is nearly impossible with experimental techniques. Moreover, CFD provides a fast and inexpensive virtual design environment for the engineer. The current state of the art in hardware for CFD calculations is a dedicated network of Linux PCs, with the problem partitioned and run in parallel using Fortran 90 codes. However, scalability issues remain: for instance, if only 1% of the code cannot be parallelized, the overall speedup is capped near 100x no matter how many more CPUs are used. Other key issues in adopting CFD into the design process are senior management support, designer commitment, high-performance computing, ongoing CFD methods development, and CFD tool design and integration. Turbulence modeling also remains elusive: we are still a couple of orders of magnitude short in both algorithms and computing power, owing to length-scale issues together with the chaotic and therefore intractable nature of turbulence. The future of CFD holds more computational power and mass storage in store, although algorithm development has plateaued; according to Dr. Holmes, only incremental rather than dramatic increases in the power of turbulence models are to be expected. Furthermore, a multidisciplinary approach that combines aerodynamics and structural design is imperative. Other topics that will keep CFD an active research field in aerospace design are multiphase flow modeling and computational aero-acoustics for noise reduction in jet engines.
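The scalability remark is just Amdahl's law; a few lines make it concrete (the 1% serial fraction is the figure quoted above, the processor counts are arbitrary):

# Amdahl's law: with serial fraction s, speedup on n processors is 1/(s + (1-s)/n).
def speedup(serial_fraction, n_procs):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

for n in (10, 100, 1000, 10000):
    print(f"{n:>6} CPUs -> speedup {speedup(0.01, n):6.1f}x")

With 1% of the work stuck in serial, the speedup never exceeds 100x regardless of how many CPUs are thrown at the problem.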


The BEN Collaborative: The National Science Digital Library (NSDL) Biological Sciences Pathway: Linda Akli, Yolanda George, Nancy Gough (American Association for the Advancement of Science)

ICME generates enormous amounts of data that must subsequently be stored and managed efficiently for later data mining and knowledge discovery. The important question begging an answer is: how should data resources be organized and classified? The BEN Collaborative serves as a template for the materials science community in that it addresses the current challenges of data management in the biological sciences. Since the goal in the collaborative development of a portal site is to share and disseminate data and the metadata describing that data, creating common resource descriptions and technical specifications is imperative in managing scientific data. This provides a common language that cuts across scientific communities, increasing the speed at which research is conducted. Dr. Gough’s talk examined and dissected the methodology implemented in a large search-and-browse tool, a portal application, in BEN. The example given in her talk was the data organization scheme employed for the STKE online journal, a research journal on cell signaling, which investigates how cells communicate with each other. STKE’s web portal allows users to search and browse a database of cell signaling across different communities and available resources. In such an undertaking, the first thing to do in creating a search-and-browse web portal is to develop a schema for the metadata. The relational database thus created allows one to browse, for instance, data based on information about the resource, using editor-supplied tags and information inherent to the resource. This requires developing taxonomies or controlled vocabularies that are agreed upon by the whole research community, one of the outstanding problems still to be solved in the life sciences community. A typical metadata record in a cell-signaling database comprises pathways, components, nodes and the relations among them, as well as scope, description, citations, organism name, tissue/cell description, links to canonical information, and an interactive pathway diagram. These seemingly unrelated fields are put together by visualization tools to give a clear view of the interactions between genes and proteins. In a metadata search engine, the researcher can perform an advanced search that displays the information about a resource, i.e. its metadata; the resource or data itself can be shared only with parties that have access privileges. Moreover, content awareness can be increased by integrating alert systems into the search engine, so that the user is notified when a new entry relevant to a pre-chosen keyword is added to the database. The quality of this metadata-driven process is ensured by peer review and strict editorial control throughout the creation of the database and its associated metadata. The materials science community, having developed its own markup language, MaterialsML, still has a lot to learn from the biological community in terms of the minutiae of the markup languages it employs.
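To make the "schema first, then search and browse" point concrete, here is a toy metadata record and keyword search; the field names are taken loosely from the list above, and this is NOT the actual BEN/STKE schema.

from dataclasses import dataclass, field

@dataclass
class ResourceMetadata:
    title: str
    scope: str
    description: str
    organism: str
    pathway_components: list = field(default_factory=list)
    citations: list = field(default_factory=list)
    keywords: list = field(default_factory=list)

records = [
    ResourceMetadata("EGFR signaling overview", "canonical pathway",
                     "Receptor tyrosine kinase cascade", "Homo sapiens",
                     ["EGFR", "RAS", "ERK"], ["(hypothetical citation)"],
                     ["kinase", "growth factor"]),
]

def search(records, term):
    term = term.lower()
    return [r.title for r in records
            if term in r.description.lower()
            or any(term in k.lower() for k in r.keywords)
            or any(term in c.lower() for c in r.pathway_components)]

print(search(records, "kinase"))

The controlled-vocabulary problem mentioned above shows up here immediately: the search is only as good as the agreed-upon keywords and component names stored in the records.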

