Neural Networks: Improving Performance in X-ray Lithography Applications
ECE 539, Ryan T. Hogg, May 10, 2000

Abstract To improve the performance of the existing network used in X-ray lithography, a modular network consisting of three independent expert networks was designed, one per output. Each expert is a radial basis function network. The trained network should achieve a mean error of less than 0.2% on each of its three outputs, although error values have not yet been calculated. Training speed has also been greatly increased: the original training set of 1327 vectors was reduced to 300, and the smaller set appears to produce better results.

Outline
- The problem
- Existing solution
- Proposed solution
- Modular network layout
- Network options
- Network results
- Future improvements
- Conclusion

Problem: X-ray Lithography Parameters Manufacturing semiconductor devices with X-ray lithography requires evaluating the following multivariate function:

[linewidth, IMTF, fidelity] = F(absorber thickness, gap, bias)

A neural network that computes these values with minimal error would be an invaluable tool for any lithographer. (IMTF = Integrated Modulation Transfer Function.)

The Radial Basis Solution Currently there is a neural network that uses a radial basis function to compute these three variables. Its error performance on the testing set is:
- Mean error: 0.2% to 0.4%
- Maximum error: 4%
The goal of this project is to improve these figures, ideally obtaining a maximum error of less than 0.1%.

A Modular Solution Rather than having a single network take in all three parameters and produce three outputs, each output will have its own expert network.

[Diagram: the inputs abs (absorber thickness), gap, and bias feed three parallel experts, Expert 1 (linewidth), Expert 2 (IMTF), and Expert 3 (fidelity), whose outputs are combined into the result.]
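
The slides do not include code, but the modular layout is simple to sketch. Below is a minimal Python illustration, assuming a generic fit/predict regressor interface; the ModularNet class and its factory argument are placeholders for illustration, not the project's actual code. One expert is trained per output column and the three predictions are stacked.

```python
import numpy as np

class ModularNet:
    """Train one independent expert per output (linewidth, IMTF, fidelity)."""

    def __init__(self, make_expert):
        # make_expert: factory returning a fresh regressor with fit/predict.
        self.make_expert = make_expert
        self.experts = []

    def fit(self, X, Y):
        # X: (n, 3) inputs (absorber thickness, gap, bias).
        # Y: (n, 3) targets; each column gets its own expert.
        self.experts = []
        for k in range(Y.shape[1]):
            expert = self.make_expert()
            expert.fit(X, Y[:, k])
            self.experts.append(expert)
        return self

    def predict(self, X):
        # Stack the three experts' outputs into an (n, 3) result.
        return np.column_stack([e.predict(X) for e in self.experts])
```

Because the experts are independent, each can use whatever network type, training set, and parameters work best for its own output.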

Battle of the Networks Initially, two different types of networks were trained to see which one was better:

Multilayer Perceptron (one hidden layer)
- Fast training, but low learning rate
- Large mean square error

Radial Basis Function Neural Network
- Slower training
- Much lower error

MLP Approach The three inputs and three outputs were converted to 32-bit binary numbers for more effective training, so each expert network had 96 inputs and 32 outputs. The binary representation was a sign-plus-fixed-point format: a sign bit s, eight integer bits (b7 down to b0), and 23 fractional bits (b-1 down to b-23). There were two sets of weights, wh (hidden) and wo (output). Each hidden neuron had three weighted inputs and a bias, and the output of each hidden neuron was weighted and summed in each output node along with a bias.
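
As a concrete illustration of that format, here is a minimal sketch of one plausible encoding routine; the function name and rounding behavior are assumptions, since the slides do not show the actual conversion code.

```python
import numpy as np

def to_bits(x: float) -> np.ndarray:
    """Encode x as [s, b7..b0, b-1..b-23]: a sign bit, 8 integer
    bits, and 23 fractional bits (32 bits total). A sketch only."""
    sign = 1 if x < 0 else 0
    # Scaling by 2**23 turns the fractional bits into low-order integer bits.
    mag = int(round(abs(x) * 2**23)) & (2**31 - 1)  # keep 31 magnitude bits
    bits = [(mag >> i) & 1 for i in range(30, -1, -1)]
    return np.array([sign] + bits, dtype=np.float32)
```

Under this scheme, each 3-input training vector becomes a 96-bit input pattern and each target value a 32-bit output pattern.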

MLP Results After training with several different learning rates and momentum constants, it became clear that the MLP was not learning fast enough to reach the desired error level. In some cases the error was greater than 100%!

Setting up the RBN
1. Select the smoothing factor (h). Numerous values of h were tested, mainly powers of ten. Most generated inaccurate predictions, but one value performed remarkably well: h = 0.0001.
2. Determine the size of the training set. To speed up training of the RBN, the training set was reduced; initially only the first two hundred vectors were used. This gave good results for the larger-valued test sets.
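
The slides do not show the network's implementation, but a radial basis network governed by a single smoothing factor can be sketched as Gaussian-kernel regression over the training vectors. This is an assumption about the form, not the project's actual code.

```python
import numpy as np

def rbf_predict(X_train, y_train, X_test, h=1e-4):
    """Treat every training vector as a radial basis centre and
    return a kernel-weighted average of its target values.
    h is the smoothing factor; inputs are assumed to be scaled
    so that pairwise distances are comparable to h."""
    # Squared Euclidean distances between all test/train pairs.
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * h**2))       # Gaussian activations
    w /= w.sum(axis=1, keepdims=True)    # normalize per test point
    return w @ y_train                   # weighted average of targets
```

Note that as h shrinks, the prediction approaches nearest-neighbour interpolation, so the choice of h trades off smoothing against fidelity to individual training vectors.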

RBN Setup Continued… Since the training set was regularly distributed, a new set was generated by sampling three regions of the original 1327 vectors (apparently vectors 1-100, 425-525, and 1227-1327, per the slide's diagram), for a total of 300 vectors. The training set has been reduced by over 75%!
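
If the original vectors are held in arrays, the reduced set can be produced with simple index slices. The exact ranges below are read off the slide's diagram and are therefore an assumption, and X_full/Y_full are stand-ins for the real data.

```python
import numpy as np

# Stand-ins for the full data set of 1327 vectors (3 inputs, 3 outputs).
X_full = np.random.rand(1327, 3)
Y_full = np.random.rand(1327, 3)

# Assumed index ranges from the slide's diagram (0-based, end-exclusive).
idx = np.r_[0:100, 424:524, 1227:1327]   # three blocks of 100 -> 300 vectors
X_small, Y_small = X_full[idx], Y_full[idx]
assert len(X_small) == 300
```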

Modular Radial Basis Results (First 10 Vectors)

Linewidth (Predicted / Actual)   IMTF (Predicted / Actual)   Fidelity (Predicted / Actual)
147.1649 / 146.8300              0.3775 / 0.3772             0.8023 / 0.8021
154.7935 / 154.8700              0.3847 / 0.3848             0.7851 / 0.7853
163.0653 / 163.1100              0.3707 / 0.3708             0.7508 / 0.7509
168.9189 / 168.9600              0.3438 / 0.3441             0.7153 / 0.7157
172.4245 / 171.9600              0.3309 / 0.3276             0.6994 / 0.6954
134.7077 / 135.2400              0.3953 / 0.3938             0.8452 / 0.8448
146.5182 / 146.4400              0.3837 / 0.3839             0.8222 / 0.8223
158.9090 / 158.8900              0.3371 / 0.3372             0.7661 / 0.7661
168.6808 / 168.6900              0.2764 / 0.2766             0.6984 / 0.6986
173.7676 / 174.3300              0.2429 / 0.2399             0.6592 / 0.6563

Note: Error calculations are not available yet because the code has not been written.

Future Improvements
- Adjust the smoothing factor for better results.
- Write code to calculate the mean error and maximum error for the testing set (a sketch follows below).
- Possibly reduce the training set even further.
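
That error calculation might look like the following minimal sketch, where pred and actual are assumed to be arrays shaped like the results table above, and percent relative error is one reasonable reading of the slides' "mean error":

```python
import numpy as np

def error_stats(pred, actual):
    """Percent relative error per output column.
    Returns (mean error, maximum error), each a length-3 vector
    for linewidth, IMTF, and fidelity."""
    rel = np.abs(pred - actual) / np.abs(actual) * 100.0
    return rel.mean(axis=0), rel.max(axis=0)
```

Applied to the ten rows above, this would show directly whether the 0.2% mean-error target is being met.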

Conclusions The MLP is not well suited to this type of function. Radial basis functions work well here, which is why one was used in the first place. However, creating an expert network for each output produces even better results.