To effectively detect faulty wafers

Overview

wafer_fault_detection

Aim of the project:

In electronics, a wafer (also called a slice or substrate) is a thin slice of semiconductor, such as crystalline silicon (c-Si), used for the fabrication of integrated circuits and, in photovoltaics, to manufacture solar cells. The wafer serves as the substrate for microelectronic devices built in and upon it. The project aims to identify the state of a provided wafer by classifying it into one of two classes: +1 (good, can be used as a substrate) or -1 (bad, the wafer needs to be replaced). To this end, a training dataset is provided for building a machine learning classification model that can predict wafer quality.

Data Description:

The columns of the provided data can be grouped into three parts: wafer name, sensor values, and label. The wafer name contains the batch number of the wafer, whereas the sensor values are obtained from measurements carried out on the wafer. The label column contains two unique values, +1 and -1, that identify whether the wafer is good or needs to be replaced. Additionally, we also require a schema file, which contains all the relevant information about the training files, such as the file name pattern, the length of the date value in the file name, the length of the time value in the file name, the number of columns, the names of the columns, and their datatypes.
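To make the schema contents concrete, the snippet below is a minimal sketch of what such a schema file could look like, written out from a Python dictionary. The key names (LengthOfDateStampInFile, NumberofColumns, ColName, etc.), the file name schema_training.json, and the column count are illustrative assumptions, not values taken from the repository.

```python
import json

# Hypothetical schema layout; key names and values are assumptions chosen to
# mirror the fields described above, not the project's actual schema.
schema = {
    "SampleFileName": "wafer_08012020_120000.csv",
    "LengthOfDateStampInFile": 8,      # e.g. 08012020
    "LengthOfTimeStampInFile": 6,      # e.g. 120000
    "NumberofColumns": 592,            # illustrative; must match the actual files
    "ColName": {
        "Wafer": "varchar",
        "Sensor-1": "float",
        "Sensor-2": "float",
        # ... remaining sensor columns ...
        "Good/Bad": "integer"
    }
}

with open("schema_training.json", "w") as f:
    json.dump(schema, f, indent=4)
```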

Directory creation:

All the necessary folders were created to effectively separate the files so that the end-user can get easy access to them.
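A minimal sketch of this step, assuming hypothetical folder names (the actual directory layout used by the project may differ):

```python
import os

# Hypothetical directory layout; the real project may use different names.
FOLDERS = [
    "Training_Raw_Files/Good_Raw",
    "Training_Raw_Files/Bad_Raw",
    "Prediction_Raw_Files/Good_Raw",
    "Prediction_Raw_Files/Bad_Raw",
    "models",
    "logs",
]

for folder in FOLDERS:
    os.makedirs(folder, exist_ok=True)  # create the folder if it does not already exist
```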

Data Validation:

In this step, we validate our dataset against the provided schema file, checking the file names, the number of columns each file should contain, the column names, and their datatypes. If a file matches the schema values, it is considered a good file on which we can train our model or make predictions; otherwise, it is considered bad and moved to the bad folder. Moreover, we also identify the columns with null values. If the data for a whole column is missing, we also consider the file bad; on the contrary, if only a fraction of the data in a column is missing, we initially fill it with NaN and treat the file as good data.
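A minimal sketch of this validation logic, assuming the hypothetical file name pattern wafer_&lt;date&gt;_&lt;time&gt;.csv and the illustrative schema keys and folder names used above; the project's actual regex and directory names may differ.

```python
import os
import re
import shutil
import pandas as pd

# Hypothetical pattern: wafer_<8-digit date>_<6-digit time>.csv
FILENAME_PATTERN = re.compile(r"^wafer_\d{8}_\d{6}\.csv$")

def validate_file(path, schema, good_dir="Good_Raw", bad_dir="Bad_Raw"):
    """Move a raw file to the good or bad folder based on schema checks."""
    name = os.path.basename(path)
    df = pd.read_csv(path)

    name_ok = bool(FILENAME_PATTERN.match(name))
    columns_ok = df.shape[1] == schema["NumberofColumns"]
    # A file is rejected if any column is entirely missing.
    no_empty_columns = not df.isnull().all().any()

    if name_ok and columns_ok and no_empty_columns:
        shutil.move(path, os.path.join(good_dir, name))
    else:
        shutil.move(path, os.path.join(bad_dir, name))
```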

Data Insertion in Database:

First, we create a database with the given name; if the database already exists, we simply open a connection to it. A table named "train_good_raw_dt" or "pred_good_raw_dt" (depending on whether we are training or predicting) is created in the database for inserting the good data files obtained from the data validation step. If the table is already present, a new table is not created and the new files are inserted into the existing table, as we want training to be done on new as well as old training files. In the end, the data stored in the database is exported as a CSV file to be used for model training.
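The section does not specify which database engine is used, so the sketch below assumes SQLite purely for illustration; only the table name comes from the description above, and the file names are hypothetical.

```python
import os
import sqlite3
import pandas as pd

def insert_good_data(db_name, table_name, good_dir, export_csv):
    """Insert validated CSV files into the database, then export them as one CSV."""
    conn = sqlite3.connect(db_name)  # creates the database file if it does not exist
    for file in os.listdir(good_dir):
        df = pd.read_csv(os.path.join(good_dir, file))
        # Append, so that new files join the previously inserted ones.
        df.to_sql(table_name, conn, if_exists="append", index=False)

    # Export the accumulated table as a single CSV for model training.
    full = pd.read_sql(f"SELECT * FROM {table_name}", conn)
    full.to_csv(export_csv, index=False)
    conn.close()

# Example usage with the table name mentioned above (paths are assumptions):
# insert_good_data("wafer.db", "train_good_raw_dt", "Good_Raw", "InputFile.csv")
```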

Data Pre-processing and Model Training:

In the training section, the data is first checked for NaN values in the columns; if any are present, they are imputed using the KNN imputer. Columns with zero standard deviation are also identified and removed, as they don't give any information during model training. A prediction schema is created based on the remaining dataset columns. Afterwards, the KMeans algorithm is used to create clusters in the pre-processed data. The optimum number of clusters is selected by plotting the elbow plot, and for dynamic selection of the number of clusters we use the "KneeLocator" function. The idea behind clustering is to train a separate algorithm on the data in each cluster. The KMeans model is trained over the pre-processed data, and the model is saved for further use in prediction.

After the clusters are created, we find the best model for each cluster. We use four algorithms: "Random Forest", "K Neighbours", "Logistic Regression" and "XGBoost". For each cluster, every algorithm is tuned with the best parameters derived from GridSearch, we calculate the AUC score of each resulting model, and we select the model with the best score. In this way, the best model is chosen for every cluster, and all these models are saved for use in prediction. In the end, the confusion matrix of the model associated with every cluster is also saved to give a glance at the performance of the models.
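A condensed sketch of this pipeline, assuming scikit-learn, xgboost, and the kneed package for KneeLocator. The parameter grids, column names (e.g. "Wafer" and "Good/Bad"), and file paths are illustrative assumptions rather than the project's actual values, and the confusion-matrix saving step is omitted for brevity.

```python
import joblib
import numpy as np
import pandas as pd
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

data = pd.read_csv("InputFile.csv")                    # exported training data (assumed path)
X = data.drop(columns=["Wafer", "Good/Bad"])           # assumed column names
y = (data["Good/Bad"] == 1).astype(int)                # map +1/-1 labels to 1/0

# 1. Impute missing values with the KNN imputer.
X = pd.DataFrame(KNNImputer(n_neighbors=3).fit_transform(X), columns=X.columns)

# 2. Drop columns with zero standard deviation.
X = X.loc[:, X.std() > 0]

# 3. Pick the number of clusters from the elbow curve via KneeLocator.
inertias = [KMeans(n_clusters=k, random_state=42).fit(X).inertia_ for k in range(1, 11)]
locator = KneeLocator(range(1, 11), inertias, curve="convex", direction="decreasing")
n_clusters = locator.knee or 3   # fall back to 3 clusters if no clear elbow is found

kmeans = KMeans(n_clusters=n_clusters, random_state=42).fit(X)
joblib.dump(kmeans, "models/kmeans.pkl")

# 4. For each cluster, tune the candidate models and keep the one with the best AUC.
candidates = {
    "RandomForest": (RandomForestClassifier(), {"n_estimators": [50, 100]}),
    "KNeighbours": (KNeighborsClassifier(), {"n_neighbors": [3, 5]}),
    "LogisticRegression": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0]}),
    "XGBoost": (XGBClassifier(eval_metric="logloss"), {"max_depth": [3, 5]}),
}

for c in range(n_clusters):
    mask = kmeans.labels_ == c
    X_tr, X_te, y_tr, y_te = train_test_split(X[mask], y[mask], test_size=0.3, random_state=42)

    best_name, best_model, best_auc = None, None, -np.inf
    for name, (estimator, grid) in candidates.items():
        search = GridSearchCV(estimator, grid, scoring="roc_auc", cv=3).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, search.best_estimator_.predict_proba(X_te)[:, 1])
        if auc > best_auc:
            best_name, best_model, best_auc = name, search.best_estimator_, auc

    joblib.dump(best_model, f"models/{best_name}_cluster_{c}.pkl")
```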

Prediction:

In the prediction stage, the essential directories are first created. The data validation, data insertion, and data pre-processing steps are similar to the training section. The KMeans model created during training is loaded, and the cluster for each pre-processed prediction record is predicted. Based on the cluster number, the respective model is loaded and used to predict the data for that cluster. Once the prediction is made for all the clusters, the predictions along with the wafer names are saved in a CSV file at a given location.
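A minimal sketch of this step under the same assumptions as the training sketch above (joblib-serialized models, illustrative column names and paths); it also assumes the prediction data has already gone through the same imputation and column-dropping as the training data.

```python
import glob
import joblib
import numpy as np
import pandas as pd

def predict_wafers(pred_csv, output_csv="Predictions.csv"):
    """Assign each wafer to a cluster and predict with that cluster's model."""
    data = pd.read_csv(pred_csv)          # assumed to be pre-processed like the training data
    wafer_names = data["Wafer"]           # assumed column name
    X = data.drop(columns=["Wafer"])

    kmeans = joblib.load("models/kmeans.pkl")
    clusters = kmeans.predict(X)

    results = []
    for c in sorted(set(clusters)):
        rows = clusters == c
        # Load whichever model was saved for this cluster during training.
        model_path = glob.glob(f"models/*_cluster_{c}.pkl")[0]
        model = joblib.load(model_path)
        # Map the 1/0 model output back to the +1/-1 labelling used in the data.
        preds = np.where(model.predict(X[rows]) == 1, 1, -1)
        results.append(pd.DataFrame({"Wafer": wafer_names[rows], "Prediction": preds}))

    pd.concat(results).to_csv(output_csv, index=False)
```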

Deployment:

We will be deploying the model to Heroku Cloud.

Owner
Arun Singh Babal
Engineer | Data Science Enthusiast | Machine Learning | Deep Learning | Advanced Computer Vision.