
Data Pre-processing Techniques

Data Preprocessing

Data preprocessing is the process of transforming raw data into an understandable format. It is an important step in data mining, since algorithms cannot work directly with raw data, and the quality of the data should be checked before applying machine learning or data mining algorithms.


How do we get quality data?

By applying preprocessing techniques, we obtain quality data from the raw data.

Why is data preprocessing important?

Preprocessing is mainly about ensuring data quality. Quality can be assessed along the following dimensions:

  • Accuracy: whether the data entered is correct.
  • Completeness: whether all required data is recorded and available.
  • Consistency: whether the same data stored in different places matches.
  • Timeliness: whether the data is kept up to date.
  • Believability: whether the data can be trusted.

Major Tasks in Data Preprocessing:
  • Data cleaning
  • Data integration
  • Data reduction
  • Data transformation
Data cleaning:
Data cleaning is the process of removing incorrect, incomplete, and inaccurate data from a dataset and replacing missing values. Some common data cleaning techniques are described below.

Handling missing values:

  • Standard values like “Not Available” or “NA” can be used to replace missing values.
  • Missing values can also be filled in manually, but this is not recommended when the dataset is large.
  • The attribute’s mean can be used to replace a missing value when the data is normally distributed, whereas for a non-normal distribution the attribute’s median is used instead.
  • When using regression or decision tree algorithms, the missing value can be replaced by the most probable value.
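
As a rough sketch (not part of the original post), mean and median imputation can be done with pandas; the column names and values below are made up for illustration:

import numpy as np
import pandas as pd

# Illustrative data with missing entries
df = pd.DataFrame({"age": [25.0, 30.0, np.nan, 40.0],
                   "income": [50000.0, np.nan, 62000.0, 58000.0]})

# Mean imputation for an approximately normally distributed attribute
df["age"] = df["age"].fillna(df["age"].mean())

# Median imputation for a skewed (non-normal) attribute
df["income"] = df["income"].fillna(df["income"].median())

print(df)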

Noisy data:
Noisy data contains random errors or unnecessary data points. Here are some methods for handling noisy data.

Binning:
This method is used to smooth noisy data. First, the data is sorted, then the sorted values are divided into bins. There are three methods for smoothing the data in a bin:
  • Smoothing by bin mean: each value in the bin is replaced by the mean of the bin.
  • Smoothing by bin median: each value in the bin is replaced by the median of the bin.
  • Smoothing by bin boundary: the minimum and maximum values of the bin are taken as the bin boundaries, and each value is replaced by the closest boundary value.
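
As a rough sketch (not from the original post), here is equal-frequency binning with smoothing by bin mean and by bin boundary, using NumPy on made-up values:

import numpy as np

# Made-up, already sorted data split into 3 equal-frequency bins
data = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
bins = np.array_split(data, 3)

# Smoothing by bin mean: every value in a bin is replaced by the bin's mean
smoothed_mean = np.concatenate([np.full(len(b), b.mean()) for b in bins])

# Smoothing by bin boundary: every value is replaced by the closer of the
# bin's minimum and maximum
smoothed_boundary = np.concatenate(
    [np.where(b - b.min() <= b.max() - b, b.min(), b.max()) for b in bins])

print(smoothed_mean)
print(smoothed_boundary)
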
Regression:
Regression is used to smooth the data and helps handle the dataset when unnecessary data is present. For analysis purposes, regression also helps decide which variables are suitable for the analysis.

Clustering:
Clustering is used to find outliers and to group similar data points. It is generally used in unsupervised learning.
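
As an illustration (not from the original post), DBSCAN from scikit-learn labels points that fall outside every cluster as -1, which can be used to flag outliers:

import numpy as np
from sklearn.cluster import DBSCAN

# Two tight groups of points plus one far-away point (made-up data)
X = np.array([[1, 2], [1, 3], [2, 2], [8, 8], [8, 9], [9, 8], [30, 30]])

labels = DBSCAN(eps=2.0, min_samples=2).fit_predict(X)
print(labels)            # points labelled -1 are treated as outliers/noise
print(X[labels == -1])   # the isolated point [30, 30]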

Data integration:
Data integration is the process of combining data from multiple sources into a single dataset. It is one of the main components of data management. Several issues need to be considered during data integration.

Schema integration:
Integrates metadata (a set of data that describes other data) from different sources.

Entity identification problem:
Identifies entities across multiple databases. For example, the system or the user should know that the student_id in one database and the student_name in another database belong to the same entity.

Detecting and resolving data value conflicts:
The data taken from different databases may differ when merging; attribute values in one database may differ from those in another. For example, the date format may be “MM/DD/YYYY” in one source and “DD/MM/YYYY” in another.

Data reduction:
This process reduces the volume of the data, which makes the analysis easier while producing the same or almost the same results. It also helps reduce storage space. Some data reduction techniques are dimensionality reduction, numerosity reduction, and data compression.

Dimensionality reduction:
This process is necessary for real-world applications because the data size is large. The number of random variables or attributes is reduced so that the dimensionality of the dataset is lowered, combining and merging attributes without losing their original characteristics. This also reduces storage space and computation time. When data is highly dimensional, the problem called the “curse of dimensionality” occurs.
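
As a minimal sketch (not from the original post), PCA from scikit-learn reduces the number of attributes while keeping most of the variance; the data below is made up:

import numpy as np
from sklearn.decomposition import PCA

# Made-up data: 6 samples with 4 attributes, two of which are nearly
# redundant copies of the other two
rng = np.random.default_rng(0)
base = rng.normal(size=(6, 2))
X = np.hstack([base, 2 * base + 0.01 * rng.normal(size=(6, 2))])

pca = PCA(n_components=2)            # keep 2 of the 4 dimensions
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                   # (6, 2)
print(pca.explained_variance_ratio_)     # fraction of variance kept per component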

Numerosity reduction:
In this method, the representation of the data is made smaller by reducing its volume, without any loss of data.

Data compression: Representing data in a compressed form is called data compression. Compression can be lossless or lossy. When no information is lost during compression, it is called lossless compression; lossy compression reduces the information but removes only unnecessary information.

Data transformation:
A change made to the format or the structure of the data is called data transformation. This step can be simple or complex depending on the requirements. Some data transformation methods are described below.

Smoothing: With the help of algorithms, we can remove noise from the dataset, which helps reveal its important features. Smoothing also makes it possible to detect even small changes that help in prediction.

Aggregation: In this method, the data is stored and presented in summary form. Data collected from multiple sources is integrated into a data analysis description. This is an important step since the accuracy of the analysis depends on the quantity and quality of the data: when both are good, the results are more relevant.

Discretization: Continuous data is split into intervals, which reduces the data size. For example, rather than specifying the exact class time, we can use intervals such as (3 pm-5 pm) or (6 pm-8 pm).
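
A small sketch (not from the original post) of discretization with pandas' cut function, using made-up ages and interval labels:

import pandas as pd

# Made-up continuous values (ages) discretized into labelled intervals
ages = pd.Series([5, 17, 23, 31, 46, 52, 70])
age_groups = pd.cut(ages, bins=[0, 18, 35, 60, 100],
                    labels=["child", "young adult", "adult", "senior"])
print(age_groups)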

Normalization: The data is scaled so that it can be represented in a smaller range, for example from -1.0 to 1.0.
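
As a rough sketch (not from the original post), min-max normalization rescales each value x to x' = (x - min) / (max - min) * (new_max - new_min) + new_min; a small NumPy example with made-up values:

import numpy as np

values = np.array([200.0, 300.0, 400.0, 600.0, 1000.0])

# Min-max normalization into the range [-1.0, 1.0]
new_min, new_max = -1.0, 1.0
scaled = (values - values.min()) / (values.max() - values.min())
normalized = scaled * (new_max - new_min) + new_min
print(normalized)   # [-1.  -0.75 -0.5   0.    1. ]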

Data preprocessing steps in machine learning

The data set is as follows.

The features/attributes are Country, Age, Salary, and Purchased.
You can observe in the table below that there are NaN values in the Salary column of the 5th row (index 4) and in the Age column of the 7th row (index 6).

     Country    Age    Salary   Purchased
0    France     44     7200     0
1    Spain      27     48000    1
2    Germany    30     54000    0
3    Spain      38     61000    0
4    Germany    40     NaN      1
5    France     35     58000    1
6    Spain      NaN    52000    0
7    France     48     79000    1
8    Germany    50     83000    0
9    France     37     67000    1


Importing the Dataset

STEP 1: Importing Libraries and Dataset
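
The original post shows this step as a screenshot, which is not reproduced here. A minimal sketch of what it could look like with pandas, building the example table above directly (loading a CSV with pd.read_csv would work the same way; the file name used below in the comment is only an assumption):

import numpy as np
import pandas as pd

# Build the example dataset shown in the table above; in practice the same
# data would usually be loaded from a file, e.g. pd.read_csv("Data.csv")
df = pd.DataFrame({
    "Country": ["France", "Spain", "Germany", "Spain", "Germany",
                "France", "Spain", "France", "Germany", "France"],
    "Age": [44, 27, 30, 38, 40, 35, np.nan, 48, 50, 37],
    "Salary": [7200, 48000, 54000, 61000, np.nan, 58000, 52000, 79000, 83000, 67000],
    "Purchased": [0, 1, 0, 0, 1, 1, 0, 1, 0, 1],
})
print(df)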

 


STEP 2: Encoding Categorical Data
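
The screenshot for this step is not reproduced here. One common way to encode the categorical Country column, continuing from the DataFrame built in STEP 1, is one-hot encoding with pandas get_dummies (LabelEncoder or OneHotEncoder from scikit-learn are alternatives); this is a sketch, not necessarily the exact code from the post:

import pandas as pd

# Continuing from the DataFrame df built in STEP 1:
# one-hot encode the categorical Country column
df_encoded = pd.get_dummies(df, columns=["Country"])
print(df_encoded.head())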


STEP 3: Replacing NaN Values with the Mean
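
A sketch of mean imputation, continuing from the DataFrame built in STEP 1 (the original code screenshot is not included here); filling the missing Salary value with the column mean is an assumption about which column the post imputes:

# Continuing from STEP 1: fill the missing Salary value with the column mean
df["Salary"] = df["Salary"].fillna(df["Salary"].mean())
print(df.loc[4])   # row index 4 now holds the mean salary instead of NaN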


STEP 4: Replacing NaN Values with the Median
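
Similarly, a sketch of median imputation for the missing Age value (again continuing from STEP 1, and not necessarily the exact code from the post):

# Continuing from STEP 1: fill the missing Age value with the column median
df["Age"] = df["Age"].fillna(df["Age"].median())
print(df.loc[6])   # row index 6 now holds the median age (38.0) instead of NaN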


STEP 5: Splitting the Dataset into Training and Testing Data
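
A sketch of the train/test split with scikit-learn, continuing from the imputed DataFrame and re-encoding it before splitting; the 80/20 split and the random_state are assumptions, not taken from the post:

import pandas as pd
from sklearn.model_selection import train_test_split

# Re-encode the imputed DataFrame and separate features from the target
X = pd.get_dummies(df.drop(columns=["Purchased"]), columns=["Country"])
y = df["Purchased"]

# 80% of the rows for training, 20% for testing (split ratio is an assumption)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
print(X_train.shape, X_test.shape)   # (8, 5) and (2, 5)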




STEP 6: Normalizing the Data

Standard Scaler: before and after
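
The before/after screenshot is not reproduced here; a sketch of standardization with scikit-learn's StandardScaler, assuming the missing values were already filled in (STEPs 3 and 4) and the data was split (STEP 5):

from sklearn.preprocessing import StandardScaler

# Standardize the numeric columns to zero mean and unit variance,
# fitting the scaler on the training data only
num_cols = ["Age", "Salary"]
scaler = StandardScaler()

print(X_train[num_cols].head())                        # before scaling
X_train_std = scaler.fit_transform(X_train[num_cols])
X_test_std = scaler.transform(X_test[num_cols])
print(X_train_std[:5])                                 # after scaling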


Normalization
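
And a sketch of normalization with MinMaxScaler, which rescales each numeric column into a fixed range (here [0, 1]; the exact range used in the post is an assumption):

from sklearn.preprocessing import MinMaxScaler

# Rescale the numeric columns into the range [0, 1]
num_cols = ["Age", "Salary"]
min_max = MinMaxScaler(feature_range=(0, 1))
X_train_norm = min_max.fit_transform(X_train[num_cols])
X_test_norm = min_max.transform(X_test[num_cols])
print(X_train_norm[:5])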



