
Posts

Showing posts from February, 2022

How to Install Parrot Operating System in Virtual Box using OVA

Step by Step Process of Parrot OS Installation

What is Parrot OS?
Parrot is a free and open-source, Debian-based Linux distribution that is popular among security researchers, security experts, developers, and privacy-conscious users. It ships with a fully portable arsenal for cyber security and digital forensics work, and it includes everything you need to build your own applications and protect your privacy online. Parrot is offered in Home and Security editions, as well as a virtual machine image and a Docker image, featuring the KDE and MATE desktop environments.

Features of Parrot OS
The following features set Parrot OS apart from other Debian distributions:
- Tor, TorChat, I2P, AnonSurf, and ZuluCrypt, which are popular among developers, security researchers, and privacy-conscious individuals, come pre-installed as development, forensics, and anonymity applications.
- It has a separate "Forensics Mode" that does not mount any of the system's hard drives.
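Since the post covers importing an OVA appliance into VirtualBox, a minimal command-line sketch of that route is shown below. The file name Parrot-security.ova and the VM name are illustrative; use your actual download and whatever name the appliance registers:

    $ VBoxManage import Parrot-security.ova
    $ VBoxManage list vms
    $ VBoxManage startvm "Parrot Security"

The same import can be done from the VirtualBox GUI via File > Import Appliance.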


Data Pre-processing Techniques

Data Preprocessing
Data preprocessing is the process of transforming raw data into an understandable format. It is also an important step in data mining, as we cannot work with raw data directly. The quality of the data should be checked before applying machine learning or data mining algorithms.

How do we get quality data? By applying preprocessing techniques, we obtain quality data from raw data.

Why is data preprocessing important? Preprocessing is mainly about checking data quality, which can be assessed along the following dimensions:
Accuracy: whether the data entered is correct.
Completeness: whether the data is available or left unrecorded.
Consistency: whether the same data kept in different places matches.
Timeliness: whether the data is kept up to date.
Believability: whether the data can be trusted.

Major tasks in data preprocessing:
Data cleaning
Data integration
Data reduction
Data transformation

Data cleaning: Data cleaning is the process of handling missing values, smoothing noisy data, and removing inconsistencies so that the data is fit for analysis.
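As a concrete illustration of the data cleaning task, here is a minimal sketch in Java of one common technique, imputing missing values with the column mean. The class name, the NaN encoding for missing entries, and the sample readings are all illustrative assumptions, not part of the original post:

    import java.util.Arrays;

    public class DataCleaningDemo {
        // Replace missing values (encoded here as NaN) with the mean of the
        // observed values, a common imputation strategy in data cleaning.
        static double[] imputeWithMean(double[] column) {
            double sum = 0;
            int count = 0;
            for (double v : column) {
                if (!Double.isNaN(v)) { sum += v; count++; }
            }
            double mean = count > 0 ? sum / count : 0.0;
            double[] cleaned = column.clone();
            for (int i = 0; i < cleaned.length; i++) {
                if (Double.isNaN(cleaned[i])) cleaned[i] = mean;
            }
            return cleaned;
        }

        public static void main(String[] args) {
            // Hypothetical sensor readings with two missing entries.
            double[] raw = {10.0, Double.NaN, 12.0, Double.NaN, 14.0};
            // Prints [10.0, 12.0, 12.0, 12.0, 14.0]: the mean of 10, 12, 14 is 12.
            System.out.println(Arrays.toString(imputeWithMean(raw)));
        }
    }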

Word Count Map Reduce program

Aim: Run a basic Word Count MapReduce program to understand the MapReduce paradigm.

Program: Source Code

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration; // provides access to configuration parameters
    import org.apache.hadoop.fs.Path; // Path names a file or directory in HDFS
    import org.apache.hadoop.io.IntWritable; // primitive Writable wrapper class for integers
    import org.apache.hadoop.io.Text; // stores text and provides methods to serialize, deserialize, and compare texts at the byte level
    import org.apache.hadoop.mapreduce.Job; // Job lets the user configure the job, submit it, control its execution, and query its state
    // The Hadoop MapReduce framework spawns one map task for each InputSplit generated by the job's InputFormat
    import org.apache.hadoop.mapreduce.Mapper; // maps input key/value pairs to a set of intermediate key/value pairs
    import org.apache.hadoop.mapred
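The preview above cuts off inside the import list. The remainder of a standard Hadoop WordCount, which this program appears to follow, is sketched below; this is the stock Hadoop example, not necessarily the post's exact code:

    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Mapper: emits (word, 1) for every token in the input line.
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
            }
        }

        // Reducer: sums the counts emitted for each word.
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        // Driver: wires the mapper, combiner, and reducer together and submits the job.
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Once compiled and packaged (the jar name below is illustrative), the job runs with input and output paths in HDFS:

    $ hadoop jar wordcount.jar WordCount /input /output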

Hadoop file Management Tasks

Implement the following file management tasks in Hadoop:
a) Adding files and directories
b) Retrieving files
c) Deleting files
Hint: A typical Hadoop workflow creates data files (such as log files) elsewhere and copies them into HDFS using one of the command-line utilities below.

Program: The most common file management tasks in Hadoop include:
Adding files and directories to HDFS
Retrieving files from HDFS to the local filesystem
Deleting files from HDFS

Hadoop file commands take the following form:

    hadoop fs -cmd <args>

where cmd is the specific file command and <args> is a variable number of arguments. The command cmd is usually named after the corresponding Unix equivalent. For example, the command for listing files is ls, as in Unix.

a) Adding Files and Directories to HDFS
Creating a directory in HDFS:

    $ hadoop fs -mkdir foldername    (syntax)
    $ hadoop fs -mkdir cse
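Continuing the pattern above, sketches for copying a file in and for the remaining two tasks follow; all file and directory names here are illustrative:

    $ hadoop fs -put localfile.txt /user/hadoop/cse/    (copies a local file into HDFS)

b) Retrieving Files from HDFS

    $ hadoop fs -get /user/hadoop/cse/localfile.txt /home/hadoop/    (copies an HDFS file to the local filesystem)

c) Deleting Files from HDFS

    $ hadoop fs -rm /user/hadoop/cse/localfile.txt    (removes a file)
    $ hadoop fs -rm -r /user/hadoop/cse    (removes a directory and its contents recursively)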