
Analytics Task and Data Mining Discussion Response

Please answer the two discussion questions below; each answer should be 250-300 words. Also reply to the responses below.

Discussion 1 (Chapter 3): Why are the original/raw data not readily usable by analytics tasks? What are the main data preprocessing steps? List and explain their importance in analytics.

Your response should be 250-300 words. Respond to two postings provided by your classmates.

Response 1: Kareem

Analyzing historical data can be a challenging task because of the complexity of data in today's world, and for several reasons. First, the data may be very old and may contain outliers, which makes it unsuitable for direct use in decision-making applications.

The second reason is that the data may not be clean; in the future, when more accurate data is available, decision-making applications will benefit (Dawson et al., 2019). A fundamental problem with the use of original data is that it is often not stored in the formats required by today's analytical engines.

These data often sit in legacy formats that today's analytics environments do not support, and the difficulty of converting them into well-structured, high-level data sets creates a bottleneck for analytics. The data are not ready for analysis because their structure is poorly defined.

For instance, a categorical variable can be recorded in many different ways, from a simple binary variable to more complex representations. In short, raw data are not readily usable by analysts because they are not yet in a form suitable for analysis (Dawson et al., 2019).
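The categorical-variable problem mentioned above can be made concrete with a small sketch in plain Python. The column values and label spellings here are hypothetical, purely for illustration: inconsistent spellings are first mapped to canonical labels, then one-hot encoded into binary columns an analytics model can use.

```python
# Standardize inconsistent category labels, then one-hot encode them.
raw = ["M", "Male", "male", "F", "female", "F"]

# Map every spelling variant to one canonical label.
canonical = {"m": "male", "male": "male", "f": "female", "female": "female"}
cleaned = [canonical[v.lower()] for v in raw]

# One-hot encode: one binary indicator column per category.
categories = sorted(set(cleaned))
encoded = [[1 if v == c else 0 for c in categories] for v in cleaned]

print(categories)   # ['female', 'male']
print(encoded[0])   # first record is 'male' -> [0, 1]
```

Real projects would typically use a library encoder for this, but the underlying idea is the same.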

Data Consolidation

Data consolidation brings together records referring to the same entity from different data sources. It is important because it lets one remove useless or duplicate data that is irrelevant to the company (Sharda et al., 2020).

A common method of data consolidation is to match and merge records between two or more data sources. In the process, duplicate information is typically removed, and all the relevant data can then be retrieved from a single place.
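The merge-and-deduplicate idea above can be sketched in plain Python. The two source lists and their field names (`id`, `name`) are hypothetical stand-ins for, say, a CRM system and a billing system:

```python
# Consolidate records from two data sources and remove duplicates.
crm = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]
billing = [{"id": 2, "name": "Grace"}, {"id": 3, "name": "Alan"}]

# Merge both sources, keeping the first record seen for each id.
merged = {}
for record in crm + billing:
    merged.setdefault(record["id"], record)

consolidated = list(merged.values())
print(len(consolidated))  # 3 unique customers
```

In practice the matching key is rarely this clean, and fuzzy matching on names or addresses may be needed, but the principle of keying records to a common identifier is the same.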

Data Cleaning

Data cleaning is a preprocessing step that cleans up the original data for reporting, analytical processing, and other uses, for example by correcting errors and handling missing or inconsistent values. It can be seen as part of the data transformation stage of an analytics platform.
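Two of the most common cleaning operations, imputing missing values and discarding out-of-range entries, can be sketched as follows. The field names and the plausible-age range are assumptions made for this illustration:

```python
# Clean a small data set: fill missing values and drop out-of-range rows.
rows = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 61000},   # missing age
    {"age": 290, "income": 48000},    # impossible age (likely entry error)
]

# Impute missing ages with the mean of the valid values,
# and discard rows whose age falls outside a plausible range.
valid_ages = [r["age"] for r in rows if r["age"] is not None and 0 < r["age"] < 120]
mean_age = sum(valid_ages) / len(valid_ages)

cleaned = []
for r in rows:
    age = r["age"] if r["age"] is not None else mean_age
    if 0 < age < 120:
        cleaned.append({"age": age, "income": r["income"]})

print(len(cleaned))  # 2 rows survive cleaning
```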

Data Transformation

Data transformation is a key step among the primary data preprocessing steps. It converts the selected data elements into forms suitable for analysis, for example through normalization or aggregation, and the transformation is carried out across all of the target data (Sharda et al., 2020).
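A typical transformation is min-max normalization, which rescales a numeric variable to the [0, 1] range so that variables on very different scales can be compared or fed to a model together. A minimal sketch, using a hypothetical income column:

```python
# Normalize a numeric column to the [0, 1] range (min-max scaling).
incomes = [20000, 50000, 80000]

lo, hi = min(incomes), max(incomes)
scaled = [(x - lo) / (hi - lo) for x in incomes]

print(scaled)  # [0.0, 0.5, 1.0]
```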

Data Reduction

The main objective of the data reduction step in the primary data preprocessing steps is to eliminate unnecessary data, thus reducing the volume of data that the subsequent transformation and analysis steps must handle, whether by removing unneeded variables, records, or both.
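The two sides of data reduction described above, dropping variables the analysis does not need and sampling a subset of records, can be sketched in plain Python. The field names and the 10% sample size are assumptions for illustration:

```python
import random

# A hypothetical data set of 100 records with one column ("notes")
# that the analysis will not use.
rows = [{"id": i, "age": 20 + i, "notes": "free text", "score": i * 2}
        for i in range(100)]

# Dimension reduction: keep only the columns the model will use.
slim = [{"age": r["age"], "score": r["score"]} for r in rows]

# Numerosity reduction: draw a reproducible 10% sample of the rows.
random.seed(42)
sample = random.sample(slim, k=10)

print(len(sample), sorted(sample[0].keys()))
```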

Response 2: Dependra

Raw data can be defined as a collection of records, entities, facts, etc. It could be an address, location, coordinates, date, time, amounts, etc. It could be represented numerically or alphabetically, or both. Nowadays, various digital assets like audio, video, and image can be considered data (Kirk, 2019).

Raw data is unstructured and may contain material that is not necessary for our analysis. It is undeniable that data is fuel for analytics, but data quality, authenticity, richness, and consistency must be factored in to perform analysis. The analysis is done to serve a purpose, and the data we feed into it should serve that purpose. The raw data should be processed to support our model/algorithm in the later steps of analysis.

To harness our raw data and make it more concise, it has to undergo a transformation process. In this process, we structure the raw data, arrange it, and get familiar with the variables and their data types. Then we look for outliers in the raw data, check for invalid ranges and deal with them, look for missing fields, eliminate duplicate data, etc.

After this preprocessing or transformation process, the raw data will be more consistent, rich, concise, and up to date (Sharda et al., 2019). Preprocessing raw data before analysis will also ensure data credibility, validity, and relevancy for our project (Sharda et al., 2019). The cleaning of raw data can be done in the following ways:

  • Find and replace.
  • Sort and filter: locate data, isolate it, and modify it.
  • Eliminate unnecessary data (Kirk, 2019).
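The three bullet points above can be sketched in plain Python. The record fields (`city`, `status`, `debug_flag`) and the typo being fixed are hypothetical, chosen only to illustrate each operation:

```python
# Find-and-replace, sort/filter, and drop unnecessary fields on raw records.
records = [
    {"city": "NYC", "status": "actve", "debug_flag": 1},
    {"city": "LA", "status": "active", "debug_flag": 0},
]

for r in records:
    # 1. Find and replace: fix a known typo in the status field.
    if r["status"] == "actve":
        r["status"] = "active"
    # 3. Eliminate unnecessary data: drop a field the analysis ignores.
    del r["debug_flag"]

# 2. Sort and filter: isolate the records of interest, ordered by city.
active = sorted((r for r in records if r["status"] == "active"),
                key=lambda r: r["city"])

print([r["city"] for r in active])  # ['LA', 'NYC']
```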

There must be at least one APA-formatted reference (and APA in-text citation) to support the thoughts in the post. Do not use direct quotes; rather, rephrase the author's words and continue to use in-text citations.

Discussion 2 (Chapter 4): What are the privacy issues with data mining? Do you think they are substantiated?

Your response should be 250-300 words. Respond to two postings provided by your classmates.

Response 1: Aslam

The process that generates the power of AI is the building of models based on datasets (Sharda, Delen & Turban, 2020). Therefore, it is data that makes AI what it is. With machine learning, computer systems are programmed to learn from data that is input without being continually reprogrammed.

In other words, they continuously improve their performance on a task—for example, playing a game—without additional help from a human. Machine learning is being used in a wide range of fields: art, science, finance, healthcare—you name it. And there are different ways of getting machines to learn. Some are simple, such as a basic decision tree, and some are much more complex, involving multiple layers of artificial neural networks.
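A "basic decision tree" of the sort mentioned above can be as small as a pair of nested if-statements. The loan-approval rule below is a purely hypothetical toy, meant only to show why such models are considered simple and interpretable compared with multi-layer neural networks:

```python
# A hand-written two-level decision tree: each branch tests one feature,
# and every prediction can be traced to an explicit rule.
def approve_loan(income, credit_score):
    if credit_score >= 700:
        return True            # strong credit alone is enough
    if income >= 80000 and credit_score >= 600:
        return True            # high income offsets middling credit
    return False

print(approve_loan(50000, 720))  # True
print(approve_loan(90000, 650))  # True
print(approve_loan(40000, 580))  # False
```

In practice such trees are learned from data rather than written by hand, but the learned model has exactly this if-then structure.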

Just as machine learning is considered a type of AI, deep learning is often considered to be a type of machine learning—some call it a subset. While machine learning uses simpler concepts like predictive models, deep learning uses artificial neural networks designed to imitate the way humans think and learn.

You may remember from high school biology that the primary cellular component and the main computational element of the human brain is the neuron and that each neural connection is like a small computer. The network of neurons in the brain is responsible for processing all kinds of input: visual, sensory, and so on.

Whereas with machine learning systems a human needs to identify and hand-code the applied features based on the data type (for example, pixel value, shape, orientation), a deep learning system tries to learn those features without additional human intervention. Take the case of a facial recognition program.

The program first learns to detect and recognize edges and lines of faces, then more significant parts of the faces, and then finally the overall representations of faces. The amount of data involved in doing this is enormous, and as time goes on and the program trains itself, the probability of correct answers (that is, accurately identifying faces) increases.

And that training happens through the use of neural networks, similar to the way the human brain works, without the need for a human to recode the program (Sharda, Delen & Turban, 2020).

Response 2: Nikita

Artificial intelligence (AI) can be a computer, machine, or computer-controlled robot designed to mimic human intelligence to perform tasks that humans usually do. It uses algorithms to replicate the human mind and act, think, respond, and speak like humans.

AI is growing in the manufacturing, service, healthcare, and government industries, changing how we interact daily. Machine learning and deep learning are AI tools, and it is essential to understand the differences between them.

Machine learning is a sub-branch of AI; it uses automated algorithms, historical data, and labels to identify patterns to make decisions with less human intervention. An example can be speech recognition, which can translate the speech into words.

Deep learning is a subset of machine learning that utilizes multi-layer artificial neural networks and computation-intensive training. Deep learning can learn the relations within data from experience. It is the technology behind autonomous cars, virtual assistants, and facial recognition, and its architectures are capable of more complex work.

There must be at least one APA-formatted reference (and APA in-text citation) to support the thoughts in the post. Do not use direct quotes; rather, rephrase the author's words and continue to use in-text citations.

RUBRIC

Excellent Quality (95-100%)

Introduction (45-41 points): The background and significance of the problem and a clear statement of the research purpose are provided. The search history is mentioned.

Literature Support (91-84 points): The background and significance of the problem and a clear statement of the research purpose are provided. The search history is mentioned.

Methodology (58-53 points): Content is well organized, with headings for each slide and bulleted lists to group related material as needed. The use of font, color, graphics, effects, etc. to enhance readability and presentation content is excellent. The length requirement of 10 slides/pages or fewer is met.

Average Score (50-85%)

Introduction (40-38 points): More depth/detail for the background and significance is needed, or the research detail is not clear. No search history information is provided.

Literature Support (83-76 points): Review of relevant theoretical literature is evident, but there is little integration of studies into concepts related to the problem. The review is partially focused and organized. Supporting and opposing research are included. A summary of the information presented is included. The conclusion may not contain a biblical integration.

Methodology (52-49 points): Content is somewhat organized, but no structure is apparent. The use of font, color, graphics, effects, etc. is occasionally detracting from the presentation content. Length requirements may not be met.

Poor Quality (0-45%)

Introduction (37-1 points): The background and/or significance are missing. No search history information is provided.

Literature Support (75-1 points): Review of relevant theoretical literature is evident, but there is no integration of studies into concepts related to the problem. The review is partially focused and organized. Supporting and opposing research are not included in the summary of information presented. The conclusion does not contain a biblical integration.

Methodology (48-1 points): There is no clear or logical organizational structure. No logical sequence is apparent. The use of font, color, graphics, effects, etc. is often detracting from the presentation content. Length requirements may not be met.
