Hadoop is an open-source framework that lets you store and process big data in a
distributed environment across clusters of computers using simple programming
models. It is designed to scale up from single servers to thousands of
machines, each offering local computation and storage.
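To give a feel for those simple programming models, here is a minimal word-count sketch in the MapReduce style, written in Python the way a Hadoop Streaming job is usually structured: one function plays the mapper, one the reducer, and both exchange tab-separated text. The script layout and file names are illustrative assumptions, not part of Hadoop itself.

```python
# A minimal MapReduce word count in the Hadoop Streaming style.
# mapper() and reducer() speak tab-separated text on stdin/stdout, so the two
# functions could be split into separate mapper and reducer scripts and passed
# to the streaming jar via its -mapper and -reducer options.
import sys
from itertools import groupby

def mapper(lines):
    """Map: emit one "word<TAB>1" pair per word in the input."""
    for line in lines:
        for word in line.strip().split():
            yield f"{word}\t1"

def reducer(pairs):
    """Reduce: sum the counts for each word; input must be sorted by key."""
    keyed = (pair.rstrip("\n").split("\t", 1) for pair in pairs)
    for word, group in groupby(keyed, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

if __name__ == "__main__":
    # Local simulation of the map -> shuffle/sort -> reduce pipeline.
    mapped = sorted(mapper(sys.stdin))
    for result in reducer(mapped):
        print(result)
```

Run locally as `cat input.txt | python wordcount.py` (the file names are hypothetical); on a real cluster, Hadoop distributes the map and reduce work across machines and performs the intermediate sort itself.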
Due to the advent of new technologies, devices, and communication channels such as social networking sites, the amount of data produced by mankind is growing rapidly every year. The data produced from the beginning of time up to 2003 amounted to about 5 billion gigabytes; stacked up as disks, it could fill an entire football field. The same amount was created every two days in 2011, and every ten minutes in 2013, and the rate is still growing enormously. Roughly 90% of the world's data was generated in the last few years, yet although all this information is meaningful and can be useful when processed, much of it is neglected.
What is Big Data?
Big data is exactly what the name suggests: a collection of datasets so large that they cannot be processed using traditional computing techniques. Big data is not merely data; it has become a complete subject in its own right, involving various tools, techniques, and frameworks.
Big Data Technologies
Big data technologies are important because they enable more accurate analysis, which can lead to more concrete decision-making and, in turn, greater operational efficiency, cost reductions, and reduced risk for the business.

To harness the power of big data, you need an infrastructure that can manage and process huge volumes of structured and unstructured data in real time while protecting data privacy and security.
Various vendors, including Amazon, IBM, and Microsoft, offer technologies for handling big data. Broadly, these fall into the following two classes:
Operational Big Data
These include systems like MongoDB that provide operational capabilities for real-time, interactive workloads where data is primarily captured and stored.
NoSQL Big Data systems are designed
to take advantage of new cloud computing architectures that have emerged over
the past decade to allow massive computations to be run inexpensively and
efficiently. This makes operational big data workloads much easier to manage,
cheaper, and faster to implement.
Some NoSQL systems can provide
insights into patterns and trends based on real-time data with minimal coding
and without the need for data scientists and additional infrastructure.
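As a concrete illustration of that operational, read-write style of access, the short sketch below uses the PyMongo driver to capture a record and run a selective, low-latency query. The database, collection, and field names are hypothetical, and it assumes a MongoDB server listening on the default local port.

```python
# A sketch of an operational (read/write, selective) workload against MongoDB
# via PyMongo. Database, collection, and field names are hypothetical.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a local MongoDB server
orders = client["shop"]["orders"]                  # hypothetical database and collection

# Capture and store a new record as it arrives (a write).
orders.insert_one({
    "customer_id": 4021,
    "item": "keyboard",
    "amount": 49.99,
    "created_at": datetime.now(timezone.utc),
})

# A selective, interactive read: one customer's most recent orders.
for order in orders.find({"customer_id": 4021}).sort("created_at", -1).limit(10):
    print(order["item"], order["amount"])
```

This is the selective, millisecond-latency pattern that the comparison table further below attributes to operational systems.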
Analytical Big Data
This includes systems like
Massively Parallel Processing (MPP) database systems and MapReduce that provide
analytical capabilities for retrospective and complex analysis that may touch
most or all of the data.
MapReduce provides a new method of analyzing data that is complementary to the capabilities provided by SQL, and a system based on MapReduce can be scaled up from single servers to thousands of high- and low-end machines.
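To make the point about complementing SQL concrete, the sketch below expresses the equivalent of SELECT country, SUM(amount) FROM sales GROUP BY country as a map phase and a reduce phase in plain Python. The record layout is an illustrative assumption; on a real cluster, a MapReduce framework would run the two phases in parallel across many machines and handle the grouping (the "shuffle") between them.

```python
# A GROUP BY aggregation expressed in the MapReduce style.
# Equivalent in spirit to: SELECT country, SUM(amount) FROM sales GROUP BY country
# Record fields are illustrative assumptions.
from collections import defaultdict

sales = [
    {"country": "IN", "amount": 120.0},
    {"country": "US", "amount": 75.5},
    {"country": "IN", "amount": 40.0},
]

def map_phase(records):
    """Map: emit a (key, value) pair for every input record."""
    for record in records:
        yield record["country"], record["amount"]

def reduce_phase(pairs):
    """Reduce: combine all values that share a key (a framework's shuffle step
    would group them across machines; here a dict does the grouping locally)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: sum(values) for key, values in grouped.items()}

print(reduce_phase(map_phase(sales)))  # {'IN': 160.0, 'US': 75.5}
```

The same two-phase shape scales from a single machine to thousands because neither phase needs to see the whole dataset at once.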
These two classes of technology are
complementary and frequently deployed together.
Operational vs. Analytical Systems

|                | Operational      | Analytical              |
|----------------|------------------|-------------------------|
| Latency        | 1 ms - 100 ms    | 1 min - 100 min         |
| Concurrency    | 1,000 - 100,000  | 1 - 10                  |
| Access Pattern | Writes and Reads | Reads                   |
| Queries        | Selective        | Unselective             |
| Data Scope     | Operational      | Retrospective           |
| End User       | Customer         | Data Scientist          |
| Technology     | NoSQL            | MapReduce, MPP Database |
Big Data Challenges
The major challenges associated
with big data are as follows:
- Capturing data
- Curation
- Storage
- Searching
- Sharing
- Transfer
- Analysis
- Presentation
To meet these challenges, organizations normally rely on enterprise servers.