Date: 2016-08-21 15:09:53
Computing
Hadoop
Apache Software Foundation
Parallel computing
Cluster computing
Java platform
Apache Spark
MapReduce
Data-intensive computing
Apache Hadoop
Apache Hive
Scala

Spark: Cluster Computing with Working Sets
Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker, Ion Stoica
University of California, Berkeley
Abstract (excerpt): "… MapReduce/Dryad job, each job must reload the data …"
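
The excerpt points at the paper's core motivation: when an iterative or interactive workload is expressed as a chain of separate MapReduce/Dryad jobs, each job must reload its input from stable storage, while Spark instead keeps a working set cached in memory and reuses it across operations. Below is a minimal Scala sketch of that idea; the input path, the parsing logic, and the repeated sum are hypothetical placeholders chosen only to illustrate cache() and reuse, not code from the paper.

import org.apache.spark.{SparkConf, SparkContext}

object WorkingSetSketch {
  def main(args: Array[String]): Unit = {
    // Run locally on all cores; on a real cluster the master URL would differ.
    val conf = new SparkConf().setAppName("WorkingSetSketch").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Hypothetical input file of space-separated numbers. cache() keeps the
    // parsed dataset in memory, so the passes below do not re-read and
    // re-parse it from storage, which is the reload cost the abstract
    // attributes to chaining separate MapReduce/Dryad jobs.
    val points = sc.textFile("data/points.txt")
      .map(_.split(' ').map(_.toDouble))
      .cache()

    // Several passes over the same cached working set.
    for (i <- 1 to 5) {
      val total = points.map(_.sum).reduce(_ + _)
      println(s"pass $i: total = $total")
    }

    sc.stop()
  }
}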


Source URL: people.csail.mit.edu


File Size: 205.21 KB


Similar Documents

Llama: Leveraging Columnar Storage for Scalable Join Processing in the MapReduce Framework Yuting Lin National University of Singapore

DocID: 1rbeT

External Data Access and Indexing in AsterixDB Abdullah Alamoudi, Raman Grover, Michael J. Carey, Vinayak Borkar Dept. of Computer Science, University of California Irvine, CA, USA {alamouda, ramang, mjcarey, vb

DocID: 1r3KW

Spark: Cluster Computing with Working Sets Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker, Ion Stoica University of California, Berkeley Abstract MapReduce/Dryad job, each job must reload the data

DocID: 1qNve

Getting Started Table of contents: 1 Pig Setup, 2 Running Pig …

DocID: 1qIW4

Stream Processing in “Big Data” world Jags Ramnarayan, Chief Architect, GemFire Pivotal

DocID: 1qH2J