Spark: Cluster Computing with Working Sets
Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker, Ion Stoica
University of California, Berkeley

Abstract (excerpt): "... MapReduce/Dryad job, each job must reload the data ..."

Topics: Computing, Hadoop, Apache Software Foundation, Parallel computing, Cluster computing, Java platform, Apache Spark, MapReduce, Data-intensive computing, Apache Hadoop, Apache Hive, Scala

Source URL: people.csail.mit.edu
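The abstract fragment points at the paper's core motivation: in a chain of MapReduce or Dryad jobs, each job must reload its input from stable storage, whereas Spark lets a program keep a working set in memory across operations. As a minimal sketch of that reuse pattern in Scala, using the modern Spark API rather than the paper's original 2010 syntax; the object name, app name, and HDFS path are hypothetical:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WorkingSetExample {
  def main(args: Array[String]): Unit = {
    // Local mode for a self-contained run; a real cluster deployment would differ.
    val conf = new SparkConf().setAppName("WorkingSetExample").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    // Build a working set: error lines from a log file. cache() asks Spark
    // to keep the filtered RDD in memory after it is first computed.
    val errors = sc.textFile("hdfs://namenode/logs/app.log") // hypothetical path
      .filter(_.contains("ERROR"))
      .cache()

    // Both actions below reuse the cached working set; a chain of
    // MapReduce/Dryad jobs would instead reload the input for each job.
    val total    = errors.count()
    val timeouts = errors.filter(_.contains("timeout")).count()

    println(s"errors=$total, timeouts=$timeouts")
    sc.stop()
  }
}
```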