Computing / Concurrent computing / Distributed computing architecture / Apache Software Foundation / Parallel computing / Data management / Knowledge representation / Apache Spark / MapReduce / Workflow / Replication
Date: 2014-12-08 14:33:02

Hurricane: Distributed real-time data-processing
Jeffrey Warren, Vedha Sayyaparaju, Vikas Velagapudi, Zack Drach
{jtwarren, vedha, vvelaga, zdrach}@mit.edu
Demo link: https://www.youtube.com/watch?v=

Source URL: css.csail.mit.edu

File Size: 219.98 KB

Similar Documents

Databricks Security: A Primer. About Databricks: Databricks is a hosted end-to-end data platform powered by Apache® Spark™. Databricks makes it easy to …

DocID: 1uwO3

Dublin Apache Kafka Meetup, 30 August. The SMACK Stack: Spark*, Mesos*, Akka, Cassandra*, Kafka*. Elizabeth K. Joseph

DocID: 1tHnJ

Peeling the Onion: How Data Abstractions Help Build Big Data Apps. Andreas Neumann, @caskoid, November 2016. Cask, CDAP, Cask Hydrator and Cask Tracker are trademarks or registered trademarks of Cask Data. Apache Spark, Spark, …

DocID: 1tdJO

We explore the trade-offs of performing linear algebra in Apache Spark versus the traditional C and MPI approach by examining three widely-used matrix factorizations: NMF (for physical plausibility), PCA (for its ubiquity), … (A minimal Spark MLlib PCA sketch follows this list.)

DocID: 1sokL

Towards a Big Data Debugger in Apache Spark. Tyson Condie, UCLA. Tuning Spark Applications: commonly through visualization tools …

DocID: 1s5yq
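The linear-algebra snippet above (DocID 1sokL) contrasts matrix factorizations run in Apache Spark with the traditional C and MPI approach. As a rough illustration of what the Spark side of that comparison looks like, here is a minimal sketch that computes a PCA with Spark MLlib's DataFrame API; the object name, toy data, and local master are assumptions made for this example and are not taken from the cited paper.

```scala
// Minimal PCA sketch with Spark MLlib (illustrative only, not from the cited paper).
import org.apache.spark.ml.feature.PCA
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object PcaSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("pca-sketch")
      .master("local[*]")          // local run for illustration; a cluster master in practice
      .getOrCreate()

    // Toy 3x3 data; a real workload would load a large distributed matrix.
    val rows = Seq(
      Vectors.dense(1.0, 0.0, 3.0),
      Vectors.dense(2.0, 5.0, 1.0),
      Vectors.dense(4.0, 1.0, 0.0)
    )
    val df = spark.createDataFrame(rows.map(Tuple1.apply)).toDF("features")

    // Fit a PCA model keeping the top 2 principal components,
    // then project each row onto them.
    val model = new PCA()
      .setInputCol("features")
      .setOutputCol("pcaFeatures")
      .setK(2)
      .fit(df)

    model.transform(df).select("pcaFeatures").show(truncate = false)
    spark.stop()
  }
}
```

For RDD-based code, the older spark.mllib RowMatrix.computePrincipalComponents exposes the same computation; the snippet's question is how such Spark pipelines trade off against the traditional C and MPI approach.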