Google Eliminates Need for HDFS in its Big Data Cloud Platform
- By David Ramel
- January 16, 2014
Google this week announced a Google Cloud Storage Connector for Hadoop that aims to simplify Big Data analysis on its cloud platform by eliminating the need to use the oft-maligned Hadoop Distributed File System (HDFS).
The move is the latest in a continuing series of industry initiatives designed to simplify the use of Apache Hadoop, which has become synonymous with the burgeoning Big Data phenomenon but is perceived as extremely complex, requiring highly specialized skills and systems.
The open-source HDFS has been criticized for issues such as its reliance on batch processing, its lack of concurrency, and the complex file I/O workarounds required because it can't be mounted directly by an OS.
Google says the connector simplifies the use of Hadoop on its cloud platform by letting developers choose Google Cloud Storage as the default file system. The connector doesn't preclude the use of HDFS; instead, it gives developers the option of storing and accessing data directly in Google Cloud Storage without having to move it in and out of HDFS. Users can still store data in HDFS and access it with the connector.
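In practice, pointing Hadoop at Google Cloud Storage comes down to a few configuration properties. The following core-site.xml fragment is an illustrative sketch only, assuming the connector class name from Google's documentation; the bucket and project ID values are placeholders, and exact property names may vary by connector and Hadoop version:

    <!-- core-site.xml: register the gs:// scheme with the connector class -->
    <property>
      <name>fs.gs.impl</name>
      <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
    </property>
    <!-- Google Cloud project that owns the storage (placeholder value) -->
    <property>
      <name>fs.gs.project.id</name>
      <value>my-project-id</value>
    </property>
    <!-- Optional: make Cloud Storage, rather than HDFS, the default file system -->
    <property>
      <name>fs.defaultFS</name>
      <value>gs://my-bucket</value>
    </property>

With the last property set, unqualified paths in jobs resolve to Cloud Storage; leaving it out keeps HDFS as the default while still allowing explicit gs:// paths.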
Released as a preview, the connector "lets you run MapReduce jobs directly on data in Google Cloud Storage, and offers a number of benefits over choosing [HDFS] as your default file system," Google said in an overview of the connector.
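Once the connector is configured, jobs can read and write gs:// paths much as they would hdfs:// paths. As a hypothetical example (the jar path and bucket name are placeholders), the stock word-count job could run directly against Cloud Storage:

    hadoop jar hadoop-examples.jar wordcount \
        gs://my-bucket/input gs://my-bucket/output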
The benefits for developers include comparable performance with less management overhead, better availability and scalability, and faster startup, among others, Google said in a blog post announcing the new product.
The connector is available via setup scripts, as explained in a Hadoop on Google Cloud Platform tutorial.
About the Author
David Ramel is an editor and writer at Converge 360.