Big Data Product Watch 4/1/16: Free SQL Server Dev Edition, Stream Processing, Hybrid Queries, More

No fooling here, just a roundup of news from various vendors including Microsoft, MapR Technologies, AtScale, MemSQL, Looker, Ryft, Dataiku and more who announced Big Data-related products this week.

  • Microsoft yesterday announced its SQL Server 2014 Developer Edition is now free for Visual Studio Dev Essentials members.

    "We are making this change so that all developers can leverage the capabilities that SQL Server 2014 has to offer for their data solution, and this is another step in making SQL Server more accessible," the company said in a blog post yesterday. "SQL Server Developer Edition is for development and testing only, and not for production environments or for use with production data."

    Visual Studio Dev Essentials is a free developer program that provides tools, cloud-based services, training and support.

    Microsoft said the Developer Edition offers all the features of the Enterprise Edition, which the company touts for its in-memory enhancements and hybrid on-premises/cloud deployment capabilities. The upcoming SQL Server 2016 Developer Edition will also be free when it's released.

    "SQL Server Developer Edition does not include a licensed OS, such as a license for Windows 10 included on a new laptop," the company said. "90 to 180 day free trials of Windows and Windows Server are available on the TechNet Eval Center."

  • MapR announced a new Stream Processing Quick Start Solution based on the company's Converged Data Platform to facilitate real-time analytics of Internet of Things (IoT) applications.

    It includes software, professional services and certification components. Software comes in the form of a one-year subscription for six nodes of MapR and a one-year subscription for the Apache Spark stack. Professional services include six weeks of consultation with a MapR Big Data and streaming analytics expert. Certification includes three MapR Hadoop certification credits.

    "The Stream Processing Quick Start Solution is designed to accelerate the technological, educational and operational capabilities of customers who wish to leverage MapR advancements in event streaming and real-time analytics," the company said. "The solution can be tailored to specific customer requirements with the help of world-class data scientists and engineers from MapR Professional Services."

    MapR also announced its new Converge Community resource for Hadoop and Spark professionals. It offers "public online assets that enable Hadoop and Spark developers, data analysts and administrators to engage in technical discussions and share expertise that furthers the advancement of the Big Data community as a whole," the company said.

  • AtScale, which provides a self-service Business Intelligence (BI) platform, on Wednesday unveiled the AtScale Intelligence Platform 4.0. The company said the new offering "introduces unmatched innovations related to enterprise security and performance. It also introduces a new patent-pending innovation: the industry's first Hybrid Query Service for BI on Hadoop. This innovation enables enterprises to query Hadoop at top speed, from any BI tool in MDX and SQL mode, natively."

    The company said its new Hybrid Query Service simplifies the current complicated landscape of interactive BI queries, wherein many different BI tools require their own language to query Big Data sources.

    "This makes it complicated for IT to support the many use cases spread across a company's departments and their tools of preference, should they require MDX or SQL," the company said.

    "Enterprises can now support MDX and SQL natively, without having to download new clients or custom drivers to end-user machines," AtScale continued. "AtScale 4.0 is the industry's only platform that can support these standards and whose service can support advanced analytics functionality such as semi-additive metrics or many-to-many relationships."
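
    AtScale cites semi-additive metrics as an example of advanced analytics functionality. A semi-additive measure, such as an account balance, sums correctly across some dimensions (accounts) but not others (time, where the period's closing value is typically used instead). A minimal sketch in plain Python, with made-up data and field names not taken from AtScale, illustrates the distinction:

```python
# Semi-additive measure sketch: balances sum across accounts,
# but across time you take the last day's value, not a sum.
# Data and field names are illustrative, not from AtScale.

balances = [
    # (day, account, balance)
    ("2016-03-30", "A", 100), ("2016-03-30", "B", 50),
    ("2016-03-31", "A", 120), ("2016-03-31", "B", 40),
]

def total_on(day):
    """Additive across accounts: sum all balances for one day."""
    return sum(b for d, a, b in balances if d == day)

def closing_total(days):
    """Semi-additive across time: take the latest day's total."""
    return total_on(max(days))

days = sorted({d for d, _, _ in balances})
print(total_on("2016-03-31"))   # 160
print(closing_total(days))      # 160, not the naive 310 from summing every row
```

    Handling such measures in the query layer is what distinguishes an OLAP-aware service from a plain SQL pass-through, since a naive SUM over all rows would double-count across time.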

  • MemSQL touted improved analytics performance in its new MemSQL 5 release across database, data warehouse, and streaming workloads. "The in-memory capabilities of MemSQL transform BI data access by enabling interactivity on live data for Tableau, Zoomdata, Looker and other BI solutions, in ways that traditional databases simply cannot," the company said.

    In addition to the analytics performance boost, new features and capabilities listed by MemSQL include: new LLVM-based code generation architecture; Streamliner, which provides one-click deployment of integrated Apache Spark, letting developers use a graphical UI to create real-time data pipelines and eliminate batch ETL; and Pluggable Authentication Module (PAM)-based authentication, which provides tools such as Kerberos for advanced security.

    "MemSQL shares our mission to help people see and understand their data," said Tableau exec Dan Kogan. "MemSQL 5 will help to bring advanced query performance and real-time analytics to our customers."

  • Looker announced a partnership with IBM to provide a suite of Looker Blocks, which are tools designed to help developers simplify and customize data analysis for any organization using IBM's Cloud Data Services.

    "Looker Blocks are templates of common analytic functions which can be assembled and customized to address data needs both company-wide and to any industry-specific requirement," the company said. "The Looker Block for IBM completes the vision of IBM's Simple Data Pipe app, utilizing Looker to quickly transform data that has been moved into dashDB using the Simple Data Pipe app."

    The company said the reusable and modular Looker Blocks provide a new approach to enterprise analytics by establishing a pattern language for such workloads. That pattern helps developers, business users and data analysts rapidly create data platforms based on centralized business logic agreed upon by business teams. The new product provides code templates developed by Looker data analysts that users can leverage to create data models for the Looker platform.

    "IBM is committed to providing developers and builders with the right tools for the job," said IBM exec Derek Schoettle. "By working with Looker and co-developing projects like Simple Data Pipe, we're delivering on a vision of an open ecosystem where developers can use the tools they want with the support and experience of IBM."

  • Ryft announced the Ryft ONE Cluster, designed to enable organizations to implement petabyte-scale Big Data analytics by modernizing datacenter architectures. It builds on Ryft ONE, a proprietary, small-footprint box combining new-age technologies and techniques designed for speedy analytics, announced last month.

    It leverages a hybrid architecture approach that weds massively parallel field-programmable gate array (FPGA) integrated circuits with x86 CPU computing resources to modernize datacenter infrastructure. This approach, combined with a business-centric open API, helps organizations streamline and speed up Big Data ecosystems while reducing costs, the company said.

    "Game-changing performance, reliability and efficiency are absolutely vital to service providers that are competing to provide differentiated services to their customers," said company CEO Des Wilson. "Ryft's hybrid FPGA/x86 compute engine delivers the highest performance on the market and the ability to easily scale to petabyte levels, dramatically reducing the time, effort and cost of performing data-intensive tasks. Being able to do more with more types of data in a smaller hardware appliance drastically reduces the footprint of the data analytics cluster, which brings significant cost savings and operational improvements at a very competitive time."

  • Dataiku announced version 3 of its Dataiku Data Science Studio (DSS), which features an integrated visual environment to help developers and other users design and create batch or real-time data science applications.

    "The new integrated visual environment in DSS 3 includes a dedicated production node feature that solves the problem of development environments typically being disconnected and incompatible with production environments," the company said. "Users can now deploy, test, and roll-back (if needed) multiple instances of their data applications in all cycles of the data engineering process, from development to deployment. This allows the data team to independently design, build, run and continuously improve their data products even while they are running in production."
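
    The deploy/test/roll-back lifecycle Dataiku describes can be sketched as a simple version stack: production keeps prior releases so a bad deployment can be reverted instantly. This toy sketch is purely illustrative and is not Dataiku's API:

```python
# Toy sketch of a deploy / roll-back pattern: keep a history of
# deployed versions so production can revert to the previous one.
# Not Dataiku's API; class and method names are hypothetical.

class Deployment:
    def __init__(self):
        self.history = []          # stack of deployed versions

    def deploy(self, version):
        """Push a new version into production."""
        self.history.append(version)
        return self.current()

    def rollback(self):
        """Revert to the previous version, if one exists."""
        if len(self.history) > 1:  # never roll back past the first release
            self.history.pop()
        return self.current()

    def current(self):
        return self.history[-1] if self.history else None

d = Deployment()
d.deploy("v1"); d.deploy("v2")
print(d.current())    # v2
print(d.rollback())   # v1
```

    The key design point is that rollback is a constant-time pointer move rather than a rebuild, which is what makes reverting safe to do while data products are running in production.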

    The company said its offering benefits "DataOps," a new approach that it said is emerging in data-driven organizations, similar to the more established DevOps approach. While the latter meshes development, operations and business teams, DataOps is tasked with improving coordination between data app development and operations teams.

    New features in the product listed by the company include: real-time model deployment; advanced data metrics monitoring; enterprise integration scenarios; and resource usage monitoring.

    "In addition, Dataiku DSS 3 introduces new features to simplify the collaboration and documentation of data projects including Enriched Version Control, Team Activity Dashboards, User Defined Data Types, Metadata management, and Data Discovery," the company said.

  • The ODPi, a Linux Foundation project formed by Big Data players to develop an industry standard for Apache Hadoop and related technologies -- or an "open ecosystem of Big Data" -- announced its first release, a Runtime Specification hosted on GitHub.

    It was developed by the ODPi Technical Working Group, which says its goals are:

    1. For consumers: ability to run any "ODPi-compatible" software on any "ODPi-compliant" platform and have it work.
    2. For ISVs: compatibility guidelines that allow them to "test once, run everywhere."
    3. For Hadoop platform providers: compliance guidelines that enable ODPi-compatible software to run successfully on their solutions, while still allowing providers to patch their customers' deployments expeditiously in emergencies.

    "The ODPi Runtime specification and test suite, available today, ensure applications can work across multiple Hadoop distributions," the group said. "This Runtime spec can be used by ISVs to properly build their software, and will be used as the basis of a test suite that can be used by ODPi-compliant platform providers to test compliance. The test suite includes a minimum set of basic capabilities that the platform must support in order to claim ODPi compatibility."
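
    The "test once, run everywhere" compliance idea reduces to checking that a platform covers every capability the spec requires. A minimal sketch, with capability names that are hypothetical and not taken from the actual ODPi Runtime Specification:

```python
# Toy sketch of a compliance check: a platform passes only if it
# supports every capability the specification requires.
# Capability names below are hypothetical, not from the ODPi spec.

REQUIRED = {"hdfs", "yarn", "mapreduce"}

def is_compliant(platform_capabilities):
    """True if the platform covers every required capability."""
    return REQUIRED <= set(platform_capabilities)

print(is_compliant(["hdfs", "yarn", "mapreduce", "spark"]))  # True
print(is_compliant(["hdfs", "yarn"]))                        # False
```

    An ISV can then test against the required set once, rather than against each vendor's distribution individually.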

About the Author

David Ramel is an editor and writer for Converge360.