Databricks adds new ML modeling tool to lakehouse platform

Databricks on Tuesday launched Model Serving, a tool that enables users to deploy machine learning models as REST APIs on the vendor’s lakehouse platform and eliminates the need to manage a serving infrastructure.

Founded in 2013 and based in San Francisco, Databricks is a data lakehouse vendor whose platform combines the structured data storage capabilities of a data warehouse with the unstructured data storage capabilities of a data lake.

The lakehouse structure is designed to let data teams quickly and easily access data and provide users with the most current data for data science, machine learning (ML) and analytics projects.

Most recently, Databricks unveiled an extension for Visual Studio Code. The feature lets developers build analytics, augmented intelligence and ML models with Microsoft's Visual Studio Code, an integrated development environment, before moving them into Databricks' lakehouse architecture.

New capabilities

Before the launch of Model Serving, users often had to rely on batch processes to move data into a cache within a data warehouse, where they could build and train a model before moving the model back into an application for analysis.

Now, using the new capability, which is generally available, Databricks customers can build and deploy real-time ML applications, such as customer service chatbots and personalized recommendation systems, more simply.

By enabling users to deploy the models to the Databricks lakehouse as REST APIs, the vendor said it is eliminating the need for customers to build and manage complex machine learning infrastructures made up of a slew of tools from various vendors.
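To make the REST API deployment model concrete, the sketch below shows what querying a served model over HTTP might look like. This is an illustrative assumption, not official Databricks usage: the workspace URL, endpoint name, token, and input fields are all hypothetical placeholders, and the exact request schema depends on the model's signature.

```python
import json

# Hypothetical values -- replace with a real workspace URL and endpoint name.
WORKSPACE_URL = "https://example-workspace.cloud.databricks.com"
ENDPOINT_NAME = "churn-model"

def build_scoring_request(records, token):
    """Assemble the URL, headers, and JSON body for a REST scoring call.

    A serving endpoint accepts a JSON payload of input records over HTTPS
    and returns predictions, so any application that can issue an HTTP
    request can consume the model without extra serving infrastructure.
    """
    url = f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations"
    headers = {
        "Authorization": f"Bearer {token}",  # workspace access token
        "Content-Type": "application/json",
    }
    body = json.dumps({"dataframe_records": records})
    return url, headers, body

# Example payload: one customer record with hypothetical feature names.
url, headers, body = build_scoring_request(
    [{"tenure_months": 12, "monthly_spend": 79.5}], token="EXAMPLE-TOKEN"
)
print(url)
```

The request itself could then be sent with any HTTP client (for example, Python's `urllib.request` or `requests`), which is the point of the REST approach: the model behaves like any other web service.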

In addition, Model Serving comes with prepackaged integrations with other Databricks tools, including MLflow Model Registry for deployment and Unity Catalog for governance. Those integrations enable users to natively manage the entire ML process, including automating aspects of it, and remove the need to batch and cache models before moving them to the Databricks lakehouse.

Ultimately, simplification of the ML modeling and deployment process is the key benefit of Model Serving, according to Matt Aslett, an analyst at Ventana Research.

“Model Serving … expands Databricks’ capabilities beyond batch serving and complements the company’s existing functionality by providing integration with its feature store, MLOps and governance capabilities,” he said. “The combination of this functionality should enable customers to simplify model development and deployment.”
