
Databricks, Inc. patent applications on 2025-06-05

From WikiPatents


Databricks, Inc.: 2 patent applications

Databricks, Inc. has applied for patents in the following classification areas: G06F9/44505 (Configuring for program initiating, e.g. using registry, configuration files; 1 application) and G06F21/6281 (Protecting access to data via a platform, e.g. using keys or access control rules; 1 application).

Keywords appearing in the patent application abstracts include: data, processing, container, service, trained, builds, customer, large, language, and model.


Patent Applications by Databricks, Inc.

20250181358. SELECTING OPTIMAL HARDWARE CONFIGURATIONS (Databricks, Inc.)

Abstract: A data processing service builds a container for a customer to run a trained large language model (LLM). The data processing service receives a trained LLM and a desired configuration from a user of a client device. Based on the desired configuration, the data processing service selects a hardware configuration and structures the weights of the trained LLM according to that hardware configuration. The data processing service generates a container image reflecting the hardware configuration, registers the container image to a container registry, and generates a container from the container image as well as an application programming interface (API) endpoint for the container. The data processing service deploys the trained LLM at the API endpoint using the container such that the trained LLM is accessible through API calls.
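The deployment flow the abstract describes (select hardware, build and register a container image, expose an API endpoint) can be sketched roughly as follows. This is a minimal illustration, not the patented implementation: the hardware profiles, image-tag scheme, endpoint URL format, and function names are all hypothetical, and the in-memory dict stands in for a real container registry.

```python
from dataclasses import dataclass

# Hypothetical hardware profiles; a real service would select from its
# own catalog of GPU/replica configurations.
HARDWARE_PROFILES = {
    "low_latency": {"gpu": "A100", "replicas": 4},
    "low_cost": {"gpu": "T4", "replicas": 1},
}

@dataclass
class Deployment:
    image_tag: str
    endpoint_url: str

def deploy_llm(model_name: str, desired_config: str, registry: dict) -> Deployment:
    """Select a hardware configuration, register a container image for it,
    and return an API endpoint through which the model is served."""
    hw = HARDWARE_PROFILES[desired_config]   # select hardware from desired config
    # Structuring the model weights for the chosen hardware (e.g. sharding
    # across replicas) is represented here only by the image tag.
    image_tag = f"{model_name}-{hw['gpu']}-x{hw['replicas']}"
    registry[image_tag] = hw                 # register image in the container registry
    endpoint_url = f"/serving-endpoints/{model_name}/invocations"
    return Deployment(image_tag, endpoint_url)

registry = {}
dep = deploy_llm("my-llm", "low_cost", registry)
```

After this call, `dep.endpoint_url` is the path through which API calls would reach the deployed model, and the registry holds one image entry per deployment.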

20250181772. Clean Room Generation for Data Collaboration by Executing a Clean Room Task in a Data Processing Pipeline (Databricks, Inc.)

Abstract: A data processing service facilitates the creation and processing of data processing pipelines that execute jobs defined over a set of tasks in sequence, with data dependencies between tasks such that the output of one task is used as input for a subsequent task. In various embodiments, the set of tasks includes at least one cleanroom task that is executed in a cleanroom station and at least one non-cleanroom task executed in an execution environment of a user, where each task is configured to read one or more input datasets and transform them into one or more output datasets.
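The pipeline structure in the abstract (a sequence of tasks where each task transforms input datasets into output datasets that feed the next task, with some tasks flagged for cleanroom execution) can be sketched as below. The `Task` class, the lambda transforms, and the execution loop are illustrative assumptions, not the claimed system.

```python
from typing import Callable

class Task:
    """One pipeline step: transforms input datasets into output datasets.
    `cleanroom` marks whether it would run in a cleanroom station rather
    than the user's own execution environment."""
    def __init__(self, name: str, fn: Callable[[list], list], cleanroom: bool = False):
        self.name = name
        self.fn = fn
        self.cleanroom = cleanroom

def run_pipeline(tasks: list, initial_datasets: list) -> list:
    """Execute tasks in sequence; each task's output datasets become the
    next task's input datasets (the data dependency between steps)."""
    datasets = initial_datasets
    for task in tasks:
        datasets = task.fn(datasets)
    return datasets

pipeline = [
    # Non-cleanroom task: runs in the user's environment.
    Task("normalize", lambda ds: [[x / 10 for x in d] for d in ds]),
    # Cleanroom task: runs in a cleanroom station.
    Task("join", lambda ds: [sum(ds, [])], cleanroom=True),
]
result = run_pipeline(pipeline, [[10, 20], [30]])
```

Here `normalize` scales each dataset, and its output is consumed by the cleanroom `join` task, mirroring the output-to-input chaining the abstract describes.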
