Patent Applications by Databricks, Inc. on June 5, 2025
Databricks, Inc.: 2 patent applications
Databricks, Inc. has applied for patents under the CPC classifications G06F9/44505 (Configuring for program initiating, e.g. using registry, configuration files; 1 application) and G06F21/6281 (Protecting access to data via a platform, e.g. using keys or access control rules; 1 application).
Keywords appearing in the patent application abstracts include: data, processing, container, service, trained, builds, customer, large, language, and model.
Top Inventors:
- Ahmed Bilal of Sammamish, WA, US (1 patent)
- Steven Yikun Chen of San Francisco, CA, US (1 patent)
- Bruce Laurent Fontaine of Sunnyvale, CA, US (1 patent)
- Daya Shanker Khudia of Fremont, CA, US (1 patent)
- Chenran Li of Mountain View, CA, US (1 patent)
Patent Applications by Databricks, Inc.
20250181358. SELECTING OPTIMAL HARDWARE CONFIGURATIONS (Databricks, Inc.)
Abstract: A data processing service builds a container for a customer to run a trained large language model (LLM). The data processing service receives a trained LLM and a desired configuration from a user of a client device. Based on the desired configuration, the data processing service selects a hardware configuration and structures weights of the trained LLM based on the hardware configuration. The data processing service generates a container image reflecting the hardware configuration, registers the container image to a container registry, and generates a container from the container image as well as an application programming interface (API) endpoint for the container. The data processing service deploys the trained LLM in the API endpoint using the container such that the trained LLM is accessible through API calls.
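For readers unfamiliar with the workflow the abstract describes, the following Python sketch walks through the named steps: selecting a hardware configuration from a desired configuration, structuring the model weights for it, building and registering a container image, and exposing the deployed model behind an API endpoint. Every name in it (DesiredConfig, HardwareConfig, select_hardware, the registry and endpoint URLs, and the selection heuristic) is a hypothetical illustration, not a Databricks API or the method actually claimed in the application.

```python
# Minimal, hypothetical sketch of the deployment flow described in the abstract.
# All names and the selection heuristic are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class DesiredConfig:
    max_latency_ms: int  # user-supplied serving constraint (assumed field)
    precision: str       # e.g. "fp16" or "int8" (assumed field)


@dataclass
class HardwareConfig:
    gpu_type: str
    gpu_count: int


def select_hardware(cfg: DesiredConfig) -> HardwareConfig:
    """Pick a hardware configuration from the desired configuration (illustrative heuristic)."""
    if cfg.precision == "int8" or cfg.max_latency_ms > 500:
        return HardwareConfig(gpu_type="A10", gpu_count=1)
    return HardwareConfig(gpu_type="A100", gpu_count=2)


def structure_weights(model_path: str, hw: HardwareConfig) -> str:
    """Re-layout (e.g. shard) the trained LLM's weights for the chosen hardware."""
    return f"{model_path}.sharded-{hw.gpu_count}x{hw.gpu_type}"


def build_and_register_image(weights: str, hw: HardwareConfig, registry: str) -> str:
    """Produce a container image reflecting the hardware configuration and register it."""
    image = f"{registry}/llm-serving:{hw.gpu_type.lower()}-{hw.gpu_count}"
    print(f"registering {image} (weights: {weights})")
    return image


def deploy_endpoint(image: str) -> str:
    """Launch a container from the image and expose an API endpoint for the LLM."""
    endpoint = f"https://example.invalid/serving/{image.split(':')[-1]}"
    print(f"deployed container from {image} at {endpoint}")
    return endpoint


if __name__ == "__main__":
    desired = DesiredConfig(max_latency_ms=200, precision="fp16")
    hw = select_hardware(desired)
    weights = structure_weights("models/customer-llm", hw)
    image = build_and_register_image(weights, hw, registry="registry.example.invalid")
    endpoint = deploy_endpoint(image)
    # The trained LLM would now be reachable through API calls to `endpoint`.
```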
Abstract: A data processing service facilitates the creation and processing of data processing pipelines that process data processing jobs defined with respect to a set of tasks in a sequence and with data dependencies associated with each separate task, such that the output from one task is used as input for a subsequent task. In various embodiments, the set of tasks includes at least one cleanroom task that is executed in a cleanroom station and at least one non-cleanroom task executed in an execution environment of a user, where each task is configured to read one or more input datasets and transform the one or more input datasets into one or more output datasets.
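The pipeline model in this abstract can likewise be illustrated with a short Python sketch: a sequence of tasks in which each task's output datasets become the next task's input datasets, with some tasks flagged to run in a cleanroom station and the rest in the user's own execution environment. Task, run_pipeline, and the clean_room flag are hypothetical names used only for illustration, not the claimed implementation.

```python
# Minimal, hypothetical sketch of the pipeline model described in the abstract.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    name: str
    transform: Callable[[list], list]  # reads input datasets, produces output datasets
    clean_room: bool = False           # True -> executed in a cleanroom station (assumed flag)


def run_pipeline(tasks: list[Task], initial_datasets: list) -> list:
    """Run tasks in sequence; each task's output datasets feed the next task."""
    datasets = initial_datasets
    for task in tasks:
        where = "cleanroom station" if task.clean_room else "user execution environment"
        print(f"running {task.name} in {where}")
        datasets = task.transform(datasets)
    return datasets


if __name__ == "__main__":
    pipeline = [
        Task("prepare", transform=lambda ds: [sorted(ds[0])]),
        # The joint analysis step runs in a cleanroom so raw inputs stay isolated.
        Task("joint_analysis", transform=lambda ds: [sum(ds[0])], clean_room=True),
        Task("publish", transform=lambda ds: [f"result={ds[0]}"]),
    ]
    print(run_pipeline(pipeline, initial_datasets=[[3, 1, 2]]))
```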