Patent Application 17898797 - SYSTEM AND METHOD IMPLEMENTING A TASK SCHEDULER - Rejection
Title: SYSTEM AND METHOD IMPLEMENTING A TASK SCHEDULER FOR A RESOURCE CONSTRAINED COMPUTATION SYSTEM
Application Information
- Invention Title: SYSTEM AND METHOD IMPLEMENTING A TASK SCHEDULER FOR A RESOURCE CONSTRAINED COMPUTATION SYSTEM
- Application Number: 17898797
- Submission Date: 2025-05-14T00:00:00.000Z
- Effective Filing Date: 2022-08-30T00:00:00.000Z
- Filing Date: 2022-08-30T00:00:00.000Z
- National Class: 718
- National Sub-Class: 102000
- Examiner Employee Number: 100695
- Art Unit: 2194
- Tech Center: 2100
Rejection Summary
- 102 Rejections: 0
- 103 Rejections: 8
Cited Patents
No patents were cited in this rejection.
Office Action Text
DETAILED ACTION 1. Claims 1-18 are pending. Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 2. Claims 1, 3, 10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US 20150268992 A1) in further view of Manda et al. (US 20190340026 A1) in further view of DE et al. (US 20150356518 A1). 3. Regarding claim 1, Fan discloses a method for scheduling tasks ([0007]: “A method is provided for runtime handling of task dependencies. The method includes… scheduling the subset of tasks for execution by a processor in a manner that preserves the task dependencies”) in a resource constrained system, the tasks being initialized by defined task dependencies, priority weights, execution parameters, and power estimates, the method comprising: building an acyclic graph based on the task dependencies ([0005]: “The dependence graph can be a directed acyclic graph (DAG) generated at runtime”; [0006]: “The compiled code comprising a number of tasks compiled from corresponding task constructs of a source code, at least a subset of the tasks having associated tags that indicate data dependencies reflecting respective task dependencies of the source code; and a scheduler. The scheduler operates to: generate a dependence graph dynamically at runtime according to the subset of tasks of the compiled code and their associated tags (Building an acyclic graph based on the task dependencies)”); iteratively traversing the acyclic graph from a root node to leaf nodes to determine a set of tasks and an order of execution ([0006]: “and schedule the subset of tasks for execution by the processor in a manner that preserves the task dependencies (Order of execution) of the source code by traversing the dependence graph (Traversing the acyclic graph to determine a set of tasks and order of execution).”; [0029]: “The scheduler 135 can effectively generate a set of scheduled tasks 145 for execution according to a topological traversal of the dependence graph (A topological traversal begins at nodes with no incoming edges (E.g. the root node) and traverses through all the dependencies (E.g. to the leaf node).)140 in a manner that preserves task dependencies substantially as defined by source code 110 ”; Fig. 
3A depicts the traversal starting from a root node to leaf nodes); selecting sets of tasks for execution ([0005]: “The runtime engine can schedule the tasks for execution according to a topological traversal of the dependence graph in a manner that preserves task dependencies (Traversing the dependence graph to determine the tasks to be scheduled which preserves task dependencies involves selecting a set of tasks to be scheduled, i.e. for execution. The selection of tasks happens during the traversal.)”); submitting the selected sets of tasks to core processors ([0007]: “scheduling the subset of tasks for execution by a processor (Scheduling tasks to be executed by a processor corresponds to submitting the selected tasks to core processors) in a manner that preserves the task dependencies”); and executing the submitted tasks ([0045]: “Tasks are effectively executed in order of their dependencies, and each corresponding task 205 is not invoked in any edges to nodes that will be added to the DAG in the future”). However, Fan does not explicitly teach the method comprising: determining tasks to be run based on time elapsed since last running of the task; merging sets of tasks based on common sub-tasks. But Manda et al. teaches determining tasks to be run based on time elapsed since last running of the task ([0051]: “By maintaining the mapping of evaluation intervals 142 to a respective last executed time, a suitable time for a subsequent execution of a recurring task may be determined (Determining tasks to be run). That is, for example, as shown in the execution table 150 of FIG. 7, after running a recurring task with an evaluation interval 142 of five minutes at 10:00, a subsequent execution of the recurring task may be performed at 10:05 based on the last executed time (based on time elapsed since last running of the task) of the recurring task (e.g., 10:00) and the evaluation interval 142 (e.g., five minutes) of the recurring task”.) and DE et al. teaches merging sets of tasks based on common sub-tasks ([0024]: “An ‘aggregate task’ is a task aggregation (A merge of a set of tasks) that includes a plurality of similar or identical individual tasks (common sub-tasks), which are identified as ‘task parts’ or ‘aggregate parts’. An aggregate task is different from a ‘summary task’… A summary task generally has several child tasks or subtasks. However, an aggregate task is a task itself, rather than having child tasks, and the homogenous, or otherwise affiliated, aggregate parts (i.e., includes similar or identical parts (common sub-tasks))”; [0020]-[0021]: “Similar dependency relationships often exist (E.g. common sub-tasks) between aggregate parts of two aggregate tasks… it is also desirable to be able to partition a large aggregate task into two or more smaller aggregate tasks, or merge two or more similar (E.g. shares common sub-tasks) aggregate tasks into a larger aggregate task (Merging sets of tasks based on common sub-tasks), and still be able to keep and/or reuse the already defined dependency relationships.”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Fan with the teachings of Manda et al. to increase the efficiency of resource usage and/or limit the use of resources in a computing system as taught by Manda et al. ([0005]-[0006]: “Increasing the frequency of executing the task may use resources, such as logic resources and time, of the computing system that may otherwise be dedicated to performing additional operations. Accordingly, efficiently scheduling the recurring task may involve determining a periodicity for the task that is suitable to perform the operation while limiting the use of resources in the computing system.”). Moreover, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the invention of Fan with the teachings of DE et al. in order to execute dependent tasks and sub-tasks in a parallel manner as taught by DE et al. ([0013]: “The aggregate task system can further execute dependent sequential tasks in a parallel manner. The aggregate task system can accomplish this by converting tasks into aggregate tasks, and by converting a task dependency between the two tasks into an aggregate part dependency between the two aggregate tasks.”).
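For illustration only, a minimal Python sketch of the combined mechanism mapped to claim 1 above (hypothetical names; this is not code from Fan, Manda et al., or DE et al., and it assumes every declared dependency is itself one of the listed tasks): it builds the acyclic graph from declared dependencies, derives an execution order by topological traversal, gates each task on the time elapsed since its last run, and merges tasks that share common sub-tasks.

```python
# Illustrative sketch only; hypothetical names, not code from Fan, Manda et al., or DE et al.
from collections import defaultdict, deque
import time


class Task:
    def __init__(self, name, deps=(), interval_s=0.0, subtasks=()):
        self.name = name
        self.deps = set(deps)                 # task dependencies (edges of the acyclic graph)
        self.interval_s = interval_s          # desired time between runs
        self.subtasks = frozenset(subtasks)   # common sub-tasks used for merging
        self.last_run = None                  # timestamp of the last completed run


def topological_order(tasks):
    """Traverse the dependency graph from root nodes (no unmet dependencies) to leaves.
    Assumes every dependency named in t.deps is itself a Task in `tasks`."""
    by_name = {t.name: t for t in tasks}
    indegree = {t.name: len(t.deps) for t in tasks}
    dependents = defaultdict(list)
    for t in tasks:
        for dep in t.deps:
            dependents[dep].append(t.name)
    queue = deque(name for name, deg in indegree.items() if deg == 0)  # root nodes
    order = []
    while queue:
        name = queue.popleft()
        order.append(by_name[name])
        for child in dependents[name]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    if len(order) != len(tasks):
        raise ValueError("cycle detected: the dependency graph is not acyclic")
    return order


def is_due(task, now=None):
    """A task is due if it has never run or its interval has elapsed since its last run."""
    now = time.time() if now is None else now
    return task.last_run is None or (now - task.last_run) >= task.interval_s


def merge_by_common_subtasks(tasks):
    """Group tasks that share the same sub-tasks so the shared work is submitted once."""
    groups = defaultdict(list)
    for t in tasks:
        groups[t.subtasks].append(t)
    return list(groups.values())


def select_for_cycle(tasks):
    """Execution order by topological traversal, filtered to tasks whose interval has elapsed."""
    return merge_by_common_subtasks([t for t in topological_order(tasks) if is_due(t)])
```

Under these assumptions, tasks whose interval has not yet elapsed are simply skipped for the current cycle, and merged groups are submitted to the core processors as a unit.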
4. Regarding claim 3, Fan modified by Manda et al. and DE et al. teaches the method as set forth in claim 1. Manda et al. teaches wherein the determining comprises comparing the time elapsed between when a task was last run and the desired time interval between task runs to determine which tasks to run ([0034]: “A gap trigger may be a trigger in a database that may perform an operation under identification of a gap exceeding and/or meeting a threshold duration between data received at the database.”; [0036]: “information related to data detected and/or evaluated may include trigger levels each corresponding to a respective data gap duration and/or a minimum data gap duration between each of the trigger levels. For example, identification of a data gap in the evaluated data set (e.g., metric) having a first duration may be handled with a first response, as indicated by a first trigger level, while identification of a data gap having a second duration may be handled with a second response, as indicated by a second trigger level.”. The trigger level corresponds to a desired time interval between task runs (E.g. minimum or maximum time interval between task runs); [0062]: “Accordingly, at the respective evaluation interval 142, a mapped gap trigger 190 may be evaluated… evaluating the mapped gap trigger 190 may involve determining whether a data gap exists in the time-series metric stored in the time-series database 110. To do so, the mapped gap trigger 190 may determine the last (e.g., most recent) time stamp and/or time index of time-series metric stored in the time-series database 110 by for example, accessing the time-series metric table 122B. The mapped gap trigger 190 may then compare the time stamp to a current system time, for example, in the time-series database 110, to determine a duration of a data gap (Duration of a data gap corresponds to the time elapsed from when a task was last run), if one exists. Further, the mapped gap trigger 190 may compare the duration of an identified data gap to one or more of the trigger levels (A trigger level corresponds to a desired time interval, as different trigger levels correspond to different desired gap durations (i.e. desired time intervals)).” Comparing the data gap, i.e. the time elapsed since the last task run, to a trigger level corresponds to comparing the time elapsed since a task was last run to the desired time interval between task runs; [0006]: “For example, in some embodiments, the task may be scheduled based in part on an expected periodicity and/or duration of data and/or a change in data to be detected by the operation (E.g. a task being scheduled after a gap exceeding a threshold duration was identified)”. The tasks are determined to be scheduled by comparing elapsed time since a task was last run to a trigger level (desired time interval). A task will be scheduled if the elapsed time exceeds an evaluation interval as described above (E.g. 5 minutes between task runs). This corresponds to comparing the elapsed time between task runs to a desired time interval between task runs to determine which tasks to run).
5. Regarding claim 10, it is a system type claim with similar limitations as claim 1 above. Moreover, Fan discloses the additional limitations of at least one processor ([0019]: “In other embodiments, the compile-time processor 105 and the runtime processor 130 can be part of a single computational environment (e.g., a single computer).”) and at least one memory having code or instructions stored thereon ([0006]: “A memory having compiled code stored thereon, the compiled code comprising a number of tasks compiled from corresponding task constructs of a source code”). Therefore, it is rejected under the same rationale.
6. Regarding claim 12, it is a system type claim with similar limitations as claim 3 above. Therefore, it is rejected under the same rationale.
7. Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Fan in further view of Manda et al. in further view of DE et al., as applied to claims 1 and 10, in further view of Meng et al. (US 20130104140 A1).
8. Regarding claim 2, Fan modified by Manda et al. and DE et al. teaches the method as set forth in claim 1. However, Fan modified by Manda et al. and DE et al. do not explicitly teach wherein the determining of tasks to be run is based on availability of data and/or availability of a hardware resource, such as a sensor, to determine which tasks to run. But Meng et al. teaches scheduling tasks to be run based on availability of data and/or availability of a hardware resource ([0036]: “Fairness emphasizes that users share the limited resources according to pre-assigned weights (generalized processor sharing); promptitude characterizes response time, such as the shortest remaining time first discipline that minimizes the average response time; and efficiency involves improving resource utilization, for example, increasing data locality and reducing network traffic, page in/out rate, and I/O rate.”; [0070]: “When the cluster cannot run the suggested number of map tasks due to lack of resources, or can run more than that number in presence of abundant resources, these N(t) jobs may allocate map tasks according to job.mapMin as weights in some fair fashion”; [0075]: “As tasks are executed, the performance level and availability of certain resources may fluctuate… Embodiments provide for computing a reward based on available resources and required resources, such that an optimal job schedule may be realized which maximizes the total reward. Jobs may then be allocated in accordance with the optimal job schedule.
A reward function may also be configured to contain a function designed to measure resource availability and to determine the fitness of a task within the distributed computing environment, for example, on a particular server... This scheduling method may predict the available resources at a particular time in the future, compute the new reward, and compare it to the current reward to determine whether tasks should be launched or postponed until a future time (This corresponds to scheduling tasks to be run based on availability of resources. Availability of resources is interpreted to include hardware resources and software resources such as data (E.g. a file or access to a database).)”). Hence, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the invention of Fan modified by Manda et al. and DE et al. such that the determining of tasks to be run is based on availability of data and/or availability of a hardware resource, such as a sensor, to determine which tasks to run. One would be motivated to do so in order to improve resource utilization as taught by Meng et al. ([0036]: “Embodiments provide that the Coupling Scheduler may be arranged to optimize system performance with respect to different performance characteristics, such as fairness, promptitude, and efficiency... In general, fairness emphasizes that users share the limited resources according to pre-assigned weights (generalized processor sharing); promptitude characterizes response time, such as the shortest remaining time first discipline that minimizes the average response time; and efficiency involves improving resource utilization, for example, increasing data locality and reducing network traffic, page in/out rate, and I/O rate.”). 9. Regarding claim 11, it is a system type claim with similar limitations as claim 2 above. Therefore, it is rejected under the same rationale. 10. Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Fan in further view of Manda et al. in further view of DE et al., as applied to claims 1 and 10, in further view of Mostafa et al. (Mostafa et al. Coupled task scheduling with exact delays: Literature review and models, European Journal of Operational Research, Volume 282, Issue 1, 2020, Pages 19-39, ISSN 0377-2217, https://doi.org/10.1016/j.ejor.2019.08.045. (https://www.sciencedirect.com/science/)). 11. Regarding claim 4, Fan modified by Manda et al., and DE et al. teaches the method as set forth in claim 1 Manda et al. teaches wherein the task execution time is recorded in a log as a reference to determine the elapsed time from the last run of a particular task ([0050]: “The time-series database 110 may maintain a mapping of the last execution time of the recurring task (e.g., the gap trigger) at that evaluation interval.”; [0056]: “The last execution time 182 may be retrieved, for example, from the time-series database 110. In some embodiments, a table 122, such as the trigger log table 122C, may maintain information associated with the task, its executions, one or more evaluation intervals respectively associated with each of the executions of the task, and/or the like (Retrieving the execution time from a trigger log table means the execution time was recorded in a log prior.) 
so that the last execution time 182 of an evaluation interval 142 may be determined based on the information… the task may then be executed after the duration of the evaluation interval 142 has first elapsed (The execution time stored in the log is used as a reference to determine if the duration of the evaluation interval has elapsed (i.e. elapsed time from the last run of a particular task).)”; [0051]: “By maintaining the mapping of evaluation intervals 142 to a respective last executed time, a suitable time for a subsequent execution of a recurring task may be determined. That is, for example, as shown in the execution table 150 of FIG. 7, after running a recurring task with an evaluation interval 142 of five minutes at 10:00, a subsequent execution of the recurring task may be performed at 10:05 based on the last executed time of the recurring task (e.g., 10:00) and the evaluation interval 142 (e.g., five minutes) of the recurring task (The last execution time is retrieved from the log as a reference to determine the elapsed time from the last run to determine whether the time elapsed has exceeded the evaluation interval.)”). However, Manda et al. does not explicitly teach storing the task execution time only when the task was successfully completed. But Mostafa et al. teaches using a delay interval (E.g., the evaluation interval in Manda et al.) after a task was successfully completed to prevent the execution of the next task until the delay interval has elapsed to allow tasks to run sequentially (Section 2. Problem statement and notation, paragraph 1: “Both tasks have known durations and the second task of a job must be started with a delay after the completion of the first task.”; Section 4. The shop scheduling problem, paragraph 3: “The delay durations are strict, i.e., once the delay period is elapsed, the processing of the next task must immediately start… Fig. 6 shows a schematic of the m -machine flow shop coupled task scheduling problem”. Fig. 6 also shows the sequential execution of tasks.). Manda et al. similarly uses a time interval, i.e. a delay interval, to prevent the execution of a recurring task until the time interval has elapsed. Hence, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Fan modified by Manda et al., and DE et al. with the teachings of Mostafa et al. such that the task execution time is recorded in a log as a reference to determine the elapsed time from the last run of a particular task only when the task was successfully completed. One would be motivated to do so to enable tasks to be executed sequentially due to task dependencies (For example, a task depends on the result of another task and cannot execute until the task it depends on completes and the required data/results from the dependent task is received). One would be motivated to introduce delay periods after a task successfully completes in order to account for cases where some transport time is needed to receive the results of a dependent task (For example, a dependent task completes, but it takes 5 seconds for the results of the dependent task to be transferred and received), requiring task executions to be spaced out as taught by Mostafa et al. (Introduction, paragraph 1: “Shapiro (1980) introduced the coupled task scheduling problem to model a pulsed radar system, where a pulse of electromagnetic energy is used to track an object. 
The pulse is transmitted and its reflection is received after a period of time to measure the size and/or shape of the tracked object. The concept of exact delays between consecutive tasks of a job has been applied in scheduling problems in the shop setting as well”; 1.2 scope and classification, paragraphs 4-6: “The motivation for incorporating delays… stems from two reasons. First, it can be used to model a no-wait problem where there is a transport time… between two consecutive tasks of the same job (E.g. sub-tasks of a task). The time requirement can indeed be represented by the delay duration… Fondrevelle, Oulamara, Portmann, and Allahverdi (2009) discussed similar situations arising in the manufacturing process of thermic paper that involves chemical processing, where temporal constraints are imposed on the process. Likewise, such constraints must be satisfied in every processing step of a pharmaceutical plant, from raw materials and resource preparation to packaging. Second, it can be used to model the no-wait condition involving bottleneck machines. According to Mitten (1959) , the two-machine flow shop problem with delays can present the industrial environment involving two bottleneck machines, where the jobs must undergo a number of tasks on some intermediate machines between the two bottleneck machines. The elapsed time on those intermediate machines can be represented by the delay durations, where the delays are exact in the no-wait situation.”). The delay interval may be substituted for the time interval between recurring tasks of Manda et al. since both are time thresholds (i.e. desired time intervals) that when exceeded or met, will permit the execution/scheduling of a task. 12. Regarding claim 13, it is a system type claim with similar limitations as claim 4 above. Therefore, it is rejected under the same rationale. 13. Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Fan in further view of Manda et al. in further view of DE et al., as applied to claims 1 and 10, in further view of GitHub (Arkipenko: https://web.archive.org/web/20230526071703/https://github.com/arkhipenko/TaskScheduler/wiki/Implementation-scenarios-and-ideas/) in further view of GitHub (Agronholm: https://web.archive.org/web/20250423183504/https://github.com/agronholm/apscheduler/blob/01ebd0e4a501e6d0e01225cbe81f88bf0626a448/docs/userguide.rst) in further view of Medium (Serialising Functions in Python by Greyboi). 14. Regarding claim 5, Fan modified by Manda et al. and DE et al. teaches the method as set forth in claim 1. However, Fan modified by Manda et al. and DE et al. do not explicitly teach wherein task meta data is formatted to provide information sufficient to execute the tasks within the system and comprises function label, function call parameters, and function call environment configuration such as import calls. But, GitHub (Arkipenko) teaches creating task objects for task scheduling with properties comprising function label, function call parameters, and function call environment configuration such as import calls (Section 5: Implementation using the task scheduler shown in the image below), PNG media_image1.png 707 936 media_image1.png Greyscale And GitHub (Agronholm) teaches serializing tasks in task scheduling to save the task in some storage prior to being scheduled for execution (Basic Concepts section: “Job stores (i.e. storage for tasks) house the scheduled jobs (i.e. scheduled tasks). 
The default job (A job is considered a task) store simply keeps the jobs in memory, but others store them in various kinds of databases. A job's data is serialized when it is saved to a persistent job store (Serializing a task’s data converts it into metadata), and deserialized when it's loaded back from it. Job stores (other than the default one) don't keep the job data in memory, but act as middlemen for saving, loading, updating and searching jobs in the backend. Job stores must never be shared between schedulers.”). [image media_image2.png omitted] Moreover, Medium teaches a method of serializing functions and/or objects such as the task objects described in GitHub (Arkipenko) such that it is converted into metadata comprising a function label, function call parameters, and function call environment configuration such as import calls (Paragraph 1: “Serialising something means turning it from an in-memory structure to a simple string. This process can be reversed by deserialising the string back to the in-memory structure.”; Serializing the Tasks as shown in the image above for the Task Scheduler implementation of Arkipenko will convert the tasks into metadata comprising function call parameters, function labels, and data related to the function call environment configuration as taught by Medium.). Hence, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the teachings of GitHub (Arkipenko) in the invention of Fan modified by Manda et al. and DE et al. to first create the task objects with properties comprising a function label, function call parameters, and function call environment configuration such as import calls to allow flexible code/task execution via callbacks as taught by GitHub (Arkipenko) (Section 3. Multiple possible callbacks for task: “There may be a need to select an option for callback method based on certain criteria, or randomly. You can achieve that by defining an array of callback method pointers and selecting one based on the criteria you need. Example: when a robot detects an obstacle, it may go left, right backwards, etc. Each of the ‘directions’ or ‘behaviors’ are represented by a different callback methods.”; Section 4. Interrupt-driven execution support: “In case of interrupt-driven program flow, tasks could be scheduled to run once to request asynchronous execution (request), and then re-enabled (restarted) again with a different callback method to process the results. Example: event driven distance calculation for ultrasonic pulses. EchoPin #6 triggers pin change interrupts on rising and falling edges to determine the length of ultrasonic pulse.”). Moreover, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the invention of Fan modified by Manda et al. and DE et al. and GitHub (Arkipenko) with the teachings of Medium such that the task objects are serialized and converted to task meta data comprising a function label, function call parameters, and function call environment configuration such as import calls. One would be motivated to do so to enforce an order of task execution that preserves task dependencies (E.g. a task dependent on a sub-task completing execution must first wait for said sub-task to complete prior to execution. To enable this, a task and corresponding functions would need to be serialized and stored for later execution), and to store a task in some memory or storage to be loaded and executed at another time as taught by GitHub (Agronholm) (Basic Concepts section: “Job stores house the scheduled jobs (Jobs are considered tasks). The default job store simply keeps the jobs in memory, but others store them in various kinds of databases. A job's data is serialized when it is saved to a persistent job store, and deserialized when it's loaded back from it. Job stores (other than the default one) don't keep the job data in memory, but act as middlemen for saving, loading, updating and searching jobs in the backend. Job stores must never be shared between schedulers. Executors are what handle the running of the jobs. They do this typically by submitting the designated callable in a job to a thread or process pool. When the job is done, the executor notifies the scheduler which then emits an appropriate event. Schedulers are what bind the rest together.”).
15. Regarding claim 14, it is a system type claim with similar limitations as claim 5 above. Therefore, it is rejected under the same rationale.
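As a rough illustration of the serialization step discussed above, the following Python sketch reduces a task callback to metadata carrying a function label, the call parameters, and the import call needed to rebuild the call environment. It assumes the callback is a module-level function with JSON-serializable arguments; the names are hypothetical, and this is not the APScheduler or TaskScheduler (Arkhipenko) code itself.

```python
# Illustrative sketch only; hypothetical names, not the APScheduler or Arkhipenko TaskScheduler code.
# Assumes the task callback is a module-level function and its arguments are JSON-serializable.
import importlib
import json


def serialize_task(func, args=(), kwargs=None):
    """Reduce a task callback to metadata: function label, call parameters, and the
    import needed to reconstruct the function's environment later."""
    meta = {
        "function_label": func.__name__,   # function label
        "module": func.__module__,         # environment configuration (which module to import)
        "args": list(args),                # function call parameters
        "kwargs": kwargs or {},
    }
    return json.dumps(meta)


def deserialize_and_run(blob):
    """Rebuild the call environment from the metadata and invoke the task."""
    meta = json.loads(blob)
    module = importlib.import_module(meta["module"])   # the import call
    func = getattr(module, meta["function_label"])
    return func(*meta["args"], **meta["kwargs"])


# Hypothetical round trip, e.g. for a sensor-polling task defined in my_tasks.py:
#   blob = serialize_task(my_tasks.poll_sensor, args=(4,), kwargs={"average": True})
#   result = deserialize_and_run(blob)
```

In this sketch the serialized string plays the role of the job-store record: it can be persisted, loaded later, and deserialized back into an executable call.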
16. Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Fan in further view of Manda et al. in further view of DE et al., as applied to claims 1 and 10, in further view of Santos et al. (US 20060288346 A1) in further view of Savidis et al. (US 20180314308 A1).
17. Regarding claim 6, Fan modified by Manda et al. and DE et al. teaches the method as set forth in claim 1. However, Fan modified by Manda et al. and DE et al. do not explicitly teach wherein task run time or expected energy use is provided as task meta data and used against a per-cycle run time or energy use budget to determine which subset of tasks to run in a cycle. But Santos et al. teaches using task run time against a per-cycle run time to determine which subset of tasks to run in a cycle ([0025]: “The total expected processing time Tj of job j is the sum of expected task execution times over all tasks in the job… Tj is the minimum expected time to complete job j if it runs on a single processor.”; [0035]: “The mathematical program may be expressed mathematically as follows. Let binary decision variable xj = 1 if job j is selected and xj = 0 otherwise. P is the number of processors. The problem may be given by mathematical expressions (1) and (2) below.”; [0036]: “The product P*D in the right-hand side of equation (2) is the total amount of processor time available between t=0 and t=D (i.e., the available processing capacity) (This corresponds to a per-cycle run time constraint. The available processor time between t=0 and t=D corresponds to a per-cycle run time constraint for one cycle.). A selection parameter r allows selection of a set of jobs whose total processor time is less than or greater than the total available.”; [0038]: “A first greedy algorithm considers jobs in non increasing order of a ratio of completion reward to total processing time R. T. selecting jobs (A job is considered a task and the tasks of a job are considered sub-tasks) until selecting an additional job violates equation (2) (This involves comparing task run time (i.e. processing time) to the per-cycle run time constraint (total available processor time) as the per-cycle run time constraint is the constraint in the equation. This corresponds to using task run time against a per-cycle run time constraint to determine which subset of tasks to run in a cycle)”. Equation (2) depicts comparing task run time against a per-cycle run time constraint to determine which subset of tasks to run in a cycle. Equations (1) and (2) together show maximizing the number of tasks that can run with the greatest combined reward value without exceeding a per-cycle run time constraint.) [image media_image3.png omitted] And Savidis et al. teaches using expected energy use against an energy use budget to determine which subset of tasks to run in a cycle ([0036]: “The tasks may violate the power constraint given by Equation 1 may be executed in the next scheduling cycle, leading to a performance penalty”. Checking if a task violates the power constraint in a scheduling cycle involves comparing expected energy use against an energy use budget; [0043]: “The power consumption of a processing element πj may be approximated as a function of frequency. The power consumed by any processing element is given by Equation 2. The κ*fα and β terms in Equation 2 represent, respectively, the dynamic and static power consumption of the cores. The model parameters κ, α, and β for the Samsung Exynos A15 and A7 processors may be used to validate Algorithm 1. The power consumption with frequency using the estimated model parameters is shown in FIG. 2.”; [0047]: “The computational capacity required by task τi on core πj may be defined as ui,j=Ci,j/Di. (Expected energy use)”; [0050]: “The workload scheduling may be constrained by the total computational capacity available to execute the taskset Uj on πj, where the total capacity must exceed the computational demand of the taskset as given by Equation 4 (The computational capacity available corresponds to an energy use budget. This constraint is used to determine which subset of tasks to run)… The total power consumed by the cores at any time instant must be less than the combined maximum power supported by all OCVRs in the system as described by equation 6.”; [0053]: “The DVFS procedure reduces the operating frequency and the voltage of the cores until the constraint given by Equation 6 is satisfied (i.e. ensuring the subset of tasks running in a cycle do not exceed the energy use budget.)…The use of the DVFS procedure results in the optimal frequency of operation for each core by solving the bounded knapsack problem. (The knapsack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value (For example, determining the maximum number of tasks that can run without violating a per-cycle run time constraint or an energy use budget) is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation.)”; [0054]: “The required computational demand at a given frequency fj on processor πi is ui, fj. The weight added to the knapsack is analogous to ui, fj. The objective of the knapsack problem is to maximize the number of tasks executed on a core, without violating the task deadline. The procedure lowers the operating frequency of each task until constraints Equations 7 and 8 are satisfied.”. Equations 7 and 8 depict using expected energy use of a task against an energy use budget to determine which subset of tasks to run in a cycle. It attempts to maximize the number of tasks that can run without violating the constraints, where the energy use budget is one of the constraints; Equations 3 and 4 also show choosing a subset of tasks that can be run while not exceeding the energy use budget.). [image media_image4.png omitted] [image media_image5.png omitted] It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the invention of Fan modified by Manda et al. and DE et al. such that task run time is provided as task meta data and used against a per-cycle run time to determine which subset of tasks to run in a cycle to address the case where a system is only available for an allotted amount of time and a decision must be made as to which sets of tasks can complete within the allotted amount of time as taught by Santos et al. ([0001]: “Batch jobs typically involve multiple jobs, each job having one or more tasks, that need to be scheduled for execution on a computing system. It may be that the computing system is only available for a certain amount of time and that all of the jobs cannot be completed within the allotted time. Decisions have to be made as to which jobs to schedule for execution on the computing system.”). Moreover, it would have also been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the invention of Fan modified by Manda et al., DE et al., and Santos et al. such that expected energy use is provided as task meta data and used against an energy use budget to determine which subset of tasks to run in a cycle to ensure the total load consumption of hardware resources at any time is less than the total load capacity of said resources, and to reduce the total energy consumption in a scheduling cycle as taught by Savidis et al. (Abstract of Savidis et al.; [0036]: “rescheduling of controllable tasks reduces the energy consumption of the CMP system for a given scheduling cycle.”).
18. Regarding claim 15, it is a system type claim with similar limitations as claim 6 above. Therefore, it is rejected under the same rationale.
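For illustration, a minimal Python sketch of this kind of per-cycle selection, a greedy heuristic in the spirit of the reward-to-processing-time ordering quoted from Santos et al. with an added energy budget in the spirit of Savidis et al. (hypothetical names; not the cited algorithms themselves):

```python
# Illustrative sketch only; a greedy heuristic, not the algorithms of Santos et al. or Savidis et al.
def select_tasks_for_cycle(tasks, time_budget_s, energy_budget_j):
    """tasks: iterable of dicts with 'name', 'runtime_s', 'energy_j' and 'reward' fields,
    where runtime and energy would come from task metadata (run-time and power estimates)."""
    # Consider tasks in non-increasing order of reward per unit of processing time.
    ranked = sorted(tasks, key=lambda t: t["reward"] / t["runtime_s"], reverse=True)
    selected, used_time, used_energy = [], 0.0, 0.0
    for t in ranked:
        fits_time = used_time + t["runtime_s"] <= time_budget_s
        fits_energy = used_energy + t["energy_j"] <= energy_budget_j
        if fits_time and fits_energy:
            selected.append(t["name"])
            used_time += t["runtime_s"]
            used_energy += t["energy_j"]
        # Tasks that would exceed either budget are deferred to a later scheduling cycle.
    return selected
```

The two budget checks play the role of the per-cycle run-time constraint and the energy-use constraint; tasks that do not fit are carried over to the next cycle rather than dropped.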
19. Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Fan in further view of Manda et al. in further view of DE et al., as applied to claims 1 and 10, in further view of Topcuoglu et al. (Topcuoglu, et al. (1999). Task scheduling algorithms for heterogeneous processors. 3 - 14. 10.1109/HCW.1999.765092.) in further view of Savidis et al. (US 20180314308 A1).
20. Regarding claim 7, Fan modified by Manda et al. and DE et al. teaches the method as set forth in claim 1. However, Fan modified by Manda et al. and DE et al. do not explicitly teach further comprising summing priority weights of all sub-tasks wherein the selecting of sets of tasks for execution is based on a maximum summed priority weight and power estimates. But Topcuoglu et al. teaches summing priority weights of all sub-tasks wherein the selecting of sets of tasks for execution is based on a maximum summed priority weight (Introduction: “Algorithm selects the task with the highest upward rank… For the other nodes, the task selection phase of the algorithm is based on a summation of downward and upward ranks”; Problem Definition, paragraphs 1-2: “In this paper task and node terms are used interchangeably… Each edge (i, j)… represents the task-dependency constraint such that task ni should complete its execution before task nj can be started (Sub-tasks)… Each wi,j gives the estimated execution time (Estimated execution time is considered a priority weight and is also considered an execution cost) to complete task ni on processor pj. The average execution costs of tasks are used in the task priority equations. The average execution cost of a node ni (i.e. average execution cost of a task) is defined as [equation image media_image6.png omitted] (Execution cost is considered a priority weight. Hence, the formula corresponds to the summation of priority weights for all sub-tasks (i.e. dependent tasks).); Section 2. Problem definition, paragraph 7: “The upward rank of a task ni is recursively defined… since it is computed recursively by traversing the task graph (E.g. the acyclic graph disclosed by Fan in claim 1) upward, starting from the exit node, it is referred to as an upward rank)… Basically, ranku(ni) is the length of the critical path (i.e., the longest path) (length of the critical path may be a priority weight as well) from ni to the exit node, including the computation cost of the node itself. In some previous algorithms the ranks of the nodes are computed using computation costs only (E.g. instead of including the maximum path length it only considers the maximum average execution cost. This alternative method of rank calculation corresponds to selecting of tasks for execution being based on a maximum summed priority weight.), which is referred to as static upward rank”. Fig. 3 depicts the formula for calculating a task rank). [image media_image7.png omitted] And Savidis et al. teaches selecting tasks to be executed based on power estimates ([0043]: “The power consumption with frequency using the estimated model parameters is shown in FIG. 2”; [0047]: “The subset of tasks Tj that are executed… therefore require a total computational capacity of [equation image media_image8.png omitted] cycles per second”; [0050]: “The total power consumed by the cores at any time instant must be less than the combined maximum power supported by all OCVRs in the system as described in Equation 6.”; See the formulas involved with constraints for task scheduling taught by Savidis et al. below.). [image media_image9.png omitted] [image media_image10.png omitted] It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the invention of Fan modified by Manda et al. and DE et al. to traverse the acyclic graph, as disclosed in Fan, in order to sum the priorities of all the sub-tasks of a task as taught by Topcuoglu et al. such that task selection is based on a maximum summed priority weight.
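For illustration, an upward-rank computation of the kind quoted from Topcuoglu et al. can be sketched in Python over the acyclic task graph as follows; communication costs are omitted, so it reduces to the static variant discussed above, and the names are hypothetical rather than taken from the reference.

```python
# Illustrative sketch only; an upward-rank computation in the spirit of the quoted passages
# (communication costs omitted), not the HEFT implementation itself. Hypothetical names.
from functools import lru_cache


def upward_ranks(avg_cost, successors):
    """avg_cost: {task: average execution cost}; successors: {task: [tasks that depend on it]}.
    rank(t) = avg_cost[t] + max(rank(s) over successors s), or just avg_cost[t] at an exit node."""
    @lru_cache(maxsize=None)
    def rank(t):
        succ = successors.get(t, ())
        return avg_cost[t] + (max(rank(s) for s in succ) if succ else 0.0)
    return {t: rank(t) for t in avg_cost}


# Example: ranks = upward_ranks({"a": 3.0, "b": 2.0, "c": 4.0}, {"a": ["b", "c"]})
# Tasks (or sets of tasks) can then be chosen in order of decreasing rank, i.e. by the
# maximum summed priority weight along the remaining dependency chain.
```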
One would be motivated to do so to use a simpler algorithm for using/calculating heuristics for efficient scheduling of application tasks with improved performance of a task scheduler such as speed up as taught by Topcouglu et al. (Introduction of Topcuoglu et al., para 1-2: “Since the general DAG scheduling is NP-complete, there are many research efforts that have proposed heuristics for the task scheduling problem… only a few methods use variable execution times… however, they are either high-complexity algorithms and/or they do not generally provide good quality results”; Conclusion: “HEFT algorithm outperforms the other algorithms in all performance metrics… similarly, the CPOP algorithm is better than, or at least comparable to, the existing algorithms.”.). Moreover, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the invention of Fan modified by Manda et al., DE et al., and Topcuoglu et al. to include the additional limitation of choosing a task based on the lowest estimated required power consumption, i.e. power estimate, to ensure the total load consumption of hardware resources at any time is less than the total load capacity of said resources as taught by Savidis et al. (Abstract of Savidis et al.). 21. Regarding claim 16, it is a system type claim with similar limitations as claim 7 above. Therefore, it is rejected under the same rationale. 22. Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Fan in further view of Manda et al. in further view of DE et al., as applied to claims 1 and 10, in further view of Doshi et al. (US 20200136994 A1). 23. Regarding claim 8, Fan modified by Manda et al. and DE et al. teaches the method as set forth in claim 1. However, Fan modified by Manda et al. and DE et al. do not explicitly teach but Doshi et al. teaches wherein a shared resource, such as an attached sensor, is provided as task meta data and used to construct the acyclic graph ([0023]: “IoT devices can be physical or virtualized objects… and can include sensors, actuators, and other input/output components, which may be used to collect data or perform actions in a real-world environment”; [0088]: “The composition controller 230 can generate the composition(s) 146, 156 by composing isomorphic graphs based on relationship(s) between different resource objects, telemetry objects, etc. Advantageously… the composition controller 230 can… generate telemetry data associated with one(s) of the resource(s) 149, 159 as a whole (e.g., a hardware platform including a multi-core CPU and an FPGA) ) (telemetry data associated with one of the resources as a whole such as a hardware platform corresponds to the task meta data of a shared resource. A hardware platform is considered a shared resource.) rather than telemetry data associated with a portion of the resource(s)”; [0090]: “ the composition generator 238 generates a telemetry composition including one or more telemetry objects. In such examples, the composition generator 238 can aggregate a plurality of telemetry objects, link one(s) of the plurality of the telemetry objects (Includes the generated telemetry data (i.e. metadata) associated with one of the resources as a whole (i.e. 
a shared resource like the hardware platform)) using an acyclic and/or a cyclic graph or subgraph (In order to aggregate those objects together using an acyclic graph, the meta data of those objects needs to be provided and used to properly construct the graph. The telemetry objects include shared resources such as a hardware platform, as described above, and includes the generated telemetry data from composition controller 230. Hence, the generated telemetry data corresponds to the task meta data of a shared resource. Using an acyclic graph to link the telemetry objects then corresponds to using the task meta data to construct the acyclic graph.)”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the invention of Fan modified by Manda et al. and DE et al. with the teachings of Doshi et al. such that a shared resource, such as an attached sensor, is provided as task meta data and used to construct the acyclic graph to improve resource utilization as taught by Doshi et al. ([0032]: “To meet the low-latency and high-bandwidth demands of endpoint devices, orchestration in edge clouds has to be performed based on timely resource utilization information about the utilization of many resources (e.g., hardware resources, software resources, virtual hardware and/or software resources, etc.), and the efficiency with which those resources are able to meet the demands placed on them. In examples disclosed herein, such resource utilization information is generally referred to as telemetry (e.g., telemetry data, telemetry information, etc.).”). 24. Regarding claim 17, it is a system type claim with similar limitations as claim 8 above. Therefore, it is rejected under the same rationale. 25. Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Fan in further view of Manda et al. in further view of DE et al., as applied to claims 1 and 10, in further view of GitHub (Arkipenko: https://web.archive.org/web/20250424133012/https://github.com/arkhipenko/TaskScheduler/wiki/Concept-of-Task-and-Cooperative-Task-Scheduling/68a1704b6280c1b7321dd53fe3ff76f69930fc0f) in further view of Github (Arkipenko: https://web.archive.org/web/20230526071703/https://github.com/arkhipenko/TaskScheduler/wiki/Implementation-scenarios-and-ideas/) in further view of Savidis et al. (US 20180314308 A1). 26. Regarding claim 9, Fan modified by Manda et al. and DE et al. teaches the method as set forth in claim 1 Fan teaches wherein at least one of the following parameters is user-configurable: task dependencies ([0029]: “traversal of the dependence graph 140 in a manner that preserves task dependencies substantially as defined by the source code 110“. Task dependencies defined by source code is user configurable as a user can edit the source code to change the task dependencies.). However, Fan modified by Manda et al. and DE et al. do not explicitly teach priority weights, execution parameters, and power estimates to be user configurable. But Github (Arkipenko) teaches priority weights and execution parameters to be user configurable (Concept of task and cooperative task scheduling, Compile parameters section; Implementation scenarios and ideas, task scheduler implementation section). PNG media_image11.png 223 1014 media_image11.png Greyscale PNG media_image12.png 316 840 media_image12.png Greyscale And Savidis et al. 
teaches power estimates to be user configurable ([0043]: “The power consumption with frequency using the estimated model parameters is shown in FIG. 2”. The formula is reproduced in the image below.) [image media_image13.png omitted] It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the invention of Fan modified by Manda et al. and DE et al. to include the teachings of Github (Arkipenko) to allow flexible code/task execution via callbacks as taught by GitHub (Arkipenko) (Section 3. Multiple possible callbacks for task: “There may be a need to select an option for callback method based on certain criteria, or randomly. You can achieve that by defining an array of callback method pointers and selecting one based on the criteria you need. Example: when a robot detects an obstacle, it may go left, right backwards, etc. Each of the ‘directions’ or ‘behaviors’ are represented by a different callback methods.”; Section 4. Interrupt-driven execution support: “In case of interrupt-driven program flow, tasks could be scheduled to run once to request asynchronous execution (request), and then re-enabled (restarted) again with a different callback method to process the results. Example: event driven distance calculation for ultrasonic pulses. EchoPin #6 triggers pin change interrupts on rising and falling edges to determine the length of ultrasonic pulse.”). Moreover, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify the invention of Fan modified by Manda et al., DE et al., and Github (Arkipenko) with the teachings of Savidis et al. to ensure the total load consumption of hardware resources at any time is less than the total load capacity of said resources as taught by Savidis et al. (Abstract of Savidis et al.).
27. Regarding claim 18, it is a system type claim with similar limitations as claim 9 above. Therefore, it is rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD J LI whose telephone number is (571)272-7695. The examiner can normally be reached Monday-Friday 9:00-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin Young, can be reached at (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDWARD JIANDE LI/Examiner, Art Unit 2194 /KEVIN L YOUNG/Supervisory Patent Examiner, Art Unit 2194