Patent Application 17492910 - TECHNIQUES FOR INFORMATION RANKING AND RETRIEVAL - Rejection

From WikiPatents


Application Information

  • Invention Title: TECHNIQUES FOR INFORMATION RANKING AND RETRIEVAL
  • Application Number: 17492910
  • Submission Date: 2025-05-20
  • Effective Filing Date: 2021-10-04
  • Filing Date: 2021-10-04
  • National Class: 707
  • National Sub-Class: 741000
  • Examiner Employee Number: 87610
  • Art Unit: 2152
  • Tech Center: 2100

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 2

Cited Patents

The following patents were cited in the rejection:

Office Action Text


    Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Arguments and amendments filed 2/20/2025 have been examined.
Claims 1-20, 22, 23, 25, 29, 30, 32, 36, 37, and 39 were previously cancelled.
Thus, Claims 21, 24, 26-28, 31, 33-35, 38 and 40 are currently pending.
This Office Action is Final.


Response to Arguments
Applicant’s arguments with respect to claim(s) regarding the previous prior-art rejection under 35 USC 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.


Claims 21, 28, and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Griffith et al., US Pub. No. 2018/0314705 A1, in view of Subbian et al., US Pub. No. 2019/0114362 A1, and further in view of Lisuk et al., US Pub. No. 2016/0078022.

As to claim 21 (and substantially similar claim 28 and claim 35),
Griffith discloses an apparatus, comprising:
a processor for executing an enterprise-level information retrieval system; and
a memory storing instructions of an enterprise-level information retrieval system which, when executed by the processor,
(Griffith Fig. 34; [0213-0215])

cause the processor to:
batch upload documents,
(Griffith [0066] According to various examples, dataset ingestion controller 220 and its constituent elements may be configured to detect exceptions or anomalies among subsets of data ( e.g., columns of data) of an imported or uploaded set of data, and to facilitate corrective actions to negate data anomalies; see also [0079] To illustrate, consider
that a subset of separately-uploaded datasets are included in dataset data 203b, whereby each of these datasets in the subset include at least one similar or common dataset attribute that may be correlatable among datasets.)


ingest the documents into the enterprise-level information retrieval system;
(Griffith teaches a data ingestion controller ingesting documents for an organization (e.g., a corporation, a university, etc., i.e. “ingest the documents into the enterprise-level information retrieval system” see [0066] FIG. 2 is a diagram depicting an example of a data ingestion controller configured to generate a set of layer data files, according to some examples. Diagram 200 depicts a dataset ingestion controller 220 communicatively coupled to a dataset attribution manager 261, and is further coupled communicatively to one or both of a user interface ("UI") element generator 280 and a programmatic interface 290 to exchange data and/or commands (e.g., executable instructions) with a user interface, such as a collaborative dataset interface 202. According to various examples, dataset ingestion controller 220 and its constituent elements may be configured to detect exceptions or anomalies among subsets of data ( e.g., columns of data) of an imported or uploaded set of data, and to facilitate corrective actions to negate data anomalies; see also [0174] FIG. 25 is a diagram depicting a flow diagram as an example of remediating a dataset during ingestion, according to some embodiments. Flow 2500 may begin at 2502, at
which data representing a subset of data disposed in data fields (e.g., cells) of a data arrangement (e.g., a spreadsheet) may be received. A data field may include any unit of data
that can be extracted from an original data structure. For example, a tabular arrangement of data in a PDF document may be analyzed to extract data from the PDF document ( e.g., using logic functioning similar to optical character recognition) and format the data into a table, whereby a unit of data may include data at an intersection of a specific row and column;
see also [0207] credential data repository 3203 may store authentication data with which to provide authorization to access restriction manager 3284 to determine whether access ought to be granted to access one or more portions of graph data arrangement 3298. In this example, graph data arrangement 3298 depicts an example of a graph data arrangement that
includes data graph portion 3299 and additional links to a user account identifier 3266a node, a username node 3266b, an organization (e.g., a corporation, a university, etc.) node
3266c, and a role (e.g., job title or position) node) 

access the ingested document;
(Griffith teaches a data ingestion controller modifying/remediating/correcting ingested  datasets/documents, i.e. access the ingested document  see [0153] FIG. 23 is a diagram depicting an example of a dataset ingestion controller configured to analyze and modify datasets to enhance accuracy thereof, according to some embodiments. Diagram 2300 depicts an example of a collaborative dataset consolidation system 2310 that may be configured to consolidate one or more datasets to form collaborative datasets based on remediated data to enhance, for example, accuracy and reliability of datasets configured to be shared and repurposed by a community of user datasets. Diagram 2300 depicts an example of a collaborative dataset consolidation system 2310, which is shown in this example as including a dataset ingestion controller 2320 configured to remediate datasets, such as dataset 2305a (ingested data 2301a);
See also [0154] According to some examples, dataset analyzer 2330 and any of its components, including inference engine 2332, may be configured to analyze an imported or uploaded dataset 2305a to detect or determine whether dataset 2305a has an anomaly relating to data (e.g., improper or unexpected data formats, types or values) or to a structure of a
data arrangement in which the data is disposed. For example, inference engine 2332 may be configured to analyze data in dataset 2305a to identify tentative anomalies and to determine (e.g., infer or predict) one or more corrective actions.)

generate a search index for the ingested documents, according to an index
configuration;
(Griffith teaches a searchable/queryable collaborative dataset consolidation, i.e. a search index see [0047] Diagram 100 depicts an example of a collaborative dataset consolidation system 110 that may be configured to consolidate one or more datasets to form collaborative datasets. A collaborative dataset, according to some non-limiting examples, is a set of data that may be configured to facilitate data interoperability over disparate computing system platforms, architectures, and data storage devices. Further, a collaborative dataset may also be associated
with data configured to establish one or more associations (e.g., metadata) among subsets of dataset attribute data for datasets and multiple layers of layered data, whereby attribute data may be used to determine correlations (e.g., data patterns, trends, etc.) among the collaborative
datasets. Further, collaborative dataset consolidation system 110 may be configured to convert a dataset in a first format (e.g., a tabular data structure or an unstructured data arrangement) into a second format (e.g., a graph), and is further configured to interrelate data between a table and a graph. Thus, data operations, such as queries, that are designed for either a tabular or graph data structure may be implemented to access data in both formats or data arrangements.
For example, a query on a collaborative dataset may be accomplished using either a query designed to access a tabular or relational data arrangement (e.g., a SQL query or variant thereof) or another query designed to access a graph data arrangement (e.g., a SPARQL operation or a variant thereof) that includes data for the collaborative dataset. Therefore, a collaborative dataset of common data may be configured to be accessible by different queries and programming languages, according to some examples.;
See also [0060] Subsequent to introduction into collaborative dataset consolidation system 110, data in dataset 104 may be included in a data operation as linked data in dataset 142a, such as a query. In this case, one or more components of dataset ingestion controller 120 and a dataset attribute manager (not shown) may be configured to enhance dataset 142a by, for example, detecting and linking to additional datasets that may have been formed or made available
subsequent to ingestion or use of data in dataset 142a.;
see also [0102] Diagram 800 further depicts that each column node 814, 816, and 818 may be supplemented or "annotated" with metadata ( e.g., in one or more layers) that describe a column, such as a label, an index number, a datatype, etc.; see also Fig. 8)

store the ingested documents and the search index in a storage;
(Griffith teaches converting a dataset into a collaborative data format and a layer data generator for storage, i.e., store the ingested documents and the search index in a storage; see [0049] Diagram 100 depicts an example of a collaborative dataset consolidation system 110, which is shown in this example as including a repository 140 configured to store datasets, such as dataset 142a, and a dataset ingestion controller 120, which, in turn, is shown to include an inference engine 132, a format converter 134, and a layer data generator 136. In some examples, format converter 134 may be configured to receive data representing a set of data 104 having, for example, a particular data format, and may be further configured to convert dataset 104 into a collaborative data format for storage in a portion of data arrangement
142a in repository 140.;
See also [0055] format converter 134 may be configured to format the source data into, for example, a tabular data format 177 a, and layer data generator 136 may be configured to implement row nodes 172 to identify rows of underlying data and column nodes 175 to identify columns 174 and 176 of underlying data)
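For illustration only (not part of the record and not Griffith's implementation; all names, tokens, and the index configuration are hypothetical), the batch-upload, ingest, index-generation, and store limitations can be sketched as a batch of documents run through a simple inverted-index builder:

```python
from collections import defaultdict

def ingest_documents(raw_docs):
    """Normalize a batch of uploaded documents into doc_id -> text records."""
    return {doc_id: text.lower() for doc_id, text in raw_docs.items()}

def build_index(docs, config=None):
    """Build a simple inverted index (term -> set of doc_ids) per an index configuration."""
    config = config or {"min_token_len": 2}  # hypothetical index configuration
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.split():
            token = token.strip(".,;:!?")
            if len(token) >= config["min_token_len"]:
                index[token].add(doc_id)
    return dict(index)

# Batch upload, ingest, and index two documents, then store both artifacts.
docs = ingest_documents({
    "d1": "Collaborative datasets enable data interoperability",
    "d2": "Queries retrieve data from the collaborative index",
})
index = build_index(docs)
store = {"documents": docs, "index": index}  # stand-in for persistent storage
print(sorted(store["index"]["collaborative"]))  # -> ['d1', 'd2']
```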

Griffith does not disclose:
generate an operating environment for information rank and retrieval processes to retrieve and rank via the rank model, wherein the operating environment comprises a search platform cloud for providing a search capability that encompasses information during document ingestion and index generation and an administrative user interface (UI) to facilitate onboarding of applications;
generate, in the operating environment via a UI template image and based on environment configurations, containers for application specific client UIs for the information rank and retrieval processes; and

retrieve and rank, via the rank model, at least one of the ingested documents responsive to a query, the rank model comprising a machine learning model;

however, Subbian discloses:
generate an operating environment for information rank and retrieval processes to retrieve and rank via the rank model, 
(Subbian teaches a relevance-and-ranking engine for application see [0037] relevance-and-ranking engine, see also [0046] In particular embodiments, ranking of the resources may be determined by a ranking algorithm implemented by the search engine. As an example and not by way of limitation, resources that are more relevant to the search query or to the user may be
ranked higher than the resources that are less relevant to the search query or the user.;
see also [0032] As an example and not by way of limitation, the items and objects may include groups or social networks to which users of the social-networking system 160 may belong, events or calendar entries in which a user might be interested, computer-based applications that a user may use;
see also [0005] The entity embeddings may be used in search ranking, entity disambiguation, intent classification, group search, typeahead, and other suitable applications.)

wherein the operating environment comprises a search platform cloud 
(Subbian teaches a cloud based search-results interface see [0100] Where appropriate, computer system 1300 may include one or more computer systems 1300; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks.; see also [0046] The social-networking system 160 may then generate a search-results interface with search results corresponding to the identified content and send the search-results interface to the user. The search results may be presented to the user, often in the form of a list of links on the search-results interface, each link being associated with a different interface that contains some of the identified resources or content)

for providing a search capability that encompasses information during document ingestion and index generation and an administrative user interface (UI) to facilitate onboarding of applications;
(Subbian teaches a relevance-and-ranking engine for application see [0037] relevance-and-ranking engine, see also [0046] In particular embodiments, ranking of the resources may be determined by a ranking algorithm implemented by the search engine. As an example and not by way of limitation, resources that are more relevant to the search query or to the user may be
ranked higher than the resources that are less relevant to the search query or the user.;
see also [0032] As an example and not by way of limitation, the items and objects may include groups or social networks to which users of the social-networking system 160 may belong, events or calendar entries in which a user might be interested, computer-based applications that a user may use,)

generate, in the operating environment via a UI template image and based on environment configurations, containers for application specific client UIs for the information rank and retrieval processes;
(Subbian teaches customized search results/profile interface elements; see
[0046] The identified content may include, for example, social graph elements (i.e., user nodes 202, concept nodes 204, edges 206), profile interfaces, external web interfaces, or any
combination thereof. The social-networking system 160 may then generate a search-results interface with search results corresponding to the identified content and send the search-results interface to the user.;
see also [0056] In particular embodiments, the social-networking system 160 may provide customized keyword completion suggestions to a querying user as the user is inputting a text
string into a query field. Keyword completion suggestions may be provided to the user in a non-structured format;
see also [0046] The search results may also be ranked and presented to the user according to
their relative degree of relevance to the user. In other words, the search results may be personalized for the querying user based on, for example, social-graph information, user information, search or browsing history of the user, or other suitable information related to the user; 
see also [0041] Profile interfaces may be viewable by all or a selected subset of other users. As an example and not by way of limitation, a user node 202 may have a corresponding user-profile interface in which the corresponding user may add content, make declarations, or otherwise express himself or herself. As another example and not by way of limitation, a concept node 204 may have a corresponding conceptprofile interface in which one or more users may add content, make declarations, or express themselves, particularly in relation to the concept corresponding to concept node 204.)

and
retrieve and rank, via the rank model, at least one of the ingested documents responsive to a query, 
(Subbian [0046] The search engine may conduct a search based on the query phrase using various search algorithms and generate search results that identify resources or content ( e.g., userprofile interfaces, content-profile interfaces, or external resources) that are most likely to be related to the search query. To conduct a search, a user may input or send a search
query to the search engine. In response, the search engine may identify one or more resources that are likely to be related to the search query, each of which may individually be referred to as a "search result," or collectively be referred to as the "search results" corresponding to the search query. The identified content may include, for example, socialgraph elements (i.e., user nodes 202, concept nodes 204, edges 206), profile interfaces, external web interfaces, or any
combination thereof.)

the rank model comprising a machine learning model
(Subbian [0005] The entity embeddings may be used in search ranking, entity disambiguation, intent classification, group search, typeahead, and other suitable applications.;
See also [0064-0065] In particular embodiments, an n-gram may be mapped to a vector representation in the vector space 400 by using a machine leaning model (e.g., a neural network). The machine learning model may have been trained using a sequence of training data ( e.g., a corpus of objects each comprising n-grams). [0065] In particular embodiments, an object may be represented in the vector space 400 as a vector referred to as a feature vector or an object embedding.).
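For illustration only (not Subbian's implementation; the vectors and names are hypothetical toy values standing in for learned embeddings), ranking ingested documents by similarity of their embeddings to a query embedding can be sketched as:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank(query_vec, doc_vecs):
    """Rank documents by similarity of their embeddings to the query embedding."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in doc_vecs.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

# Toy embeddings; in practice a trained machine learning model (e.g., a
# neural network) would map queries and documents into the vector space.
doc_vecs = {"d1": [0.9, 0.1, 0.0], "d2": [0.1, 0.9, 0.2], "d3": [0.8, 0.2, 0.1]}
ranked = rank([1.0, 0.0, 0.0], doc_vecs)
print(ranked[0][0])  # -> d1 (most similar to the query embedding)
```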

It would have been obvious to one having ordinary skill in the art at the time of the effective filing date to apply the ranking and interfaces taught by Subbian, since it was known in the art that search systems generate a query embedding for a given query and find entities whose entity embeddings are similar to the query embedding. Processing queries based on entity embeddings may improve over simpler text-matching systems, which may up-rank pages with more terms matching the query even when those pages are a poor match for the intent of the query; experiments showed that query processing based on entity embeddings outperformed traditional query processing based on text matching, improving Mean Average Precision (MAP) by 38.5%. (Subbian [0005])


While Griffith discloses “batch upload documents,”
Griffith and Subbian do not explicitly disclose:
including documents containing paragraphs of text

however, Lisuk discloses:
including documents containing paragraphs of text
(Lisuk teaches parsing uploaded documents into paragraphs, i.e. “including documents containing paragraphs of text” see [0090] At a second sub-step, the classification computer 301 parses the training documents into linguistic structures, such as sentences, phrases, paragraphs, and so forth.;
See also [0108] Thus, at block 600, the verification computer 304 may retrieve not only data from the classified document, but also data identifying the potential labels and a corresponding
score for each label. Furthermore, in some embodiments, the score for each potential label is stored in the classified document database 305 at varying levels of granularity. Thus, 
a score for each potential label may be stored for each sentence, phrase, paragraph, section, or any other breakdown of the document.; 
See also [0103] In an embodiment, the classification computer 301 accesses the classified document database 305 over network 308 to upload the classified document and the label information. In other embodiments, the classification computer 301 sends the classified document and the label information over network 308 to verification computer 304, which then assumes responsibility for inserting the new data into the classified document database 305;
See also [0086] For example, the classifier may require training examples to be in a particular format, identification of features in the documents on which to base the classification, parsing/identification of linguistic structures (e.g. words, phrases, sentences, paragraphs, etc.),).
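For illustration only (not Lisuk's implementation; the blank-line splitting rule is an assumption), parsing an uploaded document into paragraph-level linguistic structures can be sketched as:

```python
import re

def parse_paragraphs(document_text):
    """Split a document into paragraphs on blank lines, dropping empty chunks."""
    parts = re.split(r"\n\s*\n", document_text)
    return [p.strip() for p in parts if p.strip()]

doc = "First paragraph of text.\n\nSecond paragraph,\nspanning two lines.\n\n"
paragraphs = parse_paragraphs(doc)
print(len(paragraphs))  # -> 2
```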


It would have been obvious to one having ordinary skill in the art at the time of the effective filing date to apply parsing uploaded documents into paragraphs as taught by Lisuk, since it was known in the art that a classification computer may perform pre-processing to convert training documents into a format understandable to the classifier, parsing the documents into various linguistic structures, such as words, phrases, sentences, and paragraphs. Furthermore, the classifier may require the features to be presented in a particular format, such as a table in which each row corresponds to a different instance of the problem domain and the columns store the features and label of the instance, the format being specific to the classifier implemented by the classification computer and the problem domain. When a document is placed in the verified document database, the query computer updates an index that allows efficient searching for documents based on the associated verified label; for example, assuming the verified document database is implemented as a relational database, the query computer may store a metadata index for each label holding pointers that link to rows representing the different documents tagged with that label. (Lisuk [0068, 0142])


Claims 24, 26, 27, 31, 33, 34, 38, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Griffith et al., US Pub. No. 2018/0314705 A1, in view of Subbian et al., US Pub. No. 2019/0114362 A1, in view of Lisuk et al., US Pub. No. 2016/0078022,
and further in view of Brandys et al., US Pub. No. 2017/0168778 A1.

As to claim 24, Griffith/Subbian/Lisuk do not disclose:
wherein the UI template image comprises a Docker template image, the containers comprise Docker containers, and the environment configurations comprise application environment configurations;

However, Brandys, as modified, discloses the apparatus of claim 21, wherein the UI template image comprises a Docker template image and the containers comprise Docker containers,
(Brandys teaches docker / template images see [0042] In exemplary embodiments, template repository 108 is a repository of template image(s) 112 created by software container engine 106. In other words, template repository 108 is a storage location for data that includes
information regarding the software content on software container(s) 110. In some embodiments, template repository 108 is part of a software container-based cloud system.;
see also [0065] In step 402, CEA program 114/CEP program 208 installs two types of hooks. The first type of hook is a "template commit" hook. In various embodiments, the template commit hook is triggered when the last step occurs during the creation of software container(s) 110 by an automated process, such as the creation of software container(s) 110 using DOCKERFILES, which is similar to "make" files used for automating the processes of software compiling, linking, and packaging. In other embodiments, the template commit hook is triggered when software container templates are manually
"pushed" (i.e., published) into template repository 108 and are represented as template image(s) 112.;
See also [0005] DOCKER is an example of the emerging trend for software container-based cloud systems. This is because software containers are rapid to deploy, execute, and migrate
in a cloud system.;)

and the environment configurations comprise application environment configurations
(Brandys [0042] In exemplary embodiments, software container(s) 110 each include an entire runtime environment: one or more applications, plus all their dependencies, libraries and
other binaries, and configuration files needed to execute the applications, bundled into one package. In some embodiments, software container(s) 110 are part of a software
container-based cloud system.;
See also [0029] 2) Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting
environment configurations.)

It would have been obvious to one having ordinary skill in the art at the time of the effective filing date to apply Docker template images as taught by Brandys to the system of Griffith/Subbian/Lisuk, since it was known in the art that software container engines create a template for every software container they build, that the software used to provision a software container is present on the source template for that container, and that such systems provide for searching a software container engine for a template repository and extracting data from one or more template images within that repository, the data identifying the software content of the corresponding software containers. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service, where the cloud model may include at least five characteristics, at least three service models, and at least four deployment models. (Brandys [0018-0020])
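For illustration only (not Brandys's implementation; the image reference, application names, and configuration keys are hypothetical), instantiating per-application container specifications from a shared UI template image and application environment configurations can be sketched as:

```python
def generate_containers(template_image, env_configs):
    """Create one container spec per application from a shared UI template image."""
    containers = []
    for app_name, env in env_configs.items():
        containers.append({
            "name": f"{app_name}-client-ui",
            "image": template_image,          # shared template image for all client UIs
            "env": dict(env),                 # application-specific environment configuration
        })
    return containers

containers = generate_containers(
    "registry.example/ui-template:1.0",       # hypothetical template image reference
    {"search-app": {"COLLECTION": "docs"}, "admin-app": {"COLLECTION": "admin"}},
)
print([c["name"] for c in containers])
```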


As to claim 26, Brandys as modified discloses the apparatus of claim 21, wherein the
applications are associated with properties files and/or UI configuration files
(Brandys [0065] The first type of hook is a "template commit" hook. In various embodiments, the template commit hook is triggered when the last step occurs during the creation of software container(s) 110 by an automated process, such as the creation of software container(s) 110 using DOCKERFILES, which is similar to "make" files used for automating the processes of software compiling, linking, and packaging. In other embodiments, the template commit hook is triggered when software container templates are manually "pushed" (i.e., published) into template repository 108 and are represented as template image(s) 112. The manual push
step occurs subsequent to the creation of new instances of software container(s) 110, which involves the manual addition and removal of applications, the changing of configuration
files, etc.;
see also [0062] In step 310, CEA program 114/CEP program 208 generates output in a SAM-acceptable format. In various embodiments, the output includes accurate software discovery
data in real-time for the software executing on software container(s) 110. SAM-acceptable formats for output include, but are not limited to, ISO 19770-2 and ISO 19770-4 XML files. In some embodiments, a specialized output format that is optimized for a specific SAM tool is
used.;
see also [0080] In various embodiments, SAM scanner/monitor module 503 generates output in a SAM-acceptable format. The output includes accurate software discovery data in real-time for the software executing on software container(s) 110. SAM-acceptable formats for output include, but are not limited to, ISO 19770-2 and ISO 19770-4 XML files. In some embodiments, a specialized output format that is optimized for a specific SAM tool is used)


As to claim 27, Brandys as modified discloses the apparatus of claim 26, wherein the properties files are associated with an application search platform collection and the UI configuration files are associated with application client UIs
(Brandys [0018] Embodiments of the present invention provide a method, computer program product, and computer system for searching a software container engine for a template repository. Embodiments of the present invention provide a method, computer program product, and computer system to extract data from one or more template images within the template repository, the data identifying the software content of the one or more corresponding software
containers.;
see also [0045] In exemplary embodiments, CEA program 114 is an adapter (i.e. a software adapter) that searches software container server 102 for software container engine 106. CEA
program 114 consequently searches software container engine 106 for template repository 108. CEA program 114 retrieves template image(s) 112 and instantiates template image(s) 112. CEA program 114 analyzes instantiated template image(s) 112 to determine which software container in software container(s) 110 is represented by a given instantiated template image in template image(s) 112. CEA program scans the contents of each instantiated template image
in instantiated template image(s) 112 for the identity of the software programs executing on the corresponding software container in software container(s) 110. CEA program 114 then creates or updates TSM database 116 with data that includes the mapping of software content on software containers in software container( s) 110 with the corresponding template images in template image(s) 112. CEA program 114 also includes a software asset management function,
which includes the function of scanning newly started software containers and reading TSM database 116 in order to create software inventory reports.)

Referring to claim 31, this dependent claim recites similar limitations as claim 24;
therefore, the arguments above regarding claim 24 are also applicable to claim 31.

Referring to claim 33, this dependent claim recites similar limitations as claim 26;
therefore, the arguments above regarding claim 26 are also applicable to claim 33.

Referring to claim 34, this dependent claim recites similar limitations as claim 27;
therefore, the arguments above regarding claim 27 are also applicable to claim 34.

Referring to claim 38, this dependent claim recites similar limitations as claim 24;
therefore, the arguments above regarding claim 24 are also applicable to claim 38.

Referring to claim 40, this dependent claim recites similar limitations as claim 27;
therefore, the arguments above regarding claim 27 are also applicable to claim 40.


Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

CONTACT INFORMATION
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVAN S ASPINWALL whose telephone number is (571)270-7723. The examiner can normally be reached Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Neveen Abel-Jalil can be reached at 571-270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.


/Evan Aspinwall/Primary Examiner, Art Unit 2152                                                                                                                                                                                                        



