WO2025042386A1 - AI integrated intelligent content system - Google Patents

AI integrated intelligent content system

Info

Publication number
WO2025042386A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
aspects
natural language
model
models
Prior art date
Application number
PCT/US2023/030712
Other languages
French (fr)
Inventor
Wei Lin
Mauro DAMO
Hareesh KOMMEPALLI
Mohan WANG
Bharti Patel
Original Assignee
Hitachi Vantara Llc
Application filed by Hitachi Vantara Llc
Priority to PCT/US2023/030712
Publication of WO2025042386A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/02User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • G06F40/35Discourse or dialogue representation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • G06F40/55Rule-based translation
    • G06F40/56Natural language generation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L51/10Multimedia information

Definitions

  • the present disclosure is generally directed to a digital advisor related to product and/or services information.
  • the components and/or sources may include, for example, different teams associated with a business (e.g., marketing, sales, etc.) and different production lines and/or industrial processes, each associated with multiple machines, that may produce information relevant to different stakeholders and/or users. Much of the information may be siloed in separate content management systems for different aspects of providing products and/or services. Additionally, even if information is appropriately shared or co-located, the different stakeholders and/or users may be interested in different aspects, or analyses, of the information.
  • Example implementations described herein involve an innovative method to generate content for multiple aspects of industrial operations.
  • aspects of the present disclosure include a method for receiving information from one or more input sources associated with at least one of the industrial process or the industrial product, processing the received information using one or more machine-trained (MT) models associated with a model management system, generating, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT natural language processing (NLP) models, generating, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information, and outputting, for the first user, a first indication of at least the first natural language recommendation via the user interface.
  • MT machine-trained
  • NLP MT natural language processing
  • aspects of the present disclosure include a non-transitory computer readable medium, storing instructions for execution by a processor, which can involve instructions for receiving information from one or more input sources associated with at least one of the industrial process or the industrial product, processing the received information using one or more MT models associated with a model management system, generating, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT NLP models, generating, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information, and outputting, for the first user, a first indication of at least the first natural language recommendation via the user interface.
  • aspects of the present disclosure include a system, which can involve means for receiving information from one or more input sources associated with at least one of the industrial process or the industrial product, processing the received information using one or more MT models associated with a model management system, generating, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT NLP models, generating, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information, and outputting, for the first user, a first indication of at least the first natural language recommendation via the user interface.
  • aspects of the present disclosure include an apparatus, which can involve a processor, configured to receive information from one or more input sources associated with at least one of the industrial process or the industrial product, process the received information using one or more MT models associated with a model management system, generate, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT NLP models, generate, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information, and output, for the first user, a first indication of at least the first natural language recommendation via the user interface.
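The following is a minimal, non-limiting sketch (in Python) of the receive/process/query/recommend/output flow summarized in the preceding aspects. All object, function, and key names here (for example, source.read, model.predict, "query_generator", "recommender", user_interface.display) are hypothetical placeholders for illustration; the disclosure does not prescribe particular libraries or APIs.

```python
# Hypothetical sketch of the receive -> process -> query -> recommend -> output
# flow described above. Every object and method name is a placeholder.

def advise(input_sources, mt_models, nlp_models, user_interface, user):
    # Receive information from one or more input sources (e.g., sensors, tickets).
    received = [source.read() for source in input_sources]

    # Process the received information using machine-trained (MT) models
    # obtained from the model management system.
    processed = [model.predict(received) for model in mt_models]

    # Generate a first natural language query regarding the received and/or
    # processed information using one or more MT NLP models.
    query = nlp_models["query_generator"].generate(context=processed)

    # Generate a natural language recommendation conditioned on the query and
    # on the received/processed information.
    recommendation = nlp_models["recommender"].generate(
        prompt=query, context=received + processed
    )

    # Output an indication of the recommendation to the first user via the UI.
    user_interface.display(user=user, content=recommendation)
    return recommendation
```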
  • FIG. 1 is a diagram of components of the system in accordance with some aspects of the disclosure.
  • FIG. 2 illustrates an example hypothesis development canvas (HDC) that may be provided to the system in accordance with some aspects of the disclosure.
  • HDC hypothesis development canvas
  • FIG. 3 is a diagram illustrating processing one or more HDCs to produce structured data in accordance with some aspects of the disclosure.
  • FIG. 4 is a diagram illustrating aspects of model management in accordance with some aspects of the disclosure.
  • FIG. 5 is a diagram illustrating a content graph in accordance with some aspects of the disclosure.
  • FIG. 6 is a diagram illustrating elements of a model training process in accordance with some aspects of the disclosure.
  • FIG. 7 is a diagram illustrating elements of an inferencing process in accordance with some aspects of the disclosure.
  • FIG. 8 is a diagram illustrating recommendations based on model selection from a model management subsystem in accordance with some aspects of the disclosure.
  • FIG. 9 is a diagram illustrating aspects of training a generative pre-trained transformer (GPT) model (or simply a GPT) in accordance with some aspects of the disclosure.
  • GPT generative pre-trained transformer
  • FIG. 10 is a diagram illustrating additional aspects of training the GPT model of FIG. 9 in accordance with some aspects of the disclosure.
  • FIG. 11 is a diagram illustrating related elements of an FMEA in accordance with some aspects of the disclosure for an example relating to a wind turbine (WT).
  • WT wind turbine
  • FIG. 12 is a diagram illustrating the use of a combination of a GPT model and a Bidirectional Encoder Representations from Transformers (BERT) model to generate and/or identify questions and answers in accordance with some aspects of the disclosure.
  • BERT Bidirectional Encoder Representations from Transformers
  • FIG. 13 is a first diagram and a second diagram illustrating a first set of training operations, and a second set of inferencing operations, respectively, associated with the GPT and/or BERT model(s) in association with the CMS in accordance with some aspects of the disclosure.
  • FIG. 14 is a diagram illustrating components of semantic reasoning traceability associated with one or more GPT and/or BERT models in accordance with some aspects of the disclosure.
  • FIG. 15 is a diagram illustrating elements of a process or algorithm used to generate descriptive questions for the GPT in accordance with some aspects of the disclosure.
  • FIG. 16 is a diagram illustrating the use of subject matter experts (SMEs) and user feedback for a GPT training operation in accordance with some aspects of the disclosure.
  • SMEs subject matter experts
  • FIG. 17 is a diagram illustrating elements associated with using human feedback for reinforcement learning (RL)-based training of a model in accordance with some aspects of the disclosure.
  • FIG. 18 is a diagram illustrating elements of an orchestration system in accordance with some aspects of the disclosure.
  • FIG. 19 is a diagram 190 illustrating orchestration workflow compiling in accordance with some aspects of the disclosure.
  • FIG. 20 is a flow diagram illustrating a method in accordance with some aspects of the disclosure.
  • FIG. 21 is a flow diagram illustrating a method in accordance with some aspects of the disclosure.
  • FIG. 22 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
  • a digital advisor may use advanced AI-driven technology that may be used to analyze large amounts of content and extract meaningful insights in a content-driven manner guided by a user-preference approach.
  • the scope of this disclosure includes developing the software components for implementing the system on cloud, or on-premises, servers, curating relevant data sets, and optimizing system performance over time.
  • the core components include an analytics pipeline, a content management system, a data catalog, a model management system, and workflow orchestration.
  • the system provided addresses traditional areas of concern in content and knowledge management, e.g.
  • a content AI provided may be capable of creating high-quality content in a shorter period of time than it takes a human counterpart. This allows businesses to produce more content, triage complex situations, and publish solutions faster, which, in some aspects, improves an overall time to action (e.g., reduces the time to act to remedy an identified problem or to take advantage of an identified opportunity).
  • using the content AI to create, or generate, the content may allow human capital and/or human resources to aid in a complex decision-making process.
  • AI-powered content creation and optimization can be scaled up or down as needed with a suitably sized model. This allows businesses to handle cross-vertical or single-vertical, public, private, or converged content volumes without sacrificing quality.
  • the system may address personalization by having the content AI analyze data about individual users or subject matter experts (SMEs) and create personalized turnkey content tailored to their preferences and interests. This personalization may improve user engagement and drive improved solutions.
  • the system may also provide SEO (search engine optimization) by having the content AI analyze data, documents, tickets, and/or service records (or other similar data) to optimize content for search engines, thus improving its visibility and helping businesses address issues for their target audiences effectively.
  • the system, in some aspects, may also improve cost-effectiveness by having the content AI create content faster and at lower cost than human writers (not including original content creators), which is beneficial for reducing budgets associated with content creation. The reduced cost is particularly associated with converged field(s) for which a single (more expensive) human resource with expertise in each of the fields, or multiple humans each with expertise in one field, would otherwise be used, leading to higher costs.
  • the system provided in the following disclosure may provide improvements in multiple cross-industrial settings. For example, by combining content management and large language models such as a generative pre-trained transformer (GPT) model, the system may provide solutions to a wide range of problems in industrial areas, settings, or fields, by providing NLP capabilities.
  • GPT models, in some aspects, may be used for image and text analysis to detect defects and anomalies in manufacturing processes, leading to improved quality control and reduced waste.
  • GPT models may analyze sensor data from manufacturing equipment to predict maintenance needs and improve equipment uptime, thus reducing downtime and maintenance costs.
  • GPT models may be used, in some aspects, to develop chatbots or virtual assistants that understand and respond to customer inquiries and issues in natural language (NL), thus improving customer service experiences.
  • GPT models may be used to develop advanced knowledge management systems, such as intelligent search engines or automated content tagging. Such advanced knowledge management systems may improve efficiency and productivity in industrial settings where employees need to access and utilize large amounts of technical information. GPT models, in some aspects, may further be used to generate technical documentation, such as user manuals or service guides, automatically, based on user input or existing data. The automatic documentation generation may reduce the time and effort required to produce such documents and improve their accuracy.
  • the system may be associated with a Generative Content AI, guided by DEPPAA (Descriptive, Exploration, Predictive, Prescriptive, Automation, Autonomous) analytics that may facilitate cognitive processes in cross-industrial settings.
  • DEPPAA Descriptive, Exploration, Predictive, Prescriptive, Automation, Autonomous
  • the Generative Content AI may enable the generation of fresh content, innovative ideas, and effective solutions.
  • the descriptive phase of the DEPPAA analytics, in some aspects, may capture essential details of business use cases, allowing adaptability to changing content.
  • the design approach in accordance with some aspects of the disclosure may be content-driven, accommodating evolving use cases. These capabilities additionally address subsequent components of the DEPPAA, e.g., the EPPAA phases/components.
  • the Generative Content AI encompasses text, images, video, audio, and data generation, benefiting industries across physical systems, human interactions, and operational processes. It enables content extraction, insights creation, intelligent generation, process automation, personalization, and operation optimization.
  • DEPPAA analytics process may be employed.
  • the primary objective may be to comprehensively describe the business use case(s). This entails capturing the essential details of the use case, allowing the system to adapt to new content when the use case undergoes changes.
  • the design approach employed is content-driven, ensuring that the system can effectively accommodate evolving use cases.
  • a Content AI e.g., a generative AI or generative content AI
  • EPPAA e.g. Exploration, Predictive, Prescriptive, Automation, and Autonomous
  • Generative Content AI holds broad applicability across various industrial sectors, as it possesses the ability to generate fresh content, innovative ideas, and effective solutions. This technology becomes a valuable asset for businesses operating in diverse industries. In the context of Industrial Generative AI, the term “content” may encompass a wide range of elements, including text, images, video, audio, and/or data.
  • a Generative AI, in some aspects, may be capable of processing and/or producing text content, such as operational manuals, technician notes, maintenance records, system design and other articles, blog posts, product descriptions, innovations reports, Gardner reports, or marketing copy.
  • the Generative AI may be capable of processing and/or generating new images and/or video(s) using sources such as asset images and/or videos taken from drones and helicopters for inspection, asset defect images and/or videos, and other industrial product photos and/or videos.
  • a Generative AI, in some aspects, may be capable of processing and/or generating audio and simulated audio content for tasks such as detecting an anomaly or classifying audio classes.
  • a Generative AI may be capable of processing and/or generating data sets, including data from industrial sensors, customer issue ticket data, and product defect data.
  • the applications of Generative Content AI in this context may include one or more of extraction of content from existing or simulated systems, creation of insights from extracted content, generation of intelligent content, automation of content-related processes, personalization of content for business owners, and/or optimization of operations through content-driven approaches.
  • FIG. 1 is a diagram 100 of components of the system in accordance with some aspects of the disclosure. While it may be useful to generally categorize the different elements of the system into functions associated with content management 110, model management 120, and content AI 130, the different elements may be used for, or associated with, multiple functions and/or components of the system. For example, some of the elements may provide functions associated with two or more of content management 110, model management 120, and/or content AI 130.
  • Diagram 100 illustrates elements of a system for providing personalized insight for multiple users and/or across multiple settings, in accordance with some aspects of the disclosure.
  • the functions associated with content management 110 may be provided to store and manage business insights, content, and enablement.
  • the functions associated with content management 110 may include processing and content extraction at content templates creation module 113 based on the input that includes input from business use cases module 111.
  • the business use cases module 111 may process data (e.g., a hypothesis development canvas (HDC) or a resource description framework (RDF)) regarding a business goal or problem statement, and provide the data in a standardized format (e.g., an HDC, an RDF, or a YAML ain’t markup language (YAML) format as further discussed below) from which content may be extracted for analysis (e.g., to guide the analysis) of historical, current, and/or subsequently collected data based on, e.g., a language model.
  • data e.g., a hypothesis development canvas (HDC) or a resource description framework (RDF)
  • RDF resource description framework
  • YAML YAML ain't markup language
  • Content management 110 may further be associated with data and model management which may be associated with a data and model module 112 configured to manage the data and models used by an organization to make decisions in association with the content templates creation module 113 which may implement knowledge content management standards (e.g., stored and/or managed at a knowledge management module 117).
  • Content management 110 may be associated with a metadata management module 118 whose function, in some aspects, may be to arrange data and its corresponding metadata and/or definition (e.g., associate data with metadata in association with content management 110 in order to identify the contexts in which the data is to be used to provide analysis or insight).
  • Content management 110 may also be associated with a platform information management function performed by a platform management module 115 to support infrastructure (e.g., a platform management software or code).
  • a model and workflow management module 116 may be associated with content management 110 to support the system during runtime, e.g., to implement one or more workflows for processing and/or analyzing data to provide insights or other output for one or more users (e.g., a set of one or more programs may execute code during runtime to implement one or more workflows). In some aspects, the one or more workflows may be based on the data, models, and/or business use cases as described above along with information or feedback received from the functions associated with model management 120 or content AI 130.
  • model management 120 may include managing (e.g., training and updating) and executing one or more analytics models and extracting insights based on the models.
  • Model management 120 may be associated with a data collection and curation processing module 121 that may be used to ensure that the models are developed, trained, and/or updated (maintained) using accurate and up-to-date data (e.g., that incoming data is processed appropriately to ensure accuracy).
  • a model development and training module 122 may, in some aspects, implement processes and techniques used to create and prepare a model for use in a predictive analytics, prescriptive optimization, automation, and/or reinforced learning environment associated with model management 120.
  • a model management module 124 may implement core functions to integrate other functions and/or modules associated with the functions of the model management 120. Financial functions of model management, in some aspects, may also be associated with the model management module 124.
  • a feature management module 125 may provide analytics model management to manage the lifecycle of features used in analytics models.
  • a performance management module 127 may be associated with analytics model management including evaluating the performance of an analytical model and adjusting or improving the analytical model to ensure it meets the needs of the business. In some aspects, the functions of model management 120, and specifically the model management module 124, may further be associated with a model store module 126 providing storage of analytical models and from which analytical models may be retrieved.
  • the model management module 124 in some aspects, may further be associated with model deployment module 123 associated with identifying and deploying analytical models based on requests generated by the GPT module 140.
  • the GPT module 140 may be associated with content AI 130 (and/or a digital advisor).
  • the GPT module 140 may be a private GPT (e.g., a GPT trained on proprietary and/or private data) acting as a language model used to generate human-consumable text, content, computation, and/or code.
  • the GPT module 140 may be used to narrow a large number of possible failure modes to a smaller number of failure modes that may be filtered (content filtering) based on user inputs.
  • the GPT module 140 may be used in association with semantic reasoning (e.g., decision making and/or group triage).
  • the output of an analytics model deployed by the model deployment module 123 may be formatted and/or pre-processed (by a separate program and/or module, not shown, or a component of the GPT module 140) for ingestion by the GPT module 140.
  • the GPT module 140 may be used for content generation from analytics model output.
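As a hedged illustration of the formatting and/or pre-processing of analytics model output for ingestion by the GPT module 140 described above, the sketch below flattens a model's output dictionary into a natural-language prompt. The generate_text callable is a stand-in for whatever private GPT endpoint a deployment actually exposes and is not named in the disclosure.

```python
# Hypothetical sketch: turning analytics model output into a prompt for a
# language model. The prompt wording and `generate_text` hook are assumptions.

def build_prompt(model_output: dict, user_role: str) -> str:
    """Flatten analytics output into a natural-language prompt."""
    lines = [f"{key}: {value}" for key, value in model_output.items()]
    return (
        f"You are advising a {user_role}.\n"
        "Given the following analytics results, summarize the key findings "
        "and recommend a next action:\n" + "\n".join(lines)
    )

def recommend(model_output: dict, user_role: str, generate_text) -> str:
    # `generate_text` stands in for a call into the private GPT module.
    prompt = build_prompt(model_output, user_role)
    return generate_text(prompt)
```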
  • the GPT module 140 may be associated with a set of modules associated with a knowledge content curation and/or generation with a “human in the loop (HIL)” including an evaluation and monitoring module 131, a SME and/or HIL module 133, and/or a user interface and user feedback module 132.
  • HIL human in the loop
  • the GPT module 140 may be used as an AI SME (e.g., to automatically generate a recommendation) that can be used to generate a user response for a reinforcement learning process.
  • the functions associated with content AI 130 may include functions associated with a digital advisor that can provide analysis or warnings for each user of multiple users or types of users.
  • a user interface (UI) and user experience (UX) design, in some aspects, may be employed to enhance the user's experience and make it more intuitive and efficient.
  • the functions of content AI 130 may be associated with an evaluation and monitoring module 131 that may be used to evaluate the content generated by the GPT module 140 and provide opportunities for monitoring the quality of the generated content.
  • the evaluation and monitoring module 131, the user interface and user feedback module 132, the SME and/or HIL module 133, and the user preference model development module 134 may interact to improve the quality and specificity of the content generation (e.g., warnings, analysis, and/or information) for each of a plurality of users.
  • the content generation e.g., warnings, analysis, and/or information
  • Content AI may refer to an AI-aided technology that leverages artificial intelligence algorithms (e.g., GPT, Transformer) to analyze large amounts of content (e.g., data and/or information provided by different components in an industrial setting, such as video data, audio data, sensor data, manuals, HDCs, etc.), and extract meaningful insights in a way that is guided by user preferences.
  • Content AI may be associated with a wide range of applications, such as turn-key reporting, conditional monitoring, sentiment analysis, NLP, and text classification.
  • the system discussed below may use the latest large language model in content AI development, build the required software components for implementation, and deploy the system on one or more servers (e.g., cloud or on-premises servers).
  • the components of the system include one or more of an analytics pipeline, hierarchical clustering process, link-density clustering, model management, and workflow orchestration using Node-Red.
  • the system may be designed to provide business users, SME, and operators with preference-specific content (e.g., personalized content) for turn-key decision making or group triage decision making for a given event or situation.
  • a system in accordance with some aspects of the disclosure may include four subsystems including a content management system (CMS) (e.g., implementing a set of functions associated with content management 110), model management and workflow execution (e.g., implementing a set of functions associated with model management 120), GPT (e.g., GPT module 140), and user content generation and feedback (e.g., implementing a set of functions associated with content AI 130).
  • CMS content management system
  • model management and workflow execution e.g., implementing a set of functions associated with model management 120
  • GPT e.g., GPT module 140
  • User content generation and feedback e.g., implementing a set of functions associated with content AI 130.
  • user and SME preference may be extracted and managed as part of the content (e.g., by
  • the system may be a software system using an API-based interface and a common platform backend to store and manage content including images and documents in a searchable, rank-able, and relate-able (e.g., able to be identified as related for one or more different analyses) way.
  • customer preference and persona data will be tracked, analyzed, and leveraged in the content generated for users and user inquiries.
  • data may be tracked for a particular user or each of a set of user roles or characteristics used to determine the types of analyses performed, and content generated, for the particular user or for a user having a particular user role from the set of user roles.
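A minimal sketch, assuming a simple dictionary-based configuration, of how per-role tracking such as that described above might map user roles to the analyses performed and content generated for them; the role names and analysis names are invented for this example and do not come from the disclosure.

```python
# Illustrative only: mapping user roles to analyses and content types.
ROLE_PROFILES = {
    "operator": {
        "analyses": ["anomaly_detection", "failure_prediction"],
        "content": ["warnings", "step_by_step_remediation"],
    },
    "maintenance_planner": {
        "analyses": ["remaining_useful_life", "inventory_forecast"],
        "content": ["maintenance_schedule", "parts_recommendation"],
    },
    "business_owner": {
        "analyses": ["kpi_rollup", "cost_impact"],
        "content": ["executive_summary"],
    },
}

def analyses_for(user_role: str) -> list[str]:
    # Fall back to a generic profile if the role has not been seen before.
    return ROLE_PROFILES.get(user_role, {"analyses": ["kpi_rollup"]})["analyses"]
```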
  • the system, in some aspects, may also use a gateway and other third-party APIs to process and orchestrate workflows.
  • the development process may follow an agile methodology and the system may be scalable to meet any future needs.
  • the system may include the business use cases module 111 that, in some aspects, may process and extract content associated with business use cases (e.g., data provided in an HDC).
  • Content extraction management may include automated digital content curation that automatically curates digital content from inputs of use cases and that may be aligned and/or enriched from one or more corresponding business sources such as an HDC, technician notes (e.g., notes imported from a commercial asset management program), meeting notes, a collaboration website, feeds, and/or social media to provide relevant and timely information of stakeholders’ business intent.
  • the business use cases module 111 may be configured to provide text analytics and NLP to extract structured information from unstructured content to gain meaningful insights.
  • Automated document classification in some aspects, may further be provided by the business use cases module 111 configured to automatically classify documents into predefined categories based on the content.
  • the automatic classification, in some aspects, may facilitate efficient search and retrieval of relevant documents. In addition to, or independently from, the automatic document classification, the business use cases module 111, in some aspects, may be configured to automatically summarize content (e.g., in a document received at the business use cases module 111) to provide a quick overview of the content.
  • the business use cases module 111 may be configured to provide automated content categorization to automatically categorize content into predefined categories for easy retrieval.
  • the business use cases module 111, in some aspects, may be configured to automatically extract content from corresponding sources for a given use case, such as circumstances, persona, and outcomes of implementations, to gain meaningful insights of a use case and its effectiveness.
  • the CMS may further be associated with data and model management functions (e.g., implemented by data and model module 112 configured to provide the data and model management functions).
  • Data and model management may relate to the process of managing the data and models used by an organization to make decisions.
  • the data and model management may include the development, maintenance, and use of data and models to support business decisions.
  • the data and model management, in some aspects, may also include the tracking, analysis, and storage of data and models, as well as the development and evaluation of models to optimize the provision of data services and a cost of replacement/operation.
  • the CMS may include content templates creation providing knowledge content management (e.g., implemented by content templates creation module 113).
  • the content templates creation and/or the knowledge content management may include processes relating to creating pre-defined templates to facilitate the creation of content.
  • the creation of pre-defined templates (e.g., by the content templates creation module 113), in some aspects, may be used to ensure that all content created is consistent with the organization’s brand, standards, and guidelines.
  • Content templates in some aspects, may be used for various types of content including articles, webpages, documents, emails, and more.
  • content templates may also be used to help streamline the content creation process, reducing the time it takes to create content from scratch.
  • Content templates may also help ensure that all content created follows a set of guidelines, ensuring that all content meets the organization’s standards and is optimized for SEO.
  • the modules associated with content management 110 may be managed by a content management system module 114.
  • the CMS may include functions related to platform information and information technology (IT) management (e.g., implemented by platform management module 115).
  • the platform information and IT management may be an adaptable (migratable and upgradeable) approach to the implementation, maintenance, and management of an organization’s technology and data infrastructure.
  • the platform information and IT management may involve understanding how different components of the technology and data infrastructure interact and function, as well as making sure that all components are maintained and managed in a secure and efficient manner.
  • platform information and IT management may include planning and implementation of hardware and software, as well as data security, storage, and backup solutions.
  • the platform information and IT management in some aspects, may also include monitoring of system performance, troubleshooting, and system maintenance.
  • Model and workflow information and management may be associated with the CMS (e.g., model and workflow information and management functions may be associated with content management 110 and/or may be implemented by model and workflow management module 116).
  • Model and workflow information and management may include processes relating to managing the data associated with the various models and workflows that are used to extract insights in an organization.
  • Managing the data in some aspects, may include managing documentation, configuration, and tracking of the models and workflows.
  • Model and workflow information and management in some aspects, may allow for the efficient use and management of the data associated with the models and workflows. Additionally, or alternatively, the model and workflow information and management may operate (e.g., may perform lifecycle management for the models and workflows) to ensure that the models and workflows are up to date and properly configured for the organization's needs as conditions change.
  • the CMS may include knowledge management functions (e.g., implemented by knowledge management module 117) for managing knowledge content (e.g., functions related to Insert, Update, Delete (de-dup), Rank, etc.).
  • Knowledge management in this context may include processes related to capturing, organizing, and distributing knowledge regarding insights and/or analyses provided by the system within an organization.
  • knowledge management may involve the identification, capture, and sharing of knowledge assets (with their corresponding users and persona) such as documents, images, databases, and systems.
  • Knowledge management, in some aspects, may include the development and implementation of processes to ensure that knowledge is used in an effective and efficient manner.
  • the CMS may include metadata management functions (e.g., implemented by metadata management module 118).
  • Metadata management in some aspects, may include processes relating to arranging data and corresponding metadata and/or definitions.
  • metadata management may be associated with organizing information about data and its associated elements such as descriptions, definitions, and other relevant information.
  • Metadata management, in some aspects, may include data cataloging, data classification, data governance, data quality, and other related activities. In some aspects, metadata management may be used to enable data-driven decision-making, facilitate data sharing and collaboration, and ensure data security.
  • FIG. 2 illustrates an example HDC 200 that may be provided to the system in accordance with some aspects of the disclosure.
  • the HDC 200 may include information regarding an author, a creation date, and an iteration number.
  • the information regarding an author, the creation date, and the iteration number may be used to ensure that the most up-to-date version (and/or the most authoritative version, e.g., created by a user with more authority or expertise) of an HDC 200 is used for subsequent processing.
  • the HDC 200 may include information related to, and/or define, a hypothesis 205 and one or more factors related to the hypothesis.
  • HDC 200 includes information related to a set of key performance indicators (KPIs) 210, a set of business values 215, a set of stakeholders 220, a set of entities 225, a set of decisions 230, a set of predictions 235, a set of data sources 240, a set of variables 245, a set of recommendations 250, a set of impediments 255, a set of risks 260, a set of financial assessments 265, and a set of impediment assessments 270.
  • KPIs key performance indicators
  • an HDC may allow (or facilitate) business owners, researchers, and scientists to more effectively develop, refine, and test hypotheses in a systematic and structured way with a business objective in mind.
  • the HDC as an example or a data structure for organizing aspects of a “problem” to be solved, may be used to facilitate collaboration and communication among researchers and stakeholders, helping to ensure that all aspects of the hypothesis development process are properly considered and addressed.
  • the hypothesis 205 may define a business problem as “reduce unplanned downtime costs by X% while maintaining operational effectiveness (e.g., uptime, service level agreements (SLAs)).”
  • the hypothesis 205 may be associated with a set of KPIs 210 that may include one or more performance indicators associated with anomalies, failures, component quality, inventory costs, inventory turnover, obsolete inventory, excessive inventory, supplier quality, and/or supplier reliability.
  • the set of business values 215, in some aspects, may include one or more values, objectives, or factors affecting and/or associated with the business problem defined by the hypothesis 205, such as reducing inventory and procurement costs, improving operational effectiveness (uptime), providing and/or obtaining fresher consumables, freeing up working capital, and/or increasing maintenance equipment utilization.
  • the set of stakeholders 220 may identify stakeholders associated with inventory management, factory operations, procurement, suppliers, and/or customers.
  • the set of entities 225 may identify the entities associated with the business problem defined by the hypothesis 205, such as factories, distribution centers, products and/or components, suppliers, customers, and/or competitors. In some aspects, the set of decisions 230 may include one or more of decisions regarding demand forecasting, materials procurement, inventory management, datacenter management, staffing and training, consumables management, product quality, supplier quality and/or reliability, supplier reallocations, supplier management, supplier acquisition, and/or motion designer.
  • the set of predictions 235 may include predictions related to one or more of anomaly (or anomalous) events (or anomalies), failure events, product (and/or item) level demand, inventory procurement, inventory logistics and/or location(s), a demand for consumables, obsolete inventory, excessive inventory, staffing requirements, operation prognosis, supplier deliveries, supplier quality, weather-impact, and/or sustainability.
  • the HDC 200 may include an indication of a set of data sources 240 for the hypothesis 205 that may include data sources associated with one or more of inventory, sales, orders, returns, staffing, consumables, economic indicators, and/or weather.
  • the hypothesis 205 may be associated with a set of variables 245 that may include variables (or dimensions) associated with product components, product specifications, plant location(s), DC location(s), DC size, a set of sensors, motion profile, asset behaviors, failure mode(s), supplier history, day of week (e.g., Monday, Tuesday, or weekday vs. weekends), seasonality (or season), weather conditions, assembly line structure, and/or economic conditions.
  • the set of recommendations 250 may relate to one or more of inventory levels, consumables levels, procurement schedule(s), staffing and/or hiring, training and/or retraining, DC logistics, supplier allocations, and/or supplier corrective actions.
  • the set of impediments 255 may include impediments related to one or more of data quality and/or access concerns, that operating the solution may become onerous, that system reliability and quality may not (or does not) meet SLAs, a lack of field management buy-in, managing modern technology architecture, and financing and/or budgeting.
  • the set of risks 260 may include risks (and, in some aspects, associated rates of false positives and false negatives) associated with one or more of customer risks associated with delayed deliveries or quality; manufacturing risks associated with inventory shortages; staffing risks associated with deploying new technologies; financial risks associated with poor obsolete and/or excessive handling and/or execution; supply chain risks associated with weather, strikes, and/or economic risks that may disrupt the supply chain; and/or supplier risks associated with demand, reliability, and/or quality.
  • the set of financial assessments 265 may include assessments related to one or more of operation costs, uptime, customer satisfaction, product quality, supplier quality, or employee satisfaction.
  • the set of impediment assessments 270 may include assessments of impediments related to data operations, analytic skills, operations management, infrastructure, budgeting, and/or staffing. While the above discussion provides lists of possible elements of each of the set of KPIs 210, the set of business values 215, the set of stakeholders 220, the set of entities 225, the set of decisions 230, the set of predictions 235, the set of data sources 240, the set of variables 245, the set of recommendations 250, the set of impediments 255, the set of risks 260, the set of financial assessments 265, and the set of impediment assessments 270, they are provided solely as examples and are not meant to be exhaustive.
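As a non-limiting illustration, the sketch below shows how a small, invented subset of the HDC 200 fields listed above might be captured in the standardized YAML format discussed in relation to FIG. 3, using the third-party PyYAML package; the field values are examples only.

```python
# Illustrative sketch: representing a subset of HDC fields as a Python dict
# and serializing it to YAML. Values are invented; requires PyYAML.
import yaml

hdc = {
    "hypothesis": "Reduce unplanned downtime costs by X% while maintaining "
                  "operational effectiveness (uptime, SLAs).",
    "kpis": ["anomalies", "failures", "inventory costs"],
    "stakeholders": ["inventory management", "factory operations", "procurement"],
    "data_sources": ["inventory", "sales", "weather"],
    "predictions": ["failure events", "item-level demand"],
    "recommendations": ["inventory levels", "procurement schedule"],
}

print(yaml.safe_dump(hdc, sort_keys=False))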
  • FIG. 3 is a diagram 300 illustrating processing one or more HDCs to produce structured data in accordance with some aspects of the disclosure.
  • a set of HDCs including an HDC 310 may be converted into a set of corresponding structured data sets such as a set of YAML files including YAML file 320.
  • the YAML file 320 may be subsequently processed to produce data in a structured format (e.g., structured data 330).
  • the structured data 330 may be loaded into the data storage 340, e.g., as a key/value pair, and may be validated based on a taxonomy. In some aspects, the taxonomy may be defined, and/or may be associated, with an RDF.
  • the structured data 330 loaded into the data storage 340 e.g., key/value pairs included in, and/or based on, the structured data
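A minimal sketch, under the assumption that the structured data 330 is stored as flattened key/value pairs and that the taxonomy can be approximated by a set of allowed top-level keys, of loading a YAML document such as YAML file 320 and validating it before storage; the taxonomy contents and flattening convention are illustrative only.

```python
# Illustrative sketch: YAML document -> flat key/value pairs plus a simple
# taxonomy check. Requires PyYAML; the TAXONOMY set is an assumption.
import yaml

TAXONOMY = {"hypothesis", "kpis", "stakeholders", "data_sources",
            "predictions", "recommendations"}

def flatten(node, prefix=""):
    """Yield (key, value) pairs such as ('kpis.0', 'anomalies')."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from flatten(value, f"{prefix}{key}.")
    elif isinstance(node, list):
        for index, value in enumerate(node):
            yield from flatten(value, f"{prefix}{index}.")
    else:
        yield prefix.rstrip("."), node

def load_and_validate(yaml_text: str) -> dict:
    document = yaml.safe_load(yaml_text)
    unknown = set(document) - TAXONOMY
    if unknown:
        raise ValueError(f"Keys not in taxonomy: {unknown}")
    return dict(flatten(document))
```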
  • converting an HDC to a YAML and/or RDF file and/or format in some aspects may provide one or more benefits.
  • the conversions may provide improved content collaboration and querying, a standardized format, an integration with DevOps pipelines, version control, reproducibility, traceability, and reusability. In some aspects, YAML/RDF files may be easily shared and edited by multiple team members, which can help to improve collaboration and communication during the hypothesis development process.
  • YAML/RDF files may provide a standardized format for data and information, which can help to ensure that all aspects of the hypothesis are properly captured and organized. In some aspects, YAML/RDF files can be easily integrated into DevOps pipelines and other automated workflows, which can help to streamline the testing and validation process.
  • YAML/RDF files, in some aspects, may be easily tracked and version-controlled using tools such as Git, which can help to ensure that changes are properly documented and managed.
  • Git tools
  • YAML/RDF files by using YAML/RDF files to document the hypothesis development process, it may be easier to trace the various steps and decisions that were made throughout the process, which can help to improve accountability and transparency.
  • YAML/RDF files may be easily reused or repurposed for future research projects, which can help to save time and effort in the long run.
  • FIG. 4 is a diagram 400 illustrating aspects of model management in accordance with some aspects of the disclosure.
  • the aspects of model management illustrated in diagram 400 may be associated with the functions associated with model management 120 of FIG. 1.
  • model management may include data collection and curation processing (e.g., associated with the data collection and curation processing module 121).
  • Data collection and curation in some aspects, may be used to ensure that the models are developed and/or trained using accurate and up-to-date data.
  • data collection involves gathering data (or matching data descriptions) at 402 from various targeted sources (e.g., a set of inputs 401) described in the use cases via data fabric and/or data catalog to identify their storage’s physical and/or virtual locations for retrieval. Gathering the data at 402, in some aspects, may be based on workflows, pipelines, and/or other automation instructions provided by other components and/or functions of the system 450.
  • the other components and/or functions of the system 450 may include one or more additional modules associated with one or more of model management, content AI, a content advisor, or GPT and the workflows, pipelines, and/or automation instructions may be based on models or workflows trained using feedback from a human user (e.g., an SME).
  • the gathered data may then (e.g., based on the workflows, pipelines, and/or other automation instructions) be extracted, cleaned, and formatted at 403, 404, 405, and/or 406 as one or more of a YAML and/or RDF file based on the use case and data quality/resolutions requirements to create a dataset suitable for model development.
  • a YAML file generated at 405 may be mapped to an RDF-based taxonomy 407 (and ultimately to an RDF file).
  • Mapping the YAML file to an RDF-based taxonomy and/or an RDF file may include one or more of identifying YAML data entities and properties at 408, designing and/or defining the RDF vocabulary at 409, and/or mapping the elements of the YAML (e.g., the data entities and properties identified at 408) to the RDF vocabulary (e.g., the RDF vocabulary designed and/or defined at 409) to produce an RDF file at 410.
  • the data associated with the YAML and/or RDF file produced by the mapping at 406 and/or the RDF file produced at 410 may be serialized at 411, for storage, retrieval, and/or querying.
  • the serialized data in some aspects, may then be stored in a graph DB (e.g., one or more cloud, or on-premises, servers or databases) at 412.
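The sketch below is a hedged illustration of the mapping and serialization steps 408-412, using the open-source rdflib package: identified YAML properties are mapped to predicates in a placeholder RDF vocabulary and the resulting graph is serialized for storage in a graph DB. The vocabulary URI and key-to-predicate convention are assumptions, not taken from the disclosure.

```python
# Illustrative sketch only: flattened YAML properties -> RDF triples -> Turtle.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/hdc/")   # placeholder RDF vocabulary (409)

def yaml_entities_to_rdf(flat_pairs: dict, subject_id: str) -> str:
    graph = Graph()
    graph.bind("ex", EX)
    subject = URIRef(EX[subject_id])
    for key, value in flat_pairs.items():
        # Map each identified YAML property (408) to a predicate in the vocabulary.
        predicate = URIRef(EX[key.replace(".", "_")])
        graph.add((subject, predicate, Literal(value)))
    # Serialize (411) for storage, retrieval, and querying, e.g., in a graph DB (412).
    return graph.serialize(format="turtle")
```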
  • the data may then be displayed at 413 via a display unit or other output device implementing a UI or providing a UX for verification by an SME or HIL at 414.
  • the data may be processed, at 415, using an RDF to DevOps pipeline. In some aspects, the processing at 415 may produce intermediate results (e.g., artifacts) that may, at 416, be associated with one or more virtualized, cloud-based, or on-premises, services (e.g., Kubernetes™ 416a, Docker with Run Time 416b, and/or a pipeline execution service 416c).
  • the pipeline may conclude with, or the artifacts generated by the pipeline processing at 415 may be used for, generating new content at 417.
  • the content generated at 417 may then be verified at 418 by a user (e.g., SME or HIL) and may be incorporated into the system by the components and/or functions of the system 450.
  • a user e.g., SME or HIL
  • the system may, at 430, link and/or map the entities and/or information included in the YAML and/or RDF file to other content in a content graph (as described in relation to FIG. 5).
  • the linking may identify one or more of relationships between entities, common properties, granularity of the entities, and/or a usage of the link for insight (e.g., how the link may be used to provide insight and/or analysis).
  • the linking in some aspects, may be based on one or more of similarity-based algorithms 431, clustering algorithms 432, community detection algorithms 433, link prediction algorithms 434, and/or rule-based algorithms 435 (e.g., for exception handling).
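As one hedged example of the similarity-based algorithms 431 mentioned above, the following sketch scores pairs of content entries with TF-IDF cosine similarity (via scikit-learn) and proposes links above an arbitrary threshold; the disclosure does not specify a particular similarity measure or threshold.

```python
# Illustrative sketch: propose content-graph links from pairwise similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def propose_links(entries: dict[str, str], threshold: float = 0.35):
    """entries maps an entity id to its text; returns (id_a, id_b, score) links."""
    ids = list(entries)
    vectors = TfidfVectorizer().fit_transform([entries[i] for i in ids])
    scores = cosine_similarity(vectors)
    links = []
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            if scores[a, b] >= threshold:   # threshold is an arbitrary example
                links.append((ids[a], ids[b], float(scores[a, b])))
    return links
```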
  • data curation may include one or more processes associated with importing, cleaning, transforming, and organizing the data with sufficient volume to ensure it is ready and/or useful for model training (e.g., that there is enough data of a good enough quality to train a model with general applicability).
  • Validation of the output data e.g., at 414 and/or at 418
  • these data curation and verification processes may contribute to ensuring that the models are developed using the “right-quality” data and can make accurate predictions.
  • model development and training may include processes and techniques used to create and prepare a model for use in a predictive analytics, prescriptive optimization, automation, and/or reinforced/reinforcement learning (RL) environment.
  • the processes and techniques may include one or more of data preprocessing and cleaning, feature engineering, model selection, hyperparameter tuning, training, evaluation, and/or model deployment.
  • Model development and training functions, in some aspects, may be used to create and optimize models for supervised learning tasks such as classification, regression, and clustering.
  • the model development and training functions may be used to create and optimize models for unsupervised learning tasks such as dimensionality reduction and anomaly detection.
  • FIG. 5 is a diagram 500 illustrating a content graph in accordance with some aspects of the disclosure.
  • Diagram 500 illustrates different elements (e.g., components and/or inputs) of a system that may be associated with different sets of functions.
  • diagram 500 illustrates a first set of elements associated with model and workflow management 510, a second set of elements associated with meta data management 520, a third set of elements associated with data management 530, a fourth set of elements associated with automation management 540, a fifth set of elements associated with knowledge management 550, a sixth set of elements associated with business use cases 560, and a seventh set of elements associated with platform management 570.
  • a model training process in accordance with some aspects of the disclosure may be associated with a set of physical constructs 602 and a set of use cases and KPIs 604 representing the input of the models for training, with a corresponding set of physical constructs 702 and set of use cases and KPIs 704 representing the input of the models for inferencing.
  • the set of physical constructs 602 and the set of use cases and KPIs 604 may be incorporated into an asset library 606 and an asset hierarchy 610 associated with model training.
  • the set of physical constructs 602 and/or 702 and the set of use cases and KPIs 604 and/or 704 may be associated with a set of physical sensor data 608 and/or a set of physical sensor data 708 (or other data extracted from input data such as from the set of inputs 401 of FIG. 4 based on the set of physical constructs 602 and/or 702 and the set of use cases and KPIs 604 and/or 704) for training and/or inferencing.
  • At least one of the model training and the inferencing may be associated with using the set of physical sensor data 608 and/or 708 (and the asset hierarchy 610 in the case of training) to generate data 612 for a model training operation (or data 712 for an inferencing operation) that may include data characters and curation information 614 and data features 616 (or data characters and curation information 714 and data features 716).
  • the data characters and curation information 614 and data features 616 may be processed by one or more models to be trained 624 (e.g., models associated with, or corresponding to, “algorithm 1” 626 to “algorithm N” 628) while the data characters and curation information 614 and data features 616, along with the asset library 606, may be used to produce updates to the one or more models based on a set of updating operations 618 that may include, for example, one or more of a ranking operation 620 (e.g., a ranking for RL or other feedback and/or updates) or a grid search operation 622 (e.g., an operation to identify updates to one or more parameters associated with the one or more models) to optimize the one or more models.
  • a ranking operation 620 e.g., a ranking for RL or other feedback and/or updates
  • a grid search operation 622 e.g., an operation to identify updates to one or more parameters associated with the one or more models
  • the updates to the one or more models may further be associated with feedback associated with information regarding the accuracy of the current one or more models such as rewards and/or penalties 634 (e.g., information regarding a difference between an output of a current model of the one or more models and a “ground truth” associated with the input data provided to the current model of the one or more models).
  • the rewards and/or penalties 634 may be produced by a feedback operation 630 that may further include identifying a configuration and model set 636 including a selected configuration 638 and a selected model, e.g., a selected algorithm 640, and identifying, and/or applying, prediction criteria 644 and optimization criteria 646.
  • if a particular model meets a set of optimization criteria 646 for a particular application (e.g., is determined to be sufficiently accurate and/or optimized for a combination of speed, accuracy, etc. for a process associated with a particular business use case, monitoring task, or analysis), it may be provided for incorporation into a set of models managed by a model management module 650 or into a set of models used in producing recommendations at a recommendation engine 652.
  • a set of optimization criteria 646 for a particular application e.g., is determined to be sufficiently accurate and/or optimized for a combination of speed, accuracy, etc. for a process associated with a particular business use case, monitoring task, or analysis
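A minimal sketch of a grid search operation such as 622, using scikit-learn's GridSearchCV; the estimator, parameter grid, and scoring metric are examples only and stand in for the prediction criteria 644 and optimization criteria 646, which the disclosure does not fix to a particular algorithm.

```python
# Illustrative sketch: hyperparameter grid search over an example estimator.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

def tune(features, labels):
    grid = GridSearchCV(
        estimator=RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 30]},
        scoring="f1_macro",   # stands in for the prediction criteria (644)
        cv=5,
    )
    grid.fit(features, labels)
    # The selected configuration/model pair (638, 640) could then be handed to
    # the model management module (650) or recommendation engine (652).
    return grid.best_params_, grid.best_estimator_
```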
  • the model management module 750 may provide one or more of the models trained using, e.g., the elements of FIG. 6, and the generated data 712 for an inferencing operation, to an inferencing module 772 to generate one or more inferences.
  • the one or more inferences may be based on using the data 712 as inputs to the one or more of the models provided by the model management module 750.
  • the inferencing operation described in relation to FIG. 7 may involve model deployment functions included in the functions associated with model management and may include functions (e.g., steps and/or processes) associated with operationalizing a model (e.g., using a model for run-time analysis and content and/or recommendation generation).
  • the model deployment functions may be implemented by, or associated with, a model deployment module such as model deployment module 123.
  • the model deployment functions may include testing and validating the model, automating model runs, setting up feedback loops to monitor model performance, and/or deploying the model into a production environment.
  • the model deployment functions may include monitoring model results, updating models as needed, and/or deploying new versions of the model.
  • the system may include platform automation functions.
  • the platform automation functions may include automated model versioning that may include functions associated with automatically versioning models, tracking changes, and storing model versions in an organized repository. In some aspects, the platform automation functions may include automated model testing associated with automatically testing models against, or based on, certain criteria, such as accuracy and stability, and outputting the results.
  • the platform automation functions may include automated model deployment associated with deploying models automatically to new environments and ensuring that the automatically deployed models are up and running in an efficient and secure manner.
  • the platform automation functions may include automated model management associated with defining, monitoring, and managing the lifecycle of models, such as creation, deployment, and retirement.
  • model storage functions may be included in the functions associated with model management.
  • the model storage functions may be associated with one or more model repositories that may be used to index and store models, e.g., in an MLflow format so that the stored models are better self-contained.
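  • As a non-limiting illustration of indexing and storing a model in an MLflow-style repository, the following sketch (assuming the mlflow and scikit-learn libraries; the experiment name and stand-in data are hypothetical) logs a trained model together with its parameters so that the stored artifact is self-contained:

```python
# Minimal sketch of storing a trained model in an MLflow-style repository so that the
# model artifact, parameters, and environment are tracked together.
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.rand(100, 4)               # stand-in for curated features
y = np.random.randint(0, 2, 100)         # stand-in for labels
model = LogisticRegression().fit(X, y)

mlflow.set_experiment("asset-failure-models")    # hypothetical experiment name
with mlflow.start_run():
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_metric("train_accuracy", float(model.score(X, y)))
    # Logs the model with its environment; a registered_model_name argument could also
    # be supplied when a model registry (e.g., a database-backed tracking server) exists.
    mlflow.sklearn.log_model(model, artifact_path="model")
```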
  • the model storage functions may be associated with data integration blueprints that may be associated with collecting and integrating data and/or information from corresponding sources for the purpose of analysis.
  • the model storage functions may include functions associated with a model design codex that may be used to develop models to address specific business questions and objectives from a model codex.
  • the model storage functions may include a model deployment inferencing API that may be associated with deploying models into production, monitoring performance, and adjusting the models as inaccuracies are identified.
  • the model storage functions may include model evaluation functions that may track and evaluate associated metrics such as KPIs of models against defined and/or identified goals and objectives associated with one or more use cases.
  • model storage functions may include model governance process integration functions associated with integrating policies and processes to ensure compliance with regulations, data privacy and security.
  • the model storage functions, in some aspects, may include model auditing integration functions associated with tracking and documenting model performance, accuracy, and usage.
  • the model storage functions may include model optimization, continuous development, and decommission alert functions associated with alerts to refine models over time to improve accuracy and/or performance or to suggest decommissioning (e.g., due to compliance issues, policies changes, and/or improved AI/ML approaches outperforming existing models).
  • the model storage functions may further include one or more financial functions associated with a model cost management.
  • the financial functions associated with the model cost management may relate to one or more of budgeting, risk management, cash flow analysis, forecasting, portfolio management, and/or auditing and cost control.
  • a budgeting function may relate to planning and forecasting the future demand of models and their cost in execution management.
  • a risk management function may be associated with identifying, analyzing, and mitigating risks in fulfilling SLAs.
  • a cash flow analysis function may be associated with analyzing and forecasting the inflow and outflow of cash in value realization of analytics models.
  • a forecasting function, in some aspects, may relate to predicting future trends in the demand for different types of models.
  • a portfolio management function may be associated with managing investments and right-sizing analytics portfolios to maximize returns.
  • An auditing and cost control function in some aspects, may verify financial statements and reports of analytics development and return and/or identify and reduce costs to maximize profits.
  • feature management functions may be included in the functions associated with model management.
  • the feature management functions (as part of analytics model management) may be associated with one or more processes for managing the lifecycle of features used in analytics models.
  • the one or more feature management processes may be associated with understanding the data sources that feed the features, validating their accuracy, tracking feature usage and performance, managing feature versions, and versioning feature sets.
  • feature management functions may be provided to ensure that models are up to date and accurate while also allowing teams to quickly iterate and experiment with different feature combinations.
  • feature management functions in some aspects, may be associated with activities such as one or more of exploring different features and selecting the most relevant, engineering features by transforming and combining them to create new features, and/or validating features to ensure they are accurate and up to date. In some aspects, the feature management functions may additionally, or alternatively, be associated with one or more of monitoring features for changes in their performance and/or automating feature engineering and selection processes.
  • the system may implement stacked modeling using the content in the model store (e.g., the model store module 126).
  • Composing models using stacked modeling may include using the outputs of one analytics model as features and/or inputs to another model in analytics model management.
  • one or more of predicted values, model scores, feature importance, model coefficients, clustering results, outlier detection, and/or dimensionality reduction results from one or more models may be used to define, or be used as, features and/or inputs to a model composed using stacked modeling.
  • predicted values for one or more of a machine operating anomaly score, a customer churn, or a credit risk from one or more supervised machine learning models may be used as features and/or inputs of another model.
  • model scores may be metrics that measure the accuracy of a model, and the model scores may be used to compare different models and select the best one or ones to ensemble for further analysis.
  • Feature importance in some aspects, may be a measure of how important a feature is for making predictions and may be used to identify which features are most important for making accurate predictions for inclusion in a model composed using stacked modeling.
  • model coefficients (e.g., weights), in some aspects, may be used as features and/or inputs to a model composed using stacked modeling.
  • Clustering algorithms in some aspects, may be used to group data points into distinct categories (e.g., to produce clustering results) that may be used as features for further analysis.
  • outlier detection algorithms may be used to identify data points that are significantly different from the rest of the data and the identified outliers may be used to define features for further analysis (e.g., by including or excluding the outliers).
  • Dimensionality reduction algorithms may produce dimensionality reduction results that may be used to reduce a number of features and/or inputs for a model composed using stacked modeling while preserving the (relevant) original information. The results of the dimensionality reduction can be used as features for further analysis.
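  • As a non-limiting illustration of the stacked modeling described above, the following sketch (assuming the scikit-learn library; the base models, downstream model, and stand-in data are illustrative assumptions) uses an anomaly score and clustering results from base models as additional features and/or inputs to a downstream model:

```python
# Minimal sketch of stacked modeling: outputs of base models (an anomaly score and a
# cluster label) are stacked alongside the original features as inputs to another model.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.random.rand(300, 6)                 # stand-in for curated sensor features
y = np.random.randint(0, 2, 300)           # stand-in for a failure / no-failure label

anomaly_score = IsolationForest(random_state=0).fit(X).decision_function(X)
cluster_label = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Stack the base-model outputs alongside the original features.
X_stacked = np.column_stack([X, anomaly_score, cluster_label])
stacked_model = LogisticRegression(max_iter=1000).fit(X_stacked, y)
print("stacked model training accuracy:", stacked_model.score(X_stacked, y))
```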
  • performance management functions may be included in the functions associated with model management.
  • Performance management functions may be designed to be, and/or to provide, a common service for analytics model management associated with one or more processes for evaluating the performance of an analytical model with its corresponding business use case(s) (e.g., using another analytics model(s) for performance scoring).
  • the performance management functions may monitor the model’s accuracy and effectiveness, as well as ensuring it is up to date with industry SME standards, users’ (e.g., stakeholders, decision makers, etc.) standards, and users’ preferences so the users can make informed decisions.
  • Performance management functions may include data mining of analytics model performance using a common standard model and a common standard process.
  • the performance management functions may be associated with gathering and analyzing data related to the performance of one or more analytics models, e.g., the performance management functions may collect data on accuracy, speed, scalability, and other metrics that can be used to measure the performance of the model.
  • the performance data gathered, and/or collected, may then be used for analyzing the one or more analytics models (e.g., one or more models stored in the model store) to identify areas of improvement or areas of strength in the model, allowing the model to be optimized for better results.
  • the performance management functions may be useful to leverage common standard models and agreed upon criteria to identify and address areas of weakness, by taking remedial and/or corrective actions such as adding more data to the training set, improving the model architecture, or tuning the hyperparameters of the model.
  • Regular testing of the models, and/or the system as a whole, using the performance management functions to monitor the performance of models within the system may ensure that any changes that are made are improving the model’s, and/or the system’s, performance (instead of degrading the performance).
  • FIG. 8 is a diagram 800 illustrating recommendations based on model selection from a model management subsystem in accordance with some aspects of the disclosure.
  • a recommender 850 may include a recommendation engine 852 that receives input associated with a business use case (UC) 810 (e.g., a set of KPIs), a workflow 820, assets and data 830, and a user 840 to generate recommendations in accordance with some aspects of the disclosure.
  • the system may incorporate a GPT model (e.g., may include GPT module 140 implementing a GPT model).
  • the GPT model in some aspects, may be implemented individually, or in combination with a Bidirectional Encoder Representations from Transformers (BERT) model.
  • the GPT model may be used to perform content filtering, where the GPT model may be leveraged (and/or used) to narrow down many possible failure modes to a smaller set of specific failure modes of interest to a user (either a particular user or a generic user) based on user input (e.g., from the particular user or from an SME), to provide, or aid in, content filtering.
  • a pre-trained BERT model may be utilized to extract actionable information from sources such as manuals, maintenance logs, service logs, and FAQs.
  • the aspects of the system described in relation to FIGs. 6-8 may be associated with interactions between the functions associated with content management (e.g., content management 110) and the functions associated with the model management 120.
  • FIG. 9 is a diagram 900 illustrating aspects of training a GPT model (or simply a GPT) in accordance with some aspects of the disclosure.
  • the GPT model may be a private GPT model (or private GPT) specific to the system and/or to a particular user or user type.
  • the system may receive as inputs documents and images (e.g., documents and image artifacts 902) and extract data (e.g., content and metadata) from the input data (e.g., by recognizing a data type at 904 and by inferencing using a model (e.g., a language-based model at 906 or an image-based inspection model at 908) based on the recognized data type).
  • the extracted data may then be provided as input to a GPT model to produce and/or generate content at 910.
  • the content produced at 910 may then be provided to an SME to validate and/or to improve the output content through feedback at 912 associated with an SME behavior at 918.
  • the feedback mechanism may be designed to minimize the review and retraining cycles.
  • the system may provide the validated (curated) artifacts (e.g., data and metadata) to the content management system functions at 916.
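  • As a non-limiting illustration of the data type recognition and routing at 904-908 described above, the following sketch (the file type lists and the two extraction functions are illustrative placeholders standing in for the language-based model and the image-based inspection model) routes input artifacts to the appropriate extractor:

```python
# Minimal sketch of data-type recognition and routing: document artifacts go to a
# language-based extraction function and image artifacts go to an image-based
# inspection function. Both extraction functions are placeholders for the actual models.
from pathlib import Path

TEXT_TYPES = {".pdf", ".txt", ".docx"}
IMAGE_TYPES = {".png", ".jpg", ".jpeg", ".tiff"}

def extract_with_language_model(path: Path) -> dict:
    return {"source": path.name, "content": "extracted text", "metadata": {"type": "document"}}

def extract_with_image_inspection_model(path: Path) -> dict:
    return {"source": path.name, "content": "detected defects", "metadata": {"type": "image"}}

def recognize_and_extract(path: Path) -> dict:
    suffix = path.suffix.lower()
    if suffix in TEXT_TYPES:
        return extract_with_language_model(path)
    if suffix in IMAGE_TYPES:
        return extract_with_image_inspection_model(path)
    raise ValueError(f"unrecognized artifact type: {suffix}")

print(recognize_and_extract(Path("maintenance_log.pdf")))
```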
  • FIG. 10 is a diagram 1000 illustrating additional aspects of training the GPT model of FIG. 9 in accordance with some aspects of the disclosure.
  • the (validated) output from the GPT may be used to generate (or may include) a set of semi pre-curated content 1002 that may be subject to further optimization to improve context and content via additional retraining operations for the GPT model.
  • the semi pre-curated content 1002 in some aspects, may be used in combination with a catalog of existing reports and available data 1006 to produce a user request 1010.
  • the semi pre-curated content 1002 and the user request 1010 may represent preliminary results to be validated and/or improved upon (e.g., via additional training of the GPT model).
  • the user request 1010 may be used to generate a user request 1012 that is presented to a user for feedback at 1014 to improve the fit to the task.
  • the user feedback may be used to provide a better match of a user’s preference and/or improve the content quality and/or granularity.
  • the feedback mechanism may be designed to minimize the review and retraining cycles.
  • the aspects of the system described in relation to FIGs. 9 and 10 may be associated with interactions between the functions associated with the model management (e.g., model management 120) and the functions associated with content AI (e.g., content AI 130) and/or a digital advisor associated with one or more human users.
  • the system may use one or both of the GPT and/or BERT models for failure mode analysis by combining supervised and unsupervised learning techniques to narrow down many failure modes to a few possible ones.
  • the supervised component will train the GPT model to recognize features of each of a set of failure types, while the unsupervised clustering will group failure modes into general categories, reducing the total number of failure modes.
  • the GPT/BERT model may be fine-tuned, in some aspects, using hyper-parameter tuning and feature selection techniques to improve the accuracy of the failure mode analysis, e.g., as described in relation to the model management functions of FIG. 1 or the training described in relation to FIGs. 9 and 10.
  • the system may train the BERT model using labeled failure modes and their associated questions, content, and/or operational manuals to extract prescribed content from asset failure mode content.
  • the system may then use the trained BERT model to refine and answer specific questions related to the identified failure modes (e.g., the set of failure modes identified by the GPT model) as indicated by the training materials.
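  • As a non-limiting illustration of using a trained BERT model to answer a specific question about an identified failure mode, the following sketch (assuming the Hugging Face transformers library; the checkpoint name and the manual excerpt are illustrative assumptions) performs extractive question answering over manual text:

```python
# Minimal sketch of using a BERT-style extractive question-answering model to pull
# prescribed content for an identified failure mode out of manual text.
from transformers import pipeline

# The checkpoint name is an assumption; any extractive QA checkpoint could be substituted.
qa = pipeline("question-answering", model="deepset/bert-base-cased-squad2")

manual_excerpt = (
    "If bearing vibration exceeds the alarm threshold, stop the compressor, "
    "inspect the bearing housing, and replace the bearing if scoring is visible."
)
question = "What should be done when bearing vibration exceeds the alarm threshold?"

result = qa(question=question, context=manual_excerpt)
print(result["answer"], result["score"])
```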
  • the GPT may be used to parse a document produced for a failure mode(s) and effects analysis (FMEA) to identify and group failure modes.
  • FIG. 11 is a diagram 1100 illustrating related elements of an FMEA in accordance with some aspects of the disclosure for an example relating to a wind turbine (WT).
  • an FMEA may provide an item overview 1110, a standard FMEA 1120, a set of useful parameters 1130, and a set of risks 1140.
  • FIG. 12 is a diagram 1200 illustrating the use of a combination of a GPT model and a BERT model to generate and/or identify questions and answers in accordance with some aspects of the disclosure.
  • the GPT model may be used to generate “questions,” e.g., to identify issues or problems based on the input data sources and the taxonomy provided in an FMEA in association with a first set of operations 1210 while the BERT model may be used to identify the “answers,” e.g., the content from the associated questions, content, and/or operational manuals, in association with a second set of operations 1220.
  • the separation of roles may be based on the BERT model being more reliable when working from a known universe of authoritative materials than a GPT model.
  • FIG. 13 is a first diagram 1300 and a second diagram 1350 illustrating a first set of training operations, and a second set of inferencing operations, respectively, associated with the GPT and/or BERT model(s) in association with the CMS in accordance with some aspects of the disclosure.
  • the GPT and/or BERT model training is in addition to the AI/ML training performed in connection with other aspects of the model management functions (e.g., the training associated with FIG. 6).
  • the GPT and/or BERT models may be trained in parallel using similar inputs. For example, information from one or more of the CMS content 1302, the knowledge graph 1304 (e.g., the knowledge graph of FIG. 5), the questions graph 1306 (e.g., the questions graph of FIG. 12 based on the FMEA of FIG. 11), external data sources 1308, and FMEA scenarios and data 1310 may be provided to a first GPT model (e.g., GPT questions 1312 and/or GPT answers 1314) and/or a first BERT model (e.g., BERT questions 1316 and/or BERT content 1318) to produce input for a “GPT” 1320 (e.g., one or more of the GPT model 1322 or the BERT model 1324).
  • the GPT 1320 may produce GPT output content 1328 and/or BERT output content 1330 based on a current state of the GPT model and/or the BERT model.
  • the GPT output content 1328 and/or BERT output content 1330 may be provided to a HIL 1334 (e.g., an SME or user) that may define one or more of a set of prediction criteria 1336 (e.g., identifying a ground truth, providing prediction evaluation criteria, etc.), a set of optimization criteria 1338 (e.g., criteria or parameters related to a desired speed of convergence, a threshold accuracy, etc.), and/or a set of suitability criteria 1340 and may further provide feedback approving or rejecting the content.
  • the system may generate an RL reward-penalty 1332 used to update one or more of the GPT questions 1312, GPT answers 1314, BERT questions 1316, BERT content 1318, GPT model 1322, and/or BERT model 1324.
  • the set of inferencing operations may mirror the inferencing operations discussed above in relation to FIG. 7.
  • diagram 1350 illustrates that the inferencing operation may include receiving inputs based on a physical construct 1362 and a set of UCs and/or KPIs 1364 that may be used to define data retrieved from physical sensor data 1366.
  • Data characters and curation 1368 and data features generation 1370 may then be processed by one or more models as determined by model management 1372.
  • the one or more models may include the GPT model and/or the BERT model (e.g., the GPT 1320 including one or more of the GPT model 1322 or the BERT model 1324).
  • the components and/or functions of the system described in relation to FIGs. 11-13 may be associated with interactions between the functions associated with content management (e.g., content management 110) and the functions associated with the GPT (e.g., the GPT module 140).
  • FIG. 14 is a diagram 1400 illustrating components of semantic reasoning traceability associated with one or more GPT and/or BERT models in accordance with some aspects of the disclosure.
  • the semantic reasoning traceability may be associated with tracing decision making and/or group triage evidences associated with the one or more GPT and/or BERT models.
  • the semantic reasoning traceability may include transforming the knowledge graph data 1404 into a sentence at 1438 and concatenating, at 1440, the transformed knowledge graph data with text input data (e.g., text based on one or more of the CMS content 1402, the knowledge graph 1404 (e.g., the knowledge graph of FIG. 5), or the questions graph 1406 (e.g., the questions graph of FIG. 12)).
  • the GPT 1420 including one or more of the GPT model 1422 or the BERT model 1424 may be fine-tuned on a dataset that includes both text and knowledge graph data. In some aspects, during the training described in relation to FIG. 13, the GPT model 1422 and/or the BERT model 1424 may be trained to generate, in the output text, (additional) text that may be used to identify a context (or source) of the generated text from the knowledge graph.
  • the output from the GPT 1420 may then be preprocessed, at 1436, to extract entities and concepts included in the output from the GPT 1420.
  • the system may construct a knowledge subgraph at 1444 and map, at 1442, the (nodes of the) knowledge subgraph to the (nodes of the) knowledge graph 1404.
  • the mapping may then be presented to a user at 1446 for the user to understand the basis of the text output of one or more of the GPT model 1422 and/or the BERT model 1424 (e.g., to analyze the results based on the semantic reasoning traceability).
  • the components and/or functions of the system described in relation to FIG. 14 may be associated with interactions between the functions associated with content management (e.g., content management 110), the functions associated with the GPT (e.g., the GPT module 140), the functions associated with content AI (e.g., content AI 130), and/or a digital advisor associated with one or more human users.
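  • As a non-limiting illustration of the semantic reasoning traceability described in relation to FIG. 14, the following sketch (assuming the networkx library; the triples, input text, generated output, and the simple substring-based entity matching are illustrative assumptions) verbalizes knowledge graph data, concatenates it with text input, and maps entities found in the generated output back to a knowledge subgraph:

```python
# Minimal sketch of semantic reasoning traceability: knowledge-graph triples are
# verbalized into sentences, concatenated with the text input, and entities found in the
# generated output are mapped back to a subgraph of the knowledge graph.
import networkx as nx

triples = [
    ("gearbox", "is_component_of", "wind turbine"),
    ("gearbox", "has_failure_mode", "bearing wear"),
]

# Transform the knowledge graph data into sentences (corresponding to 1438).
kg_sentences = " ".join(f"{s} {p.replace('_', ' ')} {o}." for s, p, o in triples)

# Concatenate with the text input (corresponding to 1440).
text_input = "High vibration was reported on the wind turbine drivetrain."
model_input = kg_sentences + " " + text_input

# Stand-in for generated output from the GPT/BERT model.
generated = "Bearing wear in the gearbox is the most likely cause of the vibration."

# Extract entities/concepts from the output (1436) and build the knowledge subgraph (1444).
graph = nx.DiGraph()
graph.add_edges_from((s, o, {"relation": p}) for s, p, o in triples)
mentioned = [n for n in graph.nodes if n.lower() in generated.lower()]
subgraph = graph.subgraph(mentioned)

# Map the subgraph back to the knowledge graph for presentation to the user (1442/1446).
print("traced entities:", list(subgraph.nodes))
print("traced relations:", list(subgraph.edges(data=True)))
```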
  • FIG. 15 is a diagram 1500 illustrating elements of a process or algorithm used to generate descriptive questions for the GPT in accordance with some aspects of the disclosure.
  • the process illustrated in diagram 1500 may be used to convert a set of numerical outputs of an analytical model into a text format that may be used for GPT (e.g., may be provided to a GPT model as an input).
  • Diagram 1500 illustrates that an analytics model pre-processing 1502 may pre-process elements associated with an analytical model such as an asset associated with the model (e.g., asset in scope 1504 or a particular device in an industrial setting), a set of “asks” and/or decisions associated with the analytics model (e.g., analytics asks/decisions 1506 such as failure modes), a set of KPIs associated with the analytics model (e.g., analytics KPIs 1508 such as a vibration anomaly score), and a set of outputs produced by the analytics model (e.g., analytics outputs 1510 such as time series data for average vibration score per minute).
  • the analytics model pre-processing 1502 may produce one or more of an output and/or results based on the raw analytics output data and corresponding metadata content in a format supported by the GPT (e.g., a text format supported for GPT inputs).
  • the output and/or results of the analytics model pre-processing 1502 may be subject to additional pre-processing via a set of NLP operations 1512.
  • the NLP operations 1512 may include a text cleaning operation 1514 that may remove any irrelevant characters, punctuation, and/or formatting from the raw analytics output data and corresponding metadata content.
  • the NLP operations 1512 in some aspects may further include a normalization operation 1516 that may convert the text data produced by previous operations into a consistent format.
  • a summarization and/or aggregation operation 1518 may extract insights (e.g., a set of insights considered to be the most important) and corresponding meta information from the analytics output and may summarize and/or aggregate the extracted insights and corresponding metadata in a readable content and format via text summarization. In some aspects, the summarization and/or aggregation operations 1518 may include tokenizing the data using a tokenizer, e.g., using the natural language toolkit (NLTK).
  • the NLP operations 1512 may include a formatting operation 1520 that formats the text output into a format that is suitable for input into GPT (e.g., into a specialized format for GPT training and/or inferencing).
  • the formatting operations 1520 may include converting the tokenized data into a special format for GPT training data.
  • the formatted data (input for at least one of a GPT training or inferencing) may then have a set of validation operations 1522 applied (e.g., a set of automated or human-assisted operations) as part of the NLP operations 1512 to validate the output text and ensure that it accurately reflects the insights or information contained in the original analytics output.
  • the NLP operation 1512 may generate or produce one or more statements 1524 (e.g., clean, normalized, summarized, formatted, and validated statements) that may be saved and/or provided (e.g., uploaded) for a GPT training and/or inferencing.
  • an analytical model for predicting one or more failure modes associated with a Hitachi compressor Model H based on a time series of vibration anomaly scores may indicate the aspects of the analytics model in various locations (e.g., in different metadata fields, in an output of the analytical model, in a set of documents used to generate the analytics model, etc.).
  • the analytics model pre-processing 1502 and the NLP operations 1512 may identify content to be used to generate the one or more statements including “3-year-old Hitachi Compressor model H,” “average vibration score of 75 per hour,” and “failure modes,” and may produce a summary statement including the identified elements in a natural language format (e.g., “What are the potential failure modes associated with a 3-year-old Hitachi Compressor model H that has an average vibration score of 75 per hour?”).
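  • As a non-limiting illustration of the pre-processing and NLP operations described in relation to FIG. 15, the following sketch (assuming the NLTK library; the metadata field names and the statement template are illustrative assumptions) turns the compressor example above into a clean, tokenized natural language statement:

```python
# Minimal sketch of turning raw analytics output and metadata into a natural language
# statement suitable for GPT training or inferencing. Field names are illustrative.
import re
import nltk

for resource in ("punkt", "punkt_tab"):      # tokenizer data; missing ids are ignored quietly
    nltk.download(resource, quiet=True)

analytics_output = {
    "asset_in_scope": "3-year-old Hitachi Compressor model H",
    "analytics_ask": "failure modes",
    "kpi": "average vibration score",
    "kpi_value": "75 per hour",
}

def clean(text: str) -> str:
    # Text cleaning (1514): drop irrelevant characters and collapse whitespace.
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s\-./?]", "", text)).strip()

# Normalization and summarization/aggregation (1516/1518): compose one readable statement.
statement = (
    f"What are the potential {clean(analytics_output['analytics_ask'])} associated with a "
    f"{clean(analytics_output['asset_in_scope'])} that has an "
    f"{clean(analytics_output['kpi'])} of {clean(analytics_output['kpi_value'])}?"
)

# Formatting (1520): tokenize with NLTK so the statement can be packaged for GPT training.
tokens = nltk.word_tokenize(statement)

# Validation (1522): a simple automated check that the key elements survived the pipeline.
assert "Hitachi" in statement and "75" in statement
print(statement)
print(tokens[:8])
```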
  • FIG. 16 is a diagram 1600 illustrating the use of SME and user feedback for a GPT training operation in accordance with some aspects of the disclosure.
  • content generation may be performed by GPT 1602 (one or more of GPT model 1604 and/or BERT model 1606) based on the output of an analytics model (not shown). In some aspects, a prompt or API call may be created and/or generated to provide the GPT model (e.g., GPT 1602, GPT model 1604, and/or BERT model 1606) with a starting point.
  • the created and/or generated prompt or API call may introduce a topic of a desired knowledge content to generate.
  • the topic might be UC KPIs, analytics model outputs, sensor outputs trending, or any other issue or metric of interest.
  • the prompt or API call may be provided as input to one or more GPT models (e.g., GPT 1602, GPT model 1604, and/or BERT model 1606).
  • the one or more GPT models (e.g., GPT 1602, GPT model 1604, and/or BERT model 1606), in some aspects, may generate text (output content) based on the prompt or API call.
  • the generated text (output content) may include GPT output content 1608 and BERT output content 1610.
  • the generated text (e.g., GPT output content 1608 and BERT output content 1610) may be provided to one or more of an SME or a user.
  • the SME may, at 1612, review the generated text and make modifications for its correctness (e.g., to improve the quality of the output, such as its relevance to the topic, or its technical accuracy or use of jargon).
  • the modified text (the modified GPT output content 1608 and/or the modified BERT output content 1610 after review at 1612) may then be provided as knowledge content and stored in the CMS with lineages (e.g., with metadata indicating the source of the modified text such as an iteration of the training or other information that may be used to determine how to use the modified text) and may further be used to update and/or retrain the GPT 1602 (e.g., to update one or more of the GPT model 1604, and/or the BERT model 1606) based on RL approach 1620 (e.g., an RL algorithm/operation) to produce new content that is expected to be better (e.g., more technically accurate or useful) than the previously generated content.
  • the user may review, at 1614, the generated text and make modifications for its usability (e.g., to improve the quality of the output, such as its understandability or its utility to the particular user).
  • the modified text (the modified GPT output content 1608 and/or the modified BERT output content 1610 after review at 1614) may then be provided to generate a preference model (e.g., a user preference model 1616) that may be stored in the CMS with lineages (e.g., with metadata indicating the source of the modified text such as an iteration of the training or other information that may be used to determine how to use the modified text).
  • the modified text may be related to the content which may be used for decision making by the user and may further be used to update and/or retrain the GPT 1602 (e.g., to update one or more of the GPT model 1604, and/or the BERT model 1606) based on RL approach 1618 (e.g., an RL algorithm/ operation) to produce new content that is expected to be better (e.g., more useful for the user) than the previously generated content.
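  • As a non-limiting illustration of converting SME or user review into a reward and/or penalty for the RL approaches described above, the following sketch (the similarity measure and reward scaling are illustrative assumptions) rewards generated content that the reviewer needed to modify only slightly:

```python
# Minimal sketch of turning SME/user review into a reinforcement-style reward: the less
# the reviewer has to modify the generated content, the higher the reward.
from difflib import SequenceMatcher

def review_reward(generated: str, reviewed: str) -> float:
    """Map the similarity between generated and reviewer-modified text to [-1, 1]."""
    similarity = SequenceMatcher(None, generated, reviewed).ratio()  # 0.0 .. 1.0
    return 2.0 * similarity - 1.0  # unchanged text -> +1, fully rewritten -> close to -1

generated = "Inspect the compressor bearing weekly."
sme_modified = "Inspect the compressor bearing and lubrication system weekly."

reward = review_reward(generated, sme_modified)
print(f"reward/penalty signal: {reward:.2f}")  # fed back to update the GPT/BERT models
```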
  • FIG. 17 is a diagram 1700 illustrating elements associated with using human feedback for RL-based training of a model in accordance with some aspects of the disclosure.
  • one or more of the GPT model 1604 and/or the BERT model 1606 may output content to be reviewed by a HIL (e.g., an SME or a user) to generate one or more reports and associated feedback in the set of reports/feedback 1702 for a particular version and/or iteration of a trained GPT model.
  • the user (e.g., the SME) may be presented with open-ended questions and/or a structured questionnaire to elicit feedback (e.g., to identify the user’s interpretation of the reports and/or to elicit the user’s intents and patterns from the user’s point of view regarding the reports).
  • NLP may be used at 1708 to extract intents from the SME's interpretation, e.g., NLP techniques may be used to identify key themes and intents.
  • the operations at 1706 and/or 1708 may be used to generate an additional set of reports/feedback in the set of reports/feedback 1702 for a current version and/or iteration of a trained GPT model.
  • a topic modeling algorithm may be used to identify the most important topics discussed in the interpretation or use a named entity recognition algorithm to identify entities that are mentioned frequently.
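  • As a non-limiting illustration of using a topic modeling algorithm to identify the most important topics in SME interpretations, the following sketch (assuming the scikit-learn library; the interpretations and topic count are illustrative assumptions) extracts candidate themes and/or intents:

```python
# Minimal sketch of extracting key themes/intents from SME interpretations of reports
# using a topic modeling algorithm (latent Dirichlet allocation).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

sme_interpretations = [
    "The vibration trend suggests bearing wear, schedule an inspection.",
    "Bearing wear is likely, order replacement parts before the next outage.",
    "Oil temperature drift points to a cooling issue rather than bearing wear.",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(sme_interpretations)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(doc_term)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"intent/topic {topic_idx}: {top_terms}")
```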
  • the system may use the SME inputs and/or intents from reports to create rules and/or heuristics that may be incorporated into the algorithm.
  • the system may use the SME inputs/intents from reports to train a machine learning model (e.g., the GPT model). Additionally, or alternatively, the system may use the SME inputs/intents from reports to guide the development of the algorithm (or the machine learning and/or GPT model).
  • the output of the operations at 1710 may improve the relevance of the content output by the GPT model based on the content curation performed by the user (e.g., based on a similarity to the content curated by the user).
  • the output of the operations at 1710 may be aggregated to generate an ensemble for an RL policy at 1714, that may in turn, be provided to, and/or used by, the functions associated with knowledge management at 1712.
  • the components of a digital advisor UI and/or UX may include an administrator UI.
  • An administrator UI in some aspects, may include a set of functions and/or UIs for evaluating and monitoring the GPT output and/or the CMS output content.
  • the set of functions and/or UIs for evaluating and monitoring the GPT output and/or the CMS output content may include UIs for quality control engineering, standardizing criteria and dimensions for content evaluation and feedback (user preferences), human review and curation (e.g., HIL), automated content testing, system performance metrics, defining/creating one or more personas, defining UI and/or UX components and user behavior KPIs and collecting user feedback regarding reports and content, analyzing user feedback, incorporating feedback into GPT, and/or iteration.
  • Quality control in some aspects, may be used to ensure that the content produced by GPT is accurate, relevant, and well-written and may be part of the admin UI/UX.
  • standardizing criteria and dimensions for content evaluation and feedback may be associated with defining a checklist for the GPT output/CMS output against a set of predetermined criteria, such as relevance, actionable recommendation, readability, or other relevant factors.
  • the UI for human review and curation may be an important step to evaluate and monitor GPT output/CMS output content. In some aspects, the HIL may involve having a person (e.g., an SME or a user) review the output and give feedback on its accuracy and quality.
  • Automated content testing UIs in some aspects, may be a tool for evaluating and monitoring GPT output/CMS output content involving using a set of predetermined tests to measure the accuracy and quality of the output. In some aspects, the set of tests may be used to measure the accuracy of the output against a given dataset, or to compare the output against a set of predetermined standards. The validation outcome could be used for fine tuning GPT output/CMS output.
  • a UI for system performance metrics may be associated with one or more performance metrics used to evaluate and monitor the GPT output and/or the CMS output content and corresponding platform system.
  • the performance metrics may measure the accuracy and quality of the output, as well as the speed and scalability of the system.
  • Performance metrics may, in some aspects, be used to track the performance of the system over time and identify areas for improvement (e.g., in association with a model management system and/or a set of functions associated with model management).
  • a set of UIs for defining and/or creating one or more personas may be used to identify a target audience for the GPT output and/or the CMS output content and to create a persona that reflects their values, interests, and needs (e.g., the type of data that provides useful information to the particular target audience).
  • the set of UIs for defining and/or creating one or more personas may be used to define UI and/or UX components and user behavior KPIs and collect user feedback regarding reports and content.
  • the set of UIs for analyzing user feedback in some aspects, may be used to identify patterns and understand what users think about the content and decision-making process.
  • the UI for incorporating feedback into GPT and for iteration may provide a user control of the incorporation of the user feedback into the GPT output content, such as by controlling the number of iterations of the user feedback to use.
  • a user interface and user feedback UI may be provided, in some aspects, in association with a decision making process.
  • the user interface and user feedback UI may include a user UI that may reflect the user feedback and allow users to interact with the GPT output/CMS output content.
  • the system may leverage the UI/UX in conjunction with user surveys, interviews, and/or other methods to collect user feedback about the content and their decision-making process and preference.
  • FIG. 18 is a diagram 1800 illustrating elements of an orchestration system in accordance with some aspects of the disclosure.
  • the orchestration system is described below using terminology associated with a node-RED tool, but is to be understood to be representative and not limiting.
  • Node-RED is a tool for the orchestration and automation of different services and applications and may be used to integrate with Content AI via the Content AI API.
  • The Node-RED tool (or other similar orchestration approaches and/or systems) may use a HyperText Transfer Protocol (HTTP) request node to call the Content AI API and get a response back. From there, additional nodes may be available in Node-RED to process the response, create flows, and connect other nodes or services to the API.
  • Node-RED in some aspects, may be used to integrate with other different services and applications. With the Content AI API, Node-RED may be used to create powerful flows and automate content analysis and AI processes.
  • Node-RED may be used for system tasks orchestration, e.g., to create flows that integrate with different APIs to perform tasks automatically.
  • Node-RED in some aspects, may be used to set up the credentials for the scope associated with different APIs, e.g., by creating an account with the API provider and generating API keys or access tokens.
  • the API nodes may, in some aspects, be installed for the APIs to use.
  • the large library of pre-built nodes for different APIs in Node-RED may be used and/or leveraged to simplify the designing and implementation of the orchestration tasks.
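  • Node-RED flows are typically assembled in its visual editor, so the following sketch expresses the equivalent HTTP request logic in Python (the endpoint URL, credential, and payload fields are hypothetical placeholders) to show the kind of call an “http request” node would make against the Content AI API:

```python
# Minimal sketch of the HTTP request/response logic that an orchestration flow (e.g., a
# Node-RED "http request" node) would perform against a Content AI API.
import requests

CONTENT_AI_URL = "https://content-ai.example.com/api/v1/analyze"   # hypothetical endpoint
API_TOKEN = "replace-with-api-key"                                  # hypothetical credential

def call_content_ai(document_text: str) -> dict:
    response = requests.post(
        CONTENT_AI_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"text": document_text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # downstream nodes/flows would process this response

if __name__ == "__main__":
    print(call_content_ai("Vibration anomaly detected on compressor model H."))
```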
  • FIG. 19 is a diagram 1900 illustrating orchestration workflow compiling in accordance with some aspects of the disclosure.
  • the elements of the diagram 1800 or the diagram 1900 may be associated with, or used to perform and/or orchestrate, (1) a CMS of documents, text, and images, (2) a model management system which contains feature management, model management, and performance management as a core function, (3) GPT services, (4) digital advisor UI/UX and HIL Services, and/or (5) SME content review and update services.
  • the orchestration of the CMS of documents, text, and images may be associated with setting up a Node-RED flow that handles the CMS requests, e.g., by creating a Node-RED flow that contains a trigger node, a function node, and a node to interact with the CMS.
  • the trigger node in some aspects, may be configured to listen for requests to the CMS, e.g., by setting the type of request (e.g., POST, GET, DELETE), the URL of the CMS, and any other parameters as required by the CMS.
  • the function node created to process the request may be used to process the data from the request (e.g., to prepare the request data for the CMS node).
  • the CMS node may be configured to process the request, e.g., to interact with the CMS and perform the desired operation, e.g., creating a new document, updating an existing document, or deleting (marked for delete) a document.
  • the Node-RED flow including the above nodes and/or functions may then be deployed and tested/validated by sending requests to the Node-RED flow and testing the response from the CMS. Once the Node-RED flow is set up and tested, it can be used to orchestrate a CMS of documents, text, and images.
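  • As a non-limiting illustration of the trigger node, function node, and CMS node interaction described above, the following sketch expresses the equivalent request handling in Python (the CMS endpoint and payload shape are hypothetical placeholders):

```python
# Minimal sketch of trigger -> function -> CMS handling, expressed in Python rather than
# as Node-RED nodes. The CMS endpoint and payload shape are hypothetical.
from typing import Optional
import requests

CMS_URL = "https://cms.example.com/api/documents"   # hypothetical CMS endpoint

def handle_cms_request(method: str, payload: Optional[dict] = None,
                       doc_id: Optional[str] = None) -> dict:
    """Function-node equivalent: prepare the request, then hand it to the CMS."""
    url = f"{CMS_URL}/{doc_id}" if doc_id else CMS_URL
    method = method.upper()
    if method == "POST":          # create a new document
        response = requests.post(url, json=payload, timeout=30)
    elif method == "GET":         # retrieve a document
        response = requests.get(url, timeout=30)
    elif method == "DELETE":      # mark a document for deletion
        response = requests.delete(url, timeout=30)
    else:
        raise ValueError(f"unsupported request type: {method}")
    response.raise_for_status()
    return response.json()

# Example trigger: create a new document in the CMS.
# handle_cms_request("POST", {"title": "Inspection report", "body": "..."})
```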
  • the orchestr ation of the model management system may be associated with using Node-RED to create a flow that receives requests from users (or workflow) and passes them to the appropriate model management system.
  • the functions associated with the flow may include creating, updating, and deleting models, as well as retrieving performance and feature data.
  • Node-RED may be used, in some aspects, to create a flow that receives requests for performance and feature data from the model management system and forwards them to the appropriate feature and performance management systems.
  • Node-RED may further be used to create a flow that receives performance and feature data from the feature and performance management systems and forwards it to the model management system.
  • Node-RED may also be used to create a flow that receives model updates from the model management system and forwards them to the feature and performance management systems.
  • Node-RED in some aspects, may additionally be used to create a flow that receives feature and performance updates from the feature and performance management systems and forwards them to the model management system.
  • Orchestrating GPT services may include creating a Node-RED flow that calls the GPT service.
  • Creating the Node-RED flow may include adding an “HTTP In” node to the beginning of the flow to receive the input from the GPT service, creating a function node to process the input from the GPT service, and creating an “HTTP request” node to call the GPT service.
  • the “HTTP request” node may be configured to send the input from the function node.
  • An “HTTP response” node in some aspects, may be added to the end of the flow to send the response back to the GPT service.
  • Node-RED may be deployed to orchestrate the GPT services with other workflows.
  • Node-RED may further be used to create a flow that receives requests from users (or workflows) and passes them to the appropriate GPT services.
  • Node-RED may be used, in some aspects, to orchestrate digital advisor UI/UX and HIL services by providing a graphical user interface (GUI) to create flows to configure, automate and monitor the services. In some aspects, this may allow administrators and users to create their specific (or preferred) reporting content that can be triggered by events, report on KPIs or obtain data from a variety of sources.
  • the Node-RED library of nodes may be leveraged to connect to external services, such as digital advisor UI/UX and HIL services.
  • Each node may be configured to access and interact with the services using a specific set of instructions. Accordingly, administrators and users can customize the nodes to create more complex flows that can be triggered by events or data from other sources. Once the flows are configured, they can be deployed to the cloud or an edge device to automate the services.
  • Node-RED in some aspects, may provide a dashboard to monitor the flows and make sure they are running as expected. This makes it easy to track the performance of the services and make necessary changes when needed.
  • the SME content review service may be designed to allow SMEs to review content and provide feedback. This can be accomplished by integrating a review tool (e.g., evat for image) into the Node-RED flow, which allows SMEs to easily access and review the content.
  • the SME content update service may be designed to allow SMEs to make updates to the content. This can be accomplished by integrating a content management system into the Node-RED flow, which allows SMEs to easily access and update the content and flow to the CMS for update.
  • the services may be tested to ensure that they are working as intended. This includes testing the Node-RED flows, the SME content review service, and the SME content update service. In some aspects, once these services have been tested, they can be deployed to the production environment. This includes configuring the services to work with the client’s infrastructure and ensuring that they are accessible to the appropriate users. Accordingly, in some aspects, a Node-RED workflow(s) for SME content review and/or SME content update services requires a combination of technical expertise, SME and HIL input, and an understanding of the client’s specific requirements to develop effective services that meet the client’s needs.
  • FIG. 20 is a flow diagram 2000 illustrating a method in accordance with some aspects of the disclosure.
  • the method is performed by a system or device (e.g., the system illustrated in diagram 100 or computing device 2205) that performs various analyses, based on content in a CMS, models associated with a model management system, GPT models and user preferences.
  • the system may train one or more MT NLP models (e.g., a GPT model and/or a BERT model) to produce accurate NL content using a first set of feedback associated with an accuracy of the natural language.
  • the NL queries may be based on at least one of the received information or the processed information (e.g., information received and/or processed by a CMS or the functions associated with content management 110).
  • the feedback may be received from an SME as described in relation to the operations performed at 912, 1008, 1612, and/or 1706/1708 of FIGs. 9, 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, and/or the set of reports/feedback 1702 of FIGs. 1, 13, and/or 17.
  • the feedback may be provided for one or more of an initial training (before model deployment) or a run-time retraining (e.g., based on content output by the trained and/or deployed one or more MT NLP models during run time) as described in relation to FIGs. 9, 10, 13, 16, and 17.
  • the system may train one or more MT NLP models (e.g., a GPT model and/or a BERT model) to produce, for at least a first user, useful NL content using a second set of feedback associated with a usefulness, to the first user, of the NL content.
  • the NL queries may be based on at least one of the received information or the processed information (e.g., information received and/or processed by a CMS or the functions associated with content management 110) as well as preference data associated with one or more of the SME and/or HIL module 133, the user feedback module 132, and/or the user preference model development module 134.
  • the feedback may be received from a first user as described in relation to the operations performed at 1014, 1614, and/or 1706/1708 of FIGs. 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, the user preference model 1616, and/or the set of reports/feedback 1702 of FIGs. 1, 13, 16, and/or 17.
  • the feedback, in some aspects, may be provided for one or more of an initial training (before model deployment) or a run-time retraining (e.g., based on content output by the trained and/or deployed one or more MT NLP models during run time) as described in relation to FIGs. 9, 10, 13, 16, and 17.
  • the system may receive information from one or more input sources associated with at least one of an industrial process or an industrial product.
  • the data may be received from a plurality of sources.
  • 2008 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination.
  • the received information may include two or more of image data, text, vibration data, audio data, structured data, or sensor data.
  • the one or more information sources may be associated with a set of user-defined parameters received from a user before receiving the information.
  • the set of user-defined parameters may include one or more of a desired outcome, a set of KPIs, a set of affected users, or a taxonomy associated with a desired outcome. In some aspects, a plurality of sets of user-defined parameters may be received in association with a corresponding plurality of use cases (e.g., business use cases).
  • the system may process the received information using one or more MT models.
  • the models may be associated with a model management system.
  • 2010 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. In some aspects, processing the received information at 2010 may include processing the received information based, at least in part, on a set of user-defined parameters.
  • the set of user-defined parameters may include one or more of a desired outcome, a set of KPIs, a set of affected users, or a taxonomy associated with a desired outcome. In some aspects, a plurality of sets of user-defined parameters may be received in association with at least one of an HDC, an RDF, or a YAML format or file.
  • the user-defined parameters and/or the at least one of the HDC, RDF, or YAML format may be associated with a set of corresponding use cases (e.g., business use cases).
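  • As a non-limiting illustration of receiving a set of user-defined parameters in a YAML format or file, the following sketch (assuming the PyYAML library; the field names and values are illustrative assumptions, not a prescribed schema) loads parameters such as a desired outcome, a set of KPIs, and a set of affected users for an associated use case:

```python
# Minimal sketch of loading user-defined parameters supplied as YAML content.
# The field names and values are illustrative assumptions, not a prescribed schema.
import yaml  # PyYAML

USER_PARAMETERS_YAML = """
use_case: compressor-health-monitoring
desired_outcome: reduce unplanned downtime
kpis:
  - average vibration score
  - mean time between failures
affected_users:
  - maintenance engineer
  - plant manager
taxonomy: compressor-failure-modes
"""

params = yaml.safe_load(USER_PARAMETERS_YAML)
print(params["desired_outcome"], params["kpis"])
```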
  • the processing may be associated with, e.g., the functions associated with the content management 110 and/or the functions associated with the model management 120, the inputs 401, the data management 530, the processing associated with generating data at 612/712, and/or the set of physical sensor data 608/708 of FIGs. 1-7.
  • the one or more MT models may be associated with different analytics models configured based on different use cases and/or user personas, e.g., may be configured to process the received information to extract data relevant to a related and/or associated use case.
  • the system may generate, for the first user, a first NL query regarding at least one of the received information or the processed information using the one or more MT NLP models.
  • the one or more MT NLP models may be associated with a model management system.
  • 2012 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination.
  • the one or more MT NLP models may include one or more of a GPT model or a BERT model. In some aspects, the system may generate, at 2012, the first NL query using the GPT model.
  • generating the first NL query may be associated with, e.g., the functions associated with the GPT module 140 and/or the functions associated with the model management 120, the “GPT” inferencing operation 1374 or GPT 1420 to produce GPT output content 1428 and/or BERT output content 1430 using one or more trained “GPT” models of FIGs. 1, 13, and 14.
  • the one or more MT NLP models may be associated with different analytics models configured based on different use cases, e.g., may be configured to process inputs to extract semantic content and generate a “query” based on data relevant to a related and/or associated use case or for a particular user or user persona.
  • the one or more MT NLP models used to generate the first NL query may be managed by a model management system or set of functions associated with model management.
  • the system may generate, for the first user, a first NL recommendation based on the first NL query and at least one of the received information or the processed information using at least one corresponding MT NLP model.
  • the at least one MT NLP model may be associated with a model management system.
  • 2014 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination.
  • the one or more MT NLP models in some aspects, may include one or more of a GPT model or a BERT model. In some aspects, the system may generate, at 2014, the first NL recommendation using the BERT model.
  • generating the first NL recommendation may be associated with, e.g., the functions associated with the GPT module 140 and/or the functions associated with the model management 120, the “GPT” inferencing operation 1374 using one or more trained “GPT” models of FIGs. 1 and 13.
  • the one or more MT NLP models may be associated with different analytics models configured based on different use cases, e.g., may be configured to process inputs to extract semantic content and generate a “recommendation” based on data relevant to a related and/or associated use case or for a particular user or user persona.
  • an NL recommendation for a user associated with a business role may differ from an NL recommendation for a user associated with an engineering and/or facility management role.
  • the one or more MT NLP models used to generate the first NL recommendation may be managed by a model management system or set of functions associated with model management.
  • the system may output, for the first user, a first indication of at least the first NL recommendation via a user interface.
  • a first indication of at least the first NL recommendation may be presented as a notification or warning for a user to perform a maintenance operation or to address a time-sensitive issue identified through the processing associated with the first NL recommendation.
  • the indication may further include an indication of the first NL query associated with the first NL recommendation to provide a context for the first NL recommendation.
  • the system may receive feedback regarding the accuracy and/or the utility of one or more NL queries and/or NL recommendations.
  • the feedback may be received from a first user as described in relation to the operations performed at 1014, 1614, and/or 1706/1708 of FIGs. 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, the user preference model 1616, and/or the set of reports/feedback 1702 of FIGs. 1, 13, 16, and/or 17.
  • the feedback in some aspects, may be provided for a run-time retraining as described in relation to FIGs. 9, 10, 13, 16, and 17.
  • FIG. 21 is a flow diagram 2100 illustrating a method in accordance with some aspects of the disclosure.
  • the method is performed by a system or device (e.g., the system illustrated in diagram 100 or computing device 2205) that performs various analyses, based on content in a CMS, models associated with a model management system, GPT models and user preferences.
  • the system may train one or more MT NLP models (e.g., a GPT model and/or a BERT model) to produce accurate NL content using a first set of feedback associated with an accuracy of the natural language.
  • 2102 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination.
  • the NL queries may be based on at least one of the received information or the processed information (e.g., information received and/or processed by a CMS or the functions associated with content management 110).
  • the feedback may be received from an SME as described in relation to the operations performed at 912, 1008, 1612, and/or 1706/1708 of FIGs. 9, 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, and/or the set of reports/feedback 1702 of FIGs. 1, 13, and/or 17.
  • the feedback may be provided for one or more of an initial training (before model deployment) or a run-time retraining (e.g., based on content output by the trained and/or deployed one or more MT NLP models during run time) as described in relation to FIGs. 9, 10, 13, 16, and 17.
  • the system may train one or more MT NLP models (e.g., a GPT model and/or a BERT model) to produce, for a first user, useful NL content using a second set of feedback associated with a usefulness, to the first user, of the NL content.
  • 2104 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination.
  • the NL queries may be based on at least one of the received information or the processed information (e.g., information received and/or processed by a CMS or the functions associated with content management 110) as well as preference data associated with one or more of the SME and/or HIL module 133, the user feedback module 132, and/or the user preference model development module 134.
  • the feedback may be received from a first user as described in relation to the operations performed at 1014, 1614, and/or 1706/1708 of FIGs. 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, the user preference model 1616, and/or the set of reports/feedback 1702 of FIGs. 1, 13, 16, and/or 17.
  • the feedback may be provided for one or more of an initial training (before model deployment) or a run-time retaining (e.g., based on content output by the trained and/or deployed one or more MT NLP models dining) as described in relation to FIGs. 9, 10, 13. 16, and 17.
  • the system may train one or more MT NLP models (e.g., a GPT model and/or a BERT model) to produce, for at least one additional (e.g., a second) user, useful NL content using a third set of feedback associated with a usefulness, to the at least one additional (e.g., a second) user, of the NL content.
  • 2106 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination.
  • the NL queries may be based on at least one of the received information or the processed information (e.g., information received and/or processed by a CMS or the functions associated with content management 110) as well as preference data associated with one or more of the SME and/or HIL module 133, the user feedback module 132, and/or the user preference model development module 134.
  • the feedback may be received from the second user as described in relation to the operations performed at 1014, 1614, and/or 1706/1708 of FIGs. 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, the user preference model 1616, and/or the set of reports/feedback 1702 of FIGs. 1, 13, 16, and/or 17.
  • the feedback may be provided for one or more of an initial training (before model deployment) or a run-time retraining (e.g., based on content output by the trained and/or deployed one or more MT NLP models at run time) as described in relation to FIGs. 9, 10, 13, 16, and 17.
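  • As an illustrative, non-limiting sketch of how the feedback sets described above (SME accuracy feedback plus per-user usefulness feedback) might be folded into a retraining dataset, the following Python snippet shows one possible filtering and weighting step; the record fields, threshold, and example values are hypothetical and are not taken from the disclosure.
```python
# Hypothetical sketch: folding accuracy feedback (first set) and per-user
# usefulness feedback (second/third sets) into examples for retraining.
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    prompt: str         # NL query/context given to the MT NLP model
    output: str         # NL content the model produced
    accurate: bool      # SME accuracy feedback (first feedback set)
    usefulness: float   # per-user usefulness rating in [0, 1]
    user_id: str        # which user persona rated the output

def build_finetune_examples(records, min_usefulness=0.5):
    """Keep only accurate outputs and weight them by per-user usefulness."""
    examples = []
    for r in records:
        if not r.accurate:
            continue                      # accuracy feedback gates the example
        if r.usefulness < min_usefulness:
            continue                      # usefulness feedback filters per user
        examples.append((r.user_id, r.prompt, r.output, r.usefulness))
    return examples

records = [
    FeedbackRecord("Summarize WT-12 vibration trend", "Vibration rose 12%.", True, 0.9, "engineer"),
    FeedbackRecord("Summarize WT-12 vibration trend", "Vibration rose 12%.", True, 0.3, "executive"),
]
print(build_finetune_examples(records))   # only the engineer-rated example survives
```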
  • the system may receive information from one or more input sources associated with at least one of an industrial process or an industrial product.
  • the data may be received from a plurality of sources.
  • 2108 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination.
  • the received information may include two or more of image data, text, vibration data, audio data, structured data, or sensor data.
  • the one or more information sources may be associated with a set of user-defined parameters received from a user before receiving the information.
  • the set of user-defined parameters may include one or more of a desired outcome, a set of KPIs, a set of affected users, or a taxonomy associated with a desired outcome.
  • a plurality of sets of user-defined parameters may be received in association with a corresponding plurality of use cases (e.g., business use cases).
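  • One way the receiving step above could normalize heterogeneous inputs (e.g., sensor, text, and image data from several sources) into a common envelope is sketched below; the field names and example sources are assumptions for illustration only.
```python
# Hypothetical ingestion envelope for multi-source, multi-modal inputs.
from datetime import datetime, timezone

def normalize(source: str, kind: str, payload) -> dict:
    """Wrap an incoming item (sensor sample, technician note, image path,
    structured row) in a uniform record for the downstream MT models."""
    return {
        "received_at": datetime.now(timezone.utc).isoformat(),
        "source": source,    # e.g., a historian, a CMMS, a drone inspection feed
        "kind": kind,        # "sensor" | "text" | "image" | "structured"
        "payload": payload,
    }

batch = [
    normalize("vibration-sensor-07", "sensor", {"rms_mm_s": 4.2}),
    normalize("technician-notes", "text", "Bearing noise reported on WT-12."),
    normalize("drone-inspection", "image", "inspections/wt12_blade3.jpg"),
]
print(len(batch), "items normalized")
```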
  • the system may process the received information using one or more MT models.
  • the models may be associated with a model management system.
  • 2110 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. In some aspects, processing the received information at 2110 may include processing the received information based, at least in part, on a set of user-defined parameters.
  • the set of user-defined parameters, in some aspects, may include one or more of a desired outcome, a set of KPIs, a set of affected users, or a taxonomy associated with a desired outcome.
  • a plurality of sets of user-defined parameters may be received in association with at least one of an HDC, an RDF, or a YAML format or file.
  • the user-defined parameters and/or the at least one of the HDC, RDF, or YAML format may be associated with a set of corresponding use cases (e.g., business use cases).
  • the processing (and the set of user-defined parameters) may be associated with, e.g., the functions associated with the content management 110 and/or the functions associated with the model management 120, the inputs 401, the data management 530, the processing associated with generating data at 612/712, and/or the set of physical sensor data 608/708 of FIGs. 1-7.
  • the one or more MT models may be associated with different analytics models configured based on different use cases and/or user personas, e.g., may be configured to process the received information to extract data relevant to a related and/or associated use case.
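  • A hedged sketch of how the user-defined parameters described above (desired outcome, KPIs, affected users, taxonomy) might be supplied in a YAML file and used to route data to a use-case-specific analytics model is shown below; the YAML keys and the model registry are hypothetical and do not reproduce the HDC/RDF/YAML format of the disclosure.
```python
# Hypothetical YAML carrying the user-defined parameters; parsed with PyYAML
# (pip install pyyaml) and used to select a use-case-specific analytics model.
import yaml

USE_CASE_YAML = """
use_case: wind-turbine-predictive-maintenance
desired_outcome: reduce unplanned downtime
kpis: [mean_time_between_failures, maintenance_cost]
affected_users: [reliability_engineer, plant_manager]
taxonomy: [gearbox, bearing, blade]
"""

MODEL_REGISTRY = {  # assumed mapping of use case -> analytics model identifier
    "wind-turbine-predictive-maintenance": "vibration-anomaly-v3",
}

params = yaml.safe_load(USE_CASE_YAML)
model_id = MODEL_REGISTRY[params["use_case"]]
print(f"Routing data for {params['desired_outcome']!r} to model {model_id}")
```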
  • the system may generate, for the current user (or for a current use case), an NL query regarding at least one of the received information or the processed information using the one or more MT NLP models.
  • the one or more MT NLP models may be associated with a model management system.
  • 2112 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination.
  • the one or more MT NLP models may include one or more of a GPT model or a BERT model.
  • the system may generate, at 2112, the NL query using the GPT model.
  • generating the NL query may be associated with, e.g., the functions associated with the GPT module 140 and/or the functions associated with the model management 120, the “GPT” inferencing operation 1374 or GPT 1420 to produce GPT output content 1428 and/or BERT output content 1430 using one or more trained “GPT” models of FIGs. 1, 13, and 14.
  • the one or more MT NLP models may be associated with different analytics models configured based on different use cases, e.g., may be configured to process inputs to extract semantic content and generate a “query” based on data relevant to a related and/or associated use case or for a particular user or user persona.
  • the one or more MT NLP models used to generate the first NL query may be managed by a model management system or set of functions associated with model management.
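  • The following is an illustrative sketch of generating an NL query from processed information with a generative language model; the publicly available "gpt2" checkpoint stands in for the private GPT described in the disclosure, and the prompt format is an assumption.
```python
# Sketch only: "gpt2" is a public stand-in for the private GPT of the disclosure.
from transformers import pipeline  # pip install transformers

generator = pipeline("text-generation", model="gpt2")

processed = {"asset": "WT-12 gearbox", "anomaly_score": 0.87, "trend": "rising vibration"}
prompt = (
    f"Asset: {processed['asset']}. Observation: {processed['trend']}, "
    f"anomaly score {processed['anomaly_score']}. Question for the maintenance team:"
)
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
nl_query = result[0]["generated_text"][len(prompt):].strip()  # keep only the new text
print(nl_query)
```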
  • the system may generate, for the current user (or for a current use case), an NL recommendation based on the first NL query and at least one of the received information or the processed information using at least one corresponding MT NLP model.
  • the at least one MT NLP model may be associated with a model management system.
  • 2114 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination.
  • the one or more MT NLP models, in some aspects, may include one or more of a GPT model or a BERT model.
  • the system may generate, at 2114, the NL recommendation using the BERT model.
  • generating the NL recommendation may be associated with, e.g., the functions associated with the GPT module 140 and/or the functions associated with the model management 120, the “GPT” inferencing operation 1374 using one or more trained “GPT” models of FIGs. 1 and 13.
  • the one or more MT NLP models may be associated with different analytics models configured based on different use cases, e.g., may be configured to process inputs to extract semantic content and generate a “recommendation” based on data relevant to a related and/or associated use case or for a particular user or user persona.
  • an NL recommendation for a user associated with a business role may differ from an NL recommendation for a user associated with an engineering and/or facility management role.
  • the one or more MT NLP models used to generate the first NL recommendation may be managed by a model management system or set of functions associated with model management.
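  • As an illustrative sketch of producing an NL recommendation from the NL query and the processed information with a BERT-style model, the snippet below uses an extractive question-answering pipeline; the specific checkpoint, context text, and wording of the recommendation are assumptions.
```python
# Sketch only: an extractive BERT QA model answers the NL query against a
# context assembled from the processed information.
from transformers import pipeline  # pip install transformers

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

context = (
    "WT-12 gearbox vibration has risen 12% over two weeks and the anomaly score "
    "is 0.87. The maintenance manual recommends a bearing inspection when a "
    "sustained vibration increase exceeds 10%."
)
nl_query = "What maintenance action is recommended for the WT-12 gearbox?"
answer = qa(question=nl_query, context=context)
nl_recommendation = f"Recommended action: {answer['answer']} (confidence {answer['score']:.2f})"
print(nl_recommendation)
```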
  • the system may output, for the current user (or for a current use case), an indication of at least the NL recommendation via a user interface.
  • 2116 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination, and/or by the output device/interface 2240 of FIG. 22.
  • the indication of at least the NL recommendation, in some aspects, may be presented as a notification or warning for a user to perform a maintenance operation or to address a time-sensitive issue identified through the processing associated with the NL recommendation.
  • the indication may further include an indication of the NL query associated with the NL recommendation to provide a context for the NL recommendation.
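  • A minimal sketch of how the output step above might package the NL recommendation, together with the NL query that provides its context, as a notification payload is shown below; all field names are hypothetical.
```python
# Hypothetical notification payload carrying the recommendation plus its query.
def build_notification(user_id: str, nl_query: str, nl_recommendation: str,
                       severity: str = "warning") -> dict:
    return {
        "user_id": user_id,
        "severity": severity,          # e.g., time-sensitive issues -> "warning"
        "title": "Maintenance recommendation",
        "body": nl_recommendation,
        "context_query": nl_query,     # shown alongside the recommendation
    }

notification = build_notification(
    "reliability_engineer",
    "What maintenance action is recommended for the WT-12 gearbox?",
    "Recommended action: a bearing inspection (confidence 0.81)",
)
print(notification["title"], "->", notification["body"])
```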
  • the system may determine if there are additional users (or models and/or use cases) for which to produce an NL query and/or NL response.
  • 2118 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. The determination may be based on a period (e.g., a period defined for producing a daily, weekly, monthly or other time-period based report) associated with each of a plurality of models or based on triggering events (e.g., a detection of an anomaly score associated with a vibration of an industrial component that is above a threshold value for a threshold amount of time).
  • the system may return to 2112-2114 to generate the corresponding NL queries and NL recommendations, and to output corresponding indications.
  • 2112-2116 may be performed in parallel at one or multiple locations (e.g., datacenters, terminals accessing a same centralized CMS, etc.) and the determination at 2118 may be a determination whether a triggering condition for generating the NL query and the NL recommendation has been met for a same user or use case.
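  • The determination described above can be read as either a periodic trigger or an event trigger; the following hedged sketch checks both conditions, with the threshold, hold time, and reporting period chosen arbitrarily for illustration.
```python
# Sketch of the 2118-style check: periodic reporting or a sustained anomaly.
from datetime import datetime, timedelta

def should_generate(now, last_report, report_period,
                    anomaly_history, score_threshold=0.8,
                    hold_time=timedelta(minutes=30)):
    """anomaly_history is a list of (datetime, score) samples."""
    if now - last_report >= report_period:          # daily/weekly/monthly report due
        return True
    over = [(t, s) for t, s in anomaly_history if s >= score_threshold]
    if over and (over[-1][0] - over[0][0]) >= hold_time:
        return True                                  # score held above threshold
    return False

now = datetime(2024, 1, 2, 12, 0)
history = [(now - timedelta(minutes=m), 0.9) for m in (40, 20, 0)]
print(should_generate(now, last_report=now - timedelta(hours=3),
                      report_period=timedelta(days=1), anomaly_history=history))
```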
  • the system may receive feedback regarding the accuracy and/or the utility of one or more NL queries and/or NL recommendations.
  • 2120 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination, and/or the IO interface 2225, or the input/user interface 2235.
  • the feedback may be received from a first user as described in relation to the operations performed at 1014, 1614, and/or 1706/1708 of FIGs. 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, the user preference model 1616, and/or the set of reports/feedback 1702 of FIGs. 1, 13, 16, and/or 17.
  • the feedback, in some aspects, may be provided for a run-time retraining as described in relation to FIGs. 9, 10, 13, 16, and 17.
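  • One possible, simplified way to capture the accuracy and utility feedback described above so that it can feed the run-time retraining loop is sketched below; the in-memory list stands in for whatever persistent store an implementation would use, and all names are hypothetical.
```python
# Hypothetical feedback capture feeding the run-time retraining described above.
feedback_log = []

def record_feedback(user_id: str, item_id: str, accurate: bool, useful: bool,
                    comment: str = "") -> None:
    feedback_log.append({
        "user_id": user_id,
        "item_id": item_id,     # which NL query/recommendation was rated
        "accurate": accurate,   # contributes to the accuracy feedback set
        "useful": useful,       # contributes to the per-user usefulness set
        "comment": comment,
    })

record_feedback("reliability_engineer", "rec-2024-001", True, True,
                "Matched the bearing issue we found.")
print(len(feedback_log), "feedback record(s) queued for retraining")
```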
  • an automated content creation function in accordance with some aspects of the disclosure may be used to generate high-quality content automatically, reducing the need for manual content creation. This can save time and resources while still producing content that is relevant and engaging.
  • an automated technical documentation function in accordance with some aspects of the disclosure may be used to generate technical documentation automatically based on user input or existing data, reducing the time and effort required to produce such documents and improving their accuracy.
  • a multilingual content creation function in accordance with some aspects of the disclosure may be used to generate content in multiple languages, allowing businesses to reach a broader audience and expand their global reach.
  • the system may be used to improve content recommendations based on user behavior and preferences, leading to higher engagement and increased customer satisfaction.
  • the system, in some aspects, may be used to develop advanced knowledge management systems, such as intelligent search engines or automated content tagging. This can improve efficiency and productivity in industrial settings where employees need to access and utilize large amounts of technical information.
  • the system may be used to create personalized content based on user behavior and preferences, providing a more personalized experience for users and increased engagement and/or improved customer satisfaction.
  • The system, in some aspects, may be used to develop chatbots or virtual assistants that can understand and respond to customer inquiries and issues in natural language, improving customer service experiences.
  • FIG. 22 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
  • Computer device 2205 in computing environment 2200 can include one or more processing units, cores, or processors 2210, memory 2215 (e.g., RAM, ROM, and/or the like), internal storage 2220 (e.g., magnetic, optical, solid-state storage, and/or organic), and/or IO interface 2225, any of which can be coupled on a communication mechanism or bus 2230 for communicating information or embedded in the computer device 2205.
  • IO interface 2225 is also configured to receive images from cameras or provide images to projectors or displays, depending on the desired implementation.
  • Computer device 2205 can be communicatively coupled to input/user interface 2235 and output device/interface 2240. Either one or both of the input/user interface 2235 and output device/interface 2240 can be a wired or wireless interface and can be detachable.
  • Input/user interface 2235 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, accelerometer, optical reader, and/or the like).
  • Output device/interface 2240 may include a display, television, monitor, printer, speaker, braille, or the like.
  • input/user interface 2235 and output device/interface 2240 can be embedded with or physically coupled to the computer device 2205.
  • other computer devices may function as or provide the functions of input/user interface 2235 and output device/interface 2240 for a computer device 2205.
  • Examples of computer device 2205 may include, but are not limited to, highly mobile devices (e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like), mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like).
  • Computer device 2205 can be communicatively coupled (e.g., via IO interface 2225) to external storage 2245 and network 2250 for communicating with any number of networked components, devices, and systems, including one or more computer devices of the same or different configuration.
  • Computer device 2205 or any connected computer device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
  • IO interface 2225 can include, but is not limited to, wired and/or wireless interfaces using any communication or IO protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 2200.
  • Network 2250 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
  • Computer device 2205 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media.
  • Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like.
  • Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid-state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
  • Computer device 2205 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments.
  • Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media.
  • the executable instructions can originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic).
  • Processors 2210 can execute under any operating system (OS) (not shown), in a native or virtual environment.
  • OS operating system
  • One or more applications can be deployed that include logic unit 2260, application programming interface (API) unit 2265, input unit 2270, output unit 2275, and inter-unit communication mechanism 2295 for the different units to communicate with each other, with the OS, and with other applications (not shown).
  • API application programming interface
  • Processor(s) 2210 can be in the form of hardware processors such as central processing units (CPUs) or in a combination of hardware and software units.
  • when information or an execution instruction is received by API unit 2265, it may be communicated to one or more other units (e.g., logic unit 2260, input unit 2270, output unit 2275).
  • logic unit 2260 may be configured to control the information flow among the units and direct the services provided by API unit 2265, the input unit 2270, the output unit 2275, in some example implementations described above.
  • the flow of one or more processes or implementations may be controlled by logic unit 2260 alone or in conjunction with API unit 2265.
  • the input unit 2270 may be configured to obtain input for the calculations described in the example implementations.
  • the output unit 2275 may be configured to provide an output based on the calculations described in example implementations.
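  • A rough sketch of the unit arrangement described above (an API unit handing instructions to a logic unit that coordinates the input and output units) is shown below; the class and method names are hypothetical and only illustrate the described flow of control.
```python
# Hypothetical wiring of the described units; names do not come from the patent.
class InputUnit:
    def obtain(self):
        return {"sensor": "vibration", "value": 4.2}   # obtain input for a calculation

class OutputUnit:
    def provide(self, result):
        print("output:", result)                       # provide the calculated output

class LogicUnit:
    def __init__(self, input_unit, output_unit):
        self.input_unit, self.output_unit = input_unit, output_unit
    def handle(self, instruction):
        data = self.input_unit.obtain()
        self.output_unit.provide({instruction: data})

class ApiUnit:
    def __init__(self, logic_unit):
        self.logic_unit = logic_unit
    def receive(self, instruction):
        self.logic_unit.handle(instruction)            # pass the instruction along

ApiUnit(LogicUnit(InputUnit(), OutputUnit())).receive("summarize")
```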
  • Processor(s) 2210 can be configured to receive information from one or more input sources associated with at least one of the industrial process or the industrial product.
  • the processor(s) 2210 can be configured to process the received information using one or more MT models associated with a model management system.
  • the processor(s) 2210 can be configured to generate, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT NLP models.
  • the processor(s) 2210 can be configured to generate, for the first user, a first NL recommendation based on the first NL query and at least one of the received information or the processed information.
  • the processor(s) 2210 can be configured to output, for the first user, a first indication of at least the first natural language recommendation via the user interface.
  • the processor(s) 2210 can be configured to train the one or more MT NLP models to produce accurate natural language queries based on at least one of the received information or the processed information using a first set of feedback associated with an accuracy of the natural language queries produced by the one or more MT NLP models.
  • the processor(s) 2210 can also be configured to train the one or more MT NLP models to produce, for the first user, useful natural language queries based on at least one of the received information or the processed information using a second set of feedback associated with a usefulness, to the first user, of the natural language queries produced by the one or more MT NLP models for the first user.
  • the processor(s) 2210 can also be configured to train the one or more MT NLP models to produce, for a second user, useful natural language queries based on at least one of the received information or the processed information using a third set of feedback associated with a usefulness, to the second user, of the natural language queries produced by the one or more MT NLP models for the second user.
  • the processor(s) 2210 can also be configured to generate, for a second user, a second natural language query regarding at least one of the received information or the processed information using one or more MT NLP models.
  • the processor(s) 2210 can also be configured to generate, for the second user, a second natural language recommendation based on the second natural language query and at least one of the received information or the processed information.
  • the processor(s) 2210 can also be configured to output, for the second user, a second indication of at least the second natural language recommendation via the user interface.
  • the techniques described herein relate to a system for generating recommendations for a first user regarding at least one of an industrial process or an industrial product, the system including: at least one memory; a user interface; and at least one processor coupled to the at least one memory and, based at least in part on stored information that is stored in the at least one memory, the at least one processor, individually or in any combination, is configured to: receive information from one or more input sources associated with at least one of the industrial process or the industrial product; process the received information using one or more machine-trained (MT) models associated with a model management system; generate, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT natural language processing (NLP) models; generate, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information; and output, for the first user, a first indication of at least the first natural language recommendation via the user interface.
  • MT machine-trained
  • the techniques described herein relate to a system, wherein the at least one processor, individually or in any combination, is further configured to: generate, for a second user, a second natural language query regarding at least one of the received information or the processed information using one or more MT NLP models; generate, for the second user, a second natural language recommendation based on the second natural language query and at least one of the received information or the processed information; and output, for the second user, a second indication of at least the second natural language recommendation via the user interface, wherein a first role associated with the first user is different than a second role associated with the second user and the second natural language query and the second natural language recommendation are different than the first natural language query and the first natural language recommendation based on the different second and first roles, respectively.
  • the techniques described herein relate to a system, wherein the at least one processor, individually or in any combination, is further configured to: train the one or more MT NLP models to produce accurate natural language queries based on at least one of the received information or the processed information using a first set of feedback associated with an accuracy of the natural language queries produced by the one or more MT NLP models; and train the one or more MT NLP models to produce, for the first user, useful natural language queries based on at least one of the received information or the processed information using a second set of feedback associated with a usefulness, to the first user, of the natural language queries produced by the one or more MT NLP models for the first user.
  • the techniques described herein relate to a system, wherein the at least one processor, individually or in any combination, is further configured to: train the one or more MT NLP models to produce, for a second user, useful natural language queries based on at least one of the received information or the processed information using a third set of feedback associated with a usefulness, to the second user, of the natural language queries produced by the one or more MT NLP models for the second user.
  • the techniques described herein relate to a system, wherein the one or more MT NLP models includes one or more of a generative pre-trained transform (GPT) model or a bidirectional encoder representations from transformers (BERT) model.
  • GPT generative pre-trained transform
  • BERT bidirectional encoder representations from transformers
  • the techniques described herein relate to a system, wherein the at least one processor is configured to generate the first natural language query using the GPT model and to generate the first natural language recommendation using the BERT model.
  • the techniques described herein relate to a system, wherein the at least one processor is configured to process the received information based, at least in part, on a set of user-defined parameters, wherein the set of user-defined parameters includes one or more of a desired outcome, a set of key performance indicators (KPIs), a set of affected users, or a taxonomy associated with the desired outcome.
  • KPIs key performance indicators
  • the techniques described herein relate to a system, wherein the set of user-defined parameters is provided to the system in association with at least one of a hypothesis development canvas (HDC), a resource description framework (RDF), or a YAML ain’t markup language (YAML) format.
  • HDC hypothesis development canvas
  • RDF resource description framework
  • YAML YAML ain’t markup language
  • the techniques described herein relate to a system, wherein the received information includes two or more of image data, text, vibration data, audio data, structured data, or sensor data.
  • the techniques described herein relate to a system, wherein the at least one processor is configured to process the received information by associating the data with at least one MT model of the one or more MT models associated with the model management system.
  • the techniques described herein relate to a method for generating recommendations for a first user regarding at least one of an industrial process or an industrial product, including: receiving information from one or more input sources associated with at least one of the industrial process or the industrial product; processing the received information using one or more machine-trained (MT) models associated with a model management system; generating, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT natural language processing (NLP) models; generating, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information; and outputting, for the first user, a first indication of at least the first natural language recommendation via the user interface.
  • MT machine-trained
  • NLP natural language processing
  • the techniques described herein relate to a method, further including: generating, for a second user, a second natural language query regarding at least one of the received information or the processed information using one or more MT NLP models; generating, for the second user, a second natural language recommendation based on the second natural language query and at least one of the received information or the processed information; and outputting, for the second user, a second indication of at least the second natural language recommendation via the user interface, wherein a first role associated with the first user is different than a second role associated with the second user and the second natural language query and the second natural language recommendation are different than the first natural language query and the first natural language recommendation based on the different second and first roles, respectively.
  • the techniques described herein relate to a method, further including: train the one or more MT NLP models to produce accurate natural language queries based on at least one of the received information or the processed information using a first set of feedback associated with an accuracy of the natural language queries produced by the one or more MT NLP models; and train the one or more MT NLP models to produce, for the first user, useful natural language queries based on at least one of the received information or the processed information using a second set of feedback associated with a usefulness, to the first user, of the natural language queries produced by the one or more MT NLP models for the first user.
  • the techniques described herein relate to a method, further including: train the one or more MT NLP models to produce, for a second user, useful natural language queries based on at least one of the received information or the processed information using a third set of feedback associated with a usefulness, to the second user, of the natural language queries produced by the one or more MT NLP models for the second user.
  • the techniques described herein relate to a method, wherein the one or more MT NLP models includes one or more of a generative pre-trained transform (GPT) model or a bidirectional encoder representations from transformers (BERT) model.
  • GPT generative pre-trained transform
  • BERT bidirectional encoder representations from transformers
  • the techniques described herein relate to a method, wherein the at least one processor is configured to generate the first natural language query using the GPT model and to generate the first natural language recommendation using the BERT model.
  • the techniques described herein relate to a method, wherein the at least one processor is configured to process the received information based, at least in part, on a set of user-defined parameters, wherein the set of user-defined parameters includes one or more of a desired outcome, a set of key performance indicators (KPIs), a set of affected users, or a taxonomy associated with the desired outcome.
  • KPIs key performance indicators
  • the techniques described herein relate to a method, wherein the set of user-defined parameters is provided to the system in association with at least one of a hypothesis development canvas (HDC), a resource description framework (RDF), or a YAML ain’t markup language (YAML) format.
  • HDC hypothesis development canvas
  • RDF resource description framework
  • YAML YAML ain’t markup language
  • the techniques described herein relate to a method, wherein the received information includes two or more of image data, text, vibration data, audio data, structured data, or sensor data.
  • processing the received information includes associating the data with at least one MT model of the one or more MT models associated with the model management system.
  • Example implementations may also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs.
  • Such computer programs may be stored in a computer readable medium, such as a computer readable storage medium or a computer readable signal medium.
  • a computer readable storage medium may involve tangible mediums such as, but not limited to optical disks, magnetic disks, read-only memories, random access memories, solid-state devices, and drives, or any other types of tangible or non-transitory media suitable for storing electronic information.
  • a computer readable signal medium may include mediums such as carrier waves.
  • the algorithms and displays presented herein are not inherently related to any particular computer or other apparatus.
  • Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.
  • aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application.
  • some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software.
  • the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways.
  • the methods may be executed by a processor, such as a general-purpose computer, based on instructions stored on a computer readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

In example implementations described herein, there are systems and methods for generating recommendations for a first user regarding at least one of an industrial process or an industrial product including receiving information from one or more input sources associated with at least one of the industrial process or the industrial product, processing the received information using one or more MT models associated with a model management system, generating, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT NLP models, generating, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information, and outputting, for the first user, a first indication of at least the first natural language recommendation via the user interface.

Description

AI INTEGRATED INTELLIGENT CONTENT SYSTEM
BACKGROUND
Field
[0001] The present disclosure is generally directed to a digital advisor related to product and/or services information.
Related Art
[0002] Providing products and/or services, in some cases, involves large amounts of information from many different components and/or sources. The components and/or sources may include, for example, different teams associated with a business (e.g., marketing, sales, etc.) and different production lines and/or industrial processes, each associated with multiple machines and/or production lines, that may produce information relevant to different stakeholders and/or users. Much of the information may be siloed in separate content management systems for different aspects of providing products and/or services. Additionally, even if information is appropriately shared or co-located, the different stakeholders and/or users may be interested in different aspects, or analysis, of the information.
[0003] Accordingly, a system that can process information from multiple sources and provide tailored analysis and/or recommendations is provided.
SUMMARY
[0004] Example implementations described herein involve an innovative method to generate content for multiple aspects of industrial operations.
[0005] Aspects of the present disclosure include a method for receiving information from one or more input sources associated with at least one of the industrial process or the industrial product, processing the received information using one or more machine-trained (MT) models associated with a model management system, generating, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT natural language processing (NLP) models, generating, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information, and outputting, for the first user, a first indication of at least the first natural language recommendation via the user interface.
[0006] Aspects of the present disclosure include a non-transitory computer readable medium, storing instructions for execution by a processor, which can involve instructions for receiving information from one or more input sources associated with at least one of the industrial process or the industrial product, processing the received information using one or more MT models associated with a model management system, generating, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT NLP models, generating, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information, and outputting, for the first user, a first indication of at least the first natural language recommendation via the user interface.
[0007] Aspects of the present disclosure include a system, which can involve means for receiving information from one or more input sources associated with at least one of the industrial process or the industrial product, processing the received information using one or more MT models associated with a model management system, generating, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT NLP models, generating, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information, and outputting, for the first user, a first indication of at least the first natural language recommendation via the user interface.
[0008] Aspects of the present disclosure include an apparatus, which can involve a processor, configured to receive information from one or more input sources associated with at least one of the industrial process or the industrial product, process the received information using one or more MT models associated with a model management system, generate, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT NLP models, generate, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information, and output, for the first user, a first indication of at least the first natural language recommendation via the user interface.
BRIEF DESCRIPTION OF DRAWINGS
[0009] FIG. 1 is a diagram of components of the system in accordance with some aspects of the disclosure.
[0010] FIG. 2 illustrates an example hypothesis development canvas (HDC) that may be provided to the system in accordance with some aspects of the disclosure.
[0011] FIG. 3 is a diagram illustrating processing one or more HDCs to produce structured data in accordance with some aspects of the disclosure.
[0012] FIG. 4 is a diagram illustrating aspects of model management in accordance with some aspects of the disclosure.
[0013] FIG. 5 is a diagram illustrating a content graph in accordance with some aspects of the disclosure.
[0014] FIG. 6 is a diagram illustrating elements of a model training process in accordance with some aspects of the disclosure.
[0015] FIG. 7 is a diagram illustrating elements of an inferencing process in accordance with some aspects of the disclosure.
[0016] FIG. 8 is a diagram illustrating recommendations based on model selection from a model management subsystem in accordance with some aspects of the disclosure.
[0017] FIG. 9 is a diagram illustrating aspects of training a generative pre-trained transform (GPT) model (or simply a GPT) in accordance with some aspects of the disclosure.
[0018] FIG. 10 is a diagram illustrating additional aspects of training the GPT model of FIG. 9 in accordance with some aspects of the disclosure. [0019] FIG. 11 is a diagram illustrating related elements of an FMEA in accordance with some aspects of the disclosure for an example relating to a wind turbine (WT).
[0020] FIG. 12 is a diagram illustrating the use of a combination of a GPT model and a Bidirectional Encoder Representations from Transformers (BERT) model to generate and/or identify questions and answers in accordance with some aspects of the disclosure.
[0021] FIG. 13 is a first diagram and a second diagram illustrating a first set of training operations, and a second set of inferencing operations, respectively, associated with the GPT and/or BERT model(s) in association with the CMS in accordance with some aspects of the disclosure.
[0022] FIG. 14 is a diagram illustrating components of semantic reasoning traceability associated with one or more GPT and/or BERT models in accordance with some aspects of the disclosure.
[0023] FIG. 15 is a diagram illustrating elements of a process or algorithm used to generate descriptive questions for the GPT in accordance with some aspects of the disclosure.
[0024] FIG. 16 is a diagram illustrating the use of subject matter experts (SMEs) and user feedback for a GPT training operation in accordance with some aspects of the disclosure.
[0025] FIG. 17 is a diagram illustrating elements associated with using human feedback for reinforcement learning (RL)-based training of a model in accordance with some aspects of the disclosure.
[0026] FIG. 18 is a diagram illustrating elements of an orchestration system in accordance with some aspects of the disclosure.
[0027] FIG. 19 is a diagram 190 illustrating orchestration workflow compiling in accordance with some aspects of the disclosure.
[0028] FIG. 20 is a flow diagram illustrating a method in accordance with some aspects of the disclosure. [0029] FIG. 21 is a flow diagram illustrating a method in accordance with some aspects of the disclosure.
[0030] FIG. 22 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
DETAILED DESCRIPTION
[0031] The following detailed description provides details of the figures and example implementations of the present application. Reference numerals and descriptions of redundant elements between figures are omitted for clarity. Terms used throughout the description are provided as examples and are not intended to be limiting. For example, the use of the term “automatic” may involve fully automatic or semi-automatic implementations involving user or administrator control over certain aspects of the implementation, depending on the desired implementation of one of ordinary skill in the art practicing implementations of the present application. Selection can be conducted by a user through a user interface or other input means, or can be implemented through a desired algorithm. Example implementations as described herein can be utilized either singularly or in combination and the functionality of the example implementations can be implemented through any means according to the desired implementations.
[0032] In some aspects of the disclosure, one or more of a digital advisor, content artificial intelligence (AI), and/or a common software foundation may be provided. The content AI, in some aspects, may use advanced AI-driven technology that may be used to analyze large amounts of content and extract meaningful insights in a content-driven manner guided by a user preference approach. The scope of this disclosure includes developing the software components for implementing the system on cloud, or on-premises, servers, curating relevant data sets, and optimizing its performance over time. The core components, in some aspects, include an analytics pipeline, a content management system, a data catalog, a model management system, and workflow orchestration. [0033] In some aspects, the system provided addresses traditional areas of concern of content and knowledge management, e.g., efficiency, scalability, and personalized search engine optimization. For example, to address efficiency, a content AI provided may be capable of creating high-quality content in a shorter period of time than it takes a human counterpart. This allows businesses to produce more content, triage complex situations, and publish solutions faster, which, in some aspects, improves an overall time to action (e.g., reduces a time to act to remedy an identified problem or to take advantage of an identified opportunity). Using the content AI to create, or generate, the content may allow human capital and/or human resources to aid in a complex decision-making process. Additionally, to address scalability, in some aspects, AI-powered content creation and optimization can be scaled up or down as needed with the suitable “size” of the model. It allows businesses to handle cross-vertical or just vertical, public, private, or converged content volumes without sacrificing quality.
[0034] In some aspects, the system may address personalization by having the content AI analyze data about individual users or subject matter experts (SMEs) and create personalized turnkey content tailored to their preferences and interests. This personalization may improve user engagement and drive improved solutions. The system may also provide search engine optimization (SEO) by having the content AI analyze data, documents, tickets, and/or service records (or other similar data) to optimize content for search engines, thus improving its visibility and helping businesses address issues for their target audiences effectively. The system, in some aspects, may also improve cost-effectiveness by having the content AI create content faster and at lower cost than human writers (not including original content creators), which is beneficial for reducing budgets associated with content creation. The reduced cost is particularly associated with converged field(s) for which a single (more expensive) human resource with expertise in each of the fields, or multiple humans with expertise in each field, would otherwise be used, leading to higher costs.
[0035] The system provided in the following disclosure may provide improvements in multiple cross-industrial settings. For example, by combining content management and large language models such as a generative pre-trained transform (GPT) model, the system may provide solutions to a wide range of problems in industrial areas, settings, or fields, by providing NLP capabilities. Additionally, GPT models, in some aspects, may be used for image and text analysis to detect defects and anomalies in manufacturing processes, leading to improved quality control and reduced waste. In some aspects, GPT models may analyze sensor data from manufacturing equipment to predict maintenance needs and improve equipment uptime, thus reducing downtime and maintenance costs. GPT models may be used, in some aspects, to develop chatbots or virtual assistants that understand and respond to customer inquiries and issues in natural language (NL), thus improving customer service experiences. In some aspects, GPT models may be used to develop advanced knowledge management systems, such as intelligent search engines or automated content tagging. Such advanced knowledge management systems may improve efficiency and productivity in industrial settings where employees need to access and utilize large amounts of technical information. GPT models, in some aspects, may further be used to generate technical documentation, such as user manuals or service guides, automatically, based on user input or existing data. The automatic documentation generation may reduce the time and effort required to produce such documents and improve their accuracy.
[0036] In some aspects, the system may be associated with a Generative Content AI, guided by DEPPAA (Descriptive, Exploration, Predictive, Prescriptive, Automation, Autonomous) analytics that may facilitate cognitive processes in cross-industrial settings. For example, in some aspects, the Generative Content AI may enable the generation of fresh content, innovative ideas, and effective solutions. The descriptive phase of the DEPPAA analytics, in some aspects, may capture essential details of business use cases, allowing adaptability to changing content. The design approach in accordance with some aspects of the disclosure may be content-driven, accommodating evolving use cases. These capabilities additionally address subsequent components of the DEPPAA, e.g., the EPPAA phases/components. In some aspects, the Generative Content AI encompasses text, images, video, audio, and data generation, benefiting industries across physical systems, human interactions, and operational processes. It enables content extraction, insights creation, intelligent generation, process automation, personalization, and operation optimization.
[0037] To facilitate the cognitive process in the industrial setting, the principles of the DEPPAA analytics process may be employed. In the descriptive and/or “D” phase of DEPPAA, the primary objective may be to comprehensively describe the business use case(s). This entails capturing the essential details of the use case, allowing the system to adapt to new content when the use case undergoes changes. The design approach employed is content-driven, ensuring that the system can effectively accommodate evolving use cases. A Content AI (e.g., a generative AI or generative content AI) in accordance with some aspects of the disclosure may be configured to effectively address the subsequent components of EPPAA (e.g., Exploration, Predictive, Prescriptive, Automation, and Autonomous).
[0038] Generative Content AI holds broad applicability across various industrial sectors, as it possesses the ability to generate fresh content, innovative ideas, and effective solutions. This technology becomes a valuable asset for businesses operating in diverse industries. In the context of Industrial Generative AI, the term “content” may encompass a wide range of elements, including text, images, video, audio, and/or data. For example, a Generative AI, in some aspects, may be capable of processing and/or producing text content, such as operational manuals, technician notes, maintenance records, system design and other articles, blog posts, product descriptions, innovation reports, Gartner reports, or marketing copy. In some aspects, the Generative AI may be capable of processing and/or generating new images and/or video(s) using sources such as asset images and/or videos taken from drones and helicopters for inspection, asset defect images and/or videos, and other industrial product photos and/or videos. A Generative AI, in some aspects, may be capable of processing and/or generating audio and simulated audio content for tasks such as detecting an anomaly or classifying audio classes. In some aspects, a Generative AI may be capable of processing and/or generating data sets, including data from industrial sensors, customer issue ticket data, and product defect data.
[0039] In an industrial setting, this work encompasses operations involving physical industrial systems, human-system interactions, operational processes, and processing systems. Therefore, the applications of Generative Content AI in this context may include one or more of extraction of content from existing or simulated systems, creation of insights from extracted content, generation of intelligent content, automation of content-related processes, personalization of content for business owners, and/or optimization of operations through content-driven approaches.
[0040] FIG. 1 is a diagram 100 of components of the system in accordance with some aspects of the disclosure. While it may be useful to generally categorize the different elements of the system into functions associated with content management 110, model management 120, and content AI 130, the different elements may be used for, or associated with, multiple functions and/or components of the system. For example, some of the elements may provide functions associated with two or more of content management 110, model management 120, and/or content AI 130.
[0041] Diagram 100 illustrates elements of a system for providing personalized insight for multiple users and/or across multiple settings, in accordance with some aspects of the disclosure. At a high level, the functions associated with content management 110 (which may alternatively be referred to as a data management system) may be provided to store and manage business insights, content, and enablement. In some aspects, the functions associated with content management 110 may include processing and content extraction at content templates creation module 113 based on the input that includes input from business use cases module 111. The business use cases module 111 may process data (e.g., a hypothesis development canvas (HDC) or a resource description framework (RDF)) regarding a business goal or problem statement, and provide the data in a standardized format (e.g., an HDC, an RDF, or a YAML ain’t markup language (YAML) format as further discussed below) from which content may be extracted for analysis (e.g., to guide the analysis) of historical, current, and/or subsequently collected data based on, e.g., a language model. Content management 110, in some aspects, may further be associated with data and model management which may be associated with a data and model module 112 configured to manage the data and models used by an organization to make decisions in association with the content templates creation module 113 which may implement knowledge content management standards (e.g., stored and/or managed at a knowledge management module 117). Content management 110 may be associated with a metadata management module 118 whose function, in some aspects, may be to arrange data and its corresponding metadata and/or definition (e.g., associate data with metadata in association with content management 110 in order to identify the contexts in which the data is to be used to provide analysis or insight). Content management 110 may also be associated with a platform information management function performed by a platform management module 115 to support infrastructure (e.g., a platform management software or code). In some aspects, a model and workflow management module 116 may be associated with content management 110 to support the system during runtime, e.g., to implement one or more workflows for processing and/or analyzing data to provide insights or other output for one or more users (e.g., a set of one or more programs may execute code during runtime to implement one or more workflows). In some aspects, the one or more workflows may be based on the data, models, and/or business use cases as described above along with information or feedback received from the functions associated with model management 120 or content AI 130.
[0042] At a high level, the functions associated with model management 120, in some aspects, may include managing (e.g., training and updating) and executing one or more analytics models and extracting insights based on the models. Model management 120 may be associated with a data collection and curation processing module 121 that may be used to ensure that the models are developed, trained, and/or updated (maintained) using accurate and up-to-date data (e.g., that incoming data is processed appropriately to ensure accuracy). A model development and training module 122 may, in some aspects, implement processes and techniques used to create and prepare a model for use in a predictive analytics, prescriptive optimization, automation, and/or reinforced learning environment associated with model management 120. In some aspects, a model management module 124 may implement core functions to integrate other functions and/or modules associated with the functions of the model management 120. Financial functions of model management, in some aspects, may also be associated with the model management module 124. A feature management module 125, in some aspects, may provide analytics model management to manage the lifecycle of features used in analytics models. In some aspects, a performance management module 127 may be associated with analytics model management including evaluating the performance of an analytical model and adjusting or improving the analytical model to ensure it meets the needs of the business. In some aspects, the functions of model management 120, and specifically the model management module 124, may further be associated with a model store module 126 providing storage of analytical models and from which analytical models may be retrieved. The model management module 124, in some aspects, may further be associated with model deployment module 123 associated with identifying and deploying analytical models based on requests generated by the GPT module 140.
[0043] In some aspects, the GPT module 140 may be associated with content AI 130 (and/or a digital advisor). The GPT module 140, in some aspects, may be a private GPT (e.g., a GPT trained on proprietary and/or private data) acting as a language model used to generate human-consumable text, content, computation, and/or code. The GPT module 140, in some aspects, may be used to narrow a large number of possible failure modes to a smaller number of failure modes that may be of interest (content filtering) based on user inputs. In some aspects, the GPT module 140 may be used in association with semantic reasoning (e.g., decision making and/or group triage). The output of an analytics model deployed by the model deployment module 123 may be formatted and/or pre-processed (by a separate program and/or module, not shown, or a component of the GPT module 140) for ingestion by the GPT module 140. In some aspects, the GPT module 140 may be used for content generation from analytics models' output. In some aspects, the GPT module 140 may be associated with a set of modules associated with a knowledge content curation and/or generation with a “human in the loop (HIL)” including an evaluation and monitoring module 131, a SME and/or HIL module 133, and/or a user interface and user feedback module 132. The GPT module 140, in some aspects, may be used as an AI SME (e.g., to automatically generate a recommendation) that can be used to generate a user response for a reinforcement learning process.
[0044] In some aspects, the functions associated with content AI 130 may include functions associated with a digital advisor that can provide analysis or warnings for each user of multiple users or types of users. A user interface (UI) and user experience (UX) design, in some aspects, may be employed to enhance the user's experience and make it more intuitive and efficient. In order to improve the content generation at the GPT module 140, the functions of content AI 130 may be associated with an evaluation and monitoring module 131 that may be used to evaluate the content generated by the GPT module 140 and provide opportunities for monitoring the quality of the generated content. For example, the evaluation and monitoring module 131, the user interface and user feedback module 132, the SME and/or HIL module 133, and the user preference model development module 134 may interact to improve the quality and specificity of the content generation (e.g., warnings, analysis, and/or information) for each of a plurality of users.
[0045] Content AI, in some aspects, may refer to an AI aided technology that leverages artificial intelligence algorithms (e.g., GPT, Transformer) to analyze large amounts of content (e.g., data and/or information provided by different components in an industrial setting, such as video data, audio data, sensor data, manuals, HDCs, etc.), and extract meaningful insights in a way that is guided by user preferences. Content AI may be associated with a wide range of applications, such as turn-key reporting, conditional monitoring, sentiment analysis, NLP, and text classification.

[0046] The system discussed below, in some aspects, may use the latest large language model in content AI development, build the required software components for implementation, deploy the system on one or more servers (e.g., one or more cloud and/or on-premises servers), train it with relevant datasets, and continually optimize its performance over time. The components of the system include one or more of an analytics pipeline, hierarchical clustering process, link-density clustering, model management, and workflow orchestration using Node-RED.
[0047] In some aspects, the system may be designed to provide business users, SMEs, and operators with preference-specific content (e.g., personalized content) for turn-key decision making or group triage decision making for a given event or situation. As described in relation to FIG. 1, a system in accordance with some aspects of the disclosure may include four subsystems including a content management system (CMS) (e.g., implementing a set of functions associated with content management 110), model management and workflow execution (e.g., implementing a set of functions associated with model management 120), GPT (e.g., GPT module 140), and user content generation and feedback (e.g., implementing a set of functions associated with content AI 130). In some aspects, user and SME preference may be extracted and managed as part of the content (e.g., by the CMS).
[0048] The system, in some aspects, may be a software system using an API-based interface and a common platform backend to store and manage content including images and documents in a searchable, rank-able, and relate-able (e.g., able to be identified as related for one or more different analyses) way. In addition, customer preference and persona data will be tracked, analyzed, and leveraged in the content generated for users and users' inquiries. For example, data may be tracked for a particular user or each of a set of user roles or characteristics used to determine the types of analyses performed, and content generated, for the particular user or for a user having a particular user role from the set of user roles. The system, in some aspects, may also use a gateway and other third-party APIs to process and orchestrate workflows. The development process may follow an agile methodology and the system may be scalable to meet any future needs.
[0049] Referring again to FIG. 1, the system may include the business use cases module 111 that, in some aspects, may process and extract content associated with business use cases (e.g., data provided in an HDC). Content extraction management, in some aspects, may include automated digital content curation that automatically curates digital content from inputs of use cases and that may be aligned and/or enriched from one or more corresponding business sources such as an HDC, technician notes (e.g., notes imported from a commercial asset management program), meeting notes, a collaboration website, feeds, and/or social media to provide relevant and timely information of stakeholders' business intent. For example, the business use cases module 111, in some aspects, may be configured to provide text analytics and NLP to extract structured information from unstructured content to gain meaningful insights. Automated document classification, in some aspects, may further be provided by the business use cases module 111 configured to automatically classify documents into predefined categories based on the content. The automatic classification, in some aspects, may facilitate efficient search and retrieval of relevant documents. In addition to, or independently from, the automatic document classification, the business use cases module 111, in some aspects, may be configured to automatically summarize content (e.g., in a document received at the business use cases module 111) to provide a quick overview of the content. In some aspects, the business use cases module 111 may be configured to provide automated content categorization to automatically categorize content into predefined categories for easy retrieval. The business use cases module 111, in some aspects, may be configured to automatically extract content from corresponding sources for a given use case such as circumstances, persona, and outcomes of implementations to gain meaningful insights of a use case and its effectiveness.
[0050] The CMS (e.g., the system associated with the functions of content management 110), in some aspects, may further be associated with data and model management functions (e.g., implemented by data and model module 112 configured to provide the data and model management functions). Data and model management, in some aspects, may relate to the process of managing the data and models used by an organization to make decisions. In some aspects, the data and model management may include the development, maintenance, and use of data and models to support business decisions. The data and model management, in some aspects, may also include the tracking, analysis, and storage of data and models, as well as the development and evaluation of models to optimize the provision of data services and a cost of replacement/operation.

[0051] In some aspects, the CMS may include content templates creation providing knowledge content management (e.g., implemented by content templates creation module 113). The content templates creation and/or the knowledge content management, in some aspects, may include processes relating to creating pre-defined templates to facilitate the creation of content. The creation of pre-defined templates (e.g., by the content templates creation module 113), in some aspects, may be used to ensure that all content created is consistent with the organization's brand, standards, and guidelines. Content templates, in some aspects, may be used for various types of content including articles, webpages, documents, emails, and more. In some aspects, content templates may also be used to help streamline the content creation process, reducing the time it takes to create content from scratch. Content templates may also help ensure that all content created follows a set of guidelines, ensuring that all content meets the organization's standards and is optimized for SEO. In some aspects, the modules associated with the content management 110 may be managed by a content management system module 114.
[0052] The CMS, in some aspects, may include functions related to platform information and information technology (IT) management (e.g., implemented by platform management module 115). In some aspects, the platform information and IT management may be an adaptable (migratable and upgradeable) approach to the implementation, maintenance, and management of an organization's technology and data infrastructure. The platform information and IT management, in some aspects, may involve understanding how different components of the technology and data infrastructure interact and function, as well as making sure that all components are maintained and managed in a secure and efficient manner. In some aspects, platform information and IT management may include planning and implementation of hardware and software, as well as data security, storage, and backup solutions. The platform information and IT management, in some aspects, may also include monitoring of system performance, troubleshooting, and system maintenance.
[0053] Model and workflow information and management, in some aspects, may be associated with the CMS (e.g., model and workflow information and management functions may be associated with content management 110 and/or may be implemented by model and workflow management module 116). Model and workflow information and management, in some aspects, may include processes relating to managing the data associated with the various models and workflows that are used to extract insights in an organization. Managing the data, in some aspects, may include managing documentation, configuration, and tracking of the models and workflows. Model and workflow information and management, in some aspects, may allow for the efficient use and management of the data associated with the models and workflows. Additionally, or alternatively, the model and workflow information and management may operate (e.g., may perform lifecycle management for the models and workflows) to ensure that the models and workflows are up to date and properly configured for the organization's needs as conditions change.
[0054] The CMS, in some aspects, may include knowledge management functions (e.g., implemented by knowledge management module 117) for managing knowledge content (e.g., functions related to Insert, Update, Delete (de-dup), Rank, etc.). Knowledge management in this context may include processes related to capturing, organizing, and distributing knowledge regarding insights and/or analyses provided by the system within an organization. In some aspects, knowledge management may involve the identification, capture, and sharing of knowledge assets (with their corresponding users and persona) such as documents, images, databases, and systems. Knowledge management, in some aspects, may include the development and implementation of processes to ensure that knowledge is used in an effective and efficient manner.
[0055] In some aspects, the CMS may include metadata management functions (e.g., implemented by metadata management module 118). Metadata management, in some aspects, may include processes relating to arranging data and corresponding metadata and/or definitions. In some aspects, metadata management may be associated with organizing information about data and its associated elements such as descriptions, definitions, and other relevant information. Metadata management, in some aspects, may include data cataloging, data classification, data governance, data quality, and other related activities. In some aspects, metadata management may be used to enable data-driven decision-making, facilitate data sharing and collaboration, and ensure data security.
[0056] FIG. 2 illustrates an example HDC 200 that may be provided to the system in accordance with some aspects of the disclosure. The HDC 200, in some aspects, may include information regarding an author, a creation date, and an iteration number. In some aspects, the information regarding an author, the creation date, and the iteration number may be used to ensure that the most up-to-date version (and/or the most authoritative version, e.g., created by a user with more authority or expertise) of an HDC 200 is used for subsequent processing.
[0057] The HDC 200, in some aspects, may include information related to, and/or define, a hypothesis 205 and one or more factors related to the hypothesis. For example, HDC 200 includes information related to a set of key performance indicators (KPIs) 210, a set of business values 215, a set of stakeholders 220, a set of entities 225, a set of decisions 230, a set of predictions 235, a set of data sources 240, a set of variables 245, a set of recommendations 250, a set of impediments 255, a set of risks 260, a set of financial assessments 265, and a set of impediment assessments 270.
[0058] In some aspects, an HDC may allow (or facilitate) business owners, researchers, and scientists to more effectively develop, refine, and test hypotheses in a systematic and structured way with a business objective in mind. The HDC, as an example of a data structure for organizing aspects of a "problem" to be solved, may be used to facilitate collaboration and communication among researchers and stakeholders, helping to ensure that all aspects of the hypothesis development process are properly considered and addressed.
[0059] For example, the hypothesis 205 may define a business problem as "reduce unplanned downtime costs by X% while maintaining operational effectiveness (e.g., uptime, service level agreements (SLAs))." In some aspects, the hypothesis 205 may be associated with a set of KPIs 210 that may include one or more performance indicators associated with anomalies, failures, component quality, inventory costs, inventory turnover, obsolete inventory, excessive inventory, supplier quality, and/or supplier reliability. The set of business values 215, in some aspects, may include one or more values, objectives, or factors affecting and/or associated with the business problem defined by the hypothesis 205, such as reducing inventory and procurement costs, improving operational effectiveness (uptime), providing and/or obtaining fresher consumables, freeing up working capital, and/or increasing maintenance equipment utilization. In some aspects, the set of stakeholders 220 may identify stakeholders associated with inventory management, factory operations, procurement, suppliers, and/or customers. Similarly, the set of entities 225, in some aspects, may identify the entities associated with the business problem defined by the hypothesis 205, such as factories, distribution centers, products and/or components, suppliers, customers, and/or competitors. In some aspects, the set of decisions 230 may include one or more of decisions regarding demand forecasting, materials procurement, inventory management, datacenter management, staffing and training, consumables management, product quality, supplier quality and/or reliability, supplier reallocations, supplier management, supplier acquisition, and/or motion designer. The set of predictions 235, in some aspects, may include predictions related to one or more of anomaly (or anomalous) events (or anomalies), failure events, product (and/or item) level demand, inventory procurement, inventory logistics and/or location(s), a demand for consumables, obsolete inventory, excessive inventory, staffing requirements, operation prognosis, supplier deliveries, supplier quality, weather-impact, and/or sustainability.
[0060] The HDC 200, in some aspects, may include an indication of a set of data sources 240 for the hypothesis 205 that may include data sources associated with one or more of inventory, sales, orders, returns, staffing, consumables, economic indicators, and/or weather. The hypothesis 205, in some aspects, may be associated with a set of variables 245 that may include variables (or dimensions) associated with product components, product specifications, plant location(s), DC location(s), DC size, a set of sensors, motion profile, asset behaviors, failure mode(s), supplier history, day of week (e.g., Monday, Tuesday, or weekday vs. weekends), seasonality (or season), weather conditions, assembly line structure, and/or economic conditions. The set of recommendations 250, in some aspects, may relate to one or more of inventory levels, consumables levels, procurement schedule(s), staffing and/or hiring, training and/or retraining, DC logistics, supplier allocations, and/or supplier corrective actions. In some aspects, the set of impediments 255 may include impediments related to one or more of data quality and/or access concerns, that operating the solution may become onerous, that system reliability and quality may not (or does not) meet SLAs, a lack of field management buy-in, managing modern technology architecture, and/or financing and/or budgeting.
[0061] The set of risks 260, in some aspects, may include risks (and, in some aspects, associated rates of false positives and false negatives) associated with one or more of customer risks associated with delayed deliveries or quality; manufacturing risks associated with inventory shortages; staffing risks associated with deploying new technologies; financial risks associated with poor obsolete and/or excessive handling and/or execution; supply chain risks associated with weather, strikes, and/or economic risks that may disrupt the supply chain; and/or supplier risks associated with demand, reliability, and/or quality. In some aspects, the set of financial assessments 265 may include assessments related to one or more of operation costs, uptime, customer satisfaction, product quality, supplier quality, or employee satisfaction. The set of impediment assessments 270, in some aspects, may include assessments of impediments related to data operations, analytic skills, operations management, infrastructure, budgeting, and/or staffing. While the above discussion provides lists of possible elements of each of the set of KPIs 210, the set of business values 215, the set of stakeholders 220, the set of entities 225, the set of decisions 230, the set of predictions 235, the set of data sources 240, the set of variables 245, the set of recommendations 250, the set of impediments 255, the set of risks 260, the set of financial assessments 265, and the set of impediment assessments 270, they are provided solely as examples and are not meant to be exhaustive.
[0062] FIG. 3 is a diagram 300 illustrating the processing of one or more HDCs to produce structured data in accordance with some aspects of the disclosure. For example, a set of HDCs including an HDC 310 may be converted into a set of corresponding structured data sets such as a set of YAML files including YAML file 320. The YAML file 320 may be subsequently processed to produce data in a structured format (e.g., structured data 330). The structured data 330 may be loaded into the data storage 340, e.g., as a key/value pair, and may be validated based on a taxonomy. In some aspects, the taxonomy may be defined, and/or may be associated, with an RDF. The structured data 330 loaded into the data storage 340 (e.g., key/value pairs included in, and/or based on, the structured data), in some aspects, may be indexed (e.g., by key and value) for searching.
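The conversion-and-indexing flow above can be pictured with a short, illustrative sketch. It is a minimal example only: the HDC field names, the flattening into key/value pairs, and the toy in-memory index are assumptions for illustration rather than structures defined by the disclosure, and PyYAML is assumed to be available.

```python
# A minimal sketch, assuming a hypothetical HDC exported as YAML; the field names
# ("hypothesis", "kpis", ...) are illustrative, not taken from the disclosure.
import yaml

HDC_YAML = """
hypothesis: Reduce unplanned downtime costs by X% while maintaining uptime
kpis: [anomalies, failures, inventory_costs]
stakeholders: [factory_operations, procurement]
data_sources: [inventory, sales, weather]
"""

def hdc_to_key_value_pairs(yaml_text: str) -> dict:
    """Parse an HDC YAML document into flat key/value pairs for storage."""
    doc = yaml.safe_load(yaml_text)
    pairs = {}
    for key, value in doc.items():
        if isinstance(value, list):
            for i, item in enumerate(value):
                pairs[f"{key}.{i}"] = item
        else:
            pairs[key] = value
    return pairs

def build_index(pairs: dict) -> dict:
    """Index key/value pairs by value so entries can be searched by key or value."""
    index = {}
    for key, value in pairs.items():
        index.setdefault(str(value).lower(), []).append(key)
    return index

pairs = hdc_to_key_value_pairs(HDC_YAML)
index = build_index(pairs)
print(index.get("weather"))   # -> ['data_sources.2']
```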
[0063] In some aspects, converting an HDC to a YAML and/or RDF file and/or format may provide one or more benefits. For example, the conversions may provide improved content collaboration and querying, a standardized format, an integration with DevOps pipelines, version control, reproducibility, traceability, and reusability. In some aspects, YAML/RDF files may be easily shared and edited by multiple team members, which can help to improve collaboration and communication during the hypothesis development process. Using YAML/RDF files, in some aspects, may provide a standardized format for data and information, which can help to ensure that all aspects of the hypothesis are properly captured and organized. In some aspects, YAML/RDF files can be easily integrated into DevOps pipelines and other automated workflows, which can help to streamline the testing and validation process. YAML/RDF files, in some aspects, may be easily tracked and version controlled using tools such as Git, which can help to ensure that changes are properly documented and managed. By capturing all aspects of the hypothesis development process in YAML/RDF files, in some aspects, it may be easier to reproduce and replicate the testing and validation process, which can help to improve the rigor and validity of scientific research. In some aspects, by using YAML/RDF files to document the hypothesis development process, it may be easier to trace the various steps and decisions that were made throughout the process, which can help to improve accountability and transparency. YAML/RDF files, in some aspects, may be easily reused or repurposed for future research projects, which can help to save time and effort in the long run.
[0064] FIG. 4 is a diagram 400 illustrating aspects of model management in accordance with some aspects of the disclosure. The aspects of model management illustrated in diagram 400 may be associated with the functions associated with model management 120 of FIG. 1. For example, model management may include data collection and curation processing (e.g., associated with the data collection and curation processing module 121). Data collection and curation, in some aspects, may be used to ensure that the models are developed and/or trained using accurate and up-to-date data.
[0065] In some aspects, data collection involves gathering data (or matching data descriptions) at 402 from various targeted sources (e.g., a set of inputs 401 ) described in the use cases via data fabric and/or data catalog to identify their storage’s physical and/or virtual locations for retrieval. Gathering the data at 402, in some aspects, may be based on workflows, pipelines, and or other automation instructions provided by other components and/or functions of the system 450. In some aspects, the other components and/or functions of the system 450 may include one or more additional modules associated with one or more of model management, content Al, a content advisor, or GPT and the workflows, pipelines, and/or automation instructions may be based on models or workflows trained using feedback from a human user (e.g., an SME). The gathered data, in some aspects, may then (e.g., based on the workflows, pipelines, and or other automation instructions) be extr acted, cleaned, and formated at 403, 404, 405, and/or 406 as one or more of a YAML and/or RDF file based on the use case and data quality/resolutions requirements to create a dataset suitable for model development. For example, at 406, a YAML file generated at 405 may be mapped to an RDF-based taxonomy 407 (and ultimately to an RDF file). Mapping the YAML file to an RDF -based taxonomy and/or an RDF file, in some aspects, may include one or more of identifying YAML data entities and properties at 408, designing and/or defining the RDF vocabulary at 409, and/or mapping the elements of the YAML (e.g., the data entities and properties identified at 408) to the RDF vocabulary (e.g., the RDF vocabulary designed and/or defined at 409) to produce an RDF file at 410.
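A minimal sketch of the YAML-to-RDF mapping at 408-410 follows, assuming the rdflib library is available. The example namespace, the identifier "hdc-001", and the direct key-to-predicate mapping are illustrative assumptions, not the mapping defined by the disclosure.

```python
# A minimal sketch: identify YAML entities/properties (408), define a simple RDF
# vocabulary (409), and map YAML elements onto it to produce RDF triples (410).
import yaml
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/hdc/")   # hypothetical vocabulary namespace

def yaml_to_rdf(yaml_text: str) -> Graph:
    """Map YAML keys/values onto an illustrative RDF vocabulary."""
    doc = yaml.safe_load(yaml_text)
    g = Graph()
    g.bind("ex", EX)
    hdc = EX["hdc-001"]                      # hypothetical identifier for this HDC
    g.add((hdc, RDF.type, EX.HypothesisDevelopmentCanvas))
    for key, value in doc.items():
        values = value if isinstance(value, list) else [value]
        for v in values:
            g.add((hdc, EX[key], Literal(v)))  # predicate taken from the YAML key
    return g

graph = yaml_to_rdf("hypothesis: reduce downtime\nkpis: [anomalies, failures]\n")
print(graph.serialize(format="turtle"))      # serialized form (step 411) for storage
```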
[0066] The data associated with the YAML and/or RDF file produced by the mapping at 406 and/or the RDF file produced at 410 may be serialized at 411 for storage, retrieval, and/or querying. The serialized data, in some aspects, may then be stored in a graph DB (e.g., one or more cloud, or on-premises, servers or databases) at 412. The data may then be displayed at 413 via a display unit or other output device implementing a UI or providing a UX for verification by an SME or HIL at 414. After verification at 414 (e.g., after zero or more rounds of negative feedback followed by at least one positive feedback/verification), the data may be processed, at 415, using an RDF to DevOps pipeline. In some aspects, the processing at 415 may produce intermediate results (e.g., artifacts) that may, at 416, be associated with one or more virtualized, cloud-based, or on-premises, services (e.g., Kubernetes™ 416a, Docker with Run Time 416b, and/or a pipeline execution service 416c). The pipeline may conclude with, or the artifacts generated by the pipeline processing at 415 may be used for, generating new content at 417. The content generated at 417 may then be verified at 418 by a user (e.g., SME or HIL) and may be incorporated into the system by the components and/or functions of the system 450.
[0067] In some aspects, in addition to producing the YAML and/or RDF file at 406, the system may, at 430, link and/or map the entities and/or information included in the YAML and/or RDF file to other content in a content graph (as described in relation to FIG. 5). The linking, in some aspects, may identify one or more of relationships between entities, common properties, granularity of the entities, and/or a usage of the link for insight (e.g., how the link may be used to provide insight and/or analysis). The linking, in some aspects, may be based on one or more of similarity-based algorithms 431, clustering algorithms 432, community detection algorithms 433, link prediction algorithms 434, and/or rule-based algorithms 435 (e.g., for exception handling).

[0068] As described above, data curation may include one or more processes associated with importing, cleaning, transforming, and organizing the data with sufficient volume to ensure it is ready and/or useful for model training (e.g., that there is enough data of a good enough quality to train a model with general applicability). Validation of the output data (e.g., at 414 and/or at 418), in some aspects, may ensure its quality and accuracy for processing in association with one or more corresponding use cases and/or models. In some aspects, these data curation and verification processes may contribute to ensuring that the models are developed using the "right-quality" data and can make accurate predictions.
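The similarity-based linking at 431 might, for example, look like the following sketch, which uses a simple string-similarity measure from the Python standard library as a stand-in for whatever similarity metric a deployment would choose; the entity names and the threshold are hypothetical.

```python
# A minimal sketch of similarity-based entity linking (431): propose links between
# entities whose names are sufficiently similar. Entity names are invented.
from difflib import SequenceMatcher
from itertools import combinations

entities = ["compressor model H", "Compressor Model-H", "wind turbine gearbox"]

def link_similar_entities(names, threshold=0.8):
    """Return candidate entity links whose name similarity exceeds the threshold."""
    links = []
    for a, b in combinations(names, 2):
        score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if score >= threshold:
            links.append((a, b, round(score, 2)))
    return links

print(link_similar_entities(entities))
# e.g. [('compressor model H', 'Compressor Model-H', 0.94)]
```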
[0069] In some aspects, the functions associated with model management may include model development and training (e.g., implemented by the model development and training module 122 of FIG. 1). Model development and training functions associated with model management, in some aspects, may include processes and techniques used to create and prepare a model for use in a predictive analytics, prescriptive optimization, automation, and/or reinforced/reinforcement learning (RL) environment. In some aspects, the processes and techniques may include one or more of data preprocessing and cleaning, feature engineering, model selection, hyperparameter tuning, training, evaluation, and/or model deployment. Model development and training functions, in some aspects, may be used to create and optimize models for supervised learning tasks such as classification, regression, and clustering. In some aspects, the model development and training functions may be used to create and optimize models for unsupervised learning tasks such as dimensionality reduction and anomaly detection.
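A minimal sketch of such a development-and-training flow is shown below, combining a preprocessing step, an estimator, and hyperparameter tuning via grid search. The synthetic dataset, the chosen estimator, and the parameter grid are illustrative assumptions only, not the disclosure's specific models.

```python
# A minimal sketch of model development and training: preprocessing, model
# selection, hyperparameter tuning (grid search), and evaluation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),               # data preprocessing/cleaning step
    ("model", RandomForestClassifier(random_state=0)),
])

search = GridSearchCV(                          # hyperparameter tuning step
    pipeline,
    param_grid={"model__n_estimators": [50, 100], "model__max_depth": [3, None]},
    cv=3,
)
search.fit(X_train, y_train)
print("held-out accuracy:", search.score(X_test, y_test))
```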
[0070] FIG. 5 is a diagram 500 illustrating a content graph in accordance with some aspects of the disclosure. Diagram 500 illustrates different elements (e.g., components and/or inputs) of a system that may be associated with different sets of functions. For example, diagram 500 illustrates a first set of elements associated with model and workflow management 510, a second set of elements associated with meta data management 520, a third set of elements associated with data management 530, a fourth set of elements associated with automation management 540, a fifth set of elements associated with knowledge management 550, a sixth set of elements associated with business use cases 560, and a seventh set of elements associated with platform management 570.

[0071] FIG. 6 is a diagram 600 illustrating elements of a model training process in accordance with some aspects of the disclosure. FIG. 7 is a diagram 700 illustrating elements of an inferencing process in accordance with some aspects of the disclosure. In some aspects, a model training process in accordance with some aspects of the disclosure may be associated with a set of physical constructs 602 and a set of use cases and KPIs 604 representing the input of the models for training with a corresponding set of physical constructs 702 and set of use cases and KPIs 704 representing the input of the models for inferencing. The set of physical constructs 602 and the set of use cases and KPIs 604 may be incorporated into an asset library 606 and an asset hierarchy 610 associated with model training. In some aspects, the set of physical constructs 602 and/or 702 and the set of use cases and KPIs 604 and/or 704 may be associated with a set of physical sensor data 608 and/or a set of physical sensor data 708 (or other data extracted from input data such as from the set of inputs 401 of FIG. 4 based on the set of physical constructs 602 and/or 702 and the set of use cases and KPIs 604 and/or 704) for training and/or inferencing. At least one of the model training and the inferencing, in some aspects, may be associated with using the set of physical sensor data 608 and/or 708 (and the asset hierarchy 610 in the case of training) to generate data 612 for a model training operation (or data 712 for an inferencing operation) that may include data characters and curation information 614 and data features 616 (or data characters and curation information 714 and data features 716). In some aspects of model training, the data characters and curation information 614 and data features 616 may be processed by one or more models to be trained 624 (e.g., models associated with, or corresponding to, "algorithm 1" 626 to "algorithm N" 628) while the data characters and curation information 614 and data features 616 along with the asset library 606 may be used to produce updates to the one or more models based on a set of updating operations 618 that may include, for example, one or more of a ranking operation 620 (e.g., a ranking for RL or other feedback and/or updates) or a grid search operation 622 (e.g., an operation to identify updates to one or more parameters associated with the one or more models) to optimize the one or more models.
[0072] The updates to the one or more models, in some aspects, may further be associated with feedback associated with information regarding the accuracy of the current one or more models such as rewards and/or penalties 634 (e.g., information regarding a difference between an output of a current model of the one or more models and a "ground truth" associated with the input data provided to the current model of the one or more models). The rewards and/or penalties 634 may be produced by a feedback operation 630 that may further include identifying a configuration and model set 636 including a selected configuration 638 and a selected model, e.g., a selected algorithm 640, and identifying, and/or applying, prediction criteria 644 and optimization criteria 646. If a particular model meets a set of optimization criteria 646 for a particular application (e.g., is determined to be sufficiently accurate and/or optimized for a combination of speed, accuracy, etc. for a process associated with a particular business use case, monitoring task, or analysis), it may be provided for incorporation into a set of models managed by a model management module 650 or into a set of models used in producing recommendations at a recommendation engine 652.
[0073] In a subsequent inferencing operation, the model management module 750 may provide one or more of the models trained using, e.g., the elements of FIG. 6, and the generated data 712 for an inferencing operation to an inferencing module 772 to generate one or more inferences. For example, the one or more inferences may be based on using the data 712 as inputs to one or more of the models provided by the model management module 750.
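A minimal sketch of this inferencing flow follows; the in-memory registry standing in for the model management module 750, the single vibration feature, and the model name are hypothetical simplifications.

```python
# A minimal sketch of the FIG. 7 flow: a model-management layer hands a trained
# model and curated feature data to an inferencing step (772).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for models provided by a model management module (750); trained on toy data.
model_registry = {
    "vibration_anomaly": LogisticRegression().fit(
        np.array([[0.1], [0.9], [0.2], [0.8]]), [0, 1, 0, 1]),
}

def run_inference(model_name: str, features: np.ndarray):
    """Retrieve a managed model by name and generate inferences for new data."""
    model = model_registry[model_name]
    return model.predict(features)

print(run_inference("vibration_anomaly", np.array([[0.85], [0.15]])))  # e.g. [1 0]
```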
[0074] In some aspects, the inferencing operation described in relation to FIG. 7 may involve model deployment functions included in the functions associated with model management and may include functions (e.g., steps and/or processes) associated with operationalizing a model (e.g., using a model for run-time analysis and content and/or recommendation generation). In some aspects, the model deployment functions may be implemented by, or associated with, a model deployment module such as model deployment module 123. The model deployment functions, in some aspects, may include testing and validating the model, automating model runs, setting up feedback loops to monitor model performance, and/or deploying the model into a production environment. In some aspects, the model deployment functions may include monitoring model results, updating models as needed, and/or deploying new versions of the model.
[0075] In some aspects, the system may include platform automation functions. The platform automation functions, in some aspects, may include automated model versioning that may include functions associated with automatically versioning models, tracking changes, and storing model versions in an organized repository. In some aspects, the platform automation functions may include automated model testing associated with automatically testing models against, or based on, certain criteria, such as accuracy and stability, and outputting the results. The platform automation functions, in some aspects, may include automated model deployment associated with deploying models automatically to new environments and ensuring that the automatically deployed models are up and running in an efficient and secure manner. In some aspects, the platform automation functions may include automated model management associated with defining, monitoring, and managing the lifecycle of models, such as creation, deployment, and retirement.
[0076] In some aspects, model storage functions (e.g., implemented by a model store module 126 of FIG. 1) may be included in the functions associated with model management. The model storage functions, in some aspects, may be associated with one or more model repositories that may be used to index and store models, e.g., in an MLFlow format to be better self-contained. In some aspects, the model storage functions may be associated with data integration blueprints that may be associated with collecting and integrating data and/or information from corresponding sources for the purpose of analysis. The model storage functions, in some aspects, may include functions associated with a model design codex that may be used to develop models to address specific business questions and objectives from a model codex.
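The MLFlow-style storage mentioned above might be exercised as in the following sketch, assuming MLflow and scikit-learn are installed and a local tracking directory can be used; the toy model, the artifact name, and the URI pattern are illustrative only.

```python
# A minimal sketch of indexing/storing a model in a self-contained MLflow format
# and retrieving it later from the model repository.
import mlflow
import mlflow.sklearn
from sklearn.linear_model import LinearRegression

model = LinearRegression().fit([[0.0], [1.0], [2.0]], [0.0, 1.0, 2.0])

with mlflow.start_run() as run:
    mlflow.sklearn.log_model(model, "anomaly_model")   # store the model artifact
    run_id = run.info.run_id

# Later retrieval from the repository by URI.
restored = mlflow.sklearn.load_model(f"runs:/{run_id}/anomaly_model")
print(restored.predict([[3.0]]))   # expected to be close to [3.0]
```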
[0077] In some aspects, the model storage functions may include a model deployment inferencing API that may be associated with deploying models into production, monitoring performance, and adjusting the models as inaccuracies are identified. The model storage functions, in some aspects, may include model evaluation functions that may track and evaluate associated metrics such as KPIs of models against defined and/or identified goals and objectives associated with one or more use cases. In some aspects, model storage functions may include model governance process integration functions associated with integrating policies and processes to ensure compliance with regulations, data privacy, and security. The model storage, in some aspects, may include model auditing integration functions associated with tracking and documenting model performance, accuracy, and usage. In some aspects, the model storage functions may include model optimization, continuous development, and decommission alert functions associated with alerts to refine models over time to improve accuracy and/or performance or to suggest decommissioning (e.g., due to compliance issues, policy changes, and/or improved AI/ML approaches outperforming existing models).

[0078] The model storage functions, in some aspects, may further include one or more financial functions associated with model cost management. The financial functions associated with the model cost management may relate to one or more of budgeting, risk management, cash flow analysis, forecasting, portfolio management, and/or auditing and cost control. In some aspects, a budgeting function may relate to planning and forecasting the future demand of models and their cost in execution management. A risk management function, in some aspects, may be associated with identifying, analyzing, and mitigating risks in fulfilling SLAs. In some aspects, a cash flow analysis function may be associated with analyzing and forecasting the inflow and outflow of cash in value realization of analytics models. A forecasting function, in some aspects, may relate to predicting future trends in types of models' demand. In some aspects, a portfolio management function may be associated with managing investments and right-sizing analytics portfolios to maximize returns. An auditing and cost control function, in some aspects, may verify financial statements and reports of analytics development and return and/or identify and reduce costs to maximize profits.
[0079] In some aspects, feature management functions (e.g., implemented by a feature management module 125 of FIG. 1) may be included in the functions associated with model management. In some aspects, the feature management functions (as part of analytics model management) may be associated with one or more processes for managing the lifecycle of features used in analytics models. The one or more feature management processes, in some aspects, may be associated with understanding the data sources that feed the features, validating their accuracy, tracking feature usage and performance, managing feature versions, and versioning feature sets.
[0080] Accordingly, feature management functions may be provided to ensure that models are up to date and accurate while also allowing teams to quickly iterate and experiment with different feature combinations. For example, feature management functions, in some aspects, may be associated with activities such as one or more of exploring different features and selecting the most relevant, engineering features by transforming and combining them to create new features, and/or validating features to ensure they are accurate and up to date. In some aspects, the feature management functions may additionally, or alternatively, be associated with one or more of monitoring features for changes in their performance and/or automating feature engineering and selection processes.

[0081] In some aspects, the system may implement stacked modeling using the content in the model store (e.g., the model store module 126). Composing models using stacked modeling, in some aspects, may include using the outputs of an analytics model as features and/or inputs to another model in analytics model management. For example, one or more of predicted values, model scores, feature importance, model coefficients, clustering results, outlier detection, and/or dimensionality reduction results from one or more models may be used to define, or be used as, features and/or inputs to a model composed using stacked modeling. For example, predicted values for one or more of a machine operating anomaly score, a customer churn, or a credit risk from one or more supervised machine learning models, in some aspects, may be used as features and/or inputs of another model. In some aspects, model scores may be metrics that measure the accuracy of a model, and the model scores may be used to compare different models and select the best one or ones to ensemble for further analysis. Feature importance, in some aspects, may be a measure of how important a feature is for making predictions and may be used to identify which features are most important for making accurate predictions for inclusion in a model composed using stacked modeling. In some aspects, model coefficients (e.g., weights) may be values that represent the relationship between a predictor variable and the response variable and may be used to understand the impact of each predictor on the response. Clustering algorithms, in some aspects, may be used to group data points into distinct categories (e.g., to produce clustering results) that may be used as features for further analysis. In some aspects, outlier detection algorithms may be used to identify data points that are significantly different from the rest of the data and the identified outliers may be used to define features for further analysis (e.g., by including or excluding the outliers). Dimensionality reduction algorithms, in some aspects, may produce dimensionality reduction results that may be used to reduce a number of features and/or inputs for a model composed using stacked modeling while preserving the (relevant) original information. The results of the dimensionality reduction can be used as features for further analysis.
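A minimal sketch of stacked modeling in this sense is shown below: the predicted probabilities of two base models are used as the features of a second-level model. The data is synthetic and the particular choice of estimators is an illustrative assumption.

```python
# A minimal sketch of stacked modeling: outputs of base analytics models become
# features/inputs of another model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=12, random_state=1)
X_base, X_stack, y_base, y_stack = train_test_split(X, y, random_state=1)

# Base models (e.g., producing anomaly/churn/credit-risk style scores).
base_models = [RandomForestClassifier(random_state=1).fit(X_base, y_base),
               LogisticRegression(max_iter=1000).fit(X_base, y_base)]

# Their predicted probabilities become the features of the stacked model.
stacked_features = np.column_stack(
    [m.predict_proba(X_stack)[:, 1] for m in base_models])

stacked_model = LogisticRegression().fit(stacked_features, y_stack)
print("accuracy on the stacking split:",
      stacked_model.score(stacked_features, y_stack))
```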
[0082] In some aspects, performance management functions (e.g., implemented by a performance management module 127 of FIG. 1) may be included in the functions associated with model management. Performance management functions, in some aspects, may be designed to be, and/or to provide, a common service for analytics model management associated with one or more processes for evaluating the performance of an analytical model with its corresponding business use case(s) (e.g., using another analytics model(s) for performance scoring). In some aspects, the performance management functions may monitor the model's accuracy and effectiveness, as well as ensuring it is up to date with industry SME standards, users' (e.g., stakeholders, decision makers, etc.) standards, and users' preferences so the users can make informed decisions.
[0083] Performance management functions, in some aspects, may include data mining of analytics model performance using a common standard model and a common standard process. In some aspects, the performance management functions may be associated with gathering and analyzing data related to the performance of one or more analytics models, e.g., the performance management functions may collect data on accuracy, speed, scalability, and other metrics that can be used to measure the performance of the model. The performance data gathered, and/or collected, may then be used for analyzing the one or more analytics models (e.g., one or more models stored in the model store) to identify areas of improvement or areas of strength in the model, allowing the model to be optimized for better results. In some aspects, to improve model performance using the performance management functions, it may be useful to leverage common standard models and agreed upon criteria to identify and address areas of weakness, by taking remedial and/or collective actions such as adding more data to the training set, improving the model architecture, or tuning the hyperparameters of the model. Regular testing of the models, and/or the system as a whole, using the performance management functions to monitor the performance of models within the system (e.g., models stored in the model store module 126) may ensure that any changes that are made are improving the model's, and/or the system's, performance (instead of degrading the performance).
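The kind of metric gathering and criteria checking described above might be sketched as follows; the accuracy and latency thresholds and the toy evaluation data are assumptions for illustration, not agreed-upon criteria from the disclosure.

```python
# A minimal sketch of performance management: gather accuracy/latency metrics for
# a model and flag it when illustrative criteria are not met.
import time
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

def evaluate_model(model, X_eval, y_eval, min_accuracy=0.9, max_latency_s=0.5):
    """Return collected metrics and whether the model still meets its criteria."""
    start = time.perf_counter()
    predictions = model.predict(X_eval)
    latency = time.perf_counter() - start

    metrics = {"accuracy": accuracy_score(y_eval, predictions),
               "latency_s": latency}
    metrics["meets_criteria"] = (metrics["accuracy"] >= min_accuracy
                                 and latency <= max_latency_s)
    return metrics

# Toy usage with synthetic evaluation data.
X_eval, y_eval = [[0], [1], [2], [3]], [0, 0, 1, 1]
model = DecisionTreeClassifier().fit(X_eval, y_eval)
print(evaluate_model(model, X_eval, y_eval))
```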
[0084] FIG. 8 is a diagram 800 illustrating recommendations based on model selection from a model management subsystem in accordance with some aspects of the disclosure. For example, a recommender 850 may include a recommendation engine 852 that receives input associated with a business use case (UC) 810 (e.g., a set of KPIs), a workflow 820, assets and data 830, and a user 840 to generate recommendations in accordance with some aspects of the disclosure.
[0085] In some aspects, the system may incorporate a GPT model (e.g., may include GPT module 140 implementing a GPT model). The GPT model, in some aspects, may be implemented individually, or in combination with a Bidirectional Encoder Representations from Transformers (BERT) model. As described above in relation to FIG. 1, the GPT model may be used to perform content filtering, where the GPT model may be leveraged (and/or used) to narrow down many possible failure modes to a smaller set of specific failure modes of interest to a user (either a particular user or a generic user) based on user input (e.g., from the particular user or from an SME), to provide, or aid in, content filtering. Additionally, a pre-trained BERT model may be utilized to extract actionable information from sources such as manuals, maintenance logs, service logs, and FAQs. In some aspects, the aspects of the system described in relation to FIGs. 6-8 may be associated with interactions between the functions associated with content management (e.g., content management 110) and the functions associated with the model management 120.
[0086] FIG. 9 is a diagram 900 illustrating aspects of training a GPT model (or simply a GPT) in accordance with some aspects of the disclosure. In some aspects, the GPT model may be a private GPT model (or private GPT) specific to the system and/or to a particular user or user type. As described above in other contexts, the system may receive as inputs documents and images (e.g., documents and image artifacts 902) and extract data (e.g., content and metadata) from the input data (e.g., by recognizing a data type at 904 and by inferencing using a model (e.g., a language-based model at 906 or an image-based inspection model at 908) based on the recognized data type). The extracted data may then be provided as input to a GPT model to produce and/or generate content at 910. The content produced at 910 may then be provided to an SME to validate and/or to improve the output content through feedback at 912 associated with an SME behavior at 918. In some aspects, the feedback mechanism may be designed to minimize the review and retraining cycles. Based on the feedback from the SME, the system may provide the validated (curated) artifacts (e.g., data and metadata) to the content management system functions at 916.
[0087] FIG. 10 is a diagram 1000 illustrating additional aspects of training the GPT model of FIG. 9 in accordance with some aspects of the disclosure. As indicated by the element "A," the (validated) output from the GPT may be used to generate (or may include) a set of semi pre-curated content 1002 that may be subject to further optimization to improve context and content via additional retraining operations for the GPT model. The semi pre-curated content 1002, in some aspects, may be used in combination with a catalog of existing reports and available data 1006 to produce a set of user requests 1010. The semi pre-curated content 1002 and the user request 1010 may represent preliminary results to be validated and/or improved upon (e.g., via additional training of the GPT model in FIG. 9) based on the feedback from an SME at 1008. For a request that is validated by the SME at 1008, the user request 1010 may be used to generate a user request 1012 that is presented to a user for feedback at 1014 to improve the fit to the task. For example, the user feedback may be used to provide a better match of a user's preference and/or improve the content quality and/or granularity. In some aspects, the feedback mechanism may be designed to minimize the review and retraining cycles. The components and/or functions of the system described in relation to FIGs. 9 and 10, in some aspects, may be associated with interactions between the functions associated with the model management (e.g., model management 120) and the functions associated with content AI (e.g., content AI 130) and/or a digital advisor associated with one or more human users.
[0088] In some aspects, the system may use one or both of the GPT and/or BERT models for failure mode analysis by combining supervised and unsupervised learning techniques to narrow down many failure modes to a few possible ones. The supervised component will train the GPT model to recognize features of each of a set of failure types, while the unsupervised clustering will group failure modes into general categories, reducing the total number of failure modes. The GPT/BERT model may be fine-tuned, in some aspects, using hyper-parameter tuning and feature selection techniques to improve the accuracy of the failure mode analysis, e.g., as described in relation to the model management functions of FIG. 1 or the training described in relation to FIGs. 9 and 10. In some aspects, the system may train the BERT model using labeled failure modes and their associated questions, content, and/or operational manuals to extract prescribed content from asset failure mode content. The system may then use the trained BERT model to refine and answer specific questions related to the identified failure modes (e.g., the set of failure modes identified by the GPT model) as indicated by the training materials. Accordingly, in some aspects, the GPT may be used to parse a document produced for a failure mode(s) and effects analysis (FMEA) to identify and group failure modes. FIG. 11 is a diagram 1100 illustrating related elements of an FMEA in accordance with some aspects of the disclosure for an example relating to a wind turbine (WT). For example, an FMEA may provide an item overview 1110, a standard FMEA 1120, a set of useful parameters 1130, and a set of risks 1140.
[0089] FIG. 12 is a diagram 1200 illustrating the use of a combination of a GPT model and a BERT model to generate and/or identify questions and answers in accordance with some aspects of the disclosure. For example, in some aspects, the GPT model may be used to generate "questions," e.g., to identify issues or problems based on the input data sources and the taxonomy provided in an FMEA in association with a first set of operations 1210, while the BERT model may be used to identify the "answers," e.g., the content from the associated questions, content, and/or operational manuals, in association with a second set of operations 1220. In some instances, the separation of roles may be based on the BERT model being more reliable when working from a known universe of authoritative materials than a GPT model.
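The "BERT answers from authoritative material" half of this division of labor might be sketched as below, assuming the Hugging Face transformers library and the named public extractive-QA checkpoint are available; the manual excerpt and question are invented for illustration, and the generative (question-producing) side is omitted.

```python
# A minimal sketch: a pretrained extractive (BERT-style) QA model pulls the answer
# to a failure-mode question from an authoritative manual excerpt.
from transformers import pipeline

manual_excerpt = (
    "If the compressor vibration score exceeds 70, inspect the bearing assembly "
    "and replace worn bearings before returning the unit to service."
)
question = "What should be done when the vibration score exceeds 70?"

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")
result = qa(question=question, context=manual_excerpt)
print(result["answer"])   # e.g. "inspect the bearing assembly and replace worn bearings"
```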
[0090] FIG. 13 is a first diagram 1300 and a second diagram 1350 illustrating a first set of training operations, and a second set of inferencing operations, respectively, associated with the GPT and/or BERT model(s) in association with the CMS in accordance with some aspects of the disclosure. In some aspects, the GPT and/or BERT model training is in addition to the AI/ML training performed in connection with other aspects of the model management functions (e.g., the training associated with FIG. 6). As shown in FIG. 13, the GPT and/or BERT models may be trained in parallel using similar inputs. For example, information from one or more of the CMS content 1302, the knowledge graph 1304 (e.g., the knowledge graph of FIG. 5), the questions graph 1306 (e.g., the questions graph of FIG. 12 based on the FMEA of FIG. 11), external data sources 1308, and FMEA scenarios and data 1310 may be provided to a first GPT model (e.g., GPT questions 1312 and/or GPT answers 1314) and/or a first BERT model (e.g., BERT questions 1316 and/or BERT content 1318) to produce input for a "GPT" 1320 (e.g., one or more of the GPT model 1322 or the BERT model 1324). The GPT 1320, in some aspects, may produce GPT output content 1328 and/or BERT output content 1330 based on a current state of the GPT model and/or the BERT model. The GPT output content 1328 and/or BERT output content 1330 may be provided to a HIL 1334 (e.g., an SME or user) that may define one or more of a set of prediction criteria 1336 (e.g., identifying a ground truth, providing prediction evaluation criteria, etc.), a set of optimization criteria 1338 (e.g., criteria or parameters related to a desired speed of convergence, a threshold accuracy, etc.), and/or a set of suitability criteria 1340 and may further provide feedback approving or rejecting the content. Based on one or more of the sets of criteria, e.g., set of prediction criteria 1336, set of optimization criteria 1338, and set of suitability criteria 1340, and/or the feedback from the HIL 1334, the system may generate an RL reward-penalty 1332 used to update one or more of the GPT questions 1312, GPT answers 1314, BERT questions 1316, BERT content 1318, GPT model 1322, and/or BERT model 1324.

[0091] The set of inferencing operations, in some aspects, may mirror the inferencing operations discussed above in relation to FIG. 7. For example, diagram 1350 illustrates that the inferencing operation may include receiving inputs based on a physical construct 1362 and a set of UCs and/or KPIs 1364 that may be used to define data retrieved from physical sensor data 1366. Data characters and curation 1368 and data features generation 1370, in some aspects, may then be processed by one or more models as determined by model management 1372. Based on the model management 1372, the GPT model and/or the BERT model (e.g., the GPT 1320 including one or more of the GPT model 1322 or the BERT model 1324) may perform a "GPT" inferencing operation 1374 using the trained "GPT" models (e.g., one or more GPT models and/or BERT models). In some aspects, the components and/or functions of the system described in relation to FIGs. 11-13 may be associated with interactions between the functions associated with content management (e.g., content management 110) and the functions associated with the GPT (e.g., the GPT module 140).
[0092] FIG. 14 is a diagram 1400 illustrating components of semantic reasoning traceability associated with one or more GPT and/or BERT models in accordance with some aspects of the disclosure. In some aspects, the semantic reasoning traceability may be associated with tracing decision making and/or group triage evidences associated with the one or more GPT and/or BERT models. In some aspects, the semantic reasoning traceability may include transforming the knowledge graph data 1404 into a sentence at 1438 and concatenating, at 1440, the transformed knowledge graph data with text input data (e.g., based on one or more of the CMS content 1402, the knowledge graph 1404 (e.g., the knowledge graph of FIG. 5), the questions graph 1406 (e.g., the questions graph of FIG. 12 based on the FMEA of FIG. 11), external data sources 1408, and FMEA scenarios and data 1410) before feeding it into one or more of the GPT questions 1412, GPT answers 1414, BERT questions 1416, and/or BERT content 1418. After processing through one or more of the GPT questions 1412, GPT answers 1414, BERT questions 1416, and/or BERT content 1418, the result may be provided as input to the GPT 1420 discussed below.
[0093] As described above, the GPT 1420 including one or more of the GPT model 1422 or the BERT model 1424 (corresponding to the GPT 1320, the GPT model 1322, and the BERT model 1324 of FIG. 13) may be fine-tuned on a dataset that includes both text and knowledge graph data. In some aspects, during the training described in relation to FIG. 13, the GPT model 1422 and/or the BERT model 1424 may be trained to generate, in the output text, (additional) text that may be used to identify a context (or source) of the generated text from the knowledge graph. The output from the GPT 1420 may then be preprocessed, at 1436, to extract entities and concepts included in the output from the GPT 1420. After identifying the entities and concepts, the system may construct a knowledge subgraph at 1444 and map, at 1442, the (nodes of the) knowledge subgraph to the (nodes of the) knowledge graph 1404. The mapping may then be presented to a user at 1446 for the user to understand the basis of the text output of one or more of the GPT model 1422 and/or the BERT model 1424 (e.g., to analyze the results based on the semantic reasoning traceability). The components and/or functions of the system described in relation to FIG. 14, in some aspects, may be associated with interactions between the functions associated with content management (e.g., content management 110), the functions associated with the GPT (e.g., the GPT module 140), and the functions associated with content AI (e.g., content AI 130) and/or a digital advisor associated with one or more human users.
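A minimal sketch of the traceability steps at 1438-1444 follows: triples are verbalized into sentences for the model input, and entities found in the generated text are mapped back to a subgraph of the source graph. The triples and the model output string are hypothetical.

```python
# A minimal sketch of semantic reasoning traceability: verbalize triples (1438),
# concatenate them into the model input (1440), and map output entities back to a
# knowledge subgraph (1442/1444) for presentation to a user (1446).
knowledge_graph = [
    ("CompressorModelH", "hasFailureMode", "BearingWear"),
    ("BearingWear", "indicatedBy", "HighVibrationScore"),
]

def triples_to_sentences(triples):
    """Transform knowledge-graph triples into natural-language sentences."""
    return [f"{s} {p} {o}." for s, p, o in triples]

def map_output_to_subgraph(model_output, triples):
    """Keep the triples whose entities appear in the generated text."""
    text = model_output.lower()
    return [t for t in triples if t[0].lower() in text or t[2].lower() in text]

prompt = " ".join(triples_to_sentences(knowledge_graph)) + " What is likely failing?"
print(prompt)

# A hypothetical model response; the mapped subgraph is what would be shown to the user.
model_output = "BearingWear is the most likely failure mode for CompressorModelH."
print(map_output_to_subgraph(model_output, knowledge_graph))
```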
[0094] FIG. 15 is a diagram 1500 illustrating elements of a process or algorithm used to generate descriptive questions for the GPT in accordance with some aspects of the disclosure. In some aspects, the process illustrated in diagram 1500 may be used to convert a set of numerical outputs of an analytical model into a text format that may be used for GPT (e.g., may be provided to a GPT model as an input). Diagram 1500 illustrates that an analytics model pre-processing 1502 may pre-process elements associated with an analytical model such as an asset associated with the model (e.g., asset in scope 1504 or a particular device in an industrial setting), a set of “asks” and/or decisions associated with the analytics model (e.g., analytics asks/decisions 1506 such as failure modes), a set of KPIs associated with the analytics model (e.g., analytics KPIs 1508 such as a vibration anomaly score), and a set of outputs produced by the analytics model (e.g., analytics outputs 1510 such as time series data for average vibration score per minute). The analytics model pre-processing 1502, in some aspects, may produce one or more of an output and/or results based on the raw analytics output data and corresponding metadata content in a format supported by the GPT (e.g., a text format supported for GPT inputs).
[0095] The output and/or results of the analytics model pre-processing 1502, in some aspects, may be subject to additional pre-processing via a set of NLP operations 1512. The NLP operations 1512, in some aspects, may include a text cleaning operation 1514 that may remove any irrelevant characters, punctuation, and/or formatting from the raw analytics output data and corresponding metadata content. The NLP operations 1512, in some aspects, may further include a normalization operation 1516 that may convert the text data produced by previous operations into a consistent format. A summarization and/or aggregation operation 1518 may extract insights (e.g., a set of insights considered to be the most important) and corresponding meta information from the analytics output and may summarize and/or aggregate the extracted insights and corresponding metadata in a readable content and format via text summarization. In some aspects, the summarization and/or aggregation operations 1518 may include tokenizing the data using a tokenizer, e.g., using the natural language toolkit (NLTK). The NLP operations 1512, in some aspects, may include a formatting operation 1520 that formats the text output into a format that is suitable for input into GPT (e.g., into a specialized format for GPT training and/or inferencing). The formatting operations 1520, in some aspects, may include converting the tokenized data into a special format for GPT training data. The formatted data (input for at least one of a GPT training or inferencing) may then have a set of validation operations 1522 applied (e.g., a set of automated or human-assisted operations) as part of the NLP operations 1512 to validate the output text and ensure that it accurately reflects the insights or information contained in the original analytics output. The NLP operations 1512 may generate or produce one or more statements 1524 (e.g., clean, normalized, summarized, formatted, and validated statements) that may be saved and/or provided (e.g., uploaded) for a GPT training and/or inferencing.
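The following minimal sketch illustrates, under stated assumptions, the cleaning, normalization, summarization/aggregation, tokenization, and formatting steps of the NLP operations 1512 applied to raw analytics output; only the NLTK tokenizer is named in the description above, and the field names and output record format are hypothetical.

```python
# Minimal sketch of the NLP operations 1512 (cleaning, normalization,
# summarization/tokenization, and formatting) applied to raw analytics output
# before it is handed to a GPT model. Field names and the output record format
# are illustrative assumptions; only the NLTK tokenizer is named in the text.
import re

import nltk  # a one-time nltk.download('punkt') may be required for word_tokenize


def clean(text: str) -> str:
    """Remove irrelevant characters and collapse whitespace/formatting."""
    text = re.sub(r"[^A-Za-z0-9.,?%\- ]+", " ", text)
    return re.sub(r"\s+", " ", text).strip()


def normalize(text: str) -> str:
    """Convert the cleaned text into a consistent (lower-case) format."""
    return text.lower()


def summarize(metadata: dict, insight: str) -> str:
    """Aggregate the key insight and its metadata into a readable statement."""
    return (f"Asset {metadata['asset']}: {insight} "
            f"(KPI: {metadata['kpi']}, decision: {metadata['decision']}).")


def format_for_gpt(statement: str) -> dict:
    """Tokenize and wrap the statement in a (hypothetical) GPT training record."""
    tokens = nltk.word_tokenize(statement)
    return {"prompt": statement, "tokens": tokens}


meta = {"asset": "Hitachi Compressor model H",
        "kpi": "vibration anomaly score",
        "decision": "failure mode prediction"}
raw = "  Avg vibration score = 75/hour ***ALERT*** "
statement = summarize(meta, normalize(clean(raw)))
record = format_for_gpt(statement)  # the validated output would then be uploaded
print(record["prompt"])
```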
[0096] For example, an analytical model for predicting one or more failure modes associated with a Hitachi compressor Model H based on a time series of vibration anomaly scores may indicate the aspects of the analytics model in various locations (e.g., in different metadata fields, in an output of the analytical model, in a set of documents used to generate the analytics model, etc.). Given the disparate data sources associated with the analytics model, the analytics model pre-processing 1502 and the NLP operations 1512 may identify content to be used to generate the one or more statements including “3-year-old Hitachi Compressor model H,” “average vibration score of 75 per hour,” and “failure modes,” and may produce a summary statement including the identified elements in a natural language format (e.g., “What are the potential failure modes associated with a 3-year-old Hitachi Compressor model H that has an average vibration score of 75 per hour?”).
[0097] FIG. 16 is a diagram 1600 illustrating the use of SME and user feedback for a GPT training operation in accordance with some aspects of the disclosure. For example, content generation may be performed by GPT 1602 (one or more of GPT model 1604 and/or BERT model 1606) based on the output of an analytics model (not shown). In some aspects, a prompt or API call may be created and/or generated to provide the GPT model (e.g., GPT 1602, GPT model 1604, and/or BERT model 1606) with a starting point. The created and/or generated prompt or API call, in some aspects, may introduce a topic of a desired knowledge content to generate. The topic might be UC KPIs, analytics model outputs, sensor outputs trending, or any other issue or metric of interest. Once the prompt or API call is created and/or generated, the prompt or API call may be provided as input to one or more GPT models (e.g., GPT 1602, GPT model 1604, and/or BERT model 1606). The one or more GPT models (e.g., GPT 1602, GPT model 1604, and/or BERT model 1606), in some aspects, may generate text (output content) based on the prompt or API call. The generated text (output content), in some aspects, may include GPT output content 1608 and BERT output content 1610.
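Referring back to the FIG. 15 example in paragraph [0096] above, a descriptive question of that form could be assembled from the identified elements with a simple template, as in the hedged sketch below; the function name and parameters are illustrative assumptions rather than the described pre-processing implementation.

```python
# Hedged sketch of assembling the identified content elements into a natural
# language question for the GPT, mirroring the compressor example above. The
# template and field names are assumptions for illustration only.
def build_descriptive_question(asset: str, age_years: int,
                               kpi_name: str, kpi_value: str,
                               ask: str = "potential failure modes") -> str:
    return (f"What are the {ask} associated with a {age_years}-year-old {asset} "
            f"that has an {kpi_name} of {kpi_value}?")


question = build_descriptive_question(
    asset="Hitachi Compressor model H",
    age_years=3,
    kpi_name="average vibration score",
    kpi_value="75 per hour",
)
print(question)
# What are the potential failure modes associated with a 3-year-old
# Hitachi Compressor model H that has an average vibration score of 75 per hour?
```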
[0098] After the text is generated by the one or more GPT models, the generated text (e.g., GPT output content 1608 and BERT output content 1610) may be provided to one or more of an SME or a user. The SME, in some aspects, may, at 1612, review the generated text and make modifications for its correctness (e.g., to improve the quality of the output, such as its relevance to the topic, or its technical accuracy or use of jargon). The modified text (the modified GPT output content 1608 and/or the modified BERT output content 1610 after review at 1612) may then be provided as knowledge content and stored in the CMS with lineages (e.g., with metadata indicating the source of the modified text such as an iteration of the training or other information that may be used to determine how to use the modified text) and may further be used to update and/or retrain the GPT 1602 (e.g., to update one or more of the GPT model 1604 and/or the BERT model 1606) based on RL approach 1620 (e.g., an RL algorithm/operation) to produce new content that is expected to be better (e.g., more technically accurate or useful) than the previously generated content.
[0099] Similarly, in some aspects, the user may review, at 1614, the generated text and make modifications for its usability (e.g., to improve the quality of the output, such as its understandability or its utility to the particular user). The modified text (the modified GPT output content 1608 and/or the modified BERT output content 1610 after review at 1614) may then be provided to generate a preference model (e.g., a user preference model 1616) that may be stored in the CMS with lineages (e.g., with metadata indicating the source of the modified text such as an iteration of the training or other information that may be used to determine how to use the modified text). The modified text may be related to the content which may be used for decision making by the user and may further be used to update and/or retrain the GPT 1602 (e.g., to update one or more of the GPT model 1604 and/or the BERT model 1606) based on RL approach 1618 (e.g., an RL algorithm/operation) to produce new content that is expected to be better (e.g., more useful for the user) than the previously generated content.
[0100] FIG. 17 is a diagram 1700 illustrating elements associated with using human feedback for RL-based training of a model in accordance with some aspects of the disclosure. For example, referring to FIG. 16, one or more of the GPT model 1604 and/or the BERT model 1606 may output content to be reviewed by a HIL (e.g., an SME or a user) to generate one or more reports and associated feedback in the set of reports/feedback 1702 for a particular version and/or iteration of a trained GPT model. At 1706, the user (e.g., the SME) may be presented with open-ended questions and/or a structured questionnaire to elicit feedback from the user (e.g., to identify the user’s interpretation of the reports and/or to elicit the user’s intents and patterns, from the user’s point of view, from the reports). Additionally, or alternatively, NLP may be used at 1708 to extract intents from the SME's interpretation, e.g., NLP techniques may be used to identify key themes and intents. The operations at 1706 and/or 1708, in some aspects, may be used to generate an additional set of reports/feedback in the set of reports/feedback 1702 for a current version and/or iteration of a trained GPT model. For example, a topic modeling algorithm may be used to identify the most important topics discussed in the interpretation, or a named entity recognition algorithm may be used to identify entities that are mentioned frequently.
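As one hedged example of the intent-extraction step at 1708, a topic modeling algorithm may be applied to SME interpretations to surface key themes; the sketch below uses scikit-learn's latent Dirichlet allocation as one possible (but not mandated) choice, with invented example interpretations.

```python
# Illustrative sketch of extracting key themes/intents from SME report
# interpretations using a topic modeling algorithm. scikit-learn's LDA is one
# possible choice; the text does not mandate a specific library or algorithm.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

sme_interpretations = [
    "Vibration trend suggests bearing wear; schedule lubrication inspection.",
    "Bearing wear likely; vibration anomaly persists after lubrication.",
    "Temperature spikes correlate with motor overload on the compressor line.",
    "Motor overload events increase with compressor temperature excursions.",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(sme_interpretations)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-4:][::-1]]
    # Each topic's top terms approximate an SME intent that could feed the
    # rules/heuristics or RL policy described in the surrounding paragraphs.
    print(f"topic {topic_idx}: {', '.join(top_terms)}")
```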
[0101] At 1710, the system may use the SME inputs and/or intents from reports to create rules and/or heuristics that may be incorporated into the algorithm. In some aspects, at 1710, the system may use the SME inputs/intents from reports to train a machine learning model (e.g., the GPT model). Additionally, or alternatively, the system may use the SME inputs/intents from reports to guide the development of the algorithm (or the machine learning and/or GPT model). The output of the operations at 1710, in some aspects, may improve the relevance of the content output by the GPT model based on the content curation performed by the user (e.g., based on a similarity
(probability) or exceptions) at 1704. In some aspects, the output of the operations at 1710 may be aggregated to generate an ensemble for an RL policy at 1714 that may, in turn, be provided to, and/or used by, the functions associated with knowledge management at 1712.
[0102] In some aspects, the components of a digital advisor UI and/or UX may include an administrator UI. An administrator UI, in some aspects, may include a set of functions and/or UIs for evaluating and monitoring the GPT output and/or the CMS output content. The set of functions and/or UIs for evaluating and monitoring the GPT output and/or the CMS output content, in some aspects, may include UIs for quality control engineering, standardizing criteria and dimensions for content evaluation and feedback (user preferences), human review and curation (e.g., HIL), automated content testing, system performance metrics, defining/creating one or more personas, defining UI and/or UX components and user behavior KPIs and collecting user feedback regarding reports and content, analyzing user feedback, incorporating feedback into GPT, and/or iteration. Quality control, in some aspects, may be used to ensure that the content produced by GPT is accurate, relevant, and well-written and may be part of the admin UI/UX. In some aspects, standardizing criteria and dimensions for content evaluation and feedback (user preferences) may be associated with defining a checklist for the GPT output/CMS output against a set of predetermined criteria, such as relevance, actionable recommendation, readability, or other relevant factors. The UI for human review and curation (HIL), in some aspects, may be an important step to evaluate and monitor GPT output/CMS output content. In some aspects, the HIL may involve having a person (e.g., an SME or a user) review the output and give feedback on its accuracy and quality. Automated content testing UIs, in some aspects, may be a tool for evaluating and monitoring GPT output/CMS output content involving using a set of predetermined tests to measure the accuracy and quality of the output. In some aspects, the set of tests may be used to measure the accuracy of the output against a given dataset, or to compare the output against a set of predetermined standards. The validation outcome could be used for fine-tuning the GPT output/CMS output.
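As a hedged sketch of the standardized evaluation checklist and automated content testing described above, GPT output/CMS output could be scored against predetermined criteria such as relevance, actionable recommendation, and readability; the heuristics below are illustrative assumptions and not the claimed quality-control implementation.

```python
# Minimal sketch of a standardized evaluation checklist: GPT/CMS output is
# scored against predetermined criteria (relevance, actionable recommendation,
# readability). The specific heuristics are illustrative assumptions only.
def evaluate_content(text: str, topic_keywords: list[str]) -> dict:
    words = text.split()
    checks = {
        # relevance: the output mentions at least one expected topic keyword
        "relevance": any(k.lower() in text.lower() for k in topic_keywords),
        # actionable: the output contains a recommendation-style verb
        "actionable": any(v in text.lower()
                          for v in ("inspect", "replace", "schedule", "reduce")),
        # readability: crude proxy based on average sentence length
        "readability": len(words) / max(text.count(".") or 1, 1) <= 30,
    }
    checks["passed"] = all(checks.values())
    return checks


report = ("High vibration indicates bearing wear. "
          "Schedule an inspection of Compressor H within 48 hours.")
print(evaluate_content(report, ["vibration", "bearing"]))
```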
[0103] In some aspects, a UI for system performance metrics may be associated with one or more performance metrics used to evaluate and monitor the GPT output and/or the CMS output content and corresponding platform system. The performance metrics, in some aspects, may measure the accuracy and quality of the output, as well as the speed and scalability of the system. Performance metrics may, in some aspects, be used to track the performance of the system over time and identify areas for improvement (e.g., in association with a model management system and/or a set of functions associated with model management). A set of UIs for defining and/or creating one or more personas, in some aspects, may be used to identify a target audience for the GPT output and/or the CMS output content and to create a persona that reflects their values, interests, and needs (e.g., the type of data that provides useful information to the particular target audience). For example, in some aspects, the set of UIs for defining and/or creating one or more personas may be used to define UI and/or UX components and user behavior KPIs and collect user feedback regarding reports and content. The set of UIs for analyzing user feedback, in some aspects, may be used to identify patterns and understand what users think about the content and decision-making process. In some aspects, the UI for incorporating feedback into GPT and for iteration may provide a user control of the incorporation of the user feedback into the GPT output content, such as by controlling the number of iterations of the user feedback to use.
[0104] A user interface and user feedback UI may be provided, in some aspects, in association with a decision-making process. The user interface and user feedback UI, in some aspects, may include a user UI that may reflect the user feedback and allow users to interact with the GPT output/CMS output content. In some aspects, the system may leverage the UI/UX in conjunction with user surveys, interviews, and/or other methods to collect user feedback about the content and their decision-making process and preferences.
[0105] FIG. 18 is a diagram 1800 illustrating elements of an orchestration system in accordance with some aspects of the disclosure. The orchestration system is described below using terminology associated with the Node-RED tool, but is to be understood to be representative and not limiting. For system workflow, Node-RED is a tool for orchestration and automation of different services and applications and may integrate with Content AI by using the Content AI API. The Node-RED tool (or other similar orchestration approaches and/or systems) uses a HyperText Transfer Protocol (HTTP or http) request node to call the Content AI API and get a response back. From there, additional nodes may be available in Node-RED to process the response, create flows, and connect other nodes or services to the API. For example, Node-RED may be used to call the Content AI API to analyze a document and then use other nodes to process, ETL,
execute corresponding algorithms, display to UI/UX, and store the results in databases/downstream systems or to send an alert if certain conditions are met. Node-RED, in some aspects, may be used to integrate with other different services and applications. With the Content AI API, Node-RED may be used to create powerful flows and automate content analysis and AI processes.
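By way of illustration only, the same orchestration step could be expressed outside Node-RED as a plain HTTP call to a hypothetical Content AI API endpoint; the URL, payload fields, and anomaly-score threshold below are assumptions, and in Node-RED the equivalent would be an “http request” node wired to function and alert nodes.

```python
# Hedged sketch of the orchestration step: call a (hypothetical) Content AI API
# endpoint over HTTP, inspect the response, and raise an alert when a condition
# is met. The URL, payload fields, and threshold are assumptions for this
# example; they correspond conceptually to the Node-RED "http request" flow.
import requests

CONTENT_AI_URL = "https://content-ai.example.com/api/v1/analyze"  # placeholder


def analyze_document(doc_text: str, api_key: str) -> dict:
    resp = requests.post(
        CONTENT_AI_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"document": doc_text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def maybe_alert(result: dict, threshold: float = 0.8) -> None:
    # Downstream nodes would persist the result and notify users when the
    # (assumed) anomaly score exceeds a configured threshold.
    if result.get("anomaly_score", 0.0) > threshold:
        print("ALERT: anomaly score", result["anomaly_score"])
```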
[0106] In some aspects, Node-RED may be used for system tasks orchestration, e.g., to create flows that integrate with different APIs to perform tasks automatically. Node-RED, in some aspects, may be used to set up the credentials for the scope associated with different APIs, e.g., by creating an account with the API provider and generating API keys or access tokens. Based on the credentials, the API nodes may, in some aspects, be installed for the APIs to use. In some aspects, the large library of pre-built nodes for different APIs in Node-RED may be used and/or leveraged to simplify the designing and implementation of the orchestration tasks. Once the API nodes are installed, the system (or a user of the system) may, in some aspects, build one or more workflows to orchestrate system tasks. This involves “wiring” together different nodes to create a sequence of actions that perform the task.
[0107] FIG. 19 is a diagram 1900 illustrating orchestration workflow compiling in accordance with some aspects of the disclosure. In some aspects, the elements of the diagram 1800 or the diagram 1900 may be associated with, or used to perform and/or orchestrate, (1) a CMS of documents, text, and images, (2) a model management system which contains feature management, model management, and performance management as a core function, (3) GPT services, (4) digital advisor UI/UX and HIL services, and/or (5) SME content review and update services. In some aspects, the orchestration of the CMS of documents, text, and images may be associated with setting up a Node-RED flow that handles the CMS requests, e.g., by creating a Node-RED flow that contains a trigger node, a function node, and a node to interact with the CMS. The trigger node, in some aspects, may be configured to listen for requests to the CMS, e.g., by setting the type of request (e.g., POST, GET, DELETE), the URL of the CMS, and any other parameters as required by the CMS. In some aspects, the function node created to process the request may be used to process the data from the request (e.g., extracting the insights of a document, text, or image from the request) and then route it to the CMS node. The CMS node, in some aspects, may be configured to process the request, e.g., to interact with the CMS and perform the desired operation, e.g., creating a new document, updating an existing document, or deleting (marked for delete) a document. The Node-RED flow including the above nodes and/or functions may then be deployed and tested/validated by sending requests to the Node-RED flow and testing the response from the CMS. Once the Node-RED flow is set up and tested, it can be used to orchestrate a CMS of documents, text, and images.
[0108] Similarly, the orchestration of the model management system may be associated with using Node-RED to create a flow that receives requests from users (or workflow) and passes them to the appropriate model management system. The functions associated with the flow, in some aspects, may include creating, updating, and deleting models, as well as retrieving performance and feature data. Accordingly, Node-RED may be used, in some aspects, to create a flow that receives requests for performance and feature data from the model management system and forwards them to the appropriate feature and performance management systems. Node-RED may further be used to create a flow that receives performance and feature data from the feature and performance management systems and forwards it to the model management system. In some aspects, Node-RED may also be used to create a flow that receives model updates from the model management system and forwards them to the feature and performance management systems. Node-RED, in some aspects, may additionally be used to create a flow that receives feature and performance updates from the feature and performance management systems and forwards them to the model management system.
[0109] Orchestrating GPT services, in some aspects, may include creating a Node-RED flow that calls the GPT service. Creating the Node-RED flow, in some aspects, may include adding an “HTTP In” node to the beginning of the flow to receive the input from the GPT service, creating a function node to process the input from the GPT service, and creating an “HTTP request” node to call the GPT service. In some aspects, the “HTTP request” node may be configured to send the input from the function node. An “HTTP response” node, in some aspects, may be added to the end of the flow to send the response back to the GPT service. Once the Node-RED flow that calls the GPT service is created, it may be deployed to orchestrate the GPT services with other workflows. For example, Node-RED may further be used to create a flow that receives requests from users (or workflows) and passes them to the appropriate GPT services.
[0110] Node-RED may be used, in some aspects, to orchestrate digital advisor UI/UX and HIL services by providing a graphical user interface (GUI) to create flows to configure, automate, and monitor the services. In some aspects, this may allow administrators and users to create their specific (or preferred) reporting content that can be triggered by events, report on KPIs, or obtain data from a variety of sources. The Node-RED library of nodes may be leveraged to connect to external services, such as digital advisor UI/UX and HIL services. Each node may be configured to access and interact with the services using a specific set of instructions. Accordingly, administrators and users can customize the nodes to create more complex flows that can be triggered by events or data from other sources. Once the flows are configured, they can be deployed to the cloud or an edge device to automate the services. Node-RED, in some aspects, may provide a dashboard to monitor the flows and make sure they are running as expected. This makes it easy to track the performance of the services and make necessary changes when needed.
[0111] Once the SME content review/update requirements have been identified, the scope would identify the specific tasks that need to be performed, the resources that will be required, and the timeline for completion for SME content review and update services. In some aspects, the SME content review service may be designed to allow SMEs to review content and provide feedback. This can be accomplished by integrating a review tool (e.g., evat for image) into the Node-RED flow, which allows SMEs to easily access and review the content. In some aspects, the SME content update service may be designed to allow SMEs to make updates to the content. This can be accomplished by integrating a content management system into the Node-RED flow, which allows SMEs to easily access and update the content and flow to the CMS for update. Once the services have been developed, in some aspects, they may be tested to ensure that they are working as intended. This includes testing the Node-RED flows, the SME content review service, and the SME content update service. In some aspects, once these services have been tested, they can be deployed to the production environment. This includes configuring the services to work with the client's infrastructure and ensuring that they are accessible to the appropriate users. Accordingly, in some aspects, a Node-RED workflow(s) for SME content review and/or SME content update services requires a combination of technical expertise, SME, HIL, and an understanding of the client's specific requirements to develop effective services that meet the client's needs.
[0112] FIG. 20 is a flow diagram 2000 illustrating a method in accordance with some aspects of the disclosure. In some aspects, the method is performed by a system or device (e.g., the system illustrated in diagram 100 or computing device 2205) that performs various analyses based on content in a CMS, models associated with a model management system, GPT models, and user preferences. In order to perform the method in accordance with some aspects of the disclosure, the system may train one or more MT NLP models (e.g., a GPT model and/or a BERT model) to produce accurate NL content using a first set of feedback associated with an accuracy of the natural language content. The NL queries, in some aspects, may be based on at least one of the received information or the processed information (e.g., information received and/or processed by a CMS or the functions associated with content management 110). The feedback may be received from an SME as described in relation to the operations performed at 912, 1008, 1612, and/or 1706/1708 of FIGs. 9, 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, and/or the set of reports/feedback 1702 of FIGs. 1, 13, and/or 17. The feedback, in some aspects, may be provided for one or more of an initial training (before model deployment) or a run-time retraining (e.g., based on content output by the trained and/or deployed one or more MT NLP models) as described in relation to FIGs. 9, 10, 13, 16, and 17.
[0113] The system may train one or more MT NLP models (e.g., a GPT model and/or a BERT model) to produce, for at least a first user, useful NL content using a second set of feedback associated with a usefulness, to the first user, of the NL content. The NL queries, in some aspects, may be based on at least one of the received information or the processed information (e.g., information received and/or processed by a CMS or the functions associated with content management 110) as well as preference data associated with one or more of the SME and/or HIL module 133, the user feedback module 132, and/or the user preference model development module 134. The feedback may be received from a first user as described in relation to the operations performed at 1014, 1614, and/or 1706/1708 of FIGs. 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, the user preference model 1616, and/or the set of reports/feedback 1702 of FIGs. 1, 13, 16, and/or 17. The feedback, in some aspects, may be provided for one or more of an initial training (before model deployment) or a run-time retraining (e.g., based on content output by the trained and/or deployed one or more MT NLP models) as described in relation to FIGs. 9, 10, 13, 16, and 17.
[0114] At 2008, the system may receive information from one or more input sources associated with at least one of an industrial process or an industrial product. The data, in some aspects, may be received from a plurality of sources. For example, 2008 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. In some aspects, the received information may include two or more of image data, text, vibration data, audio data, structured data, or sensor data. In some aspects, the one or more information sources may be associated with a set of user-defined parameters received from a user before receiving the information. The set of user-defined parameters, in some aspects, may include one or more of a desired outcome, a set of KPIs, a set of affected users, or a taxonomy associated with a desired outcome. In some aspects, a plurality of sets of user-defined parameters may be received in association with a corresponding plurality of use cases (e.g., business use cases). For example, the received data (and the set of user-defined parameters) may be any of the data described in association with, e.g., the functions associated with the content management 110, the inputs 401, the data management 530, and/or the set of physical sensor data 608/708 of FIGs. 1-7.
[0115] At 2010, the system may process the received information using one or more MT models. The models, in some aspects, may be associated with a model management system. For example, 2010 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. In some aspects, processing the received information at 2010 may include processing the received information based, at least in part, on a set of user-defined parameters. The set of user-defined parameters, in some aspects, may include one or more of a desired outcome, a set of KPIs, a set of affected users, or a taxonomy associated with a desired outcome. In some aspects, a plurality of sets of user-defined parameters may be received in association with at least one of an HDC, an RDF, or a YAML format or file. The user-defined parameters and/or the at least one of the HDC, RDF, or YAML format may be associated with a set of corresponding use cases (e.g., business use cases). For example, the processing (and the set of user-defined parameters) may be associated with, e.g., the functions associated with the content management 110 and/or the functions associated with the model management 120, the inputs 401, the data management 530, the processing associated with generating data at 612/712, and/or the set of physical sensor data 608/708 of FIGs. 1-7. The one or more MT models, in some aspects, may be associated with different analytics models configured based on different use cases and/or user personas, e.g., may be configured to process the received information to extract data relevant to a related and/or associated use case.
[0116] At 2012, the system may generate, for the first user, a first NL query regarding at least one of the received information or the processed information using the one or more MT NLP models. The one or more MT NLP models, in some aspects, may be associated with a model management system. For example, 2012 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. The one or more MT NLP models, in some aspects, may include one or more of a GPT model or a BERT model. In some aspects, the system may generate, at 2012, the first NL query using the GPT model. For example, generating the first NL query may be associated with, e.g., the functions associated with the GPT module 140 and/or the functions associated with the model management 120, the “GPT” inferencing operation 1374 or GPT 1420 to produce GPT output content 1428 and/or BERT output content 1430 using one or more trained “GPT” models of FIGs. 1, 13, and 14. The one or more MT NLP models, in some aspects, may be associated with different analytics models configured based on different use cases, e.g., may be configured to process inputs to extract semantic content and generate a “query” based on data relevant to a related and/or associated use case or for a particular user or user persona. In some aspects, the one or more MT NLP models used to generate the first NL query may be managed by a model management system or set of functions associated with model management.
[0117] At 2014, the system may generate, for the first user, a first NL recommendation based on the first NL query and at least one of the received information or the processed information using at least one corresponding MT NLP model. The at least one MT NLP model, in some aspects, may be associated with a model management system. For example, 2014 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. The one or more MT NLP models, in some aspects, may include one or more of a GPT model or a BERT model. In some aspects, the system may generate, at 2014, the first NL recommendation using the BERT model. For example, generating the first NL recommendation may be associated with, e.g., the functions associated with the GPT module 140 and/or the functions associated with the model management 120, the “GPT” inferencing operation 1374 using one or more trained “GPT” models of FIGs. 1 and 13. The one or more MT NLP models, in some aspects, may be associated with different analytics models configured based on different use cases, e.g., may be configured to process inputs to extract semantic content and generate a “recommendation” based on data relevant to a related and/or associated use case or for a particular user or user persona. For example, an NL recommendation for a user associated with a business role may be different from an NL recommendation for a user associated with an engineering and/or facility management role. In some aspects, the one or more MT NLP models used to generate the first NL recommendation may be managed by a model management system or set of functions associated with model management.
[0118] At 2016, the system may output, for the first user, a first indication of at least the first NL recommendation via a user interface. For example, 2016 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination, and/or by the output device/interface 2240 of FIG. 22. The indication of at least the first NL recommendation, in some aspects, may be presented as a notification or warning for a user to perform a maintenance operation or to address a time-sensitive issue identified through the processing associated with the first NL recommendation. In some aspects, the indication may further include an indication of the first NL query associated with the first NL recommendation to provide a context for the first NL recommendation.
[0119] In some aspects, the system may receive feedback regarding the accuracy and/or the utility of one or more NL queries and/or NL recommendations. The feedback may be received from a first user as described in relation to the operations performed at 1014, 1614, and/or 1706/1708 of FIGs. 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, the user preference model 1616, and/or the set of reports/feedback 1702 of FIGs. 1, 13, 16, and/or 17. The feedback, in some aspects, may be provided for a run-time retraining as described in relation to FIGs. 9, 10, 13, 16, and 17.
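Taken together, the operations 2008-2016 of FIG. 20 may be sketched, under stated assumptions, as a simple pipeline in which placeholder callables stand in for the managed MT and MT NLP models; only the sequence of steps follows the figure, and all names below are hypothetical.

```python
# Compact sketch of the FIG. 20 flow under stated assumptions: the model
# objects and their callables are placeholders standing in for the managed MT
# and MT NLP models; only the sequence of steps follows the figure.
from typing import Callable, Iterable


def run_recommendation_flow(
    sources: Iterable[Callable[[], dict]],      # 2008: input sources (sensors, CMS, ...)
    analytics_model: Callable[[list], dict],    # 2010: MT model from model management
    query_model: Callable[[dict], str],         # 2012: e.g., a GPT-style model
    answer_model: Callable[[str, dict], str],   # 2014: e.g., a BERT-style model
    notify_user: Callable[[str, str], None],    # 2016: user-interface output
) -> None:
    received = [read() for read in sources]                 # receive information
    processed = analytics_model(received)                   # process with MT models
    nl_query = query_model(processed)                       # generate NL query
    nl_recommendation = answer_model(nl_query, processed)   # generate NL recommendation
    notify_user(nl_query, nl_recommendation)                # output indication


# Example wiring with trivial stand-ins for each stage.
run_recommendation_flow(
    sources=[lambda: {"vibration": 75}],
    analytics_model=lambda data: {"anomaly_score": 0.9, "asset": "Compressor H"},
    query_model=lambda p: f"What failure modes explain anomaly {p['anomaly_score']} "
                          f"on {p['asset']}?",
    answer_model=lambda q, p: "Inspect bearings; elevated vibration on Compressor H.",
    notify_user=lambda q, r: print(q, "->", r),
)
```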
[0120] FIG. 21 is a flow diagram 2100 illustrating a method in accordance with some aspects of the disclosure. In some aspects, the method is performed by a system or device (e.g., the system illustrated in diagram 100 or computing device 2205) that performs various analyses based on content in a CMS, models associated with a model management system, GPT models, and user preferences. At 2102, the system may train one or more MT NLP models (e.g., a GPT model and/or a BERT model) to produce accurate NL content using a first set of feedback associated with an accuracy of the natural language content. For example, 2102 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. The NL queries, in some aspects, may be based on at least one of the received information or the processed information (e.g., information received and/or processed by a CMS or the functions associated with content management 110). The feedback may be received from an SME as described in relation to the operations performed at 912, 1008, 1612, and/or 1706/1708 of FIGs. 9, 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, and/or the set of reports/feedback 1702 of FIGs. 1, 13, and/or 17. The feedback, in some aspects, may be provided for one or more of an initial training (before model deployment) or a run-time retraining (e.g., based on content output by the trained and/or deployed one or more MT NLP models) as described in relation to FIGs. 9, 10, 13, 16, and 17.
[0121] At 2104, the system may train one or more MT NLP models (e.g., a GPT model and/or a BERT model) to produce, for a first user, useful NL content using a second set of feedback associated with a usefulness, to the first user, of the NL content. For example, 2104 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. The NL queries, in some aspects, may be based on at least one of the received information or the processed information (e.g., information received and/or processed by a CMS or the functions associated with content management 110) as well as preference data associated with one or more of the SME and/or HIL module 133, the user feedback module 132, and/or the user preference model development module 134. The feedback may be received from a first user as described in relation to the operations performed at 1014, 1614, and/or 1706/1708 of FIGs. 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, the user preference model 1616, and/or the set of reports/feedback 1702 of FIGs. 1, 13, 16, and/or 17. The feedback, in some aspects, may be provided for one or more of an initial training (before model deployment) or a run-time retraining (e.g., based on content output by the trained and/or deployed one or more MT NLP models) as described in relation to FIGs. 9, 10, 13, 16, and 17.
[0122] At 2106, the system may train one or more MT NLP models (e.g., a GPT model and/or a BERT model) to produce, for at least one additional (e.g., a second) user, useful NL content using a third set of feedback associated with a usefulness, to the at least one additional (e.g., a second) user, of the NL content. For example, 2106 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. The NL queries, in some aspects, may be based on at least one of the received information or the processed information (e.g., information received and/or processed by a CMS or the functions associated with content management 110) as well as preference data associated with one or more of the SME and/or HIL module 133, the user feedback module 132, and/or the user preference model development module 134. The feedback may be received from the second user as described in relation to the operations performed at 1014, 1614, and/or 1706/1708 of FIGs. 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, the user preference model 1616, and/or the set of reports/feedback 1702 of FIGs. 1, 13, 16, and/or 17. The feedback, in some aspects, may be provided for one or more of an initial training (before model deployment) or a run-time retraining (e.g., based on content output by the trained and/or deployed one or more MT NLP models) as described in relation to FIGs. 9, 10, 13, 16, and 17.
[0123] At 2108, the system may receive information from one or more input sources associated with at least one of an industrial process or an industrial product. The data, in some aspects, may be received from a plurality of sources. For example, 2108 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. In some aspects, the received information may include two or more of image data, text, vibration data, audio data, structured data, or sensor data. In some aspects, the one or more information sources may be associated with a set of user-defined parameters received from a user before receiving the information. The set of user-defined parameters, in some aspects, may include one or more of a desired outcome, a set of KPIs, a set of affected users, or a taxonomy associated with a desired outcome. In some aspects, a plurality of sets of user-defined parameters may be received in association with a corresponding plurality of use cases (e.g., business use cases). For example, the received data (and the set of user-defined parameters) may be any of the data described in association with, e.g., the functions associated with the content management 110, the inputs 401, the data management 530, and/or the set of physical sensor data 608/708 of FIGs. 1-7.
[0124] At 2110, the system may process the received information using one or more MT models. The models, in some aspects, may be associated with a model management system. For example, 2110 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. In some aspects, processing the received information at 2110 may include processing the received information based, at least in part, on a set of user-defined parameters. The set of user-defined parameters, in some aspects, may include one or more of a desired outcome, a set of KPIs, a set of affected users, or a taxonomy associated with a desired outcome. In some aspects, a plurality of sets of user-defined parameters may be received in association with at least one of an HDC, an RDF, or a YAML format or file. The user-defined parameters and/or the at least one of the HDC, RDF, or YAML format may be associated with a set of corresponding use cases (e.g., business use cases). For example, the processing (and the set of user-defined parameters) may be associated with, e.g., the functions associated with the content management 110 and/or the functions associated with the model management 120, the inputs 401, the data management 530, the processing associated with generating data at 612/712, and/or the set of physical sensor data 608/708 of FIGs. 1-7. The one or more MT models, in some aspects, may be associated with different analytics models configured based on different use cases and/or user personas, e.g., may be configured to process the received information to extract data relevant to a related and/or associated use case.
[0125] At 2112, the system may generate, for the current user (or for a current use case), an NL query regarding at least one of the received information or the processed information using the one or more MT NLP models. The one or more MT NLP models, in some aspects, may be associated with a model management system. For example, 2112 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. The one or more MT NLP models, in some aspects, may include one or more of a GPT model or a BERT model. In some aspects, the system may generate, at 2112, the NL query using the GPT model. For example, generating the NL query may be associated with, e.g., the functions associated with the GPT module 140 and/or the functions associated with the model management 120, the “GPT” inferencing operation 1374 or GPT 1420 to produce GPT output content 1428 and/or BERT output content 1430 using one or more trained “GPT” models of FIGs. 1, 13, and 14. The one or more MT NLP models, in some aspects, may be associated with different analytics models configured based on different use cases, e.g., may be configured to process inputs to extract semantic content and generate a “query” based on data relevant to a related and/or associated use case or for a particular user or user persona. In some aspects, the one or more MT NLP models used to generate the first NL query may be managed by a model management system or set of functions associated with model management.
[0126] At 2114, the system may generate, for the current user (or for a current use case), an NL recommendation based on the first NL query and at least one of the received information or the processed information using at least one corresponding MT NLP model. The at least one MT NLP model, in some aspects, may be associated with a model management system. For example, 2114 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. The one or more MT NLP models, in some aspects, may include one or more of a GPT model or a BERT model. In some aspects, the system may generate, at 2114, the NL recommendation using the BERT model. For example, generating the NL recommendation may be associated with, e.g., the functions associated with the GPT module 140 and/or the functions associated with the model management 120, the “GPT” inferencing operation 1374 using one or more trained “GPT” models of FIGs. 1 and 13. The one or more MT NLP models, in some aspects, may be associated with different analytics models configured based on different use cases, e.g., may be configured to process inputs to extract semantic content and generate a “recommendation” based on data relevant to a related and/or associated use case or for a particular user or user persona. For example, an NL recommendation for a user associated with a business role may be different from an NL recommendation for a user associated with an engineering and/or facility management role. In some aspects, the one or more MT NLP models used to generate the first NL recommendation may be managed by a model management system or set of functions associated with model management.
[0127] At 2116, the system may output, for the current user (or for a current use case), an indication of at least the NL recommendation via a user interface. For example, 2116 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination, and/or by the output device/interface 2240 of FIG. 22. The indication of at least the NL recommendation, in some aspects, may be presented as a notification or warning for a user to perform a maintenance operation or to address a time-sensitive issue identified through the processing associated with the NL recommendation. In some aspects, the indication may further include an indication of the NL query associated with the NL recommendation to provide a context for the NL recommendation.
[0128] At 2118, the system may determine if there are additional users (or models and/or use cases) for which to produce an NL query and/or NL response. For example, 2118 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination. The determination may be based on a period (e.g., a period defined for producing a daily, weekly, monthly, or other time-period based report) associated with each of a plurality of models or based on triggering events (e.g., a detection of an anomaly score associated with a vibration of an industrial component that is above a threshold value for a threshold amount of time). If the system determines, at 2118, that there are additional users (or models and/or use cases) for which to produce an NL query and/or NL response, the system may return to 2112-2114 to generate the corresponding NL queries and NL recommendations, and to output corresponding indications. In some aspects, 2112-2116 may be performed in parallel at one or multiple locations (e.g., datacenters, terminals accessing a same centralized CMS, etc.) and the determination at 2118 may be a determination whether a triggering condition for generating the NL query and the NL recommendation has been met for a same user or use case.
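As a hedged illustration of one triggering event named above, the check below fires when an anomaly score stays above a threshold value for a threshold amount of time; the window length and threshold are assumptions for this example only.

```python
# Minimal sketch of one triggering condition for the determination at 2118: an
# anomaly score that stays above a threshold value for a threshold amount of
# time. Window size and thresholds are illustrative assumptions.
from collections import deque


def make_trigger(score_threshold: float, required_minutes: int):
    window = deque(maxlen=required_minutes)

    def update(minute_score: float) -> bool:
        """Return True when every score in the full window exceeds the threshold."""
        window.append(minute_score)
        return (len(window) == required_minutes
                and all(s > score_threshold for s in window))

    return update


trigger = make_trigger(score_threshold=0.8, required_minutes=3)
for score in (0.7, 0.85, 0.9, 0.95):
    if trigger(score):
        print("Generate NL query/recommendation for this use case")
```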
[0129] At 2120, the system may receive feedback regarding the accuracy and/or the utility of one or more NL queries and/or NL recommendations. For example, 2120 may be performed by the one or more processors 2210 of the computing device 2205, individually or in combination, and/or the IO interface 2225, or the input/user interface 2235. The feedback may be received from a first user as described in relation to the operations performed at 1014, 1614, and/or 1706/1708 of FIGs. 10, 16, and/or 17, the SME and/or HIL module 133, HIL 1334, the user preference model 1616, and/or the set of reports/feedback 1702 of FIGs. 1, 13, 16, and/or 17. The feedback, in some aspects, may be provided for a run-time retraining as described in relation to FIGs. 9, 10, 13, 16, and 17.
[0130] As discussed above, industrial verticals may benefit from using Content AI (content management, GPT, digital advisor, and model management). For example, an automated content creation function in accordance with some aspects of the disclosure may be used to generate high-quality content automatically, reducing the need for manual content creation. This can save time and resources while still producing content that is relevant and engaging. In some aspects, an automated technical documentation function in accordance with some aspects of the disclosure may be used to generate technical documentation automatically based on user input or existing data, reducing the time and effort required to produce such documents and improving their accuracy. A multilingual content creation function in accordance with some aspects of the disclosure may be used to generate content in multiple languages, allowing businesses to reach a broader audience and expand their global reach. In some aspects, the system may be used to improve content recommendations based on user behavior and preferences, leading to higher engagement and increased customer satisfaction. The system, in some aspects, may be used to develop advanced knowledge management systems, such as intelligent search engines or automated content tagging. This can improve efficiency and productivity in industrial settings where employees need to access and utilize large amounts of technical information. In some aspects, the system may be used to create personalized content based on user behavior and preferences, providing a more personalized experience for users and increased engagement and/or improved customer satisfaction. The system, in some aspects, may be used to develop chatbots or virtual assistants that can understand and respond to customer inquiries and issues in natural language, improving customer service experiences.
[0131] FIG. 22 illustrates an example computing environment with an example computer device suitable for use in some example implementations. Computer device 2205 in computing environment 2200 can include one or more processing units, cores, or processors 2210, memory 2215 (e.g., RAM, ROM, and/or the like), internal storage 2220 (e.g., magnetic, optical, solid-state storage, and/or organic), and/or IO interface 2225, any of which can be coupled on a communication mechanism or bus 2230 for communicating information or embedded in the computer device 2205. IO interface 2225 is also configured to receive images from cameras or provide images to projectors or displays, depending on the desired implementation.
[0132] Computer device 2205 can be communicatively coupled to input/user interface 2235 and output device/interface 2240. Either one or both of the input/user interface 2235 and output device/interface 2240 can be a wired or wireless interface and can be detachable. Input/user interface 2235 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, accelerometer, optical reader, and/or the like). Output device/interface 2240 may include a display, television, monitor, printer, speaker, braille, or the like. In some example implementations, input/user interface 2235 and output device/interface 2240 can be embedded with or physically coupled to the computer device 2205. In other example implementations, other computer devices may function as or provide the functions of input/user interface 2235 and output device/interface 2240 for a computer device 2205.
[0133] Examples of computer device 2205 may include, but are not limited to, highly mobile devices (e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like), mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like).
[0134] Computer device 2205 can be communicatively coupled (e.g., via IO interface 2225) to external storage 2245 and network 2250 for communicating with any number of networked components, devices, and systems, including one or more computer devices of the same or different configuration. Computer device 2205 or any connected computer device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
[0135] IO interface 2225 can include, but is not limited to, wired and/or wireless interfaces using any communication or IO protocols or standards (e.g., Ethernet, 802.11x, Universal System Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 2200. Network 2250 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
[0136] Computer device 2205 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media. Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like. Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid-state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory. [0137] Computer device 2205 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments. Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media. The executable instructions can originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic,
Python, Perl, JavaScript, and others).
[0138] Processor(s) 2210 can execute under any operating system (OS) (not shown), in a native or virtual environment. One or more applications can be deployed that include logic unit 2260, application programming interface (API) unit 2265, input unit 2270, output unit 2275, and inter-unit communication mechanism 2295 for the different units to communicate with each other, with the OS, and with other applications (not shown). The described units and elements can be varied in design, function, configuration, or implementation and are not limited to the descriptions provided. Processor(s) 2210 can be in the form of hardware processors such as central processing units (CPUs) or in a combination of hardware and software units.
[0139] In some example implementations, when information or an execution instruction is received by API unit 2265, it may be communicated to one or more other units (e.g., logic unit 2260, input unit 2270, output unit 2275). In some instances, logic unit 2260 may be configured to control the information flow among the units and direct the services provided by API unit 2265, the input unit 2270, and the output unit 2275 in some example implementations described above. For example, the flow of one or more processes or implementations may be controlled by logic unit 2260 alone or in conjunction with API unit 2265. The input unit 2270 may be configured to obtain input for the calculations described in the example implementations, and the output unit 2275 may be configured to provide an output based on the calculations described in example implementations.
[0140] Processor(s) 2210 can be configured to receive information from one or more input sources associated with at least one of the industrial process or the industrial product. The processor(s) 2210 can be configured to process the received information using one or more MT models associated with a model management system. The processor(s) 2210 can be configured to generate, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT NLP models. The processor(s) 2210 can be configured to generate, for the first user, a first NL recommendation based on the first NL query and at least one of the received information or the processed information. The processor(s) 2210 can be configured to output, for the first user, a first indication of at least the first natural language recommendation via the user interface. The processor(s) 2210 can be configured to train the one or more MT NLP models to produce accurate natural language queries based on at least one of the received information or the processed information using a first set of feedback associated with an accuracy of the natural language queries produced by the one or more MT NLP models. The processor(s) 2210 can also be configured to train the one or more MT NLP models to produce, for the first user, useful natural language queries based on at least one of the received information or the processed information using a second set of feedback associated with a usefulness, to the first user, of the natural language queries produced by the one or more MT NLP models for the first user. The processor(s) 2210 can also be configured to train the one or more MT NLP models to produce, for a second user, useful natural language queries based on at least one of the received information or the processed information using a third set of feedback associated with a usefulness, to the second user, of the natural language queries produced by the one or more MT NLP models for the second user. The processor(s) 2210 can also be configured to generate, for a second user, a second natural language query regarding at least one of the received information or the processed information using one or more MT NLP models. The processor(s) 2210 can also be configured to generate, for the second user, a second natural language recommendation based on the second natural language query and at least one of the received information or the processed information. The processor(s) 2210 can also be configured to output, for the second user, a second indication of at least the second natural language recommendation via the user interface.
[0141] In some aspects, the techniques described herein relate to a system for generating recommendations for a first user regarding at least one of an industrial process or an industrial product, the system including: at least one memory; a user interface; and at least one processor coupled to the at least one memory and, based at least in part on stored information that is stored in the at least one memory, the at least one processor, individually or in any combination, is configured to: receive information from one or more input sources associated with at least one of the industrial process or the industrial product; process the received information using one or more machine-trained (MT) models associated with a model management system; generate, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT natural language processing (NLP) models; generate, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information; and output, for the first user, a first indication of at least the first natural language recommendation via the user interface.
[0142] In some aspects, the techniques described herein relate to a system, wherein the at least one processor, individually or in any combination, is further configured to: generate, for a second user, a second natural language query regarding at least one of the received information or the processed information using one or more MT NLP models; generate, for the second user, a second natural language recommendation based on the second natural language query and at least one of the received information or the processed information; and output, for the second user, a second indication of at least the second natural language recommendation via the user interface, wherein a first role associated with the first user is different than a second role associated with the second user and the second natural language query and the second natural language recommendation are different than the first natural language query and the first natural language recommendation based on the different second and first roles, respectively.
[0143] In some aspects, the techniques described herein relate to a system, wherein the at least one processor, individually or in any combination, is further configured to: train the one or more MT NLP models to produce accurate natural language queries based on at least one of the received information or the processed information using a first set of feedback associated with an accuracy of the natural language queries produced by the one or more MT NLP models; and train the one or more MT NLP models to produce, for the first user, useful natural language queries based on at least one of the received information or the processed information using a second set of feedback associated with a usefulness, to the first user, of the natural language queries produced by the one or more MT NLP models for the first user.
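As a hedged illustration of how the two feedback sets described above (accuracy and per-user usefulness) might be organized before retraining, consider the following sketch; the dataclass fields and the filtering logic are assumptions for this example and not a description of the disclosed training procedure.

```python
# Illustrative organization of accuracy feedback and per-user usefulness feedback
# into separate training pools; not the disclosed implementation.

from dataclasses import dataclass

@dataclass
class QueryFeedback:
    query: str
    accurate: bool        # first feedback set: accuracy of the produced query
    useful_to_user: bool  # second/third feedback sets: usefulness to a specific user
    user_id: str

def build_training_sets(feedback):
    accuracy_set = [f.query for f in feedback if f.accurate]
    usefulness_sets = {}
    for f in feedback:
        if f.useful_to_user:
            usefulness_sets.setdefault(f.user_id, []).append(f.query)
    return accuracy_set, usefulness_sets

feedback = [
    QueryFeedback("Is bearing temperature trending up?", True, True, "operator"),
    QueryFeedback("What colour is the pump housing?", True, False, "operator"),
    QueryFeedback("Did throughput drop after the last batch?", True, True, "plant_manager"),
]
accuracy_pool, usefulness_pools = build_training_sets(feedback)
print(len(accuracy_pool), {user: len(qs) for user, qs in usefulness_pools.items()})
```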
[0144] In some aspects, the techniques described herein relate to a system, wherein the at least one processor, individually or in any combination, is further configured to: train the one or more MT NLP models to produce, for a second user, useful natural language queries based on at least one of the received information or the processed information using a third set of feedback associated with a usefulness, to the second user, of the natural language queries produced by the one or more MT NLP models for the second user.
[0145] In some aspects, the techniques described herein relate to a system, wherein the one or more MT NLP models includes one or more of a generative pre-trained transformer (GPT) model or a bidirectional encoder representations from transformers (BERT) model.
[0146] In some aspects, the techniques described herein relate to a system, wherein the at least one processor is configured to generate the first natural language query using the GPT model and to generate the first natural language recommendation using the BERT model.
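One possible reading of this division of labor, sketched here with the Hugging Face transformers library, is that a GPT-style model drafts the query while a BERT-style encoder scores candidate recommendations against it; the model names ("gpt2", "bert-base-uncased"), the candidate list, and the similarity-ranking step are all assumptions made for this example rather than the disclosed method.

```python
# Hypothetical sketch: GPT-style generation of the query, BERT-style embedding
# used to rank candidate recommendations. Requires `transformers` and `numpy`.

import numpy as np
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")                 # GPT-style query drafting
encoder = pipeline("feature-extraction", model="bert-base-uncased")   # BERT-style encoding

context = "Pump P-101 vibration rose 40% over the last shift."
query = generator(context + " Question:", max_new_tokens=20)[0]["generated_text"]

def embed(text):
    # Mean-pool the BERT token embeddings into a single vector.
    return np.mean(encoder(text)[0], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank hypothetical candidate recommendations by similarity to the generated query.
candidates = [
    "Inspect the pump bearings and lubrication.",
    "No action required; readings are within normal range.",
]
query_vec = embed(query)
scores = [cosine(query_vec, embed(c)) for c in candidates]
recommendation = candidates[int(np.argmax(scores))]
print(query, "->", recommendation)
```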
[0147] In some aspects, the techniques described herein relate to a system, wherein the at least one processor is configured to process the received information based, at least in part, on a set of user-defined parameters, wherein the set of user-defined parameters includes one or more of a desired outcome, a set of key performance indicators (KPIs), a set of affected users, or a taxonomy associated with the desired outcome.
[0148] In some aspects, the techniques described herein relate to a system, wherein the set of user-defined parameters is provided to the system in association with at least one of a hypothesis development canvas (HDC), a resource description framework (RDF), or a YAML ain’t markup language (YAML) format.
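As a purely illustrative example of user-defined parameters supplied in the YAML format mentioned above, the snippet below parses a small document with PyYAML; the specific field names and values are hypothetical and not prescribed by this disclosure (an HDC or RDF representation would carry similar content in a different form).

```python
# Hypothetical YAML document carrying a desired outcome, KPIs, affected users,
# and a taxonomy; field names are assumptions for illustration.

import yaml  # PyYAML

USER_PARAMS_YAML = """
desired_outcome: reduce unplanned pump downtime
kpis:
  - mean_time_between_failures
  - energy_consumption_per_unit
affected_users:
  - plant_operator
  - maintenance_engineer
taxonomy:
  asset: pump
  subsystem: bearing
"""

params = yaml.safe_load(USER_PARAMS_YAML)
print(params["desired_outcome"], params["kpis"])
```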
[0149] In some aspects, the techniques described herein relate to a system, wherein the received information includes two or more of image data, text, vibration data, audio data, structured data, or sensor data.
[0150] In some aspects, the techniques described herein relate to a system, wherein the at least one processor is configured to process the received information by associating the data with at least one MT model of the one or more MT models associated with the model management system.
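A minimal sketch, under the assumption that the model management system exposes a simple modality-to-model registry, of how received data could be associated with MT models; the registry layout and the toy models below are illustrative only.

```python
# Toy registry associating data modalities with MT models; an assumption made
# for illustration, not the disclosed model management system.

MODEL_REGISTRY = {
    "vibration": lambda x: {"anomaly_score": min(1.0, abs(x) / 10.0)},
    "text":      lambda s: {"keywords": s.lower().split()[:3]},
    "image":     lambda img: {"defect_detected": False},  # placeholder classifier
}

def associate_and_process(received):
    results = []
    for modality, payload in received:
        model = MODEL_REGISTRY.get(modality)  # associate the data with an MT model
        if model is not None:
            results.append((modality, model(payload)))
    return results

print(associate_and_process([
    ("vibration", 7.3),
    ("text", "Bearing noise increasing rapidly"),
]))
```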
[0151] In some aspects, the techniques described herein relate to a method for generating recommendations for a first user regarding at least one of an industrial process or an industrial product, including: receiving information from one or more input sources associated with at least one of the industrial process or the industrial product; processing the received information using one or more machine-trained (MT) models associated with a model management system; generating, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT natural language processing (NLP) models; generating, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information; and outputting, for the first user, a first indication of at least the first natural language recommendation via a user interface.
[0152] In some aspects, the techniques described herein relate to a method, further including: generating, for a second user, a second natural language query regarding at least one of the received information or the processed information using one or more MT NLP models; generating, for the second user, a second natural language recommendation based on the second natural language query and at least one of the received information or the processed information; and outputting, for the second user, a second indication of at least the second natural language recommendation via the user interface, wherein a first role associated with the first user is different than a second role associated with the second user and the second natural language query and the second natural language recommendation are different than the first natural language query and the first natural language recommendation based on the different second and first roles, respectively.
[0153] In some aspects, the techniques described herein relate to a method, further including: training the one or more MT NLP models to produce accurate natural language queries based on at least one of the received information or the processed information using a first set of feedback associated with an accuracy of the natural language queries produced by the one or more MT NLP models; and training the one or more MT NLP models to produce, for the first user, useful natural language queries based on at least one of the received information or the processed information using a second set of feedback associated with a usefulness, to the first user, of the natural language queries produced by the one or more MT NLP models for the first user.
[0154] In some aspects, the techniques described herein relate to a method, further including: training the one or more MT NLP models to produce, for a second user, useful natural language queries based on at least one of the received information or the processed information using a third set of feedback associated with a usefulness, to the second user, of the natural language queries produced by the one or more MT NLP models for the second user.
[0155] In some aspects, the techniques described herein relate to a method, wherein the one or more MT NLP models includes one or more of a generative pre-trained transformer (GPT) model or a bidirectional encoder representations from transformers (BERT) model.
[0156] In some aspects, the techniques described herein relate to a method, wherein generating the first natural language query is based on the GPT model and generating the first natural language recommendation is based on the BERT model.
[0157] In some aspects, the techniques described herein relate to a method, wherein processing the received information is based, at least in part, on a set of user-defined parameters, wherein the set of user-defined parameters includes one or more of a desired outcome, a set of key performance indicators (KPIs), a set of affected users, or a taxonomy associated with the desired outcome.
[0158] In some aspects, the techniques described herein relate to a method, wherein the set of user-defined parameters is provided to the system in association with at least one of a hypothesis development canvas (HDC), a resource description framework (RDF), or a YAML ain’t markup language (YAML) format.
[0159] In some aspects, the techniques described herein relate to a method, wherein the received information includes two or more of image data, text, vibration data, audio data, structured data, or sensor data.
[0160] In some aspects, the techniques described herein relate to a method, wherein processing the received information includes associating the data with at least one MT model of the one or more MT models associated with the model management system.
[0161] Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations within a computer. These algorithmic descriptions and symbolic representations are the means used by those skilled in the data processing arts to convey the essence of their innovations to others skilled in the art. An algorithm is a series of defined steps leading to a desired end state or result. In example implementations, the steps carried out require physical manipulations of tangible quantities for achieving a tangible result.
[0162] Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “displaying,” or the like, can include the actions and processes of a computer system or other information processing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system’s registers and memories into other data similarly represented as physical quantities within the computer system’s memories or registers or other information storage, transmission or display devices.
[0163] Example implementations may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs. Such computer programs may be stored in a computer readable medium, such as a computer readable storage medium or a computer readable signal medium. A computer readable storage medium may involve tangible mediums such as, but not limited to, optical disks, magnetic disks, read-only memories, random access memories, solid-state devices, and drives, or any other types of tangible or non-transitory media suitable for storing electronic information. A computer readable signal medium may include mediums such as carrier waves. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.
[0164] Various general-purpose systems may be used with programs and modules in accordance with the examples herein, or it may prove convenient to construct a more specialized apparatus to perform desired method steps. In addition, the example implementations are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the example implementations as described herein. The instructions of the programming language(s) may be executed by one or more processing devices, e.g., central processing units (CPUs), processors, or controllers.

[0165] As is known in the art, the operations described above can be performed by hardware, software, or some combination of software and hardware. Various aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application. Further, some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software. Moreover, the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways. When performed by software, the methods may be executed by a processor, such as a general-purpose computer, based on instructions stored on a computer readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.
[0166] Moreover, other implementations of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the teachings of the present application. Various aspects and/or components of the described example implementations may be used singly or in any combination. It is intended that the specification and example implementations be considered as examples only, with the true scope and spirit of the present application being indicated by the following claims.

Claims

What is claimed is:
1. A system for generating recommendations for a first user regarding at least one of an industrial process or an industrial product, the system comprising: at least one memory; a user interface; and at least one processor coupled to the at least one memory and, based at least in part on stored information that is stored in the at least one memory, the at least one processor, individually or in any combination, is configured to: receive information from one or more input sources associated with at least one of the industrial process or the industrial product; process the received information using one or more machine-trained (MT) models associated with a model management system; generate, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT natural language processing (NLP) models; generate, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information; and output, for the first user, a first indication of at least the first natural language recommendation via the user interface.
2. The system of claim 1, wherein the at least one processor, individually or in any combination, is further configured to: generate, for a second user, a second natural language query regarding at least one of the received information or the processed information using one or more MT NLP models; generate, for the second user, a second natural language recommendation based on the second natural language query and at least one of the received information or the processed information; and output, for the second user, a second indication of at least the second natural language recommendation via the user interface, wherein a first role associated with the first user is different than a second role associated with the second user and the second natural language query and the second natural language recommendation are different than the first natural language query and the first natural language recommendation based on the different second and first roles, respectively.
3. The system of claim 1, wherein the at least one processor, individually or in any combination, is further configured to: train the one or more MT NLP models to produce accurate natural language queries based on at least one of the received information or the processed information using a first set of feedback associated with an accuracy of the natural language queries produced by the one or more MT NLP models; and train the one or more MT NLP models to produce, for the first user, useful natural language queries based on at least one of the received information or the processed information using a second set of feedback associated with a usefulness, to the first user, of the natural language queries produced by the one or more MT NLP models for the first user.
4. The system of claim 3, wherein the at least one processor, individually or in any combination, is further configured to: train the one or more MT NLP models to produce, for a second user, useful natural language queries based on at least one of the received information or the processed information using a third set of feedback associated with a usefulness, to the second user, of the natural language queries produced by the one or more MT NLP models for the second user.
5. The system of claim 1, wherein the one or more MT NLP models comprises one or more of a generative pre-trained transformer (GPT) model or a bidirectional encoder representations from transformers (BERT) model.
6. The system of claim 5, wherein the at least one processor is configured to generate the first natural language query using the GPT model and to generate the first natural language recommendation using the BERT model.
7. The system of claim 1, wherein the at least one processor is configured to process the received information based, at least in part, on a set of user-defined parameters, wherein the set of user-defined parameters comprises one or more of a desired outcome, a set of key performance indicators (KPIs), a set of affected users, or a taxonomy associated with the desired outcome.
8. The system of claim 7, wherein the set of user-defined parameters is provided to the system in association with at least one of a hypothesis development canvas (HDC), a resource description framework (RDF), or a YAML ain’t markup language (YAML) format.
9. The system of claim 1, wherein the received information comprises two or more of image data, text, vibration data, audio data, structured data, or sensor data.
10. The system of claim 9, wherein the at least one processor is configured to process the received information by associating the data with at least one MT model of the one or more MT models associated with the model management system.
11. A method for generating recommendations for a first user regarding at least one of an industrial process or an industrial product, comprising: receiving information from one or more input sources associated with at least one of the industrial process or the industrial product; processing the received information using one or more machine-trained (MT) models associated with a model management system; generating, for the first user, a first natural language query regarding at least one of the received information or the processed information using one or more MT natural language processing (NLP) models; generating, for the first user, a first natural language recommendation based on the first natural language query and at least one of the received information or the processed information; and outputting, for the first user, a first indication of at least the first natural language recommendation via a user interface.
12. The method of claim 11, further comprising: generating, for a second user, a second natural language query regarding at least one of the received information or the processed information using one or more MT NLP models; generating, for the second user, a second natural language recommendation based on the second natural language query and at least one of the received information or the processed information; and outputting, for the second user, a second indication of at least the second natural language recommendation via the user interface, wherein a first role associated with the first user is different than a second role associated with the second user and the second natural language query and the second natural language recommendation are different than the first natural language query and the first natural language recommendation based on the different second and first roles, respectively.
13. The method of claim 11, further comprising: training the one or more MT NLP models to produce accurate natural language queries based on at least one of the received information or the processed information using a first set of feedback associated with an accuracy of the natural language queries produced by the one or more MT NLP models; and training the one or more MT NLP models to produce, for the first user, useful natural language queries based on at least one of the received information or the processed information using a second set of feedback associated with a usefulness, to the first user, of the natural language queries produced by the one or more MT NLP models for the first user.
14. The method of claim 13, further comprising: training the one or more MT NLP models to produce, for a second user, useful natural language queries based on at least one of the received information or the processed information using a third set of feedback associated with a usefulness, to the second user, of the natural language queries produced by the one or more MT NLP models for the second user.
15. The method of claim 11, wherein the one or more MT NLP models comprises one or more of a generative pre-trained transformer (GPT) model or a bidirectional encoder representations from transformers (BERT) model.
16. The method of claim 15, wherein generating the first natural language query is based on the GPT model and generating the first natural language recommendation is based on the BERT model.
17. The method of claim 11, wherein processing the received information is based, at least in part, on a set of user-defined parameters, wherein the set of user-defined parameters comprises one or more of a desired outcome, a set of key performance indicators (KPIs), a set of affected users, or a taxonomy associated with the desired outcome.
18. The method of claim 17, wherein the set of user-defined parameters is provided to the system in association with at least one of a hypothesis development canvas (HDC), a resource description framework (RDF), or a YAML ain’t markup language (YAML) format.
19. The method of claim 11, wherein the received information comprises two or more of image data, text, vibration data, audio data, structured data, or sensor data.
20. The method of claim 19, wherein processing the received information comprises associating the data with at least one MT model of the one or more MT models associated with the model management system.