
The Artificial Intelligence Ecosystem

We have posted a number of times about aspects of Artificial Intelligence (AI) in project management, but what exactly is AI? This post looks at the components of the AI ecosystem and briefly outlines what the various terms mean.

๐—”๐—ฟ๐˜๐—ถ๐—ณ๐—ถ๐—ฐ๐—ถ๐—ฎ๐—น ๐—œ๐—ป๐˜๐—ฒ๐—น๐—น๐—ถ๐—ด๐—ฒ๐—ป๐—ฐ๐—ฒ: a range of computer algorithms and functions that enable computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

• Automatic Programming: a technology that enables computers to generate code or write programs with minimal human intervention.

• Knowledge Representation: concerned with representing information about the real world in a form a computer can understand, so it can use this knowledge to behave intelligently.

• Expert System: a computer system that emulates the decision-making ability of a human expert. An expert system typically includes a knowledge base, an inference engine that applies logical rules to the knowledge base to deduce new information, an explanation facility, a knowledge acquisition facility, and a user interface.

• Planning and Scheduling: automated processes that devise and optimize complex strategies or action sequences in a multidimensional space, typically for execution by intelligent agents, autonomous robots, and unmanned vehicles.

• Speech Recognition: the ability of devices to respond to spoken commands. Speech recognition enables hands-free control of various devices, provides input to automatic translation, and creates print-ready dictation.

• Intelligent Robotics: robots that function as intelligent machines and can be programmed to take actions or make choices based on input from sensors.

• Visual Perception: enables machines to derive information from, and understand, images and other visual data in a way similar to humans.

• Natural Language Processing (NLP): gives computers the ability to understand text and spoken words in much the same way human beings can.

• Problem Solving & Search Strategies: involves the use of algorithms to find solutions to complex problems by exploring possible paths and evaluating the outcomes. A search algorithm takes a problem as input and returns a solution in the form of an action sequence, as the short sketch below illustrates.
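
To illustrate a search strategy, the short Python sketch below applies breadth-first search to a small, hypothetical state space (a map of rooms invented for the example); the function and room names are illustrative only, not part of any particular AI tool.

from collections import deque

def breadth_first_search(start, goal, successors):
    """Return a list of states from start to goal, or None if no path exists."""
    frontier = deque([[start]])      # partial paths waiting to be extended
    explored = {start}               # states already reached
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path              # the solution is an action sequence
        for nxt in successors(state):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical state space: which rooms connect to which
rooms = {"lobby": ["office", "lab"], "office": ["storeroom"], "lab": ["storeroom"], "storeroom": []}
print(breadth_first_search("lobby", "storeroom", lambda room: rooms[room]))
# -> ['lobby', 'office', 'storeroom']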

๐— ๐—ฎ๐—ฐ๐—ต๐—ถ๐—ป๐—ฒ ๐—Ÿ๐—ฒ๐—ฎ๐—ฟ๐—ป๐—ถ๐—ป๐—ด: is concerned with the development and study of statistical algorithms that allow a machine to be trained so it can learn from the training data and then generalize to unseen data, to perform tasks without explicit instructions. There are three basic machine learning paradigms, supervised learning, unsupervised learning, and reinforcement learning.

• Supervised Learning: algorithms learn to make decisions based on past known outcomes. The data set containing the past known outcomes and other related variables used in the learning process is known as training data.

• Unsupervised Learning: a type of machine learning that learns from data without human supervision. Unlike supervised learning, unsupervised models are given unlabelled data and allowed to discover patterns and insights without any explicit guidance or instruction.

• Reinforcement Learning (RL): an area of machine learning concerned with how an intelligent agent should take actions in a dynamic environment to maximize its cumulative reward.

• Classification: a process where AI systems are trained to categorize data into predefined classes or labels.

• K-Means Clustering: a cluster analysis technique used in data mining and machine learning to group similar objects into a chosen number (k) of clusters (a short clustering sketch follows this list).

• Principal Component Analysis (PCA): a method used to reduce the dimensionality of large data sets by transforming a large set of variables into a smaller one that still contains most of the information in the original set.

• Automatic Reasoning: attempts to provide assurance about what a system or program will do, or will never do, based on mathematical proof.

• Decision Trees: flow charts created by a computer algorithm to make decisions or numeric predictions based on the information in a data set.

• Random Forest: an algorithm that combines the output of multiple decision trees to reach a single result. It handles both classification and regression problems (a short classification sketch follows this list).

• Ensemble Methods: techniques that aim to improve the accuracy of a model's results by combining multiple models instead of using a single model. The combined models can increase the accuracy of the results significantly.

• Naive Bayes: a statistical classification technique based on Bayes' Theorem, and one of the simplest supervised learning algorithms.

• Anomaly Detection: the identification of rare events, items, or observations that are suspicious because they differ significantly from standard behaviours or patterns.
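
As an illustration of supervised learning with decision trees and ensembles, the sketch below trains a random forest on scikit-learn's bundled iris data set; it assumes scikit-learn is installed, and the labelled iris data simply stands in for "training data containing past known outcomes".

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)        # features plus known class labels (the training data)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# An ensemble of 100 decision trees, each voting on the class of an observation
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)              # learn from past known outcomes
print("accuracy on unseen data:", model.score(X_test, y_test))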
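
As an unsupervised counterpart, this sketch (again assuming scikit-learn and NumPy are installed, with data generated purely for the example) lets K-Means find two groups in unlabelled data, then uses PCA to compress the four original variables into two components.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)),    # unlabelled data containing
               rng.normal(5, 1, (50, 4))])   # two hidden groups

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # no labels supplied
print("cluster sizes:", np.bincount(labels))  # the two hidden groups are recovered

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)                   # 4 variables reduced to 2 components
print("variance retained:", pca.explained_variance_ratio_.sum())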

๐—ก๐—ฒ๐˜‚๐—ฟ๐—ฎ๐—น ๐—ก๐—ฒ๐˜๐˜„๐—ผ๐—ฟ๐—ธ๐˜€: are machine learning (ML) models designed to mimic the function and structure of the human brain and help computers gather insights and meaning from text, data, and documents by being trained to recognising patterns and sequences.

• Large Language Model (LLM): a type of neural network, based on the transformer architecture, that can recognize and generate text, answer questions, and produce high-quality, contextually appropriate responses in natural language. LLMs are trained on huge sets of data.

• Radial Basis Function Networks: a type of neural network used for function approximation problems, distinguished from other neural networks by their universal approximation capability and faster learning speed.

• Recurrent Neural Networks (RNN): a type of neural network where the output from the previous step is fed back as input to the current step. In traditional neural networks all the inputs and outputs are independent of each other, but to predict the next word of a sentence the previous words are required, so the network needs to remember them.

• Autoencoders: a type of neural network used to learn efficient codings of unlabelled data (unsupervised learning). An autoencoder learns two functions: an encoding function that transforms the input data into a code, and a decoding function that recreates the input data from the encoded representation (a short sketch follows this list).

• Hopfield Networks: recurrent neural networks with a synaptic connection pattern such that there is an underlying Lyapunov function (a measure of stability) for the activity dynamics. Started in any initial state, the network evolves to a final state that is a (local) minimum of the Lyapunov function.

• Modular Neural Networks: characterized by a series of independent neural networks moderated by an intermediary, allowing more complex processes to be managed.

• Adaptive Resonance Theory (ART): a theory developed to address the stability-plasticity dilemma. The terms adaptive and resonance mean that the network can adapt to new learning (adaptive) without losing previous information (resonance).
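
The sketch below shows a minimal autoencoder, assuming PyTorch is available; the layer sizes and the random batch of "data" are arbitrary choices made for the example. The network is trained only to reproduce its own input, so no labels are needed.

import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, n_inputs=784, n_code=32):
        super().__init__()
        # encoding function: input -> short code
        self.encoder = nn.Sequential(nn.Linear(n_inputs, 128), nn.ReLU(), nn.Linear(128, n_code))
        # decoding function: short code -> reconstructed input
        self.decoder = nn.Sequential(nn.Linear(n_code, 128), nn.ReLU(), nn.Linear(128, n_inputs))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                          # a batch of unlabelled data
for _ in range(5):                               # a few training steps
    loss = nn.functional.mse_loss(model(x), x)   # reconstruction error
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
print("reconstruction error:", loss.item())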

𝗗𝗲𝗲𝗽 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴: a method in artificial intelligence (AI) that teaches computers to process data in a way that is inspired by the human brain. Deep learning models can recognize complex patterns in pictures, text, sounds, and other data to produce accurate insights and predictions. The adjective deep refers to the use of multiple layers in the network.

• Transformer Model: a neural network that learns context, and thus meaning, by tracking relationships in sequential data, applying an evolving set of mathematical techniques to detect the subtle ways even distant data elements in a series influence and depend on each other.

• Convolutional Neural Networks (CNN): a regularized type of feed-forward neural network that learns its own features via filter or kernel optimization, rather than relying on manual feature engineering (a short sketch follows this list).

• Long Short-Term Memory Networks (LSTM): a type of recurrent neural network (RNN) designed to deal with the vanishing-gradient problem present in traditional RNNs.

• Deep Reinforcement Learning: a subfield of machine learning that combines reinforcement learning (RL) with deep learning.

• Generative Adversarial Networks (GAN): a class of machine learning frameworks for generative AI. Two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. Given a training set, the technique learns to generate new data with the same statistics as the training set; a GAN trained on photographs, for example, can generate new photographs that look at least superficially authentic.

• Deep Belief Networks (DBN): a type of neural network composed of several layers of shallow networks (restricted Boltzmann machines, or RBMs) that can be trained using unsupervised learning. The output of each RBM is used as the input to the next layer of the network until the final layer is reached; the final layer is typically a classifier trained using supervised learning. DBNs are effective in applications such as image recognition, speech recognition, and natural language processing.
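
To make the deep learning idea concrete, the sketch below stacks a few convolutional layers into a small image classifier, again assuming PyTorch is installed; the 28 x 28 image size and the ten output classes are arbitrary choices for the example. The convolutional filters are learned during training, which is the automatic feature learning referred to above.

import torch
from torch import nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # layer 1: 8 learned filters over a 1-channel image
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 28x28 -> 14x14
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # layer 2: 16 filters over the 8 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # scores for 10 possible classes
)

images = torch.rand(4, 1, 28, 28)                # a batch of 4 toy single-channel images
print(cnn(images).shape)                         # torch.Size([4, 10])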

For more discussion on the use of AI in project management see:
https://mosaicprojects.com.au/PMKI-SCH-033.php#AI-Discussion
