These days, it is very difficult to have a conversation with a colleague or customer for half an hour without veering into the topic of AI.
I am not sure whether it is discussed more than the weather or sports, but everyone seems to have a perspective and a story about AI.
In all this, I have come across a fair amount of personification of AI – 'AI can do that', 'just use AI', 'AI would have invented sliced bread before bread itself', and so on.
On one hand, this is all good; the hope that people place in AI, combined with a healthy sense of fear, is positive. On the other hand, real-world experiences range from one extreme of burnt fingers to claims of 10X [I also read about one tool that claims 11X!] productivity improvements.
The current tsunami of AI products started after the launch of ChatGPT.
This has led to the popular tendency to equate AI with ChatGPT or LLM-based solutions.
This brief note is intended to provide clarity on three classes of AI techniques and approaches that are best suited for an enterprise.
These are Predictive, Generative and Agentic approaches.
We will not go into the earlier generations of AI such as Inference engines and Rule based systems in this article.
A judicious mix of these techniques, applied to specific use cases, will help an organization derive the most from these advances in technology.
Organizations aspire to continually get better at satisfying their customers, not only through the products and services they offer, but also by improving the experience of all their stakeholders, including employees.
There is an old quote from the philosopher George Santayana: 'Those who cannot remember the past are condemned to repeat it.' The implication is that if one does not learn from past events, one is likely to make the same mistakes.
Predictive AI:
This is one of the earlier applications from recent years, when the terms AI and ML were used together and AI/ML was often clubbed with Big Data.
It built on the fact that many organizations had collected a lot of data – mostly transactional – that would usually be forgotten once the transactions were completed.
Techniques to cleanse, collate and correlate this data yielded patterns and anti-patterns that helped organizations plug revenue leakages, detect fraud and more.
In some domains, such as manufacturing, past data was also very useful for predicting potential breakdowns of machinery or equipment and taking preventive action.
Such applications helped companies increase customer satisfaction levels.
The rise of e-commerce also created opportunities to predict user behavior and gauge the intensity of an intent to engage [read: buy] based on behavior in the current session as well as past interactions.
The ad-tech industry grew substantially on this basis, using targeted communications and nudges to increase conversion rates.
Other domains that have been able to leverage the benefits of predictive solutions include weather forecasting, social intelligence platforms that sense collective sentiment and project potential crowd behavior, and production planning in both manufacturing and service businesses such as hospitality.
Adopting such solutions is a serious exercise that requires commitment across the organization to ‘train’ the models and a lot of patience to validate the initial predictions with experience.
This also requires strong data discipline – that is, having correct, complete and current data along with the related context.
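To make the predictive idea concrete, here is a minimal sketch of using historical equipment data to predict breakdowns, in the spirit of the preventive maintenance example above. The file name, column names and label are illustrative assumptions rather than a real data set, and scikit-learn is used purely as one common choice of tooling.

```python
# A minimal sketch: predicting equipment failure from historical sensor data.
# Assumes a hypothetical CSV with columns: temperature, vibration, pressure,
# hours_since_service and a 'failed' label (1 = breakdown observed soon after).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

data = pd.read_csv("equipment_history.csv")          # hypothetical historical data
features = ["temperature", "vibration", "pressure", "hours_since_service"]
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["failed"], test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)                          # 'train' the model on past data

# Validate the initial predictions against held-out experience
print(classification_report(y_test, model.predict(X_test)))
```

As noted above, the initial predictions from such a model still need to be validated patiently against real operating experience before any actions are automated on the back of them.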
Generative AI:
Generative capabilities existed before the Large Language Models [LLMs] became mainstream.
The general purpose transformer work done at Google for Natural Language Processing [NLP], and encoder models such as BERT [Bidirectional Encoder Representations from Transformers], paved the way for the more general purpose Generative Pre-trained Transformer [GPT] models such as ChatGPT from OpenAI.
In a very short time, there has been a spurt of applications built on LLMs.
From simple language checking – for grammar, style and so on – to complex image and video generation tools, these have touched almost everyone.
Since generative AI models can also predict, combining them with the analytical or predictive models mentioned earlier can throw up many innovative opportunities to explore.
One example: finding correlations across data sets, such as customer information maintained separately by the sales and service teams, to detect anomalies or opportunities for cross-selling and upselling.
LLMs are good at drawing inferences from structured and unstructured data.
As with the predictive models, one needs to be careful to validate the responses against experience, since LLMs have a tendency to hallucinate.
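As a hedged sketch of combining the two approaches: a predictive model flags customer records that look unusual, and a generative model is asked to summarize them for a human to review. The `call_llm` helper is a hypothetical stand-in for whichever LLM API your organization uses, and the merged data set and its columns are illustrative.

```python
# Sketch: combine a predictive anomaly detector with a generative summary step.
# 'call_llm' is a hypothetical wrapper around whichever LLM provider you use.
import pandas as pd
from sklearn.ensemble import IsolationForest

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to your LLM provider and return the reply."""
    raise NotImplementedError("wire this to your LLM provider of choice")

customers = pd.read_csv("crm_and_service_records.csv")   # illustrative merged data set
features = customers[["annual_spend", "tickets_last_90d", "days_since_last_order"]]

# Predictive step: flag records that look unusual compared with the rest.
detector = IsolationForest(contamination=0.02, random_state=0)
customers["anomaly"] = detector.fit_predict(features) == -1

# Generative step: ask the LLM to explain the flagged records in plain language,
# for a human to validate (remember: LLMs can hallucinate).
flagged = customers[customers["anomaly"]].head(10)
summary = call_llm(
    "Summarize possible reasons these customer records look anomalous, "
    "and suggest whether each is a risk or a cross-sell opportunity:\n"
    + flagged.to_csv(index=False)
)
print(summary)
```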
Another area where LLMs have created a significant impact is in improving productivity across various tasks.
A recent study by Stanford University and the World Bank has identified many opportunities for benefitting from the capabilities of GenAI.
These include writing, active learning, systems analysis, programming and technology design.
The visualization by Visual Capitalist gives a quick overview of the areas of applicability.
Agentic AI:
When you embed some business logic, including some rules for decision making, you can create an intelligent agent!
Agentic solutions have been used in customer service – such as chatbots – and in process automation systems.
While generative AI solutions are very good at content generation, they require a lot of context to be specified and need user prompts to refine the results progressively.
Agentic solutions are good at goal-oriented decision making, with a reasonable degree of autonomy.
They become better through reinforcement learning and by interacting with other agents.
The earlier generation of RPA [Robotic Process Automation] approaches becomes more dynamic and contextually responsive when adapted to become agentic.
There are some generic or universal agents that may be ‘programmed’ or ‘taught’ to do specific activities.
Domain-based agents are trained on the rules related to the domain [for example, how much cargo can be loaded onto an aircraft, or which section of the aircraft cargo area is safe for, say, perishables].
To achieve end-to-end process capabilities, one may need to deploy multiple agents that can talk to each other.
This may also require infrastructure to orchestrate these agents – a manager of agents?
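Here is a minimal sketch of what a domain agent and a 'manager of agents' might look like for the cargo-loading example above. The class names, payload limit and loading rules are illustrative assumptions, not a real airline system; a real deployment would add the learning and inter-agent communication described earlier.

```python
# Minimal sketch of a domain agent and a simple orchestrator.
# The rules, limits and names below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class CargoItem:
    weight_kg: float
    perishable: bool

class CargoLoadingAgent:
    """Domain agent: decides where (and whether) a cargo item can be loaded."""
    MAX_PAYLOAD_KG = 20_000          # illustrative aircraft payload limit

    def __init__(self) -> None:
        self.loaded_kg = 0.0

    def handle(self, item: CargoItem) -> str:
        if self.loaded_kg + item.weight_kg > self.MAX_PAYLOAD_KG:
            return "reject: payload limit would be exceeded"
        self.loaded_kg += item.weight_kg
        # Assumed rule: perishables go to the temperature-controlled forward section.
        section = "forward (temperature controlled)" if item.perishable else "aft"
        return f"load {item.weight_kg} kg into the {section} section"

class Orchestrator:
    """A 'manager of agents': routes each request to the agent that owns it."""
    def __init__(self) -> None:
        self.agents = {"cargo": CargoLoadingAgent()}

    def route(self, task: str, payload: CargoItem) -> str:
        return self.agents[task].handle(payload)

if __name__ == "__main__":
    orchestrator = Orchestrator()
    print(orchestrator.route("cargo", CargoItem(weight_kg=500, perishable=True)))
    print(orchestrator.route("cargo", CargoItem(weight_kg=19_800, perishable=False)))
```

Even in this toy form, the orchestrator illustrates why multiple cooperating agents usually need a coordinating layer before an end-to-end process can be automated.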
If they are more sophisticated, they could be fully autonomous, but the testing involved for such systems would be very effort-intensive and, well, unpredictable, as these are not deterministic algorithms. The influence of the context on an agent's decisions and behavior is dynamic, making it difficult to simulate all possibilities under test conditions.
The choice of specific use cases to automate with agents should also be based on the business value, the effort and complexity of configuring them, and any regulatory record-keeping requirements, as the actions need to be traceable and the reasoning explainable later.
AI-based approaches offer a lot of opportunity to reduce variability within an organization, shorten cycle and lead times, and improve the customer experience.
However, there are some downsides that one needs to be aware of.
Some cautionary notes:
Be mindful of the carbon footprint of AI based solutions.
The huge demand for computing power is already pushing many of the base model providers to invest in clean energy sources such as nuclear, but that is going to take some time.
Many studies have documented the growing carbon footprint of these computing centers.
Satisfy yourself that you actually need the power of AI to address your need. Don't plan to use the AI sledgehammer to drive a small nail.
An overdependence on LLM solutions, to the extent of outsourcing one's thinking, has serious implications.
A recent study by MIT found that cognitive behaviors and capabilities were impacted [impaired?] by over-reliance on ChatGPT.
Key point from the summary:
“While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning.”
Agentic solutions have been in the news recently when they responded to phishing attacks and exposed sensitive company information, as they could not exercise discretion and had no reason to suspect the source of the request.
Not having a clear plan, or choosing the wrong use cases, will lead to disillusionment and erode teams' confidence in adopting newer approaches, particularly when humans are held accountable for the commissions and omissions of autonomous agents.
Gartner predicts over 40% of agentic AI projects will be canceled by the end of 2027.
Being conscious of both the benefits and the risks while charting out your AI adoption will help you manage your path to mastering AI in your enterprise.
With all that said, what is that one skill that will make a person stay valuable in the future?
Staying human!
That is hope for us.
If you would like to take a quick self-test of your understanding of these three approaches and their suitability for specific use cases, take the challenge here.
Cheers!