Contextual AI within the enterprise
Me: “Hey Google – what’s the latest album from Coldplay?”
Answer: “Coldplay’s latest album is A Head Full of Dreams.”
Me: “Can you play it?”
And it starts streaming.
Imagine the same interaction between a user in a company and the AI engine that supports their activity – whether it’s planning, sales, marketing, forecasting, etc.
AI systems inside an enterprise (with the exception of a few simple chatbots for smaller tasks like travel policies) are not built to help users in the context of the process the user follows. They are built to leverage the large sets of data that exist within the enterprise and churn out outputs that are not easily interpreted or are far removed from the process the user follows.
For example: when applying AI/ML to plan trade promotions for a store chain, the output given to the planners is the entire listing of the past year’s trade promotions along with the probability of each one succeeding this year in lifting sales. This is itself a “clean” version of the insights – typically it’s even worse, with users sifting through raw outputs from the AI/ML engine trying to figure out what the engine is telling them.
We don’t expect AI engines in the enterprise space to start interacting like Google Assistant in the near future, but the AI engine should be able to provide context-specific inputs during the process. In the scenario above, the AI engine should tell the trade promotion expert exactly which promotions are likely to succeed in a store, inside the planning tool, while they are evaluating the various options. This happens without the user analyzing the AI’s output, and without them stepping out of the context of planning their trade promotion.
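To make the idea concrete, here is a minimal sketch of the difference. All names (`Promotion`, `rank_for_context`, the store ID, and the sample numbers) are hypothetical illustrations, not an actual Tailwyndz API: instead of handing the planner a full listing of past promotions, the planning tool would ask the engine for a short, ranked answer scoped to the store and options currently on screen.

```python
# Hypothetical sketch: the planning tool calls the AI engine in-context,
# so the planner only sees the few promotions most likely to succeed.
from dataclasses import dataclass

@dataclass
class Promotion:
    name: str
    predicted_lift: float       # expected % sales lift (from the ML model)
    success_probability: float  # model's estimated chance of achieving it

def rank_for_context(store_id: str, candidates: list[Promotion],
                     top_n: int = 3) -> list[Promotion]:
    """Return only the promotions most likely to succeed in this store,
    ranked so the planner sees the answer inside their planning flow."""
    ranked = sorted(candidates, key=lambda p: p.success_probability,
                    reverse=True)
    return ranked[:top_n]

# Illustrative data the planning UI might pass while the planner
# evaluates options for a given store:
candidates = [
    Promotion("Buy-one-get-one cereal", 4.2, 0.61),
    Promotion("20% off soft drinks", 6.8, 0.83),
    Promotion("End-cap snack display", 3.1, 0.47),
]
for p in rank_for_context("store-0042", candidates, top_n=2):
    print(f"{p.name}: {p.success_probability:.0%} chance of lifting sales")
```

The point of the sketch is the shape of the interaction: the engine answers the question the planner is asking at that moment, rather than emitting a report the planner must interpret afterwards.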
This is #ContextualAI within the enterprise.
This is what Tailwyndz is bringing to an application near you.