You Need To Stop Doing This On Your AI Projects

It’s easy to get excited about AI projects, especially when you hear about all the amazing things people are doing with AI: conversational and natural language processing (NLP) systems, image recognition, autonomous systems, predictive analytics, and pattern and anomaly detection. However, when people get excited about AI projects, they tend to overlook some significant red flags. And it’s those red flags that are causing over 80% of AI projects to fail.

One of the biggest reasons for AI project failure is that companies don’t justify the use of AI from a return on investment (ROI) perspective. Simply put, many AI projects aren’t worth the time and expense given the cost, complexity, and difficulty of implementing them.

Organizations rush past the exploration phase of AI adoption, jumping from simple proof-of-concept “demos” straight to production without first assessing whether the solution will provide any positive return. One big reason for this is that measuring AI project ROI can prove more difficult than expected. Far too often, teams face pressure from upper management, colleagues, or external teams to just get started with their AI efforts, and projects move forward without a clear definition of the problem they are actually trying to solve or the ROI they expect to see. When companies struggle to develop a clear understanding of what to expect from the ROI of AI, misaligned expectations are the inevitable result.

Missing and Misaligned ROI Expectations

So, what happens when the ROI of an AI project isn’t aligned with management’s expectations? One of the most common reasons AI projects fail is that the ROI doesn’t justify the investment of money, resources, and time. If you’re going to spend your time, effort, people, and money implementing an AI system, you want a well-identified positive return.

Even worse than misaligned ROI is the fact that many organizations aren’t measuring or quantifying ROI to begin with. ROI can be measured in a variety of ways: as a financial return, such as generating income or reducing expenses, but also as a return on time, a shifting or reallocation of critical resources, improved reliability and safety, reduced errors and better quality control, or improved security and compliance. It’s easy to see how an AI project could provide a positive ROI: if you spend a hundred thousand dollars on an AI project to eliminate two million dollars of potential cost or liability, it’s worth every dollar spent. But you’ll only see that ROI if you actually plan for it ahead of time and manage toward it.
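The back-of-the-envelope arithmetic above is worth making explicit. A minimal sketch, using the illustrative figures from the example (the function name and variables here are hypothetical, not part of any methodology):

```python
# Back-of-the-envelope ROI check for an AI project, using the
# illustrative numbers from the example above.

def roi(gain: float, cost: float) -> float:
    """Classic ROI formula: (gain - cost) / cost."""
    return (gain - cost) / cost

project_cost = 100_000         # what the AI project costs to build and run
liability_avoided = 2_000_000  # potential cost or liability it eliminates

print(f"ROI: {roi(liability_avoided, project_cost):.0%}")  # ROI: 1900%
```

The point isn’t the formula itself but that someone actually runs it, with honest numbers, before the project starts.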

Management guru Peter Drucker once famously said, “You can’t manage what you don’t measure.” The act of measuring and managing AI ROI is what separates those who see positive value from AI from those who end up canceling their projects years and millions of dollars into their efforts.

Boiling the Ocean and Biting off More than You Can Chew

Another big reason companies aren’t seeing the ROI they expect is that projects try to bite off far too much at once. Iterative, agile best practices, especially those employed by AI-focused methodologies such as CPMAI, clearly advise project owners to “Think Big. Start Small. Iterate Often.” Unfortunately, many unsuccessful AI implementations have taken the opposite approach: thinking big, starting big, and iterating infrequently. One case in point is Walmart’s investment in AI-powered robots for inventory management. In 2017, Walmart invested in robots to scan store shelves; by 2022, it had pulled them out of stores.

Clearly, Walmart had sufficient resources and smart people, so you can’t blame the failure on bad people or bad technology. Rather, the main issue was a bad fit between solution and problem. Walmart realized that it was simply cheaper and easier to have the human employees already working in its stores complete the same tasks the robots were supposed to do. Another example of a project not returning the expected results can be found in the various applications of the Pepper robot in supermarkets, museums, and tourist areas. Better people or better technology wouldn’t have solved these problems; a better approach to managing and evaluating AI projects would have. Methodology, folks.

Adopting a Step-by-Step Approach to Running AI and Machine Learning Projects

Did these companies get caught up in the hype of the technology? Were they just looking to have a robot roaming the halls for the “cool” factor? Being cool doesn’t solve any real business problem or pain point. Don’t do AI for the sake of AI. If you do, don’t be surprised when you don’t see a positive ROI.

So, what can companies do to ensure positive ROI for their projects? First, stop implementing AI projects for AI’s sake. Successful companies are adopting a step-by-step approach to running AI and machine learning projects. As mentioned earlier, methodology is often the missing secret sauce of successful AI projects. Organizations are now seeing the benefit of employing approaches such as the Cognitive Project Management for AI (CPMAI) methodology, which builds on decades-old data-centric project approaches such as CRISP-DM and incorporates established agile best practices to provide short, iterative sprints.

These approaches all start with the business user and business requirements in mind. The very first step of CRISP-DM, CPMAI, and even Agile is to figure out whether you should move forward with an AI project at all. These methodologies acknowledge that alternate approaches, such as rule-based automation, straightforward programming, or even simply more people, might be more appropriate to solve the problem at hand.

The “AI Go No Go” Analysis

If AI is the right solution, you need to make sure you can answer “yes” to a variety of questions that assess whether you’re ready to embark on your AI project. This set of questions is called the “AI Go No Go” analysis, and it is part of the very first phase of the CPMAI methodology. The analysis asks nine questions in three general categories. For an AI project to actually go forward, you need three things in alignment: business feasibility, data feasibility, and technology/execution feasibility. The first category addresses business feasibility: is there a clear problem definition, is the organization actually willing to invest in the change once it’s built, and is there sufficient ROI or impact?

These may seem like very basic questions, but far too often they are skipped. The second set of questions deals with data, including data quality, data quantity, and data access. The third set deals with implementation: whether you have the right team and skill sets, whether you can execute the model as required, and whether the model can be used where planned.
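The three categories above lend themselves to a simple checklist. A minimal sketch of that idea, assuming a strict all-or-nothing rule per category (the question wording follows the text; the structure and threshold logic here are illustrative, not part of CPMAI itself):

```python
# A minimal sketch of the "AI Go No Go" analysis described above.
# Nine questions in three categories; a single honest "no" in any
# category means the project is not ready to move forward.

GO_NO_GO_QUESTIONS = {
    "business": [
        "Is there a clear problem definition?",
        "Is the organization willing to invest in the change once built?",
        "Is there sufficient ROI or impact?",
    ],
    "data": [
        "Is the data of sufficient quality?",
        "Is there a sufficient quantity of data?",
        "Do we have access to the data we need?",
    ],
    "implementation": [
        "Do we have the right team and skill sets?",
        "Can we execute the model as required?",
        "Can the model be used where planned?",
    ],
}

def go_no_go(answers: dict) -> str:
    """Return 'GO' only if every question in every category is a yes."""
    for category, results in answers.items():
        if not all(results):
            return f"NO GO: unresolved questions in {category} feasibility"
    return "GO"

# Example: strong business case and data, but a skills gap on the team.
answers = {
    "business": [True, True, True],
    "data": [True, True, True],
    "implementation": [False, True, True],
}
print(go_no_go(answers))
# NO GO: unresolved questions in implementation feasibility
```

The code is trivial on purpose: the hard part, as the next paragraph notes, is answering the questions honestly rather than coding the checklist.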

The most difficult part of asking these questions is being honest with the answers. If you answer “no” to one or more of them, it means either you’re not ready to move forward yet or you shouldn’t move forward at all. Don’t just plow ahead anyway; if you do, don’t be surprised when you’ve wasted a lot of time, energy, and resources without getting the ROI you were hoping for.

https://www.forbes.com/sites/cognitiveworld/2022/08/07/you-need-to-stop-doing-this-on-your-ai-projects/
