To many people, 2023 felt like the Year of the AI Proof of Concept – or, depending on your longevity in the industry, the Year of AI Hype. The Magnificent 7 companies and the leading financial services players had all spent years investing in AI, but in 2023 there were a lot of new passengers on the AI train. For those of us in the industry, it was great to see so many people start to find value in AI.

Now that we’re one quarter into 2024, more sobriety is entering conversations, and companies I work with are asking substantive questions about AI hype versus reality:

  • How many AI use cases will actually work in my organization?
  • How many of them can we operationalize?
  • How many will deliver real ROI?

More than a year after the mass adoption of tools like ChatGPT, this is a good juncture to share a few lessons I’ve learned from the first waves of AI hype.

  1. “Sprinkle Some AI on It” is not a business strategy. While there is a gee-whiz moment in showing your C-suite what’s possible with AI – and in securing more funding for it – the hard part comes when you realize that the AI juice may not be worth the squeeze, given the resources and budget required to realize your vision at scale. This isn’t to say most use cases aren’t worth pursuing, but the shotgun approach of hunting for issues to solve with AI, rather than aligning with business needs, is unlikely to succeed. Remember to ask yourself: What’s the point of operationalizing this, and what business problem are we solving? Is there incremental value to be found here? The good news is that sane voices are emerging from data teams to say, “This is what I need AI-driven automation or insights for – and let’s forget whether or not it’s generative.”
  2. AI is more than a technology change; it’s a people and process change. This was a clear dividing line between AI hype and reality for many. When I worked in financial services, I learned quickly that AI tools had to conform to an operational flow, a set of rules, and a regulatory framework, and had to fit into a change management process as much as any other technology does. Technologists tend to forget that changing a process or tooling people are comfortable with is often a heavier lift than saying “just connect to our APIs.” People also need to think about compliance, risk, and all the “boring” aspects of explainability, transparency, accountability, and controls. Serious businesses cannot say, “Let’s throw that into production and see what happens.” Today even non-regulated businesses are looking at AI through the lens of requirements to ensure accuracy and validity with their customers.
  3. Don’t swing for the fences on your first at-bat. Although it can be tempting to use AI to solve an organization’s largest and thorniest issues immediately, data leaders have learned quickly that making a mission-critical problem your first AI project is a bad idea. These types of problems often require a steering committee with dozens of stakeholders and massive process change. These are the kinds of programs that can get mired for a year or 18 months without making any meaningful progress, and that don’t allow for healthy experimentation and learning. What does a more reasonable approach look like? It’s a project that requires you to put in a little bit of work: wire up some systems, examine how you’re collecting your data, and decide whether all the data you’re drowning in is even useful for AI. So start small to mid-size with something you can implement in the short term before placing a make-or-break bet.
  4. There is still work to be done to ensure responsible AI. Although I’m optimistic that responsible AI is coming to the forefront of the conversation, there is still a lot of work to be done in this category, as there are many examples of misuse or unintended consequences. Because many AI technologies are still in their infancy, we’re entering a new age where these issues will likely take years to sort out. However, as with any new technology, prioritizing transparency and open conversation with affected groups is the first step in creating fair and ethical programs. Remember, there were ethical and safety considerations around the automobile as well, and they weren’t solved overnight.
  5. AI needs humans as much as humans need AI. I like to think of AI as an accelerator tool for humans. For example, AI coding copilots have helped developers produce more code in much less time. However, any copilot requires an underlying knowledge of what you are actually trying to achieve. If you ask a coding copilot to generate Python code but lack an underlying knowledge of the language or of how different software systems work together, you run the risk of becoming so dependent on AI that you introduce new and creative errors. I recently had a conversation with someone who has been designing large systems for decades, and he predicts that the number of errors and the amount of technical debt introduced into code by AI in the next five years will be staggering. There is a very real danger that if no one is reviewing this AI-generated code, the organization could run into new quality and security risks down the road.
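To make the review point concrete, here is a hypothetical illustration (the function names are mine, not from any real copilot output) of the kind of plausible-looking Python a code assistant might suggest, where a classic mutable-default-argument bug slips through unless a human who knows the language reviews it:

```python
# Hypothetical copilot-style suggestion: looks reasonable, but the mutable
# default argument is created once and shared across calls, so "history"
# silently accumulates events from unrelated invocations.
def log_event(event, history=[]):
    history.append(event)
    return history

first = log_event("login")
second = log_event("logout")   # expected ["logout"], actually ["login", "logout"]

# What a knowledgeable reviewer would write instead: use None as the
# sentinel and create a fresh list on each call.
def log_event_fixed(event, history=None):
    if history is None:
        history = []
    history.append(event)
    return history
```

The code runs without raising anything, which is exactly why this class of error survives in unreviewed AI-generated code: it fails quietly, at a distance from where it was written.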

Staying ahead of AI hype
Staying on top of AI current events is becoming a full-time job in itself. To get yourself out of the hype cycles, I would do a few things:

  • Find a balance between future predictions and actionable use cases. Don’t get distracted by overnight experts speaking in big platitudes about what AI could do – many are just chasing clicks. These predictions may be entertaining, but they likely have no application to your business or problem set.
  • Find somebody who’s an AI expert in your domain or industry. These people are more likely to understand your problem set and the nuances of your industry. An AI influencer who has never dealt with compliance or risk is not going to be much help if you’re in insurance or financial services.
  • Don’t fall into the common trap of believing that data and systems need to be perfect before you can even start with AI. Insisting on perfect data can create analysis paralysis, or data paralysis, before you begin any AI experimentation. The reality is that data is never going to be perfect. Start with what you have and let the AI improve as you go, rather than trying to get every input perfectly positioned first.

That sudden clarity feeling
What’s the ultimate difference between following AI hype and using AI in a meaningful way? The shift tends to occur when a company stops chasing shiny objects, gets practical with AI, and attaches it to measurable business needs and goals. Only then will an organization start to really feel data-focused, answering questions it couldn’t answer before – or that were previously decided by someone’s gut feeling. The good news is, I’m starting to see more and more companies moving in this direction, swapping out AI hype for practical AI solutions.