Artificial intelligence (AI) adoption is at a tipping point, as more and more organizations develop strategies for implementing this revolutionary technology. However, major challenges to AI adoption remain; in fact, solution cost and a lack of skilled resources are cited as the top inhibitors.
While we’ve discussed the disruptive power that AI applications bring to enterprise organizations, the truth is that adoption among these businesses is still low. However, adoption has reached a tipping point: investment in AI has tripled, and recent technical innovations promise to make AI not just an underlying technology capability but a fundamental business tool. Here are just three of the technical innovations that enterprises can use to better leverage the disruptive power of AI:
As AI’s potential grows, so does the need for a cohesive AI strategy that leverages the technology to prioritize and execute the enterprise’s goals. Aside from articulating business goals and mapping out how the organization can use AI to achieve them, every AI strategy needs another critically important element: a code of ethics.
For companies committed to digital transformation (DX), AI is a critical component. The data created by DX initiatives has limited value if an organization can’t extract valuable, accurate, and timely insights from it. That’s why enterprise organizations are using AI technologies to pull actionable value from their data; in fact, by the end of 2019, 40% of all DX initiatives will be related to AI.
In October 2018, a Reuters article informed the world that Amazon had scrapped an AI-based recruitment application that turned out to be biased against women. Most headlines about the story highlighted the company’s failure to develop an actionable and fair solution for one of HR’s most important processes.
However, what this and similar examples of today’s AI “failures” neglect to acknowledge is the complexity of end-to-end process automation based on AI technology. That complexity stems not only from current technical limitations but also from the immaturity of the corporate policies, government regulations, and legal systems that must deal with machines that automatically analyze, decide, and act.
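To make the bias concern concrete, here is a minimal, purely illustrative sketch of one common sanity check: comparing a model’s selection rates across groups using the widely cited four-fifths rule. It has no connection to Amazon’s actual system; the groups, data, and threshold below are assumptions chosen only for illustration.

```python
# Illustrative only: a simple adverse-impact check on a model's hiring
# recommendations, using the "four-fifths rule" heuristic. All group
# labels and data are hypothetical; this is not any vendor's real code.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical model output: (group, was_recommended)
sample = [("A", True)] * 40 + [("A", False)] * 60 \
       + [("B", True)] * 20 + [("B", False)] * 80
print(adverse_impact_flags(sample))  # {'A': False, 'B': True}
```

Even a simple check like this only surfaces a symptom; deciding what to do about a flagged disparity still runs into the policy, regulatory, and legal immaturity described above.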
More than three years ago, I wrote an IDC Community post, “Using Robots to Curb Labor Shortage in Chinese Manufacturing,” highlighting a factory in China that replaced 90% of its workers with automation and robots. In that case, the workforce was reduced from 650 employees to only 60, and those who remained were doing drastically different work from the jobs that were replaced. The jobs shifted from manual labor to oversight, maintenance, and support of the automation and robotics systems.
As the market for intelligent applications and the software platforms used to build them has emerged, nomenclature confusion has grown. What should we call these applications, and what should we call the platforms, libraries, and software tools used to build them?
The terminology matters. Vendors need to differentiate their products from the business intelligence and predictive analytics software that has existed for decades. “Intelligent applications” and “business intelligence” software provide two very different sets of functionality. For technology buyers who need to justify new solutions to budget holders, the terminology matters too.