While we’ve discussed the disruptive power that artificial intelligence (AI) applications bring to enterprise organizations, the truth is that AI adoption among these businesses is still low. However, adoption is at a tipping point: investment in AI has tripled, and recent technical innovations promise to make AI not just an underlying technology capability but a fundamental business tool. Here are just three of the technical innovations that enterprises can use to better leverage the disruptive power of AI:
1. Automated Machine Learning
Automated machine learning (AutoML) is the automation of the end-to-end process of applying machine learning (ML) to real-world problems. In a typical machine learning application, practitioners must apply the appropriate data preprocessing, feature engineering, feature extraction, and feature selection methods to make the data set amenable to machine learning. Following those preprocessing steps, practitioners must then perform algorithm selection and hyperparameter optimization to maximize the predictive performance of their final machine learning model.
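To make those steps concrete, the sketch below shows that manual workflow with scikit-learn on a generic tabular dataset; the dataset, feature counts, and parameter grid are illustrative assumptions, not a prescribed recipe.

```python
# Illustrative only: the manual workflow that AutoML aims to automate,
# sketched with scikit-learn on a generic tabular dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Data preprocessing, feature selection, and the model, chained by hand.
pipeline = Pipeline([
    ("scale", StandardScaler()),                    # preprocessing
    ("select", SelectKBest(score_func=f_classif)),  # feature selection
    ("model", RandomForestClassifier(random_state=0)),
])

# Hyperparameter optimization: the practitioner defines the search space.
param_grid = {
    "select__k": [10, 20, 30],
    "model__n_estimators": [100, 300],
    "model__max_depth": [None, 10],
}
search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```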
AutoML empowers business analysts and developers to build machine learning models that can address complex scenarios without going through the typical process of training ML models. When working with an AutoML platform, business analysts can stay focused on the business problem instead of getting lost in the process and workflow.
AutoML fits neatly between AI APIs and AI software platforms, delivering the right level of customization without forcing developers to work through an elaborate workflow. Behind the scenes, AutoML significantly changes the traditional model training workflow and supports scalable deployment without requiring deep machine learning expertise from DevOps teams.
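For contrast, an open-source AutoML library such as auto-sklearn (one example of the category; the article does not endorse a specific tool) collapses that entire search into a single estimator with a time budget, roughly like this:

```python
# Illustrative only: an AutoML library searching preprocessing steps,
# model families, and hyperparameters automatically within a time budget.
import autosklearn.classification
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300,  # total search budget in seconds
    per_run_time_limit=30,        # cap per candidate pipeline
)
automl.fit(X_train, y_train)
print(automl.score(X_test, y_test))
```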
2. Embedded AI
While many organizations will invest in creating their own AI models to gain a competitive edge, it’s becoming apparent that most organizations will first experience AI as functionality embedded within a packaged application. In fact, within the next two years, it’s probable that every packaged application will make extensive use of embedded machine learning capabilities to automate processes, with most of the heavy lifting of training those AI models done by the vendors.
Likewise, AI is being embedded in enterprise infrastructure for intelligence and self-management. Self-configuring, self-healing, and self-optimizing infrastructure will prevent issues before they occur, help improve performance proactively, and optimize the use of available resources.
3. Cloud Services
AI is compute intensive. AI applications demand fast central processing units, accelerators, very large data sets, and fast networking to support the high degree of scaling typically required. All this fast hardware can be expensive and difficult to manage. The cloud is one of the least expensive ways to host AI development and production. The best solution may depend on where you are on your AI journey, how intensively you will be building out your AI capabilities, and what your endgame looks like.
Cloud service providers (cloud SPs) have extensive portfolios of development tools and pretrained deep neural networks for voice, text, image, and translation processing. Much of this work stems from their internal development of AI for in-house applications, so it is robust. Cloud services make building AI applications seem enticingly easy. Since most companies struggle to find the right skills to staff an AI project, this is very attractive.
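As a rough illustration of how such a pretrained service is consumed, the snippet below labels an image with AWS Rekognition via boto3; the choice of provider and the local file path are assumptions for the sketch, since the article does not single out a vendor.

```python
# Illustrative only: calling a cloud provider's pretrained image model
# (here AWS Rekognition via boto3) instead of training one in-house.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("product_photo.jpg", "rb") as f:  # hypothetical local image
    image_bytes = f.read()

response = client.detect_labels(
    Image={"Bytes": image_bytes},
    MaxLabels=5,
    MinConfidence=80,
)
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```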
Cloud services also offer ease of use, promising click-and-go simplicity in a field full of relatively obscure technology. Cloud services can offer a flexible hardware infrastructure for AI, complete with state-of-the-art GPUs or FPGAs to accelerate the training process and handle the flood of inference processing (where the trained neural network is used for real work or play) that you hope to attract to your new AI. You don’t have to deal with complex hardware configuration and purchase decisions, and the AI software stacks and development frameworks are all ready to go. For these reasons, many AI start-ups begin their development work in the cloud, and then move to their own infrastructure for production.
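To show why the ready-to-go stack matters, here is a minimal PyTorch sketch in which the code picks up whatever accelerator the cloud instance exposes with no hardware-specific configuration; the model and batch are placeholders, not a real workload.

```python
# Illustrative only: a framework using whatever accelerator the cloud
# instance exposes, with no hardware-specific setup in the code.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch; a real job would stream data from cloud storage.
inputs = torch.randn(32, 128, device=device)
targets = torch.randint(0, 2, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print("device:", device, "loss:", loss.item())
```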
Enterprise organizations are exploring how to best roll out AI applications across their businesses. These companies expect their AI investment to go beyond improving productivity and cutting costs. They see AI as a path to grow profits and revenue, create better customer experiences, improve decision making, and innovate products this year and beyond. Multiple key elements need to come together for AI success: data, talent mix, domain knowledge, key decisions, external partnerships, and scalable infrastructure.
Learn how to create an AI strategy that supports your organization in achieving these AI goals; download IDC’s new eBook today: