
Artificial Intelligence and DaaS

How Affective Computing is Driving Innovation


We’ve discussed how the term artificial intelligence (AI) covers a wide array of applications; like many of these functionalities, affective computing is beginning to see growth in the market. Spanning computer science, behavioral psychology, and cognitive science, affective computing uses hardware and software to identify human feelings, behaviors, and cognitive states through the detection and analysis of facial, body-language, biometric, and verbal or vocal signals.

While affective computing requires hardware such as a camera, a touch device, or a microphone to capture the signals described above, the bulk of the technology uses software to detect and analyze an individual’s current mood or cognitive state. Synthesis and analysis of the captured data rely heavily on AI and machine learning algorithms and modeling, as well as aspects of computer vision (CV), natural language processing (NLP), and natural language understanding (NLU).
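To make the capture-then-analyze flow concrete, here is a minimal, purely illustrative sketch of an affective-computing pipeline: raw signal frames are reduced to features, then fused into a coarse state label. The signal names, schema, and thresholds are all hypothetical; production systems use trained deep learning models rather than hand-set rules.

```python
# Toy affective-computing pipeline: capture -> feature extraction ->
# state classification. Feature names and thresholds are illustrative
# only; real systems learn these from large labeled datasets.

def extract_features(frame):
    """Reduce a raw multimodal frame (hypothetical schema) to features."""
    return {
        "brow_furrow": frame.get("brow_furrow", 0.0),         # facial cue
        "eye_closure": frame.get("eye_closure", 0.0),         # drowsiness cue
        "vocal_pitch_var": frame.get("vocal_pitch_var", 0.0), # vocal cue
    }

def classify_state(features):
    """Fuse facial and vocal features into a coarse state label."""
    if features["eye_closure"] > 0.7:
        return "drowsy"
    if features["brow_furrow"] > 0.6 and features["vocal_pitch_var"] > 0.5:
        return "stressed"
    return "neutral"

frame = {"eye_closure": 0.85, "brow_furrow": 0.2}
print(classify_state(extract_features(frame)))  # prints "drowsy"
```

The point of the sketch is the structure, not the rules: hardware supplies the frames, and the software layers (here, two small functions) carry the analytical load, which is where the AI/ML investment described above goes.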

Affective computing offers a great opportunity for organizations to augment human capabilities and build trust between humans and the machines they interact with. Businesses are developing more and more use cases that incorporate affective computing, and AI vendors are developing innovative capabilities to help organizations capitalize on this opportunity. Here are just a few of the vendors driving the affective computing market forward with these innovations:

Using Affective Computing to Make Driving Safer and More Enjoyable

MIT Media Lab spin-off Affectiva uses in-cabin cameras and microphones in its Automotive AI product to capture facial and vocal signals, enabling its AI to analyze and recognize the cognitive and emotional states of automobile drivers and passengers, including drowsiness or distraction. Affectiva’s key differentiator is the technology used in Automotive AI’s multimodal emotion recognition tool, which uses deep neural networks, trained on an ever-growing emotional data repository of 8+ million faces, to analyze both facial and vocal signals.

Aside from making consumer driving safer, affective computing solutions like Affectiva could make ridesharing, taxis, and fleet management safer by monitoring and alerting drivers to distracted or unsafe driving behaviors and by providing enhanced in-cabin experiences.

Building Trust in Financial Services with Emotional Analysis

Swiss company NVISO has created Insights ADVISE, which uses facial analysis to recognize emotions and build accurate financial behavioral profiles for financial services clients. The app uses biometric facial recognition technology to track 68 facial points and head pose in financial clients as they answer questions. NVISO’s deep learning network takes this information and analyzes the clients’ emotions in real-time to build a corresponding financial behavioral profile.

NVISO helps financial advisors to better understand and serve their clients through a more accurate financial behavioral profile. The Insights ADVISE app empowers financial advisors to better understand their clients’ needs, risk appetite, and preferences and serve them more effectively, building mutual trust and ensuring realistic goals and expectations.

Using Behavioral Insights to Train Customer-Facing Employees

Cogito, an MIT Human Dynamics lab spin-off, creates products that analyze behavioral signals within voice conversations to provide real-time guidance to customer-facing agents in service, sales, and care roles. Cogito’s products analyze over 200 behavioral signals such as pitch, tone, pace, turn-taking, vocal effort, and vocal tension within voice conversations, and deliver in-call guidance that is simple for the agent to interpret. Cogito also records and bundles insights from customer interactions and delivers them to supervisors to identify training opportunities and share best practices.
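As a rough illustration of how one behavioral signal can become in-call guidance (this is not Cogito's actual method; the rate and threshold are invented), consider speaking pace derived from a call segment:

```python
# Illustrative sketch: derive one behavioral signal (words per minute)
# from a call segment and emit simple agent guidance. The 170-wpm
# threshold is a hypothetical value, not a vendor's real setting.

def words_per_minute(word_count, seconds):
    """Speaking pace over a segment; 0.0 for an empty segment."""
    return word_count / seconds * 60 if seconds else 0.0

def pace_guidance(word_count, seconds, fast_wpm=170):
    """Turn the pace signal into guidance an agent can act on mid-call."""
    wpm = words_per_minute(word_count, seconds)
    return "slow down" if wpm > fast_wpm else "ok"

print(pace_guidance(word_count=120, seconds=30))  # 240 wpm -> "slow down"
```

Real products fuse hundreds of such signals (tone, turn-taking, vocal effort) and aggregate them across calls for supervisor-level insights, but each signal follows this same measure-then-threshold shape.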

This level of guidance helps employees in these roles deliver more satisfactory care to individual clients, and it also empowers agents to augment their own emotional intelligence and improve their overall performance through goal-oriented insights.


Improving the Retail Experience for Customers and Businesses with Facial Recognition

Sightcorp uses computer vision and machine learning/deep learning to analyze and recognize faces in a retail environment, generating in-store analytics that measure customer satisfaction and provide real-time customer insights. Sightcorp also offers multiple privacy settings in its products, allowing businesses to activate automatic face blurring and declining to store images. By processing on the edge and following these rules, retailers can ensure that sensitive data is neither transferred nor left vulnerable outside the device running the facial recognition and analysis.
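The privacy-preserving step can be sketched very simply: blur each detected face region on the device before any data leaves it. This toy version (the detector is assumed; only the blurring is shown) replaces a rectangular region of a grayscale image, represented as a list of lists, with its mean intensity:

```python
# Minimal on-device face-blurring sketch: replace a detected face box
# with its mean intensity so no identifiable pixels leave the device.
# The face box coordinates would come from a detector (not shown).

def blur_region(image, top, left, height, width):
    """Replace a rectangular region of a grayscale image with its mean."""
    pixels = [image[r][c]
              for r in range(top, top + height)
              for c in range(left, left + width)]
    mean = sum(pixels) // len(pixels)
    for r in range(top, top + height):
        for c in range(left, left + width):
            image[r][c] = mean
    return image

face = [[10, 200], [30, 220]]   # a tiny 2x2 "face" crop
blur_region(face, 0, 0, 2, 2)
print(face)  # every pixel replaced by the region mean, 115
```

Production systems use stronger anonymization (Gaussian blur or pixelation over detector output), but the design choice is the same: the irreversible step happens at the edge, so only aggregate analytics ever reach the cloud.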

Affective computing in retail delivers the insights businesses need to improve and optimize customer experiences while allowing human retail workers to focus on in-person customer interactions, improving both the immediate customer experience and long-term customer satisfaction.

Affective computing, like other AI applications, empowers organizations with the insights they need into their businesses and consumers to succeed. This technology requires a comprehensive AI strategy to deliver business value while navigating ethics and privacy considerations.

Learn more about incorporating affective computing into your organization’s AI strategy: watch IDC’s on-demand webcast “Affective Computing: Augmenting Human Emotional Intelligence with AI”.

Artificial Intelligence and DaaS

Why Organizations Want Consumption-Based Artificial Intelligence Pricing


Artificial intelligence (AI) adoption is at a tipping point, as more and more organizations develop AI strategies for implementing this revolutionary technology. However, major challenges to AI adoption remain; in fact, the cost of the solution and a lack of skilled resources are cited as the top inhibitors of adopting AI.

Even as the technology landscape has experienced massive change and disruption, the way organizations pay for technologies has not kept pace. The traditional model is perpetual licensing, in which a business had to estimate what technology it needed and how much, usually for the next three to five years. Once decided, the organization would purchase the capacity and pay up front. There is a big problem with this model: if the estimate was wrong, the organization would end up wasting money on under-used or unused technology, creating a disconnect between the cost of the technology and the actual business value.
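A worked example makes the disconnect concrete. All figures below are hypothetical: an organization licenses capacity up front for a three-year peak estimate, but actual usage runs well below it.

```python
# Hypothetical cost comparison: a 3-year perpetual license sized for
# estimated peak demand vs. consumption-based pricing for actual usage.

licensed_capacity = 1000       # units bought up front (the estimate)
actual_usage = 600             # units actually consumed per year
price_per_unit_upfront = 100   # perpetual: paid once, covers 3 years
price_per_unit_consumed = 45   # consumption-based: per unit, per year

perpetual_cost = licensed_capacity * price_per_unit_upfront
consumption_cost = actual_usage * price_per_unit_consumed * 3

print(perpetual_cost, consumption_cost)  # 100000 vs 81000

# The waste is the capacity paid for but never consumed:
wasted = (licensed_capacity - actual_usage) * price_per_unit_upfront
print(wasted)  # 40000 spent on unused capacity
```

Even with a higher effective per-unit rate, consumption-based pricing costs less here because the buyer never pays for the 400 units of capacity the estimate got wrong; that is exactly the cost-to-value disconnect described above.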

Many SaaS and cloud-based technologies first disrupted this pricing model by introducing the cloud-based subscription model. With no need for hardware or software installation in their datacenters, and a flexible subscription that allowed for fluctuations in need, this move towards consumption-based pricing disrupted how organizations evaluated and purchased information technology. AI suppliers will need to follow this lead and move towards even more granular pricing that favors transaction-based models to help their clients overcome the cost hurdle of implementing AI solutions.

The AI Stack Build Influences the AI Pricing Model

AI covers a wide range of applications, including natural language processing, machine learning, image/video analytics, and deep learning, among others. Whether an organization buys off-the-shelf AI, builds its own solutions on-premises, or outsources the build, there are three main approaches to building out AI capabilities: use an off-the-shelf solution, train a model, or integrate a pretrained model that requires inferencing (AI- and ML-driven predictions executed at the edge after the model has been trained in the cloud, where data storage and processing power are plentiful and scalable) to work within the organization.

AI Training vs AI Inferencing

  1. An organization decides to use an off-the-shelf embedded AI solution for one of its business initiatives, whether it is running on-premises, on public cloud, or at an edge location.

    Vendors typically do not charge any premium for the embedded AI services; this pricing reflects how standard the market expects embedded AI capabilities to become.
  2. An organization decides to train a model. There are four separate ways this model can be trained:
    1. Build and training of the model are done on-premises. Commercially available AI software platforms deployed on-premises are typically priced per seat, with an annual software license based on the number of individual users who have access to the product. Vendors who still offer perpetual licensing for on-premises AI software platforms usually do so as a legacy of older offerings and should move to a subscription pricing model.

      Organizations who choose to train a model on-premises will need underlying infrastructure to support the platform. To accelerate time to value, some leading AI technology vendors are offering an AI-optimized hardware and software bundled solution. The pricing approach for these optimized solutions is evolving to be annual subscription based.

    2. Build and training of the model are done on public cloud. Here, data scientists have the option to either launch virtual instances with popular deep learning frameworks and interfaces to train sophisticated, custom AI models, or use commercial AI software platforms available as a service.

      When using a commercially available AI software platform as a service, one pays only for what one uses. Building, training, and deploying ML models are typically billed by the second (with a one-minute minimum), with no minimum monthly fees and no up-front commitments.

      However, when using preconfigured environments, hours of training are typically multiplied by units of capacity that reflect the compute power of the underlying virtual machine instance, depending on the instance type selected. Choosing acceleration chips adds further cost on top of the base price.

    3. Build and training of the model are outsourced to a domain services provider. This arrangement creates a partnership between the customer and the domain services provider. The domain services provider takes the lead on AI model build and training, and the customer provides its custom/proprietary data sets and business process information to fine-tune the model.

      The pricing dynamics in this scenario are custom, negotiated on an engagement-by-engagement basis.

    4. A pretrained model is available as a public cloud service. An organization’s proprietary data is uploaded to the public cloud and used to tune or customize the pretrained model, which is then deployed as an API for integration with an application for runtime inferencing. The pricing specifics for this scenario are interdependent with those of using a pretrained, public cloud model deployed as an API for inferencing (scenario 3.2, described below).
  3. An organization integrates an off-the-shelf pretrained model or a tuned model into an application.
    The inferencing needed for this integration can happen in one of three ways:

    1. A pretrained model or AI application is available as a public cloud service and invoked on demand for real-time inferencing.

      Organizations pay only for what they use, with no minimum fees or mandatory service usage. For some of the API choices, such as video search, image categorization, or text to speech, payment is made in allotments of pennies or dollars per minute of video, per thousand images, or per thousand characters of text, based on how frequently API requests are sent to perform inference.

      Some technology vendors include their AI applications with the unlimited version of their core application bundle, but for other cloud SKUs there is an extra monthly charge that scales with the millions of predictions requested of the software.

      Some leading AI marketing cloud vendors offer subscription-based licensing with a flat fee for three years, so the more decisions, recommendations, and predictions a client makes with the AI-powered software, the better its effective pricing becomes.

    2. A pretrained model available as a public cloud service is tuned with proprietary data, and the resulting capability is deployed as an API for inferencing. Here, an organization’s proprietary data is uploaded to the public cloud and used to tune the pretrained model into a custom model, which is then deployed as an API for inferencing.

      This scenario’s pricing also covers the training pricing referenced above in scenario 2.4. While organizations pay only for what they use, with no minimum fees and no up-front commitments, there are additional types of costs associated with this training-inferencing combination:

      1. Training hours: Cost for each hour of training required for a custom model based on data provided by customers.
      2. Data storage: Cost for each unit of data capacity stored and used to train customer’s models.
    3. A pretrained model is stored in a DevOps-style repository for edge inferencing. This inference style is extremely new, and resources at the edge are scarce. However, suppliers have begun introducing technologies, platforms, and devices that can cost-effectively extend AI and ML capabilities to the edge of the network. Working in concert with public cloud services, these devices are capable of processing large volumes of data locally, enabling highly localized and timely inference.

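The consumption mechanics described in scenarios 2.2 and 3.1 can be sketched in a few lines. The rates below are hypothetical; what matters is the shape of the billing: per-second training charges with a one-minute minimum, and inference billed in whole allotments (here, per 1,000 images) rounded up.

```python
# Sketch of consumption-based billing mechanics. All rates are
# hypothetical and only illustrate the billing shapes described above.

def training_charge(seconds, rate_per_second=0.0008):
    """Per-second training billing with a one-minute minimum."""
    billable = max(seconds, 60)  # short jobs are billed as 60 seconds
    return billable * rate_per_second

def inference_charge(images, price_per_1000=1.50):
    """API inference billed in whole 1,000-image allotments."""
    allotments = -(-images // 1000)  # ceiling division
    return allotments * price_per_1000

print(training_charge(45))     # billed at the 60-second minimum
print(inference_charge(2500))  # 3 allotments -> 4.5
```

Note the asymmetry: training rounds up once (to the minimum duration), while inference rounds up on every allotment boundary, so request batching directly affects the bill.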

How AI Suppliers Can Help Clients with AI Costs

Cost is one of the biggest inhibitors to enterprise AI adoption. Suppliers can empower their clients to overcome this hurdle by offering granular, consumption-based pricing and flexible licensing for all AI offerings, from core technologies and applications to APIs and services.

For on-premises build and training of AI models, as well as for edge inferencing, technology providers should offer AI-optimized hardware/software bundled solutions. This helps ease the complexities of provisioning and management and keeps overall costs in check. Additionally, AI offerings on public cloud should automatically terminate instances after jobs complete, with billing only for the jobs’ running time.

Consumption-based pricing empowers clients to pay for what they need, lowering inhibitions and fueling AI adoption.

Learn more about AI pricing models in IDC’s Market Perspective: Pricing Dynamics for Artificial Intelligence Offerings: Rapid Emergence of Granular and True Consumption-Based Approach.

Organizations with disruptive AI capabilities and maturity are reaping multiple benefits. Watch IDC’s webcast to hear how some of these innovative businesses are taking advantage of AI – and what they look for in an AI partner.

Cloud

How IDC’s Industry CloudPath & SaaSPath Surveys Can Inform Your Cloud/SaaS Strategy


Data is the fuel for modernizing and transforming business. While there is no shortage of data – IDC expects worldwide data to reach 175 zettabytes by 2025 – insightful, relevant data is in much shorter supply. I experience this daily in the cloud-related inquiries I receive. In addition to generating timely and relevant insights from data, my role is also to make that data actionable for customers. This means providing the data in a more consumable format and context, based on a deeper understanding of what the client needs.

IoT and the Edge

What a Small Canadian Company is Doing for Consumer AR


For the past few years I’ve been covering the augmented reality and virtual reality (AR/VR) market, and I’ve seen some impressive demos. For VR headsets, these usually revolved around some sort of game or other-worldly experience: the kind of stuff that’s always fun and exciting for users.

Cloud Markets and Trends

3 Ways That Service Providers Are Changing the IT Market


Cloud, hyperscale and digital service providers already account for 20% of IT infrastructure hardware spending, with 75% of that spending from the 8 largest hyperscalers alone. Add in colocation and managed services hosting providers, plus communications service providers, and by 2023 more than 60% of infrastructure hardware spend will come from the overall service provider segment.

While commercial end-users in other industries shift an increasing proportion of their budget to IT ‘as a service’, service providers will increasingly be the driver for IT vendor strategies and product development. From the outsize impact of hyperscalers, to the shifting focus of infrastructure and hosting providers, here are 3 ways in which these IT buyers are changing the IT market.
