Updated: May 31, 2023 (originally published September 12, 2022)

  Analyst Report

Accelerating ML: CPU, GPU, TPU, NPU, and Oh, My


By Barry Briggs

Before joining Directions on Microsoft in 2020, Barry worked at Microsoft for 12 years in a variety of roles, including...

  • Azure and other cloud providers offer different hardware options for accelerating machine learning workloads.
  • The choice depends upon workload, desired performance, cost, and portability.

Machine learning (ML) solutions can help organizations gain new insights from their data, automate tasks that previously required a human, and make predictions about future performance. However, ML solutions are computationally intensive. For example, training an ML model to recognize an image as that of a dog or a defective part can require detailed analysis of thousands or tens of thousands (or more) of sample images, depending on the level of accuracy required. Because common ML algorithms such as neural networks rely heavily on specific kinds of computation, in particular matrix multiplication and exponentiation, offloading these calculations to specially designed hardware can accelerate both training and inferencing.
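To see why matrix multiplication and exponentiation dominate, consider a single dense layer followed by a softmax, the basic building block of the neural networks described above. The sketch below (in NumPy, with illustrative sizes not taken from the report) shows both operations: the batch of inputs is multiplied against the layer's weight matrix, and the resulting scores are exponentiated to produce class probabilities. It is precisely these two steps that GPUs, TPUs, and NPUs are built to accelerate.

```python
import numpy as np

# Illustrative sizes: a batch of 64 "images", each flattened to
# 1,024 features, classified into 10 labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 1024))   # input batch
W = rng.standard_normal((1024, 10))   # layer weights
b = np.zeros(10)                      # layer bias

# Matrix multiplication: the dominant cost in training and inference.
logits = X @ W + b

# Exponentiation: the softmax that turns raw scores into probabilities.
exp_scores = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)

print(probs.shape)  # one probability distribution per input, (64, 10)
```

A real model stacks many such layers, so the matrix multiplications repeat thousands of times per training pass, which is why moving them to specialized hardware yields such large speedups.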

Using Azure to host hardware-accelerated ML workloads can be cost- and time-efficient, as Azure provides numerous ML instance types and services that can be rented on a pay-as-you-go basis, saving the capital cost of purchasing expensive hardware. Such hardware can speed up calculations by as much as 30 times, depending on the workload, enabling much faster turnaround for data scientists.
