
Insights from the World Economic Forum: Joris Poort on Computing Sustainability and AI

Rescale’s CEO provides strategic guidance for addressing today’s most pressing computing challenges

Recently, Joris Poort, Rescale’s CEO and founder, shared his insights about the future of computing in two articles published by the World Economic Forum.

“How Organizations Can Adopt AI Without Expanding Their Carbon Footprint” and “How AI Physics Has the Potential to Revolutionize Product Design” covered key considerations for technology and business leaders as they grapple with these pressing issues.

The articles were part of Poort’s participation as an official Agenda Contributor at “Summer Davos,” the WEF’s 14th Annual Meeting of New Champions in Tianjin, China.

The meeting gathered more than 1,500 global leaders and innovators from business, academia, government, and international organizations to discuss the ongoing transformation of the global economy. The theme of the New Champions meeting was “Entrepreneurship: The Driving Force of the Global Economy.”

Finding a New Path to Compute Sustainability

In his article “How Organizations Can Adopt AI Without Expanding Their Carbon Footprint,” Poort discusses how computing’s massive role in driving innovation and commerce is accompanied by an equally significant and growing carbon footprint.

The Uptime Institute, for example, reports that server power consumption has increased by 266% since 2017. Data centers now account for 3% of global electricity consumption, a figure likely to reach 4% by 2030.

The explosion of artificial intelligence (AI) across industries will only further accelerate the demand for computing power and energy.

How can we benefit from the promise of the AI revolution without also growing the carbon footprint of computing?

To address this challenge, Poort details five steps that businesses and governments can take to develop sustainable computing practices. These include:

  • Improve utilization via cloud computing
  • Develop domain-specific architectures
  • Enable workload portability
  • Automate performance optimization
  • Use sustainable energy sources

Poort emphasized that each step tackles sustainable computing in a different way. Combined, they can deliver a significant and necessary improvement over current practices.

Most importantly, a rapidly expanding array of cloud-based high performance computing (HPC) services now offer organizations much more flexibility in the kinds of hardware infrastructure they can use to improve the energy efficiency of their R&D computing operations.

With cloud-based services from the likes of AWS, Microsoft Azure, and Google Cloud, R&D teams can tap into the latest, most efficient supercomputing clusters for running their data-intensive digital engineering simulations and other big compute workloads.

And the number of specialized chips has exploded. With the potential to run up to 10 times more efficiently than traditional central processing units (CPUs), newer chips like graphics processing units (GPUs) and tensor processing units (TPUs) promise major sustainability gains.

This world of high-performance, highly efficient hardware architectures in the cloud stands in stark contrast to the traditional constraints of on-premises data centers, which typically need to use the same hardware for years to amortize the capital costs of building supercomputing systems. Organizations are then burdened with older and less efficient HPC architectures.

But moving workloads to the cloud can be a complex task. It takes time, money, and expertise to ensure it is done right. Even with cloud services, porting applications from one cloud infrastructure to another remains very difficult, which inhibits the adoption of best-fit architectures for greener computing.

To address this, Poort says organizations need new ways to automate the steps required to set up an HPC cloud service while being able to orchestrate applications across a variety of multi-cloud architectures.
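The orchestration idea above can be sketched as a provider-agnostic job description that a platform translates into provider-specific submissions. This is a minimal illustration, not Rescale's implementation; the `aws-batch-submit` and `az-batch-submit` commands and their flags are hypothetical placeholders standing in for real cloud submission APIs.

```python
from dataclasses import dataclass

@dataclass
class HpcJob:
    """Provider-agnostic description of a compute job."""
    name: str
    image: str    # container image holding the solver
    cores: int
    command: str

def to_provider_args(job: HpcJob, provider: str) -> list[str]:
    """Translate one job spec into a (hypothetical) provider-specific
    submit command, so the same workload can run on any cloud."""
    if provider == "aws":
        return ["aws-batch-submit", "--image", job.image,
                "--vcpus", str(job.cores), "--cmd", job.command]
    if provider == "azure":
        return ["az-batch-submit", "--image", job.image,
                "--cores", str(job.cores), "--cmd", job.command]
    raise ValueError(f"unknown provider: {provider}")

# The same job spec targets either cloud without modification.
job = HpcJob("cfd-run", "acme/openfoam:latest", 64, "simpleFoam")
print(to_provider_args(job, "aws"))
print(to_provider_args(job, "azure"))
```

Keeping the job description separate from any one provider's submission syntax is what lets an orchestration layer move workloads to whichever infrastructure is currently the best fit.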

Being able to easily switch to different cloud-based infrastructures isn’t helpful if organizations don’t know the right ones to use for specific applications (and even specific types of compute workloads).

This process of matching the best-fit hardware for an application requires a high degree of automation because of the increasing speed of chip innovation. Things are simply changing too quickly for organizations to constantly test, benchmark and then adopt new hardware.

And getting this wrong can have major energy implications. Poort writes that with automation we can improve access to the cloud for more organizations, and improve overall utilization worldwide.
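The automated matching Poort describes can be sketched as a simple selection over benchmark results: measure each candidate hardware type once, then pick the one that spends the least energy per job. The instance names and numbers below are purely illustrative assumptions, not real benchmark data.

```python
# Hypothetical benchmark results for one workload, measured once per
# candidate hardware type (all values are illustrative, not real data).
candidates = {
    "cpu.large": {"seconds_per_job": 3600, "watts": 400},
    "gpu.a100":  {"seconds_per_job": 300,  "watts": 700},
    "tpu.v4":    {"seconds_per_job": 250,  "watts": 450},
}

def energy_per_job(stats: dict) -> float:
    """Energy in watt-hours consumed by one job on this hardware."""
    return stats["seconds_per_job"] / 3600 * stats["watts"]

# Automated selection: lowest energy per job wins.
best = min(candidates, key=lambda name: energy_per_job(candidates[name]))
print(f"best fit: {best}")
```

Note that raw power draw is misleading on its own: the GPU here draws the most watts but finishes so much faster that it uses a fraction of the CPU's energy per job, which is why automated benchmarking, rather than intuition, has to drive the choice.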

The Promise of AI for a New Era of Innovation

In his article “How AI Physics Has the Potential to Revolutionize Product Design,” Poort calls attention to the tremendous opportunity for artificial intelligence (AI) to transform and greatly accelerate digital engineering and product development.

While tools like ChatGPT are now mainstream and garnering much attention and debate, much less has been written about the profound potential for AI to define a new era of digital engineering innovation.

Engineering and scientific computing is now the foundation of innovation. These methods often require massive computing power from supercomputing clusters (also known as high performance computing or HPC) to run detailed simulation models that replicate the real world.

Across industries, research and development (R&D) teams use digital simulations to explore the physical world. Use cases vary from inventing life-saving medicine and improving aircraft design to pioneering sustainable energy and creating self-driving vehicles.

Poort writes that AI offers the potential to supercharge engineering and scientific computing and transform how organizations innovate.

The computer simulations used today for engineering and scientific computing increasingly benefit from AI (and in some cases are being replaced by it), dramatically lowering costs and helping engineers find the best answers faster.

Running simulations can be expensive, often requiring supercomputers to crunch massive data sets and execute highly complex calculations. But if you can build a machine learning (ML) model of how the physics works, you don't need to run a full simulation every time: the trained inference model can predict the answer from the data.
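The surrogate-model idea works like this: pay for the expensive simulation a limited number of times, fit a cheap model to the results, then query the model instead of the solver. The sketch below uses a toy function and a lookup-table surrogate as a stand-in; real AI-physics systems train neural networks or Gaussian processes on genuine solver output.

```python
import bisect
import math

def expensive_simulation(x: float) -> float:
    """Stand-in for a costly physics solver: one scalar output per
    design parameter x (a toy function, purely illustrative)."""
    return math.sin(x) + 0.1 * x * x

# 1) Run the costly simulation on a coarse grid of designs, once.
xs = [i * 0.1 for i in range(31)]          # designs 0.0 .. 3.0
ys = [expensive_simulation(x) for x in xs]

# 2) Cheap surrogate: interpolate between the stored results
#    instead of re-running the solver for every new design.
def surrogate(x: float) -> float:
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

# 3) Evaluate a new design in microseconds, not supercomputer-hours.
print(surrogate(1.73))
```

The trade-off is the one Poort highlights: 31 solver runs up front buy near-instant answers for every design in between, and the surrogate is only as trustworthy as the data it was built from.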

Organizations that become adept at creating well-crafted AI-physics models will gain a solid competitive advantage. They will accelerate engineering and scientific discovery while developing innovative new products that would be computationally prohibitive using traditional approaches.

Despite the promise of AI, organizations across industries will need to establish engineering and research best practices to help ensure they navigate this transition safely to maximize the benefits to society without needless risk.

Most importantly, Poort writes, AI is only as good as the information it trains on. Organizations will need to find ways to provide rich troves of high-quality data to AI tools to make them smart in the right ways.

Bring the Promise of AI and Sustainability to Your Organization

Learn how the Rescale platform helps organizations orchestrate and automate their multi-cloud HPC operations.

Author

  • At Rescale Marketing, we're the driving force behind the seamless convergence of advanced technology and strategic marketing. Our team specializes in catalyzing the potential of High Performance Computing (HPC), Physics AI, and pioneering Cloud Research & Development (R&D) initiatives. We are a diverse blend of visionaries, strategists, and implementers who focus on creativity, collaboration, and a drive for innovation. We thrive on challenges, pushing boundaries and redefining what is possible.
