New ML Simulations Reduce Energy Need for Mask Fabrics, Other Materials

Nov. 2, 2022 — Making the countless N95 masks that have protected millions of Americans from COVID requires a process that demands both attention to detail and a great deal of energy. Many of the materials in these masks are produced by a technique called melt blowing, in which tiny plastic fibers are spun at high temperatures, an energy-intensive step. The same process is used for other products such as furnace filters, coffee filters and diapers.

Thanks to a new computational effort pioneered by the U.S. Department of Energy's (DOE) Argonne National Laboratory in conjunction with 3M and supported by the DOE's High Performance Computing for Energy Innovation (HPC4EI) program, researchers are finding ways to dramatically reduce the amount of energy required to melt-blow the materials needed in N95 masks and other applications.

Currently, the nozzle-based process used to spin non-woven materials produces a very high-quality product, but it is quite energy-intensive. Approximately 300,000 tons of melt-blown materials are produced annually worldwide, requiring roughly 245 gigawatt-hours of energy per year, approximately the amount generated by a large solar farm. By using Argonne supercomputing resources to pair computational fluid dynamics (CFD) simulations with machine-learning techniques, the Argonne and 3M collaboration sought to reduce energy consumption by 20% without compromising material quality.
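
Taken at face value, those figures give a sense of the scale of the opportunity. A back-of-the-envelope sketch in Python, using only the numbers quoted above:

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
annual_output_tons = 300_000  # melt-blown material produced worldwide per year
annual_energy_gwh = 245       # energy consumed by melt blowing per year
target_reduction = 0.20       # the 20% goal of the Argonne/3M collaboration

energy_per_ton_kwh = annual_energy_gwh * 1e6 / annual_output_tons
savings_gwh = annual_energy_gwh * target_reduction

print(f"~{energy_per_ton_kwh:.0f} kWh per ton of material")    # ~817 kWh/ton
print(f"~{savings_gwh:.0f} GWh/year saved at the 20% target")  # ~49 GWh/year
```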

The melt-blowing process uses a die to extrude plastic at high temperatures. Finding a way to create identical plastic components at lower temperatures and pressures motivated the machine-learning search, said Argonne computational scientist Benjamin Blaiszik, an author of the study. “It’s kind of like we are trying to make a pizza in an oven — we’re trying to find the right dimensions, materials for our pizza stone, and cooking temperature using an algorithm to minimize the amount of energy used while keeping the taste the same,” he said.

By using simulations and machine learning, Argonne researchers can run hundreds or even thousands of design cases, far more than was previously practical. “We have the ability to tweak things like the parameters for the die geometry,” Blaiszik said. “Our simulations will make it possible for someone to make an item at an actual industrial facility, and our computer can tell you about its potential for real-world applications.”
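
The article does not spell out which die parameters were varied, but a campaign like this is typically organized as a sweep over a handful of geometry and operating parameters. Below is a minimal illustrative sketch; the parameter names and ranges are hypothetical stand-ins, not the actual Argonne/3M design space:

```python
# Illustrative design-of-experiments sweep for generating CFD cases.
# All parameter names and ranges are hypothetical stand-ins.
import itertools

design_space = {
    "orifice_diameter_mm": [0.2, 0.3, 0.4],
    "air_gap_mm":          [0.5, 1.0, 1.5],
    "melt_temperature_C":  [240, 260, 280],
    "air_pressure_kPa":    [60, 80, 100],
}

# Full Cartesian product: 3^4 = 81 candidate cases here; a real
# campaign would run hundreds or thousands on a supercomputer.
cases = [dict(zip(design_space, values))
         for values in itertools.product(*design_space.values())]

for i, case in enumerate(cases[:3]):  # show the first few cases
    print(f"case {i:04d}: {case}")
```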

The simulations provide key insights into the process: a way to assess combinations of parameters and to generate the data used to train the machine-learning algorithm. The trained model can then be leveraged to converge on a design that delivers the required energy savings.
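
The article does not name the specific machine-learning method, but one common way to structure such a simulation-to-surrogate pipeline is sketched below, where run_cfd() is a hypothetical stand-in for an expensive CFD run and a generic scikit-learn regressor serves as the surrogate:

```python
# Sketch of a simulation-trained surrogate used for design search.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def run_cfd(x):
    # Hypothetical stand-in for an expensive CFD run that returns
    # (energy use, fiber quality) for a normalized design vector x.
    energy = 1.0 + x[2] + 0.5 * x[3] + 0.05 * rng.normal()
    quality = 1.0 - 0.3 * abs(x[0] - 0.5) + 0.02 * rng.normal()
    return energy, quality

# 1) Run the (placeholder) simulations on sampled designs.
X = rng.uniform(0.0, 1.0, size=(200, 4))  # 4 normalized design parameters
y_energy, y_quality = map(np.array, zip(*(run_cfd(x) for x in X)))

# 2) Train surrogates that predict the simulation outputs.
energy_model = GradientBoostingRegressor().fit(X, y_energy)
quality_model = GradientBoostingRegressor().fit(X, y_quality)

# 3) Search cheaply over many candidate designs: minimize predicted
#    energy subject to a quality floor, with no further CFD runs.
candidates = rng.uniform(0.0, 1.0, size=(100_000, 4))
ok = quality_model.predict(candidates) >= 0.95
best = candidates[ok][np.argmin(energy_model.predict(candidates[ok]))]
print("best candidate design (normalized):", best)
```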

Because making a new nozzle is very expensive, the information gained from the machine-learning model can equip materials manufacturers with a way to narrow the search to a set of optimal designs. “Machine-learning-enhanced simulation is the best way of cheaply getting at the right combination of parameters like temperatures, material composition, and pressures for creating these materials at high quality with less energy,” Blaiszik said.

The initial model of the melt-blowing process was developed through a series of simulation runs performed on the Theta supercomputer at the Argonne Leadership Computing Facility (ALCF), using the CFD software packages OpenFOAM and CONVERGE. The ALCF is a DOE Office of Science user facility located at Argonne.
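
For a rough picture of how such a simulation campaign is driven in practice, the sketch below creates one case directory per parameter set and hands each off to the batch scheduler (Theta ran the Cobalt scheduler, whose submission command is qsub). The directory layout, job script, and parameters are hypothetical, not the actual ALCF workflow:

```python
# Hypothetical batch driver for a CFD parameter sweep.
import json
import pathlib
import shutil
import subprocess

cases = [{"melt_temperature_C": t, "air_pressure_kPa": p}
         for t in (240, 260, 280) for p in (60, 80, 100)]

for i, params in enumerate(cases):
    case_dir = pathlib.Path("runs") / f"case_{i:04d}"
    case_dir.mkdir(parents=True, exist_ok=True)
    (case_dir / "params.json").write_text(json.dumps(params, indent=2))
    # 'qsub run_case.sh <dir>' is a placeholder submission command;
    # run_case.sh would configure and launch the CFD solver.
    if shutil.which("qsub"):
        subprocess.run(["qsub", "run_case.sh", str(case_dir)], check=True)
```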

The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the DOE Office of Science's Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.

