AMD has added the Radeon RX 7900 XT to the list of consumer RDNA 3 GPUs supported by the ROCm 5.7 ecosystem, including PyTorch acceleration.
AMD's Top-Tier RDNA 3 "Navi 31" GPUs Are Now ROCm 5.7 Supported, PyTorch ML & AI Acceleration Available On The Radeon RX 7900 XT
Today's announcement from AMD builds upon its promise to bring ROCm and other machine learning and development features to its consumer-centric products within the RDNA 3 GPU family. The red team had already introduced support for the Radeon RX 7900 XTX & Radeon PRO W7900 GPUs last month, and the Radeon RX 7900 XT now joins them, covering the top echelon of RDNA 3 products.
We are excited about this latest addition to our portfolio. In combination with ROCm, these high-end GPUs make AI more accessible both from a software and hardware perspective, so developers can choose the solution that best fits their needs.
Erik Hultgren, Software Product Manager at AMD
Like the other two supported cards, the AMD Radeon RX 7900 XT pairs a large memory buffer, 20 GB in this case, with the AI accelerator units of the Navi 31 chip (168 on this cut of the silicon), which should deliver fast ML training and inference performance. The card also supports PyTorch-based ML models and algorithms that are specifically optimized for AMD hardware, running on Linux under Ubuntu 22.04.3.
The matching driver package, "23.20.00.48 for Ubuntu 22.04.3 HWE", is available from AMD's Linux driver download page.
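For developers who want to confirm their setup is working, ROCm builds of PyTorch expose AMD GPUs through the familiar torch.cuda API. The snippet below is a minimal sketch, assuming PyTorch was installed from AMD's ROCm wheels and the driver above is in place, that checks whether the RX 7900 XT is visible and runs a small operation on it:

```python
# Minimal sketch: verify that a ROCm build of PyTorch can see the AMD GPU.
# Assumes PyTorch was installed from the ROCm wheel index and that the
# amdgpu/ROCm driver stack is already set up on Ubuntu 22.04.3.
import torch

# ROCm builds report a HIP version and route AMD GPUs through torch.cuda.
print("HIP/ROCm build:", torch.version.hip is not None)
print("GPU available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # Should report something like "AMD Radeon RX 7900 XT" on this card.
    print("Device name:", torch.cuda.get_device_name(0))

    # Quick smoke test: run a small matrix multiply on the GPU.
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")
    c = a @ b
    print("Matmul OK, result shape:", tuple(c.shape))
```

If the device name prints correctly and the matrix multiply completes, the GPU is ready for PyTorch training and inference workloads.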
This is just a stepping stone, though, as AMD still has a wide range of consumer RDNA 3 GPUs that users would love to see ROCm support for. NVIDIA & Intel already offer strong compute support on their consumer parts through CUDA and the open-source Intel Compute Runtime, which covers the entire range of Arc GPUs. AMD should keep building on this momentum so that RDNA 3 owners can get the most out of their products.
News Source: Phoronix