-
Intel says its Lion Cove is 10%–18% better than Redwood Cove
cross-posted from: https://lemmy.world/post/16224208
> Intel has said a lot about Lion Cove lately, and I don't want to read a long article on it; I just wanted a comparison between it and Redwood Cove. So here's the short version: it's about 10%–18% better. Let's await Lunar Lake 💻 and see the performance in real programs. If what Intel said is true, props to them for continuously improving x86-architecture chips.
- videocardz.com Samsung will begin mass production of 2nm process in 2025, expanding to HPC in 2026 - VideoCardz.com
Samsung Electronics Unveils Foundry Vision in the AI Era at Samsung Foundry Forum 2023. Samsung further solidifies foundry leadership and customer commitment with advanced process roadmap. Samsung Electronics, a world leader in advanced semiconductor technology, today announced its latest foundry tech...
- semiengineering.com 193i Lithography Takes Center Stage...Again
High-NA EUV is still in the works, but more chips/chiplets will be developed using older, less-expensive equipment.
-
AMD confirms CDNA3 based Instinct MI300X GPU requires 750W of power
videocardz.com AMD confirms CDNA3 based Instinct MI300X GPU requires 750W of power - VideoCardz.com
AMD Instinct MI300X, a powerhouse for AI large language models. AMD discloses how much power the MI300X requires. The OAM-based (OCP Accelerator Module) design of the MI300X graphics processor is listed at 750W on AMD slides. This was not mentioned by early reports covering the introduction day ea...
- www.reuters.com STMicroelectronics, GlobalFoundries win EU approval for French chip factory
Chipmakers STMicroelectronics and GlobalFoundries secured EU approval on Friday to build a chip factory with French state aid in France.
The companies announced their plan in July last year; the new facility will be located next to STM's existing plant in Crolles and is targeted to reach full capacity by 2026, producing up to 620,000 wafers per year on an 18-nanometer process.
- huggingface.co Hugging Face and AMD partner on accelerating state-of-the-art models for CPU and GPU platforms
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
cross-posted from: https://lemmy.world/post/136244
> cross-posted from: https://lemmy.world/post/135600
>
> For anyone following the AI space of technology - this is pretty cool - especially since AMD has fallen behind its NVIDIA CUDA competitors.
>
> I wish we had a new hardware announcement or benchmark/spec sheet to pair with the announcement, but I'll take what I can get.
>
> Curious to see what AMD can muster in terms of AI computation. It's going to be hard to beat NVIDIA's Grace Hopper Superchip, but I'm all for the competition!
>
> (full article for convenience)
>
> > Hugging Face and AMD partner on accelerating state-of-the-art models for CPU and GPU platforms
> >
> > Whether language models, large language models, or foundation models, transformers require significant computation for pre-training, fine-tuning, and inference. To help developers and organizations get the most performance bang for their infrastructure bucks, Hugging Face has long been working with hardware companies to leverage acceleration features present on their respective chips.
> >
> > Today, we're happy to announce that AMD has officially joined our Hardware Partner Program. Our CEO Clement Delangue gave a keynote at AMD's Data Center and AI Technology Premiere in San Francisco to launch this exciting new collaboration.
> >
> > AMD and Hugging Face work together to deliver state-of-the-art transformer performance on AMD CPUs and GPUs. This partnership is excellent news for the Hugging Face community at large, which will soon benefit from the latest AMD platforms for training and inference.
> >
> > The selection of deep learning hardware has been limited for years, and prices and supply are growing concerns. This new partnership will do more than match the competition and help alleviate market dynamics: it should also set new cost-performance standards.
> >
> > Supported hardware platforms
> >
> > On the GPU side, AMD and Hugging Face will first collaborate on the enterprise-grade Instinct MI2xx and MI3xx families, then on the customer-grade Radeon Navi3x family. In initial testing, AMD recently reported that the MI250 trains BERT-Large 1.2x faster and GPT2-Large 1.4x faster than its direct competitor.
> >
> > On the CPU side, the two companies will work on optimizing inference for both the client Ryzen and server EPYC CPUs. As discussed in several previous posts, CPUs can be an excellent option for transformer inference, especially with model compression techniques like quantization.
> >
> > Lastly, the collaboration will include the Alveo V70 AI accelerator, which can deliver incredible performance with lower power requirements.
> >
> > Supported model architectures and frameworks
> >
> > We intend to support state-of-the-art transformer architectures for natural language processing, computer vision, and speech, such as BERT, DistilBERT, RoBERTa, Vision Transformer, CLIP, and Wav2Vec2. Of course, generative AI models will be available too (e.g., GPT2, GPT-NeoX, T5, OPT, LLaMA), including our own BLOOM and StarCoder models. Lastly, we will also support more traditional computer vision models, like ResNet and ResNeXt, and deep learning recommendation models, a first for us.
> >
> > We'll do our best to test and validate these models for PyTorch, TensorFlow, and ONNX Runtime for the above platforms. Please remember that not all models may be available for training and inference for all frameworks or all hardware platforms.
> >
> > The road ahead
> >
> > Our initial focus will be ensuring the models most important to our community work great out of the box on AMD platforms. We will work closely with the AMD engineering team to optimize key models to deliver optimal performance thanks to the latest AMD hardware and software features. We will integrate the AMD ROCm SDK seamlessly in our open-source libraries, starting with the transformers library.
> >
> > Along the way, we'll undoubtedly identify opportunities to optimize training and inference further, and we'll work closely with AMD to figure out where to best invest moving forward through this partnership. We expect this work to lead to a new Optimum library dedicated to AMD platforms to help Hugging Face users leverage them with minimal code changes, if any.
> >
> > Conclusion
> >
> > We're excited to work with a world-class hardware company like AMD. Open-source means the freedom to build from a wide range of software and hardware solutions. Thanks to this partnership, Hugging Face users will soon have new hardware platforms for training and inference with excellent cost-performance benefits. In the meantime, feel free to visit the AMD page on the Hugging Face hub. Stay tuned!
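The article mentions quantization as the model-compression technique that makes transformer inference practical on CPUs. As a rough, hypothetical illustration of the core idea only (real toolchains like PyTorch quantization or ONNX Runtime do per-channel scaling, calibration, and fused int8 kernels), here is a minimal symmetric int8 quantize/dequantize sketch in pure Python:

```python
# Hypothetical sketch: symmetric per-tensor int8 quantization.
# Floats are mapped to 8-bit integers via a single scale factor,
# then dequantized back into approximations of the originals.

def quantize_int8(weights):
    """Map floats to int8 values in [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0  # assumes a nonzero tensor
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float values from the int8 representation."""
    return [qi * scale for qi in q]

weights = [0.12, -0.5, 0.33, -0.07, 0.49]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)

# Each dequantized value stays within one quantization step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

The appeal for CPU inference is that int8 storage is 4x smaller than float32 and integer matrix multiplies map well onto CPU vector instructions; the cost is the small rounding error bounded above by the scale.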
-
Breaking News! AMD is open-sourcing the API for its x86 bootloader
community.amd.com Empowering The Industry with Open System Firmware – AMD openSIL
THE IMPETUS: Platform & Silicon Firmware Development has historically been a niche field in the compute industry, requiring specific, hard-to-find engineering skill sets. As time progressed, firmware capabilities expanded, offering a large range of enhanced capabilities and platform intelligen...
cross-posted from: https://lemmy.world/post/136245
> I think this means we will eventually see a fully open-source Coreboot/Libreboot. Someone correct me if I am wrong, please!
>
> the openSIL github repo
>
> I'm not clear about where this API sits relative to the AMD Platform Security Processor.
>
> found via this post: https://lemmy.world/post/134243
-
semiconductor fabrication
YouTube Video
Just trying to provide some basic content to help the community grow. People here may be well acquainted with this process, but newcomers may benefit, so I figured I'd chuck this one up.
-
TSMC board approves $3.5B capital injection for Arizona factory | ComputerWorld
www.computerworld.com TSMC board approves $3.5B capital injection for Arizona factory
The capital injection is part of the $40 billion investment announced in December.
-
AMD announces Instinct MI300X GPU with 192GB of HBM3 memory
videocardz.com AMD announces Instinct MI300X GPU with 192GB of HBM3 memory - VideoCardz.com
AMD Expands Leadership Data Center Portfolio with New EPYC CPUs and Shares Details on Next-Generation AMD Instinct Accelerator and Software Enablement for Generative AI. AMD unleashes the power of specialized compute for the data center with new AMD EPYC processors for cloud native and technical co...
-
AMD introduces 4th Gen EPYC Genoa, Bergamo and Genoa-X Zen4 data-center processors
videocardz.com AMD introduces 4th Gen EPYC Genoa, Bergamo and Genoa-X Zen4 data-center processors - VideoCardz.com
AMD Expands 4th Gen EPYC CPU Portfolio with Leadership Processors for Cloud Native and Technical Computing Workloads. New 4th Gen AMD EPYC processors offer leadership performance in cloud native and technical computing. Microsoft Azure and Meta showcase support for new AMD EPYC CPUs at "Data Cen...
-
AMD launches Ryzen PRO 7000 65W desktop series
videocardz.com AMD launches Ryzen PRO 7000 65W desktop series, Ryzen 9 PRO 7945 features 12 Zen4 cores - VideoCardz.com
AMD Ryzen 7000 PRO for desktop announced. Today AMD launches its Zen4-based Ryzen PRO 7000 series, refreshing the lineup less than a year after the 5000 PRO series was released. These desktop CPUs will now use the new AM5 socket, which requires new DDR5 memory. Another key upgrade ...
- ir.amd.com AMD to Showcase Next-Generation Data Center and AI Technology at June 13 Livestream Event
Browse AMD’s company-wide and financial press releases.
11:00 AM Mountain Time