Categories
NVIDIA News

NVIDIA Accelerates European AI Infrastructure for the Next Industrial Revolution

Source: Europe Builds AI Infrastructure With NVIDIA to Fuel Region’s Next Industrial Transformation | NVIDIA Newsroom.

At GTC Paris during VivaTech, NVIDIA announced partnerships with European nations and technology leaders to build advanced AI infrastructure based on NVIDIA Blackwell systems. The initiative aims to strengthen digital sovereignty, support economic growth, and position Europe as a leader in the AI-driven industrial revolution.

Jensen Huang, NVIDIA founder and CEO, emphasized the importance of this shift:

“Every industrial revolution begins with infrastructure. AI is the essential infrastructure of our time, just as electricity and the internet once were.”

France, Italy, Spain, and the United Kingdom are developing domestic AI infrastructure in collaboration with companies such as Domyn, Mistral AI, Nebius, and Nscale, as well as major telecom providers Orange, Swisscom, Telefónica, and Telenor. Together, these deployments will deliver over 3,000 exaflops of NVIDIA Blackwell compute power, enabling European enterprises, startups, and the public sector to securely develop and deploy advanced AI applications.

NVIDIA is also expanding its network of AI technology centers in Germany, Sweden, Italy, Spain, the UK, and Finland to accelerate research, workforce development, and scientific breakthroughs. In France, Mistral AI is building a cloud platform powered by 18,000 Grace Blackwell systems; in the UK, Nebius and Nscale will deploy 14,000 Blackwell GPUs in new data centers. In Germany, NVIDIA and its partners are constructing the world’s first industrial AI cloud for European manufacturers, based on DGX B200 and RTX PRO Server systems.

These efforts represent a strategic investment in Europe’s future, as artificial intelligence becomes essential infrastructure for innovation and competitiveness on the global stage.

Categories
News

XENYA at the DATA CENTER INDUSTRY 2025 conference

In 2025, XENYA d.o.o. once again took part in one of the key regional conferences in the field of data centers – DATA CENTER INDUSTRY 2025, traditionally organized by Palsit. This year’s event drew a record number of attendees, including experts, technology providers, and representatives of the region’s largest companies.

Showcasing Innovation: XenOpt, NVIDIA & Supermicro

At the event, we presented our proprietary brand of optical equipment – XenOpt – alongside two world-leading brands we represent on the Slovenian market: NVIDIA and Supermicro.

As part of the expert program, our specialists Pavel Snoj and Matic Zajc delivered a presentation titled:

“Advanced Solutions for Data Centers: XenOpt, Supermicro, and NVIDIA.”

They demonstrated how these three technologies create a powerful synergy – combining high-performance optical infrastructure, next-gen server solutions, and state-of-the-art AI accelerators. The presentation focused on:

  • Energy efficiency
  • High data throughput
  • Scalability for future data center needs

Building Connections & Future Readiness

In addition to technology presentations, the conference provided an excellent opportunity to establish new connections and strengthen existing partnerships. The high turnout and great interest in our solutions confirm that XENYA continues to successfully meet the market’s demand for advanced, reliable, and flexible solutions in and between data centers.

Thank you to everyone who visited us – see you next year!

For more information about NVIDIA, Supermicro, and XenOpt solutions or to schedule a presentation of our equipment at your company, feel free to contact us at info@xenya.si.


NVIDIA DGX Spark: Desktop Supercomputer for AI

NVIDIA has unveiled the DGX Spark, billed as the world’s smallest AI supercomputer, bringing the power of the Grace Blackwell platform to the desktop. Equipped with the GB10 Superchip and 128 GB of unified memory, it supports AI models with up to 200 billion parameters. Delivering up to 1,000 AI TOPS and running the NVIDIA AI software stack, it is ideal for developers, researchers, and data scientists, enabling prototyping, optimization, and deployment of models locally. Linking two systems extends support to models with up to 405 billion parameters.
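The 200-billion and 405-billion-parameter figures follow from simple capacity arithmetic. A rough back-of-the-envelope sketch: at FP4 precision each parameter occupies about half a byte, and some memory must be held back for activations and runtime buffers. The 20% overhead allowance below is our own simplifying assumption, not an NVIDIA-published formula.

```python
def max_params_billion(memory_gb: float, bytes_per_param: float,
                       overhead: float = 0.2) -> float:
    """Rough upper bound on model size (in billions of parameters)
    that fits in a given amount of GPU memory.

    `overhead` is the fraction of memory assumed reserved for
    activations, KV cache, and runtime buffers (our assumption).
    """
    usable_bytes = memory_gb * 1e9 * (1 - overhead)
    return usable_bytes / bytes_per_param / 1e9

# One DGX Spark: 128 GB unified memory, FP4 weights (~0.5 byte/parameter)
print(round(max_params_billion(128, 0.5)))   # 205 -- consistent with the ~200B figure
# Two linked systems: 256 GB
print(round(max_params_billion(256, 0.5)))   # 410 -- consistent with the ~405B figure
```

The same function shows why lower-precision formats matter: at FP16 (2 bytes/parameter) the same 128 GB would hold only about a quarter as many parameters.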

Read more on NVIDIA DGX Spark.

The DGX Spark is available for pre-order. Contact us and reserve your DGX Spark today!


NVIDIA GTC 2025 – Jensen Huang Unveils the Future of AI with New Chips and Robotics

At the GTC 2025 conference, NVIDIA CEO Jensen Huang unveiled key innovations set to shape the future of artificial intelligence (AI) and computing.​

New Chip Announcements:

  • Blackwell Ultra: The next generation of graphics processing units (GPUs), scheduled for release in the second half of 2025.
  • Vera Rubin: Named after renowned astronomer Vera Rubin, this chip is slated for a 2026 release and is expected to deliver significantly higher performance than the current Blackwell chips.
  • Vera Rubin Ultra: Planned for 2027, this chip will further improve performance and energy efficiency.

Advancements in Robotics:

  • Isaac GR00T N1: An open-source model designed for the development of humanoid robots, enabling faster and more efficient learning and adaptation to various tasks.​
  • Cosmos AI Model: An updated model that facilitates the generation of synthetic data for robot training, reducing the costs and time associated with collecting real-world data.​

Partnerships and Infrastructure:

  • Collaboration with General Motors (GM): NVIDIA and GM are partnering to develop systems for autonomous vehicles and integrate AI into manufacturing processes and future vehicles.​
  • NVIDIA Dynamo: A new open-source software platform designed to optimize data center operations and improve efficiency in executing complex AI models.​

Huang also presented two new personal AI supercomputers: DGX Spark and DGX Station. DGX Spark, dubbed “the world’s smallest AI supercomputer,” is powered by the Grace Blackwell chip and designed for researchers, students, and developers to build advanced AI models locally. Meanwhile, DGX Station, equipped with the Blackwell Ultra chip and 784 GB of memory, is tailored for large-scale AI tasks on a desktop, making cutting-edge technology more accessible.

These innovations underscore NVIDIA’s commitment to advancing artificial intelligence, robotics, and computing infrastructure, opening new possibilities for industries worldwide.

View more on GTC 2025 – Announcements and Live Updates | NVIDIA Blog.


Invitation to NVIDIA GTC 2025: Explore the Future of AI

Join NVIDIA GTC 2025, the world’s premier event for artificial intelligence, high-performance computing, and innovation. Discover the latest breakthroughs, connect with industry experts, and see how cutting-edge AI is solving today’s biggest challenges.

📅 Date: March 17–21, 2025
📍 Location: San Jose, California & Online

Don’t miss exclusive keynotes, hands-on workshops, and networking opportunities with top AI professionals. Register now and shape the future of AI!

🔗 More information: NVIDIA GTC 2025.


DeepSeek-R1 Now Live With NVIDIA NIM

Source: DeepSeek-R1 Now Live With NVIDIA NIM | NVIDIA Blog

To help developers securely experiment with DeepSeek-R1 capabilities and build their own specialized agents, the 671-billion-parameter DeepSeek-R1 model is now available as an NVIDIA NIM microservice preview on build.nvidia.com. The DeepSeek-R1 NIM microservice can deliver up to 3,872 tokens per second on a single NVIDIA HGX H200 system.

Developers can test and experiment with the application programming interface (API), which is expected to be available soon as a downloadable NIM microservice, part of the NVIDIA AI Enterprise software platform.

The DeepSeek-R1 NIM microservice simplifies deployments with support for industry-standard APIs. Enterprises can maximize security and data privacy by running the NIM microservice on their preferred accelerated computing infrastructure. Using NVIDIA AI Foundry with NVIDIA NeMo software, enterprises will also be able to create customized DeepSeek-R1 NIM microservices for specialized AI agents.
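Those industry-standard APIs follow the familiar OpenAI-compatible chat-completions shape, so existing client code carries over. The sketch below assembles such a request; the endpoint URL, model identifier, and API-key format shown are assumptions for illustration and should be verified against build.nvidia.com.

```python
import json

# Hypothetical values for illustration -- check build.nvidia.com for
# the exact endpoint and model name of the DeepSeek-R1 NIM preview.
NIM_URL = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL = "deepseek-ai/deepseek-r1"

def build_request(prompt: str, api_key: str) -> tuple[dict, dict]:
    """Assemble headers and an OpenAI-style chat-completions payload."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }
    return headers, payload

headers, payload = build_request("Summarize NVIDIA NIM in one sentence.", "nvapi-...")
print(json.dumps(payload, indent=2))
# To send the request: requests.post(NIM_URL, headers=headers, json=payload)
```

Because the payload format is the same whether the microservice runs on build.nvidia.com or on an enterprise’s own accelerated infrastructure, only the URL and credentials change when moving between the two.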

Read more on DeepSeek-R1 Now Live With NVIDIA NIM | NVIDIA Blog.


Fast Forward to Generative AI With NVIDIA Blueprints

NVIDIA Expands AI Workflows With NVIDIA NIM™ and NVIDIA Blueprints

Source: https://blogs.nvidia.com/blog/nim-agent-blueprints/

NVIDIA offers a wide range of software, including NIM (NVIDIA Inference Microservices) and NVIDIA Blueprints, to simplify the deployment of generative AI across industries. NVIDIA NIM™ provides optimized, cloud-native inference microservices for seamless integration of AI models, while NVIDIA Blueprints offer pre-built workflows for faster development and deployment.

These solutions help businesses accelerate AI implementation, reduce infrastructure complexity, and enhance productivity. Whether in the cloud, on-premises, or hybrid environments, NVIDIA’s new AI tools provide flexibility and scalability.

Learn more about NVIDIA Blueprints: NVIDIA AI Workflows.


NVIDIA Brings Grace Blackwell AI Supercomputing to Every Desk

Source: NVIDIA Puts Grace Blackwell on Every Desk and at Every AI Developer’s Fingertips | NVIDIA Newsroom

At CES 2025, NVIDIA introduced Project DIGITS, a personal AI supercomputer designed to provide AI researchers, data scientists, and students with desktop access to the NVIDIA Grace Blackwell platform. Central to this system is the new NVIDIA GB10 Grace Blackwell Superchip, delivering up to 1 petaflop of AI performance at FP4 precision. The GB10 integrates an NVIDIA Blackwell GPU with the latest CUDA® cores and fifth-generation Tensor Cores, connected via NVLink®-C2C to a high-performance NVIDIA Grace™ CPU comprising 20 Arm-based cores. Developed in collaboration with MediaTek, the GB10 emphasizes power efficiency and performance.

Each Project DIGITS unit includes 128 GB of unified memory and up to 4 TB of NVMe storage, enabling the handling of AI models with up to 200 billion parameters. For larger models, two units can be linked to support up to 405 billion parameters. This setup allows users to develop and run inference on models locally and seamlessly deploy them on accelerated cloud or data center infrastructures.


The Importance of GPU Memory for AI Performance

Source: GPU Memory Essentials for AI Performance | NVIDIA Technical Blog

The NVIDIA blog highlights the critical role of GPU memory capacity in running advanced artificial intelligence (AI) models. Large AI models, such as Llama 2 with 7 billion parameters, require significant amounts of memory. For instance, processing at FP16 precision demands at least 28 GB of memory.

NVIDIA offers high-performance RTX GPUs, such as the RTX 6000 Ada Generation, featuring up to 48 GB of VRAM. These GPUs are designed to handle the largest AI models, enabling local development and execution of complex tasks. Additionally, they come equipped with specialized hardware, including Tensor Cores, which significantly accelerate computations required for AI workloads.
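The memory demand scales linearly with parameter count and numeric precision: weight memory is simply parameters × bytes per parameter. A minimal sketch of that rule of thumb (our own illustration, not a formula from the blog):

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the model weights, in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# Llama 2 7B weights at FP16 (2 bytes per parameter):
print(weight_memory_gb(7, 2.0))   # 14.0 GB for the weights alone
```

The weights alone are only part of the picture: the 28 GB figure cited above also covers runtime costs such as activations, KV cache, and framework overhead, which is why a 48 GB card like the RTX 6000 Ada Generation leaves comfortable headroom for a model of this size.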

With NVIDIA’s powerful solutions, businesses and researchers can optimize the development and deployment of AI models directly on local devices, opening up new possibilities for advancements in artificial intelligence.

For more details, visit the official NVIDIA blog: developer.nvidia.com.


Interested in learning more about NVIDIA’s powerful solutions? Contact Xenya d.o.o., and we’ll be happy to help you find the right solution for your needs!


Integration of NVIDIA BlueField DPUs with WEKA Client Boosts AI Workload Efficiency

Source: Integration of NVIDIA BlueField DPUs with WEKA Client Boosts AI Workload Efficiency | NVIDIA Technical Blog

WEKA and NVIDIA are collaborating to integrate NVIDIA BlueField data processing units (DPUs) with WEKA’s data storage platform, enhancing AI workload efficiency. Running the WEKA client directly on the BlueField DPU instead of the host server’s CPU improves data transfer rates, reduces latency, frees up host CPU cycles, and strengthens security by offloading storage operations to the DPU.

These integrations were showcased at the Supercomputing 2024 conference, where attendees saw firsthand how faster data access and more efficient workload processing can transform data center operations. For more detailed information, visit the NVIDIA Technical Blog: Integration of NVIDIA BlueField DPUs with WEKA Client Boosts AI Workload Efficiency.
