<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Deployment on</title><link>https://docs.opennebula.io/7.2/solutions/ai_factory_blueprints/deployment/</link><description>Recent content in Deployment on</description><generator>Hugo</generator><language>en</language><lastBuildDate>Tue, 21 Oct 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://docs.opennebula.io/7.2/solutions/ai_factory_blueprints/deployment/index.xml" rel="self" type="application/rss+xml"/><item><title>On-premises AI Factory Deployment</title><link>https://docs.opennebula.io/7.2/solutions/ai_factory_blueprints/deployment/cd_on-premises/</link><pubDate>Tue, 21 Oct 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/solutions/ai_factory_blueprints/deployment/cd_on-premises/</guid><description>&lt;p&gt;&lt;a id="cd_on-premises"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Machine Learning (ML) training and inference are resource-intensive tasks that often require the full power of a dedicated GPU. PCI passthrough allows a Virtual Machine (VM) to have exclusive access to a physical GPU, delivering bare-metal performance for the most demanding AI workloads.&lt;/p&gt;
&lt;p&gt;This guide details how to deploy and configure an AI-ready OpenNebula cloud using the &lt;a href="https://github.com/OpenNebula/one-deploy"&gt;OneDeploy&lt;/a&gt; tool. It covers the general process of preparing an environment for demanding AI workloads by leveraging PCI passthrough for GPUs such as the NVIDIA H100 and L40S.&lt;/p&gt;</description></item><item><title>On-cloud Deployment on Scaleway</title><link>https://docs.opennebula.io/7.2/solutions/ai_factory_blueprints/deployment/cd_cloud/</link><pubDate>Tue, 21 Oct 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/solutions/ai_factory_blueprints/deployment/cd_cloud/</guid><description>&lt;p&gt;&lt;a id="cd_cloud"&gt;&lt;/a&gt;
This document describes the procedure to deploy an AI-ready OpenNebula cloud using OneDeploy on a single &lt;a href="https://www.scaleway.com/en/elastic-metal/"&gt;Scaleway Elastic Metal&lt;/a&gt; bare-metal server equipped with GPUs.&lt;/p&gt;
&lt;p&gt;The architecture is a converged OpenNebula installation, where the Front-end services and the KVM hypervisor run on the same physical host. This approach is ideal for demonstrations, proofs of concept (PoCs), or quickly trying out the solution without the need for complex physical infrastructure.&lt;/p&gt;
&lt;p&gt;As an example, the outlined procedure uses an instance equipped with NVIDIA L40S GPUs.&lt;/p&gt;</description></item></channel></rss>